Archive for the ‘Machine Learning’ Category

Cloudy with a chance of neurons: The tools that make neural networks work – Ars Technica

Machine learning is really good at turning pictures of normal things into pictures of eldritch horrors.

Jim Salter

Artificial Intelligence, or, if you prefer, Machine Learning, is today's hot buzzword. Unlike many buzzwords that have come before it, though, this stuff isn't a vaporware dream: it's real, it's here already, and it's changing your life whether you realize it or not.

Before we go too much further, let's talk quickly about that term "Artificial Intelligence." Yes, it's warranted; no, it doesn't mean KITT from Knight Rider, or Samantha, the all-too-human unseen digital assistant voiced by Scarlett Johansson in 2013's Her. Aside from being fictional, KITT and Samantha are examples of strong artificial intelligence, also known as Artificial General Intelligence (AGI). On the other hand, artificial intelligence without the "strong" or "general" qualifiers is an established academic term dating back to the 1955 proposal for the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI), written by Professors John McCarthy and Marvin Minsky.

All "artificial intelligence" really means is a system that emulates problem-solving skills normally seen in humans or animals. Traditionally, there are two branches of AIsymbolic and connectionist. Symbolic means an approach involving traditional rules-based programminga programmer tells the computer what to expect and how to deal with it, very explicitly. The "expert systems" of the 1980s and 1990s were examples of symbolic (attempts at) AI; while occasionally useful, it's generally considered impossible to scale this approach up to anything like real-world complexity.


Artificial Intelligence in the commonly used modern sense almost always refers to connectionist AI. Connectionist AI, unlike symbolic AI, isn't directly programmed by a human. Artificial neural networks are the most common type of connectionist AI, also sometimes referred to as machine learning. My colleague Tim Lee just got done writing about neural networks last week; you can get caught up right here.

If you wanted to build a system that could drive a car, instead of programming it directly you might attach a sufficiently advanced neural network to its sensors and controls, and then let it "watch" a human driving for tens of thousands of hours. The neural network begins to attach weights to events and patterns in the data flow from its sensors that allow it to predict acceptable actions in response to various conditions. Eventually, you might give the network conditional control of the car's controls and allow it to accelerate, brake, and steer on its own, but still with a human available. The partially trained neural network can continue learning from the moments when the human assistant takes the controls away from it: "Whoops, shouldn't have done that," and the neural network adjusts its weighted values again.
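To make that correction loop concrete, here is a minimal toy sketch (not a real driving stack; the sensor values, learning rate, and linear "policy" are all invented for illustration) of nudging weights toward whatever the human did when the human had to take over:

```python
import numpy as np

# Toy illustration only: a linear "policy" whose weights are nudged toward the
# human's action whenever the human takes over.
rng = np.random.default_rng(0)
weights = rng.normal(size=4)            # one weight per (made-up) sensor channel

def policy(sensors):
    return weights @ sensors            # predicted steering command

def learn_from_override(sensors, human_action, lr=0.01):
    """Move the policy's output toward what the human actually did."""
    global weights
    error = human_action - policy(sensors)
    weights += lr * error * sensors     # one gradient step on squared error

sensors = np.array([0.2, -1.0, 0.5, 0.1])
learn_from_override(sensors, human_action=0.3)   # "whoops, shouldn't have done that"
```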

Sounds very simple, doesn't it? In practice, not so much: there are many different types of neural networks (simple, convolutional, generative adversarial, and more), and none of them is very bright on its own; the brightest is roughly similar in scale to a worm's brain. Most complex, really interesting tasks will require networks of neural networks that preprocess data to find areas of interest, pass those areas of interest on to other neural networks trained to more accurately classify them, and so forth.
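As a rough illustration of that staged, networks-of-networks idea (with plain functions standing in for what would really be trained neural networks; the names and thresholds here are hypothetical):

```python
# Illustrative sketch only: a two-stage pipeline where one model finds regions
# of interest and a second model classifies each region it is handed.
def find_regions_of_interest(frame):
    # Hypothetical first-stage network: decide where to look.
    return [frame[i:i + 4] for i in range(0, len(frame), 4)]

def classify_region(region):
    # Hypothetical second-stage network: decide what each region contains.
    return "interesting" if sum(region) > 2 else "background"

frame = [0, 1, 0, 0, 1, 1, 1, 0]         # pretend sensor frame
labels = [classify_region(r) for r in find_regions_of_interest(frame)]
print(labels)                            # ['background', 'interesting']
```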

One last piece of the puzzle is that, when dealing with neural networks, there are two major modes of operation: inference and training. Training is just what it sounds like: you give the neural network a large batch of data that represents a problem space and let it chew through it, identifying things of interest and possibly learning to match them to labels you've provided along with the data. Inference, on the other hand, is using an already-trained neural network to give you answers in a problem space that it understands.
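A minimal sketch of those two modes, using PyTorch with random stand-in data (the tiny network and the ten-step loop are purely illustrative):

```python
import torch
from torch import nn

# Toy network and data, just to show the shape of each mode.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

# Training: show the network labeled data and let it adjust its weights.
x, y = torch.randn(64, 8), torch.randint(0, 2, (64,))
for _ in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()

# Inference: use the now-trained network to answer a question about new data.
model.eval()
with torch.no_grad():
    prediction = model(torch.randn(1, 8)).argmax(dim=1)
```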

Both inference and training workloads can operate several orders of magnitude more rapidly on GPUs than on general-purpose CPUs, but that doesn't necessarily mean you want to do absolutely everything on a GPU. It's generally easier and faster to run small jobs directly on CPUs rather than invoking the initial overhead of loading models and data into a GPU and its onboard VRAM, so you'll very frequently see inference workloads run on standard CPUs.
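In code, that rule of thumb might look something like the following sketch (the batch-size threshold is an arbitrary placeholder; the right cutoff depends entirely on your model and hardware):

```python
import torch

# Send large batches to the GPU if one is present; keep small one-off
# inference jobs on the CPU to avoid model/data transfer overhead.
def pick_device(batch_size, threshold=256):
    if batch_size >= threshold and torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

device = pick_device(batch_size=8)       # small job: stays on the CPU
# model.to(device); inputs = inputs.to(device)  # then move model and data accordingly
```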

Read the original post:

Cloudy with a chance of neurons: The tools that make neural networks work - Ars Technica

The Bot Decade: How AI Took Over Our Lives in the 2010s – Popular Mechanics

Bots are a lot like humans: Some are cute. Some are ugly. Some are harmless. Some are menacing. Some are friendly. Some are annoying ... and a little racist. Bots serve their creators and society as helpers, spies, educators, servants, lab technicians, and artists. Sometimes, they save lives. Occasionally, they destroy them.

In the 2010s, automation got better, cheaper, and way less avoidable. It's still mysterious, but no longer foreign; the most Extremely Online among us interact with dozens of AIs throughout the day. That means driving directions are more reliable, instant translations are almost good enough, and everyone gets to be an adequate portrait photographer, all powered by artificial intelligence. On the other hand, each of us now sees a personalized version of the world that is curated by an AI to maximize engagement with the platform. And by now, everyone from fruit pickers to hedge fund managers has suffered through headlines about being replaced.

Humans and tech have always coexisted and coevolved, but this decade brought us closer together, and closer to the future, than ever. These days, you don't have to be an engineer to participate in AI projects; in fact, you have no choice but to help, as you're constantly offering your digital behavior to train AIs.

So here's how we changed our bots this decade, how they changed us, and where our strange relationship is going as we enter the 2020s.

All those little operational tweaks in our day come courtesy of a specific scientific approach to AI called machine learning, one of the most popular techniques for AI projects this decade. That's when AI is tasked not only with finding the answers to questions about data sets, but with finding the questions themselves; successful deep learning applications require vast amounts of data and the time and computational power to self-test over and over again.

Deep learning, a subset of machine learning, uses neural networks to extract its own rules and adjust them until it can return the right results; other machine learning techniques might use Bayesian networks, vector maps, or evolutionary algorithms to achieve the same goal.
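To see that distinction in practice, here is a small, hedged comparison using scikit-learn: the same synthetic classification task solved once with a small neural network and once with a (naive) Bayesian model. The dataset and hyperparameters are arbitrary, chosen only to keep the example self-contained.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.naive_bayes import GaussianNB

# Same task, two of the model families mentioned above.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for model in (MLPClassifier(max_iter=500, random_state=0), GaussianNB()):
    model.fit(X_train, y_train)                                # learn rules from data
    print(type(model).__name__, model.score(X_test, y_test))   # held-out accuracy
```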

In January, Technology Review's Karen Hao released an exhaustive analysis of recent papers in AI that concluded that machine learning was one of the defining features of AI research this decade. "Machine learning has enabled near-human and even superhuman abilities in transcribing speech from voice, recognizing emotions from audio or video recordings, as well as forging handwriting or video," Hao wrote. Domestic spying is now a lucrative application for AI technologies, thanks to this powerful new development.

Hao's report suggests that the age of deep learning is finally drawing to a close, but the next big thing may have already arrived. Reinforcement learning distributes rewards and punishments based on how well a network performs, not unlike the way dogs and babies learn about the world; generative adversarial networks (GANs) take a related adversarial approach, pitting neural nets against one another by having one evaluate the work of the other.

The future of AI could be in structured learning. Just as young humans are thought to learn their first languages by processing data input from fluent caretakers with their internal language grammar, computers can also be taught how to teach themselves a task, especially if the task is to imitate a human in some capacity.

This decade, artificial intelligence went from being employed chiefly as an academic subject or science fiction trope to an unobtrusive (though occasionally malicious) everyday companion. AIs have been around in some form since the 1500s or the 1980s, depending on your definition. AltaVista launched one of the first full-text web search indexes in 1995, but it wasn't until 2010 that Google quietly introduced personalized search results for all customers and all searches. What was once background chatter from eager engineers has now become an inescapable part of daily life.

One function after another has been turned over to AI jurisdiction, with huge variations in efficacy and consumer response. The prevailing profit model for most of these consumer-facing applications, like social media platforms and map functions, is for users to trade their personal data for minor convenience upgrades, which are achieved through a combination of technical power, data access, and rapid worker disenfranchisement as increasingly complex service jobs are doubled up, automated away, or taken over by AI workers.

The Harvard social scientist Shoshana Zuboff explained the impact of these technologies on the economy with the term surveillance capitalism. This new economic system, she wrote, "unilaterally claims human experience as free raw material for translation into behavioural data," in a bid to make profit from informed gambling based on predicted human behavior.

We're already using machine learning to make subjective decisions, even ones that have life-altering consequences. Medical applications are among the least controversial uses of artificial intelligence; by the end of the decade, AIs were locating stranded victims of Hurricane Maria, controlling the German power grid, and killing civilians in Pakistan.

The sheer scope of these AI-controlled decision systems is why automation has the potential to transform society on a structural level. In 2012, techno-sociologist Zeynep Tufekci pointed out the presence on the Obama reelection campaign of an unprecedented number of data analysts and social scientists, bringing the traditional confluence of marketing and politics into a new age.

Intelligence that relies on data from an unjust world suffers from the principle of "garbage in, garbage out," futurist Cory Doctorow observed in a recent blog post. Diverse perspectives on the design team would help, Doctorow wrote, but when it comes to certain technology, there might be no safe way to deploy it.

It doesn't help that data collection for image-based AI has so far taken advantage of the most vulnerable populations first. The Facial Recognition Verification Testing Program is the industry standard for testing the accuracy of facial recognition tech; passing the program is imperative for new FR startups seeking funding.

But the datasets of human faces that the program uses are sourced, according to a report from March, from images of U.S. visa applicants, arrested people who have since died, and children exploited by child pornography. The report found that the majority of data subjects were people who had been arrested on suspicion of criminal activity. None of the millions of faces in the program's data sets belonged to people who had consented to this use of their data.

State-level efforts to regulate AI finally emerged this decade, with some success. The European Union's General Data Protection Regulation (GDPR), enforceable from 2018, limits the legal uses of valuable AI training datasets by defining the rights of the data subject (read: us); the GDPR also prohibits the black box model for machine learning applications, requiring both transparency and accountability in how data are stored and used. At the end of the decade, Google showed the class how not to regulate when it built an external AI ethics panel and then scrapped it a week later, feigning shock at all the negative reception.

Even attempted regulation is a good sign. It means we're looking at AI for what it is: not a new life form that competes for resources, but a formidable weapon. Technological tools are most dangerous in the hands of malicious actors who already hold significant power; you can always hire more programmers. During the long campaign for the 2016 U.S. presidential election, the Putin-backed IRA Twitter botnet campaigns (essentially, teams of semi-supervised bot accounts that spread disinformation on purpose and learn from real propaganda) infiltrated the very mechanics of American democracy.

Keeping up with AI capacities as they grow will be a massive undertaking. Things could still get much, much worse before they get better; authoritarian governments around the world have a tendency to use technology to further consolidate power and resist regulation.

Tech capabilities have long since proved too fast for traditional human lawmakers, but one hint of what the next decade might hold comes from AIs themselves, which are beginning to be deployed as weapons against the exact type of disinformation other AIs help to create and spread. There now exists, for example, a neural net devoted explicitly to the task of identifying neural net disinformation campaigns on Twitter. The neural net's name is Grover, and it's really good at this.

Continued here:

The Bot Decade: How AI Took Over Our Lives in the 2010s - Popular Mechanics

The Top Five AWS Re:Invent 2019 Announcements That Impact Your Enterprise Today – Forbes

AWS CEO Andy Jassy discusses a new initiative with the NFL that will transform player health and safety using cloud computing during AWS re:Invent 2019 on Thursday, Dec. 5, 2019 in Las Vegas. (Isaac Brekken/AP Images for NFL)

Last week, I had the pleasure of attending Amazon.com AWS's re:Invent conference in Las Vegas. Re:Invent is AWS's once-a-year mega-event where it announces new services and holds 2,500 educational sessions for builders, CIOs, channel and ecosystem partners, customers, and of course, industry analysts like me. It's a large event at 65,000 attendees, but it could be much larger, as it sells out after a few days. The attraction is simple: it's the most important cloud show you can attend, and attendees want to get a head start, hands-on, with the latest and greatest of what AWS has to offer. AWS made hundreds of announcements and disclosures, and while the Moor Insights & Strategy analyst team will be going deeper on the most impactful announcements, I wanted to make a top-five list and explain why you should care.

1/ Graviton2 for EC2 M, R, and C 6th Gen instances

AWS Graviton2 instances

AWS says these new instances, based on the Arm Neoverse N1 core, deliver up to 40% improved price/performance over comparable x86-based Skylake instances. In preview, AWS will make these available for mainstream (M), memory-intensive (R), and compute-intensive (C) instances.

Why this matters

You may expect that I gave the #1 spot to new chips because I can be a chip nerd. I can be, but when you think about a 40% improvement across IaaS, PaaS, and SaaS services that can't easily be copied, I'd say that's important. That's not to say the advantage will last forever, but it's very disruptive right now. First off, I'd say that no one can now claim Arm isn't ready for general-purpose datacenter compute. It is, as AWS's IaaS is larger than the #2-10 IaaS providers combined. I can see VMware and Oracle accelerating their Arm offerings, and maybe SAP doing something with Arm, which it isn't doing publicly today. Finally, don't overthink this as it relates to AMD and Intel. The market is massive and growing, and I don't believe this is anti-Intel or anti-AMD. But if a small AWS team can outperform AMD and Intel on some cloud workloads, you do have to pause. I wrote in-depth on all of this here.
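For readers who want to kick the tires, launching one of the Graviton2-based M6g instances looks the same as launching any other EC2 instance; here is a rough boto3 sketch in which the AMI ID and key pair are placeholders (an arm64 AMI is required):

```python
import boto3

# Hypothetical sketch: launch a single Graviton2-based m6g.large instance.
ec2 = boto3.client("ec2", region_name="us-east-1")
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder: an arm64 Amazon Linux 2 AMI
    InstanceType="m6g.large",          # Graviton2-based general-purpose instance
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",             # placeholder key pair name
)
print(response["Instances"][0]["InstanceId"])
```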

2/ Many new hybrid offerings

Local Zones

While AWS doesn't want to use the term "hybrid" a lot, I think enterprises understand that it means they can extend their AWS experience to on-prem or close-to-on-prem compute and storage. AWS announced three important capabilities here: Outposts going GA, plus the new Local Zones and Wavelength.

AWS describes them as follows: "AWS Outposts are fully managed and configurable racks of AWS-designed hardware that bring native AWS capabilities to on-premises locations using the familiar AWS or VMware control plane and tools. AWS Local Zones place select AWS services close to large population, industry, and IT centers in order to deliver applications with single-digit millisecond latencies, without requiring customers to build and operate datacenters or co-location facilities. AWS Wavelength enables developers to deploy AWS compute and storage at the edge of the 5G network, in order to support emerging applications like machine learning at the edge, industrial IoT, and virtual and augmented reality on mobile and edge devices."

Why this matters

AWS took the hybrid idea and doubled down on it. If you're a customer who wants a low-latency experience on-prem with Outposts, the lowest latency in the public cloud with Local Zones, or compute in the core carrier network with Wavelength, AWS has you covered. When you add this to what AWS is doing with Snowball and where I think it's going, it's hard for me not to say AWS will have the broadest and most diverse hybrid play. After our analyst fireside chat and Q&A with AWS's Matt Garman, I'm convinced we will see tremendous compute and storage variability across all of AWS's offerings. It doesn't have all the blanks filled in, but I believe it will. This isn't for show; it's for world domination.

AWS Wavelength

What I'm most interested to see is how the economics and agility stack up compared to on-prem giants Dell Technologies, Hewlett Packard Enterprise, Cisco Systems, Lenovo, and IBM.

3/ SageMaker Studio

SageMaker Studio

AWS says Amazon SageMaker Studio is the first comprehensive IDE (integrated development environment) for machine learning, allowing developers to build, train, explain, inspect, monitor, debug, and run their machine learning models from a single interface. Developers now have a simple way to manage end-to-end machine learning development workflows so they can build, train, and deploy high-quality machine learning models faster and more easily.

Why this matters

Machine learning is really hard without an army of data scientists and DL/ML-savvy developers. The problem is that these skills are very expensive and hard to attract and retain, not to mention the need for specialized infrastructure like GPUs, FPGAs, and ASICs. AWS did a lot with its base ML services to help solve the infrastructure problem, and with SageMaker to connect building, training, and deploying ML at scale. But how do you connect the developer across an end-to-end workflow? Enter SageMaker Studio. Studio replaces many of the components and toolsets that exist today for building, training, explaining, inspecting, monitoring, debugging, and running models, which may make those ISVs unhappy, but developers could be a lot happier.
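For context, here is a rough sketch of the build/train/deploy flow that Studio wraps, using recent versions of the SageMaker Python SDK; the IAM role, S3 path, instance types, and train.py script are all placeholders, so treat this as an outline rather than a recipe:

```python
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

# Build: point SageMaker at your training script and choose hardware.
estimator = SKLearn(
    entry_point="train.py",            # your training script (placeholder)
    framework_version="0.23-1",
    instance_type="ml.m5.large",
    instance_count=1,
    role=role,
)

# Train: run the script against data in S3 on managed infrastructure.
estimator.fit({"train": "s3://my-bucket/training-data/"})

# Deploy: put the trained model behind a managed HTTPS endpoint for inference.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
```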

I'm very interested in lining this up against what both Google Cloud and Azure are doing, and in getting customer feedback. With SageMaker Studio, AWS is delivering what enterprises want; the only question is whether it's better than, or a lot less expensive than, what devs can put together themselves or run on another cloud.

4/ Inf1 EC2 instances with Inferentia

Inf1 Instances

Last year, AWS pre-announced Inferentia, its custom silicon for machine learning inference. This year, it announced the availability of instances based on that chip, called EC2 Inf1. AWS explains that "With Amazon EC2 Inf1 instances, customers receive the highest performance and lowest cost for machine learning inference in the cloud. Amazon EC2 Inf1 instances deliver 2x higher inference throughput, and up to 66% lower cost-per-inference than the Amazon EC2 G4 instance family, which was already the fastest and lowest cost instance for machine learning inference available in the cloud."

Why this matters

Machine learning workloads in the cloud are split into training and inference. Enterprises train the model with big data and monster GPUs, and then run the model, or infer, on smaller silicon close to the edge. Currently, the highest-performance training and inference occur on NVIDIA GPUs, namely the V100 and G4. Most inference is done on a CPU for lower cost and latency purposes, as described by Amazon retail gurus during the last two Xeon launches. While I am sure NVIDIA is hard at work on its next-generation silicon, this is fascinating, as nothing has served as a challenge even to NVIDIA's highest-performance instances. While I haven't yet done a deep dive like the Graviton2 one above, when I do, I will report back, as will ML lead Karl Freund. Whatever the outcome, it's good to see the level of competition rising in this space.
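As a hedged sketch of how a model ends up running on Inferentia, AWS's Neuron SDK documentation describes compiling a PyTorch model ahead of time and then loading the saved artifact on an Inf1 instance; exact package names, versions, and APIs may differ, so take this as an outline only:

```python
import torch
import torch_neuron          # AWS Neuron SDK integration for PyTorch (assumed installed)
from torchvision import models

# Compile (trace) a model for Inferentia, then save the artifact for an inf1 instance.
model = models.resnet50(pretrained=True).eval()
example = torch.zeros(1, 3, 224, 224)          # example input shape used for tracing

model_neuron = torch.neuron.trace(model, example_inputs=[example])
model_neuron.save("resnet50_neuron.pt")        # load this on an Inf1 instance for inference
```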

5/ No ML experience required services

AWS came out strong touting new services that don't require ML experience. Think of these as SaaS or higher-order PaaS capabilities where you don't need a framework expert or even a data scientist. Amazon said

Why this matters

I will posit that there's more market opportunity for AWS in ML PaaS and SaaS, if for nothing else than the lack of data scientists and framework-savvy developers. If you're not a Fortune 100 company, you're at a distinct disadvantage in attracting and retaining those resources, and I doubt you can have them at the scale that you need. Also, as AWS does most of its business in IaaS, there's just more opportunity in PaaS and SaaS.

AWS ML Stack

Kendra sounds incredible, and it will have an immense amount of competition from Azure and Google Cloud. Azure likely already has a lot of the enterprise data through Office 365, Teams, and Skype, and Google is good at search. CodeGuru sounds too good to be true but isn't, based on a few developer conversations I had at the show. The only thing limiting this service will be the cost, which I think is dense given what it can save, but it's human nature to not see the big picture. Fraud Detector, like Kendra, will have a lot of competition, especially from IBM, which has been doing this for decades. I love that the service brings knowledge from Amazon.com's own dealings, and I wouldn't be surprised if the website sees the most fraud attacks, given it handles 40% of online etail transactions. Transcribe Medical is a dream come true for surgeons like my brother-in-law, and I hope AWS runs a truck through the aged transcription industry. AWS will have a lot of competition from both Azure and Google Cloud. A2I has been needed in the industry for a while, as no state- or federally-regulated industry can deal with a black box.
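To give a flavor of how little ML expertise these services ask for, here is a hedged boto3 sketch of querying an Amazon Kendra index; the index ID is a placeholder and an index must already exist and be populated, so this is illustrative only:

```python
import boto3

# Hypothetical sketch: ask a natural-language question of an existing Kendra index.
kendra = boto3.client("kendra", region_name="us-east-1")
result = kendra.query(
    IndexId="00000000-0000-0000-0000-000000000000",  # placeholder index ID
    QueryText="What is our parental leave policy?",
)
for item in result["ResultItems"][:3]:
    print(item["Type"], item.get("DocumentTitle", {}).get("Text"))
```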

Honorable mentions

There were so many good announcements to choose from that I had to do an honorable-mention list with my quick take.

Wrapping up

While it's impossible to do justice to a huge event like AWS re:Invent in a single post, I also think it's important to point out the highlights along with some honorable mentions. All in all, AWS answered the hybrid critics and raised the ante, introduced some homegrown silicon that de-commoditizes IaaS, and gave more reasons to use its databases and machine learning services, from newbie to Ph.D.

Moor Insights & Strategy analysts will be diving more into AWS Outposts and Graviton2 (Matt Kimball), Braket (Paul Smith-Goodson), Inf1 (Karl Freund), and overall impressions (Rhett Dillingham).

More:

The Top Five AWS Re:Invent 2019 Announcements That Impact Your Enterprise Today - Forbes

Measuring Employee Engagement with A.I. and Machine Learning – Dice Insights

A small number of companies have begun developing new tools to measure employee engagement without requiring workers to fill out surveys or sit through focus groups. HR professionals and engagement experts are watching to see if these tools gain traction and lead to more effective cultural and retention strategies.

Two of these companies, Netherlands-based KeenCorp and San Francisco's Cultivate, glean data from day-to-day internal communications. KeenCorp analyzes patterns in an organization's (anonymized) email traffic to gauge changes in the level of tension experienced by a team, department, or entire organization. Meanwhile, Cultivate analyzes manager email (and other digital communications) to provide leadership coaching.

These companies are likely to pitch to a ready audience of employers, especially in the technology space. With IT unemployment hovering around 2 percent, corporate and HR leaders can't help but be nervous about hiring and retention. When competition for talent is fierce, companies are likely to add more and more sweeteners to each offer until they reel in the candidates they want. Then there's the matter of retaining those employees in the face of equally sweet counteroffers.

That's why businesses spend a lot of effort and money on keeping their workers engaged. Companies spend more than $720 million annually on engagement, according to the Harvard Business Review. Yet their efforts have managed to engage just 13 percent of the workforce.

Given the competitive advantage tech organizations enjoy when their teams are happy and productive (not to mention the money they save by keeping employees in place), engagement and retention are critical. But HR can't create and maintain an engagement strategy if it doesn't know the workforce's mindset. So companies have to measure, and they measure primarily through surveys.

Today, many experts believe surveys don't provide the information employers need to understand their workforce's attitudes. Traditional surveys have their place, they say, but more effective methods are needed. They see the answer, of course, in artificial intelligence (A.I.) and machine learning (ML).

"One issue with surveys is they only capture a part of the information, and that's the part that the employee is willing to release," said KeenCorp co-founder Viktor Mirovic. When surveyed, respondents often hold back information, he explained, leaving unsaid data that has an effect similar to unheard data.

"I could try to raise an issue that you may not be open to because you have a prejudice," Mirovic added. If tools don't account for what's left unsaid and unheard, he argued, they provide an incomplete picture.

As an analogy, Mirovic described studies of combat aircraft damaged in World War II. By identifying where the most harm occurred, designers thought they could build safer planes. However, the study relied on the wrong data, Mirovic said. Why? Because they only looked at the planes that came back. The aircraft that presumably suffered the most grievous damage (those that were shot down) weren't included in the research.

None of this means traditional surveys don't provide value. "I think the traditional methods are still useful," said Alex Kracov, head of marketing for Lattice, a San Francisco-based workforce management platform that focuses on small and mid-market employers. "Sometimes just the idea of starting to track engagement in the first place, just to get a baseline, is really useful and can be powerful."

For example, Lattice itself recently surveyed its 60 employees for the first time. "It was really interesting to see all of the data available and how people were feeling about specific themes and questions," he said. Similarly, Kracov believes that newer methods such as pulse surveys, which are brief studies conducted at regular intervals, can prove useful in monitoring employee satisfaction, productivity, and overall attitude.

Whereas surveys require an employee's active participation, the up-and-coming tools don't ask employees to do anything more than their work. When KeenCorp's technology analyzes a company's email traffic, it's looking for changes in the patterns of word use and compositional style. Fluctuations in the product's index signify changes in collective levels of tension. When a change is flagged, HR can investigate to determine why attitudes are in flux and then proceed accordingly, either solving a problem or learning a lesson.
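KeenCorp has not published its model, but as a purely illustrative sketch of the general idea (tracking shifts in word-use patterns over time, not the company's actual method), one could compare successive batches of anonymized messages like this:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Illustrative only: flag week-over-week shifts in word-use patterns.
weekly_batches = [
    "great progress on the release everyone thanks",        # week 1 (toy data)
    "release slipped again please escalate blockers asap",  # week 2 (toy data)
]

vectors = CountVectorizer().fit_transform(weekly_batches)
shift = 1 - cosine_similarity(vectors[0], vectors[1])[0, 0]
print(f"word-use shift between weeks: {shift:.2f}")          # higher = bigger change
```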

"When I ask you a question, you have to think about the answer," Mirovic said. "Once you think about the answer, you start to include all kinds of other attributes. You know, you're my boss, or you've just given me a raise, or you're married to my sister. Those could all affect my response. What we try to do is go in as objectively as possible, without disturbing people as we observe them in their natural habitats."

Read more:

Measuring Employee Engagement with A.I. and Machine Learning - Dice Insights

Amazon Wants to Teach You Machine Learning Through Music? – Dice Insights

Machine learning has rapidly become one of those buzzwords embraced by companies around the world. Even if they don't fully understand what it means, executives think that machine learning will magically transform their operations and generate massive profits. That's good news for technologists, provided they actually learn the technology's fundamentals, of course.

Amazon wants to help with the learning aspect of things. At this year's AWS re:Invent, the company is previewing the DeepComposer, a 32-key keyboard that's designed to train you in machine learning fundamentals via the power of music.

No, seriously. "AWS DeepComposer is the world's first musical keyboard powered by machine learning to enable developers of all skill levels to learn Generative AI while creating original music outputs," reads Amazon's ultra-helpful FAQ on the matter. "DeepComposer consists of a USB keyboard that connects to the developer's computer, and the DeepComposer service, accessed through the AWS Management Console." There are tutorials and training data included in the package.

Generative AI, the FAQ continues, "allows computers to learn the underlying pattern of a given problem and use this knowledge to generate new content from input (such as image, music, and text)." In other words, you're going to play a really simple song like "Chopsticks," and this machine-learning platform will use that seed to build a four-hour Wagner-style opera. Just kidding! Or are we?

Jokes aside, the idea that a machine-learning platform can generate lots of data based on relatively little input is a powerful one. Of course, Amazon isn't totally altruistic in this endeavor; by serving as a training channel for up-and-coming technologists, the company obviously hopes that more people will turn to it for all of their machine learning and A.I. needs in future years. Those interested can sign up for the preview on a dedicated site.

This isn't the first time that Amazon has plunged into machine-learning training, either. Late last year, it introduced AWS DeepRacer, a model racecar designed to teach developers the principles of reinforcement learning. And in 2017, it rolled out the AWS DeepLens camera, meant to introduce the technology world to Amazon's take on computer vision and deep learning.


For those who master the fundamentals of machine learning, the jobs can prove quite lucrative. In September, the IEEE-USA Salary & Benefits Survey suggested that engineers with machine-learning knowledge make an annual average of $185,000. Earlier this year, meanwhile, Indeed pegged the average machine learning engineer salary at $146,085, and its job growth between 2015 and 2018 at 344 percent.

If you're not interested in Amazon's version of a machine-learning education, there are other channels. For example, OpenAI, the sorta-nonprofit foundation (yes, it's as odd as it sounds), hosts what it calls Gym, a toolkit for developing and comparing reinforcement learning algorithms; it also has a set of models and tools, along with a very extensive tutorial in deep reinforcement learning.
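For the curious, the Gym loop is only a few lines; this minimal sketch uses the classic Gym API that was current when this was written, with an agent taking purely random actions in CartPole and no learning at all:

```python
import gym

# Minimal Gym loop: random actions in CartPole, just to show the API shape.
env = gym.make("CartPole-v1")
observation = env.reset()
for _ in range(100):
    action = env.action_space.sample()                 # random "policy"
    observation, reward, done, info = env.step(action)
    if done:                                           # episode over: start again
        observation = env.reset()
env.close()
```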

Google likewise has a crash course, complete with 25 lessons and 40+ exercises, that's a good introduction to machine learning concepts. Then there's Hacker Noon and its interesting breakdown of machine learning and artificial intelligence.

Once you have a firmer grasp on the core concepts, you can turn to Bloomberg's Foundations of Machine Learning, a free online course that teaches advanced concepts such as optimization and kernel methods. A lot of math is involved.

Whatever learning route you take, it's clear that machine learning skills have incredible value right now. Familiarizing yourself with this technology, whether via traditional lessons or a musical keyboard, can only help your career in tech.

Follow this link:

Amazon Wants to Teach You Machine Learning Through Music? - Dice Insights