Archive for the ‘Machine Learning’ Category

Artificial intelligence will maximise efficiency of 5G network operations – ComputerWeekly.com

Compared with previous types of networks, 5G networks are both more in need of automation and more amenable to automation. Automation tools are still evolving and machine learning is not yet common in carrier-grade networking, but rapid change is expected.

Emerging standards from 3GPP, ETSI, ITU and the open source software community anticipate increased use of automation, artificial intelligence (AI) and machine learning (ML). And key suppliers' activities add credibility to the vision and promise of artificially intelligent network operations.

"Growing complexity and the need to solve repetitive tasks in 5G and future radio systems necessitate new automation solutions that take advantage of state-of-the-art artificial intelligence and machine learning techniques that boost system efficiency," wrote Ericsson's chief technology officer (CTO), Erik Ekudden, recently.

In 2020, Ericsson engineers demonstrated machine learning software that orchestrated virtual machines on a web server. They reported that during a 12-hour stress test, their software decreased idle cycles to 2%, from a baseline of 20%. Similar efficiency gains could enhance collections of edge computers and computers within cloud-native 5G infrastructure.

Considering that 5G core networks are evolving towards increased dependence on software and generic computing resources, Ericsson's demonstration suggests that large-scale use of AI solutions could help carriers use infrastructure as efficiently as possible while handling a mix of traffic types that change dynamically and fulfilling diverse service-level agreements.

Nokia marketing manager Filip De Greve recently stated: "The benefits of AI and ML are unquestionable; all it needs is the right approach and the right partner to unlock them."

A whitepaper from Nokia describes potential roles for AI and ML in virtually all phases of a service provider's operations. Last month, Nokia announced the availability of its Software Enablement Platform, whose features include a means for making use of AI and ML in edge computers that run both open radio access networks (O-RANs) and application-level services. Nokia's platform provides data that is important to machine learning developments for software-defined radios.

Carriers and third parties can develop software for Nokia's platform, which comes with some samples that are in current commercial trials. One included xApp relies on machine learning methods for traffic steering (roughly speaking, a type of service-aware load balancing for radio channels).

Huawei, too, has engaged in a number of machine learning developments in recent years, but seems to have made relatively few disclosures about the matter recently. The company said its management and orchestration (MANO) solution uses AI and big data technologies to implement automatic deployment, configuration, scaling and healing.

Needs for machine learning arise from expected challenges in managing future 5G networks. Future deployments will likely have traffic-carrying capacity orders of magnitude greater than existing infrastructures. Many suppliers, researchers and developers expect to need machine learning to make efficient use of 5G technologies.

Opportunities to use machine learning are arising with increased reliance on cloud-native resources in telecommunications networks. Carriers also experience the same powerful currents that impel many industries towards softwarisation, use of virtual machines, DevOps principles and other global vectors for intelligent automation.

Suppliers to telecoms carriers and advanced researchers are developing machine learning software that, for example, controls smart antennas with split-second timing, assigns and reassigns bandwidth within a packet core and orchestrates assignments for an edge computer's virtual machines.

Essentially, the software plays a game, aiming to predict traffic loads and use the fewest resources to carry traffic in accordance with service-level agreements. The intended result would improve the availability of resources to serve additional customers at times when loads are at their peak. When loads abate, the software can cause hardware to operate in power-saving standby mode.
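
The paragraph above describes a predict-and-provision loop. The Python sketch below is a deliberately simplified, illustrative version of that loop, not any vendor's implementation: the naive forecast function, per-server capacity and headroom figure are all assumptions invented for the example.

```python
# A minimal sketch (not any vendor's implementation) of the "game" described
# above: forecast the next traffic load, provision just enough capacity to
# meet it, and put surplus hardware into standby. All names and numbers here
# are illustrative assumptions.
from collections import deque

SERVER_CAPACITY_MBPS = 1_000     # assumed capacity of one server
HEADROOM = 1.2                   # margin kept to honour service-level agreements


def forecast_next_load(history: deque) -> float:
    """Naive predictor: weighted average of recent samples (a real system
    would use a trained ML model here)."""
    weights = range(1, len(history) + 1)
    return sum(w * x for w, x in zip(weights, history)) / sum(weights)


def plan_capacity(history: deque, active_servers: int, total_servers: int):
    predicted = forecast_next_load(history)
    needed = max(1, -(-int(predicted * HEADROOM) // SERVER_CAPACITY_MBPS))
    needed = min(needed, total_servers)
    if needed > active_servers:
        return needed, f"wake {needed - active_servers} server(s) from standby"
    if needed < active_servers:
        return needed, f"move {active_servers - needed} server(s) to standby"
    return needed, "no change"


if __name__ == "__main__":
    recent_load_mbps = deque([3200, 3500, 4100, 4800, 5600], maxlen=12)
    target, action = plan_capacity(recent_load_mbps, active_servers=4, total_servers=10)
    print(f"predicted load ~{forecast_next_load(recent_load_mbps):.0f} Mbps -> {action} (target: {target} active)")
```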

Rules-based scripts and statistical models can accomplish some of these goals, but hand-crafted algorithms face challenges. A vast number of parameters specify a connection event in a 5G network, far more than in previous generations. That is why machine learning could be a requirement, not simply an optimisation tool, for efficient resource utilisation in full-scale 5G operations.

Recent reports have surveyed a range of wireless communications applications that machine learning researchers and developers are working on, yielding many candidate technologies for carrier roadmaps.

From a business lifecycle perspective, opportunities exist for machine learning developments to expedite network planning and design, operations, marketing and other duties that normally require an intelligent human. Developers are targeting network management functions, including fault, configuration, accounting, performance and security (FCAPS) management.

From a network technology perspective, machine learning applications in research and development phases could affect every layer of the communications stack, from low-level physical and data link layers, through media access, transport, switching, session, presentation and application layers.

At lower layers of radio access networks, generic computers process baseband signals, and they schedule and form directional radio beams by synchronising many antenna elements. Machine learning systems can alleviate congestion by assigning optimal modulation parameters and rapidly scheduling beams that are calculated to fulfil immediate demands.
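
As a concrete, toy illustration of "assigning optimal modulation parameters", the sketch below maps a predicted signal-to-noise ratio to a modulation scheme. The SNR thresholds and the simple smoothing predictor are assumptions for illustration, not values from any 3GPP specification.

```python
# Illustrative-only sketch: choose a modulation scheme from a predicted SNR.
# The thresholds and the naive predictor are assumed values, not standards.
MODULATION_TABLE = [          # (minimum SNR in dB, scheme)
    (22.0, "256-QAM"),
    (16.0, "64-QAM"),
    (10.0, "16-QAM"),
    (4.0,  "QPSK"),
]


def predict_snr_db(recent_snr_samples: list) -> float:
    """Stand-in for an ML predictor: here, just an exponential moving average."""
    estimate = recent_snr_samples[0]
    for sample in recent_snr_samples[1:]:
        estimate = 0.7 * estimate + 0.3 * sample
    return estimate


def select_modulation(recent_snr_samples: list) -> str:
    snr = predict_snr_db(recent_snr_samples)
    for threshold, scheme in MODULATION_TABLE:
        if snr >= threshold:
            return scheme
    return "QPSK"


print(select_modulation([18.2, 17.5, 19.0, 20.4]))  # e.g. "64-QAM"
```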

At higher layers of communications stacks, softwarisation yields opportunities to use and reuse virtual network functions (VNFs) in dynamic combinations to handle changes in traffic patterns. For example, intelligent systems can right-size (autoscale) temporary combinations of resources to support a large video conference and reassign those resources to other jobs after the event.

In packet core networks, intelligent selection among the astronomical number of ways to mix and match network functions can cut idling while keeping customers satisfied. In radio access networks, intelligent tweaks to power levels, symbol sets, frame sizes and other parameters promise to squeeze the greatest capacity from the available spectrum.

Cyber security and privacy measures can also benefit from machine learning. In theory, intelligent domain isolation can open and shut access automatically in accordance with knowledge encoded in large databases such as event logs. Distributed learning methods can run on edge computers and user devices, keeping private data separate from centralised databases.
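
The distributed learning idea mentioned above is often realised as federated averaging: each device trains on its own private data and shares only model weights with a coordinator. The sketch below is a minimal, self-contained illustration with made-up data, not a production federated-learning framework.

```python
# A minimal sketch of federated averaging: each edge device fits a model on its
# own private data and shares only weights, which a coordinator averages.
# Data, model, and step size are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)


def local_update(weights, X, y, lr=0.1, steps=20):
    """A few gradient-descent steps of linear regression on one device's data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w


# Three "devices", each with private data drawn around the same true model.
true_w = np.array([2.0, -1.0])
devices = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    devices.append((X, y))

global_w = np.zeros(2)
for _ in range(10):                            # federated rounds
    local_weights = [local_update(global_w, X, y) for X, y in devices]
    global_w = np.mean(local_weights, axis=0)  # only weights leave the devices

print("learned weights:", np.round(global_w, 2))  # close to [2.0, -1.0]
```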

"Much as driverless cars are requiring more time and development resources than some expected, the vision of fully autonomic networks seems to remain a distant one" – Michael Gold

Juniper's slogan, "the self-driving network", expresses a vision of autonomous communications services, analogous to autonomous vehicles. Many other network technology developers have embraced similar ideas. Engineers and marketers often describe intent-based networking (IBN), one-touch provisioning, and zero-touch network and service management.

Most suppliers will probably use one of these phrases, or a similar phrase. All of them refer to a subset of network operations that can occur autonomously, or nearly so. In fact, many software-defined networking technology concepts rely on rules-based systems, a programming strategy that the artificial intelligence community developed decades ago.

Verizon network architect Mehmet Toy recently described one interpretation of IBN as "deploying and configuring the network resources according to operator intentions automatically". While developments often focus on fulfilling the intentions of network managers, Toy also envisions network configurations that respond to changes in user intentions.

Imaginably, a future network manager could employ natural language to revise a bandwidth-throttling policy. But beware of hype surrounding network automation. In some enterprise networks, zero-touch nodes configure automatically when a technician powers up a new rack. In contrast, installing a carrier-class fibre termination node remains complex.

Much as driverless cars are requiring more time and development resources than some expected, the vision of fully autonomic networks seems to remain a distant one. One major challenge consists of acquiring and analysing abundant telemetry data within service providers networks.

Many systems do not expose the data that data-hungry machine learning systems need to predict and respond to changes in traffic loads. Systems that do provide telemetry use diverse protocols and data structures, complicating AI software developments. Perhaps suppliers will see telemetry data as having high value as intellectual property and worthy of encryption.

A 2020 Nokia whitepaper advocates a multistage technology roadmap to manage the opportunities and risks. Nokia acknowledges that AI is rare in today's networks. More commonly, expert human network managers create, implement and often adjust statistical and rules-based models that govern automated systems in telecommunications networks.

Intermediate between today's model-driven practices and the future vision of autonomic networks, Nokia sees the emergence of intent-driven network management processes, enabled by closed-loop automation systems. Automated resource orchestration would free up human network managers to focus on business needs, service creation and DevOps.

"In one sense, a changing technology landscape challenges networking professionals to keep up with new developments. In another sense, AI tools in diverse fields tend to be productivity enhancers rather than redundancy generators" – Michael Gold

Does AI threaten network managers' jobs? In one sense, a changing technology landscape often challenges networking professionals to keep up with new developments. In another sense, AI tools in diverse fields tend to be productivity enhancers rather than redundancy generators. Similarly, for doctors and attorneys, AI is more of a tool than a threat.

One industry player or another always seems to be buzzing about intelligent networks. AT&T has been at it the longest, initially using the phrase in the 1980s to describe an early network computing initiative. Expectations of artificial intelligence in networks have focused and refocused repeatedly over the years. This time may be different. Are we there yet?

Now that computers control or constitute virtually all network nodes, software seems to be more agile at all layers of communications stacks. Business evolution will determine which AI and ML developments contribute most to business results and customer experiences, and which nodes in a network provide maximum leverage for machine learning software to add value.

More:
Artificial intelligence will maximise efficiency of 5G network operations - ComputerWeekly.com

The Future of AI: Careers in Machine Learning – Southern New Hampshire University

The robots are coming. If there is one thing we learned from the COVID-19 pandemic, it's that when humans are sent home, machines keep working.

This doesn't mean that robots will take over the world. It does, however, mean that our technical landscape is changing.

Human history has a long and favorable track record of technological advancements, particularly when it comes to ideas that seem ludicrous at the time (Wright brothers, anyone?). The printing press, assembly line and personal computer have all helped move civilization forward by leaps and bounds over the last few centuries.

Imagine being one of the first people to replace glasses with contact lenses by putting them directly on their eyes, no less. Henry Ford replaced horses with the automobile as our main mode of transportation. The process of pasteurization changed the way we eat. Examples like these are endless, because throughout human history, there has been innovation and change.

Even as recently as the 1980s, there was no internet in people's homes. The very means by which you are reading this article did not exist. Online school did not exist, at least not in the way we take college classes online now.

And while each technological advancement may have its detractors, it's hard to argue with the benefits of technology as a whole. After all, thinking big got us to the moon, and gave us television, 3-D printing and a host of incredible advances in modern medicine.

So, are you wondering whats next? The future of technology lies squarely with machine learning and with artificial intelligence, known as AI.

Artificial intelligence is part of the field of data science. People who work in data science are skilled in developing mathematical algorithms to answer complex questions. When, for example, a company like Netflix wants to predict what movies a customer might want to watch next, a data scientist will create an algorithm based on that customer's viewing history. Then, they will use that algorithm to offer a list of suggestions.
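
As a toy illustration of that idea (and not Netflix's actual system), the sketch below scores unwatched titles by their similarity to what a viewer has already watched, using item-item cosine similarity on a tiny, made-up watch matrix.

```python
# Toy recommender: rank unwatched titles by similarity to a viewer's history.
# The titles and watch matrix are purely invented data.
import numpy as np

titles = ["Drama A", "Drama B", "Sci-fi A", "Sci-fi B", "Documentary"]
# Rows are viewers, columns are titles; 1 = watched.
watches = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 0, 1],
    [0, 0, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 1, 1, 1],
], dtype=float)

# Cosine similarity between title columns.
norms = np.linalg.norm(watches, axis=0, keepdims=True)
similarity = (watches.T @ watches) / (norms.T @ norms)

viewer = np.array([0, 0, 1, 1, 0], dtype=float)   # this viewer's history
scores = similarity @ viewer                       # similarity to that history
scores[viewer > 0] = -np.inf                       # don't re-recommend watched titles
ranked = [titles[i] for i in np.argsort(scores)[::-1]]
print("suggestions:", ranked[:2])
```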

"Machine learning is a branch of data science which involves using data science programs that can adapt based on experience," said Ben Tasker, technical program facilitator of data science and data analytics at Southern New Hampshire University. "Take a weather predictor, for example. The more weather inputs there are, the better the prediction for what will come next."
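
The "more inputs, better prediction" point can be shown in a few lines of illustrative Python: fit the same regression model on synthetic weather data, once with a single input and once with several, and compare the error. The data generator and its coefficients are assumptions invented for the example.

```python
# Illustrative only: more relevant inputs usually mean a lower prediction error.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 500
pressure = rng.normal(1013, 8, n)
humidity = rng.uniform(20, 100, n)
wind = rng.uniform(0, 15, n)
today_temp = rng.normal(15, 7, n)
# Tomorrow's temperature depends on several of these (made-up) inputs.
tomorrow_temp = (0.8 * today_temp - 0.05 * humidity + 0.2 * (pressure - 1013)
                 - 0.1 * wind + rng.normal(0, 1.0, n))

X_small = today_temp.reshape(-1, 1)
X_full = np.column_stack([today_temp, humidity, pressure, wind])

for name, X in [("today's temp only", X_small), ("all four inputs", X_full)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, tomorrow_temp, random_state=0)
    model = LinearRegression().fit(X_tr, y_tr)
    print(f"{name}: MAE = {mean_absolute_error(y_te, model.predict(X_te)):.2f} deg C")
```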

While machine learning is useful, it's important to note that there is no artificial intelligence involved in its functions. Machine learning involves rote mathematical or mechanical processes only.

Artificial intelligence then advances data science and machine learning even further.

Whereas machine learning can make predictions, artificial intelligence can make adjustments to its computations. "In other words, AI can adjust a program to execute tasks smartly," Tasker said. A fully autonomous, self-driving car is an example of something that would use full artificial intelligence.

These days, the idea of such a self-driving car is no longer science fiction. As the fields of science and engineering continue to advance, artificial intelligence is becoming "a lot less artificial and a lot more intelligent," Tasker said.

Because so much about the field of data science in general and AI in particular is new, there are many opportunities to make your own niche, especially now that many companies have started to invest in the idea of artificial intelligence, Tasker said. This creates a wealth of career opportunities for those who thrive on charting their own path. The future of AI is great.

Careers for computer and information research scientists are predicted to grow 15% between now and 2029, according to the U.S. Bureau of Labor Statistics (BLS). That is much faster than the national average for career growth. The median pay is a healthy $122,840 per year, BLS reported.

Machine learning and artificial intelligence open up a range of other top career options as well.

So, will robots replace humans moving forward? For some jobs or tasks, quite possibly. For all jobs or tasks? Not likely.

"Of course, robots are already in the workplace," Tasker said. They are not intelligent, but they perform basic tasks. Car manufacturers use robots on assembly lines already and have for years.

"Whether a company actively uses artificial intelligence or not, all industries will be impacted by it, whether intentionally or unintentionally," Tasker said. "I do think that some industries will have a higher barrier of entry, so to speak, such as medicine," he said. Patients still prefer a human touch for things like receiving a diagnosis or test results.

"As artificial intelligence technology continues to develop, humans will need to have an ethical debate about what robots can and cannot do, but yes, we will see more robots," said Tasker.

And as the use of robots grows, "without a doubt, ethics is going to play a much larger role as AI grows," said Tasker, "or at least it should."

Careers in machine learning and artificial intelligence are still being defined, which creates generous opportunities to innovate and carve your own career path. If you like math, computer programming, coding, and technology in general, a career in data science, machine learning, or AI is definitely one to consider.

Having a strong foundation in math and STEM can help prepare you for a career in AI. Knowledge of psychology will be particularly helpful, too.

Also important: a high tolerance for change. "Data science [and AI with it] changes every year," Tasker said, "so the people working in data science will need to change with it. You will always be learning new technologies, algorithms, and coding languages."

The more math, programming, and experience with cloud computing that you can get under your belt, the better.

And as adoption of artificial intelligence technologies grows, "we will begin to see an ethical debate emerge about what AI should and should not be doing," Tasker said. That makes courses in ethics critical, because "as the field of AI grows, more ethical considerations will need to be applied."

Keep in mind that while a bachelor's degree is a great foundation on which to build a career in artificial intelligence, an advanced degree is likely necessary to advance to the highest levels in the field.

"Most jobs in the field of artificial intelligence require a graduate degree, such as a master of science or even a doctorate, so be ready to continually learn," said Tasker.

While no career is truly future-proof given the ever-changing technology landscape, there are some ways you can be best prepared to weather the change. By grounding yourself with a strong science, math, and engineering background and then being ready to drive change, you may enjoy a long and prosperous career in the field of artificial intelligence.

Of course, while having a strong academic background is important, being good at math and programming is not enough. To really thrive in this career field, you also need good, old-fashioned grit. In fact, "curiosity, grit, and being humble are key traits toward having a successful, long-term career in data science, and especially in artificial intelligence," said Tasker. These are traits that you cannot necessarily learn in the classroom, but they are helpful to being successful in this field long-term.

We have actually been using AI for some time, and not just in factories and on assembly lines, or to design futuristic cars.

Have you ever filled out a job application and included key words so that the artificial job screening tool doesn't filter you out of contention? That's artificial intelligence.

"Some artificial intelligence programs can even scan how a resume is drafted to see personality traits of an applicant," said Tasker. Other programs use facial recognition, which scans your facial expressions in an interview to create personality profiles of applicants.

Likewise, if you have ever used a website and a chat bot popped up, saying "How can I help you today?", that is also artificial intelligence. If you've ever thought you were chatting with a real, live human only to be informed that you're chatting with a bot, you already know just how realistic artificial intelligence tools are in the business and retail world.

"Chat bots and virtual assistants are being routinely used to respond to easy emails, schedule appointments, and even take meeting notes for users," Tasker said. While being on the receiving end of a bot can be frustrating at times, many businesses use them because they can perform repetitive tasks that have some known outcomes, such as determining which department your query needs to be routed to when you contact a company's customer service.

There are limitations currently, though. "While chat bots can accomplish a surprisingly large number of tasks, they cannot operate your Tesla, for example," said Tasker.

With the high return on investment of using chat bots and interview bots, the use of artificial intelligence in commerce is not likely to go away anytime soon. If anything, the use of AI will continue to grow in new and innovative ways.

With increased use of artificial intelligence comes an increase in the conversation about how it should be implemented. This is where a background in psychology could be helpful for people working in this field. "Psychology is important because it teaches a student how the human brain works, which is complicated," said Tasker. "To really learn to program AI, learning how the brain works at some basic level would help as well."

"Just because a chat bot can attend a meeting for an employee, does that mean that we should also make a bot that can perform medical exams? Where is the line? What about facilitating a classroom and teaching our children?" Tasker asked. "What about fully autonomous truck driving?"

Is there a line between what we need versus what we can do? And where does focusing on the bottom line financially begin to cost us when it comes to our humanity?

These are big questions for which there are no easy answers. Yet by studying data science, math and STEM, and by embracing the change inherent in the field of machine learning and artificial intelligence, you just might be the next Wilbur or Orville Wright.

Marie Morganelli, PhD, is a freelance content writer and editor.

Continue reading here:
The Future of AI: Careers in Machine Learning - Southern New Hampshire University

Increasing the Accessibility of Machine Learning at the Edge – Industry Articles – All About Circuits

In recent years, connected devices and the Internet of Things (IoT) have become omnipresent in our everyday lives, be it in our homes and cars or at our workplace. Many of these small devices are connected to a cloud service; nearly everyone with a smartphone or laptop uses cloud-based services today, whether actively or through an automated backup service, for example.

However, a new paradigm known as "edge intelligence" is quickly gaining traction in technology's fast-changing landscape. This article introduces cloud-based intelligence, edge intelligence, and possible use cases for professional users, with the aim of making machine learning accessible for all.

Cloud computing, simply put, is the availability of remote computational resources whenever a client needs them.

For public cloud services, the cloud service provider is responsible for managing the hardware and ensuring that the service's availability is up to a certain standard and customer expectations. The customers of cloud services pay for what they use, and the employment of such services is generally only viable for large-scale operations.

On the other hand, edge computing happens somewhere between the cloud and the client's network.

While the definition of where exactly edge nodes sit may vary from application to application, they are generally close to the local network. These computational nodes provide services such as filtering and buffering data, and they help increase privacy, provide increased reliability, and reduce cloud-service costs and latency.

Recently, it's become more common for AI and machine learning to complement edge-computing nodes and help decide what data is relevant and should be uploaded to the cloud for deeper analysis.
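
A minimal sketch of that filtering role, under assumed data and thresholds: the edge node learns what "normal" sensor readings look like locally and uploads only readings that fall well outside that range.

```python
# Illustrative edge-side filter: score each reading locally and upload only
# unusual values to the cloud. The data and threshold are assumptions.
import numpy as np

rng = np.random.default_rng(1)
baseline = rng.normal(50.0, 2.0, size=1000)   # readings seen during normal operation
mean, std = baseline.mean(), baseline.std()


def should_upload(reading: float, z_threshold: float = 4.0) -> bool:
    """Upload only readings far outside the locally learned normal range."""
    return abs(reading - mean) / std > z_threshold


incoming = [50.3, 49.1, 51.8, 73.5, 50.9]      # one clearly anomalous value
to_cloud = [r for r in incoming if should_upload(r)]
print("uploaded to cloud:", to_cloud)           # -> [73.5]
```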

Machine learning (ML) is a broad scientific field, but in recent times, neural networks (often abbreviated to NN) have gained the most attention when discussing machine learning algorithms.

Multiclass or complex ML applications such as object tracking and surveillance, automatic speech recognition, and multi-face detection typically require NNs. Many scientists have worked hard to improve and optimize NN algorithms in the last decade to allow them to run on devices with limited computational resources, which has helped accelerate the edge-computing paradigm's popularity and practicability.

One such algorithm is MobileNet, an image classification network developed by Google. The project demonstrates that highly accurate neural networks can indeed run on devices with significantly restricted computational power.
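
For readers who want to see what running such a network looks like in practice, the sketch below loads a pretrained MobileNetV2 with Keras and classifies a single image. It assumes TensorFlow is installed and that "door_camera.jpg" (a hypothetical file name) exists locally; this is generic MobileNetV2 usage, not Google's original training code.

```python
# Classify one image with a pretrained MobileNetV2 (ImageNet weights).
import numpy as np
from tensorflow.keras.applications import MobileNetV2
from tensorflow.keras.applications.mobilenet_v2 import preprocess_input, decode_predictions
from tensorflow.keras.preprocessing import image

model = MobileNetV2(weights="imagenet")   # a compact model, small enough for edge-class hardware

img = image.load_img("door_camera.jpg", target_size=(224, 224))  # hypothetical file
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

predictions = model.predict(x)
for _, label, score in decode_predictions(predictions, top=3)[0]:
    print(f"{label}: {score:.2f}")
```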

Until recently, machine learning was primarily meant for data-science experts with a deep understanding of ML and deep learning applications. Typically, the development tools and software suites were immature and challenging to use.

Machine learning and edge computing are expanding rapidly, and the interest in these fields steadily grows every year. According to current research, 98% of edge devices will use machine learning by 2025. This percentage translates to about 18-25 billion devices that the researchers expect to have machine learning capabilities.

In general, machine learning at the edge opens doors for a broad spectrum of applications ranging from computer vision, speech analysis, and video processing to sequence analysis.

One concrete example of a possible application is an intelligent door lock combined with a camera. Such a device could automatically detect a person wanting access to a room and allow them entry when appropriate.

Due to the previously discussed optimizations and performance improvements of neural network algorithms, many ML applications can now run on embedded devices powered by crossover MCUs such as the i.MX RT1170. With its two processing cores (a 1 GHz Arm Cortex-M7 and a 400 MHz Arm Cortex-M4 core), developers can choose to run compatible NN implementations with real-time constraints in mind.

Due to its dual-core design, the i.MX RT1170 also allows the execution of multiple ML models in parallel. The additional built-in crypto engines, advanced security features, and graphics and multimedia capabilities make the i.MX RT1170 suitable for a wide range of applications. Some examples include driver distraction detection, smart light switches, intelligent locks, fleet management, and many more.

The i.MX 8M Plus is a family of applications processors that focuses on ML, computer vision, advanced multimedia applications, and industrial automation with high reliability. These devices were designed with the needs of smart devices and Industry 4.0 applications in mind and come equipped with a dedicated NPU (neural processing unit) operating at up to 2.3 TOPS and up to four Arm Cortex A53 processor cores.

Built-in image signal processors allow developers to utilize either two HD camera sensors or a single 4K camera. These features make the i.MX 8M Plus family of devices viable for applications such as facial recognition, object detection, and other ML tasks. Besides that, devices of the i.MX 8M Plus family come with advanced 2D and 3D graphics acceleration capabilities, multimedia features such as video encode and decode support (including H.265), and eight PDM microphone inputs.

An additional low-power 800 MHz Arm Cortex M7 core complements the package. This dedicated core serves real-time industrial applications that require robust networking features such as CAN FD support and Gigabit Ethernet communication with TSN capabilities.

With new devices comes the need for an easy-to-use, efficient, and capable development ecosystem that enables developers to build modern ML systems. NXP's comprehensive eIQ ML software development environment is designed to assist developers in creating ML-based applications.

The eIQ tools environment includes inference engines, neural network compilers, and optimized libraries to enable working with ML algorithms on NXP microcontrollers, i.MX RT crossover MCUs, and the i.MX family of SoCs. The needed ML technologies are accessible to developers through NXP's SDKs for the MCUXpresso IDE and Yocto BSP.

The upcoming eIQ Toolkit adds an accessible GUI, the eIQ Portal, and a workflow that enables developers of all experience levels to create ML applications.

Developers can choose to follow a process called BYOM (bring your own model), where developers build their trained models using cloud-based tools and then import them to the eIQ Toolkit software environment. Then, all that's left to do is select the appropriate inference engine in eIQ. Alternatively, the developer can use the eIQ Portal GUI-based tools or command line interface to import and curate datasets and use the BYOD (bring your own data) workflow to train their model within the eIQ Toolkit.
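
To make the BYOM idea concrete, the sketch below shows one common way a model trained elsewhere is packaged for a microcontroller-class target: converting a Keras model to a quantized TensorFlow Lite file. The model, data, and file names are placeholders, and the exact formats a given eIQ inference engine accepts should be confirmed against NXP's documentation.

```python
# Hedged sketch of the "bring your own model" step: convert a trained Keras
# model to a compact, quantized format often used on MCU-class devices.
import numpy as np
import tensorflow as tf

# Placeholder for a model produced by your own cloud training workflow.
trained_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])


def representative_data():
    # A handful of sample inputs so the converter can calibrate quantization.
    for _ in range(100):
        yield [np.random.rand(1, 32).astype(np.float32)]


converter = tf.lite.TFLiteConverter.from_keras_model(trained_model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_data

with open("model_int8.tflite", "wb") as f:
    f.write(converter.convert())
```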

Most modern-day consumers are familiar with cloud computing. However, in recent years a new paradigm known as edge computing has seen a rise in interest.

With this paradigm, not all data gets uploaded to the cloud. Instead, edge nodes, located somewhere between the end-user and the cloud, provide additional processing power. This paradigm has many benefits, such as increased security and privacy, reduced data transfer to the cloud, and lower latency.

More recently, developers often enhance these edge nodes with machine learning capabilities. Doing so helps to categorize collected data and filter out unwanted results and irrelevant information. Adding ML to the edge enables many applications such as driver distraction detection, smart light switches, intelligent locks, fleet management, surveillance and categorization, and many more.

ML applications have traditionally been exclusively designed by data-science experts with a deep understanding of ML and deep learning applications. NXP provides a range of inexpensive yet powerful devices, such as the i.MX RT1170 and the i.MX 8M Plus, and the eIQ ML software development environment to help open ML up to any designer. This hardware and software aims to allow developers to build future-proof ML applications at any level of experience, regardless of how small or large the project will be.


See more here:
Increasing the Accessibility of Machine Learning at the Edge - Industry Articles - All About Circuits

PS5 Capable Of Machine Learning, AI Upscaling According To Insomniac Games – PlayStation Universe

Spider-Man: Miles Morales developer Insomniac Games has revealed that the PS5 is capable of Machine Learning and AI Upscaling.

The comments come via Insomniac Games' Josh DiCarlo during a series of tweets about the performance of the PS5 and the studio's work on the Spider-Man franchise. DiCarlo revealed that its innards are ML-based, and that the studio is only just scratching the surface of what Sony's new console is capable of achieving.

"Not sure how specific I can get with specs for now, but you are correct in the assumption that all final deformations are resulting via ML inference at runtime on the PS5 hardware. There are no blend shapes, skin decomp, or traditional tricks of the trade (nothing against them!)"

Insomniac Games has been pretty busy with PS5 hardware, having released a remastered version of Marvel's Spider-Man and a dedicated version of Spider-Man: Miles Morales alongside the PS4 edition, and it is currently working on the upcoming Ratchet & Clank: Rift Apart.


[Source Joe Miller on Twitter via NeoGAF]

Read the original:
PS5 Capable Of Machine Learning, AI Upscaling According To Insomniac Games - PlayStation Universe

Is Machine Learning The Future Of Coffee Health Research? – Sprudge

If you've been a reader of Sprudge for any reasonable amount of time, you've no doubt by now read multiple articles about how coffee is potentially beneficial for some particular facet of your health. The stories generally go like this: a study finds drinking coffee is associated with an X% decrease in [bad health outcome], followed shortly by "the study is observational and does not prove causation."

In a new study in the American Heart Association's journal Circulation: Heart Failure, researchers found a link between drinking three or more cups of coffee a day and a decreased risk of heart failure. But there's something different about this observational study. This study used machine learning to get to its conclusion, and it may significantly alter the utility of this sort of study in the future.

As reported by the New York Times, the new study isn't exactly new at all. Led by David Kao, a cardiologist at the University of Colorado School of Medicine, researchers re-examined the Framingham Heart Study (FHS), a long-term, ongoing cardiovascular cohort study of residents of the city of Framingham, Massachusetts, that began in 1948 and has grown to include over 14,000 participants.

Whereas most research starts out with a hypothesis that it then seeks to prove or disprove (an approach that can lead to false relationships being established by the sorts of variables researchers choose to include or exclude in their data analysis), Kao et al approached the FHS with no intended outcome. Instead, they utilized a powerful and increasingly popular data-analysis technique known as machine learning to find any potential links between patient characteristics captured in the FHS and the odds of the participants experiencing heart failure.

Able to analyze massive amounts of data in a short amount of time, as well as be programmed to handle uncertainties in the data (such as whether a reported cup of coffee is six ounces or eight ounces), machine learning can start to ascertain and rank which variables are most associated with incidents of heart failure, giving even observational studies more explanatory power in their findings. And indeed, when the results of the FHS machine learning analysis were compared with two other well-known studies, the Cardiovascular Heart Study (CHS) and the Atherosclerosis Risk in Communities study (ARIC), the algorithm was able to correctly predict the relationship between coffee intake and heart failure.
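
This is not the study's actual pipeline, but the sketch below shows the general technique the paragraph describes: letting a model rank which patient variables are most associated with an outcome rather than testing one pre-chosen hypothesis. The synthetic data, including the size of any coffee effect, is invented purely for illustration.

```python
# Generic variable-ranking sketch (not the Circulation: Heart Failure study):
# fit a model on patient-style features and rank feature importances.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(7)
n = 2000
data = pd.DataFrame({
    "age": rng.normal(55, 10, n),
    "systolic_bp": rng.normal(130, 15, n),
    "coffee_cups_per_day": rng.integers(0, 6, n),
    "cholesterol": rng.normal(200, 30, n),
})
# Synthetic outcome in which age and blood pressure matter most and coffee has
# a small protective effect (invented numbers, not the study's findings).
risk = (0.04 * (data.age - 55) + 0.03 * (data.systolic_bp - 130)
        - 0.1 * data.coffee_cups_per_day)
heart_failure = (risk + rng.normal(0, 1, n)) > 1.0

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(data, heart_failure)
ranking = sorted(zip(data.columns, model.feature_importances_), key=lambda t: -t[1])
for name, importance in ranking:
    print(f"{name:22s} {importance:.3f}")
```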

But, of course, there are caveats. Machine learning algorithms are only as good as the data being fed to them. If the scope is too narrow, the results may not translate more broadly and their real-world predictive utility is significantly decreased. The New York Times offers facial recognition software as an example: trained primarily on white male subjects, the algorithms have been much less accurate in identifying women and people of color.

Still, the new study shows promise, not just for the health benefits the algorithm uncovered, but for how we undertake and interpret this sort of analysis-driven research.

Zac Cadwalader is the managing editor at Sprudge Media Network and a staff writer based in Dallas. Read more Zac Cadwalader on Sprudge.

See the rest here:
Is Machine Learning The Future Of Coffee Health Research? - Sprudge