Archive for the ‘Artificial Intelligence’ Category

Artificial Intelligence In Healthcare Market Size, Share & Trends Analysis Report By Component, By Application And Segment Forecasts, 2021 – 2028…

Artificial Intelligence In Healthcare Market Size, Share & Trends Analysis Report By Component (Software Solutions, Hardware, Service), By Application (Robot Assisted Surgery, Connected Machines, Clinical Trials), And Segment Forecasts, 2021 - 2028

New York, June 18, 2021 (GLOBE NEWSWIRE) -- Reportlinker.com announces the release of the report "Artificial Intelligence In Healthcare Market Size, Share & Trends Analysis Report By Component, By Application And Segment Forecasts, 2021 - 2028" - https://www.reportlinker.com/p06096560/?utm_source=GNW

Artificial Intelligence In Healthcare Market Growth & Trends

The global artificial intelligence in healthcare market is expected to reach USD 120.2 billion by 2028, expanding at a CAGR of 41.8% over the forecast period. Growing technological advancements, coupled with an increasing need for efficient and innovative solutions to enhance clinical and operational outcomes, are contributing to market growth. Pressure to cut spending is rising globally as the cost of healthcare grows faster than the economies that fund it. Advancements in healthcare IT present opportunities to reduce spending by improving care delivery and clinical outcomes. Thus, the demand for AI technologies is expected to increase in the coming years.
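
As a quick sanity check on those headline numbers, the implied base-year market size follows from the standard CAGR formula. A minimal sketch, assuming the forecast period is counted as seven compounding years (2021 to 2028):

```python
# Implied base-year size from the report's projected 2028 value and CAGR.
# The seven-year compounding period is an assumption about how the report
# counts its 2021-2028 forecast window.
end_value = 120.2          # USD billion, projected 2028 market size
cagr = 0.418               # 41.8% compound annual growth rate
years = 7                  # 2021 -> 2028

base = end_value / (1 + cagr) ** years
print(f"Implied 2021 market size: USD {base:.1f} billion")  # ~USD 10.4 billion
```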

Moreover, the ongoing COVID-19 pandemic and the introduction of technologically advanced products to improve patient care are anticipated to drive growth further in the coming years. The pandemic is accelerating the adoption of AI in applications such as clinical trials, diagnosis, and virtual assistants, adding value to healthcare by analyzing complicated medical images of patients' complications and supporting clinicians in detection and diagnosis.

Moreover, an increase in the number of AI startups, coupled with high investments by venture capital firms in innovative technologies that support fast and effective patient management amid a significant rise in the number of patients suffering from chronic diseases, is driving the market.

In addition, the shortage of a public health workforce has become a major concern in many countries around the world. This can mainly be attributed to demand for physicians growing faster than their supply.

As per WHO estimates in 2019, the global shortage of skilled personnel, including nurses, doctors, and other professionals, was approximately 4.3 million. Thus, the shortage of a skilled workforce is contributing to the demand for artificial intelligence-enabled systems in the industry.

Artificial Intelligence In Healthcare Market Report Highlights

- The market is anticipated to witness significant growth over the forecast period owing to the rapidly increasing application of artificial intelligence in this space
- The software component segment dominated the market in 2020 owing to the increased development of AI-based software solutions
- The clinical trials segment dominated the market in 2020 owing to the easy commercial availability of AI-based products that identify patterns in doctor-patient interaction data to deliver personalized medicine
- North America dominated the market in 2020 owing to the growing adoption of healthcare IT solutions, increasing funding for the development of AI capabilities, and the well-established healthcare sector

Read the full report: https://www.reportlinker.com/p06096560/?utm_source=GNW

About Reportlinker
ReportLinker is an award-winning market research solution. Reportlinker finds and organizes the latest industry data so you get all the market research you need - instantly, in one place.


Go here to see the original:
Artificial Intelligence In Healthcare Market Size, Share & Trends Analysis Report By Component, By Application And Segment Forecasts, 2021 - 2028...

The promise and perils of Artificial Intelligence partnerships – Hindustan Times

"A period that had been broadly described as engagement has come to an end," Kurt Campbell, the Indo-Pacific Coordinator at the United States (US) National Security Council, told a virtual audience in May on the subject of US-China relations. "The dominant paradigm is going to be competition."

On several occasions, Campbell has highlighted that one of the major arenas of this competition will be technology. This is increasingly reflected in US national security structures. Today, there is both a senior director and coordinator for technology and national security at the White House; the National Economic Council has briefed the Cabinet on supply chain resilience; and the focus of Department of Defense policy reviews has been on emerging military technologies.

The subject of intensifying technology competition is also making its way into new US avenues for cooperation with partners, including with India. This could take the form of bilateral cooperation, coordination at multilateral institutions, or loose coalitions such as the Quad. At the Quad's virtual summit in March, the four leaders (of India, Japan, Australia, and the US) agreed, among other things, to establish a working group on critical and emerging technologies, which has already convened.

Artificial Intelligence (AI) has emerged as one technology of particular importance because of its role as an accelerator, its versatility, and its wide applicability. Driven by recent breakthroughs in machine learning made possible by plentiful data, cheap computing power, and accessible algorithms, AI is a good bellwether for the possibilities and challenges of international cooperation on emerging technologies. It is also incredibly lucrative, and may generate hundreds of billions of dollars in revenue over the coming decade.

There are some obvious areas of commonality and cooperation between India, the US, and other partners when it comes to AI. For example, there is a similar concern about developing AI in a broadly democratic setting. AI can be used in many positive ways to foster innovation, increase efficiency, improve development, and enhance consumer experience. For India, AI deployment will be tied closely to inclusive growth and its development trajectory, with potentially positive implications for agriculture, health, and education, among other sectors.

But AI can also be used for a host of undesirable purposes: generating misinformation, enabling criminal activity, and encroaching upon personal privacy. Quad countries and others, including those in Europe and North America, generally seek partners amenable to broadly upholding a responsible, human-centric approach to AI.

Additionally, despite the nominally more nationalistic rhetoric (e.g., Build Back Better, Atmanirbhar Bharat), there is a fundamental recognition that international partnerships are valuable and necessary. AI development and deployment are inherently international in character.

Basic and applied research involves collaborations across universities, research centres, and countries. Data can be gathered more easily, a lot of development relies on open-source information, and funding for AI start-ups is a global enterprise. There is also a recognition that countries can learn from each other's experiences and mistakes, and that the successful deployment of AI would serve as a model for others. India, for example, is one of the few developing countries large enough to marshal considerable resources for AI, in a manner that could be replicated elsewhere, including in South Asia or Africa.

India and its partners also confront some similar challenges when it comes to the development and deployment of AI. One imperative involves nurturing, attracting, and retaining the requisite talent. According to MacroPolo's Global AI Talent Tracker, 12% of elite AI researchers in the world received their undergraduate degrees from India, the most after the United States (35%) and more than China (10%). Yet very little top-tier AI research is being conducted in India (over 90% takes place in the United States, China, the European Union, Canada, and the UK).

Beyond talent, additional challenges lie in securing the necessary infrastructure; ensuring resilient supply chains, especially for components such as microprocessors; aligning on standards, governance, and procurement; and securing the critical minerals and other raw materials required to build the necessary physical infrastructure.

Given that various governments have only recently established AI policies, and in some cases are still formulating them, international cooperation is still very much a work in progress. More detailed efforts will be outlined in the coming months and years.

Nevertheless, the contours of cooperation are already discernible. Some areas are proving relatively easy, such as coordination in the setting of standards at the multilateral level, which is already underway. Other areas will prove more challenging. Supply chain security and building resilience should theoretically be easier, given the political-level agreement on this issue; however, ensuring bureaucratic and regulatory harmonisation remains complicated. India and its partners may have the most trouble aligning their approaches to data, a particularly touchy subject at the moment, and, in the long run, incentivising joint research and development.

The future looks bright for organic cooperation on AI: the demand is there, and India and its partners all hold relative strengths. But critical decisions made in the near future could have transformative effects on international cooperation on AI, which, in turn, could decisively shape the contours of what some have described as the Fourth Industrial Revolution.

Dhruva Jaishankar is executive director, ORF America

The views expressed are personal

View post:
The promise and perils of Artificial Intelligence partnerships - Hindustan Times

Artificial Intelligence for Rapid Exclusion of COVID-19 Infection – SciTechDaily

Artificial intelligence (AI) may offer a way to accurately determine that a person is not infected with COVID-19. An international retrospective study finds that infection with SARS-CoV-2, the virus that causes COVID-19, creates subtle electrical changes in the heart. An AI-enhanced EKG can detect these changes and potentially be used as a rapid, reliable COVID-19 screening test to rule out COVID-19 infection.

The AI-enhanced EKG was able to detect COVID-19 infection in the test with a positive predictive value (people infected) of 37% and a negative predictive value (people not infected) of 91%. When additional normal control subjects were added to reflect a 5% prevalence of COVID-19, similar to a real-world population, the negative predictive value jumped to 99.2%. The findings are published in Mayo Clinic Proceedings.

COVID-19 has a 10- to 14-day incubation period, which is long compared to other common viruses. Many people do not show symptoms of infection, and they could unknowingly put others at risk. Also, the turnaround time and clinical resources needed for current testing methods are substantial, and access can be a problem.

"If validated prospectively using smartphone electrodes, this will make it even simpler to diagnose COVID infection, highlighting what might be done with international collaborations," says Paul Friedman, M.D., chair of Mayo Clinic's Department of Cardiovascular Medicine in Rochester. Dr. Friedman is senior author of the study.

The realization of a global health crisis brought together stakeholders around the world to develop a tool that could address the need to rapidly, noninvasively and cost-effectively rule out the presence of acute COVID-19 infection. The study, which included data from racially diverse populations, was conducted through a global volunteer consortium spanning four continents and 14 countries.

"The lessons from this global working group showed what is feasible, and the need pushed members in industry and academia to partner in solving the complex questions of how to gather and transfer data from multiple centers with their own EKG systems, electronic health records, and variable access to their own data," says Suraj Kapa, M.D., a cardiac electrophysiologist at Mayo Clinic. "The relationships and data processing frameworks refined through this collaboration can support the development and validation of new algorithms in the future."

The researchers selected patients with EKG data from around the time their COVID-19 diagnosis was confirmed by a genetic test for the SARS-CoV-2 virus. These data were control-matched with similar EKG data from patients who were not infected with COVID-19.

Researchers used more than 26,000 of the EKGs to train the AI and nearly 4,000 others to validate its readings. Finally, the AI was tested on 7,870 EKGs not previously used. In each of these sets, the prevalence of COVID-19 was around 33%.

To accurately reflect a real-world population, more than 50,000 additional normal EKGs were then added to reach a 5% prevalence rate of COVID-19. This raised the negative predictive value of the AI from 91% to 99.2%.

Zachi Attia, Ph.D., a Mayo Clinic engineer in the Department of Cardiovascular Medicine, explains that prevalence is a variable in the calculation of positive and negative predictive values. Specifically, as the prevalence decreases, the negative predictive value increases. Dr. Attia is co-first author of the study with Dr. Kapa.

"Accuracy is one of the biggest hurdles in determining the value of any test for COVID-19," says Dr. Attia. "Not only do we need to know the sensitivity and specificity of the test, but also the prevalence of the disease. Adding the extra control EKG data was critical to demonstrating how a variable prevalence of the disease, as we have encountered with regions having widely different rates of disease at different stages of the pandemic, would impact how the test would perform."
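
Dr. Attia's point can be made concrete with the standard Bayes'-rule formulas for predictive values. The sketch below uses illustrative sensitivity and specificity values, since the article reports only the resulting predictive values, and shows NPV rising as prevalence falls:

```python
# Predictive values as a function of test characteristics and prevalence.
# Sensitivity and specificity here are illustrative placeholders, not the
# study's actual figures.

def predictive_values(sensitivity, specificity, prevalence):
    """Return (PPV, NPV) via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    true_neg = specificity * (1 - prevalence)
    false_neg = (1 - sensitivity) * prevalence
    ppv = true_pos / (true_pos + false_pos)
    npv = true_neg / (true_neg + false_neg)
    return ppv, npv

# Same hypothetical test at the study's two prevalence settings:
for prev in (0.33, 0.05):
    ppv, npv = predictive_values(sensitivity=0.80, specificity=0.60, prevalence=prev)
    print(f"prevalence={prev:.0%}: PPV={ppv:.1%}, NPV={npv:.1%}")
# NPV rises as prevalence falls, mirroring the jump reported in the study.
```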

"This study demonstrates the presence of a biological signal in the EKG consistent with COVID-19 infection, but it included many ill patients. While it is a hopeful signal, we must prospectively test this in asymptomatic people using smartphone-based electrodes to confirm that it can be practically used in the fight against the pandemic," notes Dr. Friedman. "Studies are underway now to address that question."

Reference: 15 June 2021, Mayo Clinic Proceedings.

This study was designed and conceived by Mayo Clinic investigators, and the work was made possible in part by a philanthropic gift from the Lerer Family Charitable Foundation Inc., and by the voluntary support from participating physicians and hospitals around the world who contributed in an effort to combat the COVID-19 pandemic. Technical support was donated by GE Healthcare, Philips and Epiphany Healthcare for the transfer of EKG data.

Read the original:
Artificial Intelligence for Rapid Exclusion of COVID-19 Infection - SciTechDaily

Driving The Next Generation of Artificial Intelligence (AI) – BBN Times

Artificial intelligence (AI) is disrupting a multitude of industries.

This article is a response to an article arguing that an AI Winter may be inevitable.

However, I believe that there are fundamental differences between what happened in the 1970s (the first AI winter) and the late 1980s (the second AI winter, with the fall of Expert Systems) and today. The arrival and growth of the internet, smart mobiles, and social media mean that the volume and velocity of the data being generated is constantly increasing, requiring Machine Learning and Deep Learning to make sense of the Big Data we generate.

For those wishing to see the details of what AI is, I suggest reading an Intro to AI; for the purposes of this article, I will treat Machine Learning and Deep Learning as subsets of Artificial Intelligence (AI).

AI deals with developing computing systems that are capable of performing tasks humans are very good at, for example recognising objects, recognising and making sense of speech, and making decisions in a constrained environment.

The rapid growth in Big Data has driven much of the growth in AI, alongside the reduced cost of data storage (cloud servers) and Graphical Processing Units (GPUs) making Deep Learning more scalable. Data will continue to drive much of the future growth of AI; however, the nature of the data and the location of its interaction with AI will change. This article will set out how the future of AI will increasingly sit alongside data generated at the edge of the network (on device), closer to the user. This has the advantage that latency will be lower, and 5G networks will enable a dramatic increase in device connectivity, with much greater capacity to connect IoT devices relative to 4G networks.

The 8th edition of Data Never Sleeps by Domo illustrates how "data is constantly being generated in ad clicks, reactions on social media, shares, rides, transactions, streaming content, and so much more. When examined, this data can help you better understand a world that is moving at increasing speeds."

AI has been applied effectively in the Digital Marketing space in particular, where digital footprints allow Machine Learning algorithms to learn from historical data and make predictions about the future.

The growth in data has been a key factor in the growth of AI (Machine Learning and Deep Learning) over the past decade and will be an ongoing theme in the 2020s, as one consequence of the Covid crisis has been to accelerate digital transformation, and hence digital data will continue to grow.

Much of the data generated today comes via the social media giants, tech majors, and commerce giants, whose Machine Learning and Deep Learning algorithms are used to generate personalised recommendations and content for the user. If a recommendation contains an error, there is no injury or death and no complex question of liability or damages. However, when we seek to scale Deep Learning into areas such as healthcare and autonomous systems, such as autonomous cars, drones, and robots, the real-world issues become more complex and the issues set out above have to be addressed.

If AI is set to enter ever more sectors of the economy and our homes and places of work (be that the home, the office, a healthcare facility, or a factory, for example), then we'll need to develop AI with the capabilities of causality and transparency too, whilst ensuring data security and privacy safeguards.

Moreover, I believe that we are on the verge of experiencing technology convergence between AI, 5G, and the IoT, with AI increasingly on the Edge of the network (on the device). This in turn is creating demand for AI research that extracts more from less, such as Neural Compression (see below), and for innovation with hybrid AI techniques that will lead to Broad AI, an area in between Narrow AI (ANI) and Artificial General Intelligence (AGI).

Instead of Zoom and the other 2D video calls that we've become accustomed to during the Covid crisis in 2020, in a few years we'll be using holographic 3D calls with 5G-enabled glasses.

Furthermore, going forwards, a great deal of the data generated will come from Internet of Things (IoT) connected devices on the Edge of the network, with real-time data and analytics becoming increasingly important.

This will lead to demand for innovation and new techniques in AI and Deep Learning as we increasingly work with data closer to where it is generated, nearer to the user, and as autonomous systems need to handle dynamic environments in real time.

This need for near real-time analytics and responses will lead to AI and Deep Learning techniques that are more efficient with smaller data sets. Furthermore, this will also bring benefits from a climate sustainability perspective.

The area of Neural Network Compression, including pruning techniques such as the one highlighted by the Lottery Ticket Hypothesis, will be increasingly important.

An example is shown in the video hosted by Clarifai featuring a Facebook AI Research engineer, Dr Michela Paganini.

Increasingly we will find ways to squeeze more out of Neural Networks by making them more efficient, and hence enable them to work effectively in IoT and Edge environments.
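
As a concrete illustration of magnitude-based pruning (a sketch of the general technique, not the specific method in the video), PyTorch ships a pruning utility that zeroes the smallest-magnitude weights; the layer size and pruning ratio below are arbitrary choices:

```python
# Minimal magnitude-based pruning sketch using PyTorch's built-in utility.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(256, 128)  # arbitrary illustrative layer

# Zero out the 60% of weights with the smallest absolute value (L1 magnitude).
prune.l1_unstructured(layer, name="weight", amount=0.6)

sparsity = (layer.weight == 0).float().mean().item()
print(f"Sparsity after pruning: {sparsity:.0%}")  # ~60% of weights are now zero
```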

Decentralised Data: Federated Learning and Differential Privacy

Much of the data that we will be working with in the era of Edge Computing and 5G networks will be decentralised and distributed. Decentralised data is also a challenge in healthcare today, with many hospitals and other healthcare providers holding data in local silos, concerned about data sharing due to privacy regulations such as GDPR or HIPAA. Indeed, data security is a key area of corporate governance and an issue that needs careful consideration when seeking to apply AI technology to certain sectors.

Federated Learning with Differential Privacy will be a key enabler for expanding Machine Learning into areas such as healthcare and financial services, including banking and insurance.
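
To make the idea concrete, here is a minimal sketch of federated averaging (FedAvg) on a toy linear model: each site trains on its own private data and only model updates are shared and averaged. All names and data are illustrative, and the calibrated noise that differential privacy would add to each update is noted but omitted:

```python
# Minimal federated averaging sketch: raw data never leaves each site.
import numpy as np

def local_update(weights, X, y, lr=0.01, steps=10):
    """A few gradient-descent steps on one site's private data."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    # For differential privacy, calibrated noise would be added here
    # before the update is shared (omitted in this sketch).
    return w

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three "hospitals", each holding a local dataset that stays on site.
sites = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    sites.append((X, y))

global_w = np.zeros(2)
for _ in range(20):
    # Each site computes an update locally; the server only averages them.
    updates = [local_update(global_w, X, y) for X, y in sites]
    global_w = np.mean(updates, axis=0)

print("Learned weights:", global_w)  # approaches [2, -1] without pooling data
```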

Transformers have been revolutionising the field of Natural Language Processing (NLP) over recent years, with the likes of BERT, GPT-2, and more recently GPT-3.
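
For readers who want to see the mechanism these models share, here is a minimal sketch of scaled dot-product self-attention, the core operation of the Transformer; all shapes and weights are illustrative:

```python
# Minimal scaled dot-product self-attention sketch (single head).
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """X: (seq_len, d_model). Each token attends to every other token."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])            # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # row-wise softmax
    return weights @ V                                 # attention-weighted values

rng = np.random.default_rng(1)
d = 64
X = rng.normal(size=(10, d))                           # a 10-token sequence
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)             # (10, 64)
```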

In 2020 we also started to see Transformers making an impact in Computer Vision, with the likes of the DETR model produced by Facebook AI Research, which combined Convolutional Neural Networks with Transformers and a Self-Attention Mechanism to streamline the process.

Furthermore, in October an ICLR paper was released entitled "An Image is Worth 16x16 Words" that demonstrated a Transformer network outperforming even the most advanced Convolutional Neural Networks in image recognition. The paper received huge enthusiasm in the AI research community, with Synced noting that "...the paper suggests the direct application of Transformers to image recognition can outperform even the best convolutional neural networks when scaled appropriately. Unlike prior works using self-attention in CV, the scalable design does not introduce any image-specific inductive biases into the architecture."

A key extract from the paper is set out below.

"While the Transformer architecture has become the de-facto standard for natural language processing tasks, its applications to computer vision remain limited. In vision, attention is either applied in conjunction with convolutional networks, or used to replace certain components of convolutional networks while keeping their overall structure in place. We show that this reliance on CNNs is not necessary and a pure transformer applied directly to sequences of image patches can perform very well on image classification tasks. When pre-trained on large amounts of data and transferred to multiple mid-sized or small image recognition benchmarks (ImageNet, CIFAR-100, VTAB, etc.), Vision Transformer (ViT) attains excellent results compared to state-of-the-art convolutional networks while requiring substantially fewer computational resources to train."

One key attraction of Transformer models is the potential to reduce computational cost relative to Convolutional Neural Networks, and also the ability to apply Neural Network Compression, in the form of pruning, to NLP Transformer models.

Carbin et al. authored a paper entitled "The Lottery Ticket Hypothesis for Pre-trained BERT Networks", arguing that "As large-scale pre-training becomes an increasingly central paradigm in Deep Learning, our results demonstrate that the main lottery ticket observations remain relevant in this context." The key point is made by MIT CSAIL: "Google search's NLP model BERT requires massive processing power and the new paper suggests that it would often run just as well with as little as 40% as many neural nets" (subnetworks).

Reducing the computational cost of Deep Learning is important for environmental sustainability and for scaling Deep Learning across the Edge and IoT, where microcontrollers have a very limited resource budget, especially memory (SRAM) and storage (Flash).

Lin et al. explained in a paper entitled "MCUNet: Tiny Deep Learning on IoT Devices" that

"The on-chip memory is 3 orders of magnitude smaller than mobile devices, and 5-6 orders of magnitude smaller than cloud GPUs, making deep learning deployment extremely difficult."

Daniel Ackerman, in an MIT News article entitled "System brings Deep Learning to Internet of Things devices", explains the research, stating that "The system, called MCUNet, designs compact neural networks that deliver unprecedented speed and accuracy for deep learning on IoT devices, despite limited memory and processing power. The technology could facilitate the expansion of the IoT universe while saving energy and improving data security." Kurt Keutzer, a computer scientist at the University of California at Berkeley who was not involved in the work, added that MCUNet could bring intelligent computer-vision capabilities to even the simplest kitchen appliances, or enable more intelligent motion sensors.

The article also notes that MCUNet may make IoT devices more secure. "A key advantage is preserving privacy," says Han. "You don't need to transmit the data to the cloud."

"Analyzing data locally reduces the risk of personal information being stolen including personal health data. Han envisions smart watches with MCUNet that dont just sense users heartbeat, blood pressure, and oxygen levels, but also analyze and help them understand that information. MCUNet could also bring deep learning to IoT devices in vehicles and rural areas with limited internet access."

Moreover, the carbon footprint of MCUNet is also slim.

The article quotes Han as stating, "Our big dream is for green AI," adding that training a large neural network can burn carbon equivalent to the lifetime emissions of five cars, while MCUNet on a microcontroller would require a small fraction of that energy.

"Our end goal is to enable efficient, tiny AI with less computational resources, less human resources, and less data," says Han.

Further breakthroughs in AI research in 2020 have included the rise of Neuro-Symbolic AI, which combines Deep Neural Networks with Symbolic AI, through the work of IBM Watson and MIT CSAIL. Srishti Deoras in Analytics India Magazine observed that researchers were enthusiastic about it, noting that "They used CLEVRER to benchmark the performances of neural networks and neuro-symbolic reasoning by using only a fraction of the data required for traditional deep learning systems. It helped AI not only to understand causal relationships but apply common sense to solve problems."

Microsoft Research published "Next-generation architectures bridge gap between neural and symbolic representations with neural symbols", explaining that "In neurosymbolic AI, symbol processing and neural network learning collaborate. Using a unique neurosymbolic approach that borrows a mathematical theory of how the brain can encode and process symbols..."

"Two of these new neural architecturestheTensor-Product Transformer (TP-Transformer)andTensor Products for Natural- to Formal-Language mapping (TP-N2F)have set a new state of the art for AI systems solving math and programming problems stated in natural language. Because these models incorporate neural symbols, theyre not completely opaque, unlike nearly all current AI systems. We can inspect the learned symbols and their mutual relations to begin to understand how the models achieve their remarkable results."

Another field showing promise in AI is Neuroevolution. Areas where Neuroevolution is being applied and generating excitement include robotics and gaming environments. It is an area of active research; a recent example is the work by Rasekh and Safi-Esfahani entitled "EDNC: Evolving Differentiable Neural Computers", which "shows that both proposed encodings reduce the evolution time by at least 73% in comparison with the baseline methods."
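
As a minimal illustration of the principle (not the EDNC method itself), the sketch below evolves a tiny weight vector with selection and mutation against a toy fitness function, using no gradients at all:

```python
# Toy neuroevolution sketch: evolve weights by selection and mutation.
import numpy as np

rng = np.random.default_rng(2)
TARGET = np.array([0.5, -0.3, 0.8])   # hidden optimum for the toy fitness

def fitness(w):
    """Higher is better: negative squared distance to the target."""
    return -np.sum((w - TARGET) ** 2)

pop = rng.normal(size=(20, 3))                       # population of weight vectors
for _ in range(50):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[np.argsort(scores)[-5:]]           # keep the 5 fittest
    children = parents[rng.integers(0, 5, 15)]       # clone parents...
    children += rng.normal(scale=0.1, size=(15, 3))  # ...and mutate
    pop = np.vstack([parents, children])             # next generation

best = max(pop, key=fitness)
print("Best weights:", best)  # converges towards TARGET
```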

On another note relating to breakthrough AI research, DeepMind recently announced that it had solved the challenge of protein folding, as noted by Nature in an article entitled "'It will change everything': DeepMind's AI makes gigantic leap in solving protein structures", with the article noting that "The ability to accurately predict protein structures from their amino-acid sequence would be a huge boon to life sciences and medicine. It would vastly accelerate efforts to understand the building blocks of cells and enable quicker and more advanced drug discovery."

The achievement by DeepMind illustrates that we are able to achieve real-world impact with cutting edge AI research such as Deep Reinforcement Learning.

It is ironic that DeepMind released these research results in close proximity to the article suggesting that a new AI winter is inevitable (in part due to a view of Moore's law reaching a limit). A year ago, a VC informed me that there was no way we would solve protein folding without quantum computing capabilities, as current computing capabilities were insufficient, and that it would take at least a decade or more because we were hitting the limits of Moore's law and needed Quantum Computing to solve such complex tasks.

Moreover, if one refers to the limitations of AlphaGo, the DeepMind algorithm that beat the world Go champion Lee Sedol in 2016, as evidence of the onset of an AI winter because AlphaGo can only play one game, then one should also refer to its successor model (and successor to AlphaGo Zero), AlphaZero. Jennifer Ouellette authored an article entitled "Move over AlphaGo: AlphaZero taught itself to play three different games", observing that "AlphaZero, this program taught itself to play three different board games (chess, Go, and shogi, a Japanese form of chess) in just three days, with no human intervention." See also Multi-task Deep Reinforcement Learning with PopArt.

However, true multitasking at scale on highly unrelated tasks remains a challenge for Deep Neural Networks (though see the reference to generative replay below).

Research work is also being undertaken on Neural Compression for Deep Reinforcement Learning. For example, Zhang et al. authored research entitled "Accelerating the Deep Reinforcement Learning with Neural Network Compression", stating that "The experiments show that our approach NNC-DRL can speed up the whole training process by about 10-20% on Atari 2600 games with little performance loss."

Furthermore, AI researchers are increasingly taking inspiration from biology and the human brain, where efficiency has long been optimised by evolution. Science Daily reported that "The brain's memory abilities inspire AI experts in making neural networks less forgetful." The article states that artificial intelligence (AI) experts at the University of Massachusetts Amherst and the Baylor College of Medicine report that they have successfully addressed what they call a "major, long-standing obstacle to increasing AI capabilities" by drawing inspiration from a human brain memory mechanism known as "replay."

The research referenced above is significant, as the authors argue that they can solve the catastrophic memory loss that has prevented Deep Neural Networks from truly multitasking in a genuinely scaled manner without forgetting how to perform other tasks (including tasks more unrelated to those initially learned).

The research paper by van de Ven et al., published in Nature Communications and entitled "Brain-inspired replay for continual learning with artificial neural networks", states the following:

"Artificial neural networks suffer from catastrophic forgetting. Unlike humans, when these networks are trained on something new, they rapidly forget what was learned before. In the brain, a mechanism thought to be important for protecting memories is the reactivation of neuronal activity patterns representing those memories. In artificial neural networks, such memory replay can be implemented as generative replay, which can successfully and surprisingly efficiently prevent catastrophic forgetting on toy examples even in a class-incremental learning scenario. However, scaling up generative replay to complicated problems with many tasks or complex inputs is challenging. We propose a new, brain-inspired variant of replay in which internal or hidden representations are replayed that are generated by the networks own, context-modulated feedback connections. Our method achieves state-of-the-art performance on challenging continual learning benchmarks (e.g., class-incremental learning on CIFAR-100) without storing data, and it provides a novel model for replay in the brain."

This research may provide groundbreaking results in the future if it can be replicated and scaled, as that would truly solve the catastrophic memory loss issue.

We won't suddenly arrive at AGI, and I am on record as stating that we will not achieve AGI this decade. However, we will move into the era of Broad AI during the 2020s, and we may look back and realise that 2020 was not only the year of the Covid tragedy but also the year in which we took the first concrete steps out of the era of Narrow AI and towards Broad AI, with research breakthroughs in Transformers, Neuro-Symbolic AI, Neural Compression (to reduce the computational cost that limits Deep Learning's scaling in real-world environments), and solving for catastrophic memory loss.

In conclusion, the next few years will be ones of great excitement, with the arrival of AR-enabled glasses with AI on the device, and a number of the research breakthroughs made this year will start to make their way into production environments, resulting in exciting new opportunities.

And the AIoT (AI combined with the IoT), enabled by 5G networks, will change how we live and work.

This technological convergence will drive demand for Broad AI, with innovations in Deep Learning, hybrid AI (Neuro-Symbolic AI and Neuroevolutionary AI), and Neural Compression making Deep Neural Networks less computationally costly and scalable across the Edge (on device).

It is true that we will move away from a number of the current AI models and the manner in which we develop them. A number of AI startups heavily locked into the current way of doing things will fall away, and new ones will take over and scale with newer, more efficient techniques that can spread across broader areas of the economy, beyond social media and ecommerce.

AGI will still take time to arrive, and maybe that is not a bad thing, as I would argue that we are not ready for it as a society, with Tay the chatbot being a good example of what can go wrong. However, the 2020s will be the era of technology convergence between AI, IoT, and 5G, of Neural Compression, and of the rise of Broad AI with hybrid AI techniques and Transformers.

These are the techniques that will be needed to enable autonomous vehicles and other amazing innovations to truly work in the real world.

This moment in time reminds me of 2013/2014, when many in the Machine Learning world were yet to appreciate and understand the impact of AlexNet and the potential rise of Deep Learning in Computer Vision and then NLP. Around 2022/2023, I believe the laws of supply and demand will see these innovations and technology convergence drive the rise of next-generation AI techniques, with intelligent devices around us improving key areas such as healthcare, financial services, retail, smart cities, autonomous networks, and social communications.

Enlightened investors, including venture capitalists seeking to invest in AI going forwards, may wish to focus on the next generation of techniques that may scale across the sectors of the economy. A helpful summary of technology convergence and the opportunity is provided by Sierra Ventures (a Silicon Valley firm that has invested $1.9 billion into tech) partner Ben Yu in "The Convergence of 5G and AI: A Venture Capitalist's View".

Read the rest here:
Driving The Next Generation of Artificial Intelligence (AI) - BBN Times

5 Uses of Artificial Intelligence to Improve Customer Experience Measurement – Small Business Trends

Customer experience plays an important role in the growth of your brand. That's why it's essential not only to offer a great experience but also to understand whether you truly are able to cater to your customers well. That's where you can use artificial intelligence to improve your customer experience measurement.

But why is customer experience so important?

Nearly 84% of consumers say they go out of their way to spend more money on great experiences. So, it's safe to say that a better customer experience translates into higher sales and revenue.

Image via Gladly

But to improve your customer experience, you must first know where you stand. For this, it's important to measure customer experience. Artificial intelligence (AI) plays a major role in automating and speeding up various marketing activities, and it can help improve this process as well.

So, let's take a look at how you can use artificial intelligence to improve your customer experience measurement.

Here are the different ways through which you can use artificial intelligence to improve your customer experience measurement.

To truly get an idea of where you stand in terms of customer experience, it's essential to collect and analyze customer feedback. The idea here is to hear all about your customer experience from the customers themselves.

It's the best way of understanding where you're excelling or lagging in certain aspects of customer experience. Accordingly, you can work out what changes need to be implemented to improve the experience. This, in turn, can help boost the sales of your ecommerce or brick-and-mortar business.

So, how can artificial intelligence help with this?

Collecting customer feedback may be simple. However, analyzing the feedback can take a lot of time and effort, especially if you've got a lot of customers. You'd have to manually go through individual feedback and then analyze that unstructured data.

However, AI can speed up this measurement process. Using text analytics platforms, you can seamlessly analyze large amounts of feedback data from your customers. This quick analysis will help you derive valuable insights that you can leverage to improve your customer experience strategy.
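
As a toy illustration of what such platforms automate, here is a minimal lexicon-based sentiment scorer; real text-analytics products use trained models, and the word lists below are illustrative assumptions:

```python
# Toy lexicon-based sentiment scoring of customer feedback.
POSITIVE = {"great", "love", "fast", "helpful", "easy"}
NEGATIVE = {"slow", "broken", "confusing", "rude", "expensive"}

def sentiment_score(feedback: str) -> float:
    """Score in [-1, 1]: +1 all positive words, -1 all negative."""
    words = [w.strip(".,!?") for w in feedback.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos - neg) / max(pos + neg, 1)

reviews = [
    "Great support, fast and helpful!",
    "Checkout was slow and the app felt broken.",
]
for review in reviews:
    print(f"{sentiment_score(review):+.1f}  {review}")
```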

Another way in which artificial intelligence can help you collect and analyze feedback, and thereby improve your customer experience measurement, is through the use of chatbots and live chat.

Using AI-powered chatbots, you can converse with your customers in real time. Through machine learning and natural language processing, these chatbots can understand the questions posed by your customers and answer them.

What's more?

Apart from chatbots, you should also use live chat platforms to offer customer service in case the chatbots aren't able to answer the questions posed by the customers.

But how does customer experience measurement come into the picture here?

When your customers chat with your chatbot or customer support representatives, they can be asked to rate the interaction. The feedback data collected can be analyzed by artificial intelligence-based tools to help you understand how well you were able to answer their questions.

To understand your customer experience, it's important to get an idea of customers' emotions as well. You need to understand and predict them to find out whether they're satisfied with your brand's services or not.

Until recently, there was no easy way of going about this. You had to rely on customers telling you about their emotions, and such instances, unfortunately, aren't many.

However, with the advent of artificial intelligence, it's possible to detect the emotions of your customers across multiple channels.

For instance, artificial intelligence tools can seamlessly detect customers' emotions based on the messages they've sent or the conversations they've had with your customer support team.

Emotion AI tools can pick up emotional signals by observing the tone and pitch of the customer's voice. They can also analyze the text written by your customers to understand if they're happy, sad, stressed out, angry, etc.

What's more?

Even if you've got videos of your customers, these tools can identify their emotions using body language, changes in facial expressions, etc.

All of this analysis can help you identify how well you're performing when it comes to customer experience.

For instance, Grammarly, the popular writing tool, can recognize the emotions in the text that's written. This helps you better understand the customer experience, and you can accordingly take steps to improve it.

Image via Grammarly

Most call center recordings are converted into transcripts for review at a later stage. However, the one thing that transcripts can't help you identify is the emotions of the customer at each point in the conversation.

You wouldn't know if the customer raised their voice, had an angry tone, felt sad, or was elated by your service. Transcripts won't tell you these things, and when it comes to customer experience, these are all important cues that you must not miss.

All of these cues are only available if you've recorded the customer's call in its audio format. With access to this speech, you can better understand whether your customer experience was positive or negative.

Artificial intelligence can help improve your customer experience measurement in this case too. Using AI-powered speech analysis tools, you can understand the tone of each customer. These tools can also help you pick out other important cues from each call, such as changes in pitch and emotion.

This measurement process is quick, too, as artificial intelligence can go through a large number of calls with ease compared to listening to them manually. All this information is extremely useful for understanding the customer's current situation. Based on that, you can determine the future course of action as well.

One of the toughest tasks you might face as a customer experience professional is measuring the customer experience throughout the sales funnel.

But why is this task difficult?

Customers may go through numerous stages of the sales funnel and may connect with you at various touchpoints. As a result, all the customer data ends up in different silos. These silos act as deterrents to determining the customer experience, as you wouldn't have a unified database for each customer.

Analytics and insights derived from such segregated data might not be very accurate and won't paint the whole picture of your customer experience.

However, customer journey analytics tools based on artificial intelligence can help you change this. They can unify your customer data from the entire customer journey and analyze it. This singular customer journey view will help you get an accurate measurement of the customer experience.
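
Conceptually, such tools join the silos on a shared customer key. A minimal sketch with pandas, where all table and column names are illustrative assumptions:

```python
# Unifying siloed touchpoint data into one customer journey view.
import pandas as pd

web = pd.DataFrame({"customer_id": [1, 2], "page_views": [14, 3]})
support = pd.DataFrame({"customer_id": [1, 2], "tickets": [2, 0], "csat": [4.0, None]})
purchases = pd.DataFrame({"customer_id": [1, 2], "orders": [3, 1], "revenue": [240.0, 59.0]})

# One row per customer, combining every touchpoint for journey analytics.
journey = web.merge(support, on="customer_id").merge(purchases, on="customer_id")
print(journey)
```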

Customer experience plays a pivotal role in the success of your brand, as it influences customer retention. That's why it's essential to measure your customer experience regularly and improve it.

Artificial intelligence can help with this by analyzing customer feedback and deriving insights from it. Also, you can use chatbots and live chat to collect and analyze customer feedback.

What's more?

Tools powered by AI can also recognize customer emotions in text, voice, and video. This can help you understand their experience and improve it. Finally, these tools can also help unify all your customer data from across the journey and analyze it. As a result, you'll be able to get an accurate measurement of your customer experience.

Do you have any questions about the methods of using artificial intelligence to improve customer experience measurement mentioned above? Ask them in the comments.


See the original post:
5 Uses of Artificial Intelligence to Improve Customer Experience Measurement - Small Business Trends