Archive for the ‘Artificial Intelligence’ Category

Zebra Medical Vision Announces Agreement With DePuy Synthes to Deploy Cloud Based Artificial Intelligence Orthopaedic Surgical Planning Tools -…

KIBBUTZ SHEFAYIM, Israel--(BUSINESS WIRE)--Zebra Medical Vision, the deep learning medical imaging analytics company, announces today a global co-development and commercialization agreement with DePuy Synthes* to bring Artificial Intelligence (AI) opportunities to orthopaedics, based on imaging data.

Every year, millions of orthopaedic procedures worldwide use traditional two-dimensional (2D) CT scans or MRI imaging to assist with pre-operative planning. CT scans and MRI imaging can be expensive, and CT scans are associated with more radiation and are uncomfortable for some patients. Zebra-Med's technology uses algorithms to create three-dimensional (3D) models from X-ray images. This technology aims to bring affordable pre-operative surgical planning to surgeons worldwide without the need for traditional MRI or CT-based imaging.

"We are thrilled to start this collaboration and have the opportunity to impact and improve orthopaedic procedures and outcomes in areas including knee, hip, shoulder, trauma, and spine care," says Eyal Gura, Co-Founder and CEO of Zebra Medical Vision. "We share a common vision surrounding the impact we can have on patients' lives through the use of AI, and we are happy to initiate such a meaningful strategic partnership, leveraging the tools and knowledge we have built around bone-health AI over the last five years."

This technology is planned to be introduced as part of DePuy Synthes' VELYS Digital Surgery solutions for pre-operative, operative, and post-operative patient care.

Read more on Zebra-Med's blog: https://zebramedblog.wordpress.com/another-dimension-to-zebras-ai-how-we-impact-the-orthopedic-world

About Zebra Medical Vision

Zebra Medical Vision's imaging analytics platform allows healthcare institutions to identify patients at risk of disease and offer improved, preventative treatment pathways to improve patient care. The company is funded by Khosla Ventures, Marc Benioff, Intermountain Investment Fund, OurCrowd Qure, Aurum, aMoon, Nvidia, Johnson & Johnson Innovation JJDC, Inc. (JJDC) and Dolby Ventures. Zebra Medical Vision has raised $52 million in funding to date, and was named a Fast Company Top-5 AI and Machine Learning company. Zebra-Med is a global leader in FDA-cleared AI products, and is installed in hospitals globally, from Australia to India, Europe to the U.S., and the LATAM region.

*Agreement is between DePuy Ireland Unlimited Company and Zebra Medical Vision.

Read the original post:

Zebra Medical Vision Announces Agreement With DePuy Synthes to Deploy Cloud Based Artificial Intelligence Orthopaedic Surgical Planning Tools -...

Top Artificial Intelligence Books Released In 2019 That You Must Read – Analytics India Magazine

Artificial intelligence saw many breakthroughs in 2019. In fact, one could go as far as to say that it has trickled down into every single facet of modern life. Given its role in our daily lives, it is imperative that everyone understands how it affects us, the changes it brings, the threats it poses, and the possible solutions.

While some people still think AI is only about robots and chatbots, it is important that they learn of the advances in the field. There are many online courses and books on artificial intelligence that give readers a comprehensive understanding, whether they are professionals or AI enthusiasts.

In this article, we have compiled a list of books on artificial intelligence published in 2019 that one can use to learn more about this fascinating technology:

Written by Dr Eric Topol, an American cardiologist, geneticist, and digital medicine researcher, Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again is an Amazon #1 bestseller this year.

This book boldly sets out the potential of AI in healthcare and deep medicine. Topol calls AI the next industrial revolution. The book uses short examples to highlight AI's importance, along with a thorough discussion of how AI is likely to transform the medical industry. Topol believes that AI can not only enhance diagnosis and treatment but also save physicians time on activities like taking notes and reading scans, which will eventually let them spend more time with patients. This is a resourceful book for anyone interested in AI and its impact on healthcare.

Written by Dr Stuart Russell, Human Compatible: AI and the Problem of Control is possibly one of the most important books of this year on AI. The book discusses the threats posed by artificial intelligence and possible solutions. Russell makes use of dry humour so that his book does not read like a dull information digest.

The book is for both the public and AI researchers. Russell does not hammer AI in it; he points out the threats and solutions as someone who feels a sense of responsibility for the changes and revolution his own field is bringing.

This book, The Creativity Code, is written by Marcus du Sautoy, a professor of mathematics at the University of Oxford and a research fellow at the Royal Society.

This book is a fact-packed, funny journey through the world of AI. It questions the present meaning of the word "creativity" and asks how machines might be able to crack the code of human emotion.

The book dances around the concept of using AI assistance in art-making, with the mathematics behind ML and AI as the centre point of its discussion of art.

Janelle Shane's AIweirdness.com is an AI humour blog offering a different, lighter take on AI. In this book, You Look Like a Thing and I Love You, the author uses humorous cartoons and pop-culture illustrations to take a look inside the algorithms that are used in machine learning.

The authors of this book, Gary Marcus, a scientist and the founder and CEO of Robust.AI, and Ernest Davis, a professor of computer science at NYU, explain what AI is, what it is not, and what it could achieve if we worked towards it with more resilience and creativity. Many authors seem to hype up AI, not just its good side but also its bad side. The authors here seem to have found the balance in between.

The book, Rebooting AI: Building Artificial Intelligence We Can Trust, highlights the weaknesses of the current technology, where it is going wrong, and what we should be doing to find solutions. It isn't a book that only researchers can read; it is also for the general public, illustrating many examples and making excellent use of humour wherever needed.

The first edition of this book, written by Alex Castrounis, answers one of the most critical questions of today's age concerning business and AI: "How can I build a successful business by using AI?"

AI for People and Business: A Framework for Better Human Experiences and Business Success is written for anyone interested in making use of AI in their organisation.

The author examines the value of AI and gives solutions for developing an AI strategy that benefits both people and businesses.

This book by Andriy Burkov, The Hundred-Page Machine Learning Book, remains true to its name, managing the seemingly impossible task of bundling all of machine learning into a hundred-page book.

This book provides an in-depth introduction to the field of machine learning, with a smart choice of topics for both theory and practice.

If you are new to the field of machine learning, this book gives you a comprehensive introduction to its vocabulary and terminology.


Excerpt from:

Top Artificial Intelligence Books Released In 2019 That You Must Read - Analytics India Magazine

Why video games and board games aren't a good measure of AI intelligence – The Verge

Measuring the intelligence of AI is one of the trickiest but most important questions in the field of computer science. If you can't understand whether the machine you've built is cleverer today than it was yesterday, how do you know you're making progress?

At first glance, this might seem like a non-issue. "Obviously AI is getting smarter" is one reply. Just look at all the money and talent pouring into the field. Look at the milestones, like beating humans at Go, and the applications that were impossible a decade ago but are commonplace today, like image recognition. How is that not progress?

Another reply is that these achievements aren't really a good gauge of intelligence. Beating humans at chess and Go is impressive, yes, but what does it matter if the smartest computer can be out-strategized in general problem-solving by a toddler or a rat?

This is a criticism put forward by AI researcher François Chollet, a software engineer at Google and a well-known figure in the machine learning community. Chollet is the creator of Keras, a widely used framework for developing neural networks, the backbone of contemporary AI. He's also written numerous textbooks on machine learning and maintains a popular Twitter feed where he shares his opinions on the field.

In a recent paper titled "On the Measure of Intelligence," Chollet laid out an argument that the AI world needs to refocus on what intelligence is and isn't. If researchers want to make progress toward general artificial intelligence, says Chollet, they need to look past popular benchmarks like video games and board games, and start thinking about the skills that actually make humans clever, like our ability to generalize and adapt.

In an email interview with The Verge, Chollet explained his thoughts on this subject, talking through why he believes current achievements in AI have been misrepresented, how we might measure intelligence in the future, and why scary stories about superintelligent AI (as told by Elon Musk and others) have an unwarranted hold on the public's imagination.

This interview has been lightly edited for clarity.

In your paper, you describe two different conceptions of intelligence that have shaped the field of AI. One presents intelligence as the ability to excel in a wide range of tasks, while the other prioritizes adaptability and generalization, which is the ability for AI to respond to novel challenges. Which framework is a bigger influence right now, and what are the consequences of that?

In the first 30 years of the history of the field, the most influential view was the former: intelligence as a set of static programs and explicit knowledge bases. Right now, the pendulum has swung very far in the opposite direction: the dominant way of conceptualizing intelligence in the AI community is the blank slate or, to use a more relevant metaphor, the freshly-initialized deep neural network. Unfortunately, it's a framework that's gone largely unchallenged and even largely unexamined. These questions have a long intellectual history, literally decades, and I don't see much awareness of this history in the field today, perhaps because most people doing deep learning today joined the field after 2016.

It's never a good thing to have such intellectual monopolies, especially as an answer to poorly understood scientific questions. It restricts the set of questions that get asked. It restricts the space of ideas that people pursue. I think researchers are now starting to wake up to that fact.

In your paper, you also make the case that AI needs a better definition of intelligence in order to improve. Right now, you argue, researchers focus on benchmarking performance in static tests like beating video games and board games. Why do you find this measure of intelligence lacking?

The thing is, once you pick a measure, you're going to take whatever shortcut is available to game it. For instance, if you set chess-playing as your measure of intelligence (which we did from the 1970s until the 1990s), you're going to end up with a system that plays chess, and that's it. There's no reason to assume it will be good for anything else at all. You end up with tree search and minimax, and that doesn't teach you anything about human intelligence. Today, pursuing skill at video games like Dota or StarCraft as a proxy for general intelligence falls into the exact same intellectual trap.
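
To make the "tree search and minimax" point concrete, here is the classic minimax recursion in a minimal Python sketch. The `game` interface (`legal_moves`, `apply`, `is_terminal`, `score`) is a hypothetical stand-in for any concrete game implementation; the point is that nothing in this procedure knows, or learns, anything beyond the one game it is handed.

```python
# Plain minimax tree search, the kind of task-specific machinery Chollet
# describes. The `game` object and its methods are hypothetical stand-ins
# for a concrete game (chess, tic-tac-toe, ...).

def minimax(state, depth, maximizing, game):
    """Best achievable evaluation for the player to move, searching `depth` plies."""
    if depth == 0 or game.is_terminal(state):
        return game.score(state)  # static evaluation of the position
    children = (game.apply(state, m) for m in game.legal_moves(state))
    values = (minimax(c, depth - 1, not maximizing, game) for c in children)
    return max(values) if maximizing else min(values)
```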

This is perhaps not obvious because, in humans, skill and intelligence are closely related. The human mind can use its general intelligence to acquire task-specific skills. A human who is really good at chess can be assumed to be pretty intelligent because, implicitly, we know they started from zero and had to use their general intelligence to learn to play chess. They weren't designed to play chess. So we know they could direct this general intelligence to many other tasks and learn to do those tasks similarly efficiently. That's what generality is about.

But a machine has no such constraints. A machine can absolutely be designed to play chess. So the inference we make for humans, "can play chess, therefore must be intelligent," breaks down. Our anthropomorphic assumptions no longer apply. General intelligence can generate task-specific skills, but there is no path in reverse, from task-specific skill to generality. At all. So in machines, skill is entirely orthogonal to intelligence. You can achieve arbitrary skill at arbitrary tasks as long as you can sample infinite data about the task (or spend an infinite amount of engineering resources). And that will still not get you one inch closer to general intelligence.

The key insight is that there is no task where achieving high skill is a sign of intelligence, unless the task is actually a meta-task that involves acquiring new skills over a broad [range] of previously unknown problems. And that's exactly what I propose as a benchmark of intelligence.

If these current benchmarks don't help us develop AI with more generalized, flexible intelligence, why are they so popular?

There's no doubt that the effort to beat human champions at specific well-known video games is primarily driven by the press coverage these projects can generate. If the public wasn't interested in these flashy milestones that are so easy to misrepresent as steps toward superhuman general AI, researchers would be doing something else.

I think it's a bit sad, because research should be about answering open scientific questions, not generating PR. If I set out to "solve" Warcraft III at a superhuman level using deep learning, you can be quite sure that I will get there, as long as I have access to sufficient engineering talent and computing power (which is on the order of tens of millions of dollars for a task like this). But once I'd done it, what would I have learned about intelligence or generalization? Well, nothing. At best, I'd have developed engineering knowledge about scaling up deep learning. So I don't really see it as scientific research, because it doesn't teach us anything we didn't already know. It doesn't answer any open question. If the question was, "Can we play X at a superhuman level?", the answer is definitely, "Yes, as long as you can generate a sufficiently dense sample of training situations and feed them into a sufficiently expressive deep learning model." We've known this for some time. (I actually said as much a while before the Dota 2 and StarCraft II AIs reached champion level.)

What do you think the actual achievements of these projects are? To what extent are their results misunderstood or misrepresented?

One stark misrepresentation I'm seeing is the argument that these high-skill game-playing systems represent "real progress toward AI systems which can handle the complexity and uncertainty of the real world" [as OpenAI claimed in a press release about its Dota 2-playing bot OpenAI Five]. They do not. If they did, it would be an immensely valuable research area, but that is simply not true. Take OpenAI Five, for instance: it wasn't able to handle the complexity of Dota 2 in the first place, because it was trained with 16 characters and could not generalize to the full game, which has over 100 characters. It was trained on the equivalent of 45,000 years of gameplay (note, again, how training data requirements grow combinatorially with task complexity), yet the resulting model proved very brittle: non-champion human players were able to find strategies that reliably beat it within days of the AI being made available for the public to play against.

If you want to one day become able to handle the complexity and uncertainty of the real world, you have to start asking questions like, what is generalization? How do we measure and maximize generalization in learning systems? And that's entirely orthogonal to throwing 10x more data and compute at a big neural network so that it improves its skill by some small percentage.

So what would be a better measure of intelligence for the field to focus on?

In short, we need to stop evaluating skill at tasks that are known beforehand, like chess or Dota or StarCraft, and instead start evaluating skill-acquisition ability. This means only using new tasks that are not known to the system beforehand, measuring the prior knowledge about the task that the system starts with, and measuring the sample efficiency of the system (which is how much data is needed to learn to do the task). The less information (prior knowledge and experience) you require in order to reach a given level of skill, the more intelligent you are. And today's AI systems are really not very intelligent at all.
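
As a rough sketch of what evaluating skill-acquisition ability rather than skill could look like in practice (this is our illustration of the idea, not a benchmark from Chollet's paper, and the `learner` interface is assumed), one could rank systems by the experience they need to reach a fixed skill threshold on a previously unseen task:

```python
# Hypothetical sketch: compare learners by sample efficiency on a new task.
# `learner.reset`, `learner.train_on`, and `learner.evaluate` are assumed
# interfaces, not a real library API.

def samples_to_threshold(learner, task, threshold, budgets):
    """Smallest training budget at which the learner reaches `threshold` skill."""
    for n in sorted(budgets):
        learner.reset()                          # wipe task-specific experience
        learner.train_on(task, n_samples=n)
        if learner.evaluate(task) >= threshold:  # skill level reached with n samples
            return n                             # fewer samples = more intelligent
    return None                                  # threshold never reached
```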

In addition, I think our measure of intelligence should make human-likeness more explicit, because there may be different types of intelligence, and human-like intelligence is what we're really talking about, implicitly, when we talk about general intelligence. And that involves trying to understand what prior knowledge humans are born with. Humans learn incredibly efficiently, requiring only very little experience to acquire new skills, but they don't do it from scratch. They leverage innate prior knowledge, besides a lifetime of accumulated skills and knowledge.

[My recent paper] proposes a new benchmark dataset, ARC, which looks a lot like an IQ test. ARC is a set of reasoning tasks, where each task is explained via a small sequence of demonstrations, typically three, and you should learn to accomplish the task from these few demonstrations. ARC takes the position that every task your system is evaluated on should be brand-new and should only involve knowledge of a kind that fits within human innate knowledge. For instance, it should not feature language. Currently, ARC is totally solvable by humans, without any verbal explanations or prior training, but it is completely unapproachable by any AI technique we've tried so far. That's a big flashing sign that there's something going on there, that we're in need of new ideas.
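
For readers curious about the format: in the public ARC repository (https://github.com/fchollet/ARC), each task is a JSON file with a "train" list of demonstration input/output grid pairs and a "test" list to be solved, each grid being a list of rows of integer colour codes. A minimal harness for checking a solver against one task might look like the sketch below, where `solve` is a placeholder for whatever approach is being evaluated:

```python
import json

# Minimal ARC evaluation sketch. Each task JSON contains "train" and
# "test" lists of {"input": grid, "output": grid} pairs, where a grid is
# a list of rows of integers 0-9. `solve` is a hypothetical solver that
# may only look at the demonstrations and the test input.

def check_task(path, solve):
    with open(path) as f:
        task = json.load(f)
    demos = task["train"]  # typically around three demonstration pairs
    return all(
        solve(demos, pair["input"]) == pair["output"]
        for pair in task["test"]
    )
```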

Do you think the AI world can continue to progress by just throwing more computing power at problems? Some have argued that, historically, this has been the most successful approach to improving performance, while others have suggested that we're soon going to see diminishing returns if we just follow this path.

This is absolutely true if you're working on a specific task. Throwing more training data and compute power at a vertical task will increase performance on that task. But it will gain you about zero incremental understanding of how to achieve generality in artificial intelligence.

If you have a sufficiently large deep learning model, and you train it on a dense sampling of the input-cross-output space for a task, then it will learn to solve the task, whatever that may be: Dota, StarCraft, you name it. It's tremendously valuable. It has almost infinite applications in machine perception problems. The only problem here is that the amount of data you need is a combinatorial function of task complexity, so even slightly complex tasks can become prohibitively expensive.

Take self-driving cars, for instance. Millions upon millions of training situations aren't sufficient for an end-to-end deep learning model to learn to safely drive a car. Which is why, first of all, L5 self-driving isn't quite there yet. And second, the most advanced self-driving systems are primarily symbolic models that use deep learning to interface these manually engineered models with sensor data. If deep learning could generalize, we'd have had L5 self-driving in 2016, and it would have taken the form of a big neural network.

Lastly, given you're talking about the constraints of current AI systems, it seems worth asking about the idea of superintelligence: the fear that an extremely powerful AI could cause extreme harm to humanity in the near future. Do you think such fears are legitimate?

No, I don't believe the superintelligence narrative to be well-founded. We have never created an autonomous intelligent system. There is absolutely no sign that we will be able to create one in the foreseeable future. (This isn't where current AI progress is headed.) And we have absolutely no way to speculate what its characteristics may be if we do end up creating one in the far future. To use an analogy, it's a bit like asking, in the year 1600: "Ballistics has been progressing pretty fast! So, what if we had a cannon that could wipe out an entire city? How do we make sure it would only kill the bad guys?" It's a rather ill-formed question, and debating it in the absence of any knowledge about the system we're talking about amounts, at best, to a philosophical argument.

One thing about these superintelligence fears is that they mask the fact that AI has the potential to be pretty dangerous today. We don't need superintelligence in order for certain AI applications to represent a danger. I've written about the use of AI to implement algorithmic propaganda systems. Others have written about algorithmic bias, the use of AI in weapons systems, or about AI as a tool of totalitarian control.

There's a story about the siege of Constantinople in 1453. While the city was fighting off the Ottoman army, its scholars and rulers were debating what the sex of angels might be. Well, the more energy and attention we spend discussing the sex of angels, or the value alignment of hypothetical superintelligent AIs, the less we have for dealing with the real and pressing issues that AI technology poses today. There's a well-known tech leader who likes to depict superintelligent AI as an existential threat to humanity. Well, while these ideas are grabbing headlines, you're not discussing the ethical questions raised by the deployment of insufficiently accurate self-driving systems on our roads that cause crashes and loss of life.

If one accepts these criticisms, that there is not currently a technical grounding for these fears, why do you think the superintelligence narrative is popular?

Ultimately, I think it's a good story, and people are attracted to good stories. It's not a coincidence that it resembles eschatological religious stories, because religious stories have evolved and been selected over time to powerfully resonate with people and to spread effectively. For the very same reason, you also find this narrative in science fiction movies and novels. The reason why it's used in fiction, the reason why it resembles religious narratives, and the reason why it has been catching on as a way to understand where AI is headed are all the same: it's a good story. And people need stories to make sense of the world. There's far more demand for such stories than demand for understanding the nature of intelligence or understanding what drives technological progress.

Original post:

Why video games and board games aren't a good measure of AI intelligence - The Verge

New Findings Show Artificial Intelligence Software Improves Breast Cancer Detection and Physician Accuracy – P&T Community

CHICAGO, Dec. 19, 2019 /PRNewswire/ -- A New York City-based, large-volume private-practice radiology group conducted a quality assurance review that included an 18-month software evaluation in its breast center, comprised of nine (9) specialist radiologists, using FDA-cleared artificial intelligence software by Koios Medical, Inc. as a second opinion for analyzing and assessing lesions found during breast ultrasound examinations.

Over the evaluation period, radiologists analyzed over 6,000 diagnostic breast ultrasound exams. Radiologists used Koios DS Breast decision support software (Koios Medical, Inc.) to assist in lesion classification and risk assessment. As part of the normal diagnostic workflow, radiologists would activate Koios DS and review the software findings alongside clinical details to formulate the best management plan.

Analysis was then performed comparing the physicians' diagnostic performance to the 18-month period prior to the introduction of the AI-enabled software. Comparing the two periods, physicians recommended biopsy for suspicious lesions at a similar rate (17%) and performed 14% more biopsies, increasing the cancer detection rate (from 8.5 to 11.8 per 1,000 diagnostic exams) while simultaneously experiencing a significant reduction in benign biopsies (i.e., false positives). Noteworthy is the aggregate nature of the findings, since adoption of the software increased gradually over the 18-month evaluation period. Trailing 6-month results indicate a benign biopsy reduction exceeding 20% across the group. Positive predictive value, the percentage of positive findings that turn out to be true positives, improved by over 20%.
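
To make these quantities concrete (the counts below are invented for illustration and are not the practice's data), positive predictive value and cancer detection rate are simple ratios:

```python
# Illustrative arithmetic only; the counts are hypothetical.
# PPV = cancers found / biopsies performed; cancer detection rate
# is conventionally quoted per 1,000 diagnostic exams.

exams = 6000      # diagnostic breast ultrasound exams
biopsies = 500    # lesions called suspicious and biopsied
cancers = 71      # biopsies that found cancer (true positives)

ppv = cancers / biopsies                  # share of biopsies finding cancer
cdr_per_1000 = cancers / exams * 1000     # cancers found per 1,000 exams
benign = (biopsies - cancers) / biopsies  # benign (false-positive) biopsy share

print(f"PPV {ppv:.1%}, detection rate {cdr_per_1000:.1f}/1,000, benign share {benign:.1%}")
```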

"Physicians were skeptical in the beginning that software could help them given their years of training and specialization focusing on breast radiology. With experience using Koios software, however, over time and seeing the preliminary analysis they came to realize that the Koios AI software was gradually impacting patient care in a very positive way.Initially, radiologists completed internal studies that verified Koios software's accuracy, and discovered the larger impact happens gradually over time. In looking at the statistics, physicians were pleasantly surprised to see the benefit was even greater than expected. The software has the potential to make a profound impact on overall quality," says Vice President of Activations Amy Fowler.

Koios DS Breast 2.0 is artificial intelligence software built on a dataset of over 450,000 breast ultrasound images with known outcomes. It is intended to assist physicians analyzing breast ultrasound images by generating a machine-learning-based probability of malignancy. This probability is then checked against and aligned to the lesion's assigned BI-RADS category, the scale physicians use to recommend care pathways.
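
As an illustration of what aligning a probability to a BI-RADS category can mean, the sketch below maps a model's output to the ACR's published likelihood-of-malignancy ranges (e.g., category 3 at 2% or less, category 5 above 95%) and flags disagreement with the radiologist's call. Koios's actual alignment logic is proprietary; this mapping is purely hypothetical.

```python
# Hypothetical sketch: check whether a model's probability of malignancy
# is consistent with the radiologist's assigned BI-RADS category, using
# the ACR's published likelihood ranges. Not Koios's actual logic.

def compatible_categories(p):
    """BI-RADS categories consistent with malignancy probability `p`."""
    if p <= 0.02:
        return {"2", "3"}          # benign / probably benign (<= 2%)
    if p <= 0.95:
        return {"4A", "4B", "4C"}  # suspicious (2% to 95%)
    return {"5"}                   # highly suggestive (> 95%)

def second_opinion_agrees(p_model, assigned):
    """True when the assigned category is consistent with the model output."""
    return assigned in compatible_categories(p_model)
```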

"We are seeing the promise of machine learning as a physician's assistant coming to fruition. This will undoubtedly improve quality, outcomes, and patient experiencesand ultimately save lives. Koios DS Breast 2.0 is proving this within several physician groups across the US," says company CFO Graham Anderson.

Koios DS Breast 2.0 can be used in conjunction with, and integrated directly into, most major viewing workstation platforms, and is directly available on the LOGIQ E10, GE Healthcare's next-generation digital ultrasound system that integrates artificial intelligence, cloud connectivity, and advanced algorithms. Results generated by the artificial intelligence software can be exported directly into a patient's record. Koios Medical continues to experiment with thyroid ultrasound image data and expects to add to its offering in the next year.

"We could not be more encouraged by the results these physicians are seeing. All our prior testing on historical images have consistently demonstrated high levels of system accuracy. Now, and for the first time ever, physicians using AI software as a second opinion with patients in real-time, within their practice, are delivering on the promise to measurably elevate quality of care. Catching more cancers earlier while reducing avoidable procedures and improving patient experiences is fast becoming a reality," says Koios Medical CEO Chad McClennan.

Discussing future plans during the recent Radiological Society of North America (RSNA) annual meeting in Chicago, McClennan shared, "Several major academic medical centers and community hospitals are utilizing our software and conducting studies into the quality impact for publication. We expect those results to mimic these early clinical findings and further validate the experience of our physician customers, both in New York City and across the country, and most importantly, the positive patient impact."

About Koios Medical:

Koios Medical develops medical software to assist physicians interpreting ultrasound images, applying deep machine learning methods to the process of reaching an accurate diagnosis. The FDA-cleared Koios DS platform uses advanced AI algorithms to assist in the early detection of disease while reducing recommendations for biopsy of benign tissue. Patented technology saves physicians time, helps improve patient outcomes, and reduces healthcare costs. Koios Medical is presently focused on the breast and thyroid cancer diagnosis assistance markets. Women with dense breast tissue (over 40% of women in the US) often require an alternative to mammography for diagnosis. Ultrasound is a widely available and effective alternative to mammography with no radiation, and is standard of care for breast cancer diagnosis. To learn more, please contact us at info@koiosmedical.com or (732) 529-5755.

Learn more about Koios at: koiosmedical.com

View original content to download multimedia: http://www.prnewswire.com/news-releases/new-findings-show-artificial-intelligence-software-improves-breast-cancer-detection-and-physician-accuracy-300978087.html

SOURCE Koios Medical

Continued here:

New Findings Show Artificial Intelligence Software Improves Breast Cancer Detection and Physician Accuracy - P&T Community

How Internet of Things and Artificial Intelligence pave the way to climate neutrality – EURACTIV

Calls for action on the climate emergency have reached a crescendo with the COP25 in Madrid. It is good to see the new Commission reclaiming the EU's leadership in climate technology with the Green Deal presented this Wednesday. But for a faster energy transition, it is not enough just to have more renewables, writes Hanno Schoklitsch.

Hanno Schoklitsch is the CEO and founder of Kaiserwetter Energy Asset Management.

The Commission's communication makes the right points when it promises to accelerate the energy transition and clearly states that Artificial Intelligence, the Internet of Things and Cloud Computing can have an important impact on tackling environmental challenges. However, the specific impact on the energy transition is ignored.

For an accelerated energy transition, more renewables alone are not enough. Germany, for example, has an installed renewable capacity of almost 120 gigawatts, whilst peak demand is never higher than 75 gigawatts.

Nevertheless, Germany is far behind its climate targets. You see: we need more efficiency and accuracy in the energy transition. This is above all a data problem, but it is a problem that is easily resolvable by innovative technology.

The future of energy, driven by IoT and AI

To understand the whole context, we have to see that the future of energy will be marked by the radical decentralization of energy supply, including so-called flexibility options like storage, load management, power-to-heat or power-to-gas.

Virtual power plants will assume a central role. All those technologies will help to realize a demand-side economic approach, meaning that the power supply follows the energy demand.

And for this approach, the Internet of Things (IoT) combined with Artificial Intelligence (AI: machine learning, deep learning) is key. They will help to optimize the match between regional generation and regional demand, something that is unthinkable without advanced data intelligence.

For more than a century, we have lived in a baseload world, which means that a few central, megawatt-scale power plants run the whole year, more or less independently of actual demand. The unintelligent, inefficient usage of dirty energy resources is doubtlessly the main cause of the climate crisis.

Therefore, the energy transition must be seen as a shift towards renewables and energy intelligence. To fulfil the Paris goals, we need a faster energy transition, for sure, but above all, we need a more intelligent energy system.

The Energy Cloud for Nations: our approach to attaining energy intelligence

While most of the energy value chain will be organised in a decentralised way, data collection and analytics must be organised centrally. There are solutions providing national and international governments and authorities with detailed insights into their energy systems based on real-time production data.

Planning of new capacities, including renewable generation, storage, grid expansion and load shifting, gains a new, unprecedented accuracy. Speeding up the energy transition without the risk of false decision-making and failed investments becomes possible.

IoT and AI can help governments and authorities to cope with the increasing complexity of the energy transition, an important point especially for countries that aspire to a pioneering role in climate policy but fear the energy transition's ramifications.

Attracting and activating the needed investment capital is one of the major challenges, and risk mitigation and investment certainty will need to be considered key. IoT and AI can make a crucial difference.

The Green Revolution: also a digitisation revolution

The combination of IoT and AI will be a key driver of a successful, risk-minimized shift to a green economy in general. Inefficient usage of resources was characteristic of the 19th and 20th centuries.

Digitisation will make it easy to open a new economic mode characterized by an efficient, spatially and temporally accurate match between supply and demand. The energy sector will be the front-runner, followed by other sectors that use critical resources, such as water, agriculture, transportation and so on.

It is based on that reasoning that I am convinced that IoT and AI can make a major contribution to securing the planet for generations to come.

Link:

How Internet of Things and Artificial Intelligence pave the way to climate neutrality - EURACTIV