Archive for the ‘Artificial Intelligence’ Category

Sanofi invests $180 million equity in Owkin’s artificial intelligence and federated learning to advance oncology pipeline – GlobeNewswire

Sanofi invests $180 million equity in Owkin's artificial intelligence and federated learning to advance oncology pipeline

PARIS, November 18, 2021 – Sanofi announced today an equity investment of $180 million and a new strategic collaboration with Owkin comprising discovery and development programmes in four exclusive types of cancer, with a total payment of $90 million for three years plus additional research milestone-based payments. Owkin, an artificial intelligence (AI) and precision medicine company, builds best-in-class predictive biomedical AI models and robust data sets. With the ambition to optimize clinical trial design and detect predictive biomarkers for diseases and treatment outcomes, this collaboration will support Sanofi's growing oncology portfolio in core areas such as lung cancer, breast cancer and multiple myeloma.

To accelerate medical research with AI in a privacy-preserving way, Owkin has assembled a global research network powered by federated learning, which allows data scientists to securely connect to decentralized, multi-party data sets and train AI models without having to pool data. This approach will complement Sanofi's emerging strength in oncology, as the company's scientists apply cutting-edge technology platforms to design potentially life-transforming medicines for cancer patients worldwide.

"Owkins unique methodology, which applies AI on patient data from partnerships with multiple academic medical centers, supports our ambition to leverage data in innovative ways in R&D, said Arnaud Robert, Executive Vice President, Chief Digital Officer, Sanofi. We are striving to advance precision medicine to the next level and to discover innovative treatment methods with the greatest benefits for patients.

Sanofi will leverage the comprehensive Owkin Platform to find new biomarkers and therapeutic targets, build prognostic models, and predict response to treatment from multimodal patient data. Sanofi's investment will support Owkin's development and its goal of growing the world's leading histology and genomic cancer database from top oncology centers.

"Owkin's mission is to improve patients' lives by using our platform to discover and develop the right treatment for every patient," said Thomas Clozel, M.D., Co-Founder and CEO at Owkin. "We believe that the future of precision medicine lies in technologies that can unlock insights from the vast amount of patient data in hospitals and research centers in a privacy-preserving and secure way. This landmark partnership with Sanofi will see federated learning used to create research collaborations at a truly unprecedented scale. The future of AI to transform how we develop treatments is incredibly bright, and we are proud to partner with Sanofi on this mission."

This collaboration agreement will allow Sanofi to work closely with Owkin in identifying new oncology treatments across four cancers.

"We look forward to working with our colleagues at Owkin to analyze data from hundreds of thousands of patients," said John Reed, M.D., Ph.D., Global Head of Research and Development, Sanofi. "Sanofi's investment in the company includes a three-year agreement that will help discover and develop new treatments for non-small cell lung cancer, triple negative breast cancer, mesothelioma and multiple myeloma. This partnership will help accelerate our ambitious oncology program as we advance a rich pipeline of medicines to address unmet patient needs."

About Owkin

Owkin is a French-American startup that specializes in AI and federated learning for medical research. It was co-founded in 2016 by Dr Thomas Clozel, M.D., a clinical research doctor and former assistant professor in clinical hematology, and Dr Gilles Wainrib, Ph.D., a pioneer in the field of artificial intelligence in biology. Owkin has recently published groundbreaking research at the frontier of AI and medicine in Nature Medicine, Nature Communications and Hepatology. The Owkin Platform connects life science companies with world-class academic researchers and hospitals to share deep medical insights for drug discovery and development. Using federated learning and breakthrough collaborative AI technology, Owkin enables its partners to unlock siloed datasets while protecting patient privacy and securing proprietary data. Through sharing high-value insights, the company powers unprecedented collaboration to improve patient outcomes. Owkin works with the most prominent cancer centers and pharmaceutical companies in Europe and the US. Key achievements to date include HealthChain and MELLODDY, two Owkin-led federated learning consortia fuelling unprecedented collaboration in academic research and drug discovery, respectively. For more information, please visit Owkin.com and follow @OWKINscience on Twitter.

About Sanofi
Sanofi is dedicated to supporting people through their health challenges. We are a global biopharmaceutical company focused on human health. We prevent illness with vaccines and provide innovative treatments to fight pain and ease suffering. We stand by the few who suffer from rare diseases and the millions with long-term chronic conditions. With more than 100,000 people in 100 countries, Sanofi is transforming scientific innovation into healthcare solutions around the globe.

Media Relations Contacts
Sally Bain
Tel.: +1 (781) 264-1091
Sally.Bain@sanofi.com

Nicolas Obrist
Tel.: +33 6 77 21 27 55
Nicolas.Obrist@sanofi.com

Investor Relations Contacts Paris
Eva Schaefer-Jansen
Arnaud Delepine
Nathalie Pham

Investor Relations Contacts North America
Felix Lauscher

Tel.: +33 (0)1 53 77 45 45
investor.relations@sanofi.com
https://www.sanofi.com/en/investors/contact

Forward-Looking Statements
This press release contains forward-looking statements as defined in the Private Securities Litigation Reform Act of 1995, as amended. Forward-looking statements are statements that are not historical facts. These statements include projections and estimates and their underlying assumptions, statements regarding plans, objectives, intentions and expectations with respect to future financial results, events, operations, services, product development and potential, and statements regarding future performance. Forward-looking statements are generally identified by the words "expects", "anticipates", "believes", "intends", "estimates", "plans" and similar expressions. Although Sanofi's management believes that the expectations reflected in such forward-looking statements are reasonable, investors are cautioned that forward-looking information and statements are subject to various risks and uncertainties, many of which are difficult to predict and generally beyond the control of Sanofi, that could cause actual results and developments to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. These risks and uncertainties include, among other things, the uncertainties inherent in research and development, future clinical data and analysis, including post-marketing, decisions by regulatory authorities, such as the FDA or the EMA, regarding whether and when to approve any drug, device or biological application that may be filed for any such product candidates as well as their decisions regarding labelling and other matters that could affect the availability or commercial potential of such product candidates, the fact that product candidates if approved may not be commercially successful, the future approval and commercial success of therapeutic alternatives, Sanofi's ability to benefit from external growth opportunities, to complete related transactions and/or obtain regulatory clearances, risks associated with intellectual property and any related pending or future litigation and the ultimate outcome of such litigation, trends in exchange rates and prevailing interest rates, volatile economic and market conditions, cost containment initiatives and subsequent changes thereto, and the impact that COVID-19 will have on us, our customers, suppliers, vendors, and other business partners, and the financial condition of any one of them, as well as on our employees and on the global economy as a whole. Any material effect of COVID-19 on any of the foregoing could also adversely impact us. This situation is changing rapidly and additional impacts may arise of which we are not currently aware and may exacerbate other previously identified risks. The risks and uncertainties also include the uncertainties discussed or identified in the public filings with the SEC and the AMF made by Sanofi, including those listed under "Risk Factors" and "Cautionary Statement Regarding Forward-Looking Statements" in Sanofi's annual report on Form 20-F for the year ended December 31, 2020. Other than as required by applicable law, Sanofi does not undertake any obligation to update or revise any forward-looking information or statements.

Read the original here:
Sanofi invests $180 million equity in Owkin's artificial intelligence and federated learning to advance oncology pipeline - GlobeNewswire

Filings buzz in fashion and accessories: 48% decrease in artificial intelligence (AI) mentions in Q2 of 2021 – just-style.com

In total, the frequency of sentences related to artificial intelligence between July 2020 and June 2021 was 31% lower than in 2016, when GlobalData, from whom our data for this article is taken, first began to track the key issues referred to in company filings.

When fashion and accessories companies publish annual and quarterly reports, ESG reports and other filings, GlobalData analyses the text and identifies individual sentences that relate to disruptive forces facing companies in the coming years. Artificial intelligence is one of these topics - companies that excel and invest in these areas are thought to be better prepared for the future business landscape and better equipped to survive unforeseen challenges.

To assess whether artificial intelligence is featuring more in the summaries and strategies of fashion and accessories companies, two measures were calculated. Firstly, we looked at the percentage of companies which have mentioned artificial intelligence at least once in filings during the past twelve months - this was 74% compared to 68% in 2016. Secondly, we calculated the percentage of total analysed sentences that referred to artificial intelligence.
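As a rough sketch of how these two measures could be reproduced, the Python below computes both from a hypothetical set of filing records; the company names, counts and field names are invented for illustration and this is not GlobalData's actual methodology.

```python
# Illustrative only: invented data, not GlobalData's methodology.
# Each record holds the total sentences analysed in a company's filings
# over the period, and how many of them mention artificial intelligence.
filings = [
    {"company": "Company A", "sentences": 2590, "ai_sentences": 4},
    {"company": "Company B", "sentences": 1800, "ai_sentences": 0},
    {"company": "Company C", "sentences": 3200, "ai_sentences": 22},
]

# Measure 1: share of companies mentioning AI at least once.
companies_mentioning = sum(1 for f in filings if f["ai_sentences"] > 0)
pct_companies = 100 * companies_mentioning / len(filings)

# Measure 2: share of all analysed sentences that refer to AI.
total_sentences = sum(f["sentences"] for f in filings)
total_ai_sentences = sum(f["ai_sentences"] for f in filings)
pct_sentences = 100 * total_ai_sentences / total_sentences

print(f"Companies mentioning AI: {pct_companies:.0f}%")
print(f"AI-related sentences:    {pct_sentences:.2f}%")
```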

Of the 10 biggest employers in the fashion industry, Wacoal Holdings Corp was the company which referred to artificial intelligence the most between July 2020 and June 2021. GlobalData identified 22 artificial intelligence-related sentences in the Japan-based company's filings - 1% of all sentences. Hermes International SA mentioned artificial intelligence the second most - the issue was referred to in 0.1% of sentences in the company's filings. Other top employers with high artificial intelligence mentions included Yue Yuen Industrial (Holdings) Ltd, Pou Chen Corp and Christian Dior SE.

Across all fashion and accessories companies the filing published in the second quarter of 2021 which exhibited the greatest focus on artificial intelligence came from Pou Chen Corp. Of the document's 2,590 sentences, four (0.2%) referred to artificial intelligence.

This analysis provides an approximate indication of which companies are focusing on artificial intelligence and how important the issue is considered within the fashion industry, but it also has limitations and should be interpreted carefully. For example, a company mentioning artificial intelligence more regularly is not necessarily proof that it is utilising new techniques or prioritising the issue, nor does it indicate whether the company's ventures into artificial intelligence have been successes or failures.

In the last quarter, fashion and accessories companies based in Western Europe were most likely to mention artificial intelligence with 0.06% of sentences in company filings referring to the issue. In contrast, companies with their headquarters in the United States mentioned artificial intelligence in just 0.05% of sentences.

See the rest here:
Filings buzz in fashion and accessories: 48% decrease in artificial intelligence (AI) mentions in Q2 of 2021 - just-style.com

Eyes of the City: Visions of Architecture After Artificial Intelligence – ArchDaily

Eyes of the City: Visions of Architecture After Artificial Intelligence


This book tells the story of Eyes of the City, an international exhibition on technology and urbanism held in Shenzhen during the winter of 2019 and 2020, with a curation process that unfolded between summer 2018 and spring 2020. Conceived as a cultural event exploring future scenarios in architecture and design, Eyes of the City found itself in an extraordinary, if unstable, position, enmeshed within a series of powerfully contingent events (the political turmoil in Hong Kong, the first outbreak of COVID-19 in China) that impacted not only the scope of the project, but also the global debate around society and urban space.

Eyes of the City was one of the two main sections of the eighth edition of the Shenzhen Bi-City Biennale of Urbanism\Architecture (UABB), titled "Urban Interactions". Jointly curated by CRA-Carlo Ratti Associati, Politecnico di Torino and South China University of Technology, it focused on the various relationships between the built environment and increasingly pervasive digital technologies (from artificial intelligence to facial recognition, drones to self-driving vehicles) in a city that is one of the world's leading centers of the Fourth Industrial Revolution. [1]

The topic of the exhibition was decided well before the two events mentioned above made it an especially sensitive one for a Chinese, as well as an international, audience. The Biennale opened its doors in December 2019, just after the months-long protests in Hong Kong had reached their climax and the discussion on the role of surveillance systems embedded in physical space was at its most controversial. [2] In addition, the location the UABB organizers had chosen for the Biennale also caused controversy. The exhibition venue was at the heart of Shenzhen's Central Business District, in the hall of Futian Station, one of the largest infrastructure spaces in Asia as well as a multi-modal hub connecting the metropolis's metro system with high-speed trains capable of reaching Hong Kong in about ten minutes.

The agitations occurring on the south side of the border never spilled over into the first outpost of Mainland China. Nevertheless, as the curation process progressed and the opening day approached, the climate grew more tense. In those weeks, it was enough for an exhibitor to merely include as part of his/her proposal a drawing of people on the street standing under umbrellas to prompt heated reactions, with the image reminding visitors of the 2014 pro-democracy movement's symbol. Immediately prior to the opening, the station's police fenced off the Biennale venue, instituting check-points for visitors (fortunately, this provision lasted only two weeks before people were permitted again to roam freely inside the station). Despite these contingencies, Eyes of the City managed to offer what a Reuters journalist defined as "a rare public space for reflection on increasingly pervasive surveillance by tech companies and the government". [3]

Then, in the second half of January 2020, what began as a local sickness in the city of Wuhan [4], 1,000 kilometers north of Shenzhen, spread across the country and beyond, rapidly becoming a global pandemic. Trains between Futian and Hong Kong were discontinued [5] and the Biennale venue was shut, while in a matter of weeks the role of emerging technologies in regulating and facilitating people's work and social lives became one of the most-discussed topics worldwide, after the grim tally of infections and deaths. In the design field, COVID-19 was seen as exposing and amplifying, on a transcontinental scale, trajectories of change that were already underway.

In an unforeseeable fashion, the events that unfolded in southern China between late 2019 and early 2020 made the question of the "city with eyes" even more timely and pressing. In the midst of these events, the exhibition had to reinvent itself, experimenting with its form and content in order to continue carrying out its program and contribute to the growing debate. A product of this context, this book is the result of similar processes of continuous adjustment, reflection-in-action, and exchange.

The book challenges the traditional notion of the exhibition catalog, crossing the three temporal and conceptual dimensions that were also tackled by the exhibition as a whole. The book is composed of three parts, which loosely represent the different laboratories of the exhibition: the curatorial work that preceded it, the open debate that accompanied it, and the content that made it relevant. Overall, the book adopts Eyes of the City as a trans-scalar and multidisciplinary interpretative key for rethinking the city as a complex entanglement of relationships.

The first part expands on curatorial practices and reflects on the exhibition as an incubator of ideas. The opening essay is written by the exhibition's chief curator Carlo Ratti and academic curators Michele Bonino (Politecnico di Torino) and Yimin Sun (South China University of Technology): it positions Eyes of the City as an urgent urban category and proposes a legacy for the show that reframes the role of architecture biennales. The second essay is written by the exhibition's executive curators: it reconstructs visually the exhibition's design process and its materialization of our open-curatorship approach.

The second part of the book expands on a discussion that accompanied the entire curatorial process from spring 2019 to summer 2020, through a rubric on ArchDaily. Dozens of designers, writers, and philosophers, as foundational contributors, were asked to respond to the curatorial statement of Eyes of the City. The book contains a selection of these responses, covering topics as diverse as the identity of the eyes of the city and the aesthetic regimes behind them (Antoine Picon and Jian Liu), the evolution of the concept of urban anonymity (Yung-Ho Chang and Deyan Sudjic), the role of the natural world in the technologically enhanced city (Jeanne Gang), and advances in design practices that lie between robotics and archivization (Albena Yaneva and Philip Yuan).

The third part unpacks the content of the exhibition through eight essays, corresponding to the sections of the exhibition, written by researchers who were part of the curatorial team. These essays position the installations within a wider landscape of intra- and inter-disciplinary debate through an outward movement from the laboratories of the exhibition to possible future scenarios.

Eyes of the City has striven to broaden discussion and reflection on possible future urban spaces as well as on the notion of the architectural biennale itself. The curatorial line adopted throughout the eighteen-month-long process (an entanglement of online and on-site interactions, extensively leaning on academic research) configured the exhibition as an open system; that is, a platform of exchange independent of any aprioristic theoretical direction. The outbreak of COVID-19 inevitably impacted the material scale of the project. At the same time, it underlined the relevance of its immaterial legacy. Eyes of the City progressively re-invented itself in a virtual dimension, experimenting with diverse tactics to make its cultural program accessible. In doing so, it spawned a set of digital and physical documents, strategies and traces that address some of the many open issues the city with eyes will face in the future. This book aims at a first systematization of this heterogeneous legacy.

Eyes of the City: Visions of Architecture After Artificial Intelligence

Bibliography

AUTHORS' BIOS:

VALERIA FEDERIGHI is an architect and assistant professor at Politecnico di Torino, Italy. She received an MArch and a Ph.D. from the same university, and a Master of Science in Design Research from the University of Michigan. She is on the editorial board of the journal Ardeth (Architectural Design Theory) and she is part of the China Room research group. Her main publication to date is the book The Informal Stance: Representations of Architectural Design and Informal Settlements (Applied Research Design, ORO Editions, 2018). She was Head Curator of Events and Editorial for the Eyes of the City exhibition.

MONICA NASO is an architect and a Ph.D. candidate in Architecture. History and Project at Politecnico di Torino. She received an MArch with honors from the same university and has gained professional experience in Paris and Turin. As a member of the China Room research group and of the South China-Torino Collaboration Lab, she takes part in international and interdisciplinary research and design projects, and she was among the curators of the Italian Design Pavilion at the Shenzhen Design Week 2018. She was Head Curator of Exhibition and On-site Coordination for the Eyes of the City exhibition.

DANIELE BELLERI is a Partner at the design and innovation practice CRA-Carlo Ratti Associati, where he manages all curatorial, editorial, and communication projects of the office. He has a background in contemporary history, urban studies, and political science, and spent a period as a researcher at Moscow's Strelka Institute for Media, Architecture, and Design. Before joining CRA, he ran a London-based strategic design agency advising cultural organizations in Europe and Asia, and worked as an independent journalist writing on design and urban issues in international publications. He was one of the Executive Curators of the Eyes of the City exhibition. Currently, he is leading the development of CRA's Urban Study for Manifesta 14 Prishtina.

See original here:
Eyes of the City: Visions of Architecture After Artificial Intelligence - ArchDaily

Does artificial intelligence for IT operations pay off? – IT World Canada

For overwhelmed IT teams, AIOps holds the promise of automatically heading off potential business-impacting outages. But some IT leaders are skeptical about whether it can really deliver results.

Rodrigo de la Parra, AIOps Domain Leader at IBM Automation, addressed that skepticism at a recent CanadianCIO virtual roundtable. "It's more than a buzzword," said de la Parra. "AIOps takes IT to a more software-driven, agile approach."

AIOps is the application of artificial intelligence to enhance IT operations, explained de la Parra. It spots issues by using machine learning to analyze huge amounts of data generated by tools across an organization's infrastructure. Automation and natural language processing can be leveraged to help fix problems in real time.

"It's not a product or a single solution," said de la Parra. "It's a journey." To unlock the value, he said it is essential to align AIOps to support business needs for improved efficiency and customer service.

De la Parra distinguished between what he referred to as domain-specific and domain-agnostic tools. He noted that the domain-specific tools had great value within their specific silo. But the real value, de la Parra said, comes from adding a domain-agnostic approach because it can take feeds from all the tools running in silos and produce a single data source. "This becomes the single source of truth for the analytics and to provide evidence on the root cause to the stakeholders," said de la Parra.

Successful implementation starts with an operational assessment to identify current problems related to the organization's business needs. From that, key performance indicators (KPIs) should be established to measure progress. Benchmarking where you are today, looking for real problems and developing measurable KPIs are at the heart of finding and proving the value of AIOps.

For example, de la Parra suggested that organizations could examine their efficiency by tracking the volume of major incidents relative to their applications, or the mean time to detect, acknowledge and resolve incidents. Value could be measured by looking at how much manual work is eliminated, or reductions in the number of issues reported by users.
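To make these KPIs concrete, here is a minimal sketch, assuming a simple incident log with invented timestamps and field names (it is not tied to any particular AIOps product), that computes mean time to detect, acknowledge and resolve in minutes.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident log; timestamps and field names are invented.
incidents = [
    {"occurred": "2021-06-01 10:00", "detected": "2021-06-01 10:04",
     "acknowledged": "2021-06-01 10:10", "resolved": "2021-06-01 11:30"},
    {"occurred": "2021-06-03 02:15", "detected": "2021-06-03 02:16",
     "acknowledged": "2021-06-03 02:40", "resolved": "2021-06-03 03:05"},
]

def minutes_between(start: str, end: str) -> float:
    """Elapsed minutes between two 'YYYY-MM-DD HH:MM' timestamps."""
    fmt = "%Y-%m-%d %H:%M"
    delta = datetime.strptime(end, fmt) - datetime.strptime(start, fmt)
    return delta.total_seconds() / 60

# Mean time to detect, acknowledge and resolve, averaged over incidents.
mttd = mean(minutes_between(i["occurred"], i["detected"]) for i in incidents)
mtta = mean(minutes_between(i["occurred"], i["acknowledged"]) for i in incidents)
mttr = mean(minutes_between(i["occurred"], i["resolved"]) for i in incidents)

print(f"MTTD: {mttd:.1f} min, MTTA: {mtta:.1f} min, MTTR: {mttr:.1f} min")
```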

One participant questioned how long it could take to set up the platform. According to de la Parra, this can be completed within a few weeks in many cases. He recommended starting with a pilot of manageable size to get some meaningful results quickly. Once baseline data is fed into the model, it will start detecting deviations in real time. In addition, de la Parra noted that the IBM Watson AIOps solution comes with pre-defined algorithms that produce models to accelerate the implementation and the return on investment (ROI). "This approach removes the need for data scientists to normalize data, build a data lake, create models, and integrate interfaces to collaborate with the solution, such as ChatOps," he said.
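The "learn a baseline, then flag deviations" behaviour described here can be illustrated generically. The sketch below is a simple z-score check over invented response-time readings; it makes no claim about how IBM Watson AIOps actually builds its models.

```python
from statistics import mean, stdev

def detect_deviations(baseline, live, threshold=3.0):
    """Flag live readings that deviate from the baseline by more than
    `threshold` standard deviations (a simple z-score test)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return [(i, x) for i, x in enumerate(live) if abs(x - mu) > threshold * sigma]

# Hypothetical response-time readings in milliseconds.
baseline_ms = [120, 118, 125, 130, 122, 119, 124, 121, 127, 123]
live_ms = [121, 126, 410, 119, 450]   # two obvious spikes

for index, value in detect_deviations(baseline_ms, live_ms):
    print(f"Possible anomaly at reading {index}: {value} ms")
```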

Despite the discussion, it was clear that many of the participants remained skeptical about whether AIOps can produce a measurable return on investment. As well, there were questions about the trustworthiness of the data and whether domain-specific tools, such as those that monitor security, are sufficient.

The main advantage of domain-agnostic AIOps over domain-specific tools is that it provides complete visibility, said de la Parra. "This is what makes it trustworthy AI," he said. Decisions are driven by evidence from analyzing different data sources, grouping entities, and localizing issues visualized in topology views to provide context, probable cause and next best action to resolve incidents. This is all done within the confines of policies and compliance requirements.

"It's understandable to have skepticism over the effectiveness of AIOps given a common preconception around biased AI in general and the effort to implement solid AI models," said de la Parra. "However, when we talk about AIOps at IBM, we are referring to a specific set of capabilities that provide concrete models to support log anomaly detection, blast radius, seasonal event grouping and next best action, among others."

Another concern raised by the group related to the issue of false positives on potential incidents. De la Parra noted that AIOps can analyze whether an issue is having an impact on business systems. If there is no impact, it does not send alerts. "Reducing the noise is critical to allow staff to spend time on higher-value tasks," said de la Parra. A 2021 study from Forrester analyzed the total economic impact of IBM Watson AIOps. It showed a 50 per cent reduction in MTTR (Mean Time to Resolve) and 80 per cent less time spent remediating false-positive incidents, leading to $623K in savings and other benefits, such as proactive incident avoidance.

According to de la Parra, AIOps results in better overall IT service management. Not only does it reduce response time and downtime, it can also be used to look at the appropriate resource allocation for workloads in the cloud.

"Organizations already have the data," said de la Parra. "AIOps enables the IT team to be more proactive and to become a trusted partner that helps drive business forward."

See more here:
Does artificial intelligence for IT operations pay off? - IT World Canada

How Artificial Intelligence Will Impact Your Daily Life in the 2020s – BBN Times

Artificial intelligence (AI) powers 5G, blockchain, the internet of things, quantum computing and self-driving cars.

Source: The Scientist Magazine

Artificial intelligence deals with the area of developing computing systems which are capable of performing tasks that humans are very good at, for example recognising objects, recognising and making sense of speech, and decision making in a constrained environment.

Machine Learning is defined as the field of AI that applies statistical methods to enable computer systems to learn from the data towards an end goal. The term was introduced by Arthur Samuel in 1959.

Neural Networks are biologically inspired networks that extract abstract features from the data in a hierarchical fashion.

Deep Learning refers to the field of Neural Networks with several hidden layers. Such a Neural Network is often referred to as a Deep Neural Network.

I will refer to AI in this article as covering the spectrum of Machine learning and Deep Learning as well as the classical AI techniques such as Logic and Search algorithms.
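To ground these definitions, here is a minimal sketch of a Deep Neural Network in the sense just described (several hidden layers), written in Python with NumPy; the layer sizes and input are invented for illustration, and training (for example by gradient descent) is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# A "deep" network in the sense above: an input layer, several hidden
# layers, and an output layer. Layer sizes here are arbitrary.
layer_sizes = [8, 16, 16, 16, 1]          # three hidden layers
weights = [rng.normal(scale=0.5, size=(m, n))
           for m, n in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    """Pass an input vector through every layer; each hidden layer
    extracts progressively more abstract features of the input."""
    h = x
    for w, b in zip(weights[:-1], biases[:-1]):
        h = relu(h @ w + b)               # hidden layers
    return h @ weights[-1] + biases[-1]   # linear output layer

sample = rng.normal(size=8)               # one invented input
print(forward(sample))
```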

Source: Qualcomm

5G refers to "5th Generation" and relates to the newest standards in mobile communications. The performance levels for 5G will be focused on ultra-low latency, lower energy consumption, large rates of data, and enormous connectivity of devices. The era of 5G, which will spread around much of the world from 2020 onwards (with some limited deployments in 2019), will be a world where cloud servers will continue to be used, and also one in which we witness the rise in prominence of AI on the edge (on device), where the data is generated, enabling real-time (or very near real-time) responses from intelligent devices. 5G and edge computing with machine-to-machine communication will be of great importance for autonomous systems with AI such as self-driving cars, drones, autonomous robots, and intelligent sensors within the context of IoT. 5G with AI will also enable the invisible bank and payments that leading Fintech influencers, such as Brett King and Jim Marous, dream about. The significantly faster speeds of 5G over 4G will enable technologies that are suboptimal today, such as Virtual Reality (VR), to perform much better. Augmented Reality (AR) and holographic technologies will emerge across different use cases in this period too. The companies that are going to thrive in (or even survive) the resulting digital transformation will be the ones that are already planning and exploring the potential.

As a society we need to be aware of the impending changes across all sectors of the economy. We need to ensure that our political leaders and regulators actually understand the scale of change that will arise and ensure that the regulatory frameworks and infrastructure are optimised to enable the deployment of AI for improving healthcare with personalized medicine, finance with better services for the customer, marketing with enhanced personalization and better service to the customer, plus smarter and more efficient manufacturing.

The graphic above shows an example of computers on board autonomous cars engaging in machine-to-machine communication as the vehicle in red broadcasts to all other vehicles upon discovering the broken-down car.

Every single sector of the economy will be transformed by AI and 5G in the next few years. Autonomous vehicles may result in reduced demand for cars, and car parking spaces within towns and cities will be freed up for other usage. It may be that people will not own a car and will instead opt to pay a fee for a car pooling or ride share option whereby an autonomous vehicle picks them up, takes them to work or shopping, and then, rather than remaining stationary in a car park, moves on to its next customer journey. The interior of the car will use AR with holographic technologies to provide an immersive and personalised experience, using AI to provide targeted and location-based marketing to support local stores and restaurants. Machine-to-machine communication will be a reality, with computers on board vehicles exchanging braking, speed, location and other relevant road data with each other, and techniques such as multi-agent Deep Reinforcement Learning may be used to optimise the decision making of the autonomous vehicles. Deep Reinforcement Learning refers to Deep Learning and Reinforcement Learning (RL) combined. This area of research has potential applications in finance, healthcare, IoT and autonomous systems such as robotics, and has shown promise in solving complicated tasks that require decision making and that in the past had been considered too complex for a machine. Multi-agent reinforcement learning seeks to give agents that interact with each other the ability to learn collaboratively as they adapt to the behaviour of other agents. Furthermore, object detection using Convolutional Neural Networks (CNNs) will also occur on the edge, in cameras (in autonomous systems and also in security cameras for intruder detection). A CNN is a type of Deep Neural Network that uses convolutions to extract patterns from the input data in a hierarchical manner. It is mainly used on data that has spatial relationships, such as images.

The image above shows an example of machine-to-machine communication between autonomous vehicles and devices that may develop in the world of 5G to enable reduced accidents on the road.
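As a toy illustration of the CNN idea described above, the sketch below (in Python, assuming PyTorch is installed; the layer sizes, input resolution and class count are arbitrary) stacks two convolutional layers that extract spatial patterns hierarchically before a linear classifier.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A minimal CNN: stacked convolutions extract spatial patterns
    hierarchically, then a linear layer maps them to class scores."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1),   # low-level patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 28x28 -> 14x14
            nn.Conv2d(8, 16, kernel_size=3, padding=1),  # higher-level patterns
            nn.ReLU(),
            nn.MaxPool2d(2),                             # 14x14 -> 7x7
        )
        self.classifier = nn.Linear(16 * 7 * 7, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(torch.flatten(x, 1))

# One batch of invented 28x28 grayscale images.
images = torch.randn(4, 1, 28, 28)
scores = TinyCNN()(images)
print(scores.shape)   # torch.Size([4, 10])
```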

The physical retail sector may transition from one in which costly inventory is held in bulk to an inventory-light model using smart mirrors, AR and VR combined with AI to provide personalised recommendations for apparel. If the customer selects an item, an autonomous vehicle may deliver it to the store whilst the customer is enjoying a digital experience and refreshments, or to their home at a pre-agreed delivery time. Over time, healthcare may evolve into a more efficient sector in which the next generation of drugs is developed with personalised medicine in mind, so that the side effects of a given drug are minimised and the benefits of the medication are maximised; data from Electronic Health Records is mined effectively; and medical imaging with explainable AI is deployed efficiently across clinics and hospitals so as to improve timely diagnosis of a condition, thereby reducing misdiagnosis for patients.

Source: Statista

The chart above illustrates the rapid growth in the number of connected devices. Statista estimated that there would be approximately 31 billion IoT-connected devices in 2020 and 75 billion by 2025. As we move into the world of 5G, the role of AI will be of fundamental importance to the economy overall and to your day-to-day life.

In summary, I believe that AI and the other Industry 4.0 digital technologies should be developed and encouraged to drive economic growth in ways that are cleaner, more efficient and allow wider participation across society for education, healthcare and better living standards. The issue of warfare and AI is a highly debated and emotive subject, and automation in warfare has been on display since the first Gulf War in the 1990s with fire-and-forget and cruise missiles. At the very least it is important to consider the need for transparency, with robust frameworks to understand what is being done, in order to ensure that there is sufficient oversight as a society over those making the decisions. However, in spite of what some in the media would have us believe, the vast majority of the AI community are not working on developing killer robots or other autonomous weapons. Whilst attending and speaking at an event on AI hosted in Davos during the WEF, I happened to meet Viktoriya Tigipko of TA Ventures and @JamesPeyer of @Apollo_Ventures and was impressed with the positive outlook and vision that they had for AI in relation to healthcare and the development of next-generation treatments that will help humanity. I have also been inspired by the work of the brilliant Dr Anna Becker, who started her degree at the age of 16 and her postgraduate studies at 19 before going on to build and run an AI company. AI, and in particular Machine Learning and Deep Learning, serves at this moment in time (and in the foreseeable future) to solve the issue of making sense of the deluge of data that we generate from digital platforms, rather than to create Skynet with Terminator machines to wipe us out (AGI itself does not exist today, nor will it in the medium-term future). AI also provides an opportunity to improve living standards and promote cleaner and more efficient industry, agriculture, smarter cities and energy systems as we move into the world of Industry 4.0 with the arrival of 5G.

Original post:
How Artificial Intelligence Will Impact Your Daily Life in the 2020s - BBN Times