Archive for the ‘Artificial Intelligence’ Category

Eyes of the City: Visions of Architecture After Artificial Intelligence – ArchDaily

This book tells the story of Eyes of the City, an international exhibition on technology and urbanism held in Shenzhen during the winter of 2019 and 2020, with a curation process that unfolded between summer 2018 and spring 2020. Conceived as a cultural event exploring future scenarios in architecture and design, Eyes of the City found itself in an extraordinary, if unstable, position, enmeshed within a series of powerfully contingent events (the political turmoil in Hong Kong, the first outbreak of COVID-19 in China) that impacted not only the scope of the project, but also the global debate around society and urban space.

Eyes of the City was one of the two main sections of the eighth edition of the Shenzhen Bi-City Biennale of Urbanism\Architecture (UABB), titled "Urban Interactions". Jointly curated by CRA-Carlo Ratti Associati, Politecnico di Torino, and South China University of Technology, it focused on the various relationships between the built environment and increasingly pervasive digital technologies (from artificial intelligence to facial recognition, from drones to self-driving vehicles) in a city that is one of the world's leading centers of the Fourth Industrial Revolution. [1]

The topic of the exhibition was decided well before the two events mentioned above made it an especially sensitive one for a Chinese, as well as an international, audience. The Biennale opened its doors in December 2019, just after the months-long protests in Hong Kong had reached their climax and the discussion on the role of surveillance systems embedded in physical space was at its most controversial. [2] The location the UABB organizers had chosen for the Biennale also caused controversy. The exhibition venue was at the heart of Shenzhen's Central Business District, in the hall of Futian Station, one of the largest infrastructure spaces in Asia as well as a multi-modal hub connecting the metropolis's metro system with high-speed trains capable of reaching Hong Kong in about ten minutes.

The agitations occurring on the south side of the border never spilled over into this first outpost of Mainland China. Nevertheless, as the curation process progressed and the opening day approached, the climate grew more tense. In those weeks, it was enough for an exhibitor merely to include in a proposal a drawing of people standing under umbrellas on the street to prompt heated reactions, the image recalling the symbol of the 2014 pro-democracy movement. Immediately prior to the opening, the station's police fenced off the Biennale venue, instituting checkpoints for visitors (fortunately, this provision lasted only two weeks before people were permitted to roam freely inside the station again). Despite these contingencies, Eyes of the City managed to offer what a Reuters journalist described as "a rare public space for reflection on increasingly pervasive surveillance by tech companies and the government." [3]

Then, in the second half of January 2020, what began as a local sickness in the city of Wuhan [4], some 1,000 kilometers north of Shenzhen, spread across the country and beyond, rapidly becoming a global pandemic. Trains between Futian and Hong Kong were discontinued [5] and the Biennale venue was shut, while in a matter of weeks the role of emerging technologies in regulating and facilitating people's work and social lives became one of the most-discussed topics worldwide, after the grim tally of infections and deaths. In the design field, COVID-19 was seen as exposing and amplifying, on a transcontinental scale, trajectories of change that were already underway.

In an unforeseeable fashion, the events that unfolded in southern China between late 2019 and early 2020 made the question of the city with eyes even more timely and pressing. In the midst of these events, the exhibition had to reinvent itself, experimenting with its form and content in order to continue carrying out its program and contribute to the growing debate. A product of this context, this book is the result of similar processes of continuous adjustment, reflection-in-action, and exchange.

The book challenges the traditional notion of the exhibition catalog, crossing the three temporal and conceptual dimensions that were also tackled by the exhibition as a whole. The book is composed of three parts, which loosely represent the different laboratories of the exhibition: the curatorial work that preceded it, the open debate that accompanied it, and the content that made it relevant. Overall, the book adopts Eyes of the City as a trans-scalar and multidisciplinary interpretative key for rethinking the city as a complex entanglement of relationships.

The first part expands on curatorial practices and reflects on the exhibition as an incubator of ideas. The opening essay is written by the exhibition's chief curator Carlo Ratti and academic curators Michele Bonino (Politecnico di Torino) and Yimin Sun (South China University of Technology): it positions Eyes of the City as an urgent urban category and proposes a legacy for the show that reframes the role of architecture biennales. The second essay is written by the exhibition's executive curators: it visually reconstructs the exhibition's design process and its materialization of our open-curatorship approach.

The second part of the book expands on a discussion that accompanied the entire curatorial process from spring 2019 to summer 2020, through a rubric on ArchDaily. Dozens of designers, writers, and philosophers, as foundational contributors, were asked to respond to the curatorial statement of Eyes of the City. The book contains a selection of these responses, covering topics as diverse as the identity of the eyes of the city and the aesthetic regimes behind them (Antoine Picon and Jian Liu), the evolution of the concept of urban anonymity (Yung-Ho Chang and Deyan Sudjic), the role of the natural world in the technologically enhanced city (Jeanne Gang), and advances in design practices that lie between robotics and archivization (Albena Yaneva and Philip Yuan).

The third part unpacks the content of the exhibition through eight essays, corresponding to the sections of the exhibition, written by researchers who were part of the curatorial team. These essays position the installations within a wider landscape of intra- and inter-disciplinary debate through an outward movement from the laboratories of the exhibition to possible future scenarios.

Eyes of the City has striven to broaden discussion and reflection on possible future urban spaces as well as on the notion of the architectural biennale itself. The curatorial line adopted throughout the eighteen-month-long process (an entanglement of online and on-site interactions, extensively leaning on academic research) configured the exhibition as an open system; that is, a platform of exchange independent of any aprioristic theoretical direction. The outbreak of COVID-19 inevitably impacted the material scale of the project. At the same time, it underlined the relevance of its immaterial legacy. Eyes of the City progressively re-invented itself in a virtual dimension, experimenting with diverse tactics to make its cultural program accessible. In doing so, it spawned a set of digital and physical documents, strategies, and traces that address some of the many open issues the city with eyes will face in the future. This book aims to offer a first systematization of this heterogeneous legacy.

Bibliography

AUTHORS' BIOS:

VALERIA FEDERIGHI is an architect and assistant professor at Politecnico di Torino, Italy. She received an MArch and a Ph.D. from the same university, and a Master of Science in Design Research from the University of Michigan. She is on the editorial board of the journal Ardeth (Architectural Design Theory), and she is part of the China Room research group. Her main publication to date is the book The Informal Stance: Representations of Architectural Design and Informal Settlements (Applied Research Design, ORO Editions, 2018). She was Head Curator of Events and Editorial for the Eyes of the City exhibition.

MONICA NASO is an architect and a Ph.D. candidate in Architecture, History and Project at Politecnico di Torino. She received an MArch with honors from the same university and has gained professional experience in Paris and Turin. As a member of the China Room research group and of the South China-Torino Collaboration Lab, she takes part in international and interdisciplinary research and design projects, and she was among the curators of the Italian Design Pavilion at the Shenzhen Design Week 2018. She was Head Curator of Exhibition and On-site Coordination for the Eyes of the City exhibition.

DANIELE BELLERI is a Partner at the design and innovation practice CRA-Carlo Ratti Associati, where he manages all curatorial, editorial, and communication projects of the office. He has a background in contemporary history, urban studies, and political science, and spent a period as a researcher at Moscow's Strelka Institute for Media, Architecture, and Design. Before joining CRA, he ran a London-based strategic design agency advising cultural organizations in Europe and Asia, and worked as an independent journalist writing on design and urban issues for international publications. He was one of the Executive Curators of the Eyes of the City exhibition. Currently, he is leading the development of CRA's Urban Study for Manifesta 14 Prishtina.

See original here:
Eyes of the City: Visions of Architecture After Artificial Intelligence - ArchDaily

Does artificial intelligence for IT operations pay off? – IT World Canada

For overwhelmed IT teams, AIOps holds the promise of automatically heading off potentially business-impacting outages. But some IT leaders are skeptical about whether it can really deliver results.

Rodrigo de la Parra, AIOps Domain Leader at IBM Automation, addressed that skepticism at a recent CanadianCIO virtual roundtable. "It's more than a buzzword," said de la Parra. "AIOps takes IT to a more software-driven, agile approach."

AIOps is the application of artificial intelligence to enhance IT operations, explained de la Parra. It spots issues by using machine learning to analyze the huge amounts of data generated by tools across an organization's infrastructure. Automation and natural language processing can then be leveraged to help fix problems in real time.

"It's not a product or a single solution," said de la Parra. "It's a journey." To unlock the value, he said, it is essential to align AIOps with business needs for improved efficiency and customer service.

De la Parra distinguished between what he referred to as domain-specific and domain-agnostic tools. He noted that domain-specific tools have great value within their own silos. But the real value, de la Parra said, comes from adding a domain-agnostic approach, because it can take feeds from all the tools running in silos and produce a single data source. "This becomes the single source of truth for the analytics and to provide evidence on the root cause to the stakeholders," said de la Parra.
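
To make that idea concrete, here is a minimal sketch of cross-silo event correlation: alerts from several hypothetical monitoring feeds are grouped by resource and time window into a single view. The feed names, field names, and grouping rule are illustrative assumptions, not features of any particular product.

```python
from collections import defaultdict

# Hypothetical alert feeds from siloed, domain-specific monitoring tools.
# The feed names and field names are made up for this example.
feeds = {
    "network": [{"ts": 1200, "resource": "db-01", "msg": "packet loss"}],
    "apm":     [{"ts": 1205, "resource": "db-01", "msg": "slow queries"}],
    "infra":   [{"ts": 1207, "resource": "db-01", "msg": "disk I/O saturated"}],
}

def correlate(feeds, window=30):
    """Group alerts that hit the same resource within one time window,
    merging every silo into a single, consolidated view."""
    grouped = defaultdict(list)
    for domain, alerts in feeds.items():
        for alert in alerts:
            bucket = (alert["resource"], alert["ts"] // window)
            grouped[bucket].append((domain, alert["msg"]))
    return grouped

for (resource, _), evidence in correlate(feeds).items():
    print(resource, "->", evidence)
# db-01 -> [('network', 'packet loss'), ('apm', 'slow queries'), ('infra', 'disk I/O saturated')]
```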

Successful implementation starts with an operational assessment to identify current problems related to the organization's business needs. From that, key performance indicators (KPIs) should be established to measure progress. Benchmarking where you are today, looking for real problems, and developing measurable KPIs are at the heart of finding and proving the value of AIOps.

For example, de la Parra suggested that organizations could examine their efficiency by tracking the volume of major incidents relative to their applications, or the mean time to detect, acknowledge and resolve incidents. Value could be measured by looking at how much manual work is eliminated, or reductions in the number of issues reported by users.
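
As an illustration, such KPIs can be computed directly from incident records. The sketch below derives mean time to acknowledge and mean time to resolve from hypothetical timestamps; the field names are assumptions for the example, not the schema of any AIOps product.

```python
from datetime import datetime
from statistics import mean

# Hypothetical incident records; the field names are illustrative only.
incidents = [
    {"detected": "2021-06-01 09:00", "acknowledged": "2021-06-01 09:12", "resolved": "2021-06-01 10:30"},
    {"detected": "2021-06-03 14:05", "acknowledged": "2021-06-03 14:20", "resolved": "2021-06-03 15:00"},
]

def minutes_between(start, end):
    fmt = "%Y-%m-%d %H:%M"
    return (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60

# Mean time to acknowledge (MTTA) and mean time to resolve (MTTR), in minutes.
mtta = mean(minutes_between(i["detected"], i["acknowledged"]) for i in incidents)
mttr = mean(minutes_between(i["detected"], i["resolved"]) for i in incidents)
print(f"MTTA: {mtta:.1f} min, MTTR: {mttr:.1f} min")
```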

One participant questioned how long it could take to set up the platform. According to de la Parra, this can be completed within a few weeks in many cases. He recommended starting with a manageably sized pilot to get meaningful results quickly. Once baseline data is fed into the model, it will start detecting deviations in real time. In addition, de la Parra noted that the IBM Watson AIOps solution comes with pre-defined algorithms that produce models to accelerate the implementation and the return on investment (ROI). "This approach removes the need for data scientists to normalize data, build a data lake, create models, and integrate interfaces to collaborate with the solution, such as ChatOps," he said.
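
To show what "detecting deviations" from a baseline can mean in its simplest form, here is a toy rolling z-score detector. It is a minimal sketch of the general idea only; production AIOps models account for seasonality, trends, and multivariate signals, and nothing here reflects how Watson AIOps is actually implemented.

```python
from collections import deque
from statistics import mean, stdev

def detect_deviation(stream, window=30, threshold=3.0):
    """Flag metric values that deviate sharply from a rolling baseline.

    A toy stand-in for the learned baselines an AIOps platform builds.
    """
    baseline = deque(maxlen=window)
    for value in stream:
        if len(baseline) >= window:
            mu, sigma = mean(baseline), stdev(baseline)
            if sigma > 0 and abs(value - mu) / sigma > threshold:
                yield value  # deviation from the learned baseline
        baseline.append(value)

# Example: steady latency around 100 ms, then a spike.
latencies = [100 + (i % 5) for i in range(60)] + [250]
print(list(detect_deviation(latencies)))  # -> [250]
```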

Despite the discussion, it was clear that many of the participants remained skeptical about whether AIOps can produce a measurable return on investment. As well, there were questions about the trustworthiness of the data and whether domain-specific tools, such as those that monitor security, are sufficient.

"The main advantage of domain-agnostic AIOps over domain-specific tools is that it provides complete visibility," said de la Parra. This, he said, is what makes it trustworthy AI: decisions are driven by evidence from analyzing different data sources, grouping entities, and localizing issues, visualized in topology views to provide context, probable cause, and the next best action to resolve incidents. This is all done within the confines of policies and compliance requirements.

"It's understandable to have skepticism over the effectiveness of AIOps, given a common preconception around biased AI in general and the effort needed to implement solid AI models," said de la Parra. "However, when we talk about AIOps at IBM, we are referring to a specific set of capabilities that provide concrete models to support log anomaly detection, blast radius, seasonal event grouping, and next best action, among others."

Another concern raised by the group related to the issue of false positives on potential incidents. De la Parra noted that AIOps can analyze whether an issue is having an impact on business systems; if there is no impact, it does not send alerts. "Reducing the noise is critical to allow staff to spend time on higher-value tasks," said de la Parra. A 2021 study from Forrester analyzed the total economic impact of IBM Watson AIOps. It showed a 50 per cent reduction in MTTR (Mean Time to Resolve) and an 80 per cent reduction in time spent remediating false-positive incidents, leading to $623K in savings and other benefits, such as proactive incident avoidance.

According to de la Parra, AIOps results in better overall IT service management. Not only does it reduce response time and downtime, it can also be used to look at the appropriate resource allocation for workloads in the cloud.

"Organizations already have the data," said de la Parra. "AIOps enables the IT team to be more proactive and to become a trusted partner that helps drive business forward."

See more here:
Does artificial intelligence for IT operations pay off? - IT World Canada

How Artificial Intelligence Will Impact Your Daily Life in the 2020s – BBN Times

Artificial intelligence (AI) powers 5G, blockchain, the internet of things, quantum computing and self-driving cars.

Source: The Scientist Magazine

Artificial intelligence deals with the area of developing computing systems which are capable of performing tasks that humans are very good at, for example recognising objects, recognising and making sense of speech, and decision making in a constrained environment.

Machine Learning is defined as the field of AI that applies statistical methods to enable computer systems to learn from data towards an end goal. The term was introduced by Arthur Samuel in 1959.

Neural Networks are biologically inspired networks that extract abstract features from the data in a hierarchical fashion.

Deep Learning refers to the field of Neural Networks with several hidden layers. Such a Neural Network is often referred to as a Deep Neural Network.

I will refer to AI in this article as covering the spectrum of Machine Learning and Deep Learning, as well as classical AI techniques such as logic and search algorithms.

Source: Qualcomm

5G refers to "5th Generation" and relates to the newest standards in mobile communications. The performance targets for 5G focus on ultra-low latency, lower energy consumption, high data rates, and massive connectivity of devices. The era of 5G, which will spread around much of the world from 2020 onwards (with some limited deployments in 2019), will be a world where cloud servers continue to be used, and also one in which we witness the rise of AI on the edge (on device), where the data is generated, enabling real-time (or near-real-time) responses from intelligent devices. 5G and edge computing with machine-to-machine communication will be of great importance for autonomous systems with AI, such as self-driving cars, drones, autonomous robots, and intelligent sensors within the context of IoT. 5G with AI will also enable the invisible bank and payments that leading Fintech influencers, such as Brett King and Jim Marous, dream about. The significantly faster speeds of 5G over 4G will enable technologies that are suboptimal today, such as Virtual Reality (VR), to perform much better. Augmented Reality (AR) and holographic technologies will emerge across different use cases in this period too. The companies that are going to thrive in (or even survive) the resulting digital transformation will be the ones that are already planning and exploring the potential.

As a society we need to be aware of the impending changes across all sectors of the economy. We need to ensure that our political leaders and regulators actually understand the scale of change that will arise and ensure that the regulatory frameworks and infrastructure are optimised to enable the deployment of AI for improving healthcare with personalized medicine, finance with better services for the customer, marketing with enhanced personalization and better service to the customer, plus smarter and more efficient manufacturing.

The graphic above shows an example of computers on board autonomous cars engaging in Machine to Machine communication as the vehicle in red broadcasts to all other vehicles upon discovering the broken down car.

Every single sector of the economy will be transformed by AI and 5G in the next few years. Autonomous vehicles may result in reduced demand for cars, and car parking spaces within towns and cities will be freed up for other uses. It may be that people will not own a car at all, instead paying a fee for a car-pooling or ride-share option whereby an autonomous vehicle picks them up, takes them to work or shopping, and then, rather than remaining stationary in a car park, moves on to its next customer journey. The interior of the car will use AR with holographic technologies to provide an immersive and personalised experience, using AI to deliver targeted and location-based marketing to support local stores and restaurants. Machine-to-machine communication will be a reality, with computers on board vehicles exchanging braking, speed, location, and other relevant road data with each other, and techniques such as multi-agent Deep Reinforcement Learning may be used to optimise decision making by autonomous vehicles.

Deep Reinforcement Learning refers to Deep Learning and Reinforcement Learning (RL) combined together. This area of research has potential applications in finance, healthcare, IoT, and autonomous systems such as robotics, and has shown promise in solving complicated tasks that require decision making and had in the past been considered too complex for a machine. Multi-agent reinforcement learning seeks to give agents that interact with each other the ability to learn collaboratively as they adapt to the behaviour of other agents.

Furthermore, object detection using Convolutional Neural Networks (CNNs) will also occur on the edge, in cameras (in autonomous systems and also in security cameras for intruder detection). A CNN is a type of Deep Neural Network that uses convolutions to extract patterns from the input data in a hierarchical manner. It is mainly used on data that has spatial relationships, such as images. A minimal example of such a network appears below.
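
The following is a minimal sketch of a CNN in PyTorch, illustrating the hierarchical convolution-and-pooling structure described above. It is a toy classifier, not an object detector of the kind deployed in vehicles or security cameras, and the layer sizes are arbitrary choices for the example.

```python
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    """A minimal CNN illustrating hierarchical feature extraction:
    early convolutions pick up edges and textures; deeper ones
    compose them into object-level patterns."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low-level patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # higher-level patterns
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)

    def forward(self, x):
        x = self.features(x)              # (N, 32, 8, 8) for 32x32 input
        return self.classifier(x.flatten(1))

# A batch of four 32x32 RGB "images".
logits = TinyCNN()(torch.randn(4, 3, 32, 32))
print(logits.shape)  # torch.Size([4, 10])
```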

The image above shows an example of machine-to-machine communication between autonomous vehicles and devices that may develop in the world of 5G to reduce accidents on the road.

The physical retail sector may transition from a model whereby costly inventory is held in bulk to an inventory-light model using smart mirrors, AR, and VR combined with AI to provide personalised recommendations for apparel. If the customer selects an item, an autonomous vehicle may deliver it to the store while the customer is enjoying a digital experience and refreshments, or to their home at a pre-agreed delivery time. Over time, healthcare may evolve into a more efficient sector in which the next generation of drugs is developed with personalised medicine in mind, so that the side effects of a given drug are minimised and the benefits of the medication are maximised; data from Electronic Health Records is mined effectively; and medical imaging with explainable AI is deployed efficiently across clinics and hospitals, so as to improve timely diagnosis of a condition and thereby reduce misdiagnosis for patients.

Source: Statista

The chart above illustrates the rapid growth in the number of connected devices. Statista estimated that there will be approximately 31 billion IoT-connected devices in 2020 and 75 billion by 2025. As we move into the world of 5G, the role of AI will be of fundamental importance to the economy overall and to your day-to-day life.
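
As a quick sanity check on those estimates, the implied compound annual growth rate is easy to work out:

```python
# Implied compound annual growth rate (CAGR) of the Statista estimates
# cited above: 31 billion devices in 2020 growing to 75 billion by 2025.
cagr = (75 / 31) ** (1 / 5) - 1
print(f"Implied CAGR, 2020-2025: {cagr:.1%}")  # roughly 19.3% per year
```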

In summary, I believe that AI and the other Industry 4.0 digital technologies should be developed and encouraged to drive economic growth in ways that are cleaner, more efficient, and allow wider participation across society in education, healthcare, and better living standards.

The issue of warfare and AI is a highly debated and emotive subject, and automation in warfare has been on display since the first Gulf War in the 1990s, with fire-and-forget and cruise missiles. At the very least, it is important to consider the need for transparency, with robust frameworks to understand what is being done, in order to ensure that there is sufficient oversight as a society over those making the decisions. However, in spite of what some in the media would have us believe, the vast majority of the AI community are not working on developing killer robots or other autonomous weapons.

While attending and speaking at an event on AI hosted in Davos during the WEF, I happened to meet Viktoriya Tigipko of TA Ventures and @JamesPeyer of @Apollo_Ventures, and was impressed with the positive outlook and vision that they had for AI in relation to healthcare and the development of next-generation treatments that will help humanity. I have also been inspired by the work of the brilliant Dr Anna Becker, who started her degree at the age of 16 and her postgraduate studies at 19 before going on to build and run an AI company.

AI, and in particular Machine Learning and Deep Learning, serves at this moment in time (and in the foreseeable future) to solve the problem of making sense of the deluge of data that we generate from digital platforms, rather than to create Skynet with Terminator machines to wipe us out (AGI does not exist today, nor will it in the medium term). AI also provides an opportunity to improve living standards and promote cleaner and more efficient industry, agriculture, smarter cities, and energy systems as we move into the world of Industry 4.0 with the arrival of 5G.

Original post:
How Artificial Intelligence Will Impact Your Daily Life in the 2020s - BBN Times

6 ways artificial intelligence is revolutionizing home search – Inman

As all agents, brokers, and homebuyers know, searching for a home is a deeply personal process, and one of the most difficult challenges for buyers is narrowing down what they want. When prospective buyers walk through a home or search for one online, they are making hundreds of value judgments, often without ever consciously registering them or expressing them to the real estate professional they are working with.

Thankfully, artificial intelligence (AI) can now help bridge that gap and deliver a customized and personalized experience for consumers, without additional work by the agent or broker.

Here are a few exciting ways AI technology is making this possible:

For years, it has been easy to search for homes based on basic criteria like square footage, but what if a client wants something a little more specific, such as hardwood floors in all of the bedrooms, or homes with granite counters and white kitchen cabinets?

That's where AI comes in. Those kinds of variables, or combinations of them, are not often captured by a listing data feed, but they can be critical to personalizing the customer experience. AI makes it easy to get the right search results quickly, even for the most particular clients.

If you watch Netflix or use Amazon, you're already familiar with AI technology that reacts to each individual consumer's preferences. On those platforms, what you stop to review, or even the amount of time you spend reviewing it, is used to infer preferences without ever asking you a specific question. In real estate, AI-powered search platforms are starting to offer buyers similar interactions.

Agents can now encourage consumers to find and upload images of what they're looking for (the type of home, the finishes, the features, the layout) and have tech tools handle the hard work of searching for similar properties on the market.

Firms like Wayfair, Home Depot, and others are leveraging tools that allow consumers to visualize what a room or a home would look like with different paint colors, with their own furniture, or even after a renovation. This allows buyers and sellers to maximize interest in a transaction by seeing what a home could look like in the future.

Instead of typing something like "New York, three-bedroom apartment," prospects are now able to simply speak into their phone or computer microphone and say something like, "I need a three-bedroom apartment with a Central Park view in New York, facing east." Before long, platforms will be able to reply to them verbally. With computer vision technology, this becomes a reality by utilizing plain-English descriptions of what is tagged in images and searching for them.
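
As a rough illustration of how a spoken request might be turned into structured search filters, here is a toy parser. Production voice search relies on trained language models rather than regular expressions; the patterns and filter names below are assumptions made up for the example.

```python
import re

def parse_query(text: str) -> dict:
    """Toy parser mapping a plain-English home-search query to filters.
    Real voice search uses trained NLP models, not regexes."""
    filters = {}
    numbers = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
    if m := re.search(r"(\w+)-bedroom", text, re.IGNORECASE):
        filters["bedrooms"] = numbers.get(m.group(1).lower())
    if m := re.search(r"\bin ([A-Z][\w ]+?)(?:,|$)", text):
        filters["city"] = m.group(1).strip()
    if m := re.search(r"facing (\w+)", text, re.IGNORECASE):
        filters["orientation"] = m.group(1).lower()
    if m := re.search(r"([A-Z][\w ]*?) view", text):
        filters["view"] = m.group(1).strip()
    return filters

query = "I need a three-bedroom apartment with a Central Park view in New York, facing east"
print(parse_query(query))
# {'bedrooms': 3, 'city': 'New York', 'orientation': 'east', 'view': 'Central Park'}
```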

For sellers, search placement can be improved by using technology that automatically tags home features in listing photos. That means agents can avoid writing all those tags and detailed image descriptions while still having their sellers benefit from optimal search engine placement. At a time when the vast majority of home searches start online, that's a big deal.
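
Below is a rough sketch of what automatic photo tagging can look like, using an off-the-shelf ImageNet classifier from torchvision. Real listing-photo taggers are trained on real-estate imagery with labels like "granite counters"; the ImageNet categories and the file name here are stand-ins for the example.

```python
import torch
from torchvision.io import read_image
from torchvision.models import resnet18, ResNet18_Weights

# Load a pretrained classifier and its matching preprocessing pipeline.
weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights).eval()
preprocess = weights.transforms()

img = read_image("listing_photo.jpg")  # hypothetical listing photo
with torch.no_grad():
    scores = model(preprocess(img).unsqueeze(0)).softmax(dim=1)

# Keep the five most confident labels as candidate tags.
top5 = scores.topk(5)
tags = [weights.meta["categories"][int(i)] for i in top5.indices[0]]
print(tags)  # e.g., ["patio", "dining table", ...]
```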

Put simply, developments like these are transforming the home search process and making it easier for real estate professionals to deliver highly personalized service to their customers without adding more to their plates.

Red Bell Real Estate, LLC, a homegenius company, is at the forefront of these and other exciting technology developments that will make agents' and brokers' jobs easier and more lucrative. If you're interested in learning more about how this tech could work for you or your agents, visit homegenius.com.

© 2021 Radian Group Inc. All Rights Reserved. Red Bell Real Estate, LLC, 7730 South Union Park Avenue, Suite 400, Midvale, UT 84047. Tel: 866-626-2381. Licensed in every State and the District of Columbia. This communication is provided for use by real estate professionals only and is not intended for distribution to consumers or other third parties. This does not constitute an advertisement as defined by Section 1026.2(a)(2) of Regulation Z.

Visit link:
6 ways artificial intelligence is revolutionizing home search - Inman

Artificial Intelligence Is Taking Over Jobs That Humans Did For Years – wpgtalkradio.com

Margie and I were shopping at Sam's Wholesale Club yesterday when all of a sudden a floor-cleaning machine drove right past us.

This doesn't sound at all eventful; however, at second glance, I could see that the floor-cleaning vehicle was driverless. There was a seat, controls, and a steering wheel, but there was no human driver.

As I processed this moment, the first thing I thought about was how cool it was: a driverless, automated vehicle with a cleaning route all mapped out.

It automatically beeps its horn to alert people of its presence. I watched it brake in time for human traffic.

It's amazing, game-changing technology. The store confidently operates this equipment during normal operating hours with people walking right near it. We were there at 10:00 a.m. yesterday, during the opening minutes of operation.

A moment later, I thought: wow, this equipment has taken away a good job that used to exist.

Now, it's true that businesses all over America are having a hard time filling numerous open job classifications.

Still, I couldn't help but think about the many jobs that have been eliminated in recent years because of technology.

A few years ago, a McKinsey report highlighted the following statistics:

Regarding workforce displacement, they concluded that as many as 800 million global jobs and 475 million employees could be disrupted by automation before 2030.

Here are some of the most recent jobs lost due to technology:

Here are six jobs that may disappear by 2030:

Here are five jobs that won't be eliminated by 2030:

In summary, technology is amazing and wonderful. Yet good jobs that have existed for generations are being eliminated. People will have to become more nimble and adaptable than ever before, and be prepared to make a career change as the marketplace continues to evolve.

Follow this link:
Artificial Intelligence Is Taking Over Jobs That Humans Did For Years - wpgtalkradio.com