
The Evolution of Artificial Intelligence and Future of National Security – The National Interest

Artificial intelligence is all the rage these days. In the popular media, regular cyber systems seem almost passé, as writers focus on AI and conjure up images of everything from real-life Terminator robots to more benign companions. In intelligence circles, China's use of closed-circuit television, facial recognition technology, and other monitoring systems suggests the arrival of Big Brother, if not quite in 1984, then only about forty years later. At the Pentagon, legions of officers and analysts talk about the AI race with China, often with foreboding admonitions that the United States cannot afford to be second in class in this emerging realm of technology. In policy circles, people wonder about the ethics of AI, such as whether we can really delegate to robots the ability to use lethal force against America's enemies, however bad they may be. A new report by the Defense Innovation Board lays out broad principles for the future ethics of AI, but only in general terms that leave much further work to be done.

What does it all really mean, and is AI likely to be all it's cracked up to be? We think the answer is complex and that a modest dose of cold water should be thrown on the subject. In fact, many of the AI systems being envisioned today will take decades to develop. Moreover, AI is often confused with things it is not. Precision about the concept will be essential if we are to have intelligent discussions about how to research, develop, and regulate AI in the years ahead.

AI systems are basically computers that can learn how to do things through a process of trial and error, with some mechanism for telling them when they are right and when they are wrong, such as picking out missiles in photographs, or people in crowds, as with the Pentagon's "Project Maven", and then applying what they have learned to interpret future data. In other words, with AI, the software is in effect built by the machine itself. The broad computational approach for a given problem is determined in advance by real old-fashioned humans, but the actual algorithm is created through a process of trial and error by the computer as it ingests and processes huge amounts of data. The thought process of the machine is really not that sophisticated. It is developing artificial instincts more than intelligence: examining huge amounts of raw data and figuring out how to recognize a cat in a photo or a missile launcher on a crowded highway, rather than engaging in deep thought (at least for the foreseeable future).
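That guess-and-correct loop can be made concrete with a toy example. The sketch below uses plain Python with invented data and a deliberately simple single-neuron model (nothing resembling Project Maven): the program is only told when its guesses are wrong, and the correction signal is what builds the final rule.

```python
# A minimal sketch of trial-and-error learning: a single-neuron classifier
# separates two invented clusters of 2-D points by nudging its weights
# whenever it guesses a training example's label wrong.

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    """Learn weights w and bias b by repeated guess-and-correct passes."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(samples, labels):
            guess = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = y - guess            # the "told when it is wrong" signal
            w[0] += lr * error * x1      # nudge the rule toward the right answer
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, point):
    return 1 if w[0] * point[0] + w[1] * point[1] + b > 0 else 0

# Two toy clusters: label 1 near (2, 2), label 0 near (-2, -2).
samples = [(2, 2), (3, 1), (2, 3), (-2, -2), (-3, -1), (-2, -3)]
labels = [1, 1, 1, 0, 0, 0]
w, b = train_perceptron(samples, labels)
```

The humans here chose the broad approach (a linear threshold and a learning rate), but the actual decision rule, the values of `w` and `b`, emerged from the data.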

This definition allows us quickly to identify some types of computer systems that are not, in fact, AI. They may be important, impressive, and crucial to the warfighter, but they are not artificial intelligence, because they do not create their own algorithms out of data and multiple iterations. There is no machine learning involved, to put it differently. As our colleague Tom Stefanick points out, there is a fundamental difference between advanced algorithms, which have been around for decades (though they are constantly improving as computers get faster), and artificial intelligence. There is also a difference between an autonomous weapons system and AI-directed robotics.

For example, the computers that guide a cruise missile or a drone are not displaying AI. They follow an elaborate, but predetermined, script, using sensors to take in data and then putting it into computers, which then use software (developed by humans, in advance) to determine the right next move and the right place to detonate any weapons. This is autonomy. It is not AI.

Or, to use an example closer to home for most people: when your smartphone uses an app like Google Maps or Waze to recommend the fastest route between two points, this is not necessarily AI either. There are only so many possible routes between two places. Yes, there may be dozens or hundreds, but the number is finite. As such, the computer in your phone can essentially look at each reasonable possibility separately, taking in data from the broader network that many other people's phones contribute, to factor traffic conditions into the computation. But the way the math is actually done is straightforward and predetermined.
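What that kind of deterministic route search looks like can be sketched with an invented toy road network and Dijkstra's classic shortest-path algorithm (a stand-in for whatever a real navigation app actually runs). Every step is predetermined arithmetic; nothing is learned.

```python
# Deterministic route finding over a tiny invented road network.
# Edge weights stand in for current travel times in minutes; Dijkstra's
# algorithm settles each intersection in order of distance, with no learning.

import heapq

def fastest_route(graph, start, goal):
    """Return (total_time, path) using Dijkstra's shortest-path algorithm."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        elapsed, node, path = heapq.heappop(queue)
        if node == goal:
            return elapsed, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, cost in graph[node].items():
            if nxt not in seen:
                heapq.heappush(queue, (elapsed + cost, nxt, path + [nxt]))
    return float("inf"), []

# Illustrative travel times between intersections A through D.
roads = {
    "A": {"B": 5, "C": 2},
    "B": {"A": 5, "D": 4},
    "C": {"A": 2, "D": 9},
    "D": {"B": 4, "C": 9},
}
best_time, best_path = fastest_route(roads, "A", "D")
```

Fresh traffic data only changes the edge weights; the procedure for comparing routes stays fixed in advance, which is exactly why this is automation rather than machine learning.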

Why is this important? For one thing, it should make us less breathless about AI and help us see it as one element in a broader computer revolution that began in the second half of the twentieth century and picked up steam in this century. It should also help us see what may or may not be realistic and desirable to regulate in the realm of future warfare.

The former vice chairman of the Joint Chiefs of Staff, Gen. Paul Selva, has recently argued that the United States could be about a decade away from having the capacity to build an autonomous robot that could decide when to shoot and whom to kill, though he also asserted that the United States had no plans actually to build such a creature. But if you think about it differently, in some ways we've already had autonomous killing machines for a generation. The cruise missile we discussed above has been deployed since the 1970s. It has instructions to fly a given route and then detonate its warhead without any human in the loop. And by the 1990s, we knew how to build skeet submunitions that could loiter over a battlefield and look for warm objects like tanks, using software to decide when to destroy them. So the killer machine was in effect already deciding for itself.

Even if General Selva's terminator is not built, robotics will in some cases likely be given greater decisionmaking authority to decide when to use force, since we have in effect already crossed over this threshold. This highly fraught subject requires careful ethical and legal oversight, to be sure, and the associated risks are serious. Yet the speed at which military operations must occur will create incentives not to have a person in the decisionmaking loop in many tactical settings. Whatever the United States may prefer, restrictions on automated uses of violent force would also appear relatively difficult to negotiate (even if desirable), given likely opposition from Russia and perhaps from other nations, as well as huge problems with verification.

For example, small robots that can operate as swarms on land, in the air, or in the water may be given certain leeway to decide when to operate their lethal capabilities. By communicating with each other and processing information about the enemy in real time, they could concentrate attacks where defenses are weakest, in a form of combat that John Allen and Amir Husain call "hyperwar" because of its speed and intensity. Other types of swarms could attack parked aircraft; even small explosives, precisely detonated, could disable wings or engines or produce secondary and much larger explosions. Many countries will have the capacity to do such things in the coming twenty years. Even if the United States tries to avoid using such swarms for lethal and offensive purposes, it may elect to employ them as defensive shields (perhaps against North Korean artillery attack on Seoul) or as jamming aids to accompany penetrating aircraft. With UAVs that can fly ten hours and one hundred kilometers now costing only in the hundreds of thousands of dollars, and quadcopters with ranges of a kilometer, more or less, costing in the hundreds of dollars, the trendlines are clear, and the affordability of using many drones in an organized way is evident.

Where regulation may be possible, and ethically compelling, is in limiting the geographic and temporal space where weapons driven by AI or other complex algorithms can use lethal force. For example, the swarms noted above might only be enabled near a ship, in the skies near the DMZ in Korea, or within a small distance of a military airfield. It may also be smart to ban letting machines decide when to kill people. It might be tempting to use facial recognition technology on future robots to have them hunt the next bin Laden, Baghdadi, or Soleimani in a huge Mideastern city, but the potential for mistakes, hacking, and many other malfunctions may be too great to allow this kind of thing. It probably also makes sense to ban the use of AI to attack the nuclear command and control infrastructure of a major nuclear power. Such attempts could give rise to "use them or lose them" fears in a future crisis and thereby increase the risks of nuclear war.

We are in the early days of AI. We can't yet begin to foresee where it's going and what it may make possible in ten or twenty or thirty years. But we can work harder to understand what it actually is, and also think hard about how to put ethical boundaries on its future development and use. The future of warfare, for better or for worse, is literally at stake.

Retired Air Force Gen. Lori Robinson is a nonresident senior fellow on the Security and Strategy team in the Foreign Policy program at Brookings. She was commander of all air forces in the Pacific.


Artificial Intelligence Applications: Is Your Business Implementing AI Smartly? – IoT For All

The book Design, Launch, and Scale IoT Services classifies the components of IoT services into technical modules. One of the most important of these is Artificial Intelligence (AI). This article is intended to supplement the book by providing insight into AI and its applications for IoT.

After many years in the wilderness, AI is back on the hype curve and will change the world again. Or will it? AI has always been interesting, but what has changed to justify the current hype?

There are several contributing factors. The volumes of data that will be produced by many IoT services suggest that this data cannot be managed by humans with traditional analytics tools; AI can therefore offer opportunities for IoT services to extract maximum value from the data. IoT cloud platforms are now offering AI services via APIs and application development tools, making AI more accessible for many IoT services. Now, AI can be incorporated without requiring extensive development or excessive costs.

AI can perform the "Treble A" actions automatically, but there is a cost associated with every step in the lifecycle, so business owners should ask themselves why they should introduce AI. Understanding the end goal is the starting point. AI is not suitable for all services and requires evaluation to understand when and how it should be introduced.

The following questions can provide a useful starting point for evaluating the introduction of AI:

The majority of IoT services include (or claim to include) some aspect of AI in their solution. This is due to the wide diversity in AI definitions (supervised/unsupervised learning, reinforcement/deep learning) and the hype surrounding AI. (Note: All IoT services should take advantage of this hype while it lasts.)

Let's look at the most common AI features and IoT industries to consider how IoT service owners can best evaluate AI and answer the questions above.

IoT cloud platform providers are offering powerful AI visual recognition APIs. For example, developing a human visual recognition tool has now become a trivial exercise for developers, and the cost of using visual recognition in IoT services has dropped drastically. These tools are best suited to use cases recognizing humans and objects, but may not be useful for very precise recognition use cases. Developing specific visual recognition capabilities proves too expensive for most services, even though it can make a service more attractive for end users.

Robotics is a branch of AI that, for many, implies a two-armed, two-legged machine that communicates with humans using visual or voice recognition. However, the most important use cases for IoT robotics involve the collection of data from sensors or extracted from robot programs. This data can be used by IoT services as input for AI machine learning algorithms to increase robot efficiency, implementing features such as predictive fault management or adaptive positioning. AI can be used to increase productivity with robotic systems as part of Industrial IoT services that will become vital for many Industry 4.0 use cases.

Natural Language Processing (NLP) and voice recognition features have become widely available in mobile phones and CRM (customer relationship management) systems. They can be implemented via IoT cloud service APIs, which will be an option for many IoT services without requiring significant investment. It will make most services more attractive, implying more sales. However, we are probably quite far off from the stage where NLP is fundamental for IoT services. It's available in many mobile apps, but most users still prefer to use a touch screen. The main use cases for voice control systems will most likely involve voice-to-text transcription for operational or CRM activities to reduce cost, though this may increase frustration for end users. (Note: Cloud providers are also introducing AI audio recognition APIs for fault detection that can be used to replace or augment visual recognition features.)

Smart factories offer numerous opportunities for implementing use cases that can increase efficiency via visual inspection, checking for faulty components or assembly process errors. The analysis required should weigh costs against benefits: if visual inspection slows the production process, it may be counterproductive to introduce it in a manufacturing process that has a low fault rate.

For example, let's say that a smart factory produces 5,000 components per day, averaging 50 faulty components. The introduction of visual inspection may reduce faults to zero. However, if it slows the manufacturing process so that only 4,000 components are produced per day, is it worthwhile? The process owner will have to calculate whether the reduction in faulty components outweighs the reduction in throughput. This is an example of real-time fault detection that can be used for industrial IoT services. (Note: Many IoT cloud platform providers offer the possibility to implement AI on edge devices, thus increasing the number of use cases for real-time AI.)
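The arithmetic behind that judgment can be made explicit. In this sketch the per-unit revenue and fault costs are invented for illustration; the point is only that the answer flips depending on what a faulty component costs relative to lost throughput.

```python
# Hypothetical smart-factory numbers from the example above: 5,000 parts/day
# with 50 faults, versus 4,000 parts/day with 0 faults after adding inspection.

def net_value(parts_per_day, faulty, unit_revenue, fault_cost):
    """Daily value: revenue on good parts minus the cost of shipped faults."""
    return (parts_per_day - faulty) * unit_revenue - faulty * fault_cost

# Scenario 1: faults are cheap ($5 each) relative to $10 revenue per part.
gain_cheap = net_value(5000, 50, 10, 5) - net_value(4000, 0, 10, 5)

# Scenario 2: faults are very costly ($200 each, e.g. recalls or rework).
gain_costly = net_value(5000, 50, 10, 200) - net_value(4000, 0, 10, 200)
```

With cheap faults the faster line wins (`gain_cheap` is positive), while with costly faults the slower, inspected line wins (`gain_costly` is negative): the same inspection system is worthwhile in one factory and counterproductive in another.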

Many industrial IoT solutions suggest that visual recognition will be used to determine the current health and emotional status of machine operators. This would require quite advanced features to be beneficial, and therefore it's unlikely to be relevant for most IoT services.

Visual inspection shows great promise in detecting cancer and other ailments using advanced AI techniques and is improving the accuracy of diagnosis in many IoT health use cases. Very often, visual inspection requires large volumes of sample cases and training sets to ensure that the performance is acceptable. Genome technology generates billions of data items mapping our DNA that cannot be handled by humans with analytics tools; the introduction of AI offers the possibility to predict future health issues. Using data volumes of this magnitude requires unsupervised learning techniques, such as clustering, which may prove too complex and expensive for the majority of current IoT use cases. Again, cloud service providers offer options facilitating the management of training models and data with tools such as Google Cloud AutoML. However, it's likely this will only be cost-effective for a limited number of IoT services.

It's surprising that we haven't yet seen the widespread deployment of AI in the management of intelligent hospitals. As with any complex logistical process, AI can create significant efficiencies with relatively low investment.

Many smart home IoT services will implement voice recognition that connects with smart speakers. These are widely available from providers such as Amazon, Google, and Apple, and they can communicate with most smart home devices without significant complexity. It's likely that voice recognition will be an add-on for the majority of IoT services: nice to have, but not fundamental. Therefore, in most cases, IoT business owners may have to budget for this as a premium service.

The potential of AI in transportation is very exciting (e.g., driverless cars). There will be a lot of innovation with AI for drivers, but new IoT service owners will have to carve out a niche in this market. Although the technology is available, we may still be quite a way off from many use cases being acceptable to drivers. Imagine all the cars on the road communicating with each other and learning from one another as they drive.

One example to consider: Car A detects ice on the road and informs other cars, and they all proceed to automatically adjust speed and braking based on performance data from the other cars. This may seem futuristic, but the technology is currently available, and AI offers the possibility of increased performance and decision making.

Analytics is closely interlinked with AI. When adopting AI, it's typical to ask yourself whether you still need analytics tools, or whether analytics will die off once AI is implemented. The answer? Not quite. Most IoT services employ analytics, and therefore the data required by AI will already be available. AI should be able to replace many of the activities performed by humans using analytics tools, or the output of analytics can be the starting point for AI's introduction in many IoT services. The latter doesn't imply that analytics is a prerequisite: if the data is available, expert systems can be developed without analytics.

Now, we're starting to see augmented analytics, where AI assists analytics with intelligent searching and other tasks. This may not be necessary for most IoT services, but we can be sure that it's being used by the massive tech companies around the world. Unfortunately, most IoT services won't generate enough data for it to be cost-effective to introduce.

Analytics, statistics, and lies are often interchangeable, and that won't be solved by AI. One challenge for many IoT services is that neural networks and deep learning techniques cannot explain why they're making decisions. This can reduce customer confidence and makes them unsuitable for IoT services where a clear understanding of the decision-making process is important.


The 10 most innovative artificial intelligence companies of 2020 – Fast Company

Artificial intelligence has reached the inflection point where it's less of a trend than a core ingredient across virtually every aspect of computing. These companies are applying the technology to everything from treating strokes to detecting water leaks to understanding fast-food orders. And some of them are designing the AI-ready chips that will unleash even more algorithmic innovations in the years to come.

For enabling the next generation of AI applications with its Intelligence Processing Unit AI chip

As just about every aspect of computing is transformed by machine learning and other forms of AI, companies can throw intense algorithms at existing CPUs and GPUs. Or they can embrace Graphcore's Intelligence Processing Unit, a next-generation processor designed for AI from the ground up. Capable of reducing the necessary number crunching for tasks such as algorithmic trading from hours to minutes, the Bristol, England, startup's IPUs are now shipping in Dell servers and as an on-demand Microsoft Azure cloud service.

Read more about why Graphcore is one of the Most Innovative Companies of 2020.

For tutoring clients like Chase to fluency in marketing-speak

Ever tempted to click on the exciting discount offered to you in a marketing email? That might be the work of Persado, which uses AI and data science to generate the marketing language that might work best on you. The company's algorithms learn what a brand hopes to convey to potential customers and suggest the most effective approach, and it works. In 2019, Persado signed contracts with large corporations like JPMorgan Chase, which signed a five-year deal to use the company's AI across all its marketing. Persado claims to have doubled its annual recurring revenue in the last three years.

For becoming a maven in discerning customer intent via messaging apps

We may be a long way from AI being able to replace a friendly and knowledgeable customer-service representative. But LivePerson's Conversational AI is helping companies get more out of their human reps. The machine-learning-infused service routes incoming queries to the best agent, learning as it goes so that it grows more accurate over time. It works over everything from text messaging to WhatsApp to Alexa. With Conversational AI and LivePerson's chat-based support, the company's clients have seen a twofold increase in agent efficiency and a 20% boost in sales conversions compared to voice interactions.

For catalyzing care after a patients stroke

When a stroke victim arrives at the ER, it can sometimes be hours before they receive treatment. Viz.ai makes an artificial intelligence program that analyzes the patient's CT scan, then organizes all the clinicians and facilities needed to provide treatment. This sets up workflows that happen simultaneously, instead of one at a time, which collapses how long it takes for someone to receive treatment and improves outcomes. Viz.ai says that its hospital customer base grew more than 1,600% in 2019.

For transforming sketches into finished images with its GauGAN technology

GauGAN, named after post-Impressionist painter Paul Gauguin, is a deep-learning model that acts like an AI paintbrush, rapidly converting text descriptions, doodles, or basic sketches into photorealistic, professional-quality images. Nvidia says art directors and concept artists from top film studios and video-game companies are already using GauGAN to prototype ideas and make rapid changes to digital scenery. Computer scientists might also use the tool to create virtual worlds used to train self-driving cars, the company says. The demo video has more than 1.6 million views on YouTube.

For bringing savvy to measuring the value of TV advertising and sponsorship

Conventional wisdom has it that precise targeting and measurement of advertising is the province of digital platforms, not older forms of media. But Hive's AI brings digital-like precision to linear TV. Its algorithms ingest video and identify its subject matter, allowing marketers to associate their ads with relevant content, such as running a car commercial after a chase scene. Hive's Mensio platform, offered in partnership with Bain, melds the company's AI-generated metadata with information from 20 million households to give advertisers new insight into the audiences their messages target.

For moving processing power to the smallest devices, with its low-power chips that handle voice interactions

Semiconductor company Syntiant builds low-power processors designed to run artificial intelligence algorithms. Because the company's chips are so small, they're ideal for bringing more sophisticated algorithms to consumer tech devices, particularly when it comes to voice assistants. Two of Syntiant's processors can now be used with Amazon's Alexa Voice Service, which enables developers to more easily add the popular voice assistant to their own hardware devices without needing to access the cloud. In 2019, Syntiant raised $30 million from the likes of Amazon, Microsoft, Motorola, and Intel Capital.

For plugging leaks that waste water

Wint builds software that can help stop water leaks. That might not sound like a big problem, but Wint says that more than 25% of water in commercial buildings is wasted, often due to undiscovered leaks. That's why the company launched a machine-learning-based tool that can identify leaks and waste by looking for anomalies in water use. Managers for construction sites and commercial facilities can then shut off the water before pipes burst. In 2019, the company's attention to water leaks helped it grow its revenue by 400%, and it has attracted attention from Fortune 100 companies, one of which reports that Wint has reduced its water consumption by 24%.

For serving restaurants an intelligent order taker across app, phone, and drive-through

If you've ever ordered food at a drive-through restaurant and discovered that the items you got weren't the ones you asked for, you know that the whole affair is prone to human error. Launched in 2019, Interactions' Guest Experience Platform (GXP) uses AI to accurately field such orders, along with ones made via phone and text. The technology is designed to unflinchingly handle complex custom orders, and yes, it can ask you if you want fries with that. Interactions has already handled 3 million orders for clients you've almost certainly ordered lunch from recently.

For giving birth to Kai (born from the same Stanford research as Siri), who has become a finance whiz

Kasisto makes digital assistants that know a lot about personal finance and know how to talk to human beings. Its technology, called KAI, is the AI brain behind virtual assistants offered by banks and other financial institutions to help their customers get their business done and make better decisions. Kasisto was incubated at the Stanford Research Institute, and KAI branched from the same code base and research that birthed Apple's Siri assistant. Kasisto says nearly 18 million banking customers now have access to KAI through mobile, web, or voice channels.



The New ABCs: Artificial Intelligence, Blockchain And How Each Complements The Other – JD Supra

The terms "revolution" and "disruption," in the context of technological innovation, are probably bandied about a bit more liberally than they should be. Technological revolution and disruption imply upheaval and systemic reevaluation of the way that humans interact with industry and even each other. Actual technological advancement, however, moves at a much slower pace and tends to augment our current processes rather than to outright displace them. Oftentimes, we fail to realize the ubiquity of legacy systems in our everyday lives, sometimes to our own detriment.

Consider the keyboard. The QWERTY layout of keys is standard for English keyboards across the world. Even though the layout remains a mainstay of modern office setups, its origins trace back to the mass popularization of a typewriter manufactured and sold by E. Remington & Sons in 1874.[1] Urban legend has it that the layout was designed to slow typists down to keep typing mechanisms from jamming, yet the reality reveals otherwise: the layout was actually designed to assist those transcribing messages from Morse code.[2] Once typists took to the format, the keyboard as we know it today was embraced as a global standard, even as the use of Morse code declined.[3] Like QWERTY, our familiarity and comfort with legacy systems has contributed to their persistence. These systems are varied in their scope, and they touch everything: healthcare, supply chains, our financial systems, and even the way we interact at a human level. However, their use and value may be tested sooner than we realize.

Artificial intelligence (AI) and blockchain technology (blockchain) are two novel innovations that offer the opportunity for us to move beyond our legacy systems and streamline enterprise management and compliance in ways previously unimaginable. However, their potential is often clouded by their buzzword status, with bad actors taking advantage of the hype. When one cuts through the haze, it becomes clear that these two technologies hold significant transformative potential. While these new innovations can certainly function on their own, AI and blockchain also complement one another in ways that offer business solutions not only the ability to build upon legacy enterprise systems but also the power to eventually upend them in favor of next-level solutions. Getting to that point, however, takes time and is not without cost. While humans are generally quick to embrace technological change, our regulatory frameworks take longer to adapt. The need to address this constraint is pressing: real market solutions for these technologies have started to come online, while regulatory hurdles and opacity abound. As innovators seek to exploit the convergence of AI and blockchain, they must pay careful attention to overcoming both the technical and regulatory hurdles that accompany them. Do so successfully, and the rewards promise to be bountiful.

First, a bit of taxonomy is in order.

AI in a Nutshell:

Artificial intelligence is the capability of a machine to imitate intelligent human behavior, such as learning, understanding language, solving problems, planning, and identifying objects.[4] More practically speaking, however, today's AI is actually mostly limited to "if X, then Y" varieties of simple tasks. It is through supervised learning that AI is trained, and this process requires an enormous amount of data. For example, IBM's question-answering supercomputer Watson was able to beat Jeopardy! champions Brad Rutter and Ken Jennings in 2011 because Watson had been coded to understand simple questions by being fed countless iterations and had access to vast knowledge in the form of digital data. Likewise, Google DeepMind's AlphaGo defeated the Go champion Lee Sedol in 2016 because AlphaGo had undergone countless instances of Go scenarios and collected them as data. As such, most implementations of AI involve simple tasks, assuming that relevant information is readily accessible. In light of this, Andrew Ng, the Stanford roboticist, noted that "[i]f a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the near future."[5]

Moreover, a significant portion of the AI currently in use or under development is based on machine learning. Machine learning is a method by which AI adapts its algorithms and models based on exposure to new data, thereby allowing AI to learn without being programmed to perform specific tasks. Developing high-performance machine-learning-based AI therefore requires substantial amounts of data. Data high in both quality and quantity will lead to better AI, since an AI instance indiscriminately accepts all data provided to it and can refine and improve its algorithms only to the extent of the provided data. For example, AI that visually distinguishes Labradors from other breeds of dogs will become better at its job the more it is exposed to clear and accurate pictures of Labradors.
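That data-volume point can be illustrated with a toy experiment, under invented assumptions: two Gaussian clusters stand in for "Labrador" and "other" image features, and a simple nearest-neighbour rule stands in for a production vision model. The identical algorithm is simply handed different amounts of training data and scored on a held-out test set.

```python
# Toy demonstration that the same learning rule improves with more data.
# "Labrador" examples cluster around (1, 1); "other" around (-1, -1).
# All data is synthetic; the seed makes the run reproducible.

import random

def nearest_label(train, point):
    """Predict the label of the closest training example (1-nearest-neighbour)."""
    x, y = point
    return min(train, key=lambda t: (t[0] - x) ** 2 + (t[1] - y) ** 2)[2]

def make_data(n_per_class, rng):
    data = []
    for _ in range(n_per_class):
        data.append((rng.gauss(1, 1), rng.gauss(1, 1), "labrador"))
        data.append((rng.gauss(-1, 1), rng.gauss(-1, 1), "other"))
    return data

def accuracy(train, test):
    hits = sum(nearest_label(train, (x, y)) == lbl for x, y, lbl in test)
    return hits / len(test)

rng = random.Random(0)
test = make_data(200, rng)
small = accuracy(make_data(5, rng), test)    # 10 training examples
large = accuracy(make_data(500, rng), test)  # 1,000 training examples
```

With only a handful of examples the classifier's accuracy is erratic from run to run; with hundreds it reliably approaches the best achievable given the noise in the clusters.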

It is in these data amalgamations that AI does its job best. Scanning and analyzing vast subsets of data is something that a computer can do far more rapidly than a human. However, AI is not perfect, and many of the pitfalls it is prone to are often the result of the difficulty of conveying how humans process information in contrast to machines. One example of this phenomenon that has dogged the technology has been AI's penchant for "hallucinations." An AI algorithm hallucinates when the input is interpreted by the machine as something that seems implausible to a human looking at the same thing.[6] Case in point: AI has interpreted an image of a turtle as that of a gun, or a rifle as a helicopter.[7] This occurs because machines are hypersensitive to, and interpret, the tiniest of pixel patterns that we humans do not process. Because of the complexity of this analysis, developers are only now beginning to understand such AI phenomena.

When one moves beyond pictures of guns and turtles, however, AI's shortfalls can become much less innocuous. AI learning is based on inputted data, yet much of this data reflects the inherent shortfalls and behaviors of everyday individuals. As such, without proper correction for bias and other human assumptions, AI can, for example, perpetuate racial stereotypes and racial profiling.[8] Therefore, proper care over what goes into the system, and over who gets access to the outputs, must be employed for the ethical use of AI. But therein lies an additional problem: who has access to enough data to really take full advantage of and develop robust AI?

Not surprisingly, because large companies are better able than individuals or smaller entities to collect and manage ever-larger amounts of data, such companies have remained better positioned to develop complex AI. In response to this tilted landscape, various private and public organizations, including the U.S. Department of Justice's Bureau of Justice, Google Scholar and the International Monetary Fund, have launched open-source initiatives to make publicly available the vast amounts of data those organizations have collected over many years.

Blockchain in a Nutshell:

Blockchain technology as we know it today came onto the scene in 2009 with the rise of Bitcoin, perhaps the most famous application of the technology. Fundamentally, blockchain is a data structure that makes it possible to create a tamper-proof, distributed, peer-to-peer system of ledgers containing immutable, time-stamped and cryptographically connected blocks of data. In practice, this means that data can be written only once onto a ledger, which is then read-only for every user. Many of the most heavily used blockchain protocols, such as the Bitcoin and Ethereum networks, maintain and update their distributed ledgers in a decentralized manner, in contrast to traditional networks that rely on a trusted, centralized data repository.[9] By structuring the network this way, these blockchain mechanisms remove the need for a trusted third party to handle and store transaction data. Instead, data are distributed so that every user has access to the same information at the same time. To update a ledger's distributed information, the network employs pre-defined consensus mechanisms and military-grade cryptography to prevent malicious actors from retroactively editing or tampering with previously recorded information. In most cases, networks are open source, maintained by a dedicated community and accessible to any connected device that can validate transactions on a ledger, which is referred to as a node.
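The cryptographic chaining described above can be sketched in a few lines. This is a bare-bones illustration of hash-linked, tamper-evident blocks, not any real protocol's block format or consensus mechanism:

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """A block commits to its payload, a timestamp, and the previous block's hash."""
    block = {"data": data, "timestamp": time.time(), "prev_hash": prev_hash}
    payload = json.dumps({k: block[k] for k in ("data", "timestamp", "prev_hash")},
                         sort_keys=True).encode()
    block["hash"] = hashlib.sha256(payload).hexdigest()
    return block

def is_valid(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        payload = json.dumps({k: block[k] for k in ("data", "timestamp", "prev_hash")},
                             sort_keys=True).encode()
        if block["hash"] != hashlib.sha256(payload).hexdigest():
            return False  # block contents no longer match its recorded hash
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the chain of hashes is broken
    return True

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block("alice pays bob 5", chain[-1]["hash"]))
chain.append(make_block("bob pays carol 2", chain[-1]["hash"]))
print(is_valid(chain))   # True

chain[1]["data"] = "alice pays bob 500"   # retroactive tampering...
print(is_valid(chain))   # ...is immediately detectable: False
```

Because each block's hash covers the previous block's hash, editing any historical entry invalidates every block after it, which is what makes retroactive tampering detectable across the distributed copies of the ledger.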

Nevertheless, the decentralizing feature of blockchain comes with significant resource and processing drawbacks. Many blockchain-enabled platforms run very slowly and have interoperability and scalability problems. Moreover, these networks use massive amounts of energy: the Bitcoin network, for example, requires about 50 terawatt-hours per year, roughly equivalent to the energy needs of the entire country of Singapore.[10] To ameliorate these problems, several market participants have developed enterprise blockchains with permissioned networks. While many of these may be open source, the networks are led by known entities that determine who may verify transactions on the blockchain, and the required consensus mechanisms are therefore much more energy efficient.

Not unlike AI, a blockchain can also be coded with certain automated processes to augment its recordkeeping abilities, and, arguably, it is these types of processes that contributed to blockchain's rise. That rise, some may say, began with the introduction of the Ethereum network and its engineering around "smart contracts," a term used to describe computer code that automatically executes all or part of an agreement and is stored on a blockchain-enabled platform. Smart contracts are neither contracts in the sense of legally binding agreements nor smart in employing applications of AI. Rather, they consist of coded, automated parameters responsive to what is recorded on a blockchain. For example, if the parties in a blockchain network have indicated, by initiating a transaction, that certain parameters have been met, the code will execute the step or steps those parameters trigger. The input parameters and the execution steps for smart contracts need to be specific: the digital equivalent of "if X, then Y" statements. In other words, when required conditions have been met, a particular specified outcome occurs; in the same way that a vending machine sells a can of soda once change has been deposited, smart contracts allow title to digital assets to be transferred upon the occurrence of certain events. Nevertheless, the tasks that smart contracts are currently capable of performing are fairly rudimentary. As developers figure out how to expand their networks, integrate them with enterprise-level technologies and develop more responsive smart contracts, there is every reason to believe that smart contracts and their decentralized applications (dApps) will see increased adoption.
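The vending-machine analogy can be sketched as code. The class and field names below are hypothetical, and a real smart contract would execute on-chain in a language such as Solidity rather than in ordinary Python; the point is only the "if X, then Y" automaticity:

```python
# A minimal sketch of a smart contract's "if X, then Y" character: once the
# recorded deposits meet the asking price, title transfers automatically,
# with no third party deciding whether to perform.
class TitleTransferContract:
    def __init__(self, asset, seller, price):
        self.asset = asset
        self.owner = seller
        self.price = price
        self.deposited = 0

    def deposit(self, buyer, amount):
        """Record a payment; if the pre-set condition is met, execute the transfer."""
        self.deposited += amount
        if self.deposited >= self.price:   # "if X..."
            self.owner = buyer             # "...then Y"

contract = TitleTransferContract("parcel-42", seller="alice", price=100)
contract.deposit("bob", 60)
print(contract.owner)   # still "alice": the condition has not yet been met
contract.deposit("bob", 40)
print(contract.owner)   # "bob": the deposits reached the price, title transferred
```

Like the vending machine, the contract has no discretion: the outcome is fully determined by whether the recorded inputs satisfy the pre-set parameter.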

AI and blockchain technology may appear to be diametric opposites. AI is an active technology: it analyzes what is around it and formulates solutions based on the history of what it has been exposed to. By contrast, blockchain is agnostic with respect to the data written into it; the technology bundle is largely passive. It is primarily in that distinction that we find synergy, for each technology augments the strengths and tempers the weaknesses of the other. For example, AI requires access to big data sets in order to learn and improve, yet many of the sources of those data sets are hidden in proprietary silos. With blockchain, stakeholders are empowered to contribute data to an openly available, distributed network with immutability of data as a core feature. With a potentially larger pool of data to work from, the machine learning mechanisms of a widely distributed, blockchain-enabled and AI-powered solution could improve far faster than those of a private-data AI counterpart. On their own, these technologies are more limited. Blockchain technology, in and of itself, is not capable of evaluating the accuracy of the data written into its immutable network: garbage in, garbage out. AI, however, can act as a learned gatekeeper for what information may come on and off the network, and from whom. Indeed, the interplay between these diverse capabilities will likely lead to improvements across a broad array of industries, each with unique challenges that the two technologies together may overcome.
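The "learned gatekeeper" idea might look something like the following sketch, in which a crude statistical screen stands in for a trained model and a plain Python list stands in for the immutable ledger; all thresholds and readings are illustrative:

```python
# Screen incoming data before it is written into an append-only record,
# so that "garbage in" never becomes permanently recorded garbage.
def make_gatekeeper(history, tolerance=3.0):
    """Flag values more than `tolerance` standard deviations from past data."""
    mean = sum(history) / len(history)
    var = sum((x - mean) ** 2 for x in history) / len(history)
    std = var ** 0.5 or 1.0
    return lambda value: abs(value - mean) <= tolerance * std

ledger = []   # stand-in for an immutable, distributed ledger
plausible = make_gatekeeper([10.1, 9.8, 10.3, 10.0, 9.9])

for reading in [10.2, 9.7, 98.6, 10.1]:   # 98.6 is an implausible outlier
    if plausible(reading):
        ledger.append(reading)            # only screened data gets written

print(ledger)   # the outlier never reaches the record
```

A real deployment would put a trained model, not a mean-and-deviation check, in the gatekeeper role, but the division of labor is the same: AI judges plausibility at the gate, and the ledger's immutability preserves whatever passes.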

[1] See Rachel Metz, Why We Can't Quit the QWERTY Keyboard, MIT Technology Review (Oct. 13, 2018), available at: https://www.technologyreview.com/s/611620/why-we-cant-quit-the-qwerty-keyboard/.

[2] Alexis Madrigal, The Lies You've Been Told About the Origin of the QWERTY Keyboard, The Atlantic (May 3, 2013), available at: https://www.theatlantic.com/technology/archive/2013/05/the-lies-youve-been-told-about-the-origin-of-the-qwerty-keyboard/275537/.

[3] See Metz, supra note 1.

[4] See Artificial Intelligence, Merriam-Webster's Online Dictionary, Merriam-Webster (last accessed Mar. 27, 2019), available at: https://www.merriam-webster.com/dictionary/artificial%20intelligence.

[5] See Andrew Ng, What Artificial Intelligence Can and Can't Do Right Now, Harvard Business Review (Nov. 9, 2016), available at: https://hbr.org/2016/11/what-artificial-intelligence-can-and-cant-do-right-now.

[6] Louise Matsakis, Artificial Intelligence May Not Hallucinate After All, Wired (May 8, 2019), available at: https://www.wired.com/story/adversarial-examples-ai-may-not-hallucinate/.

[7] Id.

[8] Jerry Kaplan, Opinion: Why Your AI Might Be Racist, Washington Post (Dec. 17, 2018), available at: https://www.washingtonpost.com/opinions/2018/12/17/why-your-ai-might-be-racist/?noredirect=on&utm_term=.568983d5e3ec.

[9] See Shaanan Cohney, David A. Hoffman, Jeremy Sklaroff and David A. Wishnick, Coin-Operated Capitalism, Penn. Inst. for L. & Econ. (No. 18-37) (Jul. 17, 2018) at 12, available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3215345##.

[10] See Bitcoin Energy Consumption Index (last accessed May 13, 2019), available at: https://digiconomist.net/bitcoin-energy-consumption.


The rest is here:
The New ABCs: Artificial Intelligence, Blockchain And How Each Complements The Other - JD Supra

Blockchain and Artificial Intelligence Convergence Powering the Robotics Capability – EnterpriseTalk

Enterprises are using multiple applications powered by the convergence of blockchain and artificial intelligence to increase the efficiency and effectiveness of robotic process automation (RPA).

It is common knowledge that robotics is powered by artificial intelligence, which already delivers excellence and efficiency in well-known areas such as cryptocurrencies, chatbots, and voice-assisted technologies.

Blockchain In the Times of AI

The field of robotics is immensely challenging, and to grow in this segment, companies need to offer reliable and affordable solutions to their clients and customers.

The exciting news is that RPA is also one of the most promising areas for the convergence of blockchain and AI. This convergence is now producing efficiencies in the field of robotics that have never been seen before.

Robotics has gained massive popularity across industries over the years by using artificial intelligence to make processes more effective and error-free. Now, blockchain can keep the data decentralized and free from any central or concentrated control. By combining the decentralized power of blockchain with the agility of artificial intelligence, the field of robotics can be elevated and advanced in several ways.

The features offered by artificial intelligence will multiply the efficiency of robots through automation, while the data immutability offered by blockchain will make the processes tamper-proof. When these technologies are applied to robotics together, the operating mechanism can be pre-set to achieve the desired objectives and business goals.

Swarm Robotics: The Biggest Beneficiary?

The significance of artificial intelligence and blockchain is most prominent in the case of swarm robotics, mainly because both innovations can be applied collectively to control a group of robots. AI controls each swarm robot as it operates according to pre-set principles and requirements, and the collective response and behavior of the robots can be significantly enhanced when artificial intelligence and blockchain are applied together.

AIoT: Convergence of Artificial Intelligence with the Internet of Things

This convergence has enormous benefits for scalability and an enhanced scope of operations. Global enterprises have already started witnessing applications of blockchain and artificial intelligence as swarm robotics gains popularity, specifically in areas related to entertainment, healthcare, and farming. Although several stakeholders have explicitly expressed concerns about the security and safety of these systems, there is hardly any negative view about the potential of the applications to benefit the industry. Blockchain is a credible technology for alleviating stakeholders' concerns about the privacy and secrecy of the data: using secure cryptographic signatures and other advanced techniques available in the blockchain space, security and safety concerns regarding robots can be handled.

Artificial intelligence will power the robots and remain the strength of this integration, while blockchain technology will play a passive role, providing backup support to ensure data security and safety. Hence, when this convergence is applied to robotics in an integrated manner, robotics will transform and benefit the industry in a remarkably positive way.

Originally posted here:
Blockchain and Artificial Intelligence Convergence Powering the Robotics Capability - EnterpriseTalk