Archive for the ‘Artificial Intelligence’ Category

The History of Artificial Intelligence – Science in the News

by Rockwell Anyoha

In the first half of the 20th century, science fiction familiarized the world with the concept of artificially intelligent robots. It began with the "heartless" Tin Man from The Wizard of Oz and continued with the humanoid robot that impersonated Maria in Metropolis. By the 1950s, we had a generation of scientists, mathematicians, and philosophers with the concept of artificial intelligence (or AI) culturally assimilated in their minds. One such person was Alan Turing, a young British polymath who explored the mathematical possibility of artificial intelligence. Turing suggested that humans use available information as well as reason in order to solve problems and make decisions, so why can't machines do the same thing? This was the logical framework of his 1950 paper, "Computing Machinery and Intelligence," in which he discussed how to build intelligent machines and how to test their intelligence.

Unfortunately, talk is cheap. What stopped Turing from getting to work right then and there? First, computers needed to fundamentally change. Before 1949, computers lacked a key prerequisite for intelligence: they couldn't store commands, only execute them. In other words, computers could be told what to do but couldn't remember what they did. Second, computing was extremely expensive. In the early 1950s, the cost of leasing a computer ran up to $200,000 a month. Only prestigious universities and big technology companies could afford to dillydally in these uncharted waters. A proof of concept as well as advocacy from high-profile people were needed to persuade funding sources that machine intelligence was worth pursuing.

Five years later, the proof of concept was initialized through Allen Newell, Cliff Shaw, and Herbert Simon's Logic Theorist. The Logic Theorist was a program designed to mimic the problem-solving skills of a human and was funded by the Research and Development (RAND) Corporation. It's considered by many to be the first artificial intelligence program and was presented at the Dartmouth Summer Research Project on Artificial Intelligence (DSRPAI) hosted by John McCarthy and Marvin Minsky in 1956. At this historic conference, McCarthy, imagining a great collaborative effort, brought together top researchers from various fields for an open-ended discussion on artificial intelligence, the term which he coined at the very event. Sadly, the conference fell short of McCarthy's expectations; people came and went as they pleased, and there was a failure to agree on standard methods for the field. Despite this, everyone wholeheartedly aligned with the sentiment that AI was achievable. The significance of this event cannot be overstated, as it catalyzed the next twenty years of AI research.

From 1957 to 1974, AI flourished. Computers could store more information and became faster, cheaper, and more accessible. Machine learning algorithms also improved, and people got better at knowing which algorithm to apply to their problem. Early demonstrations such as Newell and Simon's General Problem Solver and Joseph Weizenbaum's ELIZA showed promise toward the goals of problem solving and the interpretation of spoken language, respectively. These successes, as well as the advocacy of leading researchers (namely the attendees of the DSRPAI), convinced government agencies such as the Defense Advanced Research Projects Agency (DARPA) to fund AI research at several institutions. The government was particularly interested in a machine that could transcribe and translate spoken language as well as high-throughput data processing. Optimism was high and expectations were even higher. In 1970, Marvin Minsky told Life Magazine, "from three to eight years we will have a machine with the general intelligence of an average human being." However, while the basic proof of principle was there, there was still a long way to go before the end goals of natural language processing, abstract thinking, and self-recognition could be achieved.

Breaching the initial fog of AI revealed a mountain of obstacles. The biggest was the lack of computational power to do anything substantial: computers simply couldn't store enough information or process it fast enough. In order to communicate, for example, one needs to know the meanings of many words and understand them in many combinations. Hans Moravec, a doctoral student of McCarthy at the time, stated that computers were still millions of times too weak to exhibit intelligence. As patience dwindled, so did the funding, and research came to a slow roll for ten years.

In the 1980s, AI was reignited by two sources: an expansion of the algorithmic toolkit, and a boost of funds. John Hopfield and David Rumelhart popularized deep learning techniques which allowed computers to learn using experience. On the other hand, Edward Feigenbaum introduced expert systems, which mimicked the decision-making process of a human expert. The program would ask an expert in a field how to respond in a given situation, and once this was learned for virtually every situation, non-experts could receive advice from that program. Expert systems were widely used in industries. The Japanese government heavily funded expert systems and other AI-related endeavors as part of their Fifth Generation Computer Project (FGCP). From 1982 to 1990, they invested $400 million with the goals of revolutionizing computer processing, implementing logic programming, and improving artificial intelligence. Unfortunately, most of the ambitious goals were not met. However, it could be argued that the indirect effects of the FGCP inspired a talented young generation of engineers and scientists. Regardless, funding of the FGCP ceased, and AI fell out of the limelight.

Ironically, in the absence of government funding and public hype, AI thrived. During the 1990s and 2000s, many of the landmark goals of artificial intelligence had been achieved. In 1997, reigning world chess champion and grandmaster Garry Kasparov was defeated by IBM's Deep Blue, a chess-playing computer program. This highly publicized match was the first time a reigning world chess champion lost to a computer and served as a huge step towards an artificially intelligent decision-making program. In the same year, speech recognition software, developed by Dragon Systems, was implemented on Windows. This was another great step forward, but in the direction of the spoken-language interpretation endeavor. It seemed that there wasn't a problem machines couldn't handle. Even human emotion was fair game, as evidenced by Kismet, a robot developed by Cynthia Breazeal that could recognize and display emotions.

We haven't gotten any smarter about how we are coding artificial intelligence, so what changed? It turns out the fundamental limit of computer storage that was holding us back 30 years ago was no longer a problem. Moore's Law, which estimates that the memory and speed of computers doubles roughly every two years, had finally caught up and, in many cases, surpassed our needs. This is precisely how Deep Blue was able to defeat Garry Kasparov in 1997, and how Google's AlphaGo was able to defeat Chinese Go champion Ke Jie only a few months ago. It offers a bit of an explanation for the roller coaster of AI research; we saturate the capabilities of AI to the level of our current computational power (computer storage and processing speed), and then wait for Moore's Law to catch up again.

We now live in the age of big data, an age in which we have the capacity to collect huge sums of information too cumbersome for a person to process. The application of artificial intelligence in this regard has already been quite fruitful in several industries such as technology, banking, marketing, and entertainment. We've seen that even if algorithms don't improve much, big data and massive computing simply allow artificial intelligence to learn through brute force. There may be evidence that Moore's Law is slowing down a tad, but the increase in data certainly hasn't lost any momentum. Breakthroughs in computer science, mathematics, or neuroscience all serve as potential outs through the ceiling of Moore's Law.

So what is in store for the future? In the immediate future, AI language is looking like the next big thing. In fact, it's already underway. I can't remember the last time I called a company and directly spoke with a human. These days, machines are even calling me! One could imagine interacting with an expert system in a fluid conversation, or having a conversation in two different languages being translated in real time. We can also expect to see driverless cars on the road in the next twenty years (and that is conservative). In the long term, the goal is general intelligence, that is, a machine that surpasses human cognitive abilities in all tasks. This is along the lines of the sentient robot we are used to seeing in movies. To me, it seems inconceivable that this would be accomplished in the next 50 years. Even if the capability is there, the ethical questions would serve as a strong barrier against fruition. When that time comes (but better even before the time comes), we will need to have a serious conversation about machine policy and ethics (ironically both fundamentally human subjects), but for now, we'll allow AI to steadily improve and run amok in society.

Rockwell Anyoha is a graduate student in the department of molecular biology with a background in physics and genetics. His current project employs the use of machine learning to model animal behavior. In his free time, Rockwell enjoys playing soccer and debating mundane topics.

This article is part of a Special Edition on Artificial Intelligence.

Brief Timeline of AI

https://www.livescience.com/47544-history-of-a-i-artificial-intelligence-infographic.html

Complete Historical Overview

http://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf

Dartmouth Summer Research Project on Artificial Intelligence

https://www.aaai.org/ojs/index.php/aimagazine/article/view/1904/1802

Future of AI

https://www.technologyreview.com/s/602830/the-future-of-artificial-intelligence-and-cybernetics/

Discussion on Future Ethical Challenges Facing AI

http://www.bbc.com/future/story/20170307-the-ethical-challenge-facing-artificial-intelligence

Detailed Review of Ethics of AI

https://intelligence.org/files/EthicsofAI.pdf

Excerpt from:
The History of Artificial Intelligence - Science in the News

What is Artificial Intelligence (AI)? – AI Definition and How …

Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition and machine vision.

As the hype around AI has accelerated, vendors have been scrambling to promote how their products and services use AI. Often what they refer to as AI is simply one component of AI, such as machine learning. AI requires a foundation of specialized hardware and software for writing and training machine learning algorithms. No one programming language is synonymous with AI, but a few, including Python, R and Java, are popular.

In general, AI systems work by ingesting large amounts of labeled training data, analyzing the data for correlations and patterns, and using these patterns to make predictions about future states. In this way, a chatbot that is fed examples of text chats can learn to produce lifelike exchanges with people, or an image recognition tool can learn to identify and describe objects in images by reviewing millions of examples.
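
To make the "labeled examples in, predictions out" idea concrete, here is a minimal sketch in Python (one of the languages the article names as popular for AI). The chat messages, intent labels, and model choice are invented for illustration; production chatbots and image recognizers are far more sophisticated.

# A minimal sketch of learning patterns from labeled training data with scikit-learn.
# The example messages and intents are made up purely for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny labeled training set: each chat message is tagged with an intent.
messages = [
    "what time do you open",
    "when are you open on weekends",
    "I want to return my order",
    "how do I send this item back",
    "my package never arrived",
    "where is my delivery",
]
intents = ["hours", "hours", "returns", "returns", "shipping", "shipping"]

# Learn correlations between words and intents, then predict on unseen text.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(messages, intents)

print(model.predict(["can I return a gift", "what are your opening hours"]))
# Expected output (roughly): ['returns' 'hours']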

AI programming focuses on three cognitive skills: learning, reasoning and self-correction.

Learning processes. This aspect of AI programming focuses on acquiring data and creating rules for how to turn the data into actionable information. The rules, which are called algorithms, provide computing devices with step-by-step instructions for how to complete a specific task.

Reasoning processes. This aspect of AI programming focuses on choosing the right algorithm to reach a desired outcome.

Self-correction processes. This aspect of AI programming is designed to continually fine-tune algorithms and ensure they provide the most accurate results possible.
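
As a rough, hypothetical illustration of how these three processes can fit together in code, the sketch below "learns" candidate models from data, "reasons" by picking the one that best meets an accuracy goal, and "self-corrects" by re-fitting when fresh data shows performance slipping. The dataset, threshold, and model choices are all assumptions made for the example.

# Learning, reasoning, and self-correction in one toy loop (illustrative only).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_new, y_train, y_new = train_test_split(X, y, random_state=0)

# Learning: turn data into candidate rules (models).
candidates = [LogisticRegression(max_iter=1000), DecisionTreeClassifier(max_depth=3)]

# Reasoning: choose the algorithm that best reaches the desired outcome (accuracy here).
best = max(candidates, key=lambda m: cross_val_score(m, X_train, y_train, cv=5).mean())
best.fit(X_train, y_train)

# Self-correction: monitor accuracy on fresh data and re-fit when it degrades.
if best.score(X_new, y_new) < 0.8:          # illustrative threshold
    best.fit(X_new, y_new)                   # fine-tune on the newer data
print(type(best).__name__, best.score(X_new, y_new))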

AI is important because it can give enterprises insights into their operations that they may not have been aware of previously and because, in some cases, AI can perform tasks better than humans. Particularly when it comes to repetitive, detail-oriented tasks like analyzing large numbers of legal documents to ensure relevant fields are filled in properly, AI tools often complete jobs quickly and with relatively few errors.

This has helped fuel an explosion in efficiency and opened the door to entirely new business opportunities for some larger enterprises. Prior to the current wave of AI, it would have been hard to imagine using computer software to connect riders to taxis, but today Uber has become one of the largest companies in the world by doing just that. It utilizes sophisticated machine learning algorithms to predict when people are likely to need rides in certain areas, which helps proactively get drivers on the road before they're needed. As another example, Google has become one of the largest players for a range of online services by using machine learning to understand how people use their services and then improving them. In 2017, the company's CEO, Sundar Pichai, pronounced that Google would operate as an "AI first" company.

Today's largest and most successful enterprises have used AI to improve their operations and gain an advantage over their competitors.

Artificial neural networks and deep learning artificial intelligence technologies are quickly evolving, primarily because AI processes large amounts of data much faster and makes predictions more accurately than humanly possible.

While the huge volume of data being created on a daily basis would bury a human researcher, AI applications that use machine learning can take that data and quickly turn it into actionable information. As of this writing, the primary disadvantage of using AI is that it is expensive to process the large amounts of data that AI programming requires.

AI can be categorized as either weak or strong.

Arend Hintze, an assistant professor of integrative biology and computer science and engineering at Michigan State University, explained in a 2016 article that AI can be categorized into four types, beginning with the task-specific intelligent systems in wide use today and progressing to sentient systems, which do not yet exist. The four categories are reactive machines, limited memory, theory of mind and self-awareness.

AI is incorporated into a variety of different types of technology, including automation, machine learning, machine vision, natural language processing and robotics.

Artificial intelligence has made its way into a wide variety of markets. Here are nine examples.

AI in healthcare. The biggest bets are on improving patient outcomes and reducing costs. Companies are applying machine learning to make better and faster diagnoses than humans. One of the best-known healthcare technologies is IBM Watson. It understands natural language and can respond to questions asked of it. The system mines patient data and other available data sources to form a hypothesis, which it then presents with a confidence scoring schema. Other AI applications include using online virtual health assistants and chatbots to help patients and healthcare customers find medical information, schedule appointments, understand the billing process and complete other administrative processes. An array of AI technologies is also being used to predict, fight and understand pandemics such as COVID-19.

AI in business. Machine learning algorithms are being integrated into analytics and customer relationship management (CRM) platforms to uncover information on how to better serve customers. Chatbots have been incorporated into websites to provide immediate service to customers. Automation of job positions has also become a talking point among academics and IT analysts.

AI in education. AI can automate grading, giving educators more time. It can assess students and adapt to their needs, helping them work at their own pace. AI tutors can provide additional support to students, ensuring they stay on track. And it could change where and how students learn, perhaps even replacing some teachers.

AI in finance. AI in personal finance applications, such as Intuit Mint or TurboTax, is disrupting financial institutions. Applications such as these collect personal data and provide financial advice. Other programs, such as IBM Watson, have been applied to the process of buying a home. Today, artificial intelligence software performs much of the trading on Wall Street.

AI in law. The discovery process -- sifting through documents -- in law is often overwhelming for humans. Using AI to help automate the legal industry's labor-intensive processes is saving time and improving client service. Law firms are using machine learning to describe data and predict outcomes, computer vision to classify and extract information from documents and natural language processing to interpret requests for information.

AI in manufacturing. Manufacturing has been at the forefront of incorporating robots into the workflow. For example, industrial robots that were at one time programmed to perform single tasks and were separated from human workers increasingly function as cobots: smaller, multitasking robots that collaborate with humans and take on responsibility for more parts of the job in warehouses, factory floors and other workspaces.

AI in banking. Banks are successfully employing chatbots to make their customers aware of services and offerings and to handle transactions that don't require human intervention. AI virtual assistants are being used to improve and cut the costs of compliance with banking regulations. Banking organizations are also using AI to improve their decision-making for loans, and to set credit limits and identify investment opportunities.

AI in transportation. In addition to AI's fundamental role in operating autonomous vehicles, AI technologies are used in transportation to manage traffic, predict flight delays, and make ocean shipping safer and more efficient.

Security. AI and machine learning are at the top of the buzzword list security vendors use today to differentiate their offerings. Those terms also represent truly viable technologies. Organizations use machine learning in security information and event management (SIEM) software and related areas to detect anomalies and identify suspicious activities that indicate threats. By analyzing data and using logic to identify similarities to known malicious code, AI can provide alerts to new and emerging attacks much sooner than human employees and previous technology iterations. The maturing technology is playing a big role in helping organizations fight off cyber attacks.
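
A hedged sketch of the anomaly-detection idea described above: scikit-learn's IsolationForest flags events that look unlike the "normal" events it was fitted on. The event features, counts, and threshold are invented; real SIEM pipelines ingest far richer telemetry.

# Flagging unusual security events with an isolation forest (toy data, not a real SIEM).
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic "normal" events: modest hourly login counts and transfer volumes (KB).
normal = np.column_stack([rng.poisson(5, 500), rng.normal(200, 50, 500)])
# A few suspicious events: bursts of logins and very large transfers.
suspicious = np.array([[80, 5000], [120, 9000]])

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)
print(detector.predict(suspicious))   # -1 marks events the model considers anomalous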

Some industry experts believe the term artificial intelligence is too closely linked to popular culture, and this has caused the general public to have improbable expectations about how AI will change the workplace and life in general.

While AI tools present a range of new functionality for businesses, the use of artificial intelligence also raises ethical questions because, for better or worse, an AI system will reinforce what it has already learned.

This can be problematic because machine learning algorithms, which underpin many of the most advanced AI tools, are only as smart as the data they are given in training. Because a human being selects what data is used to train an AI program, the potential for machine learning bias is inherent and must be monitored closely.

Anyone looking to use machine learning as part of real-world, in-production systems needs to factor ethics into their AI training processes and strive to avoid bias. This is especially true when using AI algorithms that are inherently unexplainable in deep learning and generative adversarial network (GAN) applications.

Explainability is a potential stumbling block to using AI in industries that operate under strict regulatory compliance requirements. For example, financial institutions in the United States operate under regulations that require them to explain their credit-issuing decisions. When a decision to refuse credit is made by AI programming, however, it can be difficult to explain how the decision was arrived at because the AI tools used to make such decisions operate by teasing out subtle correlations between thousands of variables. When the decision-making process cannot be explained, the program may be referred to as black box AI.
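
One widely used way to shed light on an otherwise opaque model is permutation importance: shuffle one input at a time and measure how much the model's performance drops. The sketch below uses synthetic data and a generic gradient-boosting classifier; it illustrates the technique, not any particular lender's system.

# Permutation importance as a simple window into a "black box" model (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=6, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Rank inputs by how much shuffling them hurts held-out accuracy.
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature_{i}: importance {result.importances_mean[i]:.3f}")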

Despite potential risks, there are currently few regulations governing the use of AI tools, and where laws do exist, they typically pertain to AI indirectly. For example, as previously mentioned, United States Fair Lending regulations require financial institutions to explain credit decisions to potential customers. This limits the extent to which lenders can use deep learning algorithms, which by their nature are opaque and lack explainability.

The European Union's General Data Protection Regulation (GDPR) puts strict limits on how enterprises can use consumer data, which impedes the training and functionality of many consumer-facing AI applications.

In October 2016, the National Science and Technology Council issued a report examining the potential role governmental regulation might play in AI development, but it did not recommend specific legislation be considered.

Crafting laws to regulate AI will not be easy, in part because AI comprises a variety of technologies that companies use for different ends, and partly because regulations can come at the cost of AI progress and development. The rapid evolution of AI technologies is another obstacle to forming meaningful regulation of AI. Technology breakthroughs and novel applications can make existing laws instantly obsolete. For example, existing laws regulating the privacy of conversations and recorded conversations do not cover the challenge posed by voice assistants like Amazon's Alexa and Apple's Siri that gather but do not distribute conversation -- except to the companies' technology teams which use it to improve machine learning algorithms. And, of course, the laws that governments do manage to craft to regulate AI don't stop criminals from using the technology with malicious intent.

The terms AI and cognitive computing are sometimes used interchangeably, but, generally speaking, the label AI is used in reference to machines that replace human intelligence by simulating how we sense, learn, process and react to information in the environment.

The label cognitive computing is used in reference to products and services that mimic and augment human thought processes.

The concept of inanimate objects endowed with intelligence has been around since ancient times. The Greek god Hephaestus was depicted in myths as forging robot-like servants out of gold. Engineers in ancient Egypt built statues of gods animated by priests. Throughout the centuries, thinkers from Aristotle to the 13th century Spanish theologian Ramon Llull to René Descartes and Thomas Bayes used the tools and logic of their times to describe human thought processes as symbols, laying the foundation for AI concepts such as general knowledge representation.

The late 19th and first half of the 20th centuries brought forth the foundational work that would give rise to the modern computer. In 1836, Cambridge University mathematician Charles Babbage and Augusta Ada Byron, Countess of Lovelace, invented the first design for a programmable machine.

1940s. Princeton mathematician John Von Neumann conceived the architecture for the stored-program computer -- the idea that a computer's program and the data it processes can be kept in the computer's memory. And Warren McCulloch and Walter Pitts laid the foundation for neural networks.

1950s. With the advent of modern computers, scientists could test their ideas about machine intelligence. One method for determining whether a computer has intelligence was devised by the British mathematician and World War II code-breaker Alan Turing. The Turing Test focused on a computer's ability to fool interrogators into believing its responses to their questions were made by a human being.

1956. The modern field of artificial intelligence is widely cited as starting this year during a summer conference at Dartmouth College. Funded by the Rockefeller Foundation, the conference was attended by 10 luminaries in the field, including AI pioneers Marvin Minsky, Oliver Selfridge and John McCarthy, who is credited with coining the term artificial intelligence. Also in attendance were Allen Newell, a computer scientist, and Herbert A. Simon, an economist, political scientist and cognitive psychologist, who presented their groundbreaking Logic Theorist, a computer program capable of proving certain mathematical theorems and referred to as the first AI program.

1950s and 1960s. In the wake of the Dartmouth College conference, leaders in the fledgling field of AI predicted that a man-made intelligence equivalent to the human brain was around the corner, attracting major government and industry support. Indeed, nearly 20 years of well-funded basic research generated significant advances in AI: For example, in the late 1950s, Newell and Simon published the General Problem Solver (GPS) algorithm, which fell short of solving complex problems but laid the foundations for developing more sophisticated cognitive architectures; McCarthy developed Lisp, a language for AI programming that is still used today. In the mid-1960s MIT Professor Joseph Weizenbaum developed ELIZA, an early natural language processing program that laid the foundation for today's chatbots.

1970s and 1980s. But the achievement of artificial general intelligence proved elusive, not imminent, hampered by limitations in computer processing and memory and by the complexity of the problem. Government and corporations backed away from their support of AI research, leading to a fallow period lasting from 1974 to 1980 and known as the first "AI Winter." In the 1980s, research on deep learning techniques and industry's adoption of Edward Feigenbaum's expert systems sparked a new wave of AI enthusiasm, only to be followed by another collapse of government funding and industry support. The second AI winter lasted until the mid-1990s.

1990s through today. Increases in computational power and an explosion of data sparked an AI renaissance in the late 1990s that has continued to present times. The latest focus on AI has given rise to breakthroughs in natural language processing, computer vision, robotics, machine learning, deep learning and more. Moreover, AI is becoming ever more tangible, powering cars, diagnosing disease and cementing its role in popular culture. In 1997, IBM's Deep Blue defeated Russian chess grandmaster Garry Kasparov, becoming the first computer program to beat a world chess champion. Fourteen years later, IBM's Watson captivated the public when it defeated two former champions on the game show Jeopardy!. More recently, the historic defeat of 18-time World Go champion Lee Sedol by Google DeepMind's AlphaGo stunned the Go community and marked a major milestone in the development of intelligent machines.

Because hardware, software and staffing costs for AI can be expensive, many vendors are including AI components in their standard offerings or providing access to artificial intelligence as a service (AIaaS) platforms. AIaaS allows individuals and companies to experiment with AI for various business purposes and sample multiple platforms before making a commitment.

Popular AI cloud offerings include services from Amazon Web Services, Google Cloud, IBM Watson and Microsoft Azure.

Read more here:
What is Artificial Intelligence (AI)? - AI Definition and How ...

Artificial Intelligence – an overview | ScienceDirect Topics

12.10 Conclusion and Future Research

AI blockchain-enabled distributed autonomous energy organizations (DAEOs) may help to increase the energy efficiency, cyber security, and resilience of the electricity infrastructure. These are timely goals as we modernize the US power grid, a complex system of systems that requires secure and reliable communications and a more trustworthy global supply chain. While blockchain, AI, and IoT are creating a buzz right now, many challenges remain to be overcome to realize the full potential of these innovative technological solutions. A lot of news and media coverage of blockchain today falsely suggests that it is a panacea for all that ails us: climate change, cyber security, and volatile financial systems. There is similar hysteria around AI, with articles suggesting that the robots are coming and that AI will take all of our jobs. While these new technologies are disruptive in their own way and create some exciting new opportunities, many challenges remain. Several fundamental policy, regulatory, and scientific challenges exist before blockchain realizes its full disruptive potential.

Future research should continue to explore the challenges related to blockchain and distributed ledger technology. Applying AI blockchain to modernizing the electricity infrastructure also requires speed, agility, and affordable technology. AI-enhanced algorithms are expensive and often require prodigious data sets that must be broken down into a code that makes sense. However, a lot of noise (distracting data) is being collected and exchanged in the electricity infrastructure, making it difficult to identify cyber anomalies. When there is a lot of disparate data being exchanged at sub-second speeds, it is difficult to determine the cause of an anomaly, such as a software glitch, cyber-attack, weather event, or hybrid cyber-physical event. It can be very difficult to determine what normal looks like and set the accurate baseline that is needed to detect anomalies. Developing an AI blockchain-enhanced grid requires that the data be broken into observable patterns, which is very challenging from a cyber perspective when threats are complex, nonlinear, and evolving.
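
As a toy illustration of the baselining problem described above, the sketch below learns "normal" from a rolling window of a synthetic load signal and flags points that deviate sharply from it. The signal, window length, and threshold are invented for the example; real grid telemetry is noisier and multidimensional.

# Rolling-baseline anomaly flagging on a synthetic telemetry stream (illustrative only).
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
load = pd.Series(100 + 10 * np.sin(np.linspace(0, 20, 500)) + rng.normal(0, 2, 500))
load.iloc[350] += 40   # inject one abnormal spike (e.g., a fault or bad sensor reading)

baseline = load.rolling(window=48, min_periods=48).mean()   # what "normal" looks like
spread = load.rolling(window=48, min_periods=48).std()
z = (load - baseline) / spread

anomalies = z[z.abs() > 4].index.tolist()   # threshold chosen only for this toy example
print("flagged samples:", anomalies)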

Applying blockchain to modernizing and securing the electricity infrastructure presents several cyber-security challenges that should be further examined in future research. For example, Ethereum-based smart contracts provide the ability for anyone to write electronic code that can be executed in a blockchain. If an energy producer or consumer agrees to buy or sell renewable energy from a neighbor for an agreed-upon price, it can be captured in a blockchain-based smart contract. AI could help to increase efficiency by automating the auction to include other bidders and sellers in a more efficient and dynamic way; this would require a lot more data and analysis to recognize the discernible patterns that inform the AI algorithm of the smart contract's performance. Increased automation, however, will also require that the code of the blockchain is more resilient to cyber-attacks. Previously, Ethereum was shown to have several vulnerabilities that may undermine the trustworthiness of this transaction mechanism. Vulnerabilities in the code have been exploited in at least three multimillion-dollar cyber incidents. In June 2016, the DAO was hacked: its smart contract code was exploited, and approximately $50 million was extracted. In July 2017, code in an Ethereum wallet was exploited to extract $30 million in cryptocurrency. In January 2018, hackers stole roughly 58 billion yen ($532.6 million) from a Tokyo-based cryptocurrency exchange, Coincheck, Inc. The latter incident highlighted the need for increased security and regulatory protection for cryptocurrencies and other blockchain applications. The Coincheck hack appears to have exploited vulnerabilities in a hot wallet, which is a cryptocurrency wallet that is connected to the internet. In contrast, cold wallets, such as Trezor and Ledger Nano S, are cryptocurrency wallets that are stored offline.

Although cryptocurrencies themselves are decentralized, Coincheck was a centralized exchange and therefore a single point of failure. However, the blockchain shared ledger of the account may potentially be able to tag and follow the stolen coins and identify any account that receives them (Fadilpai & Garlick, 2017). Storing prodigious data sets that are constantly growing in a blockchain can also create potential latency or bloat in the chain, requiring large amounts of memory. Requirements for Ethereum-based smart contracts have grown over time, and blocks take longer to process. For time-sensitive energy transactions, this situation may create speed, scale, and cost issues if the smart contract is not designed properly. Certainly, future research is needed to develop, validate, and verify a more secure approach.

Finally, future research should examine the functional requirements and potential barriers for applying blockchain to make energy organizations more distributed, autonomous, and secure. For example, even if some intermediaries are replaced in the energy sector, a schedule and forecast still need to be submitted to the transmission system operator for the electricity infrastructure to be reliable. Another challenge is incorporating individual blockchain consumers into a balancing group and having them comply with market reliability and requirements as well as submit accurate demand forecasts to the network operator. Managing a balancing group is not a trivial task, and this approach could potentially increase the costs of managing the blockchain. To avoid costly disruptions, blockchain autonomous data exchanges, such as demand forecasts from the consumer to the network operator, will need to be stress tested for security and reliability before being deployed at scale. In considering all of these innovative applications, as well as the many associated challenges, future research is needed to develop, validate, and verify AI blockchain-enabled DAEOs.
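
As a purely illustrative sketch of the balancing-group bookkeeping mentioned above (not code from the chapter), the snippet below aggregates individual prosumers' hourly demand forecasts into a single submission for the network operator and computes the imbalance to be settled against metered consumption. The names and numbers are hypothetical.

# Toy balancing-group aggregation: forecasts in, operator submission and imbalance out.
from dataclasses import dataclass

@dataclass
class Prosumer:
    name: str
    forecast_kwh: list[float]   # forecast demand per hour
    actual_kwh: list[float]     # metered demand per hour (known after the fact)

group = [
    Prosumer("house_a", [1.2, 1.0, 0.8], [1.3, 1.1, 0.7]),
    Prosumer("house_b", [0.5, 0.6, 0.9], [0.4, 0.8, 1.0]),
]

hours = range(len(group[0].forecast_kwh))
group_forecast = [sum(p.forecast_kwh[h] for p in group) for h in hours]
group_actual = [sum(p.actual_kwh[h] for p in group) for h in hours]
imbalance = [a - f for f, a in zip(group_forecast, group_actual)]

print("submitted forecast:", group_forecast)   # what goes to the transmission operator
print("imbalance to settle:", imbalance)       # positive = consumed more than forecast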

Read the original here:
Artificial Intelligence - an overview | ScienceDirect Topics

Lenovo Delivers Artificial Intelligence at the Edge to Drive Business Transformation – Business Wire

RESEARCH TRIANGLE PARK, N.C.--(BUSINESS WIRE)--Today, Lenovo (HKSE: 992) (ADR: LNVGY) Infrastructure Solutions Group (ISG) announces the expansion of the Lenovo ThinkEdge portfolio with the introduction of the new ThinkEdge SE450 server, delivering an artificial intelligence (AI) platform directly at the edge to accelerate business insights. The ThinkEdge SE450 advances intelligent edge capabilities with best-in-class, AI-ready technology that provides faster insights and leading computing performance to more environments, accelerating real-time decision making at the edge and unleashing full business potential.

"As companies of all sizes continue to work on solving real-world challenges, they require powerful infrastructure solutions to help generate faster insights that inform competitive business strategies, directly at edge sites," said Charles Ferland, Vice President and General Manager, Edge Computing and Communication Service Providers at Lenovo ISG. "With the ThinkEdge SE450 server and in collaboration with our broad ecosystem of partners, Lenovo is delivering on the promise of AI at the edge, whether it's enabling greater connectivity for smart cities to detect and respond to traffic accidents or addressing predictive maintenance needs on the manufacturing line."

Accelerate Business Insights at the Edge

Edge computing is at the heart of digital transformation for many industries as they seek to optimize how to process data directly at the point of origin. Gartner estimates that 75 percent of enterprise-generated data will be processed at the edge by 2025 and 80 percent of enterprise IoT projects will incorporate AI by 2022. Lenovo customers are using edge-driven data sources for immediate decision making on factory floors, retail shelves, city streets and telecommunication mobile sites. Lenovo's complete ThinkEdge portfolio goes beyond the data center to deliver the ultimate edge computing power experience.

"Expanding our cloud to on-premise enables faster data processing while adding resiliency, performance and enhanced user experiences. As an early testing partner, our current deployment of Lenovo's ThinkEdge SE450 server is hosting a 5G network delivered on edge sites and introducing new edge applications to enterprises," said Khaled Al Suwaidi, Vice President Fixed and Mobile Core at Etisalat. "It gives us a compact, ruggedized platform with the necessary performance to host our telecom infrastructure and deliver applications, such as e-learning, to users."

Enhance Performance, Scalability and Security

Designed to stretch the limitations of server locations, Lenovo's ThinkEdge SE450 delivers real-time insights with enhanced compute power and flexible deployment capabilities that can support multiple AI workloads while allowing customers to scale. It meets the demands of a wide variety of critical workloads with a unique, quieter go-anywhere form factor, featuring a shorter depth that allows it to be easily installed in space-constrained locations. The GPU-rich server is purpose-built to meet the requirements of vertically specific edge environments, with a ruggedized design that withstands a wider operating temperature, as well as high dust, shock and vibration for harsh settings. As one of the first NVIDIA-Certified Edge systems, Lenovo's ThinkEdge SE450 leverages NVIDIA GPUs for enterprise and industrial AI at the edge applications, providing maximum accelerated performance.

Security at the edge is crucial, and Lenovo enables businesses to navigate the edge-to-cloud frontier confidently, using resilient, better-secured infrastructure solutions that are designed to mitigate security risks and data threats. The ThinkEdge portfolio provides a variety of connectivity and security options that are easily deployed and more securely managed in today's remote environments, including a new locking bezel to help prevent unauthorized access and robust security features to better protect data.

The ThinkEdge SE450 is built on the latest 3rd Gen Intel Xeon Scalable processor with Intel Deep Learning Boost technologies, featuring all-flash storage for running AI and analytics at the edge and optimized for delivering intelligence. It has been verified by Intel as an Intel Select Solution for vRAN. This pre-validated solution takes the guesswork out of the evaluation and procurement process by meeting strictly defined hardware and software configuration requirements and rigorous system-wide performance benchmarks to speed deployment and lower risk for communications service providers.

"Our collaboration with Lenovo helps enterprises across many sectors drive business value through network transformation and edge computing," said Jeni Panhorst, Vice President and General Manager of the Network & Edge Platforms Division at Intel. "Resilient and flexible edge servers built with 3rd Gen Intel Xeon Scalable processors provide enhanced performance enabling the delivery of innovative AI-driven services where customers will expect them."

Edge site locations are often unmanned and hard to reach; therefore, the ThinkEdge SE450 is automatically installed and managed with Lenovo Open Cloud Automation (LOC-A) and easily configured with Lenovo XClarity Orchestrator software. Remote access to the server, via completely out-of-band wired or wireless access, avoids unnecessary trips to edge locations.

AI-Ready Solutions at the Edge

Through an agile hardware development approach with partners and customers, the Lenovo ThinkEdge SE450 is the culmination of multiple prototypes, with live trials running real workloads in telecommunication, retail and smart city settings. The ThinkEdge SE450 AI-ready server is designed specifically for enabling a vast ecosystem of partners to make it easier for customers to deploy these edge solutions. As enterprises build out their hybrid infrastructures from the cloud to the edge, it is the perfect extension for the on-premise cloud currently supporting Microsoft, NVIDIA, Red Hat and VMware technologies.

Providing a complete portfolio of Edge servers, AI-ready storage and solutions, Lenovo offerings are also available as-a-Service through Lenovo TruScale, which easily extends workloads from the edge to the cloud in a consumption-based model.

Learn more here about this artificial intelligence edge solution.

LENOVO, THINKEDGE, TRUSCALE and XCLARITY are trademarks of Lenovo. Intel, the Intel logo, and other Intel marks are trademarks of Intel Corporation or its subsidiaries in the U.S. and/or other countries. NVIDIA is a trademark of NVIDIA Corporation. VMware is a trademark of VMware, Inc. All other trademarks are the property of their respective owners. © 2021 Lenovo. All rights reserved.

About Lenovo

Lenovo (HKSE: 992) (ADR: LNVGY) is a US$60 billion revenue Fortune Global 500 company serving customers in 180 markets around the world. Focused on a bold vision to deliver smarter technology for all, we are developing world-changing technologies that power (through devices and infrastructure) and empower (through solutions, services, and software) millions of customers every day and together create a more inclusive, trustworthy, and sustainable digital society for everyone, everywhere. To find out more visit https://www.lenovo.com, and read about the latest news via our StoryHub.

See original here:
Lenovo Delivers Artificial Intelligence at the Edge to Drive Business Transformation - Business Wire

Cognitive Space 2021 Recap: Momentum in Artificial Intelligence for Satellite Operations – Business Wire

HOUSTON--(BUSINESS WIRE)--Cognitive Space announced the highlights of a very successful year in its mission to dramatically improve the way we monitor the Earth for economic, environmental, and national security understanding. The company helps organizations fly their satellites with new tools for New Space - providing satellite operators and space infrastructure companies with sophisticated SaaS services for optimizing revenue and performance yield, forecasting future capacity, and orchestrating collection management as satellite constellations grow and scale.

"The New Space economy is attracting massive investment and is growing exponentially. Space will be filled with thousands of new commercial satellites," said Scott Herman, CEO of Cognitive Space. "But building out the required ground architecture is a major hurdle for New Space companies and usually represents a significant monetary investment, a multi-year time commitment, and major execution risk as they build their business. Cognitive Space provides a blueprint and an operational capability that de-risks and accelerates their buildout schedule, controls costs, and then optimizes their ongoing operations to power their business vision."

2021 Highlights for Cognitive Space:

$5.5M in Investment Capital raised: Cognitive Space started the year with a $1.5M pre-seed raise, followed in November with the closing of a $4M Series Seed led by Grit Ventures of Menlo Park. Additional investors include Argon Ventures, Techstars, UltraTech Capital Partners, Cultivation Capital, Glasswing Ventures, Gutbrain Ventures, PBJ Capital, SpaceFund, and Deep Ventures. Outside counsel for the transaction was Covington & Burling LLP. As a result of this 2021 fundraising effort, Cognitive Space enters 2022 with $5.5M in funds ready to apply towards commercial product development and company growth.

"Billions of investment dollars are flowing into the New Space economy. There is an unmet imperative for cost-effective, scalable, and business-savvy constellation operations," commented Jennifer Gill Roberts, Managing Partner at Grit Ventures. "We believe Cognitive Space's AI-driven approach to maximizing constellation revenue and performance yield gives their customers a significant competitive advantage in this emerging market for Space-based services."

New and expanded US Government contracts: Cognitive Space continued its work with several US Government agencies, including the US Space Force, the Air Force Research Lab (AFRL), the National Geospatial-Intelligence Agency (NGA), and other members of the national security community. In these engagements, Cognitive Space focused on concept development and rapid prototyping for topics such as orchestrated collection management, hybrid space architecture, and global monitoring. Of particular note, Cognitive Space was selected as a winner of the Space Force Pitch Day competition, resulting in a $1.7M contract for exploring new approaches to satellite operations using artificial intelligence.

Cognitive Space also supported several US Government exercises, including RIMPAC, Northern Edge, and Joint Warrior. The company orchestrated collection opportunities across multiple commercial and government suppliers of satellite remote sensing. Cognitive Space provided the US Government with insight into the emerging wave of commercial remote sensing capabilities, helping them understand the impact of these capabilities on future operations, tradecraft, tools, and procurement methods.

Commercial Sales Traction: This summer, Cognitive Space introduced its SaaS-based platform for autonomous and dynamic satellite operations to a growing set of commercial satellite operators and space infrastructure companies. The platform revolutionizes satellite operations with the power of artificial intelligence for mission management, collections planning, and communications link coordination. The suite is available in versions tailored for startups, growth, and enterprise-class customers in the New Space domain.

Strong Revenue growth: Cognitive Space continues to dramatically increase its year-over-year revenue with new contracts, solid bookings, and a dense opportunity pipeline going into 2022.

Accelerator Wins: Cognitive Space was competitively selected for several startup accelerators, including the Amazon Web Services (AWS) Seraphim Space Accelerator and the NGA Startup Accelerator. As one of 10 companies chosen by AWS out of a field of approximately 200 startups, Cognitive Space received $100,000 in cloud infrastructure credits, AWS Cloud training and support, mentorship, and additional business development resources, including opportunities to speak with space-savvy venture investors. With the NGA Accelerator, Cognitive Space has been working with government analysts on a pilot project exploring the role of future commercial satellite capabilities for facility monitoring and pattern-of-life analytics in real-world scenarios.

Building the best team in AI-driven Satellite Operations: Cognitive Space continues to recruit aggressively for an expanding team of AI/ML scientists and mathematicians, satellite and aerospace engineers, full-stack and frontend/backend developers, system architects, and Cloud DevOps engineers. In 2021, the company also made strategic additions to the executive team by recruiting senior industry veterans Scott Herman (as CEO) and Hanna Steplewska (as VP, Business Development & Operations). Scott and Hanna bring deep experience in Space Operations, Satellite Remote Sensing, Geospatial Analytics, and National Security and a comprehensive understanding of the New Space ecosystem.

About Cognitive Space

More information about Cognitive Space can be found at http://www.cognitivespace.com.

See the rest here:
Cognitive Space 2021 Recap: Momentum in Artificial Intelligence for Satellite Operations - Business Wire