Archive for the ‘Artificial Intelligence’ Category

Marshaling artificial intelligence in the fight against Covid-19 – MIT News

Artificial intelligence could play a decisive role in stopping the Covid-19 pandemic. To give the technology a push, the MIT-IBM Watson AI Lab is funding 10 projects at MIT aimed at advancing AI's transformative potential for society. The research will target the immediate public health and economic challenges of this moment. But it could have a lasting impact on how we evaluate and respond to risk long after the crisis has passed. The 10 research projects are highlighted below.

Early detection of sepsis in Covid-19 patients

Sepsis is a deadly complication of Covid-19, the disease caused by the new coronavirus SARS-CoV-2. About 10 percent of Covid-19 patients get sick with sepsis within a week of showing symptoms, but only about half survive.

Identifying patients at risk for sepsis can lead to earlier, more aggressive treatment and a better chance of survival. Early detection can also help hospitals prioritize intensive-care resources for their sickest patients. In a project led by MIT Professor Daniela Rus, researchers will develop a machine learning system to analyze images of patients' white blood cells for signs of an activated immune response against sepsis.

Designing proteins to block SARS-CoV-2

Proteins are the basic building blocks of life, and with AI, researchers can explore and manipulate their structures to address longstanding problems. Take perishable food: The MIT-IBM Watson AI Lab recently used AI to discover that a silk protein made by honeybees could double as a coating for quick-to-rot foods to extend their shelf life.

In a related project led by MIT professors Benedetto Marelli and Markus Buehler, researchers will enlist the protein-folding method used in their honeybee-silk discovery to try to defeat the new coronavirus. Their goal is to design proteins able to block the virus from binding to human cells, and to synthesize and test their unique protein creations in the lab.

Saving lives while restarting the U.S. economy

Some states are reopening for business even as questions remain about how to protect those most vulnerable to the coronavirus. In a project led by MIT professors Daron Acemoglu, Simon Johnson, and Asu Ozdaglar, researchers will model the effects of targeted lockdowns on the economy and public health.

In a recent working paper co-authored by Acemoglu, Victor Chernozhukov, Ivan Werning, and Michael Whinston, MIT economists analyzed the relative risk of infection, hospitalization, and death for different age groups. When they compared uniform lockdown policies against those targeted to protect seniors, they found that a targeted approach could save more lives. Building on this work, researchers will consider how antigen tests and contact tracing apps can further reduce public health risks.

Which materials make the best face masks?

Massachusetts and six other states have ordered residents to wear face masks in public to limit the spread of coronavirus. But apart from the coveted N95 mask, which traps 95 percent of airborne particles 300 nanometers or larger, the effectiveness of many masks remains unclear due to a lack of standardized methods to evaluate them.

In a project led by MIT Associate Professor Lydia Bourouiba, researchers are developing a rigorous set of methods to measure how well homemade and medical-grade masks do at blocking the tiny droplets of saliva and mucus expelled during normal breathing, coughs, or sneezes. The researchers will test materials worn alone and together, and in a variety of configurations and environmental conditions. Their methods and measurements will determine how well materials protect mask wearers and the people around them.

Treating Covid-19 with repurposed drugs

As Covid-19's global death toll mounts, researchers are racing to find a cure among already-approved drugs. Machine learning can expedite screening by letting researchers quickly predict if promising candidates can hit their target.

In a project led by MIT Assistant Professor Rafael Gomez-Bombarelli, researchers will represent molecules in three dimensions to see if this added spatial information can help to identify drugs most likely to be effective against the disease. They will use supercomputers at NASA's Ames Research Center and the U.S. Department of Energy's NERSC to further speed the screening process.

A privacy-first approach to automated contact tracing

Smartphone data can help limit the spread of Covid-19 by identifying people who have come into contact with someone infected with the virus, and thus may have caught the infection themselves. But automated contact tracing also carries serious privacy risks.

In collaboration with MIT Lincoln Laboratory and others, MIT researchers Ronald Rivest and Daniel Weitzner will use encrypted Bluetooth data to ensure personally identifiable information remains anonymous and secure.

Overcoming manufacturing and supply hurdles to provide global access to a coronavirus vaccine

A vaccine against SARS-CoV-2 would be a crucial turning point in the fight against Covid-19. Yet its potential impact will be determined by the ability to rapidly and equitably distribute billions of doses globally. This is an unprecedented challenge in biomanufacturing.

In a project led by MIT professors Anthony Sinskey and Stacy Springs, researchers will build data-driven statistical models to evaluate tradeoffs in scaling the manufacture and supply of vaccine candidates. Questions include how much production capacity will need to be added, the impact of centralized versus distributed operations, and how to design strategies for fair vaccine distribution. The goal is to give decision-makers the evidence needed to cost-effectively achieve global access.

Leveraging electronic medical records to find a treatment for Covid-19

Developed as a treatment for Ebola, the anti-viral drug remdesivir is now in clinical trials in the United States as a treatment for Covid-19. Similar efforts to repurpose already-approved drugs to treat or prevent the disease are underway.

In a project led by MIT professors Roy Welsch and Stan Finkelstein, researchers will use statistics, machine learning, and simulated clinical drug trials to find and test already-approved drugs as potential therapeutics against Covid-19. Researchers will sift through millions of electronic health records and medical claims for signals indicating that drugs used to fight chronic conditions like hypertension, diabetes, and gastric reflux might also work against Covid-19 and other diseases.

Finding better ways to treat Covid-19 patients on ventilators

Troubled breathing from acute respiratory distress syndrome is one of the complications that brings Covid-19 patients to the ICU. There, life-saving machines help patients breathe by mechanically pumping oxygen into the lungs. But even as towns and cities lower their Covid-19 infections through social distancing, there remains a national shortage of mechanical ventilators and serious health risks of ventilation itself.

In collaboration with IBM researchers Zach Shahn and Daby Sow, MIT researchers Li-Wei Lehman and Roger Mark will develop an AI tool to help doctors find better ventilator settings for Covid-19 patients and decide how long to keep them on a machine. Shortened ventilator use can limit lung damage while freeing up machines for others. To build their models, researchers will draw on data from intensive-care patients with acute respiratory distress syndrome, as well as Covid-19 patients at a local Boston hospital.

Returning to normal via targeted lockdowns, personalized treatments, and mass testing

In a few short months, Covid-19 has devastated towns and cities around the world. Researchers are now piecing together the data to understand how government policies can limit new infections and deaths and how targeted policies might protect the most vulnerable.

In a project led by MIT Professor Dimitris Bertsimas, researchers will study the effects of lockdowns and other measures meant to reduce new infections and deaths and prevent the health-care system from being swamped. In a second phase of the project, they will develop machine learning models to predict how vulnerable a given patient is to Covid-19, and what personalized treatments might be most effective. They will also develop an inexpensive, spectroscopy-based test for Covid-19 that can deliver results in minutes and pave the way for mass testing. The project will draw on clinical data from four hospitals in the United States and Europe, including Codogno Hospital, which reported Italy's first infection.

See the original post here:
Marshaling artificial intelligence in the fight against Covid-19 - MIT News

The Basics of Autonomous Vehicles, Part I: Artificial Intelligence – JD Supra

Two of the most exciting emerging technologies over the past few years have been artificial intelligence (AI) and autonomous vehicles (AVs). And the application of AI to achieve vehicle autonomy holds tremendous potential. While there is a great deal of excitement surrounding the promise of AI to solve AV issues, the concept of AI can be nebulous, as it is an umbrella term that encompasses many different technologies. In this article, we will provide an overview of AI technologies in use in the AV space and a few ways AI could be used to power AVs.

What Is Artificial Intelligence?

Formally, artificial intelligence (AI) refers to machines that "respond to stimulation consistent with traditional responses from humans, given the human capacity for contemplation, judgment, and intention."[1] In practice, it refers to the science of making machines that mimic human perception and response. AI systems typically demonstrate some of the behaviors of human cognition, such as planning, learning, categorizing, problem-solving, and recognizing patterns. AI relies on several enabling technologies to function, many of which have become buzzwords in themselves, such as machine learning, deep learning, and neural networks. At their core, these technologies rely on big data: they require an internal or external system that evaluates large amounts of data, identifies patterns in it, and determines the significance of the patterns in order to draw inferences, arrive at conclusions, and respond, much the same way the human brain works.

Machine Learning

Generally, machine learning refers to the process by which computers learn from data so they can perform tasks without being specifically programmed to do so. It can be either supervised learning, which requires a human first to input data and then tell the system something about the expected output, or unsupervised learning, in which the machine learns from the data itself and arrives at conclusions without an expected output. A basic form of machine learning is image recognition, wherein a machine is able to categorize images into groups based on the content of the images.
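
To make the distinction concrete, here is a minimal sketch in Python, assuming scikit-learn and its built-in handwritten-digits dataset (neither is named in the article): a supervised classifier learns from labeled examples, while an unsupervised clustering algorithm groups the same images without ever seeing a label.

```python
# Minimal sketch contrasting supervised and unsupervised learning.
# Assumes scikit-learn is installed; the toy digits dataset stands in
# for the image-recognition example in the text.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

digits = load_digits()  # small images of handwritten digits, with labels
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, random_state=0)

# Supervised: the model sees both the inputs and the expected output (labels).
clf = LogisticRegression(max_iter=2000)
clf.fit(X_train, y_train)
print("supervised accuracy:", clf.score(X_test, y_test))

# Unsupervised: the model sees only the inputs and groups them on its own.
clusters = KMeans(n_clusters=10, n_init=10, random_state=0).fit_predict(digits.data)
print("first ten cluster assignments:", clusters[:10])
```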

Neural Networks

A neural network is a type of machine learning system that processes information between interconnected nodes (similar to neurons in the human brain) to find patterns, establish connections, and derive meaning from data. For example, consider an algorithm that a neural network would use to identify pictures of cats. An engineer first would create a data set of millions of pictures of different animals. The engineer then would go through all of those images and label them either as "cat" or "not cat." Once the data was classified, the engineer would prime the neural network by telling it what some of the images are: "cat" or "not cat." The neural network then would be allowed to run through the data set on its own to attempt to classify each image. When it is done, the engineer would tell it whether its classification of each picture was correct. This process then would be run again repeatedly until the neural network achieved near 100 percent accuracy.
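
The following is a hedged sketch of that labeling-and-correction cycle, written in PyTorch purely for illustration (the article names no framework); the dataset of labeled cat/not-cat images is assumed to exist and to yield 3x64x64 image tensors.

```python
# Sketch of the training cycle described above, using PyTorch (an assumption).
# `dataset` is a hypothetical collection of (image_tensor, label) pairs,
# where label 1 means "cat" and 0 means "not cat".
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

model = nn.Sequential(                      # a tiny image classifier
    nn.Flatten(),
    nn.Linear(3 * 64 * 64, 128), nn.ReLU(),
    nn.Linear(128, 2),                      # two outputs: "cat" / "not cat"
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train(dataset, epochs=10):
    loader = DataLoader(dataset, batch_size=32, shuffle=True)
    for _ in range(epochs):                 # repeat until accuracy is acceptable
        for images, labels in loader:
            logits = model(images)          # the network's guess for each image
            loss = loss_fn(logits, labels)  # compare the guess to the true label
            optimizer.zero_grad()
            loss.backward()                 # nudge the weights toward the right answer
            optimizer.step()
```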

Deep Learning

Deep learning is an advanced type of machine learning wherein a large system of neural networks analyzes vast amounts of data and arrives at conclusions without first being trained by a human. An example of deep learning would be a predictive algorithm that estimates the price of a plane ticket based on the input data of (1) origin airport, (2) destination airport, (3) departure date, and (4) airline. This input data is known as an input layer. The data from the input layer then is run through a set of hidden layers that perform complex mathematical calculations on the input data to arrive at a conclusion, known as the output layer: in this case, the ticket price (see Figure 1).

Figure 1: Deep learning input layer, hidden layers, and output layer[2]

This type of deep learning is unsupervised learning because the machine performs calculations on the data itself in its hidden layers and outputs a result that was not predetermined.
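
As a rough illustration of the input layer, hidden layers, and output layer described above, the PyTorch snippet below (an assumed framework, not one named in the article) defines a small network that maps the four numerically encoded inputs to a single predicted price. How the airports, date, and airline are encoded as numbers is left out for brevity.

```python
# Sketch of the structure in Figure 1: four inputs, two hidden layers, one output.
import torch.nn as nn

price_model = nn.Sequential(
    nn.Linear(4, 64), nn.ReLU(),   # input layer -> first hidden layer
    nn.Linear(64, 64), nn.ReLU(),  # second hidden layer
    nn.Linear(64, 1),              # output layer: the estimated ticket price
)
```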

The Origins of AI Technology

AI has been around longer than most would imagine. Alan Turing, the celebrated British World War II codebreaker, is credited with originating the concept in 1950 when speculating about thinking machines that could function similarly to the human brain.[3] Turing proposed that a computer can be said to be intelligent if it can mimic human responses to questions to such an extent that an observer would not be able to tell whether he or she were talking to a human or a machine. This became known as the Turing Test.

The term artificial intelligence was first coined by John McCarthy, widely considered the father of artificial intelligence, at a conference in 1956. To McCarthy, a computer was intelligent when it could do things that, when done by humans, would be said to involve intelligence. Early successes included Allen Newell and Herbert Simon's General Problem Solver and Joseph Weizenbaum's ELIZA. Despite these early successes and enthusiasm about AI, the field encountered technology barriers. Limited storage capacity for large amounts of data and limited microprocessor computational power slowed the development of AI technologies throughout much of the 1970s. By the 1990s, however, many of the early goals of AI became attainable due to advances in computing technology, exemplified by IBM's Deep Blue system's defeat of the reigning world chess champion and grandmaster Garry Kasparov in 1997. Advancements in AI have continued to accompany advancements in computer technologies, as computers have become faster, cheaper, more efficient, and more capable of storing the large amounts of data necessary to enable AI.

At present, AI is being deployed across a wide range of industries, with new applications being discovered all the time. In the consumer sector, Siri, Alexa, and other voice-enabled personal assistants utilize natural language processing to understand and respond to users' vocal prompts. Streaming music and video services like Spotify and Netflix also use machine learning to enable their recommender systems that identify the types of music and movies users enjoy and then recommend new content based on this identification. In the financial field, AI is being used increasingly in loan underwriting to consider more variables than traditional underwriting models, which proponents say helps them make more accurate lending decisions and decrease the risks of default. In health care, it is showing promise in diagnosing certain medical conditions by evaluating hundreds or even thousands of medical images to detect the presence of anomalies.

How AI Powers AVs

AI in the context of AVs focuses on the perception of environment and automated responses to that environment while executing the primary goal of getting the vehicle safely from origin to destination. Below, we discuss two functions of AI in the context of AVs.

Image and Pattern Recognition

AVs must be able to see the world around them to travel safely from point A to point B. To do so, they must be able to recognize all of the elements that collectively make up the transportation system, including other vehicles, pedestrians, and cyclists as well as the vehicle's environment, such as roadway infrastructure, buildings, intersection controls, signs, pavement markings, and weather conditions. The safe operation of an AV also requires connectivity between the vehicle and the other elements of the transportation system, commonly grouped into five key types: vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), vehicle-to-pedestrian (V2P), vehicle-to-cloud (V2C), and, collectively, vehicle-to-everything (V2X).

A common application of AI technology in the AV industry is the use of neural networks to train AVs to recognize individual elements of the transportation system. Engineers feed the network hundreds of thousands of images, such as stop signs, yield signs, speed limit signs, and road markings, to train it to recognize them in the field.

In addition to cameras, AVs use several different sensing technologies that allow them to perceive objects in the field. Two of the most popular sensing technologies for this purpose are RADAR and LiDAR. RADAR (Radio Detection and Ranging) works by emitting radio waves in pulses. Once those pulses hit an object, they return to the sensor, providing data on the object's location, distance, speed, and direction of movement. LiDAR (Light Detection and Ranging) works similarly by firing thousands of laser beams in all directions and measuring the amount of time it takes for the beams to return. The returning beams create point clouds that represent objects surrounding the vehicle.[4] By emitting thousands of beams per second, LiDAR sensors can create an incredibly detailed 3D map of the world around them in real time. The sensor inputs from the AV's RADAR and LiDAR systems are then fed into a centralized AI computer in a process known as sensor fusion, making it possible for the vehicle to combine many points of sensor data, such as shape, speed, and distance.
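
The Python sketch below is a highly simplified illustration of the sensor-fusion idea: detections of the same object from RADAR and LiDAR are paired so that one fused track carries both shape (from the point cloud) and speed (from radar). The data structures and nearest-neighbor matching rule are illustrative assumptions, not part of any real AV stack.

```python
# Toy sensor-fusion sketch: pair each LiDAR detection with the nearest
# RADAR detection and merge their attributes into one fused object.
from dataclasses import dataclass

@dataclass
class RadarDetection:
    x: float
    y: float          # position in metres, vehicle frame
    speed: float      # radial speed in m/s

@dataclass
class LidarDetection:
    x: float
    y: float
    extent: float     # rough object size from the point cloud

@dataclass
class FusedObject:
    x: float
    y: float
    speed: float
    extent: float

def fuse(radar, lidar, max_dist=2.0):
    """Pair each LiDAR detection with the closest RADAR detection within max_dist."""
    fused = []
    for l in lidar:
        nearest = min(radar, default=None,
                      key=lambda r: (r.x - l.x) ** 2 + (r.y - l.y) ** 2)
        if nearest and (nearest.x - l.x) ** 2 + (nearest.y - l.y) ** 2 <= max_dist ** 2:
            fused.append(FusedObject(x=l.x, y=l.y,
                                     speed=nearest.speed, extent=l.extent))
    return fused
```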

Automated Decision Making

Besides recognizing elements in the transportation system, AVs also must be able to make safe driving decisions based on their accurate perception of real-life traffic conditions. But simple, manual instructions such as "stop when you see red" are not enough; AV decision making is powered by expert systems: AI software that attempts to mimic the decision-making expertise of an experienced driver.[5]

Expert systems work by pairing a knowledge base with an inference engine. A knowledge base is a collection of data, information, and past experiences relevant to the task at hand, and it contains both factual knowledge (information widely accepted by human experts in the field) and heuristic knowledge (practice, judgment, evaluation, and guesses). The inference engine uses the information contained in the knowledge base to arrive at solutions to problems, applying rules to the known facts, adding newly derived knowledge to the knowledge base, and resolving conflicts when multiple rules apply to the same case.

To recommend a solution, the inference engine uses both forward chaining and backward chaining. Forward chaining answers the question What can happen next? and involves identifying the facts, following a chain of conditions and decisions, and arriving at a solution (see Figure 2).

Figure 2: Forward Chaining

Backward chaining answers the question Why did this happen? and requires the inference engine to identify which conditions caused a particular result (see Figure 3).

Figure 3: Backward Chaining

In the context of AVs, factual knowledge consists of the rules of the road as well as the procedures for operating the vehicle. Heuristic knowledge consists of the collective past experiences of experienced drivers that inform their decision making; for example, understanding that there are ice patches on the road and consequently reducing speed and allowing increased stopping distance.
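
A toy Python sketch may help illustrate how an inference engine chains conditions to conclusions in both directions. The facts and rules below are invented for illustration and are not drawn from an actual AV rule base.

```python
# Toy rule base: each rule maps a set of required facts to a conclusion.
RULES = [
    ({"traffic_light_red"}, "must_stop"),
    ({"ice_on_road"}, "reduce_speed"),
    ({"reduce_speed", "vehicle_ahead"}, "increase_following_distance"),
]

def forward_chain(facts):
    """Forward chaining: apply rules repeatedly until no new facts can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)        # follow the chain of conditions
                changed = True
    return facts

def backward_chain(goal, facts):
    """Backward chaining: ask which conditions could have produced `goal`."""
    if goal in facts:
        return True
    return any(conclusion == goal and all(backward_chain(c, facts) for c in conditions)
               for conditions, conclusion in RULES)

print(forward_chain({"ice_on_road", "vehicle_ahead"}))
# -> includes 'reduce_speed' and 'increase_following_distance'
print(backward_chain("increase_following_distance", {"ice_on_road", "vehicle_ahead"}))
# -> True
```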

Conclusion

AVs hold great promise. From automating supply chains to eliminating drunk driving accidents to transforming urban land use patterns, AVs have the potential to fundamentally change the way we live and work. But many of the technologies that enable AVs are still in their infancy and will require continued research and development before AVs can be rolled out on a widespread basis. While these efforts likely will continue at breakneck speed, the legal and regulatory landscape surrounding AVs is already struggling to catch up. In Part II, we will focus on the legal challenges and opportunities facing the AV industry.

[1] https://www.brookings.edu/research/what-is-artificial-intelligence/

[2] https://www.freecodecamp.org/news/want-to-know-how-deep-learning-works-heres-a-quick-guide-for-everyone-1aedeca88076/

[3] https://www.csee.umbc.edu/courses/471/papers/turing.pdf

[4] https://blogs.nvidia.com/blog/2019/04/15/how-does-a-self-driving-car-see/

[5] https://www.tutorialspoint.com/artificial_intelligence/artificial_intelligence_expert_systems.htm


More:
The Basics of Autonomous Vehicles, Part I: Artificial Intelligence - JD Supra

Powering the Artificial Intelligence Revolution – HPCwire

It has been observed by many that we are at the dawn of the next industrial revolution: the artificial intelligence (AI) revolution. The benefits delivered by this intelligence revolution will be many: improved diagnostics and precision treatment in medicine, better weather forecasting, and self-driving vehicles, to name a few. However, one of the costs of this revolution will be increased electrical consumption by the data centers that power it. Data center power usage is projected to double over the next 10 years and is on track to consume 11% of worldwide electricity by 2030. Beyond AI adoption, other drivers of this trend are the movement to the cloud and the increased power usage of CPUs, GPUs, and other server components, which are becoming more powerful and smarter.

AI's two basic elements, training and inference, each consume power differently. Training involves computationally intensive matrix operations over very large data sets, often measured in terabytes to petabytes. Examples of these data sets range from online sales data to captured video feeds to ultra-high-resolution images of tumors. AI inference is computationally much lighter, but it can run indefinitely as a service, which draws a lot of power when hit with a large number of requests. Think of a facial recognition application used for security in an office building: it runs continuously but stresses its compute and storage resources at 8:00 a.m. and again at 5:00 p.m. as people come and go from work.

However, getting a good handle on power usage in AI is difficult. Energy consumption is not among the standard metrics tracked by job schedulers; tracking it can be set up, but doing so is complicated and vendor-dependent. This means that most users are flying blind when it comes to energy usage.

To map out AI energy requirements, Dr. Miro Hodak led a team of Lenovo engineers and researchers that looked at the energy cost of an often-used AI workload. The study, "Towards Power Efficiency in Deep Learning on Data Center Hardware," was recently presented at the 2019 IEEE International Conference on Big Data and published in the conference proceedings. The work examines the energy cost of training the ResNet-50 neural network on the ImageNet dataset of more than 1.3 million images, using a Lenovo ThinkSystem SR670 server equipped with four Nvidia V100 GPUs. AC-side data from the server's power supply indicates that 6.3 kWh of energy, enough to power an average home for six hours, is needed to fully train this AI model. In practice, trainings like these are repeated multiple times to tune the resulting models, resulting in energy costs that are actually several times higher.
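
For readers who want a rough sense of how such figures can be gathered when a scheduler does not report energy, the sketch below polls GPU power draw with the standard nvidia-smi query and integrates it over a training run. The sampling approach is a simplification of my own, and it captures only the GPUs, not the CPU, memory, or power-conversion losses measured in the study.

```python
# Rough sketch: estimate GPU energy for a training run by sampling instantaneous
# power draw (watts) and integrating over time. Requires nvidia-smi on the PATH.
import subprocess
import threading
import time

def gpu_power_watts():
    """Total instantaneous power draw across all GPUs, via nvidia-smi."""
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=power.draw", "--format=csv,noheader,nounits"],
        text=True)
    return sum(float(line) for line in out.splitlines())

def measure_energy_kwh(run_training, interval_s=1.0):
    """Run `run_training()` while sampling GPU power every `interval_s` seconds."""
    samples = []
    done = threading.Event()

    def sampler():
        while not done.is_set():
            samples.append(gpu_power_watts())
            time.sleep(interval_s)

    t = threading.Thread(target=sampler)
    t.start()
    run_training()                      # the long-running training job
    done.set()
    t.join()
    avg_watts = sum(samples) / max(len(samples), 1)
    joules = avg_watts * len(samples) * interval_s
    return joules / 3.6e6               # 1 kWh = 3.6e6 joules
```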

The study breaks down the total energy into its components, as shown in Fig. 1. As expected, the bulk of the energy is consumed by the GPUs. However, given that the GPUs handle all of the computationally intensive parts, their 65% share of the energy is lower than might be expected. This shows that simplistic estimates of AI energy costs using only GPU power are inaccurate and miss significant contributions from the rest of the system. Besides the GPUs, the CPU and memory account for almost a quarter of the energy use, and 9% of the energy is spent on AC-to-DC power conversion (in line with the 80 PLUS Platinum certification of the SR670's power supplies).

The study also investigated ways to decrease energy cost through system tuning, without changing the AI workload. We found that two types of system settings make the most difference: UEFI settings and OS-level GPU settings. ThinkSystem servers provide four UEFI operating modes: Favor Performance, Favor Energy, Maximum Performance, and Minimum Power. As shown in Table 1, the last option is the best and provides up to 5% energy savings. On the GPU side, 16% of the energy can be saved by capping the V100 frequency at 1005 MHz, as shown in Figure 2. Taken together, these system tunings can decrease energy usage by 22% while increasing runtime by 14%. Alternatively, if this runtime cost is unacceptable, a second set of tunings, which saves 18% of the energy while increasing runtime by only 4%, was also identified. This demonstrates that there is a lot of room on the system side for improvements in energy efficiency.

Energy usage in HPC has been a visible challenge for over a decade, and Lenovo has long been a leader in energy-efficient computing, whether through our innovative Neptune liquid-cooled system designs or through Energy Aware Runtime (EAR) software, a technology developed in collaboration with the Barcelona Supercomputing Center (BSC). EAR analyzes user applications to find the optimum CPU frequencies at which to run them. For now, EAR is CPU-only, but investigations into extending it to GPUs are ongoing. The results of our study show that this is a very promising way to bring energy savings to both HPC and AI.

Enterprises are not used to grappling with the large power profiles that AI requires in the way HPC users have become accustomed to, and scaling out AI solutions will only make the problem more acute. The industry is beginning to respond. MLPerf, currently the leading collaborative project for AI performance evaluation, is preparing new specifications for power efficiency. For now, the effort is limited to inference workloads and will most likely be voluntary, but it represents a step in the right direction.

So, in order to enjoy those precise weather forecasts and self-driving cars, we'll need to solve the power challenges they create. Today, as the power profiles of CPUs and GPUs surge ever upward, enterprise customers face a choice among three factors: system density (the number of servers in a rack), performance, and energy efficiency. Indeed, many enterprises are accustomed to filling up rack after rack with low-cost, adequately performing systems that have limited to no impact on the electric bill. Unfortunately, until the power dilemma is solved, those users must be content with choosing only two of those three factors.

View post:
Powering the Artificial Intelligence Revolution - HPCwire

Humans And Artificial Intelligence Systems Perform Better Together: Microsoft Chief Scientist Eric Horvitz – Digital Information World

According to a recent study, humans and artificial intelligence systems can perform better when they work together to tackle problems. The research was conducted by Eric Horvitz, Microsoft's chief scientist; Ece Kamar, a principal researcher at Microsoft Research; and Bryan Wilder, a Harvard University student and Microsoft Research intern.

Horvitz appears to have been the first to share the research paper publicly. He was hired as a Microsoft principal researcher back in 1993, the company named him Chief Scientific Officer in March, and he led the company's research programs from 2017 to 2020. The paper, published earlier this month, studies the performance of humans and artificial intelligence systems working together on two computer vision tasks: breast cancer metastasis detection and galaxy classification. In the proposed approach, the AI model evaluates which tasks humans can perform best and which tasks AI systems can handle better.

In this approach, the learning procedure is designed to merge human contributions and machine predictions: the AI system tackles problems that are difficult for humans, while humans focus on cases that are hard for the AI to figure out. In practice, AI predictions made with lower confidence are routed to human teams. According to the researchers, jointly training humans and AI systems in this way improves the galaxy classification model, reducing loss on Galaxy Zoo by 21 to 73 percent, and delivers up to 20 percent better performance on CAMELYON16.
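
As a minimal sketch of that routing idea, the Python snippet below defers low-confidence predictions to a human reviewer. The confidence threshold, the model interface, and the ask_human function are illustrative assumptions, not the learned deferral policy described in the paper.

```python
# Minimal sketch: the model handles cases it is confident about and defers
# the rest to a human expert. `model(item)` is assumed to return a
# (label, confidence) pair; `ask_human(item)` is a hypothetical callback.
def classify_with_deferral(model, items, ask_human, threshold=0.8):
    results = []
    for item in items:
        label, confidence = model(item)       # the model's best guess plus a score
        if confidence >= threshold:
            results.append((item, label, "machine"))
        else:
            results.append((item, ask_human(item), "human"))  # defer hard cases
    return results
```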

The research paper states that machine learning in isolation overlooks circumstances where human skills can add critical context, while human teams have their own limitations, including systematic biases. The researchers write that they have developed methods for training the learning model to complement human strengths while accounting for the cost of querying an expert. Human-AI teamwork can take many forms, but the researchers focused on settings where the machine decides which instances require human attention and then merges the human and machine judgments.

In 2007, Horvitz worked on a policy for deciding when human assistants should step into consumer conversations with automated receptionist systems. The researchers also state in the paper, "Learning to Complement Humans," that they see opportunities to study additional aspects of human-machine cooperation across various settings. Looking at a different kind of teamwork, AI research labs have studied machine agents cooperating with one another in games such as hide-and-seek and Quake III.



Excerpt from:
Humans And Artificial Intelligence Systems Perform Better Together: Microsoft Chief Scientist Eric Horvitz - Digital Information World

A New Way To Think About Artificial Intelligence With This ETF – MarketWatch

Among the myriad thematic exchange-traded funds on offer, artificial intelligence products are numerous, and some are catching on with investors.

Count the ROBO Global Artificial Intelligence ETF (THNQ) as the latest member of the artificial intelligence ETF fray. THNQ, which debuted earlier this week, comes from a good gene pool, as its stablemate, the Robo Global Robotics and Automation Index ETF (ROBO), was the original and remains one of the largest robotics ETFs.

That's relevant because artificial intelligence and robotics are themes that frequently intersect with each other. Home to 72 stocks, the new THNQ follows the ROBO Global Artificial Intelligence Index.

Adding to the case for AI, even with a new product such as THNQ, is that the technology has hundreds, if not thousands, of applications supporting its growth.

Companies developing AV technology are mainly relying on machine learning or deep learning, or both, according to IHS Markit. A major difference between machine learning and deep learning is that, while deep learning can automatically discover the feature to be used for classification in unsupervised exercises, machine learning requires these features to be labeled manually with more rigid rulesets. In contrast to machine learning, deep learning requires significant computing power and training data to deliver more accurate results.

Like its stablemate ROBO, THNQ offers wide reach, with exposure to 11 sub-groups. Those include big data, cloud computing, cognitive computing, e-commerce and other consumer angles, and factory automation, among others. Of course, semiconductors are part of the THNQ fold, too.

"The exploding use of AI is ushering in a new era of semiconductor architectures and computing platforms that can handle the accelerated processing requirements of an AI-driven world," according to ROBO Global. "To tackle the challenge, semiconductor companies are creating new, more advanced AI chip engines using a whole new range of materials, equipment, and design methodologies."

While THNQ is a new ETF, investors may do well to focus less on its newness and more on the fact that the AI boom is in its nascent stages.

"Historically, the stock market tends to under-appreciate the scale of opportunity enjoyed by leading providers of new technologies during this phase of development," notes THNQ's issuer. "This fact creates a remarkable opportunity for investors who understand the scope of the AI revolution, and who take action at a time when AI is disrupting industry as we know it and forcing us to rethink the world around us."

The new ETF charges 0.68% per year, or $68 on a $10,000 investment. That's in line with rival funds.


Read more:
A New Way To Think About Artificial Intelligence With This ETF - MarketWatch