Archive for the ‘Artificial Intelligence’ Category

A Brief Outlook on the Artificial Intelligence landscape in Germany – Analytics Insight

Artificial intelligence is often cast as the key technology of dystopian futures, social control, and autocratic world-power fantasies. In reality, it is gradually finding its way into public and private boardroom discussions and government policies. Even countries like Germany, which were lagging in the AI race, have gone through tremendous change in the past few years. According to PwC research, AI could lift Germany's Gross Domestic Product (GDP) by 11.3% by 2030, generating around 430 billion euros; in percentage terms, this potential exceeds that of most other European nations. That makes the country, already Europe's largest economy, a thriving market with high potential for new-to-market brands. The study also found that industries like healthcare, energy, and the auto industry will see significant productivity gains from adopting AI applications.

While Germany is currently at the forefront of AI in Europe, research and innovation projects have also taken off in Cyber Valley. The goal is to develop increasingly sophisticated machines with extensive capabilities and to boost AI R&D. Founded just four years ago by the Max Planck Institute for Intelligent Systems together with the auto groups Bosch, Daimler, BMW, and Porsche, the cluster has also secured a 1.25 million euro investment from Amazon for a research partnership. The main motive behind the initiative is to leverage AI to make German industries, services, and products even better. Germany is also striving to drive the digital revolution through Industry 4.0, which featured in the national AI strategy of 2018. The strategy report states that the country intends to expand its strong position and become a global leader in AI on ethical and legal grounds as well. It also intends to use AI to promote social participation, freedom of action, and self-determination for citizens, and to foster the sustainable development of society. To achieve this goal, the Federal Government initially allocated a total of 500 million euros to the AI strategy for 2019 and anticipates matching funds from the private sector and the federal states, bringing the total investment to 6 billion euros.

Meanwhile, emphasis is also being placed on improving data-sharing facilities by providing open access to government data. The government is also working to build a reliable data and analysis infrastructure based on cloud platforms and upgraded storage and computing capacity. These measures are necessary because, without data, AI innovations cannot solve the bottlenecks different industries face in their quest for AI adoption. Germany has also recently been looking for ways to tighten data security, calling for a more concrete definition of when data records must be stored on a mandatory basis. At the European Union level, it has also requested that a new classification scheme be developed together with the member states.

On the business front, tests are being carried out to maximize the use of collaborative AI robots and to link augmented-reality technology to AI-based production-planning systems. The major automobile groups Volkswagen, BMW, and Daimler are investing heavily in modern, AI-controlled factories. At their German R&D centers, they are working on solutions for assisted and autonomous driving, intelligent operating systems, entertainment systems, and navigation.

Germany is also growing into a preferred hub for startups focusing on AI and its applications, such as machine learning, deep learning, computer vision, and predictive analytics. It also has the most active corporate venture investors in Europe (91% of all non-IPO exits in 2019 involved corporates). The most common areas of focus for these AI startups are software development, image recognition, customer support and communication, and marketing and sales; together, these categories account for around 48% of German AI startups. Currently, Berlin is the fourth-largest global AI hub, after Silicon Valley in the USA, London, and Paris. It is high time, then, that companies treat Germany as a contender for global AI leadership and start investing in or collaborating with it.

Go here to see the original:
A Brief Outlook on the Artificial Intelligence landscape in Germany - Analytics Insight

Different Scopes Of Artificial Intelligence To Dive In With! – Inventiva

What is artificial intelligence and why is it so famous?

Artificial intelligence is the talk of the town. It is the simulation of human intelligence by machines, especially computer systems. AI can be categorized into several streams, primarily by capability: from weak (narrow) AI, built for a single task, to strong (general) AI, which would match human-level reasoning. The application of artificial intelligence is spreading throughout the modern world, with every technology managing its resources accordingly. Take, for example, Apple's voice assistant, Siri, which uses AI to communicate with you and get your work done.

How is it changing the current scenario?

Here are some of the features and advantages of using artificial intelligence.

Units of Artificial Intelligence

The following are the units of AI in use today.

All these units of artificial intelligence have their own distinct features. They are fundamental to daily life and help shape the wider world. AI is a simulation of human intelligence that processes data and incorporates learning techniques, and using its units has become part of our daily routine. Take robotics, for example: its usage is increasing and is expected to reach a massive scale within a few years. Even though it is a sub-field, it is as crucial as the central concept. If you are interested, you can choose one field and excel in it.

Does the work for you

Artificial intelligence is changing the current scenario in ways you have never seen before. Even the smallest of activities are being handled by AI systems. They don't need to take breaks like us: if you work without rest, your body will give up on you, but an AI system won't. These systems are built to run for very long periods; they don't need lunch breaks and they never get tired. You only need to keep them powered so they don't shut off.



See the article here:
Different Scopes Of Artificial Intelligence To Dive In With! - Inventiva

How the Coronavirus Pandemic Is Breaking Artificial Intelligence and How to Fix It – Gizmodo

As covid-19 disrupted the world in March, online retail giant Amazon struggled to respond to the sudden shift caused by the pandemic. Household items like bottled water and toilet paper, which had never run out of stock, were suddenly in short supply. One- and two-day deliveries were delayed by several days. Though Amazon CEO Jeff Bezos would go on to make $24 billion during the pandemic, the company initially struggled to adjust its logistics, transportation, supply chain, purchasing, and third-party-seller processes to prioritize stocking and delivering higher-priority items.

Under normal circumstances, Amazon's complicated logistics are mostly handled by artificial intelligence algorithms. Honed on billions of sales and deliveries, these systems accurately predict how much of each item will be sold, when to replenish stock at fulfillment centers, and how to bundle deliveries to minimize travel distances. But as the coronavirus pandemic has changed our daily habits and life patterns, those predictions are no longer valid.

"In the CPG [consumer packaged goods] industry, consumer buying patterns during this pandemic have shifted immensely," Rajeev Sharma, SVP and global head of enterprise AI solutions & cognitive engineering at AI consultancy firm Pactera Edge, told Gizmodo. "There is a tendency of panic buying of items in larger quantities and of different sizes and quantities. The [AI] models may have never seen such spikes in the past and hence would give less accurate outputs."

Artificial intelligence algorithms are behind many changes to our daily lives in the past decades. They keep spam out of our inboxes and violent content off social media, with mixed results. They fight fraud and money laundering in banks. They help investors make trade decisions and, terrifyingly, assist recruiters in reviewing job applications. And they do all of this millions of times per day, with high efficiency, most of the time. But they are prone to becoming unreliable when rare events like the covid-19 pandemic happen.

Among the many things the coronavirus outbreak has highlighted is how fragile our AI systems are. And as automation continues to become a bigger part of everything we do, we need new approaches to ensure our AI systems remain robust in the face of black swan events that cause widespread disruptions.

Key to the commercial success of AI is advances in machine learning, a category of algorithms that develop their behavior by finding and exploiting patterns in very large sets of data. Machine learning and its more popular subset deep learning have been around for decades, but their use had previously been limited due to their intensive data and computational requirements. In the past decade, the abundance of data and advances in processor technology have enabled companies to use machine learning algorithms in new domains such as computer vision, speech recognition, and natural language processing.

When trained on huge data sets, machine learning algorithms often ferret out subtle correlations between data points that would have gone unnoticed by human analysts. These patterns enable them to make forecasts and predictions that are useful most of the time for their designated purpose, even if they're not always logical. For instance, a machine-learning algorithm that predicts customer behavior might discover that people who eat out at restaurants more often are more likely to shop at a particular kind of grocery store, or that customers who shop online a lot are more likely to buy certain brands.

"All of those correlations between different variables of the economy are ripe for use by machine learning models, which can leverage them to make better predictions. But those correlations can be ephemeral, and highly context-dependent," David Cox, IBM director at the MIT-IBM Watson AI Lab, told Gizmodo. "What happens when the ground conditions change, as they just did globally when covid-19 hit? Customer behavior has radically changed, and many of those old correlations no longer hold. How often you eat out no longer predicts where you'll buy groceries, because dramatically fewer people eat out."

As consumers change their habits, the intrinsic correlations between the myriad variables that define the behavior of a supply chain fall apart, and those old prediction models lose their relevance. This can result in depleted warehouses and delayed deliveries on a large scale, as Amazon and other companies have experienced. "If your predictions are based on these correlations, without an understanding of the underlying causes and effects that drive those correlations, your predictions will be wrong," said Cox.

The same impact is visible in other areas, such as banking, where machine learning algorithms are tuned to detect and flag sudden changes to the spending habits of customers as possible signs of compromised accounts. According to Teradata, a provider of analytics and machine learning services, one of the companies using its platform to score high-risk transactions saw a fifteen-fold increase in mobile payments as consumers started spending more online and less in physical stores. (Teradata did not disclose the name of the company as a matter of policy.) Fraud-detection algorithms search for anomalies in customer behavior, and such sudden shifts can cause them to flag legitimate transactions as fraudulent. According to the firm, it was able to maintain the accuracy of its banking algorithms and adapt them to the sudden shifts caused by the lockdown.
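The failure mode described above can be sketched in a few lines. This is an illustrative toy, not Teradata's actual system: a simple z-score anomaly detector is fit to a customer's pre-pandemic mobile payments, then applied after spending abruptly moves online. All amounts and the threshold are made up for illustration.

```python
from statistics import mean, stdev

def fit_baseline(history):
    """Learn the 'normal' range of a customer's daily mobile payments."""
    return mean(history), stdev(history)

def is_flagged(amount, mu, sigma, threshold=3.0):
    """Flag a transaction whose z-score exceeds the threshold."""
    return abs(amount - mu) / sigma > threshold

# Pre-pandemic: modest, stable mobile spending.
history = [20, 25, 22, 18, 24, 21, 23, 19]
mu, sigma = fit_baseline(history)

# Lockdown: everyday purchases move online, so amounts jump roughly 15x.
lockdown_payments = [300, 280, 350, 310]
flags = [is_flagged(x, mu, sigma) for x in lockdown_payments]
print(flags)  # every legitimate lockdown transaction is flagged as anomalous
```

The fix the article describes amounts to refitting the baseline on post-shift data (or adapting it continuously) so that the new regime is no longer treated as fraud.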

But the disruption was more fundamental in other areas such as computer vision systems, the algorithms used to detect objects and people in images.

"We've seen several changes in underlying data due to covid-19, which has had an impact on the performance of individual AI models as well as end-to-end AI pipelines," said Atif Kureishy, VP of global emerging practices, artificial intelligence and deep learning at Teradata. "As people start wearing masks due to covid-19, we have seen performance decay as facial coverings introduce missed detections in our models."

Teradata's Retail Vision technology uses deep learning models trained on thousands of images to detect and localize people in the video streams of in-store cameras. With powerful and potentially ominous capabilities, the AI also analyzes the video for information such as people's activities and emotions, and combines it with other data to provide new insights to retailers. The system's performance is closely tied to locating faces in video, and with most people now wearing masks, it has dropped dramatically.

"In general, machine and deep learning give us very accurate yet shallow models that are very sensitive to changes, whether it is different environmental conditions or panic-driven purchasing behavior by banking customers," Kureishy said.

We humans can extract the underlying rules from the data we observe in the wild. We think in terms of causes and effects, and we apply our mental model of how the world works to understand and adapt to situations we havent seen before.

"If you see a car drive off a bridge into the water, you don't need to have seen an accident like that before to predict how it will behave," Cox said. "You know something (at least intuitively) about why things float, and you know things about what the car is made of and how it is put together, and you can reason that the car will probably float for a bit, but will eventually take on water and sink."

Machine learning algorithms, on the other hand, can fill in the space between the things they've already seen, but they can't discover the underlying rules and causal models that govern their environment. They work fine as long as the new data is not too different from the old, but as soon as their environment undergoes a radical change, they start to break.

"Our machine learning and deep learning models tend to be great at interpolation (working with data that is similar to, but not quite the same as, data we've seen before), but they are often terrible at extrapolation (making predictions from situations that are outside of their experience)," Cox says.
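Cox's interpolation-versus-extrapolation point can be demonstrated with a toy model. In this sketch (my own illustration, not from the article), a straight line is fit by ordinary least squares to data whose true relationship is quadratic: inside the training range the fit is tolerable, far outside it the error explodes.

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

xs = [0, 1, 2, 3, 4, 5]       # the "pre-pandemic" regime the model has seen
ys = [x ** 2 for x in xs]     # the true (nonlinear) relationship
a, b = fit_line(xs, ys)

predict = lambda x: a * x + b
interp_err = abs(predict(2.5) - 2.5 ** 2)   # inside the training range
extrap_err = abs(predict(20) - 20 ** 2)     # far outside it
print(interp_err, extrap_err)  # extrapolation error is orders of magnitude larger
```

A regime shift like the pandemic is exactly a move from the interpolation zone to the extrapolation zone: the model's learned correlation (here, the slope) no longer describes the world.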

The lack of causal models is an endemic problem in the machine learning community and causes errors regularly. This is what causes Teslas in self-driving mode to crash into concrete barriers, and what led Amazon's now-abandoned AI-powered hiring tool to penalize a job applicant for putting "women's chess club captain" on her resume.

A stark and painful example of AI's failure to understand context came in March 2019, when a terrorist live-streamed the massacre of 51 people in New Zealand on Facebook. The social network's AI algorithms that moderate violent content failed to detect the gruesome video because it was shot from a first-person perspective and the algorithms had not been trained on similar content. The video was taken down manually, and the company struggled to keep it off the platform as users reposted copies.

Major events like the global pandemic can have a much more detrimental effect because they trigger these weaknesses in a lot of automated systems, causing all sorts of failures at the same time.

"It is imperative to understand that the AI/ML models trained on consumer behavior data are bound to suffer in terms of their accuracy of prediction and potency of recommendations under a black swan event like the pandemic," said Pactera's Sharma. "This is because the AI/ML models may have never seen those kinds of shifts in the features that are used to train them. Every AI platform engineer is fully aware of this."

This doesn't mean the AI models are wrong or erroneous, Sharma pointed out, but it does imply that they need to be continuously trained on new data and scenarios. We also need to understand and address the limits of the AI systems we deploy in businesses and organizations.

Sharma described, for example, an AI that classifies credit applications as "Good Credit" or "Bad Credit" and passes the rating on to another automated system that approves or rejects applications. "If owing to some situations (like this pandemic), there is a surge in the number of applicants with poor credentials," Sharma said, "the models may have a challenge in their ability to rate with high accuracy."

As the world's corporations increasingly turn to automated, AI-powered solutions to decide the fate of their human clients, these systems can have devastating implications for credit applicants even when working as designed. In this case, the automated system would need to be explicitly adjusted to deal with the new rules, or final decisions would have to be deferred to a human expert to prevent the organization from accruing high-risk clients on its books.

"Under the present circumstances of the pandemic, where model accuracy or recommendations no longer hold true, the downstream automated processes may need to be put through a speed breaker like a human-in-the-loop for added due diligence," he said.
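The "speed breaker" Sharma describes is essentially a reject option on the model's output. A minimal sketch of the routing logic, with hypothetical scores and a hypothetical confidence threshold (nothing here comes from an actual Pactera system):

```python
def route_application(score, confidence, threshold=0.9):
    """Return the decision path for one credit application.

    score:      model output, probability of "Good Credit"
    confidence: how sure the model is (e.g. its max class probability)
    """
    if confidence < threshold:
        return "defer to human reviewer"   # the human-in-the-loop speed breaker
    return "auto-approve" if score >= 0.5 else "auto-reject"

decisions = [route_application(s, c) for s, c in
             [(0.95, 0.97),   # confident good credit
              (0.10, 0.95),   # confident bad credit
              (0.55, 0.60)]]  # borderline: model is unsure
print(decisions)
```

During a shift like the pandemic, confidence on unfamiliar applicant profiles drops, so more cases fall below the threshold and land with a human, which is exactly the intended due-diligence behavior.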

IBM's Cox believes that if we manage to integrate our own understanding of the world into AI systems, they will be able to handle black swan events like the covid-19 outbreak.

"We must build systems that actually model the causal structure of the world, so that they are able to cope with a rapidly changing world and solve problems in more flexible ways," he said.

The MIT-IBM Watson AI Lab, where Cox works, has been developing neurosymbolic systems that bring together deep learning and classic, symbolic AI techniques. In symbolic AI, human programmers explicitly specify the rules and details of the system's behavior instead of training it on data. Symbolic AI was dominant before the rise of deep learning and is better suited to environments where the rules are clear-cut. On the other hand, it lacks deep learning's ability to deal with unstructured data such as images and text documents.

The combination of symbolic AI and machine learning has helped create systems that "can learn from the world, but also use logic and reasoning to solve problems," Cox said.

IBMs neurosymbolic AI is still in the research and experimentation stage. The company is testing it in several domains, including banking.

Teradata's Kureishy pointed to another problem plaguing the AI community: labeled data. Most machine learning systems are supervised, which means that before they can perform their functions, they must be trained on huge amounts of data annotated by humans. As conditions change, the models need new labeled data to adjust to new situations.

Kureishy suggested that active learning can, to a degree, help address the problem. In active learning setups, human operators continuously monitor the performance of machine learning algorithms and provide them with new labeled data in areas where performance starts to degrade. "These active learning activities require both human-in-the-loop and alarms for human intervention to choose what data needs to be relabeled, based on quality constraints," Kureishy said.
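One common way to choose what data to send to human labelers, and a plausible reading of what Kureishy describes, is uncertainty sampling: route the predictions closest to the decision boundary for review. The item IDs, scores, and labeling budget below are invented for illustration.

```python
def select_for_labeling(predictions, budget=2):
    """Pick the `budget` least-confident predictions for human review.

    predictions: list of (item_id, probability-of-positive) pairs.
    Uncertainty is measured as distance of the probability from 0.5.
    """
    ranked = sorted(predictions, key=lambda p: abs(p[1] - 0.5))
    return [item_id for item_id, _ in ranked[:budget]]

# Hypothetical fraud-model scores: values near 0.5 are the ambiguous cases.
preds = [("txn_a", 0.98), ("txn_b", 0.52), ("txn_c", 0.07), ("txn_d", 0.45)]
print(select_for_labeling(preds))  # the two scores nearest 0.5 go to humans
```

The selected items are labeled by humans and fed back into training, concentrating scarce annotation effort where the model is least reliable.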

But as automated systems continue to expand, human efforts fail to meet the growing demand for labeled data. The rise of data-hungry deep learning systems has given birth to a multibillion-dollar data-labeling industry, often powered by digital sweatshops with underpaid workers in poor countries. And the industry still struggles to create enough annotated data to keep machine learning models up to date. We will need deep learning systems that can learn from new data with little or no help from humans.

"As supervised learning models are more common in the enterprise, they need to be data-efficient so that they can adapt much faster to changing behaviors," Kureishy said. "If we keep relying on humans to provide labeled data, AI adaptation to novel situations will always be bounded by how fast humans can provide those labels."

Deep learning models that need little or no manually labeled data are an active area of AI research. At last year's AAAI Conference, deep learning pioneer Yann LeCun discussed progress in self-supervised learning, a type of deep learning algorithm that, like a child, can explore the world by itself without being instructed on every single detail.

"I think self-supervised learning is the future. This is what's going to allow our AI systems to go to the next level, perhaps learn enough background knowledge about the world by observation, so that some sort of common sense may emerge," LeCun said in his speech at the conference.

But as is the norm in the AI industry, it takes years, if not decades, before such efforts become commercially viable products. In the meantime, we need to acknowledge and embrace both the power and the limits of current AI.

"These are not your static IT systems," Sharma says. "Enterprise AI solutions are never done. They need constant re-training. They are living, breathing engines sitting in the infrastructure. It would be wrong to assume that you build an AI platform and walk away."

Ben Dickson is a software engineer, tech analyst, and the founder of TechTalks.

Link:
How the Coronavirus Pandemic Is Breaking Artificial Intelligence and How to Fix It - Gizmodo

Artificial intelligence needs an update on ethics to be able to help humanity in times of crisis – Economic Times

Currently, ethics for AI focuses too much on high-level principles. Using AI to deal with crises would mean anticipating problems before they happen and building safety and reliability into systems from the start. Ethics should be part of how AI is built and used, not an add-on or afterthought, and researchers and engineers need to think through the implications of what they build.

By Will Heaven. Jess Whittlestone at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge and her colleagues published a comment piece in Nature Machine Intelligence this week arguing that if artificial intelligence is going to help in a crisis, we need a new, faster way of doing AI ethics, which they call "ethics for urgency." For Whittlestone, this means anticipating problems before they happen, finding better ways


Originally posted here:
Artificial intelligence needs an update on ethics to be able to help humanity in times of crisis - Economic Times

Navigating ‘information pollution’ with the help of artificial intelligence – Penn: Office of University Communications

There's still a lot that's not known about the novel coronavirus SARS-CoV-2 and COVID-19, the disease it causes. What leads some people to have mild symptoms and others to end up in the hospital? Do masks help stop the spread? What are the economic and political implications of the pandemic?

As researchers work to address these questions, many of which will not have a simple yes-or-no answer, people are also trying to figure out how to keep themselves and their families safe. But between the 24-hour news cycle, hundreds of preprint research articles, and guidelines that vary between regional, state, and federal governments, how can people navigate such vast amounts of information?

Using insights from the fields of natural language processing and artificial intelligence, computer scientist Dan Roth and the Cognitive Computation Group are developing an online platform to help users find relevant and trustworthy information about the novel coronavirus. Part of a broader effort by his group to develop tools for navigating "information pollution," the platform is devoted to identifying the numerous perspectives that a single query might have, showing the evidence that supports each perspective, and organizing results, along with each source's trustworthiness, so users can better understand what is known, by whom, and why.

Creating these types of automated platforms is a huge challenge for researchers in natural language processing and machine learning because of the complexity of human language and communication. "Language is ambiguous. Every word, depending on context, could mean completely different things," says Roth. "And language is variable. Everything you want to say, you can say in different ways. To automate this process, we have to get around these two key difficulties, and this is where the challenge comes from."

Thanks to numerous conceptual and theoretical advances, the Cognitive Computation Group's fundamental research in natural language understanding has allowed them to develop automated systems that better understand the content of human language, such as what a news article or scientific paper is about. Roth and his team have worked on issues related to information pollution for many years and are now applying what they've learned to information about the novel coronavirus.

Information pollution comes in many forms, including biases, misinformation, and disinformation, and because of the sheer volume of information, sorting fact from fiction needs automated support. "It's very easy to publish information," says Roth, adding that while organizations like FactCheck.org, a project of Penn's Annenberg Public Policy Center, manually verify the validity of many claims, there isn't enough human power to fact-check every claim posted on the Internet.

And fact-checking alone isn't enough to address all the problems of information pollution, says Ph.D. student Sihao Chen. Take the question of whether people should wear face masks: "The answer to that question has changed dramatically in the past couple of months, and the reason for that change is multi-faceted," he says. "You couldn't find an objective truth attached to that specific question, and the answer to that question is context-dependent. Fact-checking alone doesn't solve this problem because there's no single answer." This is why the team says it is important to identify various perspectives along with the evidence that supports them.

To help address both of these hurdles, the COVID-19 search platform visualizes results that include a source's level of trustworthiness while also highlighting different perspectives. This differs from how online search engines display information, where top results are based on popularity and keyword match, and where it's not easy to see how the arguments in different articles compare. On this platform, instead of displaying articles individually, results are organized by the claims they make.

"Search engines make a point not to touch the information and not to give suggestions and organize this material," says Roth. "The redundancy of information by itself is quite often misleading and leads to bias, since people tend to think that seeing something many times makes it more correct. Here, if there are 500 articles that are saying the same thing, we cluster them together and say, 'All these articles are quoting the same sources, so just focus on one of them.' Then, these other articles are interviewing other people and making different claims, so you can sample from different clusters."
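The clustering idea Roth describes can be illustrated with a toy pipeline (this is my own sketch, not the platform's actual implementation): represent each article as a bag of words and greedily group articles whose cosine similarity exceeds a threshold, so near-duplicate claims collapse into one cluster. The example documents and the 0.8 threshold are invented.

```python
from collections import Counter
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    norm = (sqrt(sum(v * v for v in a.values()))
            * sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def cluster(docs, threshold=0.8):
    """Greedy clustering: join a doc to the first cluster it resembles."""
    clusters = []
    for doc in docs:
        bag = Counter(doc.lower().split())
        for c in clusters:
            if cosine(bag, c[0]) >= threshold:  # compare to cluster's seed doc
                c.append(bag)
                break
        else:
            clusters.append([bag])
    return clusters

docs = [
    "masks reduce the spread of the virus",
    "masks reduce the spread of the virus say experts",
    "masks do not help healthy people officials claim",
]
groups = cluster(docs)
print(len(groups))  # the two near-duplicate claims collapse into one cluster
```

A production system would use far richer representations (entity and claim extraction, source attribution), but the payoff is the same: the reader sees one representative per claim instead of 500 repetitions of it.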

When visiting the website, users can enter a question, claim, or topic into the search bar, and results are grouped by the similarity of their perspectives. Since everything is automated, the researchers are eager to share this first iteration of the platform with the community so they can improve the language-processing models. "It's a community effort," says Roth, adding that the platform was designed to be transparent and open source so that they can easily collaborate with others.

Chen hopes their efforts support both users interested in sorting through COVID-19 information pollution and fellow researchers in natural language processing. "We want to help everyone who's interested in reading news like this, and at the same time we want to build better techniques to accommodate that need," says Chen.

Dan Roth is the Eduardo D. Glandt Distinguished Professor in the Department of Computer and Information Science in the School of Engineering and Applied Science at the University of Pennsylvania.

The online search platform is available on the Penn Information Pollution project website.

Additional information and resources on COVID-19 are available at https://coronavirus.upenn.edu/.

Visit link:
Navigating 'information pollution' with the help of artificial intelligence - Penn: Office of University Communications