Archive for the ‘Artificial Intelligence’ Category

TeamViewer Brings Artificial Intelligence to the Shopfloor – PR Newswire

"As the European leader in enterprise AR solutions, we are constantly exploring new ways of supporting frontline workers' daily tasks with intelligent technology. The integration of AI capabilities into AR workflows was the next logical step for us. Enriching complex manual processes with self-learning algorithms truly is a game-changer for digitalization projects and adds immediate value for our customers. For example, AI can perform certain verification tasks, reducing the probability for human errors to almost zero," says Hendrik Witt, Chief Product Officer at TeamViewer.

Global customers from the food and beverage industry such as NSF participated in a closed early access program of AiStudio and have already developed AI-supported TeamViewer Frontline workflows for quality assurance and workplace safety, further improving productivity and efficiency. Use cases include the automated verification that hygiene gloves are worn during food preparation processes, as well as confirmation of the correct commissioning in warehouse logistics. Other scenarios for the add-on range from quality assurance with AI-based detection of damaged or wrongly assembled products, to automatically recognizing factory equipment such as industrial machines and instantly providing additional information such as relevant maintenance instructions via augmented reality software.

Two out-of-the-box AI capabilities will be available for all customers with a Frontline license: one can detect common shopfloor warning signs through the smart glasses' camera, the other one can detect if safety helmets are worn. Companies can easily implement further individual automated safety checks, adding an AI-based layer of workplace security.
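To make the shape of such a check concrete, here is a minimal sketch of how a helmet verification might be wired up around an off-the-shelf object detector. The detector is stubbed out, and the labels, confidence threshold, and alert handling are illustrative assumptions, not TeamViewer's actual AiStudio API.

```python
# Minimal sketch of an AI-based safety check on smart-glasses camera frames.
# The object detector is stubbed out: any model that returns (label, score)
# pairs per frame could fill that role. Labels, the confidence threshold,
# and the alert handling are illustrative assumptions, not TeamViewer's API.

from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str         # e.g. "safety_helmet" or "warning_sign"
    confidence: float  # detector score in [0, 1]

def helmet_worn(detections: List[Detection], threshold: float = 0.8) -> bool:
    """Return True if a safety helmet is detected with high confidence."""
    return any(d.label == "safety_helmet" and d.confidence >= threshold
               for d in detections)

# Example frame: the helmet detection is too weak, so the check fails.
frame = [Detection("safety_helmet", 0.55), Detection("warning_sign", 0.91)]
if not helmet_worn(frame):
    print("Alert: no safety helmet confirmed in view")
```

In practice the interesting work sits in the detector and in choosing the threshold, which trades false alarms against missed violations.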

More information on AiStudio can be found here.

About TeamViewer
TeamViewer is a leading global technology company that provides a connectivity platform to remotely access, control, manage, monitor, and repair devices of any kind, from laptops and mobile phones to industrial machines and robots. Although TeamViewer is free of charge for private use, it has more than 625,000 subscribers and enables companies of all sizes and from all industries to digitalize their business-critical processes through seamless connectivity. Against the backdrop of global megatrends like device proliferation, automation and new work, TeamViewer proactively shapes digital transformation and continuously innovates in the fields of Augmented Reality, Internet of Things and Artificial Intelligence. Since the company's foundation in 2005, TeamViewer's software has been installed on more than 2.5 billion devices around the world. The company is headquartered in Göppingen, Germany, and employs around 1,500 people globally. In 2021, TeamViewer achieved billings of EUR 548 million. TeamViewer AG (TMV) is listed on the Frankfurt Stock Exchange and belongs to the MDAX. Further information can be found at https://www.teamviewer.com/.

Press Contact
Julia Gottschalk
Tel.: +49 7161 60692 3895
E-mail: [emailprotected]

SOURCE TeamViewer

Read this article:
TeamViewer Brings Artificial Intelligence to the Shopfloor - PR Newswire

7 Roles of Artificial Intelligence in the Defence Sector – Robotics and Automation News

Artificial Intelligence has managed to infiltrate many industries and sectors, including the defence sector and different military operations.

Almost all nations use Artificial Intelligence to manage their defence sectors and military operations.

Countries are currently investing heavily in this area to further strengthen their defence capabilities.

Here are seven roles of artificial intelligence in the defence sector.

Without an actual war, how does one teach soldiers what real combat situations are like? This is where Artificial Intelligence plays a huge role.

Artificial Intelligence can be used to create simulations and training models that familiarize soldiers with different fighting systems, preparation that is important for actual military operations.

The navy and army of different countries use Artificial Intelligence to create sensor simulation programmes to help the soldiers.

Such AI is also combined with augmented reality and virtual reality to create more realistic scenarios.

The defence sector holds much critical and classified information. This sensitive information makes it extremely prone to cyberattacks.

The defence sector naturally hides its digital footprint behind added layers of security.

It often masks its IP addresses as well (anyone can check their own public IP with a What Is My IP lookup). However, ordinary security measures are not enough to protect such sensitive information.

To provide an added level of security, the military sector often turns to Artificial Intelligence, which plays a critical role in preventing unauthorized intrusions.

It is no secret that surveillance plays an important role in the defence sector and different military operations.

Artificial Intelligence can be used in surveillance for keeping an eye on suspicious activity.

It can not only identify suspicious activity but also alert the relevant authorities so they can tackle the situation. AI-enabled robots also play a critical role in such operations.

Weapons are no longer simple mechanical devices; new-age weapons are commonly embedded with Artificial Intelligence technology.

The application of AI is most commonly seen in sophisticated missiles designed to strike their targets accurately.

Military operations also depend on logistics, and defence logistics is nothing like an ordinary commercial logistics service.

Artificial Intelligence also plays a critical role in ensuring the safety, security and efficiency of this logistics system.

Robotics and Artificial Intelligence are combined to create Remotely Operated Vehicles used for defusing explosives. Sending a person to defuse explosives is dangerous for obvious reasons.

By building precise and highly intelligent Remotely Operated Vehicles, however, the entire process of defusing explosives can be made much safer.

Artificial Intelligence is also used in Network Traffic Analysis. Such systems mostly monitor internet traffic, especially voice traffic passing through software such as Google Talk and Skype.

The intercepted voice traffic is then checked, in real time, for messages containing keywords such as "kill", "blast" and "bomb". This technology is useful in preventing attacks and thus works towards public safety.
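What the article describes amounts to keyword spotting over transcribed voice traffic. Below is a toy sketch of that filtering step, assuming speech-to-text happens upstream; the watchlist and alert handling are purely illustrative.

```python
# Toy keyword-spotting filter over already-transcribed voice traffic.
# Speech-to-text is assumed to happen upstream; the watchlist and the
# alert handling below are purely illustrative.

WATCHLIST = {"kill", "blast", "bomb"}

def flagged_terms(transcript: str) -> set:
    """Return any watchlist terms that appear in a transcript."""
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return words & WATCHLIST

for line in ["meet me at noon", "the blast goes off at nine"]:
    hits = flagged_terms(line)
    if hits:
        print(f"flagged ({', '.join(sorted(hits))}): {line}")
```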

Other uses of Artificial Intelligence in the defence and military sector include the analysis of data from different sensors and satellites.

It is also used aboard naval ships, whose sonar systems detect mines. Military robots, as discussed above, help keep everyone safe. AI and machine learning are also combined to operate unmanned vehicles such as battle tanks and aircraft.

Usage of Artificial Intelligence in the military is not new. Many developed and developing nations use AI-based technology to strengthen their military operations.

Countries are investing heavily in Artificial Intelligence to develop their military infrastructure. The degree of such investment, of course, differs from one country to another.

Even though the financial investment is huge, it pays off. Employing Artificial Intelligence also requires expertise: many scientists, coders and developers must work together to deploy it in military operations.

The challenges of employing Artificial Intelligence in military operations come in the form of money and skills.

However, these can be addressed by making AI a priority. In the coming years, the use of AI will keep expanding across different sectors, including the defence sector.


The rest is here:
7 Roles of Artificial Intelligence in the Defence Sector - Robotics and Automation News

Engineers use artificial intelligence to capture the complexity of breaking waves – MIT News

Waves break once they swell to a critical height, before cresting and crashing into a spray of droplets and bubbles. These waves can be as large as a surfer's point break and as small as a gentle ripple rolling to shore. For decades, the dynamics of how and when a wave breaks have been too complex to predict.

Now, MIT engineers have found a new way to model how waves break. The team used machine learning along with data from wave-tank experiments to tweak equations that have traditionally been used to predict wave behavior. Engineers typically rely on such equations to help them design resilient offshore platforms and structures. But until now, the equations have not been able to capture the complexity of breaking waves.

The updated model made more accurate predictions of how and when waves break, the researchers found. For instance, the model estimated a wave's steepness just before breaking, and its energy and frequency after breaking, more accurately than the conventional wave equations.

Their results, published today in the journal Nature Communications, will help scientists understand how a breaking wave affects the water around it. Knowing precisely how these waves interact can help hone the design of offshore structures. It can also improve predictions for how the ocean interacts with the atmosphere. Having better estimates of how waves break can help scientists predict, for instance, how much carbon dioxide and other atmospheric gases the ocean can absorb.

"Wave breaking is what puts air into the ocean," says study author Themis Sapsis, an associate professor of mechanical and ocean engineering and an affiliate of the Institute for Data, Systems, and Society at MIT. "It may sound like a detail, but if you multiply its effect over the area of the entire ocean, wave breaking starts becoming fundamentally important to climate prediction."

The study's co-authors include lead author and MIT postdoc Debbie Eeltink, Hubert Branger and Christopher Luneau of Aix-Marseille University, Amin Chabchoub of Kyoto University, Jerome Kasparian of the University of Geneva, and T.S. van den Bremer of Delft University of Technology.

Learning tank

To predict the dynamics of a breaking wave, scientists typically take one of two approaches: They either attempt to precisely simulate the wave at the scale of individual molecules of water and air, or they run experiments to try and characterize waves with actual measurements. The first approach is computationally expensive and difficult to simulate even over a small area; the second requires a huge amount of time to run enough experiments to yield statistically significant results.

The MIT team instead borrowed pieces from both approaches to develop a more efficient and accurate model using machine learning. The researchers started with a set of equations that is considered the standard description of wave behavior. They aimed to improve the model by training it on data of breaking waves from actual experiments.

"We had a simple model that doesn't capture wave breaking, and then we had the truth, meaning experiments that involve wave breaking," Eeltink explains. "Then we wanted to use machine learning to learn the difference between the two."

The researchers obtained wave breaking data by running experiments in a 40-meter-long tank. The tank was fitted at one end with a paddle which the team used to initiate each wave. The team set the paddle to produce a breaking wave in the middle of the tank. Gauges along the length of the tank measured the water's height as waves propagated down the tank.

"It takes a lot of time to run these experiments," Eeltink says. "Between each experiment you have to wait for the water to completely calm down before you launch the next experiment, otherwise they influence each other."

Safe harbor

In all, the team ran about 250 experiments, the data from which they used to train a type of machine-learning algorithm known as a neural network. Specifically, the algorithm is trained to compare the real waves in experiments with the predicted waves in the simple model, and based on any differences between the two, the algorithm tunes the model to fit reality.
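In spirit, that training loop is a residual correction: the network learns only the gap between the simplified physics model and the measured waves, and its output then corrects the model. Below is a minimal, self-contained sketch of that setup on synthetic stand-in data; the study's actual governing equations, inputs, and network architecture are not reproduced here.

```python
# Sketch of residual learning: a network is trained on the *difference*
# between measurements ("the truth") and a simplified physics model, and
# its output then corrects the model. Synthetic stand-in data throughout;
# the study's actual equations, inputs, and architecture are not shown.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(250, 3))        # stand-ins for wave features
coef = np.array([1.2, -0.4, 0.7])
truth = X @ coef + 0.3 * np.sin(5.0 * X[:, 0])  # "experiments" (with breaking)
simple = X @ coef                               # model missing the breaking term

net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000, random_state=0)
net.fit(X, truth - simple)                      # learn only the model's error

corrected = simple + net.predict(X)
print("RMSE before:", np.sqrt(np.mean((truth - simple) ** 2)))
print("RMSE after: ", np.sqrt(np.mean((truth - corrected) ** 2)))
```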

After training the algorithm on their experimental data, the team introduced the model to entirely new data: in this case, measurements from two independent experiments, each run in a separate wave tank with different dimensions. In these tests, they found the updated model made more accurate predictions than the simple, untrained model, for instance making better estimates of a breaking wave's steepness.

The new model also captured an essential property of breaking waves known as the downshift, in which the frequency of a wave is shifted to a lower value. The speed of a wave depends on its frequency. For ocean waves, lower frequencies move faster than higher frequencies. Therefore, after the downshift, the wave will move faster. The new model predicts the change in frequency, before and after each breaking wave, which could be especially relevant in preparing for coastal storms.
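The claim that lower-frequency ocean waves travel faster follows from standard linear theory for deep-water waves; the relation below is general background, not taken from the paper.

```latex
% Deep-water dispersion relation from standard linear wave theory:
\[
  \omega^{2} = g k
  \quad\Longrightarrow\quad
  c = \frac{\omega}{k} = \frac{g}{\omega} = \frac{g}{2\pi f}
\]
% The phase speed c is inversely proportional to the frequency f, so a
% downshift to lower f makes the wave faster, e.g. halving f doubles c.
```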

"When you want to forecast when high waves of a swell would reach a harbor, and you want to leave the harbor before those waves arrive, then if you get the wave frequency wrong, then the speed at which the waves are approaching is wrong," Eeltink says.

The team's updated wave model is in the form of an open-source code that others could potentially use, for instance in climate simulations of the ocean's potential to absorb carbon dioxide and other atmospheric gases. The code can also be worked into simulated tests of offshore platforms and coastal structures.

"The number one purpose of this model is to predict what a wave will do," Sapsis says. "If you don't model wave breaking right, it would have tremendous implications for how structures behave. With this, you could simulate waves to help design structures better, more efficiently, and without huge safety factors."

This research is supported, in part, by the Swiss National Science Foundation, and by the U.S. Office of Naval Research.

Go here to read the rest:
Engineers use artificial intelligence to capture the complexity of breaking waves - MIT News

Another Firing Among Google's A.I. Brain Trust, and More Discord – The New York Times

Less than two years after Google dismissed two researchers who criticized the biases built into artificial intelligence systems, the company has fired a researcher who questioned a paper it published on the abilities of a specialized type of artificial intelligence used in making computer chips.

The researcher, Satrajit Chatterjee, led a team of scientists in challenging the celebrated research paper, which appeared last year in the scientific journal Nature and said computers were able to design certain parts of a computer chip faster and better than human beings.

Dr. Chatterjee, 43, was fired in March, shortly after Google told his team that it would not publish a paper that rebutted some of the claims made in Nature, said four people familiar with the situation who were not permitted to speak openly on the matter. Google confirmed in a written statement that Dr. Chatterjee had been "terminated with cause."

Google declined to elaborate about Dr. Chatterjee's dismissal, but it offered a full-throated defense of the research he criticized and of its unwillingness to publish his assessment.

"We thoroughly vetted the original Nature paper and stand by the peer-reviewed results," Zoubin Ghahramani, a vice president at Google Research, said in a written statement. "We also rigorously investigated the technical claims of a subsequent submission, and it did not meet our standards for publication."

Dr. Chatterjee's dismissal was the latest example of discord in and around Google Brain, an A.I. research group considered to be a key to the company's future. After spending billions of dollars to hire top researchers and create new kinds of computer automation, Google has struggled with a wide variety of complaints about how it builds, uses and portrays those technologies.

Tension among Google's A.I. researchers reflects much larger struggles across the tech industry, which faces myriad questions over new A.I. technologies and the thorny social issues that have entangled these technologies and the people who build them.

The recent dispute also follows a familiar pattern of dismissals and dueling claims of wrongdoing among Google's A.I. researchers, a growing concern for a company that has bet its future on infusing artificial intelligence into everything it does. Sundar Pichai, the chief executive of Google's parent company, Alphabet, has compared A.I. to the arrival of electricity or fire, calling it one of humankind's most important endeavors.

Google Brain started as a side project more than a decade ago when a group of researchers built a system that learned to recognize cats in YouTube videos. Google executives were so taken with the prospect that machines could learn skills on their own, they rapidly expanded the lab, establishing a foundation for remaking the company with this new artificial intelligence. The research group became a symbol of the company's grandest ambitions.

Before she was fired in late 2020, Dr. Timnit Gebru, who had co-led Google's Ethical A.I. team, was seeking permission to publish a research paper about how A.I.-based language systems, including technology built by Google, may end up using the biased and hateful language they learn from text in books and on websites. Dr. Gebru said she had grown exasperated over Google's response to such complaints, including its refusal to publish the paper.

A few months later, the company fired the other head of the team, Margaret Mitchell, who publicly denounced Google's handling of the situation with Dr. Gebru. The company said Dr. Mitchell had violated its code of conduct.

The paper in Nature, published last June, promoted a technology called reinforcement learning, which the paper said could improve the design of computer chips. The technology was hailed as a breakthrough for artificial intelligence and a vast improvement to existing approaches to chip design. Google said it used this technique to develop its own chips for artificial intelligence computing.

Google had been working on applying the machine learning technique to chip design for years, and it published a similar paper a year earlier. Around that time, Google asked Dr. Chatterjee, who has a doctorate in computer science from the University of California, Berkeley, and had worked as a research scientist at Intel, to see if the approach could be sold or licensed to a chip design company, the people familiar with the matter said.

But Dr. Chatterjee expressed reservations in an internal email about some of the paper's claims and questioned whether the technology had been rigorously tested, three of the people said.

While the debate about that research continued, Google pitched another paper to Nature. For the submission, Google made some adjustments to the earlier paper and removed the names of two authors, who had worked closely with Dr. Chatterjee and had also expressed concerns about the paper's main claims, the people said.

When the newer paper was published, some Google researchers were surprised. They believed that it had not followed a publishing approval process that Jeff Dean, the company's senior vice president who oversees most of its A.I. efforts, said was necessary in the aftermath of Dr. Gebru's firing, the people said.

Google and one of the paper's two lead authors, Anna Goldie, who wrote it with a fellow computer scientist, Azalia Mirhoseini, said the changes from the earlier paper did not require the full approval process. Google allowed Dr. Chatterjee and a handful of internal and external researchers to work on a paper that challenged some of its claims.

The team submitted the rebuttal paper to a so-called resolution committee for publication approval. Months later, the paper was rejected.

The researchers who worked on the rebuttal paper said they wanted to escalate the issue to Mr. Pichai and Alphabet's board of directors. They argued that Google's decision to not publish the rebuttal violated its own A.I. principles, including upholding "high standards of scientific excellence." Soon after, Dr. Chatterjee was informed that he was no longer an employee, the people said.

Ms. Goldie said that Dr. Chatterjee had asked to manage their project in 2019 and that they had declined. When he later criticized it, she said, he could not substantiate his complaints and ignored the evidence they presented in response.

"Sat Chatterjee has waged a campaign of misinformation against me and Azalia for over two years now," Ms. Goldie said in a written statement.

She said the work had been peer-reviewed by Nature, one of the most prestigious scientific publications. And she added that Google had used their methods to build new chips and that these chips were currently used in Google's computer data centers.

Laurie M. Burgess, Dr. Chatterjee's lawyer, said it was "disappointing that certain authors of the Nature paper are trying to shut down scientific discussion by defaming and attacking Dr. Chatterjee for simply seeking scientific transparency." Ms. Burgess also questioned the leadership of Dr. Dean, who was one of 20 co-authors of the Nature paper.

"Jeff Dean's actions to repress the release of all relevant experimental data, not just data that supports his favored hypothesis, should be deeply troubling both to the scientific community and the broader community that consumes Google services and products," Ms. Burgess said.

Dr. Dean did not respond to a request for comment.

After the rebuttal paper was shared with academics and other experts outside Google, the controversy spread throughout the global community of researchers who specialize in chip design.

The chip maker Nvidia says it has used methods for chip design that are similar to Google's, but some experts are unsure what Google's research means for the larger tech industry.

"If this is really working well, it would be a really great thing," said Jens Lienig, a professor at the Dresden University of Technology in Germany, referring to the A.I. technology described in Google's paper. "But it is not clear if it is working."

More here:
Another Firing Among Google's A.I. Brain Trust, and More Discord - The New York Times

Artificial Intelligence And the Human Context of War – The National Interest Online

Excitement and fear about artificial intelligence (AI) have been building for years. Many believe that AI is poised to transform war as profoundly as it has business. There is a burgeoning literature on the AI revolution in war, and even Henry Kissinger has weighed in on The Age of AI And Our Human Future.

Governments around the world seem to agree. China's AI development plan states that AI has become "a new focus of international competition" and is "a strategic technology that will lead in the future." The U.S. National Security Commission on AI warns that AI is deepening the threat posed by cyber attacks and disinformation campaigns that Russia, China, and others are using to "infiltrate our society, steal our data, and interfere in our democracy." China and the United States are in a race for AI supremacy, and both nations are investing huge sums into lethal autonomous weapons to gain an edge in great power competition.

Scholars expect that authoritarians and democracies alike will embrace AI to improve military effectiveness and limit their domestic costs. Military AI systems will be able to sense, respond, and swarm faster than humans. Speed and lethality would encourage preemption, leading to strategic deterrence failures. Unaccountable killing would be an ethical catastrophe. Taken to an extreme, a superintelligence could eliminate humanity altogether.

The Economics of Prediction

These worrisome scenarios assume that AI can and will replace human warriors. Yet the literature on the economics of technology suggests that this assumption is mistaken. Technologies that replace some human tasks typically create demand for other tasks. In general, the economic impact of technology is determined by its complements. This suggests that the complements of AI may have a bigger impact on international politics than AI technology alone.

Technological substitution typically increases the value of complements. When automobiles replaced horse-carts, this also created demand for people who could build roads, repair cars, and keep them fueled. A drop in the price of mobility increased the value of transportation infrastructure. Something similar is happening with AI.

The AI technology that has received all the media attention is machine learning. Machine learning is a form of prediction, which is the process of filling in missing information. Notable AI achievements in automated translation, image recognition, video game playing, and route navigation are all examples of automated prediction. Technological trends in computing, memory, and bandwidth are making large-scale prediction commercially feasible.

Yet prediction is only part of decisionmaking. The other parts are data, judgment, and action. Data makes prediction possible. Judgment is about values; it determines what to predict and what actions to take after a prediction is made. An AI may be able to predict whether rain is likely by drawing on data about previous weather, but a human must decide whether the risk of getting wet merits the hassle of carrying an umbrella.
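The umbrella example separates cleanly into a machine-made prediction and human-supplied values. Here is a toy sketch of that division of labor; the probability model and the cost numbers are invented for illustration.

```python
# Toy decomposition of a decision into prediction (machine) and judgment
# (human-chosen values). The probability model and the cost numbers are
# invented for illustration.

def predict_rain(humidity: float, pressure_drop: float) -> float:
    """Stand-in for a learned predictor: returns P(rain)."""
    return min(1.0, 0.6 * humidity + 0.4 * pressure_drop)

# Judgment: how bad is getting wet versus the hassle of an umbrella?
COST_WET = 10.0    # value-laden numbers no model can supply on its own
COST_CARRY = 1.0

p = predict_rain(humidity=0.7, pressure_drop=0.5)
take_umbrella = p * COST_WET > COST_CARRY   # expected-cost comparison
print(f"P(rain) = {p:.2f} -> take umbrella: {take_umbrella}")
```

Better prediction sharpens the probability, but the costs in the comparison are pure judgment, which is why cheaper prediction makes judgment more valuable.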

Studies of AI in the commercial world demonstrate that AI performance depends on having a lot of good data and clear judgment. Firms like Amazon, Uber, Facebook, and FedEx have benefitted from AI because they have invested in data collection and have made deliberate choices about what to predict and what to do with AI predictions. Once again, the economic impact of new technology is determined by its complements. As innovation in AI makes prediction cheaper, data and judgment become more valuable.

The Complexity of Automated War

In a new study we explore the implications of the economic perspective for military power. Organizational and strategic context shapes the performance of all military information systems. AI should be no different in this regard. The question is how the unique context of war shapes the critical AI complements of data and judgment.

While decisionmaking is similar in military and business organizations, they operate in radically different circumstances. Commercial organizations benefit from institutionalized environments and common standards. Military systems, by contrast, operate in a more anarchic and unpredictable environment. It is easier to meet the conditions of quality data and clear judgment in peacetime commerce than in violent combat.

An important implication is that military organizations that rely on AI will tend to become more complex. Militaries that invest in AI will become preoccupied with the quality of their data and judgment, as well as the ways in which teams of humans and machines make decisions. Junior personnel will have more responsibility for managing the alignment of AI systems and military objectives. Assessments of the relative power of AI-enabled militaries will thus turn on the quality of their human capital and managerial choices.

Anything that is a source of strength in war also becomes an attractive target. Adversaries of AI-enabled militaries will have more incentives to target the quality of data and the coherence of judgment. As AI enables organizations to act more efficiently, they will have to invest more in coordinating and protecting everything that they do. Rather than making military operations faster and more decisive, we expect the resulting organizational and strategic complexity to create more delays and confusion.

Emerging Lessons from Ukraine

The ongoing war in Ukraine features conventional forces in pitched combat over territorial control. This is exactly the kind of scenario that appears in a lot of AI futurism. Yet this same conflict may hold important lessons about how AI might be used very differently in war, or not used at all.

Many AI applications already play a supporting role. Ukraine has been dominating the information war as social media platforms, news feeds, media outlets, and even Russian restaurant reviews convey news of Ukrainian suffering and heroism. These platforms all rely on AI, while sympathetic hacktivists attempt to influence the content that AI serves up. Financial analysts use AI as they assess the effects of crushing economic sanctions on Russia, whether to better target them or protect capital from them. AI systems also support the commercial logistics networks that are funneling humanitarian supplies to Ukraine from donors around the world.

Western intelligence agencies also use data analytics to wade through a vast quantity of data (satellite imagery, airborne collection, signals intelligence, open-source chatter) as they track the battlefield situation. These agencies are sharing intelligence with Kyiv, which is used to support Ukrainian forces in the field. This means AI is already an indirect input to battlefield events. Another, more operational, application of AI is in commercial cybersecurity. For instance, Microsoft's proactive defense against Russian wipers has likely relied on AI to detect malware.

Importantly, these AI applications work because they are grounded in peaceful institutions beyond the battlefield. The war in Ukraine is embedded in a globalized economy that both shapes and is shaped by the war. Because AI is already an important part of that economy, it is already a part of this war. Because AI helps to enable global interdependence, it also helps to weaponize interdependence. While futurist visions of AI focus on direct battlefield applications, AI may end up playing a more important role in the indirect economic and informational context of war.

Futurist visions generally emphasize the offensive potency of AI. Yet the AI applications in use today are marginally empowering Ukraine in its defense against the Russian offensive. Instead of making war faster, AI is helping to prolong it by increasing the ability of Ukraine to resist. In this case, time works against the exposed and harried Russian military.

We expect that the most promising military applications of AI are those with analogues in commercial organizations, such as administration, personnel, and logistics. Yet even these activities are full of friction. Just-in-time resupply would not be able to compensate for Russia's abject failure to plan for determined resistance. Efficient personnel management systems would not have informed Russian personnel about the true nature of their mission.

Almost everyone overestimated Russia and underestimated Ukraine based on the best data and assessments available. Moreover, the intelligence failures inside Russia had less to do with the quality of data and analysis than with the insularity of Russian leadership. AI cannot fix, and may worsen, the information pathologies of authoritarian regimes. AI-enabled cyber warfare capabilities would likewise be of little use if leaders failed to include cyber warfare in their plans.

The Human Future of Automated War

It is folly to expect the same conditions that have enabled AI success in commerce to be replicated in war. The wartime conditions of violent uncertainty, unforeseen turbulence, and political controversy will tend to undermine the key AI conditions of good data and clear judgment. Indeed, strategy and leadership cannot be automated.

The questions that matter most about the causes, conduct, and conclusion of the war in Ukraine (or any war) are not really about prediction at all. Questions about the strategic aims, political resolve, and risk tolerances of leaders like Vladimir Putin, Volodymyr Zelenskyy, and Joseph Biden turn on judgments of values, goals, and priorities. Only humans can provide the answers.

AI will provide many tactical improvements in the years to come. Yet fancy tactics are no remedy for bad strategy. Wars are caused by miscalculation and confusion, and artificial intelligence cannot offset natural stupidity.

Read more here:
Artificial Intelligence And the Human Context of War - The National Interest Online