Archive for the ‘Artificial Intelligence’ Category

BigBear.ai to Highlight Artificial Intelligence and Machine Learning Capabilities at Upcoming Industry Events – Business Wire

COLUMBIA, Md.--(BUSINESS WIRE)--BigBear.ai (NYSE: BBAI), a leader in AI-powered analytics and cyber engineering solutions, announced company executives are embarking on a thought-leadership campaign across multiple global industry events. The campaign will emphasize how the company's advancements in AI technologies will impact the federal and commercial markets in the coming months.

At these events, BigBear.ai leaders will highlight the capabilities of BigBear.ai's newly acquired company, ProModel Corporation, the importance of defining responsible AI usage, and how federal and commercial organizations leverage AI and ML.

The events BigBear.ai is scheduled to address include:

CTMA Partners Meeting May 3-5, 2022: Virginia Beach, VA

Due to the rapid deployment and advancement of sensor technologies, artificial intelligence, and data science, the Department of Defense has turned to a more predictive-based approach to maintaining technology assets. The agency's recently revamped condition-based maintenance plus (CBM+) policy will accelerate the adoption, integration, and use of these emerging technologies while shifting its strategic approach from largely reactive maintenance to proactive maintenance. Participating as part of a panel session to address this trend, BigBear.ai Senior Vice President of Analytics Carl Napoletano will highlight ProModel's commercial capabilities and ProModel Government Services' legacy capabilities in the federal space.

DIA Future Technologies Symposium May 11-12, 2022: Virtual Event

BigBear.ai's Senior Vice President of Analytics, Frank Porcelli, will brief the DIA community about BigBear.ai's AI-powered solutions at this virtual presentation. After providing a high-level overview and demonstration of the company's AI products (Observe, Orient, and Dominate), Frank will also offer insights into how AI technologies are being leveraged in the federal sector.

Conference on Governance of Emerging Technologies and Science May 19-20, 2022: Phoenix, Arizona

Newly appointed BigBear.ai General Counsel Carolyn Blankenship will attend the ninth edition of Arizona State's annual conference, which examines how to create sustainable governance solutions that address new technologies' legal, regulatory, and policy ramifications. During her presentation, Carolyn will detail the importance of Intellectual Property (IP) law in AI and the responsible use of AI and other emerging technologies. Prior to starting as General Counsel, Carolyn organized and led Thomson Reuters' cross-functional team that outlined the organization's first set of Data Ethics Principles.

Automotive Innovation Forum May 24-25, 2022: Munich, Germany

ProModel was among the select few organizations invited to attend Autodesk's Automotive Innovation Forum 2022. This premier industry event celebrates new automotive plant design and manufacturing technology solutions. Michael Jolicoeur, Director of ProModel's Autodesk Business Division, will headline a panel at the conference and highlight the latest industry trends in automotive factory design and automation.

DAX 2022 June 4, 2022: University of Maryland, Baltimore County, Baltimore, Maryland

Three BigBear.ai experts - Zach Casper, Senior Director of Cyber; Leon Worthen, Manager of Strategic Operations; and Sammy Hamilton, Data Scientist/Engagement Engineer - will headline a panel discussion exploring the variety of ways AI and ML are deployed throughout the defense industry. The trio of experts will discuss how AI and ML solve pressing cybersecurity problems facing the Department of Defense and intelligence communities.

To connect with BigBear.ai at these events, send an email to events@bigbear.ai.

About BigBear.ai

BigBear.ai delivers AI-powered analytics and cyber engineering solutions to support mission-critical operations and decision-making in complex, real-world environments. BigBear.ai's customers, which include the US Intelligence Community, Department of Defense, the US Federal Government, as well as customers in manufacturing, logistics, commercial space, and other sectors, rely on BigBear.ai's solutions to see and shape their world through reliable, predictive insights and goal-oriented advice. Headquartered in Columbia, Maryland, BigBear.ai has additional locations in Virginia, Massachusetts, Michigan, and California. For more information, please visit: http://bigbear.ai/ and follow BigBear.ai on Twitter: @BigBearai.

Link:
BigBear.ai to Highlight Artificial Intelligence and Machine Learning Capabilities at Upcoming Industry Events - Business Wire

TeamViewer Brings Artificial Intelligence to the Shopfloor – PR Newswire

"As the European leader in enterprise AR solutions, we are constantly exploring new ways of supporting frontline workers' daily tasks with intelligent technology. The integration of AI capabilities into AR workflows was the next logical step for us. Enriching complex manual processes with self-learning algorithms truly is a game-changer for digitalization projects and adds immediate value for our customers. For example, AI can perform certain verification tasks, reducing the probability for human errors to almost zero," says Hendrik Witt, Chief Product Officer at TeamViewer.

Global customers from the food and beverage industry such as NSF participated in a closed early access program of AiStudio and have already developed AI-supported TeamViewer Frontline workflows for quality assurance and workplace safety, further improving productivity and efficiency. Use cases include the automated verification that hygiene gloves are worn during food preparation processes, as well as confirmation of the correct commissioning in warehouse logistics. Other scenarios for the add-on range from quality assurance with AI-based detection of damaged or wrongly assembled products, to automatically recognizing factory equipment such as industrial machines and instantly providing additional information such as relevant maintenance instructions via augmented reality software.

Two out-of-the-box AI capabilities will be available for all customers with a Frontline license: one can detect common shopfloor warning signs through the smart glasses' camera, the other one can detect if safety helmets are worn. Companies can easily implement further individual automated safety checks, adding an AI-based layer of workplace security.

More information on AiStudio can be found here.

About TeamViewer

TeamViewer is a leading global technology company that provides a connectivity platform to remotely access, control, manage, monitor, and repair devices of any kind, from laptops and mobile phones to industrial machines and robots. Although TeamViewer is free of charge for private use, it has more than 625,000 subscribers and enables companies of all sizes and from all industries to digitalize their business-critical processes through seamless connectivity. Against the backdrop of global megatrends like device proliferation, automation and new work, TeamViewer proactively shapes digital transformation and continuously innovates in the fields of Augmented Reality, Internet of Things and Artificial Intelligence. Since the company's foundation in 2005, TeamViewer's software has been installed on more than 2.5 billion devices around the world. The company is headquartered in Göppingen, Germany, and employs around 1,500 people globally. In 2021, TeamViewer achieved billings of EUR 548 million. TeamViewer AG (TMV) is listed on the Frankfurt Stock Exchange and belongs to the MDAX. Further information can be found at https://www.teamviewer.com/.

Press Contact
Julia Gottschalk
Tel.: +49 7161 60692 3895
E-mail: [emailprotected]

SOURCE TeamViewer

Read this article:
TeamViewer Brings Artificial Intelligence to the Shopfloor - PR Newswire

7 Roles of Artificial Intelligence in the Defence Sector – Robotics and Automation News

Artificial Intelligence has managed to infiltrate many industries and sectors, including the defence sector and different military operations.

Almost all nations use Artificial Intelligence in managing their defence sectors and military operations.

Huge investments are currently being made in this area to further strengthen national defence.

Here are seven roles of artificial intelligence in the defence sector.

Without an actual war, how would one teach soldiers what real combat is like? Here, Artificial Intelligence plays a huge role.

Artificial Intelligence can be used to create simulations and training models that familiarize soldiers with the different fighting systems they will encounter in actual military operations.

The navy and army of different countries use Artificial Intelligence to create sensor simulation programmes to help the soldiers.

Such AI is also combined with augmented reality and virtual reality to create more real-life situations.

The defence sector holds much critical and classified information. The sensitive information makes the defence sector extremely prone to cyberattacks.

The defence sector naturally hides its digital footprint by adding layers of security.

The defence sector also often masks its IP addresses. However, ordinary security measures are not enough to protect such sensitive information.

To provide an added level of security, the military sector often uses Artificial Intelligence, which plays a critical role in preventing unauthorized intrusion.

It is no secret that surveillance plays an important role in the defence sector and different military operations.

Artificial Intelligence can be used in surveillance for keeping an eye on suspicious activity.

Not only can it identify suspicious activity, it can also alert the respective authorities to tackle the situation. AI-enabled robots also play a critical role in such activities.

Weapons are no longer simple mechanical devices; new-age weapons are commonly embedded with Artificial Intelligence technology.

The application of AI can be most commonly seen in sophisticated missiles which are designed to accurately attack a target.

Military operations also have to deal with logistics, and logistics in the defence sector is nothing like an ordinary logistics service.

Artificial Intelligence also plays a critical role in ensuring the safety, security and efficiency of the logistic system.

Robots and Artificial Intelligence are combined to create Remotely Operated Vehicles, which are used for defusing explosives. Sending a person to defuse explosives is dangerous for obvious reasons.

However, with delicate and highly intelligent Remotely Operated Vehicles, the entire process of defusing explosives can be made safer.

Artificial Intelligence is also used in Network Traffic Analysis. This system mostly monitors the internet traffic, especially the voice traffic passing through different software like Google Talk and Skype.

The voice traffic is then scanned in real time for messages containing keywords such as kill, blast and bomb. This technology is useful in preventing attacks and thus works towards the safety of the people.
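The keyword screening described above can be sketched as a simple filter. The watchlist, message format and function below are illustrative assumptions; a real system would operate on speech-to-text output and use far richer semantic analysis than exact word matching.

```python
# Illustrative sketch of keyword-based message screening.
# The watchlist and alert logic are simplified assumptions, not a real system.

WATCHLIST = {"kill", "blast", "bomb"}

def flag_message(transcript: str) -> set:
    """Return the watchlist keywords found in a transcribed message."""
    words = {w.strip(".,!?").lower() for w in transcript.split()}
    return words & WATCHLIST

messages = [
    "meet me at noon",
    "the blast happens at midnight",
]

for msg in messages:
    hits = flag_message(msg)
    if hits:
        print(f"ALERT: {sorted(hits)} in: {msg!r}")
```

In practice the hard part is upstream of this filter: transcribing noisy voice traffic accurately enough that a keyword scan is meaningful at all.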

Other usages of Artificial Intelligence in the defence and military sector include analysis of data from different sensors and satellites.

AI is also used aboard naval ships, which use sonar to detect mines. Military robots, as discussed above, help ensure the safety of personnel. AI and machine learning are also combined to operate unmanned vehicles such as battle tanks and aircraft.

Usage of Artificial Intelligence in the military is not new. Many developed and developing nations use AI-based technology to strengthen their military operation.

The countries are investing highly in Artificial Intelligence to develop different military infrastructures. The degree of such investment, of course, differs from one country to another.

Even though the financial investment is huge, it is worthwhile. Employing Artificial Intelligence also requires expertise: many scientists, coders and developers work together in a laboratory to bring Artificial Intelligence into military operations.

The challenges of employing Artificial Intelligence in military operations come in the form of money and skills.

However, the same can be addressed by making it a priority. In the coming years, the usage of AI will keep improving in different sectors, including the defence sector.


The rest is here:
7 Roles of Artificial Intelligence in the Defence Sector - Robotics and Automation News

Engineers use artificial intelligence to capture the complexity of breaking waves – MIT News

Waves break once they swell to a critical height, before cresting and crashing into a spray of droplets and bubbles. These waves can be as large as a surfer's point break and as small as a gentle ripple rolling to shore. For decades, the dynamics of how and when a wave breaks have been too complex to predict.

Now, MIT engineers have found a new way to model how waves break. The team used machine learning along with data from wave-tank experiments to tweak equations that have traditionally been used to predict wave behavior. Engineers typically rely on such equations to help them design resilient offshore platforms and structures. But until now, the equations have not been able to capture the complexity of breaking waves.

The updated model made more accurate predictions of how and when waves break, the researchers found. For instance, the model estimated a wave's steepness just before breaking, and its energy and frequency after breaking, more accurately than the conventional wave equations.

Their results, published today in the journal Nature Communications, will help scientists understand how a breaking wave affects the water around it. Knowing precisely how these waves interact can help hone the design of offshore structures. It can also improve predictions for how the ocean interacts with the atmosphere. Having better estimates of how waves break can help scientists predict, for instance, how much carbon dioxide and other atmospheric gases the ocean can absorb.

"Wave breaking is what puts air into the ocean," says study author Themis Sapsis, an associate professor of mechanical and ocean engineering and an affiliate of the Institute for Data, Systems, and Society at MIT. "It may sound like a detail, but if you multiply its effect over the area of the entire ocean, wave breaking starts becoming fundamentally important to climate prediction."

The study's co-authors include lead author and MIT postdoc Debbie Eeltink, Hubert Branger and Christopher Luneau of Aix-Marseille University, Amin Chabchoub of Kyoto University, Jerome Kasparian of the University of Geneva, and T.S. van den Bremer of Delft University of Technology.

Learning tank

To predict the dynamics of a breaking wave, scientists typically take one of two approaches: They either attempt to precisely simulate the wave at the scale of individual molecules of water and air, or they run experiments to try and characterize waves with actual measurements. The first approach is computationally expensive and difficult to simulate even over a small area; the second requires a huge amount of time to run enough experiments to yield statistically significant results.

The MIT team instead borrowed pieces from both approaches to develop a more efficient and accurate model using machine learning. The researchers started with a set of equations that is considered the standard description of wave behavior. They aimed to improve the model by training the model on data of breaking waves from actual experiments.

"We had a simple model that doesn't capture wave breaking, and then we had the truth, meaning experiments that involve wave breaking," Eeltink explains. "Then we wanted to use machine learning to learn the difference between the two."

The researchers obtained wave breaking data by running experiments in a 40-meter-long tank. The tank was fitted at one end with a paddle which the team used to initiate each wave. The team set the paddle to produce a breaking wave in the middle of the tank. Gauges along the length of the tank measured the water's height as waves propagated down the tank.

"It takes a lot of time to run these experiments," Eeltink says. "Between each experiment you have to wait for the water to completely calm down before you launch the next experiment, otherwise they influence each other."

Safe harbor

In all, the team ran about 250 experiments, the data from which they used to train a type of machine-learning algorithm known as a neural network. Specifically, the algorithm is trained to compare the real waves in experiments with the predicted waves in the simple model, and based on any differences between the two, the algorithm tunes the model to fit reality.
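The training loop described above, in which a simple model's prediction is compared with the measured truth and a correction is tuned until the gap closes, can be sketched in miniature. Everything here is an illustrative assumption: the synthetic "steepness" feature, the data, and the single-parameter correction fitted by gradient descent all stand in for the team's actual neural network and tank measurements.

```python
import numpy as np

# Sketch of residual learning: a "simple model" predicts a wave property,
# "experiments" give the truth, and a correction term is tuned to close the gap.

rng = np.random.default_rng(0)

steepness = rng.uniform(0.1, 0.4, size=200)       # illustrative input feature
simple_model = 0.8 * steepness                     # untrained model's prediction
truth = 0.8 * steepness + 0.3 * steepness**2       # stand-in for measured waves

# Fit the residual (truth - simple_model) with a quadratic correction term,
# tuned by gradient descent (standing in for neural-network training).
w = 0.0
lr = 0.5
for _ in range(2000):
    residual = truth - (simple_model + w * steepness**2)
    w += lr * np.mean(residual * steepness**2)

corrected = simple_model + w * steepness**2
print(f"learned coefficient: {w:.3f}")             # approaches the true 0.3
print(f"mean abs error before: {np.mean(np.abs(truth - simple_model)):.4f}")
print(f"mean abs error after:  {np.mean(np.abs(truth - corrected)):.4f}")
```

The design choice mirrors the article's logic: rather than replacing the physics-based equations, the learned term only has to capture what the equations miss, which needs far less data than learning wave behavior from scratch.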

After training the algorithm on their experimental data, the team introduced the model to entirely new data: in this case, measurements from two independent experiments, each run at separate wave tanks with different dimensions. In these tests, they found the updated model made more accurate predictions than the simple, untrained model, for instance making better estimates of a breaking wave's steepness.

The new model also captured an essential property of breaking waves known as the downshift, in which the frequency of a wave is shifted to a lower value. The speed of a wave depends on its frequency. For ocean waves, lower frequencies move faster than higher frequencies. Therefore, after the downshift, the wave will move faster. The new model predicts the change in frequency, before and after each breaking wave, which could be especially relevant in preparing for coastal storms.
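The claim that lower-frequency ocean waves move faster follows from the deep-water dispersion relation, a standard textbook result (not stated explicitly in the article, but consistent with it). For a wave of angular frequency ω, wavenumber k, and frequency f:

```latex
\omega^2 = g k, \qquad
c = \frac{\omega}{k} = \frac{g}{\omega} = \frac{g}{2\pi f}
```

The phase speed c is inversely proportional to the frequency f, so after the downshift lowers f, the wave travels faster.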

"When you want to forecast when high waves of a swell would reach a harbor, and you want to leave the harbor before those waves arrive, then if you get the wave frequency wrong, then the speed at which the waves are approaching is wrong," Eeltink says.

The team's updated wave model is in the form of an open-source code that others could potentially use, for instance in climate simulations of the ocean's potential to absorb carbon dioxide and other atmospheric gases. The code can also be worked into simulated tests of offshore platforms and coastal structures.

"The number one purpose of this model is to predict what a wave will do," Sapsis says. "If you don't model wave breaking right, it would have tremendous implications for how structures behave. With this, you could simulate waves to help design structures better, more efficiently, and without huge safety factors."

This research is supported, in part, by the Swiss National Science Foundation, and by the U.S. Office of Naval Research.

Go here to read the rest:
Engineers use artificial intelligence to capture the complexity of breaking waves - MIT News

Another Firing Among Google's A.I. Brain Trust, and More Discord – The New York Times

Less than two years after Google dismissed two researchers who criticized the biases built into artificial intelligence systems, the company has fired a researcher who questioned a paper it published on the abilities of a specialized type of artificial intelligence used in making computer chips.

The researcher, Satrajit Chatterjee, led a team of scientists in challenging the celebrated research paper, which appeared last year in the scientific journal Nature and said computers were able to design certain parts of a computer chip faster and better than human beings.

Dr. Chatterjee, 43, was fired in March, shortly after Google told his team that it would not publish a paper that rebutted some of the claims made in Nature, said four people familiar with the situation who were not permitted to speak openly on the matter. Google confirmed in a written statement that Dr. Chatterjee had been terminated with cause.

Google declined to elaborate about Dr. Chatterjee's dismissal, but it offered a full-throated defense of the research he criticized and of its unwillingness to publish his assessment.

"We thoroughly vetted the original Nature paper and stand by the peer-reviewed results," Zoubin Ghahramani, a vice president at Google Research, said in a written statement. "We also rigorously investigated the technical claims of a subsequent submission, and it did not meet our standards for publication."

Dr. Chatterjee's dismissal was the latest example of discord in and around Google Brain, an A.I. research group considered to be a key to the company's future. After spending billions of dollars to hire top researchers and create new kinds of computer automation, Google has struggled with a wide variety of complaints about how it builds, uses and portrays those technologies.

Tension among Google's A.I. researchers reflects much larger struggles across the tech industry, which faces myriad questions over new A.I. technologies and the thorny social issues that have entangled these technologies and the people who build them.

The recent dispute also follows a familiar pattern of dismissals and dueling claims of wrongdoing among Google's A.I. researchers, a growing concern for a company that has bet its future on infusing artificial intelligence into everything it does. Sundar Pichai, the chief executive of Google's parent company, Alphabet, has compared A.I. to the arrival of electricity or fire, calling it one of humankind's most important endeavors.

Google Brain started as a side project more than a decade ago when a group of researchers built a system that learned to recognize cats in YouTube videos. Google executives were so taken with the prospect that machines could learn skills on their own, they rapidly expanded the lab, establishing a foundation for remaking the company with this new artificial intelligence. The research group became a symbol of the companys grandest ambitions.

Before she was fired as a co-leader of Google's ethical A.I. team in late 2020, Dr. Timnit Gebru was seeking permission to publish a research paper about how A.I.-based language systems, including technology built by Google, may end up using the biased and hateful language they learn from text in books and on websites. Dr. Gebru said she had grown exasperated over Google's response to such complaints, including its refusal to publish the paper.

A few months later, the company fired the other head of the team, Margaret Mitchell, who publicly denounced Google's handling of the situation with Dr. Gebru. The company said Dr. Mitchell had violated its code of conduct.

The paper in Nature, published last June, promoted a technology called reinforcement learning, which the paper said could improve the design of computer chips. The technology was hailed as a breakthrough for artificial intelligence and a vast improvement to existing approaches to chip design. Google said it used this technique to develop its own chips for artificial intelligence computing.

Google had been working on applying the machine learning technique to chip design for years, and it published a similar paper a year earlier. Around that time, Google asked Dr. Chatterjee, who has a doctorate in computer science from the University of California, Berkeley, and had worked as a research scientist at Intel, to see if the approach could be sold or licensed to a chip design company, the people familiar with the matter said.

But Dr. Chatterjee expressed reservations in an internal email about some of the paper's claims and questioned whether the technology had been rigorously tested, three of the people said.

While the debate about that research continued, Google pitched another paper to Nature. For the submission, Google made some adjustments to the earlier paper and removed the names of two authors, who had worked closely with Dr. Chatterjee and had also expressed concerns about the paper's main claims, the people said.

When the newer paper was published, some Google researchers were surprised. They believed that it had not followed a publishing approval process that Jeff Dean, the company's senior vice president who oversees most of its A.I. efforts, said was necessary in the aftermath of Dr. Gebru's firing, the people said.

Google and one of the paper's two lead authors, Anna Goldie, who wrote it with a fellow computer scientist, Azalia Mirhoseini, said the changes from the earlier paper did not require the full approval process. Google allowed Dr. Chatterjee and a handful of internal and external researchers to work on a paper that challenged some of its claims.

The team submitted the rebuttal paper to a so-called resolution committee for publication approval. Months later, the paper was rejected.

The researchers who worked on the rebuttal paper said they wanted to escalate the issue to Mr. Pichai and Alphabet's board of directors. They argued that Google's decision not to publish the rebuttal violated its own A.I. principles, including "upholding high standards of scientific excellence." Soon after, Dr. Chatterjee was informed that he was no longer an employee, the people said.

Ms. Goldie said that Dr. Chatterjee had asked to manage their project in 2019 and that they had declined. When he later criticized it, she said, he could not substantiate his complaints and ignored the evidence they presented in response.

"Sat Chatterjee has waged a campaign of misinformation against me and Azalia for over two years now," Ms. Goldie said in a written statement.

She said the work had been peer-reviewed by Nature, one of the most prestigious scientific publications. And she added that Google had used their methods to build new chips and that these chips were currently used in Google's computer data centers.

Laurie M. Burgess, Dr. Chatterjee's lawyer, said it was "disappointing that certain authors of the Nature paper are trying to shut down scientific discussion by defaming and attacking Dr. Chatterjee for simply seeking scientific transparency." Ms. Burgess also questioned the leadership of Dr. Dean, who was one of 20 co-authors of the Nature paper.

"Jeff Dean's actions to repress the release of all relevant experimental data, not just data that supports his favored hypothesis, should be deeply troubling both to the scientific community and the broader community that consumes Google services and products," Ms. Burgess said.

Dr. Dean did not respond to a request for comment.

After the rebuttal paper was shared with academics and other experts outside Google, the controversy spread throughout the global community of researchers who specialize in chip design.

The chip maker Nvidia says it has used methods for chip design that are similar to Google's, but some experts are unsure what Google's research means for the larger tech industry.

"If this is really working well, it would be a really great thing," said Jens Lienig, a professor at the Dresden University of Technology in Germany, referring to the A.I. technology described in Google's paper. "But it is not clear if it is working."

More here:
Another Firing Among Google's A.I. Brain Trust, and More Discord - The New York Times