Archive for the ‘Alphago’ Category

Project Force: AI and the military a friend or foe? – Al Jazeera English

The accuracy and precision of today's weapons are steadily forcing contemporary battlefields to empty of human combatants.

As more and more sensors fill the battlespace, sending vast amounts of data back to analysts, humans struggle to make sense of the mountain of information gathered.

This is where artificial intelligence (AI) comes in: learning algorithms that thrive on big data. In fact, the more data these systems analyse, the more accurate they can be.

In short, AI is the ability of a system to think in a limited way, working specifically on problems normally associated with human intelligence, such as pattern and speech recognition, translation and decision-making.

AI and machine learning have been a part of civilian life for years. Megacorporations like Amazon and Google have used these tools to build vast commercial empires based in part on predicting the wants and needs of the people who use them.

The United States military has also long invested in civilian AI, with the Pentagon's Defense Advanced Research Projects Agency (DARPA) funnelling money into key areas of AI research.

However, to tackle specific military concerns, the defence establishment soon realised its AI needs were not being met. So they approached Silicon Valley, asking for its help in giving the Pentagon the tools it would need to process an ever-growing mountain of information.

Employees at several corporations were extremely uncomfortable with their research being used by the military and persuaded the companies (Google being one of them) to opt out of, or at least dial down, their cooperation with the defence establishment.

While the much-hyped idea of "killer robots", remorseless machines hunting down humans and terminating them for reasons known only to themselves, has caught the public's imagination, the current focus of AI could not be further from that.

As a recent report on the military applications of AI points out, the technology is central to providing robotic assistance on the battlefield, which will enable forces to maintain or expand warfighting capacity without increasing manpower.

What does this mean? In effect, robotic systems will take on tasks considered too menial or too dangerous for human beings, such as running unmanned supply convoys, clearing mines or refuelling aircraft in mid-air. AI is also a force multiplier, meaning it allows the same number of people to do and achieve more.

An idea that illustrates this is the concept of the robotic Loyal Wingman being developed for the US Air Force. Designed to fly alongside a jet flown by a human pilot, this unmanned jet would fight off the enemy and complete its mission, or help the human pilot do so. It would act as an AI bodyguard, defending the manned aircraft, and is also designed to sacrifice itself if there is a need to do so to save the human pilot.

A Navy X-47B drone, an unmanned combat aerial vehicle [File: AP]

As AI power develops, the push towards systems becoming autonomous will only increase. Currently, militaries are keen to keep a human involved in the decision-making loop. But in wartime these communication links are potential targets: cut off the head and the body would not be able to think. The majority of drones currently deployed around the world would lose their core functions if the data link connecting them to their human operator were severed.

This is not the case with the high-end, unarmed, intelligence-gathering Global Hawk drone, which, once given orders, is able to carry them out independently, without the need for a vulnerable data link, allowing it to be sent into highly contested airspace to gather vital information. This makes it far more survivable in a future conflict, and money is now pouring into new systems that can fly themselves, like France's Dassault nEUROn or Russia's Sukhoi S-70, both semi-stealthy autonomous combat drone designs.

AI programmes and systems are constantly improving, as their quick reactions and data processing allow them to finely hone the tasks they are designed to perform.

Robotic air-to-air refuelling aircraft have a better flight record and are able to keep themselves steady in weather that would leave a human pilot struggling. In war games and dogfight simulations, AI pilots are already starting to score significant victories over their human counterparts.

While AI algorithms are great at data-crunching, they have also started to surprise observers in the choices they make.

In 2016, when the AI programme AlphaGo took on Lee Se-dol, a human grandmaster and world champion of the famously complex game of Go, it was expected to act methodically, like a machine. What surprised everyone watching was the unexpectedly bold moves it sometimes made, catching its opponent off-guard. The algorithm went on to win, to the shock of the tournament's observers. This kind of breakthrough in AI development had not been expected for years, yet here it was.

Machine intelligence is, and increasingly will be, incorporated into manned platforms. Ships will now have fewer crew members as AI programmes take on more of the work. Single pilots will be able to control squadrons of unmanned aircraft that fly themselves but obey that human's orders.

Facial recognition security cameras monitor a pedestrian shopping street in Beijing [File: AP]

AI's main strength is in the arena of surveillance and counterinsurgency: being able to scan images made available from millions of CCTV cameras; being able to follow multiple potential targets; using big data to finesse predictions of a target's behaviour with ever-greater accuracy. All this is already within the grasp of AI systems that have been set up for this purpose: unblinking eyes that watch, record and monitor 24 hours a day.

The sheer volume of material that can be gathered is staggering and would be beyond the capacity of human analysts to watch, absorb and fold into any conclusions they make.

AI is perfect for this, and one of the testbeds for this kind of analytical detection software is special operations, where it has seen significant success. The tempo of special forces operations in counterinsurgency and counterterrorism has increased dramatically, as information from a raid can now be quickly analysed and acted upon, leading to further raids that same night, which in turn yield more information.

This speed can knock any armed group off balance: the raids become so frequent and relentless that the only option left is to move and hide, suppressing the organisation and rendering it ineffective.

A man uses a PlayStation-style console to manoeuvre the aircraft, as he demonstrates a control system for unmanned drones [File: AP]

As AI military systems mature, their record of success will improve, and this will help overcome another key challenge in the acceptance of informationalised systems by human operators: trust.

Human soldiers will learn to increasingly rely on smart systems that can think at a faster rate than they can, spotting threats before they do. An AI system is only as good as the information it receives and processes about its environment, in other words, what it perceives. The more information it has, the more accurate it will be in its perception, assessment and subsequent actions.

The least complicated environment for a machine to understand is flight. Simple rules, a slim chance of collision, and relatively direct routes to and from its area of operations mean that this is where the first inroads into AI and relatively smart systems have been made. Loitering munitions, designed to seek out and destroy radar installations, are already operational and have been used in conflicts such as the war between Armenia and Azerbaijan.

Investment and research have also poured into maritime platforms. Operating in a more complex environment, with sea life and surface traffic potentially obscuring sensor readings, unmanned underwater vehicles (UUVs) are a major area of development. Stealthy, near-silent systems, they are virtually undetectable and can stay submerged almost indefinitely.

Alongside these advances, there is growing concern about how deadly such AI systems could be.

Human beings have proven themselves extremely proficient in the ways of slaughter, but there is increasing worry that these mythical robots could run amok and that humans would lose control. This is the central concern among commentators, researchers and potential manufacturers.

But an AI system would not get enraged, feel hatred for its enemy, or decide to take it out on the local population if its AI comrades were destroyed. It could have the Laws of Armed Conflict built into its software.

The most complex and demanding environment is urban combat, where the wars of the near future will increasingly be fought. Conflicts in cities can overwhelm most human beings and it is highly doubtful a machine with a very narrow view of the world would be able to navigate it, let alone fight and prevail without making serious errors of judgement.

A man looks at a demonstration of human motion analysis software at the stall of an artificial intelligence solutions maker at an exhibition in China [File: Reuters]

While they do not exist now, killer robots continue to worry many, and codes of ethics are already being drawn up. Could a robot combatant indeed understand and apply the Laws of Armed Conflict? Could it tell friend from foe, and if so, what would its reaction be? This applies especially to militias, soldiers from opposing sides using similar equipment, fighters who do not usually wear a defining uniform, and non-combatants.

The concern is so high that Human Rights Watch has urged the prohibition of fully autonomous AI units capable of making lethal decisions, calling for a ban much like those in place for mines and chemical and biological weapons.

Another main concern is that a machine can be hacked in ways a human cannot. It might be fighting alongside you one minute but then turn on you the next. Human units have mutinied and changed allegiances before, but to have one's entire army or fleet turned against one with a keystroke is a terrifying possibility for military planners. And software can go wrong. A pervasive phrase in modern civilian life is "sorry, the system is down"; imagine this applied to armed machines engaged in battle.

Perhaps the most concerning of all is the offensive use of AI malware. More than 10 years ago, the world's most famous cyber-weapon, Stuxnet, sought to insinuate itself into the software controlling the spinning of centrifuges refining uranium in Iran. Able to hide itself, it covered its tracks, searching for a particular piece of code to attack that would cause the centrifuges to spin out of control and be destroyed. Highly sophisticated then, it is nothing compared with what is available now and what could be deployed during a conflict.

The desire to design and build these new weapons that are expected to tip the balance in future conflicts has triggered an arms race between the US and its near-peer competitors Russia and China.

AI is not only empowering; its leverage is asymmetric, meaning a small country can develop effective AI software without the industrial might needed to research, develop and test a new weapons system. It is a powerful way for a country to leapfrog the competition, producing potent designs that will give it the edge needed to win a war.

Russia has declared this the new frontier for military research. President Vladimir Putin, in an address in 2017, said that whoever became the leader in the sphere of AI would become "the ruler of the world". To back that up, the same year Russia's Military-Industrial Committee approved the integration of AI into 30 percent of the country's armed forces by 2030.

Current realities are different, and so far Russian ventures into this field have proven patchy. The Uran-9 unmanned combat vehicle performed poorly in the urban battlefields of Syria in 2018, often failing to understand its surroundings or detect potential targets. Despite these setbacks, it was inducted into the Russian military in 2019, a clear sign of the determination in senior Russian military circles to field robotic units with increasing autonomy as they grow in sophistication.

China, too, has clearly stated that a major focus of its research and development is how to win at "intelligent(ised)" warfare. In a report on China's embrace and use of AI in military applications, the Brookings Institution wrote that this will include "command decision making, military deductions ... that could change the very mechanisms for victory in future warfare". Current areas of focus are AI-enabled radar, robotic ships, and smarter cruise and hypersonic missiles, all areas of research that other countries are pursuing as well.

An American military pilot flies a Predator drone from a ground command post during a night border mission [File: AP]

The development of military artificial intelligence, giving systems increasing autonomy, offers military planners a tantalising glimpse of victory on the battlefield, but the weapons themselves, and the countermeasures that would be aimed against them in a war of the near future, remain largely untested.

Countries like Russia and China with their revamped and streamlined militaries are no longer looking to achieve parity with the US; they are looking to surpass it by researching heavily into the weapons of the future.

Doctrine is key: how these new weapons will integrate into future war plans and how they can be leveraged for their maximum effect on the enemy.

Any quantitative leap in weapons design is always a concern, as it can give a country the belief that it could be victorious in battle, thus lowering the threshold for conflict.

As war speeds up even further, it will increasingly be left to these systems to do the fighting, to give recommendations and, ultimately, to make the decisions.

Read more here:
Project Force: AI and the military a friend or foe? - Al Jazeera English

Diffblue’s First AI-Powered Automated Java Unit Testing Solution Is Now Free for Commercial and Open Source Software Developers – StreetInsider.com


OXFORD, United Kingdom, March 22, 2021 (GLOBE NEWSWIRE) -- Diffblue, creators of the world's first AI-for-code solution that automates writing unit tests for Java, today announced that its free IntelliJ plugin, Diffblue Cover: Community Edition, is now available to create unit tests for all of an organization's Java code, both open source and commercial.

Free for any individual user, the IntelliJ plugin is available here for immediate download. It supports both IntelliJ versions 2020.02 and 2020.03. The Diffblue Cover: Community Edition has already automatically created nearly 150,000 Java unit tests to date!

Diffblue also offers a professional version for commercial customers who require premium support as well as indemnification and the ability to write tests for packages. In addition, Diffblue offers a CLI version of Diffblue Cover, well suited to team collaboration.

Diffblue's pioneering technology, developed by researchers from the University of Oxford, is based on reinforcement learning, the same machine learning strategy that powered AlphaGo, the software program from Alphabet subsidiary DeepMind that beat the world champion player of Go.

Diffblue Cover automates the burdensome task of writing Java unit tests, a task that takes up as much as 20 percent of Java developers' time. Diffblue Cover creates Java tests 10X-100X faster than humans can, tests that are also easy for developers to understand, and automatically maintains them as the code evolves, even on applications with tens of millions of lines of code. Most unit test generators create boilerplate code for tests rather than tests that compile and run. These tools guess at inputs that can be used as a starting point, but developers have to finish the work to get functioning tests. Diffblue Cover is uniquely able to create complete, human-readable unit tests that are ready to run immediately.
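Diffblue's actual output is not shown in this release, but the kind of "complete, ready-to-run" test it describes, concrete inputs and asserted outputs rather than boilerplate stubs, might look like the following hand-written sketch. The `PriceCalculator` class and its test are hypothetical, invented here purely for illustration:

```java
// Hypothetical class under test -- not taken from Diffblue's documentation.
class PriceCalculator {
    static int totalCents(int unitCents, int quantity) {
        if (quantity < 0) throw new IllegalArgumentException("negative quantity");
        return unitCents * quantity;
    }
}

public class PriceCalculatorTest {
    // A complete test exercises real inputs and asserts concrete outputs;
    // a boilerplate stub would leave these values for the developer to fill in.
    public static void main(String[] args) {
        if (PriceCalculator.totalCents(250, 4) != 1000)
            throw new AssertionError("expected 1000 cents for 4 items at 250");

        boolean threw = false;
        try {
            PriceCalculator.totalCents(250, -1);
        } catch (IllegalArgumentException e) {
            threw = true;   // the exceptional path is tested too
        }
        if (!threw) throw new AssertionError("expected IllegalArgumentException");

        System.out.println("all tests passed");
    }
}
```

In practice such tests would use a framework like JUnit; plain assertions are used here only to keep the sketch self-contained.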

Diffblue Cover today supports Java, the most popular enterprise programming language in the Global 2000. The technology behind Diffblue Cover can also be extended to support other popular programming languages such as Python, JavaScript and C#.

About Diffblue

Diffblue is leading the automation of software creation through the power of AI. Founded by researchers from the University of Oxford, Diffblue created Diffblue Cover, which uses AI for code to write unit tests that help software teams and organizations efficiently improve their code coverage and quality and ship software faster, more frequently and with fewer defects. With customers including AWS and Goldman Sachs, Diffblue is venture-backed by Goldman Sachs and Oxford Sciences Innovation. Follow us on Twitter: @diffblueHQ

Editorial contact for Diffblue: Lonn Johnston, Flak42, lonn@flak42.com, +1.650.219.7764

Visit link:
Diffblue's First AI-Powered Automated Java Unit Testing Solution Is Now Free for Commercial and Open Source Software Developers - StreetInsider.com

PNYA Post Break Will Explore the Relationship Between Editors and Assistants – Creative Planet Network

In honor of Women's History Month, Post Break, the Post New York Alliance (PNYA)'s free webinar series, will examine how two top female editors have worked with their assistants to deliver shows for HBO, Freeform and others.

By ArtisansPR Published: March 23, 2021

Free video conference slated for Thursday, March 25th at 4:00 p.m. EDT

NEW YORK CITY: A strong working relationship between an editor and her assistants is crucial to successfully completing films and television shows. In honor of Women's History Month, Post Break, the Post New York Alliance (PNYA)'s free webinar series, will examine how two top female editors have worked with their assistants to deliver shows for HBO, Freeform and others.

Agnès Challe-Grandits, editor of the upcoming Freeform series Single Drunk Female, and her assistant, Tracy Nayer, will join Shelby Siegel, Emmy and ACE award winner for the HBO series The Jinx: The Life and Deaths of Robert Durst, and her assistant, JiYe Kim, to discuss collaboration, how they organize their projects and how editors and assistants support one another. The discussion will be moderated by post producer Claire Shanley.

The session is scheduled for Thursday, March 25th at 4:00pm EDT. Following the webinar, attendees will have an opportunity to join small, virtual breakout groups for discussion and networking.

Panelists

Agnès Grandits has decades of experience as a film and television editor. Her current project is Single Drunk Female, a new half-hour comedy for Freeform. Her previous television credits include P-Valley and Sweetbitter for STARZ, Divorce for HBO, Odd Mom Out for Bravo and The Breaks for VH1. She also worked for Showtime on The Affair and Nurse Jackie. In addition, she edited The Jim Gaffigan Show for TV Land, Gracepoint for Fox, an episode of the final season of Bored to Death for HBO, and 100 Centre Street, directed by Sidney Lumet, for A&E. Her credits with HBO also include Sex and the City and The Wire.

Tracy Nayer has been an Assistant Editor for more than ten years and has been assisting Agnès Grandits for five. She began her career in editorial finishing at a large post-production studio.

Shelby Siegel is an Emmy award-winning film and television editor who has worked in New York for more than 20 years. Her credits include Andrew Jarecki's Capturing the Friedmans and All Good Things, Jonathan Caouette's Tarnation, and Gary Hustwit's Helvetica and Urbanized. She won Emmy and ACE awards for HBO's acclaimed six-part series The Jinx: The Life and Deaths of Robert Durst. Most recently, she edited episodes of Quantico (ABC), High Maintenance (HBO) and The Deuce (HBO). She began her career working under some of the industry's top directors, including Paul Haggis (In the Valley of Elah), Mike Nichols (Charlie Wilson's War), and Ang Lee on his Oscar-winning films Crouching Tiger, Hidden Dragon and Brokeback Mountain. She also worked on the critically acclaimed series The Wire.

JiYe Kim began her career in experimental films, working with Anita Thacher and Barbara Hammer. Her first credit as an assistant editor came on AlphaGo (2017). Her most recent credits include High Maintenance, The Deuce, Her Smell and Share.

Moderator

Claire Shanley is a Post Producer whose recent projects include The Plot Against America and The Deuce. Her background also includes post facility and technical management roles. She served as Managing Director at Sixteen19 and Technical Director at Broadway Video. She Co-Chairs the Board of Directors of the NYC LGBT Center and serves on the Advisory Board of NYWIFT (NY Women in Film & Television).

When: Thursday, March 25, 2021, 4:00pm EDT

Title: The E&A Team

REGISTER HERE

Sound recordings of past Post Break sessions are available here: https://www.postnewyork.org/page/PNYAPodcasts

Past Post Break sessions in video blog format are available here: https://www.postnewyork.org/blogpost/1859636/Post-Break

About Post New York Alliance (PNYA)

The Post New York Alliance (PNYA) is an association of film and television post-production facilities, labor unions and post professionals operating in New York State. The PNYA's objective is to create jobs by: 1) extending and improving the New York State Tax Incentive Program; 2) advancing the services the New York post-production industry provides; and 3) creating avenues for a diverse talent pool to enter the industry.

http://www.pnya.org

Read more:
PNYA Post Break Will Explore the Relationship Between Editors and Assistants - Creative Planet Network

You need to Know about the History of Artificial Intelligence – Technotification

The concept of a machine that thinks dates back to ancient Greece. However, several events and milestones in the development of artificial intelligence are significant and relate to subjects covered in this article:

1950: Computing Machinery and Intelligence is written by Alan Turing. In the article, Turing, known for cracking Nazi Germany's ENIGMA code during the Second World War, proposes to answer the question "can machines think?" and introduces the Turing Test, which would determine whether a computer can demonstrate the same intelligence as a human. The value of the Turing Test has been debated ever since.

1956: At the first-ever AI conference, held at Dartmouth College, John McCarthy coins the term "artificial intelligence". (McCarthy also invented the Lisp language.) Later that year, the Logic Theorist, the first running AI program, is developed by Allen Newell, J.C. Shaw and Herbert Simon.

1967: Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network, which learned through trial and error. A year later, Marvin Minsky and Seymour Papert release a book called Perceptrons, which becomes both a seminal work on neural networks and, for a time, an argument against future neural network research projects.

1980s: Neural networks trained with backpropagation algorithms become widely used in AI applications.

1997: IBM's Deep Blue defeats world chess champion Garry Kasparov in a chess match (and rematch).

2011: IBM Watson beats Ken Jennings and Brad Rutter, champions of Jeopardy!

2015: Baidu's Minwa supercomputer uses a special kind of deep neural network, a convolutional neural network, to identify and categorise images with greater accuracy than the average human.

2016: The AlphaGo software from DeepMind, guided by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible moves as the game progresses (more than 14.5 trillion after just four moves!). Google had earlier acquired DeepMind for a reported $400 million.

Artificial intelligence allows computers and devices to imitate the mental capacity of the human mind to perceive, understand, solve problems and make decisions.

In computer science, the term "artificial intelligence" refers to human-like intelligence demonstrated by a computer, robot or other machine. In common usage, it refers to a computer's or machine's capacity to imitate the human mind's ability to learn from examples and experience, to recognize objects, to understand and respond to language, to make decisions and to solve problems, and to combine these and other capabilities to perform complete functions, for instance, greeting a guest at a hotel.

AI is now part of our daily lives after decades of being confined to science fiction. The huge volume of data now generated, together with the wide availability of computing power, is making a boost in AI development feasible, allowing all this data to be processed more quickly and reliably than human users can manage. AI completes our search terms as we type, gives directions when asked, vacuums our floors and suggests what to buy or watch next. It also helps skilled practitioners work faster and with better results by powering software such as medical image processing tools.

As popular as artificial intelligence is today, its terminology can be difficult to grasp, since certain terms are used interchangeably and, in some cases, as synonyms. What is the difference between artificial intelligence and machine learning? Between machine learning and deep learning? Between speech recognition and natural language generation? Between weak AI and strong AI? This article will help you navigate these and other terms and grasp how AI works.

Continue reading here:
You need to Know about the History of Artificial Intelligence - Technotification

Google’s AlphaGo computer beats human champ Lee Sedol in …

SEOUL, South Korea -- Game not over? Human Go champion Lee Sedol says Google's Go-playing program AlphaGo is not yet superior to humans, despite its 4:1 victory in a match that ended Tuesday.

The week-long showdown between the South Korean Go grandmaster and Google DeepMind's artificial intelligence program showed the computer software has mastered a major challenge for artificial intelligence.

"I don't necessarily think AlphaGo is superior to me. I believe that there is still more a human being could do to play against artificial intelligence," Lee said after the nearly five-hour-long final game.

AlphaGo had the upper hand in terms of its lack of vulnerability to emotion and fatigue, two crucial aspects in the intense brain game.

"When it comes to psychological factors and strong concentration power, humans cannot be a match," Lee said.

But he added, "I don't think my defeat this time is a loss for humanity. It clearly shows my weaknesses, but not the weakness of all humanity."

He expressed deep regret for the loss and thanked his fans for their support, saying he enjoyed all five matches.

Lee, 33, has made his living playing Go since he was 12 and is famous in South Korea even among people who do not play the game. The entire country was rooting for him to win.

The series was one of the most intensely watched events in the past week across Asia. The human-versus-machine battle hogged headlines, eclipsing reports of North Korean threats of a pre-emptive strike on the South.

The final game was too close to call until the very end. Experts said it was the best of the five games in that Lee was in top form and AlphaGo made few mistakes. Lee resigned about five hours into the game.

The final match was broadcast live on three major TV networks in South Korea and on big TV screens in downtown Seoul.

Google estimated that 60 million people in China, where Go is a popular pastime, watched the first match on Wednesday.

Before AlphaGo's victory, the ancient Chinese board game was seen as too complex for computers to master. Go fans across Asia were astonished when Lee, one of the world's best Go players, lost the first three matches.

Lee's win over AlphaGo in the fourth match, on Sunday, showed the machine was not infallible: Afterward, Lee said AlphaGo's handling of surprise moves was weak. The program also played less well with a black stone, which plays first and has to claim a larger territory than its opponent to win.

Choosing not to exploit that weakness, Lee opted for a black stone in the last match.

Go players take turns placing the black and white stones on 361 grid intersections on a nearly square board. Stones can be captured when they are surrounded by those of their opponent.

To take control of territory, players surround vacant areas with their stones. The game continues until both sides agree there are no more places to put stones, or until one side decides to quit.
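The capture rule described above, that stones are taken when fully surrounded, amounts to a group of connected same-colour stones having no adjacent empty points (liberties) left. This illustrative Java sketch (not from any Go engine, with a small board and hypothetical helper names chosen for clarity) counts a group's liberties with a simple flood fill:

```java
import java.util.ArrayDeque;

public class GoLiberties {
    // Count the liberties (adjacent empty points) of the group containing (r, c).
    // Board values: '.' empty, 'B' black, 'W' white. Illustrative sketch only.
    static int liberties(char[][] board, int r, int c) {
        int n = board.length;
        char colour = board[r][c];
        boolean[][] seen = new boolean[n][n];     // stones already visited
        boolean[][] counted = new boolean[n][n];  // empty points already counted
        int libs = 0;
        ArrayDeque<int[]> stack = new ArrayDeque<>();
        stack.push(new int[]{r, c});
        seen[r][c] = true;
        int[][] dirs = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!stack.isEmpty()) {
            int[] p = stack.pop();
            for (int[] d : dirs) {
                int nr = p[0] + d[0], nc = p[1] + d[1];
                if (nr < 0 || nr >= n || nc < 0 || nc >= n) continue;
                if (board[nr][nc] == '.' && !counted[nr][nc]) {
                    counted[nr][nc] = true;
                    libs++;                              // an empty neighbour is a liberty
                } else if (board[nr][nc] == colour && !seen[nr][nc]) {
                    seen[nr][nc] = true;
                    stack.push(new int[]{nr, nc});       // same-colour stones form one group
                }
            }
        }
        return libs;  // the group is captured when this reaches zero
    }

    public static void main(String[] args) {
        // A white stone surrounded on three sides by black: one liberty left.
        char[][] board = {
            {'.', 'B', '.'},
            {'B', 'W', 'B'},
            {'.', '.', '.'},
        };
        System.out.println(liberties(board, 1, 1)); // prints 1 (only the point below)
    }
}
```

A real board is 19x19 with 361 intersections, but the same flood fill applies unchanged; only the board array grows.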

Google officials say the company wants to apply technologies used in AlphaGo in other areas, such as smartphone assistants, and ultimately to help scientists solve real-world problems.

As for Go, other top players are bracing themselves.

Chinese world Go champion Ke Jie said it was just a matter of time before top Go players like himself would be overtaken by artificial intelligence.

"It is very hard for Go players at my level to improve even a little bit, whereas AlphaGo has hundreds of computers to help it improve and can play hundreds of practice matches a day," Ke said.

"It does not seem like a good thing for us professional Go players, but the match played a very good role in promoting Go," Ke said.

Go here to see the original:
Google's AlphaGo computer beats human champ Lee Sedol in ...