Archive for the ‘AlphaGo’ Category

System on Chips And The Modern Day Motherboards – Analytics India Magazine

The SoC is the new motherboard.

Data centres are no longer betting on one-size-fits-all compute. Decades of homogeneous compute strategies are being disrupted by the need to optimise. Modern-day data centres are embracing purpose-built System on Chip (SoC) designs to gain more control over peak performance, power consumption and scalability. Customisation of chips has thus become the go-to solution for many cloud providers, and companies like Google Cloud in particular are doubling down on this front.

Google introduced the Tensor Processing Unit (TPU) back in 2015. Today, TPUs power services such as real-time voice search, photo object recognition and interactive language translation. TPUs drive DeepMind's powerful AlphaGo algorithms, which outclassed the world's best Go player and were later applied to chess and shogi. TPUs now have the power to process over 100 million photos a day and, most importantly, are also used for Google's search results. The search giant even unveiled OpenTitan, the first open-source silicon root-of-trust project. The company's custom hardware solutions range from SSDs and hard drives to network switches and network interface cards, often developed in deep collaboration with external partners.

Workloads demand even deeper integration into the underlying hardware.

Just like on a motherboard, CPUs and TPUs come from different sources. A Google data centre consists of thousands of server machines connected to a local network. Google designs custom chips, including a hardware security chip currently being deployed on both servers and peripherals. According to Google Cloud, these chips allow them to securely identify and authenticate legitimate Google devices at the hardware level.

According to the team at GCP, computing at Google is at a critical inflection point. Instead of integrating components on a motherboard, Google is focusing more on SoC designs, where multiple functions sit on the same chip or on multiple chips inside one package. The company has even claimed that the System on Chip is the modern-day motherboard.

To date, writes Amin Vahdat of GCP, the motherboard has been the integration point, where CPUs, networking, storage devices, custom accelerators and memory, all from different vendors, were blended into an optimised system. However, cloud providers that own large data centres, such as Google Cloud and AWS, are gravitating towards deeper integration into the underlying hardware to gain higher performance at lower power consumption.

According to ARM, which NVIDIA recently agreed to acquire, renewed interest in design freedom and system optimisation has led to higher compute utilisation, improved performance-power ratios, and the ability to get more out of a physical data centre.

For example, AWS Graviton2 instances, built on the Arm Neoverse N1 platform, deliver up to 40 percent better price-performance than the previous x86-based instances at a 20 percent lower price. Silicon solutions such as Ampere's Altra are designed to deliver the performance-per-watt, flexibility and scalability their customers demand.

The capabilities of cloud instances rely on the underlying architectures and microarchitectures that power the hardware.

Amazon made its silicon ambitions obvious as early as 2015, when it acquired Israel-based Annapurna Labs, known for its networking-focused Arm SoCs. Amazon leveraged Annapurna Labs' technology to build a custom server-grade Arm chip, Graviton2. After its release, Graviton2 locked horns with Intel and AMD, the data centre chip industry's major players: while the Graviton2 instances offered 64 physical cores, the comparable AMD and Intel instances could manage only 32.

Last year, AWS also launched its custom-built AWS Inferentia chips for hardware specialisation. Inferentia's performance convinced AWS to deploy the chips for its popular Alexa services, which require state-of-the-art ML for speech processing and other tasks.

Amazon's popular EC2 instances are now powered by AWS Inferentia chips that can deliver up to 30% higher throughput and up to 45% lower cost per inference. Amazon EC2 F1 instances, meanwhile, use FPGAs to enable delivery of custom hardware accelerations. F1 instances are easy to program, come with an FPGA Developer AMI, and support hardware-level development on the cloud. Examples of target applications that can benefit from F1 instance acceleration include genomics, search/analytics, image and video processing, network security, electronic design automation (EDA), image and file compression, and big data analytics.


Following AWS Inferentia's success in providing customers with high-performance ML inference at the lowest cost in the cloud, AWS is launching Trainium to address Inferentia's shortcomings. The Trainium chip is specifically optimised for deep learning training workloads in applications including image classification, semantic search, translation, voice recognition, natural language processing and recommendation engines.

A performance comparison by AnandTech shows how cloud providers can ditch the legacy chip makers, thanks to Arm's licensing provisions. Even Microsoft is reportedly building an Arm-based processor for Azure data centres. Apart from that custom chip, which is still under wraps, Microsoft has already had a shot at silicon success: it collaborated with AMD, Intel and Qualcomm Technologies to announce the Microsoft Pluton security processor, a design that builds security directly into the CPU.

To overcome the challenges and realise the opportunities presented by semiconductor densities and capabilities, cloud companies will look to System-on-a-Chip (SoC) design methodologies that incorporate pre-designed components, also called SoC Intellectual Property (SoC-IP), which can then be integrated with their own designs. Because SoCs incorporate processors that allow customisation in the layers of software as well as in the hardware around the processors, even Google Cloud is bullish on this approach; it has roped in Intel veteran Uri Frank to lead its server chip design efforts. According to Amin Vahdat, VP at GCP, SoCs offer many orders of magnitude better performance with greatly reduced power and cost compared to assembling individual ASICs on a motherboard. The future of cloud infrastructure is bright, and it's changing fast, said Vahdat.

View post:
System on Chips And The Modern Day Motherboards - Analytics India Magazine

BOOK REVIEW: Genius Makers, by Cade Metz: the tribal war in AI – Business Day

A guide to an intellectual counter-revolution that is already transforming the world


01 April 2021 - 05:10 John Thornhill

It may not be on the level of the Montagues and the Capulets, or the Sharks and the Jets, but in the world of geeks the rivalry is about as intense as it gets. For decades, two competing tribes of artificial intelligence (AI) experts have been furiously duelling with each other in research labs and conference halls around the world. But rather than swords or switchblades, they have wielded nothing more threatening than mathematical models and computer code.

On one side, the connectionist tribe believes that computers can learn behaviour in the same way as humans do, by processing a vast array of interconnected calculations. On the other, the symbolists argue that machines can only follow discrete rules. The machine's instructions are contained in specific symbols, such as digits and letters...

The rest is here:
BOOK REVIEW: Genius Makers, by Cade Metz: the tribal war in AI - Business Day

Reinforcement learning: The next great AI tech moving from the lab to the real world – VentureBeat


Reinforcement learning (RL) is a powerful type of artificial intelligence technology that can be used to learn strategies to optimally control large, complex systems such as manufacturing plants, traffic control systems (road/train/aircraft), financial portfolios, robots, etc. It is currently transitioning from research labs to highly impactful, real-world applications. For example, self-driving car companies like Wayve and Waymo are using reinforcement learning to develop the control systems for their cars.

AI systems that are typically used in industry perform pattern recognition to make a prediction. For instance, they may recognize patterns in images to detect faces (face detection), or recognize patterns in sales data to predict a change in demand (demand forecasting), and so on. Reinforcement learning methods, on the other hand, are used to make optimal decisions or take optimal actions in applications where there is a feedback loop. An example where both traditional AI methods and RL may be used, but for different purposes, will make the distinction clearer.

Say we are using AI to help operate a manufacturing plant. Pattern recognition may be used for quality assurance, where the AI system uses images and scans of the finished product to detect any imperfections or flaws. An RL system, on the other hand, would compute and execute the strategy for controlling the manufacturing process itself (by, for example, deciding which lines to run, controlling machines/robots, deciding which product to manufacture, and so on). The RL system will also try to ensure that the strategy is optimal in that it maximizes some metric of interest such as the output volume while maintaining a certain level of product quality. The problem of computing the optimal control strategy, which RL solves, is very difficult for some subtle reasons (often much more difficult than pattern recognition).
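To make the distinction concrete, the feedback loop described above can be caricatured as a toy environment, as in the Python sketch below. Everything in it (the PlantEnv class, its line-load state, its output-volume reward) is invented purely for illustration and is not any vendor's API; the point is only that an RL controller repeatedly observes a state, picks an action such as which line to run, and receives a reward tied to output and quality.

import random

class PlantEnv:
    """Toy manufacturing-plant environment: a state -> action -> reward loop."""

    def __init__(self, n_lines=3):
        self.n_lines = n_lines
        self.reset()

    def reset(self):
        # State: how loaded each production line is (0.0 = idle, 1.0 = saturated).
        self.load = [0.0] * self.n_lines
        return tuple(self.load)

    def step(self, action):
        # Action: which line to run this shift; the other lines cool down a little.
        self.load = [round(max(0.0, l - 0.1), 1) for l in self.load]
        self.load[action] = round(min(1.0, self.load[action] + 0.3), 1)
        # Reward: output volume, penalised when the chosen line is overloaded
        # (a crude stand-in for "maintain a certain level of product quality").
        reward = 1.0 - 0.5 * self.load[action]
        return tuple(self.load), reward

env = PlantEnv()
state = env.reset()
state, reward = env.step(action=random.randrange(env.n_lines))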

In computing the optimal strategy, or policy in RL parlance, the main challenge an RL algorithm faces is the so-called temporal credit assignment problem. That is, the impact of an action (e.g. run line 1 on Wednesday) in a given system state (e.g. the current output level of the machines, how busy each line is, etc.) on the overall performance (e.g. total output volume) is not known until (potentially) a long time afterwards. To make matters worse, the overall performance also depends on all the actions taken subsequent to the action being evaluated. Together, this implies that, when a candidate policy is executed for evaluation, it is difficult to know which actions were the good ones and which were the bad ones; in other words, it is very difficult to assign credit to the different actions appropriately. The large number of potential system states in these complex problems further exacerbates the situation via the dreaded curse of dimensionality. A good way to get an intuition for how an RL system solves all these problems at the same time is to look at the recent spectacular successes they have had in the lab.
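One minimal way to see the credit assignment problem is to compute discounted returns-to-go for a recorded episode: each action is credited with the sum of all the (discounted) rewards that arrive after it, not just the immediate one. The short, library-free Python sketch below illustrates the idea; the episode rewards are made up.

# Temporal credit assignment via discounted returns-to-go: the credit given to
# an action depends on everything that happens after it, which is why the value
# of "run line 1 on Wednesday" only becomes clear much later.

def returns_to_go(rewards, gamma=0.99):
    returns = [0.0] * len(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

# Rewards observed after each action in one (invented) episode of plant operation.
episode_rewards = [0.0, 0.0, 1.0, 0.0, 5.0]
print(returns_to_go(episode_rewards))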

Many of the recent, prominent demonstrations of the power of RL come from applying it to board games and video games. The first RL system to impress the global AI community was able to learn to outplay humans in different Atari games when given only the on-screen images and the scores received by playing the game as input. It was created in 2013 by London-based AI research lab DeepMind (now part of Alphabet Inc.). The same lab later created a series of RL systems (or agents), starting with the AlphaGo agent, which were able to defeat the top players in the world at the board game Go. These impressive feats, which occurred between 2015 and 2017, took the world by storm because Go is a very complex game, with millions of fans and players around the world, that requires intricate, long-term strategic thinking involving both local and global board configurations.

Subsequently, DeepMind and the AI research lab OpenAI have released systems for playing the video games StarCraft II and Dota 2 that can defeat the top human players around the world. These games are challenging because they require strategic thinking, resource management, and control and coordination of multiple entities within the game.

All the agents mentioned above were trained by letting the RL algorithm play the games many, many times (millions of games or more) and learning which policies work and which do not against different kinds of opponents and players. This large number of trials was possible because these were all games running on a computer. In determining the usefulness of various policies, the RL algorithms often employed a complex mix of ideas. These include hill climbing in policy space, playing against itself, running internal leagues amongst candidate policies, using human policies as a starting point, and properly balancing exploration of the policy space against exploitation of the good policies found so far. Roughly speaking, the large number of trials enabled exploring many different game states that could plausibly be reached, while the complex evaluation methods enabled the AI system to determine which actions are useful in the long term, under plausible plays of the games, in these different states.
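As a rough caricature of such a training regime, the sketch below runs a generic tabular Q-learning loop with an epsilon-greedy rule for balancing exploration against exploitation, reusing the toy PlantEnv from the earlier sketch. It is not how AlphaGo or the StarCraft and Dota agents were actually built; it only illustrates the "many trials, explore versus exploit" idea in the simplest possible setting.

# Generic epsilon-greedy Q-learning: many trials, balancing exploration
# (random actions) against exploitation (the best action found so far).
# Assumes the toy PlantEnv defined earlier; not any production system.

import random
from collections import defaultdict

def train(env, episodes=10_000, steps=50, alpha=0.1, gamma=0.99, eps=0.1):
    q = defaultdict(float)                      # (state, action) -> estimated value
    actions = list(range(env.n_lines))
    for _ in range(episodes):
        state = env.reset()
        for _ in range(steps):
            if random.random() < eps:           # explore: try a random action
                action = random.choice(actions)
            else:                               # exploit: current best action
                action = max(actions, key=lambda a: q[(state, a)])
            next_state, reward = env.step(action)
            best_next = max(q[(next_state, a)] for a in actions)
            q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
            state = next_state
    return q

q_table = train(PlantEnv())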

A key blocker to using these algorithms in the real world is that it is not possible to run millions of trials. Fortunately, a workaround immediately suggests itself: first, create a computer simulation of the application (a manufacturing plant simulation, a market simulation, etc.), then learn the optimal policy in the simulation using RL algorithms, and finally adapt the learned optimal policy to the real world by running it a few times and tweaking some parameters. Famously, in a very compelling 2019 demo, OpenAI showed the effectiveness of this approach by training a robot arm to solve the Rubik's Cube puzzle one-handed.
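In outline, that workaround looks like the sketch below: train a policy entirely in simulation (here reusing the toy PlantEnv and train function from the earlier sketches as a stand-in for a proper simulator), then run it on the real system only a handful of times and adjust. The real_plant object is a placeholder, not an interface to an actual plant or robot, and the whole flow is a conceptual illustration rather than the method used in the OpenAI demo.

# Conceptual sim-to-real sketch: learn in simulation, then adapt with a few
# real-world runs. PlantEnv, train and real_plant are illustrative placeholders.

def greedy_policy(q, n_lines):
    actions = list(range(n_lines))
    return lambda state: max(actions, key=lambda a: q[(state, a)])

# 1. Learn the control policy entirely in simulation, where millions of trials are cheap.
sim_q = train(PlantEnv())

# 2. Run the learned policy on the real system only a few times, observing
#    performance and tweaking parameters before any full deployment.
real_plant = PlantEnv()          # placeholder for the physical plant
policy = greedy_policy(sim_q, real_plant.n_lines)
state = real_plant.reset()
for _ in range(5):               # only a few real-world trials are affordable
    action = policy(state)
    state, reward = real_plant.step(action)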

For this approach to work, your simulation has to represent the underlying problem with a high degree of accuracy. The problem you're trying to solve also has to be closed in a certain sense: there cannot be arbitrary or unseen external effects that may impact the performance of the system. For example, the OpenAI solution would not work if the simulated robot arm was too different from the real robot arm, or if there were attempts to knock the Rubik's Cube out of the real robot arm (though it may naturally be, or be explicitly trained to be, robust to certain kinds of obstructions and interferences).

These limitations will sound acceptable to most people. However, in real applications it is tricky to properly circumscribe the competence of an RL system, and this can lead to unpleasant surprises. In our earlier manufacturing plant example, if a machine is replaced with one that is a lot faster or slower, it may change the plant dynamics enough that it becomes necessary to retrain the RL system. Again, this is not unreasonable for any automated controller, but stakeholders may have far loftier expectations from a system that is artificially intelligent, and such expectations will need to be managed.

Regardless, at this point in time, the future of reinforcement learning in the real world does seem very bright. There are many startups offering reinforcement learning products for controlling manufacturing robots (Covariant, Osaro, Luffy), managing production schedules (Instadeep), enterprise decision making (Secondmind), logistics (Dorabot), circuit design (Instadeep), controlling autonomous cars (Wayve, Waymo, Five AI), controlling drones (Amazon), running hedge funds (Piit.ai), and many other applications that are beyond the reach of pattern recognition based AI systems.

Each of the Big Tech companies has made heavy investments in RL research; Google, for example, acquired DeepMind for a reported £400 million (approximately $525 million) in 2014. So it is reasonable to assume that RL is either already in use internally at these companies or is in the pipeline, but they're keeping the details pretty quiet for reasons of competitive advantage.

We should expect to see some hiccups as promising applications for RL falter, but it will likely claim its place as a technology to reckon with in the near future.

M M Hassan Mahmud is a Senior AI and Machine Learning Technologist at Digital Catapult, with a background in machine learning within academia and industry.

Original post:
Reinforcement learning: The next great AI tech moving from the lab to the real world - VentureBeat

Project Force: AI and the military, a friend or foe? – Al Jazeera English

The accuracy and precision of today's weapons are steadily forcing contemporary battlefields to empty of human combatants.

As more and more sensors fill the battlespace, sending vast amounts of data back to analysts, humans struggle to make sense of the mountain of information gathered.

This is where artificial intelligence (AI) comes in: learning algorithms that thrive off big data; in fact, the more data these systems analyse, the more accurate they can be.

In short, AI is the ability for a system to think in a limited way, working specifically on problems normally associated with human intelligence, such as pattern and speech recognition, translation and decision-making.

AI and machine learning have been a part of civilian life for years. Megacorporations like Amazon and Google have used these tools to build vast commercial empires based in part on predicting the wants and needs of the people that use them.

The United States military has also long invested in civilian AI, with the Pentagon's Defense Advanced Research Projects Agency (DARPA) funnelling money into key areas of AI research.

However, to tackle specific military concerns, the defence establishment soon realised its AI needs were not being met. So they approached Silicon Valley, asking for its help in giving the Pentagon the tools it would need to process an ever-growing mountain of information.

Employees at several corporations were extremely uncomfortable with their research being used by the military and persuaded the companies, Google being one of them, to opt out of, or at least dial down, their cooperation with the defence establishment.

While the much-hyped idea of 'Killer Robots', remorseless machines hunting down humans and terminating them for reasons known only to themselves, has caught the public's imagination, the current focus of AI could not be further from that.

As a recent report on the military applications of AI points out, the technology is central to providing robotic assistance on the battlefield, which will enable forces to maintain or expand warfighting capacity without increasing manpower.

What does this mean? In effect, robotic systems will do tasks considered too menial or too dangerous for human beings such as unmanned supply convoys, mine clearance or the air-to-air refuelling of aircraft. It is also a force multiplier, which means it allows the same amount of people to do and achieve more.

An idea that illustrates this is the concept of the robotic Loyal Wingman being developed for the US Air Force. Designed to fly alongside a jet flown by a human pilot, this unmanned jet would fight off the enemy, be able to complete its mission, or help the human pilot do so. It would act as an AI bodyguard, defending the manned aircraft, and is also designed to sacrifice itself if there is a need to do so to save the human pilot.

A Navy X-47B drone, an unmanned combat aerial vehicle [File: AP]

As AI power develops, the push towards systems becoming autonomous will only increase. Currently, militaries are keen to have a human involved in the decision-making loop. But in wartime, these communication links are potential targets: cut off the head and the body would not be able to think. The majority of drones currently deployed around the world would lose their core functions if the data link connecting them to their human operator were severed.

This is not the case with the high-end, intelligence-gathering, unarmed Global Hawk drone, which, once given orders, is able to carry them out independently without the need for a vulnerable data link, allowing it to be sent into highly contested airspace to gather vital information. This makes it far more survivable in a future conflict, and money is now pouring into new systems that can fly themselves, like France's Dassault nEUROn or Russia's Sukhoi S-70, both semi-stealthy autonomous combat drone designs.

AI programmes and systems are constantly improving, as their quick reactions and data processing allow them to finely hone the tasks they are designed to perform.

Robotic air-to-air refuelling aircraft have a better flight record and are able to keep themselves steady in weather that would leave a human pilot struggling. In war games and dogfight simulations, AI pilots are already starting to score significant victories over their human counterparts.

While AI algorithms are great at data-crunching, they have also started to surprise observers in the choices they make.

In 2016, when an AI programme, AlphaGo, took on a human grandmaster and world champion of the famously complex game of Go, it was expected to act methodically, like a machine. What surprised everyone watching was the unexpectedly bold moves it sometimes made, catching its opponent Lee Se-dol off-guard. The algorithm went on to win, to the shock of the tournament's observers. This kind of breakthrough in AI development had not been expected for years, yet here it was.

Machine intelligence is and will be increasingly incorporated into manned platforms. Ships will now have fewer crew members as AI programmes become able to do more. Single pilots will be able to control squadrons of unmanned aircraft that fly themselves but obey that human's orders.

Facial recognition security cameras monitor a pedestrian shopping street in Beijing [File: AP]

AI's main strength is in the arena of surveillance and counterinsurgency: being able to scan images made available from millions of CCTV cameras; being able to follow multiple potential targets; using big data to finesse predictions of a target's behaviour with ever-greater accuracy. All this is already within the grasp of AI systems that have been set up for this purpose: unblinking eyes that watch, record and monitor 24 hours a day.

The sheer volume of material that can be gathered is staggering and would be beyond the scope of human analysts to watch, absorb and fold into any conclusions they made.

AI is perfect for this, and one of the testbeds for this kind of analytical detection software is special operations, where there has been significant success. The tempo of special forces operations in counterinsurgency and counterterrorism has increased dramatically, as information from a raid can now be quickly analysed and acted upon, leading to other raids that same night, which in turn yield more information.

This speed has the ability to knock any armed group off balance as the raids are so frequent and relentless that the only option left is for them to move and hide, suppressing their organisation and rendering it ineffective.

A man uses a PlayStation-style console to manoeuvre the aircraft, as he demonstrates a control system for unmanned drones [File: AP]

As AI military systems mature, their record of success will improve, and this will help overcome another key challenge in the acceptance of informationalised systems by human operators: trust.

Human soldiers will learn to increasingly rely on smart systems that can think at a faster rate than they can, spotting threats before they do. An AI system is only as good as the information it receives and processes about its environment, in other words, what it perceives. The more information it has, the more accurate it will be in its perception, assessment and subsequent actions.

The least complicated environment for a machine to understand is flight. Simple rules, a slim chance of collision, and relatively direct routes to and from its area of operations mean that this is where the first inroads into AI and relatively smart systems have been made. Loitering munitions, designed to search and destroy radar installations, are already operational and have been used in conflicts such as the war between Armenia and Azerbaijan.

Investment and research have also poured into maritime platforms. Operating in a more complex environment with sea life and surface traffic potentially obscuring sensor readings, a major development is in unmanned underwater vehicles (UUVs). Stealthy, near-silent systems, they are virtually undetectable and can stay submerged almost indefinitely.

Alongside the advances, there is a growing concern with how deadly these imagined AI systems could be.

Human beings have proven themselves extremely proficient in the ways of slaughter but there is increased worry that these mythical robots would run amuck, and that humans would lose control. This is the central concern among commentators, researchers and potential manufacturers.

But an AI system would not get enraged, feel hatred for its enemy, or decide to take it out on the local population if its AI comrades were destroyed. It could have the Laws of Armed Conflict built into its software.

The most complex and demanding environment is urban combat, where the wars of the near future will increasingly be fought. Conflicts in cities can overwhelm most human beings and it is highly doubtful a machine with a very narrow view of the world would be able to navigate it, let alone fight and prevail without making serious errors of judgement.

A man looks at a demonstration of human motion analysis software at the stall of an artificial intelligence solutions maker at an exhibition in China [File: Reuters]

While they do not exist now, killer robots continue to appear as a worry for many, and codes of ethics are already being worked on. Could a robot combatant indeed understand and be able to apply the Laws of Armed Conflict? Could it tell friend from foe, and if so, what would its reaction be? This applies especially to militias, soldiers from opposing sides using similar equipment, fighters who do not usually wear a defining uniform, and non-combatants.

The concern is so high that Human Rights Watch has urged the prohibition of fully autonomous AI units capable of making lethal decisions, calling for a ban very much like those in place for mines and chemical and biological weapons.

Another main concern is that a machine can be hacked in ways a human cannot. It might be fighting alongside you one minute but then turn on you the next. Human units have mutinied and changed allegiances before, but to have one's entire army or fleet turned against them with a keystroke is a terrifying possibility for military planners. And software can go wrong. A pervasive phrase in modern civilian life is 'sorry, the system is down'; imagine this applied to armed machines engaged in battle.

Perhaps the most concerning of all is the offensive use of AI malware. More than 10 years ago, the world's most famous cyber-weapon, Stuxnet, sought to insinuate itself into the software controlling the spinning of centrifuges refining uranium in Iran. Able to hide itself, it covered up its tracks, searching for a particular piece of code to attack that would cause the centrifuges to spin out of control and be destroyed. Although highly sophisticated for its time, it is nothing compared with what is available now and what could be deployed during a conflict.

The desire to design and build these new weapons that are expected to tip the balance in future conflicts has triggered an arms race between the US and its near-peer competitors Russia and China.

AI can not only be empowering, it is asymmetric in its leverage, meaning a small country can develop effective AI software without the industrial might needed to research, develop and test a new weapons system. It is a powerful way for a country to leapfrog over the competition, producing potent designs that will give it the edge needed to win a war.

Russia has declared this the new frontier for military research. President Vladimir Putin, in an address in 2017, said that whoever became the leader in the sphere of AI would become the ruler of the world. To back that up, the same year Russia's Military-Industrial Committee approved the integration of AI into 30 percent of the country's armed forces by 2030.

Current realities are different, and so far Russian ventures into this field have proven patchy. The Uran-9 unmanned combat vehicle performed poorly in the urban battlefields of Syria in 2018, often failing to understand its surroundings or detect potential targets. Despite these setbacks, it was inducted into the Russian military in 2019, a clear sign of the drive in senior Russian military circles to field robotic units with increasing autonomy as they develop in complexity.

China, too, has clearly stated that a major focus of research and development is how to win at intelligent(ised) warfare. In a report into Chinas embracing of and use of AI in military applications, the Brookings Institution wrote that it will include command decision making, military deductions that could change the very mechanisms for victory in future warfare. Current areas of focus are AI-enabled radar, robotic ships and smarter cruise and hypersonic missiles, all areas of research that other countries are focusing on.

An American military pilot flies a Predator drone from a ground command post during a night border mission [File: AP]

The development of military artificial intelligence, giving systems increasing autonomy, offers military planners a tantalising glimpse of victory on the battlefield, but the weapons themselves, and the countermeasures that would be aimed against them in a war of the near future, remain largely untested.

Countries like Russia and China with their revamped and streamlined militaries are no longer looking to achieve parity with the US; they are looking to surpass it by researching heavily into the weapons of the future.

Doctrine is key: how these new weapons will integrate into future war plans and how they can be leveraged for their maximum effect on the enemy.

Any quantitative leap in weapons design is always a concern as it gives a country the belief that they could be victorious in battle, thus lowering the threshold for conflict.

As war speeds up even further, it will increasingly be left in the hands of these systems to fight them, to give recommendations, and ultimately, to make the decisions.

Read more here:
Project Force: AI and the military, a friend or foe? - Al Jazeera English

Diffblue’s First AI-Powered Automated Java Unit Testing Solution Is Now Free for Commercial and Open Source Software Developers – StreetInsider.com


OXFORD, United Kingdom, March 22, 2021 (GLOBE NEWSWIRE) -- Diffblue, creators of the world's first AI-for-code solution that automates writing unit tests for Java, today announced that its free IntelliJ plugin, Diffblue Cover: Community Edition, is now available for creating unit tests for all of an organization's Java code, both open source and commercial.

Free for any individual user, the IntelliJ plugin is available here for immediate download. It supports IntelliJ versions 2020.02 and 2020.03. To date, Diffblue Cover: Community Edition has already automatically created nearly 150,000 Java unit tests.

Diffblue also offers a professional version for commercial customers who require premium support as well as indemnification and the ability to write tests for packages. In addition, Diffblue offers a CLI version of Diffblue Cover that is well suited to team collaboration.

Diffblue's pioneering technology, developed by researchers from the University of Oxford, is based on reinforcement learning, the same machine learning strategy that powered AlphaGo, the software program from Alphabet subsidiary DeepMind that beat the world champion player of Go.

Diffblue Cover automates the burdensome task of writing Java unit tests, a task that takes up as much as 20 percent of Java developers' time. Diffblue Cover creates Java tests 10X-100X faster than humans can, tests that are also easy for developers to understand, and it automatically maintains the tests as the code evolves, even on applications with tens of millions of lines of code. Most unit test generators create boilerplate code for tests rather than tests that compile and run. These tools guess at inputs that can be used as a starting point, but developers have to finish them to get functioning tests. Diffblue Cover is uniquely able to create complete, human-readable unit tests that are ready to run immediately.

Diffblue Cover today supports Java, the most popular enterprise programming language in the Global 2000. The technology behind Diffblue Cover can also be extended to support other popular programming languages such as Python, JavaScript and C#.

About Diffblue

Diffblue is leading the automation of software creation through the power of AI. Founded by researchers from the University of Oxford, Diffblue Cover uses AI for code to write unit tests that help software teams and organizations efficiently improve their code coverage and quality and to ship software faster, more frequently and with fewer defects. With customers including AWS and Goldman Sachs, Diffblue is venture-backed by Goldman Sachs and Oxford Sciences Innovation. Follow us on Twitter: @diffblueHQ

Editorial contact for Diffblue: Lonn Johnston, Flak42, lonn@flak42.com, +1.650.219.7764

Visit link:
Diffblue's First AI-Powered Automated Java Unit Testing Solution Is Now Free for Commercial and Open Source Software Developers - StreetInsider.com