Archive for the ‘Machine Learning’ Category

Google Teaches AI To Play The Game Of Chip Design – The Next Platform

It is bad enough that Moore's Law improvements in the density and cost of transistors are slowing. At the same time, the cost of designing chips and of the factories used to etch them is also on the rise. Any savings on any of these fronts will be most welcome in keeping IT innovation leaping ahead.

One of the promising frontiers of research right now in chip design is using machine learning techniques to help with some of the tasks in the design process. We will be discussing this at our upcoming The Next AI Platform event in San Jose on March 10 with Elias Fallon, engineering director at Cadence Design Systems. (You can see the full agenda and register to attend at this link; we hope to see you there.) The use of machine learning in chip design was also one of the topics that Jeff Dean, a senior fellow in the Research Group at Google who has helped invent many of the hyperscaler's key technologies, talked about in his keynote address at this week's 2020 International Solid State Circuits Conference in San Francisco.

Google, as it turns out, has more than a passing interest in compute engines, being one of the largest consumers of CPUs and GPUs in the world and also the designer of TPUs spanning from the edge to the datacenter for doing both machine learning inference and training. So this is not just an academic exercise for the search engine giant and public cloud contender, particularly if it intends to keep advancing its TPU roadmap and if it decides, like rival Amazon Web Services, to start designing its own custom Arm server chips, or decides to do custom Arm chips for its phones and other consumer devices.

With a certain amount of serendipity, some of the work that Google has been doing to run machine learning models across large numbers of different types of compute engines is feeding back into the work that it is doing to automate some of the placement and routing of IP blocks on an ASIC. (It is wonderful when an idea is fractal like that...)

The pod of TPUv3 systems that Google showed off back in May 2018 can mesh together 1,024 of the tensor processors (which had twice as many cores and about a 15 percent clock speed boost over their predecessors, as far as we can tell) to deliver 106 petaflops of aggregate 16-bit half precision multiplication performance (with 32-bit accumulation) using Google's own and very clever bfloat16 data format. Those TPUv3 chips are all cross-coupled using a 32x32 toroidal mesh so they can share data, and each TPUv3 core has its own bank of HBM2 memory. This TPUv3 pod is a huge aggregation of compute, which can do either machine learning training or inference, but it is not necessarily as large as Google needs to build. (We will be talking about Dean's comments on the future of AI hardware and models in a separate story.)

Suffice it to say, Google is hedging with hybrid architectures that mix CPUs and GPUs and perhaps someday other accelerators for reinforcement learning workloads, and hence the research that Dean and his peers at Google have been involved in is also being brought to bear on ASIC design.

"One of the trends is that models are getting bigger," explains Dean. "So the entire model doesn't necessarily fit on a single chip. If you have essentially large models, then model parallelism, dividing the model up across multiple chips, is important, and getting good performance by giving it a bunch of compute devices is non-trivial and it is not obvious how to do that effectively."

It is not as simple as taking the Message Passing Interface (MPI) that is used to dispatch work on massively parallel supercomputers and hacking it onto a machine learning framework like TensorFlow because of the heterogeneous nature of AI iron. But that might have been an interesting way to spread machine learning training workloads over a lot of compute elements, and some have done this. Google, like other hyperscalers, tends to build its own frameworks and protocols and datastores, informed by other technologies, of course.
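To make the model-parallelism idea concrete, here is a minimal sketch of manually pinning different halves of a network to different devices in TensorFlow. The device strings, layer sizes, and two-GPU setup are illustrative assumptions, not Google's actual configuration; real systems (GPipe, Mesh TensorFlow, and the like) automate and optimize this far more aggressively.

```python
# Minimal sketch of manual model parallelism in TensorFlow (illustrative only).
# Device names and layer sizes are assumptions; adjust to the hardware you have.
import tensorflow as tf

class TwoDeviceModel(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.block_a = tf.keras.layers.Dense(4096, activation="relu")
        self.block_b = tf.keras.layers.Dense(10)

    def call(self, x):
        # First half of the network runs (and keeps its weights) on one device...
        with tf.device("/GPU:0"):
            h = self.block_a(x)
        # ...and the second half on another, so no single device has to hold
        # the whole model. The activation transfer across this boundary is the
        # cost that clever placement tries to minimize.
        with tf.device("/GPU:1"):
            return self.block_b(h)

model = TwoDeviceModel()
logits = model(tf.random.normal([8, 1024]))  # needs two visible GPUs to place as annotated
```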

Device placement, meaning putting the right neural network (or the portion of the code that embodies it) on the right device at the right time for maximum throughput in the overall application, is particularly important as neural network models get bigger than the memory space and the compute oomph of a single CPU, GPU, or TPU. And the problem is getting worse faster than the frameworks and hardware can keep up. Take a look:

The number of parameters just keeps growing and the number of devices being used in parallel also keeps growing. In fact, getting 128 GPUs or 128 TPUv3 processors (which is how you get the 512 cores in the chart above) to work in concert is quite an accomplishment, and is on par with the best that supercomputers could do back in the era before loosely coupled, massively parallel supercomputers using MPI took over and federated NUMA servers with actual shared memory were the norm in HPC more than two decades ago. As more and more devices are going to be lashed together in some fashion to handle these models, Google has been experimenting with using reinforcement learning (RL), a special subset of machine learning, to figure out where to best run neural network models at any given time as model ensembles are running on a collection of CPUs and GPUs. In this case, an initial policy is set for dispatching neural network models for processing, and the results are then fed back into the model for further adaptation, moving it toward more and more efficient running of those models.
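As a rough, hedged illustration of that loop, the toy sketch below uses a softmax policy to assign each operation in a graph to one of several devices, scores the placement with a made-up runtime model, and applies a REINFORCE-style update so that faster placements become more likely. The cost model, sizes, and learning rate are invented stand-ins, not Google's system.

```python
# Toy REINFORCE-style device placement loop; every number here is an
# illustrative assumption, and the "runtime" is a stand-in cost model.
import numpy as np

rng = np.random.default_rng(0)
n_ops, n_devices = 20, 4
op_cost = rng.uniform(1.0, 5.0, size=n_ops)   # pretend compute cost per operation
logits = np.zeros((n_ops, n_devices))         # policy parameters

def simulated_runtime(assignment):
    # Runtime = load on the busiest device plus a small "communication" penalty
    # whenever consecutive operations land on different devices.
    per_device = np.zeros(n_devices)
    np.add.at(per_device, assignment, op_cost)
    comm = np.sum(assignment[1:] != assignment[:-1]) * 0.3
    return per_device.max() + comm

def sample(logits):
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    choice = np.array([rng.choice(n_devices, p=p) for p in probs])
    return choice, probs

baseline, lr = None, 0.1
for step in range(500):
    assignment, probs = sample(logits)
    runtime = simulated_runtime(assignment)
    baseline = runtime if baseline is None else 0.9 * baseline + 0.1 * runtime
    advantage = baseline - runtime            # lower runtime => positive advantage
    grad = -probs                             # d/dlogits of log prob = onehot - probs
    grad[np.arange(n_ops), assignment] += 1.0
    logits += lr * advantage * grad           # nudge policy toward faster placements

print("final simulated runtime:", simulated_runtime(sample(logits)[0]))
```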

In 2017, Google trained an RL model to do this work (you can see the paper here), and here is what the resulting placement looked like for the encoder and decoder. The RL model, placing the work on the two CPUs and four GPUs in the system under test, ended up with 19.3 percent lower runtime for the training runs compared to the manual placement done by a human expert. Dean added that this RL-based placement of neural network work on the compute engines does kind of non-intuitive things to achieve that result, which seems to be the case with a lot of machine learning applications that, nonetheless, work as well as or better than humans doing the same tasks. The issue is that it can't take a lot of RL compute oomph to place the work on the devices that run the neural networks being trained. In 2018, Google did research showing how to scale computational graphs to over 80,000 operations (nodes), and last year Google created what it calls a generalized device placement scheme for dataflow graphs with over 50,000 operations (nodes).

"Then we started to think: instead of using this to place software computation on different computational devices, could we use it to do placement and routing in ASIC chip design, because the problems, if you squint at them, sort of look similar," says Dean. "Reinforcement learning works really well for hard problems with clear rules like chess or Go, and essentially we started asking ourselves: Can we get a reinforcement learning model to successfully play the game of ASIC chip layout?"

There are a couple of challenges to doing this, according to Dean. For one thing, chess and Go both have a single objective, which is to win the game and not lose the game. (They are two sides of the same coin.) With the placement of IP blocks on an ASIC and the routing between them, there is not a simple win or lose and there are many objectives that you care about, such as area, timing, congestion, design rules, and so on. Even more daunting is the fact that the number of potential states that have to be managed by the neural network model for IP block placement is enormous, as this chart below shows:

Finally, the true reward function that drives the placement of IP blocks, which runs in EDA tools, takes many hours to run.

"And so we have an architecture (I'm not going to go into a lot of detail) but essentially it tries to take a bunch of things that make up a chip design and then place them on the wafer," explains Dean, and he showed off some results of placing IP blocks on a low-powered machine learning accelerator chip (we presume this is the edge TPU that Google has created for its smartphones), with some areas intentionally blurred to keep us from learning the details of that chip. "We have had a team of human experts place this IP block, and they had a couple of proxy reward functions that are very cheap for us to evaluate; we evaluated them in two seconds instead of hours, which is really important because reinforcement learning is one where you iterate many times. So we have a machine learning-based placement system, and what you can see is that it sort of spreads out the logic a bit more rather than having it in quite such a rectangular area, and that has enabled it to get improvements in both congestion and wire length. And we have got comparable or superhuman results on all the different IP blocks that we have tried so far."

Note: I am not sure we want to call AI algorithms superhuman. At least not if you don't want to have them banned.
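The proxy reward functions Dean mentions, something cheap that stands in for hours of EDA evaluation, can be pictured with a hedged sketch like the one below: a weighted combination of a half-perimeter wirelength estimate and a crude congestion count, computable in microseconds. The weights, grid size, and net list format are illustrative assumptions; the cost functions Google actually uses are not spelled out at this level of detail.

```python
# Hedged sketch of a cheap, multi-objective proxy reward for block placement.
# Coordinates, nets, weights, and grid size are all illustrative assumptions.
import numpy as np

def proxy_reward(positions, nets, grid=32, w_wire=1.0, w_cong=0.5):
    """positions: (n_blocks, 2) array of x, y placements in [0, grid).
    nets: list of tuples of block indices that must be wired together."""
    # Half-perimeter wirelength (HPWL): bounding-box size of each net.
    wirelength = 0.0
    for net in nets:
        pts = positions[list(net)]
        wirelength += np.ptp(pts[:, 0]) + np.ptp(pts[:, 1])
    # Crude congestion proxy: how heavily the most crowded grid cell is used.
    cells = np.zeros((grid, grid))
    idx = np.clip(positions.astype(int), 0, grid - 1)
    np.add.at(cells, (idx[:, 0], idx[:, 1]), 1)
    congestion = cells.max()
    # Negative weighted cost: higher reward means shorter wires, less crowding.
    return -(w_wire * wirelength + w_cong * congestion)

rng = np.random.default_rng(1)
positions = rng.uniform(0, 32, size=(50, 2))
nets = [tuple(rng.choice(50, size=3, replace=False)) for _ in range(40)]
print("proxy reward:", proxy_reward(positions, nets))
```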

Anyway, here is how the RL network compared with human designers on IP block placement for that low-powered machine learning accelerator:

And here is a table that shows the difference between doing the placing and routing by hand and automating it with machine learning:

And finally, here is how the IP block on the TPU chip was handled by the RL network compared to the humans:

Look at how organic these AI-created IP blocks look compared to the Cartesian ones designed by humans. Fascinating.

Now, having done this, Google then asked this question: Can we train a general agent that is quickly effective at placing a new design that it has never seen before? Which is precisely the point when you are making a new chip. So Google tested this generalized model against four different IP blocks from the TPU architecture and then also on the Ariane RISC-V processor architecture. This data pits people working with commercial tools against various levels of tuning on the model:

And here is some more data on the placement and routing done on the Ariane RISC-V chips:

"You can see that experience on other designs actually improves the results significantly, so essentially in twelve hours you can get the darkest blue bar," Dean says, referring to the first chart above, and then continues with the second chart above: "And this graph shows the wirelength costs, where we see that if you train from scratch, it actually takes the system a little while before it sort of makes some breakthrough insight and is able to significantly drop the wiring cost, whereas the pretrained policy has some general intuitions about chip design from seeing other designs and gets to that level very quickly."

Just like we do ensembles of simulations to do better weather forecasting, Dean says that this kind of AI-juiced placement and routing of IP blocks in chip design could be used to quickly generate many different layouts, with different tradeoffs. And in the event that some feature needs to be added, the AI-juiced chip design game could re-do a layout quickly, not taking months to do it.

And most importantly, this automated design assistance could radically drop the cost of creating new chips. These costs are going up exponentially. According to data we have seen (thanks to IT industry luminary and Arista Networks chairman and chief technology officer Andy Bechtolsheim), an advanced chip design using 16 nanometer processes cost an average of $106.3 million, shifting to 10 nanometers pushed that up to $174.4 million, the move to 7 nanometers costs $297.8 million, and projections for 5 nanometer chips are on the order of $542.2 million. Nearly half of that cost has been, and continues to be, for software. So we know where to target some of those costs, and machine learning can help.

The question is: will the chip design software makers embed AI and foster an explosion in chip designs that can truly be called Cambrian, and then make it up in volume like the rest of us have to do in our work? It will be interesting to see what happens here, and how research like that being done by Google will help.


How to Train Your AI Soldier Robots (and the Humans Who Command Them) – War on the Rocks

Editor's Note: This article was submitted in response to the call for ideas issued by the co-chairs of the National Security Commission on Artificial Intelligence, Eric Schmidt and Robert Work. It addresses the third question (part a.), which asks how institutions, organizational structures, and infrastructure will affect AI development, and whether artificial intelligence will require the development of new institutions or changes to existing institutions.

Artificial intelligence (AI) is often portrayed as a single omnipotent force: the computer as God. Often the AI is evil, or at least misguided. According to Hollywood, humans can outwit the computer (2001: A Space Odyssey), reason with it (Wargames), blow it up (Star Wars: The Phantom Menace), or be defeated by it (Dr. Strangelove). Sometimes the AI is an automated version of a human, perhaps a human fighter's faithful companion (the robot R2-D2 in Star Wars).

These science fiction tropes are legitimate models for military discussion, and many are being discussed. But there are other possibilities. In particular, machine learning may give rise to new forms of intelligence: not natural, but not really artificial, if that term implies having been designed in detail by a person. Such new forms of intelligence may resemble that of humans or other animals, and we will discuss them using language associated with humans, but we are not discussing robots that have been deliberately programmed to emulate human intelligence. Through machine learning, they have been programmed by their own experiences. We speculate that some of the characteristics that humans have evolved over millennia will also evolve in future AI, characteristics that have evolved purely for their success in a wide range of situations that are real, for humans, or simulated, for robots.

As the capabilities of AI-enabled robots increase, and in particular as behaviors emerge that are both complex and outside past human experience, how will we organize, train, and command them and the humans who will supervise and maintain them? Existing methods and structures, such as military ranks and doctrine, that have evolved over millennia to manage the complexity of human behavior will likely be necessary. But because robots will evolve new behaviors we cannot yet imagine, they are unlikely to be sufficient. Instead, the military and its partners will need to learn new types of organization and new approaches to training. It is impossible to predict what these will be but very possible they will differ greatly from approaches that have worked in the past. Ongoing experimentation will be essential.

How to Respond to AI Advances

The development of AI, especially machine learning, will lead to unpredictable new types of robots. Advances in AI suggest that humans will have the ability to create many types of robots, of different shapes, sizes, or degrees of independence or autonomy. It is conceivable that humans may one day be able to design tiny AI bullets to pierce only designated targets, automated aircraft to fly as loyal wingmen alongside human pilots, or thousands of AI fish to swim up an enemy's river. Or we could design AI not as a device but as a global grid that analyzes vast amounts of diverse data. Multiple programs funded by the Department of Defense are on their way to developing robots with varying degrees of autonomy.

In science fiction, robots are often depicted as behaving in groups (like the robot dogs in Metalhead). Researchers inspired by animal behaviors have developed AI concepts such as swarms, in which relatively simple rules for each robot can result in complex emergent phenomena on a larger scale. This is a legitimate and important area of investigation. Nevertheless, simply imitating the known behaviors of animals has its limits. After observing the genocidal nature of military operations among ants, biologists Bert Holldobler and E. O. Wilson wrote, "If ants had nuclear weapons, they would probably end the world in a week." Nor would we want to limit AI to imitating human behavior. In any case, a major point of machine learning is the possibility of uncovering new behaviors or strategies. Some of these will be very different from all past experience: human, animal, and automated. We will likely encounter behaviors that, although not human, are so complex that some human language, such as "personality," may seem appropriately descriptive. Robots with new, sophisticated patterns of behavior may require new forms of organization.
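The "simple local rules, complex group behavior" point can be illustrated with a hedged, boids-style sketch: each agent applies only separation, alignment, and cohesion against nearby neighbors, yet the group as a whole flocks. All constants and the flat 2D setting are invented for illustration and model nothing military.

```python
# Minimal boids-style swarm: purely local rules, emergent group behavior.
# Every parameter here is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(42)
n = 50
pos = rng.uniform(0, 100, size=(n, 2))   # agent positions
vel = rng.normal(0, 1, size=(n, 2))      # agent velocities

def step(pos, vel, radius=15.0, sep=0.05, ali=0.05, coh=0.01, vmax=2.0):
    new_vel = vel.copy()
    for i in range(n):
        offsets = pos - pos[i]
        dist = np.linalg.norm(offsets, axis=1)
        mask = (dist < radius) & (dist > 0)                        # local neighbors only
        if mask.any():
            new_vel[i] += -sep * offsets[mask].mean(axis=0)            # separation
            new_vel[i] += ali * (vel[mask].mean(axis=0) - vel[i])      # alignment
            new_vel[i] += coh * (pos[mask].mean(axis=0) - pos[i])      # cohesion
        speed = np.linalg.norm(new_vel[i])
        if speed > vmax:
            new_vel[i] *= vmax / speed                             # cap the speed
    return pos + new_vel, new_vel

for _ in range(200):
    pos, vel = step(pos, vel)
print("spread of the flock after 200 steps:", pos.std(axis=0))
```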

Military structure and scheme of maneuver are key to victory. Groups often fight best when they don't simply swarm but execute sophisticated maneuvers in hierarchical structures. Modern military tactics were honed over centuries of experimentation and testing. This was a lengthy, expensive, and bloody process.

The development of appropriate organizations and tactics for AI systems will also likely be expensive, although one can hope that through the use of simulation it will not be bloody. But it may happen quickly. The competitive international environment creates pressure to use machine learning to develop AI organizational structure and tactics, techniques, and procedures as fast as possible.

Despite our considerable experience organizing humans, when dealing with robots with new, unfamiliar, and likely rapidly evolving personalities, we confront something of a blank slate. But we must think beyond established paradigms, beyond the computer as all-powerful or the computer as loyal sidekick.

Humans fight in a hierarchy of groups, each soldier in a squad or each battalion in a brigade exercising a combination of obedience and autonomy. Decisions are constantly made at all levels of the organization. Deciding what decisions can be made at what levels is itself an important decision. In an effective organization, decision-makers at all levels have a good idea of how others will act, even when direct communication is not possible.

Imagine an operation in which several hundred underwater robots are swimming up a river to accomplish a mission. They are spotted and attacked. A decision must be made: Should they retreat? Who decides? Communications will likely be imperfect. Some mid-level commander, likely one of the robot swimmers, will decide based on limited information. The decision will likely be difficult and depend on the intelligence, experience, and judgment of the robot commander. It is essential that the swimmers know who or what is issuing legitimate orders. That is, there will have to be some structure, some hierarchy.

The optimal unit structure will be worked out through experience. Achieving as much experience as possible in peacetime is essential. That means training.

Training Robot Warriors

Robots with AI-enabled technologies will have to be exercised regularly, partly to test them and understand their capabilities and partly to provide them with the opportunity to learn from recreating combat. This doesn't mean that each individual hardware item has to be trained, but that the software has to develop by learning from its mistakes in virtual testbeds and, to the extent that they are feasible, realistic field tests. People learn best from the most realistic training possible. There is no reason to expect machines to be any different in that regard. Furthermore, as capabilities, threats, and missions evolve, robots will need to be continuously trained and tested to maintain effectiveness.

Training may seem a strange word for machine learning in a simulated operational environment. But then, conventional training is human learning in a controlled environment. Robots, like humans, will need to learn what to expect from their comrades. And as they train and learn highly complex patterns, it may make sense to think of such patterns as personalities and memories. At least, the patterns may appear that way to the humans interacting with them. The point of such anthropomorphic language is not that the machines have become human, but that their complexity is such that it is helpful to think in these terms.

One big difference between people and machines is that, in theory at least, the products of machine learning, the code for these memories or personalities, can be uploaded directly from one very experienced robot to any number of others. If all robots are given identical training and the same coded memories, we might end up with a uniformity among a units members that, in the aggregate, is less than optimal for the unit as a whole.

Diversity of perspective is accepted as a valuable aid to human teamwork. Groupthink is widely understood to be a threat. It's reasonable to assume that diversity will also be beneficial to teams of robots. It may be desirable to create a library of many different personalities or memories that could be assigned to different robots for particular missions. Different personalities could be deliberately created by using somewhat different sets of training testbeds to develop software for the same mission.
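In machine-learning terms, such a library might look like the hedged sketch below: train several copies of an identical policy network under different random seeds (standing in for different training testbeds), store each set of weights, and assign whichever "personality" a mission calls for. The tiny network, the hill-climbing stand-in for training, and the pickle-file storage are all illustrative assumptions.

```python
# Hedged sketch: a small "library of personalities" built by training identical
# policies under different seeds. Network size, the training loop, and the
# storage format are illustrative assumptions only.
import pickle
import numpy as np

def fitness(weights, rng):
    # Stand-in score from a pretend simulated testbed.
    obs = rng.normal(size=(32, 8))
    return float((obs @ weights).max(axis=1).mean())

def train_policy(seed, steps=1000):
    rng = np.random.default_rng(seed)
    weights = rng.normal(0, 0.1, size=(8, 4))   # toy policy: 8 inputs -> 4 actions
    for _ in range(steps):
        candidate = weights + rng.normal(0, 0.01, size=weights.shape)
        if fitness(candidate, rng) > fitness(weights, rng):
            weights = candidate                  # keep perturbations that score better
    return weights

# Different seeds (or different testbeds) yield different "personalities".
library = {f"personality_{s}": train_policy(seed=s) for s in range(3)}

with open("personality_library.pkl", "wb") as f:
    pickle.dump(library, f)                      # weights can be copied to any robot

# A unit can then mix personalities rather than cloning one everywhere.
assignment = [list(library)[i % len(library)] for i in range(10)]
print(assignment)
```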

If AI can create autonomous robots with human-like characteristics, what is the ideal personality mix for each mission? Again, we are using the anthropomorphic term personality for the details of the robot's behavior patterns. One could call it a robot's programming if that did not suggest the existence of an intentional programmer. The robots' personalities have evolved from the robots' participation in a very large number of simulations. It is unlikely that any human will fully understand a given personality or be able to fully predict all aspects of a robot's behavior.

In a simple case, there may be one optimum personality for all the robots of one type. In more complicated situations, where robots will interact with each other, having robots that respond differently to the same stimuli could make a unit more robust. These are things that military planners can hope to learn through testing and training. Of course, attributes of personality that may have evolved for one set of situations may be less than optimal, or positively dangerous, in another. We talk a lot about artificial intelligence. We don't discuss artificial mental illness. But there is no reason to rule it out.

Of course, humans will need to be trained to interact with the machines. Machine learning systems already often exhibit sophisticated behaviors that are difficult to describe. It's unclear how future AI-enabled robots will behave in combat. Humans, and other robots, will need experience to know what to expect and to deal with any unexpected behaviors that may emerge. Planners need experience to know which plans might work.

But the human-robot relationship might turn out to be something completely different. For all of human history, generals have had to learn their soldiers' capabilities. They knew best exactly what their troops could do. They could judge the psychological state of their subordinates. They might even know when they were being lied to. But today's commanders do not know, yet, what their AI might prove capable of. In a sense, it is the AI troops that will have to train their commanders.

In traditional military services, the primary peacetime occupation of the combat unit is training. Every single servicemember has to be trained up to the standard necessary for wartime proficiency. This is a huge task. In a robot unit, planners, maintainers, and logisticians will have to be trained to train and maintain the machines but may spend little time working on their hardware except during deployment.

What would the units look like? What is the optimal unit rank structure? How does the human rank structure relate to the robot rank structure? There are a million questions as we enter uncharted territory. The way to find out is to put robot units out onto test ranges where they can operate continuously, test software, and improve machine learning. AI units working together can learn and teach each other and humans.

Conclusion

AI-enabled robots will need to be organized, trained, and maintained. While these systems will have human-like characteristics, they will likely develop distinct personalities. The military will need an extensive training program to inform new doctrines and concepts to manage this powerful, but unprecedented, capability.

It's unclear what structures will prove effective to manage AI robots. Only by continuous experimentation can people, including computer scientists and military operators, understand the developing world of multi-unit human and robot forces. We must hope that experiments lead to correct solutions. There is no guarantee that we will get it right. But there is every reason to believe that as technology enables the development of new and more complex patterns of robot behavior, new types of military organizations will emerge.

Thomas Hamilton is a Senior Physical Scientist at the nonprofit, nonpartisan RAND Corporation. He has a Ph.D. in physics from Columbia University and was a research astrophysicist at Harvard, Columbia, and Caltech before joining RAND. At RAND he has worked extensively on the employment of unmanned air vehicles and other technology issues for the Defense Department.

Image: Wikicommons (U.S. Air Force photo by Kevin L. Moses Sr.)


Google's Machine Learning Is Making You More Effective In 2020 – Forbes

The collection of web-based software that Google offers to businesses and consumers is officially known as G Suite. Most people are familiar with Gmail and Google Docs, but quite a few do not realize that Google offers a whole range of productivity and collaboration tools via your computer or mobile device.

A woman using a MacBook Pro with Google G Suite, November 27, 2017, in Hong Kong. (Photo by studioEAST/Getty Images)

I have been working on another post about consumer-level uses of artificial intelligence (AI), not the media-hyped creepiness, but the practical, useful ways that AI is helping us do more and be more. Google started me thinking about this as I have watched it add various smart functions (think AI) to email as well as increasing ways to help me complete or enhance a document, spreadsheet, or presentation with the Explore function. It keeps learning from you and adjusting to you with these features.

Draft and send email responses quicker: Two relatively new, intelligent features include Smart Compose and Smart Reply. Gmail will suggest ways to complete your sentences while drafting an email and suggest responses to incoming messages as one-click buttons (at the bottom of the newly received message). This works in relatively simple messages that are calling for answers like these:

Enable Smart Compose and Smart Reply by going to Settings (that little gear icon in the upper right of your email inbox). Smart Reply is automatically enabled when users switch to the new Gmail.

On mobile and desktop or web, Smart Reply utilizes machine learning to give you better responses the more you use it. So if you're more of a "thanks!" than a "thanks." person, it'll suggest the response that is more authentic to you. Subtle difference, for sure, but I have noticed that with certain people I interact with, the punctuation does change to show more emotion. I have not seen any emojis popping up, however. That may be a good thing.

For some of the newest features, you must go to Settings, then click Experimental Access. Features that are under test have a special little chemistry bottle icon or emoji. Most of the features in this post have already been fully tested and released to the general public.

Auto-reminders to respond: Gmail's new Nudging function reportedly will now automatically bump both incoming and outgoing messages to the top of your inbox after a few days if neither party has responded. You can turn this feature on/off in Settings. However, I have not had this work properly, but maybe I am simply too efficient. Not. Either way, I have not noticed these reminders yet.

Machine Learning in Google Docs, Google Sheets, and Google Slides

The Explore button in the lower right corner of Docs, Sheets, or Slides is machine learning (ML) in action. You can visualize data in Sheets without using a formula. The Explore button looks like a compass-style star, and as you hover over it, it expands. Once clicked, it serves as a search tool within these products.

Explore in Sheets helps you decipher data easily by letting you ask questions in words, not formulas, to get answers about your data. Questions like "how many units were sold on Black Friday?", "what is my best-selling product?" or "how much was spent on payroll last month?" can be asked directly instead of creating formulas to get an answer. Explore in Sheets is available on the web, Android, and iOS. On Android, you tap the three vertical dots to get to the menu, where Explore is listed. When you first click it, it offers a "try an example" option and creates a new spreadsheet showcasing various examples.
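To picture what Explore is doing when it answers a question like "how many units were sold on Black Friday" without a formula, here is a hedged analogy in Python with pandas: the natural-language question maps onto an ordinary filter-and-aggregate over the sheet's data. The column names and figures are invented for illustration and say nothing about Google's actual implementation.

```python
# Hedged analogy: formula-free questions become filters and aggregations.
# Column names and numbers are invented for illustration.
import pandas as pd

sales = pd.DataFrame({
    "date": ["2019-11-28", "2019-11-29", "2019-11-29", "2019-11-30"],
    "product": ["mug", "mug", "t-shirt", "mug"],
    "units": [12, 250, 180, 40],
})

# "How many units were sold on Black Friday?" (Black Friday 2019 fell on Nov 29)
black_friday_units = sales.loc[sales["date"] == "2019-11-29", "units"].sum()

# "What is my best-selling product?"
best_seller = sales.groupby("product")["units"].sum().idxmax()

print(black_friday_units, best_seller)   # 430 mug
```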

Explore in Docs gives you a way to stay focused in the same tab. Using Explore, you get a little sidebar with Web, Images, and Drive results. It provides instant suggestions based on the content in your document, including related topics to learn about, images to insert, or more content to check out in Docs. You can also find a related document from Drive or search Google right within Explore. Explore in Docs is available in a web browser, but I did not find it on my mobile apps for Android or iOS.

Explore in Slides makes designing a presentation simple. I think there's some AI/ML going on here, as Explore dynamically generates design suggestions based on the content of your slides. Then, pick a recommendation and apply it with a single click, without having to crop, resize or reformat.

Are all of these features going to single-handedly make you the most productive person on the planet? No, but they are definitely small and constant improvements that point the way to a more customized and helpful use of artificial intelligence and machine learning.

If you are looking for other creative ways that people and organizations are using G Suite, there are tons of great customer stories that Google shares about how big and small organizations and companies use its free and enterprise-level products that may give you ideas for how you can leverage their cloud software. I find many of these case studies inspiring, but that is based on how organizations are responding to community needs.

Check out this one from Eagle County, Colorado during a wildfire there and this one from the City of Los Angeles with a real-time sheet to show police officers which homeless shelters have available beds.


How machine learning is changing the face of financial services – Techerati

As the financial services industry continues to leverage machine learning and predictive analytics, the volume of data being generated is ballooning. This has massive implications for data security

Artificial intelligence (AI) has become integrated into our everyday lives. It powers what we see in our social media newsfeeds, activates facial recognition (to unlock our smartphones), and even suggests music for us to listen to. Machine learning, a subset of AI, is progressively integrating into our everyday lives and changing how we live and make decisions.

Business changes all the time, but advances in today's technologies have accelerated the pace of change. Machine learning analyses historical data and behaviours to predict patterns and make decisions. It has proved hugely successful in retail for its ability to tailor products and services to customers.

Unsurprisingly, retail banking and machine learning are a perfect combination. Thanks to machine learning, functions such as fraud detection and credit scoring are now automated. Banks also leverage machine learning and predictive analytics to offer their customers a far more personalised user experience, recommend new products, and animate chatbots that help with routine transactions such as account checking and paying bills.
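As a hedged illustration of the kind of automation described above, the sketch below trains a toy fraud-detection classifier on synthetic transaction features with scikit-learn. The features, the synthetic labelling rule, and the model choice are invented for illustration; production systems at banks are far more elaborate and carefully validated.

```python
# Hedged sketch: a toy fraud-detection classifier on synthetic transactions.
# Features, labels, and model choice are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000
amount = rng.exponential(50, n)        # transaction amount
hour = rng.integers(0, 24, n)          # hour of day
foreign = rng.integers(0, 2, n)        # foreign-merchant flag
# Synthetic ground truth: fraud is likelier for large, night-time, foreign transactions.
logit = 0.02 * amount + 1.5 * foreign + 1.0 * (hour < 6) - 6.0
y = rng.random(n) < 1 / (1 + np.exp(-logit))
X = np.column_stack([amount, hour, foreign])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```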

Machine learning is also disrupting the insurance sector. As more connected devices provide deeper insights into customer behaviours, insurers can set premiums and make payout decisions based on data. Insurtech firms are shaking things up by harnessing new technologies to develop enhanced solutions for customers. The potential for change is huge and, according to McKinsey, the [insurance] industry is "on the verge of a seismic, tech-driven shift".

Few industries have as much historical and structured data as the financial services industry, making it the perfect playing field for machine learning technologies.


‘Technology is never neutral’: why we should remain wary of machine learning in children’s social care – Communitycare.co.uk

(credit: Pablo Lagarto / Adobe Stock)

On 1 February 2020, YouTuber Simon Weekert posted a video on YouTube claiming to have redirected traffic by faking traffic jams on Google Maps. The video shows Weekert walking slowly along traffic-free streets in Berlin, pulling a pile of second-hand mobile phones in a cart behind him, with Google Maps generating traffic jam alerts because the phones had their location services turned on.

Weekert's performance act demonstrates the fragility and vulnerability of our systems and their difficulty in interpreting outliers, and highlights a kind of decisional blindness when we think of data as "objective, unambiguous and interpretation free", as he put it. There are many other examples of decisional blindness, such as drivers following Google Maps and falling off cliffs or driving into rivers.

Google has the resources, expertise and technology to rapidly learn from this experience and make changes to avoid similar situations. But the same vulnerability to hacking or outliers applies to the use of machine learning in children's social care (CSC), and this raises the question of whether the sector has the means to identify and rectify issues in a timely manner and without adverse effects for service users.

Have you ever had the experience of asking the wrong question in Google search and getting the right answer? That's because of contextual computing that makes use of AI and machine learning.

At its heart, machine learning is the application of statistical techniques to identify patterns and enable computers to use data to progressively learn and improve their performance.
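A minimal, hedged illustration of that definition: fit a simple statistical model to noisy data and watch the fit improve as the model sees more examples. The underlying relationship and sample sizes below are an invented toy.

```python
# Toy illustration: a statistical model improves as it learns from more data.
# The "true" relationship and sample sizes are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

def true_relation(x):
    return 3.0 * x + 2.0                      # the pattern hiding in the data

for n_samples in (10, 1000, 100000):
    x = rng.uniform(0, 1, n_samples)
    y = true_relation(x) + rng.normal(0, 0.5, n_samples)   # noisy observations
    slope, intercept = np.polyfit(x, y, 1)    # "learn" the pattern from examples
    error = abs(slope - 3.0) + abs(intercept - 2.0)
    print(f"{n_samples:6d} examples -> parameter error {error:.3f}")
```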

From Google search and Alexa to online shopping, and from games and health apps to WhatsApp and online dating, most online interactions are mediated by AI and machine learning. Like electricity, AI and machine learning will power every piece of software and every digital device, and will transform and mediate every aspect of human experience, mostly without end users giving them a thought.

But there are particular concerns about their applications in CSC and, therefore, a corresponding need for national standards for machine learning in social care and for greater transparency and scrutiny around the purpose, design, development, use, operation and ethics of machine learning in CSC. This was set out in What Works for Children's Social Care's ethics review into machine learning, published at the end of January.

The quality of machine learning systems' predictive analysis is dependent on the quality, completeness and representativeness of the dataset they draw on. But people's lives are complex, and often case notes do not capture this complexity and instead are complemented by practitioners' intuition and practice wisdom. Such data lacks the quality and structure needed for machine learning applications, making high levels of accuracy harder to achieve.

Inaccuracy in identifying children and families can result in either false positives that infringe on people's rights and privacy, cause stress and waste time and resources, or false negatives that miss children and families in need of support and protection.
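That trade-off can be made concrete with a hedged sketch: for any risk-scoring model, moving the decision threshold exchanges one kind of error for the other. The scores and labels below are synthetic and purely illustrative.

```python
# Hedged sketch: a decision threshold trades false positives for false negatives.
# Scores and labels are synthetic and illustrative only.
import numpy as np

rng = np.random.default_rng(3)
labels = rng.random(1000) < 0.1            # 10% of cases genuinely need support
# Imperfect risk scores: genuine cases score higher on average, with overlap.
scores = np.where(labels, rng.normal(0.7, 0.15, 1000), rng.normal(0.4, 0.15, 1000))

for threshold in (0.5, 0.6, 0.7):
    flagged = scores >= threshold
    false_pos = int(np.sum(flagged & ~labels))   # families flagged unnecessarily
    false_neg = int(np.sum(~flagged & labels))   # families in need who are missed
    print(f"threshold {threshold}: false positives={false_pos}, false negatives={false_neg}")
```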

Advocates of machine learning often point out that systems only provide assistance and recommendations, and that it remains the professionals who make actual decisions. Yet decisional blindness can undermine critical thinking, and false positives and negatives can result in poor practice and stigmatisation, and can further exclusion, harm and inequality.

It's true that AI and machine learning can be used in empowering ways to support services or to challenge discrimination and bias. The use of Amazon's Alexa to support service users in adult social care is, while not completely free of concerns, one example of positive application of AI in practice.

Another is Essex council's use of machine learning to produce anonymised aggregate data at community level of children who may not be ready for school by their fifth birthday. This data is then shared with parents and services who are part of the project to inform their funding allocation or changes to practice as need be. This is a case of predictive analytics being used in a way that is supportive of children and empowering for parents and professionals.

The Principal Children and Families Social Worker (PCFSW) Network is conducting a survey of practitioners to understand their current use of technology, the challenges they face, and the skills, capabilities and support that they need.

It only takes 10 minutes to complete the survey on digital professionalism and online safeguarding. Your responses will inform best practice and better support for social workers and social care practitioners, to help ensure practitioners lead the changes in technology rather than technology driving practice and shaping practitioners' professional identity.

But it's more difficult to make such an assessment in relation to applications that use hundreds of thousands of people's data, without their consent, to predict child abuse. While there are obvious practical challenges around seeking the permission of huge numbers of people, failing to do so shifts the boundaries of individual rights and privacy vis-à-vis surveillance and the power of public authorities. Unfortunately though, ethical concerns do not always influence the direction or speed of change.

Another controversial recent application of technology is the use of live facial recognition cameras in London. An independent report by Essex University last year suggested concerns with inaccuracies in the use of live facial recognition, while the Met Police's senior technologist, Johanna Morley, said millions of pounds would need to be invested in purging police suspect lists and aligning front- and back-office systems to ensure the legality of facial recognition cameras. Despite these concerns, the Met will begin using facial recognition cameras in London streets, with the aim of tackling serious crime, including child sexual exploitation.

Research published in November 2015, meanwhile, showed that a flock of trained pigeons can spot cancer in images of biopsied tissue with 99% accuracy; that is comparable to what would be expected of a pathologist. At the time, one of the co-authors of the report suggested that the birds might be able to assess the quality of new imaging techniques or methods of processing and displaying images without forcing humans to spend hours or days doing detailed comparisons.

Although there are obvious cost efficiencies in recruiting pigeons instead of humans, I am sure most of us will not be too comfortable having a flock of pigeons as our pathologist or radiologist.

Many people would also argue more broadly that fiscal policy should not undermine people's health and wellbeing. Yet the past decade of austerity, with £16bn in cuts in core government funding for local authorities by this year and a continued emphasis on doing more with less, has led to resource-led practices that are far from the aspirations of the Children Act 1989 and of every child having the opportunity to achieve their potential.

Technology is never neutral and there are winners and losers in every change. Given the profound implications of AI and machine learning for CSC, it is essential such systems are accompanied by appropriate safeguards and processes that prevent and mitigate false positives and negatives and their adverse impact and repercussions. But in an environment of severe cost constraints, positive aspirations might not be matched with adequate funding to ensure effective prevention and adequate support for those negatively impacted by such technologies.

In spite of the recent ethics review's laudable aspirations, there is also the real risk that many of the applications of machine learning pursued to date in CSC may cement current practice challenges by hard-coding austerity and current thresholds into systems and the future of services.

The US constitution was written and ratified by middle-aged white men, and it took over 130 years for women to gain the right of suffrage and 176 years to recognise and outlaw discrimination based on race, sex, religion and national origin. Learning from history would suggest we must be cautious about reflecting children's social care's operating context into systems, all designed, developed and implemented by experts and programmers who may not represent the diversity of the people who will be most affected by such systems.

Dr Peter Buzzi (@MHChat) is the director of Research and Management Consultancy Centre and the Safeguarding Research Institute. He is also the national research lead for the Principal Children and Families Social Worker (PCFSW) Network's online safeguarding research and practice development project.
