On 1 February 2020, YouTuber Simon Weckert posted a video claiming to have redirected traffic by faking traffic jams on Google Maps. The video shows Weckert walking slowly along traffic-free streets in Berlin, pulling a cart of second-hand mobile phones behind him; because the phones had their location services turned on, Google Maps generated traffic jam alerts.
Weckert's performance demonstrates the fragility and vulnerability of our systems and their difficulty in interpreting outliers, and highlights, as he put it, "a kind of decisional blindness when we think of data as objective, unambiguous and interpretation free". There are many other examples of decisional blindness, such as drivers following Google Maps off cliffs or into rivers.
Google has the resources, expertise and technology to rapidly learn from this experience and make changes to avoid similar situations. But the same vulnerability to hacking or outliers applies to the use of machine learning in children's social care (CSC), and this raises the question of whether the sector has the means to identify and rectify issues in a timely manner and without adverse effects for service users.
Have you ever had the experience of asking the wrong question in Google search and getting the right answer? That's because of contextual computing, which makes use of AI and machine learning.
At its heart, machine learning is the application of statistical techniques to identify patterns and enable computers to use data to progressively learn and improve their performance.
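To make that idea concrete, here is a minimal, purely illustrative sketch in Python, assuming the widely used scikit-learn library is available. The data, features and labels are invented for illustration and have nothing to do with any real social care system; the point is simply that a model is fitted to past examples and then produces a probability, not a certainty, for a new case.

```python
# A minimal, purely illustrative sketch of "statistical pattern learning".
# The features and labels below are synthetic and hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic training data: each row is a case described by two numeric
# features; each label records whether a (made-up) outcome occurred.
X_train = np.array([[0.2, 1.0], [0.4, 0.8], [1.5, 0.1],
                    [1.8, 0.3], [0.1, 1.2], [2.0, 0.2]])
y_train = np.array([0, 0, 1, 1, 0, 1])

model = LogisticRegression()
model.fit(X_train, y_train)  # "learn" patterns from past examples

# Score a new, unseen case: the output is a probability, not a fact.
new_case = np.array([[1.2, 0.4]])
print(model.predict_proba(new_case))  # [[p(no outcome), p(outcome)]]
```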
From Google search and Alexa to online shopping, and from games and health apps to WhatsApp and online dating, most online interactions are mediated by AI and machine learning. Like electricity, AI and machine learning will power every piece of software and every digital device, and will transform and mediate every aspect of human experience, mostly without end users giving them a thought.
But there are particular concerns about their applications in CSC and, therefore, a corresponding need for national standards for machine learning in social care and for greater transparency and scrutiny around the purpose, design, development, use, operation and ethics of machine learning in CSC. This was set out in What Works for Children's Social Care's ethics review into machine learning, published at the end of January.
The quality of machine learning systems' predictive analysis depends on the quality, completeness and representativeness of the datasets they draw on. But people's lives are complex, and case notes often do not capture this complexity; instead they are complemented by practitioners' intuition and practice wisdom. Such data lacks the quality and structure needed for machine learning applications, making high levels of accuracy harder to achieve.
Inaccuracy in identifying children and families can result either in false positives, which infringe on people's rights and privacy, cause stress and waste time and resources, or in false negatives, which miss children and families in need of support and protection.
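The sketch below illustrates this distinction with entirely invented numbers (not figures from any real CSC system): a model can look highly "accurate" overall while still producing both kinds of error.

```python
# Illustrative sketch, with invented numbers, of why headline "accuracy"
# can hide both false positives and false negatives.
from sklearn.metrics import confusion_matrix

# Hypothetical outcomes for 1,000 cases: 1 = family genuinely needs support.
y_true = [1] * 50 + [0] * 950
# Hypothetical model output that both misses and over-flags some cases.
y_pred = [1] * 20 + [0] * 30 + [1] * 30 + [0] * 920

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / len(y_true)
print(f"accuracy={accuracy:.1%}, false positives={fp}, false negatives={fn}")
# Roughly 94% accuracy, yet 30 families are flagged unnecessarily and
# 30 families in need of support are missed.
```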
Advocates of machine learning often point out that systems only provide assistance and recommendations, and that it is still professionals who make the actual decisions. Yet decisional blindness can undermine critical thinking, and false positives and negatives can result in poor practice and stigmatisation, and can further exclusion, harm and inequality.
It's true that AI and machine learning can be used in empowering ways to support services or to challenge discrimination and bias. The use of Amazon's Alexa to support service users in adult social care is, while not completely free of concerns, one example of a positive application of AI in practice.
Another is Essex council's use of machine learning to produce anonymised, community-level aggregate data on children who may not be ready for school by their fifth birthday. This data is then shared with parents and services who are part of the project, to inform their funding allocation or changes to practice as needed. This is a case of predictive analytics being used in a way that is supportive of children and empowering for parents and professionals.
The Principal Children and Families Social Worker (PCFSW) Network is conducting a survey of practitioners to understand their current use of technology, the challenges they face, and the skills, capabilities and support that they need.
It only takes 10 minutes to complete the survey on digital professionalism and online safeguarding. Your responses will inform best practice and better support for social workers and social care practitioners, helping to ensure that practitioners lead changes in technology, rather than technology driving practice and shaping practitioners' professional identity.
But it's more difficult to make such an assessment in relation to applications that use hundreds of thousands of people's data, without their consent, to predict child abuse. While there are obvious practical challenges in seeking the permission of huge numbers of people, failing to do so shifts the boundaries of individual rights and privacy vis-à-vis surveillance and the power of public authorities. Unfortunately, though, ethical concerns do not always influence the direction or speed of change.
Another controversial recent application of technology is the use of live facial recognition cameras in London. An independent report by Essex University last year raised concerns about inaccuracies in the use of live facial recognition, while the Met Police's senior technologist, Johanna Morley, said millions of pounds would need to be invested in purging police suspect lists and aligning front- and back-office systems to ensure the legality of facial recognition cameras. Despite these concerns, the Met will begin using facial recognition cameras on London streets, with the aim of tackling serious crime, including child sexual exploitation.
Research published in November 2015, meanwhile, showed that a flock of trained pigeons can spot cancer in images of biopsied tissue with 99% accuracy; that is comparable to what would be expected of a pathologist. At the time, one of the co-authors of the report suggested that the birds might be able to assess the quality of new imaging techniques or methods of processing and displaying images without forcing humans to spend hours or days doing detailed comparisons.
Although there are obvious cost efficiencies in recruiting pigeons instead of humans, I am sure most of us would not be too comfortable having a flock of pigeons as our pathologist or radiologist.
Many people would also argue, more broadly, that fiscal policy should not undermine people's health and wellbeing. Yet the past decade of austerity, with £16bn in cuts to core government funding for local authorities by this year and a continued emphasis on doing more with less, has led to resource-led practices that are far from the aspirations of the Children Act 1989 and of every child having the opportunity to achieve their potential.
Technology is never neutral and there are winners and losers in every change. Given the profound implications of AI and machine learning for CSC, it is essential that such systems are accompanied by appropriate safeguards and processes that prevent and mitigate false positives and negatives and their adverse impacts and repercussions. But in an environment of severe cost constraints, positive aspirations might not be matched with adequate funding to ensure effective prevention and adequate support for those negatively affected by such technologies.
In spite of the recent ethics review's laudable aspirations, there is also a real risk that many of the applications of machine learning pursued to date in CSC may cement current practice challenges by hard-coding austerity and current thresholds into systems and into the future of services.
The US constitution was written and ratified by middle-aged white men, and it took over 130 years for women to gain the right of suffrage and 176 years to recognise and outlaw discrimination based on race, sex, religion and national origin. Learning from history suggests we should be cautious about embedding children's social care's current operating context into systems that are designed, developed and implemented by experts and programmers who may not represent the diversity of the people most affected by them.
Dr Peter Buzzi (@MHChat) is the director of Research and Management Consultancy Centre and the Safeguarding Research Institute. He is also the national research lead for the Principal Children and Families Social Worker (PCFSW) Network's online safeguarding research and practice development project.