
This article is part of our new ‘Future of Social Work’ series, where we’ll be reporting on innovative practice approaches and technology driving social work forward. Get in contact with us to flag up anything that you think ticks either of those boxes at anastasia.koutsounia@markallengroup.com
Social Work England has commissioned research to examine how artificial intelligence is affecting the profession, while also hosting a summit on the issue.
The two pieces of research – one of which is a literature review – are exploring how AI is shaping social work practice and education.
The regulator said the purpose of the research was to help it understand:
- The areas of Social Work England’s professional standards that may be affected by social workers’ use of AI in their work.
- The types of AI being used across health and social care in England and their application in social work practice, including the risks of bias and discrimination.
- Whether social workers feel confident and prepared to use AI ethically and appropriately, in line with Social Work England’s professional standards, and how employers are supporting them to do this.
- How social work education providers are preparing students for AI in their future work.
- Data protection and confidentiality when using AI with people using services and the public.
Summit on AI amid increasing use in social work
The summit with sector leaders, held today (4 February 2025), covered the current extent of AI use in social work practice, the opportunities it can bring to a relationship-based profession, the risks it carries, the concerns being raised within the profession and the ethical implications, particularly regarding equality, diversity and inclusion.
The news comes as increasing numbers of councils test the impact of AI tools on practice, including in helping practitioners save time on recording and summarising case notes and in suggesting actions to take following assessments or visits.
About one in five practitioners were using such tools for day-to-day case work as of October 2024, according to a Community Care poll.
Other uses of AI in the sector include supporting student and practitioner learning and predicting future needs for social care.
However, social work bodies have raised concerns about the technology’s impact on the profession, including in relation to the quality and reliability of tools, their susceptibility to bias and discrimination and their implications for the privacy of the people social workers work with.
Government plans to roll out AI in public sector
At the same time, the government is planning to roll out the use of artificial intelligence across the public sector in order to reform services.
The implications of this for social work and social care are as yet unclear, though prime minister Keir Starmer pointed to reductions in the time social workers spend on administration as a benefit of the technology when launching the government’s AI opportunities plan last month.
In a LinkedIn post following the summit, Social Work England’s executive director of professional practice and external engagement, Sarah Blackmore, said: “While already in use, this is a new area for social workers to get to grips with. We are also keen to develop our knowledge through connecting and working with the experts.
“Holistically, there is real value in the tech and social work sectors working together with the potential for real positive impact on people across the country.”
Standardised and harmonised codings are a prerequisite for AI to function with any degree of accuracy; the question is at what level of sophistication.
The Zachman inventory and the Carnegie Mellon model, both accepted measures used by TOGAF (The Open Group Architecture Framework) and adopted by the then UK government in 2010 to assess organisational value and competency (the resource-based view of the firm), showed that AI failed!
That level of accuracy drives down standards by reducing social complexity to basic computational ciphers built around normative assumptions that all behaviour has the same properties and qualities, and by measuring how consistent the information gathered is with these norms, ie ‘all water is wet’ type judgements. It is the idea of completeness assumed by Boole when arriving at the yes/no (ones and zeros) of lie detection. The specifics of the design criteria in use are geared to surveillance; the rather more nuanced elicitation of any espoused values is simply incompatible with the design brief.
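As a minimal sketch of that yes/no reduction (entirely hypothetical, not drawn from any real social care system), consider how a rule-based binary flag treats very different situations identically:

```python
# Illustrative only: a hypothetical rule-based "risk flag" that collapses a
# nuanced case note into a single yes/no judgement.

CONCERN_WORDS = {"missed", "absent", "unknown", "refused"}


def binary_risk_flag(case_note: str) -> bool:
    """Return True if any 'concern' keyword appears: a crude ones-and-zeros cipher."""
    return bool(set(case_note.lower().split()) & CONCERN_WORDS)


# Two very different situations receive the same flag, and all context is lost:
print(binary_risk_flag("visit missed because the family were at a funeral"))    # True
print(binary_risk_flag("child missed school repeatedly, whereabouts unknown"))  # True
```

Both notes come back as the same 1/0 answer, which is exactly the flattening of social complexity described above.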
Sha Xin Wei warned against such catastrophic uses of data in his seminar ‘Navigating Indeterminacy’, presented to the Australian government a few years ago. And, decades ago, Eric Trist and Howard Perlmutter warned against the same; see ‘Societies in Transition’, Human Relations, 1986. The same warning is borne out by the huge and costly failures in IT procurement across the NHS and local government since interoperability was first tested.
The very first IT kit bought by Westminster Council for use in children’s services social work, called UCare, back in 1983 (I knew the peeps developing the kit), failed. Even though the IT provider has changed, it still doesn’t work!
Unless AI is confined to highly standardised, deeply harmonised and routine functions, it will inevitably fail.
Any coder will tell you this. Speak to the D&D gamers who have been doing this for decades. They know that the DM makes or breaks the game and the players.
Are social workers, as the so-called navigators, the new Dungeon Masters of the Poor Laws wars? Or what?
For a real-time and real-world take on this, check out Dunder Moose, a US podcast covering the ethical responsibilities of a DM towards the players.
Crazy, crazy 🤪
I am not an expert in this field. However, quoting the second bullet point above:
‘The types of AI being used across health and social care in England and their application in social work practice, including the risks of bias and discrimination.’
This artificial intelligence (AI) aspect, including for sections of the Black workforce community, is and remains a point of keen interest, not least with reference to fitness to practise processes. Unfortunately, to the best of my knowledge, there has still been no valid explanation offered by SWE for the facts of its findings, despite these being borne out by its repeated, consistent and rigorous data analysis. Having responded to the regulator’s important request to submit ethnicity data, practitioners have seen findings that show systematic demographic discrimination repeatedly impacting particular groups.
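For readers unfamiliar with how such demographic disparities are typically quantified, here is a minimal sketch using a relative-rate ratio. The figures below are entirely hypothetical for illustration and are not Social Work England’s data:

```python
# Hypothetical figures for illustration only; not Social Work England data.
# A relative-rate ratio compares fitness-to-practise referral rates between groups.

referrals = {"Group A": 120, "Group B": 45}          # referrals received per group
registrants = {"Group A": 10_000, "Group B": 8_000}  # registered workforce per group

rates = {g: referrals[g] / registrants[g] for g in referrals}
disparity_ratio = rates["Group A"] / rates["Group B"]

for group, rate in rates.items():
    print(f"{group}: {rate:.2%} referral rate")
print(f"Disparity ratio: {disparity_ratio:.2f}")  # above 1 means Group A is over-represented
```

A ratio that stays well above 1 year after year is the kind of repeated, consistent finding referred to above.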
In the most simplistic terms, the announcement of this new AI probe seems a case of the ‘tail wagging the dog’. It is back to front, delayed and does not appear to be a proactive initiative on the part of SWE, when you consider the historical work already reported to and commissioned by its oversight body, the Professional Standards Authority (PSA), five years ago. That work addressed the ethical aspects, data and proposed usage of AI going back even further, to 2019.
The links below give separate context on AI in the healthcare professions and on experiences of the impact of discrimination; in the latter, the value of reported lived experiences also counts as valid data informing strategy and boardroom decisions in the health and care professions.
Please see:
https://www.professionalstandards.org.uk/news-and-updates/news/artificial-intelligence-what-it-and-what-impact-will-it-have-professional
and associated:
https://www.professionalstandards.org.uk/news-and-updates/news/racial-discrimination-healthcare-regulation-and-oversight-importance
Additionally, I sense this article (4.2.25) is also a way of SWE preparing to advance reasons for the discrimination (in relation to fitness to practise processes), namely that it is because of, or a contribution of, AI. While not pre-empting that, I have commented elsewhere on the moral and ethical implications when and if we rely on AI for accountability. That is, you cannot hold ‘machines’ or ‘robots’, no matter how sophisticated, developed, advanced or complex, accountable. Not when attitudes, behaviours and decisions are made by people.
Social Work England needs to focus on meeting its own professional standards and gaining a basic comprehension of the profession before even thinking about AI.
That said, the probe into AI announced by SWE is, in other respects, welcome, even if possibly reactionary.
With regard to the fitness to practise process and experiences, I am perhaps thinking ‘old school’, from a social science (not computer science) perspective, about the potential issues and learning arising from the discriminatory findings.
In relation to AI, the utilisation of ethnic diversity data and the communities shown to be most affected, the outcome does not seem to be because of, nor should it be interpreted as, a ‘failure’ of AI usage or of elements of its architecture. Instead, it is perhaps more the opposite: a ‘success’ statement being disturbingly played out and reported as achieved. This applies in particular to the deeper, interpretative use of protected and private data, whether by ‘big data’ machines or, as in this matter, by regulatory professional bodies urgently requesting sensitively needed ethnicity and diversity data to inform ‘gaps’ for best organisational outcomes within unconsciously or consciously biased frameworks.
The findings from the Professional Standards Authority and other bodies in the regulatory landscape do not seem to show ‘failures’ or a decline in reported discrimination, if and when asked, within the health and care professions and perhaps others. I am not aware of reported declining trajectories in such experiences; instead, the opposite, or the status quo, is observed and normalised into daily operations unless proactive measures are in place.
I am perhaps of the old school, as said. I am mindful of, and will continue to ponder respectfully, the important comment of the author above:
“…Unless AI is confined to highly standardised, deeply harmonised and routine functions, it will inevitably fail…”
All the above appear to be written by academics with the usual academic concerns. Let me tell you: this is the first chance (ever) to actually do what we all know on the front line needs to happen, getting social workers away from computers writing reports AI can do in seconds and out working with and safeguarding children. What worries me is not the academic nonsense but what will really happen: in a rush to save money, LAs will get rid of the social workers, downgrading to early help workers, because AI can do the bits they get from the social workers, so they will see it anyway.
I note the comments, if they are possibly in reference to my own responses so far. However, I am not an academic. I suspect many frontline social workers (whether in case-holding practice or not) are not academics either. Other contributors sharing their knowledge of AI on this article, with an interest in assisting the profession, may also not be academics. We all do, however, make the necessary study and research reflections.
The issues of AI in social work are here, as you rightly and correctly say. They are also, as you say, impacting day-to-day social work practice and getting social workers away from computers doing what AI can do in seconds. It seems AI also deeply affects the human approach towards engagement with service users. This includes the wider workforce, CPD learning, education and the application of professional standards. Ironically, I have wondered why Social Work England sits within the UK government’s Department for Education (DfE) and not, strangely, the Department for Health and Social Care (DHSC). Maybe it is because of education funding and the dichotomy between services relating to children and families and those for adults.
Your comments about costs and the predicted conditions for social workers under the impact of the AI roll-out, and what you suspect will really happen, seem also to be shared elsewhere:
https://www.mysocialworknews.com/article/ai-can-help-write-casenotes-quicker-but-can-it-do-a-home-visit
The point remains, and it relates to the fitness to practise processes. It also connects with the application and use of the diversity data that registrants were asked to submit. To date, that data has shown an official outcome by Social Work England pointing towards discriminatory practices. The outcome is deeply linked, and possibly ‘harmonised’, across all the stages, affecting both the children’s and adults’ workforce. What those affected still do not have is official acceptance of the findings, nor of the role of AI in such processes (if there can indeed be any contribution).