We were thrilled to partner with Oxford University to run a roundtable discussion on the ethics of AI in social care. This event brought together 21 care workers and Oxford University researchers to begin drafting guidance for a sector that is urgently calling for support in using AI ethically and safely. Recognising the insight and knowledge that frontline workers bring to this conversation, we compensated them for their time and expertise, as well as arranging accommodation and dinner and covering expenses.
From the outset, it was evident that our discussion needed to address not only AI’s benefits and risks but also the broader context in which care is provided. The roundtable highlighted that while AI has the potential to enhance the efficiency and quality of care, its implementation must be carefully considered so that it supports rather than replaces the invaluable human element of caregiving. AI should be viewed as a tool to assist, not substitute for, human judgment: it can provide ideas, but these should be verified with the person drawing on social care, their family, and medical professionals as appropriate. Understanding AI’s technical limitations and potential biases is crucial for its responsible use. Care workers were positive about the role AI could play in freeing up time. For instance, AI could assist with administrative tasks, scheduling, and routine monitoring, allowing care workers more time to engage meaningfully with those they support.
One of the concerns raised by care workers was data protection and confidentiality. Even with anonymised data, there is a risk of identifying people, particularly individuals with rare conditions. Once data has been entered into an AI system, it may be impossible to retrieve or delete, raising concerns about privacy and control. Stringent data protection measures must therefore be a cornerstone of any AI ethics policy. The group also discussed informed consent as a two-way process: care recipients should be informed if AI tools are being used in their care, and care staff should be aware if they are being recorded by devices like Alexa.
Another critical area discussed was the line between personal and company liability when using AI in social care. Care workers thought there should be shared responsibility, but that procedures and policies must be clear in order to protect staff who follow guidelines. This clarity is essential to ensure that care workers feel supported and are not unfairly held accountable for systemic problems.
Care workers emphasised the importance of paid training on AI usage. This training should be inclusive, covering all staff, including those on zero-hours contracts. A base level of training for all staff is essential, ensuring everyone has a foundational understanding of AI; the group felt that incorporating basic AI training into care certification, and standardising it across employers, would streamline the integration of AI into social care. They also wanted additional levels of training to be provided depending on each role and how AI will be used within it. Training needs to be equitable, acknowledging that some workers may require more training than others. Additionally, care workers should be provided with work devices for using AI tools, as relying on personal devices can raise privacy and data security concerns.
Overall, there was agreement across the group that AI holds great promise for enhancing social care, but its ethical implementation requires careful consideration. Developing comprehensive ethical guidelines, as initiated by The CWC and Oxford University, is a crucial step toward ensuring that AI benefits the sector responsibly and equitably.
We hear from thousands of care workers every year and are working to provide opportunities for their voices to be heard in policy, research, and practice discussions. If you would like to collaborate on a roundtable with care workers, please get in touch.