Privacy vs Security: AI in Prisons
Hanne Juncher, Director, Directorate of Security, Integrity and Rule of Law, Directorate General Human Rights and Rule of Law, Council of Europe, talks AI in prisons
Please give us a brief insight into the Council of Europe.
The Council of Europe (CoE) was founded in 1949, after the Second World War, to uphold peace and to promote and protect democracy, human rights and the rule of law across what are now its 46 member states. Since the 1950 European Convention on Human Rights it has developed numerous conventions, the most recent being the Framework Convention on Artificial Intelligence (AI) and Human Rights, Democracy and the Rule of Law, adopted in May of this year.

On 9 October, the CoE’s governing body, the Committee of Ministers, adopted Recommendation (2024)5 on the ethical and organisational aspects of the use of AI and related digital technologies by prison and probation services. The Recommendation calls on governments to ensure that prison and probation services use AI and related digital technologies legitimately and proportionately, and only if they contribute to the good management of these services and to the rehabilitation of offenders. It was elaborated by the Council for Penological Cooperation (PC-CP), a subcommittee of the CoE’s European Committee on Crime Problems (CDPC).
How can AI alleviate security and safety concerns in prison environments?
AI and related digital technologies can relieve staff of routine repetitive tasks such as opening and closing doors, managing schedules, or monitoring movements and behaviour. These systems can enhance safety by improving risk management and crisis response, for example by detecting signs of possible aggression, suicide or self-harm attempts. Predictive algorithms, for instance, can flag unusual behaviours, potential conflicts and high-risk areas, helping staff prevent violence and maintain a secure environment. However, as AI algorithms can easily be biased (replicating the same mistakes humans make), it is essential to strive for bias-free algorithms. Furthermore, AI systems should be transparent and explainable, meaning that their reasoning should be clear and understandable to human observers and decision-makers, so that it can be challenged if necessary.
AI should always be developed and used under human control and must remain human-centred, particularly when it comes to balancing safety and security with the social inclusion of offenders. The human element should never be lost, especially in positive and motivating interactions between staff and offenders, as regular positive human contact is key to rehabilitation and to reducing recidivism.
What are the key legal safeguards in place to ensure AI technologies used in prisons and by probation services respect the privacy and dignity of detainees, staff, and probationers?
The CoE’s Recommendation emphasises that AI use in prison and probation services must comply with human rights and ethical principles (transparency, human oversight, necessity, proportionality) and must avoid any negative effects on the privacy, dignity and well-being of detainees, probationers or staff. AI must never be used to cause intentional physical or mental harm. Furthermore, AI technologies, including those used in electronic monitoring or biometric recognition, should be used proportionately and under human oversight, ensuring the protection of data and private life, and their focus should remain on reintegrating offenders into society.
When AI is used in the selection and recruitment of staff, prison and probation services should be involved in designing tools that meet the specific needs of staff and contribute to their professional development. AI should support, not replace, human judgment in the selection and training of staff.