Words from Hanwha Techwin Europe’s Uri Guterman
Video surveillance technology must be used responsibly, and regulating technology such as live facial recognition is important. But in appraising these technologies, we must be careful not to throw away AI and its benefits with the regulatory bathwater.
Throughout history, technology has been put to both good and bad use. There is no better example of this than the personal computer. Connected to the internet, the humble PC opened up the world to its users, making communications with friends and family on the other side of the world cheap and easy.
In the wrong hands, however, the same personal computer allowed hackers to access inadequately secured corporate and government networks, steal sensitive information and sell it to the highest bidder.
It is right that we continually assess the technology available to us and closely consider both the value it brings and its potential for misuse. At present, there is a specific concern across society about technology that, if used irresponsibly, could infringe on our privacy and undermine civil liberties. This includes a number of technologies that rely on artificial intelligence (AI), most notably live facial recognition (LFR). A number of police forces across Europe have deployed LFR, but a series of legal challenges has emerged around its use.
LFR is one of many examples of AI at work in video technology. Employed in video systems, AI can improve image quality, remove noise and eliminate false alarms, ensuring that operators only focus on the incidents and events that matter. Used this way, AI saves time, reduces cost, boosts efficiency and enhances security.
At the same time, the European Commission is proposing the first-ever legal framework on AI to provide AI developers, deployers and users with clear requirements and obligations regarding specific uses of AI. Central to this will be the identification of different levels of risk associated with a particular application of AI.
Importantly, the Commission also hopes that regulation will “strengthen uptake, investment and innovation in AI across the EU.” A balance must be struck between regulating the use of technology so that it does not trample on individual rights, and encouraging the innovation that gives rise to more and more applications where AI can deliver real benefits.
Responsible manufacturers like Hanwha Techwin are used to striking this balance. To be clear, Hanwha doesn’t offer a live facial recognition solution in Europe. And, as a Korean, NDAA-compliant manufacturer with complete control over our supply chain and a strong track record in cyber security, we are firm advocates for the responsible use of video surveillance technology.
We support proper regulation and a public debate around the responsible use of technology such as AI. At the same time, however, it is important to recognise and encourage the valuable role AI plays in video technology, helping to keep our public spaces operational, safe and efficient.
For example, Hanwha is pioneering the use of AI ‘at the edge’ in video cameras to solve typical challenges that police forces face when investigating crime. Crucially, this is achieved without encroaching on civil liberties.
AI can help police officers locate a missing person quickly by examining masses of video footage and classifying images to extract only those that include attributes such as the colour of clothing worn by the individual, whether they were wearing a hat or carrying a bag, or even their direction of travel. What might previously have taken police officers days to achieve can now be done in hours, freeing up officers to undertake more detailed investigative work instead of manually scanning video footage. AI makes this possible.
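To make this kind of attribute-based search concrete, here is a minimal, purely illustrative sketch in Python. It is not Hanwha’s implementation; it simply assumes that edge analytics have already produced per-detection metadata (clothing colour, hat, bag, direction) that can be filtered after the fact. All field and function names are hypothetical.

```python
# Illustrative sketch only -- not Hanwha's implementation.
# Assumes AI analytics on each camera have already produced per-detection
# attribute metadata that can be searched retrospectively.

from dataclasses import dataclass
from typing import List


@dataclass
class Detection:
    camera_id: str
    timestamp: float          # seconds from the start of the footage
    clothing_colour: str      # e.g. "red", "blue"
    wearing_hat: bool
    carrying_bag: bool
    direction: str            # e.g. "north", "south"


def match_missing_person(detections: List[Detection],
                         colour: str,
                         hat: bool,
                         bag: bool) -> List[Detection]:
    """Return only the detections whose attributes match the description."""
    return [d for d in detections
            if d.clothing_colour == colour
            and d.wearing_hat == hat
            and d.carrying_bag == bag]


# Example: find sightings of someone in red clothing, wearing a hat, no bag.
footage = [
    Detection("cam-01", 120.5, "red", True, False, "north"),
    Detection("cam-02", 340.0, "blue", False, True, "south"),
]
for sighting in match_missing_person(footage, "red", True, False):
    print(sighting.camera_id, sighting.timestamp, sighting.direction)
```

The point of the sketch is that the search operates entirely on object attributes; no facial features are stored or compared.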
Crucially, this use of AI does not raise the same privacy concerns, as these systems are trained to recognise objects and attributes rather than to identify individuals through their facial features.
What’s more, AI is even called on to protect the identity of members of the public captured on this video footage, redacting the faces of everyone except the missing person and rendering them unrecognisable. Only police and court personnel with the appropriate permissions may access the unredacted footage.
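Again for illustration only, the sketch below shows one simple way such redaction could work, assuming a face detector has already supplied bounding boxes and one box has been flagged as the missing person; every other face is pixelated beyond recognition before the footage is shared. This is a generic technique, not Hanwha’s redaction pipeline.

```python
# Illustrative sketch only -- not Hanwha's redaction pipeline.
# Assumes face bounding boxes (x, y, w, h) are already available and that
# one of them has been flagged as the missing person.

import numpy as np


def pixelate(frame: np.ndarray, box: tuple, block: int = 16) -> None:
    """Coarsely pixelate the region (x, y, w, h) of an H x W x 3 frame in place."""
    x, y, w, h = box
    region = frame[y:y + h, x:x + w]
    for ry in range(0, h, block):
        for rx in range(0, w, block):
            patch = region[ry:ry + block, rx:rx + block]
            patch[:] = patch.mean(axis=(0, 1))   # replace patch with its average colour


def redact_frame(frame: np.ndarray, face_boxes: list, keep_index: int) -> np.ndarray:
    """Pixelate every detected face except the one at keep_index."""
    out = frame.copy()
    for i, box in enumerate(face_boxes):
        if i != keep_index:
            pixelate(out, box)
    return out


# Example on a dummy frame with two hypothetical face boxes.
frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
boxes = [(100, 80, 64, 64), (400, 200, 64, 64)]
redacted = redact_frame(frame, boxes, keep_index=0)   # only the first face stays visible
```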
In light of the remarkable things that AI now makes possible, it would be a shame if, in the quite proper assessment of technologies that can threaten privacy, AI and its benefits were thrown away with the regulatory bathwater.
Media contact
Rebecca Morpeth Spayne,
Editor, Security Portfolio
Tel: +44 (0) 1622 823 922