Ensuring integrity in AI in 2022

Artificial intelligence (AI) has become ubiquitous in almost every part of our lives, and society is challenged to keep pace with its advancements in the face of the Fourth Industrial Revolution. It holds exciting potential for many aspects of our lives, from improving the safety and efficiency of our cities, and guiding autonomous vehicles, to understanding consumer behaviour in a store, and supporting public health measures.

But with this potential comes risk. That's why more organisations are now taking a hard look at how AI might worsen societal inequality and bias, and how to combat this by developing ethical and responsible AI. Indeed, as part of the wider Hanwha Group, we want to elevate quality of life with our innovations and solutions. AI plays a key role in this, as long as it's developed and used in a responsible way.

Recently, Gartner identified 'smarter, responsible and scalable AI' as the number one market trend for 2021, and it is a trend that will need to continue in 2022, with public trust remaining at significantly low levels: almost two-thirds of people are inclined to distrust organisations.

The key to addressing this is building integrity into your AI strategy and ensuring all products that use AI do so in an ethical and responsible way. Equally important is that any partner or vendor your organisation aligns itself with shares the same values and sense of responsibility to do the right thing.

Communicating and collaborating

Edelman CEO Richard Edelman recommends breaking the current cycle of distrust by uniting people on common-ground issues and making clear progress on areas of concern. Additionally, he advises institutions to provide factual information that doesn't rely on outrage, fear, or clickbait, and that instead informs and educates on major societal issues.

Therein lies a clear opportunity to build greater integrity into AI. Trust relies on everyone being on the same page, with the same access to the facts, and an ability to relay their thoughts and feedback to product creators and business leaders.

In practice, that means communicating the use and benefits of AI to stakeholders, including customers, partners, investors, and employees. However, research has shown that people perceive the 'threat' of AI differently based on factors such as their age, gender, and prior subject knowledge. The same study found a huge gap between laypeople's perception and reality when it comes to AI, with many AI applications (like crime prevention and AI art) still requiring significant explanation.

Therefore, when communicating any new AI solution, it's worth considering the different knowledge levels that need to be accommodated. Better still, look at the differing priorities, pain points, and concerns of each audience group and tailor your message accordingly. This ensures everyone is coming to the table with the same basic level of knowledge about an AI use case.


Media contact

Rebecca Morpeth Spayne,
Editor, Security Portfolio
Tel: +44 (0) 1622 823 922
Email: editor@securitybuyer.com
