28/12/2023

Maintain customer trust in AI

You’ve got your data sorted, a great team of people in place, and a strategic partner you can trust to help with the technology. You’re almost ready to take the AI leap, but how do you ensure you’re implementing AI in an ethical and trusted manner?

We’ve all seen the doomsday articles and clickbait headlines detailing the pitfalls of AI, but every new technology has brought similar concerns.

While we were all addicted to emailing from our BlackBerrys, everyone was sure the iPhone would fail. Why would we need internet access on our phones or GPS in our pockets? Yet we adopted and adapted, and now many of us would struggle to function without these devices.

Legal and moral responsibilities

When I first started in tech, I spent most of my time explaining cloud-based multi-tenant architecture to organisations interested in adopting cloud CRM. That was around 2010, when Siebel was the incumbent on-premise CRM; yet within a few years, how software as a service was partitioned on a database was rarely discussed.

The industry has evolved, as has the conversation regarding technology and its adoption.

With the introduction of GDPR and further EU and global regulations coming down the line, such as the AI Act, the NIS2 Directive and the EU Digital Services Act, we are finding that while our customers are excited about adopting AI, they are looking to their technology partners for guidance. Ethics, cybersecurity and data protection now come up in almost every conversation with customers and prospective clients.

While data privacy and cybersecurity are top of mind for IT teams, the conversation should be broadened to include other organisational stakeholders, such as your CMO, CRO or Head of Legal.

AI technology is evolving rapidly, and it can be difficult to keep up. We’ve been working with customers to make sure they have the right people, processes and technology in place so that any technical solution delivers ROI across all use cases, whether through sales acceleration, improved customer service or more accurate analytics predictions.

AI technology will transform how we do business and interact in society. Still, before adopting any solution, we work with our customers to put a plan in place to address any ethical or trust concerns.

Key pillars of AI ethics

Before diving in head first with any AI project, we work with our customers to address the following frequently raised concerns.

Step 1: Data privacy considerations

Data privacy regulations are at the forefront of any AI technology implementation.

Large Language Models typically require vast amounts of data to make decisions, often including sensitive personal information (PII). How organisations manage this data is paramount.

Privacy concerns relate not only to the unauthorised use of data but also to issues of consent, data retention, data transfers outside the EU, and the potential for AI to uncover information that individuals may not have willingly disclosed.
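One practical safeguard is to redact obvious identifiers before any text reaches a model. Here is a minimal sketch in Python; the patterns and the redact_pii helper are illustrative assumptions, not a production-grade PII detector:

```python
import re

# Illustrative patterns only; a real deployment needs a proper PII
# detection tool and human review, not just a pair of regexes.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact_pii(text: str) -> str:
    """Replace matched PII with a labelled placeholder before the
    text is logged, stored, or sent to a language model."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact_pii("Contact Anna on anna@example.com or +353 87 123 4567"))
# -> Contact Anna on [EMAIL] or [PHONE]
```

Redaction at the boundary like this also simplifies data-retention and EU data-transfer questions, since the model provider never receives the raw identifiers.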

Political agreement on the EU framework for AI was reached in December 2023, and the AI Act is expected to be enacted within approximately two years. The framework sets out obligations for high-risk AI systems, and it is no surprise that industries such as healthcare, financial services and the public sector should be more cautious.

I suggest reading the framework before the Act is enforced and preparing now.

Step 2: Human rights obligations

Human rights treaties provide a critical framework for AI’s ethical development and deployment. Numerous international treaties and the European Convention on Human Rights recognise an individual’s right to privacy.

AI systems must be designed and operated in a manner that respects these rights. That also means implementing adequate safeguards against intrusive surveillance and ensuring that AI decision-making does not result in discriminatory outcomes.

Salesforce has recognised its obligations in this regard through its Office of Ethical and Humane Use of Technology, which works across product, law, policy and ethics.

Step 3: Transparency in business

Using AI responsibly in business involves data protection impact assessments (DPIAs) to understand what data will be used and how it may affect your customers or employees.

Transparency is key, and businesses must be clear about what data is being collected, how it is being used, and what the implications are.

We’ve been here before with marketing consent and preference centres. A business using AI should be equally transparent about the purposes for which its algorithms and data are used.

Allowing customers to exercise choice is key, and a strategy around preferences and data-processing purposes will help an organisation maintain customer trust.
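In practice, that means checking a customer's recorded preferences before any AI processing runs. A minimal sketch, assuming a hypothetical ConsentRecord structure fed by a preference centre:

```python
from dataclasses import dataclass, field

@dataclass
class ConsentRecord:
    """Hypothetical preference-centre record: the processing
    purposes a customer has explicitly opted into."""
    customer_id: str
    purposes: set[str] = field(default_factory=set)

def may_process(record: ConsentRecord, purpose: str) -> bool:
    # Default to "no": the absence of a recorded opt-in means
    # the data is not processed for that purpose.
    return purpose in record.purposes

record = ConsentRecord("cust-42", {"marketing_email"})
print(may_process(record, "ai_personalisation"))  # False: no opt-in recorded
```

The design choice worth copying is the default: purposes are opt-in, so a missing preference blocks processing rather than permitting it.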

Step 4: Employees and internal teams

Employees and internal teams play a critical role in the responsible use of AI. They must be trained to understand the ethical implications of their work and empowered to raise concerns about potential privacy violations or other ethical issues.

Ethical guidelines should govern the development and use of AI within your organisation. Just as GDPR requires that any employee who handles PII receives adequate training (we’re all DPOs at the end of the day), any data touched by AI or a Large Language Model should be treated in the same way. Don’t forget your third-party data processors, too.

Conclusion

The ethical use of AI is an ongoing concern, requiring vigilance and commitment from all organisational stakeholders. As AI technology continues to evolve, so must our approach, to ensure that AI and Large Language Models are used in ways that respect privacy.

Your employees and various internal teams must all be part of the conversation, working together to ensure AI is used responsibly and ethically. 

Lesley Bell

Lead Consultant
