Trust Is a Process, Not a Policy
When reviewing a supplier's approach to audience data and AI, the question that matters most is not what the supplier believes about trust. It is whether the supplier can show exactly how trust is maintained day to day, at the process level, in the workflows that operational teams actually run.
That distinction matters because a policy document describes intent. A process description shows what happens in practice, who is responsible for each step, what controls are in place, and how the organisation knows when something has gone wrong. The two are not alternatives: most organisations have both. The difference is which one carries the weight of verification.
Consent management: what it means in practice
Consent management means tracking what each audience member has agreed to, at the level of each specific use of their data, and keeping that record accurate as services and data uses change over time.
A 2022 ICO survey found that around 90% of UK consumers expressed concern about data collection and use without their knowledge. Consent management is how organisations address that concern operationally, not just in policy. The most common gap is treating it as a one-time event: something collected at registration and not revisited unless a compliance deadline forces it. In practice, consent needs to be managed as a live record that reflects each new service feature, data partnership, or change to personalisation logic.
In a well-run consent management process, this works as follows. Each data use is defined and documented. Each consent record is linked to specific data uses, not to a generic permission. When a new use is introduced, the system identifies which audience members have consented to it and which have not, without requiring a blanket reset that creates friction for everyone. Consent changes, whether from a member updating their preferences or from a service update triggering a new consent requirement, are logged with a timestamp and stored against the audience record.
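The steps above can be sketched as a small data structure. This is a minimal illustration, not a real schema: the purpose labels, member IDs and the `ConsentLedger` class are all hypothetical, but the shape shows what "a live record linked to specific data uses, with timestamped changes" means in code.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentEvent:
    purpose: str         # one specific, documented data use, e.g. "email_recommendations"
    granted: bool        # True = consent given, False = withdrawn
    timestamp: datetime  # when the change was recorded

@dataclass
class ConsentLedger:
    # member_id -> full history of consent events; entries are appended, never overwritten
    events: dict = field(default_factory=dict)

    def record(self, member_id: str, purpose: str, granted: bool) -> None:
        """Log a consent change (from the member or a service update) with a timestamp."""
        self.events.setdefault(member_id, []).append(
            ConsentEvent(purpose, granted, datetime.now(timezone.utc))
        )

    def has_consented(self, member_id: str, purpose: str) -> bool:
        """Current state is the most recent event for that specific purpose."""
        relevant = [e for e in self.events.get(member_id, []) if e.purpose == purpose]
        return relevant[-1].granted if relevant else False

    def members_without_consent(self, purpose: str) -> list:
        """For a newly introduced data use: who still needs to be asked,
        without a blanket reset for members who have already consented."""
        return [m for m in self.events if not self.has_consented(m, purpose)]
```

Because every change is appended rather than overwritten, any record can be traced back to exactly what was agreed, when, and for what purpose, which is the property the auditability argument below depends on.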
This level of operational detail is what makes consent management auditable. It means a reviewer, an auditor, or a regulatory inspector can look at any consent record and trace exactly what was agreed, when, and for what purpose.
Data handling: building an audit trail
An audit trail in data handling means that for every decision made using audience data, there is a record of what data was used, what the decision was, who or what system made it, and when. This record does not need to be complex. It needs to be complete and retrievable.
The ICO expanded its national compliance check to the UK’s top 1,000 websites in 2025, with incomplete records of data use identified as a recurring issue. The practical steps that create a proper audit trail are: version-controlled documentation of data processing activities; exception logs that capture unexpected outputs and what action was taken; and access logs that show which team or system accessed which data at which point.
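A decision record of this kind can be very simple. The sketch below is illustrative, with hypothetical field names; the point is that each entry captures all four elements named above (data used, decision, actor, time) and that entries can be retrieved per audience member rather than reconstructed retrospectively.

```python
from datetime import datetime, timezone

def log_decision(log: list, member_id: str, inputs: list, decision: str, actor: str) -> dict:
    """Append one complete decision record: what data was used, what was
    decided, who or what system made it, and when."""
    entry = {
        "member_id": member_id,
        "inputs": inputs,        # types of data used, e.g. ["reading_history"]
        "decision": decision,    # e.g. "recommended:article_123"
        "actor": actor,          # the team or system component responsible
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    log.append(entry)
    return entry

def trace(log: list, member_id: str) -> list:
    """Retrieve every logged decision for one audience member,
    e.g. in response to a query or an external review."""
    return [e for e in log if e["member_id"] == member_id]
```

In practice this would sit behind a database or log pipeline rather than a Python list, but the retrievability requirement is the same: given any decision, the full record comes back.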
For operational and delivery teams, the value of this discipline is not primarily about compliance, though it supports compliance. It is about being able to identify where a problem originated when something goes wrong, and being able to demonstrate to an external reviewer that the organisation's data practices are what it says they are. Organisations that have built this kind of audit trail into their processes, rather than reconstructing it retrospectively when asked, are the ones that can answer specific questions with specific evidence.
How personalisation decisions are made and reviewed
Personalisation means using data about an audience member's behaviour, preferences or history to change what content or service they are shown. The question that operational and delivery reviewers need to be able to answer is: how is that decision made, who can change the logic behind it, and what happens when the output looks wrong?
In a well-designed personalisation process, the logic is documented in plain terms that a non-specialist can understand. The inputs, meaning the types of data used to make a recommendation, are agreed and stable. Changes to the logic go through a review process that includes sign-off from editorial, governance or product teams, not only from the technical team that built it. Outputs are monitored at the aggregate level so that patterns, such as certain audience groups consistently not receiving recommendations for particular content types, are visible and can be acted on.
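Aggregate monitoring of the kind described above can be sketched in a few lines. This is a hypothetical example: the segment and content-type labels are placeholders, and a real system would read from served-recommendation logs. It shows how a pattern like "a segment consistently receiving no recommendations for a content type" becomes visible.

```python
from collections import Counter

def find_gaps(recommendations, segments, content_types):
    """Flag (segment, content_type) pairs that received zero recommendations.

    `recommendations` is an iterable of (segment, content_type) pairs,
    one per recommendation the system actually served.
    """
    counts = Counter(recommendations)
    return [
        (seg, ct)
        for seg in segments
        for ct in content_types
        if counts[(seg, ct)] == 0
    ]
```

Run against each period's output, a non-empty result is a prompt for the editorial, governance or product owners to check whether the gap reflects intended logic or a fault.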
None of this requires the reviewer to understand machine learning. It requires the reviewer to be able to see that there is a structured process behind the decisions the system makes, that the process has named owners, and that there is a mechanism for identifying and correcting outputs that do not match the intended behaviour.
Where human oversight sits in the workflow
Human oversight in an AI-assisted process does not mean a person reviews every output. It means the process has been designed so that the points where human judgment adds the most value have been identified in advance, and a person is involved at those points as part of the standard workflow, not only when something has already gone wrong.
For an audience data or personalisation workflow, the points where human oversight typically matters most are: when the logic governing recommendations is set or changed; when an output falls outside a defined threshold, for example an unusual pattern in how content is being distributed across audience segments; and when a complaint or query from an audience member requires a review of what data was used and why.
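The second oversight point, an output falling outside a defined threshold, is the most mechanical of the three and can be sketched directly. The metric and tolerance below are illustrative assumptions, not a prescribed standard: the idea is that deviations beyond an agreed threshold are routed to a named human reviewer as part of the standard workflow.

```python
def route_for_review(observed_share: dict, expected_share: dict, tolerance: float = 0.2) -> list:
    """Queue segments whose share of distributed content deviates from the
    documented intent by more than the agreed tolerance.

    `observed_share` is the fraction of recommendations each segment actually
    received; `expected_share` is what the documented logic intends.
    """
    review_queue = []
    for segment, expected in expected_share.items():
        observed = observed_share.get(segment, 0.0)
        if abs(observed - expected) > tolerance:
            # Outside threshold: a person reviews, as standard workflow
            review_queue.append(
                {"segment": segment, "observed": observed, "expected": expected}
            )
    return review_queue
```

The design choice that matters here is that the threshold is defined in advance and the queue goes to someone with the authority to act, rather than review happening only after a complaint.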
The role of these people is not to second-guess the system on every output. It is to apply judgment at the points where the system is most likely to need correction, and to maintain the accountability that makes the process defensible. This requires people who understand the service and the data, and who have the authority to act on what they find.
Trust is the product of a process that is documented, owned, monitored and improvable. Demonstrating this at the workflow level is what gives reviewers the specificity they need to score with confidence. Speak to our team about building a supplier assurance framework that holds up under scrutiny.