The UK Digital Identity Trust Framework: What is it and why does it matter?

Sophie Bennani-Taylor
Published in Digital Diplomacy · 10 min read · Mar 18, 2021


Photo by Lukenn Sabellano on Unsplash

Many people in the Digital Identity space are talking about the UK’s Digital Identity and Attributes Trust Framework, and rightly so; as the market grows and fragments, the framework marks an important step in the UK Government’s commitment to shaping how digital identities are developed in the UK.

So, what is the framework?

The UK Digital Identity and Attributes Trust Framework is a document produced by the Government to outline how organisations that create and use identity services should behave. Those who follow the rules of the Trust Framework will be granted a ‘trust mark’, which will communicate the organisation’s trustworthiness to the public and to other organisations. Following consultation with a range of public and private organisations last year, the Government has produced a draft (or ‘Alpha’) version of the document, which it has invited organisations and citizens to comment on.

The Trust Framework aims to describe the high-level principles of digital identity, including creating shared definitions of terms such as ‘attribute’ and ‘identity’. This is a welcome step forward in a fragmented landscape which can include anything from a Facebook login to a digital passport — depending on who you ask. The UK Government describes digital identities as “a digital representation of a person [which] enables them to prove who they are during interactions and transactions”. Unlike a Facebook login, it must include attributes which tie an identity to a real person, with evidence that shows they exist and are who they say they are.

Why does it matter?

Digital identity solutions are inevitably intertwined with ethical challenges: depending on how they’re designed, they can either exacerbate or reduce issues like fraud, data loss and digital exclusion. So, frameworks governing the use of digital identity solutions and whether they should be trusted are essential to protecting the public.

While this draft Trust Framework is helpful in providing a high-level introduction to digital identity and the areas of consideration for service providers, it lacks detail regarding how principles (such as privacy, interoperability and inclusion) can be integrated into a solution. As a result, it is difficult to understand how an organisation may be certified against these requirements. This is central to building trust between service providers and users: a principle which is integral to the success of the digital identity market. While recommendations may give organisations scope to start thinking about the importance of these principles, providing rules would create a means against which organisations can be held accountable for protecting the rights of users.

“Digital identity solutions are inevitably intertwined with ethical challenges: depending on how they’re designed, they can either exacerbate or reduce issues like fraud, data loss and digital exclusion.”

There are many principles that underpin the development of ethical and trustworthy Digital Identity systems. Many reputable organisations in this space (including the World Bank, the Alan Turing Institute and GDS) highlight the importance of principles such as inclusion, privacy, resilience, accountability, and accessibility. However, in order to explore the draft Trust Framework without writing a thesis(!), I will only focus on three. These are themes which I was happy to see included in the Government’s Trust Framework, but which could benefit from more rigorous and practical advice.

1. Inclusion

Any Digital Identity solution risks excluding vulnerable and/or minoritised groups, so it’s essential that these solutions are designed to encompass the needs of all users. This is already recognised by the Government, which dedicates section 2.3 of the paper to the topic. The draft framework includes requirements to comply with the Equality Act 2010, noting how technologies can exclude specific user groups, especially if they’ve only been tested with a particular demographic. However, the paper needs to go further. While the Equality Act forms the basis of anti-discrimination governance in the UK, reports such as the CDEI’s Landscape Summary on Bias in Algorithmic Decision-making demonstrate that the Act does not sufficiently cover all manifestations of algorithmic bias. Perhaps now is the time for the Government to take action on combatting algorithmic bias by introducing algorithmic assessments and asking organisations to transparently communicate the results of their audits. Furthermore, the Equality Act doesn’t recognise the rights of individuals who don’t identify as male or female (for example, those who identify as non-binary). As a result, the Act is not sufficient for ensuring that digital identity solutions don’t discriminate on the basis of gender.

Figure 1: World Bank

The Framework also needs to be explicit about which groups might be disproportionately disadvantaged by these technologies, drawing on best practice from organisations like the World Bank (see figure 1). This would help organisations to ensure that they are actively mitigating discrimination against particular groups, and offer opportunities for organisations to communicate what they are doing in this space. This does not necessarily require “find[ing] out as much as you can about the types of people that will use [the service]” as outlined in the framework — which contradicts the principle of data minimisation. Instead, it requires building on new and existing research in this area, and speaking to users to find out the right information about them: their reliance on services, how they currently access them, what devices they have access to and their ability to use them, and any barriers they face to accessing services.

This takes us to a final requirement on inclusion in the Trust Framework: the ‘annual exclusion report’:

“All identity service providers must submit an exclusion report to the governing body every year. The governing body will tell you exactly what information should go in the report. It will at a minimum need to say which demographics have been, or are likely to be, excluded from using your product or service. You must explain why this has happened or could happen.”

While this requirement is a welcome step in asking organisations to actively pre-empt areas of exclusion and mitigate against them — as well as holding organisations accountable for the result of this work — it also means that organisations will need to collect demographic data on their users that may not otherwise be needed. Unless the framework describes which categories of exclusion will be measured, it is difficult for organisations to start putting changes into place. Demographic data should not be collected without a reasonable purpose for doing so, so it’s essential that the Government ties its exclusion report to clearly defined areas of discrimination. This is not a new area of exploration, and there is plenty of pre-existing work that the Government can draw on to highlight how discrimination is embedded into technology. Some notable examples include Joy Buolamwini’s Gender Shades project, which shows how facial recognition technologies produce dramatically poorer results on the faces of darker-skinned women, or Cathy O’Neil’s book Weapons of Math Destruction, which highlights many examples of algorithmic bias, including how algorithms can be weaponised to discriminate on the basis of socio-economic background.

“Perhaps now is the time for the Government to take action on combatting algorithmic bias”

2. Privacy and transparency

Privacy and transparency are increasingly important to corporate agendas, as high-profile privacy breaches have reinforced the risks (as well as the benefits) that using data can bring. The draft Trust Framework clearly recognises this; indeed, privacy is mentioned throughout the document as a key principle for successful digital identity solutions. However, the document requires more detail to help organisations make clear, informed decisions about best practice when it comes to data use. For example, the document nods to the fact that “users will be able to choose which organisations can see and share their personal data [but] not have a choice in specific situations”. This is a useful principle of consent to outline in the context of digital identity, but it should form part of a clear privacy mandate that highlights which specific situations will prevent users from seeing how their data is used (and why), as well as advice for ensuring privacy and transparency throughout the identity ecosystem and lifecycle. Many end-to-end digital identity solutions will require the tools of multiple suppliers, leaving us with a key question — do all these suppliers need to be trusted by the Trust Framework, and if not, how do we ensure that individuals are protected as they navigate through the ecosystem? How will all organisations (not just user- or customer-facing ones) be incentivised to comply with the Trust Framework?

The concept of authorisation and consent should be central to this framework. User consent has been difficult to measure and guarantee, as evidenced by data privacy scandals such as Cambridge Analytica, where Facebook users were unaware of how their data was shared and used (despite this being ‘present’ in terms and conditions). The draft Trust Framework states that users should be told how a product or service works by clearly explaining “any terms and conditions of use that the user needs to be aware of”. But what scandals such as Cambridge Analytica teach us is how loosely the concept of a ‘clear explanation’ can be interpreted. The Trust Framework should mandate practical rules for ensuring the readability of terms, such as achieving a certain readability score or summarising key points in a reasonable number of words.
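To make that suggestion concrete, below is a minimal sketch of how such a rule could be checked automatically. It assumes the open-source textstat Python package and an illustrative threshold of 60 on the Flesch Reading Ease scale (roughly ‘plain English’); neither the library nor the threshold is specified by the framework.

```python
# Illustrative sketch only: not part of the Trust Framework. Assumes the
# open-source `textstat` package (pip install textstat) and a hypothetical
# Flesch Reading Ease target of 60, roughly "plain English".
import textstat


def terms_are_readable(terms_text: str, min_flesch_score: float = 60.0) -> bool:
    """Check a terms-and-conditions text against the assumed readability target."""
    score = textstat.flesch_reading_ease(terms_text)
    print(f"Flesch Reading Ease: {score:.1f} (target: {min_flesch_score} or higher)")
    return score >= min_flesch_score


if __name__ == "__main__":
    sample_terms = (
        "We collect your name and date of birth to confirm who you are. "
        "We only share this information with organisations you approve."
    )
    print("Meets the assumed target?", terms_are_readable(sample_terms))
```

A regulator could of course choose a different metric or threshold; the point is that a quantified target makes a readability rule auditable rather than aspirational.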

“How will all organisations (not just user- or customer-facing ones) be incentivised to comply with the Trust Framework?”

Surprisingly, the draft Trust Framework provides very little reference to biometric data. In fact, the document provides no detail around the specific considerations that the collection, storage and use of biometric data may require. This is unexpected, considering the public attention that biometric technologies such as facial recognition have garnered. Additionally, the use of biometrics in other national identity systems has taught us some important lessons about the risks of their error rates: one study found that 20% of households in the Indian state of Jharkhand failed to get food rations due to biometric errors — a rate five times higher than that of ordinary ration cards. Future iterations of the Trust Framework should outline specific considerations relating to these technologies, with particular reference to known risk areas including accessibility and inclusion, security, and privacy.

3. Accountability

Digital Identity solutions have the opportunity to drastically reshape how people across the UK access products and services. As outlined in the draft Trust Framework, making interactions and transactions available online can save organisations time and money; reduce the risk of fraud; be quicker and easier for users to complete; and encourage innovation. Although this is true, achieving these benefits is dependent on the design of a solution. While on the one hand a digital identity solution can reduce fraud, if it is not designed with the appropriate privacy and security controls in place, it can also increase fraud. The draft Trust Framework provides a great first step in recognising the importance of good design. However, we must not forget the potential for harm to users, and how organisations will be held accountable.

“One study found that 20% of households in the Indian state of Jharkhand failed to get food rations due to biometric errors — a rate five times higher than that of ordinary ration cards.”

The draft Trust Framework highlights that it: “would be owned and run by a governing body established by the government. […] The governing body will also make sure that organisations and schemes follow the rules, and decide what to do if they don’t. The body will point you to sources of help for issues which can’t be solved by trust framework members, and may get involved in redress cases.” The creation of a governing body is an important step, and this body will have great responsibility to ensure that digital identity services are appropriately developed and used. However, the draft Trust Framework does not explicitly outline the responsibilities of the body, such as which redress cases it might get involved in. There is a concern that this could create gaps in accountability, where users are unclear where to turn for redress. I would welcome a second iteration of the draft Trust Framework that outlines clear and actionable responsibilities for the body, as well as details of its composition.

The draft Trust Framework provides ample advice and sources of information for organisations dealing with a data breach, including how to respond to and investigate an incident. However, there is little mention of the support required for users whose data has been lost or who have become victims of fraud. Responses to data breaches should not only include a requirement to tell users that their data has been lost, but also to support users in understanding how to respond to this. This should include communicating to users the cause of the data loss (if known), the potential impact on them, any next steps they should take, support for economic or emotional damage, and details on how to request compensation. It should also include any specific considerations for the loss of biometric data, which has more complex consequences. Organisations must be held accountable for the impact that any failure in their system has on users, and providing clear requirements for doing so would strengthen the trust that the Trust Framework is designed to engender.

What next?

The draft Trust Framework marks a significant step in the development of the Digital Identity market in the UK, and I am excited to follow the conversations it fosters. The publication of the framework has certainly helped to advance the debate in this space, and draws on some fundamental principles for successful solutions — including privacy, inclusion, accessibility and interoperability. How the Government takes into account the gaps in the draft Trust Framework remains to be seen, and I look forward to seeing how the next iteration takes shape.


Sophie is a researcher interested in digital identification, and the intersection of technology and migration.