AI & DIGITAL TRANSFORMATION
If AI is widening the digital divide, what can be done to stop it?
AI’s increasing presence poses a real danger of making an already sizeable digital literacy divide worse. Jon Rimmer, CXO at Mercator Digital, says governments and organisations have a responsibility to bridge this gap, explaining both why and how.
New technology is designed - or at least is meant to be designed - to make life easier. In the UK, for example, people can now renew passports online, file taxes through HMRC’s digital service, receive emergency alerts on their phones, and even attend virtual Jobcentre appointments - all in the name of saving time and improving access.
However, for those with low digital literacy, advancements in technology can in fact do the exact opposite, further excluding people who are already marginalised.
According to recent research, 8.5 million people lack basic digital skills, a large proportion of whom are living in poverty. In fact, 3.7 million families fall below the Minimum Digital Living Standard, facing barriers such as limited internet connectivity, outdated devices, and insufficient digital literacy support.
Similarly, both older people and those with physical and mental disabilities often encounter accessibility issues that make digital tools frustrating or even unusable. Again, this can be down to equipment and connectivity issues, but a lack of confidence or the skills to use technology also comes into play. A survey of people with severe mental illness, for example, found that 42% lacked basic digital skills, such as changing passwords or connecting to WiFi.
While it’s already widely acknowledged that digital exclusion disproportionately affects the above-mentioned groups, even beyond the barriers of access and affordability, 21% of people still say they feel left behind by technology.
These are all high figures that, with the advent of AI, are at risk of rising.
The impact of AI on digital exclusion
AI of course has the potential to drastically improve public services, healthcare, education, and employment. But, if not carefully designed and implemented, it also risks deepening digital exclusion.

For those already struggling to use digital systems, AI adds complexity to interactions. Chatbots and automated interfaces, for example, are becoming increasingly common in settings like healthcare and social services, where human interaction is often essential. These tools can confuse users with low digital literacy or those experiencing mental health challenges, creating yet another barrier between vulnerable individuals and the services they need.
Looking beyond usability, there’s also a deeper structural problem: the data used to train most AI models is inherently biased. These datasets are often pulled from the web, where information has historically been shaped by academic, technical, and hobbyist communities (think Western, white, middle-class, English-speaking men). As a result, marginalised groups are significantly underrepresented, leading AI systems to reflect and reinforce existing social inequalities - a problem further propagated through continued use.
And this is not just theoretical. Take the COVID-19 pandemic as an example, where the impact of AI bias was clear as day - an AI system downgraded exam results for 39% of students, disproportionately affecting those from disadvantaged schools.

We’ve seen other examples in healthcare too, with recent research showing that AI systems being developed to diagnose skin cancer run the risk of being less accurate for people with dark skin, simply because the data used is predominantly from those with lighter skin. In this case, it’s a disparity with potentially life-threatening implications.
Four strategies to bridge the digital divide
For me, governments and organisations have a responsibility to address both this bias and the widening digital divide. If not, they risk a huge proportion of the public feeling annoyed, isolated and inadequate - and that’s at best. At worst, there’s a real risk that the needs of the UK’s most vulnerable people are not met, which in turn has significant implications for education, employment, and health and wellbeing.

With this in mind, below are some practical strategies for ensuring AI-powered services are intuitive, inclusive, and adaptable to different needs:
1. Embed inclusive research as a foundational step in service design: To ensure inclusion from the outset, robust user research is key. When it comes to the design of Government Digital Services, researchers have always held the responsibility of amplifying the voices of marginalised and digitally excluded groups, ensuring that the services that result are effective and usable for all. Researchers also play a key role in the ethical and responsible use of AI, something that must continue into every stage of service design.
Prioritising researchers’ work here helps uncover the real-world challenges people face, not just assumptions based on the experiences of digital natives.
While there is already a well-established community of researchers across Government, there is always more that can be done to share findings across Departments, since insights often apply more widely than the individual project to which a researcher is assigned.
We also need to continue validating digital services with representative users at every stage of development, using insights from government researchers to shape and test design decisions.