AI & DIGITAL TRANSFORMATION
Jon Rimmer
2. Apply the MASTA
framework to AI inclusion:
● Motivation: If users don’t
see how AI improves their daily
lives or work, they’re less likely
to develop the skills to use it.
As such, it’s important to raise
public awareness of how AI and
data can be used safely and
meaningfully. This education
needs to be embedded early in schools and extended to older adults through the touchpoints they already use.
The NHS, for example, is
already doing a great job of this,
showcasing the advantages of
aggregated data.
● Access: To work well, AI technologies often need reliable internet, modern devices, and supporting infrastructure.
Without access to these things,
existing digital divides will only
deepen. The Government must
continue to fund or subsidise
broadband rollout, providing
hubs where people can get
access and support.
● Security: Security is a big concern for many, but it is especially acute for those who lack the skills and knowledge to stay safe online. That’s why practical training on how to recognise and protect against AI-enabled and general digital scams is key. This guidance should be accessible and relevant to different age groups and communities.
● Trust: If people don’t trust that AI is fair, unbiased, and secure, they simply won’t engage with it, so we need clearer explanations of how data is sourced and used within these systems to build trust in their use.
● Anxiety: People need help to build confidence with anything new; without this, even well-designed AI tools risk being underused. So again, training and education that build confidence when interacting with digital tools and services are key here. But it’s not just about people; systems and interfaces also need to do their part. Baking in appropriate reassurances at key moments can reduce cognitive overload and performance anxiety. Time and time again, I’ve seen technically confident users demonstrate impoverished skills under stress. Think of the panic that hits when filling out a tax return and wondering, “If I get this wrong, do I go to jail?” Thoughtful prompts, clear feedback, and supportive design cues can make all the difference, as the sketch below illustrates.
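To make the idea of reassurance at key moments concrete, here is a minimal, purely illustrative Python sketch. Nothing in it comes from a real government service; the step names and the wording of the messages are invented for the example.

# Hypothetical sketch: attach reassuring messages to the steps of a digital
# form where users are most likely to worry. Step names and copy are invented.
REASSURANCES = {
    "income_details": "You can save your answers and return later; nothing is submitted yet.",
    "final_review": "You can still change any answer before you submit.",
    "submit": "If you spot a mistake later, you can correct it after submitting.",
}

def render_step(step_name: str) -> str:
    """Return the on-screen prompt for a step, including its reassurance cue."""
    prompt = "Step: " + step_name.replace("_", " ").title()
    reassurance = REASSURANCES.get(step_name, "")
    return prompt + ("\n  " + reassurance if reassurance else "")

def confirm_submission(reference: str) -> str:
    """Clear, specific feedback after the action, so users know what happened."""
    return ("Your form was received (reference " + reference + "). "
            "A copy has been sent to your account inbox.")

if __name__ == "__main__":
    for step in ("income_details", "final_review", "submit"):
        print(render_step(step))
    print(confirm_submission("EXAMPLE-1234"))

The point is not the code but the pattern: the riskiest moments in a journey are identified up front, and each carries a cue telling users what will and will not happen next.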
3. Tackle AI bias and break
down silos through smarter
collaboration: For Governments
to design services that are
intuitive, inclusive, and adaptable
to different needs, it’s time to confront potential biases in AI head-on: understanding where data sets are derived from and actively working to acknowledge, avoid, or counterbalance skewed inputs (see the sketch below). At the same time, we
need to accelerate programmes
that reduce silos across
government departments, while
bolstering security measures to
ensure individual and business
data is secure. This, of course,
is far easier said than done. It’s
key to recognise that, unlike startups, the government can’t always
“move quickly and break things,”
but closer alliances with smaller
companies can help it quickly
learn from their techniques and
findings.
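As a purely illustrative sketch of what counterbalancing skewed inputs can mean in practice (the article does not prescribe a method, and the group labels and benchmark shares below are invented), one common approach is to compare each group’s share of a training set against a population benchmark and weight under-represented records more heavily:

from collections import Counter

def representation_report(records, benchmark):
    """Compare each group's share of the data set against its benchmark share."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    return {
        group: {"actual": counts.get(group, 0) / total, "target": target}
        for group, target in benchmark.items()
    }

def inverse_frequency_weights(records, benchmark):
    """Weight records so that under-represented groups count for more in training."""
    counts = Counter(r["group"] for r in records)
    total = sum(counts.values())
    weights = []
    for r in records:
        actual = counts[r["group"]] / total
        target = benchmark.get(r["group"], actual)  # groups without a benchmark keep weight 1.0
        weights.append(target / actual)
    return weights

# Invented example: 80% of records come from one group, but the benchmark says 60/40.
records = [{"group": "urban"}] * 80 + [{"group": "rural"}] * 20
benchmark = {"urban": 0.6, "rural": 0.4}

print(representation_report(records, benchmark))                    # shows the skew
print(sorted(set(inverse_frequency_weights(records, benchmark))))   # [0.75, 2.0]

Real de-biasing work is far more involved than this, but the underlying principle is the one called for above: know where your data comes from, and correct for what it over- and under-represents.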
4. Strengthen policy
frameworks and funding:
While I think we don’t necessarily
need brand new initiatives,
as some helpful ones already
exist, the issue is a lack of attention and/or funding. Service
Standard 5 (a UK government
digital standard), for example, is
already about inclusion, ensuring
everyone can use digital services,
including people with disabilities,
low confidence, or no internet
access. But it’s perhaps time to specifically call out AI in this standard, making sure it’s clear that inclusion must extend to AI-driven services too.
Alternatively, I’d like to see a
specific standard on AI and Data
within the Government Digital
Service Standards to make
sure these technologies are
designed and deployed in a way
that doesn’t exclude vulnerable
people. Existing initiatives, like
Helen Milner’s ‘Good Things
Foundation’, are already
working to boost digital skills in
underrepresented communities.
They just need more support and
funding to scale that work and to
add a focus on AI resilience.
The bottom line here is that AI doesn’t have to reinforce the status quo or deepen an existing gap. With thoughtful design, transparent data practices, and meaningful human oversight, AI has the potential to close that gap entirely.