What the market asks
Educational, certification and licensing organisations are
attracted by the logistical and ecological advantages of
digital delivery, the tailored experience enabled by adaptive
technology and the faster results, EF SET academic director
Dana Alhadeedi explains.
The same advantages apply for users, with technology
providing choice, flexibility, and equal opportunities,
Pearson’s head of assessment Freya Thomas Monk says.
“I see digital assessment as providing a level playing field;
it’s essentially the same experience wherever you take the
test in the world,” she explains.
While Thomas Monk says learners particularly appreciate
the speed of feedback and are not too concerned by the
speed of the test itself, for other players in the industry this
is a key selling point.
LanguageCert, for example, an on-demand test provider,
is going to replace its linear computer-based test with an
adaptive model, cutting the time to about 60 minutes – but for
two skills. “This is going to make the computer-based test a
lot more popular than it is at the moment,” says LanguageCert
portfolio manager Mary Yannacopoulou.
18 | THE PIE REVIEW | ISSUE #19
What has technology ever done for us?
Beyond flexibility, facilitated access and cheaper fees,
technology is changing the shape of language exams themselves,
making them shorter and more personalised, and allowing for
creative solutions for assessing integrated skills. Platforms
such as English3 or Duolingo even allow universities to
directly review part of the candidate’s performance or provide
a video interview.
For Stead at Babbel, digital delivery allows them to create
test items that are better indicators of language ability. “There
are a bunch of those opportunities sitting there, waiting
for new digital agitators to shake things up,” he says.
Sarah Rogerson, who leads the assessment development
team at Cambridge Assessment English, explains that
particularly interesting developments are contextualised,
immersive and scenario-based assessment.
But she maintains that technology is not inherently better,
just different. “I don’t think there is enough evidence to say
that technology measures language ability better. It enables
us to do a lot of things differently as a powerful tool,” she
tells The PIE Review.
But not everyone is using it well. “I see lots of shiny digital
solutions with excellent user experience... but often based
on old pedagogy, such as grammar-translation, moving back
in time,” she adds.
I have marked things you machines wouldn’t believe
Part of the problem is that authenticity doesn’t sit well with
AI marking, which provides one of the most exciting new
developments in testing – instantaneous feedback.
AI can assess linguistic competence: all the building
blocks, such as pronunciation, fluency, grammar, syntax
and vocabulary, are within its power, Trinity College London lead
academic Alex Thorp explains. The problem, however, is
that communicative competence is missing.