Understanding and Mitigating Voice Biometrics Vulnerabilities
Voice biometrics can significantly improve the usability, efficiency and security of your customer-facing security processes, but no security technology is perfect. It’s essential to understand and mitigate its vulnerabilities when implementing it as part of your call centre’s authentication and fraud prevention processes. In this session, Matt Smallman introduces his framework for assessing the risk and determining appropriate technical and process mitigations.
This session was followed by an open question and answer session where members could ask questions and discuss their specific challenges.
Matt is the author of “Unlock Your Call Centre: A proven way to upgrade security, efficiency and caller experience”, a book based on his more than a decade’s experience transforming the security processes of the world’s most customer-centric organisations. Matt’s mission is to remove “Security Farce” from the call centre and all our lives. All organisations need to secure their call centre interactions, but very few do this effectively today. The processes and methods they use should deliver real security appropriate to the risk, with as little impact on the caller and agent experience as possible. Matt is an independent consultant engaged by end-users of the latest authentication and fraud prevention technologies. As a direct result of his guidance, his clients are some of the most innovative users of modern security technology and have the highest levels of customer adoption. He is currently leading the business design and implementation of modern security for multiple clients in the US and UK.
[00:00:00] Matt Smallman: Hi. Good afternoon, everyone, and thank you very much for joining us in this Modern Security Community session this afternoon. My name’s Matt Smallman. I’m the author of the book, Unlock Your Call Centre, which is just behind me, tactically positioned here, uh, and my work is helping organizations improve the usability, efficiency, and security of their identification, authentication, and fraud prevention processes.
[00:00:20] But, to make it a bit less of a mouthful: really, I do everything I possibly can to get rid of those time-consuming, frustrating, and to some extent pointless security questions that many of us face on a regular basis when we interact with organizations.
[00:00:36] Today, we’re gonna be looking at understanding and mitigating voice biometrics vulnerabilities, um, but the session today is a solo one. So, we don’t have any guests on, um, so what I could really do with is your participation and your interaction. Uh, in the webinar, we have a chat feature, which will be visible to everyone else who’s participating in the webinar. They’ll be able to see your name, uh, and we have a Q and A feature, where only I will be able to see your name. So, if you want to ask questions, uh, privately and discreetly, then use the Q and A feature. If you’re happy for everyone else to see them, then please use the chat feature.
[00:01:11] Uh, I’d love to hear your questions as we go through each of these topics. It’s a big area, and I’m sure many people on the call have specific areas of focus. So, please use those features. If I don’t get to your questions during the main part of the presentation, we’ll try to at the end, and if I don’t get to them at the end, I will follow up with you individually. So, thank you for the time you’ve taken to interact with us. So, let’s get started.
[00:01:47] Matt Smallman: Before we start, I think it’s worth giving a bit of a health warning. [laughs] Some might say we shouldn’t be talking about this subject at all, or certainly not as openly as we have been, and that to do so puts organizations and this technology at greater risk of exploitation, but I really don’t agree. Um, whilst I’ll absolutely do my best to ensure that this video and my other material doesn’t become a playbook for bad actors, a lot of what we’re going to discuss today is just plain fact.
[00:02:15] Responsible organizations face these facts, assess the risks to them and their customers, and act accordingly. My intention today is to make you think more broadly and holistically than the current media hype cycle, to ensure that voice biometrics remains fit for purpose. Whenever I ask consumers and users of voice biometrics solutions whether it’s a quicker, easier, and even more enjoyable authentication process, I get a resounding "yes," but many of you will have been drawn to today’s video as a result of recent press and media articles, where people are starting to question whether it really does provide the security they expect.
[00:02:53] Uh, and I would argue that, potentially, people’s perceptions of this technology’s security have always been slightly overblown, and in practice, the media and other activities are really bringing that back into alignment with reality.
[00:03:09] Matt Smallman: Some of you will know that I spent most of my formative years as an officer in the British Army. In fact, I think I have one of my former colleagues on the call today. So, thank you for joining us. So, you will forgive me if I deploy Sun Tzu’s Art of War to start our session. There is a point to this, I promise. His two-and-a-half-thousand-year-old wisdom is as true today as it was then. You need to understand yourself and the vulnerabilities of your systems and processes, as well as your enemies, their intentions, and the capabilities open to them, in order to withstand them.
[00:03:41] As a slight aside, for a significant chunk of my time in the Army, I was responsible for leading and then teaching search teams to counter the IED threat on deployed operations, which, as I’m sure you can imagine, in the period immediately following 9/11, was rapidly evolving, and we quickly found that tactics, techniques, and procedures that had been developed and effective at countering terrorism in Northern Ireland and elsewhere were not directly applicable to the situation we found ourselves in.
[00:04:07] Even as we iterated and improved these processes, the training and deployment timeframe was such that by the time soldiers arrived in theater, the threat had already moved on, two or three, if not four, um, steps. So, our training had to evolve as well, becoming less and less about specific procedures and tactics, which are easy to teach, and towards thinking like the enemy in order to counter them, which is a lot more challenging, trust me.
[00:04:30] But one of the most effective exercises we developed was to ask our students on the first day of the course to design an attack on their own team, on themselves as individuals, as they moved backwards and forwards to our training center every day. As you can imagine, this inspired huge amounts of creativity and all sorts of harebrained schemes, but ultimately, as we pieced apart the detail, it helped students to understand that, in practice, the threat is constrained by three things.
[00:05:04] Matt Smallman: The enemy’s intent, their capability, and, in the case of the military, what we called the ground, but in the case of threats to voice biometrics, what we’re gonna call vulnerability. So, to properly understand the threat to your voice biometric systems, those you’re considering deploying, or even your security processes in general, you need to cover all of these dimensions.
[00:05:27] Now, in today’s session, I’m gonna explicitly focus on vulnerabilities at the bottom of our pyramid. That is, those inherent and internal characteristics of the technology and your business processes that make them vulnerable. I’ll definitely talk a little about the enemy’s capability and, shortly, about their intention, but this will mostly be in reference to the kind of emerging synthetic speech capabilities that we’ve seen so much of in the media.
[00:05:52] Uh, before we start, though, I just wanna touch on those threat actors and their intentions.
[00:05:57] Matt Smallman: The first thing to say is they are not all created equal. [laughs] Um, it’s easy to model and think about an anonymous fraudster who may be part of one of the first three groups on the slide, but when we do that, we often forget about other individuals and the different risks, different intents, and even different capabilities they have open to them.
[00:06:18] It’s easy to think about fraudsters as this kind of amorphous group, and in fact, the first three, the global criminal networks, criminal gangs, and prolific individuals, are all fairly similar in their intents, if not necessarily in their capabilities and the way in which they deploy them. But when we move on to look at the other categories, friends, family, and caregivers, potentially their intent is not even malicious.
[00:06:42] Uh, a family member may be attempting to access a service somewhat legitimately on behalf of an ill relative, in order to get service for that person. Um, unfortunately, however, that may also be the same caregiver who is trying to exploit that individual and extract funds and resources from them.
[00:07:05] Uh, most recently, we’ve seen reporters as a threat actor in this space. What are they after? What is their intention? Their intention is really to tell a story, to link up with hype that may be happening in other domains, to get the coverage that they want, and to be the first to tell that story. So, again, another set of intents, and potentially a different set of capabilities available there.
[00:07:32] And then, finally, uh, we have opportunists, people who are just giving it a shot, having a go. Uh, the person who picks up a credit card in the street and decides whether they should try it. So, there are very many threat actors. Every organization will have different actors that it needs to consider, and each of the vulnerabilities we’ll look at can be exploited in different ways by different actors, depending on their capabilities and their intentions. So, just a quick nod to intent, uh, before we delve into vulnerabilities as a whole.
[00:08:08] Matt Smallman: So, the model I use to think about voice biometrics vulnerability categorizes threat vulnerabilities in one of these five categories, uh, and this is a model I’ve been using for the last six or seven years. Uh, I’m sure we could all a- argue semantically about whether these are the correct categories and whether, in fact, some of these are opportunities rather than vulnerabilities, but it serves a purpose and helps frame our- our conversation. So, that’s what we’re gonna use today.
[00:08:32] It starts with biometric vulnerabilities, exploiting the core biometric decision process by obtaining a false accept, a function of the technology. It moves through imposter enrollment, where bad actors exploit the registration process to enroll as if they’re a genuine user, through to what I believe is one of the most significant vulnerabilities in systems deployed today: the simple act of not engaging with the technology at all, and bypassing it.
[00:08:58] That is, bypassing the technology in order to exploit legacy authentication or other mechanisms that may be available. Then presentation, the type of attack we’ve seen most recently, which covers a wide collection of scenarios where a voice or audio sample that is not the genuine speaker, or is certainly not the genuine speaker intending what the sample is presented as, is presented to the system and matches.
[00:09:25] Uh, and then, finally, insider threats, where the system can be compromised or subverted by front-line or privileged individuals within the organization. And we’re gonna dig into each of these in a little more detail in our session. Now, clearly, we can’t cover the intimate detail of every single one of these today, but we’ll do our best to scratch the surface and hopefully give you some food for thought as you think about the vulnerabilities and threats to your organization.
[00:09:51] Matt Smallman: So, looking at biometrics first. How can a bad actor exploit the biometric characteristics, specifically the reality of false accepts? Um, as a reminder from our Beginner’s Guide to Voice Biometrics session a few weeks ago: like this bicycle, if you want a perfectly secure system, you have to accept that no one will be able to use it. Every security system involves some degree of trade-off, and voice biometrics is no exception. We call the negative side of that trade-off the false accept risk, and as you can see, the lower the false accept risk, the higher the chance that a legitimate user is rejected from the system.
[00:10:27] Now, the specific shape and position of this curve will vary depending on your organization, um, but this trade-off exists in every situation. Uh, and what the fraudster, or bad actor, or threat actor, is really trying to do is exploit the false accept probability. Now, that number may be really, really low, um, but it’s not zero, uh, and that’s really the point I want to make here.
[00:10:53] We can delve into this in a little more detail with a chart that we looked at in the previous session. The vast majority of imposters are going to score very poorly, significantly below the threshold that we might establish for a genuine user. However, some imposters will score more highly, and may even score higher than some genuine users who are in particularly bad situations or who, on occasion, provide poor samples.
[00:11:20] Uh, and that creates this false accept risk. When we blow the chart up in more detail, we can see the threshold that all biometric systems must have, under which any score is considered a mismatch, and above which any score is considered a match. And in this situation, under the red line, the imposters, we see a small number of false accepts, a theoretical possibility of false accepts. Uh, and similarly, on the other side of the threshold, under the green line, we see that false reject possibility.
[00:11:51] Now, in nearly every case, the rate of false accept is significantly smaller than the rate of false reject, uh, and the false accept rate doesn’t apply to every call; it’s a function of the attack rate. But if the false accept rate, just choosing easy numbers, is 0.1 percent, then some of you may easily extrapolate that as being a one-in-a-thousand chance of success, and what I really want to try and show today is that that isn’t necessarily true.
[00:12:18] If a fraudster tried the brute-force form of attack that we see with attempts to exploit false accepts, and attacked 1,000 accounts, they would not be guaranteed to be successful in any of those, and I think the easiest way to remember this is the coin toss analogy. If I toss a coin twice, I am not guaranteed that one of those tosses is a head, even though the probability of a head on any flip is 50 percent. Tossing it twice does not guarantee me that success.
[00:12:50] So, when thinking about brute-force attacks, we need to remember that fraudsters are necessarily gonna expose themselves to many, many attempts in order to access the system, and that each of those attempts comes with some cost. The other thing you might think about is that, potentially, the same fraudster could attack one account a thousand times, but if they didn’t score high enough to get in the first time, then in practice, on following attempts, they’re less and less likely each time to score sufficiently close to the threshold to match.
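The arithmetic behind that coin-toss point can be sketched in a few lines of Python. This is purely an illustration of the probability argument, not anything from a biometric product; the function name and the 0.1 percent figure are just the easy numbers used above.

```python
# Illustrative sketch: the chance that a brute-force attacker gets at least
# one false accept across n independent attempts, given a per-attempt
# false accept rate p. Names and figures are made up for illustration.

def prob_at_least_one_accept(p: float, n: int) -> float:
    """P(at least 1 false accept in n attempts) = 1 - (1 - p)^n."""
    return 1.0 - (1.0 - p) ** n

# Two coin tosses do not guarantee a head:
print(prob_at_least_one_accept(0.5, 2))   # 0.75, not 1.0

# A 0.1% false accept rate across 1,000 attacked accounts is not a
# guaranteed success, only a roughly 63% chance of at least one:
print(round(prob_at_least_one_accept(0.001, 1000), 3))
```

Note that each of those thousand attempts also carries cost and exposure for the attacker, which is exactly what the rate-limiting and watch-list mitigations discussed later in the session exploit.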
[00:13:24] So, in practice, and this is where the real security from a voice biometrics system comes from, the probability of an imposter being falsely accepted is really, really low, um, and it is in the control of the organization. Through the tuning and calibration process, you’re able to establish an appropriate risk envelope, an appropriate level of false accepts, for your individual organization’s risk appetite.
[00:13:51] But we need to consider how these charts are calculated for the second form of attack that may exploit biometrics. These curves and the tuning process involve what’s often called a true-user imposter test, where hundreds or thousands of samples from the production application are tried against hundreds or thousands of other users, the scores are calculated for each of those attempts, and the distribution is plotted, such that we can calculate these false accept and false reject risks.
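As a rough sketch of how such a test turns two score distributions into false accept and false reject rates at a chosen threshold. Every number below is invented for illustration; real tuning exercises use thousands of production samples per distribution.

```python
# Illustrative sketch of how a true-user / imposter test yields false accept
# and false reject rates at a given threshold. Scores and threshold are
# invented; this is not output from any real tuning exercise.

def far_frr(genuine_scores, imposter_scores, threshold):
    """FAR: imposters scoring at/above threshold; FRR: genuines scoring below it."""
    far = sum(s >= threshold for s in imposter_scores) / len(imposter_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

genuine = [0.91, 0.88, 0.75, 0.95, 0.60, 0.82]   # true users against themselves
imposter = [0.10, 0.22, 0.35, 0.71, 0.18, 0.05]  # cross-comparisons

far, frr = far_frr(genuine, imposter, threshold=0.70)
print(far, frr)   # one imposter in six accepted, one genuine in six rejected
```

Sliding the threshold up or down trades one rate against the other, which is exactly the trade-off curve discussed earlier.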
[00:14:22] But that means that the false accept risk we’re accepting is really about the average person in the street. Often it is a same-sex comparison, so males against males or females against females, or certainly as they’re determined by the biometric system, as opposed to any gender labels, um, and that represents the universe as a whole. But that creates two really interesting situations. What happens if the imposter is significantly more like the individual they’re trying to attack than the population as a whole? And what happens if the person being attacked, or even the person doing the attacking, is significantly less like the population as a whole? That’s the bias problem.
[00:15:04] Uh, the first problem, we call that the evil twin problem. Um, and these are real, genuine risks, um, but what I want to say at this point is that they are not certainties. In situations where I’ve looked at this, we often see that such an imposter is maybe five to even 10 times more likely to get through. But against a false accept rate of 0.1 percent, that means the false accept rate for those particular individuals in that particular scenario may now be one percent.
[00:15:33] And how do we know this to be true? Well, because human nature being what it is, people like to test these systems. Uh, very often, after users are first set up on these systems, we see a high volume of test calls, and legitimately, someone who’s now got a new security mechanism might want to test that it still works for them. So, we see a whole bunch of positive tests take place, uh, and we also see a whole bunch of negative tests take place.
[00:15:59] The negative tests are significantly smaller in number, usually less than 10 percent of the positive tests: someone, usually late on a Friday night, hands their phone to their friend and says, "You try and break into my account," and sometimes we can even hear the bar noises in the background. On most occasions, those do fail, but they pass at a greater rate than the average would suggest, and when we analyze that, we can deduce what those increased false accept risks are.
[00:16:28] Now, this doesn’t necessarily mean that this form of attack is scalable, or even that such an attempt is malicious, uh, but it does create reputational risks for an organization, and these need to be understood, with mitigations and plans put in place to handle them.
[00:16:45] Matt Smallman: So, to summarize the biometric risk, then: it’s a vulnerability that can be exploited in a couple of ways. By brute force, theoretically, uh, but we’ll look at some mitigations to that in a moment. By related parties, as we’ve talked about, who are more similar than average to the individual and therefore stand a greater chance of exploiting that false accept risk. And even by the odd, single, random occurrence: given tens of millions of authentications and hundreds of thousands, if not millions, of mismatches, or imposter attempts, or even unintentional imposter attempts, there is a probability that a single false accept may occur, without that individual necessarily having malicious intent.
[00:17:26] When we think about mitigation, and we’re not gonna go into this in too much detail, there are a range of mitigation options open to us. First off, the tuning and calibration process itself enables us to establish a risk appetite that is appropriate to the transactions being protected and the organization that’s implementing it, uh, and establishes the maths at the start of this process.
[00:17:46] In order to counter some of those brute-force attacks that we’ve talked about, it is absolutely essential, and we’ve seen this in nearly every media case, that some form of rate limiting be implemented. Fraudsters do not necessarily get it right on the first attempt; in fact, they will be wrong significantly more times than they are right. So, limiting the number of times that an individual can mismatch against an account, even if that is just a time bar or some kind of ban for a short period, significantly reduces the potential for the false accept to be exploited.
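A minimal sketch of what such per-account rate limiting might look like, assuming an in-memory store and invented limits (three mismatches within an hour trigger a 24-hour time bar). A real deployment would use persistent, shared state and limits tuned to its own risk appetite.

```python
# A minimal, hypothetical rate-limiting sketch: after too many biometric
# mismatches against an account within a sliding window, further attempts
# are refused for a cooling-off period. All names and limits here are
# illustrative assumptions, not from any vendor's implementation.

from collections import defaultdict, deque

MAX_MISMATCHES = 3        # mismatches tolerated per window
WINDOW_SECONDS = 3600     # sliding window: one hour
LOCKOUT_SECONDS = 86400   # time bar once the limit is hit: 24 hours

mismatch_times = defaultdict(deque)   # account -> recent mismatch timestamps
locked_until = {}                     # account -> lockout expiry time

def record_mismatch(account: str, now: float) -> None:
    """Log a mismatch and apply a time bar if the window limit is reached."""
    times = mismatch_times[account]
    times.append(now)
    while times and times[0] < now - WINDOW_SECONDS:
        times.popleft()   # drop mismatches that have aged out of the window
    if len(times) >= MAX_MISMATCHES:
        locked_until[account] = now + LOCKOUT_SECONDS

def is_locked(account: str, now: float) -> bool:
    """True while the account is inside its cooling-off period."""
    return locked_until.get(account, 0) > now

# Three quick mismatches trigger the time bar:
for t in (0, 10, 20):
    record_mismatch("acct-42", now=t)
print(is_locked("acct-42", now=30))   # True
```

Even a crude scheme like this sharply raises the cost of the brute-force attack described above, because each account yields only a handful of tries per day.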
[00:18:20] In addition, every one of those samples that mismatches, remembering that fraudsters attempting a brute-force attack will mismatch significantly more times than they match, provides us a sample that can be added to a fraud watch list, or crosschecked against other accounts which have suffered mismatches in a similar timeframe, to identify individuals that we then explicitly ban and bar from accessing the system.
[00:18:44] Uh, and then, finally, accepting that false accepts are an implicit part of implementing and running a voice biometrics system, we can minimize the impact when they do occur, particularly in the related party, single occurrence, and even journalistic cases, by having a really, really clear playbook of business processes and procedures that will be followed when one is identified. The majority of these occurrences will be identified by your organization, hopefully not downstream as part of fraud, but most often through individuals who’ve been testing their own accounts, have been lucky on that occasion and managed to match, and who need reassurance that the system is still more secure than the previous mechanism.
[00:19:27] Remembering, of course, our evil twin: if they really wanted to break into your account, voice biometrics apart, they have access to nearly all of the information they would ever need to do so, and if you really don’t trust your twin, then you probably have more challenges to worry about.
[00:19:46] Matt Smallman: Next, I want to talk about what I believe is the biggest vulnerability, and the one I see most often in organizations deploying voice biometrics: what I call the bypass risk. Uh, I was very fortunate to be able to have an AI generate me a bank vault on a road in the middle of nowhere, um, to really symbolize this risk. You can build the strongest front door you like, but if you don’t put fences up around it to stop people going around the sides, then it’s not really gonna provide a lot of security.
[00:20:13] Yes, sure, it’ll keep a few people out. It will deter some people psychologically, but it won’t really provide security. And you could also liken this to exploiting the false reject risk. The risk we talked about earlier, the counterpart to the false accept, is that genuine customers, on some occasions, will be denied access, will mismatch, just because of the probabilities involved in the calculation, because of the acoustic situation they’re in, or because of the quality of their original voiceprint.
[00:20:42] And as organizations, we don’t want to deliver them a poor service, uh, and therefore we must have fallback and contingency processes in order to enable these customers to continue to access their accounts and service the transactions that need servicing. Most often, however, and certainly during the early stages of implementation, I see this being a fallback to legacy, to the previous knowledge-based authentication method, the one we were replacing because we knew it was less secure.
[00:21:07] And, in fact, because customers believe that their accounts are now protected by voice biometrics, they feel that they are more secure. But if we just fall back to what we had before, knowledge-based authentication, then it’s easy for a fraudster to exploit, particularly in automated and predictable systems, where, potentially, the fraudster just stays quiet, the system falls back to a fallback process, and they can enter the digits of a date of birth or a passcode that they’ve obtained from social engineering or a data dump.
[00:21:39] But it goes further, because privacy legislation also requires that we have processes to enable customers to remove their biometric prints, uh, and these can be similarly exploited: removing the door, if you like. Um, now, this won’t happen in every case, but when targeting specific individuals, removing the front door and disabling it is potentially better than trying to get round the side, particularly if you have additional controls in place. Um, and while this hole remains open, fraudsters really have no incentive to try to exploit the biometric risks we’ve talked about, or some of the presentation risks we’ll talk about shortly.
[00:22:16] Matt Smallman: To summarize this area, then: even though a customer thinks their account is protected by voice biometrics, in practice, the fallback process, exploiting the false reject risk, may allow their account to become compromised, and the de-registration or unenrollment process may also allow the locks to be removed from the door. But these risks can be mitigated. Using additional step-up authentication when a mismatch occurs might add extra effort to the authentication process, and reduce somewhat the efficiency or usability improvements we get, but as long as the false reject rate is at an acceptable level, most customers won’t mind the additional time if it’s providing additional security.
[00:22:55] And I don’t mean just falling back to knowledge-based authentication. I mean some additional step, whether that be SMS two-factor, the advantages of which we can debate into the night, um, or other modern authentication methods, like network authentication or device-based authentication, that might take more effort on both the customer’s and the employee’s part, but in practice provide more security for that effort.
[00:23:18] Then there’s transaction screening. Even if you do fall back to knowledge-based authentication or some legacy method, noting the fact that the customer failed to match successfully on this occasion may limit the services you make available to them, or may apply some additional screening or scrutiny to the transactions that take place.
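One way to sketch that kind of step-down servicing in Python. The service names, risk tiers, and function are illustrative assumptions for this talk's scenario, not any real call-centre platform's API.

```python
# Hypothetical "step-down" servicing sketch: when the caller mismatched on
# voice biometrics and was let in via a weaker fallback method, offer only
# low-risk services and route anything else to extra scrutiny. Service
# names and tiers are invented for illustration.

HIGH_RISK = {"payment", "change_address", "reset_credentials"}
LOW_RISK = {"check_balance", "report_lost_card"}

def allowed_services(auth_method: str, biometric_mismatch: bool) -> set:
    """Full service only on a clean voice biometric match; otherwise step down."""
    if auth_method == "voice_biometrics" and not biometric_mismatch:
        return HIGH_RISK | LOW_RISK
    # Fallback (e.g. knowledge-based) after a mismatch: low-risk only,
    # with high-risk requests routed to additional screening.
    return set(LOW_RISK)

print(sorted(allowed_services("kba", biometric_mismatch=True)))
# ['check_balance', 'report_lost_card']
```

The point is that the mismatch signal isn't discarded at authentication time; it follows the call and constrains what the caller can do.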
[00:23:36] And then, finally, to mitigate the risk that people’s registrations are unenrolled without their knowledge, uh, notifications and the opportunity for customers to repudiate the unenrollment are an important part of that protection, as well as other business processes that I’m sure you can imagine.
[00:23:56] Matt Smallman: Next, I want to talk briefly about imposter registration. Uh, this often comes up as a source of concern when designing the registration and enrollment process. Now, in most cases, I don’t think this is an issue, because you should only really trust the voice biometric voiceprint to the same extent as you trust the security used to enroll it. Um, but you may choose, over time, to increase your trust in a voiceprint based on how it’s been used and subsequent validation of its provenance.
[00:24:28] This does, then, create the risk that an imposter enrolling may, over time, be able to accumulate additional privileges, an escalation-of-privileges attack, if you like, over and above what they might have been able to access with the security credentials they used to enroll. Now, this is particularly important when it comes to related parties, because they are then able to access accounts that they may not otherwise have been able to.
[00:24:53] Um, but it may also happen unintentionally. In a situation where a user is operating the account of a family member on their behalf, potentially with their consent, but not necessarily recorded on your systems, it is very likely that they are going to be accidentally enrolled in your system. And you may, on some occasions, think that they are the legitimate customer, uh, and that may cause concerns and challenges elsewhere, when you see the same person come up in many different situations.
[00:25:22] So, imposter registration is something to be considered, not something I necessarily worry too much about, um, but there are a range of mitigations that I would recommend on most occasions. The first is notification and the opportunity for repudiation. That is, telling people that they have now been registered. Um, I think the fastest response I’ve ever seen to an imposter enrollment was when the call was still in progress as the customer, notified via text message, called another service associate to complain that they hadn’t registered and somebody else was doing so on their behalf.
[00:25:56] So, customers really do react to these notifications when it wasn’t them who registered, and across multiple situations, I’ve found these repudiation opportunities to be really, really valuable to a secure registration process. There’s also the opportunity to run enrollment watch lists, checking that the people enrolling on a given day or in a given period are not known bad actors, uh, and even to crosscheck enrollments to make sure that the same person is not enrolling on multiple accounts, which can cause issues, particularly when related parties are operating both their own accounts and the accounts of other people.
[00:26:33] Uh, and finally, basing the trust of the voiceprint on the activity associated with it, along with the time it has been in use, is often far better than establishing that trust from day one by assuming the voiceprint is trustworthy. The fact that it is, over time, associated with multiple transactions, and that none of those transactions are subsequently repudiated or associated with fraud, can increase your trust over time. So, if you want to enable that voiceprint to do more things than it would otherwise be able to do, that’s a perfect opportunity.
[00:27:13] Matt Smallman: So, now the one that everyone wants to talk about, and I was just checking, we had a few questions on this as well: presentation vulnerabilities. Today we're just going to skim the surface of these, really because I want to put them in the context of the wider set of voice biometrics vulnerabilities we've been talking about. We'll be going much deeper in about three weeks' time, when we're joined by Haydar, a research scientist with a lot of experience in this space.
[00:27:40] So, let's talk about presentation attacks. Because voice biometrics systems are, at the most basic level, comparing audio, they don't care where that audio was obtained, what the speaker's intention was when it was provided, or even whether it is being provided live and in real time. There is an assumption in many cases that it is, but in many cases the system can't tell.
[00:28:02] So, if a bad actor is able to present an audio sample in a way other than that which the genuine speaker intended, they can exploit the true accept risk we've talked about and be allowed through. And this risk only increases the more predictable our authentication systems and processes are. It's particularly heightened in automated systems, for two reasons. First, there is none of the unpredictability added by an agent, who might ask about the weather when we expect them to ask for a password. Second, automation gives fraudsters and imposters many chances to practice, get their timings right, and learn the process before trying to exploit it, most often without being noticed.
[00:28:47] So, how does it actually get exploited? I used to call this category "stolen voice", but I think "presentation" has become the more accepted term over the last few years, and I want to expand it beyond the synthetic speech and recordings you usually see discussed, because this can also happen where legitimate customers are under duress and forced to provide their voice sample.
[00:29:10] Now, in the case of banking and financial services in much of North America, Europe, Australasia, and even the Far East, I'm not sure that's a big concern, but in some geographies it may be, in which case it's something that needs to be considered.
[00:29:26] This could also take the form of what I would call a man-in-the-middle attack, where an imposter socially engineers a customer into speaking to them as if they were the bank, or whichever organization authenticates this way, then drops off the call once the authentication process is complete, carries on speaking to the agent or the automated system, and carries out a transaction the customer would never have wanted. So it's not just about synthetic voices and replayed recordings, though those are the ones that have captured most of the public imagination recently.
[00:30:03] So, recreating these samples is increasingly viable. The first big challenge for an attacker, though, is overcoming the biometric match. Historically, recordings and even synthetic voices simply haven't been good enough to match as the individual themselves; the quality of the voice was insufficient. You can liken this to the impersonator scenario: impersonators are very good at reproducing the features that we, as humans, most attribute to an individual, the accent, the distinguishing features.
[00:30:41] But as humans, we're really only able to distinguish three or four of these features at any one point, and they don't have to be reproduced that well for the combination to convince us the impersonator sounds like the real individual. Synthetic voices and recordings can likewise sound like the individual, but when we compare them from a biometric perspective, evaluating hundreds or thousands of features over a couple of seconds of audio, they look nothing like the real speaker, and historically they have often failed to match on that basis.
[00:31:12] But it is true that this is getting better and better, and as I said, we'll be going into these capabilities in more depth in the session in three weeks' time. Something highlighted this to me today: I quickly added up the amount of my own audio available on just this website alone, and there's more than 300 minutes, which is more than sufficient to create a high-quality synthetic voice using some of the best-of-breed tools available today.
[00:31:38] So, this is a legitimate attack. It's not necessarily scalable at this point, and it's not necessarily something that could target every individual, but it is definitely something that should be considered. The same goes for recordings: the quality of recordings available, and the ability to play them back to systems today, is very high. Fortunately, in many cases we have associates and agents in the loop, who would easily detect that the conversation is stilted.
[00:32:06] The variation we introduce into the call today makes these attacks quite challenging to pull off, although we have seen reporters do so recently. We are going to go into this, I promise, in far more detail in the next session, so this was really just a teaser.
[00:32:23] Matt Smallman: To summarize the attacks and vulnerabilities in the presentation category, then: think about man-in-the-middle attacks, where an imposter gets a real customer to start the call and then drops off and speaks to the agent. Think about replayed recordings, where the customer's voice, particularly with static pass phrases, is played back to a system that has no ability to differentiate a recording from legitimate live speech. Think about synthetic voices, which can simulate a voice in very realistic form today. And finally, think about duress, where the customer may be forced to provide their voice against their will so that someone else can access a system.
[00:33:04] There are a range of mitigations, and we will go into these in depth in our next session, but to cover them at a high level. First, multiple authentications. It may be easy to guess what the first part of a call looks like, or to understand what the different interactions in the IVR look like, but by the time you're three or four minutes into a call with an agent, it's very difficult to maintain that charade realistically.
[00:33:30] So, repeating authentication later in the call is an opportunity to mitigate some of this risk, and certainly when a caller switches from the IVR to speaking with an agent, it's always advisable to carry out a separate background authentication at that point, to make sure the speaker hasn't changed between those two services.
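The IVR-to-agent handoff check described above might look something like this in outline. This is a sketch only: the `verify` scoring function stands in for whatever scoring API a vendor actually exposes, and the threshold and return labels are invented for illustration.

```python
# Illustrative background re-authentication at the IVR -> agent handoff.
# `verify(enrolled_print, audio) -> score` is a stand-in for the vendor's
# scoring call; higher scores mean a better match to the enrolled voice.

def handoff_check(verify, enrolled_print, ivr_audio, agent_audio, threshold=0.8):
    """Re-verify the speaker when the call moves from the IVR to an agent."""
    ivr_score = verify(enrolled_print, ivr_audio)
    agent_score = verify(enrolled_print, agent_audio)
    if ivr_score >= threshold and agent_score < threshold:
        # Passed in the IVR but not with the agent: possible replay or
        # man-in-the-middle, so escalate to step-up authentication.
        return "speaker_may_have_changed"
    if agent_score >= threshold:
        return "ok"
    return "not_authenticated"
```

The key point is that the second check runs passively in the background on the agent-leg audio, so a legitimate caller never notices it, while an imposter who only controlled the IVR portion of the call does not survive it.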
[00:33:50] There is increasing focus, and this again will be a subject of our discussion in three weeks' time, on characteristic detection. Synthetic speech, and in fact recordings too, all go through some form of signal processing, and the way they're created leaves artifacts that are indicative of their origins. Those artifacts are predictable, repeatable, and detectable over time.
[00:34:15] That is what characteristic detection looks at, and in fact, as we'll talk about in a few weeks' time, many of the providers are even now adding watermarks to their output so they can detect which users of their systems created these voices. There is also the obvious point that additional factors should, whenever possible, be used to authenticate individuals. I would advocate that these be modern authentication factors, like network authentication, that can run in the background and confirm whether or not a caller is using the device we think they should be using, but there are of course other methods that can add an additional layer of security.
[00:34:57] Liveness detection, most often adding randomness to the process, or challenge-response questions that are far harder for imposters to predict, is another mechanism. So is rate limiting, which we talked about earlier in the call: limiting a fraudster's or imposter's opportunity to practice against and test these mechanisms will significantly constrain their ability to exploit them.
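A basic version of the rate limiting mentioned here is a sliding window over failed authentication attempts per account. The class below is a minimal sketch under assumed parameters (three failures per hour); the class and method names are my own, not any product's API.

```python
# Illustrative sliding-window rate limiter for failed voice authentication
# attempts, per account. Parameters are examples, not recommendations.
import time
from collections import defaultdict, deque

class AttemptLimiter:
    """Lock an account after too many failures inside a time window."""

    def __init__(self, max_failures=3, window_seconds=3600):
        self.max_failures = max_failures
        self.window = window_seconds
        self.failures = defaultdict(deque)  # account_id -> failure timestamps

    def record_failure(self, account_id, now=None):
        now = time.time() if now is None else now
        self.failures[account_id].append(now)

    def is_locked(self, account_id, now=None):
        now = time.time() if now is None else now
        q = self.failures[account_id]
        while q and q[0] <= now - self.window:
            q.popleft()  # drop failures that have aged out of the window
        return len(q) >= self.max_failures
```

Tracking per account (and, in practice, also per calling line) is what removes the attacker's rehearsal opportunity: they cannot quietly probe the same system dozens of times to learn its timings.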
[00:35:24] In nearly every case we've looked at that's been highlighted in the media recently, the attack you see on TV or read about in the press is not the first; it may be the nth attack attempted, or the nth account tried. So rate limiting is an important slowing-down factor here.
[00:35:44] Matt Smallman: Before we finish, I just want to highlight the final vulnerability area I think it's important to consider, and that's insider risk. That might be administrators with privileged access to your systems, who may have the capability to adjust thresholds, most worryingly, and thereby increase the false accept risk and the scope for people to exploit it, or to delete, remove, or change specific voice prints in order to let fraudsters or imposters into the accounts they protect.
[00:36:11] The mitigations for these, as I'm sure you can imagine, are part of standard systems and controls processes. But one other place to consider is frontline users. They often have quite a lot of discretion as to whether customers are registered or not, and what authentication a caller must present in order to enroll, and there is opportunity for frontline users to exploit their role: to create voice prints for people who are not who they claim to be, to disable voice prints, or to carry out other malicious actions against the voice prints that are, theoretically, protecting people's accounts.
[00:36:42] So, just don't forget the insider risk when assessing the vulnerabilities of your voice biometrics system. Some of the mitigations will be obvious: least-privilege access and logging for administrators and frontline users. But also, as we discussed when looking at imposter enrollment and bypass, using notifications to legitimate customers and giving them an opportunity to repudiate the request is a key control here.
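The logging control for privileged actions, such as an administrator changing an accept threshold, can be sketched as a simple audit wrapper. Everything here, the decorator, the in-memory log, the setting name, is hypothetical and only illustrates the principle that every privileged change should leave an attributable record.

```python
# Illustrative audit logging for privileged actions, e.g. adjusting the
# biometric accept threshold. A real system would write to append-only,
# tamper-evident storage rather than an in-memory list.
import functools
import json
import time

AUDIT_LOG = []  # stand-in for durable audit storage

def audited(action):
    """Decorator that records who did what, with which arguments, and when."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(admin_id, *args, **kwargs):
            AUDIT_LOG.append(json.dumps({
                "ts": time.time(),
                "admin": admin_id,
                "action": action,
                "args": args,
                "kwargs": kwargs,
            }, default=str))
            return fn(admin_id, *args, **kwargs)
        return inner
    return wrap

SETTINGS = {"accept_threshold": 0.80}  # example setting only

@audited("set_accept_threshold")
def set_accept_threshold(admin_id, new_value):
    SETTINGS["accept_threshold"] = new_value
```

The point of recording the change before applying it is that even a failed or interrupted change attempt still appears in the log, which matters when the person you are auditing is the administrator themselves.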
[00:37:10] Matt Smallman: So, in summary, there are a wide range of vulnerabilities to a voice biometrics system. Nearly all of them are effectively mitigated by a reasonably simple set of controls that can be built into most businesses' processes. However, pretending these vulnerabilities don't exist is a fool's errand. We must accept that they exist, assess the risk of them occurring to our organization, assess the impact if they did occur, identify appropriate mitigations, and be comfortable with whatever the residual risk is, making sure we have mechanisms in place to respond to such an event, because in some of these cases it almost certainly will happen.
[00:37:55] Thank you so much for joining us on today's session. We do have a few questions coming up in the chat, so I'm going to go through those now, but I'd encourage anyone with more detailed questions or particular examples they want to discuss to jump in and let us have them.
[00:38:10] Matt Smallman: So, we had a question about synthetic speech: the sophistication seems to be evolving considerably, but is it a realistically viable attack vector for firms using a passive, conversational voice biometrics model?
[00:38:21] I think we have to accept that the technology has evolved significantly. There are still a few horizon factors we're monitoring that will affect how viable it is as a scalable attack. But I think we can now be certain, as we've seen from several press reports, that as a targeted attack with specific intentions it is viable if appropriate countermeasures and mitigations are not deployed.
[00:38:53] We will definitely be talking about this in future sessions, but there are still a whole range of factors that make it less than viable as a scaled attack, and again, we'll discuss those in the next session. What I would also highlight is that if you're allowing fraudsters to bypass your biometric security system altogether, by staying silent or by not changing the treatment of callers when they mismatch, then there is absolutely no incentive for fraudsters to try this mechanism. That's really why I think we haven't seen a huge number of these attacks in the wild today: the incentive isn't there.
[00:39:27] But as we start to patch those holes and get better at closing those vulnerabilities, the incentive will increase, and as the technology improves as well, we must be prepared for it to become effective and have the appropriate mitigations in place, based on a real understanding of the risk and how it might be used.
[00:39:49] So, thanks for that question. A couple of people have also asked where the slides and recordings will be available. They absolutely will be, hopefully on Tuesday this week, because it's coronation weekend in the UK. And those were the questions we had, so are there any more questions on the call today?
[00:40:14] Okay. In wrapping up, I think I've probably done enough of a promotion of the Battling Deep Fakes and Synthetic Voices session that we're holding on Thursday the 25th of May. I'll be joined by Haydar Talib from Nuance Communications, who leads their research and development efforts and is really at the forefront of understanding what scalable attacks might look like in this space and how to better mitigate them. I'm really looking forward to that conversation.
[00:40:42] I encourage you to have your questions ready for him and me, and we will see you then.