More data breaches, how does the darknet economy work, and what is the threat from generative AI?

Matt Smallman
Last updated: 20 Apr 2024

Modern Security Newsletter #001 – January 2023

Welcome to the first edition of the Modern Customer Security Community newsletter. It aims to provide members with a monthly summary of news, ideas, insight, and analysis in the field of customer security, drawn from my hours of reading and consideration so that you don’t have to spend yours.

I plan to send a new edition every month, but I need your help to ensure that it is valuable and interesting. I don’t want this to be a nuisance in your inbox, so please let me know what you think: matt.smallman@symnexconsulting.com.

In the news

  • New Breaches – It will be no surprise that stunningly large amounts of personal data continue to be lost through carelessness or fraudulent activity. In the US, 37 million customer records from T-Mobile were exposed, and Experian took 47 days to rectify an issue that allowed anyone to access complete credit files. 
  • Analysis – The US-based Identity Theft Resource Center published its annual report detailing more than 422 million victims across 1,802 breaches, which, whilst enormous, is not a significant increase on 2021. For me, the most interesting finding was that Social Security Numbers were exposed in 1,143 (64%) of these breaches and Dates of Birth in 633 (35%), underlining yet again the poor security value these data points provide.
  • Using Breaches for Good – Troy Hunt, the creator of Have I Been Pwned, has come up with an unexpected use of breached data: proving that new users are actually real when they sign up for goods and services. The theory goes that if your data hasn’t been included in a breached data set, you are less likely to be a real user because, in practice, nearly everyone has lost data to a breach (a rough sketch of how such a check might work follows this list). This is a really clever idea, but it doesn’t sit right with me. Fortunately, my new personal email address, which I changed after having my details exposed in 16 breaches, has already appeared in 1 scraping breach.
  • The Darknet economy – A slight aside, but I saw this fascinating article on the mechanics of darknet markets. The research estimates that over an eight-month period, there were more than two thousand active vendors with 96,000 listings across more than 30 marketplaces. Collectively, they earned over $140 million in revenue.
  • Text-Based One-Time Passcodes – The SIM swap risk is genuine, and Coinbase is an exceptionally high-profile target given the untraceability of crypto transactions, so their analysis of account takeover methods is fascinating. 96% of all account takeovers were enabled by SMS/text-based codes, with significantly lower rates for Time-Based One-Time Passcodes (4%), Physical Keys (0.04%) and Push Notifications (0.2%). This is, of course, part of their education and marketing effort to move customers away from SMS (95% of customers) to their own app-based authentication (used by fewer than 5% of customers but protecting 57% of assets), so the takeover shares are roughly proportional to the usage of each factor. However, the precision of the reported figures tells its own story: a category reported at 0.04% must contain at least one incident, which implies at least 2,500 account takeovers in total and therefore almost 2,400 SMS compromises – significantly more than a handful (see the short calculation after this list).
  • Why do phishing emails have such obvious typos? – I found this theory that phishers are not just non-native English speakers with poor grammar; they actively include typos to filter out recipients with the attention to detail needed to spot the scam, leaving only the most promising victims. It makes complete sense and explains why, when you or I look at a phishing email, we can’t believe anyone fell for it.
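To make Troy Hunt’s idea a little more concrete, here is a minimal sketch of how a sign-up flow might use breach history as a crude “realness” signal. It assumes Have I Been Pwned’s v3 breachedaccount endpoint and a registered hibp-api-key; the helper names and the one-breach threshold are my own illustrative choices, not a recommended control.

```python
# Sketch: treat "has appeared in at least one breach" as weak evidence that a
# newly registered email address belongs to a real, long-lived identity.
# Assumes the HIBP v3 API: 200 + JSON list of breaches, or 404 if never seen.
from urllib.parse import quote

import requests

HIBP_API_KEY = "your-api-key"  # assumption: you have registered for an HIBP key


def breach_count(email: str) -> int:
    """Return how many known breaches this address appears in (0 if none)."""
    resp = requests.get(
        f"https://haveibeenpwned.com/api/v3/breachedaccount/{quote(email)}",
        headers={"hibp-api-key": HIBP_API_KEY, "user-agent": "signup-risk-check"},
        params={"truncateResponse": "true"},  # breach names only, no details
        timeout=10,
    )
    if resp.status_code == 404:  # address has never been seen in a breach
        return 0
    resp.raise_for_status()
    return len(resp.json())


def looks_like_a_real_person(email: str) -> bool:
    """Crude heuristic only: most genuine, long-lived addresses appear somewhere."""
    return breach_count(email) >= 1
```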
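And for anyone who wants to check the arithmetic behind the Coinbase point, the back-of-the-envelope calculation looks like this. The shares are the percentages reported above; the minimum-total logic is simply that a non-zero category reported as 0.04% must contain at least one incident.

```python
# Back-of-the-envelope check on the Coinbase account-takeover figures.
share_of_takeovers = {  # share of account takeovers by authentication factor, as reported
    "SMS / text-based codes": 0.96,
    "TOTP (authenticator app)": 0.04,
    "Push notifications": 0.002,
    "Physical security keys": 0.0004,
}

# A category reported as 0.04% must contain at least one incident, so the
# total number of takeovers is at least 1 / 0.0004 = 2,500.
min_total = 1 / min(share_of_takeovers.values())
min_via_sms = share_of_takeovers["SMS / text-based codes"] * min_total

print(f"Implied minimum account takeovers: {min_total:,.0f}")    # 2,500
print(f"Of which enabled by SMS codes:     {min_via_sms:,.0f}")  # 2,400
```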

Generative AI threat to Customer Security

Unless you were living under a rock over the holiday period, you can’t have failed to see the hype surrounding ChatGPT, DALL-E, Stable Diffusion and other generative AI technologies, which are capable of some amazingly human-like written, visual and even spoken responses. From a customer security perspective, I think the implications fall into one of three camps:

  • User-Targeted Social Engineering – Most mainstream media focused on the implications for schools, exams, and knowledge work in general. Still, in the darker corners of the internet, the implications of this type of technology being used to scale up social engineering, investment, romance and all other types of scam have not gone unnoticed. The ability to carry on thousands of realistic conversations will allow attackers to exploit more victims simultaneously and likely lower everyone’s level of trust in unsolicited communications.
  • Automating phone bots – Many organisations already spend an inordinate amount of time trying to prevent bots from attacking their websites, and sometimes their IVR systems, to access customer data. Still, these generative AI systems have the potential to go much further, engaging real humans in realistic conversations. Joshua Browder (@jbrowder1), CEO of donotpay.com, posted a fantastic video (since taken down) in which a speech-to-text engine transcribed a conversation with the Wells Fargo IVR and, subsequently, an agent, so that a GPT-3-based AI could request a refund for a charge on his account using a Resemble.ai clone of his own voice (a sketch of the shape of this pipeline follows this list). The agent was utterly convinced and processed the refund. The implication is that it will be increasingly difficult to tell whether a caller is a real person without deploying synthetic speech detection.
  • Deepfake impact on Voice Biometrics – Finally, as if it couldn’t get worse, Microsoft released a paper about their VALL-E text-to-speech engine, which they claim can create realistic utterances of a target individual from just 3 seconds of audio. The headline, of course, creates the fear that these engines could circumvent Voice Biometric systems. I’m not sure this claim (other than the approach and the size of the training set) is much different to a similar one made by Google last year, and if you actually listen to the samples provided, they are clearly synthesised. The big advance in both cases is how little data is needed to train a “sort of” realistic voice; the risk is that with just a few utterances from a victim, a fraudster could create a realistic conversation and circumvent Voice Biometric controls. The results, whilst impressive, are clearly not there yet, and several Voice Biometric vendors have confirmed the same to me. These are both research projects using enormous computational workloads and are not publicly available, but that doesn’t mean they won’t be, and won’t get better, at some point in the not-too-distant future. If you are not already deploying synthetic speech detection, then you really should be. Of course, in the grand scheme of things, this threat must be evaluated against the full range of easier-to-exploit weaknesses in customer-facing security processes.
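To make the phone-bot threat above more tangible, here is a rough sketch of the shape of the pipeline Browder demonstrated: transcribe the live call, ask a large language model what to say next, and speak the reply back in a cloned voice. This is not his code; transcribe_call_audio and speak_with_cloned_voice are hypothetical placeholders for a speech-to-text engine and a voice-cloning TTS service (Resemble.ai in the demo), and the GPT-3 call uses the OpenAI completion interface as it existed in early 2023.

```python
# Illustrative sketch only – not Browder's implementation.
# transcribe_call_audio() and speak_with_cloned_voice() are hypothetical placeholders
# for a real speech-to-text engine and a voice-cloning TTS service respectively.
import openai

openai.api_key = "sk-..."  # assumption: an OpenAI API key (early-2023 library interface)

GOAL = (
    "You are calling your bank to request a refund for a disputed charge. "
    "Answer the agent's questions politely and persistently."
)


def next_reply(transcript: str) -> str:
    """Ask GPT-3 what the caller should say next, given the call transcript so far."""
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"{GOAL}\n\nTranscript so far:\n{transcript}\n\nCaller:",
        max_tokens=150,
        temperature=0.7,
        stop=["Agent:"],
    )
    return completion.choices[0].text.strip()


def transcribe_call_audio(call) -> str:
    """Hypothetical: stream the agent's audio to a speech-to-text engine."""
    raise NotImplementedError


def speak_with_cloned_voice(call, text: str) -> None:
    """Hypothetical: synthesise the text in a cloned voice and play it into the call."""
    raise NotImplementedError


def handle_call(call) -> None:
    """Main loop: listen, decide what to say, speak – until the call ends."""
    transcript = ""
    while call.is_active():  # `call` is a hypothetical telephony session object
        transcript += f"\nAgent: {transcribe_call_audio(call)}"
        reply = next_reply(transcript)
        transcript += f"\nCaller: {reply}"
        speak_with_cloned_voice(call, reply)
```

The point of the sketch is how little glue code is needed: the hard parts (language, transcription, voice cloning) are all off-the-shelf services, which is exactly why this class of attack scales.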

I will be exploring Voice Biometrics vulnerabilities, including synthetic speech, in our community session on 4 May, but if you wish to discuss this before then, please reach out.

Past Events

  • Lloyds Banking Group – Lessons from Implementing Voice Biometrics and Network Authentication – 26 Jan – We are grateful to Andrea Ayres for telling the behind-the-scenes story of how their implementation of both Voice Biometrics and Network Authentication came to be. It was fascinating to hear that Voice Biometrics accounts for more than 50% of all telephone authentications, having completed more than 90 million authentications since launch. Andrea also gave a sneak peek at how Lloyds is using Network Authentication to further increase IVR authentication and automation rates, as well as the realities of implementing and operating these technologies. You can access the replay here.

Upcoming Events

  • GDPR, CCPA and BIPA’s impact on Voice Biometrics adoption – Help or hindrance? – 9 Feb – Learn from one of the world’s leading experts about the key privacy regulations that affect Voice Biometrics implementations in consumer-facing use cases. Covering the critical regulations from North America and Europe (BIPA, GDPR, CCPA, UK DPA, etc.), Douwe Korff will explain why meeting these regulations shouldn’t be seen as a barrier and how compliance can help improve user acceptance and adoption.
  • How to maximise Voice Biometrics adoption – Best practices from 25 million enrolments – 23 Feb – I will introduce the Voice Biometrics value chain and show how it can be used to understand registration (enrolment) performance. Using experience from more than 25 million registrations, I will cover the appropriate use of language, how best to position the offer with users, and how to manage agent performance without getting in the way of the customer’s intent.

You can see the complete upcoming calendar here.
