Benchmarking Azure Key Vault Decryption

Azure Key Vault is used to store Keys, Certificates and Secrets and make them available to applications securely. It can create and store asymmetric (RSA and EC) keys. These keys expose their public key material, but the private key never leaves Key Vault. To decrypt, the application makes a REST call to Key Vault, which returns the decrypted result. Libraries are available for various languages and frameworks (including .NET) that let developers do this seamlessly. Integrating Azure Key Vault with .NET applications is a straightforward process, although not widely documented.
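
As an illustration, a decrypt call with the newer Azure.Security.KeyVault.Keys package looks roughly like this; the vault URL and key name are placeholders, and the original benchmark may well have used a different client library:

using System;
using System.Text;
using Azure.Identity;
using Azure.Security.KeyVault.Keys.Cryptography;

class DecryptExample
{
    static void Main()
    {
        // Placeholder key URI; DefaultAzureCredential picks up whatever
        // identity is available (Azure CLI login, managed identity, etc.).
        var crypto = new CryptographyClient(
            new Uri("https://my-vault.vault.azure.net/keys/my-rsa-key"),
            new DefaultAzureCredential());

        byte[] plaintext = Encoding.UTF8.GetBytes("hello");

        // Encryption only needs the public key, but Decrypt is a REST call to
        // Key Vault: the private key never leaves the vault.
        EncryptResult encrypted = crypto.Encrypt(EncryptionAlgorithm.RsaOaep, plaintext);
        DecryptResult decrypted = crypto.Decrypt(EncryptionAlgorithm.RsaOaep, encrypted.Ciphertext);

        Console.WriteLine(Encoding.UTF8.GetString(decrypted.Plaintext));
    }
}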

One of the frequently recommended techniques for securing data is Envelope Encryption. This requires a Key Encryption Key (KEK), which is typically an RSA key stored in Azure Key Vault. A Data Encryption Key (DEK) is generated for each piece of data and used to encrypt it with a symmetric algorithm like AES. The DEK is then itself encrypted (wrapped) using the KEK and stored along with the data. When the data needs to be decrypted, the wrapped DEK is extracted, decrypted using the KEK, and the data itself is then decrypted using the recovered DEK.
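
A rough sketch of this flow, using the CryptographyClient's WrapKey/UnwrapKey operations and .NET's built-in AES support, is shown below; the vault URL and key name are placeholders, and IV handling and error handling are deliberately simplified:

using System;
using System.Security.Cryptography;
using System.Text;
using Azure.Identity;
using Azure.Security.KeyVault.Keys.Cryptography;

class EnvelopeExample
{
    static void Main()
    {
        // KEK held in Key Vault (placeholder URI).
        var kek = new CryptographyClient(
            new Uri("https://my-vault.vault.azure.net/keys/my-kek"),
            new DefaultAzureCredential());

        byte[] data = Encoding.UTF8.GetBytes("sensitive payload");

        // 1. Generate a fresh DEK and encrypt the data locally with AES.
        using var aes = Aes.Create();
        byte[] encryptedData = aes.CreateEncryptor().TransformFinalBlock(data, 0, data.Length);

        // 2. Wrap (encrypt) the DEK with the KEK; store the wrapped DEK, the IV
        //    and the ciphertext together.
        WrapResult wrapped = kek.WrapKey(KeyWrapAlgorithm.RsaOaep, aes.Key);

        // 3. To read the data back: unwrap the DEK via Key Vault, then decrypt locally.
        UnwrapResult unwrapped = kek.UnwrapKey(KeyWrapAlgorithm.RsaOaep, wrapped.EncryptedKey);
        using var aesForDecrypt = Aes.Create();
        aesForDecrypt.Key = unwrapped.Key;
        aesForDecrypt.IV = aes.IV;
        byte[] roundTrip = aesForDecrypt.CreateDecryptor()
            .TransformFinalBlock(encryptedData, 0, encryptedData.Length);

        Console.WriteLine(Encoding.UTF8.GetString(roundTrip));
    }
}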

In Azure Key Vault, there is no cost associated with encryption. Encrypt, wrap, and verify public-key operations can be performed without any access to Key Vault, which not only reduces the risk of throttling but also improves reliability (as long as you properly cache the public key material). However, at the time of writing this article, operations against all keys (software-protected and HSM-protected), secrets and certificates are billed at a flat rate of £0.023 per 10,000 operations. If you have millions of messages flowing through your system, you have to wonder what decryption will cost you in money. And since decrypting the DEK means a REST call per message, time should be a concern too.
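
For example, a client can fetch the public key material once and wrap DEKs locally from then on; only unwrapping still requires a round trip to Key Vault. A minimal sketch, again with a placeholder vault URL and key name:

using System;
using System.Security.Cryptography;
using Azure.Identity;
using Azure.Security.KeyVault.Keys;

class CachedPublicKeyExample
{
    static void Main()
    {
        var keyClient = new KeyClient(
            new Uri("https://my-vault.vault.azure.net/"),
            new DefaultAzureCredential());

        // One REST call to fetch the key; only the public material comes back.
        KeyVaultKey kek = keyClient.GetKey("my-kek");
        using RSA rsaPublic = kek.Key.ToRSA();

        // Wrapping a DEK now happens entirely locally, so it is free and cannot
        // be throttled. Unwrapping still needs a Decrypt/UnwrapKey call to Key Vault.
        byte[] dek = new byte[32];
        using var rng = RandomNumberGenerator.Create();
        rng.GetBytes(dek);
        byte[] wrappedDek = rsaPublic.Encrypt(dek, RSAEncryptionPadding.OaepSHA1);
    }
}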

In my quest to find out the time needed for a decryption operation, I did not have much luck googling, so I decided to benchmark it myself using the Azure Key Vault libraries available for .NET Core (a sketch of the benchmark follows the list below). For testing purposes, my Azure Key Vault is located in the UK South region. I tried two scenarios, running the decryption 1000 times per scenario:
  1. Run the benchmark on my local machine (situated in Hyderabad, India) over a decent internet connection.
  2. Run the same benchmark in an Azure VM located in UK South region.
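
The benchmark itself was roughly of the shape below; the exact key, payload and BenchmarkDotNet configuration shown here are assumptions, not the original code:

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Keys.Cryptography;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class KeyVaultDecryptBenchmark
{
    private CryptographyClient _client;
    private byte[] _ciphertext;

    [GlobalSetup]
    public void Setup()
    {
        // Placeholder key URI; in the real runs this pointed at a key in UK South.
        _client = new CryptographyClient(
            new Uri("https://my-vault.vault.azure.net/keys/my-kek"),
            new DefaultAzureCredential());

        // Encrypt once so every benchmark iteration has something to decrypt.
        _ciphertext = _client.Encrypt(EncryptionAlgorithm.RsaOaep, new byte[32]).Ciphertext;
    }

    [Benchmark]
    public byte[] Decrypt() => _client.Decrypt(EncryptionAlgorithm.RsaOaep, _ciphertext).Plaintext;
}

public class Program
{
    public static void Main() => BenchmarkRunner.Run<KeyVaultDecryptBenchmark>();
}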
Let's see the results.

For scenario 1 (local machine):
BenchmarkDotNet=v0.12.1, OS=Windows 10.0.18363.836 (1909/November2018Update/19H2)
Intel Core i5-8265U CPU 1.60GHz (Whiskey Lake), 1 CPU, 8 logical and 4 physical cores
.NET Core SDK=3.1.300
  [Host]     : .NET Core 3.1.4 (CoreCLR 4.700.20.20201, CoreFX 4.700.20.22101), X64 RyuJIT
  DefaultJob : .NET Core 3.1.4 (CoreCLR 4.700.20.20201, CoreFX 4.700.20.22101), X64 RyuJIT


| Method  |     Mean |   Error |  StdDev |
|-------- |---------:|--------:|--------:|
| Decrypt | 203.0 ms | 3.33 ms | 2.95 ms |

For scenario 2 (Azure VM in same region):
BenchmarkDotNet=v0.12.1, OS=Windows 10.0.17763.1217 (1809/October2018Update/Redstone5)
Intel Xeon Platinum 8171M CPU 2.60GHz, 1 CPU, 2 logical cores and 1 physical core
.NET Core SDK=3.1.300
  [Host]     : .NET Core 3.1.4 (CoreCLR 4.700.20.20201, CoreFX 4.700.20.22101), X64 RyuJIT
  DefaultJob : .NET Core 3.1.4 (CoreCLR 4.700.20.20201, CoreFX 4.700.20.22101), X64 RyuJIT


| Method  |     Mean |    Error |   StdDev |
|-------- |---------:|---------:|---------:|
| Decrypt | 15.31 ms | 0.305 ms | 0.669 ms |

The decryption time drops drastically when running in an Azure VM in the same region as the Key Vault. This is expected, as most of the latency when running on the local machine comes from internet connectivity. However, this time is not zero. At roughly 15 ms per call, decrypting the DEKs for 1 million messages one after another works out to about 15,000 seconds, or a little over 4 hours, and that is before the time taken to decrypt the actual data. Azure Key Vault is a fantastic piece of technology, but it helps to know its limitations too.

Hope this research helps when you are designing a system's Disaster Recovery response and trying to achieve your RTO and RPO targets.
