Benchmarking Azure Key Vault Decryption

Azure Key Vault stores keys, certificates and secrets and makes them available to applications safely. It can create and store asymmetric (RSA and EC) keys. These keys expose their public key material, but the private key never leaves Key Vault. To decrypt, the application makes a REST call to Key Vault, which returns the decrypted result. Libraries are available for various languages and frameworks (including .NET) that let developers do this seamlessly. Integrating Azure Key Vault with .NET applications is a straightforward process, although not widely documented.
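
As a minimal sketch, a decrypt call with the Azure.Security.KeyVault.Keys package looks something like this (the vault URI and key name are placeholders, and DefaultAzureCredential is an assumption for authentication):

using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Keys.Cryptography;

public static class DecryptExample
{
    public static async Task<byte[]> DecryptAsync(byte[] ciphertext)
    {
        // Hypothetical key identifier; the private key never leaves the vault.
        var keyId = new Uri("https://my-vault.vault.azure.net/keys/my-kek");
        var client = new CryptographyClient(keyId, new DefaultAzureCredential());

        // This call goes over REST to Key Vault, which performs the RSA decryption.
        DecryptResult result = await client.DecryptAsync(
            EncryptionAlgorithm.RsaOaep, ciphertext);
        return result.Plaintext;
    }
}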

One of the frequently recommended techniques for securing data is envelope encryption. This requires a Key Encryption Key (KEK), which is typically an RSA key stored in Azure Key Vault. A Data Encryption Key (DEK) is generated for each piece of data and is used to encrypt that data with a symmetric algorithm like AES. The DEK itself is then encrypted using the KEK and stored along with the data. When the data needs to be decrypted, the encrypted DEK is extracted, decrypted using the KEK, and the data is then decrypted using the recovered DEK.
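
Here is a minimal sketch of that flow, assuming the Azure.Security.KeyVault.Keys package; the vault URI and key name are placeholders:

using System;
using System.Security.Cryptography;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Security.KeyVault.Keys.Cryptography;

public static class EnvelopeExample
{
    public static async Task<byte[]> RoundTripAsync(byte[] data)
    {
        // Hypothetical KEK identifier; replace with your own vault and key.
        var keyId = new Uri("https://my-vault.vault.azure.net/keys/my-kek");
        var kek = new CryptographyClient(keyId, new DefaultAzureCredential());

        // 1. Generate a fresh DEK (Aes.Create() picks a random key and IV)
        //    and encrypt the data locally.
        using var aes = Aes.Create();
        byte[] encryptedData;
        using (var encryptor = aes.CreateEncryptor())
            encryptedData = encryptor.TransformFinalBlock(data, 0, data.Length);

        // 2. Wrap the DEK with the KEK held in Key Vault.
        //    Store encryptedData, aes.IV and wrapped.EncryptedKey together.
        WrapResult wrapped = await kek.WrapKeyAsync(KeyWrapAlgorithm.RsaOaep, aes.Key);

        // 3. Later: unwrap the DEK (one REST call per message), then decrypt locally.
        UnwrapResult unwrapped = await kek.UnwrapKeyAsync(
            KeyWrapAlgorithm.RsaOaep, wrapped.EncryptedKey);
        using var aesIn = Aes.Create();
        aesIn.Key = unwrapped.Key;
        aesIn.IV = aes.IV;
        using var decryptor = aesIn.CreateDecryptor();
        return decryptor.TransformFinalBlock(encryptedData, 0, encryptedData.Length);
    }
}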

In Azure Key Vault, there is no cost associated with encryption. Encrypt, wrap, and verify public-key operations can be performed with no access to Key Vault, which not only reduces the risk of throttling but also improves reliability (as long as you properly cache the public key material). However, at the time of writing this article, operations against all keys (software-protected and HSM-protected), secrets and certificates are billed at a flat rate of £0.023 per 10,000 operations, which works out to about £2.30 per million operations. If you have millions of messages flowing through your system, you have to wonder how much decryption will cost you in money. And since decrypting the DEK means a REST call per message, time should be a concern too.
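
As a sketch of that local approach (again assuming the Azure.Security.KeyVault.Keys package; the vault URI and key name are placeholders), you can fetch the public key once, cache it, and wrap DEKs locally without touching Key Vault again:

using System;
using System.Security.Cryptography;
using Azure.Identity;
using Azure.Security.KeyVault.Keys;

public static class LocalWrapExample
{
    // Cache the RSA public key so encryption never hits Key Vault.
    private static readonly RSA PublicKek = FetchPublicKey();

    private static RSA FetchPublicKey()
    {
        var client = new KeyClient(
            new Uri("https://my-vault.vault.azure.net/"),   // placeholder vault
            new DefaultAzureCredential());
        KeyVaultKey key = client.GetKey("my-kek").Value;    // placeholder key name

        // ToRSA() exposes only the public parameters; the private key stays in the vault.
        return key.Key.ToRSA();
    }

    // RSA-OAEP in Key Vault corresponds to OAEP with SHA-1 padding locally.
    public static byte[] WrapDek(byte[] dek) =>
        PublicKek.Encrypt(dek, RSAEncryptionPadding.OaepSHA1);
}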

In my quest to find out how long a decryption operation takes, I did not have much luck googling, so I decided to benchmark it myself. I used the Azure Key Vault libraries available for .NET Core, with BenchmarkDotNet as the harness (a sketch of the benchmark follows the list below). For testing purposes, my Azure Key Vault is located in the UK South region. I tried two scenarios, running the decryption 1000 times per scenario:
  1. Run the benchmark on my local machine (situated in Hyderabad, India) over a decent internet connection.
  2. Run the same benchmark in an Azure VM located in UK South region.
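
For reference, here is a minimal sketch of what such a benchmark can look like with BenchmarkDotNet (the vault URI and key name are placeholders; the actual benchmark code may have differed):

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Keys.Cryptography;
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Running;

public class KeyVaultBenchmark
{
    private CryptographyClient _client;
    private byte[] _ciphertext;

    [GlobalSetup]
    public void Setup()
    {
        // Hypothetical key identifier; replace with your own vault and key.
        var keyId = new Uri("https://my-vault.vault.azure.net/keys/my-kek");
        _client = new CryptographyClient(keyId, new DefaultAzureCredential());

        // Encrypt a sample 32-byte DEK once; every iteration decrypts the same payload.
        var dek = new byte[32];
        new Random(42).NextBytes(dek);
        _ciphertext = _client.Encrypt(EncryptionAlgorithm.RsaOaep, dek).Ciphertext;
    }

    [Benchmark]
    public byte[] Decrypt() =>
        _client.Decrypt(EncryptionAlgorithm.RsaOaep, _ciphertext).Plaintext;
}

public static class Program
{
    public static void Main() => BenchmarkRunner.Run<KeyVaultBenchmark>();
}
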
Let's see the results.

For scenario 1 (local machine):
BenchmarkDotNet=v0.12.1, OS=Windows 10.0.18363.836 (1909/November2018Update/19H2)
Intel Core i5-8265U CPU 1.60GHz (Whiskey Lake), 1 CPU, 8 logical and 4 physical cores
.NET Core SDK=3.1.300
  [Host]     : .NET Core 3.1.4 (CoreCLR 4.700.20.20201, CoreFX 4.700.20.22101), X64 RyuJIT
  DefaultJob : .NET Core 3.1.4 (CoreCLR 4.700.20.20201, CoreFX 4.700.20.22101), X64 RyuJIT


|  Method |     Mean |   Error |  StdDev |
|-------- |---------:|--------:|--------:|
| Decrypt | 203.0 ms | 3.33 ms | 2.95 ms |

For scenario 2 (Azure VM in the same region):
BenchmarkDotNet=v0.12.1, OS=Windows 10.0.17763.1217 (1809/October2018Update/Redstone5)
Intel Xeon Platinum 8171M CPU 2.60GHz, 1 CPU, 2 logical cores and 1 physical core
.NET Core SDK=3.1.300
  [Host]     : .NET Core 3.1.4 (CoreCLR 4.700.20.20201, CoreFX 4.700.20.22101), X64 RyuJIT
  DefaultJob : .NET Core 3.1.4 (CoreCLR 4.700.20.20201, CoreFX 4.700.20.22101), X64 RyuJIT


|  Method |     Mean |    Error |   StdDev |
|-------- |---------:|---------:|---------:|
| Decrypt | 15.31 ms | 0.305 ms | 0.669 ms |

The decryption time drops drastically when running in an Azure VM in the same region as the Key Vault. This is expected, since most of the latency when running on a local machine comes from internet connectivity. However, this time is not zero. At a 15 ms response time, decrypting the DEKs for 1 million messages sequentially would take close to 4 hours (1,000,000 × 15 ms ≈ 15,000 seconds ≈ 4.2 hours). Then there would be some time taken to decrypt the actual data too. Azure Key Vault is a fantastic piece of technology, but it helps to know its limitations too.

Hope this research helps when you are designing a system's Disaster Recovery response and trying to achieve your RTO and RPO targets.
