Database passwords, account passwords, API keys, private keys, and other confidential data... A modern cloud application built from multiple microservices is filled with secrets that need to be separated and managed. While researching how we would improve and automate secrets management for Stackery customers, I found that much of the advice online is bad. For example, quite a few popular tutorials suggest storing passwords in environment variables or AWS Parameter Store. Both are bad ideas that make your serverless apps less secure and introduce scalability problems.
Here are the top 3 bad ideas for handling serverless secrets:
Using environment variables to pass configuration into your serverless functions is a common best practice for separating config from your source code. However, environment variables should never be used to pass secrets such as passwords, API keys, credentials, and other confidential information.
Never store secrets in environment variables. The risk of accidental exposure is exceedingly high: environment variables can show up in the Lambda console and in your deployment templates, they're inherited by child processes, and they're easily swept up by logging and error-reporting tools.
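To make the exposure risk concrete, here's a sketch (the `DB_PASSWORD` value is made up) of how a single stray debug line leaks every secret that lives in the environment:

```python
import os

# Hypothetical secret injected through the function's environment configuration
os.environ["DB_PASSWORD"] = "hunter2"

def leaked_environment() -> dict:
    """Simulates any dependency, error tracker, or debug statement that
    dumps the process environment, e.g. print(dict(os.environ))."""
    return dict(os.environ)

# Every secret in the environment rides along with ordinary config values:
assert "DB_PASSWORD" in leaked_environment()
```

The point is that the environment is a single, global bag: anything that reads it for legitimate config reasons also gets the secrets.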
At Stackery we never put secrets in environment variables. Instead we fetch secrets from AWS Secrets Manager at runtime and store them in local variables while they're in use. This makes it very difficult for secrets to be logged or otherwise exfiltrated from the runtime environment.
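A minimal sketch of this pattern in Python, assuming secrets are named with an environment prefix (the naming scheme and the `ENVIRONMENT` variable here are illustrative, not Stackery's actual convention):

```python
import json
import os

# Assumption: secrets are stored under "<environment>/<name>" in AWS Secrets
# Manager; your naming scheme may differ.
def secret_id(environment: str, name: str) -> str:
    """Build the fully qualified secret id for one environment."""
    return f"{environment}/{name}"

def get_secret(name: str) -> dict:
    """Fetch a secret at runtime; it lives only in local variables."""
    import boto3  # imported lazily so the pure helper above is testable offline
    client = boto3.client("secretsmanager")
    environment = os.environ.get("ENVIRONMENT", "development")  # config, not a secret
    response = client.get_secret_value(SecretId=secret_id(environment, name))
    return json.loads(response["SecretString"])

def handler(event, context):
    # The password never touches os.environ, so dumping the environment
    # or logging a config object cannot leak it.
    credentials = get_secret("database")
    # ... open a connection with credentials["password"], then let it go out of scope
    return {"statusCode": 200}
```

Because the secret only ever exists in function-local scope, it disappears when the handler returns rather than lingering for the lifetime of the process.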
Secrets should always be encrypted at rest and encrypted in transit. By now we all know that keeping secrets in source code is a bad idea, so if they can't live in git with your code, where should you keep them? There's a lot of bad advice online suggesting that AWS Systems Manager Parameter Store (aka SSM) is a good place to store your secrets. Like environment variables, Parameter Store is good for configuration but terrible for secrets.
AWS Systems Manager Parameter Store falls short as a secrets backend in a few key areas: parameters are stored as plain text unless you remember to opt in to the SecureString type, the Parameter Store API enforces shared throughput limits, so a traffic spike across your functions can throttle secret reads exactly when you need them most, and there is no built-in secret rotation.
At Stackery we use AWS Secrets Manager which stores secrets securely with fine grained access policies, auto-scales to handle traffic spikes, and is straightforward to query at runtime.
Each function in your application should only have access to the secrets it needs to do its work. However, it's very common for teams to run configurations (often unintentionally) where every function is granted access to all secrets from all environments by default. These "/*" permissions mean a compromised function in a test environment can be used to fetch every production secret from the secrets store, which is a bad idea for obvious reasons. Access should be tightly scoped by environment and usage, with functions defaulting to no secrets access.
At Stackery we automatically scope an IAM role per function and per Fargate container task, limiting AWS Secrets Manager access to the environment the function is running in and to the set of secrets required by that specific function.
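As a sketch, a per-function IAM policy with this kind of scoping might look like the following; the `production/orders-service/*` prefix, region, and account id are illustrative, and the trailing wildcard also accommodates the random suffix AWS appends to Secrets Manager ARNs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:us-east-1:123456789012:secret:production/orders-service/*"
    }
  ]
}
```

A test-environment function would get the same shape of policy with a different prefix, so its credentials are useless against production secrets.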
Our team has learned a lot about managing serverless secrets from running production serverless applications and from working with many serverless teams and pioneers. We've folded these best practices back into Stackery so serverless teams can easily layer secure secrets management onto their existing projects. If you're curious to read more about how Stackery handles secrets, check out Environment Secrets in the Stackery Docs.