I'd like to discuss how to secure your (Open)APIs using the tools that already exist in the AWS services we're using, and how AWS WAF (Web Application Firewall) can potentially assist (for a price).
We'll cover the following topics:
- Introduction
- OWASP Top 10
- (1) Injection
- (2) Broken Authentication
- (3) Sensitive data exposure
- (4) XML External Entities (XXE)
- (5) Broken Access Control
- (6) Security Misconfiguration
- (7) Cross-Site Scripting (XSS)
- (8) Insecure Deserialization
- (9) Using Components with Known Vulnerabilities
- (10) Insufficient Logging & Monitoring
- In closing
- Further reading
Introduction
To give this discussion better context, I'll be using the OWASP Top 10 Web Application Vulnerabilities as a guiding list.
The architecture to be considered for this article is my reference application architecture used for this series of OpenAPI articles.
It features:
- API Gateway, which exposes all the available services
- Lambda, which contains the code that ultimately exposes/processes data to/from the APIs
- Cognito for identity management: identity registration, login, and session management
- CloudWatch for error, info, and debug logging, plus the Alarms service
- SNS, a reliable and durable pub/sub message-based integration service
- AWS CodePipeline and CodeBuild, CI/CD for Lambda code deployments
Normally you would also use a data source; here we can assume we're using either NoSQL (DynamoDB) or a SQL variant (MySQL, PostgreSQL).
OWASP Top 10
The OWASP list represents a broad consensus about the most critical security risks to web applications.
For each of the 10 items I'll go through the general risks associated with it, then the possible solutions to apply for each of the relevant AWS services. In the conclusion of each chapter I'll briefly state my recommendation.
(1) Injection
Risks
Injection risks are most prevalent in environment variables, (No)SQL queries, JSON/XML parsers, and API query/body parameters.
Solutions
AWS API Gateway & OpenAPI
To mitigate any type of injection attempt, we can enforce correct input on our APIs. To do that we can use the concept of "Models" in API Gateway. These models are essentially JSON Schemas that define what type of data is accepted as input, or what is given as output after execution.
For example, this register model has a regex pattern that defines what kind of information is allowed as input to APIs using this model:
{
  "title": "Register",
  "required": ["email", "firstName", "lastName", "password", "username"],
  "type": "object",
  "properties": {
    "email": {
      "pattern": "^[_A-Za-z0-9-\\+]+(\\.[_A-Za-z0-9-]+)*@[A-Za-z0-9-]+(\\.[A-Za-z0-9]+)*(\\.[A-Za-z]{2,})$",
      "type": "string"
    },
    "password": {
      "type": "string"
    },
    "username": {
      "type": "string"
    },
    "firstName": {
      "type": "string"
    },
    "lastName": {
      "type": "string"
    }
  },
  "description": "Registration details"
}
When you enable validation on your API resource in your OpenAPI specification, API Gateway will use this JSON Schema to automatically validate your input. This supports regex pattern validation as well, not just the required properties(!). See this AWS documentation for more information.
/identity/register:
  post:
    tags:
      - 'Identity'
    description: 'Register new Business user'
    operationId: 'identityRegister'
    requestBody:
      description: 'Registration details'
      content:
        application/json:
          schema:
            $ref: '#/components/schemas/Register'
      required: true
    x-amazon-apigateway-request-validator: full
The request validator selects which validation configuration to use for that particular endpoint. Elsewhere, in the root of the document, we configure the variants like so:
x-amazon-apigateway-request-validators:
  full:
    validateRequestBody: true
    validateRequestParameters: true
  body-only:
    validateRequestBody: true
    validateRequestParameters: false
Of course, you will still have to do more data validation within your application code; a plain string type will accept anything. Here you can make a trade-off between API Gateway validation and code-based validation. The advantage of API Gateway validation is that a rejected request never reaches your code, so you will not be charged for any code execution at all.
When you call an API with this configuration and supply incorrect input, this is the response:
{
  "message": "Invalid request body"
}
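Coming back to that trade-off: below is a minimal sketch of the complementary, code-based validation in a Node.js Lambda handler. The password length limits are hypothetical assumptions, not values from this article's reference application.

// handler.js — hypothetical validation beyond what the JSON Schema can express
exports.handler = async (event) => {
  const body = JSON.parse(event.body)

  // The schema only guarantees `password` is a string; enforce strength here.
  const password = body.password
  if (typeof password !== 'string' || password.length < 8 || password.length > 128) {
    return {
      statusCode: 400,
      body: JSON.stringify({ message: 'Invalid request body' }),
    }
  }

  // ... continue with registration ...
  return { statusCode: 200, body: JSON.stringify({ message: 'OK' }) }
}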
AWS Lambda
By default, all environment variables are encrypted at rest; see here if you want to use your own encryption keys (CMK).
If you want to encrypt the environment variables in transit, you can use a key managed by AWS KMS to do that. See this article explaining how to create the key, and how to manage encryption and decryption in AWS Lambda.
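For illustration, a minimal sketch of decrypting such a value at function start-up with the KMS client from the aws-sdk; the DB_PASSWORD variable name is a hypothetical example:

const AWS = require('aws-sdk')
const kms = new AWS.KMS()

// Hypothetical: DB_PASSWORD holds a base64-encoded, KMS-encrypted ciphertext.
let dbPassword

exports.handler = async (event) => {
  if (!dbPassword) {
    const data = await kms
      .decrypt({ CiphertextBlob: Buffer.from(process.env.DB_PASSWORD, 'base64') })
      .promise()
    dbPassword = data.Plaintext.toString('utf-8') // cache for warm invocations
  }
  // ... use dbPassword to connect to the database ...
}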
AWS WAF
The Web Application Firewall supports OSI Layer 7 filtering, meaning it can filter input at the application level. This is similar to the filtering shown previously, but it is automated, and a blocked request won't reach your API at all.
This Github project implements all the OWASP Top 10 rules in AWS WAF for you, if you wish. There is a downside with respect to the pricing model that you need to be aware of, so a trade-off needs to be made between code-level protection and protection as an AWS service.
resource "aws_wafregional_sql_injection_match_set" "owasp_01_sql_injection_set" {
  count = "${lower(var.target_scope) == "regional" ? "1" : "0"}"
  name  = "${lower(var.service_name)}-owasp-01-detect-sql-injection-${random_id.this.0.hex}"

  sql_injection_match_tuple {
    text_transformation = "URL_DECODE"
    field_to_match {
      type = "URI"
    }
  }

  sql_injection_match_tuple {
    text_transformation = "HTML_ENTITY_DECODE"
    field_to_match {
      type = "URI"
    }
  }

  sql_injection_match_tuple {
    text_transformation = "URL_DECODE"
    field_to_match {
      type = "QUERY_STRING"
    }
  }

  sql_injection_match_tuple {
    text_transformation = "HTML_ENTITY_DECODE"
    field_to_match {
      type = "QUERY_STRING"
    }
  }

  sql_injection_match_tuple {
    text_transformation = "URL_DECODE"
    field_to_match {
      type = "BODY"
    }
  }

  sql_injection_match_tuple {
    text_transformation = "HTML_ENTITY_DECODE"
    field_to_match {
      type = "BODY"
    }
  }

  sql_injection_match_tuple {
    text_transformation = "URL_DECODE"
    field_to_match {
      type = "HEADER"
      data = "Authorization"
    }
  }

  sql_injection_match_tuple {
    text_transformation = "HTML_ENTITY_DECODE"
    field_to_match {
      type = "HEADER"
      data = "Authorization"
    }
  }
}
resource "aws_wafregional_rule" "owasp_01_sql_injection_rule" {
  depends_on  = ["aws_wafregional_sql_injection_match_set.owasp_01_sql_injection_set"]
  count       = "${lower(var.target_scope) == "regional" ? "1" : "0"}"
  name        = "${lower(var.service_name)}-owasp-01-mitigate-sql-injection-${random_id.this.0.hex}"
  metric_name = "${lower(var.service_name)}OWASP01MitigateSQLInjection${random_id.this.0.hex}"

  predicate {
    data_id = "${aws_wafregional_sql_injection_match_set.owasp_01_sql_injection_set.0.id}"
    negated = "false"
    type    = "SqlInjectionMatch"
  }
}
This particular example consists of 8 rules within one Web ACL. The total cost of activating them is:
| Resource type | Number | Cost |
| --- | --- | --- |
| Web ACL | 1 | $5.00 (pro-rated hourly) |
| Rule | 8 | $8.00 (pro-rated hourly) |
| Request | per 1 million | $0.60 |
| Total | | $13.60 / month |
Based on 1 million requests to your API, you'll pay $13.60 per month for protection against all types of SQL injection attacks, on top of the regular API Gateway request costs.
Here there's a definite trade-off between convenience, cost, traffic, and trusting development-level security over DevOps rule-based systems such as AWS WAF.
Conclusion
For most applications, I would say AWS WAF for SQL injection is not worth enabling over API Gateway input validation combined with application-level input filtering.
If you have real-world experience using AWS WAF in large scale applications, let me know in the comments.
(2) Broken Authentication
Risks
Here we want to avoid dictionary attacks on your authentication process, and/or broken or badly designed authentication mechanisms.
Solutions
AWS API Gateway & OpenAPI
For our APIs we want to make sure we enable authentication as much as possible. To enable this in our OpenAPI specification, we add the security parameter as follows:
paths:
  /user:
    get:
      operationId: getUser
      description: get User details by ID
      parameters:
        - $ref: '#/components/parameters/userID'
      security:
        - example-CognitoUserPoolAuthorizer: []
That name is a reference to the Cognito user pool authorizer configuration:
components:
  securitySchemes:
    example-CognitoUserPoolAuthorizer:
      type: 'apiKey'
      name: 'Authorization'
      in: 'header'
      x-amazon-apigateway-authtype: 'cognito_user_pools'
      x-amazon-apigateway-authorizer:
        providerARNs:
          - '${cognito_user_pool_arn}'
        type: 'cognito_user_pools'
Once this is enabled for an API, a client needs a registered user in the Cognito User Pool before it can be authenticated and receive a JWT token as proof it's authorized to execute the API.
AWS Cognito
In Cognito we need to do several things to make sure we defend against potential risks in these areas:
- Dictionary attacks / brute force attacks
- Password length and strength requirements
- Multi-factor authentication
- Rotation of authorization session IDs
- Invalidation of session IDs
First of all, we want to make sure passwords are at least 8 characters long, as recommended by NIST. Within the aws_cognito_user_pool resource, we set the password policy as follows:
resource "aws_cognito_user_pool" "_" {
  # ... other configuration ...

  password_policy {
    minimum_length    = 8
    require_uppercase = true
    require_lowercase = true
    require_numbers   = true
    require_symbols   = true
  }

  # ... other configuration ...
}
Then to enable multi-factor authentication:
resource "aws_cognito_user_pool" "_" {
  # ... other configuration ...

  mfa_configuration          = "ON"
  sms_authentication_message = "Your code is {####}"

  sms_configuration {
    external_id    = "example"
    sns_caller_arn = aws_iam_role.sns_caller.arn
  }

  software_token_mfa_configuration {
    enabled = true
  }

  # ... other configuration ...
}
When mfa_configuration is set to ON, everyone is required to have either SMS or software MFA enabled.
If you want, Cognito supports more advanced security features for additional cost. The pricing example given for 100,000 monthly active users with advanced security enabled comes to $4,525 per month(!). Again, ask yourself whether this is worth the investment.
The features provided are:
- Checks for compromised credentials: scans username/password pairs to see if they have been leaked anywhere.
- Adaptive authentication: determines a risk level for each authentication attempt and can block the sign-in if it's determined to be suspicious.
- Publishing of security metrics: reports all the sign-in metrics to your CloudWatch logs.
This will obviously require you to integrate the Cognito SDK into your app or web application.
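For context, a minimal sketch of a username/password sign-in using the amazon-cognito-identity-js SDK; the pool ID, client ID, and credentials are placeholders:

const AmazonCognitoIdentity = require('amazon-cognito-identity-js')

const userPool = new AmazonCognitoIdentity.CognitoUserPool({
  UserPoolId: 'eu-west-1_XXXXXXXXX', // placeholder
  ClientId: 'XXXXXXXXXXXXXXXXXXXXXXXXXX', // placeholder
})

const user = new AmazonCognitoIdentity.CognitoUser({ Username: 'jane', Pool: userPool })
const details = new AmazonCognitoIdentity.AuthenticationDetails({
  Username: 'jane',
  Password: 'correct-horse-battery-staple',
})

user.authenticateUser(details, {
  onSuccess: (session) => console.log(session.getIdToken().getJwtToken()),
  onFailure: (err) => console.error(err),
  // Triggered because the pool has mfa_configuration set to ON above.
  mfaRequired: () =>
    user.sendMFACode('123456', {
      onSuccess: (session) => console.log(session.getIdToken().getJwtToken()),
      onFailure: (err) => console.error(err),
    }),
})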
Conclusion
Properly implementing API Gateway security and Cognito authentication with sensible password requirements and 2FA can already mitigate most issues indicated by OWASP.
If you have stricter security requirements for your app, the additional advanced security features offered by Cognito can be worthwhile investigating.
(3) Sensitive data exposure
Risks
Obvious risks here are not encrypting data in transit or at rest, weak cryptography that can be reverse-engineered, and man-in-the-middle attacks.
Solutions
All data on the AWS Services side is encrypted in transit using TLS/SSL. We're now mostly looking at how you can protect data at rest.
AWS DynamoDB
For DynamoDB we want to make sure we enable encryption as follows:
resource "aws_dynamodb_table" "_" {
  # ... other configuration ...

  server_side_encryption {
    enabled = true
  }

  # ... other configuration ...
}
For more information on CMK or updating table encryption, see this official documentation.
For further best practices regarding security on DynamoDB, see this official documentation.
Since DynamoDB access is governed by IAM roles, it's not strictly necessary to use a VPC to restrict public access. If you want to follow best practices, though, a VPC setup is recommended here and can ensure all traffic stays within the AWS network for enhanced security.
AWS RDS
When creating a SQL-based data store, we can use the following to enable encryption:
resource "aws_db_instance" "_" {
  # ... other configuration ...

  storage_encrypted   = true
  publicly_accessible = false

  # ... other configuration ...
}
You would also be well advised not to allow public access to the database. A VPC should be configured to allow only private subnets access to your database to enhance security.
To connect to RDS you can use its public SSL certificate to ensure data transmissions are encrypted.
See this article for general security guidelines for AWS RDS.
AWS Lambda
Before storing sensitive data such as passwords, it's advised to use proper password-hashing algorithms. These algorithms have a work-factor delay that makes brute forcing very difficult: Argon2, scrypt, bcrypt, or PBKDF2.
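As a sketch, hashing and verifying a password with the bcryptjs library (one implementation of the bcrypt algorithm above); the cost factor of 12 is an assumption, tune it to your latency budget:

const bcrypt = require('bcryptjs')

// The cost factor (12) controls the work-factor delay; raise it as hardware improves.
async function hashPassword(password) {
  return bcrypt.hash(password, 12)
}

async function verifyPassword(password, storedHash) {
  return bcrypt.compare(password, storedHash)
}

// Usage:
// const hash = await hashPassword('correct-horse-battery-staple')
// const ok = await verifyPassword('correct-horse-battery-staple', hash) // true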
Conclusion
Depending on which database you're using, the options are available to ensure encryption in transit and at rest. The next thing is to prevent public access as much as possible using a VPC setup.
(4) XML External Entities (XXE)
Risks
A lot of systems still use XML in one form or another; think of SOAP API services and integration-layer messaging systems. This risk is about automated XML processing, mostly with outdated library dependencies.
Solutions
AWS Lambda
Ensure your XML processing libraries are up to date, and check the Example Attack Scenarios here to get an idea of the attack vectors. Use this cheat sheet to find out exactly how to mitigate the risk for your programming language.
Conclusion
To be honest, I don't use XML at all. We have just one project where we need to ingest XML coming from a USA government agency, and that particular format does not even conform to XML standards. We had to build a custom parser to deal with it.
(5) Broken Access Control
Risks
Broken access control means attackers can get access to information that does not belong to the user session. For instance, by means of altering URL query parameters like this:
http://example.com/app/accountInfo?acct=notmyacct
Another potential risk is liberal CORS settings that allow API execution from different web domains.
Solutions
AWS API Gateway & OpenAPI
CORS
Managing the correct CORS settings for your API is the first step. For this we need to modify the options operation on each of the API paths that has CORS enabled:
options:
  responses:
    200:
      $ref: '#/components/responses/cors'
    400:
      $ref: '#/components/responses/cors'
    500:
      $ref: '#/components/responses/cors'
  x-amazon-apigateway-integration:
    responses:
      default:
        statusCode: '200'
        responseParameters:
          method.response.header.Access-Control-Max-Age: "'7200'"
          method.response.header.Access-Control-Allow-Methods: "'OPTIONS,HEAD,GET,POST,PUT,PATCH,DELETE'"
          method.response.header.Access-Control-Allow-Headers: "'Content-Type,X-Amz-Date,Authorization,X-Api-Key,X-Amz-Security-Token'"
          method.response.header.Access-Control-Allow-Origin: "'*'"
    passthroughBehavior: 'when_no_match'
    timeoutInMillis: 29000
    requestTemplates:
      application/json: '{ "statusCode": 200 }'
    type: 'mock'
Crucial is the method.response.header.Access-Control-Allow-Origin response parameter, which is currently set to '*' to accept requests from any origin. For development purposes this may be fine when you're dealing with various testing servers and local execution environments. For production, we need to lock this down to only the domain name executing this API.
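If your responses come from Lambda rather than from this mock integration, the same lock-down can be sketched in the handler itself; the allowed domain is a placeholder:

const ALLOWED_ORIGINS = ['https://app.example.com'] // placeholder production domain

exports.handler = async (event) => {
  const origin = event.headers && (event.headers.origin || event.headers.Origin)

  return {
    statusCode: 200,
    headers: {
      // Echo the origin only when it's whitelisted; never fall back to '*'.
      'Access-Control-Allow-Origin': ALLOWED_ORIGINS.includes(origin) ? origin : ALLOWED_ORIGINS[0],
      Vary: 'Origin',
    },
    body: JSON.stringify({ message: 'OK' }),
  }
}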
API Limits
Next, we want to make sure we limit the rate at which the API can be called.
resource "aws_api_gateway_method_settings" "_" {
  rest_api_id = aws_api_gateway_rest_api._.id
  stage_name  = aws_api_gateway_stage._.stage_name
  method_path = "*/*"

  settings {
    throttling_burst_limit = var.api_throttling_burst_limit
    throttling_rate_limit  = var.api_throttling_rate_limit
    metrics_enabled        = var.api_metrics_enabled
    logging_level          = var.api_logging_level
    data_trace_enabled     = var.api_data_trace_enabled
  }
}
To control limits on your API stage, set the throttling_burst_limit and throttling_rate_limit parameters. They control the following:
- throttling_rate_limit: the steady-state number of requests per second allowed on the API stage.
- throttling_burst_limit: the number of requests the stage can absorb in a short burst above the steady-state rate; beyond this, requests fail with an HTTP 429 response.
To guard against abuse, we need to ensure these have sensible values so that DoS attacks are contained as much as possible.
AWS DynamoDB, Cognito & Lambda
Fine-grained access control
We can implement fine-grained access control mechanisms on DynamoDB that allow the authenticated user access to only their own records.
This article goes into detail on how to do this; the following example is given:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:BatchGetItem", "dynamodb:Query"],
      "Resource": ["arn:aws:dynamodb:us-west-2:123456789012:table/GameScores"],
      "Condition": {
        "ForAllValues:StringEquals": {
          "dynamodb:LeadingKeys": ["${www.amazon.com:user_id}"],
          "dynamodb:Attributes": ["UserId", "GameTitle", "Wins"]
        },
        "StringEqualsIfExists": {
          "dynamodb:Select": "SPECIFIC_ATTRIBUTES"
        }
      }
    }
  ]
}
The above policy grants read-only access to items in the GameScores table that belong to www.amazon.com:user_id, and for each row only the columns UserId, GameTitle, and Wins can be retrieved.
Note that www.amazon.com:user_id refers to identities from an Amazon federated login, not to the regular Cognito User Pool. To use the User Pool registered identities, use cognito-identity.amazonaws.com:sub instead.
To be able to use this policy in your AWS Lambda code, you have to retrieve temporary credentials with the following function; see the official documentation for more details:
var AWS = require('aws-sdk')
var sts = new AWS.STS()

var params = {
  RoleArn: 'STRING_VALUE' /* required */,
  RoleSessionName: 'STRING_VALUE' /* required */,
  WebIdentityToken: 'STRING_VALUE' /* required */,
  DurationSeconds: 'NUMBER_VALUE',
  Policy: 'STRING_VALUE',
  PolicyArns: [
    {
      arn: 'STRING_VALUE',
    },
    /* more items */
  ],
  ProviderId: 'STRING_VALUE',
}

sts.assumeRoleWithWebIdentity(params, function (err, data) {
  if (err) console.log(err, err.stack) // an error occurred
  else console.log(data) // successful response
})
When this executes successfully you have temporary credentials that allow you to read from DynamoDB as indicated by the policy earlier.
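To make that concrete, below is a minimal sketch of wiring those temporary credentials into a DynamoDB DocumentClient; the table and key names follow the example policy above:

const AWS = require('aws-sdk')

// `data` is the successful response from sts.assumeRoleWithWebIdentity above.
async function getMyScores(data, userId) {
  const docClient = new AWS.DynamoDB.DocumentClient({
    accessKeyId: data.Credentials.AccessKeyId,
    secretAccessKey: data.Credentials.SecretAccessKey,
    sessionToken: data.Credentials.SessionToken,
  })

  // Any query outside the caller's own leading key is denied by the policy.
  const result = await docClient
    .query({
      TableName: 'GameScores',
      KeyConditionExpression: 'UserId = :u',
      ExpressionAttributeValues: { ':u': userId },
    })
    .promise()

  return result.Items
}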
Roles & Role access
To solve the roles and role access problems for your API, you can use the following setup in Cognito:
module "cognito" {
  source = "../modules/cognito"

  namespace         = var.namespace
  resource_tag_name = var.resource_tag_name
  region            = var.region

  cognito_identity_pool_name     = var.cognito_identity_pool_name
  cognito_identity_pool_provider = var.cognito_identity_pool_provider

  schema_map = [
    {
      name                = "email"
      attribute_data_type = "String"
      mutable             = false
      required            = true
    },
    {
      name                = "phone_number"
      attribute_data_type = "String"
      mutable             = false
      required            = true
    },
    {
      name                = "businessID"
      attribute_data_type = "String"
      mutable             = true
      required            = false
    },
    {
      name                = "role"
      attribute_data_type = "String"
      mutable             = true
      required            = false
    },
    {
      name                = "roleAccess"
      attribute_data_type = "String"
      mutable             = true
      required            = false
    }
  ]
}
The schema_map contains all the additional attributes that need to be recorded for a registration (either manually or automated after registration), and they're retrievable during an authenticated session in AWS Lambda.
The event object in an AWS Lambda invocation contains all the information about the currently authenticated user. To retrieve it, access the claims object at event.requestContext.authorizer.claims, which contains the custom attributes set in AWS Cognito, like so:
{
  'custom:role': 'USER',
  'custom:roleAccess': '{
    "business": "rwu",
    "role": "r",
    "user": "rwud"
  }',
  'custom:businessID': '1'
}
When a user authenticates against Cognito, AWS Lambda now knows which role the user has, and what kind of access that role entails. Each of those properties, e.g. business, enables you to allow access to GET (r), POST (w), PUT (u), DELETE (d) operations on that specific business API. You can make this as fine-grained as you need it to be.
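A minimal sketch of such a check in a Lambda handler, assuming the claim layout shown above; the mapping from HTTP methods to r/w/u/d flags follows the convention just described:

const METHOD_FLAGS = { GET: 'r', POST: 'w', PUT: 'u', DELETE: 'd' }

function isAllowed(event, resource) {
  const claims = event.requestContext.authorizer.claims
  const roleAccess = JSON.parse(claims['custom:roleAccess'])
  const flags = roleAccess[resource] || ''
  return flags.includes(METHOD_FLAGS[event.httpMethod])
}

exports.handler = async (event) => {
  if (!isAllowed(event, 'business')) {
    return { statusCode: 403, body: JSON.stringify({ message: 'Forbidden' }) }
  }
  // ... proceed with the business operation ...
  return { statusCode: 200, body: JSON.stringify({ message: 'OK' }) }
}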
For key identifiers such as a businessID, it's recommended to make them part of the DynamoDB partition key so that they're mandatory when querying for data, and never supplied via API query parameters or body JSON objects.
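Sketched out, that means the partition key value always comes from the verified claims, never from user input; the table and attribute names here are illustrative:

const AWS = require('aws-sdk')
const docClient = new AWS.DynamoDB.DocumentClient()

async function listBusinessRecords(event) {
  // Taken from the verified JWT claims, not from query parameters or the body.
  const businessID = event.requestContext.authorizer.claims['custom:businessID']

  const result = await docClient
    .query({
      TableName: 'BusinessData', // illustrative table name
      KeyConditionExpression: 'businessID = :b',
      ExpressionAttributeValues: { ':b': businessID },
    })
    .promise()

  return result.Items
}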
Conclusion
Broken access controls can create a lot of problems, both for data security and as (D)DoS attack opportunities via loose CORS and rate limit settings on your API.
(6) Security Misconfiguration
Risks
This is a very broad set of threats that cuts across the application stack, from unpatched software to default accounts. A typical example here is an S3 bucket with public permissions; this used to be prevalent, as this article shows.
Solutions
AWS CodeBuild
Since this stack runs fully serverless, on services managed by Amazon, the areas most exposed to potential security leaks are the code we use on AWS Lambda and in Lambda Layers.
In the case of Node, we need to ensure we have upgraded to the latest version supported on AWS, which is version 12. Within our build system we have many opportunities to test for vulnerabilities in dependencies and code, such as:
- NPM Audit: to audit your dependencies, see this article for details. We can implement the npm audit step in CodeBuild and halt the pipeline once vulnerabilities have been found.
- OWASP Dependency checker: OWASP has a dependency checker that can be integrated on the command line as well, see here. This particular analyzer has support for several languages, including NodeJS and the NPM package manager; see here for the details.
- NodeJsScan: a Docker-ready, general-purpose security scanner specifically for NodeJS; here's the Github for more details. It scans your entire codebase for vulnerabilities such as XSS, remote code injection, and SQL injection, among others.
- RetireJs: a well-known scanner for node dependencies and JavaScript code vulnerabilities. See their Github page for details.
AWS Lambda
For AWS Lambda we have a great middleware engine to help cover correct security configurations for HTTP REST APIs, called Middy. These are some of the recommended plugins (a wiring example follows below):
- http-cors helps to set the correct HTTP CORS headers.
- http-error-handler returns correct HTTP responses; this works in conjunction with http-errors.
- http-security-headers applies best-practice security headers to HTTP responses.
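A minimal sketch of wiring these plugins together, assuming the scoped @middy/* packages of Middy v1+; the origin is a placeholder:

const middy = require('@middy/core')
const cors = require('@middy/http-cors')
const httpErrorHandler = require('@middy/http-error-handler')
const httpSecurityHeaders = require('@middy/http-security-headers')

const baseHandler = async (event) => {
  return { statusCode: 200, body: JSON.stringify({ message: 'OK' }) }
}

// Middlewares wrap the handler: CORS and security headers are added to the
// response, and errors thrown with http-errors become proper HTTP responses.
exports.handler = middy(baseHandler)
  .use(cors({ origin: 'https://app.example.com' })) // placeholder domain
  .use(httpErrorHandler())
  .use(httpSecurityHeaders())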
Conclusion
Although AWS takes care of the details of every service we use, we still have a responsibility in the areas where we ourselves can introduce security leaks. Of course we can misconfigure an S3 bucket for public access, and there are many other examples I could cover; however, I focused on the key area of source code instead.
If you're interested in a much broader and more in-depth look, please review the guidelines on CIS for additional solutions here.
(7) Cross-Site Scripting (XSS)
Risks
XSS flaws occur when web applications include user-provided data in webpages sent to the browser without proper sanitization. If the data isn't properly validated or escaped, an attacker can embed scripts, inline frames, or other objects into the rendered page.
XSS is a very well-known security risk; OWASP recognizes several variants:
- Reflected XSS: typically URL interaction/scripts with malicious intent, built from user data that is not sanitized.
- Stored XSS: the storing of unsanitized user input that is later viewed by someone else, in the worst case by someone with admin privileges.
- DOM XSS: most common with JavaScript frameworks that manipulate the DOM, such as ReactJS and VueJS. An example here is a DOM node being replaced with a malicious login screen.
Solutions
AWS API Gateway & Lambda
This risk is about input sanitization, which we discussed in previous chapters; please see the chapter about Injection for my recommendations on how to deal with it.
AWS WAF
Additionally, if you'd like specific Web Application Firewall rules to deal with XSS risks, the following Terraform code can be used (taken from this Github repo):
resource "aws_wafregional_xss_match_set" "owasp_03_xss_set" {
  count = "${lower(var.target_scope) == "regional" ? "1" : "0"}"
  name  = "${lower(var.service_name)}-owasp-03-detect-xss-${random_id.this.0.hex}"

  xss_match_tuple {
    text_transformation = "URL_DECODE"
    field_to_match {
      type = "URI"
    }
  }

  xss_match_tuple {
    text_transformation = "HTML_ENTITY_DECODE"
    field_to_match {
      type = "URI"
    }
  }

  xss_match_tuple {
    text_transformation = "URL_DECODE"
    field_to_match {
      type = "QUERY_STRING"
    }
  }

  xss_match_tuple {
    text_transformation = "HTML_ENTITY_DECODE"
    field_to_match {
      type = "QUERY_STRING"
    }
  }

  xss_match_tuple {
    text_transformation = "URL_DECODE"
    field_to_match {
      type = "BODY"
    }
  }

  xss_match_tuple {
    text_transformation = "HTML_ENTITY_DECODE"
    field_to_match {
      type = "BODY"
    }
  }

  xss_match_tuple {
    text_transformation = "URL_DECODE"
    field_to_match {
      type = "HEADER"
      data = "cookie"
    }
  }

  xss_match_tuple {
    text_transformation = "HTML_ENTITY_DECODE"
    field_to_match {
      type = "HEADER"
      data = "cookie"
    }
  }
}
resource "aws_wafregional_rule" "owasp_03_xss_rule" {
  depends_on  = ["aws_wafregional_xss_match_set.owasp_03_xss_set"]
  count       = "${lower(var.target_scope) == "regional" ? "1" : "0"}"
  name        = "${lower(var.service_name)}-owasp-03-mitigate-xss-${random_id.this.0.hex}"
  metric_name = "${lower(var.service_name)}OWASP03MitigateXSS${random_id.this.0.hex}"

  predicate {
    data_id = "${aws_wafregional_xss_match_set.owasp_03_xss_set.0.id}"
    negated = "false"
    type    = "XssMatch"
  }
}
Again, there is a cost to having AWS take care of this for you. This XSS protection consists of 8 rules within one Web ACL, and the total cost of activating them is:
| Resource type | Number | Cost |
| --- | --- | --- |
| Web ACL | 1 | $5.00 (pro-rated hourly) |
| Rule | 8 | $8.00 (pro-rated hourly) |
| Request | per 1 million | $0.60 |
| Total | | $13.60 / month |
Based on 1 million requests to your API, you'll pay $13.60 for protection against all types of XSS attacks, on top of the regular API Gateway request costs. You can of course activate this together with the SQL injection rules within the same Web ACL, saving $5.00 per month, for a total cost of $22.20 per month.
Conclusion
Applying good input sanitization should sufficiently protect you from most XSS risks. Please check your current implementation against this cheat sheet from OWASP to ensure you're fully protected.
(8) Insecure Deserialization
Risks
Flaws in how deserialization is done can result in remote code execution. For example, a JSON object sent by a web application via AWS API Gateway is deserialized (JSON to JavaScript object conversion) and actions are performed on this data.
Examples of potential risks are session cookie tampering and session state manipulation.
Solutions
AWS Lambda
This requires manual code review and the use of code vulnerability scanners, as introduced in the Security Misconfiguration chapter. The solution recommended by OWASP is to validate state and cookies using integrity checks, with digital signatures such as a hash or an authentication signature; if the data does not carry a valid signature, it is ignored. Furthermore, logging any deserialization errors is very useful for detecting possible tampering.
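A minimal sketch of such an integrity check using Node's built-in crypto module; in practice the signing secret would come from KMS or an encrypted environment variable, as discussed earlier:

const crypto = require('crypto')

const SECRET = process.env.STATE_SIGNING_SECRET // assumption: provisioned securely

function sign(serializedState) {
  return crypto.createHmac('sha256', SECRET).update(serializedState).digest('hex')
}

// Verify before deserializing; reject (and log) anything with a bad signature.
function deserialize(serializedState, signature) {
  const expected = Buffer.from(sign(serializedState), 'hex')
  const provided = Buffer.from(signature, 'hex')
  if (expected.length !== provided.length || !crypto.timingSafeEqual(expected, provided)) {
    console.error('Deserialization rejected: invalid signature')
    return null
  }
  return JSON.parse(serializedState)
}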
Conclusion
Please view this OWASP cheat sheet for per-language details on what to look out for.
(9) Using Components with Known Vulnerabilities
Risks
Any code or component used as a dependency can have known vulnerabilities. Because most projects rely so heavily on open source dependencies, it's very hard to be aware of every security risk.
Solutions
Here we need to rely on build-phase dependency scanning and manual review to avoid security risks as much as possible. Please review the Solutions in the Security Misconfiguration chapter to mitigate this security risk.
Conclusion
That several OWASP Top 10 risks overlap on this theme shows how heavily software services and products rely on open source software. With that reliance come security vulnerabilities that are hard to detect without some form of automated detection. Developers and DevOps need to put much more attention on threat detection in the build phase, before code is put into production.
(10) Insufficient Logging & Monitoring
Risks
This risk is about a lack of contextual logging (where, what, and how), which decreases awareness of threats and, over time, the ability to monitor them. The latter determines how quickly an IT team can respond to security incidents.
Solutions
AWS CloudWatch
CloudWatch can provide both logging and monitoring with alerts, as I've demonstrated in an earlier article in this OpenAPI series. Review that article for logging and monitoring suggestions that will increase visibility and make debugging and error (threat) detection easier:
How to do Logging on AWS Serverless
AWS X-Ray
X-Ray is, as the name suggests, a service that provides much more transparency into serverless execution, both by visualizing the execution path and by making logs easily accessible. I've covered its use and implementation in an earlier article; please review it below.
Serverless Tracing with AWS X-Ray
Conclusion
With these two tools in hand, most anomalies can be detected relatively quickly, either through monitoring alerts from CloudWatch via SNS, or through bug/error analysis with AWS X-Ray.
In closing
There was a lot to cover here, and I'm sure I missed just as much.
If I've made any mistakes, or you have suggestions or additions, please do let me know.
I'm very much interested in reading your thoughts and expertise on this difficult subject.
Thanks for reading!