Overview
Voluntarily has various service endpoints for managing access to open platforms, and this page gives a high-level overview of how these are set up.
Deployment Design
Our services are deployed on AWS, connected to an externally managed database (MongoDB).
...
ECR to store our images for each environment (alpha, beta, gamma, live).
ECS (Fargate) to deploy our container service (one per environment).
A network for our services attached to an internet gateway for external access, with security groups and a route table for managing traffic.
Load balancer to manage traffic to our service.
...
Route 53 holding A records for all our services.
...
Infrastructure
Currently we have an aws folder in the voluntarily repo containing CloudFormation scripts and a few other bash scripts that deploy each environment based on an environment-specific config file. This is a manual setup for each environment.
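As a rough sketch (not the actual scripts in the aws folder), a per-environment deploy boils down to something like the following; the stack name, template path and config file layout are assumptions for illustration:

```bash
#!/usr/bin/env bash
# Illustrative per-environment deploy; file names and stack names are assumptions.
set -euo pipefail

ENVIRONMENT="$1"                      # e.g. alpha, beta, gamma, live
CONFIG_FILE="config/${ENVIRONMENT}.env"

# Load environment-specific parameters (domain names, instance sizes, etc.)
source "${CONFIG_FILE}"

# Create or update the CloudFormation stack for this environment
aws cloudformation deploy \
  --stack-name "vly-${ENVIRONMENT}" \
  --template-file cloudformation/service.yaml \
  --parameter-overrides EnvironmentName="${ENVIRONMENT}" \
  --capabilities CAPABILITY_NAMED_IAM
```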
Voluntarily Service
Once the infrastructure is set up, it deploys the service based on task definitions for the ECS cluster. The CI pipeline then builds and pushes a new image to ECR and runs an update command on the ECS service to pull the latest image and roll out a new version of the service. CI currently deploys only alpha and beta; other environments are deployed manually.
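For illustration, the CI deploy step is roughly equivalent to the sequence below; the repository, cluster and service names are assumptions, not the actual pipeline configuration:

```bash
# Sketch of the CI build-and-deploy step (names are illustrative)
REGION=ap-southeast-2
ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
REPO="${ACCOUNT_ID}.dkr.ecr.${REGION}.amazonaws.com/vly-alpha"

# Build and push the new image to ECR
aws ecr get-login-password --region "${REGION}" \
  | docker login --username AWS --password-stdin "${REPO}"
docker build -t "${REPO}:latest" .
docker push "${REPO}:latest"

# Tell the ECS service to pull the latest image and roll out a new version
aws ecs update-service \
  --cluster vly-alpha-cluster \
  --service vly-alpha-service \
  --force-new-deployment
```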
Environment Setup
Currently, at Docker build time, we pass in a parameter which specifies what kind of image to build, i.e. for which environment. Based on that keyword (alpha, beta, gamma), the application decrypts the appropriate encrypted env file during the build and loads the environment variables. The image is then built and pushed to ECR.
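In other words, the target environment is fixed at build time. Conceptually the build looks something like this; the build-arg name and the encrypted env file layout are assumptions for the example:

```bash
# Sketch only: the environment is baked in at build time via a build argument,
# and the build is assumed to decrypt env/<environment>.env.enc into the image.
docker build --build-arg ENVIRONMENT=alpha -t vly:alpha .
docker build --build-arg ENVIRONMENT=beta  -t vly:beta .
```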
Proposals/Options to consider for the future
Docker image
Currently we use the build to determine the environment type and create an image specific to each environment. This bakes the variables into the image, meaning if any of them change, we have to rebuild the image.
The proposal is to isolate environment-specific variable setup from the application and push env vars into container scope, so if these change, all we need to do is update the environment and restart the container. This means we can have one image, with multiple environments using the same image but differing only in their environment variables, as sketched below.
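A minimal sketch of the proposed model, assuming a single vly:latest image and per-environment env files kept outside the image:

```bash
# One image, many environments: configuration is injected at run time
docker build -t vly:latest .

# Each environment differs only in the env file passed to the container
docker run --env-file env/alpha.env vly:latest
docker run --env-file env/beta.env  vly:latest

# On ECS the same idea applies: the task definition (or SSM/Secrets Manager)
# supplies the variables, so a config change only needs a service restart
# rather than an image rebuild.
```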
Build
Currently, we have scripts to update deployments, and the CI build only updates alpha/beta depending on the environment variable in the build file.
The proposal is to have the build trigger deployments to any environment automatically. We can leverage git tagging and CI pipeline conditionals to deploy to the various environments based on the tag, as sketched below.
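As a hedged sketch of what such a conditional could look like in a CI shell step (the tag naming convention and deploy script are assumptions):

```bash
# Tag-driven deploys in CI (illustrative conventions only)
TAG=$(git describe --tags --exact-match 2>/dev/null || true)

case "${TAG}" in
  alpha-*) ./deploy.sh alpha ;;
  beta-*)  ./deploy.sh beta ;;
  gamma-*) ./deploy.sh gamma ;;
  live-*)  ./deploy.sh live ;;
  *)       echo "No deploy tag on this commit, skipping deployment" ;;
esac
```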
Things missing here:
Cloudflare setup
Monitoring and Metrics
Have a dashboard setup to view application and infrastructure usage data, telemetry.
High level Design
(in progress)..
Infrastructure
Load Balancers
Fargate Service
CPU/Memory Usage
Current Setup:
CloudWatch dashboard set up for the ELB and ECS services used by vly. Enabled ‘Container Insights’ for the ECS services. View sample dashboard: https://ap-southeast-2.console.aws.amazon.com/cloudwatch/home?region=ap-southeast-2#dashboards:name=vly-services;accountId=585172581592
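For reference, Container Insights can be enabled on a cluster with the AWS CLI; the cluster name below is an assumption:

```bash
# Enable CloudWatch Container Insights for an ECS cluster (name is illustrative)
aws ecs update-cluster-settings \
  --cluster vly-alpha-cluster \
  --settings name=containerInsights,value=enabled
```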
ToDo: Team consensus and understanding on setup
...
Are we using the Route 53 records? I was unable to find/understand how they link up to the load balancer.
...
Naming: add further missing information, check the existing setup, compare differences and fix what is needed.
Application
Application metrics
tracing?
Health endpoints
Separate test suite/service deployed for regular (5 min interval?) app endpoint checks?
Current Setup:
Notifications
possibly start with Slack channel notifications
AWS infrastructure can utilise CloudWatch and SNS
Current Setup:
CloudWatch Alert → SNS Topic → Lambda function to push the alert to Slack channel ‘#notifications’
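The wiring between those pieces looks roughly like the following with the AWS CLI; the alarm, metric, topic and function names (and the ARNs) are placeholders, and the Lambda itself simply forwards the SNS message to the Slack webhook:

```bash
# Sketch of the alert wiring (all names and ARNs are placeholders)

# 1. A CloudWatch alarm publishes to an SNS topic
aws cloudwatch put-metric-alarm \
  --alarm-name vly-service-high-cpu \
  --namespace AWS/ECS \
  --metric-name CPUUtilization \
  --dimensions Name=ClusterName,Value=vly-alpha-cluster \
  --statistic Average --period 300 --evaluation-periods 1 \
  --threshold 80 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:ap-southeast-2:123456789012:vly-alerts

# 2. The Slack-notifier Lambda subscribes to the topic
aws sns subscribe \
  --topic-arn arn:aws:sns:ap-southeast-2:123456789012:vly-alerts \
  --protocol lambda \
  --notification-endpoint arn:aws:lambda:ap-southeast-2:123456789012:function:slack-notifier

# 3. Allow SNS to invoke the Lambda
aws lambda add-permission \
  --function-name slack-notifier \
  --statement-id sns-invoke \
  --action lambda:InvokeFunction \
  --principal sns.amazonaws.com \
  --source-arn arn:aws:sns:ap-southeast-2:123456789012:vly-alerts
```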
ToDo:
Refire alert if it is not addressed
Serverless setup for the Lambda function code; currently it's just a quick demo
Logging
thoughts:
configure log levels based on an env var, so we can just update the env var to DEBUG and get all logs
log rotation?
standardised log solution
Tracing
thoughts: (do we want this?)
standardised distributed trace solution?
I have previously worked with Jaeger, which follows the OpenTracing standard