Blue/Green Deploys with Kubernetes and Amazon ELB

Today's post was originally published on Jade Meskill's personal blog and is shared here.

At Octoblu we deploy very often, and we got tired of our users seeing the occasional glitch when a new version was put into operation.

Although we use Amazon OpsWorks to make managing our infrastructure easier, our deploys could take a while: dependencies had to be installed before the service could be restarted. Not a great experience.

Enter Kubernetes

We knew that moving to an immutable infrastructure approach would help us deploy our applications, ranging from extremely simple web services to complex near-real-time messaging systems, faster and more easily.

Containerization is the future of app deployment, but managing and scaling a number of Docker instances, and keeping track of all the port assignments, is not a simple matter.

Kubernetes simplifies that part of our deployment strategy. We still had a problem: while Kubernetes was spinning up new versions of our Docker instances, we could get into a state where old and new versions were serving traffic at the same time. If we shut down the old version before bringing up the new one, we would have a short (sometimes not so short) period of downtime.

Blue/Green Deploys

I first read about blue/green deploys in Martin Fowler's excellent article BlueGreenDeployment; it is a simple but powerful concept. We started looking for a way to do this in Kubernetes. After some complicated experiments we came up with a simple idea: use an Amazon ELB as the router. Kubernetes handles the complexity of routing your request to the appropriate pod by listening on a particular port on all minions, which makes ELB load balancing a piece of cake. Have the ELB listen on ports 80 and 443, then route requests to the Kubernetes port on all minions.
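
If you are setting this up from scratch, the one-time ELB setup looks roughly like this (the instance port, subnet, and instance IDs below are placeholders, not values from our setup):

# Sketch: create a classic ELB that forwards port 80 to a single instance port
# that the Kubernetes service exposes on every minion (30080 is a placeholder).
aws elb create-load-balancer \
  --load-balancer-name triggers-octoblu-com \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=30080" \
  --subnets subnet-00000000

# Register every Kubernetes minion so the ELB can route requests to any of them.
# A 443 listener is added the same way, with an SSLCertificateId (see the deploy
# script below).
aws elb register-instances-with-load-balancer \
  --load-balancer-name triggers-octoblu-com \
  --instances i-00000001 i-00000002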

Blue or Green?

The next problem was figuring out whether blue or green was currently active. Another simple idea: store a blue port and a green port as tags on the ELB, and look at the current configuration of the ELB to see which one is currently live. No need to store that value somewhere it could become incorrect.
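
For example, the tags can be created once with something like this (the port values here are just placeholders):

# Sketch: record the two candidate instance ports as tags on the ELB so the
# deploy script can read them back later with describe-tags.
aws elb add-tags \
  --load-balancer-names triggers-octoblu-com \
  --tags "Key=blue,Value=30080" "Key=green,Value=30081"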

Putting It All Together

We currently use a combination of Travis CI and Amazon CodeDeploy to kick off the blue/green deploy process.

The following is part of a script that runs when deploying our Triggers service. You can check out the code on GitHub if you want to see how it all works together.

I have added some notes to explain what is happening.

#!/bin/bash
SCRIPT_DIR=`dirname $0`
DISTRIBUTION_DIR=`dirname $SCRIPT_DIR`
export PATH=/usr/local/bin:$PATH
export AWS_DEFAULT_REGION=us-west-2

# query the ELB for the blue port label
BLUE_PORT=`aws elb describe-tags --load-balancer-names triggers-octoblu-com | jq '.TagDescriptions[0].Tags[] | select(.Key == "blue") | .Value | tonumber'`

# query the ELB for the green port label
GREEN_PORT=`aws elb describe-tags --load-balancer-names triggers-octoblu-com | jq '.TagDescriptions[0].Tags[] | select(.Key == "green") | .Value | tonumber'`

# query the ELB to find out the current port
OLD_PORT=`aws elb describe-load-balancers --load-balancer-names triggers-octoblu-com | jq '.LoadBalancerDescriptions[0].ListenerDescriptions[0].Listener.InstancePort'`

# figure out whether the new color is blue or green
NEW_COLOR=blue
NEW_PORT=${BLUE_PORT}
if [ "${OLD_PORT}" == "${BLUE_PORT}" ]; then
  NEW_COLOR=green
  NEW_PORT=${GREEN_PORT}
fi

export BLUE_PORT GREEN_PORT OLD_PORT NEW_COLOR NEW_PORT

# crazy template magic, don't ask.
#
# Some people, when confronted with a problem,
# think "I know, I'll use regular expressions."
# Now they have two problems.
#   -- jwz
REPLACE_REGEX='s;(\\*)(\$([a-zA-Z_][a-zA-Z_0-9]*)|\$\{([a-zA-Z_][a-zA-Z_0-9]*)\})?;substr($1,0,int(length($1)/2)).($2&&length($1)%2?$2:$ENV{$3||$4});eg'
perl -pe "$REPLACE_REGEX" $SCRIPT_DIR/triggers-service-blue-service.yaml.tmpl > $SCRIPT_DIR/triggers-service-blue-service.yaml
perl -pe "$REPLACE_REGEX" $SCRIPT_DIR/triggers-service-green-service.yaml.tmpl > $SCRIPT_DIR/triggers-service-green-service.yaml

# always create both services
kubectl delete -f $SCRIPT_DIR/triggers-service-${NEW_COLOR}-service.yaml
kubectl create -f $SCRIPT_DIR/triggers-service-${NEW_COLOR}-service.yaml

# destroy the old version of the new color
kubectl stop rc -lname=triggers-service-${NEW_COLOR}
kubectl delete pods -lname=triggers-service-${NEW_COLOR}
kubectl delete rc -lname=triggers-service-${NEW_COLOR}
kubectl create -f $SCRIPT_DIR/triggers-service-${NEW_COLOR}-controller.yaml

# wait for Kubernetes to report the new instances ready
x=0
while [ "$x" -lt 20 -a -z "$KUBE_STATUS" ]; do
  x=$((x+1))
  sleep 10
  echo "Checking kubectl status, try ${x}..."
  KUBE_STATUS=`kubectl get pod -o json -lname=triggers-service-${NEW_COLOR} | jq ".items[].currentState.info[\"triggers-service-${NEW_COLOR}\"].ready" | uniq | grep true`
done

if [ -z "$KUBE_STATUS" ]; then
  echo "triggers-service-${NEW_COLOR} is not ready, giving up."
  exit 1
fi

# remove the old port mappings from the ELB
aws elb delete-load-balancer-listeners --load-balancer-name triggers-octoblu-com --load-balancer-ports 80
aws elb delete-load-balancer-listeners --load-balancer-name triggers-octoblu-com --load-balancer-ports 443

# create the new port mappings
aws elb create-load-balancer-listeners --load-balancer-name triggers-octoblu-com --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=${NEW_PORT}
aws elb create-load-balancer-listeners --load-balancer-name triggers-octoblu-com --listeners Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=${NEW_PORT},SSLCertificateId=arn:aws:iam::82206980720:server-certificate/startinter.octoblu.com

# reconfigure the health check
aws elb configure-health-check --load-balancer-name triggers-octoblu-com --health-check Target=HTTP:${NEW_PORT}/healthcheck,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2
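
After the cutover, a quick way to sanity-check the switch is to ask the ELB which instance port its listeners now forward to, something like:

# Should print the new instance port once per listener (80 and 443).
aws elb describe-load-balancers --load-balancer-names triggers-octoblu-com \
  | jq '.LoadBalancerDescriptions[0].ListenerDescriptions[].Listener.InstancePort'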

Oops!

Sometimes Peter makes a mistake, and we need to quickly roll back to an earlier version. If the old version is still running on the cluster, a rollback is as easy as re-mapping the ELB to forward to the old ports. Sometimes Peter tries to fix his mistake with a new deploy, and now we have a real mess.
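
A rollback of that kind is essentially the tail end of the deploy script pointed back at the old port, roughly:

# Sketch: point the ELB listeners back at the previous instance port.
aws elb delete-load-balancer-listeners --load-balancer-name triggers-octoblu-com --load-balancer-ports 80 443
aws elb create-load-balancer-listeners --load-balancer-name triggers-octoblu-com \
  --listeners Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=${OLD_PORT}
# (recreate the 443 listener the same way, with the SSLCertificateId)
aws elb configure-health-check --load-balancer-name triggers-octoblu-com \
  --health-check Target=HTTP:${OLD_PORT}/healthcheck,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2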

Since this has happened more than once, we created oops. Oops allows us to immediately roll back to the old cluster by simply running oops rollback, or to quickly redeploy a previous version with oops deploy <git-commit>.

We add a .oopsrc to all of our applications, which looks something like this:

{
  "elb-name": "triggers-octoblu-com",
  "application-name": "triggers-service",
  "deployment-group": "master",
  "s3-bucket": "octoblu-deploy"
}
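
Under the hood, a deploy driven by those values boils down to a CodeDeploy call along these lines (a rough sketch rather than the exact implementation; the bundle key is a placeholder):

# Sketch: kick off a CodeDeploy deployment using the values from .oopsrc.
aws deploy create-deployment \
  --application-name triggers-service \
  --deployment-group-name master \
  --s3-location bucket=octoblu-deploy,key=triggers-service/<git-commit>.tar.gz,bundleType=tgz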

oops list shows us all available deployments.

We are always looking for ways to improve; if you have any suggestions, please let us know.
