Today, a new article, this time about Lambda with Python.
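To set the scene, here is a minimal sketch of a Python AWS Lambda handler; the function name, and the shape of the event, are illustrative assumptions, not a specific service design:

```python
import json

def handler(event, context):
    # Minimal AWS Lambda handler: the Lambda runtime supplies event and
    # context; here we just echo a field from the event back to the caller.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }

# Locally you can invoke it directly; in AWS the runtime calls it for you.
result = handler({"name": "AWS"}, None)
```

The appeal is that you write only the handler; provisioning, scaling and patching of the servers underneath are AWS's problem.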
Customer-centric mission. Control in the hands of the customer. Deliver the best service to the customers. To You.
0. Protect customers at all times (Security)
1. Listen closely to customers and act
2. Give customers choice
3. Work backwards from the customer
4. Help customers transform
Amazon Web Services embodies this. That is why it offers well over 1000 services and features, removing pain points and giving you the choice of how to build the products and services you want to build: from primitives such as compute, memory and disk, to higher-level services built from those primitives, such as RDS, ECS and Lambda. This gives you the choice, so you don't have to worry about managing the detail.
Software changes rapidly; agility gives you the systems to deploy code rapidly and reliably. Scaling and deploying with 99.999% uptime is a reality. Using these components, removing single points of failure and planning rapid deployments every day will allow your business and software platform to grow daily, even hourly.
Reduce risk; smaller, targeted applications; faster delivery; reactive to customer needs; more room to experiment.
Development and testing is where the agility shows: unconstrained access to resources; higher-fidelity testing; faster time to market; major productivity improvements; significant cost savings.
Security; reliability; performance efficiency; cost optimisation; operational excellence (Prepare -> Operate -> Respond).
Security and reliability in particular are improved by automation.
CloudFormation, or Infrastructure as Code: defining your infrastructure environments in code provides consistent, secure and reliable systems on which to develop and run production workloads.
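As a flavour of Infrastructure as Code, here is a minimal CloudFormation template built as a Python dict and serialised to JSON; the logical ID and bucket name are illustrative:

```python
import json

# A minimal CloudFormation template: one S3 bucket resource.
# "ArtifactBucket" and the bucket name are made-up examples.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Resources": {
        "ArtifactBucket": {
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "example-artifact-bucket"},
        }
    },
}

template_json = json.dumps(template, indent=2)
print(template_json)
```

The resulting JSON can be deployed with the CloudFormation console or CLI, giving you a repeatable, reviewable definition of the environment instead of hand-built servers.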
OpsWorks is a managed Chef service.
EC2 Systems Manager – a collection of tools to install and patch packages.
AWS CodeBuild – per-minute billing for CI/CD build environments. You can provide Docker images as the basis for your build machine.
Personal Health Dashboard – A personalised view of performance and availability of the AWS services you are using.
Even more services added to Amazon Web Services.
Stay tuned for more
Continuous integration has been used for years amongst software developers. With the rise of Infrastructure as Code, we DevOps infrastructure types can also use the design, code, test, deploy pattern.
After becoming very inspired following Martin Alfke’s “Moving From Exec to Types and Providers” talk at Puppet camp London, I feel the need to share the love that is Types and Providers.
If you don’t mind ‘fun’ language, then Gary’s post Fun with Providers will explain things better than I can here.
I’m currently working on a scalable and resilient MongoDB replication cluster and have a long bash script triggered by an Exec. This is bad, not only because exec statements are a sign that technical debt is building, but also because the audit entries from Puppet are sparse.
My team will be working on getting up to speed with types and providers to create a much better MongoDB Puppet module experience.
All going well, this code may find its way into the puppetlabs mongodb module.
Writing reusable code is at the heart of the DRY principle. However, it is important to reuse the right code.
A picture, as they say, is worth a thousand words.
Technical debt and dependency hell can quickly create big issues and slow a project to a crawl. This can be avoided if good architecture and design steps are taken before the project starts. Often, refactoring will be required to make modules or functions loosely coupled and less dependent on each other.
Even within a Scrum and Agile framework, it is better to have a clear view of the end result and understand how the new ‘thing’ will interface with the existing ‘stuff’.
A great day in London yesterday at Puppet Camp London 2016 held at the very beautiful Kings Place. The venue was a wonderful setting with its open gallery floors and ‘scary’ high escalator that spanned 2 floors.
Around 150 expectant techies attended, about half of them Puppet beginners – my biggest speaking event so far.
My talk, Can Puppet help you run Docker on a t2.micro?, appeared to be well received by those I had the good fortune to talk to after the event. We also discussed Docker persistent storage, for which there is no clear solution yet. Persistent storage is useful for data that changes during the container's lifetime but that you would not want to ‘bake’ into an image or layer.
I’ve been playing with hosting a WordPress site on Docker, and one solution I’ve arrived at is using a sync product like BTsync to form a mini network that synchronises data across a number of Docker hosts. Another route being investigated for one of our customers is a GlusterFS cluster with NFS clients, storing the data locally on each Docker host.
I hope to be invited to speak at another Puppet event soon.
We hold daily stand-up meetings where each member of the team, operations or project, shares updates. This gives everyone visibility of what is going on, and a chance to report any obstacles impeding progress. Anyone is able to attend these meetings, for total transparency.
In addition to project work, housekeeping and operational support run alongside to keep the systems working as they should. Let me explain what we mean by this.
During a project, code is being updated and deployed all the time. Most systems rely on other software to operate, and this too is being maintained and updated by external sources. To avoid dependency hell, our team evaluates, and updates where needed, the other software running on the server. This also includes removing redundant code and out-of-date logs. To keep things running as cleanly as possible, we also rebuild fresh servers as part of this step.
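As a sketch of the log housekeeping step, something along these lines removes stale logs; the function name and 30-day threshold are illustrative assumptions, not our actual tooling:

```python
import time
from pathlib import Path

def remove_old_logs(log_dir, max_age_days=30):
    """Delete *.log files older than max_age_days and return their names."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for path in Path(log_dir).glob("*.log"):
        # st_mtime is the last-modified time; older than the cutoff means stale.
        if path.stat().st_mtime < cutoff:
            path.unlink()
            removed.append(path.name)
    return sorted(removed)
```

In practice you would run this from cron or a configuration-management tool, and report the removed files so the housekeeping stays visible.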
This work is transparent too, so it is included in the weekly report.
Things change and sometimes an undesired outcome arises. With our monitoring systems in place, we proactively fix issues as we find them and report to you during and sometimes after the event. These events will of course be in the weekly report.
Once a week, a report is compiled for our customers based on the information from the stand-up meetings; if it is a project, a progress report is also included.
In my talk – Can Puppet help you run Docker on a t2.micro? – you will learn about the technologies we use to create environments from scratch every day.
It walks through a number of Puppet's key concepts – stages, roles and profiles, Hiera data and the Puppet Forge – as well as giving a brief introduction to Docker.
Using these, I explain a solution that runs a Puppet manifest to configure Amazon's smallest server (yes, I've run this on a t2.nano too) to run a Docker containerised web service.
You will learn why Puppet stages help in this solution, how roles and profiles are defined and used, and finally how the Puppet Forge and Hiera data are used to install and run Docker containers.
The talk contains links to code that can be used afterwards, and I'll touch on what Docker is and how to configure the Puppet module to run containers automatically.
Come and see my talk at Puppet Camp – 8th November, 11:15, in London.
After many years of working with developers, one of the most common problems faced by the ops and support team is code that doesn’t work on production systems.
This is often caused by subtle differences on the developer's machine: software versions of components, different install locations and different shared libraries. All of these can have an impact on code that works fine on one machine or server but won't install or run on another.
With our structure and automation, we can provide a machine that looks exactly like test and production. By giving developers access to the tools needed to build their own machine, via Jenkins for example, everyone is empowered to build and develop, knowing that the whole team is on the same page and has the same environment or environments to work with.
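A toy illustration of the kind of drift that causes "works on my machine" problems; the package names and versions here are made up:

```python
def version_drift(machine_a, machine_b):
    """Compare two package->version maps and return the packages that differ.

    The maps could come from `pip freeze`, `dpkg -l` or a similar listing.
    """
    drift = {}
    for pkg in set(machine_a) | set(machine_b):
        if machine_a.get(pkg) != machine_b.get(pkg):
            # (version on A, version on B); None means not installed there.
            drift[pkg] = (machine_a.get(pkg), machine_b.get(pkg))
    return drift

dev = {"openssl": "1.1.1", "nginx": "1.18.0", "libxml2": "2.9.10"}
prod = {"openssl": "1.0.2", "nginx": "1.18.0"}
drift = version_drift(dev, prod)
```

When the same automation builds every machine, a check like this should come back empty.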
Some customers have a number of environments to develop and test systems in. Often these are:
Development – often shortened to DEV. This can be virtual machines on the developer's own computer.
Testing – TEST, to pass the initial automated tests (yes, those can be automated too, allowing your testers more time to diagnose).
Integration – usually a bigger environment, with links to other systems in the business, to test integration between components and systems.
UAT – user acceptance testing, where user journeys are tested end to end.
Performance – load testing occurs at this level, ensuring that the servers can cope with high load, stress testing all the components, and exercising the scaling code. This helps defend against the Slashdot effect.
Preprod or staging – the final environment before production. Allows for dry runs of installations or roll-outs of new code.
Production – live. The final stage, and where your customers really see the efforts of the team.
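The load testing described for the performance environment boils down to firing many concurrent requests and checking status codes and wall time. This toy sketch shows the shape of it, with `fake_request` standing in for a real HTTP call (dedicated tools do this properly, with ramp-up, percentiles and reporting):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_request(i):
    # Stand-in for an HTTP request to the system under test.
    time.sleep(0.001)
    return 200

def run_load(total_requests=50, concurrency=10):
    """Fire total_requests calls across a thread pool; return statuses and wall time."""
    start = time.time()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(fake_request, range(total_requests)))
    return statuses, time.time() - start

statuses, elapsed = run_load()
```

Swap `fake_request` for a call to your service and raise the concurrency until something breaks – that breaking point is the bottleneck to work on.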
With so many environments, it is important to have version control every step of the way, and a release process that enables everyone involved to understand where they stand and how their efforts affect the overall position of the company.
Please contact us about this and any projects you are thinking about.
Ahead of Tuesday’s £150M+ Lottery draw, an email was sent to thousands of users of the UK National Lottery website.
“This is a service message to let you know that we expect to see high volumes of visitors to The National Lottery website and app in the hours leading up to the close of ticket sales for the EuroMillions draw, at 7.30pm, on Tuesday 11th October.
If you plan to buy a ticket for any of our games this week and you need to add funds to your account to play, we would recommend that you do this as early as possible in order to avoid disappointment. ”
We suspect that something in the infrastructure doesn't scale automatically. No doubt the web or front-end servers scale; however, they are expecting a performance bottleneck somewhere. This is where performance testing of the whole infrastructure is important: to identify and work on those bottlenecks. Once identified, automation of the service can be planned and provided. Often it is the database layer, but this too can be scaled with read replicas, or sharded, allowing for a greater number of active connections.
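Sharding, in its simplest form, routes each key to one of N databases by hashing. This toy sketch shows the idea; real sharding schemes must also handle resharding and hot keys:

```python
import hashlib

def shard_for(key, num_shards):
    """Map a record key to a shard index deterministically via an MD5 hash."""
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_shards

# Every lookup for the same key lands on the same shard, so the
# connection load spreads evenly across num_shards databases.
shard = shard_for("user:1001", 4)
```

Because the mapping is deterministic, any application server can compute which of the four databases holds a given user without a central lookup.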
For more information or a quick chat, contact us.