8 Best Practices to keep the Release Train moving


We have moved far from the traditional way of developing and releasing software. The old rhythm of weeks to months of development, followed by a few weeks of testing and then packaging for production, no longer excites engineering and release teams. A highly iterative, agile development process has become the norm for companies building products of any size and complexity.

Today, we see Facebook built and deployed in less than half an hour, Flickr pushing 10+ deploys a day, and many companies striving for a daily push. All these web companies share some common continuous delivery principles, which drive their passion for short, frequent releases.

Here are 8 best practices adopted by some of the world’s best release engineering teams. While each of these practices could fill a book on its own, I have tried to capture some “valuable clips”. Jez Humble and David Farley’s book on Continuous Delivery explores these concepts in great detail.

1. Automate Everything – Build, Test and Deploy: You should be able to release your product with the mere push of a button. Automation at every stage of building, testing and deploying allows for better control and feeds faster feedback loops. CI tools like Hudson, CruiseControl and Jenkins automatically build from your SCM on every check-in and also run automated tests on a grid.
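
As a toy illustration, here is what a “push of a button” release might look like; this is a minimal sketch, and the three make targets are placeholders for whatever build, test and deploy commands your project actually uses.

```python
#!/usr/bin/env python3
"""A minimal push-button release sketch: build, test and deploy in one command.
The three shell commands are placeholders; substitute your own build tool,
test runner and deploy mechanism."""
import subprocess
import sys

PIPELINE = [
    ("build",  ["make", "build"]),   # compile/package the product
    ("test",   ["make", "test"]),    # run the automated test suite
    ("deploy", ["make", "deploy"]),  # push the artifact to the target
]

def release() -> None:
    for stage, command in PIPELINE:
        print(f"==> {stage}: {' '.join(command)}")
        result = subprocess.run(command)
        if result.returncode != 0:
            # Stop at the first failure so a bad build never reaches production.
            sys.exit(f"Stage '{stage}' failed; release aborted.")
    print("Release completed.")

if __name__ == "__main__":
    release()
```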

2. Zero Downtime Trunk: All engineers commit code to a mainline or trunk frequently, and every commit is validated through a set of commit stage tests. These validations create a quick feedback cycle that spots defective code early. The trunk should be available to all teams – dev, test and ops – at ALL times.
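
One simple way to keep the trunk green is to validate every commit before it lands. Here is a sketch of a git pre-commit hook (saved as .git/hooks/pre-commit); the tests/commit directory is an assumed layout for your fast commit-stage suite.

```python
#!/usr/bin/env python3
"""Sketch of a git pre-commit hook that runs the commit-stage tests before
a change lands on trunk. 'tests/commit' is a hypothetical path; point it
at your own fast suite."""
import subprocess
import sys

result = subprocess.run(["pytest", "tests/commit", "-q"])
if result.returncode != 0:
    # Reject the commit so broken code never reaches the shared trunk.
    sys.exit("Commit-stage tests failed; trunk stays green, commit rejected.")
```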

3. Fail Fast Commit Tests: The “Fail Fast” mantra of the commit tests is to validate every commit to the system through a set of tests that spot issues early in the cycle. The commit tests should execute in less than 10 minutes and include compilation, unit and integration tests. Hence, designing the right suite of tests for the commit stage is very critical. All tests should PASS for the commit stage to pass, and the build should meet thresholds on other key code metrics like coverage and complexity, e.g. 60-70% unit test coverage.
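
As one possible shape for such a gate, the sketch below enforces both the 10-minute budget and a 60% coverage floor. It assumes pytest with the pytest-cov plugin; the src and tests/commit paths are hypothetical.

```python
#!/usr/bin/env python3
"""A sketch of a commit-stage gate: the suite must finish inside 10 minutes
and hold at least 60% unit-test coverage. Assumes pytest + pytest-cov;
'src' and 'tests/commit' are hypothetical paths."""
import subprocess
import sys

try:
    result = subprocess.run(
        ["pytest", "tests/commit", "--cov=src", "--cov-fail-under=60", "-q"],
        timeout=600,  # fail fast: the commit stage gets at most 10 minutes
    )
except subprocess.TimeoutExpired:
    sys.exit("Commit stage exceeded the 10-minute budget; failing fast.")

sys.exit(result.returncode)
```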

4. Concurrent Test Execution: Tests that validate the functional and non-functional behaviors of the application should be automated. Functional test automation should target the creation of stateless tests to allow for parallel execution. Tests that carry state and depend on a pre-condition or on other application state consume time and may demand sequential execution. Hence, a test should create its own data that doesn’t conflict with other tests, and tear down or clean up at the end. Tests should be written to run on multiple nodes concurrently and deliver quick results. A handful of exploratory tests and tests verifying usability and acceptance should still be conducted manually.
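
The sketch below shows one way to write such a stateless test in pytest: each test creates its own uniquely named data and tears it down afterwards, so parallel workers started with the pytest-xdist plugin (pytest -n auto) never collide. The create_account and delete_account calls are stand-ins for your application’s real API.

```python
"""A stateless test sketch: unique, self-created data plus guaranteed
teardown makes it safe to run concurrently across many workers."""
import uuid
import pytest

def create_account(name):      # placeholder for the real system-under-test call
    return {"name": name, "active": True}

def delete_account(account):   # placeholder teardown against the real system
    account["active"] = False

@pytest.fixture
def account():
    acct = create_account(f"user-{uuid.uuid4()}")  # unique, conflict-free data
    yield acct
    delete_account(acct)                            # always clean up

def test_new_account_is_active(account):
    assert account["active"]
```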

5. Short Cycle Time: Cycle time is the time it takes for a single change to move from check-in to release, and it is an important metric to always keep an eye on; keep it to the minimum. A short cycle time means reduced time to market for products that treat speed as a key strategy. Culture plays an important role in achieving this: the DevOps movement has shown a high success rate in continuous deployment.
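
To make the metric concrete, here is a toy calculation of cycle time per change; the timestamps are illustrative only, and in practice they would come from your SCM and deployment logs.

```python
"""A toy cycle-time calculation: elapsed time from check-in to release."""
from datetime import datetime

changes = [  # (change id, checked in, released) -- sample data only
    ("c101", datetime(2013, 3, 1, 9, 0),  datetime(2013, 3, 1, 11, 30)),
    ("c102", datetime(2013, 3, 1, 10, 0), datetime(2013, 3, 1, 11, 30)),
]

for change_id, checkin, release in changes:
    cycle_time = release - checkin   # the metric to drive toward the minimum
    print(f"{change_id}: cycle time {cycle_time}")
```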

6. Emergency Stop: Applications demanding high availability and quality cannot afford to fail or break. By maintaining versions of builds, the application can be rolled back to a known-good state whenever undesired behavior appears. However, this emergency stop should be considered only a last resort.
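
One minimal way to implement such an emergency stop is to keep a history of released versions and redeploy the last known-good one; in this sketch, deploy_version is a placeholder for your real deployment step and the version numbers are illustrative.

```python
"""A minimal rollback sketch: keep a history of released build versions and,
as a last resort, redeploy the last known-good one."""

release_history = ["1.4.0", "1.4.1", "1.5.0"]  # newest last; 1.5.0 is live

def deploy_version(version):                    # placeholder deploy call
    print(f"Deploying build {version} ...")

def emergency_stop():
    """Roll back from the current (broken) build to the previous one."""
    broken = release_history.pop()
    known_good = release_history[-1]
    print(f"Rolling back from {broken} to known-good {known_good}")
    deploy_version(known_good)

emergency_stop()
```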

7. Canary Releasing: Release new versions to only a subset of your users to get faster feedback on how the new features are being used. E.g. Flickr started as a gaming site but soon realized that people were using it for sharing photos. Canary releasing also enables A/B testing, an experimental approach to comparing two different versions of the same product.
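
A common way to implement the routing is a stable hash on the user id, so a fixed slice of users consistently sees the new version; the 5% split and the version names below are illustrative assumptions.

```python
"""A canary-routing sketch: a stable hash puts a fixed percentage of users
on the new version, so each user always sees the same variant."""
import hashlib

CANARY_PERCENT = 5  # illustrative: 5% of users get the canary build

def variant(user_id: str) -> str:
    # Hash to a bucket in [0, 100); the same user always lands in the same bucket.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "v2-canary" if bucket < CANARY_PERCENT else "v1-stable"

for uid in ["alice", "bob", "carol", "dave"]:
    print(uid, "->", variant(uid))
```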

8. Release Monitoring System: Create a suite of highly efficient tools for monitoring and analyzing the status of builds across the different stages, the quality checks each build has passed, and, once pushed to production, any errors, resource utilization and other relevant metrics. Without this kind of dashboard, continuous delivery will remain a dream. Remember, your release train needs an intelligent real-time failure detector!
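
As a rough sketch of such a monitor, the snippet below polls each pipeline stage and raises an alert on failure; the stage names and the fetch_status stub are assumptions you would wire to your CI server and production health checks.

```python
"""A minimal monitoring sketch: poll each stage of the pipeline and flag
failures in real time. 'fetch_status' is a stub; query your CI server
and production health checks there."""
import time

def fetch_status(stage):  # stub: replace with a real CI/APM query
    return {"commit": "PASS", "acceptance": "PASS", "production": "FAIL"}[stage]

STAGES = ["commit", "acceptance", "production"]

def monitor(poll_seconds=60, cycles=1):
    for _ in range(cycles):
        for stage in STAGES:
            status = fetch_status(stage)
            marker = "OK " if status == "PASS" else "ALERT"
            print(f"[{marker}] {stage}: {status}")
        time.sleep(poll_seconds)

monitor(poll_seconds=0)  # single pass for the sketch
```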


What’s so BIG about “Big Data”?


As I write this, there is so much buzz around Big Data, analytics and technologies like Hadoop, NoSQL and MapReduce, which are used in the “Big Data” context. McKinsey, Gartner and many others have forecast the value and potential of the Big Data business. We have been consuming a large portion of digital content for a few decades, so why is data gaining so much popularity only recently?

Facebook’s data warehouses grow by “over half a petabyte every 24 hours”, Walmart handles one million customer transactions every hour, internet traffic is predicted to reach 667 exabytes by 2013, and research says that content doubles every 1.8 years. This explosion of data and content poses a big challenge for companies dealing with complex data across different dimensions – volume, velocity and variety – the 3Vs of data.

Companies have to process large data sets from various sources – emails and other unstructured content, web logs, GIS, RFID, social feeds, events, marketing, documents, audio and video – to make critical decisions about their business. A plethora of large-scale data gathering, analytics and visualization tools is springing up in the industry. Companies that shape their strategies around smart data analytics techniques are expected to make a big transformation toward sustained business growth. While structured data stored in relational databases made important decisions a cakewalk, unstructured BIG DATA is the Game Changer.

Data by itself is raw and doesn’t carry any meaning or intent. A whole set of techniques, from creation, curation, review and moderation to structuring and evaluation, makes data useful to humankind. Content is generated by humans, unlike most data, which is usually machine generated. Content could be your emails, tweets and documents, which carry sentiment and emotion and are mostly unstructured. The trick lies in identifying hidden patterns inside the unstructured data and content to generate value. I’m not going to dig into the different technologies that support this data evolution through analysis, which include machine learning, OCR, semantic analysis, pattern recognition, distributed file systems and cloud infrastructures.

What it means for us, at the end of the day, is a smarter world to live in – from the gadgets we use to the services we consume across different sectors!

Hello world!


Welcome to my online world! This is my FiRsT post. I plan to blog about anything and everything – technology, leadership, new industry trends – and to share my personal experiences. You are never too late for anything, and so I am HERE – BlogginG!!