This post is part of the Applied Cloud Stories initiative, promoted by Microsoft, which invites the community to share success stories built with Azure services and products, so that others can learn how to use them to overcome the challenges they face, or simply to improve their work and business.
A few years ago, when I started working with one of my clients, among an extensive number of important projects there was an initiative, driven by the team, to redesign our work processes and improve the productivity and quality of our applications and the services we provide.
So we decided to adopt Azure DevOps to mitigate some of the pain points we were facing at the time.
Azure Repos and Azure Artifacts
Let's start with application deployments. The client had to manage an on-premises infrastructure to host its applications. A dedicated person would manually deploy each application to the on-premises environment, configure the synchronization processes, transfer files between virtual machines and folders, and more. It was a time-consuming process, with several manual operations, and prone to errors.
Also, since the applications were developed by third-party companies, there were multiple repositories in different locations (GitHub, TFS, SVN, etc.).
During the initial configuration of Azure DevOps, we decided to migrate the code from these different locations into the Azure Repos we had set up. As soon as the migration finished, we had a single, central place for all our code, secure and readily available.
Also, since some applications shared common code and components, we ended up creating Azure Artifacts feeds for those shared packages, managed in a central place as well.
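As a sketch of how a pipeline can consume packages from such a feed, a NuGet restore step pointed at an Azure Artifacts feed looks roughly like this (the feed name below is a hypothetical placeholder, not the client's real one):

```yaml
# Hypothetical sketch: restore NuGet packages from an Azure Artifacts feed.
# "SharedComponents" is an illustrative feed name.
steps:
  - task: NuGetCommand@2
    inputs:
      command: 'restore'
      restoreSolution: '**/*.sln'
      feedsToUse: 'select'
      vstsFeed: 'SharedComponents'   # the Azure Artifacts feed holding the shared packages
```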
This whole process was supported by Microsoft's extensive and useful documentation. For more information about Azure Repos, please visit this site.
After migrating the code, the next step was to explore Azure Pipelines to automate our application deployment processes.
Since the applications were hosted on-premises and there were several custom processes for synchronization, IIS configuration, and more, we knew this step would require some work.
We started by configuring the agents in the environments our pipelines would use, so those agents could access the environment while running the tasks of a given pipeline. After finalizing the agent configuration, we realized we would need to implement some custom vNext tasks for our pipelines, since we relied on custom IIS configurations, custom synchronization processes, and more.
We started developing the vNext tasks, and after a few weeks we had a full pipeline (Build + Release) ready to deploy our applications. We moved from a manual deployment process that used to take several hours to an automated one that deploys an application in 10-15 minutes.
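The overall shape of such a pipeline can be sketched in today's YAML syntax (at the time, ours was assembled in the classic Build + Release designer). The agent pool name and the custom task below are hypothetical stand-ins for our self-hosted agents and vNext tasks:

```yaml
# Hypothetical sketch of a build + deploy pipeline running on self-hosted agents.
trigger:
  branches:
    include:
      - main

pool:
  name: OnPremAgents        # illustrative self-hosted pool with access to the IIS servers

steps:
  - task: MSBuild@1
    inputs:
      solution: '**/*.sln'
      configuration: 'Release'

  # Custom vNext task handling our IIS and synchronization specifics
  # (illustrative name and inputs, not a real Marketplace task).
  - task: CustomIisSync@1
    inputs:
      siteName: 'MyApplication'
      syncFolder: '\\fileserver\drops\MyApplication'
```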
Once again, we relied heavily on the available Microsoft documentation, especially the documentation on creating custom vNext tasks for pipelines. In case you want to try and create your first pipeline, please have a look at this site.
Azure Boards and Azure Test Plans
With our deployment processes implemented, using Continuous Integration and Continuous Delivery practices, it was time to look at how we could improve our work processes and manage the work done by the third-party companies. So it was time to explore Azure Boards and Azure Test Plans.
We started by configuring projects in Azure Boards, one project per application. Since at the time we were adopting agile processes to manage our own and our vendors' work, we configured these projects using the Agile template available.
After configuring the team members, team capacity, and sprint duration, and making some small adjustments to fields in the Agile template, we started creating sprints with epics, features, and stories to manage our own and our vendors' work.
Additionally, at this stage we were able to track code changes and commits against the tasks performed by the developers, quantify the number of bugs per sprint and per deliverable, and more. It was a big step forward that gave everyone on the team visibility into what each member was working on, the expected time to finish, the timeline of the sprints and the project, and more.
We also started to configure and use Test Plans and Test Cases, so we could specify the tests our team and the vendors' teams would need to perform, and track those executions and results against the tasks being performed.
The next step was to move our documentation into the Azure Wiki. This way, the documentation lives alongside the application's code, all managed in one place. Also, when components are versioned (e.g. API versioning), you can track the documentation directly against the different versions of the APIs. And of course, since the Azure Wiki supports Markdown, we could use it to easily write well-styled, nicely formatted documentation, improving the experience of the people reading it.
Last, but not least, it is important to mention that a few months later the client decided to run a Cloud First initiative: planning the migration of existing applications to Azure and building new cloud-native applications there. So once again we had to work on our Azure Pipelines processes, but this time it saved us a huge amount of time, since we could leverage the existing tasks and templates available in the Azure Marketplace and implement our pipelines through configuration alone, without needing to code them.

We also took the opportunity to enhance our processes and introduce another important principle: Infrastructure as Code (IaC). With this principle, we leveraged Azure Resource Manager (ARM) and defined ARM templates to create the Azure resources required to host our applications. During the execution of our pipelines, built-in tasks would use these templates to create or update the required Azure resources for a given application.
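That IaC step can be sketched with the built-in ARM deployment task; the service connection, resource group, and template paths below are hypothetical placeholders:

```yaml
# Hypothetical sketch: deploy an ARM template from the repository as part of a pipeline.
steps:
  - task: AzureResourceManagerTemplateDeployment@3
    inputs:
      deploymentScope: 'Resource Group'
      azureResourceManagerConnection: 'MyServiceConnection'   # placeholder service connection
      subscriptionId: '$(subscriptionId)'
      action: 'Create Or Update Resource Group'
      resourceGroupName: 'rg-myapplication'                   # illustrative resource group
      location: 'West Europe'
      templateLocation: 'Linked artifact'
      csmFile: 'infrastructure/azuredeploy.json'
      csmParametersFile: 'infrastructure/azuredeploy.parameters.json'
      deploymentMode: 'Incremental'
```

With Incremental mode, re-running the pipeline only adds or updates resources defined in the template, so the same step safely handles both first-time creation and later updates.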
This post is a success story of how Azure DevOps helped improve the productivity and quality of the work processes at one of my clients.
Using Azure Repos, we consolidated the applications' source code in a central place, instead of managing the code across multiple locations.
Moving from manual application deployments to automated deployments with Azure Pipelines, we cut the process duration from several hours to 10-15 minutes.
Leveraging the Azure Boards templates, queries, and reports, we were able to manage our work (sprints, user stories, tasks, and more) using agile processes, and report useful metrics that we previously could not collect efficiently.
Exploring the Azure Test Plans features, we defined clear and concise Test Plans and Test Suites to enhance the quality of our applications and minimize the impact of bugs, by proactively testing the applications and finding bugs before they hit production.