Introduction
When this option fits your situation, it can be a very effective way to get to the cloud quickly and efficiently. It is appropriate for stateless, tightly coupled applications that are difficult or risky to move to the cloud piece by piece, or for components of a 3-tier application with latency-sensitive inter-connectivity. You accomplish this by moving your workload from a data center to hosted infrastructure on the cloud without code changes.
The bulk of the work requires few code changes, if any; it mostly consists of copying binaries, creating configurations, setting up security groups and elastic IP addresses, and, most importantly, optimizing.
However, that’s only the tip of the iceberg and the beginning of the journey. In this strategy, the application might not be able to take immediate advantage of the elasticity and scalability of the cloud, but it can help shrink the IT infrastructure footprint.
This affordable and “mostly” straightforward solution allows for a quick win and some cost savings through CAPEX to OPEX conversion while gaining some cloud computing benefits and experience.
The checklists below explore best practices and tips to help maximize the benefits, minimize the often-overlooked migration challenges, and limit surprises.
The Lift and Shift Approach
The checklists below, organized before, during, and after the migration, cover the key steps to ensure the cloud journey transition is successful.
Checklist - Before the Migration
1. Expect Higher Costs: Higher initial costs are the default with dual hosting, which should be phased out gradually. The costs also include the time the team needs to gain cloud management experience, so it's important not to rush and to set the right expectations. The lift-and-shift migration event alone will not generate the desired total cost of ownership reduction and return on investment without additional optimizations and continuous improvements. A business can expect savings of up to 30-40% within 12-18 months after migration and optimization, while gaining cloud experience; however, more spending is expected in the initial six months after the migration.
2. Establish a Migration Architect Role: The migration architect is a system architect responsible for planning and completing all aspects of the migration. Their responsibilities include managing the end-to-end process, defining the backlog, choosing and assigning other roles, deciding on needed refactoring, the data strategy, the security strategy, understanding cloud solution requirements, and guiding the timeline.
3. Single Cloud, Hybrid, or Multi-Cloud Decision: Decide whether to pick a single cloud provider and migrate and optimize the application for that one environment, or whether multiple cloud providers are needed. A single cloud provider is simpler for deployment and lets the team gain experience with one provider's APIs; the disadvantages include vendor lock-in and limited ability to negotiate pricing when you have no alternatives. Using multiple cloud providers or staying provider-agnostic offers more options but adds development complexity. You may also be in a situation where a hybrid cloud makes sense. The important thing is to decide early, understand the pros and cons of each, and document the system architecture (e.g., with CloudCraft or LucidChart) for guidance.
4. Cloud is No Escape From Technical Debt: Conduct a technical debt analysis before the migration. It is far easier to address gaps, especially high-priority bugs and security issues, before a straight lift-and-shift moves all the technical debt from the data center into the cloud. Doing so saves a fair amount of work and allows the team to focus on learning the new environment.
5. Clean House to Avoid Over-Provisioning: Workloads are typically over-provisioned when migrated to the cloud, carrying over excess compute and storage capacity and simply transferring existing inefficiencies to a new location. Treat the migration as an opportunity to rationalize compute and storage needs. Taking the time to conduct a clear inventory may surface many opportunities for cost savings and efficiencies (e.g., duplication, outdated content, and even servers that are not needed). We estimate the initial cost-cutting opportunity to be in the range of 20-30% on average.
If usage and storage trend data are available, they can surface even larger opportunities for elimination and housekeeping with regard to both storage and CPU capacity.
6. Manual vs. Automation Decision: Consider what to automate from the onset. Automating deployment and taking an infrastructure-as-code approach can add significant efficiencies, improve application management, and reduce costs.
Understanding the ecosystem and leveraging the tools to their full extent pays off quickly. Rationalizing the workload and creating templates means less time spent on future or repeat deployments, dev/test environments, disaster recovery, and rollbacks.
Cloud providers offer tools to templatize and automate configuration and provisioning, such as AWS CloudFormation, AWS OpsWorks, Azure Resource Manager (ARM) templates, GCP Cloud Deployment Manager, or open-source tools.
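For illustration, here is a minimal sketch in Python using the AWS boto3 SDK; the stack name and AMI ID are hypothetical placeholders. The same version-controlled template can be reused to stand up dev/test environments or disaster-recovery copies:

    import json
    import boto3

    # A tiny CloudFormation template kept in version control; the AMI ID is a placeholder.
    template = {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "AppServer": {
                "Type": "AWS::EC2::Instance",
                "Properties": {"InstanceType": "t3.medium", "ImageId": "ami-0123456789abcdef0"},
            }
        },
    }

    cloudformation = boto3.client("cloudformation")
    cloudformation.create_stack(
        StackName="lift-and-shift-app",     # hypothetical stack name
        TemplateBody=json.dumps(template),  # one template, many repeatable deployments
    )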
7. Explore the Ecosystem with Pilots: Piloting is the only way to truly grasp the ecosystem, the automation tools, and the opportunities for cost reduction in a lift-and-shift migration. Explore the native tools for your target platform. Read more at AWS cost reduction, Azure Migration Journey, and GCP Migration.
8. Data Security and Compliance Are Different: Changing environments means understanding how availability zone selection might impact the data and security compliance standards for your specific domain. This has to be worked out ahead of time to avoid rework or downtime.
9. Application Architecture Optimization: Often, the architecture is not optimized for the cloud platform (e.g., a load balancer requiring a virtual appliance). A good look at the architecture before the migration is important to surface issues and minimize the number of surprises. A few questions to consider about the current architecture:
What changes are needed to leverage dynamic scaling?
Where can resource utilization and allocation be reduced?
Which components can move to a service-oriented architecture before the migration?
10. Data Migration Requires Planning: Understand the data architecture and the current data set, and migrate them carefully. This is a challenging exercise, as it can heavily impact performance, and it includes data diffs and synchronization for potential repeat migrations. It needs to be well planned, with proper automation to avoid data loss or duplication. Luckily, cloud providers have a host of automated tools to assist (AWS, Azure, GCP).
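As a minimal sketch of verifying a repeat migration, the following Python snippet diffs checksums between source and target file trees; the paths and layout are hypothetical:

    import hashlib
    from pathlib import Path

    def checksums(root: Path) -> dict:
        # Map each file's relative path to its SHA-256 digest.
        return {
            str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
            for p in root.rglob("*") if p.is_file()
        }

    source = checksums(Path("/data/source"))  # hypothetical paths
    target = checksums(Path("/data/target"))

    missing = source.keys() - target.keys()  # not yet migrated
    mismatched = {k for k in source.keys() & target.keys() if source[k] != target[k]}
    print(f"missing: {sorted(missing)}, mismatched: {sorted(mismatched)}")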
11. Dependency Mapping Will Inform Performance: The older the system, the more dependencies exist and are taken for granted, in some cases without anyone being aware of them. Before migration, it's good practice to create a dependency chart. Dependencies should be identified and migrated simultaneously to avoid latency issues. NewRelic offers a nice dependency mapping tool.
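A minimal sketch of turning a dependency chart into migration "waves", so each component moves with or after its dependencies; the component names are hypothetical:

    # Each component maps to the set of components it depends on.
    deps = {
        "web-frontend": {"order-service", "auth-service"},
        "order-service": {"orders-db"},
        "auth-service": {"users-db"},
        "orders-db": set(),
        "users-db": set(),
    }

    waves = []
    remaining = dict(deps)
    while remaining:
        migrated = {x for w in waves for x in w}
        # Everything whose dependencies are already migrated forms the next wave.
        ready = {c for c, d in remaining.items() if not d - migrated}
        if not ready:
            raise ValueError("circular dependency detected")
        waves.append(sorted(ready))
        for c in ready:
            del remaining[c]

    print(waves)  # [['orders-db', 'users-db'], ['auth-service', 'order-service'], ['web-frontend']]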
12. Security and Privacy Are a Different Mindset: The security know-how from on-premises data centers will be reshaped as the team learns the shared responsibility model between the cloud provider and itself. It's not as simple as firewall configuration: security has to take a zero-tolerance approach and assume breaches. The cloud favors an infrastructure-as-code approach for the efficient use of configurable security groups and gateway services.
Azure, for example, is an open platform by design, and locking it down is a skill that needs honing. The design has to be for resiliency and protection, with a zero-trust architecture. Using a threat modeling framework like STRIDE (see the AWS Well-Architected Framework, Azure) before deployment is recommended.
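As a small illustration of security groups as code, a sketch using the AWS boto3 SDK with a hypothetical VPC ID and group name, admitting only HTTPS in the spirit of least privilege:

    import boto3

    ec2 = boto3.client("ec2")
    sg = ec2.create_security_group(
        GroupName="web-tier",           # hypothetical name
        Description="Web tier: HTTPS only",
        VpcId="vpc-0123456789abcdef0",  # hypothetical VPC
    )
    # Open exactly one port; everything else stays denied by default.
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 443,
            "ToPort": 443,
            "IpRanges": [{"CidrIp": "0.0.0.0/0", "Description": "public HTTPS"}],
        }],
    )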
13. Renew Disaster Recovery and Business Continuity: New disaster recovery and business continuity plans need to be developed and tested before deployment.
14. Commit to a Future Culture Change: The cloud will have implications for the organization's culture. While no immediate action is required, it's important to start considering this early so that the team in charge of the migration can plan for those changes and roll out the necessary training, team design, and practice upgrades when the time is right.
15. Roll-back Plan Development: Most organizations embark on the migration with enthusiasm but often miss that it may take a few attempts or, in the worst case, may not succeed at all (e.g., back-office systems, specific technology needs, costs, a mix with private cloud, the need for a multi-cloud solution). Luckily, failure is rare, but it does require light planning, especially to avoid vendor lock-in.
Checklist - During the Migration
One common concern is that the cloud deployment needs to be thoroughly tested for performance and scalability. The architecture may be designed in a way that does not take advantage of the target infrastructure, so scalability and performance can degrade, or the deployment can consume excessive resources, defeating the purpose.
1. Do Not Rush!: Migrations hit snags no matter how much planning takes place; pilots can help reduce the number of delays. It's important not to expedite the initial migration: rushing leads to a poorly optimized migration that incurs larger long-term costs (e.g., wrong VM sizing, underutilization) and deprives the migration team of the learning they need to acquire.
2. Watch Out for Over-Provisioning (again): The only data baseline that exists will be from a different environment and will not match the new one. The first migration will also not be a measure of ongoing costs, as it will be followed by configuration and optimization; in other words, the initial deployment will very likely cost more than planned until further optimized. Mirroring the old environment, including unneeded artifacts, will make those extra costs even more pronounced. It is important to establish a new baseline definition with KPIs for the new deployment.
3. Scalability and Performance Test to Avoid Surprises: Applications that have performance issues on-premises will very likely run even slower in the cloud. While resources can be augmented in the cloud, the cost will also be higher. Applications with latency-sensitive dependencies will likely perform poorly. Using a service dependency map ensures all dependencies have been accounted for and analyzed.
An application with poor horizontal scalability will likely exhibit the same behavior on a cloud VM. Vertical scaling with larger VMs can help, but again it increases costs and defeats the purpose, and there are ultimate bottlenecks, for example with database I/O. The rule of thumb is that a small on-premises VM typically has relatively low OPEX and relatively high CAPEX (hardware, licenses), whereas a small VM in the cloud has no CAPEX but relatively higher OPEX.
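A crude concurrency probe can establish comparable latency numbers before and after migration. The endpoint below is hypothetical, and a dedicated load-testing tool should be used for serious tests:

    import time
    from concurrent.futures import ThreadPoolExecutor
    from urllib.request import urlopen

    URL = "https://staging.example.com/health"  # hypothetical endpoint

    def timed_request(_):
        # Measure wall-clock time for a single round trip.
        start = time.perf_counter()
        urlopen(URL, timeout=10).read()
        return time.perf_counter() - start

    # Fire 500 requests across 50 concurrent workers.
    with ThreadPoolExecutor(max_workers=50) as pool:
        latencies = sorted(pool.map(timed_request, range(500)))

    p95 = latencies[int(len(latencies) * 0.95)]
    print(f"median={latencies[len(latencies) // 2]:.3f}s p95={p95:.3f}s")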
4. Consider Elasticity vs. Peak-Capacity Design: Recreating on-premises configurations designed to handle peak capacity incurs continuous costs in a cloud environment; instead, optimize for elasticity so that capacity is added only when needed.
5. Perfect the Hardware/Hypervisor Configuration: Cloud platforms are designed to serve everyone, and the out-of-the-box configuration is unlikely to fit your organization like a glove. This translates into a lot of learning: securing images, automating configurations, and setting up enterprise monitoring with the provided tools to fit your architecture and scalability needs and bring efficiency and cost savings.
6. Leverage Monitoring and Tools: The monitoring approach, toolset, team practices, and metrics differ on-premises versus in the cloud. Planning those metrics (e.g., performance, uptime, business metrics) and understanding the available platform-specific toolset is important to define and set up before migration. NewRelic offers a very comprehensive list of metrics to consider.
7. Understand Testing in Production (TiP): Unless your budget allows for a parallel non-production environment and the tools needed to keep the environments in sync, planning for testing in production (TiP) is mandatory; done well, it propels the organization and the software development lifecycle to new possibilities for faster go-to-market. Read more on TiP.
Checklist - After the Migration
1. Commercial and Finance: There will be a commercial impact, especially in finance and forecasting. With the cloud, the steady CAPEX line items give way to less predictable forecasting, and there will likely be additional OPEX (e.g., expected external consultancy, reduction in enterprise agreement licensing if rolled into the cloud provider plan). This will require adjustments to finance processes, especially after the initial migration is concluded, when regular data collection starts to indicate normal spend ranges.
The following cloud provider tools combined with collected metrics can help with forecasting: AWS TCO Calculator, GCP Pricing, Azure Pricing, and AWS Monthly Cost.
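As a toy example of trend-based forecasting, a naive linear projection in Python (requires Python 3.10+; the monthly figures are hypothetical and should be replaced with collected billing data):

    from statistics import linear_regression

    months = [1, 2, 3, 4, 5, 6]
    spend = [14200, 13100, 12400, 11900, 11600, 11300]  # USD per month, hypothetical

    # Fit a straight line through the observed spend and extrapolate.
    slope, intercept = linear_regression(months, spend)
    for m in (7, 8, 9):
        print(f"month {m}: projected ${slope * m + intercept:,.0f}")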
2. Expect Cultural Changes: The cultural angle is often overlooked and not planned for. As with any change, there will be resistance to the new processes, change controls, and software development lifecycle. It's important to plan for this with leadership messaging and support, including delivering a plan to the organization and providing training where necessary. Beyond the initial changes, there will likely be longer-term organizational changes as the team moves deeper into the cloud, and the organization cannot trail behind (e.g., DevOps practices).
3. Scheduled Downtime: This is opportunistic and will vary based on the type of system, but evaluate opportunities to turn off specific VMs or services that are not utilized. This can save on compute charges, although disk storage will still be billed unless you also leverage auto-delete. Scheduling is typically the easiest way to go, but cloud tools such as serverless functions in Azure, Lambda/CloudWatch in AWS, or Cloud Scheduler in GCP can also be leveraged, as sketched below.
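A minimal sketch of a scheduled-shutdown AWS Lambda handler (triggered on a CloudWatch/EventBridge schedule), assuming a hypothetical tagging convention for non-production instances:

    import boto3

    ec2 = boto3.client("ec2")

    def handler(event, context):
        # Find running instances tagged with the (hypothetical) office-hours schedule.
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": "tag:schedule", "Values": ["office-hours"]},
                {"Name": "instance-state-name", "Values": ["running"]},
            ]
        )["Reservations"]
        ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
        if ids:
            ec2.stop_instances(InstanceIds=ids)
        return {"stopped": ids}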
4. Data, Monitoring, and Alerts: In an on-premises world, data is very important for quality and uptime, and nice to have for managing fiscal CAPEX costs. In the cloud, it is an essential tool that your organization must get good at: build the muscle to forecast in a dynamic ecosystem, receive alerts, and manage security. The team will need to learn and leverage cloud-native monitoring and alerting tools such as AWS CloudWatch, GCP Stackdriver, and Azure Monitor.
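For example, a basic CPU alarm can be defined as code with boto3; the instance ID and SNS topic ARN below are hypothetical:

    import boto3

    cloudwatch = boto3.client("cloudwatch")
    cloudwatch.put_metric_alarm(
        AlarmName="app-server-high-cpu",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        Statistic="Average",
        Period=300,               # 5-minute windows
        EvaluationPeriods=3,      # sustained for 15 minutes
        Threshold=80.0,           # percent CPU
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )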
5. Leverage the Data Trends to Optimize: Although this may appear to repeat the previous point, it is about the insights that data surfaces over time. After running the new environment for a few months, accumulated data will reveal trends in usage, performance, and storage that allow further optimization of the VMs and databases in the new environment. Leverage tools like Azure Advisor, AWS Compute Optimizer, GCP Compute Engine recommendations, Azure Monitor, and AWS CloudWatch for right-sizing information, utilization analysis, and performance metrics dashboards.
6. Smart Pricing Discounts: As the team gets better at understanding your needs from the collected data, you can leverage Spot or Reserved Instances, which all three major cloud providers offer (AWS, Azure, and GCP). For known workloads (e.g., known spikes, development work), this can bring costs in those areas down by 60-80%. The power of data, alerts, and dashboards, and the ability to leverage them, is key to cloud planning and to replacing higher on-demand costs with committed, planned usage.
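The arithmetic is simple; here is a back-of-the-envelope comparison for a steady baseline workload, with hypothetical hourly rates:

    HOURS_PER_MONTH = 730
    on_demand_rate = 0.0416  # USD/hour, hypothetical on-demand rate
    reserved_rate = 0.0166   # USD/hour, hypothetical committed rate

    on_demand = on_demand_rate * HOURS_PER_MONTH
    reserved = reserved_rate * HOURS_PER_MONTH
    print(f"on-demand ${on_demand:.2f}/mo, reserved ${reserved:.2f}/mo "
          f"({1 - reserved / on_demand:.0%} saved)")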
7. Rethink the Software Licensing Strategy: Depending on your circumstances, conduct a pricing study on whether to port your licenses and roll them into your cloud provider plan or to continue with an Enterprise Agreement. The migration is a good opportunity to revise, compare pricing, and shed excess licensing.
8. Agility and DevOps Transformation: To truly leverage the power of the cloud, you will eventually have to transform your team with DevOps practices and structure. DevOps is a set of engineering practices that combines software development, IT operations, processes, and tools to shorten the development lifecycle and provide continuous delivery with higher software quality.
Ultimately, it ensures teams focus on building their IP, faster go-to-market, and the ability to collect feedback while continuously improving quality. Read more about DevOps at NewRelic, but some of the core principles include:
Collaboration Mindset (organizational setup)
Automation
Tools: Test and Build (e.g., Jenkins)
Tools: Collaborative Source Control (e.g., GitHub)
Tools: Configuration Management (e.g., Chef)
Tools: Monitoring and Data Analysis (e.g., NewRelic)
9. Define the Long-Term Cloud Strategy: Lift-and-shift might be a great first step toward the cloud, but if maximum benefits are desired post-migration, consider these suggestions:
Modify the architecture to enable auto-scaling to reduce spending and take advantage of the cloud elasticity promise.
Prioritize and select individual services in your architecture for redesign and deployment in containers and Kubernetes with orchestration to scale and self-heal. Read More at Cloud Native Computing Foundation (Trail Map)
Replace 3rd-party network virtual appliances with cloud-native services such as the WAF in Azure Application Gateway.
Consider addressing technical debt by wrapping old legacy functionality in a set of modern REST APIs (see the sketch after this list). This makes it suitable for the new ecosystem of modern cloud services and allows for building services around the legacy system in the future.
Review the Microsoft checklist of best practices used in cloud technical due diligence; it provides useful insights.
Consider continuing with a hybrid approach, such as re-platforming and refactoring, as a long-term plan: a multi-year, gradual approach to optimization and adoption of native cloud services and serverless to reap the full value. AWS's diagram of the various migration approaches visualizes these options.
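As referenced above, a minimal sketch of wrapping legacy functionality behind a modern REST endpoint using only the Python standard library; get_order() stands in for a hypothetical call into the legacy system:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    def get_order(order_id: str) -> dict:
        # Placeholder for a call into the legacy system.
        return {"id": order_id, "status": "shipped"}

    class OrderAPI(BaseHTTPRequestHandler):
        def do_GET(self):
            if self.path.startswith("/orders/"):
                body = json.dumps(get_order(self.path.split("/")[-1])).encode()
                self.send_response(200)
                self.send_header("Content-Type", "application/json")
                self.end_headers()
                self.wfile.write(body)
            else:
                self.send_error(404)

    if __name__ == "__main__":
        # Serve the legacy function as a modern JSON API on port 8080.
        HTTPServer(("0.0.0.0", 8080), OrderAPI).serve_forever()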
Lessons from History
A hundred years ago, factories running on steam power converted to the new electrical power technology to drive their machinery for reliability, safety, and cost savings.
They lifted and shifted electric power into the factories without changing the factory designs, and to the surprise of many, their output remained flat. However, the factories that adapted and re-organized their machinery layout into production lines saw their output efficiency increase. By leveraging electric motors to relocate machines anywhere, they exploited the new technology to compete better and achieve better margins from a better factory layout. Another favorite "evolve or perish" story, which I heard firsthand from Guy Kawasaki, who tells it so eloquently, is the story of ice.
About the Author
Hazem has been in the software and M&A industry for over 26 years. As a managing partner at RingStone, he works with private equity firms globally in an advisory capacity. Before RingStone, Hazem built and managed a global consultancy, coached high-profile executives, and conducted technical due diligence in hundreds of deals and transformation strategies. He spent 18 years at Microsoft in software development, incubations, M&A, and cross-company transformation initiatives. Before Microsoft, Hazem built several businesses with successful exits, namely in e-commerce, software, hospitality, and manufacturing. He has a multidisciplinary background in computer engineering, biological sciences, and business, with a career spanning the US, UK, and broadly across Europe, Russia, and Africa. He is a sought-after public speaker and mentor in software, M&A, innovation, and transformations. Contact Hazem at hazem@ringstonetech.com.