Before you move a single SAP system, you need to answer 5 questions. Miss even one and your migration might fail before it starts.

Most teams skip this part. They jump straight into provisioning cloud resources, copying environments, and trying to meet a go-live deadline. But that's like building a train schedule without knowing how many trains you've got, or where they're going.

Back when I consulted on large SAP migrations - from Colgate to Fortune 100 manufacturers - we never started with tooling. We started with assessment. Because without a clear understanding of what you're moving, how it's connected, and what it impacts, you're flying blind.

These are the 5 things I always map before touching a single system:

1. System inventory — what exists, and what's connected. You'd be surprised how many environments have orphaned or undocumented dependencies. Miss one? That's your failure point.

2. Business criticality — what can't go down, even for a minute. Not all systems are equal. Some run background jobs. Others run revenue. You migrate those differently.

3. Resource constraints — who's available, when, and for how long. Most IT teams are already overloaded. You need to know what talent you have before committing to timelines.

4. Downtime thresholds — what's the business actually willing to tolerate? I've seen 80-hour migration estimates get crammed into 24-hour windows. You don't negotiate after you start. You plan ahead.

5. Migration sequencing — what moves first, and what moves in parallel. Dependencies aren't just technical — they're operational. Order matters. Or everything stalls. (A minimal sequencing sketch follows this post.)

Assessment isn't overhead. It's insurance. And the cost of skipping it? Blown deadlines. Missed shipments. Angry execs. And a team stuck in recovery mode for weeks.

Every successful migration I've ever led had this phase built in from the start. And every failed one I've seen? Didn't.
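To make point 5 concrete, here is a minimal sketch of deriving a migration order from a dependency map, using Python's standard-library graphlib. The system names and the dependency edges are hypothetical placeholders, not anything from the post; in a real assessment this map would come out of your system inventory.

```python
# Minimal sketch: derive a migration order from a dependency map.
# System names and dependencies below are hypothetical examples.
from graphlib import TopologicalSorter, CycleError

# Each system maps to the set of systems it depends on (i.e., systems that
# must be migrated first, or at least planned into the same wave).
dependencies = {
    "erp_core": set(),
    "warehouse_mgmt": {"erp_core"},
    "reporting": {"erp_core", "warehouse_mgmt"},
    "ecommerce_frontend": {"erp_core"},
}

try:
    order = list(TopologicalSorter(dependencies).static_order())
    print("Migration order:", " -> ".join(order))
except CycleError as exc:
    # Circular dependencies mean some systems must move together in one wave.
    print("Cycle detected, plan a combined cutover for:", exc.args[1])
```

Anything that comes back as a cycle is a candidate for a single combined cutover window rather than independent moves.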
Tips for Successful Data Migration
Summary
Data migration involves transferring data from one system to another, a critical process in today’s evolving technology landscape. It requires meticulous planning to prevent issues like data loss, system incompatibilities, or disruptions to business operations.
- Start with a clear assessment: Identify what data needs to be moved, its dependencies, and how it impacts business processes to avoid unforeseen challenges.
- Clean and organize data: Remove duplicates, resolve discrepancies, and ensure only relevant, high-quality data is transferred to the new system.
- Test and validate: Run comprehensive tests to confirm data accuracy and system compatibility before fully committing to the new platform.
It's easy to forget that data migrations are an iterative process. The natural inclination of most people, when presented with a list of items to migrate, is to systematically work through that entire list. However, the objective of your first iteration of a data migration is usually to make sure you understand the tools, processes, and knowledge required to migrate the data. Migrating the utility network can be a daunting task. Even if you've done data migrations before, there is a very particular way you need to migrate your data in order to be successful (this process has been covered in detail in some of the webinars on this page https://lnkd.in/e7cr_9JF). So how do I recommend you do your first iteration?

1 - Bring over all your network layers. If it's a point, line, or polygon representing a network feature, you need to have it in your Utility Network so you can identify any topology issues associated with it.

2 - Don't migrate proposed, retired, or abandoned features in your first iteration. Depending on how you modelled these types of features in your data, they may cause topology errors. While you will eventually want to migrate them to your utility network, it makes it a lot easier to track down and fix topology errors in your first iteration if you leave these features behind.

3 - Focus on mapping the fields that are absolutely required by the utility network. This will always include Asset Group, Asset Type, and Global ID. You'll also need to include fields required to support tracing, which can vary for each model. Electric models need device status (open/closed) and should include the normal phasing. Pipeline models also need device status (open/closed), and if you have cathodic protection equipment you'll need to include the material of the equipment. (See the field-map sketch after this post.)

4 - If your current system maintains unique identifiers that can assist during the quality assurance process, add them to the target model and bring them along for the conversion. Common examples of this include network information (feeder, pressure zone, etc.), work order numbers, or any other identifier you can add to the target database and populate directly without needing any translation or domains.

In my next article I will describe what to do once you've got your proof-of-concept migrated. In the meantime, you can access a free tutorial on how to migrate data into a Utility Network using Esri's Data Loading tools in our documentation gallery: https://lnkd.in/eJBDXR9K
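As a rough illustration of step 3, here is a minimal Python sketch of a source-to-target field map that carries only the required fields. The source-side field names are hypothetical, and this is a conceptual sketch of the mapping idea, not the API of Esri's Data Loading tools, which handle this in practice.

```python
# Hedged sketch: a field map covering only what the utility network
# strictly requires for a first iteration. Source field names are
# hypothetical; adjust them to your own schema.
REQUIRED_FIELD_MAP = {
    "FEATURE_CLASS": "assetgroup",    # Asset Group
    "SUBTYPE_DESC":  "assettype",     # Asset Type
    "GUID":          "globalid",      # Global ID
    "SWITCH_STATE":  "devicestatus",  # open/closed, needed for tracing
    "PHASING":       "normalphases",  # electric models: normal phasing
}

def map_row(source_row: dict) -> dict:
    """Translate one source record into the target schema, dropping
    everything that isn't required for the first iteration."""
    return {target: source_row.get(source)
            for source, target in REQUIRED_FIELD_MAP.items()}

example = {"FEATURE_CLASS": "Medium Voltage Switch",
           "SUBTYPE_DESC": "Recloser",
           "GUID": "{EXAMPLE-GUID}",
           "SWITCH_STATE": "closed",
           "PHASING": "ABC",
           "LEGACY_NOTES": "dropped in iteration 1"}
print(map_row(example))
```

Everything outside the map (like the legacy notes field above) is deliberately left behind until a later iteration.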
-
I've been part of 7 significant data migrations throughout my career. I'll teach you the key things to be mindful of in 10 minutes:

1. Data migration > Copying data over to the new system. A few factors to consider:
* Do you need to move historical data?
* Are the data types similar between the new and old systems?
* Do you have DDLs defined in your code base?

2. Redirecting input sources > Your new system needs to be able to access the necessary inputs. A few factors to consider:
* Are the input data sources the same?
* Do the input sources in the new system have similar or better SLAs?
* Are the input sources of the same quality and schema?

3. Moving code > Does your old code work with the new system? If you are moving from a primarily SQL-based code base to a dataframe one, you'll need lots of new code. A few factors to consider:
* How different are the new and old systems in terms of code interface (e.g., pure SQL vs. Python)?
* Does the new system have all (and ideally more) features than the old one?
* Does the scale of the new system satisfy your data SLAs?
* The better your code tests, the simpler this step.

4. Tools > Your systems probably have non-pipeline tools (e.g., GitHub Actions); ensure that they work with the new system. A few factors to consider:
* Do the tools of the old system (e.g., dbt elementary -> Spark?) work in the new one, or have better replacements?
* If your new system has "another" tool to do similar things, ensure it actually can!
* If your system interacts with external company-wide tools (e.g., GitHub Actions), ensure good integration with the new system.

5. Validation period > Run the new and old systems in parallel for a switch-over period before moving users to the new system. A few factors to consider (see the validation sketch after this post):
* Keep the old and new systems running for a switch-over period.
* Run frequent (ideally scheduled) validation checks between the new and old systems during this period.
* After enabling end-user access to the new system, keep the old system on in case of rollbacks.

6. Permission patterns > Do the end users have the same permissions as in the old system? A few factors to consider:
* Do your current stakeholders have the same access (read-write-create-delete) in the new system?
* If you are changing permissions, ensure you give the end users sufficient time to adapt.

7. Interface layer for end users > Will the end users be able to access data with the same data asset names and schemas? A few factors to consider:
* Does the new system require the end users to change any of their code/queries?
* If you have used an interface layer (usually a view), this should be simple.
* Will the new data system have the same or better SLAs?

8. Observability systems > Will your new system's observability tooling work similarly?

What other migration tips do you have? Let me know in the comments below.

Enjoy this? ♻️ Repost it to your network and follow me for more actionable data content. #data #dataengineering
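For point 5, here is one minimal way to script a scheduled validation check: compare row counts plus a cheap, order-independent checksum per table between the two systems. The `fetch_old`/`fetch_new` callables and table names are hypothetical stand-ins for your real database drivers.

```python
# Minimal parallel-run validation sketch: row count + order-independent
# digest per table. fetch_old/fetch_new are hypothetical row iterators.
import hashlib

def table_fingerprint(rows) -> tuple:
    """Return (row_count, digest) for an iterable of rows."""
    count, digest = 0, 0
    for row in rows:
        count += 1
        h = hashlib.sha256(repr(row).encode()).digest()
        digest ^= int.from_bytes(h[:8], "big")  # XOR makes order irrelevant
    return count, f"{digest:016x}"

def validate(table: str, fetch_old, fetch_new) -> bool:
    old_fp = table_fingerprint(fetch_old(table))
    new_fp = table_fingerprint(fetch_new(table))
    if old_fp != new_fp:
        print(f"MISMATCH {table}: old={old_fp} new={new_fp}")
        return False
    print(f"OK {table}: {old_fp[0]} rows match")
    return True

# Toy usage with in-memory "systems":
old_data = {"orders": [(1, "a"), (2, "b")]}
new_data = {"orders": [(2, "b"), (1, "a")]}   # same rows, different order
validate("orders", lambda t: old_data[t], lambda t: new_data[t])
```

In practice you would run this on a schedule during the switch-over window and alert on any mismatch rather than printing.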
-
Planning your CRM migration? Here's a quick guide to ensuring a seamless transition.

✅ Do's:
[1] Plan thoroughly: Develop a comprehensive migration plan with clear goals, timelines, and assigned responsibilities.
[2] Clean your data: Remove duplicates and outdated info to ensure only high-quality data enters your new CRM (see the cleaning sketch after this post).
[3] Test extensively: Check data accuracy and system functionality to iron out issues before going live.
[4] Train your team: Conduct in-depth training sessions to familiarize your team with the new CRM features.
[5] Keep communication open: Regular updates and clear communication can help manage expectations and reduce resistance.

⚠️ Don'ts:
[1] Rush the process: Take the necessary time for each step to avoid costly mistakes.
[2] Overlook user input: User insights are crucial for a system that meets daily operational needs.
[3] Neglect data security: Ensure all data transfers are secure and compliant with data protection laws.
[4] Migrate unnecessary data: Avoid clutter by only transferring relevant data.
[5] Ignore post-migration support: Continue to provide support and monitor the system to resolve any issues that arise.

Leaders and executives, what challenges have you faced during CRM migration, and how did you overcome them? Let's share insights below! [👍 Like | 💬 Comment | 🔁 Share] #crm #data #migration
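As a toy illustration of do #2, here is a hedged Python sketch that normalizes emails and drops duplicate contacts before load. The field names (`email`, `name`) and the dedupe key are hypothetical; real cleansing rules will depend on your CRM and your data.

```python
# Hedged sketch of pre-load cleansing: normalize, drop unusable records,
# and dedupe on a hypothetical email key. Field names are illustrative.
def clean_contacts(contacts: list) -> list:
    seen, cleaned = set(), []
    for c in contacts:
        email = (c.get("email") or "").strip().lower()
        if not email:        # no usable key: park for manual review instead
            continue
        if email in seen:    # duplicate: keep the first occurrence
            continue
        seen.add(email)
        cleaned.append({**c,
                        "email": email,
                        "name": (c.get("name") or "").strip()})
    return cleaned

raw = [{"name": " Ada Lovelace ", "email": "ADA@example.com"},
       {"name": "Ada L.",         "email": "ada@example.com "},
       {"name": "No Email",       "email": ""}]
print(clean_contacts(raw))   # -> a single cleaned Ada record
```

The point is that dedupe and normalization happen before the new CRM ever sees the data, so clutter never makes it across.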
-
Spotted in Boston: a utility pole being "migrated" one connection at a time to a new pole, held together with some well-placed bolts and lumber. My first reaction was to laugh, but then I realized—this is exactly how many of the most successful data migrations work:

1. Set up the new system alongside the old
2. Move components over gradually rather than attempting a one-shot cutover
3. Keep both systems operational until the migration is complete

It looks messy. But compare the tradeoffs:

𝐎𝐧𝐞-𝐬𝐡𝐨𝐭 𝐚𝐩𝐩𝐫𝐨𝐚𝐜𝐡: Coordinate every stakeholder, plan for every edge case, extensive testing, long prep window.

𝐆𝐫𝐚𝐝𝐮𝐚𝐥 𝐦𝐢𝐠𝐫𝐚𝐭𝐢𝐨𝐧: Start immediately, spread risk across multiple smaller changes, easier rollbacks, but a long maintenance window.

Of course, the catch is that you're maintaining two systems, which carries its own costs and risks that vary by situation. In this case, the cost of maintaining two systems might also show up in the additional durability the new pole needs to hold up both, especially during winter. I took this photo in the summer, which is why it's missing a heck of a lot of snow 😏. Wonder how well it's holding up now…

(Standard disclaimer that I'm a data engineer, not a civil engineer—your utility pole architecture may vary)

What's your take on one-shot vs gradual migrations? #analytics #dataengineering #migrations
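In code, the "one connection at a time" pattern often looks like a small router: reads for already-migrated components go to the new system, everything else falls back to the old one. A minimal sketch, where the component names and the dict-backed "clients" are hypothetical:

```python
# Gradual-cutover routing sketch: migrated components read from the new
# system, the rest from the old. Backends here are toy dict stores.
class DictBackend:
    def __init__(self, data: dict):
        self.data = data
    def read(self, component: str, key):
        return self.data.get(component, {}).get(key)

MIGRATED = {"orders"}   # components that have already moved over

class DualReader:
    def __init__(self, old: DictBackend, new: DictBackend):
        self.old, self.new = old, new
    def read(self, component: str, key):
        backend = self.new if component in MIGRATED else self.old
        return backend.read(component, key)

old = DictBackend({"orders": {1: "legacy order"}, "customers": {1: "Ada"}})
new = DictBackend({"orders": {1: "migrated order"}})
reader = DualReader(old, new)
print(reader.read("orders", 1))     # -> "migrated order" (new system)
print(reader.read("customers", 1))  # -> "Ada" (still on the old system)
```

Rollback is just removing a component from the migrated set, which is exactly the "easier rollbacks" advantage of the gradual approach.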
-
Where do I start? This is arguably the question I've been asked the most by data leaders tasked with a large-scale transformation initiative. The transformation could be a cloud migration, an ERP consolidation, or any large data-centric replatforming that involves a complex web of people, process, and technology.

Quite often, many leaders have convinced themselves, or have been guided by a consultant, that taking a 'bottoms up' approach that starts with an inventory of the data, often along with some form of a maturity assessment, is the right way to go. It's not.

The right way to go is to take an outcome-driven approach where you are rabidly focused on solving a very limited number of business problems. Each problem would have a well-defined and limited scope, and would be accompanied by a business case where the financial benefits of that initiative are quantified and aligned upon by your customers. Instead of focusing on all data, you'll inventory, observe, govern, steward, master, and integrate only the data needed to solve your immediate problem.

Yes, some idea of the 'future state' must be defined, and you need to ensure you're building out an architecture that is scalable and flexible, but complete clarity on every individual deliverable between now and that future state does not need to be defined. If you focus each of your phases around solving specific problems, you will build the momentum and business support you need to get more funding, and slowly grow the program over time. Instead of taking a 'framework-driven' approach that ensures your customers will have to wait 18+ months to see any value, your customers will get benefits now.

Don't be fooled into thinking that you need to catalog and govern everything in order to transform your data estate. You don't. Focus on solving business problems and, in time, you'll catalog and govern what matters the most.

What do you think? If you have different ideas on where to start, I would love to hear them. #cdo #datagovernance #datamanagement
-
We interviewed data leaders who've been through major platform migrations, and one thing became crystal clear: migrations are so critical to the data team's ability to impact the business that they define the data team's success and data engineers' careers. The migration I ran at Lyft certainly defined mine — and put me on a path to build Datafold.

Migrations are a perfect storm of high stakes, tight deadlines, and zero room for error. You're not just moving data – you're juggling pipeline re-engineering, data validation, stakeholder management, and keeping production running.

The insights from these leaders were surprisingly consistent:

> Lift-and-shift first, optimize later. The temptation to fix everything during migration is strong but often creates more problems than it solves.

> Stakeholder trust is everything. Without clear proof of data parity, skepticism creeps in, systems run in parallel longer than planned, and costs balloon.

> The "last mile" is where migrations go to die. When you think you're done, stakeholder reviews uncover edge cases that trigger new refinement cycles.

But here's what gives me hope: we're finally at a technological inflection point. What used to take years can now be done in months, and what required armies of engineers can now be automated.

We've compiled these stories, lessons, and a practical framework for modern migrations in our latest guide: https://lnkd.in/em-BBqM4
-
In many data warehouse migration projects there's massive pressure to deliver value quickly, so engineering teams opt for the familiar (aka "lift and shift"). But this is often a mistake, for two reasons:

1. When you lift and shift, you enable analytics to move faster, which of course is seen as a good thing. Problems are not visible at first, but soon the cracks appear. The issue is that all the technical debt that's been accumulating in your existing system is not paid down but simply punted down the road.

2. When you lift and shift, you miss out entirely on the opportunity to redesign the data architecture properly. A redesign offers a great opportunity to clean house and assess exactly what matters, but alas, it's often wasted.

What to do instead? My friend Aaron Ormiston has a metaphor he calls "steel threads" of value. You determine what useful pieces need to be delivered, like a sales dashboard for example, and you horizontally slice the work from raw data to the data mart like an end-to-end thread. The "steel thread" allows you to deliver work efficiently and smoothly while modeling data properly as you go along. This will keep both your engineers and your stakeholders happy. (A toy steel-thread sketch follows this post.)

If you're doing a data warehouse migration, or building one from scratch, and need help, feel free to reach out via DM.
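To illustrate the shape of one "steel thread", here is a toy pandas sketch of a single slice from raw sales data through a staging model to the mart table a dashboard would read. All column names and numbers are hypothetical; the point is the end-to-end vertical slice, not the specific transforms.

```python
# Toy "steel thread": one thin slice from raw data to a dashboard-ready
# mart table. Columns and values are hypothetical.
import pandas as pd

raw = pd.DataFrame({
    "order_id": [1, 2, 3],
    "sold_at": ["2024-01-03", "2024-01-15", "2024-02-10"],
    "amount_cents": [1250, 3000, 990],
})

# Staging: typed, cleaned, renamed. The real modeling work happens here,
# but only for the data this thread needs.
staging = raw.assign(
    sold_at=pd.to_datetime(raw["sold_at"]),
    amount=raw["amount_cents"] / 100,
)[["order_id", "sold_at", "amount"]]

# Mart: the aggregate the sales dashboard actually reads.
mart_monthly_sales = (
    staging.groupby(staging["sold_at"].dt.to_period("M"))["amount"]
    .sum()
    .rename("revenue")
    .reset_index()
)
print(mart_monthly_sales)
```

Each subsequent thread (another dashboard, another domain) repeats the same raw-to-mart slice, so data gets modeled properly as the migration progresses instead of all up front.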
-
𝗟𝗲𝘀𝘀 𝘁𝗵𝗮𝗻 𝗵𝗮𝗹𝗳 𝗼𝗳 𝗖𝗥𝗠 𝗺𝗶𝗴𝗿𝗮𝘁𝗶𝗼𝗻𝘀 𝘀𝘂𝗰𝗰𝗲𝗲𝗱. I have participated in a few of them - the good ones and the ones that failed. I've created a mini CRM migration checklist covering 4 key areas you need to consider.

☑ Comprehensive Planning: Define objectives: know your goals. Stakeholder engagement: get everyone on board.

☑ Data Management: Data cleansing: ensure accuracy. Data mapping: align old and new systems. Know your ETL tool. Roll-out plan: go small, in waves, until you do a full migration.

☑ System Integration: API configurations: connect smoothly. Focus on details. Testing: validate every step. Test, and test, and test again.

☑ User Training: Training programs: empower your team to help others. Support resources: provide ongoing help.

Every step, process, and tool is clear and straightforward. No jargon. Just actionable steps. What else could be part of this checklist?
-
Data migration is rarely a one-and-done. It usually creates new data pipelines. If you are a data engineer, listen up when someone mentions a data migration: it always involves data integration. In fact, data migration is not far behind the new cloud data warehouse as one of the top three data integration use cases. Estuary has several data migration projects.

Data migration could be called storage, database, application, data center, or cloud migration. Moving databases or entire applications to the cloud, or to new cloud services, is probably the most common type of data migration today. There are plenty of database migrations as part of modernization initiatives.

Application migrations can be short. The process is usually:

1. Extract the data and analyze it for data quality issues. Ideally this uses CDC.
2. Build the rules to cleanse it over time, and continue to measure and improve quality.
3. Merge it with other data needed for the new target app. Improve that quality.
4. Once you're confident the rules work, load the target with the data. This may take a few tries to get right.
5. Enter a phase of running the old and new apps in parallel. Often this involves a phased migration. Here the data pipeline keeps the old and new apps in sync.
6. Retire the old app once everyone has moved over. By then, that data integration technology into the new app is the new data pipeline.

Database migrations can take much longer. Why? There can be multiple apps on the database, and there can be that one last app (or a few) that just won't move. Sometimes it's a platform reason; if you've done a mainframe migration, you know. Other times it's the phased migration itself that becomes complicated. With multi-app databases, the pipeline evolves in a slightly different way (a toy sketch of the parallel-run sync follows this post):

1. Extract from the old database. The best approach is to use CDC.
2. There isn't a lot of cleanup in this case. This is often a 1-to-1 replication because the apps aren't changing.
3. If you are consolidating apps, then you might be doing some data merging.
4. Once you're all set, move the CDC pipeline and new database into production.
5. Start to move the apps, one by one, onto the target.
6. As you do, turn off the old pipelines into the old app and turn on the new pipelines to the new app. Then you can turn off that part of CDC.

In the end, you often continue to rely on replication to keep data in sync, because you still need the remaining data from the old database for the same reason all those apps sat on one database in the first place: the data was shared. While you might decide not to make changes to the apps, which makes the migration easier, you should consider making some changes, since you're probably forced to make a few anyway. Consider your request backlog; it gives users an incentive to move.

Do you work on data migrations? Are they completely separate from the (cloud) data warehouse pipeline teams in your company?
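To make the parallel-run phase concrete, here is a toy sketch of the loop that applies captured changes from the old database to the new one. The change-event shape (`table`/`op`/`key`/`row`) and the dict-backed target are hypothetical stand-ins for a real CDC tool's output and a real database.

```python
# Toy CDC-apply sketch for the parallel-run phase: replay captured change
# events against the new store. Event shape and target are hypothetical.
def apply_change(new_db: dict, event: dict) -> None:
    table = new_db.setdefault(event["table"], {})
    if event["op"] in ("insert", "update"):
        table[event["key"]] = event["row"]   # upsert the latest row image
    elif event["op"] == "delete":
        table.pop(event["key"], None)

new_db: dict = {}
cdc_stream = [
    {"table": "accounts", "op": "insert", "key": 1, "row": {"name": "Acme"}},
    {"table": "accounts", "op": "update", "key": 1, "row": {"name": "Acme Corp"}},
    {"table": "accounts", "op": "delete", "key": 1, "row": None},
]
for event in cdc_stream:
    apply_change(new_db, event)
print(new_db)   # accounts table ends up empty after the delete
```

A real pipeline adds ordering guarantees, schema handling, and retries, but the core of keeping old and new apps in sync during a phased migration is this replay loop.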