Migrating 800K Lines of .NET Framework to .NET 8

Kloudpath Team

The codebase was 15 years old, hosted on Windows Server 2016, deployed by copying DLLs to a file share, and held together by a web of WCF services that nobody fully understood. It was 800,000 lines of C# targeting .NET Framework 4.7.2, and it powered the core operations of a financial services company processing $2B in annual transactions. The client wanted it on .NET 8, running in containers on Linux, deployed through CI/CD. This is how we did it.

The Assessment Phase

Before writing a single line of migration code, we spent three weeks on assessment. We used the .NET Upgrade Assistant to scan every project in the solution and generate a compatibility report. The tool identified 47 NuGet packages with no .NET 8 equivalent, 12 direct references to Windows-only APIs (System.Drawing, Registry access, Windows Event Log), and approximately 200 uses of APIs that had been moved to different namespaces.

We also built a dependency graph of the entire solution using a custom Roslyn analyzer. This was critical for understanding which projects could be migrated independently and which were tightly coupled. The graph revealed that the 142-project solution was actually organized around six logical domains, with a shared core library that everything depended on. The core library became our first migration target.

# Run the .NET Upgrade Assistant analysis
upgrade-assistant analyze ./LegacyApp.sln --target-tfm-support LTS

# Output key metrics
# Total projects: 142
# Compatible without changes: 23
# Minor changes needed: 87
# Major refactoring needed: 32
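Our analyzer used Roslyn to capture type-level coupling, but the project-level graph it produced can be approximated without Roslyn by walking the ProjectReference items in each .csproj. A minimal sketch (names illustrative):

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Xml.Linq;

// Build a project-to-project dependency graph by reading the
// <ProjectReference> elements from every .csproj under a root folder.
Dictionary<string, List<string>> BuildGraph(string root)
{
    var graph = new Dictionary<string, List<string>>();
    foreach (var csproj in Directory.EnumerateFiles(root, "*.csproj", SearchOption.AllDirectories))
    {
        graph[Path.GetFileNameWithoutExtension(csproj)] = XDocument.Load(csproj)
            .Descendants()
            .Where(e => e.Name.LocalName == "ProjectReference")
            .Select(e => Path.GetFileNameWithoutExtension(e.Attribute("Include")!.Value))
            .ToList();
    }
    return graph;
}

// Projects whose dependencies are all migrated can go next; leaf projects
// with no references (like the shared core library) go first.
foreach (var (project, refs) in BuildGraph("."))
    Console.WriteLine($"{project} -> [{string.Join(", ", refs)}]");
```

Inverting this graph gives the migration order: a project moves to .NET 8 only after everything it references has moved.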

The Strangler Fig Pattern

A big-bang migration of 800K lines was out of the question. The application needed to keep running and receiving features throughout the migration, which we estimated would take 9-12 months. We adopted the strangler fig pattern: new functionality would be built on .NET 8, and existing functionality would be migrated incrementally, with a reverse proxy routing traffic between the old and new systems.

We deployed YARP (Yet Another Reverse Proxy) as the front door to the application. YARP is a .NET-based reverse proxy that gave us fine-grained control over routing rules. As each service was migrated, we updated the YARP configuration to route its traffic to the new .NET 8 instance. The old service remained running as a fallback, and we could switch back with a configuration change if something went wrong.
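Cutovers were configuration-only changes in YARP's route table. A minimal appsettings.json fragment in the standard YARP schema, with service names, paths, and addresses illustrative:

```json
{
  "ReverseProxy": {
    "Routes": {
      "orders": {
        "ClusterId": "orders-net8",
        "Match": { "Path": "/api/orders/{**catch-all}" }
      }
    },
    "Clusters": {
      "orders-net8": {
        "Destinations": {
          "primary": { "Address": "http://orderservice-net8:8080/" }
        }
      },
      "orders-legacy": {
        "Destinations": {
          "primary": { "Address": "http://legacy-host:80/" }
        }
      }
    }
  }
}
```

Rolling back meant pointing the route's ClusterId back at the legacy cluster, with no redeploy.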

The Dependency Audit

The 47 incompatible NuGet packages fell into three categories. The first was packages that had been superseded by built-in .NET 8 functionality, such as Newtonsoft.Json being replaced by System.Text.Json. The second was packages that had .NET 8-compatible versions we could upgrade to. The third, and most problematic, was packages that were abandoned or Windows-only.
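For the first category, moving a typical Newtonsoft.Json call site to System.Text.Json looked like the sketch below. The built-in serializer does not share Newtonsoft's behavior out of the box, so the options restore the two differences that bit us most often: camelCase property names (the old ASP.NET default) and case-insensitive matching on deserialization.

```csharp
using System;
using System.Text.Json;
using System.Text.Json.Serialization;

// Options that approximate the Newtonsoft behavior our endpoints relied on.
var options = new JsonSerializerOptions
{
    PropertyNamingPolicy = JsonNamingPolicy.CamelCase,  // Id -> "id"
    PropertyNameCaseInsensitive = true,                 // accept "ID", "id", "Id"
    DefaultIgnoreCondition = JsonIgnoreCondition.WhenWritingNull,
};

var json = JsonSerializer.Serialize(
    new { Id = 1042, Status = "Pending", Notes = (string?)null }, options);
Console.WriteLine(json); // {"id":1042,"status":"Pending"}
```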

For abandoned packages, we had three options: find an alternative, fork and port the package, or rewrite the functionality. We ended up forking two packages (a PDF generation library and a legacy SOAP client) and replacing four others with modern alternatives. The SOAP client was particularly painful because it relied on System.ServiceModel, which is only partially available on .NET 8: the server side lives in the community CoreWCF project, and the client side in the System.ServiceModel.* client packages.

The Windows-Only Trap

The 12 Windows-specific API usages were scattered across the codebase, and each required a different approach. System.Drawing calls for image manipulation were replaced with SkiaSharp, which is cross-platform. Registry access for configuration was replaced with environment variables and the Options pattern. Windows Event Log writes were replaced with structured logging through Serilog, which could target any sink.
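The Registry replacement reduced to a pattern like the following. In the real services this sat behind the Options pattern via Microsoft.Extensions.Configuration; the raw equivalence is shown here, and the variable name LEGACYAPP_BATCH_SIZE is illustrative.

```csharp
using System;

// A Registry read such as
//   Registry.LocalMachine.OpenSubKey(@"SOFTWARE\LegacyApp")?.GetValue("BatchSize")
// becomes an environment-variable lookup with the same fallback semantics.
Environment.SetEnvironmentVariable("LEGACYAPP_BATCH_SIZE", "500"); // injected by the container in production

int batchSize = int.TryParse(
    Environment.GetEnvironmentVariable("LEGACYAPP_BATCH_SIZE"), out var parsed)
    ? parsed
    : 100; // default that the Registry value used to override

Console.WriteLine(batchSize); // 500
```

Because environment variables work identically in Linux containers, this change also removed a whole class of "works on the VM, fails in the container" surprises.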

The most insidious Windows dependency was not in our code at all: it was a closed-source third-party DLL that used P/Invoke to call native Windows APIs for generating barcodes. We solved this by wrapping the DLL in a minimal .NET Framework Windows service that exposed its functionality over gRPC. That service ran in a single Windows container alongside the Linux containers for everything else. Not elegant, but pragmatic.
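The shim's gRPC contract was deliberately tiny. A sketch of the .proto file, with service and message names illustrative rather than the actual contract:

```protobuf
syntax = "proto3";

// Contract for the shim service wrapping the closed-source barcode DLL.
service BarcodeRenderer {
  rpc Render (RenderRequest) returns (RenderReply);
}

message RenderRequest {
  string payload = 1;       // data to encode
  int32 width_pixels = 2;
  int32 height_pixels = 3;
}

message RenderReply {
  bytes png_image = 1;      // rendered barcode as PNG bytes
}
```

Keeping the surface this small meant the wrapper never needed to change, and replacing the DLL later only requires reimplementing one RPC.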

Containerization Strategy

We containerized each migrated service incrementally. The Dockerfile for each service followed a multi-stage build pattern: a build stage using the .NET 8 SDK image, followed by a runtime stage using the ASP.NET Core 8 runtime image. This kept our production images lean at around 80MB each, compared to the 2GB+ Windows Server images that would have been required for .NET Framework.

FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY ["OrderService/OrderService.csproj", "OrderService/"]
RUN dotnet restore "OrderService/OrderService.csproj"
COPY . .
RUN dotnet publish "OrderService/OrderService.csproj" \
    -c Release -o /app/publish /p:UseAppHost=false

FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS runtime
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "OrderService.dll"]

For local development, we used Docker Compose to spin up the entire system, including the YARP proxy, migrated services, and legacy services running in Windows containers. This gave developers a single docker compose up command to get the full application running.
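An abridged sketch of the Compose file's Linux-container half, with service names and paths illustrative:

```yaml
# docker-compose.yml (abridged; legacy Windows-container services omitted)
services:
  gateway:                        # YARP front door
    build: ./src/Gateway
    ports:
      - "8080:8080"
    depends_on:
      - orderservice
  orderservice:                   # migrated .NET 8 service
    build: ./src/OrderService
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
```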

Testing Strategy

Testing was the most underestimated part of the migration. The legacy application had approximately 15% code coverage with unit tests. We could not reasonably achieve high unit test coverage during the migration, so we invested heavily in integration and end-to-end testing instead.

We built a parallel execution framework that sent every production request to both the old and new service simultaneously, compared the responses, and logged discrepancies. This "shadow traffic" approach caught dozens of behavioral differences that unit tests would never have found, including date formatting differences between .NET Framework and .NET 8, subtle floating-point calculation changes, and a case sensitivity issue in a dictionary lookup that had been silently depending on the default comparer behavior.
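The real framework duplicated live HTTP traffic; its core was the comparison step, which a field-by-field JSON diff sketches below. The date-formatting example mirrors one of the real discrepancies (round-tripped DateTime strings differing between runtimes); the helper name and message format are illustrative.

```csharp
using System;
using System.Collections.Generic;
using System.Text.Json;

// Compare the legacy and migrated responses for the same request and
// report top-level field discrepancies.
List<string> Diff(string legacyJson, string migratedJson)
{
    var diffs = new List<string>();
    using var legacy = JsonDocument.Parse(legacyJson);
    using var migrated = JsonDocument.Parse(migratedJson);
    foreach (var prop in legacy.RootElement.EnumerateObject())
    {
        if (!migrated.RootElement.TryGetProperty(prop.Name, out var other))
            diffs.Add($"missing: {prop.Name}");
        else if (prop.Value.GetRawText() != other.GetRawText())
            diffs.Add($"changed: {prop.Name}: {prop.Value.GetRawText()} -> {other.GetRawText()}");
    }
    return diffs;
}

var diffs = Diff(
    "{\"total\":19.99,\"created\":\"2024-03-01T00:00:00\"}",
    "{\"total\":19.99,\"created\":\"2024-03-01T00:00:00.0000000\"}");
foreach (var d in diffs) Console.WriteLine(d); // flags the "created" field
```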

For the financial calculation modules, we created a comprehensive golden file test suite. We captured 50,000 production inputs with their expected outputs from the legacy system, then ran them through the migrated code and asserted byte-for-byte identical results. This gave us and the client confidence that the migration had not introduced any calculation errors.
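The suite's shape is simple: each captured production input lives next to the byte-for-byte output the legacy system produced for it. The file layout and the Calculate stand-in below are illustrative; the real suite ran 50,000 such pairs through the migrated calculation engine.

```csharp
using System;
using System.IO;

// Demo fixture: one captured case (input and expected legacy output).
Directory.CreateDirectory("golden");
File.WriteAllBytes(Path.Combine("golden", "case-0001.input"), new byte[] { 1, 2, 3 });
File.WriteAllBytes(Path.Combine("golden", "case-0001.expected"), new byte[] { 1, 2, 3 });

// Stand-in for the migrated financial calculation under test.
static byte[] Calculate(byte[] input) => input;

int failures = 0;
foreach (var inputPath in Directory.EnumerateFiles("golden", "*.input"))
{
    var expected = File.ReadAllBytes(Path.ChangeExtension(inputPath, ".expected"));
    var actual = Calculate(File.ReadAllBytes(inputPath));
    if (!expected.AsSpan().SequenceEqual(actual)) // byte-for-byte assertion
    {
        failures++;
        Console.Error.WriteLine($"MISMATCH: {inputPath}");
    }
}
Console.WriteLine($"{failures} mismatches"); // 0 mismatches
```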

The Deployment Pipeline

We built the CI/CD pipeline in GitHub Actions with a staged deployment process. Each pull request triggered a build, ran the unit and integration test suites, built the Docker image, and deployed to a staging environment. The staging environment ran the shadow traffic comparison against production data, and only after the comparison showed zero discrepancies would we approve the production deployment.
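An abridged sketch of the pull-request workflow, with file paths and image names illustrative:

```yaml
# .github/workflows/pr.yml (abridged)
name: pr-build
on: pull_request

jobs:
  build-test-stage:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      - run: dotnet test LegacyApp.sln --configuration Release
      - run: docker build -t orderservice:${{ github.sha }} src/OrderService
      # staging deploy + shadow-traffic comparison gate the production release
```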

Production deployments used a canary strategy. New versions received 5% of traffic for the first hour, then 25%, then 50%, then 100%, with automatic rollback if error rates exceeded the baseline. This cautious approach meant deployments took four hours to fully roll out, but it caught two regressions that would have caused production incidents.

Results and Lessons

The migration took 11 months to complete. The application now runs entirely on Linux containers in EKS, with the exception of the one Windows container for the barcode DLL. Build times dropped from 12 minutes to 3 minutes. Cold start times improved by 60%. The annual infrastructure cost decreased by approximately 40% due to the move from Windows Server licensing to Linux containers.

The biggest lesson: invest in the assessment phase. Every hour spent understanding dependencies and building the migration graph saved us days of rework later. The second lesson: shadow traffic testing is not optional for financial systems. And the third: the strangler fig pattern works, but it requires discipline. The temptation to accelerate by migrating larger chunks is real, but the risk of a big-bang deployment in a migration project is catastrophic.