
.NET + Angular Performance Secrets from Enterprise Production

By Ilir Ivezaj · 10 min read

After years running .NET + Angular applications in enterprise production — processing millions of records across multiple databases — I've collected performance optimizations that aren't in any tutorial. These are the changes that moved our p95 latency from seconds to milliseconds.

EF Core: Split Queries Save Everything

If you have an entity with multiple collection navigations and use .Include() on all of them, EF Core generates a single SQL query with JOINs. The result is a Cartesian explosion: a query that should return 100 rows returns 100,000, because every combination of child collections is multiplied together.

// SLOW: Cartesian explosion - 100K+ rows returned
var orders = await context.Orders
    .Include(o => o.Items)
    .Include(o => o.Payments)
    .Include(o => o.AuditLogs)
    .ToListAsync();

// FAST: Separate queries per include - rows add instead of multiplying
var orders = await context.Orders
    .Include(o => o.Items)
    .Include(o => o.Payments)
    .Include(o => o.AuditLogs)
    .AsSplitQuery()
    .ToListAsync();

.AsSplitQuery() tells EF Core to issue a separate SQL query for each collection include and stitch the results together in memory. The trade-off is extra database round trips, so reserve it for queries with multiple collection includes. In our data integration platform, this single change reduced a dashboard query from 8 seconds to 200ms.
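The arithmetic behind the explosion is worth seeing once. A quick sketch (the per-order collection sizes below are hypothetical, chosen only to make the multiplication visible):

```typescript
// Hypothetical sizes: 100 orders, 10 children in each collection.
const orders = 100;
const itemsPerOrder = 10;
const paymentsPerOrder = 10;
const auditLogsPerOrder = 10;

// Single query: the three JOINed collections multiply per order.
const singleQueryRows =
  orders * itemsPerOrder * paymentsPerOrder * auditLogsPerOrder;

// Split queries: one query per include, so the rows simply add up.
const splitQueryRows =
  orders +
  orders * itemsPerOrder +
  orders * paymentsPerOrder +
  orders * auditLogsPerOrder;

console.log(singleQueryRows); // 100000
console.log(splitQueryRows);  // 3100
```

Adding a fourth collection multiplies the single-query count by another factor of ten, while the split-query count grows by just another thousand rows.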

Angular OnPush: The 80% Optimization Nobody Uses

By default, Angular checks every component on every change detection cycle. With a complex enterprise UI, that's thousands of checks per user interaction. Switching to OnPush tells Angular to check a component only when an @Input reference changes, an event fires in its template, or something explicitly marks it for check (the async pipe does this whenever a bound observable emits).

@Component({
  selector: 'app-data-grid',
  changeDetection: ChangeDetectionStrategy.OnPush,
  template: `
    <div *ngFor="let row of data$ | async">
      {{ row.name }}
    </div>
  `
})
export class DataGridComponent {
  constructor(private store: Store) {}
  data$ = this.store.select(selectGridData);
}

The key insight: pipe every observable through async in the template. It marks the component for check when data arrives and unsubscribes automatically on destroy. We measured an 82% reduction in change detection cycles across our application after migrating to OnPush.
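The unsubscribe-on-destroy half of that claim is the part teams get wrong with manual subscriptions. A framework-free sketch of the lifecycle (TinySubject and ManualComponent are hand-rolled stand-ins, not real RxJS or Angular, to keep the example self-contained):

```typescript
type Listener<T> = (value: T) => void;

// Minimal observable stand-in: subscribe returns an unsubscribe function.
class TinySubject<T> {
  private listeners = new Set<Listener<T>>();
  subscribe(fn: Listener<T>): () => void {
    this.listeners.add(fn);
    return () => this.listeners.delete(fn);
  }
  next(value: T): void {
    this.listeners.forEach((fn) => fn(value));
  }
  get listenerCount(): number {
    return this.listeners.size;
  }
}

// Manual subscription: the component must remember to clean up in
// ngOnDestroy, or the listener outlives the component (a leak).
class ManualComponent {
  latest: number | null = null;
  private unsubscribe: () => void;
  constructor(source: TinySubject<number>) {
    this.unsubscribe = source.subscribe((v) => (this.latest = v));
  }
  ngOnDestroy(): void {
    this.unsubscribe(); // forget this line and the listener leaks
  }
}

const source = new TinySubject<number>();
const cmp = new ManualComponent(source);
source.next(42);
console.log(cmp.latest);           // 42
cmp.ngOnDestroy();
console.log(source.listenerCount); // 0 - clean only because we remembered
```

The async pipe plays the role of ngOnDestroy here automatically, which is why it pairs so well with OnPush: no forgotten unsubscribes, and change detection fires exactly when data arrives.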

Lazy Loading with PreloadAllModules

Most Angular apps either eagerly load everything (slow initial load) or lazy-load routes (fast initial load, slow first navigation). The sweet spot is PreloadAllModules: the initial bundle stays small for a fast first paint, and once the application has bootstrapped, the router starts downloading the remaining lazy route modules in the background.

@NgModule({
  imports: [RouterModule.forRoot(routes, {
    preloadingStrategy: PreloadAllModules
  })]
})

By the time a user navigates to a different route, the module is already downloaded. You get the initial load time of lazy loading with the navigation speed of eager loading.

SignalR Backpressure: When the Default Breaks

SignalR's default MaximumReceiveMessageSize is 32 KB. That sounds fine until you're pushing real-time data updates for a dashboard with 500 rows. Any message over the limit kills the connection silently, with no useful error.

services.AddSignalR(options => {
    options.MaximumReceiveMessageSize = 512 * 1024; // 512KB
    options.StreamBufferCapacity = 20;
    options.EnableDetailedErrors = isDevelopment;
});

For server-side buffering, use Channel<T> with bounded capacity. This prevents a slow client from causing memory pressure on the server. Set BoundedChannelFullMode.DropOldest for real-time dashboards where the latest data matters most.
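The drop-oldest policy itself is simple. Here it is sketched in TypeScript for illustration (a hand-rolled buffer, not the actual System.Threading.Channels API, and the capacity of 3 is arbitrary):

```typescript
// Bounded buffer with drop-oldest semantics: when full, evict the
// stalest item rather than blocking the producer. This mirrors what
// BoundedChannelFullMode.DropOldest does for a .NET Channel<T>.
class DropOldestBuffer<T> {
  private items: T[] = [];
  constructor(private capacity: number) {}

  write(item: T): void {
    if (this.items.length === this.capacity) {
      this.items.shift(); // slow consumer? drop the oldest update
    }
    this.items.push(item);
  }

  readAll(): T[] {
    const drained = this.items;
    this.items = [];
    return drained;
  }
}

// A slow client drains late; only the freshest updates survive.
const buffer = new DropOldestBuffer<number>(3);
[1, 2, 3, 4, 5].forEach((update) => buffer.write(update));
console.log(buffer.readAll()); // [ 3, 4, 5 ]
```

For a live dashboard this is exactly the behavior you want: the client that falls behind sees the current state, not a replay of stale frames, and the server's memory use stays bounded.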

Multi-DB Connection Pooling

When your .NET application connects to Oracle, SQL Server, AND PostgreSQL simultaneously (as ours does), each provider maintains its own database connection pool, and every request allocates a fresh DbContext for each of the three. The connection pools are unavoidable, but the per-request context allocations aren't: use AddDbContextPool<> instead of AddDbContext<>.

// Instead of AddDbContext (creates new context per request)
services.AddDbContextPool<OracleContext>(options =>
    options.UseOracle(connectionString), poolSize: 128);

services.AddDbContextPool<SqlServerContext>(options =>
    options.UseSqlServer(connectionString), poolSize: 128);

The pool reuses DbContext instances instead of allocating a new one per request. One caveat: because instances are reused, don't store per-request state in fields on the context. In our benchmarks, pooling reduced GC pressure by 40% under load.

SQL Server Parameter Sniffing

The most insidious performance bug in SQL Server: parameter sniffing. SQL Server caches a query plan based on the first parameter values it sees. If those values are atypical, every subsequent execution uses a terrible plan.

The symptom: a query runs in 50ms sometimes and 30 seconds other times, with no code changes. The fix for the worst offenders:

SELECT * FROM Orders
WHERE Status = @status AND CreatedDate > @date
OPTION (RECOMPILE)

OPTION (RECOMPILE) forces a fresh plan every execution. It costs a few milliseconds of compile time but prevents the 100x slowdowns. Use it surgically on queries with highly variable parameter distributions.

About the author: Ilir Ivezaj has optimized enterprise .NET + Angular platforms serving thousands of users daily. He's a technology executive and entrepreneur based in Michigan. Get in touch.