Last month I spent three hours debugging a “slow API” for a teammate. The endpoint was taking 4 seconds to return a list of 50 orders. It turned out to be loading every related entity: customers, products, addresses, payment details, all of them, on every single request. The fix was one line. I added a .Select() projection to return only what the response actually needed. Response time dropped from 4 seconds to 80 milliseconds.
That’s the thing with EF Core. It makes it dangerously easy to write queries that work perfectly in development with 10 rows of seed data, and then explode in production with 100K rows. The code compiles. The tests pass. Everything looks fine until real traffic hits and your API crawls.
In this article, I’ll walk through the 10 EF Core performance mistakes I see most often in production .NET codebases, with the exact fix for each one in .NET 10 and EF Core 10. Every fix is backed by real benchmarks, complete code samples, and a deeper article you can follow if you want to go further. Let’s get into it.
TL;DR. The 10 EF Core performance mistakes that ship to production: (1) N+1 queries, (2) returning full entities instead of projections, (3) forgetting AsNoTracking on read-only queries, (4) leaving lazy loading on in production, (5) cartesian explosion from multiple Include calls, (6) filtering after materialization with .ToList() before .Where(), (7) loading entities just to update or delete them in bulk, (8) no pagination on list endpoints, (9) missing database indexes on filtered or joined columns, and (10) not using compiled queries on hot paths. Each one is fixable in under 10 lines of code. Together, they routinely turn 4-second endpoints into 80-millisecond endpoints.
ASP.NET Core CRUD with EF Core 10
New to EF Core 10? Start here. This article walks through setting up EF Core 10 with PostgreSQL, creating entities, running migrations, and building a complete CRUD API end to end.
What Counts as an EF Core Performance Mistake?
An EF Core performance mistake is any query, configuration, or data-access pattern that produces correct results in development but degrades unacceptably under production load. The mistakes in this article have three things in common: the code compiles, the tests pass, and the slowdown only shows up when real data volume or real concurrency hits.
In .NET 10 with EF Core 10, the runtime is faster than ever. The JIT got better at inlining the materialization hot path, generic specialization is cheaper, and EF Core 10 added first-class LeftJoin and RightJoin operators (along with consistent ordering fixes for split queries). None of that helps if your code keeps making the same 10 mistakes below. Performance comes from the patterns you write, not the framework version.
LINQ LeftJoin and RightJoin in EF Core 10
The new first-class LeftJoin and RightJoin LINQ operators in .NET 10 and how they translate to native LEFT JOIN and RIGHT JOIN SQL. Cleaner queries, no more GroupJoin gymnastics.
How to Spot These in Your Codebase
Before fixing anything, measure. The fastest way I’ve found to spot EF Core problems is to log the generated SQL during development:
```csharp
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseNpgsql(connectionString)
        .LogTo(Console.WriteLine, LogLevel.Information)
        .EnableSensitiveDataLogging(builder.Environment.IsDevelopment()));
```

When you hit an endpoint and 50 separate SELECT statements appear in your console, that’s an N+1. When a single SELECT returns 10,000 rows for a 20-row entity, that’s a cartesian explosion. When read-only queries return tracked entities you never update, that’s change tracking running on data that doesn’t need it. You can also use tools like dotnet-trace, MiniProfiler, or EF Core Power Tools for deeper analysis, but the LogTo one-liner above catches 80% of issues in 5 minutes.
Now let’s walk through the 10 mistakes.
Mistake 1: The N+1 Query Problem
This is the universal #1 EF Core performance killer. You load a list of entities with one query, and then your code triggers a separate database query for each entity when it accesses a related navigation property.
The wrong code:
```csharp
var orders = await context.Orders.ToListAsync(ct);

foreach (var order in orders)
{
    // Each access here fires a separate SELECT to the database
    Console.WriteLine($"{order.Customer.Name} - {order.Total}");
}
```

If you have 100 orders, this runs 101 queries. One for the list, and one for each customer. On a list endpoint with 1,000 rows, you’ve just generated 1,001 database round trips for what should have been a single query.
The fix:
```csharp
// Option 1: Eager load with Include
var orders = await context.Orders
    .Include(o => o.Customer)
    .ToListAsync(ct);

// Option 2: Project to a DTO (often better)
var orders = await context.Orders
    .Select(o => new OrderListItem
    {
        Id = o.Id,
        CustomerName = o.Customer.Name,
        Total = o.Total
    })
    .ToListAsync(ct);
```

Projection is usually the better fix because it also solves mistake #2. Use Include when you genuinely need the full related entity for further logic, and projection when you’re returning data to a client.
The hidden version of this mistake is the N+1 via JSON serialization trap. If lazy loading is on (see mistake #4) and your API serializes a navigation property to JSON, the serializer triggers a lazy load while writing the response. The query log shows queries firing from inside System.Text.Json. I have lost hours to this one.
EF Core Relationships - One-to-One, One-to-Many, Many-to-Many
Set up your relationships properly with the Fluent API and learn how Include navigates them. Most N+1 issues start with sloppy relationship configuration.
Mistake 2: Returning Full Entities Instead of Projections
You return entire database entities from API endpoints when the client only needs three fields. The query loads 20 columns from disk, hydrates 20 properties on a tracked entity, snapshots them for change detection, then serializes 20 properties to JSON, where the client uses three.
The wrong code:
```csharp
app.MapGet("/products", async (AppDbContext db, CancellationToken ct) =>
{
    var products = await db.Products.ToListAsync(ct);
    return Results.Ok(products);
});
```

This loads every column, including Description (which can be a large text blob), InternalNotes, CostPrice, and everything else, even if the client only needs Id, Name, and Price.
The fix:
```csharp
app.MapGet("/products", async (AppDbContext db, CancellationToken ct) =>
{
    var products = await db.Products
        .Select(p => new ProductListItem
        {
            Id = p.Id,
            Name = p.Name,
            Price = p.Price
        })
        .ToListAsync(ct);

    return Results.Ok(products);
});
```

Projection through Select() generates more efficient SQL (fewer columns), skips change tracking automatically, prevents over-fetching wide tables, and gives you a stable DTO contract that doesn’t leak internal entity properties to clients. On a 20-column table with 5,000 rows, swapping ToListAsync() for a 3-column projection routinely cuts response payload size by 70% and query execution time by 30 to 50%.
My take: every list endpoint should return a DTO via projection, never a raw entity. Use entities for write paths and lookups by ID. Use projections for everything that goes back to a client.
Mistake 3: Forgetting AsNoTracking on Read-Only Queries
By default, EF Core tracks every entity it loads through the ChangeTracker. This is how SaveChanges() knows what to update. But on a typical API, 80% of your endpoints are reads. They load data, serialize it, and return it. They never call SaveChanges(). The tracking is pure overhead.
The wrong code:
```csharp
var orders = await context.Orders
    .Where(o => o.CreatedAt > DateTime.UtcNow.AddDays(-7))
    .ToListAsync(ct);
// Every order is now in the ChangeTracker. EF keeps a snapshot
// of every property to detect changes that will never happen.
```

The fix:

```csharp
var orders = await context.Orders
    .AsNoTracking()
    .Where(o => o.CreatedAt > DateTime.UtcNow.AddDays(-7))
    .ToListAsync(ct);
```

Or set it globally on the DbContext so the safe choice is the default:

```csharp
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseNpgsql(connectionString)
        .UseQueryTrackingBehavior(QueryTrackingBehavior.NoTracking));
```

With QueryTrackingBehavior.NoTracking as the default, you opt back into tracking only on the write paths where you genuinely need it, by calling .AsTracking() on the specific query. In benchmarks on a medium-sized dataset of around 10,000 rows, AsNoTracking() consistently shows 20 to 40% faster query execution for pure reads, plus lower memory pressure and faster garbage collection.
One caveat: if you’re using optimistic concurrency tokens ([Timestamp] or xmin), you’ll want tracking on the queries that lead into a SaveChanges() call. Don’t blanket-disable tracking on those paths.
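When a write path does need tracking, the opt-in is one call. Here is a minimal sketch, assuming a global NoTracking default and an entity with a concurrency token (the Product shape and newPrice variable are assumptions for illustration):

```csharp
// Hypothetical write path. With NoTracking as the global default,
// opt back into tracking so the concurrency token is loaded and
// SaveChanges can detect the change.
var product = await context.Products
    .AsTracking()
    .FirstOrDefaultAsync(p => p.Id == id, ct);

if (product is not null)
{
    product.Price = newPrice;

    // Throws DbUpdateConcurrencyException if another request
    // changed the row between the read and this save.
    await context.SaveChangesAsync(ct);
}
```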
Concurrency Control and Optimistic Locking in EF Core
The full guide to optimistic concurrency in EF Core 10, including when tracking is required, retry patterns, and 3 conflict resolution strategies. Pair this with AsNoTracking decisions on write paths.
Tracking vs. No-Tracking Queries in EF Core
The full deep-dive on the change tracker, including the exact benchmark numbers, the memory overhead, and the rare cases where you actually want tracking on a read.
Mistake 4: Leaving Lazy Loading Enabled in Production
Lazy loading is a feature that quietly fires a database query the first time you touch a navigation property. It sounds convenient in theory, but it’s a disaster in production, because the queries fire from places you don’t expect: inside JSON serialization, inside mapping code, inside logging.
The wrong setup:
```csharp
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseNpgsql(connectionString)
        .UseLazyLoadingProxies()); // Don't.
```

Now any code that reads order.Customer.Name triggers a query. The serializer iterating through navigation properties? Queries. A mapping library that walks all properties? Queries. A logger that calls ToString()? Maybe queries. You end up with non-deterministic N+1 storms that only show up under load.
The fix:
```csharp
// Don't call UseLazyLoadingProxies. Use explicit eager loading
// or projections instead.
var order = await context.Orders
    .Include(o => o.Customer)
    .Include(o => o.LineItems)
    .FirstOrDefaultAsync(o => o.Id == id, ct);

// Or even better - project exactly what you need
var orderDto = await context.Orders
    .Where(o => o.Id == id)
    .Select(o => new OrderDetail
    {
        Id = o.Id,
        CustomerName = o.Customer.Name,
        Items = o.LineItems.Select(i => new LineItemDto
        {
            ProductName = i.Product.Name,
            Quantity = i.Quantity
        }).ToList()
    })
    .FirstOrDefaultAsync(ct);
```

Explicit eager loading via Include makes the dependency visible in the code. Projection makes the shape of the data part of the contract. Both let you reason about exactly what SQL runs, when, and how big it is.
My take: never enable lazy loading proxies on a server-side ASP.NET Core app. The implicit query firing is incompatible with the way HTTP handlers should reason about database access.
Mistake 5: Cartesian Explosion from Multiple Includes
When you eagerly load two or more collection navigation properties in a single query, EF Core generates LEFT JOINs that produce a cross product. With a Department that has 10 Projects and 10 Employees, the database returns 100 rows for that single Department. With more collections, the row count multiplies. This is called a cartesian explosion, and it’s a different problem from the N+1.
The wrong code:
```csharp
var departments = await context.Departments
    .Include(d => d.Projects)
    .Include(d => d.Employees)
    .ToListAsync(ct);
```

The SQL looks fine until you check the row count. With 50 departments, 20 projects each, and 30 employees each, that one query returns 30,000 rows of mostly duplicated department data. EF Core de-duplicates on the client side, but the wire transfer and the deserialization cost have already been paid.
The fix:
```csharp
var departments = await context.Departments
    .AsSplitQuery()
    .Include(d => d.Projects)
    .Include(d => d.Employees)
    .ToListAsync(ct);
```

AsSplitQuery() tells EF Core to execute one query per Include and stitch the results together in memory. You get three smaller queries (one for the parents, one per collection) instead of one massive cross-joined query. The tradeoff is three database round trips instead of one. The rule I use: if you’re including 2+ collections AND the cross product is more than 10x the parent count, use AsSplitQuery. If it’s a single collection or the multiplier is small, the default single-query mode is fine.
You can also set this globally per DbContext:
```csharp
builder.Services.AddDbContext<AppDbContext>(options =>
    options.UseNpgsql(connectionString, npgsql =>
        npgsql.UseQuerySplittingBehavior(QuerySplittingBehavior.SplitQuery)));
```

Mistake 6: Filtering After Materialization
This is the silent killer. You call .ToListAsync() first, then call .Where() on the result. EF Core materializes the entire table into memory, and the filter runs client-side. On a table with 100,000 rows, you just loaded 100,000 rows over the wire to keep 12 of them.
The wrong code:
```csharp
var orders = (await context.Orders.ToListAsync(ct))
    .Where(o => o.CreatedAt > DateTime.UtcNow.AddDays(-7))
    .Take(20);
```

The .ToListAsync() happens before the .Where() because it’s wrapped in parentheses. SQL Server happily streams every row of the Orders table to your app server, and then C# does the filtering after the data is already in memory. The endpoint is now bottlenecked on network I/O and the table size, not on what you actually want.
The fix:
```csharp
var orders = await context.Orders
    .Where(o => o.CreatedAt > DateTime.UtcNow.AddDays(-7))
    .OrderByDescending(o => o.CreatedAt)
    .Take(20)
    .AsNoTracking()
    .ToListAsync(ct);
```

The order matters. Always filter, sort, and page BEFORE you materialize. IQueryable<T> builds the SQL expression tree lazily, and only ToListAsync(), FirstAsync(), CountAsync(), and similar terminal operators execute it. Anything you do after a terminal operator runs on objects in memory.
A related anti-pattern is client-side evaluation. If your Where predicate calls a C# method EF Core can’t translate to SQL, EF Core 3+ throws an exception by default instead of silently switching to client-side evaluation. That’s a feature, not a bug. Don’t try to bypass it. Refactor the query.
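A minimal sketch of the failure and the refactor (the IsRecent helper is hypothetical, not from this codebase):

```csharp
// Hypothetical helper. EF Core cannot translate an ordinary C# method
// call to SQL, so using it in a Where predicate throws at runtime:
// static bool IsRecent(Order o) => o.CreatedAt > DateTime.UtcNow.AddDays(-7);
// var recent = await context.Orders.Where(o => IsRecent(o)).ToListAsync(ct);

// Refactor: express the condition as a translatable expression.
// The comparison becomes a plain parameterized WHERE clause.
var cutoff = DateTime.UtcNow.AddDays(-7);
var recent = await context.Orders
    .Where(o => o.CreatedAt > cutoff)
    .ToListAsync(ct);
```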
Mistake 7: Loading Entities Just to Update or Delete Them in Bulk
You want to mark 10,000 orders as archived. You write a clean foreach loop that loads each order, sets the flag, and calls SaveChanges(). The job takes 5 minutes. Then your CSV import endpoint times out and your DBA sends you a message at 11 PM.
The wrong code:
```csharp
var oldOrders = await context.Orders
    .Where(o => o.CreatedAt < DateTime.UtcNow.AddYears(-1))
    .ToListAsync(ct);

foreach (var order in oldOrders)
{
    order.IsArchived = true;
}

await context.SaveChangesAsync(ct);
```

EF Core just loaded every column of 10,000 rows, hydrated 10,000 tracked entity instances, ran change detection on all of them, and then generated 10,000 individual UPDATE statements (batched, but still 10,000 statements). All to flip one boolean column.
The fix using EF Core 10’s set-based operations:
```csharp
await context.Orders
    .Where(o => o.CreatedAt < DateTime.UtcNow.AddYears(-1))
    .ExecuteUpdateAsync(updates => updates
        .SetProperty(o => o.IsArchived, true)
        .SetProperty(o => o.ArchivedAt, DateTime.UtcNow), ct);
```

One SQL statement. No entities loaded. No change tracking. No round trip per row. Just a single UPDATE that runs on the database server where it should. Same story for deletes:

```csharp
await context.Orders
    .Where(o => o.IsArchived && o.CreatedAt < DateTime.UtcNow.AddYears(-3))
    .ExecuteDeleteAsync(ct);
```

In my own benchmarks of EF Core 10 bulk operations, ExecuteUpdateAsync and ExecuteDeleteAsync come out 300 to 500x faster than the load-then-SaveChanges pattern on 10,000-row updates. The catch: these methods bypass the change tracker entirely, which means SaveChanges interceptors don’t fire and audit trail logic that hooks into SaveChanges does not run. Plan for that explicitly. If you need that behavior, stick with SaveChanges and accept the cost.
Bulk Operations in EF Core 10 - Benchmarking Insert, Update, and Delete Strategies
The full benchmark suite for bulk inserts, updates, and deletes in EF Core 10. ExecuteUpdate vs SaveChanges vs SqlBulkCopy vs EFCore.BulkExtensions with real numbers and a decision matrix.
Mistake 8: No Pagination on List Endpoints
Your /products endpoint returns the full Products table. In development, it has 50 rows. In production, the catalog import job runs and adds 200,000 SKUs overnight. The next morning, every dashboard loading that endpoint is hung, and your APM tool is screaming.
The wrong code:
```csharp
app.MapGet("/products", async (AppDbContext db, CancellationToken ct) =>
{
    var products = await db.Products.AsNoTracking().ToListAsync(ct);
    return Results.Ok(products);
});
```

Returning the entire table works fine until it doesn’t. The fix is to always paginate list endpoints from day one, even when the table is small. You’ll be glad you did the day production data shows up.
The fix with offset pagination:
```csharp
app.MapGet("/products", async (
    int page,
    int pageSize,
    AppDbContext db,
    CancellationToken ct) =>
{
    page = Math.Max(1, page);
    pageSize = Math.Clamp(pageSize, 1, 100);

    var query = db.Products.AsNoTracking().OrderBy(p => p.Id);

    var totalCount = await query.CountAsync(ct);
    var items = await query
        .Skip((page - 1) * pageSize)
        .Take(pageSize)
        .Select(p => new ProductListItem { Id = p.Id, Name = p.Name, Price = p.Price })
        .ToListAsync(ct);

    return Results.Ok(new { items, totalCount, page, pageSize });
});
```

Two things matter here. First, always cap pageSize server-side. Never trust the client to choose a sane value. Second, on tables larger than a million rows, switch from offset pagination (Skip/Take) to keyset pagination using a cursor. Offset pagination forces SQL Server or PostgreSQL to count and skip rows for every page, which gets slow as the page number grows. Keyset pagination uses a WHERE clause on an indexed column and stays fast regardless of page depth.
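A minimal keyset sketch. The cursor parameter and endpoint route are assumptions about the API shape, and it assumes a monotonically increasing long key; with Guid keys you would seek on an indexed CreatedAt plus Id tiebreaker instead:

```csharp
// Hypothetical keyset endpoint. The client passes the Id of the last
// row it received (lastId); the query seeks past it on the indexed key.
app.MapGet("/products/keyset", async (
    long? lastId,
    int pageSize,
    AppDbContext db,
    CancellationToken ct) =>
{
    pageSize = Math.Clamp(pageSize, 1, 100);

    var query = db.Products.AsNoTracking().AsQueryable();
    if (lastId is { } cursor)
    {
        // Translates to WHERE Id > @cursor - an index seek,
        // not an OFFSET scan that grows with page depth.
        query = query.Where(p => p.Id > cursor);
    }

    var items = await query
        .OrderBy(p => p.Id)
        .Take(pageSize)
        .Select(p => new { p.Id, p.Name, p.Price })
        .ToListAsync(ct);

    // The last Id on this page is the cursor for the next page.
    return Results.Ok(new { items, nextCursor = items.Count > 0 ? items[^1].Id : lastId });
});
```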
Pagination, Sorting, and Searching in ASP.NET Core APIs
The full guide to building paginated list endpoints with both offset and keyset pagination, dynamic sorting, and search filtering. Production patterns for paginated APIs in .NET 10.
Mistake 9: Missing Database Indexes on Filtered or Joined Columns
EF Core gives you a beautiful LINQ query. The database happily runs it. Without an index on the filtered column, the database is doing a full table scan every time. With 10,000 rows in development, it’s fast. With 10 million rows in production, the same query takes 8 seconds.
The wrong setup:
```csharp
public class Product
{
    public Guid Id { get; set; }
    public string Sku { get; set; } = null!;
    public string Name { get; set; } = null!;
    public Guid CategoryId { get; set; }
    public DateTime CreatedAt { get; set; }
    // No indexes declared
}
```

Every query that does Where(p => p.Sku == sku) or joins on CategoryId is a full scan.
The fix using Fluent API:
```csharp
public class ProductConfiguration : IEntityTypeConfiguration<Product>
{
    public void Configure(EntityTypeBuilder<Product> builder)
    {
        builder.HasIndex(p => p.Sku).IsUnique();
        builder.HasIndex(p => p.CategoryId);
        builder.HasIndex(p => new { p.CategoryId, p.CreatedAt }); // Composite
    }
}
```

Or with the [Index] attribute on the entity:

```csharp
[Index(nameof(Sku), IsUnique = true)]
[Index(nameof(CategoryId))]
[Index(nameof(CategoryId), nameof(CreatedAt))]
public class Product { /* ... */ }
```

Run dotnet ef migrations add AddProductIndexes and then dotnet ef database update. Indexes do cost write performance and storage, so don’t index every column. Index the ones that show up in WHERE, JOIN, and ORDER BY clauses in your hot queries. A good rule: if a query is on a hot path and filters on a column with more than a few thousand distinct values, that column wants an index.
Configuring Entities with Fluent API in EF Core
The full guide to entity configuration with the Fluent API, including HasIndex, value converters, owned entities, and table splitting. The Fluent API is where production-grade EF Core configuration lives.
Mistake 10: Not Using Compiled Queries on Hot Paths
Every time you execute a LINQ query, EF Core walks the expression tree and translates it to SQL. For most queries, that’s a few microseconds and you never notice. On hot paths that fire thousands of times per second, the translation cost adds up. EF Core supports compiled queries, where you compile the LINQ-to-SQL translation once and reuse the delegate forever.
The wrong code on a hot path:
```csharp
app.MapGet("/products/{id:guid}", async (
    Guid id,
    AppDbContext db,
    CancellationToken ct) =>
{
    var product = await db.Products
        .AsNoTracking()
        .FirstOrDefaultAsync(p => p.Id == id, ct);

    return product is null ? Results.NotFound() : Results.Ok(product);
});
```

This is correct, but on an endpoint serving 5,000 RPS, the LINQ translation cost is non-trivial.
The fix using EF.CompileAsyncQuery:
```csharp
private static readonly Func<AppDbContext, Guid, CancellationToken, Task<Product?>>
    GetProductById = EF.CompileAsyncQuery(
        (AppDbContext db, Guid id, CancellationToken ct) =>
            db.Products
                .AsNoTracking()
                .FirstOrDefault(p => p.Id == id));

app.MapGet("/products/{id:guid}", async (
    Guid id,
    AppDbContext db,
    CancellationToken ct) =>
{
    var product = await GetProductById(db, id, ct);
    return product is null ? Results.NotFound() : Results.Ok(product);
});
```

In benchmarks I’ve run on a single-row primary-key lookup, the compiled query consistently runs 30 to 60% faster than the equivalent ad-hoc LINQ query. EF Core 9 also introduced an experimental precompiled queries feature (tied to .NET NativeAOT support) that pushes the gap even further on workloads with many query executions, though it remains in preview and has documented limitations. The compiled-query route makes the most sense on a handful of critical endpoints, not on every query in the codebase. Profile first, then compile the hot 5% that actually matter.
Compiled Queries in EF Core - Benchmarks and Best Practices
The deep-dive on compiled queries with BenchmarkDotNet results across single-row lookups, list endpoints, and high-RPS workloads. Includes the new precompiled query feature in EF Core 9+.
When to Use What: The Decision Matrix
Some of these fixes overlap. Here’s how I decide which one to reach for first:
| Symptom | First fix to try | Then |
|---|---|---|
| Endpoint returns full entities, big payloads | Projection via Select | Then AsNoTracking |
| 50+ SQL queries fire for one request | Add Include or projection | Check for lazy loading |
| Single query returns 30,000 rows for 50 entities | AsSplitQuery | Or project specific columns |
| Bulk update or delete is slow | ExecuteUpdateAsync / ExecuteDeleteAsync | Use EFCore.BulkExtensions for 50K+ rows |
| List endpoint times out under load | Add pagination | Then AsNoTracking + projection |
| WHERE query is slow | Add an index on the filtered column | Verify the SQL with EXPLAIN |
| Hot endpoint with high RPS | Compile the query with EF.CompileAsyncQuery | Consider HybridCache for stable reads |
The rule I follow: fix the cheapest, highest-impact mistake first, then measure again. Most APIs jump from 80th to 95th percentile performance by fixing just three of these (projections, AsNoTracking, pagination) before touching anything advanced.
HybridCache in ASP.NET Core for EF Core Reads
Once you've tuned the EF Core query, the next move is to cache stable reads. HybridCache combines in-memory + distributed caching with stampede protection - the right layer above EF Core for read-heavy endpoints.
What EF Core 10 Improves Out of the Box
Some of these mistakes are easier to fix in EF Core 10 than in older versions, even without changing your patterns:
- LeftJoin and RightJoin as first-class operators: You no longer need the awkward GroupJoin + SelectMany + DefaultIfEmpty pattern. Cleaner LINQ, identical SQL, fewer footguns.
- More consistent ordering for split queries: EF Core 10 fixed a subtle correctness issue where AsSplitQuery combined with Take and Include could produce non-deterministic results because the subquery ordering omitted the primary key. The fix ensures the same ordering is applied across all split queries.
- Faster materialization: .NET 10’s JIT improvements (better inlining of the hot materialization path, stronger devirtualization, cheaper generic specialization) make every EF Core query run faster without changing a line.
- Improved translation for parameterized collections: EF Core 10 changes the default translation of IEnumerable.Contains to use individual scalar parameters with padding, giving the query planner better cardinality information while still preserving plan cache reuse.
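As a sketch of the first improvement (the Customers/Orders shapes are assumed from the earlier examples): LeftJoin takes the same arguments as Join and translates straight to a SQL LEFT JOIN, with the inner side null for unmatched rows.

```csharp
// EF Core 10's first-class LeftJoin: same shape as Join, but customers
// with no orders still appear, with a null on the order side.
var rows = await context.Customers
    .LeftJoin(
        context.Orders,
        customer => customer.Id,
        order => order.CustomerId,
        (customer, order) => new
        {
            customer.Name,
            // Null when the customer has no orders (unmatched LEFT JOIN row).
            OrderTotal = (decimal?)order.Total
        })
    .ToListAsync(ct);
```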
None of these fix the 10 mistakes for you. They make the right patterns slightly faster, and they don’t rescue bad patterns. The mistakes still ship to production unless you fix the code.
Key Takeaways
- Projection + AsNoTracking + pagination is the 80% solution. Apply those three on every list endpoint by default.
- N+1 queries are the #1 killer. Detect them by logging SQL with LogTo in development. Fix them with Include or projection.
- Lazy loading proxies do not belong on a server-side ASP.NET Core app. Use explicit eager loading or projections.
- AsSplitQuery is your fix for cartesian explosion when you genuinely need 2+ collections. The default single-query mode is fine for single-collection includes.
- ExecuteUpdateAsync and ExecuteDeleteAsync are 300-500x faster than load-then-SaveChanges. Use them for any bulk write that doesn’t need interceptors.
- Indexes matter as much as query structure. Profile your hot queries and index the columns in their WHERE, JOIN, and ORDER BY clauses.
- Compiled queries belong on the hot 5% of endpoints. Profile first, then compile.
- Measure before and after every change. Use LogTo, dotnet-trace, MiniProfiler, or BenchmarkDotNet. Performance work without measurement is folk magic.
Frequently Asked Questions
What is the N+1 query problem in EF Core?
The N+1 problem happens when EF Core executes one query to load a list of entities and then one additional query for each entity in that list, usually when code accesses a navigation property inside a loop. With 100 entities, you get 101 queries instead of 1 or 2. The fix is to use Include for eager loading or to project the related data into a DTO via Select.
When should I use AsNoTracking in EF Core?
Use AsNoTracking on any query where the returned entities will not be updated or deleted. That includes API list endpoints, reports, lookups, and most read-only data access. It typically improves read performance by 20 to 40% and reduces memory pressure. The exception is queries that load entities you intend to modify before calling SaveChanges, where you need tracking to detect the changes.
Is lazy loading bad in EF Core?
Lazy loading is risky in server-side ASP.NET Core applications because it triggers hidden database queries from unexpected places like JSON serialization, mapping code, and logging. This produces non-deterministic N+1 storms that only show up under production load. Use explicit eager loading via Include or project the data you need with Select. Never enable UseLazyLoadingProxies on a web API.
What is the difference between Include and AsSplitQuery in EF Core?
Include generates a single SQL query with JOINs to load related entities together. AsSplitQuery generates separate SQL queries for each Include and stitches the results in memory. Use the default single-query mode for single-collection Includes. Use AsSplitQuery when you Include 2 or more collection properties at once, to avoid cartesian explosion where row counts multiply.
When should I use ExecuteUpdateAsync instead of SaveChanges?
Use ExecuteUpdateAsync for bulk updates that do not need to load entities into memory or fire EF Core interceptors. It runs as a single SQL UPDATE statement and is typically 300 to 500 times faster than loading entities and calling SaveChanges. Use SaveChanges when you need change tracking, audit trail interceptors, or domain event dispatching to run on the modified entities.
Do I still need compiled queries in EF Core 10?
Compiled queries are still useful in EF Core 10, but only on hot paths that fire thousands of times per second. EF Core 10 with .NET 10 has faster query translation than older versions, which narrows the gap. Profile first to identify the 5% of endpoints that actually need it, then use EF.CompileAsyncQuery for those. For most endpoints, the ad-hoc LINQ query is already fast enough.
How do I detect N+1 queries in my .NET application?
The fastest way is to enable SQL logging on your DbContext with LogTo and EnableSensitiveDataLogging in development, then watch the console as you hit endpoints. If you see many SELECT statements firing for one HTTP request, you have an N+1. For deeper analysis, use MiniProfiler, EF Core Power Tools, or dotnet-trace. APM tools like Application Insights, Datadog, and New Relic also flag N+1 patterns automatically.
Does EF Core 10 fix any of these performance issues automatically?
EF Core 10 ships with faster materialization thanks to .NET 10 JIT improvements, first-class LeftJoin and RightJoin LINQ operators, more consistent ordering for split queries (a correctness fix in Include plus Take scenarios), and improved translation for parameterized collections. None of these fix the 10 mistakes for you. They make correct patterns slightly faster, but the mistakes still ship to production unless you fix the code patterns themselves.
Troubleshooting Common Issues
Symptom: My query is fast in development but slow in production.
The most common cause is missing indexes. Development databases usually have hundreds of rows. Production has millions. A full table scan is invisible at 100 rows and catastrophic at 10 million. Run EXPLAIN (PostgreSQL) or SET SHOWPLAN_ALL ON (SQL Server) on the production query and check for sequential scans or table scans.
Symptom: I added Include and now my query returns way too many rows.
You probably hit a cartesian explosion. Switch to AsSplitQuery() or project specific columns with Select() instead of loading full entities.
Symptom: ExecuteUpdateAsync runs but my audit trail isn’t firing.
That’s by design. ExecuteUpdateAsync and ExecuteDeleteAsync bypass the change tracker, which means SaveChanges interceptors and any audit code wired into SaveChanges do not run. If you need that behavior, stick with the load-then-SaveChanges pattern or move your audit logic to database triggers.
Symptom: I enabled AsNoTracking globally and now SaveChanges doesn’t work.
With QueryTrackingBehavior.NoTracking as the default, you need to opt back into tracking for write paths by adding .AsTracking() to those specific queries. Or attach the entity manually before calling SaveChanges.
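Both opt-ins, as a short sketch (the entity names and the detachedOrder instance are assumptions for illustration):

```csharp
// Option 1: opt a specific query back into tracking.
var order = await context.Orders
    .AsTracking()
    .FirstOrDefaultAsync(o => o.Id == id, ct);

// Option 2: attach a detached instance (e.g. one bound from a request
// body) and mark it modified so SaveChanges generates an UPDATE.
context.Attach(detachedOrder);
context.Entry(detachedOrder).State = EntityState.Modified;
await context.SaveChangesAsync(ct);
```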
Symptom: My compiled query throws “the query is not supported in compiled form.”

Compiled queries support most LINQ operators, but some patterns (like calling user-defined methods inside the query) don’t compile. Refactor the query to use only translatable LINQ, or fall back to ad-hoc LINQ for that one endpoint.
Symptom: My index isn’t being used by the query planner.
Either the column has low cardinality (few distinct values, like a boolean), or the query is rewriting in a way that disables the index (e.g., wrapping the column in a function like LOWER(Sku) = 'abc'). Check the query plan and rewrite the LINQ to keep the column unwrapped.
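A short sketch of the rewrite, using the Sku column from the earlier Product entity (the canonical-casing assumption is mine, not from the article):

```csharp
// Index-defeating: wrapping the column in a function means the planner
// can't use the index on Sku.
// var product = await db.Products
//     .FirstOrDefaultAsync(p => p.Sku.ToLower() == input.ToLower(), ct);

// Index-friendly: normalize the parameter instead and keep the column
// unwrapped. Assumes Sku is stored in a canonical casing (or that the
// column uses a case-insensitive collation).
var sku = input.ToUpperInvariant();
var product = await db.Products
    .FirstOrDefaultAsync(p => p.Sku == sku, ct);
```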
Wrap-Up
EF Core is fast. The framework rarely is the bottleneck. The patterns you write on top of it are. Every one of the 10 mistakes in this article comes from the same root cause: writing code that looks correct, ships green tests, and breaks at the moment production traffic arrives.
The fix isn’t to learn a new framework. It’s to keep applying the same boring discipline on every endpoint: projection, AsNoTracking, pagination, explicit eager loading, indexes on filtered columns, set-based bulk operations, and a healthy fear of lazy loading. Do those things by default and you’ll skip 90% of the EF Core performance pain that ends up on incident reports.
If you want the full hands-on flow with a real .NET 10 Web API, the free Web API course walks through these patterns module by module, with complete source code on GitHub.
If you found this helpful, share it with a colleague who’s currently debugging a slow endpoint. And if there’s an EF Core topic you’d like me to cover next, drop a comment and let me know.
Happy Coding :)



