I have an endpoint that accepts a model containing a byte[] in the request body:
```csharp
[HttpPost]
public ActionResult<string> UploadFileChunk([FromBody] FileChunk chunk, string guid)
```
When uploading large files, process memory jumps from ~80 MB to over 130 MB before an IOException is thrown. I suspect the internal mapping of the request body to the `FileChunk` model is the culprit (and am considering switching to streaming), but I also want to improve my diagnostic skills.
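For context, the model looks roughly like this (a sketch; the property names here are assumptions, not taken from the repo). One detail worth noting: with `[FromBody]` and System.Text.Json, a `byte[]` property travels as a Base64 string, so binding a large chunk materializes the request buffer, the Base64 string, and the decoded array:

```csharp
// Hypothetical shape of the bound model -- property names are
// assumptions for illustration, not from the linked repository.
public class FileChunk
{
    public string FileName { get; set; }

    // System.Text.Json serializes byte[] as a Base64 string, so binding
    // allocates the JSON buffer, the Base64 string, AND the decoded
    // byte[] -- roughly 2-3x the raw payload size in transient memory.
    public byte[] Data { get; set; }
}
```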
The Scenario:
Hypothetically, if you didn’t have access to external documentation or Stack Overflow, how would you use your IDE/Debugger to prove that the model binder is causing this allocation?
My Current Progress in Visual Studio:
- I take a Memory Snapshot before and during the spike.
- I open the Heap Diff View and filter by the largest byte differential.
- I see `SharedArrayPool<Byte>` at the top of the list, but when I view Paths to Root, I cannot find a direct reference to my `FileChunk` model.
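For what it's worth, the rooting behavior I'm seeing seems reproducible in isolation: arrays handed out by `ArrayPool<byte>.Shared` remain rooted by the pool's internal buckets once returned, so a snapshot attributes them to the pool rather than to whatever object last consumed their contents. A minimal sketch (my own repro, not code from the framework):

```csharp
using System;
using System.Buffers;

class PoolDemo
{
    static void Main()
    {
        // Rent a large buffer from the shared pool -- the same mechanism
        // framework body readers use internally.
        byte[] buffer = ArrayPool<byte>.Shared.Rent(1024 * 1024);

        // ...a consumer copies data OUT of the buffer into its own object,
        // so that object never holds a reference to this array...

        // After Return, the array stays alive, rooted by the pool's
        // static buckets. A heap snapshot's "Paths to Root" therefore
        // leads to SharedArrayPool<Byte>, not to the consuming model.
        ArrayPool<byte>.Shared.Return(buffer);
        Console.WriteLine(buffer.Length >= 1024 * 1024); // Rent may return a larger array
    }
}
```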
My Questions:
- Heap Diff View: What exactly am I looking at here? Does this represent all live objects on the managed heap, or just those allocated between snapshots?
- Attribution: Why doesn't my `FileChunk` model appear in the "Paths to Root" for these large byte arrays? Is it because the framework is using an internal buffer (like `SharedArrayPool`) to materialize the object?
- The "Senior Dev" Workflow: Using only the Visual Studio IDE (Performance Profiler, Diagnostic Tools, etc.), what is the specific step-by-step workflow you would use to trace these anonymous byte allocations back to a specific controller action or model?
For a hands-on look at the configuration, and to run the test yourself, I have a minimal reproducible example in this GitHub repository: https://github.com/knunn552/memory_explosion