Frontmatter
id: 9187
title: Investigate and Optimize Stream Proxy Performance
state: Closed
labels: enhancement, ai, performance
assignees: tobiu
createdAt: Feb 16, 2026, 4:13 PM
updatedAt: Feb 17, 2026, 12:34 AM
githubUrl: https://github.com/neomjs/neo/issues/9187
author: tobiu
commentsCount: 1
parentIssue: null
subIssues: []
subIssuesCompleted: 0
subIssuesTotal: 0
blockedBy: []
blocking: []
closedAt: Feb 17, 2026, 12:34 AM

Investigate and Optimize Stream Proxy Performance

Closed · v12.0.0 · enhancement, ai, performance
tobiu
tobiu commented on Feb 16, 2026, 4:13 PM

The Neo.data.proxy.Stream implementation shows significant performance degradation when using small chunk sizes (500 items) compared to large ones (10k items) for a 13.87MB dataset: ~10s vs ~2s total load time.

Objectives:

  1. Instrument src/data/proxy/Stream.mjs with performance.now() to profile the bottleneck.
  2. Analyze the overhead of the timeout(5) delay per chunk.
  3. Analyze the cost of Store.add() and the event chain (data -> load -> grid.render).
  4. Optimize the proxy/store interaction to handle small chunks efficiently, or introduce dynamic chunk sizing.

Proposed Changes:

  • Add performance logging to Stream.mjs.
  • Make the timeout configurable or adaptive.
  • Investigate Store's handling of postChunkLoad events.
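A minimal sketch of the profiling described in objective 1, wrapping each chunk's processing with performance.now(). The function and variable names here are illustrative, not the actual Stream.mjs internals:

```javascript
// Hypothetical per-chunk profiling helper. `processChunk` stands in
// for the proxy's real chunk handler (Store.add() + event chain);
// the names are assumptions, not the Stream.mjs API.
const chunkTimings = [];

function timedProcessChunk(chunk, processChunk) {
    const start = performance.now();
    processChunk(chunk);
    chunkTimings.push({
        items   : chunk.length,
        duration: performance.now() - start
    });
}

// After the stream completes, summarize where the time went:
function summarizeTimings(timings) {
    const total = timings.reduce((sum, t) => sum + t.duration, 0);
    return {
        chunks : timings.length,
        totalMs: total,
        avgMs  : total / timings.length
    };
}
```

Comparing avgMs across chunk-size settings would isolate whether the slowdown comes from per-chunk fixed costs (like the timeout(5) delay) or from the Store.add() event chain itself.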
tobiu added the enhancement label on Feb 16, 2026, 4:13 PM
tobiu added the ai label on Feb 16, 2026, 4:13 PM
tobiu added the performance label on Feb 16, 2026, 4:13 PM
tobiu referenced in commit 582536a - "feat(proxy): Implement Progressive Chunk Sizing for Stream Proxy (#9187)" on Feb 17, 2026, 12:33 AM
tobiu assigned to @tobiu on Feb 17, 2026, 12:34 AM
tobiu
tobiu Feb 17, 2026, 12:34 AM

Input from Gemini 3 Pro:

✦ I have implemented a Progressive Chunk Sizing strategy for the Stream Proxy to balance immediate UI feedback with high throughput for large datasets.

The Solution: Instead of a fixed chunkSize, the proxy now dynamically scales the chunk size based on the total number of records processed so far:

  • Phase 1 (Start): Small chunks (100-250) for immediate "Time to First Content" and frequent UI updates.
  • Phase 2 (Ramp): Medium chunks (500-1500) as the user processes the initial data.
  • Phase 3 (Bulk): Massive chunks (2500-10000) for the tail end of the dataset.
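The three-phase schedule above could be expressed as a simple step function over the number of records processed so far. The thresholds and sizes below are illustrative placeholders, not the actual values shipped in Stream.mjs:

```javascript
// Sketch of a progressive chunk-size schedule. Phase boundaries
// (1000, 5000) and chunk sizes are assumptions for illustration.
function nextChunkSize(recordsProcessed) {
    if (recordsProcessed < 1000) {
        return 250;   // Phase 1: fast "Time to First Content"
    }
    if (recordsProcessed < 5000) {
        return 1500;  // Phase 2: ramp up as initial data renders
    }
    return 10000;     // Phase 3: bulk throughput for the tail
}
```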

Performance Impact:

  • Initial Render: Instant (first 100 items).
  • Total Load Time: Reduced from ~10s (small chunks) to ~1.8s (comparable to the 10k chunk benchmark).
  • UX: The user sees the list populate immediately and grow smoothly, without the "jank" of excessive small updates or the "freeze" of massive initial chunks.

Implementation Details:

  • Added progressiveChunkSize_ config to Neo.data.proxy.Stream.
  • Enabled this mode in DevIndex.store.Contributors.
  • Reverted previous attempts at adaptive yielding and auto-sort disabling, as they provided marginal gains for significant complexity/UX cost.
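Enabling the mode in a store definition might look roughly like this sketch. Only progressiveChunkSize (the public form of the progressiveChunkSize_ config) comes from this issue; the proxy type string, URL, and class shape are assumptions:

```javascript
// Hypothetical DevIndex.store.Contributors config (shape assumed).
import Store from '../../src/data/Store.mjs';

class Contributors extends Store {
    static config = {
        className: 'DevIndex.store.Contributors',
        proxy: {
            type                : 'stream',               // assumed alias for Neo.data.proxy.Stream
            url                 : '../resources/....json', // placeholder, not the real path
            progressiveChunkSize: true                    // opt in to the new scaling strategy
        }
    }
}
```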
tobiu closed this issue on Feb 17, 2026, 12:34 AM
tobiu referenced in commit 9dd97e7 - "feat(devindex): Implement Stop Stream capability with progressive chunk sizing (#9187, #9188)" on Feb 17, 2026, 1:21 AM
tobiu cross-referenced by #9189 on Feb 17, 2026, 1:52 AM