Frontmatter
| id | 9187 |
| title | Investigate and Optimize Stream Proxy Performance |
| state | Closed |
| labels | enhancement, ai, performance |
| assignees | tobiu |
| createdAt | Feb 16, 2026, 4:13 PM |
| updatedAt | Feb 17, 2026, 12:34 AM |
| githubUrl | https://github.com/neomjs/neo/issues/9187 |
| author | tobiu |
| commentsCount | 1 |
| parentIssue | null |
| subIssues | [] |
| subIssuesCompleted | 0 |
| subIssuesTotal | 0 |
| blockedBy | [] |
| blocking | [] |
| closedAt | Feb 17, 2026, 12:34 AM |
Investigate and Optimize Stream Proxy Performance
tobiu self-assigned this on Feb 17, 2026, 12:34 AM

tobiu
Feb 17, 2026, 12:34 AM
Input from Gemini 3 Pro:
✦ I have implemented a Progressive Chunk Sizing strategy for the Stream Proxy to balance immediate UI feedback with high throughput for large datasets.
The Solution: Instead of a fixed `chunkSize`, the proxy now dynamically scales the chunk size based on the total number of records processed so far:
- Phase 1 (Start): Small chunks (100-250) for immediate "Time to First Content" and frequent UI updates.
- Phase 2 (Ramp): Medium chunks (500-1500) as the user processes the initial data.
- Phase 3 (Bulk): Massive chunks (2500-10000) for the tail end of the dataset.
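The three phases above can be sketched as a simple size function plus a planner that walks a dataset. This is an illustrative sketch only; the function and thresholds here are assumptions, not the actual `Neo.data.proxy.Stream` code:

```javascript
// Hypothetical sketch of progressive chunk sizing (names and thresholds
// are illustrative, not the actual Neo.data.proxy.Stream implementation).
// The chunk size grows with the number of records already processed:
// small at first for fast "time to first content", large for the bulk tail.
function getChunkSize(processed) {
    if (processed < 1000) return 250;   // Phase 1: immediate UI feedback
    if (processed < 5000) return 1500;  // Phase 2: ramp up
    return 10000;                       // Phase 3: bulk loading
}

// Walk a dataset of `total` records and collect the chunk sizes used
function planChunks(total) {
    const sizes = [];
    let processed = 0;

    while (processed < total) {
        const size = Math.min(getChunkSize(processed), total - processed);
        sizes.push(size);
        processed += size;
    }

    return sizes;
}
```

For a 30k-record dataset this plan emits a handful of 250-item chunks first, then ramps through 1500-item chunks, and finishes with 10k-item bulk chunks, which is the shape of the load curve described above.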
Performance Impact:
- Initial Render: Instant (first 100 items).
- Total Load Time: Reduced from ~10s (small chunks) to ~1.8s (comparable to the 10k chunk benchmark).
- UX: The user sees the list populate immediately and grow smoothly, without the "jank" of excessive small updates or the "freeze" of massive initial chunks.
Implementation Details:
- Added a `progressiveChunkSize_` config to `Neo.data.proxy.Stream`.
- Enabled this mode in `DevIndex.store.Contributors`.
- Reverted previous attempts at adaptive yielding and auto-sort disabling, as they provided marginal gains at significant complexity/UX cost.
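A minimal sketch of how such a config switch could toggle between a fixed chunk size and progressive sizing. The class, method, and defaults are hypothetical stand-ins, not the real Neo.mjs API:

```javascript
// Illustrative sketch only: a proxy that either uses a fixed chunkSize
// or scales it progressively, depending on a boolean config flag.
class StreamProxySketch {
    constructor({chunkSize = 500, progressiveChunkSize = false} = {}) {
        this.chunkSize            = chunkSize;
        this.progressiveChunkSize = progressiveChunkSize;
    }

    // Returns the size to use for the next chunk, given how many
    // records have been processed so far.
    nextChunkSize(processed) {
        if (!this.progressiveChunkSize) {
            return this.chunkSize; // legacy fixed-size behavior
        }

        if (processed < 1000) return 250;   // start small
        if (processed < 5000) return 1500;  // ramp
        return 10000;                       // bulk tail
    }
}
```

Keeping the fixed-size path as the default makes the feature opt-in per store, which matches enabling it only in `DevIndex.store.Contributors`.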
tobiu closed this issue on Feb 17, 2026, 12:34 AM
tobiu cross-referenced by #9189 on Feb 17, 2026, 1:52 AM
The `Neo.data.proxy.Stream` implementation shows significant performance degradation when using small chunk sizes (500 items) compared to large ones (10k items) for a 13.87MB dataset (10s vs 2s).

Objectives:
- Instrument `src/data/proxy/Stream.mjs` with `performance.now()` to profile the bottleneck.
- Measure the cost of the `timeout(5)` delay per chunk.
- Profile `Store.add()` and the event chain (`data` -> `load` -> `grid.render`).

Proposed Changes:
- Optimize chunk processing in `Stream.mjs`.
- Make the `timeout` configurable or adaptive.
- Improve the `Store`'s handling of `postChunkLoad` events.