Streaming First
Convert Buddy is designed around streaming from the ground up. This isn't an optional feature; it's the foundation that enables everything else.
What is streaming?
Instead of loading an entire file into memory, streaming processes data in small chunks:
Traditional approach:
1. Read entire 5GB file into memory (💥 crash)
2. Parse all data
3. Write all output
Streaming approach:
1. Read 1MB chunk
2. Parse chunk → emit output
3. Read next 1MB chunk
4. Repeat (memory stays constant at ~10MB)
Chunked processing
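The streaming loop above can be sketched generically. This is an illustration of the pattern only, not Convert Buddy's internals: walk the input in fixed-size windows so only one chunk is live at a time.

```typescript
// Illustration of the streaming loop: process fixed-size windows one at a
// time instead of materializing the whole input.
function* chunks(data: Uint8Array, size: number): Generator<Uint8Array> {
  for (let offset = 0; offset < data.length; offset += size) {
    yield data.subarray(offset, offset + size); // a view into data, no copy
  }
}

// Pretend this Uint8Array is a large file; with real I/O you would read
// each chunk from disk or the network instead of holding everything here.
const input = new Uint8Array(5 * 1024 * 1024); // 5MB of zeroed bytes
let processed = 0;
for (const c of chunks(input, 1024 * 1024)) { // 1MB windows
  processed += c.length; // "parse chunk → emit output" would happen here
}
console.log(processed === input.length); // true: every byte visited once
```

With real I/O the `for` loop body is the only place a chunk is referenced, so the garbage collector can reclaim each window before the next one is read.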
Convert Buddy processes data in configurable chunks (default: ~1MB). Each chunk flows through the parse → transform → write pipeline independently.
Code Sandbox

```js
import { ConvertBuddy } from "convert-buddy-js";

const fileUrl = "";

async function demonstrateChunking() {
  const response = await fetch(fileUrl);
  const data = await response.text();

  const buddy = await ConvertBuddy.create({
    inputFormat: 'csv',
    outputFormat: 'json',
    chunkTargetBytes: 512 * 1024, // 512KB chunks
    onRecords: (ctrl, batch, stats, total) => {
      console.log(`Processed ${stats.chunksIn} chunks`);
      console.log(`Memory: ${stats.maxBufferSize} bytes`);
      console.log(`Records: ${stats.recordsProcessed}`);
    }
  });

  // Encode the fetched text and feed it to the converter; with a large file
  // you would push each network chunk as it arrives instead of all at once.
  const encoder = new TextEncoder();
  const inputChunk = encoder.encode(data);
  buddy.push(inputChunk);

  const output = buddy.finish();
  console.log('Conversion complete!');
}

demonstrateChunking().catch(console.error);
```

Key backpressure patterns
| Pattern | When to use | Implementation |
|---|---|---|
| Automatic (Web Streams) | Browser, simple streaming | Use ReadableStream - backpressure is automatic |
| Manual pause/resume | Slow consumers (DB writes, API calls) | buddy.pause() → process → buddy.resume() |
| Chunk-based throttling | Rate limiting, batch processing | Pause every N chunks, resume after processing |
| Memory-based throttling | Memory-constrained environments | Monitor stats.maxBufferSize, pause if threshold exceeded |
The key insight: Convert Buddy never forces you to consume data faster than you can handle it. Whether automatic or manual, backpressure keeps memory usage constant and your application stable.
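The chunk-based throttling row can be sketched as follows. `MockSource` is a stand-in for any push-style producer exposing `pause()`/`resume()` (such as Convert Buddy, per the table above); the chunk handling and the slow downstream step are placeholders.

```typescript
// Sketch of chunk-based throttling: pause ingestion every N chunks, finish
// the slow downstream work, then resume. MockSource stands in for a real
// push-style producer with pause()/resume().
class MockSource {
  paused = false;
  pause()  { this.paused = true; }
  resume() { this.paused = false; }
}

async function processWithThrottling(
  source: MockSource,
  totalChunks: number,
  everyN: number,
): Promise<number> {
  let pauses = 0;
  for (let i = 1; i <= totalChunks; i++) {
    // ...handle chunk i here...
    if (i % everyN === 0) {
      source.pause();                             // stop accepting new chunks
      await new Promise((r) => setTimeout(r, 0)); // stand-in for a DB write
      source.resume();                            // ready for more input
      pauses++;
    }
  }
  return pauses;
}

processWithThrottling(new MockSource(), 10, 4).then((p) => console.log(p)); // 2
```

The same shape works for memory-based throttling: replace the `i % everyN` check with a comparison against a buffer-size threshold.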
Memory guarantees
Convert Buddy guarantees constant memory usage:
| Component | Memory usage |
|---|---|
| Input buffer | ~1MB (chunk size) |
| Parser state | <1MB |
| Transform records | ~100KB (batch size) |
| Output buffer | ~1MB (chunk size) |
| Total | ~3-5MB constant |
This means converting a 10GB file uses the same memory as converting a 10KB file.
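To see why memory stays flat, here is a toy line parser with a rolling buffer. It illustrates the bookkeeping behind the guarantee, not the library's actual parser: between chunks, the buffer holds only the unfinished tail of the last chunk, so peak memory tracks the chunk size rather than the file size.

```typescript
// Toy illustration of the constant-memory guarantee: a rolling buffer keeps
// only the incomplete record between chunks, emitting complete records as
// soon as they appear.
function countRecords(parts: string[]): { records: number; maxBuffer: number } {
  let buffer = "";
  let records = 0;
  let maxBuffer = 0;
  for (const chunk of parts) {
    buffer += chunk;
    maxBuffer = Math.max(maxBuffer, buffer.length);
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? "";   // keep the unfinished line for next chunk
    records += lines.length;      // complete lines are emitted immediately
  }
  if (buffer.length > 0) records++; // final record without trailing newline
  return { records, maxBuffer };
}

// Two inputs of very different sizes, same chunking: maxBuffer stays small.
const small = countRecords(["a\nb\nc"]);
const large = countRecords(Array(1000).fill("x\ny\nz\n"));
console.log(small.records, large.records); // 3 3000
console.log(large.maxBuffer);              // 6: bounded by one chunk
```

A 1000× larger input leaves the peak buffer unchanged, which is the same property the table above claims for each pipeline stage.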
When streaming matters
✅ Use streaming for:
- Files >10MB (definitely >100MB)
- Unknown/untrusted input sizes
- Browser environments with memory limits
- Real-time data processing
- Progress tracking during conversion
⚠️ Streaming less important for:
- Small files (<1MB) where buffering is fine
- Server environments with abundant memory
- One-time conversions without progress needs