# Ingesting Events
The SDK provides three ways to send events, each suited to different throughput and reliability needs.

## Building Events
All ingestion methods accept `Event` instances built with the fluent builder:
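The SDK's exact builder API is not reproduced here, so the following is a minimal, self-contained sketch: the `EventBuilder` name, setter methods, and the validation in `build()` are assumptions inferred from the field table below, not the library's actual signatures.

```rust
// Hypothetical sketch of a fluent Event builder (names and signatures assumed).
#[derive(Debug, Clone, Default)]
struct Event {
    customer_id: Option<String>,
    external_customer_id: Option<String>,
    event_name: String,
    properties: Vec<(String, String)>,
    idempotency_key: Option<String>,
}

#[derive(Default)]
struct EventBuilder {
    event: Event,
}

impl EventBuilder {
    fn new(event_name: &str) -> Self {
        Self {
            event: Event { event_name: event_name.to_string(), ..Default::default() },
        }
    }

    fn customer_id(mut self, id: &str) -> Self {
        self.event.customer_id = Some(id.to_string());
        self
    }

    fn external_customer_id(mut self, id: &str) -> Self {
        self.event.external_customer_id = Some(id.to_string());
        self
    }

    fn property(mut self, key: &str, value: &str) -> Self {
        self.event.properties.push((key.to_string(), value.to_string()));
        self
    }

    fn idempotency_key(mut self, key: &str) -> Self {
        self.event.idempotency_key = Some(key.to_string());
        self
    }

    // build() enforces the rule that at least one customer identifier is set.
    fn build(self) -> Result<Event, String> {
        if self.event.customer_id.is_none() && self.event.external_customer_id.is_none() {
            return Err("customer_id or external_customer_id is required".into());
        }
        Ok(self.event)
    }
}
```

A typical call chain would then read `EventBuilder::new("api_call").external_customer_id("cust_123").property("region", "eu-west-1").build()`.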
### Event Fields
| Field | Required | Description |
|---|---|---|
| `customer_id` | Yes\* | Monk's internal customer UUID |
| `external_customer_id` | Yes\* | Your external customer identifier |
| `event_name` | Yes | Identifies the type of event |
| `properties` | No | Key-value metadata (strings, numbers, booleans) |
| `timestamp` | No | Event time (defaults to now). Must not be more than 1 hour in the future. |
| `idempotency_key` | No | Unique key to prevent duplicates (auto-generated if omitted) |

\* Either `customer_id` or `external_customer_id` must be provided.
## Single Event
Send one event at a time with automatic retries on transient failures:

## Batch Ingestion
Send up to 10,000 events in a single HTTP request:

## Buffered Ingestion
For high-throughput pipelines, enqueue events into an in-memory buffer. The SDK automatically flushes them in batches using parallel HTTP workers.

### How Buffering Works
- `ingest_buffered` pushes the event into a bounded in-memory channel
- A background tokio task accumulates events
- When the batch reaches `max_batch_size` or the `flush_interval` elapses, a flush is triggered
- Multiple flushes run concurrently, limited by a semaphore (`max_concurrent_flushes`)
- If a flush fails after all retries, the `on_flush_failure` callback fires
## When to Use Each Method
| Method | Throughput | Confirmation | Use Case |
|---|---|---|---|
| `ingest` | Low | Per-event | Critical events, low volume |
| `ingest_batch` | Medium | Per-batch | Bulk imports, backfills |
| `ingest_buffered` | High | Fire-and-forget | Real-time pipelines, high-frequency events |
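To make the confirmation column concrete, here is a hypothetical, non-networked sketch of the three call shapes. The method names follow the table, but the signatures are assumptions: the real SDK methods are async and perform HTTP requests, while this mock only records calls.

```rust
// Mock client contrasting the confirmation semantics of the three methods
// (signatures assumed; the real SDK is async and networked).
struct MockClient {
    sent: Vec<String>,     // stands in for confirmed HTTP deliveries
    buffered: Vec<String>, // stands in for the in-memory buffer
}

impl MockClient {
    fn new() -> Self {
        Self { sent: Vec::new(), buffered: Vec::new() }
    }

    // Per-event confirmation: one Result per event.
    fn ingest(&mut self, event: &str) -> Result<(), String> {
        self.sent.push(event.to_string());
        Ok(())
    }

    // Per-batch confirmation: one Result for up to 10,000 events.
    fn ingest_batch(&mut self, events: &[&str]) -> Result<usize, String> {
        if events.len() > 10_000 {
            return Err("batch exceeds 10,000 events".into());
        }
        self.sent.extend(events.iter().map(|e| e.to_string()));
        Ok(events.len())
    }

    // Fire-and-forget: enqueue only; delivery happens on a later background flush.
    fn ingest_buffered(&mut self, event: &str) {
        self.buffered.push(event.to_string());
    }
}
```

The design trade-off the table encodes: each step down the list exchanges per-call delivery confirmation for throughput, with `ingest_buffered` deferring delivery (and error reporting) to the background flush path.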