Overview
Wirekite supports MongoDB as a target database for:

- Schema Loading - Create collections with optional JSON Schema validators
- Data Loading - Bulk load extracted data using InsertMany operations
- Change Loading (CDC) - Apply ongoing changes using BulkWrite operations
MongoDB loaders convert relational table data to BSON documents. Primary keys are mapped to MongoDB’s _id field. Composite primary keys are stored as nested BSON documents (e.g., {_id: {col1: val1, col2: val2}}).

Prerequisites
Before configuring MongoDB as a Wirekite target, ensure the following requirements are met:

Database Configuration
- Version: MongoDB 4.x or above
- User Permissions: The connection user must have the readWrite role on the target database
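The primary-key mapping described in the Overview can be sketched in Python. `row_to_document` is a hypothetical helper for illustration only (it is not part of Wirekite), and it assumes the row has already been decoded into plain Python values:

```python
def row_to_document(row: dict, pk_columns: list[str]) -> dict:
    """Map a relational row to a MongoDB-style document.

    A single-column primary key becomes the _id value directly;
    a composite key becomes a nested sub-document under _id.
    """
    if len(pk_columns) == 1:
        _id = row[pk_columns[0]]
    else:
        _id = {col: row[col] for col in pk_columns}
    doc = {col: val for col, val in row.items() if col not in pk_columns}
    doc["_id"] = _id
    return doc

# Single-column key: _id holds the key value itself.
print(row_to_document({"id": 7, "name": "Ada"}, ["id"]))
# Composite key: _id is a nested document, e.g. {_id: {col1: 1, col2: 2}}.
print(row_to_document({"col1": 1, "col2": 2, "qty": 5}, ["col1", "col2"]))
```

Whether Wirekite also keeps the key columns as top-level fields is not specified here; this sketch moves them into _id only.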
Limitations
Schema Loader
The Schema Loader reads Wirekite’s intermediate schema format (.skt file) and emits two mongo-shell script files — one with db.<collection>.drop() statements and one with db.createCollection("<schema.table>") statements. The orchestrator runs these scripts against the target during the schema-apply phase; the schema loader itself does not connect to MongoDB.
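To make the emitted files concrete, here is a sketch of the statements each script would contain, one line per table. The table names are hypothetical and the helper is illustrative, not Wirekite's actual implementation:

```python
def emit_schema_scripts(tables: list[str]) -> tuple[str, str]:
    """Build the two mongo-shell scripts described above:
    a drop statement and a createCollection statement per table.
    Collection names keep the schema.table form (e.g., public.users).
    """
    drops = "\n".join(f"db.{t}.drop();" for t in tables)
    creates = "\n".join(f'db.createCollection("{t}");' for t in tables)
    return drops, creates

drop_script, create_script = emit_schema_scripts(["public.users", "public.orders"])
print(drop_script)
print(create_script)
```

In the mongo shell, a dotted chain such as db.public.users addresses the collection named "public.users", which is why the schema-qualified names work directly in the drop statements.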
Collection names follow schema.tablename format (e.g., public.users). When the orchestrator applies the create script, createCollection is a no-op if the collection already exists.

Required Parameters
- Path to the Wirekite schema file (.skt) generated by the Schema Extractor.
- Absolute path to the log file for Schema Loader operations.
- Output file for db.<collection>.drop() statements (one line per table). The orchestrator runs this script against the target before re-creating collections, only when re-applying schema.
- Output file for db.createCollection("<schema.table>") statements (one line per table). The orchestrator runs this script against the target during schema apply.

Data Loader
The Data Loader reads Wirekite’s intermediate data format (.dkt files) and loads documents into MongoDB collections using InsertMany with unordered batches for maximum throughput.
The Data Loader uses a 3-stage pipeline architecture (Scanner, Parsers, Writers) for high-performance parallel loading.
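A minimal sketch of the unordered bulk-insert pattern described above. The batch size and `chunked` helper are illustrative (not Wirekite's actual values or code), and the driver call is shown in a comment so the sketch runs without a MongoDB connection:

```python
from itertools import islice

def chunked(docs, batch_size=1000):
    """Yield successive batches of documents for bulk insertion."""
    it = iter(docs)
    while batch := list(islice(it, batch_size)):
        yield batch

docs = [{"_id": i, "n": i * i} for i in range(2500)]
batches = list(chunked(docs))
# Each batch would be sent with an unordered InsertMany, e.g. in pymongo:
#   collection.insert_many(batch, ordered=False)
# ordered=False lets the server process documents in parallel and continue
# past individual document errors instead of stopping at the first failure.
print([len(b) for b in batches])  # → [1000, 1000, 500]
```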
Required Parameters
- Path to a file containing the MongoDB connection string.
- Directory containing data files (.dkt) to load.
- Path to the Wirekite schema file for table structure information.
- Absolute path to the log file for Data Loader operations.
Optional Parameters
- Maximum number of parallel threads for loading files. Each thread loads one file at a time.
- Set to true if data was extracted using hex encoding instead of base64.

Change Loader
The Change Loader applies ongoing data changes (INSERT, UPDATE, DELETE) to MongoDB collections using BulkWrite operations. Updates use sparse $set operations, only modifying changed fields. Inserts and replaces use full document upserts.

Required Parameters
- Path to a file containing the MongoDB connection string.
- Directory containing change files (.ckt) from the Change Extractor.
- Path to the Wirekite schema file for table structure information.
- Absolute path to the log file for Change Loader operations.
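The BulkWrite behavior described at the top of this section can be sketched as follows. `change_to_op` is a hypothetical helper and the change-record shape is assumed; plain dicts stand in for driver operation objects so the sketch runs without pymongo:

```python
def change_to_op(change: dict) -> dict:
    """Translate one CDC change record into a bulk-write operation.

    UPDATE -> sparse $set touching only the changed fields
    INSERT -> full-document replace with upsert=True
    DELETE -> delete by _id
    """
    kind, _id = change["op"], change["_id"]
    if kind == "UPDATE":
        # Pymongo equivalent: UpdateOne({"_id": _id}, {"$set": changed})
        return {"update_one": {"filter": {"_id": _id},
                               "update": {"$set": change["changed"]}}}
    if kind == "INSERT":
        # Pymongo equivalent: ReplaceOne({"_id": _id}, doc, upsert=True)
        return {"replace_one": {"filter": {"_id": _id},
                                "replacement": change["doc"],
                                "upsert": True}}
    if kind == "DELETE":
        # Pymongo equivalent: DeleteOne({"_id": _id})
        return {"delete_one": {"filter": {"_id": _id}}}
    raise ValueError(f"unknown op {kind!r}")

# A sparse update only ships the fields that changed.
print(change_to_op({"op": "UPDATE", "_id": 7, "changed": {"qty": 3}}))
```

The upsert on inserts makes replaying a change file idempotent: re-applying an INSERT that already landed simply replaces the document with the same content.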
Optional Parameters
- Set to true if change data was extracted using hex encoding.

Orchestrator Configuration
When using the Wirekite Orchestrator, prefix target parameters with target.schema., target.data., or target.change. depending on the operation.
Example orchestrator configuration for MongoDB target:
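The example below is a sketch only: the target.schema. / target.data. / target.change. prefixes come from this documentation, but every parameter name and path is hypothetical, since the actual parameter names are not shown on this page.

```properties
# Hypothetical parameter names and paths for illustration;
# only the prefixes are documented above.
target.schema.schema-file=/opt/wirekite/out/app.skt
target.schema.log-file=/var/log/wirekite/schema-loader.log

target.data.connection-string-file=/etc/wirekite/mongo.uri
target.data.data-dir=/opt/wirekite/out/data
target.data.log-file=/var/log/wirekite/data-loader.log

target.change.connection-string-file=/etc/wirekite/mongo.uri
target.change.change-dir=/opt/wirekite/out/changes
target.change.log-file=/var/log/wirekite/change-loader.log
```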
