Tables
Introduction
Tables are the foundational data storage containers in Golden Core. They store records in a structured format and provide the persistent layer for all entity operations.
While entities provide smart features like search and deduplication, tables provide the raw storage and querying capabilities.
A Table in Golden:
Stores records in a PostgreSQL database
References a dataset defining the record structure
Supports history tracking for audit trails
Provides pagination and filtering for large datasets
Can be locked to prevent modifications
Tables are referenced by entities but can also be used independently for data operations.
Table Types
| Type | Description | Use Case |
|---|---|---|
| TABLE | Standard active records | Live operational data |
| HISTORY | Archived historical records | Audit trail, merged/deleted records |
History tables are automatically created when the history flag is enabled and are named `table_name_history`.
Table Configuration
Core Properties
| Property | Type | Description |
|---|---|---|
| table | String | Unique table identifier |
| description | String | Human-readable description |
| dataset | String | Reference to dataset defining structure |
| history | Boolean | Enable history table (default: false) |
| auditable | Boolean | Enable audit trail (default: false) |
| locked | Boolean | Prevent modifications (default: false) |
Naming Rules
Valid names:
Alphanumeric characters and underscores only
Must start with a letter
Maximum 63 characters
Cannot use PostgreSQL reserved keywords
Invalid names:
Cannot start with underscore (reserved)
Cannot contain `system`, `golden`, or `history`
Cannot contain `$` character
Cannot use hyphens (converted to underscores)
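The naming rules above can be sketched as a small client-side validator. This is an illustrative helper, not part of the Golden Core API; it also assumes the reserved words are checked as substrings, and it does not cover the full PostgreSQL reserved-keyword list.

```python
import re

# Reserved words that must not appear in a table name (per the rules above).
RESERVED_SUBSTRINGS = ("system", "golden", "history")

def is_valid_table_name(name: str) -> bool:
    """Check a candidate table name against the documented naming rules."""
    # Alphanumeric and underscores only, must start with a letter
    # (this also rejects leading underscores and hyphens).
    if not re.fullmatch(r"[A-Za-z][A-Za-z0-9_]*", name):
        return False
    if len(name) > 63:          # PostgreSQL identifier length limit
        return False
    if any(word in name.lower() for word in RESERVED_SUBSTRINGS):
        return False
    # Note: a full check would also reject PostgreSQL reserved keywords.
    return True

print(is_valid_table_name("customers"))    # True
print(is_valid_table_name("_staging"))     # False: leading underscore
print(is_valid_table_name("order-items"))  # False: contains a hyphen
```

Validating names client-side gives clearer error messages than waiting for the server to reject the create request.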
API Operations
Table Management
Create Table
Endpoint: POST /tables
Permission: table.save
```json
{
  "table": "customers",
  "dataset": "customer_dataset",
  "description": "Customer data table",
  "history": true,
  "auditable": true
}
```
Creates:
Main table: `customers`
History table (if enabled): `customers_history`
Indexes on `_id` and `_metadata` fields
Additional indexes on lookup columns
List All Tables
Endpoint: GET /tables
Permission: table.list
```json
{
  "tables": [
    {
      "id": "customers",
      "description": "Customer data",
      "dataset": "customer_dataset",
      "history": true,
      "audit": true,
      "created": "2025-01-15T08:00:00Z",
      "size": 15000,
      "type": "TABLE",
      "locked": false
    }
  ]
}
```
Get Table Details
Endpoint: GET /tables/{id}
Permission: table.view
Returns table configuration and statistics.
Delete Table
Endpoint: DELETE /tables/{name}
Permission: table.delete
Deletes table and all records permanently. Cannot be undone.
Clear Table
Endpoint: PUT /tables/clear/{name}
Permission: table.clear
Removes all records but preserves table structure.
Use clear instead of delete when you want to reload data fresh.
Rename Table
Endpoint: PUT /tables/rename/{id}/{newId}
Permission: table.save
Constraints:
Table cannot be in use by an entity
New name must not already exist
Table must be unlocked
Update Description
Endpoint: PUT /tables/description/{id}?description={text}
Permission: table.save
Record Query Operations
Get Records Page
Endpoint: GET /tables/{id}/records
Permissions: table.view, golden.listRecord
Query Parameters:
| Parameter | Default | Description |
|---|---|---|
| type | TABLE | TABLE or HISTORY |
| page | 0 | Page number (0-based) |
| size | 10 | Records per page |

Metadata filter parameters can also be supplied (see Record Filtering below).
Response:
```json
{
  "table": {
    "id": "customers",
    "size": 15000
  },
  "page": {
    "number": 0,
    "size": 50,
    "totalElements": 15000,
    "totalPages": 300
  },
  "columns": ["id", "email", "full_name", "phone"],
  "records": [
    {
      "id": "uuid-12345",
      "email": "[email protected]",
      "full_name": "John Doe",
      "phone": "+1234567890"
    }
  ]
}
```
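The pagination fields in the response above follow the usual 0-based scheme. As a sketch of how a client can plan a full table scan from `totalElements` and the page `size` (the helper itself is illustrative, not part of any Golden Core SDK):

```python
import math

def page_plan(total_elements: int, page_size: int) -> list[tuple[int, int]]:
    """Return (page_number, expected_record_count) pairs covering a table.

    Mirrors the 0-based `page`/`size` parameters of GET /tables/{id}/records.
    """
    total_pages = math.ceil(total_elements / page_size)
    plan = []
    for page in range(total_pages):
        remaining = total_elements - page * page_size
        plan.append((page, min(page_size, remaining)))
    return plan

plan = page_plan(total_elements=15000, page_size=50)
print(len(plan))   # 300, matching "totalPages" in the response above
print(plan[-1])    # (299, 50) – the last page is full in this example
```

Iterating over such a plan and requesting one page at a time keeps memory use flat even for tables with millions of records.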
Get Single Record
Endpoint: GET /tables/{table}/records/{id}
Permissions: table.view, golden.viewRecord
Query Parameters:
`type` - TABLE or HISTORY (default: TABLE)
`expanded` - Include audit and related data (default: false)
Expanded response includes:
Full record data with metadata
Audit trail (if enabled)
Merged record references
Dataset structure
User principals (for audit)
Get Empty Record Template
Endpoint: GET /tables/{id}/records/empty
Permissions: table.view, golden.viewRecord
Returns empty record with full column structure based on dataset.
Use case: UI forms for creating new records.
Get Table Datasets
Endpoint: GET /tables/{id}/datasets
Permission: table.view
Returns dataset configuration including nested datasets.
Record Filtering
Apply metadata filters to narrow query results:
| Filter Parameter | Type | Description |
|---|---|---|
|  | Boolean | Records with validation errors |
|  | Boolean | Records that have been merged |
|  | Boolean | Records marked as disconnected |
|  | ISO 8601 | Records updated after timestamp |
|  | ISO 8601 | Records updated before timestamp |
|  | Decimal | Quality score ≥ value |
|  | Decimal | Quality score ≤ value |
For example, combining the errors filter with a maximum quality score of 0.5 returns only records that have validation errors and a quality score below 0.5.
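A combined filter query can be sketched as follows. The filter parameter names used here (`errors`, `qualityMax`) are hypothetical placeholders; substitute the names your Golden Core deployment actually defines.

```python
from urllib.parse import urlencode

def records_url(table: str, **filters) -> str:
    """Build a GET /tables/{id}/records URL with optional filter parameters."""
    base = f"/tables/{table}/records"
    return f"{base}?{urlencode(filters)}" if filters else base

# Hypothetical parameter names, shown for illustration only.
url = records_url("customers", errors="true", qualityMax="0.5")
print(url)  # /tables/customers/records?errors=true&qualityMax=0.5
```

Building the query string with `urlencode` ensures filter values such as ISO 8601 timestamps are escaped correctly.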
ETL Operations
Load Data
Endpoint: POST /tables/load
Permission: table.load
Load data from source into table.
```json
{
  "source": "crm_source",
  "sinkTable": "customers",
  "transformation": "normalize_customer",
  "pipeline": "customer_cleaning",
  "operation": "UPSERT",
  "maxRecords": 10000,
  "sampleRecords": 0
}
```
Operations:
`INSERT` - Add new records only
`UPDATE` - Update existing records
`UPSERT` - Insert new or update existing
`DELETE` - Remove records
Returns task ID for monitoring.
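A client can validate the operation before submitting the load request. The helper below is an illustrative sketch, not part of the Golden Core API; the field names come from the example body above.

```python
# Operations accepted by POST /tables/load, per the list above.
VALID_OPERATIONS = {"INSERT", "UPDATE", "UPSERT", "DELETE"}

def load_request(source: str, sink_table: str, operation: str = "UPSERT",
                 max_records: int = 10000, **extra) -> dict:
    """Assemble a /tables/load request body, rejecting unknown operations."""
    if operation not in VALID_OPERATIONS:
        raise ValueError(f"operation must be one of {sorted(VALID_OPERATIONS)}")
    body = {
        "source": source,
        "sinkTable": sink_table,
        "operation": operation,
        "maxRecords": max_records,
    }
    body.update(extra)  # e.g. transformation, pipeline, sampleRecords
    return body

body = load_request("crm_source", "customers", pipeline="customer_cleaning")
print(body["operation"])  # UPSERT
```

Catching an invalid operation client-side avoids submitting a load task that can only fail asynchronously.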
Export Data
Endpoint: POST /tables/export
Permission: table.export
Export table data to file.
```json
{
  "source": "customers",
  "transformation": "format_export",
  "pipeline": "export_pipeline",
  "maxRecords": 5000,
  "chunkRecords": 1000
}
```
Parameters:
`maxRecords` - Limit number of records
`sampleRecords` - Export sample for testing
`chunkRecords` - Chunk size for processing
Returns task ID. Download file when task completes.
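The interaction between `maxRecords` and `chunkRecords` can be illustrated with a small sketch. How the server actually partitions work internally is an assumption; this only shows the arithmetic.

```python
def export_chunks(max_records: int, chunk_records: int) -> list[range]:
    """Split an export of max_records into chunk_records-sized ranges."""
    return [range(start, min(start + chunk_records, max_records))
            for start in range(0, max_records, chunk_records)]

chunks = export_chunks(max_records=5000, chunk_records=1000)
print(len(chunks))  # 5 chunks of 1000 records each
```

Smaller chunks reduce per-batch memory pressure at the cost of more processing steps.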
Transform Data
Endpoint: POST /tables/transform
Permission: table.transform
Apply in-place transformations to table data.
```json
{
  "source": "customers",
  "transformation": "enrich_customer",
  "pipeline": "quality_improvement",
  "maxRecords": 10000
}
```
Modifies records in the same table.
Extract Table Definitions
Endpoint: POST /tables/extract
Permission: table.export
Export table metadata (not records).
```json
{
  "ids": ["customers", "products", "orders"]
}
```
Returns JSON configuration for backup/migration.
Export All Tables
Endpoint: GET /tables/extract/all
Permission: table.export
Exports all table definitions.
Import Tables
Endpoint: POST /tables/ingest
Permission: table.import
Import table definitions from JSON.
```json
{
  "exchange": {
    "tables": [
      { /* table configuration */ }
    ]
  }
}
```
Working with History Tables
History tables track changes over time:
When Records Move to History
Record merged into golden record
Record deleted by user
Record disconnected from bucket
Manually moved via API
Querying History
History records include:
Original record data
`_history: true` flag
`_updated` timestamp when moved
Original `_id` preserved
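Based on the fields listed above, the shape of a record after it moves to history can be sketched as follows. The move itself is performed by Golden Core; this helper only illustrates the resulting structure.

```python
from datetime import datetime, timezone

def to_history(record: dict) -> dict:
    """Illustrate the shape a record takes when moved to a history table."""
    archived = dict(record)        # original data, including _id, is preserved
    archived["_history"] = True    # marks the record as historical
    archived["_updated"] = datetime.now(timezone.utc).isoformat()  # move time
    return archived

hist = to_history({"_id": "uuid-12345", "email": "[email protected]"})
print(hist["_history"], hist["_id"])  # True uuid-12345
```

Because the original `_id` is preserved, a history record can always be traced back to the live record it came from.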
Use Cases
Audit trails and compliance
Historical reporting
Undo operations
Data forensics
Table Performance
Indexing
Tables automatically create indexes on:
`_id` field (unique index)
`_metadata` field (JSONB index)
Lookup columns (defined in dataset)
Filtering
Apply filters at the database level for performance; this is far more efficient than fetching all records and filtering client-side.
Best Practices
Table Design
Meaningful names - Use clear business terminology
Enable history - For important data requiring audit trails
Define datasets first - Create dataset before table
Appropriate indexes - Mark frequently queried columns as lookup
Data Operations
Lock during ETL - Lock tables during bulk operations
Use pagination - Always paginate large datasets
Filter at source - Apply filters in queries, not client-side
Monitor size - Track table growth over time
ETL Strategy
Incremental loading - Use timestamp filtering for updates
Batch operations - Process in chunks for large datasets
Test with samples - Use
sampleRecordsfor testingMonitor tasks - Check task status for failures
History Management
Regular archival - Archive old history records periodically
Query optimization - History tables can grow large
Retention policies - Define how long to keep history
Troubleshooting
Table Creation Fails
Check:
Dataset exists and is valid
Table name follows naming rules
No existing table with same name
Sufficient database permissions
Records Not Appearing
Check:
Query correct table type (TABLE vs HISTORY)
Pagination parameters correct
Filters not excluding all records
Records actually loaded (check table size)
Performance Issues
Solutions:
Reduce page size for faster queries
Use metadata filters to limit results
Enable lookup indexes on queried columns
Archive old history records
ETL Task Failures
Check:
Source connection valid
Transformation mappings correct
Pipeline processors configured properly
Task logs for specific errors