
Tables

Introduction

Tables are the foundational data storage containers in Golden Core. They store records in a structured format and provide the persistence layer for all entity operations.

While entities provide smart features like search and deduplication, tables provide the raw storage and querying capabilities.

A Table in Golden:

  • Stores records in a PostgreSQL database

  • References a dataset defining the record structure

  • Supports history tracking for audit trails

  • Provides pagination and filtering for large datasets

  • Can be locked to prevent modifications

Tables are referenced by entities but can also be used independently for data operations.


Table Types

| Type    | Description                 | Use Case                             |
|---------|-----------------------------|--------------------------------------|
| TABLE   | Standard active records     | Live operational data                |
| HISTORY | Archived historical records | Audit trail, merged/deleted records  |

History tables are automatically created when the history flag is enabled and are named table_name_history.


Table Configuration

Core Properties

| Property    | Type    | Description                             |
|-------------|---------|-----------------------------------------|
| id          | String  | Unique table identifier                 |
| description | String  | Human-readable description              |
| dataset     | String  | Reference to dataset defining structure |
| history     | Boolean | Enable history table (default: false)   |
| audit       | Boolean | Enable audit trail (default: false)     |
| locked      | Boolean | Prevent modifications (default: false)  |

Naming Rules

Valid names:

  • Alphanumeric characters and underscores only

  • Must start with a letter

  • Maximum 63 characters

  • Cannot use PostgreSQL reserved keywords

Invalid names:

  • Cannot start with underscore (reserved)

  • Cannot contain system, golden, or history

  • Cannot contain $ character

  • Cannot use hyphens (converted to underscores)
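The rules above can be sketched as a small validation helper. This is an illustrative check, not the exact validation Golden Core performs, and the reserved-word set here is only a tiny sample of PostgreSQL's full keyword list:

```python
import re

# Small sample of PostgreSQL reserved keywords; the real list is much longer.
RESERVED = {"select", "table", "user", "order", "group", "where"}
FORBIDDEN_SUBSTRINGS = ("system", "golden", "history")

def is_valid_table_name(name: str) -> bool:
    """Check a proposed table name against the documented naming rules."""
    if not re.fullmatch(r"[A-Za-z][A-Za-z0-9_]*", name):
        return False          # alphanumeric + underscore only, must start with a letter
    if len(name) > 63:
        return False          # PostgreSQL identifier length limit
    lowered = name.lower()
    if lowered in RESERVED:
        return False          # PostgreSQL reserved keyword
    if any(s in lowered for s in FORBIDDEN_SUBSTRINGS):
        return False          # 'system', 'golden', and 'history' are reserved
    return True
```

Names with hyphens, a leading underscore, or a $ character all fail the regular expression, matching the invalid-name rules above.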


API Operations

Table Management

Create Table

Endpoint: POST /tables
Permission: table.save

JSON
{
  "table": "customers",
  "dataset": "customer_dataset",
  "description": "Customer data table",
  "history": true,
  "auditable": true
}

Creates:

  • Main table: customers

  • History table (if enabled): customers_history

  • Indexes on the _id and _metadata fields

  • Additional indexes on lookup columns
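A minimal client-side sketch of the create call, using only the standard library. The base URL and the decision not to send the request are assumptions; the payload mirrors the example body above:

```python
import json
from urllib import request

BASE_URL = "https://golden.example.com/api"   # assumed base URL

def build_create_table_request(table: str, dataset: str, description: str = "",
                               history: bool = False, auditable: bool = False) -> request.Request:
    """Build (but do not send) a POST /tables request."""
    payload = {
        "table": table,
        "dataset": dataset,
        "description": description,
        "history": history,
        "auditable": auditable,
    }
    return request.Request(
        url=f"{BASE_URL}/tables",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_create_table_request("customers", "customer_dataset",
                                 "Customer data table", history=True, auditable=True)
# Sending it requires the table.save permission:
# with request.urlopen(req) as resp: print(resp.status)
```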

List All Tables

Endpoint: GET /tables
Permission: table.list

JSON
{
  "tables": [
    {
      "id": "customers",
      "description": "Customer data",
      "dataset": "customer_dataset",
      "history": true,
      "audit": true,
      "created": "2025-01-15T08:00:00Z",
      "size": 15000,
      "type": "TABLE",
      "locked": false
    }
  ]
}

Get Table Details

Endpoint: GET /tables/{id}
Permission: table.view

Returns table configuration and statistics.

Delete Table

Endpoint: DELETE /tables/{name}
Permission: table.delete

Deletes table and all records permanently. Cannot be undone.

Clear Table

Endpoint: PUT /tables/clear/{name}
Permission: table.clear

Removes all records but preserves table structure.

Use clear instead of delete when you want to reload data fresh.

Rename Table

Endpoint: PUT /tables/rename/{id}/{newId}
Permission: table.save

Constraints:

  • Table cannot be in use by an entity

  • New name must not already exist

  • Table must be unlocked

Update Description

Endpoint: PUT /tables/description/{id}?description={text}
Permission: table.save

Record Query Operations

Get Records Page

Endpoint: GET /tables/{id}/records
Permissions: table.view, golden.listRecord

Query Parameters:

| Parameter  | Default | Description                 |
|------------|---------|-----------------------------|
| type       | TABLE   | TABLE or HISTORY            |
| pageNumber | 0       | Page number (0-based)       |
| pageSize   | 10      | Records per page            |
| filter     | -       | Metadata filters (see below) |

Response:

JSON
{
  "table": {
    "id": "customers",
    "size": 15000
  },
  "page": {
    "number": 0,
    "size": 50,
    "totalElements": 15000,
    "totalPages": 300
  },
  "columns": ["id", "email", "full_name", "phone"],
  "records": [
    {
      "id": "uuid-12345",
      "email": "[email protected]",
      "full_name": "John Doe",
      "phone": "+1234567890"
    }
  ]
}
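Paging through a large table can be wrapped in a generator. The fetch_page callable stands in for whatever HTTP client you use; only the page arithmetic comes from the response shape above (an in-memory stand-in is included for demonstration):

```python
import math

def iter_records(fetch_page, page_size=50):
    """Yield every record from a table, requesting one page at a time (0-based pages)."""
    first = fetch_page(page_number=0, page_size=page_size)
    total = first["page"]["totalElements"]
    total_pages = math.ceil(total / page_size)   # same arithmetic as "totalPages"
    yield from first["records"]
    for n in range(1, total_pages):
        yield from fetch_page(page_number=n, page_size=page_size)["records"]

# In-memory stand-in for GET /tables/{id}/records, for demonstration only:
_DATA = [{"id": i} for i in range(105)]

def fake_fetch(page_number, page_size):
    chunk = _DATA[page_number * page_size:(page_number + 1) * page_size]
    return {"page": {"totalElements": len(_DATA)}, "records": chunk}

records = list(iter_records(fake_fetch, page_size=10))   # 11 requests, 105 records
```

With the table above (size 15000, pageSize 50) the same arithmetic yields the 300 pages shown in the response.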

Get Single Record

Endpoint: GET /tables/{table}/records/{id}
Permissions: table.view, golden.viewRecord

Query Parameters:

  • type - TABLE or HISTORY (default: TABLE)

  • expanded - Include audit and related data (default: false)

Expanded response includes:

  • Full record data with metadata

  • Audit trail (if enabled)

  • Merged record references

  • Dataset structure

  • User principals (for audit)

Get Empty Record Template

Endpoint: GET /tables/{id}/records/empty
Permissions: table.view, golden.viewRecord

Returns empty record with full column structure based on dataset.

Use case: UI forms for creating new records.

Get Table Datasets

Endpoint: GET /tables/{id}/datasets
Permission: table.view

Returns dataset configuration including nested datasets.

Record Filtering

Apply metadata filters to narrow query results:

| Filter Parameter     | Type     | Description                      |
|----------------------|----------|----------------------------------|
| filterError          | Boolean  | Records with validation errors   |
| filterMerged         | Boolean  | Records that have been merged    |
| filterDisconnected   | Boolean  | Records marked as disconnected   |
| filterUpdatedAfter   | ISO 8601 | Records updated after timestamp  |
| filterUpdatedBefore  | ISO 8601 | Records updated before timestamp |
| filterQualityGreater | Decimal  | Quality score ≥ value            |
| filterQualityLess    | Decimal  | Quality score ≤ value            |

For example, combining filterError=true with filterQualityLess=0.5 returns records with errors and a quality score of 0.5 or below.
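Filters are ordinary query parameters, so building a filtered request is just URL encoding. A sketch using the parameter names from the table above (the endpoint shape is taken from Get Records Page):

```python
from urllib.parse import urlencode

def records_url(table: str, **filters) -> str:
    """Build a GET /tables/{id}/records URL with metadata filters."""
    base = f"/tables/{table}/records"
    return f"{base}?{urlencode(filters)}" if filters else base

url = records_url("customers", filterError="true", filterQualityLess=0.5)
# → "/tables/customers/records?filterError=true&filterQualityLess=0.5"
```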


ETL Operations

Load Data

Endpoint: POST /tables/load
Permission: table.load

Load data from source into table.

JSON
{
  "source": "crm_source",
  "sinkTable": "customers",
  "transformation": "normalize_customer",
  "pipeline": "customer_cleaning",
  "operation": "UPSERT",
  "maxRecords": 10000,
  "sampleRecords": 0
}

Operations:

  • INSERT - Add new records only

  • UPDATE - Update existing records

  • UPSERT - Insert new or update existing

  • DELETE - Remove records

Returns task ID for monitoring.

Export Data

Endpoint: POST /tables/export
Permission: table.export

Export table data to file.

JSON
{
  "source": "customers",
  "transformation": "format_export",
  "pipeline": "export_pipeline",
  "maxRecords": 5000,
  "chunkRecords": 1000
}

Parameters:

  • maxRecords - Limit number of records

  • sampleRecords - Export sample for testing

  • chunkRecords - Chunk size for processing

Returns task ID. Download file when task completes.
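chunkRecords splits the export into fixed-size processing chunks; for the request above the arithmetic is straightforward (a sketch of the count only, since chunking internals are Golden Core's concern):

```python
import math

def chunk_count(max_records: int, chunk_records: int) -> int:
    """Number of chunks an export will be processed in."""
    return math.ceil(max_records / chunk_records)

chunk_count(5000, 1000)   # the example request above is processed in 5 chunks
```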

Transform Data

Endpoint: POST /tables/transform
Permission: table.transform

Apply in-place transformations to table data.

JSON
{
  "source": "customers",
  "transformation": "enrich_customer",
  "pipeline": "quality_improvement",
  "maxRecords": 10000
}

Modifies records in the same table.

Extract Table Definitions

Endpoint: POST /tables/extract
Permission: table.export

Export table metadata (not records).

JSON
{
  "ids": ["customers", "products", "orders"]
}

Returns JSON configuration for backup/migration.

Export All Tables

Endpoint: GET /tables/extract/all
Permission: table.export

Exports all table definitions.

Import Tables

Endpoint: POST /tables/ingest
Permission: table.import

Import table definitions from JSON.

JSON
{
  "exchange": {
    "tables": [
      { /* table configuration */ }
    ]
  }
}

Working with History Tables

History tables track changes over time:

When Records Move to History

  • Record merged into golden record

  • Record deleted by user

  • Record disconnected from bucket

  • Manually moved via API

Querying History

History records include:

  • Original record data

  • _history: true flag

  • _updated timestamp when moved

  • Original _id preserved

Use Cases

  • Audit trails and compliance

  • Historical reporting

  • Undo operations

  • Data forensics


Table Performance

Indexing

Tables automatically create indexes on:

  • _id field (unique index)

  • _metadata field (JSONB index)

  • Lookup columns (defined in dataset)

Filtering

Apply filters at the database level by passing filter parameters in the query. This is more efficient than fetching all records and filtering client-side: the database can use its indexes and far less data crosses the wire.


Best Practices

Table Design

  • Meaningful names - Use clear business terminology

  • Enable history - For important data requiring audit trails

  • Define datasets first - Create dataset before table

  • Appropriate indexes - Mark frequently queried columns as lookup

Data Operations

  • Lock during ETL - Lock tables during bulk operations

  • Use pagination - Always paginate large datasets

  • Filter at source - Apply filters in queries, not client-side

  • Monitor size - Track table growth over time

ETL Strategy

  • Incremental loading - Use timestamp filtering for updates

  • Batch operations - Process in chunks for large datasets

  • Test with samples - Use sampleRecords for testing

  • Monitor tasks - Check task status for failures
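Incremental loading can be driven by the filterUpdatedAfter parameter: record the time of the last successful run, then request only records updated since. A sketch (where and how the checkpoint timestamp is stored is up to you):

```python
from datetime import datetime, timezone
from urllib.parse import urlencode

def incremental_query(table: str, last_run: datetime) -> str:
    """Query string fetching only records updated since the last successful run."""
    stamp = last_run.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return f"/tables/{table}/records?{urlencode({'filterUpdatedAfter': stamp})}"

q = incremental_query("customers", datetime(2025, 1, 15, 8, 0, tzinfo=timezone.utc))
# → "/tables/customers/records?filterUpdatedAfter=2025-01-15T08%3A00%3A00Z"
```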

History Management

  • Regular archival - Archive old history records periodically

  • Query optimization - History tables can grow large

  • Retention policies - Define how long to keep history


Troubleshooting

Table Creation Fails

Check:

  • Dataset exists and is valid

  • Table name follows naming rules

  • No existing table with same name

  • Sufficient database permissions

Records Not Appearing

Check:

  • Query correct table type (TABLE vs HISTORY)

  • Pagination parameters correct

  • Filters not excluding all records

  • Records actually loaded (check table size)

Performance Issues

Solutions:

  • Reduce page size for faster queries

  • Use metadata filters to limit results

  • Enable lookup indexes on queried columns

  • Archive old history records

ETL Task Failures

Check:

  • Source connection valid

  • Transformation mappings correct

  • Pipeline processors configured properly

  • Task logs for specific errors
