# Quickstart
Get from a fresh checkout to a running Frank stack with API, UI, workers, Iceberg, and CLI access.
## What you need

- Docker and Docker Compose
- Node.js 18+ for `frankctl`
- Python 3.11+ if you run backend tools outside containers
- The shared `common-infra` stack available next to this repo

Frank's Docker Compose file expects services from `../common-infra`, including Postgres, MinIO, Iceberg REST, Temporal, Trino, Loki, and related infrastructure.
## 1. Start shared infrastructure

From the ecosystem root:

```bash
cd ../common-infra
docker-compose up -d
```

Then return to Frank:

```bash
cd ../frank-low-code-pipeline
```

## 2. Start Frank

```bash
make up
```

This builds and starts:
- FastAPI on `http://localhost:8002`
- SvelteKit UI on `http://localhost:5175`
- Source worker for discovery and extraction
- Transform worker for transform execution lifecycle
- Temporal worker for workflow tasks
- API route initialization, database migrations, pattern sync, and SDM seeding
Check status:

```bash
make status
curl http://localhost:8002/health
```

Open:

- UI: `http://localhost:5175`
- API docs: `http://localhost:8002/docs`
- Dagster: `http://localhost:3000` or the configured Dagster URL
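The health endpoint may take a few seconds to answer while the containers come up. A small polling helper (our own sketch, not part of the Frank tooling) avoids racing ahead of the API:

```shell
# wait_for_health URL [RETRIES] [DELAY]: poll an endpoint until it answers 2xx.
wait_for_health() {
  url=$1; retries=${2:-30}; delay=${3:-2}
  i=0
  while [ "$i" -lt "$retries" ]; do
    if curl -fsS "$url" >/dev/null 2>&1; then
      echo "healthy: $url"
      return 0
    fi
    i=$((i + 1))
    sleep "$delay"
  done
  echo "gave up waiting for $url" >&2
  return 1
}

# Short demo run; bump the retry count for a real cold start.
wait_for_health http://localhost:8002/health 3 1 || echo "API not up yet; check make status"
```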
## 3. Build the TypeScript CLI

```bash
cd frank-cli
npm install
npm run build
npm link
frankctl --help
```

The CLI defaults to `http://localhost:8000`, so point it at the compose port:
```bash
frankctl --api-url http://localhost:8002 status
```

For repeated local use:

```bash
frankctl config set profiles.default.apiUrl http://localhost:8002
frankctl status
```

In current local dev stacks, some routes require tenant context before full JWT tenant resolution is wired everywhere:
```bash
export FRANK_DEV_MODE=true
export FRANKCTL_TENANT_ID="00000000-0000-0000-0000-000000000001"
```

Use a real tenant UUID if your database has seeded tenants.
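If you are pasting a tenant ID from a database query, a quick format check catches truncated values before the CLI starts returning confusing errors. This helper is our own sketch, not part of frankctl:

```shell
# Export the tenant ID only if it matches the canonical UUID shape.
tenant="00000000-0000-0000-0000-000000000001"
if printf '%s\n' "$tenant" | grep -Eiq '^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$'; then
  export FRANKCTL_TENANT_ID="$tenant"
  echo "tenant set: $FRANKCTL_TENANT_ID"
else
  echo "not a UUID: $tenant" >&2
fi
```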
## 4. Discover a source pattern

```bash
frankctl patterns list --json
```

Source patterns include databases, SaaS systems, warehouses, APIs, files, and streams. Transform patterns include filtering, dedupe, joins, aggregation, geospatial operations, validation, conversion, SCD handling, and Python-runner patterns.
## 5. Create a source

Create a YAML file (for example, `source.yaml`):
```yaml
name: local-postgres
pattern_id: postgres
source_config:
  host: host.docker.internal
  port: 5432
  database: app
  username: app_user
  password: secret
  connector_type: postgres
target_config:
  namespace_mode: destination_defined
  table_prefix: raw_
```

Then create and discover:
```bash
frankctl sources create -f source.yaml
frankctl sources discover <source-id> --timeout 300
frankctl sources streams list <source-id>
```

Configure streams:
```yaml
streams:
  - name: customers
    sync_mode: incremental
    cursor_field: updated_at
    write_disposition: merge
    primary_key_path: ["id"]
    is_enabled: true
```

Save the configuration as `streams.yaml` and apply it:

```bash
frankctl sources streams set <source-id> -f streams.yaml
```
```bash
frankctl sources sync <source-id> --timeout 600
```

## 6. Browse datasets
```bash
frankctl datasets list --layer bronze
frankctl datasets preview bronze.tenant_00000000_local_postgres.raw_customers --limit 20
```

The exact dataset ID depends on tenant ID, source name, stream name, and target configuration.
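The example ID above suggests a naming scheme of `bronze.tenant_<first UUID segment>_<source name with dashes as underscores>.<table_prefix><stream>`. That scheme is inferred from this single example rather than documented, so verify against `frankctl datasets list`; as a sketch:

```shell
# Compose a bronze dataset ID from the values used earlier in this quickstart.
# The naming convention here is inferred, not a documented contract.
tenant_id="00000000-0000-0000-0000-000000000001"
source_name="local-postgres"
table_prefix="raw_"
stream="customers"

tenant_short=$(printf '%s' "$tenant_id" | cut -d- -f1)   # first UUID segment
source_slug=$(printf '%s' "$source_name" | tr '-' '_')   # dashes -> underscores
dataset_id="bronze.tenant_${tenant_short}_${source_slug}.${table_prefix}${stream}"
echo "$dataset_id"
```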
## 7. Build a transform
Use the UI for the full guided flow:
- Open `http://localhost:5175/transforms/new`.
- Select one or more source tables.
- Pick a target schema from FIWARE or custom schemas.
- Accept or edit AI-suggested field mappings.
- Preview, hydrate, and run.
Or trigger an existing transform from the CLI:
```bash
frankctl transforms list
frankctl transforms trigger <transform-id>
frankctl transforms runs <transform-id>
```

## 8. Compose a pipeline
The UI flow lives at `/pipelines/new`. The CLI can validate an existing pipeline:

```bash
frankctl pipelines list
frankctl pipelines validate <pipeline-id> --timeout 600
```

Pipeline validation starts a sandbox workflow and returns a final JSON summary.
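The fields of that JSON summary are not specified here, so the shape below is illustrative only; treat `status` as a hypothetical key and check the actual output of `frankctl pipelines validate` on your stack before scripting against it. A minimal sketch:

```shell
# Illustrative only: parse a validation summary captured from stdout.
# In practice you would use: summary=$(frankctl pipelines validate <pipeline-id> --timeout 600)
summary='{"status": "passed", "checks": 12, "failures": 0}'

status=$(printf '%s' "$summary" | python3 -c 'import json, sys; print(json.load(sys.stdin).get("status", "unknown"))')
if [ "$status" = "passed" ]; then
  echo "pipeline validation passed"
else
  echo "pipeline validation: $status" >&2
fi
```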
## Troubleshooting
| Symptom | Check |
|---|---|
| API does not start | `make logs-api`, Postgres credentials, and `../common-infra` status |
| Source discovery fails | `make logs`, source-worker logs, Docker socket mount, `AIRBYTE_DOCKER_NETWORK` |
| UI cannot reach API | `VITE_API_URL`, CORS origins, API port 8002 |
| AI action returns an API error | Martha health, `MARTHA_API_URL`, `MARTHA_CLIENT_SECRET`, workflow seeding |
| Ontology sync reports an error | `ONTOLOGY_ENABLED`, `ONTOLOGY_SERVICE_URL`, ontology auth env vars |
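To narrow down which row of the table applies, a quick reachability pass over the local endpoints helps. This is our own helper; the URLs are the defaults from this quickstart:

```shell
# check URL: print ok/FAIL depending on whether the endpoint answers at all.
check() {
  if curl -fsS -o /dev/null --max-time 2 "$1" 2>/dev/null; then
    echo "ok   $1"
  else
    echo "FAIL $1"
  fi
}

for url in http://localhost:8002/health http://localhost:5175 http://localhost:3000; do
  check "$url"
done
```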
## What is next
- Sources for EL configuration.
- Transforms for mappings, artifacts, and runs.
- AI assistance for Martha-backed help.
- Ontology integration for backing datasets.