diff --git a/.env.example b/.env.example index db1bf7f..ee7d8c2 100644 --- a/.env.example +++ b/.env.example @@ -8,7 +8,8 @@ TREASURY_INSTANCE= # Sync mainnet start point (slot and block hash) # The indexer will start syncing from this point on first run -# 160964954 and 560c7537831007f9670d287b15a69ba18a322b1edc39c0c23ccab3c12ad77b9f are good for Intersect 2025 budget instance +# Must be BEFORE the publish tx (slot 160963893) to capture the full event history. +# 160963800 and 65233bb713c15c4bb427ccbf0e7e5c1c6a6a9c5c04b5edfa1e0e8e72f1285c9c are good for Intersect 2025 budget instance STORE_CARDANO_SYNC_START_SLOT= STORE_CARDANO_SYNC_START_BLOCKHASH= \ No newline at end of file diff --git a/.gitignore b/.gitignore index 4c49bd7..4a96f0b 100644 --- a/.gitignore +++ b/.gitignore @@ -1 +1,5 @@ .env +.DS_Store +api/logs/ +indexer/logs/ +scripts/diverging_events.csv diff --git a/CLAUDE.md b/CLAUDE.md new file mode 100644 index 0000000..530f08b --- /dev/null +++ b/CLAUDE.md @@ -0,0 +1,187 @@ +# CLAUDE.md — Administration Data + +## Project Overview + +Indexes Cardano treasury governance data from the blockchain and exposes it via a REST API. Three components work together: + +1. **YACI Store indexer** — Java-based blockchain indexer (black-box dependency) that reads from a Cardano node and writes raw data to PostgreSQL +2. **PostgreSQL** — stores both raw blockchain data (`yaci_store` schema) and normalized app data (`treasury` schema) +3. **Rust API** — syncs from YACI Store tables into treasury tables, then serves REST endpoints + +Swagger docs are at `/docs` when the API is running. 
+ +## Architecture & Data Flow + +``` +Cardano Node → YACI Store indexer → PostgreSQL (yaci_store schema) + ↓ + Rust API sync service + ↓ + PostgreSQL (treasury schema) + ↓ + REST API +``` + +- **`yaci_store` schema**: raw blockchain data, managed by YACI Store's Flyway migrations — never modify manually +- **`treasury` schema**: normalized app data, managed by `database/schema/treasury.sql` and init scripts +- The YACI Store plugin filter (`indexer/plugins/scripts/treasury-filter.mvel`) reduces stored data by ~95% + +## Domain Context (TOM / Cardano Treasury) + +This project implements the **Treasury Oversight Metadata (TOM)** standard, using CIP-100 metadata label **1694**. + +### Contract Hierarchy +- **Treasury Contract (TRSC)** → at a unique script address, holds treasury reserve funds. Stored in `treasury.treasury_contracts`. +- **Vendor Contract (PSSC)** → **ONE shared script address for ALL projects** (not one per project). Stored in `treasury.vendor_contracts` (singleton row: `address`, `stake_credential`). + - Each `fund` tx creates UTXOs at the shared PSSC address + - UTXOs belong to specific projects, distinguished by inline datum, NOT by address + - UTXO chain tracking (`find_project_from_inputs`) links events to projects by tracing spent inputs +- **Project** → one row per `fund` event (e.g. `EC-0008-25`). Stored in `treasury.projects`. Foreign-keyed from milestones, events, and UTXO history via `project_db_id`. 
+- **Milestones** → belong to a project + +### Vendor Naming +- `vendor.name` does **not exist** in the TOM spec — code extracts it but always gets null +- `vendor.label` in the spec is the vendor's display name; in practice, real metadata puts the payment address here +- Vendor identity is typically embedded in the top-level `body.label` by convention (e.g., "Tastenkunst GmbH - Eternl Maintenance") + +### Event Types +publish, initialize, fund, complete, disburse, withdraw, pause, resume, modify, cancel, sweep, reorganize + +See [`docs/event-processing.md`](docs/event-processing.md) for detailed per-event field mappings, code extraction paths, DB writes, and known bugs. + +### Financial Model +- All amounts are in **lovelace** (1 ADA = 1,000,000 lovelace) + +### Milestone Lifecycle +Milestones use 4 independent boolean flags (not a linear status): +- **evidence_provided** — vendor submitted completion evidence via a `complete` transaction +- **withdrawn** — vendor withdrew payment via a `withdraw` transaction +- **paused** — oversight committee paused this milestone (from inline datum constructor 0→1) +- **archived** — milestone replaced by a `modify` event (old row preserved, new row created) + +Additionally, each milestone has a **time_limit** (POSIXTime ms) from the inline UTXO datum. +Claimability is derived: time_limit < current time AND NOT withdrawn. + +Archive model: on modify, existing row → archived=true, new row inserted. superseded_by FK links old → new. + +**Disburse vs Withdraw**: Disburse is treasury-level (moves funds from treasury contract to any address). +Withdraw is milestone-level (vendor claims matured milestone funds from vendor contract). These are completely separate. + +### Treasury Instance +The `TREASURY_INSTANCE` env var filters to a specific on-chain treasury. Changing it tracks a different treasury entirely. 
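The milestone lifecycle above can be sketched as a minimal Rust model. The struct and function names here are illustrative stand-ins (the real models live in `api/src/models/v1.rs`); only the field names and the claimability rule come from this document:

```rust
/// Illustrative stand-in for a treasury.milestones row — NOT the actual
/// API type. Field names mirror the four independent boolean flags above.
pub struct Milestone {
    pub evidence_provided: bool,
    pub withdrawn: bool,
    pub paused: bool,
    pub archived: bool,
    /// POSIXTime in milliseconds, taken from the inline UTXO datum.
    pub time_limit_ms: i64,
}

impl Milestone {
    /// Claimability is derived, never stored: the time limit has passed
    /// and the vendor has not already withdrawn.
    pub fn is_claimable(&self, now_ms: i64) -> bool {
        self.time_limit_ms < now_ms && !self.withdrawn
    }
}
```

Note that `paused` and `evidence_provided` deliberately do not factor into `is_claimable` — the derivation stated above names only `time_limit` and `withdrawn`.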
+ +## Development Setup + +### Prerequisites +- Docker and docker-compose +- Rust toolchain (for native API development) + +### Quick Start +```bash +./dev.sh start # starts PostgreSQL + indexer + API +``` + +### Dev Script Commands +```bash +./dev.sh start # start all services (API runs natively if Rust is installed) +./dev.sh stop # stop all Docker services +./dev.sh restart # restart Docker services +./dev.sh logs # tail all logs (or: ./dev.sh logs indexer) +./dev.sh status # show service status +./dev.sh build # build Docker images +./dev.sh clean # stop and remove all containers + volumes +``` + +### Native API Development +```bash +# With Docker DB already running: +cd api +DATABASE_URL="postgresql://postgres:postgres@localhost:5433/administration_data" cargo run +``` + +### Build & Test +```bash +cd api +cargo build --release +cargo check # fast type-checking +cargo test # run tests +``` + +### Environment Setup +Copy `.env.example` to `.env` and configure: +- `TREASURY_INSTANCE` — the on-chain treasury to track +- `STORE_CARDANO_SYNC_START_SLOT` / `STORE_CARDANO_SYNC_START_BLOCKHASH` — where to start syncing + +## Port Mappings + +| Service | Host Port | Container Port | +|------------|-----------|----------------| +| PostgreSQL | 5433 | 5432 | +| YACI Store | 8081 | 8080 | +| API | 8080 | 8080 | + +PostgreSQL uses **5433** on the host to avoid conflicts with local PostgreSQL installations. 
+ +Database connection string: `postgresql://postgres:postgres@localhost:5433/administration_data` + +## Code Conventions + +- Rust 2021 edition, **Axum 0.7** web framework +- **SQLx** for database queries (compile-time checked) +- **utoipa** OpenAPI decorators on all endpoints and models +- Consistent API response envelope: `{ data, pagination?, meta.timestamp }` +- Follow existing patterns in the codebase +- Add tests for new code + +## Database + +- Schema source of truth: `database/schema/treasury.sql` +- Init scripts: `database/init/` — run on first Docker PostgreSQL start +- For schema changes: create incremental migration files, don't edit `treasury.sql` directly for running systems +- YACI Store schema is auto-managed by Flyway — **never modify manually** + +## Indexer + +YACI Store is a **black-box dependency**. Only modify configuration and plugins: + +- `indexer/application.properties` — indexer config +- `indexer/config/application-plugins.yml` — plugin configuration +- `indexer/plugins/scripts/treasury-filter.mvel` — MVEL filter script + +Never modify `yaci-store.jar` or YACI Store internals. Primary network: Mainnet (`backbone.cardano.iog.io:3001`). + +## CI/CD + +- **`ci.yml`**: runs `cargo build --release && cargo test` on push/PR to main/develop +- **`push-to-ecr.yaml`**: builds Docker image and pushes to AWS ECR on push to main (or manual dispatch) +- Deployment: Helm chart bump in a separate repo + +## Gotchas + +- **Startup ordering**: YACI Store must be running and synced before the API sync service can process events. The sync service (`api/src/services/sync.rs`) waits for YACI Store tables to exist. +- **Port 5433**: PostgreSQL is on host port 5433, not 5432. +- **`.env` not committed**: copy `.env.example` and configure before first run. +- **UTXO pruning**: YACI Store prunes spent UTXOs — historical UTXO data may not be available. 
+- **Cold replay vs continuous operation**: The milestone-event chain trace (`find_project_from_inputs`) needs UTXO history to link withdraw/complete/pause/resume to a project. The Postgres triggers installed by `install_utxo_history_triggers` (in `api/src/services/sync.rs`) capture every script-address UTXO into `treasury.utxo_history` synchronously with YACI Store's INSERT, so pruning no longer drops chain-trace inputs. Triggers only protect from the moment they're armed — to recover pre-existing pruned data, wipe the database volume and re-sync with the API running so the triggers arm before YACI Store ingests. See [`docs/known-issues.md`](docs/known-issues.md) `KI-CR-01` and `KI-UTX-01`. +- **Large JAR**: `indexer/yaci-store.jar` is ~108MB and committed to the repo. Don't regenerate unnecessarily. +- **Inline datums**: `store.script.enabled=true` in YACI Store config enables milestone datum data (amounts, time limits, pause flags). Requires full re-sync after enabling. +- **Milestone archiving**: Filter `WHERE NOT archived` for current milestones. Archived rows are historical versions. 
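The startup-ordering gotcha above is essentially a bounded-retry pattern. A self-contained sketch, assuming the shape of the wait (the real check in `api/src/services/sync.rs` queries Postgres for the `yaci_store` tables; the closure here is a stand-in so the example runs without a database):

```rust
use std::thread::sleep;
use std::time::Duration;

/// Poll `ready` up to `attempts` times, sleeping `delay` between tries.
/// Returns true as soon as the check passes, false if it never does.
pub fn wait_until<F: FnMut() -> bool>(mut ready: F, attempts: u32, delay: Duration) -> bool {
    for _ in 0..attempts {
        if ready() {
            return true;
        }
        sleep(delay);
    }
    false
}
```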
+ +## Key File Locations + +| Purpose | Path | +|--------------------|---------------------------------------| +| API entry point | `api/src/main.rs` | +| API routes | `api/src/routes/v1/` | +| API models | `api/src/models/v1.rs` | +| Event processing | `api/src/services/event_processor.rs` | +| Sync service | `api/src/services/sync.rs` | +| DB schema | `database/schema/treasury.sql` | +| DB init scripts | `database/init/` | +| Docker setup | `docker-compose.yml` | +| Dev script | `dev.sh` | +| Indexer config | `indexer/application.properties` | +| Plugin config | `indexer/config/application-plugins.yml` | +| Treasury filter | `indexer/plugins/scripts/treasury-filter.mvel` | +| CI | `.github/workflows/ci.yml` | +| ECR push | `.github/workflows/push-to-ecr.yaml` | diff --git a/README.md b/README.md index 720c8a6..0dc6ccd 100644 --- a/README.md +++ b/README.md @@ -124,22 +124,30 @@ Interactive documentation available at `/docs` (Swagger UI). | `GET /api/v1/treasury/utxos` | Treasury UTXOs | | `GET /api/v1/treasury/events` | Treasury-level events | -### Vendor Contracts (Projects) +### Vendor Contract (singleton PSSC) | Endpoint | Description | |----------|-------------| -| `GET /api/v1/vendor-contracts` | List all vendor contracts (with pagination, filtering, search) | -| `GET /api/v1/vendor-contracts/:project_id` | Get vendor contract details | -| `GET /api/v1/vendor-contracts/:project_id/milestones` | Get project milestones | -| `GET /api/v1/vendor-contracts/:project_id/events` | Get project event history | -| `GET /api/v1/vendor-contracts/:project_id/utxos` | Get project UTXOs | +| `GET /api/v1/vendor-contract` | Shared PSSC script address + project rollup by status | +| `GET /api/v1/vendor-contract/utxos` | Currently-unspent UTxOs at the PSSC, labeled per project | + +### Projects + +| Endpoint | Description | +|----------|-------------| +| `GET /api/v1/projects` | List all projects (with pagination, filtering, search) | +| `GET /api/v1/projects/:project_id` | 
Get project details (includes inline `current_utxos`) | +| `GET /api/v1/projects/:project_id/milestones` | Get project milestones | +| `GET /api/v1/projects/:project_id/events` | Get project event history | +| `GET /api/v1/projects/:project_id/utxos` | Get project UTXOs | ### Milestones | Endpoint | Description | |----------|-------------| | `GET /api/v1/milestones` | List all milestones (with pagination, filtering) | -| `GET /api/v1/milestones/:id` | Get milestone details | +| `GET /api/v1/milestones/:project_id` | List milestones for a project (paginated) | +| `GET /api/v1/milestones/by-id/:id` | Get milestone by integer database ID | ### Events @@ -183,8 +191,8 @@ Limitation: this is only configured for Mainnet currently The sync start point is configured via environment variables in `.env`: ```bash -STORE_CARDANO_SYNC_START_SLOT=160964954 -STORE_CARDANO_SYNC_START_BLOCKHASH=560c7537831007f9670d287b15a69ba18a322b1edc39c0c23ccab3c12ad77b9f +STORE_CARDANO_SYNC_START_SLOT=160963800 +STORE_CARDANO_SYNC_START_BLOCKHASH=65233bb713c15c4bb427ccbf0e7e5c1c6a6a9c5c04b5edfa1e0e8e72f1285c9c ``` Network settings (host, port, protocol magic) are in `indexer/application.properties`. @@ -222,10 +230,12 @@ The system uses two schemas: | Table | Description | |-------|-------------| | `treasury.treasury_contracts` | Treasury reserve contracts (TRSC) | -| `treasury.vendor_contracts` | Vendor/project contracts (PSSC) | -| `treasury.milestones` | Project milestones | -| `treasury.events` | All TOM event audit log | -| `treasury.utxos` | UTXO tracking for event linking | +| `treasury.vendor_contracts` | Singleton row for the shared PSSC script address (one per deployment) | +| `treasury.projects` | One row per `fund` event (e.g. `EC-0008-25`); holds project metadata | +| `treasury.milestones` | Project milestones (4 independent boolean flags + archive model). FKs to `projects` via `project_db_id` | +| `treasury.events` | All TOM event audit log. 
FKs to `projects` via `project_db_id`; `destination` is JSONB | +| `treasury.utxo_history` | Persistent UTXO history (populated by Postgres triggers on `yaci_store.address_utxo`) for chain trace + datum cache | +| `treasury.sync_status` | Heartbeat: per-stream `last_slot` / `last_block` / `updated_at` | ### Connecting to Database @@ -246,11 +256,11 @@ SELECT * FROM yaci_store.block ORDER BY number DESC LIMIT 5; -- Treasury summary SELECT * FROM treasury.v_treasury_summary; --- Vendor contracts with financials +-- Projects with financials SELECT project_id, project_name, status, initial_amount_lovelace / 1000000 as allocated_ada, - total_disbursed_lovelace / 1000000 as disbursed_ada -FROM treasury.v_vendor_contracts_summary; + total_withdrawn_lovelace / 1000000 as withdrawn_ada +FROM treasury.v_projects_summary; -- Recent events SELECT * FROM treasury.v_events_with_context @@ -268,10 +278,17 @@ This reduces database size by ~95% while keeping all treasury data. ## Component Documentation - [Architecture & Data Flow](docs/architecture.md) - System architecture and data flow diagrams +- [Event Processing](docs/event-processing.md) - Per-event-type field mappings and write paths +- [Known Issues](docs/known-issues.md) - Indexed catalog of NULL-field cases, on-chain data quirks, and sync-loop gotchas - [API Documentation](api/README.md) - Full API reference - [Indexer Setup](indexer/README.md) - YACI Store configuration - [Database Schema](database/schema/) - Treasury schema definitions +## Gotchas + +- **Cold replay vs continuous operation**: a fresh local sync from an old `STORE_CARDANO_SYNC_START_SLOT` cannot reconstruct UTXO chains whose inputs were pruned *before* the `treasury.utxo_history` triggers were installed. With the triggers armed, every script-address UTXO YACI Store inserts is captured before pruning runs. To recover pre-existing pruned data, wipe the volume and re-sync with the API running so the triggers arm before YACI Store ingests. 
See [`docs/known-issues.md`](docs/known-issues.md) `KI-CR-01` / `KI-UTX-01`. +- **Stale-looking sync timestamp**: `treasury.sync_status.updated_at` only bumps when new events arrive. A long delta does not mean the sync loop is dead. See `KI-SY-01`. + ## License See [LICENSE](./LICENSE). diff --git a/api/Cargo.toml b/api/Cargo.toml index 359dab0..3c4aab5 100644 --- a/api/Cargo.toml +++ b/api/Cargo.toml @@ -40,3 +40,9 @@ chrono = { version = "0.4", features = ["serde"] } # UUID uuid = { version = "1.0", features = ["v4", "serde"] } + +# Cardano datum parsing (CBOR/Plutus) +pallas-primitives = "0.30" +pallas-codec = "0.30" +pallas-addresses = "0.30" +hex = "0.4" diff --git a/api/README.md b/api/README.md index 3a4c285..66388cc 100644 --- a/api/README.md +++ b/api/README.md @@ -6,7 +6,7 @@ Rust-based REST API for querying Cardano treasury fund tracking data. Built with - RESTful API with OpenAPI/Swagger documentation - Consistent response envelopes with pagination -- Both lovelace AND ADA amounts in responses +- All amounts in lovelace (1 ADA = 1,000,000 lovelace) - Raw metadata AND parsed/normalized data - Background sync service for real-time data @@ -52,12 +52,11 @@ All responses use a consistent envelope: ### Amount Fields -All monetary amounts include both representations: +All monetary amounts are in lovelace (1 ADA = 1,000,000 lovelace): ```json { - "initial_amount_lovelace": 1000000000000, - "initial_amount_ada": 1000000.0 + "initial_amount_lovelace": 1000000000000 } ``` @@ -79,22 +78,37 @@ Returns the health status of the API. #### `GET /api/v1/status` -Get API status and sync information. +Get API status and sync information. Three time domains are surfaced separately: +`database.checked_at` (server-side), `sync.heartbeat` (server-side, last sync poll), +`sync.last_event_processed` (on-chain block time of most recent processed TOM event), +and `chain.indexer_time` (on-chain block time of YACI Store's tip). 
**Response:** ```json { "data": { - "api_version": "1.0.0", - "database_connected": true, - "last_sync_slot": 163964156, - "last_sync_block": 12296746, - "last_sync_time": 1704067200, - "total_events": 21, - "total_vendor_contracts": 5 + "api_version": "2.0.0", + "database": { + "connected": true, + "checked_at": "2026-05-01T10:30:00Z" + }, + "sync": { + "heartbeat": "2026-05-01T10:29:55Z", + "last_event_processed": { "unix": 1777623100, "iso": "2026-05-01T08:11:40Z" } + }, + "chain": { + "indexer_block": 12296746, + "indexer_slot": 163964156, + "indexer_time": { "unix": 1777623200, "iso": "2026-05-01T08:13:20Z" } + }, + "totals": { + "events": 411, + "projects": 42, + "events_by_type": { "fund": 42, "complete": 189, "withdraw": 129, "pause": 63, "resume": 32 } + } }, "meta": { - "timestamp": "2026-01-28T10:30:00Z" + "timestamp": "2026-05-01T10:30:00Z" } } ``` @@ -115,25 +129,23 @@ Get treasury contract details with statistics and financials. "contract_instance": "9e65e4ed7d6fd86fc4827d2b45da6d2c601fb920e8bfd794b8ecc619", "contract_address": "addr1xxzc8pt7fgf0lc0x7eq6z7z6puhsxmzktna7dluahrj6g6...", "stake_credential": "8583857e4a12ffe1e6f641a1785a0f2f036c565cfbe6ff9db8e5a469", - "name": "CC Treasury", "status": "active", "publish_tx_hash": "abc123...", - "publish_time": 1704067200, + "publish_time": { "unix": 1704067200, "iso": "2024-01-01T00:00:00Z" }, "initialized_tx_hash": "def456...", - "initialized_at": 1704067300, + "initialized_at": { "unix": 1704067300, "iso": "2024-01-01T00:01:40Z" }, "permissions": { ... 
}, "statistics": { - "vendor_contract_count": 10, - "active_contracts": 8, - "completed_contracts": 2, - "cancelled_contracts": 0, + "project_count": 42, + "active_contracts": 35, + "completed_contracts": 6, + "cancelled_contracts": 1, "total_events": 45, "utxo_count": 12, - "last_event_time": 1704153600 + "last_event_time": { "unix": 1704153600, "iso": "2024-01-02T00:00:00Z" } }, "financials": { - "balance_lovelace": 264568247000000, - "balance_ada": 264568247.0 + "balance_lovelace": 264568247000000 }, "created_at": "2024-01-01T00:00:00Z", "updated_at": "2024-01-15T12:00:00Z" @@ -156,7 +168,6 @@ Get all unspent UTXOs at the treasury contract address. "address": "addr1x...", "address_type": "treasury", "lovelace_amount": 100000000000, - "ada_amount": 100000.0, "slot": 163964156, "block_number": 12296746 } @@ -178,11 +189,82 @@ Get treasury-level events (publish, initialize, sweep, reorganize). --- -### Vendor Contracts +### Vendor Contract (singleton PSSC) -#### `GET /api/v1/vendor-contracts` +#### `GET /api/v1/vendor-contract` -List all vendor contracts (projects) with pagination and filtering. +Get the shared vendor contract — the singleton on-chain script address every project sits at, plus a quick rollup of the projects bound to it. + +**Response:** +```json +{ + "data": { + "address": "addr1x...", + "stake_credential": "8583857e...", + "projects": { + "total": 42, + "by_status": { "active": 35, "completed": 6, "cancelled": 1 } + } + }, + "meta": { ... } +} +``` + +**Errors:** +- `404 Not Found` - Vendor contract not yet known (first fund event has not been processed) + +--- + +#### `GET /api/v1/vendor-contract/utxos` + +List currently-unspent UTxOs at the shared vendor contract, each row labeled with its owning project. Lets you enumerate every live PSSC output in one call instead of fanning out across every project. 
+ +"Currently unspent" is sourced from `yaci_store.address_utxo` with an anti-join against `yaci_store.tx_input` (same approach as `/projects/:id/utxos` and `/treasury/utxos`). + +**Query Parameters:** + +| Parameter | Type | Default | Description | +|-----------|------|---------|-------------| +| `page` | integer | 1 | Page number (1-indexed) | +| `limit` | integer | 50 | Results per page (max: 100) | + +**Example:** +```bash +curl "http://localhost:8080/api/v1/vendor-contract/utxos?limit=10" +``` + +**Response:** +```json +{ + "data": [ + { + "tx_hash": "cb923b75...", + "output_index": 0, + "address": "addr1x...", + "lovelace_amount": 79500000000, + "slot": 186056809, + "block_number": 13361422, + "project_db_id": 8, + "project_id": "EG-0001-25", + "project_name": "AdaStat.net Cardano blockchain explorer", + "project_status": "active" + } + ], + "pagination": { "page": 1, "limit": 10, "total_count": 33, "has_next": true }, + "meta": { ... } +} +``` + +**Errors:** +- `404 Not Found` - Vendor contract not yet known (first fund event has not been processed) + +--- + +### Projects + +#### `GET /api/v1/projects` + +List all projects (one per `fund` event) with pagination and filtering. **Query Parameters:** @@ -191,7 +273,7 @@ List all vendor contracts (projects) with pagination and filtering. 
| `page` | integer | 1 | Page number (1-indexed) | | `limit` | integer | 50 | Results per page (max: 100) | | `status` | string | - | Filter by status: `active`, `paused`, `completed`, `cancelled` | -| `search` | string | - | Search in project_id, project_name, description, vendor_name | +| `search` | string | - | Search in project_id, project_name, description | | `sort` | string | `fund_time` | Sort field: `fund_time`, `project_id`, `project_name`, `initial_amount` | | `order` | string | `desc` | Sort order: `asc`, `desc` | | `from_time` | integer | - | Filter by fund time (Unix timestamp, from) | @@ -199,7 +281,7 @@ List all vendor contracts (projects) with pagination and filtering. **Example:** ```bash -curl "http://localhost:8080/api/v1/vendor-contracts?status=active&search=community&limit=10" +curl "http://localhost:8080/api/v1/projects?status=active&search=community&limit=10" ``` **Response:** @@ -211,36 +293,30 @@ curl "http://localhost:8080/api/v1/vendor-contracts?status=active&search=communi "project_id": "EC-0008-25", "project_name": "Community Hub Development", "description": "Building decentralized community infrastructure", - "vendor_name": "Acme Blockchain Solutions", "vendor_address": "addr1q...", - "contract_url": "https://...", "contract_address": "addr1x...", "status": "active", "fund_tx_hash": "abc123...", - "fund_time": 1704067200, + "fund_time": { "unix": 1704067200, "iso": "2024-01-01T00:00:00Z" }, "initial_amount_lovelace": 1000000000000, - "initial_amount_ada": 1000000.0, "milestones_summary": { "total": 5, "pending": 2, "completed": 2, - "disbursed": 1 + "withdrawn": 1, + "paused": 0 }, "financials": { "total_allocated_lovelace": 1000000000000, - "total_allocated_ada": 1000000.0, - "total_disbursed_lovelace": 400000000000, - "total_disbursed_ada": 400000.0, + "total_withdrawn_lovelace": 400000000000, "current_balance_lovelace": 600000000000, - "current_balance_ada": 600000.0, - "disbursement_percentage": 40.0, + "withdrawal_percentage": 
40.0, "utxo_count": 3 }, "treasury": { - "contract_instance": "9e65e4ed...", - "name": "CC Treasury" + "contract_instance": "9e65e4ed..." }, - "last_event_time": 1704153600, + "last_event_time": { "unix": 1704153600, "iso": "2024-01-02T00:00:00Z" }, "event_count": 8 } ], @@ -254,9 +330,9 @@ curl "http://localhost:8080/api/v1/vendor-contracts?status=active&search=communi } ``` -#### `GET /api/v1/vendor-contracts/:project_id` +#### `GET /api/v1/projects/:project_id` -Get detailed information about a specific vendor contract. +Get detailed information about a specific project. **Path Parameters:** @@ -266,14 +342,23 @@ Get detailed information about a specific vendor contract. **Response:** Same as list item but with additional fields: - `other_identifiers`: Related project IDs +- `vendor_payment_key_hash`: Vendor payment key hash from inline datum +- `current_utxos`: Array of `{ tx_hash, output_index, lovelace_amount, slot }` for the project's currently-unspent outputs at the vendor contract. Empty when fully withdrawn. Sum equals `financials.current_balance_lovelace`. - `created_at`, `updated_at`: Timestamps **Errors:** -- `404 Not Found` - Vendor contract not found +- `404 Not Found` - Project not found + +#### `GET /api/v1/projects/:project_id/milestones` -#### `GET /api/v1/vendor-contracts/:project_id/milestones` +Get all (non-archived) milestones for a specific project. Paginated. -Get all milestones for a specific project. +**Query Parameters:** + +| Parameter | Type | Default | Description | +|-----------|------|---------|-------------| +| `page` | integer | 1 | Page number | +| `limit` | integer | 50 | Results per page (max: 100) | **Response:** ```json @@ -287,31 +372,38 @@ Get all milestones for a specific project. 
"description": "Complete market research and requirements gathering", "acceptance_criteria": "Deliver research report", "amount_lovelace": 200000000000, - "amount_ada": 200000.0, - "status": "disbursed", + "time_limit": 1704240000000, + "withdrawn": true, + "evidence_provided": true, + "paused": false, + "archived": false, "completion": { "tx_hash": "abc123...", - "time": 1704067200, + "time": { "unix": 1704067200, "iso": "2024-01-01T00:00:00Z" }, "description": "Research completed successfully", "evidence": [...] }, - "disbursement": { + "withdrawal": { "tx_hash": "def456...", - "time": 1704153600, - "amount_lovelace": 200000000000, - "amount_ada": 200000.0 + "time": { "unix": 1704153600, "iso": "2024-01-02T00:00:00Z" }, + "amount_lovelace": 200000000000 }, + "archive_info": null, + "pause_history": null, "project": { "project_id": "EC-0008-25", "project_name": "Community Hub Development" } } ], + "pagination": { ... }, "meta": { ... } } ``` -#### `GET /api/v1/vendor-contracts/:project_id/events` +`pause_history` is non-null when at least one pause/resume event has been recorded for the milestone. It carries `currently_paused`, `last_pause_tx_hash` / `last_pause_time` and `last_resume_tx_hash` / `last_resume_time`. + +#### `GET /api/v1/projects/:project_id/events` Get event history for a specific project. @@ -323,9 +415,9 @@ Get event history for a specific project. | `limit` | integer | 50 | Results per page | | `type` | string | - | Filter by event type | -#### `GET /api/v1/vendor-contracts/:project_id/utxos` +#### `GET /api/v1/projects/:project_id/utxos` -Get current (unspent) UTXOs for a specific project. +Get current (unspent) UTXOs for a specific project. Paginated. --- @@ -341,13 +433,27 @@ List all milestones across all projects. 
|-----------|------|---------|-------------| | `page` | integer | 1 | Page number | | `limit` | integer | 50 | Results per page | -| `status` | string | - | Filter by status: `pending`, `completed`, `disbursed` | +| `withdrawn` | boolean | - | Filter by withdrawn status | +| `evidence_provided` | boolean | - | Filter by evidence provided status | +| `archived` | boolean | false | Filter by archived status (defaults to false) | | `project_id` | string | - | Filter by project ID | -| `sort` | string | - | Sort field: `milestone_order`, `complete_time`, `disburse_time`, `amount` | +| `sort` | string | - | Sort field: `milestone_order`, `complete_time`, `withdraw_time`, `amount` | +| `from_time` | integer | - | Filter by milestone time (Unix timestamp, from). Matches whichever of `complete_time` or `withdraw_time` is set on the row. | +| `to_time` | integer | - | Filter by milestone time (Unix timestamp, to). | + +#### `GET /api/v1/milestones/:project_id` + +List milestones for a specific project (paginated). Convenience endpoint mirroring `/api/v1/projects/{project_id}/milestones`, served under the `/milestones/` root. + +**Path Parameters:** + +| Parameter | Type | Description | +|-----------|------|-------------| +| `project_id` | string | Project identifier (e.g., "EC-0008-25") | -#### `GET /api/v1/milestones/:id` +#### `GET /api/v1/milestones/by-id/:id` -Get a specific milestone by database ID. +Get a specific milestone by integer database ID. The integer ID is rarely useful to clients; prefer the project-scoped lookup above. **Path Parameters:** @@ -373,6 +479,7 @@ List all events with full context. 
| `project_id` | string | - | Filter by project ID | | `from_time` | integer | - | Filter by time (Unix timestamp, from) | | `to_time` | integer | - | Filter by time (Unix timestamp, to) | +| `q` | string | - | Full-text search across `reason`, `destination`, and raw `metadata` (case-insensitive substring) | **Response:** ```json @@ -383,20 +490,17 @@ List all events with full context. "tx_hash": "abc123...", "slot": 163964156, "block_number": 12296746, - "block_time": 1704067200, + "block_time": { "unix": 1704067200, "iso": "2024-01-01T00:00:00Z" }, "event_type": "fund", "amount_lovelace": 1000000000000, - "amount_ada": 1000000.0, "reason": null, "destination": null, "treasury": { - "contract_instance": "9e65e4ed...", - "name": "CC Treasury" + "contract_instance": "9e65e4ed..." }, "project": { "project_id": "EC-0008-25", "project_name": "Community Hub Development", - "vendor_name": "Acme Blockchain Solutions", "contract_address": "addr1x..." }, "milestone": null, @@ -409,6 +513,8 @@ List all events with full context. } ``` +`destination` is a JSONB `{label, details}` object preserved as-is from the TOM metadata; populated on `disburse` events only. + #### `GET /api/v1/events/recent` Get recent events for activity feeds. @@ -445,51 +551,58 @@ Get comprehensive statistics across all data. 
"data": { "treasury": { "total_count": 1, - "active_count": 1 + "active_count": 1, + "disbursed_count": 3 + }, + "vendor_contracts": { + "total_count": 1, + "address": "addr1x...", + "project_count": 42, + "utxo_history_count": 1235, + "unspent_utxo_count": 449, + "current_balance_lovelace": 600000000000 }, "projects": { - "total_count": 10, - "active_count": 8, - "completed_count": 2, + "total_count": 42, + "active_count": 35, + "completed_count": 6, "paused_count": 0, - "cancelled_count": 0 + "cancelled_count": 1 }, "milestones": { - "total_count": 50, - "pending_count": 20, - "completed_count": 15, - "disbursed_count": 15 + "total_count": 364, + "pending_count": 100, + "completed_count": 60, + "withdrawn_count": 204 }, "events": { - "total_count": 45, + "on_chain_count": 411, + "processed_count": 411, "by_type": { - "fund": 10, - "complete": 15, - "disburse": 15, - "publish": 1, - "initialize": 1, - "pause": 2, - "resume": 1 + "fund": 42, + "complete": 189, + "withdraw": 129, + "pause": 63, + "resume": 32 } }, "financials": { "total_allocated_lovelace": 5000000000000, - "total_allocated_ada": 5000000.0, - "total_disbursed_lovelace": 2000000000000, - "total_disbursed_ada": 2000000.0, - "current_balance_lovelace": 3000000000000, - "current_balance_ada": 3000000.0 + "total_withdrawn_lovelace": 2000000000000, + "current_balance_lovelace": 3000000000000 }, "sync": { "last_slot": 163964156, "last_block": 12296746, - "last_updated": "2024-01-15T12:00:00Z" + "last_updated": "2026-05-01T08:11:40Z" } }, "meta": { ... } } ``` +`vendor_contracts` is the singleton-PSSC rollup (see `GET /api/v1/vendor-contract`); `projects` counts rows in `treasury.projects`. 
+ --- ## Event Types @@ -501,9 +614,9 @@ The API tracks the following Treasury Oversight Metadata (TOM) events: | `publish` | Publish a treasury contract | | `initialize` | Initialize a treasury contract | | `fund` | Fund a vendor contract from treasury | -| `complete` | Mark a milestone as complete | -| `disburse` | Disburse funds for a completed milestone | -| `withdraw` | Withdraw funds | +| `complete` | Submit evidence of milestone completion | +| `disburse` | Disburse funds from treasury (treasury-level) | +| `withdraw` | Vendor withdraws matured milestone funds (milestone-level) | | `pause` | Pause a contract | | `resume` | Resume a paused contract | | `modify` | Modify contract parameters | @@ -511,6 +624,58 @@ The API tracks the following Treasury Oversight Metadata (TOM) events: | `sweep` | Sweep remaining funds | | `reorganize` | Reorganize treasury funds | +For per-event field mappings (which JSON path becomes which DB column) see +[`docs/event-processing.md`](../docs/event-processing.md). For the catalog of +known data-quality holes (NULL fields, on-chain inconsistencies, sync-loop +quirks) see [`docs/known-issues.md`](../docs/known-issues.md). + +--- + +## Event Processing Pipeline + +The API runs a background sync task (`api/src/services/sync.rs::run_sync_loop`) +that drives event ingestion. The pipeline has four stages: + +1. **Pre-fetch UTXOs** — `EventProcessor::pre_fetch_utxos` + (`api/src/services/event_processor.rs`) batches the tx_hashes of + pending TOM events and copies their outputs and inputs from + `yaci_store.address_utxo` into `treasury.utxo_history`. This is a + defensive backstop on top of the Postgres triggers (`install_utxo_history_triggers` + in `api/src/services/sync.rs`) that capture every script-address UTXO + into `treasury.utxo_history` synchronously with YACI Store's INSERT. +2. **Dispatch** — `process_event` (`event_processor.rs`) reads + `body.body.event` and delegates to a per-event handler. 
Treasury-level + events (`publish`, `initialize`, `disburse`, `sweep`, `reorganize`) write + to `treasury_contracts` + `events`; project-level events write to + `projects` + `milestones` + `events`. +3. **Project resolution** — milestone-level events + (`complete`/`withdraw`/`pause`/`resume`) take their `project_db_id` + from `body.identifier` when present, otherwise from + `find_project_from_inputs`, which traces input UTXOs back to the seed + planted by the project's `fund` event. When multiple project chains + feed a single tx (sibling-project fee inputs, etc.) the trace + disambiguates by scoring candidate projects against the metadata's + milestone keys (`collect_milestone_id_hints`). +4. **Insert** — `insert_event_full` writes one row per `tx_hash` into + `treasury.events` with `ON CONFLICT (tx_hash) DO UPDATE`, preserving + idempotency. Events are recorded even when the chain trace fails + (`project_db_id IS NULL`) so nothing is silently dropped. + +In addition to the incremental loop, a separate `tokio::spawn` task runs +`sync_all_events` every 10 minutes as an idempotent backfill — any event +that wedged the incremental loop (e.g. a postgres restart mid-batch) is +recovered by the next full re-sync via the `ON CONFLICT DO UPDATE` chain. +See `KI-SY-02` in `docs/known-issues.md`. + +Datum parsing (milestone amounts, time limits, paused flags, vendor payment +key hash) lives in `api/src/parsers/datum.rs`; address parsing +(stake-credential extraction from bech32) lives in +`api/src/parsers/address.rs`. + +For the SQL queries that surface where this pipeline produces NULLs in +practice, see the repro queries in +[`docs/known-issues.md`](../docs/known-issues.md). 
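The disambiguation in step 3 reduces to scoring each candidate project by how many of its milestone IDs appear among the metadata hints, and accepting only a strict winner. A simplified sketch under assumed types; the real `find_project_from_inputs` and `collect_milestone_id_hints` operate on UTXO rows and the parsed TOM body, not these toy slices:

```rust
/// Pick the candidate project whose milestone IDs best match the hints
/// extracted from the event metadata. Returns None on a tie or zero
/// overlap, mirroring the pipeline's "record with project_db_id = NULL"
/// fallback instead of guessing.
fn disambiguate(candidates: &[(i32, Vec<&str>)], hints: &[&str]) -> Option<i32> {
    // Score each (project_db_id, milestone ids) candidate by hint overlap.
    let mut scored: Vec<(usize, i32)> = candidates
        .iter()
        .map(|(id, milestones)| {
            let score = milestones.iter().filter(|m| hints.contains(m)).count();
            (score, *id)
        })
        .collect();
    scored.sort_by(|a, b| b.0.cmp(&a.0)); // highest score first
    match scored.as_slice() {
        // Accept only a strictly best, non-zero score.
        [(best, id), rest @ ..] if *best > 0 && rest.iter().all(|(s, _)| s < best) => Some(*id),
        _ => None,
    }
}

fn main() {
    let candidates = vec![(1, vec!["M1", "M2"]), (2, vec!["M3"])];
    assert_eq!(disambiguate(&candidates, &["M3"]), Some(2));
    // No hints at all: refuse to guess.
    assert_eq!(disambiguate(&candidates, &[]), None);
    println!("resolved");
}
```

A tie between sibling projects also yields `None`, matching the pipeline's policy of recording the event with a NULL project rather than silently dropping or misattributing it.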
+
+---
+
## Error Responses

@@ -580,10 +745,11 @@ The API queries the `treasury` schema:

| Table | Description |
|-------|-------------|
| `treasury.treasury_contracts` | Treasury reserve contracts (TRSC) |
-| `treasury.vendor_contracts` | Vendor/project contracts (PSSC) |
-| `treasury.milestones` | Project milestones |
-| `treasury.events` | All TOM event audit log |
-| `treasury.utxos` | UTXO tracking for event linking |
+| `treasury.vendor_contracts` | Singleton row for the shared PSSC script address |
+| `treasury.projects` | One row per `fund` event (42 to date) |
+| `treasury.milestones` | Project milestones; FK to `projects.id` via `project_db_id` |
+| `treasury.events` | All TOM event audit log; FK to `projects.id` via `project_db_id`; `destination` is JSONB |
+| `treasury.utxo_history` | Persistent UTXO history (Postgres-trigger captured) for chain trace + datum cache |
| `treasury.sync_status` | Sync progress tracking |

### Views

@@ -591,7 +757,8 @@ The API queries the `treasury` schema:

| View | Description |
|------|-------------|
| `v_treasury_summary` | Treasury with statistics and financials |
-| `v_vendor_contracts_summary` | Projects with milestone counts and financials |
+| `v_projects_summary` | Projects with milestone counts and financials |
| `v_events_with_context` | Events with treasury/project/milestone context |
-| `v_financial_summary` | Allocated vs disbursed vs remaining |
+| `v_recent_events` | Events with context, ordered by slot DESC |
+| `v_financial_summary` | Allocated vs withdrawn vs remaining |
| `v_milestone_timeline` | Milestones with project context |
diff --git a/api/src/errors.rs b/api/src/errors.rs
new file mode 100644
index 0000000..a9d3da8
--- /dev/null
+++ b/api/src/errors.rs
@@ -0,0 +1,95 @@
+//! Structured JSON error responses
+//!
+//! Every non-2xx response shares the shape:
+//!
+//! ```json
+//! {
+//!   "error": { "code": "", "message": "", "details": {...}? },
+//!   "meta": { "timestamp": "" }
+//! }
+//! ```
+//!
+//! Handlers return `Result<T, ApiError>`. `ApiError`'s `IntoResponse` impl
+//! converts each variant into the appropriate HTTP status + JSON body.
+
+use axum::http::StatusCode;
+use axum::response::{IntoResponse, Json, Response};
+use serde::{Deserialize, Serialize};
+use utoipa::ToSchema;
+
+use crate::models::v1::ResponseMeta;
+
+#[derive(Debug, Serialize, Deserialize, ToSchema)]
+pub struct ApiErrorBody {
+    pub error: ApiErrorDetail,
+    pub meta: ResponseMeta,
+}
+
+#[derive(Debug, Serialize, Deserialize, ToSchema)]
+pub struct ApiErrorDetail {
+    /// Short machine-readable code, e.g. `not_found`, `bad_request`, `internal`.
+    pub code: String,
+    /// Human-readable message.
+    pub message: String,
+    /// Optional structured detail (validation errors, internal context).
+    #[serde(skip_serializing_if = "Option::is_none")]
+    pub details: Option<serde_json::Value>,
+}
+
+/// Error type returned by every v1 handler.
+#[derive(Debug)]
+#[allow(dead_code)]
+pub enum ApiError {
+    NotFound(String),
+    BadRequest(String),
+    Internal(String),
+    Database(sqlx::Error),
+}
+
+impl ApiError {
+    fn status_and_code(&self) -> (StatusCode, &'static str) {
+        match self {
+            ApiError::NotFound(_) => (StatusCode::NOT_FOUND, "not_found"),
+            ApiError::BadRequest(_) => (StatusCode::BAD_REQUEST, "bad_request"),
+            ApiError::Internal(_) => (StatusCode::INTERNAL_SERVER_ERROR, "internal"),
+            ApiError::Database(_) => (StatusCode::INTERNAL_SERVER_ERROR, "internal"),
+        }
+    }
+
+    fn message(&self) -> String {
+        match self {
+            ApiError::NotFound(m) | ApiError::BadRequest(m) | ApiError::Internal(m) => m.clone(),
+            ApiError::Database(_) => "database error".to_string(),
+        }
+    }
+}
+
+impl From<sqlx::Error> for ApiError {
+    fn from(e: sqlx::Error) -> Self {
+        // Log the full error server-side; clients only see "database error".
+        tracing::error!("Database error: {}", e);
+        ApiError::Database(e)
+    }
+}
+
+impl From<anyhow::Error> for ApiError {
+    fn from(e: anyhow::Error) -> Self {
+        tracing::error!("Internal error: {:?}", e);
+        ApiError::Internal(e.to_string())
+    }
+}
+
+impl IntoResponse for ApiError {
+    fn into_response(self) -> Response {
+        let (status, code) = self.status_and_code();
+        let body = ApiErrorBody {
+            error: ApiErrorDetail {
+                code: code.to_string(),
+                message: self.message(),
+                details: None,
+            },
+            meta: ResponseMeta::default(),
+        };
+        (status, Json(body)).into_response()
+    }
+}
diff --git a/api/src/main.rs b/api/src/main.rs
index 9ace2bb..ea079c0 100644
--- a/api/src/main.rs
+++ b/api/src/main.rs
@@ -17,8 +17,10 @@ fn env_u16(key: &str, default: u16) -> u16 {
         .unwrap_or(default)
 }
 
+mod errors;
 mod models;
 mod openapi;
+mod parsers;
 mod routes;
 mod services;
diff --git a/api/src/models/mod.rs b/api/src/models/mod.rs
index 6125027..0040833 100644
--- a/api/src/models/mod.rs
+++ b/api/src/models/mod.rs
@@ -2,4 +2,5 @@
 //!
 //! All models are in the v1 namespace.
 
+pub mod time;
 pub mod v1;
diff --git a/api/src/models/time.rs b/api/src/models/time.rs
new file mode 100644
index 0000000..3a3bd05
--- /dev/null
+++ b/api/src/models/time.rs
@@ -0,0 +1,30 @@
+//! Chain timestamp helpers
+//!
+//! On-chain block times are returned as a paired `{unix, iso}` object so
+//! clients don't have to choose between Unix-int sortability and
+//! ISO-8601 readability. Server-side timestamps (`created_at`, etc.) keep
+//! their plain `DateTime<Utc>` representation.
+
+use chrono::{DateTime, TimeZone, Utc};
+use serde::{Deserialize, Serialize};
+use utoipa::ToSchema;
+
+/// On-chain block timestamp, paired as Unix seconds and ISO 8601 UTC.
+#[derive(Debug, Clone, Copy, Serialize, Deserialize, ToSchema)]
+pub struct ChainTime {
+    /// POSIX seconds since 1970-01-01.
+    pub unix: i64,
+    /// ISO 8601 UTC string for display.
+    pub iso: DateTime<Utc>,
+}
+
+impl ChainTime {
+    pub fn from_secs(secs: i64) -> Self {
+        let iso = Utc.timestamp_opt(secs, 0).single().unwrap_or_else(|| Utc.timestamp_opt(0, 0).unwrap());
+        Self { unix: secs, iso }
+    }
+
+    pub fn maybe_from_secs(opt: Option<i64>) -> Option<Self> {
+        opt.map(Self::from_secs)
+    }
+}
diff --git a/api/src/models/v1.rs b/api/src/models/v1.rs
index adea57b..30fd10c 100644
--- a/api/src/models/v1.rs
+++ b/api/src/models/v1.rs
@@ -1,26 +1,18 @@
 //! V1 API Models with OpenAPI support
 //!
 //! These models follow the new API design with:
-//! - Both lovelace AND ADA amounts in responses
+//! - Amounts in lovelace (1 ADA = 1,000,000 lovelace) — single source of truth
+//! - On-chain block times paired as `{unix, iso}` via [`crate::models::time::ChainTime`]
 //! - Raw metadata AND parsed/normalized data
 //! - Consistent response envelopes with pagination
+//! - Structured error envelope via [`crate::errors::ApiErrorBody`]
 
 use chrono::{DateTime, Utc};
 use serde::{Deserialize, Serialize};
 use sqlx::FromRow;
 use utoipa::{IntoParams, ToSchema};
 
-// ============================================================================
-// CONSTANTS
-// ============================================================================
-
-/// Lovelace per ADA
-pub const LOVELACE_PER_ADA: f64 = 1_000_000.0;
-
-/// Convert lovelace to ADA
-pub fn lovelace_to_ada(lovelace: i64) -> f64 {
-    lovelace as f64 / LOVELACE_PER_ADA
-}
+use crate::models::time::ChainTime;
 
 // ============================================================================
 // RESPONSE ENVELOPE
 // ============================================================================
@@ -110,23 +102,94 @@
 // STATUS & HEALTH
 // ============================================================================
 
-/// API status response
+/// API status response.
+///
+/// Three time domains are surfaced separately:
+///
+/// - `database.checked_at` — server-side; when the response was generated.
+/// - `sync.heartbeat` — server-side; last time the API's TOM-sync loop ran.
+///   Bumps every poll regardless of whether new events arrived.
+/// - `sync.last_event_processed` — on-chain; block time of the most recent
+///   TOM event the API has written into `treasury.events`.
+/// - `chain.indexer_time` — on-chain; block time of the most recent block
+///   YACI Store has ingested. Tells you whether YACI is at tip.
 #[derive(Debug, Serialize, Deserialize, ToSchema)]
 pub struct StatusResponse {
     /// API version
     pub api_version: String,
-    /// Database connection status
-    pub database_connected: bool,
-    /// Last sync slot
-    pub last_sync_slot: Option<i64>,
-    /// Last sync block
-    pub last_sync_block: Option<i64>,
-    /// Last sync time (Unix timestamp)
-    pub last_sync_time: Option<i64>,
-    /// Total events processed
-    pub total_events: i64,
-    /// Total vendor contracts
-    pub total_vendor_contracts: i64,
+    /// Database health
+    pub database: DatabaseStatus,
+    /// API-side sync state
+    pub sync: SyncStatusBlock,
+    /// On-chain indexer state
+    pub chain: ChainStatus,
+    /// Top-level counts
+    pub totals: TotalsBlock,
+}
+
+/// Database health subsection
+#[derive(Debug, Serialize, Deserialize, ToSchema)]
+pub struct DatabaseStatus {
+    /// Whether the API can talk to Postgres
+    pub connected: bool,
+    /// When this status response was generated (server-side, ISO 8601)
+    pub checked_at: DateTime<Utc>,
+}
+
+/// API-side sync subsection
+#[derive(Debug, Serialize, Deserialize, ToSchema)]
+pub struct SyncStatusBlock {
+    /// Server-side timestamp of the last TOM-sync poll. Bumps every 15 s
+    /// regardless of whether new events arrived (KI-SY-01).
+    pub heartbeat: Option<DateTime<Utc>>,
+    /// On-chain block time of the most recent TOM event the API has
+    /// processed into `treasury.events`. Null if no events processed yet.
+    pub last_event_processed: Option<ChainTime>,
+}
+
+/// On-chain indexer subsection
+#[derive(Debug, Serialize, Deserialize, ToSchema)]
+pub struct ChainStatus {
+    /// Block number the YACI Store indexer has reached (most recent block).
+    pub indexer_block: Option<i64>,
+    /// Slot the indexer has reached.
+    pub indexer_slot: Option<i64>,
+    /// On-chain block time of the indexer's most recent block.
+    pub indexer_time: Option<ChainTime>,
+}
+
+/// Top-level row counts
+#[derive(Debug, Serialize, Deserialize, ToSchema)]
+pub struct TotalsBlock {
+    pub events: i64,
+    pub projects: i64,
+    /// Count of `treasury.events` rows by `event_type`.
+    pub events_by_type: std::collections::HashMap<String, i64>,
+}
+
+// ============================================================================
+// VENDOR CONTRACT (singleton — the shared PSSC)
+// ============================================================================
+
+/// Response for `/api/v1/vendor-contract` — the *one* shared PSSC script
+/// address every project sits at, plus a quick rollup of the projects.
+#[derive(Debug, Serialize, Deserialize, ToSchema)]
+pub struct VendorContractResponse {
+    /// Shared PSSC script address (`addr1x...`).
+    pub address: String,
+    /// Stake credential portion of the address.
+    pub stake_credential: Option<String>,
+    /// Project rollup at this vendor contract.
+    pub projects: VendorContractProjectsBlock,
+}
+
+/// Project rollup nested inside `VendorContractResponse`.
+#[derive(Debug, Serialize, Deserialize, ToSchema)]
+pub struct VendorContractProjectsBlock {
+    /// Total projects.
+    pub total: i64,
+    /// Counts keyed by `status` (`active`, `paused`, `completed`, `cancelled`).
+ pub by_status: std::collections::HashMap, } // ============================================================================ @@ -144,18 +207,16 @@ pub struct TreasuryResponse { pub contract_address: Option, /// Stake credential pub stake_credential: Option, - /// Human-readable name - pub name: Option, /// Contract status (active/paused) pub status: Option, /// Publish transaction hash pub publish_tx_hash: Option, - /// Publish time (Unix timestamp) - pub publish_time: Option, + /// On-chain publish time (`{unix, iso}`) + pub publish_time: Option, /// Initialize transaction hash pub initialized_tx_hash: Option, - /// Initialize time (Unix timestamp) - pub initialized_at: Option, + /// On-chain initialize time (`{unix, iso}`) + pub initialized_at: Option, /// Permission rules pub permissions: Option, /// Statistics @@ -171,20 +232,20 @@ pub struct TreasuryResponse { /// Treasury statistics #[derive(Debug, Serialize, Deserialize, ToSchema)] pub struct TreasuryStatistics { - /// Total vendor contracts - pub vendor_contract_count: i64, - /// Active vendor contracts + /// Total projects + pub project_count: i64, + /// Active projects pub active_contracts: i64, - /// Completed vendor contracts + /// Completed projects pub completed_contracts: i64, - /// Cancelled vendor contracts + /// Cancelled projects pub cancelled_contracts: i64, /// Total events pub total_events: i64, /// Current UTXO count pub utxo_count: i64, - /// Last event time (Unix timestamp) - pub last_event_time: Option, + /// Last event time (`{unix, iso}`) + pub last_event_time: Option, } /// Treasury financial summary @@ -192,8 +253,6 @@ pub struct TreasuryStatistics { pub struct TreasuryFinancials { /// Treasury balance in lovelace pub balance_lovelace: i64, - /// Treasury balance in ADA - pub balance_ada: f64, } /// Database row for treasury summary @@ -203,14 +262,13 @@ pub struct TreasurySummaryRow { pub contract_instance: String, pub contract_address: Option, pub stake_credential: Option, - pub 
name: Option, pub status: Option, pub publish_tx_hash: Option, pub publish_time: Option, pub initialized_tx_hash: Option, pub initialized_at: Option, pub permissions: Option, - pub vendor_contract_count: Option, + pub project_count: Option, pub active_contracts: Option, pub completed_contracts: Option, pub cancelled_contracts: Option, @@ -230,25 +288,23 @@ impl From for TreasuryResponse { contract_instance: row.contract_instance, contract_address: row.contract_address, stake_credential: row.stake_credential, - name: row.name, status: row.status, publish_tx_hash: row.publish_tx_hash, - publish_time: row.publish_time, + publish_time: ChainTime::maybe_from_secs(row.publish_time), initialized_tx_hash: row.initialized_tx_hash, - initialized_at: row.initialized_at, + initialized_at: ChainTime::maybe_from_secs(row.initialized_at), permissions: row.permissions, statistics: TreasuryStatistics { - vendor_contract_count: row.vendor_contract_count.unwrap_or(0), + project_count: row.project_count.unwrap_or(0), active_contracts: row.active_contracts.unwrap_or(0), completed_contracts: row.completed_contracts.unwrap_or(0), cancelled_contracts: row.cancelled_contracts.unwrap_or(0), total_events: row.total_events.unwrap_or(0), utxo_count: row.utxo_count.unwrap_or(0), - last_event_time: row.last_event_time, + last_event_time: ChainTime::maybe_from_secs(row.last_event_time), }, financials: TreasuryFinancials { balance_lovelace: balance, - balance_ada: lovelace_to_ada(balance), }, created_at: row.created_at, updated_at: row.updated_at, @@ -262,7 +318,7 @@ impl From for TreasuryResponse { /// Vendor contract (project) summary #[derive(Debug, Serialize, Deserialize, ToSchema)] -pub struct VendorContractSummary { +pub struct ProjectSummary { /// Internal database ID pub id: i32, /// Logical project identifier (e.g., "EC-0008-25") @@ -271,39 +327,33 @@ pub struct VendorContractSummary { pub project_name: Option, /// Project description pub description: Option, - /// Vendor name - pub 
vendor_name: Option, /// Vendor payment address pub vendor_address: Option, - /// Contract URL (link to agreement) - pub contract_url: Option, /// PSSC script address pub contract_address: Option, /// Contract status (active/paused/completed/cancelled) pub status: Option, /// Fund transaction hash pub fund_tx_hash: String, - /// Fund time (Unix timestamp) - pub fund_time: Option, + /// On-chain fund time (`{unix, iso}`) + pub fund_time: Option, /// Initial allocated amount in lovelace pub initial_amount_lovelace: Option, - /// Initial allocated amount in ADA - pub initial_amount_ada: Option, /// Milestone summary pub milestones_summary: MilestonesSummary, /// Financial summary pub financials: VendorFinancials, /// Treasury reference pub treasury: TreasuryReference, - /// Last event time (Unix timestamp) - pub last_event_time: Option, + /// Last event time (`{unix, iso}`) + pub last_event_time: Option, /// Total event count pub event_count: Option, } /// Vendor contract detail (full response) #[derive(Debug, Serialize, Deserialize, ToSchema)] -pub struct VendorContractDetail { +pub struct ProjectDetail { /// Internal database ID pub id: i32, /// Logical project identifier (e.g., "EC-0008-25") @@ -314,34 +364,33 @@ pub struct VendorContractDetail { pub project_name: Option, /// Project description pub description: Option, - /// Vendor name - pub vendor_name: Option, /// Vendor payment address pub vendor_address: Option, - /// Contract URL (link to agreement) - pub contract_url: Option, + /// Vendor payment key hash from datum + pub vendor_payment_key_hash: Option, /// PSSC script address pub contract_address: Option, /// Contract status (active/paused/completed/cancelled) pub status: Option, /// Fund transaction hash pub fund_tx_hash: String, - /// Fund time (Unix timestamp) - pub fund_time: Option, + /// On-chain fund time (`{unix, iso}`) + pub fund_time: Option, /// Initial allocated amount in lovelace pub initial_amount_lovelace: Option, - /// Initial allocated 
amount in ADA - pub initial_amount_ada: Option, /// Milestone summary pub milestones_summary: MilestonesSummary, /// Financial summary pub financials: VendorFinancials, /// Treasury reference pub treasury: TreasuryReference, - /// Last event time (Unix timestamp) - pub last_event_time: Option, + /// Last event time (`{unix, iso}`) + pub last_event_time: Option, /// Total event count pub event_count: Option, + /// Currently-unspent UTxOs belonging to this project at the vendor contract. + /// Empty when the project has no live outputs (fully withdrawn or pre-fund). + pub current_utxos: Vec, /// Record created at pub created_at: Option>, /// Record updated at @@ -355,10 +404,12 @@ pub struct MilestonesSummary { pub total: i64, /// Pending milestones pub pending: i64, - /// Completed milestones (but not yet disbursed) + /// Completed milestones (evidence provided but not yet withdrawn) pub completed: i64, - /// Disbursed milestones - pub disbursed: i64, + /// Withdrawn milestones + pub withdrawn: i64, + /// Paused milestones + pub paused: i64, } /// Vendor contract financial summary @@ -366,18 +417,12 @@ pub struct MilestonesSummary { pub struct VendorFinancials { /// Total allocated amount in lovelace pub total_allocated_lovelace: i64, - /// Total allocated amount in ADA - pub total_allocated_ada: f64, - /// Total disbursed amount in lovelace - pub total_disbursed_lovelace: i64, - /// Total disbursed amount in ADA - pub total_disbursed_ada: f64, + /// Total withdrawn amount in lovelace + pub total_withdrawn_lovelace: i64, /// Current balance in lovelace (from UTXOs) pub current_balance_lovelace: i64, - /// Current balance in ADA - pub current_balance_ada: f64, - /// Disbursement percentage - pub disbursement_percentage: f64, + /// Withdrawal percentage + pub withdrawal_percentage: f64, /// UTXO count pub utxo_count: i64, } @@ -387,23 +432,33 @@ pub struct VendorFinancials { pub struct TreasuryReference { /// Contract instance identifier pub contract_instance: Option, 
- /// Treasury name - pub name: Option, +} + +/// Compact UTxO reference embedded on `ProjectDetail` to give clients the +/// project's currently-unspent outputs without a second round trip. +#[derive(Debug, Serialize, Deserialize, ToSchema, FromRow)] +pub struct ProjectCurrentUtxo { + /// Transaction hash + pub tx_hash: String, + /// Output index + pub output_index: i16, + /// Amount in lovelace + pub lovelace_amount: Option, + /// Creation slot + pub slot: Option, } /// Database row for vendor contract summary #[derive(Debug, FromRow)] #[allow(dead_code)] -pub struct VendorContractSummaryRow { +pub struct ProjectSummaryRow { pub id: i32, pub treasury_id: Option, pub project_id: String, pub other_identifiers: Option>, pub project_name: Option, pub description: Option, - pub vendor_name: Option, pub vendor_address: Option, - pub contract_url: Option, pub contract_address: Option, pub fund_tx_hash: String, pub fund_slot: Option, @@ -413,25 +468,25 @@ pub struct VendorContractSummaryRow { pub created_at: Option>, pub updated_at: Option>, pub treasury_instance: Option, - pub treasury_name: Option, pub total_milestones: Option, pub pending_milestones: Option, pub completed_milestones: Option, - pub disbursed_milestones: Option, - pub total_disbursed_lovelace: Option, + pub withdrawn_milestones: Option, + pub paused_milestones: Option, + pub total_withdrawn_lovelace: Option, pub current_balance_lovelace: Option, pub utxo_count: Option, pub last_event_time: Option, pub event_count: Option, } -impl From for VendorContractSummary { - fn from(row: VendorContractSummaryRow) -> Self { +impl From for ProjectSummary { + fn from(row: ProjectSummaryRow) -> Self { let initial_amount = row.initial_amount_lovelace.unwrap_or(0); - let total_disbursed = row.total_disbursed_lovelace.unwrap_or(0); + let total_withdrawn = row.total_withdrawn_lovelace.unwrap_or(0); let current_balance = row.current_balance_lovelace.unwrap_or(0); - let disbursement_pct = if initial_amount > 0 { - 
(total_disbursed as f64 / initial_amount as f64) * 100.0 + let withdrawal_pct = if initial_amount > 0 { + (total_withdrawn as f64 / initial_amount as f64) * 100.0 } else { 0.0 }; @@ -441,48 +496,42 @@ impl From for VendorContractSummary { project_id: row.project_id, project_name: row.project_name, description: row.description, - vendor_name: row.vendor_name, vendor_address: row.vendor_address, - contract_url: row.contract_url, contract_address: row.contract_address, status: row.status, fund_tx_hash: row.fund_tx_hash, - fund_time: row.fund_block_time, + fund_time: ChainTime::maybe_from_secs(row.fund_block_time), initial_amount_lovelace: row.initial_amount_lovelace, - initial_amount_ada: row.initial_amount_lovelace.map(lovelace_to_ada), milestones_summary: MilestonesSummary { total: row.total_milestones.unwrap_or(0), pending: row.pending_milestones.unwrap_or(0), completed: row.completed_milestones.unwrap_or(0), - disbursed: row.disbursed_milestones.unwrap_or(0), + withdrawn: row.withdrawn_milestones.unwrap_or(0), + paused: row.paused_milestones.unwrap_or(0), }, financials: VendorFinancials { total_allocated_lovelace: initial_amount, - total_allocated_ada: lovelace_to_ada(initial_amount), - total_disbursed_lovelace: total_disbursed, - total_disbursed_ada: lovelace_to_ada(total_disbursed), + total_withdrawn_lovelace: total_withdrawn, current_balance_lovelace: current_balance, - current_balance_ada: lovelace_to_ada(current_balance), - disbursement_percentage: disbursement_pct, + withdrawal_percentage: withdrawal_pct, utxo_count: row.utxo_count.unwrap_or(0), }, treasury: TreasuryReference { contract_instance: row.treasury_instance, - name: row.treasury_name, }, - last_event_time: row.last_event_time, + last_event_time: ChainTime::maybe_from_secs(row.last_event_time), event_count: row.event_count, } } } -impl From for VendorContractDetail { - fn from(row: VendorContractSummaryRow) -> Self { +impl From for ProjectDetail { + fn from(row: ProjectSummaryRow) -> Self { let 
initial_amount = row.initial_amount_lovelace.unwrap_or(0); - let total_disbursed = row.total_disbursed_lovelace.unwrap_or(0); + let total_withdrawn = row.total_withdrawn_lovelace.unwrap_or(0); let current_balance = row.current_balance_lovelace.unwrap_or(0); - let disbursement_pct = if initial_amount > 0 { - (total_disbursed as f64 / initial_amount as f64) * 100.0 + let withdrawal_pct = if initial_amount > 0 { + (total_withdrawn as f64 / initial_amount as f64) * 100.0 } else { 0.0 }; @@ -493,37 +542,33 @@ impl From for VendorContractDetail { other_identifiers: row.other_identifiers, project_name: row.project_name, description: row.description, - vendor_name: row.vendor_name, vendor_address: row.vendor_address, - contract_url: row.contract_url, + vendor_payment_key_hash: None, // populated from DB when queried directly contract_address: row.contract_address, status: row.status, fund_tx_hash: row.fund_tx_hash, - fund_time: row.fund_block_time, + fund_time: ChainTime::maybe_from_secs(row.fund_block_time), initial_amount_lovelace: row.initial_amount_lovelace, - initial_amount_ada: row.initial_amount_lovelace.map(lovelace_to_ada), milestones_summary: MilestonesSummary { total: row.total_milestones.unwrap_or(0), pending: row.pending_milestones.unwrap_or(0), completed: row.completed_milestones.unwrap_or(0), - disbursed: row.disbursed_milestones.unwrap_or(0), + withdrawn: row.withdrawn_milestones.unwrap_or(0), + paused: row.paused_milestones.unwrap_or(0), }, financials: VendorFinancials { total_allocated_lovelace: initial_amount, - total_allocated_ada: lovelace_to_ada(initial_amount), - total_disbursed_lovelace: total_disbursed, - total_disbursed_ada: lovelace_to_ada(total_disbursed), + total_withdrawn_lovelace: total_withdrawn, current_balance_lovelace: current_balance, - current_balance_ada: lovelace_to_ada(current_balance), - disbursement_percentage: disbursement_pct, + withdrawal_percentage: withdrawal_pct, utxo_count: row.utxo_count.unwrap_or(0), }, treasury: 
TreasuryReference { contract_instance: row.treasury_instance, - name: row.treasury_name, }, - last_event_time: row.last_event_time, + last_event_time: ChainTime::maybe_from_secs(row.last_event_time), event_count: row.event_count, + current_utxos: Vec::new(), // populated by handler after this conversion created_at: row.created_at, updated_at: row.updated_at, } @@ -551,42 +596,82 @@ pub struct MilestoneResponse { pub acceptance_criteria: Option, /// Allocated amount in lovelace pub amount_lovelace: Option, - /// Allocated amount in ADA - pub amount_ada: Option, - /// Milestone status (pending/completed/disbursed) - pub status: String, + /// Time limit (POSIXTime in milliseconds) + pub time_limit: Option, + /// Whether the vendor has withdrawn funds + pub withdrawn: bool, + /// Whether completion evidence has been provided + pub evidence_provided: bool, + /// Whether this milestone is currently paused (latest pause/resume state) + pub paused: bool, + /// Whether this milestone has been archived (replaced by a modify event) + pub archived: bool, /// Completion details pub completion: Option, - /// Disbursement details - pub disbursement: Option, + /// Withdrawal details + pub withdrawal: Option, + /// Archive info (present when archived) + pub archive_info: Option, + /// Pause/resume history (present when at least one pause event has been recorded) + pub pause_history: Option, /// Project reference pub project: ProjectReference, } +/// Pause/resume history for a milestone. +/// +/// Present when at least one pause OR resume event has been recorded for the +/// milestone. `currently_paused` reflects the milestone's current state (set +/// by the contract output datum). Either `last_pause_*` or `last_resume_*` +/// may be null if that side hasn't happened yet (e.g., a milestone that was +/// resumed but whose original pause predates our indexing). 
+#[derive(Debug, Serialize, Deserialize, ToSchema)] +pub struct MilestonePauseHistory { + /// Whether the milestone is currently paused (mirrors the top-level `paused`) + pub currently_paused: bool, + /// Most recent pause transaction + pub last_pause_tx_hash: Option, + /// On-chain time of the last pause (`{unix, iso}`) + pub last_pause_time: Option, + /// Most recent resume transaction + pub last_resume_tx_hash: Option, + /// On-chain time of the last resume (`{unix, iso}`) + pub last_resume_time: Option, +} + /// Milestone completion details #[derive(Debug, Serialize, Deserialize, ToSchema)] pub struct MilestoneCompletion { /// Completion transaction hash pub tx_hash: String, - /// Completion time (Unix timestamp) - pub time: Option, + /// On-chain completion time (`{unix, iso}`) + pub time: Option, /// Completion description pub description: Option, /// Evidence array pub evidence: Option, } -/// Milestone disbursement details +/// Milestone withdrawal details #[derive(Debug, Serialize, Deserialize, ToSchema)] -pub struct MilestoneDisbursement { - /// Disbursement transaction hash +pub struct MilestoneWithdrawal { + /// Withdrawal transaction hash pub tx_hash: String, - /// Disbursement time (Unix timestamp) - pub time: Option, - /// Disbursed amount in lovelace + /// On-chain withdrawal time (`{unix, iso}`) + pub time: Option, + /// Withdrawn amount in lovelace pub amount_lovelace: Option, - /// Disbursed amount in ADA - pub amount_ada: Option, +} + +/// Milestone archive info (present when milestone has been superseded by a modify event) +#[derive(Debug, Serialize, Deserialize, ToSchema)] +pub struct MilestoneArchiveInfo { + /// Transaction hash of the modify event that archived this milestone + pub archived_by_tx_hash: Option, + /// On-chain archive time (`{unix, iso}`) + pub archived_at: Option, + /// ID of the new milestone that replaced this one + pub superseded_by_id: Option, } /// Project reference @@ -603,21 +688,32 @@ pub struct ProjectReference { 
#[allow(dead_code)] pub struct MilestoneRow { pub id: i32, - pub vendor_contract_id: i32, + pub project_db_id: i32, pub milestone_id: String, pub milestone_order: i32, pub label: Option, pub description: Option, pub acceptance_criteria: Option, pub amount_lovelace: Option, - pub status: String, + pub time_limit: Option, + pub withdrawn: bool, + pub evidence_provided: bool, + pub paused: bool, + pub archived: bool, pub complete_tx_hash: Option, pub complete_time: Option, pub complete_description: Option, pub evidence: Option, - pub disburse_tx_hash: Option, - pub disburse_time: Option, - pub disburse_amount: Option, + pub withdraw_tx_hash: Option, + pub withdraw_time: Option, + pub withdraw_amount: Option, + pub archived_by_tx_hash: Option, + pub archived_at: Option, + pub superseded_by: Option, + pub last_pause_tx_hash: Option, + pub last_pause_time: Option, + pub last_resume_tx_hash: Option, + pub last_resume_time: Option, pub project_id: String, pub project_name: Option, } @@ -626,18 +722,39 @@ impl From for MilestoneResponse { fn from(row: MilestoneRow) -> Self { let completion = row.complete_tx_hash.as_ref().map(|tx| MilestoneCompletion { tx_hash: tx.clone(), - time: row.complete_time, + time: ChainTime::maybe_from_secs(row.complete_time), description: row.complete_description.clone(), evidence: row.evidence.clone(), }); - let disbursement = row.disburse_tx_hash.as_ref().map(|tx| MilestoneDisbursement { + let withdrawal = row.withdraw_tx_hash.as_ref().map(|tx| MilestoneWithdrawal { tx_hash: tx.clone(), - time: row.disburse_time, - amount_lovelace: row.disburse_amount, - amount_ada: row.disburse_amount.map(lovelace_to_ada), + time: ChainTime::maybe_from_secs(row.withdraw_time), + amount_lovelace: row.withdraw_amount, }); + let archive_info = if row.archived { + Some(MilestoneArchiveInfo { + archived_by_tx_hash: row.archived_by_tx_hash, + archived_at: ChainTime::maybe_from_secs(row.archived_at), + superseded_by_id: row.superseded_by, + }) + } else { + None + }; + 
+ let pause_history = if row.last_pause_tx_hash.is_some() || row.last_resume_tx_hash.is_some() { + Some(MilestonePauseHistory { + currently_paused: row.paused, + last_pause_tx_hash: row.last_pause_tx_hash.clone(), + last_pause_time: ChainTime::maybe_from_secs(row.last_pause_time), + last_resume_tx_hash: row.last_resume_tx_hash.clone(), + last_resume_time: ChainTime::maybe_from_secs(row.last_resume_time), + }) + } else { + None + }; + Self { id: row.id, milestone_id: row.milestone_id, @@ -646,10 +763,15 @@ impl From for MilestoneResponse { description: row.description, acceptance_criteria: row.acceptance_criteria, amount_lovelace: row.amount_lovelace, - amount_ada: row.amount_lovelace.map(lovelace_to_ada), - status: row.status, + time_limit: row.time_limit, + withdrawn: row.withdrawn, + evidence_provided: row.evidence_provided, + paused: row.paused, + archived: row.archived, completion, - disbursement, + withdrawal, + archive_info, + pause_history, project: ProjectReference { project_id: row.project_id, project_name: row.project_name, @@ -673,23 +795,23 @@ pub struct EventResponse { pub slot: Option, /// Block number pub block_number: Option, - /// Block time (Unix timestamp) - pub block_time: Option, - /// Event type (publish/initialize/fund/complete/disburse/etc.) + /// On-chain block time (`{unix, iso}`) + pub block_time: Option, + /// Event type (publish/initialize/fund/complete/disburse/withdraw/pause/resume/modify/cancel/sweep/reorganize) pub event_type: String, - /// Amount in lovelace (if applicable) + /// Amount in lovelace. Set on `fund` and `withdraw` events; null otherwise. pub amount_lovelace: Option, - /// Amount in ADA (if applicable) - pub amount_ada: Option, - /// Reason (for pause/cancel/modify events) + /// Justification text. Set on `pause`, `cancel`, `modify` events; null otherwise. pub reason: Option, - /// Destination (for disburse events) - pub destination: Option, - /// Treasury context + /// TOM `{label, details}` object preserved as-is. 
Set on `disburse` events only. + pub destination: Option, + /// Treasury context. Populated for treasury-level events + /// (`publish`, `initialize`, `disburse`, `sweep`, `reorganize`); null on vendor-level events. pub treasury: Option, - /// Project context + /// Project context. Populated for vendor-level events; null on treasury-level events. pub project: Option, - /// Milestone context + /// Milestone context. Populated when the event resolves to a specific milestone + /// (typically `complete`, `withdraw`); null otherwise. pub milestone: Option, /// Raw metadata pub metadata_raw: Option, @@ -702,8 +824,6 @@ pub struct EventResponse { pub struct EventTreasuryContext { /// Contract instance pub contract_instance: String, - /// Treasury name - pub name: Option, } /// Project context for event @@ -713,8 +833,6 @@ pub struct EventProjectContext { pub project_id: String, /// Project name pub project_name: Option, - /// Vendor name - pub vendor_name: Option, /// Contract address pub contract_address: Option, } @@ -741,14 +859,12 @@ pub struct EventWithContextRow { pub event_type: String, pub amount_lovelace: Option, pub reason: Option, - pub destination: Option, + pub destination: Option, pub metadata: Option, pub created_at: Option>, pub treasury_instance: Option, - pub treasury_name: Option, pub project_id: Option, pub project_name: Option, - pub vendor_name: Option, pub project_address: Option, pub milestone_id: Option, pub milestone_label: Option, @@ -759,13 +875,11 @@ impl From for EventResponse { fn from(row: EventWithContextRow) -> Self { let treasury = row.treasury_instance.as_ref().map(|inst| EventTreasuryContext { contract_instance: inst.clone(), - name: row.treasury_name.clone(), }); let project = row.project_id.as_ref().map(|pid| EventProjectContext { project_id: pid.clone(), project_name: row.project_name.clone(), - vendor_name: row.vendor_name.clone(), contract_address: row.project_address.clone(), }); @@ -780,10 +894,9 @@ impl From for EventResponse { 
tx_hash: row.tx_hash, slot: row.slot, block_number: row.block_number, - block_time: row.block_time, + block_time: ChainTime::maybe_from_secs(row.block_time), event_type: row.event_type, amount_lovelace: row.amount_lovelace, - amount_ada: row.amount_lovelace.map(lovelace_to_ada), reason: row.reason, destination: row.destination, treasury, @@ -812,8 +925,6 @@ pub struct UtxoResponse { pub address_type: Option, /// Amount in lovelace pub lovelace_amount: Option, - /// Amount in ADA - pub ada_amount: Option, /// Creation slot pub slot: Option, /// Block number @@ -840,13 +951,69 @@ impl From for UtxoResponse { address: row.address, address_type: row.address_type, lovelace_amount: row.lovelace_amount, - ada_amount: row.lovelace_amount.map(lovelace_to_ada), slot: row.slot, block_number: row.block_number, } } } +/// UTXO at the shared vendor contract, labeled with the owning project. +#[derive(Debug, Serialize, Deserialize, ToSchema)] +pub struct ProjectUtxoResponse { + /// Transaction hash + pub tx_hash: String, + /// Output index + pub output_index: i16, + /// Address (always the singleton PSSC for this endpoint) + pub address: Option, + /// Amount in lovelace + pub lovelace_amount: Option, + /// Creation slot + pub slot: Option, + /// Block number + pub block_number: Option, + /// Internal project DB id + pub project_db_id: i32, + /// Logical project identifier (e.g., "EC-0008-25") + pub project_id: String, + /// Project display name + pub project_name: Option, + /// Project status (active/paused/completed/cancelled) + pub project_status: Option, +} + +/// Database row for a project-labeled vendor-contract UTXO. 
+#[derive(Debug, FromRow)] +pub struct ProjectUtxoRow { + pub tx_hash: String, + pub output_index: i16, + pub address: Option, + pub lovelace_amount: Option, + pub slot: Option, + pub block_number: Option, + pub project_db_id: i32, + pub project_id: String, + pub project_name: Option, + pub project_status: Option, +} + +impl From for ProjectUtxoResponse { + fn from(row: ProjectUtxoRow) -> Self { + Self { + tx_hash: row.tx_hash, + output_index: row.output_index, + address: row.address, + lovelace_amount: row.lovelace_amount, + slot: row.slot, + block_number: row.block_number, + project_db_id: row.project_db_id, + project_id: row.project_id, + project_name: row.project_name, + project_status: row.project_status, + } + } +} + // ============================================================================ // STATISTICS // ============================================================================ @@ -854,8 +1021,10 @@ impl From for UtxoResponse { /// Comprehensive statistics response #[derive(Debug, Serialize, Deserialize, ToSchema)] pub struct StatisticsResponse { - /// Treasury statistics + /// Treasury (singleton TRSC) statistics pub treasury: TreasuryStats, + /// Vendor contract (singleton shared PSSC) statistics + pub vendor_contracts: VendorContractStats, /// Project statistics pub projects: ProjectStats, /// Milestone statistics @@ -879,6 +1048,23 @@ pub struct TreasuryStats { pub disbursed_count: i64, } +/// Vendor contract (singleton) statistics — the shared PSSC every project sits at. +#[derive(Debug, Serialize, Deserialize, ToSchema)] +pub struct VendorContractStats { + /// Total vendor contracts known to the API. Expected to be 1 for our deployment. + pub total_count: i64, + /// Shared PSSC script address (`addr1x...`). Null until the first fund event lands. + pub address: Option, + /// Number of distinct projects bound to this vendor contract. + pub project_count: i64, + /// Total UTXOs ever observed at this address (regardless of spent state). 
+ pub utxo_history_count: i64, + /// Currently unspent UTXOs at this address. + pub unspent_utxo_count: i64, + /// Sum of unspent lovelace held at this address. + pub current_balance_lovelace: i64, +} + /// Project statistics #[derive(Debug, Serialize, Deserialize, ToSchema)] pub struct ProjectStats { @@ -910,9 +1096,11 @@ pub struct MilestoneStats { /// Event statistics #[derive(Debug, Serialize, Deserialize, ToSchema)] pub struct EventStats { - /// Total events - pub total_count: i64, - /// Events by type + /// Total on-chain TOM events (from yaci_store) + pub on_chain_count: i64, + /// Total processed events (in treasury schema) + pub processed_count: i64, + /// On-chain events by type pub by_type: std::collections::HashMap, } @@ -921,16 +1109,10 @@ pub struct EventStats { pub struct FinancialStats { /// Total allocated to projects in lovelace pub total_allocated_lovelace: i64, - /// Total allocated to projects in ADA - pub total_allocated_ada: f64, - /// Total disbursed in lovelace - pub total_disbursed_lovelace: i64, - /// Total disbursed in ADA - pub total_disbursed_ada: f64, + /// Total withdrawn in lovelace + pub total_withdrawn_lovelace: i64, /// Current total balance in lovelace (from UTXOs) pub current_balance_lovelace: i64, - /// Current total balance in ADA - pub current_balance_ada: f64, } /// Sync status statistics @@ -953,7 +1135,7 @@ fn default_limit() -> u32 { 50 } /// Vendor contracts query parameters #[derive(Debug, Deserialize, ToSchema, IntoParams)] -pub struct VendorContractsQuery { +pub struct ProjectsQuery { /// Page number (1-indexed) #[serde(default = "default_page")] pub page: u32, @@ -962,7 +1144,7 @@ pub struct VendorContractsQuery { pub limit: u32, /// Filter by status (active/paused/completed/cancelled) pub status: Option, - /// Search in project_id, project_name, description, vendor_name + /// Search in project_id, project_name, description pub search: Option, /// Sort field (fund_time, project_id, project_name) pub sort: Option, @@ 
-992,6 +1174,8 @@ pub struct EventsQuery { pub from_time: Option, /// Filter by time (Unix timestamp, to) pub to_time: Option, + /// Full-text search across `reason`, `destination`, and raw `metadata` (case-insensitive substring match). + pub q: Option, } /// Recent events query parameters @@ -1019,12 +1203,21 @@ pub struct MilestonesQuery { /// Items per page #[serde(default = "default_limit")] pub limit: u32, - /// Filter by status (pending/completed/disbursed) - pub status: Option, + /// Filter by withdrawn status + pub withdrawn: Option, + /// Filter by evidence_provided status + pub evidence_provided: Option, + /// Filter by archived status (defaults to false if not specified) + pub archived: Option, /// Filter by project ID pub project_id: Option, - /// Sort field (milestone_order, complete_time, disburse_time) + /// Sort field (milestone_order, complete_time, withdraw_time) pub sort: Option, + /// Filter by milestone time (Unix timestamp, from). Matches whichever of + /// `complete_time` or `withdraw_time` is set on the milestone. + pub from_time: Option, + /// Filter by milestone time (Unix timestamp, to). + pub to_time: Option, } /// Project events query parameters @@ -1040,3 +1233,14 @@ pub struct ProjectEventsQuery { #[serde(rename = "type")] pub event_type: Option, } + +/// Generic pagination query (used by sub-list endpoints that don't need other filters). 
+#[derive(Debug, Deserialize, ToSchema, IntoParams)] +pub struct PaginationQuery { + /// Page number (1-indexed) + #[serde(default = "default_page")] + pub page: u32, + /// Items per page (max 100) + #[serde(default = "default_limit")] + pub limit: u32, +} diff --git a/api/src/openapi.rs b/api/src/openapi.rs index 1b62629..ac98cdf 100644 --- a/api/src/openapi.rs +++ b/api/src/openapi.rs @@ -3,25 +3,28 @@ use utoipa::OpenApi; use crate::models::v1::{ - ApiResponse, EventMilestoneContext, EventProjectContext, EventResponse, EventStats, - EventTreasuryContext, EventsQuery, FinancialStats, MilestoneCompletion, MilestoneDisbursement, - MilestoneResponse, MilestoneStats, MilestonesSummary, MilestonesQuery, PaginatedResponse, - Pagination, ProjectEventsQuery, ProjectReference, ProjectStats, RecentEventsQuery, - ResponseMeta, StatisticsResponse, StatusResponse, SyncStats, TreasuryFinancials, - TreasuryReference, TreasuryResponse, TreasuryStatistics, TreasuryStats, UtxoResponse, - VendorContractDetail, VendorContractSummary, VendorContractsQuery, VendorFinancials, + ApiResponse, ChainStatus, DatabaseStatus, EventMilestoneContext, EventProjectContext, + EventResponse, EventStats, EventTreasuryContext, EventsQuery, FinancialStats, + MilestoneArchiveInfo, MilestoneCompletion, MilestoneWithdrawal, MilestoneResponse, + MilestoneStats, MilestonesSummary, MilestonesQuery, PaginatedResponse, Pagination, + PaginationQuery, ProjectCurrentUtxo, ProjectDetail, ProjectEventsQuery, ProjectReference, + ProjectStats, ProjectSummary, ProjectUtxoResponse, ProjectsQuery, RecentEventsQuery, + ResponseMeta, StatisticsResponse, StatusResponse, SyncStats, SyncStatusBlock, TotalsBlock, + TreasuryFinancials, TreasuryReference, TreasuryResponse, TreasuryStatistics, TreasuryStats, + UtxoResponse, VendorContractProjectsBlock, VendorContractResponse, VendorContractStats, + VendorFinancials, }; use crate::routes::v1::{ - events, milestones, statistics, status, treasury, vendor_contracts, + events, 
milestones, projects, statistics, status, treasury, vendor_contract, }; #[derive(OpenApi)] #[openapi( info( title = "Cardano Administration API", - version = "1.0.0", - description = "REST API for tracking Cardano treasury contracts and fund disbursements.\n\n## Overview\n\nThis API provides access to treasury contract data, vendor contracts (projects), milestones, and event history for the Cardano treasury system.\n\n## Key Concepts\n\n- **Treasury Contract (TRSC)**: The root treasury reserve contract that holds funds\n- **Vendor Contract (PSSC)**: Project-specific contracts that receive funding from the treasury\n- **Milestone**: Individual deliverables within a vendor contract\n- **Event**: Audit log of all treasury operations (fund, complete, disburse, etc.)\n\n## Response Format\n\nAll responses use a consistent envelope format:\n\n```json\n{\n \"data\": { ... },\n \"pagination\": { ... }, // Only for paginated endpoints\n \"meta\": {\n \"timestamp\": \"2026-01-28T10:30:00Z\"\n }\n}\n```\n\n## Amounts\n\nAll monetary amounts are provided in both lovelace (smallest unit) and ADA:\n- `amount_lovelace`: Integer amount in lovelace\n- `amount_ada`: Float amount in ADA (1 ADA = 1,000,000 lovelace)", + version = "2.0.0", + description = "REST API for tracking Cardano treasury contracts and fund disbursements.\n\n## Overview\n\nThis API provides access to the treasury contract, the shared vendor contract, projects, milestones, and event history for the Cardano treasury system.\n\n## Key Concepts\n\n- **Treasury Contract (TRSC)**: The singleton on-chain reserve contract that holds funds.\n- **Vendor Contract (PSSC)**: The singleton on-chain script address every project sits at, distinguished only by inline datum.\n- **Project**: One row per `fund` event (e.g. `EC-0008-25`). 
42 of these in our deployment.\n- **Milestone**: An individual deliverable within a project.\n- **Event**: Audit log of all treasury operations (fund, complete, disburse, etc.).\n\n## Response Format\n\nAll responses use a consistent envelope:\n\n```json\n{\n \"data\": { ... },\n \"pagination\": { ... }, // present on paginated endpoints\n \"meta\": { \"timestamp\": \"2026-05-01T10:30:00Z\" }\n}\n```\n\nErrors use a parallel envelope:\n\n```json\n{\n \"error\": { \"code\": \"not_found\", \"message\": \"…\", \"details\": {…}? },\n \"meta\": { \"timestamp\": \"2026-05-01T10:30:00Z\" }\n}\n```\n\n## Amounts\n\nAll monetary amounts are in **lovelace** (the smallest unit; 1 ADA = 1,000,000 lovelace). Clients are responsible for ADA formatting.\n\n## Timestamps\n\nOn-chain block times are returned as a paired object: `{\"unix\": 1777609469, \"iso\": \"2026-05-01T04:24:29Z\"}`. Server-side timestamps (`created_at`, `updated_at`) are ISO 8601 strings.", license( name = "Apache 2.0", url = "https://www.apache.org/licenses/LICENSE-2.0" @@ -35,8 +38,9 @@ use crate::routes::v1::{ ), tags( (name = "Status", description = "API health and status endpoints"), - (name = "Treasury", description = "Treasury contract endpoints"), - (name = "Vendor Contracts", description = "Vendor contract (project) endpoints"), + (name = "Treasury", description = "Treasury contract (singleton TRSC) endpoints"), + (name = "Vendor Contract", description = "Shared vendor contract (singleton PSSC) endpoint"), + (name = "Projects", description = "Project endpoints (one per fund event)"), (name = "Milestones", description = "Milestone endpoints"), (name = "Events", description = "Event log endpoints"), (name = "Statistics", description = "Aggregated statistics endpoints") @@ -46,12 +50,15 @@ use crate::routes::v1::{ treasury::get_treasury, treasury::get_treasury_utxos, treasury::get_treasury_events, - vendor_contracts::list_vendor_contracts, - vendor_contracts::get_vendor_contract, -
vendor_contracts::get_vendor_contract_milestones, - vendor_contracts::get_vendor_contract_events, - vendor_contracts::get_vendor_contract_utxos, + vendor_contract::get_vendor_contract, + vendor_contract::get_vendor_contract_utxos, + projects::list_projects, + projects::get_project, + projects::get_project_milestones, + projects::get_project_events, + projects::get_project_utxos, milestones::list_milestones, + milestones::list_milestones_by_project, milestones::get_milestone, events::list_events, events::get_recent_events, @@ -62,7 +69,8 @@ use crate::routes::v1::{ schemas( // Response envelopes ApiResponse, - ApiResponse, + ApiResponse, + ApiResponse, ApiResponse>, ApiResponse>, ApiResponse>, @@ -70,25 +78,31 @@ use crate::routes::v1::{ ApiResponse, ApiResponse, ApiResponse, - PaginatedResponse>, + PaginatedResponse>, PaginatedResponse>, PaginatedResponse>, + PaginatedResponse>, + PaginatedResponse>, Pagination, ResponseMeta, // Treasury TreasuryResponse, TreasuryStatistics, TreasuryFinancials, - // Vendor Contracts - VendorContractSummary, - VendorContractDetail, + // Vendor Contract (singleton) + VendorContractResponse, + VendorContractProjectsBlock, + // Projects + ProjectSummary, + ProjectDetail, VendorFinancials, MilestonesSummary, TreasuryReference, // Milestones MilestoneResponse, MilestoneCompletion, - MilestoneDisbursement, + MilestoneWithdrawal, + MilestoneArchiveInfo, ProjectReference, // Events EventResponse, @@ -97,9 +111,12 @@ use crate::routes::v1::{ EventMilestoneContext, // UTXOs UtxoResponse, + ProjectUtxoResponse, + ProjectCurrentUtxo, // Statistics StatisticsResponse, TreasuryStats, + VendorContractStats, ProjectStats, MilestoneStats, EventStats, @@ -107,12 +124,22 @@ use crate::routes::v1::{ SyncStats, // Status StatusResponse, + DatabaseStatus, + SyncStatusBlock, + ChainStatus, + TotalsBlock, + // Errors + crate::errors::ApiErrorBody, + crate::errors::ApiErrorDetail, + // Time + crate::models::time::ChainTime, // Query params - 
VendorContractsQuery, + ProjectsQuery, EventsQuery, RecentEventsQuery, MilestonesQuery, ProjectEventsQuery, + PaginationQuery, ) ) )] diff --git a/api/src/parsers/address.rs b/api/src/parsers/address.rs new file mode 100644 index 0000000..59ba447 --- /dev/null +++ b/api/src/parsers/address.rs @@ -0,0 +1,32 @@ +//! Address parsing utilities for Cardano addresses + +use pallas_addresses::Address; + +/// Extract the stake credential from a bech32 Cardano address. +/// +/// For Shelley base addresses, returns the delegation/stake part as a hex string. +/// Returns None for non-Shelley addresses or on any parse error. +pub fn extract_stake_credential(bech32_addr: &str) -> Option<String> { + let addr = Address::from_bech32(bech32_addr).ok()?; + match addr { + Address::Shelley(shelley) => { + let hash = shelley.delegation().as_hash()?; + Some(hex::encode(hash.as_ref())) + } + _ => None, + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_extract_stake_credential_returns_none_for_invalid_input() { + // Inlining a well-known mainnet test vector isn't practical here, + // so assert the failure path instead: a malformed bech32 string + // must be rejected cleanly (no panic, returns None) + let result = extract_stake_credential("addr1x_invalid"); + assert!(result.is_none()); + } +} diff --git a/api/src/parsers/datum.rs b/api/src/parsers/datum.rs new file mode 100644 index 0000000..cc8dcd7 --- /dev/null +++ b/api/src/parsers/datum.rs @@ -0,0 +1,430 @@ +//! CBOR datum parser for Plutus vendor contract datums +//! +//! Parses inline datum CBOR hex from `address_utxo.inline_datum` into structured data. +//! +//! Datum structure (from Plutus vendor contract): +//! ```text +//! Constr(0, [ +//! Constr(0, [ByteString(vendor_payment_key_hash)]), +//! Array([ +//! Constr(0, [BigInt(time_limit), Map(value), Constr(0|1, [])]), // per milestone +//! ... +//! ]) +//! ]) +//! ``` +//!
pallas uses tag 121 = constructor 0, tag 122 = constructor 1. + +use anyhow::{anyhow, Context}; +use pallas_primitives::alonzo::{BigInt, PlutusData}; + +/// Parsed vendor contract datum. +/// +/// Partial: each section may independently succeed or fail. The call site +/// persists what's `Ok` and writes the error string to `datum_parse_error` +/// for the relevant row, so failures are queryable in SQL rather than +/// scraped from logs. +#[derive(Debug, Clone)] +pub struct ParsedProjectDatum { + /// Vendor payment key hash (hex). `None` means vendor-info section + /// failed to parse — see `vendor_info_error`. + pub vendor_payment_key_hash: Option, + /// Error message if vendor-info section couldn't be parsed. + pub vendor_info_error: Option, + /// Per-milestone results. `Ok` rows update the milestone row; + /// `Err` rows record the error in `treasury.milestones.datum_parse_error`. + pub milestones: Vec>, + /// Top-level CBOR / shape error. When set, every other field is empty. + pub top_level_error: Option, +} + +/// Parsed milestone data from inline datum +#[derive(Debug, Clone)] +pub struct ParsedMilestoneDatum { + /// POSIXTime in milliseconds + pub time_limit: i64, + /// Lovelace amount from Value map {"": {"": amount}} + pub amount_lovelace: i64, + /// Constructor 0 = active, Constructor 1 = paused + pub paused: bool, +} + +/// Parse a vendor contract datum from CBOR hex string. +/// +/// Two on-chain vendor-info shapes are supported: +/// +/// 1. Single-key (`m-N` projects): +/// `Constr(0, [Constr(0, [bytes(28)]), [milestones]])` +/// +/// 2. Multi-key (`UTXO-*` projects, e.g. `UTXO-EC-0002-25-*`): +/// `Constr(0, [Constr(1, [(Constr(0, [bytes(28)]), Constr(N, [bytes(28)]))]), [milestones]])` +/// — vendor info wraps a tuple of two parties (different signature +/// types). 
We walk the subtree and collect every 28-byte BoundedBytes +/// (Cardano key-hash size, Blake2b-224), joining them with `,` for +/// storage in the single `vendor_payment_key_hash` column. +/// +/// The milestones array structure is identical between the two formats. +/// +/// Returns a partial result: a top-level CBOR/shape error short-circuits +/// the whole datum, otherwise vendor info and each milestone parse +/// independently so a single bad milestone doesn't lose the rest. +pub fn parse_project_datum(cbor_hex: &str) -> ParsedProjectDatum { + let mut out = ParsedProjectDatum { + vendor_payment_key_hash: None, + vendor_info_error: None, + milestones: Vec::new(), + top_level_error: None, + }; + + let bytes = match hex::decode(cbor_hex).context("invalid hex in datum") { + Ok(b) => b, + Err(e) => { out.top_level_error = Some(format!("{:#}", e)); return out; } + }; + let datum: PlutusData = match pallas_codec::minicbor::decode(&bytes) + .context("failed to decode CBOR datum") { + Ok(d) => d, + Err(e) => { out.top_level_error = Some(format!("{:#}", e)); return out; } + }; + + // Top-level: Constr(0, [vendor_info, milestones_array]) + let top_fields = match expect_constr(&datum, 0, "top-level datum") { + Ok(f) => f, + Err(e) => { out.top_level_error = Some(format!("{:#}", e)); return out; } + }; + if top_fields.len() < 2 { + out.top_level_error = Some(format!( + "top-level datum has {} fields, expected 2", + top_fields.len() + )); + return out; + } + + // Field 0: vendor info. Walk the subtree and collect every 28-byte + // BoundedBytes — handles both single-key and multi-key shapes uniformly. 
+ let mut hashes: Vec = Vec::new(); + collect_key_hashes(&top_fields[0], &mut hashes); + if hashes.is_empty() { + out.vendor_info_error = Some("no 28-byte key hash found in vendor info".to_string()); + } else { + out.vendor_payment_key_hash = Some(hashes.join(",")); + } + + // Field 1: Array of milestone Constrs + match expect_array(&top_fields[1], "milestones array") { + Ok(milestone_data_list) => { + out.milestones.reserve(milestone_data_list.len()); + for (idx, ms_datum) in milestone_data_list.iter().enumerate() { + let r = parse_milestone_datum(ms_datum, idx) + .with_context(|| format!("milestone {}", idx)) + .map_err(|e| format!("{:#}", e)); + out.milestones.push(r); + } + } + Err(e) => { + // Surface as a top-level error so the call site doesn't silently + // skip the milestones UPDATE loop. + out.top_level_error = Some(format!("{:#}", e)); + } + } + + out +} + +/// Parse a single milestone datum: Constr(0, [BigInt(time_limit), Map(value), Constr(0|1, [])]) +fn parse_milestone_datum(datum: &PlutusData, _idx: usize) -> anyhow::Result { + let fields = expect_constr(datum, 0, "milestone")?; + if fields.len() < 3 { + return Err(anyhow!( + "milestone datum has {} fields, expected 3", + fields.len() + )); + } + + // Field 0: time_limit as BigInt + let time_limit = expect_integer(&fields[0], "time_limit")?; + + // Field 1: Value as Map - extract lovelace from {"": {"": amount}} + let amount_lovelace = extract_lovelace_from_value(&fields[1])?; + + // Field 2: Constr(0|1, []) — 0=active, 1=paused + let paused = match &fields[2] { + PlutusData::Constr(constr) => { + // pallas tag: 121 = constructor 0 (active), 122 = constructor 1 (paused) + match constr.tag { + 121 => false, + 122 => true, + _ => { + return Err(anyhow!( + "unexpected pause constructor tag: {}", + constr.tag + )) + } + } + } + _ => return Err(anyhow!("expected Constr for pause flag")), + }; + + Ok(ParsedMilestoneDatum { + time_limit, + amount_lovelace, + paused, + }) +} + +/// Extract lovelace amount 
from a Plutus Value: +/// Map({ ByteString("") => Map({ ByteString("") => BigInt(amount) }) }) +fn extract_lovelace_from_value(datum: &PlutusData) -> anyhow::Result { + match datum { + PlutusData::Map(entries) => { + let pairs: Vec<_> = entries.clone().to_vec(); + // Look for the empty-bytestring key (ADA policy ID) + for (key, val) in &pairs { + if is_empty_bytes(key) { + // Inner map: {"": amount} + match val { + PlutusData::Map(inner_entries) => { + let inner_pairs: Vec<_> = inner_entries.clone().to_vec(); + for (inner_key, inner_val) in &inner_pairs { + if is_empty_bytes(inner_key) { + return expect_integer(inner_val, "lovelace amount"); + } + } + return Err(anyhow!("no empty-key entry in inner Value map")); + } + // Some datums encode Value as Map({ "" => amount }) (flat) + _ => return expect_integer(val, "lovelace amount"), + } + } + } + Err(anyhow!("no ADA (empty policy) key in Value map")) + } + _ => Err(anyhow!("expected Map for Value, got {:?}", datum_type_name(datum))), + } +} + +// ============================================================================ +// Helpers +// ============================================================================ + +/// Walk a Plutus datum subtree and collect every 28-byte BoundedBytes +/// (Cardano key-hash size, Blake2b-224) it contains, in pre-order. 
+fn collect_key_hashes(datum: &PlutusData, acc: &mut Vec) { + match datum { + PlutusData::BoundedBytes(b) => { + if b.len() == 28 { + acc.push(hex::encode(b.as_slice())); + } + } + PlutusData::Constr(c) => { + for f in &c.fields { + collect_key_hashes(f, acc); + } + } + PlutusData::Array(arr) => { + for x in arr { + collect_key_hashes(x, acc); + } + } + _ => {} + } +} + +fn expect_constr<'a>( + datum: &'a PlutusData, + expected_tag_offset: u64, + context: &str, +) -> anyhow::Result<&'a Vec> { + match datum { + PlutusData::Constr(constr) => { + let expected_tag = 121 + expected_tag_offset; + if constr.tag != expected_tag { + return Err(anyhow!( + "{}: expected constructor tag {}, got {}", + context, + expected_tag, + constr.tag + )); + } + Ok(&constr.fields) + } + _ => Err(anyhow!( + "{}: expected Constr, got {:?}", + context, + datum_type_name(datum) + )), + } +} + +fn expect_array<'a>( + datum: &'a PlutusData, + context: &str, +) -> anyhow::Result<&'a Vec> { + match datum { + PlutusData::Array(arr) => Ok(arr), + _ => Err(anyhow!( + "{}: expected Array, got {:?}", + context, + datum_type_name(datum) + )), + } +} + +#[allow(dead_code)] +fn expect_bytes(datum: &PlutusData, context: &str) -> anyhow::Result { + match datum { + PlutusData::BoundedBytes(bytes) => Ok(hex::encode(bytes.as_slice())), + _ => Err(anyhow!( + "{}: expected BoundedBytes, got {:?}", + context, + datum_type_name(datum) + )), + } +} + +fn expect_integer(datum: &PlutusData, context: &str) -> anyhow::Result { + match datum { + PlutusData::BigInt(big_int) => { + match big_int { + BigInt::Int(int_val) => { + // pallas_codec::utils::Int implements Into + let n: i128 = (*int_val).into(); + Ok(n as i64) + } + BigInt::BigUInt(bytes) => { + let mut val: i64 = 0; + for b in bytes.as_slice() { + val = val.checked_mul(256).unwrap_or(i64::MAX); + val = val.checked_add(*b as i64).unwrap_or(i64::MAX); + } + Ok(val) + } + BigInt::BigNInt(bytes) => { + let mut val: i64 = 0; + for b in bytes.as_slice() { + val = 
val.checked_mul(256).unwrap_or(i64::MAX); + val = val.checked_add(*b as i64).unwrap_or(i64::MAX); + } + // CBOR tag-3 bignums encode -1 - n (RFC 8949), so a plain `-val` + // would be off by one and could overflow after saturation; negate + // with checked ops instead. + Ok(val.checked_neg().and_then(|v| v.checked_sub(1)).unwrap_or(i64::MIN)) + } + } + } + _ => Err(anyhow!( + "{}: expected BigInt, got {:?}", + context, + datum_type_name(datum) + )), + } +} + +fn is_empty_bytes(datum: &PlutusData) -> bool { + matches!(datum, PlutusData::BoundedBytes(bytes) if bytes.is_empty()) +} + +fn datum_type_name(datum: &PlutusData) -> &'static str { + match datum { + PlutusData::Constr(_) => "Constr", + PlutusData::Map(_) => "Map", + PlutusData::BigInt(_) => "BigInt", + PlutusData::BoundedBytes(_) => "BoundedBytes", + PlutusData::Array(_) => "Array", + } +} + +#[cfg(test)] +mod tests { + use super::*; + + #[test] + fn test_parse_empty_hex_returns_top_level_error() { + let r = parse_project_datum(""); + assert!(r.top_level_error.is_some()); + assert!(r.vendor_payment_key_hash.is_none()); + assert!(r.milestones.is_empty()); + } + + #[test] + fn test_parse_invalid_hex_returns_top_level_error() { + let r = parse_project_datum("zzzz"); + assert!(r.top_level_error.is_some()); + } + + #[test] + fn test_parse_invalid_cbor_returns_top_level_error() { + let r = parse_project_datum("deadbeef"); + assert!(r.top_level_error.is_some()); + } + + #[test] + fn test_utxo_emi_0001_25_fixture_parses() { + // Real on-chain datum from project UTXO-EMI-0001-25 + // (tx 5849b0ec727e062ef2ee29076f0f5fcc72206081f40c2dc9cba604bca93c9e3c, output 0). + // Multi-key vendor info (Constr(1, [Array([Constr(0, [bytes28]), Constr(N, [bytes28])])])).
+        let hex = include_str!("../../tests/fixtures/utxo_emi_0001_25.hex").trim();
+        let r = parse_project_datum(hex);
+        assert!(r.top_level_error.is_none(), "top-level error: {:?}", r.top_level_error);
+        assert!(r.vendor_info_error.is_none(), "vendor info error: {:?}", r.vendor_info_error);
+        let kh = r.vendor_payment_key_hash.expect("key hash");
+        assert!(kh.contains(','), "expected multi-key (comma-joined), got {}", kh);
+        assert_eq!(r.milestones.len(), 5, "expected 5 milestones");
+        for (i, m) in r.milestones.iter().enumerate() {
+            assert!(m.is_ok(), "milestone {} failed: {:?}", i, m);
+        }
+    }
+
+    #[test]
+    fn test_utxo_ec_0002_25_01_fixture_parses() {
+        // 16-milestone fund datum from UTXO-EC-0002-25-01 (largest variant).
+        let hex = include_str!("../../tests/fixtures/utxo_ec_0002_25_01.hex").trim();
+        let r = parse_project_datum(hex);
+        assert!(r.top_level_error.is_none(), "top-level error: {:?}", r.top_level_error);
+        assert!(r.vendor_payment_key_hash.is_some());
+        assert_eq!(r.milestones.len(), 16);
+        assert!(r.milestones.iter().all(|m| m.is_ok()));
+    }
+
+    #[test]
+    fn test_utxo_ec_0002_25_03_fixture_parses() {
+        // Real on-chain datum from project UTXO-EC-0002-25-03 (20 milestones).
+        // This datum was historically corrupted by a prior bug; ensures the
+        // post-fix parser still handles it correctly.
+        let hex = include_str!("../../tests/fixtures/utxo_ec_0002_25_03.hex").trim();
+        let r = parse_project_datum(hex);
+        assert!(r.top_level_error.is_none(), "top-level error: {:?}", r.top_level_error);
+        assert!(r.vendor_payment_key_hash.is_some());
+        assert_eq!(r.milestones.len(), 20);
+        assert!(r.milestones.iter().all(|m| m.is_ok()));
+    }
+
+    #[test]
+    fn test_partial_parse_keeps_vendor_info_when_milestones_fail() {
+        // Hand-crafted datum: valid vendor info, milestones array contains a
+        // bogus milestone (tag 1286 instead of the expected 121). The partial
+        // parser should still surface the vendor key hash.
+        // Constr(0, [
+        //   Constr(0, [bytes(28)]),   // vendor_info: single-key
+        //   [Constr(1286, [])]        // milestones with one bad entry
+        // ])
+        // tag 1286 decodes as constructor 13 (tags 1280 + n encode
+        // constructors 7 + n), which doesn't match the expected tag 121, so
+        // milestone parsing returns Err for that entry but the vendor info
+        // is still extracted.
+        let mut bytes = Vec::new();
+        bytes.extend_from_slice(&[0xd8, 0x79, 0x9f]); // Constr(0, [
+        bytes.extend_from_slice(&[0xd8, 0x79, 0x9f]); // Constr(0, [
+        bytes.push(0x58); bytes.push(28); // bytes(28)
+        bytes.extend_from_slice(&[0xaa; 28]);
+        bytes.push(0xff); // ])
+        bytes.push(0x9f); // [
+        bytes.extend_from_slice(&[0xd9, 0x05, 0x06]); // tag 1286 (constructor 13 in the 1280+ range)
+        bytes.push(0x80); // []
+        bytes.push(0xff); // ]
+        bytes.push(0xff); // ])
+
+        let hex = hex::encode(&bytes);
+        let r = parse_project_datum(&hex);
+        // Vendor info should succeed even though the milestone is malformed.
+        assert_eq!(
+            r.vendor_payment_key_hash.as_deref(),
+            Some("aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa")
+        );
+        assert!(r.vendor_info_error.is_none());
+    }
+}
diff --git a/api/src/parsers/mod.rs b/api/src/parsers/mod.rs
index b121b63..5117a65 100644
--- a/api/src/parsers/mod.rs
+++ b/api/src/parsers/mod.rs
@@ -1 +1,3 @@
 // Metadata parsers for treasury contract transactions
+pub mod address;
+pub mod datum;
diff --git a/api/src/routes/v1/events.rs b/api/src/routes/v1/events.rs
index 8982c9d..eda1389 100644
--- a/api/src/routes/v1/events.rs
+++ b/api/src/routes/v1/events.rs
@@ -2,11 +2,11 @@
 use axum::{
     extract::{Extension, Path, Query},
-    http::StatusCode,
     response::Json,
 };
 use sqlx::PgPool;
 
+use crate::errors::ApiError;
 use crate::models::v1::{
     ApiResponse, EventResponse, EventWithContextRow, EventsQuery, PaginatedResponse,
     RecentEventsQuery,
@@ -27,13 +27,12 @@ use crate::models::v1::{
 pub async fn list_events(
     Extension(pool): Extension<PgPool>,
     Query(params): Query<EventsQuery>,
-) -> Result<Json<PaginatedResponse<Vec<EventResponse>>>, StatusCode> {
+) -> Result<Json<PaginatedResponse<Vec<EventResponse>>>, ApiError> {
     let page = params.page.max(1);
     let limit =
params.limit.min(100).max(1);
     let offset = ((page - 1) * limit) as i64;
     let limit_i64 = limit as i64;
 
-    // Build dynamic query based on filters
     let mut conditions = Vec::new();
     let mut bind_index = 1;
@@ -57,6 +56,15 @@ pub async fn list_events(
         bind_index += 1;
     }
 
+    // Full-text search across reason, destination, and raw metadata.
+    if params.q.is_some() {
+        conditions.push(format!(
+            "(COALESCE(reason, '') ILIKE ${0} OR destination::text ILIKE ${0} OR metadata::text ILIKE ${0})",
+            bind_index
+        ));
+        bind_index += 1;
+    }
+
     let where_clause = if conditions.is_empty() {
         String::new()
     } else {
@@ -83,14 +91,11 @@ pub async fn list_events(
     if let Some(to_time) = params.to_time {
         count_q = count_q.bind(to_time);
     }
+    if let Some(ref q) = params.q {
+        count_q = count_q.bind(format!("%{}%", q));
+    }
 
-    let (total_count,) = count_q
-        .fetch_one(&pool)
-        .await
-        .map_err(|e| {
-            tracing::error!("Database query error: {}", e);
-            StatusCode::INTERNAL_SERVER_ERROR
-        })?;
+    let (total_count,) = count_q.fetch_one(&pool).await?;
 
     // Get data
     let data_query = format!(
@@ -120,16 +125,11 @@ pub async fn list_events(
     if let Some(to_time) = params.to_time {
         data_q = data_q.bind(to_time);
     }
+    if let Some(ref q) = params.q {
+        data_q = data_q.bind(format!("%{}%", q));
+    }
 
-    let rows = data_q
-        .bind(limit_i64)
-        .bind(offset)
-        .fetch_all(&pool)
-        .await
-        .map_err(|e| {
-            tracing::error!("Database query error: {}", e);
-            StatusCode::INTERNAL_SERVER_ERROR
-        })?;
+    let rows = data_q.bind(limit_i64).bind(offset).fetch_all(&pool).await?;
 
     let events: Vec<EventResponse> = rows.into_iter().map(EventResponse::from).collect();
     Ok(Json(PaginatedResponse::new(events, page, limit, total_count)))
@@ -150,7 +150,7 @@ pub async fn list_events(
 pub async fn get_recent_events(
     Extension(pool): Extension<PgPool>,
     Query(params): Query<RecentEventsQuery>,
-) -> Result<Json<ApiResponse<Vec<EventResponse>>>, StatusCode> {
+) -> Result<Json<ApiResponse<Vec<EventResponse>>>, ApiError> {
     let hours = params.hours.max(1).min(168); // Max 1 week
     let limit = params.limit.min(100).max(1) as i64;
@@ -189,10 +189,7 @@ pub async
fn get_recent_events(
         .await
     };
 
-    let rows = rows.map_err(|e| {
-        tracing::error!("Database query error: {}", e);
-        StatusCode::INTERNAL_SERVER_ERROR
-    })?;
+    let rows = rows?;
 
     let events: Vec<EventResponse> = rows.into_iter().map(EventResponse::from).collect();
     Ok(Json(ApiResponse::new(events)))
@@ -216,22 +213,18 @@ pub async fn get_recent_events(
 pub async fn get_event(
     Extension(pool): Extension<PgPool>,
     Path(tx_hash): Path<String>,
-) -> Result<Json<ApiResponse<EventResponse>>, StatusCode> {
+) -> Result<Json<ApiResponse<EventResponse>>, ApiError> {
     let row = sqlx::query_as::<_, EventWithContextRow>(
         r#"
         SELECT *
         FROM treasury.v_events_with_context
         WHERE tx_hash = $1
-        "#
+        "#,
     )
     .bind(&tx_hash)
     .fetch_optional(&pool)
-    .await
-    .map_err(|e| {
-        tracing::error!("Database query error: {}", e);
-        StatusCode::INTERNAL_SERVER_ERROR
-    })?
-    .ok_or(StatusCode::NOT_FOUND)?;
+    .await?
+    .ok_or_else(|| ApiError::NotFound(format!("event `{}` not found", tx_hash)))?;
 
     Ok(Json(ApiResponse::new(EventResponse::from(row))))
 }
diff --git a/api/src/routes/v1/milestones.rs b/api/src/routes/v1/milestones.rs
index f67f548..2671a5c 100644
--- a/api/src/routes/v1/milestones.rs
+++ b/api/src/routes/v1/milestones.rs
@@ -2,13 +2,14 @@
 use axum::{
     extract::{Extension, Path, Query},
-    http::StatusCode,
     response::Json,
 };
 use sqlx::PgPool;
 
+use crate::errors::ApiError;
 use crate::models::v1::{
     ApiResponse, MilestoneResponse, MilestoneRow, MilestonesQuery, PaginatedResponse,
+    PaginationQuery,
 };
 
 /// List all milestones
@@ -26,7 +27,7 @@ use crate::models::v1::{
 pub async fn list_milestones(
     Extension(pool): Extension<PgPool>,
     Query(params): Query<MilestonesQuery>,
-) -> Result<Json<PaginatedResponse<Vec<MilestoneResponse>>>, StatusCode> {
+) -> Result<Json<PaginatedResponse<Vec<MilestoneResponse>>>, ApiError> {
     let page = params.page.max(1);
     let limit = params.limit.min(100).max(1);
     let offset = ((page - 1) * limit) as i64;
@@ -36,8 +37,22 @@ pub async fn list_milestones(
     let mut conditions = Vec::new();
     let mut bind_index = 1;
 
-    if params.status.is_some() {
-        conditions.push(format!("m.status = ${}", bind_index));
+    // Default to non-archived milestones unless archived filter is explicitly set
+    if params.archived.is_some() {
+        conditions.push(format!("m.archived = ${}", bind_index));
+        bind_index += 1;
+    } else {
+        conditions.push("NOT m.archived".to_string());
+    }
+
+    if params.withdrawn.is_some() {
+        conditions.push(format!("m.withdrawn = ${}", bind_index));
+        bind_index += 1;
+    }
+
+    if params.evidence_provided.is_some() {
+        conditions.push(format!("m.evidence_provided = ${}", bind_index));
         bind_index += 1;
     }
@@ -46,6 +61,24 @@ pub async fn list_milestones(
         bind_index += 1;
     }
 
+    // KI: time-range filter on milestones, matching whichever of complete_time
+    // or withdraw_time is set on the row.
+    if params.from_time.is_some() {
+        conditions.push(format!(
+            "(m.complete_time >= ${0} OR m.withdraw_time >= ${0})",
+            bind_index
+        ));
+        bind_index += 1;
+    }
+
+    if params.to_time.is_some() {
+        conditions.push(format!(
+            "(m.complete_time <= ${0} OR m.withdraw_time <= ${0})",
+            bind_index
+        ));
+        bind_index += 1;
+    }
+
     let where_clause = if conditions.is_empty() {
         String::new()
     } else {
@@ -55,7 +88,7 @@ pub async fn list_milestones(
     // Determine sort order
     let sort_clause = match params.sort.as_deref() {
         Some("complete_time") => "m.complete_time DESC NULLS LAST",
-        Some("disburse_time") => "m.disburse_time DESC NULLS LAST",
+        Some("withdraw_time") => "m.withdraw_time DESC NULLS LAST",
         Some("amount") => "m.amount_lovelace DESC NULLS LAST",
         _ => "vc.project_id, m.milestone_order",
     };
@@ -65,7 +98,7 @@ pub async fn list_milestones(
         r#"
         SELECT COUNT(*)
         FROM treasury.milestones m
-        JOIN treasury.vendor_contracts vc ON vc.id = m.vendor_contract_id
+        JOIN treasury.projects vc ON vc.id = m.project_db_id
         {}
         "#,
         where_clause
@@ -73,45 +106,62 @@ pub async fn list_milestones(
     let mut count_q = sqlx::query_as::<_, (i64,)>(&count_query);
 
-    if let Some(ref status) = params.status {
-        count_q = count_q.bind(status);
+    if let Some(archived) = params.archived {
+        count_q = count_q.bind(archived);
+    }
+    if let
Some(withdrawn) = params.withdrawn {
+        count_q = count_q.bind(withdrawn);
+    }
+    if let Some(evidence_provided) = params.evidence_provided {
+        count_q = count_q.bind(evidence_provided);
+    }
     if let Some(ref project_id) = params.project_id {
         count_q = count_q.bind(project_id);
     }
+    if let Some(from_time) = params.from_time {
+        count_q = count_q.bind(from_time);
+    }
+    if let Some(to_time) = params.to_time {
+        count_q = count_q.bind(to_time);
+    }
 
-    let (total_count,) = count_q
-        .fetch_one(&pool)
-        .await
-        .map_err(|e| {
-            tracing::error!("Database query error: {}", e);
-            StatusCode::INTERNAL_SERVER_ERROR
-        })?;
+    let (total_count,) = count_q.fetch_one(&pool).await?;
 
     // Get data
     let data_query = format!(
         r#"
         SELECT
             m.id,
-            m.vendor_contract_id,
+            m.project_db_id,
             m.milestone_id,
             m.milestone_order,
             m.label,
             m.description,
             m.acceptance_criteria,
             m.amount_lovelace,
-            m.status,
+            m.time_limit,
+            m.withdrawn,
+            m.evidence_provided,
+            m.paused,
+            m.archived,
             m.complete_tx_hash,
             m.complete_time,
             m.complete_description,
             m.evidence,
-            m.disburse_tx_hash,
-            m.disburse_time,
-            m.disburse_amount,
+            m.withdraw_tx_hash,
+            m.withdraw_time,
+            m.withdraw_amount,
+            m.archived_by_tx_hash,
+            m.archived_at,
+            m.superseded_by,
+            (SELECT tx_hash FROM treasury.events WHERE milestone_id = m.id AND event_type = 'pause' ORDER BY block_time DESC LIMIT 1) AS last_pause_tx_hash,
+            (SELECT block_time FROM treasury.events WHERE milestone_id = m.id AND event_type = 'pause' ORDER BY block_time DESC LIMIT 1) AS last_pause_time,
+            (SELECT tx_hash FROM treasury.events WHERE milestone_id = m.id AND event_type = 'resume' ORDER BY block_time DESC LIMIT 1) AS last_resume_tx_hash,
+            (SELECT block_time FROM treasury.events WHERE milestone_id = m.id AND event_type = 'resume' ORDER BY block_time DESC LIMIT 1) AS last_resume_time,
             vc.project_id,
             vc.project_name
         FROM treasury.milestones m
-        JOIN treasury.vendor_contracts vc ON vc.id = m.vendor_contract_id
+        JOIN treasury.projects vc ON vc.id = m.project_db_id
         {}
         ORDER BY {}
         LIMIT ${} OFFSET ${}
@@ -124,33 +174,40 @@ pub async fn list_milestones(
     let mut data_q = sqlx::query_as::<_, MilestoneRow>(&data_query);
 
-    if let Some(ref status) = params.status {
-        data_q = data_q.bind(status);
+    if let Some(archived) = params.archived {
+        data_q = data_q.bind(archived);
+    }
+    if let Some(withdrawn) = params.withdrawn {
+        data_q = data_q.bind(withdrawn);
+    }
+    if let Some(evidence_provided) = params.evidence_provided {
+        data_q = data_q.bind(evidence_provided);
     }
     if let Some(ref project_id) = params.project_id {
         data_q = data_q.bind(project_id);
     }
+    if let Some(from_time) = params.from_time {
+        data_q = data_q.bind(from_time);
+    }
+    if let Some(to_time) = params.to_time {
+        data_q = data_q.bind(to_time);
+    }
 
-    let rows = data_q
-        .bind(limit_i64)
-        .bind(offset)
-        .fetch_all(&pool)
-        .await
-        .map_err(|e| {
-            tracing::error!("Database query error: {}", e);
-            StatusCode::INTERNAL_SERVER_ERROR
-        })?;
+    let rows = data_q.bind(limit_i64).bind(offset).fetch_all(&pool).await?;
 
     let milestones: Vec<MilestoneResponse> = rows.into_iter().map(MilestoneResponse::from).collect();
     Ok(Json(PaginatedResponse::new(milestones, page, limit, total_count)))
 }
 
-/// Get a specific milestone by ID
+/// Get a specific milestone by integer database ID
 ///
-/// Returns detailed information about a specific milestone.
+/// Returns detailed information about a specific milestone. The integer
+/// database ID is rarely useful to clients; prefer
+/// `/api/v1/milestones/{project_id}` for the human-readable project-scoped
+/// list.
#[utoipa::path(
     get,
-    path = "/api/v1/milestones/{id}",
+    path = "/api/v1/milestones/by-id/{id}",
     params(
         ("id" = i32, Path, description = "Milestone database ID")
     ),
@@ -163,41 +220,134 @@ pub async fn list_milestones(
 pub async fn get_milestone(
     Extension(pool): Extension<PgPool>,
     Path(id): Path<i32>,
-) -> Result<Json<ApiResponse<MilestoneResponse>>, StatusCode> {
+) -> Result<Json<ApiResponse<MilestoneResponse>>, ApiError> {
     let row = sqlx::query_as::<_, MilestoneRow>(
         r#"
         SELECT
             m.id,
-            m.vendor_contract_id,
+            m.project_db_id,
             m.milestone_id,
             m.milestone_order,
             m.label,
             m.description,
             m.acceptance_criteria,
             m.amount_lovelace,
-            m.status,
+            m.time_limit,
+            m.withdrawn,
+            m.evidence_provided,
+            m.paused,
+            m.archived,
             m.complete_tx_hash,
             m.complete_time,
             m.complete_description,
             m.evidence,
-            m.disburse_tx_hash,
-            m.disburse_time,
-            m.disburse_amount,
+            m.withdraw_tx_hash,
+            m.withdraw_time,
+            m.withdraw_amount,
+            m.archived_by_tx_hash,
+            m.archived_at,
+            m.superseded_by,
+            (SELECT tx_hash FROM treasury.events WHERE milestone_id = m.id AND event_type = 'pause' ORDER BY block_time DESC LIMIT 1) AS last_pause_tx_hash,
+            (SELECT block_time FROM treasury.events WHERE milestone_id = m.id AND event_type = 'pause' ORDER BY block_time DESC LIMIT 1) AS last_pause_time,
+            (SELECT tx_hash FROM treasury.events WHERE milestone_id = m.id AND event_type = 'resume' ORDER BY block_time DESC LIMIT 1) AS last_resume_tx_hash,
+            (SELECT block_time FROM treasury.events WHERE milestone_id = m.id AND event_type = 'resume' ORDER BY block_time DESC LIMIT 1) AS last_resume_time,
             vc.project_id,
             vc.project_name
         FROM treasury.milestones m
-        JOIN treasury.vendor_contracts vc ON vc.id = m.vendor_contract_id
+        JOIN treasury.projects vc ON vc.id = m.project_db_id
         WHERE m.id = $1
         "#
     )
     .bind(id)
     .fetch_optional(&pool)
-    .await
-    .map_err(|e| {
-        tracing::error!("Database query error: {}", e);
-        StatusCode::INTERNAL_SERVER_ERROR
-    })?
-    .ok_or(StatusCode::NOT_FOUND)?;
+    .await?
+    .ok_or_else(|| ApiError::NotFound(format!("milestone `{}` not found", id)))?;
 
     Ok(Json(ApiResponse::new(MilestoneResponse::from(row))))
 }
+
+/// List milestones for a project (paginated)
+///
+/// Convenience endpoint mirroring `/api/v1/projects/{project_id}/milestones`,
+/// served under `/api/v1/milestones/{project_id}` for clients that prefer
+/// the milestones-rooted hierarchy.
+#[utoipa::path(
+    get,
+    path = "/api/v1/milestones/{project_id}",
+    params(
+        ("project_id" = String, Path, description = "Project identifier (e.g., EC-0008-25)"),
+        PaginationQuery
+    ),
+    responses(
+        (status = 200, description = "Milestones for the project", body = PaginatedResponse<Vec<MilestoneResponse>>),
+        (status = 404, description = "Project not found", body = crate::errors::ApiErrorBody)
+    ),
+    tag = "Milestones"
+)]
+pub async fn list_milestones_by_project(
+    Extension(pool): Extension<PgPool>,
+    Path(project_id): Path<String>,
+    Query(params): Query<PaginationQuery>,
+) -> Result<Json<PaginatedResponse<Vec<MilestoneResponse>>>, ApiError> {
+    let page = params.page.max(1);
+    let limit = params.limit.min(100).max(1);
+    let offset = ((page - 1) * limit) as i64;
+    let limit_i64 = limit as i64;
+
+    let exists = sqlx::query_as::<_, (i32,)>(
+        "SELECT id FROM treasury.projects WHERE project_id = $1",
+    )
+    .bind(&project_id)
+    .fetch_optional(&pool)
+    .await?;
+    if exists.is_none() {
+        return Err(ApiError::NotFound(format!(
+            "project `{}` not found",
+            project_id
+        )));
+    }
+
+    let (total_count,): (i64,) = sqlx::query_as(
+        r#"
+        SELECT COUNT(*)
+        FROM treasury.milestones m
+        JOIN treasury.projects p ON p.id = m.project_db_id
+        WHERE p.project_id = $1 AND NOT m.archived
+        "#,
+    )
+    .bind(&project_id)
+    .fetch_one(&pool)
+    .await?;
+
+    let rows = sqlx::query_as::<_, MilestoneRow>(
+        r#"
+        SELECT
+            m.id, m.project_db_id, m.milestone_id, m.milestone_order,
+            m.label, m.description, m.acceptance_criteria,
+            m.amount_lovelace, m.time_limit,
+            m.withdrawn, m.evidence_provided, m.paused, m.archived,
+            m.complete_tx_hash, m.complete_time, m.complete_description, m.evidence,
+            m.withdraw_tx_hash, m.withdraw_time, m.withdraw_amount,
+            m.archived_by_tx_hash, m.archived_at, m.superseded_by,
+            (SELECT tx_hash FROM treasury.events WHERE milestone_id = m.id AND event_type = 'pause' ORDER BY block_time DESC LIMIT 1) AS last_pause_tx_hash,
+            (SELECT block_time FROM treasury.events WHERE milestone_id = m.id AND event_type = 'pause' ORDER BY block_time DESC LIMIT 1) AS last_pause_time,
+            (SELECT tx_hash FROM treasury.events WHERE milestone_id = m.id AND event_type = 'resume' ORDER BY block_time DESC LIMIT 1) AS last_resume_tx_hash,
+            (SELECT block_time FROM treasury.events WHERE milestone_id = m.id AND event_type = 'resume' ORDER BY block_time DESC LIMIT 1) AS last_resume_time,
+            p.project_id, p.project_name
+        FROM treasury.milestones m
+        JOIN treasury.projects p ON p.id = m.project_db_id
+        WHERE p.project_id = $1 AND NOT m.archived
+        ORDER BY m.milestone_order
+        LIMIT $2 OFFSET $3
+        "#,
+    )
+    .bind(&project_id)
+    .bind(limit_i64)
+    .bind(offset)
+    .fetch_all(&pool)
+    .await?;
+
+    let milestones: Vec<MilestoneResponse> =
+        rows.into_iter().map(MilestoneResponse::from).collect();
+    Ok(Json(PaginatedResponse::new(milestones, page, limit, total_count)))
+}
diff --git a/api/src/routes/v1/mod.rs b/api/src/routes/v1/mod.rs
index 7762c6b..93c286c 100644
--- a/api/src/routes/v1/mod.rs
+++ b/api/src/routes/v1/mod.rs
@@ -1,13 +1,16 @@
 //! V1 API Routes
 //!
-//! New API design with:
-//! - Consistent response envelopes
-//! - Pagination support
-//! - Both lovelace and ADA amounts
-//! - Raw and parsed metadata
+//! Conventions:
+//! - Consistent response envelopes (`ApiResponse` / `PaginatedResponse`).
+//! - Errors return a parallel `ApiErrorBody` envelope.
+//! - Pagination on every list endpoint.
+//! - Amounts in lovelace only (1 ADA = 1,000,000 lovelace).
+//! - On-chain block times are paired `{unix, iso}`; server-side timestamps stay ISO.
+//! - Raw and parsed metadata.
 pub mod treasury;
-pub mod vendor_contracts;
+pub mod vendor_contract;
+pub mod projects;
 pub mod milestones;
 pub mod events;
 pub mod statistics;
@@ -19,19 +22,23 @@ pub fn router() -> Router {
     Router::new()
         // Status endpoint
        .route("/status", get(status::get_status))
-        // Treasury endpoints
+        // Treasury endpoint (the singleton TRSC)
         .route("/treasury", get(treasury::get_treasury))
         .route("/treasury/utxos", get(treasury::get_treasury_utxos))
         .route("/treasury/events", get(treasury::get_treasury_events))
-        // Vendor contracts endpoints
-        .route("/vendor-contracts", get(vendor_contracts::list_vendor_contracts))
-        .route("/vendor-contracts/:project_id", get(vendor_contracts::get_vendor_contract))
-        .route("/vendor-contracts/:project_id/milestones", get(vendor_contracts::get_vendor_contract_milestones))
-        .route("/vendor-contracts/:project_id/events", get(vendor_contracts::get_vendor_contract_events))
-        .route("/vendor-contracts/:project_id/utxos", get(vendor_contracts::get_vendor_contract_utxos))
+        // Vendor contract endpoint (the singleton shared PSSC)
+        .route("/vendor-contract", get(vendor_contract::get_vendor_contract))
+        .route("/vendor-contract/utxos", get(vendor_contract::get_vendor_contract_utxos))
+        // Project endpoints (one per fund event; 42 of these for our deployment)
+        .route("/projects", get(projects::list_projects))
+        .route("/projects/:project_id", get(projects::get_project))
+        .route("/projects/:project_id/milestones", get(projects::get_project_milestones))
+        .route("/projects/:project_id/events", get(projects::get_project_events))
+        .route("/projects/:project_id/utxos", get(projects::get_project_utxos))
         // Milestones endpoints
         .route("/milestones", get(milestones::list_milestones))
-        .route("/milestones/:id", get(milestones::get_milestone))
+        .route("/milestones/by-id/:id", get(milestones::get_milestone))
+        .route("/milestones/:project_id", get(milestones::list_milestones_by_project))
         // Events endpoints
         .route("/events", get(events::list_events))
 .route("/events/recent", get(events::get_recent_events))
@@ -41,10 +48,15 @@ pub fn router() -> Router {
 }
 
 pub mod status {
-    use axum::{extract::Extension, http::StatusCode, response::Json};
+    use axum::{extract::Extension, response::Json};
     use sqlx::PgPool;
+    use std::collections::HashMap;
 
-    use crate::models::v1::{ApiResponse, StatusResponse};
+    use crate::errors::ApiError;
+    use crate::models::time::ChainTime;
+    use crate::models::v1::{
+        ApiResponse, ChainStatus, DatabaseStatus, StatusResponse, SyncStatusBlock, TotalsBlock,
+    };
 
     /// Get API status and sync information
     #[utoipa::path(
@@ -57,46 +69,75 @@ pub mod status {
     )]
     pub async fn get_status(
         Extension(pool): Extension<PgPool>,
-    ) -> Result<Json<ApiResponse<StatusResponse>>, StatusCode> {
+    ) -> Result<Json<ApiResponse<StatusResponse>>, ApiError> {
-        // Get sync status
-        let sync_row = sqlx::query_as::<_, (Option<i64>, Option<i64>, Option<chrono::DateTime<chrono::Utc>>)>(
-            "SELECT last_slot, last_block, updated_at FROM treasury.sync_status WHERE sync_type = 'events'"
+        // Sync heartbeat (server-side ISO)
+        let heartbeat: Option<chrono::DateTime<chrono::Utc>> = sqlx::query_scalar(
+            "SELECT updated_at FROM treasury.sync_status WHERE sync_type = 'events'",
         )
         .fetch_optional(&pool)
+        .await?
+        .flatten();
+
+        // Most recent TOM event processed (on-chain ChainTime)
+        let last_event_block_time: Option<i64> = sqlx::query_scalar(
+            "SELECT MAX(block_time) FROM treasury.events",
+        )
+        .fetch_one(&pool)
         .await
-        .map_err(|e| {
-            tracing::error!("Database query error: {}", e);
-            StatusCode::INTERNAL_SERVER_ERROR
-        })?;
+        .unwrap_or(None);
 
-        let (last_slot, last_block, last_sync_time) = sync_row.unwrap_or((None, None, None));
+        // Indexer cursor + block time of that block
+        let cursor: Option<(Option<i64>, Option<i64>, Option<i64>)> = sqlx::query_as(
+            r#"
+            SELECT c.block_number, c.slot, b.block_time
+            FROM yaci_store.cursor_ c
+            LEFT JOIN yaci_store.block b ON b.number = c.block_number
+            ORDER BY c.slot DESC
+            LIMIT 1
+            "#,
+        )
+        .fetch_optional(&pool)
+        .await
+        .unwrap_or(None);
+        let (indexer_block, indexer_slot, indexer_block_time) = cursor.unwrap_or((None, None, None));
 
-        // Get event count
+        // Totals
         let (total_events,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM treasury.events")
             .fetch_one(&pool)
-            .await
-            .map_err(|e| {
-                tracing::error!("Database query error: {}", e);
-                StatusCode::INTERNAL_SERVER_ERROR
-            })?;
+            .await?;
 
-        // Get vendor contract count
-        let (total_vendor_contracts,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM treasury.vendor_contracts")
-            .fetch_one(&pool)
-            .await
-            .map_err(|e| {
-                tracing::error!("Database query error: {}", e);
-                StatusCode::INTERNAL_SERVER_ERROR
-            })?;
+        let (total_projects,): (i64,) =
+            sqlx::query_as("SELECT COUNT(*) FROM treasury.projects")
+                .fetch_one(&pool)
+                .await?;
+
+        let by_type_rows: Vec<(String, i64)> = sqlx::query_as(
+            "SELECT event_type, COUNT(*) FROM treasury.events GROUP BY event_type ORDER BY 1",
+        )
+        .fetch_all(&pool)
+        .await?;
+        let events_by_type: HashMap<String, i64> = by_type_rows.into_iter().collect();
 
         Ok(Json(ApiResponse::new(StatusResponse {
-            api_version: "1.0.0".to_string(),
-            database_connected: true,
-            last_sync_slot: last_slot,
-            last_sync_block: last_block,
-            last_sync_time: last_sync_time.map(|t| t.timestamp()),
-            total_events,
-            total_vendor_contracts,
+            api_version: "2.0.0".to_string(),
+            database: DatabaseStatus {
+                connected: true,
+                checked_at: chrono::Utc::now(),
+            },
+            sync: SyncStatusBlock {
+                heartbeat,
+                last_event_processed: ChainTime::maybe_from_secs(last_event_block_time),
+            },
+            chain: ChainStatus {
+                indexer_block,
+                indexer_slot,
+                indexer_time: ChainTime::maybe_from_secs(indexer_block_time),
+            },
+            totals: TotalsBlock {
+                events: total_events,
+                projects: total_projects,
+                events_by_type,
+            },
         })))
     }
 }
diff --git a/api/src/routes/v1/projects.rs b/api/src/routes/v1/projects.rs
new file mode 100644
index 0000000..d233711
--- /dev/null
+++ b/api/src/routes/v1/projects.rs
@@ -0,0 +1,472 @@
+//! Vendor Contracts (Projects) endpoints
+
+use axum::{
+    extract::{Extension, Path, Query},
+    response::Json,
+};
+use sqlx::PgPool;
+
+use crate::errors::ApiError;
+use crate::models::v1::{
+    ApiResponse, EventResponse, EventWithContextRow, MilestoneResponse, MilestoneRow,
+    PaginatedResponse, PaginationQuery, ProjectCurrentUtxo, ProjectDetail, ProjectEventsQuery,
+    ProjectSummary, ProjectSummaryRow, ProjectsQuery, UtxoResponse, UtxoRow,
+};
+
+/// List all vendor contracts
+///
+/// Returns a paginated list of vendor contracts with filtering and search support.
+#[utoipa::path(
+    get,
+    path = "/api/v1/projects",
+    params(ProjectsQuery),
+    responses(
+        (status = 200, description = "List of vendor contracts", body = PaginatedResponse<Vec<ProjectSummary>>)
+    ),
+    tag = "Projects"
+)]
+pub async fn list_projects(
+    Extension(pool): Extension<PgPool>,
+    Query(params): Query<ProjectsQuery>,
+) -> Result<Json<PaginatedResponse<Vec<ProjectSummary>>>, ApiError> {
+    let page = params.page.max(1);
+    let limit = params.limit.min(100).max(1);
+    let offset = ((page - 1) * limit) as i64;
+    let limit_i64 = limit as i64;
+
+    let mut conditions = Vec::new();
+    let mut bind_index = 1;
+
+    if params.status.is_some() {
+        conditions.push(format!("status = ${}", bind_index));
+        bind_index += 1;
+    }
+
+    if params.search.is_some() {
+        conditions.push(format!(
+            "(project_id ILIKE ${0} OR project_name ILIKE ${0} OR description ILIKE ${0})",
+            bind_index
+        ));
+        bind_index += 1;
+    }
+
+    if params.from_time.is_some() {
+        conditions.push(format!("fund_block_time >= ${}", bind_index));
+        bind_index += 1;
+    }
+
+    if params.to_time.is_some() {
+        conditions.push(format!("fund_block_time <= ${}", bind_index));
+        bind_index += 1;
+    }
+
+    let where_clause = if conditions.is_empty() {
+        String::new()
+    } else {
+        format!("WHERE {}", conditions.join(" AND "))
+    };
+
+    let sort_field = match params.sort.as_deref() {
+        Some("project_id") => "project_id",
+        Some("project_name") => "project_name",
+        Some("initial_amount") => "initial_amount_lovelace",
+        _ => "fund_block_time",
+    };
+    let sort_order = match params.order.as_deref() {
+        Some("asc") => "ASC",
+        _ => "DESC",
+    };
+
+    let count_query = format!(
+        "SELECT COUNT(*) FROM treasury.v_projects_summary {}",
+        where_clause
+    );
+
+    let mut count_q = sqlx::query_as::<_, (i64,)>(&count_query);
+
+    if let Some(ref status) = params.status {
+        count_q = count_q.bind(status);
+    }
+    if let Some(ref search) = params.search {
+        count_q = count_q.bind(format!("%{}%", search));
+    }
+    if let Some(from_time) = params.from_time {
+        count_q = count_q.bind(from_time);
+    }
+    if let Some(to_time) = params.to_time {
+        count_q = count_q.bind(to_time);
+    }
+
+    let (total_count,) = count_q.fetch_one(&pool).await?;
+
+    let data_query = format!(
+        r#"
+        SELECT *
+        FROM treasury.v_projects_summary
+        {}
+        ORDER BY {} {} NULLS LAST
+        LIMIT ${} OFFSET ${}
+        "#,
+        where_clause,
+        sort_field,
+        sort_order,
+        bind_index,
+        bind_index + 1
+    );
+
+    let mut data_q = sqlx::query_as::<_, ProjectSummaryRow>(&data_query);
+
+    if let Some(ref status) = params.status {
+        data_q = data_q.bind(status);
+    }
+    if let Some(ref search) = params.search {
+        data_q = data_q.bind(format!("%{}%", search));
+    }
+    if let Some(from_time) = params.from_time {
+        data_q = data_q.bind(from_time);
+    }
+    if let Some(to_time) = params.to_time {
+        data_q = data_q.bind(to_time);
+    }
+
+    let rows = data_q.bind(limit_i64).bind(offset).fetch_all(&pool).await?;
+
+    let contracts: Vec<ProjectSummary> = rows.into_iter().map(ProjectSummary::from).collect();
+    Ok(Json(PaginatedResponse::new(contracts, page, limit, total_count)))
+}
+
+/// Get a specific vendor contract by project ID
+#[utoipa::path(
+    get,
+    path = "/api/v1/projects/{project_id}",
+    params(
+        ("project_id" = String, Path, description = "Project identifier (e.g., EC-0008-25)")
+    ),
+    responses(
+        (status = 200, description = "Vendor contract details", body = ApiResponse<ProjectDetail>),
+        (status = 404, description = "Vendor contract not found", body = crate::errors::ApiErrorBody)
+    ),
+    tag = "Projects"
+)]
+pub async fn get_project(
+    Extension(pool): Extension<PgPool>,
+    Path(project_id): Path<String>,
+) -> Result<Json<ApiResponse<ProjectDetail>>, ApiError> {
+    let row = sqlx::query_as::<_, ProjectSummaryRow>(
+        r#"
+        SELECT *
+        FROM treasury.v_projects_summary
+        WHERE project_id = $1
+        "#,
+    )
+    .bind(&project_id)
+    .fetch_optional(&pool)
+    .await?
+    .ok_or_else(|| ApiError::NotFound(format!("vendor contract `{}` not found", project_id)))?;
+
+    let project_db_id = row.id;
+    let mut detail = ProjectDetail::from(row);
+
+    // Inline the project's currently-unspent UTxOs so callers don't need a
+    // second round trip to /projects/:id/utxos for the live state. Same
+    // unspent-source-of-truth pattern as that endpoint.
+    detail.current_utxos = sqlx::query_as::<_, ProjectCurrentUtxo>(
+        r#"
+        SELECT
+            au.tx_hash,
+            au.output_index,
+            au.lovelace_amount,
+            au.slot
+        FROM yaci_store.address_utxo au
+        JOIN treasury.utxo_history uh
+            ON uh.tx_hash = au.tx_hash AND uh.output_index = au.output_index
+        WHERE uh.project_db_id = $1
+            AND NOT EXISTS (
+                SELECT 1 FROM yaci_store.tx_input ti
+                WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index
+            )
+        ORDER BY au.slot DESC, au.tx_hash ASC, au.output_index ASC
+        "#,
+    )
+    .bind(project_db_id)
+    .fetch_all(&pool)
+    .await?;
+
+    Ok(Json(ApiResponse::new(detail)))
+}
+
+/// Get milestones for a vendor contract (paginated)
+#[utoipa::path(
+    get,
+    path = "/api/v1/projects/{project_id}/milestones",
+    params(
+        ("project_id" = String, Path, description = "Project identifier"),
+        PaginationQuery
+    ),
+    responses(
+        (status = 200, description = "Project milestones", body = PaginatedResponse<Vec<MilestoneResponse>>),
+        (status = 404, description = "Vendor contract not found", body = crate::errors::ApiErrorBody)
+    ),
+    tag = "Projects"
+)]
+pub async fn get_project_milestones(
+    Extension(pool): Extension<PgPool>,
+    Path(project_id): Path<String>,
+    Query(params): Query<PaginationQuery>,
+) -> Result<Json<PaginatedResponse<Vec<MilestoneResponse>>>, ApiError> {
+    let page = params.page.max(1);
+    let limit = params.limit.min(100).max(1);
+    let offset = ((page - 1) * limit) as i64;
+    let limit_i64 = limit as i64;
+
+    let exists = sqlx::query_as::<_, (i32,)>(
+        "SELECT id FROM treasury.projects WHERE project_id = $1",
+    )
+    .bind(&project_id)
+    .fetch_optional(&pool)
+    .await?;
+
+    if exists.is_none() {
+        return Err(ApiError::NotFound(format!(
+            "vendor contract `{}` not found",
+            project_id
+        )));
+    }
+
+    let (total_count,): (i64,) = sqlx::query_as(
+        r#"
+        SELECT COUNT(*)
+        FROM treasury.milestones m
+        JOIN treasury.projects vc ON vc.id = m.project_db_id
+        WHERE vc.project_id = $1 AND NOT m.archived
+        "#,
+    )
+    .bind(&project_id)
+    .fetch_one(&pool)
+    .await?;
+
+    let rows = sqlx::query_as::<_, MilestoneRow>(
+        r#"
+        SELECT
+            m.id, m.project_db_id, m.milestone_id, m.milestone_order,
+            m.label, m.description, m.acceptance_criteria,
+            m.amount_lovelace, m.time_limit,
+            m.withdrawn, m.evidence_provided, m.paused, m.archived,
+            m.complete_tx_hash, m.complete_time, m.complete_description, m.evidence,
+            m.withdraw_tx_hash, m.withdraw_time, m.withdraw_amount,
+            m.archived_by_tx_hash, m.archived_at, m.superseded_by,
+            (SELECT tx_hash FROM treasury.events WHERE milestone_id = m.id AND event_type = 'pause' ORDER BY block_time DESC LIMIT 1) AS last_pause_tx_hash,
+            (SELECT block_time FROM treasury.events WHERE milestone_id = m.id AND event_type = 'pause' ORDER BY block_time DESC LIMIT 1) AS last_pause_time,
+            (SELECT tx_hash FROM treasury.events WHERE milestone_id = m.id AND event_type = 'resume' ORDER BY block_time DESC LIMIT 1) AS last_resume_tx_hash,
+            (SELECT block_time FROM treasury.events WHERE milestone_id = m.id AND event_type = 'resume' ORDER BY block_time DESC LIMIT 1) AS last_resume_time,
+            vc.project_id, vc.project_name
+        FROM treasury.milestones m
+        JOIN treasury.projects vc ON vc.id = m.project_db_id
+        WHERE vc.project_id = $1 AND NOT m.archived
+        ORDER BY m.milestone_order
+        LIMIT $2 OFFSET $3
+        "#,
+    )
+    .bind(&project_id)
+    .bind(limit_i64)
+    .bind(offset)
+    .fetch_all(&pool)
+    .await?;
+
+    let milestones: Vec<MilestoneResponse> = rows.into_iter().map(MilestoneResponse::from).collect();
+    Ok(Json(PaginatedResponse::new(milestones, page, limit, total_count)))
+}
+
+/// Get events for a vendor contract
+#[utoipa::path(
+    get,
+    path = "/api/v1/projects/{project_id}/events",
+    params(
+        ("project_id" = String, Path, description = "Project identifier"),
+        ProjectEventsQuery
+    ),
+    responses(
+        (status = 200, description = "Project events", body = PaginatedResponse<Vec<EventResponse>>),
+        (status = 404, description = "Vendor contract not found", body = crate::errors::ApiErrorBody)
+    ),
+    tag = "Projects"
+)]
+pub async fn get_project_events(
+    Extension(pool): Extension<PgPool>,
+    Path(project_id): Path<String>,
+    Query(params): Query<ProjectEventsQuery>,
+) -> Result<Json<PaginatedResponse<Vec<EventResponse>>>, ApiError> {
+    let page = params.page.max(1);
+    let limit = params.limit.min(100).max(1);
+    let offset = ((page - 1) * limit) as i64;
+    let limit_i64 = limit as i64;
+
+    let exists = sqlx::query_as::<_, (i32,)>(
+        "SELECT id FROM treasury.projects WHERE project_id = $1",
+    )
+    .bind(&project_id)
+    .fetch_optional(&pool)
+    .await?;
+
+    if exists.is_none() {
+        return Err(ApiError::NotFound(format!(
+            "vendor contract `{}` not found",
+            project_id
+        )));
+    }
+
+    let (total_count, rows) = if let Some(ref event_type) = params.event_type {
+        let (count,): (i64,) = sqlx::query_as(
+            r#"
+            SELECT COUNT(*)
+            FROM treasury.v_events_with_context
+            WHERE project_id = $1 AND event_type = $2
+            "#,
+        )
+        .bind(&project_id)
+        .bind(event_type)
+        .fetch_one(&pool)
+        .await?;
+
+        let rows = sqlx::query_as::<_, EventWithContextRow>(
+            r#"
+            SELECT *
+            FROM treasury.v_events_with_context
+            WHERE project_id = $1 AND event_type = $2
+            ORDER BY block_time DESC
+            LIMIT $3 OFFSET $4
+            "#,
+        )
+        .bind(&project_id)
+        .bind(event_type)
+        .bind(limit_i64)
+        .bind(offset)
+        .fetch_all(&pool)
+        .await?;
+
+        (count, rows)
+    } else {
+        let (count,): (i64,) = sqlx::query_as(
+            r#"
+            SELECT COUNT(*)
+            FROM treasury.v_events_with_context
+            WHERE project_id = $1
+            "#,
+        )
+        .bind(&project_id)
+        .fetch_one(&pool)
+        .await?;
+
+        let rows = sqlx::query_as::<_, EventWithContextRow>(
+            r#"
+            SELECT *
+            FROM treasury.v_events_with_context
+            WHERE project_id = $1
+            ORDER BY block_time DESC
+            LIMIT $2 OFFSET $3
+            "#,
+        )
+        .bind(&project_id)
+        .bind(limit_i64)
+        .bind(offset)
+        .fetch_all(&pool)
+        .await?;
+
+        (count, rows)
+    };
+
let events: Vec<EventResponse> = rows.into_iter().map(EventResponse::from).collect(); + Ok(Json(PaginatedResponse::new(events, page, limit, total_count))) +} + +/// Get UTXOs for a vendor contract (paginated) +#[utoipa::path( + get, + path = "/api/v1/projects/{project_id}/utxos", + params( + ("project_id" = String, Path, description = "Project identifier"), + PaginationQuery + ), + responses( + (status = 200, description = "Project UTXOs", body = PaginatedResponse<Vec<UtxoResponse>>), + (status = 404, description = "Vendor contract not found", body = crate::errors::ApiErrorBody) + ), + tag = "Projects" +)] +pub async fn get_project_utxos( + Extension(pool): Extension<PgPool>, + Path(project_id): Path<String>, + Query(params): Query<PaginationQuery>, +) -> Result<Json<PaginatedResponse<Vec<UtxoResponse>>>, ApiError> { + let page = params.page.max(1); + let limit = params.limit.min(100).max(1); + let offset = ((page - 1) * limit) as i64; + let limit_i64 = limit as i64; + + let exists = sqlx::query_as::<_, (i32,)>( + "SELECT id FROM treasury.projects WHERE project_id = $1", + ) + .bind(&project_id) + .fetch_optional(&pool) + .await?; + + if exists.is_none() { + return Err(ApiError::NotFound(format!( + "vendor contract `{}` not found", + project_id + ))); + } + + // Source of truth for "currently unspent" is yaci_store.address_utxo (rows + // pruned on spend; anti-join against tx_input handles the pruning-window lag). + // utxo_history is used only for project attribution.
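An aside on the comment above: the anti-join that decides "currently unspent" can be sketched in plain Rust, where an output is live only if no input references its `(tx_hash, output_index)` key. A minimal stdlib sketch under stated assumptions (the tuple shapes are illustrative stand-ins, not the real `address_utxo`/`tx_input` row types):

```rust
use std::collections::HashSet;

/// Stand-in for a yaci_store.address_utxo row: (tx_hash, output_index, lovelace).
type Output = (&'static str, i32, i64);

/// Keep only outputs whose (tx_hash, output_index) never appears as a spent
/// input — the same predicate as `NOT EXISTS (SELECT 1 FROM tx_input ti ...)`.
fn live_outputs(outputs: &[Output], spent: &HashSet<(&str, i32)>) -> Vec<Output> {
    outputs
        .iter()
        .filter(|(tx, ix, _)| !spent.contains(&(*tx, *ix)))
        .copied()
        .collect()
}

fn main() {
    let outputs: Vec<Output> = vec![("aa", 0, 5_000_000), ("aa", 1, 2_000_000), ("bb", 0, 7_000_000)];
    // ("aa", 1) has been consumed by a later transaction.
    let spent: HashSet<(&str, i32)> = [("aa", 1)].into_iter().collect();
    let live = live_outputs(&outputs, &spent);
    assert_eq!(live.len(), 2);
    let balance: i64 = live.iter().map(|(_, _, l)| l).sum();
    println!("live balance: {balance}"); // 5_000_000 + 7_000_000
}
```

The SQL version has one extra wrinkle the sketch omits: yaci's pruner eventually deletes spent rows from `address_utxo` outright, so the anti-join only matters during the pruning-window lag.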
+ let (total_count,): (i64,) = sqlx::query_as( + r#" + SELECT COUNT(*) + FROM yaci_store.address_utxo au + JOIN treasury.utxo_history uh + ON uh.tx_hash = au.tx_hash AND uh.output_index = au.output_index + JOIN treasury.projects vc ON vc.id = uh.project_db_id + WHERE vc.project_id = $1 + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + "#, + ) + .bind(&project_id) + .fetch_one(&pool) + .await?; + + let rows = sqlx::query_as::<_, UtxoRow>( + r#" + SELECT + au.tx_hash, + au.output_index, + au.owner_addr AS address, + uh.address_type, + au.lovelace_amount, + au.slot, + au.block AS block_number + FROM yaci_store.address_utxo au + JOIN treasury.utxo_history uh + ON uh.tx_hash = au.tx_hash AND uh.output_index = au.output_index + JOIN treasury.projects vc ON vc.id = uh.project_db_id + WHERE vc.project_id = $1 + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + ORDER BY au.slot DESC + LIMIT $2 OFFSET $3 + "#, + ) + .bind(&project_id) + .bind(limit_i64) + .bind(offset) + .fetch_all(&pool) + .await?; + + let utxos: Vec = rows.into_iter().map(UtxoResponse::from).collect(); + Ok(Json(PaginatedResponse::new(utxos, page, limit, total_count))) +} diff --git a/api/src/routes/v1/statistics.rs b/api/src/routes/v1/statistics.rs index 484580c..0b3b2d9 100644 --- a/api/src/routes/v1/statistics.rs +++ b/api/src/routes/v1/statistics.rs @@ -1,12 +1,13 @@ //! 
Statistics endpoint -use axum::{extract::Extension, http::StatusCode, response::Json}; +use axum::{extract::Extension, response::Json}; use sqlx::PgPool; use std::collections::HashMap; +use crate::errors::ApiError; use crate::models::v1::{ - lovelace_to_ada, ApiResponse, EventStats, FinancialStats, MilestoneStats, ProjectStats, - StatisticsResponse, SyncStats, TreasuryStats, + ApiResponse, EventStats, FinancialStats, MilestoneStats, ProjectStats, + StatisticsResponse, SyncStats, TreasuryStats, VendorContractStats, }; /// Get comprehensive statistics @@ -22,27 +23,18 @@ use crate::models::v1::{ )] pub async fn get_statistics( Extension(pool): Extension, -) -> Result>, StatusCode> { - // Treasury stats +) -> Result>, ApiError> { let treasury_stats = get_treasury_stats(&pool).await?; - - // Project stats + let vendor_contract_stats = get_vendor_contract_stats(&pool).await?; let project_stats = get_project_stats(&pool).await?; - - // Milestone stats let milestone_stats = get_milestone_stats(&pool).await?; - - // Event stats let event_stats = get_event_stats(&pool).await?; - - // Financial stats let financial_stats = get_financial_stats(&pool).await?; - - // Sync stats let sync_stats = get_sync_stats(&pool).await?; Ok(Json(ApiResponse::new(StatisticsResponse { treasury: treasury_stats, + vendor_contracts: vendor_contract_stats, projects: project_stats, milestones: milestone_stats, events: event_stats, @@ -51,7 +43,67 @@ pub async fn get_statistics( }))) } -async fn get_treasury_stats(pool: &PgPool) -> Result { +async fn get_vendor_contract_stats(pool: &PgPool) -> Result { + let (total_count,): (i64,) = + sqlx::query_as("SELECT COUNT(*) FROM treasury.vendor_contracts") + .fetch_one(pool) + .await?; + + let address: Option = + sqlx::query_scalar("SELECT address FROM treasury.vendor_contracts ORDER BY id LIMIT 1") + .fetch_optional(pool) + .await?; + + let (project_count,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM treasury.projects") + .fetch_one(pool) + .await?; + + 
// utxo_history_count is the lifetime capture count (script-address only). + // unspent_utxo_count + current_balance_lovelace come from yaci's live UTXO set + // (authoritative — yaci_store.address_utxo has no `spent` column, so pruned + // spent rows can't leak in; the anti-join against tx_input handles the + // pruning-window lag). + let (utxo_history_count, unspent_utxo_count, current_balance_lovelace): (i64, i64, Option<i64>) = + if let Some(ref addr) = address { + sqlx::query_as( + r#" + SELECT + (SELECT COUNT(*) FROM treasury.utxo_history WHERE address = $1)::BIGINT, + (SELECT COUNT(*) FROM yaci_store.address_utxo au + WHERE au.owner_addr = $1 + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ))::BIGINT, + (SELECT COALESCE(SUM(au.lovelace_amount), 0) FROM yaci_store.address_utxo au + WHERE au.owner_addr = $1 + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ))::BIGINT + "#, + ) + .bind(addr) + .fetch_one(pool) + .await?
+ } else { + (0, 0, Some(0)) + }; + + Ok(VendorContractStats { + total_count, + address, + project_count, + utxo_history_count, + unspent_utxo_count, + current_balance_lovelace: current_balance_lovelace.unwrap_or(0), + }) +} + +async fn get_treasury_stats(pool: &PgPool) -> Result { let row = sqlx::query_as::<_, (i64, i64)>( r#" SELECT @@ -62,10 +114,7 @@ async fn get_treasury_stats(pool: &PgPool) -> Result ) .fetch_one(pool) .await - .map_err(|e| { - tracing::error!("Database query error: {}", e); - StatusCode::INTERNAL_SERVER_ERROR - })?; + ?; // Get disbursement count from events let (disbursed_count,): (i64,) = sqlx::query_as( @@ -73,10 +122,7 @@ async fn get_treasury_stats(pool: &PgPool) -> Result ) .fetch_one(pool) .await - .map_err(|e| { - tracing::error!("Database query error: {}", e); - StatusCode::INTERNAL_SERVER_ERROR - })?; + ?; Ok(TreasuryStats { total_count: row.0, @@ -85,7 +131,7 @@ async fn get_treasury_stats(pool: &PgPool) -> Result }) } -async fn get_project_stats(pool: &PgPool) -> Result { +async fn get_project_stats(pool: &PgPool) -> Result { let row = sqlx::query_as::<_, (i64, i64, i64, i64, i64)>( r#" SELECT @@ -94,15 +140,12 @@ async fn get_project_stats(pool: &PgPool) -> Result { COUNT(*) FILTER (WHERE status = 'completed'), COUNT(*) FILTER (WHERE status = 'paused'), COUNT(*) FILTER (WHERE status = 'cancelled') - FROM treasury.vendor_contracts + FROM treasury.projects "# ) .fetch_one(pool) .await - .map_err(|e| { - tracing::error!("Database query error: {}", e); - StatusCode::INTERNAL_SERVER_ERROR - })?; + ?; Ok(ProjectStats { total_count: row.0, @@ -113,23 +156,20 @@ async fn get_project_stats(pool: &PgPool) -> Result { }) } -async fn get_milestone_stats(pool: &PgPool) -> Result { +async fn get_milestone_stats(pool: &PgPool) -> Result { let row = sqlx::query_as::<_, (i64, i64, i64, i64)>( r#" SELECT - COUNT(*), - COUNT(*) FILTER (WHERE status = 'pending'), - COUNT(*) FILTER (WHERE status = 'completed'), - COUNT(*) FILTER (WHERE status = 
'withdrawn') + COUNT(*) FILTER (WHERE NOT archived), + COUNT(*) FILTER (WHERE NOT archived AND NOT evidence_provided AND NOT withdrawn), + COUNT(*) FILTER (WHERE NOT archived AND evidence_provided AND NOT withdrawn), + COUNT(*) FILTER (WHERE NOT archived AND withdrawn) FROM treasury.milestones "# ) .fetch_one(pool) .await - .map_err(|e| { - tracing::error!("Database query error: {}", e); - StatusCode::INTERNAL_SERVER_ERROR - })?; + ?; Ok(MilestoneStats { total_count: row.0, @@ -139,98 +179,110 @@ async fn get_milestone_stats(pool: &PgPool) -> Result Result { - // Get total count - let (total_count,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM treasury.events") +async fn get_event_stats(pool: &PgPool) -> Result { + // Get processed event count + let (processed_count,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM treasury.events") .fetch_one(pool) .await - .map_err(|e| { - tracing::error!("Database query error: {}", e); - StatusCode::INTERNAL_SERVER_ERROR - })?; + ?; - // Get counts by type - let type_rows = sqlx::query_as::<_, (String, i64)>( + // Get on-chain TOM event count and breakdown by type from yaci_store. + // Event type lives at body.body.event in the metadata (see event_processor::process_event). 
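For illustration, the `COALESCE(LOWER(... ->> 'event'), 'unknown')` normalization plus the by-type rollup above can be mirrored in plain Rust. A sketch with hypothetical in-memory rows standing in for the `transaction_metadata` query results:

```rust
use std::collections::HashMap;

/// Mirror of COALESCE(LOWER(body -> 'body' ->> 'event'), 'unknown'):
/// rows with a missing event field collapse into an "unknown" bucket,
/// and casing differences collapse into one key.
fn normalize_event_type(raw: Option<&str>) -> String {
    raw.map(str::to_lowercase).unwrap_or_else(|| "unknown".to_string())
}

fn main() {
    // Hypothetical extracted metadata values, one per label-1694 tx.
    let raw_events = [Some("Fund"), Some("pause"), None, Some("FUND")];

    // GROUP BY event_type, COUNT(*) — done client-side here.
    let mut by_type: HashMap<String, i64> = HashMap::new();
    for ev in raw_events {
        *by_type.entry(normalize_event_type(ev)).or_insert(0) += 1;
    }

    // on_chain_count is just the sum over the breakdown, matching
    // `on_chain_rows.iter().map(|(_, c)| c).sum()` in the handler.
    let on_chain_count: i64 = by_type.values().sum();
    assert_eq!(by_type["fund"], 2);
    assert_eq!(by_type["unknown"], 1);
    assert_eq!(on_chain_count, 4);
}
```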
+ let on_chain_rows = sqlx::query_as::<_, (String, i64)>( r#" - SELECT event_type, COUNT(*) - FROM treasury.events - GROUP BY event_type + SELECT + COALESCE(LOWER(body::jsonb -> 'body' ->> 'event'), 'unknown') AS event_type, + COUNT(*) + FROM yaci_store.transaction_metadata + WHERE label = '1694' + GROUP BY 1 ORDER BY COUNT(*) DESC "# ) .fetch_all(pool) .await - .map_err(|e| { - tracing::error!("Database query error: {}", e); - StatusCode::INTERNAL_SERVER_ERROR - })?; + ?; - let by_type: HashMap = type_rows.into_iter().collect(); + let on_chain_count: i64 = on_chain_rows.iter().map(|(_, c)| c).sum(); + let by_type: HashMap = on_chain_rows.into_iter().collect(); Ok(EventStats { - total_count, + on_chain_count, + processed_count, by_type, }) } -async fn get_financial_stats(pool: &PgPool) -> Result { +async fn get_financial_stats(pool: &PgPool) -> Result { // Get total allocated (sum of initial amounts) let (total_allocated,): (Option,) = sqlx::query_as( - "SELECT COALESCE(SUM(initial_amount_lovelace), 0)::BIGINT FROM treasury.vendor_contracts" + "SELECT COALESCE(SUM(initial_amount_lovelace), 0)::BIGINT FROM treasury.projects" ) .fetch_one(pool) .await - .map_err(|e| { - tracing::error!("Database query error: {}", e); - StatusCode::INTERNAL_SERVER_ERROR - })?; - - // Get total disbursed - let (total_disbursed,): (Option,) = sqlx::query_as( - "SELECT COALESCE(SUM(disburse_amount), 0)::BIGINT FROM treasury.milestones WHERE status = 'disbursed'" + ?; + + // Get total withdrawn + let (total_withdrawn,): (Option,) = sqlx::query_as( + "SELECT COALESCE(SUM(withdraw_amount), 0)::BIGINT FROM treasury.milestones WHERE withdrawn AND NOT archived" ) .fetch_one(pool) .await - .map_err(|e| { - tracing::error!("Database query error: {}", e); - StatusCode::INTERNAL_SERVER_ERROR - })?; + ?; - // Get current balance (unspent UTXOs) + // Current balance: raw live unspent at TRSC + PSSC addresses, from yaci's UTXO + // set (rows pruned on spend; anti-join against tx_input handles pruning 
lag). + // We do NOT trust treasury.utxo_history.spent — pre-trigger captures and + // KI-UTX-02 non-script captures leave stale spent=FALSE rows that would + // over-count. We do NOT join through utxo_history for project attribution + // either — chain-trace gaps would *under*-count the vendor contract balance. let (current_balance,): (Option,) = sqlx::query_as( - "SELECT COALESCE(SUM(lovelace_amount), 0)::BIGINT FROM treasury.utxos WHERE NOT spent" + r#" + SELECT ( + COALESCE(( + SELECT SUM(au.lovelace_amount) + FROM yaci_store.address_utxo au + JOIN treasury.treasury_contracts tc ON tc.contract_address = au.owner_addr + WHERE NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + ), 0) + + + COALESCE(( + SELECT SUM(au.lovelace_amount) + FROM yaci_store.address_utxo au + JOIN treasury.vendor_contracts vco ON vco.address = au.owner_addr + WHERE NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + ), 0) + )::BIGINT + "# ) .fetch_one(pool) .await - .map_err(|e| { - tracing::error!("Database query error: {}", e); - StatusCode::INTERNAL_SERVER_ERROR - })?; + ?; let allocated = total_allocated.unwrap_or(0); - let disbursed = total_disbursed.unwrap_or(0); + let withdrawn = total_withdrawn.unwrap_or(0); let balance = current_balance.unwrap_or(0); Ok(FinancialStats { total_allocated_lovelace: allocated, - total_allocated_ada: lovelace_to_ada(allocated), - total_disbursed_lovelace: disbursed, - total_disbursed_ada: lovelace_to_ada(disbursed), + total_withdrawn_lovelace: withdrawn, current_balance_lovelace: balance, - current_balance_ada: lovelace_to_ada(balance), }) } -async fn get_sync_stats(pool: &PgPool) -> Result { +async fn get_sync_stats(pool: &PgPool) -> Result { let row = sqlx::query_as::<_, (Option, Option, Option>)>( "SELECT last_slot, last_block, updated_at FROM treasury.sync_status WHERE sync_type = 'events'" ) 
.fetch_optional(pool) .await - .map_err(|e| { - tracing::error!("Database query error: {}", e); - StatusCode::INTERNAL_SERVER_ERROR - })?; + ?; match row { Some((last_slot, last_block, updated_at)) => Ok(SyncStats { diff --git a/api/src/routes/v1/treasury.rs b/api/src/routes/v1/treasury.rs index 85f334d..799e4b8 100644 --- a/api/src/routes/v1/treasury.rs +++ b/api/src/routes/v1/treasury.rs @@ -2,14 +2,14 @@ use axum::{ extract::{Extension, Query}, - http::StatusCode, response::Json, }; use sqlx::PgPool; +use crate::errors::ApiError; use crate::models::v1::{ ApiResponse, EventResponse, EventWithContextRow, EventsQuery, PaginatedResponse, - TreasuryResponse, TreasurySummaryRow, UtxoResponse, UtxoRow, + PaginationQuery, TreasuryResponse, TreasurySummaryRow, UtxoResponse, UtxoRow, }; /// Get treasury contract details @@ -21,85 +21,107 @@ use crate::models::v1::{ path = "/api/v1/treasury", responses( (status = 200, description = "Treasury details", body = ApiResponse), - (status = 404, description = "No treasury found") + (status = 404, description = "No treasury found", body = crate::errors::ApiErrorBody) ), tag = "Treasury" )] pub async fn get_treasury( Extension(pool): Extension, -) -> Result>, StatusCode> { +) -> Result>, ApiError> { let row = sqlx::query_as::<_, TreasurySummaryRow>( r#" SELECT * FROM treasury.v_treasury_summary LIMIT 1 - "# + "#, ) .fetch_optional(&pool) - .await - .map_err(|e| { - tracing::error!("Database query error: {}", e); - StatusCode::INTERNAL_SERVER_ERROR - })? - .ok_or(StatusCode::NOT_FOUND)?; + .await? + .ok_or_else(|| ApiError::NotFound("treasury not found".into()))?; Ok(Json(ApiResponse::new(TreasuryResponse::from(row)))) } -/// Get treasury UTXOs +/// Get treasury UTXOs (paginated) /// /// Returns all unspent UTXOs at the treasury contract address. 
#[utoipa::path( get, path = "/api/v1/treasury/utxos", + params(PaginationQuery), responses( - (status = 200, description = "Treasury UTXOs", body = ApiResponse<Vec<UtxoResponse>>), - (status = 404, description = "No treasury found") + (status = 200, description = "Treasury UTXOs", body = PaginatedResponse<Vec<UtxoResponse>>), + (status = 404, description = "No treasury found", body = crate::errors::ApiErrorBody) ), tag = "Treasury" )] pub async fn get_treasury_utxos( Extension(pool): Extension<PgPool>, -) -> Result<Json<ApiResponse<Vec<UtxoResponse>>>, StatusCode> { + Query(params): Query<PaginationQuery>, +) -> Result<Json<PaginatedResponse<Vec<UtxoResponse>>>, ApiError> { + let page = params.page.max(1); + let limit = params.limit.min(100).max(1); + let offset = ((page - 1) * limit) as i64; + let limit_i64 = limit as i64; + // First get the treasury contract address let treasury = sqlx::query_as::<_, (Option<String>,)>( - "SELECT contract_address FROM treasury.treasury_contracts LIMIT 1" + "SELECT contract_address FROM treasury.treasury_contracts LIMIT 1", ) .fetch_optional(&pool) - .await - .map_err(|e| { - tracing::error!("Database query error: {}", e); - StatusCode::INTERNAL_SERVER_ERROR - })? - .ok_or(StatusCode::NOT_FOUND)?; + .await? + .ok_or_else(|| ApiError::NotFound("treasury not found".into()))?; + + let address = treasury + .0 + .ok_or_else(|| ApiError::NotFound("treasury contract_address not yet known".into()))?; - let address = treasury.0.ok_or(StatusCode::NOT_FOUND)?; + // Source of truth for "currently unspent" is yaci_store.address_utxo (rows are + // deleted on prune). Anti-join against tx_input handles the pruning-window lag. + // We do NOT trust treasury.utxo_history.spent — pre-trigger captures + // and KI-UTX-02 non-script captures leave stale spent=FALSE rows.
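The page/limit clamping that opens every paginated handler in this diff can be isolated as a tiny pure function. A sketch assuming `u32` query fields, which is a guess at `PaginationQuery`'s actual shape:

```rust
/// Clamp raw query params the way the handlers do: page >= 1,
/// 1 <= limit <= 100, and a zero-based SQL OFFSET derived from both.
fn clamp_pagination(page: u32, limit: u32) -> (u32, u32, i64) {
    let page = page.max(1);
    let limit = limit.min(100).max(1);
    let offset = ((page - 1) * limit) as i64;
    (page, limit, offset)
}

fn main() {
    // page=0 and limit=0 are coerced to the first page of one row,
    // so ?page=0&limit=0 can never produce a negative or zero window.
    assert_eq!(clamp_pagination(0, 0), (1, 1, 0));
    // Oversized limits are capped at 100 before the OFFSET is computed.
    assert_eq!(clamp_pagination(3, 500), (3, 100, 200));
}
```

Clamping before computing the offset is what keeps `(page - 1) * limit` from underflowing when a caller passes `page=0`.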
+ let (total_count,): (i64,) = sqlx::query_as( + r#" + SELECT COUNT(*) FROM yaci_store.address_utxo au + WHERE au.owner_addr = $1 + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + "#, + ) + .bind(&address) + .fetch_one(&pool) + .await?; let rows = sqlx::query_as::<_, UtxoRow>( r#" SELECT - tx_hash, - output_index, - address, - address_type, - lovelace_amount, - slot, - block_number - FROM treasury.utxos - WHERE address = $1 AND NOT spent - ORDER BY slot DESC - "# + au.tx_hash, + au.output_index, + au.owner_addr AS address, + 'treasury'::TEXT AS address_type, + au.lovelace_amount, + au.slot, + au.block AS block_number + FROM yaci_store.address_utxo au + WHERE au.owner_addr = $1 + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + ORDER BY au.slot DESC + LIMIT $2 OFFSET $3 + "#, ) .bind(&address) + .bind(limit_i64) + .bind(offset) .fetch_all(&pool) - .await - .map_err(|e| { - tracing::error!("Database query error: {}", e); - StatusCode::INTERNAL_SERVER_ERROR - })?; + .await?; let utxos: Vec = rows.into_iter().map(UtxoResponse::from).collect(); - Ok(Json(ApiResponse::new(utxos))) + Ok(Json(PaginatedResponse::new(utxos, page, limit, total_count))) } /// Get treasury-level events @@ -117,7 +139,7 @@ pub async fn get_treasury_utxos( pub async fn get_treasury_events( Extension(pool): Extension, Query(params): Query, -) -> Result>>, StatusCode> { +) -> Result>>, ApiError> { let page = params.page.max(1); let limit = params.limit.min(100).max(1); let offset = ((page - 1) * limit) as i64; @@ -132,18 +154,13 @@ pub async fn get_treasury_events( SELECT COUNT(*) FROM treasury.events e JOIN treasury.treasury_contracts tc ON tc.id = e.treasury_id - WHERE e.event_type = ANY($1) AND e.vendor_contract_id IS NULL - "# + WHERE e.event_type = ANY($1) AND e.project_db_id IS NULL + "#, ) .bind(&treasury_event_types) 
.fetch_one(&pool) - .await - .map_err(|e| { - tracing::error!("Database query error: {}", e); - StatusCode::INTERNAL_SERVER_ERROR - })?; + .await?; - // Get events let rows = sqlx::query_as::<_, EventWithContextRow>( r#" SELECT * @@ -151,17 +168,13 @@ pub async fn get_treasury_events( WHERE event_type = ANY($1) AND project_id IS NULL ORDER BY block_time DESC LIMIT $2 OFFSET $3 - "# + "#, ) .bind(&treasury_event_types) .bind(limit_i64) .bind(offset) .fetch_all(&pool) - .await - .map_err(|e| { - tracing::error!("Database query error: {}", e); - StatusCode::INTERNAL_SERVER_ERROR - })?; + .await?; let events: Vec = rows.into_iter().map(EventResponse::from).collect(); Ok(Json(PaginatedResponse::new(events, page, limit, total_count))) diff --git a/api/src/routes/v1/vendor_contract.rs b/api/src/routes/v1/vendor_contract.rs new file mode 100644 index 0000000..382db48 --- /dev/null +++ b/api/src/routes/v1/vendor_contract.rs @@ -0,0 +1,163 @@ +//! Vendor Contract endpoint (singleton — the shared PSSC) + +use axum::{ + extract::{Extension, Query}, + response::Json, +}; +use sqlx::PgPool; +use std::collections::HashMap; + +use crate::errors::ApiError; +use crate::models::v1::{ + ApiResponse, PaginatedResponse, PaginationQuery, ProjectUtxoResponse, ProjectUtxoRow, + VendorContractProjectsBlock, VendorContractResponse, +}; + +/// Get the shared vendor contract (PSSC) details +/// +/// Returns the singleton vendor contract — the on-chain script address +/// where every project's funds sit, distinguished only by inline datum — +/// plus a quick rollup of the projects bound to it. 
+#[utoipa::path( + get, + path = "/api/v1/vendor-contract", + responses( + (status = 200, description = "Vendor contract details", body = ApiResponse<VendorContractResponse>), + (status = 404, description = "Vendor contract not yet known", body = crate::errors::ApiErrorBody) + ), + tag = "Vendor Contract" +)] +pub async fn get_vendor_contract( + Extension(pool): Extension<PgPool>, +) -> Result<Json<ApiResponse<VendorContractResponse>>, ApiError> { + let row: Option<(String, Option<String>)> = sqlx::query_as( + "SELECT address, stake_credential FROM treasury.vendor_contracts ORDER BY id LIMIT 1", + ) + .fetch_optional(&pool) + .await?; + + let (address, stake_credential) = row.ok_or_else(|| { + ApiError::NotFound( + "vendor contract not yet known — first fund event has not been processed" + .into(), + ) + })?; + + let (total,): (i64,) = sqlx::query_as("SELECT COUNT(*) FROM treasury.projects") + .fetch_one(&pool) + .await?; + + let by_status_rows: Vec<(Option<String>, i64)> = sqlx::query_as( + "SELECT status, COUNT(*) FROM treasury.projects GROUP BY status", + ) + .fetch_all(&pool) + .await?; + + let by_status: HashMap<String, i64> = by_status_rows + .into_iter() + .map(|(status, count)| (status.unwrap_or_else(|| "unknown".into()), count)) + .collect(); + + Ok(Json(ApiResponse::new(VendorContractResponse { + address, + stake_credential, + projects: VendorContractProjectsBlock { total, by_status }, + }))) +} + +/// Get currently-unspent UTxOs at the shared vendor contract, labeled per project. +/// +/// Returns every live output sitting at the singleton PSSC, with each row +/// carrying its owning project's `project_id`, `project_name`, and `project_status` +/// so callers can enumerate vendor-contract state in a single round trip +/// instead of fanning out across every project. +/// +/// "Currently unspent" is sourced from `yaci_store.address_utxo` with an +/// anti-join against `yaci_store.tx_input` to ride out the trigger +/// pruning-window lag (same approach used by `/projects/:id/utxos` and +/// `/treasury/utxos`).
+#[utoipa::path( + get, + path = "/api/v1/vendor-contract/utxos", + params(PaginationQuery), + responses( + (status = 200, description = "Currently-unspent UTxOs at the vendor contract, labeled per project", + body = PaginatedResponse<Vec<ProjectUtxoResponse>>), + (status = 404, description = "Vendor contract not yet known", body = crate::errors::ApiErrorBody) + ), + tag = "Vendor Contract" +)] +pub async fn get_vendor_contract_utxos( + Extension(pool): Extension<PgPool>, + Query(params): Query<PaginationQuery>, +) -> Result<Json<PaginatedResponse<Vec<ProjectUtxoResponse>>>, ApiError> { + let page = params.page.max(1); + let limit = params.limit.min(100).max(1); + let offset = ((page - 1) * limit) as i64; + let limit_i64 = limit as i64; + + let address: String = sqlx::query_scalar( + "SELECT address FROM treasury.vendor_contracts ORDER BY id LIMIT 1", + ) + .fetch_optional(&pool) + .await? + .ok_or_else(|| { + ApiError::NotFound( + "vendor contract not yet known — first fund event has not been processed".into(), + ) + })?; + + let (total_count,): (i64,) = sqlx::query_as( + r#" + SELECT COUNT(*) + FROM yaci_store.address_utxo au + JOIN treasury.utxo_history uh + ON uh.tx_hash = au.tx_hash AND uh.output_index = au.output_index + JOIN treasury.projects p ON p.id = uh.project_db_id + WHERE au.owner_addr = $1 + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + "#, + ) + .bind(&address) + .fetch_one(&pool) + .await?; + + let rows = sqlx::query_as::<_, ProjectUtxoRow>( + r#" + SELECT + au.tx_hash, + au.output_index, + au.owner_addr AS address, + au.lovelace_amount, + au.slot, + au.block AS block_number, + p.id AS project_db_id, + p.project_id AS project_id, + p.project_name AS project_name, + p.status AS project_status + FROM yaci_store.address_utxo au + JOIN treasury.utxo_history uh + ON uh.tx_hash = au.tx_hash AND uh.output_index = au.output_index + JOIN treasury.projects p ON p.id = uh.project_db_id + WHERE au.owner_addr = $1 + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti +
WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + ORDER BY au.slot DESC, au.tx_hash ASC, au.output_index ASC + LIMIT $2 OFFSET $3 + "#, + ) + .bind(&address) + .bind(limit_i64) + .bind(offset) + .fetch_all(&pool) + .await?; + + let utxos: Vec = + rows.into_iter().map(ProjectUtxoResponse::from).collect(); + Ok(Json(PaginatedResponse::new(utxos, page, limit, total_count))) +} diff --git a/api/src/routes/v1/vendor_contracts.rs b/api/src/routes/v1/vendor_contracts.rs deleted file mode 100644 index 16522e1..0000000 --- a/api/src/routes/v1/vendor_contracts.rs +++ /dev/null @@ -1,452 +0,0 @@ -//! Vendor Contracts (Projects) endpoints - -use axum::{ - extract::{Extension, Path, Query}, - http::StatusCode, - response::Json, -}; -use sqlx::PgPool; - -use crate::models::v1::{ - ApiResponse, EventResponse, EventWithContextRow, MilestoneResponse, MilestoneRow, - PaginatedResponse, ProjectEventsQuery, UtxoResponse, UtxoRow, VendorContractDetail, - VendorContractSummary, VendorContractSummaryRow, VendorContractsQuery, -}; - -/// List all vendor contracts -/// -/// Returns a paginated list of vendor contracts with filtering and search support. 
-#[utoipa::path( - get, - path = "/api/v1/vendor-contracts", - params(VendorContractsQuery), - responses( - (status = 200, description = "List of vendor contracts", body = PaginatedResponse>) - ), - tag = "Vendor Contracts" -)] -pub async fn list_vendor_contracts( - Extension(pool): Extension, - Query(params): Query, -) -> Result>>, StatusCode> { - let page = params.page.max(1); - let limit = params.limit.min(100).max(1); - let offset = ((page - 1) * limit) as i64; - let limit_i64 = limit as i64; - - // Build dynamic query based on filters - let mut conditions = Vec::new(); - let mut bind_index = 1; - - if params.status.is_some() { - conditions.push(format!("status = ${}", bind_index)); - bind_index += 1; - } - - if params.search.is_some() { - conditions.push(format!( - "(project_id ILIKE ${0} OR project_name ILIKE ${0} OR description ILIKE ${0} OR vendor_name ILIKE ${0})", - bind_index - )); - bind_index += 1; - } - - if params.from_time.is_some() { - conditions.push(format!("fund_block_time >= ${}", bind_index)); - bind_index += 1; - } - - if params.to_time.is_some() { - conditions.push(format!("fund_block_time <= ${}", bind_index)); - bind_index += 1; - } - - let where_clause = if conditions.is_empty() { - String::new() - } else { - format!("WHERE {}", conditions.join(" AND ")) - }; - - // Determine sort order - let sort_field = match params.sort.as_deref() { - Some("project_id") => "project_id", - Some("project_name") => "project_name", - Some("initial_amount") => "initial_amount_lovelace", - _ => "fund_block_time", - }; - let sort_order = match params.order.as_deref() { - Some("asc") => "ASC", - _ => "DESC", - }; - - // Get total count - let count_query = format!( - "SELECT COUNT(*) FROM treasury.v_vendor_contracts_summary {}", - where_clause - ); - - let mut count_q = sqlx::query_as::<_, (i64,)>(&count_query); - - if let Some(ref status) = params.status { - count_q = count_q.bind(status); - } - if let Some(ref search) = params.search { - count_q = 
-        count_q = count_q.bind(format!("%{}%", search));
-    }
-    if let Some(from_time) = params.from_time {
-        count_q = count_q.bind(from_time);
-    }
-    if let Some(to_time) = params.to_time {
-        count_q = count_q.bind(to_time);
-    }
-
-    let (total_count,) = count_q
-        .fetch_one(&pool)
-        .await
-        .map_err(|e| {
-            tracing::error!("Database query error: {}", e);
-            StatusCode::INTERNAL_SERVER_ERROR
-        })?;
-
-    // Get data
-    let data_query = format!(
-        r#"
-        SELECT *
-        FROM treasury.v_vendor_contracts_summary
-        {}
-        ORDER BY {} {} NULLS LAST
-        LIMIT ${} OFFSET ${}
-        "#,
-        where_clause,
-        sort_field,
-        sort_order,
-        bind_index,
-        bind_index + 1
-    );
-
-    let mut data_q = sqlx::query_as::<_, VendorContractSummaryRow>(&data_query);
-
-    if let Some(ref status) = params.status {
-        data_q = data_q.bind(status);
-    }
-    if let Some(ref search) = params.search {
-        data_q = data_q.bind(format!("%{}%", search));
-    }
-    if let Some(from_time) = params.from_time {
-        data_q = data_q.bind(from_time);
-    }
-    if let Some(to_time) = params.to_time {
-        data_q = data_q.bind(to_time);
-    }
-
-    let rows = data_q
-        .bind(limit_i64)
-        .bind(offset)
-        .fetch_all(&pool)
-        .await
-        .map_err(|e| {
-            tracing::error!("Database query error: {}", e);
-            StatusCode::INTERNAL_SERVER_ERROR
-        })?;
-
-    let contracts: Vec<VendorContractSummary> = rows.into_iter().map(VendorContractSummary::from).collect();
-    Ok(Json(PaginatedResponse::new(contracts, page, limit, total_count)))
-}
-
-/// Get a specific vendor contract by project ID
-///
-/// Returns detailed information about a vendor contract including milestones summary and financials.
-#[utoipa::path(
-    get,
-    path = "/api/v1/vendor-contracts/{project_id}",
-    params(
-        ("project_id" = String, Path, description = "Project identifier (e.g., EC-0008-25)")
-    ),
-    responses(
-        (status = 200, description = "Vendor contract details", body = ApiResponse<VendorContractDetail>),
-        (status = 404, description = "Vendor contract not found")
-    ),
-    tag = "Vendor Contracts"
-)]
-pub async fn get_vendor_contract(
-    Extension(pool): Extension<PgPool>,
-    Path(project_id): Path<String>,
-) -> Result<Json<ApiResponse<VendorContractDetail>>, StatusCode> {
-    let row = sqlx::query_as::<_, VendorContractSummaryRow>(
-        r#"
-        SELECT *
-        FROM treasury.v_vendor_contracts_summary
-        WHERE project_id = $1
-        "#
-    )
-    .bind(&project_id)
-    .fetch_optional(&pool)
-    .await
-    .map_err(|e| {
-        tracing::error!("Database query error: {}", e);
-        StatusCode::INTERNAL_SERVER_ERROR
-    })?
-    .ok_or(StatusCode::NOT_FOUND)?;
-
-    Ok(Json(ApiResponse::new(VendorContractDetail::from(row))))
-}
-
-/// Get milestones for a vendor contract
-///
-/// Returns all milestones for a specific project.
-#[utoipa::path(
-    get,
-    path = "/api/v1/vendor-contracts/{project_id}/milestones",
-    params(
-        ("project_id" = String, Path, description = "Project identifier")
-    ),
-    responses(
-        (status = 200, description = "Project milestones", body = ApiResponse<Vec<MilestoneResponse>>),
-        (status = 404, description = "Vendor contract not found")
-    ),
-    tag = "Vendor Contracts"
-)]
-pub async fn get_vendor_contract_milestones(
-    Extension(pool): Extension<PgPool>,
-    Path(project_id): Path<String>,
-) -> Result<Json<ApiResponse<Vec<MilestoneResponse>>>, StatusCode> {
-    // First verify the project exists
-    let exists = sqlx::query_as::<_, (i32,)>(
-        "SELECT id FROM treasury.vendor_contracts WHERE project_id = $1"
-    )
-    .bind(&project_id)
-    .fetch_optional(&pool)
-    .await
-    .map_err(|e| {
-        tracing::error!("Database query error: {}", e);
-        StatusCode::INTERNAL_SERVER_ERROR
-    })?;
-
-    if exists.is_none() {
-        return Err(StatusCode::NOT_FOUND);
-    }
-
-    let rows = sqlx::query_as::<_, MilestoneRow>(
-        r#"
-        SELECT
-            m.id,
-            m.vendor_contract_id,
-            m.milestone_id,
-            m.milestone_order,
-            m.label,
-            m.description,
-            m.acceptance_criteria,
-            m.amount_lovelace,
-            m.status,
-            m.complete_tx_hash,
-            m.complete_time,
-            m.complete_description,
-            m.evidence,
-            m.disburse_tx_hash,
-            m.disburse_time,
-            m.disburse_amount,
-            vc.project_id,
-            vc.project_name
-        FROM treasury.milestones m
-        JOIN treasury.vendor_contracts vc ON vc.id = m.vendor_contract_id
-        WHERE vc.project_id = $1
-        ORDER BY m.milestone_order
-        "#
-    )
-    .bind(&project_id)
-    .fetch_all(&pool)
-    .await
-    .map_err(|e| {
-        tracing::error!("Database query error: {}", e);
-        StatusCode::INTERNAL_SERVER_ERROR
-    })?;
-
-    let milestones: Vec<MilestoneResponse> = rows.into_iter().map(MilestoneResponse::from).collect();
-    Ok(Json(ApiResponse::new(milestones)))
-}
-
-/// Get events for a vendor contract
-///
-/// Returns paginated event history for a specific project.
-#[utoipa::path(
-    get,
-    path = "/api/v1/vendor-contracts/{project_id}/events",
-    params(
-        ("project_id" = String, Path, description = "Project identifier"),
-        ProjectEventsQuery
-    ),
-    responses(
-        (status = 200, description = "Project events", body = PaginatedResponse<Vec<EventResponse>>),
-        (status = 404, description = "Vendor contract not found")
-    ),
-    tag = "Vendor Contracts"
-)]
-pub async fn get_vendor_contract_events(
-    Extension(pool): Extension<PgPool>,
-    Path(project_id): Path<String>,
-    Query(params): Query<ProjectEventsQuery>,
-) -> Result<Json<PaginatedResponse<Vec<EventResponse>>>, StatusCode> {
-    let page = params.page.max(1);
-    let limit = params.limit.min(100).max(1);
-    let offset = ((page - 1) * limit) as i64;
-    let limit_i64 = limit as i64;
-
-    // First verify the project exists
-    let exists = sqlx::query_as::<_, (i32,)>(
-        "SELECT id FROM treasury.vendor_contracts WHERE project_id = $1"
-    )
-    .bind(&project_id)
-    .fetch_optional(&pool)
-    .await
-    .map_err(|e| {
-        tracing::error!("Database query error: {}", e);
-        StatusCode::INTERNAL_SERVER_ERROR
-    })?;
-
-    if exists.is_none() {
-        return Err(StatusCode::NOT_FOUND);
-    }
-
-    // Build query based on event type filter
-    let (total_count, rows) = if let Some(ref event_type) = params.event_type {
-        let (count,): (i64,) = sqlx::query_as(
-            r#"
-            SELECT COUNT(*)
-            FROM treasury.v_events_with_context
-            WHERE project_id = $1 AND event_type = $2
-            "#
-        )
-        .bind(&project_id)
-        .bind(event_type)
-        .fetch_one(&pool)
-        .await
-        .map_err(|e| {
-            tracing::error!("Database query error: {}", e);
-            StatusCode::INTERNAL_SERVER_ERROR
-        })?;
-
-        let rows = sqlx::query_as::<_, EventWithContextRow>(
-            r#"
-            SELECT *
-            FROM treasury.v_events_with_context
-            WHERE project_id = $1 AND event_type = $2
-            ORDER BY block_time DESC
-            LIMIT $3 OFFSET $4
-            "#
-        )
-        .bind(&project_id)
-        .bind(event_type)
-        .bind(limit_i64)
-        .bind(offset)
-        .fetch_all(&pool)
-        .await
-        .map_err(|e| {
-            tracing::error!("Database query error: {}", e);
-            StatusCode::INTERNAL_SERVER_ERROR
-        })?;
-
-        (count, rows)
-    } else {
-        let (count,): (i64,) = sqlx::query_as(
-            r#"
-            SELECT COUNT(*)
-            FROM treasury.v_events_with_context
-            WHERE project_id = $1
-            "#
-        )
-        .bind(&project_id)
-        .fetch_one(&pool)
-        .await
-        .map_err(|e| {
-            tracing::error!("Database query error: {}", e);
-            StatusCode::INTERNAL_SERVER_ERROR
-        })?;
-
-        let rows = sqlx::query_as::<_, EventWithContextRow>(
-            r#"
-            SELECT *
-            FROM treasury.v_events_with_context
-            WHERE project_id = $1
-            ORDER BY block_time DESC
-            LIMIT $2 OFFSET $3
-            "#
-        )
-        .bind(&project_id)
-        .bind(limit_i64)
-        .bind(offset)
-        .fetch_all(&pool)
-        .await
-        .map_err(|e| {
-            tracing::error!("Database query error: {}", e);
-            StatusCode::INTERNAL_SERVER_ERROR
-        })?;
-
-        (count, rows)
-    };
-
-    let events: Vec<EventResponse> = rows.into_iter().map(EventResponse::from).collect();
-    Ok(Json(PaginatedResponse::new(events, page, limit, total_count)))
-}
-
-/// Get UTXOs for a vendor contract
-///
-/// Returns all unspent UTXOs for a specific project.
-#[utoipa::path(
-    get,
-    path = "/api/v1/vendor-contracts/{project_id}/utxos",
-    params(
-        ("project_id" = String, Path, description = "Project identifier")
-    ),
-    responses(
-        (status = 200, description = "Project UTXOs", body = ApiResponse<Vec<UtxoResponse>>),
-        (status = 404, description = "Vendor contract not found")
-    ),
-    tag = "Vendor Contracts"
-)]
-pub async fn get_vendor_contract_utxos(
-    Extension(pool): Extension<PgPool>,
-    Path(project_id): Path<String>,
-) -> Result<Json<ApiResponse<Vec<UtxoResponse>>>, StatusCode> {
-    // First verify the project exists
-    let exists = sqlx::query_as::<_, (i32,)>(
-        "SELECT id FROM treasury.vendor_contracts WHERE project_id = $1"
-    )
-    .bind(&project_id)
-    .fetch_optional(&pool)
-    .await
-    .map_err(|e| {
-        tracing::error!("Database query error: {}", e);
-        StatusCode::INTERNAL_SERVER_ERROR
-    })?;
-
-    if exists.is_none() {
-        return Err(StatusCode::NOT_FOUND);
-    }
-
-    let rows = sqlx::query_as::<_, UtxoRow>(
-        r#"
-        SELECT
-            u.tx_hash,
-            u.output_index,
-            u.address,
-            u.address_type,
-            u.lovelace_amount,
-            u.slot,
-            u.block_number
-        FROM treasury.utxos u
-        JOIN treasury.vendor_contracts vc ON vc.id = u.vendor_contract_id
-        WHERE vc.project_id = $1 AND NOT u.spent
-        ORDER BY u.slot DESC
-        "#
-    )
-    .bind(&project_id)
-    .fetch_all(&pool)
-    .await
-    .map_err(|e| {
-        tracing::error!("Database query error: {}", e);
-        StatusCode::INTERNAL_SERVER_ERROR
-    })?;
-
-    let utxos: Vec<UtxoResponse> = rows.into_iter().map(UtxoResponse::from).collect();
-    Ok(Json(ApiResponse::new(utxos)))
-}
diff --git a/api/src/services/event_processor.rs b/api/src/services/event_processor.rs
index c657b8b..83bfa35 100644
--- a/api/src/services/event_processor.rs
+++ b/api/src/services/event_processor.rs
@@ -32,7 +32,7 @@ impl EventProcessor {
             FROM yaci_store.transaction_metadata m
             JOIN yaci_store.block b ON b.slot = m.slot
             WHERE m.label = '1694'
-            ORDER BY m.slot ASC
+            ORDER BY m.slot ASC, m.tx_hash ASC
             "#
         )
         .fetch_all(&self.pool)
@@ -40,6 +40,14 @@ impl EventProcessor {
         .await?;
 
         tracing::info!("Processing {} total TOM events", rows.len());
 
+        // Pre-fetch UTXOs in batches before processing to guard against pruning
+        let tx_hashes: Vec<String> = rows.iter().map(|r| r.tx_hash.clone()).collect();
+        for chunk in tx_hashes.chunks(100) {
+            if let Err(e) = self.pre_fetch_utxos(chunk).await {
+                tracing::warn!("UTXO pre-fetch failed (non-fatal): {}", e);
+            }
+        }
+
         let mut processed = 0;
         for row in &rows {
             if let Err(e) = self.process_event(row).await {
@@ -92,7 +100,7 @@ impl EventProcessor {
             "initialize" => self.process_initialize(event, body, instance).await?,
             "fund" => self.process_fund(event, body, instance).await?,
             "complete" => self.process_complete(event, body).await?,
-            "disburse" => self.process_disburse(event, body).await?,
+            "disburse" => self.process_disburse(event, body, instance).await?,
             "withdraw" => self.process_withdraw(event, body).await?,
             "pause" => self.process_pause(event, body).await?,
             "resume" => self.process_resume(event, body).await?,
@@ -111,24 +119,21 @@ impl EventProcessor {
 
     /// Process a publish event - create treasury contract
     async fn process_publish(&self, event: &RawTomEvent, body: &Value, instance: &str) -> anyhow::Result<()> {
         let event_body = body.get("body").unwrap_or(body);
-        let name = extract_text(event_body, "label");
         let permissions = event_body.get("permissions").cloned();
 
         // Upsert treasury contract
         let treasury_id: i32 = sqlx::query_scalar(
             r#"
-            INSERT INTO treasury.treasury_contracts (contract_instance, name, publish_tx_hash, publish_time, permissions)
-            VALUES ($1, $2, $3, $4, $5)
+            INSERT INTO treasury.treasury_contracts (contract_instance, publish_tx_hash, publish_time, permissions)
+            VALUES ($1, $2, $3, $4)
             ON CONFLICT (contract_instance) DO UPDATE
-            SET name = COALESCE(EXCLUDED.name, treasury.treasury_contracts.name),
-                publish_tx_hash = COALESCE(treasury.treasury_contracts.publish_tx_hash, EXCLUDED.publish_tx_hash),
+            SET publish_tx_hash = COALESCE(treasury.treasury_contracts.publish_tx_hash, EXCLUDED.publish_tx_hash),
                 publish_time = COALESCE(treasury.treasury_contracts.publish_time, EXCLUDED.publish_time),
                 permissions = COALESCE(EXCLUDED.permissions, treasury.treasury_contracts.permissions)
             RETURNING id
             "#
         )
         .bind(instance)
-        .bind(&name)
         .bind(&event.tx_hash)
         .bind(event.block_time)
         .bind(&permissions)
@@ -136,7 +141,7 @@ impl EventProcessor {
         .await?;
 
         // Insert event record
-        self.insert_event(event, "publish", Some(treasury_id), None, None, body).await?;
+        self.insert_event_full(event, "publish", Some(treasury_id), None, None, None, &None, &None, body).await?;
 
         Ok(())
     }
@@ -160,7 +165,23 @@ impl EventProcessor {
         .fetch_one(&self.pool)
         .await?;
 
-        self.insert_event(event, "initialize", Some(treasury_id), None, None, body).await?;
+        // Extract treasury contract address from tx outputs (with fallback for pruned UTXOs)
+        let (contract_address, _, _) = self.get_script_utxo_for_tx(&event.tx_hash).await?;
+
+        if let Some(ref addr) = contract_address {
+            let stake_cred = crate::parsers::address::extract_stake_credential(addr);
+            sqlx::query("UPDATE treasury.treasury_contracts SET contract_address = COALESCE(contract_address, $1), stake_credential = COALESCE(stake_credential, $2) WHERE id = $3")
+                .bind(addr)
+                .bind(&stake_cred)
+                .bind(treasury_id)
+                .execute(&self.pool)
+                .await?;
+        }
+
+        let event_body = body.get("body").unwrap_or(body);
+        let reason = extract_text(event_body, "reason");
+
+        self.insert_event_full(event, "initialize", Some(treasury_id), None, None, None, &reason, &None, body).await?;
 
         Ok(())
     }
@@ -169,44 +190,33 @@ impl EventProcessor {
     async fn process_fund(&self, event: &RawTomEvent, body: &Value, instance: &str) -> anyhow::Result<()> {
         let event_body = body.get("body").unwrap_or(body);
 
-        let project_id = event_body.get("identifier")
+        let raw_identifier = event_body.get("identifier")
             .and_then(|i| i.as_str())
             .unwrap_or("");
 
-        if project_id.is_empty() {
+        if raw_identifier.is_empty() {
             return Ok(());
         }
 
+        // Split space-separated identifiers: first becomes project_id, rest merge into other_identifiers
+        let id_parts: Vec<&str> = raw_identifier.split_whitespace().collect();
+        let project_id = id_parts[0];
+        let extra_ids: Vec<String> = id_parts[1..].iter().map(|s| s.to_string()).collect();
+
         let project_name = extract_text(event_body, "label");
         let description = extract_text(event_body, "description");
-        let vendor_name = event_body.get("vendor")
-            .and_then(|v| v.get("name"))
-            .and_then(|n| n.as_str())
-            .map(|s| s.to_string());
         let vendor_address = event_body.get("vendor")
             .and_then(|v| extract_text_from_value(v.get("label")));
-        let contract_url = event_body.get("contract")
-            .and_then(|c| c.as_str())
-            .map(|s| s.to_string());
-        let other_identifiers = event_body.get("otherIdentifiers")
+        let mut other_identifiers: Vec<String> = event_body.get("otherIdentifiers")
            .and_then(|o| o.as_array())
-            .map(|arr| arr.iter().filter_map(|v| v.as_str()).map(|s| s.to_string()).collect::<Vec<String>>());
-
-        // Get contract address from fund tx output
-        let contract_address: Option<String> = sqlx::query_scalar(
-            "SELECT owner_addr FROM yaci_store.address_utxo WHERE tx_hash = $1 AND owner_addr LIKE 'addr1x%' LIMIT 1"
-        )
-        .bind(&event.tx_hash)
-        .fetch_optional(&self.pool)
-        .await?;
+            .map(|arr| arr.iter().filter_map(|v| v.as_str()).map(|s| s.to_string()).collect::<Vec<String>>())
+            .unwrap_or_default();
+        other_identifiers.extend(extra_ids);
+        let other_identifiers = if other_identifiers.is_empty() { None } else { Some(other_identifiers) };
 
-        // Get initial amount from fund tx output
-        let initial_amount: Option<i64> = sqlx::query_scalar(
-            "SELECT lovelace_amount FROM yaci_store.address_utxo WHERE tx_hash = $1 AND owner_addr LIKE 'addr1x%' LIMIT 1"
-        )
-        .bind(&event.tx_hash)
-        .fetch_optional(&self.pool)
-        .await?;
+        // Get contract address, initial amount, and inline datum from fund tx output
+        // (with fallback to treasury.utxo_history for pruned UTXOs)
+        let (contract_address, initial_amount, fund_inline_datum) = self.get_script_utxo_for_tx(&event.tx_hash).await?;
 
         // Get or create treasury contract
         let treasury_id: Option<i32> = if !instance.is_empty() {
@@ -225,18 +235,98 @@ impl EventProcessor {
             None
         };
 
-        // Insert vendor contract
-        let vendor_contract_id: i32 = sqlx::query_scalar(
+        // Fallback: populate treasury contract_address if still null
+        // The treasury address is the addr1x input that differs from the vendor contract output
+        if let Some(tid) = treasury_id {
+            if let Some(ref vc_addr) = contract_address {
+                let treasury_addr: Option<String> = sqlx::query_scalar(
+                    r#"
+                    SELECT DISTINCT au.owner_addr
+                    FROM yaci_store.tx_input ti
+                    JOIN yaci_store.address_utxo au ON au.tx_hash = ti.tx_hash AND au.output_index = ti.output_index
+                    WHERE ti.spent_tx_hash = $1 AND au.owner_addr LIKE 'addr1x%' AND au.owner_addr != $2
+                    LIMIT 1
+                    "#
+                )
+                .bind(&event.tx_hash)
+                .bind(vc_addr)
+                .fetch_optional(&self.pool)
+                .await?;
+
+                // Fallback: use treasury.utxo_history for pruned input UTXOs
+                let treasury_addr = match treasury_addr {
+                    Some(addr) => Some(addr),
+                    None => sqlx::query_scalar::<_, Option<String>>(
+                        r#"
+                        SELECT DISTINCT u.address
+                        FROM yaci_store.transaction t
+                        CROSS JOIN LATERAL jsonb_array_elements(t.inputs::jsonb) AS inp
+                        JOIN treasury.utxo_history u
+                          ON u.tx_hash = inp->>'tx_hash'
+                          AND u.output_index = (inp->>'output_index')::smallint
+                        WHERE t.tx_hash = $1 AND u.address LIKE 'addr1x%' AND u.address != $2
+                        LIMIT 1
+                        "#
+                    )
+                    .bind(&event.tx_hash)
+                    .bind(vc_addr)
+                    .fetch_optional(&self.pool)
+                    .await?
+                    .flatten(),
+                };
+
+                if let Some(ref addr) = treasury_addr {
+                    let stake_cred = crate::parsers::address::extract_stake_credential(addr);
+                    sqlx::query("UPDATE treasury.treasury_contracts SET contract_address = COALESCE(contract_address, $1), stake_credential = COALESCE(stake_credential, $2) WHERE id = $3")
+                        .bind(addr)
+                        .bind(&stake_cred)
+                        .bind(tid)
+                        .execute(&self.pool)
+                        .await?;
+                }
+            }
+        }
+
+        // Upsert the singleton vendor contract row (one per shared PSSC address)
+        // so /api/v1/vendor-contract has something to return. Reject any address
+        // that matches the treasury contract — `get_script_utxo_for_tx` may return
+        // the treasury change output if the vendor output happens to be later in
+        // the tx. Filtering here prevents the treasury address from leaking into
+        // the vendor_contracts singleton.
+        if let Some(ref vc_addr) = contract_address {
+            let stake_cred = crate::parsers::address::extract_stake_credential(vc_addr);
+            sqlx::query(
+                r#"
+                INSERT INTO treasury.vendor_contracts (treasury_id, address, stake_credential)
+                SELECT $1, $2, $3
+                WHERE NOT EXISTS (
+                    SELECT 1 FROM treasury.treasury_contracts
+                    WHERE contract_address = $2
+                )
+                ON CONFLICT (address) DO UPDATE
+                SET treasury_id = COALESCE(EXCLUDED.treasury_id, treasury.vendor_contracts.treasury_id),
+                    stake_credential = COALESCE(EXCLUDED.stake_credential, treasury.vendor_contracts.stake_credential)
+                "#,
+            )
+            .bind(treasury_id)
+            .bind(vc_addr)
+            .bind(&stake_cred)
+            .execute(&self.pool)
+            .await?;
+        }
+
+        // Insert project
+        let project_db_id: i32 = sqlx::query_scalar(
             r#"
-            INSERT INTO treasury.vendor_contracts (
+            INSERT INTO treasury.projects (
                 treasury_id, project_id, other_identifiers, project_name, description,
-                vendor_name, vendor_address, contract_url, contract_address,
+                vendor_address, contract_address,
                 fund_tx_hash, fund_slot, fund_block_time, initial_amount_lovelace, status
             )
-            VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, 'active')
+            VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, 'active')
             ON CONFLICT (project_id) DO UPDATE
-            SET project_name = COALESCE(EXCLUDED.project_name, treasury.vendor_contracts.project_name),
-                description = COALESCE(EXCLUDED.description, treasury.vendor_contracts.description)
+            SET project_name = COALESCE(EXCLUDED.project_name, treasury.projects.project_name),
+                description = COALESCE(EXCLUDED.description, treasury.projects.description)
             RETURNING id
             "#
         )
@@ -245,9 +335,7 @@ impl EventProcessor {
         .bind(&other_identifiers)
         .bind(&project_name)
         .bind(&description)
-        .bind(&vendor_name)
         .bind(&vendor_address)
-        .bind(&contract_url)
         .bind(&contract_address)
         .bind(&event.tx_hash)
         .bind(event.slot)
@@ -256,30 +344,47 @@ impl EventProcessor {
         .fetch_one(&self.pool)
         .await?;
 
-        // Process milestones
-        if let Some(milestones) = event_body.get("milestones").and_then(|m| m.as_array()) {
-            for (idx, milestone) in milestones.iter().enumerate() {
-                let default_id = format!("m-{}", idx);
-                let milestone_id = milestone.get("identifier")
-                    .and_then(|i| i.as_str())
-                    .unwrap_or(&default_id);
-                let label = extract_text_from_value(Some(milestone.get("label").unwrap_or(&Value::Null)));
-                let description = extract_text_from_value(Some(milestone.get("description").unwrap_or(&Value::Null)));
+        // Process milestones — handle both array format and object format (keyed by ID)
+        let milestones_list: Vec<(String, &Value)> = if let Some(milestones_val) = event_body.get("milestones") {
+            if let Some(arr) = milestones_val.as_array() {
+                arr.iter().enumerate().map(|(idx, m)| {
+                    let id = m.get("identifier")
+                        .and_then(|i| i.as_str())
+                        .map(|s| s.to_string())
+                        .unwrap_or_else(|| format!("m-{}", idx));
+                    (id, m)
+                }).collect()
+            } else if let Some(obj) = milestones_val.as_object() {
+                obj.iter().map(|(k, v)| (k.clone(), v)).collect()
+            } else {
+                vec![]
+            }
+        } else {
+            vec![]
+        };
+
+        for (idx, (milestone_id_str, milestone)) in milestones_list.iter().enumerate() {
+            let milestone_id = milestone_id_str.as_str();
             let acceptance_criteria = extract_text_from_value(Some(milestone.get("acceptanceCriteria").unwrap_or(&Value::Null)));
+            let (label, description) = extract_milestone_label_description(
+                extract_text_from_value(Some(milestone.get("label").unwrap_or(&Value::Null))),
+                extract_text_from_value(Some(milestone.get("description").unwrap_or(&Value::Null))),
+                &acceptance_criteria,
+            );
             let amount = milestone.get("amount")
                 .and_then(|a| a.as_i64());
 
             sqlx::query(
                 r#"
                 INSERT INTO treasury.milestones (
-                    vendor_contract_id, milestone_id, milestone_order, label,
-                    description, acceptance_criteria, amount_lovelace, status
+                    project_db_id, milestone_id, milestone_order, label,
+                    description, acceptance_criteria, amount_lovelace
                 )
-                VALUES ($1, $2, $3, $4, $5, $6, $7, 'pending')
-                ON CONFLICT (vendor_contract_id, milestone_id) DO NOTHING
+                VALUES ($1, $2, $3, $4, $5, $6, $7)
+                ON CONFLICT (project_db_id, milestone_id) WHERE NOT archived DO NOTHING
                 "#
            )
-            .bind(vendor_contract_id)
+            .bind(project_db_id)
            .bind(milestone_id)
            .bind((idx + 1) as i32)
            .bind(&label)
@@ -288,10 +393,9 @@ impl EventProcessor {
            .bind(amount)
            .execute(&self.pool)
            .await?;
-            }
         }
 
-        self.insert_event(event, "fund", treasury_id, Some(vendor_contract_id), None, body).await?;
+        self.insert_event_full(event, "fund", treasury_id, Some(project_db_id), None, initial_amount, &None, &None, body).await?;
 
         // Record the output UTXOs from this fund transaction for future lookups
         // Get all outputs from the transaction table
@@ -308,24 +412,159 @@ impl EventProcessor {
                     output.get("tx_hash").and_then(|h| h.as_str()),
                     output.get("output_index").and_then(|i| i.as_i64())
                 ) {
-                    // Record this UTXO with the vendor_contract_id for future event lookups
+                    // Look up address and amount (with fallback for pruned UTXOs)
+                    let (looked_up_address, lovelace_amount, _) = self.lookup_utxo(tx_hash, output_index as i16).await?;
+
+                    // For known addresses, only track script (addr1x) outputs and skip change.
+                    // For pruned outputs (address unknown), assume the fund event's contract_address
+                    // — the chain trace only matches by (tx_hash, output_index, project_db_id),
+                    // so over-seeding a non-script output is harmless and lets cold replay link
+                    // milestone events whose input UTXOs have already been pruned.
+                    let address = match looked_up_address {
+                        Some(addr) if addr.starts_with("addr1x") => Some(addr),
+                        Some(_) => continue,
+                        None => contract_address.clone(),
+                    };
+
+                    let address_type = Some("vendor_contract");
+
+                    // Record this UTXO with the project_db_id for future event lookups
                     sqlx::query(
                         r#"
-                        INSERT INTO treasury.utxos (tx_hash, output_index, vendor_contract_id, slot, spent)
-                        VALUES ($1, $2, $3, $4, false)
-                        ON CONFLICT (tx_hash, output_index) DO NOTHING
+                        INSERT INTO treasury.utxo_history (tx_hash, output_index, project_db_id, slot, block_number, address, address_type, lovelace_amount, spent)
+                        VALUES ($1, $2, $3, $4, $5, $6, $7, $8, false)
+                        ON CONFLICT (tx_hash, output_index) DO UPDATE
+                        SET project_db_id = COALESCE(EXCLUDED.project_db_id, treasury.utxo_history.project_db_id),
+                            address = COALESCE(EXCLUDED.address, treasury.utxo_history.address),
+                            address_type = COALESCE(EXCLUDED.address_type, treasury.utxo_history.address_type),
+                            lovelace_amount = COALESCE(EXCLUDED.lovelace_amount, treasury.utxo_history.lovelace_amount),
+                            block_number = COALESCE(EXCLUDED.block_number, treasury.utxo_history.block_number)
                         "#
                    )
                    .bind(tx_hash)
                    .bind(output_index as i16)
-                    .bind(vendor_contract_id)
+                    .bind(project_db_id)
                    .bind(event.slot)
+                    .bind(event.block_number)
+                    .bind(&address)
+                    .bind(address_type)
+                    .bind(lovelace_amount)
                    .execute(&self.pool)
                    .await?;
                 }
             }
         }
 
+        // Parse inline datum for milestone amounts, time_limits, and vendor_payment_key_hash.
+        // Uses the datum already fetched by get_script_utxo_for_tx (with pruning fallback).
+        //
+        // The parser is partial — vendor info and each milestone parse independently.
+        // We persist whatever is Ok and stash any error string in `datum_parse_error`
+        // so failures are queryable in SQL.
+        if let Some(datum_hex) = fund_inline_datum {
+            let parsed = crate::parsers::datum::parse_project_datum(&datum_hex);
+
+            // Project-level error column reflects either a top-level CBOR/shape error
+            // (no vendor_info, no milestones) or just the vendor-info error.
+            let project_err = parsed.top_level_error.clone()
+                .or_else(|| parsed.vendor_info_error.clone());
+
+            if let Some(ref e) = project_err {
+                tracing::warn!(
+                    "Fund datum parse error for tx {} (project {}): {}",
+                    event.tx_hash, project_id, e
+                );
+            }
+
+            sqlx::query(
+                r#"
+                UPDATE treasury.projects
+                SET vendor_payment_key_hash = COALESCE($1, vendor_payment_key_hash),
+                    datum_parse_error = $2
+                WHERE id = $3
+                "#
+            )
+            .bind(&parsed.vendor_payment_key_hash)
+            .bind(&project_err)
+            .bind(project_db_id)
+            .execute(&self.pool)
+            .await?;
+
+            // Update milestones with datum data (amount, time_limit, paused).
+            //
+            // The FUND tx output's datum represents the *initial* project state —
+            // every milestone is non-withdrawn and present in original order. So
+            // we apply by `milestone_order` to ALL non-archived milestones,
+            // including ones now marked withdrawn by later events. Without this,
+            // a periodic re-run after withdrawals would skip the withdrawn rows
+            // and they would keep their NULL amount/time_limit forever.
+            if parsed.top_level_error.is_none() {
+                let milestone_rows: Vec<(i32,)> = sqlx::query_as(
+                    "SELECT id FROM treasury.milestones WHERE project_db_id = $1 AND NOT archived ORDER BY milestone_order"
+                )
+                .bind(project_db_id)
+                .fetch_all(&self.pool)
+                .await?;
+
+                for (datum_idx, (db_id,)) in milestone_rows.iter().enumerate() {
+                    match parsed.milestones.get(datum_idx) {
+                        Some(Ok(ms_datum)) => {
+                            sqlx::query(
+                                r#"
+                                UPDATE treasury.milestones
+                                SET amount_lovelace = $1,
+                                    time_limit = $2,
+                                    paused = $3,
+                                    datum_parse_error = NULL
+                                WHERE id = $4
+                                "#
+                            )
+                            .bind(ms_datum.amount_lovelace)
+                            .bind(ms_datum.time_limit)
+                            .bind(ms_datum.paused)
+                            .bind(db_id)
+                            .execute(&self.pool)
+                            .await?;
+                        }
+                        Some(Err(err)) => {
+                            tracing::warn!(
+                                "Milestone {} datum parse error for tx {}: {}",
+                                datum_idx, event.tx_hash, err
+                            );
+                            sqlx::query(
+                                "UPDATE treasury.milestones SET datum_parse_error = $1 WHERE id = $2"
+                            )
+                            .bind(err)
+                            .bind(db_id)
+                            .execute(&self.pool)
+                            .await?;
+                        }
+                        None => {} // datum has fewer entries than DB has milestones
+                    }
+                }
+            }
+
+            // Store raw CBOR on the UTXO tracking row, but only if we have a
+            // *better* datum than what's already stored. The trigger and
+            // pre_fetch may have captured the larger original datum already;
+            // overwriting with a shorter one (e.g., a re-fetch that picked
+            // a sibling change output) would corrupt the project's saved
+            // datum. Preserve the longest one we've seen.
+            sqlx::query(
+                r#"
+                UPDATE treasury.utxo_history
+                SET inline_datum_cbor = $1
+                WHERE tx_hash = $2 AND project_db_id = $3
+                  AND (inline_datum_cbor IS NULL OR length($1) > length(inline_datum_cbor))
+                "#
+            )
+            .bind(&datum_hex)
+            .bind(&event.tx_hash)
+            .bind(project_db_id)
+            .execute(&self.pool)
+            .await?;
+        }
+
         Ok(())
     }
 
@@ -333,49 +572,49 @@ impl EventProcessor {
     async fn process_complete(&self, event: &RawTomEvent, body: &Value) -> anyhow::Result<()> {
         let event_body = body.get("body").unwrap_or(body);
 
-        // First try to get project_id from metadata (older format)
         let project_id_from_meta = event_body.get("identifier")
            .and_then(|i| i.as_str())
            .filter(|s| !s.is_empty());
 
-        // Get vendor contract ID - either from metadata or by tracing tx chain
-        let vendor_contract_id: Option<i32> = if let Some(pid) = project_id_from_meta {
+        // Hints used to disambiguate when chain tracing finds multiple candidate
+        // vendor contracts (e.g. the tx pulls fee inputs from a sibling project).
+        let milestone_hints = collect_milestone_id_hints(event_body);
+
+        let project_db_id: Option<i32> = if let Some(pid) = project_id_from_meta {
             sqlx::query_scalar(
-                "SELECT id FROM treasury.vendor_contracts WHERE project_id = $1"
+                "SELECT id FROM treasury.projects WHERE project_id = $1"
            )
            .bind(pid)
            .fetch_optional(&self.pool)
            .await?
         } else {
-            // Trace back through transaction chain to find the project
-            self.find_vendor_contract_from_inputs(&event.tx_hash).await?
+            self.find_project_from_inputs(&event.tx_hash, &milestone_hints).await?
         };
 
-        let vendor_contract_id = match vendor_contract_id {
-            Some(id) => id,
-            None => {
-                tracing::debug!("Could not find vendor contract for complete event {}", event.tx_hash);
-                return Ok(());
-            }
-        };
+        if project_db_id.is_none() {
+            tracing::warn!("Could not find vendor contract for complete event {}", event.tx_hash);
+        }
 
-        // Process completed milestones
-        if let Some(milestones) = event_body.get("milestones") {
-            // Milestones can be an object keyed by milestone_id
-            if let Some(obj) = milestones.as_object() {
+        let mut matched_milestone_id: Option<i32> = None;
+
+        if let Some(vc_id) = project_db_id {
+            if let Some(obj) = event_body.get("milestones").and_then(|m| m.as_object()) {
                 for (milestone_id, milestone_data) in obj {
                     let description = extract_text_from_value(Some(milestone_data.get("description").unwrap_or(&Value::Null)));
                     let evidence = milestone_data.get("evidence").cloned();
+                    let order_hint = canonical_milestone_order(milestone_id);
 
                     let db_milestone_id: Option<i32> = sqlx::query_scalar(
                         r#"
                         UPDATE treasury.milestones
-                        SET status = 'completed',
+                        SET evidence_provided = TRUE,
                             complete_tx_hash = $1,
                             complete_time = $2,
                             complete_description = $3,
                             evidence = $4
-                        WHERE vendor_contract_id = $5 AND milestone_id = $6
+                        WHERE project_db_id = $5
+                          AND NOT archived
+                          AND (milestone_id = $6 OR milestone_order = $7)
                         RETURNING id
                         "#
                    )
@@ -383,101 +622,76 @@ impl EventProcessor {
                    .bind(event.block_time)
                    .bind(&description)
                    .bind(&evidence)
-                    .bind(vendor_contract_id)
+                    .bind(vc_id)
                    .bind(milestone_id)
+                    .bind(order_hint)
                    .fetch_optional(&self.pool)
                    .await?;
 
-                    if let Some(mid) = db_milestone_id {
-                        self.insert_event(event, "complete", None, Some(vendor_contract_id), Some(mid), body).await?;
+                    if matched_milestone_id.is_none() {
+                        matched_milestone_id = db_milestone_id;
                     }
                 }
             }
-        }
 
-        // Also check for single milestone field (older format)
-        if let Some(milestone_id) = event_body.get("milestone").and_then(|m| m.as_str()) {
-            sqlx::query(
-                r#"
-                UPDATE treasury.milestones
-                SET status = 'completed',
-                    complete_tx_hash = $1,
-                    complete_time = $2
-                WHERE vendor_contract_id = $3 AND milestone_id = $4 AND status = 'pending'
-                "#
-            )
-            .bind(&event.tx_hash)
-            .bind(event.block_time)
-            .bind(vendor_contract_id)
-            .bind(milestone_id)
-            .execute(&self.pool)
-            .await?;
+            if let Some(milestone_id) = event_body.get("milestone").and_then(|m| m.as_str()) {
+                let order_hint = canonical_milestone_order(milestone_id);
+                let db_milestone_id: Option<i32> = sqlx::query_scalar(
+                    r#"
+                    UPDATE treasury.milestones
+                    SET evidence_provided = TRUE,
+                        complete_tx_hash = $1,
+                        complete_time = $2
+                    WHERE project_db_id = $3
+                      AND NOT archived
+                      AND (milestone_id = $4 OR milestone_order = $5)
+                    RETURNING id
+                    "#
+                )
+                .bind(&event.tx_hash)
+                .bind(event.block_time)
+                .bind(vc_id)
+                .bind(milestone_id)
+                .bind(order_hint)
+                .fetch_optional(&self.pool)
+                .await?;
+
+                if matched_milestone_id.is_none() {
+                    matched_milestone_id = db_milestone_id;
+                }
+            }
         }
 
+        self.insert_event_full(event, "complete", None, project_db_id, matched_milestone_id, None, &None, &None, body).await?;
+
         Ok(())
     }
 
-    /// Process a disburse event - update milestone status
-    async fn process_disburse(&self, event: &RawTomEvent, body: &Value) -> anyhow::Result<()> {
+    /// Process a disburse event - treasury-level fund movement (does not touch milestones)
+    async fn process_disburse(&self, event: &RawTomEvent, body: &Value, instance: &str) -> anyhow::Result<()> {
         let event_body = body.get("body").unwrap_or(body);
 
+        // Per TOM spec, `destination` is an object `{label, details}`. Preserve the
+        // full object in JSONB so neither sub-field is lost (KI-API-01).
+        let destination = event_body.get("destination").cloned();
 
-        let project_id_from_meta = event_body.get("identifier")
-            .and_then(|i| i.as_str())
-            .filter(|s| !s.is_empty());
-
-        let destination = extract_text(event_body, "destination");
-
-        // Get vendor contract ID - either from metadata or by tracing tx chain
-        let vendor_contract_id: Option<i32> = if let Some(pid) = project_id_from_meta {
-            sqlx::query_scalar(
-                "SELECT id FROM treasury.vendor_contracts WHERE project_id = $1"
-            )
-            .bind(pid)
-            .fetch_optional(&self.pool)
-            .await?
-        } else {
-            self.find_vendor_contract_from_inputs(&event.tx_hash).await?
-        };
-
-        // Get disbursed amount from tx outputs - cast SUM to BIGINT
-        let disburse_amount: Option<i64> = sqlx::query_scalar(
-            "SELECT COALESCE(SUM(lovelace_amount)::bigint, 0) FROM yaci_store.address_utxo WHERE tx_hash = $1 AND owner_addr NOT LIKE 'addr1x%'"
-        )
-        .bind(&event.tx_hash)
-        .fetch_optional(&self.pool)
-        .await?;
-
-        // Check for milestone field and update if present
-        let db_milestone_id: Option<i32> = if let (Some(vc_id), Some(milestone_id)) = (vendor_contract_id, event_body.get("milestone").and_then(|m| m.as_str())) {
+        // Disburse is a treasury-level operation — look up treasury_id, not project_db_id
+        let treasury_id: Option<i32> = if !instance.is_empty() {
             sqlx::query_scalar(
-                r#"
-                UPDATE treasury.milestones
-                SET status = 'disbursed',
-                    disburse_tx_hash = $1,
-                    disburse_time = $2,
-                    disburse_amount = $3
-                WHERE vendor_contract_id = $4 AND milestone_id = $5
-                RETURNING id
-                "#
+                "SELECT id FROM treasury.treasury_contracts WHERE contract_instance = $1"
            )
-            .bind(&event.tx_hash)
-            .bind(event.block_time)
-            .bind(disburse_amount)
-            .bind(vc_id)
-            .bind(milestone_id)
+            .bind(instance)
            .fetch_optional(&self.pool)
            .await?
         } else {
             None
         };
 
-        // Always insert the disburse event (may be treasury-level without vendor_contract)
-        self.insert_event_with_destination(event, "disburse", None, vendor_contract_id, db_milestone_id, &destination, body).await?;
+        self.insert_event_full(event, "disburse", treasury_id, None, None, None, &None, &destination, body).await?;
 
         Ok(())
     }
 
-    /// Process a withdraw event
+    /// Process a withdraw event - vendor claims matured milestone funds
     async fn process_withdraw(&self, event: &RawTomEvent, body: &Value) -> anyhow::Result<()> {
         let event_body = body.get("body").unwrap_or(body);
 
@@ -485,24 +699,113 @@ impl EventProcessor {
            .and_then(|i| i.as_str())
            .filter(|s| !s.is_empty());
 
-        // Get vendor contract ID - either from metadata or by tracing tx chain
-        let vendor_contract_id: Option<i32> = if let Some(pid) = project_id_from_meta {
+        let milestone_hints = collect_milestone_id_hints(event_body);
+
+        let project_db_id: Option<i32> = if let Some(pid) = project_id_from_meta {
             sqlx::query_scalar(
-                "SELECT id FROM treasury.vendor_contracts WHERE project_id = $1"
+                "SELECT id FROM treasury.projects WHERE project_id = $1"
            )
            .bind(pid)
            .fetch_optional(&self.pool)
            .await?
         } else {
-            self.find_vendor_contract_from_inputs(&event.tx_hash).await?
+            self.find_project_from_inputs(&event.tx_hash, &milestone_hints).await?
         };
 
-        if let Some(vc_id) = vendor_contract_id {
-            self.insert_event(event, "withdraw", None, Some(vc_id), None, body).await?;
-        } else {
-            tracing::debug!("Could not find vendor contract for withdraw event {}", event.tx_hash);
+        if project_db_id.is_none() {
+            tracing::warn!("Could not find vendor contract for withdraw event {}", event.tx_hash);
+        }
+
+        // Withdraw amount comes from tx outputs (non-script addresses) and is independent of vc lookup.
+        let withdraw_amount: Option<i64> = sqlx::query_scalar(
+            "SELECT COALESCE(SUM(lovelace_amount)::bigint, 0) FROM yaci_store.address_utxo WHERE tx_hash = $1 AND owner_addr NOT LIKE 'addr1x%'"
+        )
+        .bind(&event.tx_hash)
+        .fetch_optional(&self.pool)
+        .await?;
+
+        let withdraw_amount = match withdraw_amount {
+            Some(amt) if amt > 0 => Some(amt),
+            _ => {
+                let fallback: Option<i64> = sqlx::query_scalar(
+                    "SELECT COALESCE(SUM(lovelace_amount)::bigint, 0) FROM treasury.utxo_history WHERE tx_hash = $1 AND address NOT LIKE 'addr1x%'"
+                )
+                .bind(&event.tx_hash)
+                .fetch_optional(&self.pool)
+                .await?;
+                match fallback {
+                    Some(a) if a > 0 => Some(a),
+                    _ => withdraw_amount,
+                }
+            }
+        };
+
+        let mut matched_milestone_id: Option<i32> = None;
+
+        if let Some(vc_id) = project_db_id {
+            if let Some(obj) = event_body.get("milestones").and_then(|m| m.as_object()) {
+                for (milestone_id, _milestone_data) in obj {
+                    let order_hint = canonical_milestone_order(milestone_id);
+                    let db_milestone_id: Option<i32> = sqlx::query_scalar(
+                        r#"
+                        UPDATE treasury.milestones
+                        SET withdrawn = TRUE,
+                            withdraw_tx_hash = $1,
+                            withdraw_time = $2,
+                            withdraw_amount = $3
+                        WHERE project_db_id = $4
+                          AND NOT archived
+                          AND (milestone_id = $5 OR milestone_order = $6)
+                        RETURNING id
+                        "#
+                    )
+                    .bind(&event.tx_hash)
+                    .bind(event.block_time)
+                    .bind(withdraw_amount)
+                    .bind(vc_id)
+                    .bind(milestone_id)
+                    .bind(order_hint)
+                    .fetch_optional(&self.pool)
+                    .await?;
+
+                    if matched_milestone_id.is_none() {
+                        matched_milestone_id = db_milestone_id;
+                    }
+                }
+            }
+
+            if let Some(milestone_id) = event_body.get("milestone").and_then(|m| m.as_str()) {
+                let order_hint = canonical_milestone_order(milestone_id);
+                let db_milestone_id: Option<i32> = sqlx::query_scalar(
+                    r#"
+                    UPDATE treasury.milestones
+                    SET withdrawn = TRUE,
+                        withdraw_tx_hash = $1,
+                        withdraw_time = $2,
+                        withdraw_amount = $3
+                    WHERE project_db_id = $4
+                      AND NOT archived
+                      AND (milestone_id = $5 OR milestone_order = $6)
+                    RETURNING id
+                    "#
+                )
+                .bind(&event.tx_hash)
+                .bind(event.block_time)
+                .bind(withdraw_amount)
+                .bind(vc_id)
+                .bind(milestone_id)
+                .bind(order_hint)
+                .fetch_optional(&self.pool)
+                .await?;
+
+                if matched_milestone_id.is_none() {
+                    matched_milestone_id = db_milestone_id;
+                }
+            }
         }
 
+        self.insert_event_full(event, "withdraw", None, project_db_id, matched_milestone_id, withdraw_amount, &None, &None, body).await?;
+
         Ok(())
     }
 
@@ -516,17 +819,17 @@ impl EventProcessor {
         let reason = extract_text(event_body, "reason");
 
         // Get vendor contract ID - either from metadata or by tracing tx chain
-        let vendor_contract_id: Option<i32> = if let Some(pid) = project_id_from_meta {
+        let project_db_id: Option<i32> = if let Some(pid) = project_id_from_meta {
             sqlx::query_scalar(
-                "UPDATE treasury.vendor_contracts SET status = 'paused' WHERE project_id = $1 RETURNING id"
+                "UPDATE treasury.projects SET status = 'paused' WHERE project_id = $1 RETURNING id"
             )
             .bind(pid)
             .fetch_optional(&self.pool)
             .await?
         } else {
             // Find via tx chain first, then update
-            if let Some(vc_id) = self.find_vendor_contract_from_inputs(&event.tx_hash).await? {
-                sqlx::query("UPDATE treasury.vendor_contracts SET status = 'paused' WHERE id = $1")
+            if let Some(vc_id) = self.find_project_from_inputs(&event.tx_hash, &[]).await? {
+                sqlx::query("UPDATE treasury.projects SET status = 'paused' WHERE id = $1")
                     .bind(vc_id)
                     .execute(&self.pool)
                     .await?;
@@ -536,10 +839,20 @@ impl EventProcessor {
             }
         };
 
-        if let Some(vc_id) = vendor_contract_id {
-            self.insert_event_with_reason(event, "pause", None, Some(vc_id), None, &reason, body).await?;
+        let matched_milestone_id = if let Some(vc_id) = project_db_id {
+            // Also update per-milestone pause flags from output datum if available
+            self.update_milestone_pause_from_datum(&event.tx_hash, vc_id).await?;
+            // Resolve the affected milestone(s) from body.milestones keys
+            self.resolve_first_milestone_from_body(event_body, vc_id).await?
+        } else {
+            None
+        };
+
+        if let Some(vc_id) = project_db_id {
+            self.insert_event_full(event, "pause", None, Some(vc_id), matched_milestone_id, None, &reason, &None, body).await?;
         } else {
-            tracing::debug!("Could not find vendor contract for pause event {}", event.tx_hash);
+            tracing::warn!("Could not find vendor contract for pause event {}", event.tx_hash);
+            self.insert_event_full(event, "pause", None, None, None, None, &reason, &None, body).await?;
         }
 
         Ok(())
@@ -554,16 +867,16 @@ impl EventProcessor {
             .filter(|s| !s.is_empty());
 
         // Get vendor contract ID - either from metadata or by tracing tx chain
-        let vendor_contract_id: Option<i32> = if let Some(pid) = project_id_from_meta {
+        let project_db_id: Option<i32> = if let Some(pid) = project_id_from_meta {
             sqlx::query_scalar(
-                "UPDATE treasury.vendor_contracts SET status = 'active' WHERE project_id = $1 RETURNING id"
+                "UPDATE treasury.projects SET status = 'active' WHERE project_id = $1 RETURNING id"
             )
             .bind(pid)
             .fetch_optional(&self.pool)
             .await?
         } else {
-            if let Some(vc_id) = self.find_vendor_contract_from_inputs(&event.tx_hash).await? {
-                sqlx::query("UPDATE treasury.vendor_contracts SET status = 'active' WHERE id = $1")
+            if let Some(vc_id) = self.find_project_from_inputs(&event.tx_hash, &[]).await? {
+                sqlx::query("UPDATE treasury.projects SET status = 'active' WHERE id = $1")
                     .bind(vc_id)
                     .execute(&self.pool)
                     .await?;
@@ -573,16 +886,25 @@ impl EventProcessor {
             }
         };
 
-        if let Some(vc_id) = vendor_contract_id {
-            self.insert_event(event, "resume", None, Some(vc_id), None, body).await?;
+        let matched_milestone_id = if let Some(vc_id) = project_db_id {
+            // Also update per-milestone pause flags from output datum if available
+            self.update_milestone_pause_from_datum(&event.tx_hash, vc_id).await?;
+            self.resolve_first_milestone_from_body(event_body, vc_id).await?
         } else {
-            tracing::debug!("Could not find vendor contract for resume event {}", event.tx_hash);
+            None
+        };
+
+        if let Some(vc_id) = project_db_id {
+            self.insert_event_full(event, "resume", None, Some(vc_id), matched_milestone_id, None, &None, &None, body).await?;
+        } else {
+            tracing::warn!("Could not find vendor contract for resume event {}", event.tx_hash);
+            self.insert_event_full(event, "resume", None, None, None, None, &None, &None, body).await?;
         }
 
         Ok(())
     }
 
-    /// Process a modify event - update vendor contract
+    /// Process a modify event - update vendor contract, archive and replace milestones
     async fn process_modify(&self, event: &RawTomEvent, body: &Value) -> anyhow::Result<()> {
         let event_body = body.get("body").unwrap_or(body);
@@ -592,21 +914,125 @@ impl EventProcessor {
         let reason = extract_text(event_body, "reason");
 
         // Get vendor contract ID - either from metadata or by tracing tx chain
-        let vendor_contract_id: Option<i32> = if let Some(pid) = project_id_from_meta {
+        let project_db_id: Option<i32> = if let Some(pid) = project_id_from_meta {
             sqlx::query_scalar(
-                "SELECT id FROM treasury.vendor_contracts WHERE project_id = $1"
+                "SELECT id FROM treasury.projects WHERE project_id = $1"
             )
             .bind(pid)
             .fetch_optional(&self.pool)
             .await?
         } else {
-            self.find_vendor_contract_from_inputs(&event.tx_hash).await?
+            self.find_project_from_inputs(&event.tx_hash, &[]).await?
         };
 
-        if let Some(vc_id) = vendor_contract_id {
-            self.insert_event_with_reason(event, "modify", None, Some(vc_id), None, &reason, body).await?;
+        if let Some(vc_id) = project_db_id {
+            // Update naming fields if present in modify metadata
+            let project_name = extract_text(event_body, "label");
+            let description = extract_text(event_body, "description");
+            let vendor_address = event_body.get("vendor")
+                .and_then(|v| extract_text_from_value(v.get("label")));
+
+            sqlx::query(
+                r#"
+                UPDATE treasury.projects
+                SET project_name = COALESCE($1, project_name),
+                    description = COALESCE($2, description),
+                    vendor_address = COALESCE($3, vendor_address)
+                WHERE id = $4
+                "#
+            )
+            .bind(&project_name)
+            .bind(&description)
+            .bind(&vendor_address)
+            .bind(vc_id)
+            .execute(&self.pool)
+            .await?;
+
+            // If milestones are present in the modify metadata, archive existing and insert new
+            let milestones_list: Vec<(String, &Value)> = if let Some(milestones_val) = event_body.get("milestones") {
+                if let Some(arr) = milestones_val.as_array() {
+                    arr.iter().enumerate().map(|(idx, m)| {
+                        let id = m.get("identifier")
+                            .and_then(|i| i.as_str())
+                            .unwrap_or(&format!("m-{}", idx))
+                            .to_string();
+                        (id, m)
+                    }).collect()
+                } else if let Some(obj) = milestones_val.as_object() {
+                    obj.iter().map(|(k, v)| (k.clone(), v)).collect()
+                } else {
+                    vec![]
+                }
+            } else {
+                vec![]
+            };
+
+            if !milestones_list.is_empty() {
+                // Archive all active milestones for this vendor contract
+                sqlx::query(
+                    r#"
+                    UPDATE treasury.milestones
+                    SET archived = TRUE, archived_by_tx_hash = $1, archived_at = $2
+                    WHERE project_db_id = $3 AND NOT archived
+                    "#
+                )
+                .bind(&event.tx_hash)
+                .bind(event.block_time)
+                .bind(vc_id)
+                .execute(&self.pool)
+                .await?;
+
+                // Insert new milestone rows
+                for (idx, (milestone_id_str, milestone)) in milestones_list.iter().enumerate() {
+                    let milestone_id = milestone_id_str.as_str();
+                    let acceptance_criteria = extract_text_from_value(Some(milestone.get("acceptanceCriteria").unwrap_or(&Value::Null)));
+                    let (label, description) = extract_milestone_label_description(
+                        extract_text_from_value(Some(milestone.get("label").unwrap_or(&Value::Null))),
+                        extract_text_from_value(Some(milestone.get("description").unwrap_or(&Value::Null))),
+                        &acceptance_criteria,
+                    );
+                    let amount = milestone.get("amount")
+                        .and_then(|a| a.as_i64());
+
+                    let new_id: i32 = sqlx::query_scalar(
+                        r#"
+                        INSERT INTO treasury.milestones (
+                            project_db_id, milestone_id, milestone_order, label,
+                            description, acceptance_criteria, amount_lovelace
+                        )
+                        VALUES ($1, $2, $3, $4, $5, $6, $7)
+                        RETURNING id
+                        "#
+                    )
+                    .bind(vc_id)
+                    .bind(milestone_id)
+                    .bind((idx + 1) as i32)
+                    .bind(&label)
+                    .bind(&description)
+                    .bind(&acceptance_criteria)
+                    .bind(amount)
+                    .fetch_one(&self.pool)
+                    .await?;
+
+                    // Update superseded_by on the archived row that matches this milestone_id
+                    sqlx::query(
+                        r#"
+                        UPDATE treasury.milestones
+                        SET superseded_by = $1
+                        WHERE project_db_id = $2 AND milestone_id = $3 AND archived AND superseded_by IS NULL
+                        "#
+                    )
+                    .bind(new_id)
+                    .bind(vc_id)
+                    .bind(milestone_id)
+                    .execute(&self.pool)
+                    .await?;
+                }
+            }
+
+            self.insert_event_full(event, "modify", None, Some(vc_id), None, None, &reason, &None, body).await?;
         } else {
-            tracing::debug!("Could not find vendor contract for modify event {}", event.tx_hash);
+            tracing::warn!("Could not find vendor contract for modify event {}", event.tx_hash);
         }
 
         Ok(())
@@ -622,16 +1048,16 @@ impl EventProcessor {
         let reason = extract_text(event_body, "reason");
 
         // Get vendor contract ID - either from metadata or by tracing tx chain
-        let vendor_contract_id: Option<i32> = if let Some(pid) = project_id_from_meta {
+        let project_db_id: Option<i32> = if let Some(pid) = project_id_from_meta {
             sqlx::query_scalar(
-                "UPDATE treasury.vendor_contracts SET status = 'cancelled' WHERE project_id = $1 RETURNING id"
+                "UPDATE treasury.projects SET status = 'cancelled' WHERE project_id = $1 RETURNING id"
             )
             .bind(pid)
             .fetch_optional(&self.pool)
             .await?
         } else {
-            if let Some(vc_id) = self.find_vendor_contract_from_inputs(&event.tx_hash).await? {
-                sqlx::query("UPDATE treasury.vendor_contracts SET status = 'cancelled' WHERE id = $1")
+            if let Some(vc_id) = self.find_project_from_inputs(&event.tx_hash, &[]).await? {
+                sqlx::query("UPDATE treasury.projects SET status = 'cancelled' WHERE id = $1")
                    .bind(vc_id)
                    .execute(&self.pool)
                    .await?;
@@ -641,10 +1067,10 @@ impl EventProcessor {
             }
         };
 
-        if let Some(vc_id) = vendor_contract_id {
-            self.insert_event_with_reason(event, "cancel", None, Some(vc_id), None, &reason, body).await?;
+        if let Some(vc_id) = project_db_id {
+            self.insert_event_full(event, "cancel", None, Some(vc_id), None, None, &reason, &None, body).await?;
         } else {
-            tracing::debug!("Could not find vendor contract for cancel event {}", event.tx_hash);
+            tracing::warn!("Could not find vendor contract for cancel event {}", event.tx_hash);
         }
 
         Ok(())
@@ -659,7 +1085,7 @@ impl EventProcessor {
             .fetch_optional(&self.pool)
             .await?;
 
-        self.insert_event(event, "sweep", treasury_id, None, None, body).await?;
+        self.insert_event_full(event, "sweep", treasury_id, None, None, None, &None, &None, body).await?;
 
         Ok(())
     }
@@ -673,65 +1099,39 @@ impl EventProcessor {
             .fetch_optional(&self.pool)
             .await?;
 
-        self.insert_event(event, "reorganize", treasury_id, None, None, body).await?;
+        self.insert_event_full(event, "reorganize", treasury_id, None, None, None, &None, &None, body).await?;
 
         Ok(())
     }
 
-    /// Insert an event record
-    async fn insert_event(
+    /// Insert an event record with all optional fields
+    async fn insert_event_full(
         &self,
         event: &RawTomEvent,
         event_type: &str,
         treasury_id: Option<i32>,
-        vendor_contract_id: Option<i32>,
-        milestone_id: Option<i32>,
-        body: &Value,
-    ) -> anyhow::Result<()> {
-        sqlx::query(
-            r#"
-            INSERT INTO treasury.events (
-                tx_hash, slot, block_number, block_time, event_type,
-                treasury_id, vendor_contract_id, milestone_id, metadata
-            )
-            VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9)
-            ON CONFLICT (tx_hash) DO NOTHING
-            "#
-        )
-        .bind(&event.tx_hash)
-        .bind(event.slot)
-        .bind(event.block_number)
-        .bind(event.block_time)
-        .bind(event_type)
-        .bind(treasury_id)
-        .bind(vendor_contract_id)
-        .bind(milestone_id)
-        .bind(body)
-        .execute(&self.pool)
-        .await?;
-
-        Ok(())
-    }
-
-    /// Insert an event with reason field
-    async fn insert_event_with_reason(
-        &self,
-        event: &RawTomEvent,
-        event_type: &str,
-        treasury_id: Option<i32>,
-        vendor_contract_id: Option<i32>,
+        project_db_id: Option<i32>,
         milestone_id: Option<i32>,
+        amount_lovelace: Option<i64>,
         reason: &Option<String>,
+        destination: &Option<String>,
        body: &Value,
    ) -> anyhow::Result<()> {
        sqlx::query(
            r#"
            INSERT INTO treasury.events (
                tx_hash, slot, block_number, block_time, event_type,
-                treasury_id, vendor_contract_id, milestone_id, reason, metadata
+                treasury_id, project_db_id, milestone_id,
+                amount_lovelace, reason, destination, metadata
            )
-            VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
-            ON CONFLICT (tx_hash) DO NOTHING
+            VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12)
+            ON CONFLICT (tx_hash) DO UPDATE SET
+                treasury_id = COALESCE(EXCLUDED.treasury_id, treasury.events.treasury_id),
+                project_db_id = COALESCE(EXCLUDED.project_db_id, treasury.events.project_db_id),
+                milestone_id = COALESCE(EXCLUDED.milestone_id, treasury.events.milestone_id),
+                amount_lovelace = COALESCE(EXCLUDED.amount_lovelace, treasury.events.amount_lovelace),
+                reason = COALESCE(EXCLUDED.reason, treasury.events.reason),
+                destination = COALESCE(EXCLUDED.destination, treasury.events.destination)
            "#
        )
        .bind(&event.tx_hash)
@@ -740,45 +1140,10 @@ impl EventProcessor {
        .bind(event.block_time)
        .bind(event_type)
        .bind(treasury_id)
-        .bind(vendor_contract_id)
+        .bind(project_db_id)
        .bind(milestone_id)
+        .bind(amount_lovelace)
        .bind(reason)
-        .bind(body)
-        .execute(&self.pool)
-        .await?;
-
-        Ok(())
-    }
-
-    /// Insert an event with destination field
-    async fn insert_event_with_destination(
-        &self,
-        event: &RawTomEvent,
-        event_type: &str,
-        treasury_id: Option<i32>,
-        vendor_contract_id: Option<i32>,
-        milestone_id: Option<i32>,
-        destination: &Option<String>,
-        body: &Value,
-    ) -> anyhow::Result<()> {
-        sqlx::query(
-            r#"
-            INSERT INTO treasury.events (
-                tx_hash, slot, block_number, block_time, event_type,
-                treasury_id, vendor_contract_id, milestone_id, destination, metadata
-            )
-            VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10)
-            ON CONFLICT (tx_hash) DO NOTHING
-            "#
-        )
-        .bind(&event.tx_hash)
-        .bind(event.slot)
-        .bind(event.block_number)
-        .bind(event.block_time)
-        .bind(event_type)
-        .bind(treasury_id)
-        .bind(vendor_contract_id)
-        .bind(milestone_id)
        .bind(destination)
        .bind(body)
        .execute(&self.pool)
@@ -787,30 +1152,44 @@ impl EventProcessor {
        Ok(())
    }
 
-    /// Find vendor_contract_id by looking up input UTXOs in our treasury.utxos tracking table.
-    /// When a fund event is processed, its output UTXOs are recorded with the vendor_contract_id.
+    /// Find project_db_id by looking up input UTXOs in our treasury.utxo_history tracking table.
+    /// When a fund event is processed, its output UTXOs are recorded with the project_db_id.
     /// Subsequent events (complete/withdraw/etc) spend those UTXOs, so we can find the project
     /// by looking at which tracked UTXOs are being spent as inputs.
-    async fn find_vendor_contract_from_inputs(&self, tx_hash: &str) -> anyhow::Result<Option<i32>> {
-        // Get the inputs to this transaction
-        let inputs: Vec<(String, i16)> = sqlx::query_as(
-            r#"
-            SELECT tx_hash, output_index::smallint
-            FROM yaci_store.tx_input
-            WHERE spent_tx_hash = $1
-            "#
+    async fn find_project_from_inputs(
+        &self,
+        tx_hash: &str,
+        milestone_id_hints: &[String],
+    ) -> anyhow::Result<Option<i32>> {
+        // Get the inputs to this transaction from the transaction table's inputs JSONB.
+        // We use this instead of yaci_store.tx_input because tx_input is pruned
+        // (only retains ~44K recent slots), while transaction.inputs is permanent.
+        let inputs_json: Option<serde_json::Value> = sqlx::query_scalar(
+            "SELECT inputs::jsonb FROM yaci_store.transaction WHERE tx_hash = $1"
         )
         .bind(tx_hash)
-        .fetch_all(&self.pool)
+        .fetch_optional(&self.pool)
         .await?;
 
-        // Look up each input in our tracked UTXOs
+        let inputs: Vec<(String, i16)> = match inputs_json {
+            Some(serde_json::Value::Array(arr)) => arr.iter().filter_map(|elem| {
+                let tx = elem.get("tx_hash")?.as_str()?.to_string();
+                let idx = elem.get("output_index")?.as_i64()? as i16;
+                Some((tx, idx))
+            }).collect(),
+            _ => vec![],
+        };
+
+        // Collect every input that maps to a tracked vendor contract. A single tx can
+        // include inputs from multiple project chains (e.g. fee/collateral from another
+        // contract), so we don't want to commit to the first match blindly.
+        let mut candidates: Vec<(String, i16, i32)> = Vec::new();
         for (input_tx_hash, input_output_index) in &inputs {
-            let vendor_contract_id: Option<i32> = sqlx::query_scalar(
+            let project_db_id: Option<i32> = sqlx::query_scalar(
                 r#"
-                SELECT vendor_contract_id
-                FROM treasury.utxos
-                WHERE tx_hash = $1 AND output_index = $2 AND vendor_contract_id IS NOT NULL
+                SELECT project_db_id
+                FROM treasury.utxo_history
+                WHERE tx_hash = $1 AND output_index = $2 AND project_db_id IS NOT NULL
                 "#
             )
            .bind(input_tx_hash)
@@ -818,135 +1197,456 @@ impl EventProcessor {
            .fetch_optional(&self.pool)
            .await?;
 
-            if let Some(vc_id) = vendor_contract_id {
-                // Mark this UTXO as spent and record the new outputs
-                sqlx::query(
-                    r#"
-                    UPDATE treasury.utxos
-                    SET spent = true, spent_tx_hash = $1
-                    WHERE tx_hash = $2 AND output_index = $3
-                    "#
-                )
-                .bind(tx_hash)
-                .bind(input_tx_hash)
-                .bind(input_output_index)
-                .execute(&self.pool)
-                .await?;
+            if let Some(vc_id) = project_db_id {
+                candidates.push((input_tx_hash.clone(), *input_output_index, vc_id));
+            }
+        }
 
-                // Record the outputs of this transaction with the same vendor_contract_id
-                let outputs: Option<serde_json::Value> = sqlx::query_scalar(
-                    "SELECT outputs::jsonb FROM yaci_store.transaction WHERE tx_hash = $1"
+        if candidates.is_empty() {
+            tracing::warn!("No tracked UTXO found for tx {} inputs", tx_hash);
+            return Ok(None);
+        }
+
+        // Disambiguate when multiple project chains feed this tx. Prefer the candidate
+        // whose stored milestones match the hint keys carried by the event metadata
+        // (the `body.milestones` keys for complete/withdraw). Falls back to the first
+        // candidate if no hint matches.
+        let chosen_idx = if candidates.len() > 1 && !milestone_id_hints.is_empty() {
+            let mut best_idx = 0usize;
+            let mut best_score: i64 = -1;
+            for (i, (_, _, vc_id)) in candidates.iter().enumerate() {
+                let score: i64 = sqlx::query_scalar(
+                    "SELECT COUNT(*) FROM treasury.milestones WHERE project_db_id = $1 AND milestone_id = ANY($2)"
                 )
-                .bind(tx_hash)
-                .fetch_optional(&self.pool)
+                .bind(vc_id)
+                .bind(milestone_id_hints)
+                .fetch_one(&self.pool)
                .await?;
+                if score > best_score {
+                    best_score = score;
+                    best_idx = i;
+                }
+            }
+            best_idx
+        } else {
+            0
+        };
 
-                if let Some(serde_json::Value::Array(output_arr)) = outputs {
-                    for output in output_arr {
-                        if let (Some(out_tx_hash), Some(output_index)) = (
-                            output.get("tx_hash").and_then(|h| h.as_str()),
-                            output.get("output_index").and_then(|i| i.as_i64())
-                        ) {
-                            sqlx::query(
-                                r#"
-                                INSERT INTO treasury.utxos (tx_hash, output_index, vendor_contract_id, spent)
-                                VALUES ($1, $2, $3, false)
-                                ON CONFLICT (tx_hash, output_index) DO UPDATE
-                                SET vendor_contract_id = EXCLUDED.vendor_contract_id
-                                "#
-                            )
-                            .bind(out_tx_hash)
-                            .bind(output_index as i16)
-                            .bind(vc_id)
-                            .execute(&self.pool)
-                            .await?;
+        let (input_tx_hash, input_output_index, vc_id) = candidates[chosen_idx].clone();
+
+        // Get the input UTXO's script address as a fallback for pruned outputs
+        let input_address: Option<String> = sqlx::query_scalar(
+            "SELECT address FROM treasury.utxo_history WHERE tx_hash = $1 AND output_index = $2"
+        )
+        .bind(&input_tx_hash)
+        .bind(input_output_index)
+        .fetch_optional(&self.pool)
+        .await?
+        .flatten();
+
+        // Mark this UTXO as spent and record the new outputs
+        sqlx::query(
+            r#"
+            UPDATE treasury.utxo_history
+            SET spent = true, spent_tx_hash = $1
+            WHERE tx_hash = $2 AND output_index = $3
+            "#
+        )
+        .bind(tx_hash)
+        .bind(&input_tx_hash)
+        .bind(input_output_index)
+        .execute(&self.pool)
+        .await?;
+
+        // Record the outputs of this transaction with the same project_db_id
+        let outputs: Option<serde_json::Value> = sqlx::query_scalar(
+            "SELECT outputs::jsonb FROM yaci_store.transaction WHERE tx_hash = $1"
+        )
+        .bind(tx_hash)
+        .fetch_optional(&self.pool)
+        .await?;
+
+        if let Some(serde_json::Value::Array(output_arr)) = outputs {
+            for output in output_arr {
+                if let (Some(out_tx_hash), Some(output_index)) = (
+                    output.get("tx_hash").and_then(|h| h.as_str()),
+                    output.get("output_index").and_then(|i| i.as_i64())
+                ) {
+                    let (address, lovelace_amount, out_datum) = {
+                        let (addr, amt, datum) = self.lookup_utxo(out_tx_hash, output_index as i16).await?;
+                        if addr.is_some() {
+                            (addr, amt, datum)
+                        } else {
+                            (input_address.clone(), None, None)
                        }
+                    };
+
+                    if !address.as_ref().map_or(false, |a| a.starts_with("addr1x")) {
+                        continue;
                     }
-                }
-                return Ok(Some(vc_id));
+
+                    let address_type = Some("vendor_contract");
+
+                    sqlx::query(
+                        r#"
+                        INSERT INTO treasury.utxo_history (tx_hash, output_index, project_db_id, address, address_type, lovelace_amount, inline_datum_cbor, spent)
+                        VALUES ($1, $2, $3, $4, $5, $6, $7, false)
+                        ON CONFLICT (tx_hash, output_index) DO UPDATE
+                        SET project_db_id = EXCLUDED.project_db_id,
+                            address = COALESCE(EXCLUDED.address, treasury.utxo_history.address),
+                            address_type = COALESCE(EXCLUDED.address_type, treasury.utxo_history.address_type),
+                            lovelace_amount = COALESCE(EXCLUDED.lovelace_amount, treasury.utxo_history.lovelace_amount),
+                            inline_datum_cbor = COALESCE(EXCLUDED.inline_datum_cbor, treasury.utxo_history.inline_datum_cbor)
+                        "#
+                    )
+                    .bind(out_tx_hash)
+                    .bind(output_index as i16)
+                    .bind(vc_id)
+                    .bind(&address)
+                    .bind(address_type)
+                    .bind(lovelace_amount)
+                    .bind(&out_datum)
+                    .execute(&self.pool)
+                    .await?;
+                }
             }
         }
 
-        tracing::debug!("No tracked UTXO found for tx {} inputs", tx_hash);
-        Ok(None)
+        Ok(Some(vc_id))
     }
 
-    /// Sync UTXOs for all tracked addresses
-    pub async fn sync_utxos(&self) -> anyhow::Result<()> {
-        // Get all contract addresses (both treasury and vendor)
-        let addresses: Vec<String> = sqlx::query_scalar(
+    /// Pre-fetch UTXOs from yaci_store into treasury.utxo_history before they can be pruned.
+    /// Called before processing a batch of events to capture UTXO data that YACI Store
+    /// may prune (spent UTXOs are removed after ~2160 blocks / ~10 days).
+    pub async fn pre_fetch_utxos(&self, tx_hashes: &[String]) -> anyhow::Result<()> {
+        if tx_hashes.is_empty() {
+            return Ok(());
+        }
+
+        // Pre-fetch output UTXOs for event transactions
+        let output_result = sqlx::query(
             r#"
-            SELECT contract_address FROM treasury.treasury_contracts WHERE contract_address IS NOT NULL
-            UNION
-            SELECT contract_address FROM treasury.vendor_contracts WHERE contract_address IS NOT NULL
-            UNION
-            SELECT vendor_address FROM treasury.vendor_contracts WHERE vendor_address IS NOT NULL
+            INSERT INTO treasury.utxo_history (tx_hash, output_index, address, lovelace_amount, inline_datum_cbor)
+            SELECT au.tx_hash, au.output_index, au.owner_addr, au.lovelace_amount, au.inline_datum
+            FROM yaci_store.address_utxo au
+            WHERE au.tx_hash = ANY($1)
+            ON CONFLICT (tx_hash, output_index) DO UPDATE
+            SET address = COALESCE(EXCLUDED.address, treasury.utxo_history.address),
+                lovelace_amount = COALESCE(EXCLUDED.lovelace_amount, treasury.utxo_history.lovelace_amount),
+                inline_datum_cbor = COALESCE(EXCLUDED.inline_datum_cbor, treasury.utxo_history.inline_datum_cbor)
             "#
         )
-        .fetch_all(&self.pool)
+        .bind(tx_hashes)
+        .execute(&self.pool)
+        .await?;
+
+        // Pre-fetch input-side UTXOs (outputs being spent by these transactions)
+        let input_result = sqlx::query(
+            r#"
+            INSERT INTO treasury.utxo_history (tx_hash, output_index, address, lovelace_amount, inline_datum_cbor)
+            SELECT au.tx_hash, au.output_index, au.owner_addr, au.lovelace_amount, au.inline_datum
+            FROM yaci_store.transaction t
+            CROSS JOIN LATERAL jsonb_array_elements(t.inputs::jsonb) AS inp
+            JOIN yaci_store.address_utxo au
+              ON au.tx_hash = inp->>'tx_hash'
+             AND au.output_index = (inp->>'output_index')::smallint
+            WHERE t.tx_hash = ANY($1)
+            ON CONFLICT (tx_hash, output_index) DO UPDATE
+            SET address = COALESCE(EXCLUDED.address, treasury.utxo_history.address),
+                lovelace_amount = COALESCE(EXCLUDED.lovelace_amount, treasury.utxo_history.lovelace_amount),
+                inline_datum_cbor = COALESCE(EXCLUDED.inline_datum_cbor, treasury.utxo_history.inline_datum_cbor)
+            "#
+        )
+        .bind(tx_hashes)
+        .execute(&self.pool)
        .await?;
 
-        for address in addresses {
-            self.sync_address_utxos(&address).await?;
+        let total = output_result.rows_affected() + input_result.rows_affected();
+        if total > 0 {
+            tracing::debug!("Pre-fetched {} UTXOs ({} outputs + {} inputs) into treasury.utxo_history",
+                total, output_result.rows_affected(), input_result.rows_affected());
        }
 
        Ok(())
    }
 
-    /// Sync UTXOs for a specific address
-    async fn sync_address_utxos(&self, address: &str) -> anyhow::Result<()> {
-        // Determine address type and get vendor_contract_id if applicable
-        let vendor_contract_id: Option<i32> = sqlx::query_scalar(
-            "SELECT id FROM treasury.vendor_contracts WHERE contract_address = $1 OR vendor_address = $1"
+    /// Fetch script UTXO data (address, lovelace_amount, inline_datum) for a transaction.
+    ///
+    /// Picks the addr1x output with the LARGEST inline_datum across BOTH
+    /// `yaci_store.address_utxo` AND `treasury.utxo_history`. Fund txs
+    /// often produce two script outputs — a vendor-contract output carrying
+    /// the kilobyte project datum, plus a treasury-contract reference
+    /// output carrying a trivial `Constr(0, [])` datum (3 bytes). yaci_store
+    /// prunes spent UTXOs (~2160 blocks) so the spent vendor-contract
+    /// output may only survive in `treasury.utxo_history` (captured by the
+    /// trigger before pruning); the unspent treasury-reference output stays
+    /// in yaci_store. Querying yaci_store alone returns the trivial datum
+    /// for these txs; merging across both reliably surfaces the real
+    /// project datum.
+    async fn get_script_utxo_for_tx(&self, tx_hash: &str) -> anyhow::Result<(Option<String>, Option<i64>, Option<String>)> {
+        let result: Option<(Option<String>, Option<i64>, Option<String>)> = sqlx::query_as(
+            r#"
+            SELECT address, lovelace_amount, inline_datum_cbor
+            FROM (
+                SELECT owner_addr AS address, lovelace_amount, inline_datum AS inline_datum_cbor
+                FROM yaci_store.address_utxo
+                WHERE tx_hash = $1 AND owner_addr LIKE 'addr1x%'
+                UNION ALL
+                SELECT address, lovelace_amount, inline_datum_cbor
+                FROM treasury.utxo_history
+                WHERE tx_hash = $1 AND address LIKE 'addr1x%'
+            ) merged
+            ORDER BY length(COALESCE(inline_datum_cbor, '')) DESC
+            LIMIT 1
+            "#
        )
-        .bind(address)
+        .bind(tx_hash)
        .fetch_optional(&self.pool)
        .await?;
 
-        let address_type = if address.starts_with("addr1x") {
-            if vendor_contract_id.is_some() { "vendor_contract" } else { "treasury" }
-        } else {
-            "vendor"
-        };
+        Ok(result.unwrap_or((None, None, None)))
    }
 
-        // Get UTXOs from yaci_store
-        let utxos = sqlx::query_as::<_, (String, i16, i64, i64, Option<i64>)>(
-            r#"
-            SELECT tx_hash, output_index::smallint, lovelace_amount, slot, block as block_number
-            FROM yaci_store.address_utxo
-            WHERE owner_addr = $1
-            "#
+    /// Look up a specific UTXO by tx_hash + output_index.
+    /// Tries yaci_store.address_utxo first, falls back to treasury.utxo_history.
+    async fn lookup_utxo(&self, tx_hash: &str, output_index: i16) -> anyhow::Result<(Option<String>, Option<i64>, Option<String>)> {
+        let result: Option<(String, Option<i64>, Option<String>)> = sqlx::query_as(
+            "SELECT owner_addr, lovelace_amount, inline_datum FROM yaci_store.address_utxo WHERE tx_hash = $1 AND output_index = $2 LIMIT 1"
        )
-        .bind(address)
-        .fetch_all(&self.pool)
+        .bind(tx_hash)
+        .bind(output_index)
+        .fetch_optional(&self.pool)
        .await?;
 
-        for (tx_hash, output_index, lovelace_amount, slot, block_number) in utxos {
-            sqlx::query(
-                r#"
-                INSERT INTO treasury.utxos (
-                    tx_hash, output_index, address, address_type,
-                    vendor_contract_id, lovelace_amount, slot, block_number, spent
-                )
-                VALUES ($1, $2, $3, $4, $5, $6, $7, $8, false)
-                ON CONFLICT (tx_hash, output_index) DO NOTHING
-                "#
+        if let Some((addr, amt, datum)) = result {
+            return Ok((Some(addr), amt, datum));
+        }
+
+        let result: Option<(Option<String>, Option<i64>, Option<String>)> = sqlx::query_as(
+            "SELECT address, lovelace_amount, inline_datum_cbor FROM treasury.utxo_history WHERE tx_hash = $1 AND output_index = $2 LIMIT 1"
+        )
+        .bind(tx_hash)
+        .bind(output_index)
+        .fetch_optional(&self.pool)
+        .await?;
 
+        Ok(result.unwrap_or((None, None, None)))
+    }
+
+    /// Update per-milestone pause flags from the output datum of a transaction
+    async fn update_milestone_pause_from_datum(&self, tx_hash: &str, project_db_id: i32) -> anyhow::Result<()> {
+        // Query inline datum from the tx output at the vendor contract address
+        let inline_datum: Option<String> = sqlx::query_scalar(
+            "SELECT inline_datum FROM yaci_store.address_utxo WHERE tx_hash = $1 AND owner_addr LIKE 'addr1x%' AND inline_datum IS NOT NULL LIMIT 1"
+        )
+        .bind(tx_hash)
+        .fetch_optional(&self.pool)
+        .await?;
+
+        // Fallback to pre-fetched treasury.utxo_history
+        let inline_datum = match inline_datum {
+            Some(d) => Some(d),
+            None => sqlx::query_scalar::<_, Option<String>>(
+                "SELECT inline_datum_cbor FROM treasury.utxo_history WHERE tx_hash = $1 AND address LIKE 'addr1x%' AND inline_datum_cbor IS NOT NULL LIMIT 1"
            )
-            .bind(&tx_hash)
-            .bind(output_index)
-            .bind(address)
-            .bind(address_type)
-            .bind(vendor_contract_id)
-            .bind(lovelace_amount)
-            .bind(slot)
-            .bind(block_number)
-            .execute(&self.pool)
+            .bind(tx_hash)
+            .fetch_optional(&self.pool)
+            .await?
+            .flatten(),
+        };
+
+        if let Some(datum_hex) = inline_datum {
+            let parsed = crate::parsers::datum::parse_project_datum(&datum_hex);
+
+            if let Some(ref e) = parsed.top_level_error {
+                tracing::warn!(
+                    "Pause/resume datum parse error for tx {}: {}",
+                    tx_hash, e
+                );
+                return Ok(());
+            }
+
+            // Get non-withdrawn milestones ordered by milestone_order.
+            // Withdrawn milestones are consumed on-chain, so the datum only
+            // contains entries for non-withdrawn milestones. We must skip
+            // withdrawn rows to keep datum indices aligned with DB rows.
+            let milestone_ids: Vec<(i32,)> = sqlx::query_as(
+                "SELECT id FROM treasury.milestones WHERE project_db_id = $1 AND NOT archived AND NOT withdrawn ORDER BY milestone_order"
+            )
+            .bind(project_db_id)
+            .fetch_all(&self.pool)
            .await?;
+
+            for (datum_idx, (db_id,)) in milestone_ids.iter().enumerate() {
+                if let Some(Ok(ms_datum)) = parsed.milestones.get(datum_idx) {
+                    sqlx::query(
+                        "UPDATE treasury.milestones SET paused = $1 WHERE id = $2"
+                    )
+                    .bind(ms_datum.paused)
+                    .bind(db_id)
+                    .execute(&self.pool)
+                    .await?;
+                }
+            }
+
+            // Update contract-level status: paused if ALL milestones paused.
+            // Only consider successfully-parsed milestones; if any failed to
+            // parse we can't trust the all/any predicate and skip the update.
+            let parsed_ms: Vec<&crate::parsers::datum::ParsedMilestoneDatum> =
+                parsed.milestones.iter().filter_map(|m| m.as_ref().ok()).collect();
+            if !parsed_ms.is_empty() && parsed_ms.len() == parsed.milestones.len() {
+                let all_paused = parsed_ms.iter().all(|m| m.paused);
+                let any_paused = parsed_ms.iter().any(|m| m.paused);
+                if all_paused {
+                    sqlx::query("UPDATE treasury.projects SET status = 'paused' WHERE id = $1")
+                        .bind(project_db_id)
+                        .execute(&self.pool)
+                        .await?;
+                } else if !any_paused {
+                    sqlx::query("UPDATE treasury.projects SET status = 'active' WHERE id = $1")
+                        .bind(project_db_id)
+                        .execute(&self.pool)
+                        .await?;
+                }
+            }
         }
 
         Ok(())
     }
+
+    /// Resolve the first milestone referenced in an event's `body.milestones`
+    /// keys (or `body.milestone` string) to a `treasury.milestones.id` for the
+    /// given project. Used by pause/resume to populate `events.milestone_id`
+    /// so consumers don't have to re-parse the metadata to know which
+    /// milestone the event affects.
+    async fn resolve_first_milestone_from_body(
+        &self,
+        event_body: &Value,
+        project_db_id: i32,
+    ) -> anyhow::Result<Option<i32>> {
+        let candidates: Vec<String> = if let Some(obj) = event_body.get("milestones").and_then(|m| m.as_object()) {
+            obj.keys().cloned().collect()
+        } else if let Some(s) = event_body.get("milestone").and_then(|m| m.as_str()) {
+            vec![s.to_string()]
+        } else {
+            return Ok(None);
+        };
+
+        for key in candidates {
+            let order_hint = canonical_milestone_order(&key);
+            let id: Option<i32> = sqlx::query_scalar(
+                r#"
+                SELECT id FROM treasury.milestones
+                WHERE project_db_id = $1
+                  AND NOT archived
+                  AND (milestone_id = $2 OR milestone_order = $3)
+                LIMIT 1
+                "#,
+            )
+            .bind(project_db_id)
+            .bind(&key)
+            .bind(order_hint)
+            .fetch_optional(&self.pool)
+            .await?;
+            if id.is_some() {
+                return Ok(id);
+            }
+        }
+        Ok(None)
+    }
+}
+
+/// Extract milestone label and description from metadata fields.
+///
+/// On-chain TOM metadata typically has no `label` field on milestones and an empty
+/// `description`. Instead, `acceptanceCriteria` contains structured text like:
+///     "Milestone 2 - Documentation\nDeliverables: detailed description"
+/// or with a project prefix:
+///     "Ledger App Rewrite:\nMilestone 2 – Impl\nDeliverables: ..."
+///
+/// This function extracts a clean label and description:
+/// - label: the milestone title (text before "\nDeliverables:" or the first line)
+/// - description: the deliverables text (after "Deliverables:"), or the original description
+fn extract_milestone_label_description(
+    raw_label: Option<String>,
+    raw_description: Option<String>,
+    acceptance_criteria: &Option<String>,
+) -> (Option<String>, Option<String>) {
+    // If label is explicitly provided, use it (truncated to first line)
+    if let Some(ref label) = raw_label {
+        let label = label.lines().next().unwrap_or(label).trim().to_string();
+        if !label.is_empty() {
+            return (Some(label), raw_description);
+        }
+    }
+
+    // No label — try to derive from acceptance_criteria, then fall back to
+    // the first line of the description (covers `UTXO-*` projects whose
+    // metadata uses `description` instead of `acceptanceCriteria`).
+    // See KI-MIL-01 in docs/known-issues.md.
+    let ac = match acceptance_criteria {
+        Some(ac) if !ac.is_empty() => ac,
+        _ => {
+            if let Some(ref desc) = raw_description {
+                let label = desc.lines().next().unwrap_or(desc).trim().to_string();
+                if !label.is_empty() {
+                    return (Some(label), raw_description);
+                }
+            }
+            return (None, raw_description);
+        }
+    };
+
+    // Look for "Deliverables:" separator (case-insensitive find)
+    let deliverables_pos = ac.to_lowercase().find("\ndeliverables:");
+    if let Some(pos) = deliverables_pos {
+        let label = ac[..pos].trim().to_string();
+        let desc_start = pos + 1; // skip the \n
+        let deliverables = ac[desc_start..].trim().to_string();
+        let description = if !deliverables.is_empty() {
+            Some(deliverables)
+        } else {
+            raw_description
+        };
+        return (
+            if label.is_empty() { None } else { Some(label) },
+            description,
+        );
+    }
+
+    // No "Deliverables:" marker — use first line as label
+    let label = ac.lines().next().unwrap_or(ac).trim().to_string();
+    (
+        if label.is_empty() { None } else { Some(label) },
+        raw_description,
+    )
+}
+
+/// Extract milestone identifier hints from a complete/withdraw event body.
+/// Used to disambiguate which vendor contract a tx belongs to when its inputs span
+/// multiple project chains (e.g. fee/collateral pulled from a sibling contract).
+/// Convert a metadata milestone key (`m-N` 0-indexed or `MS-N` 1-indexed) to
+/// its canonical 1-indexed `milestone_order`. Returns `None` for unrecognised
+/// formats. See `docs/known-issues.md` `KI-OC-01` for context.
+fn canonical_milestone_order(key: &str) -> Option<i32> {
+    if let Some(rest) = key.strip_prefix("m-") {
+        rest.parse::<i32>().ok().map(|n| n + 1)
+    } else if let Some(rest) = key.strip_prefix("MS-") {
+        rest.parse::<i32>().ok()
+    } else {
+        None
+    }
+}
+
+fn collect_milestone_id_hints(event_body: &Value) -> Vec<String> {
+    let mut hints: Vec<String> = Vec::new();
+    if let Some(obj) = event_body.get("milestones").and_then(|m| m.as_object()) {
+        hints.extend(obj.keys().cloned());
+    }
+    if let Some(s) = event_body.get("milestone").and_then(|m| m.as_str()) {
+        hints.push(s.to_string());
+    }
+    hints
 }
 
 /// Extract text from a field that might be a string or array
@@ -954,7 +1654,9 @@ fn extract_text(obj: &Value, field: &str) -> Option<String> {
     extract_text_from_value(obj.get(field))
 }
 
-/// Extract text from a value that might be a string or array
+/// Extract text from a value that might be a string or array of 64-byte CIP-100 chunks.
+/// Joining with "" (no separator) is correct for CIP-100: text is split at fixed byte
+/// boundaries, so chunks are contiguous fragments that reconstruct the original text.
 fn extract_text_from_value(value: Option<&Value>) -> Option<String> {
     match value {
         Some(Value::String(s)) => Some(s.clone()),
@@ -965,6 +1667,11 @@ fn extract_text_from_value(value: Option<&Value>) -> Option<String> {
             .join("");
             if joined.is_empty() { None } else { Some(joined) }
         }
+        Some(Value::Object(obj)) => {
+            obj.get("label")
+                .or_else(|| obj.get("name"))
+                .and_then(|v| extract_text_from_value(Some(v)))
+        }
         _ => None,
     }
 }
diff --git a/api/src/services/sync.rs b/api/src/services/sync.rs
index 95a6d00..13555a5 100644
--- a/api/src/services/sync.rs
+++ b/api/src/services/sync.rs
@@ -37,21 +37,43 @@ pub async fn run_sync_loop(pool: PgPool) {
         tokio::time::sleep(Duration::from_secs(5)).await;
     }
 
+    // Install / refresh the trigger that captures every script-address UTXO
+    // into treasury.utxo_history before YACI Store can prune it. Idempotent.
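The `m-N` vs `MS-N` indexing convention is the subtle part of `canonical_milestone_order` above. A self-contained sketch of the same mapping (the `i32` width is an assumption matching the `milestone_order INT` column, not stated in the diff):

```rust
/// Mirror of `canonical_milestone_order`: `m-N` metadata keys are 0-indexed,
/// `MS-N` keys are already 1-indexed, and anything else is unrecognised.
fn canonical_milestone_order(key: &str) -> Option<i32> {
    if let Some(rest) = key.strip_prefix("m-") {
        // "m-0" names the first milestone, so shift to 1-indexed order.
        rest.parse::<i32>().ok().map(|n| n + 1)
    } else if let Some(rest) = key.strip_prefix("MS-") {
        // "MS-1" already names the first milestone.
        rest.parse::<i32>().ok()
    } else {
        None
    }
}

fn main() {
    // Both spellings of "the second milestone" normalise to order 2.
    assert_eq!(canonical_milestone_order("m-1"), Some(2));
    assert_eq!(canonical_milestone_order("MS-2"), Some(2));
    // Non-numeric or unknown prefixes yield None rather than a bogus order.
    assert_eq!(canonical_milestone_order("m-x"), None);
    assert_eq!(canonical_milestone_order("milestone-1"), None);
}
```

Note that both arms funnel into the same 1-indexed space, which is what lets the SQL `milestone_order = $3` comparison treat the two key styles uniformly.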
+    if let Err(e) = install_utxo_history_triggers(&pool).await {
+        tracing::warn!("Failed to install utxo_history triggers (non-fatal): {}", e);
+    } else {
+        tracing::info!("utxo_history triggers installed");
+    }
+
     // Initial sync: process all events from beginning
     tracing::info!("Starting initial TOM event sync...");
     if let Err(e) = processor.sync_all_events().await {
         tracing::error!("Initial sync failed: {}", e);
     }
 
-    // Sync UTXOs for tracked addresses
-    tracing::info!("Syncing UTXOs for tracked addresses...");
-    if let Err(e) = processor.sync_utxos().await {
-        tracing::error!("UTXO sync failed: {}", e);
-    }
-
     tracing::info!("Initial sync complete. Starting continuous sync loop.");
 
-    // Continuous sync loop
+    // Periodic full re-sync (every 10 minutes) as a belt-and-braces safety net
+    // for KI-VND-01 / KI-MIL-01 / KI-EVT-01-residual / KI-SY-02. The cold-resync
+    // race that leaves UTXO-* fund datums NULL on first pass is invisible to the
+    // 15s incremental loop because it never re-visits already-processed slots;
+    // sync_all_events re-runs every fund event and the idempotent
+    // ON CONFLICT DO UPDATE chain backfills missing fields.
+    let pool_for_full_sync = pool.clone();
+    tokio::spawn(async move {
+        let processor = EventProcessor::new(pool_for_full_sync);
+        // First periodic run after a short delay so we don't double up with
+        // the initial sync above.
+        tokio::time::sleep(Duration::from_secs(60)).await;
+        loop {
+            if let Err(e) = processor.sync_all_events().await {
+                tracing::error!("Periodic full-sync failed (non-fatal): {}", e);
+            }
+            tokio::time::sleep(Duration::from_secs(600)).await;
+        }
+    });
+
+    // Continuous sync loop (15s incremental)
     loop {
         tokio::time::sleep(Duration::from_secs(15)).await;
@@ -83,7 +105,7 @@ async fn sync_new_events(pool: &PgPool, processor: &EventProcessor) -> anyhow::R
         FROM yaci_store.transaction_metadata m
         JOIN yaci_store.block b ON b.slot = m.slot
         WHERE m.label = '1694' AND m.slot > $1
-        ORDER BY m.slot ASC
+        ORDER BY m.slot ASC, m.tx_hash ASC
         LIMIT 1000
         "#
     )
@@ -92,24 +114,55 @@
     .await?;
 
     if rows.is_empty() {
+        // Bump updated_at on idle ticks so /api/v1/statistics shows a live heartbeat.
+        // Closes KI-SY-01 (`updated_at` doesn't bump on idle ticks).
+        sqlx::query(
+            "UPDATE treasury.sync_status SET updated_at = NOW() WHERE sync_type = 'events'"
+        )
+        .execute(pool)
+        .await?;
         return Ok(());
     }
 
     tracing::info!("Processing {} new TOM events", rows.len());
 
+    // Pre-fetch UTXOs from yaci_store into treasury.utxo_history before processing.
+    // This captures UTXO data before YACI Store can prune spent UTXOs (~2160 blocks).
+    let tx_hashes: Vec<String> = rows.iter().map(|r| r.tx_hash.clone()).collect();
+    if let Err(e) = processor.pre_fetch_utxos(&tx_hashes).await {
+        tracing::warn!("UTXO pre-fetch failed (non-fatal): {}", e);
+    }
+
+    // Track the watermark as the slot of the LAST CONTIGUOUSLY-SUCCESSFUL row.
+    // If a row fails, every later row in this batch — even successful ones —
+    // does NOT advance the watermark, so retry on the next tick will revisit
+    // them. Closes KI-SY-02: previously a later success bumped the watermark
+    // past a failed earlier row, silently losing it.
+    //
+    // Cost: a permanently-failing event wedges the loop at its slot until an
+    // operator fixes it. That's intentional: a visible stall beats silent loss.
     let mut last_processed_slot = last_slot;
     let mut last_processed_tx = String::new();
     let mut last_block = 0i64;
+    let mut hole_seen = false;
 
     for row in rows {
-        if let Err(e) = processor.process_event(&row).await {
-            tracing::error!("Failed to process event {}: {}", row.tx_hash, e);
-            continue;
+        match processor.process_event(&row).await {
+            Err(e) => {
+                tracing::error!(
+                    "Failed to process event {} at slot {:?}: {:#}",
+                    row.tx_hash, row.slot, e
+                );
+                hole_seen = true;
+            }
+            Ok(()) => {
+                if !hole_seen {
+                    last_processed_slot = row.slot.unwrap_or(last_processed_slot);
+                    last_block = row.block_number.unwrap_or(last_block);
+                    last_processed_tx = row.tx_hash.clone();
+                }
+            }
         }
-
-        last_processed_slot = row.slot.unwrap_or(last_processed_slot);
-        last_block = row.block_number.unwrap_or(last_block);
-        last_processed_tx = row.tx_hash.clone();
     }
 
     // Update sync status
@@ -126,8 +179,126 @@
     .execute(pool)
     .await?;
 
-    // Also sync any new UTXOs
-    processor.sync_utxos().await?;
+    Ok(())
+}
+
+/// Install the Postgres triggers that mirror `yaci_store.address_utxo` and
+/// `yaci_store.tx_input` into `treasury.utxo_history`. Idempotent.
+///
+/// Why this exists: YACI Store prunes spent UTXOs from `address_utxo` after
+/// ~2160 blocks (~10 days). The trigger fires synchronously inside YACI's
+/// INSERT transaction, so by the time pruning runs we already have a copy.
+/// This solves the cold-replay limitation documented as `KI-CR-01` and
+/// closes `KI-EVT-01` / `KI-VND-04` / `KI-UTX-01`.
+async fn install_utxo_history_triggers(pool: &PgPool) -> anyhow::Result<()> {
+    // Capture every script-address (addr1x*) UTXO created by YACI Store.
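The contiguous-success watermark rule described in the comments above reduces to a small pure function. A sketch over a hypothetical `(slot, succeeded)` batch (the function name and inputs here are invented for illustration, not taken from the codebase):

```rust
/// Advance the watermark only across an unbroken prefix of successes.
/// `batch` holds (slot, processing succeeded) pairs in slot order.
fn advance_watermark(start: i64, batch: &[(i64, bool)]) -> i64 {
    let mut watermark = start;
    let mut hole_seen = false;
    for &(slot, ok) in batch {
        if !ok {
            // Later successes must NOT advance past this hole.
            hole_seen = true;
        } else if !hole_seen {
            // Still contiguous from the start of the batch.
            watermark = slot;
        }
    }
    watermark
}

fn main() {
    // All rows succeed: the watermark reaches the last slot.
    assert_eq!(advance_watermark(10, &[(11, true), (12, true)]), 12);
    // A failure at slot 12 freezes the watermark at 11 even though 13
    // succeeded, so the next poll re-reads from 11 and retries 12 and 13.
    assert_eq!(
        advance_watermark(10, &[(11, true), (12, false), (13, true)]),
        11
    );
}
```

The second assertion is exactly the KI-SY-02 failure mode the comment describes: before the fix, the success at slot 13 would have bumped the watermark past the failed slot 12.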
+    sqlx::query(
+        r#"
+        CREATE OR REPLACE FUNCTION treasury.capture_utxo_history()
+        RETURNS TRIGGER AS $$
+        BEGIN
+            IF NEW.owner_addr LIKE 'addr1x%' THEN
+                INSERT INTO treasury.utxo_history (
+                    tx_hash, output_index, address, lovelace_amount,
+                    inline_datum_cbor, slot, block_number
+                ) VALUES (
+                    NEW.tx_hash, NEW.output_index, NEW.owner_addr,
+                    NEW.lovelace_amount, NEW.inline_datum, NEW.slot, NEW.block
+                )
+                ON CONFLICT (tx_hash, output_index) DO UPDATE
+                SET address = COALESCE(EXCLUDED.address, treasury.utxo_history.address),
+                    lovelace_amount = COALESCE(EXCLUDED.lovelace_amount, treasury.utxo_history.lovelace_amount),
+                    inline_datum_cbor = COALESCE(EXCLUDED.inline_datum_cbor, treasury.utxo_history.inline_datum_cbor),
+                    slot = COALESCE(EXCLUDED.slot, treasury.utxo_history.slot),
+                    block_number = COALESCE(EXCLUDED.block_number, treasury.utxo_history.block_number);
+            END IF;
+            RETURN NEW;
+        END;
+        $$ LANGUAGE plpgsql;
+        "#,
+    )
+    .execute(pool)
+    .await?;
+
+    // CREATE TRIGGER takes a SHARE ROW EXCLUSIVE lock; if YACI Store is mid-batch
+    // it will block. Skip the create when the trigger already exists so subsequent
+    // restarts are non-blocking.
+    sqlx::query(
+        r#"
+        DO $$
+        BEGIN
+            IF NOT EXISTS (
+                SELECT 1 FROM information_schema.triggers
+                WHERE event_object_schema = 'yaci_store'
+                  AND event_object_table = 'address_utxo'
+                  AND trigger_name = 'capture_address_utxo'
+            ) THEN
+                CREATE TRIGGER capture_address_utxo
+                AFTER INSERT OR UPDATE ON yaci_store.address_utxo
+                FOR EACH ROW EXECUTE FUNCTION treasury.capture_utxo_history();
+            END IF;
+        END $$;
+        "#,
+    )
+    .execute(pool)
+    .await?;
+
+    // Mark UTXOs as spent when YACI Store records the spending tx_input row.
+    sqlx::query(
+        r#"
+        CREATE OR REPLACE FUNCTION treasury.mark_utxo_spent()
+        RETURNS TRIGGER AS $$
+        BEGIN
+            UPDATE treasury.utxo_history
+            SET spent = TRUE,
+                spent_tx_hash = NEW.spent_tx_hash,
+                spent_slot = NEW.spent_at_slot
+            WHERE tx_hash = NEW.tx_hash AND output_index = NEW.output_index;
+            RETURN NEW;
+        END;
+        $$ LANGUAGE plpgsql;
+        "#,
+    )
+    .execute(pool)
+    .await?;
+
+    sqlx::query(
+        r#"
+        DO $$
+        BEGIN
+            IF NOT EXISTS (
+                SELECT 1 FROM information_schema.triggers
+                WHERE event_object_schema = 'yaci_store'
+                  AND event_object_table = 'tx_input'
+                  AND trigger_name = 'mark_utxo_spent'
+            ) THEN
+                CREATE TRIGGER mark_utxo_spent
+                AFTER INSERT ON yaci_store.tx_input
+                FOR EACH ROW EXECUTE FUNCTION treasury.mark_utxo_spent();
+            END IF;
+        END $$;
+        "#,
+    )
+    .execute(pool)
+    .await?;
+
+    // One-shot backfill: copy any address_utxo rows already present (covers
+    // the case where YACI Store ingested rows before the trigger was armed).
+    sqlx::query(
+        r#"
+        INSERT INTO treasury.utxo_history (
+            tx_hash, output_index, address, lovelace_amount,
+            inline_datum_cbor, slot, block_number
+        )
+        SELECT au.tx_hash, au.output_index, au.owner_addr,
+               au.lovelace_amount, au.inline_datum, au.slot, au.block
+        FROM yaci_store.address_utxo au
+        WHERE au.owner_addr LIKE 'addr1x%'
+        ON CONFLICT (tx_hash, output_index) DO NOTHING
+        "#,
+    )
+    .execute(pool)
+    .await?;
 
     Ok(())
 }
diff --git a/api/tests/fixtures/ec_0013_25.hex b/api/tests/fixtures/ec_0013_25.hex
new file mode 100644
index 0000000..098b125
--- /dev/null
+++ b/api/tests/fixtures/ec_0013_25.hex
@@ -0,0 +1 @@
+d8799fd8799f581c6c170db91076434de83f93868e61f4020ef7840123636ba0a4a512abff9fd8799f1b0000019896365580a140a1401b000000746a528800d87980ffd8799f1b0000019ca18b8000a140a1401b00000045d964b800d87980ffd8799f1b0000019ddb787d80a140a1401b00000045d964b800d87980ffd8799f1b0000019f159c6980a140a1401b000000174876e800d87980ffd8799f1b000001a04fc05580a140a1401b000000174876e800d87980ffd8799f1b000001a18a1b3000a140a1401b0000001176592e00d87980ffd8799f1b000001a18a1b3000a140a1401b0000002e90edd000d87980ffd8799f1b000001a2c43f1c00a140a1401b000000266ac43200d87980ffd8799f1b000001a363e44000a140a1401b0000000ba43b7400d87980ffd8799f1b000001a363e44000a140a1401b00000002540be400d87980ffd8799f1b0000019896365580a140a1401b000000174876e800d87980ffd8799f1b0000019bbef3b000a140a1401b0000002e90edd000d87980ffd8799f1b0000019ca18b8000a140a1401b0000002e90edd000d87980ffd8799f1b0000019ceecae400a140a1401b000000746a528800d87980ffd8799f1b0000019e28b7e180a140a1401b000000746a528800d87980ffd8799f1b0000019f62dbcd80a140a1401b0000006b1a22f800d87980ffd8799f1b000001a0a2261580a140a1401b00000004a817c800d87980ffd8799f1b000001a1dc80f000a140a1401b00000004a817c800d87980ffd8799f1b0000019896365580a140a1401b00000004a817c800d87980ffd8799f1b0000019a84cfc400a140a1401b00000004a817c800d87980ffd8799f1b0000019ca18b8000a140a1401b000000174876e800d87980ffd8799f1b0000019fb01b3180a140a1401b00000005d21dba00d87980ffd8799f1b000001a363e44000a140a1401b00000002540be400d87980ffffff diff --git a/api/tests/fixtures/utxo_ec_0002_25_01.hex b/api/tests/fixtures/utxo_ec_0002_25_01.hex new file mode 100644 index 0000000..9818f60 --- /dev/null +++ b/api/tests/fixtures/utxo_ec_0002_25_01.hex @@ -0,0 +1 @@ 
+d8799fd87a9f9fd8799f581c33afc56ecef7fc370f59d5574416826d6bd3d1f88e0f449ed5ae79f5ffd87f9f581ccfc13c3728f4dd600a33f4e977aa0168eaf2ffb3474826fdd90af37effffff9fd8799f1b0000019a37905c18a140a1401b000000d86a846f00d87a80ffd8799f1b0000019de0d5c418a140a1401b000000d86a846f00d87980ffd8799f1b0000019fb5787818a140a1401b000000d86a846f00d87980ffd8799f1b000001a18f418818a140a1401b000000d86a846f00d87980ffd8799f1b0000019c11596c18a140a1401b0000016f405f8740d87980ffd8799f1b0000019fb5787818a140a1401b0000016f405f8740d87980ffd8799f1b0000019fb5787818a140a1401b0000016f406ec980d87980ffd8799f1b0000019a37905c18a140a1401b00000078a0bfcd40d87a80ffd8799f1b0000019ca6b1d818a140a1401b00000078a0bfcd40d87980ffd8799f1b0000019fb5787818a140a1401b00000078a0cf0f80d87980ffd8799f1b0000019a37905c18a140a1401b00000226e096ec00d87a80ffd8799f1b0000019ad7358018a140a1401b00000226e096ec00d87a80ffd8799f1b0000019b71b44818a140a1401b00000226e096ec00d87a80ffd8799f1b0000019c11596c18a140a1401b00000226e096ec00d87980ffd8799f1b0000019c11596c18a140a1401b00000226e096ec00d87980ffd8799f1b0000019fb5787818a140a1401b00000226e096ec00d87980ffffff diff --git a/api/tests/fixtures/utxo_ec_0002_25_03.hex b/api/tests/fixtures/utxo_ec_0002_25_03.hex new file mode 100644 index 0000000..8f9a2cb --- /dev/null +++ b/api/tests/fixtures/utxo_ec_0002_25_03.hex @@ -0,0 +1 @@ 
+d8799fd87a9f9fd8799f581c33afc56ecef7fc370f59d5574416826d6bd3d1f88e0f449ed5ae79f5ffd87f9f581ccfc13c3728f4dd600a33f4e977aa0168eaf2ffb3474826fdd90af37effffff9fd8799f1b0000019a37905c18a140a1401b000000b0f387b000d87a80ffd8799f1b0000019a37905c18a140a1401b000000b0f387b000d87a80ffd8799f1b0000019c11596c18a140a1401b000000b0f387b000d87980ffd8799f1b0000019fb5787818a140a1401b000000b0f387b000d87980ffd8799f1b0000019fb5787818a140a1401b000000b0f387b000d87980ffd8799f1b0000019c11596c18a140a1401b0000006c35423780d87980ffd8799f1b0000019a37905c18a140a1401b0000006c35423780d87a80ffd8799f1b0000019c11596c18a140a1401b0000006c35423780d87980ffd8799f1b0000019c11596c18a140a1401b0000006c35423780d87980ffd8799f1b0000019c11596c18a140a1401b0000006c35423780d87980ffd8799f1b0000019de0d5c418a140a1401b0000006c35423780d87980ffd8799f1b0000019de0d5c418a140a1401b0000006c35423780d87980ffd8799f1b0000019fb5787818a140a1401b0000006c35423780d87980ffd8799f1b0000019ad7358018a140a1401b0000002cf8384bc0d87a80ffd8799f1b0000019ad7358018a140a1401b0000002cf8384bc0d87a80ffd8799f1b0000019b71b44818a140a1401b0000002cf8384bc0d87a80ffd8799f1b0000019b71b44818a140a1401b0000002cf8384bc0d87a80ffd8799f1b0000019ad7358018a140a1401b0000002cf8384bc0d87a80ffd8799f1b0000019c11596c18a140a1401b0000002cf8384bc0d87980ffd8799f1b0000019ca6b1d818a140a1401b0000002cf8384bc0d87980ffffff diff --git a/api/tests/fixtures/utxo_emi_0001_25.hex b/api/tests/fixtures/utxo_emi_0001_25.hex new file mode 100644 index 0000000..7f01400 --- /dev/null +++ b/api/tests/fixtures/utxo_emi_0001_25.hex @@ -0,0 +1 @@ +d8799fd87a9f9fd8799f581cef8374e8b6b05099c6f9b595bf5e849a10daa2080e68c4a4745c28e1ffd87f9f581c038fbe947fc62c66869340f58069cc9e93a4cb520eb17d76aaada264ffffff9fd8799f1b0000019881d3d400a140a1401b000000f03c128e00d87980ffd8799f1b0000019c0c331400a140a1401b000000f03c128e00d87980ffd8799f1b0000019c0c331400a140a1401b000000f03c128e00d87980ffd8799f1b000001a363e44000a140a1401b000000f03c128e00d87980ffd8799f1b000001a363e44000a140a1401b000000f03c128e00d87980ffffff diff --git 
a/api/tests/fixtures/utxo_er_0001_25.hex b/api/tests/fixtures/utxo_er_0001_25.hex
new file mode 100644
index 0000000..dbcec29
--- /dev/null
+++ b/api/tests/fixtures/utxo_er_0001_25.hex
@@ -0,0 +1 @@
+d8799fd87a9f9fd8799f581cb6ab16a52a4e4ae64a2492b1cc928683e65d7e09bc9c9a73be81c3efffd87f9f581ca5b3687601f904254c31b9c0b29b7dea163ab31ad62f492b7b264bfeffffff9fd8799f1b000001999d119418a140a1401b0000055c89618600d87a80ffd8799f1b000001999d119418a140a1401b000006d80cf3b200d87a80ffd8799f1b0000019c11596c18a140a1401b0000055c89618600d87980ffd8799f1b0000019c11596c18a140a1401b000006d80cf3b200d87980ffffff
diff --git a/database/README.md b/database/README.md
index 542c9e9..4b99b0a 100644
--- a/database/README.md
+++ b/database/README.md
@@ -11,12 +11,12 @@ The system uses two schemas:
 
 ## Treasury Schema Tables
 
 ### treasury.treasury_contracts
-Stores treasury reserve contract instances (TRSC).
+Stores treasury reserve contract instances (TRSC). Singleton in our deployment.
 
 | Column | Type | Description |
 |--------|------|-------------|
 | id | SERIAL | Primary key |
-| contract_instance | TEXT | On-chain instance identifier (policy ID) |
+| contract_instance | TEXT | On-chain instance identifier (policy ID, unique) |
 | contract_address | TEXT | Script address |
 | stake_credential | TEXT | Shared stake credential |
 | name | TEXT | Human-readable name |
@@ -28,47 +28,73 @@ Stores treasury reserve contract instances (TRSC).
 | status | TEXT | active/paused |
 
 ### treasury.vendor_contracts
-Stores vendor/project contract instances (PSSC).
+Singleton row for the shared on-chain vendor contract (PSSC) script address — the *one* address every project's UTXOs sit at, distinguished only by inline datum.
 | Column | Type | Description |
 |--------|------|-------------|
 | id | SERIAL | Primary key |
 | treasury_id | INT | FK to treasury_contracts |
-| project_id | TEXT | Logical identifier (e.g., "EC-0008-25") |
-| other_identifiers | TEXT[] | Related IDs |
-| project_name | TEXT | Project label |
+| address | TEXT | Shared PSSC script address (unique) |
+| stake_credential | TEXT | Stake credential portion of the address |
+
+### treasury.projects
+One row per `fund` event (e.g. `EC-0008-25`). 42 rows in our deployment. Identified by `project_id`; funds and milestones live at the shared PSSC above, distinguished by inline datum.
+
+| Column | Type | Description |
+|--------|------|-------------|
+| id | SERIAL | Primary key |
+| treasury_id | INT | FK to treasury_contracts |
+| project_id | TEXT | Logical identifier (e.g., "EC-0008-25", unique) |
+| other_identifiers | TEXT[] | Related IDs from `otherIdentifiers` array |
+| project_name | TEXT | Label from fund event |
 | description | TEXT | Project description |
-| vendor_name | TEXT | Vendor name |
-| vendor_address | TEXT | Payment destination |
-| contract_url | TEXT | Link to agreement |
-| contract_address | TEXT | PSSC script address |
+| vendor_address | TEXT | Payment destination (`vendor.label` in metadata) |
+| contract_address | TEXT | PSSC script address (from fund tx output) |
+| vendor_payment_key_hash | TEXT | Comma-joined hex hashes (multi-party datums produce multiple) |
 | fund_tx_hash | VARCHAR(64) | Fund transaction |
-| fund_slot | BIGINT | Fund slot |
-| fund_block_time | BIGINT | Fund block time |
-| initial_amount_lovelace | BIGINT | Initial funding amount |
+| fund_slot | BIGINT | Blockchain slot |
+| fund_block_time | BIGINT | Block timestamp |
+| initial_amount_lovelace | BIGINT | Initial funding amount (from tx output) |
 | status | TEXT | active/paused/completed/cancelled |
+| datum_parse_error | TEXT | Set when fund datum parse failed; cleared on success |
 
 ### treasury.milestones
-Stores milestone data for each vendor contract.
+Stores milestone data for each project. Uses four independent boolean flags instead of a linear status; the archive model preserves prior versions via `superseded_by`.
+
+State flags (all default FALSE, all independent):
+- `evidence_provided` — vendor submitted a `complete` event
+- `withdrawn` — vendor pulled funds via a `withdraw` event
+- `paused` — derived from inline-datum parsing in `update_milestone_pause_from_datum` (`api/src/services/event_processor.rs`); not present in metadata
+- `archived` — milestone replaced by a `modify` event; the new row is linked via `superseded_by`, and queries for current state should include `WHERE NOT archived`
 
 | Column | Type | Description |
 |--------|------|-------------|
 | id | SERIAL | Primary key |
-| vendor_contract_id | INT | FK to vendor_contracts |
+| project_db_id | INT | FK to projects (CASCADE) |
 | milestone_id | TEXT | Logical identifier (e.g., "m-0") |
 | milestone_order | INT | Position (1, 2, 3...) |
 | label | TEXT | Milestone name |
 | description | TEXT | Detailed description |
 | acceptance_criteria | TEXT | Completion criteria |
-| amount_lovelace | BIGINT | Allocated amount |
-| status | TEXT | pending/completed/disbursed |
+| amount_lovelace | BIGINT | Lovelace amount from datum |
+| time_limit | BIGINT | POSIXTime in milliseconds from datum |
+| withdrawn | BOOLEAN | Vendor withdrew payment |
+| evidence_provided | BOOLEAN | Vendor submitted completion evidence |
+| paused | BOOLEAN | Oversight committee paused this milestone (datum-derived) |
+| archived | BOOLEAN | Milestone replaced by modify event |
+| withdraw_tx_hash | VARCHAR(64) | Withdrawal transaction |
+| withdraw_time | BIGINT | Withdrawal timestamp |
+| withdraw_amount | BIGINT | Withdrawn amount |
 | complete_tx_hash | VARCHAR(64) | Completion transaction |
 | complete_time | BIGINT | Completion timestamp |
 | complete_description | TEXT | Completion notes |
 | evidence | JSONB | Evidence array |
-| disburse_tx_hash | VARCHAR(64) | Disbursement transaction |
-| disburse_time | BIGINT | Disbursement timestamp |
-| disburse_amount | BIGINT | Disbursed amount |
+| archived_by_tx_hash | VARCHAR(64) | Modify tx that archived this milestone |
+| archived_at | BIGINT | Archive timestamp |
+| superseded_by | INT | FK to replacement milestone |
+| datum_parse_error | TEXT | Set when datum parse failed for this milestone |
+
+A partial unique index `idx_milestone_active_unique` on `(project_db_id, milestone_id) WHERE NOT archived` ensures only one active row per logical milestone.
 
 ### treasury.events
 Audit log of all TOM (Treasury Oversight Metadata) events.
@@ -82,15 +108,20 @@ Audit log of all TOM (Treasury Oversight Metadata) events.
 | block_time | BIGINT | Block timestamp |
 | event_type | TEXT | Event type |
 | treasury_id | INT | FK to treasury_contracts |
-| vendor_contract_id | INT | FK to vendor_contracts |
+| project_db_id | INT | FK to projects |
 | milestone_id | INT | FK to milestones |
 | amount_lovelace | BIGINT | Amount involved |
 | reason | TEXT | Justification (pause/cancel/modify) |
-| destination | TEXT | Destination label (disburse) |
+| destination | JSONB | Destination object `{label, details}` from disburse events |
 | metadata | JSONB | Original TOM metadata body |
+| created_at | TIMESTAMPTZ | Row insert timestamp |
 
-### treasury.utxos
-Tracks UTXOs at treasury-related addresses for event linking.
+### treasury.utxo_history
+Persistent UTXO history at treasury-related script addresses. Two responsibilities:
+1. **Chain trace seed** — outputs of `fund` txs are written here with `project_db_id` set, so `find_project_from_inputs` can later trace milestone-event inputs back to a project.
+2. **Datum cache** — `inline_datum_cbor` is stored on each UTXO so pause/resume datum parsing (`update_milestone_pause_from_datum`) keeps working after YACI Store has pruned the row out of `yaci_store.address_utxo`.
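The chain-trace responsibility described above can be illustrated without a database: seed a map with fund-tx outputs, then resolve a later tx by walking its inputs. A hedged sketch — the in-memory map and all data here are hypothetical stand-ins for the real `find_project_from_inputs` query against `treasury.utxo_history`:

```rust
use std::collections::HashMap;

// (tx_hash, output_index) — the out-ref a later tx spends.
type OutRef = (&'static str, u16);

/// Resolve which project a tx belongs to by looking its inputs up in the
/// seeded UTXO history; the first input that matches a known fund-chain
/// output wins, mirroring the chain-trace idea.
fn find_project_from_inputs(
    history: &HashMap<OutRef, &'static str>,
    inputs: &[OutRef],
) -> Option<&'static str> {
    inputs.iter().find_map(|r| history.get(r).copied())
}

fn main() {
    let mut history = HashMap::new();
    // Seed: a fund tx created output #0 at the shared PSSC for EC-0008-25.
    history.insert(("fundtx", 0), "EC-0008-25");

    // A later complete/withdraw tx spends that output plus an unrelated fee input.
    let inputs = [("feetx", 1), ("fundtx", 0)];
    assert_eq!(find_project_from_inputs(&history, &inputs), Some("EC-0008-25"));

    // A tx spending only unknown inputs resolves to no project.
    assert_eq!(find_project_from_inputs(&history, &[("other", 0)]), None);
}
```

Because every project's UTXOs share one script address, this input-chain lookup (not the address) is the only way to attribute an event to a project, which is why the seed rows must survive YACI Store's pruning.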
+
+Population: Postgres triggers installed by `install_utxo_history_triggers` (`api/src/services/sync.rs`) capture every script-address (`addr1x*`) UTXO from `yaci_store.address_utxo` synchronously on INSERT, and flag rows as spent on `yaci_store.tx_input` INSERT. `pre_fetch_utxos` is a defensive backstop run during event processing.
 
 | Column | Type | Description |
 |--------|------|-------------|
@@ -99,21 +130,28 @@ Tracks UTXOs at treasury-related addresses for event linking.
 | output_index | SMALLINT | Output index |
 | address | TEXT | Owner address |
 | address_type | TEXT | treasury/vendor_contract/vendor |
-| vendor_contract_id | INT | FK to vendor_contracts |
+| project_db_id | INT | FK to projects (chain-trace seed; NULL on non-script outputs) |
 | lovelace_amount | BIGINT | Amount |
+| inline_datum_cbor | TEXT | Hex-encoded inline datum (cached for post-prune datum parsing) |
 | slot | BIGINT | Creation slot |
 | block_number | BIGINT | Block number |
 | spent | BOOLEAN | Is spent? |
 | spent_tx_hash | VARCHAR(64) | Spending transaction |
 | spent_slot | BIGINT | When spent |
 
+`UNIQUE(tx_hash, output_index)`.
+
 ### treasury.sync_status
-Tracks synchronization progress.
+Tracks synchronization progress. Two rows by convention:
+- `sync_type='events'` — heartbeat for the TOM-event sync loop. `updated_at` bumps on every poll, including idle ticks.
+- `sync_type='utxos'` — checkpoint for the UTXO pre-fetch worker.
+
+`last_slot` advances only on contiguous success — if an event fails mid-batch the watermark stays put so the failed event is retried on the next poll. A separate task runs `sync_all_events` every 10 minutes as an idempotent backfill safety net (see [`KI-SY-02`](../docs/known-issues.md)).
 | Column | Type | Description |
 |--------|------|-------------|
 | id | SERIAL | Primary key |
-| sync_type | TEXT | events/utxos |
+| sync_type | TEXT | events/utxos (unique) |
 | last_slot | BIGINT | Last processed slot |
 | last_block | BIGINT | Last processed block |
 | last_tx_hash | VARCHAR(64) | Last processed tx |
@@ -128,16 +166,18 @@ Treasury contracts with aggregated statistics and financials.
 SELECT * FROM treasury.v_treasury_summary;
 ```
 
-Fields: treasury_id, contract_instance, contract_address, stake_credential, name, status, publish_tx_hash, publish_time, initialized_tx_hash, initialized_at, permissions, vendor_contract_count, active_contracts, completed_contracts, cancelled_contracts, treasury_balance, utxo_count, total_events, last_event_time, created_at, updated_at
+Fields: `treasury_id`, `contract_instance`, `contract_address`, `stake_credential`, `status`, `publish_tx_hash`, `publish_time`, `initialized_tx_hash`, `initialized_at`, `permissions`, `project_count`, `active_contracts`, `completed_contracts`, `cancelled_contracts`, `treasury_balance`, `utxo_count`, `total_events`, `last_event_time`, `created_at`, `updated_at`.
+
+`treasury_balance` and `utxo_count` are sourced from `treasury.utxo_history` (unspent UTXOs at the treasury script address).
 
-### treasury.v_vendor_contracts_summary
-Vendor contracts with milestone counts, financials, and UTXO balance.
+### treasury.v_projects_summary
+Projects with milestone counts, financials, and UTXO balance.
 ```sql
-SELECT * FROM treasury.v_vendor_contracts_summary;
+SELECT * FROM treasury.v_projects_summary;
 ```
 
-Fields: id, treasury_id, project_id, other_identifiers, project_name, description, vendor_name, vendor_address, contract_url, contract_address, fund_tx_hash, fund_slot, fund_block_time, initial_amount_lovelace, status, created_at, updated_at, treasury_instance, treasury_name, total_milestones, pending_milestones, completed_milestones, disbursed_milestones, total_disbursed_lovelace, current_balance_lovelace, utxo_count, last_event_time, event_count
+Fields: `id`, `treasury_id`, `project_id`, `other_identifiers`, `project_name`, `description`, `vendor_address`, `contract_address`, `fund_tx_hash`, `fund_slot`, `fund_block_time`, `initial_amount_lovelace`, `status`, `created_at`, `updated_at`, `treasury_instance`, `total_milestones`, `pending_milestones`, `completed_milestones`, `withdrawn_milestones`, `paused_milestones`, `total_withdrawn_lovelace`, `current_balance_lovelace`, `utxo_count`, `last_event_time`, `event_count`.
 
 ### treasury.v_events_with_context
 Events with full treasury/project/milestone context.
@@ -146,38 +186,32 @@ Events with full treasury/project/milestone context.
 SELECT * FROM treasury.v_events_with_context ORDER BY block_time DESC;
 ```
 
-Fields: id, tx_hash, slot, block_number, block_time, event_type, amount_lovelace, reason, destination, metadata, created_at, treasury_instance, treasury_name, project_id, project_name, vendor_name, project_address, milestone_id, milestone_label, milestone_order
+Fields: `id`, `tx_hash`, `slot`, `block_number`, `block_time`, `event_type`, `amount_lovelace`, `reason`, `destination`, `metadata`, `created_at`, `treasury_instance`, `project_id`, `project_name`, `project_address`, `milestone_id`, `milestone_label`, `milestone_order`.
+
+### treasury.v_recent_events
+Same projection as `v_events_with_context`, ordered by `slot DESC` for activity feeds.
 ### treasury.v_financial_summary
-Financial summary showing allocated vs disbursed vs remaining.
+Financial summary showing allocated vs withdrawn vs remaining.
 
 ```sql
 SELECT * FROM treasury.v_financial_summary;
 ```
 
-Fields: treasury_id, contract_instance, treasury_name, total_allocated_lovelace, total_disbursed_lovelace, total_remaining_lovelace, treasury_balance_lovelace, project_balance_lovelace, project_count, active_project_count
+Fields: `treasury_id`, `contract_instance`, `total_allocated_lovelace`, `total_withdrawn_lovelace`, `total_remaining_lovelace`, `treasury_balance_lovelace`, `project_balance_lovelace`, `project_count`, `active_project_count`.
 
 ### treasury.v_milestone_timeline
-Milestones with vendor contract context.
+Milestones with project context.
 
 ```sql
 SELECT * FROM treasury.v_milestone_timeline;
 ```
 
-Fields: id, milestone_id, milestone_order, label, description, acceptance_criteria, amount_lovelace, status, complete_tx_hash, complete_time, complete_description, evidence, disburse_tx_hash, disburse_time, disburse_amount, project_id, project_name, vendor_address
-
-### treasury.v_recent_events
-Events with context, ordered by slot descending (for recent activity).
-
-```sql
-SELECT * FROM treasury.v_recent_events LIMIT 10;
-```
+Fields: `id`, `milestone_id`, `milestone_order`, `label`, `description`, `acceptance_criteria`, `amount_lovelace`, `time_limit`, `withdrawn`, `evidence_provided`, `archived`, `complete_tx_hash`, `complete_time`, `complete_description`, `evidence`, `withdraw_tx_hash`, `withdraw_time`, `withdraw_amount`, `archived_by_tx_hash`, `archived_at`, `superseded_by`, `project_id`, `project_name`, `vendor_address`.
 
 ## Running Migrations
 
-### Using the API (automatic)
-
-The API automatically creates/updates the schema on startup via `db::init_treasury_schema()`.
+The treasury schema is created on first PostgreSQL container start by `database/init/02-treasury-schema.sql`.
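A quick way to confirm the init script ran (a sketch; any psql session against the application database works):

```sql
-- Expect the treasury tables (projects, milestones, events, utxo_history, ...)
SELECT table_name
FROM information_schema.tables
WHERE table_schema = 'treasury'
ORDER BY table_name;
```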
+The API also installs the `treasury.utxo_history` triggers at startup via `install_utxo_history_triggers` (`api/src/services/sync.rs`); these are armed before YACI Store ingests, so a fresh sync captures every script-address UTXO before pruning runs.
 
 ### Using psql directly
 
@@ -205,7 +239,7 @@ YACI Store creates its own tables in the `yaci_store` schema. Key tables include
 - `tx_input` - Transaction inputs
 - `cursor_` - Current sync position
 
-These tables are automatically created and maintained by YACI Store.
+These tables are automatically created and maintained by YACI Store via Flyway.
 
 ## Indexes
 
@@ -213,9 +247,10 @@ The schema includes indexes for:
 - Primary key lookups
 - Foreign key relationships
 - Status filtering
-- Time-based ordering (fund_block_time, block_time)
-- Text search (project_id, project_name, vendor_name)
+- Time-based ordering (`fund_block_time`, `block_time`)
+- Text search (`project_id`, `project_name`, `description`)
 - UTXO queries (unspent UTXOs, address lookups)
+- A partial unique index on milestones to enforce one active row per `(project_db_id, milestone_id)`
 
 ## Example Queries
 
@@ -224,13 +259,12 @@ The schema includes indexes for:
 SELECT
     project_id,
     project_name,
-    vendor_name,
     initial_amount_lovelace / 1000000.0 as allocated_ada,
-    total_disbursed_lovelace / 1000000.0 as disbursed_ada,
+    total_withdrawn_lovelace / 1000000.0 as withdrawn_ada,
     current_balance_lovelace / 1000000.0 as balance_ada,
     total_milestones,
-    disbursed_milestones
-FROM treasury.v_vendor_contracts_summary
+    withdrawn_milestones
+FROM treasury.v_projects_summary
 WHERE status = 'active'
 ORDER BY fund_block_time DESC;
 
@@ -250,7 +284,7 @@ LIMIT 20;
 SELECT
     contract_instance,
     total_allocated_lovelace / 1000000.0 as total_allocated_ada,
-    total_disbursed_lovelace / 1000000.0 as total_disbursed_ada,
+    total_withdrawn_lovelace / 1000000.0 as total_withdrawn_ada,
     total_remaining_lovelace / 1000000.0 as remaining_ada,
     project_count,
     active_project_count
diff --git
a/database/init/02-treasury-schema.sql b/database/init/02-treasury-schema.sql index 56b6e4e..a3149ac 100644 --- a/database/init/02-treasury-schema.sql +++ b/database/init/02-treasury-schema.sql @@ -25,23 +25,35 @@ CREATE TABLE IF NOT EXISTS treasury.treasury_contracts ( updated_at TIMESTAMPTZ DEFAULT NOW() ); --- Vendor Contracts (PSSC) - Project-specific contracts linked to treasury +-- Vendor Contract (PSSC) - the *one* on-chain script address that holds every +-- project's funds, distinguished only by inline datum. Singleton in our deployment. CREATE TABLE IF NOT EXISTS treasury.vendor_contracts ( + id SERIAL PRIMARY KEY, + treasury_id INT REFERENCES treasury.treasury_contracts(id), + address TEXT UNIQUE NOT NULL, -- Shared PSSC script address (addr1x...) + stake_credential TEXT, -- Stake credential portion of the address + created_at TIMESTAMPTZ DEFAULT NOW(), + updated_at TIMESTAMPTZ DEFAULT NOW() +); + +-- Projects - One row per `fund` event; identified by `project_id` (e.g. EC-0008-25). +-- Funds and milestones live at the shared PSSC above, distinguished by datum. 
+CREATE TABLE IF NOT EXISTS treasury.projects (
     id SERIAL PRIMARY KEY,
     treasury_id INT REFERENCES treasury.treasury_contracts(id),
     project_id TEXT UNIQUE NOT NULL,    -- Logical identifier (e.g., "EC-0008-25")
     other_identifiers TEXT[],           -- Related IDs from otherIdentifiers array
     project_name TEXT,                  -- Label from fund event
     description TEXT,                   -- Project description (joined if array)
-    vendor_name TEXT,                   -- vendor.name from metadata
     vendor_address TEXT,                -- Payment destination (vendor.label in metadata)
-    contract_url TEXT,                  -- contract - link to agreement document
     contract_address TEXT,              -- PSSC script address (from fund tx output)
+    vendor_payment_key_hash TEXT,       -- Comma-joined hex hashes; multi-party datums produce multiple hashes
     fund_tx_hash VARCHAR(64) NOT NULL,  -- Fund transaction
     fund_slot BIGINT,                   -- Blockchain slot
     fund_block_time BIGINT,             -- Block timestamp
     initial_amount_lovelace BIGINT,     -- Initial funding amount (from tx output)
     status TEXT DEFAULT 'active',       -- active/paused/completed/cancelled
+    datum_parse_error TEXT,             -- Set when fund datum parse failed; cleared on success
     created_at TIMESTAMPTZ DEFAULT NOW(),
     updated_at TIMESTAMPTZ DEFAULT NOW()
 );
@@ -49,24 +61,44 @@ CREATE TABLE IF NOT EXISTS treasury.vendor_contracts (
 -- Milestones - Each vendor contract has ordered milestones
 CREATE TABLE IF NOT EXISTS treasury.milestones (
     id SERIAL PRIMARY KEY,
-    vendor_contract_id INT NOT NULL REFERENCES treasury.vendor_contracts(id) ON DELETE CASCADE,
+    project_db_id INT NOT NULL REFERENCES treasury.projects(id) ON DELETE CASCADE,
     milestone_id TEXT NOT NULL,         -- Logical identifier (e.g., "m-0")
     milestone_order INT NOT NULL,       -- Position (1, 2, 3...)
label TEXT, -- Milestone name description TEXT, -- Detailed description acceptance_criteria TEXT, -- Completion criteria - amount_lovelace BIGINT, -- Allocated amount (if specified) - status TEXT DEFAULT 'pending', -- pending/completed/disbursed + + -- From inline UTXO datum + amount_lovelace BIGINT, -- Lovelace amount from datum Value map + time_limit BIGINT, -- POSIXTime in milliseconds + + -- Independent boolean lifecycle flags + withdrawn BOOLEAN NOT NULL DEFAULT FALSE, + evidence_provided BOOLEAN NOT NULL DEFAULT FALSE, + archived BOOLEAN NOT NULL DEFAULT FALSE, + paused BOOLEAN NOT NULL DEFAULT FALSE, + + -- Withdraw details (set when withdrawn = true) + withdraw_tx_hash VARCHAR(64), + withdraw_time BIGINT, + withdraw_amount BIGINT, + + -- Evidence/completion details (set when evidence_provided = true) complete_tx_hash VARCHAR(64), -- Completion transaction complete_time BIGINT, -- Completion timestamp complete_description TEXT, -- Description from complete event evidence JSONB, -- Evidence array from complete event - disburse_tx_hash VARCHAR(64), -- Disbursement transaction - disburse_time BIGINT, -- Disbursement timestamp - disburse_amount BIGINT, -- Actual disbursed amount + + -- Archive details (set when archived = true) + archived_by_tx_hash VARCHAR(64), + archived_at BIGINT, + superseded_by INT REFERENCES treasury.milestones(id), + + -- Datum parse diagnostics (per-milestone) + datum_parse_error TEXT, + created_at TIMESTAMPTZ DEFAULT NOW(), - updated_at TIMESTAMPTZ DEFAULT NOW(), - UNIQUE(vendor_contract_id, milestone_id) + updated_at TIMESTAMPTZ DEFAULT NOW() ); -- Events - Audit log of all TOM events @@ -78,28 +110,29 @@ CREATE TABLE IF NOT EXISTS treasury.events ( block_time BIGINT, -- Block timestamp event_type TEXT NOT NULL, -- publish/initialize/fund/complete/disburse/etc. 
treasury_id INT REFERENCES treasury.treasury_contracts(id), - vendor_contract_id INT REFERENCES treasury.vendor_contracts(id), + project_db_id INT REFERENCES treasury.projects(id), milestone_id INT REFERENCES treasury.milestones(id), amount_lovelace BIGINT, -- Amount involved reason TEXT, -- Justification (pause/cancel/modify) - destination TEXT, -- Destination label (disburse) + destination JSONB, -- Destination object {label, details} (disburse) metadata JSONB, -- Original TOM metadata body created_at TIMESTAMPTZ DEFAULT NOW() ); -- UTXOs - Track UTXOs at treasury-related addresses -CREATE TABLE IF NOT EXISTS treasury.utxos ( +CREATE TABLE IF NOT EXISTS treasury.utxo_history ( id SERIAL PRIMARY KEY, tx_hash VARCHAR(64) NOT NULL, -- Transaction hash output_index SMALLINT NOT NULL, -- Output index address TEXT, -- Owner address (optional for tracking) address_type TEXT, -- treasury/vendor_contract/vendor - vendor_contract_id INT REFERENCES treasury.vendor_contracts(id), + project_db_id INT REFERENCES treasury.projects(id), lovelace_amount BIGINT, -- Amount (optional for tracking) slot BIGINT, -- Creation slot (optional for tracking) block_number BIGINT, -- Block number spent BOOLEAN DEFAULT FALSE, -- Is spent? 
spent_tx_hash VARCHAR(64), -- Spending transaction + inline_datum_cbor TEXT, spent_slot BIGINT, -- When spent UNIQUE(tx_hash, output_index) ); @@ -128,40 +161,45 @@ CREATE INDEX IF NOT EXISTS idx_treasury_address ON treasury.treasury_contracts(c CREATE INDEX IF NOT EXISTS idx_treasury_status ON treasury.treasury_contracts(status); -- Vendor contracts (projects) -CREATE INDEX IF NOT EXISTS idx_vendor_treasury ON treasury.vendor_contracts(treasury_id); -CREATE INDEX IF NOT EXISTS idx_vendor_project_id ON treasury.vendor_contracts(project_id); -CREATE INDEX IF NOT EXISTS idx_vendor_status ON treasury.vendor_contracts(status); -CREATE INDEX IF NOT EXISTS idx_vendor_fund_time ON treasury.vendor_contracts(fund_block_time DESC); -CREATE INDEX IF NOT EXISTS idx_vendor_contract_address ON treasury.vendor_contracts(contract_address); -CREATE INDEX IF NOT EXISTS idx_vendor_search ON treasury.vendor_contracts +CREATE INDEX IF NOT EXISTS idx_project_treasury ON treasury.projects(treasury_id); +CREATE INDEX IF NOT EXISTS idx_project_project_id ON treasury.projects(project_id); +CREATE INDEX IF NOT EXISTS idx_project_status ON treasury.projects(status); +CREATE INDEX IF NOT EXISTS idx_project_fund_time ON treasury.projects(fund_block_time DESC); +CREATE INDEX IF NOT EXISTS idx_project_contract_address ON treasury.projects(contract_address); +CREATE INDEX IF NOT EXISTS idx_project_payment_key_hash ON treasury.projects(vendor_payment_key_hash); +CREATE INDEX IF NOT EXISTS idx_project_search ON treasury.projects USING gin (to_tsvector('english', COALESCE(project_name, '') || ' ' || COALESCE(description, ''))); -- Milestones -CREATE INDEX IF NOT EXISTS idx_milestone_vendor ON treasury.milestones(vendor_contract_id); -CREATE INDEX IF NOT EXISTS idx_milestone_status ON treasury.milestones(status); -CREATE INDEX IF NOT EXISTS idx_milestone_order ON treasury.milestones(vendor_contract_id, milestone_order); +CREATE INDEX IF NOT EXISTS idx_milestone_vendor ON 
treasury.milestones(project_db_id);
+CREATE INDEX IF NOT EXISTS idx_milestone_order ON treasury.milestones(project_db_id, milestone_order);
+-- Only one active (non-archived) milestone per project + milestone_id
+CREATE UNIQUE INDEX IF NOT EXISTS idx_milestone_active_unique
+    ON treasury.milestones(project_db_id, milestone_id)
+    WHERE NOT archived;
+CREATE INDEX IF NOT EXISTS idx_milestone_not_archived
+    ON treasury.milestones(project_db_id) WHERE NOT archived;
 
 -- Events
 CREATE INDEX IF NOT EXISTS idx_event_type ON treasury.events(event_type);
-CREATE INDEX IF NOT EXISTS idx_event_vendor ON treasury.events(vendor_contract_id);
+CREATE INDEX IF NOT EXISTS idx_event_vendor ON treasury.events(project_db_id);
 CREATE INDEX IF NOT EXISTS idx_event_treasury ON treasury.events(treasury_id);
 CREATE INDEX IF NOT EXISTS idx_event_slot ON treasury.events(slot DESC);
 CREATE INDEX IF NOT EXISTS idx_event_block_time ON treasury.events(block_time DESC);
 
 -- UTXOs
-CREATE INDEX IF NOT EXISTS idx_utxo_address ON treasury.utxos(address);
-CREATE INDEX IF NOT EXISTS idx_utxo_vendor ON treasury.utxos(vendor_contract_id);
-CREATE INDEX IF NOT EXISTS idx_utxo_unspent ON treasury.utxos(address) WHERE NOT spent;
-CREATE INDEX IF NOT EXISTS idx_utxo_slot ON treasury.utxos(slot DESC);
-CREATE INDEX IF NOT EXISTS idx_utxo_vendor_unspent ON treasury.utxos(vendor_contract_id) WHERE NOT spent;
+CREATE INDEX IF NOT EXISTS idx_utxo_history_address ON treasury.utxo_history(address);
+CREATE INDEX IF NOT EXISTS idx_utxo_history_vendor ON treasury.utxo_history(project_db_id);
+CREATE INDEX IF NOT EXISTS idx_utxo_history_unspent ON treasury.utxo_history(address) WHERE NOT spent;
+CREATE INDEX IF NOT EXISTS idx_utxo_history_slot ON treasury.utxo_history(slot DESC);
+CREATE INDEX IF NOT EXISTS idx_utxo_history_vendor_unspent ON treasury.utxo_history(project_db_id) WHERE NOT spent;
 
 -- Full-text search across project fields
-CREATE INDEX IF NOT EXISTS idx_vendor_fulltext ON
treasury.vendor_contracts +CREATE INDEX IF NOT EXISTS idx_project_fulltext ON treasury.projects USING gin (to_tsvector('english', COALESCE(project_id, '') || ' ' || COALESCE(project_name, '') || ' ' || - COALESCE(description, '') || ' ' || - COALESCE(vendor_name, '') + COALESCE(description, '') )); -- Events by milestone (for milestone event history) @@ -186,8 +224,8 @@ CREATE TRIGGER trg_treasury_contracts_updated_at BEFORE UPDATE ON treasury.treasury_contracts FOR EACH ROW EXECUTE FUNCTION treasury.update_updated_at(); -CREATE TRIGGER trg_vendor_contracts_updated_at - BEFORE UPDATE ON treasury.vendor_contracts +CREATE TRIGGER trg_projects_updated_at + BEFORE UPDATE ON treasury.projects FOR EACH ROW EXECUTE FUNCTION treasury.update_updated_at(); CREATE TRIGGER trg_milestones_updated_at @@ -199,7 +237,7 @@ CREATE TRIGGER trg_milestones_updated_at -- ============================================================================ -- Vendor contracts with milestone stats, financials, and balance -CREATE OR REPLACE VIEW treasury.v_vendor_contracts_summary AS +CREATE OR REPLACE VIEW treasury.v_projects_summary AS SELECT vc.id, vc.treasury_id, @@ -207,39 +245,70 @@ SELECT vc.other_identifiers, vc.project_name, vc.description, - vc.vendor_name, vc.vendor_address, - vc.contract_url, vc.contract_address, vc.fund_tx_hash, vc.fund_slot, vc.fund_block_time, vc.initial_amount_lovelace, - vc.status, + -- Raw on-chain status as written by event handlers (active/paused/cancelled). + -- TOM has no "complete project" event, so this column never holds 'completed'. + vc.status as raw_status, + -- Derived status: 'completed' once every non-archived milestone has been + -- withdrawn. 'cancelled' wins; otherwise paused/active mirror raw_status. 
+    CASE
+        WHEN vc.status = 'cancelled' THEN 'cancelled'
+        WHEN COUNT(DISTINCT m.id) FILTER (WHERE NOT m.archived) > 0
+             AND COUNT(DISTINCT m.id) FILTER (WHERE NOT m.archived AND NOT m.withdrawn) = 0
+            THEN 'completed'
+        WHEN vc.status = 'paused' THEN 'paused'
+        ELSE COALESCE(vc.status, 'active')
+    END AS status,
     vc.created_at,
     vc.updated_at,
     -- Treasury context
     tc.contract_instance as treasury_instance,
-    tc.name as treasury_name,
-    -- Milestone counts
-    COUNT(DISTINCT m.id) as total_milestones,
-    COUNT(DISTINCT m.id) FILTER (WHERE m.status = 'pending') as pending_milestones,
-    COUNT(DISTINCT m.id) FILTER (WHERE m.status = 'completed') as completed_milestones,
-    COUNT(DISTINCT m.id) FILTER (WHERE m.status = 'disbursed') as disbursed_milestones,
+    -- Milestone counts (excluding archived)
+    COUNT(DISTINCT m.id) FILTER (WHERE NOT m.archived) as total_milestones,
+    COUNT(DISTINCT m.id) FILTER (WHERE NOT m.archived AND NOT m.evidence_provided AND NOT m.withdrawn) as pending_milestones,
+    COUNT(DISTINCT m.id) FILTER (WHERE NOT m.archived AND m.evidence_provided AND NOT m.withdrawn) as completed_milestones,
+    COUNT(DISTINCT m.id) FILTER (WHERE NOT m.archived AND m.withdrawn) as withdrawn_milestones,
+    COUNT(DISTINCT m.id) FILTER (WHERE NOT m.archived AND m.paused AND NOT m.withdrawn) as paused_milestones,
     -- Financial totals from milestones
-    COALESCE(SUM(DISTINCT m.disburse_amount), 0)::BIGINT as total_disbursed_lovelace,
-    -- Current balance from UTXOs
-    COALESCE(SUM(u.lovelace_amount) FILTER (WHERE NOT u.spent), 0)::BIGINT as current_balance_lovelace,
-    COUNT(u.id) FILTER (WHERE NOT u.spent) as utxo_count,
+    COALESCE(SUM(m.withdraw_amount) FILTER (WHERE NOT m.archived), 0)::BIGINT as total_withdrawn_lovelace,
+    -- Current balance: live unspent UTXOs from yaci_store.address_utxo (authoritative
+    -- because pruning removes spent rows), restricted to UTXOs that utxo_history has
+    -- linked to this project. Avoids ghost-unspent rows in utxo_history.spent.
+ COALESCE(( + SELECT SUM(au.lovelace_amount) + FROM yaci_store.address_utxo au + JOIN treasury.utxo_history uh + ON uh.tx_hash = au.tx_hash AND uh.output_index = au.output_index + WHERE uh.project_db_id = vc.id + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + ), 0)::BIGINT as current_balance_lovelace, + COALESCE(( + SELECT COUNT(*) + FROM yaci_store.address_utxo au + JOIN treasury.utxo_history uh + ON uh.tx_hash = au.tx_hash AND uh.output_index = au.output_index + WHERE uh.project_db_id = vc.id + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + ), 0) as utxo_count, -- Last event time - (SELECT MAX(e.block_time) FROM treasury.events e WHERE e.vendor_contract_id = vc.id) as last_event_time, + (SELECT MAX(e.block_time) FROM treasury.events e WHERE e.project_db_id = vc.id) as last_event_time, -- Event count - (SELECT COUNT(*) FROM treasury.events e WHERE e.vendor_contract_id = vc.id) as event_count -FROM treasury.vendor_contracts vc + (SELECT COUNT(*) FROM treasury.events e WHERE e.project_db_id = vc.id) as event_count +FROM treasury.projects vc LEFT JOIN treasury.treasury_contracts tc ON tc.id = vc.treasury_id -LEFT JOIN treasury.milestones m ON m.vendor_contract_id = vc.id -LEFT JOIN treasury.utxos u ON u.vendor_contract_id = vc.id -GROUP BY vc.id, tc.contract_instance, tc.name; +LEFT JOIN treasury.milestones m ON m.project_db_id = vc.id +GROUP BY vc.id, tc.contract_instance; -- Milestone timeline with vendor context CREATE OR REPLACE VIEW treasury.v_milestone_timeline AS @@ -251,19 +320,25 @@ SELECT m.description, m.acceptance_criteria, m.amount_lovelace, - m.status, + m.time_limit, + m.withdrawn, + m.evidence_provided, + m.archived, m.complete_tx_hash, m.complete_time, m.complete_description, m.evidence, - m.disburse_tx_hash, - m.disburse_time, - m.disburse_amount, + m.withdraw_tx_hash, + 
m.withdraw_time, + m.withdraw_amount, + m.archived_by_tx_hash, + m.archived_at, + m.superseded_by, vc.project_id, vc.project_name, vc.vendor_address FROM treasury.milestones m -JOIN treasury.vendor_contracts vc ON vc.id = m.vendor_contract_id +JOIN treasury.projects vc ON vc.id = m.project_db_id ORDER BY vc.project_id, m.milestone_order; -- Recent events with full context @@ -286,8 +361,8 @@ SELECT m.label as milestone_label, m.milestone_order FROM treasury.events e -LEFT JOIN treasury.treasury_contracts tc ON tc.id = e.treasury_id -LEFT JOIN treasury.vendor_contracts vc ON vc.id = e.vendor_contract_id +LEFT JOIN treasury.projects vc ON vc.id = e.project_db_id +LEFT JOIN treasury.treasury_contracts tc ON tc.id = COALESCE(e.treasury_id, vc.treasury_id) LEFT JOIN treasury.milestones m ON m.id = e.milestone_id ORDER BY e.slot DESC; @@ -298,26 +373,45 @@ SELECT tc.contract_instance, tc.contract_address, tc.stake_credential, - tc.name, tc.status, tc.publish_tx_hash, tc.publish_time, tc.initialized_tx_hash, tc.initialized_at, tc.permissions, - COUNT(DISTINCT vc.id) as vendor_contract_count, - COUNT(DISTINCT vc.id) FILTER (WHERE vc.status = 'active') as active_contracts, - COUNT(DISTINCT vc.id) FILTER (WHERE vc.status = 'completed') as completed_contracts, - COUNT(DISTINCT vc.id) FILTER (WHERE vc.status = 'cancelled') as cancelled_contracts, - COALESCE(SUM(u.lovelace_amount) FILTER (WHERE NOT u.spent AND u.address = tc.contract_address), 0)::BIGINT as treasury_balance, - COUNT(u.id) FILTER (WHERE NOT u.spent AND u.address = tc.contract_address) as utxo_count, + COUNT(DISTINCT vc.id) as project_count, + COUNT(DISTINCT vc.id) FILTER (WHERE vps.status = 'active') as active_contracts, + COUNT(DISTINCT vc.id) FILTER (WHERE vps.status = 'completed') as completed_contracts, + COUNT(DISTINCT vc.id) FILTER (WHERE vps.status = 'cancelled') as cancelled_contracts, + COUNT(DISTINCT vc.id) FILTER (WHERE vps.status = 'paused') as paused_contracts, + -- Live treasury balance from yaci's 
UTXO set (authoritative; spent rows are + -- pruned out). utxo_history.spent flag is unreliable for historical/pre-trigger + -- captures so we don't trust it for current totals. + COALESCE(( + SELECT SUM(au.lovelace_amount) + FROM yaci_store.address_utxo au + WHERE au.owner_addr = tc.contract_address + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + ), 0)::BIGINT as treasury_balance, + COALESCE(( + SELECT COUNT(*) + FROM yaci_store.address_utxo au + WHERE au.owner_addr = tc.contract_address + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + ), 0) as utxo_count, (SELECT COUNT(*) FROM treasury.events WHERE treasury_id = tc.id) as total_events, (SELECT MAX(block_time) FROM treasury.events WHERE treasury_id = tc.id) as last_event_time, tc.created_at, tc.updated_at FROM treasury.treasury_contracts tc -LEFT JOIN treasury.vendor_contracts vc ON vc.treasury_id = tc.id -LEFT JOIN treasury.utxos u ON u.address = tc.contract_address +LEFT JOIN treasury.projects vc ON vc.treasury_id = tc.id +LEFT JOIN treasury.v_projects_summary vps ON vps.id = vc.id GROUP BY tc.id; -- Events with full context (treasury, project, milestone info) @@ -336,19 +430,17 @@ SELECT e.created_at, -- Treasury context tc.contract_instance as treasury_instance, - tc.name as treasury_name, -- Project context vc.project_id, vc.project_name, - vc.vendor_name, vc.contract_address as project_address, -- Milestone context m.milestone_id, m.label as milestone_label, m.milestone_order FROM treasury.events e -LEFT JOIN treasury.treasury_contracts tc ON tc.id = e.treasury_id -LEFT JOIN treasury.vendor_contracts vc ON vc.id = e.vendor_contract_id +LEFT JOIN treasury.projects vc ON vc.id = e.project_db_id +LEFT JOIN treasury.treasury_contracts tc ON tc.id = COALESCE(e.treasury_id, vc.treasury_id) LEFT JOIN treasury.milestones m ON m.id = e.milestone_id; 
-- Financial summary view (allocated vs disbursed vs remaining) @@ -356,33 +448,47 @@ CREATE OR REPLACE VIEW treasury.v_financial_summary AS SELECT tc.id as treasury_id, tc.contract_instance, - tc.name as treasury_name, -- Allocation totals COALESCE(SUM(vc.initial_amount_lovelace), 0)::BIGINT as total_allocated_lovelace, - -- Disbursement totals - COALESCE(SUM(m_totals.total_disbursed), 0)::BIGINT as total_disbursed_lovelace, - -- Remaining (allocated - disbursed) - (COALESCE(SUM(vc.initial_amount_lovelace), 0) - COALESCE(SUM(m_totals.total_disbursed), 0))::BIGINT as total_remaining_lovelace, - -- Treasury balance (actual UTXOs) - COALESCE(SUM(u.lovelace_amount) FILTER (WHERE NOT u.spent AND u.address = tc.contract_address), 0)::BIGINT as treasury_balance_lovelace, - -- Project-level balance (sum of project UTXOs) + -- Withdrawal totals + COALESCE(SUM(m_totals.total_withdrawn), 0)::BIGINT as total_withdrawn_lovelace, + -- Remaining (allocated - withdrawn) + (COALESCE(SUM(vc.initial_amount_lovelace), 0) - COALESCE(SUM(m_totals.total_withdrawn), 0))::BIGINT as total_remaining_lovelace, + -- Treasury reserve balance: live unspent at TRSC address from yaci's UTXO set. + -- Anti-join against tx_input handles pruning-window lag. + COALESCE(( + SELECT SUM(au.lovelace_amount) + FROM yaci_store.address_utxo au + WHERE au.owner_addr = tc.contract_address + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + ), 0)::BIGINT as treasury_balance_lovelace, + -- PSSC (vendor contract) balance: raw live unspent at the singleton PSSC address. + -- This is the on-chain truth for "funds currently held by the vendor contract". + -- Per-project attribution lives in v_projects_summary.current_balance_lovelace + -- and may sum to less than this when chain-trace gaps leave unattributed UTXOs. 
COALESCE(( - SELECT SUM(u2.lovelace_amount) - FROM treasury.utxos u2 - JOIN treasury.vendor_contracts vc2 ON vc2.id = u2.vendor_contract_id - WHERE vc2.treasury_id = tc.id AND NOT u2.spent + SELECT SUM(au.lovelace_amount) + FROM yaci_store.address_utxo au + JOIN treasury.vendor_contracts vco ON vco.address = au.owner_addr + WHERE NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) ), 0)::BIGINT as project_balance_lovelace, -- Counts COUNT(DISTINCT vc.id) as project_count, COUNT(DISTINCT CASE WHEN vc.status = 'active' THEN vc.id END) as active_project_count FROM treasury.treasury_contracts tc -LEFT JOIN treasury.vendor_contracts vc ON vc.treasury_id = tc.id +LEFT JOIN treasury.projects vc ON vc.treasury_id = tc.id LEFT JOIN ( SELECT - m.vendor_contract_id, - SUM(COALESCE(m.disburse_amount, 0)) as total_disbursed + m.project_db_id, + SUM(COALESCE(m.withdraw_amount, 0)) as total_withdrawn FROM treasury.milestones m - GROUP BY m.vendor_contract_id -) m_totals ON m_totals.vendor_contract_id = vc.id -LEFT JOIN treasury.utxos u ON u.address = tc.contract_address + WHERE NOT m.archived + GROUP BY m.project_db_id +) m_totals ON m_totals.project_db_id = vc.id GROUP BY tc.id; diff --git a/database/schema/treasury.sql b/database/schema/treasury.sql index 56b6e4e..a93c1ee 100644 --- a/database/schema/treasury.sql +++ b/database/schema/treasury.sql @@ -25,23 +25,35 @@ CREATE TABLE IF NOT EXISTS treasury.treasury_contracts ( updated_at TIMESTAMPTZ DEFAULT NOW() ); --- Vendor Contracts (PSSC) - Project-specific contracts linked to treasury +-- Vendor Contract (PSSC) - the *one* on-chain script address that holds every +-- project's funds, distinguished only by inline datum. Singleton in our deployment. 
 CREATE TABLE IF NOT EXISTS treasury.vendor_contracts (
+    id SERIAL PRIMARY KEY,
+    treasury_id INT REFERENCES treasury.treasury_contracts(id),
+    address TEXT UNIQUE NOT NULL,       -- Shared PSSC script address (addr1x...)
+    stake_credential TEXT,              -- Stake credential portion of the address
+    created_at TIMESTAMPTZ DEFAULT NOW(),
+    updated_at TIMESTAMPTZ DEFAULT NOW()
+);
+
+-- Projects - One row per `fund` event; identified by `project_id` (e.g. EC-0008-25).
+-- Funds and milestones live at the shared PSSC above, distinguished by datum.
+CREATE TABLE IF NOT EXISTS treasury.projects (
     id SERIAL PRIMARY KEY,
     treasury_id INT REFERENCES treasury.treasury_contracts(id),
     project_id TEXT UNIQUE NOT NULL,    -- Logical identifier (e.g., "EC-0008-25")
     other_identifiers TEXT[],           -- Related IDs from otherIdentifiers array
     project_name TEXT,                  -- Label from fund event
     description TEXT,                   -- Project description (joined if array)
-    vendor_name TEXT,                   -- vendor.name from metadata
     vendor_address TEXT,                -- Payment destination (vendor.label in metadata)
-    contract_url TEXT,                  -- contract - link to agreement document
     contract_address TEXT,              -- PSSC script address (from fund tx output)
+    vendor_payment_key_hash TEXT,       -- Comma-joined hex hashes; multi-party datums produce multiple hashes
     fund_tx_hash VARCHAR(64) NOT NULL,  -- Fund transaction
     fund_slot BIGINT,                   -- Blockchain slot
     fund_block_time BIGINT,             -- Block timestamp
     initial_amount_lovelace BIGINT,     -- Initial funding amount (from tx output)
     status TEXT DEFAULT 'active',       -- active/paused/completed/cancelled
+    datum_parse_error TEXT,             -- Set when fund datum parse failed; cleared on success
     created_at TIMESTAMPTZ DEFAULT NOW(),
     updated_at TIMESTAMPTZ DEFAULT NOW()
 );
@@ -49,24 +61,44 @@ CREATE TABLE IF NOT EXISTS treasury.vendor_contracts (
 -- Milestones - Each vendor contract has ordered milestones
 CREATE TABLE IF NOT EXISTS treasury.milestones (
     id SERIAL PRIMARY KEY,
-    vendor_contract_id INT NOT NULL REFERENCES treasury.vendor_contracts(id) ON DELETE CASCADE,
+ project_db_id INT NOT NULL REFERENCES treasury.projects(id) ON DELETE CASCADE, milestone_id TEXT NOT NULL, -- Logical identifier (e.g., "m-0") milestone_order INT NOT NULL, -- Position (1, 2, 3...) label TEXT, -- Milestone name description TEXT, -- Detailed description acceptance_criteria TEXT, -- Completion criteria - amount_lovelace BIGINT, -- Allocated amount (if specified) - status TEXT DEFAULT 'pending', -- pending/completed/disbursed + + -- From inline UTXO datum + amount_lovelace BIGINT, -- Lovelace amount from datum Value map + time_limit BIGINT, -- POSIXTime in milliseconds + + -- Independent boolean lifecycle flags + withdrawn BOOLEAN NOT NULL DEFAULT FALSE, + evidence_provided BOOLEAN NOT NULL DEFAULT FALSE, + archived BOOLEAN NOT NULL DEFAULT FALSE, + paused BOOLEAN NOT NULL DEFAULT FALSE, + + -- Withdraw details (set when withdrawn = true) + withdraw_tx_hash VARCHAR(64), + withdraw_time BIGINT, + withdraw_amount BIGINT, + + -- Evidence/completion details (set when evidence_provided = true) complete_tx_hash VARCHAR(64), -- Completion transaction complete_time BIGINT, -- Completion timestamp complete_description TEXT, -- Description from complete event evidence JSONB, -- Evidence array from complete event - disburse_tx_hash VARCHAR(64), -- Disbursement transaction - disburse_time BIGINT, -- Disbursement timestamp - disburse_amount BIGINT, -- Actual disbursed amount + + -- Archive details (set when archived = true) + archived_by_tx_hash VARCHAR(64), + archived_at BIGINT, + superseded_by INT REFERENCES treasury.milestones(id), + + -- Datum parse diagnostics (per-milestone) + datum_parse_error TEXT, + created_at TIMESTAMPTZ DEFAULT NOW(), - updated_at TIMESTAMPTZ DEFAULT NOW(), - UNIQUE(vendor_contract_id, milestone_id) + updated_at TIMESTAMPTZ DEFAULT NOW() ); -- Events - Audit log of all TOM events @@ -78,28 +110,29 @@ CREATE TABLE IF NOT EXISTS treasury.events ( block_time BIGINT, -- Block timestamp event_type TEXT NOT NULL, -- 
publish/initialize/fund/complete/disburse/etc. treasury_id INT REFERENCES treasury.treasury_contracts(id), - vendor_contract_id INT REFERENCES treasury.vendor_contracts(id), + project_db_id INT REFERENCES treasury.projects(id), milestone_id INT REFERENCES treasury.milestones(id), amount_lovelace BIGINT, -- Amount involved reason TEXT, -- Justification (pause/cancel/modify) - destination TEXT, -- Destination label (disburse) + destination JSONB, -- Destination object {label, details} (disburse) metadata JSONB, -- Original TOM metadata body created_at TIMESTAMPTZ DEFAULT NOW() ); -- UTXOs - Track UTXOs at treasury-related addresses -CREATE TABLE IF NOT EXISTS treasury.utxos ( +CREATE TABLE IF NOT EXISTS treasury.utxo_history ( id SERIAL PRIMARY KEY, tx_hash VARCHAR(64) NOT NULL, -- Transaction hash output_index SMALLINT NOT NULL, -- Output index address TEXT, -- Owner address (optional for tracking) address_type TEXT, -- treasury/vendor_contract/vendor - vendor_contract_id INT REFERENCES treasury.vendor_contracts(id), + project_db_id INT REFERENCES treasury.projects(id), lovelace_amount BIGINT, -- Amount (optional for tracking) slot BIGINT, -- Creation slot (optional for tracking) block_number BIGINT, -- Block number spent BOOLEAN DEFAULT FALSE, -- Is spent? 
spent_tx_hash VARCHAR(64), -- Spending transaction + inline_datum_cbor TEXT, spent_slot BIGINT, -- When spent UNIQUE(tx_hash, output_index) ); @@ -128,40 +161,45 @@ CREATE INDEX IF NOT EXISTS idx_treasury_address ON treasury.treasury_contracts(c CREATE INDEX IF NOT EXISTS idx_treasury_status ON treasury.treasury_contracts(status); -- Vendor contracts (projects) -CREATE INDEX IF NOT EXISTS idx_vendor_treasury ON treasury.vendor_contracts(treasury_id); -CREATE INDEX IF NOT EXISTS idx_vendor_project_id ON treasury.vendor_contracts(project_id); -CREATE INDEX IF NOT EXISTS idx_vendor_status ON treasury.vendor_contracts(status); -CREATE INDEX IF NOT EXISTS idx_vendor_fund_time ON treasury.vendor_contracts(fund_block_time DESC); -CREATE INDEX IF NOT EXISTS idx_vendor_contract_address ON treasury.vendor_contracts(contract_address); -CREATE INDEX IF NOT EXISTS idx_vendor_search ON treasury.vendor_contracts +CREATE INDEX IF NOT EXISTS idx_project_treasury ON treasury.projects(treasury_id); +CREATE INDEX IF NOT EXISTS idx_project_project_id ON treasury.projects(project_id); +CREATE INDEX IF NOT EXISTS idx_project_status ON treasury.projects(status); +CREATE INDEX IF NOT EXISTS idx_project_fund_time ON treasury.projects(fund_block_time DESC); +CREATE INDEX IF NOT EXISTS idx_project_contract_address ON treasury.projects(contract_address); +CREATE INDEX IF NOT EXISTS idx_project_payment_key_hash ON treasury.projects(vendor_payment_key_hash); +CREATE INDEX IF NOT EXISTS idx_project_search ON treasury.projects USING gin (to_tsvector('english', COALESCE(project_name, '') || ' ' || COALESCE(description, ''))); -- Milestones -CREATE INDEX IF NOT EXISTS idx_milestone_vendor ON treasury.milestones(vendor_contract_id); -CREATE INDEX IF NOT EXISTS idx_milestone_status ON treasury.milestones(status); -CREATE INDEX IF NOT EXISTS idx_milestone_order ON treasury.milestones(vendor_contract_id, milestone_order); +CREATE INDEX IF NOT EXISTS idx_milestone_vendor ON 
treasury.milestones(project_db_id); +CREATE INDEX IF NOT EXISTS idx_milestone_order ON treasury.milestones(project_db_id, milestone_order); +-- Only one active (non-archived) milestone per project + milestone_id +CREATE UNIQUE INDEX IF NOT EXISTS idx_milestone_active_unique + ON treasury.milestones(project_db_id, milestone_id) + WHERE NOT archived; +CREATE INDEX IF NOT EXISTS idx_milestone_not_archived + ON treasury.milestones(project_db_id) WHERE NOT archived; -- Events CREATE INDEX IF NOT EXISTS idx_event_type ON treasury.events(event_type); -CREATE INDEX IF NOT EXISTS idx_event_vendor ON treasury.events(vendor_contract_id); +CREATE INDEX IF NOT EXISTS idx_event_vendor ON treasury.events(project_db_id); CREATE INDEX IF NOT EXISTS idx_event_treasury ON treasury.events(treasury_id); CREATE INDEX IF NOT EXISTS idx_event_slot ON treasury.events(slot DESC); CREATE INDEX IF NOT EXISTS idx_event_block_time ON treasury.events(block_time DESC); -- UTXOs -CREATE INDEX IF NOT EXISTS idx_utxo_address ON treasury.utxos(address); -CREATE INDEX IF NOT EXISTS idx_utxo_vendor ON treasury.utxos(vendor_contract_id); -CREATE INDEX IF NOT EXISTS idx_utxo_unspent ON treasury.utxos(address) WHERE NOT spent; -CREATE INDEX IF NOT EXISTS idx_utxo_slot ON treasury.utxos(slot DESC); -CREATE INDEX IF NOT EXISTS idx_utxo_vendor_unspent ON treasury.utxos(vendor_contract_id) WHERE NOT spent; +CREATE INDEX IF NOT EXISTS idx_utxo_history_address ON treasury.utxo_history(address); +CREATE INDEX IF NOT EXISTS idx_utxo_history_vendor ON treasury.utxo_history(project_db_id); +CREATE INDEX IF NOT EXISTS idx_utxo_history_unspent ON treasury.utxo_history(address) WHERE NOT spent; +CREATE INDEX IF NOT EXISTS idx_utxo_history_slot ON treasury.utxo_history(slot DESC); +CREATE INDEX IF NOT EXISTS idx_utxo_history_vendor_unspent ON treasury.utxo_history(project_db_id) WHERE NOT spent; -- Full-text search across project fields -CREATE INDEX IF NOT EXISTS idx_vendor_fulltext ON
treasury.vendor_contracts +CREATE INDEX IF NOT EXISTS idx_project_fulltext ON treasury.projects USING gin (to_tsvector('english', COALESCE(project_id, '') || ' ' || COALESCE(project_name, '') || ' ' || - COALESCE(description, '') || ' ' || - COALESCE(vendor_name, '') + COALESCE(description, '') )); -- Events by milestone (for milestone event history) @@ -186,8 +224,8 @@ CREATE TRIGGER trg_treasury_contracts_updated_at BEFORE UPDATE ON treasury.treasury_contracts FOR EACH ROW EXECUTE FUNCTION treasury.update_updated_at(); -CREATE TRIGGER trg_vendor_contracts_updated_at - BEFORE UPDATE ON treasury.vendor_contracts +CREATE TRIGGER trg_projects_updated_at + BEFORE UPDATE ON treasury.projects FOR EACH ROW EXECUTE FUNCTION treasury.update_updated_at(); CREATE TRIGGER trg_milestones_updated_at @@ -199,7 +237,7 @@ CREATE TRIGGER trg_milestones_updated_at -- ============================================================================ -- Vendor contracts with milestone stats, financials, and balance -CREATE OR REPLACE VIEW treasury.v_vendor_contracts_summary AS +CREATE OR REPLACE VIEW treasury.v_projects_summary AS SELECT vc.id, vc.treasury_id, @@ -207,39 +245,70 @@ SELECT vc.other_identifiers, vc.project_name, vc.description, - vc.vendor_name, vc.vendor_address, - vc.contract_url, vc.contract_address, vc.fund_tx_hash, vc.fund_slot, vc.fund_block_time, vc.initial_amount_lovelace, - vc.status, + -- Raw on-chain status as written by event handlers (active/paused/cancelled). + -- TOM has no "complete project" event, so this column never holds 'completed'. + vc.status as raw_status, + -- Derived status: 'completed' once every non-archived milestone has been + -- withdrawn. 'cancelled' wins; otherwise paused/active mirror raw_status. 
+ CASE + WHEN vc.status = 'cancelled' THEN 'cancelled' + WHEN COUNT(DISTINCT m.id) FILTER (WHERE NOT m.archived) > 0 + AND COUNT(DISTINCT m.id) FILTER (WHERE NOT m.archived AND NOT m.withdrawn) = 0 + THEN 'completed' + WHEN vc.status = 'paused' THEN 'paused' + ELSE COALESCE(vc.status, 'active') + END AS status, vc.created_at, vc.updated_at, -- Treasury context tc.contract_instance as treasury_instance, - tc.name as treasury_name, - -- Milestone counts - COUNT(DISTINCT m.id) as total_milestones, - COUNT(DISTINCT m.id) FILTER (WHERE m.status = 'pending') as pending_milestones, - COUNT(DISTINCT m.id) FILTER (WHERE m.status = 'completed') as completed_milestones, - COUNT(DISTINCT m.id) FILTER (WHERE m.status = 'disbursed') as disbursed_milestones, + -- Milestone counts (excluding archived) + COUNT(DISTINCT m.id) FILTER (WHERE NOT m.archived) as total_milestones, + COUNT(DISTINCT m.id) FILTER (WHERE NOT m.archived AND NOT m.evidence_provided AND NOT m.withdrawn) as pending_milestones, + COUNT(DISTINCT m.id) FILTER (WHERE NOT m.archived AND m.evidence_provided AND NOT m.withdrawn) as completed_milestones, + COUNT(DISTINCT m.id) FILTER (WHERE NOT m.archived AND m.withdrawn) as withdrawn_milestones, + COUNT(DISTINCT m.id) FILTER (WHERE NOT m.archived AND m.paused AND NOT m.withdrawn) as paused_milestones, -- Financial totals from milestones - COALESCE(SUM(DISTINCT m.disburse_amount), 0)::BIGINT as total_disbursed_lovelace, - -- Current balance from UTXOs - COALESCE(SUM(u.lovelace_amount) FILTER (WHERE NOT u.spent), 0)::BIGINT as current_balance_lovelace, - COUNT(u.id) FILTER (WHERE NOT u.spent) as utxo_count, + COALESCE(SUM(DISTINCT m.withdraw_amount) FILTER (WHERE NOT m.archived), 0)::BIGINT as total_withdrawn_lovelace, + -- Current balance: live unspent UTXOs from yaci_store.address_utxo (authoritative + -- because pruning removes spent rows), restricted to UTXOs that utxo_history has + -- linked to this project. Avoids ghost-unspent rows in utxo_history.spent. 
+ COALESCE(( + SELECT SUM(au.lovelace_amount) + FROM yaci_store.address_utxo au + JOIN treasury.utxo_history uh + ON uh.tx_hash = au.tx_hash AND uh.output_index = au.output_index + WHERE uh.project_db_id = vc.id + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + ), 0)::BIGINT as current_balance_lovelace, + COALESCE(( + SELECT COUNT(*) + FROM yaci_store.address_utxo au + JOIN treasury.utxo_history uh + ON uh.tx_hash = au.tx_hash AND uh.output_index = au.output_index + WHERE uh.project_db_id = vc.id + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + ), 0) as utxo_count, -- Last event time - (SELECT MAX(e.block_time) FROM treasury.events e WHERE e.vendor_contract_id = vc.id) as last_event_time, + (SELECT MAX(e.block_time) FROM treasury.events e WHERE e.project_db_id = vc.id) as last_event_time, -- Event count - (SELECT COUNT(*) FROM treasury.events e WHERE e.vendor_contract_id = vc.id) as event_count -FROM treasury.vendor_contracts vc + (SELECT COUNT(*) FROM treasury.events e WHERE e.project_db_id = vc.id) as event_count +FROM treasury.projects vc LEFT JOIN treasury.treasury_contracts tc ON tc.id = vc.treasury_id -LEFT JOIN treasury.milestones m ON m.vendor_contract_id = vc.id -LEFT JOIN treasury.utxos u ON u.vendor_contract_id = vc.id -GROUP BY vc.id, tc.contract_instance, tc.name; +LEFT JOIN treasury.milestones m ON m.project_db_id = vc.id +GROUP BY vc.id, tc.contract_instance; -- Milestone timeline with vendor context CREATE OR REPLACE VIEW treasury.v_milestone_timeline AS @@ -251,19 +320,25 @@ SELECT m.description, m.acceptance_criteria, m.amount_lovelace, - m.status, + m.time_limit, + m.withdrawn, + m.evidence_provided, + m.archived, m.complete_tx_hash, m.complete_time, m.complete_description, m.evidence, - m.disburse_tx_hash, - m.disburse_time, - m.disburse_amount, + m.withdraw_tx_hash, + 
m.withdraw_time, + m.withdraw_amount, + m.archived_by_tx_hash, + m.archived_at, + m.superseded_by, vc.project_id, vc.project_name, vc.vendor_address FROM treasury.milestones m -JOIN treasury.vendor_contracts vc ON vc.id = m.vendor_contract_id +JOIN treasury.projects vc ON vc.id = m.project_db_id ORDER BY vc.project_id, m.milestone_order; -- Recent events with full context @@ -286,8 +361,8 @@ SELECT m.label as milestone_label, m.milestone_order FROM treasury.events e -LEFT JOIN treasury.treasury_contracts tc ON tc.id = e.treasury_id -LEFT JOIN treasury.vendor_contracts vc ON vc.id = e.vendor_contract_id +LEFT JOIN treasury.projects vc ON vc.id = e.project_db_id +LEFT JOIN treasury.treasury_contracts tc ON tc.id = COALESCE(e.treasury_id, vc.treasury_id) LEFT JOIN treasury.milestones m ON m.id = e.milestone_id ORDER BY e.slot DESC; @@ -298,26 +373,45 @@ SELECT tc.contract_instance, tc.contract_address, tc.stake_credential, - tc.name, tc.status, tc.publish_tx_hash, tc.publish_time, tc.initialized_tx_hash, tc.initialized_at, tc.permissions, - COUNT(DISTINCT vc.id) as vendor_contract_count, - COUNT(DISTINCT vc.id) FILTER (WHERE vc.status = 'active') as active_contracts, - COUNT(DISTINCT vc.id) FILTER (WHERE vc.status = 'completed') as completed_contracts, - COUNT(DISTINCT vc.id) FILTER (WHERE vc.status = 'cancelled') as cancelled_contracts, - COALESCE(SUM(u.lovelace_amount) FILTER (WHERE NOT u.spent AND u.address = tc.contract_address), 0)::BIGINT as treasury_balance, - COUNT(u.id) FILTER (WHERE NOT u.spent AND u.address = tc.contract_address) as utxo_count, + COUNT(DISTINCT vc.id) as project_count, + COUNT(DISTINCT vc.id) FILTER (WHERE vps.status = 'active') as active_contracts, + COUNT(DISTINCT vc.id) FILTER (WHERE vps.status = 'completed') as completed_contracts, + COUNT(DISTINCT vc.id) FILTER (WHERE vps.status = 'cancelled') as cancelled_contracts, + COUNT(DISTINCT vc.id) FILTER (WHERE vps.status = 'paused') as paused_contracts, + -- Live treasury balance from yaci's 
UTXO set (authoritative; spent rows are + -- pruned out). utxo_history.spent flag is unreliable for historical/pre-trigger + -- captures so we don't trust it for current totals. + COALESCE(( + SELECT SUM(au.lovelace_amount) + FROM yaci_store.address_utxo au + WHERE au.owner_addr = tc.contract_address + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + ), 0)::BIGINT as treasury_balance, + COALESCE(( + SELECT COUNT(*) + FROM yaci_store.address_utxo au + WHERE au.owner_addr = tc.contract_address + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + ), 0) as utxo_count, (SELECT COUNT(*) FROM treasury.events WHERE treasury_id = tc.id) as total_events, (SELECT MAX(block_time) FROM treasury.events WHERE treasury_id = tc.id) as last_event_time, tc.created_at, tc.updated_at FROM treasury.treasury_contracts tc -LEFT JOIN treasury.vendor_contracts vc ON vc.treasury_id = tc.id -LEFT JOIN treasury.utxos u ON u.address = tc.contract_address +LEFT JOIN treasury.projects vc ON vc.treasury_id = tc.id +LEFT JOIN treasury.v_projects_summary vps ON vps.id = vc.id GROUP BY tc.id; -- Events with full context (treasury, project, milestone info) @@ -336,19 +430,17 @@ SELECT e.created_at, -- Treasury context tc.contract_instance as treasury_instance, - tc.name as treasury_name, -- Project context vc.project_id, vc.project_name, - vc.vendor_name, vc.contract_address as project_address, -- Milestone context m.milestone_id, m.label as milestone_label, m.milestone_order FROM treasury.events e -LEFT JOIN treasury.treasury_contracts tc ON tc.id = e.treasury_id -LEFT JOIN treasury.vendor_contracts vc ON vc.id = e.vendor_contract_id +LEFT JOIN treasury.projects vc ON vc.id = e.project_db_id +LEFT JOIN treasury.treasury_contracts tc ON tc.id = COALESCE(e.treasury_id, vc.treasury_id) LEFT JOIN treasury.milestones m ON m.id = e.milestone_id; 
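The live-balance subqueries above all apply the same `NOT EXISTS` anti-join: a UTXO counts as live iff it appears in `yaci_store.address_utxo` and no row in `yaci_store.tx_input` spends it. As a set operation that is just a filtered difference. A minimal Rust sketch (the `UtxoRef` type and `live_balance` helper are hypothetical; the real computation stays in SQL):

```rust
use std::collections::HashSet;

/// A UTXO reference: (tx_hash, output_index). Illustrative type, not from the API.
type UtxoRef = (String, u16);

/// In-memory analogue of the SQL anti-join: sum the lovelace of outputs that
/// have no matching spend, i.e. `NOT EXISTS (SELECT 1 FROM yaci_store.tx_input ...)`.
fn live_balance(outputs: &[(UtxoRef, u64)], spent_inputs: &HashSet<UtxoRef>) -> u64 {
    outputs
        .iter()
        .filter(|(utxo, _)| !spent_inputs.contains(utxo))
        .map(|(_, lovelace)| *lovelace)
        .sum()
}
```

The sketch only makes the set semantics explicit; in Postgres the same result falls out of the correlated `NOT EXISTS` subquery, which also tolerates the pruning-window lag described in the view comments.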
-- Financial summary view (allocated vs disbursed vs remaining) @@ -356,33 +448,48 @@ CREATE OR REPLACE VIEW treasury.v_financial_summary AS SELECT tc.id as treasury_id, tc.contract_instance, - tc.name as treasury_name, -- Allocation totals COALESCE(SUM(vc.initial_amount_lovelace), 0)::BIGINT as total_allocated_lovelace, - -- Disbursement totals - COALESCE(SUM(m_totals.total_disbursed), 0)::BIGINT as total_disbursed_lovelace, - -- Remaining (allocated - disbursed) - (COALESCE(SUM(vc.initial_amount_lovelace), 0) - COALESCE(SUM(m_totals.total_disbursed), 0))::BIGINT as total_remaining_lovelace, - -- Treasury balance (actual UTXOs) - COALESCE(SUM(u.lovelace_amount) FILTER (WHERE NOT u.spent AND u.address = tc.contract_address), 0)::BIGINT as treasury_balance_lovelace, - -- Project-level balance (sum of project UTXOs) + -- Withdrawal totals + COALESCE(SUM(m_totals.total_withdrawn), 0)::BIGINT as total_withdrawn_lovelace, + -- Remaining (allocated - withdrawn) + (COALESCE(SUM(vc.initial_amount_lovelace), 0) - COALESCE(SUM(m_totals.total_withdrawn), 0))::BIGINT as total_remaining_lovelace, + -- Treasury reserve balance: live unspent at TRSC address from yaci's UTXO set. + -- Anti-join against tx_input handles pruning-window lag (rows not yet pruned + -- but already spent are excluded). + COALESCE(( + SELECT SUM(au.lovelace_amount) + FROM yaci_store.address_utxo au + WHERE au.owner_addr = tc.contract_address + AND NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) + ), 0)::BIGINT as treasury_balance_lovelace, + -- PSSC (vendor contract) balance: raw live unspent at the singleton PSSC address. + -- This is the on-chain truth for "funds currently held by the vendor contract". + -- Per-project attribution lives in v_projects_summary.current_balance_lovelace + -- and may sum to less than this when chain-trace gaps leave unattributed UTXOs. 
COALESCE(( - SELECT SUM(u2.lovelace_amount) - FROM treasury.utxos u2 - JOIN treasury.vendor_contracts vc2 ON vc2.id = u2.vendor_contract_id - WHERE vc2.treasury_id = tc.id AND NOT u2.spent + SELECT SUM(au.lovelace_amount) + FROM yaci_store.address_utxo au + JOIN treasury.vendor_contracts vco ON vco.address = au.owner_addr + WHERE NOT EXISTS ( + SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash = au.tx_hash AND ti.output_index = au.output_index + ) ), 0)::BIGINT as project_balance_lovelace, -- Counts COUNT(DISTINCT vc.id) as project_count, COUNT(DISTINCT CASE WHEN vc.status = 'active' THEN vc.id END) as active_project_count FROM treasury.treasury_contracts tc -LEFT JOIN treasury.vendor_contracts vc ON vc.treasury_id = tc.id +LEFT JOIN treasury.projects vc ON vc.treasury_id = tc.id LEFT JOIN ( SELECT - m.vendor_contract_id, - SUM(COALESCE(m.disburse_amount, 0)) as total_disbursed + m.project_db_id, + SUM(COALESCE(m.withdraw_amount, 0)) as total_withdrawn FROM treasury.milestones m - GROUP BY m.vendor_contract_id -) m_totals ON m_totals.vendor_contract_id = vc.id -LEFT JOIN treasury.utxos u ON u.address = tc.contract_address + WHERE NOT m.archived + GROUP BY m.project_db_id +) m_totals ON m_totals.project_db_id = vc.id GROUP BY tc.id; diff --git a/dev.sh b/dev.sh index 4ff24d1..201d72e 100755 --- a/dev.sh +++ b/dev.sh @@ -109,69 +109,83 @@ case "$COMMAND" in fi done - # Wait a bit more for database to be fully initialized - sleep 3 - - # Verify database exists, create if it doesn't - print_info "Verifying database exists..." - MAX_RETRIES=10 + # Wait for database to be fully initialized + # POSTGRES_DB env var tells the entrypoint to create the database, + # but pg_isready returns true before init scripts finish. + print_info "Waiting for database to be ready..." 
+ MAX_RETRIES=30 RETRY_COUNT=0 - DB_EXISTS="" - + while [ $RETRY_COUNT -lt $MAX_RETRIES ]; do - # Try to check if database exists DB_EXISTS=$(docker-compose exec -T postgres psql -U postgres -tAc "SELECT 1 FROM pg_database WHERE datname='administration_data'" 2>/dev/null | tr -d '[:space:]' || echo "") if [ "$DB_EXISTS" = "1" ]; then - print_success "Database 'administration_data' exists" + print_success "Database 'administration_data' is ready" break fi - - # If database doesn't exist, try to create it - if [ $RETRY_COUNT -eq 0 ]; then - print_info "Database 'administration_data' does not exist, creating..." + + RETRY_COUNT=$((RETRY_COUNT + 1)) + if [ $RETRY_COUNT -lt $MAX_RETRIES ]; then + sleep 1 fi - - CREATE_RESULT=$(docker-compose exec -T postgres psql -U postgres -c "CREATE DATABASE administration_data;" 2>&1) - CREATE_EXIT_CODE=$? - - if [ $CREATE_EXIT_CODE -eq 0 ]; then + done + + if [ "$DB_EXISTS" != "1" ]; then + # Database not created by entrypoint — try to create it manually + print_info "Database not found after ${MAX_RETRIES}s, attempting to create..." + if docker-compose exec -T postgres psql -U postgres -c "CREATE DATABASE administration_data;" 2>&1 | grep -qE "CREATE DATABASE|already exists"; then print_success "Database 'administration_data' created" - break - elif echo "$CREATE_RESULT" | grep -q "already exists"; then - print_success "Database 'administration_data' already exists" - break else - RETRY_COUNT=$((RETRY_COUNT + 1)) - if [ $RETRY_COUNT -lt $MAX_RETRIES ]; then - sleep 1 - fi + print_error "Failed to create database 'administration_data'" + print_info "You may need to remove the postgres volume and restart: docker-compose down -v" + exit 1 fi - done - - if [ "$DB_EXISTS" != "1" ] && [ $CREATE_EXIT_CODE -ne 0 ] && ! 
echo "$CREATE_RESULT" | grep -q "already exists"; then - print_error "Failed to create database after $MAX_RETRIES attempts" - print_error "Last error: $CREATE_RESULT" - print_info "You may need to manually create the database or remove the postgres volume" - print_info "To remove volume: docker-compose down -v" fi - + print_success "PostgreSQL is ready" - # Ensure treasury schema exists (YACI Store creates its own schema/tables via Flyway) - print_info "Ensuring treasury schema exists..." + # Create treasury schema tables (views will fail silently since yaci_store doesn't exist yet) + print_info "Creating treasury schema tables..." + docker-compose exec -T postgres psql -U postgres -d administration_data -c " + CREATE SCHEMA IF NOT EXISTS treasury; + " 2>/dev/null || true docker-compose exec -T postgres psql -U postgres -d administration_data \ -f /docker-entrypoint-initdb.d/02-treasury-schema.sql 2>/dev/null || true - print_success "Database schemas ready" - # Start indexer if JAR is available + # Start indexer if JAR is available — Flyway will create the yaci_store schema if [ "$INDEXER_AVAILABLE" = true ]; then print_info "Starting indexer..." docker-compose up -d indexer + + # Wait for indexer to complete Flyway migrations (yaci_store tables must exist for treasury views) + print_info "Waiting for indexer to initialize (Flyway migrations)..." 
+ INDEXER_RETRIES=0 + INDEXER_MAX=60 + while [ $INDEXER_RETRIES -lt $INDEXER_MAX ]; do + if docker-compose exec -T postgres psql -U postgres -d administration_data -tAc \ + "SELECT 1 FROM information_schema.tables WHERE table_schema='yaci_store' AND table_name='address_utxo'" 2>/dev/null | grep -q 1; then + break + fi + INDEXER_RETRIES=$((INDEXER_RETRIES + 1)) + sleep 1 + done + + if [ $INDEXER_RETRIES -ge $INDEXER_MAX ]; then + print_warning "Indexer did not create yaci_store tables within ${INDEXER_MAX}s — treasury views may be incomplete" + else + print_success "Indexer initialized (Flyway migrations complete)" + # Now re-run treasury schema to create views that reference yaci_store tables + print_info "Creating treasury views..." + docker-compose exec -T postgres psql -U postgres -d administration_data \ + -f /docker-entrypoint-initdb.d/02-treasury-schema.sql 2>/dev/null || true + print_success "Treasury views created" + fi + print_success "Indexer started (check logs with: docker logs administration-indexer -f)" else print_warning "Skipping indexer (JAR file not found)" + print_warning "Treasury views referencing yaci_store will not be created until indexer runs" fi - + # Start API if [ "$API_AVAILABLE" = true ]; then print_info "Starting API (Rust)..." 
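Both readiness checks in `dev.sh` (database created, `yaci_store.address_utxo` present) are instances of one poll-with-bounded-retries pattern. A generic sketch of that loop shape in Rust (illustrative only; nothing in the repo defines `wait_until`, and the script inlines each loop with `sleep 1` between attempts):

```rust
/// Poll `check` up to `max_tries` times, reporting whether it ever succeeded.
/// Mirrors the bounded retry loops in dev.sh; the per-attempt `sleep 1` is
/// omitted here so the sketch stays side-effect free.
fn wait_until<F: FnMut() -> bool>(mut check: F, max_tries: u32) -> bool {
    for _ in 0..max_tries {
        if check() {
            return true;
        }
    }
    false
}
```

The important property, which the reworked script preserves, is that the timeout path stays distinct from the success path, so failure produces an actionable message instead of silently continuing.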
diff --git a/docs/architecture.md b/docs/architecture.md index 184b625..99e9363 100644 --- a/docs/architecture.md +++ b/docs/architecture.md @@ -32,10 +32,11 @@ This document describes how data flows through the Cardano Administration Data S │ │ (raw blockchain data) │ │ (normalized app data) │ │ │ │ │ │ │ │ │ │ • block │ │ • treasury_contracts │ │ -│ │ • transaction │ │ • vendor_contracts │ │ -│ │ • address_utxo │ │ • milestones │ │ -│ │ • transaction_metadata │ │ • events │ │ -│ │ • tx_input │ │ • utxos │ │ +│ │ • transaction │ │ • vendor_contracts (PSSC) │ │ +│ │ • address_utxo │ │ • projects │ │ +│ │ • transaction_metadata │ │ • milestones │ │ +│ │ • tx_input │ │ • events │ │ +│ │ │ │ • utxo_history │ │ │ └─────────────────────────────┘ └─────────────────────────────┘ │ └─────────────────────────────────────────────────────────────────────────────────┘ │ @@ -176,6 +177,32 @@ This document describes how data flows through the Cardano Administration Data S ### Stage 3: API Sync Service (Rust) +The background sync task (`api/src/services/sync.rs::run_sync_loop`) does +three things every 15 seconds: + +1. Reads `treasury.sync_status` to find `last_slot` for `sync_type = 'events'`. +2. Selects new label-`1694` rows from `yaci_store.transaction_metadata` past + that slot, and pre-fetches their UTXOs into `treasury.utxo_history` via + `EventProcessor::pre_fetch_utxos`. This is a defensive backstop on top of + the Postgres triggers (`install_utxo_history_triggers` in + `api/src/services/sync.rs`) that capture every script-address UTXO into + `treasury.utxo_history` synchronously with YACI Store's INSERT, regardless + of pruning. Those triggers are what keep pause/resume datum parsing and + chain tracing working long after the on-chain UTXO is gone. +3. Dispatches each event through the per-type handler and advances + `treasury.sync_status`. The watermark is meant to move only on contiguous + success, but in practice it can advance past failures (see the caveat + below).
A separate task runs `sync_all_events` every 10 + minutes as an idempotent backfill via `ON CONFLICT DO UPDATE`. + +> **Caveat — `last_slot` advancement on errors.** If a single event fails +> mid-batch (e.g. DB connection reset), the loop logs and continues; later +> successful events advance `last_slot` past the failed one, so it is never +> retried by the continuous loop. Restarting the API runs `sync_all_events` +> from the beginning, which is idempotent (`ON CONFLICT (tx_hash) DO UPDATE`) +> and recovers the missed rows. Tracked as +> [`KI-SY-02`](known-issues.md#ki-sy-02--last_slot-can-advance-past-failed-events-on-connection-reset). + ``` ┌──────────────────────────────────────────────────────────────────────────────┐ │ BACKGROUND SYNC LOOP (every 15 seconds) │ @@ -192,6 +219,11 @@ This document describes how data flows through the Cardano Administration Data S │ │ def456 | 1050 | 1694 | {"body":{"event":"complete",...}} │ │ │ └──────────────────────────────────────────────────────────────────────┘ │ └─────────────────────────────────────────────────────────────────────────────┘ + │ + │ pre_fetch_utxos(batch tx_hashes) + │ ──► treasury.utxo_history (raw output rows + │ captured before YACI prunes; primary + │ capture is via Postgres triggers) │ │ SELECT WHERE slot > last_synced_slot ▼ @@ -206,7 +238,7 @@ This document describes how data flows through the Cardano Administration Data S │ ┌─────────────┬───────────────┼───────────────┬─────────────┐ │ │ ▼ ▼ ▼ ▼ ▼ │ │ ┌─────────┐ ┌──────────┐ ┌────────────┐ ┌──────────┐ ┌──────────┐ │ -│ │ publish │ │initialize│ │ fund │ │ complete │ │ disburse │ │ +│ │ publish │ │initialize│ │ fund │ │ complete │ │ withdraw │ │ │ └─────────┘ └──────────┘ └────────────┘ └──────────┘ └──────────┘ │ │ │ │ │ │ │ │ │ ▼ ▼ ▼ ▼ ▼ │ @@ -219,14 +251,20 @@ This document describes how data flows through the Cardano Administration Data S ┌─────────────────────────────────────────────────────────────────────────────┐ │ treasury schema │ │ │ 
-│ treasury_contracts vendor_contracts milestones events │ -│ ┌───────────────┐ ┌───────────────┐ ┌───────────┐ ┌──────────┐ │ -│ │ id │ │ id │ │ id │ │ id │ │ -│ │ instance │◄────│ treasury_id │◄───│ vendor_id │ │ tx_hash │ │ -│ │ name │ │ project_id │ │ label │ │ event │ │ -│ │ publish_tx │ │ project_name │ │ status │ │ metadata │ │ -│ └───────────────┘ │ status │ │ amount │ └──────────┘ │ -│ └───────────────┘ └───────────┘ │ +│ treasury_contracts projects milestones events │ +│ ┌───────────────┐ ┌───────────────┐ ┌───────────────┐ ┌──────────┐ │ +│ │ id │ │ id │ │ id │ │ id │ │ +│ │ instance │◄───│ treasury_id │◄───│ project_db_id │ │ tx_hash │ │ +│ │ stake_cred │ │ project_id │ │ label │ │ event │ │ +│ │ publish_tx │ │ project_name │ │ withdrawn │ │ metadata │ │ +│ └───────────────┘ │ status │ │ evidence_* │ └──────────┘ │ +│ └───────────────┘ │ paused │ │ +│ vendor_contracts │ archived │ │ +│ ┌───────────────┐ │ superseded_by │ │ +│ │ id │ └───────────────┘ │ +│ │ address (PSSC)│ │ +│ │ stake_cred │ │ +│ └───────────────┘ │ └─────────────────────────────────────────────────────────────────────────────┘ ``` @@ -246,7 +284,7 @@ This document describes how data flows through the Cardano Administration Data S │ "identifier": "project-001", │ │ "label": "My Project", │ │ "description": "Project description...", │ - │ "vendor": { "name": "Acme Corp" }, │ + │ "vendor": { "label": "addr1q..." }, │ │ "milestones": [ │ │ { "identifier": "m1", "label": "Phase 1", "amount": 1000000 }, │ │ { "identifier": "m2", "label": "Phase 2", "amount": 2000000 } │ @@ -266,34 +304,41 @@ This document describes how data flows through the Cardano Administration Data S │ └──────────────────────────────────────────────────────────────┘ │ │ │ │ │ ▼ │ - │ 2. INSERT vendor_contracts │ + │ 2. UPSERT vendor_contracts (singleton PSSC row at the shared addr) │ + │ ┌──────────────────────────────────────────────────────────────┐ │ + │ │ INSERT INTO treasury.vendor_contracts (address, ...) 
│ │ + │ │ ON CONFLICT (address) DO NOTHING │ │ + │ └──────────────────────────────────────────────────────────────┘ │ + │ │ │ + │ ▼ │ + │ 3. INSERT projects │ │ ┌──────────────────────────────────────────────────────────────┐ │ - │ │ INSERT INTO treasury.vendor_contracts │ │ - │ │ (project_id, project_name, vendor_name, ...) │ │ - │ │ VALUES ('project-001', 'My Project', 'Acme Corp', ...) │ │ + │ │ INSERT INTO treasury.projects │ │ + │ │ (project_id, project_name, vendor_address, ...) │ │ + │ │ VALUES ('project-001', 'My Project', 'addr1q...', ...) │ │ │ └──────────────────────────────────────────────────────────────┘ │ │ │ │ │ ▼ │ - │ 3. INSERT milestones (for each milestone in array) │ + │ 4. INSERT milestones (for each milestone in array) │ │ ┌──────────────────────────────────────────────────────────────┐ │ │ │ INSERT INTO treasury.milestones │ │ - │ │ (vendor_contract_id, milestone_id, label, amount, status) │ │ - │ │ VALUES (1, 'm1', 'Phase 1', 1000000, 'pending') │ │ - │ │ VALUES (1, 'm2', 'Phase 2', 2000000, 'pending') │ │ + │ │ (project_db_id, milestone_id, label, amount) │ │ + │ │ VALUES (1, 'm1', 'Phase 1', 1000000) │ │ + │ │ VALUES (1, 'm2', 'Phase 2', 2000000) │ │ │ └──────────────────────────────────────────────────────────────┘ │ │ │ │ │ ▼ │ - │ 4. INSERT event record │ + │ 5. INSERT event record │ │ ┌──────────────────────────────────────────────────────────────┐ │ │ │ INSERT INTO treasury.events │ │ - │ │ (tx_hash, event_type, vendor_contract_id, metadata) │ │ + │ │ (tx_hash, event_type, project_db_id, metadata) │ │ │ └──────────────────────────────────────────────────────────────┘ │ │ │ │ │ ▼ │ - │ 5. Track UTXOs for future event lookups │ + │ 6. 
Track UTXOs for future event lookups │ │ ┌──────────────────────────────────────────────────────────────┐ │ - │ │ INSERT INTO treasury.utxos (tx_hash, output_index, │ │ - │ │ vendor_contract_id, spent) │ │ + │ │ INSERT INTO treasury.utxo_history (tx_hash, output_index, │ │ + │ │ project_db_id, spent) │ │ │ └──────────────────────────────────────────────────────────────┘ │ │ │ └────────────────────────────────────────────────────────────────────────┘ @@ -319,8 +364,8 @@ This document describes how data flows through the Cardano Administration Data S │ outputs: │ │ [0] → UTXO₁ (contract address, 10,000 ADA) │ │ │ - │ ──► Record in treasury.utxos: │ - │ (tx_hash="abc123", output_index=0, vendor_contract_id=1) │ + │ ──► Record in treasury.utxo_history: │ + │ (tx_hash="abc123", output_index=0, project_db_id=1) │ └─────────────────────────────────────────────────────────────────────────┘ │ │ UTXO₁ is spent @@ -337,31 +382,55 @@ This document describes how data flows through the Cardano Administration Data S │ outputs: │ │ [0] → UTXO₂ (contract address, 9,000 ADA) │ │ │ - │ ──► find_vendor_contract_from_inputs("def456"): │ + │ ──► find_project_from_inputs("def456"): │ │ 1. Get inputs: [(abc123, 0)] │ - │ 2. Lookup treasury.utxos WHERE tx_hash="abc123" AND index=0 │ - │ 3. Found! vendor_contract_id = 1 │ - │ 4. Mark UTXO₁ as spent, record UTXO₂ with vendor_contract_id=1 │ + │ 2. Lookup treasury.utxo_history WHERE tx_hash="abc123" AND index=0│ + │ 3. Found! project_db_id = 1 │ + │ 4. Mark UTXO₁ as spent, record UTXO₂ with project_db_id=1 │ └─────────────────────────────────────────────────────────────────────────┘ │ │ UTXO₂ is spent ▼ ┌─────────────────────────────────────────────────────────────────────────┐ - │ DISBURSE TRANSACTION │ + │ WITHDRAW TRANSACTION │ │ tx_hash: "ghi789" │ - │ metadata: { "event": "disburse" } │ + │ metadata: { "event": "withdraw", "milestone": "m1" } │ │ (NO project_id!) 
│ │ │ │ inputs: │ │ [0] ← UTXO₂ (spending def456:0) │ │ │ - │ ──► find_vendor_contract_from_inputs("ghi789"): │ + │ ──► find_project_from_inputs("ghi789"): │ │ 1. Get inputs: [(def456, 0)] │ - │ 2. Lookup treasury.utxos WHERE tx_hash="def456" AND index=0 │ - │ 3. Found! vendor_contract_id = 1 │ + │ 2. Lookup treasury.utxo_history WHERE tx_hash="def456" AND index=0│ + │ 3. Found! project_db_id = 1 │ + │ 4. UPDATE milestones SET withdrawn=TRUE WHERE milestone_id='m1' │ └─────────────────────────────────────────────────────────────────────────┘ ``` +#### Disambiguation when a tx pulls inputs from multiple project chains + +A single milestone-level tx can include fee/collateral inputs from a sibling +project's UTXO chain. `find_project_from_inputs` collects every candidate +`project_db_id`, then scores each candidate by how many of the tx's metadata +`body.milestones` keys (collected via `collect_milestone_id_hints` in +`event_processor.rs`) match milestones stored for that project. The +best-scoring candidate wins; ties fall back to the first input. + +#### Cold replay — when chain tracing can't reconstruct the link + +The Postgres triggers installed by `install_utxo_history_triggers` +(`api/src/services/sync.rs`) capture every script-address UTXO into +`treasury.utxo_history` synchronously with YACI Store's INSERT, so the +chain-trace input is always available regardless of pruning — *provided +the triggers were armed before the relevant blocks were ingested*. If a +fresh local sync runs against a database where YACI Store has already +pruned UTXOs from before the triggers were installed, the chain trace can +return `None`; the event is still recorded in `treasury.events` (with +`project_db_id = NULL`) so nothing is silently dropped, but milestone +state flags can't be updated. See +[`docs/known-issues.md` `KI-CR-01`](known-issues.md) and `KI-UTX-01`. 
+ ### Stage 6: API Request Flow ``` @@ -369,24 +438,24 @@ This document describes how data flows through the Cardano Administration Data S │ API REQUEST FLOW │ └──────────────────────────────────────────────────────────────────────────────┘ - Client Request: GET /api/v1/vendor-contracts/EC-0008-25 + Client Request: GET /api/v1/projects/EC-0008-25 │ ▼ ┌─────────────────────────────────────────────────────────────────────────┐ │ AXUM ROUTER │ │ │ │ .nest("/api/v1", routes::v1::router()) │ - │ → /vendor-contracts/:project_id → get_vendor_contract() │ + │ → /projects/:project_id → get_project() │ └─────────────────────────────────────────────────────────────────────────┘ │ ▼ ┌─────────────────────────────────────────────────────────────────────────┐ - │ routes/v1/vendor_contracts.rs │ + │ routes/v1/projects.rs │ │ │ - │ pub async fn get_vendor_contract( │ + │ pub async fn get_project( │ │ Extension(pool): Extension, │ │ Path(project_id): Path, │ - │ ) -> Result>, StatusCode> │ + │ ) -> Result>, ApiError> │ └─────────────────────────────────────────────────────────────────────────┘ │ │ SQL Query @@ -394,7 +463,7 @@ This document describes how data flows through the Cardano Administration Data S ┌─────────────────────────────────────────────────────────────────────────┐ │ PostgreSQL │ │ │ - │ SELECT * FROM treasury.v_vendor_contracts_summary │ + │ SELECT * FROM treasury.v_projects_summary │ │ WHERE project_id = 'EC-0008-25' │ └─────────────────────────────────────────────────────────────────────────┘ │ @@ -406,15 +475,13 @@ This document describes how data flows through the Cardano Administration Data S │ "data": { │ │ "project_id": "EC-0008-25", │ │ "project_name": "Community Hub Development", │ - │ "vendor_name": "Acme Blockchain Solutions", │ │ "status": "active", │ │ "initial_amount_lovelace": 1000000000000, │ - │ "initial_amount_ada": 1000000.0, │ - │ "milestones_summary": { "total": 5, "disbursed": 2 }, │ + │ "milestones_summary": { "total": 5, "withdrawn": 2 }, │ 
│ "financials": { │ - │ "total_allocated_ada": 1000000.0, │ - │ "total_disbursed_ada": 400000.0, │ - │ "disbursement_percentage": 40.0 │ + │ "total_allocated_lovelace": 1000000000000, │ + │ "total_withdrawn_lovelace": 400000000000, │ + │ "withdrawal_percentage": 40.0 │ │ } │ │ }, │ │ "meta": { "timestamp": "2026-01-28T10:30:00Z" } │ @@ -429,53 +496,60 @@ This document describes how data flows through the Cardano Administration Data S │ TREASURY SCHEMA (treasury.*) │ └──────────────────────────────────────────────────────────────────────────────┘ - ┌─────────────────────┐ - │ treasury_contracts │ - ├─────────────────────┤ - │ id (PK) │ - │ contract_instance │◄─────────────────────────────────────────────┐ - │ name │ │ - │ publish_tx_hash │ │ - │ initialized_at │ │ - └─────────────────────┘ │ - │ │ - │ 1:N │ - ▼ │ - ┌─────────────────────┐ ┌─────────────────────┐ │ - │ vendor_contracts │ │ events │ │ - ├─────────────────────┤ ├─────────────────────┤ │ - │ id (PK) │◄────────│ vendor_contract_id │ │ - │ treasury_id (FK) │─────────│ treasury_id (FK) │─────────────┘ - │ project_id (unique) │ │ milestone_id (FK) │─────┐ - │ project_name │ │ tx_hash (unique) │ │ - │ vendor_name │ │ event_type │ │ - │ status │ │ slot │ │ - │ contract_address │ │ metadata (JSONB) │ │ - └─────────────────────┘ └─────────────────────┘ │ - │ │ - │ 1:N │ - ▼ │ - ┌─────────────────────┐ │ - │ milestones │◄────────────────────────────────────┘ + ┌─────────────────────┐ ┌─────────────────────┐ + │ treasury_contracts │ │ vendor_contracts │ (Singleton PSSC row) + ├─────────────────────┤ ├─────────────────────┤ + │ id (PK) │ │ id (PK) │ + │ contract_instance │◄─┐ │ treasury_id (FK) │─┐ + │ stake_credential │ │ │ address (PSSC, uniq)│ │ + │ publish_tx_hash │ │ │ stake_credential │ │ + │ initialized_at │ │ └─────────────────────┘ │ + └─────────────────────┘ │ │ + │ │ │ + │ 1:N │ │ + ▼ │ │ + ┌─────────────────────┐ │ ┌─────────────────────┐ + │ projects │ │ │ events │ + ├─────────────────────┤ │ 
├─────────────────────┤ + │ id (PK) │◄─┼───│ project_db_id │ + │ treasury_id (FK) │──┘ │ treasury_id (FK) │ + │ project_id (unique) │ │ milestone_id (FK) │─┐ + │ project_name │ │ tx_hash (unique) │ │ + │ vendor_address │ │ event_type │ │ + │ status │ │ slot │ │ + │ contract_address │ │ destination (JSONB) │ │ + │ vendor_payment_* │ │ metadata (JSONB) │ │ + └─────────────────────┘ └─────────────────────┘ │ + │ │ + │ 1:N │ + ▼ │ + ┌─────────────────────┐ │ + │ milestones │◄─────────────────────────────┘ ├─────────────────────┤ │ id (PK) │ - │ vendor_contract_id │ + │ project_db_id │ │ milestone_id │ │ label │ - │ status │ │ amount_lovelace │ + │ time_limit │ + │ withdrawn │ + │ evidence_provided │ + │ paused │ + │ archived │ + │ withdraw_tx_hash │ │ complete_tx_hash │ - │ disburse_tx_hash │ + │ superseded_by │ └─────────────────────┘ ┌─────────────────────┐ - │ utxos │ (Tracks UTXO chain for event linking) + │ utxo_history │ (Trigger-captured UTXO history at script addresses) ├─────────────────────┤ - │ tx_hash (PK) │ - │ output_index (PK) │ - │ vendor_contract_id │ + │ tx_hash │ + │ output_index │ + │ project_db_id │ │ address │ │ lovelace_amount │ + │ inline_datum_cbor │ │ spent │ │ spent_tx_hash │ └─────────────────────┘ diff --git a/docs/changelog.md b/docs/changelog.md new file mode 100644 index 0000000..eae073b --- /dev/null +++ b/docs/changelog.md @@ -0,0 +1,207 @@ +# API Changelog + +This file tracks user-visible changes to the `/api/v1/` surface and the +treasury data pipeline. Each release ships as a single commit on `main` (or +the equivalent merge). Pre-1.0 versions allowed breaking changes; the +project is now operating under a 1.x line and breaking changes here are +flagged as such. + +## v2.1.0 — 2026-05-05 + +Adds a vendor-contract-wide UTxO view and inlines per-project UTxO refs on +the project detail response. Both changes are additive — no breaking +changes to existing endpoints or shapes. 
+
+### Added
+
+- **`GET /api/v1/vendor-contract/utxos`** — paginated list of every
+  currently-unspent UTxO at the shared PSSC, each row labeled with its
+  owning project (`project_id`, `project_name`, `project_status`,
+  `project_db_id`). Lets clients enumerate live vendor-contract state in a
+  single call instead of fanning out across every project. Same
+  unspent-source-of-truth pattern as `/treasury/utxos` and
+  `/projects/:id/utxos` (`yaci_store.address_utxo` ⨯ anti-join on
+  `yaci_store.tx_input`).
+- **`ProjectDetail.current_utxos`** — `GET /api/v1/projects/{project_id}`
+  now includes a `current_utxos` array of `{ tx_hash, output_index,
+  lovelace_amount, slot }` so a single call gives the project's full live
+  state. Sum of `lovelace_amount` equals the existing
+  `financials.current_balance_lovelace`. `ProjectSummary` (the list
+  endpoint item shape) is unchanged.
+
+### Schema
+
+- No DB migration. Both features are read-only joins over existing
+  columns: `treasury.utxo_history.project_db_id` (already populated by
+  fund events + chain tracing) joined to `treasury.projects` and
+  `yaci_store.address_utxo`.
+
+## v2.0.0 — 2026-05-01
+
+Semantic rename pass: split "vendor contract" into the *singleton on-chain
+script address* (the shared PSSC, one row) and the 42 *projects* (one per
+fund event) that sit at it. Old paths are gone (404), not aliased — the
+pre-1.0 breaking-change stance still applies.
+ +### Breaking — paths + +- `/api/v1/vendor-contracts` → `/api/v1/projects` (list + filter) +- `/api/v1/vendor-contracts/{project_id}` → `/api/v1/projects/{project_id}` +- `/api/v1/vendor-contracts/{project_id}/milestones` → + `/api/v1/projects/{project_id}/milestones` +- `/api/v1/vendor-contracts/{project_id}/events` → + `/api/v1/projects/{project_id}/events` +- `/api/v1/vendor-contracts/{project_id}/utxos` → + `/api/v1/projects/{project_id}/utxos` + +### Added + +- **`GET /api/v1/vendor-contract`** — singleton: returns + `{ address, stake_credential, projects: { total, by_status: {...} } }`. + The shared PSSC every project sits at. +- **`GET /api/v1/milestones/{project_id}`** — paginated milestones list + under the `/milestones/` root. Equivalent to + `/projects/{project_id}/milestones`; differs only in URL hierarchy. +- **`GET /api/v1/milestones/by-id/{id}`** — single milestone by integer + database ID. The previous `/milestones/{id}` lookup moved here to free + the parameterised `/milestones/{project_id}` slot for project lookups. + +### Breaking — response shapes + +- `StatusResponse.totals.vendor_contracts` → `totals.projects`. +- `TreasuryStatistics.vendor_contract_count` → `project_count`. +- `Milestone.vendor_contract_id` (FK) is no longer exposed; the canonical + link is via `project_id` (text). +- Numerous internal struct/field renames are not visible in the JSON wire + format but appear in the OpenAPI schema list. + +### Schema + +- Renamed `treasury.vendor_contracts` → `treasury.projects`. +- Renamed FK column `vendor_contract_id` → `project_db_id` in + `treasury.events`, `treasury.milestones`, `treasury.utxo_history`. +- New singleton `treasury.vendor_contracts (id, treasury_id, address, + stake_credential, …)` stores one row per shared PSSC. +- Renamed view `v_vendor_contracts_summary` → `v_projects_summary`; + view bodies updated to use new names. +- Trigger `trg_vendor_contracts_updated_at` → `trg_projects_updated_at`. 
+ +### Internal renames (impact code readers, not the API surface) + +- `find_vendor_contract_from_inputs` → `find_project_from_inputs`. +- `parse_vendor_contract_datum` → `parse_project_datum`; + `ParsedVendorDatum` → `ParsedProjectDatum`. + +### Migration + +Existing deployments must wipe `treasury` schema and re-sync (the +`utxo_history` Postgres triggers come back via the API's startup hook). +There is no in-place column-rename migration shipped — pre-1.x stance. + +## v1.1.0 — 2026-05-01 + +API consistency pass. Several breaking response-shape changes — frontends +update once and stay on `/api/v1/`. + +### Breaking + +- **`/api/v1/status` restructured.** Old flat fields + (`last_sync_slot`, `last_sync_block`, `last_sync_time`, + `database_connected`, `total_events`, `total_vendor_contracts`) + replaced with nested groups: + - `database: { connected, checked_at }` — server-side ISO. + - `sync: { heartbeat, last_event_processed }` — heartbeat is the + server-side ISO of the last sync poll; `last_event_processed` is the + on-chain block time of the most recent TOM event the API has written + (`ChainTime`). + - `chain: { indexer_block, indexer_slot, indexer_time }` — what YACI + Store has reached. `indexer_time` is `ChainTime`. + - `totals: { events, vendor_contracts, events_by_type }`. + +### Other breaking + +- **Timestamps**: every on-chain block-time field is now an object + `{ "unix": 1777623100, "iso": "2026-05-01T08:11:40Z" }` instead of a + bare integer. Affects `EventResponse.block_time`, + `VendorContract*.fund_time` and `last_event_time`, + `MilestoneCompletion.time`, `MilestoneWithdrawal.time`, + `MilestoneArchiveInfo.archived_at`, and `TreasuryResponse.publish_time` + / `initialized_at`. Server-side timestamps (`created_at`, + `updated_at`, `last_updated`) remain ISO strings. +- **Errors**: every non-2xx response now returns a JSON body + `{ "error": { "code", "message", "details"? }, "meta": { "timestamp" } }` + instead of an empty body. 
`code` values: `not_found`, `bad_request`, + `internal`. +- **Pagination**: `/api/v1/treasury/utxos`, + `/api/v1/vendor-contracts/:project_id/milestones`, and + `/api/v1/vendor-contracts/:project_id/utxos` now return + `{ data, pagination, meta }` with `?page=1&limit=50` (max 100). + Previous responses returned an unbounded array under `data`. +- **`destination` on disburse events**: now JSONB preserving the full TOM + `{ label, details }` object instead of flattened to a string. + Released earlier in v1.0.x but listed here for completeness. +- **`vendor_name` and `contract_url`**: dropped from + `treasury.vendor_contracts` and from API responses. They were always + NULL; not part of the TOM spec. + +### Added + +- **`?q=`** full-text search on `/api/v1/events` matching against + `reason`, `destination::text`, and `metadata::text` (case-insensitive + substring). +- **`?from_time=` / `?to_time=`** filters on `/api/v1/milestones` matching + whichever of `complete_time` or `withdraw_time` is set on the row. +- **OpenAPI**: per-`event_type` field-applicability descriptions on + `EventResponse` (which fields apply to which event type), and + documented response/error envelope shapes. + +### Fixed + +- `/api/v1/statistics.events.by_type` now reports real categories instead + of all-`unknown` (the SQL was reading the wrong JSON path). +- `treasury.sync_status.updated_at` now bumps on idle polls so + `/api/v1/statistics` reflects a live heartbeat + ([`KI-SY-01`](known-issues.md#ki-sy-01--treasurysync_statusupdated_at-doesnt-bump-on-idle-ticks)). + +## Pipeline / data-quality changes (no API shape impact) + +These shipped alongside or just before v1.1.0. They affect *what data* +the API serves, not the response shape. + +- **Multi-key vendor datum parser** — `parse_vendor_contract_datum` now + handles the `UTXO-*` family's two-party vendor info constructor. Closes + [`KI-VND-01`](known-issues.md), unblocks [`KI-MIL-01`](known-issues.md). 
+- **Milestone-id ordinal normalisation** — when a complete/withdraw event + uses `MS-N` (1-indexed) but the fund used `m-N` (0-indexed) for the same + project (or vice versa), the lookup now matches by canonical + `milestone_order`. Closes [`KI-OC-01`](known-issues.md). +- **`treasury.utxo_history` + Postgres triggers** — every script-address + UTXO that YACI Store inserts is captured synchronously into a permanent + history table before pruning can run. Resolves the cold-replay + limitations [`KI-VND-04`](known-issues.md), + [`KI-EVT-01`](known-issues.md), [`KI-CR-01`](known-issues.md). Caveat: + the trigger only protects from the moment it's armed, so to recover + pre-existing pruned data you need a full YACI Store re-sync. +- **Label fallback for `UTXO-*` milestones** — when a milestone's metadata + has no `acceptanceCriteria`, the label now falls back to the first line + of `description`. +- **Documentation** — new [`docs/known-issues.md`](known-issues.md) index + with stable IDs, repro SQL, and live counts. Existing docs (`README`, + `api/README`, `database/README`, `docs/architecture.md`, + `indexer/SETUP.md`, `CLAUDE.md`) refreshed for the post-redesign reality. + +## Earlier history + +This file starts at v1.1.0. For commits prior to that, see `git log`. The +big pre-1.1 milestones were: + +- **Milestone-event silent-drop fix** — restructured `process_complete` + and `process_withdraw` so every on-chain TOM event is recorded in + `treasury.events`, even when chain-trace fails. Brought local event + parity from 55/378 to full coverage versus the deployed feed. +- **Milestone lifecycle redesign** — 4 independent boolean flags + (`evidence_provided`, `withdrawn`, `paused`, `archived`) plus archive + model via `superseded_by`. +- **Disburse `destination` JSONB** — column type changed from `TEXT` to + `JSONB` so the TOM `{ label, details }` object is preserved. (Listed + again under v1.1.0 Breaking for visibility.) 
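The milestone-id ordinal normalisation listed above under pipeline changes can be sketched like this. The exact prefix spellings and case handling are assumptions for illustration — the real matching happens in SQL against `treasury.milestones.milestone_order`:

```rust
/// Map both accepted milestone-id spellings onto a canonical 0-based order so
/// that "MS-N" (1-indexed) and "m-(N-1)" (0-indexed) resolve to the same
/// milestone. Returns None for ids that fit neither convention.
fn canonical_milestone_order(id: &str) -> Option<u32> {
    if let Some(n) = id.strip_prefix("MS-") {
        // "MS-N" is 1-indexed: MS-1 -> order 0
        n.parse::<u32>().ok()?.checked_sub(1)
    } else if let Some(n) = id.strip_prefix("m-") {
        // "m-N" is 0-indexed: m-0 -> order 0
        n.parse().ok()
    } else {
        None
    }
}

fn main() {
    // A complete/withdraw event using "MS-3" matches a fund that used "m-2".
    assert_eq!(canonical_milestone_order("MS-3"), Some(2));
    assert_eq!(canonical_milestone_order("m-2"), Some(2));
    // Malformed: 1-indexed ids start at 1.
    assert_eq!(canonical_milestone_order("MS-0"), None);
}
```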
diff --git a/docs/event-processing.md b/docs/event-processing.md new file mode 100644 index 0000000..7f3c799 --- /dev/null +++ b/docs/event-processing.md @@ -0,0 +1,738 @@ +# TOM Event Processing Reference + +Comprehensive reference for how TOM (Treasury Oversight Metadata) events flow from on-chain metadata through the event processor into the treasury database schema. + +**Spec source**: [SundaeSwap treasury-contracts spec.md](https://github.com/SundaeSwap-finance/treasury-contracts/blob/main/offchain/src/metadata/spec.md) + +--- + +## 1. On-Chain Architecture (Corrected) + +### Contract Structure + +``` +Treasury Contract (TRSC) + - ONE unique script address per treasury instance + - Holds the treasury reserve funds + +Vendor Contract (PSSC) + - ONE shared script address for ALL projects + - Each fund tx creates UTXOs at this shared address + - UTXOs belong to specific projects, distinguished by inline datum, NOT by address +``` + +**Critical insight**: The codebase historically assumed each project gets its own unique PSSC script address. In reality, **all projects share ONE vendor contract script address**. The relationship is: + +``` + ┌──────────────────────────────┐ + │ Treasury Contract (TRSC) │ + │ unique script address │ + └──────────┬───────────────────┘ + │ fund events + ▼ + ┌──────────────────────────────┐ + │ Shared Vendor Contract (PSSC) │ + │ ONE script address for ALL │ + │ projects │ + └──────────┬───────────────────┘ + │ + ┌────────────────┼────────────────┐ + ▼ ▼ ▼ + ┌─────────┐ ┌─────────┐ ┌─────────┐ + │ UTXO A │ │ UTXO B │ │ UTXO C │ + │Project 1│ │Project 2│ │Project 3│ + │(datum) │ │(datum) │ │(datum) │ + └─────────┘ └─────────┘ └─────────┘ +``` + +UTXOs at the shared address are distinguished by their **inline datum** (containing milestone amounts, time limits, etc.) and by their **origin** (which fund transaction created them), not by the address they sit at. 
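The diagram above can be made concrete with a small sketch. The address and tx ids are placeholders, and `datum_project` stands in for the real inline datum (milestone amounts, time limits, …) — the point is that every vendor-contract UTXO shares one address, so ownership must come from the outpoint and its datum:

```rust
use std::collections::HashMap;

struct VendorUtxo {
    address: &'static str,     // the ONE shared PSSC script address
    tx_hash: &'static str,     // origin fund tx
    output_index: u32,
    datum_project: &'static str, // stand-in for the inline datum's project info
}

fn main() {
    let pssc = "addr1x_shared_pssc"; // placeholder, not a real address
    let utxos = [
        VendorUtxo { address: pssc, tx_hash: "fund_a", output_index: 0, datum_project: "EC-0008-25" },
        VendorUtxo { address: pssc, tx_hash: "fund_b", output_index: 0, datum_project: "EC-0031-25" },
    ];
    // Grouping by address is useless — every UTXO sits at the one PSSC address…
    assert!(utxos.iter().all(|u| u.address == pssc));
    // …so projects are resolved per outpoint, which is what chain tracing keys on.
    let owner: HashMap<(&str, u32), &str> = utxos
        .iter()
        .map(|u| ((u.tx_hash, u.output_index), u.datum_project))
        .collect();
    assert_eq!(owner[&("fund_a", 0)], "EC-0008-25");
    assert_eq!(owner[&("fund_b", 0)], "EC-0031-25");
}
```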
+ +### Implications for UTXO Tracking + +- `find_project_from_inputs()` traces inputs back through the UTXO chain — the correct and only approach for linking events to projects +- UTXO tracking relies exclusively on chain tracing by specific (tx_hash, output_index) pairs, not by address + +--- + +## 2. TOM Metadata Format + +### Top-Level Structure + +All TOM metadata is submitted under CIP-100 metadata label **1694**: + +```json +{ + "@context": "", + "hashAlgorithm": "blake2b-256", + "txAuthor": "", + "instance": "", + "body": { + "event": "", + ...event-specific fields... + } +} +``` + +- **`@context`**: URL pointing to the metadata specification version (varies by event type) +- **`hashAlgorithm`**: Always `"blake2b-256"` +- **`txAuthor`**: Public key hash; must appear in the transaction's required signers +- **`instance`**: Filters to the configured treasury (matches `TREASURY_INSTANCE` env var) +- **`body.event`**: Determines event type — the processor dispatches on this field + +### Code path for extraction + +``` +event.body → JSON + → body.get("body").get("event") → event_type string + → body.get("instance") → instance string + → match event_type → process_() +``` + +### CIP-100 Text Chunking + +Text fields may be either a plain string or an array of 64-character chunks that must be joined: + +```json +"label": "Short name" + +"description": [ + "This is a long description that has been split into 64-cha", + "racter chunks per the CIP-100 standard for on-chain storag", + "e." +] +``` + +The `extract_text` / `extract_text_from_value` helpers handle both formats, joining arrays with `""` (empty string, no separator). + +--- + +## 3. Event Type Reference + +### publish + +**Purpose**: Creates a new treasury instance by describing the published scriptRegistry datum. 
+ +#### Spec Fields +| Field | Type | Description | +|-------|------|-------------| +| `event` | string | `"publish"` | +| `label` | string | Human-readable name for the instance | +| `description` | string | Markdown-formatted description | +| `expiration` | number | POSIX timestamp for instance expiration | +| `payoutUpperbound` | number | Maximum payout amount | +| `vendorExpiration` | number | Expiration timestamp for vendor contracts | +| `seedUtxo` | object | `{transactionId, outputIndex}` | +| `permissions` | object | Map of action names → permission definitions | + +#### Code Extraction (`process_publish`) +| Metadata Path | Extracted As | DB Column | +|---------------|-------------|-----------| +| `body.permissions` | raw JSON clone | `treasury_contracts.permissions` | + +**Not extracted**: `label`, `description`, `expiration`, `payoutUpperbound`, `vendorExpiration`, `seedUtxo` + +Note: `body.label` is intentionally not extracted — the name is a static label already known to API consumers. + +#### DB Writes +- **UPSERT** `treasury.treasury_contracts` (keyed on `contract_instance`) +- **INSERT** `treasury.events` + +--- + +### initialize + +**Purpose**: Documents the initialization of a treasury instance (stake address withdrawal). + +#### Spec Fields +| Field | Type | Description | +|-------|------|-------------| +| `event` | string | `"initialize"` | +| `reason` | string | Justification (optional) | +| `outputs` | object | Map of output indices → `{identifier, label}` | + +#### Code Extraction (`process_initialize`) +Records the tx hash and block time. Also queries `yaci_store.address_utxo` for the first `addr1x%` output to set `contract_address`, and derives `stake_credential` from that address via bech32 decoding. 
+ +#### DB Writes +- **UPSERT** `treasury.treasury_contracts` — sets `initialized_tx_hash`, `initialized_at`, `contract_address`, `stake_credential` +- **INSERT** `treasury.events` + +**Not extracted**: `reason`, `outputs` + +--- + +### fund + +**Purpose**: Records funds flowing from treasury into the vendor contract, creating a new project. + +#### Spec Fields +| Field | Type | Description | +|-------|------|-------------| +| `event` | string | `"fund"` | +| `identifier` | string | Unique project ID (e.g., `"EC-0008-25"`) | +| `otherIdentifiers` | array | Related project IDs | +| `label` | string | Project title (often includes vendor name by convention) | +| `description` | string | Markdown project description | +| `vendor` | object | `{label: "", details: {anchorUrl, anchorDataHash}}` | +| `contract` | object | `{anchorUrl: "", anchorDataHash}` | +| `milestones` | object | Map of milestone IDs → milestone objects | + +**Spec milestone object** (keyed by ID in an object, e.g., `{"m-0": {...}}`): +| Field | Type | Description | +|-------|------|-------------| +| `identifier` | string | Milestone ID matching datum | +| `label` | string | Human-readable name | +| `description` | string | Markdown description | +| `acceptanceCriteria` | string | Completion criteria | +| `details` | object | Additional details (optional) | + +#### Code Extraction (`process_fund`) +| Metadata Path | Extracted As | DB Column | +|---------------|-------------|-----------| +| `body.identifier` | string | `projects.project_id` | +| `body.label` | `extract_text()` | `projects.project_name` | +| `body.description` | `extract_text()` | `projects.description` | +| `body.vendor.label` | `extract_text_from_value()` | `projects.vendor_address` | +| `body.otherIdentifiers` | string array | `projects.other_identifiers` | +| `body.milestones[].identifier` | string | `milestones.milestone_id` | +| `body.milestones[].label` | `extract_text_from_value()` | `milestones.label` | +| 
`body.milestones[].description` | `extract_text_from_value()` | `milestones.description` | +| `body.milestones[].acceptanceCriteria` | `extract_text_from_value()` | `milestones.acceptance_criteria` | +| `body.milestones[].amount` | i64 | `milestones.amount_lovelace` | + +**Milestone format handling**: Milestones are accepted in both array format (`[{identifier: "m-0", ...}]`) and object format (`{"m-0": {...}}`). For arrays, the `identifier` field inside each element provides the milestone ID. For objects, the key is the milestone ID. + +Additionally queries `yaci_store.address_utxo` for the fund tx to get: +- `contract_address` — first `addr1x%` output address +- `initial_amount_lovelace` — lovelace amount of that output + +**Treasury address fallback**: If the treasury `contract_address` is still null, derives it from the fund tx inputs by finding the `addr1x%` input address that differs from the vendor contract output. Also derives `stake_credential` from the treasury address via bech32 decoding. + +**Datum integration**: After UTXO recording, queries `inline_datum` from the fund tx's largest `addr1x%` script output (ordered by datum length DESC to avoid picking the trivial change-output datum). If available, parses the CBOR datum via `parse_project_datum()` to: +- Store `vendor_payment_key_hash` (TEXT, comma-joined for multi-key datums) on the project row +- Update each milestone's `amount_lovelace`, `time_limit`, and `paused` flag from the datum (overwriting metadata-provided amounts with authoritative on-chain values). Updates by `milestone_order` regardless of current `withdrawn` flag — the fund datum represents initial state. +- Store raw CBOR hex on the UTXO tracking row (`inline_datum_cbor`), but only if the new datum is longer than what's already stored (preserves originals against later corrupting overwrites). + +The parser is partial: vendor info and each milestone parse independently. 
Errors land in `treasury.projects.datum_parse_error` and `treasury.milestones.datum_parse_error`. + +#### DB Writes +- **UPSERT** `treasury.treasury_contracts` (ensure exists) +- **UPSERT** `treasury.vendor_contracts` (singleton PSSC row at the shared script address) +- **INSERT** `treasury.projects` (ON CONFLICT by `project_id` updates `project_name`/`description`) +- **INSERT** `treasury.milestones` (one per milestone, ON CONFLICT DO NOTHING on active key) +- **INSERT** `treasury.events` +- **UPSERT** `treasury.utxo_history` (record output UTXOs for chain tracking, with `inline_datum_cbor` if available) +- **UPDATE** `treasury.projects` — sets `vendor_payment_key_hash` and `datum_parse_error` (from datum, if available) +- **UPDATE** `treasury.milestones` — sets `amount_lovelace`, `time_limit`, `paused` per milestone, plus `datum_parse_error` for individual failures + +--- + +### complete + +**Purpose**: Vendor provides evidence of milestone completion by spending the vendor contract UTXO without withdrawing funds. 
+ +#### Spec Fields +| Field | Type | Description | +|-------|------|-------------| +| `event` | string | `"complete"` | +| `milestones` | object | Map of milestone IDs → `{description, evidence[]}` | + +**Evidence array item**: +| Field | Type | Description | +|-------|------|-------------| +| `label` | string | Evidence description | +| `anchorUrl` | string | Evidence location | +| `anchorDataHash` | string | Document hash (optional) | + +#### Code Extraction (`process_complete`) +| Metadata Path | Extracted As | DB Column | +|---------------|-------------|-----------| +| `body.identifier` | string (fallback) | Used to find project_db_id | +| `body.milestones..description` | `extract_text_from_value()` | `milestones.complete_description` | +| `body.milestones..evidence` | raw JSON clone | `milestones.evidence` | +| `body.milestone` | string (legacy format) | Used to find milestone by ID | + +**Project identification**: First tries `body.identifier` to look up the project by project_id. Falls back to `find_project_from_inputs()` (UTXO chain tracing). + +**Milestone format handling**: Code handles milestones as an object keyed by milestone ID (`.as_object()`), which matches the spec. Also handles legacy single `body.milestone` field as a fallback. + +#### DB Writes +- **UPDATE** `treasury.milestones` — sets `evidence_provided = TRUE`, `complete_tx_hash`, `complete_time`, `complete_description`, `evidence` +- **INSERT** `treasury.events` (one per milestone completed) + +--- + +### disburse + +**Purpose**: Treasury-level fund movement — moves funds from the treasury contract to an external destination (e.g., stablecoin minting). 
**Not milestone-related.** + +#### Spec Fields +| Field | Type | Description | +|-------|------|-------------| +| `event` | string | `"disburse"` | +| `label` | string | Human-readable transaction title | +| `description` | string | Mechanical description of fund usage | +| `justification` | string | Markdown explaining committee remit | +| `destination` | object/array | `{label, details: {anchorUrl, anchorDataHash}}` | +| `estimatedReturn` | number | POSIX timestamp for expected fund return | + +#### Code Extraction (`process_disburse`) +| Metadata Path | Extracted As | DB Column | +|---------------|-------------|-----------| +| `instance` (top-level) | string | Used to look up `treasury_id` directly | +| `body.destination` | `extract_text()` | `events.destination` | + +**Not extracted**: `label`, `description`, `justification`, `estimatedReturn` + +Disburse is a treasury-level operation. The code looks up `treasury_id` from `instance` and does **not** call `find_project_from_inputs`. `project_db_id` is always `None` for disburse events. + +**Note**: `destination` extraction uses `extract_text()` which expects a string or string array, while the spec defines destination as an object with `label`/`details`. This means structured destination metadata may not be fully captured. + +#### DB Writes +- **INSERT** `treasury.events` (with `destination` JSONB, `project_db_id = NULL`) + +--- + +### withdraw + +**Purpose**: Vendor claims matured milestone funds from the vendor contract. 
+ +#### Spec Fields +| Field | Type | Description | +|-------|------|-------------| +| `event` | string | `"withdraw"` | +| `milestones` | object | Map of milestone IDs → `{comment}` | + +#### Code Extraction (`process_withdraw`) +| Metadata Path | Extracted As | DB Column | +|---------------|-------------|-----------| +| `body.identifier` | string | Used to find project_db_id | +| `body.milestones` | object keyed by milestone ID | Iterates over all milestone IDs | +| `body.milestone` | string (legacy fallback) | Used to find milestone by ID if `milestones` absent | + +**Milestone format handling**: Code first checks for `body.milestones` (plural) as an object keyed by milestone ID (spec format, handles multiple milestones per withdraw). Falls back to `body.milestone` (singular string) for legacy single-milestone format. + +Additionally queries `yaci_store.address_utxo` for the withdraw tx to calculate `withdraw_amount` (sum of non-script outputs via `owner_addr NOT LIKE 'addr1x%'`). + +**Not extracted**: `milestones..comment` + +#### DB Writes +- **UPDATE** `treasury.milestones` — sets `withdrawn = TRUE`, `withdraw_tx_hash`, `withdraw_time`, `withdraw_amount` +- **INSERT** `treasury.events` + +--- + +### pause + +**Purpose**: Oversight committee prevents milestone fund withdrawal pending resolution. + +#### Spec Fields +| Field | Type | Description | +|-------|------|-------------| +| `event` | string | `"pause"` | +| `milestones` | object | Map of milestone IDs → `{reason, resolution}` | + +#### Code Extraction (`process_pause`) +| Metadata Path | Extracted As | DB Column | +|---------------|-------------|-----------| +| `body.identifier` | string | Used to find project_db_id | +| `body.reason` | `extract_text()` | `events.reason` | + +**Per-milestone pause via datum**: After identifying the vendor contract, calls `update_milestone_pause_from_datum()` which parses the output datum of the pause transaction. 
Each milestone in the datum has a `Constr(0|1, [])` pause flag (0=active, 1=paused), and the code updates the `paused` boolean on each milestone row accordingly. + +**Contract-level status derivation**: After updating per-milestone flags, the code derives contract status: `paused` if ALL milestones are paused, `active` if no milestones are paused. Mixed state leaves the contract status unchanged. + +**Not extracted**: per-milestone `reason`, `resolution` from metadata + +#### DB Writes +- **UPDATE** `treasury.milestones` — sets `paused` flag per milestone (from datum) +- **UPDATE** `treasury.projects` — sets `status` to `'paused'` or `'active'` (derived from per-milestone state) +- **INSERT** `treasury.events` (with reason) + +--- + +### resume + +**Purpose**: Oversight committee resumes previously paused milestone payments. + +#### Spec Fields +| Field | Type | Description | +|-------|------|-------------| +| `event` | string | `"resume"` | +| `milestones` | object | Map of milestone IDs → `{reason}` | + +#### Code Extraction (`process_resume`) +| Metadata Path | Extracted As | DB Column | +|---------------|-------------|-----------| +| `body.identifier` | string | Used to find project_db_id | + +**Per-milestone resume via datum**: Same mechanism as pause. After identifying the vendor contract, calls `update_milestone_pause_from_datum()` which parses the output datum to read each milestone's pause flag and updates the `paused` boolean per milestone row. + +**Contract-level status derivation**: Same as pause — `active` if no milestones paused, `paused` if all milestones paused. 
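The status derivation shared by pause and resume can be sketched as a pure function. This is illustrative only — the real logic runs as SQL updates in the event processor, and the empty-milestones guard is an assumption:

```rust
/// Derive project-level status from per-milestone pause flags:
/// all paused -> "paused", none paused -> "active", mixed -> unchanged.
fn derive_project_status(paused_flags: &[bool], current: &str) -> String {
    if !paused_flags.is_empty() && paused_flags.iter().all(|&p| p) {
        "paused".to_string()
    } else if paused_flags.iter().any(|&p| p) {
        current.to_string() // mixed state: leave status as-is
    } else {
        "active".to_string() // no milestone paused
    }
}

fn main() {
    assert_eq!(derive_project_status(&[true, true], "active"), "paused");
    assert_eq!(derive_project_status(&[false, false], "paused"), "active");
    assert_eq!(derive_project_status(&[true, false], "active"), "active"); // unchanged
}
```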
+ +**Not extracted**: per-milestone `reason` from metadata + +#### DB Writes +- **UPDATE** `treasury.milestones` — sets `paused` flag per milestone (from datum) +- **UPDATE** `treasury.projects` — sets `status` to `'paused'` or `'active'` (derived from per-milestone state) +- **INSERT** `treasury.events` + +--- + +### modify + +**Purpose**: Vendor and committee agree to alter payout amounts or milestone terms. + +#### Spec Fields +| Field | Type | Description | +|-------|------|-------------| +| `event` | string | `"modify"` | +| `identifier` | string | Project ID being modified | +| `otherIdentifiers` | array | Related project IDs | +| `label` | string | Updated project title | +| `description` | string | Updated project description | +| `reason` | string | Markdown justification | +| `vendor` | object | Updated vendor info (same format as fund) | +| `contract` | object | Updated contract info (same format as fund) | +| `milestones` | object/array | Updated milestone definitions | + +#### Code Extraction (`process_modify`) +| Metadata Path | Extracted As | DB Column | +|---------------|-------------|-----------| +| `body.identifier` | string | Used to find project_db_id | +| `body.label` | `extract_text()` | `projects.project_name` (COALESCE update) | +| `body.description` | `extract_text()` | `projects.description` (COALESCE update) | +| `body.vendor.label` | `extract_text_from_value()` | `projects.vendor_address` (COALESCE update) | +| `body.reason` | `extract_text()` | `events.reason` | +| `body.milestones` | array or object of milestones | Archives old, inserts new | + +**Naming fields update**: Before processing milestones, the code extracts `label`, `description`, and `vendor.label` and updates the project row using COALESCE (only overwrites if the new value is non-null). + +**Milestone format handling**: Same as fund — milestones are accepted in both array format (`[{identifier: "m-0", ...}]`) and object format (`{"m-0": {...}}`). 
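The dual milestone format accepted by the fund and modify handlers can be sketched as a small normalization step. This is an illustrative sketch, not the actual handler code: the real implementation works on parsed metadata (`serde_json::Value`), and the `Meta` enum and `normalize_milestones` name here are stand-ins.

```rust
// Simplified stand-in for serde_json::Value (illustrative only).
#[derive(Debug, Clone)]
enum Meta {
    Str(String),
    Arr(Vec<Meta>),
    Obj(Vec<(String, Meta)>),
}

/// Normalize both on-chain milestone shapes into (milestone_id, body) pairs:
/// - object form: { "m-0": { ... }, ... }        (keys are the IDs)
/// - array form:  [ { "identifier": "m-0", ... }, ... ]
fn normalize_milestones(milestones: &Meta) -> Vec<(String, Meta)> {
    match milestones {
        // Object form: already keyed by milestone ID.
        Meta::Obj(entries) => entries.clone(),
        // Array form: pull the ID out of each element's "identifier" field.
        Meta::Arr(items) => items
            .iter()
            .filter_map(|item| {
                let Meta::Obj(fields) = item else { return None };
                fields.iter().find_map(|(k, v)| match v {
                    Meta::Str(id) if k.as_str() == "identifier" => {
                        Some((id.clone(), item.clone()))
                    }
                    _ => None,
                })
            })
            .collect(),
        // Anything else carries no milestones.
        _ => Vec::new(),
    }
}

fn main() {
    let array_form = Meta::Arr(vec![Meta::Obj(vec![(
        "identifier".to_string(),
        Meta::Str("m-0".to_string()),
    )])]);
    let object_form = Meta::Obj(vec![("m-0".to_string(), Meta::Obj(Vec::new()))]);

    // Both shapes normalize to the same milestone keys.
    assert_eq!(normalize_milestones(&array_form)[0].0, "m-0");
    assert_eq!(normalize_milestones(&object_form)[0].0, "m-0");
}
```

Either input shape ends up as a list keyed by milestone ID, which is what the per-milestone DB writes iterate over.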
+ +Milestone field extraction is identical to fund (identifier, label, description, acceptanceCriteria, amount). + +#### DB Writes +- **UPDATE** `treasury.projects` — COALESCE update of `project_name`, `description`, `vendor_address` +- **UPDATE** `treasury.milestones` — sets `archived = TRUE`, `archived_by_tx_hash`, `archived_at` for all active milestones +- **INSERT** `treasury.milestones` — new milestone rows +- **UPDATE** `treasury.milestones` — sets `superseded_by` FK linking old → new rows with matching milestone_id +- **INSERT** `treasury.events` (with reason) + +--- + +### cancel + +**Purpose**: Special case of modify where project is completely cancelled and refunded. + +#### Spec Fields +| Field | Type | Description | +|-------|------|-------------| +| `event` | string | `"cancel"` | +| `reason` | string | Markdown explanation for cancellation | + +#### Code Extraction (`process_cancel`) +| Metadata Path | Extracted As | DB Column | +|---------------|-------------|-----------| +| `body.identifier` | string | Used to find project_db_id | +| `body.reason` | `extract_text()` | `events.reason` | + +#### DB Writes +- **UPDATE** `treasury.projects` — sets `status = 'cancelled'` +- **INSERT** `treasury.events` (with reason) + +--- + +### sweep + +**Purpose**: Returns surplus funds from treasury or vendor contracts back to the Cardano treasury. + +#### Spec Fields +| Field | Type | Description | +|-------|------|-------------| +| `event` | string | `"sweep"` | +| `comment` | string | Markdown explanation (optional; metadata may be omitted entirely) | + +#### Code Extraction (`process_sweep`) +Minimal — only looks up treasury_id from instance. + +**Not extracted**: `comment` + +#### DB Writes +- **INSERT** `treasury.events` + +Note: Code also matches `"sweeptreasury"` and `"sweepvendor"` as aliases. + +--- + +### reorganize + +**Purpose**: Documents fund splitting, merging, or rebalancing operations. 
+
+#### Spec Fields
+| Field | Type | Description |
+|-------|------|-------------|
+| `event` | string | `"reorganize"` |
+| `reason` | string | Justification (optional) |
+| `outputs` | object | Map of output indices → `{identifier, label}` |
+
+#### Code Extraction (`process_reorganize`)
+Minimal — only looks up treasury_id from instance.
+
+**Not extracted**: `reason`, `outputs`
+
+#### DB Writes
+- **INSERT** `treasury.events`
+
+---
+
+## 4. Field Extraction Details
+
+### Text Extraction Helpers
+
+```rust
+fn extract_text(obj: &Value, field: &str) -> Option<String>
+fn extract_text_from_value(value: Option<&Value>) -> Option<String>
+```
+
+Both handle two formats:
+- **String**: returned as-is
+- **Array of strings**: joined with `""` (empty string — no separator)
+
+**Known issue**: The join with empty string means `["Hello ", "World"]` → `"Hello World"` (correct) but `["Hello", "World"]` → `"HelloWorld"` (missing space). CIP-100 chunks at fixed byte boundaries, so this typically works for continuous text but could produce incorrect results at chunk boundaries if the original text doesn't align.
+
+### Vendor Name vs Label
+
+The TOM spec defines the `vendor` object as:
+```json
+{
+  "vendor": {
+    "label": "Vendor Company Name",
+    "details": {
+      "anchorUrl": "https://...",
+      "anchorDataHash": "..."
+    }
+  }
+}
+```
+
+The TOM spec has no `vendor.name` field — `vendor_name` was a deprecated column on the old `vendor_contracts` table and has been dropped. `vendor.label` is extracted via `extract_text_from_value()` into `projects.vendor_address`.
+
+In practice, vendor identity comes from the top-level `body.label`, which by convention includes the vendor name (e.g., `"Tastenkunst GmbH - Eternl Maintenance"`). The `vendor.label` field in real metadata contains the vendor's payment address (a Cardano address), not their display name.
+
+### Contract URL
+
+The `contract_url` column was a deprecated column on the old `vendor_contracts` table and has been dropped. 
Contract URL extraction was removed as no on-chain data populates this field. + +### Milestone Format: Object vs Array + +| Context | Spec Format | Code Handles | +|---------|------------|-------------| +| fund | Object keyed by ID: `{"m-0": {...}}` | Both array `[{identifier: "m-0", ...}]` and object `{"m-0": {...}}` | +| complete | Object keyed by ID: `{"m-0": {...}}` | Object keyed by ID (correct) | +| modify | Same as fund | Both array and object (same as fund handler) | +| withdraw | Object keyed by ID: `{"m-0": {...}}` | Object `milestones` (plural, keyed by ID) + singular string `milestone` fallback | +| pause | Object keyed by ID: `{"m-0": {...}}` | Per-milestone via inline datum parsing (not from metadata milestones field) | +| resume | Object keyed by ID: `{"m-0": {...}}` | Per-milestone via inline datum parsing (not from metadata milestones field) | + +Real on-chain metadata uses both arrays and objects for fund/modify events. The code handles both formats. + +--- + +## 5. UTXO Chain Tracking + +### How It Works + +When a `fund` event is processed, the code records all output UTXOs from that transaction in `treasury.utxo_history` with the `project_db_id`. Subsequent events (complete, withdraw, etc.) spend those UTXOs, so the processor can trace backwards to find which project an event belongs to. + +In addition, Postgres triggers installed by `install_utxo_history_triggers` (`api/src/services/sync.rs`) capture every script-address UTXO YACI Store inserts into `treasury.utxo_history` synchronously, regardless of any TOM event. This makes the chain trace robust against YACI Store's UTXO pruning. + +### `find_project_from_inputs()` + +``` +1. Get inputs to this tx: SELECT tx_hash, output_index FROM yaci_store.tx_input + WHERE spent_tx_hash = $1 +2. For each input, look up: SELECT project_db_id FROM treasury.utxo_history + WHERE tx_hash = $1 AND output_index = $2 +3. If found: mark old UTXO as spent, record new output UTXOs with same project_db_id +4. 
Return best-scoring project_db_id (see disambiguation below) +``` + +This correctly traces the UTXO chain regardless of address, because it tracks by specific (tx_hash, output_index) pairs. + +When multiple inputs map to different projects (e.g. a sibling project's fee/collateral input), the trace scores each candidate against `body.milestones` keys via `collect_milestone_id_hints` and prefers the one whose stored milestones match. + +When recording new output UTXOs (step 3), the code also stores `inline_datum_cbor` if the output has an inline datum. This datum is used later by pause/resume processing. + +--- + +## 5a. Datum Integration + +### CBOR Datum Parser (`parsers/datum.rs`) + +The datum parser decodes inline Plutus datums from CBOR hex into structured data. It uses the `pallas` library for CBOR decoding. + +### Datum Structure + +```text +Constr(0, [ + Constr(0, [ByteString(vendor_payment_key_hash)]), + Array([ + Constr(0, [BigInt(time_limit), Map(value), Constr(0|1, [])]), // per milestone + ... 
+ ]) +]) +``` + +- **vendor_payment_key_hash**: hex-encoded byte string identifying the vendor's payment key +- **Per-milestone fields**: + - `BigInt(time_limit)` — POSIXTime in milliseconds, the milestone's expiration + - `Map(value)` — Plutus Value map, structured as `{"": {"": lovelace_amount}}` (ADA policy ID is empty bytestring) + - `Constr(0|1, [])` — pause flag: constructor 0 (tag 121) = active, constructor 1 (tag 122) = paused + +### When Datums Are Parsed + +| Context | Function | What happens | +|---------|----------|-------------| +| `fund` event | `parse_project_datum()` | Populates `vendor_payment_key_hash`, per-milestone `amount_lovelace`, `time_limit`, `paused` | +| `pause` event | `update_milestone_pause_from_datum()` | Updates per-milestone `paused` flags, derives project status | +| `resume` event | `update_milestone_pause_from_datum()` | Updates per-milestone `paused` flags, derives project status | +| UTXO chain tracking | `find_project_from_inputs()` | Stores `inline_datum_cbor` on new UTXO rows for later use | + +### Fields Extracted + +| Datum Field | DB Column | Table | +|-------------|-----------|-------| +| Vendor info key hashes (comma-joined for multi-key) | `vendor_payment_key_hash` | `projects` | +| Per-milestone `time_limit` | `time_limit` | `milestones` | +| Per-milestone lovelace from Value map | `amount_lovelace` | `milestones` | +| Per-milestone `Constr(0\|1)` | `paused` | `milestones` | +| Parse failures (top-level or vendor-info) | `datum_parse_error` | `projects` | +| Per-milestone parse failures | `datum_parse_error` | `milestones` | + +### Prerequisite + +Requires `store.script.enabled=true` in YACI Store configuration (`indexer/application.properties`) so that `inline_datum` is populated on `address_utxo` rows. If disabled, datum parsing is skipped gracefully. + +--- + +## 6. Known Bugs & Limitations (Resolved) + +All 11 bugs have been fixed. This section documents the original issues and their resolutions. 
+
+### Critical (Fixed)
+
+**1. ~~`sync_address_utxos()` misassigns UTXOs~~** — FIXED: Deleted `sync_utxos()` and `sync_address_utxos()`. UTXO tracking now relies exclusively on `find_project_from_inputs()` chain tracing.
+
+**2. ~~`vendor.name` always null~~** — FIXED: `vendor_name` column is deprecated (always null). TOM spec has no `vendor.name` field. `vendor.label` correctly maps to `vendor_address`.
+
+### High (Fixed)
+
+**3. ~~Disburse events incorrectly linked to vendor contracts~~** — FIXED: `process_disburse` now takes `instance` parameter and looks up `treasury_id` directly. No longer calls `find_project_from_inputs`. `project_db_id` is always `None` for disburse events.
+
+**4. ~~Multiple UTXO inputs → first match wins~~** — FIXED: `find_project_from_inputs` now scores each candidate project against the event's `body.milestones` keys via `collect_milestone_id_hints` (see section 5) and prefers the project whose stored milestones match, rather than taking whichever input resolves first (e.g. a sibling project's fee/collateral input).
+
+**5. ~~Pause/resume are contract-level, spec says milestone-level~~** — FIXED: Added `paused` boolean flag on milestones. `process_pause`/`process_resume` now parse the output datum to determine per-milestone pause state via `update_milestone_pause_from_datum()`. Contract-level status is derived: paused if ALL milestones paused, active if none paused.
+
+**6. ~~Modify doesn't update naming fields~~** — FIXED: `process_modify` now extracts and updates `project_name`, `description`, `vendor_address` using COALESCE before processing milestones.
+
+### Medium (Fixed)
+
+**7. ~~Array text concat has no separator~~** — Correct behavior: CIP-100 splits text at fixed 64-byte boundaries, so `join("")` correctly reconstructs the original text. Added explanatory comment.
+
+**8. ~~Fund milestones as array vs spec object~~** — FIXED: Both `process_fund` and `process_modify` now handle milestones as either an array `[{identifier: "m-0", ...}]` or an object `{"m-0": {...}}`.
+
+**9. ~~No slot-level ordering within blocks~~** — FIXED: Added `m.tx_hash ASC` as secondary sort in both `sync_all_events` and `sync_new_events` queries. 
+ +**10. ~~`contract` field extraction assumes string~~** — N/A: `contract_url` extraction was removed (deprecated column, always null). + +**11. ~~Withdraw handles single milestone only~~** — FIXED: `process_withdraw` now checks for `milestones` object (plural, keyed by ID) first, falling back to singular `milestone` field for legacy format. + +--- + +## 7. Debugging Queries + +### Compare raw metadata vs stored values for a project + +```sql +-- Get raw metadata for a project's fund event +SELECT e.tx_hash, e.metadata +FROM treasury.events e +JOIN treasury.projects p ON p.id = e.project_db_id +WHERE p.project_id = 'EC-0008-25' AND e.event_type = 'fund'; + +-- Compare with stored values +SELECT project_id, project_name, vendor_address, description +FROM treasury.projects +WHERE project_id = 'EC-0008-25'; +``` + +### Check for duplicate contract_addresses across projects + +```sql +-- All projects sharing the same contract address (expected: all share the singleton PSSC) +SELECT contract_address, COUNT(*) as project_count, + array_agg(project_id ORDER BY project_id) as projects +FROM treasury.projects +WHERE contract_address IS NOT NULL +GROUP BY contract_address +HAVING COUNT(*) > 1; +``` + +### Verify UTXO assignment correctness + +```sql +-- Check if UTXOs at the shared address are spread across projects or concentrated on one +SELECT u.project_db_id, p.project_id, COUNT(*) as utxo_count, + SUM(u.lovelace_amount) as total_lovelace +FROM treasury.utxo_history u +JOIN treasury.projects p ON p.id = u.project_db_id +WHERE NOT u.spent +GROUP BY u.project_db_id, p.project_id +ORDER BY utxo_count DESC; +``` + +### Check UTXO chain integrity for a project + +```sql +-- Follow the UTXO chain for a specific project +WITH RECURSIVE utxo_chain AS ( + SELECT u.tx_hash, u.output_index, u.spent, u.spent_tx_hash, u.project_db_id, 1 as depth + FROM treasury.utxo_history u + JOIN treasury.projects p ON p.id = u.project_db_id + WHERE p.project_id = 'EC-0008-25' + AND u.tx_hash = 
p.fund_tx_hash + + UNION ALL + + SELECT u.tx_hash, u.output_index, u.spent, u.spent_tx_hash, u.project_db_id, uc.depth + 1 + FROM treasury.utxo_history u + JOIN utxo_chain uc ON u.tx_hash = uc.spent_tx_hash + WHERE uc.spent = true AND uc.depth < 20 +) +SELECT * FROM utxo_chain ORDER BY depth; +``` + +### Compare events across projects + +```sql +-- All events with project context, ordered by time +SELECT e.event_type, e.block_time, e.tx_hash, + p.project_id, p.project_name +FROM treasury.events e +LEFT JOIN treasury.projects p ON p.id = e.project_db_id +ORDER BY e.block_time DESC +LIMIT 50; +``` + +### Inspect metadata for a specific event type + +```sql +-- View raw metadata for all complete events +SELECT e.tx_hash, e.block_time, + p.project_id, + e.metadata->'body'->'milestones' as milestones_meta +FROM treasury.events e +LEFT JOIN treasury.projects p ON p.id = e.project_db_id +WHERE e.event_type = 'complete'; +``` diff --git a/docs/known-issues.md b/docs/known-issues.md new file mode 100644 index 0000000..4bc6444 --- /dev/null +++ b/docs/known-issues.md @@ -0,0 +1,760 @@ +# Known issues — data quality and behavioural quirks + +> **Last refreshed:** 2026-05-03 (post six-bug-cascade fix + true cold +> resync) against commit `6db1581`+. Rerun the per-entry repro SQL for +> fresh numbers. +> +> **Verified resolved by cold resync:** +> - **KI-VND-01** — `vendor_payment_key_hash` NULL: **10/42 → 0/42** +> - **KI-MIL-01 (datum-derived)** — NULL `amount_lovelace`/`time_limit`: +> **136/386 → 16/364** (all 16 are KI-MOD-01 modify-event milestones). +> - **KI-VND-05** — corrupted utxo_history datums: **resolved** by cold +> resync + the merged-source `get_script_utxo_for_tx` query (bug #6). +> - **KI-EVT-01-residual** — 12/413 events still NULL, all clustered on +> the 2 KI-MOD-01-affected projects' descendants (slight regression +> from 4 pre-cold-resync — likely milestone-id-hint ambiguity in +> `find_project_from_inputs` when modify events introduce new IDs). 
+> - **KI-SY-02** — Phase 1 (contiguous-success watermark) shipped. +> - **KI-VND-04**, **KI-CR-01**, **KI-UTX-03** — confirmed clean. +> +> **Six distinct bugs were the cause of the KI-VND-01 cascade** (not a +> single parser-strictness defect): +> 1. Sync race during cold catch-up — `sync_all_events` only ran once at +> startup, racing yaci_store's address_utxo ingestion. **Fix:** +> periodic 10-min `sync_all_events` task. +> 2. `vendor_payment_key_hash VARCHAR(56)` rejected the 113-char +> multi-key hash. **Fix:** widened to `TEXT`. +> 3. `get_script_utxo_for_tx` LIMIT 1 with no ORDER BY picked a tiny +> `d87980` reference output instead of the kilobyte project datum. +> **Fix:** ORDER BY length DESC. +> 4. `process_fund` blindly overwrote a good captured datum with the bad +> one when bug #3 fired. **Fix:** preserve-larger-datum guard. +> 5. `process_fund`'s milestone-update filtered `NOT withdrawn`, breaking +> index alignment with the fund datum on re-runs. **Fix:** removed. +> 6. `get_script_utxo_for_tx` queried yaci_store first and never fell +> back to `treasury.utxo_history` even when the latter had a longer +> captured datum (only surfaces post-resync because some funds have +> a *spent-and-pruned* vendor-contract output and an *unspent* +> treasury reference output — yaci_store retains the small one). +> **Fix:** UNION ALL across both sources, ORDER BY length DESC. +> +> **Schema refactor note:** project-level columns moved from +> `treasury.vendor_contracts` to `treasury.projects`. Milestones and +> events FK via `project_db_id`. `treasury.utxos` removed in favour of +> `treasury.utxo_history`. 
+>
+> **Still open:** KI-OC-02 (on-chain limitation, can't fix), KI-MOD-01
+> (modify events don't reflect updated milestone amounts / time limits in
+> the API — new milestone rows ship with NULL datum-derived fields), small
+> KI-EVT-01 regression on KI-MOD-01-affected projects (12 NULL events),
+> KI-FIN-04 (per-project balance under-counts the raw PSSC total when
+> chain trace can't attribute every UTXO).
+
+## How to use this doc
+
+- Each entry has a stable ID (`KI-<AREA>-<NN>`) referenced from PRs and issues.
+- "Repro query" runs as-is against the local Postgres
+  (`postgresql://postgres:postgres@localhost:5433/administration_data`).
+- "Current count" is point-in-time at the date above.
+- Entries are split into:
+  - **Section A — NULL fields** (the data-quality holes)
+  - **Section B — On-chain inconsistencies** (chain data the code can't fully reconcile)
+  - **Section C — Cold-replay limitation** (UTXO pruning during fresh local sync)
+  - **Section D — Sync-loop quirks** (operational gotchas)
+  - **Section E — Spec/code mismatches**
+
+When opening an issue, cite the ID. When fixing one, remove the entry (or
+update its count to zero) in the same PR.
+
+---
+
+## Section A — NULL fields
+
+### A.1 `treasury.treasury_contracts`
+
+All nullable columns are populated for the only treasury we currently
+track. No active anomalies — listed for completeness because the schema
+allows NULL. 
+ +| Column | When NULL is expected | When NULL is anomalous | +|---|---|---| +| `contract_address`, `stake_credential` | Before `initialize` event | Initialize ran but neither `yaci_store.address_utxo` nor `treasury.utxo_history` had the script output (`process_initialize` in `event_processor.rs`) | +| `publish_tx_hash`, `publish_time` | Treasury never published on chain | A publish event was received but didn't write — investigate | +| `initialized_tx_hash`, `initialized_at` | Treasury never initialized | Same as above for initialize | +| `permissions` | Publish metadata didn't include the field | Publish metadata included it but extraction failed | + +**Repro query** + +```sql +SELECT id, contract_instance, + contract_address IS NULL AS missing_addr, + publish_tx_hash IS NULL AS missing_publish, + initialized_tx_hash IS NULL AS missing_init, + permissions IS NULL AS missing_perms +FROM treasury.treasury_contracts; +``` + +**Current count:** 0 anomalous NULLs across 1 row. + +--- + +### A.2 `treasury.projects` (formerly project-level cols on `vendor_contracts`) + +#### KI-VND-01 — `vendor_payment_key_hash` NULL on `UTXO-*` projects *(RESOLVED — six-bug cascade, verified post cold resync)* + +The original "10/42 NULL" symptom turned out to be a cascade of six +separate bugs, not the single parser-strictness defect the previous +analysis suspected. The parser was always correct on real CBOR (proven +by four fixture tests in `api/src/parsers/datum.rs`); a postgres +column-width error in the `UPDATE` step was being swallowed by an +all-or-nothing `match` and a misleading DEBUG-level log. + +After the cold resync verified all six fixes work end-to-end: +**0/42 NULL key hashes, 0 parse errors.** + +##### The six bugs and their fixes + +1. **Sync race during cold catch-up** — `sync_all_events` ran once at + API startup. 
During catch-up, `yaci_store.transaction_metadata` was + visible to the API before the matching `address_utxo` row, so + `get_script_utxo_for_tx` returned `None` at fund-time and the datum + lookup never happened. Once yaci_store caught up, the trigger + captured the datum into `treasury.utxo_history` — but `process_fund` + wasn't re-run. + **Fix:** added a separate `tokio::spawn` task in `sync.rs` running + `sync_all_events` every 10 minutes. The idempotent `ON CONFLICT DO + UPDATE` chain backfills as soon as yaci_store catches up. +2. **`vendor_payment_key_hash` column too narrow** — `VARCHAR(56)` + rejected the 113-char joined multi-key hash from UTXO-* projects + (`hash1,hash2`) with `value too long for type character varying(56)`. + The all-or-nothing `match parse_project_datum() { Ok => …; Err => + debug!(…) }` swallowed the error. + **Fix:** widened to `TEXT` in `database/schema/treasury.sql`, + `database/init/02-treasury-schema.sql`, plus an `ALTER TABLE` against + the live DB. +3. **`get_script_utxo_for_tx` picked the wrong UTXO** — `LIKE 'addr1x%' + LIMIT 1` with no `ORDER BY` could return the change/treasury output + carrying an empty `Constr(0, [])` datum (`d87980`, 3 bytes) instead + of the vendor-contract output with the actual project datum + (kilobytes). On fund txs that produce two `addr1x*` outputs (vendor + contract + treasury change), this was nondeterministic. + **Fix:** `ORDER BY length(COALESCE(inline_datum, '')) DESC` — the + largest datum reliably points to the vendor contract output. +4. **`process_fund` overwrote the captured datum** — when bug #3 fired, + the `UPDATE treasury.utxo_history SET inline_datum_cbor = $1` at the + end of `process_fund` blindly wrote the bad 3-byte datum over a + previously-captured 1.3kB good datum. Once corrupted and yaci_store + pruned the source, recovery required a true cold resync. 
+   **Fix:** `WHERE inline_datum_cbor IS NULL OR length($1) >
+   length(inline_datum_cbor)` — only overwrite with a *better* datum.
+5. **`process_fund` filtered `NOT withdrawn`** — the milestone-update
+   loop selected only non-withdrawn rows. Fine on first run, but on a
+   periodic re-run after some milestones became withdrawn, index
+   alignment between the (full, fund-time) datum array and the
+   (filtered, current-state) DB rows broke, leaving the now-withdrawn
+   milestones permanently NULL.
+   **Fix:** removed `AND NOT withdrawn` from the select. The fund tx's
+   datum represents *initial* state; we always update by
+   `milestone_order` regardless of current withdrawn flag.
+6. **`get_script_utxo_for_tx` never preferred `utxo_history` over
+   yaci_store when both had a row** — the yaci_store query ran first
+   and returned whatever was there. For some fund txs (e.g.
+   `b39d013c…`, `5bc5a75e…`) the *vendor-contract* output was spent and
+   pruned from yaci_store but captured by the trigger into
+   utxo_history; the *unspent* treasury-reference output (`d87980`,
+   3 bytes) survived in yaci_store. Result: yaci_store returned the
+   trivial datum and we never consulted utxo_history's real one.
+   This bug was invisible on the first cold-resync test because the
+   spent-and-pruned outputs were re-fetched while the trigger was
+   armed; it surfaced only when querying for fund-time state of older
+   txs after their vendor-contract outputs had since been spent.
+   **Fix:** `UNION ALL` yaci_store and utxo_history in a single query,
+   `ORDER BY length(datum) DESC LIMIT 1`. Source-agnostic: always picks
+   the longer datum across both.
+
+##### Defensive hardening also landed
+
+- **Partial parser** — `parse_project_datum` now returns a partial
+  `ParsedProjectDatum` (optional `vendor_payment_key_hash` and
+  `vendor_info_error`, a per-milestone list of parse outcomes, and an
+  optional `top_level_error`) so vendor info persists even when
+  individual milestones fail to parse. 
+- **`datum_parse_error TEXT`** columns added to `treasury.projects` and + `treasury.milestones` for SQL-queryable diagnostics. +- **`tracing::debug!` → `tracing::warn!`** for parse failures so they + appear in default logs. +- **Four real-CBOR fixture tests** in `api/src/parsers/datum.rs` + (UTXO-EMI-0001-25, UTXO-EC-0002-25-01, UTXO-EC-0002-25-03, + partial-parse smoke test). + +##### Verified counts (after cold resync from `STORE_CARDANO_SYNC_START_SLOT`) + +| Metric | Before | After | +|---|---:|---:| +| `treasury.projects` NULL `vendor_payment_key_hash` | 10 / 42 | **0 / 42** | +| `treasury.projects.datum_parse_error` set | n/a | **0 / 42** | +| `treasury.milestones` NULL `amount_lovelace` (active) | 136 / 386 | **16 / 364** | + +The remaining 16 NULL milestone amounts are all from KI-MOD-01 (modify +events created new milestone rows with new IDs that don't pick up the +new contract output's datum). 8 each on `UTXO-EC-0002-25-03` and +`UTXO-EC-0002-25-04`. + +##### Repro queries + +```sql +SELECT project_id, fund_tx_hash, datum_parse_error +FROM treasury.projects +WHERE vendor_payment_key_hash IS NULL OR datum_parse_error IS NOT NULL +ORDER BY project_id; + +SELECT p.project_id, COUNT(*) AS missing +FROM treasury.milestones m JOIN treasury.projects p ON p.id = m.project_db_id +WHERE NOT m.archived AND m.amount_lovelace IS NULL +GROUP BY p.project_id ORDER BY 2 DESC; +``` + +#### KI-VND-05 — Datum corruption from prior bug #4 *(RESOLVED — cold resync + bug #6 fix)* +- **Was:** `UTXO-EC-0002-25-03` (fund tx `b39d013c…`) and + `EC-0013(1,2,7)-25` (fund tx `5bc5a75e…`) had their captured datums + overwritten with `d87980` (6 bytes) before bug #4 was fixed. +- **Resolution path:** the cold resync from + `STORE_CARDANO_SYNC_START_SLOT` (with the trigger armed before + yaci_store ingestion) captured the original kilobyte-scale datums + into `treasury.utxo_history`. 
The merged-source query (bug #6 fix in + `get_script_utxo_for_tx`) ensures the captured datum wins over the + surviving 3-byte yaci_store reference output. +- **Verified:** `b39d013c…` output 0 = 1320 bytes, `5bc5a75e…` output 0 + = 1414 bytes; both parse to 20 + 23 milestones with no errors. + `vendor_payment_key_hash` and `datum_parse_error` columns now + populated/clear on these projects. +- **Operator note:** if a production deployment ran continuously while + bugs #3 and #4 were active, it may have similarly corrupted datums. + Check with `SELECT project_id FROM treasury.projects WHERE + datum_parse_error IS NOT NULL`. Recovery is the same wipe-and-resync. + +#### KI-MOD-01 — `modify` events don't update milestone amounts or time limits *(OPEN — TODO)* +- **User-visible symptom:** when an oversight committee submits a `modify` + event to change a milestone's payout amount or time limit, the API + continues to surface stale or NULL values for those fields. The on-chain + contract reflects the new state, but `/api/v1/projects/{id}/milestones` + doesn't. +- **Pattern:** `process_modify` (`api/src/services/event_processor.rs`) + archives the existing milestone rows and inserts new ones, then COALESCE- + updates project naming fields from metadata. It does **not** re-parse the + modify-tx's output datum, so the new milestone rows' `amount_lovelace` / + `time_limit` / `paused` fields come out NULL — even when the on-chain + datum carries the updated values. +- **Currently affected:** 8 active milestones in `UTXO-EC-0002-25-04` + (IDs MS-5, MS-6, MS-8, MS-9, MS-12, MS-13, MS-17, MS-18 — all created + by modify events; gaps imply earlier IDs were modified out). The same + cluster also drives the KI-EVT-01-residual NULL `project_db_id` events. +- **Why this is separate from KI-VND-01:** the fund datum *did* parse + successfully for these projects; the issue is exclusively in + `process_modify` which doesn't run the datum-update path that + `process_fund` does. 
+- **Proposed fix (small, deferred):** at the end of `process_modify`, look + up the modify tx's output datum via the same mechanism `process_fund` + uses (`get_script_utxo_for_tx` + `parse_project_datum`) and run the + milestone-update loop. Matching by `milestone_order` should align — the + modified contract's datum reflects current state. Re-running + `sync_all_events` after the fix lands will backfill via the idempotent + `ON CONFLICT DO UPDATE` chain; no resync needed. + +#### KI-VND-02 — `vendor_name` (deprecated) *(RESOLVED)* +- Column dropped from `treasury.vendor_contracts`, models, routes and views. + +#### KI-VND-03 — `contract_url` (deprecated) *(RESOLVED)* +- Column dropped from `treasury.vendor_contracts`, models, routes and views. + +#### KI-VND-04 — `contract_address` NULL on cold replay *(RESOLVED — verified by 2026-05-02 cold resync)* +- **Resolved by:** the `treasury.utxo_history` table + Postgres trigger on + `yaci_store.address_utxo` (see KI-UTX-01). Every script-address UTXO is + now captured synchronously inside YACI Store's INSERT, so pruning never + has a chance to wipe it before we read it. +- **Verified:** with the trigger armed before YACI Store ingestion, a fresh + cold sync from `STORE_CARDANO_SYNC_START_SLOT` produces 0 NULL + `contract_address` across all 42 projects. + +**Repro query** + +```sql +SELECT project_id, fund_tx_hash +FROM treasury.projects +WHERE contract_address IS NULL; +``` + +**Current count:** 0 / 42. + +--- + +### A.3 `treasury.milestones` + +#### KI-MIL-01 — milestone field NULLs across the four sub-fields +- **`label`** *(RESOLVED)* — description fallback in + `extract_milestone_label_description` covers the missing + `acceptanceCriteria` case. 
**0 / 364 active milestones NULL.**
+- **`amount_lovelace` / `time_limit`** *(LARGELY RESOLVED — see KI-VND-01)* —
+  was 136/386, now 16/364:
+  - 8 from `UTXO-EC-0002-25-03` (KI-VND-05 corruption)
+  - 8 from `UTXO-EC-0002-25-04` (KI-MOD-01 modify-event gap)
+- **`acceptance_criteria`** *(NOT A BUG — correct on-chain truth)* — the
+  remaining NULLs reflect the actual fund metadata. UTXO-* projects
+  emit milestones as `{identifier: "MS-N", description: …}` with no
+  `acceptanceCriteria` key (verified against `5849b0ec…`'s
+  `transaction_metadata.body`). Leave the column NULL — do not invent
+  a fallback.
+
+##### Per-project breakdown (NULL `amount_lovelace`, pre-fix snapshot — sums to the historical 136, not the current 16)
+
+ | project_id | NULL count | total milestones |
+ |---|---:|---:|
+ | UTXO-EC-0002-25-05 | 21 | 21 |
+ | UTXO-EC-0002-25-06 | 21 | 21 |
+ | UTXO-EC-0002-25-03 | 20 | 20 |
+ | UTXO-EC-0002-25-02 | 19 | 19 |
+ | UTXO-EC-0002-25-04 | 18 | 18 |
+ | UTXO-EC-0002-25-01 | 16 | 16 |
+ | UTXO-EC-0003-25 | 8 | 8 |
+ | UTXO-EMI-0001-25 | 5 | 5 |
+ | UTXO-EG-0003-25 | 4 | 4 |
+ | UTXO-ER-0001-25 | 4 | 4 |
+
+##### Repro query
+
+```sql
+SELECT p.project_id, COUNT(*) AS missing
+FROM treasury.milestones m
+JOIN treasury.projects p ON p.id = m.project_db_id
+WHERE NOT m.archived AND m.amount_lovelace IS NULL
+GROUP BY p.project_id ORDER BY 2 DESC;
+```
+
+#### KI-MIL-02 — `withdraw_*` / `complete_*` / `archived_*` columns
+- All conditional on the corresponding boolean flag being true. No anomalies
+  observed (`withdrawn=TRUE` rows always have non-NULL `withdraw_*`).
+
+**Repro query**
+
+```sql
+SELECT COUNT(*) FILTER (WHERE withdrawn AND withdraw_tx_hash IS NULL) AS withdrawn_no_tx,
+       COUNT(*) FILTER (WHERE evidence_provided AND complete_tx_hash IS NULL) AS evidenced_no_tx,
+       COUNT(*) FILTER (WHERE archived AND archived_by_tx_hash IS NULL) AS archived_no_tx
+FROM treasury.milestones;
+```
+
+**Current count:** 0, 0, 0. 
+
+---
+
+### A.4 `treasury.events`
+
+#### KI-EVT-01 — `project_db_id` NULL on chain-trace failure *(RESOLVED — verified by 2026-05-02 cold resync)*
+- **Resolved by:** historical-UTXO trigger (KI-UTX-01). Chain-trace inputs
+  are now reliably present in `treasury.utxo_history` regardless of pruning,
+  so the trace finds the seed for every event whose ancestor is a fund tx
+  we've processed.
+- **Verified:** after a fresh cold resync with the trigger armed from the
+  start, NULL counts dropped from 56 / 409 (14%) to **4 / 411 (~1%)**.
+
+  | event_type | NULL `project_db_id` | total | % |
+  |---|---:|---:|---:|
+  | complete | 2 | 189 | 1.1% |
+  | withdraw | 2 | 129 | 1.6% |
+  | pause | 0 | 62 | 0% |
+  | resume | 0 | 31 | 0% |
+  | **total** | **4** | **411** | **1.0%** |
+
+- Treasury-level events (publish, initialize, disburse) have NULL
+  `project_db_id` by design — they aren't tied to a project.
+
+#### KI-EVT-01-residual — 12 events still NULL after cold resync *(OPEN — likely tied to KI-MOD-01)*
+- After cold resync: 11 complete + 1 withdraw events have NULL
+  `project_db_id`. All cluster around slots 170M–173M, on
+  KI-MOD-01-affected projects where modify events introduced milestones
+  with non-original IDs (MS-N gaps).
+- **Hypothesis:** `find_project_from_inputs`
+  (`event_processor.rs`) uses `collect_milestone_id_hints` to
+  disambiguate when the chain trace finds multiple candidate projects (see
+  KI-OC-03). When the event's milestone IDs (e.g., `MS-15`) appear in
+  modify-created milestone rows on more than one project, the hint
+  scoring is ambiguous and the trace returns `None`.
+- **Status:** investigation pending. Slight regression from the 4 NULLs
+  seen pre-cold-resync; the wipe also discarded any partially-seeded chain
+  state. Resolution probably involves tightening
+  `collect_milestone_id_hints` to use both milestone_id AND
+  milestone_order, or falling back to the input UTXO's
+  `project_db_id` directly when ambiguous.
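The proposed tightening can be sketched in isolation. Everything below is hypothetical (the types, the helper shape, and the fallback rule are assumptions about a fix that has not shipped), but it shows the intent: score candidates on `(milestone_id, milestone_order)` pairs rather than the bare id, and fall back to the input UTXO's `project_db_id` when the hints stay ambiguous:

```rust
// Hypothetical sketch of the proposed KI-EVT-01-residual fix. Candidates
// are scored on (milestone_id, milestone_order) pairs; if no candidate
// scores, or two candidates tie, fall back to the project_db_id already
// recorded on the spent input UTXO instead of guessing.
fn pick_project(
    candidates: &[(i64, Vec<(String, u32)>)], // (project_db_id, stored (id, order) pairs)
    hints: &[(String, u32)],                  // (milestone_id, milestone_order) from the event body
    input_utxo_project: Option<i64>,          // chain-trace seed from the spent input
) -> Option<i64> {
    let mut best: Option<(i64, usize)> = None;
    let mut tied = false;
    for (pid, stored) in candidates {
        let score = hints.iter().filter(|h| stored.contains(*h)).count();
        match best {
            Some((_, s)) if score == s => tied = true,
            Some((_, s)) if score < s => {}
            _ => {
                best = Some((*pid, score));
                tied = false;
            }
        }
    }
    match best {
        Some((pid, score)) if score > 0 && !tied => Some(pid),
        _ => input_utxo_project, // ambiguous: trust the traced input
    }
}

fn main() {
    let cands = vec![
        // Two projects share milestone id "MS-15" but at different orders.
        (1_i64, vec![("MS-15".to_string(), 15_u32), ("MS-16".to_string(), 16)]),
        (2, vec![("MS-15".to_string(), 3)]),
    ];
    // The (id, order) pair disambiguates where the bare id could not.
    assert_eq!(pick_project(&cands, &[("MS-15".to_string(), 15)], None), Some(1));
    // No usable hints: fall back to the input UTXO's project.
    assert_eq!(pick_project(&cands, &[], Some(2)), Some(2));
    println!("ok");
}
```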
+
+**Repro query**
+
+```sql
+SELECT event_type,
+       COUNT(*) FILTER (WHERE project_db_id IS NULL) AS null_project,
+       COUNT(*) AS total
+FROM treasury.events
+WHERE event_type IN ('complete','withdraw','pause','resume')
+GROUP BY 1 ORDER BY 1;
+```
+
+#### KI-EVT-03 — fund/initialize/etc. have NULL `milestone_id` by design
+- Treasury- and contract-level events aren't tied to a single milestone;
+  the schema permits NULL. Listed only because the grouped count looks
+  suspicious at first glance.
+
+**Repro query**
+
+```sql
+SELECT event_type, COUNT(*) FILTER (WHERE milestone_id IS NULL) AS null_milestone
+FROM treasury.events GROUP BY 1 ORDER BY 1;
+```
+
+#### KI-FIN-04 — per-project balance under-counts the raw PSSC total *(OPEN — TODO)*
+- **Pattern:** `v_projects_summary.current_balance_lovelace` joins live
+  `yaci_store.address_utxo` against `treasury.utxo_history` so that each
+  unspent PSSC UTXO is attributed to the specific project that funded it.
+  Unspent PSSC UTXOs that `utxo_history` has *not* attributed to a project
+  (chain-trace gaps — primarily KI-EVT-01-residual / KI-MOD-01-affected
+  projects whose modify-chain we didn't fully trace) are excluded from
+  every project's per-project balance.
+- **Currently affected:** sum of per-project balances ≈ 80.65M ADA vs the
+  raw on-chain PSSC total of 88.34M ADA — gap of ~7.7M ADA sits at the
+  shared PSSC address but isn't claimed by any project row.
+- **Why this is OK at the treasury level:**
+  `v_financial_summary.project_balance_lovelace` and
+  `/api/v1/statistics.financials.current_balance_lovelace` deliberately
+  use the *raw* PSSC SUM (not the attributed sum), so the treasury-level
+  total reports the on-chain truth. The under-count only surfaces if a
+  consumer sums per-project balances.
+- **Proposed fix (deferred):** resolve the underlying chain-trace gaps via
+  KI-MOD-01 (modify-tx datum re-parse) and KI-EVT-01-residual
+  (`collect_milestone_id_hints` disambiguation tightening).
Once chain + trace covers every PSSC UTXO, attributed sum should match raw PSSC sum. + +**Repro query** + +```sql +SELECT + (SELECT SUM(au.lovelace_amount) + FROM yaci_store.address_utxo au + JOIN treasury.vendor_contracts vc ON vc.address = au.owner_addr + WHERE NOT EXISTS (SELECT 1 FROM yaci_store.tx_input ti + WHERE ti.tx_hash=au.tx_hash AND ti.output_index=au.output_index)) + / 1e6 AS raw_pssc_ada, + (SELECT SUM(current_balance_lovelace) FROM treasury.v_projects_summary) + / 1e6 AS attributed_pssc_ada; +``` + +--- + +### A.5 `treasury.utxo_history` (formerly `treasury.utxos`) + +#### KI-UTX-01 — `treasury.utxo_history` table + Postgres trigger *(IMPLEMENTED — verified by 2026-05-02 cold resync)* +- **Implementation:** `install_utxo_history_triggers` + (`api/src/services/sync.rs`) creates two triggers at API startup: + - `capture_address_utxo` AFTER INSERT/UPDATE on `yaci_store.address_utxo` + copies every `addr1x*` row into `treasury.utxo_history`. + - `mark_utxo_spent` AFTER INSERT on `yaci_store.tx_input` flags the + corresponding `treasury.utxo_history` row as spent. +- **Outcome:** complete UTXO history at script addresses is preserved + regardless of YACI Store's pruning window. Resolves KI-VND-04, + KI-EVT-01, KI-CR-01 — all confirmed by the 2026-05-02 cold resync. + +#### KI-UTX-02 — `project_db_id` IS NULL on non-script UTXOs (by design) +- **Why:** `pre_fetch_utxos` inserts every output of every TOM-event tx + without `project_db_id`. The chain-trace seed (set later by + `process_fund` and `find_project_from_inputs`) only fills it + for outputs at the script address. Non-script change/fee outputs + remain NULL by design — they aren't part of the chain. +- **Currently affected:** 786 / 1235 rows. Not anomalous — expected. 
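For application code that post-processes `treasury.utxo_history` rows, the three-way bucketing used by the repro query below can be mirrored directly (a sketch; the Rust signature is an assumption, but the buckets match the SQL `CASE`):

```rust
// Mirrors the repro query's CASE expression: a row is "fully_tracked"
// when both project_db_id and address are set, "address_only" when only
// the address is known, and "sparse" when the address is missing.
// (The SQL 'other' branch is unreachable here: Rust's match covers all
// combinations exhaustively.)
fn utxo_state(project_db_id: Option<i64>, address: Option<&str>) -> &'static str {
    match (project_db_id, address) {
        (Some(_), Some(_)) => "fully_tracked",
        (None, Some(_)) => "address_only",
        (_, None) => "sparse",
    }
}

fn main() {
    assert_eq!(utxo_state(Some(7), Some("addr1x...")), "fully_tracked");
    assert_eq!(utxo_state(None, Some("addr1x...")), "address_only");
    assert_eq!(utxo_state(None, None), "sparse");
    println!("ok");
}
```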
+ +**Repro query** + +```sql +SELECT + CASE + WHEN project_db_id IS NOT NULL AND address IS NOT NULL THEN 'fully_tracked' + WHEN project_db_id IS NULL AND address IS NOT NULL THEN 'address_only' + WHEN address IS NULL THEN 'sparse' + ELSE 'other' + END AS state, + COUNT(*) AS count +FROM treasury.utxo_history GROUP BY 1 ORDER BY 2 DESC; +``` + +**Current breakdown:** `address_only=786`, `fully_tracked=449`. + +#### KI-UTX-03 — NULL `lovelace_amount` rows *(RESOLVED — verified by 2026-05-02 cold resync)* +- Previously 5 / 1222 rows had NULL `lovelace_amount` (outputs whose + `yaci_store.address_utxo` row was already pruned by the time we looked + it up). With the historical-UTXO trigger capturing rows on insert, no + such gaps remain. +- **Currently affected:** 0 / 1235. + +**Repro query** + +```sql +SELECT tx_hash, output_index, address +FROM treasury.utxo_history WHERE lovelace_amount IS NULL; +``` + +--- + +## Section B — On-chain data inconsistencies + +### KI-OC-01 — Milestone-id naming drift (`m-N` vs `MS-N`) *(RESOLVED at lookup time)* +- **Resolved by:** `canonical_milestone_order` + (`api/src/services/event_processor.rs`) parses metadata keys to a 1-indexed + `milestone_order` (`m-N` → `N+1`, `MS-N` → `N`). `process_complete` and + `process_withdraw` UPDATE clauses now match `milestone_id = $key OR + milestone_order = $order`, so events whose metadata key uses the opposite + scheme to the fund event still resolve. +- Stored `milestone_id` is left as-is. + +The original analysis is preserved below. + + +- **Pattern:** fund events for some projects emit milestones as an array + whose elements have `identifier: "m-N"`; fund events for the `UTXO-*` + family emit them as `"MS-N"`. Our parser stores whatever the + `identifier` field says. +- **Indexing convention:** the two schemes use different bases — + `m-N` is **0-indexed** (`m-0`, `m-1`, …, `m-{count-1}`) while `MS-N` + is **1-indexed** (`MS-1`, `MS-2`, …, `MS-{count}`). 
So the *first*
+  milestone of a project is `m-0` under one convention and `MS-1` under
+  the other; positionally they are the same milestone. A future
+  normaliser that wants to merge the two formats can use this offset.
+- **Effect on complete events:** of 189 complete events, 108 use `m-N` keys
+  and 81 use `MS-N` keys. After the disambiguation hint was added to
+  `find_project_from_inputs`, this no longer causes silent event
+  drops (every event lands in `treasury.events`), but it surfaces as
+  KI-VND-01 / KI-MIL-01 because the same projects have a different datum
+  format the parser can't handle.
+
+**Repro query**
+
+```sql
+WITH cmp AS (
+  SELECT body::jsonb -> 'body' -> 'milestones' AS ms_field
+  FROM yaci_store.transaction_metadata
+  WHERE label='1694' AND body::jsonb->'body'->>'event'='complete'
+    AND jsonb_typeof(body::jsonb -> 'body' -> 'milestones') = 'object'
+)
+SELECT
+  COUNT(*) FILTER (WHERE k LIKE 'm-%') AS m_dash,
+  COUNT(*) FILTER (WHERE k LIKE 'MS-%') AS ms_dash,
+  COUNT(*) FILTER (WHERE k NOT LIKE 'm-%' AND k NOT LIKE 'MS-%') AS other
+FROM cmp, jsonb_object_keys(ms_field) k;
+```
+
+### KI-OC-02 — `body.identifier` empty on every milestone-level event
+- 100% of `complete`, `withdraw`, `pause`, `resume` on-chain events have an
+  empty top-level `identifier`, so the cheap project lookup is never
+  available — every such event must chain-trace. This is what makes
+  KI-EVT-01 visible at all.
+- **Currently affected:** complete 189/189, withdraw 129/129, pause 63/63,
+  resume 32/32 — 100% across the board.
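The KI-OC-01 offset above is mechanical enough to sketch in a few lines (illustrative only: the shipped `canonical_milestone_order` in `api/src/services/event_processor.rs` may differ in details such as error handling):

```rust
// Illustrative re-derivation of the KI-OC-01 normalisation:
// "m-N" metadata keys are 0-indexed, "MS-N" keys are 1-indexed, and
// both map to the same 1-indexed milestone_order.
fn canonical_milestone_order(key: &str) -> Option<u32> {
    if let Some(n) = key.strip_prefix("MS-") {
        n.parse().ok() // MS-N maps to N
    } else if let Some(n) = key.strip_prefix("m-") {
        n.parse::<u32>().ok().map(|n| n + 1) // m-N maps to N + 1
    } else {
        None // unrecognised scheme: caller falls back to milestone_id matching
    }
}

fn main() {
    // m-0 and MS-1 both denote the first milestone of a project.
    assert_eq!(canonical_milestone_order("m-0"), Some(1));
    assert_eq!(canonical_milestone_order("MS-1"), Some(1));
    assert_eq!(canonical_milestone_order("MS-7"), Some(7));
    assert_eq!(canonical_milestone_order("bogus"), None);
    println!("ok");
}
```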
+ +**Repro query** + +```sql +SELECT body::jsonb->'body'->>'event' AS event_type, + COUNT(*) FILTER (WHERE COALESCE(body::jsonb->'body'->>'identifier','') = '') AS empty_id, + COUNT(*) AS total +FROM yaci_store.transaction_metadata +WHERE label='1694' AND body::jsonb->'body'->>'event' IN ('complete','withdraw','pause','resume') +GROUP BY 1 ORDER BY 1; +``` + +### KI-OC-03 — Multi-input txs with sibling-project fee inputs +- A single complete/withdraw tx can take fee/collateral inputs from another + project's UTXO chain. Without disambiguation, the older code attributed + the event to whichever project's input came first. +- **Mitigation in code:** `find_project_from_inputs` + (`event_processor.rs`) now scores candidate `project_db_id`s + against `body.milestones` keys and prefers the one whose stored milestones + match (`collect_milestone_id_hints`). +- **Currently affected:** observable indirectly via KI-EVT-02 = 0. + +--- + +## Section C — Cold-replay UTXO-pruning limitation + +### KI-CR-01 — Fresh local sync can't reconstruct fully-pruned chains *(RESOLVED — verified by 2026-05-02 cold resync)* +- **Resolved by:** the historical-UTXO trigger (KI-UTX-01). Going forward, + every UTXO YACI Store inserts is captured in `treasury.utxo_history` before + it can be pruned, so continuous-operation chain trace works fully. +- **Verified:** the 2026-05-02 cold resync was run with + `install_utxo_history_triggers` arming the triggers before YACI Store + began ingesting. KI-VND-04 and KI-EVT-01 both improved to ~zero in this + run, confirming the recovery procedure works end-to-end. (KI-VND-01 and + the datum-derived part of KI-MIL-01 remain — those are parser issues, + not pruning issues.) +- **Caveat:** the trigger only protects UTXOs *from the moment it was + installed*. If `yaci_store.address_utxo` had already been pruned before + the trigger was armed, historical fund-output datums may still be + missing from `treasury.utxo_history`. 
To recover those, wipe both + schemas and re-sync from `STORE_CARDANO_SYNC_START_SLOT`: + ```bash + ./dev.sh stop + docker volume rm administration-data_postgres_data + ./dev.sh start + ``` + Triggers must already be present in `database/init/02-treasury-schema.sql` + *or* the API must arm them before YACI Store finishes its initial sync. + The `install_utxo_history_triggers` startup hook in + `api/src/services/sync.rs` runs early enough on a fresh install to + satisfy this. + +--- + +## Section D — Sync-loop quirks + +### KI-SY-01 — `treasury.sync_status.updated_at` doesn't bump on idle ticks *(RESOLVED)* +- **Resolved by:** `sync_new_events` (`api/src/services/sync.rs`) now bumps + `updated_at` on the `rows.is_empty()` path so `/api/v1/statistics` + reflects a live heartbeat even when no new TOM events have arrived. + +### KI-SY-02 — `last_slot` can advance past failed events on connection reset *(RESOLVED — Phase 1 + periodic full sync shipped)* + +**Resolution:** the contiguous-success watermark in `sync.rs` (Phase 1 +in the proposed fix below) is now in place. Additionally, a separate +`tokio::spawn` task runs `sync_all_events` every 10 minutes as a safety +net — any event that wedges the incremental loop is recovered by the +next full re-sync via the idempotent `ON CONFLICT DO UPDATE` chain. +Phase 2 (a `treasury.failed_events` table + per-event retry interval) +was de-scoped in favour of the simpler periodic full sync, which proved +sufficient when applied to the KI-VND-01 cascade. + +The original analysis is preserved below for context. + + + +- **Symptom observed:** during a postgres restart mid-batch + (2026-04-28), 5 events failed to insert. Continuous-sync logged + `Sync error: error communicating with database` then advanced `last_slot` + past those events on the next successful batch, so they were never + retried. 
+
+- **Why** (confirmed by reading `api/src/services/sync.rs:67–146`):
+  ```rust
+  let mut last_processed_slot = last_slot;
+  for row in rows {
+      if let Err(e) = processor.process_event(&row).await {
+          tracing::error!("Failed to process event {}: {}", row.tx_hash, e);
+          continue; // <-- skip
+      }
+      last_processed_slot = row.slot.unwrap_or(last_processed_slot); // <-- bumps past skipped
+  }
+  ```
+  A success at row `i+1` bumps the watermark past a skipped row `i`. The
+  watermark is then persisted to `treasury.sync_status` (line ~139),
+  making the skipped event unrecoverable until the API restarts and
+  `sync_all_events` reprocesses from slot 0.
+- **Why retries are safe**: all inserts use `ON CONFLICT (tx_hash) DO
+  UPDATE` (`event_processor.rs:1057–1084`, `:327`, `:432–453`,
+  `:1227–1233`), and child-table updates COALESCE to preserve existing
+  values. Re-applying any event is idempotent.
+
+##### Proposed fix — Phase 1 (small, ship first)
+
+Replace the watermark loop with a contiguous-success tracker:
+
+```rust
+let mut watermark = last_slot;
+let mut hole_seen = false;
+for row in rows {
+    match processor.process_event(&row).await {
+        Err(e) => {
+            tracing::error!(
+                "Failed to process event {} at slot {:?}: {:#}",
+                row.tx_hash, row.slot, e
+            );
+            hole_seen = true;
+        }
+        Ok(()) => {
+            if !hole_seen {
+                watermark = row.slot.unwrap_or(watermark);
+            }
+        }
+    }
+}
+```
+
+- **Cost:** if an event fails *permanently* (e.g., schema mismatch), the
+  loop wedges at that slot until an operator intervenes. That's the
+  point — silent loss is worse than visible stall, and the ERROR log
+  surfaces it.
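The contiguous-success semantics can be exercised outside the sync loop with dummy types (a standalone simulation, not the production code: `Row`/`ok` are stand-ins for the real event rows and `process_event` results):

```rust
// Standalone simulation of the Phase 1 watermark rule: a failed row
// freezes the watermark even if later rows succeed, so the failed slot
// is re-fetched on the next tick instead of being silently skipped.
struct Row {
    slot: Option<u64>,
    ok: bool, // stand-in for process_event(&row).await.is_ok()
}

fn advance_watermark(last_slot: u64, rows: &[Row]) -> u64 {
    let mut watermark = last_slot;
    let mut hole_seen = false;
    for row in rows {
        if !row.ok {
            hole_seen = true;
        } else if !hole_seen {
            watermark = row.slot.unwrap_or(watermark);
        }
    }
    watermark
}

fn main() {
    // All rows succeed: the watermark follows the last processed slot.
    let clean = [
        Row { slot: Some(110), ok: true },
        Row { slot: Some(120), ok: true },
    ];
    assert_eq!(advance_watermark(100, &clean), 120);

    // Failure at slot 110: the watermark stays at 100 despite the later
    // success at 120, so slot 110 is retried on the next batch.
    let holed = [
        Row { slot: Some(110), ok: false },
        Row { slot: Some(120), ok: true },
    ];
    assert_eq!(advance_watermark(100, &holed), 100);
    println!("ok");
}
```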
+ +##### Proposed fix — Phase 2 (durable, follow-up) + +Add a `treasury.failed_events` table and a periodic auto-retry: + +```sql +CREATE TABLE treasury.failed_events ( + tx_hash VARCHAR(64) PRIMARY KEY, + slot BIGINT, + event_type TEXT, + error TEXT NOT NULL, + retry_count INT NOT NULL DEFAULT 0, + first_seen TIMESTAMPTZ NOT NULL DEFAULT NOW(), + last_attempt TIMESTAMPTZ NOT NULL DEFAULT NOW() +); +CREATE INDEX idx_failed_events_retry + ON treasury.failed_events (retry_count, last_attempt); +``` + +- On the `Err` path in the loop, upsert (`ON CONFLICT (tx_hash) DO + UPDATE SET retry_count = retry_count + 1, last_attempt = NOW(), + error = EXCLUDED.error`). +- Spawn a tokio interval (e.g. every 10 min) that selects from + `treasury.failed_events` and re-runs `process_event` for each — same + idempotent path. Delete the row on success. +- Operator visibility: `SELECT * FROM treasury.failed_events ORDER BY + retry_count DESC` shows the backlog. Optional: expose a count on the + `/api/v1/statistics` endpoint. + +##### Operational note when this lands + +`treasury.sync_status.last_slot` semantics shift from "last successful +row" to "last contiguous success". Operationally invisible to consumers +of `/api/v1/status`, but worth a one-line release note. + +--- + +## Section E — Spec / code mismatches + +### KI-API-01 — `disburse.destination` typed as string instead of `{label, details}` *(RESOLVED)* +- **Resolved by:** `treasury.events.destination` is now `JSONB`. `process_disburse` + preserves the full TOM `{label, details}` object instead of flattening to a + string. API model fields updated to `serde_json::Value`. **Breaking change** + for downstream consumers that previously read `destination` as a string — + they should now read `destination.label`. 
+
+---
+
+## Index summary
+
+| ID | Area | Status | Blocked on |
+|---|---|---|---|
+| KI-VND-01 | NULL `vendor_payment_key_hash` on `UTXO-*` projects | **resolved** (6-bug cascade, 10/42 → 0/42 post cold resync) | — |
+| KI-VND-02 | `vendor_name` deprecated | **resolved** (column dropped) | — |
+| KI-VND-03 | `contract_url` deprecated | **resolved** (column dropped) | — |
+| KI-VND-04 | `contract_address` NULL on cold replay | **resolved** (verified, 0/42) | — |
+| KI-VND-05 | 2 corrupted utxo_history datums from prior bug #4 | **resolved** (cold resync + bug #6 merged-source query) | — |
+| KI-MIL-01 (`label`) | NULL `label` for `UTXO-*` | **resolved** (description fallback, 0/364) | — |
+| KI-MIL-01 (`amount`/`time_limit`) | NULL datum-derived fields for `UTXO-*` | **largely resolved** (136/386 → 16/364) | KI-MOD-01 |
+| KI-MIL-01 (`acceptance_criteria`) | NULL for `UTXO-*` | **not a bug** — metadata genuinely lacks the field | — |
+| KI-EVT-01 | NULL `project_db_id` on chain-trace failure | **resolved** (verified, 12/413 residual upstream) | — |
+| KI-EVT-01-residual | 12 events still NULL on KI-MOD-01-affected projects | **open** — likely milestone-id-hint disambiguation issue | KI-MOD-01 |
+| KI-MOD-01 | `modify` events don't update milestone amounts / time limits in API | **open** — TODO | — |
+| KI-FIN-04 | per-project balance under-counts raw PSSC total (chain-trace gaps) | **open** — TODO | KI-MOD-01 |
+| KI-EVT-03 | NULL `milestone_id` on treasury-level events | by design | — |
+| KI-UTX-01 | historical-UTXO table + trigger | **implemented & verified** | — |
+| KI-UTX-02 | `project_db_id` NULL on non-script UTXOs | by design | — |
+| KI-UTX-03 | NULL `lovelace_amount` rows | **resolved** (verified, 0/1235) | — |
+| KI-OC-01 | milestone-id naming drift (m-N vs MS-N) | **resolved at lookup time** | — |
+| KI-OC-02 | empty `body.identifier` everywhere | on-chain limitation | — |
+| KI-OC-03 | multi-input sibling-project txs | **resolved** (disambiguation hint) 
| — | +| KI-CR-01 | cold-replay limitation | **resolved** (verified by 2026-05-02 cold resync) | — | +| KI-SY-01 | idle `updated_at` doesn't bump | **resolved** | — | +| KI-SY-02 | `last_slot` advances past failed events | **resolved** (contiguous-success watermark + periodic full sync) | — | +| KI-API-01 | `destination` schema mismatch | **resolved** (JSONB; breaking API change) | — | diff --git a/indexer/README.md b/indexer/README.md index dc8cd3c..69146a1 100644 --- a/indexer/README.md +++ b/indexer/README.md @@ -54,8 +54,8 @@ store.cardano.protocol-magic=764824073 The sync start point is configured via environment variables in `.env`: ```bash -STORE_CARDANO_SYNC_START_SLOT=160964954 -STORE_CARDANO_SYNC_START_BLOCKHASH=560c7537831007f9670d287b15a69ba18a322b1edc39c0c23ccab3c12ad77b9f +STORE_CARDANO_SYNC_START_SLOT=160963800 +STORE_CARDANO_SYNC_START_BLOCKHASH=65233bb713c15c4bb427ccbf0e7e5c1c6a6a9c5c04b5edfa1e0e8e72f1285c9c ``` Remove these from `.env` to sync from genesis. diff --git a/indexer/SETUP.md b/indexer/SETUP.md index 40dba84..19746df 100644 --- a/indexer/SETUP.md +++ b/indexer/SETUP.md @@ -7,7 +7,13 @@ ## Download YACI Store JAR -The YACI Store JAR file needs to be downloaded manually: +> **For the standard Docker Compose setup (`./dev.sh start`), this section is +> not needed.** `docker-compose.yml` runs the prebuilt +> `bloxbean/yaci-store:2.0.0` image, which already bundles the JAR and +> handles startup. Skip ahead to **Configuration**. + +The instructions below apply only if you are running the indexer outside +Docker (e.g. directly on the host JVM): 1. Visit https://github.com/bloxbean/yaci-store/releases 2. 
Download the latest `yaci-store-all-*.jar` file (e.g., `yaci-store-all-2.0.0.jar`) diff --git a/indexer/application.properties b/indexer/application.properties index 8b17598..622c650 100644 --- a/indexer/application.properties +++ b/indexer/application.properties @@ -31,7 +31,7 @@ store.metadata.enabled=true store.assets.enabled=false store.epoch.enabled=false store.mir.enabled=false -store.script.enabled=false +store.script.enabled=true store.staking.enabled=false store.governance.enabled=false diff --git a/scripts/compare_events.sh b/scripts/compare_events.sh new file mode 100755 index 0000000..fb7ce95 --- /dev/null +++ b/scripts/compare_events.sh @@ -0,0 +1,61 @@ +#!/usr/bin/env bash +set -euo pipefail + +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + +LOCAL="http://localhost:8080/api/v1/events" +DEPLOYED="https://administration.info.intersectmbo.org/api/v1/events" +LIMIT=100 + +LOCAL_FILE=$(mktemp) +DEPLOYED_FILE=$(mktemp) +trap 'rm -f "$LOCAL_FILE" "$DEPLOYED_FILE"' EXIT + +fetch_all() { + local url=$1 outfile=$2 page=1 + > "$outfile" + while true; do + resp=$(curl -s "${url}?limit=${LIMIT}&page=${page}") + count=$(echo "$resp" | jq '.data | length') + if [ "$count" -eq 0 ]; then break; fi + echo "$resp" | jq -r '.data[] | [.tx_hash, .event_type, (.slot // ""), (.project_id // "")] | @csv' >> "$outfile" + if [ "$count" -lt "$LIMIT" ]; then break; fi + page=$((page + 1)) + done + echo "Fetched $(wc -l < "$outfile" | tr -d ' ') events from $url" >&2 +} + +echo "Fetching local events..." >&2 +fetch_all "$LOCAL" "$LOCAL_FILE" + +echo "Fetching deployed events..." 
>&2 +fetch_all "$DEPLOYED" "$DEPLOYED_FILE" + +# Sort both files for comparison +sort "$LOCAL_FILE" > "${LOCAL_FILE}.sorted" +sort "$DEPLOYED_FILE" > "${DEPLOYED_FILE}.sorted" + +OUTPUT="${SCRIPT_DIR}/diverging_events.csv" +echo "tx_hash,event_type,slot,project_id,source" > "$OUTPUT" + +# Lines only in local +comm -23 "${LOCAL_FILE}.sorted" "${DEPLOYED_FILE}.sorted" | while IFS= read -r line; do + echo "${line},\"local_only\"" +done >> "$OUTPUT" + +# Lines only in deployed +comm -13 "${LOCAL_FILE}.sorted" "${DEPLOYED_FILE}.sorted" | while IFS= read -r line; do + echo "${line},\"deployed_only\"" +done >> "$OUTPUT" + +total=$(tail -n +2 "$OUTPUT" | wc -l | tr -d ' ') +local_only=$(grep -c 'local_only' "$OUTPUT" || true) +deployed_only=$(grep -c 'deployed_only' "$OUTPUT" || true) + +echo "" +echo "Results written to $OUTPUT" +echo " Total divergences: $total" +echo " Local only: $local_only" +echo " Deployed only: $deployed_only" + +rm -f "${LOCAL_FILE}.sorted" "${DEPLOYED_FILE}.sorted"