Midnight REST integration tutorial

Integrating Midnight Proofs into an Existing Node.js REST Backend

A tested Express and TypeScript pattern for keeping proof generation asynchronous, mapping Midnight failures into stable API errors, and wiring live providers with httpClientProofProvider.

REST APIs are usually built around short request lifetimes. A handler validates JSON, talks to a database or service, and returns within a few hundred milliseconds. Midnight contract calls have a different shape. A transaction can involve wallet state, private state, public contract state, generated zero-knowledge artifacts, local circuit execution, proof generation, balancing, signing, submission, and confirmation.

The safest backend pattern is to treat proof-generating calls as asynchronous work. The REST request should validate intent, enqueue work, and return a job id. The worker can then prove, balance, submit, retry transient failures, and record a structured result. That gives clients a predictable API without pretending a long cryptographic workflow is a normal blocking HTTP request.

The companion repository uses Express, TypeScript, a fake Midnight gateway for deterministic tests, and a live provider factory that imports the current Midnight.js packages for private state, public contract data, ZK artifacts, and the httpClientProofProvider proof client.

REST client sends intent -> API validates and enqueues -> Worker calls contract gateway -> Proof provider uses local server -> Job records submitted tx or error

Prerequisites and Verification

Use Node.js 22 or newer. The current Midnight.js guide lists Node.js 22.x+ and Docker as prerequisites for proof-server workflows.

npm install
npm run check

The verification run for this submission passed TypeScript compilation and the Vitest suite:

Test Files  1 passed (1)
Tests       5 passed (5)

The tests cover contract deployment, state reads, async job creation, proof timeouts, network submission failures, and invalid REST payloads. The example is not claiming a live Preprod transaction. Live mode still needs a funded wallet, a local proof server, wallet facade objects, and ZK artifacts from a compiled Compact contract.

Project Layout

src/app.ts                         Express routes and HTTP error mapping
src/jobs.ts                        In-memory async transaction jobs
src/midnight/providerFactory.ts    Live Midnight provider construction
src/midnight/fakeGateway.ts        Test gateway with deterministic failures
src/midnight/errors.ts             Error categories and timeout wrapper
src/app.test.ts                    REST behavior tests

The server entry point uses FakeMidnightGateway so the REST behavior is reproducible on any development machine:

const config = loadConfig();
const app = createApp(new FakeMidnightGateway(), config);

For live use, replace the fake gateway with a class around your generated Compact contract API. Keep the routes, job queue, timeout handling, and error mapping the same.

Build Midnight Providers Once

Do not construct wallet and Midnight providers inside each request handler. Build them once at process startup, after loading secrets from your normal secret store and after checking that the wallet is synced.

setNetworkId(config.networkId);

const zkConfigProvider = options.zkArtifactsUrl
  ? new FetchZkConfigProvider(options.zkArtifactsUrl)
  : new NodeZkConfigProvider(config.zkArtifactsPath);

const providers = {
  privateStateProvider: levelPrivateStateProvider({
    privateStoragePasswordProvider: () => config.privateStatePassword ?? '',
    accountId: config.accountId
  }),
  publicDataProvider: indexerPublicDataProvider(config.indexerHttpUrl, config.indexerWsUrl),
  zkConfigProvider,
  proofProvider: httpClientProofProvider(config.proofServerUrl, zkConfigProvider),
  walletProvider: walletProviders.walletProvider,
  midnightProvider: walletProviders.midnightProvider
};

This matches the provider model from the Midnight.js docs: setNetworkId selects the network, levelPrivateStateProvider stores encrypted local private state, indexerPublicDataProvider reads public contract data, NodeZkConfigProvider or FetchZkConfigProvider loads proving artifacts, and httpClientProofProvider talks to the proof server. The wallet and Midnight providers handle wallet operations, balancing, and submission.

networkId: 'preprod'
indexerHttpUrl: 'https://indexer.preprod.midnight.network/api/v4/graphql'
indexerWsUrl: 'wss://indexer.preprod.midnight.network/api/v4/graphql/ws'
proofServerUrl: 'http://127.0.0.1:6300'

Keep wallet seed material out of HTTP request bodies and logs. REST clients should send application inputs such as a message, order, vote, or command. They should not control the wallet backend.

Add a Startup Readiness Gate

A backend should not accept transaction routes just because the Express process is listening. The server also needs the wallet state, indexer connection, proof server, and generated contract artifacts to be ready. Treat readiness as a separate application state.

One practical approach is to start the HTTP server with read-only routes enabled and transaction routes returning 503 wallet_syncing until initialization has finished. The live gateway can expose that state through health(), and the app can use the same error mapper as every other route. That keeps load balancers and frontends honest: /health can say the process is alive while transaction routes still communicate that proofs are not ready yet.

This is also the right place to check the local proof server. A simple TCP check does not prove that every circuit artifact is valid, but it catches the most common operational mistake: starting the backend while Docker is not running the proof server on port 6300. After that, run a deeper live smoke test in your own environment by calling a generated contract method against Preprod with a funded wallet.
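A minimal sketch of that gate, assuming an Express app, a module-level ready flag flipped by your startup code, and a plain TCP probe; the names here are illustrative, not the companion repository's exact code:

import type { Request, Response, NextFunction } from 'express';
import { connect } from 'node:net';

let ready = false;
export const markReady = () => { ready = true; };

// Transaction routes answer 503 wallet_syncing until startup has finished.
export function requireReady(_req: Request, res: Response, next: NextFunction) {
  if (!ready) {
    res.status(503).json({
      error: { code: 'wallet_syncing', retryable: true, message: 'Providers are still initializing' }
    });
    return;
  }
  next();
}

// Cheap TCP probe: catches "Docker is not running the proof server" before accepting work.
export function proofServerReachable(host: string, port: number, timeoutMs = 2000): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = connect({ host, port });
    const done = (ok: boolean) => { socket.destroy(); resolve(ok); };
    socket.setTimeout(timeoutMs, () => done(false));
    socket.once('connect', () => done(true));
    socket.once('error', () => done(false));
  });
}

Mount requireReady only on the transaction routes so /health and read-only routes stay available while providers initialize.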

Run the Proof Server Locally

The proof server is not a public shared dependency. Midnight's proof server guide explains that its inputs can include private data, so the proof server should run locally or on infrastructure you control, reached over an encrypted channel.

midnightntwrk/proof-server:8.0.3
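One way to start that image locally (a sketch; check the current proof server guide for any network-selection arguments your version requires):

docker run -d --name midnight-proof-server -p 6300:6300 midnightntwrk/proof-server:8.0.3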

The proof server listens on port 6300. The example treats MIDNIGHT_PROOF_SERVER_URL as server-side configuration, not a client parameter. That prevents a caller from redirecting private proof inputs to an arbitrary host.

Design REST Routes Around Jobs

GET    /health
POST   /contracts
GET    /contracts/:address/state
POST   /contracts/:address/message
DELETE /contracts/:address/message
GET    /jobs/:id

POST /contracts is synchronous in the fake example because deployment is short and deterministic there. In a real service, deployments may also belong behind a job boundary if proof generation or wallet operations can exceed your request timeout budget.

const job = jobs.enqueue(() =>
  retryTransient(
    () =>
      withTimeout(
        gateway.postMessage(req.params.address, body.message),
        config.proofTimeoutMs,
        'proof_timeout',
        'Proof generation timed out'
      ),
    { attempts: 2, baseDelayMs: 500 }
  )
);

res.status(202).json(job);

The client polls /jobs/:id until the job reaches succeeded or failed. This avoids reverse proxy timeouts, supports retries for transient failures, and lets the frontend show progress without guessing whether a broken HTTP connection means the transaction failed.
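A minimal client-side polling loop, assuming only the job route and the terminal statuses described here; the helper name and poll interval are illustrative:

type JobResponse = { status: string; [key: string]: unknown };

// Poll /jobs/:id until the job reaches succeeded or failed.
export async function waitForJob(baseUrl: string, jobId: string, pollMs = 1000): Promise<JobResponse> {
  for (;;) {
    const res = await fetch(`${baseUrl}/jobs/${jobId}`);
    if (!res.ok) throw new Error(`job lookup failed with HTTP ${res.status}`);
    const job = (await res.json()) as JobResponse;
    if (job.status === 'succeeded' || job.status === 'failed') return job;
    await new Promise((resolve) => setTimeout(resolve, pollMs));
  }
}

Production clients should also cap the total wait and back off between polls rather than polling forever at a fixed interval.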

Do not return a transaction id before the backend has one. The first response should contain only server-owned job metadata. The job gains a tx field only after the gateway returns a submitted transaction.

{
  "status": "succeeded",
  "tx": {
    "txId": "0x0000000000000000000000000000000000000000000000000000000000000001",
    "blockHeight": 1,
    "status": "succeeded"
  }
}

For higher volume, replace the in-memory JobQueue with Redis, a database table, or your existing queue system. Keep the public job contract stable: status, timestamps, transaction data on success, and a structured error on failure.
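A sketch of that public contract as a TypeScript type; the intermediate statuses and timestamp field names are assumptions, while the tx and error shapes follow the response above and the categories below:

export type JobStatus = 'queued' | 'running' | 'succeeded' | 'failed';

export type TransactionJob = {
  id: string;
  status: JobStatus;
  createdAt: string;   // ISO-8601 timestamps
  updatedAt: string;
  // Present only after the gateway returns a submitted transaction.
  tx?: { txId: string; blockHeight: number; status: string };
  // Present only on failure, using the error categories below.
  error?: { code: string; message: string; retryable: boolean };
};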

Map Midnight Failures Into Stable HTTP Errors

Clients need to know whether retrying makes sense. The example maps common failure categories into a small HTTP surface:

Error                     Status  Retry?
proof_timeout             504     Yes
proof_server_unreachable  503     Yes
wallet_syncing            503     Yes
network_unreachable       502     Yes
insufficient_funds        402     No, fund first
invalid_contract_call     422     No, change input
unknown_error             500     No by default

That distinction matters. A frontend can retry a temporary proof server failure or show "wallet syncing" as a temporary state. It should not repeatedly submit the same invalid circuit input. It also should not hide an insufficient DUST or NIGHT balance behind a generic 500.
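A sketch of that mapping as a single lookup table, assuming the gateway throws errors tagged with these category codes; the repository keeps its version in src/midnight/errors.ts, and the exact shape here is illustrative:

type ErrorCategory =
  | 'proof_timeout' | 'proof_server_unreachable' | 'wallet_syncing'
  | 'network_unreachable' | 'insufficient_funds' | 'invalid_contract_call' | 'unknown_error';

const HTTP_MAPPING: Record<ErrorCategory, { status: number; retryable: boolean }> = {
  proof_timeout:            { status: 504, retryable: true },
  proof_server_unreachable: { status: 503, retryable: true },
  wallet_syncing:           { status: 503, retryable: true },
  network_unreachable:      { status: 502, retryable: true },
  insufficient_funds:       { status: 402, retryable: false },
  invalid_contract_call:    { status: 422, retryable: false },
  unknown_error:            { status: 500, retryable: false }
};

export function toHttpError(code: ErrorCategory, message: string) {
  const { status, retryable } = HTTP_MAPPING[code];
  return { status, body: { error: { code, message, retryable } } };
}

The deterministic fake gateway makes each row testable. The suite forces a network failure, for example, and asserts the 502 mapping: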

const app = createApp(new FakeMidnightGateway('network'), config);
const response = await request(app, '/contracts', { method: 'POST' });

expect(response.status).toBe(502);
await expect(response.json()).resolves.toMatchObject({
  error: {
    code: 'network_unreachable',
    retryable: true
  }
});

Add Idempotency Before Production Retries

The companion code retries transient work to demonstrate the API shape. A production backend should also add an idempotency key for transaction routes. Proof generation and transaction submission are not like retrying a database read. If a worker times out locally while the underlying operation is still running, a blind retry can create duplicate intent.

Use a client-supplied or server-issued idempotency key, store it with the job, and return the existing job when the same caller repeats the same operation. This gives mobile clients and frontend retries a safe recovery path without submitting the same contract operation twice.
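A minimal sketch of that lookup, assuming an Idempotency-Key header and the in-memory queue from the example; the map, the jobs.get helper, and postMessageWork are illustrative names:

const jobIdByIdempotencyKey = new Map<string, string>();

app.post('/contracts/:address/message', (req, res) => {
  const key = req.header('Idempotency-Key');
  if (key) {
    const existingId = jobIdByIdempotencyKey.get(key);
    const existing = existingId ? jobs.get(existingId) : undefined;
    // Same caller repeating the same operation: return the original job instead of re-enqueueing.
    if (existing) {
      res.status(200).json(existing);
      return;
    }
  }
  const job = jobs.enqueue(() => postMessageWork(req.params.address, req.body.message));
  if (key) jobIdByIdempotencyKey.set(key, job.id);
  res.status(202).json(job);
});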

Swap in a Live Gateway

export type MidnightContractGateway = {
  health(): Promise<{ proofServerReachable: boolean; networkReachable: boolean }>;
  deploy(): Promise<DeployContractResult>;
  readState(contractAddress: string): Promise<ContractState>;
  postMessage(contractAddress: string, message: string): Promise<SubmittedTx>;
  takeDown(contractAddress: string): Promise<SubmittedTx>;
};

A live gateway class owns wallet initialization and sync checks, deployment helpers, findDeployedContract, proof-generating contract.callTx.someCircuit(...) calls, public state reads, and translation of SDK errors into the categories above.

If you already have a Compact contract generated, the fastest path is to keep the Express app and job queue unchanged, then replace FakeMidnightGateway with calls into that generated module.
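A skeleton of one gateway method, assuming a deployed-contract handle obtained once at startup via findDeployedContract; the circuit name, the finalized-transaction field names, and toGatewayError are placeholders that depend on your Compact contract and midnight-js version:

export class LiveMidnightGateway {
  constructor(
    // The generated Compact API supplies the real type of this handle.
    private readonly deployed: { callTx: { postMessage: (message: string) => Promise<any> } }
  ) {}

  async postMessage(_contractAddress: string, message: string): Promise<SubmittedTx> {
    try {
      // Local circuit execution, proof generation, balancing, and submission all happen inside this call.
      const finalized = await this.deployed.callTx.postMessage(message);
      return { txId: finalized.public.txId, blockHeight: finalized.public.blockHeight, status: 'succeeded' };
    } catch (err) {
      // toGatewayError stands in for your translation of SDK errors into the categories above.
      throw toGatewayError(err);
    }
  }
}

The remaining gateway methods follow the same pattern: call the SDK, shape the result into the interface above, and translate failures into stable error categories.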

Practical Guardrails

Keep proof generation out of unbounded request handlers. Even if it works on a laptop, it will fail under reverse proxy timeouts, serverless limits, mobile network drops, or proof server slowdown.

Separate read routes from transaction routes. A state read can fail fast if the indexer is unavailable. Transaction routes need job state, retry policy, and clear error categories.

Make timeouts explicit. A missing timeout means the client, proxy, or process manager chooses the failure mode for you.
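A sketch of the withTimeout wrapper used in the job snippet earlier, assuming a GatewayError type that carries the category code; src/midnight/errors.ts holds the repository's real version, and this one is illustrative:

export class GatewayError extends Error {
  constructor(
    public readonly code: string,
    message: string,
    public readonly retryable: boolean
  ) {
    super(message);
  }
}

// Rejects with a categorized error once ms elapses; the underlying work is not cancelled.
export function withTimeout<T>(work: Promise<T>, ms: number, code: string, message: string): Promise<T> {
  return new Promise<T>((resolve, reject) => {
    const timer = setTimeout(() => reject(new GatewayError(code, message, true)), ms);
    work.then(
      (value) => { clearTimeout(timer); resolve(value); },
      (err) => { clearTimeout(timer); reject(err); }
    );
  });
}

Because the timeout does not cancel proof generation already in flight, a timed-out job can still complete on the network, which is exactly why the idempotency key above matters.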

Treat live wallet setup as infrastructure, not API input. A backend can expose contract operations, but it should never become a seed phrase relay.

Finally, keep deterministic fake-gateway tests. Midnight integration work has enough moving parts that local API tests are still valuable even after you add live Preprod checks.

Sources