Manual review, thresholds, and notifications
Manual review is where Ledgix becomes an operating workflow instead of just an SDK. Your team decides how much confidence is required before an action can proceed automatically, who gets notified when a request pauses, and how reviewers finalize a decision.
Start in the dashboard
Most teams should configure review behavior in the customer dashboard first:
1. Choose a starting threshold in Reviews. Use the Reviews page to set the minimum confidence score for your tenant. Requests below that score pause for human review.
2. Configure notifications in Settings. Turn on email or Slack notifications so a pending request becomes visible to the right reviewers immediately.
3. Practice a full review. Submit one request that intentionally pauses, then approve or deny it in the review queue so the team understands the real operating path.
How to choose a starting threshold
Use these starting points as operating defaults, not permanent rules:
- 0.80 for a balanced rollout where you want automation but still expect some reviews
- 0.90 when the action is higher risk or the first rollout needs more human oversight
- 0.95 or higher when you want automation only for the clearest cases
Raise the threshold when the cost of a wrong approval is high. Lower it only after you have seen enough real requests to trust the policy and the resulting reviewer load.
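The gating rule above can be sketched as a small helper. This is an illustrative function, not part of the Ledgix SDK; the 0.80 default mirrors the balanced starting point:

```python
# Illustrative sketch of the gating rule: a request whose confidence
# score falls below the tenant threshold pauses for human review.
def requires_review(confidence_score: float, threshold: float = 0.80) -> bool:
    """Return True when the request should pause for a human reviewer."""
    if not 0.0 <= confidence_score <= 1.0:
        raise ValueError("confidence_score must be between 0.0 and 1.0")
    return confidence_score < threshold
```

For example, a 0.75 score under a 0.80 threshold pauses for review, while the same score under a 0.70 threshold proceeds automatically.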
What happens when a request pauses
When Ledgix returns `pending_review`:
- the action should not execute yet
- the request appears in the customer review queue
- reviewers can inspect the tool arguments, reasoning, and cited policy support
- a reviewer can approve or deny the request later
If approved during review, Ledgix moves the request to a terminal approved state. If denied, the action remains blocked.
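The state handling above can be sketched client-side. The helper name and the response shape (a dict with a `status` field) are assumptions for illustration, not a documented SDK surface:

```python
# Illustrative dispatch on the review status returned by Ledgix.
# "pending_review" means: do not execute and do not retry -- wait for a reviewer.
def next_action(response: dict) -> str:
    status = response.get("status")
    if status == "approved":
        return "execute"          # terminal approved state: run the action
    if status == "pending_review":
        return "wait_for_review"  # queued for a human; keep the action on hold
    if status == "denied":
        return "block"            # the action remains blocked
    raise ValueError(f"unexpected status: {status!r}")
```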
Review settings API
Use the API when you want to manage review settings from code or automation.
Read the current threshold
```http
GET /review-settings HTTP/1.1
Host: vault.example.com
X-Vault-API-Key: sk_prod_example
```

Example response:

```json
{
  "client_id": "demo",
  "minimum_confidence_score": 0.8
}
```

Update the threshold
```http
PUT /review-settings HTTP/1.1
Host: vault.example.com
Content-Type: application/json
X-Vault-API-Key: sk_prod_example

{
  "minimum_confidence_score": 0.9
}
```

Notification settings API
The customer dashboard is the normal path, but the same settings are available over HTTP.
Read notification settings
```http
GET /notification-settings HTTP/1.1
Host: vault.example.com
X-Vault-API-Key: sk_prod_example
```

Update notification settings
```http
PUT /notification-settings HTTP/1.1
Host: vault.example.com
Content-Type: application/json
X-Vault-API-Key: sk_prod_example

{
  "email_enabled": true,
  "email_recipients": "ops@example.com,security@example.com",
  "slack_enabled": true,
  "slack_webhook_url": "https://hooks.slack.com/services/..."
}
```

If Slack notifications are enabled, provide a real `slack_webhook_url`, and treat that webhook like any other environment secret in your own systems.
Finalizing a review
Approving or denying a request uses the same review queue item identified by request_id.
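The decision call can be sketched as two small helpers. The path and body fields match the HTTP example in this section; the helper names and the non-empty-reason guard are illustrative assumptions:

```python
# Illustrative helpers for POST /reviews/{request_id}/decision.
def decision_path(request_id: str) -> str:
    return f"/reviews/{request_id}/decision"

def build_decision(approved: bool, review_reason: str, reviewer_email: str) -> dict:
    # Sketch-level guard: a written reason keeps the review audit trail useful.
    if not review_reason.strip():
        raise ValueError("provide a review_reason")
    return {
        "approved": approved,
        "review_reason": review_reason,
        "reviewer_email": reviewer_email,
    }
```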
```http
POST /reviews/8ee2d480-4e23-49c5-9869-a0247e806e1c/decision HTTP/1.1
Host: vault.example.com
Content-Type: application/json
X-Vault-API-Key: sk_prod_example

{
  "approved": true,
  "review_reason": "Confirmed the payment matches the order and approved policy.",
  "reviewer_email": "operator@example.com"
}
```

What reviewers should look for
- Does the tool argument payload match the real business action?
- Is the cited policy the right one for this tool?
- Does the model's reasoning line up with the evidence shown in the review screen?
- Is the threshold doing what your team intended, or pausing too many obvious cases?
Common operating mistakes
- Setting a strict threshold without first routing notifications to real reviewers.
- Treating `pending_review` as a failure and blindly retrying the same request.
- Letting multiple teams share one Slack webhook without clear ownership.
- Reviewing only the top-line reason and ignoring the actual tool arguments.