# Long-running daemon deployment guide (v1.4.94+)
This guide covers configuration, self-healing, and fallback paths specific to running a daemon long-term (24h+ / 7 days / 30 days) in production.
Applicable scenarios:

- VPS / cloud server running the daemon 24×7
- Containerized deployment (Docker / Kubernetes)
- Multiple accounts in a single daemon (multi-session)
- Options / futures strategies needing persistent subscription + push
If you only run the daemon locally for short stretches (CLI usage / manual testing), this guide is not required; the defaults are sufficient.
## v1.4.94 long-running hardening switches (env opt-in, default OFF)
v1.4.94 adds two env opt-in switches that let a long-running daemon proactively renew its client_sig before invalidation and proactively re-login on persistent failures, reducing how often an admin reload or restart is needed. They default to OFF because both paths depend on backend response behavior that has not yet been verified on a real machine; once verified, the next release will flip the defaults to ON. Production deployments that want to enable them now should follow this section.
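The opt-in convention is a strict "set to 1" (see the troubleshooting message "env ... not set to 1" below). A minimal sketch of that check, assuming the daemon treats only the literal string "1" as enabled (the function name is illustrative, not the daemon's actual API):

```rust
// Sketch: strict "=1" opt-in parsing for the hardening switches.
// Only the literal "1" enables a switch; "true", "yes", "0", etc. leave it OFF,
// matching the troubleshooting message "env ... not set to 1".
fn env_opt_in(value: Option<&str>) -> bool {
    matches!(value, Some("1"))
}
```

Usage would look like `env_opt_in(std::env::var("FUTU_CLIENT_SIG_PROACTIVE_REFRESH").ok().as_deref())`.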
### FUTU_CLIENT_SIG_PROACTIVE_REFRESH=1 — 1h-ahead proactive client_sig refresh
When effective: at startup, the daemon parses `client_sig_invalid_local_time_s` from the /authority/ response (C++ auth_impl.cpp:3245-3247). If this timestamp is greater than 0 (the server provided an expiry), the daemon spawns a timer that fires at (invalid_time - 3600s) and calls `AuthRefresher::refresh_qot_login()`, without waiting for backend errors.
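The timer arithmetic above can be sketched as follows. This is an illustration rather than the daemon's actual code; it assumes both values are Unix timestamps in seconds:

```rust
// Sketch: compute how long the G1 proactive-refresh timer should sleep.
// `invalid_ts` is client_sig_invalid_local_time_s from the /authority/ response.
// Returns None when the field is 0 (backend gave no expiry) or when the
// 1h-ahead fire point is already past (the "TooLate" skip in Troubleshooting).
const LEAD_SECS: u64 = 3600; // refresh one hour before invalidation

fn proactive_delay_secs(invalid_ts: u64, now: u64) -> Option<u64> {
    if invalid_ts == 0 {
        return None; // backend did not provide an expiry
    }
    let fire_at = invalid_ts.checked_sub(LEAD_SECS)?;
    if fire_at <= now {
        return None; // less than 1h remaining, or already expired: skip
    }
    Some(fire_at - now)
}
```

The `Option` return mirrors the two skip reasons the troubleshooting section lists (`=0` and "TooLate").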
Why default OFF: whether the backend accepts a refresh issued hours ahead of expiry, or rejects it as "too early", has not been verified on a real machine (pitfall #42). Verification path:

- Start the daemon: `FUTU_CLIENT_SIG_PROACTIVE_REFRESH=1 RUST_LOG=info futu-opend ...`
- The log should contain "v1.4.94 G1: client_sig invalidate scheduled" + ttl_secs
- The log should contain "v1.4.94 G1: spawning client_sig proactive timer"
- After the timer fires (in production, typically 24-48h later), the log should contain "v1.4.94 G1: refresh completed"
- If, after the timer fires, the daemon continues to query / push without disconnecting, the backend accepts the ahead-of-time refresh.
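The three log checks above can be automated. A sketch, where the function name and the log-as-string input are assumptions (feed it the daemon's captured stdout):

```rust
// Sketch: report which of the expected G1 log markers are missing.
// The marker strings are the ones quoted in the verification steps above.
fn missing_g1_markers(log: &str) -> Vec<&'static str> {
    const MARKERS: [&str; 3] = [
        "v1.4.94 G1: client_sig invalidate scheduled",
        "v1.4.94 G1: spawning client_sig proactive timer",
        "v1.4.94 G1: refresh completed",
    ];
    MARKERS
        .iter()
        .copied()
        .filter(|m| !log.contains(*m)) // keep only markers absent from the log
        .collect()
}
```

An empty result means all three G1 markers were seen; anything returned is what still needs to happen (or a sign the switch did not take effect).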
### FUTU_CLIENT_SIG_REACTIVE_REFRESH=1 — reactive refresh on persistent tcp_login failures
When effective: the daemon's TCP heartbeat fails and the reconnect monitor starts. If `tcp_login` fails 3 or more consecutive times during reconnect, the daemon assumes the `client_sig` may be invalid, automatically calls `AuthRefresher::refresh_qot_login()` + `auth::reauth_via_remember_login()` to obtain a fresh `AuthResult`, swaps the cached `client_sig`, and the next `tcp_login` uses the fresh sig.
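The trigger condition can be sketched as a small state machine. The type and field names here are illustrative; the once-per-cycle guard mirrors the "already attempted refresh this cycle" message in the troubleshooting section:

```rust
// Sketch: decide when the reactive client_sig refresh should fire,
// assuming a threshold of 3 consecutive tcp_login failures.
struct ReconnectState {
    consecutive_failures: u32,
    refresh_attempted_this_cycle: bool, // guard against a refresh-fail loop
}

impl ReconnectState {
    fn new() -> Self {
        Self { consecutive_failures: 0, refresh_attempted_this_cycle: false }
    }

    /// Record one tcp_login failure; returns true when a reactive
    /// client_sig refresh should be attempted (at most once per cycle).
    fn on_login_failure(&mut self) -> bool {
        self.consecutive_failures += 1;
        if self.consecutive_failures >= 3 && !self.refresh_attempted_this_cycle {
            self.refresh_attempted_this_cycle = true;
            return true;
        }
        false
    }

    /// A successful tcp_login resets the cycle.
    fn on_login_success(&mut self) {
        self.consecutive_failures = 0;
        self.refresh_attempted_this_cycle = false;
    }
}
```

The single-attempt guard is why a failed refresh does not loop: further failures in the same cycle fall through to the existing backoff/admin-reload path.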
Why default OFF: the `ret_type` semantics of a `tcp_login` failure do not exclusively mean "client_sig invalid"; the cause could also be server-side rate limiting or risk control. A misfired refresh might amplify rate limits (pitfall #42 + docs/protocol/auth.md "error_code=15 three root causes"). Verification path:

- In a test environment, advance the system clock by +30 days (past the natural `client_sig` expiry), or wait through 30 days of continuous running.
- Start the daemon: `FUTU_CLIENT_SIG_REACTIVE_REFRESH=1 ...`
- The heartbeat must fail, so that the reconnect monitor starts
- The log should show "reconnect login failed" (×3)
- The log should show "v1.4.94 G4: persistent tcp_login failures → trying reactive client_sig refresh"
- The log should show "v1.4.94 G4: refresh_qot_login OK → reauth_via_remember_login"
- Subsequent `tcp_login` attempts should succeed; the daemon recovers push.
## Recommended deployment combinations
| Environment | G1 | G4 | Notes |
|---|---|---|---|
| Local short-running (CLI / test) | OFF | OFF | Defaults are sufficient; no long-running hardening needed |
| Production long-running (24h+), early adopter | ON | OFF | Verify G1 first; G4 still relies on admin reload |
| Production long-running (7 days+), full opt-in | ON | ON | Expect zero admin reloads; requires tester verification to pass |
| Multi-account single daemon (5+ accounts) | ON | OFF | G4 risk: one account triggering a rate limit affects the others |
After real-machine verification (logs are redacted by .githooks/pre-push and are therefore safe to share with developers), v1.4.95+ will assess flipping the defaults to ON.
## Containerization examples
### Docker
```yaml
# docker-compose.yml fragment
services:
  futu-opend:
    image: ghcr.io/futuleaf/futu-opend-rs:v1.4.94
    restart: unless-stopped
    environment:
      FUTU_ACCOUNT: <account>
      FUTU_PWD: <password>
      FUTU_CLIENT_SIG_PROACTIVE_REFRESH: "1"   # long-runner hardening
      # FUTU_CLIENT_SIG_REACTIVE_REFRESH: "1"  # single-account scenarios only
    volumes:
      - ./.futu-opend-rs:/root/.futu-opend-rs  # device_id + credentials persistence
    ports:
      - "11111:11111"  # gRPC
      - "11112:11112"  # REST
      - "11113:11113"  # WebSocket
```
### systemd unit
```ini
[Unit]
Description=futu-opend long-running gateway
After=network-online.target

[Service]
Type=simple
User=futu
EnvironmentFile=/etc/futu-opend.env
Environment=FUTU_CLIENT_SIG_PROACTIVE_REFRESH=1
ExecStart=/usr/local/bin/futu-opend --rest-port 11112 --grpc-port 11111
Restart=on-failure
RestartSec=30s

[Install]
WantedBy=multi-user.target
```
`/etc/futu-opend.env` contains `FUTU_ACCOUNT=...` + `FUTU_PWD=...` (mode 0600, owner=futu).
## Self-healing vs manual intervention
| Scenario | Behavior without the switch | Behavior with the switch | env required |
|---|---|---|---|
| client_sig natural expiry (30 days) | daemon error → admin reload needed | G1 timer auto-renews 1h before expiry | FUTU_CLIENT_SIG_PROACTIVE_REFRESH=1 |
| Network jitter, tcp_login fails 1-2 times | exponential backoff retry | same | — |
| Network jitter, tcp_login fails ≥ 3 times | persistent failure / manual intervention | G4 auto-refresh + retry | FUTU_CLIENT_SIG_REACTIVE_REFRESH=1 |
| Server rate limit (error_code=15) | 60s wait + retry | same (G4 does not trigger on rate limits) | — |
| Account risk control / password change / device lock | manual SMS / --reset-device | same | — |
| broker auth_code 30-day expiry | v1.4.94 G2 RepullAuthCode auto-refresh | same | (default ON since v1.4.94) |
## Troubleshooting
### G1 timer not spawning after enabling
In the log, look for the "v1.4.94 G1 skip" reason:

- "client_sig_invalid_local_time_s=0": the backend didn't return this field (old backend / old credentials shell). Run --reset-device to redo password auth and obtain the new field.
- "env FUTU_CLIENT_SIG_PROACTIVE_REFRESH not set to 1": the env var didn't propagate.
- "TooLate": invalid_time is already past or less than 1h away, so the timer was skipped. Restart the daemon so the first login obtains a new invalid_time.
### G4 refresh not triggering
In the log, look for the "v1.4.94 G4 skip" reason:

- "consecutive failures below threshold": the failure count has not yet reached 3
- "auth_refresher not injected": internal wiring issue; file an issue
- "already attempted refresh this cycle": refresh-fail-loop guard; restart the daemon to reset it
### G4 refresh failed, daemon continues with stale auth
By design. Log should contain "v1.4.94 G4: refresh_qot_login failed (continue with stale AuthResult)" or "v1.4.94 G4: reauth_via_remember_login failed (continue with stale AuthResult; user may need admin reload)". G4 is best-effort — failure doesn't break the existing fallback (admin reload path fully retained).
## Roadmap
- v1.4.95+: once G1/G4 real-machine verification is complete, flip the defaults to ON; this guide's "default OFF" wording will be updated.
- v1.5+: G6 actual moomoo-path broker channel routing (v1.4.94 only lays the infrastructure; routing still uses the main `client_sig` for C++-equivalent behavior).
For deployment issues, file an issue or contact via the official site.