The old single-session architecture leaked slots of the 6-slot CF TCP gate on
every error path (two semaphores in tunnel.go with conflated responsibilities),
which is why Telegram degraded progressively after each reconnect and collapsed
entirely after a few cycles. Both sides are rewritten on clean primitives:
- Worker: serialized session state, direct server.send() on the hot path,
await socket.opened before CONNECT_OK, fire-and-forget writer.write with
async cleanup. Minimises per-packet CPU to fit within the Worker CPU
budget.
- Client: single writePump per session, half-close state machine with 10s
grace (yamux-style), exactly-once gate release, ping/pong keepalive via
SetPongHandler + SetReadDeadline (see the keepalive sketch after this list).
N parallel WS sessions (default 6) lift the per-invocation CF 6-slot
ceiling to 36 concurrent TCP streams.
- Free-plan workaround: --session-ttl=8s voluntarily rotates each session
before CF kills it for CPU exhaustion. Parallel sessions absorb the gap.
Not needed on Paid (raise cpu_ms in wrangler.toml when subscription
activates).
- install.sh / S98tg-tunnel: iptables rules now use -I PREROUTING 1 so
REDIRECT precedes Keenetic's _NDM_* chains (was silently bypassed).
- Unit tests cover frame round-trip, half-close lifecycle, grim-reaper
grace, and exactly-once gate release.
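The keepalive in the Client bullet is the standard gorilla/websocket pattern;
a minimal sketch, with illustrative interval values rather than the ones
actually used in tunnel.go:

    package tunnel

    import (
        "time"

        "github.com/gorilla/websocket"
    )

    // keepalive pings the Worker periodically. The pong handler (invoked from
    // whatever goroutine is reading the WS) pushes the read deadline forward,
    // so a dead session is detected within pongWait instead of hanging forever.
    const (
        pingPeriod = 20 * time.Second // illustrative values
        pongWait   = 30 * time.Second
    )

    func keepalive(ws *websocket.Conn, done <-chan struct{}) {
        ws.SetReadDeadline(time.Now().Add(pongWait))
        ws.SetPongHandler(func(string) error {
            return ws.SetReadDeadline(time.Now().Add(pongWait))
        })
        t := time.NewTicker(pingPeriod)
        defer t.Stop()
        for {
            select {
            case <-t.C:
                // WriteControl is safe to call concurrently with writePump.
                if err := ws.WriteControl(websocket.PingMessage, nil,
                    time.Now().Add(5*time.Second)); err != nil {
                    return
                }
            case <-done:
                return
            }
        }
    }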
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
On a fresh start/restart, Telegram clients immediately flood the proxy with
TCP connections. With no WS session ready yet, they either all get dropped
or burst-connect through the Worker and trigger a CPU-exceeded error. The
client now waits up to 10s for the first WS session before accepting TCP.
Result: 0 drops on startup vs dozens before.
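A sketch of that gate, assuming the session manager closes a channel once the
first WS is up (wsReady and waitForFirstWS are illustrative names, not the
real identifiers):

    package tunnel

    import (
        "errors"
        "time"
    )

    // waitForFirstWS blocks the TCP accept path until the first WS session is
    // established, giving up after 10s so startup can never hang indefinitely.
    func waitForFirstWS(wsReady <-chan struct{}) error {
        select {
        case <-wsReady: // closed by the session manager on the first dial
            return nil
        case <-time.After(10 * time.Second):
            return errors.New("no WS session after 10s")
        }
    }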
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When the CF Worker was overloaded (exceededResources), the client reconnected
every 1-3 seconds, burning through the 100K/day request limit.
Now: 3 fails → 10s, 5 fails → 30s, 10+ fails → 120s backoff.
Also detects rapid WS death (<5s) and backs off instead of hammering.
Resets to fast reconnect after a stable connection.
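The schedule is a plain step function of consecutive failures; a sketch with
the thresholds listed above (the helper name and fast-path delay are
illustrative):

    package tunnel

    import "time"

    // reconnectDelay maps consecutive failed or short-lived (<5s) WS sessions
    // to a backoff: 3+ fails -> 10s, 5+ -> 30s, 10+ -> 120s. A session that
    // stays up resets the counter, restoring fast reconnects.
    func reconnectDelay(fails int) time.Duration {
        switch {
        case fails >= 10:
            return 120 * time.Second
        case fails >= 5:
            return 30 * time.Second
        case fails >= 3:
            return 10 * time.Second
        default:
            return time.Second // fast reconnect (illustrative value)
        }
    }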
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The re-CONNECT path sent a new CONNECT for each surviving stream after a WS
reconnect, but the remote TCP connection was already gone and the clients
were mid-TLS handshake, so the Telegram DC received garbage and closed
immediately. Each re-CONNECT also counted toward the 40-connect limit,
causing the Worker to close the WS prematurely.
Now: WS dies → all streams closed → WS reconnects → Telegram clients
retry with a fresh TLS handshake. Clean and predictable.
Also removed: PongHandler/ReadDeadline (they caused 120s timeout kills), the
streamReadLoop retry loop, and the origIP/origPort/connected fields.
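The new teardown is a single sweep over the active streams; a minimal sketch,
assuming a mutex-guarded stream map (type and field names are illustrative):

    package tunnel

    import (
        "net"
        "sync"
    )

    type client struct {
        mu      sync.Mutex
        streams map[uint32]net.Conn // stream ID -> local Telegram client conn
    }

    // onWSDead closes every local TCP connection instead of re-CONNECTing the
    // streams over the next WS; the Telegram clients then redial and restart
    // TLS from scratch against a fresh remote connection.
    func (c *client) onWSDead() {
        c.mu.Lock()
        defer c.mu.Unlock()
        for id, conn := range c.streams {
            conn.Close()
            delete(c.streams, id)
        }
    }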
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
CF Workers free plan has a per-invocation subrequest limit. After ~50
outbound connect() calls, new connections silently fail (CONNECT_FAIL).
Worker now proactively closes the WS after 40 connects, forcing the
client to reconnect to a fresh invocation. Also closes WS immediately
when connect() throws after 10+ total connects (limit already hit).
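The Worker itself is JavaScript; expressed as a Go sketch only for consistency
with the rest of this log, the rotation policy is roughly:

    package tunnel

    // shouldRotate: close the WS and push the client onto a fresh invocation,
    // either proactively after 40 connects or as soon as connect() starts
    // throwing once 10+ connects have been made (the subrequest limit is
    // already exhausted at that point).
    func shouldRotate(connects int, connectThrew bool) bool {
        return connects >= 40 || (connectThrew && connects >= 10)
    }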
Rebuilt all 9 architecture binaries with current tunnel.go.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Build all 9 arch binaries with Go 1.22.12 to fix MIPS crash (Go issue #71591)
- Remove dead MTProxy/transparent mode code (relay.go, secret.go, transparent.go, dcmap.go)
- Drop gotd/td dependency — only gorilla/websocket + stdlib remain
- Tunnel is now the only mode, --tunnel flag removed
- Transparent WS reconnect: keep client TCP alive during CF Worker WS drops
- Re-CONNECT surviving streams after WS reconnects — seamless for clients
- streamReadLoop waits for WS instead of dying on disconnect
- New connections wait up to 5s for WS during reconnect instead of dropping
- Drain connect semaphore on WS disconnect to prevent deadlock (sketch after
this list)
- Worker: MAX_STREAMS 200→100, improved logging
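The semaphore drain mentioned above, sketched with a buffered-channel
semaphore (the field name and capacity are illustrative; tunnel.go may
differ):

    package tunnel

    // connectSem limits in-flight CONNECTs: each outgoing CONNECT sends a
    // token, each CONNECT_OK/CONNECT_FAIL receives one back. If the WS dies,
    // the replies never arrive, so the stranded tokens would block every dial
    // issued after the reconnect.
    var connectSem = make(chan struct{}, 6)

    // drainConnectSem empties the outstanding tokens on WS disconnect so that
    // CONNECTs issued after the reconnect are not deadlocked.
    func drainConnectSem() {
        for {
            select {
            case <-connectSem:
            default:
                return
            }
        }
    }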
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Every binary now built with maximum compatibility:
- CGO_ENABLED=0: fully static, no libc dependency
- -ldflags="-s -w": stripped debug info, ~30% smaller
- GOMIPS=softfloat: for mips and mipsle (Keenetic MT7621, KN-3610)
- GOMIPS64=softfloat: for mips64le
- GOARM=5: ARMv5 minimum, no VFP required (max ARM compat)
Fixes segfault/SIGILL on all MIPS Keenetic routers.
Fixes potential issues on older ARM boards without VFPv3.
CI updated to match: all 9 targets with same flags.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Root cause of all MIPS crashes: Go default is GOMIPS=hardfloat,
but Keenetic routers (MT7621, KN-3610 etc.) have softfloat CPUs.
Entware reports arch as 'mipsel-sf' — the 'sf' means soft-float.
Running a hardfloat binary on softfloat CPU = SIGILL / segfault
with register dump (exactly what users reported).
Fix: rebuild mips and mipsel binaries with GOMIPS=softfloat.
Updated CI to always use softfloat for MIPS targets.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Fixed listener.go: removed the raw SYS_GETSOCKOPT syscall (not available on
MIPS/PPC builds) and reverted to the portable GetsockoptIPv6Mreq approach,
with correct IPv6 address reconstruction from the 20 bytes it exposes.
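For reference, the IPv4 form of that trick (SO_ORIGINAL_DST hands back a raw
sockaddr_in, and IPv6Mreq just happens to be a conveniently sized 20-byte
buffer for it); a sketch rather than the exact listener.go code, with the
IPv6 reconstruction omitted:

    package tunnel

    import (
        "net"
        "syscall"
    )

    const soOriginalDst = 80 // SO_ORIGINAL_DST from <linux/netfilter_ipv4.h>

    // originalDstIPv4 recovers the pre-REDIRECT destination of a connection
    // redirected by iptables. GetsockoptIPv6Mreq is portable (unlike the raw
    // SYS_GETSOCKOPT constant) and its 20-byte layout is large enough for the
    // sockaddr_in the kernel returns.
    func originalDstIPv4(conn *net.TCPConn) (net.IP, uint16, error) {
        raw, err := conn.SyscallConn()
        if err != nil {
            return nil, 0, err
        }
        var mreq *syscall.IPv6Mreq
        var gerr error
        if err := raw.Control(func(fd uintptr) {
            mreq, gerr = syscall.GetsockoptIPv6Mreq(int(fd), syscall.IPPROTO_IP, soOriginalDst)
        }); err != nil {
            return nil, 0, err
        }
        if gerr != nil {
            return nil, 0, gerr
        }
        // Multiaddr holds a raw sockaddr_in: [2:4] = port (BE), [4:8] = IPv4.
        port := uint16(mreq.Multiaddr[2])<<8 | uint16(mreq.Multiaddr[3])
        ip := net.IPv4(mreq.Multiaddr[4], mreq.Multiaddr[5], mreq.Multiaddr[6], mreq.Multiaddr[7])
        return ip, port, nil
    }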
Rebuilt binaries: arm64, amd64, arm, mips, mipsel, mips64el, ppc64,
riscv64, x86 — all now include --tunnel mode.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
9 binaries in mtproxy-client/builds/ for download via CDN.
jsdelivr and raw.githubusercontent are not blocked in Russia.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>