Go 1.26.x silently broke MIPS soft-float on Linux kernels that ship with
Keenetic/Netis Keenetic-port boxes — the runtime fatal'd at
runtime/proc.go:363 (go forcegchelper()) during startup on mipsel-3.4.
Reported by Giam (Netis N6, Keenetic 5.0.9), Владислав, and sisVita —
all three with identical pc=0x7408c register dumps.
The April 21 commit (e3b794d) inadvertently rebuilt all arches with
Go 1.26.1 and pushed the broken mipsel binary. Users who didn't re-pull
kept running the March 13 (35a2963) Go 1.22.12 binary and were unaffected;
new installs and menu-forced redownloads got the broken one.
Pin to Go 1.22.12 — the last version verified to produce working
MIPS soft-float binaries for Keenetic-class boxes. Other arches are
carried along for consistency.
Co-Authored-By: Claude Opus 4.7 <noreply@anthropic.com>
menu_telegram_mtproxy: delete the binary on [FAIL] so the next [1] press
triggers a fresh download. Previously the binary silently stayed on disk,
causing a stuck crash loop for users who had a pre-April-21 build.
tunnel.go: remove the duplicate configureWSKeepalive call from readLoop —
connectTunnelWS already configures read limit, deadline, and ping/pong
handlers before handing the connection to the caller.
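For reference, a minimal sketch of what the single keepalive setup in
connectTunnelWS looks like with gorilla/websocket — the 60s pong timeout
and 1 MiB read limit are illustrative values, not lifted from tunnel.go:

```go
package main

import (
	"time"

	"github.com/gorilla/websocket"
)

// configureWSKeepalive sets the read limit, read deadline, and pong handler
// in one place; readLoop must not repeat this. Values are illustrative.
func configureWSKeepalive(conn *websocket.Conn) {
	const pongWait = 60 * time.Second
	conn.SetReadLimit(1 << 20)
	conn.SetReadDeadline(time.Now().Add(pongWait))
	conn.SetPongHandler(func(string) error {
		// every pong pushes the read deadline forward
		return conn.SetReadDeadline(time.Now().Add(pongWait))
	})
}
```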
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Go 1.26.1 (used in the VPS-migration rebuild in 0ec1ff0) regresses the
mipsle/mips runtime init on MT7621-class Keenetic routers — the binary
SIGSEGVs at PC=0x74084 in goroutine 0 [idle] before main() even runs,
on any invocation including --help. Confirmed in the field on a
Keenetic with MediaTek MT7621 / MIPS 1004Kc V2.15 / mipsel-3.4_kn.
The pre-VPS-migration binaries (fda1e05) were built with Go 1.22.12
and worked on the exact same routers. No source change between then
and now except the default tunnel URL flag, so the regression is
purely in the Go toolchain's MIPS runtime.
Roll back to Go 1.22.12 for all 9 targets. Binary sizes drop back to
the known-good baseline (mipsel: 6.1MB -> 5.0MB, same for mips).
VPS URL wss://213.176.74.63.nip.io/ws is still baked in — verified
via strings on the mipsel binary.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
New Go-based relay (vps-relay/) running on a dedicated Aeza VPS in
Amsterdam. Same mux wire protocol as the old CF worker — client
binaries just point at a new URL, no protocol rework needed.
What this unlocks:
- No 10ms CPU limit (CF Worker Free tier kept killing sessions every
30-100 seconds, forcing continuous reconnects and CONNECT_FAIL storms).
- No 6-concurrent-socket-per-invocation cap (was the single biggest
bottleneck under load — we routinely saw `CONNECT throttled` on
bursty clients).
- Sessions now live as long as the WebSocket stays up; observed 3+
minutes of uptime with 0 disconnects in the initial soak test.
Relay characteristics:
- Same HMAC auth, same [streamId u16][type u8][payload] frames (see the
frame sketch after this list).
- Telegram DC IP allowlist identical to old worker.
- Single writer goroutine per session, single reader.
- ~11 MB RSS idle, <1% CPU under real traffic.
- Fronted by Caddy with Let's Encrypt cert on 213.176.74.63.nip.io.
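For reference, a minimal sketch of the frame layout above; big-endian
streamId is an assumption here — the authoritative encoding lives in
vps-relay/ and the old worker:

```go
package main

import "encoding/binary"

// appendFrame encodes one mux frame: [streamId u16][type u8][payload].
func appendFrame(buf []byte, streamID uint16, frameType byte, payload []byte) []byte {
	var hdr [3]byte
	binary.BigEndian.PutUint16(hdr[:2], streamID)
	hdr[2] = frameType
	return append(append(buf, hdr[:]...), payload...)
}

// parseFrame splits a frame back into its three parts.
func parseFrame(frame []byte) (streamID uint16, frameType byte, payload []byte) {
	return binary.BigEndian.Uint16(frame[:2]), frame[2], frame[3:]
}
```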
Binaries rebuilt for all 9 architectures with the new default
--tunnel-url. Existing installations pick up the change on next
reinstall; manual override via --tunnel-url still works and can point
back at the CF worker if someone needs it.
cf-worker/ source is kept in the tree as a reference / fallback
deployment target — not deleted.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Previous revert took binaries from e688fb1 which turned out to ship a
mipsel build that wouldn't run on at least one user's device. 198d181 is
the last commit before today's total rewrite and was confirmed working
for users all day until the rewrite landed. Taking the full mtproxy-client
tree (tunnel.go, listener.go, all 9 builds/) from there.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Today's rewrite (e729560) introduced two independent regressions that
compounded into a total outage:
1. worker.js added `await socket.opened` before sending CONNECT_OK. On
Cloudflare Workers, `socket.opened` runs an HTTP-based-service heuristic
that pre-emptively rejects with "proxy request failed ... consider using
fetch instead" for destinations that look HTTPS-ish (port 443 with
TLS-like framing) — even when a plain read/write on the same socket
would have worked. Every Telegram DC hit this path and the tunnel
returned 100% CONNECT_FAIL.
2. The N-parallel-session + voluntary TTL-rotation architecture in the
rewritten tunnel.go killed in-flight TCP streams every rotation cycle,
so Telegram MTProto couldn't finish a single handshake before its
session got rotated out. Staggering + longer TTLs didn't help enough.
Reverted tunnel.go, main.go, listener.go, worker.js, wrangler.toml and all
9 prebuilt binaries to e688fb1 (the last commit known to work for three
days straight). lib/install.sh keeps the iptables -I PREROUTING 1 fix from
today (unrelated to the tunnel rewrite; a real bug) but drops the
--parallel/--session-ttl flags since the reverted binary doesn't recognize them.
The Roblox work from a6c607d stays: files/lua/z2k-modern-core.lua still
contains z2k_game_udp, lib/config_official.sh still wires it into the game
strategies. That part works and is orthogonal to the tunnel.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The old single-session architecture leaked slots from the 6-slot CF TCP gate
on every error path (via two confused semaphores in tunnel.go), which is why Telegram
degraded progressively after each reconnect and collapsed entirely after a
few cycles. Both sides are rewritten on clean primitives:
- Worker: serialized session state, direct server.send() on the hot path,
await socket.opened before CONNECT_OK, fire-and-forget writer.write with
async cleanup. Minimises per-packet CPU to fit within the Worker CPU
budget.
- Client: single writePump per session, half-close state machine with 10s
grace (yamux-style), exactly-once gate release (see the sketch after this
list), ping/pong keepalive via SetPongHandler + SetReadDeadline. N parallel
WS sessions (default 6) lift the per-invocation CF 6-slot ceiling to 36
concurrent TCP streams.
- Free-plan workaround: --session-ttl=8s voluntarily rotates each session
before CF kills it for CPU exhaustion. Parallel sessions absorb the gap.
Not needed on Paid (raise cpu_ms in wrangler.toml when subscription
activates).
- install.sh / S98tg-tunnel: iptables rules now use -I PREROUTING 1 so
REDIRECT precedes Keenetic's _NDM_* chains (was silently bypassed).
- Unit tests cover frame round-trip, half-close lifecycle, grim-reaper
grace, and exactly-once gate release.
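For reference, a minimal sketch of the exactly-once gate release, assuming
a channel-based semaphore; the names are hypothetical, not the actual
tunnel.go types:

```go
package main

import "sync"

// stream owns one slot of the per-session TCP gate (a channel semaphore,
// acquired with gate <- struct{}{} at CONNECT time).
type stream struct {
	gate        chan struct{}
	releaseOnce sync.Once
}

// release frees the gate slot exactly once, no matter how many error paths
// (read error, write error, half-close grace expiry) reach it.
func (s *stream) release() {
	s.releaseOnce.Do(func() { <-s.gate })
}
```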
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
On fresh start/restart, Telegram clients immediately flood the proxy with
TCP connections. Without a WS session ready, they all got dropped or
burst-connected to the Worker, causing CPU-exceeded kills. The client now
waits up to 10s for the first WS before accepting TCP. Result: 0 drops on
startup vs dozens before.
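For reference, a sketch of the startup gate, assuming the WS dialer closes
a channel once the first session is up; proceeding (rather than refusing)
after the 10s budget runs out is also an assumption:

```go
package main

import (
	"log"
	"time"
)

// waitForFirstWS blocks the TCP accept loop until the first WS session is
// up, or until the 10s startup budget expires.
func waitForFirstWS(wsReady <-chan struct{}) {
	select {
	case <-wsReady:
	case <-time.After(10 * time.Second):
		log.Println("no WS session after 10s; accepting TCP anyway")
	}
}
```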
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
When the CF Worker is overloaded (exceededResources), the client was
reconnecting every 1-3 seconds, burning through the 100K/day request limit.
Now: 3 fails → 10s, 5 fails → 30s, 10+ fails → 120s backoff.
Also detects rapid WS death (<5s) and backs off instead of hammering.
Resets to fast reconnect after a stable connection.
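For reference, a sketch of the schedule above; the 1s fast path is an
assumption where the commit doesn't pin it down:

```go
package main

import "time"

// reconnectDelay maps consecutive failures to the backoff schedule:
// 3 fails → 10s, 5 → 30s, 10+ → 120s.
func reconnectDelay(fails int) time.Duration {
	switch {
	case fails >= 10:
		return 120 * time.Second
	case fails >= 5:
		return 30 * time.Second
	case fails >= 3:
		return 10 * time.Second
	default:
		return time.Second // assumed fast path
	}
}

// countFailure treats a WS that died in under 5s as a failure too, so rapid
// death backs off instead of hammering; a stable run resets the counter.
func countFailure(fails int, sessionLife time.Duration) int {
	if sessionLife < 5*time.Second {
		return fails + 1
	}
	return 0
}
```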
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Re-CONNECT sent a new CONNECT for each surviving stream after a WS
reconnect, but the remote TCP was gone and the clients were mid-TLS, so the
Telegram DC got garbage and closed immediately. Each re-CONNECT also counted
toward the 40-connect limit, causing the Worker to close the WS prematurely.
Now: WS dies → all streams closed → WS reconnects → Telegram clients
retry with fresh TLS. Clean and predictable.
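For reference, a sketch of the simplified teardown with hypothetical names;
the point is that stream state dies with the WS instead of being
re-CONNECTed:

```go
package main

import (
	"net"
	"sync"
)

type session struct {
	mu      sync.Mutex
	streams map[uint16]net.Conn // local Telegram client connections
}

// onWSDeath closes every local stream so clients redial and redo TLS from
// scratch once the WS is back; nothing is carried across WS sessions.
func (s *session) onWSDeath() {
	s.mu.Lock()
	defer s.mu.Unlock()
	for id, c := range s.streams {
		c.Close()
		delete(s.streams, id)
	}
}
```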
Also removed: PongHandler/ReadDeadline (caused 120s timeout kills),
streamReadLoop retry loop, origIP/origPort/connected fields.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
CF Workers free plan has a per-invocation subrequest limit. After ~50
outbound connect() calls, new connections silently fail (CONNECT_FAIL).
Worker now proactively closes the WS after 40 connects, forcing the
client to reconnect to a fresh invocation. Also closes WS immediately
when connect() throws after 10+ total connects (limit already hit).
Rebuilt all 9 architecture binaries with current tunnel.go.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
- Build all 9 arch binaries with Go 1.22.12 to fix MIPS crash (Go issue #71591)
- Remove dead MTProxy/transparent mode code (relay.go, secret.go, transparent.go, dcmap.go)
- Drop gotd/td dependency — only gorilla/websocket + stdlib remain
- Tunnel is now the only mode, --tunnel flag removed
- Transparent WS reconnect: keep client TCP alive during CF Worker WS drops
- Re-CONNECT surviving streams after WS reconnects — seamless for clients
- streamReadLoop waits for WS instead of dying on disconnect
- New connections wait up to 5s for WS during reconnect instead of dropping
- Drain connect semaphore on WS disconnect to prevent deadlock (see the
sketch after this list)
- Worker: MAX_STREAMS 200→100, improved logging
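For reference, a sketch of the semaphore drain, assuming a channel
semaphore where each held slot is a buffered element; the real tunnel.go
wiring may differ:

```go
package main

// drainConnectSem empties the connect semaphore when the WS drops, so
// goroutines blocked on a slot don't deadlock waiting for releases that the
// dead session will never perform.
func drainConnectSem(sem chan struct{}) {
	for {
		select {
		case <-sem:
		default:
			return
		}
	}
}
```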
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Every binary now built with maximum compatibility:
- CGO_ENABLED=0: fully static, no libc dependency
- -ldflags="-s -w": stripped debug info, ~30% smaller
- GOMIPS=softfloat: for mips and mipsle (Keenetic MT7621, KN-3610)
- GOMIPS64=softfloat: for mips64le
- GOARM=5: ARMv5 minimum, no VFP required (max ARM compat)
Fixes segfault/SIGILL on all MIPS Keenetic routers.
Fixes potential issues on older ARM boards without VFPv3.
CI updated to match: all 9 targets with same flags.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Root cause of all MIPS crashes: Go defaults to GOMIPS=hardfloat, but
Keenetic routers (MT7621, KN-3610, etc.) have CPUs without a hardware FPU.
Entware reports the arch as 'mipsel-sf' — the 'sf' means soft-float.
Running a hardfloat binary on a softfloat CPU = SIGILL / segfault
with register dump (exactly what users reported).
Fix: rebuild mips and mipsel binaries with GOMIPS=softfloat.
Updated CI to always use softfloat for MIPS targets.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Fixed listener.go: removed the SYS_GETSOCKOPT raw syscall (not available
on MIPS/PPC) — reverted to the portable GetsockoptIPv6Mreq approach with
correct IPv6 address reconstruction from the 20 bytes available.
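For the IPv4 path, the portable pattern looks roughly like this — the
function shape is illustrative, not the exact listener.go code; the byte
offsets follow sockaddr_in:

```go
package main

import (
	"net"

	"golang.org/x/sys/unix"
)

const soOriginalDst = 80 // SO_ORIGINAL_DST from linux/netfilter_ipv4.h

// originalDstIPv4 recovers the pre-REDIRECT destination via the portable
// GetsockoptIPv6Mreq wrapper: its 16-byte Multiaddr field overlays a
// sockaddr_in, so the port sits at bytes 2-3 and the IPv4 at bytes 4-7.
func originalDstIPv4(fd int) (net.IP, int, error) {
	mreq, err := unix.GetsockoptIPv6Mreq(fd, unix.SOL_IP, soOriginalDst)
	if err != nil {
		return nil, 0, err
	}
	port := int(mreq.Multiaddr[2])<<8 | int(mreq.Multiaddr[3])
	ip := net.IPv4(mreq.Multiaddr[4], mreq.Multiaddr[5],
		mreq.Multiaddr[6], mreq.Multiaddr[7])
	return ip, port, nil
}
```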
Rebuilt binaries: arm64, amd64, arm, mips, mipsel, mips64el, ppc64,
riscv64, x86 — all now include --tunnel mode.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
9 binaries in mtproxy-client/builds/ for download via CDN.
jsdelivr and raw.githubusercontent are not blocked in Russia.
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>