mirror of
https://github.com/kvcache-ai/ktransformers.git
synced 2026-04-29 04:09:52 +00:00
feat(sft): AMX MoE SFT backend with LoRA support (#1936)
Some checks failed
Book-CI / test (push) Has been cancelled
Book-CI / test-1 (push) Has been cancelled
Book-CI / test-2 (push) Has been cancelled
Deploy / deploy (macos-latest) (push) Has been cancelled
Deploy / deploy (ubuntu-latest) (push) Has been cancelled
Deploy / deploy (windows-latest) (push) Has been cancelled
* feat(sft): AMX MoE SFT backend with LoRA support

  Complete SFT (Supervised Fine-Tuning) backend for MoE models using AMX SIMD.

  Core C++ implementation:
  - sft_moe.hpp: forward/backward with LoRA fused operations (~5500 lines)
  - moe-sft-tp.hpp: tensor-parallel wrapper for multi-NUMA
  - amx/moe-sft-tp.hpp: AMX-specific TP implementation
  - avx_kernels.hpp: AVX512 SIMD kernels for LoRA GEMM
  - amx_kernels.hpp: AMX tile kernels for Panel5 rank-outer optimization
  - worker_pool: RDTSC profiling, Chrome trace output, SFT timer infrastructure
  - ext_bindings.cpp: SFT MoE pybind bindings (BF16/INT8/INT4 + SkipLoRA variants)

  Python sft/ submodule (kt_kernel.sft):
  - base.py: BaseSFTMoEWrapper with buffer management (template method pattern)
  - amx.py: AMXSFTMoEWrapper (weight loading, C++ task construction)
  - autograd.py: KTMoEFunction (torch.autograd.Function for distributed training)
  - layer.py: KTMoELayerWrapper (nn.Module replacing HF MoE layers)
  - arch.py: MOEArchConfig (Qwen3/DeepSeek/Mixtral architecture detection)
  - weights.py: expert weight extraction and checkpoint loading
  - lora.py: PEFT LoRA adaptation (view buffers, grad buffers, save/load adapter)
  - wrapper.py: wrap_moe_layers_with_kt_wrapper, load_kt_model, build_kt_device_map
  - config.py: KTConfig dataclass (DeepSpeed-style opaque config passthrough)
  - dist_utils.py: distributed gather/scatter, checkpoint-phase detection

  Design decisions:
  - Rank-0-only expert pattern: only rank 0 holds the C++ wrapper and expert weights
  - DeepSpeed-style integration: accelerate keeps only KTransformersPlugin (framework interaction fields); all other logic lives in kt_kernel.sft
  - Inference isolation: importing kt_kernel does not load the sft/ submodule
  - Old field name compatibility: _get_kt_config() converts kt_xxx→xxx automatically

  Verified: Qwen3-235B-A22B 4-GPU AMXBF16 training, loss converges normally.

* refactor(sft): unify KTConfig field names with kt_ prefix, add share_cache_pool, remove dead code

  - All KTConfig fields use the kt_ prefix, matching the dict keys — eliminates the _OLD_TO_NEW mapping and prefix-stripping in wrapper.py
  - Add the kt_share_cache_pool field, auto-enabled when gradient_checkpointing is on (via training_args.py); it flows through to C++ cache allocation
  - Remove dead checkpoint-detection code: the in_ckpt_recompute and in_ckpt_first_forward vars (assigned but never read), the fallback _is_in_checkpoint_first_forward() function, and an unused inspect import
  - Remove redundant env-var fallbacks in wrapper.py for share_backward_bb and share_cache_pool (KTConfig.__post_init__ already handles env vars)
  - Simplify layer.py checkpoint logic to a single _checkpoint_hook_mode() check

  Verified: Qwen3-235B 3-step training on sap4, loss matches baseline (1.2886 / 1.9824 / 1.377 vs 1.2886 / 1.9766 / 1.3809)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(sft): share_backward_bb defaults to True, share_cache_pool auto-derived

  - kt_share_backward_bb defaults to True (always saves memory)
  - kt_share_cache_pool no longer reads from an env var; it defaults to False and is auto-set to True by trainer_config_process when gradient checkpointing is enabled

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add missing gpu_experts_mask=None to the KTMoEWrapper call in the SFT wrapper

  KTMoEWrapper.__new__() requires gpu_experts_mask as a positional argument, but the SFT wrapper omitted it, causing MoE layer wrapping to fail silently and FSDP2 to attempt broadcasting all expert weights (OOM/NCCL crash).

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(sft): support transformers v5 fused expert format

  Fused experts (e.g. Qwen3MoeExperts) store weights as 3D Parameters (gate_up_proj [E, 2I, H], down_proj [E, H, I]) instead of per-expert nn.Linear modules. PEFT cannot attach LoRA to these, so we create KT-managed LoRA buffers with kaiming init, nn.Parameter wrappers for the optimizer, and pre-assigned .grad for the C++ backward.

  - arch.py: detect_fused_experts() detection
  - weights.py: fused-format extraction and weight clearing
  - wrapper.py: detect the fused format at wrap time, store _fused_experts/_lora_rank
  - lora.py: _create_fused_expert_lora_buffers, save/load fused LoRA, get_kt_lora_params collects fused params, deduplicate wrapper finding
  - layer.py: handle the v5 TopKRouter tuple output, remove dead code
  - autograd.py: sync_forward_sft/submit_forward_sft API rename

  Verified: v5 loss/expert-LoRA values match the v4 baseline; v4 backward compatibility preserved.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(sft): add Qwen3.5 MoE support + fused checkpoint loading

  - arch.py: add a Qwen3_5Moe arch match, read config from text_config, _get_layers_prefix returns model.language_model.layers for Qwen3.5, _get_model_container_and_layers searches the language_model attr
  - weights.py: load_experts_from_checkpoint_files detects the fused format (gate_up_proj in weight_map) and splits it into gate/up/down
  - wrapper.py: hidden_size falls back to text_config

  Verified: Qwen3.5-35B-A3B (256 experts, fused format) E2E pass.

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(sft): align Python API with C++ backend after v5 refactor

  - wrapper.py: pass gpu_experts_mask=None to KTMoEWrapper (required by the C++ signature)
  - layer.py: rename submit_forward_sft/sync_forward_sft to submit_forward/sync_forward
  - autograd.py: rename sync_forward_sft to sync_forward

  The sft-v5 refactor (commits 58d7eab, dd1da65) renamed the Python-side method calls, but the C++ backend (AMXSFTMoEWrapper) still exposes the original method names, causing an AttributeError on Qwen3.5-35B and other models.

* align sft branch with main: revert worker_pool, strip sft_timer, fix inference defaults

  - Revert worker_pool.cpp/.h to main (remove the RDTSC timer, Chrome Trace, the sft_timer namespace, the ITT API, and the extended do_work_stealing_job API)
  - Strip all sft_timer instrumentation from sft-only files (sft_moe.hpp, moe-sft-tp.hpp, avx_kernels.hpp)
  - Restore pin_memory=True in KExpertsCPUBuffer (inference path)
  - Restore the fused tensor transpose logic in convert_cpu_weights.py (main layout)

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* revert CMakeLists.txt to main: remove debug flags and the cpptrace dep

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* clean up dev artifacts: remove SFT design docs, debug examples, bench scripts

  Remove files not needed in the merge:
  - docs/SFT+KTWrapper/ (6 Chinese design docs)
  - docs/sft_moe_amx/ (21 dev/debug docs)
  - 12 debug/test example scripts
  - 6 SFT-specific bench scripts and a report

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* remove dev version stamps from ext_bindings, sft_moe, moe-sft-tp

  Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: JimmyPeilinLi <lipeilin@mail.nwpu.edu.cn>
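To make the v5 fused-expert handling above concrete: transformers v5 stores all experts of a layer as 3D Parameters (gate_up_proj [E, 2I, H], down_proj [E, H, I]), and the checkpoint loader splits these back into per-expert gate/up/down tensors. The sketch below is illustrative only; the helper name and the assumption that gate occupies the first I rows of each expert's gate_up_proj slice are not taken from this PR.

import torch

def split_fused_experts(gate_up_proj: torch.Tensor, down_proj: torch.Tensor):
    """Illustrative split of fused expert weights (not the PR's implementation).

    gate_up_proj: [E, 2I, H]  -- per-expert gate and up projections stacked along dim 1
    down_proj:    [E, H, I]
    Assumes gate occupies the first I rows of each expert's gate_up_proj slice.
    Returns a list of per-expert (gate [I, H], up [I, H], down [H, I]) views.
    """
    num_experts, two_i, _hidden = gate_up_proj.shape
    inter = two_i // 2
    per_expert = []
    for e in range(num_experts):
        gate = gate_up_proj[e, :inter, :]   # [I, H]
        up = gate_up_proj[e, inter:, :]     # [I, H]
        down = down_proj[e]                 # [H, I]
        per_expert.append((gate, up, down))
    return per_expert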
This commit is contained in:
parent 22e9915ec9
commit 9544a8960d
41 changed files with 22866 additions and 937 deletions
kt-kernel/python/sft/config.py (normal file, +139)
@@ -0,0 +1,139 @@
# KT-Kernel SFT configuration
# SPDX-License-Identifier: Apache-2.0

"""
KTConfig: kt-kernel's own configuration dataclass.

This is the kt-kernel equivalent of DeepSpeed's JSON config —
it holds all kt-kernel-specific settings and is passed through
KTransformersPlugin.kt_config (similar to DeepSpeedPlugin.hf_ds_config).
"""

from __future__ import annotations

import dataclasses
import os
from dataclasses import dataclass, field
from typing import Any, Callable


def _env_int(key: str, default: int | None) -> int | None:
    value = os.environ.get(key, None)
    if value is None or value == "":
        return default
    return int(value)


def _env_float(key: str, default: float | None) -> float | None:
    value = os.environ.get(key, None)
    if value is None or value == "":
        return default
    return float(value)


def _env_bool(key: str, default: bool) -> bool:
    value = os.environ.get(key, None)
    if value is None or value == "":
        return default
    return value.lower() in ("1", "true", "yes")


@dataclass
class KTConfig:
    """
    KT-Kernel configuration for SFT training.

    All field names use the ``kt_`` prefix so they match the dict keys used in
    HfTrainerKTConfig / YAML configs. This means ``KTConfig(**dict)`` works
    directly — no name-mapping or prefix-stripping needed.

    Can be created from:
    - Direct construction: KTConfig(kt_backend="AMXBF16", kt_weight_path="/path/...")
    - Dict: KTConfig(**config_dict)
    - Environment variables: KTConfig() reads ACCELERATE_KT_* env vars as defaults
    """

    # Backend selection
    kt_backend: str | None = None
    kt_num_threads: int | None = None
    kt_tp_enabled: bool | None = None
    kt_threadpool_count: int | None = None

    # Weight loading
    kt_weight_path: str | None = None
    kt_expert_checkpoint_path: str | None = None
    kt_num_gpu_experts: int | None = None
    kt_skip_expert_loading: bool | None = None
    kt_share_backward_bb: bool | None = None  # default True — always saves memory
    kt_share_cache_pool: bool | None = None  # auto-set by trainer_config_process, not user-facing

    # Cache
    kt_max_cache_depth: int | None = None
    kt_model_max_length: int | None = None

    # LoRA
    kt_lora_rank: int | None = None
    kt_lora_alpha: float | None = None

    # LoRA Experts (GPU-side extra experts)
    kt_use_lora_experts: bool | None = None
    kt_lora_expert_num: int | None = None
    kt_lora_expert_intermediate_size: int | None = None

    # Runtime state (set during wrapping, not by user)
    kt_checkpoint_files: list[str] | None = None
    kt_sharded_metadata: dict | None = None

    # Custom wrapping
    kt_wrap_fn: Callable[..., Any] | None = None
    kt_wrap_kwargs: dict[str, Any] | None = None

    @classmethod
    def from_object(cls, obj: Any) -> "KTConfig":
        """Create KTConfig from an attribute-based object (HfTrainerKTConfig, etc.)."""
        _field_names = {f.name for f in dataclasses.fields(cls)}
        kwargs: dict[str, Any] = {}
        for name in _field_names:
            val = getattr(obj, name, None)
            if val is not None:
                kwargs[name] = val
        return cls(**kwargs)

    def __post_init__(self):
        if self.kt_backend is None:
            self.kt_backend = os.environ.get("ACCELERATE_KT_BACKEND", "AMXBF16")
        if self.kt_num_threads is None:
            self.kt_num_threads = _env_int("ACCELERATE_KT_NUM_THREADS", 1)
        if self.kt_tp_enabled is None:
            self.kt_tp_enabled = _env_bool("ACCELERATE_KT_TP_ENABLED", False)
        if self.kt_threadpool_count is None:
            self.kt_threadpool_count = _env_int("ACCELERATE_KT_THREADPOOL_COUNT", 1)
        if self.kt_weight_path is None:
            self.kt_weight_path = os.environ.get("ACCELERATE_KT_WEIGHT_PATH", None)
        if self.kt_expert_checkpoint_path is None:
            self.kt_expert_checkpoint_path = os.environ.get("ACCELERATE_KT_EXPERT_CHECKPOINT_PATH", None)
        if self.kt_num_gpu_experts is None:
            self.kt_num_gpu_experts = _env_int("ACCELERATE_KT_NUM_GPU_EXPERTS", 0)
        if self.kt_max_cache_depth is None:
            self.kt_max_cache_depth = _env_int("ACCELERATE_KT_MAX_CACHE_DEPTH", 2)
        if self.kt_share_backward_bb is None:
            self.kt_share_backward_bb = _env_bool("ACCELERATE_KT_SHARE_BACKWARD_BB", True)
        if self.kt_share_cache_pool is None:
            self.kt_share_cache_pool = False
        if self.kt_use_lora_experts is None:
            self.kt_use_lora_experts = _env_bool("ACCELERATE_KT_USE_LORA_EXPERTS", False)
        if self.kt_lora_expert_num is None:
            self.kt_lora_expert_num = _env_int("ACCELERATE_KT_LORA_EXPERT_NUM", None)
        if self.kt_lora_expert_intermediate_size is None:
            self.kt_lora_expert_intermediate_size = _env_int("ACCELERATE_KT_LORA_EXPERT_INTERMEDIATE_SIZE", None)
        if self.kt_lora_rank is None:
            self.kt_lora_rank = _env_int("ACCELERATE_KT_LORA_RANK", None)
        if self.kt_lora_alpha is None:
            self.kt_lora_alpha = _env_float("ACCELERATE_KT_LORA_ALPHA", None)
        if self.kt_lora_alpha is None and self.kt_lora_rank is not None:
            self.kt_lora_alpha = float(self.kt_lora_rank * 2)
        if self.kt_model_max_length is None:
            self.kt_model_max_length = _env_int("ACCELERATE_KT_MODEL_MAX_LENGTH", None)
        if self.kt_skip_expert_loading is None:
            if "ACCELERATE_KT_SKIP_EXPERT_LOADING" in os.environ:
                self.kt_skip_expert_loading = _env_bool("ACCELERATE_KT_SKIP_EXPERT_LOADING", True)
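For quick reference, a minimal sketch of the three construction paths listed in the KTConfig docstring, using only field names and env vars that appear in this file. Paths and values are placeholders, and the import path is an assumption based on the file's location in the kt_kernel.sft submodule.

import os

from kt_kernel.sft.config import KTConfig  # import path assumed from this file's location

# 1) Direct construction: explicit fields take precedence over env-var defaults.
cfg = KTConfig(kt_backend="AMXBF16", kt_weight_path="/path/to/cpu_weights", kt_lora_rank=8)
# __post_init__ derives alpha = 2 * rank when neither the field nor
# ACCELERATE_KT_LORA_ALPHA is set.
assert cfg.kt_lora_alpha == 16.0

# 2) From a plain dict (e.g. parsed out of a YAML training config).
cfg = KTConfig(**{"kt_backend": "AMXBF16", "kt_num_threads": 32})

# 3) From environment variables only: __post_init__ fills unset fields.
os.environ["ACCELERATE_KT_NUM_GPU_EXPERTS"] = "8"
cfg = KTConfig()
assert cfg.kt_num_gpu_experts == 8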