Mirror of https://github.com/kvcache-ai/ktransformers.git, synced 2026-04-28 20:00:06 +00:00
feat(sft): AMX MoE SFT backend with LoRA support (#1936)
Some checks failed
Book-CI / test (push) Has been cancelled
Book-CI / test-1 (push) Has been cancelled
Book-CI / test-2 (push) Has been cancelled
Deploy / deploy (macos-latest) (push) Has been cancelled
Deploy / deploy (ubuntu-latest) (push) Has been cancelled
Deploy / deploy (windows-latest) (push) Has been cancelled
* feat(sft): AMX MoE SFT backend with LoRA support

Complete SFT (Supervised Fine-Tuning) backend for MoE models using AMX SIMD.

Core C++ implementation:
- sft_moe.hpp: forward/backward with LoRA fused operations (~5500 lines)
- moe-sft-tp.hpp: tensor-parallel wrapper for multi-NUMA
- amx/moe-sft-tp.hpp: AMX-specific TP implementation
- avx_kernels.hpp: AVX512 SIMD kernels for LoRA GEMM
- amx_kernels.hpp: AMX tile kernels for Panel5 rank-outer optimization
- worker_pool: RDTSC profiling, Chrome trace output, SFT timer infrastructure
- ext_bindings.cpp: SFT MoE pybind bindings (BF16/INT8/INT4 + SkipLoRA variants)

Python sft/ submodule (kt_kernel.sft):
- base.py: BaseSFTMoEWrapper with buffer management (template method pattern)
- amx.py: AMXSFTMoEWrapper (weight loading, C++ task construction)
- autograd.py: KTMoEFunction (torch.autograd.Function for distributed training)
- layer.py: KTMoELayerWrapper (nn.Module replacing HF MoE layers)
- arch.py: MOEArchConfig (Qwen3/DeepSeek/Mixtral architecture detection)
- weights.py: expert weight extraction and checkpoint loading
- lora.py: PEFT LoRA adaptation (view buffers, grad buffers, save/load adapter)
- wrapper.py: wrap_moe_layers_with_kt_wrapper, load_kt_model, build_kt_device_map
- config.py: KTConfig dataclass (DeepSpeed-style opaque config passthrough)
- dist_utils.py: distributed gather/scatter, checkpoint-phase detection

Design decisions:
- Rank-0-only expert pattern: only rank 0 holds the C++ wrapper and expert weights
- DeepSpeed-style integration: accelerate keeps only KTransformersPlugin (framework interaction fields); all logic lives in kt_kernel.sft
- Inference isolation: importing kt_kernel does not load the sft/ submodule
- Old field name compatibility: _get_kt_config() converts kt_xxx→xxx automatically

Verified: Qwen3-235B-A22B 4-GPU AMXBF16 training; loss converges normally.

* refactor(sft): unify KTConfig field names with kt_ prefix, add share_cache_pool, remove dead code

- KTConfig fields all use the kt_ prefix matching dict keys, which eliminates the _OLD_TO_NEW mapping and prefix-stripping in wrapper.py
- Add kt_share_cache_pool field, auto-enabled when gradient_checkpointing is on (via training_args.py); flows through to C++ cache allocation
- Remove dead checkpoint-detection code: the in_ckpt_recompute and in_ckpt_first_forward vars (assigned but never read), the fallback _is_in_checkpoint_first_forward() function, and an unused inspect import
- Remove redundant env-var fallbacks in wrapper.py for share_backward_bb and share_cache_pool (KTConfig.__post_init__ already handles env vars)
- Simplify layer.py checkpoint logic to a single _checkpoint_hook_mode() check

Verified: Qwen3-235B 3-step training on sap4; loss matches baseline (1.2886 / 1.9824 / 1.377 vs 1.2886 / 1.9766 / 1.3809)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* refactor(sft): share_backward_bb defaults to True, share_cache_pool auto-derived

- kt_share_backward_bb defaults to True (always saves memory)
- kt_share_cache_pool no longer reads from an env var; it defaults to False and is auto-set to True by trainer_config_process when gradient checkpointing is enabled

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix: add missing gpu_experts_mask=None to KTMoEWrapper call in SFT wrapper

KTMoEWrapper.__new__() requires gpu_experts_mask as a positional argument, but the SFT wrapper omitted it, causing MoE layer wrapping to fail silently and FSDP2 to attempt broadcasting all expert weights (OOM/NCCL crash).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(sft): support transformers v5 fused expert format

Fused experts (e.g. Qwen3MoeExperts) store weights as 3D Parameters (gate_up_proj [E,2I,H], down_proj [E,H,I]) instead of per-expert nn.Linear modules. PEFT cannot attach LoRA to these, so we create KT-managed LoRA buffers with kaiming init, nn.Parameter wrappers for the optimizer, and pre-assigned .grad for the C++ backward.

- arch.py: detect_fused_experts() detection
- weights.py: fused-format extraction and weight clearing
- wrapper.py: detect fused format at wrap time; store _fused_experts/_lora_rank
- lora.py: _create_fused_expert_lora_buffers, save/load fused LoRA, get_kt_lora_params collects fused params, deduplicate wrapper finding
- layer.py: handle v5 TopKRouter tuple output, remove dead code
- autograd.py: sync_forward_sft/submit_forward_sft API rename

Verified: v5 loss/expert-LoRA values match the v4 baseline; v4 backward compatibility preserved.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* feat(sft): add Qwen3.5 MoE support + fused checkpoint loading

- arch.py: add Qwen3_5Moe arch match; read config from text_config; _get_layers_prefix returns model.language_model.layers for Qwen3.5; _get_model_container_and_layers searches the language_model attr
- weights.py: load_experts_from_checkpoint_files detects the fused format (gate_up_proj in weight_map) and splits it into gate/up/down
- wrapper.py: hidden_size falls back to text_config

Verified: Qwen3.5-35B-A3B (256 experts, fused format) E2E pass.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* fix(sft): align Python API with C++ backend after v5 refactor

- wrapper.py: pass gpu_experts_mask=None to KTMoEWrapper (required by the C++ signature)
- layer.py: rename submit_forward_sft/sync_forward_sft to submit_forward/sync_forward
- autograd.py: rename sync_forward_sft to sync_forward

The sft-v5 refactor (commits 58d7eab, dd1da65) renamed Python-side method calls, but the C++ backend (AMXSFTMoEWrapper) still exposes the original method names. This caused an AttributeError on Qwen3.5-35B and other models.

* align sft branch with main: revert worker_pool, strip sft_timer, fix inference defaults

- Revert worker_pool.cpp/.h to main (remove the RDTSC timer, Chrome Trace output, the sft_timer namespace, the ITT API, and the extended do_work_stealing_job API)
- Strip all sft_timer instrumentation from sft-only files (sft_moe.hpp, moe-sft-tp.hpp, avx_kernels.hpp)
- Restore pin_memory=True in KExpertsCPUBuffer (inference path)
- Restore the fused tensor transpose logic in convert_cpu_weights.py (main layout)

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* revert CMakeLists.txt to main: remove debug flags and cpptrace dep

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* clean up dev artifacts: remove SFT design docs, debug examples, bench scripts

Remove files not needed in the merge:
- docs/SFT+KTWrapper/ (6 Chinese design docs)
- docs/sft_moe_amx/ (21 dev/debug docs)
- 12 debug/test example scripts
- 6 SFT-specific bench scripts and a report

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

* remove dev version stamps from ext_bindings, sft_moe, moe-sft-tp

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>

---------

Co-authored-by: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-authored-by: JimmyPeilinLi <lipeilin@mail.nwpu.edu.cn>
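For readers of the two refactor commits above, here is a hedged sketch of the described sharing-flag behavior. The field names kt_share_backward_bb and kt_share_cache_pool come from the commit message; the dataclass body and the trainer_config_process hook mirror the described logic, not the verbatim config.py/training_args.py source.

```python
# Hedged sketch, not the repo's actual config.py: kt_share_backward_bb
# defaults on (always saves memory), while kt_share_cache_pool stays off
# until the trainer enables it together with gradient checkpointing.
from dataclasses import dataclass

@dataclass
class KTConfig:
    kt_share_backward_bb: bool = True   # always-on memory saving
    kt_share_cache_pool: bool = False   # derived by the trainer, see below

def trainer_config_process(kt_config: KTConfig, gradient_checkpointing: bool) -> KTConfig:
    # Mirrors the training_args.py hook described above: the shared cache
    # pool is only turned on when checkpoint recomputation is active.
    if gradient_checkpointing:
        kt_config.kt_share_cache_pool = True
    return kt_config
```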
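The fused-format commits give the checkpoint shapes as gate_up_proj [E, 2I, H] and down_proj [E, H, I]. A minimal sketch of the gate/up/down split that load_experts_from_checkpoint_files is described as performing — the helper name is ours, and the assumption that gate occupies the first I rows and up the second is a guess, not confirmed by the diff:

```python
# Assumed layout: gate_up_proj stacks gate then up along dim 1 ([E, 2I, H]).
# Hypothetical helper, not the actual weights.py implementation.
import torch

def split_fused_expert_weights(gate_up_proj: torch.Tensor, down_proj: torch.Tensor):
    """gate_up_proj: [E, 2I, H]; down_proj: [E, H, I] -> (gate, up, down)."""
    E, two_i, H = gate_up_proj.shape
    inter = two_i // 2
    assert down_proj.shape == (E, H, inter)
    gate = gate_up_proj[:, :inter, :]  # [E, I, H], one Linear(H->I) weight per expert
    up = gate_up_proj[:, inter:, :]    # [E, I, H]
    return gate, up, down_proj         # down_proj[e] is already the Linear(I->H) weight [H, I]
```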
parent 22e9915ec9
commit 9544a8960d
41 changed files with 22866 additions and 937 deletions
@@ -90,33 +90,35 @@ class KExpertsCPUBuffer:
         hidden_size = hidden_states.shape[-1]
         batch_size = hidden_states.shape[0]

+        pin_memory = True
+
         if batch_size in cls.capture_buffers:
             return cls.capture_buffers[batch_size]
         if batch_size == cls.temp_bs:
             return cls.temp_buffer

         input_tensor_cpu = [
-            torch.zeros((batch_size, hidden_size), device="cpu", pin_memory=True, dtype=torch.bfloat16)
+            torch.zeros((batch_size, hidden_size), device="cpu", pin_memory=pin_memory, dtype=torch.bfloat16)
             for _ in range(cls.buffer_depth)
         ]
         immediate_experts_ids_cpu = [
-            torch.zeros((batch_size, num_experts_per_tok), device="cpu", dtype=torch.long, pin_memory=True)
+            torch.zeros((batch_size, num_experts_per_tok), device="cpu", dtype=torch.long, pin_memory=pin_memory)
             for _ in range(cls.buffer_depth)
         ]
         deferred_experts_ids_cpu = [
-            torch.full((batch_size, num_experts_per_tok), -1, device="cpu", dtype=torch.long, pin_memory=True)
+            torch.full((batch_size, num_experts_per_tok), -1, device="cpu", dtype=torch.long, pin_memory=pin_memory)
             for _ in range(cls.buffer_depth)
         ]
         weights_cpu = [
-            torch.zeros((batch_size, num_experts_per_tok), device="cpu", dtype=torch.float32, pin_memory=True)
+            torch.zeros((batch_size, num_experts_per_tok), device="cpu", dtype=torch.float32, pin_memory=pin_memory)
             for _ in range(cls.buffer_depth)
         ]
         output_cpu = [
-            torch.zeros((batch_size, hidden_size), device="cpu", pin_memory=True, dtype=torch.bfloat16)
+            torch.zeros((batch_size, hidden_size), device="cpu", pin_memory=pin_memory, dtype=torch.bfloat16)
             for _ in range(cls.buffer_depth)
         ]
         bsz_tensor_cpu = [
-            torch.full((1,), batch_size, device="cpu", dtype=torch.int32, pin_memory=True)
+            torch.full((1,), batch_size, device="cpu", dtype=torch.int32, pin_memory=pin_memory)
             for _ in range(cls.buffer_depth)
         ]
         output_gpu = [
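The hunk above hoists the pin_memory flag into a local variable; per the commit notes, the merge keeps it at True for the inference path. For context, a standalone illustration (not from this diff) of why these host staging buffers are pinned: only page-locked memory lets the host-to-device copy run truly asynchronously, which is what a buffer_depth-deep staging ring relies on to overlap transfer with compute. The shapes below are arbitrary and a CUDA device is assumed.

```python
# Illustration only: pinned source -> non_blocking copy actually overlaps;
# from pageable memory the same call silently degrades to a synchronous copy.
import torch

cpu_buf = torch.zeros((8, 4096), dtype=torch.bfloat16, pin_memory=True)
gpu_buf = torch.empty((8, 4096), dtype=torch.bfloat16, device="cuda")

copy_stream = torch.cuda.Stream()
with torch.cuda.stream(copy_stream):
    gpu_buf.copy_(cpu_buf, non_blocking=True)  # async H2D on copy_stream
copy_stream.synchronize()
```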
@@ -140,13 +142,94 @@ class KExpertsCPUBuffer:
         return cur_buffer


-class BaseMoEWrapper(ABC):
+class _MoEBase:
+    """
+    Shared base class for inference and SFT MoE wrappers.
+
+    Provides:
+    - CPUInfer singleton management
+    - Basic configuration validation
+
+    This class is shared between BaseMoEWrapper (inference) and BaseSFTMoEWrapper (SFT).
+    """
+
+    _cpu_infer_instance = None
+
+    @classmethod
+    def _get_cpu_infer(
+        cls,
+        cpuinfer_threads: int,
+        threadpool_count: int,
+        numa_nodes=None,
+    ):
+        """
+        Get or create the CPUInfer singleton instance.
+
+        Args:
+            cpuinfer_threads: Total number of CPU inference threads
+            threadpool_count: Number of NUMA subpools (TP count)
+            numa_nodes: Explicit list of NUMA node IDs. If None, defaults to sequential.
+
+        Returns:
+            CPUInfer singleton instance
+        """
+        if cls._cpu_infer_instance is None:
+            worker_config = kt_kernel_ext.WorkerPoolConfig()
+
+            if numa_nodes is not None:
+                if len(numa_nodes) != threadpool_count:
+                    raise ValueError(
+                        f"numa_nodes length ({len(numa_nodes)}) must match "
+                        f"threadpool_count ({threadpool_count})"
+                    )
+                subpool_numa_map = list(numa_nodes)
+            else:
+                subpool_numa_map = list(range(threadpool_count))
+            subpool_thread_count = [
+                cpuinfer_threads // threadpool_count + (1 if i < cpuinfer_threads % threadpool_count else 0)
+                for i in range(threadpool_count)
+            ]
+
+            worker_config.subpool_count = threadpool_count
+            worker_config.subpool_numa_map = subpool_numa_map
+            worker_config.subpool_thread_count = subpool_thread_count
+            cls._cpu_infer_instance = kt_kernel_ext.CPUInfer(worker_config)
+
+        return cls._cpu_infer_instance
+
+    @staticmethod
+    def _validate_base_config(
+        num_experts: int,
+        hidden_size: int,
+        moe_intermediate_size: int,
+        num_experts_per_tok: int,
+    ) -> None:
+        """
+        Validate basic configuration parameters.
+
+        Raises:
+            ValueError: If parameters are invalid
+        """
+        if num_experts <= 0:
+            raise ValueError(f"num_experts must be positive, got {num_experts}")
+        if hidden_size <= 0:
+            raise ValueError(f"hidden_size must be positive, got {hidden_size}")
+        if moe_intermediate_size <= 0:
+            raise ValueError(f"moe_intermediate_size must be positive, got {moe_intermediate_size}")
+        if num_experts_per_tok <= 0:
+            raise ValueError(f"num_experts_per_tok must be positive, got {num_experts_per_tok}")
+        if num_experts_per_tok > num_experts:
+            raise ValueError(
+                f"num_experts_per_tok ({num_experts_per_tok}) cannot exceed " f"num_experts ({num_experts})"
+            )
+
+
+class BaseMoEWrapper(_MoEBase, ABC):
     """
     Base class for MoE CPU inference operations.
     Provides common functionality for all backend implementations.
     """

     _cpu_infer_instance = None
     _layer_has_pending_deferred: Dict[int, bool] = {}

     def __init__(
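The subpool_thread_count comprehension in _get_cpu_infer spreads cpuinfer_threads across the NUMA subpools as evenly as possible, handing the remainder to the lowest-numbered pools. A standalone check of that arithmetic (the function name here is ours, not from the file):

```python
# Same expression as in _get_cpu_infer: each of the first
# (threads % pools) subpools receives one extra thread.
def split_threads(cpuinfer_threads: int, threadpool_count: int) -> list[int]:
    return [
        cpuinfer_threads // threadpool_count
        + (1 if i < cpuinfer_threads % threadpool_count else 0)
        for i in range(threadpool_count)
    ]

assert split_threads(62, 4) == [16, 16, 15, 15]  # remainder 2 -> pools 0 and 1
assert sum(split_threads(62, 4)) == 62           # no thread lost or duplicated
```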
@@ -220,30 +303,8 @@ class BaseMoEWrapper(ABC):
         BaseMoEWrapper._layer_has_pending_deferred[self.layer_idx] = False
         self.method = method

-        # Initialize CPU inference engine (singleton)
-        if BaseMoEWrapper._cpu_infer_instance is None:
-            worker_config = kt_kernel_ext.WorkerPoolConfig()
-
-            if numa_nodes is not None:
-                if len(numa_nodes) != threadpool_count:
-                    raise ValueError(
-                        f"numa_nodes length ({len(numa_nodes)}) must match "
-                        f"threadpool_count ({threadpool_count})"
-                    )
-                subpool_numa_map = list(numa_nodes)
-            else:
-                subpool_numa_map = list(range(threadpool_count))
-            subpool_thread_count = [
-                cpuinfer_threads // threadpool_count + (1 if i < cpuinfer_threads % threadpool_count else 0)
-                for i in range(threadpool_count)
-            ]
-
-            worker_config.subpool_count = threadpool_count
-            worker_config.subpool_numa_map = subpool_numa_map
-            worker_config.subpool_thread_count = subpool_thread_count
-            BaseMoEWrapper._cpu_infer_instance = kt_kernel_ext.CPUInfer(worker_config)
-
-        self.cpu_infer = BaseMoEWrapper._cpu_infer_instance
+        # Initialize CPU inference engine (singleton via shared base class)
+        self.cpu_infer = self._get_cpu_infer(cpuinfer_threads, threadpool_count, numa_nodes=numa_nodes)

         # Backend-specific initialization happens in subclasses
         self.moe = None
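After this refactor, every wrapper subclass obtains its CPUInfer through the shared base, so the inference and SFT paths reuse one engine and one worker pool. A minimal, self-contained reduction of that class-level singleton pattern — the names below are illustrative, and object() stands in for kt_kernel_ext.CPUInfer(config):

```python
# Illustrative sketch, not the repo code: the instance is cached on the base
# class itself, so whichever subclass constructs first, every later caller
# receives the same engine object.
class _Base:
    _instance = None

    @classmethod
    def get_engine(cls):
        # Reading and writing via the base class name keeps the sharing
        # unambiguous no matter which subclass calls first.
        if _Base._instance is None:
            _Base._instance = object()  # stand-in for CPUInfer construction
        return _Base._instance

class InferenceWrapper(_Base):
    pass

class SFTWrapper(_Base):
    pass

assert InferenceWrapper.get_engine() is SFTWrapper.get_engine()
```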
@@ -474,3 +535,4 @@ class BaseMoEWrapper(ABC):
         KExpertsCPUBuffer.capture_buffers.clear()
         KExpertsCPUBuffer.temp_bs = 0
         KExpertsCPUBuffer.temp_buffer = tuple()