mirror of
https://github.com/kvcache-ai/ktransformers.git
synced 2026-04-28 11:49:51 +00:00
Fix kt-kernel for new wrapper (#1588)
Some checks are pending
Book-CI / test (push) Waiting to run
Book-CI / test-1 (push) Waiting to run
Book-CI / test-2 (push) Waiting to run
Deploy / deploy (macos-latest) (push) Waiting to run
Deploy / deploy (ubuntu-latest) (push) Waiting to run
Deploy / deploy (windows-latest) (push) Waiting to run
* update README for kt-kernel
* style: format C++ and Python code in kt-kernel
  - Format C++ files: task_queue, ext_bindings, and MoE operators
  - Format Python utility modules: amx, llamafile, and loader
  - Improve code readability and consistency
This commit is contained in:
parent 9bc00e587b
commit 94c25626dc

10 changed files with 219 additions and 179 deletions
The docstring change in this commit (renaming `AMXMoEWrapper` to `KTMoEWrapper` and updating its keyword arguments), reconstructed as a unified diff:

```diff
@@ -6,8 +6,8 @@ KT-Kernel provides high-performance kernel operations for KTransformers,
 including CPU-optimized MoE inference with AMX, AVX, and KML support.
 
 Example usage:
->>> from kt_kernel import AMXMoEWrapper
->>> wrapper = AMXMoEWrapper(
+>>> from kt_kernel import KTMoEWrapper
+>>> wrapper = KTMoEWrapper(
 ...     layer_idx=0,
 ...     num_experts=8,
 ...     num_experts_per_tok=2,
@@ -15,9 +15,10 @@ Example usage:
 ...     moe_intermediate_size=14336,
 ...     num_gpu_experts=2,
 ...     cpuinfer_threads=32,
-...     subpool_count=2,
-...     amx_weight_path="/path/to/weights",
-...     chunked_prefill_size=512
+...     threadpool_count=2,
+...     weight_path="/path/to/weights",
+...     chunked_prefill_size=512,
+...     method="AMXINT4"
 ...     )
 """
 
```
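The kwarg changes visible in the diff (`subpool_count` → `threadpool_count`, `amx_weight_path` → `weight_path`, plus a new `method` argument) can be sketched as a small migration helper. This is a hypothetical illustration, not part of kt-kernel; the parameter names are taken from the diff, and the default `method="AMXINT4"` is the value shown in the updated example, not a documented default.

```python
# Hypothetical helper (NOT part of kt_kernel) illustrating the kwarg
# rename this commit implies for AMXMoEWrapper -> KTMoEWrapper:
#   subpool_count    -> threadpool_count
#   amx_weight_path  -> weight_path
# plus a new `method` argument (the diff's example uses "AMXINT4").

_RENAMED = {
    "subpool_count": "threadpool_count",
    "amx_weight_path": "weight_path",
}


def migrate_wrapper_kwargs(old_kwargs, method="AMXINT4"):
    """Translate old AMXMoEWrapper kwargs into KTMoEWrapper kwargs."""
    # Rename keys where needed, keep everything else as-is.
    new_kwargs = {_RENAMED.get(k, k): v for k, v in old_kwargs.items()}
    # Add the new `method` argument if the caller did not set it.
    new_kwargs.setdefault("method", method)
    return new_kwargs


if __name__ == "__main__":
    old = {
        "layer_idx": 0,
        "num_experts": 8,
        "subpool_count": 2,
        "amx_weight_path": "/path/to/weights",
        "chunked_prefill_size": 512,
    }
    print(migrate_wrapper_kwargs(old))
```

With the migrated kwargs in hand, the new wrapper would be constructed as in the updated docstring: `KTMoEWrapper(**migrate_wrapper_kwargs(old))`.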