blt/bytelatent
Pedro Rodriguez 9cf7847e26 Fix distributed all reduce grad norm
Summary:

With more than one GPU on a single node, the all-reduce fails when gradient inputs are not bf16. This change uses a modified copy of torch's grad-norm utility to avoid those failures.
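
A minimal sketch of the kind of workaround described above: accumulate per-parameter norms in float32 before the optional all-reduce, so the collective never operates on mixed or unsupported gradient dtypes. The function name and details below are hypothetical illustrations, not the actual `norms.py` implementation.

```python
import torch
import torch.distributed as dist


def total_grad_norm(parameters, norm_type: float = 2.0) -> torch.Tensor:
    """Compute the global gradient norm across parameters (and ranks,
    if torch.distributed is initialized).

    Hypothetical sketch: local norms are cast to float32 before the
    all-reduce, sidestepping dtype issues with bf16/fp16 gradients.
    """
    grads = [p.grad for p in parameters if p.grad is not None]
    if not grads:
        return torch.tensor(0.0)
    # Accumulate in float32 regardless of the gradients' dtype.
    local = torch.stack(
        [torch.linalg.vector_norm(g.detach(), norm_type).float() for g in grads]
    )
    total = torch.linalg.vector_norm(local, norm_type)
    if dist.is_available() and dist.is_initialized():
        # Reduce the norm_type-th power so the collective is a plain SUM,
        # then take the root to recover the global norm.
        total = total ** norm_type
        dist.all_reduce(total)
        total = total ** (1.0 / norm_type)
    return total
```

In a single-process run the distributed branch is skipped and the function reduces to an ordinary grad-norm computation, so it can be unit-tested without `torchrun`.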

Test Plan:

- Run unit tests:
- Run single-GPU training: `python -m bytelatent.train config=internal/configs/s3_debug.yaml eval=null checkpoint.dump.every=100`
- Run single-node, multi-GPU training: `torchrun --nproc-per-node 8 -m bytelatent.train config=internal/configs/s3_debug.yaml eval=null checkpoint.dump.every=100`
2025-02-05 00:51:27 +00:00
configs This includes fixes that make checkpointing and reloading work correctly. (#35) 2025-01-27 16:56:42 -08:00
data This includes fixes that make checkpointing and reloading work correctly. (#35) 2025-01-27 16:56:42 -08:00
model Initial codes and scripts for training entropy model (#34) 2025-01-27 09:46:44 -08:00
plotting Add plotting code from paper (#17) 2025-01-09 12:11:50 -08:00
preprocess Fix realtime entropy patching (#26) 2025-01-21 16:34:23 -08:00
tokenizers Initial commit 2024-12-12 15:32:30 -08:00
.DS_Store Initial commit 2024-12-12 15:32:30 -08:00
__init__.py Initial commit 2024-12-12 15:32:30 -08:00
args.py This includes fixes that make checkpointing and reloading work correctly. (#35) 2025-01-27 16:56:42 -08:00
base_transformer.py Changes for training entropy model and correcting attention in local models (#25) 2025-01-17 14:23:01 -08:00
checkpoint.py This includes fixes that make checkpointing and reloading work correctly. (#35) 2025-01-27 16:56:42 -08:00
constants.py Initial commit 2024-12-12 15:32:30 -08:00
distributed.py Changes for training entropy model and correcting attention in local models (#25) 2025-01-17 14:23:01 -08:00
entropy_model.py Changes for training entropy model and correcting attention in local models (#25) 2025-01-17 14:23:01 -08:00
eval.py This includes fixes that make checkpointing and reloading work correctly. (#35) 2025-01-27 16:56:42 -08:00
float8.py Initial commit 2024-12-12 15:32:30 -08:00
generate.py This includes fixes that make checkpointing and reloading work correctly. (#35) 2025-01-27 16:56:42 -08:00
logger.py Replace regular filesystem calls with fsspec + add s3 support (#18) 2025-01-10 11:04:41 -08:00
metrics.py Initial commit 2024-12-12 15:32:30 -08:00
norms.py Fix distributed all reduce grad norm 2025-02-05 00:51:27 +00:00
optim.py Initial commit 2024-12-12 15:32:30 -08:00
probe.py Initial commit 2024-12-12 15:32:30 -08:00
profiling.py Initial commit 2024-12-12 15:32:30 -08:00
stool.py Initial commit 2024-12-12 15:32:30 -08:00
test_blt.py Initial codes and scripts for training entropy model (#34) 2025-01-27 09:46:44 -08:00
test_entropy_model.py Changes for training entropy model and correcting attention in local models (#25) 2025-01-17 14:23:01 -08:00
train.py Fix distributed all reduce grad norm 2025-02-05 00:51:27 +00:00
transformer.py Changes for training entropy model and correcting attention in local models (#25) 2025-01-17 14:23:01 -08:00