├── LICENSE
├── README.md
├── data.py
├── mamba_model.py
├── requirements.txt
└── train.py
/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # mamba-train
2 |
3 |
4 |
5 |
6 |
7 | A single repo with all the scripts and utilities to train or fine-tune the Mamba model, with or without the Fill-in-Middle (FIM) objective (for code infilling).
8 |
9 | ### Data
10 | Currently, the `train.py` script only supports training from a Lance or a Hugging Face dataset. If you are training with a Hugging Face dataset, substitute your dataset for `MambaDataset` in `train.py`.
11 |
12 | When training with a Hugging Face dataset, the data needs to be grouped into chunks of the context length, i.e. each sample in the dataset must contain exactly 'context length' tokens. For more information on how to achieve this, see the [`group_texts`](https://github.com/huggingface/transformers/blob/89c64817ce4172bc8bb58c675c445a63f16d0e38/examples/pytorch/language-modeling/run_clm_no_trainer.py#L459-L472) function; a minimal sketch is shown below.
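
If you are unsure how to do the grouping, here is a minimal sketch in the spirit of the linked `group_texts` (assuming a tokenized dataset with an `input_ids` column; the column name and `context_len` value are placeholders):

```python
# Minimal sketch: pack a tokenized Hugging Face dataset into fixed-size blocks.
def group_texts(examples, context_len=384):
    # Concatenate every column of token lists into one long list.
    concatenated = {k: sum(examples[k], []) for k in examples.keys()}
    total_length = len(concatenated["input_ids"])
    # Drop the tail so every block holds exactly `context_len` tokens.
    total_length = (total_length // context_len) * context_len
    return {
        k: [t[i : i + context_len] for i in range(0, total_length, context_len)]
        for k, t in concatenated.items()
    }

# grouped_ds = tokenized_ds.map(group_texts, batched=True)
```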
13 |
14 | Once the data is in the right format, call the `apply_fim` function in the training loop, passing in the samples along with the appropriate parameters. If you face any problems, please open an issue!
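
A rough sketch of that call, assuming placeholder sentinel token ids (in `train.py` these come from the tokenizer) and a dummy sample:

```python
import numpy as np
import torch

from data import apply_fim

np_rng = np.random.RandomState(seed=42)
fim_rate = 0.9

# Placeholder sentinel token ids. `apply_fim` expects them as 1-element
# tensors because it concatenates them directly onto the sample (see data.py).
fim_prefix = torch.tensor([50300])
fim_middle = torch.tensor([50301])
fim_suffix = torch.tensor([50302])
fim_pad = torch.tensor([50303])

sample = torch.arange(385)  # stand-in for one (context_len + 1)-token sample
if np_rng.binomial(1, fim_rate):  # only transform `fim_rate` of the samples
    sample = apply_fim(
        sample, fim_prefix, fim_middle, fim_suffix, fim_pad,
        mode="psm", np_rng=np_rng,
    )
```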
15 |
16 | For the Lance dataset, I will be releasing the 5M-sample subset of the CodeParrot dataset soon. For more information on how it was made using Lance, see my [article](https://tanaymeh.github.io/blog/2024/02/08/p7.html).
17 |
18 | **A note about `MambaSampler`**: I am training the model on the Lance dataset, which is one large contiguous array of tokens. In this setting it is very hard to distinguish between different samples (each of context-length size) without altering the dataset creation process, and we need non-overlapping samples so as not to overfit the model.
19 |
20 | My workaround was a new sampler that draws `len(dataset) // context_len` samples from the dataset, where each sampled index is at least `context_len` positions apart from the next. This "emulates" individual samples with minimal processing overhead; see the sketch below.
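
Wired together, the dataset and sampler look roughly like this (mirroring `train.py`; the path and the sentinel token ids below are placeholders):

```python
from torch.utils.data import DataLoader

from data import MambaDataset, MambaSampler

context_len = 384
prefix_id, middle_id, suffix_id, pad_id = 50300, 50301, 50302, 50303  # placeholders

train_dataset = MambaDataset(
    "path/to/code_dataset.lance",  # placeholder path
    context_len=context_len,
    fim_prefix=prefix_id,
    fim_middle=middle_id,
    fim_suffix=suffix_id,
    fim_pad=pad_id,
    fim_rate=0.9,
    mode="psm",
)

# Every yielded index is at least `context_len + 1` positions from the next,
# so the (context_len + 1)-token windows never overlap.
train_dataloader = DataLoader(
    train_dataset,
    batch_size=8,
    sampler=MambaSampler(train_dataset, k=context_len + 1),
    shuffle=False,
    pin_memory=True,
)
```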
21 |
22 | ### Fill-in-Middle
23 | Both the Lance and HF datasets apply the Fill-in-Middle transformation to each sample during the training run. The FIM training objective lets the model infill code; FIM-trained models are the ones used by code-completion tools like GitHub Copilot.
24 | To learn more about the Fill-in-Middle training objective, see the [OpenAI paper](https://arxiv.org/abs/2207.14255).
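
Concretely, `apply_fim` in `data.py` splits each sample at two random boundaries and rearranges the pieces around the sentinel tokens. A toy illustration of the two supported orderings (the sentinel ids are placeholders):

```python
# Toy illustration of the two orderings built by `apply_fim` (see data.py).
PRE, MID, SUF = 50300, 50301, 50302              # placeholder sentinel token ids
prefix, middle, suffix = [1, 2], [3, 4], [5, 6]  # pieces of one split sample

psm = [PRE] + prefix + [SUF] + suffix + [MID] + middle  # mode="psm" (used here)
spm = [PRE, SUF] + suffix + [MID] + prefix + middle     # mode="spm"
```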
25 |
26 | To adjust what percentage of training samples are FIM-transformed, change the `fim_rate` parameter in both datasets. In `train.py` it is set to 0.9, meaning 90% of all samples will be FIM-transformed (this is because I am fine-tuning the model instead of pre-training it).
27 |
28 | ### Training
29 | Before starting the training run, install all the dependencies from the requirements file:
30 |
31 | ```bash
32 | pip install -r requirements.txt
33 | ```
34 |
35 | Once that is done, start the training run via:
36 |
37 | ```bash
38 | python train.py
39 | ```
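
There is no CLI yet: all hyper-parameters live in the `Args` class at the top of `train.py`, so point `dataset_path` at your own Lance dataset and adjust the other fields there before launching. An excerpt of that class (the path below is a placeholder):

```python
class Args:
    tokenizer_model = "EleutherAI/gpt-neox-20b"
    model_name = "state-spaces/mamba-790m"
    dataset_path = "path/to/your_dataset.lance"  # placeholder path
    fim_training = True
    fim_rate = 0.9
    context_len = 384
    train_batch_size = 8
    lr = 1e-4
    epochs = 10
```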
40 |
--------------------------------------------------------------------------------
/data.py:
--------------------------------------------------------------------------------
1 | import random
2 | import numpy as np
3 |
4 | import torch
5 | from torch.utils.data import Dataset, Sampler
6 |
7 | import lance
8 |
9 |
10 | def apply_fim(sample, fim_prefix, fim_middle, fim_suffix, fim_pad, mode, np_rng):
11 | """
12 | Applies FIM transformation on one sample
13 | """
14 | boundaries = sorted(np_rng.randint(low=0, high=len(sample) + 1, size=2))
15 |
16 | prefix = sample[: boundaries[0]]
17 | middle = sample[boundaries[0] : boundaries[1]]
18 | suffix = sample[boundaries[1] :]
19 |
20 | total_length = len(prefix) + len(middle) + len(suffix) + 3
21 | diff = total_length - len(sample)
22 | if diff > 0:
23 | suffix = suffix[: max(0, len(suffix) - diff)]
24 | elif diff < 0:
25 | extend = torch.cat([fim_pad for _ in range(-diff)])
26 | suffix = torch.cat([suffix, extend])
27 |
28 | if mode == "spm":
29 | # Apply SPM
30 |         transformed_example = torch.cat(
31 | [fim_prefix, fim_suffix, suffix, fim_middle, prefix, middle]
32 | )
33 | else:
34 | # Apply PSM
35 |         transformed_example = torch.cat(
36 | [fim_prefix, prefix, fim_suffix, suffix, fim_middle, middle]
37 | )
38 |
39 |     return transformed_example
40 |
41 |
42 | class MambaDataset(Dataset):
43 | def __init__(
44 | self,
45 | dataset_path,
46 | context_len,
47 | fim_prefix,
48 | fim_middle,
49 | fim_suffix,
50 | fim_pad,
51 | fim_rate=0.5,
52 | mode="psm",
53 | rng_seed=42,
54 | ):
55 | # Load the lance dataset from the saved path
56 | self.ds = lance.dataset(dataset_path)
57 | self.context_len = context_len
58 |
59 | # Doing this so the sampler never asks for an index at the end of text
60 | self.length = self.ds.count_rows() - context_len
61 |
62 | self.np_rng = np.random.RandomState(seed=rng_seed)
63 |
64 | self.fim_prefix = torch.tensor([fim_prefix])
65 | self.fim_middle = torch.tensor([fim_middle])
66 | self.fim_suffix = torch.tensor([fim_suffix])
67 | self.fim_pad = torch.tensor([fim_pad])
68 | self.fim_rate = fim_rate
69 | self.mode = mode
70 |
71 | def __len__(self):
72 | return self.length
73 |
74 | def from_idxs(self, idxs):
75 | """
76 | Little utility function to get the data from lance
77 | """
78 | data = self.ds.take(idxs).to_pylist()
79 | data = torch.tensor(list(map(lambda x: x["value"], data)))
80 | return data
81 |
82 | def apply_fim(self, sample):
83 | """
84 | Applies FIM transformation on one sample
85 | """
86 | boundaries = sorted(self.np_rng.randint(low=0, high=len(sample) + 1, size=2))
87 |
88 | prefix = sample[: boundaries[0]]
89 | middle = sample[boundaries[0] : boundaries[1]]
90 | suffix = sample[boundaries[1] :]
91 |
92 | total_length = len(prefix) + len(middle) + len(suffix) + 3
93 | diff = total_length - len(sample)
94 | if diff > 0:
95 | suffix = suffix[: max(0, len(suffix) - diff)]
96 | elif diff < 0:
97 | extend = torch.cat([self.fim_pad for _ in range(-diff)])
98 | suffix = torch.cat([suffix, extend])
99 |
100 | if self.mode == "spm":
101 | # Apply SPM
102 |             transformed_example = torch.cat(
103 | [
104 | self.fim_prefix,
105 | self.fim_suffix,
106 | suffix,
107 | self.fim_middle,
108 | prefix,
109 | middle,
110 | ]
111 | )
112 | else:
113 | # Apply PSM
114 |             transformed_example = torch.cat(
115 | [
116 | self.fim_prefix,
117 | prefix,
118 | self.fim_suffix,
119 | suffix,
120 | self.fim_middle,
121 | middle,
122 | ]
123 | )
124 |
125 |         return transformed_example
126 |
127 | def __getitem__(self, idx):
128 | """
129 | Generate a list of indices starting from the current idx to idx+context_len+1
130 | with optional fim transformation
131 | """
132 | current_window_idxs = np.arange(idx, idx + self.context_len + 1)
133 | sample = self.from_idxs(current_window_idxs)
134 |
135 | # Apply FIM transformation depending on the rate
136 | if self.np_rng.binomial(1, self.fim_rate):
137 | sample = self.apply_fim(sample)
138 |
139 | # +1 in labels because it is 1 step ahead of input tokens
140 | tokens = sample[0 : self.context_len]
141 | labels = sample[1 : self.context_len + 1]
142 | return {"tokens": tokens, "labels": labels}
143 |
144 |
145 | class MambaSampler(Sampler):
146 |     r"""Samples indices randomly, each at least `k` apart, where `k` is generally the context length of the LLM.
147 |
148 | Args:
149 | data_source (Dataset): dataset to sample from
150 | k (int): minimum index distance between each random sample
151 | """
152 |
153 | def __init__(self, data_source, k=16):
154 | self.data_source = data_source
155 | self.num_samples = len(self.data_source)
156 | self.available_indices = list(range(0, self.num_samples, k))
157 | random.shuffle(self.available_indices)
158 |
159 | def __iter__(self):
160 | yield from self.available_indices
161 |
162 | def __len__(self) -> int:
163 | return len(self.available_indices)
164 |
--------------------------------------------------------------------------------
/mamba_model.py:
--------------------------------------------------------------------------------
1 | # Copyright (c) 2023, Albert Gu, Tri Dao.
2 |
3 | import math
4 | from functools import partial
5 | import json
6 | import os
7 |
8 | from collections import namedtuple
9 |
10 | import torch
11 | import torch.nn as nn
12 | from torch.nn import CrossEntropyLoss
13 |
14 | from mamba_ssm.models.config_mamba import MambaConfig
15 | from mamba_ssm.modules.mamba_simple import Mamba, Block
16 | from mamba_ssm.utils.generation import GenerationMixin
17 | from mamba_ssm.utils.hf import load_config_hf, load_state_dict_hf
18 |
19 | try:
20 | from mamba_ssm.ops.triton.layernorm import RMSNorm, layer_norm_fn, rms_norm_fn
21 | except ImportError:
22 | RMSNorm, layer_norm_fn, rms_norm_fn = None, None, None
23 |
24 |
25 | def create_block(
26 | d_model,
27 | ssm_cfg=None,
28 | norm_epsilon=1e-5,
29 | rms_norm=False,
30 | residual_in_fp32=False,
31 | fused_add_norm=False,
32 | layer_idx=None,
33 | device=None,
34 | dtype=None,
35 | ):
36 | if ssm_cfg is None:
37 | ssm_cfg = {}
38 | factory_kwargs = {"device": device, "dtype": dtype}
39 | mixer_cls = partial(Mamba, layer_idx=layer_idx, **ssm_cfg, **factory_kwargs)
40 | norm_cls = partial(
41 | nn.LayerNorm if not rms_norm else RMSNorm, eps=norm_epsilon, **factory_kwargs
42 | )
43 | block = Block(
44 | d_model,
45 | mixer_cls,
46 | norm_cls=norm_cls,
47 | fused_add_norm=fused_add_norm,
48 | residual_in_fp32=residual_in_fp32,
49 | )
50 | block.layer_idx = layer_idx
51 | return block
52 |
53 |
54 | # https://github.com/huggingface/transformers/blob/c28d04e9e252a1a099944e325685f14d242ecdcd/src/transformers/models/gpt2/modeling_gpt2.py#L454
55 | def _init_weights(
56 | module,
57 | n_layer,
58 | initializer_range=0.02, # Now only used for embedding layer.
59 | rescale_prenorm_residual=True,
60 | n_residuals_per_layer=1, # Change to 2 if we have MLP
61 | ):
62 | if isinstance(module, nn.Linear):
63 | if module.bias is not None:
64 | if not getattr(module.bias, "_no_reinit", False):
65 | nn.init.zeros_(module.bias)
66 | elif isinstance(module, nn.Embedding):
67 | nn.init.normal_(module.weight, std=initializer_range)
68 |
69 | if rescale_prenorm_residual:
70 | # Reinitialize selected weights subject to the OpenAI GPT-2 Paper Scheme:
71 | # > A modified initialization which accounts for the accumulation on the residual path with model depth. Scale
72 | # > the weights of residual layers at initialization by a factor of 1/√N where N is the # of residual layers.
73 | # > -- GPT-2 :: https://openai.com/blog/better-language-models/
74 | #
75 | # Reference (Megatron-LM): https://github.com/NVIDIA/Megatron-LM/blob/main/megatron/model/gpt_model.py
76 | for name, p in module.named_parameters():
77 | if name in ["out_proj.weight", "fc2.weight"]:
78 | # Special Scaled Initialization --> There are 2 Layer Norms per Transformer Block
79 | # Following Pytorch init, except scale by 1/sqrt(2 * n_layer)
80 | # We need to reinit p since this code could be called multiple times
81 | # Having just p *= scale would repeatedly scale it down
82 | nn.init.kaiming_uniform_(p, a=math.sqrt(5))
83 | with torch.no_grad():
84 | p /= math.sqrt(n_residuals_per_layer * n_layer)
85 |
86 |
87 | class MixerModel(nn.Module):
88 | def __init__(
89 | self,
90 | d_model: int,
91 | n_layer: int,
92 | vocab_size: int,
93 | ssm_cfg=None,
94 | norm_epsilon: float = 1e-5,
95 | rms_norm: bool = False,
96 | initializer_cfg=None,
97 | fused_add_norm=False,
98 | residual_in_fp32=False,
99 | device=None,
100 | dtype=None,
101 | ) -> None:
102 | factory_kwargs = {"device": device, "dtype": dtype}
103 | super().__init__()
104 | self.residual_in_fp32 = residual_in_fp32
105 |
106 | self.embedding = nn.Embedding(vocab_size, d_model, **factory_kwargs)
107 |
108 | # We change the order of residual and layer norm:
109 | # Instead of LN -> Attn / MLP -> Add, we do:
110 | # Add -> LN -> Attn / MLP / Mixer, returning both the residual branch (output of Add) and
111 | # the main branch (output of MLP / Mixer). The model definition is unchanged.
112 | # This is for performance reason: we can fuse add + layer_norm.
113 | self.fused_add_norm = fused_add_norm
114 | if self.fused_add_norm:
115 | if layer_norm_fn is None or rms_norm_fn is None:
116 | raise ImportError("Failed to import Triton LayerNorm / RMSNorm kernels")
117 |
118 | self.layers = nn.ModuleList(
119 | [
120 | create_block(
121 | d_model,
122 | ssm_cfg=ssm_cfg,
123 | norm_epsilon=norm_epsilon,
124 | rms_norm=rms_norm,
125 | residual_in_fp32=residual_in_fp32,
126 | fused_add_norm=fused_add_norm,
127 | layer_idx=i,
128 | **factory_kwargs,
129 | )
130 | for i in range(n_layer)
131 | ]
132 | )
133 |
134 | self.norm_f = (nn.LayerNorm if not rms_norm else RMSNorm)(
135 | d_model, eps=norm_epsilon, **factory_kwargs
136 | )
137 |
138 | self.apply(
139 | partial(
140 | _init_weights,
141 | n_layer=n_layer,
142 | **(initializer_cfg if initializer_cfg is not None else {}),
143 | )
144 | )
145 |
146 | def allocate_inference_cache(self, batch_size, max_seqlen, dtype=None, **kwargs):
147 | return {
148 | i: layer.allocate_inference_cache(batch_size, max_seqlen, dtype=dtype, **kwargs)
149 | for i, layer in enumerate(self.layers)
150 | }
151 |
152 | def forward(self, input_ids, inference_params=None):
153 | hidden_states = self.embedding(input_ids)
154 | residual = None
155 | for layer in self.layers:
156 | hidden_states, residual = layer(
157 | hidden_states, residual, inference_params=inference_params
158 | )
159 | if not self.fused_add_norm:
160 | residual = (hidden_states + residual) if residual is not None else hidden_states
161 | hidden_states = self.norm_f(residual.to(dtype=self.norm_f.weight.dtype))
162 | else:
163 | # Set prenorm=False here since we don't need the residual
164 | fused_add_norm_fn = rms_norm_fn if isinstance(self.norm_f, RMSNorm) else layer_norm_fn
165 | hidden_states = fused_add_norm_fn(
166 | hidden_states,
167 | self.norm_f.weight,
168 | self.norm_f.bias,
169 | eps=self.norm_f.eps,
170 | residual=residual,
171 | prenorm=False,
172 | residual_in_fp32=self.residual_in_fp32,
173 | )
174 | return hidden_states
175 |
176 |
177 | class MambaLMHeadModel(nn.Module, GenerationMixin):
178 |
179 | def __init__(
180 | self,
181 | config: MambaConfig,
182 | initializer_cfg=None,
183 | device=None,
184 | dtype=None,
185 | ) -> None:
186 | self.config = config
187 | d_model = config.d_model
188 | n_layer = config.n_layer
189 | vocab_size = config.vocab_size
190 | ssm_cfg = config.ssm_cfg
191 | rms_norm = config.rms_norm
192 | residual_in_fp32 = config.residual_in_fp32
193 | fused_add_norm = config.fused_add_norm
194 | pad_vocab_size_multiple = config.pad_vocab_size_multiple
195 | factory_kwargs = {"device": device, "dtype": dtype}
196 |
197 | super().__init__()
198 | if vocab_size % pad_vocab_size_multiple != 0:
199 | vocab_size += pad_vocab_size_multiple - (vocab_size % pad_vocab_size_multiple)
200 | self.backbone = MixerModel(
201 | d_model=d_model,
202 | n_layer=n_layer,
203 | vocab_size=vocab_size,
204 | ssm_cfg=ssm_cfg,
205 | rms_norm=rms_norm,
206 | initializer_cfg=initializer_cfg,
207 | fused_add_norm=fused_add_norm,
208 | residual_in_fp32=residual_in_fp32,
209 | **factory_kwargs,
210 | )
211 | self.lm_head = nn.Linear(d_model, vocab_size, bias=False, **factory_kwargs)
212 |
213 | # Initialize weights and apply final processing
214 | self.apply(
215 | partial(
216 | _init_weights,
217 | n_layer=n_layer,
218 | **(initializer_cfg if initializer_cfg is not None else {}),
219 | )
220 | )
221 | self.tie_weights()
222 |
223 | def tie_weights(self):
224 | self.lm_head.weight = self.backbone.embedding.weight
225 |
226 | def allocate_inference_cache(self, batch_size, max_seqlen, dtype=None, **kwargs):
227 | return self.backbone.allocate_inference_cache(batch_size, max_seqlen, dtype=dtype, **kwargs)
228 |
229 | def get_input_embeddings(self):
230 | return self.backbone.embedding
231 |
232 | def set_input_embeddings(self, new_embeddings):
233 | self.backbone.embedding = new_embeddings
234 | self.tie_weights()
235 |
236 | def resize_token_embeddings(self, vocab_size):
237 | old_embeddings = self.backbone.embedding
238 | old_num_tokens, old_embedding_dim = old_embeddings.weight.size()
239 | new_embeddings = nn.Embedding(
240 | vocab_size,
241 | old_embedding_dim,
242 | device=old_embeddings.weight.device,
243 | dtype=old_embeddings.weight.dtype,
244 | )
245 | nn.init.normal_(new_embeddings.weight, std=0.02)
246 | n = min(old_num_tokens, vocab_size)
247 | new_embeddings.weight.data[:n, :] = old_embeddings.weight.data[:n, :]
248 | self.backbone.embedding = new_embeddings
249 |
250 | self.tie_weights()
251 |
252 | def forward(self, input_ids, position_ids=None, inference_params=None, num_last_tokens=0):
253 | """
254 | Changing this function from the original Mamba implementation to make it work
255 | with my training scripts (-Tanay)
256 |
257 | "position_ids" is just to be compatible with Transformer generation. We don't use it.
258 | num_last_tokens: if > 0, only return the logits for the last n tokens
259 | """
260 | hidden_states = self.backbone(input_ids, inference_params=inference_params)
261 | if num_last_tokens > 0:
262 |             hidden_states = hidden_states[:, -num_last_tokens:]
263 | lm_logits = self.lm_head(hidden_states)
264 | return lm_logits
265 |
266 | @classmethod
267 | def from_pretrained(cls, pretrained_model_name, device=None, dtype=None, **kwargs):
268 | config_data = load_config_hf(pretrained_model_name)
269 | config = MambaConfig(**config_data)
270 | model = cls(config, device=device, dtype=dtype, **kwargs)
271 | model.load_state_dict(load_state_dict_hf(pretrained_model_name, device=device, dtype=dtype))
272 | return model
273 |
274 | def save_pretrained(self, save_directory):
275 | """
276 | Minimal implementation of save_pretrained for MambaLMHeadModel.
277 | Save the model and its configuration file to a directory.
278 | """
279 | # Ensure save_directory exists
280 | if not os.path.exists(save_directory):
281 | os.makedirs(save_directory)
282 |
283 | # Save the model's state_dict
284 | model_path = os.path.join(save_directory, 'pytorch_model.bin')
285 | torch.save(self.state_dict(), model_path)
286 |
287 | # Save the configuration of the model
288 | config_path = os.path.join(save_directory, 'config.json')
289 | with open(config_path, 'w') as f:
290 | json.dump(self.config.__dict__, f)
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | pyarrow
2 | pylance
3 | causal-conv1d>=1.1.0
4 | mamba-ssm
5 | numpy
6 | torch
7 | transformers
8 | wandb
9 | tqdm
10 | python-dotenv
--------------------------------------------------------------------------------
/train.py:
--------------------------------------------------------------------------------
1 | # Single GPU training script using FIM
2 | import os
3 | import numpy as np
4 |
5 | import torch
6 | from torch.utils.data import Dataset, DataLoader
7 |
8 | import transformers
9 |
10 | from mamba_model import MambaLMHeadModel
11 |
12 | import lance
13 | import pyarrow as pa
14 |
15 | from tqdm.auto import tqdm
16 |
17 | from data import MambaDataset, MambaSampler
18 |
19 | import wandb
20 |
21 |
22 | # Params (replace with Arg parser later)
23 | class Args:
24 | wandb = False
25 | tokenizer_model = "EleutherAI/gpt-neox-20b"
26 | model_name = "state-spaces/mamba-790m"
27 | dataset_path = (
28 | "/teamspace/studios/codeparrot-dataset-lance/code_parrot_github_python.lance"
29 | )
30 | eval_dataset_path = "fim_data_eval.lance"
31 | dataset = lance.dataset(dataset_path)
32 | low_cpu_mem_usage = False
33 | fim_training = True
34 | fim_rate = 0.9
35 | truncate_or_pad = True
36 | fim_prefix_token = ""
37 | fim_middle_token = ""
38 | fim_suffix_token = ""
39 | fim_pad_token = ""
40 | pad_factor = 8
41 | lr = 1e-4
42 | epochs = 10
43 | context_len = 384
44 | train_batch_size = 8
45 | valid_batch_size = 8
46 | T_0 = 1000
47 | T_mult = 1
48 | eta_min = 1e-5
49 | device = torch.device("cuda:0")
50 | # Total chunks of context_len+1 size we can get
51 |     steps_per_epoch = (dataset.count_rows() // (context_len + 1)) // 4
52 |
53 |
54 | # Define Tokenizer and Model
55 | tokenizer = transformers.AutoTokenizer.from_pretrained(Args.tokenizer_model)
56 | tokenizer.pad_token = tokenizer.eos_token
57 |
58 | model = MambaLMHeadModel.from_pretrained(
59 | Args.model_name,
60 | ).to(Args.device)
61 |
62 | # Get the FIM-specific tokens and get their token ids
63 | tokenizer.add_tokens(
64 | [
65 | Args.fim_prefix_token,
66 | Args.fim_middle_token,
67 |         Args.fim_suffix_token,
68 | Args.fim_pad_token,
69 | ]
70 | )
71 | prefix_tok_id = tokenizer.convert_tokens_to_ids(Args.fim_prefix_token)
72 | middle_tok_id = tokenizer.convert_tokens_to_ids(Args.fim_middle_token)
73 | suffix_tok_id = tokenizer.convert_tokens_to_ids(Args.fim_suffix_token)
74 | pad_tok_id = None
75 |
76 | fim_tokens = [prefix_tok_id, middle_tok_id, suffix_tok_id]
77 |
78 | # If truncate_or_pad is on, also get pad token id
79 | if Args.truncate_or_pad:
80 | pad_tok_id = tokenizer.convert_tokens_to_ids(Args.fim_pad_token)
81 | fim_tokens.append(pad_tok_id)
82 |
83 | # Resize the model's token embeddings and initialize the new token rows from a multivariate normal fitted to the existing embeddings
84 | original_embeddings = model.get_input_embeddings().weight
85 | model.resize_token_embeddings(len(tokenizer))
86 | mean = original_embeddings.mean(dim=0)
87 | n = original_embeddings.size()[0]
88 | sigma = ((original_embeddings - mean).T @ (original_embeddings - mean)) / n
89 | dist = torch.distributions.MultivariateNormal(mean, covariance_matrix=1e-5 * sigma)
90 | new_token_embeddings = torch.stack(
91 | tuple((dist.sample() for _ in range(len(fim_tokens)))), dim=0
92 | )
93 |
94 | # Get the updated embedding layer and make a copy of its weights
95 | embeddings = model.get_input_embeddings()
96 | new_embeddings = embeddings.weight.clone()
97 |
98 | # Set the new tokens' embeddings to the newly sampled embeddings
99 | new_embeddings[-len(fim_tokens) :] = new_token_embeddings
100 |
101 | # Update the model's embeddings with the new embeddings
102 | embeddings.weight = torch.nn.Parameter(new_embeddings)
103 | model.set_input_embeddings(embeddings)
104 |
105 | # Make train dataset and train dataloader
106 | train_dataset = MambaDataset(
107 | Args.dataset_path,
108 | context_len=Args.context_len,
109 | fim_prefix=prefix_tok_id,
110 | fim_middle=middle_tok_id,
111 | fim_suffix=suffix_tok_id,
112 | fim_pad=pad_tok_id,
113 | fim_rate=Args.fim_rate,
114 | mode="psm",
115 | )
116 |
117 | train_dataloader = iter(
118 | DataLoader(
119 | train_dataset,
120 | batch_size=Args.train_batch_size,
121 | sampler=MambaSampler(train_dataset, k=Args.context_len + 1),
122 | shuffle=False,
123 | pin_memory=True,
124 | )
125 | )
126 |
127 | # Optimizer and Scheduler
128 | optimizer = torch.optim.AdamW(model.parameters(), lr=Args.lr)
129 | scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(
130 | optimizer, T_0=Args.T_0, T_mult=Args.T_mult, eta_min=Args.eta_min
131 | )
132 |
133 | # Start training
134 | print(f"{'*'*8} Starting training {'*'*8}")
135 | print(f"Total training tokens: {lance.dataset(Args.dataset_path).count_rows():,}")
136 | print(f"Epochs to train: {Args.epochs}")
137 | print(f"Training steps per epoch: {Args.steps_per_epoch:,}\n")
138 | # print(f"Total training steps in training: {Args.steps_per_epoch * Args.epochs:,}")
139 |
140 |
141 | def wandb_log(**kwargs):
142 | """Easy interface to log stuff to wandb"""
143 | for k, v in kwargs.items():
144 | wandb.log({k: v})
145 |
146 |
147 | if Args.wandb:
148 | # Convert the Config class to a dict for logging
149 | config_dict = dict(vars(Args))
150 | del [config_dict["__module__"]]
151 | del [config_dict["__dict__"]]
152 | del [config_dict["__weakref__"]]
153 | del [config_dict["__doc__"]]
154 |
155 | from dotenv import load_dotenv
156 |
157 | load_dotenv()
158 | wandb.login()
159 | run = wandb.init(
160 | project="pytorch",
161 | config=config_dict,
162 | group="mamba-train",
163 | job_type="train",
164 | )
165 | wandb.watch(model)
166 |
167 | prog_bar = tqdm(
168 | range(Args.steps_per_epoch * Args.epochs), total=Args.steps_per_epoch * Args.epochs
169 | )
170 | for epoch in range(Args.epochs):
171 | model.train()
172 | total_loss = []
173 | for step in range(Args.steps_per_epoch):
174 | # Get the next batch
175 | batch = next(train_dataloader)
176 | for k, v in batch.items():
177 | batch[k] = v.to(Args.device)
178 |
179 | # Get predictions
180 | predictions = model(batch["tokens"])
181 |
182 | # Reshape predictions and calculate loss
183 | B, C, V = predictions.shape
184 | predictions = predictions.view(B * C, V)
185 | targets = batch["labels"].view(B * C)
186 | loss = torch.nn.functional.cross_entropy(predictions, targets)
187 | prog_bar.set_description((f"loss: {loss.item():.4f}"))
188 |
189 | loss.backward()
190 | optimizer.step()
191 | scheduler.step()
192 | optimizer.zero_grad(set_to_none=True)
193 | prog_bar.update(1)
194 |
195 | total_loss.append(loss.item())
196 | if Args.wandb:
197 | wandb_log(step_loss=loss.item())
198 |
199 | # Calculate perplexity for the epoch
200 | try:
201 | perplexity = np.exp(np.mean(total_loss))
202 | except OverflowError:
203 |         perplexity = float("inf")
204 |
205 | if Args.wandb:
206 | wandb_log(train_perplexity=perplexity)
207 |
208 | print(f"epoch: {epoch} | train perplexity: {perplexity:.4f}")
209 |
210 | # Save the model after training
211 | model_name = Args.model_name.split("/")[-1]
212 | torch.save(model.state_dict(), f"{model_name}-fim.bin")
213 | print("Saved the model!")
214 |
--------------------------------------------------------------------------------