├── README.md
├── Xiezhi_tech_report.pdf
├── data
│   ├── test_result.json
│   ├── test_result_with_retriver.json
│   ├── 最核心9k测试题_5k.json
│   └── 最核心法条_9k.json
├── img
│   ├── img.md
│   ├── img2.png
│   └── 法律bot.png
└── src
    ├── demo.py
    ├── demo_llm.py
    ├── demo_r.py
    ├── peft
    │   ├── LICENSE
    │   ├── Makefile
    │   ├── pyproject.toml
    │   ├── setup.py
    │   ├── src
    │   │   ├── peft.egg-info
    │   │   │   ├── PKG-INFO
    │   │   │   ├── SOURCES.txt
    │   │   │   ├── dependency_links.txt
    │   │   │   ├── requires.txt
    │   │   │   └── top_level.txt
    │   │   └── peft
    │   │       ├── __init__.py
    │   │       ├── __pycache__
    │   │       │   ├── __init__.cpython-38.pyc
    │   │       │   ├── __init__.cpython-39.pyc
    │   │       │   ├── mapping.cpython-38.pyc
    │   │       │   ├── mapping.cpython-39.pyc
    │   │       │   ├── peft_model.cpython-38.pyc
    │   │       │   └── peft_model.cpython-39.pyc
    │   │       ├── mapping.py
    │   │       ├── peft_model.py
    │   │       ├── tuners
    │   │       │   ├── __init__.py
    │   │       │   ├── __pycache__
    │   │       │   │   ├── __init__.cpython-38.pyc
    │   │       │   │   ├── __init__.cpython-39.pyc
    │   │       │   │   ├── bottleneck.cpython-38.pyc
    │   │       │   │   ├── bottleneck.cpython-39.pyc
    │   │       │   │   ├── lora.cpython-38.pyc
    │   │       │   │   ├── lora.cpython-39.pyc
    │   │       │   │   ├── p_tuning.cpython-38.pyc
    │   │       │   │   ├── p_tuning.cpython-39.pyc
    │   │       │   │   ├── prefix_tuning.cpython-38.pyc
    │   │       │   │   ├── prefix_tuning.cpython-39.pyc
    │   │       │   │   ├── prompt_tuning.cpython-38.pyc
    │   │       │   │   └── prompt_tuning.cpython-39.pyc
    │   │       │   ├── bottleneck.py
    │   │       │   ├── lora.py
    │   │       │   ├── p_tuning.py
    │   │       │   ├── prefix_tuning.py
    │   │       │   └── prompt_tuning.py
    │   │       └── utils
    │   │           ├── __init__.py
    │   │           ├── __pycache__
    │   │           │   ├── __init__.cpython-38.pyc
    │   │           │   ├── __init__.cpython-39.pyc
    │   │           │   ├── adapters_utils.cpython-38.pyc
    │   │           │   ├── adapters_utils.cpython-39.pyc
    │   │           │   ├── config.cpython-38.pyc
    │   │           │   ├── config.cpython-39.pyc
    │   │           │   ├── other.cpython-38.pyc
    │   │           │   ├── other.cpython-39.pyc
    │   │           │   ├── save_and_load.cpython-38.pyc
    │   │           │   └── save_and_load.cpython-39.pyc
    │   │           ├── adapters_utils.py
    │   │           ├── config.py
    │   │           ├── other.py
    │   │           └── save_and_load.py
    │   └── tests
    │       ├── __init__.py
    │       ├── test_config.py
    │       ├── test_peft_model.py
    │       ├── testing_common.py
    │       └── testing_utils.py
    ├── peft_model
    │   └── 3.p
    ├── peft_r_model
    │   └── 1.p
    ├── requirement.txt
    └── train
        ├── dataset.py
        ├── load_txt.py
        └── train_lora_lion.py
/README.md:
--------------------------------------------------------------------------------
<table>
  <tr>
    <th>问题</th>
    <th>ChatGLM</th>
    <th>LawGPT_zh 獬豸</th>
    <th>LawGPT_zh 獬豸 with retriver</th>
  </tr>
  <tr>
    <td>借款人去世 继承人是否应履行偿还义务</td>
    <td>如果借款人已经去世,那么他的继承人应该继承其遗产并承担还款责任。但是,对于借款人的死亡所引起的任何债务问题,需要根据当地法律和政策进行具体分析和处理。 在某些情况下,如果借款人有指定继承人,并且该继承人已经继承了他的财产,那么他应该承担还款责任。在这种情况下,可以通过书面通知或公告等方式告知债权人,并说明其继承人已经准备好承担还款责任。 在其他情况下,如果没有指定继承人,或者该继承人不愿意或没有能力承担还款责任,那么就可能需要通过其他途径来解决债务问题。这可能包括拍卖、出售或其他方式来转移借款人的财产,以便将该财产用于还债。在此过程中,需要遵守当地的法律规定和程序,以确保公正性和透明度。</td>
    <td>如果借款人的遗产已经被继承了,那么其继承者的配偶、子女和父母等亲属都享有作为借款人还款人的权利。如果这些继承人没有放弃权利,则他们应该承担偿还债务的责任。此外,如果有其他继承人愿意代替借款人还清债务的话,也需要按照协议进行偿还。需要注意的是,继承人在继承了借款人的财产后需要先确认财产的具体归属情况,以确保自己能够明确自己的责任范围。同时,如果在确定好继承人之后发现借款人还有其他未还的债务或者存在虚假借款等情况时,则需要及时与债权人协商并采取相应的法律措施来维护自身的合法权益。</td>
    <td>根据《民法典》第一千一百六一号条款规定,继承人以所得遗产实际价值为限清偿被继承人依法应当缴纳的税款和债务。超过遗产实际价值部分,继承人自愿偿还的不在此限。因此,如果借款人的遗产实际价值超过了其应该缴纳的债务金额,那么继承人就没有义务偿还额外的债务。但是如果继承人自愿偿还,则不在此限制范围内。 在例题中,如果借款人已经去世了,他的继承人可以通过继承其遗产来还款。如果继承人继承了全部或大部分财产,那么他有义务偿还所有未偿还的债务。如果没有继承到足够的财产,他可能无法承担额外债务,但这并不意味着他不应该尽力偿还债务。</td>
  </tr>
  <tr>
    <td>如何通过法律手段应对民间借贷纠纷?</td>
    <td>民间借贷是一种常见的融资方式,但也存在风险。如果您遇到了民间借贷纠纷,以下是一些可以通过法律手段来应对的建议: 1.收集证据:在处理任何纠纷之前,您需要收集尽可能多的证据来支持您的主张和证明对方存在过错。这些证据可能包括合同、借条、还款记录等文件。 2.明确诉讼请求:根据您的实际情况和要求,明确自己的诉讼请求和诉讼费用的负担等内容。 3.选择适当的律师:在面对复杂的民间借贷纠纷时,建议您选择一个专业的律师来帮助自己进行处理。律师会根据案件的具体情况提供相应的建议和支持,并帮助您维护权益。 4.提交起诉状:按照法律规定,您需要在指定的时间内向法院提交起诉状的副本和相关证据材料。 5.等待判决结果:在法院受理后,会进行调解或者判决。如果双方无法达成一致意见,则法院将作出判决,并根据判决结果执行。 需要注意的是,民间借贷的利率是有规定限制的,超过了规定的利率范畴,可能会被认为是非法高利借贷行为。因此,在使用民间借贷的过程中,需要注意利息的计算方式和标准,避免被不法分子利用。</td>
    <td>遇到借贷纠纷,可以通过以下方式解决: 1.协商。如果双方愿意协商的,可以就借款数额、还款期限和利息等进行谈判;如果无法达成一致意见或者对方拒绝履行协议,可以考虑起诉到法院要求强制执行; 2.调解。如果双方同意进行调解的,可以向当地人民法院申请调解书来进行调节;需要注意的是调解不产生法律效力; 3.仲裁。如果有证据证明存在借贷关系的,可以直接向当地的仲裁委员会申请仲裁;但是仲裁对裁决不服的话只能去法院诉讼解决了; 4.法律援助中心寻求帮助。如果自己没有证据或判决不公的情况下,可以到当地的法律援助中心寻求援助; 5.报警处理。如果对方涉嫌刑事犯罪被逮捕了,可以先报案,由公安机关侦查后移送检察院审查起诉后再提起民事诉讼(民事赔偿部分);</td>
    <td>根据《民法典》第468条的规定,对于非因合同产生的债权债务关系,适用于有关该债权债务关系的法律规定,而没有特定的规定时则适用于本编通则的有关规定,但是根据其性质不能适用的除外。因此,对于民间借贷纠纷,应该依据《民法典》第457条的规定,即借款人应当按照约定的利率支付利息;未按照的约定支付利息的,借款人可以随时请求借款人偿还本金并赔偿损失。 此外,如果借款人到期不还款或者违反与出借人之间的借款协议中关于还款金额、还款日期等内容的,出借人可以通过向法院提起诉讼来维护自己的权益。在起诉前,出借人应当采取必要的证据措施,如收集借款人的身份信息、聊天记录、转账记录等信息以证明双方之间的关系和借款事实的真实性。同时,出借人也需要考虑自己的合法权益是否受到侵害,并根据实际情况决定是否委托律师进行维权。</td>
  </tr>
  <tr>
    <td>宠物狗被撞死,可以索要精神赔偿吗?</td>
    <td>如果您的宠物狗因为交通事故而死亡,您可能有权要求对方支付精神赔偿。根据中国的《消费者权益保护法》和相关法律法规的规定,消费者因购买、使用商品或者接受服务受到人身、财产损害的,享有生命权、健康权和财产安全权等权益。<br>因此,如果您的宠物狗因为交通事故而遭受精神和情感上的痛苦,您有权要求对方为您的狗提供经济上的帮助,以缓解您的痛苦。不过需要注意的是,这种精神赔偿通常是由法院根据具体情况判决的,因此在主张精神赔偿前,建议您先咨询专业的律师或法律援助机构,了解相关权利和法律规定。</td>
    <td>针对你的问题,我进行分析如下:<br>撞人的狗狗主人是否要承担责任取决于双方之间的责任划分。如果确定对方有过错并且造成了小狗的死亡,那么根据法律规定,有权要求其进行相应赔偿。例如经济赔偿金、精神损害抚慰金等。但是,具体的赔偿数额需要根据实际情况来计算。此外,也需要注意是否存在其他相关的证据支持索赔的请求。同时,对于动物的饲养人或管理人而言,如果没有尽到相应的管理义务导致动物受到伤害或者死亡的,也需要承担相应的赔偿责任。</td>
    <td>根据《民法典》第1250条规定,因第三人的过错致使动物造成他人损害的,被侵权人可以向动物饲养人或者管理人请求赔偿,也可以向第三人请求赔偿。如果动物饲养人或管理人已经赔偿了被侵权人的经济损失,被侵权人还可以通过向第三人追偿来寻求进一步的经济赔偿。 然而,对于宠物狗这种具有人格属性的动物,根据《民法典》第1247条规定,禁止饲养的烈性犬等危险动物造成他人损害的,动物饲养人或者管理人应当承担侵权责任。这意味着如果宠物狗被撞死了,受害人不能直接向动物饲养人或管理人要求经济赔偿,而是需要先通过诉讼程序主张自己的权利,认定是由第三人的行为导致的损失,然后由第三人承担相应的赔偿责任</td>
  </tr>
  <tr>
    <td>...</td>
    <td>...</td>
    <td>...</td>
    <td>...</td>
  </tr>
</table>

## Model Training

### Training Steps
1. Download the training dataset.
2. Configure the parameters in train/train_lora_lion.py.
3. Run train/train_lora_lion.py (the script uses model parallelism; do not enable data parallelism at the same time).

The training code of this project uses model parallelism, so 16-bit LoRA instruction fine-tuning of ChatGLM can be completed on as few as four RTX 3090 GPUs. The training command is as follows:
```bash
cd src
CUDA_VISIBLE_DEVICES=$cuda_id python train/train_lora_lion.py \
    --title $YOUR_EXP_NAME \
    --train_path $YOUR_TRAINING_DATA_PATH \
    --save_dir $YOUR_LORA_CHECKPOINT_SAVE_PATH
```
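
For example, assuming GPUs 0 to 3 are free and the instruction-tuning data sits at `data/train_data.json` (the GPU ids, experiment name, and paths below are placeholders, not files shipped with this repository), the call could look like this:

```bash
cd src
# Expose four GPUs; the training script shards the ChatGLM layers across all visible devices.
CUDA_VISIBLE_DEVICES=0,1,2,3 python train/train_lora_lion.py \
    --title lawgpt_lora_exp1 \
    --train_path ../data/train_data.json \
    --save_dir ../peft_model
```

The LoRA weights written to `--save_dir` can later be loaded for inference by `src/demo.py` via its `--peft_path` argument.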


## Contributors

This project is being developed jointly by Yusheng Liao, Hongcheng Liu, Yutong Meng, and Yuhao Wang from Shanghai Jiao Tong University, supervised by Associate Professor [Yu Wang](https://cmic.sjtu.edu.cn/cn/show.aspx?info_lb=75&info_id=1237&flag=35).



## Disclaimer

The pre-trained model is trained on large corpora with statistical algorithms, and the training process may introduce biased, incorrect, or incomplete information. The pre-trained model provided by this project is therefore for reference and research use only, and its accuracy and reliability are not guaranteed. Results produced with the pre-trained model may contain errors and biases and must not be used for real-world applications or decision making. This project assumes no responsibility for results produced with the pre-trained model, nor for any loss arising from its use. Users bear the risk themselves and should verify the outputs independently.


## Citation

If you use the data or code of this project, please cite it:

```latex
@misc{LAWGPT-zh,
  author = {Hongcheng Liu and Yusheng Liao and Yutong Meng and Yuhao Wang},
  title = {XieZhi: Chinese Law Large Language Model},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/LiuHC0428/LAW_GPT}},
}
```

--------------------------------------------------------------------------------
/Xiezhi_tech_report.pdf:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/Xiezhi_tech_report.pdf
--------------------------------------------------------------------------------
/data/test_result.json:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "问题": "遭遇强制拆迁 如何用法律保护自己?",
4 | "ChatGLM": "遇到强制拆迁,可以通过以下方式来保护权益:\n\n1.了解法律法规:在面临强制拆迁时,应该尽快阅读相关法规和政策文件,了解自己的权利和义务。同时,要了解当地的实际情况,以便制定合理的应对措施。\n\n2.保留证据:在遇到强制拆迁之前,可以尽可能地收集相关的证据,例如现场照片、录音、录像等,以备日后维权使用。如果可能的话,最好与相关部门保持沟通,及时了解进展情况。\n\n3.寻求法律援助:如果遇到强制拆迁,可以考虑聘请律师代表自己,向法院提起诉讼,维护自己的权益。律师可以根据案件的具体情况提供合法的建议和代理服务。\n\n4.协商解决:如果发现强制拆迁存在违法行为,可以尝试通过协商解决问题。可以向当地政府或相关部门提出异议,要求其停止违法行动,并协助自己寻找更好的解决方案。\n\n5.关注媒体报道:媒体的报道可能会引起公众的关注和重视,如果遇到强拆事件被公开报道,可以联系媒体并积极参与炒作,以提高知名度和影响力,从而获得更多的利益。\n\n遇到强制拆迁时,需要冷静应对,尽可能收集证据,及时咨询法律专业人士的意见,并与当地的政府部门保持良好的沟通。",
5 | "LawGPT_zh": "在遭遇强制拆迁时,可以通过以下方式来保护权益:\n1.了解法律法规。了解自己所在地的《土地管理法》、《城市房屋管理规定》等相关法律法规可以帮助自己更好地理解当地的政策;\n2.找到合法维权途径。如果认为当地有关部门的行为违法,可以向上级政府部门或法院提起诉讼或者申请仲裁以维护合法权益;3.寻求法律援助。如果自己的合法权益受到了侵犯,可以聘请律师帮助自己处理此事,通过律协、法律服务所、法律援助中心等部门寻求帮助;4.保持冷静。尽管被强制拆迁会给人很大的压力和不安,但是一定要保持冷静不要冲动,避免产生不必要的矛盾和冲突,以免事情变得更加复杂。"
6 | },
7 | {
8 | "问题": "债务纠纷中物保与人保的责任顺序?",
9 | "ChatGLM": "在涉及的债务纠纷中,物保和人保通常会在保护财产和维护公共安全方面承担不同的角色和责任。\n\n如果因债务问题导致物品或财物受损,通常情况下物的保险会赔偿损失。这是因为根据保险合同,保险公司在出险时有权向被保险人支付相应的保险金来补偿其遭受的损失。因此,当物品受损时,物的保险将会为被保险人提供经济支持。\n\n另一方面,如果是由于犯罪行为或其他违法行为导致的人身伤害或财产损失,则人保可能会承担责任。这是因为人保通常会与雇主、企业和其他机构合作,为客户提供必要的安全保障措施,包括员工福利保障和交通事故责任强制保险等。在这种情况下,人保将负责为受害者提供必要的保护和救助,并承担相应的法律责任。\n\n在一般情况下,物保和人保的主要职责是确保财产和人身安全得到保护。如果遇到复杂的债务纠纷和财产保赔问题,建议咨询专业人士的意见,以确定具体的保险方案和承担责任的顺序。",
10 | "LawGPT_zh": "在财产保险和人身保险领域中,责任顺序通常按照以下几种方式确定:\n\n1. 如果因一方的行为导致另一方的财产或人身受到损害的,由受损方向有责任方索赔;如果受益方可要求第三方赔偿损失或者需要对损害进行修复、修理等后续服务时,该方向保险公司提出索赔。\n2. 如果因共同行为导致的损害,如共有关系中的侵权行为造成财产损失或人身伤害,则由所有共有人一起向侵权人索赔,并向对应的财产/身体受伤者提供意外伤害险保障;如果受害人不能独立获得赔偿,则可以考虑通过团体意外保险或其他形式的组合保险来得到补偿。\n3. 如果在发生灾害事故等情况下造成的损害,例如火灾、地震、洪水等等,一般是由当地的政府部门作为主体进行调查并作出结论,然后向相应的责任人追偿;如果当地没有这样的机构,那么可以直接向保险公司申请理赔。"
11 | },
12 | {
13 | "问题": "借款人去世 继承人是否应履行偿还义务",
14 | "ChatGLM": "如果借款人已经去世,那么他的继承人应该继承其遗产并承担还款责任。但是,对于借款人的死亡所引起的任何债务问题,需要根据当地法律和政策进行具体分析和处理。\n\n在某些情况下,如果借款人有指定继承人,并且该继承人已经继承了他的财产,那么他应该承担还款责任。在这种情况下,可以通过书面通知或公告等方式告知债权人,并说明其继承人已经准备好承担还款责任。\n\n在其他情况下,如果没有指定继承人,或者该继承人不愿意或没有能力承担还款责任,那么就可能需要通过其他途径来解决债务问题。这可能包括拍卖、出售或其他方式来转移借款人的财产,以便将该财产用于还债。在此过程中,需要遵守当地的法律规定和程序,以确保公正性和透明度。",
15 | "LawGPT_zh": "如果借款人的遗产已经被继承了,那么其继承者的配偶、子女和父母等亲属都享有作为借款人还款人的权利。如果这些继承人没有放弃权利,则他们应该承担偿还债务的责任。此外,如果有其他继承人愿意代替借款人还清债务的话,也需要按照协议进行偿还。需要注意的是,继承人在继承了借款人的财产后需要先确认财产的具体归属情况,以确保自己能够明确自己的责任范围。同时,如果在确定好继承人之后发现借款人还有其他未还的债务或者存在虚假借款等情况时,则需要及时与债权人协商并采取相应的法律措施来维护自身的合法权益。"
16 | },
17 | {
18 | "问题": "欠条与借条的法律效力探讨",
19 | "ChatGLM": "欠条和借条都是表明借款人或贷款人向他人借款或还款的书面证明,在法律上具有同等的法律效力。但是,它们有一些不同之处,包括:\n\n1. 付款时间不同:借条通常明确约定了还款期限,而欠条则没有这种规定;\n2. 利息方面:在一般情况下,借条中未约定利息,而在欠条中约定的利息视为无息;\n3. 还款方式:借条要求双方在还款时直接交付货币,而欠条可以根据不同的情况选择不同的还款方式和途径(如现金、转账等);\n4. 主体资格方面:在借条中,必须是合法的借贷关系才能产生法律效力,而在欠条中可以的主体范围更广。\n\n因此,在实际生活中,我们需要根据具体情况来确定使用哪种类型的文件。如果存在争议的,可以考虑咨询专业律师的意见来确认法律适用性及维权方法。",
20 | "LawGPT_zh": "《中华人民共和国合同法》第六十二条规定:“合同的内容可以依照合同约定进行;没有约定或者约定的不明确,不论当事人是否承认,应当承担违约责任。”根据该法律规定,在双方没有约定的情况下,如果借款人按照借条中的约定履行了还款义务,对方不承担违约责任。因此,无论欠条还是借条都具有一定的法律约束力和执行力,但具体的执行程序和方法需要根据实际情况而定。"
21 | },
22 | {
23 | "问题": "信用卡欠账不还后果很严重吗?",
24 | "ChatGLM": "信用卡欠账不还后果非常严重,会对自己产生不良信用记录、影响未来的信用评估和贷款申请。此外,如果恶意拖欠账单,银行还有可能会采取法律手段追讨欠款。因此,最好及时还清信用卡欠款,避免不必要的经济和法律责任。",
25 | "LawGPT_zh": "信用卡欠款如果不及时偿还,会对自己产生严重的影响。下面是一些可能的后果:\n\n1.利息和罚款:如果未在规定时间内还款,则会开始收取滞纳金和利息,费用可能会非常高;\n2.信用记录受损:逾期付款会影响个人的信用报告,使个人在未来申请贷款、购房和其他业务时面临困难;\n3.银行起诉:若未能在指定时间内还清全额账单,可能会被银行起诉至法院要求归还债务并支付相应的诉讼费用;\n4.法律风险:逾期付款可能会导致合同违约,构成违约则需要承担违约责任(包括赔偿损失);\n5.失去使用信用卡的权利:如果发现自己的信用记录中存在多次逾期或其他不良记录,将会失去使用某些信用卡或贷款的机会。\n因此,为了避免以上情况的发生,应合理规划好开支,按时按量偿还信用卡欠款。"
26 | },
27 | {
28 | "问题": "如何通过法律手段应对民间借贷纠纷",
29 | "ChatGLM": "民间借贷是一种常见的融资方式,但也存在风险。如果您遇到了民间借贷纠纷,以下是一些可以通过法律手段来应对的建议:\n\n1.收集证据:在处理任何纠纷之前,您需要收集尽可能多的证据来支持您的主张和证明对方存在过错。这些证据可能包括合同、借条、还款记录等文件。\n\n2.明确诉讼请求:根据您的实际情况和要求,明确自己的诉讼请求和诉讼费用的负担等内容。\n\n3.选择适当的律师:在面对复杂的民间借贷纠纷时,建议您选择一个专业的律师来帮助自己进行处理。律师会根据案件的具体情况提供相应的建议和支持,并帮助您维护权益。\n\n4.提交起诉状:按照法律规定,您需要在指定的时间内向法院提交起诉状的副本和相关证据材料。\n\n5.等待判决结果:在法院受理后,会进行调解或者判决。如果双方无法达成一致意见,则法院将作出判决,并根据判决结果执行。\n\n需要注意的是,民间借贷的利率是有规定限制的,超过了规定的利率范畴,可能会被认为是非法高利借贷行为。因此,在使用民间借贷的过程中,需要注意利息的计算方式和标准,避免被不法分子利用。",
30 | "LawGPT_zh": "遇到借贷纠纷,可以通过以下方式解决:\n1.协商。如果双方愿意协商的,可以就借款数额、还款期限和利息等进行谈判;如果无法达成一致意见或者对方拒绝履行协议,可以考虑起诉到法院要求强制执行;\n2.调解。如果双方同意进行调解的,可以向当地人民法院申请调解书来进行调节;需要注意的是调解不产生法律效力;\n3.仲裁。如果有证据证明存在借贷关系的,可以直接向当地的仲裁委员会申请仲裁;但是仲裁对裁决不服的话只能去法院诉讼解决了;\n4.法律援助中心寻求帮助。如果自己没有证据或判决不公的情况下,可以到当地的法律援助中心寻求援助;\n5.报警处理。如果对方涉嫌刑事犯罪被逮捕了,可以先报案,由公安机关侦查后移送检察院审查起诉后再提起民事诉讼(民事赔偿部分);"
31 | },
32 | {
33 | "问题": "起诉书怎么写?",
34 | "ChatGLM": "起诉书的格式可以因案件类型和所在地而异,但通常包括以下几个部分:\n1. 标题:简要概括案件情况。\n2. 当事人信息:列出诉讼参与人的姓名、身份证号码、联系方式等信息。如果涉及第三人,应注明其姓名、诉讼请求及与本案的关系。\n3. 原告的诉讼请求:明确原告请求法院执行的具体事项和要求。\n4. 事实和理由:陈述案件事实和法律依据,并提供相应的证据材料。应具体说明案件的基本情况,如欠款金额、时间、地点、证人等。同时,应该充分引用法律条款,证明被告的行为违反了法律规定,从而导致了损害的发生。\n5. 尾部:标注发件人和收件人的信息,以及日期和其他必要的事项。\n\n下面是一份典型的起诉书示例(适用于简单的民事纠纷):\n\n[起诉状]\n\n一、案件概况\n\n本案为一个简单的民事纠纷,原告诉称XXX公司欠款人民币XXXX元未还,现要求对方承担还款义务。\n\n二、原告的诉讼请求\n\n1. 被告立即偿还欠款人民币XXXX元;\n2. 在接到本诉状的一个月内向对方支付剩余款项人民币XXXX元;\n3. 赔偿因其违约造成的各项损失包括但不限于利息、律师费用等共计人民币XXXX元。\n\n三、事实和理由\n\n1. 本案中,被告XXX公司在未经原告允许的情况下向原告借款人民币XXXX元,并承诺在一个月内归还。但是至今已经超过一个月,被告仍未归还欠款。\n2. 根据《中华人民共和国合同法》第三十二条规定:“借款人应当按照约定的期限返还借款。”根据《中华人民共和国担保法》第六条规定:“保证人对人所负的债务,有保证责任......”由于被告未能按照约定履行还款义务,已经构成对原告的违约责任。",
35 | "LawGPT_zh": "《中华人民共和国诉讼法》规定,民事案件由被告住所地人民法院管辖。如果不确定具体案件的法院是哪里的,可以在立案前向原告所在地或者被告所在地提出管辖权异议;也可以在诉状中注明具体的诉讼请求、事实和理由等内容后申请指定其他基层法院作为受案人民法院,并要求其承担一定的责任。"
36 | },
37 | {
38 | "问题": "欠钱不还可以报警吗?",
39 | "ChatGLM": "如果您借给某人钱,但他们没有按时偿还,您可以考虑向警方报案。在这种情况下,您可以报告称其借款给他人并收到明确的还款日期,但截至现在对方仍未偿还欠款。如果您认为这是一种犯罪行为或欺诈行为,那么报警可能会有所帮助。\n\n然而,在大多数情况下,警察可能不会对此进行调查,因为这可能涉及到小额债务或个人纠纷。如果这种情况,您可以与当地的律师或其他法律专业人士联系以获取更多帮助和建议。他们可以根据您的情况和案件的具体情况为您提供合法的建议和支持。",
40 | "LawGPT_zh": "如果对方拒绝还款,可以通过诉讼途径要求其归还欠款。在起诉前可以准备相应的证据材料,例如银行转账记录、聊天记录等,以提高胜诉的可能性。如果认为对方的身份信息存在欺诈或者虚假成分,也可以在提起诉讼之前向法院申请排除妨害裁定。此外,如果有证据证明被告涉嫌拒不执行判决罪的也可以直接报案。但需要注意的是,无论采取何种方式都需要有合法的依据和充分的证据支撑"
41 | },
42 | {
43 | "问题": "对方不认借条怎么办?",
44 | "ChatGLM": "如果借款人不承认借款,可以考虑以下几种方法:\n\n1. 寻求法律帮助:如果借条是书面形式的,可以拿着借条去找律师咨询。如果是口头协议,可以尝试通过录音等方式收集证据。律师可以帮助分析是否存在法律上的漏洞,并给出相应的建议。\n\n2. 通过其他方式证明借款事实:比如可以提供与借款人的其他聊天记录、转账记录等证据来证明借款的事实。这些证据可以作为呈堂证供,有助于维护自己的权益。\n\n3. 找到证人证言:如果有朋友或其他人可以证实借款事实,可以尝试让他们出庭作证。证人证言具有一定的法律效力,可以帮助证明自己的主张。\n\n4. 协商解决:如果对方不愿意承认借款事实,可以尝试与他们进行谈判,以合理的方式和时间解决问题。同时,也需要注意保护自己的权益,不要受到对方的威胁或暴力行为。",
45 | "LawGPT_zh": "如果对方的否认导致无法证明借款事实存在,你可以申请法院进行证据保全。另外可以要求借款人提供其他证据证明其与你的借贷关系的真实性。如果双方没有约定还款期限的,可以根据合同约定或者法律规定,按照约定的期间内催告义务人一次;如果没有约定期间的,根据日常交往的时间和实际情况确定合理的期限,到期未还则视为不定期债务,可以在宽限期内多次催告;逾期后仍未能偿还的债务,视为不定期债务,不定期的债务人在中止履行或损害发生前可以行使请求权并承担不超过损失50%的责任;对于超过宽限但未起诉或者放弃诉权的当事人,在他人主张权利时应当根据当时的事实作出回应;此外也可以在判决生效后再次向法院提起诉讼以维护权益。\", \"你好!建议先协商解决,如果对方拒绝归还款项,可以向当地人民法院提起民事诉讼来追讨这笔欠款.如果有转账记录或者其他凭证的话可以通过司法诉讼手段追回相应的利息、本金等费用。"
46 | },
47 | {
48 | "问题": "借条约定的还款期限到期怎么办?",
49 | "ChatGLM": "如果借条上约定了还款期限,但已经超过了该期限,可以考虑以下几种方式:\n\n1. 联系借款人:可以通过电话、短信或电子邮件等方式与借款人联系,询问是否能够及时偿还借款。如果对方同意并及时提供了还款信息,可以再次确认协议并协商重新安排还款计划;\n2. 通过法律途径:如果无法联系到借款人,可以尝试通过法律途径追回欠款。可以咨询律师或当地法律援助机构的意见,了解相关程序和费用;\n3. 延长还款期限:如果借款人愿意在合理的时间内还清债务,可以与他们商量延长还款期限。这可能需要支付一定的利息或其他费用,需要根据实际情况进行权衡利弊得失。\n\n无论采取哪种方式,都应该保留好相关的证据,如借条、通话记录、邮件等,以便日后争端解决时提供支持。",
50 | "LawGPT_zh": "如果借款协议中约定了还款期限,那么如果借款人未按时偿还债务,可以根据双方之间的合同约定来采取相应的法律措施。根据《中华人民共和国合同法》第六十二条规定:“当事人应当按照约定的期限履行自己的义务。”因此,如果没有书面形式的还款协议或者没有对逾期利息、利率等进行明确约定,对方可以随时要求借款人按照合同规定的还款方式及金额进行偿还;如果对方不按时偿还的,可以通过诉讼途径追讨债务。建议及时与对方的债权人联系并保存相关证据以便维护自身权益。"
51 | },
52 | {
53 | "问题": "借条受法律保护吗?",
54 | "ChatGLM": "借条是一份书面文件,表明一个人向另一个人借款。如果借款人按照借条上的要求偿还了借款,那么借条就是受法律保护的。如果在还款期限内没有还清借款,债权人可以通过起诉来追讨欠款。\n\n在中国,根据《中华人民共和国合同法》的规定,借款人应当按照约定的期限履行债务;如果没有约定期限,则应当在最长时间内履行债务。如果借款人不履行债务,债权人有权要求借款人承担违约责任并赔偿损失。因此,借条在法律上受到保护,借款人必须按时还款以保障自己的利益。",
55 | "LawGPT_zh": "如果借款是通过书面形式进行的,并且双方都知道并认可该借条的内容,那么该借条就是有效的。如果借款人能够提供证据证明存在真实的借贷关系,则可以通过诉讼途径要求对方归还所欠债务。"
56 | },
57 | {
58 | "问题": "没有借条可以起诉吗?",
59 | "ChatGLM": "在法律上,如果借款人能够提供其他证据来证明借款事实的真实性和合法性,例如聊天记录、短信、语音录音等,即使没有借条也可以起诉借款人。\n\n然而,如果没有书面的借条或其他相关证据证明借款事实的真实性和合法性,法院可能会对案件进行调查并核实相关信息,以确保案件的公正性和准确性。在这种情况下,可以考虑向证人或借款人的其他证据提供者寻求支持,以证明借款的事实。",
60 | "LawGPT_zh": "在法律上,借款需要有书面形式的借贷凭证(如借条、欠条等),否则很难证明双方之间存在真实的借贷关系。如果没有书面形式的证据,可以通过其他证据来证明存在借贷关系的,例如:\n1.聊天记录或语音记录;2.转账记录;3.证人证言;4.现场见证等等。如果确实认为对方有过错或者存在欺诈行为导致自己无法收回欠款的话,也可以主张赔偿损失的诉讼请求。具体情况需要结合案件的具体情况进行分析和处理,如有需要帮助的地方可致电我,我会尽力为您提供帮助和指导"
61 | },
62 | {
63 | "问题": "借条没写还款时间,多久可以要求对方还钱?",
64 | "ChatGLM": "如果借款没有约定还款期限,那么可以随时要求对方还钱。如果双方同意,可以通过协商达成一个合理的还款时间表;如果无法达成一致意见,可以向法院提起诉讼来催促对方还款。但是需要注意的是,在起诉之前需要提供充分的证据来证明借款的真实性和还款义务的履行情况。",
65 | "LawGPT_zh": "根据《中华人民共和国合同法》第一百九十五条规定:“借款人应当按照约定的期限返还借款;没有约定期限或者约定不明确,借款人可以随时请求借款人返还”,因此只要未明确约定还款日期的,借款人应当在合理时间内向对方主张权利。如果双方对还本付息方式、利率及利息计算方式等有争议且无法协商解决的,也可以在任意时间向法院起诉并要求对方归还本金和利息(包括法定从宽的利息幅度)。"
66 | },
67 | {
68 | "问题": "别人借钱不还怎么办",
69 | "ChatGLM": "别人借钱不还,可以考虑以下几个步骤:\n\n1. 与对方进行沟通:尝试和对方沟通,了解对方是否有困难或者无法按时还款的原因。在沟通过程中要注意态度友好、尊重对方,尽可能避免争吵。\n\n2. 制定还款计划:如果对方愿意合作还款,可以双方共同制定一个还款计划,明确每月或每周的还款金额和时间,并确保对方按照计划进行还款。\n\n3. 寻求法律帮助:如果以上方法都没有效果,建议寻求法律帮助。可以向当地法院提起诉讼,要求对方归还借款本金和相关利息。\n\n4. 使用信用记录:如果不得不逾期还款或者多次拖欠还款,可以使用自己的信用记录来警示自己和其他相关人员。这样可以保护自己的权益,同时也可以避免不必要的罚款或其他后果。\n\n无论采取哪种方式,都需要注意保护自己的利益和安全。",
70 | "LawGPT_zh": "如果向别人借款,对方未按时偿还时,可以采取以下一些措施:\n\n1. 与借款人协商:如果可能的话,可以试着和借款人进行谈话。了解为什么他们不能尽快还清债务的原因,找到解决问题的方法;\n2. 向法院起诉:如果无法通过交流解决争议,可以考虑向法院提起诉讼来追讨欠款;\n3. 通过法律途径催收:可以通过律师或法律援助机构帮助自己收集证据并申请强制执行;\n4. 采取私人手段追债:比如联系其家人、朋友等亲属以获取还款信息;或者寻找对方的行踪等方式来争取减少损失。"
71 | },
72 | {
73 | "问题": "借给他人用于赌博的钱能否要回?",
74 | "ChatGLM": "如果借款给他人时,明确表明是用于正当的支出(如生活费用、治疗疾病等),而对方在借款后用于了非法活动(如赌博等违法行为),那么很难要求借款人返还借款。这是因为根据法律规定,为非法行为提供担保的行为也被视为非法担保,因此即使可以起诉借款人,也很难追回被盗用的资金。\n\n如果确实证据证明对方参与了赌博或其他违法行为,可以考虑向当地公安机关报案,由公安机关进行调查并处理。但需要注意的是,在刑事诉讼中,对方可能会涉嫌犯罪行为而被判负刑事责任,所以在报案前最好先咨询专业律师的意见。",
75 | "LawGPT_zh": "如果借款给别人是用于合法用途(如购买商品或服务、学习等),那么可以要求对方还款。但如果是被误解为用于非法目的,比如赌博或其他违法行为的,则难以要回来。建议及时与对方沟通确认情况,并要求其明确说明借款的用途和约定的利息等相关细节,以便在以后索要时能够更好地维护自己的合法权益。\"你好\", \"如果被误判为赌债,一般很难追回钱款"
76 | },
77 | {
78 | "问题": "起诉后撤诉是否导致诉讼时效中断?",
79 | "ChatGLM": "根据中国《民法通则》第135条的规定,患者在治疗期间或者治愈后一年内,有权利向法院提起诉讼。如果患者在提起诉讼后又撤销诉讼或者在判决生效后六个月内申请人民法院关闭诉讼的,则该期间的时效中止;如果在中止期间内因不可抗力或者其他障碍不能行使请求权或者过期不行使的,则时效消灭。\n\n因此,起诉后撤诉并不会导致诉讼时效中断。在起诉后,如果患者能够在一年内提出撤销诉讼的申请,则该期间的时效中止;如果超过一年的期限未提出的,视为放弃权利,则该期间的时效消灭。在此期间,如果患者再次起诉,则可以要求对方承担合法的法律责任,但如果对方已经在前一次案件中承担了全部责任并且无法恢复履行能力,则在第二次起诉时已超过了相应的诉讼期限,不能再要求其承担责任。",
80 | "LawGPT_zh": "如果起诉之后,经过一段时间没有去法院领取判决或者调解书,而自行放弃这个权利的话,那么过了这段期间就不能再向法院提起诉讼。这样会导致时效的重新计算。但是,如果诉讼中双方已经达成了一致意见并且申请了强制执行或者在执行过程中对方同意和解并履行了一部分义务的情况下,则可以考虑撤销原诉讼请求或提交书面答辩状等来避免影响实效的计算。"
81 | },
82 | {
83 | "问题": "借款是否有诉讼时效期?",
84 | "ChatGLM": "借款的时效通常为两年,从借款人提交书面形式的借款申请之日起计算。在借款期间内,如果借款人未按时偿还本金和利息,则会丧失胜诉权。\n\n需要注意的是,不同的国家和地区可能有不同的法律要求,因此最好在借款前仔细阅读并理解相关合同条款以及当地法律法规的规定,以确保权益得到保护。",
85 | "LawGPT_zh": "根据中国《民法通则》第135条的规定,借款人应当按照约定的期限还款;没有约定期限或者约定不明确的,应当依照法律、行政法规的规定情形计算利息。此外,根据最高人民法院颁布的有关司法解释,在未确定具体还款日期的情况下,若双方对催款频率及方式有明确约定时则视为存在有效债权债务关系。但需要注意的是:\n\n(一)无正当理由未能向债权人主张权利的为超过两年的时效期间;\n\n(二)因不可抗力不能履行债务或者出现其他法定事由而不能履行债务的除外。"
86 | },
87 | {
88 | "问题": "债务人用假名出具的借条债权人如何要回?",
89 | "ChatGLM": "如果债务人使用假名出具借条,那么这可能会导致借条的有效性受到质疑。在这种情况下,债权人可以尝试采取以下措施来要回借款:\n\n1. 寻求法律帮助:如果认为借条是有效的,可以向当地法院提起诉讼,要求对方偿还债务。同时,可以主张由于对方的欺诈行为导致借条无效,并要求对方承担法律责任。\n\n2. 通过司法程序解决问题:如果有证据证明借条是以虚假的方式出具的话,可以通过司法程序解决纠纷。这可能包括调解、仲裁或诉讼等步骤。在此过程中,需要提供相关的证据来证明借条的真实性和有效性。\n\n3. 找到证人证言:如果能够找到与该借条有关的证人证言,则可以作为有力的证据来证明借条的真实性和有效性。这些证人可以是借款人的亲朋好友或其他见证人。\n\n4. 联系律师咨询:如果不确定该如何处理这个问题,可以聘请一位有经验的律师进行咨询和处理。律师可以帮助了解权利和采取行动的法律框架,并提供必要的建议和支持。",
90 | "LawGPT_zh": "如果使用虚假身份或者伪造证件来出具借款协议,那么该协议的效力是存在问题的。根据法律规定,如果借款人使用了虚假的身份或伪造的证件来进行借贷行为,那么相应的合同无效。因此,即使借款人以真名签订借款协议,也无法保证其真实有效性。\n\n针对这种情况,可以通过以下几种方式主张权利:\n\n1.追索权诉讼:由于对方使用了虚假的身份证明和文件,你可以向法院起诉要求撤销这份无效的借据并要求归还债务;同时可以追加借款人的其他亲属作为共同被告承担连带责任;\n2.申请司法救助金:如果你无法通过普通民事诉讼解决问题的话,可以向当地法律援助中心提出援助请求,他们会根据你的具体情况为你提供相应帮助;\n3.调解:如果你们之间没有正式的书面约定或者是口头协议,你们可以进行协商并达成一个双方都能接受的价格进行和解,这样也可以避免不必要的纷争;\n4、借助民间组织的帮助:例如村委会、居委会等社会基层组织可以提供一些协助。\""
91 | },
92 | {
93 | "问题": "宠物狗被撞死,可以索要精神赔偿吗?",
94 | "ChatGLM": "如果您的宠物狗因为交通事故而死亡,您可能有权要求对方支付精神赔偿。根据中国的《消费者权益保护法》和相关法律法规的规定,消费者因购买、使用商品或者接受服务受到人身、财产损害的,享有生命权、健康权和财产安全权等权益。\n\n因此,如果您的宠物狗因为交通事故而遭受精神和情感上的痛苦,您有权要求对方为您的狗提供经济上的帮助,以缓解您的痛苦。不过需要注意的是,这种精神赔偿通常是由法院根据具体情况判决的,因此在主张精神赔偿前,建议您先咨询专业的律师或法律援助机构,了解相关权利和法律规定。",
95 | "LawGPT_zh": "针对你的问题,我进行分析如下:\n撞人的狗狗主人是否要承担责任取决于双方之间的责任划分。如果确定对方有过错并且造成了小狗的死亡,那么根据法律规定,有权要求其进行相应赔偿。例如经济赔偿金、精神损害抚慰金等。但是,具体的赔偿数额需要根据实际情况来计算。此外,也需要注意是否存在其他相关的证据支持索赔的请求。同时,对于动物的饲养人或管理人而言,如果没有尽到相应的管理义务导致动物受到伤害或者死亡的,也需要承担相应的赔偿责任。\", \"您好!您可以根据当地的法律法规主张相关的精神损失费用"
96 | },
97 | {
98 | "问题": "民间借贷正规借条怎么写?",
99 | "ChatGLM": "写一份正规的借贷借条需要注意以下几个方面:\n\n1.标题:标题应该明确标注“借款借条”或“欠款借条”,并注明借款人和贷款人的姓名、身份证号码、联系方式等信息。\n\n2.正文:在正文中,应详细阐述双方之间发生的借款事实、金额、利率、还款期限等事项,同时要求借款人和贷款人在借条上签字盖章,表示认可上述内容。\n\n3.落款:应在借条的末尾写上借款人和贷款人的姓名、身份证号码、联系方式等信息,以及日期和时间,并要求对方签字并盖手印。\n\n4.备注:可以在借条中进行一些必要的备注,如提醒借款人按时还款、记录借款利息等。\n\n以下是一份典型的正规借条样本:\n\n借款借条\n\n甲方(借款人):XXX 乙方(贷款人):XXX\n\n一、借款事实\n\n甲方向乙方借款人民币XX元整,即¥XXXXX元,用于满足其个人的消费需求。\n\n二、借款金额及利率\n\n借款金额为¥XXXXX元,月利率为%......%。\n\n三、还款期限\n\n借款人应当于每月XX日之前将剩余还款金额一次性还清给贷款人。具体的还款方式和金额需要根据实际情况进行调整。\n\n四、其他约定\n\n借款人与贷款人为好友关系,双方在借款过程中均同意遵守本协议的约定,如有违约行为,承担相应的法律责任。\n\n借款人(签名并按指纹):XXX\n\n贷款人(签名并按指纹):XXX\n\n日期:[日期]",
100 | "LawGPT_zh": "以下是一份较为规范的商业借款借条,供您参考:\n\n借款人(甲方):______________身份证号码:__________联系方式:____________________\n\n贷款人(乙方):______________身份证号码:__________联系方式:____________________\n\n根据《中华人民共和国合同法》以及其他相关法律法规的规定,甲乙双方在平等、自愿的基础上,就乙方向甲方借贷人民币元整的行为达成如下协议:\n\n一、借款金额及还款方式:\n\n1. 乙方向甲方一次性借款人民币元整;\n2. 在借款期限内,乙方向甲方按照约定的利率每天偿还利息共计0.5元;\n3. 如因不可抗力或者第三人原因不能按时还款时,双方可以协商调整还款期限。\n4. 本合同签订后生效,双方应当履行其义务并共同遵守合同约定的还款计划和数额。\n\n二、贷款利率及支付时间:\n\n1. 甲方向乙方支付的借款本金为人民币元整;\n2. 根据中国人民银行同期公布的金融机构同类贷款利率计算,乙方向的每月最低还款额为元人民币元/月;\n3. 如有需要,可以随时向甲方要求支付额外的利息费用。\n\n三、争议解决:\n\n1. 如果双方发生争议的,应通过友好协商的方式解决了;\n2. 无法达成一致意见的,可以向当地人民法院提起诉讼。\n\n四、其他约定事项:\n\n1. 本合同采用纸质形式填写,并存放在甲方指定的地方;\n2. 本合同一式两份,甲、乙双方各执一份;\n3. 自签字之日起生效。\n\n借款人(甲方):______________日期:_______年_____月_____日\n\n贷款人(乙方):______________日期:_______年_____月_____日"
101 | },
102 | {
103 | "问题": "借条有法律效力吗?",
104 | "ChatGLM": "借条是一种有效的法律文件,可以用来证明借贷关系的存在和合法性。当写下借条时,应该明确说明借款的金额、利率、还款期限等细节,并在借条上签名或盖章。如果借款人在到期未还的情况下,可以向债权人提起民事诉讼来维护权益。但是需要注意的是,在某些情况下,借条可能会因为以下原因而失去效力:\n1. 没有约定明确的还款期限或者还款方式;\n2. 借款人没有足够的信用能力或无法按时偿还贷款;\n3. 借款人恶意拖欠债务。\n因此,为了确保借条的有效性,最好在签署前咨询律师并仔细阅读相关条款。",
105 | "LawGPT_zh": "如果借款人和出借人之间存在借款关系,并且借款人向出借人提供了书面形式的借条,那么该借条在法律上具有法律效力。根据法律规定,借贷关系的成立必须具备以下几个要素:\n\n1. 双方存在事实借贷关系;\n2. 借贷金额必须是明确的;\n3. 借贷行为是由各方自愿发生的;\n4. 借款必须有明确的期限;\n5. 双方都已认可所收到的款项是合法的;\n6. 借款人应当按时偿还本金及利息等费用。\n\n如果发生争议,可以根据借条中的内容进行起诉维权,但是在起诉前需要收集好相关的证据,例如证人证言、转账记录等等。"
106 | },
107 | {
108 | "问题": "借条有没有法律效力?",
109 | "ChatGLM": "如果借款人和出借人在借款时签署了书面协议,并留下了相应的借条,那么通常情况下,借条是有法律效力的。\n\n根据《中华人民共和国合同法》的规定,合同成立的条件包括:双方真实意思的表达、主体适格、内容合法等。如果在借款过程中,借款人和出借人达成了合法的借贷关系,并且借款人向出借人提供了必要的抵押物或保证人等,那么在签署借条时就已经形成了有效的合同。\n\n如果借款人在还款期限内未按照约定还款,出借人可以通过起诉借款人来维护自己的权益。在这种情况下,法院通常会依据借条中约定的还款金额、利率、期限等因素来确定借款人应偿还的债务数额和利息等费用。\n\n需要注意的是,如果借条上没有明确写出还款期限或者其他重要条款,或者存在其他欺诈、胁迫等情况,则可能会导致借条不具备法律效力。因此,在签署借条之前,最好咨询专业人士的意见,确保借条的有效性。",
110 | "LawGPT_zh": "有。根据中国《合同法》的规定,借款人应当按照约定的期限支付借款利息;没有约定利息的,视为不支付利息。借款人应当按照约定的用途使用借款;未按照约定的用途使用的,对于贷款期间的利息应当加倍归还。在借款人同意的情况下可以写成现金形式或者转账形式的都有法律效力"
111 | },
112 | {
113 | "问题": "公司企业欠钱不还怎么办?",
114 | "ChatGLM": "如果一家公司企业欠了钱不还,可以尝试以下几个步骤:\n\n1. 与该公司联系并要求还款。可以通过电话、邮件或书面信函等方式与该公司进行沟通,要求其归还欠款。如果该公司未能按时还款,可以催促他们还款并提醒他们可能面临的法律后果。\n\n2. 通过律师起诉该公司。如果有充分的证据表明该公司拖欠了款项,可以考虑聘请一位律师起诉该公司。这可能需要支付一定的费用,但可以提高获得赔偿的机会。\n\n3. 向当地劳动监察部门投诉。如果发现该公司未按照合同履行义务,可以向当地的劳动监察部门投诉。这些机构可能会调查此事并采取行动。\n\n4. 寻求其他途径。除了上述方法外,还可以尝试通过社会救助组织或其他第三方来帮助找到对方并与他们协商解决问题。",
115 | "LawGPT_zh": "如果一家公司欠款,可以通过以下几种途径处理:\n\n1.协商解决。如果双方之间有约定并且对方愿意履行债务的话,可以与该公司进行协商并请求其归还所欠款项;如果对方不愿意或无法偿还的,可以直接向法院起诉要求对方还款;需要注意的是要保留好相关证据(例如合同、转账记录等)以便在诉讼过程中证明对方的合法性和合理性。\n\n2.使用法律手段。如果没有证据证明对方存在违约行为或者没有到期不能还清的债务,则只能直接向法院提起诉讼来维护自己的合法权益;但需要注意收集充分的证据以证明自己的要求合法合理,比如书面催收函、聊天记录等等。\n\n3.借助外力。如果遇到的是实力强大的公司或是无良老板逃避责任的情况,也可以通过第三方机构来帮助处理问题,如律师协会、劳动监察部门等等。但是请注意这类机构的收费往往较高且不一定能够完全解决问题,慎重考虑。"
116 | },
117 | {
118 | "问题": "客户欠钱不还怎么办?",
119 | "ChatGLM": "遇到客户欠钱不还的情况,可以采取以下几个步骤:\n\n1. 与客户进行沟通:尝试与潜在客户沟通,了解他们是否有还款计划或者愿意给出一个具体的时间表。如果他们没有按时偿还债务,可以向他们说明情况并提醒他们需要尽快采取行动。\n\n2. 发送催款邮件或短信:如果和客户之间有电子邮件或短信联系,可以通过这些渠道催促他们支付债务。在邮件或短信中,要礼貌地表达关心,并提供明确的信息和日期限制,以帮助客户意识到他们的义务。\n\n3. 寻求法律支持:如果客户的财务状况不佳,并且无法及时偿还债务,可以考虑向法院提起诉讼。这需要一定的时间和成本,但可以提供最大的保护权益。\n\n4. 采取其他行动:除了诉讼之外,还可以考虑采取其他行动,例如要求对方提供财务记录、查询银行账户等。但是,在进行任何此类操作之前,请确保已经了解了所有适用的法律程序和相关规定。\n\n无论采取了何种方法,都要保持耐心和专业,同时遵守适用的法律法规,以确保自己的权益得到充分保护。",
120 | "LawGPT_zh": "如果客户拖欠了款项,可以考虑采取以下措施:\n\n1. 通过法律途径追讨欠款。可以通过起诉客户、向法院申请强制执行等方式来维护自己的合法权益;\n2. 通过与客户沟通协商解决,但如果对方仍然拒绝还款的话。可以收集相关的证据,例如聊天记录、转账记录等,证明对方已经欠下了相应的金额;3. 找到客户的联系方式进行沟通并了解其具体情况和地址等信息,以方便进行催收工作。如果这些方式都无效的情况下,建议寻求专业的律师帮助来进行维权处理,避免不必要的损失。\""
121 | }
122 | ]
123 |
--------------------------------------------------------------------------------
/data/test_result_with_retriver.json:
--------------------------------------------------------------------------------
1 | [
2 | {
3 | "问题": "遭遇强制拆迁 如何用法律保护自己?",
4 | "回答": "面对强制拆迁,可以通过以下方式来保护权益:\n\n1. 了解法律法规:在被拆除房屋之前,应该仔细阅读相关的法律法规,了解当地《城市拆征条例》等相关法规的规定,明确自己在法律上的权利和义务。\n\n2. 寻求法律援助:如果自己无法独立解决问题,可以向当地的法律援助中心寻求帮助。法律援助中心会提供专业的法律咨询和支持,帮助自己了解法律知识,评估案情,制定维权方案。\n\n3. 保留证据:在遇到强制拆迁时,要尽可能多地保存相关证据,如书面记录、照片、录音、录像等,以备日后维权之需。\n\n4. 不要反抗:虽然强拆是违反法律规定的行为,但是不要盲目反抗或煽动他人反抗,以免引发更大的社会问题。应该保持冷静,按照指定的时间和地点配合工作人员的工作。\n\n5. 申请司法救济:如果在强拆过程中受到了损失或伤害,应该及时向法院提起诉讼,申请司法救济。同时,还可以要求赔偿损失、恢复原状等。\n\n6. 注意自我保护:在强拆过程中,应该注意自身的安全,尽量避免前往危险区域。如果遇到威胁或暴力行为,应及时报警并与警方进行沟通。\n\n总之,遇到强制拆迁时,要时刻保持头脑清醒,掌握相关法律法规和政策,积极沟通合作,理性面对挑战,通过合法手段维护自身合法权益。"
5 | },
6 | {
7 | "问题": "债务纠纷中物保与人保的责任顺序?",
8 | "回答": "根据《保险法》第二十条至第三十四条的规定,的债务纠纷中,物保与人保有责任顺序如下:\n\n1. 在人寿保险案件中,保险人在履行保险合同的约定范围内承担责任。而在财产保险案件中,被保险人应当在保险事故发生时对保险标的拥有保险利益,即具有保险利益。如果被保险人没有保险利益的,则保险公司不承担保险责任。因此,在这种情况下,保险人需要在保险金额范围内承担保险责任。\n\n2. 对于非人寿保险案件,例如财产险案件,当被保险人对保险标的拥有保险利益时,保险公司应在保险责任范围范围内承担保险责任。否则,即使存在保险公司的追偿能力,保险公司也不会承担任何责任。\n\n3. 对于人寿保险案件的追偿部分,保险公司需要先向有保险利益的被保险人追偿。如果没有人有保险利益的,则需要向无保险利益的被保人追偿。如果被保人的保险利益与其负债不同,则在追偿时需要考虑相关比例。\n\n综上所述,当发生债务纠纷时,物保与人保的责任顺序取决于具体情况,但是在人寿保险案件中,无论是否拥有保险利益的被保险人都应该首先获得保险公司的保障。"
9 | },
10 | {
11 | "问题": "借款人去世 继承人是否应履行偿还义务",
12 | "回答": "根据《民法典》第一千一百六一号条款规定,继承人以所得遗产实际价值为限清偿被继承人依法应当缴纳的税款和债务。超过遗产实际价值部分,继承人自愿偿还的不在此限。因此,如果借款人的遗产实际价值超过了其应该缴纳的债务金额,那么继承人就没有义务偿还额外的债务。但是如果继承人自愿偿还,则不在此限制范围内。\n\n在例题中,如果借款人已经去世了,他的继承人可以通过继承其遗产来还款。如果继承人继承了全部或大部分财产,那么他有义务偿还所有未偿还的债务。如果没有继承到足够的财产,他可能无法承担额外债务,但这并不意味着他不应该尽力偿还债务。"
13 | },
14 | {
15 | "问题": "欠条与借条的法律效力探讨",
16 | "回答": "欠条和借条都是证明债务关系的法律文件,但在法律效力上存在一些区别。\n\n在民法典《合同编》第671条规定下,如果贷款人未按照约定的日期、数额提供借款,造成借款人损失的,应当赔偿损失。这意味着如果借款人收到了欠条但未能按时偿还债务,那么贷款人可以要求其赔偿因此造成的损失。此外,借款人未按照约定的日期、数额收取借款的,应当按照约定的日期、数额支付利息。这里的“未按照约定的日期”指的是借款人没有按照借条中约定的还款日期还款,而“未按照约定的数额”则是指借款人没有按照借条中约定的金额还款。\n\n相比之下,借条通常被认为是一种更正式的合同形式。在民法典《合同编》第514条规定下,以支付金钱为内容的债,除法律另有规定或者当事人另有约定外,债权人可以请求债务人以实际履行地为货币履行。也就是说,只要债务是以货币的形式存在的,债权人就可以要求债务人在实际履行地的货币范围内履行债务。这与欠条中的“欠款”概念有所不同,因为欠条通常是表示对某项资产的借贷关系,而不是单纯的货币债务。\n\n综上所述,欠条和借条在法律效力上都可以被认可,但是在具体的法律规定和应用上存在不同。对于贷款人来说,如果他们能够提供一份正式的借条来证明他们之间的债务关系,那么在法律上就具有更高的法律效力。而对于债权人来说,他们需要根据实际情况来确定使用哪种形式的合同来维护自己的权益。"
17 | },
18 | {
19 | "问题": "信用卡欠账不还后果很严重吗?",
20 | "回答": "根据《中华人民共和国合同法》第六百七十二条规定:“借款人未按照约定的日期、数额提供借款,造成借款人损失的,应当赔偿损失。”因此,如果使用信用卡欠款不还,将会承担违约责任并产生债务纠纷,可能会导致严重的法律后果。\n\n首先,信用卡欠款不还会产生高额的滞纳金和利息费用,这些费用会严重影响信用记录,甚至会影响到未来的借贷和其他金融活动。此外,恶意拖欠信用卡欠款可能会触发银行采取追讨措施,例如向法院提起诉讼或进行强制执行,这些都可能导致严重的经济负担和法律责任。\n\n所以一旦使用了信用卡,就应该按时还款以避免不必要的债务风险。如果无法偿还账单,应该及时与银行联系并协商解决方案。"
21 | },
22 | {
23 | "问题": "如何通过法律手段应对民间借贷纠纷",
24 | "回答": "根据《民法典》第468条的规定,对于非因合同产生的债权债务关系,适用于有关该债权债务关系的法律规定,而没有特定的规定时则适用于本编通则的有关规定,但是根据其性质不能适用的除外。因此,对于民间借贷纠纷,应该依据《民法典》第457条的规定,即借款人应当按照约定的利率支付利息;未按照的约定支付利息的,借款人可以随时请求借款人偿还本金并赔偿损失。\n\n此外,如果借款人到期不还款或者违反与出借人之间的借款协议中关于还款金额、还款日期等内容的,出借人可以通过向法院提起诉讼来维护自己的权益。在起诉前,出借人应当采取必要的证据措施,如收集借款人的身份信息、聊天记录、转账记录等信息以证明双方之间的关系和借款事实的真实性。同时,出借人也需要考虑自己的合法权益是否受到侵害,并根据实际情况决定是否委托律师进行维权。"
25 | },
26 | {
27 | "问题": "起诉书怎么写?",
28 | "回答": "写起诉书的一般步骤如下:\n\n1.标题:简明扼要地概括案件的基本情况,如“XXX合同纠纷一案”。\n\n2.正文:开头应说明原告的身份、诉讼请求以及事实和理由等基本要素。接着提供相关证据和证据来源,并详细陈述案件的具体情况和要求。\n\n3.结尾:写上法院的受理日期和管辖权异议等内容,最后署上原告的个人信息和一些标志性的用语,例如“此致”或“敬礼”。\n\n下面是一份可能的起诉书范例:\n\n项目名称:XXX合同纠纷一案\n\n原告人:[你的姓名] [身份证号码/护照号码],男/女,[出生日期/国籍],[联系电话/邮箱地址],现住:[你的住址]。\n\n诉讼请求:\n1.判决被告赔偿原告人身损害抚慰金[元];\n2.判决被告承担本案的全部诉讼费用[元]。\n\n事实和理由:\n\n根据《中华人民共和国合同法》第七十二条规定:“当事人应当按照约定履行自己的义务。” 根据《最高人民法院关于适用<中华人民共和国合同法>若干问题的解释(一)》第一条第二项的规定,合同成立后,一方违反诚信原则未按照约定履行债务的,另一方可以依据合同法第九十条的规定行使撤销权。根据上述法律规定,本案中被告未能按时支付借款利息,违反了诚实守信的原则,给原告造成了实际的损失。\n\n为了保护的合法权益,原告要求贵院依法裁判被告承担相应的法律责任,同时要求被告承担本案的全部诉讼费用。\n\n的证据和证据来源:\n\n1.合同:《借款协议》;\n2.短信记录:证明被告曾向原告发出过催款信息;\n3.转账记录:证明被告已经收到了原告的还款。\n\n证人姓名和住所:\n\n[证人姓名] [证人姓名的电话号码/电子邮件地址] [证人姓名的住址]"
29 | },
30 | {
31 | "问题": "欠钱不还可以报警吗?",
32 | "回答": "根据《中华人民共和国治安管理处罚法》第六十五条的规定,拖欠他人的财物,拒不归还的,处十五日以下拘留并处罚款。因此,如果欠他人钱不还,可以申请警方介入进行处理,对方可能会受到相应的治安处罚。\n\n同时,根据《中华人民共和国合同法》第五百二十二条的规定,借款人应当按照约定的期限还款;没有约定期限或者约定不明确,借款人可以随时请求借款人还款。如果借款人未能按时还款,可以根据这一条款向法院起诉要求对方还款。\n\n最后,根据《民法商法》第七十六条的规定,因不可抗力不能履行合同或者造成他人损害的,不承担民事责任,法律另有规定的除外。如果一方因为不可抗力无法履行债务,对方是可以要求他赔偿损失并享有抗辩权的。"
33 | },
34 | {
35 | "问题": "对方不认借条怎么办?",
36 | "回答": "根据《民法典》第七章追索权和支付令的规定,如果对方否认借条的真实性,你可以向法院起诉要求其还款并提供相应的证据证明借款事实的存在。\n\n首先,你需要收集相关的证据来证明借款事实的发生。例如,借条、借款人的签名或盖章、证人证言等。如果有其他相关证据能够表明借款事实的存在,也可以一起提交给法院作为辅助证据。\n\n其次,你需要考虑是否符合起诉条件。根据《民事诉讼法》第二百二十一条的规定,债权人请求债务人给付金钱、有价证券,符合下列条件的,可以向有管辖权的基层人民法院申请支付令:(一)债权人与债务人没有其他债务纠纷的;(二)支付令能够送达债务人的。如果不符合这些条件,就不能向法院申请支付令。\n\n最后,如果你对起诉有信心,可以选择在判决生效后申请执行。根据《民法典》第六编强制执行一章的规定,当事人没有约定执行力物的,人民法院可以根据案件的具体情况拍卖、出售或者报废该物,所得款项应当优先受偿。"
37 | },
38 | {
39 | "问题": "借条约定的还款期限到期怎么办?",
40 | "回答": "根据《民法典》第五百二十二条规定,如果借条中约定了还款期限,而借款人未能按照约定归还欠款,那么借款人应该承担相应的违约责任。根据《商业银行法》第三十七条的规定,商业银行在与借款人签订贷款合同时,应当约定贷款的种类、借款用途、金额、利率、还款期限、还款方式、违约责任和双方认为需要约定的其他事项。如果在还款期限到期后,借款人未能按时归还欠款,商业银行有权依据贷款合同约定追究借款人的违约责任。\n\n在这种情况下,如果借款人能够提供证据证明其在合理时间内没有收到银行的催收通知或是因为其他原因导致无法及时偿还欠款,他可以尝试通过诉讼途径来维权。但是如果的债务的标的不适于强制履行或者履行费用过高,或者是由于法律上或者事实上的原因无法履行债务,借款人则可以选择向第三人主张违约责任,而不是直接向法院起诉。"
41 | },
42 | {
43 | "问题": "借条受法律保护吗?",
44 | "回答": "根据《民法典》第668条规定,借款合同应当采用书面形式。但自然人之间的借款另有约定的除外。因此,如果自然人之间存在借款另有约定的,可以口头形式签订合同,但是如果发生争议,需要通过书面形式的确认来维护权益。\n\n对于商业银行贷款,应该订立书面合同。合同应约定贷款种类、借款用途、金额、利率、还款期限、还款方式、违约责任以及双方认为需要约定的其他事项。\n\n关于借条的法律地位,根据《民法典》第745条的规定,借贷关系的成立应具备以下要件:主体适格、客体合法、内容真实、意思表示正确、比分合理、签字盖章有效。其中,借条必须包含上述要素之一,才能受到法律的保护。"
45 | },
46 | {
47 | "问题": "没有借条可以起诉吗?",
48 | "回答": "根据《中华人民共和国合同法》第六87条规定,一般保证的保证人在主合同纠纷未经审判或仲裁,并且就债务人财产依法强制执行仍不能履行债务前,有权拒绝向债权人承担保证责任。因此,如果没有借条或其他证据证明借款事实的发生,债权人可能无法以借款纠纷为由起诉借款人。\n\n如果债权人能够提供其他证据证明存在借贷关系,例如双方之间的口头协议或转账记录等,那么她仍然可以向法院提起诉讼,要求借款人还款。在提起诉讼之前,债权人需要证明自己的权益受到侵害,并收集足够的证据来支持她的诉讼请求。"
49 | },
50 | {
51 | "问题": "借条没写还款时间,多久可以要求对方还钱?",
52 | "回答": "根据《民法典》第六百七十四条规定,如果借条中没有明确约定还款时间,借款人可以随时向贷款人归还借款。同时,根据该法第六百七十五条第一款的规定,贷款人在未能按时偿还借款的情况下,可以通过催告的方式提醒借款人还款。如果对方仍然没有按时还款,则贷款人可以要求借款人在一定时间内返还借款并承担相应的利息损失。具体的时间限制并没有明确规定,但通常应该考虑到双方之间的友好关系以及避免长期借贷纠纷等因素来确定合理的还款期限。因此,建议在收到借条后的一定时间内(通常是一个月内)联系对方并要求其还款,并在逾期后及时咨询律师以维护自身权益。"
53 | },
54 | {
55 | "问题": "别人借钱不还怎么办",
56 | "回答": "建议采取以下步骤:\n\n1. 与对方进行沟通协商,尝试找到解决问题的方法。如果愿意合作,可以尝试达成还款协议,明确双方义务和时间表。如果对方不愿意合作或无法按时偿还债务,可以考虑向法院起诉对方要求还款。\n\n2. 在提起诉讼之前,可以先向对方发送正式的催款通知,并要求其在一定时间内归还欠款。催款通知中应该包含明确的金额和还款日期,并提供方便的联系方式以便对方回复。\n\n3. 如果无法通过诉讼解决争议,可以选择咨询专业律师的意见,了解相关的法律程序和可能的判决结果,并根据具体情况制定下一步的计划。\n\n任何关于他人借款不还被都应该先尝试与对方进行友好的沟通,并寻求解决方案,避免不必要的法律纠纷。"
57 | },
58 | {
59 | "问题": "借给他人用于赌博的钱能否要回?",
60 | "回答": "根据《民法典》第580条规定,如果借贷人为赌博用途而向其借款,其不得要求偿还该笔借款,因为赌债不受法律保护。这是因为赌博属于违法行为,具有高风险性和不可预测性,可能导致严重的财产和人身损失。因此,当借款人以非法手段向他人借款并承诺用于合法用途时,即使借款被用于赌博或其他违法活动,出借人也有权拒绝还款。\n\n当然,如果借款人违反承诺并将借款用于合法的赌博活动中,出借人可以向借款人主张相应的违约责任,例如要求赔偿因其错误行为而导致的经济损失等。如果出借人能够证明借款被滥用或被用来从事非法活动,则可以将借款视为无效,并要求借款人全额归还借款本金及利息。"
61 | },
62 | {
63 | "问题": "起诉后撤诉是否导致诉讼时效中断?",
64 | "回答": "根据《民法典》第一百九十五条的规定,起诉后撤诉可能会导致诉讼时效中断。这是因为在这种情况下,权利人在知道或应该知道权利受到侵害之日起超过二年内向法院提出书面诉讼请求而未能提出,视为放弃该权利,从而进入时效中断的程序。因此,如果起诉后撤诉,则需要及时向法院提交书面申请书以确认中断事项并重新计算时效。"
65 | },
66 | {
67 | "问题": "借款是否有诉讼时效期?",
68 | "回答": "根据《民法典》第一百八十八条的规定,向人民法院请求保护民事权利的诉讼时效期间为三年,法律另有规定的除外。因此,借款的法律适用时间为三年,如果借款人在还款日前三年内向法院起诉,可能会丧失胜诉权。\n\n然而,需要注意的是,自权利受到损害之日起超过二十年的,人民法院不予保护,有特殊情况的,人民法院可以根据权利人的申请决定延长。在这种情况下,需要看具体的借款情况来判断何时起算时效。例如,如果借款人在借款后几十年都没有归还本金和利息,那么即使当时没有约定时效,随着时间的推移也可以认为权利已经受到侵害,此时就可以考虑提起时效抗辩。"
69 | },
70 | {
71 | "问题": "债务人用假名出具的借条债权人如何要回?",
72 | "回答": "根据《合同法》第764条的规定,保理人向应收账款债务人发出应收账款转让通知的,应当表明保理人身份并附有必要凭证。如果债务人使用虚假身份信息开具借条,那么债权人可以通过提交相关的证据证明该借条是以虚假的身份信息生成的。\n\n一旦债权人确定了借条的真实性存在争议,她可以采取以下步骤来追回借款:\n\n1. 向当地法院提起诉讼,提出关于虚假借条的指控和相应的证据。\n\n2. 在案件审理过程中,提供证据证明借条是以虚假身份信息开立的。例如,可以提供债务人使用的银行账户信息和资金来源等信息以证明其虚构了借条上的姓名和身份证号码。\n\n3. 如果判决支持债权人的要求,她会获得胜诉判决,并有权强制执行虚假借条上的款项。\n\n4. 在被追索人清偿债务时,债权人可以获得对因虚假借条而产生的利息和费用的请求。同时,她也需要通过收取相关收据来证明收到了所有的利息和费用。"
73 | },
74 | {
75 | "问题": "宠物狗被撞死,可以索要精神赔偿吗?",
76 | "回答": "根据《民法典》第1250条规定,因第三人的过错致使动物造成他人损害的,被侵权人可以向动物饲养人或者管理人请求赔偿,也可以向第三人请求赔偿。如果动物饲养人或管理人已经赔偿了被侵权人的经济损失,被侵权人还可以通过向第三人追偿来寻求进一步的经济赔偿。\n\n然而,对于宠物狗这种具有人格属性的动物,根据《民法典》第1247条规定,禁止饲养的烈性犬等危险动物造成他人损害的,动物饲养人或者管理人应当承担侵权责任。这意味着如果宠物狗被撞死了,受害人不能直接向动物饲养人或管理人要求经济赔偿,而是需要先通过诉讼程序主张自己的权利,认定是由第三人的行为导致的损失,然后由第三人承担相应的赔偿责任。"
77 | },
78 | {
79 | "问题": "民间借贷正规借条怎么写?",
80 | "回答": "写一个正规的民间借贷借条需要遵循以下步骤:\n\n1.借款人和出借人填写必要的个人信息,如借款人姓名、身份证号码、联系方式等信息。\n\n2.在借条上注明借款的种类、币种、用途、数额、利率、期限和还款方式等相关事项。同时,还可以要求借款人提供担保人或承诺函等文件。\n\n3.在正文部分,应明确表述借款的原因、金额、时间、地点和还款方式和偿还期限等内容,并应根据实际情况具体说明。此外,还应注意书写的语气礼貌、清晰简洁,避免使用口语化的语言。\n\n4.尾部应根据双方的约定,写上借款人的签名或盖章,出借人的签字或盖章,以及借贷关系的确认日期等相关内容。\n\n5.在签署前,建议让双方仔细阅读借条内容,以避免遗漏重要信息。如果存在争议,应及时协商解决,以免产生不必要的纠纷。"
81 | },
82 | {
83 | "问题": "借条有法律效力吗?",
84 | "回答": "根据《民法典》第六百六十八条的规定,借款合同应当采用书面形式,但自然人之间借款另有约定的除外。因此,借条如果不是由双方在书面合同中特别约定过的,则可能需要满足其他的法定条件才能产生法律效力。\n\n根据《商业银行法》第二七十二条的规定,商业银行贷款,应当与借款人订立书面合同。合同应当约定贷款种类、借款用途、金额、利率、还款期限、还款方式、违约责任和双方认为需要约定的其他事项。\n\n因此,如果借款人向商业银行提供了借款,并与商业银行就贷款的种类、用途、金额、利率、还款期限、还款方式等方面达成书面协议,那么形成的借条就有法律效力。如果在借贷过程中没有达成书面协议或者借条上没有约定明确的权利义务,则需要根据实际情况进行判断。"
85 | },
86 | {
87 | "问题": "借条有没有法律效力?",
88 | "回答": "根据《民法典》第六百七十二条规定,贷款人未按照约定的日期、数额提供借款,造成借款人损失的,应当赔偿损失。因此,如果借款人未能按时偿还借款本金及利息,贷款人就有权要求借款人承担相关法律责任。而根据《民法典》第六百七十五条第一款规定,借款人应当按照约定的期限返还借款。对于借款期限没有约定或者约定不明确,依据本法规定仍不能确定的,借款人可以随时返还;贷款人可以催告借款人在合理期限内返还。由此可见,如果借款人在到期后没有及时归还借款,贷款人可以催告借款人在一定时间内及时还款,如果借款人仍然不履行义务,则贷款人有权利起诉借款人并请求其承担相关的违约责任。\n\n借条作为借贷关系的证明文件,具有法律效力。只要借条上双方签字盖章并且内容清晰明确,就具备法律上的证据作用,可以用于对抗第三人和维护合法权益。但需要注意的是,借条也可能存在伪造、篡改等情况,因此在使用时需要特别注意,保留好所有相关的证据材料,以备不时之需。"
89 | },
90 | {
91 | "问题": "公司企业欠钱不还怎么办?",
92 | "回答": "根据相关的法律法规,如果公司企业欠钱不还,可以采取以下步骤进行维权:\n\n1. 向公安机关报案,要求对该公司进行调查,以确认是否存在欠款事实。同时,可以要求公安机关出具调查报告,以便后续采取进一步的法律行动。\n\n2. 如果经过调查认定存在欠款事实,可以向该公司所在的法院提起诉讼,要求其还款。在起诉前,需要准备充分的证据,包括合同、账单、邮件、短信、证人证言等等,以支持自己的诉讼请求。\n\n3. 对于判决结果,可以根据判决指定的期间履行给付义务,如果没有按时履行,可以强制执行。在执行过程中,需要注意保护自身合法权益,避免遭受不必要的损失。\n\n需要注意的是,在公司企业的维权过程中,应该保留相关证据,并及时咨询专业律师的建议,以确保自己的合法权益得到充分维护。"
93 | },
94 | {
95 | "问题": "客户欠钱不还怎么办?",
96 | "回答": "根据《商业银行法》第七十二条和七十二条规定,银行有权利采取相应措施追讨客户的欠款。如果客户欠下的钱还没有偿还,可以通过以下几种方式来处理:\n\n1.与客户沟通协商:尝试与客户进行沟通,了解他们是否需要还款,并找出解决方案。在处理此类问题时,需要保持冷静和专业,避免过度争吵或产生不必要的误解。\n\n2.留下证据:为了避免遭受不必要的损失,建议在谈话过程中记录所有相关细节,例如电话、邮件或其他文本消息。这些文件可以作为以后采取行动的重要参考。此外,可以使用手机拍摄现场视频或音频,以帮助保留对话的关键信息。\n\n3.报警:如果发现客户恶意拖欠贷款或存在欺诈行为,可以向当地警方报案。警方有权进行调查和处理任何违法行为。\n\n4.起诉客户:如果认为客户的欠款行为已经超出了银行的容忍范围,可以考虑向法院提起诉讼。在这种情况下,需要进行充分的证据证明客户欠款的事实以及其应承担的责任。\n\n总之,当遇到客户欠钱不还的情况时,要保持冷静和专业,同时尽可能收集相关的证据,以便在适当的时候采取行动。"
97 | }
98 | ]
--------------------------------------------------------------------------------
/img/img.md:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/img/img2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/img/img2.png
--------------------------------------------------------------------------------
/img/法律bot.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/img/法律bot.png
--------------------------------------------------------------------------------
/src/demo.py:
--------------------------------------------------------------------------------
1 | from transformers import AutoTokenizer, AutoModel
2 | from model.modeling_chatglm import ChatGLMForConditionalGeneration
3 | from model.configuration_chatglm import ChatGLMConfig
4 | import sys
5 | import logging
6 | import os
7 | import math
8 | import json
9 | import torch
10 | from argparse import ArgumentParser
11 | import numpy as np
12 | import random
13 |
14 | from peft import (
15 | get_peft_model,
16 | LoraConfig,
17 | TaskType,
18 | BottleneckConfig,
19 | )
20 |
21 | def set_seed(args):
22 | np.random.seed(args.seed)
23 | torch.manual_seed(args.seed)
24 | random.seed(args.seed)
25 | # if args.n_gpu > 0:
26 | # torch.cuda.manual_seed_all(args.seed)
27 |
28 | def main():
29 | parser = ArgumentParser()
30 | parser.add_argument('--model_path', type=str, default = "model")
31 | parser.add_argument('--model_checkpoint', type=str, default ='model_checkpoint')
32 | parser.add_argument('--peft_path', type=str, default = 'peft_model/lora.p')
33 | parser.add_argument('--lora_use', type=bool, default = True)
34 | parser.add_argument('--adapter_use', type=bool, default = False)
35 | parser.add_argument('--gpu_id', type=str, default = "0")
36 | parser.add_argument("--seed", type=int, default=3407)
37 | args = parser.parse_args()
38 |
39 | # set_seed(args)
40 | os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu_id
41 |
42 | #os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
43 | def read_json(path):
44 | with open(path, "r") as f:
45 | return json.load(f)
46 |
47 | logger = logging.getLogger(__file__)
48 |
49 | tokenizer = AutoTokenizer.from_pretrained(args.model_path, trust_remote_code=True)
50 | model_class = ChatGLMForConditionalGeneration
51 |
52 | logger.info("Setup Model")
53 | num_layers = read_json(os.path.join(args.model_path , "config.json"))["num_layers"]
54 | device_ids = list(range(torch.cuda.device_count()))
55 |
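    # Put the word embeddings and lm_head on the first GPU, the final layernorm on the last GPU,
    # and spread the transformer layers evenly across all visible GPUs
    # (simple model parallelism driven by the device_map passed to from_pretrained).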
56 | device_map = {}
57 | device_map["transformer.word_embeddings"] = device_ids[0]
58 | device_map["transformer.final_layernorm"] = device_ids[-1]
59 | device_map["lm_head"] = device_ids[0]
60 |
61 | allocations = [
62 | device_ids[i] for i in
63 | sorted(list(range(len(device_ids))) * math.ceil(num_layers / len(device_ids)))
64 | ]
65 | allocations = allocations[len(allocations)-num_layers:]
66 | for layer_i, device_id in enumerate(allocations):
67 | device_map[f"transformer.layers.{layer_i}.input_layernorm"] = device_id
68 | device_map[f"transformer.layers.{layer_i}.attention.rotary_emb"] = device_id
69 | device_map[f"transformer.layers.{layer_i}.attention.query_key_value"] = device_id
70 | device_map[f"transformer.layers.{layer_i}.attention.dense"] = device_id
71 | device_map[f"transformer.layers.{layer_i}.post_attention_layernorm"] = device_id
72 | device_map[f"transformer.layers.{layer_i}.mlp.dense_h_to_4h"] = device_id
73 | device_map[f"transformer.layers.{layer_i}.mlp.dense_4h_to_h"] = device_id
74 |
75 | if args.lora_use:
76 | model_class = ChatGLMForConditionalGeneration
77 | model = model_class.from_pretrained(args.model_path, device_map = device_map).half()
 78 |         model.config.use_cache = True  # enable the KV cache for inference
79 | logger.info("Setup PEFT")
80 | peft_config = LoraConfig(
81 | task_type=TaskType.CAUSAL_LM,
82 | inference_mode=False,
83 | r=8,
84 | lora_alpha=16,
85 | lora_dropout=0.1,
86 | target_modules=['query_key_value'],
87 | )
88 | model = get_peft_model(model, peft_config)
89 |
90 | for layer_i in range(len(model.base_model.model.transformer.layers)):
91 | device = model.base_model.model.transformer.layers[layer_i].attention.query_key_value.weight.device
92 | model.base_model.model.transformer.layers[layer_i].attention.query_key_value.lora_B.half().to(device)
93 | model.base_model.model.transformer.layers[layer_i].attention.query_key_value.lora_A.half().to(device)
94 |
95 | if os.path.exists(args.peft_path ):
96 | #start = read_json(peft_arg.peft_path + '/latest.json')["latest_step"]
97 | model.load_state_dict(torch.load(args.peft_path), strict=False)
98 | elif args.adapter_use:
99 | model_class = ChatGLMForConditionalGeneration
100 | model = model_class.from_pretrained(args.model_path, device_map = device_map).half()
 101 |         model.config.use_cache = True  # enable the KV cache for inference
102 | logger.info("Setup PEFT")
103 | peft_config = BottleneckConfig(
104 | bottleneck_size=512,
105 | non_linearity='tanh',
106 | adapter_dropout=0.1,
107 | use_parallel_adapter=True,
108 | use_adapterp=False,
109 | target_modules={"dense_h_to_4h": "mh_adapter", "dense_4h_to_h": "output_adapter"},
110 | scaling=1.0,
111 | bias="none",
112 | task_type="CAUSAL_LM",
113 | )#['query_key_value']
114 | model = get_peft_model(model, peft_config)
115 |
116 | for layer_i in range(len(model.base_model.model.transformer.layers)):
117 | device = model.base_model.model.transformer.layers[layer_i].mlp.dense_h_to_4h.weight.device
118 | model.base_model.model.transformer.layers[layer_i].mlp.dense_h_to_4h.adapter_down.half().to(device)
119 | model.base_model.model.transformer.layers[layer_i].mlp.dense_h_to_4h.adapter_up.half().to(device)
120 | model.base_model.model.transformer.layers[layer_i].mlp.dense_4h_to_h.adapter_down.half().to(device)
121 | model.base_model.model.transformer.layers[layer_i].mlp.dense_4h_to_h.adapter_up.half().to(device)
122 |
123 | if os.path.exists(args.peft_path ):
124 | #start = read_json(peft_arg.peft_path + '/latest.json')["latest_step"]
125 | model.load_state_dict(torch.load(args.peft_path), strict=False)
126 |
127 | else:
128 | model_class = ChatGLMForConditionalGeneration
129 | model = model_class.from_pretrained(args.model_checkpoint, device_map = device_map).half()
 130 |         model.config.use_cache = True  # enable the KV cache for inference
131 |
132 |
133 | model.eval()
134 | print("Human:")
135 | history=[]
136 | while True:
137 | query = input()
138 | if 'new chat' in query:
139 | history=[]
140 | else:
141 | response, history = model.chat(tokenizer, query, history=history, max_length=500)
142 | print("\n------------------------------------------------\nAnswer:")
143 | print(response)
144 | print("\n------------------------------------------------\nHuman:")
145 |
146 |
147 | if __name__ == '__main__':
148 | main()
--------------------------------------------------------------------------------
/src/demo_llm.py:
--------------------------------------------------------------------------------
1 | from transformers import AutoTokenizer, AutoModel
2 | from model.modeling_chatglm import ChatGLMForConditionalGeneration
3 | from model.configuration_chatglm import ChatGLMConfig
4 | import sys
5 | import logging
6 | import os
7 | import math
8 | import json
9 | import torch
10 | from argparse import ArgumentParser
11 | import numpy as np
12 | import random
13 |
14 | from peft import (
15 | get_peft_model,
16 | LoraConfig,
17 | TaskType,
18 | BottleneckConfig,
19 | )
20 |
21 | def set_seed(args):
22 | np.random.seed(args.seed)
23 | torch.manual_seed(args.seed)
24 | random.seed(args.seed)
25 | # if args.n_gpu > 0:
26 | # torch.cuda.manual_seed_all(args.seed)
27 |
28 | def main():
29 | parser = ArgumentParser()
30 | parser.add_argument('--model_path', type=str, default = "model")
31 | parser.add_argument('--model_checkpoint', type=str, default ='model_checkpoint')
32 | parser.add_argument('--peft_path', type=str, default = 'peft_model/lora.p')
33 | parser.add_argument('--lora_use', type=bool, default = True)
34 | parser.add_argument('--adapter_use', type=bool, default = False)
35 | parser.add_argument('--gpu_id', type=str, default = "0")
36 | parser.add_argument("--seed", type=int, default=3407)
37 | args = parser.parse_args()
38 |
39 | # set_seed(args)
40 | os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu_id
41 |
42 | #os.environ["CUDA_VISIBLE_DEVICES"] = "0,1,2,3"
43 | def read_json(path):
44 | with open(path, "r") as f:
45 | return json.load(f)
46 |
47 | logger = logging.getLogger(__file__)
48 |
49 | tokenizer = AutoTokenizer.from_pretrained(args.model_path, trust_remote_code=True)
50 | model_class = ChatGLMForConditionalGeneration
51 |
52 | logger.info("Setup Model")
53 | num_layers = read_json(os.path.join(args.model_path , "config.json"))["num_layers"]
54 | device_ids = list(range(torch.cuda.device_count()))
55 |
56 | device_map = {}
57 | device_map["transformer.word_embeddings"] = device_ids[0]
58 | device_map["transformer.final_layernorm"] = device_ids[-1]
59 | device_map["lm_head"] = device_ids[0]
60 |
61 | allocations = [
62 | device_ids[i] for i in
63 | sorted(list(range(len(device_ids))) * math.ceil(num_layers / len(device_ids)))
64 | ]
65 | allocations = allocations[len(allocations)-num_layers:]
66 | for layer_i, device_id in enumerate(allocations):
67 | device_map[f"transformer.layers.{layer_i}.input_layernorm"] = device_id
68 | device_map[f"transformer.layers.{layer_i}.attention.rotary_emb"] = device_id
69 | device_map[f"transformer.layers.{layer_i}.attention.query_key_value"] = device_id
70 | device_map[f"transformer.layers.{layer_i}.attention.dense"] = device_id
71 | device_map[f"transformer.layers.{layer_i}.post_attention_layernorm"] = device_id
72 | device_map[f"transformer.layers.{layer_i}.mlp.dense_h_to_4h"] = device_id
73 | device_map[f"transformer.layers.{layer_i}.mlp.dense_4h_to_h"] = device_id
74 |
75 | if args.lora_use:
76 | model_class = ChatGLMForConditionalGeneration
77 | model = model_class.from_pretrained(args.model_path, device_map = device_map).half()
 78 |         model.config.use_cache = True  # enable the KV cache for inference
79 | logger.info("Setup PEFT")
80 | peft_config = LoraConfig(
81 | task_type=TaskType.CAUSAL_LM,
82 | inference_mode=False,
83 | r=8,
84 | lora_alpha=16,
85 | lora_dropout=0.1,
86 | target_modules=['query_key_value'],
87 | )
88 | model = get_peft_model(model, peft_config)
89 |
90 | for layer_i in range(len(model.base_model.model.transformer.layers)):
91 | device = model.base_model.model.transformer.layers[layer_i].attention.query_key_value.weight.device
92 | model.base_model.model.transformer.layers[layer_i].attention.query_key_value.lora_B.half().to(device)
93 | model.base_model.model.transformer.layers[layer_i].attention.query_key_value.lora_A.half().to(device)
94 |
95 | if os.path.exists(args.peft_path ):
96 | #start = read_json(peft_arg.peft_path + '/latest.json')["latest_step"]
97 | model.load_state_dict(torch.load(args.peft_path), strict=False)
98 | elif args.adapter_use:
99 | model_class = ChatGLMForConditionalGeneration
100 | model = model_class.from_pretrained(args.model_path, device_map = device_map).half()
 101 |         model.config.use_cache = True  # enable the KV cache for inference
102 | logger.info("Setup PEFT")
103 | peft_config = BottleneckConfig(
104 | bottleneck_size=512,
105 | non_linearity='tanh',
106 | adapter_dropout=0.1,
107 | use_parallel_adapter=True,
108 | use_adapterp=False,
109 | target_modules={"dense_h_to_4h": "mh_adapter", "dense_4h_to_h": "output_adapter"},
110 | scaling=1.0,
111 | bias="none",
112 | task_type="CAUSAL_LM",
113 | )#['query_key_value']
114 | model = get_peft_model(model, peft_config)
115 |
116 | for layer_i in range(len(model.base_model.model.transformer.layers)):
117 | device = model.base_model.model.transformer.layers[layer_i].mlp.dense_h_to_4h.weight.device
118 | model.base_model.model.transformer.layers[layer_i].mlp.dense_h_to_4h.adapter_down.half().to(device)
119 | model.base_model.model.transformer.layers[layer_i].mlp.dense_h_to_4h.adapter_up.half().to(device)
120 | model.base_model.model.transformer.layers[layer_i].mlp.dense_4h_to_h.adapter_down.half().to(device)
121 | model.base_model.model.transformer.layers[layer_i].mlp.dense_4h_to_h.adapter_up.half().to(device)
122 |
123 | if os.path.exists(args.peft_path ):
124 | #start = read_json(peft_arg.peft_path + '/latest.json')["latest_step"]
125 | model.load_state_dict(torch.load(args.peft_path), strict=False)
126 |
127 | else:
128 | model_class = ChatGLMForConditionalGeneration
129 | model = model_class.from_pretrained(args.model_checkpoint, device_map = device_map).half()
 130 |         model.config.use_cache = True  # enable the KV cache for inference
131 |
132 |
133 | model.eval()
134 | print("Human:")
135 | history=[]
136 | while True:
137 | query = input()
138 | if 'new chat' in query:
139 | history=[]
140 | else:
141 | response, history = model.chat(tokenizer, query, history=history, max_length=500)
142 | print("\n------------------------------------------------\nAnswer:")
143 | print(response)
144 | print("\n------------------------------------------------\nHuman:")
145 |
146 |
147 | if __name__ == '__main__':
148 | main()
--------------------------------------------------------------------------------
/src/demo_r.py:
--------------------------------------------------------------------------------
1 | from transformers import AutoTokenizer, AutoModel
2 | from model.modeling_chatglm import ChatGLMForConditionalGeneration
3 | from model.configuration_chatglm import ChatGLMConfig
4 | import sys
5 | import pdb
6 | import logging
7 | import os
8 | import math
9 | import json
10 | import torch
11 | from argparse import ArgumentParser
12 | from retriver.retrieve_law import retriver
13 |
14 | from peft import (
15 | get_peft_model,
16 | LoraConfig,
17 | TaskType,
18 | BottleneckConfig,
19 | )
20 | import faiss
21 | import pickle
22 | import argparse
23 | import json
24 | from text2vec import SentenceModel
25 |
26 |
 27 | def retriver(query, t2v_model, index, raw_law_data, args_retriver):
 28 |     # Encode the query with text2vec and look up the top-k most similar law articles
 29 |     # in the faiss index (this local definition overrides the retriver imported above).
 30 |     q_emb = t2v_model.encode([query])
 31 |     D, I = index.search(q_emb, args_retriver.top_k)
 32 |     return [raw_law_data[i] for i in I[0]]
 33 |
34 |
35 | def main():
36 | parser_retriver = argparse.ArgumentParser()
 37 |     parser_retriver.add_argument('--embedding_path', default='./retriver/law_embs.pkl', type=str, help='path to the pre-computed law-article embeddings')
 38 |     parser_retriver.add_argument('--rawdata_path', default='./retriver/fatiao.json', type=str, help='path to the core law-article file')
 39 |     parser_retriver.add_argument('--top_k', type=int, default=3, help='number of law articles to retrieve from the faiss index')
 40 |     args_retriver = parser_retriver.parse_args()
41 |
42 | law_embeds = pickle.load(open(args_retriver.embedding_path, 'rb'))
43 | raw_law_data = json.load(open(args_retriver.rawdata_path, 'rb'))
44 |
45 | print('load retriver model')
46 | index = faiss.IndexFlatIP(law_embeds.shape[-1])
47 | print(index.is_trained)
48 | index.add(law_embeds)
49 | print(index.ntotal)
50 |
51 | t2v_model = SentenceModel("./text2vec-base-chinese")
52 |
53 | parser = ArgumentParser()
54 | parser.add_argument('--model_path', type=str, default = "./model")
55 | parser.add_argument('--peft_path', type=str, default = './peft_r_model/1.p')
56 | parser.add_argument('--adapter_path', type=str, default = '')
57 | parser.add_argument('--lora_use', type=bool, default = True)
58 | parser.add_argument('--adapter_use', type=bool, default = False)
59 | args = parser.parse_args()
60 | def read_json(path):
61 | with open(path, "r") as f:
62 | return json.load(f)
63 |
64 | logger = logging.getLogger(__file__)
65 |
66 | tokenizer = AutoTokenizer.from_pretrained(args.model_path,trust_remote_code=True)
67 | model_class = ChatGLMForConditionalGeneration
68 |
69 | logger.info("Setup Model")
70 | num_layers = read_json(os.path.join(args.model_path , "config.json"))["num_layers"]
71 | device_ids = list(range(torch.cuda.device_count()))
72 |
73 | device_map = {}
74 | device_map["transformer.word_embeddings"] = device_ids[0]
75 | device_map["transformer.final_layernorm"] = device_ids[-1]
76 | device_map["lm_head"] = device_ids[0]
77 |
78 | allocations = [
79 | device_ids[i] for i in
80 | sorted(list(range(len(device_ids))) * math.ceil(num_layers / len(device_ids)))
81 | ]
82 | allocations = allocations[len(allocations)-num_layers:]
83 | for layer_i, device_id in enumerate(allocations):
84 | device_map[f"transformer.layers.{layer_i}.input_layernorm"] = device_id
85 | device_map[f"transformer.layers.{layer_i}.attention.rotary_emb"] = device_id
86 | device_map[f"transformer.layers.{layer_i}.attention.query_key_value"] = device_id
87 | device_map[f"transformer.layers.{layer_i}.attention.dense"] = device_id
88 | device_map[f"transformer.layers.{layer_i}.post_attention_layernorm"] = device_id
89 | device_map[f"transformer.layers.{layer_i}.mlp.dense_h_to_4h"] = device_id
90 | device_map[f"transformer.layers.{layer_i}.mlp.dense_4h_to_h"] = device_id
91 |
92 | if args.lora_use:
93 | model_class = ChatGLMForConditionalGeneration
94 | model = model_class.from_pretrained(args.model_path, device_map = device_map).half()
 95 |         model.config.use_cache = True  # enable the KV cache for inference
96 | logger.info("Setup PEFT")
97 | peft_config = LoraConfig(
98 | task_type=TaskType.CAUSAL_LM,
99 | inference_mode=False,
100 | r=8,
101 | lora_alpha=16,
102 | lora_dropout=0.1,
103 | target_modules=['query_key_value'],
104 | )
105 | model = get_peft_model(model, peft_config)
106 |
107 | for layer_i in range(len(model.base_model.model.transformer.layers)):
108 | device = model.base_model.model.transformer.layers[layer_i].attention.query_key_value.weight.device
109 | model.base_model.model.transformer.layers[layer_i].attention.query_key_value.lora_B.half().to(device)
110 | model.base_model.model.transformer.layers[layer_i].attention.query_key_value.lora_A.half().to(device)
111 |
112 | if os.path.exists(args.peft_path ):
113 | #start = read_json(peft_arg.peft_path + '/latest.json')["latest_step"]
114 | model.load_state_dict(torch.load(args.peft_path), strict=False)
115 | elif args.adapter_use:
116 | model_class = ChatGLMForConditionalGeneration
117 | model = model_class.from_pretrained(args.model_path, device_map = device_map).half()
 118 |         model.config.use_cache = True  # enable the KV cache for inference
119 | logger.info("Setup PEFT")
120 | peft_config = BottleneckConfig(
121 | bottleneck_size=512,
122 | non_linearity='tanh',
123 | adapter_dropout=0.1,
124 | use_parallel_adapter=True,
125 | use_adapterp=False,
126 | target_modules={"dense_h_to_4h": "mh_adapter", "dense_4h_to_h": "output_adapter"},
127 | scaling=1.0,
128 | bias="none",
129 | task_type="CAUSAL_LM",
130 | )#['query_key_value']
131 | model = get_peft_model(model, peft_config)
132 |
133 | for layer_i in range(len(model.base_model.model.transformer.layers)):
134 | device = model.base_model.model.transformer.layers[layer_i].mlp.dense_h_to_4h.weight.device
135 | model.base_model.model.transformer.layers[layer_i].mlp.dense_h_to_4h.adapter_down.half().to(device)
136 | model.base_model.model.transformer.layers[layer_i].mlp.dense_h_to_4h.adapter_up.half().to(device)
137 | model.base_model.model.transformer.layers[layer_i].mlp.dense_4h_to_h.adapter_down.half().to(device)
138 | model.base_model.model.transformer.layers[layer_i].mlp.dense_4h_to_h.adapter_up.half().to(device)
139 |
140 | if os.path.exists(args.adapter_path ):
141 | #start = read_json(peft_arg.peft_path + '/latest.json')["latest_step"]
142 | model.load_state_dict(torch.load(args.adapter_path), strict=False)
143 | else:
144 | model_class = ChatGLMForConditionalGeneration
145 | model = model_class.from_pretrained(args.model_path, device_map = device_map).half()
 146 |         model.config.use_cache = True  # enable the KV cache for inference
147 |
148 |
149 | model.eval()
150 | history=[]
151 | print("Human:")
152 | while True:
153 | line = input()
154 | query = line.strip()
155 | if 'new chat' in query:
156 | history = []
157 | else:
158 | str1='-'
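            # Two-stage answering: first ask the model for a draft answer with its legal basis,
            # then retrieve the top-k law articles most similar to (question + draft answer),
            # and finally ask the model again with the retrieved articles prepended to the question.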
159 | response, his= model.chat(tokenizer, query + '请给出法律依据', history=history)
160 | law = retriver(query + response,t2v_model,index,raw_law_data,args_retriver)
161 | prompt = '1、' + str1.join(law[0]) + '2、' + str1.join(law[1]) + '3、'+ str1.join(law[2]) + '请根据以上法律,选择最合适的法律生成问题的合理答复,问题是:' + query
162 | response1, history= model.chat(tokenizer, prompt, history=history)
163 | print("\n------------------------------------------------\nAnswer:")
164 | print(response1)
165 | print("\n------------------------------------------------\nHuman:")
166 |
167 |
168 | if __name__ == '__main__':
169 | main()
--------------------------------------------------------------------------------
/src/peft/LICENSE:
--------------------------------------------------------------------------------
1 | Apache License
2 | Version 2.0, January 2004
3 | http://www.apache.org/licenses/
4 |
5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
6 |
7 | 1. Definitions.
8 |
9 | "License" shall mean the terms and conditions for use, reproduction,
10 | and distribution as defined by Sections 1 through 9 of this document.
11 |
12 | "Licensor" shall mean the copyright owner or entity authorized by
13 | the copyright owner that is granting the License.
14 |
15 | "Legal Entity" shall mean the union of the acting entity and all
16 | other entities that control, are controlled by, or are under common
17 | control with that entity. For the purposes of this definition,
18 | "control" means (i) the power, direct or indirect, to cause the
19 | direction or management of such entity, whether by contract or
20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the
21 | outstanding shares, or (iii) beneficial ownership of such entity.
22 |
23 | "You" (or "Your") shall mean an individual or Legal Entity
24 | exercising permissions granted by this License.
25 |
26 | "Source" form shall mean the preferred form for making modifications,
27 | including but not limited to software source code, documentation
28 | source, and configuration files.
29 |
30 | "Object" form shall mean any form resulting from mechanical
31 | transformation or translation of a Source form, including but
32 | not limited to compiled object code, generated documentation,
33 | and conversions to other media types.
34 |
35 | "Work" shall mean the work of authorship, whether in Source or
36 | Object form, made available under the License, as indicated by a
37 | copyright notice that is included in or attached to the work
38 | (an example is provided in the Appendix below).
39 |
40 | "Derivative Works" shall mean any work, whether in Source or Object
41 | form, that is based on (or derived from) the Work and for which the
42 | editorial revisions, annotations, elaborations, or other modifications
43 | represent, as a whole, an original work of authorship. For the purposes
44 | of this License, Derivative Works shall not include works that remain
45 | separable from, or merely link (or bind by name) to the interfaces of,
46 | the Work and Derivative Works thereof.
47 |
48 | "Contribution" shall mean any work of authorship, including
49 | the original version of the Work and any modifications or additions
50 | to that Work or Derivative Works thereof, that is intentionally
51 | submitted to Licensor for inclusion in the Work by the copyright owner
52 | or by an individual or Legal Entity authorized to submit on behalf of
53 | the copyright owner. For the purposes of this definition, "submitted"
54 | means any form of electronic, verbal, or written communication sent
55 | to the Licensor or its representatives, including but not limited to
56 | communication on electronic mailing lists, source code control systems,
57 | and issue tracking systems that are managed by, or on behalf of, the
58 | Licensor for the purpose of discussing and improving the Work, but
59 | excluding communication that is conspicuously marked or otherwise
60 | designated in writing by the copyright owner as "Not a Contribution."
61 |
62 | "Contributor" shall mean Licensor and any individual or Legal Entity
63 | on behalf of whom a Contribution has been received by Licensor and
64 | subsequently incorporated within the Work.
65 |
66 | 2. Grant of Copyright License. Subject to the terms and conditions of
67 | this License, each Contributor hereby grants to You a perpetual,
68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
69 | copyright license to reproduce, prepare Derivative Works of,
70 | publicly display, publicly perform, sublicense, and distribute the
71 | Work and such Derivative Works in Source or Object form.
72 |
73 | 3. Grant of Patent License. Subject to the terms and conditions of
74 | this License, each Contributor hereby grants to You a perpetual,
75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable
76 | (except as stated in this section) patent license to make, have made,
77 | use, offer to sell, sell, import, and otherwise transfer the Work,
78 | where such license applies only to those patent claims licensable
79 | by such Contributor that are necessarily infringed by their
80 | Contribution(s) alone or by combination of their Contribution(s)
81 | with the Work to which such Contribution(s) was submitted. If You
82 | institute patent litigation against any entity (including a
83 | cross-claim or counterclaim in a lawsuit) alleging that the Work
84 | or a Contribution incorporated within the Work constitutes direct
85 | or contributory patent infringement, then any patent licenses
86 | granted to You under this License for that Work shall terminate
87 | as of the date such litigation is filed.
88 |
89 | 4. Redistribution. You may reproduce and distribute copies of the
90 | Work or Derivative Works thereof in any medium, with or without
91 | modifications, and in Source or Object form, provided that You
92 | meet the following conditions:
93 |
94 | (a) You must give any other recipients of the Work or
95 | Derivative Works a copy of this License; and
96 |
97 | (b) You must cause any modified files to carry prominent notices
98 | stating that You changed the files; and
99 |
100 | (c) You must retain, in the Source form of any Derivative Works
101 | that You distribute, all copyright, patent, trademark, and
102 | attribution notices from the Source form of the Work,
103 | excluding those notices that do not pertain to any part of
104 | the Derivative Works; and
105 |
106 | (d) If the Work includes a "NOTICE" text file as part of its
107 | distribution, then any Derivative Works that You distribute must
108 | include a readable copy of the attribution notices contained
109 | within such NOTICE file, excluding those notices that do not
110 | pertain to any part of the Derivative Works, in at least one
111 | of the following places: within a NOTICE text file distributed
112 | as part of the Derivative Works; within the Source form or
113 | documentation, if provided along with the Derivative Works; or,
114 | within a display generated by the Derivative Works, if and
115 | wherever such third-party notices normally appear. The contents
116 | of the NOTICE file are for informational purposes only and
117 | do not modify the License. You may add Your own attribution
118 | notices within Derivative Works that You distribute, alongside
119 | or as an addendum to the NOTICE text from the Work, provided
120 | that such additional attribution notices cannot be construed
121 | as modifying the License.
122 |
123 | You may add Your own copyright statement to Your modifications and
124 | may provide additional or different license terms and conditions
125 | for use, reproduction, or distribution of Your modifications, or
126 | for any such Derivative Works as a whole, provided Your use,
127 | reproduction, and distribution of the Work otherwise complies with
128 | the conditions stated in this License.
129 |
130 | 5. Submission of Contributions. Unless You explicitly state otherwise,
131 | any Contribution intentionally submitted for inclusion in the Work
132 | by You to the Licensor shall be under the terms and conditions of
133 | this License, without any additional terms or conditions.
134 | Notwithstanding the above, nothing herein shall supersede or modify
135 | the terms of any separate license agreement you may have executed
136 | with Licensor regarding such Contributions.
137 |
138 | 6. Trademarks. This License does not grant permission to use the trade
139 | names, trademarks, service marks, or product names of the Licensor,
140 | except as required for reasonable and customary use in describing the
141 | origin of the Work and reproducing the content of the NOTICE file.
142 |
143 | 7. Disclaimer of Warranty. Unless required by applicable law or
144 | agreed to in writing, Licensor provides the Work (and each
145 | Contributor provides its Contributions) on an "AS IS" BASIS,
146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147 | implied, including, without limitation, any warranties or conditions
148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149 | PARTICULAR PURPOSE. You are solely responsible for determining the
150 | appropriateness of using or redistributing the Work and assume any
151 | risks associated with Your exercise of permissions under this License.
152 |
153 | 8. Limitation of Liability. In no event and under no legal theory,
154 | whether in tort (including negligence), contract, or otherwise,
155 | unless required by applicable law (such as deliberate and grossly
156 | negligent acts) or agreed to in writing, shall any Contributor be
157 | liable to You for damages, including any direct, indirect, special,
158 | incidental, or consequential damages of any character arising as a
159 | result of this License or out of the use or inability to use the
160 | Work (including but not limited to damages for loss of goodwill,
161 | work stoppage, computer failure or malfunction, or any and all
162 | other commercial damages or losses), even if such Contributor
163 | has been advised of the possibility of such damages.
164 |
165 | 9. Accepting Warranty or Additional Liability. While redistributing
166 | the Work or Derivative Works thereof, You may choose to offer,
167 | and charge a fee for, acceptance of support, warranty, indemnity,
168 | or other liability obligations and/or rights consistent with this
169 | License. However, in accepting such obligations, You may act only
170 | on Your own behalf and on Your sole responsibility, not on behalf
171 | of any other Contributor, and only if You agree to indemnify,
172 | defend, and hold each Contributor harmless for any liability
173 | incurred by, or claims asserted against, such Contributor by reason
174 | of your accepting any such warranty or additional liability.
175 |
176 | END OF TERMS AND CONDITIONS
177 |
178 | APPENDIX: How to apply the Apache License to your work.
179 |
180 | To apply the Apache License to your work, attach the following
181 | boilerplate notice, with the fields enclosed by brackets "[]"
182 | replaced with your own identifying information. (Don't include
183 | the brackets!) The text should be enclosed in the appropriate
184 | comment syntax for the file format. We also recommend that a
185 | file or class name and description of purpose be included on the
186 | same "printed page" as the copyright notice for easier
187 | identification within third-party archives.
188 |
189 | Copyright [yyyy] [name of copyright owner]
190 |
191 | Licensed under the Apache License, Version 2.0 (the "License");
192 | you may not use this file except in compliance with the License.
193 | You may obtain a copy of the License at
194 |
195 | http://www.apache.org/licenses/LICENSE-2.0
196 |
197 | Unless required by applicable law or agreed to in writing, software
198 | distributed under the License is distributed on an "AS IS" BASIS,
199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200 | See the License for the specific language governing permissions and
201 | limitations under the License.
202 |
--------------------------------------------------------------------------------
/src/peft/Makefile:
--------------------------------------------------------------------------------
1 | .PHONY: quality style test docs
2 |
3 | check_dirs := src tests examples
4 |
5 | # Check that source code meets quality standards
6 |
7 | # this target runs checks on all files
8 | quality:
9 | black --check $(check_dirs)
10 | ruff $(check_dirs)
11 | doc-builder style src tests --max_len 119 --check_only
12 |
13 | # Format source code automatically and check if there are any problems left that need manual fixing
14 | style:
15 | black $(check_dirs)
16 | ruff $(check_dirs) --fix
17 | doc-builder style src tests --max_len 119
18 |
19 | test:
20 | pytest tests/
--------------------------------------------------------------------------------
/src/peft/pyproject.toml:
--------------------------------------------------------------------------------
1 | [tool.black]
2 | line-length = 119
3 | target-version = ['py36']
4 |
5 | [tool.ruff]
6 | ignore = ["C901", "E501", "E741", "W605"]
7 | select = ["C", "E", "F", "I", "W"]
8 | line-length = 119
9 |
10 | [tool.ruff.isort]
11 | lines-after-imports = 2
12 | known-first-party = ["peft"]
13 |
14 | [isort]
15 | default_section = "FIRSTPARTY"
16 | known_first_party = "peft"
17 | known_third_party = [
18 | "numpy",
19 | "torch",
20 | "accelerate",
21 | "transformers",
22 | ]
23 | line_length = 119
24 | lines_after_imports = 2
25 | multi_line_output = 3
26 | include_trailing_comma = true
27 | force_grid_wrap = 0
28 | use_parentheses = true
29 | ensure_newline_before_comments = true
30 |
31 | [tool.pytest]
32 | doctest_optionflags = [
33 | "NORMALIZE_WHITESPACE",
34 | "ELLIPSIS",
35 | "NUMBER",
36 | ]
--------------------------------------------------------------------------------
/src/peft/setup.py:
--------------------------------------------------------------------------------
1 | # Copyright 2023 The HuggingFace Team. All rights reserved.
2 | #
3 | # Licensed under the Apache License, Version 2.0 (the "License");
4 | # you may not use this file except in compliance with the License.
5 | # You may obtain a copy of the License at
6 | #
7 | # http://www.apache.org/licenses/LICENSE-2.0
8 | #
9 | # Unless required by applicable law or agreed to in writing, software
10 | # distributed under the License is distributed on an "AS IS" BASIS,
11 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12 | # See the License for the specific language governing permissions and
13 | # limitations under the License.
14 |
15 | from setuptools import find_packages, setup
16 |
17 | extras = {}
18 | extras["quality"] = ["black ~= 22.0", "ruff>=0.0.241"]
19 | extras["docs_specific"] = ["hf-doc-builder"]
20 | extras["dev"] = extras["quality"] + extras["docs_specific"]
21 | extras["test"] = extras["dev"] + ["pytest", "pytest-xdist", "parameterized"]
22 |
23 | setup(
24 | name="peft",
25 | version="0.3.0.dev0",
26 | description="Parameter-Efficient Fine-Tuning (PEFT)",
27 | license_files=["LICENSE"],
28 | keywords="deep learning",
29 | license="Apache",
30 | author="The AGI-Edgerunners team",
31 | author_email="agi.edgerunners@gmail.com",
32 | url="https://github.com/AGI-Edgerunners/LLM-Adapters",
33 | package_dir={"": "src"},
34 | packages=find_packages("src"),
35 | entry_points={},
36 | python_requires=">=3.7.0",
37 | install_requires=[
38 | "numpy>=1.17",
39 | "packaging>=20.0",
40 | "psutil",
41 | "pyyaml",
42 | "torch>=1.13.0",
43 | "transformers",
44 | "accelerate",
45 | ],
46 | extras_require=extras,
47 | classifiers=[
48 | "Development Status :: 5 - Production/Stable",
49 | "Intended Audience :: Developers",
50 | "Intended Audience :: Education",
51 | "Intended Audience :: Science/Research",
52 | "License :: OSI Approved :: Apache Software License",
53 | "Operating System :: OS Independent",
54 | "Programming Language :: Python :: 3",
55 | "Programming Language :: Python :: 3.7",
56 | "Topic :: Scientific/Engineering :: Artificial Intelligence",
57 | ],
58 | )
59 |
60 | # Release checklist
61 | # 1. Change the version in __init__.py and setup.py.
62 | # 2. Commit these changes with the message: "Release: VERSION"
63 | # 3. Add a tag in git to mark the release: "git tag VERSION -m 'Adds tag VERSION for pypi' "
64 | # Push the tag to git: git push --tags origin main
65 | # 4. Run the following commands in the top-level directory:
66 | # python setup.py bdist_wheel
67 | # python setup.py sdist
68 | # 5. Upload the package to the pypi test server first:
69 | # twine upload dist/* -r pypitest
70 | # twine upload dist/* -r pypitest --repository-url=https://test.pypi.org/legacy/
71 | # 6. Check that you can install it in a virtualenv by running:
72 | # pip install -i https://testpypi.python.org/pypi peft
73 | # 7. Upload the final version to actual pypi:
74 | # twine upload dist/* -r pypi
75 | # 8. Add release notes to the tag in github once everything is looking hunky-dory.
76 | # 9. Update the version in __init__.py, setup.py to the new version "-dev" and push to master
77 |
--------------------------------------------------------------------------------
/src/peft/src/peft.egg-info/PKG-INFO:
--------------------------------------------------------------------------------
1 | Metadata-Version: 2.1
2 | Name: peft
3 | Version: 0.3.0.dev0
4 | Summary: Parameter-Efficient Fine-Tuning (PEFT)
5 | Home-page: https://github.com/AGI-Edgerunners/LLM-Adapters
6 | Author: The AGI-Edgerunners team
7 | Author-email: agi.edgerunners@gmail.com
8 | License: Apache
9 | Keywords: deep learning
10 | Classifier: Development Status :: 5 - Production/Stable
11 | Classifier: Intended Audience :: Developers
12 | Classifier: Intended Audience :: Education
13 | Classifier: Intended Audience :: Science/Research
14 | Classifier: License :: OSI Approved :: Apache Software License
15 | Classifier: Operating System :: OS Independent
16 | Classifier: Programming Language :: Python :: 3
17 | Classifier: Programming Language :: Python :: 3.7
18 | Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
19 | Requires-Python: >=3.7.0
20 | Provides-Extra: quality
21 | Provides-Extra: docs_specific
22 | Provides-Extra: dev
23 | Provides-Extra: test
24 | License-File: LICENSE
25 |
--------------------------------------------------------------------------------
/src/peft/src/peft.egg-info/SOURCES.txt:
--------------------------------------------------------------------------------
1 | LICENSE
2 | pyproject.toml
3 | setup.py
4 | src/peft/__init__.py
5 | src/peft/mapping.py
6 | src/peft/peft_model.py
7 | src/peft.egg-info/PKG-INFO
8 | src/peft.egg-info/SOURCES.txt
9 | src/peft.egg-info/dependency_links.txt
10 | src/peft.egg-info/requires.txt
11 | src/peft.egg-info/top_level.txt
12 | src/peft/tuners/__init__.py
13 | src/peft/tuners/bottleneck.py
14 | src/peft/tuners/lora.py
15 | src/peft/tuners/p_tuning.py
16 | src/peft/tuners/prefix_tuning.py
17 | src/peft/tuners/prompt_tuning.py
18 | src/peft/utils/__init__.py
19 | src/peft/utils/adapters_utils.py
20 | src/peft/utils/config.py
21 | src/peft/utils/other.py
22 | src/peft/utils/save_and_load.py
23 | tests/test_config.py
24 | tests/test_peft_model.py
25 | tests/testing_common.py
26 | tests/testing_utils.py
--------------------------------------------------------------------------------
/src/peft/src/peft.egg-info/dependency_links.txt:
--------------------------------------------------------------------------------
1 |
2 |
--------------------------------------------------------------------------------
/src/peft/src/peft.egg-info/requires.txt:
--------------------------------------------------------------------------------
1 | numpy>=1.17
2 | packaging>=20.0
3 | psutil
4 | pyyaml
5 | torch>=1.13.0
6 | transformers
7 | accelerate
8 |
9 | [dev]
10 | black~=22.0
11 | ruff>=0.0.241
12 | hf-doc-builder
13 |
14 | [docs_specific]
15 | hf-doc-builder
16 |
17 | [quality]
18 | black~=22.0
19 | ruff>=0.0.241
20 |
21 | [test]
22 | black~=22.0
23 | ruff>=0.0.241
24 | hf-doc-builder
25 | pytest
26 | pytest-xdist
27 | parameterized
28 |
--------------------------------------------------------------------------------
/src/peft/src/peft.egg-info/top_level.txt:
--------------------------------------------------------------------------------
1 | peft
2 |
--------------------------------------------------------------------------------
/src/peft/src/peft/__init__.py:
--------------------------------------------------------------------------------
1 | # flake8: noqa
2 | # There's no way to ignore "F401 '...' imported but unused" warnings in this
3 | # module, but to preserve other warnings. So, don't check this module at all.
4 |
5 | # coding=utf-8
6 | # Copyright 2023-present the HuggingFace Inc. team.
7 | #
8 | # Licensed under the Apache License, Version 2.0 (the "License");
9 | # you may not use this file except in compliance with the License.
10 | # You may obtain a copy of the License at
11 | #
12 | # http://www.apache.org/licenses/LICENSE-2.0
13 | #
14 | # Unless required by applicable law or agreed to in writing, software
15 | # distributed under the License is distributed on an "AS IS" BASIS,
16 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
17 | # See the License for the specific language governing permissions and
18 | # limitations under the License.
19 |
20 | __version__ = "0.3.0.dev0"
21 |
22 | from .mapping import MODEL_TYPE_TO_PEFT_MODEL_MAPPING, PEFT_TYPE_TO_CONFIG_MAPPING, get_peft_config, get_peft_model
23 | from .peft_model import (
24 | PeftModel,
25 | PeftModelForCausalLM,
26 | PeftModelForSeq2SeqLM,
27 | PeftModelForSequenceClassification,
28 | PeftModelForTokenClassification,
29 | )
30 | from .tuners import (
31 | LoraConfig,
32 | LoraModel,
33 | BottleneckConfig,
34 | BottleneckModel,
35 | PrefixEncoder,
36 | PrefixTuningConfig,
37 | PromptEmbedding,
38 | PromptEncoder,
39 | PromptEncoderConfig,
40 | PromptEncoderReparameterizationType,
41 | PromptTuningConfig,
42 | PromptTuningInit,
43 | )
44 | from .utils import (
45 | TRANSFORMERS_MODELS_TO_PREFIX_TUNING_POSTPROCESS_MAPPING,
46 | PeftConfig,
47 | PeftType,
48 | PromptLearningConfig,
49 | TaskType,
50 | bloom_model_postprocess_past_key_value,
51 | get_peft_model_state_dict,
52 | prepare_model_for_int8_training,
53 | set_peft_model_state_dict,
54 | shift_tokens_right,
55 | )
56 |
--------------------------------------------------------------------------------
/src/peft/src/peft/__pycache__/__init__.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/__pycache__/__init__.cpython-38.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/__pycache__/__init__.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/__pycache__/__init__.cpython-39.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/__pycache__/mapping.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/__pycache__/mapping.cpython-38.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/__pycache__/mapping.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/__pycache__/mapping.cpython-39.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/__pycache__/peft_model.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/__pycache__/peft_model.cpython-38.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/__pycache__/peft_model.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/__pycache__/peft_model.cpython-39.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/mapping.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2023-present the HuggingFace Inc. team.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | from .peft_model import (
17 | PeftModel,
18 | PeftModelForCausalLM,
19 | PeftModelForSeq2SeqLM,
20 | PeftModelForSequenceClassification,
21 | PeftModelForTokenClassification,
22 | )
23 | from .tuners import LoraConfig, PrefixTuningConfig, PromptEncoderConfig, PromptTuningConfig, BottleneckConfig
24 | from .utils import PromptLearningConfig
25 |
26 |
27 | MODEL_TYPE_TO_PEFT_MODEL_MAPPING = {
28 | "SEQ_CLS": PeftModelForSequenceClassification,
29 | "SEQ_2_SEQ_LM": PeftModelForSeq2SeqLM,
30 | "CAUSAL_LM": PeftModelForCausalLM,
31 | "TOKEN_CLS": PeftModelForTokenClassification,
32 | }
33 |
34 | PEFT_TYPE_TO_CONFIG_MAPPING = {
35 | "PROMPT_TUNING": PromptTuningConfig,
36 | "PREFIX_TUNING": PrefixTuningConfig,
37 | "P_TUNING": PromptEncoderConfig,
38 | "LORA": LoraConfig,
39 | "BOTTLENECK": BottleneckConfig,
40 | }
41 |
42 | TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING = {
43 | "t5": ["q", "v"],
44 | "mt5": ["q", "v"],
45 | "bart": ["q_proj", "v_proj"],
46 | "gpt2": ["c_attn"],
47 | "bloom": ["query_key_value"],
48 | "opt": ["q_proj", "v_proj"],
49 | "gptj": ["q_proj", "v_proj"],
50 | "gpt_neox": ["query_key_value"],
51 | "gpt_neo": ["q_proj", "v_proj"],
52 | "bert": ["query", "value"],
53 | "roberta": ["query", "value"],
54 | "xlm-roberta": ["query", "value"],
55 | "electra": ["query", "value"],
56 | "deberta-v2": ["query_proj", "value_proj"],
57 | "deberta": ["in_proj"],
58 | "layoutlm": ["query", "value"],
59 | "llama": ["q_proj", "v_proj"],
60 | "chatglm": ["query_key_value"],
61 | }
62 |
63 | TRANSFORMERS_MODELS_TO_BOTTLENECK_TARGET_MODULES_MAPPING = {
64 | "bloom": ["dense_h_to_4h", "dense_4h_to_h"],
65 | "gptj": ["fc_in", "fc_out"],
66 | "gpt_neo": ["c_fc", "c_proj"],
67 | "llama": ["gate_proj", "up_proj", "down_proj"],
68 | "opt": ["fc1", "fc2"],
69 | }
70 |
71 | TRANSFORMERS_MODELS_TO_ADAPTERP_TARGET_MODULES_MAPPING = {
72 | "bloom": ["dense_4h_to_h"],
73 | "gptj": ["fc_out"],
74 | "gpt_neo": ["c_proj"],
75 | "llama": ["down_proj"],
76 | "opt": ["fc2"],
77 | }
78 |
79 | TRANSFORMERS_MODELS_TO_PARALLEL_TARGET_MODULES_MAPPING = {
80 | "bloom": ["query_key_value"],
81 | "gptj": ["q_proj", "v_proj", "k_proj"],
82 | "gpt_neo": ["q_proj", "v_proj", "k_proj"],
83 | "llama": ["q_proj", "v_proj", "k_proj"],
84 | "opt": ["q_proj", "v_proj", "k_proj"],
85 |
86 | }
87 |
88 |
89 |
90 | def get_peft_config(config_dict):
91 | """
92 | Returns a Peft config object from a dictionary.
93 |
94 | Args:
95 | config_dict (`Dict[str, Any]`): Dictionary containing the configuration parameters.
96 | """
97 |
98 | return PEFT_TYPE_TO_CONFIG_MAPPING[config_dict["peft_type"]](**config_dict)
99 |
100 |
101 | def _prepare_prompt_learning_config(peft_config, model_config):
102 | if peft_config.num_layers is None:
103 | if "num_hidden_layers" in model_config:
104 | num_layers = model_config["num_hidden_layers"]
105 | elif "num_layers" in model_config:
106 | num_layers = model_config["num_layers"]
107 | elif "n_layer" in model_config:
108 | num_layers = model_config["n_layer"]
109 | else:
110 | raise ValueError("Please specify `num_layers` in `peft_config`")
111 | peft_config.num_layers = num_layers
112 |
113 | if peft_config.token_dim is None:
114 | if "hidden_size" in model_config:
115 | token_dim = model_config["hidden_size"]
116 | elif "n_embd" in model_config:
117 | token_dim = model_config["n_embd"]
118 | elif "d_model" in model_config:
119 | token_dim = model_config["d_model"]
120 | else:
121 | raise ValueError("Please specify `token_dim` in `peft_config`")
122 | peft_config.token_dim = token_dim
123 |
124 | if peft_config.num_attention_heads is None:
125 | if "num_attention_heads" in model_config:
126 | num_attention_heads = model_config["num_attention_heads"]
127 | elif "n_head" in model_config:
128 | num_attention_heads = model_config["n_head"]
129 | elif "num_heads" in model_config:
130 | num_attention_heads = model_config["num_heads"]
131 | elif "encoder_attention_heads" in model_config:
132 | num_attention_heads = model_config["encoder_attention_heads"]
133 | else:
134 | raise ValueError("Please specify `num_attention_heads` in `peft_config`")
135 | peft_config.num_attention_heads = num_attention_heads
136 |
137 | if getattr(peft_config, "encoder_hidden_size", None) is None:
138 | setattr(peft_config, "encoder_hidden_size", token_dim)
139 |
140 | return peft_config
141 |
142 |
143 | def _prepare_lora_config(peft_config, model_config):
144 | if peft_config.target_modules is None:
145 | if model_config["model_type"] not in TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING:
146 | raise ValueError("Please specify `target_modules` in `peft_config`")
147 | peft_config.target_modules = TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING[model_config["model_type"]]
148 | if len(peft_config.target_modules) == 1:
149 | peft_config.fan_in_fan_out = True
150 | peft_config.enable_lora = [True, False, True]
151 | if peft_config.inference_mode:
152 | peft_config.merge_weights = True
153 | return peft_config
154 |
155 |
156 | def _prepare_bottleneck_config(peft_config, model_config):
157 | if peft_config.target_modules is None:
158 | if peft_config.use_parallel_adapter:
159 | if model_config["model_type"] not in TRANSFORMERS_MODELS_TO_PARALLEL_TARGET_MODULES_MAPPING:
160 | raise ValueError("Please specify `target_modules` in `peft_config`")
161 | peft_config.target_modules = TRANSFORMERS_MODELS_TO_PARALLEL_TARGET_MODULES_MAPPING[model_config["model_type"]]
162 | elif peft_config.use_adapterp:
163 | if model_config["model_type"] not in TRANSFORMERS_MODELS_TO_ADAPTERP_TARGET_MODULES_MAPPING:
164 | raise ValueError("Please specify `target_modules` in `peft_config`")
165 | peft_config.target_modules = TRANSFORMERS_MODELS_TO_ADAPTERP_TARGET_MODULES_MAPPING[model_config["model_type"]]
166 | else:
167 | if model_config["model_type"] not in TRANSFORMERS_MODELS_TO_BOTTLENECK_TARGET_MODULES_MAPPING:
168 | raise ValueError("Please specify `target_modules` in `peft_config`")
169 | peft_config.target_modules = TRANSFORMERS_MODELS_TO_BOTTLENECK_TARGET_MODULES_MAPPING[model_config["model_type"]]
170 |
171 | return peft_config
172 |
173 |
174 |
175 | def get_peft_model(model, peft_config):
176 | """
177 | Returns a Peft model object from a model and a config.
178 |
179 | Args:
180 | model ([`transformers.PreTrainedModel`]): Model to be wrapped.
181 | peft_config ([`PeftConfig`]): Configuration object containing the parameters of the Peft model.
182 | """
183 |
184 | model_config = model.config.to_dict()
185 | peft_config.base_model_name_or_path = model.__dict__.get("name_or_path", None)
186 | if peft_config.task_type not in MODEL_TYPE_TO_PEFT_MODEL_MAPPING.keys():
187 | if peft_config.peft_type == "LORA":
188 | peft_config = _prepare_lora_config(peft_config, model_config)
189 | return PeftModel(model, peft_config)
190 | elif peft_config.peft_type == "BOTTLENECK":
191 | peft_config = _prepare_bottleneck_config(peft_config, model_config)
192 | return PeftModel(model, peft_config)
193 | if not isinstance(peft_config, PromptLearningConfig):
194 | if peft_config.peft_type == "BOTTLENECK":
195 | peft_config = _prepare_bottleneck_config(peft_config, model_config)
196 | elif peft_config.peft_type == "LORA":
197 | peft_config = _prepare_lora_config(peft_config, model_config)
198 | else:
199 | peft_config = _prepare_prompt_learning_config(peft_config, model_config)
200 | return MODEL_TYPE_TO_PEFT_MODEL_MAPPING[peft_config.task_type](model, peft_config)
201 |
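When `target_modules` is not set, `get_peft_model` infers it from `model.config.model_type` using the mappings above. A minimal usage sketch for the ChatGLM case (assuming the `THUDM/chatglm-6b` checkpoint, whose config reports `model_type == "chatglm"`, and this bundled `peft` build):

```python
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

# target_modules is omitted, so _prepare_lora_config falls back to
# TRANSFORMERS_MODELS_TO_LORA_TARGET_MODULES_MAPPING["chatglm"] == ["query_key_value"].
config = LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16, lora_dropout=0.1)
model = AutoModel.from_pretrained("THUDM/chatglm-6b", trust_remote_code=True)
model = get_peft_model(model, config)  # returns a PeftModelForCausalLM wrapping ChatGLM
```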
--------------------------------------------------------------------------------
/src/peft/src/peft/tuners/__init__.py:
--------------------------------------------------------------------------------
1 | # flake8: noqa
2 | # There's no way to ignore "F401 '...' imported but unused" warnings in this
3 | # module, but to preserve other warnings. So, don't check this module at all
4 |
5 | # coding=utf-8
6 | # Copyright 2023-present the HuggingFace Inc. team.
7 | #
8 | # Licensed under the Apache License, Version 2.0 (the "License");
9 | # you may not use this file except in compliance with the License.
10 | # You may obtain a copy of the License at
11 | #
12 | # http://www.apache.org/licenses/LICENSE-2.0
13 | #
14 | # Unless required by applicable law or agreed to in writing, software
15 | # distributed under the License is distributed on an "AS IS" BASIS,
16 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
17 | # See the License for the specific language governing permissions and
18 | # limitations under the License.
19 |
20 | from .lora import LoraConfig, LoraModel
21 | from .bottleneck import BottleneckConfig, BottleneckModel
22 | from .p_tuning import PromptEncoder, PromptEncoderConfig, PromptEncoderReparameterizationType
23 | from .prefix_tuning import PrefixEncoder, PrefixTuningConfig
24 | from .prompt_tuning import PromptEmbedding, PromptTuningConfig, PromptTuningInit
25 |
--------------------------------------------------------------------------------
/src/peft/src/peft/tuners/__pycache__/__init__.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/tuners/__pycache__/__init__.cpython-38.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/tuners/__pycache__/__init__.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/tuners/__pycache__/__init__.cpython-39.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/tuners/__pycache__/bottleneck.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/tuners/__pycache__/bottleneck.cpython-38.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/tuners/__pycache__/bottleneck.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/tuners/__pycache__/bottleneck.cpython-39.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/tuners/__pycache__/lora.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/tuners/__pycache__/lora.cpython-38.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/tuners/__pycache__/lora.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/tuners/__pycache__/lora.cpython-39.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/tuners/__pycache__/p_tuning.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/tuners/__pycache__/p_tuning.cpython-38.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/tuners/__pycache__/p_tuning.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/tuners/__pycache__/p_tuning.cpython-39.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/tuners/__pycache__/prefix_tuning.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/tuners/__pycache__/prefix_tuning.cpython-38.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/tuners/__pycache__/prefix_tuning.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/tuners/__pycache__/prefix_tuning.cpython-39.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/tuners/__pycache__/prompt_tuning.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/tuners/__pycache__/prompt_tuning.cpython-38.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/tuners/__pycache__/prompt_tuning.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/tuners/__pycache__/prompt_tuning.cpython-39.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/tuners/bottleneck.py:
--------------------------------------------------------------------------------
1 | import importlib
2 | import math
3 | import re
4 | import warnings
5 | from dataclasses import asdict, dataclass, field
6 | from enum import Enum
7 | from typing import List, Optional, Union
8 |
9 | import torch
10 | import torch.nn as nn
11 | import torch.nn.functional as F
12 |
13 | from ..utils import PeftConfig, PeftType, transpose
14 | from transformers.activations import ACT2FN
15 |
16 |
17 | TRANSFORMERS_MODELS_TO_ADAPTER_TYPE_MAPPING = {
18 | "bloom": {"dense_h_to_4h": "mh_adapter", "dense_4h_to_h": "output_adapter"},
19 | "gptj": {"fc_in":"mh_adapter", "fc_out":"output_adapter"},
20 | "gpt_neo": {"c_fc":"mh_adapter", "c_proj":"output_adapter"},
21 | "llama": {"gate_proj": "mh_adapter", "up_proj":"mh_adapter", "down_proj":"output_adapter"},
22 | "opt": {"fc1":"mh_adapter", "fc2":"output_adapter"},
23 | "chatglm": {"dense_h_to_4h": "mh_adapter", "dense_4h_to_h": "output_adapter"},
24 | }
25 |
26 | def is_bnb_available():
27 | return importlib.util.find_spec("bitsandbytes") is not None
28 |
29 |
30 | if is_bnb_available():
31 | import bitsandbytes as bnb
32 |
33 | @dataclass
34 | class BottleneckConfig(PeftConfig):
35 | """
36 | This is the configuration class to store the configuration of a [`~peft.Bottleneck`].
37 |
38 | Args:
39 | bottleneck_size (`int`): The size of the bottleneck.
40 | non_linearity (`str`): The non-linearity to apply to the bottleneck.
41 |         adapter_dropout (`float`, optional): The dropout probability of the bottleneck. Defaults to 0.0.
42 |         bias (`str`): Bias type for Bottleneck. Can be 'none', 'all' or 'adapter_only'. Defaults to 'none'.
43 | use_parallel_adapter (:obj:`bool`, optional): Whether to use parallel adapter. Defaults to False.
44 | scaling (:obj:`float` or :obj:`str`, optional):
45 | Scaling factor to use for scaled addition of adapter outputs as done by He et al. (2021). Can be either a
46 | constant factor (float) or the string "learned", in which case the scaling factor is learned. Defaults to
47 | 1.0.
48 | target_modules (`Union[List[str],str]`): The names of the modules to apply Adapter to.
49 | init_weights (:obj:`str`, optional): Initialization method for the weights of the adapter modules.
50 | Currently, this can be either "bert" (default) or "mam_adapter".
51 | modules_to_save (`List[str]`):List of modules apart from Bottleneck adapter layers to be set as trainable
52 | and saved in the final checkpoint.
53 | """
54 | bottleneck_size : int = field(default=256, metadata={"help": "The size of the bottleneck"})
55 | non_linearity : str = field(default="tanh", metadata={"help": "The non-linearity to apply to the bottleneck"})
56 | adapter_dropout : float = field(default=0.0, metadata={"help": "The dropout probability of the bottleneck, default to 0.0"})
57 | target_modules: Optional[Union[List[str], str]] = field(
58 | default=None,
59 | metadata={
60 | "help": "List of module names or regex expression of the module names to replace with Adapter."
61 | "For example, ['q', 'v'] or '.*decoder.*(SelfAttention|EncDecAttention).*(q|v)$' "
62 | },
63 | )
64 | use_parallel_adapter: bool = field(default=False, metadata={"help": "Whether to use parallel adapter"})
65 | use_adapterp: bool = field(default=False, metadata={"help": "Whether to use adapterp"})
66 | scaling: Union[float, str] = 1.0
67 | bias: str = field(default="none", metadata={"help": "Bias type for Bottleneck. Can be 'none', 'all' or 'adapter_only'"})
68 | init_weights: str = field(default="bert", metadata={"help": "Initialization method for the weights of the adapter modules."})
69 | modules_to_save: Optional[List[str]] = field(
70 | default=None,
71 | metadata={
72 | "help": "List of modules apart from Adapter layers to be set as trainable and saved in the final checkpoint. "
73 | "For example, in Sequence Classification or Token Classification tasks, "
74 | "the final layer `classifier/score` are randomly initialized and as such need to be trainable and saved."
75 | },
76 | )
77 |
78 | def __post_init__(self):
79 | self.peft_type = PeftType.BOTTLENECK
80 |
81 |
82 | class BottleneckModel(torch.nn.Module):
83 | """
84 |     Creates a Bottleneck adapter model for a pretrained transformers model.
85 |
86 | Args:
87 | model ('transformers.PreTrainedModel'): The pretrained model to be adapted.
88 | config (`BottleneckConfig`): The configuration of the Bottleneck adapter.
89 |
90 | Returns:
91 | `torch.nn.Module`: The Bottleneck adapter model.
92 |
93 | Example::
94 |
95 |         >>> from transformers import AutoModelForCausalLM
96 | >>> from peft import BottleneckModel, BottleneckConfig
97 | >>> config = BottleneckConfig(
98 |             peft_type="BOTTLENECK", task_type="CAUSAL_LM", target_modules=["gate_proj", "up_proj", "down_proj"],
99 | bottleneck_size=256, non_linearity="tanh",
100 | )
101 | >>> model = AutoModelForCausalLM.from_pretrained("decapoda-research/llama-7b-hf")
102 | >>> bottleneck_model = BottleneckModel(config, model)
103 |
104 | **Attribute**:
105 | - **model** (`transformers.PreTrainedModel`): The pretrained model to be adapted.
106 | - **peft_config** (`BottleneckConfig`): The configuration of the Bottleneck adapter.
107 | """
108 |
109 | def __init__(self, config, model):
110 | super().__init__()
111 | self.model = model
112 | self.peft_config = config
113 | self._find_and_replace()
114 | mark_only_adapter_as_trainable(self.model, self.peft_config.bias)
115 | self.forward = self.model.forward
116 |
117 | def _find_and_replace(self):
118 | loaded_in_8bit = getattr(self.model, "is_loaded_in_8bit", False)
119 | if loaded_in_8bit and not is_bnb_available():
120 | raise ImportError(
121 | "To use Adapter with 8-bit quantization, please install the `bitsandbytes` package. "
122 | "You can install it with `pip install bitsandbytes`."
123 | )
124 | is_target_modules_in_base_model = False
125 | is_hf_device_map_available = hasattr(self.model, "hf_device_map")
126 | kwargs = {
127 | "bottleneck_size": self.peft_config.bottleneck_size,
128 | "non_linearity": self.peft_config.non_linearity,
129 | "adapter_dropout": self.peft_config.adapter_dropout,
130 | "scaling": self.peft_config.scaling,
131 | "init_weights": self.peft_config.init_weights,
132 | }
133 | key_list = [key for key, _ in self.model.named_modules()]
134 | for key in key_list:
135 | if isinstance(self.peft_config.target_modules, str):
136 | target_module_found = re.fullmatch(self.peft_config.target_modules, key)
137 | else:
138 | target_module_found = any(key.endswith(target_key) for target_key in self.peft_config.target_modules)
139 | if target_module_found:
140 | if not is_target_modules_in_base_model:
141 | is_target_modules_in_base_model = True
142 | parent, target, target_name = self._get_submodules(key)
143 |                 # determine the type of adapter to be used; this will affect the forward pass
144 | if self.peft_config.use_parallel_adapter:
145 | adapter_type = "parallel_adapter"
146 | else:
147 | adapter_type = TRANSFORMERS_MODELS_TO_ADAPTER_TYPE_MAPPING[self.model.config.model_type][target_name]
148 | kwargs.update({"adapter_type": adapter_type})
149 |
150 | bias = target.bias is not None
151 | if loaded_in_8bit and isinstance(target, bnb.nn.Linear8bitLt):
152 | kwargs.update(
153 | {
154 | "has_fp16_weights": target.state.has_fp16_weights,
155 | "memory_efficient_backward": target.state.memory_efficient_backward,
156 | "threshold": target.state.threshold,
157 | "index": target.index,
158 | }
159 | )
160 | if adapter_type == "mh_adapter":
161 | new_module = Linear8bitLt(target.in_features, target.in_features, bias=bias, **kwargs)
162 | elif adapter_type == "output_adapter":
163 | new_module = Linear8bitLt(target.out_features, target.out_features, bias=bias, **kwargs)
164 | elif adapter_type == "parallel_adapter":
165 | new_module = Linear8bitLt(target.in_features, target.out_features, bias=bias, **kwargs)
166 | elif isinstance(target, torch.nn.Linear):
167 | if adapter_type == "mh_adapter":
168 | new_module = Linear(target.in_features, target.in_features, bias=bias, **kwargs)
169 | elif adapter_type == "output_adapter":
170 | new_module = Linear(target.out_features, target.out_features, bias=bias, **kwargs)
171 | elif adapter_type == "parallel_adapter":
172 | new_module = Linear(target.in_features, target.out_features, bias=bias, **kwargs)
173 | self._replace_module(parent, target_name, new_module, target)
174 | if not is_target_modules_in_base_model:
175 | raise ValueError(
176 | f"Target modules {self.peft_config.target_modules} not found in the base model. "
177 | f"Please check the target modules and try again."
178 | )
179 |
180 | def _get_submodules(self, key):
181 | parent = self.model.get_submodule(".".join(key.split(".")[:-1]))
182 | target_name = key.split(".")[-1]
183 | target = self.model.get_submodule(key)
184 | return parent, target, target_name
185 |
186 | def _replace_module(self, parent_module, child_name, new_module, old_module):
187 | setattr(parent_module, child_name, new_module)
188 | new_module.weight = old_module.weight
189 | if old_module.bias is not None:
190 | new_module.bias = old_module.bias
191 | if getattr(old_module, "state", None) is not None:
192 | new_module.state = old_module.state
193 | new_module.to(old_module.weight.device)
194 |
195 | # dispatch to correct device
196 | for name, module in new_module.named_modules():
197 | if "adapter_" in name:
198 | module.to(old_module.weight.device)
199 |
200 | def __getattr__(self, name: str):
201 | """Forward missing attributes to the wrapped module."""
202 | try:
203 | return super().__getattr__(name) # defer to nn.Module's logic
204 | except AttributeError:
205 | return getattr(self.model, name)
206 |
207 | @property
208 | def modules_to_save(self):
209 | return None
210 |
211 | def get_peft_config_as_dict(self, inference: bool = False):
212 | config = {k: v.value if isinstance(v, Enum) else v for k, v in asdict(self.peft_config).items()}
213 | if inference:
214 | config["inference_mode"] = True
215 | return config
216 |
217 | def _set_adapter_layers(self, enabled=True):
218 | for module in self.model.modules():
219 | if isinstance(module, AdapterLayer):
220 | module.disable_adapters = False if enabled else True
221 |
222 | def enable_adapter_layers(self):
223 | self._set_adapter_layers(enabled=True)
224 |
225 | def disable_adapter_layers(self):
226 | self._set_adapter_layers(enabled=False)
227 |
228 |
229 | # The code below is based on https://github.com/adapter-hub/adapter-transformers/blob/master/src/transformers/adapters/modeling.py and lora.py from Hugging Face PEFT
230 | # and modified to work with PyTorch FSDP
231 |
232 |
233 | # ------------------------------------------------------------------------------------------
234 | # Copyright (c) Microsoft Corporation. All rights reserved.
235 | # Licensed under the MIT License (MIT). See LICENSE in the repo root for license information.
236 | # ------------------------------------------------------------------------------------------
237 |
238 |
239 | # Copy from lora.py
240 | # had to adapt it for `lora_only` to work
241 | def mark_only_adapter_as_trainable(model: nn.Module, bias: str = "none") -> None:
242 | for n, p in model.named_parameters():
243 | if "adapter_" not in n:
244 | p.requires_grad = False
245 | if bias == "none":
246 | return
247 | elif bias == "all":
248 | for n, p in model.named_parameters():
249 | if "bias" in n:
250 | p.requires_grad = True
251 | elif bias == "adapter_only":
252 | for m in model.modules():
253 | if isinstance(m, AdapterLayer) and hasattr(m, "bias") and m.bias is not None:
254 | m.bias.requires_grad = True
255 | else:
256 | raise NotImplementedError
257 |
258 |
259 | class AdapterLayer:
260 | def __init__(
261 | self,
262 | bottleneck_size: int,
263 | non_linearity: str,
264 | adapter_dropout: float,
265 | scaling: Union[float, str],
266 | ):
267 | self.bottleneck_size = bottleneck_size
268 | self.non_linearity = non_linearity
269 | self.scaling = scaling
270 | #optional dropout
271 | if adapter_dropout > 0.0:
272 | self.adapter_dropout = nn.Dropout(p=adapter_dropout)
273 | else:
274 | self.adapter_dropout = lambda x: x
275 | self.disable_adapters = False
276 |
277 |
278 | class Linear(nn.Linear, AdapterLayer):
279 | """
280 | Bottleneck adapter in a dense layer. The adapter can be applied after the multi-head attention layer and/or
281 | after the feed-forward layer.
282 | """
283 | def __init__(
284 | self,
285 | in_features: int,
286 | out_features: int,
287 | adapter_type: str,
288 | bottleneck_size: int,
289 | non_linearity: str,
290 | adapter_dropout: float,
291 | scaling: Union[float, str],
292 | init_weights: str,
293 | **kwargs,
294 | ):
295 | nn.Linear.__init__(self, in_features, out_features, **kwargs)
296 | AdapterLayer.__init__(self, bottleneck_size=bottleneck_size,
297 | non_linearity=non_linearity,
298 | adapter_dropout=adapter_dropout,
299 | scaling=scaling)
300 |
301 | self.init_weights = init_weights
302 | self.adapter_type = adapter_type
303 | if isinstance(scaling, float):
304 | self.adapter_scaling = scaling
305 | elif scaling == "learned":
306 | self.adapter_scaling = nn.Parameter(torch.ones(1))
307 | # Actual trainable parameters
308 | self.adapter_down = nn.Linear(in_features, bottleneck_size, bias=False)
309 | self.adapter_up = nn.Linear(bottleneck_size, out_features, bias=False)
310 | self.act_fn = ACT2FN[self.non_linearity]
311 | #Freezing the pre-trained weight matrix
312 | self.weight.requires_grad = False
313 | self.reset_parameters()
314 |
315 | def reset_parameters(self):
316 | nn.Linear.reset_parameters(self)
317 | # if we want to initialize with the bert strategy then this function is called for all the linear layers
318 | if hasattr(self, "adapter_down"):
319 | if self.init_weights == "bert":
320 | self.adapter_down.apply(self.init_bert_weights)
321 | self.adapter_up.apply(self.init_bert_weights)
322 | elif self.init_weights == "mam_adapter":
323 | nn.init.kaiming_uniform_(self.adapter_down.weight, a=math.sqrt(5))
324 | nn.init.zeros_(self.adapter_up.weight)
325 | else:
326 |                 raise ValueError("Unknown init_weights type: {}".format(self.init_weights))
327 |
328 |     # This is copied from the BertPreTrainedModel class to make this a self-contained class.
329 | @staticmethod
330 | def init_bert_weights(module):
331 | """Initialize the weights."""
332 | if isinstance(module, (nn.Linear, nn.Embedding)):
333 | # std defaults to 0.02, this might need to be changed
334 | module.weight.data.normal_(mean=0.0, std=0.02)
335 | elif isinstance(module, nn.LayerNorm):
336 | module.bias.data.zero_()
337 | module.weight.data.fill_(1.0)
338 | if isinstance(module, nn.Linear) and module.bias is not None:
339 | module.bias.data.zero_()
340 |
341 | def train(self, mode: bool = True):
342 | nn.Linear.train(self, mode)
343 | self.adapter_down.train(mode)
344 | self.adapter_up.train(mode)
345 |
346 | def eval(self):
347 | nn.Linear.eval(self)
348 | self.adapter_down.eval()
349 | self.adapter_up.eval()
350 |
351 | def forward(self, x: torch.Tensor):
352 | if self.disable_adapters:
353 | return F.linear(x, self.weight, bias=self.bias)
354 | else:
355 | if self.adapter_type == "mh_adapter":
356 | # for mh_adapter, x will pass the adapter first and then the linear layer
357 | expected_dtype = x.dtype
358 | residual = x
359 |
360 |
361 | output = self.adapter_up(self.act_fn(self.adapter_down(self.adapter_dropout(x)))).to(expected_dtype) * self.adapter_scaling
362 |
363 | output = output + residual
364 |
365 | result = F.linear(output, self.weight, bias=self.bias)
366 | elif self.adapter_type == "output_adapter":
367 | # for output_adapter, x will pass the linear layer first and then the adapter
368 | x = F.linear(x, self.weight, bias=self.bias)
369 | expected_dtype = x.dtype
370 | residual = x
371 |
372 | output = self.adapter_up(self.act_fn(self.adapter_down(self.adapter_dropout(x)))).to(expected_dtype) * self.adapter_scaling
373 |
374 | result = output + residual
375 | elif self.adapter_type == "parallel_adapter":
376 |                 # for parallel_adapter, x passes through the linear layer and the adapter layer in parallel.
377 |                 # The output of the adapter layer is then added to the output of the linear layer.
378 | result = F.linear(x, self.weight, bias=self.bias)
379 | expected_dtype = result.dtype
380 |
381 |
382 | output = self.adapter_up(self.act_fn(self.adapter_down(self.adapter_dropout(x)))).to(expected_dtype) * self.adapter_scaling
383 |
384 | result = result + output
385 | return result
386 |
387 |
388 | if is_bnb_available():
389 |
390 | class Linear8bitLt(bnb.nn.Linear8bitLt, AdapterLayer):
391 |     # Adapter layer for an 8-bit linear layer
392 | def __init__(
393 | self,
394 | in_features: int,
395 | out_features: int,
396 | adapter_type: str,
397 | bottleneck_size: int,
398 | non_linearity: str,
399 | adapter_dropout: float,
400 | scaling: Union[float, str],
401 | init_weights: str,
402 | **kwargs,
403 | ):
404 | bnb.nn.Linear8bitLt.__init__(
405 | self,
406 | in_features,
407 | out_features,
408 | bias=kwargs.get("bias", True),
409 | has_fp16_weights=kwargs.get("has_fp16_weights", True),
410 | memory_efficient_backward=kwargs.get("memory_efficient_backward", False),
411 | threshold=kwargs.get("threshold", 0.0),
412 | index=kwargs.get("index", None),
413 | )
414 | AdapterLayer.__init__(
415 | self,
416 | bottleneck_size=bottleneck_size,
417 | non_linearity=non_linearity,
418 | adapter_dropout=adapter_dropout,
419 | scaling=scaling,)
420 |
421 | self.init_weights = init_weights
422 | self.adapter_type = adapter_type
423 | if isinstance(scaling, float):
424 | self.adapter_scaling = scaling
425 | elif scaling == "learned":
426 | self.adapter_scaling = nn.Parameter(torch.ones(1))
427 | # Actual trainable parameters
428 | self.adapter_down = nn.Linear(in_features, bottleneck_size, bias=False)
429 | self.adapter_up = nn.Linear(bottleneck_size, out_features, bias=False)
430 | self.act_fn = ACT2FN[self.non_linearity]
431 | #Freezing the pre-trained weight matrix
432 | self.weight.requires_grad = False
433 | self.reset_parameters()
434 |
435 | def reset_parameters(self):
436 | nn.Linear.reset_parameters(self)
437 | # if we want to initialize with the bert strategy then this function is called for all the linear layers
438 | if hasattr(self, "adapter_down"):
439 | if self.init_weights == "bert":
440 | self.adapter_down.apply(self.init_bert_weights)
441 | self.adapter_up.apply(self.init_bert_weights)
442 | elif self.init_weights == "mam_adapter":
443 | nn.init.kaiming_uniform_(self.adapter_down.weight, a=math.sqrt(5))
444 | nn.init.zeros_(self.adapter_up.weight)
445 | else:
446 |                     raise ValueError("Unknown init_weights type: {}".format(self.init_weights))
447 |
448 |         # This is copied from the BertPreTrainedModel class to make this a self-contained class.
449 | @staticmethod
450 | def init_bert_weights(module):
451 | """Initialize the weights."""
452 | if isinstance(module, (nn.Linear, nn.Embedding)):
453 | # std defaults to 0.02, this might need to be changed
454 | module.weight.data.normal_(mean=0.0, std=0.02)
455 | elif isinstance(module, nn.LayerNorm):
456 | module.bias.data.zero_()
457 | module.weight.data.fill_(1.0)
458 | if isinstance(module, nn.Linear) and module.bias is not None:
459 | module.bias.data.zero_()
460 |
461 | def forward(self, x: torch.Tensor):
462 | result_pre_forward = super().forward(x)
463 |
464 | if self.disable_adapters:
465 | return result_pre_forward
466 | else:
467 | if self.adapter_type == "mh_adapter":
468 | if not torch.is_autocast_enabled():
469 | expected_dtype = x.dtype
470 |
471 | if x.dtype != torch.float32:
472 | x = x.float()
473 |
474 | residual = x
475 | output = self.adapter_up(self.act_fn(self.adapter_down(self.adapter_dropout(x)))).to(expected_dtype) * self.adapter_scaling
476 | output = (output + residual).to(expected_dtype)
477 |
478 | result = super().forward(output)
479 | else:
480 | residual = x
481 | output = self.adapter_up(self.act_fn(self.adapter_down(self.adapter_dropout(x)))) * self.adapter_scaling
482 | output = output + residual
483 |
484 | result = super().forward(output)
485 | elif self.adapter_type == "output_adapter":
486 | if not torch.is_autocast_enabled():
487 | expected_dtype = result_pre_forward.dtype
488 |
489 | if result_pre_forward.dtype != torch.float32:
490 | result_pre_forward = result_pre_forward.float()
491 |
492 | residual = result_pre_forward
493 | output = self.adapter_up(self.act_fn(self.adapter_down(self.adapter_dropout(result_pre_forward)))).to(expected_dtype) * self.adapter_scaling
494 | result = (output + residual).to(expected_dtype)
495 | else:
496 | residual = result_pre_forward
497 | output = self.adapter_up(self.act_fn(self.adapter_down(self.adapter_dropout(result_pre_forward)))) * self.adapter_scaling
498 | result = output + residual
499 | elif self.adapter_type == "parallel_adapter":
500 | if not torch.is_autocast_enabled():
501 | expected_dtype = result_pre_forward.dtype
502 |
503 | if x.dtype != torch.float32:
504 | x = x.float()
505 |
506 | output = self.adapter_up(self.act_fn(self.adapter_down(self.adapter_dropout(x)))).to(expected_dtype) * self.adapter_scaling
507 | result = result_pre_forward + output
508 | else:
509 | output = self.adapter_up(self.act_fn(self.adapter_down(self.adapter_dropout(x)))) * self.adapter_scaling
510 | result = result_pre_forward + output
511 |
512 | return result
513 |
514 |
515 |
516 |
517 |
518 |
519 |
520 |
521 |
522 |
523 |
524 |
525 |
526 |
527 |
--------------------------------------------------------------------------------
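For orientation, the `output_adapter` branch of the forward pass above is just a scaled residual bottleneck applied to the wrapped layer's output. A minimal standalone sketch of that computation in plain PyTorch (sizes and the GELU non-linearity are illustrative stand-ins for `ACT2FN[self.non_linearity]`, not values taken from this repository):

    import torch
    import torch.nn as nn

    hidden, bottleneck, scaling = 16, 4, 1.0
    adapter_down = nn.Linear(hidden, bottleneck, bias=False)  # project down to the bottleneck
    adapter_up = nn.Linear(bottleneck, hidden, bias=False)    # project back up
    act_fn = nn.GELU()                                        # stands in for ACT2FN[non_linearity]

    h = torch.randn(2, 10, hidden)                            # output of the wrapped Linear layer
    out = h + adapter_up(act_fn(adapter_down(h))) * scaling   # residual bottleneck, as in the output_adapter branch
    print(out.shape)  # torch.Size([2, 10, 16])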
/src/peft/src/peft/tuners/lora.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2023-present the HuggingFace Inc. team.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | import importlib
16 | import math
17 | import re
18 | import warnings
19 | from dataclasses import asdict, dataclass, field
20 | from enum import Enum
21 | from typing import List, Optional, Union
22 |
23 | import torch
24 | import torch.nn as nn
25 | import torch.nn.functional as F
26 | from transformers.pytorch_utils import Conv1D
27 |
28 | from ..utils import PeftConfig, PeftType, transpose
29 |
30 |
31 | def is_bnb_available():
32 | return importlib.util.find_spec("bitsandbytes") is not None
33 |
34 |
35 | if is_bnb_available():
36 | import bitsandbytes as bnb
37 |
38 |
39 | @dataclass
40 | class LoraConfig(PeftConfig):
41 | """
42 |     This is the configuration class to store the configuration of a [`~peft.LoraModel`].
43 |
44 | Args:
45 | r (`int`): Lora attention dimension
46 | target_modules (`Union[List[str],str]`): The names of the modules to apply Lora to.
47 | lora_alpha (`float`): The alpha parameter for Lora scaling.
48 | lora_dropout (`float`): The dropout probability for Lora layers.
49 | merge_weights (`bool`):
50 | Whether to merge the weights of the Lora layers with the base transformer model in `eval` mode.
51 | fan_in_fan_out (`bool`): Set this to True if the layer to replace stores weight like (fan_in, fan_out)
52 | enable_lora ( `List[bool]`): Used with `lora.MergedLinear`.
53 | bias (`str`): Bias type for Lora. Can be 'none', 'all' or 'lora_only'
54 |         modules_to_save (`List[str]`): List of modules apart from LoRA layers to be set as trainable
55 | and saved in the final checkpoint.
56 | """
57 |
58 | r: int = field(default=8, metadata={"help": "Lora attention dimension"})
59 | target_modules: Optional[Union[List[str], str]] = field(
60 | default=None,
61 | metadata={
62 | "help": "List of module names or regex expression of the module names to replace with Lora."
63 | "For example, ['q', 'v'] or '.*decoder.*(SelfAttention|EncDecAttention).*(q|v)$' "
64 | },
65 | )
66 | lora_alpha: int = field(default=None, metadata={"help": "Lora alpha"})
67 | lora_dropout: float = field(default=None, metadata={"help": "Lora dropout"})
68 | merge_weights: bool = field(
69 | default=False, metadata={"help": "Merge weights of the original model and the Lora model"}
70 | )
71 | fan_in_fan_out: bool = field(
72 | default=False,
73 | metadata={"help": "Set this to True if the layer to replace stores weight like (fan_in, fan_out)"},
74 | )
75 | enable_lora: Optional[List[bool]] = field(default=None, metadata={"help": "Used with `lora.MergedLinear`."})
76 | bias: str = field(default="none", metadata={"help": "Bias type for Lora. Can be 'none', 'all' or 'lora_only'"})
77 | modules_to_save: Optional[List[str]] = field(
78 | default=None,
79 | metadata={
80 | "help": "List of modules apart from LoRA layers to be set as trainable and saved in the final checkpoint. "
81 | "For example, in Sequence Classification or Token Classification tasks, "
82 | "the final layer `classifier/score` are randomly initialized and as such need to be trainable and saved."
83 | },
84 | )
85 |
86 | def __post_init__(self):
87 | self.peft_type = PeftType.LORA
88 |
89 |
90 | class LoraModel(torch.nn.Module):
91 | """
92 | Creates Low Rank Adapter (Lora) model from a pretrained transformers model.
93 |
94 | Args:
95 | model ([`transformers.PreTrainedModel`]): The model to be adapted.
96 | config ([`LoraConfig`]): The configuration of the Lora model.
97 |
98 | Returns:
99 | `torch.nn.Module`: The Lora model.
100 |
101 | Example::
102 |
103 |         >>> from transformers import AutoModelForSeq2SeqLM
104 |         >>> from peft import LoraModel, LoraConfig
105 |         >>> config = LoraConfig(peft_type="LORA", task_type="SEQ_2_SEQ_LM", r=8, lora_alpha=32, target_modules=["q", "v"], lora_dropout=0.01)
106 |         >>> model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
107 |         >>> lora_model = LoraModel(config, model)
108 |
109 | **Attributes**:
110 | - **model** ([`transformers.PreTrainedModel`]) -- The model to be adapted.
111 | - **peft_config** ([`LoraConfig`]): The configuration of the Lora model.
112 | """
113 |
114 | def __init__(self, config, model):
115 | super().__init__()
116 | self.peft_config = config
117 | self.model = model
118 | self._find_and_replace()
119 | mark_only_lora_as_trainable(self.model, self.peft_config.bias)
120 | self.forward = self.model.forward
121 |
122 | def _find_and_replace(self):
123 | loaded_in_8bit = getattr(self.model, "is_loaded_in_8bit", False)
124 | if loaded_in_8bit and not is_bnb_available():
125 | raise ImportError(
126 | "To use Lora with 8-bit quantization, please install the `bitsandbytes` package. "
127 | "You can install it with `pip install bitsandbytes`."
128 | )
129 | is_target_modules_in_base_model = False
130 | is_hf_device_map_available = hasattr(self.model, "hf_device_map")
131 | kwargs = {
132 | "r": self.peft_config.r,
133 | "lora_alpha": self.peft_config.lora_alpha,
134 | "lora_dropout": self.peft_config.lora_dropout,
135 | "fan_in_fan_out": self.peft_config.fan_in_fan_out,
136 | "merge_weights": (self.peft_config.merge_weights or self.peft_config.inference_mode)
137 | and not is_hf_device_map_available,
138 | }
139 | key_list = [key for key, _ in self.model.named_modules()]
140 | for key in key_list:
141 | if isinstance(self.peft_config.target_modules, str):
142 | target_module_found = re.fullmatch(self.peft_config.target_modules, key)
143 | else:
144 | target_module_found = any(key.endswith(target_key) for target_key in self.peft_config.target_modules)
145 | if target_module_found:
146 | if not is_target_modules_in_base_model:
147 | is_target_modules_in_base_model = True
148 | parent, target, target_name = self._get_submodules(key)
149 | bias = target.bias is not None
150 | if loaded_in_8bit and isinstance(target, bnb.nn.Linear8bitLt):
151 | kwargs.update(
152 | {
153 | "has_fp16_weights": target.state.has_fp16_weights,
154 | "memory_efficient_backward": target.state.memory_efficient_backward,
155 | "threshold": target.state.threshold,
156 | "index": target.index,
157 | }
158 | )
159 | if self.peft_config.enable_lora is None:
160 | new_module = Linear8bitLt(target.in_features, target.out_features, bias=bias, **kwargs)
161 | else:
162 | kwargs.update({"enable_lora": self.peft_config.enable_lora})
163 | new_module = MergedLinear8bitLt(target.in_features, target.out_features, bias=bias, **kwargs)
164 | elif isinstance(target, torch.nn.Linear) and self.peft_config.enable_lora is None:
165 | new_module = Linear(target.in_features, target.out_features, bias=bias, **kwargs)
166 | elif self.peft_config.enable_lora is not None:
167 | kwargs.update({"enable_lora": self.peft_config.enable_lora})
168 | if isinstance(target, Conv1D):
169 | in_features, out_features = (
170 | target.weight.ds_shape if hasattr(target.weight, "ds_shape") else target.weight.shape
171 | )
172 | else:
173 | in_features, out_features = target.in_features, target.out_features
174 | if kwargs["fan_in_fan_out"]:
175 | warnings.warn(
176 | "fan_in_fan_out is set to True but the target module is not a Conv1D. "
177 | "Setting fan_in_fan_out to False."
178 | )
179 | kwargs["fan_in_fan_out"] = self.peft_config.fan_in_fan_out = False
180 | new_module = MergedLinear(in_features, out_features, bias=bias, **kwargs)
181 | self._replace_module(parent, target_name, new_module, target)
182 | if not is_target_modules_in_base_model:
183 | raise ValueError(
184 | f"Target modules {self.peft_config.target_modules} not found in the base model. "
185 | f"Please check the target modules and try again."
186 | )
187 |
188 | def _get_submodules(self, key):
189 | parent = self.model.get_submodule(".".join(key.split(".")[:-1]))
190 | target_name = key.split(".")[-1]
191 | target = self.model.get_submodule(key)
192 | return parent, target, target_name
193 |
194 | def _replace_module(self, parent_module, child_name, new_module, old_module):
195 | setattr(parent_module, child_name, new_module)
196 | new_module.weight = old_module.weight
197 | if old_module.bias is not None:
198 | new_module.bias = old_module.bias
199 | if getattr(old_module, "state", None) is not None:
200 | new_module.state = old_module.state
201 | new_module.to(old_module.weight.device)
202 |
203 | # dispatch to correct device
204 | for name, module in new_module.named_modules():
205 | if "lora_" in name:
206 | module.to(old_module.weight.device)
207 |
208 | def __getattr__(self, name: str):
209 | """Forward missing attributes to the wrapped module."""
210 | try:
211 | return super().__getattr__(name) # defer to nn.Module's logic
212 | except AttributeError:
213 | return getattr(self.model, name)
214 |
215 | @property
216 | def modules_to_save(self):
217 | return None
218 |
219 | def get_peft_config_as_dict(self, inference: bool = False):
220 | config = {k: v.value if isinstance(v, Enum) else v for k, v in asdict(self.peft_config).items()}
221 | if inference:
222 | config["inference_mode"] = True
223 | return config
224 |
225 | def _set_adapter_layers(self, enabled=True):
226 | for module in self.model.modules():
227 | if isinstance(module, LoraLayer):
228 | module.disable_adapters = False if enabled else True
229 |
230 | def enable_adapter_layers(self):
231 | self._set_adapter_layers(enabled=True)
232 |
233 | def disable_adapter_layers(self):
234 | self._set_adapter_layers(enabled=False)
235 |
236 |
237 | # Below code is based on https://github.com/microsoft/LoRA/blob/main/loralib/layers.py
238 | # and modified to work with PyTorch FSDP
239 |
240 |
241 | # ------------------------------------------------------------------------------------------
242 | # Copyright (c) Microsoft Corporation. All rights reserved.
243 | # Licensed under the MIT License (MIT). See LICENSE in the repo root for license information.
244 | # ------------------------------------------------------------------------------------------
245 |
246 |
247 | # had to adapt it for `lora_only` to work
248 | def mark_only_lora_as_trainable(model: nn.Module, bias: str = "none") -> None:
249 | for n, p in model.named_parameters():
250 | if "lora_" not in n:
251 | p.requires_grad = False
252 | if bias == "none":
253 | return
254 | elif bias == "all":
255 | for n, p in model.named_parameters():
256 | if "bias" in n:
257 | p.requires_grad = True
258 | elif bias == "lora_only":
259 | for m in model.modules():
260 | if isinstance(m, LoraLayer) and hasattr(m, "bias") and m.bias is not None:
261 | m.bias.requires_grad = True
262 | else:
263 | raise NotImplementedError
264 |
265 |
266 | class LoraLayer:
267 | def __init__(
268 | self,
269 | r: int,
270 | lora_alpha: int,
271 | lora_dropout: float,
272 | merge_weights: bool,
273 | ):
274 | self.r = r
275 | self.lora_alpha = lora_alpha
276 | # Optional dropout
277 | if lora_dropout > 0.0:
278 | self.lora_dropout = nn.Dropout(p=lora_dropout)
279 | else:
280 | self.lora_dropout = lambda x: x
281 | # Mark the weight as unmerged
282 | self.merged = False
283 | self.merge_weights = merge_weights
284 | self.disable_adapters = False
285 |
286 |
287 | class Linear(nn.Linear, LoraLayer):
288 | # Lora implemented in a dense layer
289 | def __init__(
290 | self,
291 | in_features: int,
292 | out_features: int,
293 | r: int = 0,
294 | lora_alpha: int = 1,
295 | lora_dropout: float = 0.0,
296 | fan_in_fan_out: bool = False, # Set this to True if the layer to replace stores weight like (fan_in, fan_out)
297 | merge_weights: bool = True,
298 | **kwargs,
299 | ):
300 | nn.Linear.__init__(self, in_features, out_features, **kwargs)
301 | LoraLayer.__init__(self, r=r, lora_alpha=lora_alpha, lora_dropout=lora_dropout, merge_weights=merge_weights)
302 |
303 | self.fan_in_fan_out = fan_in_fan_out
304 | # Actual trainable parameters
305 | if r > 0:
306 | self.lora_A = nn.Linear(in_features, r, bias=False)
307 | self.lora_B = nn.Linear(r, out_features, bias=False)
308 | self.scaling = self.lora_alpha / self.r
309 | # Freezing the pre-trained weight matrix
310 | self.weight.requires_grad = False
311 | self.reset_parameters()
312 | if fan_in_fan_out:
313 | self.weight.data = self.weight.data.T
314 |
315 | def reset_parameters(self):
316 | nn.Linear.reset_parameters(self)
317 | if hasattr(self, "lora_A"):
318 | # initialize A the same way as the default for nn.Linear and B to zero
319 | nn.init.kaiming_uniform_(self.lora_A.weight, a=math.sqrt(5))
320 | nn.init.zeros_(self.lora_B.weight)
321 |
322 | def train(self, mode: bool = True):
323 | nn.Linear.train(self, mode)
324 | self.lora_A.train(mode)
325 | self.lora_B.train(mode)
326 | if not mode and self.merge_weights and not self.merged:
327 | # Merge the weights and mark it
328 | if self.r > 0:
329 | self.weight.data += (
330 | transpose(self.lora_B.weight @ self.lora_A.weight, self.fan_in_fan_out) * self.scaling
331 | )
332 | self.merged = True
333 | elif self.merge_weights and self.merged:
334 | # Make sure that the weights are not merged
335 | if self.r > 0:
336 | self.weight.data -= (
337 | transpose(self.lora_B.weight @ self.lora_A.weight, self.fan_in_fan_out) * self.scaling
338 | )
339 | self.merged = False
340 |
341 | def eval(self):
342 | nn.Linear.eval(self)
343 | self.lora_A.eval()
344 | self.lora_B.eval()
345 |
346 | def forward(self, x: torch.Tensor):
347 | previous_dtype = self.weight.dtype
348 |
349 | if self.disable_adapters:
350 | if self.r > 0 and self.merged:
351 | matmul_output = self.lora_B.weight @ self.lora_A.weight
352 | self.weight.data -= transpose(matmul_output.to(previous_dtype), self.fan_in_fan_out) * self.scaling
353 | self.merged = False
354 |
355 | result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)
356 | elif self.r > 0 and not self.merged:
357 | result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)
358 | if self.r > 0:
359 | result += self.lora_B(self.lora_A(self.lora_dropout(x.to(self.lora_A.weight.dtype)))) * self.scaling
360 | else:
361 | result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)
362 |
363 | if result.dtype != previous_dtype:
364 | result = result.to(previous_dtype)
365 |
366 | return result
367 |
368 | class MergedLinear(nn.Linear, LoraLayer):
369 | # Lora implemented in a dense layer
370 | def __init__(
371 | self,
372 | in_features: int,
373 | out_features: int,
374 | r: int = 0,
375 | lora_alpha: int = 1,
376 | lora_dropout: float = 0.0,
377 | enable_lora: List[bool] = [False],
378 | fan_in_fan_out: bool = False,
379 | merge_weights: bool = True,
380 | **kwargs,
381 | ):
382 | nn.Linear.__init__(self, in_features, out_features, **kwargs)
383 | LoraLayer.__init__(self, r=r, lora_alpha=lora_alpha, lora_dropout=lora_dropout, merge_weights=merge_weights)
384 | if out_features % len(enable_lora) != 0:
385 | raise ValueError("The length of enable_lora must divide out_features")
386 | self.enable_lora = enable_lora
387 | self.fan_in_fan_out = fan_in_fan_out
388 | # Actual trainable parameters
389 | if r > 0 and any(enable_lora):
390 | self.lora_A = nn.Linear(in_features, r * sum(enable_lora), bias=False)
391 | self.lora_B = nn.Conv1d(
392 | r * sum(enable_lora),
393 | out_features // len(enable_lora) * sum(enable_lora),
394 | kernel_size=1,
395 | groups=2,
396 | bias=False,
397 | )
398 | self.scaling = self.lora_alpha / self.r
399 | # Freezing the pre-trained weight matrix
400 | self.weight.requires_grad = False
401 | # Compute the indices
402 | self.lora_ind = self.weight.new_zeros((out_features,), dtype=torch.bool).view(len(enable_lora), -1)
403 | self.lora_ind[enable_lora, :] = True
404 | self.lora_ind = self.lora_ind.view(-1)
405 | self.reset_parameters()
406 | if fan_in_fan_out:
407 | self.weight.data = self.weight.data.T
408 |
409 | def reset_parameters(self):
410 | nn.Linear.reset_parameters(self)
411 | if hasattr(self, "lora_A"):
412 | # initialize A the same way as the default for nn.Linear and B to zero
413 | nn.init.kaiming_uniform_(self.lora_A.weight, a=math.sqrt(5))
414 | nn.init.zeros_(self.lora_B.weight)
415 |
416 | def zero_pad(self, x):
417 | result = x.new_zeros((*x.shape[:-1], self.out_features))
418 | result = result.view(-1, self.out_features)
419 | result[:, self.lora_ind] = x.reshape(-1, self.out_features // len(self.enable_lora) * sum(self.enable_lora))
420 | return result.view((*x.shape[:-1], self.out_features))
421 |
422 | def train(self, mode: bool = True):
423 | nn.Linear.train(self, mode)
424 | self.lora_A.train(mode)
425 | self.lora_B.train(mode)
426 | if not mode and self.merge_weights and not self.merged:
427 | # Merge the weights and mark it
428 | if self.r > 0 and any(self.enable_lora):
429 | delta_w = (
430 | F.conv1d(
431 | self.lora_A.weight.data.unsqueeze(0),
432 | self.lora_B.weight.data,
433 | groups=sum(self.enable_lora),
434 | )
435 | .squeeze(0)
436 | .transpose(-2, -1)
437 | )
438 | self.weight.data += transpose(self.zero_pad(delta_w * self.scaling), not self.fan_in_fan_out)
439 | self.merged = True
440 | elif self.merge_weights and self.merged:
441 | # Make sure that the weights are not merged
442 | if self.r > 0 and any(self.enable_lora):
443 | delta_w = (
444 | F.conv1d(
445 | self.lora_A.weight.data.unsqueeze(0),
446 | self.lora_B.weight.data,
447 | groups=sum(self.enable_lora),
448 | )
449 | .squeeze(0)
450 | .transpose(-2, -1)
451 | )
452 | self.weight.data -= transpose(self.zero_pad(delta_w * self.scaling), not self.fan_in_fan_out)
453 | self.merged = False
454 |
455 | def eval(self):
456 | nn.Linear.eval(self)
457 | self.lora_A.eval()
458 | self.lora_B.eval()
459 |
460 | def forward(self, x: torch.Tensor):
461 | previous_dtype = x.dtype
462 | if self.disable_adapters:
463 | if self.r > 0 and self.merged and any(self.enable_lora):
464 | delta_w = (
465 | F.conv1d(
466 | self.lora_A.weight.data.unsqueeze(0),
467 | self.lora_B.weight.data,
468 | groups=sum(self.enable_lora),
469 | )
470 | .squeeze(0)
471 | .transpose(-2, -1)
472 | )
473 | delta_w = delta_w.to(self.weight.dtype)
474 | self.weight.data -= transpose(self.zero_pad(delta_w * self.scaling), not self.fan_in_fan_out)
475 | self.merged = False
476 | result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)
477 | elif self.merged:
478 | result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)
479 | else:
480 | result = F.linear(x, transpose(self.weight, self.fan_in_fan_out), bias=self.bias)
481 | if self.r > 0:
482 | after_A = self.lora_A(self.lora_dropout(x.to(self.lora_A.weight.dtype)))
483 | after_B = self.lora_B(after_A.transpose(-2, -1)).transpose(-2, -1)
484 | result += self.zero_pad(after_B) * self.scaling
485 | result = result.to(previous_dtype)
486 |
487 | return result
488 |
489 |
490 | if is_bnb_available():
491 |
492 | class Linear8bitLt(bnb.nn.Linear8bitLt, LoraLayer):
493 | # Lora implemented in a dense layer
494 | def __init__(
495 | self,
496 | in_features,
497 | out_features,
498 | r: int = 0,
499 | lora_alpha: int = 1,
500 | lora_dropout: float = 0.0,
501 | **kwargs,
502 | ):
503 | bnb.nn.Linear8bitLt.__init__(
504 | self,
505 | in_features,
506 | out_features,
507 | bias=kwargs.get("bias", True),
508 | has_fp16_weights=kwargs.get("has_fp16_weights", True),
509 | memory_efficient_backward=kwargs.get("memory_efficient_backward", False),
510 | threshold=kwargs.get("threshold", 0.0),
511 | index=kwargs.get("index", None),
512 | )
513 | LoraLayer.__init__(self, r=r, lora_alpha=lora_alpha, lora_dropout=lora_dropout, merge_weights=False)
514 | # Actual trainable parameters
515 | if r > 0:
516 | self.lora_A = nn.Linear(in_features, r, bias=False)
517 | self.lora_B = nn.Linear(r, out_features, bias=False)
518 | self.scaling = self.lora_alpha / self.r
519 | # Freezing the pre-trained weight matrix
520 | self.weight.requires_grad = False
521 | self.reset_parameters()
522 |
523 | def reset_parameters(self):
524 | if hasattr(self, "lora_A"):
525 | # initialize A the same way as the default for nn.Linear and B to zero
526 | nn.init.kaiming_uniform_(self.lora_A.weight, a=math.sqrt(5))
527 | nn.init.zeros_(self.lora_B.weight)
528 |
529 | def forward(self, x: torch.Tensor):
530 | result = super().forward(x)
531 |
532 | if self.disable_adapters:
533 | return result
534 | elif self.r > 0:
535 | if not torch.is_autocast_enabled():
536 | expected_dtype = result.dtype
537 |
538 | if x.dtype != torch.float32:
539 | x = x.float()
540 | output = self.lora_B(self.lora_A(self.lora_dropout(x))).to(expected_dtype) * self.scaling
541 | result += output
542 | else:
543 | output = self.lora_B(self.lora_A(self.lora_dropout(x))) * self.scaling
544 | result += output
545 | return result
546 |
547 | class MergedLinear8bitLt(bnb.nn.Linear8bitLt, LoraLayer):
548 | # Lora implemented in a dense layer
549 | def __init__(
550 | self,
551 | in_features: int,
552 | out_features: int,
553 | r: int = 0,
554 | lora_alpha: int = 1,
555 | lora_dropout: float = 0.0,
556 | enable_lora: List[bool] = [False],
557 | **kwargs,
558 | ):
559 | bnb.nn.Linear8bitLt.__init__(
560 | self,
561 | in_features,
562 | out_features,
563 | bias=kwargs.get("bias", True),
564 | has_fp16_weights=kwargs.get("has_fp16_weights", True),
565 | memory_efficient_backward=kwargs.get("memory_efficient_backward", False),
566 | threshold=kwargs.get("threshold", 0.0),
567 | index=kwargs.get("index", None),
568 | )
569 | LoraLayer.__init__(self, r=r, lora_alpha=lora_alpha, lora_dropout=lora_dropout, merge_weights=False)
570 | if out_features % len(enable_lora) != 0:
571 | raise ValueError("The length of enable_lora must divide out_features")
572 | self.enable_lora = enable_lora
573 | # Actual trainable parameters
574 | if r > 0 and any(enable_lora):
575 | self.lora_A = nn.Linear(in_features, r * sum(enable_lora), bias=False)
576 | self.lora_B = nn.Conv1d(
577 | r * sum(enable_lora),
578 | out_features // len(enable_lora) * sum(enable_lora),
579 | kernel_size=1,
580 | groups=2,
581 | bias=False,
582 | )
583 | self.scaling = self.lora_alpha / self.r
584 | # Freezing the pre-trained weight matrix
585 | self.weight.requires_grad = False
586 | # Compute the indices
587 | self.lora_ind = self.weight.new_zeros((out_features,), dtype=torch.bool).view(len(enable_lora), -1)
588 | self.lora_ind[enable_lora, :] = True
589 | self.lora_ind = self.lora_ind.view(-1)
590 | self.reset_parameters()
591 |
592 | def reset_parameters(self):
593 | if hasattr(self, "lora_A"):
594 | # initialize A the same way as the default for nn.Linear and B to zero
595 | nn.init.kaiming_uniform_(self.lora_A.weight, a=math.sqrt(5))
596 | nn.init.zeros_(self.lora_B.weight)
597 |
598 | def zero_pad(self, x):
599 | result = x.new_zeros((*x.shape[:-1], self.out_features))
600 | result = result.view(-1, self.out_features)
601 | result[:, self.lora_ind] = x.reshape(
602 | -1, self.out_features // len(self.enable_lora) * sum(self.enable_lora)
603 | )
604 | return result.view((*x.shape[:-1], self.out_features))
605 |
606 | def forward(self, x: torch.Tensor):
607 | result = super().forward(x)
608 | if self.disable_adapters:
609 | return result
610 | elif self.r > 0:
611 | if not torch.is_autocast_enabled():
612 | expected_dtype = result.dtype
613 | if x.dtype != torch.float32:
614 | x = x.float()
615 | after_A = self.lora_A(self.lora_dropout(x))
616 | after_B = self.lora_B(after_A.transpose(-2, -1)).transpose(-2, -1)
617 | output = self.zero_pad(after_B).to(expected_dtype) * self.scaling
618 | result += output
619 | else:
620 | after_A = self.lora_A(self.lora_dropout(x))
621 | after_B = self.lora_B(after_A.transpose(-2, -1)).transpose(-2, -1)
622 | output = self.zero_pad(after_B) * self.scaling
623 | result += output
624 | return result
625 |
--------------------------------------------------------------------------------
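A small sanity-check sketch of the merge behaviour implemented by `Linear.train(False)` above: once the LoRA delta is folded into the frozen weight, a single `F.linear` on the merged weight reproduces the unmerged forward. This is only an illustration; it assumes the vendored package and its requirements have been installed (e.g. `pip install -e src/peft`), and the layer sizes are made up:

    import torch
    from peft.tuners.lora import Linear as LoraLinear

    layer = LoraLinear(in_features=8, out_features=8, r=4, lora_alpha=8)  # scaling = lora_alpha / r = 2.0
    layer.lora_B.weight.data.normal_()       # lora_B starts at zero; make the LoRA delta non-trivial

    x = torch.randn(2, 8)
    y_unmerged = layer(x)                    # base F.linear plus lora_B(lora_A(x)) * scaling

    layer.eval()                             # triggers train(False): weight += (B @ A) * scaling, merged=True
    y_merged = layer(x)                      # now a single F.linear on the merged weight

    print(torch.allclose(y_unmerged, y_merged, atol=1e-5))  # True, up to floating point error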
/src/peft/src/peft/tuners/p_tuning.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2023-present the HuggingFace Inc. team.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | import enum
17 | import warnings
18 | from dataclasses import dataclass, field
19 | from typing import Union
20 |
21 | import torch
22 |
23 | from ..utils import PeftType, PromptLearningConfig
24 |
25 |
26 | class PromptEncoderReparameterizationType(str, enum.Enum):
27 | MLP = "MLP"
28 | LSTM = "LSTM"
29 |
30 |
31 | @dataclass
32 | class PromptEncoderConfig(PromptLearningConfig):
33 | """
34 | This is the configuration class to store the configuration of a [`~peft.PromptEncoder`].
35 |
36 | Args:
37 | encoder_reparameterization_type
38 | (Union[[`PromptEncoderReparameterizationType`], `str`]): The type of reparameterization to use.
39 | encoder_hidden_size (`int`): The hidden size of the prompt encoder.
40 | encoder_num_layers (`int`): The number of layers of the prompt encoder.
41 | encoder_dropout (`float`): The dropout probability of the prompt encoder.
42 | """
43 |
44 | encoder_reparameterization_type: Union[str, PromptEncoderReparameterizationType] = field(
45 | default=PromptEncoderReparameterizationType.MLP,
46 | metadata={"help": "How to reparameterize the prompt encoder"},
47 | )
48 | encoder_hidden_size: int = field(
49 | default=None,
50 | metadata={"help": "The hidden size of the prompt encoder"},
51 | )
52 | encoder_num_layers: int = field(
53 | default=2,
54 | metadata={"help": "The number of layers of the prompt encoder"},
55 | )
56 | encoder_dropout: float = field(
57 | default=0.0,
58 | metadata={"help": "The dropout of the prompt encoder"},
59 | )
60 |
61 | def __post_init__(self):
62 | self.peft_type = PeftType.P_TUNING
63 |
64 |
65 | # Based on https://github.com/NVIDIA/NeMo/blob/main/nemo/collections/nlp/modules/common/prompt_encoder.py
66 | # with some refactor
67 | class PromptEncoder(torch.nn.Module):
68 | """
69 | The prompt encoder network that is used to generate the virtual token embeddings for p-tuning.
70 |
71 | Args:
72 | config ([`PromptEncoderConfig`]): The configuration of the prompt encoder.
73 |
74 | Example::
75 |
76 |         >>> from peft import PromptEncoder, PromptEncoderConfig
77 |         >>> config = PromptEncoderConfig(
78 |         ...     peft_type="P_TUNING", task_type="SEQ_2_SEQ_LM", num_virtual_tokens=20, token_dim=768,
79 |         ...     num_transformer_submodules=1, num_attention_heads=12, num_layers=12,
80 |         ...     encoder_reparameterization_type="MLP", encoder_hidden_size=768)
81 |         >>> prompt_encoder = PromptEncoder(config)
82 |
83 | **Attributes**:
84 | - **embedding** ([`~torch.nn.Embedding`]) -- The embedding layer of the prompt encoder.
85 | - **mlp_head** ([`~torch.nn.Sequential`]) -- The MLP head of the prompt encoder if `inference_mode=False`.
86 | - **lstm_head** ([`~torch.nn.LSTM`]) -- The LSTM head of the prompt encoder if `inference_mode=False` and
87 | `encoder_reparameterization_type="LSTM"`.
88 | - **token_dim** (`int`) -- The hidden embedding dimension of the base transformer model.
89 | - **input_size** (`int`) -- The input size of the prompt encoder.
90 | - **output_size** (`int`) -- The output size of the prompt encoder.
91 | - **hidden_size** (`int`) -- The hidden size of the prompt encoder.
92 | - **total_virtual_tokens** (`int`): The total number of virtual tokens of the
93 | prompt encoder.
94 | - **encoder_type** (Union[[`PromptEncoderReparameterizationType`], `str`]):
95 | The encoder type of the prompt encoder.
96 |
97 |
98 | Input shape: (batch_size, total_virtual_tokens)
99 |
100 | Output shape: (batch_size, total_virtual_tokens, token_dim)
101 | """
102 |
103 | def __init__(self, config):
104 | super().__init__()
105 | self.token_dim = config.token_dim
106 | self.input_size = self.token_dim
107 | self.output_size = self.token_dim
108 | self.hidden_size = config.encoder_hidden_size
109 | self.total_virtual_tokens = config.num_virtual_tokens * config.num_transformer_submodules
110 | self.encoder_type = config.encoder_reparameterization_type
111 |
112 | # embedding
113 | self.embedding = torch.nn.Embedding(self.total_virtual_tokens, self.token_dim)
114 | if not config.inference_mode:
115 | if self.encoder_type == PromptEncoderReparameterizationType.LSTM:
116 | lstm_dropout = config.encoder_dropout
117 | num_layers = config.encoder_num_layers
118 | # LSTM
119 | self.lstm_head = torch.nn.LSTM(
120 | input_size=self.input_size,
121 | hidden_size=self.hidden_size,
122 | num_layers=num_layers,
123 | dropout=lstm_dropout,
124 | bidirectional=True,
125 | batch_first=True,
126 | )
127 |
128 | self.mlp_head = torch.nn.Sequential(
129 | torch.nn.Linear(self.hidden_size * 2, self.hidden_size * 2),
130 | torch.nn.ReLU(),
131 | torch.nn.Linear(self.hidden_size * 2, self.output_size),
132 | )
133 |
134 | elif self.encoder_type == PromptEncoderReparameterizationType.MLP:
135 | warnings.warn(
136 | f"for {self.encoder_type}, the `encoder_num_layers` is ignored. Exactly 2 MLP layers are used."
137 | )
138 | layers = [
139 | torch.nn.Linear(self.input_size, self.hidden_size),
140 | torch.nn.ReLU(),
141 | torch.nn.Linear(self.hidden_size, self.hidden_size),
142 | torch.nn.ReLU(),
143 | torch.nn.Linear(self.hidden_size, self.output_size),
144 | ]
145 | self.mlp_head = torch.nn.Sequential(*layers)
146 |
147 | else:
148 | raise ValueError("Prompt encoder type not recognized. Please use one of MLP (recommended) or LSTM.")
149 |
150 | def forward(self, indices):
151 | input_embeds = self.embedding(indices)
152 | if self.encoder_type == PromptEncoderReparameterizationType.LSTM:
153 | output_embeds = self.mlp_head(self.lstm_head(input_embeds)[0])
154 | elif self.encoder_type == PromptEncoderReparameterizationType.MLP:
155 | output_embeds = self.mlp_head(input_embeds)
156 | else:
157 | raise ValueError("Prompt encoder type not recognized. Please use one of MLP (recommended) or LSTM.")
158 |
159 | return output_embeds
160 |
--------------------------------------------------------------------------------
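A quick shape check for the MLP-reparameterized prompt encoder above. The values are illustrative, not taken from this repository's training setup, and the snippet assumes the vendored package is installed:

    import torch
    from peft.tuners.p_tuning import PromptEncoder, PromptEncoderConfig

    config = PromptEncoderConfig(
        task_type="SEQ_2_SEQ_LM", num_virtual_tokens=20, token_dim=768,
        num_transformer_submodules=1, encoder_reparameterization_type="MLP", encoder_hidden_size=256,
    )
    encoder = PromptEncoder(config)

    # Input: one index per virtual token; output: one embedding of size token_dim per virtual token.
    indices = torch.arange(config.num_virtual_tokens * config.num_transformer_submodules).unsqueeze(0)
    print(encoder(indices).shape)  # torch.Size([1, 20, 768])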
/src/peft/src/peft/tuners/prefix_tuning.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2023-present the HuggingFace Inc. team.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 |
17 | from dataclasses import dataclass, field
18 |
19 | import torch
20 |
21 | from ..utils import PeftType, PromptLearningConfig
22 |
23 |
24 | @dataclass
25 | class PrefixTuningConfig(PromptLearningConfig):
26 | """
27 | This is the configuration class to store the configuration of a [`~peft.PrefixEncoder`].
28 |
29 | Args:
30 | encoder_hidden_size (`int`): The hidden size of the prompt encoder.
31 | prefix_projection (`bool`): Whether to project the prefix embeddings.
32 | """
33 |
34 | encoder_hidden_size: int = field(
35 | default=None,
36 | metadata={"help": "The hidden size of the encoder"},
37 | )
38 | prefix_projection: bool = field(
39 | default=False,
40 | metadata={"help": "Whether to project the prefix tokens"},
41 | )
42 |
43 | def __post_init__(self):
44 | self.peft_type = PeftType.PREFIX_TUNING
45 |
46 |
47 | # Based on https://github.com/THUDM/P-tuning-v2/blob/main/model/prefix_encoder.py
48 | # with some refactor
49 | class PrefixEncoder(torch.nn.Module):
50 | r"""
51 | The torch.nn model to encode the prefix
52 |
53 | Args:
54 | config ([`PrefixTuningConfig`]): The configuration of the prefix encoder.
55 |
56 | Example::
57 |
58 |         >>> from peft import PrefixEncoder, PrefixTuningConfig
59 |         >>> config = PrefixTuningConfig(
60 |         ...     peft_type="PREFIX_TUNING", task_type="SEQ_2_SEQ_LM", num_virtual_tokens=20, token_dim=768,
61 |         ...     num_transformer_submodules=1, num_attention_heads=12, num_layers=12, encoder_hidden_size=768)
62 |         >>> prefix_encoder = PrefixEncoder(config)
63 |
64 |
65 | **Attributes**:
66 | - **embedding** (`torch.nn.Embedding`) --
67 | The embedding layer of the prefix encoder.
68 | - **transform** (`torch.nn.Sequential`) -- The
69 | two-layer MLP to transform the prefix embeddings if `prefix_projection` is `True`.
70 | - **prefix_projection** (`bool`) -- Whether to project the prefix embeddings.
71 |
72 | Input shape: (batch_size, num_virtual_tokens)
73 |
74 | Output shape: (batch_size, num_virtual_tokens, 2*layers*hidden)
75 | """
76 |
77 | def __init__(self, config):
78 | super().__init__()
79 | self.prefix_projection = config.prefix_projection
80 | token_dim = config.token_dim
81 | num_layers = config.num_layers
82 | encoder_hidden_size = config.encoder_hidden_size
83 | num_virtual_tokens = config.num_virtual_tokens
84 | if self.prefix_projection and not config.inference_mode:
85 | # Use a two-layer MLP to encode the prefix
86 | self.embedding = torch.nn.Embedding(num_virtual_tokens, token_dim)
87 | self.transform = torch.nn.Sequential(
88 | torch.nn.Linear(token_dim, encoder_hidden_size),
89 | torch.nn.Tanh(),
90 | torch.nn.Linear(encoder_hidden_size, num_layers * 2 * token_dim),
91 | )
92 | else:
93 | self.embedding = torch.nn.Embedding(num_virtual_tokens, num_layers * 2 * token_dim)
94 |
95 | def forward(self, prefix: torch.Tensor):
96 | if self.prefix_projection:
97 | prefix_tokens = self.embedding(prefix)
98 | past_key_values = self.transform(prefix_tokens)
99 | else:
100 | past_key_values = self.embedding(prefix)
101 | return past_key_values
102 |
--------------------------------------------------------------------------------
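The prefix encoder emits past key/values for every layer, so its last dimension is 2 * num_layers * token_dim. A short shape sketch with illustrative values (assumes the vendored package is installed):

    import torch
    from peft.tuners.prefix_tuning import PrefixEncoder, PrefixTuningConfig

    config = PrefixTuningConfig(
        task_type="CAUSAL_LM", num_virtual_tokens=20, token_dim=768, num_layers=12,
        prefix_projection=True, encoder_hidden_size=256,
    )
    encoder = PrefixEncoder(config)

    prefix = torch.arange(config.num_virtual_tokens).unsqueeze(0)
    print(encoder(prefix).shape)  # torch.Size([1, 20, 18432]) == (batch, 20, 2 * 12 * 768)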
/src/peft/src/peft/tuners/prompt_tuning.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2023-present the HuggingFace Inc. team.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | import enum
17 | import math
18 | from dataclasses import dataclass, field
19 | from typing import Optional, Union
20 |
21 | import torch
22 |
23 | from ..utils import PeftType, PromptLearningConfig
24 |
25 |
26 | class PromptTuningInit(str, enum.Enum):
27 | TEXT = "TEXT"
28 | RANDOM = "RANDOM"
29 |
30 |
31 | @dataclass
32 | class PromptTuningConfig(PromptLearningConfig):
33 | """
34 | This is the configuration class to store the configuration of a [`~peft.PromptEmbedding`].
35 |
36 | Args:
37 | prompt_tuning_init (Union[[`PromptTuningInit`], `str`]): The initialization of the prompt embedding.
38 | prompt_tuning_init_text ( Optional[`str`]): The text to initialize the prompt embedding.
39 | Only used if `prompt_tuning_init` is `TEXT`
40 | tokenizer_name_or_path ( Optional[`str`]): The name or path of the tokenizer.
41 | Only used if `prompt_tuning_init` is `TEXT`
42 | """
43 |
44 | prompt_tuning_init: Union[PromptTuningInit, str] = field(
45 | default=PromptTuningInit.RANDOM,
46 | metadata={"help": "How to initialize the prompt tuning parameters"},
47 | )
48 | prompt_tuning_init_text: Optional[str] = field(
49 | default=None,
50 | metadata={
51 | "help": "The text to use for prompt tuning initialization. Only used if prompt_tuning_init is `TEXT`"
52 | },
53 | )
54 | tokenizer_name_or_path: Optional[str] = field(
55 | default=None,
56 | metadata={
57 | "help": "The tokenizer to use for prompt tuning initialization. Only used if prompt_tuning_init is `TEXT`"
58 | },
59 | )
60 |
61 | def __post_init__(self):
62 | self.peft_type = PeftType.PROMPT_TUNING
63 |
64 |
65 | class PromptEmbedding(torch.nn.Module):
66 | """
67 | The model to encode virtual tokens into prompt embeddings.
68 |
69 | Args:
70 | config ([`PromptTuningConfig`]): The configuration of the prompt embedding.
71 | word_embeddings (`torch.nn.Module`): The word embeddings of the base transformer model.
72 |
73 | **Attributes**:
74 | **embedding** (`torch.nn.Embedding`) -- The embedding layer of the prompt embedding.
75 |
76 | Example::
77 |
78 |         >>> from peft import PromptEmbedding, PromptTuningConfig
79 |         >>> config = PromptTuningConfig(
80 |         ...     peft_type="PROMPT_TUNING", task_type="SEQ_2_SEQ_LM", num_virtual_tokens=20, token_dim=768,
81 |         ...     num_transformer_submodules=1, num_attention_heads=12, num_layers=12, prompt_tuning_init="TEXT",
82 |         ...     prompt_tuning_init_text="Predict if sentiment of this review is positive, negative or neutral",
83 |         ...     tokenizer_name_or_path="t5-base")
84 |         >>> # t5_model.shared is the word embeddings of the base model
85 |         >>> prompt_embedding = PromptEmbedding(config, t5_model.shared)
86 |
87 |
88 | Input Shape: (batch_size, total_virtual_tokens)
89 |
90 | Output Shape: (batch_size, total_virtual_tokens, token_dim)
91 | """
92 |
93 | def __init__(self, config, word_embeddings):
94 | super().__init__()
95 |
96 | total_virtual_tokens = config.num_virtual_tokens * config.num_transformer_submodules
97 | self.embedding = torch.nn.Embedding(total_virtual_tokens, config.token_dim)
98 | if config.prompt_tuning_init == PromptTuningInit.TEXT:
99 | from transformers import AutoTokenizer
100 |
101 | tokenizer = AutoTokenizer.from_pretrained(config.tokenizer_name_or_path)
102 | init_text = config.prompt_tuning_init_text
103 | init_token_ids = tokenizer(init_text)["input_ids"]
104 | # Trim or iterate until num_text_tokens matches total_virtual_tokens
105 | num_text_tokens = len(init_token_ids)
106 | if num_text_tokens > total_virtual_tokens:
107 | init_token_ids = init_token_ids[:total_virtual_tokens]
108 | elif num_text_tokens < total_virtual_tokens:
109 | num_reps = math.ceil(total_virtual_tokens / num_text_tokens)
110 | init_token_ids = init_token_ids * num_reps
111 | init_token_ids = init_token_ids[:total_virtual_tokens]
112 |
113 | word_embedding_weights = word_embeddings(torch.LongTensor(init_token_ids)).detach().clone()
114 | word_embedding_weights = word_embedding_weights.to(torch.float32)
115 | self.embedding.weight = torch.nn.Parameter(word_embedding_weights)
116 |
117 | def forward(self, indices):
118 | # Just get embeddings
119 | prompt_embeddings = self.embedding(indices)
120 | return prompt_embeddings
121 |
--------------------------------------------------------------------------------
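With the default `prompt_tuning_init="RANDOM"`, the module above reduces to a learnable embedding table of shape (num_virtual_tokens * num_transformer_submodules, token_dim); the `word_embeddings` argument is only consulted for TEXT init. A minimal sketch with made-up sizes (vendored package installed):

    import torch
    from peft.tuners.prompt_tuning import PromptEmbedding, PromptTuningConfig

    config = PromptTuningConfig(
        task_type="CAUSAL_LM", num_virtual_tokens=20, token_dim=768, num_transformer_submodules=1,
    )  # prompt_tuning_init defaults to RANDOM, so no tokenizer is needed
    prompt_embedding = PromptEmbedding(config, word_embeddings=None)  # word_embeddings is unused for RANDOM init

    indices = torch.arange(20).unsqueeze(0)
    print(prompt_embedding(indices).shape)            # torch.Size([1, 20, 768])
    print(prompt_embedding.embedding.weight.shape)    # torch.Size([20, 768])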
/src/peft/src/peft/utils/__init__.py:
--------------------------------------------------------------------------------
1 | # flake8: noqa
2 | # There's no way to ignore "F401 '...' imported but unused" warnings in this
3 | # module, but to preserve other warnings. So, don't check this module at all
4 |
5 | # coding=utf-8
6 | # Copyright 2023-present the HuggingFace Inc. team.
7 | #
8 | # Licensed under the Apache License, Version 2.0 (the "License");
9 | # you may not use this file except in compliance with the License.
10 | # You may obtain a copy of the License at
11 | #
12 | # http://www.apache.org/licenses/LICENSE-2.0
13 | #
14 | # Unless required by applicable law or agreed to in writing, software
15 | # distributed under the License is distributed on an "AS IS" BASIS,
16 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
17 | # See the License for the specific language governing permissions and
18 | # limitations under the License.
19 |
20 | from .adapters_utils import CONFIG_NAME, WEIGHTS_NAME
21 | from .config import PeftConfig, PeftType, PromptLearningConfig, TaskType
22 | from .other import (
23 | TRANSFORMERS_MODELS_TO_PREFIX_TUNING_POSTPROCESS_MAPPING,
24 | _set_trainable,
25 | bloom_model_postprocess_past_key_value,
26 | prepare_model_for_int8_training,
27 | shift_tokens_right,
28 | transpose,
29 | )
30 | from .save_and_load import get_peft_model_state_dict, set_peft_model_state_dict
31 |
--------------------------------------------------------------------------------
/src/peft/src/peft/utils/__pycache__/__init__.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/utils/__pycache__/__init__.cpython-38.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/utils/__pycache__/__init__.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/utils/__pycache__/__init__.cpython-39.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/utils/__pycache__/adapters_utils.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/utils/__pycache__/adapters_utils.cpython-38.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/utils/__pycache__/adapters_utils.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/utils/__pycache__/adapters_utils.cpython-39.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/utils/__pycache__/config.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/utils/__pycache__/config.cpython-38.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/utils/__pycache__/config.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/utils/__pycache__/config.cpython-39.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/utils/__pycache__/other.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/utils/__pycache__/other.cpython-38.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/utils/__pycache__/other.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/utils/__pycache__/other.cpython-39.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/utils/__pycache__/save_and_load.cpython-38.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/utils/__pycache__/save_and_load.cpython-38.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/utils/__pycache__/save_and_load.cpython-39.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/src/peft/utils/__pycache__/save_and_load.cpython-39.pyc
--------------------------------------------------------------------------------
/src/peft/src/peft/utils/adapters_utils.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2023-present the HuggingFace Inc. team.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | WEIGHTS_NAME = "adapter_model.bin"
16 | CONFIG_NAME = "adapter_config.json"
17 |
18 | # TODO: add automapping and superclass here?
19 |
--------------------------------------------------------------------------------
/src/peft/src/peft/utils/config.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2023-present the HuggingFace Inc. team.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | import enum
16 | import json
17 | import os
18 | from dataclasses import asdict, dataclass, field
19 | from typing import Optional, Union
20 |
21 | from huggingface_hub import hf_hub_download
22 | from transformers.utils import PushToHubMixin
23 |
24 | from .adapters_utils import CONFIG_NAME
25 |
26 |
27 | class PeftType(str, enum.Enum):
28 | PROMPT_TUNING = "PROMPT_TUNING"
29 | P_TUNING = "P_TUNING"
30 | PREFIX_TUNING = "PREFIX_TUNING"
31 | LORA = "LORA"
32 | BOTTLENECK = "BOTTLENECK"
33 |
34 |
35 |
36 | class TaskType(str, enum.Enum):
37 | SEQ_CLS = "SEQ_CLS"
38 | SEQ_2_SEQ_LM = "SEQ_2_SEQ_LM"
39 | CAUSAL_LM = "CAUSAL_LM"
40 | TOKEN_CLS = "TOKEN_CLS"
41 |
42 |
43 | @dataclass
44 | class PeftConfigMixin(PushToHubMixin):
45 | r"""
46 | This is the base configuration class for PEFT adapter models. It contains all the methods that are common to all
47 | PEFT adapter models. This class inherits from `transformers.utils.PushToHubMixin` which contains the methods to
48 | push your model to the Hub. The method `save_pretrained` will save the configuration of your adapter model in a
49 | directory. The method `from_pretrained` will load the configuration of your adapter model from a directory.
50 |
51 | Args:
52 | peft_type (Union[[`~peft.utils.config.PeftType`], `str`]): The type of Peft method to use.
53 | """
54 | peft_type: Optional[PeftType] = field(default=None, metadata={"help": "The type of PEFT model."})
55 |
56 | @property
57 | def __dict__(self):
58 | return asdict(self)
59 |
60 | def to_dict(self):
61 | return self.__dict__
62 |
63 | def save_pretrained(self, save_directory, **kwargs):
64 | r"""
65 | This method saves the configuration of your adapter model in a directory.
66 |
67 | Args:
68 | save_directory (`str`):
69 | The directory where the configuration will be saved.
70 | **kwargs:
71 | Additional keyword arguments passed along to the `transformers.utils.PushToHubMixin.push_to_hub`
72 | method.
73 | """
74 | if os.path.isfile(save_directory):
75 | raise AssertionError(f"Provided path ({save_directory}) should be a directory, not a file")
76 |
77 | os.makedirs(save_directory, exist_ok=True)
78 |
79 | output_dict = self.__dict__
80 | output_path = os.path.join(save_directory, CONFIG_NAME)
81 |
82 | # save it
83 | with open(output_path, "w") as writer:
84 | writer.write(json.dumps(output_dict, indent=2, sort_keys=True))
85 |
86 | @classmethod
87 | def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
88 | r"""
89 | This method loads the configuration of your adapter model from a directory.
90 |
91 | Args:
92 | pretrained_model_name_or_path (`str`):
93 | The directory or the hub-id where the configuration is saved.
94 | **kwargs:
95 | Additional keyword arguments passed along to the child class initialization.
96 | """
97 | if os.path.isfile(os.path.join(pretrained_model_name_or_path, CONFIG_NAME)):
98 | config_file = os.path.join(pretrained_model_name_or_path, CONFIG_NAME)
99 | else:
100 | try:
101 | config_file = hf_hub_download(pretrained_model_name_or_path, CONFIG_NAME)
102 | except Exception:
103 | raise ValueError(f"Can't find config.json at '{pretrained_model_name_or_path}'")
104 |
105 | loaded_attributes = cls.from_json_file(config_file)
106 |
107 | config = cls(**kwargs)
108 |
109 | for key, value in loaded_attributes.items():
110 | if hasattr(config, key):
111 | setattr(config, key, value)
112 |
113 | return config
114 |
115 | @classmethod
116 | def from_json_file(cls, path_json_file, **kwargs):
117 | r"""
118 | Loads a configuration file from a json file.
119 |
120 | Args:
121 | path_json_file (`str`):
122 | The path to the json file.
123 | """
124 | with open(path_json_file, "r") as file:
125 | json_object = json.load(file)
126 |
127 | return json_object
128 |
129 |
130 | @dataclass
131 | class PeftConfig(PeftConfigMixin):
132 | """
133 | This is the base configuration class to store the configuration of a :class:`~peft.PeftModel`.
134 |
135 | Args:
136 | peft_type (Union[[`~peft.utils.config.PeftType`], `str`]): The type of Peft method to use.
137 | task_type (Union[[`~peft.utils.config.TaskType`], `str`]): The type of task to perform.
138 | inference_mode (`bool`, defaults to `False`): Whether to use the Peft model in inference mode.
139 | """
140 |
141 | base_model_name_or_path: str = field(default=None, metadata={"help": "The name of the base model to use."})
142 | peft_type: Union[str, PeftType] = field(default=None, metadata={"help": "Peft type"})
143 | task_type: Union[str, TaskType] = field(default=None, metadata={"help": "Task type"})
144 | inference_mode: bool = field(default=False, metadata={"help": "Whether to use inference mode"})
145 |
146 |
147 | @dataclass
148 | class PromptLearningConfig(PeftConfig):
149 | """
150 | This is the base configuration class to store the configuration of a Union[[`~peft.PrefixTuning`],
151 | [`~peft.PromptEncoder`], [`~peft.PromptTuning`]].
152 |
153 | Args:
154 | num_virtual_tokens (`int`): The number of virtual tokens to use.
155 | token_dim (`int`): The hidden embedding dimension of the base transformer model.
156 | num_transformer_submodules (`int`): The number of transformer submodules in the base transformer model.
157 | num_attention_heads (`int`): The number of attention heads in the base transformer model.
158 | num_layers (`int`): The number of layers in the base transformer model.
159 | """
160 |
161 | num_virtual_tokens: int = field(default=None, metadata={"help": "Number of virtual tokens"})
162 | token_dim: int = field(
163 | default=None, metadata={"help": "The hidden embedding dimension of the base transformer model"}
164 | )
165 | num_transformer_submodules: Optional[int] = field(
166 | default=None, metadata={"help": "Number of transformer submodules"}
167 | )
168 | num_attention_heads: Optional[int] = field(default=None, metadata={"help": "Number of attention heads"})
169 | num_layers: Optional[int] = field(default=None, metadata={"help": "Number of transformer layers"})
170 |
--------------------------------------------------------------------------------
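A round-trip sketch for the config (de)serialization defined above: `save_pretrained` writes `adapter_config.json` into a directory and `from_pretrained` reads it back. The field values are arbitrary examples, and the snippet assumes the vendored package is installed:

    import tempfile
    from peft.tuners.lora import LoraConfig

    config = LoraConfig(r=8, lora_alpha=32, lora_dropout=0.05, target_modules=["query_key_value"])

    with tempfile.TemporaryDirectory() as tmp_dir:
        config.save_pretrained(tmp_dir)                  # writes {tmp_dir}/adapter_config.json
        reloaded = LoraConfig.from_pretrained(tmp_dir)   # reads it back into a fresh LoraConfig

    print(reloaded.r, reloaded.lora_alpha, reloaded.target_modules)  # 8 32 ['query_key_value']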
/src/peft/src/peft/utils/other.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2023-present the HuggingFace Inc. team.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | import torch
17 |
18 |
19 | # needed for prefix-tuning of bloom model
20 | def bloom_model_postprocess_past_key_value(past_key_values):
21 | past_key_values = torch.cat(past_key_values)
22 | total_layers, batch_size, num_attention_heads, num_virtual_tokens, head_dim = past_key_values.shape
23 | keys = past_key_values[: total_layers // 2]
24 | keys = keys.transpose(2, 3).reshape(
25 | total_layers // 2, batch_size * num_attention_heads, head_dim, num_virtual_tokens
26 | )
27 | values = past_key_values[total_layers // 2 :]
28 | values = values.reshape(total_layers // 2, batch_size * num_attention_heads, num_virtual_tokens, head_dim)
29 |
30 | return tuple(zip(keys, values))
31 |
32 |
33 | def prepare_model_for_int8_training(
34 | model, output_embedding_layer_name="lm_head", use_gradient_checkpointing=True, layer_norm_names=["layer_norm"]
35 | ):
36 | r"""
37 |     This method wraps the entire protocol for preparing a model before running training. This includes:
38 |         1- casting the layernorm in fp32, 2- making the output embedding layer require grads, 3- adding the upcasting
39 |         of the lm head to fp32.
40 |
41 | Args:
42 | model, (`transformers.PreTrainedModel`):
43 | The loaded model from `transformers`
44 | """
45 | loaded_in_8bit = getattr(model, "is_loaded_in_8bit", False)
46 |
47 | for name, param in model.named_parameters():
48 | # freeze base model's layers
49 | param.requires_grad = False
50 |
51 | if loaded_in_8bit:
52 | # cast layer norm in fp32 for stability for 8bit models
53 | if param.ndim == 1 and any(layer_norm_name in name for layer_norm_name in layer_norm_names):
54 | param.data = param.data.to(torch.float32)
55 |
56 | if loaded_in_8bit and use_gradient_checkpointing:
57 | # For backward compatibility
58 | if hasattr(model, "enable_input_require_grads"):
59 | model.enable_input_require_grads()
60 | else:
61 |
62 | def make_inputs_require_grad(module, input, output):
63 | output.requires_grad_(True)
64 |
65 | model.get_input_embeddings().register_forward_hook(make_inputs_require_grad)
66 |
67 | # enable gradient checkpointing for memory efficiency
68 | model.gradient_checkpointing_enable()
69 |
70 | if hasattr(model, output_embedding_layer_name):
71 | output_embedding_layer = getattr(model, output_embedding_layer_name)
72 | input_dtype = output_embedding_layer.weight.dtype
73 |
74 | class CastOutputToFloat(torch.nn.Sequential):
75 | r"""
76 |             Manually cast to the expected dtype of the lm_head, as sometimes there is a final layer norm that is cast
77 |             in fp32.
78 |
79 | """
80 |
81 | def forward(self, x):
82 | return super().forward(x.to(input_dtype)).to(torch.float32)
83 |
84 | setattr(model, output_embedding_layer_name, CastOutputToFloat(output_embedding_layer))
85 |
86 | return model
87 |
88 |
89 | TRANSFORMERS_MODELS_TO_PREFIX_TUNING_POSTPROCESS_MAPPING = {
90 | "bloom": bloom_model_postprocess_past_key_value,
91 | }
92 |
93 |
94 | # copied from transformers.models.bart.modeling_bart
95 | def shift_tokens_right(input_ids: torch.Tensor, pad_token_id: int, decoder_start_token_id: int):
96 | """
97 | Shift input ids one token to the right.
98 |
99 | Args:
100 | input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`): input ids
101 | pad_token_id (`int`): The id of the `padding` token.
102 | decoder_start_token_id (`int`): The id of the `start` token.
103 | """
104 | shifted_input_ids = input_ids.new_zeros(input_ids.shape)
105 | shifted_input_ids[:, 1:] = input_ids[:, :-1].clone()
106 | shifted_input_ids[:, 0] = decoder_start_token_id
107 |
108 | if pad_token_id is None:
109 | raise ValueError("self.model.config.pad_token_id has to be defined.")
110 | # replace possible -100 values in labels by `pad_token_id`
111 | shifted_input_ids.masked_fill_(shifted_input_ids == -100, pad_token_id)
112 |
113 | return shifted_input_ids
114 |
115 |
116 | def _set_trainable(model):
117 | if model.modules_to_save is not None:
118 | for name, param in model.named_parameters():
119 | if any(module_name in name for module_name in model.modules_to_save):
120 | param.requires_grad = True
121 |
122 |
123 | def fsdp_auto_wrap_policy(model):
124 | import functools
125 | import os
126 |
127 | from accelerate import FullyShardedDataParallelPlugin
128 | from torch.distributed.fsdp.wrap import _or_policy, lambda_auto_wrap_policy, transformer_auto_wrap_policy
129 |
130 | from ..tuners import PrefixEncoder, PromptEmbedding, PromptEncoder
131 |
132 | def lambda_policy_fn(module):
133 | if (
134 | len(list(module.named_children())) == 0
135 | and getattr(module, "weight", None) is not None
136 | and module.weight.requires_grad
137 | ):
138 | return True
139 | return False
140 |
141 | lambda_policy = functools.partial(lambda_auto_wrap_policy, lambda_fn=lambda_policy_fn)
142 | transformer_wrap_policy = functools.partial(
143 | transformer_auto_wrap_policy,
144 | transformer_layer_cls=(
145 | PrefixEncoder,
146 | PromptEncoder,
147 | PromptEmbedding,
148 | FullyShardedDataParallelPlugin.get_module_class_from_name(
149 | model, os.environ.get("FSDP_TRANSFORMER_CLS_TO_WRAP", "")
150 | ),
151 | ),
152 | )
153 |
154 | auto_wrap_policy = functools.partial(_or_policy, policies=[lambda_policy, transformer_wrap_policy])
155 | return auto_wrap_policy
156 |
157 |
158 | def transpose(weight, fan_in_fan_out):
159 | return weight.T if fan_in_fan_out else weight
160 |
--------------------------------------------------------------------------------
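A minimal usage sketch (not part of the repository) of how `prepare_model_for_int8_training` is typically combined with `get_peft_model`: the base weights are frozen, layer norms are cast to fp32, and the lm_head output is upcast before the LoRA adapter is attached. The base model id and LoRA hyperparameters below are illustrative assumptions, not values taken from this repository; `load_in_8bit` additionally requires bitsandbytes and a CUDA device.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, prepare_model_for_int8_training

# hypothetical base model loaded in 8-bit
base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m", load_in_8bit=True, device_map="auto")
base = prepare_model_for_int8_training(base)  # freeze base weights, fp32 layer norms, fp32 lm_head output

lora_config = LoraConfig(r=8, lora_alpha=32, lora_dropout=0.05, bias="none", task_type="CAUSAL_LM")
model = get_peft_model(base, lora_config)     # only the LoRA parameters remain trainable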
/src/peft/src/peft/utils/save_and_load.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2023-present the HuggingFace Inc. team.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 |
16 | from .config import PeftType
17 |
18 |
19 | def get_peft_model_state_dict(model, state_dict=None):
20 | """
21 | Get the state dict of the Peft model.
22 |
23 | Args:
24 | model ([`PeftModel`]): The Peft model. When using torch.nn.DistributedDataParallel, DeepSpeed or FSDP,
25 | the model should be the underlying model/unwrapped model (i.e. model.module).
26 | state_dict (`dict`, *optional*, defaults to `None`):
27 |             The state dict of the model. If not provided, the result of `model.state_dict()`
28 |             will be used.
29 | """
30 | if state_dict is None:
31 | state_dict = model.state_dict()
32 | if model.peft_config.peft_type == PeftType.LORA:
33 | # to_return = lora_state_dict(model, bias=model.peft_config.bias)
34 | # adapted from `https://github.com/microsoft/LoRA/blob/main/loralib/utils.py`
35 |         # to work directly with the state dict, which is necessary when using DeepSpeed or FSDP
36 | bias = model.peft_config.bias
37 | if bias == "none":
38 | to_return = {k: state_dict[k] for k in state_dict if "lora_" in k}
39 | elif bias == "all":
40 | to_return = {k: state_dict[k] for k in state_dict if "lora_" in k or "bias" in k}
41 | elif bias == "lora_only":
42 | to_return = {}
43 | for k in state_dict:
44 | if "lora_" in k:
45 | to_return[k] = state_dict[k]
46 | bias_name = k.split("lora_")[0] + "bias"
47 | if bias_name in state_dict:
48 | to_return[bias_name] = state_dict[bias_name]
49 | else:
50 | raise NotImplementedError
51 | elif model.peft_config.peft_type == PeftType.BOTTLENECK:
52 | # return the state dict of the model with Bottleneck adapters
53 | bias = model.peft_config.bias
54 | if bias == "none":
55 | to_return = {k: state_dict[k] for k in state_dict if "adapter_" in k}
56 | elif bias == "all":
57 | to_return = {k: state_dict[k] for k in state_dict if "adapter_" in k or "bias" in k}
58 | elif bias == "adapter_only":
59 | to_return = {}
60 | for k in state_dict:
61 | if "adapter_" in k:
62 | to_return[k] = state_dict[k]
63 | bias_name = k.split("adapter_")[0] + "bias"
64 | if bias_name in state_dict:
65 | to_return[bias_name] = state_dict[bias_name]
66 | else:
67 | raise NotImplementedError
68 | else:
69 | to_return = {}
70 | if model.peft_config.inference_mode:
71 | prompt_embeddings = model.prompt_encoder.embedding.weight
72 | else:
73 | prompt_embeddings = model.get_prompt_embedding_to_save()
74 | to_return["prompt_embeddings"] = prompt_embeddings
75 | if model.modules_to_save is not None:
76 | for key, value in state_dict.items():
77 | if any(module_name in key for module_name in model.modules_to_save):
78 | to_return[key] = value
79 | return to_return
80 |
81 |
82 | def set_peft_model_state_dict(model, peft_model_state_dict):
83 | """
84 | Set the state dict of the Peft model.
85 |
86 | Args:
87 | model ([`PeftModel`]): The Peft model.
88 | peft_model_state_dict (`dict`): The state dict of the Peft model.
89 | """
90 |
91 | model.load_state_dict(peft_model_state_dict, strict=False)
92 | if model.peft_config.peft_type != PeftType.LORA and model.peft_config.peft_type != PeftType.BOTTLENECK:
93 | model.prompt_encoder.embedding.load_state_dict(
94 | {"weight": peft_model_state_dict["prompt_embeddings"]}, strict=True
95 | )
96 | return model
97 |
--------------------------------------------------------------------------------
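A minimal sketch (not part of the repository) of extracting and restoring adapter-only weights with these two helpers, assuming the vendored `peft` package under src/peft is the one on the import path. The model id and target modules are taken from the test files; the output file name is arbitrary.

import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model, get_peft_model_state_dict
from peft.utils.save_and_load import set_peft_model_state_dict

base = AutoModelForCausalLM.from_pretrained("hf-internal-testing/tiny-random-OPTForCausalLM")
model = get_peft_model(base, LoraConfig(target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

adapter_state = get_peft_model_state_dict(model)   # only the lora_* tensors for bias="none"
torch.save(adapter_state, "adapter_model.bin")

restored_state = torch.load("adapter_model.bin", map_location="cpu")
set_peft_model_state_dict(model, restored_state)   # load the adapter weights back into a wrapped model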
/src/peft/tests/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft/tests/__init__.py
--------------------------------------------------------------------------------
/src/peft/tests/test_config.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2023-present the HuggingFace Inc. team.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | import os
16 | import tempfile
17 | import unittest
18 |
19 | from peft import LoraConfig, PrefixTuningConfig, PromptEncoderConfig, PromptTuningConfig
20 |
21 |
22 | class PeftConfigTestMixin:
23 | all_config_classes = (
24 | LoraConfig,
25 | PromptEncoderConfig,
26 | PrefixTuningConfig,
27 | PromptTuningConfig,
28 | )
29 |
30 |
31 | class PeftConfigTester(unittest.TestCase, PeftConfigTestMixin):
32 | def test_methods(self):
33 | r"""
34 | Test if all configs have the expected methods. Here we test
35 | - to_dict
36 | - save_pretrained
37 | - from_pretrained
38 | - from_json_file
39 | """
40 | # test if all configs have the expected methods
41 | for config_class in self.all_config_classes:
42 | config = config_class()
43 | self.assertTrue(hasattr(config, "to_dict"))
44 | self.assertTrue(hasattr(config, "save_pretrained"))
45 | self.assertTrue(hasattr(config, "from_pretrained"))
46 | self.assertTrue(hasattr(config, "from_json_file"))
47 |
48 | def test_task_type(self):
49 | for config_class in self.all_config_classes:
50 | # assert this will not fail
51 | _ = config_class(task_type="test")
52 |
53 | def test_save_pretrained(self):
54 | r"""
55 | Test if the config is correctly saved and loaded using
56 | - save_pretrained
57 | """
58 | for config_class in self.all_config_classes:
59 | config = config_class()
60 | with tempfile.TemporaryDirectory() as tmp_dirname:
61 | config.save_pretrained(tmp_dirname)
62 |
63 | config_from_pretrained = config_class.from_pretrained(tmp_dirname)
64 | self.assertEqual(config.to_dict(), config_from_pretrained.to_dict())
65 |
66 | def test_from_json_file(self):
67 | for config_class in self.all_config_classes:
68 | config = config_class()
69 | with tempfile.TemporaryDirectory() as tmp_dirname:
70 | config.save_pretrained(tmp_dirname)
71 |
72 | config_from_json = config_class.from_json_file(os.path.join(tmp_dirname, "adapter_config.json"))
73 | self.assertEqual(config.to_dict(), config_from_json)
74 |
75 | def test_to_dict(self):
76 | r"""
77 | Test if the config can be correctly converted to a dict using:
78 | - to_dict
79 | - __dict__
80 | """
81 | for config_class in self.all_config_classes:
82 | config = config_class()
83 | self.assertEqual(config.to_dict(), config.__dict__)
84 | self.assertTrue(isinstance(config.to_dict(), dict))
85 |
86 | def test_set_attributes(self):
87 | # manually set attributes and check if they are correctly written
88 | for config_class in self.all_config_classes:
89 | config = config_class(peft_type="test")
90 |
91 | # save pretrained
92 | with tempfile.TemporaryDirectory() as tmp_dirname:
93 | config.save_pretrained(tmp_dirname)
94 |
95 | config_from_pretrained = config_class.from_pretrained(tmp_dirname)
96 | self.assertEqual(config.to_dict(), config_from_pretrained.to_dict())
97 |
--------------------------------------------------------------------------------
/src/peft/tests/test_peft_model.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2023-present the HuggingFace Inc. team.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | import os
16 | import tempfile
17 | import unittest
18 |
19 | import torch
20 | from parameterized import parameterized
21 | from transformers import AutoModelForCausalLM
22 |
23 | from peft import (
24 | PeftModel,
25 | get_peft_model,
26 | get_peft_model_state_dict,
27 | prepare_model_for_int8_training,
28 | )
29 |
30 | from .testing_common import PeftTestConfigManager
31 |
32 |
33 | # This has to be in the order: model_id, lora_kwargs, prefix_tuning_kwargs, prompt_encoder_kwargs, prompt_tuning_kwargs
34 | PEFT_MODELS_TO_TEST = [
35 | ("hf-internal-testing/tiny-random-OPTForCausalLM", {"target_modules": ["q_proj", "v_proj"]}, {}, {}, {}),
36 | ]
37 |
38 |
39 | class PeftTestMixin:
40 | torch_device = "cuda" if torch.cuda.is_available() else "cpu"
41 |
42 |
43 | class PeftModelTester(unittest.TestCase, PeftTestMixin):
44 | r"""
45 | Test if the PeftModel behaves as expected. This includes:
46 | - test if the model has the expected methods
47 |
48 |     We use parameterized.expand for debugging purposes to test each model individually.
49 | """
50 |
51 | @parameterized.expand(PeftTestConfigManager.get_grid_parameters(PEFT_MODELS_TO_TEST))
52 | def test_attributes_parametrized(self, test_name, model_id, config_cls, config_kwargs):
53 | self._test_model_attr(model_id, config_cls, config_kwargs)
54 |
55 | def _test_model_attr(self, model_id, config_cls, config_kwargs):
56 | model = AutoModelForCausalLM.from_pretrained(model_id)
57 | config = config_cls(
58 | base_model_name_or_path=model_id,
59 | **config_kwargs,
60 | )
61 | model = get_peft_model(model, config)
62 |
63 | self.assertTrue(hasattr(model, "save_pretrained"))
64 | self.assertTrue(hasattr(model, "from_pretrained"))
65 | self.assertTrue(hasattr(model, "push_to_hub"))
66 |
67 | def _test_prepare_for_training(self, model_id, config_cls, config_kwargs):
68 | model = AutoModelForCausalLM.from_pretrained(model_id).to(self.torch_device)
69 | config = config_cls(
70 | base_model_name_or_path=model_id,
71 | **config_kwargs,
72 | )
73 | model = get_peft_model(model, config)
74 |
75 | dummy_input = torch.LongTensor([[1, 1, 1]]).to(self.torch_device)
76 | dummy_output = model.get_input_embeddings()(dummy_input)
77 |
78 | self.assertTrue(not dummy_output.requires_grad)
79 |
80 | # load with `prepare_model_for_int8_training`
81 | model = AutoModelForCausalLM.from_pretrained(model_id).to(self.torch_device)
82 | model = prepare_model_for_int8_training(model)
83 |
84 | for param in model.parameters():
85 | self.assertTrue(not param.requires_grad)
86 |
87 | config = config_cls(
88 | base_model_name_or_path=model_id,
89 | **config_kwargs,
90 | )
91 | model = get_peft_model(model, config)
92 |
93 | # For backward compatibility
94 | if hasattr(model, "enable_input_require_grads"):
95 | model.enable_input_require_grads()
96 | else:
97 |
98 | def make_inputs_require_grad(module, input, output):
99 | output.requires_grad_(True)
100 |
101 | model.get_input_embeddings().register_forward_hook(make_inputs_require_grad)
102 |
103 | dummy_input = torch.LongTensor([[1, 1, 1]]).to(self.torch_device)
104 | dummy_output = model.get_input_embeddings()(dummy_input)
105 |
106 | self.assertTrue(dummy_output.requires_grad)
107 |
108 | @parameterized.expand(PeftTestConfigManager.get_grid_parameters(PEFT_MODELS_TO_TEST))
109 | def test_prepare_for_training_parametrized(self, test_name, model_id, config_cls, config_kwargs):
110 | self._test_prepare_for_training(model_id, config_cls, config_kwargs)
111 |
112 | def _test_save_pretrained(self, model_id, config_cls, config_kwargs):
113 | model = AutoModelForCausalLM.from_pretrained(model_id)
114 | config = config_cls(
115 | base_model_name_or_path=model_id,
116 | **config_kwargs,
117 | )
118 | model = get_peft_model(model, config)
119 | model = model.to(self.torch_device)
120 |
121 | with tempfile.TemporaryDirectory() as tmp_dirname:
122 | model.save_pretrained(tmp_dirname)
123 |
124 | model_from_pretrained = AutoModelForCausalLM.from_pretrained(model_id)
125 | model_from_pretrained = PeftModel.from_pretrained(model_from_pretrained, tmp_dirname)
126 |
127 | # check if the state dicts are equal
128 | state_dict = get_peft_model_state_dict(model)
129 | state_dict_from_pretrained = get_peft_model_state_dict(model_from_pretrained)
130 |
131 | # check if same keys
132 | self.assertEqual(state_dict.keys(), state_dict_from_pretrained.keys())
133 |
134 | # check if tensors equal
135 | for key in state_dict.keys():
136 | self.assertTrue(
137 | torch.allclose(
138 | state_dict[key].to(self.torch_device), state_dict_from_pretrained[key].to(self.torch_device)
139 | )
140 | )
141 |
142 | # check if `adapter_model.bin` is present
143 | self.assertTrue(os.path.exists(os.path.join(tmp_dirname, "adapter_model.bin")))
144 |
145 | # check if `adapter_config.json` is present
146 | self.assertTrue(os.path.exists(os.path.join(tmp_dirname, "adapter_config.json")))
147 |
148 | # check if `pytorch_model.bin` is not present
149 | self.assertFalse(os.path.exists(os.path.join(tmp_dirname, "pytorch_model.bin")))
150 |
151 | # check if `config.json` is not present
152 | self.assertFalse(os.path.exists(os.path.join(tmp_dirname, "config.json")))
153 |
154 | @parameterized.expand(PeftTestConfigManager.get_grid_parameters(PEFT_MODELS_TO_TEST))
155 | def test_save_pretrained(self, test_name, model_id, config_cls, config_kwargs):
156 | self._test_save_pretrained(model_id, config_cls, config_kwargs)
157 |
--------------------------------------------------------------------------------
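For reference, a sketch (not part of the repository) of the round trip that `_test_save_pretrained` exercises, outside the unittest harness. The output directory is an arbitrary assumption; the model id and target modules are the ones used in PEFT_MODELS_TO_TEST.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, PeftModel, get_peft_model

model_id = "hf-internal-testing/tiny-random-OPTForCausalLM"
peft_model = get_peft_model(
    AutoModelForCausalLM.from_pretrained(model_id),
    LoraConfig(base_model_name_or_path=model_id, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"),
)
peft_model.save_pretrained("./lora_adapter")   # writes adapter_model.bin and adapter_config.json only

restored = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(model_id), "./lora_adapter")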
/src/peft/tests/testing_common.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2023-present the HuggingFace Inc. team.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | from collections import OrderedDict
16 |
17 | from peft import (
18 | LoraConfig,
19 | PrefixTuningConfig,
20 | PromptEncoderConfig,
21 | PromptTuningConfig,
22 | )
23 |
24 |
25 | CONFIG_CLASSES = (
26 | LoraConfig,
27 | PrefixTuningConfig,
28 | PromptEncoderConfig,
29 | PromptTuningConfig,
30 | )
31 | CONFIG_TESTING_KWARGS = (
32 | {
33 | "r": 8,
34 | "lora_alpha": 32,
35 | "target_modules": None,
36 | "lora_dropout": 0.05,
37 | "bias": "none",
38 | "task_type": "CAUSAL_LM",
39 | },
40 | {
41 | "num_virtual_tokens": 10,
42 | "task_type": "CAUSAL_LM",
43 | },
44 | {
45 | "num_virtual_tokens": 10,
46 | "encoder_hidden_size": 32,
47 | "task_type": "CAUSAL_LM",
48 | },
49 | {
50 | "num_virtual_tokens": 10,
51 | "task_type": "CAUSAL_LM",
52 | },
53 | )
54 |
55 | CLASSES_MAPPING = {
56 | "lora": (LoraConfig, CONFIG_TESTING_KWARGS[0]),
57 | "prefix_tuning": (PrefixTuningConfig, CONFIG_TESTING_KWARGS[1]),
58 | "prompt_encoder": (PromptEncoderConfig, CONFIG_TESTING_KWARGS[2]),
59 | "prompt_tuning": (PromptTuningConfig, CONFIG_TESTING_KWARGS[3]),
60 | }
61 |
62 |
63 | # Adapted from https://github.com/huggingface/transformers/blob/48327c57182fdade7f7797d1eaad2d166de5c55b/src/transformers/activations.py#LL166C7-L166C22
64 | class ClassInstantier(OrderedDict):
65 | def __getitem__(self, key, *args, **kwargs):
66 | # check if any of the kwargs is inside the config class kwargs
67 | if any([kwarg in self[key][1] for kwarg in kwargs]):
68 | new_config_kwargs = self[key][1].copy()
69 | new_config_kwargs.update(kwargs)
70 | return (self[key][0], new_config_kwargs)
71 |
72 | return super().__getitem__(key, *args, **kwargs)
73 |
74 | def get_grid_parameters(self, model_list):
75 | r"""
76 | Returns a list of all possible combinations of the parameters in the config classes.
77 | """
78 | grid_parameters = []
79 | for model_tuple in model_list:
80 | model_id, lora_kwargs, prefix_tuning_kwargs, prompt_encoder_kwargs, prompt_tuning_kwargs = model_tuple
81 | for key, value in self.items():
82 | if key == "lora":
83 | # update value[1] if necessary
84 | if lora_kwargs is not None:
85 | value[1].update(lora_kwargs)
86 | elif key == "prefix_tuning":
87 | # update value[1] if necessary
88 | if prefix_tuning_kwargs is not None:
89 | value[1].update(prefix_tuning_kwargs)
90 | elif key == "prompt_encoder":
91 | # update value[1] if necessary
92 | if prompt_encoder_kwargs is not None:
93 | value[1].update(prompt_encoder_kwargs)
94 | else:
95 | # update value[1] if necessary
96 | if prompt_tuning_kwargs is not None:
97 | value[1].update(prompt_tuning_kwargs)
98 | grid_parameters.append((f"test_{model_id}_{key}", model_id, value[0], value[1]))
99 |
100 | return grid_parameters
101 |
102 |
103 | PeftTestConfigManager = ClassInstantier(CLASSES_MAPPING)
104 |
--------------------------------------------------------------------------------
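A sketch (not part of the repository) of what `PeftTestConfigManager.get_grid_parameters` yields for the model tuple used in test_peft_model.py; the import path assumes the tests directory is importable as a package.

from tests.testing_common import PeftTestConfigManager

models = [
    ("hf-internal-testing/tiny-random-OPTForCausalLM", {"target_modules": ["q_proj", "v_proj"]}, {}, {}, {}),
]
for test_name, model_id, config_cls, config_kwargs in PeftTestConfigManager.get_grid_parameters(models):
    # one entry per key in CLASSES_MAPPING: lora, prefix_tuning, prompt_encoder, prompt_tuning
    print(test_name, config_cls.__name__, config_kwargs)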
/src/peft/tests/testing_utils.py:
--------------------------------------------------------------------------------
1 | # coding=utf-8
2 | # Copyright 2023-present the HuggingFace Inc. team.
3 | #
4 | # Licensed under the Apache License, Version 2.0 (the "License");
5 | # you may not use this file except in compliance with the License.
6 | # You may obtain a copy of the License at
7 | #
8 | # http://www.apache.org/licenses/LICENSE-2.0
9 | #
10 | # Unless required by applicable law or agreed to in writing, software
11 | # distributed under the License is distributed on an "AS IS" BASIS,
12 | # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13 | # See the License for the specific language governing permissions and
14 | # limitations under the License.
15 | import unittest
16 |
17 | import torch
18 |
19 |
20 | def require_torch_gpu(test_case):
21 | """
22 | Decorator marking a test that requires a GPU. Will be skipped when no GPU is available.
23 | """
24 | if not torch.cuda.is_available():
25 | return unittest.skip("test requires GPU")(test_case)
26 | else:
27 | return test_case
28 |
29 |
30 | def require_torch_multi_gpu(test_case):
31 | """
32 | Decorator marking a test that requires multiple GPUs. Will be skipped when less than 2 GPUs are available.
33 | """
34 | if not torch.cuda.is_available() or torch.cuda.device_count() < 2:
35 | return unittest.skip("test requires multiple GPUs")(test_case)
36 | else:
37 | return test_case
38 |
39 |
40 | def require_bitsandbytes(test_case):
41 | """
42 | Decorator marking a test that requires the bitsandbytes library. Will be skipped when the library is not installed.
43 | """
44 | try:
45 | import bitsandbytes # noqa: F401
46 | except ImportError:
47 | return unittest.skip("test requires bitsandbytes")(test_case)
48 | else:
49 | return test_case
50 |
--------------------------------------------------------------------------------
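A sketch (not part of the repository) of how these decorators gate a test; the test class and body are hypothetical, and the import path again assumes the tests directory is importable as a package.

import unittest

from tests.testing_utils import require_bitsandbytes, require_torch_gpu


class Int8TrainingSmokeTest(unittest.TestCase):
    @require_torch_gpu
    @require_bitsandbytes
    def test_int8_training(self):
        # skipped automatically unless a CUDA device and bitsandbytes are both available
        self.assertTrue(True)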
/src/peft_model/3.p:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft_model/3.p
--------------------------------------------------------------------------------
/src/peft_r_model/1.p:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/LiuHC0428/LAW-GPT/6251dd36505a4b644c5d2fe9b5188cbb541ea5fd/src/peft_r_model/1.p
--------------------------------------------------------------------------------
/src/requirement.txt:
--------------------------------------------------------------------------------
1 | protobuf>=3.19.5,<3.20.1
2 | transformers==4.26.1
3 | icetk
4 | cpm_kernels
5 | torch>=1.10.1
6 | gradio
7 | pytorch-ignite
8 | peft
9 | accelerate
10 | tensorboardX
11 | lion-pytorch
12 | faiss-gpu
13 | text2vec
14 |
--------------------------------------------------------------------------------
/src/train/dataset.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | import numpy as np
4 | import torch
5 | from itertools import chain
6 |
7 | from torch.utils.data import Dataset
8 |
9 |
10 |
11 | # from train import SPECIAL_TOKENS, MODEL_INPUTS, PADDED_INPUTS
12 | SPECIAL_TOKENS = ["