├── .gitignore ├── LICENSE ├── README.jp.md ├── README.md ├── bin ├── VGAudioCli.exe └── encode.exe ├── main.py ├── makefile ├── requirements.txt ├── src ├── assets │ ├── hp_values.csv │ ├── icon.ico │ └── icon.png ├── common.py ├── config.py ├── constants.py ├── datatable.py ├── encryption.py ├── fumen.py ├── nus3bank.py ├── parse_tja.py ├── program.py ├── tja2fumen │ ├── __init__.py │ ├── classes.py │ ├── constants.py │ ├── converters.py │ ├── parsers.py │ └── writers.py └── updater.py ├── templates └── nijiiro │ ├── song_ABC.nus3bank │ ├── song_ABCD.nus3bank │ ├── song_ABCDE.nus3bank │ ├── song_ABCDEF.nus3bank │ ├── song_ABCDEFG.nus3bank │ └── song_ABCDEFGH.nus3bank └── version_info.txt /.gitignore: -------------------------------------------------------------------------------- 1 | datatable 2 | .vscode 3 | .idea 4 | config.json 5 | __pycache__ 6 | dist 7 | build 8 | KeifunsDatatableEditor.exe 9 | tmp 10 | temp 11 | *.dll 12 | *.spec 13 | -------------------------------------------------------------------------------- /README.jp.md: -------------------------------------------------------------------------------- 1 | # KeifunsDatatableEditor (KDE) 2 | [![en](https://img.shields.io/badge/lang-en-green.svg)](https://github.com/keitannunes/KeifunsDatatableEditor/blob/main/README.md) 3 | [![jp](https://img.shields.io/badge/lang-jp-red.svg)](https://github.com/keitannunes/KeifunsDatatableEditor/blob/main/README.jp.md) 4 | 5 | KeifunsDatatableEditor(KDE)は、JPN08やCHN00用のTaikoSoundEditor(TSE)とは違い、JPN39の為に作られました。 6 | KDEを使用する事でdatatableを簡単に変更・管理することが出来ます。 7 | 8 | ## 特徴 9 | - **datatableの編集**: datatableを簡単に変更出来ます。 10 | - **曲の追加/削除**: datatableに曲の追加や削除が出来ます。 11 | - **曲の詳細情報を自動生成**: TJAファイルから曲の詳細情報を自動生成します。 12 | - **fumenとsoundファイルの生成**: TJAファイルから直接fumenとsoundファイルを生成します。 13 | 14 | ## 追加予定の機能 15 | - MusicAttributeのtag編集 16 | - 曲名/サブタイトルで曲検索 17 | - 称号/リワード編集機能 18 | - 日本語訳 19 | 20 | ## 実行環境 21 | - **FFmpeg**: このプロジェクトはaudioファイルの変換処理に[FFmpeg](https://ffmpeg.org/)を必要とします。システムのPATHにインストールされ、アクセス出来る事を確認してください。 22 | 23 | ## インストールガイド 24 | 1. 最新の実行ファイルを[GitHub Releases](https://github.com/keitannunes/KeifunsDatatableEditor/releases)からダウンロードします。 25 | 2. ダウンロードした`KeifunsDatatableEditor.exe`を実行します。 26 | 3. 起動後、`File > Set Keys`に進む。 27 | 4. プログラムを使用する為に必要なキーを設定してください。 28 | 29 | 30 | ## 使用ツール 31 | 32 | - [TaikoPythonTools - TaikoNus3bankMake](https://github.com/cainan-c/TaikoPythonTools): soundファイルの作成に使用します。 33 | - [tja2fumen](https://github.com/vivaria/tja2fumen): fumenファイルの作成に使用します。 34 | - [tja-tools](https://github.com/WHMHammer/tja-tools): TJAファイルの解析に使用します。 35 | 36 | ## ステータス 37 | このプログラムは現在 **alpha** です。問題やバグがあれば、[GitHub Issues](https://github.com/keitannunes/KeifunsDatatableEditor/issues) ページから報告してください。 38 | 39 | ## 質問とサポート 40 | 質問やサポートが必要な場合は、**EGTS Discord** へ気軽に参加してください: [discord.egts.ca](https://discord.egts.ca) 41 | 42 | ## License 43 | KeifunsDatatableEditor is licensed under the **GNU General Public License v3.0**. 44 | 45 | For more details, refer to the [GPL 3.0 License](https://www.gnu.org/licenses/gpl-3.0.html). 46 | 47 | --- 48 | Feel free to use, modify, and contribute, but remember to share your improvements under the same license. 
49 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # KeifunsDatatableEditor (KDE) 2 | [![en](https://img.shields.io/badge/lang-en-green.svg)](https://github.com/keitannunes/KeifunsDatatableEditor/blob/main/README.md) 3 | [![jp](https://img.shields.io/badge/lang-jp-red.svg)](https://github.com/keitannunes/KeifunsDatatableEditor/blob/main/README.jp.md) 4 | 5 | KeifunsDatatableEditor (KDE) is a replacement for TaikoSoundEditor (TSE), specifically designed for Nijiiro JPN39. KDE allows you to easily modify and manage the datatable for your Taiko games, with several additional features to streamline the process. 6 | 7 | ## Features 8 | - **Edit Datatable**: Modify the datatable with ease. 9 | - **Add/Remove Songs**: Add or remove songs from the datatable. 10 | - **Auto-Generate Song Details**: Automatically generate song details from TJA files. 11 | - **Create Fumen and Sound Files**: Generate fumen and sound files directly from TJA. 12 | 13 | ## Planned Features 14 | - MusicAttribute tag editing 15 | - Shougou/reward editing 16 | - Japanese translation 17 | 18 | ## Requirements 19 | - **FFmpeg**: This project requires [FFmpeg](https://ffmpeg.org/) for handling audio file conversions. Make sure it is installed and accessible in your system's PATH. 20 | 21 | ## Installation Guide 22 | 1. Download the latest executable file from [GitHub Releases](https://github.com/keitannunes/KeifunsDatatableEditor/releases). 23 | 2. Run the downloaded `.exe` file. 24 | 3. Once the application is running, go to `File > Set Keys`. 25 | 4. Set the necessary keys to begin using the program. 26 | 27 | 28 | ## Tools Used 29 | 30 | - [TaikoPythonTools - TaikoNus3bankMake](https://github.com/cainan-c/TaikoPythonTools): Used for creating sound files. 31 | - [tja2fumen](https://github.com/vivaria/tja2fumen): Used for creating fumen files. 32 | - [tja-tools](https://github.com/WHMHammer/tja-tools): Used for parsing TJA files. 33 | 34 | ## Status 35 | This program is currently in **alpha**. Please report any issues or bugs through the [GitHub Issues](https://github.com/keitannunes/KeifunsDatatableEditor/issues) page. 36 | 37 | ## Questions & Support 38 | If you have any questions or need support, feel free to join the **EGTS Discord**: [discord.egts.ca](https://discord.egts.ca) 39 | 40 | ## License 41 | KeifunsDatatableEditor is licensed under the **GNU General Public License v3.0**. 42 | 43 | For more details, refer to the [GPL 3.0 License](https://www.gnu.org/licenses/gpl-3.0.html). 44 | 45 | --- 46 | Feel free to use, modify, and contribute, but remember to share your improvements under the same license. 
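As a quick sanity check for the FFmpeg requirement listed under Requirements, the short script below (a minimal sketch, not shipped with KDE; the file name `check_ffmpeg.py` is made up) verifies that an `ffmpeg` executable is discoverable on your PATH before you launch the editor:

```python
# check_ffmpeg.py -- hypothetical helper, not part of this repository.
import shutil
import subprocess

def ffmpeg_available() -> bool:
    """Return True if an `ffmpeg` executable can be found on PATH."""
    return shutil.which("ffmpeg") is not None

if __name__ == "__main__":
    if ffmpeg_available():
        # Print the version banner so you can see which FFmpeg build is picked up.
        subprocess.run(["ffmpeg", "-version"], check=False)
    else:
        print("FFmpeg was not found on PATH; audio conversion will fail.")
```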
47 | -------------------------------------------------------------------------------- /bin/VGAudioCli.exe: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/keitannunes/KeifunsDatatableEditor/07b009f6e4b41017f7fc7de596404f14529a7d04/bin/VGAudioCli.exe -------------------------------------------------------------------------------- /bin/encode.exe: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/keitannunes/KeifunsDatatableEditor/07b009f6e4b41017f7fc7de596404f14529a7d04/bin/encode.exe -------------------------------------------------------------------------------- /main.py: -------------------------------------------------------------------------------- 1 | from src import program 2 | 3 | if __name__ == "__main__": 4 | ui = program.Program() 5 | ui.run() -------------------------------------------------------------------------------- /makefile: -------------------------------------------------------------------------------- 1 | # Variables 2 | NAME = KeifunsDatatableEditor 3 | MAIN_FILE = main.py 4 | ICON = ./src/assets/icon.ico 5 | ASSETS_DIR = ./src/assets 6 | TEMPLATES = ./templates 7 | BIN = ./bin/* 8 | VERSION_FILE = ./version_info.txt 9 | 10 | # PyInstaller command 11 | PYINSTALLER = pyinstaller 12 | 13 | # PyInstaller options 14 | PYINSTALLER_OPTS = --onefile --windowed --icon=$(ICON) --name $(NAME) --version-file=$(VERSION_FILE) 15 | PYINSTALLER_DEBUG_OPTS = --onefile --icon=$(ICON) --name "$(NAME) Debug" --version-file=$(VERSION_FILE) 16 | 17 | 18 | # Build command 19 | .PHONY: build 20 | build: clean 21 | $(PYINSTALLER) $(PYINSTALLER_OPTS) \ 22 | --add-data "$(ASSETS_DIR);src/assets" \ 23 | --add-data "$(TEMPLATES);templates" \ 24 | --add-binary "$(BIN);bin" \ 25 | $(MAIN_FILE) 26 | 27 | .PHONY: debug 28 | debug: 29 | $(PYINSTALLER) $(PYINSTALLER_DEBUG_OPTS) \ 30 | --add-data "$(ASSETS_DIR);src/assets" \ 31 | --add-data "$(TEMPLATES);templates" \ 32 | --add-binary "$(BIN);bin" \ 33 | $(MAIN_FILE) 34 | 35 | # Clean command 36 | .PHONY: clean 37 | clean: 38 | @echo "Cleaning previous builds..." 
39 | @if exist dist rmdir /S /Q dist 40 | @if exist build rmdir /S /Q build 41 | @if exist $(NAME).spec del /F /Q $(NAME).spec -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | cryptography 2 | requests -------------------------------------------------------------------------------- /src/assets/icon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/keitannunes/KeifunsDatatableEditor/07b009f6e4b41017f7fc7de596404f14529a7d04/src/assets/icon.ico -------------------------------------------------------------------------------- /src/assets/icon.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/keitannunes/KeifunsDatatableEditor/07b009f6e4b41017f7fc7de596404f14529a7d04/src/assets/icon.png -------------------------------------------------------------------------------- /src/common.py: -------------------------------------------------------------------------------- 1 | import sys, os 2 | 3 | def resource_path(relative_path): 4 | if hasattr(sys, '_MEIPASS'): 5 | return os.path.join(sys._MEIPASS, relative_path) #type: ignore 6 | return os.path.join(os.path.abspath("."), relative_path) -------------------------------------------------------------------------------- /src/config.py: -------------------------------------------------------------------------------- 1 | import json 2 | import os 3 | 4 | class Config: 5 | datatable_key: str 6 | fumen_key: str 7 | game_files_out_dir: str 8 | default_required_renda_speeds: list[float] 9 | dancers: dict 10 | auto_close_search: bool 11 | recalculate_shinuti_score_with_required_renda_count: bool 12 | 13 | def __init__(self) -> None: 14 | # Define the directory and file paths 15 | self.appdata_dir = os.path.join(os.getenv('LOCALAPPDATA'), 'KeifunsDatatableEditor') # type: ignore 16 | self.config_file_path = os.path.join(self.appdata_dir, 'config.json') 17 | 18 | # Create the directory if it doesn't exist 19 | if not os.path.exists(self.appdata_dir): 20 | os.makedirs(self.appdata_dir) 21 | 22 | default_config = { 23 | "datatableKey": "3530304242323633353537423431384139353134383346433246464231354534", 24 | "fumenKey": "4434423946383537303842433443383030333843444132343339373531353830", 25 | "gameFilesOutDir": "", 26 | "defaultRequiredRendaSpeeds": [6,8,11,17,17], 27 | "autoCloseSearch": True, 28 | "recalculateShinutiScoreWithRequiredRendaCount": False, 29 | "dancers": { 30 | "001_miku": { 31 | "ensoPartsID1": 1, 32 | "ensoPartsID2": 1, 33 | "donBg1p": "lumen/001_miku/enso_normal/enso/donbg/donbg_b_001_1p.nulstb", 34 | "donBg2p": "lumen/001_miku/enso_normal/enso/donbg/donbg_b_001_2p.nulstb", 35 | "dancerDai": "lumen/001_miku/enso_normal/enso/background/dodai_b_01.nulstb", 36 | "dancer": "lumen/001_miku/enso_normal/enso/dancer/dance_b_001.nulstb", 37 | "danceNormalBg": "lumen/001_miku/enso_normal/enso/background/bg_nomal_b_001.nulstb", 38 | "danceFeverBg": "lumen/001_miku/enso_normal/enso/background/bg_fever_b_001.nulstb", 39 | "rendaEffect": "lumen/001_miku/enso_normal/enso/renda_effect/renda_b_001.nulstb", 40 | "fever": "lumen/001_miku/enso_normal/enso/fever/fever_b_001.nulstb", 41 | "donBg1p1": "lumen/001_miku/enso_normal/enso/donbg/donbg_b_001_1p.nulstb", 42 | "donBg2p1": "lumen/001_miku/enso_normal/enso/donbg/donbg_b_001_2p.nulstb", 43 | "dancerDai1": 
"lumen/001_miku/enso_normal/enso/background/dodai_b_01.nulstb", 44 | "dancer1": "lumen/001_miku/enso_normal/enso/dancer/dance_b_001.nulstb", 45 | "danceNormalBg1": "lumen/001_miku/enso_normal/enso/background/bg_nomal_b_001.nulstb", 46 | "danceFeverBg1": "lumen/001_miku/enso_normal/enso/background/bg_fever_b_001.nulstb", 47 | "rendaEffect1": "lumen/001_miku/enso_normal/enso/renda_effect/renda_b_001.nulstb", 48 | "fever1": "lumen/001_miku/enso_normal/enso/fever/fever_b_001.nulstb" 49 | }, 50 | "002_toho": { 51 | "ensoPartsID1": 2, 52 | "ensoPartsID2": 2, 53 | "donBg1p": "lumen/002_toho/enso_normal/enso/donbg/donbg_b_002_1p.nulstb", 54 | "donBg2p": "lumen/002_toho/enso_normal/enso/donbg/donbg_b_002_2p.nulstb", 55 | "dancerDai": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 56 | "dancer": "lumen/002_toho/enso_normal/enso/dancer/dance_b_002.nulstb", 57 | "danceNormalBg": "lumen/002_toho/enso_normal/enso/background/bg_nomal_b_002.nulstb", 58 | "danceFeverBg": "lumen/002_toho/enso_normal/enso/background/bg_fever_b_002.nulstb", 59 | "rendaEffect": "lumen/002_toho/enso_normal/enso/renda_effect/renda_b_002.nulstb", 60 | "fever": "lumen/002_toho/enso_normal/enso/fever/fever_b_002.nulstb", 61 | "donBg1p1": "lumen/002_toho/enso_normal/enso/donbg/donbg_b_002_1p.nulstb", 62 | "donBg2p1": "lumen/002_toho/enso_normal/enso/donbg/donbg_b_002_2p.nulstb", 63 | "dancerDai1": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 64 | "dancer1": "lumen/002_toho/enso_normal/enso/dancer/dance_b_002.nulstb", 65 | "danceNormalBg1": "lumen/002_toho/enso_normal/enso/background/bg_nomal_b_002.nulstb", 66 | "danceFeverBg1": "lumen/002_toho/enso_normal/enso/background/bg_fever_b_002.nulstb", 67 | "rendaEffect1": "lumen/002_toho/enso_normal/enso/renda_effect/renda_b_002.nulstb", 68 | "fever1": "lumen/002_toho/enso_normal/enso/fever/fever_b_002.nulstb" 69 | }, 70 | "003_gumi": { 71 | "ensoPartsID1": 3, 72 | "ensoPartsID2": 3, 73 | "donBg1p": "lumen/003_gumi/enso_normal/enso/donbg/donbg_b_003_1p.nulstb", 74 | "donBg2p": "lumen/003_gumi/enso_normal/enso/donbg/donbg_b_003_1p.nulstb", 75 | "dancerDai": "lumen/003_gumi/enso_normal/enso/background/dodai_b_03.nulstb", 76 | "dancer": "lumen/003_gumi/enso_normal/enso/dancer/dance_b_003.nulstb", 77 | "danceNormalBg": "lumen/003_gumi/enso_normal/enso/background/bg_nomal_b_003.nulstb", 78 | "danceFeverBg": "lumen/003_gumi/enso_normal/enso/background/bg_fever_b_003.nulstb", 79 | "rendaEffect": "lumen/003_gumi/enso_normal/enso/renda_effect/renda_b_003.nulstb", 80 | "fever": "lumen/003_gumi/enso_normal/enso/fever/fever_b_003.nulstb", 81 | "donBg1p1": "lumen/003_gumi/enso_normal/enso/donbg/donbg_b_003_1p.nulstb", 82 | "donBg2p1": "lumen/003_gumi/enso_normal/enso/donbg/donbg_b_003_1p.nulstb", 83 | "dancerDai1": "lumen/003_gumi/enso_normal/enso/background/dodai_b_03.nulstb", 84 | "dancer1": "lumen/003_gumi/enso_normal/enso/dancer/dance_b_003.nulstb", 85 | "danceNormalBg1": "lumen/003_gumi/enso_normal/enso/background/bg_nomal_b_003.nulstb", 86 | "danceFeverBg1": "lumen/003_gumi/enso_normal/enso/background/bg_fever_b_003.nulstb", 87 | "rendaEffect1": "lumen/003_gumi/enso_normal/enso/renda_effect/renda_b_003.nulstb", 88 | "fever1": "lumen/003_gumi/enso_normal/enso/fever/fever_b_003.nulstb" 89 | }, 90 | "004_ia": { 91 | "ensoPartsID1": 4, 92 | "ensoPartsID2": 4, 93 | "donBg1p": "lumen/004_ia/enso_normal/enso/donbg/donbg_b_004_1p.nulstb", 94 | "donBg2p": "lumen/004_ia/enso_normal/enso/donbg/donbg_b_004_1p.nulstb", 95 | "dancerDai": 
"lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 96 | "dancer": "lumen/004_ia/enso_normal/enso/dancer/dance_b_004.nulstb", 97 | "danceNormalBg": "lumen/004_ia/enso_normal/enso/background/bg_nomal_b_004.nulstb", 98 | "danceFeverBg": "lumen/004_ia/enso_normal/enso/background/bg_fever_b_004.nulstb", 99 | "rendaEffect": "lumen/004_ia/enso_normal/enso/renda_effect/renda_b_004.nulstb", 100 | "fever": "lumen/004_ia/enso_normal/enso/fever/fever_b_004.nulstb", 101 | "donBg1p1": "lumen/004_ia/enso_normal/enso/donbg/donbg_b_004_1p.nulstb", 102 | "donBg2p1": "lumen/004_ia/enso_normal/enso/donbg/donbg_b_004_1p.nulstb", 103 | "dancerDai1": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 104 | "dancer1": "lumen/004_ia/enso_normal/enso/dancer/dance_b_004.nulstb", 105 | "danceNormalBg1": "lumen/004_ia/enso_normal/enso/background/bg_nomal_b_004.nulstb", 106 | "danceFeverBg1": "lumen/004_ia/enso_normal/enso/background/bg_fever_b_004.nulstb", 107 | "rendaEffect1": "lumen/004_ia/enso_normal/enso/renda_effect/renda_b_004.nulstb", 108 | "fever1": "lumen/004_ia/enso_normal/enso/fever/fever_b_004.nulstb" 109 | }, 110 | "005_lovelive": { 111 | "ensoPartsID1": 5, 112 | "ensoPartsID2": 5, 113 | "donBg1p": "lumen/005_lovelive/enso_normal/enso/donbg/donbg_b_005_1p.nulstb", 114 | "donBg2p": "lumen/005_lovelive/enso_normal/enso/donbg/donbg_b_005_2p.nulstb", 115 | "dancerDai": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 116 | "dancer": "lumen/005_lovelive/enso_normal/enso/dancer/dance_b_005.nulstb", 117 | "danceNormalBg": "lumen/005_lovelive/enso_normal/enso/background/bg_nomal_b_005.nulstb", 118 | "danceFeverBg": "lumen/005_lovelive/enso_normal/enso/background/bg_fever_b_005.nulstb", 119 | "rendaEffect": "lumen/005_lovelive/enso_normal/enso/renda_effect/renda_b_005.nulstb", 120 | "fever": "lumen/005_lovelive/enso_normal/enso/fever/fever_b_005.nulstb", 121 | "donBg1p1": "lumen/005_lovelive/enso_normal/enso/donbg/donbg_b_005_1p.nulstb", 122 | "donBg2p1": "lumen/005_lovelive/enso_normal/enso/donbg/donbg_b_005_2p.nulstb", 123 | "dancerDai1": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 124 | "dancer1": "lumen/005_lovelive/enso_normal/enso/dancer/dance_b_005.nulstb", 125 | "danceNormalBg1": "lumen/005_lovelive/enso_normal/enso/background/bg_nomal_b_005.nulstb", 126 | "danceFeverBg1": "lumen/005_lovelive/enso_normal/enso/background/bg_fever_b_005.nulstb", 127 | "rendaEffect1": "lumen/005_lovelive/enso_normal/enso/renda_effect/renda_b_005.nulstb", 128 | "fever1": "lumen/005_lovelive/enso_normal/enso/fever/fever_b_005.nulstb" 129 | }, 130 | "006_i7_id7": { 131 | "ensoPartsID1": 6, 132 | "ensoPartsID2": 6, 133 | "donBg1p": "lumen/006_i7_id7/enso_normal/enso/donbg/donbg_b_006_1p.nulstb", 134 | "donBg2p": "lumen/006_i7_id7/enso_normal/enso/donbg/donbg_b_006_1p.nulstb", 135 | "dancerDai": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 136 | "dancer": "lumen/006_i7_id7/enso_normal/enso/dancer/dance_b_006.nulstb", 137 | "danceNormalBg": "lumen/006_i7_id7/enso_normal/enso/background/bg_nomal_b_006.nulstb", 138 | "danceFeverBg": "lumen/006_i7_id7/enso_normal/enso/background/bg_fever_b_006.nulstb", 139 | "rendaEffect": "lumen/006_i7_id7/enso_normal/enso/renda_effect/renda_b_006.nulstb", 140 | "fever": "lumen/000_default/enso_normal/enso/fever/fever_effect0.nulstb", 141 | "donBg1p1": "lumen/006_i7_id7/enso_normal/enso/donbg/donbg_b_006_1p.nulstb", 142 | "donBg2p1": "lumen/006_i7_id7/enso_normal/enso/donbg/donbg_b_006_1p.nulstb", 143 
| "dancerDai1": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 144 | "dancer1": "lumen/006_i7_id7/enso_normal/enso/dancer/dance_b_006.nulstb", 145 | "danceNormalBg1": "lumen/006_i7_id7/enso_normal/enso/background/bg_nomal_b_006.nulstb", 146 | "danceFeverBg1": "lumen/006_i7_id7/enso_normal/enso/background/bg_fever_b_006.nulstb", 147 | "rendaEffect1": "lumen/006_i7_id7/enso_normal/enso/renda_effect/renda_b_006.nulstb", 148 | "fever1": "lumen/000_default/enso_normal/enso/fever/fever_effect0.nulstb" 149 | }, 150 | "010_imas": { 151 | "ensoPartsID1": 10, 152 | "ensoPartsID2": 10, 153 | "donBg1p": "lumen/010_imas/enso_normal/enso/donbg/donbg_b_010_1p.nulstb", 154 | "donBg2p": "lumen/010_imas/enso_normal/enso/donbg/donbg_b_010_2p.nulstb", 155 | "dancerDai": "lumen/010_imas/enso_normal/enso/background/dodai_b_010.nulstb", 156 | "dancer": "lumen/010_imas/enso_normal/enso/dancer/dance_b_010.nulstb", 157 | "danceNormalBg": "lumen/010_imas/enso_normal/enso/background/bg_nomal_b_010.nulstb", 158 | "danceFeverBg": "lumen/010_imas/enso_normal/enso/background/bg_fever_b_010.nulstb", 159 | "rendaEffect": "lumen/010_imas/enso_normal/enso/renda_effect/renda_b_010.nulstb", 160 | "fever": "lumen/010_imas/enso_normal/enso/fever/fever_b_010.nulstb", 161 | "donBg1p1": "lumen/010_imas/enso_normal/enso/donbg/donbg_b_010_1p.nulstb", 162 | "donBg2p1": "lumen/010_imas/enso_normal/enso/donbg/donbg_b_010_2p.nulstb", 163 | "dancerDai1": "lumen/010_imas/enso_normal/enso/background/dodai_b_010.nulstb", 164 | "dancer1": "lumen/010_imas/enso_normal/enso/dancer/dance_b_010.nulstb", 165 | "danceNormalBg1": "lumen/010_imas/enso_normal/enso/background/bg_nomal_b_010.nulstb", 166 | "danceFeverBg1": "lumen/010_imas/enso_normal/enso/background/bg_fever_b_010.nulstb", 167 | "rendaEffect1": "lumen/010_imas/enso_normal/enso/renda_effect/renda_b_010.nulstb", 168 | "fever1": "lumen/010_imas/enso_normal/enso/fever/fever_b_010.nulstb" 169 | }, 170 | "011_imas_cg": { 171 | "ensoPartsID1": 11, 172 | "ensoPartsID2": 11, 173 | "donBg1p": "lumen/011_imas_cg/enso_normal/enso/donbg/donbg_b_011_1p.nulstb", 174 | "donBg2p": "lumen/011_imas_cg/enso_normal/enso/donbg/donbg_b_011_2p.nulstb", 175 | "dancerDai": "lumen/010_imas/enso_normal/enso/background/dodai_b_010.nulstb", 176 | "dancer": "lumen/011_imas_cg/enso_normal/enso/dancer/dance_b_011.nulstb", 177 | "danceNormalBg": "lumen/010_imas/enso_normal/enso/background/bg_nomal_b_010.nulstb", 178 | "danceFeverBg": "lumen/010_imas/enso_normal/enso/background/bg_fever_b_010.nulstb", 179 | "rendaEffect": "lumen/010_imas/enso_normal/enso/renda_effect/renda_b_010.nulstb", 180 | "fever": "lumen/011_imas_cg/enso_normal/enso/fever/fever_b_011.nulstb", 181 | "donBg1p1": "lumen/011_imas_cg/enso_normal/enso/donbg/donbg_b_011_1p.nulstb", 182 | "donBg2p1": "lumen/011_imas_cg/enso_normal/enso/donbg/donbg_b_011_2p.nulstb", 183 | "dancerDai1": "lumen/010_imas/enso_normal/enso/background/dodai_b_010.nulstb", 184 | "dancer1": "lumen/011_imas_cg/enso_normal/enso/dancer/dance_b_011.nulstb", 185 | "danceNormalBg1": "lumen/010_imas/enso_normal/enso/background/bg_nomal_b_010.nulstb", 186 | "danceFeverBg1": "lumen/010_imas/enso_normal/enso/background/bg_fever_b_010.nulstb", 187 | "rendaEffect1": "lumen/010_imas/enso_normal/enso/renda_effect/renda_b_010.nulstb", 188 | "fever1": "lumen/011_imas_cg/enso_normal/enso/fever/fever_b_011.nulstb" 189 | }, 190 | "012_imas_ml": { 191 | "ensoPartsID1": 12, 192 | "ensoPartsID2": 12, 193 | "donBg1p": "lumen/012_imas_ml/enso_normal/enso/donbg/donbg_b_012_1p.nulstb", 
194 | "donBg2p": "lumen/012_imas_ml/enso_normal/enso/donbg/donbg_b_012_2p.nulstb", 195 | "dancerDai": "lumen/010_imas/enso_normal/enso/background/dodai_b_010.nulstb", 196 | "dancer": "lumen/012_imas_ml/enso_normal/enso/dancer/dance_b_012.nulstb", 197 | "danceNormalBg": "lumen/010_imas/enso_normal/enso/background/bg_nomal_b_010.nulstb", 198 | "danceFeverBg": "lumen/010_imas/enso_normal/enso/background/bg_fever_b_010.nulstb", 199 | "rendaEffect": "lumen/010_imas/enso_normal/enso/renda_effect/renda_b_010.nulstb", 200 | "fever": "lumen/012_imas_ml/enso_normal/enso/fever/fever_b_012.nulstb", 201 | "donBg1p1": "lumen/012_imas_ml/enso_normal/enso/donbg/donbg_b_012_1p.nulstb", 202 | "donBg2p1": "lumen/012_imas_ml/enso_normal/enso/donbg/donbg_b_012_2p.nulstb", 203 | "dancerDai1": "lumen/010_imas/enso_normal/enso/background/dodai_b_010.nulstb", 204 | "dancer1": "lumen/012_imas_ml/enso_normal/enso/dancer/dance_b_012.nulstb", 205 | "danceNormalBg1": "lumen/010_imas/enso_normal/enso/background/bg_nomal_b_010.nulstb", 206 | "danceFeverBg1": "lumen/010_imas/enso_normal/enso/background/bg_fever_b_010.nulstb", 207 | "rendaEffect1": "lumen/010_imas/enso_normal/enso/renda_effect/renda_b_010.nulstb", 208 | "fever1": "lumen/012_imas_ml/enso_normal/enso/fever/fever_b_012.nulstb" 209 | }, 210 | "013_imas_sidem": { 211 | "ensoPartsID1": 13, 212 | "ensoPartsID2": 13, 213 | "donBg1p": "lumen/013_imas_sidem/enso_normal/enso/donbg/donbg_b_013_1p.nulstb", 214 | "donBg2p": "lumen/013_imas_sidem/enso_normal/enso/donbg/donbg_b_013_2p.nulstb", 215 | "dancerDai": "lumen/013_imas_sidem/enso_normal/enso/background/dodai_b_013.nulstb", 216 | "dancer": "lumen/013_imas_sidem/enso_normal/enso/dancer/dance_b_013.nulstb", 217 | "danceNormalBg": "lumen/013_imas_sidem/enso_normal/enso/background/bg_nomal_b_013.nulstb", 218 | "danceFeverBg": "lumen/013_imas_sidem/enso_normal/enso/background/bg_fever_b_013.nulstb", 219 | "rendaEffect": "lumen/013_imas_sidem/enso_normal/enso/renda_effect/renda_b_013.nulstb", 220 | "fever": "lumen/013_imas_sidem/enso_normal/enso/fever/fever_b_013.nulstb", 221 | "donBg1p1": "lumen/013_imas_sidem/enso_normal/enso/donbg/donbg_b_013_1p.nulstb", 222 | "donBg2p1": "lumen/013_imas_sidem/enso_normal/enso/donbg/donbg_b_013_2p.nulstb", 223 | "dancerDai1": "lumen/013_imas_sidem/enso_normal/enso/background/dodai_b_013.nulstb", 224 | "dancer1": "lumen/013_imas_sidem/enso_normal/enso/dancer/dance_b_013.nulstb", 225 | "danceNormalBg1": "lumen/013_imas_sidem/enso_normal/enso/background/bg_nomal_b_013.nulstb", 226 | "danceFeverBg1": "lumen/013_imas_sidem/enso_normal/enso/background/bg_fever_b_013.nulstb", 227 | "rendaEffect1": "lumen/013_imas_sidem/enso_normal/enso/renda_effect/renda_b_013.nulstb", 228 | "fever1": "lumen/013_imas_sidem/enso_normal/enso/fever/fever_b_013.nulstb" 229 | }, 230 | "014_yokai": { 231 | "ensoPartsID1": 14, 232 | "ensoPartsID2": 14, 233 | "donBg1p": "lumen/014_yokai/enso_normal/enso/donbg/donbg_b_014_1p.nulstb", 234 | "donBg2p": "lumen/014_yokai/enso_normal/enso/donbg/donbg_b_014_1p.nulstb", 235 | "dancerDai": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 236 | "dancer": "lumen/014_yokai/enso_normal/enso/dancer/dance_b_014.nulstb", 237 | "danceNormalBg": "lumen/014_yokai/enso_normal/enso/background/bg_nomal_b_014.nulstb", 238 | "danceFeverBg": "lumen/014_yokai/enso_normal/enso/background/bg_fever_b_014.nulstb", 239 | "rendaEffect": "lumen/014_yokai/enso_normal/enso/renda_effect/renda_b_014.nulstb", 240 | "fever": "lumen/014_yokai/enso_normal/enso/fever/fever_b_014.nulstb", 
241 | "donBg1p1": "lumen/014_yokai/enso_normal/enso/donbg/donbg_b_014_1p.nulstb", 242 | "donBg2p1": "lumen/014_yokai/enso_normal/enso/donbg/donbg_b_014_1p.nulstb", 243 | "dancerDai1": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 244 | "dancer1": "lumen/014_yokai/enso_normal/enso/dancer/dance_b_014.nulstb", 245 | "danceNormalBg1": "lumen/014_yokai/enso_normal/enso/background/bg_nomal_b_014.nulstb", 246 | "danceFeverBg1": "lumen/014_yokai/enso_normal/enso/background/bg_fever_b_014.nulstb", 247 | "rendaEffect1": "lumen/014_yokai/enso_normal/enso/renda_effect/renda_b_014.nulstb", 248 | "fever1": "lumen/014_yokai/enso_normal/enso/fever/fever_b_014.nulstb" 249 | }, 250 | "015_yokai_mb": { 251 | "ensoPartsID1": 15, 252 | "ensoPartsID2": 15, 253 | "donBg1p": "lumen/014_yokai/enso_normal/enso/donbg/donbg_b_014_1p.nulstb", 254 | "donBg2p": "lumen/014_yokai/enso_normal/enso/donbg/donbg_b_014_1p.nulstb", 255 | "dancerDai": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 256 | "dancer": "lumen/015_yokai_mb/enso_normal/enso/dancer/dance_b_015.nulstb", 257 | "danceNormalBg": "lumen/014_yokai/enso_normal/enso/background/bg_nomal_b_014.nulstb", 258 | "danceFeverBg": "lumen/014_yokai/enso_normal/enso/background/bg_fever_b_014.nulstb", 259 | "rendaEffect": "lumen/014_yokai/enso_normal/enso/renda_effect/renda_b_014.nulstb", 260 | "fever": "lumen/014_yokai/enso_normal/enso/fever/fever_b_014.nulstb", 261 | "donBg1p1": "lumen/014_yokai/enso_normal/enso/donbg/donbg_b_014_1p.nulstb", 262 | "donBg2p1": "lumen/014_yokai/enso_normal/enso/donbg/donbg_b_014_1p.nulstb", 263 | "dancerDai1": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 264 | "dancer1": "lumen/015_yokai_mb/enso_normal/enso/dancer/dance_b_015.nulstb", 265 | "danceNormalBg1": "lumen/014_yokai/enso_normal/enso/background/bg_nomal_b_014.nulstb", 266 | "danceFeverBg1": "lumen/014_yokai/enso_normal/enso/background/bg_fever_b_014.nulstb", 267 | "rendaEffect1": "lumen/014_yokai/enso_normal/enso/renda_effect/renda_b_014.nulstb", 268 | "fever1": "lumen/014_yokai/enso_normal/enso/fever/fever_b_014.nulstb" 269 | }, 270 | "016_yokai_ht": { 271 | "ensoPartsID1": 16, 272 | "ensoPartsID2": 16, 273 | "donBg1p": "lumen/014_yokai/enso_normal/enso/donbg/donbg_b_014_1p.nulstb", 274 | "donBg2p": "lumen/014_yokai/enso_normal/enso/donbg/donbg_b_014_1p.nulstb", 275 | "dancerDai": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 276 | "dancer": "lumen/016_yokai_ht/enso_normal/enso/dancer/dance_b_016.nulstb", 277 | "danceNormalBg": "lumen/014_yokai/enso_normal/enso/background/bg_nomal_b_014.nulstb", 278 | "danceFeverBg": "lumen/014_yokai/enso_normal/enso/background/bg_fever_b_014.nulstb", 279 | "rendaEffect": "lumen/014_yokai/enso_normal/enso/renda_effect/renda_b_014.nulstb", 280 | "fever": "lumen/014_yokai/enso_normal/enso/fever/fever_b_014.nulstb", 281 | "donBg1p1": "lumen/014_yokai/enso_normal/enso/donbg/donbg_b_014_1p.nulstb", 282 | "donBg2p1": "lumen/014_yokai/enso_normal/enso/donbg/donbg_b_014_1p.nulstb", 283 | "dancerDai1": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 284 | "dancer1": "lumen/016_yokai_ht/enso_normal/enso/dancer/dance_b_016.nulstb", 285 | "danceNormalBg1": "lumen/014_yokai/enso_normal/enso/background/bg_nomal_b_014.nulstb", 286 | "danceFeverBg1": "lumen/014_yokai/enso_normal/enso/background/bg_fever_b_014.nulstb", 287 | "rendaEffect1": "lumen/014_yokai/enso_normal/enso/renda_effect/renda_b_014.nulstb", 288 | "fever1": 
"lumen/014_yokai/enso_normal/enso/fever/fever_b_014.nulstb" 289 | }, 290 | "019_mario": { 291 | "ensoPartsID1": 19, 292 | "ensoPartsID2": 19, 293 | "donBg1p": "lumen/019_mario/enso_normal/enso/donbg/donbg_b_019_1p.nulstb", 294 | "donBg2p": "lumen/019_mario/enso_normal/enso/donbg/donbg_b_019_2p.nulstb", 295 | "dancerDai": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 296 | "dancer": "lumen/019_mario/enso_normal/enso/dancer/dance_b_019.nulstb", 297 | "danceNormalBg": "lumen/019_mario/enso_normal/enso/background/bg_nomal_b_019.nulstb", 298 | "danceFeverBg": "lumen/019_mario/enso_normal/enso/background/bg_fever_b_019.nulstb", 299 | "rendaEffect": "lumen/019_mario/enso_normal/enso/renda_effect/renda_b_019.nulstb", 300 | "fever": "lumen/019_mario/enso_normal/enso/fever/fever_b_019.nulstb", 301 | "donBg1p1": "lumen/019_mario/enso_normal/enso/donbg/donbg_b_019_1p.nulstb", 302 | "donBg2p1": "lumen/019_mario/enso_normal/enso/donbg/donbg_b_019_2p.nulstb", 303 | "dancerDai1": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 304 | "dancer1": "lumen/019_mario/enso_normal/enso/dancer/dance_b_019.nulstb", 305 | "danceNormalBg1": "lumen/019_mario/enso_normal/enso/background/bg_nomal_b_019.nulstb", 306 | "danceFeverBg1": "lumen/019_mario/enso_normal/enso/background/bg_fever_b_019.nulstb", 307 | "rendaEffect1": "lumen/019_mario/enso_normal/enso/renda_effect/renda_b_019.nulstb", 308 | "fever1": "lumen/019_mario/enso_normal/enso/fever/fever_b_019.nulstb" 309 | }, 310 | "020_A3": { 311 | "ensoPartsID1": 20, 312 | "ensoPartsID2": 20, 313 | "donBg1p": "lumen/020_A3/enso_normal/enso/donbg/donbg_b_020_1p.nulstb", 314 | "donBg2p": "lumen/020_A3/enso_normal/enso/donbg/donbg_b_020_1p.nulstb", 315 | "dancerDai": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 316 | "dancer": "lumen/020_A3/enso_normal/enso/dancer/dance_b_020.nulstb", 317 | "danceNormalBg": "lumen/020_A3/enso_normal/enso/background/bg_nomal_b_020.nulstb", 318 | "danceFeverBg": "lumen/020_A3/enso_normal/enso/background/bg_fever_b_020.nulstb", 319 | "rendaEffect": "lumen/020_A3/enso_normal/enso/renda_effect/renda_b_020.nulstb", 320 | "fever": "lumen/000_default/enso_normal/enso/fever/fever_effect0.nulstb", 321 | "donBg1p1": "lumen/020_A3/enso_normal/enso/donbg/donbg_b_020_1p.nulstb", 322 | "donBg2p1": "lumen/020_A3/enso_normal/enso/donbg/donbg_b_020_1p.nulstb", 323 | "dancerDai1": "lumen/000_default/enso_normal/enso/background/bg_dai_a_00.nulstb", 324 | "dancer1": "lumen/020_A3/enso_normal/enso/dancer/dance_b_020.nulstb", 325 | "danceNormalBg1": "lumen/020_A3/enso_normal/enso/background/bg_nomal_b_020.nulstb", 326 | "danceFeverBg1": "lumen/020_A3/enso_normal/enso/background/bg_fever_b_020.nulstb", 327 | "rendaEffect1": "lumen/020_A3/enso_normal/enso/renda_effect/renda_b_020.nulstb", 328 | "fever1": "lumen/000_default/enso_normal/enso/fever/fever_effect0.nulstb" 329 | } 330 | } 331 | } 332 | 333 | # Create the config.json file if it doesn't exist 334 | if not os.path.exists(self.config_file_path): 335 | with open(self.config_file_path, 'w', encoding='utf-8') as config_file: 336 | json.dump(default_config, config_file, indent=4) 337 | 338 | # Load the configuration from the file 339 | with open(self.config_file_path, encoding="utf-8") as f: 340 | d = json.load(f) 341 | 342 | # Check for missing keys and update with defaults if necessary 343 | updated = False 344 | if 'datatableKey' not in d or not d['datatableKey']: 345 | d['datatableKey'] = default_config['datatableKey'] 346 | updated = True 347 | 
348 | if 'fumenKey' not in d or not d['fumenKey']: 349 | d['fumenKey'] = default_config['fumenKey'] 350 | updated = True 351 | 352 | if 'gameFilesOutDir' not in d: 353 | d['gameFilesOutDir'] = default_config['gameFilesOutDir'] 354 | updated = True 355 | 356 | if 'defaultRequiredRendaSpeeds' not in d: 357 | d['defaultRequiredRendaSpeeds'] = default_config['defaultRequiredRendaSpeeds'] 358 | if 'defaultRequiredRendaSpeed' in d: 359 | d['defaultRequiredRendaSpeeds'][3] = d['defaultRequiredRendaSpeed'] 360 | d['defaultRequiredRendaSpeeds'][4] = d['defaultRequiredRendaSpeed'] 361 | updated = True 362 | 363 | if 'autoCloseSearch' not in d: 364 | d['autoCloseSearch'] = default_config['autoCloseSearch'] 365 | updated = True 366 | 367 | if 'recalculateShinutiScoreWithRequiredRendaCount' not in d: 368 | d['recalculateShinutiScoreWithRequiredRendaCount'] = default_config['recalculateShinutiScoreWithRequiredRendaCount'] 369 | updated = True 370 | if 'dancers' not in d: 371 | d['dancers'] = default_config['dancers'] 372 | updated = True 373 | 374 | # If any updates were made, write back the updated config 375 | if updated: 376 | with open(self.config_file_path, 'w') as config_file: 377 | json.dump(d, config_file, indent=4) 378 | 379 | # Set class attributes 380 | self.datatable_key = d['datatableKey'] 381 | self.fumen_key = d['fumenKey'] 382 | self.game_files_out_dir = d['gameFilesOutDir'] 383 | self.default_required_renda_speeds = d['defaultRequiredRendaSpeeds'] 384 | self.auto_close_search = d['autoCloseSearch'] 385 | self.recalculate_shinuti_score_with_required_renda_count = d['recalculateShinutiScoreWithRequiredRendaCount'] 386 | self.dancers = d['dancers'] 387 | 388 | def update_keys(self, datatable_key: str, fumen_key: str) -> None: 389 | """Update the configuration and save it to the config.json file.""" 390 | # Update the class attributes 391 | self.datatable_key = datatable_key 392 | self.fumen_key = fumen_key 393 | self.write_back_to_json() 394 | 395 | def update_game_files_out_dir(self, game_files_out_dir: str): 396 | self.game_files_out_dir = game_files_out_dir 397 | self.write_back_to_json() 398 | 399 | def update_default_required_renda_speed(self, default_required_renda_speeds: list[float]): 400 | self.default_required_renda_speeds = default_required_renda_speeds 401 | self.write_back_to_json() 402 | 403 | def update_auto_close_search(self, auto_close_search: bool): 404 | self.auto_close_search = auto_close_search 405 | self.write_back_to_json() 406 | 407 | def update_recalculate_shinuti_score_with_required_renda_count(self, recalculate_shinuti_score_with_required_renda_count: bool): 408 | self.recalculate_shinuti_score_with_required_renda_count = recalculate_shinuti_score_with_required_renda_count 409 | self.write_back_to_json() 410 | 411 | def write_back_to_json(self): 412 | # Update the configuration dictionary 413 | updated_config = { 414 | "datatableKey": self.datatable_key, 415 | "fumenKey": self.fumen_key, 416 | "gameFilesOutDir": self.game_files_out_dir, 417 | "defaultRequiredRendaSpeeds": self.default_required_renda_speeds, 418 | "autoCloseSearch": self.auto_close_search, 419 | "recalculateShinutiScoreWithRequiredRendaCount": self.recalculate_shinuti_score_with_required_renda_count, 420 | "dancers": self.dancers 421 | } 422 | 423 | # Write the updated configuration back to the config.json file 424 | with open(self.config_file_path, 'w') as config_file: 425 | json.dump(updated_config, config_file, indent=4) 426 | 427 | # Instantiate and use the Config class 428 | config: Config = 
Config() 429 | -------------------------------------------------------------------------------- /src/constants.py: -------------------------------------------------------------------------------- 1 | 2 | GENRE_MAPPING = { 3 | "0. POP": 0, 4 | "1. Anime": 1, 5 | "2. Kids": 2, 6 | "3. VOCALOID™ Music": 3, 7 | "4. Game Music": 4, 8 | "5. NAMCO Original": 5, 9 | "6. Variety": 6, 10 | "7. Classic": 7, 11 | } 12 | 13 | GENRE_NAME_MAP = { 14 | 0: "0. POP", 15 | 1: "1. Anime", 16 | 2: "2. Kids", 17 | 3: "3. VOCALOID™ Music", 18 | 4: "4. Game Music", 19 | 5: "5. NAMCO Original", 20 | 6: "6. Variety", 21 | 7: "7. Classic" 22 | } 23 | 24 | GENRE_COLOURS = { 25 | 0: '#49d5eb', 26 | 1: '#fe90d2', 27 | 2: '#fe9e01', 28 | 3: '#cbcfde', 29 | 4: '#cc8aeb', 30 | 5: '#ff7028', 31 | 6: '#0acc2a', 32 | 7: '#ded523' 33 | } 34 | 35 | LANGUAGES = ('JPN', 'ENG', 'zh-tw', 'KOR', 'zh-cn') 36 | 37 | DIFFICULTIES = { 38 | 0: 'Easy', 39 | 1: 'Normal', 40 | 2: 'Hard', 41 | 3: 'Oni', 42 | 4: 'Ura' 43 | } 44 | -------------------------------------------------------------------------------- /src/encryption.py: -------------------------------------------------------------------------------- 1 | import gzip 2 | import json 3 | import os 4 | from pathlib import Path 5 | from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes 6 | from cryptography.hazmat.backends import default_backend 7 | from cryptography.hazmat.primitives import padding 8 | import binascii 9 | from src import config 10 | 11 | 12 | def read_iv_from_file(file_path): 13 | with open(file_path, "rb") as f: 14 | iv = f.read(16) 15 | if len(iv) != 16: 16 | raise Exception("Invalid file") 17 | return iv 18 | 19 | 20 | def pad_data(data): 21 | padder = padding.PKCS7(128).padder() 22 | return padder.update(data) + padder.finalize() 23 | 24 | 25 | def remove_pkcs7_padding(data): 26 | unpadder = padding.PKCS7(128).unpadder() 27 | return unpadder.update(data) + unpadder.finalize() 28 | 29 | 30 | def decrypt_file(input_file, is_fumen): 31 | # Convert the key from hex to bytes 32 | key = binascii.unhexlify(config.config.fumen_key if is_fumen else config.config.datatable_key) 33 | 34 | # Read the IV from the first 16 bytes of the input file 35 | iv = read_iv_from_file(input_file) 36 | 37 | # Create an AES cipher object with CBC mode 38 | cipher = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend()) 39 | decryptor = cipher.decryptor() 40 | # except Exception as error: 41 | # print(error) 42 | # print("You need to set the right AES keys in the encryption.py file") 43 | # exit(0) 44 | 45 | with open(input_file, "rb") as infile: 46 | # Skip the IV in the input file 47 | infile.seek(16) 48 | 49 | # Decrypt the file 50 | decrypted_data = b"" + decryptor.update(infile.read()) 51 | 52 | # Remove PKCS7 padding 53 | unpadded_data = remove_pkcs7_padding(decrypted_data) 54 | 55 | # Gzip decompress the data 56 | decompressed_data = gzip.decompress(unpadded_data) 57 | 58 | # return the decompressed data 59 | return decompressed_data 60 | 61 | 62 | def isJson(file: bytes): 63 | try: 64 | json.loads(file) 65 | return True 66 | except: 67 | return False 68 | 69 | 70 | def encrypt_file(input_file, is_fumen): 71 | # Convert the key from hex to bytes 72 | key = binascii.unhexlify(config.config.fumen_key if is_fumen else config.config.datatable_key) 73 | 74 | # Generate a random 128-bit IV 75 | iv = os.urandom(16) 76 | 77 | # Create an AES cipher object with CBC mode 78 | # try: 79 | cipher = Cipher(algorithms.AES(key), modes.CBC(iv), 
backend=default_backend()) 80 | encryptor = cipher.encryptor() 81 | # except Exception as error: 82 | # print(error) 83 | # print("You need to set the right AES keys in the encryption.py file") 84 | # exit(0) 85 | 86 | with open(input_file, "rb") as infile: 87 | # Read the entire file into memory 88 | data = infile.read() 89 | 90 | # Gzip compress the data 91 | compressed_data = gzip.compress(data) 92 | 93 | # Pad the compressed data, encrypt it, and return the encrypted result 94 | encrypted_data = ( 95 | encryptor.update(pad_data(compressed_data)) + encryptor.finalize() 96 | ) 97 | 98 | return iv + encrypted_data 99 | 100 | 101 | def save_file(file: bytes, outdir: str, encrypt: bool, is_fumen: bool = False): 102 | fileContent = ( 103 | decrypt_file(input_file=file, is_fumen=is_fumen) 104 | if not encrypt 105 | else encrypt_file(input_file=file, is_fumen=is_fumen) 106 | ) 107 | 108 | if isJson(fileContent): 109 | base = os.path.splitext(outdir)[0] 110 | outdir = base + ".json" 111 | else: 112 | base = os.path.splitext(outdir)[0] 113 | outdir = base + ".bin" 114 | 115 | print("Decrypting" if not encrypt else "Encrypting", file, "to", outdir) 116 | 117 | with open(outdir, "wb") as outfile: 118 | outfile.write(fileContent) -------------------------------------------------------------------------------- /src/fumen.py: -------------------------------------------------------------------------------- 1 | from typing import LiteralString 2 | 3 | from src.tja2fumen.parsers import parse_tja, parse_fumen 4 | from src.tja2fumen.converters import convert_tja_to_fumen, fix_dk_note_types_course 5 | from src.tja2fumen.writers import write_fumen 6 | from src.tja2fumen.constants import COURSE_IDS 7 | from src.tja2fumen.classes import TJACourse, TJAMeasure, TJAData 8 | from pydub import AudioSegment 9 | import os, re 10 | from src import encryption, nus3bank 11 | import tempfile, shutil 12 | 13 | def convert_and_write(tja_data: TJACourse, 14 | course_name: str, 15 | base_name: str, 16 | single_course: bool, 17 | temp_dir: str) -> list[str]: 18 | """Process the parsed data for a single TJA course.""" 19 | fumen_data = convert_tja_to_fumen(tja_data) 20 | # Fix don/ka types 21 | fix_dk_note_types_course(fumen_data) 22 | # Add course ID (e.g., '_x', '_x_1', '_x_2') to the output file's base name 23 | output_name = base_name 24 | if single_course: 25 | pass # Replicate tja2bin.exe behavior by excluding course ID 26 | else: 27 | split_name = course_name.split("P") # e.g., 'OniP2' -> ['Oni', '2'] 28 | output_name += f"_{COURSE_IDS[split_name[0]]}" 29 | 30 | # Write to the temp_dir instead of hardcoded 'temp' directory 31 | out_files = [f"{output_name}.bin", f"{output_name}_1.bin", f"{output_name}_2.bin"] 32 | for i in range(3): 33 | out_files[i] = os.path.join(temp_dir, out_files[i]) 34 | write_fumen(out_files[i], fumen_data) 35 | 36 | return out_files 37 | 38 | def normalize_fumen(sound_file: str, offset_ms: float, bpm: float) -> tuple[float, float]: 39 | """ 40 | Normalizes the fumen by adding a measure before the song starts. 41 | If the offset is positive, it adds both the offset and a full measure. 42 | If the offset is negative and less than one measure long, it extends it to a full measure. 43 | 44 | Args: 45 | sound_file (str): The file path to the sound file. 46 | offset_ms (float): The song's offset in milliseconds. 47 | bpm (float): The beats per minute (BPM) of the song. 48 | 49 | Returns: 50 | float, float: 51 | - The updated offset in milliseconds. 
52 | - The difference between the original offset and the updated offset in milliseconds. 53 | """ 54 | 55 | one_measure_ms = (60000.0 / bpm) * 4.0 56 | 57 | if offset_ms >= 0.0: 58 | # For positive offsets, add both the offset and a full measure 59 | length_to_add = offset_ms + one_measure_ms 60 | nus3bank.prepend_silent_to_audio(sound_file, int(length_to_add)) 61 | return -one_measure_ms, length_to_add 62 | else: 63 | # For negative offsets 64 | offset_ms_pos = -offset_ms 65 | if offset_ms_pos <= one_measure_ms: 66 | # Offset is less than one measure, extend it to become one measure long 67 | length_to_add = one_measure_ms - offset_ms_pos 68 | nus3bank.prepend_silent_to_audio(sound_file, int(length_to_add)) 69 | return -one_measure_ms, length_to_add 70 | else: 71 | # Offset is already longer than one measure, no change needed 72 | return offset_ms, 0.0 73 | 74 | 75 | 76 | def convert_tja_to_fumen_files(song_id: str, tja_file: str, audio_file: str, preview_point_ms: float, start_blank_length_ms: float, out_path: str) -> None: 77 | with tempfile.TemporaryDirectory() as temp_dir: 78 | fumen_out = os.path.join(out_path, 'fumen', song_id) 79 | sound_out = os.path.join(out_path, 'sound') 80 | 81 | if not os.path.exists(fumen_out): 82 | os.makedirs(fumen_out) 83 | if not os.path.exists(sound_out): 84 | os.makedirs(sound_out) 85 | 86 | parsed_tja = parse_tja(tja_file) 87 | 88 | temp_audio_path = nus3bank.convert_to_wav(audio_file, temp_dir) 89 | 90 | ## Normalize audio and add measure to all courses/branches 91 | offset_ms: float = parsed_tja.offset * 1000 92 | 93 | offset_ms, length_added = normalize_fumen(temp_audio_path, offset_ms, parsed_tja.bpm) 94 | 95 | preview_point_ms += length_added 96 | 97 | if start_blank_length_ms > 0: 98 | preview_point_ms += start_blank_length_ms 99 | nus3bank.prepend_silent_to_audio(temp_audio_path, int(start_blank_length_ms)) 100 | offset_ms -= start_blank_length_ms 101 | 102 | offset_s = offset_ms / 1000 103 | parsed_tja.offset = offset_s 104 | 105 | # Convert parsed TJA courses and write each course to `.bin` files inside temp_dir 106 | print(parsed_tja.courses.keys()) 107 | for course_name in parsed_tja.courses.keys(): 108 | parsed_tja.courses[course_name].offset = offset_s 109 | convert_and_write(parsed_tja.courses[course_name], course_name, song_id, 110 | single_course=len(parsed_tja.courses) == 1, 111 | temp_dir=temp_dir) # Use temp_dir for output 112 | 113 | # Encrypt TJAs from temp_dir 114 | for path, subdirs, files in os.walk(temp_dir): 115 | for name in files: 116 | if not re.match(rf'^{song_id}.*\.bin$', name): 117 | continue 118 | in_path = os.path.join(path, name) 119 | out_path = os.path.join(fumen_out, name) 120 | if os.path.isfile(in_path): 121 | encryption.save_file( 122 | file=in_path, # type: ignore 123 | outdir=out_path, 124 | encrypt=True, 125 | is_fumen=True 126 | ) 127 | 128 | #Convert to nus3bank and export 129 | nus3bank.wav_to_idsp_to_nus3bank(temp_audio_path, os.path.join(sound_out, f'song_{song_id}.nus3bank'), int(preview_point_ms), song_id) 130 | 131 | def get_offset_from_file(fumen_file: LiteralString | str | bytes ) -> float: 132 | with tempfile.TemporaryDirectory() as temp_dir: 133 | decrypted_file = os.path.join(temp_dir, 'decrypted.bin') 134 | encryption.save_file( 135 | file=fumen_file, # type: ignore 136 | outdir=decrypted_file, 137 | encrypt=False, 138 | is_fumen=True 139 | ) 140 | parsed = parse_fumen(decrypted_file) 141 | return parsed.measures[0].offset_start / 1000 142 | 143 | def add_ura_to_song(song_id, tja_file, 
out_path) -> None: 144 | fumen_out_dir = os.path.join(out_path, 'fumen', song_id) 145 | fumen_in = os.path.join(fumen_out_dir, f'{song_id}_m.bin') #Out dir is also in dir for the fumen file in to get offset 146 | if not os.path.isfile(fumen_in): 147 | raise Exception(f'file {fumen_in} not found') 148 | 149 | offset_s = get_offset_from_file(fumen_in) * -1 150 | 151 | with tempfile.TemporaryDirectory() as temp_dir: 152 | parsed_tja = parse_tja(tja_file) 153 | offset_s -= 4 * 60 / parsed_tja.bpm 154 | parsed_tja.offset = offset_s 155 | course = 'Ura' 156 | parsed_tja.courses[course].offset = offset_s 157 | unencrypted_files = convert_and_write(parsed_tja.courses[course], course, song_id, 158 | False, 159 | temp_dir=temp_dir) # Use temp_dir for output 160 | for in_path in unencrypted_files: 161 | if os.path.isfile(in_path): 162 | encryption.save_file( 163 | file=in_path, # type: ignore 164 | outdir=os.path.join(fumen_out_dir, os.path.basename(in_path)), 165 | encrypt=True, 166 | is_fumen=True 167 | ) -------------------------------------------------------------------------------- /src/nus3bank.py: -------------------------------------------------------------------------------- 1 | import argparse 2 | import subprocess 3 | import os 4 | import sys 5 | import shutil 6 | import tempfile 7 | import random 8 | import shutil 9 | from pydub import AudioSegment 10 | from pydub.exceptions import CouldntDecodeError 11 | from src import common 12 | 13 | #from idsp.py 14 | def convert_audio_to_idsp(input_file, output_file): 15 | temp_folder = tempfile.mkdtemp() 16 | try: 17 | if not input_file.lower().endswith('.wav'): 18 | temp_wav_file = os.path.join(temp_folder, "temp.wav") 19 | audio = AudioSegment.from_file(input_file) 20 | audio.export(temp_wav_file, format="wav", bitrate="16k") 21 | input_file = temp_wav_file 22 | 23 | vgaudio_cli_path = common.resource_path(os.path.join("bin", "VGAudioCli.exe")) 24 | subprocess.run([vgaudio_cli_path, "-i", input_file, "-o", output_file], check=True) 25 | finally: 26 | shutil.rmtree(temp_folder, ignore_errors=True) 27 | 28 | #from lopus.py 29 | def convert_audio_to_opus(input_file, output_file): 30 | # Create a unique temporary folder to store intermediate files 31 | temp_folder = tempfile.mkdtemp() 32 | 33 | try: 34 | # Check if the input file is already in WAV format 35 | if not input_file.lower().endswith('.wav'): 36 | # Load the input audio file using pydub and convert to WAV 37 | temp_wav_file = os.path.join(temp_folder, "temp.wav") 38 | audio = AudioSegment.from_file(input_file) 39 | audio = audio.set_frame_rate(48000) # Set frame rate to 48000 Hz 40 | audio.export(temp_wav_file, format="wav") 41 | input_file = temp_wav_file 42 | 43 | # Path to VGAudioCli executable 44 | vgaudio_cli_path = common.resource_path(os.path.join("bin", "VGAudioCli.exe")) 45 | 46 | # Run VGAudioCli to convert WAV to Switch OPUS 47 | subprocess.run([vgaudio_cli_path, "-i", input_file, "-o", output_file, "--opusheader", "namco"], check=True) 48 | 49 | finally: 50 | # Clean up temporary folder 51 | shutil.rmtree(temp_folder, ignore_errors=True) 52 | 53 | #from wav.py 54 | def convert_audio_to_wav(input_file, output_file): 55 | try: 56 | # Load the input audio file using pydub 57 | audio = AudioSegment.from_file(input_file) 58 | 59 | # Ensure the output file has a .wav extension 60 | if not output_file.lower().endswith('.wav'): 61 | output_file += '.wav' 62 | 63 | # Export the audio to WAV format 64 | audio.export(output_file, format="wav") 65 | 66 | except Exception as e: 67 | raise 
RuntimeError(f"Error during WAV conversion: {e}") 68 | 69 | #from at9.py 70 | def convert_audio_to_at9(input_file, output_file): 71 | # Create a unique temporary folder to store intermediate files 72 | temp_folder = tempfile.mkdtemp() 73 | 74 | try: 75 | # Check if the input file is already in WAV format 76 | if not input_file.lower().endswith('.wav'): 77 | # Load the input audio file using pydub and convert to WAV 78 | temp_wav_file = os.path.join(temp_folder, "temp.wav") 79 | audio = AudioSegment.from_file(input_file) 80 | audio.export(temp_wav_file, format="wav") 81 | input_file = temp_wav_file 82 | 83 | # Path to AT9Tool executable 84 | at9tool_cli_path = os.path.join("bin", "at9tool.exe") 85 | 86 | # Run VGAudioCli to convert WAV to AT9 87 | subprocess.run([at9tool_cli_path, "-e", "-br", "192", input_file, output_file], check=True) 88 | 89 | finally: 90 | # Clean up temporary folder 91 | shutil.rmtree(temp_folder, ignore_errors=True) 92 | 93 | # from bnsf.py 94 | def convert_to_mono_48k(input_file, output_file): 95 | """Convert input audio file to 16-bit mono WAV with 48000 Hz sample rate.""" 96 | try: 97 | audio = AudioSegment.from_file(input_file) 98 | audio = audio.set_channels(1) # Convert to mono 99 | audio = audio.set_frame_rate(48000) # Set frame rate to 48000 Hz 100 | audio = audio.set_sample_width(2) # Set sample width to 16-bit (2 bytes) 101 | audio.export(output_file, format='wav') 102 | except CouldntDecodeError: 103 | print(f"Error: Unable to decode {input_file}. Please provide a valid audio file.") 104 | sys.exit(1) 105 | 106 | def run_encode_tool(input_wav, output_bs): 107 | """Run external encode tool with specified arguments.""" 108 | 109 | subprocess.run([common.resource_path(os.path.join("bin", "encode.exe")), '0', input_wav, output_bs, '48000', '14000']) 110 | 111 | def modify_bnsf_template(output_bs, output_bnsf, header_size, total_samples): 112 | """Modify the BNSF template file with calculated values and combine with output.bs.""" 113 | # Calculate the file size of output.bs 114 | bs_file_size = os.path.getsize(output_bs) 115 | 116 | # Create modified BNSF data 117 | new_file_size = bs_file_size + header_size - 0x8 118 | total_samples_bytes = total_samples.to_bytes(4, 'big') 119 | bs_file_size_bytes = bs_file_size.to_bytes(4, 'big') 120 | 121 | # Read BNSF template data 122 | with open(common.resource_path('templates/header.bnsf'), 'rb') as template_file: 123 | bnsf_template_data = bytearray(template_file.read()) 124 | 125 | # Modify BNSF template with calculated values 126 | bnsf_template_data[0x4:0x8] = new_file_size.to_bytes(4, 'big') # File size 127 | bnsf_template_data[0x1C:0x20] = total_samples_bytes # Total sample count 128 | bnsf_template_data[0x2C:0x30] = bs_file_size_bytes # Size of output.bs 129 | 130 | # Append output.bs data to modified BNSF template 131 | with open(output_bs, 'rb') as bs_file: 132 | bs_data = bs_file.read() 133 | final_bnsf_data = bnsf_template_data + bs_data 134 | 135 | # Write final BNSF file 136 | with open(output_bnsf, 'wb') as output_file: 137 | output_file.write(final_bnsf_data) 138 | 139 | #from nus3.py 140 | def generate_random_uint16_hex(): 141 | return format(random.randint(0, 65535), '04X') 142 | 143 | def select_template_name(game, output_file): 144 | base_filename = os.path.splitext(output_file)[0] 145 | length = len(base_filename) 146 | 147 | if game == "nijiiro": 148 | if length == 8: 149 | return "song_ABC" 150 | elif length == 9: 151 | return "song_ABCD" 152 | elif length == 10: 153 | return "song_ABCDE" 154 | elif 
length == 11: 155 | return "song_ABCDEF" 156 | elif length == 12: 157 | return "song_ABCDEFG" 158 | elif length == 13: 159 | return "song_ABCDEFGH" 160 | elif game == "ps4": 161 | if length == 8: 162 | return "song_ABC" 163 | elif length == 9: 164 | return "song_ABCD" 165 | elif length == 10: 166 | return "song_ABCDE" 167 | elif length == 11: 168 | return "song_ABCDEF" 169 | elif game == "ns1": 170 | if length == 8: 171 | return "song_ABC" 172 | elif length == 9: 173 | return "song_ABCD" 174 | elif length == 10: 175 | return "song_ABCDE" 176 | elif length == 11: 177 | return "song_ABCDEF" 178 | elif game == "wiiu3": 179 | if length == 8: 180 | return "song_ABC" 181 | elif length == 9: 182 | return "song_ABCD" 183 | elif length == 10: 184 | return "song_ABCDE" 185 | elif length == 11: 186 | return "song_ABCDEF" 187 | 188 | raise ValueError("Unsupported game or output file name length.") 189 | 190 | def modify_nus3bank_template(game, template_name, audio_file, preview_point, output_file): 191 | game_templates = { 192 | "nijiiro": { 193 | "template_folder": "nijiiro", 194 | "templates": { 195 | "song_ABC": { 196 | "unique_id_offset": 176, 197 | "audio_size_offsets": [76, 1568, 1852], 198 | "preview_point_offset": 1724, 199 | "song_placeholder": "song_ABC", 200 | "template_file": "song_ABC.nus3bank" 201 | }, 202 | "song_ABCD": { 203 | "unique_id_offset": 176, 204 | "audio_size_offsets": [76, 1568, 1852], 205 | "preview_point_offset": 1724, 206 | "song_placeholder": "song_ABCD", 207 | "template_file": "song_ABCD.nus3bank" 208 | }, 209 | "song_ABCDE": { 210 | "unique_id_offset": 176, 211 | "audio_size_offsets": [76, 1568, 1852], 212 | "preview_point_offset": 1724, 213 | "song_placeholder": "song_ABCDE", 214 | "template_file": "song_ABCDE.nus3bank" 215 | }, 216 | "song_ABCDEF": { 217 | "unique_id_offset": 180, 218 | "audio_size_offsets": [76, 1576, 1868], 219 | "preview_point_offset": 1732, 220 | "song_placeholder": "song_ABCDEF", 221 | "template_file": "song_ABCDEF.nus3bank" 222 | }, 223 | "song_ABCDEFG": { 224 | "unique_id_offset": 180, 225 | "audio_size_offsets": [76, 1672, 1964], 226 | "preview_point_offset": 1824, 227 | "song_placeholder": "song_ABCDEFG", 228 | "template_file": "song_ABCDEFG.nus3bank" 229 | }, 230 | "song_ABCDEFGH": { 231 | "unique_id_offset": 180, 232 | "audio_size_offsets": [76, 1576, 1868], 233 | "preview_point_offset": 1732, 234 | "song_placeholder": "song_ABCDEFGH", 235 | "template_file": "song_ABCDEFGH.nus3bank" 236 | }, 237 | } 238 | }, 239 | "ns1": { 240 | "template_folder": "ns1", 241 | "templates": { 242 | "song_ABC": { 243 | "audio_size_offsets": [76, 5200, 5420], 244 | "preview_point_offset": 5324, 245 | "song_placeholder": "SONG_ABC", 246 | "template_file": "SONG_ABC.nus3bank" 247 | }, 248 | "song_ABCD": { 249 | "audio_size_offsets": [76, 5200, 5420], 250 | "preview_point_offset": 5324, 251 | "song_placeholder": "SONG_ABCD", 252 | "template_file": "SONG_ABCD.nus3bank" 253 | }, 254 | "song_ABCDE": { 255 | "audio_size_offsets": [76, 5200, 5404], 256 | "preview_point_offset": 5320, 257 | "song_placeholder": "SONG_ABCDE", 258 | "template_file": "SONG_ABCDE.nus3bank" 259 | }, 260 | "song_ABCDEF": { 261 | "audio_size_offsets": [76, 5208, 5420], 262 | "preview_point_offset": 5324, 263 | "song_placeholder": "SONG_ABCDEF", 264 | "template_file": "SONG_ABCDEF.nus3bank" 265 | } 266 | } 267 | }, 268 | "ps4": { 269 | "template_folder": "ps4", 270 | "templates": { 271 | "song_ABC": { 272 | "audio_size_offsets": [76, 3220, 3436], 273 | "preview_point_offset": 3344, 274 | 
"song_placeholder": "SONG_ABC", 275 | "template_file": "SONG_ABC.nus3bank" 276 | }, 277 | "song_ABCD": { 278 | "audio_size_offsets": [76, 3220, 3436], 279 | "preview_point_offset": 3344, 280 | "song_placeholder": "SONG_ABCD", 281 | "template_file": "SONG_ABCD.nus3bank" 282 | }, 283 | "song_ABCDE": { 284 | "audio_size_offsets": [76, 3220, 3436], 285 | "preview_point_offset": 3344, 286 | "song_placeholder": "SONG_ABCDE", 287 | "template_file": "SONG_ABCDE.nus3bank" 288 | }, 289 | "song_ABCDEF": { 290 | "audio_size_offsets": [76, 3228, 3452], 291 | "preview_point_offset": 3352, 292 | "song_placeholder": "SONG_ABCDEF", 293 | "template_file": "SONG_ABCDEF.nus3bank" 294 | } 295 | } 296 | }, 297 | "wiiu3": { 298 | "template_folder": "wiiu3", 299 | "templates": { 300 | "song_ABC": { 301 | "audio_size_offsets": [76, 3420, 3612], 302 | "preview_point_offset": 3540, 303 | "song_placeholder": "SONG_ABC", 304 | "template_file": "SONG_ABC.nus3bank" 305 | }, 306 | "song_ABCD": { 307 | "audio_size_offsets": [76, 3420, 3612], 308 | "preview_point_offset": 3540, 309 | "song_placeholder": "SONG_ABCD", 310 | "template_file": "SONG_ABCD.nus3bank" 311 | }, 312 | "song_ABCDE": { 313 | "audio_size_offsets": [76, 3420, 3612], 314 | "preview_point_offset": 3540, 315 | "song_placeholder": "SONG_ABCDE", 316 | "template_file": "SONG_ABCDE.nus3bank" 317 | }, 318 | "song_ABCDEF": { 319 | "audio_size_offsets": [76, 3428, 3612], 320 | "preview_point_offset": 3548, 321 | "song_placeholder": "SONG_ABCDEF", 322 | "template_file": "SONG_ABCDEF.nus3bank" 323 | } 324 | } 325 | }, 326 | } 327 | 328 | if game not in game_templates: 329 | raise ValueError("Unsupported game.") 330 | 331 | templates_config = game_templates[game] 332 | 333 | if template_name not in templates_config["templates"]: 334 | raise ValueError(f"Unsupported template for {game}.") 335 | 336 | template_config = templates_config["templates"][template_name] 337 | template_folder = templates_config["template_folder"] 338 | 339 | # Read template nus3bank file from the specified game's template folder 340 | template_file = common.resource_path(os.path.join("templates", template_folder, template_config['template_file'])) 341 | with open(template_file, 'rb') as f: 342 | template_data = bytearray(f.read()) 343 | 344 | # Set unique ID if it exists in the template configuration 345 | if 'unique_id_offset' in template_config: 346 | # Generate random UInt16 hex for unique ID 347 | unique_id_hex = generate_random_uint16_hex() 348 | # Set unique ID in the template data at the specified offset 349 | template_data[template_config['unique_id_offset']:template_config['unique_id_offset']+2] = bytes.fromhex(unique_id_hex) 350 | 351 | # Get size of the audio file in bytes 352 | audio_size = os.path.getsize(audio_file) 353 | 354 | # Convert audio size to UInt32 bytes in little-endian format 355 | size_bytes = audio_size.to_bytes(4, 'little') 356 | 357 | # Set audio size in the template data at the specified offsets 358 | for offset in template_config['audio_size_offsets']: 359 | template_data[offset:offset+4] = size_bytes 360 | 361 | # Convert preview point (milliseconds) to UInt32 bytes in little-endian format 362 | preview_point_ms = int(preview_point) 363 | preview_point_bytes = preview_point_ms.to_bytes(4, 'little') 364 | 365 | # Set preview point in the template data at the specified offset 366 | template_data[template_config['preview_point_offset']:template_config['preview_point_offset']+4] = preview_point_bytes 367 | 368 | # Replace song name placeholder with the output file 
name in bytes 369 | output_file_bytes = output_file.encode('utf-8') 370 | template_data = template_data.replace(template_config['song_placeholder'].encode('utf-8'), os.path.splitext(os.path.basename(output_file_bytes))[0]) 371 | # Append the audio file contents to the modified template data 372 | with open(audio_file, 'rb') as audio: 373 | template_data += audio.read() 374 | 375 | # Write the modified data to the output file 376 | with open(output_file, 'wb') as out: 377 | out.write(template_data) 378 | 379 | print(f"Created {output_file} successfully.") 380 | 381 | # from script.py 382 | def run_script(script_name, script_args): 383 | if script_name == "idsp": 384 | input_file, output_file = script_args 385 | convert_audio_to_idsp(input_file, output_file) 386 | elif script_name == "lopus": 387 | input_file, output_file = script_args 388 | convert_audio_to_opus(input_file, output_file) 389 | elif script_name == "at9": 390 | input_file, output_file = script_args 391 | convert_audio_to_at9(input_file, output_file) 392 | elif script_name == "wav": 393 | input_file, output_file = script_args 394 | convert_audio_to_wav(input_file, output_file) 395 | elif script_name == "bnsf": 396 | input_audio, output_bnsf = script_args 397 | temp_folder = 'temp' 398 | os.makedirs(temp_folder, exist_ok=True) 399 | output_wav = os.path.join(temp_folder, 'output_mono.wav') 400 | output_bs = os.path.join(temp_folder, 'output.bs') 401 | header_size = 0x30 402 | 403 | try: 404 | convert_to_mono_48k(input_audio, output_wav) 405 | run_encode_tool(output_wav, output_bs) 406 | mono_wav = AudioSegment.from_wav(output_wav) 407 | total_samples = len(mono_wav.get_array_of_samples()) 408 | modify_bnsf_template(output_bs, output_bnsf, header_size, total_samples) 409 | print("BNSF file created:", output_bnsf) 410 | finally: 411 | if os.path.exists(temp_folder): 412 | shutil.rmtree(temp_folder) 413 | elif script_name == "nus3": 414 | game, audio_file, preview_point, output_file = script_args 415 | template_name = select_template_name(game, output_file) 416 | modify_nus3bank_template(game, template_name, audio_file, preview_point, output_file) 417 | else: 418 | print(f"Unsupported script: {script_name}") 419 | sys.exit(1) 420 | 421 | #from conv.py 422 | def convert_audio_to_nus3bank(input_audio, audio_type, game, preview_point, song_id): 423 | output_filename = f"song_{song_id}.nus3bank" 424 | converted_audio_file = f"{input_audio}.{audio_type}" 425 | 426 | if audio_type in ["bnsf", "at9", "idsp", "lopus", "wav"]: 427 | conversion_command = ["python", __file__, audio_type, input_audio, converted_audio_file] 428 | nus3_command = ["python", __file__, "nus3", game, converted_audio_file, str(preview_point), output_filename] 429 | 430 | try: 431 | subprocess.run(conversion_command, check=True) 432 | subprocess.run(nus3_command, check=True) 433 | print(f"Conversion successful! Created {output_filename}") 434 | 435 | if os.path.exists(converted_audio_file): 436 | os.remove(converted_audio_file) 437 | print(f"Deleted {converted_audio_file}") 438 | except subprocess.CalledProcessError as e: 439 | print(f"Error: {e}") 440 | else: 441 | print(f"Unsupported audio type: {audio_type}") 442 | 443 | def prepend_silent_to_audio(audio_file: str, length_ms: int) -> None: 444 | """ 445 | Prepends a silent segment to the beginning of the audio file. 446 | 447 | Args: 448 | audio_file (str): Path to the audio file. 449 | length_ms (int): Duration of silence to add, in milliseconds. 
450 | 451 | """ 452 | audio = AudioSegment.from_wav(audio_file) 453 | silent_segment = AudioSegment.silent(duration=length_ms) 454 | combined = silent_segment + audio 455 | combined.export(audio_file, format="wav", parameters=["-acodec", "pcm_s16le"]) 456 | 457 | # def normalize_and_add_offset(input_audio: str, original_offset_ms: int, one_measure_ms: int, song_start_offset: int, temp_dir) -> str: 458 | # audio = AudioSegment.from_file(input_audio) 459 | # adjusted_audio = AudioSegment.silent(duration=one_measure_ms) + audio 460 | 461 | # if original_offset_ms > 0: 462 | # adjusted_audio = AudioSegment.silent(duration=original_offset_ms) + adjusted_audio 463 | # else: 464 | # print(len(adjusted_audio)) 465 | # adjusted_audio = adjusted_audio[-original_offset_ms:] if len(adjusted_audio) > -original_offset_ms else adjusted_audio 466 | # if song_start_offset > 0: 467 | # adjusted_audio = AudioSegment.silent(duration=song_start_offset) + adjusted_audio 468 | # export_file = os.path.join(temp_dir, 'normalized.wav') 469 | # adjusted_audio.export(export_file, format="wav", parameters=["-acodec", "pcm_s16le"]) 470 | # return export_file 471 | 472 | def convert_to_wav(input_file: str, temp_dir: str) -> str: 473 | """ 474 | Converts the input audio file to WAV format. 475 | 476 | Args: 477 | input_file (str): Path to the input audio file. 478 | temp_dir (str): Path to the temporary directory. 479 | 480 | Returns: 481 | str: Path to the converted WAV file. 482 | """ 483 | # Load the audio file using pydub 484 | audio = AudioSegment.from_file(input_file) 485 | 486 | audio = audio.set_channels(2) # Stereo 487 | audio = audio.set_frame_rate(44100) # 44.1 kHz 488 | audio = audio.set_sample_width(2) # 16-bit 489 | 490 | # Generate output file path 491 | base_name = os.path.splitext(os.path.basename(input_file))[0] 492 | output_file = os.path.join(temp_dir, f"{base_name}.wav") 493 | 494 | # Export as WAV 495 | audio.export(output_file, format="wav", parameters=["-acodec", "pcm_s16le"]) 496 | 497 | return output_file 498 | 499 | 500 | def wav_to_idsp_to_nus3bank(input_audio: str, out_file: str, preview_point: int, song_id: str): 501 | """Converts wav file to nus3bank file 502 | 503 | Args: 504 | input_audio (str): Path to the input audio file 505 | out_file (str): Path for the destination audio file 506 | preview_point (int): Preview point (DEMOSTART) of the song 507 | song_id (str): id of the song 508 | """ 509 | 510 | idsp_audio = f'{song_id}.idsp' 511 | convert_audio_to_idsp(input_audio, idsp_audio) 512 | try: 513 | template_name = select_template_name('nijiiro', f'song_{song_id}.nus3bank') 514 | modify_nus3bank_template('nijiiro', template_name, idsp_audio, preview_point, out_file) 515 | except Exception as e: 516 | raise e 517 | finally: 518 | if os.path.exists(idsp_audio): 519 | os.remove(idsp_audio) 520 | -------------------------------------------------------------------------------- /src/parse_tja.py: -------------------------------------------------------------------------------- 1 | """ 2 | Most of this code was adapted from https://github.com/WHMHammer/tja-tools and converted to Python. 3 | The original repository does not specify a license. Usage of said code 4 | is intended to fall under fair use for educational or non-commercial purposes. 5 | If there are any concerns or issues regarding the usage of this code, please 6 | contact @keifunky on discord. 
7 | """ 8 | 9 | import re 10 | from dataclasses import dataclass, field 11 | from typing import List, Dict 12 | from math import ceil, floor 13 | import typing 14 | 15 | from src.config import config 16 | 17 | HEADER_GLOBAL = [ 18 | 'TITLE', 19 | 'SUBTITLE', 20 | 'BPM', 21 | 'WAVE', 22 | 'OFFSET', 23 | 'DEMOSTART', 24 | 'GENRE', 25 | ] 26 | 27 | HEADER_COURSE = [ 28 | 'COURSE', 29 | 'LEVEL', 30 | 'BALLOON', 31 | 'SCOREINIT', 32 | 'SCOREDIFF', 33 | 'TTROWBEAT', 34 | ] 35 | 36 | COMMAND = [ 37 | 'START', 38 | 'END', 39 | 'GOGOSTART', 40 | 'GOGOEND', 41 | 'MEASURE', 42 | 'SCROLL', 43 | 'BPMCHANGE', 44 | 'DELAY', 45 | 'BRANCHSTART', 46 | 'BRANCHEND', 47 | 'SECTION', 48 | 'N', 49 | 'E', 50 | 'M', 51 | 'LEVELHOLD', 52 | 'BMSCROLL', 53 | 'HBSCROLL', 54 | 'BARLINEOFF', 55 | 'BARLINEON', 56 | 'TTBREAK', 57 | ] 58 | 59 | def parse_line(line): 60 | match = None 61 | 62 | # comment 63 | match = re.match(r'//.*', line) 64 | if match: 65 | line = line[:match.start()].strip() 66 | 67 | # header 68 | match = re.match(r'^([A-Z]+):(.+)', line, re.IGNORECASE) 69 | if match: 70 | name_upper = match.group(1).upper() 71 | value = match.group(2).strip() 72 | 73 | if name_upper in HEADER_GLOBAL: 74 | return { 75 | 'type': 'header', 76 | 'scope': 'global', 77 | 'name': name_upper, 78 | 'value': value 79 | } 80 | elif name_upper in HEADER_COURSE: 81 | return { 82 | 'type': 'header', 83 | 'scope': 'course', 84 | 'name': name_upper, 85 | 'value': value 86 | } 87 | 88 | # command 89 | match = re.match(r'^#([A-Z]+)(?:\s+(.+))?', line, re.IGNORECASE) 90 | if match: 91 | name_upper = match.group(1).upper() 92 | value = match.group(2) or '' 93 | 94 | if name_upper in COMMAND: 95 | return { 96 | 'type': 'command', 97 | 'name': name_upper, 98 | 'value': value.strip() 99 | } 100 | 101 | # data 102 | match = re.match(r'^(([0-9]|A|B|C|F|G|n|d|o|t|k)*,?)$', line) 103 | if match: 104 | data = match.group(1) 105 | return { 106 | 'type': 'data', 107 | 'data': data 108 | } 109 | 110 | return { 111 | 'type': 'unknown', 112 | 'value': line 113 | } 114 | 115 | def get_course(tja_headers, lines): 116 | headers = { 117 | 'course': 'Oni', 118 | 'level': 0, 119 | 'balloon': [], 120 | 'scoreInit': 100, 121 | 'scoreDiff': 100, 122 | 'ttRowBeat': 16 123 | } 124 | 125 | measures = [] 126 | 127 | measure_dividend = 4 128 | measure_divisor = 4 129 | measure_properties = {} 130 | measure_data = '' 131 | measure_events = [] 132 | current_branch = 'N' 133 | target_branch = 'N' 134 | flag_levelhold = False 135 | 136 | for line in lines: 137 | if line['type'] == 'header': 138 | if line['name'] == 'COURSE': 139 | headers['course'] = line['value'] 140 | elif line['name'] == 'LEVEL': 141 | headers['level'] = int(line['value']) 142 | elif line['name'] == 'BALLOON': 143 | headers['balloon'] = [int(b) for b in re.split(r'[^0-9]', line['value']) if b] 144 | elif line['name'] == 'SCOREINIT': 145 | headers['scoreInit'] = int(line['value']) 146 | elif line['name'] == 'SCOREDIFF': 147 | headers['scoreDiff'] = int(line['value']) 148 | elif line['name'] == 'TTROWBEAT': 149 | headers['ttRowBeat'] = int(line['value']) 150 | 151 | elif line['type'] == 'command': 152 | if line['name'] == 'BRANCHSTART': 153 | if not flag_levelhold: 154 | values = line['value'].split(',') 155 | if values[0] == 'r': 156 | if len(values) >= 3: 157 | target_branch = 'M' 158 | elif len(values) == 2: 159 | target_branch = 'E' 160 | else: 161 | target_branch = 'N' 162 | elif values[0] == 'p': 163 | if len(values) >= 3 and float(values[2]) <= 100: 164 | target_branch = 'M' 165 | elif 
len(values) >= 2 and float(values[1]) <= 100: 166 | target_branch = 'E' 167 | else: 168 | target_branch = 'N' 169 | 170 | elif line['name'] == 'BRANCHEND': 171 | current_branch = target_branch 172 | 173 | elif line['name'] in ('N', 'E', 'M'): 174 | current_branch = line['name'] 175 | 176 | elif line['name'] in ('START', 'END'): 177 | current_branch = 'N' 178 | target_branch = 'N' 179 | flag_levelhold = False 180 | 181 | else: 182 | if current_branch != target_branch: 183 | continue 184 | if line['name'] == 'MEASURE': 185 | match_measure = re.match(r'(\d+)/(\d+)', line['value']) 186 | if match_measure: 187 | measure_dividend = int(match_measure.group(1)) 188 | measure_divisor = int(match_measure.group(2)) 189 | 190 | elif line['name'] == 'GOGOSTART': 191 | measure_events.append({ 192 | 'name': 'gogoStart', 193 | 'position': len(measure_data) 194 | }) 195 | 196 | elif line['name'] == 'GOGOEND': 197 | measure_events.append({ 198 | 'name': 'gogoEnd', 199 | 'position': len(measure_data) 200 | }) 201 | 202 | elif line['name'] == 'SCROLL': 203 | measure_events.append({ 204 | 'name': 'scroll', 205 | 'position': len(measure_data), 206 | 'value': float(line['value']) 207 | }) 208 | 209 | elif line['name'] == 'BPMCHANGE': 210 | measure_events.append({ 211 | 'name': 'bpm', 212 | 'position': len(measure_data), 213 | 'value': float(line['value']) 214 | }) 215 | 216 | elif line['name'] == 'TTBREAK': 217 | measure_properties['ttBreak'] = True 218 | 219 | elif line['name'] == 'LEVELHOLD': 220 | flag_levelhold = True 221 | 222 | elif line['type'] == 'data' and current_branch == target_branch: 223 | data = line['data'] 224 | if data.endswith(','): 225 | measure_data += data[:-1] 226 | measures.append({ 227 | 'length': [measure_dividend, measure_divisor], 228 | 'properties': measure_properties, 229 | 'data': measure_data, 230 | 'events': measure_events 231 | }) 232 | measure_data = '' 233 | measure_events = [] 234 | measure_properties = {} 235 | else: 236 | measure_data += data 237 | 238 | if measures: 239 | first_bpm_event_found = any(evt['name'] == 'bpm' and evt['position'] == 0 for evt in measures[0]['events']) 240 | if not first_bpm_event_found: 241 | measures[0]['events'].insert(0, { 242 | 'name': 'bpm', 243 | 'position': 0, 244 | 'value': tja_headers['bpm'] 245 | }) 246 | 247 | course = { 248 | 'easy': 0, 249 | 'normal': 1, 250 | 'hard': 2, 251 | 'oni': 3, 252 | 'edit': 4, 253 | 'ura': 4, 254 | '0': 0, 255 | '1': 1, 256 | '2': 2, 257 | '3': 3, 258 | '4': 4 259 | }.get(headers['course'].lower(), 0) 260 | 261 | if measure_data: 262 | measures.append({ 263 | 'length': [measure_dividend, measure_divisor], 264 | 'properties': measure_properties, 265 | 'data': measure_data, 266 | 'events': measure_events 267 | }) 268 | else: 269 | for event in measure_events: 270 | event['position'] = len(measures[-1]['data']) 271 | measures[-1]['events'].append(event) 272 | 273 | return {'course': course, 'headers': headers, 'measures': measures} 274 | 275 | def parse_tja(tja): 276 | lines = [line.strip() for line in re.split(r'(\r\n|\r|\n)', tja) if line.strip()] 277 | 278 | headers = { 279 | 'title': '', 280 | 'subtitle': '', 281 | 'bpm': 120, 282 | 'wave': '', 283 | 'offset': 0, 284 | 'demoStart': 0, 285 | 'genre': '' 286 | } 287 | 288 | courses = {} 289 | course_lines = [] 290 | 291 | for line in lines: 292 | parsed = parse_line(line) 293 | 294 | if parsed['type'] == 'header' and parsed['scope'] == 'global': 295 | headers[parsed['name'].lower()] = parsed['value'] 296 | 297 | elif parsed['type'] == 'header' and 
parsed['scope'] == 'course': 298 | if parsed['name'] == 'COURSE' and course_lines: 299 | course = get_course(headers, course_lines) 300 | courses[course['course']] = course 301 | course_lines = [] 302 | 303 | course_lines.append(parsed) 304 | 305 | elif parsed['type'] in ('command', 'data'): 306 | course_lines.append(parsed) 307 | 308 | if course_lines: 309 | course = get_course(headers, course_lines) 310 | courses[course['course']] = course 311 | 312 | return {'headers': headers, 'courses': courses} 313 | 314 | def pulse_to_time(events, objects): 315 | bpm = 120 316 | passed_beat = 0 317 | passed_time = 0 318 | eidx = 0 319 | oidx = 0 320 | 321 | times = [] 322 | 323 | while oidx < len(objects): 324 | event = events[eidx] if eidx < len(events) else None 325 | obj_beat = objects[oidx] 326 | 327 | while event and event['beat'] <= obj_beat: 328 | if event['type'] == 'bpm': 329 | beat = event['beat'] - passed_beat 330 | time = 60 / bpm * beat 331 | 332 | passed_beat += beat 333 | passed_time += time 334 | bpm = float(event['value']) 335 | 336 | eidx += 1 337 | event = events[eidx] if eidx < len(events) else None 338 | 339 | beat = obj_beat - passed_beat 340 | time = 60 / bpm * beat 341 | times.append(passed_time + time) 342 | 343 | passed_beat += beat 344 | passed_time += time 345 | oidx += 1 346 | 347 | return times 348 | 349 | def convert_to_timed(course): 350 | events = [] 351 | notes = [] 352 | beat = 0 353 | balloon = 0 354 | imo = False 355 | 356 | for measure in course['measures']: 357 | length = measure['length'][0] / measure['length'][1] * 4 358 | 359 | for event in measure['events']: 360 | e_beat = length / (len(measure['data']) or 1) * event['position'] 361 | 362 | if event['name'] == 'bpm': 363 | events.append({ 364 | 'type': 'bpm', 365 | 'value': event['value'], 366 | 'beat': beat + e_beat, 367 | }) 368 | elif event['name'] == 'gogoStart': 369 | events.append({ 370 | 'type': 'gogoStart', 371 | 'beat': beat + e_beat, 372 | }) 373 | elif event['name'] == 'gogoEnd': 374 | events.append({ 375 | 'type': 'gogoEnd', 376 | 'beat': beat + e_beat, 377 | }) 378 | 379 | for d, ch in enumerate(measure['data']): 380 | n_beat = length / len(measure['data']) * d 381 | 382 | note = {'type': '', 'beat': beat + n_beat} 383 | 384 | if ch in ['1', 'n', 'd', 'o']: 385 | note['type'] = 'don' 386 | elif ch in ['2', 't', 'k']: 387 | note['type'] = 'kat' 388 | elif ch == '3' or ch == 'A': 389 | note['type'] = 'donBig' 390 | elif ch == '4' or ch == 'B': 391 | note['type'] = 'katBig' 392 | elif ch == '5': 393 | note['type'] = 'renda' 394 | elif ch == '6': 395 | note['type'] = 'rendaBig' 396 | elif ch == '7': 397 | note['type'] = 'balloon' 398 | note['count'] = course['headers']['balloon'][balloon] 399 | balloon += 1 400 | elif ch == '8': 401 | note['type'] = 'end' 402 | if imo: 403 | imo = False 404 | elif ch == '9': 405 | if not imo: 406 | note['type'] = 'balloon' 407 | note['count'] = course['headers']['balloon'][balloon] 408 | balloon += 1 409 | imo = True 410 | 411 | if note['type']: 412 | notes.append(note) 413 | 414 | beat += length 415 | 416 | # Assuming pulse_to_time is a pre-defined function 417 | times = pulse_to_time(events, [n['beat'] for n in notes]) 418 | for idx, t in enumerate(times): 419 | notes[idx]['time'] = t 420 | 421 | return {'headers': course['headers'], 'events': events, 'notes': notes} 422 | 423 | def get_statistics(course): 424 | # Initialize variables 425 | notes = [0, 0, 0, 0] 426 | rendas, balloons = [], [] 427 | start, end, combo = 0, 0, 0 428 | renda_start = -1 429 | 
balloon_start, balloon_count, balloon_gogo = False, 0, 0 430 | sc_cur_event_idx = 0 431 | sc_cur_event = course['events'][sc_cur_event_idx] 432 | sc_gogo = 0 433 | sc_notes = [[0, 0, 0, 0, 0], [0, 0, 0, 0, 0]] 434 | sc_balloon = [0, 0] 435 | sc_balloon_pop = [0, 0] 436 | sc_potential = 0 437 | current_bpm = 0 438 | bpm_at_renda_start = 0 439 | type_note = ['don', 'kat', 'donBig', 'katBig'] 440 | for i, note in enumerate(course['notes']): 441 | # Check and handle events 442 | if sc_cur_event and sc_cur_event['beat'] <= note['beat']: 443 | while sc_cur_event and sc_cur_event['beat'] <= note['beat']: 444 | if sc_cur_event['type'] == 'gogoStart': 445 | sc_gogo = 1 446 | elif sc_cur_event['type'] == 'gogoEnd': 447 | sc_gogo = 0 448 | elif sc_cur_event['type'] == 'bpm': 449 | current_bpm = float(sc_cur_event['value']) 450 | sc_cur_event_idx += 1 451 | if sc_cur_event_idx < len(course['events']): 452 | sc_cur_event = course['events'][sc_cur_event_idx] 453 | else: 454 | sc_cur_event = None 455 | 456 | v1 = type_note.index(note['type']) if note['type'] in type_note else -1 457 | if v1 != -1: 458 | if i == 0: 459 | start = note['time'] 460 | end = note['time'] 461 | 462 | notes[v1] += 1 463 | combo += 1 464 | 465 | big = v1 in (2, 3) 466 | sc_range = (0 if combo < 10 else 1 if combo < 30 else 2 if combo < 50 else 3 if combo < 100 else 4) 467 | sc_notes[sc_gogo][sc_range] += 2 if big else 1 468 | 469 | note_score_base = ( 470 | course['headers']['scoreInit'] + 471 | course['headers']['scoreDiff'] * (0 if combo < 10 else 1 if combo < 30 else 2 if combo < 50 else 4 if combo < 100 else 8) 472 | ) 473 | 474 | note_score = (note_score_base // 10) * 10 475 | if sc_gogo: 476 | note_score = (note_score * 1.2 // 10) * 10 477 | if big: 478 | note_score *= 2 479 | 480 | sc_potential += note_score 481 | 482 | continue 483 | 484 | if note['type'] in ('renda', 'rendaBig'): 485 | renda_start = note['time'] 486 | bpm_at_renda_start = current_bpm 487 | continue 488 | 489 | elif note['type'] == 'balloon': 490 | balloon_start = note['time'] 491 | balloon_count = note['count'] 492 | bpm_at_renda_start = current_bpm 493 | balloon_gogo = sc_gogo 494 | continue 495 | 496 | elif note['type'] == 'end': 497 | if renda_start != -1: 498 | rendas.append([note['time'] - renda_start, bpm_at_renda_start]) 499 | renda_start = -1 500 | elif balloon_start: 501 | balloon_length = note['time'] - balloon_start 502 | balloon_speed = balloon_count / balloon_length 503 | balloons.append([balloon_length, balloon_count, balloon_speed > 40, bpm_at_renda_start]) 504 | balloon_start = False 505 | 506 | if balloon_speed <= 60: 507 | sc_balloon[balloon_gogo] += balloon_count - 1 508 | sc_balloon_pop[balloon_gogo] += 1 509 | 510 | return { 511 | 'totalCombo': combo, 512 | 'notes': notes, 513 | 'length': end - start, 514 | 'rendas': rendas, 515 | 'balloons': balloons, 516 | 'score': { 517 | 'score': sc_potential, 518 | 'notes': sc_notes, 519 | 'balloon': sc_balloon, 520 | 'balloonPop': sc_balloon_pop, 521 | } 522 | } 523 | 524 | @dataclass 525 | class SongData: 526 | demo_start: float = 0.0 527 | title: str = "" 528 | sub: str = "" 529 | star: List[int] = field(default_factory=lambda: [0, 0, 0, 0, 0]) 530 | length: List[float] = field(default_factory=lambda: [0.0, 0.0, 0.0, 0.0, 0.0]) 531 | shinuti: List[int] = field(default_factory=lambda: [0, 0, 0, 0, 0]) 532 | shinuti_score: List[int] = field(default_factory=lambda: [0, 0, 0, 0, 0]) 533 | onpu_num: List[int] = field(default_factory=lambda: [0, 0, 0, 0, 0]) 534 | renda_time: List[float] = 
field(default_factory=lambda: [0.0, 0.0, 0.0, 0.0, 0.0]) 535 | fuusen_total: List[int] = field(default_factory=lambda: [0, 0, 0, 0, 0]) 536 | 537 | def parse_and_get_data(tja_file: str, shinuti_override: List[int] = None, required_renda_speed_override: List[float] = None) -> SongData: 538 | """Takes in a tja fname and returns a parse_tja.SongData object""" 539 | ret = SongData() 540 | file_str = None # Initialize file_str to avoid the UnboundLocalError 541 | 542 | # Try opening the file with different encodings 543 | encodings = ['utf-8-sig', 'shiftjis', 'shift_jisx0213', 'utf-8', 'shift_jis_2004', 'latin-1'] 544 | for encoding in encodings: 545 | try: 546 | with open(tja_file, encoding=encoding) as file: 547 | file_str = file.read() 548 | print(f'Successfully read file with {encoding} encoding') 549 | break # If successful, break out of the loop 550 | except Exception as e: 551 | print(f'Error reading file with {encoding} encoding: {e}') 552 | continue 553 | 554 | if file_str: 555 | # Proceed with the parsing if file_str has been assigned 556 | parsed = parse_tja(file_str) 557 | else: 558 | raise Exception("Failed to parse TJA File") 559 | 560 | ret.demo_start = float(parsed['headers']['demostart']) 561 | ret.title = parsed['headers']['title'] 562 | sub = parsed['headers']['subtitle'] 563 | ret.sub = sub[2::] if sub.startswith('--') else sub 564 | for i in parsed['courses'].keys(): 565 | ret.star[i] = parsed['courses'][i]['headers']['level'] 566 | stats = get_statistics(convert_to_timed(parsed['courses'][i])) 567 | ret.length[i] = stats['length'] 568 | ret.onpu_num[i] = stats['totalCombo'] 569 | 570 | impoppable_balloon_s = 0.0 #BTD reference??? 571 | impoppable_balloon_count = 0 572 | poppable_balloon_count = 0 573 | for time, count, impoppable, bpm_start in stats['balloons']: 574 | if impoppable: 575 | impoppable_balloon_s += time 576 | impoppable_balloon_count += count 577 | else: 578 | poppable_balloon_count += count 579 | 580 | for time, bpm_start in stats['rendas']: 581 | ret.renda_time[i] += time 582 | 583 | ret.fuusen_total[i] = poppable_balloon_count 584 | 585 | if required_renda_speed_override is not None and required_renda_speed_override[i] != 0: 586 | required_renda_speed = required_renda_speed_override[i] 587 | else: 588 | required_renda_speed = config.default_required_renda_speeds[i] 589 | 590 | if shinuti_override is not None and shinuti_override[i] != 0: 591 | shinuti = shinuti_override[i] 592 | else: 593 | shinuti = 0 594 | ret.shinuti[i], ret.shinuti_score[i], _ = calculate_shinuti_and_shinuti_score(ret.renda_time[i], impoppable_balloon_s, poppable_balloon_count, ret.onpu_num[i], required_renda_speed, shinuti=shinuti) 595 | 596 | return ret 597 | 598 | def calculate_shinuti_and_shinuti_score(roll_duration_s: float, impoppable_balloon_s: float, poppable_balloon_count: int, onpu_num: int, required_renda_speed: float, shinuti: int = 0): 599 | """ 600 | Setting shinuti will overwrite shinuti calculation 601 | :return: shinuti, shinuti_score 602 | """ 603 | roll_duration_int = round(roll_duration_s) + round(impoppable_balloon_s) 604 | roll_duration = roll_duration_s + impoppable_balloon_s 605 | if shinuti == 0: shinuti = ceil((100_000.0 - 10 * (floor(required_renda_speed * roll_duration_int / 1000) + poppable_balloon_count)) / onpu_num) * 10 606 | tenjyou = shinuti * onpu_num + 100 * (floor(required_renda_speed * roll_duration / 1000) + poppable_balloon_count) 607 | shinuti_score = tenjyou + floor(required_renda_speed * roll_duration) * 100 608 | return shinuti, 
shinuti_score, tenjyou 609 | 610 | def calculate_tenjyou_and_shinuti_score_from_renda_count(shinuti: int, poppable_balloon_count: int, onpu_num: int, required_renda_count: int) -> tuple[int, int]: 611 | tenjyou = shinuti * onpu_num + poppable_balloon_count * 100 612 | shinuti_score = tenjyou + floor(required_renda_count) * 100 613 | return tenjyou, shinuti_score 614 | 615 | -------------------------------------------------------------------------------- /src/tja2fumen/__init__.py: -------------------------------------------------------------------------------- 1 | """ 2 | Entry points for tja2fumen. 3 | """ 4 | 5 | import argparse 6 | import os 7 | import shutil 8 | import sys 9 | from typing import Sequence, Tuple, List 10 | 11 | from src.tja2fumen.parsers import parse_tja, parse_fumen 12 | from src.tja2fumen.converters import convert_tja_to_fumen, fix_dk_note_types_course 13 | from src.tja2fumen.writers import write_fumen 14 | from src.tja2fumen.constants import COURSE_IDS 15 | from src.tja2fumen.classes import TJACourse 16 | 17 | 18 | def main(argv: Sequence[str] = ()) -> None: 19 | """ 20 | Main entry point for tja2fumen's command line interface. 21 | """ 22 | if not argv: 23 | argv = sys.argv[1:] 24 | 25 | parser = argparse.ArgumentParser( 26 | formatter_class=argparse.RawDescriptionHelpFormatter, 27 | description=""" 28 | tja2fumen is a tool to convert TJA chart files (.tja) into fumen chart files (.bin), and to repair existing .bin files. 29 | 30 | tja2fumen can be used in 3 ways: 31 | - If a .tja file is provided, then three steps are performed: 32 | 1. Parse TJA into multiple TJACourse objects. Then, for each course: 33 | 2. Convert TJACourse objects into FumenCourse objects. 34 | 3. Write each FumenCourse to its own .bin file. 35 | 36 | - If a .bin file is provided, then the existing .bin is repaired: 37 | 1. Update don/kat senote types to do-ko-don and ka-kat. 38 | 2. Update timing windows to fix previous bug with Easy/Normal timing. 39 | 40 | - If a folder is provided, then all .tja and .bin files will be recursively 41 | processed according to the above logic. (Confirmation is required for safety.) 42 | """ 43 | ) 44 | parser.add_argument( 45 | "input", 46 | help="Path to a Taiko no Tatsujin chart file or folder.", 47 | ) 48 | args = parser.parse_args(argv) 49 | path_input = getattr(args, "input") 50 | if os.path.isdir(path_input): 51 | print(f"Folder passed to tja2fumen. " 52 | f"Looking for files in {path_input}...\n") 53 | tja_files, bin_files = parse_files(path_input) 54 | print("\nThe following TJA files will be CONVERTED:") 55 | for tja_file in tja_files: 56 | print(f" - {tja_file}") 57 | print("\nThe following BIN files will be REPAIRED:") 58 | for bin_file in bin_files: 59 | print(f" - {bin_file}") 60 | choice = input("\nDo you wish to continue? 
[y/n]") 61 | if choice.lower() != "y": 62 | sys.exit("'y' not selected, exiting.") 63 | print() 64 | files = tja_files + bin_files 65 | 66 | elif os.path.isfile(path_input): 67 | files = [path_input] 68 | else: 69 | raise FileNotFoundError("No such file or directory: " + path_input) 70 | 71 | for file in files: 72 | process_file(file) 73 | 74 | 75 | def parse_files(directory: str) -> Tuple[List[str], List[str]]: 76 | """Find all or .bin files within a directory.""" 77 | tja_files, bin_files = [], [] 78 | for root, _, files in os.walk(directory): 79 | for file in files: 80 | if file.endswith(".tja"): 81 | tja_files.append(os.path.join(root, file)) 82 | elif file.endswith(".bin"): 83 | if file.startswith("song_"): 84 | print(f"Skipping '{file}' because it starts with 'song_' " 85 | f"(probably an audio file, not a chart file).") 86 | continue 87 | bin_files.append(os.path.join(root, file)) 88 | return tja_files, bin_files 89 | 90 | 91 | def process_file(fname: str) -> None: 92 | """Process a single file path (TJA or BIN).""" 93 | if fname.endswith(".bin"): 94 | print(f"Repairing {fname}") 95 | repair_bin(fname) 96 | else: 97 | print(f"Converting {fname}") 98 | # Parse lines in TJA file 99 | parsed_tja = parse_tja(fname) 100 | 101 | # Convert parsed TJA courses and write each course to `.bin` files 102 | base_name = os.path.splitext(fname)[0] 103 | for course_name, course in parsed_tja.courses.items(): 104 | convert_and_write(course, course_name, base_name, 105 | single_course=len(parsed_tja.courses) == 1) 106 | 107 | 108 | def convert_and_write(tja_data: TJACourse, 109 | course_name: str, 110 | base_name: str, 111 | single_course: bool = False) -> None: 112 | """Process the parsed data for a single TJA course.""" 113 | fumen_data = convert_tja_to_fumen(tja_data) 114 | # fix don/ka types 115 | fix_dk_note_types_course(fumen_data) 116 | # Add course ID (e.g. '_x', '_x_1', '_x_2') to the output file's base name 117 | output_name = base_name 118 | if single_course: 119 | pass # Replicate tja2bin.exe behavior by excluding course ID 120 | else: 121 | split_name = course_name.split("P") # e.g. 'OniP2' -> ['Oni', '2'] 122 | output_name += f"_{COURSE_IDS[split_name[0]]}" 123 | if len(split_name) == 2: 124 | output_name += f"_{split_name[1]}" # Add "_1"/"_2" if P1/P2 chart 125 | write_fumen(f"{output_name}.bin", fumen_data) 126 | 127 | 128 | def repair_bin(fname_bin: str) -> None: 129 | """Repair the don/ka types of an existing .bin file.""" 130 | fumen_data = parse_fumen(fname_bin) 131 | # fix timing windows 132 | for course, course_id in COURSE_IDS.items(): 133 | if any(fname_bin.endswith(f"_{i}.bin") 134 | for i in [course_id, f"{course_id}_1", f"{course_id}_2"]): 135 | print(f" - Setting {course} timing windows...") 136 | fumen_data.header.set_timing_windows(difficulty=course) 137 | break 138 | else: 139 | print(f" - Can't infer difficulty {list(COURSE_IDS.values())} from " 140 | f"filename. 
Skipping timing window fix...") 141 | 142 | # fix don/ka types 143 | print(" - Fixing don/ka note types (do/ko/don, ka/kat)...") 144 | fix_dk_note_types_course(fumen_data) 145 | # write repaired fumen 146 | shutil.move(fname_bin, fname_bin+".bak") 147 | write_fumen(fname_bin, fumen_data) 148 | 149 | 150 | # NB: This entry point is necessary for the Pyinstaller executable 151 | if __name__ == "__main__": 152 | main() 153 | -------------------------------------------------------------------------------- /src/tja2fumen/classes.py: -------------------------------------------------------------------------------- 1 | """ 2 | Dataclasses used to represent song courses, branches, measures, and notes. 3 | """ 4 | 5 | import csv 6 | import os 7 | import struct 8 | from src import common 9 | from typing import Any, List, Dict, Tuple 10 | 11 | from dataclasses import dataclass, field, fields 12 | 13 | from src.tja2fumen.constants import BRANCH_NAMES, TIMING_WINDOWS 14 | 15 | 16 | @dataclass() 17 | class TJAData: 18 | """Contains the information for a single note or single command.""" 19 | name: str 20 | value: str 21 | pos: int # For TJAs, 'pos' is stored as an int rather than in milliseconds 22 | 23 | 24 | @dataclass() 25 | class TJAMeasure: 26 | """Contains all the data in a single TJA measure (denoted by ',').""" 27 | notes: List[str] = field(default_factory=list) 28 | events: List[TJAData] = field(default_factory=list) 29 | combined: List[TJAData] = field(default_factory=list) 30 | 31 | 32 | @dataclass() 33 | class TJACourse: 34 | """Contains all the data in a single TJA `COURSE:` section.""" 35 | bpm: float 36 | offset: float 37 | course: str 38 | level: int = 0 39 | balloon: List[int] = field(default_factory=list) 40 | score_init: int = 0 41 | score_diff: int = 0 42 | data: List[str] = field(default_factory=list) 43 | branches: Dict[str, List[TJAMeasure]] = field(default_factory=dict) 44 | 45 | 46 | @dataclass() 47 | class TJASong: 48 | """Contains all the data in a single TJA (`.tja`) chart file.""" 49 | bpm: float 50 | offset: float 51 | courses: Dict[str, TJACourse] 52 | 53 | 54 | @dataclass() 55 | class TJAMeasureProcessed: 56 | """ 57 | Contains all the data in a single TJA measure (denoted by ','), but with 58 | all `#COMMAND` lines processed, and their values stored as attributes. 59 | 60 | ((Note: Because only one BPM/SCROLL/GOGO value can be stored per measure, 61 | any TJA measures with mid-measure commands must be split up. 
So, the 62 | number of `TJAMeasureProcessed` objects will often be greater than 63 | the number of `TJAMeasure` objects for a given song.)) 64 | """ 65 | bpm: float 66 | scroll: float 67 | gogo: bool 68 | barline: bool 69 | time_sig: List[int] 70 | subdivisions: int 71 | pos_start: int = 0 72 | pos_end: int = 0 73 | delay: float = 0.0 74 | section: bool = False 75 | levelhold: bool = False 76 | senote: str = '' 77 | branch_type: str = '' 78 | branch_cond: Tuple[float, float] = (0.0, 0.0) 79 | notes: List[TJAData] = field(default_factory=list) 80 | 81 | 82 | @dataclass() 83 | class FumenNote: 84 | """Contains all the byte values for a single Fumen note.""" 85 | note_type: str = '' 86 | pos: float = 0.0 87 | pos_abs: float = 0.0 88 | diff: int = 0 89 | score_init: int = 0 90 | score_diff: int = 0 91 | padding: float = 0.0 92 | item: int = 0 93 | duration: float = 0.0 94 | multimeasure: bool = False 95 | hits: int = 0 96 | hits_padding: int = 0 97 | drumroll_bytes: bytes = b'\x00\x00\x00\x00\x00\x00\x00\x00' 98 | manually_set: bool = False 99 | 100 | 101 | @dataclass() 102 | class FumenBranch: 103 | """Contains all the data in a single Fumen branch.""" 104 | length: int = 0 105 | speed: float = 0.0 106 | padding: int = 0 107 | notes: List[FumenNote] = field(default_factory=list) 108 | 109 | 110 | @dataclass() 111 | class FumenMeasure: 112 | """Contains all the data in a single Fumen measure.""" 113 | bpm: float = 0.0 114 | offset_start: float = 0.0 115 | offset_end: float = 0.0 116 | duration: float = 0.0 117 | gogo: bool = False 118 | barline: bool = True 119 | branch_info: List[int] = field(default_factory=lambda: [-1] * 6) 120 | branches: Dict[str, FumenBranch] = field( 121 | default_factory=lambda: {b: FumenBranch() for b in BRANCH_NAMES} 122 | ) 123 | padding1: int = 0 124 | padding2: int = 0 125 | 126 | def set_duration(self, 127 | time_sig: List[int], 128 | measure_length: int, 129 | subdivisions: int) -> None: 130 | """Compute the millisecond duration of the measure.""" 131 | # First, we compute the duration for a full 4/4 measure. 132 | full_duration = 4 * 60_000 / self.bpm 133 | # Next, we adjust this duration based on both: 134 | # 1. The *actual* measure size (e.g. #MEASURE 1/8, 5/4, etc.) 135 | # 2. Whether this is a "submeasure" (i.e. whether it contains 136 | # mid-measure commands, which split up the measure) 137 | # - If this is a submeasure, then `measure_length` will be 138 | # less than the total number of subdivisions. 139 | # - In other words, `measure_ratio` will be less than 1.0. 
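# Illustrative example (hypothetical values, not from a real chart): at
# BPM 150, full_duration = 4 * 60_000 / 150 = 1600 ms; with #MEASURE 3/4,
# measure_size = 3 / 4 = 0.75; for a full, unsplit measure, measure_ratio
# is 1.0, so self.duration = 1600 * 0.75 * 1.0 = 1200 ms.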
140 | measure_size = time_sig[0] / time_sig[1] 141 | measure_ratio = ( 142 | 1.0 if subdivisions == 0.0 # Avoid DivisionByZeroErrors 143 | else (measure_length / subdivisions) 144 | ) 145 | self.duration = full_duration * measure_size * measure_ratio 146 | 147 | def set_first_ms_offsets(self, song_offset: float) -> None: 148 | """Compute the ms offsets for the start/end of the first measure.""" 149 | # First, start with song's OFFSET: metadata 150 | self.offset_start = song_offset * -1 * 1000 # s -> ms 151 | # Then, subtract a full 4/4 measure for the current BPM 152 | self.offset_start -= (4 * 60_000 / self.bpm) 153 | # Compute the end offset by adding the duration to the start offset 154 | self.offset_end = self.offset_start + self.duration 155 | 156 | def set_ms_offsets(self, 157 | delay: float, 158 | prev_measure: 'FumenMeasure') -> None: 159 | """Compute the ms offsets for the start/end of a given measure.""" 160 | # First, start with the end timing of the previous measure 161 | self.offset_start = prev_measure.offset_end 162 | # Add any #DELAY commands 163 | self.offset_start += delay 164 | # Adjust the start timing to account for #BPMCHANGE commands 165 | # (!!! Discovered by tana :3 !!!) 166 | self.offset_start += (4 * 60_000 / prev_measure.bpm) 167 | self.offset_start -= (4 * 60_000 / self.bpm) 168 | # Compute the end offset by adding the duration to the start offset 169 | self.offset_end = self.offset_start + self.duration 170 | 171 | def set_branch_info(self, 172 | branch_type: str, 173 | branch_cond: Tuple[float, float], 174 | branch_points_total: int, 175 | current_branch: str, 176 | has_levelhold: bool) -> None: 177 | """Compute the values that represent branching/diverge conditions.""" 178 | # If levelhold is set, force the branch to stay the same, 179 | # regardless of the value of the current branch condition. 180 | if has_levelhold: 181 | if current_branch == 'normal': 182 | self.branch_info[0:2] = [999, 999] # Forces fail/fail 183 | elif current_branch == 'professional': 184 | self.branch_info[2:4] = [0, 999] # Forces pass/fail 185 | elif current_branch == 'master': 186 | self.branch_info[4:6] = [0, 0] # Forces pass/pass 187 | 188 | # Handle branch conditions for percentage accuracy 189 | # There are three cases for interpreting #BRANCHSTART p: 190 | # 1. Percentage is between 0% and 100% 191 | # 2. Percentage is above 100% (guaranteed level down) 192 | # 3. Percentage is 0% (guaranteed level up) 193 | elif branch_type == 'p': 194 | vals = [] 195 | for percent in branch_cond: 196 | if 0 < percent <= 1: 197 | vals.append(int(branch_points_total * percent)) 198 | elif percent > 1: 199 | vals.append(999) 200 | else: 201 | vals.append(0) 202 | if current_branch == 'normal': 203 | self.branch_info[0:2] = vals 204 | elif current_branch == 'professional': 205 | self.branch_info[2:4] = vals 206 | elif current_branch == 'master': 207 | self.branch_info[4:6] = vals 208 | 209 | # Handle branch conditions for drumroll accuracy 210 | # There are three cases for interpreting #BRANCHSTART r: 211 | # 1. It's the first branching condition. 212 | # 2. It's not the first branching condition, but it 213 | # has a #SECTION command to reset the accuracy. 214 | # 3. It's not the first branching condition, and it 215 | # doesn't have a #SECTION command. 
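# Illustrative example (hypothetical chart): "#BRANCHSTART r,1,2" arrives
# here as branch_cond == (1.0, 2.0), so vals == [1, 2]: at least one
# drumroll hit is needed to branch up to 'professional', and at least two
# to branch up to 'master'.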
216 | elif branch_type == 'r': 217 | vals = [int(v) for v in branch_cond] 218 | if current_branch == 'normal': 219 | self.branch_info[0:2] = vals 220 | elif current_branch == 'professional': 221 | self.branch_info[2:4] = vals 222 | elif current_branch == 'master': 223 | self.branch_info[4:6] = vals 224 | 225 | 226 | @dataclass() 227 | class FumenHeader: 228 | """Contains all the byte values for a Fumen chart file's header.""" 229 | order: str = "<" 230 | b000_b431_timing_windows: Tuple[float, ...] = (0.0, 0.0, 0.0)*36 231 | b432_b435_has_branches: int = 0 232 | b436_b439_hp_max: int = 10000 233 | b440_b443_hp_clear: int = 8000 234 | b444_b447_hp_gain_good: int = 10 235 | b448_b451_hp_gain_ok: int = 5 236 | b452_b455_hp_loss_bad: int = -20 237 | b456_b459_normal_normal_ratio: int = 65536 238 | b460_b463_normal_professional_ratio: int = 65536 239 | b464_b467_normal_master_ratio: int = 65536 240 | b468_b471_branch_pts_good: int = 20 241 | b472_b475_branch_pts_ok: int = 10 242 | b476_b479_branch_pts_bad: int = 0 243 | b480_b483_branch_pts_drumroll: int = 1 244 | b484_b487_branch_pts_good_big: int = 20 245 | b488_b491_branch_pts_ok_big: int = 10 246 | b492_b495_branch_pts_drumroll_big: int = 1 247 | b496_b499_branch_pts_balloon: int = 30 248 | b500_b503_branch_pts_kusudama: int = 30 249 | b504_b507_branch_pts_unknown: int = 20 250 | b508_b511_dummy_data: int = 12345678 251 | b512_b515_number_of_measures: int = 0 252 | b516_b519_unknown_data: int = 0 253 | 254 | def parse_header_values(self, raw_bytes: bytes) -> None: 255 | """Parse a raw string of 520 bytes to get the header values.""" 256 | self._parse_order(raw_bytes) 257 | raw = raw_bytes # We use a shortened form just for visual clarity: 258 | self.b000_b431_timing_windows = self.unp(raw, "f"*108, 0, 431) 259 | self.b432_b435_has_branches = self.unp(raw, "i", 432, 435) 260 | self.b436_b439_hp_max = self.unp(raw, "i", 436, 439) 261 | self.b440_b443_hp_clear = self.unp(raw, "i", 440, 443) 262 | self.b444_b447_hp_gain_good = self.unp(raw, "i", 444, 447) 263 | self.b448_b451_hp_gain_ok = self.unp(raw, "i", 448, 451) 264 | self.b452_b455_hp_loss_bad = self.unp(raw, "i", 452, 455) 265 | self.b456_b459_normal_normal_ratio = self.unp(raw, "i", 456, 459) 266 | self.b460_b463_normal_professional_ratio = self.unp(raw, "i", 460, 463) 267 | self.b464_b467_normal_master_ratio = self.unp(raw, "i", 464, 467) 268 | self.b468_b471_branch_pts_good = self.unp(raw, "i", 468, 471) 269 | self.b472_b475_branch_pts_ok = self.unp(raw, "i", 472, 475) 270 | self.b476_b479_branch_pts_bad = self.unp(raw, "i", 476, 479) 271 | self.b480_b483_branch_pts_drumroll = self.unp(raw, "i", 480, 483) 272 | self.b484_b487_branch_pts_good_big = self.unp(raw, "i", 484, 487) 273 | self.b488_b491_branch_pts_ok_big = self.unp(raw, "i", 488, 491) 274 | self.b492_b495_branch_pts_drumroll_big = self.unp(raw, "i", 492, 495) 275 | self.b496_b499_branch_pts_balloon = self.unp(raw, "i", 496, 499) 276 | self.b500_b503_branch_pts_kusudama = self.unp(raw, "i", 500, 503) 277 | self.b504_b507_branch_pts_unknown = self.unp(raw, "i", 504, 507) 278 | self.b508_b511_dummy_data = self.unp(raw, "i", 508, 511) 279 | self.b512_b515_number_of_measures = self.unp(raw, "i", 512, 515) 280 | self.b516_b519_unknown_data = self.unp(raw, "i", 516, 519) 281 | 282 | def unp(self, raw_bytes: bytes, fmt: str, start: int, end: int) -> Any: 283 | """Unpack a raw byte string according to specific types.""" 284 | vals = struct.unpack(self.order + fmt, raw_bytes[start:end+1]) 285 | return vals[0] if len(vals) == 1 else 
vals 286 | 287 | def _parse_order(self, raw_bytes: bytes) -> None: 288 | """Parse the order of the song (little or big endian).""" 289 | self.order = '' 290 | # Bytes 512-515 are the number of measures. We check the values using 291 | # both little and big endian, then compare to see which is correct. 292 | if (self.unp(raw_bytes, ">I", 512, 515) < 293 | self.unp(raw_bytes, "<I", 512, 515)): 294 | self.order = ">" 295 | else: 296 | self.order = "<" 297 | 298 | def set_timing_windows(self, difficulty: str) -> None: 299 | """Set the timing window header bytes depending on the difficulty.""" 300 | # Note: Ura Oni is equivalent to Oni for timing window behavior 301 | difficulty = 'Oni' if difficulty in ['Ura', 'Edit'] else difficulty 302 | self.b000_b431_timing_windows = TIMING_WINDOWS[difficulty]*36 303 | 304 | def set_hp_bytes(self, n_notes: int, difficulty: str, stars: int) -> None: 305 | """Compute header bytes related to the soul gauge (HP) behavior.""" 306 | # Note: Ura Oni is equivalent to Oni for soul gauge behavior 307 | difficulty = 'Oni' if difficulty in ['Ura', 'Edit'] else difficulty 308 | self._get_hp_from_lookup_tables(n_notes, difficulty, stars) 309 | self.b440_b443_hp_clear = {'Easy': 6000, 'Normal': 7000, 310 | 'Hard': 7000, 'Oni': 8000}[difficulty] 311 | 312 | def _get_hp_from_lookup_tables(self, n_notes: int, difficulty: str, 313 | stars: int) -> None: 314 | """Fetch pre-computed soul gauge values from lookup tables (LUTs).""" 315 | if not 0 < n_notes <= 2500: 316 | return 317 | star_to_key = { 318 | 'Oni': {1: '17', 2: '17', 3: '17', 4: '17', 5: '17', 319 | 6: '17', 7: '17', 8: '8', 9: '910', 10: '910'}, 320 | 'Hard': {1: '12', 2: '12', 3: '3', 4: '4', 5: '58', 321 | 6: '58', 7: '58', 8: '58', 9: '58', 10: '58'}, 322 | 'Normal': {1: '12', 2: '12', 3: '3', 4: '4', 5: '57', 323 | 6: '57', 7: '57', 8: '57', 9: '57', 10: '57'}, 324 | 'Easy': {1: '1', 2: '23', 3: '23', 4: '45', 5: '45', 325 | 6: '45', 7: '45', 8: '45', 9: '45', 10: '45'}, 326 | } 327 | key = f"{difficulty}-{star_to_key[difficulty][stars]}" 328 | pkg_dir = os.path.dirname(os.path.realpath(__file__)) 329 | with open(common.resource_path("src/assets/hp_values.csv"), 330 | newline='', encoding="utf-8") as csv_file: 331 | for num, line in enumerate(csv.DictReader(csv_file)): 332 | if num+1 == n_notes: 333 | self.b444_b447_hp_gain_good = int(line[f"good_{key}"]) 334 | self.b448_b451_hp_gain_ok = int(line[f"ok_{key}"]) 335 | self.b452_b455_hp_loss_bad = int(line[f"bad_{key}"]) 336 | break 337 | 338 | @property 339 | def raw_bytes(self) -> bytes: 340 | """Represent the header values as a string of raw bytes.""" 341 | format_string, value_list = '', [] 342 | for byte_field in fields(self): 343 | value = getattr(self, byte_field.name) 344 | if byte_field.name == "order": 345 | format_string = value + format_string 346 | elif byte_field.name == "b000_b431_timing_windows": 347 | format_string += "f" * len(value) 348 | value_list.extend(list(value)) 349 | else: 350 | format_string += "i" 351 | value_list.append(value) 352 | raw_bytes = struct.pack(format_string, *value_list) 353 | assert len(raw_bytes) == 520 354 | return raw_bytes 355 | 356 | 357 | @dataclass() 358 | class FumenCourse: 359 | """Contains all the data in a single Fumen (`.bin`) chart file.""" 360 | header: FumenHeader 361 | measures: List[FumenMeasure] = field(default_factory=list) 362 | score_init: int = 0 363 | score_diff: int = 0 -------------------------------------------------------------------------------- /src/tja2fumen/constants.py: -------------------------------------------------------------------------------- 1 | """ 2 | Constant song properties of TJA and fumen files. 
3 | """ 4 | 5 | # Names for branches in diverge songs 6 | BRANCH_NAMES = ("normal", "professional", "master") 7 | 8 | # Types of notes that can be found in TJA files 9 | TJA_NOTE_TYPES = { 10 | '0': 'Blank', 11 | '1': 'Don', 12 | '2': 'Ka', 13 | '3': 'DON', 14 | '4': 'KA', 15 | '5': 'Drumroll', 16 | '6': 'DRUMROLL', 17 | '7': 'Balloon', 18 | '8': 'EndDRB', 19 | '9': 'Kusudama', 20 | 'A': 'DON2', # hands 21 | 'B': 'KA2', # hands 22 | 'C': 'Blank', # bombs 23 | 'D': 'Drumroll', # fuse roll 24 | 'E': 'DON2', # red + green single hit 25 | 'F': 'Ka', # ADLib (hidden note) 26 | 'G': 'KA2', # red + green double hit 27 | 'H': 'DRUMROLL', # double roll 28 | 'I': 'Drumroll', # green roll 29 | } 30 | 31 | # Conversion for TJAPlayer3's #SENOTECHANGE command 32 | SENOTECHANGE_TYPES = { 33 | 1: "Don", # ドン 34 | 2: "Don2", # ド 35 | 3: "Don3", # コ 36 | 4: "Ka", # カッ 37 | 5: "Ka2", # カ 38 | } 39 | 40 | # Types of notes that can be found in fumen files 41 | FUMEN_NOTE_TYPES = { 42 | 0x1: "Don", # ドン 43 | 0x2: "Don2", # ド 44 | 0x3: "Don3", # コ 45 | 0x4: "Ka", # カッ 46 | 0x5: "Ka2", # カ 47 | 0x6: "Drumroll", 48 | 0x7: "DON", 49 | 0x8: "KA", 50 | 0x9: "DRUMROLL", 51 | 0xa: "Balloon", 52 | 0xb: "DON2", # hands 53 | 0xc: "Kusudama", 54 | 0xd: "KA2", # hands 55 | 0xe: "Unknown1", # ? (Present in some Wii1 songs) 56 | 0xf: "Unknown2", # ? (Present in some PS4 songs) 57 | 0x10: "Unknown3", # ? (Present in some Wii1 songs) 58 | 0x11: "Unknown4", # ? (Present in some Wii1 songs) 59 | 0x12: "Unknown5", # ? (Present in some Wii4 songs) 60 | 0x13: "Unknown6", # ? (Present in some Wii1 songs) 61 | 0x14: "Unknown7", # ? (Present in some PS4 songs) 62 | 0x15: "Unknown8", # ? (Present in some Wii1 songs) 63 | 0x16: "Unknown9", # ? (Present in some Wii1 songs) 64 | 0x17: "Unknown10", # ? (Present in some Wii4 songs) 65 | 0x18: "Unknown11", # ? (Present in some PS4 songs) 66 | 0x19: "Unknown12", # ? (Present in some PS4 songs) 67 | 0x22: "Unknown13", # ? (Present in some Wii1 songs) 68 | 0x62: "Drumroll2" # ? 69 | } 70 | 71 | # Invert the dict to go from note type to fumen byte values 72 | FUMEN_TYPE_NOTES = {v: k for k, v in FUMEN_NOTE_TYPES.items()} 73 | 74 | # Normalize the various fumen course names into 1 name per difficulty 75 | NORMALIZE_COURSE = { 76 | '0': 'Easy', 77 | 'Easy': 'Easy', 78 | '1': 'Normal', 79 | 'Normal': 'Normal', 80 | '2': 'Hard', 81 | 'Hard': 'Hard', 82 | '3': 'Oni', 83 | 'Oni': 'Oni', 84 | '4': 'Ura', 85 | 'Ura': 'Ura', 86 | 'Edit': 'Ura' 87 | } 88 | 89 | # Fetch the 5 valid course names from NORMALIZE_COURSE's values 90 | COURSE_NAMES = list(set(NORMALIZE_COURSE.values())) 91 | 92 | # All combinations of difficulty and single/multiplayer type 93 | TJA_COURSE_NAMES = [] 94 | for difficulty in COURSE_NAMES: 95 | for player in ['', 'P1', 'P2']: 96 | TJA_COURSE_NAMES.append(difficulty+player) 97 | 98 | # Map course difficulty to filename IDs (e.g. 
Oni -> `song_m.bin`) 99 | COURSE_IDS = { 100 | 'Easy': 'e', 101 | 'Normal': 'n', 102 | 'Hard': 'h', 103 | 'Oni': 'm', 104 | 'Ura': 'x', 105 | } 106 | 107 | TIMING_WINDOWS = { 108 | # "GOOD" timing "OK" timing "BAD" timing 109 | 'Easy': (041.7083358764648, 108.441665649414, 125.125000000000), 110 | 'Normal': (041.7083358764648, 108.441665649414, 125.125000000000), 111 | 'Hard': (025.0250015258789, 075.075004577637, 108.441665649414), 112 | 'Oni': (025.0250015258789, 075.075004577637, 108.441665649414) 113 | } -------------------------------------------------------------------------------- /src/tja2fumen/converters.py: -------------------------------------------------------------------------------- 1 | """ 2 | Functions for converting TJA song data to Fumen song data. 3 | """ 4 | 5 | import re 6 | import warnings 7 | from typing import List, Dict, Tuple, Union 8 | 9 | from src.tja2fumen.classes import (TJACourse, TJAMeasure, TJAMeasureProcessed, 10 | FumenCourse, FumenHeader, FumenMeasure, 11 | FumenNote) 12 | from src.tja2fumen.constants import BRANCH_NAMES, SENOTECHANGE_TYPES 13 | 14 | 15 | def process_commands(tja_branches: Dict[str, List[TJAMeasure]], bpm: float) \ 16 | -> Dict[str, List[TJAMeasureProcessed]]: 17 | """ 18 | Process all commands in each measure. 19 | 20 | This function takes care of two main tasks: 21 | 1. Keeping track of what the current values are for BPM, scroll, 22 | gogotime, barline, and time signature (i.e. #MEASURE). 23 | 2. Detecting when a command is placed in the middle of a measure, 24 | and splitting that measure into sub-measures. 25 | 26 | ((Note: We split measures into sub-measures because official `.bin` files 27 | can only have 1 value for BPM/SCROLL/GOGO per measure. So, if a TJA 28 | measure has multiple BPMs/SCROLLs/GOGOs, it has to be split up.)) 29 | 30 | After this function is finished, all the #COMMANDS will be gone, and each 31 | measure will have attributes (e.g. measure.bpm, measure.scroll) instead. 32 | """ 33 | tja_branches_processed: Dict[str, List[TJAMeasureProcessed]] = { 34 | branch_name: [] for branch_name in tja_branches.keys() 35 | } 36 | for branch_name, branch_measures_tja in tja_branches.items(): 37 | current_bpm = bpm 38 | current_scroll = 1.0 39 | current_gogo = False 40 | current_barline = True 41 | current_senote = "" 42 | current_dividend = 4 43 | current_divisor = 4 44 | for measure_tja in branch_measures_tja: 45 | measure_tja_processed = TJAMeasureProcessed( 46 | bpm=current_bpm, 47 | scroll=current_scroll, 48 | gogo=current_gogo, 49 | barline=current_barline, 50 | time_sig=[current_dividend, current_divisor], 51 | subdivisions=len(measure_tja.notes), 52 | ) 53 | for data in measure_tja.combined: 54 | # Handle note data 55 | if data.name == 'note': 56 | measure_tja_processed.notes.append(data) 57 | 58 | # Handle commands that can only be placed between measures 59 | # (i.e. 
no mid-measure variations) 60 | elif data.name == 'delay': 61 | measure_tja_processed.delay = float(data.value) * 1000 62 | elif data.name == 'branch_start': 63 | branch_parts = data.value.split(',') 64 | if len(branch_parts) != 3: 65 | raise ValueError(f"#BRANCHSTART must have 3 comma-" 66 | f"separated values, but got " 67 | f"'{data.value}' instead.") 68 | branch_type, val1, val2 = branch_parts 69 | if branch_type.lower() == 'r': # r = drumRoll 70 | branch_cond = (float(val1), float(val2)) 71 | elif branch_type.lower() == 'p': # p = Percentage 72 | branch_cond = (float(val1)/100, float(val2)/100) 73 | else: 74 | raise ValueError(f"Invalid #BRANCHSTART type: " 75 | f"'{branch_type}'.") 76 | measure_tja_processed.branch_type = branch_type 77 | measure_tja_processed.branch_cond = branch_cond 78 | elif data.name == 'section': 79 | measure_tja_processed.section = bool(data.value) 80 | elif data.name == 'levelhold': 81 | measure_tja_processed.levelhold = True 82 | elif data.name == 'barline': 83 | current_barline = bool(int(data.value)) 84 | measure_tja_processed.barline = current_barline 85 | elif data.name == 'measure': 86 | match_measure = re.match(r"(\d+)/(\d+)", data.value) 87 | if not match_measure: 88 | continue 89 | current_dividend = int(match_measure.group(1)) 90 | current_divisor = int(match_measure.group(2)) 91 | measure_tja_processed.time_sig = [current_dividend, 92 | current_divisor] 93 | 94 | # Handle commands that can be placed in the middle of a 95 | # measure. (For fumen files, if there is a mid-measure change 96 | # to BPM/SCROLL/GOGO, then the measure will actually be split 97 | # into two small submeasures. So, we need to start a new 98 | # measure in those cases.) 99 | elif data.name in ['bpm', 'scroll', 'gogo', 'senote']: 100 | # Parse the values 101 | new_val: Union[bool, float, str] 102 | if data.name == 'bpm': 103 | new_val = current_bpm = float(data.value) 104 | elif data.name == 'scroll': 105 | new_val = current_scroll = float(data.value) 106 | elif data.name == 'gogo': 107 | new_val = current_gogo = bool(int(data.value)) 108 | elif data.name == 'senote': 109 | new_val = current_senote \ 110 | = SENOTECHANGE_TYPES[int(data.value)] 111 | # Check for mid-measure commands 112 | # - Case 1: Command happens at the start of a measure; 113 | # just change the value directly 114 | if data.pos == 0: 115 | setattr(measure_tja_processed, data.name, 116 | new_val) # noqa: new_val will always be set 117 | # - Case 2: Command happens in the middle of a measure; 118 | # start a new sub-measure 119 | else: 120 | measure_tja_processed.pos_end = data.pos 121 | tja_branches_processed[branch_name]\ 122 | .append(measure_tja_processed) 123 | measure_tja_processed = TJAMeasureProcessed( 124 | bpm=current_bpm, 125 | scroll=current_scroll, 126 | gogo=current_gogo, 127 | barline=current_barline, 128 | time_sig=[current_dividend, current_divisor], 129 | subdivisions=len(measure_tja.notes), 130 | pos_start=data.pos, 131 | senote=current_senote 132 | ) 133 | # SENOTECHANGE commands don't carry over to next branch. 134 | # (But they CAN happen mid-measure, which is why we 135 | # process them here.) 
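# Illustrative example (hypothetical measure): a 16-note measure with a
# #BPMCHANGE placed before its 9th note is split into two
# TJAMeasureProcessed objects, one spanning positions 0-8 at the old BPM
# and one spanning positions 8-16 at the new BPM.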
136 | current_senote = "" 137 | 138 | else: 139 | warnings.warn(f"Unexpected event type: {data.name}") 140 | 141 | measure_tja_processed.pos_end = len(measure_tja.notes) 142 | tja_branches_processed[branch_name].append(measure_tja_processed) 143 | 144 | has_branches = all(len(b) for b in tja_branches_processed.values()) 145 | if has_branches: 146 | if len({len(b) for b in tja_branches_processed.values()}) != 1: 147 | raise ValueError( 148 | "Branches do not have the same number of measures. (This " 149 | "check was performed after splitting up the measures due " 150 | "to mid-measure commands. Please check any GOGO, BPMCHANGE, " 151 | "and SCROLL commands you have in your branches, and make sure " 152 | "that each branch has the same number of commands.)" 153 | ) 154 | 155 | return tja_branches_processed 156 | 157 | 158 | def convert_tja_to_fumen(tja: TJACourse) -> FumenCourse: 159 | """ 160 | Convert TJA data to Fumen data by calculating Fumen-specific values. 161 | 162 | Fumen files (`.bin`) use a very strict file structure. Certain values are 163 | expected at very specific byte locations in the file, such as: 164 | - Header metadata (first 520 bytes). The header stores information such 165 | as branch points for each note type, soul gauge behavior, etc. 166 | - Note data (millisecond offset values, drumroll duration, etc.) 167 | - Branch condition info for each measure 168 | 169 | Since TJA files only contain notes and commands, we must compute all of 170 | these extra values ourselves. The values are then stored in new "Fumen" 171 | Python objects that mimic the structure of the fumen `.bin` files: 172 | 173 | FumenCourse 174 | ├─ FumenMeasure 175 | │ ├─ FumenBranch ('normal') 176 | │ │ ├─ FumenNote 177 | │ │ ├─ FumenNote 178 | │ │ └─ ... 179 | │ ├─ FumenBranch ('professional') 180 | │ └─ FumenBranch ('master') 181 | ├─ FumenMeasure 182 | ├─ FumenMeasure 183 | └─ ... 184 | 185 | ((Note: The fumen file structure is the opposite of the TJA file structure; 186 | branch data is stored within the measure object, rather than measure data 187 | being stored within the branch object.)) 188 | """ 189 | # Preprocess commands 190 | tja_branches_processed = process_commands(tja.branches, tja.bpm) 191 | 192 | # Pre-allocate the measures for the converted TJA 193 | n_measures = len(tja_branches_processed['normal']) 194 | fumen = FumenCourse( 195 | measures=[FumenMeasure() for _ in range(n_measures)], 196 | header=FumenHeader(), 197 | score_init=tja.score_init, 198 | score_diff=tja.score_diff, 199 | ) 200 | 201 | # Set song metadata using information from the processed measures 202 | fumen.header.b512_b515_number_of_measures = n_measures 203 | fumen.header.b432_b435_has_branches = int(all( 204 | len(b) for b in tja_branches_processed.values() 205 | )) 206 | 207 | # Use a single copy of the course balloons (since we use .pop()) 208 | course_balloons = tja.balloon.copy() 209 | 210 | # Iterate through the different branches in the TJA 211 | total_notes = {'normal': 0, 'professional': 0, 'master': 0} 212 | for current_branch, branch_tja in tja_branches_processed.items(): 213 | # Skip empty branches (e.g. 
'professional', 'master') 214 | if not branch_tja: 215 | continue 216 | 217 | # Track properties that will change over the course of the song 218 | branch_points_total = 0 219 | branch_points_measure = 0 220 | current_drumroll = FumenNote() 221 | current_levelhold = False 222 | branch_types: List[str] = [] 223 | branch_conditions: List[Tuple[float, float]] = [] 224 | 225 | # Iterate over pairs of TJA and Fumen measures 226 | for idx_m, (measure_tja, measure_fumen) in \ 227 | enumerate(zip(branch_tja, fumen.measures)): 228 | 229 | # Copy over basic measure properties from the TJA 230 | measure_fumen.branches[current_branch].speed = measure_tja.scroll 231 | measure_fumen.gogo = measure_tja.gogo 232 | measure_fumen.bpm = measure_tja.bpm 233 | 234 | # Compute the duration of the measure 235 | measure_length = measure_tja.pos_end - measure_tja.pos_start 236 | measure_fumen.set_duration( 237 | time_sig=measure_tja.time_sig, 238 | measure_length=measure_length, 239 | subdivisions=measure_tja.subdivisions 240 | ) 241 | 242 | # Compute the millisecond offsets for the start/end of each measure 243 | if idx_m == 0: 244 | measure_fumen.set_first_ms_offsets(song_offset=tja.offset) 245 | else: 246 | measure_fumen.set_ms_offsets( 247 | delay=measure_tja.delay, 248 | prev_measure=fumen.measures[idx_m-1], 249 | ) 250 | 251 | # Handle whether barline should be hidden: 252 | # 1. Measures where #BARLINEOFF has been set 253 | # 2. Sub-measures that don't fall on the barline 254 | barline_off = measure_tja.barline is False 255 | is_submeasure = (measure_length < measure_tja.subdivisions and 256 | measure_tja.pos_start != 0) 257 | if barline_off or is_submeasure: 258 | measure_fumen.barline = False 259 | 260 | # Check to see if the measure contains a branching condition 261 | branch_type = measure_tja.branch_type 262 | branch_cond = measure_tja.branch_cond 263 | if branch_type and branch_cond: 264 | # Update the branch_info values for the measure 265 | measure_fumen.set_branch_info( 266 | branch_type, branch_cond, 267 | branch_points_total, current_branch, 268 | has_levelhold=current_levelhold 269 | ) 270 | # Reset the points to prepare for the next `#BRANCHSTART p` 271 | branch_points_total = 0 272 | # Reset the levelhold value (so that future branch_conditions 273 | # work normally) 274 | current_levelhold = False 275 | # Keep track of the branch conditions (to later determine how 276 | # to set the header bytes for branches) 277 | branch_types.append(branch_type) 278 | branch_conditions.append(branch_cond) 279 | 280 | # NB: We update the branch condition note counter *after* 281 | # we check the current measure's branch condition. 282 | # This is because the TJA spec says: 283 | # "The requirement is calculated one measure before 284 | # #BRANCHSTART, changing the branch visually when it 285 | # is calculated and changing the notes after #BRANCHSTART." 286 | # So, by delaying the summation by one measure, we perform the 287 | # calculation with notes "one measure before". 288 | branch_points_total += branch_points_measure 289 | 290 | # LEVELHOLD essentially means "ignore the branch condition for 291 | # the next `#BRANCHSTART` command", so we check this value after 292 | # we've already processed the branch condition for this measure. 
293 | if measure_tja.levelhold: 294 | current_levelhold = True 295 | 296 | # Create notes based on TJA measure data 297 | branch_points_measure = 0 298 | for note_tja in measure_tja.notes: 299 | # Compute the ms position of the note 300 | pos_ratio = ((note_tja.pos - measure_tja.pos_start) / 301 | (measure_tja.pos_end - measure_tja.pos_start)) 302 | note_pos = measure_fumen.duration * pos_ratio 303 | 304 | # Handle '8' notes (end of a drumroll/balloon) 305 | if note_tja.value == "EndDRB": 306 | if not current_drumroll.note_type: 307 | warnings.warn( 308 | "'8' note encountered without matching " 309 | "drumroll/balloon/kusudama note. Ignoring to " 310 | "avoid crash. Check TJA and re-run." 311 | ) 312 | continue 313 | # If a drumroll spans a single measure, then add the 314 | # difference between start/end position 315 | if not current_drumroll.multimeasure: 316 | current_drumroll.duration += (note_pos - 317 | current_drumroll.pos) 318 | # Otherwise, if a drumroll spans multiple measures, 319 | # then we want to add the duration between the start 320 | # of the measure and the drumroll's end position. 321 | else: 322 | current_drumroll.duration += (note_pos - 0.0) 323 | current_drumroll.duration = float(int( 324 | current_drumroll.duration 325 | )) 326 | current_drumroll = FumenNote() 327 | continue 328 | 329 | # The TJA spec technically allows you to place 330 | # double-Kusudama notes. But this is unsupported in 331 | # fumens, so just skip the second Kusudama note. 332 | if note_tja.value == "Kusudama" and current_drumroll.note_type: 333 | continue 334 | 335 | # Now that the edge cases have been taken care of ('continue'), 336 | # we can initialize a note and handle general note metadata. 337 | note = FumenNote() 338 | note.pos = note_pos 339 | # Account for a measure's #SENOTECHANGE command 340 | if measure_tja.senote: 341 | note.note_type = measure_tja.senote 342 | note.manually_set = True 343 | # SENOTECHANGE only applies to the note immediately after 344 | # So, we erase it once it's been applied. 345 | measure_tja.senote = "" 346 | else: 347 | note.note_type = note_tja.value 348 | note.score_init = tja.score_init 349 | note.score_diff = tja.score_diff 350 | 351 | # Handle drumroll-specific note metadata 352 | if note.note_type in ["Drumroll", "DRUMROLL"]: 353 | current_drumroll = note 354 | elif note.note_type in ["Balloon", "Kusudama"]: 355 | try: 356 | note.hits = course_balloons.pop(0) 357 | except IndexError: 358 | warnings.warn(f"Not enough values for 'BALLOON:' " 359 | f"({tja.balloon}). Using value=1 to " 360 | f"avoid crashing. 
Check TJA and re-run.") 361 | note.hits = 1 362 | current_drumroll = note 363 | 364 | # Track Don/Ka notes (to later compute header values) 365 | elif (note.note_type.lower().startswith('don') 366 | or note.note_type.lower().startswith('ka')): 367 | total_notes[current_branch] += 1 368 | 369 | # Track branch points (to later compute `#BRANCHSTART p` vals) 370 | if note.note_type in ['Don', 'Ka']: 371 | pts_to_add = fumen.header.b468_b471_branch_pts_good 372 | elif note.note_type in ['DON', 'KA']: 373 | pts_to_add = fumen.header.b484_b487_branch_pts_good_big 374 | elif note.note_type == 'Balloon': 375 | pts_to_add = fumen.header.b496_b499_branch_pts_balloon 376 | elif note.note_type == 'Kusudama': 377 | pts_to_add = fumen.header.b500_b503_branch_pts_kusudama 378 | else: 379 | pts_to_add = 0 # Drumrolls not relevant for `p` conditions 380 | branch_points_measure += pts_to_add 381 | 382 | # Add the note to the branch for this measure 383 | measure_fumen.branches[current_branch].notes.append(note) 384 | measure_fumen.branches[current_branch].length += 1 385 | 386 | # If drumroll hasn't ended by this measure, increase duration 387 | if current_drumroll.note_type: 388 | # If drumroll spans multiple measures, add full duration 389 | if current_drumroll.multimeasure: 390 | current_drumroll.duration += measure_fumen.duration 391 | # Otherwise, add the partial duration spanned by the drumroll 392 | else: 393 | current_drumroll.multimeasure = True 394 | current_drumroll.duration += (measure_fumen.duration - 395 | current_drumroll.pos) 396 | 397 | # Compute the header bytes that dictate the soul gauge bar behavior 398 | fumen.header.set_hp_bytes(total_notes['normal'], tja.course, tja.level) 399 | # Compute the timing windows based on the course 400 | fumen.header.set_timing_windows(tja.course) 401 | 402 | # If song has only drumroll branching conditions (also allowing percentage 403 | # conditions that force a level up/level down), then set the header bytes 404 | # so that only drumrolls contribute to branching. 405 | drumroll_only = ( 406 | branch_types # noqa: branch_types will always be set 407 | and branch_conditions # noqa: branch_conditions will always be set 408 | and all( 409 | (branch_type == 'r') or 410 | (branch_type == 'p' and cond[0] == 0.0 and cond[1] == 0.0) or 411 | (branch_type == 'p' and cond[0] > 1.00 and cond[1] > 1.00) 412 | for branch_type, cond in zip(branch_types, branch_conditions) 413 | ) 414 | ) 415 | if drumroll_only: 416 | fumen.header.b468_b471_branch_pts_good = 0 417 | fumen.header.b484_b487_branch_pts_good_big = 0 418 | fumen.header.b472_b475_branch_pts_ok = 0 419 | fumen.header.b488_b491_branch_pts_ok_big = 0 420 | fumen.header.b496_b499_branch_pts_balloon = 0 421 | fumen.header.b500_b503_branch_pts_kusudama = 0 422 | 423 | # Alternatively, if the song has only percentage-based conditions, then set 424 | # the header bytes so that only notes and balloons contribute to branching. 
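    # (Illustrative example only, not taken from a specific chart: a course
    #  whose only branch conditions look like '#BRANCHSTART p,75,90' has no
    #  'r'-type conditions, so it falls into this percentage-only case.)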
425 | percentage_only = ( 426 | branch_types # noqa: branch_types will always be set 427 | and all( 428 | (branch_type != 'r') 429 | for branch_type in branch_types 430 | ) 431 | ) 432 | if percentage_only: 433 | fumen.header.b480_b483_branch_pts_drumroll = 0 434 | fumen.header.b492_b495_branch_pts_drumroll_big = 0 435 | 436 | # Compute the ratio between normal and professional/master branches 437 | if total_notes['professional']: 438 | fumen.header.b460_b463_normal_professional_ratio = \ 439 | int(65536 * (total_notes['normal'] / total_notes['professional'])) 440 | if total_notes['master']: 441 | fumen.header.b464_b467_normal_master_ratio = \ 442 | int(65536 * (total_notes['normal'] / total_notes['master'])) 443 | 444 | return fumen 445 | 446 | 447 | def fix_dk_note_types_course(fumen: FumenCourse) -> None: 448 | """ 449 | Call `fix_dk_note_types` once per branch on a FumenCourse. 450 | """ 451 | # try to determine the song's BPM from its measures 452 | measure_bpms = [m.bpm for m in fumen.measures] 453 | unique_bpms = set(measure_bpms) 454 | song_bpm = max(unique_bpms, key=measure_bpms.count) 455 | 456 | # collect the d/k notes for each branch, then fix their types 457 | for branch_name in BRANCH_NAMES: 458 | dk_notes = [] 459 | for measure in fumen.measures: 460 | for note in measure.branches[branch_name].notes: 461 | if any(note.note_type.lower().startswith(t) 462 | for t in ['don', 'ka']): 463 | note.pos_abs = (measure.offset_start + note.pos + 464 | (4 * 60_000 / measure.bpm)) 465 | dk_notes.append(note) 466 | if dk_notes: 467 | fix_dk_note_types(dk_notes, song_bpm) 468 | 469 | 470 | def fix_dk_note_types(dk_notes: List[FumenNote], song_bpm: float) -> None: 471 | """ 472 | Cluster Don/Ka notes based on their relative positions, then replace 473 | Don/Ka notes with alternate versions (Don2, Don3, Ka2). 474 | 475 | NB: Modifies FumenNote objects in-place 476 | """ 477 | # Sort the notes by their absolute positions to account for BPMCHANGE 478 | dk_notes = sorted(dk_notes, key=lambda note: note.pos_abs) 479 | 480 | # Get the differences between each note and the previous one 481 | for (note_1, note_2) in zip(dk_notes, dk_notes[1:]): 482 | note_1.diff = int(note_2.pos_abs - note_1.pos_abs) 483 | 484 | # Isolate the unique difference values and sort them 485 | diffs_unique = sorted(list({note.diff for note in dk_notes})) 486 | 487 | # Avoid clustering any whole notes, half notes, or quarter notes 488 | # i.e. only cluster 8th notes, 16th notes, etc. 489 | measure_duration = (4 * 60_000) / song_bpm 490 | quarter_note_duration = int(measure_duration / 4) 491 | diffs_under_quarter: List[int] = [diff for diff in diffs_unique 492 | if diff < quarter_note_duration] 493 | 494 | # Anything above an 8th note (12th, 16th, 24th, 36th, etc...) 
should be 495 | # clustered together as a single stream 496 | diffs_to_cluster: List[List[int]] = [] 497 | diffs_under_8th: List[int] = [] 498 | eighth_note_duration = int(measure_duration / 8) 499 | for diff in diffs_under_quarter: 500 | if diff < eighth_note_duration: 501 | diffs_under_8th.append(diff) 502 | else: 503 | diffs_to_cluster.append([diff]) 504 | # Make sure to cluster the close-together notes first 505 | if diffs_under_8th: 506 | diffs_to_cluster.insert(0, diffs_under_8th) 507 | 508 | # Cluster the notes from the smallest difference to the largest 509 | semi_clustered: List[Union[FumenNote, List[FumenNote]]] = list(dk_notes) 510 | for diff_vals in diffs_to_cluster: 511 | semi_clustered = cluster_notes(semi_clustered, diff_vals) 512 | 513 | # Turn any remaining isolated notes into clusters (i.e. long diffs) 514 | clustered_notes = [cluster if isinstance(cluster, list) else [cluster] 515 | for cluster in semi_clustered] 516 | 517 | # In each cluster, replace dons/kas with their alternate versions 518 | replace_alternate_don_kas(clustered_notes, eighth_note_duration) 519 | 520 | 521 | def replace_alternate_don_kas(note_clusters: List[List[FumenNote]], 522 | eighth_note_duration: int) -> None: 523 | """ 524 | Replace Don/Ka notes with alternate versions (Don2, Don3, Ka2) based on 525 | positions within a cluster of notes. 526 | 527 | NB: Modifies FumenNote objects in-place 528 | NB: FumenNote values are only updated if not manually set by #SENOTECHANGE 529 | """ 530 | big_notes = ['DON', 'DON2', 'KA', 'KA2'] 531 | for cluster in note_clusters: 532 | # Replace all small notes with the basic do/ka notes ("Don2", "Ka2") 533 | for note in cluster: 534 | if note.note_type not in big_notes and not note.manually_set: 535 | if note.note_type[-1].isdigit(): 536 | note.note_type = note.note_type[:-1] + "2" 537 | else: 538 | note.note_type += "2" 539 | 540 | # The "ko" type of Don note only occurs every other note, and only 541 | # in odd-length all-don runs (DDD: Do-ko-don, DDDDD: Do-ko-do-ko-don) 542 | all_dons = all(note.note_type.startswith("Don") for note in cluster) 543 | for i, note in enumerate(cluster): 544 | if (all_dons and (len(cluster) % 2 == 1) and (i % 2 == 1) 545 | and note.note_type not in big_notes 546 | and not note.manually_set): 547 | note.note_type = "Don3" 548 | 549 | # Replace the last note in a cluster with the ending Don/Kat 550 | # In other words, remove the '2' from the last note. 
551 | # However, there's one exception: Groups of 4 notes, faster than 8th 552 | is_fast_cluster_of_4 = (len(cluster) == 4 and 553 | all(note.diff < eighth_note_duration 554 | for note in cluster[:-1])) 555 | if is_fast_cluster_of_4: 556 | # Leave last note as Don2/Ka2 557 | pass 558 | else: 559 | # Replace last Don2/Ka2 with Don/Ka 560 | if (cluster[-1].note_type not in big_notes 561 | and not cluster[-1].manually_set): 562 | cluster[-1].note_type = cluster[-1].note_type[:-1] 563 | 564 | 565 | def cluster_notes(item_list: List[Union[FumenNote, List[FumenNote]]], 566 | cluster_diffs: List[int]) \ 567 | -> List[Union[FumenNote, List[FumenNote]]]: 568 | """Group notes based on the differences between them.""" 569 | clustered_notes: List[Union[FumenNote, List[FumenNote]]] = [] 570 | current_cluster: List[FumenNote] = [] 571 | for item in item_list: 572 | # If we encounter an already-clustered group of items, the current 573 | # cluster should end 574 | if isinstance(item, list): 575 | if current_cluster: 576 | clustered_notes.append(current_cluster) 577 | current_cluster = [] 578 | clustered_notes.append(item) 579 | # Handle values that haven't been clustered yet 580 | else: 581 | assert isinstance(item, FumenNote) 582 | # Start and/or continue the current cluster 583 | if any(item.diff == diff for diff in cluster_diffs): 584 | current_cluster.append(item) 585 | else: 586 | # Finish the existing cluster 587 | if current_cluster: 588 | current_cluster.append(item) 589 | clustered_notes.append(current_cluster) 590 | current_cluster = [] 591 | # Or, if there is no cluster, append the item 592 | else: 593 | clustered_notes.append(item) 594 | if current_cluster: 595 | clustered_notes.append(current_cluster) 596 | return clustered_notes -------------------------------------------------------------------------------- /src/tja2fumen/parsers.py: -------------------------------------------------------------------------------- 1 | """ 2 | Functions for parsing TJA files (.tja) and Fumen files (.bin) 3 | """ 4 | 5 | import os 6 | import re 7 | import struct 8 | import warnings 9 | from copy import deepcopy 10 | from typing import BinaryIO, Any, List, Dict, Tuple 11 | 12 | from src.tja2fumen.classes import (TJASong, TJACourse, TJAMeasure, TJAData, 13 | FumenCourse, FumenMeasure, FumenBranch, 14 | FumenNote, FumenHeader) 15 | from src.tja2fumen.constants import (NORMALIZE_COURSE, COURSE_NAMES, BRANCH_NAMES, 16 | TJA_COURSE_NAMES, TJA_NOTE_TYPES, 17 | FUMEN_NOTE_TYPES) 18 | 19 | ############################################################################### 20 | # TJA-parsing functions # 21 | ############################################################################### 22 | 23 | 24 | def parse_tja(fname_tja: str) -> TJASong: 25 | """Read in lines of a .tja file and load them into a TJASong object.""" 26 | try: 27 | with open(fname_tja, "r", encoding="utf-8-sig") as tja_file: 28 | tja_text = tja_file.read() 29 | except UnicodeDecodeError: 30 | with open(fname_tja, "r", encoding="shift-jis") as tja_file: 31 | tja_text = tja_file.read() 32 | 33 | tja_lines = [line for line in tja_text.splitlines() if line.strip() != ''] 34 | tja = split_tja_lines_into_courses(tja_lines) 35 | for course in tja.courses.values(): 36 | branches, balloon_data = parse_tja_course_data(course.data) 37 | course.branches = branches 38 | course.balloon = fix_balloon_field(course.balloon, balloon_data) 39 | 40 | return tja 41 | 42 | 43 | def split_tja_lines_into_courses(lines: List[str]) -> TJASong: 44 | """ 45 | Parse TJA metadata in 
order to split TJA lines into separate courses. 46 | 47 | In TJA files, metadata lines are denoted by a colon (':'). These lines 48 | provide general info about the song (BPM, TITLE, OFFSET, etc.). They also 49 | define properties for each course in the song (difficulty, level, etc.). 50 | 51 | This function processes each line of metadata, and assigns the metadata 52 | to TJACourse objects (one for each course). To separate each course, this 53 | function uses the `COURSE:` metadata and any `#START P1/P2` commands, 54 | resulting in the following structure: 55 | 56 | TJASong 57 | ├─ TJACourse (e.g. Ura) 58 | │ ├─ Course metadata (level, balloons, scoreinit, scorediff, etc.) 59 | │ └─ Unparsed data (notes, commands) 60 | ├─ TJACourse (e.g. Ura-P1) 61 | ├─ TJACourse (e.g. Ura-P2) 62 | ├─ TJACourse (e.g. Oni) 63 | ├─ TJACourse (e.g. Hard) 64 | └─ ... 65 | 66 | The data for each TJACourse can then be parsed individually using the 67 | `parse_tja_course_data()` function. 68 | """ 69 | # Strip leading/trailing whitespace and comments ('// Comment') 70 | lines = [line.split("//")[0].strip() for line in lines 71 | if line.split("//")[0].strip()] 72 | 73 | # Initialize song with BPM and OFFSET global metadata 74 | tja_metadata = {} 75 | for required_metadata in ["BPM", "OFFSET"]: 76 | for line in lines: 77 | if line.startswith(required_metadata): 78 | tja_metadata[required_metadata] = float(line.split(":")[1]) 79 | break 80 | else: 81 | raise ValueError(f"TJA does not contain required " 82 | f"'{required_metadata}' metadata.") 83 | parsed_tja = TJASong( 84 | bpm=tja_metadata['BPM'], 85 | offset=tja_metadata['OFFSET'], 86 | courses={course: TJACourse(bpm=tja_metadata['BPM'], 87 | offset=tja_metadata['OFFSET'], 88 | course=course) 89 | for course in TJA_COURSE_NAMES} 90 | ) 91 | 92 | current_course = '' 93 | current_course_basename = '' 94 | for line in lines: 95 | # Only metadata and #START commands are relevant for this function 96 | match_metadata = re.match(r"^([a-zA-Z0-9]+):(.*)", line) 97 | match_start = re.match(r"^#START(?:\s+(.+))?", line) 98 | 99 | # Case 1: Metadata lines 100 | if match_metadata: 101 | name_upper = match_metadata.group(1).upper() 102 | value = match_metadata.group(2).strip() 103 | 104 | # Course-specific metadata fields 105 | if name_upper == 'COURSE': 106 | value = value.lower().capitalize() # coerce hard/HARD -> Hard 107 | if value not in NORMALIZE_COURSE: 108 | raise ValueError(f"Invalid COURSE value: '{value}'") 109 | current_course = NORMALIZE_COURSE[value] 110 | current_course_basename = current_course 111 | elif name_upper == 'LEVEL': 112 | if not value.isdigit(): 113 | raise ValueError(f"Invalid LEVEL value: '{value}'") 114 | # restrict to 1 <= level <= 10 115 | parsed_level = min(max(int(value), 1), 10) 116 | parsed_tja.courses[current_course].level = parsed_level 117 | elif name_upper == 'SCOREINIT': 118 | parsed_tja.courses[current_course].score_init = \ 119 | int(value.split(",")[-1]) if value else 0 120 | elif name_upper == 'SCOREDIFF': 121 | parsed_tja.courses[current_course].score_diff = \ 122 | int(value.split(",")[-1]) if value else 0 123 | elif name_upper == 'BALLOON': 124 | if value: 125 | balloons = [int(v) for v in value.split(",") if v] 126 | parsed_tja.courses[current_course].balloon = balloons 127 | elif name_upper == 'STYLE': 128 | # Reset the course name to remove "P1/P2" that may have been 129 | # added by a previous STYLE:DOUBLE chart 130 | if value == 'Single': 131 | current_course = current_course_basename 132 | else: 133 | pass # Ignore 
'TITLE', 'SUBTITLE', 'WAVE', etc. 134 | 135 | # Case 2: #START commands 136 | elif match_start: 137 | value = match_start.group(1) if match_start.group(1) else '' 138 | # For STYLE:Double, #START P1/P2 indicates the start of a new 139 | # chart. But, we want multiplayer charts to inherit the 140 | # metadata from the course as a whole, so we deepcopy the 141 | # existing course for that difficulty. 142 | if value in ["1P", "2P"]: 143 | value = value[1] + value[0] # Fix user typo (e.g. 1P -> P1) 144 | if value in ["P1", "P2"]: 145 | current_course = current_course_basename + value 146 | parsed_tja.courses[current_course] = \ 147 | deepcopy(parsed_tja.courses[current_course_basename]) 148 | parsed_tja.courses[current_course].data = [] 149 | elif value: 150 | raise ValueError(f"Invalid value '{value}' for #START.") 151 | 152 | # Since P1/P2 has been handled, we can just use a normal '#START' 153 | parsed_tja.courses[current_course].data.append("#START") 154 | 155 | # Case 3: For other commands and data, simply copy as-is (parse later) 156 | else: 157 | if current_course: 158 | parsed_tja.courses[current_course].data.append(line) 159 | else: 160 | warnings.warn(f"Data encountered before first COURSE: " 161 | f"'{line}' (Check for typos in TJA)") 162 | 163 | # If a .tja has "STYLE: Double" but no "STYLE: Single", then it will be 164 | # missing data for the "single player" chart. To fix this, we copy over 165 | # the P1 chart from "STYLE: Double" to fill the "STYLE: Single" role. 166 | for course_name in COURSE_NAMES: 167 | course_single_player = parsed_tja.courses[course_name] 168 | course_player_one = parsed_tja.courses[course_name+"P1"] 169 | if course_player_one.data and not course_single_player.data: 170 | parsed_tja.courses[course_name] = deepcopy(course_player_one) 171 | 172 | # Remove any charts (e.g. P1/P2) not present in the TJA file (empty data) 173 | for course_name in [k for k, v in parsed_tja.courses.items() 174 | if not v.data]: 175 | del parsed_tja.courses[course_name] 176 | 177 | # Recreate dict with consistent insertion order 178 | parsed_tja.courses = { 179 | key: parsed_tja.courses[key] for key 180 | in sorted(parsed_tja.courses.keys()) 181 | } 182 | 183 | return parsed_tja 184 | 185 | 186 | def parse_tja_course_data(data: List[str]) \ 187 | -> Tuple[Dict[str, List[TJAMeasure]], Dict[str, List[str]]]: 188 | """ 189 | Parse course data (notes, commands) into a nested song structure. 190 | 191 | The goal of this function is to process measure separators (',') and 192 | branch commands ('#BRANCHSTART`, '#N`, '#E', '#M') to split the data 193 | into branches and measures, resulting in the following structure: 194 | 195 | TJACourse 196 | ├─ TJABranch ('normal') 197 | │ ├─ TJAMeasure 198 | │ │ ├─ TJAData (notes, commands) 199 | │ │ ├─ TJAData 200 | │ │ └─ ... 201 | │ ├─ TJAMeasure 202 | │ ├─ TJAMeasure 203 | │ └─ ... 204 | ├─ TJABranch ('professional') 205 | └─ TJABranch ('master') 206 | 207 | This provides a faithful, easy-to-inspect tree-style representation of the 208 | branches and measures within each course of the .tja file. 
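    For illustration, a hypothetical single-branch fragment such as:

        1011,
        #GOGOSTART
        2220,

    would produce two TJAMeasure objects on the 'normal' branch: the first
    holding the note characters '1011', and the second holding '2220' plus a
    TJAData(name='gogo', value='1', pos=0) event (pos=0 because the command
    appears before any notes in its measure).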
209 | """ 210 | parsed_branches = {k: [TJAMeasure()] for k in BRANCH_NAMES} 211 | has_branches = bool([d for d in data if d.startswith('#BRANCH')]) 212 | current_branch = 'all' if has_branches else 'normal' 213 | branch_condition = '' 214 | # keep track of balloons in order to fix the 'BALLOON' field value 215 | balloons: Dict[str, List[str]] = {k: [] for k in BRANCH_NAMES} 216 | 217 | # Process course lines 218 | idx_m = 0 219 | idx_m_branchstart = 0 220 | for idx_l, line in enumerate(data): 221 | # 0. Check to see whether line is a command or note data 222 | command, name, value, note_data = '', '', '', '' 223 | match_command = re.match(r"^#([a-zA-Z0-9]+)(?:\s+(.+))?", line) 224 | if match_command: 225 | command = match_command.group(1).upper() 226 | if match_command.group(2): 227 | value = match_command.group(2) 228 | else: 229 | note_data = line # If not a command, then line must be note data 230 | 231 | # 1. Parse measure notes 232 | if note_data: 233 | notes_to_write: str = "" 234 | # If measure has ended, then add notes to the current measure, 235 | # then start a new measure by incrementing idx_m 236 | if note_data.endswith(','): 237 | for branch_name in (BRANCH_NAMES if current_branch == 'all' 238 | else [current_branch]): 239 | check_branch_length(parsed_branches, branch_name, 240 | expected_len=idx_m+1) 241 | notes_to_write = note_data[:-1] 242 | parsed_branches[branch_name][idx_m].notes += notes_to_write 243 | parsed_branches[branch_name].append(TJAMeasure()) 244 | idx_m += 1 245 | # Otherwise, keep adding notes to the current measure ('idx_m') 246 | else: 247 | for branch_name in (BRANCH_NAMES if current_branch == 'all' 248 | else [current_branch]): 249 | notes_to_write = note_data 250 | parsed_branches[branch_name][idx_m].notes += notes_to_write 251 | 252 | # Keep track of balloon notes that were added 253 | balloon_notes = [n for n in notes_to_write if n in ['7', '9']] 254 | # mark balloon notes as duplicates if necessary. this will be used 255 | # to fix the BALLOON: field to account for duplicated balloons. 256 | balloon_notes = (['DUPE'] * len(balloon_notes) 257 | if current_branch == 'all' else balloon_notes) 258 | for branch_name in (BRANCH_NAMES if current_branch == 'all' 259 | else [current_branch]): 260 | balloons[branch_name].extend(balloon_notes) 261 | 262 | # 2. 
Parse measure commands that produce an "event" 263 | elif command in ['GOGOSTART', 'GOGOEND', 'BARLINEON', 'BARLINEOFF', 264 | 'DELAY', 'SCROLL', 'BPMCHANGE', 'MEASURE', 265 | 'LEVELHOLD', 'SENOTECHANGE', 'SECTION', 266 | 'BRANCHSTART']: 267 | # Get position of the event 268 | pos = 0 269 | for branch_name in (BRANCH_NAMES if current_branch == 'all' 270 | else [current_branch]): 271 | check_branch_length(parsed_branches, branch_name, 272 | expected_len=idx_m+1) 273 | pos = len(parsed_branches[branch_name][idx_m].notes) 274 | 275 | # Parse event type 276 | if command == 'GOGOSTART': 277 | name, value = 'gogo', '1' 278 | elif command == 'GOGOEND': 279 | name, value = 'gogo', '0' 280 | elif command == 'BARLINEON': 281 | name, value = 'barline', '1' 282 | elif command == 'BARLINEOFF': 283 | name, value = 'barline', '0' 284 | elif command == 'DELAY': 285 | name = 'delay' 286 | elif command == 'SCROLL': 287 | name = 'scroll' 288 | elif command == 'BPMCHANGE': 289 | name = 'bpm' 290 | elif command == 'MEASURE': 291 | name = 'measure' 292 | elif command == 'LEVELHOLD': 293 | name = 'levelhold' 294 | elif command == "SENOTECHANGE": 295 | name = 'senote' 296 | elif command == 'SECTION': 297 | # If #SECTION occurs before a #BRANCHSTART, then ensure that 298 | # it's present on every branch. Otherwise, #SECTION will only 299 | # be present on the current branch, and so the `branch_info` 300 | # values won't be correctly set for the other two branches. 301 | if data[idx_l+1].startswith('#BRANCHSTART'): 302 | name = 'section' 303 | current_branch = 'all' 304 | elif not branch_condition: 305 | name = 'section' 306 | current_branch = 'all' 307 | # Otherwise, #SECTION exists in isolation. In this case, to 308 | # reset the accuracy, we just repeat the previous #BRANCHSTART. 309 | else: 310 | name, value = 'branch_start', branch_condition 311 | elif command == 'BRANCHSTART': 312 | # Ensure that the #BRANCHSTART command is added to all branches 313 | current_branch = 'all' 314 | name = 'branch_start' 315 | branch_condition = value 316 | # If a branch was intentionally excluded by the charter, 317 | # make sure to copy measures from the longest branch. 318 | for branch_name in BRANCH_NAMES: 319 | check_branch_length(parsed_branches, branch_name) 320 | # Preserve the index of the BRANCHSTART command to re-use 321 | idx_m_branchstart = idx_m 322 | 323 | # Append event to the current measure's events 324 | for branch_name in (BRANCH_NAMES if current_branch == 'all' 325 | else [current_branch]): 326 | check_branch_length(parsed_branches, branch_name, 327 | expected_len=idx_m+1) 328 | parsed_branches[branch_name][idx_m].events.append( 329 | TJAData(name=name, value=value, pos=pos) 330 | ) 331 | 332 | # 3. Parse commands that don't create an event 333 | # (e.g. 
simply changing the current branch) 334 | else: 335 | if command in ('START', 'END'): 336 | current_branch = 'all' if has_branches else 'normal' 337 | elif command == 'N': 338 | current_branch = 'normal' 339 | idx_m = idx_m_branchstart 340 | elif command == 'E': 341 | current_branch = 'professional' 342 | idx_m = idx_m_branchstart 343 | elif command == 'M': 344 | current_branch = 'master' 345 | idx_m = idx_m_branchstart 346 | elif command == 'BRANCHEND': 347 | current_branch = 'all' 348 | 349 | else: 350 | warnings.warn(f"Ignoring unsupported command '{command}'") 351 | 352 | # Delete the last measure in the branch if no notes or events 353 | # were added to it (due to preallocating empty measures) 354 | deleted_branches = False 355 | for branch in parsed_branches.values(): 356 | if not branch[-1].notes and not branch[-1].events: 357 | del branch[-1] 358 | deleted_branches = True 359 | if deleted_branches: 360 | idx_m -= 1 361 | 362 | # Equalize branch lengths to account for missing branches 363 | for branch_name, branch in parsed_branches.items(): 364 | if branch: 365 | check_branch_length(parsed_branches, branch_name) 366 | 367 | # Merge measure data and measure events in chronological order 368 | for branch_name, branch in parsed_branches.items(): 369 | for measure in branch: 370 | # warn the user if their measure have typos 371 | valid_notes = [] 372 | for note in measure.notes: 373 | if note not in TJA_NOTE_TYPES: 374 | warnings.warn(f"Ignoring invalid note '{note}' in measure " 375 | f"'{''.join(measure.notes)}' (check for " 376 | f"typos in TJA)") 377 | else: 378 | valid_notes.append(note) 379 | notes = [TJAData(name='note', value=TJA_NOTE_TYPES[note], pos=i) 380 | for i, note in enumerate(valid_notes) if 381 | TJA_NOTE_TYPES[note] != 'Blank'] 382 | events = measure.events 383 | while notes or events: 384 | if events and notes: 385 | if notes[0].pos >= events[0].pos: 386 | measure.combined.append(events.pop(0)) 387 | else: 388 | measure.combined.append(notes.pop(0)) 389 | elif events: 390 | measure.combined.append(events.pop(0)) 391 | elif notes: 392 | measure.combined.append(notes.pop(0)) 393 | 394 | # Ensure all branches have the same number of measures 395 | if has_branches: 396 | if len({len(b) for b in parsed_branches.values()}) != 1: 397 | # If `check_branch_length` works, this should never be reached 398 | raise ValueError( 399 | "Branches do not have the same number of measures. (This " 400 | "check was performed prior to splitting up the measures due " 401 | "to mid-measure commands. Please check the number of ',' you " 402 | "have in each branch.)" 403 | ) 404 | 405 | return parsed_branches, balloons 406 | 407 | 408 | def check_branch_length(parsed_branches: Dict[str, List[TJAMeasure]], 409 | branch_name: str, expected_len: int = 0) -> None: 410 | """ 411 | Ensure that a given branch ('branch_name') matches either an expected 412 | integer length, or the max length of all branches if length not given. 413 | 414 | Note: Modifies branch in-place. 415 | """ 416 | branch_len = len(parsed_branches[branch_name]) 417 | # If no length is provided, then we assume we're comparing branches, 418 | # then copying any missing measures from the largest branch. 
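    # (Illustration: if 'normal' has 10 measures while the longest branch,
    #  'master', has 12, the last 2 'master' measures are appended to
    #  'normal'.)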
419 | if expected_len == 0: 420 | max_branch_name = branch_name 421 | expected_len = branch_len 422 | for name, branch in parsed_branches.items(): 423 | if len(branch) > expected_len: 424 | expected_len = len(branch) 425 | max_branch_name = name 426 | warning_msg = (f"To fix this, measures will be copied from the " 427 | f"'{max_branch_name}' branch to equalize branch " 428 | f"lengths.") 429 | for idx_m in range(branch_len, expected_len): 430 | parsed_branches[branch_name].append( 431 | parsed_branches[max_branch_name][idx_m] 432 | ) 433 | # Otherwise, if length was provided, then simply pad with empty measures 434 | else: 435 | warning_msg = ("To fix this, empty measures will be added to " 436 | "equalize branch lengths.") 437 | for idx_m in range(branch_len, expected_len): 438 | parsed_branches[branch_name].append(TJAMeasure()) 439 | 440 | if branch_len < expected_len: 441 | warnings.warn( 442 | f"While parsing the TJA's branches, tja2fumen expected " 443 | f"{expected_len} measure(s) from the '{branch_name}' branch, but " 444 | f"it only had {branch_len} measure(s). {warning_msg} (Hint: Do " 445 | f"#N, #E, and #M all have the same number of measures?)" 446 | ) 447 | 448 | 449 | def fix_balloon_field(balloon_field: List[int], 450 | balloon_data: Dict[str, List[str]]) -> List[int]: 451 | """ 452 | Fix the 'BALLOON:' metadata field for certain branching songs. 453 | 454 | In Taiko, branching songs may have a different amount of balloons and/or 455 | different balloon values on their normal/professional/master branches. 456 | However, the TJA field "BALLOON:" is limited it how it can represent 457 | balloon hits; it uses a single comma-delimited list of integers. E.g.: 458 | 459 | BALLOON: 13,4,52,4,52,4,52 460 | 461 | It is unclear which of these values belong to which branches. 462 | 463 | This is especially unclear for songs that start out on the "normal" branch, 464 | or songs that have branching conditions that force a specific branch. These 465 | songs are often written as TJA with only a single branch written out, yet 466 | for official fumens, this branch information actually has to be present on 467 | *all three branches*. So, the 'BALLOON:' field will be missing values. 468 | 469 | In the example above, the "13" balloon actually occurs on the normal branch 470 | before the first branch condition. Meaning that the balloons are split up 471 | like this: 472 | 473 | BALLOON: (13,4,52)(4,52)(4,52) 474 | 475 | However, due to fumen requirements, we want the balloons to actually be 476 | like this: 477 | 478 | BALLOON: (13,4,52)(13,4,52)(13,4,52) 479 | 480 | So, the purpose of this function is to "fix" the balloon information so 481 | that it can be used for fumen conversion without error. 482 | 483 | NOTE: This fix probably only applies to a VERY small minority of songs. 484 | One example (shown above) is the Ura chart for Roppon no Bara to Sai 485 | no Uta. You can see in the wikiwiki that the opening 'Normal' 486 | section has a balloon note prior to the branch condition. We need 487 | to duplicate this value across all branches. 488 | """ 489 | # Return early if course doesn't have branches 490 | if not all(balloon_data.values()): 491 | return balloon_field 492 | 493 | # Special case: Courses where the # of balloons is the same for all 494 | # branches, and the TJA author only listed 1 set of balloons. 495 | # Fix: Duplicate the balloons 3 times. 
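    # (Illustration with made-up values: 'BALLOON: 5,10' where every branch
    #  contains exactly 2 balloons becomes [5, 10, 5, 10, 5, 10].)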
496 | if all(len(balloons) == len(balloon_field) 497 | for balloons in balloon_data.values()): 498 | return balloon_field * 3 499 | 500 | # Return early if there were no duplicated balloons in the course 501 | if not any('DUPE' in balloons for balloons in balloon_data.values()): 502 | return balloon_field 503 | 504 | # If balloons were duplicated, then we expect the BALLOON: field to have 505 | # fewer hits values than the number of balloons. If this *isn't* the case, 506 | # then perhaps the TJA author duplicated the balloon hits themselves, and 507 | # so we don't want to make any unnecessary edits. Thus, return early. 508 | # FIXME: This assumption fails for double-kusudama notes, where we may 509 | # see a "fake" balloon, thus inflating the total number of balloons. 510 | # But, this is such a rare case (double-kusudama + duplicated 511 | # balloons + 'BALLOON:' field with implicitly duplicated hits) that 512 | # I'm alright handling it incorrectly. If a user files a bug 513 | # report, then I'll fix it then. 514 | total_num_balloons = sum(len(b) for b in balloon_data.values()) 515 | if not len(balloon_field) < total_num_balloons: 516 | return balloon_field 517 | 518 | # OK! So, by this point in the function, we're making these assumptions: 519 | # 520 | # 1. The TJA chart has branches. 521 | # 2. The TJA author wrote part of the song for only a single branch 522 | # (e.g. the Normal branch, before the first branch condition), and thus 523 | # we needed to duplicate some of the note data to create a valid fumen. 524 | # 3. The 'single branch' part of the TJA contained balloon/kusudama notes, 525 | # and thus we needed to duplicate those notes. 526 | # 4. The TJA author wrote the 'BALLOON:' field such that there was only 1 527 | # balloon value for the duplicated balloon note. 528 | # 529 | # The goal now is to identify which balloons were duplicated, and make sure 530 | # the "hits" value is present across all branches. 531 | duplicated_balloons = [] 532 | balloon_field_fixed = [] 533 | 534 | # Handle the normal branch first 535 | # If balloons are duplicated, then it's probably going to be from 'normal' 536 | # FIXME: If the balloons are duplicated from the master/professional branch 537 | # (e.g. due to a forced branch change from a branch condition), then 538 | # this logic will read the balloon values incorrectly. 539 | # But, this is such a rare case that I'm alright handling it 540 | # incorrectly. If a user files a bug report, then I'll fix it then. 
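    # Walk the 'normal' branch in order, consuming one 'BALLOON:' value per
    # balloon note and remembering the hit counts of duplicated ('DUPE')
    # balloons so they can be re-used for the other branches below.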
541 | for balloon_note in balloon_data['normal']: 542 | balloon_hits = balloon_field.pop(0) 543 | if balloon_note == 'DUPE': 544 | duplicated_balloons.append(balloon_hits) 545 | balloon_field_fixed.append(balloon_hits) 546 | 547 | # Repeat any duplicated balloon notes for the professional/master branches 548 | for branch_name in ['professional', 'master']: 549 | dupes_to_copy = duplicated_balloons.copy() 550 | for balloon_note in balloon_data[branch_name]: 551 | if balloon_note == 'DUPE': 552 | balloon_field_fixed.append(dupes_to_copy.pop(0)) 553 | else: 554 | balloon_field_fixed.append(balloon_field.pop(0)) 555 | 556 | return balloon_field_fixed 557 | 558 | 559 | ############################################################################### 560 | # Fumen-parsing functions # 561 | ############################################################################### 562 | 563 | def parse_fumen(fumen_file: str, 564 | exclude_empty_measures: bool = False) -> FumenCourse: 565 | """ 566 | Parse bytes of a fumen .bin file into nested measures, branches, and notes. 567 | 568 | Fumen files use a very strict file structure. Certain values are expected 569 | at very specific byte locations in the file. Here, we parse these specific 570 | byte locations into the following structure: 571 | 572 | FumenCourse 573 | ├─ FumenHeader 574 | │ ├─ Timing windows 575 | │ ├─ Branch points 576 | │ ├─ Soul gauge bytes 577 | │ └─ ... 578 | ├─ FumenMeasure 579 | │ ├─ FumenBranch ('normal') 580 | │ │ ├─ FumenNote 581 | │ │ ├─ FumenNote 582 | │ │ └─ ... 583 | │ ├─ FumenBranch ('professional') 584 | │ └─ FumenBranch ('master') 585 | ├─ FumenMeasure 586 | ├─ FumenMeasure 587 | └─ ... 588 | """ 589 | with open(fumen_file, "rb") as file: 590 | size = os.fstat(file.fileno()).st_size 591 | 592 | header = FumenHeader() 593 | header.parse_header_values(file.read(520)) 594 | song = FumenCourse(header=header) 595 | 596 | for _ in range(song.header.b512_b515_number_of_measures): 597 | # Parse the measure data using the following `format_string`: 598 | # "ffBBHiiiiiii" (12 format characters, 40 bytes per measure) 599 | # - 'f': BPM (one float (4 bytes)) 600 | # - 'f': fumenOffset (one float (4 bytes)) 601 | # - 'B': gogo (one unsigned char (1 byte)) 602 | # - 'B': barline (one unsigned char (1 byte)) 603 | # - 'H': (one unsigned short (2 bytes)) 604 | # - 'iiiiii': branch_info (six integers (24 bytes)) 605 | # - 'i': (one integer (4 bytes) 606 | measure_struct = read_struct(file, song.header.order, 607 | format_string="ffBBHiiiiiii") 608 | 609 | # Create the measure dictionary using the newly-parsed measure data 610 | measure = FumenMeasure( 611 | bpm=measure_struct[0], 612 | offset_start=measure_struct[1], 613 | gogo=bool(measure_struct[2]), 614 | barline=bool(measure_struct[3]), 615 | padding1=measure_struct[4], 616 | branch_info=list(measure_struct[5:11]), 617 | padding2=measure_struct[11] 618 | ) 619 | 620 | # Iterate through the three branch types 621 | for branch_name in BRANCH_NAMES: 622 | # Parse the measure data using the following `format_string`: 623 | # "HHf" (3 format characters, 8 bytes per branch) 624 | # - 'H': total_notes ( one unsigned short (2 bytes)) 625 | # - 'H': ( one unsigned short (2 bytes)) 626 | # - 'f': speed ( one float (4 bytes) 627 | branch_struct = read_struct(file, song.header.order, 628 | format_string="HHf") 629 | 630 | # Create the branch dictionary using newly-parsed branch data 631 | total_notes = branch_struct[0] 632 | branch = FumenBranch( 633 | length=total_notes, 634 | 
padding=branch_struct[1], 635 | speed=branch_struct[2], 636 | ) 637 | 638 | # Iterate through each note in the measure (per branch) 639 | for _ in range(total_notes): 640 | # Parse the note data using the following `format_string`: 641 | # "ififHHf" (7 format characters, 24b per note cluster) 642 | # - 'i': note type 643 | # - 'f': note position 644 | # - 'i': item 645 | # - 'f': 646 | # - 'H': score_init 647 | # - 'H': score_diff 648 | # - 'f': duration 649 | note_struct = read_struct(file, song.header.order, 650 | format_string="ififHHf") 651 | 652 | # Create the note dictionary using newly-parsed note data 653 | note_type = note_struct[0] 654 | note = FumenNote( 655 | note_type=FUMEN_NOTE_TYPES[note_type], 656 | pos=note_struct[1], 657 | item=note_struct[2], 658 | padding=note_struct[3], 659 | ) 660 | 661 | if note_type in (0xa, 0xc): 662 | # Balloon hits 663 | note.hits = note_struct[4] 664 | note.hits_padding = note_struct[5] 665 | else: 666 | song.score_init = note.score_init = note_struct[4] 667 | song.score_diff = note.score_diff = note_struct[5] // 4 668 | 669 | # Drumroll/balloon duration 670 | note.duration = note_struct[6] 671 | 672 | # Account for padding at the end of drumrolls 673 | if note_type in (0x6, 0x9, 0x62): 674 | note.drumroll_bytes = file.read(8) 675 | 676 | # Assign the note to the branch 677 | branch.notes.append(note) 678 | 679 | # Assign the branch to the measure 680 | measure.branches[branch_name] = branch 681 | 682 | # Assign the measure to the song 683 | song.measures.append(measure) 684 | if file.tell() >= size: 685 | break 686 | 687 | # NB: Official fumens often include empty measures as a way of inserting 688 | # barlines for visual effect. But, TJA authors tend not to add these empty 689 | # measures, because even without them, the song plays correctly. So, in 690 | # tests, if we want to only compare the timing of the non-empty measures 691 | # between an official fumen and a converted non-official TJA, then it's 692 | # useful to exclude the empty measures. 693 | if exclude_empty_measures: 694 | song.measures = [m for m in song.measures 695 | if m.branches['normal'].length 696 | or m.branches['professional'].length 697 | or m.branches['master'].length] 698 | 699 | return song 700 | 701 | 702 | def read_struct(file: BinaryIO, 703 | order: str, 704 | format_string: str) -> Tuple[Any, ...]: 705 | """ 706 | Interpret bytes as packed binary data. 707 | 708 | Arguments: 709 | - file: The fumen's file object (presumably in 'rb' mode). 710 | - order: '<' or '>' (little or big endian). 711 | - format_string: String made up of format characters that describes 712 | the data layout. Full list of available characters: 713 | (https://docs.python.org/3/library/struct.html#format-characters) 714 | - seek: The position of the read pointer to be used within the file. 715 | 716 | Return values: 717 | - interpreted_string: A string containing interpreted byte values, 718 | based on the specified 'fmt' format characters. 
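    Example (hypothetical usage, mirroring `parse_fumen`): read one measure
    header from a little-endian fumen file:

        measure_struct = read_struct(file, order="<",
                                     format_string="ffBBHiiiiiii")
        bpm, offset_start = measure_struct[0], measure_struct[1]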
719 | """ 720 | expected_size = struct.calcsize(order + format_string) 721 | byte_string = file.read(expected_size) 722 | # One "official" fumen (AC11\deo\deo_n.bin) runs out of data early 723 | # This workaround fixes the issue by appending 0's to get the size to match 724 | if len(byte_string) != expected_size: 725 | byte_string += (b'\x00' * (expected_size - len(byte_string))) 726 | interpreted_string = struct.unpack(order + format_string, byte_string) 727 | return interpreted_string -------------------------------------------------------------------------------- /src/tja2fumen/writers.py: -------------------------------------------------------------------------------- 1 | """ 2 | Functions for writing song data to fumen files (.bin) 3 | """ 4 | 5 | import struct 6 | from typing import BinaryIO, Any, List 7 | 8 | from src.tja2fumen.classes import FumenCourse 9 | from src.tja2fumen.constants import BRANCH_NAMES, FUMEN_TYPE_NOTES 10 | 11 | 12 | def write_fumen(path_out: str, song: FumenCourse) -> None: 13 | """ 14 | Write the values in a FumenCourse object to a `.bin` file. 15 | 16 | This operation is the reverse of the `parse_fumen` function. Please refer 17 | to that function for more details about the fumen file structure. 18 | """ 19 | with open(path_out, "wb") as file: 20 | file.write(song.header.raw_bytes) 21 | 22 | for measure in song.measures: 23 | measure_struct = ([measure.bpm, measure.offset_start, 24 | int(measure.gogo), int(measure.barline), 25 | measure.padding1] + measure.branch_info + 26 | [measure.padding2]) 27 | write_struct(file, song.header.order, 28 | format_string="ffBBHiiiiiii", 29 | value_list=measure_struct) 30 | 31 | for branch_name in BRANCH_NAMES: 32 | branch = measure.branches[branch_name] 33 | branch_struct = [branch.length, branch.padding, branch.speed] 34 | write_struct(file, song.header.order, 35 | format_string="HHf", 36 | value_list=branch_struct) 37 | 38 | for note in branch.notes: 39 | note_struct = [FUMEN_TYPE_NOTES[note.note_type], note.pos, 40 | note.item, note.padding] 41 | if note.hits: 42 | extra_vals = [note.hits, note.hits_padding] 43 | else: 44 | # Max value for H -> 0xffff -> 65535 45 | extra_vals = [min(65535, note.score_init), 46 | min(65535, note.score_diff * 4)] 47 | note_struct.extend(extra_vals) 48 | note_struct.append(note.duration) 49 | write_struct(file, song.header.order, 50 | format_string="ififHHf", 51 | value_list=note_struct) 52 | 53 | if note.note_type.lower() == "drumroll": 54 | file.write(note.drumroll_bytes) 55 | 56 | 57 | def write_struct(file: BinaryIO, 58 | order: str, 59 | format_string: str, 60 | value_list: List[Any]) -> None: 61 | """Pack (int, float, etc.) 
values into a string of bytes, then write.""" 62 | try: 63 | packed_bytes = struct.pack(order + format_string, *value_list) 64 | except struct.error as err: 65 | raise ValueError(f"Can't fmt {value_list} as {format_string}") from err 66 | file.write(packed_bytes) -------------------------------------------------------------------------------- /src/updater.py: -------------------------------------------------------------------------------- 1 | import tkinter as tk 2 | from tkinter import messagebox 3 | import requests 4 | import webbrowser 5 | import sys 6 | import win32api 7 | import re 8 | 9 | class Updater: 10 | def __init__(self, master): 11 | self.master = master 12 | self.current_version = self.get_current_version() 13 | self.github_raw_url = "https://raw.githubusercontent.com/keitannunes/KeifunsDatatableEditor/main/version_info.txt" 14 | self.github_releases_url = "https://github.com/keitannunes/KeifunsDatatableEditor/releases" 15 | 16 | def get_current_version(self): 17 | try: 18 | if getattr(sys, 'frozen', False): 19 | # The application is frozen (compiled) 20 | executable_path = sys.executable 21 | else: 22 | # The application is not frozen (running from script) 23 | executable_path = __file__ 24 | 25 | info = win32api.GetFileVersionInfo(executable_path, "\\") 26 | ms = info['FileVersionMS'] 27 | ls = info['FileVersionLS'] 28 | return f"{win32api.HIWORD(ms)}.{win32api.LOWORD(ms)}.{win32api.HIWORD(ls)}" 29 | except: 30 | return "0.1.0" # Fallback version if unable to read from executable 31 | 32 | def check_for_updates(self): 33 | try: 34 | response = requests.get(self.github_raw_url) 35 | if response.status_code == 200: 36 | content = response.text 37 | latest_version = self.extract_version_from_content(content) 38 | if latest_version and self.is_version_greater(latest_version, self.current_version): 39 | self.show_update_popup(latest_version) 40 | return True 41 | else: 42 | print(f"Failed to fetch version info. Status code: {response.status_code}") 43 | except requests.RequestException as e: 44 | print(f"Error checking for updates: {e}") 45 | return False 46 | 47 | def extract_version_from_content(self, content): 48 | match = re.search(r'filevers=\((\d+),\s*(\d+),\s*(\d+),\s*\d+\)', content) 49 | if match: 50 | return f"{match.group(1)}.{match.group(2)}.{match.group(3)}" 51 | return None 52 | 53 | def is_version_greater(self, v1, v2): 54 | return tuple(map(int, v1.split('.'))) > tuple(map(int, v2.split('.'))) 55 | 56 | def show_update_popup(self, latest_version): 57 | root = tk.Toplevel(self.master) 58 | root.withdraw() # Hide the main window 59 | 60 | message = f"A new version is available!\n\n" \ 61 | f"Current version: {self.current_version}\n" \ 62 | f"Latest version: {latest_version}\n\n" \ 63 | f"Do you want to update?" 
64 | 65 | result = messagebox.askyesno("Update Available", message) 66 | if result: 67 | webbrowser.open(self.github_releases_url) 68 | 69 | root.destroy() # Clean up the hidden root window -------------------------------------------------------------------------------- /templates/nijiiro/song_ABC.nus3bank: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/keitannunes/KeifunsDatatableEditor/07b009f6e4b41017f7fc7de596404f14529a7d04/templates/nijiiro/song_ABC.nus3bank -------------------------------------------------------------------------------- /templates/nijiiro/song_ABCD.nus3bank: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/keitannunes/KeifunsDatatableEditor/07b009f6e4b41017f7fc7de596404f14529a7d04/templates/nijiiro/song_ABCD.nus3bank -------------------------------------------------------------------------------- /templates/nijiiro/song_ABCDE.nus3bank: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/keitannunes/KeifunsDatatableEditor/07b009f6e4b41017f7fc7de596404f14529a7d04/templates/nijiiro/song_ABCDE.nus3bank -------------------------------------------------------------------------------- /templates/nijiiro/song_ABCDEF.nus3bank: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/keitannunes/KeifunsDatatableEditor/07b009f6e4b41017f7fc7de596404f14529a7d04/templates/nijiiro/song_ABCDEF.nus3bank -------------------------------------------------------------------------------- /templates/nijiiro/song_ABCDEFG.nus3bank: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/keitannunes/KeifunsDatatableEditor/07b009f6e4b41017f7fc7de596404f14529a7d04/templates/nijiiro/song_ABCDEFG.nus3bank -------------------------------------------------------------------------------- /templates/nijiiro/song_ABCDEFGH.nus3bank: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/keitannunes/KeifunsDatatableEditor/07b009f6e4b41017f7fc7de596404f14529a7d04/templates/nijiiro/song_ABCDEFGH.nus3bank -------------------------------------------------------------------------------- /version_info.txt: -------------------------------------------------------------------------------- 1 | VSVersionInfo( 2 | ffi=FixedFileInfo( 3 | filevers=(0, 3, 0, 0), 4 | prodvers=(0, 3, 0, 0), 5 | mask=0x3f, 6 | flags=0x0, 7 | OS=0x40004, 8 | fileType=0x1, 9 | subtype=0x0, 10 | date=(0, 0) 11 | ), 12 | kids=[ 13 | StringFileInfo( 14 | [ 15 | StringTable( 16 | u'040904B0', 17 | [StringStruct(u'FileVersion', u'0.3.0'), 18 | StringStruct(u'ProductVersion', u'0.3.0'), 19 | StringStruct(u'FileDescription', u'KeifunsDatatableEditor'), 20 | StringStruct(u'ProductName', u'KeifunsDatatableEditor'), 21 | StringStruct(u'CompanyName', u'Keifun'), 22 | StringStruct(u'LegalCopyright', u'© 2024 Keifun. This software is licensed under the GNU General Public License v3.0.'), 23 | StringStruct(u'OriginalFilename', u'KeifunsDatatableEditor.exe')]) 24 | ]), 25 | VarFileInfo([VarStruct(u'Translation', [1033, 1200])]) 26 | ] 27 | ) --------------------------------------------------------------------------------