├── LICENSE ├── README.md ├── .gitignore └── data_splitter.ipynb /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2023 omardbaa 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Data Splitting Project 2 | 3 | ## Introduction 4 | 5 | This project is a Python script designed to split a CSV file containing data into three parts: JSON, a database, and CSV. The script provides flexibility to customize the distribution percentages for each destination. The primary goal of this script is to help users efficiently split large datasets into various formats for different purposes. 6 | 7 | ## Features 8 | 9 | - Randomly shuffles the data for even distribution among the output files. 10 | - Supports custom distribution percentages for JSON, database, and CSV. 11 | - Saves the data into a JSON file, a CSV file, and a specified database table. 12 | - Displays statistics after data insertion, such as the number of rows in each output file and the percentage of data retained. 13 | 14 | ## Requirements 15 | 16 | - Python 3.x 17 | - pandas 18 | - numpy 19 | - sqlalchemy 20 | 21 | ## How to Use 22 | 23 | 1. Ensure you have Python 3.x installed. 24 | 2. Install the required libraries by running the following command: 25 | pip install pandas numpy sqlalchemy 26 | 27 | 3. Prepare your CSV file and place it in the same directory as the Python script. 28 | 29 | 4. Update the script variables according to your configuration: 30 | - `csv_file_path`: The filename of the CSV file to be split. 31 | - `json_file_path`: The filename for the JSON output file. 32 | - `db_server`: The server name or address of your SQL Server. 33 | - `db_name`: The name of the database where the table will be created. 34 | - `db_driver`: The ODBC driver for the SQL Server. 35 | - `db_table`: The name of the table in the database. 36 | - `trusted_connection`: Set to "yes" if using trusted connection; otherwise, set to "no". 37 | - `distribution`: A tuple representing the distribution percentages for JSON, database, and CSV, respectively. 38 | - `header`: Set to `True` if the CSV file has a header; otherwise, set to `False`. 39 | 40 | 5. Run the Python script: 41 | python script_name.py 42 | 43 | 44 | ## Example 45 | 46 | Suppose you have a large CSV file named `cleaned_repositoriesV2.csv` that contains data on repositories. You can use this script to split the data into JSON, CSV (without header), and insert a portion into a database table. 47 | 48 | ## Notes 49 | 50 | - Make sure the CSV file has a header in the first row. 51 | - Before running the script, ensure that your SQL Server is running and accessible from your Python environment. 52 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | # Byte-compiled / optimized / DLL files 2 | __pycache__/ 3 | *.py[cod] 4 | *$py.class 5 | 6 | # C extensions 7 | *.so 8 | 9 | # Distribution / packaging 10 | .Python 11 | build/ 12 | develop-eggs/ 13 | dist/ 14 | downloads/ 15 | eggs/ 16 | .eggs/ 17 | lib/ 18 | lib64/ 19 | parts/ 20 | sdist/ 21 | var/ 22 | wheels/ 23 | share/python-wheels/ 24 | *.egg-info/ 25 | .installed.cfg 26 | *.egg 27 | MANIFEST 28 | 29 | # PyInstaller 30 | # Usually these files are written by a python script from a template 31 | # before PyInstaller builds the exe, so as to inject date/other infos into it. 32 | *.manifest 33 | *.spec 34 | 35 | # Installer logs 36 | pip-log.txt 37 | pip-delete-this-directory.txt 38 | 39 | # Unit test / coverage reports 40 | htmlcov/ 41 | .tox/ 42 | .nox/ 43 | .coverage 44 | .coverage.* 45 | .cache 46 | nosetests.xml 47 | coverage.xml 48 | *.cover 49 | *.py,cover 50 | .hypothesis/ 51 | .pytest_cache/ 52 | cover/ 53 | 54 | # Translations 55 | *.mo 56 | *.pot 57 | 58 | # Django stuff: 59 | *.log 60 | local_settings.py 61 | db.sqlite3 62 | db.sqlite3-journal 63 | 64 | # Flask stuff: 65 | instance/ 66 | .webassets-cache 67 | 68 | # Scrapy stuff: 69 | .scrapy 70 | 71 | # Sphinx documentation 72 | docs/_build/ 73 | 74 | # PyBuilder 75 | .pybuilder/ 76 | target/ 77 | 78 | # Jupyter Notebook 79 | .ipynb_checkpoints 80 | 81 | # IPython 82 | profile_default/ 83 | ipython_config.py 84 | 85 | # pyenv 86 | # For a library or package, you might want to ignore these files since the code is 87 | # intended to run in multiple environments; otherwise, check them in: 88 | # .python-version 89 | 90 | # pipenv 91 | # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control. 92 | # However, in case of collaboration, if having platform-specific dependencies or dependencies 93 | # having no cross-platform support, pipenv may install dependencies that don't work, or not 94 | # install all needed dependencies. 95 | #Pipfile.lock 96 | 97 | # poetry 98 | # Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control. 99 | # This is especially recommended for binary packages to ensure reproducibility, and is more 100 | # commonly ignored for libraries. 101 | # https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control 102 | #poetry.lock 103 | 104 | # pdm 105 | # Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control. 106 | #pdm.lock 107 | # pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it 108 | # in version control. 109 | # https://pdm.fming.dev/#use-with-ide 110 | .pdm.toml 111 | 112 | # PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm 113 | __pypackages__/ 114 | 115 | # Celery stuff 116 | celerybeat-schedule 117 | celerybeat.pid 118 | 119 | # SageMath parsed files 120 | *.sage.py 121 | 122 | # Environments 123 | .env 124 | .venv 125 | env/ 126 | venv/ 127 | ENV/ 128 | env.bak/ 129 | venv.bak/ 130 | 131 | # Spyder project settings 132 | .spyderproject 133 | .spyproject 134 | 135 | # Rope project settings 136 | .ropeproject 137 | 138 | # mkdocs documentation 139 | /site 140 | 141 | # mypy 142 | .mypy_cache/ 143 | .dmypy.json 144 | dmypy.json 145 | 146 | # Pyre type checker 147 | .pyre/ 148 | 149 | # pytype static type analyzer 150 | .pytype/ 151 | 152 | # Cython debug symbols 153 | cython_debug/ 154 | 155 | # PyCharm 156 | # JetBrains specific template is maintained in a separate JetBrains.gitignore that can 157 | # be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore 158 | # and can be added to the global gitignore or merged into this file. For a more nuclear 159 | # option (not recommended) you can uncomment the following to ignore the entire idea folder. 160 | #.idea/ 161 | -------------------------------------------------------------------------------- /data_splitter.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "code", 5 | "execution_count": 159, 6 | "id": "f67e6065", 7 | "metadata": {}, 8 | "outputs": [ 9 | { 10 | "data": { 11 | "text/plain": [ 12 | "22945" 13 | ] 14 | }, 15 | "execution_count": 159, 16 | "metadata": {}, 17 | "output_type": "execute_result" 18 | } 19 | ], 20 | "source": [ 21 | "data = pd.read_csv(\"cleaned_repositoriesV2.csv\")\n", 22 | "original_length = len(data)\n", 23 | "original_length" 24 | ] 25 | }, 26 | { 27 | "cell_type": "code", 28 | "execution_count": 160, 29 | "id": "88381f33", 30 | "metadata": {}, 31 | "outputs": [], 32 | "source": [ 33 | "import pandas as pd\n", 34 | "import numpy as np\n", 35 | "import json\n", 36 | "import sqlalchemy as sa\n", 37 | "from sqlalchemy import text" 38 | ] 39 | }, 40 | { 41 | "cell_type": "code", 42 | "execution_count": 161, 43 | "id": "437b84e7", 44 | "metadata": {}, 45 | "outputs": [], 46 | "source": [ 47 | "server = \"server\"\n", 48 | "database = \"database\"\n", 49 | "table = \"table\"\n", 50 | "driver = \"driver\"\n" 51 | ] 52 | }, 53 | { 54 | "cell_type": "code", 55 | "execution_count": 162, 56 | "id": "13c51d98", 57 | "metadata": { 58 | "scrolled": true 59 | }, 60 | "outputs": [ 61 | { 62 | "name": "stdout", 63 | "output_type": "stream", 64 | "text": [ 65 | "Nombre de lignes insérées dans la base de données : 6883 (30.00%)\n", 66 | "Nombre de lignes dans le fichier JSON : 6883 (30.00%)\n", 67 | "Nombre de lignes dans le fichier CSV : 9179 (40.00%)\n" 68 | ] 69 | } 70 | ], 71 | "source": [ 72 | "def split_data_to_json_db_csv(csv_file, json_file, db_connection_string, db_table, distribution=(0.3, 0.3, 0.4), header=True):\n", 73 | " # Chargement du fichier CSV\n", 74 | " data = pd.read_csv(csv_file)\n", 75 | " \n", 76 | " # Obtenir la longueur du fichier original\n", 77 | " original_length = len(data)\n", 78 | " \n", 79 | " # Mélanger les données pour la randomisation\n", 80 | " data = data.sample(frac=1, random_state=42).reset_index(drop=True)\n", 81 | " \n", 82 | " # Calculer le nombre de lignes pour chaque destination\n", 83 | " total_rows = len(data)\n", 84 | " json_rows = int(distribution[0] * total_rows)\n", 85 | " db_rows = int(distribution[1] * total_rows)\n", 86 | " \n", 87 | " # Séparer les données en JSON, base de données et CSV\n", 88 | " data_json = data.iloc[:json_rows, :]\n", 89 | " data_db = data.iloc[json_rows:json_rows + db_rows, :]\n", 90 | " data_csv = data.iloc[json_rows + db_rows:, :]\n", 91 | " \n", 92 | " # Sauvegarder les données au format JSON\n", 93 | " data_json.to_json(json_file, orient='records', lines=True)\n", 94 | " \n", 95 | " # Sauvegarder les données CSV sans en-tête\n", 96 | " data_csv.to_csv('cleaned_repositories.csv', index=False, header=False)\n", 97 | " \n", 98 | " # Sauvegarder les données dans la base de données\n", 99 | " engine = sa.create_engine(db_connection_string)\n", 100 | " data_db.to_sql(db_table, engine, if_exists='replace', index=False)\n", 101 | " \n", 102 | " # Afficher les statistiques après l'insertion\n", 103 | " with engine.connect() as conn:\n", 104 | " query = text(f\"SELECT COUNT(*) FROM {db_table};\")\n", 105 | " result = conn.execute(query)\n", 106 | " db_rows_inserted = result.scalar()\n", 107 | " print(f\"Nombre de lignes insérées dans la base de données : {db_rows_inserted} ({(db_rows_inserted / original_length) * 100:.2f}%)\")\n", 108 | " print(f\"Nombre de lignes dans le fichier JSON : {len(data_json)} ({(len(data_json) / original_length) * 100:.2f}%)\")\n", 109 | " print(f\"Nombre de lignes dans le fichier CSV : {len(data_csv)} ({(len(data_csv) / original_length) * 100:.2f}%)\")\n", 110 | "\n", 111 | "if __name__ == \"__main__\":\n", 112 | " csv_file_path = \"cleaned_repositoriesV2.csv\"\n", 113 | " json_file_path = \"cleaned_repositories.json\"\n", 114 | " db_server = server\n", 115 | " db_name = database\n", 116 | " db_driver = driver\n", 117 | " db_table = table\n", 118 | " trusted_connection = \"yes\" # Vous pouvez définir ceci sur \"yes\" ou \"no\" en fonction de votre configuration\n", 119 | " \n", 120 | " # Construire la chaîne de connexion à la base de données\n", 121 | " db_connection_string = f\"mssql+pyodbc:///?trusted_connection={trusted_connection}&driver={db_driver}&server={db_server}&database={db_name}\"\n", 122 | " \n", 123 | " # Définir la répartition (30% JSON, 30% base de données, 40% CSV)\n", 124 | " distribution = (0.3, 0.3, 0.4)\n", 125 | " \n", 126 | " # Appeler la fonction pour diviser les données\n", 127 | " split_data_to_json_db_csv(csv_file_path, json_file_path, db_connection_string, db_table, distribution, header=True)\n" 128 | ] 129 | } 130 | ], 131 | "metadata": { 132 | "kernelspec": { 133 | "display_name": "Python 3 (ipykernel)", 134 | "language": "python", 135 | "name": "python3" 136 | }, 137 | "language_info": { 138 | "codemirror_mode": { 139 | "name": "ipython", 140 | "version": 3 141 | }, 142 | "file_extension": ".py", 143 | "mimetype": "text/x-python", 144 | "name": "python", 145 | "nbconvert_exporter": "python", 146 | "pygments_lexer": "ipython3", 147 | "version": "3.11.3" 148 | } 149 | }, 150 | "nbformat": 4, 151 | "nbformat_minor": 5 152 | } 153 | --------------------------------------------------------------------------------