├── .gitattributes
├── .git_archival.txt
├── specification
│   ├── BrainVisionCoreDataFormat_1-0.pdf
│   └── README.md
├── .mailmap
├── docs
│   ├── index.rst
│   ├── api.rst
│   ├── authors.rst
│   ├── Makefile
│   ├── sphinxext
│   │   └── gh_substitutions.py
│   ├── conf.py
│   └── changelog.rst
├── .github
│   ├── dependabot.yml
│   ├── workflows
│   │   ├── cffconvert.yml
│   │   ├── release.yml
│   │   ├── python_build.yml
│   │   └── python_tests.yml
│   └── CONTRIBUTING.md
├── .gitignore
├── pybv
│   ├── __init__.py
│   └── io.py
├── .git-blame-ignore-revs
├── .readthedocs.yaml
├── .pre-commit-config.yaml
├── LICENSE
├── CITATION.cff
├── pyproject.toml
├── README.rst
└── tests
    └── test_write_brainvision.py
/.gitattributes: -------------------------------------------------------------------------------- 1 | .git_archival.txt export-subst 2 | * text=auto 3 | -------------------------------------------------------------------------------- /.git_archival.txt: -------------------------------------------------------------------------------- 1 | node: 58ddac7c74df2dc6a6b0a5fa2278a6d62fece980 2 | node-date: 2025-12-21T10:22:28+01:00 3 | describe-name: v0.7.6-14-g58ddac7 4 | -------------------------------------------------------------------------------- /specification/BrainVisionCoreDataFormat_1-0.pdf: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/bids-standard/pybv/HEAD/specification/BrainVisionCoreDataFormat_1-0.pdf -------------------------------------------------------------------------------- /.mailmap: -------------------------------------------------------------------------------- 1 | Chris Holdgraf 2 | Chris Holdgraf 3 | Pierre Cutellic -------------------------------------------------------------------------------- /docs/index.rst: -------------------------------------------------------------------------------- 1 | ==== 2 | pybv 3 | ==== 4 | 5 | .. include:: ../README.rst 6 | :start-after: .. docs_readme_include_label 7 | 8 | .. 
toctree:: 9 | :hidden: 10 | :glob: 11 | 12 | api.rst 13 | changelog.rst 14 | -------------------------------------------------------------------------------- /.github/dependabot.yml: -------------------------------------------------------------------------------- 1 | # Documentation 2 | # https://docs.github.com/en/code-security/dependabot/dependabot-version-updates/configuration-options-for-the-dependabot.yml-file 3 | version: 2 4 | updates: 5 | - package-ecosystem: "github-actions" 6 | directory: "/" 7 | schedule: 8 | interval: "weekly" 9 | -------------------------------------------------------------------------------- /.gitignore: -------------------------------------------------------------------------------- 1 | docs/generated 2 | docs/_build/ 3 | .pytest_cache 4 | __pycache__ 5 | *.egg-info 6 | .ipynb_checkpoints 7 | *.pyc 8 | /build 9 | /dist 10 | .coverage 11 | coverage.xml 12 | /coverage 13 | htmlcov/ 14 | .vscode/ 15 | .venv 16 | .idea 17 | .ruff_cache 18 | *.vmrk 19 | *.eeg 20 | *.vhdr 21 | -------------------------------------------------------------------------------- /docs/api.rst: -------------------------------------------------------------------------------- 1 | :orphan: 2 | 3 | ================= 4 | API Documentation 5 | ================= 6 | 7 | Here we list the Application Programming Interface (API) for ``pybv``. 8 | 9 | pybv 10 | ==== 11 | 12 | .. currentmodule:: pybv 13 | 14 | .. 
autosummary:: 15 | :toctree: generated/ 16 | 17 | write_brainvision 18 | -------------------------------------------------------------------------------- /pybv/__init__.py: -------------------------------------------------------------------------------- 1 | """A lightweight I/O utility for the BrainVision data format.""" 2 | 3 | # Authors: pybv developers 4 | # SPDX-License-Identifier: BSD-3-Clause 5 | 6 | try: 7 | from importlib.metadata import version 8 | 9 | __version__ = version("pybv") 10 | except Exception: 11 | __version__ = "0.0.0" 12 | 13 | from pybv.io import write_brainvision 14 | 15 | __all__ = ["write_brainvision"] 16 | -------------------------------------------------------------------------------- /.git-blame-ignore-revs: -------------------------------------------------------------------------------- 1 | # Since git version 2.23, git-blame has a feature to ignore certain commits. 2 | # 3 | # This file contains a list of commits that are ignored in `git blame` 4 | # Set this file as a default ignore file for blame by running: 5 | # 6 | # `git config blame.ignoreRevsFile .git-blame-ignore-revs` 7 | 8 | # GitHub Pull Request 97: Migrate code style to Black 9 | e082af6b224e67c88cff390d7dbf3fb83ef73a6d 10 | -------------------------------------------------------------------------------- /.readthedocs.yaml: -------------------------------------------------------------------------------- 1 | # Read the Docs configuration file for Sphinx projects 2 | # See https://docs.readthedocs.io/en/stable/config-file/v2.html for details 3 | 4 | version: 2 5 | 6 | build: 7 | os: ubuntu-lts-latest 8 | tools: 9 | python: "3.13" 10 | 11 | sphinx: 12 | configuration: docs/conf.py 13 | 14 | python: 15 | install: 16 | - method: pip 17 | path: . 
18 | extra_requirements: 19 | - docs 20 | -------------------------------------------------------------------------------- /specification/README.md: -------------------------------------------------------------------------------- 1 | The file [`BrainVisionCoreDataFormat_1-0.pdf`](BrainVisionCoreDataFormat_1-0.pdf) contains the specification 2 | for the BrainVision Core Data Format (BVCDF). 3 | 4 | The document stored in this repository exists for backup reasons. 5 | The most up to date version of this document can most likely be obtained 6 | directly from the 7 | [BrainProducts website](https://www.brainproducts.com/support-resources/brainvision-core-data-format-1-0/). 8 | -------------------------------------------------------------------------------- /docs/authors.rst: -------------------------------------------------------------------------------- 1 | .. _Adam Li: https://adam2392.github.io/ 2 | .. _Aniket Pradhan: http://home.iiitd.edu.in/~aniket17133/ 3 | .. _Chris Holdgraf: https://bids.berkeley.edu/people/chris-holdgraf 4 | .. _Clemens Brunner: https://cbrnr.github.io/ 5 | .. _Felix Klotzsche: https://github.com/eioe 6 | .. _Phillip Alday: https://palday.bitbucket.io/ 7 | .. _Pierre Cutellic: https://github.com/compmonks 8 | .. _Richard Höchenberger: https://hoechenberger.net/ 9 | .. _Robert Oostenveld: https://github.com/robertoostenveld 10 | .. _Stefan Appelhoff: http://stefanappelhoff.com/ 11 | .. 
_Tristan Stenner: https://github.com/tstenner 12 | -------------------------------------------------------------------------------- /.github/workflows/cffconvert.yml: -------------------------------------------------------------------------------- 1 | name: cffconvert 2 | 3 | on: 4 | push: 5 | paths: 6 | - CITATION.cff 7 | pull_request: 8 | paths: 9 | - CITATION.cff 10 | 11 | jobs: 12 | validate: 13 | name: "validate" 14 | runs-on: ubuntu-latest 15 | steps: 16 | - name: Check out a copy of the repository 17 | uses: actions/checkout@v6 18 | with: 19 | fetch-depth: 0 20 | fetch-tags: true 21 | 22 | - name: Check whether the citation metadata from CITATION.cff is valid 23 | uses: citation-file-format/cffconvert-github-action@2.0.0 24 | with: 25 | args: "--validate" 26 | -------------------------------------------------------------------------------- /.pre-commit-config.yaml: -------------------------------------------------------------------------------- 1 | # See https://pre-commit.com for more information 2 | # See https://pre-commit.com/hooks.html for more hooks 3 | repos: 4 | - repo: https://github.com/pre-commit/pre-commit-hooks 5 | rev: v6.0.0 6 | hooks: 7 | - id: trailing-whitespace 8 | - id: end-of-file-fixer 9 | - id: check-yaml 10 | - id: check-json 11 | - id: check-ast 12 | - id: check-added-large-files 13 | - id: check-case-conflict 14 | - id: check-docstring-first 15 | - repo: https://github.com/astral-sh/ruff-pre-commit 16 | rev: v0.14.7 17 | hooks: 18 | - id: ruff 19 | args: [ --fix ] 20 | - id: ruff-format 21 | 22 | - repo: https://github.com/pappasam/toml-sort 23 | rev: v0.24.3 24 | hooks: 25 | - id: toml-sort-fix 26 | files: pyproject.toml 27 | -------------------------------------------------------------------------------- /docs/Makefile: -------------------------------------------------------------------------------- 1 | # Minimal makefile for Sphinx documentation 2 | # 3 | 4 | # You can set these variables from the command line. 
5 | SPHINXOPTS = -nWT --keep-going 6 | SPHINXBUILD = sphinx-build 7 | SPHINXPROJ = pybv 8 | SOURCEDIR = . 9 | BUILDDIR = _build 10 | 11 | # Put it first so that "make" without argument is like "make help". 12 | help: 13 | @$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 14 | 15 | .PHONY: help Makefile view 16 | 17 | # Catch-all target: route all unknown targets to Sphinx using the new 18 | # "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS). 19 | %: Makefile 20 | @$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O) 21 | 22 | # View the built documentation 23 | view: 24 | @python -c "import webbrowser; webbrowser.open_new_tab('file://$(PWD)/_build/html/index.html')" 25 | -------------------------------------------------------------------------------- /docs/sphinxext/gh_substitutions.py: -------------------------------------------------------------------------------- 1 | """Provide a convenient way to link to GitHub issues and pull requests. 2 | 3 | Link to any issue or PR using :gh:`issue-or-pr-number`. 
4 | 5 | Adapted from: 6 | https://doughellmann.com/blog/2010/05/09/defining-custom-roles-in-sphinx/ 7 | 8 | """ 9 | 10 | from docutils.nodes import reference 11 | from docutils.parsers.rst.roles import set_classes 12 | 13 | 14 | def gh_role(name, rawtext, text, lineno, inliner, options=None, content=None): 15 | """Link to a GitHub issue.""" 16 | if content is None: 17 | content = [] 18 | if options is None: 19 | options = {} 20 | try: 21 | # issue/PR mode (issues/PR-num will redirect to pull/PR-num) 22 | int(text) 23 | except ValueError: 24 | # direct link mode 25 | slug = text 26 | else: 27 | slug = "issues/" + text 28 | text = "#" + text 29 | ref = "https://github.com/bids-standard/pybv/" + slug 30 | set_classes(options) 31 | node = reference(rawtext, text, refuri=ref, **options) 32 | return [node], [] 33 | 34 | 35 | def setup(app): 36 | """Do setup.""" 37 | app.add_role("gh", gh_role) 38 | return 39 | -------------------------------------------------------------------------------- /.github/workflows/release.yml: -------------------------------------------------------------------------------- 1 | # Upload a Python Package using Twine when a release is created 2 | 3 | name: release 4 | on: # yamllint disable-line rule:truthy 5 | release: 6 | types: [published] 7 | push: 8 | branches: 9 | - main 10 | pull_request: 11 | branches: 12 | - main 13 | 14 | permissions: 15 | contents: read 16 | 17 | jobs: 18 | package: 19 | runs-on: ubuntu-latest 20 | steps: 21 | - uses: actions/checkout@v6 22 | with: 23 | fetch-depth: 0 24 | fetch-tags: true 25 | - uses: actions/setup-python@v6 26 | with: 27 | python-version: '3.13' 28 | - name: Install dependencies 29 | run: | 30 | python -m pip install --upgrade pip 31 | python -m pip install --upgrade build twine 32 | - run: python -m build --sdist --wheel 33 | - run: twine check --strict dist/* 34 | - uses: actions/upload-artifact@v6 35 | with: 36 | name: dist 37 | path: dist 38 | include-hidden-files: true 39 | 40 | pypi-upload: 41 
| needs: package 42 | runs-on: ubuntu-latest 43 | if: github.event_name == 'release' 44 | permissions: 45 | id-token: write # for trusted publishing 46 | environment: 47 | name: pypi 48 | url: https://pypi.org/p/pybv 49 | steps: 50 | - uses: actions/download-artifact@v7 51 | with: 52 | name: dist 53 | path: dist 54 | - uses: pypa/gh-action-pypi-publish@release/v1 55 | if: github.event_name == 'release' 56 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | BSD 3-Clause License 2 | 3 | Copyright (c) 2018, pybv developers 4 | All rights reserved. 5 | 6 | Redistribution and use in source and binary forms, with or without 7 | modification, are permitted provided that the following conditions are met: 8 | 9 | * Redistributions of source code must retain the above copyright notice, this 10 | list of conditions and the following disclaimer. 11 | 12 | * Redistributions in binary form must reproduce the above copyright notice, 13 | this list of conditions and the following disclaimer in the documentation 14 | and/or other materials provided with the distribution. 15 | 16 | * Neither the name of the copyright holder nor the names of its 17 | contributors may be used to endorse or promote products derived from 18 | this software without specific prior written permission. 19 | 20 | THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" 21 | AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE 22 | IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 23 | DISCLAIMED. 
IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE 24 | FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL 25 | DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR 26 | SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER 27 | CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, 28 | OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE 29 | OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. 30 | -------------------------------------------------------------------------------- /.github/workflows/python_build.yml: -------------------------------------------------------------------------------- 1 | name: Python build 2 | 3 | concurrency: 4 | group: ${{ github.workflow }}-${{ github.event.number }}-${{ github.event.ref }} 5 | cancel-in-progress: true 6 | 7 | on: 8 | push: 9 | branches: [ main ] 10 | pull_request: 11 | branches: [ main ] 12 | schedule: 13 | - cron: "0 4 * * MON" 14 | 15 | jobs: 16 | build: 17 | runs-on: ${{ matrix.os }} 18 | strategy: 19 | fail-fast: false 20 | matrix: 21 | os: [ubuntu-latest] 22 | python-version: ["3.13"] 23 | env: 24 | TZ: Europe/Berlin 25 | FORCE_COLOR: true 26 | steps: 27 | - name: Set up Python ${{ matrix.python-version }} 28 | uses: actions/setup-python@v6 29 | with: 30 | python-version: ${{ matrix.python-version }} 31 | 32 | - name: Update pip etc. 
33 | run: | 34 | python -m pip install --upgrade pip 35 | python -m pip install --upgrade build twine 36 | 37 | - uses: actions/checkout@v6 38 | with: 39 | fetch-depth: 0 40 | fetch-tags: true 41 | - name: Build sdist 42 | run: python -m build --sdist 43 | - name: Check sdist 44 | run: twine check --strict dist/* 45 | - name: Install sdist 46 | run: python -m pip install ./dist/pybv-* 47 | - name: Clean up working directory 48 | run: rm -rf ./* 49 | - name: Try importing pybv 50 | run: python -c 'import pybv; print(pybv.__version__)' 51 | - name: Remove sdist install 52 | run: python -m pip uninstall -y pybv 53 | 54 | - uses: actions/checkout@v6 55 | with: 56 | fetch-depth: 0 57 | fetch-tags: true 58 | - name: Build wheel 59 | run: python -m build --wheel 60 | - name: Check wheel 61 | run: twine check --strict dist/* 62 | - name: Install wheel 63 | run: python -m pip install ./dist/pybv-*.whl 64 | - name: Clean up working directory 65 | run: rm -rf ./* 66 | - name: Try importing pybv 67 | run: python -c 'import pybv; print(pybv.__version__)' 68 | - name: Remove wheel install 69 | run: python -m pip uninstall -y pybv 70 | -------------------------------------------------------------------------------- /CITATION.cff: -------------------------------------------------------------------------------- 1 | # YAML 1.2 2 | --- 3 | # Metadata for citation of this software according to the CFF format (https://citation-file-format.github.io/) 4 | cff-version: 1.2.0 5 | title: "pybv -- A lightweight I/O utility for the BrainVision data format." 6 | message: "If you use this software, please cite it using the metadata below." 
7 | authors: 8 | - given-names: "Stefan" 9 | family-names: "Appelhoff" 10 | orcid: "https://orcid.org/0000-0001-8002-0877" 11 | - given-names: "Clemens" 12 | family-names: "Brunner" 13 | orcid: "https://orcid.org/0000-0002-6030-2233" 14 | - given-names: "Tristan" 15 | family-names: "Stenner" 16 | orcid: "https://orcid.org/0000-0002-2428-9051" 17 | - given-names: "Chris R." 18 | family-names: "Holdgraf" 19 | orcid: "https://orcid.org/0000-0002-2391-0678" 20 | - given-names: "Richard" 21 | family-names: "Höchenberger" 22 | orcid: "https://orcid.org/0000-0002-0380-4798" 23 | - given-names: "Adam" 24 | family-names: "Li" 25 | orcid: "https://orcid.org/0000-0001-8421-365X" 26 | - given-names: "Phillip" 27 | family-names: "Alday" 28 | orcid: "https://orcid.org/0000-0002-9984-5745" 29 | - given-names: "Aniket" 30 | family-names: "Pradhan" 31 | orcid: https://orcid.org/0000-0002-6705-5116 32 | - given-names: "Pierre" 33 | family-names: "Cutellic" 34 | orcid: "https://orcid.org/0000-0002-7224-9222" 35 | - given-names: "Felix" 36 | family-names: "Klotzsche" 37 | orcid: "https://orcid.org/0000-0003-3985-2481" 38 | - given-names: "Benjamin" 39 | family-names: "Beasley" 40 | orcid: "https://orcid.org/0009-0002-6936-5959" 41 | - given-names: "Robert" 42 | family-names: "Oostenveld" 43 | orcid: "https://orcid.org/0000-0002-1974-1293" 44 | type: software 45 | repository-code: "https://github.com/bids-standard/pybv" 46 | url: "https://pybv.readthedocs.io/" 47 | license: BSD-3-Clause 48 | identifiers: 49 | - description: "Code archive on Zenodo" 50 | type: doi 51 | value: 10.5281/zenodo.5539451 52 | keywords: 53 | - data 54 | - eeg 55 | - vision 56 | - products 57 | - brain 58 | - ieeg 59 | - brainvision 60 | - brainproducts 61 | - vhdr 62 | - vmrk 63 | ... 64 | -------------------------------------------------------------------------------- /docs/conf.py: -------------------------------------------------------------------------------- 1 | """Configure docs. 
2 | 3 | See: https://www.sphinx-doc.org/en/master/usage/configuration.html 4 | """ 5 | 6 | import sys 7 | from datetime import date 8 | from pathlib import Path 9 | 10 | from intersphinx_registry import get_intersphinx_mapping 11 | 12 | import pybv 13 | 14 | curdir = Path(__file__).parent 15 | 16 | sys.path.append((curdir.parent / "pybv").resolve().as_posix()) 17 | sys.path.append((curdir / "sphinxext").resolve().as_posix()) 18 | 19 | # see: https://sphinx.readthedocs.io/en/1.3/extensions.html 20 | extensions = [ 21 | "sphinx.ext.autosummary", 22 | "sphinx.ext.viewcode", 23 | "sphinx.ext.intersphinx", 24 | "numpydoc", 25 | "sphinx_copybutton", 26 | "gh_substitutions", # custom extension, see ./sphinxext/gh_substitutions.py 27 | ] 28 | 29 | # configure sphinx-copybutton 30 | copybutton_prompt_text = r">>> |\.\.\. " 31 | copybutton_prompt_is_regexp = True 32 | 33 | # configure numpydoc 34 | numpydoc_xref_param_type = True 35 | numpydoc_xref_ignore = { 36 | # words 37 | "shape", 38 | "of", 39 | "len", 40 | "or", 41 | # shapes 42 | "n_channels", 43 | "n_times", 44 | "n_events", 45 | } 46 | 47 | # Generate the autosummary 48 | autosummary_generate = True 49 | 50 | # General information about the project. 51 | project = "pybv" 52 | today = date.today().isoformat() 53 | copyright = f"2018, pybv developers. Last updated on {today}" # noqa: A001 54 | author = "pybv developers" 55 | version = pybv.__version__ 56 | release = version 57 | 58 | # List of patterns, relative to source directory, that match files and 59 | # directories to ignore when looking for source files. 
60 | # These patterns also affect html_static_path and html_extra_path 61 | exclude_patterns = ["_build", "Thumbs.db", ".DS_Store"] 62 | 63 | # Define master doc 64 | master_doc = "index" 65 | 66 | # Options for HTML output 67 | html_theme = "alabaster" 68 | html_theme_options = { 69 | "description": "A lightweight I/O utility for the BrainVision data format.", 70 | "fixed_sidebar": True, 71 | "github_button": True, 72 | "github_type": "star", 73 | "github_repo": "pybv", 74 | "github_user": "bids-standard", 75 | } 76 | html_sidebars = { 77 | "**": [ 78 | "about.html", 79 | "navigation.html", 80 | "relations.html", 81 | "searchbox.html", 82 | ] 83 | } 84 | 85 | # When functions from other packages are mentioned, link to them 86 | # Example configuration for intersphinx: refer to the Python standard library. 87 | intersphinx_mapping = get_intersphinx_mapping( 88 | packages={ 89 | "python", 90 | "mne", 91 | "numpy", 92 | } 93 | ) 94 | intersphinx_timeout = 10 95 | -------------------------------------------------------------------------------- /.github/workflows/python_tests.yml: -------------------------------------------------------------------------------- 1 | name: Python tests 2 | 3 | concurrency: 4 | group: ${{ github.workflow }}-${{ github.event.number }}-${{ github.event.ref }} 5 | cancel-in-progress: true 6 | 7 | on: 8 | push: 9 | branches: [ main ] 10 | pull_request: 11 | branches: [ main ] 12 | schedule: 13 | - cron: "0 4 * * MON" 14 | 15 | 16 | jobs: 17 | test: 18 | strategy: 19 | fail-fast: false 20 | matrix: 21 | platform: [ubuntu-latest, macos-latest, windows-latest] 22 | python-version: ["3.10", "3.13"] 23 | mne-version: [mne-stable] 24 | 25 | include: 26 | # special test runs that run only on a single CI system to save resources 27 | # Test development versions 28 | - platform: ubuntu-latest 29 | python-version: "3.13" 30 | mne-version: mne-main 31 | 32 | runs-on: ${{ matrix.platform }} 33 | 34 | env: 35 | TZ: Europe/Berlin 36 | FORCE_COLOR: true 37 | 
steps: 38 | - uses: actions/checkout@v6 39 | with: 40 | fetch-depth: 0 41 | fetch-tags: true 42 | 43 | - name: Set up Python ${{ matrix.python-version }} 44 | uses: actions/setup-python@v6 45 | with: 46 | python-version: ${{ matrix.python-version }} 47 | 48 | - uses: actions/cache@v5 49 | with: 50 | path: ${{ env.pythonLocation }} 51 | key: v-0-${{ env.pythonLocation }}-${{ hashFiles('pyproject.toml') }} 52 | 53 | - name: Install dependencies 54 | run: | 55 | python -m pip install --upgrade pip 56 | python -m pip install -e ".[dev]" 57 | 58 | - name: Install MNE-Python main (development version) 59 | if: matrix.mne-version == 'mne-main' 60 | run: | 61 | python -m pip install -U https://github.com/mne-tools/mne-python/archive/refs/heads/main.zip 62 | 63 | - name: Display versions and environment information 64 | run: | 65 | echo $TZ 66 | date 67 | python --version 68 | which python 69 | mne sys_info 70 | 71 | - name: Check formatting 72 | if: matrix.platform == 'ubuntu-latest' 73 | run: | 74 | pre-commit run --all-files 75 | 76 | - name: Test with pytest 77 | # Options defined in pyproject.toml 78 | run: pytest 79 | 80 | - name: Build docs 81 | run: | 82 | make -C docs/ clean 83 | make -C docs/ html 84 | 85 | - name: Upload artifacts 86 | if: ${{ matrix.platform == 'ubuntu-latest' && matrix.python-version == '3.13' && matrix.mne-version == 'mne-stable'}} 87 | uses: actions/upload-artifact@v6 88 | with: 89 | name: docs-artifact 90 | path: docs/_build/html 91 | include-hidden-files: true 92 | 93 | - name: Upload coverage report 94 | uses: codecov/codecov-action@v5 95 | with: 96 | token: ${{ secrets.CODECOV_TOKEN }} 97 | files: ./coverage.xml 98 | -------------------------------------------------------------------------------- /pyproject.toml: -------------------------------------------------------------------------------- 1 | [build-system] 2 | build-backend = "hatchling.build" 3 | requires = ["hatch-vcs", "hatchling"] 4 | 5 | [project] 6 | authors = [ 7 | {name = 
"pybv developers"}, 8 | ] 9 | classifiers = [ 10 | "Intended Audience :: Developers", 11 | "Intended Audience :: Science/Research", 12 | "License :: OSI Approved", 13 | "Operating System :: MacOS", 14 | "Operating System :: Microsoft :: Windows", 15 | "Operating System :: POSIX :: Linux", 16 | "Programming Language :: Python :: 3.10", 17 | "Programming Language :: Python :: 3.11", 18 | "Programming Language :: Python :: 3.12", 19 | "Programming Language :: Python :: 3.13", 20 | "Programming Language :: Python", 21 | "Topic :: Scientific/Engineering", 22 | ] 23 | dependencies = [ 24 | "numpy >= 1.18.1", 25 | ] 26 | description = "pybv - a lightweight I/O utility for the BrainVision data format" 27 | dynamic = ["version"] 28 | keywords = ["Brain Products", "BrainVision", "eeg", "vhdr", "vmrk"] 29 | license = {file = "LICENSE"} 30 | maintainers = [ 31 | {email = "stefan.appelhoff@mailbox.org", name = "Stefan Appelhoff"}, 32 | ] 33 | name = "pybv" 34 | readme = {content-type = "text/x-rst", file = "README.rst"} 35 | requires-python = ">=3.10" 36 | 37 | [project.optional-dependencies] 38 | dev = ["ipykernel", "ipython", "pybv[test,docs]"] 39 | docs = [ 40 | "intersphinx_registry", 41 | "matplotlib", 42 | "numpydoc", 43 | "sphinx-copybutton", 44 | "sphinx<9", 45 | ] 46 | test = [ 47 | "build", 48 | "matplotlib", 49 | "mne", 50 | "packaging", 51 | "pre-commit", 52 | "pytest", 53 | "pytest-cov", 54 | "pytest-sugar", 55 | "ruff", 56 | "twine", 57 | ] 58 | 59 | [project.urls] 60 | Documentation = "https://pybv.readthedocs.io" 61 | Issues = "https://github.com/bids-standard/pybv/issues" 62 | Repository = "https://github.com/bids-standard/pybv" 63 | 64 | [tool.coverage.report] 65 | # Regexes for lines to exclude from consideration 66 | exclude_lines = ["if 0:", "if __name__ == .__main__.:", "pragma: no cover"] 67 | 68 | [tool.coverage.run] 69 | omit = ["*tests*"] 70 | 71 | [tool.hatch.build] 72 | exclude = [ 73 | "/.*", 74 | "/.github/**", 75 | "/docs", 76 | "/specification", 
77 | "tests/**", 78 | ] 79 | 80 | [tool.hatch.metadata] 81 | allow-direct-references = true # allow specifying URLs in our dependencies 82 | 83 | [tool.hatch.version] 84 | raw-options = {version_scheme = "release-branch-semver"} 85 | source = "vcs" 86 | 87 | [tool.pytest.ini_options] 88 | addopts = """. --doctest-modules --cov=pybv/ --cov-report=xml --cov-config=pyproject.toml --verbose -s""" 89 | filterwarnings = [ 90 | ] 91 | 92 | [tool.ruff.lint] 93 | ignore = ["D203", "D213"] 94 | select = ["A", "B006", "D", "E", "F", "I", "UP", "W"] 95 | 96 | [tool.ruff.lint.pydocstyle] 97 | convention = "numpy" 98 | 99 | [tool.tomlsort] 100 | all = true 101 | ignore_case = true 102 | spaces_before_inline_comment = 2 103 | trailing_comma_inline_array = true 104 | -------------------------------------------------------------------------------- /.github/CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributions 2 | 3 | Contributions are welcome in the form of feedback and discussion in issues, 4 | or pull requests for changes to the code. 5 | 6 | Once the implementation of a piece of functionality is considered to be free of 7 | bugs and properly documented, it can be incorporated into the `main` branch. 8 | 9 | To help developing `pybv`, 10 | you will need a few adjustments to your installation as shown below. 11 | 12 | **All contributions are expected to follow the** 13 | [**Code of Conduct of the bids-standard GitHub organization.**](https://github.com/bids-standard/.github/blob/master/CODE_OF_CONDUCT.md) 14 | 15 | ## Install the development version 16 | 17 | First make a fork of the repository under your `USERNAME` GitHub account. 
18 | Then, in your Python environment follow these steps: 19 | 20 | ```Shell 21 | git clone https://github.com/USERNAME/pybv 22 | cd pybv 23 | git fetch --tags --prune --prune-tags 24 | python -m pip install -e ".[dev]" 25 | pre-commit install 26 | ``` 27 | 28 | You may also clone the repository via SSH, depending on your preferred workflow 29 | (`git clone git@github.com:USERNAME/pybv.git`). 30 | 31 | Note that we are working with "pre-commit hooks". 32 | See https://pre-commit.com/ for more information. 33 | 34 | ## Running tests and coverage 35 | 36 | If you have followed the steps to get the development version, 37 | you can run tests as follows. 38 | From the project root, call: 39 | 40 | - `pytest` to run tests and coverage 41 | - `pre-commit run -a` to run style checks (Ruff and some additional hooks) 42 | 43 | ## Building the documentation 44 | 45 | The documentation can be built using [Sphinx](https://www.sphinx-doc.org). 46 | 47 | The publicly accessible documentation is built and hosted by 48 | [Read the Docs](https://readthedocs.org/). 49 | Credentials for Read the Docs are currently held by: 50 | 51 | - [@sappelhoff](https://github.com/sappelhoff/) 52 | - [@choldgraf](https://github.com/choldgraf/) 53 | 54 | ## Info about versioning 55 | 56 | We follow a [semantic versioning scheme](https://semver.org/). 57 | This is implemented via [hatch-vcs](https://github.com/ofek/hatch-vcs). 58 | 59 | ## Making a release on GitHub, PyPI, and Conda-Forge 60 | 61 | `pybv` is regularly released on 62 | [GitHub](https://github.com/bids-standard/pybv/releases), 63 | [PyPI](https://pypi.org/project/pybv/), 64 | and [Conda-Forge](https://anaconda.org/conda-forge/pybv). 
65 | 66 | Credentials are currently held by: 67 | 68 | - GitHub 69 | - Admin 70 | - any admin of the [bids-standard GitHub organization](https://github.com/bids-standard) 71 | - [@sappelhoff](https://github.com/sappelhoff/) 72 | - [@choldgraf](https://github.com/choldgraf/) 73 | - [@cbrnr](https://github.com/cbrnr/) 74 | - Write 75 | - [@hoechenberger](https://github.com/hoechenberger/) 76 | - PyPI 77 | - Owner 78 | - [@sappelhoff](https://github.com/sappelhoff/) 79 | - [@choldgraf](https://github.com/choldgraf/) 80 | - Maintainer 81 | - [@cbrnr](https://github.com/cbrnr/) 82 | - Conda-Forge 83 | - see: https://github.com/conda-forge/pybv-feedstock#feedstock-maintainers 84 | 85 | Releasing on GitHub will automatically trigger a release on PyPI via a GitHub Action 86 | (see `.github/workflows/release.yml`). 87 | A release on PyPI in turn will automatically trigger a release on Conda-Forge. 88 | The release protocol can be briefly described as follows: 89 | 90 | 1. Activate your Python environment for `pybv`. 91 | 1. Make sure all tests pass and the docs are built cleanly. 92 | 1. If applicable, append new authors to the author metadata in the `CITATION.cff` file. 93 | 1. Update `docs/changelog.rst`, renaming the "current" headline to the new 94 | version. 95 | 1. Commit the change and git push to upstream `main`. 96 | Include "REL" in your commit message. 97 | 1. Then, make an annotated tag, for example for the version `v1.2.3`: 98 | `git tag -a -m "v1.2.3" v1.2.3 upstream/main` 99 | (This assumes that you have a git remote configured with the name "upstream" and 100 | pointing to https://github.com/bids-standard/pybv). 101 | **NOTE: Make sure you have your `main` branch up to date for this step!** 102 | 1. `git push --follow-tags upstream` 103 | 1. Make a [release on GitHub](https://help.github.com/en/articles/creating-releases), 104 | using the git tag from the previous step (e.g., `v1.2.3`). 
105 | Fill the tag name into the "Release title" field, and fill the "Description" field 106 | as you see fit. 107 | 1. This will trigger a GitHub Action that will build the package and release it to PyPI. 108 | 109 | Then the release is done and `main` has to be prepared for development of 110 | the next release: 111 | 112 | 1. Add a "Current (unreleased)" headline to `docs/changelog.rst`. 113 | 1. Commit the changes and git push to `main` (or make a pull request). 114 | -------------------------------------------------------------------------------- /docs/changelog.rst: -------------------------------------------------------------------------------- 1 | :orphan: 2 | 3 | ======= 4 | Authors 5 | ======= 6 | 7 | People who contributed to this software across releases (in **alphabetical order**): 8 | 9 | - `Adam Li`_ 10 | - `Aniket Pradhan`_ 11 | - `Chris Holdgraf`_ 12 | - `Clemens Brunner`_ 13 | - `Felix Klotzsche`_ 14 | - `Phillip Alday`_ 15 | - `Pierre Cutellic`_ 16 | - `Richard Höchenberger`_ 17 | - `Robert Oostenveld`_ 18 | - `Stefan Appelhoff`_ 19 | - `Tristan Stenner`_ 20 | 21 | .. _changelog: 22 | 23 | ========= 24 | Changelog 25 | ========= 26 | 27 | Here we list a changelog of pybv. 28 | 29 | .. 
contents:: Contents 30 | :local: 31 | :depth: 1 32 | 33 | 0.8.0 (unreleased) 34 | ================== 35 | 36 | Code health 37 | ~~~~~~~~~~~ 38 | - Add support for Python 3.13 and drop support for Python 3.9, by `Clemens Brunner`_ (:gh:`128`) 39 | 40 | 0.7.6 (2024-11-25) 41 | ================== 42 | 43 | Code health 44 | ~~~~~~~~~~~ 45 | - Various changes to the code infrastructure, by `Stefan Appelhoff`_ (:gh:`124`, :gh:`125`) 46 | 47 | 0.7.5 (2022-10-24) 48 | ================== 49 | 50 | Bug 51 | ~~~ 52 | - Fix in private ``pybv._export`` module: handle annotations that do not contain an entry ``"ch_names"``, by `Felix Klotzsche`_ (:gh:`100`) 53 | - Fix issue with variable reference when the first `event` was *not* of type ``"Stimulus"`` or ``"Response"``, by `Stefan Appelhoff`_ (:gh:`102`) 54 | 55 | 0.7.4 (2022-07-07) 56 | ================== 57 | 58 | Changelog 59 | ~~~~~~~~~ 60 | - Events: accept ``description`` label values ``>= 0`` when ``type`` is ``"Stimulus"`` or ``"Response"``, by `Pierre Cutellic`_ (:gh:`95`) 61 | - Events: accept ``duration == 0``, by `Clemens Brunner`_ (:gh:`96`) 62 | 63 | 0.7.3 (2022-06-04) 64 | ================== 65 | 66 | Bug 67 | ~~~ 68 | - Fix in private ``pybv._export`` module: ``durations`` of 1 sample length are fine even if they are at the last data sample, by `Stefan Appelhoff`_ (:gh:`92`) 69 | 70 | 0.7.2 (2022-06-01) 71 | ================== 72 | 73 | Bug 74 | ~~~ 75 | - Fixed that ``raw.annotations`` must take ``raw.first_time`` into account in private ``pybv._export`` module for export to BrainVision from MNE-Python, by `Stefan Appelhoff`_ (:gh:`91`) 76 | 77 | 0.7.1 (2022-05-28) 78 | ================== 79 | 80 | Bug 81 | ~~~ 82 | - Fixed a bug in private ``pybv._export`` module for export to BrainVision from MNE-Python, by `Stefan Appelhoff`_ (:gh:`90`) 83 | 84 | 0.7.0 (2022-05-28) 85 | ================== 86 | 87 | Changelog 88 | ~~~~~~~~~ 89 | - Added an overview table of alternative software for BrainVision data, by
`Stefan Appelhoff`_ (:gh:`85`) 90 | - :func:`pybv.write_brainvision` now accepts a list of dict as argument to the ``events`` parameter, allowing for more control over what to write to ``.vmrk``, by `Stefan Appelhoff`_ (:gh:`86`) 91 | 92 | 0.6.0 (2021-09-29) 93 | ================== 94 | 95 | Changelog 96 | ~~~~~~~~~ 97 | - :func:`pybv.write_brainvision` gained a new parameter, ``ref_ch_names``, to specify the reference channels used during recording, by `Richard Höchenberger`_ and `Stefan Appelhoff`_ (:gh:`75`) 98 | 99 | API 100 | ~~~ 101 | - :func:`pybv.write_brainvision` now has an ``overwrite`` parameter that defaults to ``False``, by `Stefan Appelhoff`_ (:gh:`78`) 102 | 103 | Bug 104 | ~~~ 105 | - Fix bug where :func:`pybv.write_brainvision` would write the binary file in big-endian on a big-endian system, by `Aniket Pradhan`_, `Clemens Brunner`_, and `Stefan Appelhoff`_ (:gh:`80`) 106 | 107 | 0.5.0 (2021-01-03) 108 | ================== 109 | 110 | Changelog 111 | ~~~~~~~~~ 112 | - :func:`pybv.write_brainvision` adds support for channels with non-volt units, by `Adam Li`_ (:gh:`66`) 113 | - :func:`pybv.write_brainvision` automatically converts ``uV`` and ``μV`` (Greek μ) to ``µV`` (micro sign µ), by `Adam Li`_ (:gh:`66`) 114 | 115 | API 116 | ~~~ 117 | - The ``unit`` parameter in :func:`pybv.write_brainvision` now accepts a list of units (one unit per channel), by `Adam Li`_ (:gh:`66`) 118 | 119 | 0.4.0 (2020-11-08) 120 | ================== 121 | 122 | Changelog 123 | ~~~~~~~~~ 124 | - Passing a "greek small letter mu" to the ``unit`` parameter in :func:`pybv.write_brainvision` instead of a "micro sign" is now permitted, because the former will be automatically converted to the latter, by `Stefan Appelhoff`_ (:gh:`47`) 125 | 126 | Bug 127 | ~~~ 128 | - Fix bug where :func:`pybv.write_brainvision` did not properly deal with commas in channel names and non-numeric events, by `Stefan Appelhoff`_ (:gh:`53`) 129 | - :func:`pybv.write_brainvision` now properly handles
sampling frequencies that are not multiples of 10 (even floats), by `Clemens Brunner`_ (:gh:`59`) 130 | - Fix bug where :func:`pybv.write_brainvision` would write a different resolution to the ``vhdr`` file than specified with the ``resolution`` parameter. Note that this did *not* affect the roundtrip accuracy of the written data, because of internal scaling of the data, by `Stefan Appelhoff`_ (:gh:`58`) 131 | - Fix bug where values for the ``resolution`` parameter like ``0.5``, ``0.123``, ``3.143`` were not written with adequate decimal precision in :func:`pybv.write_brainvision`, by `Stefan Appelhoff`_ (:gh:`58`) 132 | - Fix bug where :func:`pybv.write_brainvision` did not warn users that a particular combination of ``fmt``, ``unit``, and ``resolution`` can lead to broken data, for example high-resolution µV data in int16 format. In such cases, an error is now raised, by `Stefan Appelhoff`_ (:gh:`62`) 133 | 134 | API 135 | ~~~ 136 | - :func:`pybv.write_brainvision` now accepts keyword arguments only.
Positional arguments are no longer allowed, by `Stefan Appelhoff`_ (:gh:`57`) 137 | - The ``scale_data`` parameter was removed from :func:`pybv.write_brainvision`, by `Stefan Appelhoff`_ (:gh:`58`) 138 | - In :func:`pybv.write_brainvision`, the ``unit`` parameter no longer accepts an argument ``None`` to automatically determine a unit based on the ``resolution``, by `Stefan Appelhoff`_ (:gh:`58`) 139 | 140 | 0.3.0 (2020-04-02) 141 | ================== 142 | 143 | Changelog 144 | ~~~~~~~~~ 145 | - Add ``unit`` parameter for exporting signals in a specific unit (V, mV, µV or uV, nV), by `Clemens Brunner`_ (:gh:`39`) 146 | 147 | API 148 | ~~~ 149 | - The order of parameters in :func:`pybv.write_brainvision` has changed, by `Clemens Brunner`_ (:gh:`39`) 150 | 151 | 0.2.0 (2019-08-26) 152 | ================== 153 | 154 | Changelog 155 | ~~~~~~~~~ 156 | - Add option to disable writing a meas_date event (which is also the new default), by `Clemens Brunner`_ (:gh:`32`) 157 | - Support event durations by passing an (N, 3) array to the events parameter (the third column contains the event durations), by `Clemens Brunner`_ (:gh:`33`) 158 | 159 | 0.1.0 (2019-06-23) 160 | ================== 161 | 162 | Changelog 163 | ~~~~~~~~~ 164 | - Add measurement date parameter to public API, by `Stefan Appelhoff`_ (:gh:`29`) 165 | - Add binary format parameter to public API, by `Tristan Stenner`_ (:gh:`22`) 166 | 167 | Bug 168 | ~~~ 169 | - Fix bug with events indexing.
VMRK events are now correctly written with 1-based indexing, by `Stefan Appelhoff`_ (:gh:`29`) 170 | - Fix bug with events that only have integer codes of length less than 3, by `Stefan Appelhoff`_ (:gh:`26`) 171 | 172 | 0.0.2 (2019-04-28) 173 | ================== 174 | 175 | Changelog 176 | ~~~~~~~~~ 177 | - Support channel-specific scaling factors, by `Tristan Stenner`_ (:gh:`17`) 178 | 179 | 0.0.1 (2018-12-10) 180 | ================== 181 | 182 | Changelog 183 | ~~~~~~~~~ 184 | - Initial import from the `philistine `_ package by `Phillip Alday`_ 185 | and removal of the dependency on MNE-Python, by `Chris Holdgraf`_ and `Stefan Appelhoff`_ 186 | 187 | .. include:: authors.rst 188 | -------------------------------------------------------------------------------- /README.rst: -------------------------------------------------------------------------------- 1 | .. image:: https://github.com/bids-standard/pybv/workflows/Python%20build/badge.svg 2 | :target: https://github.com/bids-standard/pybv/actions?query=workflow%3A%22Python+build%22 3 | :alt: Python build 4 | 5 | .. image:: https://github.com/bids-standard/pybv/workflows/Python%20tests/badge.svg 6 | :target: https://github.com/bids-standard/pybv/actions?query=workflow%3A%22Python+tests%22 7 | :alt: Python tests 8 | 9 | .. image:: https://codecov.io/gh/bids-standard/pybv/branch/main/graph/badge.svg 10 | :target: https://codecov.io/gh/bids-standard/pybv 11 | :alt: Test coverage 12 | 13 | .. image:: https://readthedocs.org/projects/pybv/badge/?version=stable 14 | :target: https://pybv.readthedocs.io/en/stable/?badge=stable 15 | :alt: Documentation Status 16 | 17 | .. image:: https://badge.fury.io/py/pybv.svg 18 | :target: https://badge.fury.io/py/pybv 19 | :alt: PyPI version 20 | 21 | .. image:: https://img.shields.io/conda/vn/conda-forge/pybv.svg 22 | :target: https://anaconda.org/conda-forge/pybv 23 | :alt: Conda version 24 | 25 | ..
image:: https://zenodo.org/badge/157434681.svg 26 | :target: https://zenodo.org/badge/latestdoi/157434681 27 | :alt: Zenodo archive 28 | 29 | ==== 30 | pybv 31 | ==== 32 | 33 | For documentation, see the following: 34 | 35 | - `stable documentation `_ 36 | - `latest (development) documentation `_ 37 | 38 | .. docs_readme_include_label 39 | 40 | ``pybv`` is a lightweight I/O utility for the BrainVision data format. 41 | 42 | The BrainVision data format is a recommended data format for use in the 43 | `Brain Imaging Data Structure `_. 44 | 45 | About the BrainVision data format 46 | ================================= 47 | 48 | BrainVision is the name of a file format commonly used for storing electrophysiology data. 49 | Originally, it was put forward by the company `Brain Products `_; 50 | however, the simplicity of the format has allowed a diversity of tools to read from and 51 | write to it. 52 | 53 | The format consists of three separate files: 54 | 55 | 1. A text header file (``.vhdr``) containing metadata 56 | 2. A text marker file (``.vmrk``) containing information about events in the 57 | data 58 | 3. A binary data file (``.eeg``) containing the voltage values of the EEG 59 | 60 | Both text files are based on the 61 | `Microsoft Windows INI format `_, 62 | consisting of: 63 | 64 | - sections marked as ``[square brackets]`` 65 | - comments marked as ``; comment`` 66 | - key-value pairs marked as ``key=value`` 67 | 68 | The binary ``.eeg`` data file is written in little-endian format without a Byte Order 69 | Mark (BOM), in accordance with the specification by Brain Products. 70 | This ensures that the data file is written uniformly irrespective of the 71 | native system architecture. 72 | 73 | Documentation for the BrainVision file format is provided by Brain Products. 74 | You can `view the specification `_ 75 | as hosted by Brain Products. 76 | 77 | Installation 78 | ============ 79 | 80 | ``pybv`` runs on Python version 3.10 or higher.
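To check that your interpreter satisfies this requirement before installing, you can compare ``sys.version_info`` against the minimum version. This is a minimal sketch, not part of ``pybv`` itself:

```python
import sys

# pybv requires Python 3.10 or higher (see above)
REQUIRED = (3, 10)

# tuple comparison is element-wise, so (3, 9) < (3, 10) < (3, 13)
ok = sys.version_info[:2] >= REQUIRED
detected = ".".join(map(str, sys.version_info[:2]))
print(f"Python {detected}:", "OK for pybv" if ok else "too old for pybv")
```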
81 | 82 | ``pybv``'s only dependency is ``numpy``. 83 | However, we currently recommend that you install MNE-Python for reading BrainVision data. 84 | See their `installation instructions `_. 85 | 86 | After you have a working installation of MNE-Python (or only ``numpy`` if you 87 | only want to write data, not read it), you can install ``pybv`` through 88 | the following: 89 | 90 | .. code-block:: text 91 | 92 | python -m pip install --upgrade pybv 93 | 94 | or if you use `conda `_: 95 | 96 | .. code-block:: text 97 | 98 | conda install --channel conda-forge pybv 99 | 100 | For installing the **latest (development)** version of ``pybv``, call: 101 | 102 | .. code-block:: text 103 | 104 | python -m pip install --upgrade https://github.com/bids-standard/pybv/archive/refs/heads/main.zip 105 | 106 | Both the *stable* and the *latest* installation will additionally install 107 | all required dependencies automatically. 108 | The dependencies are defined in the ``pyproject.toml`` file under the 109 | ``dependencies`` and ``project.optional-dependencies`` sections. 110 | 111 | Contributing 112 | ============ 113 | 114 | The development of ``pybv`` is taking place on 115 | `GitHub `_. 116 | 117 | For more information, please see 118 | `CONTRIBUTING.md `_. 119 | 120 | Citing 121 | ====== 122 | 123 | If you use this software in academic work, please cite it using the `Zenodo entry `_. 124 | Metadata is encoded in the `CITATION.cff` file. 125 | 126 | Usage 127 | ===== 128 | 129 | Writing BrainVision files 130 | ------------------------- 131 | 132 | The primary functionality provided by ``pybv`` is the ``write_brainvision`` 133 | function. It writes a numpy array of data, together with the provided metadata, into a 134 | collection of BrainVision files on disk. 135 | 136 | ..
code-block:: python 137 | 138 | from pybv import write_brainvision 139 | 140 | # for further parameters see our API documentation 141 | write_brainvision(data=data, sfreq=sfreq, ch_names=ch_names, 142 | fname_base=fname, folder_out=tmpdir, 143 | events=events) 144 | 145 | Reading BrainVision files 146 | ------------------------- 147 | 148 | Currently, ``pybv`` recommends using `MNE-Python `_ 149 | for reading BrainVision files. 150 | 151 | Here is an example of the MNE-Python code required to read BrainVision data: 152 | 153 | .. code-block:: python 154 | 155 | import mne 156 | 157 | # Import the BrainVision data into an MNE Raw object 158 | raw = mne.io.read_raw_brainvision('tmp/test.vhdr', preload=True) 159 | 160 | # Reconstruct the original events from our Raw object 161 | events, event_ids = mne.events_from_annotations(raw) 162 | 163 | Alternatives 164 | ============ 165 | 166 | The BrainVision data format is very popular, and accordingly there are many 167 | software packages that can read this format or write to it. 168 | The following table is intended as a quick overview of packages similar to 169 | `pybv `_. 170 | Please let us know if you know of additional packages that should be listed here.
171 | 172 | +-----------------------------------------------------------------------------+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ 173 | | Name of software | Language | Notes | 174 | +=============================================================================+======================+=============================================================================================================================================================+ 175 | | `BioSig Project `_ | miscellaneous | Reading and writing capabilities depend on bindings used, see their `overview `_ | 176 | +-----------------------------------------------------------------------------+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ 177 | | `Brainstorm `_ | MATLAB | Read and write, search for ``brainamp`` in their `io functions `_ | 178 | +-----------------------------------------------------------------------------+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ 179 | | `BrainVision Analyzer `_ | n/a, GUI for Windows | Read and write, by Brain Products, requires commercial license | 180 | +-----------------------------------------------------------------------------+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ 181 | | `brainvisionloader.jl `_ | Julia | Read | 182 | 
+-----------------------------------------------------------------------------+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ 183 | | `EEGLAB `_ | MATLAB / Octave | Read and write via `BVA-IO `_ | 184 | +-----------------------------------------------------------------------------+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ 185 | | `FieldTrip `_ | MATLAB | Read and write, search for ``brainvision`` in their `ft_read_data and ft_write_data functions `_ | 186 | +-----------------------------------------------------------------------------+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ 187 | | `MNE-Python `_ | Python | Read (writing via ``pybv``) | 188 | +-----------------------------------------------------------------------------+----------------------+-------------------------------------------------------------------------------------------------------------------------------------------------------------+ 189 | 190 | Acknowledgements 191 | ================ 192 | 193 | This package was originally adapted from the 194 | `Philistine package `_ by 195 | `palday `_. 196 | It copies much of the BrainVision exporting code, but removes the dependence on MNE. 197 | Several features have been added, such as support for individual units for each channel. 
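As a closing technical note, the little-endian guarantee described in the format section above can be reproduced with NumPy's byte-order-aware dtypes. The following is an illustrative sketch of the general technique (not ``pybv``'s internal code); it assumes NumPy is available:

```python
import numpy as np

# Force little-endian float32 on write, regardless of the host byte order:
# the '<' prefix pins the byte order, so identical bytes are produced on
# any machine, matching the BrainVision requirement described above.
samples = np.array([1.5, -2.25, 3.0], dtype="<f4")
raw_bytes = samples.tobytes()

# On read, interpret the bytes with the same explicit dtype to round-trip.
restored = np.frombuffer(raw_bytes, dtype="<f4")
print(restored.tolist())  # [1.5, -2.25, 3.0] -- exactly representable in float32
```

Note that plain ``tobytes``/``frombuffer`` calls like the above dump and reinterpret raw sample bytes only, so no Byte Order Mark is ever emitted, in line with the specification.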
198 | -------------------------------------------------------------------------------- /tests/test_write_brainvision.py: -------------------------------------------------------------------------------- 1 | """BrainVision writer tests.""" 2 | 3 | # Authors: pybv developers 4 | # SPDX-License-Identifier: BSD-3-Clause 5 | 6 | import itertools 7 | import os 8 | import re 9 | from datetime import datetime, timezone 10 | from importlib.metadata import version 11 | 12 | import mne 13 | import numpy as np 14 | import pytest 15 | from numpy.testing import assert_allclose, assert_array_equal 16 | from packaging.version import Version 17 | 18 | from pybv.io import ( 19 | SUPPORTED_FORMATS, 20 | SUPPORTED_VOLTAGE_SCALINGS, 21 | _check_data_in_range, 22 | _chk_fmt, 23 | _scale_data_to_unit, 24 | _write_bveeg_file, 25 | _write_vhdr_file, 26 | write_brainvision, 27 | ) 28 | 29 | # create testing data 30 | fname = "pybv" 31 | rng = np.random.default_rng(1337) 32 | n_chans = 10 33 | ch_names = [f"ch_{i}" for i in range(n_chans)] 34 | sfreq = 1000 35 | n_seconds = 5 36 | n_times = n_seconds * sfreq 37 | event_times = np.arange(1, 5) 38 | events_array = np.column_stack([event_times * sfreq, [1, 1, 2, 2]]).astype(int) 39 | events = [ 40 | { 41 | "onset": 1, 42 | "duration": 10, 43 | "description": 1, 44 | "type": "Stimulus", 45 | "channels": "all", 46 | }, 47 | { 48 | "onset": 0, 49 | "description": "Some string :-)", 50 | "type": "Comment", 51 | "channels": "ch_1", 52 | }, 53 | { 54 | "onset": 1000, 55 | "description": 2, 56 | "type": "Response", 57 | "channels": ["ch_1", "ch_2"], 58 | }, 59 | { 60 | "onset": 200, 61 | "description": 1234, 62 | "channels": [], 63 | }, 64 | ] 65 | # scale random data to reasonable EEG signal magnitude in V 66 | data = rng.normal(size=(n_chans, n_times)) * 10 * 1e-6 67 | 68 | # add reference channel 69 | ref_ch_name = ch_names[-1] 70 | data[-1, :] = 0.0 71 | 72 | 73 | def test_non_stim_resp_event_first(tmpdir): 74 | """Test that events are handled well 
regardless of which type comes first.""" 75 | kwargs = dict( 76 | data=data, 77 | sfreq=sfreq, 78 | ch_names=ch_names, 79 | fname_base=fname, 80 | folder_out=tmpdir, 81 | overwrite=True, 82 | ) 83 | for i in itertools.permutations(range(len(events))): 84 | ev = list(np.array(events)[list(i)]) 85 | write_brainvision(**kwargs, events=ev) 86 | 87 | 88 | @pytest.mark.parametrize( 89 | "events_errormsg", 90 | [ 91 | ({}, "events must be an array, a list of dict, or None"), 92 | (rng.normal(size=(10, 20, 30)), "When array, events must be 2D, but got 3"), 93 | ( 94 | rng.normal(size=(10, 4)), 95 | "When array, events must have 2 or 3 columns, but got: 4", 96 | ), 97 | ( 98 | np.array([i for i in "abcd"]).reshape(2, -1), 99 | "When array, all entries in events must be int", 100 | ), 101 | (events_array, ""), 102 | (None, ""), 103 | (np.arange(90).reshape(30, 3), ""), 104 | ], 105 | ) 106 | def test_bv_writer_events_array(tmpdir, events_errormsg): 107 | """Test that all array-based event options work without throwing an error.""" 108 | kwargs = dict( 109 | data=data, sfreq=sfreq, ch_names=ch_names, fname_base=fname, folder_out=tmpdir 110 | ) 111 | 112 | ev, errormsg = events_errormsg 113 | if errormsg: 114 | with pytest.raises(ValueError, match=errormsg): 115 | write_brainvision(**kwargs, events=ev) 116 | else: 117 | write_brainvision(**kwargs, events=ev) 118 | 119 | 120 | @pytest.mark.parametrize( 121 | "events_errormsg", 122 | [ 123 | ([{}, {"onset": 1}], "must have the keys 'onset' and 'description'"), 124 | ( 125 | [{"onset": 1, "description": 2}, np.ones(12)], 126 | "events must be a list of dict", 127 | ), 128 | ([{"onset": "1", "description": 2}], "events: `onset` must be int"), 129 | ( 130 | [{"onset": -1, "description": 2}], 131 | r"events: at least one onset sample is not in range of data \(0-4999\)", 132 | ), 133 | ( 134 | [{"onset": 100, "description": 1, "duration": -1}], 135 | "events: at least one duration is negative.", 136 | ), 137 | ( 138 | [{"onset": 100, 
"description": 1, "duration": 4901}], 139 | "events: at least one event has a duration that exceeds", 140 | ), 141 | ( 142 | [{"onset": 1, "description": 2, "type": "bogus"}], 143 | "`type` must be one of", 144 | ), 145 | ( 146 | [{"onset": 1, "description": "bogus"}], 147 | "when `type` is Stimulus, `description` must be non-negative int", 148 | ), 149 | ( 150 | [{"onset": 1, "description": {}, "type": "Comment"}], 151 | "when `type` is Comment, `description` must be str or int", 152 | ), 153 | ( 154 | [{"onset": 1, "description": -1}], 155 | "when `type` is Stimulus, descriptions must be non-negative ints.", 156 | ), 157 | ( 158 | [{"onset": 1, "description": 1, "channels": "bogus"}], 159 | "found channel .* bogus", 160 | ), 161 | ( 162 | [{"onset": 1, "description": 1, "channels": ["ch_1", "ch_1"]}], 163 | "events: found duplicate channel names", 164 | ), 165 | ( 166 | [{"onset": 1, "description": 1, "channels": ["ch_1", "ch_2"]}], 167 | "warn___feature may not be supported", 168 | ), 169 | ( 170 | [{"onset": 1, "description": 1, "channels": 1}], 171 | "events: `channels` must be str or list of str", 172 | ), 173 | ( 174 | [{"onset": 1, "description": 1, "channels": [{}]}], 175 | "be list of str or list of int corresponding to ch_names", 176 | ), 177 | ([], ""), 178 | (events, "warn___you specified at least one event that impacts more than one"), 179 | ], 180 | ) 181 | def test_bv_writer_events_list_of_dict(tmpdir, events_errormsg): 182 | """Test that events are written properly when list of dict.""" 183 | kwargs = dict( 184 | data=data, sfreq=sfreq, ch_names=ch_names, fname_base=fname, folder_out=tmpdir 185 | ) 186 | 187 | ev, errormsg = events_errormsg 188 | if errormsg.startswith("warn"): 189 | warnmsg = errormsg.split("___")[-1] 190 | with pytest.warns(UserWarning, match=warnmsg): 191 | write_brainvision(**kwargs, events=ev) 192 | elif len(errormsg) > 0: 193 | with pytest.raises(ValueError, match=errormsg): 194 | write_brainvision(**kwargs, events=ev) 195 | 
else: 196 | write_brainvision(**kwargs, events=ev) 197 | 198 | 199 | def test_kw_only_args(tmpdir): 200 | """Test that keyword only arguments are allowed.""" 201 | msg = r"write_brainvision\(\) takes 0 positional arguments" 202 | with pytest.raises(TypeError, match=msg): 203 | write_brainvision(data, sfreq, ch_names, fname, folder_out=tmpdir) 204 | 205 | 206 | def test_bv_writer_inputs(tmpdir): 207 | """Test data, channels, sfreq, resolution, ref_ch_names, and overwrite.""" 208 | with pytest.raises(ValueError, match="data must be np.ndarray"): 209 | write_brainvision( 210 | data=[1, 2, 3], 211 | sfreq=sfreq, 212 | ch_names=ch_names, 213 | fname_base=fname, 214 | folder_out=tmpdir, 215 | ) 216 | with pytest.raises(ValueError, match="data must be 2D: shape"): 217 | write_brainvision( 218 | data=rng.normal(size=(3, 3, 3)), 219 | sfreq=sfreq, 220 | ch_names=ch_names, 221 | fname_base=fname, 222 | folder_out=tmpdir, 223 | ) 224 | with pytest.raises(ValueError, match="Number of channels in data"): 225 | write_brainvision( 226 | data=data[1:, :], 227 | sfreq=sfreq, 228 | ch_names=ch_names, 229 | fname_base=fname, 230 | folder_out=tmpdir, 231 | ) 232 | with pytest.raises(ValueError, match="Channel names must be unique"): 233 | write_brainvision( 234 | data=data[0:2, :], 235 | sfreq=sfreq, 236 | ch_names=["b", "b"], 237 | fname_base=fname, 238 | folder_out=tmpdir, 239 | ) 240 | with pytest.raises(ValueError, match="ch_names must be a list of str."): 241 | write_brainvision( 242 | data=data[0:2, :], 243 | sfreq=sfreq, 244 | ch_names=["b", 2.3], 245 | fname_base=fname, 246 | folder_out=tmpdir, 247 | ) 248 | with pytest.raises(ValueError, match="sfreq must be one of "): 249 | write_brainvision( 250 | data=data, 251 | sfreq="100", 252 | ch_names=ch_names, 253 | fname_base=fname, 254 | folder_out=tmpdir, 255 | ) 256 | with pytest.raises(ValueError, match="Resolution should be numeric, is"): 257 | write_brainvision( 258 | data=data, 259 | sfreq=sfreq, 260 | ch_names=ch_names, 261 
| fname_base=fname, 262 | folder_out=tmpdir, 263 | resolution="y", 264 | ) 265 | with pytest.raises(ValueError, match="Resolution should be > 0"): 266 | write_brainvision( 267 | data=data, 268 | sfreq=sfreq, 269 | ch_names=ch_names, 270 | fname_base=fname, 271 | folder_out=tmpdir, 272 | resolution=0, 273 | ) 274 | with pytest.raises(ValueError, match="Resolution should be one or n_chan"): 275 | write_brainvision( 276 | data=data, 277 | sfreq=sfreq, 278 | ch_names=ch_names, 279 | fname_base=fname, 280 | folder_out=tmpdir, 281 | resolution=np.arange(n_chans - 1), 282 | ) 283 | with pytest.raises(ValueError, match="overwrite must be a boolean"): 284 | write_brainvision( 285 | data=data[1:, :], 286 | sfreq=sfreq, 287 | ch_names=ch_names, 288 | fname_base=fname, 289 | folder_out=tmpdir, 290 | overwrite=1, 291 | ) 292 | with pytest.raises(ValueError, match="number of reference channel names"): 293 | write_brainvision( 294 | data=data, 295 | sfreq=sfreq, 296 | ch_names=ch_names, 297 | ref_ch_names=["foo", "bar"], 298 | fname_base=fname, 299 | folder_out=tmpdir, 300 | ) 301 | # passing data that's not all-zero for a reference channel should raise an exception 302 | data_ = data.copy() 303 | data_[ch_names.index(ref_ch_name), :] = 5 304 | with pytest.raises(ValueError, match="reference channel.*not.*zero"): 305 | write_brainvision( 306 | data=data_, 307 | sfreq=sfreq, 308 | ch_names=ch_names, 309 | ref_ch_names=ref_ch_name, 310 | fname_base=fname, 311 | folder_out=tmpdir, 312 | ) 313 | # empty str is a reserved value for ref_ch_names 314 | with pytest.raises(ValueError, match="Empty strings are reserved values"): 315 | _ref_ch_names = [""] + ch_names[1:] 316 | write_brainvision( 317 | data=data_, 318 | sfreq=sfreq, 319 | ch_names=ch_names, 320 | ref_ch_names=_ref_ch_names, 321 | fname_base=fname, 322 | folder_out=tmpdir, 323 | ) 324 | # try ambiguous list of dict events with "all" ch 325 | with pytest.raises(ValueError, match="Found channel named 'all'.*ambiguous"): 326 | 
write_brainvision( 327 | data=data[:1, :], 328 | sfreq=sfreq, 329 | ch_names=["all"], 330 | fname_base=fname, 331 | folder_out=tmpdir, 332 | events=events, 333 | ) 334 | 335 | 336 | def test_bv_bad_format(tmpdir): 337 | """Test that bad formats throw an error.""" 338 | vhdr_fname = tmpdir / fname + ".vhdr" 339 | vmrk_fname = tmpdir / fname + ".vmrk" 340 | eeg_fname = tmpdir / fname + ".eeg" 341 | 342 | with pytest.raises(ValueError, match="Orientation bad not supported"): 343 | _write_vhdr_file( 344 | vhdr_fname=vhdr_fname, 345 | vmrk_fname=vmrk_fname, 346 | eeg_fname=eeg_fname, 347 | data=data, 348 | sfreq=sfreq, 349 | ch_names=ch_names, 350 | ref_ch_names=None, 351 | orientation="bad", 352 | format="binary_float32", 353 | resolution=1e-6, 354 | units=["V"] * n_chans, 355 | ) 356 | with pytest.raises(ValueError, match="Data format bad not supported"): 357 | _write_vhdr_file( 358 | vhdr_fname=vhdr_fname, 359 | vmrk_fname=vmrk_fname, 360 | eeg_fname=eeg_fname, 361 | data=data, 362 | sfreq=sfreq, 363 | ch_names=ch_names, 364 | ref_ch_names=None, 365 | orientation="multiplexed", 366 | format="bad", 367 | resolution=1e-6, 368 | units=["V"] * n_chans, 369 | ) 370 | with pytest.raises(ValueError, match="Orientation bad not supported"): 371 | _write_bveeg_file( 372 | eeg_fname, 373 | data=data, 374 | orientation="bad", 375 | format="bad", 376 | resolution=1e-6, 377 | units=["µV"] * n_chans, 378 | ) 379 | with pytest.raises(ValueError, match="Data format bad not supported"): 380 | _write_bveeg_file( 381 | eeg_fname, 382 | data=data, 383 | orientation="multiplexed", 384 | format="bad", 385 | resolution=1e-6, 386 | units=["µV"] * n_chans, 387 | ) 388 | 389 | 390 | @pytest.mark.parametrize( 391 | "meas_date,match", 392 | [ 393 | (1, "`meas_date` must be of type str, datetime"), 394 | ("", "Got a str for `meas_date`, but it was"), 395 | ("1973", "Got a str for `meas_date`, but it was"), 396 | ], 397 | ) 398 | def test_bad_meas_date(tmpdir, meas_date, match): 399 | """Test that 
bad measurement dates raise errors.""" 400 | with pytest.raises(ValueError, match=match): 401 | write_brainvision( 402 | data=data, 403 | sfreq=sfreq, 404 | ch_names=ch_names, 405 | fname_base=fname, 406 | folder_out=tmpdir, 407 | meas_date=meas_date, 408 | ) 409 | 410 | 411 | @pytest.mark.parametrize( 412 | "ch_names_tricky", 413 | [ 414 | [ch + " f o o" for ch in ch_names], 415 | [ch + ",foo" for ch in ch_names], 416 | ], 417 | ) 418 | def test_comma_in_ch_name(tmpdir, ch_names_tricky): 419 | """Test that writing channel names with special characters works.""" 420 | mne = pytest.importorskip("mne", minversion="1.0") 421 | # write and read data to BV format 422 | write_brainvision( 423 | data=data, 424 | sfreq=sfreq, 425 | ch_names=ch_names_tricky, 426 | fname_base=fname, 427 | folder_out=tmpdir, 428 | ) 429 | vhdr_fname = tmpdir / fname + ".vhdr" 430 | raw_written = mne.io.read_raw_brainvision(vhdr_fname=vhdr_fname, preload=True) 431 | 432 | assert ch_names_tricky == raw_written.ch_names # channels 433 | 434 | assert_allclose(data, raw_written._data) # data round-trip 435 | 436 | 437 | @pytest.mark.parametrize( 438 | "meas_date, ref_ch_names", 439 | [("20000101120000000000", ref_ch_name), (datetime(2000, 1, 1, 12, 0, 0, 0), None)], 440 | ) 441 | def test_write_read_cycle(tmpdir, meas_date, ref_ch_names): 442 | """Test that a write/read cycle produces identical data.""" 443 | # First fail writing due to wrong unit 444 | unsupported_unit = "rV" 445 | with pytest.warns(UserWarning, match="Encountered unsupported non-voltage unit"): 446 | write_brainvision( 447 | data=data, 448 | sfreq=sfreq, 449 | ch_names=ch_names, 450 | ref_ch_names=ref_ch_names, 451 | fname_base=fname, 452 | folder_out=tmpdir, 453 | unit=unsupported_unit, 454 | overwrite=True, 455 | ) 456 | 457 | # write and read data to BV format 458 | # ensure that greek small letter mu gets converted to micro sign 459 | with pytest.warns(UserWarning, match="Encountered small Greek letter mu"): 460 | 
write_brainvision( 461 | data=data, 462 | sfreq=sfreq, 463 | ch_names=ch_names, 464 | ref_ch_names=ref_ch_names, 465 | fname_base=fname, 466 | folder_out=tmpdir, 467 | events=events_array, 468 | resolution=np.power(10.0, -np.arange(10)), 469 | unit="μV", 470 | meas_date=meas_date, 471 | overwrite=True, 472 | ) 473 | vhdr_fname = tmpdir / fname + ".vhdr" 474 | raw_written = mne.io.read_raw_brainvision(vhdr_fname=vhdr_fname, preload=True) 475 | 476 | if Version(version("mne")).release < (1, 10): 477 | # delete the first annotation because it's just marking a new segment 478 | # (this is fixed in MNE-Python 1.10 or later) 479 | raw_written.annotations.delete(0) 480 | # convert our annotations to events 481 | events_written, event_id = mne.events_from_annotations(raw_written) 482 | 483 | # sfreq 484 | assert sfreq == raw_written.info["sfreq"] 485 | 486 | # event timing should be exactly the same 487 | assert_array_equal(events_array[:, 0], events_written[:, 0]) 488 | assert_array_equal(events_array[:, 1], events_written[:, 2]) 489 | 490 | assert len(event_id) == 2 # there should be two unique event types 491 | 492 | assert_allclose(data, raw_written._data) # data round-trip 493 | 494 | assert_array_equal(ch_names, raw_written.ch_names) # channels 495 | 496 | # measurement dates must match 497 | assert raw_written.info["meas_date"] == datetime( 498 | 2000, 1, 1, 12, 0, 0, 0, tzinfo=timezone.utc 499 | ) 500 | 501 | 502 | resolutions = np.logspace(0, -9, 10) 503 | resolutions = np.hstack((resolutions, [np.pi, 0.5, 0.27e-6, 13])) 504 | 505 | 506 | @pytest.mark.filterwarnings("ignore:Encountered unsupported voltage units") 507 | @pytest.mark.filterwarnings("ignore:Encountered small Greek letter mu") 508 | @pytest.mark.parametrize("format", SUPPORTED_FORMATS.keys()) 509 | @pytest.mark.parametrize("resolution", resolutions) 510 | @pytest.mark.parametrize("unit", SUPPORTED_VOLTAGE_SCALINGS) 511 | def test_format_resolution_unit(tmpdir, format, resolution, unit): # noqa: A002 
512 | """Test different combinations of formats, resolutions, and units. 513 | 514 | This test would raise warnings for several cases of "unit" ("Encountered unsupported 515 | voltage units"), and a specific warning if "unit" is "uV" ("Encountered small Greek 516 | letter mu"). We ignore those warnings throughout the test. 517 | 518 | Each run of the test is furthermore expected to exit early with a ValueError for 519 | combinations of "resolution" and "format" that would result in data that cannot 520 | accurately be written. 521 | """ 522 | # check whether this test will be numerically possible 523 | tmpdata = _scale_data_to_unit(data.copy(), [unit] * n_chans) 524 | tmpdata = tmpdata * np.atleast_2d(1 / resolution).T 525 | _, dtype = _chk_fmt(format) 526 | data_will_fit = _check_data_in_range(tmpdata, dtype) 527 | 528 | kwargs = dict( 529 | data=data, 530 | sfreq=sfreq, 531 | ch_names=ch_names, 532 | fname_base=fname, 533 | folder_out=tmpdir, 534 | resolution=resolution, 535 | unit=unit, 536 | fmt=format, 537 | ) 538 | 539 | if not data_will_fit: 540 | # end this test early 541 | match = f"can not be represented in '{format}' given" 542 | with pytest.raises(ValueError, match=match): 543 | write_brainvision(**kwargs) 544 | return 545 | 546 | write_brainvision(**kwargs) 547 | vhdr_fname = tmpdir / fname + ".vhdr" 548 | raw_written = mne.io.read_raw_brainvision(vhdr_fname=vhdr_fname, preload=True) 549 | 550 | # check that the correct units were written in the BV file 551 | orig_units = [u for key, u in raw_written._orig_units.items()] 552 | assert len(set(orig_units)) == 1 553 | if unit is not None: 554 | assert orig_units[0] == unit.replace("u", "µ") 555 | 556 | # check round trip of data: in binary_int16 format, the tolerance is given by the 557 | # lowest resolution 558 | if format == "binary_int16": 559 | absolute_tolerance = np.atleast_2d(resolution).min() 560 | else: 561 | absolute_tolerance = 0 562 | 563 | assert_allclose(data, raw_written.get_data(), 
atol=absolute_tolerance) 564 | 565 | 566 | @pytest.mark.parametrize("sfreq", [100, 125, 128, 500, 512, 1000, 1024, 512.1]) 567 | def test_sampling_frequencies(tmpdir, sfreq): 568 | """Test different sampling frequencies.""" 569 | # sampling frequency gets written as sampling interval 570 | write_brainvision( 571 | data=data, sfreq=sfreq, ch_names=ch_names, fname_base=fname, folder_out=tmpdir 572 | ) 573 | vhdr_fname = tmpdir / fname + ".vhdr" 574 | raw_written = mne.io.read_raw_brainvision(vhdr_fname=vhdr_fname, preload=True) 575 | assert_allclose(sfreq, raw_written.info["sfreq"]) 576 | 577 | 578 | @pytest.mark.parametrize("unit", SUPPORTED_VOLTAGE_SCALINGS) 579 | def test_write_multiple_units(tmpdir, unit): 580 | """Test writing data with a list of units.""" 581 | wrong_num_units = [unit] 582 | with pytest.raises(ValueError, match="Number of channels in unit"): 583 | write_brainvision( 584 | data=data, 585 | sfreq=sfreq, 586 | ch_names=ch_names, 587 | fname_base=fname, 588 | folder_out=tmpdir, 589 | unit=wrong_num_units, 590 | ) 591 | 592 | expect_warn = unit in ["V", "mV", "nV", "uV"] 593 | expect_match = "Converting" if unit == "uV" else "unsupported" 594 | 595 | # write brain vision file 596 | vhdr_fname = tmpdir / fname + ".vhdr" 597 | kwargs = dict( 598 | data=data, 599 | sfreq=sfreq, 600 | ch_names=ch_names, 601 | fname_base=fname, 602 | folder_out=tmpdir, 603 | unit=[unit] * n_chans, 604 | ) 605 | if expect_warn: 606 | with pytest.warns(UserWarning, match=expect_match): 607 | write_brainvision(**kwargs) 608 | else: 609 | write_brainvision(**kwargs) 610 | raw_written = mne.io.read_raw_brainvision(vhdr_fname=vhdr_fname, preload=True) 611 | 612 | # check round-trip works 613 | absolute_tolerance = 0 614 | assert_allclose(data, raw_written.get_data(), atol=absolute_tolerance) 615 | 616 | # check that the correct units were written in the BV file 617 | orig_units = [u for key, u in raw_written._orig_units.items()] 618 | assert len(set(orig_units)) == 1 619 | 
assert orig_units[0] == unit.replace("u", "µ") 620 | 621 | # now write with different units across all channels 622 | other_unit = "mV" if unit != "mV" else "V" 623 | units = [unit] * (n_chans // 2) 624 | units.extend([other_unit] * (n_chans // 2)) 625 | 626 | # write file and read back in, we always expect a warning here 627 | kwargs["unit"] = units 628 | kwargs["overwrite"] = True 629 | with pytest.warns(UserWarning, match="unsupported"): 630 | write_brainvision(**kwargs) 631 | 632 | raw_written = mne.io.read_raw_brainvision(vhdr_fname=vhdr_fname, preload=True) 633 | 634 | # check that the correct units were written in the BV file 635 | orig_units = [u for key, u in raw_written._orig_units.items()] 636 | assert len(set(orig_units)) == 2 637 | assert all( 638 | [orig_units[idx] == unit.replace("u", "µ") for idx in range(n_chans // 2)] 639 | ) 640 | assert all( 641 | [ 642 | orig_units[-idx] == other_unit.replace("u", "µ") 643 | for idx in range(1, n_chans // 2 + 1) 644 | ] 645 | ) 646 | 647 | 648 | def test_write_unsupported_units(tmpdir): 649 | """Test writing data with a list of possibly unsupported BV units.""" 650 | unit = "V" # supported test unit 651 | units = [unit] * n_chans 652 | units[-1] = "°C" 653 | 654 | # write brain vision file 655 | vhdr_fname = tmpdir / (fname + ".vhdr") 656 | with pytest.warns(UserWarning, match="Encountered unsupported non-voltage unit"): 657 | write_brainvision( 658 | data=data, 659 | sfreq=sfreq, 660 | ch_names=ch_names, 661 | fname_base=fname, 662 | folder_out=tmpdir, 663 | unit=units, 664 | ) 665 | raw_written = mne.io.read_raw_brainvision(vhdr_fname=vhdr_fname, preload=True) 666 | 667 | # check round-trip works 668 | absolute_tolerance = 0 669 | assert_allclose(data, raw_written.get_data(), atol=absolute_tolerance) 670 | 671 | # check that the correct units were written in the BV file 672 | orig_units = [u for key, u in raw_written._orig_units.items()] 673 | assert len(set(orig_units)) == 2 674 | assert all([orig_units[idx] 
== unit for idx in range(n_chans - 1)]) 675 | assert orig_units[-1] == "°C" 676 | 677 | 678 | @pytest.mark.parametrize( 679 | "ref_ch_names", (None, ref_ch_name, [ref_ch_name] * n_chans, "foobar") 680 | ) 681 | def test_ref_ch(tmpdir, ref_ch_names): 682 | """Test reference channel writing.""" 683 | # these are the default values 684 | resolution = "0.1" 685 | unit = "µV" 686 | vhdr_fname = tmpdir / fname + ".vhdr" 687 | 688 | write_brainvision( 689 | data=data, 690 | sfreq=sfreq, 691 | ch_names=ch_names, 692 | ref_ch_names=ref_ch_name, 693 | fname_base=fname, 694 | folder_out=tmpdir, 695 | ) 696 | 697 | vhdr = vhdr_fname.read_text(encoding="utf-8") 698 | regexp = f"Ch.*=ch.*,{ref_ch_name},{resolution},{unit}" 699 | matches = re.findall(pattern=regexp, string=vhdr) 700 | assert len(matches) == len(ch_names) 701 | 702 | 703 | def test_cleanup(tmpdir): 704 | """Test cleaning up intermediate data upon a writing failure.""" 705 | folder_out = tmpdir / "my_output" 706 | with pytest.raises(ValueError, match="Data format binary_float999"): 707 | write_brainvision( 708 | data=data, 709 | sfreq=sfreq, 710 | ch_names=ch_names, 711 | fname_base=fname, 712 | folder_out=folder_out, 713 | fmt="binary_float999", 714 | ) 715 | assert not (folder_out).exists() 716 | assert not (folder_out / fname + ".eeg").exists() 717 | assert not (folder_out / fname + ".vmrk").exists() 718 | assert not (folder_out / fname + ".vhdr").exists() 719 | 720 | # if folder already existed before erroneous writing, it is not deleted 721 | os.makedirs(folder_out) 722 | with pytest.raises(ValueError, match="Data format binary_float999"): 723 | write_brainvision( 724 | data=data, 725 | sfreq=sfreq, 726 | ch_names=ch_names, 727 | fname_base=fname, 728 | folder_out=folder_out, 729 | fmt="binary_float999", 730 | ) 731 | assert folder_out.exists() 732 | 733 | # but all other (incomplete/erroneous) files are deleted 734 | assert not (folder_out / fname + ".eeg").exists() 735 | assert not (folder_out / fname + 
".vmrk").exists() 736 | assert not (folder_out / fname + ".vhdr").exists() 737 | 738 | 739 | def test_overwrite(tmpdir): 740 | """Test overwriting behavior.""" 741 | write_brainvision( 742 | data=data, 743 | sfreq=sfreq, 744 | ch_names=ch_names, 745 | fname_base=fname, 746 | folder_out=tmpdir, 747 | overwrite=False, 748 | ) 749 | 750 | with pytest.raises(IOError, match="File already exists"): 751 | write_brainvision( 752 | data=data, 753 | sfreq=sfreq, 754 | ch_names=ch_names, 755 | fname_base=fname, 756 | folder_out=tmpdir, 757 | overwrite=False, 758 | ) 759 | 760 | 761 | def test_event_writing(tmpdir): 762 | """Test writing some advanced event specifications.""" 763 | kwargs = dict( 764 | data=data, sfreq=sfreq, ch_names=ch_names, fname_base=fname, folder_out=tmpdir 765 | ) 766 | 767 | with pytest.warns(UserWarning, match="Such events will be written to .vmrk"): 768 | write_brainvision(**kwargs, events=events) 769 | 770 | vhdr_fname = tmpdir / fname + ".vhdr" 771 | raw = mne.io.read_raw_brainvision(vhdr_fname=vhdr_fname, preload=True) 772 | 773 | # should be one more, because event[3] is written twice (once per channel) 774 | assert len(raw.annotations) == len(events) + 1 775 | 776 | # note: MNE orders events by onset, use sorted 777 | onsets = np.array([ev["onset"] / raw.info["sfreq"] for ev in events]) 778 | onsets = sorted(onsets) + [1.0] # add duplicate event (due to channels) 779 | np.testing.assert_array_equal(raw.annotations.onset, onsets) 780 | 781 | # MNE does not (yet; at 1.0.3) read ch_names for annotations from vmrk 782 | np.testing.assert_array_equal( 783 | [i for i in raw.annotations.ch_names], [() for i in range(len(events) + 1)] 784 | ) 785 | 786 | # test duration and description as well 787 | durations = [i / raw.info["sfreq"] for i in (1, 10, 1, 1, 1)] 788 | np.testing.assert_array_equal(raw.annotations.duration, durations) 789 | 790 | descr = [ 791 | "Comment/Some string :-)", 792 | "Stimulus/S 1", 793 | "Stimulus/S1234", 794 | "Response/R 2", 
795 | "Response/R 2", 796 | ] 797 | np.testing.assert_array_equal(raw.annotations.description, descr) 798 | 799 | # smoke test forming events from annotations 800 | _events, _event_id = mne.events_from_annotations(raw) 801 | for _d in descr: 802 | assert _d in _event_id 803 | -------------------------------------------------------------------------------- /pybv/io.py: -------------------------------------------------------------------------------- 1 | """BrainVision writer.""" 2 | 3 | # Authors: pybv developers 4 | # SPDX-License-Identifier: BSD-3-Clause 5 | 6 | import copy 7 | import datetime 8 | import os 9 | import shutil 10 | import sys 11 | from pathlib import Path 12 | from warnings import warn 13 | 14 | import numpy as np 15 | 16 | from pybv import __version__ 17 | 18 | # ASCII as future formats 19 | SUPPORTED_FORMATS = { 20 | "binary_float32": ("IEEE_FLOAT_32", np.float32), 21 | "binary_int16": ("INT_16", np.int16), 22 | } 23 | 24 | SUPPORTED_ORIENTS = {"multiplexed"} 25 | 26 | SUPPORTED_VOLTAGE_SCALINGS = {"V": 1e0, "mV": 1e3, "µV": 1e6, "uV": 1e6, "nV": 1e9} 27 | 28 | 29 | def write_brainvision( 30 | *, 31 | data, 32 | sfreq, 33 | ch_names, 34 | ref_ch_names=None, 35 | fname_base, 36 | folder_out, 37 | overwrite=False, 38 | events=None, 39 | resolution=0.1, 40 | unit="µV", 41 | fmt="binary_float32", 42 | meas_date=None, 43 | ): 44 | """Write raw data to the BrainVision format [1]_. 45 | 46 | Parameters 47 | ---------- 48 | data : np.ndarray, shape (n_channels, n_times) 49 | The raw data to export. Voltage data is assumed to be in **volts** and will be 50 | scaled as specified by `unit`. Non-voltage channels (as specified by `unit`) are 51 | never scaled (e.g., `"°C"`). 52 | sfreq : int | float 53 | The sampling frequency of the data in Hz. 54 | ch_names : list of {str | int}, len (n_channels) 55 | The names of the channels. Integer channel names are converted to string. 
56 | ref_ch_names : str | list of str, len (n_channels) | None 57 | The name of the channel used as a reference during the recording. If references 58 | differ between channels, you may supply a list of reference channel names 59 | corresponding to each channel in `ch_names`. If ``None`` (default), assume that 60 | all channels are referenced to a common channel that is not further specified 61 | (BrainVision default). 62 | 63 | .. note:: The reference channel name specified here does not need to appear in 64 | `ch_names`. It is permissible to specify a reference channel that is 65 | not present in `data`. 66 | fname_base : str 67 | The base name for the output files. Three files will be created (*.vhdr*, 68 | *.vmrk*, *.eeg*), and all will share this base name. 69 | folder_out : str | pathlib.Path 70 | The folder where output files will be saved. Will be created if it does not 71 | exist. 72 | overwrite : bool 73 | Whether or not to overwrite existing files. Defaults to ``False``. 74 | events : np.ndarray, shape (n_events, {2, 3}) | list of dict, len (n_events) | None 75 | Events to write in the marker file (*.vmrk*). Defaults to ``None`` (not writing 76 | any events). 77 | 78 | If an array is passed, it must have either two or three columns and consist of 79 | non-negative integers. The first column is always the zero-based *onset* index 80 | of each event (corresponding to the time dimension of the `data` array). The 81 | second column is a number associated with the *description* of the event. The 82 | (optional) third column specifies the *duration* of each event in samples 83 | (defaults to ``1``). All events are written as *type* "Stimulus" and interpreted 84 | as relevant to all *channels*. For more fine-grained control over how to write 85 | events, pass a list of dict as described next. 
86 | 87 | If a list of dict is passed, each dict in the list corresponds to an event and 88 | may have the following entries: 89 | 90 | - ``"onset"`` : int 91 | The zero-based index of the event onset, corresponding to the time 92 | dimension of the `data` array. 93 | - ``"duration"`` : int 94 | The duration of the event in samples (defaults to ``1``). 95 | - ``"description"`` : str | int 96 | The description of the event. Must be a non-negative int when `type` 97 | (see below) is either ``"Stimulus"`` or ``"Response"``, and may be a str 98 | when `type` is ``"Comment"``. 99 | - ``"type"`` : str 100 | The type of the event, must be one of ``{"Stimulus", "Response", 101 | "Comment"}`` (defaults to ``"Stimulus"``). Additional types like the 102 | known BrainVision types ``"New Segment"``, ``"SyncStatus"``, etc. are 103 | currently not supported. 104 | - ``"channels"`` : str | list of {str | int} 105 | The channels that are impacted by the event. Can be ``"all"`` 106 | (reflecting all channels), or a channel name, or a list of channel 107 | names. An empty list means the same as ``"all"``. Integer channel names 108 | are converted to strings, as in the `ch_names` parameter. Defaults to 109 | ``"all"``. 110 | 111 | Note that ``"onset"`` and ``"description"`` MUST be specified in each dict. 112 | 113 | .. note:: When specifying more than one but less than "all" channels that are 114 | impacted by an event, ``pybv`` will write the same event for as many 115 | times as channels are specified (see :gh:`77` for a discussion). This 116 | is valid according to the BrainVision specification, but for maximum 117 | compatibility with other BrainVision readers, we do not recommend 118 | using this feature yet. 119 | 120 | resolution : float | np.ndarray, shape (n_channels,) 121 | The resolution in `unit` in which you'd like the data to be stored. If float, 122 | the same resolution is applied to all channels. 
If array with `n_channels` 123 | elements, each channel is scaled with its own corresponding resolution from the 124 | array. Note that `resolution` is applied on top of the default resolution that a 125 | data format (see `fmt`) has. For example, the ``"binary_int16"`` format by 126 | design has no floating point support, but when scaling the data in µV for 127 | ``0.1`` resolution (default), accurate writing for all values ≥ 0.1 µV is 128 | guaranteed. In contrast, the ``"binary_float32"`` format by design already 129 | supports floating points up to 1e-6 resolution, and writing data in µV with 0.1 130 | resolution will thus guarantee accurate writing for all values ≥ 1e-7 µV 131 | (``1e-6 * 0.1``). 132 | unit : str | list of str 133 | The unit of the exported data. This can be one of ``"V"``, ``"mV"``, ``"µV"`` 134 | (or equivalently ``"uV"``), or ``"nV"``, which will scale the data accordingly. 135 | Defaults to ``"µV"``. Can also be a list of units with one unit per channel. 136 | Non-voltage channels are stored "as is", for example temperature might be 137 | available in ``"°C"``, which ``pybv`` will not scale. 138 | fmt : str 139 | Binary format the data should be written as. Valid choices are 140 | ``"binary_float32"`` (default) and ``"binary_int16"``. 141 | meas_date : datetime.datetime | str | None 142 | The measurement date specified as a :class:`datetime.datetime` object. 143 | Alternatively, can be a string in the format "YYYYMMDDhhmmssuuuuuu" ("u" stands 144 | for microseconds). Note that setting a measurement date implies that one 145 | additional event is created in the *.vmrk* file. To prevent this, set this 146 | parameter to ``None`` (default). 147 | 148 | Notes 149 | ----- 150 | iEEG/EEG/MEG data is assumed to be in V, and ``pybv`` will scale these data to µV by 151 | default. Any unit besides µV is officially unsupported in the BrainVision 152 | specification.
However, if one specifies other voltage units such as mV or nV, we 153 | will still scale the signals accordingly in the exported file. We will also write 154 | channels with non-voltage units such as °C as is (without scaling). For maximum 155 | compatibility, all signals should be written as µV. 156 | 157 | When passing a list of dict to `events`, the event ``type`` that can be passed is 158 | currently limited to one of ``{"Stimulus", "Response", "Comment"}``. The BrainVision 159 | specification itself does not limit event types, and future extensions of ``pybv`` 160 | may permit additional or even arbitrary event types. 161 | 162 | References 163 | ---------- 164 | .. [1] https://www.brainproducts.com/support-resources/brainvision-core-data-format-1-0/ 165 | 166 | Examples 167 | -------- 168 | >>> data = np.random.random((3, 5)) 169 | >>> # write data with varying units 170 | ... # channels A1 and A2 are expected to be in volt and will get rescaled to µV and 171 | ... # mV, respectively. 172 | ... # TEMP is expected to be in some other unit (i.e., NOT volt), and will not get 173 | ... # scaled (it is written "as is") 174 | ... write_brainvision( 175 | ... data=data, 176 | ... sfreq=1, 177 | ... ch_names=["A1", "A2", "TEMP"], 178 | ... folder_out="./", 179 | ... fname_base="pybv_test_file", 180 | ... unit=["µV", "mV", "°C"] 181 | ... ) 182 | >>> # remove the files 183 | >>> for ext in [".vhdr", ".vmrk", ".eeg"]: 184 | ... 
os.remove("pybv_test_file" + ext) 185 | 186 | """ 187 | # input checks 188 | folder_out = Path(folder_out) 189 | 190 | if not isinstance(data, np.ndarray): 191 | raise ValueError(f"data must be np.ndarray, but found: {type(data)}") 192 | 193 | if not data.ndim == 2: 194 | raise ValueError( 195 | f"data must be 2D: shape (n_channels, n_times), but found {data.ndim}" 196 | ) 197 | 198 | if not isinstance(overwrite, bool): 199 | raise ValueError("overwrite must be a boolean (True or False).") 200 | 201 | nchan = len(ch_names) 202 | for ch in ch_names: 203 | if not isinstance(ch, str | int): 204 | raise ValueError("ch_names must be a list of str or list of int.") 205 | ch_names = [str(ch) for ch in ch_names] 206 | 207 | if len(data) != nchan: 208 | raise ValueError( 209 | f"Number of channels in data ({len(data)}) does not match number of " 210 | f"channel names ({len(ch_names)})." 211 | ) 212 | 213 | if len(set(ch_names)) != nchan: 214 | raise ValueError("Channel names must be unique, found duplicate name.") 215 | 216 | events = _chk_events(events, ch_names, data.shape[1]) 217 | 218 | # ensure we have a list of strings as reference channel names 219 | if ref_ch_names is None: 220 | ref_ch_names = [""] * nchan # common but unspecified reference 221 | elif isinstance(ref_ch_names, str): 222 | ref_ch_names = [ref_ch_names] * nchan 223 | else: 224 | if "" in ref_ch_names: 225 | msg = ( 226 | f"ref_ch_names contains an empty string: {ref_ch_names}\nEmpty strings " 227 | "are reserved values and not permitted as reference channel names." 228 | ) 229 | raise ValueError(msg) 230 | ref_ch_names = [str(ref_ch_name) for ref_ch_name in ref_ch_names] 231 | 232 | if len(ref_ch_names) != nchan: 233 | raise ValueError( 234 | f"The number of reference channel names ({len(ref_ch_names)}) must match " 235 | f"the number of channels in your data ({nchan})." 
236 | ) 237 | 238 | # ensure ref chs that are in data are zero 239 | for ref_ch_name in list(set(ref_ch_names) & set(ch_names)): 240 | if not np.allclose(data[ch_names.index(ref_ch_name), :], 0): 241 | raise ValueError( 242 | f"The provided data for the reference channel {ref_ch_name} does not " 243 | "appear to be zero across all time points. This indicates that this " 244 | "channel either did not serve as a reference during the recording, or " 245 | "the data has been altered since. Please either pick a different " 246 | "reference channel, or omit the ref_ch_name parameter." 247 | ) 248 | 249 | if not isinstance(sfreq, int | float): 250 | raise ValueError("sfreq must be one of (float | int)") 251 | sfreq = float(sfreq) 252 | 253 | resolution = np.atleast_1d(resolution) 254 | if not np.issubdtype(resolution.dtype, np.number): 255 | raise ValueError(f"Resolution should be numeric, is {resolution.dtype}") 256 | 257 | if resolution.shape != (1,) and resolution.shape != (nchan,): 258 | raise ValueError("Resolution should be one or n_channels floats") 259 | 260 | if np.any(resolution <= 0): 261 | raise ValueError("Resolution should be > 0") 262 | 263 | # check unit is single str 264 | if isinstance(unit, str): 265 | # convert unit to list, assuming all units are the same 266 | unit = [unit] * nchan 267 | if len(unit) != nchan: 268 | raise ValueError( 269 | f"Number of channels in unit ({len(unit)}) does not match number of channel" 270 | f" names ({nchan})" 271 | ) 272 | units = unit 273 | 274 | # check units for compatibility with greek lettering 275 | show_warning = False 276 | for idx, unit in enumerate(units): 277 | # Greek mu μ (U+03BC) 278 | if unit == "μV" or unit == "uV": 279 | unit = "µV" # micro symbol µ (U+00B5) 280 | units[idx] = unit 281 | show_warning = True 282 | 283 | # only show the warning once if a greek letter was encountered 284 | if show_warning: 285 | warn( 286 | f"Encountered small Greek letter mu 'μ' or 'u' in unit: {unit}. 
Converting " 287 | "to micro sign 'µ'." 288 | ) 289 | 290 | # measurement date 291 | if not isinstance(meas_date, str | datetime.datetime | type(None)): 292 | raise ValueError( 293 | f"`meas_date` must be of type str, datetime.datetime, or None but is of " 294 | f'type "{type(meas_date)}"' 295 | ) 296 | elif isinstance(meas_date, datetime.datetime): 297 | meas_date = meas_date.strftime("%Y%m%d%H%M%S%f") 298 | elif meas_date is None: 299 | pass 300 | elif not (meas_date.isdigit() and len(meas_date) == 20): 301 | raise ValueError( 302 | "Got a str for `meas_date`, but it was not formatted as expected. Please " 303 | 'supply a str in the format: "YYYYMMDDhhmmssuuuuuu".' 304 | ) 305 | 306 | # create output file names/paths, checking if they already exist 307 | folder_out_created = not folder_out.exists() 308 | folder_out.mkdir(parents=True, exist_ok=True) 309 | eeg_fname = folder_out / f"{fname_base}.eeg" 310 | vmrk_fname = folder_out / f"{fname_base}.vmrk" 311 | vhdr_fname = folder_out / f"{fname_base}.vhdr" 312 | for fname in (eeg_fname, vmrk_fname, vhdr_fname): 313 | if fname.exists() and not overwrite: 314 | raise OSError( 315 | f"File already exists: {fname}.\nConsider setting overwrite=True." 
316 | ) 317 | 318 | # write output files, but delete everything if we come across an error 319 | try: 320 | _write_bveeg_file( 321 | eeg_fname, 322 | data, 323 | orientation="multiplexed", 324 | format=fmt, 325 | resolution=resolution, 326 | units=units, 327 | ) 328 | _write_vmrk_file(vmrk_fname, eeg_fname, events, meas_date) 329 | _write_vhdr_file( 330 | vhdr_fname=vhdr_fname, 331 | vmrk_fname=vmrk_fname, 332 | eeg_fname=eeg_fname, 333 | data=data, 334 | sfreq=sfreq, 335 | ch_names=ch_names, 336 | ref_ch_names=ref_ch_names, 337 | orientation="multiplexed", 338 | format=fmt, 339 | resolution=resolution, 340 | units=units, 341 | ) 342 | except ValueError: 343 | if folder_out_created: 344 | # if this is a new folder, remove everything 345 | shutil.rmtree(folder_out) 346 | else: 347 | # else, only remove the files we might have created 348 | for fname in (eeg_fname, vmrk_fname, vhdr_fname): 349 | if fname.exists(): # pragma: no cover 350 | os.remove(fname) 351 | 352 | raise 353 | 354 | 355 | def _chk_events(events, ch_names, n_times): 356 | """Check that the events parameter is as expected. 357 | 358 | This function will always return `events` as a list of dicts. If `events` is 359 | ``None``, it will be an empty list. If `events` is a list of dict, it will add 360 | missing keys to each dict with default values, and it will, for each ith event, turn 361 | ``events[i]["channels"]`` into a list of 1-based channel name indices, where ``0`` 362 | equals ``"all"``. Event descriptions for ``"Stimulus"`` and ``"Response"`` will be 363 | reformatted to a str of the format ``"S{:>n}"`` (or with a leading ``"R"`` for 364 | ``"Response"``), where ``n`` is determined by the description with the most digits 365 | (minimum 3). For each ith event, the onset (``events[i]["onset"]``) will be 366 | incremented by 1 to comply with the 1-based indexing used in BrainVision marker 367 | files (*.vmrk*). 
368 | 369 | Parameters 370 | ---------- 371 | events : np.ndarray, shape (n_events, {2, 3}) | list of dict, len (n_events) | None 372 | The events parameter as passed to :func:`pybv.write_brainvision`. 373 | ch_names : list of str, len (n_channels) 374 | The channel names, preprocessed in :func:`pybv.write_brainvision`. 375 | n_times : int 376 | The length of the data in samples. 377 | 378 | Returns 379 | ------- 380 | events_out : list of dict, len (n_events) 381 | The preprocessed events, always provided as list of dict. 382 | 383 | """ 384 | if not isinstance(events, type(None) | np.ndarray | list): 385 | raise ValueError("events must be an array, a list of dict, or None") 386 | 387 | # validate input: None 388 | if isinstance(events, type(None)): 389 | events_out = [] 390 | 391 | # default events 392 | # NOTE: using "ch_names" as default for channels translates directly into "all" but 393 | # is robust with respect to channels named "all" 394 | event_defaults = dict(duration=1, type="Stimulus", channels=ch_names) 395 | 396 | # validate input: ndarray 397 | if isinstance(events, np.ndarray): 398 | if events.ndim != 2: 399 | raise ValueError(f"When array, events must be 2D, but got {events.ndim}") 400 | if events.shape[1] not in (2, 3): 401 | raise ValueError( 402 | "When array, events must have 2 or 3 columns, but got: " 403 | f"{events.shape[1]}" 404 | ) 405 | if not all([np.issubdtype(i, np.integer) for i in events.flat]): 406 | raise ValueError( 407 | "When array, all entries in events must be int, but found other types" 408 | ) 409 | 410 | # convert array to list of dict 411 | durations = np.ones(events.shape[0], dtype=int) * event_defaults["duration"] 412 | if events.shape[1] == 3: 413 | durations = events[:, -1] 414 | events_out = [] 415 | for irow, row in enumerate(events[:, 0:2]): 416 | events_out.append( 417 | dict( 418 | onset=int(row[0]), 419 | duration=int(durations[irow]), 420 | description=int(row[1]), 421 | type=event_defaults["type"], 422 | 
channels=event_defaults["channels"], 423 | ) 424 | ) 425 | 426 | # validate input: list of dict 427 | if isinstance(events, list): 428 | # we must not edit the original parameter 429 | events_out = [copy.deepcopy(i) for i in events] 430 | 431 | # now always list of dict 432 | for event in events_out: 433 | # each item must be dict 434 | if not isinstance(event, dict): 435 | raise ValueError( 436 | "When list, events must be a list of dict, but found non-dict element " 437 | "in list" 438 | ) 439 | 440 | # NOTE: We format 1 -> "S 1", 10 -> "S 10", 100 -> "S100", etc., 441 | # https://github.com/bids-standard/pybv/issues/24#issuecomment-512746677 442 | max_event_descr = max( 443 | [1] 444 | + [ 445 | ev.get("description", "n/a") 446 | for ev in events_out 447 | if isinstance(ev.get("description", "n/a"), int) 448 | ] 449 | ) 450 | twidth = max(3, int(np.ceil(np.log10(max_event_descr)))) 451 | 452 | # do full validation 453 | for event in events_out: 454 | # required keys 455 | for required_key in ["onset", "description"]: 456 | if required_key not in event: 457 | raise ValueError( 458 | "When list of dict, each dict in events must have the keys 'onset' " 459 | "and 'description'" 460 | ) 461 | 462 | # populate keys with default if missing (in-place) 463 | for optional_key, default in event_defaults.items(): 464 | event[optional_key] = event.get(optional_key, default) 465 | 466 | # validate key types 467 | # `onset`, `duration` 468 | for key in ["onset", "duration"]: 469 | if not isinstance(event[key], int | np.integer): 470 | raise ValueError(f"events: `{key}` must be int") 471 | 472 | if not (0 <= event["onset"] < n_times): 473 | raise ValueError( 474 | "events: at least one onset sample is not in range of data (0-" 475 | f"{n_times - 1})" 476 | ) 477 | 478 | if event["duration"] < 0: 479 | raise ValueError( 480 | "events: at least one duration is negative. Durations must be >= 0 " 481 | "samples." 
482 | ) 483 | 484 | if not (0 <= event["onset"] + event["duration"] <= n_times): 485 | raise ValueError( 486 | "events: at least one event has a duration that exceeds the range of " 487 | f"data (0-{n_times - 1})" 488 | ) 489 | 490 | event["onset"] = event["onset"] + 1 # VMRK uses 1-based indexing 491 | 492 | # `type` 493 | event_types = ["Stimulus", "Response", "Comment"] 494 | if event["type"] not in event_types: 495 | raise ValueError(f"events: `type` must be one of {event_types}") 496 | 497 | # `description` 498 | if event["type"] in ["Stimulus", "Response"]: 499 | if not isinstance(event["description"], int): 500 | raise ValueError( 501 | f"events: when `type` is {event['type']}, `description` must be " 502 | "non-negative int" 503 | ) 504 | 505 | if event["description"] < 0: 506 | raise ValueError( 507 | f"events: when `type` is {event['type']}, descriptions must be " 508 | "non-negative ints." 509 | ) 510 | 511 | tformat = event["type"][0] + "{:>" + str(twidth) + "}" 512 | event["description"] = tformat.format(event["description"]) 513 | 514 | else: 515 | assert event["type"] == "Comment" 516 | if not isinstance(event["description"], int | str): 517 | raise ValueError( 518 | f"events: when `type` is {event['type']}, `description` must be str" 519 | " or int" 520 | ) 521 | event["description"] = str(event["description"]) 522 | 523 | # `channels` 524 | # "all" becomes ch_names (list of all channel names), single str 'ch_name' 525 | # becomes [ch_name] 526 | if not isinstance(event["channels"], list | str): 527 | raise ValueError("events: `channels` must be str or list of str") 528 | 529 | if isinstance(event["channels"], str): 530 | if event["channels"] == "all": 531 | if "all" in ch_names: 532 | raise ValueError( 533 | "Found channel named 'all'. Your `channels` specification in " 534 | "events is also 'all'. This is ambiguous, because 'all' is a " 535 | "reserved keyword. 
Either rename the channel called 'all', or " 536 | "explicitly list all ch_names in `channels` in each event " 537 | "instead of using 'all'." 538 | ) 539 | event["channels"] = ch_names 540 | else: 541 | event["channels"] = [event["channels"]] 542 | 543 | # now channels is a list 544 | for ch in event["channels"]: 545 | if not isinstance(ch, str | int): 546 | raise ValueError( 547 | "events: `channels` must be list of str or list of int " 548 | "corresponding to ch_names" 549 | ) 550 | 551 | if str(ch) not in ch_names: 552 | raise ValueError( 553 | f"events: found channel name that is not present in the data: {ch}" 554 | ) 555 | 556 | # check for duplicates 557 | event["channels"] = [str(ch) for ch in event["channels"]] 558 | if len(set(event["channels"])) != len(event["channels"]): 559 | raise ValueError("events: found duplicate channel names") 560 | 561 | # warn if more than one but less than all channels are specified (experimental) 562 | if len(event["channels"]) > 1 and len(event["channels"]) < len(ch_names): 563 | warn( 564 | "events: you specified at least one event that impacts more than one " 565 | "but less than all channels in the data. Such events will be written to" 566 | " .vmrk for as many times as channels are specified.\n\nThis feature " 567 | "may not be supported by all BrainVision readers." 568 | ) 569 | 570 | # convert channels to indices (1-based, 0="all") 571 | ch_idxs = [ch_names.index(ch) + 1 for ch in event["channels"]] 572 | if set(ch_idxs) == {i + 1 for i in range(len(ch_names))}: 573 | ch_idxs = [0] 574 | elif len(ch_idxs) == 0: 575 | # if not related to any channel: same as related to all channels 576 | ch_idxs = [0] 577 | event["channels"] = sorted(ch_idxs) 578 | 579 | return events_out 580 | 581 | 582 | def _chk_fmt(fmt): 583 | """Check that the format string is valid, return (BV, numpy) datatypes.""" 584 | if fmt not in SUPPORTED_FORMATS: 585 | errmsg = ( 586 | f"Data format {fmt} not supported. 
Currently supported formats are: " 587 | f"{', '.join(SUPPORTED_FORMATS)}" 588 | ) 589 | raise ValueError(errmsg) 590 | return SUPPORTED_FORMATS[fmt] 591 | 592 | 593 | def _chk_multiplexed(orientation): 594 | """Validate an orientation, return if it is multiplexed or not.""" 595 | if orientation not in SUPPORTED_ORIENTS: 596 | errmsg = ( 597 | f"Orientation {orientation} not supported. Currently supported orientations " 598 | f"are: {', '.join(SUPPORTED_ORIENTS)}" 599 | ) 600 | raise ValueError(errmsg) 601 | return orientation == "multiplexed" 602 | 603 | 604 | def _write_vmrk_file(vmrk_fname, eeg_fname, events, meas_date): 605 | """Write BrainVision marker file.""" 606 | with open(vmrk_fname, "w", encoding="utf-8") as fout: 607 | print("Brain Vision Data Exchange Marker File, Version 1.0", file=fout) 608 | print(f"; Exported using pybv {__version__}", file=fout) 609 | print("", file=fout) 610 | print("[Common Infos]", file=fout) 611 | print("Codepage=UTF-8", file=fout) 612 | print(f"DataFile={eeg_fname.name}", file=fout) 613 | print("", file=fout) 614 | print("[Marker Infos]", file=fout) 615 | print( 616 | "; Each entry: Mk<Marker number>=<Type>,<Description>," 617 | "<Position in data points>,", 618 | file=fout, 619 | ) 620 | print( 621 | "; <Size in data points>, <Channel number (0 = marker is related to " 622 | "all channels)>", 623 | file=fout, 624 | ) 625 | print("; <Date (YYYYMMDDhhmmssuuuuuu)>", file=fout) 626 | print( 627 | "; Fields are delimited by commas, some fields might be omitted (empty).", 628 | file=fout, 629 | ) 630 | print(r'; Commas in type or description text are coded as "\1".', file=fout) 631 | if meas_date is not None: 632 | print(f"Mk1=New Segment,,1,1,0,{meas_date}", file=fout) 633 | 634 | iev = 1 if meas_date is None else 2 635 | for ev in events: 636 | # Write event once for each channel that this event is relevant for 637 | # https://github.com/bids-standard/pybv/pull/77 638 | for ch in ev["channels"]: 639 | print( 640 | f"Mk{iev}={ev['type']},{ev['description']}," 641 | f"{ev['onset']},{ev['duration']},{ch}", 642 | file=fout, 643 | ) 644 | iev += 1 645 | 646 | 647 | def _scale_data_to_unit(data, units): 
648 | """Scale `data` in Volts to `data` in `units`.""" 649 | # only µV is supported by the BrainVision specs, but we support additional voltage 650 | # prefixes (e.g., V, mV, nV); if such voltage units are used, we issue a warning 651 | voltage_units = set() 652 | 653 | # similar to voltages other than µV, we also support arbitrary units, but since 654 | # these are not supported by the BrainVision specs, we issue a warning 655 | non_voltage_units = set() 656 | 657 | # create a vector to multiply with to play nice with numpy 658 | scales = np.zeros((len(units), 1)) 659 | for idx, unit in enumerate(units): 660 | scale = SUPPORTED_VOLTAGE_SCALINGS.get(unit, None) 661 | # unless the unit is 'µV', it is not supported by the specs 662 | if scale is not None and unit != "µV": 663 | voltage_units.add(unit) 664 | elif scale is None: # if not voltage unit at all, then don't scale 665 | non_voltage_units.add(unit) 666 | scale = 1 667 | scales[idx] = scale 668 | 669 | if len(voltage_units) > 0: 670 | msg = ( 671 | f"Encountered unsupported voltage units: {', '.join(voltage_units)}\n" 672 | "We will scale the data appropriately, but for maximum compatibility you " 673 | "should use µV for all channels." 674 | ) 675 | warn(msg) 676 | 677 | if len(non_voltage_units) > 0: 678 | msg = ( 679 | f"Encountered unsupported non-voltage units: {', '.join(non_voltage_units)}" 680 | "\nNote that the BrainVision format specification supports only µV." 
681 | ) 682 | warn(msg) 683 | return data * scales 684 | 685 | 686 | def _write_vhdr_file( 687 | *, 688 | vhdr_fname, 689 | vmrk_fname, 690 | eeg_fname, 691 | data, 692 | sfreq, 693 | ch_names, 694 | ref_ch_names, 695 | orientation, 696 | format, # noqa: A002 697 | resolution, 698 | units, 699 | ): 700 | """Write BrainVision header file.""" 701 | bvfmt, _ = _chk_fmt(format) 702 | 703 | multiplexed = _chk_multiplexed(orientation) 704 | 705 | with open(vhdr_fname, "w", encoding="utf-8") as fout: 706 | print("Brain Vision Data Exchange Header File Version 1.0", file=fout) 707 | print(f"; Written using pybv {__version__}", file=fout) 708 | print("", file=fout) 709 | print("[Common Infos]", file=fout) 710 | print("Codepage=UTF-8", file=fout) 711 | print(f"DataFile={eeg_fname.name}", file=fout) 712 | print(f"MarkerFile={vmrk_fname.name}", file=fout) 713 | 714 | if format.startswith("binary"): 715 | print("DataFormat=BINARY", file=fout) 716 | 717 | if multiplexed: 718 | print("; Data orientation: MULTIPLEXED=ch1,pt1, ch2,pt1 ...", file=fout) 719 | print("DataOrientation=MULTIPLEXED", file=fout) 720 | 721 | print(f"NumberOfChannels={len(data)}", file=fout) 722 | print("; Sampling interval in microseconds", file=fout) 723 | print(f"SamplingInterval={1e6 / sfreq}", file=fout) 724 | print("", file=fout) 725 | 726 | if format.startswith("binary"): 727 | print("[Binary Infos]", file=fout) 728 | print(f"BinaryFormat={bvfmt}", file=fout) 729 | print("", file=fout) 730 | 731 | print("[Channel Infos]", file=fout) 732 | print( 733 | "; Each entry: Ch<Channel number>=<Name>,<Reference channel name>,", 734 | file=fout, 735 | ) 736 | print('; <Resolution in "Unit">,<Unit>, Future extensions..', file=fout) 737 | print( 738 | "; Fields are delimited by commas, some fields might be omitted (empty).", 739 | file=fout, 740 | ) 741 | print(r'; Commas in channel names are coded as "\1".', file=fout) 742 | 743 | nchan = len(ch_names) 744 | # broadcast to nchan elements if necessary 745 | resolutions = resolution * np.ones((nchan,)) 746 | 747 | for i in 
range(nchan): 748 | # take care of commas in the channel names 749 | _ch_name = ch_names[i].replace(",", r"\1") 750 | _ref_ch_name = ref_ch_names[i].replace(",", r"\1") 751 | 752 | # avoid shadowing the `resolution` parameter 753 | _resolution = np.format_float_positional(resolutions[i], trim="-") 754 | unit = units[i] 755 | print(f"Ch{i + 1}={_ch_name},{_ref_ch_name},{_resolution},{unit}", file=fout) 756 | 757 | print("", file=fout) 758 | print("[Comment]", file=fout) 759 | print("", file=fout) 760 | 761 | 762 | def _check_data_in_range(data, dtype): 763 | """Check that data can be represented by dtype.""" 764 | check_funcs = {np.int16: np.iinfo, np.float32: np.finfo} 765 | fun = check_funcs.get(dtype, None) 766 | if fun is None: # pragma: no cover 767 | msg = f"Unsupported format encountered: {dtype}" 768 | raise ValueError(msg) 769 | if data.min() <= fun(dtype).min or data.max() >= fun(dtype).max: 770 | return False 771 | return True 772 | 773 | 774 | def _write_bveeg_file(eeg_fname, data, orientation, format, resolution, units): # noqa: A002 775 | """Write BrainVision data file.""" 776 | # check the orientation and format 777 | _chk_multiplexed(orientation) 778 | _, dtype = _chk_fmt(format) 779 | 780 | # convert the data to the desired unit 781 | data = _scale_data_to_unit(data, units) 782 | 783 | # invert the resolution so that we know how much to scale our data 784 | scaling_factor = 1 / resolution 785 | data = data * np.atleast_2d(scaling_factor).T 786 | 787 | # convert the data to the required format 788 | if not _check_data_in_range(data, dtype): 789 | mod = f" ('{resolution}')" 790 | if isinstance(resolution, np.ndarray): 791 | # if we have individual resolutions, do not print them all 792 | mod = "s" 793 | msg = ( 794 | f"`data` cannot be represented in '{format}' given the desired " 795 | f"resolution{mod} and units ('{units}')." 796 | ) 797 | if format == "binary_int16": 798 | msg += "\nPlease consider writing using 'binary_float32' format."
798 | raise ValueError(msg) 799 | data = data.astype(dtype=dtype) 800 | 801 | # we always write data as little-endian without BOM 802 | # `data` is already in native byte order due to NumPy operations that result in 803 | # copies of the `data` array (see above) 804 | assert data.dtype.byteorder == "=" 805 | 806 | # swap bytes if system architecture is big-endian 807 | if sys.byteorder == "big": # pragma: no cover 808 | data = data.byteswap() 809 | 810 | # save to binary 811 | data.ravel(order="F").tofile(eeg_fname) 812 | --------------------------------------------------------------------------------
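As a standalone illustration of the marker-writing loop in `_write_vmrk_file` above (the `fmt_markers` helper and the sample events below are hypothetical, not part of the pybv API), this sketch shows how already-validated events map to `[Marker Infos]` lines: one `Mk` line per channel index (0 meaning the marker relates to all channels), with numbering continuing past the optional `New Segment` date marker.

```python
def fmt_markers(events, meas_date=None):
    """Render validated event dicts as BrainVision [Marker Infos] entries.

    Illustrative sketch only -- mirrors the loop in ``_write_vmrk_file``.
    """
    lines = []
    if meas_date is not None:
        # the New Segment marker occupies Mk1 and carries the recording date
        lines.append(f"Mk1=New Segment,,1,1,0,{meas_date}")
    iev = len(lines) + 1
    for ev in events:
        # one Mk line per channel index; 0 means "related to all channels"
        for ch in ev["channels"]:
            lines.append(
                f"Mk{iev}={ev['type']},{ev['description']},"
                f"{ev['onset']},{ev['duration']},{ch}"
            )
            iev += 1
    return lines


# sample events in the post-validation shape used by this module
events = [
    {"type": "Stimulus", "description": "S  1", "onset": 100, "duration": 1,
     "channels": [0]},
    {"type": "Comment", "description": "note", "onset": 220, "duration": 5,
     "channels": [2, 3]},
]
for line in fmt_markers(events, meas_date="20000101120000000000"):
    print(line)
```

Note that the second event yields two `Mk` lines (one per channel index), which is exactly why the event-validation code above warns that events spanning more than one but fewer than all channels are written once per channel.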