├── .gitignore
├── .gitmodules
├── Dockerfile
├── docker-scripts
│   └── download_libtorch.sh
├── readme.md
├── tutorial
│   ├── _fritzing
│   │   ├── potentiometer.fzz
│   │   ├── potentiometer.png
│   │   ├── potentiometer_2.fzz
│   │   └── potentiometer_2.png
│   ├── bela-code
│   │   ├── dataset-capture
│   │   │   ├── CMakeLists.txt
│   │   │   ├── Watcher.cpp
│   │   │   ├── Watcher.h
│   │   │   ├── render.cpp
│   │   │   └── waves.wav
│   │   ├── inference
│   │   │   ├── CMakeLists.txt
│   │   │   ├── Watcher.cpp
│   │   │   ├── Watcher.h
│   │   │   ├── main.cpp
│   │   │   ├── render.cpp
│   │   │   └── waves.wav
│   │   ├── pybela-basic
│   │   │   ├── CMakeLists.txt
│   │   │   ├── Watcher.cpp
│   │   │   ├── Watcher.h
│   │   │   ├── render.cpp
│   │   │   └── waves.wav
│   │   └── waves.wav
│   ├── requirements.txt
│   ├── scripts
│   │   ├── copy-libs-to-bela.sh
│   │   ├── setup-bela-dev.sh
│   │   └── setup-bela-revC.sh
│   └── tutorial.ipynb
└── useful-commands.txt
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
__pycache__/
.ipynb_checkpoints/
*bin
dev.txt
log*
.DS_Store
flash-belas/
--------------------------------------------------------------------------------
/.gitmodules:
--------------------------------------------------------------------------------
[submodule "bela-code/watcher"]
	path = tutorial/bela-code/watcher
	url = https://github.com/belaplatform/watcher
--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
FROM pelinski/xc-bela-container:v1.1.0

RUN apt-get update && \
    apt-get install -y python3 pip

COPY tutorial/requirements.txt ./
RUN pip install -r requirements.txt

COPY docker-scripts/download_libtorch.sh ./
RUN ./download_libtorch.sh && rm download_libtorch.sh

# rev C
RUN git clone https://github.com/BelaPlatform/bb.org-overlays.git /sysroot/opt/bb.org-overlays

RUN mkdir -p /root/pybela-pytorch-xc-tutorial

COPY tutorial/ /root/pybela-pytorch-xc-tutorial/

WORKDIR /root/pybela-pytorch-xc-tutorial

CMD /bin/bash
--------------------------------------------------------------------------------
/docker-scripts/download_libtorch.sh:
--------------------------------------------------------------------------------
#!/bin/bash -e
mkdir -p /sysroot/opt/pytorch-install

url=https://github.com/pelinski/bela-torch/releases/download/v1.13.1/pytorch-v1.13.1.tar.gz
echo "Downloading Pytorch from $url"
wget -O - $url | tar -xz -C /sysroot/opt/pytorch-install
--------------------------------------------------------------------------------
/readme.md:
--------------------------------------------------------------------------------
# pybela + pytorch bela cross-compilation tutorial

In this tutorial, we will use a jupyter notebook to communicate with Bela from the host machine and:

1. Record a dataset of sensor data using [pybela](https://github.com/belaplatform/pybela)
2. Train a TCN to predict the sensor data using [pytorch](https://pytorch.org/)
3. Cross-compile the model with the [xc-bela-container](https://github.com/pelinski/xc-bela-container) and deploy it to run in real time on Bela

## Setting up your Bela

You will need to flash the Bela experimental image `v0.5.0alpha2`, which can be downloaded [here](https://github.com/BelaPlatform/bela-image-builder/releases/tag/v0.5.0alpha2). You can follow [these instructions](https://learn.bela.io/using-bela/bela-techniques/managing-your-sd-card/#flash-an-sd-card-using-balena-etcher) to flash the image onto your Bela's microSD card.

Once the image is flashed, insert the microSD into your Bela and connect it to your computer. Inside the container (in the next section) we will run a script that copies the necessary libraries to your Bela and updates its core code.
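
Step 2 of the tutorial trains a Temporal Convolutional Network (TCN). Its defining building block is the dilated *causal* convolution: each output sample depends only on the current and past input samples, which is what makes the model usable in a real-time audio callback. The function below is an illustrative pure-Python sketch of that idea (the actual model in `tutorial.ipynb` is built with pytorch; the names here are made up for the example):

```python
def causal_conv1d(x, kernel, dilation=1):
    """Dilated causal 1-D convolution: output[t] depends only on
    x[t], x[t - dilation], x[t - 2*dilation], ... (implicit left zero-padding)."""
    k = len(kernel)
    out = []
    for t in range(len(x)):
        acc = 0.0
        for i, w in enumerate(kernel):
            # tap i looks back (k - 1 - i) * dilation samples
            j = t - (k - 1 - i) * dilation
            if j >= 0:
                acc += w * x[j]
        out.append(acc)
    return out

# A tiny "TCN" = stacked dilated causal convolutions, dilation doubling per layer
signal = [0.0] * 8 + [1.0] + [0.0] * 7  # impulse at t = 8
layer1 = causal_conv1d(signal, [0.5, 0.5], dilation=1)
layer2 = causal_conv1d(layer1, [0.5, 0.5], dilation=2)
print(layer2[:8])  # all zeros: nothing leaks backwards in time
```

Stacking layers with growing dilation widens the receptive field exponentially while keeping every layer causal, which is why a TCN can predict the next sensor values from a window of past ones.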

## Quickstart

If you don't have docker installed on your machine yet, you can follow the instructions [here](https://docs.docker.com/engine/install/). Once you have docker installed, start it (open the Docker app). There is no need to create an account to follow this tutorial.

Pull the docker image:

```bash
docker pull pelinski/pybela-pytorch-xc-tutorial:v0.1.1
```

This will pull the dockerised cross-compiler. You can then start the container by running the command below. (This creates the container the first time; if you have already created it, you can re-enter it with `docker start -ia bela-tutorial`.) If you are using a Windows machine, replace `BBB_HOSTNAME=192.168.7.2` with `BBB_HOSTNAME=192.168.6.2`.

```bash
docker run -it --name bela-tutorial -e BBB_HOSTNAME=192.168.7.2 -p 8889:8889 pelinski/pybela-pytorch-xc-tutorial:v0.1.1
```

**If you are using your own Bela** (i.e., not one of the boards prepared for the workshop), you will need to copy a couple of libraries to Bela and update its core code. If your Bela has a rev C cape, you will also need to update the Bela cape firmware. You can do this by running the following commands inside the container:

```bash
sh scripts/copy-libs-to-bela.sh && sh scripts/setup-bela-dev.sh
sh scripts/setup-bela-revC.sh # only if you have a rev C cape
```

Inside the container, you can start the jupyter notebook with

```bash
jupyter notebook --ip=* --port=8889 --allow-root --no-browser
```

Look for a link of the form `http://127.0.0.1:8889/tree?token=` in the terminal output and open it in your browser. This will show a list of files. Open the notebook `tutorial.ipynb` and follow the tutorial instructions there. If the link does not work, try changing the port number `8889` to another value, e.g., `5555`.

The tutorial continues in the jupyter notebook!
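
The `BBB_HOSTNAME` variable passed via `docker run -e` is how the container scripts (`copy-libs-to-bela.sh`, `setup-bela-dev.sh`, `setup-bela-revC.sh`) know where to reach the board. If you write your own host-side helpers, you can resolve it the same way; a small sketch (the fallback default here is an assumption, matching the Linux/macOS address above):

```python
import os

def bela_hostname(default="192.168.7.2"):
    """Resolve the Bela address the way the container scripts do:
    from the BBB_HOSTNAME environment variable, with a fallback default."""
    return os.environ.get("BBB_HOSTNAME", default)

os.environ["BBB_HOSTNAME"] = "192.168.6.2"  # e.g. the Windows-side address
print(bela_hostname())  # -> 192.168.6.2
```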
49 | 50 | ## Troubleshooting 51 | 52 | If you get any strange errors (possibly with `undefined reference`) when trying to compile a Bela project after switching the Bela branch (step 1), try running these commands in Bela: 53 | 54 | ```bash 55 | ssh root@bela.local 56 | cd ~/Bela 57 | make -f Makefile.libraries cleanall 58 | make coreclean 59 | ``` 60 | 61 | and then try to compile the project again. 62 | -------------------------------------------------------------------------------- /tutorial/_fritzing/potentiometer.fzz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pelinski/pybela-pytorch-xc-tutorial/357034175ad1fc92b7331734249167b3d14d8794/tutorial/_fritzing/potentiometer.fzz -------------------------------------------------------------------------------- /tutorial/_fritzing/potentiometer.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pelinski/pybela-pytorch-xc-tutorial/357034175ad1fc92b7331734249167b3d14d8794/tutorial/_fritzing/potentiometer.png -------------------------------------------------------------------------------- /tutorial/_fritzing/potentiometer_2.fzz: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pelinski/pybela-pytorch-xc-tutorial/357034175ad1fc92b7331734249167b3d14d8794/tutorial/_fritzing/potentiometer_2.fzz -------------------------------------------------------------------------------- /tutorial/_fritzing/potentiometer_2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pelinski/pybela-pytorch-xc-tutorial/357034175ad1fc92b7331734249167b3d14d8794/tutorial/_fritzing/potentiometer_2.png -------------------------------------------------------------------------------- /tutorial/bela-code/dataset-capture/CMakeLists.txt: 
-------------------------------------------------------------------------------- 1 | cmake_minimum_required(VERSION 3.18) 2 | 3 | if(NOT DEFINED PROJECT_NAME) 4 | set(PROJECT_NAME "project") 5 | endif() 6 | 7 | project(${PROJECT_NAME}) 8 | 9 | #################################### 10 | add_compile_options( 11 | -march=armv7-a 12 | -mtune=cortex-a8 13 | -mfloat-abi=hard 14 | -mfpu=neon 15 | -Wno-psabi 16 | ) 17 | 18 | add_compile_options( 19 | -O3 20 | -g 21 | -fPIC 22 | -ftree-vectorize 23 | -ffast-math 24 | ) 25 | 26 | add_compile_definitions(DXENOMAI_SKIN_posix) 27 | 28 | #################################### 29 | 30 | set(BELA_ROOT "${CMAKE_SYSROOT}/root/Bela") 31 | set(SYS_ROOT "${CMAKE_SYSROOT}") 32 | 33 | find_library(COBALT_LIB REQUIRED 34 | NAMES cobalt libcobalt 35 | HINTS "${CMAKE_SYSROOT}/usr/xenomai/lib" 36 | ) 37 | 38 | find_library(NEON_LIB REQUIRED 39 | NAMES NE10 libNE10 40 | HINTS "${CMAKE_SYSROOT}/usr/lib" 41 | ) 42 | 43 | find_library(MATHNEON_LIB REQUIRED 44 | NAMES mathneon libmathneon 45 | HINTS "${CMAKE_SYSROOT}/usr/lib" 46 | ) 47 | 48 | #################################### 49 | 50 | set(EXE_NAME ${PROJECT_NAME}) 51 | 52 | file(GLOB SRC_FILES *.cpp) 53 | 54 | # Check if main.cpp exists in the current directory 55 | if(EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/main.cpp") 56 | list(APPEND SRC_FILES "${CMAKE_CURRENT_SOURCE_DIR}/main.cpp") 57 | else() 58 | list(APPEND SRC_FILES "/sysroot/root/Bela/core/default_main.cpp") 59 | endif() 60 | 61 | add_executable(${EXE_NAME} ${SRC_FILES}) 62 | 63 | target_include_directories( 64 | ${EXE_NAME} PRIVATE ${BELA_ROOT} ${BELA_ROOT}/include ${CMAKE_CURRENT_SOURCE_DIR} 65 | ) 66 | 67 | target_link_libraries( 68 | ${EXE_NAME} 69 | PRIVATE 70 | ${BELA_ROOT}/lib/libbelafull.so 71 | ${COBALT_LIB} 72 | ${NEON_LIB} 73 | ${MATHNEON_LIB} 74 | dl 75 | prussdrv 76 | asound 77 | atomic 78 | sndfile 79 | pthread 80 | rt 81 | ) 82 | 83 | #################################### 
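
The `CMakeLists.txt` above globs every `.cpp` in the project directory and falls back to Bela's stock `/sysroot/root/Bela/core/default_main.cpp` when the project ships no `main.cpp` of its own (as is the case for `dataset-capture`, which only defines `setup`/`render`/`cleanup`). The intended selection rule, mirrored as an illustrative sketch (function name invented for the example):

```python
def select_sources(project_files):
    """Mirror the CMake logic: compile every .cpp in the project, and pull in
    Bela's default_main.cpp only when the project has no main.cpp of its own."""
    sources = [f for f in project_files if f.endswith(".cpp")]
    if "main.cpp" not in sources:
        sources.append("/sysroot/root/Bela/core/default_main.cpp")
    return sources

# dataset-capture has no main.cpp, so it gets the stock entry point:
print(select_sources(["render.cpp", "Watcher.cpp", "CMakeLists.txt"]))
# inference ships its own main.cpp (to parse --modelpath), so it does not:
print(select_sources(["render.cpp", "main.cpp"]))
```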
--------------------------------------------------------------------------------
/tutorial/bela-code/dataset-capture/Watcher.cpp:
--------------------------------------------------------------------------------
../watcher/Watcher.cpp
--------------------------------------------------------------------------------
/tutorial/bela-code/dataset-capture/Watcher.h:
--------------------------------------------------------------------------------
../watcher/Watcher.h
--------------------------------------------------------------------------------
/tutorial/bela-code/dataset-capture/render.cpp:
--------------------------------------------------------------------------------
#include <Bela.h>
#include <Watcher.h>
#include <cmath>
#include <libraries/AudioFile/AudioFile.h>
#include <libraries/Scope/Scope.h>

float gFrequency = 15.0;
float gPhase;
float gInverseSampleRate;
unsigned int gAudioFramesPerAnalogFrame;

Watcher<float> pot1("pot1");
Watcher<float> pot2("pot2");
Scope scope;

std::string gFilename = "waves.wav";
std::vector<std::vector<float>> gSampleData;
int gStartFrame = 44100;
int gEndFrame = 88200;
unsigned int gReadPtr;

bool setup(BelaContext *context, void *userData) {

  Bela_getDefaultWatcherManager()->getGui().setup(context->projectName);
  Bela_getDefaultWatcherManager()->setup(
      context->audioSampleRate); // set sample rate in watcher

  gAudioFramesPerAnalogFrame = context->audioFrames / context->analogFrames;
  gInverseSampleRate = 1.0 / context->audioSampleRate;
  gPhase = 0.0;

  scope.setup(3, context->audioSampleRate);

  // Load the audio file
  gSampleData =
      AudioFileUtilities::load(gFilename, gEndFrame - gStartFrame, gStartFrame);

  return true;
}

void render(BelaContext *context, void *userData) {

  for (unsigned int n = 0; n < context->audioFrames; n++) {
    // analog frame count (the analog channels run at half the audio rate)
    uint64_t frames = int((context->audioFramesElapsed + n) / 2);
    if (gAudioFramesPerAnalogFrame && !(n % gAudioFramesPerAnalogFrame)) {

      if (n == 0) { // only update the pot values once per block
        Bela_getDefaultWatcherManager()->tick(frames);
        pot1 = map(analogRead(context, n / gAudioFramesPerAnalogFrame, 0), 0,
                   0.84, 0, 3);
        pot2 = map(analogRead(context, n / gAudioFramesPerAnalogFrame, 1), 0,
                   0.84, 0, 3);
      }

      // Increment read pointer and reset to 0 when end of file is reached
      if (++gReadPtr >= gSampleData[0].size())
        gReadPtr = 0;

      gPhase += 2.0f * (float)M_PI * gFrequency * gInverseSampleRate;

      float tri;
      float sq;
      if (gPhase > M_PI)
        gPhase -= 2.0f * (float)M_PI;

      if (gPhase > 0) {
        tri = -1 + (2 * gPhase / (float)M_PI);
        sq = 1;

      } else {
        tri = -1 - (2 * gPhase / (float)M_PI);
        sq = -1;
      }

      float lfo = 0;
      if (pot1 <= 1) {
        lfo = (1 - pot1) * sinf(gPhase) + pot1 * tri;
      } else if (pot1 <= 2) {
        lfo = (2 - pot1) * tri + (pot1 - 1) * sq; // crossfade tri -> square
      } else if (pot1 <= 3) {
        float saw = 1 - (1 / (float)M_PI * gPhase);
        lfo = (3 - pot1) * sq + (pot1 - 2) * saw; // crossfade square -> saw
      }

      // Multiply the audio sample by the LFO value
      float in = gSampleData[0][gReadPtr];
      float out = pot2 * lfo * gSampleData[0][gReadPtr];

      scope.log(lfo, in, out);

      // Write the output to both audio channels
      audioWrite(context, n, 0, out);
      audioWrite(context, n, 1, out);
    }
  }
}

void cleanup(BelaContext *context, void *userData) {}
--------------------------------------------------------------------------------
/tutorial/bela-code/dataset-capture/waves.wav:
--------------------------------------------------------------------------------
../waves.wav
--------------------------------------------------------------------------------
/tutorial/bela-code/inference/CMakeLists.txt:
--------------------------------------------------------------------------------
1 | cmake_minimum_required(VERSION 3.18) 2 | 3 | if(NOT DEFINED PROJECT_NAME) 4 | set(PROJECT_NAME "project") 5 | endif() 6 | 7 | project(${PROJECT_NAME}) 8 | 9 | #################################### 10 | add_compile_options( 11 | -march=armv7-a 12 | -mtune=cortex-a8 13 | -mfloat-abi=hard 14 | -mfpu=neon 15 | -Wno-psabi 16 | ) 17 | 18 | add_compile_options( 19 | -O3 20 | -g 21 | -fPIC 22 | -ftree-vectorize 23 | -ffast-math 24 | ) 25 | 26 | add_compile_definitions(DXENOMAI_SKIN_posix) 27 | 28 | #################################### 29 | 30 | set(BELA_ROOT "${CMAKE_SYSROOT}/root/Bela") 31 | set(SYS_ROOT "${CMAKE_SYSROOT}") 32 | 33 | find_library(COBALT_LIB REQUIRED 34 | NAMES cobalt libcobalt 35 | HINTS "${CMAKE_SYSROOT}/usr/xenomai/lib" 36 | ) 37 | 38 | find_library(NEON_LIB REQUIRED 39 | NAMES NE10 libNE10 40 | HINTS "${CMAKE_SYSROOT}/usr/lib" 41 | ) 42 | 43 | find_library(MATHNEON_LIB REQUIRED 44 | NAMES mathneon libmathneon 45 | HINTS "${CMAKE_SYSROOT}/usr/lib" 46 | ) 47 | 48 | #################################### 49 | 50 | set(EXE_NAME ${PROJECT_NAME}) 51 | 52 | file(GLOB SRC_FILES *.cpp) 53 | 54 | # Check if main.cpp exists in the current directory 55 | if(EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/main.cpp") 56 | list(APPEND SRC_FILES "${CMAKE_CURRENT_SOURCE_DIR}/main.cpp") 57 | else() 58 | list(APPEND SRC_FILES "/sysroot/root/Bela/core/default_main.cpp") 59 | endif() 60 | 61 | add_executable(${EXE_NAME} ${SRC_FILES}) 62 | 63 | target_include_directories( 64 | ${EXE_NAME} PRIVATE ${BELA_ROOT} ${BELA_ROOT}/include ${CMAKE_CURRENT_SOURCE_DIR} 65 | ) 66 | 67 | target_link_libraries( 68 | ${EXE_NAME} 69 | PRIVATE 70 | ${BELA_ROOT}/lib/libbelafull.so 71 | ${COBALT_LIB} 72 | ${NEON_LIB} 73 | ${MATHNEON_LIB} 74 | dl 75 | prussdrv 76 | asound 77 | atomic 78 | sndfile 79 | pthread 80 | ) 81 | 82 | #################################### 83 | 84 | message(STATUS "Enabling PyTorch frontend") 85 | add_compile_definitions(ENABLE_PYTORCH_FRONTEND) 86 | # find pytorch 87 | # 
-DCMAKE_PREFIX_PATH=/absolute/path/to/libtorch
list(APPEND CMAKE_PREFIX_PATH /sysroot/opt/pytorch-install)
find_package(Torch REQUIRED)

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} ${TORCH_CXX_FLAGS}")
target_link_libraries(${EXE_NAME} PRIVATE ${TORCH_LIBRARIES} rt)
--------------------------------------------------------------------------------
/tutorial/bela-code/inference/Watcher.cpp:
--------------------------------------------------------------------------------
../watcher/Watcher.cpp
--------------------------------------------------------------------------------
/tutorial/bela-code/inference/Watcher.h:
--------------------------------------------------------------------------------
../watcher/Watcher.h
--------------------------------------------------------------------------------
/tutorial/bela-code/inference/main.cpp:
--------------------------------------------------------------------------------

#include <Bela.h>
#include <csignal>
#include <cstring>
#include <getopt.h>
#include <iostream>
#include <libgen.h>
#include <string>
#include <unistd.h>

// Handle Ctrl-C by requesting that the audio rendering stop
void interrupt_handler(int var)
{
	Bela_requestStop();
}

// Print usage information
void usage(const char * processName)
{
	std::cerr << "Usage: " << processName << " [options]" << std::endl;

	Bela_usage();

	std::cerr << "   --modelpath [-m] model path: path to torch .jit model\n";
	std::cerr << "   --help [-h]: Print this menu\n";
}


int main(int argc, char *argv[])
{
	BelaInitSettings *settings = Bela_InitSettings_alloc();
	std::string modelPath = "model.jit";

	struct option customOptions[] =
	{
		{"help", 0, NULL, 'h'},
		{"modelpath", 1, NULL, 'm'},
		{NULL, 0, NULL, 0}
	};

	// Set default settings
	Bela_defaultSettings(settings);
	settings->setup = setup;
	settings->render = render;
	settings->cleanup = cleanup;

	{
		char* nameWithSlash = strrchr(argv[0], '/');
		settings->projectName = nameWithSlash ? nameWithSlash + 1 : argv[0];
	}

	// Parse command-line arguments
	while (1) {
		int c = Bela_getopt_long(argc, argv, "hm:", customOptions, settings);
		if (c < 0)
		{
			break;
		}
		int ret = -1;
		switch (c) {
		case 'h':
			usage(basename(argv[0]));
			ret = 0;
			break;
		case 'm':
			modelPath = optarg;
			break;
		default:
			usage(basename(argv[0]));
			ret = 1;
			break;
		}
		if(ret >= 0)
		{
			Bela_InitSettings_free(settings);
			return ret;
		}
	}


	// Initialise the PRU audio device
	if (Bela_initAudio(settings, &modelPath) != 0)
	{
		Bela_InitSettings_free(settings);
		fprintf(stderr, "Error: unable to initialise audio\n");
		return 1;
	}
	Bela_InitSettings_free(settings);


	// Start the audio device running
	if (Bela_startAudio())
	{
		fprintf(stderr, "Error: unable to start real-time audio\n");
		// Stop the audio device
		Bela_stopAudio();
		// Clean up any resources allocated for audio
		Bela_cleanupAudio();
		return 1;
	}

	// Set up interrupt handler to catch Control-C and SIGTERM
	signal(SIGINT, interrupt_handler);
	signal(SIGTERM, interrupt_handler);

	// Run until told to stop
	while (!Bela_stopRequested())
	{
		usleep(100000);
	}

	// Stop the audio device
	Bela_stopAudio();

	// Clean up any resources allocated for audio
	Bela_cleanupAudio();

	// All done!
	return 0;

}
--------------------------------------------------------------------------------
/tutorial/bela-code/inference/render.cpp:
--------------------------------------------------------------------------------
#include <Bela.h>
#include <cmath>
#include <iostream>
#include <libraries/AudioFile/AudioFile.h>
#include <torch/script.h>
#include <vector>

#define N_VARS 2

// torch buffers
const int gWindowSize = 512;
int gNumTargetWindows = 1;
const int gInputBufferSize = 100 * gWindowSize;
const int gOutputBufferSize = gNumTargetWindows * gInputBufferSize;
int gOutputBufferWritePointer = 2 * gWindowSize;
int gDebugPrevBufferWritePointer = gOutputBufferWritePointer;
int gDebugFrameCounter = 0;
int gOutputBufferReadPointer = 0;
int gInputBufferPointer = 0;
int gWindowFramesCounter = 0;
std::vector<std::vector<float>>
    gInputBuffer(N_VARS, std::vector<float>(gInputBufferSize));
std::vector<std::vector<float>>
    gOutputBuffer(N_VARS, std::vector<float>(gOutputBufferSize));
torch::jit::script::Module model;
std::vector<std::vector<float>> unwrappedBuffer(gWindowSize,
                                                std::vector<float>(N_VARS));

AuxiliaryTask gInferenceTask;
int gCachedInputBufferPointer = 0;

// LFO
float gFrequency = 15.0;
float gPhase;
unsigned int gAudioFramesPerAnalogFrame;
float gInverseSampleRate;

void inference_task_background(void *);

std::string gFilename = "waves.wav";
std::vector<std::vector<float>> gSampleData;
int gStartFrame = 44100;
int gEndFrame = 88200;
unsigned int gReadPtr;

std::string gModelPath = "model.jit";

int start = 1;

bool setup(BelaContext *context, void *userData) {

  printf("analog sample rate: %.1f\n", context->analogSampleRate);

  // Better to calculate the inverse sample rate here and store it in a variable
  // so it can be reused
  gInverseSampleRate = 1.0 / context->audioSampleRate;

  // Set up the thread for the inference
  gInferenceTask =
      Bela_createAuxiliaryTask(inference_task_background, 99, "bela-inference");

  // rate of audio frames per analog frame
  if (context->analogFrames)
    gAudioFramesPerAnalogFrame = context->audioFrames / context->analogFrames;

  if (userData != 0)
    gModelPath = *(std::string *)userData;

  try {
    model = torch::jit::load(gModelPath);
  } catch (const c10::Error &e) {
    std::cerr << "Error loading the model: " << e.msg() << std::endl;
    return false;
  }
  // Warm up inference
  try {
    // Create a dummy input tensor
    torch::Tensor dummy_input = torch::rand({1, 512, 2});

    auto output = model.forward({dummy_input}).toTensor();
    output = model.forward({dummy_input}).toTensor();
    output = model.forward({dummy_input}).toTensor();

    // Print the result
    std::cout << "Input tensor dimensions: " << dummy_input.sizes()
              << std::endl;
    std::cout << "Output tensor dimensions: " << output.sizes() << std::endl;
  } catch (const c10::Error &e) {
    std::cerr << "Error during dummy inference: " << e.msg() << std::endl;
    return false;
  }

  // Load the audio file
  gSampleData =
      AudioFileUtilities::load(gFilename, gEndFrame - gStartFrame, gStartFrame);

  return true;
}

void inference_task(const std::vector<std::vector<float>> &inBuffer,
                    unsigned int inPointer,
                    std::vector<std::vector<float>> &outBuffer,
                    unsigned int outPointer) {
  RtThread::setThisThreadPriority(1);

  // Precompute circular buffer indices
  std::vector<int> circularIndices(gWindowSize);
  for (int n = 0; n < gWindowSize; ++n) {
    circularIndices[n] =
        (inPointer + n - gWindowSize + gInputBufferSize) % gInputBufferSize;
  }

  // Fill unwrappedBuffer using precomputed indices
  for (int n = 0; n < gWindowSize; ++n) {
    for (int i = 0; i < N_VARS; ++i) {
      unwrappedBuffer[n][i] = inBuffer[i][circularIndices[n]];
    }
  }

  // Flatten the window into contiguous memory before building the tensor
  // (a vector of vectors is not contiguous, so its data() cannot be handed
  // to torch::from_blob directly)
  static std::vector<float> flatBuffer(gWindowSize * N_VARS);
  for (int n = 0; n < gWindowSize; ++n)
    for (int i = 0; i < N_VARS; ++i)
      flatBuffer[n * N_VARS + i] = unwrappedBuffer[n][i];
  torch::Tensor inputTensor =
      torch::from_blob(flatBuffer.data(), {1, gWindowSize, N_VARS},
                       torch::kFloat)
          .clone();

  // Perform inference
  torch::Tensor outputTensor = model.forward({inputTensor}).toTensor();
  outputTensor =
      outputTensor.squeeze(0); // Shape: [gNumTargetWindows * gWindowSize, N_VARS]

  // Prepare a pointer to the output tensor's data
  float *outputData = outputTensor.data_ptr<float>();

  // Precompute output circular buffer indices
  std::vector<int> outCircularIndices(gNumTargetWindows * gWindowSize);
  for (int n = 0; n < gNumTargetWindows * gWindowSize; ++n) {
    outCircularIndices[n] = (outPointer + n) % gOutputBufferSize;
  }

  // Fill outBuffer using precomputed indices and outputData
  for (int n = 0; n < gNumTargetWindows * gWindowSize; ++n) {
    int circularBufferIndex = outCircularIndices[n];
    for (int i = 0; i < N_VARS; ++i) {
      outBuffer[i][circularBufferIndex] = outputData[n * N_VARS + i];
    }
  }
}

void inference_task_background(void *) {

  RtThread::setThisThreadPriority(1);

  // Use the input pointer cached at schedule time, so the window is not
  // affected by writes that happen while this task runs
  inference_task(gInputBuffer, gCachedInputBufferPointer, gOutputBuffer,
                 gOutputBufferWritePointer);
  gOutputBufferWritePointer =
      (gOutputBufferWritePointer + gNumTargetWindows * gWindowSize) %
      gOutputBufferSize;
}

void render(BelaContext *context, void *userData) {

  float pot1 = 0;
  float pot2 = 0;
  float outpot1 = 0;
  float outpot2 = 0;

  for (unsigned int n = 0; n < context->audioFrames; n++) {
    if (gAudioFramesPerAnalogFrame && !(n % gAudioFramesPerAnalogFrame)) {

      if (n == 0) { // only update the pot values once per block
        pot1 = map(analogRead(context, n / gAudioFramesPerAnalogFrame, 0), 0,
                   0.84, 0, 3);
        pot2 = map(analogRead(context, n / gAudioFramesPerAnalogFrame, 1), 0,
                   0.84, 0, 1);

        // -- pytorch buffer
        gInputBuffer[0][gInputBufferPointer] = pot1;
        gInputBuffer[1][gInputBufferPointer] = pot2;
        if (++gInputBufferPointer >= gInputBufferSize) {
          // Wrap the circular buffer
          // Notice: this is not the condition for starting a new inference
          gInputBufferPointer = 0;
        }

        if (++gWindowFramesCounter >= gWindowSize) {
          gWindowFramesCounter = 0;
          gCachedInputBufferPointer = gInputBufferPointer;
          Bela_scheduleAuxiliaryTask(gInferenceTask);
        }

        // debugging
        gDebugFrameCounter++;
        if (gOutputBufferWritePointer != gDebugPrevBufferWritePointer) {
          rt_printf("aux task took: %d, write pointer - read pointer: %d \n",
                    gDebugFrameCounter,
                    gOutputBufferWritePointer - gOutputBufferReadPointer);
          gDebugPrevBufferWritePointer = gOutputBufferWritePointer;
          gDebugFrameCounter = 0;
        }

        // Get the output sample from the output buffer
        outpot1 = gOutputBuffer[0][gOutputBufferReadPointer];
        outpot2 = gOutputBuffer[1][gOutputBufferReadPointer];

        // rt_printf("read pointer: %d, write pointer %d \n",
        // gOutputBufferReadPointer, gOutputBufferWritePointer);
        // Increment the read pointer in the output circular buffer
        if ((gOutputBufferReadPointer + 1) % gOutputBufferSize ==
            gOutputBufferWritePointer) {
          rt_printf("Warning: output buffer overrun\n");
        } else {
          gOutputBufferReadPointer++;
        }
        if (gOutputBufferReadPointer >= gOutputBufferSize)
          gOutputBufferReadPointer = 0;

        // --
      }
      // Increment read pointer and reset to 0 when end of file is reached
      if (++gReadPtr >= gSampleData[0].size())
        gReadPtr = 0;

      // LFO code
      gPhase += 2.0f * (float)M_PI * gFrequency * gInverseSampleRate;
      if (gPhase > M_PI)
        gPhase -= 2.0f * (float)M_PI;

      float tri;
      float sq;
      if (gPhase > 0) {
        tri = -1 + (2 * gPhase / (float)M_PI);
        sq = 1;
      } else {
        tri = -1 - (2 * gPhase / (float)M_PI);
        sq = -1;
      }

      float lfo = 0;
      if (outpot1 <= 1) {
        lfo = (1 - outpot1) * sinf(gPhase) + outpot1 * tri;
      } else if (outpot1 <= 2) {
        lfo = (2 - outpot1) * tri + (outpot1 - 1) * sq; // crossfade tri -> square
      } else if (outpot1 <= 3) {
        float saw = 1 - (1 / (float)M_PI * gPhase);
        lfo = (3 - outpot1) * sq + (outpot1 - 2) * saw; // crossfade square -> saw
      }

      // Multiply the audio sample by the LFO value
      float in = gSampleData[0][gReadPtr];
      float out = outpot2 * lfo * gSampleData[0][gReadPtr];

      // Write the output to both audio channels
      audioWrite(context, n, 0, out);
      audioWrite(context, n, 1, out);

      // if (n % 64) { // debug every 64 frames
      //   rt_printf("pot1: %.2f, pot2: %.2f\n", pot1, pot2);
      //   rt_printf("outpot1: %.2f, outpot2: %.2f\n", outpot1, outpot2);
      // }
    }
  }
}

void cleanup(BelaContext *context, void *userData) {
  // Clean up resources
}
--------------------------------------------------------------------------------
/tutorial/bela-code/inference/waves.wav:
--------------------------------------------------------------------------------
../waves.wav
--------------------------------------------------------------------------------
/tutorial/bela-code/pybela-basic/CMakeLists.txt:
--------------------------------------------------------------------------------
 1 | cmake_minimum_required(VERSION 3.18)
 2 |
 3 | if(NOT DEFINED PROJECT_NAME)
 4 |     set(PROJECT_NAME "project")
 5 | endif()
 6 |
 7 | project(${PROJECT_NAME})
 8 |
 9 | ####################################
10 | add_compile_options(
11 |     -march=armv7-a
12 |     -mtune=cortex-a8
13 |     -mfloat-abi=hard
14 |     -mfpu=neon
15 |     -Wno-psabi
16 | )
17 |
18 | add_compile_options(
19 |     -O3
20 |     -g
21 |     -fPIC
22 |     -ftree-vectorize
23 |     -ffast-math
24 | )
25 |
26 | add_compile_definitions(DXENOMAI_SKIN_posix) 27 | 28 | #################################### 29 | 30 | set(BELA_ROOT "${CMAKE_SYSROOT}/root/Bela") 31 | set(SYS_ROOT "${CMAKE_SYSROOT}") 32 | 33 | find_library(COBALT_LIB REQUIRED 34 | NAMES cobalt libcobalt 35 | HINTS "${CMAKE_SYSROOT}/usr/xenomai/lib" 36 | ) 37 | 38 | find_library(NEON_LIB REQUIRED 39 | NAMES NE10 libNE10 40 | HINTS "${CMAKE_SYSROOT}/usr/lib" 41 | ) 42 | 43 | find_library(MATHNEON_LIB REQUIRED 44 | NAMES mathneon libmathneon 45 | HINTS "${CMAKE_SYSROOT}/usr/lib" 46 | ) 47 | 48 | #################################### 49 | 50 | set(EXE_NAME ${PROJECT_NAME}) 51 | 52 | file(GLOB SRC_FILES *.cpp) 53 | 54 | # Check if main.cpp exists in the current directory 55 | if(EXISTS "${CMAKE_CURRENT_SOURCE_DIR}/main.cpp") 56 | list(APPEND SRC_FILES "${CMAKE_CURRENT_SOURCE_DIR}/main.cpp") 57 | else() 58 | list(APPEND SRC_FILES "/sysroot/root/Bela/core/default_main.cpp") 59 | endif() 60 | 61 | add_executable(${EXE_NAME} ${SRC_FILES}) 62 | 63 | target_include_directories( 64 | ${EXE_NAME} PRIVATE ${BELA_ROOT} ${BELA_ROOT}/include ${CMAKE_CURRENT_SOURCE_DIR} 65 | ) 66 | 67 | target_link_libraries( 68 | ${EXE_NAME} 69 | PRIVATE 70 | ${BELA_ROOT}/lib/libbelafull.so 71 | ${COBALT_LIB} 72 | ${NEON_LIB} 73 | ${MATHNEON_LIB} 74 | dl 75 | prussdrv 76 | asound 77 | atomic 78 | sndfile 79 | pthread 80 | rt 81 | ) 82 | 83 | #################################### -------------------------------------------------------------------------------- /tutorial/bela-code/pybela-basic/Watcher.cpp: -------------------------------------------------------------------------------- 1 | ../watcher/Watcher.cpp -------------------------------------------------------------------------------- /tutorial/bela-code/pybela-basic/Watcher.h: -------------------------------------------------------------------------------- 1 | ../watcher/Watcher.h -------------------------------------------------------------------------------- 
/tutorial/bela-code/pybela-basic/render.cpp:
--------------------------------------------------------------------------------
#include <Bela.h>
#include <Watcher.h>
#include <libraries/AudioFile/AudioFile.h>
#include <vector>

unsigned int gAudioFramesPerAnalogFrame;
std::string gFilename = "waves.wav";
std::vector<std::vector<float>> gSampleData;
int gStartFrame = 44100;
int gEndFrame = 88200;
unsigned int gReadPtr;

Watcher<float> pot("pot"); // the "pot" variable is "watched"

bool setup(BelaContext *context, void *userData) {

  Bela_getDefaultWatcherManager()->getGui().setup(context->projectName);
  Bela_getDefaultWatcherManager()->setup(
      context->audioSampleRate); // set sample rate in watcher

  gAudioFramesPerAnalogFrame = context->audioFrames / context->analogFrames;

  // Load the audio file
  gSampleData =
      AudioFileUtilities::load(gFilename, gEndFrame - gStartFrame, gStartFrame);

  return true;
}

void render(BelaContext *context, void *userData) {

  for (unsigned int n = 0; n < context->audioFrames; n++) {

    uint64_t analogFramesElapsed = int((context->audioFramesElapsed + n) / 2);
    Bela_getDefaultWatcherManager()->tick(
        analogFramesElapsed); // tick the watcher clock

    if (gAudioFramesPerAnalogFrame && !(n % gAudioFramesPerAnalogFrame)) {
      pot = map(analogRead(context, n / gAudioFramesPerAnalogFrame, 0), 0, 0.84,
                0, 1);
    }

    // Increment read pointer and reset to 0 when end of file is reached
    if (++gReadPtr >= gSampleData[0].size())
      gReadPtr = 0;
    float sound = gSampleData[0][gReadPtr];

    // the pot controls the volume of the sound
    float out = pot * sound;

    // Write the audio to all output channels
    for (unsigned int channel = 0; channel < context->audioOutChannels;
         channel++) {
      audioWrite(context, n, channel, out);
    }
  }
}

void cleanup(BelaContext *context, void *userData) {}
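
All three render files scale the raw analog reading with Bela's `map()` before using it: the potentiometer reads roughly 0–0.84 at Bela's analog inputs, and `map()` linearly rescales that range to something musically useful (0–1 for volume here, 0–3 for the LFO-shape selector in `dataset-capture`). `map()` is a plain linear interpolation; the same arithmetic in Python (illustrative sketch, function name invented):

```python
def bela_map(value, in_min, in_max, out_min, out_max):
    """Linear rescaling, the same arithmetic as Bela's map() utility."""
    return (value - in_min) * (out_max - out_min) / (in_max - in_min) + out_min

# a pot reading of 0.42 (half of the 0-0.84 range) maps to mid-range:
print(bela_map(0.42, 0, 0.84, 0, 1))  # -> 0.5
print(bela_map(0.42, 0, 0.84, 0, 3))  # ~1.5
```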
-------------------------------------------------------------------------------- /tutorial/bela-code/pybela-basic/waves.wav: -------------------------------------------------------------------------------- 1 | ../waves.wav -------------------------------------------------------------------------------- /tutorial/bela-code/waves.wav: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/pelinski/pybela-pytorch-xc-tutorial/357034175ad1fc92b7331734249167b3d14d8794/tutorial/bela-code/waves.wav -------------------------------------------------------------------------------- /tutorial/requirements.txt: -------------------------------------------------------------------------------- 1 | matplotlib==3.8.2 2 | pybela==1.0.0 3 | torch==2.4.0 -------------------------------------------------------------------------------- /tutorial/scripts/copy-libs-to-bela.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | echo "Copying libbelafull to Bela..." 4 | rsync \ 5 | --timeout=10 \ 6 | -avzP /sysroot/root/Bela/lib/libbelafull.so \ 7 | root@$BBB_HOSTNAME:Bela/lib/libbelafull.so 8 | 9 | echo "Copying libtorch to Bela..." 10 | rsync \ 11 | --timeout=10 \ 12 | -avzP /sysroot/opt/pytorch-install/lib/libtorch_cpu.so /sysroot/opt/pytorch-install/lib/libtorch.so /sysroot/opt/pytorch-install/lib/libc10.so root@$BBB_HOSTNAME:Bela/lib/ 13 | 14 | echo "Finished" -------------------------------------------------------------------------------- /tutorial/scripts/setup-bela-dev.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 $BBB_HOSTNAME "date -s \"`date '+%Y%m%d %T %z'`\" > /dev/null" 4 | 5 | cd /sysroot/root/Bela 6 | echo "Changing Bela branch to $(git rev-parse --short HEAD)..." 
7 | git remote remove board 8 | git remote add board $BBB_HOSTNAME:Bela/ 9 | ssh $BBB_HOSTNAME "cd Bela && git config receive.denyCurrentBranch updateInstead" 10 | git push -f board tmp:tmp 11 | ssh $BBB_HOSTNAME "cd Bela && git config --unset receive.denyCurrentBranch" 12 | ssh $BBB_HOSTNAME "cd Bela && git checkout tmp" 13 | 14 | echo "Rebuilding Bela core and libraries..." 15 | ssh $BBB_HOSTNAME "cd Bela && make lib" 16 | ssh $BBB_HOSTNAME "cd Bela && make -f Makefile.libraries all" 17 | # check for clock skew and rebuild if necessary 18 | ssh $BBB_HOSTNAME "cd Bela && make lib 2>&1 | tee logfile || exit 1; 19 | grep -q \"skew detected\" logfile && { echo CLOCK SKEW DETECTED. CLEANING CORE AND TRYING AGAIN && make coreclean && make lib; } || exit 0;" 20 | ssh $BBB_HOSTNAME "cd Bela && make -f Makefile.libraries all 2>&1 | tee logfile || exit 1; 21 | grep -q \"skew detected\" logfile && { echo CLOCK SKEW DETECTED. CLEANING LIBRARIES AND TRYING AGAIN && make -f Makefile.libraries cleanall && make -f Makefile.libraries all; } || exit 0;" -------------------------------------------------------------------------------- /tutorial/scripts/setup-bela-revC.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | cd /sysroot/opt/bb.org-overlays 3 | echo "Push Bela overlay to Bela..." 4 | git remote add board $BBB_HOSTNAME:/opt/bb.org-overlays 5 | ssh $BBB_HOSTNAME "cd /opt/bb.org-overlays && git config receive.denyCurrentBranch updateInstead" 6 | git push -f board master:master 7 | ssh $BBB_HOSTNAME "cd /opt/bb.org-overlays && git config --unset receive.denyCurrentBranch" 8 | ssh $BBB_HOSTNAME "cd /opt/bb.org-overlays && git checkout master" 9 | 10 | echo "Rebuilding Bela overlay..." 11 | ssh $BBB_HOSTNAME "cd /opt/bb.org-overlays && make clean && make && make install" 12 | ssh $BBB_HOSTNAME "make -C /root/Bela/resources/tools/board_detect/ board_detect install" 13 | ssh $BBB_HOSTNAME "reboot" 14 | echo "Rebooting Bela..." 
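All of the scripts above assume the board is reachable at `$BBB_HOSTNAME`. If `rsync` or `ssh` hang until their timeouts, it is usually faster to check connectivity first. A minimal sketch — `bela_reachable` is a hypothetical helper written for this tutorial, not part of pybela — that simply tries the SSH port:

```python
import os
import socket

def bela_reachable(host, port=22, timeout=3.0):
    """Return True if `host` accepts a TCP connection on `port` (SSH by default)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers DNS failure, refusal, and timeout
        return False

host = os.environ.get("BBB_HOSTNAME", "bela.local")
print(f"{host} reachable: {bela_reachable(host)}")
```

If this prints `False`, check the USB connection and the `BBB_HOSTNAME` environment variable (set when the Docker container is created) before running the scripts or the notebook.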
-------------------------------------------------------------------------------- /tutorial/tutorial.ipynb: -------------------------------------------------------------------------------- 1 | { 2 | "cells": [ 3 | { 4 | "cell_type": "markdown", 5 | "metadata": {}, 6 | "source": [ 7 | "# pybela + pytorch bela cross-compilation tutorial\n", 8 | "In this workshop we'll be using jupyter notebooks and python to:\n", 9 | "1. Record a dataset of potentiometer sensor values\n", 10 | "2. Train a TCN to predict those values\n", 11 | "3. Cross-compile and deploy the model to run in real-time on Bela\n", 12 | "\n", 13 | "Connect your Bela to the laptop and run the cell below:" 14 | ] 15 | }, 16 | { 17 | "cell_type": "code", 18 | "execution_count": null, 19 | "metadata": {}, 20 | "outputs": [], 21 | "source": [ 22 | "! ssh-keyscan $BBB_HOSTNAME >> ~/.ssh/known_hosts" 23 | ] 24 | }, 25 | { 26 | "cell_type": "markdown", 27 | "metadata": {}, 28 | "source": [ 29 | "Let's also import all the necessary python libraries:" 30 | ] 31 | }, 32 | { 33 | "cell_type": "code", 34 | "execution_count": null, 35 | "metadata": {}, 36 | "outputs": [], 37 | "source": [ 38 | "import os\n", 39 | "from pybela import Logger\n", 40 | "import asyncio\n", 41 | "\n", 42 | "import matplotlib.pyplot as plt\n", 43 | "import numpy as np\n", 44 | "from tqdm import tqdm \n", 45 | "\n", 46 | "import torch\n", 47 | "import torch.nn as nn\n", 48 | "import torch.nn.functional as F\n", 49 | "from torch.utils.data import Dataset, DataLoader" 50 | ] 51 | }, 52 | { 53 | "cell_type": "markdown", 54 | "metadata": {}, 55 | "source": [ 56 | "## 1 – pybela basics\n", 57 | "\n", 58 | "[pybela](https://github.com/BelaPlatform/pybela/) allows sending data back and forth between python and Bela.\n", 59 | "\n", 60 | "For pybela to be able to communicate with Bela, there has to be a project running on the Bela. \n", 61 | "\n", 62 | "We have an example project in `/root/bela-code/pybela-basic`. 
Let's take a look at the cpp code.\n", 63 | "\n", 64 | "### c++ code\n", 65 | "The cpp code in `pybela-basic/render.cpp` reads the value of a potentiometer, which controls the volume of a wave sound. The potentiometer value is stored in a `pot` variable, which is defined in a special way so that we can access it from python.\n", 66 | "\n", 67 | "You should connect a potentiometer to the Bela's analog input 0:\n", 68 | "\n", 69 | "

\n", 70 | "<img src=\"_fritzing/potentiometer.png\" alt=\"potentiometer\">\n", 71 | "

\n", 72 | "Let's take a look at the Watcher API in `pybela-basic/render.cpp`:\n", 73 | "\n", 74 | "\n", 75 | "The Watcher API allows \"watching\" variables in the Bela code so that we can retrieve their values from python. First, we define the variables we want to watch this way:\n", 76 | "\n", 77 | "```cpp\n", 78 | "Watcher<float> pot(\"pot\"); // the \"pot\" variable is \"watched\"\n", 79 | "```\n", 80 | "\n", 81 | "In the `setup()` function, we initialize the Watcher:\n", 82 | "\n", 83 | "```cpp\n", 84 | "bool setup(BelaContext *context, void *userData) {\n", 85 | "\n", 86 | " Bela_getDefaultWatcherManager()->getGui().setup(context->projectName);\n", 87 | " Bela_getDefaultWatcherManager()->setup(\n", 88 | " context->audioSampleRate); // set sample rate in watcher\n", 89 | "\n", 90 | " ...\n", 91 | "```\n", 92 | "\n", 93 | "We need to tell the Watcher the rate at which we want to observe the variables. For that, we \"tick\" the Watcher clock in the `render()` function. Note that we only tick it at the analog rate, since \"pot\" is an analog variable (typically read once per two audio frames):\n", 94 | "\n", 95 | "```cpp\n", 96 | "\n", 97 | "void render(BelaContext *context, void *userData) {\n", 98 | "\n", 99 | " for (unsigned int n = 0; n < context->audioFrames; n++) {\n", 100 | "\n", 101 | " uint64_t analogFramesElapsed = int((context->audioFramesElapsed + n) / 2);\n", 102 | " Bela_getDefaultWatcherManager()->tick(\n", 103 | " analogFramesElapsed); // tick the watcher clock\n", 104 | "\n", 105 | " if (gAudioFramesPerAnalogFrame && !(n % gAudioFramesPerAnalogFrame)) {\n", 106 | " pot = analogRead(context, n / gAudioFramesPerAnalogFrame, 0);\n", 107 | " }\n", 108 | " }\n", 109 | "}\n", 110 | "```\n", 111 | "\n", 112 | "Let's now cross-compile this code and run it on the Bela.\n", 113 | "\n", 114 | "### cross-compiling the cpp code \n", 115 | "\n", 116 | "To cross-compile the code, we use `cmake` and a cross-compilation toolchain. 
A cross-compilation toolchain tells `cmake` that even though we are compiling the code on our laptop, the code is meant to run on the Bela.\n" 117 | ] 118 | }, 119 | { 120 | "cell_type": "code", 121 | "execution_count": null, 122 | "metadata": {}, 123 | "outputs": [], 124 | "source": [ 125 | "!cd bela-code/pybela-basic && cmake -S . -B build -DPROJECT_NAME=pybela-basic -DCMAKE_TOOLCHAIN_FILE=/sysroot/root/Bela/Toolchain.cmake\n", 126 | "!cd bela-code/pybela-basic && cmake --build build -j" 127 | ] 128 | }, 129 | { 130 | "cell_type": "markdown", 131 | "metadata": {}, 132 | "source": [ 133 | "We have now built an executable for the Bela, which is located at `bela-code/pybela-basic/build/pybela-basic`. Let's copy it to the Bela, along with the project files so we can access them from the Bela IDE, and the `waves.wav` file which is used by the project." 134 | ] 135 | }, 136 | { 137 | "cell_type": "code", 138 | "execution_count": null, 139 | "metadata": {}, 140 | "outputs": [], 141 | "source": [ 142 | "!rsync -rvL --timeout 10 bela-code/pybela-basic/build/pybela-basic root@$BBB_HOSTNAME:Bela/projects/pybela-basic/\n", 143 | "!rsync -rvL --timeout 10 bela-code/pybela-basic/ --exclude=\"build\" root@$BBB_HOSTNAME:/root/Bela/projects/pybela-basic/" 144 | ] 145 | }, 146 | { 147 | "cell_type": "markdown", 148 | "metadata": {}, 149 | "source": [ 150 | "To run it, open a terminal and ssh into the Bela and run the program:\n", 151 | "\n", 152 | "```bash\n", 153 | "ssh root@bela.local\n", 154 | "cd Bela/projects/pybela-basic && ./pybela-basic\n", 155 | "```\n", 156 | "(running this on the Jupyter notebook would block the cell and we need to be able to run the next cells!)" 157 | ] 158 | }, 159 | { 160 | "cell_type": "markdown", 161 | "metadata": {}, 162 | "source": [ 163 | "### python code\n", 164 | "Now we are ready to interact with the Bela code from python. 
First we import `pybela` and create a `Logger` object:" 165 | ] 166 | }, 167 | { 168 | "cell_type": "code", 169 | "execution_count": null, 170 | "metadata": {}, 171 | "outputs": [], 172 | "source": [ 173 | "logger = Logger(ip=os.environ[\"BBB_HOSTNAME\"])\n", 174 | "logger.connect()" 175 | ] 176 | }, 177 | { 178 | "cell_type": "markdown", 179 | "metadata": {}, 180 | "source": [ 181 | "Now the Logger is connected to Bela. The Logger class allows us to record datasets locally on Bela and transfer them automatically to the host computer. \n", 182 | "\n", 183 | "Connect your headphones to the Bela audio output and run the cell below while you rotate the potentiometer." 184 | ] 185 | }, 186 | { 187 | "cell_type": "code", 188 | "execution_count": null, 189 | "metadata": {}, 190 | "outputs": [], 191 | "source": [ 192 | "file_paths = logger.start_logging(\"pot\")" 193 | ] 194 | }, 195 | { 196 | "cell_type": "markdown", 197 | "metadata": {}, 198 | "source": [ 199 | "After a few seconds, you can stop the logging:" 200 | ] 201 | }, 202 | { 203 | "cell_type": "code", 204 | "execution_count": null, 205 | "metadata": {}, 206 | "outputs": [], 207 | "source": [ 208 | "logger.stop_logging()" 209 | ] 210 | }, 211 | { 212 | "cell_type": "markdown", 213 | "metadata": {}, 214 | "source": [ 215 | "Once the transfer is done, you can retrieve the logged data by reading the binary file in which it was saved. That binary file stores the data as timestamped buffers, which we are not interested in, since we just want a continuous array of potentiometer values." 
216 | ] 217 | }, 218 | { 219 | "cell_type": "code", 220 | "execution_count": null, 221 | "metadata": {}, 222 | "outputs": [], 223 | "source": [ 224 | "raw = logger.read_binary_file(\n", 225 | " file_path=file_paths[\"local_paths\"][\"pot\"], timestamp_mode=logger.get_prop_of_var(\"pot\", \"timestamp_mode\"))\n", 226 | "data = [data for _buffer in raw[\"buffers\"] for data in _buffer[\"data\"]]" 227 | ] 228 | }, 229 | { 230 | "cell_type": "markdown", 231 | "metadata": {}, 232 | "source": [ 233 | "We can now plot the data using matplotlib." 234 | ] 235 | }, 236 | { 237 | "cell_type": "code", 238 | "execution_count": null, 239 | "metadata": {}, 240 | "outputs": [], 241 | "source": [ 242 | "analog_sample_rate = logger.sample_rate/2\n", 243 | "\n", 244 | "plt.plot(np.arange(len(data)) / analog_sample_rate, data)\n", 245 | "plt.title('Pot Data')\n", 246 | "plt.xlabel('Time')\n", 247 | "plt.ylabel('Amplitude')" 248 | ] 249 | }, 250 | { 251 | "cell_type": "markdown", 252 | "metadata": {}, 253 | "source": [ 254 | "## 2 – potentiometers dataset capture\n", 255 | "\n", 256 | "We are now ready to record a dataset with two potentiometers. Connect the second potentiometer to your Bela:\n", 257 | "\n", 258 | "

\n", 259 | "<img src=\"_fritzing/potentiometer_2.png\" alt=\"potentiometer\">\n", 260 | "

\n", 261 | "\n", 262 | "We will be running the `dataset-capture` project. Now the first potentiometer controls the waveshape of an LFO and the second potentiometer, the volume of the sound.\n", 263 | "\n", 264 | "Let's start by cross-compiling the code and copying it to Bela." 265 | ] 266 | }, 267 | { 268 | "cell_type": "code", 269 | "execution_count": null, 270 | "metadata": {}, 271 | "outputs": [], 272 | "source": [ 273 | "!cd bela-code/dataset-capture && cmake -S . -B build -DPROJECT_NAME=dataset-capture -DCMAKE_TOOLCHAIN_FILE=/sysroot/root/Bela/Toolchain.cmake\n", 274 | "!cd bela-code/dataset-capture && cmake --build build -j" 275 | ] 276 | }, 277 | { 278 | "cell_type": "code", 279 | "execution_count": null, 280 | "metadata": {}, 281 | "outputs": [], 282 | "source": [ 283 | "!rsync -rvL --timeout 10 bela-code/dataset-capture/build/dataset-capture root@$BBB_HOSTNAME:Bela/projects/dataset-capture/\n", 284 | "!rsync -rvL --timeout 10 bela-code/dataset-capture/ --exclude=\"build\" root@$BBB_HOSTNAME:/root/Bela/projects/dataset-capture/" 285 | ] 286 | }, 287 | { 288 | "cell_type": "markdown", 289 | "metadata": {}, 290 | "source": [ 291 | "Now you can run the `dataset-capture` project on the Bela:\n", 292 | "\n", 293 | "```bash\n", 294 | "ssh root@bela.local\n", 295 | "cd Bela/projects/dataset-capture && ./dataset-capture\n", 296 | "```\n", 297 | "\n", 298 | "Feel free to play around with the two potentiometers. You can also edit the code in the IDE and re-run the project.\n", 299 | "\n", 300 | "Once you're ready, you can record a dataset of values from the two potentiometers." 
301 | ] 302 | }, 303 | { 304 | "cell_type": "code", 305 | "execution_count": null, 306 | "metadata": {}, 307 | "outputs": [], 308 | "source": [ 309 | "logger=Logger(ip=os.environ[\"BBB_HOSTNAME\"])\n", 310 | "logger.connect()" 311 | ] 312 | }, 313 | { 314 | "cell_type": "markdown", 315 | "metadata": {}, 316 | "source": [ 317 | "You can time the length of your dataset using `asyncio.sleep(time_in_seconds)`. Note we are not using `time.sleep()` because it would block the Jupyter notebook." 318 | ] 319 | }, 320 | { 321 | "cell_type": "code", 322 | "execution_count": null, 323 | "metadata": {}, 324 | "outputs": [], 325 | "source": [ 326 | "file_paths = logger.start_logging(variables=[\"pot1\", \"pot2\"])\n", 327 | "await asyncio.sleep(90)\n", 328 | "logger.stop_logging()" 329 | ] 330 | }, 331 | { 332 | "cell_type": "code", 333 | "execution_count": null, 334 | "metadata": {}, 335 | "outputs": [], 336 | "source": [ 337 | "pot1_raw_data = logger.read_binary_file(\n", 338 | " file_path=file_paths[\"local_paths\"][\"pot1\"], timestamp_mode=logger.get_prop_of_var(\"pot1\", \"timestamp_mode\"))\n", 339 | "pot2_raw_data = logger.read_binary_file(\n", 340 | " file_path=file_paths[\"local_paths\"][\"pot2\"], timestamp_mode=logger.get_prop_of_var(\"pot2\", \"timestamp_mode\"))\n", 341 | "\n", 342 | "pot1_data = [data for _buffer in pot1_raw_data[\"buffers\"] for data in _buffer[\"data\"]]\n", 343 | "pot2_data = [data for _buffer in pot2_raw_data[\"buffers\"] for data in _buffer[\"data\"]]" 344 | ] 345 | }, 346 | { 347 | "cell_type": "markdown", 348 | "metadata": {}, 349 | "source": [ 350 | "We can now plot the data using matplotlib." 
351 | ] 352 | }, 353 | { 354 | "cell_type": "code", 355 | "execution_count": null, 356 | "metadata": {}, 357 | "outputs": [], 358 | "source": [ 359 | "analog_sample_rate = logger.sample_rate/2\n", 360 | "\n", 361 | "plt.figure(figsize=(10, 8))\n", 362 | "\n", 363 | "plt.subplot(2, 1, 1)\n", 364 | "plt.plot(np.arange(len(pot1_data)) / analog_sample_rate, pot1_data)\n", 365 | "plt.title('Pot 1 Data')\n", 366 | "plt.xlabel('Time')\n", 367 | "plt.ylabel('Amplitude')\n", 368 | "\n", 369 | "# Second subplot for pot2_data\n", 370 | "plt.subplot(2, 1, 2)\n", 371 | "plt.plot(np.arange(len(pot2_data)) / analog_sample_rate, pot2_data)\n", 372 | "plt.title('Pot 2 Data')\n", 373 | "plt.xlabel('Time')\n", 374 | "plt.ylabel('Amplitude')\n", 375 | " \n", 376 | "plt.tight_layout()\n", 377 | "plt.show()" 378 | ] 379 | }, 380 | { 381 | "cell_type": "markdown", 382 | "metadata": {}, 383 | "source": [ 384 | "## 3 - train model\n", 385 | "Now we are ready to train our model.\n", 386 | "We can generate a pytorch-compatible dataset using the `SensorDataset` class. This class divides the data you recorded previously into sequences of 512 values." 
387 | ] 388 | }, 389 | { 390 | "cell_type": "code", 391 | "execution_count": null, 392 | "metadata": {}, 393 | "outputs": [], 394 | "source": [ 395 | "seq_len = 512\n", 396 | "batch_size = 32\n", 397 | "target_windows = 1\n", 398 | "\n", 399 | "class SensorDataset(Dataset):\n", 400 | " def __init__(self, pot1_data, pot2_data, seq_len, target_windows):\n", 401 | " super().__init__()\n", 402 | " \n", 403 | " self.device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", 404 | " # make len divisible by seq_len\n", 405 | " _len = min(len(pot1_data), len(pot2_data))\n", 406 | " _len = _len - (_len % seq_len)\n", 407 | " pot1_data, pot2_data = pot1_data[:_len], pot2_data[:_len]\n", 408 | " \n", 409 | " pot1_sequences = torch.FloatTensor([pot1_data[i:i+seq_len] for i in range(0, len(pot1_data), seq_len)])\n", 410 | " pot2_sequences = torch.FloatTensor([pot2_data[i:i+seq_len] for i in range(0, len(pot2_data), seq_len)])\n", 411 | "\n", 412 | " self.inputs = torch.stack((pot1_sequences[:-target_windows], pot2_sequences[:-target_windows]), dim=2).to(self.device)\n", 413 | " outputs=[]\n", 414 | " for idx in range(1, len(pot1_sequences)-target_windows+1):\n", 415 | " tgt_seq = torch.stack((pot1_sequences[idx:target_windows+idx].flatten(), pot2_sequences[idx:target_windows+idx].flatten()), dim=1)\n", 416 | " outputs.append(tgt_seq)\n", 417 | " self.outputs = torch.stack(outputs).to(self.device)\n", 418 | " \n", 419 | " def __len__(self):\n", 420 | " return len(self.inputs)\n", 421 | " \n", 422 | " def __getitem__(self, i):\n", 423 | " return self.inputs[i], self.outputs[i]\n", 424 | " \n", 425 | "dataset = SensorDataset(pot1_data, pot2_data, seq_len, target_windows)\n", 426 | "dataset_loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)" 427 | ] 428 | }, 429 | { 430 | "cell_type": "markdown", 431 | "metadata": {}, 432 | "source": [ 433 | "Below we define a TCN. 
We will use an Adam optimiser with a learning rate of 0.001 and use the mean square error as loss." 434 | ] 435 | }, 436 | { 437 | "cell_type": "code", 438 | "execution_count": null, 439 | "metadata": {}, 440 | "outputs": [], 441 | "source": [ 442 | "class Chomp1d(nn.Module):\n", 443 | " \"\"\"Layer that removes trailing values to ensure causality in the TCN.\"\"\"\n", 444 | " def __init__(self, chomp_size):\n", 445 | " super(Chomp1d, self).__init__()\n", 446 | " self.chomp_size = chomp_size\n", 447 | "\n", 448 | " def forward(self, x):\n", 449 | " return x[:, :, :-self.chomp_size].contiguous()\n", 450 | "\n", 451 | "class TemporalBlock(nn.Module):\n", 452 | " \"\"\"A single temporal block in a TCN, with dilated causal convolutions and residual connections.\"\"\"\n", 453 | " def __init__(self, in_channels, out_channels, kernel_size, stride, dilation, padding, dropout=0.2):\n", 454 | " super(TemporalBlock, self).__init__()\n", 455 | " self.conv1 = nn.Conv1d(in_channels, out_channels, kernel_size, stride=stride, padding=padding, dilation=dilation)\n", 456 | " self.chomp1 = Chomp1d(padding)\n", 457 | " self.relu1 = nn.ReLU()\n", 458 | " self.dropout1 = nn.Dropout(dropout)\n", 459 | "\n", 460 | " self.conv2 = nn.Conv1d(out_channels, out_channels, kernel_size, stride=stride, padding=padding, dilation=dilation)\n", 461 | " self.chomp2 = Chomp1d(padding)\n", 462 | " self.relu2 = nn.ReLU()\n", 463 | " self.dropout2 = nn.Dropout(dropout)\n", 464 | "\n", 465 | " self.net = nn.Sequential(self.conv1, self.chomp1, self.relu1, self.dropout1,\n", 466 | " self.conv2, self.chomp2, self.relu2, self.dropout2)\n", 467 | " self.downsample = nn.Conv1d(in_channels, out_channels, 1) if in_channels != out_channels else None\n", 468 | " self.relu = nn.ReLU()\n", 469 | "\n", 470 | " def forward(self, x):\n", 471 | " out = self.net(x)\n", 472 | " res = x if self.downsample is None else self.downsample(x)\n", 473 | " return self.relu(out + res)\n", 474 | "\n", 475 | "class 
TemporalConvNet(nn.Module):\n", 476 | " \"\"\"A Temporal Convolutional Network (TCN) made up of multiple temporal blocks.\"\"\"\n", 477 | " def __init__(self, num_inputs, num_channels, kernel_size=2, dropout=0.2, upsample_factor=3):\n", 478 | " super(TemporalConvNet, self).__init__()\n", 479 | " self.upsample_factor = upsample_factor # Upsample factor as a parameter\n", 480 | " \n", 481 | " layers = []\n", 482 | " num_levels = len(num_channels)\n", 483 | " for i in range(num_levels):\n", 484 | " dilation_size = 2 ** i\n", 485 | " in_channels = num_inputs if i == 0 else num_channels[i-1]\n", 486 | " out_channels = num_channels[i]\n", 487 | " layers.append(TemporalBlock(in_channels, out_channels, kernel_size, stride=1, dilation=dilation_size,\n", 488 | " padding=(kernel_size-1) * dilation_size, dropout=dropout))\n", 489 | "\n", 490 | " self.network = nn.Sequential(*layers)\n", 491 | " \n", 492 | " # Upsample layer to increase sequence length by the upsample factor\n", 493 | " self.upsample_layer = nn.ConvTranspose1d(num_channels[-1], num_channels[-1], kernel_size=self.upsample_factor, stride=self.upsample_factor)\n", 494 | " \n", 495 | " # Output layer to map back to the input feature size\n", 496 | " self.output_layer = nn.Conv1d(num_channels[-1], num_inputs, 1) \n", 497 | "\n", 498 | " def forward(self, x):\n", 499 | " # Input shape: [batch_size, sequence_len, feature_size]\n", 500 | " x = x.transpose(1, 2) # Change shape to [batch_size, feature_size, sequence_len]\n", 501 | " y = self.network(x)\n", 502 | " \n", 503 | " # Upsample the sequence length by the upsample factor\n", 504 | " y = self.upsample_layer(y)\n", 505 | " \n", 506 | " # Map back to the original feature size\n", 507 | " y = self.output_layer(y) \n", 508 | " y = y.transpose(1, 2) # Change shape back to [batch_size, sequence_len*upsample_factor, feature_size]\n", 509 | " return y\n", 510 | "\n", 511 | "\n", 512 | "batch_size, sequence_len, feature_size = 32, 512, 2\n", 513 | "upsample_factor = 
target_windows # Define the upsample factor\n", 514 | "model = TemporalConvNet(num_inputs=feature_size, num_channels=[16, 32, 16], kernel_size=3, dropout=0.2, upsample_factor=upsample_factor)\n", 515 | "\n", 516 | "# Create a random tensor of shape [batch_size, sequence_len, feature_size]\n", 517 | "x = torch.randn(batch_size, sequence_len, feature_size)\n", 518 | "\n", 519 | "# Forward pass through the model\n", 520 | "output = model(x)\n", 521 | "\n", 522 | "print(output.shape) # Output shape should be [batch_size, sequence_len*upsample_factor, feature_size]\n" 523 | ] 524 | }, 525 | { 526 | "cell_type": "markdown", 527 | "metadata": {}, 528 | "source": [ 529 | "We can now train our model:" 530 | ] 531 | }, 532 | { 533 | "cell_type": "code", 534 | "execution_count": null, 535 | "metadata": {}, 536 | "outputs": [], 537 | "source": [ 538 | "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n", 539 | "optimizer = torch.optim.Adam(model.parameters(), lr=0.001)\n", 540 | "criterion = torch.nn.MSELoss(reduction='mean')\n", 541 | "\n", 542 | "epochs = 50\n", 543 | "\n", 544 | "print(\"Running on device: {}\".format(device))\n", 545 | "\n", 546 | "epoch_losses = np.array([])\n", 547 | "for epoch in range(1, epochs+1):\n", 548 | "\n", 549 | " print(\">> Epoch: {} <<\".format(epoch))\n", 550 | "\n", 551 | " # training loop\n", 552 | " batch_losses = np.array([])\n", 553 | " model.train()\n", 554 | "\n", 555 | " for batch_idx, (data, targets) in enumerate(tqdm(dataset_loader)):\n", 556 | " # (batch_size, seq_len, input_size)\n", 557 | " data = data.to(device=device, non_blocking=True)\n", 558 | " # (batch_size, seq_len, input_size)\n", 559 | " targets = targets.to(device=device, non_blocking=True)\n", 560 | "\n", 561 | " optimizer.zero_grad(set_to_none=True) # lower memory footprint\n", 562 | " out = model(data)\n", 563 | " loss = torch.sqrt(criterion(out, targets))\n", 564 | " batch_losses = np.append(batch_losses, loss.item())\n", 565 | " 
loss.backward()\n", 566 | " optimizer.step()\n", 567 | " \n", 568 | " epoch_losses = np.append(epoch_losses, batch_losses.mean().round(4))\n", 569 | "\n", 570 | " print(f'Loss: {epoch_losses[-1]}')" 571 | ] 572 | }, 573 | { 574 | "cell_type": "markdown", 575 | "metadata": {}, 576 | "source": [ 577 | "We can plot the loss to see how the training went:" 578 | ] 579 | }, 580 | { 581 | "cell_type": "code", 582 | "execution_count": null, 583 | "metadata": {}, 584 | "outputs": [], 585 | "source": [ 586 | "x_epochs = range(1, epochs + 1)\n", 587 | "\n", 588 | "plt.scatter(x_epochs, epoch_losses, marker='o')\n", 589 | "plt.plot(x_epochs, epoch_losses, linestyle='-')\n", 590 | "plt.xlabel('Epoch')\n", 591 | "plt.ylabel('Loss')\n", 592 | "plt.xticks(x_epochs) # Ensure x-axis has integer values for each epoch\n", 593 | "plt.title('Training Loss per Epoch')\n", 594 | "plt.show()" 595 | ] 596 | }, 597 | { 598 | "cell_type": "markdown", 599 | "metadata": {}, 600 | "source": [ 601 | "Let's make sure the model trained correctly by visualising some of the predictions." 
602 | ] 603 | }, 604 | { 605 | "cell_type": "code", 606 | "execution_count": null, 607 | "metadata": {}, 608 | "outputs": [], 609 | "source": [ 610 | "# Select random indexes for plotting\n", 611 | "num_examples = 4\n", 612 | "random_indexes = np.random.choice(len(dataset), size=num_examples, replace=False)\n", 613 | "\n", 614 | "# Calculate the number of rows for the subplots\n", 615 | "num_rows = num_examples\n", 616 | "\n", 617 | "# Set up subplots\n", 618 | "fig, axes = plt.subplots(num_rows, 2, figsize=(12, 3 * num_rows))\n", 619 | "\n", 620 | "# Loop through random indexes and plot predictions\n", 621 | "for idx, ax_row in zip(random_indexes, axes):\n", 622 | " input, target = dataset.__getitem__(idx)\n", 623 | " output = model(input.unsqueeze(0))\n", 624 | " \n", 625 | " # Plot for the first dimension in the first column\n", 626 | " ax_row[0].plot(target[:, 0].detach().cpu(), label='Target')\n", 627 | " ax_row[0].plot(output[0, :, 0].detach().cpu(), label='Predictions')\n", 628 | " ax_row[0].set_xlabel('Time')\n", 629 | " ax_row[0].set_ylabel('Value')\n", 630 | " ax_row[0].legend()\n", 631 | " ax_row[0].set_ylim(0, 3)\n", 632 | " ax_row[0].set_title(f'Figure for Index {idx} - Pot 1')\n", 633 | " \n", 634 | " # Plot for the second dimension in the second column\n", 635 | " ax_row[1].plot(target[:, 1].detach().cpu(), label='Target')\n", 636 | " ax_row[1].plot(output[0, :, 1].detach().cpu(), label='Prediction')\n", 637 | " ax_row[1].set_xlabel('Time')\n", 638 | " ax_row[1].set_ylabel('Value')\n", 639 | " ax_row[1].legend()\n", 640 | " ax_row[1].set_title(f'Figure for Index {idx} - Pot 2')\n", 641 | "\n", 642 | "plt.tight_layout()\n", 643 | "plt.show()" 644 | ] 645 | }, 646 | { 647 | "cell_type": "markdown", 648 | "metadata": {}, 649 | "source": [ 650 | "When you're ready, save the model so that we can export it into Bela." 
651 | ] 652 | }, 653 | { 654 | "cell_type": "code", 655 | "execution_count": null, 656 | "metadata": {}, 657 | "outputs": [], 658 | "source": [ 659 | "model.to(device='cpu')\n", 660 | "model.eval()\n", 661 | "script = torch.jit.script(model)\n", 662 | "path = \"bela-code/inference/model.jit\"\n", 663 | "script.save(path)" 664 | ] 665 | }, 666 | { 667 | "cell_type": "code", 668 | "execution_count": null, 669 | "metadata": {}, 670 | "outputs": [], 671 | "source": [ 672 | "torch.jit.load(path) # check model is properly saved" 673 | ] 674 | }, 675 | { 676 | "cell_type": "markdown", 677 | "metadata": {}, 678 | "source": [ 679 | "## 4 - deploy and run\n", 680 | "\n", 681 | "The cell below will cross-compile and deploy the project to Bela." 682 | ] 683 | }, 684 | { 685 | "cell_type": "code", 686 | "execution_count": null, 687 | "metadata": {}, 688 | "outputs": [], 689 | "source": [ 690 | "!cd bela-code/inference && cmake -S . -B build -DPROJECT_NAME=inference -DCMAKE_TOOLCHAIN_FILE=/sysroot/root/Bela/Toolchain.cmake\n", 691 | "!cd bela-code/inference && cmake --build build -j" 692 | ] 693 | }, 694 | { 695 | "cell_type": "code", 696 | "execution_count": null, 697 | "metadata": {}, 698 | "outputs": [], 699 | "source": [ 700 | "!rsync -rvL --timeout 10 bela-code/inference/build/inference root@$BBB_HOSTNAME:Bela/projects/inference/\n", 701 | "!rsync -rvL --timeout 10 bela-code/inference/ --exclude=\"build\" root@$BBB_HOSTNAME:/root/Bela/projects/inference/" 702 | ] 703 | }, 704 | { 705 | "cell_type": "markdown", 706 | "metadata": {}, 707 | "source": [ 708 | "Once deployed, you can run it from the Bela terminal (which you can access from your regular terminal typing `ssh root@bela.local`) by typing:\n", 709 | "```bash\n", 710 | "cd Bela/projects/inference\n", 711 | "./inference -m model.jit\n", 712 | "```" 713 | ] 714 | } 715 | ], 716 | "metadata": { 717 | "kernelspec": { 718 | "display_name": "Python 3 (ipykernel)", 719 | "language": "python", 720 | "name": "python3" 721 | }, 
722 | "language_info": { 723 | "codemirror_mode": { 724 | "name": "ipython", 725 | "version": 3 726 | }, 727 | "file_extension": ".py", 728 | "mimetype": "text/x-python", 729 | "name": "python", 730 | "nbconvert_exporter": "python", 731 | "pygments_lexer": "ipython3", 732 | "version": "3.9.19" 733 | } 734 | }, 735 | "nbformat": 4, 736 | "nbformat_minor": 4 737 | } 738 | -------------------------------------------------------------------------------- /useful-commands.txt: -------------------------------------------------------------------------------- 1 | # remove the container 2 | docker rm bela-tutorial 3 | 4 | # build the image 5 | docker build -t xc-pybela-tutorial . 6 | 7 | # create a container 8 | docker run -it --name bela-tutorial -e BBB_HOSTNAME=192.168.7.2 -p 8889:8889 xc-pybela-tutorial 9 | 10 | # run notebook inside the container 11 | jupyter notebook --ip=* --port=8889 --allow-root --no-browser 12 | 13 | # reopen container 14 | docker start -ia bela-tutorial --------------------------------------------------------------------------------