├── memo_js
│   ├── .nvmrc
│   ├── rustfmt.toml
│   ├── .npmignore
│   ├── test
│   │   └── tsconfig.json
│   ├── tsconfig.json
│   ├── Cargo.toml
│   ├── script
│   │   └── build
│   ├── package.json
│   ├── webpack.config.js
│   └── src
│       ├── support.ts
│       └── index.ts
├── xray_wasm
│   ├── .gitignore
│   ├── lib
│   │   ├── main.js
│   │   └── support.js
│   ├── Cargo.toml
│   ├── package.json
│   ├── script
│   │   ├── test
│   │   └── build
│   └── test
│       └── tests.js
├── rust-toolchain
├── script
│   ├── xray
│   ├── xray_debug
│   ├── bench
│   ├── cibuild
│   ├── test
│   └── build
├── memo_core
│   ├── rustfmt.toml
│   ├── src
│   │   ├── serialization
│   │   │   ├── mod.rs
│   │   │   └── schema.fbs
│   │   └── operation_queue.rs
│   ├── script
│   │   └── compile_flatbuffers
│   ├── Cargo.toml
│   └── README.md
├── xray_electron
│   ├── .gitignore
│   ├── index.html
│   ├── package.json
│   ├── README.md
│   └── lib
│       ├── render_process
│       │   └── main.js
│       ├── shared
│       │   └── xray_client.js
│       └── main_process
│           └── main.js
├── docs
│   ├── images
│   │   ├── crdt.png
│   │   ├── rpc.png
│   │   ├── architecture.png
│   │   ├── window_protocol.png
│   │   └── client_server_components.png
│   ├── updates
│   │   ├── 2018_04_09.md
│   │   ├── 2018_03_19.md
│   │   ├── 2018_04_30.md
│   │   ├── 2018_08_28.md
│   │   ├── 2018_04_16.md
│   │   ├── 2018_08_21.md
│   │   ├── 2018_05_07.md
│   │   ├── 2018_04_23.md
│   │   ├── 2018_05_28.md
│   │   ├── 2018_05_14.md
│   │   ├── 2018_07_23.md
│   │   ├── 2018_10_02.md
│   │   ├── 2018_07_16.md
│   │   ├── 2018_07_10.md
│   │   └── 2018_09_14.md
│   └── architecture
│       ├── 003_memo_epochs.md
│       └── 002_shared_workspaces.md
├── xray_core
│   ├── src
│   │   ├── never.rs
│   │   ├── wasm_logging.rs
│   │   ├── rpc
│   │   │   └── messages.rs
│   │   ├── stream_ext.rs
│   │   ├── lib.rs
│   │   ├── movement.rs
│   │   ├── cross_platform.rs
│   │   └── file_finder.rs
│   ├── README.md
│   ├── Cargo.toml
│   └── benches
│       └── bench.rs
├── Cargo.toml
├── .gitignore
├── xray_cli
│   ├── Cargo.toml
│   ├── README.md
│   └── src
│       └── main.rs
├── xray_server
│   ├── README.md
│   ├── Cargo.toml
│   └── src
│       ├── messages.rs
│       ├── main.rs
│       └── json_lines_codec.rs
├── xray_ui
│   ├── README.md
│   ├── lib
│   │   ├── theme_provider.js
│   │   ├── debounce.js
│   │   ├── index.js
│   │   ├── modal.js
│   │   ├── workspace.js
│   │   ├── view.js
│   │   ├── view_registry.js
│   │   ├── text_editor
│   │   │   └── shaders.js
│   │   ├── file_finder.js
│   │   ├── app.js
│   │   └── action_dispatcher.js
│   ├── package.json
│   └── test
│       ├── helpers
│       │   └── component_helpers.js
│       ├── file_finder.test.js
│       ├── modal.test.js
│       ├── view.test.js
│       ├── view_registry.test.js
│       └── action_dispatcher.test.js
├── xray_browser
│   ├── src
│   │   ├── client.js
│   │   ├── ui.js
│   │   └── worker.js
│   ├── package.json
│   ├── script
│   │   ├── build
│   │   └── server
│   ├── README.md
│   └── static
│       └── index.html
├── .travis.yml
├── LICENSE
└── CONTRIBUTING.md

/memo_js/.nvmrc:
--------------------------------------------------------------------------------
1 | 11.9.0
2 |
--------------------------------------------------------------------------------
/xray_wasm/.gitignore:
--------------------------------------------------------------------------------
1 | .cargo
2 |
--------------------------------------------------------------------------------
/rust-toolchain:
--------------------------------------------------------------------------------
1 | nightly-2019-01-26
2 |
--------------------------------------------------------------------------------
/script/xray:
--------------------------------------------------------------------------------
1 | ../target/release/xray_cli
--------------------------------------------------------------------------------
/memo_core/rustfmt.toml:
--------------------------------------------------------------------------------
1 | edition = "2018"
2 |
--------------------------------------------------------------------------------
/memo_js/rustfmt.toml:
--------------------------------------------------------------------------------
1 | edition = "2018"
2 |
--------------------------------------------------------------------------------
/script/xray_debug:
--------------------------------------------------------------------------------
1 | ../target/debug/xray_cli
--------------------------------------------------------------------------------
/xray_electron/.gitignore:
--------------------------------------------------------------------------------
1 | node_modules
2 | out
3 |
--------------------------------------------------------------------------------
/script/bench:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | set -e
4 |
5 | cd xray_core
6 | cargo bench "$@"
7 |
--------------------------------------------------------------------------------
/docs/images/crdt.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-archive/xray/HEAD/docs/images/crdt.png
--------------------------------------------------------------------------------
/docs/images/rpc.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-archive/xray/HEAD/docs/images/rpc.png
--------------------------------------------------------------------------------
/xray_core/src/never.rs:
--------------------------------------------------------------------------------
1 | #[derive(Debug, Serialize, Deserialize)]
2 | pub enum Never {}
3 |
--------------------------------------------------------------------------------
/script/cibuild:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | set -e
4 |
5 | script/build
6 | script/test
7 | script/bench
8 |
--------------------------------------------------------------------------------
/memo_core/src/serialization/mod.rs:
--------------------------------------------------------------------------------
1 | mod schema_generated;
2 |
3 | pub use self::schema_generated::*;
4 |
--------------------------------------------------------------------------------
/docs/images/architecture.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-archive/xray/HEAD/docs/images/architecture.png
--------------------------------------------------------------------------------
/xray_wasm/lib/main.js:
--------------------------------------------------------------------------------
1 | export let xray = import("../dist/xray_wasm");
2 | export { JsSink } from "./support";
3 |
--------------------------------------------------------------------------------
/docs/images/window_protocol.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-archive/xray/HEAD/docs/images/window_protocol.png
--------------------------------------------------------------------------------
/memo_js/.npmignore:
--------------------------------------------------------------------------------
1 | script
2 | test
3 | Cargo.toml
4 | README.md
5 | node_modules
6 | target
7 | webpack.config.js
8 |
--------------------------------------------------------------------------------
/docs/images/client_server_components.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/atom-archive/xray/HEAD/docs/images/client_server_components.png
--------------------------------------------------------------------------------
/Cargo.toml:
--------------------------------------------------------------------------------
1 | [workspace]
2 | members = [
3 |   "memo_core",
4 |   "memo_js",
5 |   "xray_core",
6 |   "xray_server",
7 |   "xray_cli",
8 |   "xray_wasm",
9 | ]
10 |
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | **/node_modules
2 | **/target/
3 | **/*.rs.bk
4 | **/.DS_Store
5 | **/.cargo
6 | Icon*
7 | .tags*
8 | xray_wasm/dist
9 | xray_browser/dist
10 | memo_js/dist
11 | memo_js/test/dist
12 |
--------------------------------------------------------------------------------
/script/test:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | set -e
4 |
5 | cd xray_core; cargo test; cd -
6 | cd xray_wasm; script/test; cd -
7 | cd xray_ui; yarn test; cd -
8 | cd memo_core; cargo test; cd -
9 | cd memo_js; yarn test; cd -
10 |
--------------------------------------------------------------------------------
/xray_cli/Cargo.toml:
--------------------------------------------------------------------------------
1 | [package]
2 | name = "xray_cli"
3 | version = "0.1.0"
4 | authors = ["Nathan Sobo "]
5 |
6 | [dependencies]
7 | docopt = "0.8"
8 | serde = "1.0"
9 | serde_derive = "1.0"
10 | serde_json = "1.0"
11 |
--------------------------------------------------------------------------------
/xray_cli/README.md:
--------------------------------------------------------------------------------
1 | # Xray CLI
2 |
3 | This crate is an executable that provides a command-line interface for Xray. It can spawn `xray_server` in headless mode or launch the `xray_electron` application. It sends commands to `xray_server` over a domain socket.
4 |
--------------------------------------------------------------------------------
/memo_core/script/compile_flatbuffers:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | flatc --rust -o src/serialization src/serialization/schema.fbs
4 |
5 | # Workaround for incorrect code generation by flatc
6 | echo "use flatbuffers::EndianScalar;" >> src/serialization/schema_generated.rs
7 |
--------------------------------------------------------------------------------
/script/build:
--------------------------------------------------------------------------------
1 | #!/bin/bash
2 |
3 | set -e
4 |
5 | cd xray_electron; yarn install --check-files; cd -
6 | cd xray_ui; yarn install; cd -
7 | cd xray_cli; cargo build "$@"; cd -
8 | cd xray_server; cargo build "$@"; cd -
9 | cd xray_browser; script/build; cd -
10 | cd memo_js; yarn install && script/build; cd -
11 |
--------------------------------------------------------------------------------
/memo_js/test/tsconfig.json:
--------------------------------------------------------------------------------
1 | {
2 |   "compilerOptions": {
3 |     "outDir": "./dist/",
4 |     "noImplicitAny": true,
5 |     "strictNullChecks": true,
6 |     "module": "esnext",
7 |     "declarationMap": true,
8 |     "declaration": true,
9 |     "lib": ["es2015", "esnext"],
10 |     "types": ["mocha", "node"]
11 |   }
12 | }
13 |
--------------------------------------------------------------------------------
/xray_server/README.md:
--------------------------------------------------------------------------------
1 | # Xray Server
2 |
3 | This crate is an executable that runs as a server process. It can be run in a headless mode in order to host workspaces for remote clients, and it is also spawned by `xray_electron`, which provides the application with a user interface and communicates with the server over a domain socket.
4 |
--------------------------------------------------------------------------------
/xray_ui/README.md:
--------------------------------------------------------------------------------
1 | # Xray UI
2 |
3 | This folder houses Xray's user interface, which is implemented in JavaScript and designed to run in a web environment. It is depended upon by [`xray_electron`](../xray_electron), which presents Xray as a desktop application, and [`xray_browser`](../xray_browser), which presents Xray in the browser.
4 |
--------------------------------------------------------------------------------
/memo_js/tsconfig.json:
--------------------------------------------------------------------------------
1 | {
2 |   "include": ["src/**/*.ts"],
3 |   "compilerOptions": {
4 |     "outDir": "./dist/",
5 |     "noImplicitAny": true,
6 |     "strictNullChecks": true,
7 |     "module": "esnext",
8 |     "declarationMap": true,
9 |     "declaration": true,
10 |     "lib": ["es2015", "esnext"],
11 |     "types": ["node"]
12 |   }
13 | }
14 |
--------------------------------------------------------------------------------
/xray_core/README.md:
--------------------------------------------------------------------------------
1 | # Xray Core
2 |
3 | This directory contains the native core of the application as a pure Rust library that is agnostic to the details of the underlying platform. It is a dependency of the sibling `xray_server` crate, which provides it with network and file I/O as well as the ability to spawn futures in the foreground and on background threads.
4 |
--------------------------------------------------------------------------------
/xray_browser/src/client.js:
--------------------------------------------------------------------------------
1 | export default class XrayClient {
2 |   constructor(worker) {
3 |     this.worker = worker;
4 |   }
5 |
6 |   onMessage(callback) {
7 |     this.worker.addEventListener("message", message => {
8 |       callback(message.data);
9 |     });
10 |   }
11 |
12 |   sendMessage(message) {
13 |     this.worker.postMessage(message);
14 |   }
15 | }
16 |
--------------------------------------------------------------------------------
/xray_browser/package.json:
--------------------------------------------------------------------------------
1 | {
2 |   "name": "xray_browser",
3 |   "license": "MIT",
4 |   "dependencies": {
5 |     "xray_wasm": "../xray_wasm",
6 |     "xray_ui": "../xray_ui"
7 |   },
8 |   "devDependencies": {
9 |     "express": "^4.16.3",
10 |     "webpack": "^4.6.0",
11 |     "webpack-cli": "^2.0.15",
12 |     "webpack-dev-middleware": "^3.1.2",
13 |     "ws": "^5.1.1"
14 |   }
15 | }
16 |
--------------------------------------------------------------------------------
/xray_wasm/Cargo.toml:
--------------------------------------------------------------------------------
1 | [package]
2 | name = "xray_wasm"
3 | version = "0.1.0"
4 | authors = ["Antonio Scandurra "]
5 |
6 | [lib]
7 | crate-type = ["cdylib"]
8 |
9 | [dependencies]
10 | bytes = "0.4"
11 | futures = "0.1"
12 | serde = "1.0"
13 | serde_derive = "1.0"
14 | serde_json = "1.0"
15 | wasm-bindgen = "0.2"
16 | xray_core = { path = "../xray_core" }
17 |
18 | [features]
19 | js-tests = []
20 |
--------------------------------------------------------------------------------
/xray_browser/script/build:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | set -e
4 |
5 | rm -rf dist
6 | mkdir -p dist
7 | cd ../xray_wasm && script/build && cd -
8 | yarn install --check-files
9 | node_modules/.bin/webpack --target=web --mode=development src/ui.js --output-filename=ui.js
10 | node_modules/.bin/webpack --target=webworker --mode=development src/worker.js --output-filename=worker.js
11 | cp static/index.html dist/index.html
12 |
--------------------------------------------------------------------------------
/xray_ui/lib/theme_provider.js:
--------------------------------------------------------------------------------
1 | const React = require("react");
2 | const PropTypes = require("prop-types");
3 |
4 | class ThemeProvider extends React.Component {
5 |   render() {
6 |     return this.props.children
7 |   }
8 |
9 |   getChildContext() {
10 |     return {
11 |       theme: this.props.theme
12 |     };
13 |   }
14 | }
15 |
16 | ThemeProvider.childContextTypes = {
17 |   theme: PropTypes.object
18 | };
19 |
20 | module.exports = ThemeProvider;
21 |
--------------------------------------------------------------------------------
/xray_ui/lib/debounce.js:
--------------------------------------------------------------------------------
1 | module.exports =
2 | function debounce (fn, wait) {
3 |   let timestamp, timeout
4 |
5 |   function later () {
6 |     const last = Date.now() - timestamp
7 |     if (last < wait && last >= 0) {
8 |       timeout = setTimeout(later, wait - last)
9 |     } else {
10 |       timeout = null
11 |       fn()
12 |     }
13 |   }
14 |
15 |   return function () {
16 |     timestamp = Date.now()
17 |     if (!timeout) timeout = setTimeout(later, wait)
18 |   }
19 | }
20 |
--------------------------------------------------------------------------------
/xray_electron/index.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
16 |
17 |
18 |
19 |
22 |
23 |
24 |
--------------------------------------------------------------------------------
/xray_wasm/lib/support.js:
--------------------------------------------------------------------------------
1 | export class JsSink {
2 |   constructor({ send, close }) {
3 |     if (send) this._send = send;
4 |     if (close) this._close = close;
5 |   }
6 |
7 |   send(message) {
8 |     if (this._send) this._send(message);
9 |   }
10 |
11 |   close() {
12 |     if (this._close) this._close();
13 |   }
14 | }
15 |
16 | const promise = Promise.resolve();
17 | export function notifyOnNextTick(notifyHandle) {
18 |   promise.then(() => notifyHandle.notify_from_js_on_next_tick());
19 | }
20 |
--------------------------------------------------------------------------------
/xray_electron/package.json:
--------------------------------------------------------------------------------
1 | {
2 |   "name": "Xray",
3 |   "version": "0.0.0",
4 |   "main": "./lib/main_process/main.js",
5 |   "license": "MIT",
6 |   "scripts": {
7 |     "start": "electron .",
8 |     "test": "electron-mocha --ui=tdd --renderer test/**/*.test.js",
9 |     "itest": "electron-mocha --ui=tdd --renderer --interactive test/**/*.test.js"
10 |   },
11 |   "dependencies": {
12 |     "xray_ui": "../xray_ui"
13 |   },
14 |   "devDependencies": {
15 |     "electron": "2.0.0-beta.7"
16 |   }
17 | }
18 |
--------------------------------------------------------------------------------
/xray_core/src/wasm_logging.rs:
--------------------------------------------------------------------------------
1 | use wasm_bindgen::prelude::*;
2 |
3 | #[wasm_bindgen(js_namespace = console)]
4 | extern "C" {
5 |     pub fn log(s: &str);
6 |     pub fn error(s: &str);
7 | }
8 |
9 | #[macro_export]
10 | macro_rules! println {
11 |     ($($arg:tt)*) => ($crate::wasm_logging::log(&::std::fmt::format(format_args!($($arg)*))));
12 | }
13 |
14 | #[macro_export]
15 | macro_rules! eprintln {
16 |     ($($arg:tt)*) => ($crate::wasm_logging::error(&::std::fmt::format(format_args!($($arg)*))));
17 | }
18 |
--------------------------------------------------------------------------------
/xray_wasm/package.json:
--------------------------------------------------------------------------------
1 | {
2 |   "name": "xray_wasm",
3 |   "version": "0.0.0",
4 |   "description": "Xray server packaged for use in JavaScript",
5 |   "main": "lib/main.js",
6 |   "scripts": {
7 |     "test": "script/test"
8 |   },
9 |   "repository": {
10 |     "type": "git",
11 |     "url": "git+https://github.com/atom/xray.git"
12 |   },
13 |   "license": "MIT",
14 |   "devDependencies": {
15 |     "mocha": "^5.1.1",
16 |     "source-map-support": "^0.5.4",
17 |     "webpack": "^4.6.0",
18 |     "webpack-cli": "^2.0.14"
19 |   }
20 | }
21 |
--------------------------------------------------------------------------------
/xray_wasm/script/test:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | LOCAL_CRATE_PATH=./.cargo
4 | PATH=$LOCAL_CRATE_PATH/bin:$PATH
5 |
6 | set -e
7 |
8 | rm -rf dist
9 | mkdir -p dist
10 | cargo build --release --target wasm32-unknown-unknown --features js-tests
11 | wasm-bindgen ../target/wasm32-unknown-unknown/release/xray_wasm.wasm --out-dir dist
12 | yarn install
13 | node_modules/.bin/webpack --target=node --mode=development --devtool="source-map" test/tests.js
14 | node_modules/.bin/mocha --require source-map-support/register --ui=tdd dist/main.js
15 |
--------------------------------------------------------------------------------
/xray_server/Cargo.toml:
--------------------------------------------------------------------------------
1 | [package]
2 | name = "xray_server"
3 | version = "0.1.0"
4 | authors = ["Nathan Sobo "]
5 |
6 | [dependencies]
7 | bytes = "0.4"
8 | futures = "0.1"
9 | futures-cpupool = "0.1"
10 | ignore = { git = "https://github.com/atom/ripgrep", branch = "include_ignored" }
11 | parking_lot = "0.5"
12 | rand = "0.4"
13 | serde = "1.0"
14 | serde_derive = "1.0"
15 | serde_json = "1.0"
16 | tokio-io = "0.1"
17 | tokio-core = "0.1"
18 | tokio-process = "0.1"
19 | tokio-uds = "0.1"
20 | xray_core = { path = "../xray_core" }
21 |
--------------------------------------------------------------------------------
/memo_core/Cargo.toml:
--------------------------------------------------------------------------------
1 | [package]
2 | name = "memo_core"
3 | version = "0.1.0"
4 | authors = ["Antonio Scandurra ", "Nathan Sobo "]
5 | edition = "2018"
6 |
7 | [dependencies]
8 | diffs = "0.3"
9 | lazy_static = "1.0"
10 | flatbuffers = "0.5"
11 | futures = "0.1"
12 | serde = "1.0"
13 | serde_derive = "1.0"
14 | smallvec = "0.6.1"
15 | uuid = { version = "0.7", features = ["serde"] }
16 |
17 | [dev-dependencies]
18 | futures-cpupool = "0.1"
19 | rand = "0.3"
20 | uuid = { version = "0.7", features = ["serde", "u128"] }
21 |
--------------------------------------------------------------------------------
/memo_js/Cargo.toml:
--------------------------------------------------------------------------------
1 | [package]
2 | name = "memo_js"
3 | version = "0.1.0"
4 | authors = ["Antonio Scandurra ", "Nathan Sobo "]
5 | edition = "2018"
6 |
7 | [lib]
8 | crate-type = ["cdylib"]
9 |
10 | [dependencies]
11 | bincode = "1.0"
12 | console_error_panic_hook = "0.1"
13 | futures = "0.1"
14 | hex = "0.3"
15 | js-sys = "0.3"
16 | memo_core = { path = "../memo_core" }
17 | serde = "1.0"
18 | serde_derive = "1.0"
19 | wasm-bindgen-futures = "0.3"
20 |
21 | [dependencies.wasm-bindgen]
22 | version = "0.2.33"
23 | features = ["serde-serialize"]
24 |
--------------------------------------------------------------------------------
/.travis.yml:
--------------------------------------------------------------------------------
1 | language: rust
2 |
3 | before_install:
4 |   - curl -o- -L https://yarnpkg.com/install.sh | bash
5 |   - export PATH="$HOME/.yarn/bin:$PATH"
6 |   - curl -o- https://raw.githubusercontent.com/creationix/nvm/v0.33.11/install.sh | bash
7 |   - nvm install v11
8 |
9 |   # Create a virtual display for electron
10 |   - export DISPLAY=':99.0'
11 |   - Xvfb :99 -screen 0 1024x768x24 > /dev/null 2>&1 &
12 |
13 | script: script/cibuild
14 |
15 | cache:
16 |   cargo: true
17 |   yarn: true
18 |
19 | branches:
20 |   only:
21 |     - master
22 |
23 | notifications:
24 |   email:
25 |     on_success: never
26 |     on_failure: change
27 |
--------------------------------------------------------------------------------
/xray_core/Cargo.toml:
--------------------------------------------------------------------------------
1 | [package]
2 | name = "xray_core"
3 | version = "0.1.0"
4 | authors = ["Nathan Sobo "]
5 | license = "MIT"
6 |
7 | [dependencies]
8 | bincode = "1.0"
9 | bytes = { version = "0.4", features = ["serde"] }
10 | futures = "0.1"
11 | lazy_static = "1.0"
12 | parking_lot = "0.5"
13 | seahash = "3.0"
14 | serde = "1.0"
15 | serde_derive = "1.0"
16 | serde_json = "1.0"
17 | smallvec = "0.6.0"
18 |
19 | [target.'cfg(target_arch = "wasm32")'.dependencies]
20 | wasm-bindgen = "0.2"
21 |
22 | [dev-dependencies]
23 | rand = "0.3"
24 | futures-cpupool = "0.1"
25 | tokio-core = "0.1"
26 | tokio-timer = "0.2"
27 | criterion = "0.2"
28 |
29 | [[bench]]
30 | name = "bench"
31 | harness = false
32 |
--------------------------------------------------------------------------------
/xray_electron/README.md:
--------------------------------------------------------------------------------
1 | # Xray Electron Shell
2 |
3 | This is the front-end of the desktop application. It spawns an instance of `xray_server`, where the majority of application logic resides, and communicates with it over a domain socket.
4 |
5 | ## Building and running
6 |
7 | This assumes `xray_electron` is cloned as part of the Xray repository and that all of its sibling packages are next to it. Also, make sure you have installed the required [system dependencies](../CONTRIBUTING.md#install-system-dependencies) before proceeding.
8 |
9 | ```sh
10 | # Move to this subdirectory of the repository:
11 | cd xray_electron
12 |
13 | # Install and build dependencies:
14 | yarn install
15 |
16 | # Launch Electron:
17 | yarn start
18 | ```
19 |
--------------------------------------------------------------------------------
/xray_wasm/script/build:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | LOCAL_CRATE_PATH=./.cargo
4 | PATH=$LOCAL_CRATE_PATH/bin:$PATH
5 | WASM_BINDGEN_VERSION=0.2.33
6 |
7 | setup_wasm_bindgen() {
8 |   if (command -v wasm-bindgen) && $(wasm-bindgen --version | grep --silent $WASM_BINDGEN_VERSION); then
9 |     echo 'Using existing installation of wasm-bindgen'
10 |   else
11 |     cargo install --force wasm-bindgen-cli --version $WASM_BINDGEN_VERSION --root $LOCAL_CRATE_PATH
12 |   fi
13 | }
14 |
15 | rustup target add wasm32-unknown-unknown
16 | setup_wasm_bindgen
17 | mkdir -p dist
18 | CARGO_INCREMENTAL=0 RUSTFLAGS="-C debuginfo=0 -C opt-level=s -C lto -C panic=abort" cargo build --release --target wasm32-unknown-unknown
19 | wasm-bindgen ../target/wasm32-unknown-unknown/release/xray_wasm.wasm --out-dir dist
20 |
--------------------------------------------------------------------------------
/xray_ui/lib/index.js:
--------------------------------------------------------------------------------
1 | const FileFinder = require("./file_finder");
2 | const ViewRegistry = require("./view_registry");
3 | const Workspace = require("./workspace");
4 | const TextEditorView = require("./text_editor/text_editor");
5 |
6 | exports.buildViewRegistry = function buildViewRegistry(client) {
7 |   const viewRegistry = new ViewRegistry({
8 |     onAction: action => {
9 |       action.type = "Action";
10 |       client.sendMessage(action);
11 |     }
12 |   });
13 |   viewRegistry.addComponent("Workspace", Workspace);
14 |   viewRegistry.addComponent("FileFinder", FileFinder);
15 |   viewRegistry.addComponent("BufferView", TextEditorView);
16 |   return viewRegistry;
17 | };
18 |
19 | exports.App = require("./app");
20 | exports.React = require("react");
21 | exports.ReactDOM = require("react-dom");
22 |
--------------------------------------------------------------------------------
/xray_core/src/rpc/messages.rs:
--------------------------------------------------------------------------------
1 | use super::Error;
2 | use bytes::Bytes;
3 | use std::collections::{HashMap, HashSet};
4 |
5 | pub type RequestId = usize;
6 | pub type ServiceId = usize;
7 |
8 | #[derive(Serialize, Deserialize)]
9 | pub enum MessageToClient {
10 |     Update {
11 |         insertions: HashMap<ServiceId, Bytes>,
12 |         updates: HashMap<ServiceId, Vec<Bytes>>,
13 |         removals: HashSet<ServiceId>,
14 |         responses: HashMap<ServiceId, Vec<(RequestId, Response)>>,
15 |     },
16 | }
17 |
18 | pub type Response = Result<Bytes, Error>;
19 |
20 | #[derive(Debug, Serialize, Deserialize)]
21 | pub enum MessageToServer {
22 |     Request {
23 |         service_id: ServiceId,
24 |         request_id: RequestId,
25 |         payload: Bytes,
26 |     },
27 |     DroppedService(ServiceId),
28 | }
29 |
--------------------------------------------------------------------------------
/memo_js/script/build:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env bash
2 |
3 | set -e
4 |
5 | LOCAL_CRATE_PATH=./.cargo
6 | PATH=$LOCAL_CRATE_PATH/bin:$PATH
7 | WASM_BINDGEN_VERSION=0.2.33
8 |
9 | setup_wasm_bindgen() {
10 |   if (command -v wasm-bindgen) && $(wasm-bindgen --version | grep --silent $WASM_BINDGEN_VERSION); then
11 |     echo 'Using existing installation of wasm-bindgen'
12 |   else
13 |     cargo install --force wasm-bindgen-cli --version $WASM_BINDGEN_VERSION --root $LOCAL_CRATE_PATH
14 |   fi
15 | }
16 |
17 | rustup target add wasm32-unknown-unknown
18 | setup_wasm_bindgen
19 | rm -rf dist
20 | mkdir -p dist
21 | CARGO_INCREMENTAL=0 RUSTFLAGS="-C debuginfo=0 -C opt-level=s -C lto -C panic=abort" cargo build --release --target wasm32-unknown-unknown
22 | wasm-bindgen ../target/wasm32-unknown-unknown/release/memo_js.wasm --out-dir dist
23 | yarn tsc
24 | yarn webpack --mode=production
25 |
--------------------------------------------------------------------------------
/memo_js/package.json:
--------------------------------------------------------------------------------
1 | {
2 |   "name": "@atom/memo",
3 |   "version": "0.21.0",
4 |   "main": "dist/index.node.js",
5 |   "browser": "dist/index.js",
6 |   "types": "dist/index.d.ts",
7 |   "scripts": {
8 |     "prepublishOnly": "script/build",
9 |     "test": "webpack --env.test && mocha --ui=tdd --require=source-map-support/register test/dist/tests.js",
10 |     "tsc": "tsc",
11 |     "webpack": "webpack"
12 |   },
13 |   "license": "MIT",
14 |   "devDependencies": {
15 |     "@types/mocha": "^5.2.5",
16 |     "@types/node": "^10.11.4",
17 |     "@types/uuid": "^3.4.4",
18 |     "@types/uuid-parse": "^1.0.0",
19 |     "mocha": "^5.2.0",
20 |     "source-map-support": "^0.5.9",
21 |     "ts-loader": "^5.2.0",
22 |     "typescript": "^3.0.3",
23 |     "uuid": "^3.3.2",
24 |     "uuid-parse": "^1.0.0",
25 |     "webpack": "^4.20.2",
26 |     "webpack-cli": "^3.1.1"
27 |   }
28 | }
29 |
--------------------------------------------------------------------------------
/xray_ui/package.json:
--------------------------------------------------------------------------------
1 | {
2 |   "name": "xray_ui",
3 |   "version": "0.0.0",
4 |   "main": "lib/index.js",
5 |   "license": "MIT",
6 |   "scripts": {
7 |     "test": "electron-mocha --ui=tdd --renderer test/**/*.test.js",
8 |     "itest": "electron-mocha --ui=tdd --renderer --interactive test/**/*.test.js"
9 |   },
10 |   "dependencies": {
11 |     "prop-types": "^15.6.0",
12 |     "react": "^16.3.0",
13 |     "react-autosize-textarea": "^3.0.3",
14 |     "react-component-octicons": "^1.6.0",
15 |     "react-dom": "^16.3.0",
16 |     "styletron-engine-atomic": "1.0.4",
17 |     "styletron-react": "4.0.3"
18 |   },
19 |   "resolutions": {
20 |     "styletron-react/styletron-react-core": "1.0.0"
21 |   },
22 |   "devDependencies": {
23 |     "electron": "2.0.0-beta.7",
24 |     "electron-mocha": "^6.0.1",
25 |     "enzyme": "^3.3.0",
26 |     "enzyme-adapter-react-16": "^1.1.1"
27 |   }
28 | }
29 |
--------------------------------------------------------------------------------
/xray_ui/test/helpers/component_helpers.js:
--------------------------------------------------------------------------------
1 | const enzyme = require("enzyme");
2 | const Adapter = require("enzyme-adapter-react-16");
3 |
4 | enzyme.configure({ adapter: new Adapter() });
5 |
6 | module.exports = {
7 |   shallow(node, options) {
8 |     return enzyme.shallow(node, addStyletronToContext(options));
9 |   },
10 |
11 |   mount(node, options) {
12 |     return enzyme.mount(node, addStyletronToContext(options));
13 |   },
14 |
15 |   setProps(wrapper, props) {
16 |     return new Promise(resolve => wrapper.setProps(props, resolve));
17 |   }
18 | };
19 |
20 | function addStyletronToContext(options = {}) {
21 |   options.context = Object.assign(
22 |     { styletron: { renderStyle() {} } },
23 |     options.context
24 |   );
25 |   options.childContextTypes = Object.assign(
26 |     { styletron: function() {} },
27 |     options.childContextTypes
28 |   );
29 |   return options;
30 | }
31 |
--------------------------------------------------------------------------------
/xray_ui/lib/modal.js:
--------------------------------------------------------------------------------
1 | const React = require("react");
2 | const ReactDOM = require("react-dom");
3 | const { styled } = require("styletron-react");
4 | const $ = React.createElement;
5 |
6 | const Root = styled("div", {
7 |   position: "absolute",
8 |   top: 0,
9 |   left: 0,
10 |   right: 0,
11 |   width: "min-content",
12 |   margin: "auto",
13 |   outline: "none"
14 | });
15 |
16 | module.exports = class Modal extends React.Component {
17 |   render() {
18 |     return $(Root, { tabIndex: -1 }, this.props.children);
19 |   }
20 |
21 |   componentDidMount() {
22 |     this.previouslyFocusedElement = document.activeElement;
23 |   }
24 |
25 |   componentWillUnmount() {
26 |     const element = ReactDOM.findDOMNode(this);
27 |     if (element.contains(document.activeElement)) {
28 |       this.previouslyFocusedElement.focus();
29 |       this.previouslyFocusedElement = null
30 |     }
31 |   }
32 | };
33 |
--------------------------------------------------------------------------------
/xray_browser/README.md:
--------------------------------------------------------------------------------
1 | # Xray Browser
2 |
3 | This directory packages Xray for use in a web browser. Because browsers don't provide access to the underlying system, when running in a browser, Xray depends on being able to connect to a shared workspace on a remote instance of the `xray_server` executable. This directory contains a [development web server](./script/server) that serves a browser-compatible user interface and proxies connections to `xray_server` over WebSockets.
4 |
5 | Assuming you have built Xray with `script/build --release` in the root of this repo, you can present a web-based UI for any Xray instance as follows.
6 | 7 | * Start an instance of `xray_server` listening for incoming connections on port 8080: 8 | ```sh 9 | # Run in the root of the repository (--headless is optional) 10 | script/xray --listen=8080 --headless your_project_dir 11 | ``` 12 | * Start the development web server: 13 | ```sh 14 | xray_browser/script/server 15 | ``` 16 | -------------------------------------------------------------------------------- /xray_browser/src/ui.js: -------------------------------------------------------------------------------- 1 | import { React, ReactDOM, App, buildViewRegistry } from "xray_ui"; 2 | import XrayClient from "./client"; 3 | const $ = React.createElement; 4 | 5 | const client = new XrayClient(new Worker("worker.js")); 6 | const websocketURL = new URL("/ws", window.location.href); 7 | websocketURL.protocol = "ws"; 8 | client.sendMessage({ type: "ConnectToWebsocket", url: websocketURL.href }); 9 | 10 | const viewRegistry = buildViewRegistry(client); 11 | 12 | let initialRender = true; 13 | client.onMessage(message => { 14 | switch (message.type) { 15 | case "UpdateWindow": 16 | viewRegistry.update(message); 17 | if (initialRender) { 18 | ReactDOM.render( 19 | $(App, { inBrowser: true, viewRegistry }), 20 | document.getElementById("app") 21 | ); 22 | initialRender = false; 23 | } 24 | break; 25 | default: 26 | console.warn("Received unexpected message", message); 27 | } 28 | }); 29 | -------------------------------------------------------------------------------- /xray_ui/test/file_finder.test.js: -------------------------------------------------------------------------------- 1 | const assert = require("assert"); 2 | const {mount, setProps} = require("./helpers/component_helpers"); 3 | const FileFinder = require("../lib/file_finder"); 4 | const $ = require("react").createElement; 5 | 6 | suite("FileFinderView", () => { 7 | test("basic rendering", async () => { 8 | const fileFinder = mount($(FileFinder, { 9 | query: '', 10 | results: [] 11 | })); 12 | 13 |
assert.equal(fileFinder.find("ol li").length, 0); 14 | 15 | await setProps(fileFinder, { 16 | query: 'ce', 17 | results: [ 18 | {display_path: 'succeed', score: 3, positions: [3, 4]}, 19 | {display_path: 'abcdef', score: 2, positions: [2, 4]}, 20 | ] 21 | }); 22 | 23 | assert.deepEqual( 24 | fileFinder.find("ol li").map(item => item.getDOMNode().innerHTML), 25 | [ 26 | 'succeed', 27 | 'abcdef' 28 | ] 29 | ) 30 | }); 31 | }); 32 | -------------------------------------------------------------------------------- /xray_server/src/messages.rs: -------------------------------------------------------------------------------- 1 | use serde_json; 2 | use std::net::SocketAddr; 3 | use std::path::PathBuf; 4 | use xray_core::{ViewId, WindowId, WindowUpdate}; 5 | 6 | #[derive(Deserialize, Debug)] 7 | #[serde(tag = "type")] 8 | pub enum IncomingMessage { 9 | StartApp, 10 | StartCli { 11 | headless: bool, 12 | }, 13 | TcpListen { 14 | port: u16, 15 | }, 16 | StartWindow { 17 | window_id: WindowId, 18 | height: f64, 19 | }, 20 | CloseWindow { 21 | window_id: WindowId, 22 | }, 23 | OpenWorkspace { 24 | paths: Vec<PathBuf>, 25 | }, 26 | ConnectToPeer { 27 | address: SocketAddr, 28 | }, 29 | Action { 30 | view_id: ViewId, 31 | action: serde_json::Value, 32 | }, 33 | } 34 | 35 | #[derive(Serialize, Debug)] 36 | #[serde(tag = "type")] 37 | pub enum OutgoingMessage { 38 | OpenWindow { window_id: WindowId }, 39 | UpdateWindow(WindowUpdate), 40 | Error { description: String }, 41 | Ok, 42 | } 43 | -------------------------------------------------------------------------------- /xray_wasm/test/tests.js: -------------------------------------------------------------------------------- 1 | import assert from "assert"; 2 | import { xray as xrayPromise, JsSink } from "../lib/main"; 3 | 4 | suite("Server", () => { 5 | let xray; 6 | 7 | before(async () => { 8 | xray = await xrayPromise; 9 | }); 10 | 11 | test("channels and sinks", endTest => { 12 | const test = xray.Test.new(); 13 | 14 | const messages =
[]; 15 | const sink = new JsSink({ 16 | send(message) { 17 | assert.equal(message.length, 1); 18 | messages.push(message[0]); 19 | }, 20 | 21 | close() { 22 | assert.deepEqual(messages, [0, 1, 2, 3, 4]); 23 | endTest(); 24 | } 25 | }); 26 | 27 | const channel = xray.Channel.new(); 28 | test.echo_stream(channel.take_receiver(), sink); 29 | 30 | const sender = channel.take_sender(); 31 | let i = 0; 32 | let intervalId = setInterval(() => { 33 | if (i === 5) { 34 | sender.dispose(); 35 | clearInterval(intervalId); 36 | } 37 | sender.send([i++]); 38 | }, 1); 39 | }); 40 | }); 41 | -------------------------------------------------------------------------------- /memo_js/webpack.config.js: -------------------------------------------------------------------------------- 1 | var path = require("path"); 2 | 3 | const baseConfig = { 4 | target: "node", 5 | module: { 6 | rules: [ 7 | { 8 | test: /\.ts$/, 9 | use: "ts-loader", 10 | exclude: /node_modules/ 11 | } 12 | ] 13 | }, 14 | resolve: { 15 | extensions: [".ts", ".js", ".wasm"] 16 | } 17 | }; 18 | 19 | const libConfig = { 20 | ...baseConfig, 21 | entry: "./src/index.ts", 22 | mode: "production", 23 | devtool: "source-map", 24 | output: { 25 | path: path.resolve(__dirname, "dist"), 26 | filename: "index.node.js", 27 | library: "memo", 28 | libraryTarget: "umd" 29 | } 30 | }; 31 | 32 | const testConfig = { 33 | ...baseConfig, 34 | entry: "./test/tests.ts", 35 | mode: "development", 36 | devtool: "cheap-eval-source-map", 37 | output: { 38 | path: path.resolve(__dirname, "test", "dist"), 39 | filename: "tests.js" 40 | } 41 | }; 42 | 43 | module.exports = env => { 44 | if (env && env.test) { 45 | return testConfig; 46 | } else { 47 | return libConfig; 48 | } 49 | }; 50 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 GitHub 4 | 5 | Permission is hereby granted, 
free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 
22 | -------------------------------------------------------------------------------- /xray_core/src/stream_ext.rs: -------------------------------------------------------------------------------- 1 | use futures::{Future, Poll, Stream}; 2 | use std::fmt::Debug; 3 | use std::time; 4 | use tokio_core::reactor; 5 | use tokio_timer::Interval; 6 | 7 | pub trait StreamExt 8 | where 9 | Self: Stream + Sized, 10 | { 11 | fn wait_next(&mut self, reactor: &mut reactor::Core) -> Option<Self::Item> 12 | where 13 | Self::Item: Debug, 14 | Self::Error: Debug, 15 | { 16 | struct TakeOne<'a, S: 'a>(&'a mut S); 17 | 18 | impl<'a, S: 'a + Stream> Future for TakeOne<'a, S> { 19 | type Item = Option<S::Item>; 20 | type Error = S::Error; 21 | 22 | fn poll(&mut self) -> Poll<Self::Item, Self::Error> { 23 | self.0.poll() 24 | } 25 | } 26 | 27 | reactor.run(TakeOne(self)).unwrap() 28 | } 29 | 30 | fn throttle<'a>(self, millis: u64) -> Box<'a + Stream<Item = Self::Item, Error = Self::Error>> 31 | where 32 | Self: 'a, 33 | { 34 | let delay = time::Duration::from_millis(millis); 35 | Box::new(self.zip( 36 | Interval::new(time::Instant::now() + delay, delay).map_err(|_| unreachable!()), 37 | ).and_then(|(item, _)| Ok(item))) 38 | } 39 | } 40 | 41 | impl<T: Stream> StreamExt for T {} 42 | -------------------------------------------------------------------------------- /xray_electron/lib/render_process/main.js: -------------------------------------------------------------------------------- 1 | process.env.NODE_ENV = "production"; 2 | 3 | const { React, ReactDOM, App, buildViewRegistry } = require("xray_ui"); 4 | const XrayClient = require("../shared/xray_client"); 5 | const QueryString = require("querystring"); 6 | const $ = React.createElement; 7 | 8 | async function start() { 9 | const url = window.location.search.replace("?", ""); 10 | const { socketPath, windowId } = QueryString.parse(url); 11 | 12 | const xrayClient = new XrayClient(); 13 | await xrayClient.start(socketPath); 14 | const viewRegistry = buildViewRegistry(xrayClient); 15 | 16 | let initialRender = true; 17 |
xrayClient.addMessageListener(message => { 18 | switch (message.type) { 19 | case "UpdateWindow": 20 | viewRegistry.update(message); 21 | if (initialRender) { 22 | ReactDOM.render( 23 | $(App, { inBrowser: false, viewRegistry }), 24 | document.getElementById("app") 25 | ); 26 | initialRender = false; 27 | } 28 | break; 29 | default: 30 | console.warn("Received unexpected message", message); 31 | } 32 | }); 33 | 34 | xrayClient.sendMessage({ 35 | type: "StartWindow", 36 | window_id: Number(windowId), 37 | height: window.innerHeight 38 | }); 39 | } 40 | 41 | start(); 42 | -------------------------------------------------------------------------------- /xray_core/src/lib.rs: -------------------------------------------------------------------------------- 1 | #![feature(unsize, coerce_unsized)] 2 | 3 | extern crate bincode; 4 | extern crate bytes; 5 | #[macro_use] 6 | extern crate lazy_static; 7 | extern crate futures; 8 | extern crate parking_lot; 9 | extern crate seahash; 10 | extern crate serde; 11 | #[macro_use] 12 | extern crate serde_derive; 13 | #[macro_use] 14 | extern crate serde_json; 15 | extern crate smallvec; 16 | #[cfg(test)] 17 | extern crate tokio_core; 18 | #[cfg(test)] 19 | extern crate tokio_timer; 20 | #[cfg(target_arch = "wasm32")] 21 | extern crate wasm_bindgen; 22 | 23 | #[cfg(target_arch = "wasm32")] 24 | #[macro_use] 25 | pub mod wasm_logging; 26 | 27 | pub mod app; 28 | pub mod buffer; 29 | pub mod buffer_view; 30 | pub mod cross_platform; 31 | pub mod fs; 32 | pub mod notify_cell; 33 | pub mod rpc; 34 | pub mod window; 35 | pub mod workspace; 36 | 37 | mod file_finder; 38 | mod fuzzy; 39 | mod movement; 40 | mod never; 41 | mod project; 42 | #[cfg(test)] 43 | mod stream_ext; 44 | mod tree; 45 | 46 | pub use app::{App, WindowId}; 47 | use futures::future::{Executor, Future}; 48 | pub use never::Never; 49 | use std::cell::RefCell; 50 | use std::rc::Rc; 51 | pub use window::{ViewId, WindowUpdate}; 52 | 53 | pub type ForegroundExecutor = Rc<Executor<Box<Future<Item = (), Error = Never> + 
'static>>>; 54 | pub type BackgroundExecutor = Rc<Executor<Box<Future<Item = (), Error = Never> + Send + 'static>>>; 55 | pub type UserId = usize; 56 | 57 | pub(crate) trait IntoShared { 58 | fn into_shared(self) -> Rc<RefCell<Self>>; 59 | } 60 | 61 | impl<T> IntoShared for T { 62 | fn into_shared(self) -> Rc<RefCell<Self>> { 63 | Rc::new(RefCell::new(self)) 64 | } 65 | } 66 | -------------------------------------------------------------------------------- /xray_server/src/main.rs: -------------------------------------------------------------------------------- 1 | mod messages; 2 | mod server; 3 | mod fs; 4 | mod json_lines_codec; 5 | 6 | extern crate bytes; 7 | extern crate futures; 8 | extern crate futures_cpupool; 9 | extern crate ignore; 10 | extern crate parking_lot; 11 | extern crate serde; 12 | #[macro_use] 13 | extern crate serde_derive; 14 | extern crate serde_json; 15 | extern crate tokio_core; 16 | extern crate tokio_io; 17 | extern crate tokio_process; 18 | extern crate tokio_uds; 19 | extern crate xray_core; 20 | 21 | use std::env; 22 | use futures::Stream; 23 | use tokio_core::reactor::Core; 24 | use tokio_io::AsyncRead; 25 | use tokio_uds::UnixListener; 26 | use json_lines_codec::JsonLinesCodec; 27 | use messages::{IncomingMessage, OutgoingMessage}; 28 | use server::Server; 29 | 30 | fn main() { 31 | let headless = 32 | env::var("XRAY_HEADLESS").expect("Missing XRAY_HEADLESS environment variable") != "0"; 33 | let socket_path = 34 | env::var("XRAY_SOCKET_PATH").expect("Missing XRAY_SOCKET_PATH environment variable"); 35 | 36 | let mut core = Core::new().unwrap(); 37 | let handle = core.handle(); 38 | let mut server = Server::new(headless, handle.clone()); 39 | 40 | let _ = std::fs::remove_file(&socket_path); 41 | let listener = UnixListener::bind(socket_path, &handle).unwrap(); 42 | 43 | let handle_connections = listener.incoming().for_each(move |(socket, _)| { 44 | let framed_socket = 45 | socket.framed(JsonLinesCodec::<IncomingMessage, OutgoingMessage>::new()); 46 | server.accept_connection(framed_socket); 47 | Ok(()) 48 | }); 49 | 50 |
println!("Listening"); 51 | core.run(handle_connections).unwrap(); 52 | } 53 | -------------------------------------------------------------------------------- /xray_server/src/json_lines_codec.rs: -------------------------------------------------------------------------------- 1 | use std::io; 2 | use bytes::BytesMut; 3 | use serde::{Deserialize, Serialize}; 4 | use serde_json; 5 | use tokio_io::codec::{Decoder, Encoder}; 6 | use std::marker::PhantomData; 7 | 8 | pub struct JsonLinesCodec<In, Out> { 9 | phantom1: PhantomData<In>, 10 | phantom2: PhantomData<Out>, 11 | } 12 | 13 | impl<In, Out> JsonLinesCodec<In, Out> 14 | where 15 | In: for<'a> Deserialize<'a>, 16 | Out: Serialize, 17 | { 18 | pub fn new() -> Self { 19 | JsonLinesCodec { 20 | phantom1: PhantomData, 21 | phantom2: PhantomData, 22 | } 23 | } 24 | } 25 | 26 | impl<In, Out> Decoder for JsonLinesCodec<In, Out> 27 | where 28 | In: for<'a> Deserialize<'a>, 29 | Out: Serialize, 30 | { 31 | type Item = In; 32 | type Error = io::Error; 33 | 34 | fn decode(&mut self, buf: &mut BytesMut) -> Result<Option<Self::Item>, Self::Error> { 35 | if let Some(index) = buf.iter().position(|byte| *byte == b'\n') { 36 | let line = buf.split_to(index + 1); 37 | let item = serde_json::from_slice(&line[0..line.len() - 1])?; 38 | Ok(Some(item)) 39 | } else { 40 | Ok(None) 41 | } 42 | } 43 | } 44 | 45 | impl<In, Out> Encoder for JsonLinesCodec<In, Out> 46 | where 47 | In: for<'a> Deserialize<'a>, 48 | Out: Serialize, 49 | { 50 | type Item = Out; 51 | type Error = io::Error; 52 | 53 | fn encode(&mut self, msg: Self::Item, buf: &mut BytesMut) -> io::Result<()> { 54 | let mut vec = serde_json::to_vec(&msg)?; 55 | vec.push(b'\n'); 56 | buf.extend_from_slice(&vec); 57 | Ok(()) 58 | } 59 | } 60 | -------------------------------------------------------------------------------- /xray_electron/lib/shared/xray_client.js: -------------------------------------------------------------------------------- 1 | const net = require("net"); 2 | const EventEmitter = require('events'); 3 | 4 | module.exports = 5 | class XrayClient { 6 |
constructor () { 7 | this.socket = null; 8 | this.emitter = new EventEmitter(); 9 | this.currentMessageFragments = []; 10 | } 11 | 12 | start (socketPath) { 13 | return new Promise((resolve, reject) => { 14 | this.socket = net.connect(socketPath, (error) => { 15 | if (error) { 16 | reject(error) 17 | } else { 18 | resolve() 19 | } 20 | }); 21 | this.socket.on('data', this._handleInput.bind(this)); 22 | this.socket.on('error', reject) 23 | }) 24 | } 25 | 26 | sendMessage (message) { 27 | this.socket.write(JSON.stringify(message)); 28 | this.socket.write('\n'); 29 | } 30 | 31 | addMessageListener (callback) { 32 | this.emitter.on('message', callback); 33 | } 34 | 35 | removeMessageListener (callback) { 36 | this.emitter.removeListener('message', callback); 37 | } 38 | 39 | _handleInput (input) { 40 | let searchStartIndex = 0; 41 | while (searchStartIndex < input.length) { 42 | const newlineIndex = input.indexOf('\n', searchStartIndex); 43 | if (newlineIndex !== -1) { 44 | this.currentMessageFragments.push(input.slice(searchStartIndex, newlineIndex)); 45 | this.emitter.emit('message', JSON.parse(Buffer.concat(this.currentMessageFragments))); 46 | this.currentMessageFragments.length = 0; 47 | searchStartIndex = newlineIndex + 1; 48 | } else { 49 | this.currentMessageFragments.push(input.slice(searchStartIndex)); 50 | break; 51 | } 52 | } 53 | } 54 | } 55 | -------------------------------------------------------------------------------- /docs/updates/2018_04_09.md: -------------------------------------------------------------------------------- 1 | # Update for April 9, 2018 2 | 3 | ## Shared workspaces 4 | 5 | We spent the entire week [laying down the foundations that will enable shared workspaces](https://github.com/atom/xray/pull/61). What are shared workspaces? The basic idea is that you'll be able to start a headless Xray instance on a remote machine, then have multiple developers connect and co-inhabit that workspace from their local machines. 
6 | 7 | The fact that our buffers are CRDTs makes concurrent buffer editing relatively straightforward to implement, but we still need a solution for synchronizing state between peers and performing requests and responses. After experimenting with Cap'n Proto RPC and feeling a bit overwhelmed by the generated code, we decided to explore what a custom solution might look like. 8 | 9 | We're not quite done with the implementation, but after a lot of thinking and a bit of wheel-spinning, we have a reasonably solid design for a capabilities-based RPC system that will be a good fit for our use case. I've written up [a much deeper description](https://github.com/atom/xray/blob/9a1a02b7b608225a4c60fa364a1d60c1ef5f59c2/docs/architecture/002_shared_workspaces.md) that will become part of Xray's permanent documentation. Here's a *huge* diagram to get you interested: 10 | 11 | ![RPC Diagram](../images/rpc.png) 12 | 13 | ## The week ahead 14 | 15 | We hope to finish an initial take on the RPC system next week, then start using it to build out a basic demo of shared workspaces. Our goal is to make it possible to find and open paths on the client and support concurrent editing by multiple clients. That may spill into the following week, when I'll be traveling to Amsterdam for some in-person full-throttle coding with [@as-cii](https://github.com/as-cii). 16 | -------------------------------------------------------------------------------- /docs/updates/2018_03_19.md: -------------------------------------------------------------------------------- 1 | # Update for March 19, 2018 2 | 3 | ## Contributions 4 | 5 | We have a couple of PRs pending ([#36](https://github.com/atom/xray/pull/36) and [#34](https://github.com/atom/xray/pull/34)), but we're holding off on merging anything until we complete some major architectural changes. Sorry for the delay [@LucaT1](https://github.com/LucaT1) and [@breezykermo](https://github.com/breezykermo).
6 | 7 | ## Selections optimizations 8 | 9 | [I merged a PR](https://github.com/atom/xray/pull/45) from [@as-cii](https://github.com/as-cii) that optimized our initial implementation of selections. While we still think there is room for more optimization, we're pretty happy with our early results. On Antonio's machine, he's moving 1k selections in a document with 10k edits in under 2ms. Based on some hacky experimentation to avoid allocations, we think we can make that even faster. At some point, with some number of selections, we're going to end up blowing our frame budget, but we think maintaining it into the thousands of selections ought to be acceptable. 10 | 11 | ## Significant progress switching to a client/server architecture 12 | 13 | [@as-cii](https://github.com/as-cii), [@maxbrunsfeld](https://github.com/maxbrunsfeld) and I have made decent progress on a PR to switch Xray to the client/server architecture I [discussed last week](./2018_03_12.md#big-architectural-changes-incoming). 14 | 15 | We're implementing an event-driven server using [Tokio](https://tokio.rs/), and have what seems like a viable approach for relaying data between the server and the window that will leave the door open to packages implementing custom views that slot in cleanly next to built-in features. 16 | 17 | Check out [#46](https://github.com/atom/xray/pull/46) for details. I've also written [a fairly detailed document](https://github.com/atom/xray/blob/198e3bdf3c284679a5520923b0e27b079cc23377/docs/architecture/001_client_server_protocol.md) explaining our architecture and the protocol that will become a permanent part of Xray's documentation once this PR is merged. 
18 | -------------------------------------------------------------------------------- /docs/updates/2018_04_30.md: -------------------------------------------------------------------------------- 1 | # Update for April 30, 2018 2 | 3 | ## Xray now runs in a browser 4 | 5 | Last week, we merged [#67](https://github.com/atom/xray/pull/67), which allows Xray to be run inside of a web browser. The design is different in a couple of details from what I anticipated in last week's update, but the big picture is pretty much what we expected. The main difference is that for now, we decided not to bake HTTP and WebSockets support directly into `xray_server`, but instead place them in [a simple development server](https://github.com/atom/xray/blob/92f6c1959f843059738caff889df0843836cc006/xray_browser/script/server) which is written in Node and proxies WebSocket connections to `xray_server`'s normal TCP-based connection listener. This made it easy to integrate with middleware for WebPack that recompiles our JS bundle during development. Long-term, we'd still like to host web clients directly from `xray_server`, but we want to bundle the static web assets directly into the binary so that `xray_server` can continue to work as a standalone executable. This should definitely be possible, but it doesn't feel important to address it now. 6 | 7 | ## Demo this week 8 | 9 | We plan to show off Xray's progress to some colleagues here at GitHub later this week, so to that end, we'll focus some of this week on smaller details that, while not fundamentally advancing architectural concerns, will end up making for a better demo. 10 | 11 | By the end of this week, we should be rendering the cursors and selections of remote collaborators. We also plan to add a discussion panel to the Xray workspace where collaborators can have a text-based conversation that is linked to their code. 
12 | 13 | Once the demo is behind us, we plan to take a few days to burn down any technical debt we have accrued in the 10 weeks we've been actively developing the project. The biggest thing on our agenda is updating to [futures 0.2](http://aturon.github.io/2018/02/27/futures-0-2-RC/) and the [latest version of tokio](https://tokio.rs/blog/2018-03-tokio-runtime/). We also plan to take a look at our build and see if we can make our CI turnaround faster. 14 | -------------------------------------------------------------------------------- /xray_core/benches/bench.rs: -------------------------------------------------------------------------------- 1 | extern crate xray_core; 2 | #[macro_use] 3 | extern crate criterion; 4 | 5 | use criterion::Criterion; 6 | use std::cell::RefCell; 7 | use std::rc::Rc; 8 | use xray_core::buffer::{Buffer, Point}; 9 | use xray_core::buffer_view::BufferView; 10 | 11 | fn add_selection(c: &mut Criterion) { 12 | c.bench_function("add_selection_below", |b| { 13 | b.iter_with_setup( 14 | || { 15 | let mut buffer_view = create_buffer_view(100); 16 | for i in 0..100 { 17 | buffer_view.add_selection(Point::new(i, 0), Point::new(i, 0)); 18 | } 19 | buffer_view 20 | }, 21 | |mut buffer_view| buffer_view.add_selection_below(), 22 | ) 23 | }); 24 | c.bench_function("add_selection_above", |b| { 25 | b.iter_with_setup( 26 | || { 27 | let mut buffer_view = create_buffer_view(100); 28 | for i in 0..100 { 29 | buffer_view.add_selection(Point::new(i, 0), Point::new(i, 0)); 30 | } 31 | buffer_view 32 | }, 33 | |mut buffer_view| buffer_view.add_selection_above(), 34 | ) 35 | }); 36 | } 37 | 38 | fn edit(c: &mut Criterion) { 39 | c.bench_function("edit", |b| { 40 | b.iter_with_setup( 41 | || { 42 | let mut buffer_view = create_buffer_view(50); 43 | for i in 0..50 { 44 | buffer_view.add_selection(Point::new(i, 0), Point::new(i, 0)); 45 | } 46 | buffer_view 47 | }, 48 | |mut buffer_view| { 49 | buffer_view.edit("a"); 50 | buffer_view.edit("b"); 51 | 
buffer_view.edit("c"); 52 | }, 53 | ) 54 | }); 55 | } 56 | 57 | fn create_buffer_view(lines: usize) -> BufferView { 58 | let mut buffer = Buffer::new(0); 59 | for i in 0..lines { 60 | let len = buffer.len(); 61 | buffer.edit( 62 | &[len..len], 63 | format!("Lorem ipsum dolor sit amet {}\n", i).as_str(), 64 | ); 65 | } 66 | BufferView::new(Rc::new(RefCell::new(buffer)), 0, None) 67 | } 68 | 69 | criterion_group!(benches, edit, add_selection); 70 | criterion_main!(benches); 71 | -------------------------------------------------------------------------------- /xray_ui/lib/workspace.js: -------------------------------------------------------------------------------- 1 | const propTypes = require("prop-types"); 2 | const React = require("react"); 3 | const ReactDOM = require("react-dom"); 4 | const { styled } = require("styletron-react"); 5 | const Modal = require("./modal"); 6 | const View = require("./view"); 7 | const { ActionContext, Action } = require("./action_dispatcher"); 8 | const $ = React.createElement; 9 | 10 | const Root = styled("div", { 11 | position: "relative", 12 | width: "100%", 13 | height: "100%", 14 | padding: 0, 15 | margin: 0, 16 | display: "flex" 17 | }); 18 | 19 | const LeftPanel = styled("div", { 20 | width: "300px", 21 | height: "100%" 22 | }); 23 | 24 | const Pane = styled("div", { 25 | flex: 1, 26 | position: "relative" 27 | }); 28 | 29 | const PaneInner = styled("div", { 30 | position: "absolute", 31 | left: 0, 32 | top: 0, 33 | bottom: 0, 34 | right: 0 35 | }); 36 | 37 | const BackgroundTip = styled("div", { 38 | fontFamily: "sans-serif", 39 | height: "100%", 40 | display: "flex", 41 | alignItems: "center", 42 | justifyContent: "center" 43 | }); 44 | 45 | class Workspace extends React.Component { 46 | constructor() { 47 | super(); 48 | } 49 | 50 | render() { 51 | let modal; 52 | if (this.props.modal) { 53 | modal = $(Modal, null, $(View, { id: this.props.modal })); 54 | } 55 | 56 | let leftPanel; 57 | if (this.props.left_panel) { 58 | 
leftPanel = $(LeftPanel, null, $(View, { id: this.props.left_panel })); 59 | } 60 | 61 | let centerItem; 62 | if (this.props.center_pane) { 63 | centerItem = $(View, { id: this.props.center_pane }); 64 | } else if (this.context.inBrowser) { 65 | centerItem = $(BackgroundTip, {}, "Press Ctrl-T to browse files"); 66 | } 67 | 68 | return $( 69 | ActionContext, 70 | { context: "Workspace" }, 71 | $( 72 | Root, 73 | { tabIndex: -1 }, 74 | leftPanel, 75 | $(Pane, null, $(PaneInner, null, centerItem)), 76 | modal, 77 | $(Action, { type: "ToggleFileFinder" }), 78 | $(Action, { type: "SaveActiveBuffer" }) 79 | ) 80 | ); 81 | } 82 | 83 | componentDidMount() { 84 | ReactDOM.findDOMNode(this).focus(); 85 | } 86 | } 87 | 88 | Workspace.contextTypes = { 89 | inBrowser: propTypes.bool 90 | }; 91 | 92 | module.exports = Workspace; 93 | -------------------------------------------------------------------------------- /xray_ui/lib/view.js: -------------------------------------------------------------------------------- 1 | const propTypes = require("prop-types"); 2 | const React = require("react"); 3 | const $ = React.createElement; 4 | const ViewRegistry = require("./view_registry"); 5 | 6 | class View extends React.Component { 7 | constructor(props) { 8 | super(props); 9 | this.dispatchAction = this.dispatchAction.bind(this); 10 | this.state = { 11 | version: 0, 12 | viewId: props.id 13 | }; 14 | } 15 | 16 | componentWillReceiveProps(props, context) { 17 | const { viewRegistry } = context; 18 | 19 | if (this.state.viewId !== props.id) { 20 | this.setState({ viewId: props.id }); 21 | this.watch(props, context); 22 | } 23 | } 24 | 25 | componentDidMount() { 26 | this.watch(this.props, this.context); 27 | } 28 | 29 | componentWillUnmount() { 30 | if (this.disposePropsWatch) this.disposePropsWatch(); 31 | if (this.disposeFocusWatch) this.disposeFocusWatch(); 32 | } 33 | 34 | render() { 35 | const { viewRegistry } = this.context; 36 | const { id } = this.props; 37 | const component = 
viewRegistry.getComponent(id); 38 | const props = viewRegistry.getProps(id); 39 | return $( 40 | component, 41 | Object.assign({}, props, { 42 | ref: component => (this.component = component), 43 | dispatch: this.dispatchAction, 44 | key: id 45 | }) 46 | ); 47 | } 48 | 49 | dispatchAction(action) { 50 | const { viewRegistry } = this.context; 51 | const { id } = this.props; 52 | viewRegistry.dispatchAction(id, action); 53 | } 54 | 55 | watch(props, context) { 56 | if (this.disposePropsWatch) this.disposePropsWatch(); 57 | if (this.disposeFocusWatch) this.disposeFocusWatch(); 58 | this.disposePropsWatch = context.viewRegistry.watchProps(props.id, () => { 59 | this.setState({ version: this.state.version + 1 }); 60 | }); 61 | this.disposeFocusWatch = context.viewRegistry.watchFocus(props.id, () => { 62 | if (this.component.focus) this.component.focus(); 63 | }); 64 | } 65 | 66 | getChildContext() { 67 | return { dispatchAction: this.dispatchAction }; 68 | } 69 | } 70 | 71 | View.childContextTypes = { 72 | dispatchAction: propTypes.func 73 | }; 74 | 75 | View.contextTypes = { 76 | viewRegistry: propTypes.instanceOf(ViewRegistry) 77 | }; 78 | 79 | module.exports = View; 80 | -------------------------------------------------------------------------------- /xray_browser/src/worker.js: -------------------------------------------------------------------------------- 1 | import { xray as xrayPromise, JsSink } from "xray_wasm"; 2 | 3 | const encoder = new TextEncoder(); 4 | const decoder = new TextDecoder("utf-8"); 5 | const serverPromise = xrayPromise.then(xray => new Server(xray)); 6 | 7 | global.addEventListener("message", async event => { 8 | const message = event.data; 9 | const server = await serverPromise; 10 | server.handleMessage(message); 11 | }); 12 | 13 | class Server { 14 | constructor(xray) { 15 | this.xray = xray; 16 | this.xrayServer = xray.Server.new(); 17 | 18 | this.xrayServer.start_app( 19 | new JsSink({ 20 | send: buffer => { 21 | const message = 
JSON.parse(decoder.decode(buffer)); 22 | if (message.type === "OpenWindow") { 23 | this.startWindow(message.window_id); 24 | } else { 25 | throw new Error("Expected first message type to be OpenWindow"); 26 | } 27 | } 28 | }) 29 | ); 30 | } 31 | 32 | startWindow(windowId) { 33 | const channel = this.xray.Channel.new(); 34 | this.windowSender = channel.take_sender(); 35 | this.xrayServer.start_window( 36 | windowId, 37 | channel.take_receiver(), 38 | new JsSink({ 39 | send(buffer) { 40 | global.postMessage(JSON.parse(decoder.decode(buffer))); 41 | } 42 | }) 43 | ); 44 | } 45 | 46 | connectToWebsocket(url) { 47 | const socket = new WebSocket(url); 48 | socket.binaryType = "arraybuffer"; 49 | const channel = this.xray.Channel.new(); 50 | const sender = channel.take_sender(); 51 | 52 | socket.addEventListener("message", function(event) { 53 | const data = new Uint8Array(event.data); 54 | sender.send(data); 55 | }); 56 | 57 | this.xrayServer.connect_to_peer( 58 | channel.take_receiver(), 59 | new JsSink({ 60 | send(message) { 61 | socket.send(message); 62 | } 63 | }) 64 | ); 65 | } 66 | 67 | handleMessage(message) { 68 | switch (message.type) { 69 | case "ConnectToWebsocket": 70 | this.connectToWebsocket(message.url); 71 | break; 72 | case "Action": 73 | this.windowSender.send(encoder.encode(JSON.stringify(message))); 74 | break; 75 | default: 76 | console.error("Received unknown message", message); 77 | } 78 | } 79 | } 80 | -------------------------------------------------------------------------------- /xray_ui/test/modal.test.js: -------------------------------------------------------------------------------- 1 | const assert = require("assert"); 2 | const { mount } = require("./helpers/component_helpers"); 3 | const React = require("react"); 4 | const $ = React.createElement; 5 | const Modal = require("../lib/modal"); 6 | 7 | suite("Modal", () => { 8 | let attachedElements; 9 | 10 | beforeEach(() => { 11 | attachedElements = []; 12 | }); 13 | 14 | afterEach(() => { 
15 | while ((element = attachedElements.pop())) { 16 | element.remove(); 17 | } 18 | }); 19 | 20 | test("closing dialog while it's focused", () => { 21 | const outerComponent = mount($(FocusableComponent), { 22 | attachTo: buildAndAttachElement("div") 23 | }); 24 | outerComponent.getDOMNode().focus(); 25 | assert.equal(document.activeElement, outerComponent.getDOMNode()); 26 | 27 | const modal = mount($(Modal, {}, $(FocusableComponent)), { 28 | attachTo: buildAndAttachElement("div") 29 | }); 30 | const innerComponent = modal.find(FocusableComponent); 31 | innerComponent.getDOMNode().focus(); 32 | assert.equal(document.activeElement, innerComponent.getDOMNode()); 33 | 34 | modal.unmount(); 35 | assert.equal(document.activeElement, outerComponent.getDOMNode()); 36 | }); 37 | 38 | test("closing dialog when it's not focused", () => { 39 | const outerComponent1 = mount($(FocusableComponent, { id: 1 }), { 40 | attachTo: buildAndAttachElement("div") 41 | }); 42 | const outerComponent2 = mount($(FocusableComponent, { id: 2 }), { 43 | attachTo: buildAndAttachElement("div") 44 | }); 45 | outerComponent2.getDOMNode().focus(); 46 | assert.equal(document.activeElement, outerComponent2.getDOMNode()); 47 | 48 | const modal = mount($(Modal, {}, $(FocusableComponent)), { 49 | attachTo: buildAndAttachElement("div") 50 | }); 51 | outerComponent1.getDOMNode().focus(); 52 | assert.equal(document.activeElement, outerComponent1.getDOMNode()); 53 | 54 | modal.unmount(); 55 | assert.equal(document.activeElement, outerComponent1.getDOMNode()); 56 | }); 57 | 58 | class FocusableComponent extends React.Component { 59 | render() { 60 | return $("div", { id: this.props.id, tabIndex: -1 }); 61 | } 62 | } 63 | 64 | function buildAndAttachElement(tagName) { 65 | const element = document.createElement(tagName); 66 | document.body.appendChild(element); 67 | attachedElements.push(element); 68 | return element; 69 | } 70 | }); 71 | 
-------------------------------------------------------------------------------- /docs/updates/2018_08_28.md: -------------------------------------------------------------------------------- 1 | # Update for August 28, 2018 2 | 3 | ## Convergence for files and hard links 4 | 5 | As predicted in the [last update](./2018_08_21.md), adding support for files and hard links to our directory tree CRDT went smoothly, and we achieved convergence in our randomized tests on Monday. Because hard links make it possible for the same file to appear in multiple locations, many code paths needed to be updated to work in terms of *references* rather than files. Happily, we had already anticipated hard links by allowing a file to be associated with multiple parent refs, so the path was mostly paved. Once we add support for file contents and confirm that everything works in an end-to-end test, we plan to post an in-depth write-up on the directory tree CRDT and do a documentation pass on the [timeline module](../../memo/src/timeline.rs). 6 | 7 | ## Next up, buffers 8 | 9 | The file support added last week assumes that all files are empty. To allow files to be associated with editable content, we're adapting the [`buffer`](../../xray_core/src/buffer.rs) module from `xray_core` to work with Memo's [new B-tree](../../memo/src/btree.rs). The primary difference between the previous B-tree implementation and the new one is support for storing the tree's nodes in a database. This will allow us to store a file's entire history without loading old fragments into memory, but it also means that many methods now have the potential to perform I/O with the database and encounter I/O errors. 10 | 11 | We'll need to adjust the `Buffer` APIs slightly to account for this potential. For example, we can no longer return an iterator that implements the `Iterator` trait, since `next` would need to return a `Result` type. 
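The shape of such a fallible cursor API might look like the following sketch. The names here are illustrative, not Memo's actual API; the point is only that every advance returns a `Result` because it may hit the database.

```rust
use std::io;

// Hypothetical sketch (not Memo's real API): a fallible cursor in place of
// `std::iter::Iterator`, because each step may perform database I/O.
trait FallibleCursor {
    type Item;
    // Each advance can fail with an I/O error from the node store.
    fn next(&mut self) -> Result<Option<Self::Item>, io::Error>;
}

// A toy in-memory implementation, just to make the calling convention concrete.
struct VecCursor {
    items: Vec<u32>,
    index: usize,
}

impl FallibleCursor for VecCursor {
    type Item = u32;

    fn next(&mut self) -> Result<Option<u32>, io::Error> {
        // A real cursor would load the next B-tree node here, which could fail.
        let item = self.items.get(self.index).copied();
        self.index += 1;
        Ok(item)
    }
}

fn main() -> Result<(), io::Error> {
    let mut cursor = VecCursor { items: vec![1, 2, 3], index: 0 };
    // Unlike a `for` loop over an `Iterator`, the caller handles errors per step.
    while let Some(item) = cursor.next()? {
        println!("{}", item);
    }
    Ok(())
}
```

The cost is visible at every call site: consumers must thread `?` (or match on the error) through each step instead of using `for` loops and iterator adapters.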
We're also dropping some of the previous buffer's support for Xray's RPC system because we anticipate dealing with network interactions differently in Memo. We don't have complete clarity on our plans for dealing with networking just yet, but it makes sense to keep our assumptions minimal at this stage. 12 | 13 | Once we get buffers implemented against our new B-tree, we'll need to integrate them into our timeline. We plan to maintain a mapping between file ids and the buffers that contain their contents, but the details will become clearer once we get into it. Buffers will need to be integrated with up to three distinct sources of I/O: the file system for reading/saving contents, the network for collaboration, and the database for history persistence. It should be a fun design problem to give them a convenient API while addressing all of those concerns. 14 | -------------------------------------------------------------------------------- /memo_core/src/serialization/schema.fbs: -------------------------------------------------------------------------------- 1 | struct ReplicaId { 2 | first_8_bytes: uint64; 3 | last_8_bytes: uint64; 4 | } 5 | 6 | struct Timestamp { 7 | value:uint64; 8 | replica_id:ReplicaId; 9 | } 10 | 11 | table GlobalTimestamp { 12 | timestamps:[Timestamp]; 13 | } 14 | 15 | namespace buffer; 16 | 17 | enum AnchorVariant : byte { Start, Middle, End } 18 | 19 | enum AnchorBias : byte { Left, Right } 20 | 21 | table Anchor { 22 | variant:AnchorVariant; 23 | insertion_id:Timestamp; 24 | offset:uint64; 25 | bias:AnchorBias; 26 | } 27 | 28 | table Selection { 29 | start:Anchor; 30 | end:Anchor; 31 | reversed:bool; 32 | } 33 | 34 | table Edit { 35 | start_id:Timestamp; 36 | start_offset:uint64; 37 | end_id:Timestamp; 38 | end_offset:uint64; 39 | version_in_range:GlobalTimestamp; 40 | new_text:string; 41 | local_timestamp:Timestamp; 42 | lamport_timestamp:Timestamp; 43 | } 44 | 45 | table UpdateSelections { 46 | set_id:Timestamp; 47 | 
selections:[Selection]; 48 | lamport_timestamp:Timestamp; 49 | } 50 | 51 | union OperationVariant { Edit, UpdateSelections } 52 | 53 | table Operation { 54 | variant: OperationVariant; 55 | } 56 | 57 | namespace epoch; 58 | 59 | table BaseFileId { 60 | index:uint64; 61 | } 62 | 63 | table NewFileId { 64 | id:Timestamp; 65 | } 66 | 67 | union FileId { BaseFileId, NewFileId } 68 | 69 | enum FileType : byte { Directory, Text } 70 | 71 | table InsertMetadata { 72 | file_id:FileId; 73 | file_type:FileType; 74 | parent_id:FileId; 75 | name_in_parent:string; 76 | local_timestamp:Timestamp; 77 | lamport_timestamp:Timestamp; 78 | } 79 | 80 | table UpdateParent { 81 | child_id:FileId; 82 | new_parent_id:FileId; 83 | new_name_in_parent:string; 84 | local_timestamp:Timestamp; 85 | lamport_timestamp:Timestamp; 86 | } 87 | 88 | table BufferOperation { 89 | file_id:FileId; 90 | operations:[buffer.Operation]; 91 | local_timestamp:Timestamp; 92 | lamport_timestamp:Timestamp; 93 | } 94 | 95 | table UpdateActiveLocation { 96 | file_id:FileId; 97 | lamport_timestamp:Timestamp; 98 | } 99 | 100 | union Operation { InsertMetadata, UpdateParent, BufferOperation, UpdateActiveLocation } 101 | 102 | namespace worktree; 103 | 104 | table StartEpoch { 105 | epoch_id:Timestamp; 106 | head:[ubyte]; 107 | } 108 | 109 | table EpochOperation { 110 | epoch_id:Timestamp; 111 | operation:epoch.Operation; 112 | } 113 | 114 | union OperationVariant { StartEpoch, EpochOperation } 115 | 116 | table Operation { 117 | variant:OperationVariant; 118 | } 119 | 120 | root_type Operation; 121 | -------------------------------------------------------------------------------- /docs/updates/2018_04_16.md: -------------------------------------------------------------------------------- 1 | # Update for April 16, 2018 2 | 3 | ## Contributions 4 | 5 | [@rleungx](https://github.com/rleungx) [set up a basic benchmarking framework](https://github.com/atom/xray/pull/62) that uses 
[Criterion](https://github.com/japaric/criterion.rs). 6 | 7 | ## Progress on shared workspaces 8 | 9 | By the middle of last week, we had a first iteration of the RPC system that we were happy with, and started using it to build out shared workspaces. To do that, we're adding replication to Xray's model objects. The goal is to be able to use model objects without worrying about whether or not they are remote or local. 10 | 11 | We're converging on a design where most model objects are represented by a trait, with local and remote concrete implementations of this trait. For example, the project model has a `Project` trait along with `LocalProject` and `RemoteProject` implementations. We also have an `rpc::server::Service` implementation that has a shared reference to a `LocalProject` and exposes it to a remote client. On the client side, the `RemoteProject` owns a `rpc::client::Service` object. When you call a method like `open_buffer` on the client side, it's translated into a network request to a service on the remote peer, which translates the request to a method call on the corresponding `LocalProject`. 12 | 13 | We have unit tests passing for replication of file system trees and projects, along with the initial state for buffers. We still need to replicate buffer edits. We also have some work to do to refine our treatment of ownership for services on the server side. We think the best approach might be to enable both the client and the server to retain services. So if the server wants to keep a service alive and return it across multiple requests or updates, it can store off a handle to the service. Or it can drop the handle, in which case the client can take ownership over the service. Once the client drops, we'll communicate this fact across the wire and decrement the service's reference count. It's essentially an `Rc` transmitted over the network. We'll see how it goes. 
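As a rough illustration of the trait-plus-two-implementations pattern described above (these are not Xray's real signatures, and a plain closure stands in for `rpc::client::Service` to keep the example self-contained):

```rust
// Illustrative sketch only: invented types, with a closure standing in for
// the RPC client service so the example runs without a network.
trait Project {
    fn open_buffer(&mut self, path: &str) -> Result<u64, String>;
}

struct LocalProject {
    next_buffer_id: u64,
}

impl Project for LocalProject {
    fn open_buffer(&mut self, _path: &str) -> Result<u64, String> {
        // The local implementation does the actual work.
        let id = self.next_buffer_id;
        self.next_buffer_id += 1;
        Ok(id)
    }
}

struct RemoteProject<F: FnMut(&str) -> Result<u64, String>> {
    // Stand-in for the client service that sends requests to the peer.
    request: F,
}

impl<F: FnMut(&str) -> Result<u64, String>> Project for RemoteProject<F> {
    fn open_buffer(&mut self, path: &str) -> Result<u64, String> {
        // In the real system this would serialize a request; the peer's
        // service translates it into a call on the corresponding LocalProject.
        (self.request)(path)
    }
}

fn main() {
    let mut local = LocalProject { next_buffer_id: 0 };
    let mut remote = RemoteProject { request: |path: &str| Ok(path.len() as u64) };
    // Callers program against the `Project` trait without caring which side
    // of the connection they hold.
    assert_eq!(local.open_buffer("a.rs"), Ok(0));
    assert_eq!(remote.open_buffer("a.rs"), Ok(4));
}
```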
14 | 15 | ## Syntax awareness 16 | 17 | This week, [@maxbrunsfeld](https://github.com/maxbrunsfeld) will be diving in on integrating the [Tree-sitter](https://github.com/tree-sitter/tree-sitter) incremental parsing system into Xray. The first step involves some adjustments to the runtime to enable syntax trees to be fully persistent and sharable across threads. Xray's buffers already support this kind of usage, so including syntax trees will enable lots of interesting computations to be pushed into the background. 18 | 19 | ## Heads-down in Amsterdam 20 | 21 | [@as-cii](https://github.com/as-cii) and I are meeting up in Amsterdam this week to write as much code as possible together in person. To that end, I'm going to keep this update short so we can get to work. 22 | -------------------------------------------------------------------------------- /xray_ui/lib/view_registry.js: -------------------------------------------------------------------------------- 1 | const assert = require("assert"); 2 | 3 | module.exports = class ViewRegistry { 4 | constructor({ onAction } = {}) { 5 | this.onAction = onAction; 6 | this.componentsByName = new Map(); 7 | this.viewsById = new Map(); 8 | this.propListenersByViewId = new Map(); 9 | this.focusListenersByViewId = new Map(); 10 | } 11 | 12 | addComponent(name, component) { 13 | assert(!this.componentsByName.has(name)); 14 | this.componentsByName.set(name, component); 15 | } 16 | 17 | getComponent(id) { 18 | const view = this.viewsById.get(id); 19 | assert(view); 20 | const component = this.componentsByName.get(view.component_name); 21 | assert(component); 22 | return component; 23 | } 24 | 25 | removeComponent(name) { 26 | this.componentsByName.delete(name); 27 | } 28 | 29 | update({ updated, removed, focused }) { 30 | for (let i = 0; i < updated.length; i++) { 31 | const view = updated[i]; 32 | this.viewsById.set(view.view_id, view); 33 | 34 | const listeners = this.propListenersByViewId.get(view.view_id); 35 | if (listeners) 
{ 36 | for (let i = 0; i < listeners.length; i++) { 37 | listeners[i](); 38 | } 39 | } 40 | } 41 | 42 | for (var i = 0; i < removed.length; i++) { 43 | const viewId = removed[i]; 44 | this.viewsById.delete(viewId); 45 | this.propListenersByViewId.delete(viewId); 46 | } 47 | 48 | if (focused != null) { 49 | const focusListener = this.focusListenersByViewId.get(focused); 50 | if (focusListener) { 51 | this.pendingFocus = null; 52 | focusListener(); 53 | } else { 54 | this.pendingFocus = focused; 55 | } 56 | } 57 | } 58 | 59 | getProps(id) { 60 | const view = this.viewsById.get(id); 61 | assert(view); 62 | return view.props; 63 | } 64 | 65 | watchProps(id, callback) { 66 | assert(this.viewsById.has(id)); 67 | 68 | let listeners = this.propListenersByViewId.get(id); 69 | if (!listeners) { 70 | listeners = []; 71 | this.propListenersByViewId.set(id, listeners); 72 | } 73 | 74 | listeners.push(callback); 75 | 76 | return () => { 77 | const callbackIndex = listeners.indexOf(callback); 78 | if (callbackIndex !== -1) listeners.splice(callbackIndex, 1); 79 | }; 80 | } 81 | 82 | watchFocus(id, callback) { 83 | assert(!this.focusListenersByViewId.has(id)); 84 | 85 | this.focusListenersByViewId.set(id, callback); 86 | if (this.pendingFocus === id) { 87 | this.pendingFocus = null; 88 | callback(); 89 | } 90 | 91 | return () => this.focusListenersByViewId.delete(id); 92 | } 93 | 94 | dispatchAction(id, action) { 95 | assert(this.viewsById.has(id)); 96 | this.onAction({ view_id: id, action }); 97 | } 98 | }; 99 | -------------------------------------------------------------------------------- /docs/architecture/003_memo_epochs.md: -------------------------------------------------------------------------------- 1 | The following document describes the sequence of operations that we should perform when the repository HEAD changes, both on the machine where the HEAD change occurred and at remote sites that receive the resulting epoch change. 
2 | 3 | The algorithms assume that version vectors don't reset across epochs. This does raise the concern that version vectors could grow without bound over the life of the repository, but we're going to suspend that concern temporarily to make progress. 4 | 5 | ### Creating a new epoch after HEAD moves 6 | 7 | Assume we are currently at epoch A described by Tree T. 8 | 9 | - Scan all the entries from Git's database based on the new HEAD into a new Tree T'. 10 | - Synthesize and apply operations for all uncommitted changes via a `git diff`. This includes file system operations as well as uncommitted changes to file contents. 11 | - For all buffers with unsaved edits in T: 12 | - Diff the last saved contents in T against the current contents of T' using the path of the buffer in T. This diff will describe a set of regions that have been touched outside of our control. 13 | - Go through each of the unsaved operations in T and check if they intersect with any of the regions in this diff to detect a conflict. 14 | - If there is a conflict, synthesize operations by performing a diff between the contents of T' and the contents of T and apply these as unsaved operations on top of T', then mark the buffer as in conflict. 15 | - Otherwise, transform all the unsaved operations according to the initial diff and apply them to the buffer in T'. 16 | 17 | Afterward, we broadcast a new epoch B that contains the new HEAD SHA, the work tree's current version vector, a Lamport timestamp, and all synthesized operations. 18 | 19 | ### Receiving a new epoch 20 | 21 | * Check Lamport timestamp of the epoch. If it's less than the current epoch's timestamp, ignore it. Otherwise, proceed to change the active epoch as follows: 22 | * Scan all entries from Git's database based on the new epoch's HEAD SHA into a new Tree T'. 23 | * Apply operations that are associated with the new epoch to T'. 24 | 25 | What happens to buffers? 
26 | * For all buffers containing edits not included in the epoch change's version vector: 27 | * If a file with the same path exists in T': 28 | * Diff the contents that are included in the version vector against the contents of T' using the path of the buffer in T. This diff will describe a set of regions that have been touched outside of our control. 29 | * Go through each of the local edits that were not part of the version vector. If they do not directly conflict with a region in the diff, synthesize a new operation with an adjusted position based on the diff and apply it to T'. 30 | * If no file with that path exists in T', we create it with initial contents from T. 31 | -------------------------------------------------------------------------------- /docs/updates/2018_08_21.md: -------------------------------------------------------------------------------- 1 | # Update for August 21, 2018 2 | 3 | ## *Eon* is now *Memo* 4 | 5 | I chose the name *Eon* fairly hastily and ended up kind of disliking it. I wanted to change it almost immediately, but decided to hold off until I felt sure about its replacement. *Memo* is one character longer but just sounds better to me and reflects the system's ability to record every keystroke. It's kinda silly to worry this much about a name, but I just needed to change it. Now it's done. Moving on. 6 | 7 | ## Convergence for directory trees 8 | 9 | The bigger news is that we've finally achieved convergence in our randomized tests of replicated directory trees. The problem ended up being way harder than we imagined. The final challenge was to fully simulate the possibility for the file system to change at any time, including during a directory scan. 10 | 11 | We are cautiously optimistic that the worst of the algorithmic challenges could be behind us. Weeks of wading through randomized test failures has been a bit monotonous, but hopefully we can pick up some momentum building on top of this abstraction. 
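The style of randomized convergence test described above can be sketched with a toy last-writer-wins map in place of the real directory-tree CRDT: deliver the same operations to two replicas in different orders and assert they end up identical. Everything below is invented for illustration.

```rust
use std::collections::HashMap;

// A toy operation: set `key` to `value`, stamped with a Lamport timestamp
// and a replica id to break ties deterministically.
#[derive(Clone, Copy)]
struct Op {
    key: u32,
    value: u32,
    timestamp: u64,
    replica: u8,
}

#[derive(Default, PartialEq, Debug)]
struct Replica {
    // key -> (timestamp, replica, value)
    entries: HashMap<u32, (u64, u8, u32)>,
}

impl Replica {
    fn apply(&mut self, op: Op) {
        let incoming = (op.timestamp, op.replica, op.value);
        let entry = self.entries.entry(op.key).or_insert(incoming);
        // Last writer wins: keep the entry with the highest (time, replica),
        // so the result is independent of delivery order.
        if (incoming.0, incoming.1) >= (entry.0, entry.1) {
            *entry = incoming;
        }
    }
}

// Apply the same ops to two replicas in different orders; true iff converged.
fn converged(ops: &[Op]) -> bool {
    let (mut a, mut b) = (Replica::default(), Replica::default());
    for op in ops {
        a.apply(*op);
    }
    for op in ops.iter().rev() {
        b.apply(*op); // deliver in a different (reversed) order
    }
    a == b
}

fn main() {
    // A real randomized test would generate thousands of random op sequences
    // (and simulate concurrent file system mutations, as described above).
    let ops = [
        Op { key: 1, value: 10, timestamp: 1, replica: 0 },
        Op { key: 1, value: 20, timestamp: 2, replica: 1 },
        Op { key: 2, value: 30, timestamp: 1, replica: 1 },
    ];
    assert!(converged(&ops));
}
```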
12 | 13 | ## Supporting text files and evolving the high-level structure 14 | 15 | The next step is to add support for files to the directory tree, which we think should be easier. Much of what we learned dealing with pure directories can be applied to files, and since files are always leaf nodes, we shouldn't need to deal with cycles. We *do* need to deal with hard links, however, which should add some complexity. 16 | 17 | Supporting files also means we need to figure out the relationship between the CRDT that maintains the state of the directory tree and the CRDTs that contain the contents of individual text files. 18 | 19 | This week seems like the right time to zoom out and get a bit more clarity on the system's higher-level design. Until we had a working CRDT for directory trees, that felt premature, but now it seems like understanding the big picture a bit better might inform the relationship between the directory tree and individual text files. 20 | 21 | We've gone back and forth on whether we should try to decouple them, but for now we think we're going to try a more integrated approach where the directory tree CRDT has explicit knowledge of the file CRDTs. Specifically, we've decided to wrap both concerns in a single type called a `Timeline`, which will represent the full state of a working tree as it evolves forward in time. A `Repository` will contain multiple timelines, which can evolve in parallel, fork, and eventually merge. 22 | 23 | There's still quite a bit to figure out, though. How will we route operations to and from buffers? What will the ownership structure look like? How can we ensure that performing I/O doesn't interfere with the responsiveness of the system? We'll hopefully have some conclusions about those questions and more to share in the next update.
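A speculative sketch of that ownership structure follows. Every name and field here is invented; the real `Timeline` and `Repository` will surely differ, and a real fork would share structure rather than start empty.

```rust
use std::collections::HashMap;

type FileId = u64;

// Placeholders for the two kinds of CRDT discussed above.
struct TreeCrdt;          // the directory-tree CRDT
struct TextCrdt(String);  // a per-file text-content CRDT

// The integrated approach: the timeline couples the tree CRDT with the
// text CRDTs for the files it knows about, so it can route operations.
struct Timeline {
    tree: TreeCrdt,
    buffers: HashMap<FileId, TextCrdt>,
}

struct Repository {
    timelines: Vec<Timeline>,
}

impl Repository {
    // Fork a timeline, returning the new timeline's index. A real
    // implementation would clone (or structurally share) the parent's state.
    fn fork(&mut self, index: usize) -> usize {
        let _parent = &self.timelines[index];
        self.timelines.push(Timeline {
            tree: TreeCrdt,
            buffers: HashMap::new(),
        });
        self.timelines.len() - 1
    }
}

fn main() {
    let mut repo = Repository {
        timelines: vec![Timeline { tree: TreeCrdt, buffers: HashMap::new() }],
    };
    let forked = repo.fork(0);
    assert_eq!(forked, 1);
    assert_eq!(repo.timelines.len(), 2);
}
```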
24 | -------------------------------------------------------------------------------- /xray_electron/lib/main_process/main.js: -------------------------------------------------------------------------------- 1 | const {app, BrowserWindow} = require('electron'); 2 | const {spawn} = require('child_process'); 3 | const path = require('path'); 4 | const url = require('url'); 5 | const XrayClient = require('../shared/xray_client'); 6 | 7 | const SERVER_PATH = process.env.XRAY_SERVER_PATH; 8 | if (!SERVER_PATH) { 9 | console.error('Missing XRAY_SERVER_PATH environment variable'); 10 | process.exit(1); 11 | } 12 | 13 | const SOCKET_PATH = process.env.XRAY_SOCKET_PATH; 14 | if (!SOCKET_PATH) { 15 | console.error('Missing XRAY_SOCKET_PATH environment variable'); 16 | process.exit(1); 17 | } 18 | 19 | class XrayApplication { 20 | constructor (serverPath, socketPath) { 21 | this.serverPath = serverPath; 22 | this.socketPath = socketPath; 23 | this.windowsById = new Map(); 24 | this.readyPromise = new Promise(resolve => app.on('ready', resolve)); 25 | this.xrayClient = new XrayClient(); 26 | } 27 | 28 | async start () { 29 | const serverProcess = spawn(this.serverPath, [], {stdio: ['ignore', 'pipe', 'inherit']}); 30 | app.on('before-quit', () => serverProcess.kill()); 31 | 32 | serverProcess.on('error', console.error); 33 | serverProcess.on('exit', () => app.quit()); 34 | 35 | await new Promise(resolve => { 36 | let serverStdout = ''; 37 | serverProcess.stdout.on('data', data => { 38 | serverStdout += data.toString('utf8'); 39 | if (serverStdout.includes('Listening\n')) resolve() 40 | }); 41 | }); 42 | 43 | await this.xrayClient.start(this.socketPath); 44 | this.xrayClient.addMessageListener(this._handleMessage.bind(this)); 45 | this.xrayClient.sendMessage({type: 'StartApp'}); 46 | } 47 | 48 | async _handleMessage (message) { 49 | await this.readyPromise; 50 | switch (message.type) { 51 | case 'OpenWindow': { 52 | this._createWindow(message.window_id); 53 | break; 54 | } 55 | 
} 56 | } 57 | 58 | _createWindow (windowId) { 59 | const window = new BrowserWindow({width: 800, height: 600, webPreferences: {webSecurity: false}}); 60 | window.loadURL(url.format({ 61 | pathname: path.join(__dirname, '../../index.html'), 62 | search: `windowId=${windowId}&socketPath=${encodeURIComponent(this.socketPath)}`, 63 | protocol: 'file:', 64 | slashes: true 65 | })); 66 | this.windowsById.set(windowId, window); 67 | window.on('closed', () => { 68 | this.windowsById.delete(windowId); 69 | this.xrayClient.sendMessage({type: 'CloseWindow', window_id: windowId}); 70 | }); 71 | } 72 | } 73 | 74 | app.commandLine.appendSwitch("enable-experimental-web-platform-features"); 75 | 76 | app.on('window-all-closed', function () { 77 | if (process.platform !== 'darwin') { 78 | app.quit(); 79 | } 80 | }); 81 | 82 | const application = new XrayApplication(SERVER_PATH, SOCKET_PATH); 83 | application.start().then(() => { 84 | console.log('Listening'); 85 | }); 86 | -------------------------------------------------------------------------------- /xray_core/src/movement.rs: -------------------------------------------------------------------------------- 1 | use buffer::{Buffer, Point}; 2 | use std::char::decode_utf16; 3 | use std::cmp; 4 | 5 | pub fn left(buffer: &Buffer, mut point: Point) -> Point { 6 | if point.column > 0 { 7 | point.column -= 1; 8 | } else if point.row > 0 { 9 | point.row -= 1; 10 | point.column = buffer.len_for_row(point.row).unwrap(); 11 | } 12 | point 13 | } 14 | 15 | pub fn right(buffer: &Buffer, mut point: Point) -> Point { 16 | let max_column = buffer.len_for_row(point.row).unwrap(); 17 | if point.column < max_column { 18 | point.column += 1; 19 | } else if point.row < buffer.max_point().row { 20 | point.row += 1; 21 | point.column = 0; 22 | } 23 | point 24 | } 25 | 26 | pub fn up(buffer: &Buffer, mut point: Point, goal_column: Option<u32>) -> (Point, Option<u32>) { 27 | let goal_column = goal_column.or(Some(point.column)); 28 | if point.row > 0 { 29 | point.row -= 1; 30 |
point.column = cmp::min(goal_column.unwrap(), buffer.len_for_row(point.row).unwrap()); 31 | } else { 32 | point = Point::new(0, 0); 33 | } 34 | 35 | (point, goal_column) 36 | } 37 | 38 | pub fn down(buffer: &Buffer, mut point: Point, goal_column: Option<u32>) -> (Point, Option<u32>) { 39 | let goal_column = goal_column.or(Some(point.column)); 40 | let max_point = buffer.max_point(); 41 | if point.row < max_point.row { 42 | point.row += 1; 43 | point.column = cmp::min(goal_column.unwrap(), buffer.len_for_row(point.row).unwrap()) 44 | } else { 45 | point = max_point; 46 | } 47 | 48 | (point, goal_column) 49 | } 50 | 51 | pub fn beginning_of_word(buffer: &Buffer, mut point: Point) -> Point { 52 | // TODO: remove this once the iterator returns char instances. 53 | let mut iter = decode_utf16(buffer.backward_iter_starting_at_point(point)).map(|c| c.unwrap()); 54 | let skip_alphanumeric = iter.next().map_or(false, |c| c.is_alphanumeric()); 55 | point = left(buffer, point); 56 | for character in iter { 57 | if skip_alphanumeric == character.is_alphanumeric() { 58 | point = left(buffer, point); 59 | } else { 60 | break; 61 | } 62 | } 63 | point 64 | } 65 | 66 | pub fn end_of_word(buffer: &Buffer, mut point: Point) -> Point { 67 | // TODO: remove this once the iterator returns char instances.
68 | let mut iter = decode_utf16(buffer.iter_starting_at_point(point)).map(|c| c.unwrap()); 69 | let skip_alphanumeric = iter.next().map_or(false, |c| c.is_alphanumeric()); 70 | point = right(buffer, point); 71 | for character in iter { 72 | if skip_alphanumeric == character.is_alphanumeric() { 73 | point = right(buffer, point); 74 | } else { 75 | break; 76 | } 77 | } 78 | point 79 | } 80 | 81 | pub fn beginning_of_line(mut point: Point) -> Point { 82 | point.column = 0; 83 | point 84 | } 85 | 86 | pub fn end_of_line(buffer: &Buffer, mut point: Point) -> Point { 87 | point.column = buffer.len_for_row(point.row).unwrap(); 88 | point 89 | } 90 | -------------------------------------------------------------------------------- /xray_browser/script/server: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env node 2 | 3 | const assert = require("assert"); 4 | const http = require("http"); 5 | const express = require("express"); 6 | const ws = require("ws"); 7 | const { Socket } = require("net"); 8 | const path = require("path"); 9 | const webpack = require("webpack"); 10 | const webpackDev = require("webpack-dev-middleware"); 11 | express.static.mime.types["wasm"] = "application/wasm"; 12 | 13 | const xrayServerAddress = process.env.XRAY_SERVER_ADDRESS || "127.0.0.1"; 14 | const xrayServerPort = process.env.XRAY_SERVER_PORT || 8080; 15 | const webServerPort = process.env.PORT || 3000; 16 | const app = express(); 17 | const server = http.createServer(app); 18 | 19 | const webpackMode = process.env.NODE_ENV || "development"; 20 | const compiler = webpack([ 21 | { 22 | mode: webpackMode, 23 | entry: path.join(__dirname, "../src/ui.js"), 24 | output: { 25 | filename: "ui.js", 26 | path: path.join(__dirname, "dist") 27 | } 28 | }, 29 | { 30 | mode: webpackMode, 31 | target: "webworker", 32 | entry: path.join(__dirname, "../src/worker.js"), 33 | output: { 34 | filename: "worker.js", 35 | path: path.join(__dirname, "dist") 36 | } 
37 | } 38 | ]); 39 | 40 | const websocketServer = new ws.Server({ server, path: "/ws" }); 41 | websocketServer.on("connection", async ws => { 42 | const connection = new Socket(); 43 | 44 | let incomingMessage = null; 45 | let remainingBytes = 0; 46 | connection.on("data", data => { 47 | let offset = 0; 48 | while (offset < data.length) { 49 | if (incomingMessage) { 50 | assert(remainingBytes !== 0, "remainingBytes should not be 0"); 51 | const copiedBytes = data.copy( 52 | incomingMessage, 53 | incomingMessage.length - remainingBytes, 54 | offset, 55 | offset + remainingBytes 56 | ); 57 | remainingBytes -= copiedBytes; 58 | offset += copiedBytes; 59 | } else { 60 | remainingBytes = data.readUInt32BE(offset); 61 | incomingMessage = Buffer.alloc(remainingBytes); 62 | offset += 4; 63 | } 64 | 65 | if (incomingMessage && remainingBytes === 0) { 66 | try { 67 | ws.send(incomingMessage); 68 | } catch (error) { 69 | console.error("Error sending to web socket:", error); 70 | } 71 | incomingMessage = null; 72 | } 73 | } 74 | }); 75 | 76 | await new Promise(resolve => { 77 | connection.connect( 78 | { host: xrayServerAddress, port: xrayServerPort }, 79 | resolve 80 | ); 81 | }); 82 | ws.on("message", message => { 83 | const bufferLengthHeader = Buffer.alloc(4); 84 | bufferLengthHeader.writeUInt32BE(message.length, 0); 85 | connection.write(Buffer.concat([bufferLengthHeader, message])); 86 | }); 87 | ws.on("close", () => connection.destroy()); 88 | }); 89 | 90 | app.use(webpackDev(compiler, { publicPath: "/" })); 91 | app.use("/", express.static(path.join(__dirname, "../static"))); 92 | server.listen(webServerPort, () => { 93 | console.log(`Using xray server: ${xrayServerAddress}:${xrayServerPort}`); 94 | console.log("Listening for HTTP connections on port " + webServerPort); 95 | }); 96 | -------------------------------------------------------------------------------- /xray_ui/test/view.test.js: 
-------------------------------------------------------------------------------- 1 | const assert = require("assert"); 2 | const React = require("react"); 3 | const $ = require("react").createElement; 4 | const { mount, shallow } = require("./helpers/component_helpers"); 5 | const View = require("../lib/view"); 6 | const ViewRegistry = require("../lib/view_registry"); 7 | 8 | suite("View", () => { 9 | test("basic rendering", () => { 10 | const viewRegistry = new ViewRegistry(); 11 | viewRegistry.addComponent("comp-1", props => $("div", {}, props.text)); 12 | viewRegistry.addComponent("comp-2", props => $("label", {}, props.text)); 13 | viewRegistry.update({ 14 | updated: [ 15 | { component_name: "comp-1", view_id: 1, props: { text: "text-1" } }, 16 | { component_name: "comp-2", view_id: 2, props: { text: "text-2" } } 17 | ], 18 | removed: [] 19 | }); 20 | 21 | // Initial rendering 22 | const view = shallow($(View, { id: 1 }), { 23 | context: { viewRegistry } 24 | }); 25 | assert.equal(view.html(), "
<div>text-1</div>"); 26 | 27 | // Changing view id 28 | view.setProps({ id: 2 }); 29 | assert.equal(view.html(), "<label>text-2</label>"); 30 | 31 | // Updating view props 32 | viewRegistry.update({ 33 | updated: [ 34 | { component_name: "comp-2", view_id: 2, props: { text: "text-3" } } 35 | ], 36 | removed: [] 37 | }); 38 | view.update(); 39 | assert.equal(view.html(), "<label>text-3</label>"); 40 | }); 41 | 42 | test("action dispatching", () => { 43 | const actions = []; 44 | const viewRegistry = new ViewRegistry({ onAction: a => actions.push(a) }); 45 | viewRegistry.update({ 46 | updated: [{ component_name: "component", view_id: 42, props: {} }], 47 | removed: [] 48 | }); 49 | 50 | let dispatch; 51 | viewRegistry.addComponent("component", props => { 52 | dispatch = props.dispatch; 53 | return $("div"); 54 | }); 55 | 56 | const view = shallow($(View, { id: 42 }), { 57 | context: { viewRegistry } 58 | }); 59 | assert.equal(view.html(), "
<div></div>"); 60 | 61 | dispatch({ type: "foo" }); 62 | dispatch({ type: "bar" }); 63 | assert.deepEqual(actions, [ 64 | { view_id: 42, action: { type: "foo" } }, 65 | { view_id: 42, action: { type: "bar" } } 66 | ]); 67 | }); 68 | 69 | test("focus", () => { 70 | const focusedViewIds = []; 71 | const viewRegistry = new ViewRegistry({ 72 | onAction: a => focusedViewIds.push(a.view_id) 73 | }); 74 | 75 | class TestComponent extends React.Component { 76 | render() { 77 | return $("div"); 78 | } 79 | 80 | focus() { 81 | this.props.dispatch({}); 82 | } 83 | } 84 | viewRegistry.addComponent("component", TestComponent); 85 | viewRegistry.update({ 86 | updated: [ 87 | { component_name: "component", view_id: 1, props: {} }, 88 | { component_name: "component", view_id: 2, props: {} } 89 | ], 90 | removed: [], 91 | focused: 1 92 | }); 93 | 94 | mount($(View, { id: 1 }), { context: { viewRegistry } }); 95 | mount($(View, { id: 2 }), { context: { viewRegistry } }); 96 | assert.deepEqual(focusedViewIds, [1]); 97 | 98 | viewRegistry.update({ updated: [], removed: [], focused: 2 }); 99 | assert.deepEqual(focusedViewIds, [1, 2]); 100 | 101 | viewRegistry.update({ updated: [], removed: [], focused: 1 }); 102 | assert.deepEqual(focusedViewIds, [1, 2, 1]); 103 | }); 104 | }); 105 | -------------------------------------------------------------------------------- /xray_ui/lib/text_editor/shaders.js: -------------------------------------------------------------------------------- 1 | exports.textBlendAttributes = { 2 | unitQuadVertex: 0, 3 | targetOrigin: 1, 4 | targetSize: 2, 5 | textColorRGBA: 3, 6 | atlasOrigin: 4, 7 | atlasSize: 5 8 | }; 9 | 10 | exports.textBlendVertex = ` 11 | #version 300 es 12 | 13 | layout (location = 0) in vec2 unitQuadVertex; 14 | layout (location = 1) in vec2 targetOrigin; 15 | layout (location = 2) in vec2 targetSize; 16 | layout (location = 3) in vec4 textColorRGBA; 17 | layout (location = 4) in vec2 atlasOrigin; 18 | layout (location = 5) in vec2 atlasSize; 19 | 20
| uniform vec2 viewportScale; 21 | uniform float scrollLeft; 22 | 23 | flat out vec4 textColor; 24 | out vec2 atlasPosition; 25 | 26 | void main() { 27 | vec2 targetPixelPosition = (targetOrigin + unitQuadVertex * targetSize) - vec2(scrollLeft, 0.0); 28 | vec2 targetPosition = targetPixelPosition * viewportScale + vec2(-1.0, 1.0); 29 | gl_Position = vec4(targetPosition, 0.0, 1.0); 30 | textColor = textColorRGBA * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 1.0); 31 | // Conversion to sRGB. 32 | textColor = textColor * textColor; 33 | textColor = textColorRGBA; 34 | atlasPosition = atlasOrigin + unitQuadVertex * atlasSize; 35 | } 36 | `.trim(); 37 | 38 | exports.textBlendPass1Fragment = ` 39 | #version 300 es 40 | 41 | precision mediump float; 42 | 43 | layout(location = 0) out vec4 outColor; 44 | flat in vec4 textColor; 45 | in vec2 atlasPosition; 46 | 47 | uniform sampler2D atlasTexture; 48 | 49 | void main() { 50 | vec3 atlasColor = texture(atlasTexture, atlasPosition).rgb; 51 | vec3 textColorRGB = textColor.rgb; 52 | vec3 correctedAtlasColor = mix(vec3(1.0) - atlasColor, sqrt(vec3(1.0) - atlasColor * atlasColor), textColorRGB); 53 | outColor = vec4(correctedAtlasColor, 1.0); 54 | } 55 | `.trim(); 56 | 57 | exports.textBlendPass2Fragment = ` 58 | #version 300 es 59 | 60 | precision mediump float; 61 | 62 | layout(location = 0) out vec4 outColor; 63 | flat in vec4 textColor; 64 | in vec2 atlasPosition; 65 | 66 | uniform sampler2D atlasTexture; 67 | 68 | void main() { 69 | vec3 atlasColor = texture(atlasTexture, atlasPosition).rgb; 70 | vec3 textColorRGB = textColor.rgb; 71 | vec3 correctedAtlasColor = mix(vec3(1.0) - atlasColor, sqrt(vec3(1.0) - atlasColor * atlasColor), textColorRGB); 72 | vec3 adjustedForegroundColor = textColorRGB * correctedAtlasColor; 73 | outColor = vec4(adjustedForegroundColor, 1.0); 74 | } 75 | `.trim(); 76 | 77 | exports.solidAttributes = { 78 | unitQuadVertex: 0, 79 | targetOrigin: 1, 80 | targetSize: 2, 81 | colorRGBA: 3 82 | }; 83 | 84 
| exports.solidVertex = ` 85 | #version 300 es 86 | 87 | layout (location = 0) in vec2 unitQuadVertex; 88 | layout (location = 1) in vec2 targetOrigin; 89 | layout (location = 2) in vec2 targetSize; 90 | layout (location = 3) in vec4 colorRGBA; 91 | flat out vec4 color; 92 | 93 | uniform vec2 viewportScale; 94 | uniform float scrollLeft; 95 | 96 | void main() { 97 | vec2 targetPixelPosition = (targetOrigin + unitQuadVertex * targetSize) - vec2(scrollLeft, 0.0); 98 | vec2 targetPosition = targetPixelPosition * viewportScale + vec2(-1.0, 1.0); 99 | gl_Position = vec4(targetPosition, 0.0, 1.0); 100 | color = colorRGBA * vec4(1.0 / 255.0, 1.0 / 255.0, 1.0 / 255.0, 1.0); 101 | } 102 | `.trim(); 103 | 104 | exports.solidFragment = ` 105 | #version 300 es 106 | 107 | precision mediump float; 108 | 109 | flat in vec4 color; 110 | layout (location = 0) out vec4 outColor; 111 | 112 | void main() { 113 | outColor = color; 114 | } 115 | `.trim(); 116 | -------------------------------------------------------------------------------- /memo_core/src/operation_queue.rs: -------------------------------------------------------------------------------- 1 | use crate::btree::{Cursor, Dimension, Edit, Item, KeyedItem, Tree}; 2 | use crate::time; 3 | use std::fmt::Debug; 4 | use std::ops::{Add, AddAssign}; 5 | 6 | pub trait Operation: Clone + Debug + Eq { 7 | fn timestamp(&self) -> time::Lamport; 8 | } 9 | 10 | #[derive(Clone, Debug)] 11 | pub struct OperationQueue<T: Operation>(Tree<T>); 12 | 13 | #[derive(Clone, Copy, Debug, Default, Eq, Ord, PartialEq, PartialOrd)] 14 | pub struct OperationKey(time::Lamport); 15 | 16 | #[derive(Clone, Copy, Debug, Default, Eq, PartialEq)] 17 | pub struct OperationSummary { 18 | key: OperationKey, 19 | len: usize, 20 | } 21 | 22 | impl<T: Operation> OperationQueue<T> { 23 | pub fn new() -> Self { 24 | OperationQueue(Tree::new()) 25 | } 26 | 27 | #[cfg(test)] 28 | pub fn is_empty(&self) -> bool { 29 | self.0.is_empty() 30 | } 31 | 32 | pub fn len(&self) -> usize { 33 |
self.0.summary().len 34 | } 35 | 36 | pub fn insert(&mut self, mut ops: Vec<T>) { 37 | ops.sort_by_key(|op| op.timestamp()); 38 | ops.dedup_by_key(|op| op.timestamp()); 39 | let mut edits = ops 40 | .into_iter() 41 | .map(|op| Edit::Insert(op)) 42 | .collect::<Vec<Edit<T>>>(); 43 | self.0.edit(&mut edits); 44 | } 45 | 46 | pub fn drain(&mut self) -> Cursor<T> { 47 | let cursor = self.0.cursor(); 48 | self.0 = Tree::new(); 49 | cursor 50 | } 51 | } 52 | 53 | impl<T: Operation> Item for T { 54 | type Summary = OperationSummary; 55 | 56 | fn summarize(&self) -> Self::Summary { 57 | OperationSummary { 58 | key: OperationKey(self.timestamp()), 59 | len: 1, 60 | } 61 | } 62 | } 63 | 64 | impl<T: Operation> KeyedItem for T { 65 | type Key = OperationKey; 66 | 67 | fn key(&self) -> Self::Key { 68 | OperationKey(self.timestamp()) 69 | } 70 | } 71 | 72 | impl<'a> AddAssign<&'a Self> for OperationSummary { 73 | fn add_assign(&mut self, other: &Self) { 74 | assert!(self.key < other.key); 75 | self.key = other.key; 76 | self.len += other.len; 77 | } 78 | } 79 | 80 | impl<'a> Add<&'a Self> for OperationSummary { 81 | type Output = Self; 82 | 83 | fn add(self, other: &Self) -> Self { 84 | assert!(self.key < other.key); 85 | OperationSummary { 86 | key: other.key, 87 | len: self.len + other.len, 88 | } 89 | } 90 | } 91 | 92 | impl Dimension for OperationKey { 93 | fn from_summary(summary: &OperationSummary) -> Self { 94 | summary.key 95 | } 96 | } 97 | 98 | impl<'a> Add<&'a Self> for OperationKey { 99 | type Output = Self; 100 | 101 | fn add(self, other: &Self) -> Self { 102 | assert!(self < *other); 103 | *other 104 | } 105 | } 106 | 107 | impl<'a> AddAssign<&'a Self> for OperationKey { 108 | fn add_assign(&mut self, other: &Self) { 109 | assert!(*self < *other); 110 | *self = *other; 111 | } 112 | } 113 | 114 | #[cfg(test)] 115 | mod tests { 116 | use super::*; 117 | use crate::ReplicaId; 118 | 119 | #[test] 120 | fn test_len() { 121 | let mut clock = time::Lamport::new(ReplicaId::from_u128(1)); 122 | 123 | let mut queue =
OperationQueue::new(); 124 | assert_eq!(queue.len(), 0); 125 | 126 | queue.insert(vec![ 127 | TestOperation(clock.tick()), 128 | TestOperation(clock.tick()), 129 | ]); 130 | assert_eq!(queue.len(), 2); 131 | 132 | queue.insert(vec![TestOperation(clock.tick())]); 133 | assert_eq!(queue.len(), 3); 134 | 135 | drop(queue.drain()); 136 | assert_eq!(queue.len(), 0); 137 | 138 | queue.insert(vec![TestOperation(clock.tick())]); 139 | assert_eq!(queue.len(), 1); 140 | } 141 | 142 | #[derive(Clone, Debug, Eq, PartialEq)] 143 | struct TestOperation(time::Lamport); 144 | 145 | impl Operation for TestOperation { 146 | fn timestamp(&self) -> time::Lamport { 147 | self.0 148 | } 149 | } 150 | } 151 | -------------------------------------------------------------------------------- /xray_core/src/cross_platform.rs: -------------------------------------------------------------------------------- 1 | use std::borrow::Cow; 2 | #[cfg(unix)] 3 | use std::ffi::{OsStr, OsString}; 4 | #[cfg(unix)] 5 | use std::path::PathBuf; 6 | 7 | pub const UNIX_MAIN_SEPARATOR: u8 = b'/'; 8 | 9 | #[derive(Clone, Debug, Eq, PartialEq, Ord, PartialOrd, Serialize, Deserialize)] 10 | pub enum PathComponent { 11 | Unix(Vec<u8>), 12 | } 13 | 14 | #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize)] 15 | pub struct Path(Option<PathState>); 16 | 17 | #[derive(Clone, Debug, Eq, PartialEq, Serialize, Deserialize)] 18 | enum PathState { 19 | Unix(Vec<u8>), 20 | } 21 | 22 | impl PathComponent { 23 | pub fn to_string_lossy(&self) -> Cow<str> { 24 | match self { 25 | &PathComponent::Unix(ref chars) => String::from_utf8_lossy(&chars), 26 | } 27 | } 28 | } 29 | 30 | impl Path { 31 | pub fn new() -> Self { 32 | Path(None) 33 | } 34 | 35 | pub fn push(&mut self, component: &PathComponent) { 36 | if let Some(ref mut path) = self.0 { 37 | match path { 38 | &mut PathState::Unix(ref mut path_chars) => { 39 | let &PathComponent::Unix(ref component_chars) = component; 40 | if path_chars.len() != 0 { 41 |
path_chars.push(UNIX_MAIN_SEPARATOR); 42 | } 43 | path_chars.extend(component_chars); 44 | } 45 | } 46 | } else { 47 | match component { 48 | &PathComponent::Unix(ref chars) => self.0 = Some(PathState::Unix(chars.clone())), 49 | } 50 | } 51 | } 52 | 53 | pub fn push_path(&mut self, other: &Self) { 54 | if let Some(ref mut path) = self.0 { 55 | match path { 56 | &mut PathState::Unix(ref mut path_chars) => { 57 | if let Some(ref other) = other.0 { 58 | let &PathState::Unix(ref component_chars) = other; 59 | if path_chars.len() != 0 { 60 | path_chars.push(UNIX_MAIN_SEPARATOR); 61 | } 62 | path_chars.extend(component_chars); 63 | } 64 | } 65 | } 66 | } else { 67 | *self = other.clone(); 68 | } 69 | } 70 | 71 | #[cfg(unix)] 72 | pub fn to_path_buf(&self) -> PathBuf { 73 | use std::os::unix::ffi::OsStrExt; 74 | 75 | if let Some(ref path) = self.0 { 76 | match path { 77 | &PathState::Unix(ref chars) => OsStr::from_bytes(chars).into(), 78 | } 79 | } else { 80 | PathBuf::new() 81 | } 82 | } 83 | 84 | #[cfg(test)] 85 | pub fn to_string_lossy(&self) -> String { 86 | if let Some(ref path) = self.0 { 87 | match path { 88 | &PathState::Unix(ref chars) => String::from_utf8_lossy(chars).into_owned(), 89 | } 90 | } else { 91 | String::new() 92 | } 93 | } 94 | } 95 | 96 | #[cfg(unix)] 97 | impl From<OsString> for Path { 98 | fn from(path: OsString) -> Self { 99 | use std::os::unix::ffi::OsStringExt; 100 | Path(Some(PathState::Unix(path.into_vec()))) 101 | } 102 | } 103 | 104 | #[cfg(unix)] 105 | impl From<OsString> for PathComponent { 106 | fn from(string: OsString) -> Self { 107 | use std::os::unix::ffi::OsStringExt; 108 | PathComponent::Unix(string.into_vec()) 109 | } 110 | } 111 | 112 | #[cfg(unix)] 113 | impl<'a> From<&'a OsStr> for PathComponent { 114 | fn from(string: &'a OsStr) -> Self { 115 | use std::os::unix::ffi::OsStrExt; 116 | PathComponent::Unix(string.as_bytes().to_owned()) 117 | } 118 | } 119 | 120 | #[cfg(test)] 121 | impl<'a> From<&'a str> for PathComponent { 122 | #[cfg(unix)] 123 |
fn from(string: &'a str) -> Self { 124 | PathComponent::Unix(string.as_bytes().to_owned()) 125 | } 126 | } 127 | 128 | #[cfg(test)] 129 | impl<'a> From<&'a str> for Path { 130 | #[cfg(unix)] 131 | fn from(string: &'a str) -> Self { 132 | Path(Some(PathState::Unix(string.as_bytes().to_owned()))) 133 | } 134 | } 135 | -------------------------------------------------------------------------------- /xray_ui/lib/file_finder.js: -------------------------------------------------------------------------------- 1 | const React = require("react"); 2 | const ReactDOM = require("react-dom"); 3 | const { styled } = require("styletron-react"); 4 | const $ = React.createElement; 5 | const { ActionContext, Action } = require("./action_dispatcher"); 6 | 7 | const Root = styled("div", { 8 | boxShadow: "0 6px 12px -2px rgba(0, 0, 0, 0.4)", 9 | backgroundColor: "#f2f2f2", 10 | borderRadius: "6px", 11 | width: 500 + "px", 12 | padding: "10px", 13 | marginTop: "20px" 14 | }); 15 | 16 | const QueryInput = styled("input", { 17 | width: "100%", 18 | boxSizing: "border-box", 19 | padding: "5px", 20 | fontSize: "10pt", 21 | outline: "none", 22 | border: "1px solid #556de8", 23 | boxShadow: "0 0 0 1px #556de8", 24 | backgroundColor: "#ebeeff", 25 | borderRadius: "3px", 26 | color: "#232324" 27 | }); 28 | 29 | const SearchResultList = styled("ol", { 30 | listStyleType: "none", 31 | height: "200px", 32 | overflow: "auto", 33 | padding: 0 34 | }); 35 | 36 | const SearchResultListItem = styled("li", { 37 | listStyleType: "none", 38 | padding: "0.75em 1em", 39 | lineHeight: "2em", 40 | fontSize: "10pt", 41 | fontFamily: "sans-serif", 42 | borderBottom: "1px solid #dbdbdc" 43 | }); 44 | 45 | const SearchResultMatchedQuery = styled("b", { 46 | color: "#304ee2", 47 | fontWeight: "bold" 48 | }); 49 | 50 | class SelectedSearchResultListItem extends React.Component { 51 | render() { 52 | return $( 53 | styled(SearchResultListItem, { 54 | backgroundColor: "#dbdbdc" 55 | }), 56 | {}, 57 | 
...this.props.children 58 | ); 59 | } 60 | 61 | componentDidMount() { 62 | this.scrollIntoViewIfNeeded(); 63 | } 64 | 65 | componentDidUpdate() { 66 | this.scrollIntoViewIfNeeded(); 67 | } 68 | 69 | scrollIntoViewIfNeeded() { 70 | const domNode = ReactDOM.findDOMNode(this); 71 | if (domNode) domNode.scrollIntoViewIfNeeded(); 72 | } 73 | } 74 | 75 | module.exports = class FileFinder extends React.Component { 76 | constructor() { 77 | super(); 78 | this.didChangeQuery = this.didChangeQuery.bind(this); 79 | this.didChangeIncludeIgnored = this.didChangeIncludeIgnored.bind(this); 80 | } 81 | 82 | render() { 83 | return $( 84 | ActionContext, 85 | { add: "FileFinder" }, 86 | $( 87 | Root, 88 | null, 89 | $(QueryInput, { 90 | $ref: inputNode => (this.queryInput = inputNode), 91 | value: this.props.query, 92 | onChange: this.didChangeQuery 93 | }), 94 | $( 95 | SearchResultList, 96 | {}, 97 | ...this.props.results.map((result, i) => 98 | this.renderSearchResult(result, i === this.props.selected_index) 99 | ) 100 | ) 101 | ), 102 | $(Action, { type: "SelectPrevious" }), 103 | $(Action, { type: "SelectNext" }), 104 | $(Action, { type: "Confirm" }), 105 | $(Action, { type: "Close" }) 106 | ); 107 | } 108 | 109 | renderSearchResult({ positions, display_path }, isSelected) { 110 | let pathIndex = 0; 111 | let queryIndex = 0; 112 | const children = []; 113 | while (true) { 114 | if (pathIndex === positions[queryIndex]) { 115 | children.push( 116 | $(SearchResultMatchedQuery, null, display_path[pathIndex]) 117 | ); 118 | pathIndex++; 119 | queryIndex++; 120 | } else if (queryIndex < positions.length) { 121 | const nextPathIndex = positions[queryIndex]; 122 | children.push(display_path.slice(pathIndex, nextPathIndex)); 123 | pathIndex = nextPathIndex; 124 | } else { 125 | children.push(display_path.slice(pathIndex)); 126 | break; 127 | } 128 | } 129 | 130 | const item = isSelected 131 | ? 
SelectedSearchResultListItem 132 | : SearchResultListItem; 133 | return $(item, null, ...children); 134 | } 135 | 136 | focus() { 137 | this.queryInput.focus(); 138 | } 139 | 140 | didChangeQuery(event) { 141 | this.props.dispatch({ 142 | type: "UpdateQuery", 143 | query: event.target.value 144 | }); 145 | } 146 | 147 | didChangeIncludeIgnored(event) { 148 | this.props.dispatch({ 149 | type: "UpdateIncludeIgnored", 150 | include_ignored: event.target.checked 151 | }); 152 | } 153 | }; 154 | -------------------------------------------------------------------------------- /CONTRIBUTING.md: -------------------------------------------------------------------------------- 1 | # Contributing to Xray 2 | 3 | This project is still in the very early days, and isn't going to be usable for even basic editing for some time. At this point, we're looking for contributors that are willing to roll up their sleeves and solve problems. Please communicate with us however it makes sense, but in general opening a *pull request that fixes an issue* is going to be far more valuable than just reporting an issue. 4 | 5 | As the architecture stabilizes and the surface area of the project expands, there will be increasing opportunities to help out. To get some ideas for specific projects that could help in the short term, check out [issues that are labeled "help wanted"](https://github.com/atom/xray/issues?q=is%3Aopen+is%3Aissue+label%3A%22help+wanted%22). If you have an idea you'd like to pursue outside of these, that's awesome, but you may want to discuss it with us in an issue first to ensure it fits before spending too much time on it. 6 | 7 | It's really important to us to have a smooth on-ramp for contributors, and one great way you can contribute is by helping us improve this guide. If your experience is bumpy, can you open a pull request that makes it smoother for the next person? 
8 | 9 | ## Communicating with maintainers 10 | 11 | The best way to communicate with maintainers is by posting an issue to this repository. The more thought you put into articulating your question or idea, the more value you'll be adding to the community and the easier it will be for maintainers to respond. That said, just try your best. If you have something you want to say, we'd prefer that you say it imperfectly rather than not saying it at all. 12 | 13 | You can also communicate with maintainers or other community members on the `#xray` channel on Atom's public Slack instance. After you [request an invite via this form](http://atom-slack.herokuapp.com/), you can access our Slack instance at https://atomio.slack.com. 14 | 15 | ## Building 16 | 17 | So far, we have only built this project on macOS. If you'd like to help us improve our build or documentation to support other platforms, that would be a huge help! 18 | 19 | ### Install system dependencies 20 | 21 | #### Install Node v8.9.3 22 | 23 | To install Node, you can install [`nvm`](https://github.com/creationix/nvm) and then run `nvm install v8.9.3`. 24 | 25 | Later versions may work, but you should ideally run the build with the same version of Node that is bundled into Xray's current Electron dependency. If in doubt, you can check the version of the `electron` dependency in [`xray_electron/package.json`](https://github.com/atom/xray/blob/master/xray_electron/package.json), then run `process.versions.node` in the console of that version of Electron to ensure that these instructions haven't gotten out of date. 26 | 27 | #### Install Rust 28 | 29 | You can install Rust via [`rustup`](https://www.rustup.rs/). We currently require building on the nightly channel in order to use `wasm_bindgen` for browser support. 30 | 31 | #### Install Yarn 32 | 33 | Follow the [installation instructions](https://yarnpkg.com/en/docs/install) on the Yarn site.
34 | 35 | ### Run the build script 36 | 37 | This repository contains several components in top-level folders prefixed with `xray_*`. To build all of the components, simply run this in the root of the repository: 38 | 39 | ```sh 40 | script/build 41 | ``` 42 | 43 | To build a release version (which will run much faster): 44 | 45 | ```sh 46 | script/build --release 47 | ``` 48 | 49 | ## Running 50 | 51 | We currently *only* support launching the application via the CLI. For this to work, you need to set the `XRAY_SRC_PATH` environment variable to the location of your repository. The CLI also currently *requires* an argument: 52 | 53 | ```sh 54 | XRAY_SRC_PATH=. script/xray . 55 | ``` 56 | 57 | That assumes you built with `--release`. To run the debug version, use `xray_debug` instead: 58 | 59 | ```sh 60 | XRAY_SRC_PATH=. script/xray_debug . 61 | ``` 62 | 63 | Once a blank window has opened, press cmd-t to open the file selection menu. Search for a file, and press enter to open it. The contents of the file should appear in the window. If something does not go as expected, check the dev tools (cmd-shift-i) for errors. 64 | 65 | ## Running tests and benchmarks 66 | 67 | * All tests: `script/test` 68 | * Rust tests: `cargo test` in the root of the repository or a Rust subfolder. 69 | * Front-end tests: `cd xray_ui && yarn test` 70 | * Benchmarks: `cargo bench` 71 | -------------------------------------------------------------------------------- /docs/updates/2018_05_07.md: -------------------------------------------------------------------------------- 1 | # Update for May 7, 2018 2 | 3 | ## Contributions 4 | 5 | [@yajamon](https://github.com/yajamon) contributed [a fix for an oversight in our build script](https://github.com/atom/xray/pull/78) where we were specifying `+nightly` even though our repository is associated with a `rust-toolchain` file. Thanks!
6 | 7 | ## First internal demo is complete 8 | 9 | As I mentioned in the last update, we focused last week on preparing for an internal demo that presented at least a tiny slice of the Xray vision in a more tangible, interactive form. We spun up a headless Xray server as a DigitalOcean droplet and showed off remote shared workspaces, collaborative editing, and conversations tied to the code. We also put together a few slides demonstrating Xray's performance for various tasks such as fuzzy file-finding, moving large numbers of cursors, and scrolling. The response was really positive, and we've elected to continue the experiment into the next quarter. [@as-cii](https://github.com/as-cii) and [I](https://github.com/nathansobo) will continue to focus on Xray in the coming months, and we'll get a bit of support from [@maxbrunsfeld](https://github.com/maxbrunsfeld) in order to integrate [tree-sitter](https://github.com/tree-sitter/tree-sitter) as the basis of Xray's syntax awareness. 10 | 11 | ## Into the unknown with CRDTs 12 | 13 | As [I discussed in the first update](./2018_03_05.md#anchors-and-selections), using CRDTs in Xray's native buffer implementation allows us to create *anchors*, which are stable references to positions within a text file that maintain their logical position even after the buffer is subsequently edited. For our discussions feature, we use anchors to link each message to the range of the buffer that was selected at the time the message was sent. This allows you to select a code fragment and make a comment, then allow other participants to click on the message at some later time to jump to the code you had selected when you sent the message. For now, Xray only maintains all of this state in memory. The discussion history is lost as soon as you kill the process, and we deliberately avoid dropping buffers once they are open in order to preserve the validity of anchors.
This is obviously not going to work, and to fix it, we need to figure out how to persist each buffer's operation history. 14 | 15 | If we assume that buffers are never renamed and that history only ever marches forward, this is pretty easy. But the possibility of renames and interactions with Git (or other version control systems) make it interesting. We want to track a file's identity across renames and ensure that we load the appropriate history when the user switches branches, and these concerns have a lot of overlap with some other ideas we've been pondering that can loosely be described as "real-time version control". With a proof-of-concept for shared workspaces behind us, we think it's time to explore them. 16 | 17 | Currently, we represent buffers as CRDTs. We're interested in what happens if we take that further and treat the entire *repository* as a single unified CRDT data structure that is persisted to disk. Ideally, assuming Xray is used for every edit, we will be able to maintain a keystroke-level history of every edit to every file all the way back to the moment that each file was created, sort of like an infinite conflict-free undo history. But of course, there will be many cases where changes to files occur outside of Xray, so we'll need to gracefully handle those situations as well. We've decided to spend the next couple weeks exploring this. We'll probably spend most of our time clarifying our thoughts in writing at first before transitioning to coding. It's unclear exactly how much gold is at the end of this particular rainbow, but it seems worth a look. 18 | 19 | ## Strike out with futures 0.2 20 | 21 | On Friday, we spent an hour and a half upgrading `xray_core` to `futures` 0.2, only to discover that Tokio doesn't yet support that version 🙈. Luckily, it wasn't that much time wasted, but we did feel somewhat foolish for assuming that Tokio worked with it without checking first.
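The operation histories discussed above are ordered by Lamport timestamps (the `time::Lamport` values that `memo_core`'s `OperationQueue` sorts by). As a rough illustration of how such a clock orders edits across replicas, here is a hypothetical simplification — the field names, `u64` ids, and methods are assumptions for this sketch, not the actual `time::Lamport` API:

```rust
// Hypothetical, simplified Lamport timestamp. Deriving `Ord` compares
// `value` first, so causally later operations sort later; `replica_id`
// deterministically breaks ties between concurrent operations.
#[derive(Clone, Copy, Debug, Default, PartialEq, Eq, PartialOrd, Ord)]
pub struct Lamport {
    pub value: u64,
    pub replica_id: u64,
}

impl Lamport {
    pub fn new(replica_id: u64) -> Self {
        Lamport { value: 0, replica_id }
    }

    // Advance the clock for a local operation and return its timestamp.
    pub fn tick(&mut self) -> Self {
        self.value += 1;
        *self
    }

    // After observing a remote timestamp, jump ahead of it so our next
    // local operation is ordered after everything we've seen.
    pub fn observe(&mut self, remote: Lamport) {
        self.value = self.value.max(remote.value);
    }
}

fn main() {
    let mut a = Lamport::new(1);
    let mut b = Lamport::new(2);
    let t1 = a.tick(); // value 1 on replica 1
    b.observe(t1);
    let t2 = b.tick(); // value 2 on replica 2, ordered after t1
    assert!(t1 < t2);
    let mut c = Lamport::new(3);
    let t3 = c.tick(); // concurrent with t1; replica id breaks the tie
    assert!(t1 < t3 && t3 < t2);
    println!("{:?} < {:?} < {:?}", t1, t3, t2);
}
```

Because every replica agrees on this total order, operations can be merged into a queue (or a persisted history) in any delivery order and still come out sorted the same way everywhere.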
22 | 23 | ## Optimizations 24 | 25 | [@as-cii](https://github.com/as-cii) has been picking some low-hanging optimization fruit related to selections and editing. The [first](https://github.com/atom/xray/pull/79) is related to adding selections above and below the cursor. He's also been looking at [batching tree manipulation](https://github.com/atom/xray/tree/optimize-edit) when editing with multiple cursors, which is still in progress and is not yet associated with a PR. 26 | -------------------------------------------------------------------------------- /xray_ui/lib/app.js: -------------------------------------------------------------------------------- 1 | const propTypes = require("prop-types"); 2 | const React = require("react"); 3 | const { Client: StyletronClient } = require("styletron-engine-atomic"); 4 | const { Provider: StyletronProvider } = require("styletron-react"); 5 | const { ActionDispatcher } = require("./action_dispatcher"); 6 | const TextEditor = require("./text_editor/text_editor"); 7 | const ThemeProvider = require("./theme_provider"); 8 | const View = require("./view"); 9 | const ViewRegistry = require("./view_registry"); 10 | const $ = React.createElement; 11 | 12 | // TODO: Eventually, the theme should be provided to the view by the server 13 | const theme = { 14 | editor: { 15 | fontFamily: "Menlo", 16 | backgroundColor: "white", 17 | baseTextColor: "black", 18 | fontSize: 14, 19 | lineHeight: 1.5 20 | }, 21 | userColors: [ 22 | { r: 31, g: 150, b: 255, a: 1 }, 23 | { r: 64, g: 181, b: 87, a: 1 }, 24 | { r: 206, g: 157, b: 59, a: 1 }, 25 | { r: 216, g: 49, b: 176, a: 1 }, 26 | { r: 235, g: 221, b: 91, a: 1 } 27 | ] 28 | }; 29 | 30 | // TODO: Eventually, the keyBindings should be provided to the view by the server 31 | const keyBindings = [ 32 | { key: "cmd-t", context: "Workspace", action: "ToggleFileFinder" }, 33 | { key: "ctrl-t", context: "Workspace", action: "ToggleFileFinder" }, 34 | { key: "cmd-s", context: "Workspace", action: 
"SaveActiveBuffer" }, 35 | { key: "up", context: "FileFinder", action: "SelectPrevious" }, 36 | { key: "down", context: "FileFinder", action: "SelectNext" }, 37 | { key: "enter", context: "FileFinder", action: "Confirm" }, 38 | { key: "escape", context: "FileFinder", action: "Close" }, 39 | { key: "alt-shift-up", context: "TextEditor", action: "AddSelectionAbove" }, 40 | { key: "alt-shift-down", context: "TextEditor", action: "AddSelectionBelow" }, 41 | { key: "shift-up", context: "TextEditor", action: "SelectUp" }, 42 | { key: "shift-down", context: "TextEditor", action: "SelectDown" }, 43 | { key: "shift-left", context: "TextEditor", action: "SelectLeft" }, 44 | { key: "shift-right", context: "TextEditor", action: "SelectRight" }, 45 | { 46 | key: "alt-shift-left", 47 | context: "TextEditor", 48 | action: "SelectToBeginningOfWord" 49 | }, 50 | { 51 | key: "alt-shift-right", 52 | context: "TextEditor", 53 | action: "SelectToEndOfWord" 54 | }, 55 | { 56 | key: "shift-cmd-left", 57 | context: "TextEditor", 58 | action: "SelectToBeginningOfLine" 59 | }, 60 | { 61 | key: "shift-cmd-right", 62 | context: "TextEditor", 63 | action: "SelectToEndOfLine" 64 | }, 65 | { 66 | key: "shift-cmd-up", 67 | context: "TextEditor", 68 | action: "SelectToTop" 69 | }, 70 | { 71 | key: "shift-cmd-down", 72 | context: "TextEditor", 73 | action: "SelectToBottom" 74 | }, 75 | { key: "up", context: "TextEditor", action: "MoveUp" }, 76 | { key: "down", context: "TextEditor", action: "MoveDown" }, 77 | { key: "left", context: "TextEditor", action: "MoveLeft" }, 78 | { key: "right", context: "TextEditor", action: "MoveRight" }, 79 | { key: "alt-left", context: "TextEditor", action: "MoveToBeginningOfWord" }, 80 | { key: "alt-right", context: "TextEditor", action: "MoveToEndOfWord" }, 81 | { key: "cmd-left", context: "TextEditor", action: "MoveToBeginningOfLine" }, 82 | { key: "cmd-right", context: "TextEditor", action: "MoveToEndOfLine" }, 83 | { key: "cmd-up", context: "TextEditor", action: 
"MoveToTop" }, 84 | { key: "cmd-down", context: "TextEditor", action: "MoveToBottom" }, 85 | { key: "backspace", context: "TextEditor", action: "Backspace" }, 86 | { key: "delete", context: "TextEditor", action: "Delete" } 87 | ]; 88 | 89 | const styletronInstance = new StyletronClient(); 90 | class App extends React.Component { 91 | constructor(props) { 92 | super(props); 93 | } 94 | 95 | getChildContext() { 96 | return { 97 | inBrowser: this.props.inBrowser, 98 | viewRegistry: this.props.viewRegistry 99 | }; 100 | } 101 | 102 | render() { 103 | return $( 104 | StyletronProvider, 105 | { value: styletronInstance }, 106 | $( 107 | ThemeProvider, 108 | { theme }, 109 | $(ActionDispatcher, { keyBindings }, $(View, { id: 0 })) 110 | ) 111 | ); 112 | } 113 | } 114 | 115 | App.childContextTypes = { 116 | inBrowser: propTypes.bool, 117 | viewRegistry: propTypes.instanceOf(ViewRegistry) 118 | }; 119 | 120 | module.exports = App; 121 | -------------------------------------------------------------------------------- /docs/updates/2018_04_23.md: -------------------------------------------------------------------------------- 1 | # Update for April 23, 2018 2 | 3 | ## An initial implementation of shared workspaces is complete 4 | 5 | Last week we [completed the initial milestone for shared workspaces](https://github.com/atom/xray/pull/61), which allows you to connect to a remote Xray instance over TCP and open one of its workspaces in a new window. You can then use the file-finder to locate and open any file in the remote project and collaboratively edit buffers. 6 | 7 | There is obviously a ton more work to do until we can call our implementation of shared workspaces "done". Xray isn't even really usable right now for even basic text editing due to a long tail of missing features. Regardless, we think it's really important to have this infrastructure in place early. 
From here on out, every feature we build will be designed to support remote collaboration, and the foundation we've laid over the last two weeks will make that possible. We're pretty excited about the potential of the RPC system we've built. By combining remote procedure calls with eagerly replicated state and the judicious use of conflict-free replicated data types, we think we can abstract away the physical boundaries that separate individual machines and developers. 8 | 9 | ## Browser compatibility 10 | 11 | The [four pillars of Xray](../../README.md#foundational-priorities) are performance, real-time collaboration, browser compatibility, and extensibility. Eight weeks into focused development, we're feeling confident that Xray's architecture can meet our desired performance goals, and we've validated an approach that will bake collaboration into the heart of the system. Before burning down the long list of features that make up a usable text editor, we want to take some time to put the last two pillars in place by getting Xray working in a browser and laying the foundation for extensibility. By taking care of all four of these high-level concerns early, we'll ensure that they're supported as we build out the remainder of the system. 12 | 13 | To that end, we're now turning our attention to browser compatibility. We've actually been designing Xray with this goal in mind from the beginning. Today, Xray comprises two major components: `xray_server`, which contains the core application logic, and `xray_electron`, which presents the user interface and communicates with `xray_server` over a local socket. Now we need to create versions of these two components that run inside of a web browser. 14 | 15 | As a browser-based counterpart of the `xray_server` executable, we're creating `xray_wasm`, which will be compiled to WebAssembly and run in a web worker.
`xray_wasm` will share the majority of its implementation with `xray_server` via a dependency on the platform-agnostic `xray_core` crate. `xray_core` abstracts its communication with the outside world in terms of abstract traits defined by the Rust `futures` crate. Methods for connecting to remote peers and the user interface accept and return `Stream`s of binary buffers, and the application also expects to be passed `Executor` instances that can schedule futures to be executed in the foreground or background. 16 | 17 | In the browser, we'll move data via message channels and web sockets rather than using domain sockets and TCP, but these are just transport layers and are easy to abstract in terms of `Stream`s and `Sink`s so they can be passed into the platform-agnostic code. Similarly, we'll integrate with the browser's event loop by writing a custom `Executor` that uses the `Promise` API or `requestIdleCallback` to defer computation. 18 | 19 | We're using the `wasm-bindgen` crate to interoperate between Rust and JavaScript, and last Friday we managed to get asynchronous bi-directional communication working between Rust and JavaScript. This week, we plan to extract as much UI code as possible from `xray_electron` into a common library called `xray_web`. We'll then create `xray_browser`, which will package everything together into a browser-deployable bundle that runs the core application logic in a web worker and connects it to the UI running on a web page. 20 | 21 | Since browsers strongly sandbox interaction with the underlying machine, we will only support interactions with remote shared workspaces when Xray is running in a browser. We plan to add WebSockets support to Xray server so that it can accept connections from browser-based clients. We'll also add an `--http` option that exposes a simple web server from `xray_server` that serves a browser-based UI to clients. 
This will obviously require a security scheme to be useful in a production setting, but it seems like a good way to develop the browser-based user experience. A simple password-list based security scheme would also be pretty quick to add. 22 | -------------------------------------------------------------------------------- /xray_browser/static/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 52 | 53 | 54 |
67 | 68 | 69 | 70 | -------------------------------------------------------------------------------- /memo_core/README.md: -------------------------------------------------------------------------------- 1 | # Memo – Real-time collaboration for Git 2 | 3 | **This project is a work in progress. This README defines the vision, but it isn't fully implemented yet.** 4 | 5 | On its own, Git can only synchronize changes between clones of a repository after the changes are committed, which forces an asynchronous collaboration workflow. A repository may be replicated across several machines, but the working copies on each of these machines are completely independent of one another. 6 | 7 | Memo's goal is to extend Git to allow a single working copy to be replicated across multiple machines. Memo uses conflict-free replicated data types (CRDTs) to record all uncommitted changes for a working copy, allowing changes to be synchronized in real time across multiple replicas as they are actively edited. Memo also maintains an operation-based record of all changes, augmenting Git's commit graph with the fine-grained edit history behind each commit. 8 | 9 | Memo is divided into the following major components: 10 | 11 | * Protocol: Memo can be thought of as a protocol for real-time change synchronization between working copies that will eventually be open for anyone to implement. 12 | 13 | * Library: Memo provides a reference library implementation written in Rust that produces and consumes the Memo protocol messages to synchronize working trees. We plan to ship a "light client" version of the library that compiles to WebAssembly and exposes a virtual file system API, as well as a full version based on Libgit2 that synchronizes with a full replica on the local file system. The libraries could be used in web- or desktop-based editors to enable real-time collaboration on a shared working copy of a Git repository. 
14 | 15 | * Executable daemon: Memo will provide an executable (also written in Rust) that runs as a daemon process on the local machine. It will synchronize with an underlying file system and expose an RPC interface to support integrations with a variety of editors for collaborative buffer editing. 16 | 17 | * Xray: Memo spun out of Xray, which was an experiment to build a collaborative text editor. After the library stabilizes, we may decide to resume development of Xray as a first-class collaborative editor that is designed with Memo in mind. For now, we view the development of the more generalized technology as more important than building a new editor. 18 | 19 | Interesting features / design priorities are as follows: 20 | 21 | * Based on Git: When it comes to async collaboration and coarse-grained change synchronization, it's hard to beat Git. Memo doesn't try. Our goal is to enable Git users to share a single working copy and relay changes in real time. We may implement the ability to "fork" the state of a working copy, but we don't plan to implement asynchronous features such as branching and merging in terms of conflict-free replicated data types. For that you will continue to use Git. We will strive not to send or store any data that can already be derived from the state of the Git repository. 22 | 23 | * Distributed: Like Git, Memo is fully distributed. This means that no replica is privileged over any other. No specific network topology will be enforced by our core algorithms and it will be possible to disseminate operations in arbitrary ways. 24 | 25 | * Covers the whole working tree: Memo will merge concurrent edits to files along with modifications of the file system tree. One person can edit a file while another person moves it to a new directory, etc. 26 | 27 | * Open and general purpose: We want Memo to feel similar to Git, a tool that can be integrated in a variety of workflows and environments. 
We may build more centralized experiences on top of it, but the core protocol should remain open and decentralized. 28 | 29 | * More than just the source code: One of Memo's primary use cases is real-time collaboration, but effectively collaborating on source code often requires support from the environment to compile, run tests, statically analyze, etc. We intend to extend Memo's protocol to support primitives such as streams and shared buffers, which could support log output or a shared terminal, and annotations, which could support static analysis. An ideal scenario might see two developers with full replicas collaborating with a third developer in a browser, all viewing diagnostics generated by a language server running against a replica in the cloud and viewing test output from another machine. 30 | 31 | A fundamental goal is to make the distinction between physical machines less relevant during the actual process of writing code. Today, most code is developed locally, while some code may be developed in cloud-based IDEs. It shouldn't actually matter *where* the working tree is located, and it might be replicated to multiple machines simultaneously which are all contributing something to the overall experience of the participating developers. 
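One standard building block for the fully distributed, operation-based model described above is a version vector: each replica counts the operations it has emitted, and comparing vectors pointwise tells you whether one replica's state subsumes another's or whether the two are concurrent. The sketch below is illustrative only (Memo's actual types live in Rust), but it shows why no replica needs to be privileged and why operations can be disseminated in arbitrary ways.

```javascript
// Minimal version vector sketch (illustrative; not Memo's actual data types).
// Each replica increments its own counter when it emits an operation; a state
// has "seen" another when its vector is pointwise >= the other's.
class VersionVector {
  constructor() {
    this.counts = new Map(); // replicaId -> number of operations observed
  }

  increment(replicaId) {
    this.counts.set(replicaId, (this.counts.get(replicaId) || 0) + 1);
  }

  // True if `this` has observed every operation that `other` has.
  dominates(other) {
    for (const [replicaId, count] of other.counts) {
      if ((this.counts.get(replicaId) || 0) < count) return false;
    }
    return true;
  }

  // Pointwise maximum: the state after merging two replicas' histories.
  join(other) {
    const merged = new VersionVector();
    for (const m of [this.counts, other.counts]) {
      for (const [replicaId, count] of m) {
        merged.counts.set(
          replicaId,
          Math.max(merged.counts.get(replicaId) || 0, count)
        );
      }
    }
    return merged;
  }
}

const a = new VersionVector();
const b = new VersionVector();
a.increment("replica-a"); // a has emitted one op
b.increment("replica-b"); // b has emitted one op, concurrently with a
const merged = a.join(b); // the converged state after exchanging ops
```

Because neither `a` nor `b` dominates the other, their edits are concurrent; the join dominates both, which is exactly the converged state every replica reaches regardless of the order in which operations arrive.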
32 | -------------------------------------------------------------------------------- /memo_js/src/support.ts: -------------------------------------------------------------------------------- 1 | export type Tagged<BaseType, TagName> = BaseType & { __tag: TagName }; 2 | export type Oid = string; 3 | export type Path = string; 4 | export type ReplicaId = Tagged<string, "ReplicaId">; 5 | export type BufferId = Tagged<number, "BufferId">; 6 | export type SelectionSetId = Tagged<number, "SelectionSetId">; 7 | export type Point = { row: number; column: number }; 8 | export type Range = { start: Point; end: Point }; 9 | export type Change = Range & { text: string }; 10 | 11 | export interface BaseEntry { 12 | readonly depth: number; 13 | readonly name: string; 14 | readonly type: FileType; 15 | } 16 | 17 | export enum FileType { 18 | Directory = "Directory", 19 | Text = "Text" 20 | } 21 | 22 | export interface GitProvider { 23 | baseEntries(oid: Oid): AsyncIterable<BaseEntry>; 24 | baseText(oid: Oid, path: Path): Promise<string>; 25 | } 26 | 27 | export interface SelectionRanges { 28 | local: Map<SelectionSetId, ReadonlyArray<Range>>; 29 | remote: Map<ReplicaId, Map<SelectionSetId, ReadonlyArray<Range>>>; 30 | } 31 | 32 | interface MemoSelectionRanges { 33 | local: { [setId: number]: Array<Range> }; 34 | remote: { [replicaId: string]: Array<Array<Range>> }; 35 | } 36 | 37 | export class GitProviderWrapper { 38 | private git: GitProvider; 39 | 40 | constructor(git: GitProvider) { 41 | this.git = git; 42 | } 43 | 44 | baseEntries(oid: Oid): AsyncIteratorWrapper<BaseEntry> { 45 | return new AsyncIteratorWrapper( 46 | this.git.baseEntries(oid)[Symbol.asyncIterator]() 47 | ); 48 | } 49 | 50 | baseText(oid: Oid, path: Path): Promise<string> { 51 | return this.git.baseText(oid, path); 52 | } 53 | } 54 | 55 | export class AsyncIteratorWrapper<T> { 56 | private iterator: AsyncIterator<T>; 57 | 58 | constructor(iterator: AsyncIterator<T>) { 59 | this.iterator = iterator; 60 | } 61 | 62 | next(): Promise<IteratorResult<T>> { 63 | return this.iterator.next(); 64 | } 65 | } 66 | 67 | export type ChangeObserverCallback = ( 68 | change: { 69 | textChanges: ReadonlyArray<Change>; 70 | selectionRanges: SelectionRanges; 71 | } 72 | ) => void; 73 | 74
| export class ChangeObserver { 75 | emitter: Emitter; 76 | 77 | constructor() { 78 | this.emitter = new Emitter(); 79 | } 80 | 81 | onChange(bufferId: BufferId, callback: ChangeObserverCallback): Disposable { 82 | return this.emitter.on(`buffer-${bufferId}-change`, callback); 83 | } 84 | 85 | changed( 86 | bufferId: BufferId, 87 | textChanges: Change[], 88 | selectionRanges: MemoSelectionRanges 89 | ) { 90 | this.emitter.emit(`buffer-${bufferId}-change`, { 91 | textChanges, 92 | selectionRanges: fromMemoSelectionRanges(selectionRanges) 93 | }); 94 | } 95 | } 96 | 97 | export function fromMemoSelectionRanges( 98 | ranges: MemoSelectionRanges 99 | ): SelectionRanges { 100 | const local = new Map(); 101 | for (const setId in ranges.local) { 102 | local.set(setId, ranges.local[setId]); 103 | } 104 | 105 | const remote = new Map(); 106 | for (const replicaId in ranges.remote) { 107 | remote.set(replicaId, ranges.remote[replicaId]); 108 | } 109 | 110 | return { local, remote }; 111 | } 112 | 113 | export interface Disposable { 114 | dispose(): void; 115 | } 116 | 117 | export class CompositeDisposable implements Disposable { 118 | private disposables: Disposable[]; 119 | private disposed: boolean; 120 | 121 | constructor() { 122 | this.disposables = []; 123 | this.disposed = false; 124 | } 125 | 126 | add(disposable: Disposable) { 127 | this.disposables.push(disposable); 128 | } 129 | 130 | dispose() { 131 | if (!this.disposed) { 132 | this.disposed = true; 133 | for (const disposable of this.disposables) { 134 | disposable.dispose(); 135 | } 136 | } 137 | } 138 | } 139 | 140 | export type EmitterCallback = (params: any) => void; 141 | export class Emitter { 142 | private callbacks: Map<string, EmitterCallback[]>; 143 | 144 | constructor() { 145 | this.callbacks = new Map(); 146 | } 147 | 148 | emit(eventName: string, params: any) { 149 | const callbacks = this.callbacks.get(eventName); 150 | if (callbacks) { 151 | for (const callback of callbacks) { 152 | callback(params); 153 | } 154 | } 155 |
} 156 | 157 | on(eventName: string, callback: EmitterCallback): Disposable { 158 | let callbacks = this.callbacks.get(eventName); 159 | if (!callbacks) { 160 | callbacks = []; 161 | this.callbacks.set(eventName, callbacks); 162 | } 163 | callbacks.push(callback); 164 | return { 165 | dispose: () => { 166 | if (callbacks) { 167 | const callbackIndex = callbacks.indexOf(callback); 168 | if (callbackIndex >= 0) callbacks.splice(callbackIndex, 1); 169 | } 170 | } 171 | }; 172 | } 173 | } 174 | -------------------------------------------------------------------------------- /docs/updates/2018_05_28.md: -------------------------------------------------------------------------------- 1 | # Update for May 28, 2018 2 | 3 | ## Staying the course with CRDT-based version control 4 | 5 | In the last update, I said that we were abandoning our efforts to apply CRDTs to the entire repository, citing lack of clarity on what we were actually trying to achieve. However, after more conversations with colleagues, we've decided to proceed with that effort after all. After a lot more thinking and writing, we finally got enough clarity on our direction to start writing code last week. 6 | 7 | We still plan to continue developing Xray as a text editor, but we're adding a new top-level module to the repository called Memo, which is essentially a CRDT-based version control system that interoperates with Git. Xray will pull in Memo as a library and build directly on top of its primitives, but we also plan to make Memo available as a standalone executable in the future to support integration with other editors. 8 | 9 | Our plan is for Memo to complement Git with real-time capabilities. Like Git, Memo will support branches to track parallel streams of development, but in Memo, all replicas of a given branch will be synchronized in real-time without conflicts. 
For example, if you and a collaborator check out the same Memo branch, you'll be able to move a file while someone else is editing that file, and the state of the file tree will cleanly converge. 10 | 11 | Today, Git serves as a bridge between your local development environment and the cloud. When you push commits to GitHub, you're not only ensuring that your changes are safely persisted and shared with your teammates, but you're also potentially kicking off processes on one or more cloud-based services to run tests, perform analysis, or deploy to production. We want to make that feedback loop tighter, allowing you to share your changes with teammates and cloud-based services as you actively write code. 12 | 13 | With Memo, as you're editing, a CI provider like Travis could run tests across a cluster of machines and give you feedback about your changes immediately. A source code analysis service like Code Climate could literally become an extension of your IDE, giving you feedback long before you commit. 14 | 15 | Like Git, we also intend to persist each branch's history to a database, but your changes will be continuously persisted on every keystroke rather than only when you commit. After the fact, you'll be able to replay edits and identify specific points in a branch's evolution via a version vector. When we detect commits to the underlying Git repository, we'll automatically persist a snapshot of the current state of the Memo repository and map the commit SHA to a version vector. When a commit only contains a subset of the outstanding changes, we'll need a more complex representation than a pure version vector in order to account for the exact contents of the commit, since a version vector can only identify the state of the repository at a specific point in time. 16 | 17 | Last week, after getting clear on our goals, we started on a new tree implementation that we'll use to index the history of changes to the file system and text files.
It's based heavily on the tree that we already use within Xray to represent the buffer CRDT, but we're modifying it to support persistence of individual nodes in an external database. This will allow us to index the entire operational history of files without needing to load that entire history into memory during active editing. Once we complete the initial implementation of this B-tree, we'll use it to build out a CRDT representing the state of the file system. 18 | 19 | ## More progress on the editor 20 | 21 | While I've been focused on getting clarity in terms of version control, [@as-cii](https://github.com/as-cii) has continued to make progress on Xray itself. Last week he merged [a PR that adds support for horizontal scrolling](https://github.com/atom/xray/pull/90) to the editor, which was a bit more challenging than it might sound. 22 | 23 | To support horizontal scrolling, we need to know the width of the editor's content, which involves efficient tracking and measurement of the longest line. Previously, we maintained a vector of newline offsets as part of each chunk of inserted text to support efficient translation between 1-D and 2-D coordinates, which we implemented by binary-searching this vector. Antonio replaced this representation with a static binary tree, which is still stored inside a vector for efficiency. With the binary tree, we maintain the same offset information that was formerly available in the flat vector, but we also index maximal row lengths, which gives us the ability to request the longest row in an arbitrary region of the text in logarithmic time. 24 | 25 | I'll be out next week on vacation, so Antonio plans to focus primarily on more editor features until I'm back on Monday, June 4th. He'll start with rendering a gutter and line numbers, which he already got started last week. In light of my absence, there's a good chance we could go another 2 weeks before the next update. Thanks for your patience.
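The longest-row index described above can be sketched with a small array-backed binary tree (a sketch of the idea, not Xray's actual Rust implementation): leaves hold per-row lengths, interior nodes hold the maximum of their children, and both point updates after an edit and "longest row in a range" queries take O(log n).

```javascript
// A static binary tree over per-row lengths, stored flat in an array.
// tree[n..2n) are the leaves; tree[i] = max(tree[2i], tree[2i+1]).
// Cleanest when the row count is a power of two, which is enough for a sketch.
class RowLengthTree {
  constructor(rowLengths) {
    this.n = rowLengths.length;
    this.tree = new Array(2 * this.n).fill(0);
    for (let i = 0; i < this.n; i++) this.tree[this.n + i] = rowLengths[i];
    for (let i = this.n - 1; i >= 1; i--) {
      this.tree[i] = Math.max(this.tree[2 * i], this.tree[2 * i + 1]);
    }
  }

  // Update one row's length after an edit, fixing up ancestors: O(log n).
  setRowLength(row, length) {
    let i = this.n + row;
    this.tree[i] = length;
    for (i >>= 1; i >= 1; i >>= 1) {
      this.tree[i] = Math.max(this.tree[2 * i], this.tree[2 * i + 1]);
    }
  }

  // Maximum row length in the half-open row range [lo, hi): O(log n).
  maxInRange(lo, hi) {
    let max = 0;
    for (let l = this.n + lo, r = this.n + hi; l < r; l >>= 1, r >>= 1) {
      if (l & 1) max = Math.max(max, this.tree[l++]);
      if (r & 1) max = Math.max(max, this.tree[--r]);
    }
    return max;
  }
}

// Four rows with lengths 3, 10, 4, 7 columns.
const tree = new RowLengthTree([3, 10, 4, 7]);
```

Querying `maxInRange` over the visible rows gives the content width needed for the horizontal scrollbar without rescanning every line.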
26 | -------------------------------------------------------------------------------- /xray_ui/test/view_registry.test.js: -------------------------------------------------------------------------------- 1 | const assert = require("assert"); 2 | const ViewRegistry = require("../lib/view_registry"); 3 | 4 | suite("ViewRegistry", () => { 5 | test("props", () => { 6 | const registry = new ViewRegistry(); 7 | 8 | // Adding initial views 9 | registry.update({ 10 | updated: [ 11 | { component_name: "component-1", view_id: 1, props: { a: 1 } }, 12 | { component_name: "component-2", view_id: 2, props: { b: 2 } } 13 | ], 14 | removed: [] 15 | }); 16 | 17 | assert.deepEqual(registry.getProps(1), { a: 1 }); 18 | assert.deepEqual(registry.getProps(2), { b: 2 }); 19 | assert.throws(() => registry.getProps(3)); 20 | 21 | const propChanges = []; 22 | const disposeProps1Watch = registry.watchProps(1, () => 23 | propChanges.push("component-1") 24 | ); 25 | const disposeProps2Watch = registry.watchProps(2, () => 26 | propChanges.push("component-2") 27 | ); 28 | assert.throws(() => registry.watchProps(3, () => {})); 29 | 30 | // Updating existing view, removing existing view, adding a new view 31 | registry.update({ 32 | updated: [ 33 | { component_name: "component-2", view_id: 2, props: { b: 3 } }, 34 | { component_name: "component-3", view_id: 3, props: { c: 4 } } 35 | ], 36 | removed: [1] 37 | }); 38 | 39 | assert.throws(() => registry.getProps(1)); 40 | assert.deepEqual(registry.getProps(2), { b: 3 }); 41 | assert.deepEqual(registry.getProps(3), { c: 4 }); 42 | 43 | assert.throws(() => registry.watchProps(1, () => {})); 44 | assert.deepEqual(propChanges, ["component-2"]); 45 | 46 | // Stop watching props for a view 47 | propChanges.length = 0; 48 | disposeProps2Watch(); 49 | disposeProps2Watch(); // ensure disposing is idempotent 50 | registry.update({ 51 | updated: [{ component_name: "component-2", view_id: 2, props: { b: 4 } }], 52 | removed: [] 53 | }); 54 | 55 | 
assert.deepEqual(propChanges, []); 56 | }); 57 | 58 | test("components", () => { 59 | const registry = new ViewRegistry(); 60 | registry.update({ 61 | updated: [ 62 | { component_name: "comp-1", view_id: 1, props: {} }, 63 | { component_name: "comp-2", view_id: 2, props: {} }, 64 | { component_name: "comp-3", view_id: 3, props: {} } 65 | ], 66 | removed: [] 67 | }); 68 | 69 | const comp1A = () => {}; 70 | const comp2A = () => {}; 71 | registry.addComponent("comp-1", comp1A); 72 | registry.addComponent("comp-2", comp2A); 73 | assert.equal(registry.getComponent(1), comp1A); 74 | assert.equal(registry.getComponent(2), comp2A); 75 | assert.throws(() => registry.getComponent(3)); 76 | 77 | registry.removeComponent("comp-1"); 78 | assert.throws(() => registry.getComponent(1)); 79 | assert.equal(registry.getComponent(2), comp2A); 80 | 81 | const comp1B = () => {}; 82 | const comp2B = () => {}; 83 | registry.addComponent("comp-1", comp1B); 84 | assert.throws(() => registry.addComponent("comp-2", comp2B)); 85 | assert.equal(registry.getComponent(1), comp1B); 86 | }); 87 | 88 | test("dispatching actions", () => { 89 | const actions = []; 90 | const registry = new ViewRegistry({ onAction: a => actions.push(a) }); 91 | 92 | registry.update({ 93 | updated: [ 94 | { component_name: "component-1", view_id: 1, props: {} }, 95 | { component_name: "component-2", view_id: 2, props: {} } 96 | ], 97 | removed: [] 98 | }); 99 | 100 | registry.dispatchAction(1, { a: 1, b: 2 }); 101 | registry.dispatchAction(2, { c: 3 }); 102 | assert.throws(() => registry.dispatchAction(3, { d: 4 })); 103 | 104 | assert.deepEqual(actions, [ 105 | { view_id: 1, action: { a: 1, b: 2 } }, 106 | { view_id: 2, action: { c: 3 } } 107 | ]); 108 | }); 109 | 110 | test("focus", () => { 111 | const registry = new ViewRegistry({ onAction: () => {} }); 112 | 113 | const focusRequests = []; 114 | registry.update({ updated: [], removed: [], focused: 2 }); 115 | registry.update({ updated: [], removed: [],
focused: 1 }); 116 | registry.update({ updated: [], removed: [], focused: 1 }); 117 | const disposeWatch1 = registry.watchFocus(1, () => focusRequests.push(1)); 118 | registry.watchFocus(2, () => focusRequests.push(2)); 119 | registry.update({ updated: [], removed: [], focused: 1 }); 120 | registry.update({ updated: [], removed: [], focused: 2 }); 121 | 122 | assert.deepEqual(focusRequests, [1, 1, 2]); 123 | assert.throws(() => registry.watchFocus(1)); 124 | 125 | disposeWatch1() 126 | registry.update({ updated: [], removed: [], focused: 1 }); 127 | registry.update({ updated: [], removed: [], focused: 2 }); 128 | assert.doesNotThrow(() => registry.watchFocus(1)); 129 | assert.deepEqual(focusRequests, [1, 1, 2, 2]); 130 | }); 131 | }); 132 | -------------------------------------------------------------------------------- /xray_ui/test/action_dispatcher.test.js: -------------------------------------------------------------------------------- 1 | const assert = require("assert"); 2 | const propTypes = require("prop-types"); 3 | const React = require("react"); 4 | const { mount } = require("./helpers/component_helpers"); 5 | const $ = React.createElement; 6 | const { 7 | ActionDispatcher, 8 | ActionContext, 9 | Action, 10 | keystrokeStringForEvent 11 | } = require("../lib/action_dispatcher"); 12 | 13 | suite("ActionDispatcher", () => { 14 | test("dispatching an action via a keystroke", () => { 15 | const Component = props => 16 | $( 17 | ActionDispatcher, 18 | { keyBindings: props.keyBindings }, 19 | $( 20 | "div", 21 | null, 22 | $( 23 | ActionContext, 24 | { add: ["a", "b"] }, 25 | $(Action, { type: "Action1" }), 26 | $(Action, { type: "Action2" }), 27 | $(Action, { type: "Action3" }), 28 | $( 29 | "div", 30 | null, 31 | $( 32 | ActionContext, 33 | { add: ["c"], remove: ["a"] }, 34 | $(Action, { type: "Action4" }), 35 | $("div", { id: "target" }) 36 | ) 37 | ) 38 | ) 39 | ) 40 | ); 41 | 42 | let dispatchedActions; 43 | const keyBindings = [ 44 | { key: "ctrl-a", 
context: "a b", action: "Action1" }, 45 | { key: "ctrl-a", context: "b c", action: "Action4" }, 46 | { key: "ctrl-b", context: "a b", action: "Action2" }, 47 | { key: "ctrl-c", context: "a b", action: "UnregisteredAction" } 48 | ]; 49 | const component = mount($(Component, { keyBindings }), { 50 | context: { 51 | dispatchAction: action => dispatchedActions.push(action.type) 52 | }, 53 | childContextTypes: { dispatchAction: propTypes.func } 54 | }); 55 | const target = component.find("#target"); 56 | 57 | // Dispatch action when finding the first context/keybinding that matches the event... 58 | dispatchedActions = []; 59 | target.simulate("keyDown", { ctrlKey: true, key: "a" }); 60 | assert.deepEqual(dispatchedActions, ["Action4"]); 61 | 62 | // ...and walk up the DOM until a matching context is found. 63 | dispatchedActions = []; 64 | target.simulate("keyDown", { ctrlKey: true, key: "b" }); 65 | assert.deepEqual(dispatchedActions, ["Action2"]); 66 | 67 | // Override a previous keybinding by specifying it later in the list. 68 | dispatchedActions = []; 69 | keyBindings.push({ key: "ctrl-b", context: "a b", action: "Action3" }); 70 | target.simulate("keyDown", { ctrlKey: true, key: "b" }); 71 | assert.deepEqual(dispatchedActions, ["Action3"]); 72 | 73 | // Simulate a keystroke that matches a context/keybinding but that maps to an unknown action.
74 | dispatchedActions = []; 75 | target.simulate("keyDown", { ctrlKey: true, key: "c" }); 76 | assert.deepEqual(dispatchedActions, []); 77 | }); 78 | 79 | test("pre-dispatch hook", () => { 80 | let preDispatchHooks = 0; 81 | const Component = props => 82 | $( 83 | ActionDispatcher, 84 | { keyBindings: props.keyBindings }, 85 | $( 86 | "div", 87 | null, 88 | $( 89 | ActionContext, 90 | { add: ["some-context"] }, 91 | $(Action, { 92 | onWillDispatch: () => preDispatchHooks++, 93 | type: "Action" 94 | }), 95 | $("div", { id: "target" }) 96 | ) 97 | ) 98 | ); 99 | 100 | const component = mount( 101 | $(Component, { 102 | keyBindings: [{ key: "a", context: "some-context", action: "Action" }] 103 | }), 104 | { 105 | context: { dispatchAction: () => {} }, 106 | childContextTypes: { dispatchAction: propTypes.func } 107 | } 108 | ); 109 | component.find("#target").simulate("keydown", { key: "a" }); 110 | assert.equal(preDispatchHooks, 1); 111 | }); 112 | 113 | test("keystrokeStringForEvent", () => { 114 | assert.equal(keystrokeStringForEvent({ key: "a" }), "a"); 115 | assert.equal(keystrokeStringForEvent({ key: "Backspace" }), "backspace"); 116 | assert.equal(keystrokeStringForEvent({ key: "ArrowUp" }), "up"); 117 | assert.equal(keystrokeStringForEvent({ key: "ArrowDown" }), "down"); 118 | assert.equal(keystrokeStringForEvent({ key: "ArrowLeft" }), "left"); 119 | assert.equal(keystrokeStringForEvent({ key: "ArrowRight" }), "right"); 120 | 121 | assert.equal( 122 | keystrokeStringForEvent({ ctrlKey: true, key: "s" }), 123 | "ctrl-s" 124 | ); 125 | assert.equal( 126 | keystrokeStringForEvent({ ctrlKey: true, altKey: true, key: "s" }), 127 | "ctrl-alt-s" 128 | ); 129 | assert.equal( 130 | keystrokeStringForEvent({ 131 | ctrlKey: true, 132 | altKey: true, 133 | metaKey: true, 134 | key: "s" 135 | }), 136 | "ctrl-alt-cmd-s" 137 | ); 138 | assert.equal( 139 | keystrokeStringForEvent({ 140 | ctrlKey: true, 141 | altKey: true, 142 | metaKey: true, 143 | shiftKey: true, 144 | key: 
"s" 145 | }), 146 | "ctrl-alt-shift-cmd-s" 147 | ); 148 | }); 149 | }); 150 | -------------------------------------------------------------------------------- /docs/updates/2018_05_14.md: -------------------------------------------------------------------------------- 1 | # Update for May 14, 2018 2 | 3 | ## More optimizations 4 | 5 | Last week we spent a couple of days speeding up multi-cursor editing. Specifically, we wanted to take advantage of the batched nature of this operation and edit the buffer's CRDT in a single pass, as opposed to performing a splice for each range. Please take a look at [#82](https://github.com/atom/xray/pull/82) for all the details. 6 | 7 | There is still some work to do in that area to deliver a smooth experience when editing with thousands of cursors, but we are planning to get back to it once we have fleshed out more features. 8 | 9 | ## Thoughts on further applications of CRDTs 10 | 11 | After demoing Xray to our colleagues, we got a lot of interest in how Xray's CRDT-based approach to buffers might apply to the problem of versioning generally, so we took some time to explore it last week. We were intrigued by the idea of a CRDT-based analog to Git, a kind of operation-oriented version control system that allowed for real-time synchronization among several replicas of the same working tree and persistence of all operations. After spinning our wheels quite a bit, we've concluded that we really need to get clear on the specific problems we might like to solve. They are as follows: 12 | 13 | * Replay: We'd like to allow developers to record a collaboration session and cross-reference their keystrokes to audio, so that it could be replayed later. Assuming people were willing to opt into this, it could provide deep insights into the thought processes behind a given piece of code to future developers. This use case is really all about persisting the operations, and has nothing to do with replicating the entire file tree.
14 | 15 | * Permalinks: Today we have anchors, which automatically track a logical position in a buffer even in the presence of concurrent and subsequent edits, but these anchors are only valid for the lifetime of the buffer in memory. We'd like to be able to create an anchor that can always be mapped to a logical position at arbitrary points in the future, even thousands of commits later. Again, this has nothing to do with full replication. It's really about *indexing* the operations we persist and tracking the movement of files over time so that we can always efficiently retrieve a logical position for an anchor. 16 | 17 | * Streaming persistence and code broadcast: Today, code lives on your local machine until you save it, commit it, and push it to the cloud. We want to persist your edit history as it is typed and optionally stream it into the cloud. If your computer spontaneously combusts, your up-to-the-minute edit history is still saved on the server. If you elect for your edits to be public, colleagues or community members could watch your edit stream in real time. This would require full replication if you wanted to allow another party to make *edits* to the working tree. If the server is just storing your operations, there's really no need to deal with concurrency. It *might* be cool if someone could come along and edit the server's replica of the work tree and have their edits automatically appear in your replica, but is that actually a good user experience? Real-time collaboration requires tight coordination, so it might be jarring to receive edits from someone you didn't actively invite to your workspace. 18 | 19 | * File-system mirroring for third-party editors: We'd like to allow other editors to use Xray in headless mode as a collaboration engine. In this use case, we'd need to relay edit operations through Xray via specific APIs, but it might be helpful if Xray could mirror the state of a remote project to the local file system.
That way, an existing editor could use its ordinary mechanisms for dealing with local files to interact with the remote workspace, and wouldn't need to perform file system interactions over RPC, which would simplify integration. 20 | 21 | I wanted to think through the design implications of these various features early to determine whether any of them had an impact on Xray's core architecture, and after a lot of thinking, my conclusion is that it should be okay to defer these features for now. I had envisioned a single unified design that elegantly addressed all of these features in a single replicated structure, but now we think that the cost of building such a structure probably outweighs its benefits. 22 | 23 | For now, we've decided to defer these concerns until the point that replay, permalinks, or streaming persistence are actually the next most important feature we want to add. Our instinct is that when that time comes, we'll be able to address these features in an additive fashion, and that it doesn't make sense to invest in adding support for them today. 24 | 25 | In retrospect, last week was a bit of a distraction. I've done more up-front design thinking for Xray than I ever have for any other project, and it's worked out pretty well overall. But after last week, I think we're approaching diminishing returns for up-front architectural design. We've validated that the current design can be performant and collaborative, and it's seeming like we've struck a nice balance between simplicity and power. Now it's time to return to a more incremental strategy and continually focus on the next-most-important feature until we have a useful editor. 26 | 27 | ## The path forward 28 | 29 | This week, we'll turn our focus to implementing save as well as a simple key bindings system, which [I wrote about in a previous update](2018_03_26.md#thoughts-on-key-bindings-and-actions). We also plan to clarify our short term roadmap, and we'll post an update about that next week.
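The anchor behavior mentioned in the permalinks bullet can be illustrated with a toy model (purely illustrative; Xray's real anchors live inside the buffer CRDT): instead of storing an offset, an anchor stores the stable id of the character it's attached to, so it keeps resolving to the same logical position even after edits land earlier in the buffer.

```javascript
// Toy buffer: an array of { id, ch } where every inserted character gets a
// stable, never-reused id. Offsets shift as the buffer is edited; ids don't.
let nextId = 0;
const buffer = [];

function insert(index, text) {
  const chars = [...text].map(ch => ({ id: nextId++, ch }));
  buffer.splice(index, 0, ...chars);
}

// An anchor is just the id of the character currently at `index`.
function anchorAt(index) {
  return buffer[index].id;
}

// Resolving an anchor scans for its character's current offset. (A real
// implementation would index this; a linear scan keeps the sketch honest.)
function resolve(anchorId) {
  return buffer.findIndex(c => c.id === anchorId);
}

insert(0, "world");
const anchor = anchorAt(0); // logically: the "w" of "world"
insert(0, "hello ");        // an edit earlier in the buffer shifts offsets
```

The anchor was created when "w" sat at offset 0, yet after the second insert it resolves to offset 6: the logical position survived the edit. Persisting operation ids durably is what would let such an anchor survive "thousands of commits later."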
30 | -------------------------------------------------------------------------------- /docs/updates/2018_07_23.md: -------------------------------------------------------------------------------- 1 | # Update for July 23, 2018 2 | 3 | ## Contributions 4 | 5 | [@MoritzKn](https://github.com/MoritzKn) [fixed a bug](https://github.com/atom/xray/pull/115) where we were incorrectly calculating the position to place the cursor when inserting strings containing multibyte characters. Thanks! 6 | 7 | ## Convergence for replicated directory trees 8 | 9 | Late last week we were able to achieve convergence in our randomized tests of replicated directory trees. As I mentioned in the last update, the biggest challenge was the possibility of concurrent moves introducing cycles. Our proposed solution of breaking cycles via synthetic "fixup" operations worked out well, but determining exactly *which* fixup operations to generate was still a challenging problem. 10 | 11 | In certain diabolical scenarios, reverting a move to break one cycle could end up introducing a second cycle. By reverting *multiple* moves, however, it should always be possible to end up with a directory tree that is free of cycles, and so that's what we do. Whenever a cycle is detected, we continually revert the most recent move that contributes to that cycle, ignoring any moves that have already been reverted. Eventually, we're guaranteed to end up with a tree that's free of cycles. Though we don't have a formal proof to back up our intuition, [we've been unable to find a failing scenario over a million randomized trials](https://github.com/atom/xray/blob/6c49587aad45d7880449668e4b882267435ff763/eon/src/fs2.rs#L1523), and we're ready to move forward. 12 | 13 | We applied the same "fixup" strategy to recover from directory name conflicts as well. 
When we attempt to insert a directory entry whose name conflicts with an existing entry, we compare the entries' Lamport timestamps and replica ids to select an entry that gets to keep the existing name. For the other entry, we append `~` characters until we find a name that does not conflict and synthesize a rename operation. In a real-time scenario, this situation should almost never occur, but if it does, renaming one of the directories means we can mirror the state of the CRDT to the file system without losing data. The users can then decide how to deal with the situation by deleting one of the directories or merging their contents. 14 | 15 | ## Interacting with the file system 16 | 17 | For our convergence results to be useful outside the realm of automated tests, we need to communicate changes to and from the file system. That presents its own set of challenges, since we can't rely on our internal representation always being perfectly synchronized with the state of the disk. After confusing ourselves a bit too much trying to devise a strategy for file system synchronization that could cover every possible scenario, we've decided to focus on a few narrowly-defined situations on the critical path to a working demo. 18 | 19 | * Read a tree into a new index: When the Eon daemon starts, we will need to read the current state of the tree into our internal representation. 20 | * Write an index to a file system tree: When you want to clone a remote replica, we need to write its initial state to your local disk. 21 | * Update an index from a tree: Once the daemon is started, we want to watch the file system for changes. When we detect a change, we will scan the directory tree to determine which directories have been inserted, removed, or moved. 22 | * Write incoming operations to the disk: As operations come in, we interpret them relative to our internal index and translate them into writes to the file system. 
23 | 24 | For now, we've decided to rely on the fact that files and directories get associated with unique inode numbers in order to detect moves. In our previous attempt, we tried to avoid relying fully on inodes, hoping to cover cases such as the entire repository being recursively copied or another system like Git manipulating the file tree. Now we've decided we will deal with those scenarios in a separate pass once we get the basic scenario working. Tracking the mapping between our internal file identifiers and inodes makes everything much simpler. 25 | 26 | One thing that makes it challenging (if not impossible) to mirror changes to the file system perfectly is the inability to perform file system operations atomically. When we receive a move operation from the network, we'll resolve abstract identifiers in the operation to actual paths on the local disk. If the disk's contents have changed in the meantime and we haven't heard about it, there's a potential for these paths to be wrong. To mitigate this issue, we will always confirm that the relevant paths exist and have the expected inode numbers before applying a remote operation. If we detect that our index has fallen behind the contents of the disk, we will defer handling the operation until the next update. 27 | 28 | However, even if we determine that our index is consistent with the disk, this determination isn't atomic. In the microseconds between checking for consistency and performing the write, another change might invalidate our conclusion and cause the operation to fail. Worse, a change might cause the same paths to point to different inodes, meaning the operation would succeed but apply to different paths. Luckily, we expect this sort of situation to be extremely rare. It could only happen if a file at a given path was replaced with another in the moment between our consistency check and actually writing the operation.
It might lead to surprising results, but we don't think the consequences are catastrophic. 29 | 30 | Dealing with all of these problems and getting changes to and from the file system will be our focus for this week. 31 | -------------------------------------------------------------------------------- /docs/updates/2018_10_02.md: -------------------------------------------------------------------------------- 1 | # Update for October 2, 2018 2 | 3 | ## Shipped an initial light client 4 | 5 | Last week, we [shipped](https://github.com/atom/xray/pull/135) an initial version of Memo JS, a light-client implementation of Memo that can be used as a library in web-based applications. To start with, we're assuming that the file system is completely virtual and that all changes are routed directly through the library. This meant that we ended up temporarily shelving a lot of the work we did to synchronize our tree CRDT with an external file system, but we still plan to take advantage of that research in order to build the full client that's capable of observing an external repository. Shipping the light client first will hopefully let us get some feedback and iterate on other aspects of the protocol's design before introducing the complexity of interoperating with an external file system. 6 | 7 | ## Next, Git operations 8 | 9 | Currently, a Memo `WorkTree` always starts at a base commit and builds forward indefinitely with operations. We assume that application code will be responsible for tearing the work tree down and rebuilding it following a commit. The next step is to pull this concern into Memo itself and allow the base commit of a replicated work tree to *change* over time due to operations on the underlying repository such as committing, resetting, and checking out different branches. 10 | 11 | We're still in the middle of figuring this out. It's murky and our thinking is still in flux.
We're focused on the light client currently, which simplifies our API and reduces complexity, but we still want a design that will work when we do eventually synchronize to the file system. It's somewhat unclear whether we should just start focusing on integrating with the file system now, or alternatively completely ignore the concerns of the file system and hope we can make adjustments later. For now though, here's what is emerging. 12 | 13 | ### Epochs 14 | 15 | We divide the evolution of the work tree into *epochs*. Each epoch begins with a specific commit from the underlying repository that gives all replicas a common frame of reference, then applies additional operations on top to represent uncommitted changes in that epoch. There is one and only one *active epoch* at any time on a given replica. All operations are tagged with an epoch id, and the local counters used to identify operations are reset to zero at the start of each epoch. Someone joining the collaboration should only need to fetch operations associated with the most recent epoch. 16 | 17 | When a user performs a local Git operation such as a commit or a reset, they broadcast the creation of a new epoch. Because users can create new epochs concurrently, we always honor the epoch creation with the most recent Lamport timestamp at every replica, which will provide an arbitrary but consistent behavior for concurrent epoch creations while also respecting causality in the sequential case. 18 | 19 | ### Resets 20 | 21 | Collaborators can reset the HEAD of the working copy to an arbitrary ref. In that case, we need to create a new epoch. Depending on the nature of the reset and the state of the file system, there may be uncommitted changes on disk. We'd also like to incorporate the concept of unsaved changes when we integrate with the file system. Both uncommitted changes and unsaved changes will need to be translated into synthetic operations that build upon the new epoch's base commit. 
22 | 23 | When the epoch creation arrives at remote replicas, it seems like they will have no choice but to perform I/O in order to scan the epoch's base entries into the tree. The base state of open buffers may also need to be re-read, and some of these open buffers may be for files that no longer exist in the new epoch's base commit. 24 | 25 | This is where things start to feel pretty messy and confusing. What happens to these "untethered" buffers? Do we empty out the tree and build it back up as we perform I/O on the base entries, or do we preserve the old state until the new state is ready? How do races with the file system complicate all of this? 26 | 27 | ### Commits 28 | 29 | Commits create a new epoch whose state is derived from a previous epoch, although due to the potential for concurrent commits and resets, a commit doesn't always derive from the active epoch on a given replica. Ignoring the potential for partial staging for the moment, when a user creates a commit, we can characterize what they committed via a version vector that includes all observed operations in the current epoch. 30 | 31 | If a replica receives a commit based on the active epoch (which should be the most common case), we should be able to determine their base entries without performing I/O. This is because the state that was committed should already be available as a subset of operations they have already seen, as characterized by the version vector. This would allow us to update the tree to its new state synchronously in a very common case. 32 | 33 | On the other hand, there's no guarantee that a commit is going to be based on the active epoch thanks to diabolical concurrency scenarios, and this seems to mean that we may end up needing to do I/O anyway in some scenarios. That makes us wonder whether we should focus first on the ability to reset the base commit in arbitrary ways and treat commits as a special case of that. 34 | 35 | ## Conclusion 36 | 37 | This is a hard problem.
We've made it through one wave of complexity to encounter another, and presumably that will continue. Every decision seems to be entangled with everything else, and even this summary just scratches the surface of the thought process behind this problem. But despite the daunting complexity, I'm still excited by the idea of a fully-replicated Git working copy. Git operations are the next summit to climb, and I imagine there will be more wilderness before we can settle in the fertile valley of conflict free replicated paradise. 38 | -------------------------------------------------------------------------------- /xray_core/src/file_finder.rs: -------------------------------------------------------------------------------- 1 | use cross_platform; 2 | use futures::{Async, Poll, Stream}; 3 | use notify_cell::{NotifyCell, NotifyCellObserver}; 4 | use project::{PathSearch, PathSearchResult, PathSearchStatus, TreeId}; 5 | use serde_json; 6 | use window::{View, WeakViewHandle, Window}; 7 | 8 | pub trait FileFinderViewDelegate { 9 | fn search_paths( 10 | &self, 11 | needle: &str, 12 | max_results: usize, 13 | include_ignored: bool, 14 | ) -> (PathSearch, NotifyCellObserver); 15 | fn did_close(&mut self); 16 | fn did_confirm( 17 | &mut self, 18 | tree_id: TreeId, 19 | relative_path: &cross_platform::Path, 20 | window: &mut Window, 21 | ); 22 | } 23 | 24 | pub struct FileFinderView { 25 | delegate: WeakViewHandle, 26 | query: String, 27 | include_ignored: bool, 28 | selected_index: usize, 29 | search_results: Vec, 30 | search_updates: Option>, 31 | updates: NotifyCell<()>, 32 | } 33 | 34 | #[derive(Deserialize)] 35 | #[serde(tag = "type")] 36 | enum FileFinderAction { 37 | UpdateQuery { query: String }, 38 | UpdateIncludeIgnored { include_ignored: bool }, 39 | SelectPrevious, 40 | SelectNext, 41 | Confirm, 42 | Close, 43 | } 44 | 45 | impl View for FileFinderView { 46 | fn component_name(&self) -> &'static str { 47 | "FileFinder" 48 | } 49 | 50 | fn render(&self) -> 
serde_json::Value { 51 | json!({ 52 | "selected_index": self.selected_index, 53 | "query": self.query.as_str(), 54 | "results": self.search_results, 55 | }) 56 | } 57 | 58 | fn dispatch_action(&mut self, action: serde_json::Value, window: &mut Window) { 59 | match serde_json::from_value(action) { 60 | Ok(FileFinderAction::UpdateQuery { query }) => self.update_query(query, window), 61 | Ok(FileFinderAction::UpdateIncludeIgnored { include_ignored }) => { 62 | self.update_include_ignored(include_ignored, window) 63 | } 64 | Ok(FileFinderAction::SelectPrevious) => self.select_previous(), 65 | Ok(FileFinderAction::SelectNext) => self.select_next(), 66 | Ok(FileFinderAction::Confirm) => self.confirm(window), 67 | Ok(FileFinderAction::Close) => self.close(), 68 | _ => eprintln!("Unrecognized action"), 69 | } 70 | } 71 | } 72 | 73 | impl Stream for FileFinderView { 74 | type Item = (); 75 | type Error = (); 76 | 77 | fn poll(&mut self) -> Poll, Self::Error> { 78 | let search_poll = self.search_updates 79 | .as_mut() 80 | .map(|s| s.poll()) 81 | .unwrap_or(Ok(Async::NotReady))?; 82 | let updates_poll = self.updates.poll()?; 83 | match (search_poll, updates_poll) { 84 | (Async::NotReady, Async::NotReady) => Ok(Async::NotReady), 85 | (Async::Ready(Some(search_status)), _) => { 86 | match search_status { 87 | PathSearchStatus::Pending => {} 88 | PathSearchStatus::Ready(results) => { 89 | self.search_results = results; 90 | } 91 | } 92 | 93 | Ok(Async::Ready(Some(()))) 94 | } 95 | _ => Ok(Async::Ready(Some(()))), 96 | } 97 | } 98 | } 99 | 100 | impl FileFinderView { 101 | pub fn new(delegate: WeakViewHandle) -> Self { 102 | Self { 103 | delegate, 104 | query: String::new(), 105 | include_ignored: false, 106 | selected_index: 0, 107 | search_results: Vec::new(), 108 | search_updates: None, 109 | updates: NotifyCell::new(()), 110 | } 111 | } 112 | 113 | fn update_query(&mut self, query: String, window: &mut Window) { 114 | if self.query != query { 115 | self.query = query; 116 | 
self.search(window); 117 | self.updates.set(()); 118 | } 119 | } 120 | 121 | fn update_include_ignored(&mut self, include_ignored: bool, window: &mut Window) { 122 | if self.include_ignored != include_ignored { 123 | self.include_ignored = include_ignored; 124 | self.search(window); 125 | self.updates.set(()); 126 | } 127 | } 128 | 129 | fn select_previous(&mut self) { 130 | if self.selected_index > 0 { 131 | self.selected_index -= 1; 132 | self.updates.set(()); 133 | } 134 | } 135 | 136 | fn select_next(&mut self) { 137 | if self.selected_index + 1 < self.search_results.len() { 138 | self.selected_index += 1; 139 | self.updates.set(()); 140 | } 141 | } 142 | 143 | fn confirm(&mut self, window: &mut Window) { 144 | if let Some(search_result) = self.search_results.get(self.selected_index) { 145 | self.delegate.map(|delegate| { 146 | delegate.did_confirm(search_result.tree_id, &search_result.relative_path, window) 147 | }); 148 | } 149 | } 150 | 151 | fn close(&mut self) { 152 | self.delegate.map(|delegate| delegate.did_close()); 153 | } 154 | 155 | fn search(&mut self, window: &mut Window) { 156 | let search = self.delegate 157 | .map(|delegate| delegate.search_paths(&self.query, 10, self.include_ignored)); 158 | 159 | if let Some((search, search_updates)) = search { 160 | self.search_updates = Some(search_updates); 161 | window.spawn(search); 162 | self.updates.set(()); 163 | } 164 | } 165 | } 166 | -------------------------------------------------------------------------------- /docs/updates/2018_07_16.md: -------------------------------------------------------------------------------- 1 | # Update for July 16, 2018 2 | 3 | ## Breaking cycles 4 | 5 | This week, we continued our focus on a fully replicated model of the file system. We're still focusing on directories only, driving our work with an integration test that randomly mutates multiple in-memory replicas of a file system tree and tests for convergence. 
6 | 7 | Mid-week, we hit a pretty major snag that we hadn't anticipated, but seems obvious in retrospect. Say you have two replicas of a tree that contains two subdirectories, `a` and `b`. At one replica, `a` is moved into `b`. Concurrently, on the other replica, `b` is moved into `a`. When we exchange operations, we end up with both directories in an orphaned cycle, with `a` referring to `b` as its parent and `b` referring to `a` as its parent, a state which we can't mirror to the underlying file system of either replica. 8 | 9 | | Time | Replica 1 State | Replica 2 State | 10 | |:-----| :-------------- | :------------------ | 11 | | 0 | `a/` `b/` | `a/` `b/` | 12 | | 1 | `a/b/` | `b/a/` | 13 | | 2 | ??? | ??? | 14 | 15 | For any set of concurrent moves, it's possible to create a cycle, and you could potentially create *multiple* different cycles that share directories in certain diabolical cases. Left untreated, these cycles end up disconnecting both directories from the root of the tree. We still have the data in the CRDT, but it can't be accessed via the file system. We need to break them. 16 | 17 | We spent the second half of this week thinking about every possible approach to breaking the cycles while also preserving convergence, and we ended up arriving at two major alternatives. 18 | 19 | The first approach is to preserve the operations that create the cycle, but find a way to break the cycle when we interpret the operations. The trouble is that cycles are always created by concurrent operations, but because this is a CRDT, it's possible for concurrent operations to arrive in different orders at different replicas. This means a decision to break a cycle is order-dependent, and may need to be reevaluated upon the arrival of a new operation. Our best idea is to create an arbitrary ordering of all operations based on Lamport timestamps and replica ids. 
When a new operation is inserted anywhere other than the end of this sequence, we integrate it and then reinterpret all subsequent operations based on a state of the tree that accounts for the new operation. It's definitely doable and preserves the purity of the CRDT, but it also seems complex and potentially slow. It also means that we could end up synthetically breaking a cycle only to determine later that we don't need to break the cycle due to the arrival of a concurrent operation. This could cause seemingly unrelated directories to appear out of nowhere upon the arrival of a concurrent operation, which could be pretty confusing depending on the integration delay. We'd like Eon to generalize to async use cases in addition to real-time, and these "phantom directories" seemed like a real negative for usability. 20 | 21 | The second approach, which we've decided to go with, is sort of a principled hack. Whenever we interpret a move at a given replica that introduces a cycle, we look at every move operation that contributed to the cycle and synthesize a new operation that reverts the operation with the highest Lamport timestamp. We then broadcast this new operation to other participants. Depending on the order that various concurrent operations arrive at different replicas, we may end up reverting the same move redundantly or reverting multiple moves that participate in different variations of the same cycle. We considered this approach within the first hour of our discovery of the issue, but initially discarded it because it seemed to violate the spirit of CRDTs. It seems weird that integrating an operation should require us to generate a new operation in order to put the tree in a valid state. But after fully envisioning the complexity of the pure alternative, synthesizing operations seemed a lot more appealing. 
Breaking cycles via operations means that once a replica observes the effects of a given cycle being broken, they'll never see it "unbroken" due to the arrival of a concurrent operation. It also completely avoids the issue of totally ordering operations and reevaluating subsequent operations every time an operation arrives. 22 | 23 | One consequence of either approach is that there could be certain combinations of operations that lead to a cycle that we never detect and break. That means that certain version vectors might yield tree states containing cycles and constrains the set of version vectors we should consider valid. This isn't a huge deal, because even without cycles, the constraints of causality already limit us to a subset of all possible version vectors if we want a valid interpretation of the tree. For example: If replica 0 creates a directory at sequence number 50 and replica 1 adds a subdirectory to it at sequence number 10, the state vector `{0: 20, 1: 10}` would contain a directory whose parent doesn't exist. If we limit ourselves to version vectors corresponding to actual states observed on a replica, we will have no problems. 24 | 25 | ## Homogeneous trees 26 | 27 | As I discussed in the previous update, we currently represent the state of the file tree inside a B-tree with heterogeneous elements. Each tree item is either metadata, a child reference, or a parent reference. Now I'm realizing this is probably wrong. If we separated metadata, parent references, and child references into their own homogeneous trees, we could probably simplify our code, reduce memory usage, and perform way less pattern matching on the various enumeration variants. We plan to try separating the trees this week. 28 | 29 | ## Conclusion 30 | 31 | For whoever is reading these updates, thanks for your interest. We're always interested in thoughts and feedback. Feel free to comment on this update's PR if there's anything you'd like to communicate.
32 | -------------------------------------------------------------------------------- /xray_ui/lib/action_dispatcher.js: -------------------------------------------------------------------------------- 1 | const React = require("react"); 2 | const ReactDOM = require("react-dom"); 3 | const propTypes = require("prop-types"); 4 | const { styled } = require("styletron-react"); 5 | const $ = React.createElement; 6 | 7 | const Root = styled("div", { width: "100%", height: "100%" }); 8 | 9 | class ActionSet { 10 | constructor() { 11 | this.context = null; 12 | this.actions = new Map(); 13 | } 14 | } 15 | 16 | class ActionDispatcher extends React.Component { 17 | constructor() { 18 | super(); 19 | this.handleKeyDown = this.handleKeyDown.bind(this); 20 | this.actionSets = new WeakMap(); 21 | this.defaultActionSet = new ActionSet(); 22 | } 23 | 24 | render() { 25 | return $(Root, { onKeyDown: this.handleKeyDown }, this.props.children); 26 | } 27 | 28 | handleKeyDown(event) { 29 | const { keyBindings } = this.props; 30 | const keystrokeString = keystrokeStringForEvent(event); 31 | 32 | let element = event.target; 33 | while (element) { 34 | let actionSet = this.actionSets.get(element); 35 | if (actionSet) { 36 | for (let i = keyBindings.length - 1; i >= 0; i--) { 37 | const keyBinding = keyBindings[i]; 38 | const action = actionSet.actions.get(keyBinding.action); 39 | if ( 40 | keyBinding.key === keystrokeString && 41 | action && 42 | contextMatches(actionSet.context, keyBinding.context) 43 | ) { 44 | if (action.onWillDispatch) action.onWillDispatch(); 45 | action.dispatch(); 46 | return; 47 | } 48 | } 49 | } 50 | 51 | element = element.parentElement; 52 | } 53 | } 54 | 55 | getChildContext() { 56 | return { 57 | actionSets: this.actionSets, 58 | currentActionSet: this.defaultActionSet 59 | }; 60 | } 61 | } 62 | 63 | ActionDispatcher.childContextTypes = { 64 | actionSets: propTypes.instanceOf(WeakMap), 65 | currentActionSet: propTypes.instanceOf(ActionSet) 66 | }; 67 | 68 | 
class ActionContext extends React.Component { 69 | constructor() { 70 | super(); 71 | this.actionSet = new ActionSet(); 72 | } 73 | 74 | componentWillMount() { 75 | this.actionSet.context = this.context.currentActionSet 76 | ? new Set(this.context.currentActionSet.context) 77 | : new Set(); 78 | 79 | if (this.props.add) { 80 | if (Array.isArray(this.props.add)) { 81 | for (let i = 0; i < this.props.add.length; i++) { 82 | this.actionSet.context.add(this.props.add[i]); 83 | } 84 | } else { 85 | this.actionSet.context.add(this.props.add); 86 | } 87 | } 88 | 89 | if (this.props.remove) { 90 | if (Array.isArray(this.props.remove)) { 91 | for (let i = 0; i < this.props.remove.length; i++) { 92 | this.actionSet.context.delete(this.props.remove[i]); 93 | } 94 | } else { 95 | this.actionSet.context.delete(this.props.remove); 96 | } 97 | } 98 | } 99 | 100 | componentDidMount() { 101 | if (this.context.actionSets) { 102 | this.context.actionSets.set( 103 | ReactDOM.findDOMNode(this).parentElement, 104 | this.actionSet 105 | ); 106 | } 107 | } 108 | 109 | render() { 110 | return this.props.children; 111 | } 112 | 113 | getChildContext() { 114 | return { 115 | currentActionSet: this.actionSet 116 | }; 117 | } 118 | } 119 | 120 | ActionContext.contextTypes = { 121 | actionSets: propTypes.instanceOf(WeakMap), 122 | currentActionSet: propTypes.instanceOf(ActionSet) 123 | }; 124 | 125 | ActionContext.childContextTypes = { 126 | currentActionSet: propTypes.instanceOf(ActionSet) 127 | }; 128 | 129 | class Action extends React.Component { 130 | constructor() { 131 | super(); 132 | this.dispatch = this.dispatch.bind(this); 133 | } 134 | 135 | render() { 136 | return null; 137 | } 138 | 139 | componentDidMount() { 140 | this.context.currentActionSet.actions.set(this.props.type, { 141 | onWillDispatch: this.props.onWillDispatch, 142 | dispatch: this.dispatch 143 | }); 144 | } 145 | 146 | dispatch() { 147 | this.context.dispatchAction({ type: this.props.type }); 148 | } 149 | } 150 | 151 
| Action.contextTypes = { 152 | currentActionSet: propTypes.instanceOf(ActionSet), 153 | dispatchAction: propTypes.func 154 | }; 155 | 156 | function keystrokeStringForEvent(event) { 157 | let keystroke = ""; 158 | if (event.ctrlKey) keystroke = "ctrl"; 159 | if (event.altKey) keystroke = appendKeystrokeElement(keystroke, "alt"); 160 | if (event.shiftKey) keystroke = appendKeystrokeElement(keystroke, "shift"); 161 | if (event.metaKey) keystroke = appendKeystrokeElement(keystroke, "cmd"); 162 | switch (event.key) { 163 | case "ArrowDown": 164 | return appendKeystrokeElement(keystroke, "down"); 165 | case "ArrowUp": 166 | return appendKeystrokeElement(keystroke, "up"); 167 | case "ArrowLeft": 168 | return appendKeystrokeElement(keystroke, "left"); 169 | case "ArrowRight": 170 | return appendKeystrokeElement(keystroke, "right"); 171 | default: 172 | return appendKeystrokeElement(keystroke, event.key.toLowerCase()); 173 | } 174 | } 175 | 176 | function appendKeystrokeElement(keyString, element) { 177 | if (keyString.length > 0) keyString += "-"; 178 | keyString += element; 179 | return keyString; 180 | } 181 | 182 | function contextMatches(context, expression) { 183 | // TODO: Support arbitrary boolean expressions 184 | let expressionStartIndex = 0; 185 | for (let i = 0; i <= expression.length; i++) { 186 | if (i === expression.length || expression[i] === " ") { 187 | const component = expression.slice(expressionStartIndex, i); 188 | if (component.length > 0 && !context.has(component)) return false; 189 | expressionStartIndex = i + 1; 190 | } 191 | } 192 | return true; 193 | } 194 | 195 | 196 | module.exports = { 197 | ActionDispatcher, 198 | ActionContext, 199 | Action, 200 | keystrokeStringForEvent, 201 | contextMatches 202 | }; 203 | -------------------------------------------------------------------------------- /memo_js/src/index.ts: -------------------------------------------------------------------------------- 1 | export { 2 | BaseEntry, 3 | Change, 4 | GitProvider, 5 | FileType, 6 | Oid, 7 | Path, 8 | Point, 9 | Range, 10 | ReplicaId, 11
| SelectionRanges, 12 | SelectionSetId 13 | } from "./support"; 14 | import { 15 | BufferId, 16 | ChangeObserver, 17 | ChangeObserverCallback, 18 | Disposable, 19 | GitProvider, 20 | GitProviderWrapper, 21 | FileType, 22 | Oid, 23 | Path, 24 | Range, 25 | ReplicaId, 26 | SelectionRanges, 27 | SelectionSetId, 28 | Tagged, 29 | fromMemoSelectionRanges 30 | } from "./support"; 31 | 32 | let memo: any; 33 | 34 | async function init() { 35 | if (!memo) { 36 | memo = await import("../dist/memo_js"); 37 | memo.StreamToAsyncIterator.prototype[Symbol.asyncIterator] = function() { 38 | return this; 39 | }; 40 | } 41 | } 42 | 43 | export type Version = Tagged; 44 | export type Operation = Tagged; 45 | export type EpochId = Tagged; 46 | export interface OperationEnvelope { 47 | epochId(): EpochId; 48 | epochTimestamp(): number; 49 | epochReplicaId(): ReplicaId; 50 | epochHead(): null | Oid; 51 | operation(): Operation; 52 | isSelectionUpdate(): boolean; 53 | } 54 | 55 | export enum FileStatus { 56 | New = "New", 57 | Renamed = "Renamed", 58 | Removed = "Removed", 59 | Modified = "Modified", 60 | RenamedAndModified = "RenamedAndModified", 61 | Unchanged = "Unchanged" 62 | } 63 | 64 | export interface Entry { 65 | readonly depth: number; 66 | readonly type: FileType; 67 | readonly name: string; 68 | readonly path: Path; 69 | readonly basePath: Path | null; 70 | readonly status: FileStatus; 71 | readonly visible: boolean; 72 | } 73 | 74 | export class WorkTree { 75 | private tree: any; 76 | private observer: ChangeObserver; 77 | private buffers: Map = new Map(); 78 | 79 | static async create( 80 | replicaId: string, 81 | base: Oid | null, 82 | startOps: ReadonlyArray, 83 | git: GitProvider 84 | ): Promise<[WorkTree, AsyncIterable]> { 85 | await init(); 86 | 87 | const observer = new ChangeObserver(); 88 | const result = memo.WorkTree.new( 89 | new GitProviderWrapper(git), 90 | observer, 91 | replicaId, 92 | base, 93 | startOps 94 | ); 95 | return [new WorkTree(result.tree(), 
observer), result.operations()]; 96 | } 97 | 98 | private constructor(tree: any, observer: ChangeObserver) { 99 | this.tree = tree; 100 | this.observer = observer; 101 | } 102 | 103 | version(): Version { 104 | return this.tree.version(); 105 | } 106 | 107 | hasObserved(version: Version): boolean { 108 | return this.tree.observed(version); 109 | } 110 | 111 | head(): null | Oid { 112 | return this.tree.head(); 113 | } 114 | 115 | epochId(): EpochId { 116 | return this.tree.epoch_id(); 117 | } 118 | 119 | reset(base: Oid | null): AsyncIterable { 120 | return this.tree.reset(base); 121 | } 122 | 123 | applyOps(ops: Operation[]): AsyncIterable { 124 | return this.tree.apply_ops(ops); 125 | } 126 | 127 | createFile(path: Path, fileType: FileType): OperationEnvelope { 128 | return this.tree.create_file(path, fileType); 129 | } 130 | 131 | rename(oldPath: Path, newPath: Path): OperationEnvelope { 132 | return this.tree.rename(oldPath, newPath); 133 | } 134 | 135 | remove(path: Path): OperationEnvelope { 136 | return this.tree.remove(path); 137 | } 138 | 139 | exists(path: Path): boolean { 140 | return this.tree.exists(path); 141 | } 142 | 143 | entries(options?: { descendInto?: Path[]; showDeleted?: boolean }): Entry[] { 144 | let descendInto = null; 145 | let showDeleted = false; 146 | if (options) { 147 | if (options.descendInto) descendInto = options.descendInto; 148 | if (options.showDeleted) showDeleted = options.showDeleted; 149 | } 150 | return this.tree.entries(descendInto, showDeleted); 151 | } 152 | 153 | async openTextFile(path: Path): Promise { 154 | const bufferId = await this.tree.open_text_file(path); 155 | let buffer = this.buffers.get(bufferId); 156 | if (!buffer) { 157 | buffer = new Buffer(bufferId, this.tree, this.observer); 158 | this.buffers.set(bufferId, buffer); 159 | } 160 | return buffer; 161 | } 162 | 163 | setActiveLocation(buffer: Buffer | null): OperationEnvelope { 164 | return this.tree.set_active_location(buffer ? 
buffer.id : null); 165 | } 166 | 167 | getReplicaLocations(): Map { 168 | const locations = this.tree.replica_locations(); 169 | 170 | const map = new Map(); 171 | for (const replicaId in locations) { 172 | map.set(replicaId as ReplicaId, locations[replicaId] as Path); 173 | } 174 | return map; 175 | } 176 | } 177 | 178 | export class Buffer { 179 | id: BufferId; 180 | private tree: any; 181 | private observer: ChangeObserver; 182 | 183 | constructor(id: BufferId, tree: any, observer: ChangeObserver) { 184 | this.id = id; 185 | this.tree = tree; 186 | this.observer = observer; 187 | } 188 | 189 | edit(oldRanges: Range[], newText: string): OperationEnvelope { 190 | return this.tree.edit(this.id, oldRanges, newText); 191 | } 192 | 193 | addSelectionSet(ranges: Range[]): [SelectionSetId, OperationEnvelope] { 194 | const result = this.tree.add_selection_set(this.id, ranges); 195 | return [result.set_id(), result.operation()]; 196 | } 197 | 198 | replaceSelectionSet(id: SelectionSetId, ranges: Range[]): OperationEnvelope { 199 | return this.tree.replace_selection_set(this.id, id, ranges); 200 | } 201 | 202 | removeSelectionSet(id: SelectionSetId): OperationEnvelope { 203 | return this.tree.remove_selection_set(this.id, id); 204 | } 205 | 206 | getPath(): string | null { 207 | return this.tree.path(this.id); 208 | } 209 | 210 | getText(): string { 211 | return this.tree.text(this.id); 212 | } 213 | 214 | getSelectionRanges(): SelectionRanges { 215 | const selections = this.tree.selection_ranges(this.id); 216 | return fromMemoSelectionRanges(selections); 217 | } 218 | 219 | onChange(callback: ChangeObserverCallback): Disposable { 220 | return this.observer.onChange(this.id, callback); 221 | } 222 | 223 | getDeferredOperationCount(): number { 224 | return this.tree.buffer_deferred_ops_len(this.id); 225 | } 226 | } 227 | -------------------------------------------------------------------------------- /docs/architecture/002_shared_workspaces.md: 
-------------------------------------------------------------------------------- 1 | # Shared workspaces 2 | 3 | ## Current features 4 | 5 | An instance of `xray_server` can host one or more shared workspaces, which can be accessed by other `xray_server` instances over the network. Currently, when connecting to a remote peer, we automatically open its first shared workspace in a window on the client. The client can use the file finder to locate and open any file in the shared workspace's project. When multiple participants open a buffer for the same file, their edits are replicated to other collaborators in real time. 6 | 7 | ### Server 8 | 9 | * `xray foo/ bar/ --listen 8888` starts the Xray server listening on port 8888. 10 | * The `--headless` flag can be passed to create a server that only hosts workspaces for other clients and does not present its own UI. 11 | 12 | ### Basic client experience 13 | 14 | * `xray --connect hostname:port` opens a new window that is connected to the first workspace available on the remote host. 15 | * `cmd-t` in the new window searches through paths in the remote workspace. 16 | * Selecting a path opens it. 17 | * Running `xray --connect` from a second instance allows for collaborative editing when clients open the same buffer. 18 | 19 | ### Selecting between multiple workspaces on the client 20 | 21 | * If the host exposes multiple workspaces, `xray --connect hostname:port` opens an *Open Workspace* dialog that allows the user to select which workspace to open. 22 | * `cmd-o` in any Xray window opens the *Open Workspace* dialog listing workspaces from all connected servers. 23 | 24 | ## RPC System 25 | 26 | We implement shared workspaces on top of an RPC system that allows objects on the client to derive their state and behavior from objects that live on the server. 
27 | 28 | ### Goals 29 | 30 | #### Support replicated objects 31 | 32 | The primary goal of the system is to support the construction of a replicated object-oriented domain model. In addition to supporting remote procedure calls, we also want the system to explicitly support long-lived, stateful objects that change over time. 33 | 34 | Replication support should be fairly additive, meaning that the domain model on the server side should be designed pretty much as if it weren't replicated. On the client side, interacting with representations of remote objects should be explicit but convenient. 35 | 36 | #### Capabilities-based security 37 | 38 | Secure ECMAScript and Cap'n Proto introduced me to the concept of capabilities-based security, and our system adopts the same philosophy. Objects on the server are exposed via *services*, which can be thought of as "capabilities" that grant access to a narrow slice of functionality that is dynamically defined. Starting from a single root service, remote users are granted increasing access by being provided with additional capabilities. 39 | 40 | #### Dynamic resource management 41 | 42 | Server-side services only need to live as long as they are referenced by a client. Server-side code can elect to retain a reference to a service. Otherwise, ownership is maintained by clients over the wire. If both the server and the client drop their reference-counted handle to a service, we should drop the service on the server side automatically. 43 | 44 | #### Binary messages 45 | 46 | We want to move data efficiently between the server and client, so a binary encoding scheme for messages is important. For now, we're using bincode for convenience, but we should eventually switch to Protocol Buffers to support graceful evolution of the protocol. 47 | 48 | ### Design 49 | 50 | ![Diagram](../images/rpc.png) 51 | 52 | **Services** are the fundamental abstraction of the system.
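As a rough sketch of the shape this abstraction takes (illustrative Rust with hypothetical names, not Xray's actual `rpc::server` API), a service bundles a state snapshot, an update stream, and request handling behind one trait:

```rust
// Sketch of a capability-style service (hypothetical names, not Xray's API).
// Each domain object gets a wrapper implementing this trait, with the four
// associated types defined per object.
trait Service {
    type State;
    type Update;
    type Request;
    type Response;

    /// Snapshot sent to a client when it first acquires the service.
    fn init(&self) -> Self::State;

    /// Next update to replicate to clients, if the state has changed.
    fn poll_update(&mut self) -> Option<Self::Update>;

    /// Handle a request from a remote client.
    fn request(&mut self, request: Self::Request) -> Self::Response;
}

// Toy implementation showing the shape of a concrete service.
struct CounterService {
    count: u64,
    dirty: bool,
}

enum CounterRequest {
    Increment,
}

impl Service for CounterService {
    type State = u64;
    type Update = u64;
    type Request = CounterRequest;
    type Response = u64;

    fn init(&self) -> u64 {
        self.count
    }

    fn poll_update(&mut self) -> Option<u64> {
        if self.dirty {
            self.dirty = false;
            Some(self.count)
        } else {
            None
        }
    }

    fn request(&mut self, request: CounterRequest) -> u64 {
        match request {
            CounterRequest::Increment => {
                self.count += 1;
                self.dirty = true;
                self.count
            }
        }
    }
}

fn main() {
    let mut svc = CounterService { count: 0, dirty: false };
    assert_eq!(svc.init(), 0);
    assert_eq!(svc.request(CounterRequest::Increment), 1);
    assert_eq!(svc.poll_update(), Some(1));
    assert_eq!(svc.poll_update(), None);
}
```

Granting a client narrower access is then just a matter of handing it a service that wraps a smaller slice of the domain model.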
53 | 54 | In `rpc::server`, `Service` is a *trait* that can be implemented by a custom service wrapper for each domain object that makes the object accessible to remote clients. A `Service` exposes a static snapshot of the object's initial state, a stream of updates, and the ability to handle requests. The `Service` trait has various associated types for `Request`, `Response`, `Update`, and `State`. 55 | 56 | When server-side code accepts connections, it creates an `rpc::server::Connection` object for each client that takes ownership of the `Stream` of that client's incoming messages. `Connection`s must be created with a *root service*, which is sent to the client immediately. The `Connection` is itself a `Stream` of outgoing messages to be sent to the connected client. 57 | 58 | On the client side, we create a connection by passing the `Stream` of incoming messages to `rpc::client::Connection::new`, which returns a *future* for a tuple containing two objects. The first object is a `rpc::client::Service` representing the *root service* that was sent from the server. The second is an instance of `client::Connection`, which is a `Stream` of outgoing messages to send to the server. 59 | 60 | Using the root service, the client can make requests to gain access to additional services. In Xray, the root service is currently `app::AppService`, which includes a list of shared workspaces in its replicated state. After a client connects to a server, it stores a handle to its root service in a `PeerList` object. We will eventually build a `PeerListView` based on the state of the `PeerList`, which allows the user to open a remote workspace on any connected peer. For now, we automatically open the first workspace when connecting to a remote peer. 61 | 62 | When we connect to a remote workspace, we send an `app::ServiceRequest::OpenWorkspace` message to the remote `AppService`. 
When handling this request in the `AppService` on the server, we call `add_service` on the connection with a `WorkspaceService` for the requested workspace, which returns us a `ServiceId` integer. We send that id to the client in the response. When handling the response on the client, we call `take_service` on the root service with the id to take ownership of a handle to the remote service. 63 | 64 | We can then create a `RemoteWorkspace` and pass it ownership of the service handle to the remote workspace. `RemoteWorkspace` and `LocalWorkspace` both implement the `Workspace` trait, which allows a `RemoteWorkspace` to be used in the system in all of the same ways that a `LocalWorkspace` can. 65 | 66 | We create the illusion that remote domain objects are really local through a combination of state replication and remote procedure calls. Fuzzy finding on the project file trees is addressed through replication, since the data size is typically small and the task is latency-sensitive. Project-wide search is implemented via RPC, since replicating the contents of the entire remote file system would be costly, especially for the in-browser use case. Buffer replication is implemented by relaying conflict-free representations of individual edit operations, which can be correctly integrated on remote replicas due to our use of a CRDT in Xray's underlying buffer implementation. 67 | -------------------------------------------------------------------------------- /docs/updates/2018_07_10.md: -------------------------------------------------------------------------------- 1 | # Update for July 10, 2018 2 | 3 | It's been a while since the last update, and I apologize for that. Our strategic direction has felt less clear to me over the past few weeks, and that lack of clarity combined with some difficulty in my personal life overcame my motivation to post for a while. I just wanted to turn inward and write code in relative isolation.
Things are clearer and I'm feeling better, and I'd like to resume posting updates on a weekly basis and ask your forgiveness for the gap in communication. 4 | 5 | ## The emergence of Eon 6 | 7 | When we demonstrated Xray for GitHub leadership in May, there was definitely interest in Xray's potential as a high-performance collaborative text editor that runs on the desktop or in the browser, but there was way *more* excitement about CRDTs and their potential to impact version control. At first, this feedback caused some cognitive dissonance for me. After working so hard on Xray, it wasn't easy to hear that what I considered to be an implementation detail was the most exciting aspect of what we had built. But the more I thought about it, the more intrigued I became with the application of CRDTs to version control. The idea had been floating around in my mind since early in the development of Teletype, but now I felt encouraged to take the idea more seriously. 8 | 9 | After a bit of indecision, we decided to dive in. We've now shifted our focus to a new project called Eon, which enables real-time, fine-grained version control. Long term, we see Eon and Xray as two components of the same overall project. Eon will be an editor-agnostic datastore for fine-grained edit history that enables real-time synchronization. It will be like Git, but it will persist and synchronize changes at the granularity of individual keystrokes. We envision Xray as Eon's native user interface and the best showcase of its capabilities. One example is the idea of "layers", which are like commits that can be freely edited at any time. 10 | 11 | Git never would have taken off if it had been trapped inside a particular editor, and so if we really want to maximize the utility of what we're building, it makes sense to be editor-agnostic at the core. That's why we've decided to focus on delivering Eon as a standalone project. 
It may look like we have stopped working on Xray, but since Xray will ultimately build on top of Eon, the spirit of the overall project continues. 12 | 13 | Since I was presenting Eon at QCon NYC, we briefly decided to pull Eon out into a separate repository, but then we decided that this was actually a bad idea. For now, we will [continue to develop Eon within the Xray mono-repo](https://github.com/atom/xray/tree/eon/eon) in order to keep the community and development focused in a single location. 14 | 15 | ## Progress on Eon 16 | 17 | Previously, Xray allowed you to invite guests into your workspace, but it was a centralized design. The workspace host owned all the files and serialized all guest requests to manipulate the file system. If the host dropped offline, the collaboration was over. With Eon, we're shooting for full decentralization. Multiple people can maintain a first-class replica of a given repository, just like Git. 18 | 19 | To achieve that, over the past few weeks, we've been working on replicating the contents of the file system in addition to individual buffers. That means that if one person moves a directory while a collaborator adds a file inside of it, both parties will eventually converge to the same view of the world. It's proven to be a surprisingly complex problem. 20 | 21 | We maintain a CRDT that represents the state of all the files and directories within the repository, but the only cross-platform way to detect file system changes is to scan the underlying directory structure and compare it to our in-memory representation. So far, we've focused only on directories, and we're caching inodes so we can detect when a directory is moved. We have yet to deal with files, which add the possibility of multiple hard links to the same file, but we're planning for them in our design.
We also still need to deal with the fact that the file system might change in the middle of a scan, which might cause us to encounter a file or directory multiple times. 22 | 23 | Once we detect a local change, we update the local index and create an operation to broadcast to other replicas. We've settled on a design in which each file or directory is assigned a unique identifier and associated with one or more *parent references*, which describe where that file is located in the tree. Directories can only have one parent reference since they cannot be hard linked, but files can have multiple. Additionally, directories are associated with *child references*, each of which has a name and corresponds to a parent reference elsewhere in the tree. 24 | 25 | Each parent and child reference is a simple CRDT called a *last-writer-wins register*. If a file is moved, we update its parent reference. If the same file is moved concurrently on another replica, we break the tie in a consistent way such that the file ends up in the same location in all replicas. Similarly, if two child references with the same name are created concurrently within a directory, only one of them will win across all replicas. 26 | 27 | Inspired by the [Btrfs file system](https://en.wikipedia.org/wiki/Btrfs), we're storing the state of the file system in the same copy-on-write B-tree that we use to represent the contents of buffers. Our tree is implemented generically, enabling us to reuse the same code for different kinds of items. In the case of our file system representation, each item is a member of an enumeration, which allows us to store file metadata, parent references, and child references all within the same tree. Each parent and child reference is actually represented by multiple tree items that share a *reference id*. We enforce a total order between all items in the tree, honoring the leftmost item for any register as the current value of that register.
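A last-writer-wins register is small enough to sketch in full (a simplified model, not Eon's actual code; here the timestamp is assumed to be a Lamport clock paired with a replica id for deterministic tie-breaking):

```rust
// Simplified last-writer-wins register (a sketch, not Eon's implementation).
// Ties between concurrent writes are broken by comparing (lamport, replica_id)
// pairs lexicographically, so every replica picks the same winner.
#[derive(Clone, Debug)]
struct LwwRegister<T: Clone> {
    value: T,
    timestamp: (u64, u64), // (Lamport clock, replica id)
}

impl<T: Clone> LwwRegister<T> {
    fn new(value: T, lamport: u64, replica_id: u64) -> Self {
        LwwRegister { value, timestamp: (lamport, replica_id) }
    }

    /// Merge a concurrent write; the greater (lamport, replica_id) pair wins.
    fn merge(&mut self, other: &LwwRegister<T>) {
        if other.timestamp > self.timestamp {
            self.value = other.value.clone();
            self.timestamp = other.timestamp;
        }
    }
}

fn main() {
    // Replica 1 moves a file to "a/x"; replica 2 concurrently moves it to "b/x".
    let r1 = LwwRegister::new("a/x", 5, 1);
    let r2 = LwwRegister::new("b/x", 5, 2);

    // Each replica merges the other's write...
    let mut on_replica_1 = r1.clone();
    on_replica_1.merge(&r2);
    let mut on_replica_2 = r2.clone();
    on_replica_2.merge(&r1);

    // ...and both converge on the same parent, regardless of merge order.
    assert_eq!(on_replica_1.value, "b/x");
    assert_eq!(on_replica_2.value, "b/x");
}
```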
28 | 29 | We've also enhanced Xray's original B-tree to allow nodes to be persisted in an external key-value store. This will allow us to maintain a history of how the file system has evolved, and we plan to allow interactions with our tree to filter out certain nodes based on a summary of their contents. This will enable us to avoid loading portions of the tree that contain items that aren't visible in a specific version of the tree, which will keep the memory footprint small for any single version while still allowing us to load past versions of the tree if desired. 30 | 31 | In many ways, arriving at our current approach was more challenging than coming up with the CRDT for text. We spent many days doing almost nothing but thinking and not writing much code, but now we're feeling pretty good about the design. It seems simple and almost obvious, which is probably a good sign that we're on the right track. 32 | -------------------------------------------------------------------------------- /xray_cli/src/main.rs: -------------------------------------------------------------------------------- 1 | extern crate docopt; 2 | #[macro_use] 3 | extern crate serde_derive; 4 | extern crate serde_json; 5 | 6 | use docopt::Docopt; 7 | use std::env; 8 | use std::fs; 9 | use std::io::{BufRead, BufReader, Write}; 10 | use std::net::SocketAddr; 11 | use std::os::unix::net::UnixStream; 12 | use std::path::{Path, PathBuf}; 13 | use std::process; 14 | use std::process::{Command, Stdio}; 15 | 16 | const USAGE: &'static str = " 17 | Xray 18 | 19 | Usage: 20 | xray [--socket-path=<path>] [--headless] [--listen=<port>] [--connect=
<address>] [<path>...] 21 | xray (-h | --help) 22 | 23 | Options: 24 | -h --help Show this screen. 25 | -H --headless Start Xray in headless mode. 26 | -l --listen=<port> Listen for TCP connections on the specified port. 27 | -c --connect=
<address> Connect to the specified address. 28 | "; 29 | 30 | type PortNumber = u16; 31 | 32 | #[derive(Debug, Serialize)] 33 | #[serde(tag = "type")] 34 | enum ServerRequest { 35 | StartCli { headless: bool }, 36 | OpenWorkspace { paths: Vec<PathBuf> }, 37 | ConnectToPeer { address: SocketAddr }, 38 | TcpListen { port: PortNumber }, 39 | } 40 | 41 | #[derive(Deserialize)] 42 | #[serde(tag = "type")] 43 | enum ServerResponse { 44 | Ok, 45 | Error { description: String }, 46 | } 47 | 48 | #[derive(Debug, Deserialize)] 49 | struct Args { 50 | flag_socket_path: Option<String>, 51 | flag_headless: Option<bool>, 52 | flag_listen: Option<PortNumber>, 53 | flag_connect: Option<SocketAddr>, 54 | arg_path: Vec<String>, 55 | } 56 | 57 | fn main() { 58 | process::exit(match launch() { 59 | Ok(()) => 0, 60 | Err(description) => { 61 | eprintln!("{}", description); 62 | 1 63 | } 64 | }) 65 | } 66 | 67 | fn launch() -> Result<(), String> { 68 | let args: Args = Docopt::new(USAGE) 69 | .and_then(|d| d.deserialize()) 70 | .unwrap_or_else(|e| e.exit()); 71 | 72 | let headless = args.flag_headless.unwrap_or(false); 73 | 74 | const DEFAULT_SOCKET_PATH: &'static str = "/tmp/xray.sock"; 75 | let socket_path = PathBuf::from( 76 | args.flag_socket_path 77 | .as_ref() 78 | .map_or(DEFAULT_SOCKET_PATH, |path| path.as_str()), 79 | ); 80 | 81 | let mut socket = match UnixStream::connect(&socket_path) { 82 | Ok(socket) => socket, 83 | Err(_) => { 84 | let src_path = PathBuf::from(env::var("XRAY_SRC_PATH") 85 | .map_err(|_| "Must specify the XRAY_SRC_PATH environment variable")?); 86 | 87 | let server_bin_path; 88 | let node_env; 89 | if cfg!(debug_assertions) { 90 | server_bin_path = src_path.join("target/debug/xray_server"); 91 | node_env = "development"; 92 | } else { 93 | server_bin_path = src_path.join("target/release/xray_server"); 94 | node_env = "production"; 95 | } 96 | 97 | if headless { 98 | start_headless(&server_bin_path, &socket_path)? 99 | } else { 100 | start_electron(&src_path, &server_bin_path, &socket_path, &node_env)?
101 | } 102 | } 103 | }; 104 | 105 | send_message(&mut socket, ServerRequest::StartCli { headless })?; 106 | 107 | if let Some(address) = args.flag_connect { 108 | send_message(&mut socket, ServerRequest::ConnectToPeer { address })?; 109 | } else if args.arg_path.len() > 0 { 110 | let mut paths = Vec::new(); 111 | for path in args.arg_path { 112 | paths.push(fs::canonicalize(&path) 113 | .map_err(|error| format!("Invalid path {:?} - {}", path, error))?); 114 | } 115 | send_message(&mut socket, ServerRequest::OpenWorkspace { paths })?; 116 | } 117 | 118 | if let Some(port) = args.flag_listen { 119 | send_message(&mut socket, ServerRequest::TcpListen { port })?; 120 | } 121 | 122 | Ok(()) 123 | } 124 | 125 | fn start_headless(server_bin_path: &Path, socket_path: &Path) -> Result<UnixStream, String> { 126 | let command = Command::new(server_bin_path) 127 | .env("XRAY_SOCKET_PATH", socket_path) 128 | .env("XRAY_HEADLESS", "1") 129 | .stdout(Stdio::piped()) 130 | .spawn() 131 | .map_err(|error| format!("Failed to open Xray app {}", error))?; 132 | 133 | let mut stdout = command.stdout.unwrap(); 134 | let mut reader = BufReader::new(&mut stdout); 135 | let mut line = String::new(); 136 | while line != "Listening\n" { 137 | reader 138 | .read_line(&mut line) 139 | .map_err(|_| String::from("Error reading app output"))?; 140 | } 141 | UnixStream::connect(socket_path).map_err(|_| String::from("Error connecting to socket")) 142 | } 143 | 144 | fn start_electron( 145 | src_path: &Path, 146 | server_bin_path: &Path, 147 | socket_path: &Path, 148 | node_env: &str, 149 | ) -> Result<UnixStream, String> { 150 | let electron_app_path = Path::new(src_path).join("xray_electron"); 151 | let electron_bin_path = electron_app_path.join("node_modules/.bin/electron"); 152 | let command = Command::new(electron_bin_path) 153 | .arg(electron_app_path) 154 | .env("XRAY_SERVER_PATH", server_bin_path) 155 | .env("XRAY_SOCKET_PATH", socket_path) 156 | .env("XRAY_HEADLESS", "0") 157 | .env("NODE_ENV", node_env) 158 |
.stdout(Stdio::piped()) 159 | .spawn() 160 | .map_err(|error| format!("Failed to open Xray app {}", error))?; 161 | 162 | let mut stdout = command.stdout.unwrap(); 163 | let mut reader = BufReader::new(&mut stdout); 164 | let mut line = String::new(); 165 | while line != "Listening\n" { 166 | reader 167 | .read_line(&mut line) 168 | .map_err(|_| String::from("Error reading app output"))?; 169 | } 170 | UnixStream::connect(socket_path).map_err(|_| String::from("Error connecting to socket")) 171 | } 172 | 173 | fn send_message(socket: &mut UnixStream, message: ServerRequest) -> Result<(), String> { 174 | let bytes = serde_json::to_vec(&message).expect("Error serializing message"); 175 | socket 176 | .write(&bytes) 177 | .expect("Error writing to server socket"); 178 | socket.write(b"\n").expect("Error writing to server socket"); 179 | 180 | let mut reader = BufReader::new(socket); 181 | let mut line = String::new(); 182 | reader 183 | .read_line(&mut line) 184 | .expect("Error reading server response"); 185 | match serde_json::from_str::<ServerResponse>(&line).expect("Error reading server response") { 186 | ServerResponse::Ok => Ok(()), 187 | ServerResponse::Error { description } => Err(description), 188 | } 189 | } 190 | -------------------------------------------------------------------------------- /docs/updates/2018_09_14.md: -------------------------------------------------------------------------------- 1 | # Update for September 14, 2018 2 | 3 | It's been an intense couple of weeks, but we're coming out of it with more clarity than ever on the future direction of Memo. We're entering a new phase of the project where we distill the research of the last few months into a usable form. Thanks for your patience with the radio silence the last couple of weeks.
4 | 5 | ## Embracing Git 6 | 7 | Our previous vision for Memo was to store the full operational history for the repository in a global database, so that each file's full history would be available in a single structure dating back to its creation. This would essentially duplicate the role of Git as a content tracker for the repository, but with a much more fine-grained resolution. It may eventually make sense to build a global operation index to enable advanced features and analysis, but I don't think it makes sense to conceive of such an index as an independent version control system. For async collaboration, CRDTs probably won't offer enough advantages to induce people to switch away from Git. Even if we managed to build such a system, it would always need to interoperate with Git. So we may as well embrace that reality and build on top of Git. We can then focus on the area where CRDTs have their greatest strength: real-time collaboration and recording the fine-grained edits that occur *between* Git commits. 8 | 9 | Augmenting Git is definitely something I've considered in the past, but it's finally becoming clearer how we can achieve it. We will start by packaging the previous months' work into a library that is similar to `teletype-crdt`. With Teletype, you work with individual buffers. Each local edit returns one or more operations to apply on remote replicas, and applying remote operations returns a diff that describes the impact of those operations on the local replica. Memo will expand the scope of this abstraction from individual buffers to the working tree, but it won't represent the full state of this tree in the form of operations. Instead, we'll exploit the fact that Git commits provide a common synchronization point. The library will expect any data that's committed to the Git repository to be supplied separately from the operations. 
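The calling convention being generalized here can be illustrated with a toy work tree (all names hypothetical; the real library exchanges CRDT text operations rather than whole-file values): a local edit returns an operation to broadcast, and applying remote operations returns a diff describing their local effect.

```rust
use std::collections::HashMap;

// Toy sketch of the operation/diff convention (hypothetical names).
// Local edits return operations; applying remote operations returns
// a diff of what actually changed on this replica.
#[derive(Clone, Debug)]
struct Operation {
    path: String,
    text: String,
    timestamp: (u64, u64), // (Lamport clock, replica id) for tie-breaking
}

#[derive(Debug, PartialEq)]
struct Change {
    path: String,
    new_text: String,
}

struct WorkTree {
    replica_id: u64,
    clock: u64,
    files: HashMap<String, (String, (u64, u64))>, // path -> (text, timestamp)
}

impl WorkTree {
    fn new(replica_id: u64) -> Self {
        WorkTree { replica_id, clock: 0, files: HashMap::new() }
    }

    /// A local edit mutates this replica and returns an operation to broadcast.
    fn edit(&mut self, path: &str, text: &str) -> Operation {
        self.clock += 1;
        let timestamp = (self.clock, self.replica_id);
        self.files.insert(path.to_string(), (text.to_string(), timestamp));
        Operation { path: path.to_string(), text: text.to_string(), timestamp }
    }

    /// Applying remote operations returns a diff describing their local effect.
    fn apply_ops(&mut self, ops: &[Operation]) -> Vec<Change> {
        let mut diff = Vec::new();
        for op in ops {
            self.clock = self.clock.max(op.timestamp.0); // merge Lamport clocks
            let stale = self
                .files
                .get(&op.path)
                .map_or(false, |(_, ts)| *ts >= op.timestamp);
            if !stale {
                self.files.insert(op.path.clone(), (op.text.clone(), op.timestamp));
                diff.push(Change { path: op.path.clone(), new_text: op.text.clone() });
            }
        }
        diff
    }
}

fn main() {
    let mut host = WorkTree::new(1);
    let mut guest = WorkTree::new(2);
    let op = host.edit("src/lib.rs", "fn main() {}");
    let diff = guest.apply_ops(&[op]);
    assert_eq!(
        diff,
        vec![Change { path: "src/lib.rs".into(), new_text: "fn main() {}".into() }]
    );
}
```

In the sparse design described above, anything already reachable through a Git commit would simply never appear in these operations.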
10 | 11 | By making the CRDT representation sparse and leaning on Git to synchronize the bulk of the working tree, we reduce the memory footprint of the CRDT to the point where it can reasonably be kept resident in memory. This also bounds the bandwidth required to replicate the full structure, which obviates the need for complex partial data fetching schemes that we were considering previously. This in turn greatly simplifies backend infrastructure. Because a sparse representation should always be small enough to fully reconstruct from raw operations on the client, server side infrastructure shouldn't need to process operations in any way other than simply storing them based on the identifier of the patch to which they apply. 12 | 13 | ## The challenge of undo 14 | 15 | One big obstacle to making this patch-based representation work is undo. In `teletype-crdt`, we implement undo by associating every operation with a counter. If an operation's undo counter is odd, we consider the operation to be undone and therefore invisible. If the operation's undo counter is even, we consider the operation to be visible. If two users undo or redo the same operation concurrently, they'll both assign its undo count to the same number, which preserves both users' intentions of undoing the operation and avoids their actions doubling up or cancelling each other out, which could occur in some other schemes. However, implementing undo in this way comes with a cost, which is that in order for me to undo an operation that is present in my local history, I need to rely on that operation being present in the history of all of my collaborators. This approach to undo combines poorly with resetting to an empty CRDT on each commit, because it forces everyone to clear their undo stack after committing since there won't be any way to refer to prior operations in order to update their undo counters. 
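The counter scheme itself is simple to model (a sketch of the idea, not teletype-crdt's actual code):

```rust
use std::cmp;

// Sketch of the undo-counter scheme: each operation carries a counter.
// Odd means undone (invisible); even means visible. Concurrent updates to
// the same counter merge by taking the maximum, which is commutative, so
// two users undoing the same operation at once don't cancel each other out.
#[derive(Clone, Copy, Debug, PartialEq)]
struct UndoCount(u32);

impl UndoCount {
    fn is_undone(self) -> bool {
        self.0 % 2 == 1
    }

    /// A local undo bumps to the next odd count (no-op if already undone).
    fn undo(self) -> UndoCount {
        if self.is_undone() { self } else { UndoCount(self.0 + 1) }
    }

    /// A local redo bumps to the next even count (no-op if already visible).
    fn redo(self) -> UndoCount {
        if self.is_undone() { UndoCount(self.0 + 1) } else { self }
    }

    /// Merge a concurrent counter update from another replica.
    fn merge(self, other: UndoCount) -> UndoCount {
        UndoCount(cmp::max(self.0, other.0))
    }
}

fn main() {
    // Two collaborators concurrently undo the same visible operation.
    let base = UndoCount(0);
    let a = base.undo(); // replica A: counter 1
    let b = base.undo(); // replica B: counter 1
    // After exchanging updates, both land on counter 1: undone exactly once,
    // not "double-undone" back to visible.
    assert_eq!(a.merge(b), UndoCount(1));
    assert!(a.merge(b).is_undone());
}
```

The catch is exactly the one described above: bumping a counter only helps if the operation it refers to exists in every collaborator's history, which breaks down once each commit resets the CRDT.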
16 | 17 | This felt like a show-stopper to me until I had a conversation with [@jeffrafter](https://github.com/jeffrafter) about his team's experience using `teletype-crdt` in [Tiny](https://tttiny.com/). I don't have a perfect understanding of the details of their approach, but they essentially bypass Teletype's built-in undo system and maintain their own history on each client independently of the CRDT. When a user performs an undo, they simply apply its inverse to the current CRDT and broadcast it as a new operation. When I asked about some of the more diabolical concurrency scenarios that the counters were designed to eliminate, Jeff simply replied that it's working for them in practice. 18 | 19 | Inspired by their experience, I have a hunch that we can implement undo similarly in our library. For each buffer, we can maintain two CRDTs. One will serve as a local non-linear history that allows us to understand the effects of undoing operations that aren't the most recent change to the buffer. We'll perform undos against this local history first, then apply their effects to a CRDT that starts at the most recent commit. This will generate remote operations we know can be cleanly applied by all participants. The local history can be retained across several commits and even be stored locally. By fetching operations from previous commits, we could even construct such a history for clients that are new to the collaboration. 20 | 21 | ## Stable file references 22 | 23 | We need to be able to refer to files in a universal way, but with this hybrid approach, only *new* files are assigned identifiers by operations. This stumped us for a bit, until an obvious solution occurred to us. The set of paths in the base Git commit is the same for every replica, so we can sort these paths lexicographically and assign each an identifier based on its position in this fixed order.
Internally, file identifiers are a Rust enum with two possible variants, `Base`, which wraps a `u64`, and `New`, which wraps a local timestamp generated on the replica that created the file. 24 | 25 | ## The big picture 26 | 27 | By being agnostic to plumbing and building a library that operates purely in terms of operations and data, this software should be useful in a broader array of applications. We plan to distribute a WebAssembly version to enable collaboration in browser-based environments, along with a native executable that can talk to editors and synchronize our CRDT with an underlying file system like we originally envisioned. The operations can serve as a kind of common real-time collaboration protocol. As long as an application can send and receive operations and feed them into this library, it should be capable of real-time collaboration with other applications. 28 | 29 | In light of these shifts in our thinking, I've updated the [Memo README](../../README.md) to reflect the current state of the world. Some details about the implementation have been dropped, but I plan to reintroduce them over time as our implementation stabilizes. At some point soon, it may make sense to again pull Memo out into its own repository that is separate from Xray. If that happens, I'll keep everyone posted here. 30 | --------------------------------------------------------------------------------