├── .coffeelintignore
├── .github
│   ├── no-response.yml
│   └── workflows
│       └── ci.yml
├── .gitignore
├── CONTRIBUTING.md
├── ISSUE_TEMPLATE.md
├── LICENSE.md
├── PULL_REQUEST_TEMPLATE.md
├── README.md
├── coffeelint.json
├── grammars
│   └── toml.cson
├── package-lock.json
├── package.json
├── settings
│   └── language-toml.cson
└── spec
    └── toml-spec.coffee

/.coffeelintignore:
--------------------------------------------------------------------------------
spec/fixtures

--------------------------------------------------------------------------------
/.github/no-response.yml:
--------------------------------------------------------------------------------
# Configuration for probot-no-response - https://github.com/probot/no-response

# Number of days of inactivity before an issue is closed for lack of response
daysUntilClose: 28

# Label requiring a response
responseRequiredLabel: more-information-needed

# Comment to post when closing an issue for lack of response. Set to `false` to disable.
closeComment: >
  This issue has been automatically closed because there has been no response
  to our request for more information from the original author. With only the
  information that is currently in the issue, we don't have enough information
  to take action. Please reach out if you have or find the answers we need so
  that we can investigate further.

--------------------------------------------------------------------------------
/.github/workflows/ci.yml:
--------------------------------------------------------------------------------
name: CI

on: [push]

env:
  CI: true

jobs:
  Test:
    strategy:
      matrix:
        os: [ubuntu-latest, macos-latest, windows-latest]
        channel: [stable, beta]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v1
      - uses: UziTech/action-setup-atom@v2
        with:
          version: ${{ matrix.channel }}
      - name: Install dependencies
        run: apm install
      - name: Run tests
        run: atom --test spec

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
node_modules

--------------------------------------------------------------------------------
/CONTRIBUTING.md:
--------------------------------------------------------------------------------
See the [Atom contributing guide](https://github.com/atom/atom/blob/master/CONTRIBUTING.md)

--------------------------------------------------------------------------------
/ISSUE_TEMPLATE.md:
--------------------------------------------------------------------------------
<!--

Have you read Atom's Code of Conduct? By filing an Issue, you are expected to comply with it, including treating everyone with respect: https://github.com/atom/atom/blob/master/CODE_OF_CONDUCT.md

Do you want to ask a question? Are you looking for support? The Atom message board is the best place for getting support: https://discuss.atom.io

-->

### Prerequisites

* [ ] Put an X between the brackets on this line if you have done all of the following:
  * Reproduced the problem in Safe Mode: http://flight-manual.atom.io/hacking-atom/sections/debugging/#using-safe-mode
  * Followed all applicable steps in the debugging guide: http://flight-manual.atom.io/hacking-atom/sections/debugging/
  * Checked the FAQs on the message board for common solutions: https://discuss.atom.io/c/faq
  * Checked that your issue isn't already filed: https://github.com/issues?utf8=✓&q=is%3Aissue+user%3Aatom
  * Checked that there is not already an Atom package that provides the described functionality: https://atom.io/packages

### Description

[Description of the issue]

### Steps to Reproduce

1. [First Step]
2. [Second Step]
3. [and so on...]

**Expected behavior:** [What you expect to happen]

**Actual behavior:** [What actually happens]

**Reproduces how often:** [What percentage of the time does it reproduce?]

### Versions

You can get this information by copying and pasting the output of `atom --version` and `apm --version` from the command line. Also, please include the OS and what version of the OS you're running.

### Additional Information

Any additional information, configuration or data that might be necessary to reproduce the issue.

--------------------------------------------------------------------------------
/LICENSE.md:
--------------------------------------------------------------------------------
Copyright (c) 2014 GitHub Inc.

Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
"Software"), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:

The above copyright notice and this permission notice shall be
included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE
LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION
OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

--------------------------------------------------------------------------------
/PULL_REQUEST_TEMPLATE.md:
--------------------------------------------------------------------------------
### Requirements

* Filling out the template is required. Any pull request that does not include enough information to be reviewed in a timely manner may be closed at the maintainers' discretion.
* All new code requires tests to ensure against regressions.

### Description of the Change

<!--

We must be able to understand the design of your change from this description. If we can't get a good idea of what the code will be doing from the description here, the pull request may be closed at the maintainers' discretion. Keep in mind that the maintainer reviewing this PR may not be familiar with or have worked with the code here recently, so please walk us through the concepts.

-->

### Alternate Designs

<!-- Explain what other alternates were considered and why the proposed version was selected -->

### Benefits

<!-- What benefits will be realized by the code change? -->

### Possible Drawbacks

<!-- What are the possible side-effects or negative impacts of the code change? -->

### Applicable Issues

<!-- Enter any applicable Issues here -->

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
##### Atom and all repositories under Atom will be archived on December 15, 2022. Learn more in our [official announcement](https://github.blog/2022-06-08-sunsetting-atom/)
# TOML language support in Atom
[![OS X Build Status](https://travis-ci.org/atom/language-toml.svg?branch=master)](https://travis-ci.org/atom/language-toml)
[![Windows Build Status](https://ci.appveyor.com/api/projects/status/kohao3fjyk6xv0sc/branch/master?svg=true)](https://ci.appveyor.com/project/Atom/language-toml/branch/master)
[![Dependency Status](https://david-dm.org/atom/language-toml.svg)](https://david-dm.org/atom/language-toml)

Adds syntax highlighting for [TOML](https://github.com/toml-lang/toml) in Atom.
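
A small illustrative snippet (an editor's example echoing values used in the package specs, not a file in this repository) showing the constructs the grammar scopes, from keys and strings to dates and tables:

```toml
# A comment
title = "TOML Example"

[owner]
dob = 1979-05-27T07:32:00Z

[[products]]
name = "Hammer"
sku = 738594937
```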

Contributions are greatly appreciated. Please fork this repository and open a pull request to add snippets, make grammar tweaks, etc.

--------------------------------------------------------------------------------
/coffeelint.json:
--------------------------------------------------------------------------------
{
  "max_line_length": {
    "level": "ignore"
  },
  "no_empty_param_list": {
    "level": "error"
  },
  "arrow_spacing": {
    "level": "error"
  },
  "no_interpolation_in_single_quotes": {
    "level": "error"
  },
  "no_debugger": {
    "level": "error"
  },
  "prefer_english_operator": {
    "level": "error"
  },
  "colon_assignment_spacing": {
    "spacing": {
      "left": 0,
      "right": 1
    },
    "level": "error"
  },
  "braces_spacing": {
    "spaces": 0,
    "level": "error"
  },
  "spacing_after_comma": {
    "level": "error"
  },
  "no_stand_alone_at": {
    "level": "error"
  }
}

--------------------------------------------------------------------------------
/grammars/toml.cson:
--------------------------------------------------------------------------------
'name': 'TOML'
'scopeName': 'source.toml'
'fileTypes': ['toml']
'patterns': [
  {
    'include': '#comment'
  }
  {
    'match': '(?:^\\s*)((\\[\\[)[^\\]]+(\\]\\]))'
    'captures':
      '1': 'name': 'entity.name.section.table.array.toml'
      '2': 'name': 'punctuation.definition.table.array.begin.toml'
      '3': 'name': 'punctuation.definition.table.array.end.toml'
  }
  {
    'match': '(?:^\\s*)((\\[)[^\\]]+(\\]))'
    'captures':
      '1': 'name': 'entity.name.section.table.toml'
      '2': 'name': 'punctuation.definition.table.begin.toml'
      '3': 'name': 'punctuation.definition.table.end.toml'
  }
  {
    'begin': '([A-Za-z0-9_-]+)\\s*(=)\\s*' # IMPORTANT: Do not replace with [\\w-]. \\w includes more than just a-z.
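    # Editor's note: per spec/toml-spec.coffee, this deliberately ASCII-only
    # pattern scopes bare keys such as `key` and `1key_-34`, while leaving
    # non-ASCII keys like `ʎǝʞ` unscoped.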
    'beginCaptures':
      '1':
        'name': 'variable.other.key.toml'
      '2':
        'name': 'keyword.operator.assignment.toml'
    'end': '(?!\\G)'
    'patterns': [
      {
        'include': '#values'
      }
    ]
  }
  {
    'begin': '((")(.*?)("))\\s*(=)\\s*' # This one is .* because " can be escaped
    'beginCaptures':
      '1':
        'name': 'string.quoted.double.toml'
      '2':
        'name': 'punctuation.definition.string.begin.toml'
      '3':
        'name': 'variable.other.key.toml'
        'patterns': [
          {
            'include': '#string-escapes'
          }
        ]
      '4':
        'name': 'punctuation.definition.string.end.toml'
      '5':
        'name': 'keyword.operator.assignment.toml'
    'end': '(?!\\G)'
    'patterns': [
      {
        'include': '#values'
      }
    ]
  }
  {
    'begin': "((')([^']*)('))\\s*(=)\\s*"
    'beginCaptures':
      '1':
        'name': 'string.quoted.single.toml'
      '2':
        'name': 'punctuation.definition.string.begin.toml'
      '3':
        'name': 'variable.other.key.toml'
      '4':
        'name': 'punctuation.definition.string.end.toml'
      '5':
        'name': 'keyword.operator.assignment.toml'
    'end': '(?!\\G)'
    'patterns': [
      {
        'include': '#values'
      }
    ]
  }
]
'repository':
  'comment':
    'patterns': [
      {
        'match': '(#).*$'
        'name': 'comment.line.number-sign.toml'
        'captures':
          '1': 'name': 'punctuation.definition.comment.toml'
      }
    ]
  'string-escapes':
    'match': '\\\\[btnfr"\\\\]|\\\\u[A-Fa-f0-9]{4}|\\\\U[A-Fa-f0-9]{8}'
    'name': 'constant.character.escape.toml'
  'values':
    'patterns': [
      {
        'begin': '\\['
        'beginCaptures':
          '0':
            'name': 'punctuation.definition.array.begin.toml'
        'end': '\\]'
        'endCaptures':
          '0':
            'name': 'punctuation.definition.array.end.toml'
        'patterns': [
          {
            'include': '#comment'
          }
          {
            'match': ','
            'name': 'punctuation.definition.separator.comma.toml'
          }
          {
            'include': '#values'
          }
        ]
      }
      {
        'begin': '{'
        'beginCaptures':
          '0':
            'name': 'punctuation.definition.table.inline.begin.toml'
        'end': '}'
        'endCaptures':
          '0':
            'name': 'punctuation.definition.table.inline.end.toml'
        'patterns': [
          {
            'match': ','
            'name': 'punctuation.definition.separator.comma.toml'
          }
          {
            'include': '$self'
          }
        ]
      }
      {
        'begin': '"""'
        'beginCaptures':
          '0': 'name': 'punctuation.definition.string.begin.toml'
        'end': '"""'
        'endCaptures':
          '0': 'name': 'punctuation.definition.string.end.toml'
        'name': 'string.quoted.double.block.toml'
        'patterns': [
          {
            'include': '#string-escapes'
          }
          {
            # Line continuation with backslashes
            'match': '\\\\(?=\\s*$)'
            'name': 'constant.character.escape.toml'
          }
        ]
      }
      {
        'begin': "'''"
        'beginCaptures':
          '0': 'name': 'punctuation.definition.string.begin.toml'
        'end': "'''"
        'endCaptures':
          '0': 'name': 'punctuation.definition.string.end.toml'
        'name': 'string.quoted.single.block.toml'
      }
      {
        'begin': '"'
        'beginCaptures':
          '0': 'name': 'punctuation.definition.string.begin.toml'
        'end': '"'
        'endCaptures':
          '0': 'name': 'punctuation.definition.string.end.toml'
        'name': 'string.quoted.double.toml'
        'patterns': [
          {
            'include': '#string-escapes'
          }
        ]
      }
      {
        'begin': "'"
        'beginCaptures':
          '0': 'name': 'punctuation.definition.string.begin.toml'
        'end': "'"
        'endCaptures':
          '0': 'name': 'punctuation.definition.string.end.toml'
        'name': 'string.quoted.single.toml'
      }
      {
        'match': 'true'
        'name': 'constant.language.boolean.true.toml'
      }
      {
        'match': 'false'
        'name': 'constant.language.boolean.false.toml'
      }
      {
        'include': '#date-time'
      }
      {
        'include': '#numbers'
      }
      {
        'match': '.+'
        'name': 'invalid.illegal.toml'
      }
    ]
  'date-time':
    'patterns': [
      {
        # Offset date-time
        # https://github.com/toml-lang/toml#offset-date-time
        'match': '''(?x)
          \\d{4}-\\d{2}-\\d{2}              # YYYY-MM-DD
          (T|[ ])                           # T or space
          \\d{2}:\\d{2}:\\d{2}(?:\\.\\d+)?  # HH:MM:SS.precision
          (?:
            (Z)
            |
            ([+-])\\d{2}:\\d{2}             # +/- HH:MM
          )
        '''
        'name': 'constant.numeric.date.toml'
        'captures':
          '1':
            'name': 'keyword.other.time.toml'
          '2':
            'name': 'keyword.other.offset.toml'
          '3':
            'name': 'keyword.other.offset.toml'
      }
      {
        # Local date-time
        # https://github.com/toml-lang/toml#local-date-time
        'match': '''(?x)
          \\d{4}-\\d{2}-\\d{2}              # YYYY-MM-DD
          (T|[ ])                           # T or space
          \\d{2}:\\d{2}:\\d{2}(?:\\.\\d+)?  # HH:MM:SS.precision
        '''
        'name': 'constant.numeric.date.toml'
        'captures':
          '1':
            'name': 'keyword.other.time.toml'
      }
      {
        # Local date
        # https://github.com/toml-lang/toml#local-date
        'match': '\\d{4}-\\d{2}-\\d{2}'
        'name': 'constant.numeric.date.toml'
      }
      {
        # Local time
        # https://github.com/toml-lang/toml#local-time
        'match': '\\d{2}:\\d{2}:\\d{2}(?:\\.\\d+)?'
        'name': 'constant.numeric.date.toml'
      }
    ]
  'numbers':
    # https://github.com/toml-lang/toml#integer
    'patterns': [
      {
        # Handles decimal integers & floats
        'match': '''(?x)
          [+-]?                    # Optional +/-
          (
            0                      # Just a zero
            |
            [1-9](_?\\d)*          # Or a non-zero number (no leading zeros allowed)
          )
          (\\.\\d(_?\\d)*)?        # Optional fractional portion
          ([eE][+-]?\\d(_?\\d)*)?  # Optional exponent
          \\b
        '''
        'name': 'constant.numeric.toml'
      }
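      # Editor's note: per the spec this pattern accepts values such as `+99`,
      # `1_000`, `1_2_3_4_5`, and floats like `6.626e-34`, while `01`, `1__2`,
      # and `1_` fall through to the invalid.illegal.toml catch-all in #values.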
      {
        'match': '[+-]?(inf|nan)\\b'
        'name': 'constant.numeric.$1.toml'
      }
      {
        # Hex
        'match': '0x[0-9A-Fa-f](_?[0-9A-Fa-f])*\\b'
        'name': 'constant.numeric.hex.toml'
      }
      {
        # Octal
        'match': '0o[0-7](_?[0-7])*\\b'
        'name': 'constant.numeric.octal.toml'
      }
      {
        # Binary
        'match': '0b[01](_?[01])*\\b'
        'name': 'constant.numeric.binary.toml'
      }
    ]

--------------------------------------------------------------------------------
/package-lock.json:
--------------------------------------------------------------------------------
{
  "name": "language-toml",
  "version": "0.20.0",
  "lockfileVersion": 1,
  "requires": true,
  "dependencies": {
    "balanced-match": {
      "version": "1.0.0",
      "resolved": "https://registry.npmjs.org/balanced-match/-/balanced-match-1.0.0.tgz",
      "integrity": "sha1-ibTRmasr7kneFk6gK4nORi1xt2c=",
      "dev": true
    },
    "brace-expansion": {
      "version": "1.1.11",
      "resolved": "https://registry.npmjs.org/brace-expansion/-/brace-expansion-1.1.11.tgz",
      "integrity": "sha512-iCuPHDFgrHX7H2vEI/5xpz07zSHB00TpugqhmYtVmMO6518mCuRMoOYFldEBl0g187ufozdaHgWKcYFb61qGiA==",
      "dev": true,
      "requires": {
        "balanced-match": "^1.0.0",
        "concat-map": "0.0.1"
      }
    },
    "coffee-script": {
      "version": "1.11.1",
      "resolved": "https://registry.npmjs.org/coffee-script/-/coffee-script-1.11.1.tgz",
      "integrity": "sha1-vxxHrWREOg2V0S3ysUfMCk2q1uk=",
      "dev": true
    },
    "coffeelint": {
      "version": "1.16.2",
      "resolved": "https://registry.npmjs.org/coffeelint/-/coffeelint-1.16.2.tgz",
      "integrity": "sha512-6mzgOo4zb17WfdrSui/cSUEgQ0AQkW3gXDht+6lHkfkqGUtSYKwGdGcXsDfAyuScVzTlTtKdfwkAlJWfqul7zg==",
      "dev": true,
      "requires": {
        "coffee-script": "~1.11.0",
        "glob": "^7.0.6",
        "ignore": "^3.0.9",
        "optimist": "^0.6.1",
        "resolve": "^0.6.3",
        "strip-json-comments": "^1.0.2"
      }
    },
    "concat-map": {
      "version": "0.0.1",
      "resolved": "https://registry.npmjs.org/concat-map/-/concat-map-0.0.1.tgz",
      "integrity": "sha1-2Klr13/Wjfd5OnMDajug1UBdR3s=",
      "dev": true
    },
    "fs.realpath": {
      "version": "1.0.0",
      "resolved": "https://registry.npmjs.org/fs.realpath/-/fs.realpath-1.0.0.tgz",
      "integrity": "sha1-FQStJSMVjKpA20onh8sBQRmU6k8=",
      "dev": true
    },
    "glob": {
      "version": "7.1.3",
      "resolved": "https://registry.npmjs.org/glob/-/glob-7.1.3.tgz",
      "integrity": "sha512-vcfuiIxogLV4DlGBHIUOwI0IbrJ8HWPc4MU7HzviGeNho/UJDfi6B5p3sHeWIQ0KGIU0Jpxi5ZHxemQfLkkAwQ==",
      "dev": true,
      "requires": {
        "fs.realpath": "^1.0.0",
        "inflight": "^1.0.4",
        "inherits": "2",
        "minimatch": "^3.0.4",
        "once": "^1.3.0",
        "path-is-absolute": "^1.0.0"
      }
    },
    "ignore": {
      "version": "3.3.10",
      "resolved": "https://registry.npmjs.org/ignore/-/ignore-3.3.10.tgz",
      "integrity": "sha512-Pgs951kaMm5GXP7MOvxERINe3gsaVjUWFm+UZPSq9xYriQAksyhg0csnS0KXSNRD5NmNdapXEpjxG49+AKh/ug==",
      "dev": true
    },
    "inflight": {
      "version": "1.0.6",
      "resolved": "https://registry.npmjs.org/inflight/-/inflight-1.0.6.tgz",
      "integrity": "sha1-Sb1jMdfQLQwJvJEKEHW6gWW1bfk=",
      "dev": true,
      "requires": {
        "once": "^1.3.0",
        "wrappy": "1"
      }
    },
    "inherits": {
"2.0.3", 87 | "resolved": "https://registry.npmjs.org/inherits/-/inherits-2.0.3.tgz", 88 | "integrity": "sha1-Yzwsg+PaQqUC9SRmAiSA9CCCYd4=", 89 | "dev": true 90 | }, 91 | "minimatch": { 92 | "version": "3.0.4", 93 | "resolved": "https://registry.npmjs.org/minimatch/-/minimatch-3.0.4.tgz", 94 | "integrity": "sha512-yJHVQEhyqPLUTgt9B83PXu6W3rx4MvvHvSUvToogpwoGDOUQ+yDrR0HRot+yOCdCO7u4hX3pWft6kWBBcqh0UA==", 95 | "dev": true, 96 | "requires": { 97 | "brace-expansion": "^1.1.7" 98 | } 99 | }, 100 | "minimist": { 101 | "version": "0.0.10", 102 | "resolved": "https://registry.npmjs.org/minimist/-/minimist-0.0.10.tgz", 103 | "integrity": "sha1-3j+YVD2/lggr5IrRoMfNqDYwHc8=", 104 | "dev": true 105 | }, 106 | "once": { 107 | "version": "1.4.0", 108 | "resolved": "https://registry.npmjs.org/once/-/once-1.4.0.tgz", 109 | "integrity": "sha1-WDsap3WWHUsROsF9nFC6753Xa9E=", 110 | "dev": true, 111 | "requires": { 112 | "wrappy": "1" 113 | } 114 | }, 115 | "optimist": { 116 | "version": "0.6.1", 117 | "resolved": "https://registry.npmjs.org/optimist/-/optimist-0.6.1.tgz", 118 | "integrity": "sha1-2j6nRob6IaGaERwybpDrFaAZZoY=", 119 | "dev": true, 120 | "requires": { 121 | "minimist": "~0.0.1", 122 | "wordwrap": "~0.0.2" 123 | } 124 | }, 125 | "path-is-absolute": { 126 | "version": "1.0.1", 127 | "resolved": "https://registry.npmjs.org/path-is-absolute/-/path-is-absolute-1.0.1.tgz", 128 | "integrity": "sha1-F0uSaHNVNP+8es5r9TpanhtcX18=", 129 | "dev": true 130 | }, 131 | "resolve": { 132 | "version": "0.6.3", 133 | "resolved": "https://registry.npmjs.org/resolve/-/resolve-0.6.3.tgz", 134 | "integrity": "sha1-3ZV5gufnNt699TtYpN2RdUV13UY=", 135 | "dev": true 136 | }, 137 | "strip-json-comments": { 138 | "version": "1.0.4", 139 | "resolved": "https://registry.npmjs.org/strip-json-comments/-/strip-json-comments-1.0.4.tgz", 140 | "integrity": "sha1-HhX7ysl9Pumb8tc7TGVrCCu6+5E=", 141 | "dev": true 142 | }, 143 | "wordwrap": { 144 | "version": "0.0.3", 145 | "resolved": "https://registry.npmjs.org/wordwrap/-/wordwrap-0.0.3.tgz", 146 | "integrity": "sha1-o9XabNXAvAAI03I0u68b7WMFkQc=", 147 | "dev": true 148 | }, 149 | "wrappy": { 150 | "version": "1.0.2", 151 | "resolved": "https://registry.npmjs.org/wrappy/-/wrappy-1.0.2.tgz", 152 | "integrity": "sha1-tSQ9jz7BqjXxNkYFvA0QNuMKtp8=", 153 | "dev": true 154 | } 155 | } 156 | } 157 | -------------------------------------------------------------------------------- /package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "language-toml", 3 | "version": "0.20.0", 4 | "description": "Syntax highlighting for Tom's Obvious, Minimal Language (TOML).", 5 | "repository": "https://github.com/atom/language-toml", 6 | "license": "MIT", 7 | "engines": { 8 | "atom": "*" 9 | }, 10 | "devDependencies": { 11 | "coffeelint": "^1.10.1" 12 | } 13 | } 14 | -------------------------------------------------------------------------------- /settings/language-toml.cson: -------------------------------------------------------------------------------- 1 | '.source.toml': 2 | 'editor': 3 | 'commentStart': '# ' 4 | -------------------------------------------------------------------------------- /spec/toml-spec.coffee: -------------------------------------------------------------------------------- 1 | describe "TOML grammar", -> 2 | grammar = null 3 | 4 | beforeEach -> 5 | waitsForPromise -> 6 | atom.packages.activatePackage("language-toml") 7 | 8 | runs -> 9 | grammar = atom.grammars.grammarForScopeName('source.toml') 10 | 11 | it "parses the 
grammar", -> 12 | expect(grammar).toBeTruthy() 13 | expect(grammar.scopeName).toBe "source.toml" 14 | 15 | it "tokenizes comments", -> 16 | {tokens} = grammar.tokenizeLine("# I am a comment") 17 | expect(tokens[0]).toEqual value: "#", scopes: ["source.toml", "comment.line.number-sign.toml", "punctuation.definition.comment.toml"] 18 | expect(tokens[1]).toEqual value: " I am a comment", scopes: ["source.toml", "comment.line.number-sign.toml"] 19 | 20 | {tokens} = grammar.tokenizeLine("# = I am also a comment!") 21 | expect(tokens[0]).toEqual value: "#", scopes: ["source.toml", "comment.line.number-sign.toml", "punctuation.definition.comment.toml"] 22 | expect(tokens[1]).toEqual value: " = I am also a comment!", scopes: ["source.toml", "comment.line.number-sign.toml"] 23 | 24 | {tokens} = grammar.tokenizeLine("#Nope = still a comment") 25 | expect(tokens[0]).toEqual value: "#", scopes: ["source.toml", "comment.line.number-sign.toml", "punctuation.definition.comment.toml"] 26 | expect(tokens[1]).toEqual value: "Nope = still a comment", scopes: ["source.toml", "comment.line.number-sign.toml"] 27 | 28 | {tokens} = grammar.tokenizeLine(" #Whitespace = tricky") 29 | expect(tokens[1]).toEqual value: "#", scopes: ["source.toml", "comment.line.number-sign.toml", "punctuation.definition.comment.toml"] 30 | expect(tokens[2]).toEqual value: "Whitespace = tricky", scopes: ["source.toml", "comment.line.number-sign.toml"] 31 | 32 | it "tokenizes strings", -> 33 | {tokens} = grammar.tokenizeLine('foo = "I am a string"') 34 | expect(tokens[4]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.begin.toml"] 35 | expect(tokens[5]).toEqual value: 'I am a string', scopes: ["source.toml", "string.quoted.double.toml"] 36 | expect(tokens[6]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.end.toml"] 37 | 38 | {tokens} = grammar.tokenizeLine('foo = "I\'m \\n escaped"') 39 | expect(tokens[4]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.begin.toml"] 40 | expect(tokens[5]).toEqual value: "I'm ", scopes: ["source.toml", "string.quoted.double.toml"] 41 | expect(tokens[6]).toEqual value: "\\n", scopes: ["source.toml", "string.quoted.double.toml", "constant.character.escape.toml"] 42 | expect(tokens[7]).toEqual value: " escaped", scopes: ["source.toml", "string.quoted.double.toml"] 43 | expect(tokens[8]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.end.toml"] 44 | 45 | {tokens} = grammar.tokenizeLine("foo = 'I am not \\n escaped'") 46 | expect(tokens[4]).toEqual value: "'", scopes: ["source.toml", "string.quoted.single.toml", "punctuation.definition.string.begin.toml"] 47 | expect(tokens[5]).toEqual value: 'I am not \\n escaped', scopes: ["source.toml", "string.quoted.single.toml"] 48 | expect(tokens[6]).toEqual value: "'", scopes: ["source.toml", "string.quoted.single.toml", "punctuation.definition.string.end.toml"] 49 | 50 | {tokens} = grammar.tokenizeLine('foo = "Equal sign ahead = no problem"') 51 | expect(tokens[4]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.begin.toml"] 52 | expect(tokens[5]).toEqual value: 'Equal sign ahead = no problem', scopes: ["source.toml", "string.quoted.double.toml"] 53 | expect(tokens[6]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.end.toml"] 54 | 
  it "does not tokenize equal signs within strings", ->
    {tokens} = grammar.tokenizeLine('pywinusb = { version = "*", os_name = "==\'nt\'", index="pypi"}')
    expect(tokens[20]).toEqual value: "=='nt'", scopes: ["source.toml", "string.quoted.double.toml"]

  it "tokenizes multiline strings", ->
    lines = grammar.tokenizeLines '''foo = """
    I am a
    string
    """
    '''
    expect(lines[0][4]).toEqual value: '"""', scopes: ["source.toml", "string.quoted.double.block.toml", "punctuation.definition.string.begin.toml"]
    expect(lines[1][0]).toEqual value: 'I am a', scopes: ["source.toml", "string.quoted.double.block.toml"]
    expect(lines[2][0]).toEqual value: 'string', scopes: ["source.toml", "string.quoted.double.block.toml"]
    expect(lines[3][0]).toEqual value: '"""', scopes: ["source.toml", "string.quoted.double.block.toml", "punctuation.definition.string.end.toml"]

    lines = grammar.tokenizeLines """foo = '''
    I am a
    string
    '''
    """
    expect(lines[0][4]).toEqual value: "'''", scopes: ["source.toml", "string.quoted.single.block.toml", "punctuation.definition.string.begin.toml"]
    expect(lines[1][0]).toEqual value: 'I am a', scopes: ["source.toml", "string.quoted.single.block.toml"]
    expect(lines[2][0]).toEqual value: 'string', scopes: ["source.toml", "string.quoted.single.block.toml"]
    expect(lines[3][0]).toEqual value: "'''", scopes: ["source.toml", "string.quoted.single.block.toml", "punctuation.definition.string.end.toml"]

  it "tokenizes escape characters in double-quoted multiline strings", ->
    lines = grammar.tokenizeLines '''foo = """
    I am\\u0020a
    \\qstring
    with\\UaBcDE3F2escape characters\\nyay
    """
    '''
    expect(lines[0][4]).toEqual value: '"""', scopes: ["source.toml", "string.quoted.double.block.toml", "punctuation.definition.string.begin.toml"]
    expect(lines[1][0]).toEqual value: 'I am', scopes: ["source.toml", "string.quoted.double.block.toml"]
    expect(lines[1][1]).toEqual value: '\\u0020', scopes: ["source.toml", "string.quoted.double.block.toml", "constant.character.escape.toml"]
    expect(lines[2][0]).toEqual value: '\\qstring', scopes: ["source.toml", "string.quoted.double.block.toml"]
    expect(lines[3][0]).toEqual value: 'with', scopes: ["source.toml", "string.quoted.double.block.toml"]
    expect(lines[3][1]).toEqual value: '\\UaBcDE3F2', scopes: ["source.toml", "string.quoted.double.block.toml", "constant.character.escape.toml"]
    expect(lines[3][2]).toEqual value: 'escape characters', scopes: ["source.toml", "string.quoted.double.block.toml"]
    expect(lines[3][3]).toEqual value: '\\n', scopes: ["source.toml", "string.quoted.double.block.toml", "constant.character.escape.toml"]
    expect(lines[3][4]).toEqual value: 'yay', scopes: ["source.toml", "string.quoted.double.block.toml"]
    expect(lines[4][0]).toEqual value: '"""', scopes: ["source.toml", "string.quoted.double.block.toml", "punctuation.definition.string.end.toml"]

  it "tokenizes line continuation characters in double-quoted multiline strings", ->
    lines = grammar.tokenizeLines '''foo = """
    I am a
    string \\
    with line-continuation\\ \t
    yay
    """
    '''
    expect(lines[0][4]).toEqual value: '"""', scopes: ["source.toml", "string.quoted.double.block.toml", "punctuation.definition.string.begin.toml"]
    expect(lines[1][0]).toEqual value: 'I am a', scopes: ["source.toml", "string.quoted.double.block.toml"]
    expect(lines[2][0]).toEqual value: 'string ', scopes: ["source.toml", "string.quoted.double.block.toml"]
    expect(lines[2][1]).toEqual value: '\\', scopes: ["source.toml", "string.quoted.double.block.toml", "constant.character.escape.toml"]
    expect(lines[3][0]).toEqual value: 'with line-continuation', scopes: ["source.toml", "string.quoted.double.block.toml"]
    expect(lines[3][1]).toEqual value: '\\', scopes: ["source.toml", "string.quoted.double.block.toml", "constant.character.escape.toml"]
    expect(lines[3][2]).toEqual value: ' \t', scopes: ["source.toml", "string.quoted.double.block.toml"]
    expect(lines[4][0]).toEqual value: 'yay', scopes: ["source.toml", "string.quoted.double.block.toml"]
    expect(lines[5][0]).toEqual value: '"""', scopes: ["source.toml", "string.quoted.double.block.toml", "punctuation.definition.string.end.toml"]

  it "does not tokenize escape characters in single-quoted multiline strings", ->
    lines = grammar.tokenizeLines """foo = '''
    I am\\u0020a
    \\qstring
    with\\UaBcDE3F2no escape characters\\naw
    '''
    """
    expect(lines[0][4]).toEqual value: "'''", scopes: ["source.toml", "string.quoted.single.block.toml", "punctuation.definition.string.begin.toml"]
    expect(lines[1][0]).toEqual value: 'I am\\u0020a', scopes: ["source.toml", "string.quoted.single.block.toml"]
    expect(lines[2][0]).toEqual value: '\\qstring', scopes: ["source.toml", "string.quoted.single.block.toml"]
    expect(lines[3][0]).toEqual value: 'with\\UaBcDE3F2no escape characters\\naw', scopes: ["source.toml", "string.quoted.single.block.toml"]
    expect(lines[4][0]).toEqual value: "'''", scopes: ["source.toml", "string.quoted.single.block.toml", "punctuation.definition.string.end.toml"]

  it "does not tokenize line continuation characters in single-quoted multiline strings", ->
    lines = grammar.tokenizeLines """foo = '''
    I am a
    string \\
    with no line-continuation\\ \t
    aw
    '''
    """
    expect(lines[0][4]).toEqual value: "'''", scopes: ["source.toml", "string.quoted.single.block.toml", "punctuation.definition.string.begin.toml"]
    expect(lines[1][0]).toEqual value: 'I am a', scopes: ["source.toml", "string.quoted.single.block.toml"]
    expect(lines[2][0]).toEqual value: 'string \\', scopes: ["source.toml", "string.quoted.single.block.toml"]
    expect(lines[3][0]).toEqual value: 'with no line-continuation\\ \t', scopes: ["source.toml", "string.quoted.single.block.toml"]
    expect(lines[4][0]).toEqual value: 'aw', scopes: ["source.toml", "string.quoted.single.block.toml"]
    expect(lines[5][0]).toEqual value: "'''", scopes: ["source.toml", "string.quoted.single.block.toml", "punctuation.definition.string.end.toml"]

  it "tokenizes booleans", ->
    {tokens} = grammar.tokenizeLine("foo = true")
    expect(tokens[4]).toEqual value: "true", scopes: ["source.toml", "constant.language.boolean.true.toml"]

    {tokens} = grammar.tokenizeLine("foo = false")
    expect(tokens[4]).toEqual value: "false", scopes: ["source.toml", "constant.language.boolean.false.toml"]

  it "tokenizes integers", ->
    for int in ["+99", "42", "0", "-17", "1_000", "1_2_3_4_5"]
      {tokens} = grammar.tokenizeLine("foo = #{int}")
      expect(tokens[4]).toEqual value: int, scopes: ["source.toml", "constant.numeric.toml"]

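  # Editor's note: the next two tests pin down the "no leading zeros" and
  # underscore-separator rules encoded in the grammar's [1-9](_?\d)* pattern.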
  it "does not tokenize a number with leading zeros as an integer", ->
    {tokens} = grammar.tokenizeLine("foo = 01")
    expect(tokens[4]).toEqual value: "01", scopes: ["source.toml", "invalid.illegal.toml"]

  it "does not tokenize a number with an underscore not followed by a digit as an integer", ->
    {tokens} = grammar.tokenizeLine("foo = 1__2")
    expect(tokens[4]).toEqual value: "1__2", scopes: ["source.toml", "invalid.illegal.toml"]

    {tokens} = grammar.tokenizeLine("foo = 1_")
    expect(tokens[4]).toEqual value: "1_", scopes: ["source.toml", "invalid.illegal.toml"]

  it "tokenizes hex integers", ->
    for int in ["0xDEADBEEF", "0xdeadbeef", "0xdead_beef"]
      {tokens} = grammar.tokenizeLine("foo = #{int}")
      expect(tokens[4]).toEqual value: int, scopes: ["source.toml", "constant.numeric.hex.toml"]

  it "tokenizes octal integers", ->
    {tokens} = grammar.tokenizeLine("foo = 0o755")
    expect(tokens[4]).toEqual value: "0o755", scopes: ["source.toml", "constant.numeric.octal.toml"]

  it "tokenizes binary integers", ->
    {tokens} = grammar.tokenizeLine("foo = 0b11010110")
    expect(tokens[4]).toEqual value: "0b11010110", scopes: ["source.toml", "constant.numeric.binary.toml"]

  it "does not tokenize a number followed by other characters as a number", ->
    {tokens} = grammar.tokenizeLine("foo = 0xdeadbeefs")
    expect(tokens[4]).toEqual value: "0xdeadbeefs", scopes: ["source.toml", "invalid.illegal.toml"]

  it "tokenizes floats", ->
    for float in ["+1.0", "3.1415", "-0.01", "5e+22", "1e6", "-2E-2", "6.626e-34", "9_224_617.445_991_228_313", "1e1_000"]
      {tokens} = grammar.tokenizeLine("foo = #{float}")
      expect(tokens[4]).toEqual value: float, scopes: ["source.toml", "constant.numeric.toml"]

  it "tokenizes inf and nan", ->
    for sign in ["+", "-", ""]
      for float in ["inf", "nan"]
        {tokens} = grammar.tokenizeLine("foo = #{sign}#{float}")
        expect(tokens[4]).toEqual value: "#{sign}#{float}", scopes: ["source.toml", "constant.numeric.#{float}.toml"]

  it "tokenizes offset date-times", ->
    {tokens} = grammar.tokenizeLine("foo = 1979-05-27T07:32:00Z")
    expect(tokens[4]).toEqual value: "1979-05-27", scopes: ["source.toml", "constant.numeric.date.toml"]
    expect(tokens[5]).toEqual value: "T", scopes: ["source.toml", "constant.numeric.date.toml", "keyword.other.time.toml"]
    expect(tokens[6]).toEqual value: "07:32:00", scopes: ["source.toml", "constant.numeric.date.toml"]
    expect(tokens[7]).toEqual value: "Z", scopes: ["source.toml", "constant.numeric.date.toml", "keyword.other.offset.toml"]

    {tokens} = grammar.tokenizeLine("foo = 1979-05-27 07:32:00Z")
    expect(tokens[4]).toEqual value: "1979-05-27", scopes: ["source.toml", "constant.numeric.date.toml"]
    expect(tokens[5]).toEqual value: " ", scopes: ["source.toml", "constant.numeric.date.toml", "keyword.other.time.toml"]
    expect(tokens[6]).toEqual value: "07:32:00", scopes: ["source.toml", "constant.numeric.date.toml"]
    expect(tokens[7]).toEqual value: "Z", scopes: ["source.toml", "constant.numeric.date.toml", "keyword.other.offset.toml"]

    {tokens} = grammar.tokenizeLine("foo = 1979-05-27T00:32:00.999999-07:00")
    expect(tokens[4]).toEqual value: "1979-05-27", scopes: ["source.toml", "constant.numeric.date.toml"]
    expect(tokens[5]).toEqual value: "T", scopes: ["source.toml", "constant.numeric.date.toml", "keyword.other.time.toml"]
"constant.numeric.date.toml"] 212 | expect(tokens[7]).toEqual value: "-", scopes: ["source.toml", "constant.numeric.date.toml", "keyword.other.offset.toml"] 213 | expect(tokens[8]).toEqual value: "07:00", scopes: ["source.toml", "constant.numeric.date.toml"] 214 | 215 | it "tokenizes local date-times", -> 216 | {tokens} = grammar.tokenizeLine("foo = 1979-05-27T00:32:00.999999") 217 | expect(tokens[4]).toEqual value: "1979-05-27", scopes: ["source.toml", "constant.numeric.date.toml"] 218 | expect(tokens[5]).toEqual value: "T", scopes: ["source.toml", "constant.numeric.date.toml", "keyword.other.time.toml"] 219 | expect(tokens[6]).toEqual value: "00:32:00.999999", scopes: ["source.toml", "constant.numeric.date.toml"] 220 | 221 | it "tokenizes local dates", -> 222 | {tokens} = grammar.tokenizeLine("foo = 1979-05-27") 223 | expect(tokens[4]).toEqual value: "1979-05-27", scopes: ["source.toml", "constant.numeric.date.toml"] 224 | 225 | it "tokenizes local times", -> 226 | {tokens} = grammar.tokenizeLine("foo = 00:32:00.999999") 227 | expect(tokens[4]).toEqual value: "00:32:00.999999", scopes: ["source.toml", "constant.numeric.date.toml"] 228 | 229 | it "tokenizes tables", -> 230 | {tokens} = grammar.tokenizeLine("[table]") 231 | expect(tokens[0]).toEqual value: "[", scopes: ["source.toml", "entity.name.section.table.toml", "punctuation.definition.table.begin.toml"] 232 | expect(tokens[1]).toEqual value: "table", scopes: ["source.toml", "entity.name.section.table.toml"] 233 | expect(tokens[2]).toEqual value: "]", scopes: ["source.toml", "entity.name.section.table.toml", "punctuation.definition.table.end.toml"] 234 | 235 | {tokens} = grammar.tokenizeLine(" [table]") 236 | expect(tokens[0]).toEqual value: " ", scopes: ["source.toml"] 237 | expect(tokens[1]).toEqual value: "[", scopes: ["source.toml", "entity.name.section.table.toml", "punctuation.definition.table.begin.toml"] 238 | # and so on 239 | 240 | it "tokenizes table arrays", -> 241 | {tokens} = grammar.tokenizeLine("[[table]]") 242 | expect(tokens[0]).toEqual value: "[[", scopes: ["source.toml", "entity.name.section.table.array.toml", "punctuation.definition.table.array.begin.toml"] 243 | expect(tokens[1]).toEqual value: "table", scopes: ["source.toml", "entity.name.section.table.array.toml"] 244 | expect(tokens[2]).toEqual value: "]]", scopes: ["source.toml", "entity.name.section.table.array.toml", "punctuation.definition.table.array.end.toml"] 245 | 246 | it "tokenizes keys", -> 247 | {tokens} = grammar.tokenizeLine("key =") 248 | expect(tokens[0]).toEqual value: "key", scopes: ["source.toml", "variable.other.key.toml"] 249 | expect(tokens[1]).toEqual value: " ", scopes: ["source.toml"] 250 | expect(tokens[2]).toEqual value: "=", scopes: ["source.toml", "keyword.operator.assignment.toml"] 251 | 252 | {tokens} = grammar.tokenizeLine("1key_-34 =") 253 | expect(tokens[0]).toEqual value: "1key_-34", scopes: ["source.toml", "variable.other.key.toml"] 254 | expect(tokens[1]).toEqual value: " ", scopes: ["source.toml"] 255 | expect(tokens[2]).toEqual value: "=", scopes: ["source.toml", "keyword.operator.assignment.toml"] 256 | 257 | {tokens} = grammar.tokenizeLine("ʎǝʞ =") 258 | expect(tokens[0]).toEqual value: "ʎǝʞ =", scopes: ["source.toml"] 259 | 260 | {tokens} = grammar.tokenizeLine(" =") 261 | expect(tokens[0]).toEqual value: " =", scopes: ["source.toml"] 262 | 263 | it "tokenizes quoted keys", -> 264 | {tokens} = grammar.tokenizeLine("'key' =") 265 | expect(tokens[0]).toEqual value: "'", scopes: ["source.toml", 
"string.quoted.single.toml", "punctuation.definition.string.begin.toml"] 266 | expect(tokens[1]).toEqual value: "key", scopes: ["source.toml", "string.quoted.single.toml", "variable.other.key.toml"] 267 | expect(tokens[2]).toEqual value: "'", scopes: ["source.toml", "string.quoted.single.toml", "punctuation.definition.string.end.toml"] 268 | expect(tokens[3]).toEqual value: " ", scopes: ["source.toml"] 269 | expect(tokens[4]).toEqual value: "=", scopes: ["source.toml", "keyword.operator.assignment.toml"] 270 | 271 | {tokens} = grammar.tokenizeLine("'ʎǝʞ' =") 272 | expect(tokens[0]).toEqual value: "'", scopes: ["source.toml", "string.quoted.single.toml", "punctuation.definition.string.begin.toml"] 273 | expect(tokens[1]).toEqual value: "ʎǝʞ", scopes: ["source.toml", "string.quoted.single.toml", "variable.other.key.toml"] 274 | expect(tokens[2]).toEqual value: "'", scopes: ["source.toml", "string.quoted.single.toml", "punctuation.definition.string.end.toml"] 275 | expect(tokens[3]).toEqual value: " ", scopes: ["source.toml"] 276 | expect(tokens[4]).toEqual value: "=", scopes: ["source.toml", "keyword.operator.assignment.toml"] 277 | 278 | {tokens} = grammar.tokenizeLine("'key with spaces' =") 279 | expect(tokens[0]).toEqual value: "'", scopes: ["source.toml", "string.quoted.single.toml", "punctuation.definition.string.begin.toml"] 280 | expect(tokens[1]).toEqual value: "key with spaces", scopes: ["source.toml", "string.quoted.single.toml", "variable.other.key.toml"] 281 | expect(tokens[2]).toEqual value: "'", scopes: ["source.toml", "string.quoted.single.toml", "punctuation.definition.string.end.toml"] 282 | expect(tokens[3]).toEqual value: " ", scopes: ["source.toml"] 283 | expect(tokens[4]).toEqual value: "=", scopes: ["source.toml", "keyword.operator.assignment.toml"] 284 | 285 | {tokens} = grammar.tokenizeLine("'key with colons:' =") 286 | expect(tokens[0]).toEqual value: "'", scopes: ["source.toml", "string.quoted.single.toml", "punctuation.definition.string.begin.toml"] 287 | expect(tokens[1]).toEqual value: "key with colons:", scopes: ["source.toml", "string.quoted.single.toml", "variable.other.key.toml"] 288 | expect(tokens[2]).toEqual value: "'", scopes: ["source.toml", "string.quoted.single.toml", "punctuation.definition.string.end.toml"] 289 | expect(tokens[3]).toEqual value: " ", scopes: ["source.toml"] 290 | expect(tokens[4]).toEqual value: "=", scopes: ["source.toml", "keyword.operator.assignment.toml"] 291 | 292 | {tokens} = grammar.tokenizeLine("'' =") 293 | expect(tokens[0]).toEqual value: "'", scopes: ["source.toml", "string.quoted.single.toml", "punctuation.definition.string.begin.toml"] 294 | expect(tokens[1]).toEqual value: "'", scopes: ["source.toml", "string.quoted.single.toml", "punctuation.definition.string.end.toml"] 295 | expect(tokens[2]).toEqual value: " ", scopes: ["source.toml"] 296 | expect(tokens[3]).toEqual value: "=", scopes: ["source.toml", "keyword.operator.assignment.toml"] 297 | 298 | {tokens} = grammar.tokenizeLine('"key" =') 299 | expect(tokens[0]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.begin.toml"] 300 | expect(tokens[1]).toEqual value: "key", scopes: ["source.toml", "string.quoted.double.toml", "variable.other.key.toml"] 301 | expect(tokens[2]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.end.toml"] 302 | expect(tokens[3]).toEqual value: " ", scopes: ["source.toml"] 303 | expect(tokens[4]).toEqual value: "=", scopes: 
["source.toml", "keyword.operator.assignment.toml"] 304 | 305 | {tokens} = grammar.tokenizeLine('"ʎǝʞ" =') 306 | expect(tokens[0]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.begin.toml"] 307 | expect(tokens[1]).toEqual value: "ʎǝʞ", scopes: ["source.toml", "string.quoted.double.toml", "variable.other.key.toml"] 308 | expect(tokens[2]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.end.toml"] 309 | expect(tokens[3]).toEqual value: " ", scopes: ["source.toml"] 310 | expect(tokens[4]).toEqual value: "=", scopes: ["source.toml", "keyword.operator.assignment.toml"] 311 | 312 | {tokens} = grammar.tokenizeLine('"key with spaces" =') 313 | expect(tokens[0]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.begin.toml"] 314 | expect(tokens[1]).toEqual value: "key with spaces", scopes: ["source.toml", "string.quoted.double.toml", "variable.other.key.toml"] 315 | expect(tokens[2]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.end.toml"] 316 | expect(tokens[3]).toEqual value: " ", scopes: ["source.toml"] 317 | expect(tokens[4]).toEqual value: "=", scopes: ["source.toml", "keyword.operator.assignment.toml"] 318 | 319 | {tokens} = grammar.tokenizeLine('"key with colons:" =') 320 | expect(tokens[0]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.begin.toml"] 321 | expect(tokens[1]).toEqual value: "key with colons:", scopes: ["source.toml", "string.quoted.double.toml", "variable.other.key.toml"] 322 | expect(tokens[2]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.end.toml"] 323 | expect(tokens[3]).toEqual value: " ", scopes: ["source.toml"] 324 | expect(tokens[4]).toEqual value: "=", scopes: ["source.toml", "keyword.operator.assignment.toml"] 325 | 326 | {tokens} = grammar.tokenizeLine('"key wi\\th escapes" =') 327 | expect(tokens[0]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.begin.toml"] 328 | expect(tokens[1]).toEqual value: "key wi", scopes: ["source.toml", "string.quoted.double.toml", "variable.other.key.toml"] 329 | expect(tokens[2]).toEqual value: "\\t", scopes: ["source.toml", "string.quoted.double.toml", "variable.other.key.toml", "constant.character.escape.toml"] 330 | expect(tokens[3]).toEqual value: "h escapes", scopes: ["source.toml", "string.quoted.double.toml", "variable.other.key.toml"] 331 | expect(tokens[4]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.end.toml"] 332 | expect(tokens[5]).toEqual value: " ", scopes: ["source.toml"] 333 | expect(tokens[6]).toEqual value: "=", scopes: ["source.toml", "keyword.operator.assignment.toml"] 334 | 335 | {tokens} = grammar.tokenizeLine('"key with \\" quote" =') 336 | expect(tokens[0]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.begin.toml"] 337 | expect(tokens[1]).toEqual value: "key with ", scopes: ["source.toml", "string.quoted.double.toml", "variable.other.key.toml"] 338 | expect(tokens[2]).toEqual value: '\\"', scopes: ["source.toml", "string.quoted.double.toml", "variable.other.key.toml", "constant.character.escape.toml"] 339 | expect(tokens[3]).toEqual value: " quote", scopes: ["source.toml", "string.quoted.double.toml", 
"variable.other.key.toml"] 340 | expect(tokens[4]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.end.toml"] 341 | expect(tokens[5]).toEqual value: " ", scopes: ["source.toml"] 342 | expect(tokens[6]).toEqual value: "=", scopes: ["source.toml", "keyword.operator.assignment.toml"] 343 | 344 | {tokens} = grammar.tokenizeLine('"" =') 345 | expect(tokens[0]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.begin.toml"] 346 | expect(tokens[1]).toEqual value: '"', scopes: ["source.toml", "string.quoted.double.toml", "punctuation.definition.string.end.toml"] 347 | expect(tokens[2]).toEqual value: " ", scopes: ["source.toml"] 348 | expect(tokens[3]).toEqual value: "=", scopes: ["source.toml", "keyword.operator.assignment.toml"] 349 | --------------------------------------------------------------------------------