├── .gitignore
├── README.md
├── coverage.sh
├── icon.png
├── setup.cfg
├── setup.py
├── tests.sh
├── tests
├── __main__.py
├── test_parse_date.py
├── test_parsing.py
├── test_query.log
├── test_query.py
├── test_query_lexer.py
├── test_query_parser.py
├── test_textutils.py
└── test_todos.py
├── todoflow
├── __init__.py
├── compatibility.py
├── lexer.py
├── parse_date.py
├── parser.py
├── query.py
├── query_lexer.py
├── query_parser.py
├── textutils.py
├── todoitem.py
└── todos.py
└── watch_tests.py
/.gitignore:
--------------------------------------------------------------------------------
1 |
2 | *.pyc
3 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # TodoFlow 5
2 |
3 | 
4 |
5 | TodoFlow is Python module that provides functions to parse, filter, search and modify todo lists stored in plain text files with TaskPaper syntax.
6 |
7 | ## Changelog
8 |
9 | - 2016-10-08 - 5.0.0
10 | + updates to [TaskPaper 3](https://www.taskpaper.com) queries!
11 | + drops dependency on [ply](https://github.com/dabeaz/ply)
12 | + removes separation between `Node`s and `Todos`
13 | + removes printers and file reading methods
14 | - 2015-10-02 - 4.0.2
15 | - 2015-04-24 - Removal of workflows
16 | - 2014-10-24 - Release of version 4
17 |
18 | ## Installation
19 |
20 | pip install TodoFlow
21 |
22 | ## Overview
23 |
24 | TodoFlow is based on two classes: `Todos` and `Todoitem`.
25 |
26 | ### Todos
27 |
28 | `Todos` is a tree-like collection of todo items. `Todos` have list of `subitems` and one `todoitem` (except tree root that doesn't have one).
29 |
30 | #### Creating todos
31 |
32 | You can create todos from string:
33 |
34 | ```
35 | from todoflow import Todos
36 | todos = todoflow.Todos("""
37 | project 1:
38 | - task 1
39 | - task 2 @today
40 | """)
41 | ```
42 |
43 | #### Saving todos
44 |
45 | You can stringify todos in order to write them to a file:
46 |
47 | ```
48 | >>> print(todos)
49 | project 1:
50 | - task 1
51 | - task 2 @today
52 | project 2:
53 | - task 3
54 | ```
55 |
56 | #### Filtering todos
57 |
58 | TodoFlow tries to provide the same queries as [TaskPaper 3](https://guide.taskpaper.com/formatting_queries.html). If something doesn't work the same way please create an Issue.
59 |
60 | ```
61 | >>> today = todos.filter('/*/@today')
62 | >>> print(today)
63 | project 1:
64 | - task 2 @today
65 | ```
66 |
67 | - `todos.filter(query)` - Returns new `Todos`, with items, that match given query, or are parent of one. It's analogous to how TaskPaper app works.
68 | - `todos.search(query)` - Returns iterator of `Todos` that match query (and only them).
69 |
70 | ### Todoitem
71 |
72 | `Todoitem`s belong to `Todos` and have methods to modify them and to retrive data from them:
73 |
74 | - `tag(tag_to_use, param=None)`
75 | - `remove_tag(tag_to_remove)`
76 | - `has_tag(tag)`
77 | - `get_tag_param(tag)`
78 | - `get_type(tag)`
79 | - `get_text()`
80 | - `get_line_number()`
81 | - `get_line_id()`
82 | - `edit(new_text)`
83 | - `change_to_task()`
84 | - `change_to_project()`
85 | - `change_to_note()`
86 |
87 | They can be mutaded, then and those changes are visible in every `Todos` that they belong to.
88 |
89 | ```
90 | >>> for item in todos.search('@today'):
91 | >>> item.tag('@done', '2016-10-08')
92 | >>> item.remove_tag('@today')
93 | >>> print(todos)
94 | project 1:
95 | - task 1
96 | - task 2 @done(2014-10-24)
97 | ```
98 |
99 | ## textutils
100 |
101 | Module `todoflow.textutils` provides functions
102 | that operate on strings and are used internally in TodoFlow but can be useful outside of it:
103 |
104 | - `is_task(text)`
105 | - `is_project(text)`
106 | - `is_note(text)`
107 | - `has_tag(text, tag)`
108 | - `get_tag_param(text, tag)`
109 | - `remove_tag(text, tag)`
110 | - `replace_tag(text, tag, replacement)`
111 | - `add_tag(text, tag, param=None)`
112 | - `enclose_tag(text, tag, prefix, suffix=None)`
113 | - `get_all_tags(text, include_indicator=False)`
114 | - `modify_tag_param(text, tag, modification)`
115 | - `sort_by_tag_param(texts_collection, tag, reverse=False)`
116 |
--------------------------------------------------------------------------------
/coverage.sh:
--------------------------------------------------------------------------------
1 | coverage run tests/__main__.py
2 | coverage report --omit="/Library/*","tests/*"
--------------------------------------------------------------------------------
/icon.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bevesce/TodoFlow/73f40462d9a343dcc3fa284b5d2740fbd848f6b5/icon.png
--------------------------------------------------------------------------------
/setup.cfg:
--------------------------------------------------------------------------------
1 | [metadata]
2 | description-file = README.md
--------------------------------------------------------------------------------
/setup.py:
--------------------------------------------------------------------------------
1 | #!/usr/bin/env python
2 | # -*- coding: utf-8 -*-
3 |
4 | from setuptools import setup
5 |
6 | setup(
7 | name='TodoFlow',
8 | version='5.0.0',
9 | description='taskpaper in python',
10 | author='Piotr Wilczyński',
11 | author_email='wilczynski.pi@gmail.com',
12 | url='https://github.com/bevesce/TodoFlow',
13 | packages=['todoflow'],
14 | install_requires=[],
15 | )
16 |
--------------------------------------------------------------------------------
/tests.sh:
--------------------------------------------------------------------------------
1 | echo "Python 2"
2 | python2.7 tests/
3 | echo "Python 3"
4 | python3.4 tests/
--------------------------------------------------------------------------------
/tests/__main__.py:
--------------------------------------------------------------------------------
1 | import unittest
2 |
3 | from test_parse_date import *
4 | from test_parsing import *
5 | from test_query_lexer import *
6 | from test_query_parser import *
7 | from test_query import *
8 | from test_textutils import *
9 | from test_todos import *
10 |
11 | if __name__ == '__main__':
12 | unittest.main()
13 |
--------------------------------------------------------------------------------
/tests/test_parse_date.py:
--------------------------------------------------------------------------------
1 | import unittest
2 | import datetime
3 |
4 | from todoflow.parse_date import parse_date
5 |
6 |
7 | date = datetime.datetime(2016, 10, 4, 22, 39)
8 |
9 |
10 | class TestParseDate(unittest.TestCase):
11 | def assertDate(self, date, date_string):
12 | self.assertEqual(date.strftime('%F'), date_string)
13 |
14 | def assertDatetime(self, date, date_string):
15 | self.assertEqual(date.strftime('%F %R'), date_string)
16 |
17 | def test_01(self):
18 | self.assertDate(parse_date('jan', date), '2016-01-01')
19 |
20 | def test_02(self):
21 | self.assertDate(parse_date('mon', date), '2016-10-03')
22 |
23 | def test_03(self):
24 | self.assertDate(parse_date('next mon', date), '2016-10-10')
25 |
26 | def test_04(self):
27 | self.assertDate(parse_date('last mon', date), '2016-09-26')
28 |
29 | def test_05(self):
30 | self.assertDate(parse_date('last mon + 1day', date), '2016-09-27')
31 |
32 | def test_06(self):
33 | self.assertDate(parse_date('2016', date), '2016-01-01')
34 |
35 | def test_07(self):
36 | self.assertDate(parse_date('today 1day', date), '2016-10-05')
37 |
38 | def test_08(self):
39 | self.assertDate(parse_date('today -1day', date), '2016-10-03')
40 |
41 | def test_09(self):
42 | self.assertDate(parse_date('today 1year', date), '2017-10-04')
43 |
44 | def test_10(self):
45 | self.assertDate(parse_date('2016-05', date), '2016-05-01')
46 |
47 | def test_11(self):
48 | self.assertDate(parse_date('2016-05-13years', date), '2016-05-13')
49 |
50 | def test_12(self):
51 | self.assertDate(parse_date('today + 1 week', date), '2016-10-11')
52 |
53 | def test_13(self):
54 | self.assertDate(parse_date('sun', date), '2016-10-09')
55 |
56 | def test_14(self):
57 | self.assertDate(parse_date('fri', date), '2016-10-07')
58 |
59 | def test_15(self):
60 | self.assertDate(parse_date('2016', date), '2016-01-01')
61 |
62 | def test_16(self):
63 | self.assertDate(parse_date('today +1month', date), '2016-11-04')
64 |
65 | def test_17(self):
66 | self.assertDate(parse_date('2016-12-1 +1month', date), '2017-01-01')
67 |
68 | def test_18(self):
69 | self.assertDate(parse_date('2016-12-1 +28month', date), '2019-04-01')
70 |
71 | def test_19(self):
72 | self.assertDate(parse_date('next week', date), '2016-10-10')
73 |
74 | def test_20(self):
75 | self.assertDate(parse_date('next may', date), '2017-05-01')
76 |
77 | def test_21(self):
78 | self.assertDate(parse_date('next day', date), '2016-10-05')
79 |
80 | def test_22(self):
81 | self.assertDate(parse_date('next month', date), '2016-11-01')
82 |
83 | def test_23(self):
84 | self.assertDate(parse_date('next year', date), '2017-01-01')
85 |
86 | def test_24(self):
87 | self.assertDatetime(parse_date('now', date), '2016-10-04 22:39')
88 |
89 | def test_25(self):
90 | self.assertDatetime(parse_date('next hour', date), '2016-10-04 23:00')
91 |
92 | def test_26(self):
93 | self.assertDatetime(parse_date('next min', date), '2016-10-04 22:40')
94 |
95 | def test_27(self):
96 | self.assertDatetime(parse_date('last min', date), '2016-10-04 22:38')
97 |
98 | def test_28(self):
99 | self.assertDatetime(parse_date('', date), '2016-10-04 22:39')
100 |
101 | def test_29(self):
102 | self.assertDate(parse_date('last may', date), '2015-05-01')
103 |
104 | def test_30(self):
105 | self.assertDatetime(parse_date('10:35 am', date), '2016-10-04 10:35')
106 |
107 | def test_31(self):
108 | self.assertDatetime(parse_date('10:35 pm', date), '2016-10-04 22:35')
109 |
110 | def test_32(self):
111 | self.assertDatetime(parse_date('12:00 pm', date), '2016-10-04 12:00')
112 |
113 | def test_33(self):
114 | self.assertDatetime(parse_date('12:05 am', date), '2016-10-04 00:05')
115 |
116 | def test_34(self):
117 | self.assertDatetime(parse_date('12 am', date), '2016-10-04 00:00')
118 |
119 | def test_35(self):
120 | self.assertDatetime(parse_date('2016-10-06 21:17', date), '2016-10-06 21:17')
121 |
122 | def test_36(self):
123 | self.assertDatetime(parse_date('this month', date), '2016-10-01 00:00')
124 |
125 | def test_37(self):
126 | self.assertDatetime(parse_date('this week', date), '2016-10-03 00:00')
127 |
128 | def test_38(self):
129 | self.assertDatetime(parse_date('this year', date), '2016-01-01 00:00')
130 |
131 | def test_39(self):
132 | self.assertDatetime(parse_date('this quarter', date), '2016-10-01 00:00')
133 |
134 | def test_40(self):
135 | self.assertDatetime(parse_date('last quarter', date), '2016-07-01 00:00')
136 |
137 | def test_41(self):
138 | self.assertDatetime(parse_date('next quarter', date), '2017-01-01 00:00')
139 |
140 | def test_42(self):
141 | self.assertDate(parse_date('today - 1q', date), '2016-07-04')
142 |
143 | if __name__ == '__main__':
144 | unittest.main()
145 |
--------------------------------------------------------------------------------
/tests/test_parsing.py:
--------------------------------------------------------------------------------
1 | from __future__ import unicode_literals
2 | # Ugly indentation of """ """ strings that is breaking pep8
3 | # is this way because this
4 | # is imo more readable than alternatives in this particular case
5 |
6 | import unittest
7 |
8 | import todoflow.lexer as lexer
9 | import todoflow.parser as parser
10 | from todoflow.compatibility import unicode
11 |
12 |
13 | class TestLexer(unittest.TestCase):
14 | def tokens_from(self, text):
15 | self.tokens = lexer.Lexer(text).tokens
16 | return self
17 |
18 | def are(self, expected_tokens_repr):
19 | tokens_repr = ''.join([t.tok() for t in self.tokens])
20 | self.assertEqual(tokens_repr, expected_tokens_repr)
21 | return self
22 |
23 | def test_tokens(self):
24 | self.tokens_from(
25 | """1
26 | 2
27 | 3"""
28 | ).are('***$')
29 |
30 | def test_tokens_width_indent(self):
31 | self.tokens_from(
32 | """1
33 | \t2
34 | 3"""
35 | ).are('*>*<*$')
36 |
37 | def test_tokens_with_continuos_indent(self):
38 | self.tokens_from(
39 | """1
40 | \t2
41 | \t3
42 | \t4"""
43 | ).are('*>***$')
44 |
45 | def test_tokens_with_varying_indent(self):
46 | self.tokens_from(
47 | """1
48 | \t2
49 | \t\t3
50 | \t4"""
51 | ).are('*>*>*<*$')
52 |
53 | def test_tokens_with_empty_line(self):
54 | self.tokens_from(
55 | """1
56 |
57 | 2"""
58 | ).are('*n*$')
59 |
60 | def test_tokens_with_empty_line_at_end(self):
61 | self.tokens_from(
62 | """1
63 | 2
64 | """
65 | ).are('**n$')
66 |
67 |
68 | class TestParser(unittest.TestCase):
69 | def todos_from(self, text):
70 | self.todos = parser.parse(text)
71 | self.nodes = self.todos.subitems
72 | return self
73 |
74 | def main_node(self):
75 | self._node = self.todos
76 | return self
77 |
78 | def first_node(self):
79 | return self.node(0)
80 |
81 | def second_node(self):
82 | return self.node(1)
83 |
84 | def third_node(self):
85 | return self.node(2)
86 |
87 | def node(self, index):
88 | self._node = self.nodes[index]
89 | return self
90 |
91 | def is_task(self):
92 | self.assertTrue(self._node.todoitem.type == 'task')
93 | return self
94 |
95 | def is_project(self):
96 | self.assertTrue(self._node.todoitem.type == 'project')
97 | return self
98 |
99 | def is_note(self):
100 | self.assertTrue(self._node.todoitem.type == 'note')
101 | return self
102 |
103 | def is_new_line(self):
104 | self.assertTrue(self._node.todoitem.type == 'newline')
105 | return self
106 |
107 | def has_subtasks(self, howmany):
108 | self.assertEqual(len(list(self._node.subitems)), howmany)
109 | return self
110 |
111 | def doesnt_have_item(self):
112 | self.assertEqual(self._node.todoitem, None)
113 | return self
114 |
115 | def test_single_task(self):
116 | self.todos_from(
117 | """- task"""
118 | ).first_node().is_task().has_subtasks(0)
119 |
120 | def test_single_project(self):
121 | self.todos_from(
122 | """project:"""
123 | ).first_node().is_project().has_subtasks(0)
124 |
125 | def test_single_note(self):
126 | self.todos_from(
127 | """note"""
128 | ).first_node().is_note().has_subtasks(0)
129 |
130 | def test_simple_subtasks(self):
131 | self.todos_from(
132 | """project:
133 | \t- task1
134 | \t- task2"""
135 | )
136 | self.main_node().doesnt_have_item()
137 | self.first_node().has_subtasks(2)
138 |
139 | def test_tasks_at_0_level(self):
140 | self.todos_from(
141 | """- task 1
142 | - task 2
143 | - task 3"""
144 | ).main_node().doesnt_have_item().has_subtasks(3)
145 |
146 | def test_tasks_at_0_level_with_empty_lines(self):
147 | self.todos_from(
148 | """- task 1
149 | - task 2
150 |
151 | - task 3
152 | """
153 | ).main_node().doesnt_have_item().has_subtasks(5)
154 |
155 | def test_deep_indent(self):
156 | self.todos_from(
157 | """project:
158 | \t- task 1
159 | \t\t- subtask 1
160 | \t\t\t- subsubtask 1
161 | \t\t\t- subsubtask 2
162 | \t\t- subtask 2
163 | \t- task 2
164 |
165 | """
166 | )
167 | self.main_node().doesnt_have_item().has_subtasks(3)
168 | self.first_node().is_project().has_subtasks(2)
169 | self.second_node().is_new_line().has_subtasks(0)
170 |
171 | def test_deep_indent_with_empty_line_in_the_middle(self):
172 | self.todos_from(
173 | """project:
174 | \t- task 1
175 | \t\t- subtask 1
176 | \t\t\t- subsubtask 1
177 |
178 | \t\t\t- subsubtask 2
179 | \t\t- subtask 2
180 | \t- task 2
181 |
182 | """
183 | )
184 | self.main_node().doesnt_have_item().has_subtasks(3)
185 | self.assertEqual(len(self.todos.subitems[0].subitems), 2)
186 | self.assertEqual(len(self.todos.subitems[0].subitems[0].subitems), 2)
187 | self.assertEqual(len(self.todos.subitems[0].subitems[0].subitems[0].subitems), 3)
188 | self.assertEqual(len(self.todos.subitems[0].subitems[1].subitems), 0)
189 | self.first_node().is_project().has_subtasks(2)
190 |
191 |
192 | class TestStr(unittest.TestCase):
193 | def text_after_parsing_is_the_same(self, text):
194 | todos = parser.parse(text)
195 | self.assertEqual(unicode(todos), text)
196 |
197 | def test_task(self):
198 | self.text_after_parsing_is_the_same(
199 | """- task"""
200 | )
201 |
202 | def test_project(self):
203 | self.text_after_parsing_is_the_same(
204 | """project:"""
205 | )
206 |
207 | def test_project_with_subtask(self):
208 | self.text_after_parsing_is_the_same(
209 | """project:
210 | \t- task 1
211 | \t- task 2"""
212 | )
213 |
214 | def test_deep_indent(self):
215 | self.text_after_parsing_is_the_same(
216 | """project:
217 | \t- task 1
218 | \t\t- task 2
219 | \t\t\t- task 3"""
220 | )
221 |
222 | def test_empty_line(self):
223 | self.text_after_parsing_is_the_same(
224 | """- task 1
225 |
226 | \t- task2
227 | """
228 | )
229 |
230 | def test_empty_line_at_end(self):
231 | self.text_after_parsing_is_the_same(
232 | """- task 1
233 | \t- task2
234 | """
235 | )
236 |
237 | def test_empty_line_at_beginning(self):
238 | self.text_after_parsing_is_the_same(
239 | """
240 | - task 1
241 | \t- task2"""
242 | )
243 |
244 | def test_multiple_empty_lines_at_end(self):
245 | self.text_after_parsing_is_the_same(
246 | """- task 1
247 | \t- task2
248 |
249 |
250 | # """
251 | )
252 |
253 | if __name__ == '__main__':
254 | unittest.main()
255 |
--------------------------------------------------------------------------------
/tests/test_query.log:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/bevesce/TodoFlow/73f40462d9a343dcc3fa284b5d2740fbd848f6b5/tests/test_query.log
--------------------------------------------------------------------------------
/tests/test_query.py:
--------------------------------------------------------------------------------
1 | from __future__ import unicode_literals
2 |
3 | import unittest
4 | import datetime as dt
5 |
6 | from todoflow.query_parser import parse
7 | from todoflow import Todos
8 |
9 |
10 | class TestQuery(unittest.TestCase):
11 | def filtering(self, text):
12 | self.todos = Todos(text)
13 | return self
14 |
15 | def by(self, query):
16 | self.query = '\n\nq:' + query
17 | self.filtered = self.todos.filter(query)
18 | return self
19 |
20 | def gives(self, text):
21 | self.assertEqual(str(self.filtered).strip(), text.strip(), self.query)
22 |
23 | def test_001(self):
24 | self.filtering("""
25 | - r
26 | - q
27 | """).by('r').gives("""
28 | - r
29 | """)
30 |
31 | def test_002(self):
32 | self.filtering("""
33 | - r
34 | - q
35 | - w
36 | """).by('r or w').gives("""
37 | - r
38 | - w
39 | """)
40 |
41 | def test_003(self):
42 | self.filtering("""
43 | - r @a(1)
44 | - q @a(@)
45 | - w @a
46 | """).by('@a = 1').gives("""
47 | - r @a(1)
48 | """)
49 |
50 | def test_004(self):
51 | self.filtering("""
52 | Inbox:
53 | \t- q @a(@)
54 | \t- w @a
55 | Test:
56 | """).by('project Inbox').gives("""
57 | Inbox:
58 | """)
59 |
60 |
61 | def test_005(self):
62 | self.filtering("""
63 | Inbox:
64 | \t- q @a(@)
65 | \t- w @a
66 | Test:
67 | """).by('project *').gives("""
68 | Inbox:
69 | Test:
70 | """)
71 |
72 | def test_006(self):
73 | self.filtering("""
74 | Inbox:
75 | \tR:
76 | \tT:
77 | Test:
78 | """).by('/*').gives("""
79 | Inbox:
80 | Test:
81 | """)
82 |
83 | def test_007(self):
84 | self.filtering("""
85 | Inbox:
86 | \tR:
87 | \tT:
88 | Test
89 | """).by('/project *').gives("""
90 | Inbox:""")
91 |
92 |
93 | def test_008(self):
94 | self.filtering("""
95 | r 0
96 | r 1
97 | r 2
98 | r 3
99 | """).by('r[0]').gives("""
100 | r 0
101 | """)
102 |
103 | def test_009(self):
104 | self.filtering("""
105 | r 0
106 | r 1
107 | r 2
108 | r 3
109 | """).by('r[1]').gives("""
110 | r 1
111 | """)
112 |
113 | def test_010(self):
114 | self.filtering("""
115 | r
116 | \tq
117 | \tw
118 | e
119 | \tq
120 | \tw
121 | """).by('r/w').gives("""
122 | r
123 | \tw
124 | """)
125 |
126 | def test_011(self):
127 | self.filtering("""
128 | r
129 | \tq
130 | \tw
131 | \tw
132 | \t\tl
133 | \tw
134 | \t\tp
135 | e
136 | \tq
137 | \tw
138 | """).by('r/w/p').gives("""
139 | r
140 | \tw
141 | \t\tp
142 | """)
143 |
144 | def test_012(self):
145 | self.filtering("""
146 | d @due(2016-10-03)
147 | d @due(2016-10-04)
148 | d @due(2016-10-05)
149 | """).by('@due =[d] 2016-10-04').gives("""
150 | d @due(2016-10-04)
151 | """)
152 |
153 | def test_013(self):
154 | self.filtering("""
155 | r
156 | \tq
157 | \tw
158 | \tw
159 | \t\tl
160 | \tw
161 | \t\tp
162 | \tt
163 | \t\tt
164 | e
165 | \tq
166 | \tw
167 | """).by('r/w/p union t/t').gives("""
168 | r
169 | \tw
170 | \t\tp
171 | \tt
172 | \t\tt
173 | """)
174 |
175 | def test_014(self):
176 | self.filtering("""
177 | - r 0
178 | - q 1
179 | - r 1
180 | - q 2
181 | \t- r 3
182 | \t- r 4
183 | - r 5
184 | """).by('r[0:3]').gives("""
185 | - r 0
186 | - r 1
187 | - q 2
188 | \t- r 3
189 | """)
190 |
191 | def test_015(self):
192 | self.filtering("""
193 | d:
194 | \ta @q
195 | \tc
196 | \tb @q
197 | \td
198 | dd:
199 | \ta
200 | \tx @q
201 | """).by('/d/@q[0]').gives("""
202 | d:
203 | \ta @q
204 | dd:
205 | \tx @q
206 | """)
207 |
208 | def test_016(self):
209 | self.filtering("""
210 | Inbox:
211 | \tR:
212 | \tT:
213 | Test
214 | """).by('//*').gives("""
215 | Inbox:
216 | \tR:
217 | \tT:
218 | """)
219 |
220 | def test_017(self):
221 | self.filtering("""
222 | Inbox:
223 | \tR:
224 | \tT:
225 | Test
226 | """).by('/*').gives("""
227 | Inbox:
228 | Test
229 | """)
230 |
231 | def test_018(self):
232 | self.filtering("""
233 | r
234 | \tw
235 | \tq
236 | \tw
237 | \t\tr
238 | \t\tq
239 | \tq
240 | \t\tr
241 | \t\t\tw
242 | """).by('r/w').gives("""
243 | r
244 | \tw
245 | \tw
246 | \tq
247 | \t\tr
248 | \t\t\tw
249 | """)
250 |
251 | def test_019(self):
252 | self.filtering("""
253 | r
254 | \tq
255 | \t\tw
256 | \tq
257 | w
258 | """).by('r//w').gives("""
259 | r
260 | \tq
261 | \t\tw
262 | """)
263 |
264 | def test_020(self):
265 | self.filtering("""
266 | r
267 | \tq
268 | \t\tw
269 | \tq
270 | w
271 | """).by('r///w').gives("""
272 | r
273 | \tq
274 | \t\tw
275 | """)
276 |
277 | def test_021(self):
278 | self.filtering("""
279 | r
280 | \tq 1
281 | \tq 2
282 | r
283 | \tq 3
284 | \tq 4
285 | """).by('r/q[0]').gives("""
286 | r
287 | \tq 1
288 | r
289 | \tq 3
290 | """)
291 |
292 | def test_022(self):
293 | self.filtering("""
294 | r
295 | r @d
296 | """).by('r except @d').gives("""
297 | r
298 | """)
299 |
300 | def test_023(self):
301 | self.filtering("""
302 | r
303 | r @d
304 | q @d
305 | """).by('r intersect @d').gives("""
306 | r @d
307 | """)
308 |
309 | def test_024(self):
310 | self.filtering("""
311 | w
312 | \tr
313 | \t\tr
314 | \t\tb
315 | \tq
316 | \t\tr
317 | \t\ta
318 | """).by('w//*/ancestor-or-self::r').gives("""
319 | w
320 | \tr
321 | \t\tr
322 | \tq
323 | \t\tr
324 | """)
325 |
326 | def test_025(self):
327 | self.filtering("""
328 | w
329 | \tq
330 | \tr
331 | \t\tx
332 | \t\t\ty
333 | \tq
334 | \t\tx
335 | """).by('w//y/ancestor::r').gives("""
336 | w
337 | \tr
338 | """)
339 |
340 | def test_026(self):
341 | self.filtering("""
342 | w
343 | \tq
344 | \tr
345 | \t\tq
346 | """).by('w//q/parent::r').gives("""
347 | w
348 | \tr
349 | """)
350 |
351 | def test_027(self):
352 | self.filtering("""
353 | w
354 | q
355 | r 1
356 | x
357 | r 2
358 | """).by('w/following-sibling::r').gives("""
359 | r 1
360 | r 2
361 | """)
362 |
363 | def test_028(self):
364 | self.filtering("""
365 | r 0
366 | x
367 | \tw
368 | \t\tr 1
369 | \tr 2
370 | q
371 | r 3
372 | q
373 | """).by('w/following::r').gives("""
374 | x
375 | \tw
376 | \t\tr 1
377 | \tr 2
378 | r 3
379 | """)
380 |
381 | def test_029(self):
382 | self.filtering("""
383 | r 1
384 | q
385 | r 2
386 | w
387 | """).by('w/preceding-sibling::r').gives("""
388 | r 1
389 | r 2
390 | """)
391 |
392 | def test_030(self):
393 | self.filtering("""
394 | r 0
395 | x
396 | \tr 1
397 | \t\tq
398 | \t\tr 2
399 | \t\tw
400 | x
401 | \ty
402 | """).by('w/preceding::r').gives("""
403 | r 0
404 | x
405 | \tr 1
406 | \t\tr 2
407 | """)
408 |
409 | def test_031(self):
410 | self.filtering("""
411 | r
412 | q
413 | \tr
414 | """).by('/ancestor-or-self::r').gives("""
415 | r
416 | q
417 | \tr
418 | """)
419 |
420 | def test_032(self):
421 | self.filtering("""
422 | r 1
423 | q
424 | \tr
425 | r 2
426 | \tq
427 | """).by('/ancestor::r').gives("""
428 | r 2
429 | """)
430 |
431 | def test_033(self):
432 | self.filtering("""
433 | r 1
434 | q
435 | \tr 2
436 | r 3
437 | \t q
438 | q
439 | \tr 4
440 | \t\tq
441 | \tw
442 | """).by('/parent::r').gives("""
443 | r 3
444 | q
445 | \tr 4
446 | """)
447 |
448 | def test_034(self):
449 | self.filtering("""
450 | r 1
451 | q
452 | r 2
453 | w
454 | \tq
455 | \tr 3
456 | """).by('/following-sibling::r').gives("""
457 | r 1
458 | r 2
459 | w
460 | \tr 3
461 | """)
462 |
463 | def test_035(self):
464 | self.filtering("""
465 | q
466 | r 1
467 | w
468 | \tq
469 | \tr 2
470 | """).by('/following::r').gives("""
471 | r 1
472 | w
473 | \tr 2
474 | """)
475 |
476 | def test_036(self):
477 | self.filtering("""
478 | q
479 | r 1
480 | w
481 | \tq
482 | \tr 2
483 | \tq
484 | e
485 | \tq
486 | \tr 3
487 | """).by('/preceding-sibling::r').gives("""
488 | r 1
489 | w
490 | \tr 2
491 | """)
492 |
493 | def test_037(self):
494 | self.filtering("""
495 | r 1
496 | w
497 | \tr 2
498 | \tq
499 | e
500 | \tr 4
501 | """).by('/preceding::r').gives("""
502 | r 1
503 | w
504 | \tr 2
505 | """)
506 |
507 | def test_038(self):
508 | self.filtering("""
509 | todo:
510 | \tp
511 | to:
512 | \tto
513 | \tto
514 | \tko:
515 | """).by('project *').gives("""
516 | todo:
517 | to:
518 | \tko:
519 | """)
520 |
521 | def test_038b(self):
522 | items = list(Todos("""todo:
523 | \tp
524 | to:
525 | \tto
526 | \tto
527 | \tko:
528 | """).search('project *'))
529 | self.assertEqual([i.get_text() for i in items], ['todo:', 'to:', '\tko:'])
530 |
531 | def test_039(self):
532 | self.filtering("""
533 | 1 @working
534 | 2 @workign @done
535 | 3 @done
536 | """).by('@working and not @done').gives("""
537 | 1 @working
538 | """)
539 |
540 | def test_039(self):
541 | self.filtering("""
542 | test @p(eric)
543 | test @p(john,graham)
544 | test @p(graham,eric)
545 | """).by('@p contains[l] john').gives("""
546 | test @p(john,graham)
547 | """)
548 |
549 | def test_039s(self):
550 | self.filtering("""
551 | test @p(john,graham)
552 | test @p(John,graham)
553 | """).by('@p contains[sl] John').gives("""
554 | test @p(John,graham)
555 | """)
556 |
557 | def test_040(self):
558 | self.filtering("""
559 | test @p(eric)
560 | test @p(john,graham)
561 | test @p(graham,eric)
562 | """).by('@p contains[l] john,graham').gives("""
563 | test @p(john,graham)
564 | """)
565 |
566 | def test_041(self):
567 | self.filtering("""
568 | test @p(eric)
569 | test @p(john,graham)
570 | test @p(graham,eric)
571 | """).by('@p contains[l] graham,john').gives("""
572 | test @p(john,graham)
573 | """)
574 |
575 | def test_042(self):
576 | self.filtering("""
577 | test @p(eric)
578 | test @p(john, graham)
579 | test @p(graham,eric)
580 | """).by('@p contains[l] graham, john').gives("""
581 | test @p(john, graham)
582 | """)
583 |
584 | def test_043(self):
585 | self.filtering("""
586 | test @p(john, graham)
587 | test @p(graham,john)
588 | """).by('@p =[l] john, graham').gives("""
589 | test @p(john, graham)
590 | """)
591 |
592 |
593 | if __name__ == '__main__':
594 | unittest.main()
595 |
--------------------------------------------------------------------------------
/tests/test_query_lexer.py:
--------------------------------------------------------------------------------
1 | from __future__ import unicode_literals
2 |
3 | import unittest
4 | import datetime as dt
5 |
6 | from todoflow.query_lexer import QueryLexer
7 | from todoflow.query_lexer import Token
8 |
9 |
10 | class TestTokens(unittest.TestCase):
11 | def setUp(self):
12 | self.lex = QueryLexer()
13 |
14 | def tokens(self, text):
15 | self.tokens = self.lex.tokenize(text)
16 | return self
17 |
18 | def are(self, tokens):
19 | self.assertEqual(
20 | self.tokens,
21 | [Token(t, v) for t, v in tokens]
22 | )
23 |
24 | def test_attribute(self):
25 | self.tokens('@text').are([
26 | ('attribute', 'text')
27 | ])
28 |
29 | def test_word_operator(self):
30 | self.tokens('and').are([
31 | ('operator', 'and')
32 | ])
33 |
34 | def test_shortcut(self):
35 | self.tokens('project').are([
36 | ('attribute', 'type'),
37 | ('operator', '='),
38 | ('relation modifier', 'i'),
39 | ('search term', 'project')
40 | ])
41 |
42 | def test_shortcut_with_continuation(self):
43 | self.tokens('project Test').are([
44 | ('attribute', 'type'),
45 | ('operator', '='),
46 | ('relation modifier', 'i'),
47 | ('search term', 'project'),
48 | ('operator', 'and'),
49 | ('attribute', 'text'),
50 | ('operator', 'contains'),
51 | ('relation modifier', 'i'),
52 | ('search term', 'Test')
53 | ])
54 |
55 | def test_defaults(self):
56 | self.tokens('test').are([
57 | ('attribute', 'text'),
58 | ('operator', 'contains'),
59 | ('relation modifier', 'i'),
60 | ('search term', 'test'),
61 | ])
62 |
63 | def test_default_attribute(self):
64 | self.tokens('= test').are([
65 | ('attribute', 'text'),
66 | ('operator', '='),
67 | ('relation modifier', 'i'),
68 | ('search term', 'test'),
69 | ])
70 |
71 | def test_joins_search_terms(self):
72 | self.tokens('test test').are([
73 | ('attribute', 'text'),
74 | ('operator', 'contains'),
75 | ('relation modifier', 'i'),
76 | ('search term', 'test test'),
77 | ])
78 |
79 | def test_opeartor(self):
80 | self.tokens('<').are([
81 | ('attribute', 'text'),
82 | ('operator', '<'),
83 | ('relation modifier', 'i'),
84 | ])
85 |
86 | def test_two_char_opertor(self):
87 | self.tokens('<=').are([
88 | ('attribute', 'text'),
89 | ('operator', '<='),
90 | ('relation modifier', 'i'),
91 | ])
92 |
93 | def test_relation_modifier(self):
94 | self.tokens('[d]').are([
95 | ('relation modifier', 'd')
96 | ])
97 |
98 | def test_wild_card(self):
99 | self.tokens('*').are([
100 | ('wild card', '*')
101 | ])
102 |
103 | def test_no_wild_card_in_search_term(self):
104 | self.tokens('r*r').are([
105 | ('attribute', 'text'),
106 | ('operator', 'contains'),
107 | ('relation modifier', 'i'),
108 | ('search term', 'r*r')
109 | ])
110 |
111 | def test_slice(self):
112 | self.tokens('[1:2]').are([
113 | ('slice', '1:2')
114 | ])
115 |
116 | def test_start_slice(self):
117 | self.tokens('[1:]').are([
118 | ('slice', '1:')
119 | ])
120 |
121 | def test_end_slice(self):
122 | self.tokens('[:2]').are([
123 | ('slice', ':2')
124 | ])
125 |
126 | def test_index_slice(self):
127 | self.tokens('[2]').are([
128 | ('slice', '2')
129 | ])
130 |
131 | def test_axis_direct(self):
132 | self.tokens('/').are([
133 | ('operator', '/')
134 | ])
135 |
136 | def test_axis_descendant(self):
137 | self.tokens('//').are([
138 | ('operator', '//')
139 | ])
140 |
141 | def test_axis_descendant_or_self(self):
142 | self.tokens('///').are([
143 | ('operator', '///')
144 | ])
145 |
146 | def test_axis_ancestor_or_self(self):
147 | self.tokens('/ancestor-or-self::').are([
148 | ('operator', '/ancestor-or-self::')
149 | ])
150 |
151 | def test_axis_ancestor(self):
152 | self.tokens('/ancestor::').are([
153 | ('operator', '/ancestor::')
154 | ])
155 |
156 | def test_quoted_search_term(self):
157 | self.tokens('"//and*=>"').are([
158 | ('attribute', 'text'),
159 | ('operator', 'contains'),
160 | ('relation modifier', 'i'),
161 | ('search term', '//and*=>')
162 | ])
163 |
164 | def test_expression(self):
165 | self.tokens('@text = test').are([
166 | ('attribute', 'text'),
167 | ('operator', '='),
168 | ('relation modifier', 'i'),
169 | ('search term', 'test'),
170 | ])
171 |
172 | def test_parenthesis(self):
173 | self.tokens('()').are([
174 | ('punctuation', '('),
175 | ('punctuation', ')'),
176 | ])
177 |
178 | if __name__ == '__main__':
179 | unittest.main()
180 |
--------------------------------------------------------------------------------
/tests/test_query_parser.py:
--------------------------------------------------------------------------------
1 | from __future__ import unicode_literals
2 |
3 | import unittest
4 | import datetime as dt
5 |
6 | from todoflow.query_parser import QueryParser
7 |
8 |
9 | class TestParser(unittest.TestCase):
10 | def setUp(self):
11 | self.parser = QueryParser()
12 |
13 | def parsing(self, text):
14 | self.parsed = self.parser.parse(text)
15 | return self
16 |
17 | def gives(self, text):
18 | self.assertEqual(str(self.parsed), text)
19 |
20 | def test_001(self):
21 | self.parsing('r union w').gives(
22 | '( (( @text contains [i] r ) [:]) union (( @text contains [i] w ) [:]) )'
23 | )
24 |
25 | def test_002(self):
26 | self.parsing('@tag').gives('(tag [:])')
27 |
28 | def test_003(self):
29 | self.parsing('project *').gives(
30 | '(( ( @type = [i] project ) and * ) [:])'
31 | )
32 |
33 | def test_004(self):
34 | self.parsing('t/w').gives(
35 | '( (( @text contains [i] t ) [:]) / (( @text contains [i] w ) [:]) )'
36 | )
37 |
38 | def test_005(self):
39 | self.parsing('r union w union q').gives(
40 | '( ( (( @text contains [i] r ) [:]) union (( @text contains [i] w ) [:]) ) union (( @text contains [i] q ) [:]) )'
41 | )
42 |
43 | def test_006(self):
44 | self.parsing('r union w intersect q').gives(
45 | '( ( (( @text contains [i] r ) [:]) union (( @text contains [i] w ) [:]) ) intersect (( @text contains [i] q ) [:]) )'
46 | )
47 |
48 | def test_007(self):
49 | self.parsing('/t').gives('( / (( @text contains [i] t ) [:]) )')
50 |
51 | def test_008(self):
52 | self.parsing('t[0]').gives('(( @text contains [i] t ) [0])')
53 |
54 | def test_009(self):
55 | self.parsing('t[42]/w[1:2]').gives(
56 | '( (( @text contains [i] t ) [42]) / (( @text contains [i] w ) [1:2]) )'
57 | )
58 |
59 | def test_010(self):
60 | self.parsing('r[0] union w[1]').gives(
61 | '( (( @text contains [i] r ) [0]) union (( @text contains [i] w ) [1]) )'
62 | )
63 |
64 | def test_011(self):
65 | self.parsing('/my heading[0]').gives(
66 | '( / (( @text contains [i] my heading ) [0]) )'
67 | )
68 |
69 | def test_012(self):
70 | self.parsing('/my heading').gives(
71 | '( / (( @text contains [i] my heading ) [:]) )'
72 | )
73 |
74 | def test_013(self):
75 | self.parsing('(1 or 2) and 3').gives(
76 | '(( ( ( @text contains [i] 1 ) or ( @text contains [i] 2 ) ) and ( @text contains [i] 3 ) ) [:])'
77 | )
78 |
79 | def test_014(self):
80 | self.parsing('(1 union 2) intersect 3').gives(
81 | '( (( (( @text contains [i] 1 ) [:]) union (( @text contains [i] 2 ) [:]) ) [:]) intersect (( @text contains [i] 3 ) [:]) )'
82 | )
83 |
84 | def test_015(self):
85 | self.parsing('(project Inbox//* union //@today) except //@done').gives(
86 | '( (( ( (( ( @type = [i] project ) and ( @text contains [i] Inbox ) ) [:]) // (* [:]) ) union ( // (today [:]) ) ) [:]) except ( // (done [:]) ) )'
87 | )
88 |
89 | def test_016(self):
90 | self.parsing('not @today').gives(
91 | '(( not today ) [:])'
92 | )
93 |
94 | def test_017(self):
95 | self.parsing('not q and w').gives(
96 | '(( ( not ( @text contains [i] q ) ) and ( @text contains [i] w ) ) [:])'
97 | )
98 |
99 | def test_018(self):
100 | self.parsing('not (q and w)').gives(
101 | '(( not ( ( @text contains [i] q ) and ( @text contains [i] w ) ) ) [:])'
102 | )
103 |
104 | def test_019(self):
105 | self.parsing('(one or two) and not three').gives(
106 | '(( ( ( @text contains [i] one ) or ( @text contains [i] two ) ) and ( not ( @text contains [i] three ) ) ) [:])'
107 | )
108 |
109 | def test_020(self):
110 | self.parsing('(project *//not @done)[0]').gives(
111 | '(( (( ( @type = [i] project ) and * ) [:]) // (( not done ) [:]) ) [0])'
112 | )
113 |
114 | def test_021(self):
115 | self.parsing('(project Inbox//* union //@today) except //@done').gives(
116 | '( (( ( (( ( @type = [i] project ) and ( @text contains [i] Inbox ) ) [:]) // (* [:]) ) union ( // (today [:]) ) ) [:]) except ( // (done [:]) ) )'
117 | )
118 |
119 | def test_022(self):
120 | self.parsing('@tag and @test').gives('(( tag and test ) [:])')
121 |
122 | def test_023(self):
123 | self.parsing('t/w/q').gives(
124 | '( ( (( @text contains [i] t ) [:]) / (( @text contains [i] w ) [:]) ) / (( @text contains [i] q ) [:]) )'
125 | )
126 |
127 | def test_024(self):
128 | self.parsing('/d/@q[0]').gives(
129 | '( ( / (( @text contains [i] d ) [:]) ) / (q [0]) )'
130 | )
131 |
132 | def test_025(self):
133 | self.parsing('not (@done or @waiting)').gives(
134 | '(( not ( done or waiting ) ) [:])'
135 | )
136 |
137 | def test_026(self):
138 | self.parsing('@today and not (@done or @waiting)').gives(
139 | '(( today and ( not ( done or waiting ) ) ) [:])'
140 | )
141 |
142 | def test_026(self):
143 | self.parsing('@today and not @done)').gives(
144 | '(( today and ( not done ) ) [:])'
145 | )
146 |
147 |
148 | if __name__ == '__main__':
149 | unittest.main()
150 |
--------------------------------------------------------------------------------
/tests/test_textutils.py:
--------------------------------------------------------------------------------
1 | from __future__ import unicode_literals
2 |
3 | import unittest
4 | import datetime as dt
5 |
6 | import todoflow.textutils as tu
7 |
8 |
9 | class TestTypes(unittest.TestCase):
10 | def text(self, text):
11 | self._text = text
12 | return self
13 |
14 | def is_task(self):
15 | self.assertTrue(tu.is_task(self._text))
16 | return self
17 |
18 | def isnt_task(self):
19 | self.assertFalse(tu.is_task(self._text))
20 | return self
21 |
22 | def is_project(self):
23 | self.assertTrue(tu.is_project(self._text))
24 | return self
25 |
26 | def isnt_project(self):
27 | self.assertFalse(tu.is_project(self._text))
28 | return self
29 |
30 | def is_note(self):
31 | self.assertTrue(tu.is_note(self._text))
32 | return self
33 |
34 | def isnt_note(self):
35 | self.assertFalse(tu.is_note(self._text))
36 | return self
37 |
38 | def test_task(self):
39 | self.text('- something to do').is_task().isnt_project().isnt_note()
40 |
41 | def test_indented_task(self):
42 | self.text('\t\t- some task').is_task().isnt_project().isnt_note()
43 |
44 | def test_task_with_color(self):
45 | self.text('- some task:').is_task().isnt_project().isnt_note()
46 |
47 | def test_project(self):
48 | self.text('some project:').is_project().isnt_task().isnt_note()
49 |
50 | def test_indented_project(self):
51 | self.text('\t\t\tsome project:').is_project().isnt_task().isnt_note()
52 |
53 | def test_note(self):
54 | self.text('\t\t\tsome note').is_note().isnt_task().isnt_project()
55 |
56 |
57 | class TestTags(unittest.TestCase):
58 | def text(self, text):
59 | self._text = text
60 | return self
61 |
62 | def has_tag(self, tag):
63 | self.assertTrue(tu.has_tag(self._text, tag))
64 | self.tag = tag
65 | return self
66 |
67 | def doesnt_have_tag(self, tag):
68 | self.assertFalse(tu.has_tag(self._text, tag))
69 | self.tag = tag
70 | return self
71 |
72 | def with_param(self, param):
73 | self.assertEqual(tu.get_tag_param(self._text, self.tag), param)
74 | return self
75 |
76 | def with_tag(self, tag):
77 | self.tag = tag
78 | return self
79 |
80 | def removed(self):
81 | self._text = tu.remove_tag(self._text, self.tag)
82 | return self
83 |
84 | def added(self, with_param=None):
85 | self._text = tu.add_tag(self._text, self.tag, param=with_param)
86 | return self
87 |
88 | def replaced_by(self, replacement):
89 | self._text = tu.replace_tag(self._text, self.tag, replacement)
90 | return self
91 |
92 | def enclosed_by(self, prefix, suffix=None):
93 | self._text = tu.enclose_tag(self._text, self.tag, prefix, suffix)
94 | return self
95 |
96 | def is_equal(self, text):
97 | self.assertEqual(self._text, text)
98 | return self
99 |
100 | def has_tags(self, *args):
101 | text_tags = set(tu.get_all_tags(self._text))
102 | self.assertEqual(text_tags, set(args))
103 | return self
104 |
105 | def param_modified_by(self, modificaiton):
106 | self._text = tu.modify_tag_param(self._text, self.tag, modificaiton)
107 | return self
108 |
109 | def toggled(self, tags):
110 | self._text = tu.toggle_tags(self._text, tags)
111 | return self
112 |
113 |
114 | class TestTagsFinding(TestTags):
115 | def test_tag_with_at(self):
116 | self.text('- same text @done rest of text').has_tag('@done')
117 |
118 | def test_tag_at_end(self):
119 | self.text('- some text with @done').has_tag('@done')
120 |
121 | def test_tag_without_at(self):
122 | self.text('- some text @today rest').has_tag('today')
123 |
124 | def test_tag_at_beggining(self):
125 | self.text('@done rest').has_tag('@done')
126 |
127 | def test_tag_with_param(self):
128 | self.text('some text @start(2014-01-01) rest').has_tag('@start')
129 |
130 | def test_doesnt_have(self):
131 | self.text('some text @tag2 rest').doesnt_have_tag('tag')
132 |
133 | def test_get_tag_param(self):
134 | self.text('some txet @tag(2) rest').has_tag('@tag').with_param('2')
135 |
136 | def test_get_tag_param_at_end(self):
137 | self.text('some txet @done(01-14)').has_tag('@done').with_param('01-14')
138 |
139 | def test_get_tag_param_with_list(self):
140 | self.text('some text @p(1,2)').has_tag('@p').with_param('1,2')
141 |
142 | def test_get_tag_param_with_escaped_closing_paren(self):
143 | self.text('some txet @search(fsd \) ffs) and then some').has_tag('@search').with_param('fsd ) ffs')
144 |
145 | def test_get_tag_param_with_escaped_parens(self):
146 | self.text('some txet @search(fsd \(abc\) ffs) and then some').has_tag('@search').with_param('fsd (abc) ffs')
147 |
148 | def test_empty_param(self):
149 | self.text('@empty()').has_tag('empty').with_param('')
150 |
151 | def test_no_param(self):
152 | self.text('@no_param').has_tag('@no_param').with_param(None)
153 |
154 | def test_no_tag(self):
155 | self.text('text without tag').doesnt_have_tag('@test').with_param(None)
156 |
157 | def test_succeeding_tag(self):
158 | self.text('text @tag1(yo) @tag2(wo) rest').has_tag('@tag1').with_param('yo')
159 |
160 | def test_get_all_tags(self):
161 | self.text('text @today @working test @done').has_tags('today', 'working', 'done')
162 |
163 | def test_get_all_tags_with_params(self):
164 | self.text('text @today(!) @working test @done(2014-01-10)').has_tags('today', 'working', 'done')
165 |
166 | def test_toggle_tags(self):
167 | now = dt.datetime.now()
168 | tags = [('next', None), ('working', None), ('done', '%Y-%m-%d')]
169 | self.text('text').toggled(tags).is_equal('text @next')
170 | self.toggled(tags).is_equal('text @working')
171 | self.toggled(tags).is_equal(now.strftime('text @done(%Y-%m-%d)'))
172 | self.toggled(tags).is_equal('text')
173 |
174 |
175 | class TestTagsRemoving(TestTags):
176 | def test_remove_tag(self):
177 | self.text('text @done rest').with_tag('@done').removed().is_equal('text rest')
178 |
179 | def test_remove_tag_without_at(self):
180 | self.text('text @today rest').with_tag('today').removed().is_equal('text rest')
181 |
182 | def test_remove_tag_with_param(self):
183 | self.text('text @start(2014) rest').with_tag('start').removed().is_equal('text rest')
184 |
185 | def test_remove_tag_at_end(self):
186 | self.text('text @start(2014)').with_tag('start').removed().is_equal('text')
187 |
188 |
189 | class TestTagsReplacing(TestTags):
190 | def test_replace_tag(self):
191 | self.text('text @tag rest').with_tag('@tag').replaced_by('sub').is_equal('text sub rest')
192 |
193 | def test_replace_tag_with_param(self):
194 | self.text('text @tag(1) rest').with_tag('tag').replaced_by('sub').is_equal('text sub rest')
195 |
196 |
197 | class TestTagsAdding(TestTags):
198 | def test_add_tag(self):
199 | self.text('some text').with_tag('@today').added().is_equal('some text @today')
200 |
201 | def test_add_tag_without_at(self):
202 | self.text('some text').with_tag('today').added().is_equal('some text @today')
203 |
204 | def test_add_tag_with_param(self):
205 | self.text('yo').with_tag('@done').added(with_param='01-10').is_equal('yo @done(01-10)')
206 |
207 | def test_add_tag_with_not_text_param(self):
208 | self.text('meaning').with_tag('@of').added(with_param=42).is_equal('meaning @of(42)')
209 |
210 | def test_add_tag_with_exisiting_tag(self):
211 | self.text('text @t(9)').with_tag('@t').added().is_equal('text @t')
212 |
213 |
214 | class TestTagsEnclosing(TestTags):
215 | def test_enclose_tag(self):
216 | self.text('text @done rest').with_tag('@done').enclosed_by('**').is_equal('text **@done** rest')
217 |
218 | def test_enclose_tag_with_suffix(self):
219 | self.text('text @start rest').with_tag('@start').enclosed_by('<', '>').is_equal('text <@start> rest')
220 |
221 | def test_enclose_tag_with_param(self):
222 | self.text('tx @in(01-01) rt').with_tag('@in').enclosed_by('**').is_equal('tx **@in(01-01)** rt')
223 |
224 | def test_enclode_tag_at_end(self):
225 | self.text('tx @in(01-01)').with_tag('@in').enclosed_by('*').is_equal('tx *@in(01-01)*')
226 |
227 |
228 | class TestTagsParamsModifing(TestTags):
229 | def test_modify_tag_param(self):
230 | self.text('tx @t(1) rt').with_tag('t').param_modified_by(lambda p: int(p) + 1).is_equal('tx @t(2) rt')
231 |
232 | def test_modify_tag_param_without_param(self):
233 | self.text('tx @t(x) rt').with_tag('t').param_modified_by(
234 | lambda p: p * 2 if p else 'y'
235 | ).is_equal('tx @t(xx) rt')
236 | self.text('tx @t rt').with_tag('t').param_modified_by(
237 | lambda p: p * 2 if p else 'y'
238 | ).is_equal('tx @t(y) rt')
239 |
240 |
241 | class TestSortingByTag(unittest.TestCase):
242 | def shuffled_texts(self, *args):
243 | from random import shuffle
244 | args_list = list(args)
245 | shuffle(args_list)
246 | self._texts = tuple(args_list)
247 | return self
248 |
249 | def sorted_by(self, tag, reverse=False):
250 | self._texts = tu.sort_by_tag_param(self._texts, tag, reverse)
251 | return self
252 |
253 | def are_equal(self, *args):
254 | self.assertEqual(self._texts, tuple(args))
255 | return self
256 |
257 | def test_sort_by_tag(self):
258 | list1 = 'yo', 'yo @t(1)', '@t(2) @k(1000)', 'fd @t(4)'
259 | self.shuffled_texts(*list1).sorted_by('t').are_equal(*list1)
260 |
261 | def test_sort_by_tag_reversed(self):
262 | list1 = 'yo', 'yo @t(1)', '@t(2) @k(1000)', 'fd @t(4)'
263 | self.shuffled_texts(*list1).sorted_by('t', reverse=True).are_equal(*list1[::-1])
264 |
265 |
266 | class TestFormatters(unittest.TestCase):
267 | def text(self, text):
268 | self._text = text
269 | return self
270 |
271 | def with_stripped_formatting_is(self, text):
272 | self._text = tu.strip_formatting(self._text)
273 | self.assertEqual(self._text, text)
274 | return self
275 |
276 | def indention_level_is(self, level):
277 | self.assertEqual(tu.calculate_indent_level(self._text), level)
278 |
279 | def with_stripped_formatting_and_tags_is(self, text):
280 | self._text = tu.strip_formatting_and_tags(self._text)
281 | self.assertEqual(self._text, text)
282 | return self
283 |
284 | def test_strip_task(self):
285 | self.text('\t\t- text').with_stripped_formatting_is('text')
286 |
287 | def test_strip_ignore_spaces(self):
288 | self.text('\t\t text').with_stripped_formatting_is(' text')
289 |
290 | def test_strip_task_with_colon(self):
291 | self.text('\t\t- text:').with_stripped_formatting_is('text:')
292 |
293 | def test_strip_project(self):
294 | self.text('\ttext:').with_stripped_formatting_is('text')
295 |
296 | def test_strip_note(self):
297 | self.text('\ttext\t').with_stripped_formatting_is('text')
298 |
299 | def test_strip_tags(self):
300 | self.text('\ttext @done rest @today').with_stripped_formatting_and_tags_is('text rest')
301 |
302 | def test_calculate_indent_level_0(self):
303 | self.text('text').indention_level_is(0)
304 |
305 | def test_calculate_indent_level_1(self):
306 | self.text('\ttext').indention_level_is(1)
307 |
308 | def test_calculate_indent_level_5(self):
309 | self.text('\t\t\t\t\ttext').indention_level_is(5)
310 |
311 | def test_calculate_indent_ignore_spaces(self):
312 | self.text(' text').indention_level_is(0)
313 |
314 | def test_calculate_indent_ignore_tab_after_spaces(self):
315 | self.text(' \ttext').indention_level_is(0)
316 |
317 | def test_calculate_indent_ignore_spaces_after_tab(self):
318 | self.text('\t text').indention_level_is(1)
319 |
320 |
321 | class TestParseDate(unittest.TestCase):
322 | def parsing(self, text):
323 | self.date = tu.parse_datetime(text)
324 | return self
325 |
326 | def gives(self, year, month, day, hour, minute):
327 | self.assertEqual(self.date.year, year)
328 | self.assertEqual(self.date.month, month)
329 | self.assertEqual(self.date.day, day)
330 | self.assertEqual(self.date.hour, hour)
331 | self.assertEqual(self.date.minute, minute)
332 |
333 | def test_parse_datetime_hour_with_leading_zero(self):
334 | self.parsing('2014-11-01 08:45').gives(2014, 11, 1, 8, 45)
335 |
336 | def test_parse_datetime_hour_without_leading_zero(self):
337 | self.parsing('2014-11-01 8:45').gives(2014, 11, 1, 8, 45)
338 |
339 | def test_parse_without_time(self):
340 | self.parsing('2014-11-01').gives(2014, 11, 1, 0, 0)
341 |
342 |
343 | if __name__ == '__main__':
344 | unittest.main()
345 |
--------------------------------------------------------------------------------
/tests/test_todos.py:
--------------------------------------------------------------------------------
1 | from __future__ import unicode_literals
2 |
3 | import unittest
4 |
5 | import todoflow
6 | from todoflow import Todos, Todoitem
7 | from todoflow.compatibility import unicode
8 |
9 |
10 | class TestTodoitem(unittest.TestCase):
11 | def test_task(self):
12 | task = todoflow.todoitem.Todoitem('- task')
13 | self.assertTrue(task.type == 'task')
14 | self.assertEqual('task', task.text)
15 | self.assertEqual('- task', unicode(task))
16 |
17 | def test_project(self):
18 | task = todoflow.todoitem.Todoitem('project:')
19 | self.assertTrue(task.type == 'project')
20 | self.assertEqual('project', task.text)
21 | self.assertEqual('project:', unicode(task))
22 |
23 | def test_note(self):
24 | task = todoflow.todoitem.Todoitem('note')
25 | self.assertTrue(task.type == 'note')
26 | self.assertEqual('note', task.text)
27 | self.assertEqual('note', unicode(task))
28 |
29 |
30 |
31 | class TodosAssTestCase(unittest.TestCase):
32 | def assertTodos(self, todos, text):
33 | self.assertEqual('\n'.join(t.get_text() for t in todos), text)
34 |
35 | class TestGetOtherTodos(TodosAssTestCase):
36 | def test_parent(self):
37 | todos = Todos("""a
38 | \tb
39 | \t\tc
40 | \t\t\td
41 | """)
42 | self.assertTodos(todos.yield_parent(), '')
43 | self.assertTodos(todos.subitems[0].yield_parent(), '')
44 | self.assertTodos(todos.subitems[0].subitems[0].yield_parent(), 'a')
45 | self.assertTodos(todos.subitems[0].subitems[0].subitems[0].yield_parent(), '\tb')
46 | self.assertTodos(todos.subitems[0].subitems[0].subitems[0].subitems[0].yield_parent(), '\t\tc')
47 |
48 | def test_ancestors(self):
49 | todos = Todos("""a
50 | \tb
51 | \t\tc
52 | \t\t\td""")
53 | self.assertTodos(todos.yield_ancestors(), '')
54 | self.assertTodos(todos.subitems[0].yield_ancestors(), '')
55 | self.assertTodos(todos.subitems[0].subitems[0].yield_ancestors(), 'a\n')
56 | self.assertTodos(todos.subitems[0].subitems[0].subitems[0].yield_ancestors(), '\tb\na\n')
57 | self.assertTodos(todos.subitems[0].subitems[0].subitems[0].subitems[0].yield_ancestors(), '\t\tc\n\tb\na\n')
58 |
59 | def test_yield_children(self):
60 | todos = Todos("""a
61 | \tb
62 | \t\tc
63 | \t\tc
64 | \t\tx
65 | \t\t\td""")
66 | self.assertTodos(todos.yield_children(), ('a'))
67 | self.assertTodos(todos.subitems[0].yield_children(), ('\tb'))
68 | self.assertTodos(todos.subitems[0].subitems[0].yield_children(), ('\t\tc\n\t\tc\n\t\tx'))
69 |
70 | def test_yield_descendants(self):
71 | todos = Todos("""a
72 | \tb
73 | \t\tc
74 | \t\tc
75 | \t\tx
76 | \t\t\td""")
77 | self.assertTodos(todos.yield_descendants(), str(todos))
78 | self.assertTodos(todos.subitems[0].yield_descendants(), """\tb
79 | \t\tc
80 | \t\tc
81 | \t\tx
82 | \t\t\td""")
83 | self.assertTodos(todos.subitems[0].subitems[0].yield_descendants(), """\t\tc
84 | \t\tc
85 | \t\tx
86 | \t\t\td""")
87 |
88 | def test_yield_siblings(self):
89 | todos = Todos("""a
90 | \tb
91 | \t\tc
92 | \t\tc
93 | \t\tx
94 | \td""")
95 | self.assertTodos(todos.yield_siblings(), '')
96 | self.assertTodos(todos.subitems[0].yield_siblings(), 'a')
97 | self.assertTodos(todos.subitems[0].subitems[0].yield_siblings(), '\tb\n\td')
98 | self.assertTodos(todos.subitems[0].subitems[0].subitems[0].yield_siblings(), '\t\tc\n\t\tc\n\t\tx')
99 |
100 | def test_following_siblings(self):
101 | todos = Todos("""a
102 | \tb
103 | \t\tc
104 | \t\tc
105 | \t\tx
106 | \td""")
107 | self.assertTodos(todos.yield_following_siblings(), '')
108 | self.assertTodos(todos.subitems[0].yield_following_siblings(), '')
109 | self.assertTodos(todos.subitems[0].subitems[0].yield_following_siblings(), '\td')
110 | self.assertTodos(todos.subitems[0].subitems[0].subitems[0].yield_following_siblings(), '\t\tc\n\t\tx')
111 | self.assertTodos(todos.subitems[0].subitems[0].subitems[1].yield_following_siblings(), '\t\tx')
112 | self.assertTodos(todos.subitems[0].subitems[0].subitems[2].yield_following_siblings(), '')
113 |
114 | def test_preceding_siblings(self):
115 | todos = Todos("""a
116 | \tb
117 | \t\tc
118 | \t\tc
119 | \t\tx
120 | \td""")
121 | self.assertTodos(todos.yield_preceding_siblings(), '')
122 | self.assertTodos(todos.subitems[0].yield_preceding_siblings(), '')
123 | self.assertTodos(todos.subitems[0].subitems[0].yield_preceding_siblings(), '')
124 | self.assertTodos(todos.subitems[0].subitems[0].subitems[0].yield_preceding_siblings(), '')
125 | self.assertTodos(todos.subitems[0].subitems[0].subitems[1].yield_preceding_siblings(), '\t\tc')
126 | self.assertTodos(todos.subitems[0].subitems[0].subitems[2].yield_preceding_siblings(), '\t\tc\n\t\tc')
127 |
128 | def test_following(self):
129 | todos = Todos("""a
130 | \tb
131 | \t\tc
132 | \t\tc
133 | \t\tx
134 | \td""")
135 | self.assertTodos(todos.yield_following(), 'a\n\tb\n\t\tc\n\t\tc\n\t\tx\n\td')
136 | self.assertTodos(todos.subitems[0].yield_following(), '\tb\n\t\tc\n\t\tc\n\t\tx\n\td')
137 | self.assertTodos(todos.subitems[0].subitems[0].yield_following(), '\t\tc\n\t\tc\n\t\tx\n\td')
138 | self.assertTodos(todos.subitems[0].subitems[0].subitems[0].yield_following(), '\t\tc\n\t\tx\n\td')
139 | self.assertTodos(todos.subitems[0].subitems[0].subitems[1].yield_following(), '\t\tx\n\td')
140 | self.assertTodos(todos.subitems[0].subitems[0].subitems[2].yield_following(), '\td')
141 | self.assertTodos(todos.subitems[0].subitems[1].yield_following(), '')
142 |
143 | def test_preceding(self):
144 | todos = Todos("""a
145 | \tb
146 | \t\tc
147 | \t\tc
148 | \t\tx
149 | \td""")
150 | self.assertTodos(todos.yield_preceding(), '')
151 | self.assertTodos(todos.subitems[0].yield_preceding(), '')
152 | self.assertTodos(todos.subitems[0].subitems[0].yield_preceding(), '')
153 | self.assertTodos(todos.subitems[0].subitems[1].yield_preceding(), '\tb')
154 | self.assertTodos(todos.subitems[0].subitems[0].subitems[0].yield_preceding(), '')
155 | self.assertTodos(todos.subitems[0].subitems[0].subitems[1].yield_preceding(), '\t\tc')
156 | self.assertTodos(todos.subitems[0].subitems[0].subitems[2].yield_preceding(), '\t\tc\n\t\tc')
157 | self.assertTodos(todos.subitems[0].subitems[1].yield_preceding(), '\tb')
158 |
159 |
160 | class TodosAsStringTestCase(unittest.TestCase):
161 | def assertTodos(self, todos, text):
162 | self.assertEqual(str(todos), text)
163 |
164 |
165 | class TestTodos(TodosAsStringTestCase):
166 | def test_add_todos(self):
167 | self.assertTodos(Todos('- t1') + Todos('- t2'), '- t1\n- t2')
168 |
169 | def test_add_todos_1(self):
170 | self.assertTodos(Todos('i:\n\t-q').subitems[0] + Todos('- t2'), 'i:\n\t-q\n- t2')
171 |
172 | def test_add_todos_1(self):
173 | self.assertTodos(Todos('- t2') + Todos('i:\n\t-q').subitems[0], '- t2\ni:\n\t-q')
174 |
175 | def test_get(self):
176 | todos = Todos('a\n\tb')
177 | subtodos = todos.subitems[0].subitems[0]
178 | self.assertEqual(todos.get_with_todoitem(subtodos.todoitem), subtodos)
179 |
180 | def test_filter(self):
181 | todos = Todos("""a
182 | \tr
183 | \t\tq
184 | \tw
185 | \t\tr
186 | \te
187 | \t\te
188 | """).filter(lambda i: 'r' in i.get_text())
189 | self.assertTodos(todos, 'a\n\tr\n\tw\n\t\tr')
190 |
191 |
192 | class TestContains(unittest.TestCase):
193 | def test_1(self):
194 | todos = Todos("""a:
195 | \tb
196 | \t\tc""")
197 | self.assertFalse(Todoitem('b') in todos)
198 | self.assertTrue(todos.subitems[0] in todos)
199 | self.assertTrue(todos.subitems[0].todoitem in todos)
200 | self.assertTrue(todos.subitems[0].subitems[0] in todos)
201 | self.assertTrue(todos.subitems[0].subitems[0].todoitem in todos)
202 | self.assertTrue(todos.subitems[0].subitems[0].subitems[0] in todos)
203 | self.assertTrue(todos.subitems[0].subitems[0].subitems[0].todoitem in todos)
204 |
205 |
206 | if __name__ == '__main__':
207 | unittest.main()
208 |
--------------------------------------------------------------------------------
/todoflow/__init__.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 |
3 | from .todos import Todos
4 | from .todoitem import Todoitem
5 |
--------------------------------------------------------------------------------
/todoflow/compatibility.py:
--------------------------------------------------------------------------------
1 | """
2 | Compatiblity layer between python 2 and python 3 for some basic stuff
3 | """
4 |
5 | try:
6 | unicode = unicode
7 | no_unicode = False
8 | def is_string(text):
9 | return isinstance(text, unicode) or isinstance(text, str)
10 |
11 | except NameError:
12 | no_unicode = True
13 | unicode = str
14 |
15 | def is_string(text):
16 | return isinstance(text, str)
17 |
--------------------------------------------------------------------------------
/todoflow/lexer.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 |
3 | from .textutils import calculate_indent_level
4 |
5 |
6 | class Token(object):
7 | token_ids = {
8 | 'text': '*',
9 | 'newline': 'n',
10 | 'indent': '>',
11 | 'dedent': '<',
12 | 'end': '$',
13 | }
14 |
15 | def __init__(self, line_number):
16 | self.line_number = line_number
17 |
18 | def __getattr__(self, attr_name):
19 | try:
20 | return self.token_ids[attr_name.split('_')[-1]] == self.tok()
21 | except KeyError:
22 | raise AttributeError
23 |
24 |
25 | class NewlineToken(Token):
26 | def tok(self):
27 | return self.token_ids['newline']
28 |
29 | def __init__(self, line_number, start):
30 | super(NewlineToken, self).__init__(line_number)
31 | self.start = start
32 | self.end = start
33 |
34 |
35 | class IndentToken(Token):
36 | def tok(self):
37 | return self.token_ids['indent']
38 |
39 |
40 | class DedentToken(Token):
41 | def tok(self):
42 | return self.token_ids['dedent']
43 |
44 |
45 | class TextToken(Token):
46 | def __init__(self, text, line_number, start):
47 | super(TextToken, self).__init__(line_number)
48 | self.text = text
49 | self.start = start
50 | self.end = start + len(text)
51 |
52 | def tok(self):
53 | return self.token_ids['text']
54 |
55 |
56 | class EndToken(Token):
57 | def tok(self):
58 | return self.token_ids['end']
59 |
60 |
61 | class Lexer(object):
62 | def __init__(self, text):
63 | self.text = text
64 | self.lines = text.splitlines()
65 | if text.endswith('\n'):
66 | self.lines.append('')
67 | self._tokenize()
68 |
69 | def _tokenize(self):
70 | self.tokens = []
71 | self.indent_levels = [0]
72 | line_number = 0
73 | char_index = 0
74 | for line_number, line in enumerate(self.lines):
75 | if line:
76 | self._handle_indentation(line, line_number)
77 | self.tokens.append(TextToken(line, line_number, char_index))
78 | else:
79 | self.tokens.append(NewlineToken(line_number, char_index))
80 | char_index += len(line) + 1
81 | self.tokens.append(EndToken(line_number + 1))
82 |
83 | def _handle_indentation(self, line, line_number):
84 | indent_levels = self.indent_levels
85 | current_level = calculate_indent_level(line)
86 | if current_level > indent_levels[-1]:
87 | indent_levels.append(current_level)
88 | self.tokens.append(IndentToken(line_number))
89 | elif current_level < indent_levels[-1]:
90 | while current_level < indent_levels[-1]:
91 | indent_levels.pop()
92 | self.tokens.append(DedentToken(line_number))
93 |
--------------------------------------------------------------------------------
/todoflow/parse_date.py:
--------------------------------------------------------------------------------
1 | from collections import namedtuple
2 | from datetime import datetime, timedelta
3 | import calendar
4 |
5 |
6 | def parse_date(text='', now=None):
7 | return Parser(now).parse(text)
8 |
9 |
10 | Token = namedtuple('Token', ['type', 'value'])
11 |
12 |
13 | punctuations = ('+', '-', ':')
14 | numbers = tuple(str(i) for i in range(0, 10))
15 | white_space = (' ', '\t', '\n')
16 | word_breaks = punctuations + numbers + white_space
17 | months = {
18 | 'january': 1,
19 | 'february': 2,
20 | 'march': 3,
21 | 'april': 4,
22 | 'may': 5,
23 | 'june': 6,
24 | 'july': 7,
25 | 'august': 8,
26 | 'september': 9,
27 | 'october': 10,
28 | 'november': 11,
29 | 'december': 12,
30 | }
31 | weekdays = {
32 | 'monday': 0,
33 | 'tuesday': 1,
34 | 'wednesday': 2,
35 | 'thursday': 3,
36 | 'friday': 4,
37 | 'saturday': 5,
38 | 'sunday': 6,
39 | }
40 | synonyms = {
41 | 'mon': 'monday',
42 | 'tue': 'tuesday',
43 | 'wed': 'wednesday',
44 | 'thu': 'thursday',
45 | 'fri': 'friday',
46 | 'sat': 'saturday',
47 | 'sun': 'sunday',
48 | 'jan': 'january' ,
49 | 'feb': 'february',
50 | 'mar': 'march',
51 | 'apr': 'april',
52 | 'may': 'may',
53 | 'jun': 'june',
54 | 'jul': 'july',
55 | 'aug': 'august',
56 | 'sep': 'september',
57 | 'oct': 'october',
58 | 'nov': 'november',
59 | 'dec': 'december',
60 | 'minutes': 'minute',
61 | 'min': 'minute',
62 | 'hours': 'hour',
63 | 'h': 'hour',
64 | 'days': 'day',
65 | 'd': 'day',
66 | 'weeks': 'week',
67 | 'w': 'week',
68 | 'months': 'month',
69 | 'oc': 'month',
70 | 'c': 'month',
71 | 'years': 'year',
72 | 'y': 'year',
73 | 'sec': 'second',
74 | 'su': 'second',
75 | 'quarters': 'quarter',
76 | 'q': 'quarter',
77 | }
78 | words = {
79 | 'ampm': ('am', 'pm'),
80 | 'date': ('today', 'yesterday', 'tomorrow', 'now'),
81 | 'month': list(months.keys()),
82 | 'weekday': list(weekdays.keys()),
83 | 'modifier': ('next', 'last', 'this'),
84 | 'duration': (
85 | 'second', 'minute', 'hour', 'day', 'week', 'month', 'quarter', 'year'
86 | ),
87 | }
88 |
89 | second_0 = {
90 | 'microsecond': 0,
91 | 'second': 0,
92 | }
93 | minute_0 = dict(second_0.items())
94 | minute_0.update({'minute': 0})
95 | hour_0 = dict(minute_0.items())
96 | hour_0.update({'hour': 0})
97 | day_1 = dict(hour_0.items())
98 | day_1.update({'day': 1})
99 | month_1 = dict(day_1.items())
100 | month_1.update({'month': 1})
101 |
102 |
103 | class Lexer:
104 | def tokenize(self, text):
105 | self.chars = list(text.lower())
106 | self.tokens = []
107 | word = ''
108 | while self.chars:
109 | c = self.pick()
110 | if c in white_space:
111 | self.pop()
112 | continue
113 | elif c in numbers:
114 | self.read_number()
115 | elif c in punctuations:
116 | self.read_punctuation()
117 | else:
118 | self.read_word()
119 | return self.tokens
120 |
121 | def pick(self):
122 | if not self.chars:
123 | return None
124 | return self.chars[0]
125 |
126 | def pop(self):
127 | return self.chars.pop(0)
128 |
129 | def read_number(self):
130 | number = ''
131 | while self.pick() and self.pick() in numbers:
132 | number += self.pop()
133 | self.add('number', int(number))
134 |
135 | def read_punctuation(self):
136 | self.add('punctuation', self.pop())
137 |
138 | def read_word(self):
139 | word = ''
140 | while self.pick() and self.pick() not in word_breaks:
141 | word += self.pop()
142 | self.add('word', word)
143 |
144 | def add(self, type, value):
145 | value = synonyms.get(value, value)
146 | if type == 'word':
147 | for k, values in words.items():
148 | if value in values:
149 | self.add(k, value)
150 | break
151 | else:
152 | self.tokens.append(Token(type, value))
153 |
154 |
155 | empty_token = Token(None, None)
156 |
157 |
158 | class Parser:
159 | def __init__(self, now=None):
160 | self.now = now or datetime.now()
161 | self.today = self.now.replace(**hour_0)
162 |
163 | def parse(self, text, now=None):
164 | self.tokens = Lexer().tokenize(text)
165 | self.modifications = []
166 | self.date = self.now
167 | while self.tokens:
168 | t = self.pick()
169 | if t.type == 'ampm':
170 | self.pop()
171 | elif t.type == 'date':
172 | self.parse_date()
173 | elif t.type == 'month':
174 | self.parse_month()
175 | elif t.type == 'weekday':
176 | self.parse_weekday()
177 | elif t.type == 'modifier':
178 | self.parse_modifier()
179 | elif t.type == 'duration':
180 | self.pop()
181 | elif t.type == 'punctuation':
182 | self.parse_punctuation()
183 | elif t.type == 'number':
184 | self.parse_number()
185 | else:
186 | self.pop()
187 | for modification in self.modifications:
188 | self.date = modification(self.date)
189 | return self.date
190 |
191 | def pick(self):
192 | if not self.tokens:
193 | return empty_token
194 | return self.tokens[0]
195 |
196 | def pickpick(self):
197 | try:
198 | return self.tokens[1]
199 | except IndexError:
200 | return empty_token
201 |
202 | def pop(self):
203 | return self.tokens.pop(0)
204 |
205 | def is_eof(self):
206 | return not self.tokens
207 |
208 | def add_modification(self, modification):
209 | self.modifications.append(modification)
210 |
211 | def parse_date(self):
212 | date = self.pop().value
213 | if date == 'now':
214 | self.date = self.now
215 | elif date == 'today':
216 | self.date = self.today
217 | elif date == 'yesterday':
218 | self.date = self.today - timedelta(days=1)
219 | elif date == 'tomorrow':
220 | self.date = self.today + timedelta(days=1)
221 |
222 | def parse_month(self):
223 | month = self.pop().value
224 | self.add_modification(
225 | lambda d: d.replace(month=months[month], day=1)
226 | )
227 | self.maybe_parse_day()
228 |
229 | def maybe_parse_day(self):
230 | next = self.pick()
231 | if next.type != 'number' or next.value > 31:
232 | return False
233 | day = self.pop().value
234 | self.add_modification(
235 | lambda d: d.replace(day=day)
236 | )
237 | return True
238 |
239 | def parse_weekday(self):
240 | day = self.pop().value
241 | self.add_modification(
242 | lambda d: find_weekday_in_week_of_date(day, d)
243 | )
244 |
245 | def parse_modifier(self):
246 | modifier = self.pop().value
247 | next = self.pick()
248 | sign = self.convert_modifier_to_sign(modifier)
249 |
250 | if self.maybe_parse_weekday_after_modifier(sign):
251 | return
252 | elif self.maybe_parse_month_after_modifier(sign):
253 | return
254 | elif self.maybe_parse_duration_after_modifier(sign):
255 | return
256 |
257 | def maybe_parse_weekday_after_modifier(self, sign):
258 | if self.pick().type != 'weekday':
259 | return False
260 | day = self.pop().value
261 | self.add_modification(
262 | lambda d: find_weekday_in_week_of_date(day, d) + timedelta(days=sign * 7)
263 | )
264 | return True
265 |
266 | def maybe_parse_month_after_modifier(self, sign):
267 | if self.pick().type != 'month':
268 | return False
269 | month = self.pop().value
270 | self.add_modification(
271 | lambda d: d.replace(year=d.year + sign, month=months[month], day=1)
272 | )
273 | return True
274 |
275 | def maybe_parse_duration_after_modifier(self, sign):
276 | if self.pick().type != 'duration':
277 | return False
278 | duration = self.pop().value
279 | if duration == 'year':
280 | self.add_modification(
281 | lambda d: d.replace(year=d.year + sign, **month_1)
282 | )
283 | elif duration == 'quarter':
284 | self.add_modification(
285 | lambda d: add_months(find_begining_of_quarter(d), sign * 3)
286 | )
287 | elif duration == 'month':
288 | self.add_modification(
289 | lambda d: add_months(d, sign).replace(**day_1)
290 | )
291 | elif duration == 'week':
292 | self.add_modification(
293 | lambda d: find_weekday_in_week_of_date('monday', d) + timedelta(days=7 * sign)
294 | )
295 | elif duration == 'day':
296 | self.add_modification(
297 | lambda d: (d + timedelta(days=1)).replace(**hour_0)
298 | )
299 | elif duration == 'hour':
300 | self.add_modification(
301 | lambda d: (d + timedelta(hours=sign)).replace(**minute_0)
302 | )
303 | elif duration == 'minute':
304 | self.add_modification(
305 | lambda d: (d + timedelta(minutes=sign)).replace(**second_0)
306 | )
307 |
308 | def parse_punctuation(self):
309 | punctuation = self.pop().value
310 | if punctuation == ':':
311 | return
312 | next = self.pick()
313 | sign = -1 if punctuation == '-' else 1
314 | self.maybe_parse_number_after_minus_or_plus(sign)
315 |
316 | def maybe_parse_number_after_minus_or_plus(self, sign):
317 | if self.pick().type != 'number':
318 | return False
319 | number = self.pop().value
320 | if self.maybe_parse_duration_after_minus_plus_number(sign, number):
321 | return
322 | else:
323 | self.add_modification(
324 | lambda d: add_to_date(d, sign * number, 'days')
325 | )
326 | return True
327 |
328 | def maybe_parse_duration_after_minus_plus_number(self, sign, number):
329 | if self.pick().type != 'duration':
330 | return False
331 | duration = self.pop().value
332 | self.add_modification(
333 | lambda d: add_to_date(d, sign * number, duration)
334 | )
335 | return True
336 |
337 | def parse_number(self):
338 | first_number = self.pop().value
339 | if self.maybe_parse_24_time(first_number):
340 | return
341 | elif self.maybe_parse_ampm_after_number(first_number):
342 | return
343 | elif self.maybe_parse_dash_after_number(first_number):
344 | return
345 | elif self.maybe_parse_duration_after_number(first_number):
346 | return
347 | else:
348 | self.date = datetime(first_number, 1, 1)
349 |
350 | def maybe_parse_24_time(self, hour):
351 | t = self.pick()
352 | if t.type != 'punctuation' or t.value != ':':
353 | return False
354 | self.pop()
355 | self.maybe_parse_minute_after_hour(hour)
356 | return True
357 |
358 | def maybe_parse_minute_after_hour(self, hour):
359 | if self.pick().type != 'number':
360 | return False
361 | minute = self.pop().value
362 | if self.maybe_parse_ampm_after_hour_minute(hour, minute):
363 | return True
364 | self.add_modification(
365 | lambda d: d.replace(hour=hour, minute=minute)
366 | )
367 | return True
368 |
369 | def maybe_parse_ampm_after_hour_minute(self, hour, minute):
370 | if self.pick().type != 'ampm':
371 | return False
372 | ampm = self.pop().value
373 | hour = convert_hour_to_24_clock(hour, ampm)
374 | self.add_modification(
375 | lambda d: d.replace(hour=hour, minute=minute)
376 | )
377 | return True
378 |
379 | def maybe_parse_ampm_after_number(self, number):
380 | if self.pick().type != 'ampm':
381 | return False
382 | ampm = self.pop().value
383 | number = convert_hour_to_24_clock(number, ampm)
384 | self.add_modification(
385 | lambda d: d.replace(hour=number, **minute_0)
386 | )
387 | return True
388 |
389 | def maybe_parse_dash_after_number(self, year):
390 | if not self.maybe_parse_dash():
391 | return False
392 | elif self.maybe_parse_month_after_year(year):
393 | return True
394 | self.date = datetime(year, 1, 1)
395 | return True
396 |
397 | def maybe_parse_month_after_year(self, year):
398 | if self.pick().type != 'number':
399 | return False
400 | month = self.pop().value
401 | if self.maybe_parse_day_after_month_year(year, month):
402 | return True
403 | else:
404 | self.date = datetime(year, month, 1)
405 | return True
406 |
407 | def maybe_parse_day_after_month_year(self, year, month):
408 | if not self.maybe_parse_dash():
409 | return False
410 | day = self.pop().value
411 | self.date = datetime(year, month, day)
412 | return True
413 |
414 | def maybe_parse_dash(self):
415 | t = self.pick()
416 | if t.type == 'punctuation' or t.value == '=':
417 | self.pop()
418 | return True
419 | return False
420 |
421 | def maybe_parse_duration_after_number(self, number):
422 | if self.pick().type != 'duration':
423 | return False
424 | duration = self.pop().value
425 | self.add_modification(
426 | lambda d: add_to_date(d, number, duration)
427 | )
428 | return True
429 |
430 | def convert_modifier_to_sign(self, modifier):
431 | if modifier == 'next':
432 | return 1
433 | elif modifier == 'last':
434 | return -1
435 | return 0
436 |
437 |
438 | def find_weekday_in_week_of_date(weekday, date):
439 | day_number = weekdays[weekday]
440 | return date.replace(**hour_0) + timedelta(days=-date.weekday() + day_number)
441 |
442 |
443 | def convert_hour_to_24_clock(hour, ampm):
444 | if hour == 12 and ampm == 'am':
445 | return 0
446 | if hour == 12 and ampm == 'pm':
447 | return 12
448 | return hour if ampm == 'am' else hour + 12
449 |
450 |
451 | def add_to_date(date, number, duration):
452 | if duration == 'minute':
453 | return date + timedelta(minutes=number)
454 | if duration == 'hour':
455 | return date + timedelta(hours=number)
456 | if duration == 'day':
457 | return date + timedelta(days=number)
458 | if duration == 'week':
459 | return date + timedelta(days=number * 7)
460 | if duration == 'month':
461 | return add_months(date, number)
462 | if duration == 'quarter':
463 | return add_months(date, 3 * number)
464 | if duration == 'year':
465 | return date.replace(year=date.year + number)
466 |
467 |
468 | def add_months(date, number):
469 | # http://stackoverflow.com/questions/4130922/how-to-increment-datetime-by-custom-months-in-python-without-using-library
470 | month = date.month - 1 + number
471 | year = int(date.year + month / 12)
472 | month = month % 12 + 1
473 | day = min(date.day, calendar.monthrange(year, month)[1])
474 | return datetime(
475 | year, month, day, date.hour, date.minute, date.second, date.microsecond
476 | )
477 |
478 |
479 | def find_begining_of_quarter(date):
480 | return datetime(
481 | date.year, 1 + 3 * (date.month - 1) // 3, 1
482 | )
483 |
--------------------------------------------------------------------------------
/todoflow/parser.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 |
3 | from .lexer import Lexer
4 | from .todos import Todos
5 | from .todoitem import Todoitem
6 |
7 |
8 | class ParserError(Exception):
9 | pass
10 |
11 |
12 | class Parser(object):
13 | def __init__(self):
14 | self.newlines = []
15 | self.parsed_items = []
16 | self.items_in_parsing = []
17 |
18 | def parse(self, text):
19 | self.lexer = Lexer(text)
20 | new_item = None
21 | for token in self.lexer.tokens:
22 | if token.is_newline:
23 | self.newlines.append(Todos(Todoitem.from_token(token)))
24 | elif token.is_text:
25 | new_item = self._handle_text(token)
26 | elif token.is_indent:
27 | self.items_in_parsing.append(new_item)
28 | elif token.is_dedent:
29 | self.items_in_parsing.pop()
30 | elif token.is_end:
31 | return self._handle_end()
32 |
33 | def _handle_text(self, token):
34 | new_item = Todos(Todoitem.from_token(token))
35 | if self.items_in_parsing:
36 | for nl in self.newlines:
37 | self.items_in_parsing[-1].append(nl)
38 | self.newlines = []
39 | try:
40 | self.items_in_parsing[-1].append(new_item)
41 | except AttributeError:
42 | raise ParserError('Error in parsing: {}'.format(self.items_in_parsing))
43 | else:
44 | self.parsed_items += self.newlines
45 | self.newlines = []
46 | self.parsed_items.append(new_item)
47 | return new_item
48 |
49 | def _handle_end(self):
50 | todos = Todos(subitems=self.parsed_items + self.newlines)
51 | return todos
52 |
53 |
54 | def parse(text):
55 | return Parser().parse(text)
56 |
--------------------------------------------------------------------------------
/todoflow/query.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 |
3 | from .textutils import get_tag_param
4 | from .textutils import has_tag
5 | from .parse_date import parse_date
6 |
7 |
8 | class Query:
9 | pass
10 |
11 |
12 | class SetOperation(Query):
13 | def __init__(self, left, operator, right):
14 | self.type = 'set operation'
15 | self.operator = operator
16 | self.left = left
17 | self.right = right
18 |
19 | def __str__(self):
20 | return '( {} {} {} )'.format(self.left, self.operator, self.right)
21 |
22 | def search(self, todos):
23 | left_side = list(self.left.search(todos))
24 | right_side = list(self.right.search(todos))
25 | if self.operator == 'union':
26 | for item in todos:
27 | if item in left_side or item in right_side:
28 | yield item
29 | elif self.operator == 'intersect':
30 | for item in todos:
31 | if item in left_side and item in right_side:
32 | yield item
33 | elif self.operator == 'except':
34 | for item in todos:
35 | if item in left_side and item not in right_side:
36 | yield item
37 |
38 |
39 | class ItemsPath(Query):
40 | def __init__(self, left, operator, right):
41 | self.type = 'items path'
42 | self.operator = operator
43 | self.left = left
44 | self.right = right
45 |
46 | def __str__(self):
47 | return '( {} {} {} )'.format(
48 | self.left if self.left else '',
49 | self.operator if self.operator else '',
50 | self.right
51 | )
52 |
53 | def search(self, todos):
54 | left_side = list(self.get_left_side(todos))
55 | for item in left_side:
56 | axes = list(self.get_axes_for_operator(item))
57 | for subitem in self.right.search(axes):
58 | yield subitem
59 |
60 | def get_left_side(self, todos):
61 | if self.left:
62 | return self.left.search(todos)
63 | if self.operator in ('/', '/child::'):
64 | return [todos]
65 | return list(todos)
66 |
67 | def get_axes_for_operator(self, todos):
68 | if self.operator in ('/', '/child::'):
69 | return todos.yield_children()
70 | elif self.operator in ('//', '/descendant::'):
71 | return todos.yield_descendants()
72 | elif self.operator in ('///', '/descendant-or-self::'):
73 | return todos.yield_descendants_and_self()
74 | elif self.operator == '/ancestor-or-self::':
75 | return todos.yield_ancestors_and_self()
76 | elif self.operator == '/ancestor::':
77 | return todos.yield_ancestors()
78 | elif self.operator == '/parent::':
79 | return todos.yield_parent()
80 | elif self.operator == '/following-sibling::':
81 | return todos.yield_following_siblings()
82 | elif self.operator == '/following::':
83 | return todos.yield_following()
84 | elif self.operator == '/preceding-sibling::':
85 | return todos.yield_preceding_siblings()
86 | elif self.operator == '/preceding::':
87 | return todos.yield_preceding()
88 |
89 |
90 | class Slice(Query):
91 | def __init__(self, left, slice):
92 | self.type = 'slice'
93 | self.left = left
94 | self.slice = slice or ':'
95 | if slice and ':' not in slice:
96 | self.index = int(slice)
97 | self.slice_start = None
98 | self.slice_stop = None
99 | elif slice:
100 | self.index = None
101 | self.slice_start, self.slice_stop = [(int(i) if i else None) for i in slice.split(':')]
102 |
103 | def __str__(self):
104 | return '({} [{}])'.format(
105 | self.left, self.slice
106 | )
107 |
108 | def search(self, todos):
109 | left_side = list(self.left.search(todos))
110 | if self.slice == ':':
111 | for item in left_side:
112 | yield item
113 | return
114 | if self.index is not None:
115 | yield list(left_side)[self.index]
116 | return
117 | for item in list(left_side)[self.slice_start:self.slice_stop]:
118 | yield item
119 |
120 |
121 | class MatchesQuery(Query):
122 | def search(self, todos):
123 | for item in todos:
124 | if self.matches(item):
125 | yield item
126 |
127 |
128 | class BooleanExpression(MatchesQuery):
129 | def __init__(self, left, operator, right):
130 | self.type = 'boolean expression'
131 | self.operator = operator
132 | self.left = left
133 | self.right = right
134 |
135 | def __str__(self):
136 | return '( {} {} {} )'.format(
137 | self.left, self.operator, self.right
138 | )
139 |
140 | def matches(self, todoitem):
141 | matches_left = self.left.matches(todoitem)
142 | matches_right = self.right.matches(todoitem)
143 | if self.operator == 'and':
144 | return matches_left and matches_right
145 | elif self.operator == 'or':
146 | return matches_left or matches_right
147 |
148 |
149 | class Unary(MatchesQuery):
150 | def __init__(self, operator, right):
151 | self.type = 'unary'
152 | self.operator = operator
153 | self.right = right
154 |
155 | def __str__(self):
156 | return '( {} {} )'.format(self.operator, self.right)
157 |
158 | def matches(self, todoitem):
159 | matches_right = self.right.matches(todoitem)
160 | return not matches_right
161 |
162 |
163 | class Relation(MatchesQuery):
164 | def __init__(self, left, operator, right, modifier):
165 | self.type = 'relation'
166 | self.operator = operator
167 | self.left = left
168 | self.right = right
169 | self.modifier = modifier
170 | self.calculated_right = None
171 |
172 | def __str__(self):
173 | return '( @{} {} [{}] {} )'.format(
174 | self.left, self.operator, self.modifier, self.right
175 | )
176 |
177 | def matches(self, todoitem):
178 | if not todoitem:
179 | return False
180 | left_side = self.calculate_left_side(todoitem)
181 | right_side = self.calculate_right_side()
182 | try:
183 | if self.operator == '=':
184 | return left_side == right_side
185 | elif self.operator == '<=':
186 | return left_side <= right_side
187 | elif self.operator == '<':
188 | return left_side < right_side
189 | elif self.operator == '>':
190 | return left_side > right_side
191 | elif self.operator == '>=':
192 | return left_side >= right_side
193 | elif self.operator == '!=':
194 | return left_side != right_side
195 | elif self.operator == 'contains':
196 | return contains(right_side, left_side)
197 | elif self.operator == 'beginswith':
198 | return left_side.startswith(right_side)
199 | elif self.operator == 'endswith':
200 | return left_side.endswith(right_side)
201 | elif self.operator == 'matches':
202 | return re.match(right_side, left_side)
203 | except TypeError:
204 | return False
205 |
206 | def calculate_left_side(self, todoitem):
207 | if self.left.value == 'text':
208 | left = todoitem.get_text()
209 | elif self.left.value == 'id':
210 | left = todoitem.id()
211 | elif self.left.value == 'line':
212 | left = str(todoitem.get_line_number())
213 | elif self.left.value == 'type':
214 | left = todoitem.get_type()
215 | else:
216 | left = todoitem.get_tag_param(self.left.value)
217 | return self.apply_modifier(left, 'left')
218 |
219 | def calculate_right_side(self):
220 | if not self.calculated_right:
221 | self.calculated_right = self.apply_modifier(self.right.value, 'right')
222 | return self.calculated_right
223 |
224 | def apply_modifier(self, value, side):
225 | if not value:
226 | return value
227 | if self.modifier == 'i':
228 | return value.lower()
229 | elif self.modifier == 's':
230 | return value
231 | elif self.modifier == 'n':
232 | return safe_int(value)
233 | elif self.modifier == 'd':
234 | return parse_date(value)
235 | elif self.modifier == 'l' or self.modifier == 'il':
236 | return [s.strip() for s in value.lower().split(',')]
237 | elif self.modifier == 'sl':
238 | return [s.strip() for s in value.split(',')]
239 | elif self.modifier == 'nl':
240 | return [safe_int(s) for s in value.split(',')]
241 | elif self.modifier == 'dl':
242 | return [parse_date(s) for s in value.split(',')]
243 |
244 | def safe_int(value):
245 | try:
246 | return int(value)
247 | except ValueError:
248 | return None
249 |
250 | class Atom(MatchesQuery):
251 | def __init__(self, token):
252 | self.type = token.type
253 | self.value = token.value
254 |
255 | def __str__(self):
256 | return self.value
257 |
258 | def matches(self, todoitem):
259 | if self.type == 'attribute':
260 | return todoitem.has_tag(self.value)
261 | elif self.type == 'search term':
262 | return self.value in todoitem.get_text()
263 | else:
264 | return True
265 |
266 |
267 | def contains(subset, superset):
268 | if not superset:
269 | return False
270 | if type(subset) == list and type(superset) == list:
271 | return all(e in superset for e in subset)
272 | return subset in superset
273 |
--------------------------------------------------------------------------------
/todoflow/query_lexer.py:
--------------------------------------------------------------------------------
1 | from collections import namedtuple
2 |
3 | Token = namedtuple('Token', ['type', 'value'])
4 |
5 |
6 | class LexerError(Exception):
7 | pass
8 |
9 |
10 | class QueryLexer:
11 | def tokenize(self, text):
12 | self.chars = list(text)
13 | self.tokens = []
14 | while not self.is_eof():
15 | self.pick_and_read()
16 | self.clean_up_tokens()
17 | return self.tokens
18 |
19 | def pick_and_read(self):
20 | c = self.pick()
21 | if self.is_white_space(c):
22 | self.pop()
23 | elif self.is_attribute_start(c):
24 | self.read_attribute()
25 | elif self.is_wild_card(c):
26 | self.read_wild_card()
27 | elif self.is_punctuation(c):
28 | self.read_punctuationn()
29 | elif self.is_axis_selector_start(c):
30 | self.read_axis_selector()
31 | elif self.is_quote(c):
32 | self.read_search_term()
33 | elif self.is_operator_start(c):
34 | self.read_operator()
35 | elif self.is_slice_or_modifier_start(c):
36 | self.read_slice_or_modifier()
37 | else:
38 | self.read_search_term_word_operator_or_shortcut()
39 |
40 | def pick(self):
41 | return self.chars[0]
42 |
43 | def pop(self):
44 | return self.chars.pop(0)
45 |
46 | def pop_word(self, word):
47 | self.chars = self.chars[len(word):]
48 | return word
49 |
50 | def push(self, token):
51 | self.tokens.append(token)
52 |
53 | def startswith(self, text):
54 | return ''.join(self.chars).startswith(text)
55 |
56 | def is_eof(self):
57 | return not self.chars
58 |
59 | def is_white_space(self, c):
60 | return c in ' \t\n'
61 |
62 | def is_word_break(self, c):
63 | if self.is_eof():
64 | return True
65 | return c in '/@=!<>[]()" \t\n'
66 |
67 | def is_quote(self, c):
68 | return c == '"'
69 |
70 | def is_slice_or_modifier_start(self, c):
71 | return c == '['
72 |
73 | def is_slice_or_modifier_end(self, c):
74 | return c == ']'
75 |
76 | def is_attribute_start(self, c):
77 | return c == '@'
78 |
79 | def is_keyword(self, text):
80 | return text in (
81 | 'project',
82 | 'task',
83 | 'note',
84 | )
85 |
86 | def is_operator(self, text):
87 | return text in (
88 | '=', '!=', '<', '>', '<=', '>=',
89 | )
90 |
91 | def is_wild_card(self, c):
92 | return c == '*'
93 |
94 | def is_operator_start(self, c):
95 | return c in '=/'
96 |
97 | def is_operator_second_part(self, operator, c):
98 | if operator in '!<>':
99 | return c == '='
100 |
101 | def is_axis_selector_start(self, c):
102 | return c == '/'
103 |
104 | def is_word_operator(self, text):
105 | return text in (
106 | 'contains',
107 | 'beginswith',
108 | 'endswith',
109 | 'matches',
110 | 'union',
111 | 'intersect',
112 | 'except',
113 | 'and',
114 | 'or',
115 | 'not',
116 | )
117 |
118 | def is_relation_operator(self, text):
119 | return text in (
120 | 'contains',
121 | 'beginswith',
122 | 'endswith',
123 | 'matches',
124 | '=', '!=', '<', '>', '<=', '>=',
125 | )
126 |
127 | def is_relation_modifier(self, c):
128 | return c in ('i', 'n', 'd', 's', 'l', 'sl', 'nl', 'dl', 'il')
129 |
130 | def is_punctuation(self, c):
131 | return c in '()'
132 |
133 | def is_number(self, c):
134 | try:
135 | int(c)
136 | return True
137 | except ValueError:
138 | return False
139 |
140 | def is_slice(self, text):
141 | if ':' not in text:
142 | return self.is_number(text)
143 | try:
144 | start, end = text.split(':')
145 | except ValueError:
146 | return False
147 | start_ok = start == '' or self.is_number(start)
148 | end_ok = end == '' or self.is_number(end)
149 | return start_ok and end_ok
150 |
151 | def is_shortcut(self, text):
152 | return text in (
153 | 'project', 'note', 'task'
154 | )
155 |
156 | def read_attribute(self):
157 | self.pop()
158 | value = self.read_while_not(self.is_word_break)
159 | self.add('attribute', value)
160 |
161 | def read_axis_selector(self):
162 | axis_selectors = (
163 | '/ancestor-or-self::',
164 | '/ancestor::',
165 | '/descendant-or-self::',
166 | '/descendant::',
167 | '/following-sibling::',
168 | '/following::',
169 | '/preceding-sibling::',
170 | '/preceding::',
171 | '/child::',
172 | '/parent::',
173 | '///',
174 | '//',
175 | '/',
176 | )
177 | for axis_selector in axis_selectors:
178 | if self.startswith(axis_selector):
179 | self.pop_word(axis_selector)
180 | self.add('operator', axis_selector)
181 |
182 | def read_punctuationn(self):
183 | self.add('punctuation', self.pop())
184 |
185 | def read_operator(self):
186 | first_char = self.pop()
187 | if self.is_eof():
188 | self.add('operator', first_char)
189 | return
190 | second_char = self.pick()
191 | if self.is_operator_second_part(first_char, second_char):
192 | self.add('operator', first_char + self.pop())
193 | else:
194 | self.add('operator', first_char)
195 |
196 | def read_slice_or_modifier(self):
197 | self.pop()
198 | value = self.read_while_not(self.is_slice_or_modifier_end)
199 | self.pop()
200 | is_slice = self.is_slice(value)
201 | is_modifier = self.is_relation_modifier(value)
202 | if not (is_slice or is_modifier):
203 | raise LexerError('bad relation modifier', value)
204 | type = 'slice' if is_slice else 'relation modifier'
205 | self.add(type, value)
206 |
207 | def read_wild_card(self):
208 | self.add('wild card', self.pop())
209 |
210 | def read_search_term_word_operator_or_shortcut(self):
211 | word = self.read_while_not(self.is_word_break)
212 | if self.is_word_operator(word):
213 | self.add('operator', word)
214 | elif self.is_shortcut(word):
215 | self.add('shortcut', word)
216 | else:
217 | self.add('search term', word)
218 |
219 | def read_search_term(self):
220 | self.pop()
221 | search_term = self.read_while_not(self.is_quote)
222 | self.pop()
223 | self.add('search term', search_term)
224 |
225 | def read_while_not(self, f):
226 | word = ''
227 | if self.is_eof():
228 | return word
229 | c = self.pick()
230 | while not f(c):
231 | word += self.pop()
232 | if self.is_eof():
233 | return word
234 | c = self.pick()
235 | return word
236 |
237 | def add(self, type, value):
238 | self.tokens.append(Token(type, value))
239 |
240 | def clean_up_tokens(self):
241 | tokens = self.tokens
242 | tokens = self.expand_shortcuts(tokens)
243 | tokens = self.join_search_terms(tokens)
244 | tokens = self.provide_default_operator(tokens)
245 | tokens = self.provide_default_attribute(tokens)
246 | tokens = self.provide_default_relation_modifier(tokens)
247 | self.tokens = list(tokens)
248 |
249 | def expand_shortcuts(self, tokens):
250 | for i, t in enumerate(tokens):
251 | if t.type != 'shortcut':
252 | yield t
253 | else:
254 | yield Token('attribute', 'type')
255 | yield Token('operator', '=')
256 | yield Token('search term', t.value)
257 | if i != len(tokens) - 1:
258 | yield Token('operator', 'and')
259 |
260 | def join_search_terms(self, tokens):
261 | cumulated = None
262 | for t in tokens:
263 | if t.type != 'search term':
264 | if cumulated:
265 | yield cumulated
266 | cumulated = None
267 | yield t
268 | else:
269 | if not cumulated:
270 | cumulated = t
271 | else:
272 | cumulated = Token(
273 | 'search term',
274 | cumulated.value + ' ' + t.value
275 | )
276 | if cumulated:
277 | yield cumulated
278 |
279 | def provide_default_operator(self, tokens):
280 | previous = None
281 | for t in tokens:
282 | if t.type != 'search term':
283 | previous = t
284 | yield t
285 | else:
286 | if previous and previous.type == 'relation modifier':
287 | pass
288 | elif (
289 | not previous or
290 | (
291 | previous.type != 'operator' or
292 | not self.is_relation_operator(previous.value)
293 | )
294 | ):
295 | yield Token('operator', 'contains')
296 | yield t
297 |
298 | def provide_default_attribute(self, tokens):
299 | previous = None
300 | for t in tokens:
301 | if t.type != 'operator' or not self.is_relation_operator(t.value):
302 | previous = t
303 | yield t
304 | else:
305 | if not previous or previous.type != 'attribute':
306 | yield Token('attribute', 'text')
307 | yield t
308 |
309 | def provide_default_relation_modifier(self, tokens):
310 | previous = None
311 | for t in tokens:
312 | if previous and t.type != 'relation modifier':
313 | yield Token('relation modifier', 'i')
314 | previous = None
315 | elif t.type == 'operator' and self.is_relation_operator(t.value):
316 | previous = t
317 | else:
318 | previous = None
319 | yield t
320 | if previous:
321 | yield Token('relation modifier', 'i')
322 |
--------------------------------------------------------------------------------
/todoflow/query_parser.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | from __future__ import unicode_literals
3 | import os
4 | import re
5 |
6 | from .query_lexer import QueryLexer
7 | from .query import SetOperation
8 | from .query import ItemsPath
9 | from .query import Slice
10 | from .query import MatchesQuery
11 | from .query import BooleanExpression
12 | from .query import Unary
13 | from .query import Relation
14 | from .query import Atom
15 |
16 |
17 | def parse(text):
18 | return QueryParser().parse(text)
19 |
20 |
21 | class QueryParserError(Exception):
22 | pass
23 |
24 |
25 | class QueryParser:
26 | PRECEDENCE = {
27 | 'except': 1,
28 | 'intersect': 2,
29 | 'union': 3,
30 | '/ancestor-or-self::': 5,
31 | '/ancestor::': 5,
32 | '/descendant-or-self::': 5,
33 | '/descendant::': 5,
34 | '/following-sibling::': 5,
35 | '/following::': 5,
36 | '/preceding-sibling::': 5,
37 | '/preceding::': 5,
38 | '/child::': 5,
39 | '/parent::': 5,
40 | '///': 5,
41 | '//': 5,
42 | '/': 5,
43 | 'not': 7,
44 | 'or': 10,
45 | 'and': 11,
46 | '=': 20,
47 | '!=': 20,
48 | '<': 20,
49 | '>': 20,
50 | '<=': 20,
51 | '>=': 20,
52 | 'contains': 20,
53 | 'beginswith': 20,
54 | 'endswith': 20,
55 | 'matches': 20,
56 | }
57 |
58 | SET_OPERATORS = (
59 | 'union', 'intersect', 'except',
60 | )
61 |
62 | ITEMS_PATH_OPERATORS = (
63 | '/ancestor-or-self::', '/ancestor::',
64 | '/descendant-or-self::', '/descendant::',
65 | '/following-sibling::', '/following::',
66 | '/preceding-sibling::', '/preceding::',
67 | '/child::', '/parent::',
68 | '///', '//', '/',
69 | )
70 |
71 | BOOLEAN_OPERATOS = (
72 | 'and', 'or', 'not'
73 | )
74 |
75 | RELATION_OPERATORS = (
76 | '=', '!=', '<', '>', '<=', '>=',
77 | 'contains', 'beginswith', 'endswith', 'matches',
78 | )
79 |
80 | def parse(self, text):
81 | self.text = text
82 | self.tokens = QueryLexer().tokenize(text)
83 | query = self.parse_set_operation(None, 0)
84 | return query
85 |
86 | def pick(self):
87 | try:
88 | return self.tokens[0]
89 | except IndexError:
90 | raise_parse('unexpected end')
91 |
92 | def pop(self):
93 | try:
94 | return self.tokens.pop(0)
95 | except IndexError:
96 | raise_parse('unexpected end')
97 |
98 | def pop_close(self):
99 | return self.pop()
100 |
101 | def pop_slice(self):
102 | return self.pop()
103 |
104 | def pop_not(self):
105 | return self.pop()
106 |
107 | def is_eof(self):
108 | return not self.tokens
109 |
110 | def is_operator(self):
111 | if self.is_eof():
112 | return False
113 | return self.pick().type == 'operator'
114 |
115 | def is_set_operator(self):
116 | return self.is_operator() and self.pick().value in QueryParser.SET_OPERATORS
117 |
118 | def is_boolean_operator(self):
119 | return self.is_operator() and self.pick().value in QueryParser.BOOLEAN_OPERATOS
120 |
121 | def is_relation_operator(self):
122 | return self.is_operator() and self.pick().value in QueryParser.RELATION_OPERATORS
123 |
124 | def is_items_path_operator(self):
125 | return self.is_operator() and self.pick().value in QueryParser.ITEMS_PATH_OPERATORS
126 |
127 | def is_slice(self):
128 | if self.is_eof():
129 | return False
130 | return self.pick().type == 'slice'
131 |
132 | def is_not(self):
133 | return self.is_operator() and self.pick().value == 'not'
134 |
135 | def is_atom(self):
136 | return self.pick().type in set([
137 | 'attribute', 'wild card', 'search term'
138 | ])
139 |
140 | def is_open(self):
141 | if self.is_eof():
142 | return False
143 | token = self.pick()
144 | return token.type == 'punctuation' and token.value == '('
145 |
146 | def is_close(self):
147 | if self.is_eof():
148 | return False
149 | token = self.pick()
150 | return token.type == 'punctuation' and token.value == ')'
151 |
152 | def parse_set_operation(self, left, precedence):
153 | if not left:
154 | left = self.parse_items_path(None, 0)
155 | if self.is_set_operator():
156 | token = self.pick()
157 | current_precedence = QueryParser.PRECEDENCE[token.value]
158 | if current_precedence > precedence:
159 | self.pop()
160 | right = self.parse_set_operation(
161 | None, current_precedence
162 | )
163 | return self.parse_set_operation(
164 | SetOperation(left, token.value, right), precedence
165 | )
166 | return left
167 |
168 | def parse_items_path(self, left, precedence):
169 | slice = None
170 | if not left and self.is_items_path_operator():
171 | token = self.pop()
172 | current_precedence = QueryParser.PRECEDENCE[token.value]
173 | right = self.parse_items_path(None, current_precedence)
174 | return self.parse_items_path(
175 | ItemsPath(None, token.value, right), 0
176 | )
177 | if not left:
178 | left = self.parse_slice(None, 0)
179 | if self.is_items_path_operator():
180 | token = self.pick()
181 | current_precedence = QueryParser.PRECEDENCE[token.value]
182 | if current_precedence > precedence:
183 | self.pop()
184 | right = self.parse_items_path(None, current_precedence)
185 | if self.is_slice():
186 | slice = self.pop().value
187 | return self.parse_items_path(
188 | ItemsPath(left, token.value, right), precedence
189 | )
190 | return left
191 |
192 | def parse_slice(self, left, precedence):
193 | if not left:
194 | left = self.parse_boolean_expression(None, precedence)
195 | slice = None
196 | if self.is_slice():
197 | slice = self.pop().value
198 | return Slice(left, slice)
199 |
200 | def parse_boolean_expression(self, left, precedence):
201 | if not left:
202 | if self.is_not():
203 | self.pop_not()
204 | left = Unary('not', self.parse_relation(None, 0))
205 | else:
206 | left = self.parse_relation(None, 0)
207 | if self.is_boolean_operator():
208 | token = self.pick()
209 | current_precedence = QueryParser.PRECEDENCE[token.value]
210 | if current_precedence > precedence:
211 | self.pop()
212 | right = self.parse_boolean_expression(self.parse_relation(None, 0), current_precedence)
213 | return self.parse_boolean_expression(
214 | BooleanExpression(left, token.value, right), precedence
215 | )
216 | return left
217 |
218 | def parse_relation(self, left, precedence):
219 | if self.is_not():
220 | self.pop()
221 | return Unary('not', self.parse_relation(None, 0))
222 | if not left:
223 | left = self.parse_atom()
224 | if self.is_relation_operator():
225 | operator = self.pop().value
226 | modifier = self.pop().value
227 | right = self.parse_atom()
228 | return Relation(left, operator, right, modifier)
229 | return left
230 |
231 | def parse_atom(self):
232 | if self.is_open():
233 | return self.parse_parenthesises()
234 | return Atom(self.pop())
235 |
236 | def parse_parenthesises(self):
237 | self.pop()
238 | result = self.parse_relation(None, 0)
239 | if not self.is_close():
240 | result = self.parse_boolean_expression(result, 0)
241 | if not self.is_close():
242 | result = self.parse_slice(result, 0)
243 | if not self.is_close():
244 | result = self.parse_items_path(result, 0)
245 | if not self.is_close():
246 | result = self.parse_set_operation(result, 0)
247 | self.pop_close()
248 | return result
249 |
250 |
251 | def raise_parse(text, message=None):
252 | raise QueryParserError(
253 | "can't parse - {}{}".format(text, ': ' + message if message else '')
254 | )
255 |
--------------------------------------------------------------------------------
/todoflow/textutils.py:
--------------------------------------------------------------------------------
1 | """
2 | Set of tools to transform and interpret text in .taskpaper format.
3 |
4 | It's used internally in todoflow but can by also useful outside of it.
5 | """
6 | from __future__ import absolute_import
7 | import re
8 | import datetime as dt
9 |
10 |
11 | task_indicator = '- '
12 | project_indicator = ':'
13 | tag_indicator = '@'
14 |
15 | # Types
16 |
17 |
18 | def is_task(text):
19 | return text.lstrip().startswith(task_indicator)
20 |
21 |
22 | def is_project(text):
23 | return not is_task(text) and text.endswith(project_indicator)
24 |
25 |
26 | def is_note(text):
27 | return not (is_task(text) or is_project(text))
28 |
29 |
30 | def get_type(text):
31 | if not text:
32 | return 'newline'
33 | if is_task(text):
34 | return 'task'
35 | elif is_project(text):
36 | return 'project'
37 | elif is_note(text):
38 | return 'note'
39 |
40 | # Tags
41 |
42 | def _fix_tag(tag):
43 | if not tag.startswith(tag_indicator):
44 | tag = tag_indicator + tag
45 | return tag
46 |
47 |
48 | def _create_tag_pattern(tag, include_suffix_space=False):
49 | tag = _fix_tag(tag)
50 | escaped_tag = re.escape(tag)
51 | pattern = r'(?:^|\s)' + escaped_tag + r'(\(((\\\)|[^)])*)\)(?:\s|$)|(?=\s)|$)'
52 | if include_suffix_space:
53 | pattern += '\s*'
54 | return re.compile(
55 | pattern
56 | )
57 |
58 |
59 | def has_tag(text, tag):
60 | pattern = _create_tag_pattern(tag)
61 | return pattern.findall(text)
62 |
63 |
64 | def get_tag_param(text, tag):
65 | pattern = _create_tag_pattern(tag)
66 | params = pattern.findall(text)
67 | if not params:
68 | return None
69 | param_with_parenthesis = params[0][0]
70 | pure_param = params[0][1]
71 | if not param_with_parenthesis:
72 | return None
73 | return pure_param.replace('\)', ')').replace('\(', '(')
74 |
75 |
76 | def remove_tag(text, tag):
77 | return replace_tag(text, tag, '')
78 |
79 |
80 | def _prepare_param(param):
81 | if param is None:
82 | return ''
83 | else:
84 | return '({})'.format(param)
85 |
86 |
87 | def _prepare_text_for_tag(text):
88 | if not text.endswith(' '):
89 | return text + ' '
90 | return text
91 |
92 |
93 | def replace_tag(text, tag, replacement):
94 | pattern = _create_tag_pattern(tag, include_suffix_space=True)
95 | if replacement:
96 | replacement = ' ' + replacement + ' '
97 | else:
98 | replacement = ' '
99 | new_text = pattern.sub(replacement, text)
100 | if not text.endswith(' ') and new_text.endswith(' '):
101 | new_text = new_text.rstrip()
102 | return new_text
103 |
104 |
105 | def add_tag(text, tag, param=None):
106 | param = _prepare_param(param)
107 | tag = _fix_tag(tag)
108 | full_tag = tag + param
109 | if not has_tag(text, tag):
110 | return _prepare_text_for_tag(text) + full_tag
111 | return replace_tag(text, tag, full_tag)
112 |
113 |
114 | def enclose_tag(text, tag, prefix, suffix=None):
115 | suffix = suffix or prefix
116 | tag = _fix_tag(tag)
117 | param = _prepare_param(get_tag_param(text, tag))
118 | full_tag = tag + param
119 | replacement = prefix + full_tag + suffix
120 | return replace_tag(text, tag, replacement)
121 |
122 |
123 | tag_pattern = tag_indicator + r'([^\s(]*)'
124 |
125 |
126 | def get_all_tags(text, include_indicator=False):
127 | pattern = re.compile(tag_pattern)
128 | tags = pattern.findall(text)
129 | if include_indicator:
130 | return [tag_indicator + t for t in tags]
131 | return tags
132 |
133 |
134 | def modify_tag_param(text, tag, modification):
135 | param = get_tag_param(text, tag)
136 | new_param = modification(param)
137 | return add_tag(text, tag, new_param)
138 |
139 |
140 | def sort_by_tag_param(texts_collection, tag, reverse=False):
141 | def param_or_empty_text(t):
142 | param = get_tag_param(t, tag)
143 | return param if param else ''
144 | return tuple(sorted(
145 | texts_collection,
146 | key=param_or_empty_text,
147 | reverse=reverse
148 | ))
149 |
150 |
151 | def toggle_tags(text, tags_with_params):
152 | """Args:
153 | tags_with_params (list of pairs): tag and it's param,
154 | param can be falsy or be strftime template
155 | in the last case datetime.now() is used to fill it
156 | """
157 | has_those_tags = [has_tag(text, tag) for tag, param in tags_with_params]
158 | if not any(has_those_tags):
159 | text = add_tag(text, *_toggle_tag_prep(tags_with_params[0]))
160 | elif has_those_tags[-1]:
161 | text = remove_tag(text, tags_with_params[-1][0])
162 | for i, (tag, has_this_tag) in enumerate(zip(tags_with_params[:-1], has_those_tags)):
163 | if has_this_tag:
164 | text = remove_tag(text, tag[0])
165 | text = add_tag(text, *_toggle_tag_prep(tags_with_params[i + 1]))
166 | return text
167 |
168 |
169 | def _toggle_tag_prep(tag_param):
170 | tag, param = tag_param
171 | return tag, dt.datetime.now().strftime(param) if param else None
172 |
173 |
174 | # Formatting
175 |
176 | def strip_formatting(text):
177 | text = text.strip('\t')
178 | if is_task(text):
179 | return text[len(task_indicator):]
180 | elif is_project(text):
181 | return text[:-(len(project_indicator))]
182 | return text
183 |
184 |
185 | def strip_formatting_and_tags(text):
186 | text = text.strip()
187 | if is_project(text):
188 | text = text[:-len(project_indicator)]
189 | if is_task(text):
190 | text = text[len(task_indicator):]
191 | for tag in get_all_tags(text):
192 | text = remove_tag(text, tag)
193 | return text.strip()
194 |
195 |
196 | def calculate_indent_level(text):
197 | indentation_length = len(text) - len(text.lstrip('\t'))
198 | if indentation_length == 0:
199 | return 0
200 | indentation = text[:indentation_length]
201 | return len(indentation)
202 |
203 |
204 | # Dates
205 |
206 | def parse_datetime(text):
207 | try:
208 | return dt.datetime.strptime(text, '%Y-%m-%d %H:%M')
209 | except ValueError:
210 | return dt.datetime.strptime(text, '%Y-%m-%d')
211 |
--------------------------------------------------------------------------------
/todoflow/todoitem.py:
--------------------------------------------------------------------------------
1 | from __future__ import absolute_import
2 |
3 | from . import textutils as tu
4 | from .compatibility import unicode
5 |
6 |
7 | class Todoitem(object):
8 | """Representation of single todo item.
9 |
10 | It can be task, project or note.
11 |
12 | Note:
13 | `Todoitem` knows nothing about it's place in whole todos.
14 | `Todoitem` is mutable.
15 | """
16 | _id_counter = 1
17 |
18 | @classmethod
19 | def from_text(cls, text):
20 | return Todoitem(text)
21 |
22 | @classmethod
23 | def from_token(cls, token):
24 | item = Todoitem(token.text)
25 | item.line_number = token.line_number
26 | return item
27 |
28 | @classmethod
29 | def _gen_id(cls):
30 | cls._id_counter += 1
31 | return unicode(cls._id_counter)
32 |
33 | def __init__(self, text=''):
34 | """Creates `Todoitem` from text."""
35 | # internally text of todoitem is stored in stripped form
36 | # without '\t' indent, task indicator - '- ',
37 | # and project indicator ':'
38 | self.id = self._gen_id()
39 | self.text = tu.strip_formatting(text) if text else ''
40 | self.type = tu.get_type(text or '')
41 | self.line_number = None
42 |
43 | def __unicode__(self):
44 | return self.get_text()
45 |
46 | def __str__(self):
47 | return self.__unicode__()
48 |
49 | def __repr__(self):
50 | return ''.format(
51 | self.id, self.text, self.type
52 | )
53 |
54 | def get_text(self):
55 | if self.type == 'task':
56 | return '- ' + self.text
57 | elif self.type == 'project':
58 | return self.text + ':'
59 | return self.text
60 |
61 | def get_id(self):
62 | return self.id
63 |
64 | def get_line_number(self):
65 | return self.line_number
66 |
67 | def get_tag_param(self, tag):
68 | return self.todoitem.get_tag_param(tag)
69 |
70 | def get_type(self):
71 | return self.type
72 |
73 | def tag(self, tag_to_use, param=None):
74 | self.text = tu.add_tag(self.text, tag_to_use, param)
75 | if self.is_project:
76 | self.text += " "
77 |
78 | def remove_tag(self, tag_to_remove):
79 | self.text = tu.remove_tag(self.text, tag_to_remove)
80 |
81 | def has_tag(self, tag):
82 | return tu.has_tag(self.text, tag)
83 |
84 | def get_tag_param(self, tag):
85 | return tu.get_tag_param(self.text, tag)
86 |
87 | def edit(self, new_text):
88 | self.text = tu.strip_formatting(new_text)
89 |
90 | def change_to_task(self):
91 | self.type = 'task'
92 |
93 | def change_to_project(self):
94 | self.type = 'project'
95 |
96 | def change_to_note(self):
97 | self.type = 'note'
98 |
99 | def change_to_new_line(self):
100 | self.type = 'newline'
101 |
--------------------------------------------------------------------------------
/todoflow/todos.py:
--------------------------------------------------------------------------------
1 | # -*- coding: utf-8 -*-
2 | from collections import deque
3 |
4 | from .todoitem import Todoitem
5 | from .query import Query
6 | from .compatibility import unicode, is_string
7 |
8 |
9 |
10 | class Todos(object):
11 | def __init__(self, todoitem=None, subitems=None, parent=None):
12 | """Representation of taskpaper todos."""
13 | if is_string(todoitem):
14 | from .parser import parse
15 | todos = parse(todoitem)
16 | self.todoitem = None
17 | self.subitems = todos.subitems
18 | self.parent = None
19 | else:
20 | self.todoitem = todoitem
21 | self.subitems = [Todos(c.todoitem, c.subitems, self) for c in subitems] if subitems else []
22 | self.parent = parent
23 |
24 | def __add__(self, other):
25 | self_value, other_value = self.todoitem, other.todoitem
26 | if self_value and other_value:
27 | return Todos(subitems=[self, other])
28 | elif self_value:
29 | return Todos(subitems=[self] + other.subitems)
30 | elif other_value:
31 | return Todos(subitems=self.subitems + [other])
32 | else:
33 | return Todos(subitems=self.subitems + other.subitems)
34 |
35 | def __bool__(self):
36 | return bool(self.todoitem or self.subitems)
37 |
38 | def __contains__(self, todos_or_todoitem):
39 | for subitem in self:
40 | if subitem is todos_or_todoitem:
41 | return True
42 | if subitem.todoitem and subitem.todoitem is todos_or_todoitem:
43 | return True
44 | return False
45 |
46 | def __iter__(self):
47 | if self.todoitem:
48 | yield self
49 | for child in self.subitems:
50 | for descendant in child:
51 | yield descendant
52 |
53 | def __len__(self):
54 | return sum(len(i) for i in self.subitems) + (1 if self.todoitem else 0)
55 |
56 | def __repr__(self):
57 | return 'Todos("{}")'.format(self.get_text())
58 |
59 | def __str__(self):
60 | return self.__unicode__()
61 |
62 | def __div__(self, query):
63 | return self.filter(query)
64 |
65 | def __unicode__(self):
66 | strings = [unicode(i) for i in self.subitems]
67 | if self.todoitem:
68 | strings = [unicode(self.get_text())] + strings
69 | return '\n'.join(strings)
70 |
71 | def get_text(self):
72 | if not self.todoitem:
73 | return ''
74 | if self.todoitem.get_type() == 'newline':
75 | return ''
76 | return '\t' * self.get_level() + self.todoitem.get_text()
77 |
78 | def get_with_todoitem(self, todoitem):
79 | for subitem in self:
80 | if subitem.todoitem == todoitem:
81 | return subitem
82 | raise KeyError('{} not found in {}'.format(todoitem, self))
83 |
84 | def get_level(self):
85 | level = 0
86 | todos = self
87 | while todos.parent:
88 | if todos.parent.todoitem:
89 | level += 1
90 | todos = todos.parent
91 | return level
92 |
93 | def get_id(self):
94 | if not self.todoitem:
95 | return None
96 | return self.todoitem.get_id()
97 |
98 | def get_line_number(self):
99 | if not self.todoitem:
100 | return None
101 | return self.todoitem.get_line_number()
102 |
103 | def get_type(self):
104 | if not self.todoitem:
105 | return None
106 | return self.todoitem.get_type()
107 |
108 | def get_tag_param(self, tag):
109 | if not self.todoitem:
110 | return None
111 | return self.todoitem.get_tag_param(tag)
112 |
113 | def has_tag(self, tag):
114 | if not self.todoitem:
115 | return False
116 | return self.todoitem.has_tag(tag)
117 |
118 | def yield_parent(self):
119 | if self.parent:
120 | yield self.parent
121 |
122 | def yield_ancestors(self):
123 | for parent in self.yield_parent():
124 | for ancestor in parent.yield_ancestors_and_self():
125 | yield ancestor
126 |
127 | def yield_ancestors_and_self(self):
128 | ancestor_or_self = self
129 | while ancestor_or_self:
130 | yield ancestor_or_self
131 | ancestor_or_self = ancestor_or_self.parent
132 |
133 | def yield_children(self):
134 | for child in self.subitems:
135 | yield child
136 |
137 | def yield_descendants(self):
138 | for child in self.subitems:
139 | for descendant in child:
140 | yield descendant
141 |
142 | def yield_descendants_and_self(self):
143 | for descendant_or_self in self:
144 | yield descendant_or_self
145 |
146 | def yield_siblings(self):
147 | for parent in self.yield_parent():
148 | for sibling in parent.yield_children():
149 | yield sibling
150 |
151 | def yield_following_siblings(self):
152 | started = False
153 | for sibling in self.yield_siblings():
154 | if started:
155 | yield sibling
156 | elif sibling is self:
157 | started = True
158 |
159 | def yield_preceding_siblings(self):
160 | finished = False
161 | for sibling in self.yield_siblings():
162 | if sibling is self:
163 | finished = True
164 | if not finished:
165 | yield sibling
166 |
167 | def yield_following(self):
168 | for child in self.yield_descendants():
169 | yield child
170 | for ancestor in self.yield_ancestors_and_self():
171 | for sibling in ancestor.yield_following_siblings():
172 | yield sibling
173 |
174 | def yield_preceding(self):
175 | for ancestor in self.yield_ancestors_and_self():
176 | for sibling in ancestor.yield_preceding_siblings():
177 | yield sibling
178 |
179 | def append(self, subitem):
180 | self.subitems.append(self.maybe_make_todos(subitem))
181 |
182 | def insert(self, index, subitem):
183 | self.subitems.insert(index, self.maybe_make_todos(subitem))
184 |
185 | def maybe_make_todos(self, child):
186 | if isinstance(child, Todoitem):
187 | return Todos(todoitem, [], self)
188 | return child
189 |
190 | def filter(self, query):
191 | if is_string(query):
192 | from .query_parser import parse
193 | query = parse(query)
194 | if isinstance(query, Query):
195 | matching = list(query.search(self))
196 | query = lambda i: i in matching
197 | subitems = [i for i in [i.filter(query) for i in self.subitems] if i]
198 | if subitems or query(self):
199 | return Todos(self.todoitem, subitems=subitems)
200 | return Todos()
201 |
202 | def search(self, query):
203 | if is_string(query):
204 | from .query_parser import parse
205 | query = parse(query)
206 | if isinstance(query, Query):
207 | for item in query.search(self):
208 | yield item
209 | return
210 | for item in self:
211 | if query(item):
212 | yield item
213 |
--------------------------------------------------------------------------------
/watch_tests.py:
--------------------------------------------------------------------------------
1 | import sys
2 | import time
3 | import logging
4 | from watchdog.observers import Observer
5 | from watchdog.events import FileSystemEventHandler
6 |
7 | import subprocess
8 |
9 |
10 | class TestRunner(FileSystemEventHandler):
11 | def on_any_event(self, event):
12 | try:
13 | # print(subprocess.check_output(['python3', 'tests/test_parsing.py']).decode('utf-8'))
14 | # print(subprocess.check_output(['python3', 'tests/test_querying.py']).decode('utf-8'))
15 | # print(subprocess.check_output(['python3', 'tests/test_textutils.py']).decode('utf-8'))
16 | # print(subprocess.check_output(['python3', 'tests/test_todos.py']).decode('utf-8'))
17 | print(subprocess.check_output(['python3', 'tests/test_query_lexer.py']).decode('utf-8'))
18 | except:
19 | pass
20 |
21 |
22 | if __name__ == "__main__":
23 | logging.basicConfig(
24 | level=logging.INFO,
25 | format='%(asctime)s - %(message)s',
26 | datefmt='%Y-%m-%d %H:%M:%S'
27 | )
28 | path = '.'
29 | event_handler = TestRunner()
30 | observer = Observer()
31 | observer.schedule(event_handler, path, recursive=True)
32 | observer.start()
33 | try:
34 | while True:
35 | time.sleep(1)
36 | except KeyboardInterrupt:
37 | observer.stop()
38 | observer.join()
39 |
--------------------------------------------------------------------------------