├── LICENSE.md
├── README.md
├── example-image.png
├── example
│   ├── animation.html
│   ├── animation.svg
│   ├── output
│   │   └── animation.html
│   ├── parallax_svg_tools
│   │   ├── bs4
│   │   │   ├── __init__.py
│   │   │   ├── __init__.pyc
│   │   │   ├── builder
│   │   │   │   ├── __init__.py
│   │   │   │   ├── __init__.pyc
│   │   │   │   ├── _html5lib.py
│   │   │   │   ├── _html5lib.pyc
│   │   │   │   ├── _htmlparser.py
│   │   │   │   ├── _htmlparser.pyc
│   │   │   │   ├── _lxml.py
│   │   │   │   └── _lxml.pyc
│   │   │   ├── dammit.py
│   │   │   ├── dammit.pyc
│   │   │   ├── diagnose.py
│   │   │   ├── element.py
│   │   │   └── element.pyc
│   │   ├── run.py
│   │   └── svg
│   │       ├── __init__.py
│   │       └── __init__.pyc
│   └── processed_animation.svg
├── parallax_svg_tools.zip
├── parallax_svg_tools
│   ├── bs4
│   │   ├── __init__.py
│   │   ├── __init__.pyc
│   │   ├── builder
│   │   │   ├── __init__.py
│   │   │   ├── __init__.pyc
│   │   │   ├── _html5lib.py
│   │   │   ├── _html5lib.pyc
│   │   │   ├── _htmlparser.py
│   │   │   ├── _htmlparser.pyc
│   │   │   ├── _lxml.py
│   │   │   └── _lxml.pyc
│   │   ├── dammit.py
│   │   ├── dammit.pyc
│   │   ├── diagnose.py
│   │   ├── element.py
│   │   └── element.pyc
│   ├── run.py
│   └── svg
│       └── __init__.py
├── svg-settings.png
└── vlv-intro-gif.gif
/LICENSE.md:
--------------------------------------------------------------------------------
1 | Copyright 2017 Parallax Agency Ltd
2 |
3 | Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
4 |
5 | The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
6 |
7 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
8 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Parallax SVG Animation Tools
2 |
3 | A simple set of Python functions to help you work with animated SVGs exported from Illustrator. More features coming soon!
4 | We used them to create animations like this one:
5 |
6 | [Viva La Velo](https://parall.ax/viva-le-velo)
7 |
8 | ![Viva La Velo animation](vlv-intro-gif.gif)
9 |
10 |
11 | ## Overview
12 |
13 | Part of animating with SVGs is getting references to elements in code and passing them to animation functions. For complicated animations this becomes difficult, and hand-editing SVG code is slow and gets overwritten whenever your artwork updates. We decided to write a post-processor for SVGs produced by Illustrator to help speed this up. Layer names are used to create attributes, classes and IDs, making it far easier to select elements in JS or CSS.
14 |
15 | This is what the SVG code looks like before and after the processing step.
16 |
17 | ```xml
18 |
19 |
24 |
25 |
26 |
31 | ```
32 |
33 | ![Before and after processing](example-image.png)
34 |
35 |
36 | ## Quick Example
37 |
38 | Download the [svg tools](parallax_svg_tools.zip) and unzip them into your project folder.
39 |
40 | Create an Illustrator file, add an element and change its layer name to, say, `#class=my-element`. Export the SVG using the **File > Export > Export for Screens** option with the following settings. Name the SVG `animation.svg`.
41 |
42 | ![Export for Screens settings](svg-settings.png)
43 |
44 | Create an HTML file as below. The import statements inline the SVG into our HTML file so we don't have to do any copying and pasting. This isn't strictly necessary, but it makes the workflow a little easier. Save it as `animation.html`.
45 |
46 | ```html
47 |
48 |
49 |
50 |
51 |
52 |
53 |
54 | //import processed_animation.svg
55 |
56 |
57 |
58 | ```
59 |
60 |
61 | Open the file called `run.py`. Here you can edit how the SVGs will be processed. The default looks like this. The sections below describe what the various options do.
62 |
63 | ```python
64 | from svg import *
65 |
66 | compile_svg('animation.svg', 'processed_animation.svg',
67 | {
68 |     'process_layer_names': True,
69 |     'namespace': 'example'
70 | })
71 |
72 | inline_svg('animation.html', 'output/animation.html')
73 | ```
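For reference, here is a fuller configuration passing more of the options documented under "Functions" below. The option values shown are illustrative, not defaults:

```python
from svg import *

# Illustrative values only; see the option descriptions under "Functions".
compile_svg('animation.svg', 'processed_animation.svg',
{
    'process_layer_names': True,
    'expand_origin': True,
    'namespace': 'example',
    'nowhitespace': True,
    'attributes': {'preserveAspectRatio': 'xMidYMid meet'},
    'title': 'My animation',
    'description': False
})
```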
74 |
75 | Open the command line and navigate to your project folder. Run the script with `python parallax_svg_tools/run.py`. If everything worked correctly, you should see a list of processed files (just one in this case) printed to the console. Note that the script must be run from a directory that has access to the SVG files.
76 |
77 | There should now be a folder called `output` containing an `animation.html` file with your processed SVG in it. All that is left to do is animate it with your tool of choice (ours is [GSAP](https://greensock.com/)).
78 |
79 |
80 | ## Functions
81 |
82 | ### process\_svg(src\_path, dst\_path, options)
83 | Processes a single SVG and writes it to the supplied destination path. The following options are available.
84 |
85 | + **process\_layer\_names:**
86 | Converts layer names as defined in Illustrator into attributes. Begin the layer name with a '#' to indicate that the layer should be parsed.
87 | For example `#id=my-id, class=my-class my-other-class, role=my-role` ...etc.
88 | This is useful for fetching elements with Javascript as well as marking up elements for accessibility - see this [CSS Tricks Accessible SVG ](https://css-tricks.com/accessible-svgs/) article.
89 | NOTE: Requires using commas to separate the attributes as that makes the parsing code a lot simpler :)
90 |
91 | + **expand_origin:**
92 | Allows you to use `origin=100 100` to set origins for rotating/scaling with GSAP (expands to data-svg-origin).
93 |
94 | + **namespace:**
95 | Appends a namespace to classes and IDs if one is provided. Useful for avoiding conflicts with other SVG files for things like masks and clipPaths.
96 |
97 | + **nowhitespace:**
98 | Removes unneeded whitespace. We don't do anything fancier than that so as to not break animations. Use the excellent [SVGO]() if you need better minification.
99 |
100 | + **attributes:**
101 | An object of key:value strings that will be applied as attributes to the root SVG element.
102 |
103 | + **title:**
104 | Sets the title or removes it completely when set to `false`
105 |
106 | + **description:**
107 | Sets the description or removes it completely when set to `false`
108 |
109 | + **convert_svg_text_to_html:**
110 | Converts SVG text into HTML text via the foreignObject tag, reducing file bloat and allowing you to style it with CSS. Requires the text to be grouped inside a rectangle with the layer name set to `#TEXT`.
111 |
112 | + **spirit:**
113 | Expands `#spirit=my-id` to `data-spirit-id` when set to `true` for use with the [Spirit animation app]()
114 |
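The layer-name convention described above can be sketched with a tiny parser. This is an illustrative helper only, not the tool's actual parsing code:

```python
# Sketch of the '#key=value, key=value' layer-name convention described
# above. Illustrative only -- not the tool's actual parser.
def parse_layer_name(layer_name):
    """Turn e.g. '#id=my-id, class=my-class other' into an attribute dict."""
    if not layer_name.startswith('#'):
        return {}  # layers without a leading '#' are left untouched
    attrs = {}
    # Commas separate attributes, as the README notes.
    for pair in layer_name[1:].split(','):
        key, _, value = pair.strip().partition('=')
        attrs[key.strip()] = value.strip()
    return attrs

print(parse_layer_name('#id=my-id, class=my-class my-other-class, role=img'))
# → {'id': 'my-id', 'class': 'my-class my-other-class', 'role': 'img'}
```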
115 |
116 | ### inline\_svg(src\_path, dst\_path)
117 | In order to animate SVGs, their markup needs to be placed inline in the HTML. This function looks at the source HTML file and replaces any `//import` statements it finds with the contents of the referenced SVGs.
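A minimal sketch of what that inlining step could look like (illustrative only; the shipped `inline_svg` may differ in details):

```python
import os
import re

def inline_svg_sketch(src_path, dst_path):
    """Replace each '//import file.svg' reference with that file's contents."""
    with open(src_path) as f:
        html = f.read()
    base = os.path.dirname(src_path)

    def splice(match):
        # Read the referenced SVG relative to the source HTML file.
        with open(os.path.join(base, match.group(1))) as svg:
            return svg.read()

    html = re.sub(r'//import\s+(\S+)', splice, html)

    out_dir = os.path.dirname(dst_path)
    if out_dir and not os.path.isdir(out_dir):
        os.makedirs(out_dir)  # create the output folder if needed
    with open(dst_path, 'w') as f:
        f.write(html)
```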
--------------------------------------------------------------------------------
/example-image.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/parallax/svg-animation-tools/3cbad1696760c049ed66b7e0c8631357000dbdb6/example-image.png
--------------------------------------------------------------------------------
/example/animation.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 | //import processed_animation.svg
9 |
10 |
11 |
--------------------------------------------------------------------------------
/example/animation.svg:
--------------------------------------------------------------------------------
1 |
7 |
--------------------------------------------------------------------------------
/example/output/animation.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
14 |
15 |
16 |
--------------------------------------------------------------------------------
/example/parallax_svg_tools/bs4/__init__.py:
--------------------------------------------------------------------------------
1 | """Beautiful Soup
2 | Elixir and Tonic
3 | "The Screen-Scraper's Friend"
4 | http://www.crummy.com/software/BeautifulSoup/
5 |
6 | Beautiful Soup uses a pluggable XML or HTML parser to parse a
7 | (possibly invalid) document into a tree representation. Beautiful Soup
8 | provides methods and Pythonic idioms that make it easy to navigate,
9 | search, and modify the parse tree.
10 |
11 | Beautiful Soup works with Python 2.7 and up. It works better if lxml
12 | and/or html5lib is installed.
13 |
14 | For more than you ever wanted to know about Beautiful Soup, see the
15 | documentation:
16 | http://www.crummy.com/software/BeautifulSoup/bs4/doc/
17 |
18 | """
19 |
20 | # Use of this source code is governed by a BSD-style license that can be
21 | # found in the LICENSE file.
22 |
23 | __author__ = "Leonard Richardson (leonardr@segfault.org)"
24 | __version__ = "4.5.1"
25 | __copyright__ = "Copyright (c) 2004-2016 Leonard Richardson"
26 | __license__ = "MIT"
27 |
28 | __all__ = ['BeautifulSoup']
29 |
30 | import os
31 | import re
32 | import traceback
33 | import warnings
34 |
35 | from .builder import builder_registry, ParserRejectedMarkup
36 | from .dammit import UnicodeDammit
37 | from .element import (
38 | CData,
39 | Comment,
40 | DEFAULT_OUTPUT_ENCODING,
41 | Declaration,
42 | Doctype,
43 | NavigableString,
44 | PageElement,
45 | ProcessingInstruction,
46 | ResultSet,
47 | SoupStrainer,
48 | Tag,
49 | )
50 |
51 | # The very first thing we do is give a useful error if someone is
52 | # running this code under Python 3 without converting it.
53 | 'You are trying to run the Python 2 version of Beautiful Soup under Python 3. This will not work.'<>'You need to convert the code, either by installing it (`python setup.py install`) or by running 2to3 (`2to3 -w bs4`).'
54 |
55 | class BeautifulSoup(Tag):
56 | """
57 | This class defines the basic interface called by the tree builders.
58 |
59 | These methods will be called by the parser:
60 | reset()
61 | feed(markup)
62 |
63 | The tree builder may call these methods from its feed() implementation:
64 | handle_starttag(name, attrs) # See note about return value
65 | handle_endtag(name)
66 | handle_data(data) # Appends to the current data node
67 | endData(containerClass=NavigableString) # Ends the current data node
68 |
69 | No matter how complicated the underlying parser is, you should be
70 | able to build a tree using 'start tag' events, 'end tag' events,
71 | 'data' events, and "done with data" events.
72 |
73 | If you encounter an empty-element tag (aka a self-closing tag,
74 | like HTML's <br> tag), call handle_starttag and then
75 | handle_endtag.
76 | """
77 | ROOT_TAG_NAME = u'[document]'
78 |
79 | # If the end-user gives no indication which tree builder they
80 | # want, look for one with these features.
81 | DEFAULT_BUILDER_FEATURES = ['html', 'fast']
82 |
83 | ASCII_SPACES = '\x20\x0a\x09\x0c\x0d'
84 |
85 | NO_PARSER_SPECIFIED_WARNING = "No parser was explicitly specified, so I'm using the best available %(markup_type)s parser for this system (\"%(parser)s\"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.\n\nThe code that caused this warning is on line %(line_number)s of the file %(filename)s. To get rid of this warning, change code that looks like this:\n\n BeautifulSoup([your markup])\n\nto this:\n\n BeautifulSoup([your markup], \"%(parser)s\")\n"
86 |
87 | def __init__(self, markup="", features=None, builder=None,
88 | parse_only=None, from_encoding=None, exclude_encodings=None,
89 | **kwargs):
90 | """The Soup object is initialized as the 'root tag', and the
91 | provided markup (which can be a string or a file-like object)
92 | is fed into the underlying parser."""
93 |
94 | if 'convertEntities' in kwargs:
95 | warnings.warn(
96 | "BS4 does not respect the convertEntities argument to the "
97 | "BeautifulSoup constructor. Entities are always converted "
98 | "to Unicode characters.")
99 |
100 | if 'markupMassage' in kwargs:
101 | del kwargs['markupMassage']
102 | warnings.warn(
103 | "BS4 does not respect the markupMassage argument to the "
104 | "BeautifulSoup constructor. The tree builder is responsible "
105 | "for any necessary markup massage.")
106 |
107 | if 'smartQuotesTo' in kwargs:
108 | del kwargs['smartQuotesTo']
109 | warnings.warn(
110 | "BS4 does not respect the smartQuotesTo argument to the "
111 | "BeautifulSoup constructor. Smart quotes are always converted "
112 | "to Unicode characters.")
113 |
114 | if 'selfClosingTags' in kwargs:
115 | del kwargs['selfClosingTags']
116 | warnings.warn(
117 | "BS4 does not respect the selfClosingTags argument to the "
118 | "BeautifulSoup constructor. The tree builder is responsible "
119 | "for understanding self-closing tags.")
120 |
121 | if 'isHTML' in kwargs:
122 | del kwargs['isHTML']
123 | warnings.warn(
124 | "BS4 does not respect the isHTML argument to the "
125 | "BeautifulSoup constructor. Suggest you use "
126 | "features='lxml' for HTML and features='lxml-xml' for "
127 | "XML.")
128 |
129 | def deprecated_argument(old_name, new_name):
130 | if old_name in kwargs:
131 | warnings.warn(
132 | 'The "%s" argument to the BeautifulSoup constructor '
133 | 'has been renamed to "%s."' % (old_name, new_name))
134 | value = kwargs[old_name]
135 | del kwargs[old_name]
136 | return value
137 | return None
138 |
139 | parse_only = parse_only or deprecated_argument(
140 | "parseOnlyThese", "parse_only")
141 |
142 | from_encoding = from_encoding or deprecated_argument(
143 | "fromEncoding", "from_encoding")
144 |
145 | if from_encoding and isinstance(markup, unicode):
146 | warnings.warn("You provided Unicode markup but also provided a value for from_encoding. Your from_encoding will be ignored.")
147 | from_encoding = None
148 |
149 | if len(kwargs) > 0:
150 | arg = kwargs.keys().pop()
151 | raise TypeError(
152 | "__init__() got an unexpected keyword argument '%s'" % arg)
153 |
154 | if builder is None:
155 | original_features = features
156 | if isinstance(features, basestring):
157 | features = [features]
158 | if features is None or len(features) == 0:
159 | features = self.DEFAULT_BUILDER_FEATURES
160 | builder_class = builder_registry.lookup(*features)
161 | if builder_class is None:
162 | raise FeatureNotFound(
163 | "Couldn't find a tree builder with the features you "
164 | "requested: %s. Do you need to install a parser library?"
165 | % ",".join(features))
166 | builder = builder_class()
167 | if not (original_features == builder.NAME or
168 | original_features in builder.ALTERNATE_NAMES):
169 | if builder.is_xml:
170 | markup_type = "XML"
171 | else:
172 | markup_type = "HTML"
173 |
174 | caller = traceback.extract_stack()[0]
175 | filename = caller[0]
176 | line_number = caller[1]
177 | warnings.warn(self.NO_PARSER_SPECIFIED_WARNING % dict(
178 | filename=filename,
179 | line_number=line_number,
180 | parser=builder.NAME,
181 | markup_type=markup_type))
182 |
183 | self.builder = builder
184 | self.is_xml = builder.is_xml
185 | self.known_xml = self.is_xml
186 | self.builder.soup = self
187 |
188 | self.parse_only = parse_only
189 |
190 | if hasattr(markup, 'read'): # It's a file-type object.
191 | markup = markup.read()
192 | elif len(markup) <= 256 and (
193 | (isinstance(markup, bytes) and not b'<' in markup)
194 | or (isinstance(markup, unicode) and not u'<' in markup)
195 | ):
196 | # Print out warnings for a couple beginner problems
197 | # involving passing non-markup to Beautiful Soup.
198 | # Beautiful Soup will still parse the input as markup,
199 | # just in case that's what the user really wants.
200 | if (isinstance(markup, unicode)
201 | and not os.path.supports_unicode_filenames):
202 | possible_filename = markup.encode("utf8")
203 | else:
204 | possible_filename = markup
205 | is_file = False
206 | try:
207 | is_file = os.path.exists(possible_filename)
208 | except Exception, e:
209 | # This is almost certainly a problem involving
210 | # characters not valid in filenames on this
211 | # system. Just let it go.
212 | pass
213 | if is_file:
214 | if isinstance(markup, unicode):
215 | markup = markup.encode("utf8")
216 | warnings.warn(
217 | '"%s" looks like a filename, not markup. You should'
218 | 'probably open this file and pass the filehandle into'
219 | 'Beautiful Soup.' % markup)
220 | self._check_markup_is_url(markup)
221 |
222 | for (self.markup, self.original_encoding, self.declared_html_encoding,
223 | self.contains_replacement_characters) in (
224 | self.builder.prepare_markup(
225 | markup, from_encoding, exclude_encodings=exclude_encodings)):
226 | self.reset()
227 | try:
228 | self._feed()
229 | break
230 | except ParserRejectedMarkup:
231 | pass
232 |
233 | # Clear out the markup and remove the builder's circular
234 | # reference to this object.
235 | self.markup = None
236 | self.builder.soup = None
237 |
238 | def __copy__(self):
239 | copy = type(self)(
240 | self.encode('utf-8'), builder=self.builder, from_encoding='utf-8'
241 | )
242 |
243 | # Although we encoded the tree to UTF-8, that may not have
244 | # been the encoding of the original markup. Set the copy's
245 | # .original_encoding to reflect the original object's
246 | # .original_encoding.
247 | copy.original_encoding = self.original_encoding
248 | return copy
249 |
250 | def __getstate__(self):
251 | # Frequently a tree builder can't be pickled.
252 | d = dict(self.__dict__)
253 | if 'builder' in d and not self.builder.picklable:
254 | d['builder'] = None
255 | return d
256 |
257 | @staticmethod
258 | def _check_markup_is_url(markup):
259 | """
260 | Check if markup looks like it's actually a url and raise a warning
261 | if so. Markup can be unicode or str (py2) / bytes (py3).
262 | """
263 | if isinstance(markup, bytes):
264 | space = b' '
265 | cant_start_with = (b"http:", b"https:")
266 | elif isinstance(markup, unicode):
267 | space = u' '
268 | cant_start_with = (u"http:", u"https:")
269 | else:
270 | return
271 |
272 | if any(markup.startswith(prefix) for prefix in cant_start_with):
273 | if not space in markup:
274 | if isinstance(markup, bytes):
275 | decoded_markup = markup.decode('utf-8', 'replace')
276 | else:
277 | decoded_markup = markup
278 | warnings.warn(
279 | '"%s" looks like a URL. Beautiful Soup is not an'
280 | ' HTTP client. You should probably use an HTTP client like'
281 | ' requests to get the document behind the URL, and feed'
282 | ' that document to Beautiful Soup.' % decoded_markup
283 | )
284 |
285 | def _feed(self):
286 | # Convert the document to Unicode.
287 | self.builder.reset()
288 |
289 | self.builder.feed(self.markup)
290 | # Close out any unfinished strings and close all the open tags.
291 | self.endData()
292 | while self.currentTag.name != self.ROOT_TAG_NAME:
293 | self.popTag()
294 |
295 | def reset(self):
296 | Tag.__init__(self, self, self.builder, self.ROOT_TAG_NAME)
297 | self.hidden = 1
298 | self.builder.reset()
299 | self.current_data = []
300 | self.currentTag = None
301 | self.tagStack = []
302 | self.preserve_whitespace_tag_stack = []
303 | self.pushTag(self)
304 |
305 | def new_tag(self, name, namespace=None, nsprefix=None, **attrs):
306 | """Create a new tag associated with this soup."""
307 | return Tag(None, self.builder, name, namespace, nsprefix, attrs)
308 |
309 | def new_string(self, s, subclass=NavigableString):
310 | """Create a new NavigableString associated with this soup."""
311 | return subclass(s)
312 |
313 | def insert_before(self, successor):
314 | raise NotImplementedError("BeautifulSoup objects don't support insert_before().")
315 |
316 | def insert_after(self, successor):
317 | raise NotImplementedError("BeautifulSoup objects don't support insert_after().")
318 |
319 | def popTag(self):
320 | tag = self.tagStack.pop()
321 | if self.preserve_whitespace_tag_stack and tag == self.preserve_whitespace_tag_stack[-1]:
322 | self.preserve_whitespace_tag_stack.pop()
323 | #print "Pop", tag.name
324 | if self.tagStack:
325 | self.currentTag = self.tagStack[-1]
326 | return self.currentTag
327 |
328 | def pushTag(self, tag):
329 | #print "Push", tag.name
330 | if self.currentTag:
331 | self.currentTag.contents.append(tag)
332 | self.tagStack.append(tag)
333 | self.currentTag = self.tagStack[-1]
334 | if tag.name in self.builder.preserve_whitespace_tags:
335 | self.preserve_whitespace_tag_stack.append(tag)
336 |
337 | def endData(self, containerClass=NavigableString):
338 | if self.current_data:
339 | current_data = u''.join(self.current_data)
340 | # If whitespace is not preserved, and this string contains
341 | # nothing but ASCII spaces, replace it with a single space
342 | # or newline.
343 | if not self.preserve_whitespace_tag_stack:
344 | strippable = True
345 | for i in current_data:
346 | if i not in self.ASCII_SPACES:
347 | strippable = False
348 | break
349 | if strippable:
350 | if '\n' in current_data:
351 | current_data = '\n'
352 | else:
353 | current_data = ' '
354 |
355 | # Reset the data collector.
356 | self.current_data = []
357 |
358 | # Should we add this string to the tree at all?
359 | if self.parse_only and len(self.tagStack) <= 1 and \
360 | (not self.parse_only.text or \
361 | not self.parse_only.search(current_data)):
362 | return
363 |
364 | o = containerClass(current_data)
365 | self.object_was_parsed(o)
366 |
367 | def object_was_parsed(self, o, parent=None, most_recent_element=None):
368 | """Add an object to the parse tree."""
369 | parent = parent or self.currentTag
370 | previous_element = most_recent_element or self._most_recent_element
371 |
372 | next_element = previous_sibling = next_sibling = None
373 | if isinstance(o, Tag):
374 | next_element = o.next_element
375 | next_sibling = o.next_sibling
376 | previous_sibling = o.previous_sibling
377 | if not previous_element:
378 | previous_element = o.previous_element
379 |
380 | o.setup(parent, previous_element, next_element, previous_sibling, next_sibling)
381 |
382 | self._most_recent_element = o
383 | parent.contents.append(o)
384 |
385 | if parent.next_sibling:
386 | # This node is being inserted into an element that has
387 | # already been parsed. Deal with any dangling references.
388 | index = len(parent.contents)-1
389 | while index >= 0:
390 | if parent.contents[index] is o:
391 | break
392 | index -= 1
393 | else:
394 | raise ValueError(
395 | "Error building tree: supposedly %r was inserted "
396 | "into %r after the fact, but I don't see it!" % (
397 | o, parent
398 | )
399 | )
400 | if index == 0:
401 | previous_element = parent
402 | previous_sibling = None
403 | else:
404 | previous_element = previous_sibling = parent.contents[index-1]
405 | if index == len(parent.contents)-1:
406 | next_element = parent.next_sibling
407 | next_sibling = None
408 | else:
409 | next_element = next_sibling = parent.contents[index+1]
410 |
411 | o.previous_element = previous_element
412 | if previous_element:
413 | previous_element.next_element = o
414 | o.next_element = next_element
415 | if next_element:
416 | next_element.previous_element = o
417 | o.next_sibling = next_sibling
418 | if next_sibling:
419 | next_sibling.previous_sibling = o
420 | o.previous_sibling = previous_sibling
421 | if previous_sibling:
422 | previous_sibling.next_sibling = o
423 |
424 | def _popToTag(self, name, nsprefix=None, inclusivePop=True):
425 | """Pops the tag stack up to and including the most recent
426 | instance of the given tag. If inclusivePop is false, pops the tag
427 | stack up to but *not* including the most recent instqance of
428 | the given tag."""
429 | #print "Popping to %s" % name
430 | if name == self.ROOT_TAG_NAME:
431 | # The BeautifulSoup object itself can never be popped.
432 | return
433 |
434 | most_recently_popped = None
435 |
436 | stack_size = len(self.tagStack)
437 | for i in range(stack_size - 1, 0, -1):
438 | t = self.tagStack[i]
439 | if (name == t.name and nsprefix == t.prefix):
440 | if inclusivePop:
441 | most_recently_popped = self.popTag()
442 | break
443 | most_recently_popped = self.popTag()
444 |
445 | return most_recently_popped
446 |
447 | def handle_starttag(self, name, namespace, nsprefix, attrs):
448 | """Push a start tag on to the stack.
449 |
450 | If this method returns None, the tag was rejected by the
451 | SoupStrainer. You should proceed as if the tag had not occurred
452 | in the document. For instance, if this was a self-closing tag,
453 | don't call handle_endtag.
454 | """
455 |
456 | # print "Start tag %s: %s" % (name, attrs)
457 | self.endData()
458 |
459 | if (self.parse_only and len(self.tagStack) <= 1
460 | and (self.parse_only.text
461 | or not self.parse_only.search_tag(name, attrs))):
462 | return None
463 |
464 | tag = Tag(self, self.builder, name, namespace, nsprefix, attrs,
465 | self.currentTag, self._most_recent_element)
466 | if tag is None:
467 | return tag
468 | if self._most_recent_element:
469 | self._most_recent_element.next_element = tag
470 | self._most_recent_element = tag
471 | self.pushTag(tag)
472 | return tag
473 |
474 | def handle_endtag(self, name, nsprefix=None):
475 | #print "End tag: " + name
476 | self.endData()
477 | self._popToTag(name, nsprefix)
478 |
479 | def handle_data(self, data):
480 | self.current_data.append(data)
481 |
482 | def decode(self, pretty_print=False,
483 | eventual_encoding=DEFAULT_OUTPUT_ENCODING,
484 | formatter="minimal"):
485 | """Returns a string or Unicode representation of this document.
486 | To get Unicode, pass None for encoding."""
487 |
488 | if self.is_xml:
489 | # Print the XML declaration
490 | encoding_part = ''
491 | if eventual_encoding != None:
492 | encoding_part = ' encoding="%s"' % eventual_encoding
493 | prefix = u'<?xml version="1.0"%s?>\n' % encoding_part
494 | else:
495 | prefix = u''
496 | if not pretty_print:
497 | indent_level = None
498 | else:
499 | indent_level = 0
500 | return prefix + super(BeautifulSoup, self).decode(
501 | indent_level, eventual_encoding, formatter)
502 |
503 | # Alias to make it easier to type import: 'from bs4 import _soup'
504 | _s = BeautifulSoup
505 | _soup = BeautifulSoup
506 |
507 | class BeautifulStoneSoup(BeautifulSoup):
508 | """Deprecated interface to an XML parser."""
509 |
510 | def __init__(self, *args, **kwargs):
511 | kwargs['features'] = 'xml'
512 | warnings.warn(
513 | 'The BeautifulStoneSoup class is deprecated. Instead of using '
514 | 'it, pass features="xml" into the BeautifulSoup constructor.')
515 | super(BeautifulStoneSoup, self).__init__(*args, **kwargs)
516 |
517 |
518 | class StopParsing(Exception):
519 | pass
520 |
521 | class FeatureNotFound(ValueError):
522 | pass
523 |
524 |
525 | #By default, act as an HTML pretty-printer.
526 | if __name__ == '__main__':
527 | import sys
528 | soup = BeautifulSoup(sys.stdin)
529 | print soup.prettify()
530 |
--------------------------------------------------------------------------------
/example/parallax_svg_tools/bs4/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/parallax/svg-animation-tools/3cbad1696760c049ed66b7e0c8631357000dbdb6/example/parallax_svg_tools/bs4/__init__.pyc
--------------------------------------------------------------------------------
/example/parallax_svg_tools/bs4/builder/__init__.py:
--------------------------------------------------------------------------------
1 | # Use of this source code is governed by a BSD-style license that can be
2 | # found in the LICENSE file.
3 |
4 | from collections import defaultdict
5 | import itertools
6 | import sys
7 | from bs4.element import (
8 | CharsetMetaAttributeValue,
9 | ContentMetaAttributeValue,
10 | HTMLAwareEntitySubstitution,
11 | whitespace_re
12 | )
13 |
14 | __all__ = [
15 | 'HTMLTreeBuilder',
16 | 'SAXTreeBuilder',
17 | 'TreeBuilder',
18 | 'TreeBuilderRegistry',
19 | ]
20 |
21 | # Some useful features for a TreeBuilder to have.
22 | FAST = 'fast'
23 | PERMISSIVE = 'permissive'
24 | STRICT = 'strict'
25 | XML = 'xml'
26 | HTML = 'html'
27 | HTML_5 = 'html5'
28 |
29 |
30 | class TreeBuilderRegistry(object):
31 |
32 | def __init__(self):
33 | self.builders_for_feature = defaultdict(list)
34 | self.builders = []
35 |
36 | def register(self, treebuilder_class):
37 | """Register a treebuilder based on its advertised features."""
38 | for feature in treebuilder_class.features:
39 | self.builders_for_feature[feature].insert(0, treebuilder_class)
40 | self.builders.insert(0, treebuilder_class)
41 |
42 | def lookup(self, *features):
43 | if len(self.builders) == 0:
44 | # There are no builders at all.
45 | return None
46 |
47 | if len(features) == 0:
48 | # They didn't ask for any features. Give them the most
49 | # recently registered builder.
50 | return self.builders[0]
51 |
52 | # Go down the list of features in order, and eliminate any builders
53 | # that don't match every feature.
54 | features = list(features)
55 | features.reverse()
56 | candidates = None
57 | candidate_set = None
58 | while len(features) > 0:
59 | feature = features.pop()
60 | we_have_the_feature = self.builders_for_feature.get(feature, [])
61 | if len(we_have_the_feature) > 0:
62 | if candidates is None:
63 | candidates = we_have_the_feature
64 | candidate_set = set(candidates)
65 | else:
66 | # Eliminate any candidates that don't have this feature.
67 | candidate_set = candidate_set.intersection(
68 | set(we_have_the_feature))
69 |
70 | # The only valid candidates are the ones in candidate_set.
71 | # Go through the original list of candidates and pick the first one
72 | # that's in candidate_set.
73 | if candidate_set is None:
74 | return None
75 | for candidate in candidates:
76 | if candidate in candidate_set:
77 | return candidate
78 | return None
79 |
80 | # The BeautifulSoup class will take feature lists from developers and use them
81 | # to look up builders in this registry.
82 | builder_registry = TreeBuilderRegistry()
83 |
84 | class TreeBuilder(object):
85 | """Turn a document into a Beautiful Soup object tree."""
86 |
87 | NAME = "[Unknown tree builder]"
88 | ALTERNATE_NAMES = []
89 | features = []
90 |
91 | is_xml = False
92 | picklable = False
93 | preserve_whitespace_tags = set()
94 | empty_element_tags = None # A tag will be considered an empty-element
95 | # tag when and only when it has no contents.
96 |
97 | # A value for these tag/attribute combinations is a space- or
98 | # comma-separated list of CDATA, rather than a single CDATA.
99 | cdata_list_attributes = {}
100 |
101 |
102 | def __init__(self):
103 | self.soup = None
104 |
105 | def reset(self):
106 | pass
107 |
108 | def can_be_empty_element(self, tag_name):
109 | """Might a tag with this name be an empty-element tag?
110 |
111 | The final markup may or may not actually present this tag as
112 | self-closing.
113 |
114 | For instance: an HTMLBuilder does not consider a <p> tag to be
115 | an empty-element tag (it's not in
116 | HTMLBuilder.empty_element_tags). This means an empty <p> tag
117 | will be presented as "<p></p>", not "<p/>".
118 |
119 | The default implementation has no opinion about which tags are
120 | empty-element tags, so a tag will be presented as an
121 | empty-element tag if and only if it has no contents.
122 | "<foo></foo>" will become "<foo/>", and "<foo>bar</foo>" will
123 | be left alone.
124 | """
125 | if self.empty_element_tags is None:
126 | return True
127 | return tag_name in self.empty_element_tags
128 |
129 | def feed(self, markup):
130 | raise NotImplementedError()
131 |
132 | def prepare_markup(self, markup, user_specified_encoding=None,
133 | document_declared_encoding=None):
134 | return markup, None, None, False
135 |
136 | def test_fragment_to_document(self, fragment):
137 | """Wrap an HTML fragment to make it look like a document.
138 |
139 | Different parsers do this differently. For instance, lxml
140 | introduces an empty <head> tag, and html5lib
141 | doesn't. Abstracting this away lets us write simple tests
142 | which run HTML fragments through the parser and compare the
143 | results against other HTML fragments.
144 |
145 | This method should not be used outside of tests.
146 | """
147 | return fragment
148 |
149 | def set_up_substitutions(self, tag):
150 | return False
151 |
152 | def _replace_cdata_list_attribute_values(self, tag_name, attrs):
153 | """Replaces class="foo bar" with class=["foo", "bar"]
154 |
155 | Modifies its input in place.
156 | """
157 | if not attrs:
158 | return attrs
159 | if self.cdata_list_attributes:
160 | universal = self.cdata_list_attributes.get('*', [])
161 | tag_specific = self.cdata_list_attributes.get(
162 | tag_name.lower(), None)
163 | for attr in attrs.keys():
164 | if attr in universal or (tag_specific and attr in tag_specific):
165 | # We have a "class"-type attribute whose string
166 | # value is a whitespace-separated list of
167 | # values. Split it into a list.
168 | value = attrs[attr]
169 | if isinstance(value, basestring):
170 | values = whitespace_re.split(value)
171 | else:
172 | # html5lib sometimes calls setAttributes twice
173 | # for the same tag when rearranging the parse
174 | # tree. On the second call the attribute value
175 | # here is already a list. If this happens,
176 | # leave the value alone rather than trying to
177 | # split it again.
178 | values = value
179 | attrs[attr] = values
180 | return attrs
181 |
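The effect of `_replace_cdata_list_attribute_values` is visible from the outside: multi-valued attributes such as `class` come back as lists, while ordinary attributes stay strings. A quick sketch, assuming `bs4` is importable:

```python
from bs4 import BeautifulSoup

# 'class' is listed under '*' in cdata_list_attributes, so its
# whitespace-separated value is split; 'id' is left as a string.
soup = BeautifulSoup('<p class="foo bar" id="x"></p>', 'html.parser')
print(soup.p['class'])  # ['foo', 'bar']
print(soup.p['id'])     # x
```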
182 | class SAXTreeBuilder(TreeBuilder):
183 | """A Beautiful Soup treebuilder that listens for SAX events."""
184 |
185 | def feed(self, markup):
186 | raise NotImplementedError()
187 |
188 | def close(self):
189 | pass
190 |
191 | def startElement(self, name, attrs):
192 | attrs = dict((key[1], value) for key, value in list(attrs.items()))
193 | #print "Start %s, %r" % (name, attrs)
194 | self.soup.handle_starttag(name, attrs)
195 |
196 | def endElement(self, name):
197 | #print "End %s" % name
198 | self.soup.handle_endtag(name)
199 |
200 | def startElementNS(self, nsTuple, nodeName, attrs):
201 | # Throw away (ns, nodeName) for now.
202 | self.startElement(nodeName, attrs)
203 |
204 | def endElementNS(self, nsTuple, nodeName):
205 | # Throw away (ns, nodeName) for now.
206 | self.endElement(nodeName)
207 | #handler.endElementNS((ns, node.nodeName), node.nodeName)
208 |
209 | def startPrefixMapping(self, prefix, nodeValue):
210 | # Ignore the prefix for now.
211 | pass
212 |
213 | def endPrefixMapping(self, prefix):
214 | # Ignore the prefix for now.
215 | # handler.endPrefixMapping(prefix)
216 | pass
217 |
218 | def characters(self, content):
219 | self.soup.handle_data(content)
220 |
221 | def startDocument(self):
222 | pass
223 |
224 | def endDocument(self):
225 | pass
226 |
227 |
228 | class HTMLTreeBuilder(TreeBuilder):
229 | """This TreeBuilder knows facts about HTML.
230 |
231 | Such as which tags are empty-element tags.
232 | """
233 |
234 | preserve_whitespace_tags = HTMLAwareEntitySubstitution.preserve_whitespace_tags
235 | empty_element_tags = set(['br' , 'hr', 'input', 'img', 'meta',
236 | 'spacer', 'link', 'frame', 'base'])
237 |
238 | # The HTML standard defines these attributes as containing a
239 | # space-separated list of values, not a single value. That is,
240 | # class="foo bar" means that the 'class' attribute has two values,
241 | # 'foo' and 'bar', not the single value 'foo bar'. When we
242 | # encounter one of these attributes, we will parse its value into
243 | # a list of values if possible. Upon output, the list will be
244 | # converted back into a string.
245 | cdata_list_attributes = {
246 | "*" : ['class', 'accesskey', 'dropzone'],
247 | "a" : ['rel', 'rev'],
248 | "link" : ['rel', 'rev'],
249 | "td" : ["headers"],
250 | "th" : ["headers"],
252 | "form" : ["accept-charset"],
253 | "object" : ["archive"],
254 |
255 | # These are HTML5 specific, as are *.accesskey and *.dropzone above.
256 | "area" : ["rel"],
257 | "icon" : ["sizes"],
258 | "iframe" : ["sandbox"],
259 | "output" : ["for"],
260 | }
261 |
262 | def set_up_substitutions(self, tag):
263 | # We are only interested in <meta> tags
264 | if tag.name != 'meta':
265 | return False
266 |
267 | http_equiv = tag.get('http-equiv')
268 | content = tag.get('content')
269 | charset = tag.get('charset')
270 |
271 | # We are interested in <meta> tags that say what encoding the
272 | # document was originally in. This means HTML 5-style <meta>
273 | # tags that provide the "charset" attribute. It also means
274 | # HTML 4-style <meta> tags that provide the "content"
275 | # attribute and have "http-equiv" set to "content-type".
276 | #
277 | # In both cases we will replace the value of the appropriate
278 | # attribute with a standin object that can take on any
279 | # encoding.
280 | meta_encoding = None
281 | if charset is not None:
282 | # HTML 5 style:
283 | # <meta charset="utf-8">
284 | meta_encoding = charset
285 | tag['charset'] = CharsetMetaAttributeValue(charset)
286 |
287 | elif (content is not None and http_equiv is not None
288 | and http_equiv.lower() == 'content-type'):
289 | # HTML 4 style:
290 | # <meta http-equiv="content-type" content="text/html; charset=utf-8">
291 | tag['content'] = ContentMetaAttributeValue(content)
292 |
293 | return (meta_encoding is not None)
294 |
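The stand-in objects installed by `set_up_substitutions` mean the declared charset tracks whatever encoding the document is later serialized to. A sketch of that behavior, assuming `bs4` is importable:

```python
from bs4 import BeautifulSoup

html = b'<html><head><meta charset="utf-8"/></head><body>hi</body></html>'
soup = BeautifulSoup(html, 'html.parser')

# The 'charset' attribute was replaced with a CharsetMetaAttributeValue,
# so re-encoding the tree rewrites the declared charset to match.
print(soup.encode('latin-1'))
```

The output contains `charset="latin-1"` rather than the original `utf-8`.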
295 | def register_treebuilders_from(module):
296 | """Copy TreeBuilders from the given module into this module."""
297 | # I'm fairly sure this is not the best way to do this.
298 | this_module = sys.modules['bs4.builder']
299 | for name in module.__all__:
300 | obj = getattr(module, name)
301 |
302 | if issubclass(obj, TreeBuilder):
303 | setattr(this_module, name, obj)
304 | this_module.__all__.append(name)
305 | # Register the builder while we're at it.
306 | this_module.builder_registry.register(obj)
307 |
308 | class ParserRejectedMarkup(Exception):
309 | pass
310 |
311 | # Builders are registered in reverse order of priority, so that custom
312 | # builder registrations will take precedence. In general, we want lxml
313 | # to take precedence over html5lib, because it's faster. And we only
314 | # want to use HTMLParser as a last resort.
315 | from . import _htmlparser
316 | register_treebuilders_from(_htmlparser)
317 | try:
318 | from . import _html5lib
319 | register_treebuilders_from(_html5lib)
320 | except ImportError:
321 | # They don't have html5lib installed.
322 | pass
323 | try:
324 | from . import _lxml
325 | register_treebuilders_from(_lxml)
326 | except ImportError:
327 | # They don't have lxml installed.
328 | pass
329 |
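After the registrations above, a feature list resolves to a concrete builder class through `builder_registry`. A minimal sketch, assuming `bs4` is importable (the `html.parser` builder is always registered, so it is safe to look up by name):

```python
from bs4.builder import builder_registry

# Look up the builder registered under the 'html.parser' feature.
# Because builders are registered in reverse priority order, a generic
# feature like 'html' would instead resolve to lxml when installed.
builder_class = builder_registry.lookup('html.parser')
print(builder_class.NAME)  # html.parser
```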
--------------------------------------------------------------------------------
/example/parallax_svg_tools/bs4/builder/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/parallax/svg-animation-tools/3cbad1696760c049ed66b7e0c8631357000dbdb6/example/parallax_svg_tools/bs4/builder/__init__.pyc
--------------------------------------------------------------------------------
/example/parallax_svg_tools/bs4/builder/_html5lib.py:
--------------------------------------------------------------------------------
1 | # Use of this source code is governed by a BSD-style license that can be
2 | # found in the LICENSE file.
3 |
4 | __all__ = [
5 | 'HTML5TreeBuilder',
6 | ]
7 |
8 | import warnings
9 | from bs4.builder import (
10 | PERMISSIVE,
11 | HTML,
12 | HTML_5,
13 | HTMLTreeBuilder,
14 | )
15 | from bs4.element import (
16 | NamespacedAttribute,
17 | whitespace_re,
18 | )
19 | import html5lib
20 | from html5lib.constants import namespaces
21 | from bs4.element import (
22 | Comment,
23 | Doctype,
24 | NavigableString,
25 | Tag,
26 | )
27 |
28 | try:
29 | # Pre-0.99999999
30 | from html5lib.treebuilders import _base as treebuilder_base
31 | new_html5lib = False
32 | except ImportError, e:
33 | # 0.99999999 and up
34 | from html5lib.treebuilders import base as treebuilder_base
35 | new_html5lib = True
36 |
37 | class HTML5TreeBuilder(HTMLTreeBuilder):
38 | """Use html5lib to build a tree."""
39 |
40 | NAME = "html5lib"
41 |
42 | features = [NAME, PERMISSIVE, HTML_5, HTML]
43 |
44 | def prepare_markup(self, markup, user_specified_encoding,
45 | document_declared_encoding=None, exclude_encodings=None):
46 | # Store the user-specified encoding for use later on.
47 | self.user_specified_encoding = user_specified_encoding
48 |
49 | # document_declared_encoding and exclude_encodings aren't used
50 | # ATM because the html5lib TreeBuilder doesn't use
51 | # UnicodeDammit.
52 | if exclude_encodings:
53 | warnings.warn("You provided a value for exclude_encoding, but the html5lib tree builder doesn't support exclude_encoding.")
54 | yield (markup, None, None, False)
55 |
56 | # These methods are defined by Beautiful Soup.
57 | def feed(self, markup):
58 | if self.soup.parse_only is not None:
59 | warnings.warn("You provided a value for parse_only, but the html5lib tree builder doesn't support parse_only. The entire document will be parsed.")
60 | parser = html5lib.HTMLParser(tree=self.create_treebuilder)
61 |
62 | extra_kwargs = dict()
63 | if not isinstance(markup, unicode):
64 | if new_html5lib:
65 | extra_kwargs['override_encoding'] = self.user_specified_encoding
66 | else:
67 | extra_kwargs['encoding'] = self.user_specified_encoding
68 | doc = parser.parse(markup, **extra_kwargs)
69 |
70 | # Set the character encoding detected by the tokenizer.
71 | if isinstance(markup, unicode):
72 | # We need to special-case this because html5lib sets
73 | # charEncoding to UTF-8 if it gets Unicode input.
74 | doc.original_encoding = None
75 | else:
76 | original_encoding = parser.tokenizer.stream.charEncoding[0]
77 | if not isinstance(original_encoding, basestring):
78 | # In 0.99999999 and up, the encoding is an html5lib
79 | # Encoding object. We want to use a string for compatibility
80 | # with other tree builders.
81 | original_encoding = original_encoding.name
82 | doc.original_encoding = original_encoding
83 |
84 | def create_treebuilder(self, namespaceHTMLElements):
85 | self.underlying_builder = TreeBuilderForHtml5lib(
86 | self.soup, namespaceHTMLElements)
87 | return self.underlying_builder
88 |
89 | def test_fragment_to_document(self, fragment):
90 | """See `TreeBuilder`."""
91 | return u'<html><head></head><body>%s</body></html>' % fragment
92 |
93 |
94 | class TreeBuilderForHtml5lib(treebuilder_base.TreeBuilder):
95 |
96 | def __init__(self, soup, namespaceHTMLElements):
97 | self.soup = soup
98 | super(TreeBuilderForHtml5lib, self).__init__(namespaceHTMLElements)
99 |
100 | def documentClass(self):
101 | self.soup.reset()
102 | return Element(self.soup, self.soup, None)
103 |
104 | def insertDoctype(self, token):
105 | name = token["name"]
106 | publicId = token["publicId"]
107 | systemId = token["systemId"]
108 |
109 | doctype = Doctype.for_name_and_ids(name, publicId, systemId)
110 | self.soup.object_was_parsed(doctype)
111 |
112 | def elementClass(self, name, namespace):
113 | tag = self.soup.new_tag(name, namespace)
114 | return Element(tag, self.soup, namespace)
115 |
116 | def commentClass(self, data):
117 | return TextNode(Comment(data), self.soup)
118 |
119 | def fragmentClass(self):
120 | self.soup = BeautifulSoup("")
121 | self.soup.name = "[document_fragment]"
122 | return Element(self.soup, self.soup, None)
123 |
124 | def appendChild(self, node):
125 | # XXX This code is not covered by the BS4 tests.
126 | self.soup.append(node.element)
127 |
128 | def getDocument(self):
129 | return self.soup
130 |
131 | def getFragment(self):
132 | return treebuilder_base.TreeBuilder.getFragment(self).element
133 |
134 | class AttrList(object):
135 | def __init__(self, element):
136 | self.element = element
137 | self.attrs = dict(self.element.attrs)
138 | def __iter__(self):
139 | return list(self.attrs.items()).__iter__()
140 | def __setitem__(self, name, value):
141 | # If this attribute is a multi-valued attribute for this element,
142 | # turn its value into a list.
143 | list_attr = HTML5TreeBuilder.cdata_list_attributes
144 | if (name in list_attr['*']
145 | or (self.element.name in list_attr
146 | and name in list_attr[self.element.name])):
147 | # A node that is being cloned may have already undergone
148 | # this procedure.
149 | if not isinstance(value, list):
150 | value = whitespace_re.split(value)
151 | self.element[name] = value
152 | def items(self):
153 | return list(self.attrs.items())
154 | def keys(self):
155 | return list(self.attrs.keys())
156 | def __len__(self):
157 | return len(self.attrs)
158 | def __getitem__(self, name):
159 | return self.attrs[name]
160 | def __contains__(self, name):
161 | return name in list(self.attrs.keys())
162 |
163 |
164 | class Element(treebuilder_base.Node):
165 | def __init__(self, element, soup, namespace):
166 | treebuilder_base.Node.__init__(self, element.name)
167 | self.element = element
168 | self.soup = soup
169 | self.namespace = namespace
170 |
171 | def appendChild(self, node):
172 | string_child = child = None
173 | if isinstance(node, basestring):
174 | # Some other piece of code decided to pass in a string
175 | # instead of creating a TextElement object to contain the
176 | # string.
177 | string_child = child = node
178 | elif isinstance(node, Tag):
179 | # Some other piece of code decided to pass in a Tag
180 | # instead of creating an Element object to contain the
181 | # Tag.
182 | child = node
183 | elif node.element.__class__ == NavigableString:
184 | string_child = child = node.element
185 | else:
186 | child = node.element
187 |
188 | if not isinstance(child, basestring) and child.parent is not None:
189 | node.element.extract()
190 |
191 | if (string_child and self.element.contents
192 | and self.element.contents[-1].__class__ == NavigableString):
193 | # We are appending a string onto another string.
194 | # TODO This has O(n^2) performance, for input like
195 | # "a</a>a</a>a</a>..."
196 | old_element = self.element.contents[-1]
197 | new_element = self.soup.new_string(old_element + string_child)
198 | old_element.replace_with(new_element)
199 | self.soup._most_recent_element = new_element
200 | else:
201 | if isinstance(node, basestring):
202 | # Create a brand new NavigableString from this string.
203 | child = self.soup.new_string(node)
204 |
205 | # Tell Beautiful Soup to act as if it parsed this element
206 | # immediately after the parent's last descendant. (Or
207 | # immediately after the parent, if it has no children.)
208 | if self.element.contents:
209 | most_recent_element = self.element._last_descendant(False)
210 | elif self.element.next_element is not None:
211 | # Something from further ahead in the parse tree is
212 | # being inserted into this earlier element. This is
213 | # very annoying because it means an expensive search
214 | # for the last element in the tree.
215 | most_recent_element = self.soup._last_descendant()
216 | else:
217 | most_recent_element = self.element
218 |
219 | self.soup.object_was_parsed(
220 | child, parent=self.element,
221 | most_recent_element=most_recent_element)
222 |
223 | def getAttributes(self):
224 | return AttrList(self.element)
225 |
226 | def setAttributes(self, attributes):
227 |
228 | if attributes is not None and len(attributes) > 0:
229 |
230 | converted_attributes = []
231 | for name, value in list(attributes.items()):
232 | if isinstance(name, tuple):
233 | new_name = NamespacedAttribute(*name)
234 | del attributes[name]
235 | attributes[new_name] = value
236 |
237 | self.soup.builder._replace_cdata_list_attribute_values(
238 | self.name, attributes)
239 | for name, value in attributes.items():
240 | self.element[name] = value
241 |
242 | # The attributes may contain variables that need substitution.
243 | # Call set_up_substitutions manually.
244 | #
245 | # The Tag constructor called this method when the Tag was created,
246 | # but we just set/changed the attributes, so call it again.
247 | self.soup.builder.set_up_substitutions(self.element)
248 | attributes = property(getAttributes, setAttributes)
249 |
250 | def insertText(self, data, insertBefore=None):
251 | if insertBefore:
252 | text = TextNode(self.soup.new_string(data), self.soup)
253 | self.insertBefore(text, insertBefore)
254 | else:
255 | self.appendChild(data)
256 |
257 | def insertBefore(self, node, refNode):
258 | index = self.element.index(refNode.element)
259 | if (node.element.__class__ == NavigableString and self.element.contents
260 | and self.element.contents[index-1].__class__ == NavigableString):
261 | # (See comments in appendChild)
262 | old_node = self.element.contents[index-1]
263 | new_str = self.soup.new_string(old_node + node.element)
264 | old_node.replace_with(new_str)
265 | else:
266 | self.element.insert(index, node.element)
267 | node.parent = self
268 |
269 | def removeChild(self, node):
270 | node.element.extract()
271 |
272 | def reparentChildren(self, new_parent):
273 | """Move all of this tag's children into another tag."""
274 | # print "MOVE", self.element.contents
275 | # print "FROM", self.element
276 | # print "TO", new_parent.element
277 | element = self.element
278 | new_parent_element = new_parent.element
279 | # Determine what this tag's next_element will be once all the children
280 | # are removed.
281 | final_next_element = element.next_sibling
282 |
283 | new_parents_last_descendant = new_parent_element._last_descendant(False, False)
284 | if len(new_parent_element.contents) > 0:
285 | # The new parent already contains children. We will be
286 | # appending this tag's children to the end.
287 | new_parents_last_child = new_parent_element.contents[-1]
288 | new_parents_last_descendant_next_element = new_parents_last_descendant.next_element
289 | else:
290 | # The new parent contains no children.
291 | new_parents_last_child = None
292 | new_parents_last_descendant_next_element = new_parent_element.next_element
293 |
294 | to_append = element.contents
295 | append_after = new_parent_element.contents
296 | if len(to_append) > 0:
297 | # Set the first child's previous_element and previous_sibling
298 | # to elements within the new parent
299 | first_child = to_append[0]
300 | if new_parents_last_descendant:
301 | first_child.previous_element = new_parents_last_descendant
302 | else:
303 | first_child.previous_element = new_parent_element
304 | first_child.previous_sibling = new_parents_last_child
305 | if new_parents_last_descendant:
306 | new_parents_last_descendant.next_element = first_child
307 | else:
308 | new_parent_element.next_element = first_child
309 | if new_parents_last_child:
310 | new_parents_last_child.next_sibling = first_child
311 |
312 | # Fix the last child's next_element and next_sibling
313 | last_child = to_append[-1]
314 | last_child.next_element = new_parents_last_descendant_next_element
315 | if new_parents_last_descendant_next_element:
316 | new_parents_last_descendant_next_element.previous_element = last_child
317 | last_child.next_sibling = None
318 |
319 | for child in to_append:
320 | child.parent = new_parent_element
321 | new_parent_element.contents.append(child)
322 |
323 | # Now that this element has no children, change its .next_element.
324 | element.contents = []
325 | element.next_element = final_next_element
326 |
327 | # print "DONE WITH MOVE"
328 | # print "FROM", self.element
329 | # print "TO", new_parent_element
330 |
331 | def cloneNode(self):
332 | tag = self.soup.new_tag(self.element.name, self.namespace)
333 | node = Element(tag, self.soup, self.namespace)
334 | for key,value in self.attributes:
335 | node.attributes[key] = value
336 | return node
337 |
338 | def hasContent(self):
339 | return self.element.contents
340 |
341 | def getNameTuple(self):
342 | if self.namespace == None:
343 | return namespaces["html"], self.name
344 | else:
345 | return self.namespace, self.name
346 |
347 | nameTuple = property(getNameTuple)
348 |
349 | class TextNode(Element):
350 | def __init__(self, element, soup):
351 | treebuilder_base.Node.__init__(self, None)
352 | self.element = element
353 | self.soup = soup
354 |
355 | def cloneNode(self):
356 | raise NotImplementedError
357 |
--------------------------------------------------------------------------------
/example/parallax_svg_tools/bs4/builder/_html5lib.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/parallax/svg-animation-tools/3cbad1696760c049ed66b7e0c8631357000dbdb6/example/parallax_svg_tools/bs4/builder/_html5lib.pyc
--------------------------------------------------------------------------------
/example/parallax_svg_tools/bs4/builder/_htmlparser.py:
--------------------------------------------------------------------------------
1 | """Use the HTMLParser library to parse HTML files that aren't too bad."""
2 |
3 | # Use of this source code is governed by a BSD-style license that can be
4 | # found in the LICENSE file.
5 |
6 | __all__ = [
7 | 'HTMLParserTreeBuilder',
8 | ]
9 |
10 | from HTMLParser import HTMLParser
11 |
12 | try:
13 | from HTMLParser import HTMLParseError
14 | except ImportError, e:
15 | # HTMLParseError is removed in Python 3.5. Since it can never be
16 | # thrown in 3.5, we can just define our own class as a placeholder.
17 | class HTMLParseError(Exception):
18 | pass
19 |
20 | import sys
21 | import warnings
22 |
23 | # Starting in Python 3.2, the HTMLParser constructor takes a 'strict'
24 | # argument, which we'd like to set to False. Unfortunately,
25 | # http://bugs.python.org/issue13273 makes strict=True a better bet
26 | # before Python 3.2.3.
27 | #
28 | # At the end of this file, we monkeypatch HTMLParser so that
29 | # strict=True works well on Python 3.2.2.
30 | major, minor, release = sys.version_info[:3]
31 | CONSTRUCTOR_TAKES_STRICT = major == 3 and minor == 2 and release >= 3
32 | CONSTRUCTOR_STRICT_IS_DEPRECATED = major == 3 and minor == 3
33 | CONSTRUCTOR_TAKES_CONVERT_CHARREFS = major == 3 and minor >= 4
34 |
35 |
36 | from bs4.element import (
37 | CData,
38 | Comment,
39 | Declaration,
40 | Doctype,
41 | ProcessingInstruction,
42 | )
43 | from bs4.dammit import EntitySubstitution, UnicodeDammit
44 |
45 | from bs4.builder import (
46 | HTML,
47 | HTMLTreeBuilder,
48 | STRICT,
49 | )
50 |
51 |
52 | HTMLPARSER = 'html.parser'
53 |
54 | class BeautifulSoupHTMLParser(HTMLParser):
55 | def handle_starttag(self, name, attrs):
56 | # XXX namespace
57 | attr_dict = {}
58 | for key, value in attrs:
59 | # Change None attribute values to the empty string
60 | # for consistency with the other tree builders.
61 | if value is None:
62 | value = ''
63 | attr_dict[key] = value
64 | attrvalue = '""'
65 | self.soup.handle_starttag(name, None, None, attr_dict)
66 |
67 | def handle_endtag(self, name):
68 | self.soup.handle_endtag(name)
69 |
70 | def handle_data(self, data):
71 | self.soup.handle_data(data)
72 |
73 | def handle_charref(self, name):
74 | # XXX workaround for a bug in HTMLParser. Remove this once
75 | # it's fixed in all supported versions.
76 | # http://bugs.python.org/issue13633
77 | if name.startswith('x'):
78 | real_name = int(name.lstrip('x'), 16)
79 | elif name.startswith('X'):
80 | real_name = int(name.lstrip('X'), 16)
81 | else:
82 | real_name = int(name)
83 |
84 | try:
85 | data = unichr(real_name)
86 | except (ValueError, OverflowError), e:
87 | data = u"\N{REPLACEMENT CHARACTER}"
88 |
89 | self.handle_data(data)
90 |
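The `handle_charref` workaround above decodes `&#NNN;` and `&#xHH;` references by hand instead of relying on the buggy stdlib path. A self-contained Python 3 sketch of the same decoding logic, using only the stdlib parser (the class name is hypothetical):

```python
from html.parser import HTMLParser

class CharrefCollector(HTMLParser):
    """Decode numeric character references manually, mirroring the
    hex/decimal branching in BeautifulSoupHTMLParser.handle_charref."""
    def __init__(self):
        # convert_charrefs=False so handle_charref actually fires.
        super().__init__(convert_charrefs=False)
        self.chars = []

    def handle_charref(self, name):
        # 'x41'/'X41' are hex references; bare digits are decimal.
        base = 16 if name[:1] in ('x', 'X') else 10
        self.chars.append(chr(int(name.lstrip('xX'), base)))

p = CharrefCollector()
p.feed('&#65;&#x42;')
print(''.join(p.chars))  # AB
```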
91 | def handle_entityref(self, name):
92 | character = EntitySubstitution.HTML_ENTITY_TO_CHARACTER.get(name)
93 | if character is not None:
94 | data = character
95 | else:
96 | data = "&%s;" % name
97 | self.handle_data(data)
98 |
99 | def handle_comment(self, data):
100 | self.soup.endData()
101 | self.soup.handle_data(data)
102 | self.soup.endData(Comment)
103 |
104 | def handle_decl(self, data):
105 | self.soup.endData()
106 | if data.startswith("DOCTYPE "):
107 | data = data[len("DOCTYPE "):]
108 | elif data == 'DOCTYPE':
109 | # i.e. "<!DOCTYPE>"
110 | data = ''
111 | self.soup.handle_data(data)
112 | self.soup.endData(Doctype)
113 |
114 | def unknown_decl(self, data):
115 | if data.upper().startswith('CDATA['):
116 | cls = CData
117 | data = data[len('CDATA['):]
118 | else:
119 | cls = Declaration
120 | self.soup.endData()
121 | self.soup.handle_data(data)
122 | self.soup.endData(cls)
123 |
124 | def handle_pi(self, data):
125 | self.soup.endData()
126 | self.soup.handle_data(data)
127 | self.soup.endData(ProcessingInstruction)
128 |
129 |
130 | class HTMLParserTreeBuilder(HTMLTreeBuilder):
131 |
132 | is_xml = False
133 | picklable = True
134 | NAME = HTMLPARSER
135 | features = [NAME, HTML, STRICT]
136 |
137 | def __init__(self, *args, **kwargs):
138 | if CONSTRUCTOR_TAKES_STRICT and not CONSTRUCTOR_STRICT_IS_DEPRECATED:
139 | kwargs['strict'] = False
140 | if CONSTRUCTOR_TAKES_CONVERT_CHARREFS:
141 | kwargs['convert_charrefs'] = False
142 | self.parser_args = (args, kwargs)
143 |
144 | def prepare_markup(self, markup, user_specified_encoding=None,
145 | document_declared_encoding=None, exclude_encodings=None):
146 | """
147 | :return: A 4-tuple (markup, original encoding, encoding
148 | declared within markup, whether any characters had to be
149 | replaced with REPLACEMENT CHARACTER).
150 | """
151 | if isinstance(markup, unicode):
152 | yield (markup, None, None, False)
153 | return
154 |
155 | try_encodings = [user_specified_encoding, document_declared_encoding]
156 | dammit = UnicodeDammit(markup, try_encodings, is_html=True,
157 | exclude_encodings=exclude_encodings)
158 | yield (dammit.markup, dammit.original_encoding,
159 | dammit.declared_html_encoding,
160 | dammit.contains_replacement_characters)
161 |
162 | def feed(self, markup):
163 | args, kwargs = self.parser_args
164 | parser = BeautifulSoupHTMLParser(*args, **kwargs)
165 | parser.soup = self.soup
166 | try:
167 | parser.feed(markup)
168 | except HTMLParseError, e:
169 | warnings.warn(RuntimeWarning(
170 | "Python's built-in HTMLParser cannot parse the given document. This is not a bug in Beautiful Soup. The best solution is to install an external parser (lxml or html5lib), and use Beautiful Soup with that parser. See http://www.crummy.com/software/BeautifulSoup/bs4/doc/#installing-a-parser for help."))
171 | raise e
172 |
173 | # Patch 3.2 versions of HTMLParser earlier than 3.2.3 to use some
174 | # 3.2.3 code. This ensures they don't treat markup like <div class=inactive!-- --> as a
175 | # string.
176 | #
177 | # XXX This code can be removed once most Python 3 users are on 3.2.3.
178 | if major == 3 and minor == 2 and not CONSTRUCTOR_TAKES_STRICT:
179 | import re
180 | attrfind_tolerant = re.compile(
181 | r'\s*((?<=[\'"\s])[^\s/>][^\s/=>]*)(\s*=+\s*'
182 | r'(\'[^\']*\'|"[^"]*"|(?![\'"])[^>\s]*))?')
183 | HTMLParserTreeBuilder.attrfind_tolerant = attrfind_tolerant
184 |
185 | locatestarttagend = re.compile(r"""
186 | <[a-zA-Z][-.a-zA-Z0-9:_]* # tag name
187 | (?:\s+ # whitespace before attribute name
188 | (?:[a-zA-Z_][-.:a-zA-Z0-9_]* # attribute name
189 | (?:\s*=\s* # value indicator
190 | (?:'[^']*' # LITA-enclosed value
191 | |\"[^\"]*\" # LIT-enclosed value
192 | |[^'\">\s]+ # bare value
193 | )
194 | )?
195 | )
196 | )*
197 | \s* # trailing whitespace
198 | """, re.VERBOSE)
199 | BeautifulSoupHTMLParser.locatestarttagend = locatestarttagend
200 |
201 | from html.parser import tagfind, attrfind
202 |
203 | def parse_starttag(self, i):
204 | self.__starttag_text = None
205 | endpos = self.check_for_whole_start_tag(i)
206 | if endpos < 0:
207 | return endpos
208 | rawdata = self.rawdata
209 | self.__starttag_text = rawdata[i:endpos]
210 |
211 | # Now parse the data between i+1 and j into a tag and attrs
212 | attrs = []
213 | match = tagfind.match(rawdata, i+1)
214 | assert match, 'unexpected call to parse_starttag()'
215 | k = match.end()
216 | self.lasttag = tag = rawdata[i+1:k].lower()
217 | while k < endpos:
218 | if self.strict:
219 | m = attrfind.match(rawdata, k)
220 | else:
221 | m = attrfind_tolerant.match(rawdata, k)
222 | if not m:
223 | break
224 | attrname, rest, attrvalue = m.group(1, 2, 3)
225 | if not rest:
226 | attrvalue = None
227 | elif attrvalue[:1] == '\'' == attrvalue[-1:] or \
228 | attrvalue[:1] == '"' == attrvalue[-1:]:
229 | attrvalue = attrvalue[1:-1]
230 | if attrvalue:
231 | attrvalue = self.unescape(attrvalue)
232 | attrs.append((attrname.lower(), attrvalue))
233 | k = m.end()
234 |
235 | end = rawdata[k:endpos].strip()
236 | if end not in (">", "/>"):
237 | lineno, offset = self.getpos()
238 | if "\n" in self.__starttag_text:
239 | lineno = lineno + self.__starttag_text.count("\n")
240 | offset = len(self.__starttag_text) \
241 | - self.__starttag_text.rfind("\n")
242 | else:
243 | offset = offset + len(self.__starttag_text)
244 | if self.strict:
245 | self.error("junk characters in start tag: %r"
246 | % (rawdata[k:endpos][:20],))
247 | self.handle_data(rawdata[i:endpos])
248 | return endpos
249 | if end.endswith('/>'):
250 | # XHTML-style empty tag:
251 | self.handle_startendtag(tag, attrs)
252 | else:
253 | self.handle_starttag(tag, attrs)
254 | if tag in self.CDATA_CONTENT_ELEMENTS:
255 | self.set_cdata_mode(tag)
256 | return endpos
257 |
258 | def set_cdata_mode(self, elem):
259 | self.cdata_elem = elem.lower()
260 | self.interesting = re.compile(r'</\s*%s\s*>' % self.cdata_elem, re.I)
261 |
262 | BeautifulSoupHTMLParser.parse_starttag = parse_starttag
263 | BeautifulSoupHTMLParser.set_cdata_mode = set_cdata_mode
264 |
265 | CONSTRUCTOR_TAKES_STRICT = True
266 |
--------------------------------------------------------------------------------
/example/parallax_svg_tools/bs4/builder/_htmlparser.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/parallax/svg-animation-tools/3cbad1696760c049ed66b7e0c8631357000dbdb6/example/parallax_svg_tools/bs4/builder/_htmlparser.pyc
--------------------------------------------------------------------------------
/example/parallax_svg_tools/bs4/builder/_lxml.py:
--------------------------------------------------------------------------------
1 | # Use of this source code is governed by a BSD-style license that can be
2 | # found in the LICENSE file.
3 | __all__ = [
4 | 'LXMLTreeBuilderForXML',
5 | 'LXMLTreeBuilder',
6 | ]
7 |
8 | from io import BytesIO
9 | from StringIO import StringIO
10 | import collections
11 | from lxml import etree
12 | from bs4.element import (
13 | Comment,
14 | Doctype,
15 | NamespacedAttribute,
16 | ProcessingInstruction,
17 | XMLProcessingInstruction,
18 | )
19 | from bs4.builder import (
20 | FAST,
21 | HTML,
22 | HTMLTreeBuilder,
23 | PERMISSIVE,
24 | ParserRejectedMarkup,
25 | TreeBuilder,
26 | XML)
27 | from bs4.dammit import EncodingDetector
28 |
29 | LXML = 'lxml'
30 |
31 | class LXMLTreeBuilderForXML(TreeBuilder):
32 | DEFAULT_PARSER_CLASS = etree.XMLParser
33 |
34 | is_xml = True
35 | processing_instruction_class = XMLProcessingInstruction
36 |
37 | NAME = "lxml-xml"
38 | ALTERNATE_NAMES = ["xml"]
39 |
40 | # Well, it's permissive by XML parser standards.
41 | features = [NAME, LXML, XML, FAST, PERMISSIVE]
42 |
43 | CHUNK_SIZE = 512
44 |
45 | # This namespace mapping is specified in the XML Namespace
46 | # standard.
47 | DEFAULT_NSMAPS = {'http://www.w3.org/XML/1998/namespace' : "xml"}
48 |
49 | def default_parser(self, encoding):
50 | # This can either return a parser object or a class, which
51 | # will be instantiated with default arguments.
52 | if self._default_parser is not None:
53 | return self._default_parser
54 | return etree.XMLParser(
55 | target=self, strip_cdata=False, recover=True, encoding=encoding)
56 |
57 | def parser_for(self, encoding):
58 | # Use the default parser.
59 | parser = self.default_parser(encoding)
60 |
61 | if isinstance(parser, collections.Callable):
62 | # Instantiate the parser with default arguments
63 | parser = parser(target=self, strip_cdata=False, encoding=encoding)
64 | return parser
65 |
66 | def __init__(self, parser=None, empty_element_tags=None):
67 | # TODO: Issue a warning if parser is present but not a
68 | # callable, since that means there's no way to create new
69 | # parsers for different encodings.
70 | self._default_parser = parser
71 | if empty_element_tags is not None:
72 | self.empty_element_tags = set(empty_element_tags)
73 | self.soup = None
74 | self.nsmaps = [self.DEFAULT_NSMAPS]
75 |
76 | def _getNsTag(self, tag):
77 | # Split the namespace URL out of a fully-qualified lxml tag
78 | # name. Copied from lxml's src/lxml/sax.py.
79 | if tag[0] == '{':
80 | return tuple(tag[1:].split('}', 1))
81 | else:
82 | return (None, tag)
83 |
84 | def prepare_markup(self, markup, user_specified_encoding=None,
85 | exclude_encodings=None,
86 | document_declared_encoding=None):
87 | """
88 | :yield: A series of 4-tuples.
89 | (markup, encoding, declared encoding,
90 | has undergone character replacement)
91 |
92 | Each 4-tuple represents a strategy for parsing the document.
93 | """
94 | # Instead of using UnicodeDammit to convert the bytestring to
95 | # Unicode using different encodings, use EncodingDetector to
96 | # iterate over the encodings, and tell lxml to try to parse
97 | # the document as each one in turn.
98 | is_html = not self.is_xml
99 | if is_html:
100 | self.processing_instruction_class = ProcessingInstruction
101 | else:
102 | self.processing_instruction_class = XMLProcessingInstruction
103 |
104 | if isinstance(markup, unicode):
105 | # We were given Unicode. Maybe lxml can parse Unicode on
106 | # this system?
107 | yield markup, None, document_declared_encoding, False
108 |
109 | if isinstance(markup, unicode):
110 | # No, apparently not. Convert the Unicode to UTF-8 and
111 | # tell lxml to parse it as UTF-8.
112 | yield (markup.encode("utf8"), "utf8",
113 | document_declared_encoding, False)
114 |
115 | try_encodings = [user_specified_encoding, document_declared_encoding]
116 | detector = EncodingDetector(
117 | markup, try_encodings, is_html, exclude_encodings)
118 | for encoding in detector.encodings:
119 | yield (detector.markup, encoding, document_declared_encoding, False)
120 |
121 | def feed(self, markup):
122 | if isinstance(markup, bytes):
123 | markup = BytesIO(markup)
124 | elif isinstance(markup, unicode):
125 | markup = StringIO(markup)
126 |
127 | # Call feed() at least once, even if the markup is empty,
128 | # or the parser won't be initialized.
129 | data = markup.read(self.CHUNK_SIZE)
130 | try:
131 | self.parser = self.parser_for(self.soup.original_encoding)
132 | self.parser.feed(data)
133 | while len(data) != 0:
134 | # Now call feed() on the rest of the data, chunk by chunk.
135 | data = markup.read(self.CHUNK_SIZE)
136 | if len(data) != 0:
137 | self.parser.feed(data)
138 | self.parser.close()
139 | except (UnicodeDecodeError, LookupError, etree.ParserError), e:
140 | raise ParserRejectedMarkup(str(e))
141 |
142 | def close(self):
143 | self.nsmaps = [self.DEFAULT_NSMAPS]
144 |
145 | def start(self, name, attrs, nsmap={}):
146 | # Make sure attrs is a mutable dict--lxml may send an immutable dictproxy.
147 | attrs = dict(attrs)
148 | nsprefix = None
149 | # Invert each namespace map as it comes in.
150 | if len(self.nsmaps) > 1:
151 | # There are no new namespaces for this tag, but
152 | # non-default namespaces are in play, so we need a
153 | # separate tag stack to know when they end.
154 | self.nsmaps.append(None)
155 | elif len(nsmap) > 0:
156 | # A new namespace mapping has come into play.
157 | inverted_nsmap = dict((value, key) for key, value in nsmap.items())
158 | self.nsmaps.append(inverted_nsmap)
159 | # Also treat the namespace mapping as a set of attributes on the
160 | # tag, so we can recreate it later.
161 | attrs = attrs.copy()
162 | for prefix, namespace in nsmap.items():
163 | attribute = NamespacedAttribute(
164 | "xmlns", prefix, "http://www.w3.org/2000/xmlns/")
165 | attrs[attribute] = namespace
166 |
167 | # Namespaces are in play. Find any attributes that came in
168 | # from lxml with namespaces attached to their names, and
169 |         # turn them into NamespacedAttribute objects.
170 | new_attrs = {}
171 | for attr, value in attrs.items():
172 | namespace, attr = self._getNsTag(attr)
173 | if namespace is None:
174 | new_attrs[attr] = value
175 | else:
176 | nsprefix = self._prefix_for_namespace(namespace)
177 | attr = NamespacedAttribute(nsprefix, attr, namespace)
178 | new_attrs[attr] = value
179 | attrs = new_attrs
180 |
181 | namespace, name = self._getNsTag(name)
182 | nsprefix = self._prefix_for_namespace(namespace)
183 | self.soup.handle_starttag(name, namespace, nsprefix, attrs)
184 |
185 | def _prefix_for_namespace(self, namespace):
186 | """Find the currently active prefix for the given namespace."""
187 | if namespace is None:
188 | return None
189 | for inverted_nsmap in reversed(self.nsmaps):
190 | if inverted_nsmap is not None and namespace in inverted_nsmap:
191 | return inverted_nsmap[namespace]
192 | return None
193 |
194 | def end(self, name):
195 | self.soup.endData()
196 | completed_tag = self.soup.tagStack[-1]
197 | namespace, name = self._getNsTag(name)
198 | nsprefix = None
199 | if namespace is not None:
200 | for inverted_nsmap in reversed(self.nsmaps):
201 | if inverted_nsmap is not None and namespace in inverted_nsmap:
202 | nsprefix = inverted_nsmap[namespace]
203 | break
204 | self.soup.handle_endtag(name, nsprefix)
205 | if len(self.nsmaps) > 1:
206 | # This tag, or one of its parents, introduced a namespace
207 | # mapping, so pop it off the stack.
208 | self.nsmaps.pop()
209 |
210 | def pi(self, target, data):
211 | self.soup.endData()
212 | self.soup.handle_data(target + ' ' + data)
213 | self.soup.endData(self.processing_instruction_class)
214 |
215 | def data(self, content):
216 | self.soup.handle_data(content)
217 |
218 | def doctype(self, name, pubid, system):
219 | self.soup.endData()
220 | doctype = Doctype.for_name_and_ids(name, pubid, system)
221 | self.soup.object_was_parsed(doctype)
222 |
223 | def comment(self, content):
224 | "Handle comments as Comment objects."
225 | self.soup.endData()
226 | self.soup.handle_data(content)
227 | self.soup.endData(Comment)
228 |
229 | def test_fragment_to_document(self, fragment):
230 | """See `TreeBuilder`."""
231 | return u'\n%s' % fragment
232 |
233 |
234 | class LXMLTreeBuilder(HTMLTreeBuilder, LXMLTreeBuilderForXML):
235 |
236 | NAME = LXML
237 | ALTERNATE_NAMES = ["lxml-html"]
238 |
239 | features = ALTERNATE_NAMES + [NAME, HTML, FAST, PERMISSIVE]
240 | is_xml = False
241 | processing_instruction_class = ProcessingInstruction
242 |
243 | def default_parser(self, encoding):
244 | return etree.HTMLParser
245 |
246 | def feed(self, markup):
247 | encoding = self.soup.original_encoding
248 | try:
249 | self.parser = self.parser_for(encoding)
250 | self.parser.feed(markup)
251 | self.parser.close()
252 | except (UnicodeDecodeError, LookupError, etree.ParserError), e:
253 | raise ParserRejectedMarkup(str(e))
254 |
255 |
256 | def test_fragment_to_document(self, fragment):
257 | """See `TreeBuilder`."""
258 | return u'%s' % fragment
259 |
--------------------------------------------------------------------------------
/example/parallax_svg_tools/bs4/builder/_lxml.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/parallax/svg-animation-tools/3cbad1696760c049ed66b7e0c8631357000dbdb6/example/parallax_svg_tools/bs4/builder/_lxml.pyc
--------------------------------------------------------------------------------
/example/parallax_svg_tools/bs4/dammit.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/parallax/svg-animation-tools/3cbad1696760c049ed66b7e0c8631357000dbdb6/example/parallax_svg_tools/bs4/dammit.pyc
--------------------------------------------------------------------------------
/example/parallax_svg_tools/bs4/diagnose.py:
--------------------------------------------------------------------------------
1 | """Diagnostic functions, mainly for use when doing tech support."""
2 |
3 | # Use of this source code is governed by a BSD-style license that can be
4 | # found in the LICENSE file.
5 | __license__ = "MIT"
6 |
7 | import cProfile
8 | from StringIO import StringIO
9 | from HTMLParser import HTMLParser
10 | import bs4
11 | from bs4 import BeautifulSoup, __version__
12 | from bs4.builder import builder_registry
13 |
14 | import os
15 | import pstats
16 | import random
17 | import tempfile
18 | import time
19 | import traceback
20 | import sys
21 | import cProfile
22 |
23 | def diagnose(data):
24 | """Diagnostic suite for isolating common problems."""
25 | print "Diagnostic running on Beautiful Soup %s" % __version__
26 | print "Python version %s" % sys.version
27 |
28 | basic_parsers = ["html.parser", "html5lib", "lxml"]
29 | for name in basic_parsers:
30 | for builder in builder_registry.builders:
31 | if name in builder.features:
32 | break
33 | else:
34 | basic_parsers.remove(name)
35 | print (
36 | "I noticed that %s is not installed. Installing it may help." %
37 | name)
38 |
39 | if 'lxml' in basic_parsers:
40 | basic_parsers.append(["lxml", "xml"])
41 | try:
42 | from lxml import etree
43 | print "Found lxml version %s" % ".".join(map(str,etree.LXML_VERSION))
44 | except ImportError, e:
45 | print (
46 | "lxml is not installed or couldn't be imported.")
47 |
48 |
49 | if 'html5lib' in basic_parsers:
50 | try:
51 | import html5lib
52 | print "Found html5lib version %s" % html5lib.__version__
53 | except ImportError, e:
54 | print (
55 | "html5lib is not installed or couldn't be imported.")
56 |
57 | if hasattr(data, 'read'):
58 | data = data.read()
59 | elif os.path.exists(data):
60 | print '"%s" looks like a filename. Reading data from the file.' % data
61 | with open(data) as fp:
62 | data = fp.read()
63 | elif data.startswith("http:") or data.startswith("https:"):
64 | print '"%s" looks like a URL. Beautiful Soup is not an HTTP client.' % data
65 | print "You need to use some other library to get the document behind the URL, and feed that document to Beautiful Soup."
66 | return
67 | print
68 |
69 | for parser in basic_parsers:
70 | print "Trying to parse your markup with %s" % parser
71 | success = False
72 | try:
73 | soup = BeautifulSoup(data, parser)
74 | success = True
75 | except Exception, e:
76 | print "%s could not parse the markup." % parser
77 | traceback.print_exc()
78 | if success:
79 | print "Here's what %s did with the markup:" % parser
80 | print soup.prettify()
81 |
82 | print "-" * 80
83 |
84 | def lxml_trace(data, html=True, **kwargs):
85 | """Print out the lxml events that occur during parsing.
86 |
87 | This lets you see how lxml parses a document when no Beautiful
88 | Soup code is running.
89 | """
90 | from lxml import etree
91 | for event, element in etree.iterparse(StringIO(data), html=html, **kwargs):
92 | print("%s, %4s, %s" % (event, element.tag, element.text))
93 |
94 | class AnnouncingParser(HTMLParser):
95 | """Announces HTMLParser parse events, without doing anything else."""
96 |
97 | def _p(self, s):
98 | print(s)
99 |
100 | def handle_starttag(self, name, attrs):
101 | self._p("%s START" % name)
102 |
103 | def handle_endtag(self, name):
104 | self._p("%s END" % name)
105 |
106 | def handle_data(self, data):
107 | self._p("%s DATA" % data)
108 |
109 | def handle_charref(self, name):
110 | self._p("%s CHARREF" % name)
111 |
112 | def handle_entityref(self, name):
113 | self._p("%s ENTITYREF" % name)
114 |
115 | def handle_comment(self, data):
116 | self._p("%s COMMENT" % data)
117 |
118 | def handle_decl(self, data):
119 | self._p("%s DECL" % data)
120 |
121 | def unknown_decl(self, data):
122 | self._p("%s UNKNOWN-DECL" % data)
123 |
124 | def handle_pi(self, data):
125 | self._p("%s PI" % data)
126 |
127 | def htmlparser_trace(data):
128 | """Print out the HTMLParser events that occur during parsing.
129 |
130 | This lets you see how HTMLParser parses a document when no
131 | Beautiful Soup code is running.
132 | """
133 | parser = AnnouncingParser()
134 | parser.feed(data)
135 |
136 | _vowels = "aeiou"
137 | _consonants = "bcdfghjklmnpqrstvwxyz"
138 |
139 | def rword(length=5):
140 | "Generate a random word-like string."
141 | s = ''
142 | for i in range(length):
143 | if i % 2 == 0:
144 | t = _consonants
145 | else:
146 | t = _vowels
147 | s += random.choice(t)
148 | return s
149 |
150 | def rsentence(length=4):
151 | "Generate a random sentence-like string."
152 | return " ".join(rword(random.randint(4,9)) for i in range(length))
153 |
154 | def rdoc(num_elements=1000):
155 | """Randomly generate an invalid HTML document."""
156 | tag_names = ['p', 'div', 'span', 'i', 'b', 'script', 'table']
157 | elements = []
158 | for i in range(num_elements):
159 | choice = random.randint(0,3)
160 | if choice == 0:
161 | # New tag.
162 | tag_name = random.choice(tag_names)
163 | elements.append("<%s>" % tag_name)
164 | elif choice == 1:
165 | elements.append(rsentence(random.randint(1,4)))
166 | elif choice == 2:
167 | # Close a tag.
168 | tag_name = random.choice(tag_names)
169 |             elements.append("</%s>" % tag_name)
170 |     return "<html>" + "\n".join(elements) + "</html>"
171 |
172 | def benchmark_parsers(num_elements=100000):
173 | """Very basic head-to-head performance benchmark."""
174 | print "Comparative parser benchmark on Beautiful Soup %s" % __version__
175 | data = rdoc(num_elements)
176 | print "Generated a large invalid HTML document (%d bytes)." % len(data)
177 |
178 | for parser in ["lxml", ["lxml", "html"], "html5lib", "html.parser"]:
179 | success = False
180 | try:
181 | a = time.time()
182 | soup = BeautifulSoup(data, parser)
183 | b = time.time()
184 | success = True
185 | except Exception, e:
186 | print "%s could not parse the markup." % parser
187 | traceback.print_exc()
188 | if success:
189 | print "BS4+%s parsed the markup in %.2fs." % (parser, b-a)
190 |
191 | from lxml import etree
192 | a = time.time()
193 | etree.HTML(data)
194 | b = time.time()
195 | print "Raw lxml parsed the markup in %.2fs." % (b-a)
196 |
197 | import html5lib
198 | parser = html5lib.HTMLParser()
199 | a = time.time()
200 | parser.parse(data)
201 | b = time.time()
202 | print "Raw html5lib parsed the markup in %.2fs." % (b-a)
203 |
204 | def profile(num_elements=100000, parser="lxml"):
205 |
206 | filehandle = tempfile.NamedTemporaryFile()
207 | filename = filehandle.name
208 |
209 | data = rdoc(num_elements)
210 | vars = dict(bs4=bs4, data=data, parser=parser)
211 | cProfile.runctx('bs4.BeautifulSoup(data, parser)' , vars, vars, filename)
212 |
213 | stats = pstats.Stats(filename)
214 | # stats.strip_dirs()
215 | stats.sort_stats("cumulative")
216 | stats.print_stats('_html5lib|bs4', 50)
217 |
218 | if __name__ == '__main__':
219 | diagnose(sys.stdin.read())
220 |
--------------------------------------------------------------------------------
/example/parallax_svg_tools/bs4/element.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/parallax/svg-animation-tools/3cbad1696760c049ed66b7e0c8631357000dbdb6/example/parallax_svg_tools/bs4/element.pyc
--------------------------------------------------------------------------------
/example/parallax_svg_tools/run.py:
--------------------------------------------------------------------------------
1 | from svg import *
2 |
3 | compile_svg('animation.svg', 'processed_animation.svg',
4 | {
5 | 'process_layer_names': True,
6 | 'namespace': 'example'
7 | })
8 |
9 | inline_svg('animation.html', 'output/animation.html')
--------------------------------------------------------------------------------
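run.py above wires the two entry points together: compile_svg processes an Illustrator export (here with the 'namespace' option set to 'example'), and inline_svg splices source files into an HTML output. The namespacing compile_svg performs is done with plain string replacements inside parse_svg; the sketch below reproduces just that step in isolation (apply_namespace is a hypothetical helper written for illustration, not part of the module):

```python
# Minimal sketch of the id/url(#...) namespacing that parse_svg applies when
# a 'namespace' option is supplied. apply_namespace is a hypothetical helper
# for illustration; the real module performs these replacements inline.

def apply_namespace(svg_text, namespace):
    prefix = namespace + '-'  # parse_svg appends '-' to the namespace
    # Prefix every id attribute and every url(#...) reference so that two
    # SVGs inlined on the same page cannot collide.
    svg_text = svg_text.replace('id="', 'id="' + prefix)
    svg_text = svg_text.replace('url(#', 'url(#' + prefix)
    return svg_text

print(apply_namespace('<rect id="hero" fill="url(#grad)"/>', 'example'))
# -> <rect id="example-hero" fill="url(#example-grad)"/>
```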
/example/parallax_svg_tools/svg/__init__.py:
--------------------------------------------------------------------------------
  1 | # Super simple Illustrator SVG processor for animations. Uses the BeautifulSoup Python XML/HTML parsing library.
2 |
3 | import os
4 | import errno
5 | from bs4 import BeautifulSoup
6 |
7 | def create_file(path, mode):
8 | directory = os.path.dirname(path)
9 | if directory != '' and not os.path.exists(directory):
10 | try:
11 | os.makedirs(directory)
12 | except OSError as e:
13 | if e.errno != errno.EEXIST:
14 | raise
15 |
16 | file = open(path, mode)
17 | return file
18 |
19 | def parse_svg(path, namespace, options):
20 | #print(path)
21 | file = open(path,'r')
22 | file_string = file.read().decode('utf8')
 23 |     file.close()
24 |
25 | if namespace == None:
26 | namespace = ''
27 | else:
28 | namespace = namespace + '-'
29 |
30 | # BeautifulSoup can't parse attributes with dashes so we replace them with underscores instead
31 | file_string = file_string.replace('data-name', 'data_name')
32 |
 33 |     # Expand origin to data-svg-origin as it's a pain in the ass to type
34 | if 'expand_origin' in options and options['expand_origin'] == True:
35 | file_string = file_string.replace('origin=', 'data-svg-origin=')
36 |
37 | # Add namespaces to ids
38 | if namespace:
39 | file_string = file_string.replace('id="', 'id="' + namespace)
40 | file_string = file_string.replace('url(#', 'url(#' + namespace)
41 |
42 | svg = BeautifulSoup(file_string, 'html.parser')
43 |
44 | # namespace symbols
45 | symbol_elements = svg.select('symbol')
46 | for element in symbol_elements:
47 | del element['data_name']
48 |
49 | use_elements = svg.select('use')
50 | for element in use_elements:
51 | if namespace:
52 | href = element['xlink:href']
53 | element['xlink:href'] = href.replace('#', '#' + namespace)
54 |
55 | del element['id']
56 |
57 |
58 | # remove titles
59 | if 'title' in options and options['title'] == False:
60 | titles = svg.select('title')
61 | for t in titles: t.extract()
62 |
63 |
64 | foreign_tags_to_add = []
65 | if 'convert_svg_text_to_html' in options and options['convert_svg_text_to_html'] == True:
66 | text_elements = svg.select('[data_name="#TEXT"]')
67 | for element in text_elements:
68 |
69 | area = element.rect
70 | if not area:
71 | print('WARNING: Text areas require a rectangle to be in the same group as the text element')
72 | continue
73 |
74 | text_element = element.select('text')[0]
75 | if not text_element:
76 | print('WARNING: No text element found in text area')
77 | continue
78 |
79 | x = area['x']
80 | y = area['y']
81 | width = area['width']
82 | height = area['height']
83 |
84 | text_content = text_element.getText()
85 | text_tag = BeautifulSoup(text_content, 'html.parser')
86 |
87 | data_name = None
88 | if area.has_attr('data_name'): data_name = area['data_name']
89 | #print(data_name)
90 |
91 | area.extract()
92 | text_element.extract()
93 |
94 | foreign_object_tag = svg.new_tag('foreignObject')
95 | foreign_object_tag['requiredFeatures'] = "http://www.w3.org/TR/SVG11/feature#Extensibility"
96 | foreign_object_tag['transform'] = 'translate(' + x + ' ' + y + ')'
97 | foreign_object_tag['width'] = width + 'px'
98 | foreign_object_tag['height'] = height + 'px'
99 |
100 | if 'dont_overflow_text_areas' in options and options['dont_overflow_text_areas'] == True:
101 | foreign_object_tag['style'] = 'overflow:hidden'
102 |
103 | if data_name:
104 | val = data_name
105 | if not val.startswith('#'): continue
106 | val = val.replace('#', '')
107 |
108 | attributes = str.split(str(val), ',')
109 | for a in attributes:
110 | split = str.split(a.strip(), '=')
111 | if (len(split) < 2): continue
112 | key = split[0]
113 | value = split[1]
114 | if key == 'id': key = namespace + key
115 | foreign_object_tag[key] = value
116 |
117 | foreign_object_tag.append(text_tag)
118 |
119 |             # modifying the tree affects searches so we need to defer it until the end
120 | foreign_tags_to_add.append({'element':element, 'tag':foreign_object_tag})
121 |
122 |
123 | if (not 'process_layer_names' in options or ('process_layer_names' in options and options['process_layer_names'] == True)):
124 | elements_with_data_names = svg.select('[data_name]')
125 | for element in elements_with_data_names:
126 |
127 | # remove any existing id tag as we'll be making our own
128 | if element.has_attr('id'): del element.attrs['id']
129 |
130 | val = element['data_name']
131 | #print(val)
132 | del element['data_name']
133 |
134 | if not val.startswith('#'): continue
135 | val = val.replace('#', '')
136 |
137 | attributes = str.split(str(val), ',')
138 | for a in attributes:
139 | split = str.split(a.strip(), '=')
140 | if (len(split) < 2): continue
141 | key = split[0]
142 | value = split[1]
143 | if key == 'id' or key == 'class': value = namespace + value
144 | element[key] = value
145 |
146 |
147 | if 'remove_text_attributes' in options and options['remove_text_attributes'] == True:
148 | #Remove attributes from text tags
149 | text_elements = svg.select('text')
150 | for element in text_elements:
151 | if element.has_attr('font-size'): del element.attrs['font-size']
152 | if element.has_attr('font-family'): del element.attrs['font-family']
153 | if element.has_attr('font-weight'): del element.attrs['font-weight']
154 | if element.has_attr('fill'): del element.attrs['fill']
155 |
156 | # Do tree modifications here
157 | if 'convert_svg_text_to_html' in options and options['convert_svg_text_to_html'] == True:
158 | for t in foreign_tags_to_add:
159 | t['element'].append(t['tag'])
160 |
161 |
162 | return svg
163 |
164 |
165 | def write_svg(svg, dst_path, options):
166 |
167 | result = str(svg)
168 | result = unicode(result, "utf8")
169 | #Remove self closing tags
170 | result = result.replace('>','/>')
171 | result = result.replace('>','/>')
172 | result = result.replace('>','/>')
173 | result = result.replace('>','/>')
174 |
175 | if 'nowhitespace' in options and options['nowhitespace'] == True:
176 | result = result.replace('\n','')
177 | #else:
178 | # result = svg.prettify()
179 |
180 | # bs4 incorrectly outputs clippath instead of clipPath
181 | result = result.replace('clippath', 'clipPath')
182 | result = result.encode('UTF8')
183 |
184 | result_file = create_file(dst_path, 'wb')
185 | result_file.write(result)
186 | result_file.close()
187 |
188 |
189 |
190 | def compile_svg(src_path, dst_path, options):
191 | namespace = None
192 |
193 | if 'namespace' in options:
194 | namespace = options['namespace']
195 | svg = parse_svg(src_path, namespace, options)
196 |
197 | if 'attributes' in options:
198 | attrs = options['attributes']
199 | for k in attrs:
200 | svg.svg[k] = attrs[k]
201 |
202 | if 'description' in options:
203 | current_desc = svg.select('description')
204 | if current_desc:
205 | current_desc[0].string = options['description']
206 | else:
207 |             desc_tag = svg.new_tag('description')
208 | desc_tag.string = options['description']
209 | svg.svg.append(desc_tag)
210 |
211 | write_svg(svg, dst_path, options)
212 |
213 |
214 |
215 | def compile_master_svg(src_path, dst_path, options):
216 | print('\n')
217 | print(src_path)
218 | file = open(src_path)
219 | svg = BeautifulSoup(file, 'html.parser')
220 | file.close()
221 |
222 | master_viewbox = svg.svg.attrs['viewbox']
223 |
224 | import_tags = svg.select('[path]')
225 | for tag in import_tags:
226 |
227 | component_path = str(tag['path'])
228 |
229 | namespace = None
230 | if tag.has_attr('namespace'): namespace = tag['namespace']
231 |
232 | component = parse_svg(component_path, namespace, options)
233 |
234 | component_viewbox = component.svg.attrs['viewbox']
235 | if master_viewbox != component_viewbox:
236 | print('WARNING: Master viewbox: [' + master_viewbox + '] does not match component viewbox [' + component_viewbox + ']')
237 |
238 | # Moves the contents of the component svg file into the master svg
239 | for child in component.svg: tag.contents.append(child)
240 |
241 | # Remove redundant path and namespace attributes from the import element
242 | del tag.attrs['path']
243 | if namespace: del tag.attrs['namespace']
244 |
245 |
246 | if 'attributes' in options:
247 | attrs = options['attributes']
248 | for k in attrs:
249 | print(k + ' = ' + attrs[k])
250 | svg.svg[k] = attrs[k]
251 |
252 |
253 | if 'title' in options and options['title'] is not False:
254 | current_title = svg.select('title')
255 | if current_title:
256 | current_title[0].string = options['title']
257 | else:
258 |             title_tag = svg.new_tag('title')
259 | title_tag.string = options['title']
260 | svg.svg.append(title_tag)
261 |
262 |
263 | if 'description' in options:
264 | current_desc = svg.select('description')
265 | if current_desc:
266 | current_desc[0].string = options['description']
267 | else:
268 |             desc_tag = svg.new_tag('description')
269 | desc_tag.string = options['description']
270 | svg.svg.append(desc_tag)
271 |
272 |
273 | write_svg(svg, dst_path, options)
274 |
275 |
276 | # Super dumb / simple function that inlines svgs into html source files
277 |
278 | def parse_markup(src_path, output):
279 | print(src_path)
280 | read_state = 0
281 | file = open(src_path, 'r')
282 | for line in file:
283 | if line.startswith('//import'):
284 | path = line.split('//import ')[1].rstrip('\n').rstrip('\r')
285 | parse_markup(path, output)
286 | else:
287 | output.append(line)
288 |
289 | file.close()
290 |
291 | def inline_svg(src_path, dst_path):
292 |     output = []
293 |
294 | file = create_file(dst_path, 'w')
295 | parse_markup(src_path, output)
296 | for line in output: file.write(line)
297 | file.close()
298 | print('')
--------------------------------------------------------------------------------
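A central convention in parse_svg above is the Illustrator layer-name syntax: a layer named `#id=hero,class=layer` has its data-name split into real attributes on the element, with id and class values receiving the namespace prefix. The sketch below reproduces that parsing as a standalone function (parse_layer_name is a hypothetical helper for illustration; the module does this inline in two places):

```python
# Standalone sketch of the "#key=value,key=value" layer-name convention that
# parse_svg reads from Illustrator data-name attributes. parse_layer_name is
# a hypothetical helper written for illustration, not part of the module.

def parse_layer_name(val, namespace=''):
    attrs = {}
    if not val.startswith('#'):
        return attrs  # names without a leading '#' are ignored
    for part in val[1:].split(','):
        kv = part.strip().split('=')
        if len(kv) < 2:
            continue  # skip malformed fragments, as parse_svg does
        key, value = kv[0], kv[1]
        if key in ('id', 'class'):
            value = namespace + value  # ids and classes get the namespace prefix
        attrs[key] = value
    return attrs

print(parse_layer_name('#id=hero,class=layer', 'example-'))
# -> {'id': 'example-hero', 'class': 'example-layer'}
```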
/example/parallax_svg_tools/svg/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/parallax/svg-animation-tools/3cbad1696760c049ed66b7e0c8631357000dbdb6/example/parallax_svg_tools/svg/__init__.pyc
--------------------------------------------------------------------------------
/example/processed_animation.svg:
--------------------------------------------------------------------------------
1 |
7 |
--------------------------------------------------------------------------------
/parallax_svg_tools.zip:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/parallax/svg-animation-tools/3cbad1696760c049ed66b7e0c8631357000dbdb6/parallax_svg_tools.zip
--------------------------------------------------------------------------------
/parallax_svg_tools/bs4/__init__.py:
--------------------------------------------------------------------------------
1 | """Beautiful Soup
2 | Elixir and Tonic
3 | "The Screen-Scraper's Friend"
4 | http://www.crummy.com/software/BeautifulSoup/
5 |
6 | Beautiful Soup uses a pluggable XML or HTML parser to parse a
7 | (possibly invalid) document into a tree representation. Beautiful Soup
8 | provides methods and Pythonic idioms that make it easy to navigate,
9 | search, and modify the parse tree.
10 |
11 | Beautiful Soup works with Python 2.7 and up. It works better if lxml
12 | and/or html5lib is installed.
13 |
14 | For more than you ever wanted to know about Beautiful Soup, see the
15 | documentation:
16 | http://www.crummy.com/software/BeautifulSoup/bs4/doc/
17 |
18 | """
19 |
20 | # Use of this source code is governed by a BSD-style license that can be
21 | # found in the LICENSE file.
22 |
23 | __author__ = "Leonard Richardson (leonardr@segfault.org)"
24 | __version__ = "4.5.1"
25 | __copyright__ = "Copyright (c) 2004-2016 Leonard Richardson"
26 | __license__ = "MIT"
27 |
28 | __all__ = ['BeautifulSoup']
29 |
30 | import os
31 | import re
32 | import traceback
33 | import warnings
34 |
35 | from .builder import builder_registry, ParserRejectedMarkup
36 | from .dammit import UnicodeDammit
37 | from .element import (
38 | CData,
39 | Comment,
40 | DEFAULT_OUTPUT_ENCODING,
41 | Declaration,
42 | Doctype,
43 | NavigableString,
44 | PageElement,
45 | ProcessingInstruction,
46 | ResultSet,
47 | SoupStrainer,
48 | Tag,
49 | )
50 |
51 | # The very first thing we do is give a useful error if someone is
52 | # running this code under Python 3 without converting it.
53 | 'You are trying to run the Python 2 version of Beautiful Soup under Python 3. This will not work.'<>'You need to convert the code, either by installing it (`python setup.py install`) or by running 2to3 (`2to3 -w bs4`).'
54 |
55 | class BeautifulSoup(Tag):
56 | """
57 | This class defines the basic interface called by the tree builders.
58 |
59 | These methods will be called by the parser:
60 | reset()
61 | feed(markup)
62 |
63 | The tree builder may call these methods from its feed() implementation:
64 | handle_starttag(name, attrs) # See note about return value
65 | handle_endtag(name)
66 | handle_data(data) # Appends to the current data node
67 | endData(containerClass=NavigableString) # Ends the current data node
68 |
69 | No matter how complicated the underlying parser is, you should be
70 | able to build a tree using 'start tag' events, 'end tag' events,
71 | 'data' events, and "done with data" events.
72 |
73 | If you encounter an empty-element tag (aka a self-closing tag,
 74 | like HTML's <br> tag), call handle_starttag and then
75 | handle_endtag.
76 | """
77 | ROOT_TAG_NAME = u'[document]'
78 |
79 | # If the end-user gives no indication which tree builder they
80 | # want, look for one with these features.
81 | DEFAULT_BUILDER_FEATURES = ['html', 'fast']
82 |
83 | ASCII_SPACES = '\x20\x0a\x09\x0c\x0d'
84 |
85 | NO_PARSER_SPECIFIED_WARNING = "No parser was explicitly specified, so I'm using the best available %(markup_type)s parser for this system (\"%(parser)s\"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.\n\nThe code that caused this warning is on line %(line_number)s of the file %(filename)s. To get rid of this warning, change code that looks like this:\n\n BeautifulSoup([your markup])\n\nto this:\n\n BeautifulSoup([your markup], \"%(parser)s\")\n"
86 |
87 | def __init__(self, markup="", features=None, builder=None,
88 | parse_only=None, from_encoding=None, exclude_encodings=None,
89 | **kwargs):
90 | """The Soup object is initialized as the 'root tag', and the
91 | provided markup (which can be a string or a file-like object)
92 | is fed into the underlying parser."""
93 |
94 | if 'convertEntities' in kwargs:
95 | warnings.warn(
96 | "BS4 does not respect the convertEntities argument to the "
97 | "BeautifulSoup constructor. Entities are always converted "
98 | "to Unicode characters.")
99 |
100 | if 'markupMassage' in kwargs:
101 | del kwargs['markupMassage']
102 | warnings.warn(
103 | "BS4 does not respect the markupMassage argument to the "
104 | "BeautifulSoup constructor. The tree builder is responsible "
105 | "for any necessary markup massage.")
106 |
107 | if 'smartQuotesTo' in kwargs:
108 | del kwargs['smartQuotesTo']
109 | warnings.warn(
110 | "BS4 does not respect the smartQuotesTo argument to the "
111 | "BeautifulSoup constructor. Smart quotes are always converted "
112 | "to Unicode characters.")
113 |
114 | if 'selfClosingTags' in kwargs:
115 | del kwargs['selfClosingTags']
116 | warnings.warn(
117 | "BS4 does not respect the selfClosingTags argument to the "
118 | "BeautifulSoup constructor. The tree builder is responsible "
119 | "for understanding self-closing tags.")
120 |
121 | if 'isHTML' in kwargs:
122 | del kwargs['isHTML']
123 | warnings.warn(
124 | "BS4 does not respect the isHTML argument to the "
125 | "BeautifulSoup constructor. Suggest you use "
126 | "features='lxml' for HTML and features='lxml-xml' for "
127 | "XML.")
128 |
129 | def deprecated_argument(old_name, new_name):
130 | if old_name in kwargs:
131 | warnings.warn(
132 | 'The "%s" argument to the BeautifulSoup constructor '
133 | 'has been renamed to "%s."' % (old_name, new_name))
134 | value = kwargs[old_name]
135 | del kwargs[old_name]
136 | return value
137 | return None
138 |
139 | parse_only = parse_only or deprecated_argument(
140 | "parseOnlyThese", "parse_only")
141 |
142 | from_encoding = from_encoding or deprecated_argument(
143 | "fromEncoding", "from_encoding")
144 |
145 | if from_encoding and isinstance(markup, unicode):
146 | warnings.warn("You provided Unicode markup but also provided a value for from_encoding. Your from_encoding will be ignored.")
147 | from_encoding = None
148 |
149 | if len(kwargs) > 0:
150 | arg = kwargs.keys().pop()
151 | raise TypeError(
152 | "__init__() got an unexpected keyword argument '%s'" % arg)
153 |
154 | if builder is None:
155 | original_features = features
156 | if isinstance(features, basestring):
157 | features = [features]
158 | if features is None or len(features) == 0:
159 | features = self.DEFAULT_BUILDER_FEATURES
160 | builder_class = builder_registry.lookup(*features)
161 | if builder_class is None:
162 | raise FeatureNotFound(
163 | "Couldn't find a tree builder with the features you "
164 | "requested: %s. Do you need to install a parser library?"
165 | % ",".join(features))
166 | builder = builder_class()
167 | if not (original_features == builder.NAME or
168 | original_features in builder.ALTERNATE_NAMES):
169 | if builder.is_xml:
170 | markup_type = "XML"
171 | else:
172 | markup_type = "HTML"
173 |
174 | caller = traceback.extract_stack()[0]
175 | filename = caller[0]
176 | line_number = caller[1]
177 | warnings.warn(self.NO_PARSER_SPECIFIED_WARNING % dict(
178 | filename=filename,
179 | line_number=line_number,
180 | parser=builder.NAME,
181 | markup_type=markup_type))
182 |
183 | self.builder = builder
184 | self.is_xml = builder.is_xml
185 | self.known_xml = self.is_xml
186 | self.builder.soup = self
187 |
188 | self.parse_only = parse_only
189 |
190 | if hasattr(markup, 'read'): # It's a file-type object.
191 | markup = markup.read()
192 | elif len(markup) <= 256 and (
193 | (isinstance(markup, bytes) and not b'<' in markup)
194 | or (isinstance(markup, unicode) and not u'<' in markup)
195 | ):
196 | # Print out warnings for a couple beginner problems
197 | # involving passing non-markup to Beautiful Soup.
198 | # Beautiful Soup will still parse the input as markup,
199 | # just in case that's what the user really wants.
200 | if (isinstance(markup, unicode)
201 | and not os.path.supports_unicode_filenames):
202 | possible_filename = markup.encode("utf8")
203 | else:
204 | possible_filename = markup
205 | is_file = False
206 | try:
207 | is_file = os.path.exists(possible_filename)
208 | except Exception, e:
209 | # This is almost certainly a problem involving
210 | # characters not valid in filenames on this
211 | # system. Just let it go.
212 | pass
213 | if is_file:
214 | if isinstance(markup, unicode):
215 | markup = markup.encode("utf8")
216 | warnings.warn(
217 |                     '"%s" looks like a filename, not markup. You should '
218 |                     'probably open this file and pass the filehandle into '
219 |                     'Beautiful Soup.' % markup)
220 | self._check_markup_is_url(markup)
221 |
222 | for (self.markup, self.original_encoding, self.declared_html_encoding,
223 | self.contains_replacement_characters) in (
224 | self.builder.prepare_markup(
225 | markup, from_encoding, exclude_encodings=exclude_encodings)):
226 | self.reset()
227 | try:
228 | self._feed()
229 | break
230 | except ParserRejectedMarkup:
231 | pass
232 |
233 | # Clear out the markup and remove the builder's circular
234 | # reference to this object.
235 | self.markup = None
236 | self.builder.soup = None
237 |
238 | def __copy__(self):
239 | copy = type(self)(
240 | self.encode('utf-8'), builder=self.builder, from_encoding='utf-8'
241 | )
242 |
243 | # Although we encoded the tree to UTF-8, that may not have
244 | # been the encoding of the original markup. Set the copy's
245 | # .original_encoding to reflect the original object's
246 | # .original_encoding.
247 | copy.original_encoding = self.original_encoding
248 | return copy
249 |
250 | def __getstate__(self):
251 | # Frequently a tree builder can't be pickled.
252 | d = dict(self.__dict__)
253 | if 'builder' in d and not self.builder.picklable:
254 | d['builder'] = None
255 | return d
256 |
257 | @staticmethod
258 | def _check_markup_is_url(markup):
259 | """
260 | Check if markup looks like it's actually a url and raise a warning
261 | if so. Markup can be unicode or str (py2) / bytes (py3).
262 | """
263 | if isinstance(markup, bytes):
264 | space = b' '
265 | cant_start_with = (b"http:", b"https:")
266 | elif isinstance(markup, unicode):
267 | space = u' '
268 | cant_start_with = (u"http:", u"https:")
269 | else:
270 | return
271 |
272 | if any(markup.startswith(prefix) for prefix in cant_start_with):
273 | if not space in markup:
274 | if isinstance(markup, bytes):
275 | decoded_markup = markup.decode('utf-8', 'replace')
276 | else:
277 | decoded_markup = markup
278 | warnings.warn(
279 | '"%s" looks like a URL. Beautiful Soup is not an'
280 | ' HTTP client. You should probably use an HTTP client like'
281 | ' requests to get the document behind the URL, and feed'
282 | ' that document to Beautiful Soup.' % decoded_markup
283 | )
284 |
285 | def _feed(self):
286 | # Convert the document to Unicode.
287 | self.builder.reset()
288 |
289 | self.builder.feed(self.markup)
290 | # Close out any unfinished strings and close all the open tags.
291 | self.endData()
292 | while self.currentTag.name != self.ROOT_TAG_NAME:
293 | self.popTag()
294 |
295 | def reset(self):
296 | Tag.__init__(self, self, self.builder, self.ROOT_TAG_NAME)
297 | self.hidden = 1
298 | self.builder.reset()
299 | self.current_data = []
300 | self.currentTag = None
301 | self.tagStack = []
302 | self.preserve_whitespace_tag_stack = []
303 | self.pushTag(self)
304 |
305 | def new_tag(self, name, namespace=None, nsprefix=None, **attrs):
306 | """Create a new tag associated with this soup."""
307 | return Tag(None, self.builder, name, namespace, nsprefix, attrs)
308 |
309 | def new_string(self, s, subclass=NavigableString):
310 | """Create a new NavigableString associated with this soup."""
311 | return subclass(s)
312 |
313 | def insert_before(self, successor):
314 | raise NotImplementedError("BeautifulSoup objects don't support insert_before().")
315 |
316 | def insert_after(self, successor):
317 | raise NotImplementedError("BeautifulSoup objects don't support insert_after().")
318 |
319 | def popTag(self):
320 | tag = self.tagStack.pop()
321 | if self.preserve_whitespace_tag_stack and tag == self.preserve_whitespace_tag_stack[-1]:
322 | self.preserve_whitespace_tag_stack.pop()
323 | #print "Pop", tag.name
324 | if self.tagStack:
325 | self.currentTag = self.tagStack[-1]
326 | return self.currentTag
327 |
328 | def pushTag(self, tag):
329 | #print "Push", tag.name
330 | if self.currentTag:
331 | self.currentTag.contents.append(tag)
332 | self.tagStack.append(tag)
333 | self.currentTag = self.tagStack[-1]
334 | if tag.name in self.builder.preserve_whitespace_tags:
335 | self.preserve_whitespace_tag_stack.append(tag)
336 |
337 | def endData(self, containerClass=NavigableString):
338 | if self.current_data:
339 | current_data = u''.join(self.current_data)
340 | # If whitespace is not preserved, and this string contains
341 | # nothing but ASCII spaces, replace it with a single space
342 | # or newline.
343 | if not self.preserve_whitespace_tag_stack:
344 | strippable = True
345 | for i in current_data:
346 | if i not in self.ASCII_SPACES:
347 | strippable = False
348 | break
349 | if strippable:
350 | if '\n' in current_data:
351 | current_data = '\n'
352 | else:
353 | current_data = ' '
354 |
355 | # Reset the data collector.
356 | self.current_data = []
357 |
358 | # Should we add this string to the tree at all?
359 | if self.parse_only and len(self.tagStack) <= 1 and \
360 | (not self.parse_only.text or \
361 | not self.parse_only.search(current_data)):
362 | return
363 |
364 | o = containerClass(current_data)
365 | self.object_was_parsed(o)
366 |
367 | def object_was_parsed(self, o, parent=None, most_recent_element=None):
368 | """Add an object to the parse tree."""
369 | parent = parent or self.currentTag
370 | previous_element = most_recent_element or self._most_recent_element
371 |
372 | next_element = previous_sibling = next_sibling = None
373 | if isinstance(o, Tag):
374 | next_element = o.next_element
375 | next_sibling = o.next_sibling
376 | previous_sibling = o.previous_sibling
377 | if not previous_element:
378 | previous_element = o.previous_element
379 |
380 | o.setup(parent, previous_element, next_element, previous_sibling, next_sibling)
381 |
382 | self._most_recent_element = o
383 | parent.contents.append(o)
384 |
385 | if parent.next_sibling:
386 | # This node is being inserted into an element that has
387 | # already been parsed. Deal with any dangling references.
388 | index = len(parent.contents)-1
389 | while index >= 0:
390 | if parent.contents[index] is o:
391 | break
392 | index -= 1
393 | else:
394 | raise ValueError(
395 | "Error building tree: supposedly %r was inserted "
396 | "into %r after the fact, but I don't see it!" % (
397 | o, parent
398 | )
399 | )
400 | if index == 0:
401 | previous_element = parent
402 | previous_sibling = None
403 | else:
404 | previous_element = previous_sibling = parent.contents[index-1]
405 | if index == len(parent.contents)-1:
406 | next_element = parent.next_sibling
407 | next_sibling = None
408 | else:
409 | next_element = next_sibling = parent.contents[index+1]
410 |
411 | o.previous_element = previous_element
412 | if previous_element:
413 | previous_element.next_element = o
414 | o.next_element = next_element
415 | if next_element:
416 | next_element.previous_element = o
417 | o.next_sibling = next_sibling
418 | if next_sibling:
419 | next_sibling.previous_sibling = o
420 | o.previous_sibling = previous_sibling
421 | if previous_sibling:
422 | previous_sibling.next_sibling = o
423 |
424 | def _popToTag(self, name, nsprefix=None, inclusivePop=True):
425 | """Pops the tag stack up to and including the most recent
426 | instance of the given tag. If inclusivePop is false, pops the tag
427 |         stack up to but *not* including the most recent instance of
428 | the given tag."""
429 | #print "Popping to %s" % name
430 | if name == self.ROOT_TAG_NAME:
431 | # The BeautifulSoup object itself can never be popped.
432 | return
433 |
434 | most_recently_popped = None
435 |
436 | stack_size = len(self.tagStack)
437 | for i in range(stack_size - 1, 0, -1):
438 | t = self.tagStack[i]
439 | if (name == t.name and nsprefix == t.prefix):
440 | if inclusivePop:
441 | most_recently_popped = self.popTag()
442 | break
443 | most_recently_popped = self.popTag()
444 |
445 | return most_recently_popped
446 |
447 | def handle_starttag(self, name, namespace, nsprefix, attrs):
448 | """Push a start tag on to the stack.
449 |
450 | If this method returns None, the tag was rejected by the
451 | SoupStrainer. You should proceed as if the tag had not occurred
452 | in the document. For instance, if this was a self-closing tag,
453 | don't call handle_endtag.
454 | """
455 |
456 | # print "Start tag %s: %s" % (name, attrs)
457 | self.endData()
458 |
459 | if (self.parse_only and len(self.tagStack) <= 1
460 | and (self.parse_only.text
461 | or not self.parse_only.search_tag(name, attrs))):
462 | return None
463 |
464 | tag = Tag(self, self.builder, name, namespace, nsprefix, attrs,
465 | self.currentTag, self._most_recent_element)
466 | if tag is None:
467 | return tag
468 | if self._most_recent_element:
469 | self._most_recent_element.next_element = tag
470 | self._most_recent_element = tag
471 | self.pushTag(tag)
472 | return tag
473 |
474 | def handle_endtag(self, name, nsprefix=None):
475 | #print "End tag: " + name
476 | self.endData()
477 | self._popToTag(name, nsprefix)
478 |
479 | def handle_data(self, data):
480 | self.current_data.append(data)
481 |
482 | def decode(self, pretty_print=False,
483 | eventual_encoding=DEFAULT_OUTPUT_ENCODING,
484 | formatter="minimal"):
485 | """Returns a string or Unicode representation of this document.
486 | To get Unicode, pass None for encoding."""
487 |
488 | if self.is_xml:
489 | # Print the XML declaration
490 | encoding_part = ''
491 | if eventual_encoding != None:
492 | encoding_part = ' encoding="%s"' % eventual_encoding
493 |             prefix = u'<?xml version="1.0"%s?>\n' % encoding_part
494 | else:
495 | prefix = u''
496 | if not pretty_print:
497 | indent_level = None
498 | else:
499 | indent_level = 0
500 | return prefix + super(BeautifulSoup, self).decode(
501 | indent_level, eventual_encoding, formatter)
502 |
503 | # Alias to make it easier to type import: 'from bs4 import _soup'
504 | _s = BeautifulSoup
505 | _soup = BeautifulSoup
506 |
507 | class BeautifulStoneSoup(BeautifulSoup):
508 | """Deprecated interface to an XML parser."""
509 |
510 | def __init__(self, *args, **kwargs):
511 | kwargs['features'] = 'xml'
512 | warnings.warn(
513 | 'The BeautifulStoneSoup class is deprecated. Instead of using '
514 | 'it, pass features="xml" into the BeautifulSoup constructor.')
515 | super(BeautifulStoneSoup, self).__init__(*args, **kwargs)
516 |
517 |
518 | class StopParsing(Exception):
519 | pass
520 |
521 | class FeatureNotFound(ValueError):
522 | pass
523 |
524 |
525 | #By default, act as an HTML pretty-printer.
526 | if __name__ == '__main__':
527 | import sys
528 | soup = BeautifulSoup(sys.stdin)
529 | print soup.prettify()
530 |
--------------------------------------------------------------------------------
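The constructor above resolves its `features` argument through `builder_registry.lookup` (defined in `bs4/builder/__init__.py` below): each requested feature narrows the candidate set, and the most recently registered survivor wins. A minimal self-contained sketch of that lookup, using hypothetical stand-in builder classes rather than the vendored ones:

```python
from collections import defaultdict

class SketchRegistry(object):
    """Simplified model of bs4's TreeBuilderRegistry feature lookup."""
    def __init__(self):
        self.builders_for_feature = defaultdict(list)
        self.builders = []

    def register(self, builder_class):
        # Newest registrations go to the front, so they take priority.
        for feature in builder_class.features:
            self.builders_for_feature[feature].insert(0, builder_class)
        self.builders.insert(0, builder_class)

    def lookup(self, *features):
        if not self.builders:
            return None
        if not features:
            # No features requested: most recently registered builder.
            return self.builders[0]
        candidates = None
        candidate_set = None
        for feature in features:
            have = self.builders_for_feature.get(feature, [])
            if have:
                if candidates is None:
                    candidates = have
                    candidate_set = set(candidates)
                else:
                    # Keep only builders that also have this feature.
                    candidate_set &= set(have)
        if candidate_set is None:
            return None
        # First candidate (priority order) that survived every feature.
        for candidate in candidates:
            if candidate in candidate_set:
                return candidate
        return None

class FastHTML(object):
    features = ['html', 'fast']

class StrictXML(object):
    features = ['xml', 'strict']

registry = SketchRegistry()
registry.register(FastHTML)
registry.register(StrictXML)
```

When no builder matches every requested feature, `lookup` returns `None`, which is what makes the constructor raise `FeatureNotFound` with its "Do you need to install a parser library?" message.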
/parallax_svg_tools/bs4/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/parallax/svg-animation-tools/3cbad1696760c049ed66b7e0c8631357000dbdb6/parallax_svg_tools/bs4/__init__.pyc
--------------------------------------------------------------------------------
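The `_check_markup_is_url` staticmethod above warns when markup starts with `http:`/`https:` and contains no space, since that is almost certainly a URL the caller meant to fetch rather than a document to parse. A standalone sketch of the same heuristic (Python 3 names, predicate instead of warning; not the vendored Py2 code):

```python
def looks_like_url(markup):
    """Heuristic mirroring BeautifulSoup._check_markup_is_url:
    a string or bytes value that starts with http(s): and contains
    no space is probably a URL, not markup."""
    if isinstance(markup, bytes):
        space, prefixes = b' ', (b"http:", b"https:")
    elif isinstance(markup, str):
        space, prefixes = ' ', ("http:", "https:")
    else:
        # Non-string input: nothing to check.
        return False
    return any(markup.startswith(p) for p in prefixes) and space not in markup
```

A space anywhere in the input defeats the check, which is why prose that merely begins with "http:" does not trigger the warning in the real code.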
/parallax_svg_tools/bs4/builder/__init__.py:
--------------------------------------------------------------------------------
1 | # Use of this source code is governed by a BSD-style license that can be
2 | # found in the LICENSE file.
3 |
4 | from collections import defaultdict
5 | import itertools
6 | import sys
7 | from bs4.element import (
8 | CharsetMetaAttributeValue,
9 | ContentMetaAttributeValue,
10 | HTMLAwareEntitySubstitution,
11 | whitespace_re
12 | )
13 |
14 | __all__ = [
15 | 'HTMLTreeBuilder',
16 | 'SAXTreeBuilder',
17 | 'TreeBuilder',
18 | 'TreeBuilderRegistry',
19 | ]
20 |
21 | # Some useful features for a TreeBuilder to have.
22 | FAST = 'fast'
23 | PERMISSIVE = 'permissive'
24 | STRICT = 'strict'
25 | XML = 'xml'
26 | HTML = 'html'
27 | HTML_5 = 'html5'
28 |
29 |
30 | class TreeBuilderRegistry(object):
31 |
32 | def __init__(self):
33 | self.builders_for_feature = defaultdict(list)
34 | self.builders = []
35 |
36 | def register(self, treebuilder_class):
37 | """Register a treebuilder based on its advertised features."""
38 | for feature in treebuilder_class.features:
39 | self.builders_for_feature[feature].insert(0, treebuilder_class)
40 | self.builders.insert(0, treebuilder_class)
41 |
42 | def lookup(self, *features):
43 | if len(self.builders) == 0:
44 | # There are no builders at all.
45 | return None
46 |
47 | if len(features) == 0:
48 | # They didn't ask for any features. Give them the most
49 | # recently registered builder.
50 | return self.builders[0]
51 |
52 | # Go down the list of features in order, and eliminate any builders
53 | # that don't match every feature.
54 | features = list(features)
55 | features.reverse()
56 | candidates = None
57 | candidate_set = None
58 | while len(features) > 0:
59 | feature = features.pop()
60 | we_have_the_feature = self.builders_for_feature.get(feature, [])
61 | if len(we_have_the_feature) > 0:
62 | if candidates is None:
63 | candidates = we_have_the_feature
64 | candidate_set = set(candidates)
65 | else:
66 | # Eliminate any candidates that don't have this feature.
67 | candidate_set = candidate_set.intersection(
68 | set(we_have_the_feature))
69 |
70 | # The only valid candidates are the ones in candidate_set.
71 | # Go through the original list of candidates and pick the first one
72 | # that's in candidate_set.
73 | if candidate_set is None:
74 | return None
75 | for candidate in candidates:
76 | if candidate in candidate_set:
77 | return candidate
78 | return None
79 |
80 | # The BeautifulSoup class will take feature lists from developers and use them
81 | # to look up builders in this registry.
82 | builder_registry = TreeBuilderRegistry()
83 |
84 | class TreeBuilder(object):
85 | """Turn a document into a Beautiful Soup object tree."""
86 |
87 | NAME = "[Unknown tree builder]"
88 | ALTERNATE_NAMES = []
89 | features = []
90 |
91 | is_xml = False
92 | picklable = False
93 | preserve_whitespace_tags = set()
94 | empty_element_tags = None # A tag will be considered an empty-element
95 | # tag when and only when it has no contents.
96 |
97 | # A value for these tag/attribute combinations is a space- or
98 | # comma-separated list of CDATA, rather than a single CDATA.
99 | cdata_list_attributes = {}
100 |
101 |
102 | def __init__(self):
103 | self.soup = None
104 |
105 | def reset(self):
106 | pass
107 |
108 | def can_be_empty_element(self, tag_name):
109 | """Might a tag with this name be an empty-element tag?
110 |
111 | The final markup may or may not actually present this tag as
112 | self-closing.
113 |
114 |         For instance: an HTMLBuilder does not consider a <p> tag to be
115 |         an empty-element tag (it's not in
116 |         HTMLBuilder.empty_element_tags). This means an empty <p> tag