├── .gitignore
├── .travis.yml
├── CHANGELOG.md
├── LICENSE
├── README.md
├── SubDaap.py
├── config.ini.default
├── init-scripts
│   ├── init.osx
│   └── systemd.service
├── requirements.txt
└── subdaap
    ├── __init__.py
    ├── application.py
    ├── cache.py
    ├── collection.py
    ├── config.py
    ├── connection.py
    ├── database.py
    ├── models.py
    ├── monkey.py
    ├── provider.py
    ├── state.py
    ├── static
    │   └── css
    │       └── pure-min.css
    ├── stream.py
    ├── subsonic.py
    ├── synchronizer.py
    ├── templates
    │   └── index.html
    ├── utils.py
    └── webserver.py
/.gitignore:
--------------------------------------------------------------------------------
1 | # Project files
2 | build/
3 | dist/
4 | data/
5 | *.ini
6 | *.db
7 | *.pid
8 |
9 | # Development files
10 | *.pyc
11 | *.pyo
12 | *.so
13 |
--------------------------------------------------------------------------------
/.travis.yml:
--------------------------------------------------------------------------------
1 | language: python
2 | python:
3 | - "2.7"
4 | sudo: false
5 | install:
6 | - travis_retry pip install -U pip
7 | - travis_retry pip install -U Cython
8 | - travis_retry pip install -r requirements.txt
9 | script: python SubDaap.py --help
--------------------------------------------------------------------------------
/CHANGELOG.md:
--------------------------------------------------------------------------------
1 | # Changelog
2 |
3 | ## v2.1.0
4 | Released 18 November 2015
5 |
6 | Highlights:
7 | * Added: change the soft limit on number of open files.
8 | * Fixed: workaround for SubSonic synchronization (see https://github.com/crustymonkey/py-sonic/issues/12).
9 | * Improved: playlist synchronization (requires SubSonic 5.3).
10 |
11 | The full list of commits can be found [here](https://github.com/basilfx/SubDaap/compare/v2.0.1...v2.1.0).
12 |
13 | ## v2.0.1
14 | Released 14 September 2015
15 |
16 | Highlights:
17 | * Fixed: synchronization issues when neither items nor containers have changed.
18 | * Improved: connection with Subsonic.
19 |
20 | The full list of commits can be found [here](https://github.com/basilfx/SubDaap/compare/v2.0.0...v2.0.1).
21 |
22 | ## v2.0.0
23 | Released 06 September 2015
24 |
25 | Highlights:
26 | * Improved: better synchronization.
27 | * Improved: live revisioning.
28 | * Improved: transcoding settings per connection (update your config).
29 | * Changed: config version updated to version 3.
30 | * Upgraded: flask-daapserver v3.0.0.
31 | * Upgraded: gevent v1.1.
32 |
33 | The full list of commits can be found [here](https://github.com/basilfx/SubDaap/compare/v1.2.1...v2.0.0).
34 |
35 | ## v1.2.1
36 | Released 05 September 2015
37 |
38 | Highlights:
39 | * Fixed: various issues that had been introduced by accident (issue #5).
40 |
41 | The full list of commits can be found [here](https://github.com/basilfx/SubDaap/compare/v1.2.0...v1.2.1).
42 |
43 | ## v1.2.0
44 | Released 05 April 2015
45 |
46 | Highlights:
47 | * Fixed: compatibility with iTunes 12.1/Flask-DAAPServer v2.3.0.
48 | * Fixed: items without artist/album not showing up.
49 | * Fixed: potential race conditions in prune/expire/streaming code.
50 | * Fixed: duplicate container items in database.
51 | * Improved: Python 2.7.9 compatibility if accessing SubSonic via SSL.
52 | * Improved: status page.
53 |
54 | The full list of commits can be found [here](https://github.com/basilfx/SubDaap/compare/v1.1.0...v1.2.0).
55 |
56 | ## v1.1.0
57 | Released 19 January 2015
58 |
59 | * Fixed: invalid SQL while fetching data.
60 |
61 | The full list of commits can be found [here](https://github.com/basilfx/SubDaap/compare/v1.0.0...v1.1.0).
62 |
63 | ## v1.0.0
64 | Released 06 January 2015
65 |
66 | The full list of commits can be found [here](https://github.com/basilfx/SubDaap/compare/69dad8031f0b80675b4e37fabea3b2b0dc878278...v1.0.0).
67 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | Copyright (c) 2014-2015 Bas Stottelaar
2 |
3 | Permission is hereby granted, free of charge, to any person obtaining a copy
4 | of this software and associated documentation files (the "Software"), to deal
5 | in the Software without restriction, including without limitation the rights
6 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
7 | copies of the Software, and to permit persons to whom the Software is
8 | furnished to do so, subject to the following conditions:
9 |
10 | The above copyright notice and this permission notice shall be included in
11 | all copies or substantial portions of the Software.
12 |
13 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
14 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
15 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
16 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
17 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
18 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
19 | THE SOFTWARE.
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # SubDaap
2 | DAAP server/proxy for SubSonic: play your favorite tunes from SubSonic in iTunes!
3 |
4 | [![Build Status](https://travis-ci.org/basilfx/SubDaap.svg?branch=master)](https://travis-ci.org/basilfx/SubDaap)
5 |
6 | The motivation for this application is that SubSonic does not ship with a DAAP server and that, in my opinion, the OS X clients for SubSonic lack features. After all, iTunes is a pretty intuitive and stable player.
7 |
8 | ## Features
9 | * Compatible with SubSonic 5.3+ and iTunes 12+, including password protection and Bonjour.
10 | * Artwork support.
11 | * Playlist support.
12 | * Browse the whole library in iTunes at once.
13 | * Supports gapless playback.
14 | * Smart file caching: supports in-file searching and concurrent access.
15 | * Revision support: efficient library updates pushed to all connected clients.
16 |
17 | ## Requirements
18 | * Python 2.7+ (not Python 3.x). PyPy 2.5+ may work.
19 | * SubSonic 5.3+
20 |
21 | ## Installation
22 | This application was designed as a gateway between SubSonic and iTunes. Therefore, it is recommended to install it on the same system that runs iTunes, although it can be installed on a central server as well.
23 |
24 | * Clone this repository.
25 | * Install dependencies via `pip install -r requirements.txt`.
26 | * Copy `config.ini.default` to `config.ini` and edit as desired.
27 | * `chmod 700 config.ini`, so others cannot view your credentials!
28 |
29 | To start this service when your computer starts:
30 |
31 | * On OS X:
32 | * Copy `init-scripts/init.osx` to `~/Library/LaunchAgents/com.basilfx.subdaap.plist`. Do not symlink!
33 | * Edit the file accordingly. Make sure all paths are correct.
34 | * Run `launchctl load ~/Library/LaunchAgents/com.basilfx.subdaap.plist`
35 | * On Ubuntu:
36 | * Copy `init-scripts/systemd.service` to `/etc/systemd/system/subdaap.service`.
37 | * Edit the file accordingly. Make sure all paths are correct and that the user and group `subdaap` exist and have the correct file permissions.
38 | * Run `systemctl enable subdaap`. Optionally, run `systemctl start subdaap` to start it immediately.
39 |
40 | ## Run the application
41 | To run the application, use the following command, or similar:
42 |
43 | ```
44 | python SubDaap.py --config-file config.ini --data-dir path/to/datadir --pid-file /var/run/subdaap.pid
45 | ```
46 |
47 | The data directory must exist. Optionally, add `-v` for verbose output, or `-vv` for even more verbosity. All paths on the command line are relative to the directory you run it from. Any paths in `config.ini` are relative to the `--data-dir`.
48 |
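For example, a minimal sketch of how such a path resolves (assuming a data directory `/home/user/subdaap` and the default `database = ./database.db` in `config.ini`):

```
import os

data_dir = "/home/user/subdaap"   # from --data-dir
database = "./database.db"        # from config.ini

# SubDaap changes into the data directory, so relative paths resolve
# against it.
print(os.path.abspath(os.path.join(data_dir, database)))
# -> /home/user/subdaap/database.db
```
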
49 | Add `--daemon` to run the program in the background.
50 |
51 | ## Contributing
52 | Feel free to submit a pull request. All pull requests must be made against the `development` branch. Python code should follow the PEP-8 conventions.
53 |
54 | ## License
55 | See the `LICENSE` file (MIT license).
56 |
57 | The web interface uses the [Pure.css](http://purecss.io/) CSS framework.
58 |
--------------------------------------------------------------------------------
/SubDaap.py:
--------------------------------------------------------------------------------
1 | from subdaap import monkey # noqa
2 |
3 | from subdaap.application import Application
4 |
5 | from subdaap.utils import VerboseAction, PathAction, NewPathAction
6 |
7 | import argparse
8 | import logging
9 | import atexit
10 | import sys
11 | import gc
12 | import os
13 |
14 | # Logger instance
15 | logger = logging.getLogger(__name__)
16 |
17 |
18 | def parse_arguments():
19 | """
20 | Parse commandline arguments.
21 | """
22 |
23 | parser = argparse.ArgumentParser()
24 |
25 | # Add options
26 | parser.add_argument(
27 | "-D", "--daemon", action="store_true", help="run as daemon")
28 | parser.add_argument(
29 | "-v", "--verbose", nargs="?", action=VerboseAction, default=0,
30 | help="toggle verbose mode (-vv, -vvv for more)")
31 | parser.add_argument(
32 | "-c", "--config-file", action=PathAction, default="config.ini",
33 | help="config file")
34 | parser.add_argument(
35 | "-d", "--data-dir", action=PathAction, default=os.getcwd(),
36 | help="data directory")
37 | parser.add_argument(
38 | "-p", "--pid-file", action=NewPathAction, help="pid file")
39 | parser.add_argument(
40 | "-l", "--log-file", action=NewPathAction, help="log file")
41 |
42 | # Parse command line
43 | return parser.parse_args(), parser
44 |
45 |
46 | def setup_logging(console=True, log_file=None, verbose=False):
47 | """
48 | Setup logging.
49 |
50 | :param bool console: If True, log to console.
51 | :param str log_file: If set, log to a file (append) as specified.
52 | :param bool verbose: Enable debug logging if True.
53 | """
54 |
55 | # Configure logging
56 | formatter = logging.Formatter(
57 | "%(asctime)s - %(name)s - %(levelname)s - %(message)s")
58 | level = logging.DEBUG if verbose else logging.INFO
59 |
60 | # Add console output handler
61 | if console:
62 | console_log_handler = logging.StreamHandler()
63 | console_log_handler.setLevel(level)
64 | console_log_handler.setFormatter(formatter)
65 | logging.getLogger().addHandler(console_log_handler)
66 |
67 | # Add file output handler
68 | if log_file:
69 | file_log_handler = logging.FileHandler(log_file)
70 | file_log_handler.setLevel(level)
71 | file_log_handler.setFormatter(formatter)
72 | logging.getLogger().addHandler(file_log_handler)
73 |
74 | logging.getLogger().setLevel(level)
75 | logger.info("Verbose level is %d", verbose)
76 |
77 |
78 | def daemonize(pid_file=None):
79 | """
80 | Daemonize the current process. Returns the PID of the continuing child
81 | process. As an extra option, the PID of the child process can be written to
82 | a specified PID file.
83 |
84 | Note that the parent processes end with `os._exit` instead of `sys.exit`. The
85 | former will not trigger any cleanup handlers that may have been registered.
86 | These are left for the child process that continues.
87 |
88 | :param str pid_file: Path to PID file to write process ID into. Must be in
89 | a writeable folder. If left `None`, no file will be
90 | written.
91 | :return: Process ID
92 | :rtype: int
93 | """
94 |
95 | # Dependency check to make sure the imports are OK. Saves you from a lot of
96 | # debugging trouble when you forget to import them.
97 | assert atexit.register and os.fork and sys.stdout and gc.collect
98 |
99 | # Force cleanup old resources to minimize the risk of sharing them.
100 | gc.collect()
101 |
102 | # First fork
103 | try:
104 | if os.fork() > 0:
105 | os._exit(0)
106 | except OSError as e:
107 | sys.stderr.write("Unable to fork: %d (%s)\n" % (e.errno, e.strerror))
108 | sys.exit(1)
109 |
110 | # Decouple from parent
111 | os.setsid()
112 | os.umask(0)
113 |
114 | # Second fork
115 | try:
116 | if os.fork() > 0:
117 | os._exit(0)
118 | except OSError as e:
119 | sys.stderr.write("Unable to fork: %d (%s)\n" % (e.errno, e.strerror))
120 | sys.exit(1)
121 |
122 | # Redirect file descriptors
123 | sys.stdout.flush()
124 | sys.stderr.flush()
125 |
126 | stdin = open("/dev/null", "r")
127 | stdout = open("/dev/null", "a+")
128 | stderr = open("/dev/null", "a+", 0)
129 |
130 | os.dup2(stdin.fileno(), sys.stdin.fileno())
131 | os.dup2(stdout.fileno(), sys.stdout.fileno())
132 | os.dup2(stderr.fileno(), sys.stderr.fileno())
133 |
134 | # Write PID file
135 | if pid_file:
136 | atexit.register(os.remove, pid_file)
137 |
138 | with open(pid_file, "w+") as fp:
139 | fp.write("%d" % os.getpid())
140 |
141 | # Return the PID
142 | return os.getpid()
143 |
144 |
145 | def main():
146 | """
147 | Main entry point. Parses arguments, daemonizes and creates the application.
148 | """
149 |
150 | # Parse arguments and configure application instance.
151 | arguments, parser = parse_arguments()
152 |
153 | if arguments.daemon:
154 | daemonize(arguments.pid_file)
155 |
156 | setup_logging(not arguments.daemon, arguments.log_file, arguments.verbose)
157 |
158 | # Change to data directory
159 | os.chdir(arguments.data_dir)
160 |
161 | # Create application instance and run it.
162 | try:
163 | application = Application(
164 | config_file=arguments.config_file,
165 | data_dir=arguments.data_dir,
166 | verbose=arguments.verbose)
167 | except Exception as e:
168 | logger.error(
169 | "One or more components failed to initialize: %s. The application "
170 | "will now exit.", e)
171 |
172 | if arguments.verbose > 1:
173 | logger.exception("Stack trace")
174 |
175 | return 1
176 |
177 | try:
178 | application.start()
179 | except KeyboardInterrupt:
180 | application.stop()
181 |
182 | # E.g. `python SubDaap.py --daemon --config-file=config.ini`
183 | if __name__ == "__main__":
184 | sys.exit(main())
185 |
--------------------------------------------------------------------------------
/config.ini.default:
--------------------------------------------------------------------------------
1 | # Paths and files are relative to the data directory!
2 | version = 4
3 |
4 | [Connections]
5 |
6 | # You can define multiple Subsonic connections here. Each connection will be
7 | # considered as a separate database. Although the DAAP protocol supports
8 | # multiple databases, iTunes only supports one library. The name of the key
9 | # defines its name. The order is important, so don't move sections around!
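# For example, a second section '[[ Another Library ]]' below the first one
# would be exposed as a second database.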
10 |
11 | [[ My Subsonic Library ]]
12 |
13 | # Remote server host (e.g. http://host:port/)
14 | url = TODO
15 |
16 | # Username
17 | username = TODO
18 |
19 | # Password
20 | password = TODO
21 |
22 | # Define the synchronization method (default is 'interval'). Valid choices are
23 | # 'manual', 'startup' and 'interval'.
24 | # synchronization = manual
25 |
26 | # Minutes between two synchronizations (default is 1440). Only valid if
27 | # synchronization method is set to 'interval'.
28 | # synchronization interval = 300
29 |
30 | # Enable transcode (default is no). Valid choices are 'no', 'unsupported' or
31 | # 'all'. If 'unsupported', only files that are not supported (see below) will
32 | # be transcoded. Note that transcoding is done by Subsonic!
33 | # transcode = unsupported
34 |
35 | # List of unsupported file suffixes (default is 'flac'). If the list is empty
36 | # or contains a single item, end it with a comma.
37 | # transcode unsupported = flac, alac, m4a
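# For example, a single unsupported suffix would be written as:
# transcode unsupported = flac,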
38 |
39 |
40 | [Daap]
41 |
42 | # Bind to specific interface (default is 0.0.0.0).
43 | # interface = 192.168.1.100
44 |
45 | # Server port (default is 3689, may conflict with iTunes Home Sharing).
46 | # port = 3688
47 |
48 | # Server password (default is no password).
49 | # password = MyPassword
50 |
51 | # Enable the web interface. It uses the same password, if one is set (default is yes).
52 | # web interface = no
53 |
54 | # To advertise server via Bonjour (default is yes).
55 | # zeroconf = no
56 |
57 | # Cache DAAP responses to speed up future access (default is yes).
58 | # cache = no
59 |
60 | # DAAP response cache timeout in minutes (default is 1440, one day).
61 | # cache timeout = 2880
62 |
63 |
64 | [Provider]
65 |
66 | # The name of the server.
67 | name = SubDaap Library
68 |
69 | # Database file path
70 | database = ./database.db
71 |
72 | # Enable artwork (default is yes).
73 | # artwork = no
74 |
75 | # Cache artwork (default is yes, faster)
76 | # artwork cache = no
77 |
78 | # Path for artwork cache.
79 | artwork cache dir = ./artwork
80 |
81 | # Max size (in MB) before old artwork will be pruned (default is 0, unlimited).
82 | artwork cache size = 1024
83 |
84 | # Percentage of cache size to clean while pruning (default is 0.1). Higher
85 | # values free more space, but may remove too much.
86 | # artwork cache prune threshold = 0.10
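# For example, with 'artwork cache size = 1024' and a threshold of 0.10,
# pruning continues until the cache drops below ~922 MB (90% of the maximum).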
87 |
88 | # Cache items (default is yes, faster)
89 | # item cache = no
90 |
91 | # Path for item cache
92 | item cache dir = ./files
93 |
94 | # Max size (in MB) before old files will be pruned (default is 0, unlimited).
95 | # This does not include the files that are permanently cached.
96 | item cache size = 10240
97 |
98 | # Percentage of cache size to clean while pruning (default is 0.25). Higher
99 | # values free more space, but may remove too much.
100 | # item cache prune threshold = 0.25
101 |
102 | # How often should expired items be searched and pruned (in minutes)?
103 | # Default: 5 minutes.
104 | # item cache prune interval = 300
105 |
106 |
107 | [Advanced]
108 |
109 | # Tweak the number of open files if possible (default is do nothing). This
110 | # setting heavily depends on the system configuration!
111 | # open files limit = 256
112 |
--------------------------------------------------------------------------------
/init-scripts/init.osx:
--------------------------------------------------------------------------------
1 | <?xml version="1.0" encoding="UTF-8"?>
2 | <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
3 | <plist version="1.0">
4 | <dict>
5 | <key>Disabled</key>
6 | <false/>
7 | <key>KeepAlive</key>
8 | <true/>
9 | <key>Label</key>
10 | <string>com.basilfx.subdaap</string>
11 | <key>ProgramArguments</key>
12 | <array>
13 | <string>/usr/bin/python</string>
14 | <string>SubDaap.py</string>
15 | <string>--data-dir</string>
16 | <string>~/Library/Application Support/SubDaap/</string>
17 | <string>--config-file</string>
18 | <string>~/Library/Application Support/SubDaap/config.ini</string>
19 | <string>--pid-file</string>
20 | <string>~/Library/Application Support/SubDaap/subdaap.pid</string>
21 | <string>--log-file</string>
22 | <string>~/Library/Application Support/SubDaap/subdaap.log</string>
23 | <string>--daemon</string>
24 | </array>
25 | <key>RunAtLoad</key>
26 | <true/>
27 | <key>WorkingDirectory</key>
28 | <string>/Applications/SubDaap</string>
29 | </dict>
30 | </plist>
--------------------------------------------------------------------------------
/init-scripts/systemd.service:
--------------------------------------------------------------------------------
1 | [Unit]
2 | Description=SubDaap - play your favorite tunes from SubSonic in iTunes.
3 |
4 | [Service]
5 | WorkingDirectory=/opt/subdaap
6 | ExecStart=/usr/bin/python SubDaap.py --data-dir /var/lib/subdaap/ --config-file /var/lib/subdaap/config.ini --pid-file /var/run/subdaap/subdaap.pid --log-file /var/log/subdaap/subdaap.log
7 | Type=simple
8 | User=subdaap
9 | Group=subdaap
10 |
11 | [Install]
12 | WantedBy=multi-user.target
13 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | Cython>=0.23.0
2 | py-sonic
3 | configobj
4 | flask-daapserver>=3.0.2
5 | apscheduler>=3.0.0
6 |
--------------------------------------------------------------------------------
/subdaap/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/basilfx/SubDaap/442c36ed8d7b6dc6ede738f4bc1dbfeb9cde20bd/subdaap/__init__.py
--------------------------------------------------------------------------------
/subdaap/application.py:
--------------------------------------------------------------------------------
1 | from subdaap.provider import Provider
2 | from subdaap.database import Database
3 | from subdaap.connection import Connection
4 | from subdaap.state import State
5 | from subdaap import cache, config, webserver
6 |
7 | from daapserver import DaapServer
8 |
9 | from apscheduler.schedulers.gevent import GeventScheduler
10 |
11 | import resource
12 | import logging
13 | import random
14 | import errno
15 | import os
16 |
17 | # Logger instance
18 | logger = logging.getLogger(__name__)
19 |
20 |
21 | class Application(object):
22 |
23 | def __init__(self, config_file, data_dir, verbose=0):
24 | """
25 | Construct a new application instance.
26 | """
27 |
28 | self.config_file = config_file
29 | self.data_dir = data_dir
30 | self.verbose = verbose
31 |
32 | self.server = None
33 | self.provider = None
34 | self.connections = {}
35 |
36 | # Setup all parts of the application
37 | self.setup_config()
38 | self.setup_open_files()
39 | self.setup_database()
40 | self.setup_state()
41 | self.setup_connections()
42 | self.setup_cache()
43 | self.setup_provider()
44 | self.setup_server()
45 | self.setup_tasks()
46 |
47 | def setup_config(self):
48 | """
49 | Load the application config from file.
50 | """
51 |
52 | logger.debug("Loading config from %s", self.config_file)
53 | self.config = config.get_config(self.config_file)
54 |
55 | def setup_open_files(self):
56 | """
57 | Get and set open files limit.
58 | """
59 |
60 | open_files_limit = resource.getrlimit(resource.RLIMIT_NOFILE)[0]
61 | new_open_files_limit = self.config["Advanced"]["open files limit"]
62 |
63 | logger.info(
64 | "System reports open files limit is %d.", open_files_limit)
65 |
66 | if new_open_files_limit != -1:
67 | logger.info(
68 | "Changing open files limit to %d.", new_open_files_limit)
69 |
70 | try:
71 | resource.setrlimit(resource.RLIMIT_NOFILE, (
72 | new_open_files_limit, resource.RLIM_INFINITY))
73 | except resource.error as e:
74 | logger.warning(
75 | "Failed to increase the number of open files: %s", e)
76 |
77 | def setup_database(self):
78 | """
79 | Initialize database.
80 | """
81 |
82 | self.db = Database(self.config["Provider"]["database"])
83 | self.db.create_database(drop_all=False)
84 |
85 | def setup_state(self):
86 | """
87 | Setup state.
88 | """
89 |
90 | self.state = State(os.path.join(
91 | self.get_cache_dir(), "provider.state"))
92 |
93 | def setup_cache(self):
94 | """
95 | Setup the caches for items and artwork.
96 | """
97 |
98 | # Initialize caches for items and artwork.
99 | item_cache = cache.ItemCache(
100 | path=self.get_cache_dir(
101 | self.config["Provider"]["item cache dir"]),
102 | max_size=self.config["Provider"]["item cache size"],
103 | prune_threshold=self.config[
104 | "Provider"]["item cache prune threshold"])
105 | artwork_cache = cache.ArtworkCache(
106 | path=self.get_cache_dir(self.config[
107 | "Provider"]["artwork cache dir"]),
108 | max_size=self.config["Provider"]["artwork cache size"],
109 | prune_threshold=self.config[
110 | "Provider"]["artwork cache prune threshold"])
111 |
112 | # Create a cache manager
113 | self.cache_manager = cache.CacheManager(
114 | db=self.db,
115 | item_cache=item_cache,
116 | artwork_cache=artwork_cache,
117 | connections=self.connections)
118 |
119 | def setup_connections(self):
120 | """
121 | Initialize the connections.
122 | """
123 |
124 | for name, section in self.config["Connections"].iteritems():
125 | index = len(self.connections) + 1
126 |
127 | self.connections[index] = Connection(
128 | db=self.db,
129 | state=self.state,
130 | index=index,
131 | name=name,
132 | url=section["url"],
133 | username=section["username"],
134 | password=section["password"],
135 | synchronization=section["synchronization"],
136 | synchronization_interval=section["synchronization interval"],
137 | transcode=section["transcode"],
138 | transcode_unsupported=section["transcode unsupported"])
139 |
140 | def setup_provider(self):
141 | """
142 | Setup the provider.
143 | """
144 |
145 | # Create provider.
146 | logger.debug(
147 | "Setting up provider for %d connection(s).", len(self.connections))
148 |
149 | self.provider = Provider(
150 | server_name=self.config["Provider"]["name"],
151 | db=self.db,
152 | state=self.state,
153 | connections=self.connections,
154 | cache_manager=self.cache_manager)
155 |
156 | # Do an initial synchronization if required.
157 | for connection in self.connections.itervalues():
158 | connection.synchronizer.provider = self.provider
159 | connection.synchronizer.synchronize(initial=True)
160 |
161 | def setup_server(self):
162 | """
163 | Create the DAAP server.
164 | """
165 |
166 | logger.debug(
167 | "Setting up DAAP server at %s:%d",
168 | self.config["Daap"]["interface"], self.config["Daap"]["port"])
169 |
170 | self.server = DaapServer(
171 | provider=self.provider,
172 | password=self.config["Daap"]["password"],
173 | ip=self.config["Daap"]["interface"],
174 | port=self.config["Daap"]["port"],
175 | cache=self.config["Daap"]["cache"],
176 | cache_timeout=self.config["Daap"]["cache timeout"] * 60,
177 | bonjour=self.config["Daap"]["zeroconf"],
178 | debug=self.verbose > 1)
179 |
180 | # Extend server with a web interface
181 | if self.config["Daap"]["web interface"]:
182 | webserver.extend_server_app(self, self.server.app)
183 |
184 | def setup_tasks(self):
185 | """
186 | Setup all tasks that run periodically.
187 | """
188 |
189 | self.scheduler = GeventScheduler()
190 |
191 | # Add an initial job
192 | def _job():
193 | job.remove()
194 | self.synchronize(synchronization="startup")
195 | job = self.scheduler.add_job(
196 | _job, max_instances=1, trigger="interval", seconds=1)
197 |
198 | # Scheduler task to clean and expire the cache.
199 | cache_interval = self.config['Provider']['item cache prune interval']
200 |
201 | self.scheduler.add_job(
202 | self.cache_manager.expire,
203 | max_instances=1, trigger="interval", minutes=cache_interval)
204 | self.scheduler.add_job(
205 | self.cache_manager.clean,
206 | max_instances=1, trigger="interval", minutes=cache_interval)
207 |
208 | # Schedule tasks to synchronize each connection.
209 | for connection in self.connections.itervalues():
210 | self.scheduler.add_job(
211 | self.synchronize, args=([connection], "interval"),
212 | max_instances=1, trigger="interval",
213 | minutes=connection.synchronization_interval)
214 |
215 | def synchronize(self, connections=None, synchronization="manual"):
216 | """
217 | Synchronize selected connections (or all) given a synchronization
218 | event.
219 | """
220 |
221 | count = 0
222 | connections = connections or self.connections.values()
223 |
224 | logger.debug("Synchronization triggered via '%s'.", synchronization)
225 |
226 | for connection in connections:
227 | if synchronization == "interval":
228 | if connection.synchronization == "interval":
229 | connection.synchronizer.synchronize()
230 | count += 1
231 | elif synchronization == "startup":
232 | if connection.synchronization == "startup":
233 | if not connection.synchronizer.is_initial_synced:
234 | connection.synchronizer.synchronize()
235 | count += 1
236 | elif synchronization == "manual":
237 | connection.synchronizer.synchronize()
238 | count += 1
239 |
240 | logger.debug("Synchronized %d connections.", count)
241 |
242 | # Update the cache.
243 | self.cache_manager.cache()
244 |
245 | def start(self):
246 | """
247 | Start the server.
248 | """
249 |
250 | logger.debug("Starting task scheduler.")
251 | self.scheduler.start()
252 |
253 | logger.debug("Starting DAAP server.")
254 | self.server.serve_forever()
255 |
256 | def stop(self):
257 | """
258 | Stop the server.
259 | """
260 |
261 | logger.debug("Stopping DAAP server.")
262 | self.server.stop()
263 |
264 | logger.debug("Stopping task scheduler.")
265 | self.scheduler.shutdown()
266 |
267 | def get_cache_dir(self, *path):
268 | """
269 | Resolve the path to a cache directory. The path is relative to the data
270 | directory. The directory will be created if it does not exist, and
271 | will be tested for writing.
272 | """
273 |
274 | full_path = os.path.abspath(os.path.normpath(
275 | os.path.join(self.data_dir, *path)))
276 | logger.debug("Resolved %s to %s", path, full_path)
277 |
278 | # Create path if required.
279 | try:
280 | os.makedirs(full_path, 0755)
281 | except OSError as e:
282 | if e.errno == errno.EEXIST and os.path.isdir(full_path):
283 | pass
284 | else:
285 | raise Exception("Could not create folder: %s" % full_path)
286 |
287 | # Test for writing.
288 | ok = True
289 | test_file = os.path.join(full_path, ".write-test")
290 |
291 | while os.path.exists(test_file):
292 | test_file = test_file + str(random.randint(0, 9))
293 |
294 | try:
295 | with open(test_file, "w") as fp:
296 | fp.write("test")
297 | except IOError:
298 | ok = False
299 | finally:
300 | try:
301 | os.remove(test_file)
302 | except OSError:
303 | ok = False
304 |
305 | if not ok:
306 | raise Exception("Could not write to cache folder: %s" % full_path)
307 |
308 | # Cache directory created and tested for writing.
309 | return full_path
310 |
--------------------------------------------------------------------------------
/subdaap/cache.py:
--------------------------------------------------------------------------------
1 | from subdaap.utils import human_bytes, exhaust
2 | from subdaap import stream
3 |
4 | from collections import OrderedDict
5 |
6 | import logging
7 | import gevent
8 | import time
9 | import mmap
10 | import os
11 |
12 | # Logger instance
13 | logger = logging.getLogger(__name__)
14 |
15 | # Time in seconds to wait for another item to finish, before failing.
16 | TIMEOUT_WAIT_FOR_READY = 60
17 |
18 |
19 | class FileCacheItem(object):
20 | __slots__ = (
21 | "lock", "ready", "uses", "size", "type", "iterator", "data",
22 | "permanent"
23 | )
24 |
25 | def __init__(self):
26 | self.lock = None
27 | self.ready = None
28 | self.uses = 0
29 |
30 | self.size = 0
31 | self.iterator = None
32 | self.data = None
33 | self.permanent = False
34 |
35 |
36 | class FileCache(object):
37 | def __init__(self, path, max_size, prune_threshold):
38 | """
39 | Construct a new file cache.
40 |
41 | :param str path: Path to the cache folder.
42 | :param int max_size: Maximum cache size (in MB), or 0 to disable.
43 | :param float prune_threshold: Percentage of size to prune when cache
44 | size exceeds maximum size.
45 | """
46 |
47 | # This attribute is used so often that a short alias keeps the code concise.
48 | self.name = self.__class__.__name__
49 |
50 | self.path = path
51 | self.max_size = max_size * 1024 * 1024
52 | self.prune_threshold = prune_threshold
53 | self.current_size = 0
54 |
55 | self.items = OrderedDict()
56 | self.items_lock = gevent.lock.Semaphore()
57 | self.prune_lock = gevent.lock.Semaphore()
58 |
59 | self.permanent_cache_keys = None
60 |
61 | def index(self, permanent_cache_keys):
62 | """
63 | Read the cache folder and determine its size by summing the file
64 | size of its contents.
65 | """
66 |
67 | self.permanent_cache_keys = permanent_cache_keys
68 |
69 | # Walk all files and sum their size
70 | for root, directories, files in os.walk(self.path):
71 | if directories:
72 | logger.warning(
73 | "%s: Found unexpected directories in cache path: %s",
74 | self.name, root)
75 |
76 | for cache_file in files:
77 | try:
78 | cache_file = os.path.join(self.path, cache_file)
79 | cache_key = self.cache_file_to_cache_key(cache_file)
80 | except ValueError:
81 | logger.warning(
82 | "%s: Found unexpected file in cache path: %s",
83 | self.name, cache_file)
84 | continue
85 |
86 | # Add it to the cache, but do not overwrite an existing item.
87 | if cache_key not in self.items:
88 | self.items[cache_key] = FileCacheItem()
89 |
90 | self.items[cache_key].size = os.stat(cache_file).st_size
91 | self.items[cache_key].permanent = \
92 | cache_key in permanent_cache_keys
93 |
94 | # Sum sizes of all non-permanent files
95 | size = 0
96 | count = 0
97 |
98 | for item in self.items.itervalues():
99 | if not item.permanent:
100 | size += item.size
101 | count += 1
102 |
103 | self.current_size = size
104 |
105 | logger.debug(
106 | "%s: %d files in cache (%d permanent), size is %s/%s",
107 | self.name, len(self.items), len(self.items) - count,
108 | human_bytes(self.current_size), human_bytes(self.max_size))
109 |
110 | def cache_key_to_cache_file(self, cache_key):
111 | """
112 | Get complete path to cache file, given a cache key.
113 |
114 | :param str cache_key:
115 | """
116 | return os.path.join(self.path, str(cache_key))
117 |
118 | def cache_file_to_cache_key(self, cache_file):
119 | """
120 | Get cache key, given a cache file.
121 |
122 | :param str cache_file:
123 | """
124 | return int(os.path.basename(cache_file))
125 |
126 | def get(self, cache_key):
127 | """
128 | Get item from the cache.
129 |
130 | :param str cache_key:
131 | """
132 |
133 | # Load item from cache. If it is found in cache, move it on top of the
134 | # OrderedDict, so it is marked as most-recently accessed and therefore
135 | # least likely to get pruned.
136 | new_item = False
137 | wait_for_ready = True
138 |
139 | # The lock is required to make sure that two concurrent I/O bounded
140 | # greenlets do not add or remove items at the same time (for instance
141 | # `self.prune`).
142 | with self.items_lock:
143 | try:
144 | cache_item = self.items[cache_key]
145 | del self.items[cache_key]
146 | self.items[cache_key] = cache_item
147 | except KeyError:
148 | self.items[cache_key] = cache_item = FileCacheItem()
149 | cache_item.permanent = cache_key in self.permanent_cache_keys
150 | new_item = True
151 |
152 | # The item can either be new, or it could have been unloaded in the past.
153 | if cache_item.ready is None or cache_item.lock is None:
154 | cache_item.ready = gevent.event.Event()
155 | cache_item.lock = gevent.lock.RLock()
156 | wait_for_ready = False
157 |
158 | # The file is not in cache, but we allocated an instance so the
159 | # caller can load it. This is actually needed to prevent a second
160 | # request from also loading it, hence the cache_item is not ready yet.
161 | if new_item:
162 | return cache_item
163 |
164 | # Wait until the cache_item is ready for use, e.g. another request is
165 | # downloading the file.
166 | if wait_for_ready:
167 | logger.debug(
168 | "%s: waiting for item '%s' to be ready.", self.name, cache_key)
169 |
170 | if not cache_item.ready.wait(timeout=TIMEOUT_WAIT_FOR_READY):
171 | raise Exception("Waiting for cache item timed out.")
172 |
173 | # This may happen when some greenlet is waiting for the item to
174 | # become ready after its flag was cleared in the expire method. It
175 | # is probably possible to recover by re-invoking this method, but we
176 | # first want to make sure that this situation actually occurs.
177 | if cache_item.ready is None:
178 | raise Exception("Item was unloaded while waiting.")
179 |
180 | logger.debug("%s: item '%s' is ready.", self.name, cache_key)
181 |
182 | # Load the item from disk if it is not loaded.
183 | if cache_item.iterator is None:
184 | cache_item.ready.clear()
185 | self.load(cache_key, cache_item)
186 |
187 | return cache_item
188 |
189 | def contains(self, cache_key):
190 | """
191 | Check if a certain cache key is in the cache.
192 | """
193 |
194 | with self.items_lock:
195 | return cache_key in self.items
196 |
197 | def expire(self):
198 | """
199 | Cleanup items (file descriptors etc.) that are not in use anymore.
200 | """
201 |
202 | candidates = []
203 |
204 | with self.items_lock:
205 | for cache_key, cache_item in self.items.iteritems():
206 | if cache_item.uses > 0:
207 | logger.debug(
208 | "%s: skipping file with key '%s' because it is %d "
209 | "times in use.", self.name, cache_key, cache_item.uses)
210 | continue
211 |
212 | # Check if it is getting ready, e.g. one greenlet is
213 | # downloading the file.
214 | if cache_item.ready and not cache_item.ready.is_set():
215 | continue
216 |
217 | # Item was ready and not in use, therefore clear the ready
218 | # flag so no one will use it.
219 | if cache_item.ready:
220 | cache_item.ready.clear()
221 |
222 | # Only unload items that have been ready.
223 | assert cache_item.iterator is not None
224 | candidates.append((cache_key, cache_item))
225 |
226 | for cache_key, cache_item in candidates:
227 | self.unload(cache_key, cache_item)
228 |
229 | cache_item.iterator = None
230 | cache_item.lock = None
231 | cache_item.ready = None
232 |
233 | if candidates:
234 | logger.debug("%s: expired %d files", self.name, len(candidates))
235 |
236 | def clean(self, force=False):
237 | """
238 | Prune items from the cache if `self.current_size` exceeds
239 | `self.max_size`. Only items that have been expired will be pruned,
240 | unless they are marked as permanent.
241 |
242 | :param bool force: If true, clean all items except permanent ones. This
243 | effectively removes all items from cache.
244 | """
245 |
246 | candidates = []
247 |
248 | with self.prune_lock, self.items_lock:
249 | # Check if cleanup is required.
250 | if force:
251 | logger.info(
252 | "%s: force cleaning all items from cache, except items "
253 | "that are in use or marked as permanent.", self.name)
254 | else:
255 | if not self.max_size or self.current_size < self.max_size:
256 | return
257 |
258 | # Determine candidates to remove.
259 | for cache_key, cache_item in self.items.iteritems():
260 | if not force:
261 | if self.current_size < \
262 | (self.max_size * (1.0 - self.prune_threshold)):
263 | break
264 |
265 | # Keep permanent items.
266 | if cache_item.permanent:
267 | continue
268 |
269 | # If `cache_item.ready` is not set, it is not loaded into
270 | # memory.
271 | if cache_item.ready is None:
272 | candidates.append((cache_key, cache_item))
273 | self.current_size -= cache_item.size
274 |
275 | del self.items[cache_key]
276 |
277 | # Actual removal of the files. At this point, the cache_item is not in
278 | # `self.items` anymore, so no other greenlet can retrieve it.
279 | for cache_key, cache_item in candidates:
280 | cache_file = self.cache_key_to_cache_file(cache_key)
281 |
282 | try:
283 | os.remove(cache_file)
284 | except OSError as e:
285 | logger.warning(
286 | "%s: unable to remove file '%s' from cache: %s",
287 | self.name, os.path.basename(cache_file), e)
288 |
289 | if candidates:
290 | logger.debug(
291 | "%s: pruned %d files, current size %s/%s (%d files).",
292 | self.name, len(candidates), human_bytes(self.current_size),
293 | human_bytes(self.max_size), len(self.items))
294 |
295 | def update(self, cache_key, cache_item, cache_file, file_size):
296 | if cache_item.size != file_size:
297 | if cache_item.size:
298 | logger.warning(
299 | "%s: file size of item '%s' changed from %d bytes to %d "
300 | "bytes while it was in cache.", self.name, cache_key,
301 | cache_item.size, file_size)
302 |
303 | # Correct the total cache size.
304 | if not cache_item.permanent:
305 | self.current_size -= cache_item.size
306 | self.current_size += file_size
307 |
308 | cache_item.size = file_size
309 |
310 | def download(self, cache_key, cache_item, remote_fd):
311 | start = time.time()
312 |
313 | def on_cache(file_size):
314 | """
315 | Executed when download finished. This method is executed with the
316 | `cache_item` locked.
317 | """
318 |
319 | logger.debug(
320 | "%s: downloading '%s' took %.2f seconds.", self.name,
321 | cache_key, time.time() - start)
322 |
323 | remote_fd.close()
324 | self.load(cache_key, cache_item)
325 |
326 | cache_file = self.cache_key_to_cache_file(cache_key)
327 | cache_item.iterator = stream.stream_from_remote(
328 | cache_item.lock, remote_fd, cache_file, on_cache=on_cache)
329 |
330 |
331 | class ArtworkCache(FileCache):
332 | def load(self, cache_key, cache_item):
333 | cache_file = self.cache_key_to_cache_file(cache_key)
334 |
335 | def on_start():
336 | cache_item.uses += 1
337 | logger.debug(
338 | "%s: incremented '%s' use to %d", self.name, cache_key,
339 | cache_item.uses)
340 |
341 | def on_finish():
342 | cache_item.uses -= 1
343 | logger.debug(
344 | "%s: decremented '%s' use to %d", self.name, cache_key,
345 | cache_item.uses)
346 |
347 | file_size = os.stat(cache_file).st_size
348 | cache_item.data = local_fd = open(cache_file, "rb")
349 |
350 | # Update cache item
351 | self.update(cache_key, cache_item, cache_file, file_size)
352 |
353 | cache_item.iterator = stream.stream_from_file(
354 | cache_item.lock, local_fd, file_size,
355 | on_start=on_start, on_finish=on_finish)
356 | cache_item.ready.set()
357 |
358 | def unload(self, cache_key, cache_item):
359 | if cache_item.data:
360 | cache_item.data.close()
361 | cache_item.data = None
362 |
363 |
364 | class ItemCache(FileCache):
365 | def load(self, cache_key, cache_item):
366 | cache_file = self.cache_key_to_cache_file(cache_key)
367 |
368 | def on_start():
369 | cache_item.uses += 1
370 | logger.debug(
371 | "%s: incremented '%s' use to %d.", self.name, cache_key,
372 | cache_item.uses)
373 |
374 | def on_finish():
375 | cache_item.uses -= 1
376 | logger.debug(
377 | "%s: decremented '%s' use to %d.", self.name, cache_key,
378 | cache_item.uses)
379 |
380 | file_size = os.stat(cache_file).st_size
381 |
382 | local_fd = open(cache_file, "r+b")
383 | mmap_fd = mmap.mmap(local_fd.fileno(), 0, prot=mmap.PROT_READ)
384 | cache_item.data = local_fd, mmap_fd
385 |
386 | # Update cache item
387 | self.update(cache_key, cache_item, cache_file, file_size)
388 |
389 | cache_item.iterator = stream.stream_from_buffer(
390 | cache_item.lock, mmap_fd, file_size,
391 | on_start=on_start, on_finish=on_finish)
392 | cache_item.ready.set()
393 |
394 | def unload(self, cache_key, cache_item):
395 | if cache_item.data:
396 | local_fd, mmap_fd = cache_item.data
397 |
398 | mmap_fd.close()
399 | local_fd.close()
400 |
401 | cache_item.data = None
402 |
403 |
404 | class CacheManager(object):
405 | """
406 | """
407 |
408 | def __init__(self, db, item_cache, artwork_cache, connections):
409 | self.db = db
410 | self.item_cache = item_cache
411 | self.artwork_cache = artwork_cache
412 | self.connections = connections
413 |
414 | self.setup_index()
415 |
416 | def setup_index(self):
417 | """
418 | """
419 |
420 | cached_items = self.get_cached_items()
421 |
422 | self.artwork_cache.index(cached_items)
423 | self.item_cache.index(cached_items)
424 |
425 | def get_cached_items(self):
426 | """
427 | Get all items that should be permanently cached, independent of which
428 | database.
429 | """
430 |
431 | with self.db.get_cursor() as cursor:
432 | return cursor.query_dict(
433 | """
434 | SELECT
435 | `items`.`id`,
436 | `items`.`database_id`,
437 | `items`.`remote_id`,
438 | `items`.`file_suffix`
439 | FROM
440 | `items`
441 | LEFT OUTER JOIN
442 | `artists` ON `items`.`artist_id`=`artists`.`id`
443 | LEFT OUTER JOIN
444 | `artists` AS `album_artists` ON
445 | `items`.`album_artist_id` = `album_artists`.`id`
446 | LEFT OUTER JOIN
447 | `albums` ON `items`.`album_id`=`albums`.`id`
448 | WHERE
449 | (
450 | `items`.`cache` = 1 OR
451 | COALESCE(`artists`.`cache`, 0) = 1 OR
452 | COALESCE(`album_artists`.`cache`, 0) = 1 OR
453 | COALESCE(`albums`.`cache`, 0) = 1
454 | ) AND
455 | `items`.`exclude` = 0 AND
456 | COALESCE(`artists`.`exclude`, 0) = 0 AND
457 | COALESCE(`album_artists`.`exclude`, 0) = 0 AND
458 | COALESCE(`albums`.`exclude`, 0) = 0
459 | """)
460 |
461 | def cache(self):
462 | """
463 | Update the caches with all items that should be permanently cached.
464 | """
465 |
466 | cached_items = self.get_cached_items()
467 | logger.info("Caching %d permanent items.", len(cached_items))
468 |
469 | for item_id in cached_items:
470 | database_id = cached_items[item_id]["database_id"]
471 | remote_id = cached_items[item_id]["remote_id"]
472 | file_suffix = cached_items[item_id]["file_suffix"]
473 |
474 | # Artwork
475 | if item_id not in self.artwork_cache.items:
476 | logger.debug("Artwork with key '%d' not in cache.", item_id)
477 | cache_item = self.artwork_cache.get(item_id)
478 |
479 | if not cache_item.ready.is_set():
480 | remote_fd = self.connections[database_id].get_artwork_fd(
481 | remote_id, file_suffix)
482 | self.artwork_cache.download(item_id, cache_item, remote_fd)
483 |
484 | # Exhaust iterator so it downloads the artwork.
485 | exhaust(cache_item.iterator())
486 |
487 | # Items
488 | if item_id not in self.item_cache.items:
489 | logger.debug("Item with key '%d' not in cache.", item_id)
490 | cache_item = self.item_cache.get(item_id)
491 |
492 | if not cache_item.ready.is_set():
493 | remote_fd = self.connections[database_id].get_item_fd(
494 | remote_id, file_suffix)
495 | self.item_cache.download(item_id, cache_item, remote_fd)
496 |
497 | # Exhaust iterator so it downloads the item.
498 | exhaust(cache_item.iterator())
499 |
500 | # Cleanup left-overs of all items that are loaded.
501 | self.expire()
502 |
503 | logger.info("Caching permanent items finished.")
504 |
505 | def expire(self):
506 | """
507 | """
508 |
509 | self.item_cache.expire()
510 | self.artwork_cache.expire()
511 |
512 | def clean(self, force=False):
513 | """
514 | """
515 |
516 | self.item_cache.clean(force)
517 | self.artwork_cache.clean(force)
518 |
--------------------------------------------------------------------------------
/subdaap/collection.py:
--------------------------------------------------------------------------------
1 | from daapserver import collection
2 |
3 | from subdaap import utils
4 |
5 |
6 | class LazyMutableCollection(collection.LazyMutableCollection):
7 |
8 | __slots__ = collection.LazyMutableCollection.__slots__ + ("child_class", )
9 |
10 | def count(self):
11 | """
12 | """
13 |
14 | # Prepare query depending on `self.child_class`. Use name to prevent
15 | # cyclic imports.
16 | child_class_name = self.child_class.__name__
17 |
18 | if child_class_name == "Database":
19 | query = """
20 | SELECT
21 | COUNT(*)
22 | FROM
23 | `databases`
24 | WHERE
25 | `databases`.`exclude` = 0
26 | LIMIT 1
27 | """,
28 | elif child_class_name == "Item":
29 | query = """
30 | SELECT
31 | COUNT(*)
32 | FROM
33 | `items`
34 | LEFT OUTER JOIN
35 | `artists` ON `items`.`artist_id` = `artists`.`id`
36 | LEFT OUTER JOIN
37 | `artists` AS `album_artists` ON
38 | `items`.`album_artist_id` = `album_artists`.`id`
39 | LEFT OUTER JOIN
40 | `albums` ON `items`.`album_id` = `albums`.`id`
41 | WHERE
42 | `items`.`database_id` = ? AND
43 | `items`.`exclude` = 0 AND
44 | COALESCE(`artists`.`exclude`, 0) = 0 AND
45 | COALESCE(`album_artists`.`exclude`, 0) = 0 AND
46 | COALESCE(`albums`.`exclude`, 0) = 0
47 | LIMIT 1
48 | """, self.parent.id
49 | elif child_class_name == "Container":
50 | query = """
51 | SELECT
52 | COUNT(*)
53 | FROM
54 | `containers`
55 | WHERE
56 | `containers`.`database_id` = ? AND
57 | `containers`.`exclude` = 0
58 | LIMIT 1
59 | """, self.parent.id
60 | elif child_class_name == "ContainerItem":
61 | query = """
62 | SELECT
63 | COUNT(*)
64 | FROM
65 | `container_items`
66 | INNER JOIN
67 | `items` ON `container_items`.`item_id` = `items`.`id`
68 | LEFT OUTER JOIN
69 | `artists` ON `items`.`artist_id` = `artists`.`id`
70 | LEFT OUTER JOIN
71 | `artists` AS `album_artists` ON
72 | `items`.`album_artist_id` = `album_artists`.`id`
73 | LEFT OUTER JOIN
74 | `albums` ON `items`.`album_id` = `albums`.`id`
75 | WHERE
76 | `container_items`.`database_id` = ? AND
77 | `container_items`.`container_id` = ? AND
78 | COALESCE(`items`.`exclude`, 0) = 0 AND
79 | COALESCE(`artists`.`exclude`, 0) = 0 AND
80 | COALESCE(`album_artists`.`exclude`, 0) = 0 AND
81 | COALESCE(`albums`.`exclude`, 0) = 0
82 | LIMIT 1
83 | """, self.parent.id, self.parent.database_id
84 |
85 | # Execute query.
86 | with self.parent.db.get_cursor() as cursor:
87 | return cursor.query_value(*query)
88 |
89 | def load(self, item_ids=None):
90 | """
91 | """
92 |
93 | # Only one invocation at a time.
94 | if self.busy:
95 | raise ValueError("Already busy loading items.")
96 |
97 | # Prepare query depending on `self.child_class`. Use name to prevent
98 | # cyclic imports.
99 | child_class_name = self.child_class.__name__
100 |
101 | if item_ids:
102 | if child_class_name == "Database":
103 | in_clause = " AND `databases`.`id` IN (%s)"
104 | elif child_class_name == "Item":
105 | in_clause = " AND `items`.`id` IN (%s)"
106 | elif child_class_name == "Container":
107 | in_clause = " AND `containers`.`id` IN (%s)"
108 | elif child_class_name == "ContainerItem":
109 | in_clause = " AND `container_items`.`id` IN (%s)"
110 |
111 | in_clause = in_clause % utils.in_list(item_ids)
112 | else:
113 | in_clause = ""
114 |
115 | if child_class_name == "Database":
116 | query = """
117 | SELECT
118 | `databases`.`id`,
119 | `databases`.`persistent_id`,
120 | `databases`.`name`
121 | FROM
122 | `databases`
123 | WHERE
124 | `databases`.`exclude` = 0
125 | %s
126 | """ % in_clause,
127 | elif child_class_name == "Item":
128 | query = """
129 | SELECT
130 | `items`.`id`,
131 | `items`.`database_id`,
132 | `items`.`persistent_id`,
133 | `items`.`remote_id`,
134 | `items`.`name`,
135 | `items`.`track`,
136 | `items`.`year`,
137 | `items`.`bitrate`,
138 | `items`.`duration`,
139 | `items`.`file_size`,
140 | `items`.`file_name`,
141 | `items`.`file_type`,
142 | `items`.`file_suffix`,
143 | `items`.`genre`,
144 | `artists`.`name` as `artist`,
145 | `album_artists`.`name` as `album_artist`,
146 | `albums`.`name` as `album`,
147 | `albums`.`art` as `album_art`
148 | FROM
149 | `items`
150 | LEFT OUTER JOIN
151 | `artists` ON `items`.`artist_id` = `artists`.`id`
152 | LEFT OUTER JOIN
153 | `artists` AS `album_artists` ON
154 | `items`.`album_artist_id` = `album_artists`.`id`
155 | LEFT OUTER JOIN
156 | `albums` ON `items`.`album_id` = `albums`.`id`
157 | WHERE
158 | `items`.`database_id` = ? AND
159 | `items`.`exclude` = 0 AND
160 | COALESCE(`artists`.`exclude`, 0) = 0 AND
161 | COALESCE(`album_artists`.`exclude`, 0) = 0 AND
162 | COALESCE(`albums`.`exclude`, 0) = 0
163 | %s
164 | """ % in_clause, self.parent.id
165 | elif child_class_name == "Container":
166 | query = """
167 | SELECT
168 | `containers`.`id`,
169 | `containers`.`database_id`,
170 | `containers`.`persistent_id`,
171 | `containers`.`parent_id`,
172 | `containers`.`name`,
173 | `containers`.`is_base`,
174 | `containers`.`is_smart`
175 | FROM
176 | `containers`
177 | WHERE
178 | `containers`.`database_id` = ? AND
179 | `containers`.`exclude` = 0
180 | %s
181 | """ % in_clause, self.parent.id
182 | elif child_class_name == "ContainerItem":
183 | query = """
184 | SELECT
185 | `container_items`.`id`,
186 | `container_items`.`item_id`,
187 | `container_items`.`container_id`
188 | FROM
189 | `container_items`
190 | INNER JOIN
191 | `items` ON `container_items`.`item_id` = `items`.`id`
192 | LEFT OUTER JOIN
193 | `artists` ON `items`.`artist_id` = `artists`.`id`
194 | LEFT OUTER JOIN
195 | `artists` AS `album_artists` ON
196 | `items`.`album_artist_id` = `album_artists`.`id`
197 | LEFT OUTER JOIN
198 | `albums` ON `items`.`album_id` = `albums`.`id`
199 | WHERE
200 | `container_items`.`container_id` = ? AND
201 | COALESCE(`items`.`exclude`, 0) = 0 AND
202 | COALESCE(`artists`.`exclude`, 0) = 0 AND
203 | COALESCE(`album_artists`.`exclude`, 0) = 0 AND
204 | COALESCE(`albums`.`exclude`, 0) = 0
205 | %s
206 | """ % in_clause, self.parent.id
207 |
208 | # Execute query.
209 | try:
210 | self.busy = True
211 |
212 | # Convert rows to items. Iterate over chunks for cache
213 | # improvements.
214 | store = self.store
215 | child_class = self.child_class
216 | db = self.parent.db
217 |
218 | with self.parent.db.get_cursor() as cursor:
219 | for rows in utils.chunks(cursor.query(*query), 25):
220 | for row in rows:
221 | # Update an existing item
222 | if item_ids:
223 | try:
224 | item = store.get(row["id"])
225 |
226 | for key in row.keys():
227 | setattr(item, key, row[key])
228 | except KeyError:
229 | item = child_class(db, **row)
230 | else:
231 | item = child_class(db, **row)
232 |
233 | # Add to store
234 | store.add(item.id, item)
235 |
236 | # Yield result
237 | self.iter_item = item
238 | yield item
239 |
240 | # Final actions after all items have been loaded
241 | if not item_ids:
242 | self.ready = True
243 |
244 | if self.pending_commit != -1:
245 | revision = self.pending_commit
246 | self.pending_commit = -1
247 | self.commit(revision)
248 | finally:
249 | self.busy = False
250 |
--------------------------------------------------------------------------------
/subdaap/config.py:
--------------------------------------------------------------------------------
1 | from cStringIO import StringIO
2 |
3 | from configobj import ConfigObj, flatten_errors
4 | from validate import Validator, is_string_list
5 |
6 | import logging
7 |
8 | # Logger instance
9 | logger = logging.getLogger(__name__)
10 |
11 | # Config file specification
12 | CONFIG_VERSION = 4
13 | CONFIG_SPEC = """
14 | version = integer(min=1, default=%d)
15 |
16 | [Connections]
17 |
18 | [[__many__]]
19 | url = string
20 | username = string
21 | password = string
22 |
23 | synchronization = option("manual", "startup", "interval", default="interval")
24 | synchronization interval = integer(min=1, default=1440)
25 |
26 | transcode = option("no", "unsupported", "all", default="no")
27 | transcode unsupported = lowercase_string_list(default=list("flac"))
28 |
29 | [Daap]
30 | interface = string(default="0.0.0.0")
31 | port = integer(min=1, max=65535, default=3689)
32 | password = string(default="")
33 | web interface = boolean(default=True)
34 | zeroconf = boolean(default=True)
35 | cache = boolean(default=True)
36 | cache timeout = integer(min=1, default=1440)
37 |
38 | [Provider]
39 | name = string
40 | database = string(default="./database.db")
41 |
42 | artwork = boolean(default=True)
43 | artwork cache = boolean(default=True)
44 | artwork cache dir = string(default="./artwork")
45 | artwork cache size = integer(min=0, default=0)
46 | artwork cache prune threshold = float(min=0, max=1.0, default=0.1)
47 |
48 | item cache = boolean(default=True)
49 | item cache dir = string(default="./items")
50 | item cache size = integer(min=0, default=0)
51 | item cache prune threshold = float(min=0, max=1.0, default=0.25)
52 | item cache prune interval = integer(min=1, default=5)
53 |
54 | [Advanced]
55 | open files limit = integer(min=-1, default=-1)
56 | """ % CONFIG_VERSION
57 |
58 |
59 | def lowercase_string_list(value, min=None, max=None):
60 | """
61 | Custom ConfigObj validator that returns a list of lowercase
62 | items.
63 | """
64 | validated_string_list = is_string_list(value, min, max)
65 | return [x.lower() for x in validated_string_list]
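# For example, lowercase_string_list(["FLAC", "M4A"]) would return
# ["flac", "m4a"] (a sketch; actual values come from the config file).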
66 |
67 |
68 | def get_config(config_file):
69 | """
70 | Parse the config file, validate it and convert types. Return a dictionary
71 | with all settings.
72 | """
73 |
74 | specs = ConfigObj(StringIO(CONFIG_SPEC), list_values=False)
75 | config = ConfigObj(config_file, configspec=specs)
76 |
77 | # Create validator
78 | validator = Validator({'lowercase_string_list': lowercase_string_list})
79 |
80 | # Convert types and validate file
81 | result = config.validate(validator, preserve_errors=True, copy=True)
82 | logger.debug("Config file version %d", config["version"])
83 |
84 | # Raise exceptions for errors
85 | for section_list, key, message in flatten_errors(config, result):
86 | if key is not None:
87 | raise ValueError(
88 | "The '%s' key in the section '%s' failed validation: %s" % (
89 | key, ", ".join(section_list), message))
90 | else:
91 | raise ValueError(
92 | "The following section was missing: %s." % (
93 | ", ".join(section_list)))
94 |
95 | # For now, no automatic update support.
96 | if config["version"] != CONFIG_VERSION:
97 | logger.warning(
98 | "Config file version is %d, while expected version is %d. Please "
99 | "check for inconsistencies and update manually.",
100 | config["version"], CONFIG_VERSION)
101 |
102 | return config
103 |
--------------------------------------------------------------------------------
/subdaap/connection.py:
--------------------------------------------------------------------------------
1 | import collections
2 |
3 | from subdaap.subsonic import SubsonicClient
4 | from subdaap.synchronizer import Synchronizer
5 |
6 | import logging
7 |
8 | # Logger instance
9 | logger = logging.getLogger(__name__)
10 |
11 |
12 | class Connection(object):
13 | """
14 | A connection represents a remote server and provides all the instances
15 | required to connect and synchronize.
16 | """
17 | transcode_format = collections.defaultdict(lambda: 'audio/mpeg')
18 |
19 | def __init__(self, state, db, index, name, url, username, password,
20 | synchronization, synchronization_interval, transcode,
21 | transcode_unsupported):
22 | """
23 | Construct a new connection.
24 |
25 | :param State state: Global state object.
26 | :param Database db: Database object.
27 | :param int index: Index number that maps to a database model.
28 | :param str name: Name of the server and main container.
29 | :param str url: Remote Subsonic URL.
30 | :param str username: Remote Subsonic username.
31 | :param str password: Remote Subsonic password.
32 | :param str synchronization: Either 'manual', 'startup' or 'interval'.
33 | :param int synchronization_interval: Synchronization interval time in
34 | minutes.
35 | :param str transcode: Either 'all', 'unsupported' or 'no'.
36 |         :param list transcode_unsupported: List of file extensions that are
37 |             not supported and will therefore be transcoded.
38 | """
39 |
40 | self.db = db
41 | self.state = state
42 |
43 | self.index = index
44 | self.name = name
45 |
46 | self.url = url
47 | self.username = username
48 | self.password = password
49 |
50 | self.synchronization = synchronization
51 | self.synchronization_interval = synchronization_interval
52 |
53 | self.transcode = transcode
54 | self.transcode_unsupported = transcode_unsupported
55 |
56 | self.setup_subsonic()
57 | self.setup_synchronizer()
58 |
59 | def setup_subsonic(self):
60 | """
61 |         Set up a new Subsonic connection.
62 | """
63 |
64 | self.subsonic = SubsonicClient(
65 | url=self.url,
66 | username=self.username,
67 | password=self.password)
68 |
69 | def setup_synchronizer(self):
70 | """
71 |         Set up a new synchronizer.
72 | """
73 |
74 | self.synchronizer = Synchronizer(
75 | db=self.db, state=self.state, index=self.index, name=self.name,
76 | subsonic=self.subsonic)
77 |
78 | def needs_transcoding(self, file_suffix):
79 | """
80 |         Return True if the given file suffix needs transcoding, or if
81 |         transcoding of all files is enabled.
82 | """
83 |
84 | return self.transcode == "all" or (
85 | self.transcode == "unsupported" and
86 | file_suffix.lower() in self.transcode_unsupported)
87 |
88 | def get_item_fd(self, remote_id, file_suffix):
89 | """
90 |         Get a file descriptor for an item via the remote connection, based
91 |         on the transcoding settings.
92 | """
93 |
94 | if self.needs_transcoding(file_suffix):
95 | logger.debug(
96 | "Transcoding item '%d' with file suffix '%s'.",
97 | remote_id, file_suffix)
98 | return self.subsonic.stream(
99 |                 remote_id, tformat=self.transcode_format[file_suffix])
100 | else:
101 | return self.subsonic.download(remote_id)
102 |
103 | def get_artwork_fd(self, remote_id, file_suffix):
104 | """
105 |         Get a file descriptor for an artwork item via the remote connection.
106 | """
107 |
108 | return self.subsonic.getCoverArt(remote_id)
109 |
--------------------------------------------------------------------------------
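
A short sketch of the `needs_transcoding` decision above. The instance is created via `__new__` purely for illustration, since a full `Connection` would require a live database and Subsonic server:

    from subdaap.connection import Connection

    conn = Connection.__new__(Connection)  # bypass __init__ (sketch only)
    conn.transcode = "unsupported"
    conn.transcode_unsupported = ["flac", "ape"]

    assert conn.needs_transcoding("FLAC")       # matched case-insensitively
    assert not conn.needs_transcoding("mp3")    # supported, streamed as-is

    conn.transcode = "all"
    assert conn.needs_transcoding("mp3")        # everything is transcoded
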
/subdaap/database.py:
--------------------------------------------------------------------------------
1 | from contextlib import contextmanager
2 |
3 | from gevent import lock
4 |
5 | import sqlite3
6 | import logging
7 |
8 | # Logger instance
9 | logger = logging.getLogger(__name__)
10 |
11 |
12 | class Database(object):
13 | """
14 | The Database instance handles all database interactions.
15 | """
16 |
17 | def __init__(self, database_file):
18 | self.lock = lock.RLock()
19 |
20 | logger.info("Loading database from %s.", database_file)
21 | self.connection = sqlite3.connect(database_file)
22 | self.connection.row_factory = sqlite3.Row
23 | self.connection.text_factory = sqlite3.OptimizedUnicode
24 |
25 | @contextmanager
26 | def get_write_cursor(self):
27 | """
28 | Get cursor instance with locking.
29 |
30 | If the query fails due to an exception, a rollback will be performed.
31 |
32 | :return: Cursor instance that is locked for writing.
33 | :rtype: Cursor
34 | """
35 |
36 | with self.lock:
37 | cursor = self.connection.cursor(Cursor)
38 |
39 | try:
40 | yield cursor
41 | self.connection.commit()
42 | except Exception:
43 | self.connection.rollback()
44 | raise
45 | finally:
46 | cursor.close()
47 |
48 | @contextmanager
49 | def get_cursor(self):
50 | """
51 | Get cursor instance without locking.
52 |
53 | :return: Cursor instance for reading.
54 | :rtype: Cursor
55 | """
56 |
57 | cursor = self.connection.cursor(Cursor)
58 |
59 | try:
60 | yield cursor
61 | finally:
62 | cursor.close()
63 |
64 | def create_database(self, drop_all=True):
65 | """
66 |         Create the default tables. Drop existing ones if `drop_all` is `True`.
67 |
68 | :param bool drop_all: Drop existing tables if they exist. All data will
69 | be lost.
70 | """
71 |
72 | with self.lock:
73 | # Add extra SQL to drop all tables if desired
74 | if drop_all:
75 | extra = """
76 | DROP TABLE IF EXISTS `container_items`;
77 | DROP TABLE IF EXISTS `containers`;
78 | DROP TABLE IF EXISTS `items`;
79 | DROP TABLE IF EXISTS `artists`;
80 | DROP TABLE IF EXISTS `albums`;
81 | DROP TABLE IF EXISTS `databases`;
82 | """
83 | else:
84 | extra = ""
85 |
86 | # Create table query
87 | with self.connection:
88 | self.connection.executescript(
89 | extra + """
90 | CREATE TABLE IF NOT EXISTS `databases` (
91 | `id` INTEGER PRIMARY KEY,
92 | `persistent_id` INTEGER NOT NULL,
93 | `name` varchar(255) NOT NULL,
94 | `exclude` tinyint(1) DEFAULT 0,
95 | `checksum` int(11) NOT NULL,
96 | `remote_id` int(11) DEFAULT NULL
97 | );
98 | CREATE TABLE IF NOT EXISTS `artists` (
99 | `id` INTEGER PRIMARY KEY,
100 | `database_id` int(11) NOT NULL,
101 | `name` varchar(255) NOT NULL,
102 | `exclude` tinyint(1) DEFAULT 0,
103 | `cache` tinyint(1) DEFAULT 0,
104 | `checksum` int(11) NOT NULL,
105 | `remote_id` int(11) DEFAULT NULL,
106 | CONSTRAINT `artist_fk_1` FOREIGN KEY (`database_id`)
107 | REFERENCES `databases` (`id`)
108 | );
109 | CREATE TABLE IF NOT EXISTS `albums` (
110 | `id` INTEGER PRIMARY KEY,
111 | `database_id` int(11) NOT NULL,
112 | `artist_id` int(11) DEFAULT NULL,
113 | `name` varchar(255) NOT NULL,
114 | `art` tinyint(1) DEFAULT NULL,
115 | `art_name` varchar(512) DEFAULT NULL,
116 | `art_type` varchar(255) DEFAULT NULL,
117 | `art_size` int(11) DEFAULT NULL,
118 | `exclude` tinyint(1) DEFAULT 0,
119 | `cache` tinyint(1) DEFAULT 0,
120 | `checksum` int(11) NOT NULL,
121 | `remote_id` int(11) DEFAULT NULL,
122 | CONSTRAINT `album_fk_1` FOREIGN KEY (`database_id`)
123 | REFERENCES `databases` (`id`),
124 | CONSTRAINT `album_fk_2` FOREIGN KEY (`artist_id`)
125 | REFERENCES `artists` (`id`)
126 | );
127 | CREATE TABLE IF NOT EXISTS `items` (
128 | `id` INTEGER PRIMARY KEY,
129 | `persistent_id` INTEGER NOT NULL,
130 | `database_id` int(11) NOT NULL,
131 | `artist_id` int(11) DEFAULT NULL,
132 | `album_artist_id` int(11) DEFAULT NULL,
133 | `album_id` int(11) DEFAULT NULL,
134 | `name` varchar(255) DEFAULT NULL,
135 | `genre` varchar(255) DEFAULT NULL,
136 | `year` int(11) DEFAULT NULL,
137 | `track` int(11) DEFAULT NULL,
138 | `duration` int(11) DEFAULT NULL,
139 | `bitrate` int(11) DEFAULT NULL,
140 | `file_name` varchar(512) DEFAULT NULL,
141 | `file_type` varchar(255) DEFAULT NULL,
142 | `file_suffix` varchar(32) DEFAULT NULL,
143 | `file_size` int(11) DEFAULT NULL,
144 | `exclude` tinyint(1) DEFAULT 0,
145 | `cache` tinyint(1) DEFAULT 0,
146 | `checksum` int(11) NOT NULL,
147 | `remote_id` int(11) DEFAULT NULL,
148 | CONSTRAINT `item_fk_1` FOREIGN KEY (`database_id`)
149 | REFERENCES `databases` (`id`),
150 | CONSTRAINT `item_fk_2` FOREIGN KEY (`album_id`)
151 | REFERENCES `albums` (`id`),
152 | CONSTRAINT `item_fk_3` FOREIGN KEY (`artist_id`)
153 |                             REFERENCES `artists` (`id`),
154 | CONSTRAINT `item_fk_4` FOREIGN KEY (`album_artist_id`)
155 | REFERENCES `artists` (`id`)
156 | );
157 | CREATE TABLE IF NOT EXISTS `containers` (
158 | `id` INTEGER PRIMARY KEY,
159 | `persistent_id` INTEGER NOT NULL,
160 | `database_id` int(11) NOT NULL,
161 | `parent_id` int(11) DEFAULT NULL,
162 | `name` varchar(255) NOT NULL,
163 | `is_base` int(1) NOT NULL,
164 | `is_smart` int(1) NOT NULL,
165 | `exclude` tinyint(1) DEFAULT 0,
166 | `cache` tinyint(1) DEFAULT 0,
167 | `checksum` int(11) NOT NULL,
168 | `remote_id` int(11) DEFAULT NULL,
169 | CONSTRAINT `container_fk_1` FOREIGN KEY (`database_id`)
170 |                             REFERENCES `databases` (`id`),
171 | CONSTRAINT `container_fk_2` FOREIGN KEY (`parent_id`)
172 | REFERENCES `containers` (`id`)
173 | );
174 | CREATE TABLE IF NOT EXISTS `container_items` (
175 | `id` INTEGER PRIMARY KEY,
176 | `database_id` int(11) NOT NULL,
177 | `container_id` int(11) NOT NULL,
178 | `item_id` int(11) NOT NULL,
179 | `order` int(11) DEFAULT NULL,
180 | CONSTRAINT `container_item_fk_1`
181 | FOREIGN KEY (`database_id`)
182 |                                 REFERENCES `databases` (`id`),
183 | CONSTRAINT `container_item_fk_2`
184 | FOREIGN KEY (`container_id`)
185 | REFERENCES `containers` (`id`)
186 | );
187 | """)
188 |
189 |
190 | class Cursor(sqlite3.Cursor):
191 | """
192 | Cursor wrapper to add useful methods to the default Cursor object.
193 | """
194 |
195 | def query_value(self, query, *args):
196 | """
197 | """
198 | return self.execute(query, args).fetchone()[0]
199 |
200 | def query_dict(self, query, *args):
201 | """
202 | """
203 | result = dict()
204 |
205 | for row in self.execute(query, args):
206 | row_d = dict(row)
207 | try:
208 | result[int(row[0])] = row_d
209 | except ValueError:
210 | result[row[0]] = row_d
211 |
212 | return result
213 |
214 | def query(self, query, *args):
215 | """
216 | """
217 | return self.execute(query, args)
218 |
219 | def query_one(self, query, *args):
220 | """
221 | """
222 |
223 | return self.execute(query, args).fetchone()
224 |
--------------------------------------------------------------------------------
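
A minimal sketch of the cursor helpers above, using an in-memory SQLite database (SubDaap itself passes a file path):

    from subdaap.database import Database

    db = Database(":memory:")
    db.create_database(drop_all=False)

    with db.get_write_cursor() as cursor:  # locks, commits or rolls back
        cursor.query(
            "INSERT INTO `databases` (`persistent_id`, `name`, `checksum`) "
            "VALUES (?, ?, ?)", 1234, "Music", 0)

    with db.get_cursor() as cursor:  # read-only, no locking
        # Rows keyed on the first selected column, converted to int.
        rows = cursor.query_dict("SELECT `id`, `name` FROM `databases`")
        count = cursor.query_value("SELECT COUNT(*) FROM `databases`")
        assert count == len(rows) == 1
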
/subdaap/models.py:
--------------------------------------------------------------------------------
1 | from daapserver import models
2 |
3 | from subdaap.collection import LazyMutableCollection
4 |
5 |
6 | class Server(models.Server):
7 | """
8 | Database-aware Server object.
9 | """
10 |
11 | __slots__ = models.Server.__slots__ + ("db", )
12 |
13 | databases_collection_class = LazyMutableCollection
14 |
15 | def __init__(self, db, *args, **kwargs):
16 | super(Server, self).__init__(*args, **kwargs)
17 | self.db = db
18 |
19 | # Required for database -> object conversion
20 | self.databases.child_class = Database
21 |
22 |
23 | class Database(models.Database):
24 | """
25 | Database-aware Database object.
26 | """
27 |
28 | __slots__ = models.Database.__slots__ + ("db", )
29 |
30 | items_collection_class = LazyMutableCollection
31 | containers_collection_class = LazyMutableCollection
32 |
33 | def __init__(self, db, *args, **kwargs):
34 | super(Database, self).__init__(*args, **kwargs)
35 | self.db = db
36 |
37 | # Required for database -> object conversion
38 | self.items.child_class = Item
39 | self.containers.child_class = Container
40 |
41 |
42 | class Item(models.Item):
43 | """
44 | Database-aware Item object.
45 | """
46 |
47 | __slots__ = models.Item.__slots__ + ("remote_id", )
48 |
49 | def __init__(self, db, *args, **kwargs):
50 | super(Item, self).__init__(*args, **kwargs)
51 |
52 |
53 | class Container(models.Container):
54 | """
55 | Database-aware Container object.
56 | """
57 |
58 | __slots__ = models.Container.__slots__ + ("db", )
59 |
60 | container_items_collection_class = LazyMutableCollection
61 |
62 | def __init__(self, db, *args, **kwargs):
63 | super(Container, self).__init__(*args, **kwargs)
64 | self.db = db
65 |
66 | # Required for database -> object conversion
67 | self.container_items.child_class = ContainerItem
68 |
69 |
70 | class ContainerItem(models.ContainerItem):
71 | """
72 | Database-aware ContainerItem object.
73 | """
74 |
75 | __slots__ = models.ContainerItem.__slots__
76 |
77 | def __init__(self, db, *args, **kwargs):
78 | super(ContainerItem, self).__init__(*args, **kwargs)
79 |
--------------------------------------------------------------------------------
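
The models above all follow the same pattern: extend a slotted base class with extra attributes by concatenating `__slots__`, so instances stay small and never gain a `__dict__`. The pattern in isolation (names are illustrative, independent of daapserver):

    class Base(object):
        __slots__ = ("id", "name")

    class DatabaseAware(Base):
        # Re-listing the parent slots mirrors the convention used above.
        __slots__ = Base.__slots__ + ("db", )

    obj = DatabaseAware()
    obj.db = "connection"
    # obj.other = 1  # AttributeError: instances have no __dict__
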
/subdaap/monkey.py:
--------------------------------------------------------------------------------
1 | from gevent.monkey import patch_all
2 |
3 |
4 | def patch_pypy():
5 | """
6 | Monkey patch PyPy so SubDaap works.
7 | """
8 |
9 |     # Check if running under PyPy.
10 | try:
11 | import __pypy__ # noqa
12 | except ImportError:
13 | return
14 |
15 | # Patch for missing py3k acquire.
16 | # See https://github.com/gevent/gevent/issues/248 for more information.
17 | from gevent.lock import Semaphore
18 |
19 | if not hasattr(Semaphore, "_py3k_acquire"):
20 | Semaphore._py3k_acquire = Semaphore.acquire
21 |
22 |     # Patch for SQLite threading issue. Since SubDaap uses greenlets
23 | # (microthreads) and no actual threads, disable the warning. This only
24 | # happens with PyPy.
25 | import sqlite3
26 |
27 | old_connect = sqlite3.connect
28 | sqlite3.connect = lambda x: old_connect(x, check_same_thread=False)
29 |
30 |
31 | def patch_zeroconf():
32 | """
33 | Monkey patch Zeroconf so the select timeout can be disabled when running
34 | with gevent. Saves some wakeups.
35 | """
36 |
37 | import zeroconf
38 |
39 | def new_init(self, *args, **kwargs):
40 | old_init(self, *args, **kwargs)
41 | self.timeout = None
42 |
43 | old_init = zeroconf.Engine.__init__
44 | zeroconf.Engine.__init__ = new_init
45 |
46 |
47 | # Apply all patches
48 | patch_all()
49 | patch_pypy()
50 | patch_zeroconf()
51 |
--------------------------------------------------------------------------------
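
Note that all patches are applied as an import side effect (the calls at the bottom of the module). A hypothetical entry point would therefore have to import this module before anything that touches sockets, sqlite3 or zeroconf (sketch, not taken from the repository):

    # Must come first: applies patch_all(), patch_pypy(), patch_zeroconf().
    import subdaap.monkey  # noqa

    # Safe afterwards: sqlite3 and the socket module are already patched.
    from subdaap.database import Database
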
/subdaap/provider.py:
--------------------------------------------------------------------------------
1 | from subdaap.models import Server
2 |
3 | from daapserver.utils import generate_persistent_id
4 | from daapserver import provider
5 |
6 | import logging
7 |
8 | # Logger instance
9 | logger = logging.getLogger(__name__)
10 |
11 |
12 | class Provider(provider.Provider):
13 |
14 | # SubSonic has support for artwork.
15 | supports_artwork = True
16 |
17 | # Persistent IDs are supported.
18 | supports_persistent_id = True
19 |
20 | def __init__(self, server_name, db, state, connections, cache_manager):
21 | """
22 | """
23 |
24 | super(Provider, self).__init__()
25 |
26 | self.server_name = server_name
27 | self.db = db
28 | self.state = state
29 | self.connections = connections
30 | self.cache_manager = cache_manager
31 |
32 | self.setup_state()
33 | self.setup_server()
34 |
35 | def setup_state(self):
36 | """
37 | """
38 |
39 | if "persistent_id" not in self.state:
40 | self.state["persistent_id"] = generate_persistent_id()
41 |
42 | def setup_server(self):
43 | """
44 | """
45 |
46 | self.server = Server(db=self.db)
47 |
48 | # Set server name and persistent ID.
49 | self.server.name = self.server_name
50 | self.server.persistent_id = self.state["persistent_id"]
51 |
52 | def get_artwork_data(self, session, item):
53 | """
54 | Get artwork data from cache or remote.
55 | """
56 |
57 | cache_item = self.cache_manager.artwork_cache.get(item.id)
58 |
59 | if cache_item.iterator is None:
60 | remote_fd = self.connections[item.database_id].get_artwork_fd(
61 | item.remote_id, item.file_suffix)
62 | self.cache_manager.artwork_cache.download(
63 | item.id, cache_item, remote_fd)
64 |
65 | logger.debug("Artwork data from remote, size=unknown")
66 | return cache_item.iterator(), None, None
67 |
68 | logger.debug(
69 | "Artwork data from cache, size=%d", cache_item.size)
70 | return cache_item.iterator(), None, cache_item.size
71 |
72 | def get_item_data(self, session, item, byte_range=None):
73 | """
74 | Get item data from cache or remote.
75 | """
76 |
77 | cache_item = self.cache_manager.item_cache.get(item.id)
78 | connection = self.connections[item.database_id]
79 | is_transcode = connection.needs_transcoding(item.file_suffix)
80 | item_file_type = item.file_type
81 |
82 | if is_transcode:
83 | item_file_type = connection.transcode_format[item.file_type]
84 |
85 | if cache_item.iterator is None:
86 | remote_fd = connection.get_item_fd(
87 | item.remote_id, item.file_suffix)
88 | self.cache_manager.item_cache.download(
89 | item.id, cache_item, remote_fd)
90 | item_size = item.file_size
91 |
92 | # Determine returned size by checking for transcode.
93 | if is_transcode:
94 | item_size = -1
95 |
96 | logger.debug(
97 | "Item data from remote: range=%s, type=%s, size=%d",
98 | byte_range, item_file_type, item_size)
99 | return cache_item.iterator(byte_range), item_file_type, item_size
100 |
101 | logger.debug(
102 | "Item data from cache, range=%s, type=%s, size=%d",
103 | byte_range, item.file_type, item.file_size)
104 | return cache_item.iterator(byte_range), item_file_type, \
105 | cache_item.size
106 |
--------------------------------------------------------------------------------
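
A self-contained sketch of the type/size decision in `get_item_data` above, mirroring the remote (non-cached) branch; the function name and values are illustrative:

    import collections

    # Mirrors Connection.transcode_format: any file type maps to MP3 audio.
    transcode_format = collections.defaultdict(lambda: "audio/mpeg")

    def response_type_and_size(file_type, file_size, is_transcode):
        # When transcoding from remote, the MIME type changes and the final
        # size is unknown until the download finishes, signalled by -1.
        if is_transcode:
            return transcode_format[file_type], -1
        return file_type, file_size

    assert response_type_and_size("audio/flac", 1024, True) == \
        ("audio/mpeg", -1)
    assert response_type_and_size("audio/mpeg", 1024, False) == \
        ("audio/mpeg", 1024)
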
/subdaap/state.py:
--------------------------------------------------------------------------------
1 | from gevent.lock import Semaphore
2 |
3 | import cPickle
4 | import logging
5 | import errno
6 |
7 | # Logger instance
8 | logger = logging.getLogger(__name__)
9 |
10 |
11 | class State(object):
12 | """
13 | Convenient wrapper for a state dictionary.
14 | """
15 |
16 | def __init__(self, file_name):
17 | """
18 | Construct a new State instance. The state will be directly loaded from
19 | file.
20 |
21 | :param str file_name: Path to state file.
22 | """
23 |
24 | self.file_name = file_name
25 | self.lock = Semaphore()
26 | self.state = {}
27 |
28 | # Unpickle state
29 | self.load()
30 |
31 | def save(self):
32 | """
33 | Save state to file.
34 | """
35 |
36 | logger.debug("Saving application state to '%s'.", self.file_name)
37 |
38 | with self.lock:
39 | with open(self.file_name, "wb") as fp:
40 | cPickle.dump(self.state, fp)
41 |
42 | def load(self):
43 | """
44 |         Load state from file. If the file does not contain a pickled dictionary,
45 |         or cannot be read at all, the state will be an empty dictionary.
46 | """
47 |
48 | logger.debug("Loading state from '%s'.", self.file_name)
49 |
50 | with self.lock:
51 | try:
52 | with open(self.file_name, "rb") as fp:
53 | self.state = cPickle.load(fp)
54 |
55 | # Make sure it's a dict
56 | if type(self.state) != dict:
57 | self.state = {}
58 | except IOError as e:
59 | if e.errno == errno.ENOENT:
60 | self.state = {}
61 | else:
62 |                     raise
63 | except (EOFError, cPickle.UnpicklingError):
64 | self.state = {}
65 |
66 | def __getitem__(self, key):
67 | """
68 | Proxy method for `self.state.__getitem__`.
69 | """
70 | return self.state.__getitem__(key)
71 |
72 | def __setitem__(self, key, value):
73 | """
74 | Proxy method for `self.state.__setitem__`.
75 | """
76 | self.state.__setitem__(key, value)
77 |
78 | def __contains__(self, key):
79 | """
80 | Proxy method for `self.state.__contains__`.
81 | """
82 | return self.state.__contains__(key)
83 |
84 | def __len__(self):
85 | """
86 | Proxy method for `self.state.__len__`.
87 | """
88 | return self.state.__len__()
89 |
--------------------------------------------------------------------------------
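
A minimal usage sketch of the `State` wrapper above (the file name is an example):

    from subdaap.state import State

    state = State("./state.db")  # loads, or starts empty if missing

    if "persistent_id" not in state:
        state["persistent_id"] = 1234

    state.save()  # pickles the dictionary back to ./state.db
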
/subdaap/static/css/pure-min.css:
--------------------------------------------------------------------------------
1 | /*!
2 | Pure v0.6.0
3 | Copyright 2014 Yahoo! Inc. All rights reserved.
4 | Licensed under the BSD License.
5 | https://github.com/yahoo/pure/blob/master/LICENSE.md
6 | */
7 | /*!
8 | normalize.css v^3.0 | MIT License | git.io/normalize
9 | Copyright (c) Nicolas Gallagher and Jonathan Neal
10 | */
11 | /*! normalize.css v3.0.2 | MIT License | git.io/normalize */html{font-family:sans-serif;-ms-text-size-adjust:100%;-webkit-text-size-adjust:100%}body{margin:0}article,aside,details,figcaption,figure,footer,header,hgroup,main,menu,nav,section,summary{display:block}audio,canvas,progress,video{display:inline-block;vertical-align:baseline}audio:not([controls]){display:none;height:0}[hidden],template{display:none}a{background-color:transparent}a:active,a:hover{outline:0}abbr[title]{border-bottom:1px dotted}b,strong{font-weight:700}dfn{font-style:italic}h1{font-size:2em;margin:.67em 0}mark{background:#ff0;color:#000}small{font-size:80%}sub,sup{font-size:75%;line-height:0;position:relative;vertical-align:baseline}sup{top:-.5em}sub{bottom:-.25em}img{border:0}svg:not(:root){overflow:hidden}figure{margin:1em 40px}hr{-moz-box-sizing:content-box;box-sizing:content-box;height:0}pre{overflow:auto}code,kbd,pre,samp{font-family:monospace,monospace;font-size:1em}button,input,optgroup,select,textarea{color:inherit;font:inherit;margin:0}button{overflow:visible}button,select{text-transform:none}button,html input[type=button],input[type=reset],input[type=submit]{-webkit-appearance:button;cursor:pointer}button[disabled],html input[disabled]{cursor:default}button::-moz-focus-inner,input::-moz-focus-inner{border:0;padding:0}input{line-height:normal}input[type=checkbox],input[type=radio]{box-sizing:border-box;padding:0}input[type=number]::-webkit-inner-spin-button,input[type=number]::-webkit-outer-spin-button{height:auto}input[type=search]{-webkit-appearance:textfield;-moz-box-sizing:content-box;-webkit-box-sizing:content-box;box-sizing:content-box}input[type=search]::-webkit-search-cancel-button,input[type=search]::-webkit-search-decoration{-webkit-appearance:none}fieldset{border:1px solid silver;margin:0 2px;padding:.35em .625em .75em}legend{border:0;padding:0}textarea{overflow:auto}optgroup{font-weight:700}table{border-collapse:collapse;border-spacing:0}td,th{padding:0}.hidden,[hidden]{display:none!important}.pure-img{max-width:100%;height:auto;display:block}.pure-g{letter-spacing:-.31em;*letter-spacing:normal;*word-spacing:-.43em;text-rendering:optimizespeed;font-family:FreeSans,Arimo,"Droid Sans",Helvetica,Arial,sans-serif;display:-webkit-flex;-webkit-flex-flow:row wrap;display:-ms-flexbox;-ms-flex-flow:row wrap;-ms-align-content:flex-start;-webkit-align-content:flex-start;align-content:flex-start}.opera-only :-o-prefocus,.pure-g{word-spacing:-.43em}.pure-u{display:inline-block;*display:inline;zoom:1;letter-spacing:normal;word-spacing:normal;vertical-align:top;text-rendering:auto}.pure-g [class 
*="pure-u"]{font-family:sans-serif}.pure-u-1,.pure-u-1-1,.pure-u-1-2,.pure-u-1-3,.pure-u-2-3,.pure-u-1-4,.pure-u-3-4,.pure-u-1-5,.pure-u-2-5,.pure-u-3-5,.pure-u-4-5,.pure-u-5-5,.pure-u-1-6,.pure-u-5-6,.pure-u-1-8,.pure-u-3-8,.pure-u-5-8,.pure-u-7-8,.pure-u-1-12,.pure-u-5-12,.pure-u-7-12,.pure-u-11-12,.pure-u-1-24,.pure-u-2-24,.pure-u-3-24,.pure-u-4-24,.pure-u-5-24,.pure-u-6-24,.pure-u-7-24,.pure-u-8-24,.pure-u-9-24,.pure-u-10-24,.pure-u-11-24,.pure-u-12-24,.pure-u-13-24,.pure-u-14-24,.pure-u-15-24,.pure-u-16-24,.pure-u-17-24,.pure-u-18-24,.pure-u-19-24,.pure-u-20-24,.pure-u-21-24,.pure-u-22-24,.pure-u-23-24,.pure-u-24-24{display:inline-block;*display:inline;zoom:1;letter-spacing:normal;word-spacing:normal;vertical-align:top;text-rendering:auto}.pure-u-1-24{width:4.1667%;*width:4.1357%}.pure-u-1-12,.pure-u-2-24{width:8.3333%;*width:8.3023%}.pure-u-1-8,.pure-u-3-24{width:12.5%;*width:12.469%}.pure-u-1-6,.pure-u-4-24{width:16.6667%;*width:16.6357%}.pure-u-1-5{width:20%;*width:19.969%}.pure-u-5-24{width:20.8333%;*width:20.8023%}.pure-u-1-4,.pure-u-6-24{width:25%;*width:24.969%}.pure-u-7-24{width:29.1667%;*width:29.1357%}.pure-u-1-3,.pure-u-8-24{width:33.3333%;*width:33.3023%}.pure-u-3-8,.pure-u-9-24{width:37.5%;*width:37.469%}.pure-u-2-5{width:40%;*width:39.969%}.pure-u-5-12,.pure-u-10-24{width:41.6667%;*width:41.6357%}.pure-u-11-24{width:45.8333%;*width:45.8023%}.pure-u-1-2,.pure-u-12-24{width:50%;*width:49.969%}.pure-u-13-24{width:54.1667%;*width:54.1357%}.pure-u-7-12,.pure-u-14-24{width:58.3333%;*width:58.3023%}.pure-u-3-5{width:60%;*width:59.969%}.pure-u-5-8,.pure-u-15-24{width:62.5%;*width:62.469%}.pure-u-2-3,.pure-u-16-24{width:66.6667%;*width:66.6357%}.pure-u-17-24{width:70.8333%;*width:70.8023%}.pure-u-3-4,.pure-u-18-24{width:75%;*width:74.969%}.pure-u-19-24{width:79.1667%;*width:79.1357%}.pure-u-4-5{width:80%;*width:79.969%}.pure-u-5-6,.pure-u-20-24{width:83.3333%;*width:83.3023%}.pure-u-7-8,.pure-u-21-24{width:87.5%;*width:87.469%}.pure-u-11-12,.pure-u-22-24{width:91.6667%;*width:91.6357%}.pure-u-23-24{width:95.8333%;*width:95.8023%}.pure-u-1,.pure-u-1-1,.pure-u-5-5,.pure-u-24-24{width:100%}.pure-button{display:inline-block;zoom:1;line-height:normal;white-space:nowrap;vertical-align:middle;text-align:center;cursor:pointer;-webkit-user-drag:none;-webkit-user-select:none;-moz-user-select:none;-ms-user-select:none;user-select:none;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}.pure-button::-moz-focus-inner{padding:0;border:0}.pure-button{font-family:inherit;font-size:100%;padding:.5em 1em;color:#444;color:rgba(0,0,0,.8);border:1px solid #999;border:0 rgba(0,0,0,0);background-color:#E6E6E6;text-decoration:none;border-radius:2px}.pure-button-hover,.pure-button:hover,.pure-button:focus{filter:progid:DXImageTransform.Microsoft.gradient(startColorstr='#00000000', endColorstr='#1a000000', GradientType=0);background-image:-webkit-gradient(linear,0 0,0 100%,from(transparent),color-stop(40%,rgba(0,0,0,.05)),to(rgba(0,0,0,.1)));background-image:-webkit-linear-gradient(transparent,rgba(0,0,0,.05) 40%,rgba(0,0,0,.1));background-image:-moz-linear-gradient(top,rgba(0,0,0,.05) 0,rgba(0,0,0,.1));background-image:-o-linear-gradient(transparent,rgba(0,0,0,.05) 40%,rgba(0,0,0,.1));background-image:linear-gradient(transparent,rgba(0,0,0,.05) 40%,rgba(0,0,0,.1))}.pure-button:focus{outline:0}.pure-button-active,.pure-button:active{box-shadow:0 0 0 1px rgba(0,0,0,.15) inset,0 0 6px rgba(0,0,0,.2) 
inset;border-color:#000\9}.pure-button[disabled],.pure-button-disabled,.pure-button-disabled:hover,.pure-button-disabled:focus,.pure-button-disabled:active{border:0;background-image:none;filter:progid:DXImageTransform.Microsoft.gradient(enabled=false);filter:alpha(opacity=40);-khtml-opacity:.4;-moz-opacity:.4;opacity:.4;cursor:not-allowed;box-shadow:none}.pure-button-hidden{display:none}.pure-button::-moz-focus-inner{padding:0;border:0}.pure-button-primary,.pure-button-selected,a.pure-button-primary,a.pure-button-selected{background-color:#0078e7;color:#fff}.pure-form input[type=text],.pure-form input[type=password],.pure-form input[type=email],.pure-form input[type=url],.pure-form input[type=date],.pure-form input[type=month],.pure-form input[type=time],.pure-form input[type=datetime],.pure-form input[type=datetime-local],.pure-form input[type=week],.pure-form input[type=number],.pure-form input[type=search],.pure-form input[type=tel],.pure-form input[type=color],.pure-form select,.pure-form textarea{padding:.5em .6em;display:inline-block;border:1px solid #ccc;box-shadow:inset 0 1px 3px #ddd;border-radius:4px;vertical-align:middle;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}.pure-form input:not([type]){padding:.5em .6em;display:inline-block;border:1px solid #ccc;box-shadow:inset 0 1px 3px #ddd;border-radius:4px;-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}.pure-form input[type=color]{padding:.2em .5em}.pure-form input[type=text]:focus,.pure-form input[type=password]:focus,.pure-form input[type=email]:focus,.pure-form input[type=url]:focus,.pure-form input[type=date]:focus,.pure-form input[type=month]:focus,.pure-form input[type=time]:focus,.pure-form input[type=datetime]:focus,.pure-form input[type=datetime-local]:focus,.pure-form input[type=week]:focus,.pure-form input[type=number]:focus,.pure-form input[type=search]:focus,.pure-form input[type=tel]:focus,.pure-form input[type=color]:focus,.pure-form select:focus,.pure-form textarea:focus{outline:0;border-color:#129FEA}.pure-form input:not([type]):focus{outline:0;border-color:#129FEA}.pure-form input[type=file]:focus,.pure-form input[type=radio]:focus,.pure-form input[type=checkbox]:focus{outline:thin solid #129FEA;outline:1px auto #129FEA}.pure-form .pure-checkbox,.pure-form .pure-radio{margin:.5em 0;display:block}.pure-form input[type=text][disabled],.pure-form input[type=password][disabled],.pure-form input[type=email][disabled],.pure-form input[type=url][disabled],.pure-form input[type=date][disabled],.pure-form input[type=month][disabled],.pure-form input[type=time][disabled],.pure-form input[type=datetime][disabled],.pure-form input[type=datetime-local][disabled],.pure-form input[type=week][disabled],.pure-form input[type=number][disabled],.pure-form input[type=search][disabled],.pure-form input[type=tel][disabled],.pure-form input[type=color][disabled],.pure-form select[disabled],.pure-form textarea[disabled]{cursor:not-allowed;background-color:#eaeded;color:#cad2d3}.pure-form input:not([type])[disabled]{cursor:not-allowed;background-color:#eaeded;color:#cad2d3}.pure-form input[readonly],.pure-form select[readonly],.pure-form textarea[readonly]{background-color:#eee;color:#777;border-color:#ccc}.pure-form input:focus:invalid,.pure-form textarea:focus:invalid,.pure-form select:focus:invalid{color:#b94a48;border-color:#e9322d}.pure-form input[type=file]:focus:invalid:focus,.pure-form input[type=radio]:focus:invalid:focus,.pure-form 
input[type=checkbox]:focus:invalid:focus{outline-color:#e9322d}.pure-form select{height:2.25em;border:1px solid #ccc;background-color:#fff}.pure-form select[multiple]{height:auto}.pure-form label{margin:.5em 0 .2em}.pure-form fieldset{margin:0;padding:.35em 0 .75em;border:0}.pure-form legend{display:block;width:100%;padding:.3em 0;margin-bottom:.3em;color:#333;border-bottom:1px solid #e5e5e5}.pure-form-stacked input[type=text],.pure-form-stacked input[type=password],.pure-form-stacked input[type=email],.pure-form-stacked input[type=url],.pure-form-stacked input[type=date],.pure-form-stacked input[type=month],.pure-form-stacked input[type=time],.pure-form-stacked input[type=datetime],.pure-form-stacked input[type=datetime-local],.pure-form-stacked input[type=week],.pure-form-stacked input[type=number],.pure-form-stacked input[type=search],.pure-form-stacked input[type=tel],.pure-form-stacked input[type=color],.pure-form-stacked input[type=file],.pure-form-stacked select,.pure-form-stacked label,.pure-form-stacked textarea{display:block;margin:.25em 0}.pure-form-stacked input:not([type]){display:block;margin:.25em 0}.pure-form-aligned input,.pure-form-aligned textarea,.pure-form-aligned select,.pure-form-aligned .pure-help-inline,.pure-form-message-inline{display:inline-block;*display:inline;*zoom:1;vertical-align:middle}.pure-form-aligned textarea{vertical-align:top}.pure-form-aligned .pure-control-group{margin-bottom:.5em}.pure-form-aligned .pure-control-group label{text-align:right;display:inline-block;vertical-align:middle;width:10em;margin:0 1em 0 0}.pure-form-aligned .pure-controls{margin:1.5em 0 0 11em}.pure-form input.pure-input-rounded,.pure-form .pure-input-rounded{border-radius:2em;padding:.5em 1em}.pure-form .pure-group fieldset{margin-bottom:10px}.pure-form .pure-group input,.pure-form .pure-group textarea{display:block;padding:10px;margin:0 0 -1px;border-radius:0;position:relative;top:-1px}.pure-form .pure-group input:focus,.pure-form .pure-group textarea:focus{z-index:3}.pure-form .pure-group input:first-child,.pure-form .pure-group textarea:first-child{top:1px;border-radius:4px 4px 0 0;margin:0}.pure-form .pure-group input:first-child:last-child,.pure-form .pure-group textarea:first-child:last-child{top:1px;border-radius:4px;margin:0}.pure-form .pure-group input:last-child,.pure-form .pure-group textarea:last-child{top:-2px;border-radius:0 0 4px 4px;margin:0}.pure-form .pure-group button{margin:.35em 0}.pure-form .pure-input-1{width:100%}.pure-form .pure-input-2-3{width:66%}.pure-form .pure-input-1-2{width:50%}.pure-form .pure-input-1-3{width:33%}.pure-form .pure-input-1-4{width:25%}.pure-form .pure-help-inline,.pure-form-message-inline{display:inline-block;padding-left:.3em;color:#666;vertical-align:middle;font-size:.875em}.pure-form-message{display:block;color:#666;font-size:.875em}@media only screen and (max-width :480px){.pure-form button[type=submit]{margin:.7em 0 0}.pure-form input:not([type]),.pure-form input[type=text],.pure-form input[type=password],.pure-form input[type=email],.pure-form input[type=url],.pure-form input[type=date],.pure-form input[type=month],.pure-form input[type=time],.pure-form input[type=datetime],.pure-form input[type=datetime-local],.pure-form input[type=week],.pure-form input[type=number],.pure-form input[type=search],.pure-form input[type=tel],.pure-form input[type=color],.pure-form label{margin-bottom:.3em;display:block}.pure-group input:not([type]),.pure-group input[type=text],.pure-group input[type=password],.pure-group 
input[type=email],.pure-group input[type=url],.pure-group input[type=date],.pure-group input[type=month],.pure-group input[type=time],.pure-group input[type=datetime],.pure-group input[type=datetime-local],.pure-group input[type=week],.pure-group input[type=number],.pure-group input[type=search],.pure-group input[type=tel],.pure-group input[type=color]{margin-bottom:0}.pure-form-aligned .pure-control-group label{margin-bottom:.3em;text-align:left;display:block;width:100%}.pure-form-aligned .pure-controls{margin:1.5em 0 0}.pure-form .pure-help-inline,.pure-form-message-inline,.pure-form-message{display:block;font-size:.75em;padding:.2em 0 .8em}}.pure-menu{-webkit-box-sizing:border-box;-moz-box-sizing:border-box;box-sizing:border-box}.pure-menu-fixed{position:fixed;left:0;top:0;z-index:3}.pure-menu-list,.pure-menu-item{position:relative}.pure-menu-list{list-style:none;margin:0;padding:0}.pure-menu-item{padding:0;margin:0;height:100%}.pure-menu-link,.pure-menu-heading{display:block;text-decoration:none;white-space:nowrap}.pure-menu-horizontal{width:100%;white-space:nowrap}.pure-menu-horizontal .pure-menu-list{display:inline-block}.pure-menu-horizontal .pure-menu-item,.pure-menu-horizontal .pure-menu-heading,.pure-menu-horizontal .pure-menu-separator{display:inline-block;*display:inline;zoom:1;vertical-align:middle}.pure-menu-item .pure-menu-item{display:block}.pure-menu-children{display:none;position:absolute;left:100%;top:0;margin:0;padding:0;z-index:3}.pure-menu-horizontal .pure-menu-children{left:0;top:auto;width:inherit}.pure-menu-allow-hover:hover>.pure-menu-children,.pure-menu-active>.pure-menu-children{display:block;position:absolute}.pure-menu-has-children>.pure-menu-link:after{padding-left:.5em;content:"\25B8";font-size:small}.pure-menu-horizontal .pure-menu-has-children>.pure-menu-link:after{content:"\25BE"}.pure-menu-scrollable{overflow-y:scroll;overflow-x:hidden}.pure-menu-scrollable .pure-menu-list{display:block}.pure-menu-horizontal.pure-menu-scrollable .pure-menu-list{display:inline-block}.pure-menu-horizontal.pure-menu-scrollable{white-space:nowrap;overflow-y:hidden;overflow-x:auto;-ms-overflow-style:none;-webkit-overflow-scrolling:touch;padding:.5em 0}.pure-menu-horizontal.pure-menu-scrollable::-webkit-scrollbar{display:none}.pure-menu-separator{background-color:#ccc;height:1px;margin:.3em 0}.pure-menu-horizontal .pure-menu-separator{width:1px;height:1.3em;margin:0 .3em}.pure-menu-heading{text-transform:uppercase;color:#565d64}.pure-menu-link{color:#777}.pure-menu-children{background-color:#fff}.pure-menu-link,.pure-menu-disabled,.pure-menu-heading{padding:.5em 1em}.pure-menu-disabled{opacity:.5}.pure-menu-disabled .pure-menu-link:hover{background-color:transparent}.pure-menu-active>.pure-menu-link,.pure-menu-link:hover,.pure-menu-link:focus{background-color:#eee}.pure-menu-selected .pure-menu-link,.pure-menu-selected .pure-menu-link:visited{color:#000}.pure-table{border-collapse:collapse;border-spacing:0;empty-cells:show;border:1px solid #cbcbcb}.pure-table caption{color:#000;font:italic 85%/1 arial,sans-serif;padding:1em 0;text-align:center}.pure-table td,.pure-table th{border-left:1px solid #cbcbcb;border-width:0 0 0 1px;font-size:inherit;margin:0;overflow:visible;padding:.5em 1em}.pure-table td:first-child,.pure-table th:first-child{border-left-width:0}.pure-table thead{background-color:#e0e0e0;color:#000;text-align:left;vertical-align:bottom}.pure-table td{background-color:transparent}.pure-table-odd td{background-color:#f2f2f2}.pure-table-striped tr:nth-child(2n-1) 
td{background-color:#f2f2f2}.pure-table-bordered td{border-bottom:1px solid #cbcbcb}.pure-table-bordered tbody>tr:last-child>td{border-bottom-width:0}.pure-table-horizontal td,.pure-table-horizontal th{border-width:0 0 1px;border-bottom:1px solid #cbcbcb}.pure-table-horizontal tbody>tr:last-child>td{border-bottom-width:0}
--------------------------------------------------------------------------------
/subdaap/stream.py:
--------------------------------------------------------------------------------
1 | from daapserver.utils import parse_byte_range
2 |
3 | import shutil
4 | import gevent
5 | import gevent.queue
6 |
7 |
8 | def stream_from_remote(lock, remote_fd, target_file, chunk_size=32768,
9 | on_cache=None):
10 | """
11 |     Stream a remote file to the receiver while simultaneously downloading
12 |     and caching it. A separate greenlet handles the download and caching;
13 |     every time a block of interest (depending on start and stop) becomes
14 |     available, it is written to the queue. The streamer blocks until a
15 |     block of interest is available.
16 |     :param lock: Lock that is held while the file is downloaded and cached.
17 |     :param file remote_fd: File descriptor of remote file to stream.
18 | :param str target_file: Path to target file name. Must be writeable.
19 | :param int chunk_size: Chunk size to use when reading remote.
20 | :param callable on_cache: Callback method to invoke when streaming is done.
21 | """
22 |
23 | temp_file = "%s.temp" % target_file
24 | queue = gevent.queue.Queue()
25 |
26 | def _downloader():
27 | exhausted = False
28 | bytes_read = 0
29 |
30 | with open(temp_file, "wb") as local_fd:
31 | try:
32 | while True:
33 | chunk = remote_fd.read(chunk_size)
34 |
35 | if not chunk:
36 | exhausted = True
37 | break
38 |
39 | local_fd.write(chunk)
40 | bytes_read += len(chunk)
41 |
42 | # Yield in the form of (chunk_begin, chunk_end, chunk)
43 | yield bytes_read - len(chunk), bytes_read, chunk
44 | finally:
45 | # Make sure the remaining bytes are read from remote and
46 | # written to disk.
47 | if not exhausted:
48 | while True:
49 | chunk = remote_fd.read(chunk_size)
50 |
51 | if not chunk:
52 | break
53 |
54 | local_fd.write(chunk)
55 | bytes_read += len(chunk)
56 |
57 |             # Move the temp file to the target file. On the same file
58 |             # system, this should be an atomic operation.
59 | shutil.move(temp_file, target_file)
60 |
61 | # Mark done, for the on_cache
62 | exhausted = True
63 |
64 | # Invoke callback, if fully exhausted
65 | if exhausted and on_cache:
66 | on_cache(bytes_read)
67 |
68 | def _cacher(begin, end):
69 | put = False
70 |
71 |         # Hack (1): pretend this greenlet owns the lock, so it can be entered here.
72 | old_owner, lock._owner = lock._owner, gevent.getcurrent()
73 |
74 | with lock:
75 | try:
76 | for chunk_begin, chunk_end, chunk in _downloader():
77 | # Ensure that the chunk we have downloaded is a chunk that
78 | # we are interested in. For instance, we may need the
79 |                     # middle part of a song, which means that the beginning
80 |                     # has to be downloaded (and saved to file) first.
81 | if (chunk_begin <= begin < chunk_end) or \
82 | (chunk_begin <= end < chunk_end):
83 | put = not put
84 |
85 | if put:
86 | queue.put((chunk_begin, chunk_end, chunk))
87 | finally:
88 | # Make sure the streamer stops
89 | queue.put(StopIteration)
90 |
91 |         # Hack (2): restore the original lock owner.
92 | lock._owner = old_owner
93 |
94 | def _streamer(byte_range=None):
95 | begin, end = parse_byte_range(byte_range)
96 |
97 | # Spawn the download greenlet.
98 | greenlet = gevent.spawn(_cacher, begin, end)
99 |
100 | try:
101 | put = False
102 |
103 | # At this point, the '_cacher' greenlet is already running and
104 | # should have downloaded (part of) the file. The part that we are
105 | # interested in will be put in the queue.
106 | for chunk_begin, chunk_end, chunk in queue:
107 | if (chunk_begin <= begin < chunk_end) or \
108 | (chunk_begin <= end < chunk_end):
109 | put = not put
110 |
111 | if put:
112 | i = max(0, begin - chunk_begin)
113 | j = min(len(chunk), end - chunk_begin)
114 |
115 | yield chunk[i:j]
116 | finally:
117 | # Make sure the greenlet gets killed when this iterator is closed.
118 | greenlet.kill()
119 |
120 | return _streamer
121 |
122 |
123 | def stream_from_file(lock, fd, file_size, on_start=None, on_finish=None):
124 | """
125 | Create an iterator that streams a file partially or all at once.
126 | """
127 |
128 | def _streamer(byte_range=None):
129 | begin, end = parse_byte_range(byte_range, max_byte=file_size)
130 |
131 | try:
132 | if on_start:
133 | on_start()
134 |
135 | with lock:
136 | fd.seek(begin)
137 | chunk = fd.read(end - begin)
138 |
139 | yield chunk
140 | finally:
141 | if on_finish:
142 | on_finish()
143 |
144 | return _streamer
145 |
146 |
147 | def stream_from_buffer(lock, data, file_size, chunk_size=32768, on_start=None,
148 | on_finish=None):
149 | """
150 | """
151 |
152 | def _streamer(byte_range=None):
153 | begin, end = parse_byte_range(byte_range, max_byte=file_size)
154 |
155 | # Yield data in chunks
156 | try:
157 | if on_start:
158 | on_start()
159 |
160 | while True:
161 | with lock:
162 | chunk = data[begin:min(end, begin + chunk_size)]
163 |
164 | # Send the data
165 | yield chunk
166 |
167 | # Increment offset
168 | begin += len(chunk)
169 |
170 | # Stop when the end has been reached
171 | if begin >= end:
172 | break
173 | finally:
174 | if on_finish:
175 | on_finish()
176 |
177 | return _streamer
178 |
--------------------------------------------------------------------------------
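
A minimal sketch of `stream_from_buffer` above, assuming that `parse_byte_range(None, max_byte=n)` covers the whole buffer, as the `stream_from_file` code suggests:

    from gevent.lock import Semaphore

    from subdaap.stream import stream_from_buffer

    data = b"abcdefgh"
    streamer = stream_from_buffer(
        Semaphore(), data, len(data), chunk_size=4)

    # The factory returns an iterator factory; each call streams a range.
    assert b"".join(streamer(byte_range=None)) == b"abcdefgh"
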
/subdaap/subsonic.py:
--------------------------------------------------------------------------------
1 | from subdaap.utils import force_list
2 |
3 | import urlparse
4 | import libsonic
5 | import urllib
6 |
7 |
8 | class SubsonicClient(libsonic.Connection):
9 | """
10 | Extend `libsonic.Connection` with new features and fix a few issues.
11 |
12 |     - Parse the URL for host and port in the constructor.
13 |     - Make sure API results are of uniform type.
14 | - Provide methods to intercept URL of binary requests.
15 | - Add order property to playlist items.
16 |     - Add convenient `walk_*` methods to iterate over the API responses.
17 | """
18 |
19 | def __init__(self, url, username, password):
20 | """
21 | Construct a new SubsonicClient.
22 |
23 | :param str url: Full URL (including scheme) of the Subsonic server.
24 | :param str username: Username of the server.
25 | :param str password: Password of the server.
26 | """
27 |
28 | self.intercept_url = False
29 |
30 | # Parse Subsonic URL
31 | parts = urlparse.urlparse(url)
32 | scheme = parts.scheme or "http"
33 |
34 |         # Make sure there is a hostname
35 | if not parts.hostname:
36 | raise ValueError("Expected hostname for URL: %s" % url)
37 |
38 | # Validate scheme
39 | if scheme not in ("http", "https"):
40 | raise ValueError("Unexpected scheme '%s' for URL: %s" % (
41 | scheme, url))
42 |
43 | # Pick a default port
44 | host = "%s://%s" % (scheme, parts.hostname)
45 | port = parts.port or {"http": 80, "https": 443}[scheme]
46 |
47 | # Invoke original constructor
48 | super(SubsonicClient, self).__init__(
49 | host, username, password, port=port)
50 |
51 | def getIndexes(self, *args, **kwargs):
52 | """
53 | Improve the getIndexes method. Ensures IDs are integers.
54 | """
55 |
56 | def _artists_iterator(artists):
57 | for artist in force_list(artists):
58 | artist["id"] = int(artist["id"])
59 | yield artist
60 |
61 | def _index_iterator(index):
62 | for index in force_list(index):
63 | index["artist"] = list(_artists_iterator(index.get("artist")))
64 | yield index
65 |
66 | def _children_iterator(children):
67 | for child in force_list(children):
68 | child["id"] = int(child["id"])
69 |
70 | if "parent" in child:
71 | child["parent"] = int(child["parent"])
72 | if "coverArt" in child:
73 | child["coverArt"] = int(child["coverArt"])
74 | if "artistId" in child:
75 | child["artistId"] = int(child["artistId"])
76 | if "albumId" in child:
77 | child["albumId"] = int(child["albumId"])
78 |
79 | yield child
80 |
81 | response = super(SubsonicClient, self).getIndexes(*args, **kwargs)
82 | response["indexes"] = response.get("indexes", {})
83 | response["indexes"]["index"] = list(
84 | _index_iterator(response["indexes"].get("index")))
85 | response["indexes"]["child"] = list(
86 | _children_iterator(response["indexes"].get("child")))
87 |
88 | return response
89 |
90 | def getPlaylists(self, *args, **kwargs):
91 | """
92 | Improve the getPlaylists method. Ensures IDs are integers.
93 | """
94 |
95 | def _playlists_iterator(playlists):
96 | for playlist in force_list(playlists):
97 | playlist["id"] = int(playlist["id"])
98 | yield playlist
99 |
100 | response = super(SubsonicClient, self).getPlaylists(*args, **kwargs)
101 | response["playlists"]["playlist"] = list(
102 | _playlists_iterator(response["playlists"].get("playlist")))
103 |
104 | return response
105 |
106 | def getPlaylist(self, *args, **kwargs):
107 | """
108 | Improve the getPlaylist method. Ensures IDs are integers and add an
109 | order property to each entry.
110 | """
111 |
112 | def _entries_iterator(entries):
113 | for order, entry in enumerate(force_list(entries), start=1):
114 | entry["id"] = int(entry["id"])
115 | entry["order"] = order
116 | yield entry
117 |
118 | response = super(SubsonicClient, self).getPlaylist(*args, **kwargs)
119 | response["playlist"]["entry"] = list(
120 | _entries_iterator(response["playlist"].get("entry")))
121 |
122 | return response
123 |
124 | def getArtists(self, *args, **kwargs):
125 | """
126 | Improve the getArtists method. Ensures IDs are integers.
127 | """
128 |
129 | def _artists_iterator(artists):
130 | for artist in force_list(artists):
131 | artist["id"] = int(artist["id"])
132 | yield artist
133 |
134 | def _index_iterator(index):
135 | for index in force_list(index):
136 | index["artist"] = list(_artists_iterator(index.get("artist")))
137 | yield index
138 |
139 | response = super(SubsonicClient, self).getArtists(*args, **kwargs)
140 | response["artists"] = response.get("artists", {})
141 | response["artists"]["index"] = list(
142 | _index_iterator(response["artists"].get("index")))
143 |
144 | return response
145 |
146 | def getArtist(self, *args, **kwargs):
147 | """
148 | Improve the getArtist method. Ensures IDs are integers.
149 | """
150 |
151 | def _albums_iterator(albums):
152 | for album in force_list(albums):
153 | album["id"] = int(album["id"])
154 |
155 | if "artistId" in album:
156 | album["artistId"] = int(album["artistId"])
157 |
158 | yield album
159 |
160 | response = super(SubsonicClient, self).getArtist(*args, **kwargs)
161 | response["artist"]["album"] = list(
162 | _albums_iterator(response["artist"].get("album")))
163 |
164 | return response
165 |
166 | def getMusicDirectory(self, *args, **kwargs):
167 | """
168 | Improve the getMusicDirectory method. Ensures IDs are integers.
169 | """
170 |
171 | def _children_iterator(children):
172 | for child in force_list(children):
173 | child["id"] = int(child["id"])
174 |
175 | if "parent" in child:
176 | child["parent"] = int(child["parent"])
177 | if "coverArt" in child:
178 | child["coverArt"] = int(child["coverArt"])
179 | if "artistId" in child:
180 | child["artistId"] = int(child["artistId"])
181 | if "albumId" in child:
182 | child["albumId"] = int(child["albumId"])
183 |
184 | yield child
185 |
186 | response = super(SubsonicClient, self).getMusicDirectory(
187 | *args, **kwargs)
188 | response["directory"]["child"] = list(
189 | _children_iterator(response["directory"].get("child")))
190 |
191 | return response
192 |
193 | def getAlbum(self, *args, **kwargs):
194 | """
195 | Improve the getAlbum method. Ensures the IDs are real integers.
196 | """
197 |
198 | def _songs_iterator(songs):
199 | for song in force_list(songs):
200 | song["id"] = int(song["id"])
201 | yield song
202 |
203 | response = super(SubsonicClient, self).getAlbum(*args, **kwargs)
204 | response["album"]["song"] = list(
205 | _songs_iterator(response["album"].get("song")))
206 |
207 | return response
208 |
209 | def getAlbumList2(self, *args, **kwargs):
210 | """
211 | Improve the getAlbumList2 method. Ensures the IDs are real integers.
212 | """
213 |
214 | def _album_iterator(albums):
215 | for album in force_list(albums):
216 | album["id"] = int(album["id"])
217 | yield album
218 |
219 | response = super(SubsonicClient, self).getAlbumList2(*args, **kwargs)
220 | response["albumList2"]["album"] = list(
221 | _album_iterator(response["albumList2"].get("album")))
222 |
223 | return response
224 |
225 | def getStarred(self, *args, **kwargs):
226 | """
227 | Improve the getStarred method. Ensures the IDs are real integers.
228 | """
229 |
230 | def _song_iterator(songs):
231 | for song in force_list(songs):
232 | song["id"] = int(song["id"])
233 | yield song
234 |
235 | response = super(SubsonicClient, self).getStarred(*args, **kwargs)
236 | response["starred"]["song"] = list(
237 | _song_iterator(response["starred"].get("song")))
238 |
239 | return response
240 |
241 | def getCoverArtUrl(self, *args, **kwargs):
242 | """
243 |         Return a URL to the cover art.
244 | """
245 |
246 | self.intercept_url = True
247 | url = self.getCoverArt(*args, **kwargs)
248 | self.intercept_url = False
249 |
250 | return url
251 |
252 | def streamUrl(self, *args, **kwargs):
253 | """
254 |         Return a URL to the file to stream.
255 | """
256 |
257 | self.intercept_url = True
258 | url = self.stream(*args, **kwargs)
259 | self.intercept_url = False
260 |
261 | return url
262 |
263 | def _doBinReq(self, *args, **kwargs):
264 | """
265 | Intercept request URL to provide the URL of the item that is requested.
266 |
267 |         If the URL is intercepted, the request is not executed. A username and
268 |         password are added to provide direct access to the stream.
269 | """
270 |
271 | if self.intercept_url:
272 | parts = list(urlparse.urlparse(
273 | args[0].get_full_url() + "?" + args[0].data))
274 | parts[4] = dict(urlparse.parse_qsl(parts[4]))
275 | parts[4].update({"u": self.username, "p": self.password})
276 | parts[4] = urllib.urlencode(parts[4])
277 |
278 | return urlparse.urlunparse(parts)
279 | else:
280 | return super(SubsonicClient, self)._doBinReq(*args, **kwargs)
281 |
282 | def _ts2milli(self, ts):
283 | """
284 | Workaround for the issue with seconds and milliseconds, see
285 | https://github.com/crustymonkey/py-sonic/issues/12.
286 | """
287 |
288 | return int(ts)
289 |
290 | def walk_index(self):
291 | """
292 | Request Subsonic's index and iterate each item.
293 | """
294 |
295 | response = self.getIndexes()
296 |
297 | for index in response["indexes"]["index"]:
298 | for index in index["artist"]:
299 | for item in self.walk_directory(index["id"]):
300 | yield item
301 |
302 | for child in response["indexes"]["child"]:
303 | if child.get("isDir"):
304 | for child in self.walk_directory(child["id"]):
305 | yield child
306 | else:
307 | yield child
308 |
309 | def walk_playlists(self):
310 | """
311 | Request Subsonic's playlists and iterate over each item.
312 | """
313 |
314 | response = self.getPlaylists()
315 |
316 | for child in response["playlists"]["playlist"]:
317 | yield child
318 |
319 | def walk_playlist(self, playlist_id):
320 | """
321 | Request Subsonic's playlist items and iterate over each item.
322 | """
323 |
324 | response = self.getPlaylist(playlist_id)
325 |
326 | for child in response["playlist"]["entry"]:
327 | yield child
328 |
329 | def walk_starred(self):
330 | """
331 | Request Subsonic's starred songs and iterate over each item.
332 | """
333 |
334 | response = self.getStarred()
335 |
336 | for song in response["starred"]["song"]:
337 | yield song
338 |
339 | def walk_directory(self, directory_id):
340 | """
341 | Request a Subsonic music directory and iterate over each item.
342 | """
343 |
344 | response = self.getMusicDirectory(directory_id)
345 |
346 | for child in response["directory"]["child"]:
347 | if child.get("isDir"):
348 | for child in self.walk_directory(child["id"]):
349 | yield child
350 | else:
351 | yield child
352 |
353 | def walk_artist(self, artist_id):
354 | """
355 | Request a Subsonic artist and iterate over each album.
356 | """
357 |
358 | response = self.getArtist(artist_id)
359 |
360 | for child in response["artist"]["album"]:
361 | yield child
362 |
363 | def walk_artists(self):
364 | """
365 | Request all artists and iterate over each item.
366 | """
367 |
368 | response = self.getArtists()
369 |
370 | for index in response["artists"]["index"]:
371 | for artist in index["artist"]:
372 | yield artist
373 |
374 | def walk_genres(self):
375 | """
376 | Request all genres and iterate over each item.
377 | """
378 |
379 | response = self.getGenres()
380 |
381 | for genre in response["genres"]["genre"]:
382 | yield genre
383 |
384 | def walk_album_list_genre(self, genre):
385 | """
386 | Request all albums for a given genre and iterate over each album.
387 | """
388 |
389 | offset = 0
390 |
391 | while True:
392 | response = self.getAlbumList2(
393 | ltype="byGenre", genre=genre, size=500, offset=offset)
394 |
395 | if not response["albumList2"]["album"]:
396 | break
397 |
398 | for album in response["albumList2"]["album"]:
399 | yield album
400 |
401 | offset += 500
402 |
403 | def walk_album(self, album_id):
404 | """
405 |         Request an album and iterate over each song.
406 | """
407 |
408 | response = self.getAlbum(album_id)
409 |
410 | for song in response["album"]["song"]:
411 | yield song
412 |
413 | def walk_random_songs(self, size, genre=None, from_year=None,
414 | to_year=None):
415 | """
416 | Request random songs by genre and/or year and iterate over each song.
417 | """
418 |
419 | response = self.getRandomSongs(
420 | size=size, genre=genre, fromYear=from_year, toYear=to_year)
421 |
422 | for song in response["randomSongs"]["song"]:
423 | yield song
424 |
--------------------------------------------------------------------------------
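
A usage sketch of the `walk_*` generators above (connection details are hypothetical):

    from subdaap.subsonic import SubsonicClient

    # The URL is parsed for scheme, host and port (80/443 by default).
    client = SubsonicClient(
        "https://music.example.com", username="alice", password="secret")

    for playlist in client.walk_playlists():
        for entry in client.walk_playlist(playlist["id"]):
            # Each entry has an integer 'id' and a 1-based 'order' field.
            pass
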
/subdaap/synchronizer.py:
--------------------------------------------------------------------------------
1 | from subdaap import utils
2 |
3 | from daapserver.utils import generate_persistent_id
4 |
5 | import logging
6 |
7 | # Logger instance
8 | logger = logging.getLogger(__name__)
9 |
10 |
11 | class Synchronizer(object):
12 | """
13 | Synchronizer class for synchronizing one SubSonic server with one local
14 | database.
15 | """
16 |
17 | def __init__(self, db, state, index, name, subsonic):
18 | """
19 | """
20 |
21 | self.db = db
22 | self.state = state
23 |
24 | self.name = name
25 | self.subsonic = subsonic
26 | self.index = index
27 |
28 | self.is_initial_synced = False
29 |
30 | self.setup_state()
31 |
32 | def setup_state(self):
33 | """
34 | Ensure a state is available for this instance.
35 | """
36 |
37 | if "synchronizers" not in self.state:
38 | self.state["synchronizers"] = {}
39 |
40 | if self.index not in self.state["synchronizers"]:
41 | self.state["synchronizers"][self.index] = {
42 | "connection_version": None,
43 | "items_version": None,
44 | "containers_version": None
45 | }
46 |
47 | def synchronize(self, initial=False):
48 | """
49 | """
50 |
51 | logger.info("Starting synchronization.")
52 |
53 | server_changed = False
54 | items_changed = False
55 | containers_changed = False
56 |
57 | state = self.state["synchronizers"][self.index]
58 |
59 | # Check connection version when initial is True. In this case, the
60 | # synchronization step is skipped if the connection checksum has not
61 | # changed and some usable data is in the database.
62 | connection_version = utils.dict_checksum(
63 | baseUrl=self.subsonic.baseUrl,
64 | port=self.subsonic.port,
65 | username=self.subsonic.username,
66 | password=self.subsonic.password)
67 |
68 | if initial:
69 | self.is_initial_synced = True
70 |
71 | if state["connection_version"] != connection_version:
72 | logger.info("Initial synchronization is required.")
73 | server_changed = True
74 | else:
75 | # The initial state should be committed, even though no
76 | # synchronization is required.
77 | self.provider.update()
78 |
79 | return
80 |
81 | # Start session
82 | try:
83 | with self.db.get_write_cursor() as cursor:
84 | # Prepare variables
85 | self.cursor = cursor
86 |
87 | # Determine version numbers
88 | logger.debug("Synchronizing version numbers.")
89 |
90 | self.sync_versions()
91 |
92 | # Start synchronizing
93 | logger.debug("Synchronizing database and base container.")
94 |
95 | self.sync_database()
96 | self.sync_base_container()
97 |
98 | self.items_by_remote_id = self.cursor.query_dict(
99 | """
100 | SELECT
101 | `items`.`remote_id`,
102 | `items`.`id`,
103 | `items`.`checksum`
104 | FROM
105 | `items`
106 | WHERE
107 | `items`.`database_id` = ?
108 | """, self.database_id)
109 | self.artists_by_remote_id = {}
110 | self.albums_by_remote_id = {}
111 | self.base_container_items_by_item_id = {}
112 | self.containers_by_remote_id = {}
113 |
114 | # Items
115 | logger.debug("Synchronizing items.")
116 |
117 | if self.items_version != state.get("items_version"):
118 | self.sync_items()
119 | items_changed = True
120 | else:
121 | logger.info("Items haven't been modified.")
122 |
123 | # Containers
124 | logger.debug("Synchronizing containers.")
125 |
126 | if self.containers_version != state.get("containers_version"):
127 | self.sync_containers()
128 | containers_changed = True
129 | else:
130 | logger.info("Containers haven't been modified.")
131 |
132 | # Merge changes into the server. Lock access to provider because
133 | # multiple synchronizers could be active.
134 | self.update_server(items_changed, containers_changed)
135 | finally:
136 | # Make sure that everything is cleaned up
137 | self.cursor = None
138 |
139 | self.items_by_remote_id = {}
140 | self.artists_by_remote_id = {}
141 | self.albums_by_remote_id = {}
142 | self.base_container_items_by_item_id = {}
143 | self.containers_by_remote_id = {}
144 |
145 | # Update state if items and/or containers have changed.
146 | if items_changed or containers_changed or server_changed:
147 | state["connection_version"] = connection_version
148 | state["items_version"] = self.items_version
149 | state["containers_version"] = self.containers_version
150 |
151 | self.state.save()
152 |
153 | logger.info("Synchronization finished.")
154 |
155 | def update_server(self, items_changed, containers_changed):
 156 |         """
 157 |         Merge the synchronized changes into the DAAP server structure.
 158 |         """
159 | changed = False
160 |
161 | # Helper methods
162 | def updated_ids(items):
163 | for value in items.itervalues():
164 | if value.get("updated"):
165 | yield value["id"]
166 |
167 | def removed_ids(items):
168 | for value in items.itervalues():
169 | if "updated" not in value:
170 | yield value["id"]
171 |
172 | def has_updated_ids(items):
173 | for _ in updated_ids(items):
174 | return True
175 | return False
176 |
177 | def has_removed_ids(items):
178 | for _ in removed_ids(items):
179 | return True
180 | return False
181 |
182 | def should_update(items):
183 | return has_updated_ids(items) or has_removed_ids(items)
184 |
185 | # Update the server
186 | server = self.provider.server
187 | database = server.databases[self.database_id]
188 | base_container = database.containers[self.base_container_id]
189 |
190 | # Items
191 | if items_changed:
192 | if should_update(self.items_by_remote_id):
193 | database.items.remove_ids(removed_ids(self.items_by_remote_id))
194 | database.items.update_ids(updated_ids(self.items_by_remote_id))
195 |
196 | changed = True
197 |
198 | # Base container and container items
199 | if should_update(self.base_container_items_by_item_id):
200 | database.containers.update_ids([self.base_container_id])
201 |
202 | base_container.container_items.remove_ids(
203 | removed_ids(self.base_container_items_by_item_id))
204 | base_container.container_items.update_ids(
205 | updated_ids(self.base_container_items_by_item_id))
206 |
207 | changed = True
208 |
209 | # Other containers and container items
210 | if containers_changed:
211 | if should_update(self.containers_by_remote_id):
212 | database.containers.remove_ids(
213 | removed_ids(self.containers_by_remote_id))
214 | database.containers.update_ids(
215 | updated_ids(self.containers_by_remote_id))
216 |
 217 |             for row in self.containers_by_remote_id.itervalues():
 218 |                 if "updated" in row:
 219 |                     new_ids = row["container_items"]
 220 |                     container = database.containers[row["id"]]
 221 |
 222 |                     container.container_items.remove_ids(
 223 |                         container.container_items)
 224 |                     container.container_items.update_ids(new_ids)
 225 |
 226 |                     changed = True
227 |
228 | # Only update database if any of the above parts have changed.
229 | if changed:
230 | server.databases.update_ids([self.database_id])
231 |
232 | # Notify provider of a new structure.
233 | logger.debug("Notifying provider that structure has changed.")
234 | self.provider.update()
235 |
236 | return changed
237 |
238 | def sync_versions(self):
239 | """
 240 |         Read the remote index and playlists, and store their versions, so
 241 |         it can be decided whether synchronization is required. For the
 242 |         index, a `lastModified` property is available in SubSonic's
 243 |         responses. A similar property exists for playlists (SubSonic 5.3+).
244 | """
245 |
246 | state = self.state["synchronizers"][self.index]
247 |
248 | items_version = 0
249 | containers_version = 0
250 |
 251 |         # Items version (last modified property). The ifModifiedSince
 252 |         # parameter is in milliseconds.
253 | response = self.subsonic.getIndexes(
254 | ifModifiedSince=state["items_version"] or 0)
255 |
256 | if "lastModified" in response["indexes"]:
257 | items_version = response["indexes"]["lastModified"]
258 | else:
259 | items_version = state["items_version"]
260 |
261 | # Playlists
262 | response = self.subsonic.getPlaylists()
263 |
264 | for playlist in response["playlists"]["playlist"]:
265 | containers_checksum = utils.dict_checksum(playlist)
266 | containers_version = (containers_version + containers_checksum) \
267 | % 0xFFFFFFFF
268 |
269 | # Return version numbers
270 | self.items_version = items_version
271 | self.containers_version = containers_version
272 |
273 | def sync_database(self):
 274 |         """
 275 |         Synchronize the database row that represents this connection.
 276 |         """
277 | # Calculate checksum
278 | checksum = utils.dict_checksum(
279 | name=self.name, remote_id=self.index)
280 |
281 | # Fetch existing item
282 | try:
283 | row = self.cursor.query_one(
284 | """
285 | SELECT
286 | `databases`.`id`,
287 | `databases`.`checksum`
288 | FROM
289 | `databases`
290 | WHERE
291 | `databases`.`remote_id` = ?
292 | """, self.index)
293 | except IndexError:
294 | row = None
295 |
296 | # To insert or to update
297 | if row is None:
298 | database_id = self.cursor.query(
299 | """
300 | INSERT INTO `databases` (
301 | `persistent_id`,
302 | `name`,
303 | `checksum`,
304 | `remote_id`)
305 | VALUES
306 | (?, ?, ?, ?)
307 | """,
308 | generate_persistent_id(),
309 | self.name,
310 | checksum,
311 | self.index).lastrowid
312 | elif row["checksum"] != checksum:
313 | database_id = row["id"]
314 | self.cursor.query(
315 | """
316 | UPDATE
317 | `databases`
318 | SET
319 | `name` = ?,
320 | `checksum` = ?
321 | WHERE
322 | `databases`.`id` = ?
323 | """,
324 | self.name,
325 | checksum,
326 | database_id)
327 | else:
328 | database_id = row["id"]
329 |
330 | # Update cache
331 | self.database_id = database_id
332 |
333 | def sync_base_container(self):
 334 |         """
 335 |         Synchronize the base container, which contains all items.
 336 |         """
337 | # Calculate checksum
338 | checksum = utils.dict_checksum(
339 | is_base=True, is_smart=False, name=self.name)
340 |
341 | # Fetch existing item
342 | try:
343 | row = self.cursor.query_one(
344 | """
345 | SELECT
346 | `containers`.`id`,
347 | `containers`.`checksum`
348 | FROM
349 | `containers`
350 | WHERE
351 | `containers`.`database_id` = ? AND
352 | `containers`.`is_base` = 1
353 | """, self.database_id)
354 | except IndexError:
355 | row = None
356 |
357 | # To insert or to update
358 | if row is None:
359 | base_container_id = self.cursor.query(
360 | """
361 | INSERT INTO `containers` (
362 | `persistent_id`,
363 | `database_id`,
364 | `name`,
365 | `is_base`,
366 | `is_smart`,
367 | `checksum`)
368 | VALUES
369 | (?, ?, ?, ?, ?, ?)
370 | """,
371 | generate_persistent_id(),
372 | self.database_id,
373 | self.name,
374 | True,
375 | False,
376 | checksum).lastrowid
377 | elif row["checksum"] != checksum:
378 | base_container_id = row["id"]
379 | self.cursor.query(
380 | """
381 | UPDATE
382 | `containers`
383 | SET
384 | `name` = ?,
385 | `is_base` = ?,
386 | `is_smart` = ?,
387 | `checksum` = ?
388 | WHERE
389 | `containers`.`id` = ?
390 | """,
391 | self.name,
392 | True,
393 | False,
394 | checksum,
395 | base_container_id)
396 | else:
397 | base_container_id = row["id"]
398 |
399 | # Update cache
400 | self.base_container_id = base_container_id
401 |
402 | def sync_items(self):
 403 |         """
 404 |         Synchronize all items, artists, albums and base container items.
 405 |         """
406 | # Helper methods
407 | def is_artist_processed(item):
408 | return item["artistId"] in self.artists_by_remote_id and \
409 | "updated" in self.artists_by_remote_id[item["artistId"]]
410 |
411 | def is_synthetic_artist_processed(item):
412 | return item["artist"] in self.synthetic_artists_by_name and \
413 | "updated" in self.synthetic_artists_by_name[item["artist"]]
414 |
415 | def is_album_processed(album_id):
 416 |             # Album IDs in the cache are strings; make sure the types match.
417 | return album_id in self.albums_by_remote_id and \
418 | "updated" in self.albums_by_remote_id[album_id]
419 |
420 | def removed_ids(items):
421 | for value in items.itervalues():
422 | if "updated" not in value:
423 | yield value["id"]
424 |
425 | # Index items, artists, albums and container items by remote IDs.
426 | self.artists_by_remote_id = self.cursor.query_dict(
427 | """
428 | SELECT
429 | `artists`.`remote_id`,
430 | `artists`.`id`,
431 | `artists`.`checksum`
432 | FROM
433 | `artists`
434 | WHERE
435 | `artists`.`database_id` = ? AND
436 | `artists`.`remote_id` IS NOT NULL
437 | """, self.database_id)
438 | self.synthetic_artists_by_name = self.cursor.query_dict(
439 | """
440 | SELECT
441 | `artists`.`name`,
442 | `artists`.`id`,
443 | `artists`.`checksum`
444 | FROM
445 | `artists`
446 | WHERE
447 | `artists`.`database_id` = ? AND
448 | `artists`.`remote_id` IS NULL
449 | """, self.database_id)
450 | self.albums_by_remote_id = self.cursor.query_dict(
451 | """
452 | SELECT
453 | `albums`.`remote_id`,
454 | `albums`.`id`,
455 | `albums`.`artist_id`,
456 | `albums`.`checksum`
457 | FROM
458 | `albums`
459 | WHERE
460 | `albums`.`database_id` = ?
461 | """, self.database_id)
462 | self.base_container_items_by_item_id = self.cursor.query_dict(
463 | """
464 | SELECT
465 | `container_items`.`item_id`,
466 | `container_items`.`id`
467 | FROM
468 | `container_items`
469 | WHERE
470 | `container_items`.`container_id` = ?
471 | """, self.base_container_id)
472 |
473 | # Iterate over each item, sync artist, album, item and container item.
474 | for item in self.subsonic.walk_index():
475 | if "artistId" in item:
476 | if not is_artist_processed(item):
477 | self.sync_artist(item)
478 | elif "artist" in item:
479 | if not is_synthetic_artist_processed(item):
480 | self.sync_synthetic_artist(item)
481 |
482 | if "albumId" in item and not is_album_processed(item["albumId"]):
 483 |             # Load the album. Remove all songs from the album, since they
 484 |             # would often change the checksum (e.g. due to play count).
485 | album = self.subsonic.getAlbum(item["albumId"]).get('album')
486 | album.pop('song', None)
487 |
488 | if not is_artist_processed(album):
489 | self.sync_artist(album)
490 |
491 | self.sync_album(album)
492 |
493 | self.sync_item(item)
494 | self.sync_base_container_item(item)
495 |
496 | # Delete old artist, albums, items and container items
497 | self.cursor.query("""
498 | DELETE FROM
499 | `container_items`
500 | WHERE
501 | `container_items`.`id` IN (%s)
502 | """ % utils.in_list(removed_ids(
503 | self.base_container_items_by_item_id)))
504 | self.cursor.query("""
505 | DELETE FROM
506 | `items`
507 | WHERE
508 | `items`.`id` IN (%s)
509 | """ % utils.in_list(removed_ids(self.items_by_remote_id)))
510 | self.cursor.query("""
511 | DELETE FROM
512 | `artists`
513 | WHERE
514 | `artists`.`id` IN (%s) AND
515 | `artists`.`remote_id` IS NOT NULL
516 | """ % utils.in_list(removed_ids(self.artists_by_remote_id)))
517 | self.cursor.query("""
518 | DELETE FROM
519 | `artists`
520 | WHERE
521 | `artists`.`id` IN (%s) AND
522 | `artists`.`remote_id` IS NULL
523 | """ % utils.in_list(removed_ids(self.synthetic_artists_by_name)))
524 | self.cursor.query("""
525 | DELETE FROM
526 | `albums`
527 | WHERE
528 | `albums`.`id` IN (%s)
529 | """ % utils.in_list(removed_ids(self.albums_by_remote_id)))
530 |
531 | def sync_item(self, item):
 532 |         """
 533 |         Synchronize a single item, including artist and album references.
 534 |         """
535 | logger.debug("[Item:%s] %s", item['id'], item['title'])
536 |
537 | def find_artist_by_id(artist_id):
538 | for artist in self.artists_by_remote_id.itervalues():
539 | if artist["id"] == artist_id:
540 | return artist
541 |
542 | checksum = utils.dict_checksum(item)
543 |
544 | album = self.albums_by_remote_id.get(item.get("albumId"))
545 | artist = self.artists_by_remote_id.get(item.get("artistId"))
546 | album_artist = find_artist_by_id(album["artist_id"]) if album else None
547 |
 548 |         # The artist can be None, which is the case for items with featuring
 549 |         # artists that have no artist ID of their own.
550 | if artist is None and item.get("artist"):
551 | artist = self.synthetic_artists_by_name.get(item["artist"])
552 |
553 | # If there still is no artist, use the album artist.
554 | if not artist and album_artist:
555 | artist = album_artist
556 |
557 | # Fetch existing item
558 | try:
559 | row = self.items_by_remote_id[item["id"]]
560 | except KeyError:
561 | row = None
562 |
563 | # To insert or to update
564 | updated = True
565 |
566 | if row is None:
567 | item_id = self.cursor.query(
568 | """
569 | INSERT INTO `items` (
570 | `persistent_id`,
571 | `database_id`,
572 | `artist_id`,
573 | `album_artist_id`,
574 | `album_id`,
575 | `name`,
576 | `genre`,
577 | `year`,
578 | `track`,
579 | `duration`,
580 | `bitrate`,
581 | `file_name`,
582 | `file_type`,
583 | `file_suffix`,
584 | `file_size`,
585 | `checksum`,
586 | `remote_id`)
587 | VALUES
588 | (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
589 | """,
590 | generate_persistent_id(),
591 | self.database_id,
592 | artist["id"] if artist else None,
593 | album_artist["id"] if album_artist else None,
594 | album["id"] if album else None,
595 | item.get("title"),
596 | item.get("genre"),
597 | item.get("year"),
598 | item.get("track"),
599 | item["duration"] * 1000 if "duration" in item else None,
600 | item.get("bitRate"),
601 | item.get("path"),
602 | item.get("contentType"),
603 | item.get("suffix"),
604 | item.get("size"),
605 | checksum,
606 | item["id"]).lastrowid
607 | elif row["checksum"] != checksum:
608 | item_id = row["id"]
609 | self.cursor.query(
610 | """
611 | UPDATE
612 | `items`
613 | SET
614 | `artist_id` = ?,
615 | `album_artist_id` = ?,
616 | `album_id` = ?,
617 | `name` = ?,
618 | `genre` = ?,
619 | `year` = ?,
620 | `track` = ?,
621 | `duration` = ?,
622 | `bitrate` = ?,
623 | `file_name` = ?,
624 | `file_type` = ?,
625 | `file_suffix` = ?,
626 | `file_size` = ?,
627 | `checksum` = ?
628 | WHERE
629 | `items`.`id` = ?
630 | """,
631 | artist["id"] if artist else None,
632 | album_artist["id"] if album_artist else None,
633 | album["id"] if album else None,
634 | item.get("title"),
635 | item.get("genre"),
636 | item.get("year"),
637 | item.get("track"),
638 | item["duration"] * 1000 if "duration" in item else None,
639 | item.get("bitRate"),
640 | item.get("path"),
641 | item.get("contentType"),
642 | item.get("suffix"),
643 | item.get("size"),
644 | checksum,
645 | item_id)
646 | else:
647 | updated = False
648 | item_id = row["id"]
649 |
650 | # Update cache
651 | self.items_by_remote_id[item["id"]] = {
652 | "remote_id": item["id"],
653 | "id": item_id,
654 | "checksum": checksum,
655 | "updated": updated
656 | }
657 |
658 | def sync_base_container_item(self, item):
 659 |         """
 660 |         Synchronize the base container item for the given item.
 661 |         """
662 | item_row = self.items_by_remote_id[item["id"]]
663 |
664 | # Fetch existing item
665 | try:
666 | row = self.base_container_items_by_item_id[item_row["id"]]
667 | except KeyError:
668 | row = None
669 |
670 | # To insert or not
671 | updated = False
672 |
673 | if row is None:
674 | updated = True
675 | base_container_item_id = self.cursor.query(
676 | """
677 | INSERT INTO `container_items` (
678 | `database_id`,
679 | `container_id`,
680 | `item_id`)
681 | VALUES
682 | (?, ?, ?)
683 | """,
684 | self.database_id,
685 | self.base_container_id,
686 | item_row["id"]).lastrowid
687 | else:
688 | base_container_item_id = row["id"]
689 |
690 | # Update cache
691 | self.base_container_items_by_item_id[item_row["id"]] = {
692 | "item_id": item_row["id"],
693 | "id": base_container_item_id,
694 | "updated": updated
695 | }
696 |
697 | def sync_artist(self, item):
 698 |         """
 699 |         Synchronize an artist that has a remote ID.
 700 |         """
701 | logger.debug("[Artist:%s] %s", item['artistId'], item['artist'])
702 |
703 | checksum = utils.dict_checksum(name=item["artist"])
704 |
705 | # Fetch existing item
706 | try:
707 | row = self.artists_by_remote_id[item["artistId"]]
708 | except KeyError:
709 | row = None
710 |
711 | # To insert or to update
712 | updated = True
713 |
714 | if row is None:
715 | artist_id = self.cursor.query(
716 | """
717 | INSERT INTO `artists` (
718 | `database_id`,
719 | `name`,
720 | `remote_id`,
721 | `checksum`)
722 | VALUES
723 | (?, ?, ?, ?)
724 | """,
725 | self.database_id,
726 | item["artist"],
727 | item["artistId"],
728 | checksum).lastrowid
729 | elif row["checksum"] != checksum:
730 | artist_id = row["id"]
731 | self.cursor.query(
732 | """
733 | UPDATE
734 | `artists`
735 | SET
736 | `name` = ?,
737 | `checksum` = ?
738 | WHERE
739 | `artists`.`id` = ?
740 | """,
741 | item["artist"],
742 | checksum,
743 | artist_id)
744 | else:
745 | updated = False
746 | artist_id = row["id"]
747 |
748 | # Update cache
749 | self.artists_by_remote_id[item["artistId"]] = {
750 | "remote_id": item["artistId"],
751 | "id": artist_id,
752 | "checksum": checksum,
753 | "updated": updated
754 | }
755 |
756 | def sync_synthetic_artist(self, item):
 757 |         """
 758 |         Synchronize an artist that is known by name only, without remote ID.
 759 |         """
760 | checksum = utils.dict_checksum(name=item["artist"])
761 |
762 | # Fetch existing item
763 | try:
764 | row = self.synthetic_artists_by_name[item["artist"]]
765 | except KeyError:
766 | row = None
767 |
768 | # To insert or to update
769 | updated = True
770 |
771 | if row is None:
772 | artist_id = self.cursor.query(
773 | """
774 | INSERT INTO `artists` (
775 | `database_id`,
776 | `name`,
777 | `checksum`)
778 | VALUES
779 | (?, ?, ?)
780 | """,
781 | self.database_id,
782 | item["artist"],
783 | checksum).lastrowid
784 | elif row["checksum"] != checksum:
785 | artist_id = row["id"]
786 | self.cursor.query(
787 | """
788 | UPDATE
789 | `artists`
790 | SET
791 | `name` = ?,
792 | `checksum` = ?
793 | WHERE
794 | `artists`.`id` = ?
795 | """,
796 | item["artist"],
797 | checksum,
798 | artist_id)
799 | else:
800 | updated = False
801 | artist_id = row["id"]
802 |
803 | # Update cache
804 | self.synthetic_artists_by_name[item["artist"]] = {
805 | "id": artist_id,
806 | "checksum": checksum,
807 | "updated": updated
808 | }
809 |
810 | def sync_album(self, album):
 811 |         """
 812 |         Synchronize a single album.
 813 |         """
814 | logger.debug("[Album:%s] %s", album['id'], album['name'])
815 |
816 | checksum = utils.dict_checksum(album)
817 | artist_row = self.artists_by_remote_id.get(album.get("artistId"))
818 |
819 | # Fetch existing item
820 | try:
821 | row = self.albums_by_remote_id[album["id"]]
822 | except KeyError:
823 | row = None
824 |
825 | # To insert or to update
826 | updated = True
827 |
828 | if row is None:
829 | album_id = self.cursor.query(
830 | """
831 | INSERT INTO `albums` (
832 | `database_id`,
833 | `artist_id`,
834 | `name`,
835 | `art`,
836 | `checksum`,
837 | `remote_id`)
838 | VALUES
839 | (?, ?, ?, ?, ?, ?)
840 | """,
841 | self.database_id,
842 | artist_row["id"] if artist_row else None,
843 | album["name"],
844 | "coverArt" in album,
845 | checksum,
846 | album["id"]).lastrowid
847 | elif row["checksum"] != checksum:
848 | album_id = row["id"]
849 | self.cursor.query(
850 | """
851 | UPDATE
852 | `albums`
853 | SET
854 | `name` = ?,
855 | `art` = ?,
856 | `checksum` = ?
857 | WHERE
858 | `albums`.`id` = ?
859 | """,
860 | album["name"],
861 | "coverArt" in album,
862 | checksum,
863 | album_id)
864 | else:
865 | updated = False
866 | album_id = row["id"]
867 |
868 | # Update cache
869 | self.albums_by_remote_id[int(album["id"])] = {
870 | "remote_id": album["id"],
871 | "id": album_id,
 872 |             "artist_id": artist_row["id"] if artist_row else None,
873 | "checksum": checksum,
874 | "updated": updated
875 | }
876 |
877 | def sync_containers(self):
 878 |         """
 879 |         Synchronize all containers (playlists) with the local database.
 880 |         """
881 | def removed_ids(items):
882 | for value in items.itervalues():
883 | if "updated" not in value:
884 | yield value["id"]
885 |
886 | # Index containers by remote IDs.
887 | self.containers_by_remote_id = self.cursor.query_dict(
888 | """
889 | SELECT
890 | `containers`.`remote_id`,
891 | `containers`.`id`,
892 | `containers`.`checksum`
893 | FROM
894 | `containers`
895 | WHERE
896 | `containers`.`database_id` = ? AND NOT
897 | `containers`.`id` = ?
898 | """, self.database_id, self.base_container_id)
899 |
900 | # Iterate over each playlist.
901 | for container in self.subsonic.walk_playlists():
902 | self.sync_container(container)
903 |
904 | if self.containers_by_remote_id[container["id"]]["updated"]:
905 | self.sync_container_items(container)
906 |
907 | # Delete old containers and container items.
908 | self.cursor.query("""
909 | DELETE FROM
910 | `containers`
911 | WHERE
912 | `containers`.`id` IN (%s)
913 | """ % utils.in_list(removed_ids(self.containers_by_remote_id)))
914 |
915 | def sync_container(self, container):
 916 |         """
 917 |         Synchronize a single container (playlist).
 918 |         """
919 | checksum = utils.dict_checksum(
920 | is_base=False, name=container["name"],
921 | song_count=container["songCount"], changed=container["changed"])
922 |
923 | # Fetch existing item
924 | try:
925 | row = self.containers_by_remote_id[container["id"]]
926 | except KeyError:
927 | row = None
928 |
929 | # To insert or to update
930 | updated = True
931 |
932 | if row is None:
933 | container_id = self.cursor.query(
934 | """
935 | INSERT INTO `containers` (
936 | `persistent_id`,
937 | `database_id`,
938 | `parent_id`,
939 | `name`,
940 | `is_base`,
941 | `is_smart`,
942 | `checksum`,
943 | `remote_id`)
944 | VALUES
945 | (?, ?, ?, ?, ?, ?, ?, ?)
946 | """,
947 | generate_persistent_id(),
948 | self.database_id,
949 | self.base_container_id,
950 | container["name"],
951 | False,
952 | False,
953 | checksum,
954 | container["id"]).lastrowid
955 | elif row["checksum"] != checksum:
956 | container_id = row["id"]
957 | self.cursor.query(
958 | """
959 | UPDATE
960 | `containers`
961 | SET
962 | `name` = ?,
963 | `is_base` = ?,
964 | `is_smart` = ?,
965 | `checksum` = ?
966 | WHERE
967 | `containers`.`id` = ?
968 | """,
969 | container["name"],
970 | False,
971 | False,
972 | checksum,
973 | container_id)
974 | else:
975 | updated = False
976 | container_id = row["id"]
977 |
978 | # Update cache
979 | self.containers_by_remote_id[container["id"]] = {
980 | "remote_id": container["id"],
981 | "id": container_id,
982 | "checksum": checksum,
983 | "updated": updated,
984 | "container_items": []
985 | }
986 |
987 | def sync_container_items(self, container):
 988 |         """
 989 |         Synchronize all container items of the given container.
 990 |         """
 991 |         # Synchronizing container items is hard: there is no easy way to
 992 |         # see what has changed between two containers. Therefore, delete
 993 |         # all container items and re-add every item in the specified order.
994 | self.cursor.query("""
995 | DELETE FROM
996 | `container_items`
997 | WHERE
998 | `container_items`.`container_id` = ?
999 | """, self.containers_by_remote_id[container["id"]]["id"])
1000 |
1001 | for container_item in self.subsonic.walk_playlist(container["id"]):
1002 | self.sync_container_item(container, container_item)
1003 |
1004 | def sync_container_item(self, container, container_item):
 1005 |         """
 1006 |         Insert a single container item (playlist entry).
 1007 |         """
1008 | item_row = self.items_by_remote_id[container_item["id"]]
1009 | container_id = self.containers_by_remote_id[container["id"]]["id"]
1010 |
1011 | container_item_id = self.cursor.query(
1012 | """
1013 | INSERT INTO `container_items` (
1014 | `database_id`,
1015 | `container_id`,
1016 | `item_id`,
1017 | `order`)
1018 | VALUES
1019 | (?, ?, ?, ?)
1020 | """,
1021 | self.database_id,
1022 | container_id,
1023 | item_row["id"],
1024 | container_item["order"]).lastrowid
1025 |
1026 | # Update cache
1027 | self.containers_by_remote_id[container["id"]][
1028 | "container_items"].append(container_item_id)
1029 |
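
# A minimal usage sketch (illustrative; the actual wiring lives in
# application.py and may differ):
#
#   synchronizer = Synchronizer(
#       db, state, index=0, name="Music", subsonic=connection)
#   synchronizer.provider = provider  # Assigned externally by the application.
#   synchronizer.synchronize(initial=True)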
--------------------------------------------------------------------------------
/subdaap/templates/index.html:
--------------------------------------------------------------------------------
1 | <!DOCTYPE html>
2 | <html>
3 | <head>
4 |     <meta charset="utf-8">
5 |     <title>SubDaap</title>
6 |     <link rel="stylesheet" href="/static/css/pure-min.css">
7 | </head>
8 | <body>
9 |     <h1>{{ provider.server.name }}</h1>
10 |
11 |     <h2>Actions</h2>
12 |
13 |     <p>
14 |         <a href="/actions/shutdown">Shutdown</a>
15 |         <a href="/actions/clean">Clean cache</a>
16 |         <a href="/actions/expire">Expire cache</a>
17 |         <a href="/actions/synchronize">Synchronize</a>
18 |     </p>
19 |
20 |     <h2>Connections</h2>
21 |
22 |     <p>
23 |         List of configured connections.
24 |     </p>
25 |
26 |     <table class="pure-table">
27 |         <thead>
28 |             <tr>
29 |                 <th>ID</th>
30 |                 <th>Name</th>
31 |                 <th>URL</th>
32 |                 <th>Synchronization</th>
33 |                 <th>Transcode</th>
34 |             </tr>
35 |         </thead>
36 |         <tbody>
37 |             {% for connection in application.connections.itervalues() %}
38 |             <tr>
39 |                 <td>{{ connection.index }}</td>
40 |                 <td>{{ connection.name }}</td>
41 |                 <td>{{ connection.url }}</td>
42 |                 <td>{{ connection.synchronization }}</td>
43 |                 <td>{{ connection.transcode }}</td>
44 |             </tr>
45 |             {% else %}
46 |             <tr>
47 |                 <td colspan="5">No connections configured.</td>
48 |             </tr>
49 |             {% endfor %}
50 |         </tbody>
51 |     </table>
52 |
53 |     <h2>Server databases</h2>
54 |
55 |     <p>
56 |         Current server revision is {{ provider.revision }}.
57 |     </p>
58 |
59 |     <table class="pure-table">
60 |         <thead>
61 |             <tr>
62 |                 <th>ID</th>
63 |                 <th>Name</th>
64 |                 <th>Items</th>
65 |                 <th>Containers</th>
66 |             </tr>
67 |         </thead>
68 |         <tbody>
69 |             {% for database in provider.server.databases.itervalues() %}
70 |             <tr>
71 |                 <td>{{ database.id }}</td>
72 |                 <td>{{ database.name }}</td>
73 |                 <td>{{ database.items|length }}</td>
74 |                 <td>{{ database.containers|length }}</td>
75 |             </tr>
76 |             {% endfor %}
77 |         </tbody>
78 |     </table>
79 |
80 |     <h2>Connected clients</h2>
81 |
82 |     <p>
83 |         In total, {{ provider.session_counter }} clients have been served since startup.
84 |     </p>
85 |
86 |     <table class="pure-table">
87 |         <thead>
88 |             <tr>
89 |                 <th>ID</th>
90 |                 <th>Revision</th>
91 |                 <th>State</th>
92 |                 <th>Since</th>
93 |                 <th>Remote address</th>
94 |                 <th>User agent</th>
95 |                 <th>Protocol version</th>
96 |                 <th>Items played</th>
97 |             </tr>
98 |         </thead>
99 |         <tbody>
100 |             {% for id, session in provider.sessions.iteritems() %}
101 |             <tr>
102 |                 <td>{{ id }}</td>
103 |                 <td>{{ session.revision }}</td>
104 |                 <td>{{ session.state }}</td>
105 |                 <td>{{ session.since.strftime('%Y-%m-%d %H:%M:%S') }}</td>
106 |                 <td>{{ session.remote_address }}</td>
107 |                 <td>{{ session.user_agent }}</td>
108 |                 <td>{{ session.client_version }}</td>
109 |                 <td>{{ session.counters["items_unique"] }}</td>
110 |             </tr>
111 |             {% else %}
112 |             <tr>
113 |                 <td colspan="8">No clients connected.</td>
114 |             </tr>
115 |             {% endfor %}
116 |         </tbody>
117 |     </table>
118 |
119 |     <h2>Cached items</h2>
120 |
121 |     <p>
122 |         Current size of the item cache is {{ cache_manager.item_cache.current_size|human_bytes }} of
123 |         {{ cache_manager.item_cache.max_size|human_bytes }} ({{ cache_manager.item_cache.items|length }} items of which
124 |         {{ cache_manager.item_cache.permanent_cache_keys|length }} are permanent) and the current size of the artwork
125 |         cache is {{ cache_manager.artwork_cache.current_size|human_bytes }} of
126 |         {{ cache_manager.artwork_cache.max_size|human_bytes }} ({{ cache_manager.artwork_cache.items|length }} images
127 |         of which {{ cache_manager.artwork_cache.permanent_cache_keys|length }} are permanent). The list below shows
128 |         the items that are currently in use.
129 |     </p>
130 |
131 |     <table class="pure-table">
132 |         <thead>
133 |             <tr>
134 |                 <th>Cache key</th>
135 |                 <th>Size</th>
136 |                 <th>Uses</th>
137 |                 <th>Ready</th>
138 |                 <th>Permanent</th>
139 |             </tr>
140 |         </thead>
141 |         <tbody>
142 |             {% for cache_key, cache_item in cache_manager.item_cache.items.iteritems() if cache_item.ready %}
143 |             <tr>
144 |                 <td>{{ cache_key }}</td>
145 |                 <td>{{ cache_item.size|human_bytes }}</td>
146 |                 <td>{{ cache_item.uses }}</td>
147 |                 <td>{{ cache_item.ready.is_set() }}</td>
148 |                 <td>{{ cache_item.permanent }}</td>
149 |             </tr>
150 |             {% else %}
151 |             <tr>
152 |                 <td colspan="5">No cached items in use.</td>
153 |             </tr>
154 |             {% endfor %}
155 |         </tbody>
156 |     </table>
157 |
158 |     <p>
159 |         SubDaap v2.1.0 by BasilFX.
160 |         <a href="https://github.com/basilfx/SubDaap">GitHub project page</a>.
161 |     </p>
162 | </body>
163 | </html>
164 |
--------------------------------------------------------------------------------
/subdaap/utils.py:
--------------------------------------------------------------------------------
1 | import argparse
2 | import zlib
3 | import os
4 |
5 |
6 | class VerboseAction(argparse.Action):
7 | """
 8 |     Argparse action to count the verbosity level (e.g. -v, -vv).
9 | """
10 |
11 | def __call__(self, parser, args, value, option_string=None):
12 | try:
13 | value = int(value or "1")
14 | except ValueError:
15 | value = value.count("v") + 1
16 |
17 | setattr(args, self.dest, value)
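
# Illustrative wiring (the actual argument setup lives in SubDaap.py and may
# differ):
#
#   parser.add_argument(
#       "-v", "--verbose", nargs="?", action=VerboseAction, default=0)
#
# With this, "-v" yields 1, "-vv" yields 2 and "-v 3" yields 3.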
18 |
19 |
20 | class NewPathAction(argparse.Action):
21 | """
 22 |     Argparse action that resolves a given path to an absolute path.
23 | """
24 |
25 | def __call__(self, parser, args, values, option_string=None):
26 | setattr(args, self.dest, os.path.abspath(values))
27 |
28 |
29 | class PathAction(argparse.Action):
30 | """
31 | Argparse action that resolves a given path, and ensures it exists.
32 | """
33 |
34 | def __call__(self, parser, args, values, option_string=None):
35 | path = os.path.abspath(values)
36 |
37 | if not os.path.exists(path):
38 | parser.error("Path doesn't exist for '%s': %s" % (
39 | option_string, path))
40 |
41 | setattr(args, self.dest, path)
42 |
43 |
44 | def dict_checksum(*args, **kwargs):
45 | """
46 | Calculate a hash of the values of a dictionary.
47 | """
48 |
49 | # Accept kwargs as input dict
50 | if len(args) == 1:
51 | input_dict = args[0]
52 | else:
53 | input_dict = kwargs
54 |
55 | # Calculate checksum
56 | data = bytearray()
57 |
58 | for value in input_dict.itervalues():
59 | if type(value) != unicode:
60 | value = unicode(value)
61 | data.extend(bytearray(value.encode("utf-8")))
62 |
63 | return zlib.adler32(buffer(data))
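
# For example (illustrative), keyword arguments and an equivalent dict give
# the same checksum, because only the values are hashed:
#
#   dict_checksum(name="Music", remote_id=1) == \
#       dict_checksum({"name": "Music", "remote_id": 1})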
64 |
65 |
66 | def force_dict(value):
67 | """
68 | Coerce the input value to a dict.
69 | """
70 |
71 | if type(value) == dict:
72 | return value
73 | else:
74 | return {}
75 |
76 |
77 | def force_list(value):
78 | """
79 | Coerce the input value to a list.
80 |
81 | If `value` is `None`, return an empty list. If it is a single value, create
82 | a new list with that element on index 0.
83 |
84 | :param value: Input value to coerce.
85 | :return: Value as list.
86 | :rtype: list
87 | """
88 |
89 | if value is None:
90 | return []
91 | elif type(value) == list:
92 | return value
93 | else:
94 | return [value]
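
# For example: force_list(None) returns [], force_list(1) returns [1] and
# force_list([1, 2]) is returned as-is.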
95 |
96 |
97 | def human_bytes(size):
98 | """
99 | Convert a given size (in bytes) to a human-readable representation.
100 |
101 | :param int size: Size in bytes.
102 | :return: Human-readable representation of size, e.g. 1MB.
103 | :rtype: str
104 | """
105 |
106 | for x in ("B", "KB", "MB", "GB"):
107 | if size < 1024.0 and size > -1024.0:
108 | return "%3.1f%s" % (size, x)
109 | size /= 1024.0
110 | return "%3.1f%s" % (size, "TB")
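
# For example: human_bytes(2048) returns "2.0KB" and human_bytes(1234567)
# returns "1.2MB".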
111 |
112 |
113 | def in_list(input_list):
 114 |     """
 115 |     Join an iterable into a comma-separated string, for SQL IN clauses.
 116 |     """
 117 |     return ",".join(str(x) for x in input_list)
 118 |
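# For example: in_list([1, 2, 3]) returns "1,2,3", which can be interpolated
# into a "WHERE `id` IN (%s)" query.
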
119 | def exhaust(iterator):
120 | """
121 | Exhaust an iterator, without returning anything.
122 |
123 | :param iterator iterator: Iterator to exhaust.
124 | """
125 |
126 | for _ in iterator:
127 | pass
128 |
129 |
130 | def chunks(iterator, size):
131 | """
132 | Chunk an iterator into blocks of fixed size. Only the last block can be
133 | smaller than the specified size.
134 |
 135 |     :param iterator iterator: Iterator to consume.
 136 |     :param int size: Size of blocks to yield.
 137 |     """
 138 |
 139 |     items = [None] * size
 140 |     count = 0
 141 |
 142 |     for item in iterator:
 143 |         items[count] = item
 144 |         count += 1
 145 |
 146 |         if count == size:
 147 |             yield items
 148 |
 149 |             # Allocate a new block, so a consumer can safely keep a
 150 |             # reference to the yielded block.
 151 |             items = [None] * size
 152 |             count = 0
 153 |
 154 |     # Yield the remaining items, unless the last block was complete.
 155 |     if count:
 156 |         yield items[:count]
 157 |
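# For example (illustrative):
#
#   list(chunks(iter([1, 2, 3, 4, 5]), 2))
#
# yields [[1, 2], [3, 4], [5]]; only the last block is smaller.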
--------------------------------------------------------------------------------
/subdaap/webserver.py:
--------------------------------------------------------------------------------
1 | from subdaap import utils
2 |
3 | from flask import Response
4 | from flask import render_template, redirect, url_for, send_from_directory
5 |
6 | import logging
7 | import jinja2
8 | import os
9 |
10 | # Logger instance
11 | logger = logging.getLogger(__name__)
12 |
13 |
14 | def extend_server_app(application, app):
15 | """
16 | Since the DAAP server is basically a normal HTTP server, extend it with a
17 | web interface for easy access and statistics.
18 |
19 | If the DAAPServer was configured with a password, the default username is
20 | empty and the password is equal to the configured password.
21 |
22 | :param Application application: SubDaap application for information.
23 | :param Flask app: Flask/DAAPServer to extend.
24 | """
25 |
26 | # Set the jinja2 loader
27 | template_path = os.path.join(os.path.dirname(__file__), "templates")
28 | static_path = os.path.join(os.path.dirname(__file__), "static")
29 |
30 | app.jinja_loader = jinja2.ChoiceLoader([
31 | app.jinja_loader, jinja2.FileSystemLoader(template_path)])
32 |
33 | app.jinja_env.filters["human_bytes"] = utils.human_bytes
34 |
35 | @app.route("/")
36 | @app.authenticate
37 | def index():
38 | """
39 | Default index.
40 | """
41 |
42 | return render_template(
43 | "index.html", application=application,
44 | provider=application.provider,
45 | cache_manager=application.cache_manager)
46 |
 47 |     @app.route("/static/<path:path>")
48 | @app.authenticate
49 | def static(path):
50 | """
51 | Handle static files from the `static_path` folder.
52 | """
53 |
54 | return send_from_directory(static_path, path)
55 |
 56 |     @app.route("/actions/<action>")
57 | @app.authenticate
58 | def actions(action):
59 | """
60 | Handle actions and return to index page.
61 | """
62 |
63 | action = action.lower()
64 | logger.info("Webserver action: %s", action)
65 |
66 | # Shutdown action
67 | if action == "shutdown":
68 | application.stop()
69 |
70 | # Expire action
71 | elif action == "expire":
72 | application.cache_manager.expire()
73 |
74 | # Clean action
75 | elif action == "clean":
76 | application.cache_manager.clean(force=True)
77 |
78 | # Synchronize action
79 | elif action == "synchronize":
80 | application.synchronize(synchronization="manual")
81 |
82 | # Return back to index
83 | return redirect(url_for("index"))
84 |
85 | @app.route("/raw/tree")
86 | @app.authenticate
87 | def raw_tree():
88 | """
89 | Print a raw tree of the current server storage. This method streams,
90 | because the result can be huge.
91 | """
92 |
93 | generator = (x + "\n" for x in application.provider.server.to_tree())
94 |
95 | return Response(generator, mimetype="text/plain")
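
    # For example, the tree can be fetched with curl (illustrative; host,
    # port and password depend on the configuration):
    #
    #   curl -u ":password" http://localhost:3689/raw/tree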
96 |
--------------------------------------------------------------------------------