├── .gitignore
├── Installation Guide.txt
├── README.md
├── acoustic
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── acoustid.py
│   ├── acoustid.pyc
│   ├── acoustid_check.py
│   ├── acoustid_check.pyc
│   ├── chromaprint.py
│   ├── chromaprint.pyc
│   ├── heaven.mp3
│   ├── heaven_first_half.mp3
│   ├── heaven_second_half.mp3
│   ├── heaven_small.mp3
│   ├── test_acoustic.py
│   └── undertheice.mp3
├── blockchain.py
├── image_compare
│   ├── MonaLisa_1.jpg
│   ├── MonaLisa_2.jpg
│   ├── Other.jpg
│   ├── __init__.py
│   ├── __init__.pyc
│   ├── image_check.py
│   ├── image_check.pyc
│   └── test_image.py
├── install.sh
├── media
│   ├── MonaLisa_1.jpg
│   ├── MonaLisa_2.jpg
│   ├── Other.jpg
│   ├── doc1.txt
│   ├── doc1_same.txt
│   ├── doc2.txt
│   ├── doc3.txt
│   ├── heaven.mp3
│   ├── heaven_first_half.mp3
│   ├── heaven_second_half.mp3
│   ├── heaven_small.mp3
│   └── undertheice.mp3
├── network.py
├── requirements.txt
├── run.sh
├── static
│   ├── aboutus.png
│   ├── blockchain.css
│   ├── blockchain.js
│   ├── blockchain.png
│   ├── faq.png
│   ├── index.css
│   ├── index.js
│   └── logo.png
├── templates
│   ├── aboutus.html
│   ├── blockchain.html
│   ├── faq.html
│   └── index.html
└── text_compare
    ├── __init__.py
    ├── __init__.pyc
    ├── doc1.txt
    ├── doc1_same.txt
    ├── doc2.txt
    ├── doc3.txt
    ├── test_text.py
    └── test_text.pyc
/.gitignore:
--------------------------------------------------------------------------------
1 | venv/*
2 |
--------------------------------------------------------------------------------
/Installation Guide.txt:
--------------------------------------------------------------------------------
1 | 1st Run: Run the following commands in the terminal (without double quotes)
2 | "source ./install.sh"
3 | "python network.py"
4 |
5 | Subsequent Runs: Run the following command in the terminal (without double quotes)
6 | "source ./run.sh"
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | ![LOGO](./static/logo.png)
2 |
3 | Yanqi Liu (yliu136), Brandon Tan (tjiansin), George Spahn (gspahn1)
4 |
5 | github: https://github.com/liuyanqi/bitright
6 |
7 | BitRights is a proof-of-concept project for blockchain-based digital copyright protection. Users can upload media files that they create and permanently claim ownership of their work. Anyone on the web can view the published works and verify that the author is correct. Further, it is not possible to upload a work that is too similar to one that has already been published. Audio, image, and text files are currently supported.
8 |
9 | # Installing
10 | **1st Run: Run the following commands from inside the project directory. Installation takes a few minutes, since all required packages are installed into a virtual environment.**
11 | ```
12 | source ./install.sh
13 | python network.py
14 | ```
15 | **Subsequent Runs:**
16 | ```
17 | source ./run.sh
18 | ```
19 | **We provide some example media files in the media/ folder for you to test it out.**
20 |
21 | # How it Works
22 | Whenever a user uploads a file, a new block is added to the blockchain. This block contains the title and author of the work, a path/link to download the file, a hash of the file, the public key of the author, the timestamp, and the hash of the previous block. A hash of the file is used instead of the raw data to keep block sizes smaller. The public key of the author is included so that knowledge of the corresponding secret key allows the author to prove ownership. Like any blockchain, the hash of the previous block allows users to verify that the chain has not been tampered with.
23 |
24 | # Media Comparison
25 | In order for a block to be considered valid, the uploaded work must not be too similar to any previous files on the chain. We use perceptual hashing to perform this comparison. A perceptual hash generates a short fingerprint for a file, such that small changes to the file result in only small changes to the fingerprint. So if two fingerprints are very similar, it is safe to conclude that the files themselves are too similar.
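The fingerprint comparison just described can be sketched in a few lines. This is a minimal illustration, not the project's actual pipeline: it mirrors the popcnt-based bit comparison in `acoustic/acoustid_check.py`, substituting Python's `bin(...).count('1')` for the lookup table, and the sample fingerprints are made up.

```python
# Minimal sketch of perceptual-fingerprint comparison (illustrative only).
# Chromaprint fingerprints decode to lists of 32-bit integers; similarity
# is one minus the fraction of differing bits (bitwise Hamming distance),
# as in acoustic/acoustid_check.py.

def similarity(fp1, fp2):
    """Return a score in [0, 1]; 1.0 means bit-identical fingerprints."""
    errors = sum(bin(x ^ y).count('1') for x, y in zip(fp1, fp2))
    return 1.0 - errors / (32.0 * min(len(fp1), len(fp2)))

# Made-up fingerprints: b differs from a by a single bit, so the
# score stays very close to 1.0.
a = [0b1010, 0b1100, 0b1111]
b = [0b1010, 0b1101, 0b1111]

print(similarity(a, a))  # 1.0
print(similarity(a, b))
```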
To create the fingerprints, we use open-source software that was made for this purpose.
26 |
27 | Audio comparison: [AcoustID](https://acoustid.org/)
28 |
29 | Image comparison: [ImageMatch](https://github.com/EdjoLabs/image-match)
30 |
31 | Text comparison: [TLSH](https://github.com/trendmicro/tlsh)
32 |
33 | # Limitations
34 | In practice, the blockchain should not be hosted from a single server. A major advantage of using a blockchain is that there is no trusted authority for what is on the chain. In order to achieve this, we would need to add a lot of code for networking and communication between worker nodes. We also have not implemented a way of preventing adversarial mining. Proof of work could potentially be used, but there is currently no reward for successfully mining a block. If this were implemented, there would need to be transaction fees so that miners would have an incentive to publish blocks.
35 |
36 | Another limitation is that as the blockchain grows, comparing new fingerprints with all of the old ones becomes increasingly computationally expensive. Perhaps probabilistically checkable proofs (PCPs) could be used to speed up the verification process. Scalability seems to be the largest hurdle if this project were to be implemented in the real world.
37 |
38 | # Vision
39 | The BitRights project envisions a globally recognized platform for digital rights management with built-in support for royalty management and dynamic tagging of web assets, providing a fairer and more lucrative environment for both users and content creators.
40 |
41 | # Challenges
42 | One of the key challenges of the project is determining a reasonable approach to verifying the uniqueness of digital assets. The project uses media comparison libraries; however, these libraries are not without their flaws. From varying accuracies to determining a suitable "uniqueness" threshold, these libraries are far from perfect.
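To make the threshold problem concrete, here is a hypothetical sketch of how a similarity score becomes an accept/reject decision. The 0.9 audio cut-off mirrors the value hard-coded in `blockchain.py`; the image and text values are placeholders, and choosing them well is exactly the open problem described above.

```python
# Illustrative decision rule only: reject an upload whose similarity to an
# existing work exceeds a per-genre threshold. 0.9 matches the audio check
# in blockchain.py; the other two values are assumed placeholders.
THRESHOLDS = {'Audio': 0.9, 'Image': 0.9, 'Text': 0.9}

def is_too_similar(score, genre):
    """True when the score crosses the genre's uniqueness threshold."""
    return score > THRESHOLDS[genre]

# A near-duplicate recording is rejected; a clearly different one passes.
print(is_too_similar(0.95, 'Audio'))  # True
print(is_too_similar(0.40, 'Audio'))  # False
```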
Moreover, there is no general comparison library for all file types; each library caters only to a specific set of them.
43 |
44 | # Takeaways
45 | - DRM is a hard problem. There is no straightforward way to solve it.
46 | - While blockchains might be an approach to digital intellectual property identification and protection, they may not be the best one due to their many limitations.
47 | - Digital media comparison is a tricky and subjective field. A lot more research and innovation are required to achieve BitRights' vision.
48 |
--------------------------------------------------------------------------------
/acoustic/__init__.py:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/acoustic/__init__.py
--------------------------------------------------------------------------------
/acoustic/__init__.pyc:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/acoustic/__init__.pyc
--------------------------------------------------------------------------------
/acoustic/acoustid.py:
--------------------------------------------------------------------------------
1 | # This file is part of pyacoustid.
2 | # Copyright 2014, Adrian Sampson.
3 | # 4 | # Permission is hereby granted, free of charge, to any person obtaining 5 | # a copy of this software and associated documentation files (the 6 | # "Software"), to deal in the Software without restriction, including 7 | # without limitation the rights to use, copy, modify, merge, publish, 8 | # distribute, sublicense, and/or sell copies of the Software, and to 9 | # permit persons to whom the Software is furnished to do so, subject to 10 | # the following conditions: 11 | # 12 | # The above copyright notice and this permission notice shall be 13 | # included in all copies or substantial portions of the Software. 14 | 15 | from __future__ import division 16 | from __future__ import absolute_import 17 | 18 | import os 19 | import json 20 | import requests 21 | import contextlib 22 | import errno 23 | try: 24 | import audioread 25 | have_audioread = True 26 | except ImportError: 27 | have_audioread = False 28 | try: 29 | import chromaprint 30 | have_chromaprint = True 31 | except ImportError: 32 | have_chromaprint = False 33 | import subprocess 34 | import threading 35 | import time 36 | import gzip 37 | from io import BytesIO 38 | 39 | 40 | API_BASE_URL = 'http://api.acoustid.org/v2/' 41 | DEFAULT_META = 'recordings' 42 | REQUEST_INTERVAL = 0.33 # 3 requests/second. 43 | MAX_AUDIO_LENGTH = 120 # Seconds. 44 | FPCALC_COMMAND = 'fpcalc' 45 | FPCALC_ENVVAR = 'FPCALC' 46 | 47 | 48 | # Exceptions. 49 | 50 | class AcoustidError(Exception): 51 | """Base for exceptions in this module.""" 52 | 53 | 54 | class FingerprintGenerationError(AcoustidError): 55 | """The audio could not be fingerprinted.""" 56 | 57 | 58 | class NoBackendError(FingerprintGenerationError): 59 | """The audio could not be fingerprinted because neither the 60 | Chromaprint library nor the fpcalc command-line tool is installed. 
61 | """ 62 | 63 | 64 | class FingerprintSubmissionError(AcoustidError): 65 | """Missing required data for a fingerprint submission.""" 66 | 67 | 68 | class WebServiceError(AcoustidError): 69 | """The Web service request failed. The field ``message`` contains a 70 | description of the error. If this is an error that was specifically 71 | sent by the acoustid server, then the ``code`` field contains the 72 | acoustid error code. 73 | """ 74 | def __init__(self, message, response=None): 75 | """Create an error for the given HTTP response body, if 76 | provided, with the ``message`` as a fallback. 77 | """ 78 | if response: 79 | # Try to parse the JSON error response. 80 | try: 81 | data = json.loads(response) 82 | except ValueError: 83 | pass 84 | else: 85 | if isinstance(data.get('error'), dict): 86 | error = data['error'] 87 | if 'message' in error: 88 | message = error['message'] 89 | if 'code' in error: 90 | self.code = error['code'] 91 | 92 | super(WebServiceError, self).__init__(message) 93 | self.message = message 94 | 95 | 96 | # Endpoint configuration. 97 | 98 | def set_base_url(url): 99 | """Set the URL of the API server to query.""" 100 | if not url.endswith('/'): 101 | url += '/' 102 | global API_BASE_URL 103 | API_BASE_URL = url 104 | 105 | 106 | def _get_lookup_url(): 107 | """Get the URL of the lookup API endpoint.""" 108 | return API_BASE_URL + 'lookup' 109 | 110 | 111 | def _get_submit_url(): 112 | """Get the URL of the submission API endpoint.""" 113 | return API_BASE_URL + 'submit' 114 | 115 | def _get_submission_status_url(): 116 | """Get the URL of the submission status API endpoint.""" 117 | return API_BASE_URL + 'submission_status' 118 | 119 | # Compressed HTTP request bodies. 
120 | 121 | def _compress(data): 122 | """Compress a bytestring to a gzip archive.""" 123 | sio = BytesIO() 124 | with contextlib.closing(gzip.GzipFile(fileobj=sio, mode='wb')) as f: 125 | f.write(data) 126 | return sio.getvalue() 127 | 128 | 129 | class CompressedHTTPAdapter(requests.adapters.HTTPAdapter): 130 | """An `HTTPAdapter` that compresses request bodies with gzip. The 131 | Content-Encoding header is set accordingly. 132 | """ 133 | def add_headers(self, request, **kwargs): 134 | body = request.body 135 | if not isinstance(body, bytes): 136 | body = body.encode('utf8') 137 | request.prepare_body(_compress(body), None) 138 | request.headers['Content-Encoding'] = 'gzip' 139 | 140 | 141 | # Utilities. 142 | 143 | class _rate_limit(object): # noqa: N801 144 | """A decorator that limits the rate at which the function may be 145 | called. The rate is controlled by the REQUEST_INTERVAL module-level 146 | constant; set the value to zero to disable rate limiting. The 147 | limiting is thread-safe; only one thread may be in the function at a 148 | time (acts like a monitor in this sense). 149 | """ 150 | def __init__(self, fun): 151 | self.fun = fun 152 | self.last_call = 0.0 153 | self.lock = threading.Lock() 154 | 155 | def __call__(self, *args, **kwargs): 156 | with self.lock: 157 | # Wait until request_rate time has passed since last_call, 158 | # then update last_call. 159 | since_last_call = time.time() - self.last_call 160 | if since_last_call < REQUEST_INTERVAL: 161 | time.sleep(REQUEST_INTERVAL - since_last_call) 162 | self.last_call = time.time() 163 | 164 | # Call the original function. 165 | return self.fun(*args, **kwargs) 166 | 167 | 168 | @_rate_limit 169 | def _api_request(url, params): 170 | """Makes a POST request for the URL with the given form parameters, 171 | which are encoded as compressed form data, and returns a parsed JSON 172 | response. May raise a WebServiceError if the request fails. 
173 | """ 174 | headers = { 175 | 'Accept-Encoding': 'gzip', 176 | "Content-Type": "application/x-www-form-urlencoded" 177 | } 178 | 179 | session = requests.Session() 180 | session.mount('http://', CompressedHTTPAdapter()) 181 | try: 182 | response = session.post(url, data=params, headers=headers) 183 | except requests.exceptions.RequestException as exc: 184 | raise WebServiceError("HTTP request failed: {0}".format(exc)) 185 | 186 | try: 187 | return response.json() 188 | except ValueError: 189 | raise WebServiceError('response is not valid JSON') 190 | 191 | 192 | # Main API. 193 | 194 | def fingerprint(samplerate, channels, pcmiter, maxlength=MAX_AUDIO_LENGTH): 195 | """Fingerprint audio data given its sample rate and number of 196 | channels. pcmiter should be an iterable containing blocks of PCM 197 | data as byte strings. Raises a FingerprintGenerationError if 198 | anything goes wrong. 199 | """ 200 | # Maximum number of samples to decode. 201 | endposition = samplerate * channels * maxlength 202 | 203 | try: 204 | fper = chromaprint.Fingerprinter() 205 | fper.start(samplerate, channels) 206 | 207 | position = 0 # Samples of audio fed to the fingerprinter. 208 | for block in pcmiter: 209 | fper.feed(block) 210 | position += len(block) // 2 # 2 bytes/sample. 211 | if position >= endposition: 212 | break 213 | 214 | return fper.finish() 215 | except chromaprint.FingerprintError: 216 | raise FingerprintGenerationError("fingerprint calculation failed") 217 | 218 | 219 | def lookup(apikey, fingerprint, duration, meta=DEFAULT_META): 220 | """Look up a fingerprint with the Acoustid Web service. Returns the 221 | Python object reflecting the response JSON data. 
222 | """ 223 | params = { 224 | 'format': 'json', 225 | 'client': apikey, 226 | 'duration': int(duration), 227 | 'fingerprint': fingerprint, 228 | 'meta': meta, 229 | } 230 | return _api_request(_get_lookup_url(), params) 231 | 232 | 233 | def parse_lookup_result(data): 234 | """Given a parsed JSON response, generate tuples containing the match 235 | score, the MusicBrainz recording ID, the title of the recording, and 236 | the name of the recording's first artist. (If an artist is not 237 | available, the last item is None.) If the response is incomplete, 238 | raises a WebServiceError. 239 | """ 240 | if data['status'] != 'ok': 241 | raise WebServiceError("status: %s" % data['status']) 242 | if 'results' not in data: 243 | raise WebServiceError("results not included") 244 | 245 | for result in data['results']: 246 | score = result['score'] 247 | if 'recordings' not in result: 248 | # No recording attached. This result is not very useful. 249 | continue 250 | 251 | for recording in result['recordings']: 252 | # Get the artist if available. 
253 | if recording.get('artists'): 254 | names = [artist['name'] for artist in recording['artists']] 255 | artist_name = '; '.join(names) 256 | else: 257 | artist_name = None 258 | 259 | yield score, recording['id'], recording.get('title'), artist_name 260 | 261 | 262 | def _fingerprint_file_audioread(path, maxlength): 263 | """Fingerprint a file by using audioread and chromaprint.""" 264 | try: 265 | with audioread.audio_open(path) as f: 266 | duration = f.duration 267 | fp = fingerprint(f.samplerate, f.channels, iter(f), maxlength) 268 | except audioread.DecodeError: 269 | raise FingerprintGenerationError("audio could not be decoded") 270 | return duration, fp 271 | 272 | 273 | def _fingerprint_file_fpcalc(path, maxlength): 274 | """Fingerprint a file by calling the fpcalc application.""" 275 | fpcalc = os.environ.get(FPCALC_ENVVAR, FPCALC_COMMAND) 276 | command = [fpcalc, "-length", str(maxlength), path] 277 | try: 278 | with open(os.devnull, 'wb') as devnull: 279 | proc = subprocess.Popen(command, stdout=subprocess.PIPE, 280 | stderr=devnull) 281 | output, _ = proc.communicate() 282 | except OSError as exc: 283 | if exc.errno == errno.ENOENT: 284 | raise NoBackendError("fpcalc not found") 285 | else: 286 | raise FingerprintGenerationError("fpcalc invocation failed: %s" % 287 | str(exc)) 288 | except UnicodeEncodeError: 289 | # Due to a bug in Python 2's subprocess on Windows, Unicode 290 | # filenames can fail to encode on that platform. 
See: 291 | # http://bugs.python.org/issue1759845 292 | raise FingerprintGenerationError("argument encoding failed") 293 | retcode = proc.poll() 294 | if retcode: 295 | raise FingerprintGenerationError("fpcalc exited with status %i" % 296 | retcode) 297 | 298 | duration = fp = None 299 | for line in output.splitlines(): 300 | try: 301 | parts = line.split(b'=', 1) 302 | except ValueError: 303 | raise FingerprintGenerationError("malformed fpcalc output") 304 | if parts[0] == b'DURATION': 305 | try: 306 | duration = float(parts[1]) 307 | except ValueError: 308 | raise FingerprintGenerationError("fpcalc duration not numeric") 309 | elif parts[0] == b'FINGERPRINT': 310 | fp = parts[1] 311 | 312 | if duration is None or fp is None: 313 | raise FingerprintGenerationError("missing fpcalc output") 314 | return duration, fp 315 | 316 | 317 | def fingerprint_file(path, maxlength=MAX_AUDIO_LENGTH): 318 | """Fingerprint a file either using the Chromaprint dynamic library 319 | or the fpcalc command-line tool, whichever is available. Returns the 320 | duration and the fingerprint. 321 | """ 322 | path = os.path.abspath(os.path.expanduser(path)) 323 | if have_audioread and have_chromaprint: 324 | return _fingerprint_file_audioread(path, maxlength) 325 | else: 326 | return _fingerprint_file_fpcalc(path, maxlength) 327 | 328 | 329 | def match(apikey, path, meta=DEFAULT_META, parse=True): 330 | """Look up the metadata for an audio file. If ``parse`` is true, 331 | then ``parse_lookup_result`` is used to return an iterator over 332 | small tuple of relevant information; otherwise, the full parsed JSON 333 | response is returned. 334 | """ 335 | duration, fp = fingerprint_file(path) 336 | response = lookup(apikey, fp, duration, meta) 337 | if parse: 338 | return parse_lookup_result(response) 339 | else: 340 | return response 341 | 342 | 343 | def submit(apikey, userkey, data): 344 | """Submit a fingerprint to the acoustid server. 
The ``apikey`` and 345 | ``userkey`` parameters are API keys for the application and the 346 | submitting user, respectively. 347 | 348 | ``data`` may be either a single dictionary or a list of 349 | dictionaries. In either case, each dictionary must contain a 350 | ``fingerprint`` key and a ``duration`` key and may include the 351 | following: ``puid``, ``mbid``, ``track``, ``artist``, ``album``, 352 | ``albumartist``, ``year``, ``trackno``, ``discno``, ``fileformat``, 353 | ``bitrate`` 354 | 355 | If the required keys are not present in a dictionary, a 356 | FingerprintSubmissionError is raised. 357 | 358 | Returns the parsed JSON response. 359 | """ 360 | if isinstance(data, dict): 361 | data = [data] 362 | 363 | args = { 364 | 'format': 'json', 365 | 'client': apikey, 366 | 'user': userkey, 367 | } 368 | 369 | # Build up "field.#" parameters corresponding to the parameters 370 | # given in each dictionary. 371 | for i, d in enumerate(data): 372 | if "duration" not in d or "fingerprint" not in d: 373 | raise FingerprintSubmissionError("missing required parameters") 374 | 375 | # The duration needs to be an integer. 376 | d["duration"] = int(d["duration"]) 377 | 378 | for k, v in d.items(): 379 | args["%s.%s" % (k, i)] = v 380 | 381 | response = _api_request(_get_submit_url(), args) 382 | if response.get('status') != 'ok': 383 | try: 384 | code = response['error']['code'] 385 | message = response['error']['message'] 386 | except KeyError: 387 | raise WebServiceError("response: {0}".format(response)) 388 | raise WebServiceError("error {0}: {1}".format(code, message)) 389 | return response 390 | 391 | def get_submission_status(apikey, submission_id): 392 | """Get the status of a submission to the acoustid server. 393 | ``submission_id`` is the id of a fingerprint submission, as returned 394 | in the response object of a call to the ``submit`` endpoint. 
395 | """ 396 | params = { 397 | 'format': 'json', 398 | 'client': apikey, 399 | 'id': submission_id, 400 | } 401 | return _api_request(_get_submission_status_url(), params) 402 | -------------------------------------------------------------------------------- /acoustic/acoustid.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/acoustic/acoustid.pyc -------------------------------------------------------------------------------- /acoustic/acoustid_check.py: -------------------------------------------------------------------------------- 1 | import acoustid 2 | import numpy as np 3 | import chromaprint 4 | 5 | popcnt_table_8bit = [ 6 | 0,1,1,2,1,2,2,3,1,2,2,3,2,3,3,4,1,2,2,3,2,3,3,4,2,3,3,4,3,4,4,5, 7 | 1,2,2,3,2,3,3,4,2,3,3,4,3,4,4,5,2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, 8 | 1,2,2,3,2,3,3,4,2,3,3,4,3,4,4,5,2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, 9 | 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6,3,4,4,5,4,5,5,6,4,5,5,6,5,6,6,7, 10 | 1,2,2,3,2,3,3,4,2,3,3,4,3,4,4,5,2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, 11 | 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6,3,4,4,5,4,5,5,6,4,5,5,6,5,6,6,7, 12 | 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6,3,4,4,5,4,5,5,6,4,5,5,6,5,6,6,7, 13 | 3,4,4,5,4,5,5,6,4,5,5,6,5,6,6,7,4,5,5,6,5,6,6,7,5,6,6,7,6,7,7,8, 14 | ] 15 | 16 | def popcnt(x): 17 | """ 18 | Count the number of set bits in the given 32-bit integer. 
19 | """ 20 | return (popcnt_table_8bit[(x >> 0) & 0xFF] + 21 | popcnt_table_8bit[(x >> 8) & 0xFF] + 22 | popcnt_table_8bit[(x >> 16) & 0xFF] + 23 | popcnt_table_8bit[(x >> 24) & 0xFF]) 24 | 25 | def accuracy(fp1, fp2): 26 | error = 0 27 | for x, y in zip(fp1, fp2): 28 | error += popcnt(x ^ y) 29 | return 1.0 - error / 32.0 / min(len(fp1), len(fp2)) 30 | 31 | def calc_accuracy(path1, path2): 32 | dur, fig = acoustid.fingerprint_file(path1) 33 | fp1 = chromaprint.decode_fingerprint(fig)[0] 34 | 35 | dur, fig2 = acoustid.fingerprint_file(path2) 36 | fp2 = chromaprint.decode_fingerprint(fig2)[0] 37 | 38 | return accuracy(fp1, fp2) 39 | 40 | if __name__ == '__main__': 41 | calc_accuracy(path1, path2) 42 | -------------------------------------------------------------------------------- /acoustic/acoustid_check.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/acoustic/acoustid_check.pyc -------------------------------------------------------------------------------- /acoustic/chromaprint.py: -------------------------------------------------------------------------------- 1 | # Copyright (C) 2011 Lukas Lalinsky 2 | # (Minor modifications by Adrian Sampson.) 3 | # Distributed under the MIT license, see the LICENSE file for details. 4 | 5 | """Low-level ctypes wrapper from the chromaprint library.""" 6 | 7 | import sys 8 | import ctypes 9 | 10 | 11 | if sys.version_info[0] >= 3: 12 | BUFFER_TYPES = (memoryview, bytearray,) 13 | elif sys.version_info[1] >= 7: 14 | BUFFER_TYPES = (buffer, memoryview, bytearray,) # noqa: F821 15 | else: 16 | BUFFER_TYPES = (buffer, bytearray,) # noqa: F821 17 | 18 | 19 | # Find the base library and declare prototypes. 
20 | 21 | def _guess_lib_name(): 22 | if sys.platform == 'darwin': 23 | return ('libchromaprint.1.dylib', 'libchromaprint.0.dylib') 24 | elif sys.platform == 'win32': 25 | return ('chromaprint.dll', 'libchromaprint.dll') 26 | elif sys.platform == 'cygwin': 27 | return ('libchromaprint.dll.a', 'cygchromaprint-1.dll', 28 | 'cygchromaprint-0.dll') 29 | return ('libchromaprint.so.1', 'libchromaprint.so.0') 30 | 31 | 32 | for name in _guess_lib_name(): 33 | try: 34 | _libchromaprint = ctypes.cdll.LoadLibrary(name) 35 | break 36 | except OSError: 37 | pass 38 | else: 39 | raise ImportError("couldn't find libchromaprint") 40 | 41 | 42 | _libchromaprint.chromaprint_get_version.argtypes = () 43 | _libchromaprint.chromaprint_get_version.restype = ctypes.c_char_p 44 | 45 | _libchromaprint.chromaprint_new.argtypes = (ctypes.c_int,) 46 | _libchromaprint.chromaprint_new.restype = ctypes.c_void_p 47 | 48 | _libchromaprint.chromaprint_free.argtypes = (ctypes.c_void_p,) 49 | _libchromaprint.chromaprint_free.restype = None 50 | 51 | _libchromaprint.chromaprint_start.argtypes = \ 52 | (ctypes.c_void_p, ctypes.c_int, ctypes.c_int) 53 | _libchromaprint.chromaprint_start.restype = ctypes.c_int 54 | 55 | _libchromaprint.chromaprint_feed.argtypes = \ 56 | (ctypes.c_void_p, ctypes.POINTER(ctypes.c_char), ctypes.c_int) 57 | _libchromaprint.chromaprint_feed.restype = ctypes.c_int 58 | 59 | _libchromaprint.chromaprint_finish.argtypes = (ctypes.c_void_p,) 60 | _libchromaprint.chromaprint_finish.restype = ctypes.c_int 61 | 62 | _libchromaprint.chromaprint_get_fingerprint.argtypes = \ 63 | (ctypes.c_void_p, ctypes.POINTER(ctypes.c_char_p)) 64 | _libchromaprint.chromaprint_get_fingerprint.restype = ctypes.c_int 65 | 66 | _libchromaprint.chromaprint_decode_fingerprint.argtypes = \ 67 | (ctypes.POINTER(ctypes.c_char), ctypes.c_int, 68 | ctypes.POINTER(ctypes.POINTER(ctypes.c_int32)), 69 | ctypes.POINTER(ctypes.c_int), ctypes.POINTER(ctypes.c_int), ctypes.c_int) 70 | 
_libchromaprint.chromaprint_decode_fingerprint.restype = ctypes.c_int 71 | 72 | _libchromaprint.chromaprint_encode_fingerprint.argtypes = \ 73 | (ctypes.POINTER(ctypes.c_int32), ctypes.c_int, ctypes.c_int, 74 | ctypes.POINTER(ctypes.POINTER(ctypes.c_char)), 75 | ctypes.POINTER(ctypes.c_int), ctypes.c_int) 76 | _libchromaprint.chromaprint_encode_fingerprint.restype = ctypes.c_int 77 | 78 | _libchromaprint.chromaprint_dealloc.argtypes = (ctypes.c_void_p,) 79 | _libchromaprint.chromaprint_dealloc.restype = None 80 | 81 | 82 | # Main interface. 83 | 84 | class FingerprintError(Exception): 85 | """Raised when a call to the underlying library fails.""" 86 | 87 | 88 | def _check(res): 89 | """Check the result of a library call, raising an error if the call 90 | failed. 91 | """ 92 | if res != 1: 93 | raise FingerprintError() 94 | 95 | 96 | class Fingerprinter(object): 97 | 98 | ALGORITHM_TEST1 = 0 99 | ALGORITHM_TEST2 = 1 100 | ALGORITHM_TEST3 = 2 101 | ALGORITHM_DEFAULT = ALGORITHM_TEST2 102 | 103 | def __init__(self, algorithm=ALGORITHM_DEFAULT): 104 | self._ctx = _libchromaprint.chromaprint_new(algorithm) 105 | 106 | def __del__(self): 107 | _libchromaprint.chromaprint_free(self._ctx) 108 | del self._ctx 109 | 110 | def start(self, sample_rate, num_channels): 111 | """Initialize the fingerprinter with the given audio parameters. 112 | """ 113 | _check(_libchromaprint.chromaprint_start( 114 | self._ctx, sample_rate, num_channels 115 | )) 116 | 117 | def feed(self, data): 118 | """Send raw PCM audio data to the fingerprinter. Data may be 119 | either a bytestring or a buffer object. 
120 | """ 121 | if isinstance(data, BUFFER_TYPES): 122 | data = str(data) 123 | elif not isinstance(data, bytes): 124 | raise TypeError('data must be bytes, buffer, or memoryview') 125 | _check(_libchromaprint.chromaprint_feed( 126 | self._ctx, data, len(data) // 2 127 | )) 128 | 129 | def finish(self): 130 | """Finish the fingerprint generation process and retrieve the 131 | resulting fignerprint as a bytestring. 132 | """ 133 | _check(_libchromaprint.chromaprint_finish(self._ctx)) 134 | fingerprint_ptr = ctypes.c_char_p() 135 | _check(_libchromaprint.chromaprint_get_fingerprint( 136 | self._ctx, ctypes.byref(fingerprint_ptr) 137 | )) 138 | fingerprint = fingerprint_ptr.value 139 | _libchromaprint.chromaprint_dealloc(fingerprint_ptr) 140 | return fingerprint 141 | 142 | 143 | def decode_fingerprint(data, base64=True): 144 | result_ptr = ctypes.POINTER(ctypes.c_int32)() 145 | result_size = ctypes.c_int() 146 | algorithm = ctypes.c_int() 147 | _check(_libchromaprint.chromaprint_decode_fingerprint( 148 | data, len(data), ctypes.byref(result_ptr), ctypes.byref(result_size), 149 | ctypes.byref(algorithm), 1 if base64 else 0 150 | )) 151 | result = result_ptr[:result_size.value] 152 | _libchromaprint.chromaprint_dealloc(result_ptr) 153 | return result, algorithm.value 154 | 155 | 156 | def encode_fingerprint(fingerprint, algorithm, base64=True): 157 | fp_array = (ctypes.c_int * len(fingerprint))() 158 | for i in range(len(fingerprint)): 159 | fp_array[i] = fingerprint[i] 160 | result_ptr = ctypes.POINTER(ctypes.c_char)() 161 | result_size = ctypes.c_int() 162 | _check(_libchromaprint.chromaprint_encode_fingerprint( 163 | fp_array, len(fingerprint), algorithm, ctypes.byref(result_ptr), 164 | ctypes.byref(result_size), 1 if base64 else 0 165 | )) 166 | result = result_ptr[:result_size.value] 167 | _libchromaprint.chromaprint_dealloc(result_ptr) 168 | return result 169 | -------------------------------------------------------------------------------- 
/acoustic/chromaprint.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/acoustic/chromaprint.pyc -------------------------------------------------------------------------------- /acoustic/heaven.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/acoustic/heaven.mp3 -------------------------------------------------------------------------------- /acoustic/heaven_first_half.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/acoustic/heaven_first_half.mp3 -------------------------------------------------------------------------------- /acoustic/heaven_second_half.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/acoustic/heaven_second_half.mp3 -------------------------------------------------------------------------------- /acoustic/heaven_small.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/acoustic/heaven_small.mp3 -------------------------------------------------------------------------------- /acoustic/test_acoustic.py: -------------------------------------------------------------------------------- 1 | import acoustid 2 | import numpy as np 3 | import chromaprint 4 | 5 | path = './heaven.mp3' 6 | path2 = './heaven_small.mp3' 7 | path3 = './undertheice.mp3' 8 | 9 | dur, fig = acoustid.fingerprint_file(path) 10 | # fig = np.fromstring(fig, uint8); 11 | raw_fp = chromaprint.decode_fingerprint(fig)[0] 12 | fp = ','.join(map(str, 
raw_fp)) 13 | fig = fp.decode('utf8') 14 | 15 | 16 | dur, fig2 = acoustid.fingerprint_file(path2) 17 | raw_fp2 = chromaprint.decode_fingerprint(fig2)[0] 18 | 19 | dur, fig3 = acoustid.fingerprint_file(path3) 20 | raw_fp3 = chromaprint.decode_fingerprint(fig3)[0] 21 | 22 | popcnt_table_8bit = [ 23 | 0,1,1,2,1,2,2,3,1,2,2,3,2,3,3,4,1,2,2,3,2,3,3,4,2,3,3,4,3,4,4,5, 24 | 1,2,2,3,2,3,3,4,2,3,3,4,3,4,4,5,2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, 25 | 1,2,2,3,2,3,3,4,2,3,3,4,3,4,4,5,2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, 26 | 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6,3,4,4,5,4,5,5,6,4,5,5,6,5,6,6,7, 27 | 1,2,2,3,2,3,3,4,2,3,3,4,3,4,4,5,2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6, 28 | 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6,3,4,4,5,4,5,5,6,4,5,5,6,5,6,6,7, 29 | 2,3,3,4,3,4,4,5,3,4,4,5,4,5,5,6,3,4,4,5,4,5,5,6,4,5,5,6,5,6,6,7, 30 | 3,4,4,5,4,5,5,6,4,5,5,6,5,6,6,7,4,5,5,6,5,6,6,7,5,6,6,7,6,7,7,8, 31 | ] 32 | 33 | def popcnt(x): 34 | """ 35 | Count the number of set bits in the given 32-bit integer. 36 | """ 37 | return (popcnt_table_8bit[(x >> 0) & 0xFF] + 38 | popcnt_table_8bit[(x >> 8) & 0xFF] + 39 | popcnt_table_8bit[(x >> 16) & 0xFF] + 40 | popcnt_table_8bit[(x >> 24) & 0xFF]) 41 | 42 | 43 | error = 0 44 | for x, y in zip(raw_fp, raw_fp2): 45 | error += popcnt(x ^ y) 46 | print 1.0 - error / 32.0 / min(len(raw_fp), len(raw_fp2)) 47 | 48 | error = 0 49 | for x, y in zip(raw_fp2, raw_fp3): 50 | error += popcnt(x ^ y) 51 | print 1.0 - error / 32.0 / min(len(raw_fp2), len(raw_fp3)) -------------------------------------------------------------------------------- /acoustic/undertheice.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/acoustic/undertheice.mp3 -------------------------------------------------------------------------------- /blockchain.py: -------------------------------------------------------------------------------- 1 | from time import time 2 | import datetime 3 | import os 4 
| import pickle 5 | import hashlib as hasher 6 | import acoustic.acoustid_check as ac 7 | import text_compare.test_text as tc 8 | import image_compare.image_check as ic 9 | 10 | """ Class for transactions made on the blockchain. Each transaction records a 11 | title, filename, author, public key, genre, and a reference to the media file. 12 | """ 13 | class Transaction: 14 | 15 | """ Transaction initializer """ 16 | def __init__(self, title="", filename="", author="", public_key="", genre="", media=""): 17 | self.title = title 18 | self.filename = filename 19 | self.author = author 20 | self.public_key = public_key 21 | self.genre = genre 22 | self.media = media 23 | 24 | 25 | """ Converts the transaction to a dictionary """ 26 | def toDict(self): 27 | return { 28 | 'title': self.title, 29 | 'filename': self.filename, 30 | 'author': self.author, 31 | 'public_key': self.public_key, 32 | 'genre': self.genre, 33 | 'media': self.media, 34 | } 35 | 36 | def __str__(self): 37 | toString = self.author + " : " + self.genre + " (" + self.media + ") " 38 | return toString 39 | 40 | """ Class for Blocks. A block is an object that contains transaction information 41 | on the blockchain. 42 | """ 43 | class Block: 44 | def __init__(self, index, transaction, previous_hash): 45 | 46 | self.index = index 47 | self.timestamp = time() 48 | self.previous_hash = previous_hash 49 | self.transaction = transaction 50 | 51 | def compute_hash(self): 52 | concat_str = str(self.index) + str(self.timestamp) + str(self.previous_hash) + str(self.transaction['author']) + str(self.transaction['genre']) 53 | hash_result = hasher.sha256(concat_str.encode('utf-8')).hexdigest() 54 | return hash_result 55 | 56 | def serialize(self): 57 | return { 58 | 'index': self.index, 59 | 'timestamp': self.timestamp, 60 | 'previous_hash': self.previous_hash, 61 | 'transaction': self.transaction 62 | } 63 | 64 | 65 | """ Blockchain class. The blockchain is the network of blocks containing all the 66 | transaction data of the system. 
67 | """ 68 | class Blockchain: 69 | def __init__(self): 70 | 71 | 72 | self.unconfirmed_transactions = {} 73 | self.chain = [] 74 | 75 | def create_genesis_block(self): 76 | empty_media = { 77 | 'title': "", 78 | 'filename': "", 79 | 'author': "", 80 | 'public_key': "", 81 | 'genre': "", 82 | 'media': "", 83 | } 84 | new_block = Block(index=0, transaction=empty_media, previous_hash=0) 85 | self.add_block(new_block) 86 | 87 | return new_block 88 | 89 | def new_transaction(self, title, filename, author, public_key, genre, media): 90 | new_trans = Transaction(title, filename, author, public_key, genre, media).toDict() 91 | self.unconfirmed_transactions = new_trans.copy() 92 | return new_trans 93 | 94 | def mine(self): 95 | # create a block, verify its originality, and add it to the blockchain 96 | if len(self.chain) == 0: 97 | block_idx = 1 98 | previous_hash = 0 99 | else: 100 | block_idx = self.chain[-1].index + 1 101 | previous_hash = self.chain[-1].compute_hash() 102 | block = Block(block_idx, self.unconfirmed_transactions, previous_hash) 103 | if self.verify_block(block): 104 | self.add_block(block) 105 | return block 106 | else: 107 | return None 108 | 109 | def verify_block(self, block): 110 | # verify media originality and the previous hash 111 | # check previous hash 112 | 113 | if len(self.chain) == 0: 114 | previous_hash = 0 115 | else: 116 | previous_hash = self.chain[-1].compute_hash() 117 | if block.previous_hash != previous_hash: 118 | return 0 119 | # check originality 120 | for prev_block in self.chain: 121 | if block.transaction['genre'] == prev_block.transaction['genre']: 122 | try: 123 | if block.transaction['genre'] == 'Audio': 124 | score = ac.calc_accuracy('./uploads/' + block.transaction['media'], './uploads/' + prev_block.transaction['media']) 125 | print(score) 126 | if score > 0.9: 127 | return 0 128 | if block.transaction['genre'] == 'Text': 129 | score = tc.check_text_similarity('./uploads/' + block.transaction['media'], 
'./uploads/'+prev_block.transaction['media']) 130 | print(score) 131 | if score < 100: 132 | return 0 133 | if block.transaction['genre'] == "Image": 134 | score = ic.calc_accuracy('./uploads/' + block.transaction['media'], './uploads/' + prev_block.transaction['media']) 135 | print(score) 136 | if score < 0.4: 137 | return 0 138 | except Exception: 139 | return 0 140 | return 1 141 | 142 | def lookup(self, transaction): 143 | # check originality 144 | for prev_block in self.chain: 145 | if transaction['genre'] == prev_block.transaction['genre']: 146 | try: 147 | if transaction['genre'] == 'Audio': 148 | score = ac.calc_accuracy('./tmp/' + transaction['media'], './uploads/' + prev_block.transaction['media']) 149 | print(score) 150 | if score > 0.9: 151 | return prev_block 152 | if transaction['genre'] == 'Text': 153 | score = tc.check_text_similarity('./tmp/' + transaction['media'], './uploads/'+prev_block.transaction['media']) 154 | print(score) 155 | if score < 100: 156 | return prev_block 157 | if transaction['genre'] == "Image": 158 | score = ic.calc_accuracy('./tmp/' + transaction['media'], './uploads/' + prev_block.transaction['media']) 159 | print(score) 160 | if score < 0.4: 161 | return prev_block 162 | except Exception: 163 | print("exception") 164 | return prev_block 165 | return None 166 | 167 | def add_block(self, block): 168 | self.chain.append(block) 169 | 170 | with open('./blockchain/chain.pkl', 'wb') as output: 171 | pickle.dump(self.chain, output, pickle.HIGHEST_PROTOCOL) 172 | 173 | 174 | def check_integrity(self): 175 | return 0 176 | 177 | """ Function that returns the last block on the chain """ 178 | @property 179 | def last_block(self): 180 | return self.chain[-1] 181 | -------------------------------------------------------------------------------- /image_compare/MonaLisa_1.jpg: -------------------------------------------------------------------------------- 
https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/image_compare/MonaLisa_1.jpg -------------------------------------------------------------------------------- /image_compare/MonaLisa_2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/image_compare/MonaLisa_2.jpg -------------------------------------------------------------------------------- /image_compare/Other.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/image_compare/Other.jpg -------------------------------------------------------------------------------- /image_compare/__init__.py: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/image_compare/__init__.py -------------------------------------------------------------------------------- /image_compare/__init__.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/image_compare/__init__.pyc -------------------------------------------------------------------------------- /image_compare/image_check.py: -------------------------------------------------------------------------------- 1 | from image_match.goldberg import ImageSignature 2 | 3 | def calc_accuracy(path1, path2): 4 | print(path1, path2) 5 | path1 = str(path1) 6 | path2 = str(path2) 7 | gis = ImageSignature() 8 | a = gis.generate_signature(path1) 9 | b = gis.generate_signature(path2) 10 | dist = gis.normalized_distance(a, b) 11 | return dist 12 | -------------------------------------------------------------------------------- 
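Both comparison helpers reduce to a score-versus-threshold test: `image_check` returns a normalized distance (values below 0.4 are treated as a match in `blockchain.py`), while the audio check in `acoustic/` scores two Chromaprint fingerprints by how many of their bits agree. That bit-agreement measure can be sketched in pure Python with no chromaprint dependency (the function name here is illustrative, not part of the repo):

```python
def bit_similarity(fp1, fp2):
    """Fraction of agreeing bits between two lists of 32-bit fingerprint words.

    XOR exposes the differing bits of each word pair, and bin().count("1")
    tallies them -- a portable popcount, equivalent to the table lookup
    used in test_acoustic.py.
    """
    errors = sum(bin(x ^ y).count("1") for x, y in zip(fp1, fp2))
    return 1.0 - errors / 32.0 / min(len(fp1), len(fp2))

# Identical fingerprints score 1.0; one differing bit across two
# 32-bit words costs 1/64.
print(bit_similarity([0b1010, 0xFFFF], [0b1011, 0xFFFF]))  # 0.984375
```

A score near 1.0 indicates the same recording; unrelated fingerprints hover around 0.5, which is why the chain's audio check rejects anything above 0.9.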
/image_compare/image_check.pyc: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/image_compare/image_check.pyc -------------------------------------------------------------------------------- /image_compare/test_image.py: -------------------------------------------------------------------------------- 1 | from image_match.goldberg import ImageSignature 2 | gis = ImageSignature() 3 | a = gis.generate_signature('./MonaLisa_1.jpg') 4 | b = gis.generate_signature('./MonaLisa_2.jpg') 5 | c = gis.generate_signature('./Other.jpg') 6 | dist = gis.normalized_distance(a, b) 7 | dist2 = gis.normalized_distance(a, c) 8 | # normalized distance < 0.4 likely to be a match 9 | print(dist) 10 | print(dist2) 11 | 12 | -------------------------------------------------------------------------------- /install.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | pip install virtualenv 4 | virtualenv env 5 | cd env 6 | source bin/activate 7 | pip install -r ../requirements.txt 8 | wget https://bitright.sfo2.digitaloceanspaces.com/tlsh.zip 9 | unzip tlsh.zip 10 | cd tlsh/py_ext 11 | python setup.py install 12 | cd ../../.. 
13 | -------------------------------------------------------------------------------- /media/MonaLisa_1.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/media/MonaLisa_1.jpg -------------------------------------------------------------------------------- /media/MonaLisa_2.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/media/MonaLisa_2.jpg -------------------------------------------------------------------------------- /media/Other.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/media/Other.jpg -------------------------------------------------------------------------------- /media/doc1.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/media/doc1.txt -------------------------------------------------------------------------------- /media/doc1_same.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/media/doc1_same.txt -------------------------------------------------------------------------------- /media/doc2.txt: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/media/doc2.txt -------------------------------------------------------------------------------- /media/doc3.txt: -------------------------------------------------------------------------------- 1 | The nuScenes dataset (pronounced /nuːsiːnz/) is a 
public large-scale dataset for autonomous driving developed by Aptiv Autonomous Mobility (formerly nuTonomy). By releasing a subset of our data to the public, Aptiv aims to support public research into computer vision and autonomous driving. 2 | 3 | For this purpose we collected 1000 driving scenes in Boston and Singapore, two cities that are known for their dense traffic and highly challenging driving situations. The scenes of 20 second length are manually selected to show a diverse and interesting set of driving maneuvers, traffic situations and unexpected behaviors. The rich complexity of nuScenes will encourage development of methods that enable safe driving in urban areas with dozens of objects per scene. Gathering data on different continents further allows us to study the generalization of computer vision algorithms across different locations, weather conditions, vehicle types, vegetation, road markings and left versus right hand traffic. -------------------------------------------------------------------------------- /media/heaven.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/media/heaven.mp3 -------------------------------------------------------------------------------- /media/heaven_first_half.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/media/heaven_first_half.mp3 -------------------------------------------------------------------------------- /media/heaven_second_half.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/media/heaven_second_half.mp3 -------------------------------------------------------------------------------- /media/heaven_small.mp3: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/media/heaven_small.mp3 -------------------------------------------------------------------------------- /media/undertheice.mp3: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/media/undertheice.mp3 -------------------------------------------------------------------------------- /network.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python 2 | import os 3 | import hashlib 4 | from flask import Flask, request, render_template, jsonify, redirect, send_from_directory, make_response 5 | from blockchain import Blockchain 6 | import pickle 7 | 8 | UPLOAD_FOLDER = 'uploads' 9 | TMP_FOLDER = 'tmp' 10 | 11 | app = Flask(__name__) 12 | app.config['UPLOAD_FOLDER'] = UPLOAD_FOLDER 13 | app.config['TMP_FOLDER'] = TMP_FOLDER 14 | 15 | # the node's copy of the blockchain 16 | 17 | blockchain = Blockchain() 18 | 19 | if os.path.exists('./blockchain/chain.pkl'): 20 | with open('./blockchain/chain.pkl', 'rb') as chain_file: 21 | blockchain.chain = pickle.load(chain_file) 22 | 23 | if not os.path.exists('./blockchain'): 24 | os.mkdir('blockchain') 25 | if not os.path.exists('./uploads'): 26 | os.mkdir('uploads') 27 | if not os.path.exists('./tmp'): 28 | os.mkdir('tmp') 29 | 30 | @app.route('/') 31 | def index(): 32 | return render_template('./index.html') 33 | 34 | @app.route('/blockchain') 35 | def blockchain_page(): 36 | return render_template('./blockchain.html') 37 | 38 | @app.route('/about') 39 | def about_page(): 40 | return render_template('./aboutus.html') 41 | 42 | @app.route('/faq') 43 | def faq_page(): 44 | return render_template('./faq.html') 45 | 46 | @app.route('/uploads/<filename>') 47 | def custom_static(filename): 48 | print(filename) 49 | 
temp = filename.split('.') 50 | if len(temp) > 1: 51 | response = make_response(send_from_directory('./uploads/', temp[0])) 52 | response.headers['Content-Type'] = 'text/html' 53 | return response 54 | else: 55 | return send_from_directory('./uploads/', temp[0]) 56 | 57 | 58 | # @app.route('/new_transaction', methods=['POST']) 59 | # def new_transaction(): 60 | # sender = request.form["sender"] 61 | # recipient = request.form["recipient"] 62 | # value = request.form["value"] 63 | 64 | # blockchain.new_transaction(sender, recipient, value) 65 | 66 | # return redirect('/') 67 | 68 | # @app.route('/pending_tx') 69 | # def get_pending_tx(): 70 | # transactions = blockchain.unconfirmed_transactions 71 | # response = {'transactions': transactions} 72 | # return jsonify(response), 200 73 | 74 | @app.route('/mine', methods=['GET']) 75 | def mine_unconfirmed_transactions(): 76 | result = blockchain.mine() 77 | response = {'block': result.__dict__ if result is not None else None} 78 | 79 | return jsonify(response), 200 80 | 81 | @app.route('/chain', methods=['GET']) 82 | def get_chain(): 83 | chain_data = [] 84 | for block in blockchain.chain: 85 | chain_data.append(block.__dict__) 86 | 87 | response = {'chain': chain_data} 88 | return jsonify(response), 200 89 | 90 | # @app.route('/get_block', methods=['POST']) 91 | # def get_block(): 92 | # index = int(request.form["block_index"]) 93 | # block = blockchain.chain[index] 94 | 95 | # response = {'block': block.__dict__} 96 | 97 | # return jsonify(response), 200 98 | 99 | # @app.route('/reset') 100 | # def reset(): 101 | # global blockchain 102 | 103 | # blockchain = Blockchain() 104 | 105 | # return redirect('/') 106 | 107 | # @app.route('/integrity', methods=['GET']) 108 | # def integrity(): 109 | # integrity = blockchain.check_integrity() 110 | 111 | # response = {'integrity': integrity} 112 | 113 | # return jsonify(response), 200 114 | 115 | @app.route('/upload', methods=['POST']) 116 | def upload(): 117 | global blockchain 118 | 119 | print(request) 
120 | if 'contentFile' not in request.files: 121 | response = {'ok': False} 122 | return jsonify(response), 500 123 | file = request.files['contentFile'] 124 | 125 | filename = hashlib.sha256(file.read()).hexdigest() 126 | file.seek(0) #reset read pointer 127 | 128 | action = request.form['action'] 129 | 130 | if action == "lookup": 131 | #TODO search for exact and partial matches 132 | print('TODO lookup') 133 | file.save(os.path.join(app.config['TMP_FOLDER'], filename)) 134 | lookup_media = { 135 | 'genre': request.form['genre'], 136 | 'media': filename, 137 | } 138 | result = blockchain.lookup(lookup_media) 139 | os.remove(os.path.join(app.config['TMP_FOLDER'], filename)) #remove uploaded file 140 | 141 | if result is None: 142 | response = {'unique': True} 143 | return jsonify(response), 200 144 | 145 | response = {'unique': False, 'block': result.__dict__, 'message': 'Similar Object Detected'} 146 | 147 | return jsonify(response), 200 148 | elif action == "publish": 149 | if os.path.isfile(os.path.join(app.config['UPLOAD_FOLDER'], filename)): 150 | print("Duplicate Detected") 151 | response = {'unique': False, 'message':'Duplicate Detected'} 152 | else: 153 | file.save(os.path.join(app.config['UPLOAD_FOLDER'], filename)) 154 | #Create a new transaction 155 | author = request.form['author'] 156 | title = request.form['title'] 157 | pubkey = request.form['pubkey'] 158 | genre = request.form['genre'] 159 | original_filename = file.filename 160 | blockchain.new_transaction(title, original_filename, author, pubkey, genre, filename) 161 | result = blockchain.mine() 162 | if result == None: 163 | print("FALSE") 164 | os.remove(os.path.join(app.config['UPLOAD_FOLDER'], filename)) #remove uploaded file 165 | response = {'unique': False, 'message':'Similar Object Detected, Input File Rejected'} 166 | else: 167 | print("TEST") 168 | print(result) 169 | response = {'unique': True, 'block': result.__dict__} 170 | 171 | return jsonify(response), 200 172 | 173 | 174 | 175 | 
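The upload handler above derives each stored filename from the SHA-256 digest of the file's bytes, which is what makes its `os.path.isfile` duplicate check work: byte-identical uploads always collide on the same name, so exact copies are rejected before any similarity scoring runs. The addressing step in isolation (helper name is illustrative):

```python
import hashlib

def content_address(data: bytes) -> str:
    """Hex SHA-256 digest of the raw bytes, used as the on-disk filename."""
    return hashlib.sha256(data).hexdigest()

# The same bytes always hash to the same name, so an exact re-upload is
# caught by a plain file-existence check; near-duplicates (re-encoded audio,
# resized images) fall through to the perceptual checks in blockchain.py.
print(content_address(b"hello"))
# 2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824
```

Note that `file.seek(0)` after `file.read()` is required, since hashing consumes the upload stream before `file.save()` writes it out.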
app.run(debug=True, port=8000) 176 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | certifi==2019.3.9 2 | chardet==3.0.4 3 | Click==7.0 4 | dask==1.2.0 5 | decorator==4.4.0 6 | elasticsearch==2.3.0 7 | Flask==1.0.2 8 | idna==2.8 9 | image-match==1.1.2 10 | itsdangerous==1.1.0 11 | Jinja2==2.10.1 12 | MarkupSafe==1.1.1 13 | networkx==2.2 14 | numpy==1.16.3 15 | Pillow==6.0.0 16 | requests==2.21.0 17 | scikit-image==0.12.3 18 | scipy==1.2.1 19 | six==1.12.0 20 | toolz==0.9.0 21 | urllib3==1.24.2 22 | Werkzeug==0.15.2 23 | -------------------------------------------------------------------------------- /run.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | source ./env/bin/activate 4 | python network.py 5 | -------------------------------------------------------------------------------- /static/aboutus.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/liuyanqi/bitright/b8d0b694fad1316c7599d7f8e44e3fb4f7195b4b/static/aboutus.png -------------------------------------------------------------------------------- /static/blockchain.css: -------------------------------------------------------------------------------- 1 | .clickable{ 2 | cursor: pointer; 3 | } -------------------------------------------------------------------------------- /static/blockchain.js: -------------------------------------------------------------------------------- 1 | var chain = []; 2 | var filter = ""; 3 | 4 | $(document).ready(function () { 5 | $.LoadingOverlay("show", { 6 | image: "", 7 | text: "Loading Blockchain..." 
8 | }); 9 | $.ajax({ 10 | url: "/chain", 11 | type: 'GET', 12 | success: function (response) { 13 | chain = response["chain"]; 14 | for (var i = 0; i < chain.length; i++) { 15 | addRow(chain[i].index, 16 | chain[i].transaction.title, 17 | chain[i].transaction.author, 18 | chain[i].transaction.genre, 19 | chain[i].previous_hash, 20 | chain[i].timestamp, 21 | chain[i].transaction.public_key, 22 | chain[i].transaction.media, 23 | chain[i].transaction.filename); 24 | } 25 | console.log(response) 26 | $.LoadingOverlay("hide"); 27 | }, 28 | error: function (error) { 29 | console.log(error); 30 | $.LoadingOverlay("hide"); 31 | } 32 | }); 33 | }); 34 | 35 | function addRow(index, title, author, genre, previousHash, timestamp, pubkey, media, filename) { 36 | let specific_tbody = document.getElementById("blockchainBody"); 37 | let newRow = specific_tbody.insertRow(-1); 38 | newRow.setAttribute('data-toggle', 'modal'); 39 | newRow.setAttribute('data-target', '#detailsModal'); 40 | newRow.setAttribute('onclick', 'showDetails(' + index + ')'); 41 | newRow.classList.add("clickable"); 42 | 43 | // Insert a cell in the row at index 0 44 | newRow.insertCell(0).appendChild(document.createTextNode(index)); 45 | newRow.insertCell(1).appendChild(document.createTextNode(title)); 46 | newRow.insertCell(2).appendChild(document.createTextNode(author)); 47 | newRow.insertCell(3).appendChild(document.createTextNode(genre)); 48 | newRow.insertCell(4).appendChild(document.createTextNode(previousHash)); 49 | newRow.insertCell(5).appendChild(document.createTextNode(new Date(timestamp))); 50 | 51 | var pubkeyBlob = new Blob([pubkey], { type: 'text/plain' }); 52 | 53 | var lastCellElement = document.createElement('div'); 54 | lastCellElement.innerHTML = 55 | '' + 56 | ''; 57 | 58 | newRow.insertCell(6).appendChild(lastCellElement); 59 | } 60 | 61 | function addEmptyRow() { 62 | let specific_tbody = document.getElementById("blockchainBody"); 63 | let newRow = specific_tbody.insertRow(-1); 64 | 
newRow.colSpan = 7; 65 | 66 | // Insert a cell in the row at index 0 67 | let cell = newRow.insertCell(0); 68 | cell.colSpan = 7; 69 | cell.appendChild(document.createTextNode("...")); 70 | } 71 | 72 | function showDetails(id) { 73 | console.log(id); 74 | let currentBlock = chain[parseInt(id) - 1]; 75 | var pubkeyBlob = new Blob([currentBlock.transaction.public_key], { type: 'text/plain' }); 76 | 77 | $('#detailsModalLabel').html("
" + currentBlock.transaction.title + ""); 78 | $('#detailsModalBodyLeft').html( 79 | "" + "Index: " + "" + currentBlock.index + "<br>" + 80 | "" + "Author: " + "" + currentBlock.transaction.author + "<br>" + 81 | "" + "Genre: " + "" + currentBlock.transaction.genre + "<br>" + 82 | "Previous Hash: " + "" + currentBlock.previous_hash + "<br>" + 83 | "" + "Timestamp: " + "" + new Date(currentBlock.timestamp) + "<br>" + 84 | "" + "Owner Public Key: " + "" + '' + "<br>" + 85 | "" + "Original Filename: " + "" + currentBlock.transaction.filename + "<br>" + 86 | "" + "Original File: " + "" + '' + "<br>" 87 | ); 88 | 89 | let preview = ""; 90 | if (currentBlock.transaction.genre == "Audio") { 91 | preview = 92 | '' 97 | } else if (currentBlock.transaction.genre == "Image") { 98 | preview = 99 | ''; 100 | 101 | } else if (currentBlock.transaction.genre == "Text") { 102 | preview = 103 | '