├── .gitignore
├── Dockerfile
├── README.md
├── __main__.spec
├── build_dist.sh
├── docker-compose.yml
├── docs
│   └── imgs
│       ├── gazer_archive.png
│       ├── gazer_archive_tags.png
│       ├── gazer_main.png
│       ├── gazer_multi_booru_client.png
│       └── gazer_scraper.png
├── enable_dd.sh
├── gazer
│   ├── __init__.py
│   ├── create_data.py
│   ├── ddInterface.py
│   ├── main.spec
│   ├── models.py
│   ├── normal_flaskapp.spec
│   ├── scraper.py
│   ├── scraper_data
│   │   ├── config.json
│   │   ├── status.json
│   │   └── tag_set.json
│   ├── search.py
│   ├── sources
│   │   ├── gelbooru.py
│   │   ├── gelbooru_base.py
│   │   ├── konachan.py
│   │   ├── lolibooru.py
│   │   ├── moebooru_base.py
│   │   ├── safebooru.py
│   │   └── yandere.py
│   ├── static
│   │   ├── android-chrome-192x192.png
│   │   ├── android-chrome-512x512.png
│   │   ├── apple-touch-icon.png
│   │   ├── css
│   │   │   └── base.css
│   │   ├── dump
│   │   │   └── .gitignore
│   │   ├── favicon-16x16.png
│   │   ├── favicon-32x32.png
│   │   ├── favicon.ico
│   │   ├── js
│   │   │   └── posts_utility.js
│   │   ├── site.webmanifest
│   │   └── temp
│   │       └── .gitignore
│   ├── templates
│   │   ├── base.html
│   │   ├── index.html
│   │   ├── post.html
│   │   ├── posts.html
│   │   ├── poststats.html
│   │   ├── scraper.html
│   │   └── tags.html
│   ├── utilities.py
│   └── views.py
├── main.spec
├── requirements.txt
├── run.py
└── setup.py

--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | venv/
2 | dist/
3 | gazer.egg-info/
4 | DeepDanbooru/
5 | build/
6 | *.pyc
7 | *.spec
8 | __pycache__/
9 | postdata.db
10 | dd_pretrained/
11 | 
12 | bg_task.sh
13 | start_bg.sh

--------------------------------------------------------------------------------
/Dockerfile:
--------------------------------------------------------------------------------
1 | # Old Python because of Tensorflow
2 | FROM python:3.8-slim
3 | 
4 | WORKDIR /gazer
5 | COPY . .
6 | 
7 | RUN pip install -r requirements.txt && python setup.py install
8 | 
9 | # DeepDanbooru support:
10 | RUN apt-get update && apt-get -y --no-install-recommends install wget unzip git
11 | RUN git clone https://github.com/KichangKim/DeepDanbooru.git deepdanbooru && cd deepdanbooru && pip install .[tensorflow]
12 | RUN cd deepdanbooru && wget https://github.com/KichangKim/DeepDanbooru/releases/download/v4-20200814-sgd-e30/deepdanbooru-v4-20200814-sgd-e30.zip && unzip deepdanbooru-v4-20200814-sgd-e30.zip -d dd_pretrained && rm deepdanbooru-v4-20200814-sgd-e30.zip
13 | 
14 | EXPOSE 5000
15 | CMD [ "python", "gazer" ]
16 | 

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # Gazer
2 | 
3 | This is an experimental booru client and image archive tool I made to consolidate
4 | my own image browsing and archiving habits. It combines a simple, locally running
5 | image booru with a client for browsing some of the common booru sites (gelbooru,
6 | yandere, konachan, etc.) and includes a simple scraper for automatic downloading.
7 | 
8 | ## Features
9 | The application runs as a local web server rather than as a traditional GUI.
10 | It can be used either locally on a single computer or accessed over a local
11 | area network (for example, from a home server or media PC).
12 | ### Local Booru
13 | 

14 |

15 | - mirrors all tags from source
16 | - tag and multi-tag search
17 | - viewing statistics (by image and by tag)
18 | - sort by date created/score/local views
19 | - adjust thumbnail size
20 | - adjust thumbnails per page
21 | 
22 | ### Booru Client
23 | 

24 | - supports gelbooru and moebooru based sites
25 | - uses locally cached images when possible (saves bandwidth)
26 | - one-click archiving in the client
27 | 
28 | ### Scraper
29 | 

30 | - supports gelbooru and moebooru based sites
31 | - scrape and archive posts based on any tag or set of tags
32 | 
33 | ### Deep Danbooru Integration
34 | 
35 | - local archive gains extra tags inferred from each image using machine learning black magic
36 | - inferred tags are fully searchable, enabled via a toggle
37 | 
38 | ### Why not use hydrus or some other software?
39 | 
40 | I found hydrus to have a lot more features than I needed. What I really wanted
41 | was just a personally curated booru with an easy way to browse and archive new images.
42 | 
43 | ## Running
44 | 
45 | The code is not fully tested and stable, but it can be installed and run as a
46 | Python package, with the provided Docker image, or with the PyInstaller binaries
47 | available under Releases for Windows and Linux.
48 | 
49 | ### Linux/Unix/Mac
50 | Install any version of Python 3, then navigate to the gazer root directory and
51 | run the following commands in your terminal.
52 | ```
53 | python -m venv venv
54 | source venv/bin/activate
55 | python setup.py install
56 | pip install -r requirements.txt
57 | python run.py
58 | ```
59 | Open a browser and go to
60 | localhost:5000 or 127.0.0.1:5000
61 | 
62 | #### Enable DeepDanbooru Integration
63 | 
64 | ```
65 | chmod +x enable_dd.sh
66 | ./enable_dd.sh
67 | ```
68 | 
69 | ### Windows
70 | Install any version of Python 3, then navigate to the gazer root directory and
71 | run the following commands in your terminal.
72 | ```
73 | python -m venv venv
74 | .\venv\Scripts\activate
75 | python setup.py install
76 | pip install -r requirements.txt
77 | python run.py
78 | ```
79 | Open a browser and go to
80 | localhost:5000 or 127.0.0.1:5000
81 | 
82 | ### Docker
83 | If you have Docker set up, you can use the included Dockerfile to run the service.
84 | Note that by default this Dockerfile has DeepDanbooru integration enabled.
85 | 
86 | ```
87 | docker build -t gazer .
88 | docker run -p 5000:5000 gazer
89 | ```
90 | Open a browser and go to
91 | localhost:5000 or 127.0.0.1:5000
92 | 
93 | ## Getting Started
94 | 
95 | ### Setup Auto Scraper
96 | 
97 | Find the configuration file at `gazer/gazer/scraper_data/tag_set.json`
98 | 
99 | ```
100 | [
101 |     {"name":"Cute Autist", "source":"safebooru", "tags": ["constanze_amalie_von_braunschbank-albrechtsberger", "1girl"]}
102 | ]
103 | ```
104 | 
105 | You can edit this file to have any number of "tag sets". Each tag set must have
106 | one or more tags and list a supported booru as its source, e.g.:
107 | 
108 | ```
109 | [
110 |     {"name":"Cute Autist", "source":"safebooru", "tags": ["constanze_amalie_von_braunschbank-albrechtsberger", "1girl"]},
111 |     {"name":"Cute Maid Autist with gun", "source":"safebooru", "tags": ["constanze_amalie_von_braunschbank-albrechtsberger", "maid", "gun"]}
112 | ]
113 | ```
114 | 
115 | The scraper will attempt to download all images from the source that match your
116 | tags. If you update your tags in the config file while gazer is running, or want to scrape any new images, go to 127.0.0.1:5000/scraper and press the 'restart' button.
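
Each tag set is a JSON object with `name`, `source`, and `tags` keys. If you hand-edit the file, a quick sanity check can catch malformed entries before the scraper reads them. The sketch below is hypothetical (the `validate_tag_sets` helper is not part of gazer); the source names mirror those registered in `gazer/scraper.py`:

```python
import json

# Sample tag_set.json content, matching the example above.
SAMPLE = '''
[
  {"name": "Cute Autist", "source": "safebooru",
   "tags": ["constanze_amalie_von_braunschbank-albrechtsberger", "1girl"]}
]
'''

REQUIRED_KEYS = {"name", "source", "tags"}
# Sources the scraper knows about, per the source_map in gazer/scraper.py.
KNOWN_SOURCES = {"gelbooru", "safebooru", "konachan", "lolibooru", "yandere"}

def validate_tag_sets(tag_sets):
    """Return a list of error strings describing malformed tag set entries."""
    errors = []
    for i, entry in enumerate(tag_sets):
        missing = REQUIRED_KEYS - entry.keys()
        if missing:
            errors.append(f"entry {i}: missing keys {sorted(missing)}")
            continue
        if entry["source"] not in KNOWN_SOURCES:
            errors.append(f"entry {i}: unknown source {entry['source']!r}")
        if not entry["tags"]:
            errors.append(f"entry {i}: needs at least one tag")
    return errors

print(validate_tag_sets(json.loads(SAMPLE)))  # an empty list means the file is well formed
```

To check your real config, replace `SAMPLE` with the contents of `gazer/gazer/scraper_data/tag_set.json`.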
117 | -------------------------------------------------------------------------------- /__main__.spec: -------------------------------------------------------------------------------- 1 | # -*- mode: python ; coding: utf-8 -*- 2 | 3 | 4 | block_cipher = None 5 | 6 | 7 | a = Analysis(['gazer/__main__.py'], 8 | pathex=[], 9 | binaries=[], 10 | datas=[('gazer/templates', 'templates'), ('gazer/static', 'static')], 11 | hiddenimports=[], 12 | hookspath=[], 13 | hooksconfig={}, 14 | runtime_hooks=[], 15 | excludes=[], 16 | win_no_prefer_redirects=False, 17 | win_private_assemblies=False, 18 | cipher=block_cipher, 19 | noarchive=False) 20 | pyz = PYZ(a.pure, a.zipped_data, 21 | cipher=block_cipher) 22 | 23 | exe = EXE(pyz, 24 | a.scripts, 25 | [], 26 | exclude_binaries=True, 27 | name='__main__', 28 | debug=False, 29 | bootloader_ignore_signals=False, 30 | strip=False, 31 | upx=True, 32 | console=True, 33 | disable_windowed_traceback=False, 34 | target_arch=None, 35 | codesign_identity=None, 36 | entitlements_file=None ) 37 | coll = COLLECT(exe, 38 | a.binaries, 39 | a.zipfiles, 40 | a.datas, 41 | strip=False, 42 | upx=True, 43 | upx_exclude=[], 44 | name='__main__') 45 | -------------------------------------------------------------------------------- /build_dist.sh: -------------------------------------------------------------------------------- 1 | rm -r build dist 2 | pyinstaller --name gazer-server --add-data 'gazer/templates:gazer/templates' --add-data 'gazer/static:gazer/static' --add-data 'gazer/scraper_data:gazer/scraper_data' run.py -------------------------------------------------------------------------------- /docker-compose.yml: -------------------------------------------------------------------------------- 1 | version: '3.8' 2 | 3 | services: 4 | gazer: 5 | build: . 
6 | ports: 7 | - 5000:5000 8 | #volumes: 9 | #- ./config/:/gazer/gazer/scraper_data/ 10 | #- ./scrape-temp/:/gazer/gazer/static/temp/ 11 | -------------------------------------------------------------------------------- /docs/imgs/gazer_archive.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gazerdev/gazer/d24e525217595572ff46833e0b7ed5d9fe564609/docs/imgs/gazer_archive.png -------------------------------------------------------------------------------- /docs/imgs/gazer_archive_tags.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gazerdev/gazer/d24e525217595572ff46833e0b7ed5d9fe564609/docs/imgs/gazer_archive_tags.png -------------------------------------------------------------------------------- /docs/imgs/gazer_main.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gazerdev/gazer/d24e525217595572ff46833e0b7ed5d9fe564609/docs/imgs/gazer_main.png -------------------------------------------------------------------------------- /docs/imgs/gazer_multi_booru_client.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gazerdev/gazer/d24e525217595572ff46833e0b7ed5d9fe564609/docs/imgs/gazer_multi_booru_client.png -------------------------------------------------------------------------------- /docs/imgs/gazer_scraper.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gazerdev/gazer/d24e525217595572ff46833e0b7ed5d9fe564609/docs/imgs/gazer_scraper.png -------------------------------------------------------------------------------- /enable_dd.sh: -------------------------------------------------------------------------------- 1 | # clone latest deepdanbooru 2 | git clone https://github.com/KichangKim/DeepDanbooru.git 3 | cd 
DeepDanbooru 4 | 5 | # install deepdanbooru and tensorflow requirements 6 | source venv/bin/activate 7 | python setup.py install 8 | pip install --upgrade pip 9 | pip install tensorflow 10 | pip install scikit-image 11 | 12 | # fetch pretrained model 13 | cd .. 14 | wget https://github.com/KichangKim/DeepDanbooru/releases/download/v4-20200814-sgd-e30/deepdanbooru-v4-20200814-sgd-e30.zip 15 | unzip deepdanbooru-v4-20200814-sgd-e30.zip -d dd_pretrained 16 | rm deepdanbooru-v4-20200814-sgd-e30.zip 17 | -------------------------------------------------------------------------------- /gazer/__init__.py: -------------------------------------------------------------------------------- 1 | from flask import Flask 2 | import os 3 | import logging 4 | 5 | log = logging.getLogger('werkzeug') 6 | log.setLevel(logging.ERROR) 7 | 8 | app = Flask(__name__) 9 | 10 | from gazer import views 11 | 12 | def startup(): 13 | '''Handle any application startup tasks''' 14 | if not os.path.isfile('postdata.db'): 15 | import gazer.create_data 16 | if not os.path.isdir('dd_pretrained'): 17 | print('dd not enabled') 18 | 19 | startup() -------------------------------------------------------------------------------- /gazer/create_data.py: -------------------------------------------------------------------------------- 1 | import sqlite3 2 | from sqlalchemy import create_engine 3 | from sqlalchemy.ext.declarative import declarative_base 4 | from sqlalchemy.orm import sessionmaker 5 | 6 | engine = create_engine('sqlite:///postdata.db') 7 | 8 | from gazer.models import Posts, Base 9 | 10 | Base.metadata.create_all(engine) 11 | 12 | Session = sessionmaker(bind = engine) 13 | session = Session() 14 | -------------------------------------------------------------------------------- /gazer/ddInterface.py: -------------------------------------------------------------------------------- 1 | from subprocess import Popen, PIPE 2 | import os 3 | 4 | def evaluate(image_path): 5 | # test dd 6 | if not 
os.path.isdir('dd_pretrained'): 7 | return [] 8 | 9 | p = Popen(['deepdanbooru', 'evaluate', image_path, '--project-path', 'dd_pretrained', '--allow-folder'], stdin=PIPE, stdout=PIPE, stderr=PIPE) 10 | output, err = p.communicate("") 11 | 12 | # remove first and last lines which are not tags 13 | lines = str(output).split('\\n')[1:-3] 14 | tags = [line.split()[1] for line in lines] 15 | return tags 16 | 17 | def union(tags=None, dd_tags=None): 18 | ''' 19 | Return a string of tags which include the normal ones and dd 20 | ''' 21 | tag_set = set(tags.split()) 22 | dd_tags = list( tag_set.union(set(dd_tags)) ) 23 | dd_tags = '|{}|'.format('|'.join(dd_tags)) 24 | 25 | return dd_tags 26 | -------------------------------------------------------------------------------- /gazer/main.spec: -------------------------------------------------------------------------------- 1 | # -*- mode: python ; coding: utf-8 -*- 2 | 3 | block_cipher = None 4 | 5 | 6 | a = Analysis(['main.py'], 7 | pathex=['/home/gray/Documents/local_booru'], 8 | binaries=[], 9 | datas=[('templates', 'templates'), ('static', 'static')], 10 | hiddenimports=[], 11 | hookspath=[], 12 | runtime_hooks=[], 13 | excludes=[], 14 | win_no_prefer_redirects=False, 15 | win_private_assemblies=False, 16 | cipher=block_cipher, 17 | noarchive=False) 18 | pyz = PYZ(a.pure, a.zipped_data, 19 | cipher=block_cipher) 20 | exe = EXE(pyz, 21 | a.scripts, 22 | a.binaries, 23 | a.zipfiles, 24 | a.datas, 25 | [], 26 | name='main', 27 | debug=False, 28 | bootloader_ignore_signals=False, 29 | strip=False, 30 | upx=True, 31 | upx_exclude=[], 32 | runtime_tmpdir=None, 33 | console=True ) 34 | -------------------------------------------------------------------------------- /gazer/models.py: -------------------------------------------------------------------------------- 1 | #from main import db 2 | from sqlalchemy import Column, Integer, String 3 | from sqlalchemy.ext.declarative import declarative_base 4 | import json 5 | 6 | from 
sqlalchemy import create_engine, desc 7 | from sqlalchemy.ext.declarative import declarative_base 8 | from sqlalchemy.orm import sessionmaker, scoped_session 9 | 10 | Base = declarative_base() 11 | 12 | engine = create_engine('sqlite:///postdata.db', connect_args={'check_same_thread': False}) 13 | 14 | 15 | class Posts(Base): 16 | __tablename__ = 'posts' 17 | 18 | filename = Column(String, primary_key=True) 19 | id = Column(Integer) 20 | booru = Column(String) 21 | source = Column(String) 22 | score = Column(Integer) 23 | tags = Column(String) 24 | dd_tags = Column(String) 25 | rating = Column(String) 26 | created_at = Column(Integer) 27 | status = Column(String) 28 | creator_id = Column(Integer) 29 | change = Column(Integer) 30 | views = Column(Integer, default=0) 31 | 32 | @staticmethod 33 | def from_tuple(data): 34 | ''' 35 | Enables initializing from a tuple to support raw 36 | queries. 37 | ''' 38 | return Posts( 39 | filename=data[0], 40 | id=data[1], 41 | booru=data[2], 42 | source=data[3], 43 | score=data[4], 44 | tags=data[5], 45 | rating=data[6], 46 | created_at=data[7], 47 | status=data[8], 48 | creator_id=data[9], 49 | change=data[10], 50 | views=data[11], 51 | ) 52 | 53 | def get_filepath(self, filename): 54 | ''' 55 | Return full static filepath for filename using 56 | our two depth hash prefix scheme. 
57 | ''' 58 | if filename: 59 | return 'static/dump/{}/{}/{}'.format(filename[:2], filename[2:4], filename) 60 | else: 61 | return None 62 | 63 | def as_dict(self): 64 | ''' 65 | Returns posts instance as a json object 66 | ''' 67 | data = {'filename':self.filename, 'id':self.id, 68 | 'booru':self.booru, 69 | 'source':self.source, 70 | 'score':self.score, 71 | 'tags':self.tags.split('|'), 72 | 'dd_tags': self.dd_tags.split('|') if self.dd_tags else None, 73 | 'rating':self.rating, 'created_at':self.created_at, 74 | 'status':self.status, 'creator_id':self.creator_id, 75 | 'change':self.change, 76 | 'file':self.get_filepath(self.filename), 77 | 'views':self.views, 78 | } 79 | 80 | return data 81 | 82 | 83 | class Tag(Base): 84 | __tablename__ = 'tag' 85 | 86 | tag = Column(String, primary_key=True) 87 | count = Column(Integer) 88 | type = Column(String) 89 | ambiguous = Column(Integer) 90 | source = Column(String) 91 | views = Column(Integer, default=0) 92 | 93 | def as_dict(self): 94 | ''' 95 | Returns tag instance as a json object 96 | ''' 97 | data = {'tag':self.tag, 'count':self.count, 'type':self.type, 'ambiguous':self.ambiguous, 98 | 'source':self.source, 'views':self.views 99 | } 100 | 101 | return data 102 | 103 | @classmethod 104 | def serialize_tags(cls, tags=None, increment_views=False): 105 | serialized_tags = {'artist':[], 'character':[], 'copyright':[], 'tag':[]} 106 | tags = cls.get_tags(tags, increment_views) 107 | 108 | for tag in tags: 109 | if tag.get('type') == 'artist': 110 | serialized_tags['artist'].append(tag) 111 | elif tag.get('type') == 'character': 112 | serialized_tags['character'].append(tag) 113 | elif tag.get('type') == 'copyright': 114 | serialized_tags['copyright'].append(tag) 115 | else: 116 | serialized_tags['tag'].append(tag) 117 | 118 | return serialized_tags 119 | 120 | @classmethod 121 | def get_tags(cls, tags=None, increment_views=False): 122 | ''' 123 | Grab tag data from the local db 124 | ''' 125 | local_tags = [] 126 | for 
tag in tags: 127 | local_tag = session.query(cls).filter(cls.tag == tag).first() 128 | session.flush() 129 | if local_tag: 130 | # increment views for each local tag we grabbed 131 | if increment_views: 132 | local_tag.views += 1 133 | local_tags.append(local_tag.as_dict()) 134 | # tag isn't in db happens for dd derived tags 135 | else: 136 | local_tags.append({'tag':tag}) 137 | session.commit() 138 | 139 | return local_tags 140 | 141 | 142 | #Base.metadata.create_all(engine) 143 | 144 | Session = sessionmaker(bind = engine) 145 | session = scoped_session(Session) 146 | -------------------------------------------------------------------------------- /gazer/normal_flaskapp.spec: -------------------------------------------------------------------------------- 1 | # -*- mode: python ; coding: utf-8 -*- 2 | 3 | block_cipher = None 4 | 5 | 6 | a = Analysis(['normal_flaskapp.py'], 7 | pathex=['/home/gray/Documents/local_booru'], 8 | binaries=[], 9 | datas=[('templates', 'templates'), ('static', 'static')], 10 | hiddenimports=[], 11 | hookspath=[], 12 | runtime_hooks=[], 13 | excludes=[], 14 | win_no_prefer_redirects=False, 15 | win_private_assemblies=False, 16 | cipher=block_cipher, 17 | noarchive=False) 18 | pyz = PYZ(a.pure, a.zipped_data, 19 | cipher=block_cipher) 20 | exe = EXE(pyz, 21 | a.scripts, 22 | a.binaries, 23 | a.zipfiles, 24 | a.datas, 25 | [], 26 | name='normal_flaskapp', 27 | debug=False, 28 | bootloader_ignore_signals=False, 29 | strip=False, 30 | upx=True, 31 | upx_exclude=[], 32 | runtime_tmpdir=None, 33 | console=True ) 34 | -------------------------------------------------------------------------------- /gazer/scraper.py: -------------------------------------------------------------------------------- 1 | import time, json, os 2 | from gazer.sources.gelbooru import gelbooru_api 3 | from gazer.sources.konachan import konachan_api 4 | from gazer.sources.lolibooru import lolibooru_api 5 | from gazer.sources.yandere import yandere_api 6 | from 
gazer.sources.safebooru import safebooru_api 7 | 8 | from gazer.models import Posts, Tag, Base 9 | from gazer.models import session 10 | 11 | status_data = {"active":True, "current_tags":None, "images_downloaded": 0, "finished":False} 12 | source_map = {gelbooru_api.source:gelbooru_api, 13 | konachan_api.source:konachan_api, 14 | lolibooru_api.source:lolibooru_api, 15 | yandere_api.source:yandere_api, 16 | safebooru_api.source:safebooru_api 17 | } 18 | 19 | def scraper_run(): 20 | '''Runs our scraper until we get through all listed tags''' 21 | if os.path.exists("gazer/scraper_data/lock"): 22 | return 23 | 24 | # process lock 25 | with open("gazer/scraper_data/lock", "w") as lock: 26 | pass 27 | 28 | with open("gazer/scraper_data/tag_set.json") as tags: 29 | tag_sets = json.load(tags) 30 | with open("gazer/scraper_data/config.json") as config: 31 | CONFIG = json.load(config) 32 | with open("gazer/scraper_data/status.json", "w") as status: 33 | json.dump(status_data, status) 34 | 35 | for tag_set in tag_sets: 36 | page = 0 37 | status_data["current_tags"] = tag_set["name"] 38 | with open("gazer/scraper_data/status.json", "w") as status: 39 | json.dump(status_data, status) 40 | 41 | while 1: 42 | #posts = gelbooru_api.get_posts(tags=tag_set["tags"], page=page) 43 | source_api = source_map[tag_set['source']] 44 | posts = source_api.get_posts(tags=tag_set["tags"], page=page) 45 | page += 1 46 | if not len(posts): 47 | break 48 | 49 | for post in posts: 50 | local_post = session.query(Posts.filename).filter(Posts.filename == post['image']).first() 51 | if not local_post: 52 | new_post = source_api.get_post(post['id']) 53 | post_tags = source_api.get_tags(new_post.get('tags')) 54 | post_id = source_api.archive(new_post) 55 | source_api.save_tags(post_tags) 56 | 57 | status_data["images_downloaded"] += 1 58 | with open("gazer/scraper_data/status.json", "w") as status: 59 | json.dump(status_data, status) 60 | 61 | time.sleep(CONFIG['delay']) 62 | 63 | 
status_data["finished"] = True 64 | with open("gazer/scraper_data/status.json", "w") as status: 65 | json.dump(status_data, status) 66 | 67 | # end process lock 68 | os.remove("gazer/scraper_data/lock") 69 | 70 | 71 | -------------------------------------------------------------------------------- /gazer/scraper_data/config.json: -------------------------------------------------------------------------------- 1 | {"delay":1.5} -------------------------------------------------------------------------------- /gazer/scraper_data/status.json: -------------------------------------------------------------------------------- 1 | {"active": true, "current_tags": "Cute Autist", "images_downloaded": 54, "finished": true} -------------------------------------------------------------------------------- /gazer/scraper_data/tag_set.json: -------------------------------------------------------------------------------- 1 | [{"name":"Cute Autist", "source":"safebooru", "tags": ["constanze_amalie_von_braunschbank-albrechtsberger", "1girl"]} 2 | ] -------------------------------------------------------------------------------- /gazer/search.py: -------------------------------------------------------------------------------- 1 | from sqlalchemy import desc 2 | from gazer.utilities import escapeString 3 | 4 | from gazer.sources.gelbooru import gelbooru_api 5 | from gazer.sources.yandere import yandere_api 6 | from gazer.sources.konachan import konachan_api 7 | from gazer.sources.lolibooru import lolibooru_api 8 | from gazer.sources.safebooru import safebooru_api 9 | 10 | from gazer.models import session 11 | from gazer.models import Posts, Tag, Base 12 | 13 | def tagSearch(tags, limit=None, page=None, service=None, sort=None, dd_enabled=False): 14 | ''' 15 | Search for a list of tags in database using string LIKE method 16 | relies on pipe delimited tag list and string search. Not as optimized 17 | as I would like but linear worst case so maybe ok for our small 18 | local server. 
19 | ''' 20 | 21 | if service == 'archive': 22 | if not tags: 23 | if sort == "created-desc": 24 | results = session.query(Posts).order_by(desc('created_at')).limit(limit).offset(limit*page).all() 25 | elif sort == "created-asc": 26 | results = session.query(Posts).order_by('created_at').limit(limit).offset(limit*page).all() 27 | elif sort == "score-desc": 28 | results = session.query(Posts).order_by(desc('score')).limit(limit).offset(limit*page).all() 29 | elif sort == "score-asc": 30 | results = session.query(Posts).order_by('score').limit(limit).offset(limit*page).all() 31 | elif sort == "views-desc": 32 | results = session.query(Posts).order_by(desc('views')).limit(limit).offset(limit*page).all() 33 | elif sort == "views-asc": 34 | results = session.query(Posts).order_by('views').limit(limit).offset(limit*page).all() 35 | else: 36 | results = session.query(Posts).order_by(desc('id')).limit(limit).offset(limit*page).all() 37 | return [post.as_dict() for post in results] 38 | 39 | else: 40 | tag_query = '' 41 | tag_field = 'dd_tags' if dd_enabled else 'tags' 42 | order_clause = '' 43 | 44 | # giant if clause for ordering fix later 45 | if sort == "created-desc": 46 | order_clause = "ORDER BY created_at desc" 47 | elif sort == "created-asc": 48 | order_clause = "ORDER BY created_at" 49 | elif sort == "score-desc": 50 | order_clause = "ORDER BY score desc" 51 | elif sort == "score-asc": 52 | order_clause = "ORDER BY score asc" 53 | elif sort == "views-desc": 54 | order_clause = "ORDER BY views desc" 55 | elif sort == "views-asc": 56 | order_clause = "ORDER BY views" 57 | 58 | for i, tag in enumerate(tags): 59 | tag_escaped = escapeString(tag) 60 | if i == 0: 61 | tag_query += '''{} LIKE '%|{}|%' '''.format(tag_field, tag_escaped) 62 | else: 63 | tag_query += '''AND {} LIKE '%|{}|%' '''.format(tag_field, tag_escaped) 64 | 65 | query = '''SELECT * FROM posts 66 | WHERE {} 67 | {} 68 | LIMIT {} OFFSET {} 69 | '''.format(tag_query, order_clause, limit, limit*page) 70 | 
results = session.execute(query) 71 | results = [Posts.from_tuple(r) for r in results] 72 | 73 | return [post.as_dict() for post in results] 74 | 75 | elif service == 'gelbooru': 76 | post_json = gelbooru_api.get_posts(tags=tags, limit=limit, page=page) 77 | post_json = gelbooru_api.download_thumbnails(posts=post_json) 78 | 79 | return post_json 80 | 81 | elif service == 'yandere': 82 | post_json = yandere_api.get_posts(tags=tags, limit=limit, page=page) 83 | post_json = yandere_api.download_thumbnails(posts=post_json) 84 | 85 | return post_json 86 | 87 | elif service == 'konachan': 88 | post_json = konachan_api.get_posts(tags=tags, limit=limit, page=page) 89 | post_json = konachan_api.download_thumbnails(posts=post_json) 90 | 91 | return post_json 92 | 93 | elif service == 'lolibooru': 94 | post_json = lolibooru_api.get_posts(tags=tags, limit=limit, page=page) 95 | post_json = lolibooru_api.download_thumbnails(posts=post_json) 96 | 97 | return post_json 98 | 99 | elif service == 'safebooru': 100 | post_json = safebooru_api.get_posts(tags=tags, limit=limit, page=page) 101 | post_json = safebooru_api.download_thumbnails(posts=post_json) 102 | 103 | return post_json 104 | 105 | else: 106 | raise Exception("undefined service type") -------------------------------------------------------------------------------- /gazer/sources/gelbooru.py: -------------------------------------------------------------------------------- 1 | from gazer.sources.gelbooru_base import gelbooru_base 2 | 3 | class gelbooru_api(gelbooru_base): 4 | ''' 5 | Lets consolidate our interaction with the booru api into a nice 6 | class so we don't have to stare at a bunch of messy code. 
7 | ''' 8 | 9 | base_url = "https://gelbooru.com" 10 | source = "gelbooru" 11 | thumb_url = "https://img1.gelbooru.com/thumbnails" 12 | json_api = "index.php?page=dapi&s=post&q=index&json=1" 13 | -------------------------------------------------------------------------------- /gazer/sources/gelbooru_base.py: -------------------------------------------------------------------------------- 1 | import requests 2 | import os 3 | import shutil 4 | import concurrent.futures 5 | import html 6 | import json 7 | import datetime 8 | from dateutil import parser 9 | 10 | from gazer.models import Posts, Tag, Base 11 | from gazer.models import session 12 | from gazer.utilities import escapeString 13 | from gazer import ddInterface 14 | 15 | 16 | class gelbooru_base: 17 | ''' 18 | Lets consolidate our interaction with the booru api into a nice 19 | class so we don't have to stare at a bunch of messy code. 20 | ''' 21 | 22 | base_url = None 23 | thumb_url = None 24 | json_api = None 25 | 26 | def __init__(): 27 | pass 28 | 29 | @classmethod 30 | def archive(cls, post): 31 | ''' 32 | Lazy gelbooru archive function adds a post to our 33 | db and moves the image to our dump area. 
34 | ''' 35 | if isinstance(post.get('tags'), list): 36 | post['tags'] = ' '.join(post.get('tags')) 37 | 38 | tags = '|{}|'.format(post.get('tags').replace(' ', '|')) 39 | local_path = '' 40 | archive_path = '' 41 | 42 | dd_tags = ddInterface.evaluate('gazer/static/temp/{}'.format(post.get('image'))) 43 | dd_tags = ddInterface.union(tags=post.get('tags'), dd_tags=dd_tags) 44 | 45 | # handle annoying case that safebooru api does not return dates 46 | # by defaulting to present time if no date exists 47 | created_date = post.get('created_at') 48 | if created_date: 49 | parsed_date = parser.parse(post.get('created_at')) 50 | else: 51 | parsed_date = datetime.datetime.now() 52 | 53 | parsed_date = int(parsed_date.strftime("%Y%m%d%H%M%S")) 54 | 55 | new_post = Posts( 56 | filename=post.get('image'), 57 | id=post.get('id'), 58 | booru=cls.source, 59 | source=post.get('source'), 60 | score=post.get('score'), 61 | tags=tags, 62 | dd_tags=dd_tags, 63 | rating=post.get('rating'), 64 | status=post.get('status'), 65 | created_at=parsed_date, 66 | creator_id=post.get('creator_id') 67 | ) 68 | session.merge(new_post) 69 | session.commit() 70 | 71 | local_path = 'gazer/static/temp/{}'.format(post.get('image')) 72 | archive_path = 'gazer/static/dump/{}/{}/{}'.format(post.get('image')[:2], post.get('image')[2:4], post.get('image')) 73 | 74 | os.makedirs(os.path.dirname(archive_path), exist_ok=True) 75 | shutil.copyfile(local_path, archive_path) 76 | 77 | return new_post.id 78 | 79 | @classmethod 80 | def get_post(cls, id): 81 | url = "{}/index.php?page=dapi&s=post&q=index&json=1&id={}".format(cls.base_url, id) 82 | response = requests.get(url) 83 | post = json.loads(html.unescape(response.text))[0] 84 | 85 | post['file'] = 'static/temp/{}'.format(post.get('image')) 86 | filepath = 'gazer/{}'.format(post['file']) 87 | 88 | if not os.path.exists(filepath): 89 | response = requests.get(post.get('file_url'), stream=True) 90 | with open(filepath, 'wb') as out_file: 91 | 
shutil.copyfileobj(response.raw, out_file) 92 | del response 93 | 94 | # if tags are a string make them a list 95 | if isinstance(post['tags'], str): 96 | post['tags'] = post['tags'].split() 97 | 98 | return post 99 | 100 | @classmethod 101 | def get_posts(cls, tags=None, limit=100, page=0): 102 | ''' 103 | Grab post json from the gelbooru api 104 | ''' 105 | tags = ' '.join(tags) 106 | url = "{}/{}&tags={}&limit={}&pid={}"\ 107 | .format(cls.base_url, cls.json_api, tags, limit, page) 108 | response = requests.get(url) 109 | 110 | # gelbooru api does not return empty json list properly 111 | if response.text: 112 | try: 113 | return json.loads(html.unescape(response.text)) 114 | except Exception as e: 115 | print("Posts get JSON failure") 116 | print(url) 117 | print(e) 118 | return [] 119 | 120 | @classmethod 121 | def get_tags(cls, tags=None): 122 | ''' 123 | Grab tag data from the API 124 | ''' 125 | tags = ' '.join(tags) 126 | url = "{}/index.php?page=dapi&s=tag&q=index&json=1&names={}".format(cls.base_url, tags) 127 | response = requests.get(url) 128 | 129 | if response.text: 130 | return response.json() 131 | return [] 132 | 133 | @classmethod 134 | def serialize_tags(cls, tags=None): 135 | serialized_tags = {'artist':[], 'character':[], 'copyright':[], 'tag':[]} 136 | #tags = cls.get_tags(tags) 137 | 138 | for tag in tags: 139 | if tag.get('type') == 'artist': 140 | serialized_tags['artist'].append(tag) 141 | elif tag.get('type') == 'character': 142 | serialized_tags['character'].append(tag) 143 | elif tag.get('type') == 'copyright': 144 | serialized_tags['copyright'].append(tag) 145 | else: 146 | serialized_tags['tag'].append(tag) 147 | 148 | return serialized_tags 149 | 150 | @classmethod 151 | def save_tags(cls, tags=None): 152 | ''' 153 | Save tag data to the database 154 | ''' 155 | for tag in tags: 156 | new_tag = Tag(tag=tag.get('tag'), count=tag.get('count'), 157 | type=tag.get('type'), ambiguous=tag.get('ambiguous')) 158 | session.merge(new_tag) 159 
| 160 | session.commit() 161 | 162 | @classmethod 163 | def download_thumbnail(cls, url=None, local_path_thumb=None): 164 | ''' 165 | Download a specified thumbnail to the local path 166 | ''' 167 | 168 | if not os.path.exists(local_path_thumb): 169 | response = requests.get(url, stream=True) 170 | if response.status_code == 200: 171 | with open(local_path_thumb, 'wb') as out_file: 172 | shutil.copyfileobj(response.raw, out_file) 173 | del response 174 | return "successful thumbnail download" 175 | else: 176 | return "thumbnail download failed" 177 | 178 | @classmethod 179 | def download_thumbnails(cls, posts=None): 180 | ''' 181 | Download thumbnails from the booru. 182 | Skip any thumbnails that we already have. 183 | Return posts list with new local thumb paths. 184 | ''' 185 | with concurrent.futures.ThreadPoolExecutor() as executor: 186 | futures = [] 187 | for post in posts: 188 | # we may need to change this file structure if folder gets saturated 189 | local_path_thumb = 'gazer/static/temp/thumb_{}.jpg'.format(post.get('hash')) 190 | url = "{}/{}/thumbnail_{}.jpg"\ 191 | .format(cls.thumb_url, post.get('directory'), post.get('hash')) 192 | futures.append(executor.submit(cls.download_thumbnail, url=url, local_path_thumb=local_path_thumb)) 193 | 194 | # may need some error handling here 195 | post['thumbnail'] = 'static/temp/thumb_{}.jpg'.format(post.get('hash')) 196 | 197 | # just some debug stuff for now 198 | for future in concurrent.futures.as_completed(futures): 199 | future.result() 200 | 201 | return posts 202 | -------------------------------------------------------------------------------- /gazer/sources/konachan.py: -------------------------------------------------------------------------------- 1 | from gazer.sources.moebooru_base import moebooru_base 2 | 3 | class konachan_api(moebooru_base): 4 | ''' 5 | Lets consolidate our interaction with the booru api into a nice 6 | class so we don't have to stare at a bunch of messy code. 
7 | ''' 8 | 9 | base_url = "https://konachan.com" 10 | source = "konachan" 11 | thumb_url = "https://konachan.com/data/preview" 12 | json_api = "post.json" 13 | -------------------------------------------------------------------------------- /gazer/sources/lolibooru.py: -------------------------------------------------------------------------------- 1 | from gazer.sources.moebooru_base import moebooru_base 2 | 3 | class lolibooru_api(moebooru_base): 4 | ''' 5 | Let's consolidate our interaction with the booru api into a nice 6 | class so we don't have to stare at a bunch of messy code. 7 | ''' 8 | 9 | base_url = "https://lolibooru.moe" 10 | source = "lolibooru" 11 | thumb_url = "https://lolibooru.moe/data/preview" 12 | json_api = "post.json" 13 | -------------------------------------------------------------------------------- /gazer/sources/moebooru_base.py: -------------------------------------------------------------------------------- 1 | import requests 2 | import os 3 | import shutil 4 | import concurrent.futures 5 | import html 6 | import json 7 | import datetime 8 | 9 | from gazer.models import Posts, Base, Tag 10 | from gazer.models import session 11 | from gazer.utilities import escapeString 12 | from gazer import ddInterface 13 | 14 | from gazer.sources.gelbooru import gelbooru_api 15 | 16 | class moebooru_base: 17 | ''' 18 | Let's consolidate our interaction with the booru api into a nice 19 | class so we don't have to stare at a bunch of messy code. 20 | ''' 21 | 22 | base_url = None 23 | thumb_url = None 24 | json_api = None 25 | post_dict = {} 26 | 27 | tag_types = {1:'artist', 3:'copyright', 4:'character'} 28 | 29 | def __init__(self): 30 | pass 31 | 32 | @classmethod 33 | def archive(cls, post): 34 | ''' 35 | Lazy moebooru archive function adds a post to our 36 | db and moves the image to our dump area.
37 | ''' 38 | if isinstance(post.get('tags'), list): 39 | etags = [escapeString(tag) for tag in post.get('tags')] 40 | post['tags'] = ' '.join(etags) 41 | 42 | tags = '|{}|'.format(post.get('tags').replace(' ', '|')) 43 | local_path = '' 44 | archive_path = '' 45 | 46 | filename = '{}.{}'.format(post.get('md5'), post.get('file_ext')) 47 | local_path = 'gazer/static/temp/{}'.format(filename) 48 | archive_path = 'gazer/static/dump/{}/{}/{}'.format(post.get('md5')[:2], post.get('md5')[2:4], filename) 49 | parsed_date = datetime.datetime.fromtimestamp(post.get('created_at')) 50 | parsed_date = int(parsed_date.strftime("%Y%m%d%H%M%S")) 51 | 52 | dd_tags = ddInterface.evaluate(local_path) 53 | dd_tags = ddInterface.union(tags=post.get('tags'), dd_tags=dd_tags) 54 | 55 | new_post = Posts( 56 | filename='{}.{}'.format(post.get('md5'),post.get('file_ext')), 57 | id=post.get('id'), 58 | booru=cls.source, 59 | source=post.get('source'), 60 | score=post.get('score'), 61 | tags=tags, 62 | dd_tags=dd_tags, 63 | rating=post.get('rating'), 64 | status=post.get('status'), 65 | created_at=parsed_date, # moebooru uses timestamps 66 | creator_id=post.get('creator_id') 67 | ) 68 | session.merge(new_post) 69 | session.commit() 70 | 71 | os.makedirs(os.path.dirname(archive_path), exist_ok=True) 72 | shutil.copyfile(local_path, archive_path) 73 | 74 | return new_post.id 75 | 76 | @classmethod 77 | def cached_post(cls, id=None): 78 | if id: 79 | return cls.post_dict.get(int(id)) 80 | return None 81 | 82 | @classmethod 83 | def get_post(cls, id): 84 | post = cls.cached_post(id) 85 | 86 | post['file'] = 'static/temp/{}.{}'.format(post.get('md5'), post.get('file_ext')) 87 | filepath = 'gazer/{}'.format(post['file']) 88 | 89 | if not os.path.exists(filepath): 90 | response = requests.get(post.get('jpeg_url'), stream=True) 91 | with open(filepath, 'wb') as out_file: 92 | shutil.copyfileobj(response.raw, out_file) 93 | del response 94 | 95 | # if tags are a string make them a list 96 | if 
isinstance(post['tags'], str): 97 | post['tags'] = post['tags'].split() 98 | 99 | return post 100 | 101 | @classmethod 102 | def get_posts(cls, tags=None, limit=100, page=0): 103 | ''' 104 | Grab post json from the moebooru api 105 | Moebooru api page starts from 1 106 | ''' 107 | tags = ' '.join(tags) 108 | url = "{}/{}?tags={}&limit={}&page={}"\ 109 | .format(cls.base_url, cls.json_api, tags, limit, page+1) 110 | response = requests.get(url) 111 | 112 | # guard against an empty response body (the api may return nothing instead of []) 113 | if response.text: 114 | # give each post an image property for compatibility with the scraper 115 | posts = json.loads(html.unescape(response.text)) 116 | for post in posts: 117 | post['image'] = '{}.{}'.format(post.get('md5'), post.get('file_ext')) 118 | 119 | # hack lookup table to get around moebooru api issues 120 | cls.post_dict = {post['id']:post for post in posts} 121 | 122 | return posts 123 | else: 124 | return [] 125 | 126 | @classmethod 127 | def get_tags(cls, tags=None): 128 | ''' 129 | Grab tag data from the API 130 | Moebooru tag api has issues so just ask gelbooru instead 131 | ''' 132 | return gelbooru_api.get_tags(tags) 133 | 134 | @classmethod 135 | def serialize_tags(cls, tags=None): 136 | return gelbooru_api.serialize_tags(tags) 137 | 138 | @classmethod 139 | def save_tags(cls, tags=None): 140 | ''' 141 | Save tag data to the database 142 | ''' 143 | gelbooru_api.save_tags(tags) 144 | 145 | @classmethod 146 | def download_thumbnail(cls, url=None, local_path_thumb=None): 147 | ''' 148 | Download a specified thumbnail to the local path 149 | ''' 150 | 151 | if not os.path.exists(local_path_thumb): 152 | response = requests.get(url, stream=True) 153 | if response.status_code == 200: 154 | with open(local_path_thumb, 'wb') as out_file: 155 | shutil.copyfileobj(response.raw, out_file) 156 | del response 157 | return "successful thumbnail download" 158 | else: 159 | return "thumbnail download failed" 160 | 161 | @classmethod 162 | def
download_thumbnails(cls, posts=None): 163 | ''' 164 | Download thumbnails from the booru. 165 | Skip any thumbnails that we already have. 166 | Return posts list with new local thumb paths. 167 | ''' 168 | with concurrent.futures.ThreadPoolExecutor() as executor: 169 | futures = [] 170 | for post in posts: 171 | # we may need to change this file structure if folder gets saturated 172 | local_path_thumb = 'gazer/static/temp/thumb_{}.jpg'.format(post.get('md5')) 173 | url = "{}/{}/{}/{}.jpg"\ 174 | .format(cls.thumb_url, post.get('md5')[:2], post.get('md5')[2:4], post.get('md5')) 175 | futures.append(executor.submit(cls.download_thumbnail, url=url, local_path_thumb=local_path_thumb)) 176 | 177 | # may need some error handling here 178 | post['thumbnail'] = 'static/temp/thumb_{}.jpg'.format(post.get('md5')) 179 | 180 | # just some debug stuff for now 181 | for future in concurrent.futures.as_completed(futures): 182 | future.result() 183 | 184 | return posts 185 | -------------------------------------------------------------------------------- /gazer/sources/safebooru.py: -------------------------------------------------------------------------------- 1 | import concurrent.futures 2 | import requests 3 | import os 4 | import shutil 5 | 6 | from gazer.sources.gelbooru_base import gelbooru_base 7 | from gazer.sources.gelbooru import gelbooru_api 8 | 9 | class safebooru_api(gelbooru_base): 10 | ''' 11 | Lets consolidate our interaction with the booru api into a nice 12 | class so we don't have to stare at a bunch of messy code. 
13 | ''' 14 | 15 | base_url = "https://safebooru.org" 16 | source = "safebooru" 17 | thumb_url = "https://safebooru.org/thumbnails" 18 | json_api = "index.php?page=dapi&s=post&q=index&json=1" 19 | 20 | @classmethod 21 | def get_tags(cls, tags=None): 22 | ''' 23 | Grab tag data from the API 24 | Safebooru tag api has issues so just ask gelbooru instead 25 | ''' 26 | return gelbooru_api.get_tags(tags) 27 | 28 | @classmethod 29 | def serialize_tags(cls, tags=None): 30 | return gelbooru_api.serialize_tags(tags) 31 | 32 | @classmethod 33 | def save_tags(cls, tags=None): 34 | ''' 35 | Save tag data to the database 36 | ''' 37 | gelbooru_api.save_tags(tags) 38 | 39 | @classmethod 40 | def get_post(cls, id): 41 | url = "{}/index.php?page=dapi&s=post&q=index&json=1&id={}".format(cls.base_url, id) 42 | response = requests.get(url) 43 | post = response.json()[0] 44 | 45 | post['file'] = 'static/temp/{}'.format(post.get('image')) 46 | filepath = 'gazer/{}'.format(post['file']) 47 | 48 | if not os.path.exists(filepath): # check the on-disk path, not the web-relative one 49 | image_url = '{}/images/{}/{}'.format(cls.base_url, post.get('directory'), post.get('image')) 50 | response = requests.get(image_url, stream=True) 51 | with open(filepath, 'wb') as out_file: 52 | shutil.copyfileobj(response.raw, out_file) 53 | del response 54 | 55 | # if tags are a string make them a list 56 | if isinstance(post['tags'], str): 57 | post['tags'] = post['tags'].split() 58 | 59 | return post 60 | 61 | @classmethod 62 | def download_thumbnails(cls, posts=None): 63 | ''' 64 | Download thumbnails from the booru. 65 | Skip any thumbnails that we already have. 66 | Return posts list with new local thumb paths.
67 | ''' 68 | with concurrent.futures.ThreadPoolExecutor() as executor: 69 | futures = [] 70 | for post in posts: 71 | image_name = post.get('image')[:-4] 72 | # we may need to change this file structure if folder gets saturated 73 | local_path_thumb = 'gazer/static/temp/thumb_{}.jpg'.format(post.get('hash')) 74 | url = "{}/{}/thumbnail_{}.jpg"\ 75 | .format(cls.thumb_url, post.get('directory'), image_name) 76 | futures.append(executor.submit(cls.download_thumbnail, url=url, local_path_thumb=local_path_thumb)) 77 | 78 | # may need some error handling here 79 | post['thumbnail'] = 'static/temp/thumb_{}.jpg'.format(post.get('hash')) 80 | 81 | # just some debug stuff for now 82 | for future in concurrent.futures.as_completed(futures): 83 | future.result() 84 | 85 | return posts 86 | -------------------------------------------------------------------------------- /gazer/sources/yandere.py: -------------------------------------------------------------------------------- 1 | from gazer.sources.moebooru_base import moebooru_base 2 | 3 | class yandere_api(moebooru_base): 4 | ''' 5 | Lets consolidate our interaction with the booru api into a nice 6 | class so we don't have to stare at a bunch of messy code. 
7 | ''' 8 | 9 | base_url = "https://yande.re" 10 | source = "yandere" 11 | thumb_url = "https://assets.yande.re/data/preview" 12 | json_api = "post.json" 13 | -------------------------------------------------------------------------------- /gazer/static/android-chrome-192x192.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gazerdev/gazer/d24e525217595572ff46833e0b7ed5d9fe564609/gazer/static/android-chrome-192x192.png -------------------------------------------------------------------------------- /gazer/static/android-chrome-512x512.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gazerdev/gazer/d24e525217595572ff46833e0b7ed5d9fe564609/gazer/static/android-chrome-512x512.png -------------------------------------------------------------------------------- /gazer/static/apple-touch-icon.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gazerdev/gazer/d24e525217595572ff46833e0b7ed5d9fe564609/gazer/static/apple-touch-icon.png -------------------------------------------------------------------------------- /gazer/static/css/base.css: -------------------------------------------------------------------------------- 1 | /* 2 | Base template css 3 | */ 4 | body { 5 | background-color: #121212; 6 | color: #e6e6e6; 7 | border-color: #e6e6e6; 8 | display: flex; 9 | min-height: 100vh; 10 | flex-direction: column; 11 | margin: 10px; 12 | } 13 | 14 | .search-row { 15 | margin-left: 10px; 16 | } 17 | 18 | h1 { 19 | margin: 10px; 20 | height: 50px; 21 | font-weight: bold; 22 | } 23 | 24 | ul { 25 | list-style-type: none; /* Remove bullets */ 26 | padding: 0; /* Remove padding */ 27 | margin: 0; /* Remove margins */ 28 | } 29 | 30 | a:link { 31 | color: #777777; 32 | } 33 | a:hover { 34 | color: #2d8653; 35 | } 36 | 37 | /* 38 | Individual post css 39 | */ 40 | .post-main { 41 |
display: flex; 42 | flex: 1; 43 | } 44 | .post-main > tags 45 | { 46 | flex: 0 0 20vw; 47 | margin-left: 10px; 48 | } 49 | 50 | /* 51 | CSS for main posts grid interface 52 | */ 53 | 54 | .posts-container { 55 | display: flex; 56 | flex-direction: row; 57 | flex-basis: content; 58 | } 59 | 60 | .navigation-button { 61 | display: flex; 62 | min-width: 50px; 63 | } 64 | 65 | .post-grid { 66 | display: flex; 67 | flex: 2; 68 | flex-wrap: wrap; 69 | /* padding: 5px; */ 70 | } 71 | 72 | .post { 73 | margin: 10px; 74 | } 75 | 76 | 77 | -------------------------------------------------------------------------------- /gazer/static/dump/.gitignore: -------------------------------------------------------------------------------- 1 | * 2 | !.gitignore -------------------------------------------------------------------------------- /gazer/static/favicon-16x16.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gazerdev/gazer/d24e525217595572ff46833e0b7ed5d9fe564609/gazer/static/favicon-16x16.png -------------------------------------------------------------------------------- /gazer/static/favicon-32x32.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gazerdev/gazer/d24e525217595572ff46833e0b7ed5d9fe564609/gazer/static/favicon-32x32.png -------------------------------------------------------------------------------- /gazer/static/favicon.ico: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/gazerdev/gazer/d24e525217595572ff46833e0b7ed5d9fe564609/gazer/static/favicon.ico -------------------------------------------------------------------------------- /gazer/static/js/posts_utility.js: -------------------------------------------------------------------------------- 1 | function resizeThumbs(){ 2 | var size = document.getElementById("thumb-size-select").value; 3 | var allThumbs = 
document.images; 4 | for(var i = 0; i < allThumbs.length; i++){ 5 | if(allThumbs[i].classList.contains("post")){ 6 | allThumbs[i].width = size; 7 | } 8 | } 9 | } 10 | 11 | async function tagComplete(value){ 12 | let response = await fetch("/tagcomplete/" + value); 13 | let optionList = ""; 14 | 15 | if (response.ok) { 16 | let json = await response.json(); 17 | for(var i = 0; i < json.length; i++ ){ 18 | optionList += `<option value="${json[i]}">\n`; 19 | } 20 | return optionList; 21 | 22 | } else { 23 | console.error("http error: " + response.status); 24 | } 25 | 26 | return ""; 27 | } 28 | 29 | 30 | -------------------------------------------------------------------------------- /gazer/static/site.webmanifest: -------------------------------------------------------------------------------- 1 | {"name":"","short_name":"","icons":[{"src":"/android-chrome-192x192.png","sizes":"192x192","type":"image/png"},{"src":"/android-chrome-512x512.png","sizes":"512x512","type":"image/png"}],"theme_color":"#ffffff","background_color":"#ffffff","display":"standalone"} -------------------------------------------------------------------------------- /gazer/static/temp/.gitignore: -------------------------------------------------------------------------------- 1 | * 2 | !.gitignore -------------------------------------------------------------------------------- /gazer/templates/base.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | Gazer 10 | 11 | 12 |

Gazer

13 | 14 |
15 | 19 | 20 | 21 | 22 | 23 | 24 | 32 | 33 | {% if service == "archive" %} 34 | 35 | 36 | 37 | 45 | {% endif %} 46 | 47 | 48 | 55 | 56 | 57 | 64 | 65 | 66 |
67 | 68 | {% block content %} 69 | {% endblock %} 70 | 71 | 72 | 73 | -------------------------------------------------------------------------------- /gazer/templates/index.html: -------------------------------------------------------------------------------- 1 | {% extends "base.html" %} 2 | {% block content %} 3 | Main

4 | Scraper

5 | Tag Archive Stats

6 | Post Archive Stats 7 | {% endblock content %} 8 | -------------------------------------------------------------------------------- /gazer/templates/post.html: -------------------------------------------------------------------------------- 1 | {% extends "base.html" %} 2 | {% block content %} 3 | 4 | 5 | {% if not archived %} 6 |

7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
16 | {% endif %} 17 | 18 |
19 | 20 |

Artist

21 | 26 |

Character

27 | 32 |

Copyright

33 | 38 |

Tags

39 | 44 | {% if service == "archive" %} 45 |

46 |

47 | DD Tags 48 |
    49 | {% for tag in dd_tags.tag %} 50 |
  • {{tag.tag}}
  • 51 | {% endfor %} 52 |
53 |
54 | {% endif %} 55 |
56 | 57 | 58 | {% if post.video %} 59 | 62 | {% else %} 63 | 64 | {% endif %} 65 |

66 |

67 | details 68 | id: {{post.id}}

69 | {% if archived %} 70 | views: {{post.views}}

71 | booru: {{post.booru}}

72 | {% endif %} 73 | source: {{post.source}}

74 | rating: {{post.rating}}

75 | score: {{post.score}}

76 | created: {{post.created_at}} 77 |

78 |
79 | 80 |
81 | {% endblock %} 82 | 83 | 84 | 85 | 86 | 87 | 88 | -------------------------------------------------------------------------------- /gazer/templates/posts.html: -------------------------------------------------------------------------------- 1 | {% extends "base.html" %} 2 | {% block content %} 3 | 4 | 5 | 6 |
7 | {% if page >= 1 %} 8 | 9 | 11 | 12 | {% endif %} 13 | 14 |
15 | {% for post in posts: %} 16 | {% if post.thumbnail is defined %} 17 | 18 | 19 | 20 | {% elif post.video is true %} 21 | 22 | 25 | 26 | {% else %} 27 | 28 | 29 | 30 | {% endif %} 31 | {% endfor %} 32 |
33 | 34 | 35 | 37 | 38 | 39 |
40 | 41 | {% endblock content %} 42 | -------------------------------------------------------------------------------- /gazer/templates/poststats.html: -------------------------------------------------------------------------------- 1 | {% extends "base.html" %} 2 | {% block content %} 3 | 8 | {% endblock content %} 9 | -------------------------------------------------------------------------------- /gazer/templates/scraper.html: -------------------------------------------------------------------------------- 1 | {% extends "base.html" %} 2 | {% block content %} 3 |

Scraper

4 | Active: {{status.active}}

5 | Currently Downloading: {{status.current_tags}}

6 | Images Downloaded: {{status.images_downloaded}}

7 | Complete: {{status.finished}}

8 | 9 |

Tag List

10 |
11 | 12 |
13 | 18 |

19 | {% endblock content %} 20 | -------------------------------------------------------------------------------- /gazer/templates/tags.html: -------------------------------------------------------------------------------- 1 | {% extends "base.html" %} 2 | {% block content %} 3 |

Local Tags

4 |  9 | {% endblock content %} 10 | -------------------------------------------------------------------------------- /gazer/utilities.py: -------------------------------------------------------------------------------- 1 | def escapeString(query_string): 2 | scs = ["'"] 3 | for sc in scs: 4 | query_string = query_string.replace(sc, "''") 5 | return query_string 6 | -------------------------------------------------------------------------------- /gazer/views.py: -------------------------------------------------------------------------------- 1 | from flask import Flask, render_template, request, abort 2 | from sqlalchemy import create_engine, desc 3 | from sqlalchemy.ext.declarative import declarative_base 4 | from sqlalchemy.orm import sessionmaker, scoped_session 5 | 6 | import json 7 | import requests 8 | import shutil 9 | import os 10 | from multiprocessing import Process 11 | from gazer.scraper import scraper_run 12 | from gazer.utilities import escapeString 13 | 14 | from gazer.sources.gelbooru import gelbooru_api 15 | from gazer.sources.yandere import yandere_api 16 | from gazer.sources.konachan import konachan_api 17 | from gazer.sources.lolibooru import lolibooru_api 18 | from gazer.sources.safebooru import safebooru_api 19 | 20 | from gazer.search import tagSearch 21 | 22 | from gazer.models import session 23 | from gazer.models import Posts, Tag, Base 24 | from gazer import app 25 | 26 | @app.route('/') 27 | def index(): 28 | ''' 29 | Main page display index and navigation menu 30 | ''' 31 | return render_template('index.html') 32 | 33 | @app.route('/tags') 34 | def tags(): 35 | ''' 36 | Display basic list of most viewed tags 37 | ''' 38 | tags = session.query(Tag).order_by(desc(Tag.views)).limit(100).all() 39 | tags = [tag.as_dict() for tag in tags] 40 | return render_template("tags.html", tags=tags) 41 | 42 | @app.route('/tag/<name>') 43 | def tag(name): 44 | ''' 45 | Display information about a particular tag 46 | ''' 47 | tag = 
session.query(Tag).filter(Tag.tag==name).first() 48 | if tag: 49 | return json.dumps(tag.as_dict()) 50 | else: 51 | abort(404) 52 | 53 | @app.route('/tagcomplete/<field>') 54 | def tagcomplete(field): 55 | ''' 56 | Return closest tags for typed text 57 | ''' 58 | input_tags = field.split(' ') 59 | last_tag = input_tags[-1] 60 | 61 | search = "{}%".format(last_tag) 62 | tags = session.query(Tag).filter(Tag.tag.like(search)) \ 63 | .order_by(Tag.count.desc()) \ 64 | .limit(10).all() 65 | 66 | tags = [tag.as_dict() for tag in tags] 67 | output = [] 68 | for tag in tags: 69 | tag_name = ' '.join(input_tags[:-1]) + ' ' + tag.get("tag") 70 | output.append(tag_name) 71 | 72 | return json.dumps(output) 73 | 74 | @app.route('/poststats') 75 | def poststats(): 76 | ''' 77 | Display information about the most viewed posts 78 | ''' 79 | posts = session.query(Posts).order_by(desc(Posts.views)).limit(100).all() 80 | posts = [stat.as_dict() for stat in posts] 81 | return render_template("poststats.html", posts=posts) 82 | 83 | @app.route('/scraper', methods = ["GET", "POST"]) 84 | def scraper(): 85 | ''' 86 | Display interface for booru scraper 87 | ''' 88 | if request.method == "POST": 89 | scraper_process = Process(target=scraper_run) 90 | scraper_process.start() 91 | 92 | with open("gazer/scraper_data/tag_set.json") as tags: 93 | tag_set = json.load(tags) 94 | with open("gazer/scraper_data/status.json") as status: 95 | current_status = json.load(status) 96 | 97 | return render_template("scraper.html", tag_set=tag_set, status=current_status) 98 | 99 | 100 | @app.route('/posts') 101 | def posts(): 102 | ''' 103 | Display main posts page for archive/booru client 104 | ''' 105 | page = int(request.args.get('page', 0)) 106 | limit = int(request.args.get('limit', 25)) 107 | thumb_size = int(request.args.get('thumb_size', 200)) 108 | service = request.args.get('service', 'archive') 109 | sort = request.args.get('sort', 'created-desc') 110 | dd_enabled = True if 
request.args.get('dd_enabled') else False 111 | tags = request.args.get('tags', '') 112 | 113 | search_tags = tags.split() 114 | posts = tagSearch(search_tags, page=page, limit=limit, service=service, sort=sort, dd_enabled=dd_enabled) 115 | 116 | # handle local webm and mp4s 117 | for post in posts: 118 | local_vid = True if (post.get('file') and (post.get('file').endswith('webm') or post.get('file').endswith('mp4'))) else False 119 | booru_vid = True if (post.get('file_url') and (post.get('file_url').endswith('webm') or post.get('file_url').endswith('mp4'))) else False 120 | post['video'] = local_vid or booru_vid 121 | 122 | # easy way to handle our state is just to pass back the 123 | # values that the page was called with for use in our relative links 124 | return render_template('posts.html', 125 | title='post test', 126 | service=service, 127 | tags=tags, 128 | posts=posts, 129 | page=page, 130 | limit=limit, 131 | sort=sort, 132 | dd_enabled=dd_enabled, 133 | thumb_size=thumb_size, 134 | ) 135 | 136 | # this needs some sort of cleanup getting messy 137 | @app.route('/post/<id>') 138 | def post(id): 139 | ''' 140 | Display an individual post 141 | ''' 142 | service = request.args.get('service', 'archive') 143 | tags = request.args.get('tags', '') 144 | archived = False 145 | post_tags = [] 146 | serialized_dd_extra_tags = None 147 | limit = int(request.args.get('limit', 25)) 148 | thumb_size = int(request.args.get('thumb_size', 200)) 149 | sort = request.args.get('sort', 'created-desc') 150 | dd_enabled = True if request.args.get('dd_enabled') else False 151 | 152 | search_tags = tags.split() 153 | 154 | stats = None 155 | if service == 'archive': 156 | archived = True 157 | post = session.query(Posts).filter(Posts.id==id).first() 158 | post.views += 1 159 | session.commit() 160 | post = post.as_dict() 161 | serialized_tags = Tag.serialize_tags(post.get('tags'), increment_views=True) 162 | 163 | if post.get('dd_tags'): 164 | dd_extra_tags = 
list(set(post.get('dd_tags')) - set(post.get('tags'))) 165 | serialized_dd_extra_tags = Tag.serialize_tags(dd_extra_tags) 166 | 167 | elif service == 'gelbooru': 168 | post = gelbooru_api.get_post(id) 169 | post_tags = gelbooru_api.get_tags(post.get('tags')) 170 | serialized_tags = gelbooru_api.serialize_tags(post_tags) 171 | 172 | if request.args.get('archive'): 173 | post_id = gelbooru_api.archive(post) 174 | gelbooru_api.save_tags(post_tags) 175 | 176 | elif service == 'yandere': 177 | post = yandere_api.get_post(id) 178 | post_tags = yandere_api.get_tags(post.get('tags')) 179 | serialized_tags = yandere_api.serialize_tags(post_tags) 180 | 181 | if request.args.get('archive'): 182 | post_id = yandere_api.archive(post) 183 | yandere_api.save_tags(post_tags) 184 | 185 | elif service == 'konachan': 186 | post = konachan_api.get_post(id) 187 | post_tags = konachan_api.get_tags(post.get('tags')) 188 | serialized_tags = konachan_api.serialize_tags(post_tags) 189 | 190 | if request.args.get('archive'): 191 | post_id = konachan_api.archive(post) 192 | konachan_api.save_tags(post_tags) 193 | 194 | elif service == 'lolibooru': 195 | post = lolibooru_api.get_post(id) 196 | post_tags = lolibooru_api.get_tags(post.get('tags')) 197 | serialized_tags = lolibooru_api.serialize_tags(post_tags) 198 | 199 | if request.args.get('archive'): 200 | post_id = lolibooru_api.archive(post) 201 | lolibooru_api.save_tags(post_tags) 202 | 203 | elif service == 'safebooru': 204 | post = safebooru_api.get_post(id) 205 | post_tags = gelbooru_api.get_tags(post.get('tags')) # safebooru tag api is wonky use gelbooru 206 | serialized_tags = safebooru_api.serialize_tags(post_tags) 207 | 208 | if request.args.get('archive'): 209 | post_id = safebooru_api.archive(post) 210 | safebooru_api.save_tags(post_tags) 211 | 212 | else: 213 | raise Exception("undefined service type") 214 | 215 | # do some handling for different media element types 216 | post['video'] = True if 
(post.get('file').endswith('webm') or post.get('file').endswith('mp4')) else False 217 | 218 | return render_template('post.html', 219 | post=post, 220 | tags=tags, 221 | stats=stats, 222 | post_tags=serialized_tags, 223 | dd_tags=serialized_dd_extra_tags, 224 | service=service, 225 | limit=limit, 226 | dd_enabled=dd_enabled, 227 | thumb_size=thumb_size, 228 | sort=sort, 229 | archived=archived 230 | ) -------------------------------------------------------------------------------- /main.spec: -------------------------------------------------------------------------------- 1 | # -*- mode: python ; coding: utf-8 -*- 2 | 3 | block_cipher = None 4 | 5 | 6 | a = Analysis(['main.py'], 7 | pathex=['/home/gray/Documents/gazer'], 8 | binaries=[], 9 | datas=[('templates', 'templates'), ('static', 'static')], 10 | hiddenimports=[], 11 | hookspath=[], 12 | runtime_hooks=[], 13 | excludes=[], 14 | win_no_prefer_redirects=False, 15 | win_private_assemblies=False, 16 | cipher=block_cipher, 17 | noarchive=False) 18 | pyz = PYZ(a.pure, a.zipped_data, 19 | cipher=block_cipher) 20 | exe = EXE(pyz, 21 | a.scripts, 22 | a.binaries, 23 | a.zipfiles, 24 | a.datas, 25 | [], 26 | name='main', 27 | debug=False, 28 | bootloader_ignore_signals=False, 29 | strip=False, 30 | upx=True, 31 | upx_exclude=[], 32 | runtime_tmpdir=None, 33 | console=True ) 34 | -------------------------------------------------------------------------------- /requirements.txt: -------------------------------------------------------------------------------- 1 | certifi==2020.12.5 2 | chardet==4.0.0 3 | click==7.1.2 4 | Flask==1.1.2 5 | Flask-SQLAlchemy==2.4.4 6 | idna==2.10 7 | itsdangerous==1.1.0 8 | Jinja2==2.11.2 9 | MarkupSafe==1.1.1 10 | python-dateutil==2.8.2 11 | requests==2.25.1 12 | six==1.16.0 13 | SQLAlchemy==1.3.18 14 | urllib3==1.26.3 15 | Werkzeug==1.0.1 16 | -------------------------------------------------------------------------------- /run.py: 
-------------------------------------------------------------------------------- 1 | import gazer 2 | import os 3 | 4 | gazer.app.run("0.0.0.0", debug=False, threaded=True) -------------------------------------------------------------------------------- /setup.py: -------------------------------------------------------------------------------- 1 | from setuptools import setup, find_packages 2 | try: # for pip >= 10 3 | from pip._internal.req import parse_requirements 4 | except ImportError: # for pip <= 9.0.3 5 | from pip.req import parse_requirements 6 | 7 | reqs = parse_requirements('./requirements.txt', session=False) 8 | try: 9 | requirements = [str(ir.req) for ir in reqs] 10 | except AttributeError: # newer pip exposes .requirement instead of .req 11 | requirements = [str(ir.requirement) for ir in reqs] 12 | 13 | def readme(): 14 | with open('README.md') as f: 15 | return f.read() 16 | 17 | setup(name='gazer', 18 | version='0.1', 19 | description='Experimental Local Booru and Booru Client', 20 | long_description=readme(), 21 | url='http://github.com/gazerdev/gazer', 22 | author='gazerdev', 23 | author_email='gazerdev@gmail.com', 24 | license='MIT', 25 | packages=find_packages(), 26 | install_requires=requirements, 27 | zip_safe=False) 28 | --------------------------------------------------------------------------------
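The `archive` methods above shard saved files under `static/dump/` by the first two byte-pairs of the md5 (`dump/ab/cd/abcd….jpg`), which keeps any single directory from saturating. A minimal standalone sketch of that path scheme (the helper name `archive_path` is ours, not part of the codebase):

```python
import os

def archive_path(md5, file_ext, root="gazer/static/dump"):
    """Build the sharded archive path used by the archive() methods:
    root/<md5[:2]>/<md5[2:4]>/<md5>.<ext> -- two levels of 256 buckets."""
    filename = "{}.{}".format(md5, file_ext)
    return os.path.join(root, md5[:2], md5[2:4], filename)

# e.g. archive_path("d41d8cd98f00b204e9800998ecf8427e", "jpg")
# -> gazer/static/dump/d4/1d/d41d8cd98f00b204e9800998ecf8427e.jpg
```

The caller then creates the directory with `os.makedirs(os.path.dirname(path), exist_ok=True)` before copying, exactly as `moebooru_base.archive` does.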
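`serialize_tags` in `gelbooru_base` (reused by the moebooru and safebooru classes) buckets tag records by their `type` field into the four groups the templates render. A compact standalone restatement of that logic, under the assumption that tag records are plain dicts as returned by the tag API:

```python
def serialize_tags(tags):
    """Bucket booru tag records into the four groups the templates render.
    Records with an unknown or missing type fall through to the generic 'tag' bucket."""
    serialized = {'artist': [], 'character': [], 'copyright': [], 'tag': []}
    for tag in tags:
        bucket = tag.get('type')
        if bucket not in ('artist', 'character', 'copyright'):
            bucket = 'tag'
        serialized[bucket].append(tag)
    return serialized
```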
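One subtlety worth pulling out of the two `get_posts` implementations: gelbooru-style APIs page with a zero-based `pid` query parameter, while moebooru-style APIs (konachan, yandere, lolibooru) use a one-based `page` parameter, which is why `moebooru_base.get_posts` formats `page+1`. A sketch condensing the two URL formats (the helper name and `flavor` argument are ours; like the source, this joins tags with spaces and leaves URL-encoding to the HTTP layer):

```python
def posts_url(base_url, json_api, tags, limit=100, page=0, flavor="gelbooru"):
    """Build a post-listing URL; gelbooru pages from pid=0, moebooru from page=1."""
    tag_query = ' '.join(tags)
    if flavor == "gelbooru":
        # json_api already carries its own query string, so parameters chain with '&'
        return "{}/{}&tags={}&limit={}&pid={}".format(base_url, json_api, tag_query, limit, page)
    # moebooru (konachan, yandere, lolibooru): bare endpoint, 1-based page parameter
    return "{}/{}?tags={}&limit={}&page={}".format(base_url, json_api, tag_query, limit, page + 1)
```

Keeping the caller-facing `page` zero-based in both branches lets the views and templates share one pagination scheme regardless of service.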