├── .gitignore ├── README.md ├── filmstrip.py ├── license.txt ├── postprocess.py ├── preprocess.py └── scene_detection.py /.gitignore: -------------------------------------------------------------------------------- 1 | output 2 | preview/static 3 | web/node_modules 4 | web/data/* 5 | videos 6 | sample-input -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Filmstrip.py 2 | 3 | ## Description 4 | 5 | Filmstrip is an OpenCV/Python based set of scripts for extracting keyframes from video. It was written to extract the data that powers the openvisconf videos visualization. 6 | 7 | It comprises scripts that perform the following functions: 8 | 9 | __Frame skipping__: Reduces the number of frames in the video by extracting one frame per second. 10 | 11 | __Region of Interest (ROI) extraction__: Allows you to specify a rectangular ROI to extract from a video, cropping the video to that region. 12 | 13 | __Scene Detection/Keyframe Selection__: Outputs a set of frames that represent transition points in the video, along with a metadata file containing data collected during the process, including the dominant colors of each keyframe. 14 | 15 | __Postprocessing__: A script to combine the metadata files from multiple videos into one. 16 | 17 | ## Status 18 | 19 | Useful but probably a little rough around the edges. I'll be hacking on this as I get time and have ideas. 20 | 21 | ## Setup 22 | 23 | Requires a working Python installation with the OpenCV package / Python bindings installed. Refer to [here](http://opencv.org/) for instructions on how to get OpenCV up and running on your system. 24 | 25 | ## Usage 26 | 27 | There are four main scripts: preprocess.py contains the frame skipping and ROI extraction, scene_detection.py contains the scene detection, postprocess.py combines the metadata files, and filmstrip.py is a batch helper.
Most print usage information when you try to run them, but here are some examples. 28 | 29 | __Frame Skip__ 30 | 31 | ``` 32 | python preprocess.py --command shrink --source /path/to/input/file --dest /path/to/output/file 33 | ``` 34 | 35 | __ROI Extraction__. x1,y1 is the top left corner coordinate of the rectangle. x2,y2 is the bottom right corner coordinate of the rectangle. 36 | 37 | Note that this step is not necessary if you do not want to crop out any part of the video. 38 | 39 | ``` 40 | python preprocess.py --command roi --source /path/to/input/file --dest /path/to/output/file --rect x1,y1,x2,y2 41 | ``` 42 | 43 | __Scene Detection__ 44 | 45 | ``` 46 | python scene_detection.py -s /path/to/input/file -d /path/to/output/folder -n identifierName -a excludeFramesBeforeThisIndex 47 | ``` 48 | 49 | -n identifierName: A name used to identify this video. It will be written into the metadata file for later cross referencing. A subfolder with this name will also be generated in the dest folder, and that is where the data will go. Generated images will use this name as a prefix. 50 | 51 | -a excludeBefore: An optional parameter. If you want to skip a certain number of frames at the beginning of the video from being considered, put the frame number here. In a video that has been frame skipped this will be equal to the seconds on the clock for when you want to start extracting frames. 52 | 53 | This will generate an output folder with subfolders that contain images at various scales. The frame number is appended to the image name. 54 | 55 | __Postprocessing__ 56 | 57 | ``` 58 | python postprocess.py -s path/to/source/folder -d path/to/dest/folder 59 | ``` 60 | 61 | Will recursively look in the source folder for the *-keyframe-meta.json files generated in the previous step, then concatenate them into one metadata.json file placed in the dest folder. Run this after processing all your videos.
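For reference, the combined metadata.json is a JSON array with one entry per video, each wrapping that video's keyframe records. A minimal sketch in Python (the field names match what the scripts write; the values here are made up for illustration):

```python
import json

# Illustrative sample of the combined metadata structure.
# Field names match the scripts' output; values are invented.
sample = [
    {
        "name": "speaker_name",
        "frames": [
            {
                "frame_number": 42,        # frame (== second) in the frame-skipped video
                "diff_count": 10934,       # changed pixels vs the previous frame
                "dominant_cols": [         # KMeans centroids as RGB, with pixel counts
                    {"count": 6120, "col": [233, 231, 228]},
                ],
            }
        ],
    }
]

# A consumer (e.g. a visualization) can read it back like this:
data = json.loads(json.dumps(sample, indent=4))
for video in data:
    keyframes = [f["frame_number"] for f in video["frames"]]
    print(video["name"], keyframes)  # prints: speaker_name [42]
```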
62 | 63 | 64 | __filmstrip.py: the helper script__ 65 | 66 | There is also a helper script __filmstrip.py__ that you can use for convenience to process a bunch of files, though it is definitely not required. 67 | 68 | Open it up and then edit the ```videos``` array to contain data about your videos. You can then run the script to batch process them. It simply calls the other scripts in order. 69 | 70 | I often rerun the scene detection step, so once the first two steps are done, I just comment out those lines and batch run the step I want repeatedly. 71 | 72 | -------------------------------------------------------------------------------- /filmstrip.py: -------------------------------------------------------------------------------- 1 | import os 2 | import sys 3 | 4 | videos = [ 5 | { 6 | "name": "speaker_name", 7 | #the paths can be whatever you want, but the containing 8 | #folders should already exist 9 | "original": "/path/to/original/video.mov", 10 | "frameSkipped": "videos/speaker_name_frameskipped.mov", 11 | "roid": "videos/speaker_name_roi.mov", 12 | "outputPath": "output/speaker_name", 13 | #top left of Region Of Interest 14 | "x1": 500, 15 | "y1": 0, 16 | #bottom right of Region Of Interest 17 | "x2": 1920, 18 | "y2": 1080, 19 | "excludeBefore": 20 #will not consider anything before 20 seconds as a keyframe 20 | } 21 | ] 22 | 23 | 24 | def processAll(videos): 25 | for video in videos: 26 | os.system("python preprocess.py --command shrink --source {0} --dest {1}".format(video["original"], video["frameSkipped"])) 27 | 28 | os.system("python preprocess.py --command roi --source {0} --dest {1} --rect {2},{3},{4},{5}" 29 | .format(video["frameSkipped"], video["roid"], video["x1"], video["y1"], video["x2"], video["y2"])) 30 | 31 | os.system("python scene_detection.py -s {0} -d {1} -n {2} -a {3}" 32 | .format(video["roid"], video["outputPath"], video["name"], video["excludeBefore"])) 33 | 34 | sys.stdout.write('.') 35 | sys.stdout.flush() 36 |
processAll(videos) 38 | 39 | #Make sure to set this to the output folder used in the video objects above 40 | #(the folder above the ones for individual speakers). 41 | outputPath = "output" 42 | os.system("python postprocess.py -s {0} -d {1}".format(outputPath, outputPath)) -------------------------------------------------------------------------------- /license.txt: -------------------------------------------------------------------------------- 1 | The MIT License (MIT) 2 | 3 | Copyright (c) 2014 Yannick Assogba 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in 13 | all copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN 21 | THE SOFTWARE.
-------------------------------------------------------------------------------- /postprocess.py: -------------------------------------------------------------------------------- 1 | import os 2 | import json 3 | import argparse 4 | 5 | def concatMeta(path): 6 | files = getMeta(path) 7 | compositeMeta = [] 8 | for fp in files: 9 | name = os.path.basename(fp).split("-")[0] 10 | meta = json.load(open(fp, 'r')) 11 | wrapped = { 12 | "name": name, 13 | "frames": meta 14 | } 15 | compositeMeta.append(wrapped) 16 | return compositeMeta 17 | 18 | def getMeta(path): 19 | metafiles = [] 20 | for root, dirs, files in os.walk(path): 21 | for f in files: 22 | if f.endswith("keyframe-meta.json"): 23 | fp = os.path.join(root, f) 24 | metafiles.append(fp) 25 | return metafiles 26 | 27 | # Combines metadata files from multiple videos into 28 | # one json file 29 | def writeMeta(source, dest): 30 | composite = concatMeta(source) 31 | data_fp = os.path.join(dest, "metadata.json") 32 | with open(data_fp, 'w') as f: 33 | data_json_str = json.dumps(composite, indent=4) 34 | f.write(data_json_str) 35 | 36 | 37 | parser = argparse.ArgumentParser() 38 | parser.add_argument('-s','--source', help='source path', required=True) 39 | parser.add_argument('-d', '--dest', help='dest path', required=True) 40 | args = parser.parse_args() 41 | print(args.source) 42 | print(args.dest) 43 | writeMeta(args.source, args.dest) 44 | -------------------------------------------------------------------------------- /preprocess.py: -------------------------------------------------------------------------------- 1 | # Utilities for preprocessing video for scene detection. 2 | # Frame skipping: creates a video that contains only one frame 3 | # for each second of the original video. 4 | # ROI extraction: creates a video that only 5 | # contains the specified region of interest.
6 | 7 | # Updated to use OpenCV 3.1.0 and Python 3.5 8 | 9 | import math 10 | import cv2 11 | import argparse 12 | 13 | def getInfo(sourcePath): 14 | cap = cv2.VideoCapture(sourcePath) 15 | info = { 16 | "framecount": cap.get(cv2.CAP_PROP_FRAME_COUNT), 17 | "fps": cap.get(cv2.CAP_PROP_FPS), 18 | "width": int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), 19 | "height": int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)), 20 | "codec": int(cap.get(cv2.CAP_PROP_FOURCC)) 21 | } 22 | cap.release() 23 | return info 24 | 25 | # 26 | # Extracts one frame for every second of video. 27 | # Effectively compresses a video down into much less data. 28 | # 29 | def extractFrames(sourcePath, destPath, verbose=False): 30 | info = getInfo(sourcePath) 31 | 32 | cap = cv2.VideoCapture(sourcePath) 33 | fourcc = cv2.VideoWriter_fourcc('m', 'p', '4', 'v') 34 | out = cv2.VideoWriter(destPath, 35 | fourcc, 36 | info["fps"], 37 | (info["width"], info["height"])) 38 | fps_int = math.ceil(info["fps"]) 39 | 40 | ret = True 41 | while(cap.isOpened() and ret): 42 | ret, frame = cap.read() 43 | frame_number = math.ceil(cap.get(cv2.CAP_PROP_POS_FRAMES) - 1) 44 | if ret and frame_number % fps_int == 0: 45 | out.write(frame) 46 | 47 | if verbose: 48 | cv2.imshow('frame', frame) 49 | 50 | if cv2.waitKey(1) & 0xFF == ord('q'): 51 | break 52 | 53 | cap.release() 54 | out.release() 55 | cv2.destroyAllWindows() 56 | 57 | 58 | # 59 | # Extracts a region of interest defined by a rect 60 | # from a video 61 | # 62 | def extractROI(sourcePath, destPath, points, verbose=False): 63 | info = getInfo(sourcePath) 64 | # x, y, width, height = cv2.boundingRect(points) 65 | 66 | # print(x, y, width, height) 67 | x = points[0][0] 68 | y = points[0][1] 69 | 70 | width = points[1][0] - points[0][0] 71 | height = points[1][1] - points[0][1] 72 | 73 | cap = cv2.VideoCapture(sourcePath) 74 | 75 | fourcc = cv2.VideoWriter_fourcc('m', 'p', '4', 'v') 76 | out = cv2.VideoWriter(destPath, 77 | fourcc, 78 | info["fps"], 79 | (width, height)) 80 | 81 |
ret = True 82 | while(cap.isOpened() and ret): 83 | ret, frame = cap.read() 84 | if frame is None: 85 | break 86 | 87 | roi = frame[y:y+height, x:x+width] 88 | 89 | out.write(roi) 90 | 91 | if verbose: 92 | cv2.imshow('frame', roi) 93 | 94 | if cv2.waitKey(1) & 0xFF == ord('q'): 95 | break 96 | 97 | cap.release() 98 | out.release() 99 | cv2.destroyAllWindows() 100 | 101 | 102 | # Groups a list into n sized tuples 103 | def group(lst, n): 104 | return list(zip(*[lst[i::n] for i in range(n)])) 105 | 106 | 107 | parser = argparse.ArgumentParser(description='Preprocess video: frame skipping and ROI extraction') 108 | 109 | parser.add_argument('--source', help='source file', required=True) 110 | parser.add_argument('--dest', help='dest file', required=True) 111 | parser.add_argument('--command', help='command to run (shrink or roi)', required=True) 112 | parser.add_argument('--rect', help='x1,y1,x2,y2', required=False) 113 | parser.add_argument('--verbose', action='store_true') 114 | parser.set_defaults(verbose=False) 115 | 116 | args = parser.parse_args() 117 | 118 | if args.verbose: 119 | info = getInfo(args.source) 120 | print("Source Info: ", info) 121 | 122 | if args.command == "shrink": 123 | extractFrames(args.source, args.dest, args.verbose) 124 | elif args.command == "roi": 125 | points = [int(x) for x in args.rect.split(",")] 126 | points = group(points, 2) 127 | extractROI(args.source, args.dest, points, args.verbose) 128 | -------------------------------------------------------------------------------- /scene_detection.py: -------------------------------------------------------------------------------- 1 | # Scripts to try and detect key frames that represent scene transitions 2 | # in a video. Has only been tried out on video of slides, so is likely not 3 | # robust for other types of video.
4 | 5 | import cv2 6 | 7 | import argparse 8 | import json 9 | import os 10 | import numpy as np 11 | import errno 12 | 13 | def getInfo(sourcePath): 14 | cap = cv2.VideoCapture(sourcePath) 15 | info = { 16 | "framecount": cap.get(cv2.CAP_PROP_FRAME_COUNT), 17 | "fps": cap.get(cv2.CAP_PROP_FPS), 18 | "width": int(cap.get(cv2.CAP_PROP_FRAME_WIDTH)), 19 | "height": int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT)), 20 | "codec": int(cap.get(cv2.CAP_PROP_FOURCC)) 21 | } 22 | cap.release() 23 | return info 24 | 25 | 26 | def scale(img, xScale, yScale): 27 | res = cv2.resize(img, None, fx=xScale, fy=yScale, interpolation=cv2.INTER_AREA) 28 | return res 29 | 30 | def resize(img, width, height): 31 | res = cv2.resize(img, (width, height), interpolation=cv2.INTER_AREA) 32 | return res 33 | 34 | # 35 | # Extract [numCols] dominant colors from an image. 36 | # Uses KMeans on the pixels and then returns the centroids 37 | # of the colors. 38 | # 39 | def extract_cols(image, numCols): 40 | # convert to np.float32 matrix that can be clustered 41 | Z = image.reshape((-1,3)) 42 | Z = np.float32(Z) 43 | 44 | # Set parameters for the clustering 45 | max_iter = 20 46 | epsilon = 1.0 47 | K = numCols 48 | criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, max_iter, epsilon) 49 | 50 | # cluster 51 | 52 | compactness, labels, centers = cv2.kmeans(data=Z, K=K, bestLabels=None, criteria=criteria, attempts=10, flags=cv2.KMEANS_RANDOM_CENTERS) 53 | 54 | clusterCounts = [] 55 | for idx in range(K): 56 | count = len(Z[labels.ravel() == idx]) 57 | clusterCounts.append(count) 58 | 59 | #Reverse the cols stored in centers because cols are stored in BGR 60 | #in opencv.
61 | rgbCenters = [] 62 | for center in centers: 63 | bgr = center.tolist() 64 | bgr.reverse() 65 | rgbCenters.append(bgr) 66 | 67 | cols = [] 68 | for i in range(K): 69 | iCol = { 70 | "count": clusterCounts[i], 71 | "col": rgbCenters[i] 72 | } 73 | cols.append(iCol) 74 | 75 | return cols 76 | 77 | 78 | # 79 | # Calculates change data from one frame to the next. 80 | # 81 | def calculateFrameStats(sourcePath, verbose=False, after_frame=0): 82 | cap = cv2.VideoCapture(sourcePath) 83 | 84 | data = { 85 | "frame_info": [] 86 | } 87 | 88 | lastFrame = None 89 | while(cap.isOpened()): 90 | ret, frame = cap.read() 91 | if frame is None: 92 | break 93 | 94 | frame_number = cap.get(cv2.CAP_PROP_POS_FRAMES) - 1 95 | 96 | # Convert to grayscale, scale down and blur to make 97 | # calculating image differences more robust to noise 98 | gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY) 99 | gray = scale(gray, 0.25, 0.25) 100 | gray = cv2.GaussianBlur(gray, (9,9), 0.0) 101 | 102 | if frame_number < after_frame: 103 | lastFrame = gray 104 | continue 105 | 106 | 107 | if lastFrame is not None: 108 | 109 | diff = cv2.subtract(gray, lastFrame) 110 | 111 | diffMag = cv2.countNonZero(diff) 112 | 113 | frame_info = { 114 | "frame_number": int(frame_number), 115 | "diff_count": int(diffMag) 116 | } 117 | data["frame_info"].append(frame_info) 118 | 119 | if verbose: 120 | cv2.imshow('diff', diff) 121 | if cv2.waitKey(1) & 0xFF == ord('q'): 122 | break 123 | 124 | # Keep a ref to this frame for differencing on the next iteration 125 | lastFrame = gray 126 | 127 | cap.release() 128 | cv2.destroyAllWindows() 129 | 130 | #compute some stats (cast numpy types so they serialize to JSON later) 131 | diff_counts = [fi["diff_count"] for fi in data["frame_info"]] 132 | data["stats"] = { 133 | "num": len(diff_counts), 134 | "min": int(np.min(diff_counts)), 135 | "max": int(np.max(diff_counts)), 136 | "mean": float(np.mean(diff_counts)), 137 | "median": float(np.median(diff_counts)), 138 | "sd": float(np.std(diff_counts)) 139 | } 140 | greater_than_mean = [fi for fi in
data["frame_info"] if fi["diff_count"] > data["stats"]["mean"]] 141 | greater_than_median = [fi for fi in data["frame_info"] if fi["diff_count"] > data["stats"]["median"]] 142 | greater_than_one_sd = [fi for fi in data["frame_info"] if fi["diff_count"] > data["stats"]["sd"] + data["stats"]["mean"]] 143 | greater_than_two_sd = [fi for fi in data["frame_info"] if fi["diff_count"] > (data["stats"]["sd"] * 2) + data["stats"]["mean"]] 144 | greater_than_three_sd = [fi for fi in data["frame_info"] if fi["diff_count"] > (data["stats"]["sd"] * 3) + data["stats"]["mean"]] 145 | 146 | data["stats"]["greater_than_mean"] = len(greater_than_mean) 147 | data["stats"]["greater_than_median"] = len(greater_than_median) 148 | data["stats"]["greater_than_one_sd"] = len(greater_than_one_sd) 149 | data["stats"]["greater_than_two_sd"] = len(greater_than_two_sd) 150 | data["stats"]["greater_than_three_sd"] = len(greater_than_three_sd) 151 | 152 | return data 153 | 154 | 155 | 156 | # 157 | # Take an image and write it out at various sizes. 158 | # 159 | # TODO: Create output directories if they do not exist.
160 | # 161 | seq_num_global = 0 162 | def writeImagePyramid(destPath, name, seqNumber, image): 163 | global seq_num_global 164 | fullPath = os.path.join(destPath, "full", name + "-" + str(seqNumber).zfill(4) + ".png") 165 | fullSeqPath = os.path.join(destPath, "fullseq", name + "-" + str(seq_num_global).zfill(4) + ".png") 166 | seq_num_global += 1 167 | #halfPath = os.path.join(destPath, "half", name + "-" + str(seqNumber).zfill(4) + ".png") 168 | #quarterPath = os.path.join(destPath, "quarter", name + "-" + str(seqNumber).zfill(4) + ".png") 169 | #eigthPath = os.path.join(destPath, "eigth", name + "-" + str(seqNumber).zfill(4) + ".png") 170 | #sixteenthPath = os.path.join(destPath, "sixteenth", name + "-" + str(seqNumber).zfill(4) + ".png") 171 | 172 | #hImage = scale(image, 0.5, 0.5) 173 | #qImage = scale(image, 0.25, 0.25) 174 | #eImage = scale(image, 0.125, 0.125) 175 | #sImage = scale(image, 0.0625, 0.0625) 176 | 177 | cv2.imwrite(fullPath, image) 178 | cv2.imwrite(fullSeqPath, image) 179 | #cv2.imwrite(halfPath, hImage) 180 | #cv2.imwrite(quarterPath, qImage) 181 | #cv2.imwrite(eigthPath, eImage) 182 | #cv2.imwrite(sixteenthPath, sImage) 183 | 184 | 185 | 186 | # 187 | # Selects a set of frames as key frames (frames that represent a significant difference in 188 | # the video i.e. potential scene changes). Key frames are selected as those frames where the 189 | # number of pixels that changed from the previous frame is more than 1.85 standard deviations 190 | # above the mean number of changed pixels across all interframe changes.
191 | # 192 | def detectScenes(sourcePath, destPath, data, name, verbose=False): 193 | destDir = os.path.join(destPath, "images") 194 | 195 | # TODO make sd multiplier externally configurable 196 | diff_threshold = (data["stats"]["sd"] * 1.85) + data["stats"]["mean"] 197 | 198 | cap = cv2.VideoCapture(sourcePath) 199 | for index, fi in enumerate(data["frame_info"]): 200 | if fi["diff_count"] < diff_threshold: 201 | continue 202 | 203 | cap.set(cv2.CAP_PROP_POS_FRAMES, fi["frame_number"]) 204 | ret, frame = cap.read() 205 | if frame is None: 206 | continue 207 | 208 | # extract dominant colors 209 | small = resize(frame, 100, 100) 210 | cols = extract_cols(small, 5) 211 | data["frame_info"][index]["dominant_cols"] = cols 212 | 213 | writeImagePyramid(destDir, name, fi["frame_number"], frame) 214 | 215 | if verbose: 216 | cv2.imshow('extract', frame) 217 | if cv2.waitKey(1) & 0xFF == ord('q'): 218 | break 219 | 220 | cap.release() 221 | cv2.destroyAllWindows() 222 | return data 223 | 224 | 225 | def makeOutputDirs(path): 226 | try: 227 | #todo this doesn't quite work like mkdirp. it will fail 228 | #if any folder along the path exists.
229 | os.makedirs(os.path.join(path, "metadata")) 230 | os.makedirs(os.path.join(path, "images", "full")) 231 | os.makedirs(os.path.join(path, "images", "fullseq")) 232 | os.makedirs(os.path.join(path, "images", "half")) 233 | os.makedirs(os.path.join(path, "images", "quarter")) 234 | os.makedirs(os.path.join(path, "images", "eigth")) 235 | os.makedirs(os.path.join(path, "images", "sixteenth")) 236 | except OSError as exc: # Python >2.5 237 | if exc.errno == errno.EEXIST and os.path.isdir(path): 238 | pass 239 | else: raise 240 | 241 | 242 | parser = argparse.ArgumentParser() 243 | 244 | parser.add_argument('-s','--source', help='source file', required=True) 245 | parser.add_argument('-d', '--dest', help='dest folder', required=True) 246 | parser.add_argument('-n', '--name', help='image sequence name', required=True) 247 | parser.add_argument('-a','--after_frame', help='after frame', default=0) 248 | parser.add_argument('-v', '--verbose', action='store_true') 249 | parser.set_defaults(verbose=False) 250 | 251 | args = parser.parse_args() 252 | 253 | if args.verbose: 254 | info = getInfo(args.source) 255 | print("Source Info: ", info) 256 | 257 | makeOutputDirs(args.dest) 258 | 259 | # Run the extraction 260 | data = calculateFrameStats(args.source, args.verbose, int(args.after_frame)) 261 | data = detectScenes(args.source, args.dest, data, args.name, args.verbose) 262 | keyframeInfo = [frame_info for frame_info in data["frame_info"] if "dominant_cols" in frame_info] 263 | 264 | # Write out the results 265 | data_fp = os.path.join(args.dest, "metadata", args.name + "-meta.json") 266 | with open(data_fp, 'w') as f: 267 | data_json_str = json.dumps(data, indent=4) 268 | f.write(data_json_str) 269 | 270 | keyframe_info_fp = os.path.join(args.dest, "metadata", args.name + "-keyframe-meta.json") 271 | with open(keyframe_info_fp, 'w') as f: 272 | data_json_str = json.dumps(keyframeInfo, indent=4) 273 | f.write(data_json_str) 274 |
--------------------------------------------------------------------------------