├── .gitignore
├── README.md
├── Railway_Tracking_main.py
├── RegistrazioneVideoTrenoVolterra.mp4
├── cascadeKM.xml
├── cascadePOLES.xml
├── fcn_filter_matching.py
├── fcns_track_railway.py
├── gray_frame_14750.png
├── image-readme
│   └── execution.png
├── lines.py
├── objectDetection.py
├── rail.py
└── test_straight_track_analysis.py
/.gitignore:
--------------------------------------------------------------------------------
1 | # Byte-compiled / optimized / DLL files
2 | __pycache__/
3 | *.py[cod]
4 | *$py.class
5 |
6 | # C extensions
7 | *.so
8 |
9 | # Distribution / packaging
10 | .Python
11 | build/
12 | develop-eggs/
13 | dist/
14 | downloads/
15 | eggs/
16 | .eggs/
17 | lib/
18 | lib64/
19 | parts/
20 | sdist/
21 | var/
22 | wheels/
23 | *.egg-info/
24 | .installed.cfg
25 | *.egg
26 | MANIFEST
27 |
28 | # PyInstaller
29 | # Usually these files are written by a python script from a template
30 | # before PyInstaller builds the exe, so as to inject date/other infos into it.
31 | *.manifest
32 | *.spec
33 |
34 | # Installer logs
35 | pip-log.txt
36 | pip-delete-this-directory.txt
37 |
38 | # Unit test / coverage reports
39 | htmlcov/
40 | .tox/
41 | .coverage
42 | .coverage.*
43 | .cache
44 | nosetests.xml
45 | coverage.xml
46 | *.cover
47 | .hypothesis/
48 | .pytest_cache/
49 |
50 | # Translations
51 | *.mo
52 | *.pot
53 |
54 | # Django stuff:
55 | *.log
56 | local_settings.py
57 | db.sqlite3
58 |
59 | # Flask stuff:
60 | instance/
61 | .webassets-cache
62 |
63 | # Scrapy stuff:
64 | .scrapy
65 |
66 | # Sphinx documentation
67 | docs/_build/
68 |
69 | # PyBuilder
70 | target/
71 |
72 | # Jupyter Notebook
73 | .ipynb_checkpoints
74 |
75 | # pyenv
76 | .python-version
77 |
78 | # celery beat schedule file
79 | celerybeat-schedule
80 |
81 | # SageMath parsed files
82 | *.sage.py
83 |
84 | # Environments
85 | .env
86 | .venv
87 | env/
88 | venv/
89 | ENV/
90 | env.bak/
91 | venv.bak/
92 |
93 | # Spyder project settings
94 | .spyderproject
95 | .spyproject
96 |
97 | # Rope project settings
98 | .ropeproject
99 |
100 | # mkdocs documentation
101 | /site
102 |
103 | # mypy
104 | .mypy_cache/
105 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | Railway detection
2 | =============
3 | The railway analysis performed by this program (in Python 2.7, using OpenCV 3.3) addresses the following tasks:
4 | * Detection of the turn markings and kilometer signs (Italian railway) by means of **HAAR
5 | cascade classifiers** previously trained with OpenCV.
6 | * Rail lines detection and tracking using **HoughLines** together with an iterative
7 | **Template matching** process.
8 | * *Focus Of Expansion* coordinates determined by computing the intersection of the
9 | straight-line approximations of the two rails. In addition, the optical flow within the
10 | bounding box enclosing the detected turn markings is evaluated for the same purpose.
11 | * Filtering of wrongly matched keypoints, exploiting the tracked *Focus Of Expansion* to evaluate their optical
12 | flow direction; accepted matches are displayed in white over the complete scene.
13 |
14 | 
15 |
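The turn markings and kilometer signs are detected with standard OpenCV HAAR cascade classifiers.
A minimal sketch of how the two cascades shipped with the repository can be loaded and applied to a
single frame is shown below (the parameters mirror the `detectMultiScale` calls in `Railway_Tracking_main.py`;
the input image is just an example):

```
import cv2

# load the trained cascades provided in the repository
pole_cascade = cv2.CascadeClassifier('cascadePOLES.xml')
km_cascade = cv2.CascadeClassifier('cascadeKM.xml')

frame = cv2.imread('gray_frame_14750.png')  # any frame of the video works here

# each call returns a list of (x, y, w, h) bounding boxes
poles_bbox = pole_cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5,
                                           minSize=(16, 33), flags=cv2.CASCADE_SCALE_IMAGE)
km_bbox = km_cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=3,
                                      minSize=(22, 107), flags=cv2.CASCADE_SCALE_IMAGE)

for (x, y, w, h) in poles_bbox:
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 255), 2)
```
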
16 | The extraction of the portion of image occupied by the rails is handled by the Template Matching algorithm.\
17 | The *Railway Extraction* algorithm processes each frame of the video, starting with the extraction of the rail lines. \
18 | In order to make the structure of the environment sufficiently visible, some preprocessing is needed to filter unwanted
19 | details out of the image. Each captured frame undergoes the following steps:
20 |
21 | * Gray scale conversion
22 | * Gaussian blur with a square kernel of size 7 and a standard deviation of 7 pixels along the x direction
23 | * Sobel along the x-axis (y-axis) of the image to make vertical (horizontal) lines more detectable; the absolute
24 | value of the resulting image is then taken, so that positive and negative gradients are weighted equally
25 | * Canny edge detection to obtain the binary image containing only the pixels recognized as edges
26 | * Hough transform to gather the lines of interest.
27 |
28 | This process operates on the lower part of the image, where straight lines are a good approximation of the imaged rails.
29 | After selecting the image patches containing the left and right rails, the line extraction is done using the OpenCV
30 | procedures for the Hough Lines transform and the Canny edge detector.
31 | ```
32 | # excerpt from Rail.extract_rail_line() in rail.py
33 | patch = frame[start_rows:end_rows, start_cols:end_cols]
34 | patch_flipped = cv2.flip(patch, 0)
35 | gray_patch = cv2.cvtColor(patch_flipped, cv2.COLOR_BGR2GRAY)
36 | blur1 = cv2.GaussianBlur(gray_patch, (7, 7), 7)
37 | sobelx2 = cv2.Sobel(blur1, cv2.CV_64F, 1, 0, ksize=3)
38 |
39 | abs_sobel64f = np.absolute(sobelx2)
40 | sobel_x = np.uint8(abs_sobel64f)
41 |
42 | rails_edges = cv2.Canny(sobel_x, canny_min, canny_max)
43 | ```
44 |
45 | In order to extract only the lines of interest, several conditions have been defined on the ρ and θ parameters of the
46 | lines returned by the OpenCV HoughLines step.
47 |
48 | The patch containing the rails in the lower image has been **flipped**, so that the guess `min_val` for the
49 | x coordinate at which the first template containing a rail element is extracted in the next frame to be
50 | processed can be computed as:
51 |
52 |     min_val = ρ / cos(θ) + full_img_scale
53 |
54 | where ρ encodes the distance from the origin of the image axes to the line, measured along its normal,
55 | and θ is the angle defining the slope of that normal.
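
As a minimal sketch of this conversion (here `start_cols` is the column offset of the lower patch within the
full image, playing the role of `full_img_scale` above; the numeric values are only illustrative):

```
import numpy as np

def line_lower_edge_x(rho, theta, start_cols):
    # a Hough line in patch coordinates satisfies x*cos(theta) + y*sin(theta) = rho;
    # since the patch is flipped, its row y = 0 is the bottom of the full image, so the
    # x-intercept at y = 0 is rho / cos(theta), shifted by the patch column offset
    return rho / np.cos(theta) + start_cols

# e.g. a candidate returned by cv2.HoughLines for the left-rail patch
min_val = line_lower_edge_x(rho=-17.0, theta=2.58, start_cols=610)  # ~630, the default left rail guess
```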
56 |
57 | The guessed position of the lower edge of these segments gets updated frame after frame; lines whose displacement
58 | along the x-axis from the expected position is too large are excluded, and the closest candidate is kept, in this fashion:
59 | ```
60 | def nearest_line(self, rho, theta, start_cols, expt_start, min_delta, min_val):
61 |
62 |     if abs(rho / np.cos(theta) + start_cols - expt_start) < min_delta:
63 |         min_delta = abs(rho / np.cos(theta) + start_cols - expt_start)
64 |         min_val = rho / np.cos(theta) + start_cols
65 |     return min_delta, min_val
66 | ```
67 |
68 | The identification of the lower part of the imaged rails allows for a good initialization of the template matching phase:
69 | for each rail, the first template is extracted according to the position guessed by the *Railway lines extraction*.
70 | A weighted correlation analysis is then performed between the template and the stripe of image immediately above it.
71 |
72 | xcorr = cv2.matchTemplate(row[0:self.h, startx:endx],
73 | tmpl,
74 | method=cv2.TM_CCOEFF_NORMED)
75 |
76 | a = 0.001*(pos*2+1) # set Lorentzian shape
77 | xcorrW = np.zeros_like(xcorr)
78 | L = []
79 | val = []
80 | val.extend(range(0, np.size(xcorr[0])))
81 |
82 | for i in range(0, np.size(xcorr,1)):
83 | L.append(1/(1 + a*pow(val[i] - MAX, 2)))
84 | xcorrW[0][i] = L[i]*xcorr[0][i]
85 |
86 |
87 | Naming:
88 | * w_F, h_F the frame width and height
89 | * w_T, h_T the template width and height
90 | * w_I = 2w_T the test image width
91 | * h_I = h_T the test image height
92 | * ulC_T the upper left corner coordinates of the template
93 | * ulC_I the upper left corner coordinates of the test image
94 | * maxcorrX the x-coordinate of the point within the test image having the highest value of correlation with the template.
95 |
96 | ulC_T = (maxcorrX, ulC_T_y - h_T) \
97 | ulC_I = (maxcorrX - w_T, ulC_T_y - 2h_T)
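
A minimal sketch of this per-iteration corner update, following the naming above (the actual code in
`rail.py` tracks the lower-left corners instead, but the geometry is the same; the numbers are only an example):

```
def update_corners(maxcorrX, ulC_T_y, w_T, h_T):
    # next template upper left corner: best-correlation x, one template height further up
    ulC_T = (maxcorrX, ulC_T_y - h_T)
    # next test image upper left corner: one template width to the left, one more height up
    ulC_I = (maxcorrX - w_T, ulC_T_y - 2 * h_T)
    return ulC_T, ulC_I

# e.g. with the default template size (35 x 15) and a correlation peak at x = 640
ulC_T, ulC_I = update_corners(maxcorrX=640, ulC_T_y=1065, w_T=35, h_T=15)
```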
98 |
99 | The current frame of the video is analyzed to reconstruct the rail profile, updating the upper left corners of both the template and the test image for a fixed number of iterations (27 templates per rail, including the initial one):
100 |
101 | ```
102 | loop = 1
103 | while loop < 27:
104 | xl, yl, Ml = railL.find_next(matchImage,
105 | loop - 1,
106 | Ml[0],
107 | plotres=watch_xcorr,
108 | weights_on=weighted_version)
109 | xr, yr, Mr = railR.find_next(matchImage,
110 | loop - 1,
111 | Mr[0],
112 | plotres=watch_xcorr,
113 | weights_on=weighted_version)
114 | railL.push((xl, yl))
115 | railR.push((xr, yr))
116 | railL.mark(frame, loop)
117 | railR.mark(frame, loop)
118 | ```
119 | The railway environment offers few geometric elements and is sometimes rich in vegetation. In these conditions wrong feature matches may occur, with the risk of building an unreliable set of point tracks across subsequent frames. Relying on the rails and the signs detected within an image, two separate estimates of the FoE coordinates can be obtained as:
120 |
121 | * the intersection of the lines approximating the lower part of the rails
122 | with a constant horizon, i.e. a fixed y coordinate in the image given by the
123 | principal point
124 | * the mean of the intersections between that horizon and the displacement directions of all
125 | matched points inside the bounding box of the turn markings (the horizon y coordinate is given by
126 | camera calibration, i.e. the principal point coordinates)
127 |
128 | When both the estimates computed in this fashion are available, the FoE position is taken as the mean of the two.
129 | The displacement of the FoE along the horizon line can be recorded and exploited to filter wrongly matched features, and may provide important motion cues; a minimal sketch of this step follows.
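
A minimal sketch of this fusion and of the flow-direction test, assuming the rail-based estimate, the per-pole
estimates and the nominal FoE are already available (the thresholds mirror the ones used in
`Railway_Tracking_main.py` and `fcn_filter_matching.py`; the function names are illustrative):

```
import numpy as np

def fuse_foe(foe_nominal, foe_x_rail, dyn_foe_xs, thresh=25):
    # keep the rail-based estimate only when it departs significantly from the nominal value
    if abs(foe_x_rail - foe_nominal[0]) <= thresh:
        foe_x_rail = foe_nominal[0]
    if len(dyn_foe_xs) > 0:
        foe_x_poles = np.mean(dyn_foe_xs)
        # average the two estimates when both are informative
        foe_x = (foe_x_rail + foe_x_poles) / 2.0 if foe_x_rail != foe_nominal[0] else foe_x_poles
    else:
        foe_x = foe_x_rail
    return (int(foe_x), foe_nominal[1])

def flow_is_consistent(foe, p_new, p_old, tol=2.0, max_disp=30):
    # a correctly matched feature moves along the line joining its old position and the FoE
    m = float(foe[1] - p_old[1]) / (foe[0] - p_old[0] + 1e-4)
    c = foe[1] - m * foe[0]
    on_line = abs(p_new[1] - m * p_new[0] - c) < tol
    small = abs(p_new[0] - p_old[0]) < max_disp and abs(p_new[1] - p_old[1]) < max_disp
    return on_line and small
```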
130 |
--------------------------------------------------------------------------------
/Railway_Tracking_main.py:
--------------------------------------------------------------------------------
1 |
2 |
3 | import cv2
4 | import numpy as np
5 | import argparse
6 | import imageio
7 | from rail import Rail
8 | from objectDetection import RailwayObjectDetection
9 | from fcns_track_railway import tmp_match_rail_lines
10 | from fcn_filter_matching import filter_features_matching
11 |
12 |
13 | # read video
14 | class ImageIOVideoCapture:
15 | def __init__(self,name):
16 | self.r = imageio.get_reader(name)
17 | def seek(self,frame_index):
18 | self.r.set_image_index(frame_index)
19 |     # imageio returns frames in RGB channel order
20 |     # OpenCV works with BGR channel order
21 | def read(self):
22 | r = self.r.get_next_data()
23 | return (r is not None,r)
24 |
25 | # define keypoint
26 | class KeypointsOnTrack:
27 | def __init__(self, kpx, kpy, col):
28 | self.kpx = kpx
29 | self.kpy = kpy
30 | self.col = col
31 |
32 | # ....... prior knowledge of the scene, computed in test_straight_track_analysis.py ........
33 |
34 | # the focus of expansion coordinates under (straight) nominal conditions; they correspond to the camera principal
35 | # point given by the calibration
36 | foe = np.array((948, 631))
37 |
38 | # line coefficients fitting left and right rail
39 | ml_rail = -1.5981132075471698
40 | mr_rail = -14.242857142857142
41 | cl_rail = 2146.2377358490567
42 | cr_rail = 14134.142857142857
43 |
44 |
45 | '''------------------------------------------------MAIN-----------------------------------------------------------'''
46 |
47 | def main():
48 |     parser = argparse.ArgumentParser(description='Railway detection and tracking.')
49 | parser.add_argument('-I',"--input",help="input file video",default="RegistrazioneVideoTrenoVolterra.mp4")
50 |     parser.add_argument('-O',"--output",help="output JSON file",default="output.json")
51 | parser.add_argument('--cascadeTurn',help="XML of cascade",default='cascadePOLES.xml')
52 | parser.add_argument('--cascadeKm', help="XML of cascade", default='cascadeKM.xml')
53 | parser.add_argument('--imgSizeX', type=int, help="image size along x", default=1920)
54 | parser.add_argument('--imgSizeY', type=int, help="image size along Y", default=1080)
55 | parser.add_argument('--tmpWidth', type=int, help="start template width", default=35)
56 | parser.add_argument('--tmpHeight', type=int, help="start template height", default=15)
57 | parser.add_argument('--startLeftRailGuess', type=int, help="Guess for determining the position within the lower part"
58 | " of the image of the left rail", default=630)
59 | parser.add_argument('--startRightRailGuess', type=int, help="Guess for determining the position within the lower part"
60 | " of the image of the right rail", default=870)
61 | parser.add_argument('--watchHoughLowerRails', type=bool, default=False)
62 | parser.add_argument("--show", type=bool, help="Show output image flow", default=True)
63 | parser.add_argument('--seekframe',default=0,type=int,help="seek in frames")
64 |
65 | args = parser.parse_args()
66 |
67 | # ---------------------------------------------INITIALIZATION--------------------------------------------
68 |
69 | cap = ImageIOVideoCapture(args.input)
70 | if args.seekframe != 0:
71 | cap.seek(args.seekframe)
72 |
73 | firstFrame = True
74 |
75 | # frame ID
76 | count = 0
77 |
78 | # init template dimensions for railway tracking
79 | w = args.tmpWidth
80 | h = args.tmpHeight
81 |
82 | # image size
83 | sizeX = args.imgSizeX # width
84 | sizeY = args.imgSizeY # height
85 |
86 | # init expected coordinates for template lower left corner
87 | MAXl = []
88 |
89 | # initialization for line detection using Hough
90 | expt_startLeft = args.startLeftRailGuess
91 | expt_startRight = args.startRightRailGuess
92 |
93 | # initialize feature matcher
94 | orb = cv2.ORB_create()
95 | bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
96 |
97 |
98 | # create railway detection object
99 | railway_objects = RailwayObjectDetection(ml_rail, cl_rail, mr_rail, cr_rail, foe)
100 |
101 | # initialization of lists to contain matched features across two consecutive frames
102 | kps_on_track = []
103 | kps_on_track_old = []
104 |
105 | kps_on_track_empty = 1
106 |
107 | # list to track dynamic Focus of Expansion displacement along x
108 | dynFOEx = []
109 |
110 | # init cascade classifier for (white) turn poles detection
111 | palb_cascade = cv2.CascadeClassifier(args.cascadeTurn)
112 |
113 | # init cascade classifier for kilometer sign detection
114 | km_cascade = cv2.CascadeClassifier(args.cascadeKm)
115 |
116 | # lists to contain matched features on detected objects
117 | pole_points = []
118 | pole_oldpoints = []
119 |
120 | km_points = []
121 | km_old_points = []
122 |
123 | left_edges = []
124 | right_edges = []
125 |
126 | left_xseq = []
127 | left_yseq = []
128 | right_xseq = []
129 | right_yseq = []
130 |
131 | # view patch with rail lines detected using HoughLines
132 | if args.watchHoughLowerRails == True:
133 | watch_rails = 1
134 | else:
135 | watch_rails = 0
136 |
137 | # view correlation result along x (1)
138 | watch_xcorr = 0
139 |
140 | # activate Lorentzian curve weighting (1)
141 | weighted_version = 1
142 |
143 | # Next frame availability
144 | r = True
145 |
146 | while(r == True):
147 | r, frame = cap.read()
148 |
149 | if frame is None:
150 | break
151 |
152 | # reset lists of matched point features for each processed frame
153 | pole_points[:] = []
154 | pole_oldpoints[:] = []
155 |
156 | # reset lists of matched point features on km signs for each processed frame
157 | km_points[:] = []
158 | km_old_points[:] = []
159 |
160 | matchImage = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
161 |
162 | # extract ORB features
163 | kp, des = orb.detectAndCompute(matchImage, None)
164 |
165 | # INIT HAAR CASCADE CLASSIFIERS
166 | poles_bbox = palb_cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=5, minSize=(16, 33),
167 | flags=cv2.CASCADE_SCALE_IMAGE)
168 |
169 | km_bbox = km_cascade.detectMultiScale(frame, scaleFactor=1.1, minNeighbors=3, minSize=(22, 107),
170 | flags=cv2.CASCADE_SCALE_IMAGE)
171 |
172 | if not firstFrame:
173 |
174 | # brute force feature matching
175 | matches = bf.match(des_old, des)
176 | matches = sorted(matches, key=lambda x: x.distance)
177 |
178 | # init array to store FOE measurements for each new frame
179 | del dynFOEx[:]
180 |
181 | if kps_on_track_empty == 0:
182 | del kps_on_track[:]
183 |
184 | # TEMPLATE MATCHING OF RAILS
185 | # init rail objects to start template-match detection
186 | railL = Rail(int(expt_startLeft) - 7, sizeY, w, h)
187 | railR = Rail(int(expt_startRight) - 7, sizeY, w, h)
188 |
189 |             # handling of left and right rail lines detection within a portion of the image,
190 | # expt_startLeft is the expected position of the lower edge of the line segment fitting the rail
191 | # expressed in full image coordinates
192 | eval_frameL, start_colL, end_colL, expt_startLeft = railL.extract_rail_line(frame,
193 | start_rows=1000,
194 | end_rows=np.size(
195 | frame, 0),
196 | start_cols=610,
197 | end_cols=1000,
198 | canny_min=40,
199 | canny_max=140,
200 | hough_I=45,
201 | length=400,
202 | theta_min=2.5,
203 | theta_max=2.65,
204 | expt_start=expt_startLeft,
205 | max_displacement=15,
206 | )
207 |
208 | eval_frameR, start_colR, end_colR, expt_startRight = railR.extract_rail_line(frame,
209 | start_rows=1000,
210 | end_rows=1080,
211 | start_cols=600,
212 | end_cols=950,
213 | canny_min=40,
214 | canny_max=200,
215 | hough_I=40,
216 | length=100,
217 | theta_min=2.9,
218 | theta_max=3.1,
219 | expt_start=expt_startRight,
220 | max_displacement=20,
221 | )
222 |
223 | # view the portion of image where rail lines have been detected
224 | railL.patch_wRails(watch_rails, eval_frameL, eval_frameR)
225 |
226 | newFOEx_rail, newFOEy_rail, ml, cl, mr, cr, left_edges, right_edges = tmp_match_rail_lines(w, frame, matchImage, weighted_version, watch_xcorr, MAXl, railL, railR, left_edges, left_xseq, left_yseq, right_edges, right_xseq, right_yseq)
227 |
228 | #----------------------------------------POINT FEATURE MATCHING--------------------------------------------
229 | for k in range(0, len(matches)):
230 |
231 | q = matches[k].queryIdx # old frame
232 | t = matches[k].trainIdx # new frame
233 |
234 | # keypoints coordinates p1 in current frame
235 | x = kp[t].pt[0]
236 | y = kp[t].pt[1]
237 |
238 | # keypoints coordinates p2 in previous frame
239 | x_old = kp_old[q].pt[0]
240 | y_old = kp_old[q].pt[1]
241 |
242 | # CHECK PRESENCE OF TURN POLES
243 |                 # list the focus of expansion x coordinates, each computed from the displacement of a feature within
244 |                 # a bounding box containing a pole
245 | dynFOEx, pole_points, pole_oldpoints = railway_objects.check_for_turn(poles_bbox, poles_bb_old, x, y, x_old, y_old, frame, mask, dynFOEx, pole_points, pole_oldpoints)
246 |
247 | for (xv, yv, wi, he) in km_bbox:
248 | cv2.rectangle(frame, (xv, yv), (xv + wi, yv + he), (0, 0, 255), 2)
249 | km_points, km_old_points = railway_objects.get_km_features(km_bbox, km_bb_old, x, y, x_old, y_old,
250 | km_points, km_old_points)
251 |
252 | # POINT FEATURES FILTERING ACCORDING TO THE OPTICAL FLOW
253 | filter_features_matching(foenew, x, x_old, y, y_old, mask)
254 |
255 | # Focus of Expansion estimate given by the intersection of the lines fitting the rails.
256 | # The Focus of Expansion coordinates are updated according to a threshold check
257 | thresh_foe = 25
258 | if abs(newFOEx_rail - foe[0]) > thresh_foe:
259 |
260 | cv2.line(frame, (newFOEx_rail, newFOEy_rail), (left_edges[0][0], left_edges[0][1]), (0, 0, 255), 1)
261 | cv2.line(frame, (newFOEx_rail, newFOEy_rail), (right_edges[0][0], right_edges[0][1]),
262 | (0, 0, 255), 1)
263 |
264 | cv2.drawMarker(frame, (newFOEx_rail, newFOEy_rail), (0, 0, 255),
265 | markerType=cv2.MARKER_CROSS,
266 | markerSize=20,
267 | thickness=2,
268 | line_type=cv2.LINE_AA)
269 |
270 | else:
271 | newFOEx_rail = foe[0]
272 |
273 |             # the previous value of foenew is updated when turn poles are detected in the frame
274 | if len(poles_bbox) > 0 and len(dynFOEx) > 0:
275 |
276 | newFOEx_poles = np.mean(dynFOEx, dtype=int)
277 |
278 | # update of the FOE coordinates also determined by the rail lines when different from nominal value
279 | if abs(newFOEx_rail - foe[0]) > 0:
280 | newFOEx = (newFOEx_rail + newFOEx_poles) / 2
281 | else:
282 | newFOEx = newFOEx_poles
283 |
284 | foenew = np.array((int(newFOEx), foe[1]))
285 |
286 | # dynamic FOE (turn condition) is represented as a blue cross
287 | cv2.drawMarker(frame, (newFOEx_poles, foe[1]), (0, 255, 255),
288 | markerType=cv2.MARKER_CROSS,
289 | markerSize=20,
290 | thickness=2,
291 | line_type=cv2.LINE_AA)
292 |
293 | # when the difference between dynamically computed FOE under turn condition and nominal FOE is below a threshold
294 | # it is assumed to be under straight track condition
295 | elif abs(foenew[0] - foe[0]) < thresh_foe:
296 | foenew = foe
297 |
298 | if args.show and watch_rails == 0:
299 | # nominal FOE (straight condition) is represented as a green cross
300 | cv2.drawMarker(frame, (foe[0], foe[1]), (0, 255, 0),
301 | markerType=cv2.MARKER_CROSS,
302 | markerSize=20,
303 | thickness=2,
304 | line_type=cv2.LINE_AA)
305 |
306 | #---------------------------DRAW matched keypoints on frame--------------------------------------------
307 |
308 | frame = cv2.drawKeypoints(frame, kp, frame)
309 |
310 | frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
311 |
312 | frameC = cv2.add(frame, mask)
313 |
314 | frame_small = cv2.resize(frameC, (int(0.9 *sizeX), int(0.9 * sizeY)))
315 | cv2.imshow('frame', frame_small)
316 |
317 | else:
318 |             # on the first frame, initialise the dynamic FOE position to the nominal one
319 | foenew = foe
320 | firstFrame = False
321 |
322 | old_frame = frame
323 | kp_old = kp
324 | des_old = des
325 | poles_bb_old = poles_bbox[:]
326 | km_bb_old = km_bbox[:]
327 |
328 | del kps_on_track_old[:]
329 |
330 | mask = np.zeros_like(old_frame)
331 |
332 | # update frame id
333 | count = count + 1
334 |
335 |
336 | if __name__ == '__main__':
337 | main()
--------------------------------------------------------------------------------
/RegistrazioneVideoTrenoVolterra.mp4:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/federicafioretti/railway-detection/79db5cd150d4eaabeb751aab13b5eaca8727eeda/RegistrazioneVideoTrenoVolterra.mp4
--------------------------------------------------------------------------------
/cascadeKM.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | BOOST
5 | HAAR
6 | 107
7 | 22
8 |
9 | GAB
10 | 9.9500000476837158e-01
11 | 5.0000000745058060e-02
12 | 9.4999999999999996e-01
13 | 1
14 | 100
15 |
16 | 0
17 | 1
18 | BASIC
19 | 10
20 |
21 |
22 | <_>
23 | 3
24 | 5.5777359008789062e-01
25 |
26 | <_>
27 |
28 | 0 -1 18 -1.0608711745589972e-03
29 |
30 | 6.9230771064758301e-01 -9.3258428573608398e-01
31 | <_>
32 |
33 | 0 -1 20 1.7488470301032066e-02
34 |
35 | -7.6742357015609741e-01 9.6654999256134033e-01
36 | <_>
37 |
38 | 0 -1 0 -2.1813474595546722e-01
39 |
40 | -1. 6.3288944959640503e-01
41 |
42 | <_>
43 | 3
44 | -3.7085986137390137e-01
45 |
46 | <_>
47 |
48 | 0 -1 17 6.0602445155382156e-02
49 |
50 | -7.0909088850021362e-01 1.
51 | <_>
52 |
53 | 0 -1 31 1.6444794833660126e-02
54 |
55 | -6.6176891326904297e-01 9.4484537839889526e-01
56 | <_>
57 |
58 | 0 -1 41 -4.1383407078683376e-03
59 |
60 | 1. -5.0409549474716187e-01
61 |
62 | <_>
63 | 4
64 | 1.1381044983863831e-01
65 |
66 | <_>
67 |
68 | 0 -1 7 -4.6327300369739532e-03
69 |
70 | 9.3333333730697632e-01 -6.7567569017410278e-01
71 | <_>
72 |
73 | 0 -1 38 -6.1862561851739883e-03
74 |
75 | 9.1163086891174316e-01 -5.5681461095809937e-01
76 | <_>
77 |
78 | 0 -1 1 5.4589021950960159e-02
79 |
80 | -5.2797722816467285e-01 7.6615852117538452e-01
81 | <_>
82 |
83 | 0 -1 34 1.4361535431817174e-03
84 |
85 | -8.8830327987670898e-01 5.8932965993881226e-01
86 |
87 | <_>
88 | 4
89 | 4.9652673304080963e-02
90 |
91 | <_>
92 |
93 | 0 -1 40 -7.1441546082496643e-02
94 |
95 | 9.2857140302658081e-01 -6.4601767063140869e-01
96 | <_>
97 |
98 | 0 -1 13 -6.7420266568660736e-03
99 |
100 | 8.1154751777648926e-01 -5.5665498971939087e-01
101 | <_>
102 |
103 | 0 -1 22 -1.3209908502176404e-04
104 |
105 | -9.4014996290206909e-01 4.6922922134399414e-01
106 | <_>
107 |
108 | 0 -1 45 3.2637217082083225e-03
109 |
110 | -5.8510637283325195e-01 8.4958916902542114e-01
111 |
112 | <_>
113 | 5
114 | 8.4912300109863281e-02
115 |
116 | <_>
117 |
118 | 0 -1 21 2.3325014859437943e-02
119 |
120 | -6.5454542636871338e-01 8.0645161867141724e-01
121 | <_>
122 |
123 | 0 -1 29 -1.8673384329304099e-04
124 |
125 | -9.7293400764465332e-01 3.8735374808311462e-01
126 | <_>
127 |
128 | 0 -1 8 -3.5464754328131676e-03
129 |
130 | 7.6239603757858276e-01 -5.7027262449264526e-01
131 | <_>
132 |
133 | 0 -1 25 2.3532914929091930e-04
134 |
135 | 4.8036187887191772e-01 -8.9876848459243774e-01
136 | <_>
137 |
138 | 0 -1 2 -3.2184965675696731e-04
139 |
140 | -8.8454282283782959e-01 4.8847642540931702e-01
141 |
142 | <_>
143 | 5
144 | 2.4679031968116760e-01
145 |
146 | <_>
147 |
148 | 0 -1 24 3.1731382012367249e-02
149 |
150 | -6.3157892227172852e-01 9.2592591047286987e-01
151 | <_>
152 |
153 | 0 -1 26 -1.4396288897842169e-03
154 |
155 | -1. 3.9096161723136902e-01
156 | <_>
157 |
158 | 0 -1 36 1.6274552792310715e-02
159 |
160 | -4.7036412358283997e-01 9.5793855190277100e-01
161 | <_>
162 |
163 | 0 -1 30 -8.6432672105729580e-05
164 |
165 | -1. 4.4913664460182190e-01
166 | <_>
167 |
168 | 0 -1 5 -4.4759118463844061e-04
169 |
170 | 5.0863516330718994e-01 -8.1546068191528320e-01
171 |
172 | <_>
173 | 5
174 | 2.1593673527240753e-01
175 |
176 | <_>
177 |
178 | 0 -1 32 1.0342471301555634e-02
179 |
180 | -7.3737370967864990e-01 6.1904764175415039e-01
181 | <_>
182 |
183 | 0 -1 6 -8.4186457097530365e-03
184 |
185 | 9.5874053239822388e-01 -4.1824609041213989e-01
186 | <_>
187 |
188 | 0 -1 9 -2.6915648486465216e-03
189 |
190 | -9.2771923542022705e-01 4.2300626635551453e-01
191 | <_>
192 |
193 | 0 -1 44 -1.3689289335161448e-04
194 |
195 | -7.0744031667709351e-01 5.3360557556152344e-01
196 | <_>
197 |
198 | 0 -1 4 9.7721666097640991e-03
199 |
200 | 4.5262902975082397e-01 -9.6204185485839844e-01
201 |
202 | <_>
203 | 6
204 | -2.7141535282135010e-01
205 |
206 | <_>
207 |
208 | 0 -1 3 -8.6491461843252182e-03
209 |
210 | 8.1818181276321411e-01 -6.8518519401550293e-01
211 | <_>
212 |
213 | 0 -1 15 -2.3861031513661146e-03
214 |
215 | 8.1175678968429565e-01 -4.7074285149574280e-01
216 | <_>
217 |
218 | 0 -1 46 5.2031123777851462e-04
219 |
220 | 4.3237483501434326e-01 -9.2506545782089233e-01
221 | <_>
222 |
223 | 0 -1 37 -2.6136089581996202e-04
224 |
225 | 6.2089288234710693e-01 -5.7195115089416504e-01
226 | <_>
227 |
228 | 0 -1 16 -2.6865174731938168e-05
229 |
230 | -8.0307334661483765e-01 4.4342371821403503e-01
231 | <_>
232 |
233 | 0 -1 28 -1.1523407883942127e-03
234 |
235 | 6.8735533952713013e-01 -5.3723812103271484e-01
236 |
237 | <_>
238 | 7
239 | 2.1747803688049316e-01
240 |
241 | <_>
242 |
243 | 0 -1 43 1.9207518547773361e-02
244 |
245 | -6.0000002384185791e-01 8.4615385532379150e-01
246 | <_>
247 |
248 | 0 -1 11 2.3088006855687127e-05
249 |
250 | 3.6979791522026062e-01 -8.3088171482086182e-01
251 | <_>
252 |
253 | 0 -1 14 -3.8167357444763184e-02
254 |
255 | 6.3025063276290894e-01 -5.2722448110580444e-01
256 | <_>
257 |
258 | 0 -1 12 -8.0065161455422640e-04
259 |
260 | 6.1992144584655762e-01 -5.7844662666320801e-01
261 | <_>
262 |
263 | 0 -1 23 2.8529240807984024e-05
264 |
265 | 5.2726632356643677e-01 -7.4685031175613403e-01
266 | <_>
267 |
268 | 0 -1 33 2.4340990930795670e-03
269 |
270 | -5.0229775905609131e-01 7.6695442199707031e-01
271 | <_>
272 |
273 | 0 -1 19 -2.4060341529548168e-03
274 |
275 | 4.4665607810020447e-01 -9.0023165941238403e-01
276 |
277 | <_>
278 | 5
279 | -6.8737161159515381e-01
280 |
281 | <_>
282 |
283 | 0 -1 27 1.1573245748877525e-02
284 |
285 | -6.4814811944961548e-01 6.9696968793869019e-01
286 | <_>
287 |
288 | 0 -1 10 -2.3499732837080956e-02
289 |
290 | -8.3576589822769165e-01 4.5300480723381042e-01
291 | <_>
292 |
293 | 0 -1 42 -2.8597817290574312e-03
294 |
295 | 7.5713366270065308e-01 -5.2664506435394287e-01
296 | <_>
297 |
298 | 0 -1 39 -1.0334907710785046e-04
299 |
300 | -8.3453041315078735e-01 5.1164770126342773e-01
301 | <_>
302 |
303 | 0 -1 35 5.6038505863398314e-04
304 |
305 | -5.3357803821563721e-01 7.9608941078186035e-01
306 |
307 | <_>
308 |
309 | <_>
310 | 0 9 16 96 -1.
311 | <_>
312 | 0 57 16 48 2.
313 | 0
314 | <_>
315 |
316 | <_>
317 | 1 5 12 101 -1.
318 | <_>
319 | 7 5 6 101 2.
320 | 0
321 | <_>
322 |
323 | <_>
324 | 1 11 1 9 -1.
325 | <_>
326 | 1 14 1 3 3.
327 | 0
328 | <_>
329 |
330 | <_>
331 | 1 47 14 10 -1.
332 | <_>
333 | 1 47 7 5 2.
334 | <_>
335 | 8 52 7 5 2.
336 | 0
337 | <_>
338 |
339 | <_>
340 | 1 62 4 32 -1.
341 | <_>
342 | 1 78 4 16 2.
343 | 0
344 | <_>
345 |
346 | <_>
347 | 2 65 20 26 -1.
348 | <_>
349 | 2 65 10 13 2.
350 | <_>
351 | 12 78 10 13 2.
352 | 0
353 | <_>
354 |
355 | <_>
356 | 3 5 3 21 -1.
357 | <_>
358 | 4 5 1 21 3.
359 | 0
360 | <_>
361 |
362 | <_>
363 | 3 15 3 15 -1.
364 | <_>
365 | 4 15 1 15 3.
366 | 0
367 | <_>
368 |
369 | <_>
370 | 3 18 3 11 -1.
371 | <_>
372 | 4 18 1 11 3.
373 | 0
374 | <_>
375 |
376 | <_>
377 | 3 36 12 30 -1.
378 | <_>
379 | 3 36 6 15 2.
380 | <_>
381 | 9 51 6 15 2.
382 | 0
383 | <_>
384 |
385 | <_>
386 | 4 11 8 36 -1.
387 | <_>
388 | 4 29 8 18 2.
389 | 0
390 | <_>
391 |
392 | <_>
393 | 4 54 8 2 -1.
394 | <_>
395 | 4 55 8 1 2.
396 | 0
397 | <_>
398 |
399 | <_>
400 | 4 85 18 11 -1.
401 | <_>
402 | 10 85 6 11 3.
403 | 0
404 | <_>
405 |
406 | <_>
407 | 5 14 6 24 -1.
408 | <_>
409 | 8 14 3 24 2.
410 | 0
411 | <_>
412 |
413 | <_>
414 | 5 16 16 36 -1.
415 | <_>
416 | 5 28 16 12 3.
417 | 0
418 | <_>
419 |
420 | <_>
421 | 5 35 5 6 -1.
422 | <_>
423 | 5 37 5 2 3.
424 | 0
425 | <_>
426 |
427 | <_>
428 | 6 84 10 9 -1.
429 | <_>
430 | 6 87 10 3 3.
431 | 0
432 | <_>
433 |
434 | <_>
435 | 7 1 15 89 -1.
436 | <_>
437 | 12 1 5 89 3.
438 | 0
439 | <_>
440 |
441 | <_>
442 | 7 14 6 20 -1.
443 | <_>
444 | 7 24 6 10 2.
445 | 0
446 | <_>
447 |
448 | <_>
449 | 7 16 4 24 -1.
450 | <_>
451 | 7 28 4 12 2.
452 | 0
453 | <_>
454 |
455 | <_>
456 | 7 20 15 31 -1.
457 | <_>
458 | 12 20 5 31 3.
459 | 0
460 | <_>
461 |
462 | <_>
463 | 7 42 7 15 -1.
464 | <_>
465 | 7 47 7 5 3.
466 | 0
467 | <_>
468 |
469 | <_>
470 | 7 43 6 3 -1.
471 | <_>
472 | 9 43 2 3 3.
473 | 0
474 | <_>
475 |
476 | <_>
477 | 7 46 6 12 -1.
478 | <_>
479 | 7 46 3 6 2.
480 | <_>
481 | 10 52 3 6 2.
482 | 0
483 | <_>
484 |
485 | <_>
486 | 7 65 10 18 -1.
487 | <_>
488 | 7 71 10 6 3.
489 | 0
490 | <_>
491 |
492 | <_>
493 | 7 90 9 3 -1.
494 | <_>
495 | 10 90 3 3 3.
496 | 0
497 | <_>
498 |
499 | <_>
500 | 8 50 6 19 -1.
501 | <_>
502 | 10 50 2 19 3.
503 | 0
504 | <_>
505 |
506 | <_>
507 | 8 55 5 12 -1.
508 | <_>
509 | 8 59 5 4 3.
510 | 0
511 | <_>
512 |
513 | <_>
514 | 9 34 6 70 -1.
515 | <_>
516 | 11 34 2 70 3.
517 | 0
518 | <_>
519 |
520 | <_>
521 | 9 44 4 1 -1.
522 | <_>
523 | 11 44 2 1 2.
524 | 0
525 | <_>
526 |
527 | <_>
528 | 9 76 6 2 -1.
529 | <_>
530 | 11 76 2 2 3.
531 | 0
532 | <_>
533 |
534 | <_>
535 | 10 13 8 25 -1.
536 | <_>
537 | 14 13 4 25 2.
538 | 0
539 | <_>
540 |
541 | <_>
542 | 10 66 6 15 -1.
543 | <_>
544 | 10 71 6 5 3.
545 | 0
546 | <_>
547 |
548 | <_>
549 | 11 9 6 7 -1.
550 | <_>
551 | 13 9 2 7 3.
552 | 0
553 | <_>
554 |
555 | <_>
556 | 11 16 10 20 -1.
557 | <_>
558 | 11 16 5 10 2.
559 | <_>
560 | 16 26 5 10 2.
561 | 0
562 | <_>
563 |
564 | <_>
565 | 11 17 2 8 -1.
566 | <_>
567 | 11 21 2 4 2.
568 | 0
569 | <_>
570 |
571 | <_>
572 | 12 15 6 17 -1.
573 | <_>
574 | 15 15 3 17 2.
575 | 0
576 | <_>
577 |
578 | <_>
579 | 12 24 3 6 -1.
580 | <_>
581 | 13 24 1 6 3.
582 | 0
583 | <_>
584 |
585 | <_>
586 | 13 49 3 12 -1.
587 | <_>
588 | 13 53 3 4 3.
589 | 0
590 | <_>
591 |
592 | <_>
593 | 13 64 2 12 -1.
594 | <_>
595 | 13 64 1 6 2.
596 | <_>
597 | 14 70 1 6 2.
598 | 0
599 | <_>
600 |
601 | <_>
602 | 14 8 8 85 -1.
603 | <_>
604 | 18 8 4 85 2.
605 | 0
606 | <_>
607 |
608 | <_>
609 | 14 13 3 38 -1.
610 | <_>
611 | 15 13 1 38 3.
612 | 0
613 | <_>
614 |
615 | <_>
616 | 14 23 6 68 -1.
617 | <_>
618 | 16 23 2 68 3.
619 | 0
620 | <_>
621 |
622 | <_>
623 | 14 57 6 38 -1.
624 | <_>
625 | 14 57 3 19 2.
626 | <_>
627 | 17 76 3 19 2.
628 | 0
629 | <_>
630 |
631 | <_>
632 | 15 15 1 14 -1.
633 | <_>
634 | 15 22 1 7 2.
635 | 0
636 | <_>
637 |
638 | <_>
639 | 16 20 2 49 -1.
640 | <_>
641 | 17 20 1 49 2.
642 | 0
643 | <_>
644 |
645 | <_>
646 | 19 28 2 16 -1.
647 | <_>
648 | 19 28 1 8 2.
649 | <_>
650 | 20 36 1 8 2.
651 | 0
652 |
653 |
--------------------------------------------------------------------------------
/cascadePOLES.xml:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 | BOOST
5 | HAAR
6 | 33
7 | 16
8 |
9 | GAB
10 | 9.9500000476837158e-01
11 | 2.0000000298023224e-01
12 | 9.4999999999999996e-01
13 | 1
14 | 100
15 |
16 | 0
17 | 1
18 | BASIC
19 | 10
20 |
21 |
22 | <_>
23 | 2
24 | -1.9631986320018768e-01
25 |
26 | <_>
27 |
28 | 0 -1 40 -8.2916475832462311e-02
29 |
30 | 8.8095235824584961e-01 -9.0960454940795898e-01
31 | <_>
32 |
33 | 0 -1 38 1.5512448735535145e-02
34 |
35 | -9.4380038976669312e-01 7.1328467130661011e-01
36 |
37 | <_>
38 | 4
39 | -2.3516750335693359e-01
40 |
41 | <_>
42 |
43 | 0 -1 4 7.7651053667068481e-02
44 |
45 | -9.2638039588928223e-01 6.5306121110916138e-01
46 | <_>
47 |
48 | 0 -1 46 -2.5727821514010429e-02
49 |
50 | 7.3648387193679810e-01 -8.3733427524566650e-01
51 | <_>
52 |
53 | 0 -1 47 -7.9304426908493042e-03
54 |
55 | 7.0928037166595459e-01 -8.4525537490844727e-01
56 | <_>
57 |
58 | 0 -1 10 4.8116378486156464e-02
59 |
60 | -7.3902332782745361e-01 8.1926679611206055e-01
61 |
62 | <_>
63 | 4
64 | -6.0333615541458130e-01
65 |
66 | <_>
67 |
68 | 0 -1 25 -6.2012005597352982e-02
69 |
70 | 7.5903612375259399e-01 -8.4269660711288452e-01
71 | <_>
72 |
73 | 0 -1 29 1.2834814190864563e-01
74 |
75 | -8.5490739345550537e-01 4.9545794725418091e-01
76 | <_>
77 |
78 | 0 -1 1 2.2917358204722404e-02
79 |
80 | -7.5203251838684082e-01 5.5926269292831421e-01
81 | <_>
82 |
83 | 0 -1 16 -4.4460915029048920e-02
84 |
85 | 5.5839723348617554e-01 -8.1536012887954712e-01
86 |
87 | <_>
88 | 5
89 | -7.6566964387893677e-01
90 |
91 | <_>
92 |
93 | 0 -1 0 3.8602843880653381e-02
94 |
95 | -6.5686273574829102e-01 8.2456141710281372e-01
96 | <_>
97 |
98 | 0 -1 31 6.5482616424560547e-02
99 |
100 | -6.4050728082656860e-01 7.2351199388504028e-01
101 | <_>
102 |
103 | 0 -1 20 8.0520778894424438e-02
104 |
105 | -8.5438859462738037e-01 5.3443443775177002e-01
106 | <_>
107 |
108 | 0 -1 12 1.4114782214164734e-01
109 |
110 | -4.7826105356216431e-01 8.5395812988281250e-01
111 | <_>
112 |
113 | 0 -1 9 -1.2781243771314621e-02
114 |
115 | 4.7552701830863953e-01 -8.0117982625961304e-01
116 |
117 | <_>
118 | 6
119 | -5.6024551391601562e-01
120 |
121 | <_>
122 |
123 | 0 -1 21 -1.0791167616844177e-02
124 |
125 | 4.8235294222831726e-01 -7.2727274894714355e-01
126 | <_>
127 |
128 | 0 -1 14 1.1584741063416004e-02
129 |
130 | -8.8864427804946899e-01 3.3060529828071594e-01
131 | <_>
132 |
133 | 0 -1 30 1.1510930955410004e-01
134 |
135 | -4.9974140524864197e-01 7.1203064918518066e-01
136 | <_>
137 |
138 | 0 -1 17 -5.1387893036007881e-03
139 |
140 | 7.7718663215637207e-01 -4.4298574328422546e-01
141 | <_>
142 |
143 | 0 -1 41 1.5385414008051157e-03
144 |
145 | 4.4602119922637939e-01 -9.7146689891815186e-01
146 | <_>
147 |
148 | 0 -1 5 -4.6191565692424774e-02
149 |
150 | 5.4221439361572266e-01 -6.8132847547531128e-01
151 |
152 | <_>
153 | 5
154 | -2.5527116656303406e-01
155 |
156 | <_>
157 |
158 | 0 -1 35 -3.3959880471229553e-02
159 |
160 | 5.9420287609100342e-01 -6.6666668653488159e-01
161 | <_>
162 |
163 | 0 -1 28 -1.7966115847229958e-03
164 |
165 | 4.4627162814140320e-01 -7.5138747692108154e-01
166 | <_>
167 |
168 | 0 -1 42 2.3133262991905212e-02
169 |
170 | -7.4616134166717529e-01 4.0926149487495422e-01
171 | <_>
172 |
173 | 0 -1 7 6.8800607696175575e-03
174 |
175 | -8.7785410881042480e-01 3.5012274980545044e-01
176 | <_>
177 |
178 | 0 -1 37 -3.5859044641256332e-02
179 |
180 | 4.3371647596359253e-01 -7.1254706382751465e-01
181 |
182 | <_>
183 | 5
184 | -3.4162163734436035e-01
185 |
186 | <_>
187 |
188 | 0 -1 15 2.0120451226830482e-02
189 |
190 | -6.3350784778594971e-01 4.8571428656578064e-01
191 | <_>
192 |
193 | 0 -1 45 -8.4364607930183411e-02
194 |
195 | 4.0903842449188232e-01 -7.1066695451736450e-01
196 | <_>
197 |
198 | 0 -1 8 4.0885019116103649e-03
199 |
200 | -6.3315701484680176e-01 4.6811246871948242e-01
201 | <_>
202 |
203 | 0 -1 19 5.1627750508487225e-03
204 |
205 | 3.0480432510375977e-01 -9.2035341262817383e-01
206 | <_>
207 |
208 | 0 -1 26 3.8290657103061676e-02
209 |
210 | -9.5412927865982056e-01 3.3557197451591492e-01
211 |
212 | <_>
213 | 5
214 | -6.8950128555297852e-01
215 |
216 | <_>
217 |
218 | 0 -1 24 -5.1184780895709991e-03
219 |
220 | 3.5483869910240173e-01 -7.1428573131561279e-01
221 | <_>
222 |
223 | 0 -1 18 -1.4798028860241175e-03
224 |
225 | 6.3525831699371338e-01 -5.6104469299316406e-01
226 | <_>
227 |
228 | 0 -1 23 -3.9772200398147106e-04
229 |
230 | 6.8185776472091675e-01 -4.7443538904190063e-01
231 | <_>
232 |
233 | 0 -1 44 5.3838500753045082e-03
234 |
235 | -8.4998953342437744e-01 4.2481425404548645e-01
236 | <_>
237 |
238 | 0 -1 36 5.0226677209138870e-02
239 |
240 | -4.3367415666580200e-01 7.7155590057373047e-01
241 |
242 | <_>
243 | 6
244 | -9.7213852405548096e-01
245 |
246 | <_>
247 |
248 | 0 -1 39 -1.9401017576456070e-02
249 |
250 | 2.4561403691768646e-01 -7.8231292963027954e-01
251 | <_>
252 |
253 | 0 -1 27 7.1002678014338017e-03
254 |
255 | 2.5955617427825928e-01 -9.2823147773742676e-01
256 | <_>
257 |
258 | 0 -1 43 -1.5980920579750091e-04
259 |
260 | -9.4922071695327759e-01 2.2740167379379272e-01
261 | <_>
262 |
263 | 0 -1 13 2.5048712268471718e-02
264 |
265 | -5.1417005062103271e-01 6.4332383871078491e-01
266 | <_>
267 |
268 | 0 -1 32 1.2440689897630364e-04
269 |
270 | 3.2576408982276917e-01 -1.
271 | <_>
272 |
273 | 0 -1 11 7.5464382767677307e-02
274 |
275 | -4.8837748169898987e-01 7.2066205739974976e-01
276 |
277 | <_>
278 | 6
279 | -2.4362821877002716e-01
280 |
281 | <_>
282 |
283 | 0 -1 2 4.1157819330692291e-02
284 |
285 | -8.0000001192092896e-01 2.0661157369613647e-01
286 | <_>
287 |
288 | 0 -1 22 -4.1049835272133350e-03
289 |
290 | 2.8924009203910828e-01 -7.7521103620529175e-01
291 | <_>
292 |
293 | 0 -1 33 -1.5786674339324236e-04
294 |
295 | -8.6025655269622803e-01 3.1228643655776978e-01
296 | <_>
297 |
298 | 0 -1 3 -5.6001801043748856e-02
299 |
300 | -1. 2.4942663311958313e-01
301 | <_>
302 |
303 | 0 -1 6 1.1094519868493080e-03
304 |
305 | -4.0196502208709717e-01 8.3858174085617065e-01
306 | <_>
307 |
308 | 0 -1 34 2.6128656463697553e-04
309 |
310 | 3.2138979434967041e-01 -8.9922791719436646e-01
311 |
312 | <_>
313 |
314 | <_>
315 | 0 1 11 6 -1.
316 | <_>
317 | 0 3 11 2 3.
318 | 0
319 | <_>
320 |
321 | <_>
322 | 0 2 8 4 -1.
323 | <_>
324 | 4 2 4 4 2.
325 | 0
326 | <_>
327 |
328 | <_>
329 | 0 2 8 7 -1.
330 | <_>
331 | 4 2 4 7 2.
332 | 0
333 | <_>
334 |
335 | <_>
336 | 0 2 16 18 -1.
337 | <_>
338 | 8 2 8 18 2.
339 | 0
340 | <_>
341 |
342 | <_>
343 | 0 5 6 21 -1.
344 | <_>
345 | 3 5 3 21 2.
346 | 0
347 | <_>
348 |
349 | <_>
350 | 0 21 10 12 -1.
351 | <_>
352 | 0 21 5 6 2.
353 | <_>
354 | 5 27 5 6 2.
355 | 0
356 | <_>
357 |
358 | <_>
359 | 0 23 4 2 -1.
360 | <_>
361 | 0 23 2 1 2.
362 | <_>
363 | 2 24 2 1 2.
364 | 0
365 | <_>
366 |
367 | <_>
368 | 0 24 6 2 -1.
369 | <_>
370 | 3 24 3 2 2.
371 | 0
372 | <_>
373 |
374 | <_>
375 | 1 0 4 8 -1.
376 | <_>
377 | 1 0 2 4 2.
378 | <_>
379 | 3 4 2 4 2.
380 | 0
381 | <_>
382 |
383 | <_>
384 | 1 11 10 22 -1.
385 | <_>
386 | 1 11 5 11 2.
387 | <_>
388 | 6 22 5 11 2.
389 | 0
390 | <_>
391 |
392 | <_>
393 | 1 16 6 12 -1.
394 | <_>
395 | 4 16 3 12 2.
396 | 0
397 | <_>
398 |
399 | <_>
400 | 2 0 11 9 -1.
401 | <_>
402 | 2 3 11 3 3.
403 | 0
404 | <_>
405 |
406 | <_>
407 | 2 7 6 23 -1.
408 | <_>
409 | 4 7 2 23 3.
410 | 0
411 | <_>
412 |
413 | <_>
414 | 2 25 6 4 -1.
415 | <_>
416 | 4 25 2 4 3.
417 | 0
418 | <_>
419 |
420 | <_>
421 | 3 0 6 12 -1.
422 | <_>
423 | 3 4 6 4 3.
424 | 0
425 | <_>
426 |
427 | <_>
428 | 3 5 12 1 -1.
429 | <_>
430 | 7 5 4 1 3.
431 | 0
432 | <_>
433 |
434 | <_>
435 | 3 25 9 8 -1.
436 | <_>
437 | 3 29 9 4 2.
438 | 0
439 | <_>
440 |
441 | <_>
442 | 4 3 2 19 -1.
443 | <_>
444 | 5 3 1 19 2.
445 | 0
446 | <_>
447 |
448 | <_>
449 | 4 5 7 2 -1.
450 | <_>
451 | 4 6 7 1 2.
452 | 0
453 | <_>
454 |
455 | <_>
456 | 4 11 8 8 -1.
457 | <_>
458 | 4 15 8 4 2.
459 | 0
460 | <_>
461 |
462 | <_>
463 | 5 0 7 33 -1.
464 | <_>
465 | 5 11 7 11 3.
466 | 0
467 | <_>
468 |
469 | <_>
470 | 5 1 4 14 -1.
471 | <_>
472 | 5 1 2 7 2.
473 | <_>
474 | 7 8 2 7 2.
475 | 0
476 | <_>
477 |
478 | <_>
479 | 5 3 4 12 -1.
480 | <_>
481 | 5 3 2 6 2.
482 | <_>
483 | 7 9 2 6 2.
484 | 0
485 | <_>
486 |
487 | <_>
488 | 5 3 5 2 -1.
489 | <_>
490 | 5 4 5 1 2.
491 | 0
492 | <_>
493 |
494 | <_>
495 | 5 5 2 11 -1.
496 | <_>
497 | 6 5 1 11 2.
498 | 0
499 | <_>
500 |
501 | <_>
502 | 5 9 6 21 -1.
503 | <_>
504 | 7 9 2 21 3.
505 | 0
506 | <_>
507 |
508 | <_>
509 | 5 9 7 24 -1.
510 | <_>
511 | 5 17 7 8 3.
512 | 0
513 | <_>
514 |
515 | <_>
516 | 5 10 5 16 -1.
517 | <_>
518 | 5 18 5 8 2.
519 | 0
520 | <_>
521 |
522 | <_>
523 | 5 18 6 1 -1.
524 | <_>
525 | 7 18 2 1 3.
526 | 0
527 | <_>
528 |
529 | <_>
530 | 6 0 9 22 -1.
531 | <_>
532 | 9 0 3 22 3.
533 | 0
534 | <_>
535 |
536 | <_>
537 | 6 3 9 8 -1.
538 | <_>
539 | 9 3 3 8 3.
540 | 0
541 | <_>
542 |
543 | <_>
544 | 6 11 6 13 -1.
545 | <_>
546 | 9 11 3 13 2.
547 | 0
548 | <_>
549 |
550 | <_>
551 | 6 13 1 3 -1.
552 | <_>
553 | 6 14 1 1 3.
554 | 0
555 | <_>
556 |
557 | <_>
558 | 6 13 2 3 -1.
559 | <_>
560 | 6 14 2 1 3.
561 | 0
562 | <_>
563 |
564 | <_>
565 | 7 10 3 3 -1.
566 | <_>
567 | 7 11 3 1 3.
568 | 0
569 | <_>
570 |
571 | <_>
572 | 8 0 8 6 -1.
573 | <_>
574 | 8 0 4 3 2.
575 | <_>
576 | 12 3 4 3 2.
577 | 0
578 | <_>
579 |
580 | <_>
581 | 8 23 6 5 -1.
582 | <_>
583 | 10 23 2 5 3.
584 | 0
585 | <_>
586 |
587 | <_>
588 | 8 24 8 6 -1.
589 | <_>
590 | 12 24 4 6 2.
591 | 0
592 | <_>
593 |
594 | <_>
595 | 9 0 3 8 -1.
596 | <_>
597 | 9 4 3 4 2.
598 | 0
599 | <_>
600 |
601 | <_>
602 | 9 0 6 8 -1.
603 | <_>
604 | 9 0 3 4 2.
605 | <_>
606 | 12 4 3 4 2.
607 | 0
608 | <_>
609 |
610 | <_>
611 | 9 4 6 12 -1.
612 | <_>
613 | 12 4 3 12 2.
614 | 0
615 | <_>
616 |
617 | <_>
618 | 9 7 1 12 -1.
619 | <_>
620 | 9 11 1 4 3.
621 | 0
622 | <_>
623 |
624 | <_>
625 | 9 17 6 15 -1.
626 | <_>
627 | 9 22 6 5 3.
628 | 0
629 | <_>
630 |
631 | <_>
632 | 9 21 1 3 -1.
633 | <_>
634 | 9 22 1 1 3.
635 | 0
636 | <_>
637 |
638 | <_>
639 | 10 0 3 21 -1.
640 | <_>
641 | 10 7 3 7 3.
642 | 0
643 | <_>
644 |
645 | <_>
646 | 10 15 6 10 -1.
647 | <_>
648 | 13 15 3 10 2.
649 | 0
650 | <_>
651 |
652 | <_>
653 | 10 21 6 4 -1.
654 | <_>
655 | 13 21 3 4 2.
656 | 0
657 | <_>
658 |
659 | <_>
660 | 10 23 1 10 -1.
661 | <_>
662 | 10 28 1 5 2.
663 | 0
664 |
665 |
--------------------------------------------------------------------------------
/fcn_filter_matching.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | import numpy as np
3 |
4 |
5 | # FILTER MATCHED POINT ACCORDING TO THE OPTICAL FLOW DIRECTION
6 | # optical flow direction computation, to check the goodness of a matched pair of points
7 | def filter_features_matching(foenew, x, x_old, y, y_old, mask):
8 |
9 | # direction connecting the old feature coordinates and the current focus of expansion foenew
10 |     m = (float(foenew[1] - y_old) / (foenew[0] - x_old + 0.0001))  # small offset avoids division by zero
11 | c = (float(foenew[1] - m * foenew[0]))
12 |
13 | # displacement between matched points along x and y
14 | distx = np.abs(x - x_old)
15 | disty = np.abs(y - y_old)
16 |
17 | # check whether the feature displacement is parallel to the direction connecting its actual coordinates
18 | # to the current focus of expansion position
19 | check = abs(y - m * x - c)
20 |
21 | if check < 2 and distx < 30 and disty < 30:
22 | cv2.line(mask, (int(x), int(y)), (int(x_old), int(y_old)), (255, 255, 255), 1)
23 |
--------------------------------------------------------------------------------
/fcns_track_railway.py:
--------------------------------------------------------------------------------
1 | import cv2
2 | import numpy as np
3 | from lines import lines2intersection_hor
4 |
5 | def tmp_match_rail_lines(w, frame, matchImage, weighted_version, watch_xcorr, MAXl, railL, railR, left_edges, left_xseq, left_yseq, right_edges, right_xseq, right_yseq):
6 |
7 | # draw first template
8 | railL.mark(frame, 0)
9 | railR.mark(frame, 0)
10 |
11 | # init expected coordinates for template lower left corner, where to look for the upper template containing
12 | # a left rail element
13 | Ml = (2 * w, matchImage.shape[0])
14 |
15 | # init expected coordinates for template lower left corner, where to look for the upper template containing
16 | # a right rail element
17 | Mr = (2 * w, matchImage.shape[0])
18 | MAXl.append(Ml[0])
19 |
20 | # init lists to contain the sequence of template corners, fitting left and right rail
21 | left_edges[:] = []
22 | right_edges[:] = []
23 |
24 | left_xseq[:] = []
25 | left_yseq[:] = []
26 | right_xseq[:] = []
27 | right_yseq[:] = []
28 |
29 |     # template matching of rails: 27 templates per rail (the initial one plus 26 iterations), each located one template height above the previous
30 | loop = 1
31 |
32 | while loop < 27:
33 | xl, yl, Ml = railL.find_next(matchImage,
34 | loop - 1,
35 | Ml[0],
36 | plotres=watch_xcorr,
37 | weights_on=weighted_version)
38 | xr, yr, Mr = railR.find_next(matchImage,
39 | loop - 1,
40 | Mr[0],
41 | plotres=watch_xcorr,
42 | weights_on=weighted_version)
43 | railL.push((xl, yl))
44 | railR.push((xr, yr))
45 | railL.mark(frame, loop)
46 | railR.mark(frame, loop)
47 |
48 | # line fitting the rails between the 1st and the 5th templates
49 | if loop == 1 or loop == 5:
50 | left_edges.append((xl, yl))
51 | right_edges.append((xr, yr))
52 |
53 | # polyfit of the first 15 templates
54 | if loop < 16:
55 | left_xseq.append(xl)
56 | left_yseq.append(yl)
57 | right_xseq.append(xr)
58 | right_yseq.append(yr)
59 |
60 | cv2.drawMarker(frame, (xl, yl), (0, 0, 255),
61 | markerType=cv2.MARKER_CROSS,
62 | markerSize=5,
63 | thickness=2,
64 | line_type=cv2.LINE_AA)
65 |
66 | cv2.drawMarker(frame, (xr, yr), (0, 0, 255),
67 | markerType=cv2.MARKER_CROSS,
68 | markerSize=5,
69 | thickness=2,
70 | line_type=cv2.LINE_AA)
71 |
72 | loop = loop + 1
73 |
74 | # RAIL LINES EXTRACTION
75 | # compute coefficients of lines, fitting left and right rails
76 | left_rail_line = np.polyfit(left_xseq, left_yseq, 1)
77 | right_rail_line = np.polyfit(right_xseq, right_yseq, 1)
78 |
79 | # extract lines coefficients
80 | ml = left_rail_line[0]
81 | cl = left_rail_line[1]
82 | mr = right_rail_line[0]
83 | cr = right_rail_line[1]
84 |
85 | # compute intersection of a rail line and the horizon
86 | newFOEx_rail, newFOEy_rail = lines2intersection_hor(ml, cl)
87 |
88 | return newFOEx_rail, newFOEy_rail, ml, cl, mr, cr, left_edges, right_edges
--------------------------------------------------------------------------------
/gray_frame_14750.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/federicafioretti/railway-detection/79db5cd150d4eaabeb751aab13b5eaca8727eeda/gray_frame_14750.png
--------------------------------------------------------------------------------
/image-readme/execution.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/federicafioretti/railway-detection/79db5cd150d4eaabeb751aab13b5eaca8727eeda/image-readme/execution.png
--------------------------------------------------------------------------------
/lines.py:
--------------------------------------------------------------------------------
1 |
2 | # considering a line described as m*x + c = y:
3 |
4 | # compute line polynomial coefficient from two given points in a single list
5 | def points2coeffs(points):
6 | p1lx = points[0][0]
7 | p1ly = points[0][1]
8 | p2lx = points[1][0]
9 | p2ly = points[1][1]
10 | if p1lx != p2lx:
11 | ml = float(p2ly - p1ly) / (p2lx - p1lx)
12 | else:
13 | ml = 0
14 | cl = float(p1ly - ml * (p1lx))
15 | return ml, cl
16 |
17 | # compute intersection point within image between a line having coefficients (m, c)
18 | # with the horizon (given by the y-coordinate of the principal point, 631 from calibration)
19 | def lines2intersection_hor(m, c, Y_i=631):
20 | X_i = int((Y_i - c) / (m + 0.0001)) # avoid division by zero
21 | return X_i, Y_i
22 |
23 | # given two vectors m, c including the polynomial coefficients from different lines
24 | # returns their intersection
25 | def lines2intersection(m, c):
26 | Y_i = int((m[0] * c[1] - m[1] * c[0]) / (m[0] - m[1]))
27 | X_i = int((Y_i - c[0]) / m[0])
28 | return X_i, Y_i
--------------------------------------------------------------------------------
/objectDetection.py:
--------------------------------------------------------------------------------
1 |
2 | import cv2
3 | import numpy as np
4 |
5 |
6 | class RailwayObjectDetection:
7 | def __init__(self, ml_rail, cl_rail, mr_rail, cr_rail, focusOfExpansion):
8 | self.ml_rail = ml_rail
9 | self.cl_rail = cl_rail
10 | self.mr_rail = mr_rail
11 | self.cr_rail = cr_rail
12 | self.foe = focusOfExpansion
13 |
14 | # assess the turn markings detection
15 | def check_for_turn(self, bbox, bboxold, x, y, x_old, y_old, frame, mask, dynFOEx, pole_points, pole_oldpoints):
16 |
17 | # distance between matched points along x and y
18 | distx = np.abs(x - x_old)
19 | disty = np.abs(y - y_old)
20 |
21 | # dynamic FOE variable is initialised as the straight track value, for each feature point considered
22 | foe1 = self.foe[0]
23 |
24 | # check for keypoints matching goodness
25 | for (a, b, w, h) in bbox:
26 | for (aold, bold, wold, hold) in bboxold:
27 | cv2.rectangle(frame, (a, b), (a + w, b + h), (0, 255, 255), 2)
28 | if ((x > a and x < (a + w) and y > b and y < (b + h)) or
29 | (x_old > aold and x_old < (aold + wold) and y_old > bold and y_old < (bold + hold))) and \
30 | distx < 30 and disty < 30:
31 |
32 | cv2.line(mask, (int(x), int(y)), (int(x_old), int(y_old)), (255, 255, 255), 2)
33 |
34 | pole_points.append((x,y))
35 | pole_oldpoints.append((x_old, y_old))
36 |
37 | # significant turn condition
38 | if abs(x - x_old) > 2:
39 |
40 | # compute which direction the optical flow should have
41 | mt = (float(y - y_old) / (x - x_old + 0.0001)) + 0.0001
42 | ct = (float(y - mt * x))
43 |
44 | # compute new x coord for focus of expansion FOE
45 | foex = float((self.foe[1] - ct) / mt)
46 | foe1 = foe1 + (foex - foe1)
47 |
48 | # arrange all FOE x coord in array, making sure that foe1 result is valid
49 | if not np.isnan(foe1):
50 | dynFOEx.append(foe1)
51 |
52 | return dynFOEx, pole_points, pole_oldpoints
53 |
54 | # assess the kilometer signs detection
55 | def get_km_features(self, bbox, bboxold, x, y, x_old, y_old, points_km, oldpoints_km):
56 | distx = np.abs(x - x_old)
57 | disty = np.abs(y - y_old)
58 |
59 | # check for keypoints matching goodness
60 | for (a, b, w, h) in bbox:
61 | for (aold, bold, wold, hold) in bboxold:
62 | if ((x > a and x < (a + w) and y > b and y < (b + h)) or
63 | (x_old > aold and x_old < (aold + wold) and y_old > bold and y_old < (bold + hold))) and \
64 | distx < 30 and distx > 12 and disty < 30 and disty > 8:
65 | points_km.append((float(x),float(y)))
66 | oldpoints_km.append((float(x_old), float(y_old)))
67 | return points_km, oldpoints_km
68 |
--------------------------------------------------------------------------------
/rail.py:
--------------------------------------------------------------------------------
1 |
2 | import cv2
3 | import numpy as np
4 | from matplotlib import pyplot as plt
5 |
6 | class Rail:
7 | def __init__(self, xstart, ystart, w, h):
8 | self.xstart = xstart
9 | self.ystart = ystart
10 | self.w = w
11 | self.w0 = w
12 | self.h = h
13 | self.scale = 0.95
14 | self.railX = [xstart]
15 | self.railY = [ystart]
16 |
17 | # returns the lower image coordinates where the rails start
18 | def get_rail(self):
19 | return self.railX, self.railY
20 |
21 | # returns the template
22 | def get_template(self, frame, pos):
23 | template = frame[(self.railY[pos]-self.h):self.railY[pos], self.railX[pos]:(self.railX[pos] + self.w)]
24 | return template
25 |
26 | # returns upper stripe of image to be processed with respect to the current template
27 | def get_nextrow(self, frame, pos):
28 | row = frame[(self.railY[pos]-2*self.h):(self.railY[pos]-self.h), 0:np.size(frame, 1)]
29 | return row
30 |
31 | # keep track of the lower left corner for each extracted template, in order to reconstruct the whole rail profile
32 | def push(self, pos):
33 | self.railX.append(pos[0])
34 | self.railY.append(pos[1])
35 |
36 | # draws the rectangular bound around the template
37 | def mark(self, img, pos):
38 | cv2.rectangle(img, (self.railX[pos], self.railY[pos]), (self.railX[pos]+self.w, self.railY[pos]-self.h), (255,255,255), 1)
39 |
40 | # match the given template with the immediately above one within the same frame, having same height and larger width
41 | def find_next(self, img, pos, MAX, method=0, plotres=0, weights_on=1):
42 | self.w = int(self.w0 * self.scale**pos)
43 | # methods = [cv2.TM_CCOEFF_NORMED, cv2.TM_CCORR_NORMED, cv2.TM_SQDIFF_NORMED]
44 | tmpl = self.get_template(img, pos)
45 | row = self.get_nextrow(img, pos)
46 |
47 | # definition of the upper template to be analysed
48 | startx = self.railX[pos]-2*self.w
49 |
50 | if startx <= 0:
51 |             startx = 0
52 |
53 | endx = startx+4*self.w
54 |
55 | if endx >= img.shape[1]:
56 | endx = img.shape[1]
57 |
58 |
59 | # Correlation is weighted by a Lorentzian function centred at the peak of correlation given by the lower left corner
60 | # of the previously extracted template
61 | xcorr = cv2.matchTemplate(row[0:self.h, startx:endx], tmpl, method=cv2.TM_CCOEFF_NORMED)
62 |
63 | a = 0.001*(pos*2+1) # set Lorentzian shape
64 | xcorrW = np.zeros_like(xcorr)
65 | L = []
66 | val = []
67 | val.extend(range(0, np.size(xcorr[0])))
68 |
69 | for i in range(0, np.size(xcorr,1)):
70 | L.append(1/(1 + a*pow(val[i] - MAX, 2)))
71 | xcorrW[0][i] = L[i]*xcorr[0][i]
72 |
73 | min_val0, max_val0, min_loc0, max_loc0 = cv2.minMaxLoc(xcorr)
74 |
75 | if weights_on == 1:
76 | min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(xcorrW)
77 | else:
78 | max_val = max_val0
79 | max_loc = max_loc0
80 |
81 | if pos == 0:
82 | max_loc = max_loc0
83 |
84 | test_img = cv2.cvtColor(row[0:self.h, startx:startx+np.size(xcorr[0])], cv2.COLOR_GRAY2BGR)
85 |
86 | # view results of correlation analysis
87 | if plotres == 1:
88 | plt.subplot(211)
89 | plt.imshow(test_img)
90 | plt.subplot(212)
91 | plt.plot(val, xcorr[0], 'y')
92 | plt.plot(val, xcorrW[0], 'b')
93 | plt.plot(val, L, 'g')
94 | plt.plot(max_loc0[0], max_val0, 'ro')
95 | plt.axis([0, np.size(xcorr[0]), min_val, 1.1])
96 | plt.title('Result, iter:' + str(pos)), plt.xticks([0, np.size(xcorr[0])]), plt.yticks([0, max_val])
97 | plt.show()
98 |
99 | return startx+max_loc[0], self.railY[pos]-self.h, max_loc
100 |
101 | # rail detection in the lower part of the image, using Hough Lines, necessary before starting the iterative rail
102 | # tracking using template matching
103 | def extract_rail_line(self, frame, start_rows, end_rows, start_cols, end_cols,canny_min, canny_max, hough_I, length,
104 | theta_min=0.0, theta_max=1.0, expt_start=600, max_displacement=15):
105 |
106 | # elaborate lower part of the image containing the rails to make lines more visible
107 | patch = frame[start_rows:end_rows, start_cols:end_cols]
108 | patch_flipped = cv2.flip(patch, 0)
109 | gray_patch = cv2.cvtColor(patch_flipped, cv2.COLOR_BGR2GRAY)
110 | blur1 = cv2.GaussianBlur(gray_patch, (7, 7), 7)
111 | sobelx2 = cv2.Sobel(blur1, cv2.CV_64F, 1, 0, ksize=3)
112 |
113 | abs_sobel64f = np.absolute(sobelx2)
114 | sobel_x = np.uint8(abs_sobel64f)
115 |
116 | rails_edges = cv2.Canny(sobel_x, canny_min, canny_max)
117 |
118 | k = cv2.waitKey(1)
119 |         if k == 115:  # pause when the 's' key is pressed; press 's' again to resume
120 | while 1:
121 | k = cv2.waitKey(10)
122 | if k == 115:
123 | break
124 |
125 | # list to contain edges of detected lines with Hough Transform
126 |
127 | xr_start = []
128 | xr_end = []
129 | yr_start = []
130 | yr_end = []
131 | x_0 = []
132 | y_0 = []
133 |
134 | lines = cv2.HoughLines(rails_edges, 1, np.pi/180, hough_I)
135 |
136 | # min displacement for the line edge with respect to the init position
137 | min_delta = 20
138 | min_val = expt_start
139 |
140 | # search for lines according to Hough Transform
141 | for rho, theta in lines[:, 0, :]:
142 |
143 | a = np.cos(theta)
144 | b = np.sin(theta)
145 | x0 = a * rho
146 | y0 = b * rho
147 | x1 = int(x0 + length * (-b))
148 | y1 = int(y0 + length * (a))
149 | x2 = int(x0 - length * (-b))
150 | y2 = int(y0 - length * (a))
151 |
152 | # check whether the direction of the detected lines is coherent with the direction expected
153 |
154 | if theta < theta_max and theta > theta_min and abs(rho/np.cos(theta)+start_cols - expt_start) < max_displacement:
155 |
156 | x_0.append(x0)
157 | y_0.append(y0)
158 | xr_start.append(x1)
159 | yr_start.append(y1)
160 | xr_end.append(x2)
161 | yr_end.append(y2)
162 |
163 | # computes the closest detected line to the expected one and updates the edge position for next frame
164 | [min_delta, min_val] = self.nearest_line(rho, theta, start_cols, expt_start, min_delta, min_val)
165 |
166 | expt_start = min_val
167 |
168 |         # trace lines on lower patch; fall back to the plain flipped patch if no line was kept
169 |         patch_wlines = patch_flipped
170 |         for l in range(len(xr_start) - 1):
171 |             patch_wlines = cv2.line(patch_flipped, (xr_start[l], yr_start[l]), (xr_end[l], yr_end[l]), (255, 255, 255), 2)
172 | return patch_wlines, start_cols, end_cols, expt_start
173 |
174 |     # determines, among the set returned by Hough Lines, the line closest to the expected lower-edge position
175 | def nearest_line(self, rho, theta, start_cols, expt_start, min_delta, min_val):
176 |
177 | if abs(rho / np.cos(theta) + start_cols - expt_start) < min_delta:
178 | min_delta = abs(rho / np.cos(theta) + start_cols - expt_start)
179 | min_val = rho / np.cos(theta) + start_cols
180 | return min_delta, min_val
181 |
182 | # show patch containing the railway on the lower part of image
183 | def patch_wRails(self, watch, patchL, patchR):
184 |
185 | patchL[0:np.size(patchR, 0), 250:np.size(patchR, 1)] = patchR[0:np.size(patchR, 0), 250:np.size(patchR, 1)]
186 | patch = cv2.flip(patchL, 0)
187 |
188 | if watch == 1:
189 | cv2.imshow('frame', patch)
190 |
191 |
--------------------------------------------------------------------------------
/test_straight_track_analysis.py:
--------------------------------------------------------------------------------
1 |
2 | import cv2
3 | import numpy as np
4 | from matplotlib import pyplot as plt
5 |
6 | # Original (gray) image
7 |
8 | full_img = cv2.imread('gray_frame_14750.png', 0)
9 |
10 | # Original (gray) cropped image
11 |
12 | img = full_img[full_img.shape[0] - 185: full_img.shape[0], :]
13 |
14 | # Blur image and filter horizontal components to make rail lines more detectable.
15 |
16 | blur = cv2.GaussianBlur( img,(7, 7), 7 )
17 |
18 | sobelx2 = cv2.Sobel( blur, cv2.CV_64F, 1, 0, ksize=3 )
19 |
20 | abs_sobel64f = np.absolute( sobelx2 )
21 | sobel_x = np.uint8( abs_sobel64f )
22 |
23 | edges = cv2.Canny( sobel_x, 40, 200 )
24 |
25 | # HOUGH LINES
26 |
27 | x_start = []
28 | x_end = []
29 | y_start = []
30 | y_end = []
31 | x_0 = []
32 | y_0 = []
33 |
34 | # Search lines in the cropped image
35 |
36 | lines = cv2.HoughLines( edges, 1, np.pi/180, 150 )
37 |
38 | # Convert points into the original image coordinates
39 |
40 | L = 500
41 | thresh = 200
42 |
43 | for rho,theta in lines[:, 0, :]:
44 | a = np.cos(theta)
45 | b = np.sin(theta)
46 | x0 = a*rho
47 | y0 = b*rho
48 | x1 = int(x0 + L*(-b))
49 | y1 = int(y0 + L*(a)) + np.size(full_img, 0) - np.size(img, 0)
50 | x2 = int(x0 - L*(-b))
51 | y2 = int(y0 - L*(a)) + np.size(full_img, 0) - np.size(img, 0)
52 |
53 | # x threshold on edge segments to detect the ones representing rails
54 | if x1 > thresh:
55 | x_0.append(x0)
56 | y_0.append(y0)
57 | x_start.append(x1)
58 | y_start.append(y1)
59 | x_end.append(x2)
60 | y_end.append(y2)
61 |
62 | m = []
63 | c = []
64 |
65 | # Computation of the expression of straight lines y = mx + c and their interception point (X_i, Y_i)
66 |
67 | m.append((float(y_end[0] - y_start[0])/(x_end[0] - x_start[0])))
68 | c.append(float(y_end[0] - m[0]*x_end[0]))
69 |
70 | m.append((float(y_end[1] - y_start[1])/(x_end[1] - x_start[1])))
71 | c.append(float(y_end[1] - m[1]*x_end[1]))
72 |
73 | Y_i = int((m[0]*c[1] - m[1]*c[0])/(m[0] - m[1]))
74 | X_i = int((Y_i - c[0])/m[0])
75 |
76 | print 'LINE 1 m c ', m[0], c[0], '\n', 'LINE 2 m c ', m[1], c[1]
77 |
78 | # Recompute segment endpoints so the drawn lines span from the bottom of the image up to y = 10
79 |
80 | for l in range(len(x_start) - 1):
81 | y_start[l] = np.size(full_img, 0)
82 | x_start[l] = int((y_start[l] - c[l]) / m[l])
83 |
84 | y_end[l] = 10
85 | x_end[l] = int((y_end[l] - c[l]) / m[l])
86 |
87 | for i in range(len(x_start) - 1):
88 |
89 | cv2.line( full_img, (x_start[i], y_start[i]), (x_end[i], y_end[i]), (255, 255, 255), 2 )
90 |
91 | cv2.drawMarker(full_img, (X_i, Y_i), (0,0,255),
92 | markerType=cv2.MARKER_CROSS,
93 | markerSize=20,
94 | thickness=2,
95 | line_type=cv2.LINE_AA)
96 |
97 | cv2.drawMarker(full_img, (x_start[i], y_start[i]), (0, 0, 255),
98 | markerType=cv2.MARKER_STAR,
99 | markerSize=20,
100 | thickness=2,
101 | line_type=cv2.LINE_AA)
102 |
103 | cv2.drawMarker(full_img, (x_end[i], y_end[i]), (0, 0, 255),
104 | markerType=cv2.MARKER_TRIANGLE_UP,
105 | markerSize=20,
106 | thickness=2,
107 | line_type=cv2.LINE_AA)
108 |
109 | #cv2.imwrite('lines_full.png', full_img)
110 |
111 | print 'X0', x_0, '\nY0', y_0, '\nX1', x_start, '\nY1', y_start,'\nX2', x_end, '\nY2', y_end, '\n'
112 | print 'Focus of expansion coordinates [x, y] =', X_i,'', Y_i
113 |
114 | full_img_wLines = full_img
115 | plt.imshow( full_img_wLines, cmap = 'gray' )
116 | plt.title( 'Line extraction' ), plt.xticks([]), plt.yticks([])
117 |
118 | plt.show()
--------------------------------------------------------------------------------