├── .gitignore
├── .gitmodules
├── Challenge_210
│   ├── Challenge_210.md
│   ├── test_cases.json.gz
│   └── test_challenge_210.py
├── Challenge_211
│   ├── Challenge_211.md
│   ├── test_cases.json.gz
│   └── test_challenge_211.py
├── Challenge_212
│   ├── Challenge_212.md
│   ├── test_cases.json.gz
│   └── test_challenge_212.py
├── Readme.md
├── logo1.png
└── test_challenge.py
/.gitignore:
--------------------------------------------------------------------------------
1 | *.code-workspace
2 | .vscode/
3 | .venv/
4 | .pytest_cache/
5 | __pycache__/
6 | logo.ascii
7 | logo2.png
8 | publi.sh
9 | PreviousChallenges/
10 |
--------------------------------------------------------------------------------
/.gitmodules:
--------------------------------------------------------------------------------
1 | [submodule "PreviousChallenges"]
2 | path = PreviousChallenges
3 | url = https://github.com/Pomroka/PreviousChallenges.git
4 |
--------------------------------------------------------------------------------
/Challenge_210/Challenge_210.md:
--------------------------------------------------------------------------------
1 | # Challenge 210: Halloween, Part 1
2 |
3 | **Difficulty: 1/10
4 | Labels: Sorting**
5 |
6 | During Halloween, SnowballSH is collecting candies from his `n` neighbors. Each neighbor is offering a box of candy with a certain tastiness. SnowballSH wants to maximize the sum of the tastinesses he ends up with, but unfortunately his bag can only store up to `k` boxes of candy.
7 |
8 | Can you help him find the maximum possible sum of the tastinesses of the boxes he chooses to put in his bag?
9 |
10 | ## Task
11 |
12 | You are given a number `T`, and `T` test cases follow. For each test case:
13 |
14 | - The first line contains two integers `n` and `k`, the number of neighbors and the maximum number of boxes SnowballSH can take, respectively.
15 | - The second line contains an array of `n` integers `a`, where `a[i]` is the tastiness of the box of candy offered by the `i`th neighbor.
16 |
17 | Output a single integer, the maximum possible sum of the tastinesses of the boxes SnowballSH chooses to take.
18 |
19 | ### Examples
20 |
21 | #### Input
22 |
23 | ```rust
24 | 5
25 | 5 3
26 | 5 1 3 2 4
27 | 7 5
28 | 1 14 13 13 13 13 13
29 | 4 2024
30 | 100 500 300 500
31 | 1 1
32 | 1
33 | 6 1
34 | 1 2 4 8 16 32
35 | ```
36 |
37 | #### Output
38 |
39 | ```rust
40 | 12
41 | 66
42 | 1400
43 | 1
44 | 32
45 | ```
46 |
47 | - For the first test case, it is optimal for SnowballSH to pick the boxes with tastiness `3`, `4`, and `5`, giving him a total tastiness of `3+4+5=12`. He cannot take more boxes because he is limited to at most `3` boxes.
48 | - For the third test case, it is optimal for SnowballSH to take all `4` boxes.
49 |
50 | ### Note
51 |
52 | - `1 <= T`
53 | - `1 <= n <= 10^5`
54 | - `1 <= k <= 10^9`
55 | - `1 <= a[i] <= 10,000`
56 |
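A minimal sketch of the straightforward sorting approach (an illustrative solution, not the official reference one), written against the `input()`-based format the repository's testers use; since `k` may exceed `n`, slicing past the end of the sorted list is harmless:

```python
def main():
    t = int(input())
    for _ in range(t):
        n, k = map(int, input().split())
        a = sorted(map(int, input().split()), reverse=True)
        # Greedy: the min(n, k) tastiest boxes form an optimal choice.
        print(sum(a[:k]))

main()
```
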
57 | ### Extra challenges for experienced programmers
58 |
59 | To learn more about data structures and algorithms!
60 |
61 | 1. Solve this challenge without sorting, but with data structures.
62 | 2. Solve this challenge with divide and conquer.
63 | 3. Solve this challenge in $\mathcal{O}(n + k)$ on average.
64 | 4. Utilizing the low values of `a[i]`, solve this challenge in $\mathcal{O}(n + k)$ in the worst case (one possible counting-based approach is sketched below).
65 |
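For the fourth extra challenge, a hedged sketch of one counting-based idea (an assumption about the intended direction, not the official solution): since `1 <= a[i] <= 10,000`, the boxes can be bucketed by tastiness and consumed from the top without a comparison sort.

```python
def solve_counting(a: list[int], k: int) -> int:
    # Hypothetical helper illustrating extra challenge 4.
    MAX_A = 10_000
    counts = [0] * (MAX_A + 1)
    for v in a:                      # bucket the boxes by tastiness
        counts[v] += 1
    total = 0
    remaining = min(k, len(a))       # the bag never needs more than n boxes
    for v in range(MAX_A, 0, -1):    # take from the tastiest bucket down
        if remaining == 0:
            break
        take = min(counts[v], remaining)
        total += take * v
        remaining -= take
    return total
```
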
66 | ### Submissions
67 |
68 | Code can be written in any of these languages:
69 |
70 | - `Python` 3.11
71 | - `C` (gnu17) / `C++` (c++20) - GCC 12.2
72 | - `Ruby` 3.3.4
73 | - `Golang` 1.21
74 | - `Java` 19 (Open JDK) - use **"class Main"!!!**
75 | - `Rust` 1.72
76 | - `C#` 11 (.Net 7.0)
77 | - `JavaScript` ES2023 (Node.js 20.6)
78 | - `Zig` 0.13.0
79 |
80 | To download the tester for this challenge, click [here](https://downgit.github.io/#/home?url=https://github.com/Pomroka/TWT_Challenges_Tester/tree/main/Challenge_210)
81 |
--------------------------------------------------------------------------------
/Challenge_210/test_cases.json.gz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Pomroka/TWT_Challenges_Tester/9ab9f21a77355e08999e77b9c838e176934dc9f7/Challenge_210/test_cases.json.gz
--------------------------------------------------------------------------------
/Challenge_210/test_challenge_210.py:
--------------------------------------------------------------------------------
1 | """
2 | How to use?
3 | Put this file, the test cases file, and the file with your solution in the same folder.
4 | Make sure Python has permission to save to this folder.
5 |
6 | This tester will create one file, "temp_solution_file.py"; make sure there's no such
7 | file in the same folder, or that there's nothing important in it, cos it will be overwritten!
8 |
9 | Change SOLUTION_SRC_FILE_NAME to your file name in CONFIGURATION section.
10 |
11 | If this tester wasn't prepared by me for the challenge you want to test,
12 | you may need to adjust other configuration settings. Read the comments on each.
13 |
14 | If you want to use your own test cases, they must be in JSON format.
15 | [
16 |     [ # this list can be in a separate file, for inputs only
17 | ["test case 1"], # multiline case ["line_1", "line_2", ... ,"line_n"]
18 | ["test case 2"],
19 | ...
20 | ["test case n"]
21 | ],
22 | [ # and this for output only
23 | "output 1",
24 | "output 2",
25 | ...
26 | "output n"
27 | ]
28 | ]
29 | All values must be strings! Cast all ints, floats, etc. to string before dumping JSON!
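
For example, a hypothetical dump of your own test cases (names and values made up)
could look like this, pointing TEST_CASE_FILE at the resulting file:

    import gzip, json
    cases = [
        [["2 1", "5 3"]],   # inputs: one case with first line "2 1", second line "5 3"
        ["5"],              # expected outputs, one string per case
    ]
    with gzip.open("my_test_cases.json.gz", "wt") as f:
        json.dump(cases, f)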
30 |
31 | WARNING: My tester ignores printing in input(), but the official tester FAILS if you
32 | print something in input()
33 | Don't do that: input("What is the test number?")
34 | Use empty input: input()
35 |
36 | Some possible errors:
37 | - None in "Your output": Your solution didn't print for all cases.
38 | - None in "Input": Your solution prints more times than there are cases.
39 | - If you see None in "Input" or "Your output", don't check failed cases until
40 |   you fix the problem with printing, cos "Input" and "Your output" are misaligned
41 |   after the first missing/extra print
42 | - StopIteration: Your solution tries to get more input than there are test cases
43 | - If you use `open` instead of `input` you get a `StopIteration` error in my tester;
44 |   to avoid that, use OTHER_LANG_COMMAND = "python to_submit_ch_83.py"
45 | - If you call your function inside `if __name__ == '__main__':`, by default
46 |   your functions won't be called, cos your solution is imported.
47 |   To avoid that, use OTHER_LANG_COMMAND = "python to_submit_ch_83.py"
48 |   or don't use `if __name__ == '__main__':`
49 | """
50 | from __future__ import annotations
51 |
52 | from dataclasses import dataclass
53 |
54 |
55 | ########## CONFIGURATION ################
56 | @dataclass
57 | class Config:
58 |
59 |     # Name of your file with the solution. If it's not in the same folder as this script,
60 |     # add an absolute path or one relative to this script file.
61 |     # For languages other than Python, fill this with the source code file name if
62 |     # you want the solution length to be displayed.
63 | # Examples:
64 | # Absolute path
65 | # SOLUTION_SRC_FILE_NAME = "/home/user/Dev/Cpp/c83_c/c83_c/src/Main.cpp"
66 | # Relative path to this script file
67 | # SOLUTION_SRC_FILE_NAME = "Rust/C83_rust/src/main.rs"
68 | # File in same folder as this script
69 | SOLUTION_SRC_FILE_NAME = "to_submit_ch_210.py"
70 |
71 |     # Command to run your solution if it's written in a language other than Python.
72 |     # For compiled languages - compile it yourself and use the compiled executable file name.
73 |     # For interpreted languages - give the full command to run your solution.
74 | # Examples:
75 | # OTHER_LANG_COMMAND = "c83_cpp.exe"
76 | # OTHER_LANG_COMMAND = "Cpp/c83_c/bin/x64/Debug/c83_c.exe"
77 | # OTHER_LANG_COMMAND = "Cpp/c83_c/bin/x64/Release/c83_c"
78 | # OTHER_LANG_COMMAND = "/home/user/Dev/Rust/c83_rust/target/release/c83_rust"
79 | # OTHER_LANG_COMMAND = "d:/Dev/C_Sharp/c83_cs/bin/Debug/net6.0/c83_cs.exe"
80 | # OTHER_LANG_COMMAND = "java -cp Java/ c83_java.Main"
81 | OTHER_LANG_COMMAND = ""
82 |
83 |     # Name of the file with input test cases (and outputs if TEST_CASE_FILE_EXP is empty)
84 |     # If the test cases file is compressed, you don't need to extract it; just give the name
85 |     # of the compressed file (with the .gz extension)
86 | TEST_CASE_FILE = "test_cases.json.gz"
87 |
88 |     # If test case inputs and expected outputs are in separate files, the name of the file
89 |     # with the expected outputs for the test cases. Empty string - if they're in one file.
90 | TEST_CASE_FILE_EXP = ""
91 |
92 |     # True - if you want colors in terminal output, False - otherwise, or if your terminal
93 |     # doesn't support colors and the script didn't detect this
94 | COLOR_OUT: bool = True
95 |
96 |     # -1 - use all test cases from the test case file; you can limit here how many
97 |     # test cases you want to test your solution with. If you enter a number bigger than
98 |     # the number of tests, all tests will be used
99 | NUMBER_OF_TEST_CASES = -1
100 |
101 |     # True - if you want to print some debug information; you need to set
102 |     # DEBUG_TEST_NUMBER to the test number you want to debug
103 | DEBUG_TEST: bool = False
104 |
105 |     # Provide the test number for which you want to see your debug prints. If you enter a
106 |     # number out of range, the first test case will be used. (This number is 1-indexed; it's the
107 |     # same number printed for failed test cases in a normal test.) Ignored when DEBUG_TEST is False
108 | DEBUG_TEST_NUMBER = 1
109 |
110 |     # True - if the official challenge tester gives one test case's input and runs your solution
111 |     # again for the next test case, False - if the official challenge tester gives all test cases
112 |     # at once and your solution needs to take care of it.
113 | TEST_ONE_BY_ONE: bool = False
114 |
115 |     # True - if you want to test your solution one test case at a time while the solution
116 |     # is written to take all test cases at once; this will add "1" as the first input line
117 |     # Ignored if TEST_ONE_BY_ONE is False
118 | ADD_1_IN_OBO: bool = False
119 |
120 | # True - if you want to measure performance of your solution, running it multiple times
121 | SPEED_TEST: bool = False
122 |
123 |     # How many test cases to use per loop; same rules apply as for NUMBER_OF_TEST_CASES
124 | NUMBER_SPEED_TEST_CASES = -1
125 |
126 |     # How many times to run the tests
127 | NUMER_OF_LOOPS = 2
128 |
129 |     # Timeout in seconds. Will not time out in the middle of a test case (if TEST_ONE_BY_ONE is False,
130 |     # will not time out in the middle of a loop). Will time out only between test cases / loops.
131 |     # If you don't want a timeout, set it to some big number or `float("inf")`
132 | TIMEOUT = 300
133 |
134 |     # Set to False if this tester wasn't prepared for the challenge you're testing,
135 |     # or adjust the prints in the `print_extra_stats` function yourself
136 | PRINT_EXTRA_STATS: bool = True
137 |
138 | # How often to update progress: must be > 0 and < 1
139 | # 0.1 - means update every 10% completed tests
140 | # 0.05 - means update every 5% completed tests
141 | PROGRESS_PERCENT = 0.1
142 |
143 | # How many failed cases to print
144 | # Set to -1 to print all failed cases
145 | NUM_FAILED = 5
146 |
147 | # Maximum length of line to print for failed cases
148 | # Set to -1 to print full line
149 | TRUNCATE_FAILED_CASES = 1000
150 |
151 | # Set to False if you want to share result but don't want to share solution length
152 | PRINT_SOLUTION_LENGTH: bool = True
153 |
154 |     # If your terminal supports unicode and your font has glyphs for emoji, you can
155 |     # switch this to True
156 | USE_EMOJI: bool = False
157 |
158 |
159 | # region #######################################################################
160 |
161 | import argparse
162 | import functools
163 | import gzip
164 | import json
165 | import operator
166 | import os
167 | import platform
168 | import subprocess
169 | import sys
170 | from enum import Enum, auto
171 | from io import StringIO
172 | from itertools import zip_longest
173 | from pprint import pprint
174 | from time import perf_counter
175 | from typing import Callable, List, Tuple
176 | from unittest import mock
177 |
178 | TestInp = List[List[str]]
179 | TestOut = List[str]
180 | TestCases = Tuple[TestInp, TestOut]
181 |
182 |
183 | def enable_win_term_mode() -> bool:
184 | win = platform.system().lower() == "windows"
185 | if win is False:
186 | return True
187 |
188 | from ctypes import byref, c_int, c_void_p, windll
189 |
190 | ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004
191 | INVALID_HANDLE_VALUE = c_void_p(-1).value
192 | STD_OUTPUT_HANDLE = c_int(-11)
193 |
194 | hStdout = windll.kernel32.GetStdHandle(STD_OUTPUT_HANDLE)
195 | if hStdout == INVALID_HANDLE_VALUE:
196 | return False
197 |
198 | mode = c_int(0)
199 | ok = windll.kernel32.GetConsoleMode(c_int(hStdout), byref(mode))
200 | if not ok:
201 | return False
202 |
203 | mode = c_int(mode.value | ENABLE_VIRTUAL_TERMINAL_PROCESSING)
204 | ok = windll.kernel32.SetConsoleMode(c_int(hStdout), mode)
205 | if not ok:
206 | return False
207 |
208 | return True
209 |
210 |
211 | class Capturing(list):
212 | def __enter__(self) -> "Capturing":
213 | self._stdout = sys.stdout
214 | sys.stdout = self._stringio = StringIO()
215 | return self
216 |
217 | def __exit__(self, *args: str) -> None:
218 | self.extend(self._stringio.getvalue().splitlines())
219 | del self._stringio # free up some memory
220 | sys.stdout = self._stdout
221 |
222 |
223 | def create_solution_function(path: str, file_name: str) -> int:
224 | if not file_name:
225 | return 0
226 |
227 | if file_name.find("/") == -1 or file_name.find("\\") == -1:
228 | file_name = os.path.join(path, file_name)
229 |
230 | if not os.path.exists(file_name):
231 | print(f"Can't find file {red}{file_name}{reset}!\n")
232 |         print("Make sure:\n - your file is in the same directory as this script.")
233 |         print(" - or give an absolute path to your file")
234 |         print(" - or give a relative path from this script.\n")
235 | print(f"Current Working Directory is: {yellow}{os.getcwd()}{reset}")
236 |
237 | return 0
238 |
239 | solution = []
240 | with open(file_name, newline="") as f:
241 | solution = f.readlines()
242 |
243 | sol_len = sum(map(len, solution))
244 |
245 | if not file_name.endswith(".py"):
246 | return sol_len
247 |
248 | tmp_name = os.path.join(path, "temp_solution_file.py")
249 | with open(tmp_name, "w") as f:
250 | f.write("def solution():\n")
251 | for line in solution:
252 | f.write(" " + line)
253 |
254 | return sol_len
255 |
256 |
257 | def read_test_cases() -> TestCases:
258 | if test_cases_file.endswith(".gz"):
259 | with gzip.open(test_cases_file, "rb") as g:
260 | data = g.read()
261 | try:
262 | test_cases = json.loads(data)
263 | except json.decoder.JSONDecodeError:
264 | print(
265 |                 f"Test case file {yellow}{test_cases_file}{reset} is not a valid JSON file!"
266 | )
267 | raise SystemExit(1)
268 | else:
269 | with open(test_cases_file) as f:
270 | try:
271 | test_cases = json.load(f)
272 | except json.decoder.JSONDecodeError:
273 | print(
274 |                     f"Test case file {yellow}{test_cases_file}{reset} is not a valid JSON file!"
275 | )
276 | raise SystemExit(1)
277 |
278 | if Config.TEST_CASE_FILE_EXP:
279 | with open(test_out_file) as f:
280 | try:
281 | test_out = json.load(f)
282 | except json.decoder.JSONDecodeError:
283 | print(
284 |                     f"Test case file {yellow}{test_out_file}{reset} is not a valid JSON file!"
285 | )
286 | raise SystemExit(1)
287 | test_inp = test_cases
288 | else:
289 | test_inp = test_cases[0]
290 | test_out = test_cases[1]
291 |
292 | if isinstance(test_cases[0], dict):
293 | return convert_official_test_cases(test_cases)
294 |
295 | return test_inp, test_out
296 |
297 |
298 | def convert_official_test_cases(test_cases: List[dict[str, str]]) -> TestCases:
299 | test_inp, test_out = [], []
300 | for case in test_cases:
301 | try:
302 | test_inp.append(case["Input"].split("\n"))
303 | test_out.append(case["Output"])
304 | except KeyError:
305 |             print(f"Test case {yellow}{case}{reset} is not in a valid format!")
306 | raise SystemExit(1)
307 |
308 | return test_inp, test_out
309 |
310 |
311 | @dataclass
312 | class Emoji:
313 | stopwatch: str = ""
314 | hundred: str = ""
315 | poo: str = ""
316 | snake: str = ""
317 | otter: str = ""
318 | scroll: str = ""
319 | filebox: str = ""
320 | chart: str = ""
321 | rocket: str = ""
322 | warning: str = ""
323 | bang: str = ""
324 | stop: str = ""
325 | snail: str = ""
326 | leopard: str = ""
327 |
328 |
329 | class Lang(Enum):
330 | PYTHON = auto()
331 | OTHER = auto()
332 |
333 |
334 | # endregion ####################################################################
335 |
336 |
337 | def print_extra_stats(test_inp: TestInp, test_out: TestOut, num_cases: int) -> None:
338 | max_n = max(int(x[0].split()[0]) for x in test_inp)
339 | avg_n = sum(int(x[0].split()[0]) for x in test_inp) // num_cases
340 | max_k = max(int(x[0].split()[1]) for x in test_inp)
341 | avg_k = sum(int(x[0].split()[1]) for x in test_inp) // num_cases
342 |
343 | print(f" - Max N: {yellow}{max_n:_}{reset}")
344 | print(f" - Average N: {yellow}{avg_n:_}{reset}")
345 | print(f" - Max K: {yellow}{max_k:_}{reset}")
346 | print(f" - Average K: {yellow}{avg_k:_}{reset}")
347 |
348 |
349 | def print_begin(
350 | format: str,
351 | num_cases: int,
352 | test_inp: TestInp,
353 | test_out: TestOut,
354 | *,
355 | timeout: bool = False,
356 | ) -> None:
357 | command = Config.OTHER_LANG_COMMAND
358 |
359 | if lang is Lang.PYTHON:
360 | running = f"{emojis.snake} {yellow}Python solution{reset}"
361 | else:
362 | running = f"{emojis.otter} {yellow}{command[command.rfind('/') + 1:]}{reset}"
363 |
364 | print(f"{emojis.rocket}Started testing, format {format}:")
365 | print(f"Running:{running}")
366 | print(f" - Number of cases{emojis.filebox}: {cyan}{num_cases:_}{reset}")
367 | if timeout:
368 | print(f" - Timeout: {red}{Config.TIMEOUT}{reset} seconds{emojis.stop}")
369 | if Config.PRINT_EXTRA_STATS:
370 | print_extra_stats(test_inp[:num_cases], test_out[:num_cases], num_cases)
371 | if (
372 | solution_len
373 | and Config.PRINT_SOLUTION_LENGTH
374 | and (
375 | lang is Lang.OTHER
376 | and not Config.SOLUTION_SRC_FILE_NAME.endswith(".py")
377 | or Config.OTHER_LANG_COMMAND.endswith(".py")
378 | or lang is Lang.PYTHON
379 | )
380 | ):
381 | print(f" - Solution length{emojis.scroll}: {green}{solution_len}{reset} chars.")
382 | print()
383 |
384 |
385 | def print_summary(i: int, passed: int, time_taken: float) -> None:
386 | if Config.NUM_FAILED >= 0 and i - passed > Config.NUM_FAILED:
387 | print(
388 | f"{emojis.warning}Printed only first {yellow}{Config.NUM_FAILED}{reset} "
389 | f"failed cases!{emojis.warning}"
390 | )
391 | print(
392 | f"\rTo change how many failed cases to print change {cyan}NUM_FAILED{reset}"
393 | " in configuration section.\n"
394 | )
395 | e = f"{emojis.hundred}" if passed == i else f"{emojis.poo}"
396 | print(
397 | f"\rPassed: {green if passed == i else red}{passed:_}/{i:_}{reset} tests{e}{emojis.bang}"
398 | )
399 | print(f"{emojis.stopwatch}Finished in: {yellow}{time_taken:.4f}{reset} seconds")
400 |
401 |
402 | def print_speed_summary(speed_num_cases: int, loops: int, times: List[float]) -> None:
403 | times.sort()
404 | print(
405 | f"\rTest for speed passed{emojis.hundred}{emojis.bang if emojis.bang else '!'}\n"
406 | f" - Total time: {yellow}{sum(times):.4f}"
407 | f"{reset} seconds to complete {cyan}{loops:_}{reset} times {cyan}"
408 | f"{speed_num_cases:_}{reset} cases!"
409 | )
410 | print(
411 | f" -{emojis.leopard} Average loop time from top {min(5, loops)} fastest: "
412 | f"{yellow}{sum(times[:5])/min(5, loops):.4f}{reset} seconds /"
413 | f" {cyan}{speed_num_cases:_}{reset} cases."
414 | )
415 | print(
416 | f" -{emojis.leopard} Fastest loop time: {yellow}{times[0]:.4f}{reset} seconds /"
417 | f" {cyan}{speed_num_cases:_}{reset} cases."
418 | )
419 | print(
420 | f" - {emojis.snail}Slowest loop time: {yellow}{times[-1]:.4f}{reset} seconds /"
421 | f" {cyan}{speed_num_cases:_}{reset} cases."
422 | )
423 | print(
424 | f" - {emojis.stopwatch}Average loop time: {yellow}{sum(times)/loops:.4f}{reset} seconds"
425 | f" / {cyan}{speed_num_cases:_}{reset} cases."
426 | )
427 |
428 |
429 | def find_diff(out: str, exp: str) -> str:
430 | result = []
431 | for o, e in zip_longest(out, exp):
432 | if o == e:
433 | result.append(o)
434 | elif o is None:
435 | result.append(f"{red_bg}~{reset}")
436 | else:
437 | result.append(f"{red_bg}{o}{reset}")
438 |
439 | return "".join(result)
440 |
441 |
442 | def check_result(
443 | test_inp: TestInp, test_out: TestOut, num_cases: int, output: List[str]
444 | ) -> Tuple[int, int]:
445 | passed = i = oi = 0
446 | max_line_len = Config.TRUNCATE_FAILED_CASES if Config.TRUNCATE_FAILED_CASES > 0 else 10**6
447 | for i, (inp, exp) in enumerate(
448 | zip_longest(test_inp[:num_cases], test_out[:num_cases]), 1
449 | ):
450 | out_len = len(exp.split("\n"))
451 | out = "\n".join(output[oi : oi + out_len])
452 | oi += out_len
453 | if out != exp:
454 | if i - passed <= Config.NUM_FAILED or Config.NUM_FAILED == -1:
455 | print(f"Test nr:{i}\n Input: {cyan}")
456 | truncated = False
457 | for line in inp:
458 | truncated = truncated or len(line) > max_line_len
459 | print(line[:max_line_len] + " ..." if len(line) > max_line_len else line)
460 | if truncated:
461 | print(
462 | f"{reset}{emojis.warning}Printed input is truncated to {yellow}"
463 | f"{max_line_len}{reset} characters per line{emojis.warning}"
464 | )
465 | print(
466 | f"\rTo change how many characters to print change "
467 | f"{cyan}TRUNCATE_FAILED_CASES{reset} in configuration section.\n"
468 | )
469 | print(
470 | f"{reset}Your output: {find_diff(out, exp) if out else f'{red_bg}None{reset}'}"
471 | )
472 | print(f" Expected: {green}{exp}{reset}\n")
473 | else:
474 | passed += 1
475 |
476 | if num_cases + sum(x.count("\n") for x in test_out[:num_cases]) < len(output) - 1:
477 | print(
478 | f"{red}{emojis.warning}Your output has more lines than expected!{emojis.warning}\n"
479 |             f"Check that you don't print an empty line at the end.{reset}\n"
480 | )
481 | return passed, i
482 |
483 |
484 | ########################################################################
485 |
486 |
487 | def test_solution_aio(test_inp: TestInp, test_out: TestOut, num_cases: int) -> None:
488 | test_inp_: List[str] = functools.reduce(operator.iconcat, test_inp[:num_cases], [])
489 |
490 | @mock.patch("builtins.input", side_effect=[str(num_cases)] + test_inp_)
491 | def test_aio(input: Callable) -> List[str]:
492 | with Capturing() as output:
493 | solution()
494 |
495 | return output
496 |
497 | print_begin("All in one", num_cases, test_inp, test_out)
498 |
499 | start = perf_counter()
500 | output = test_aio() # type: ignore
501 | end = perf_counter()
502 |
503 | passed, i = check_result(test_inp, test_out, num_cases, output)
504 |
505 | print_summary(i, passed, end - start)
506 |
507 |
508 | def speed_test_solution_aio(
509 | test_inp: TestInp, test_out: TestOut, speed_num_cases: int
510 | ) -> None:
511 | test_inp_: List[str] = functools.reduce(
512 | operator.iconcat, test_inp[:speed_num_cases], []
513 | )
514 |
515 | @mock.patch("builtins.input", side_effect=[str(speed_num_cases)] + test_inp_)
516 | def test_for_speed_aio(input: Callable) -> bool:
517 |
518 | with Capturing() as output:
519 | solution()
520 |
521 | return "\n".join(output) == "\n".join(test_out[:speed_num_cases])
522 |
523 | loops = max(1, Config.NUMER_OF_LOOPS)
524 | print("\nSpeed test started.")
525 | print(f"Running: {yellow}Python solution{reset}")
526 | print(f'Testing {cyan}{speed_num_cases}{reset} cases, format "All in one":')
527 | print(f" - Number of loops: {cyan}{loops:_}{reset}")
528 | print(f" - Timeout: {red}{Config.TIMEOUT}{reset} seconds")
529 | if Config.PRINT_EXTRA_STATS:
530 | print_extra_stats(
531 | test_inp[:speed_num_cases], test_out[:speed_num_cases], speed_num_cases
532 | )
533 | print()
534 |
535 | times = []
536 | for i in range(loops):
537 | start = perf_counter()
538 | passed = test_for_speed_aio() # type: ignore
539 | times.append(perf_counter() - start)
540 | if not passed:
541 | print(f"{red}Failed at iteration {i + 1}!{reset}")
542 | break
543 | if sum(times) >= Config.TIMEOUT:
544 |             print(f"{red}Timeout at iteration {i + 1}!{reset}")
545 | break
546 | if 1 < loops <= 10 or not i % (loops * Config.PROGRESS_PERCENT):
547 | print(f"\rProgress: {yellow}{i:>{len(str(loops))}}/{loops}{reset}", end="")
548 | else:
549 | print_speed_summary(speed_num_cases, loops, times)
550 |
551 |
552 | def test_solution_obo(test_inp: TestInp, test_out: TestOut, num_cases: int) -> None:
553 | def test_obo(test: List[str]) -> Tuple[float, str]:
554 | @mock.patch("builtins.input", side_effect=test)
555 | def test_obo_(input: Callable) -> List[str]:
556 | with Capturing() as output:
557 | solution()
558 |
559 | return output
560 |
561 | start = perf_counter()
562 | output = test_obo_() # type: ignore
563 | end = perf_counter()
564 |
565 | return end - start, "\n".join(output)
566 |
567 | print_begin("One by one", num_cases, test_inp, test_out, timeout=True)
568 |
569 | times = []
570 | passed = i = 0
571 | for i in range(num_cases):
572 | test = ["1"] + test_inp[i] if Config.ADD_1_IN_OBO else test_inp[i]
573 | t, output = test_obo(test)
574 | times.append(t)
575 | if test_out[i] != output:
576 | if i - passed <= Config.NUM_FAILED or Config.NUM_FAILED == -1:
577 | print(f"\rTest nr:{i + 1} \n Input: {cyan}")
578 | pprint(test)
579 | print(f"{reset}Your output: {red}{output[0]}{reset}")
580 | print(f" Expected: {green}{test_out[i]}{reset}\n")
581 | else:
582 | passed += 1
583 | if sum(times) >= Config.TIMEOUT:
584 | print(f"{red}Timeout after {i + 1} cases!{reset}")
585 | break
586 | if 1 < num_cases <= 10 or not i % (num_cases * Config.PROGRESS_PERCENT):
587 | print(
588 | f"\rProgress: {yellow}{i + 1:>{len(str(num_cases))}}/{num_cases}{reset}",
589 | end="",
590 | )
591 |
592 | print_summary(i + 1, passed, sum(times))
593 |
594 |
595 | def speed_test_solution_obo(
596 | test_inp: TestInp, test_out: TestOut, speed_num_cases: int
597 | ) -> None:
598 | def test_for_speed_obo(test: List[str], out: str) -> Tuple[float, bool]:
599 | @mock.patch("builtins.input", side_effect=test)
600 | def test_obo_(input: Callable) -> List[str]:
601 | with Capturing() as output:
602 | solution()
603 |
604 | return output
605 |
606 | start = perf_counter()
607 | output = test_obo_() # type: ignore
608 | end = perf_counter()
609 |
610 | return end - start, "\n".join(output) == out
611 |
612 | loops = Config.NUMER_OF_LOOPS
613 | print("\nSpeed test started.")
614 | print(f"Running: {yellow}Python solution{reset}")
615 | print(f'Testing {cyan}{speed_num_cases}{reset} cases, format "One by one":')
616 | print(f" - Number of loops: {cyan}{loops:_}{reset}")
617 | print(f" - Timeout: {red}{Config.TIMEOUT}{reset} seconds")
618 | if Config.PRINT_EXTRA_STATS:
619 | print_extra_stats(
620 | test_inp[:speed_num_cases], test_out[:speed_num_cases], speed_num_cases
621 | )
622 | print()
623 |
624 | timedout = False
625 | times: List[float] = []
626 | for i in range(loops):
627 | loop_times = []
628 | for j in range(speed_num_cases):
629 | test = ["1"] + test_inp[j] if Config.ADD_1_IN_OBO else test_inp[j]
630 |
631 | t, passed = test_for_speed_obo(test, test_out[j])
632 | loop_times.append(t)
633 |
634 | if not passed:
635 | print(f"\r{red}Failed at iteration {i + 1}!{reset}")
636 | break
637 | if sum(times) >= Config.TIMEOUT:
638 |                 print(f"\r{red}Timeout at iteration {i + 1}!{reset}")
639 | timedout = True
640 | break
641 | if timedout:
642 | break
643 | times.append(sum(loop_times))
644 | if 1 < loops <= 10 or not i % (loops * Config.PROGRESS_PERCENT):
645 | print(f"\rProgress: {yellow}{i:>{len(str(loops))}}/{loops}{reset}", end="")
646 | else:
647 | print_speed_summary(speed_num_cases, loops, times)
648 |
649 |
650 | def debug_solution(test_inp: TestInp, test_out: TestOut, case_number: int) -> None:
651 | test = (
652 | ["1"] + test_inp[case_number - 1]
653 | if not Config.TEST_ONE_BY_ONE
654 | else test_inp[case_number - 1]
655 | )
656 |
657 | @mock.patch("builtins.input", side_effect=test)
658 | def test_debug(input: Callable) -> None:
659 | solution()
660 |
661 | command = Config.OTHER_LANG_COMMAND
662 |
663 | if lang is Lang.PYTHON:
664 | running = f"{emojis.snake} {yellow}Python solution{reset}"
665 | else:
666 | running = f"{emojis.otter} {yellow}{command[command.rfind('/') + 1:]}{reset}"
667 |
668 | print('Started testing, format "Debug":')
669 | print(f"Running: {running}")
670 | print(f" Test nr: {cyan}{case_number}{reset}")
671 |
672 | print(f" Input: {cyan}")
673 | pprint(test_inp[case_number - 1])
674 | print(f"{reset} Expected: {green}{test_out[case_number - 1]}{reset}")
675 | print("Your output:")
676 | if command:
677 | test_ = "1\n" + "\n".join(test_inp[case_number - 1])
678 | start = perf_counter()
679 | proc = subprocess.run(
680 | command.split(),
681 | input=test_,
682 | stdout=subprocess.PIPE,
683 | stderr=subprocess.PIPE,
684 | universal_newlines=True,
685 | )
686 | end = perf_counter()
687 | print(proc.stderr)
688 | print(proc.stdout)
689 | print(f"\nTime: {yellow}{end - start}{reset} seconds")
690 |
691 | else:
692 | test_debug() # type: ignore
693 |
694 |
695 | def test_other_lang(
696 | command: str, test_inp: TestInp, test_out: TestOut, num_cases: int
697 | ) -> None:
698 | test_inp_ = (
699 | f"{num_cases}\n"
700 | + "\n".join(functools.reduce(operator.iconcat, test_inp[:num_cases], []))
701 | + "\n"
702 | )
703 | print_begin("All in one", num_cases, test_inp, test_out)
704 |
705 | start = perf_counter()
706 | proc = subprocess.run(
707 | command.split(),
708 | input=test_inp_,
709 | stdout=subprocess.PIPE,
710 | stderr=subprocess.PIPE,
711 | universal_newlines=True,
712 | )
713 | end = perf_counter()
714 |
715 | err = proc.stderr
716 | output = proc.stdout.split("\n")
717 | output = [x.strip("\r") for x in output]
718 |
719 | if err:
720 | print(err)
721 | raise SystemExit(1)
722 |
723 | passed, i = check_result(test_inp, test_out, num_cases, output)
724 |
725 | print_summary(i, passed, end - start)
726 |
727 |
728 | def speed_test_other_aio(
729 | command: str, test_inp: TestInp, test_out: TestOut, speed_num_cases: int
730 | ) -> None:
731 | test_inp_ = (
732 | f"{speed_num_cases}\n"
733 | + "\n".join(functools.reduce(operator.iconcat, test_inp[:speed_num_cases], []))
734 | + "\n"
735 | )
736 |
737 | def run() -> Tuple[bool, float]:
738 | start = perf_counter()
739 | proc = subprocess.run(
740 | command.split(),
741 | input=test_inp_,
742 | stdout=subprocess.PIPE,
743 | stderr=subprocess.PIPE,
744 | universal_newlines=True,
745 | )
746 | end = perf_counter()
747 |
748 | err = proc.stderr
749 | output = proc.stdout.split("\n")
750 | output = [x.strip("\r") for x in output]
751 | if err:
752 | print(err)
753 | raise SystemExit(1)
754 |
755 | return (
756 | "\n".join(output).strip() == "\n".join(test_out[:speed_num_cases]),
757 | end - start,
758 | )
759 |
760 | loops = max(1, Config.NUMER_OF_LOOPS)
761 | print("\nSpeed test started.")
762 | print(f"Running: {emojis.otter} {yellow}{command[command.rfind('/') + 1:]}{reset}")
763 | print(f'Testing {cyan}{speed_num_cases}{reset} cases, format "All in one":')
764 | print(f" - Number of loops: {cyan}{loops:_}{reset}")
765 | print(f" - Timeout: {red}{Config.TIMEOUT}{reset} seconds")
766 | if Config.PRINT_EXTRA_STATS:
767 | print_extra_stats(
768 | test_inp[:speed_num_cases], test_out[:speed_num_cases], speed_num_cases
769 | )
770 | print()
771 |
772 | times = []
773 | for i in range(loops):
774 | passed, time = run()
775 | times.append(time)
776 | if not passed:
777 | print(f"{red}Failed at iteration {i + 1}!{reset}")
778 | break
779 | if sum(times) >= Config.TIMEOUT:
780 |             print(f"{red}Timeout at iteration {i + 1}!{reset}")
781 | break
782 | if 1 < loops <= 10 or not i % (loops * Config.PROGRESS_PERCENT):
783 | print(f"\rProgress: {yellow}{i:>{len(str(loops))}}/{loops}{reset}", end="")
784 | else:
785 | print_speed_summary(speed_num_cases, loops, times)
786 |
787 |
788 | ########################################################################
789 |
790 |
791 | def parse_args() -> None:
792 | parser = argparse.ArgumentParser(
793 | usage="%(prog)s [options]",
794 |         description="Yan Tovis' unofficial tester for Tech With Tim Discord Weekly Challenges. "
795 |         "You can configure it by editing the CONFIGURATION section of this tester file "
796 |         "or by using command line arguments.",
797 | epilog="All file paths must be absolute or relative to this tester!",
798 | )
799 |
800 | parser.add_argument(
801 | "-s",
802 | "--source",
803 | metavar="path_src_file",
804 | help="Path to solution source file for Python solution, or for other languages if you "
805 | "want solution length to be printed.",
806 | )
807 | parser.add_argument(
808 | "-c",
809 | "--command",
810 | metavar="CMD",
811 |         help="Command to execute a solution written in a language other than Python.",
812 | )
813 | parser.add_argument(
814 | "-i",
815 | "--input",
816 | metavar="test_cases",
817 | help="Path to test cases input (input/expected output) JSON file.",
818 | )
819 | parser.add_argument(
820 | "-e",
821 | "--expected",
822 | metavar="test_output",
823 |         help="Path to the test cases' expected values if they are in a separate "
824 |         "file from the input test cases.",
825 | )
826 | parser.add_argument(
827 | "--nocolor",
828 | action="store_true",
829 |         help="If you don't want color output in terminal, or if your terminal "
830 |         "doesn't support colors and the script didn't detect this.",
831 | )
832 | parser.add_argument(
833 | "-n",
834 | "--number-cases",
835 | metavar="",
836 | type=int,
837 | help="Number of test cases to use to test.",
838 | )
839 | parser.add_argument(
840 | "-d",
841 | "--debug",
842 | metavar="",
843 | type=int,
844 | help="Test case number if you want to print something extra for debugging. "
845 |         "Your solution will be run with only that one test case and the whole output "
846 | "will be printed.",
847 | )
848 | parser.add_argument(
849 | "--onebyone",
850 | action="store_true",
851 |         help="Use if the official challenge tester gives one test case's input and "
852 |         "runs your solution again for the next test case.",
853 | )
854 | parser.add_argument(
855 | "--add1",
856 | action="store_true",
857 |         help="If you want to test your solution one test case at a time, "
858 |         "and the solution is written to take all test cases at once, this will add '1' "
859 |         "as the first input line.",
860 | )
861 | parser.add_argument(
862 | "-S",
863 | "--speedtest",
864 | metavar="",
865 | type=int,
866 | help="Number of times to run tests to get more accurate times.",
867 | )
868 | parser.add_argument(
869 | "--number-speed-cases",
870 | metavar="",
871 | type=int,
872 |         help="How many test cases to use per loop when speed testing.",
873 | )
874 | parser.add_argument(
875 | "--timeout",
876 | metavar="",
877 | type=int,
878 |         help="Timeout in seconds. Will not time out in the middle of a test case. "
879 |         'Not working in "All in One" mode. Will time out only between test cases / loops.',
880 | )
881 | parser.add_argument(
882 | "-x",
883 | "--noextra",
884 | action="store_true",
885 |         help="Use if this tester wasn't prepared for the challenge you're testing.",
886 | )
887 | parser.add_argument(
888 | "-p",
889 | "--progress",
890 | metavar="",
891 | type=float,
892 | help="How often to update progress: must be > 0 and < 1. 0.1 - means "
893 | "update every 10%% completed tests, 0.05 - means update every 5%% completed tests.",
894 | )
895 | parser.add_argument(
896 | "-f",
897 | "--failed",
898 | metavar="",
899 | type=int,
900 | help="Number of failed tests to print (-1 to print all failed).",
901 | )
902 | parser.add_argument(
903 | "-t",
904 | "--truncate",
905 | metavar="",
906 | type=int,
907 | help="Maximum line length to print failed case input (-1 to print full line).",
908 | )
909 | parser.add_argument(
910 | "-l",
911 | "--nolength",
912 | action="store_true",
913 | help="Use if you want to share result but don't want to share solution length.",
914 | )
915 | parser.add_argument(
916 | "-E",
917 | "--emoji",
918 | action="store_true",
919 | help="Use unicode emoji. Your terminal must support unicode and your "
920 | "terminal font must have glyphs for emoji.",
921 | )
922 |
923 | args = parser.parse_args()
924 |
925 | if args.source:
926 | Config.SOLUTION_SRC_FILE_NAME = args.source
927 | if args.command:
928 | Config.OTHER_LANG_COMMAND = args.command
929 | if args.input:
930 | Config.TEST_CASE_FILE = args.input
931 | if args.expected:
932 | Config.TEST_CASE_FILE_EXP = args.expected
933 | if args.nocolor:
934 | Config.COLOR_OUT = False
935 | if args.number_cases:
936 | Config.NUMBER_OF_TEST_CASES = args.number_cases
937 | if args.debug:
938 | Config.DEBUG_TEST = True
939 | Config.DEBUG_TEST_NUMBER = args.debug
940 | if args.onebyone:
941 | Config.TEST_ONE_BY_ONE = True
942 | if args.add1:
943 | Config.ADD_1_IN_OBO = True
944 | if args.speedtest:
945 | Config.SPEED_TEST = True
946 | Config.NUMER_OF_LOOPS = args.speedtest
947 | if args.number_speed_cases:
948 | Config.NUMBER_SPEED_TEST_CASES = args.number_speed_cases
949 | if args.timeout:
950 | Config.TIMEOUT = args.timeout
951 | if args.noextra:
952 | Config.PRINT_EXTRA_STATS = False
953 | if args.progress:
954 | Config.PROGRESS_PERCENT = args.progress
955 | if args.failed:
956 | Config.NUM_FAILED = args.failed
957 | if args.truncate:
958 | Config.TRUNCATE_FAILED_CASES = args.truncate
959 | if args.nolength:
960 | Config.PRINT_SOLUTION_LENGTH = False
961 | if args.emoji:
962 | Config.USE_EMOJI = True
963 |
964 |
965 | def main(path: str) -> None:
966 | test_inp, test_out = read_test_cases()
967 |
968 | if Config.DEBUG_TEST:
969 | case_number = (
970 | Config.DEBUG_TEST_NUMBER
971 | if 0 <= Config.DEBUG_TEST_NUMBER - 1 < len(test_out)
972 | else 1
973 | )
974 | debug_solution(test_inp, test_out, case_number)
975 | raise SystemExit(0)
976 |
977 | if 0 < Config.NUMBER_OF_TEST_CASES < len(test_out):
978 | num_cases = Config.NUMBER_OF_TEST_CASES
979 | else:
980 | num_cases = len(test_out)
981 |
982 | if 0 < Config.NUMBER_SPEED_TEST_CASES < len(test_out):
983 | speed_num_cases = Config.NUMBER_SPEED_TEST_CASES
984 | else:
985 | speed_num_cases = len(test_out)
986 |
987 | if Config.OTHER_LANG_COMMAND:
988 | os.chdir(path)
989 | test_other_lang(Config.OTHER_LANG_COMMAND, test_inp, test_out, num_cases)
990 | if Config.SPEED_TEST:
991 | speed_test_other_aio(
992 | Config.OTHER_LANG_COMMAND, test_inp, test_out, speed_num_cases
993 | )
994 | raise SystemExit(0)
995 |
996 | if Config.TEST_ONE_BY_ONE:
997 | test_solution_obo(test_inp, test_out, num_cases)
998 | else:
999 | test_solution_aio(test_inp, test_out, num_cases)
1000 | if Config.SPEED_TEST:
1001 | speed_test_solution_aio(test_inp, test_out, speed_num_cases)
1002 |
1003 |
1004 | if __name__ == "__main__":
1005 | path = os.path.dirname(os.path.abspath(__file__))
1006 | os.chdir(path)
1007 |
1008 | parse_args()
1009 |
1010 | color_out = Config.COLOR_OUT and enable_win_term_mode()
1011 | red, green, yellow, cyan, reset, red_bg = (
1012 | (
1013 | "\x1b[31m",
1014 | "\x1b[32m",
1015 | "\x1b[33m",
1016 | "\x1b[36m",
1017 | "\x1b[0m",
1018 | "\x1b[41m",
1019 | )
1020 | if color_out
1021 | else [""] * 6
1022 | )
1023 |
1024 | if Config.USE_EMOJI:
1025 | emojis = Emoji(
1026 | stopwatch="\N{stopwatch} ",
1027 | hundred="\N{Hundred Points Symbol}",
1028 | poo=" \N{Pile of Poo}",
1029 | snake="\N{snake}",
1030 | otter="\N{otter}",
1031 | scroll=" \N{scroll}",
1032 | filebox=" \N{Card File Box} ",
1033 | chart=" \N{Chart with Upwards Trend} ",
1034 | rocket="\N{rocket} ",
1035 | warning=" \N{warning sign} ",
1036 | bang="\N{Heavy Exclamation Mark Symbol}",
1037 | stop="\N{Octagonal sign}",
1038 | snail="\N{snail} ",
1039 | leopard=" \N{leopard}",
1040 | )
1041 | else:
1042 | emojis = Emoji()
1043 |
1044 | solution_len = create_solution_function(path, Config.SOLUTION_SRC_FILE_NAME)
1045 |
1046 | lang = Lang.OTHER if Config.OTHER_LANG_COMMAND else Lang.PYTHON
1047 |
1048 | if lang is Lang.PYTHON:
1049 | if not solution_len:
1050 | print("Could not import solution!")
1051 | raise SystemExit(1)
1052 |
1053 | from temp_solution_file import solution # type: ignore
1054 |
1055 | test_cases_file = os.path.join(path, Config.TEST_CASE_FILE)
1056 | if not os.path.exists(test_cases_file):
1057 | print(
1058 | f"Can't find file with test cases {red}{os.path.join(path, Config.TEST_CASE_FILE)}{reset}!"
1059 | )
1060 | print("Make sure it is in the same directory as this script!")
1061 | raise SystemExit(1)
1062 |
1063 | test_out_file = (
1064 | os.path.join(path, Config.TEST_CASE_FILE_EXP)
1065 | if Config.TEST_CASE_FILE_EXP
1066 | else test_cases_file
1067 | )
1068 | if Config.TEST_CASE_FILE_EXP and not os.path.exists(test_out_file):
1069 | print(
1070 | f"Can't find file with output for test cases {red}{test_out_file}{reset}!"
1071 | )
1072 | print("Make sure it is in the same directory as this script!\n")
1073 |         print(
1074 |             f"If the output is in the same file as the input, set {cyan}TEST_CASE_FILE_EXP{reset} "
1075 |             f"to an {yellow}empty string{reset} in the configuration section."
1076 |         )
1077 | raise SystemExit(1)
1078 |
1079 | main(path)
1080 |
--------------------------------------------------------------------------------
/Challenge_211/Challenge_211.md:
--------------------------------------------------------------------------------
1 | # Challenge 211: Halloween, Part 2
2 |
3 | **Difficulty: 4/10
4 | Labels: Dynamic Programming**
5 |
6 | During Halloween, SnowballSH is collecting candies from his `n` neighbors. Each neighbor is offering a box of candy with a certain tastiness. SnowballSH wants to maximize the sum of the tastinesses he ends up with.
7 |
8 | Each box of candy that a neighbor offers **has a certain weight**, and SnowballSH’s bag can only carry up to `k` units of weight.
9 |
10 | Can you help SnowballSH determine the maximum possible sum of tastinesses he can collect, given the weight limitations of his bag?
11 |
12 | ## Task
13 |
14 | You are given a number `T`, and `T` test cases follow. For each test case:
15 |
16 | - The first line contains two integers `n` and `k`, where `n` is the number of neighbors, and `k` is the maximum weight SnowballSH's bag can carry.
17 | - The second line contains an array of `n` integers `w`, where `w[i]` is the weight of the box of candy offered by the `i`-th neighbor.
18 | - The third line contains an array of `n` integers `a`, where `a[i]` is the tastiness of the box of candy offered by the `i`-th neighbor.
19 |
20 | Output a single integer, the maximum possible sum of the tastinesses of the boxes SnowballSH can choose, without exceeding the weight limit `k` of his bag.
21 |
22 | ### Examples
23 |
24 | #### Input
25 |
26 | ```rust
27 | 7
28 | 6 5
29 | 3 5 1 4 1 3
30 | 4 5 3 4 2 5
31 | 8 15
32 | 1 3 2 5 5 3 4 2
33 | 9 7 2 10 9 5 3 10
34 | 3 100
35 | 101 101 101
36 | 1 2 3
37 | 9 30
38 | 22 8 13 31 47 19 2 26 39
39 | 162 186 24 200 68 192 45 14 113
40 | 7 123
41 | 46 30 31 1 45 47 39
42 | 108 87 79 175 8 142 1
43 | 3 97
44 | 4 1 39
45 | 178 192 18
46 | 1 1000000000
47 | 1000
48 | 10000
49 | ```
50 |
51 | #### Output
52 |
53 | ```rust
54 | 10
55 | 41
56 | 0
57 | 423
58 | 483
59 | 388
60 | 10000
61 | ```
62 |
63 | - For the first test case, it is optimal for SnowballSH to pick the boxes with tastiness 2, 3, and 5, which have weights 1, 1, and 3. Since `1+1+3=5`, they all fit into his bag. The answer is therefore `2+3+5=10`.
64 | - For the third test case, it is impossible for SnowballSH to take any box, so the answer is `0`.
65 |
66 | ### Note
67 |
68 | - `1 <= T`
69 | - `1 <= n <= 1,000`
70 | - `1 <= k <= 10^9`
71 | - `1 <= a[i] <= 10,000`
72 | - `1 <= w[i] <= 1,000`
73 | - **It is guaranteed that the sum of all elements of `w` does not exceed 1,000.**
74 | - **Remember that `k` is very large.**
75 |
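Since the labels point at dynamic programming, here is a minimal sketch of a classic 0/1-knapsack DP over total weight (an illustrative solution, not the official reference one). The key observation from the notes: the sum of all weights is at most 1,000, so the DP capacity can be capped at `min(k, sum(w))` even though `k` itself can be as large as 10^9.

```python
def solve_knapsack(k: int, w: list[int], a: list[int]) -> int:
    # k can be up to 10^9, so never use it directly as an array size;
    # the total weight of all boxes is guaranteed to be <= 1_000.
    cap = min(k, sum(w))
    # dp[c] = best total tastiness achievable with total weight <= c
    dp = [0] * (cap + 1)
    for wi, ai in zip(w, a):
        # Iterate capacities downwards so each box is used at most once.
        for c in range(cap, wi - 1, -1):
            dp[c] = max(dp[c], dp[c - wi] + ai)
    return dp[cap]
```

For the first example (`k = 5`, `w = [3, 5, 1, 4, 1, 3]`, `a = [4, 5, 3, 4, 2, 5]`) this returns `10`, matching the expected output.
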
76 | ### Submissions
77 |
78 | Code can be written in any of these languages:
79 |
80 | - `Python` 3.11
81 | - `C` (gnu17) / `C++` (c++20) - GCC 12.2
82 | - `Ruby` 3.3.4
83 | - `Golang` 1.21
84 | - `Java` 19 (Open JDK) - use **"class Main"!!!**
85 | - `Rust` 1.72
86 | - `C#` 11 (.Net 7.0)
87 | - `JavaScript` ES2023 (Node.js 20.6)
88 | - `Zig` 0.13.0
89 |
90 | To download the tester for this challenge, click [here](https://downgit.github.io/#/home?url=https://github.com/Pomroka/TWT_Challenges_Tester/tree/main/Challenge_211)
91 |
--------------------------------------------------------------------------------
/Challenge_211/test_cases.json.gz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Pomroka/TWT_Challenges_Tester/9ab9f21a77355e08999e77b9c838e176934dc9f7/Challenge_211/test_cases.json.gz
--------------------------------------------------------------------------------
/Challenge_211/test_challenge_211.py:
--------------------------------------------------------------------------------
1 | """
2 | How to use?
3 | Put this file, the test cases file, and the file with your solution in the same folder.
4 | Make sure Python has permission to save to this folder.
5 |
6 | This tester will create one file, "temp_solution_file.py"; make sure there's no such
7 | file in the same folder, or that there's nothing important in it, cos it will be overwritten!
8 |
9 | Change SOLUTION_SRC_FILE_NAME to your file name in CONFIGURATION section.
10 |
11 | If this tester wasn't prepared by me for the challenge you want to test,
12 | you may need to adjust other configuration settings. Read the comments on each.
13 |
14 | If you want to use your own test cases, they must be in JSON format.
15 | [
16 |     [ # this list can be in a separate file, for inputs only
17 | ["test case 1"], # multiline case ["line_1", "line_2", ... ,"line_n"]
18 | ["test case 2"],
19 | ...
20 | ["test case n"]
21 | ],
22 | [ # and this for output only
23 | "output 1",
24 | "output 2",
25 | ...
26 | "output n"
27 | ]
28 | ]
29 | All values must be strings! Cast all ints, floats, etc. to string before dumping JSON!
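
For example, a hypothetical dump of your own test cases (names and values made up)
could look like this, pointing TEST_CASE_FILE at the resulting file:

    import gzip, json
    cases = [
        [["2 1", "5 3"]],   # inputs: one case with first line "2 1", second line "5 3"
        ["5"],              # expected outputs, one string per case
    ]
    with gzip.open("my_test_cases.json.gz", "wt") as f:
        json.dump(cases, f)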
30 |
31 | WARNING: My tester ignores printing in input(), but the official tester FAILS if you
32 | print something in input()
33 | Don't do that: input("What is the test number?")
34 | Use empty input: input()
35 |
36 | Some possible errors:
37 | - None in "Your output": Your solution didn't print for all cases.
38 | - None in "Input": Your solution prints more times than there are cases.
39 | - If you see None in "Input" or "Your output", don't check failed cases until
40 |   you fix the problem with printing, cos "Input" and "Your output" are misaligned
41 |   after the first missing/extra print
42 | - StopIteration: Your solution tries to get more input than there are test cases
43 | - If you use `open` instead of `input` you get a `StopIteration` error in my tester;
44 |   to avoid that, use OTHER_LANG_COMMAND = "python to_submit_ch_83.py"
45 | - If you call your function inside `if __name__ == '__main__':`, by default
46 |   your functions won't be called, cos your solution is imported.
47 |   To avoid that, use OTHER_LANG_COMMAND = "python to_submit_ch_83.py"
48 |   or don't use `if __name__ == '__main__':`
49 | """
50 | from __future__ import annotations
51 |
52 | from dataclasses import dataclass
53 |
54 |
55 | ########## CONFIGURATION ################
56 | @dataclass
57 | class Config:
58 |
59 |     # Name of your file with the solution. If it's not in the same folder as this script,
60 |     # add an absolute path or one relative to this script file.
61 |     # For languages other than Python, fill this with the source code file name if
62 |     # you want the solution length to be displayed.
63 | # Examples:
64 | # Absolute path
65 | # SOLUTION_SRC_FILE_NAME = "/home/user/Dev/Cpp/c83_c/c83_c/src/Main.cpp"
66 | # Relative path to this script file
67 | # SOLUTION_SRC_FILE_NAME = "Rust/C83_rust/src/main.rs"
68 | # File in same folder as this script
69 | SOLUTION_SRC_FILE_NAME = "to_submit_ch_211.py"
70 |
71 |     # Command to run your solution if it's written in a language other than Python.
72 |     # For compiled languages - compile it yourself and use the compiled executable file name.
73 |     # For interpreted languages - give the full command to run your solution.
74 | # Examples:
75 | # OTHER_LANG_COMMAND = "c83_cpp.exe"
76 | # OTHER_LANG_COMMAND = "Cpp/c83_c/bin/x64/Debug/c83_c.exe"
77 | # OTHER_LANG_COMMAND = "Cpp/c83_c/bin/x64/Release/c83_c"
78 | # OTHER_LANG_COMMAND = "/home/user/Dev/Rust/c83_rust/target/release/c83_rust"
79 | # OTHER_LANG_COMMAND = "d:/Dev/C_Sharp/c83_cs/bin/Debug/net6.0/c83_cs.exe"
80 | # OTHER_LANG_COMMAND = "java -cp Java/ c83_java.Main"
81 | OTHER_LANG_COMMAND = ""
82 |
83 |     # Name of the file with input test cases (and outputs if TEST_CASE_FILE_EXP is empty)
84 |     # If the test cases file is compressed, you don't need to extract it; just give the name
85 |     # of the compressed file (with the .gz extension)
86 | TEST_CASE_FILE = "test_cases.json.gz"
87 |
88 |     # If test case inputs and expected outputs are in separate files, the name of the file
89 |     # with the expected outputs for the test cases. Empty string - if they're in one file.
90 | TEST_CASE_FILE_EXP = ""
91 |
92 |     # True - if you want colors in terminal output, False - otherwise, or if your terminal
93 |     # doesn't support colors and the script didn't detect this
94 | COLOR_OUT: bool = True
95 |
96 |     # -1 - use all test cases from the test case file; you can limit here how many
97 |     # test cases you want to test your solution with. If you enter a number bigger than
98 |     # the number of tests, all tests will be used
99 | NUMBER_OF_TEST_CASES = -1
100 |
101 |     # True - if you want to print some debug information; you need to set
102 |     # DEBUG_TEST_NUMBER to the test number you want to debug
103 | DEBUG_TEST: bool = False
104 |
105 |     # Provide the test number for which you want to see your debug prints. If you enter a
106 |     # number out of range, the first test case will be used. (This number is 1-indexed; it's the
107 |     # same number printed for failed test cases in a normal test.) Ignored when DEBUG_TEST is False
108 | DEBUG_TEST_NUMBER = 1
109 |
110 |     # True - if the official challenge tester gives one test case's input and runs your solution
111 |     # again for the next test case, False - if the official challenge tester gives all test cases
112 |     # at once and your solution needs to take care of it.
113 | TEST_ONE_BY_ONE: bool = False
114 |
115 |     # True - if you want to test your solution one test case at a time while the solution
116 |     # is written to take all test cases at once; this will add "1" as the first input line
117 |     # Ignored if TEST_ONE_BY_ONE is False
118 | ADD_1_IN_OBO: bool = False
119 |
120 | # True - if you want to measure performance of your solution, running it multiple times
121 | SPEED_TEST: bool = False
122 |
123 |     # How many test cases to use per loop; same rules apply as for NUMBER_OF_TEST_CASES
124 | NUMBER_SPEED_TEST_CASES = -1
125 |
126 |     # How many times to run the tests
127 | NUMER_OF_LOOPS = 2
128 |
129 |     # Timeout in seconds. Will not time out in the middle of a test case (if TEST_ONE_BY_ONE is False,
130 |     # will not time out in the middle of a loop). Will time out only between test cases / loops.
131 |     # If you don't want a timeout, set it to some big number or `float("inf")`
132 | TIMEOUT = 300
133 |
134 |     # Set to False if this tester wasn't prepared for the challenge you're testing,
135 |     # or adjust the prints in the `print_extra_stats` function yourself
136 | PRINT_EXTRA_STATS: bool = True
137 |
138 | # How often to update progress: must be > 0 and < 1
139 | # 0.1 - means update every 10% completed tests
140 | # 0.05 - means update every 5% completed tests
141 | PROGRESS_PERCENT = 0.1
142 |
143 | # How many failed cases to print
144 | # Set to -1 to print all failed cases
145 | NUM_FAILED = 5
146 |
147 | # Maximum length of line to print for failed cases
148 | # Set to -1 to print full line
149 | TRUNCATE_FAILED_CASES = 1000
150 |
151 | # Set to False if you want to share result but don't want to share solution length
152 | PRINT_SOLUTION_LENGTH: bool = True
153 |
154 |     # If your terminal supports unicode and your font has glyphs for emoji, you can
155 |     # switch this to True
156 | USE_EMOJI: bool = False
157 |
158 |
159 | # region #######################################################################
160 |
161 | import argparse
162 | import functools
163 | import gzip
164 | import json
165 | import operator
166 | import os
167 | import platform
168 | import subprocess
169 | import sys
170 | from enum import Enum, auto
171 | from io import StringIO
172 | from itertools import zip_longest
173 | from pprint import pprint
174 | from time import perf_counter
175 | from typing import Callable, List, Tuple
176 | from unittest import mock
177 |
178 | TestInp = List[List[str]]
179 | TestOut = List[str]
180 | TestCases = Tuple[TestInp, TestOut]
181 |
182 |
183 | def enable_win_term_mode() -> bool:
184 | win = platform.system().lower() == "windows"
185 | if win is False:
186 | return True
187 |
188 | from ctypes import byref, c_int, c_void_p, windll
189 |
190 | ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004
191 | INVALID_HANDLE_VALUE = c_void_p(-1).value
192 | STD_OUTPUT_HANDLE = c_int(-11)
193 |
194 | hStdout = windll.kernel32.GetStdHandle(STD_OUTPUT_HANDLE)
195 | if hStdout == INVALID_HANDLE_VALUE:
196 | return False
197 |
198 | mode = c_int(0)
199 | ok = windll.kernel32.GetConsoleMode(c_int(hStdout), byref(mode))
200 | if not ok:
201 | return False
202 |
203 | mode = c_int(mode.value | ENABLE_VIRTUAL_TERMINAL_PROCESSING)
204 | ok = windll.kernel32.SetConsoleMode(c_int(hStdout), mode)
205 | if not ok:
206 | return False
207 |
208 | return True
209 |
210 |
211 | class Capturing(list):
212 | def __enter__(self) -> "Capturing":
213 | self._stdout = sys.stdout
214 | sys.stdout = self._stringio = StringIO()
215 | return self
216 |
217 | def __exit__(self, *args: str) -> None:
218 | self.extend(self._stringio.getvalue().splitlines())
219 | del self._stringio # free up some memory
220 | sys.stdout = self._stdout
221 |
222 |
223 | def create_solution_function(path: str, file_name: str) -> int:
224 | if not file_name:
225 | return 0
226 |
227 | if file_name.find("/") == -1 or file_name.find("\\") == -1:
228 | file_name = os.path.join(path, file_name)
229 |
230 | if not os.path.exists(file_name):
231 | print(f"Can't find file {red}{file_name}{reset}!\n")
232 |         print("Make sure:\n - your file is in the same directory as this script.")
233 |         print(" - or give an absolute path to your file")
234 |         print(" - or give a relative path from this script.\n")
235 | print(f"Current Working Directory is: {yellow}{os.getcwd()}{reset}")
236 |
237 | return 0
238 |
239 | solution = []
240 | with open(file_name, newline="") as f:
241 | solution = f.readlines()
242 |
243 | sol_len = sum(map(len, solution))
244 |
245 | if not file_name.endswith(".py"):
246 | return sol_len
247 |
248 | tmp_name = os.path.join(path, "temp_solution_file.py")
249 | with open(tmp_name, "w") as f:
250 | f.write("def solution():\n")
251 | for line in solution:
252 | f.write(" " + line)
253 |
254 | return sol_len
255 |
256 |
257 | def read_test_cases() -> TestCases:
258 | if test_cases_file.endswith(".gz"):
259 | with gzip.open(test_cases_file, "rb") as g:
260 | data = g.read()
261 | try:
262 | test_cases = json.loads(data)
263 | except json.decoder.JSONDecodeError:
264 | print(
265 |                 f"Test case file {yellow}{test_cases_file}{reset} is not a valid JSON file!"
266 | )
267 | raise SystemExit(1)
268 | else:
269 | with open(test_cases_file) as f:
270 | try:
271 | test_cases = json.load(f)
272 | except json.decoder.JSONDecodeError:
273 | print(
274 |                     f"Test case file {yellow}{test_cases_file}{reset} is not a valid JSON file!"
275 | )
276 | raise SystemExit(1)
277 |
278 | if Config.TEST_CASE_FILE_EXP:
279 | with open(test_out_file) as f:
280 | try:
281 | test_out = json.load(f)
282 | except json.decoder.JSONDecodeError:
283 | print(
284 |                     f"Test case file {yellow}{test_out_file}{reset} is not a valid JSON file!"
285 | )
286 | raise SystemExit(1)
287 | test_inp = test_cases
288 | else:
289 | test_inp = test_cases[0]
290 | test_out = test_cases[1]
291 |
292 | if isinstance(test_cases[0], dict):
293 | return convert_official_test_cases(test_cases)
294 |
295 | return test_inp, test_out
296 |
297 |
298 | def convert_official_test_cases(test_cases: List[dict[str, str]]) -> TestCases:
299 | test_inp, test_out = [], []
300 | for case in test_cases:
301 | try:
302 | test_inp.append(case["Input"].split("\n"))
303 | test_out.append(case["Output"])
304 | except KeyError:
305 |             print(f"Test case {yellow}{case}{reset} is not in a valid format!")
306 | raise SystemExit(1)
307 |
308 | return test_inp, test_out
309 |
310 |
311 | @dataclass
312 | class Emoji:
313 | stopwatch: str = ""
314 | hundred: str = ""
315 | poo: str = ""
316 | snake: str = ""
317 | otter: str = ""
318 | scroll: str = ""
319 | filebox: str = ""
320 | chart: str = ""
321 | rocket: str = ""
322 | warning: str = ""
323 | bang: str = ""
324 | stop: str = ""
325 | snail: str = ""
326 | leopard: str = ""
327 |
328 |
329 | class Lang(Enum):
330 | PYTHON = auto()
331 | OTHER = auto()
332 |
333 |
334 | # endregion ####################################################################
335 |
336 |
337 | def print_extra_stats(test_inp: TestInp, test_out: TestOut, num_cases: int) -> None:
338 | max_n = max(int(x[0].split()[0]) for x in test_inp)
339 | avg_n = sum(int(x[0].split()[0]) for x in test_inp) // num_cases
340 | max_k = max(int(x[0].split()[1]) for x in test_inp)
341 | avg_k = sum(int(x[0].split()[1]) for x in test_inp) // num_cases
342 |     zero = sum(x == "0" for x in test_out)  # cases whose expected answer is "0"
343 |
344 | print(f" - Max N: {yellow}{max_n:_}{reset}")
345 | print(f" - Average N: {yellow}{avg_n:_}{reset}")
346 | print(f" - Max K: {yellow}{max_k:_}{reset}")
347 | print(f" - Average K: {yellow}{avg_k:_}{reset}")
348 |     print(f" - Zero answers: {zero}")
349 |
350 |
351 |
352 | def print_begin(
353 | format: str,
354 | num_cases: int,
355 | test_inp: TestInp,
356 | test_out: TestOut,
357 | *,
358 | timeout: bool = False,
359 | ) -> None:
360 | command = Config.OTHER_LANG_COMMAND
361 |
362 | if lang is Lang.PYTHON:
363 | running = f"{emojis.snake} {yellow}Python solution{reset}"
364 | else:
365 | running = f"{emojis.otter} {yellow}{command[command.rfind('/') + 1:]}{reset}"
366 |
367 | print(f"{emojis.rocket}Started testing, format {format}:")
368 | print(f"Running:{running}")
369 | print(f" - Number of cases{emojis.filebox}: {cyan}{num_cases:_}{reset}")
370 | if timeout:
371 | print(f" - Timeout: {red}{Config.TIMEOUT}{reset} seconds{emojis.stop}")
372 | if Config.PRINT_EXTRA_STATS:
373 | print_extra_stats(test_inp[:num_cases], test_out[:num_cases], num_cases)
374 | if (
375 | solution_len
376 | and Config.PRINT_SOLUTION_LENGTH
377 | and (
378 | lang is Lang.OTHER
379 | and not Config.SOLUTION_SRC_FILE_NAME.endswith(".py")
380 | or Config.OTHER_LANG_COMMAND.endswith(".py")
381 | or lang is Lang.PYTHON
382 | )
383 | ):
384 | print(f" - Solution length{emojis.scroll}: {green}{solution_len}{reset} chars.")
385 | print()
386 |
387 |
388 | def print_summary(i: int, passed: int, time_taken: float) -> None:
389 | if Config.NUM_FAILED >= 0 and i - passed > Config.NUM_FAILED:
390 | print(
391 | f"{emojis.warning}Printed only first {yellow}{Config.NUM_FAILED}{reset} "
392 | f"failed cases!{emojis.warning}"
393 | )
394 | print(
395 | f"\rTo change how many failed cases to print change {cyan}NUM_FAILED{reset}"
396 | " in configuration section.\n"
397 | )
398 | e = f"{emojis.hundred}" if passed == i else f"{emojis.poo}"
399 | print(
400 | f"\rPassed: {green if passed == i else red}{passed:_}/{i:_}{reset} tests{e}{emojis.bang}"
401 | )
402 | print(f"{emojis.stopwatch}Finished in: {yellow}{time_taken:.4f}{reset} seconds")
403 |
404 |
405 | def print_speed_summary(speed_num_cases: int, loops: int, times: List[float]) -> None:
406 | times.sort()
407 | print(
408 | f"\rTest for speed passed{emojis.hundred}{emojis.bang if emojis.bang else '!'}\n"
409 | f" - Total time: {yellow}{sum(times):.4f}"
410 | f"{reset} seconds to complete {cyan}{loops:_}{reset} times {cyan}"
411 | f"{speed_num_cases:_}{reset} cases!"
412 | )
413 | print(
414 | f" -{emojis.leopard} Average loop time from top {min(5, loops)} fastest: "
415 | f"{yellow}{sum(times[:5])/min(5, loops):.4f}{reset} seconds /"
416 | f" {cyan}{speed_num_cases:_}{reset} cases."
417 | )
418 | print(
419 | f" -{emojis.leopard} Fastest loop time: {yellow}{times[0]:.4f}{reset} seconds /"
420 | f" {cyan}{speed_num_cases:_}{reset} cases."
421 | )
422 | print(
423 | f" - {emojis.snail}Slowest loop time: {yellow}{times[-1]:.4f}{reset} seconds /"
424 | f" {cyan}{speed_num_cases:_}{reset} cases."
425 | )
426 | print(
427 | f" - {emojis.stopwatch}Average loop time: {yellow}{sum(times)/loops:.4f}{reset} seconds"
428 | f" / {cyan}{speed_num_cases:_}{reset} cases."
429 | )
430 |
431 |
432 | def find_diff(out: str, exp: str) -> str:
433 | result = []
434 | for o, e in zip_longest(out, exp):
435 | if o == e:
436 | result.append(o)
437 | elif o is None:
438 | result.append(f"{red_bg}~{reset}")
439 | else:
440 | result.append(f"{red_bg}{o}{reset}")
441 |
442 | return "".join(result)
443 |
444 |
445 | def check_result(
446 | test_inp: TestInp, test_out: TestOut, num_cases: int, output: List[str]
447 | ) -> Tuple[int, int]:
448 | passed = i = oi = 0
449 | max_line_len = Config.TRUNCATE_FAILED_CASES if Config.TRUNCATE_FAILED_CASES > 0 else 10**6
450 | for i, (inp, exp) in enumerate(
451 | zip_longest(test_inp[:num_cases], test_out[:num_cases]), 1
452 | ):
453 | out_len = len(exp.split("\n"))
454 | out = "\n".join(output[oi : oi + out_len])
455 | oi += out_len
456 | if out != exp:
457 | if i - passed <= Config.NUM_FAILED or Config.NUM_FAILED == -1:
458 | print(f"Test nr:{i}\n Input: {cyan}")
459 | truncated = False
460 | for line in inp:
461 | truncated = truncated or len(line) > max_line_len
462 | print(line[:max_line_len] + " ..." if len(line) > max_line_len else line)
463 | if truncated:
464 | print(
465 | f"{reset}{emojis.warning}Printed input is truncated to {yellow}"
466 | f"{max_line_len}{reset} characters per line{emojis.warning}"
467 | )
468 | print(
469 | f"\rTo change how many characters to print change "
470 | f"{cyan}TRUNCATE_FAILED_CASES{reset} in configuration section.\n"
471 | )
472 | print(
473 | f"{reset}Your output: {find_diff(out, exp) if out else f'{red_bg}None{reset}'}"
474 | )
475 | print(f" Expected: {green}{exp}{reset}\n")
476 | else:
477 | passed += 1
478 |
479 | if num_cases + sum(x.count("\n") for x in test_out[:num_cases]) < len(output) - 1:
480 | print(
481 | f"{red}{emojis.warning}Your output has more lines than expected!{emojis.warning}\n"
482 | f"Check if you don't print empty line at the end.{reset}\n"
483 | )
484 | return passed, i
485 |
486 |
487 | ########################################################################
488 |
489 |
490 | def test_solution_aio(test_inp: TestInp, test_out: TestOut, num_cases: int) -> None:
491 | test_inp_: List[str] = functools.reduce(operator.iconcat, test_inp[:num_cases], [])
492 |
493 |     @mock.patch("builtins.input", side_effect=[str(num_cases)] + test_inp_)  # feeds the case count, then every input line
494 | def test_aio(input: Callable) -> List[str]:
495 | with Capturing() as output:
496 | solution()
497 |
498 | return output
499 |
500 | print_begin("All in one", num_cases, test_inp, test_out)
501 |
502 | start = perf_counter()
503 | output = test_aio() # type: ignore
504 | end = perf_counter()
505 |
506 | passed, i = check_result(test_inp, test_out, num_cases, output)
507 |
508 | print_summary(i, passed, end - start)
509 |
510 |
511 | def speed_test_solution_aio(
512 | test_inp: TestInp, test_out: TestOut, speed_num_cases: int
513 | ) -> None:
514 | test_inp_: List[str] = functools.reduce(
515 | operator.iconcat, test_inp[:speed_num_cases], []
516 | )
517 |
518 | @mock.patch("builtins.input", side_effect=[str(speed_num_cases)] + test_inp_)
519 | def test_for_speed_aio(input: Callable) -> bool:
520 |
521 | with Capturing() as output:
522 | solution()
523 |
524 | return "\n".join(output) == "\n".join(test_out[:speed_num_cases])
525 |
526 | loops = max(1, Config.NUMER_OF_LOOPS)
527 | print("\nSpeed test started.")
528 | print(f"Running: {yellow}Python solution{reset}")
529 | print(f'Testing {cyan}{speed_num_cases}{reset} cases, format "All in one":')
530 | print(f" - Number of loops: {cyan}{loops:_}{reset}")
531 | print(f" - Timeout: {red}{Config.TIMEOUT}{reset} seconds")
532 | if Config.PRINT_EXTRA_STATS:
533 | print_extra_stats(
534 | test_inp[:speed_num_cases], test_out[:speed_num_cases], speed_num_cases
535 | )
536 | print()
537 |
538 | times = []
539 | for i in range(loops):
540 | start = perf_counter()
541 | passed = test_for_speed_aio() # type: ignore
542 | times.append(perf_counter() - start)
543 | if not passed:
544 | print(f"{red}Failed at iteration {i + 1}!{reset}")
545 | break
546 | if sum(times) >= Config.TIMEOUT:
547 |             print(f"{red}Timeout at iteration {i + 1}!{reset}")
548 | break
549 | if 1 < loops <= 10 or not i % (loops * Config.PROGRESS_PERCENT):
550 | print(f"\rProgress: {yellow}{i:>{len(str(loops))}}/{loops}{reset}", end="")
551 | else:
552 | print_speed_summary(speed_num_cases, loops, times)
553 |
554 |
555 | def test_solution_obo(test_inp: TestInp, test_out: TestOut, num_cases: int) -> None:
556 | def test_obo(test: List[str]) -> Tuple[float, str]:
557 | @mock.patch("builtins.input", side_effect=test)
558 | def test_obo_(input: Callable) -> List[str]:
559 | with Capturing() as output:
560 | solution()
561 |
562 | return output
563 |
564 | start = perf_counter()
565 | output = test_obo_() # type: ignore
566 | end = perf_counter()
567 |
568 | return end - start, "\n".join(output)
569 |
570 | print_begin("One by one", num_cases, test_inp, test_out, timeout=True)
571 |
572 | times = []
573 | passed = i = 0
574 | for i in range(num_cases):
575 | test = ["1"] + test_inp[i] if Config.ADD_1_IN_OBO else test_inp[i]
576 | t, output = test_obo(test)
577 | times.append(t)
578 | if test_out[i] != output:
579 | if i - passed <= Config.NUM_FAILED or Config.NUM_FAILED == -1:
580 | print(f"\rTest nr:{i + 1} \n Input: {cyan}")
581 | pprint(test)
582 |                 print(f"{reset}Your output: {red}{output}{reset}")
583 | print(f" Expected: {green}{test_out[i]}{reset}\n")
584 | else:
585 | passed += 1
586 | if sum(times) >= Config.TIMEOUT:
587 | print(f"{red}Timeout after {i + 1} cases!{reset}")
588 | break
589 | if 1 < num_cases <= 10 or not i % (num_cases * Config.PROGRESS_PERCENT):
590 | print(
591 | f"\rProgress: {yellow}{i + 1:>{len(str(num_cases))}}/{num_cases}{reset}",
592 | end="",
593 | )
594 |
595 | print_summary(i + 1, passed, sum(times))
596 |
597 |
598 | def speed_test_solution_obo(
599 | test_inp: TestInp, test_out: TestOut, speed_num_cases: int
600 | ) -> None:
601 | def test_for_speed_obo(test: List[str], out: str) -> Tuple[float, bool]:
602 | @mock.patch("builtins.input", side_effect=test)
603 | def test_obo_(input: Callable) -> List[str]:
604 | with Capturing() as output:
605 | solution()
606 |
607 | return output
608 |
609 | start = perf_counter()
610 | output = test_obo_() # type: ignore
611 | end = perf_counter()
612 |
613 | return end - start, "\n".join(output) == out
614 |
615 | loops = Config.NUMER_OF_LOOPS
616 | print("\nSpeed test started.")
617 | print(f"Running: {yellow}Python solution{reset}")
618 | print(f'Testing {cyan}{speed_num_cases}{reset} cases, format "One by one":')
619 | print(f" - Number of loops: {cyan}{loops:_}{reset}")
620 | print(f" - Timeout: {red}{Config.TIMEOUT}{reset} seconds")
621 | if Config.PRINT_EXTRA_STATS:
622 | print_extra_stats(
623 | test_inp[:speed_num_cases], test_out[:speed_num_cases], speed_num_cases
624 | )
625 | print()
626 |
627 | timedout = False
628 | times: List[float] = []
629 | for i in range(loops):
630 | loop_times = []
631 | for j in range(speed_num_cases):
632 |             test = ["1"] + test_inp[j] if Config.ADD_1_IN_OBO else test_inp[j]
633 |             t, passed = test_for_speed_obo(test, test_out[j])
634 |             loop_times.append(t)
635 | 
636 |             if not passed:
637 |                 print(f"\r{red}Failed at iteration {i + 1}!{reset}")
638 |                 timedout = True  # also stops the outer loop on failure
639 |                 break
640 |             if sum(times) >= Config.TIMEOUT:
641 |                 print(f"\r{red}Timeout at iteration {i + 1}!{reset}")
642 |                 timedout = True
643 |                 break
644 |         if timedout:
645 |             break
646 |         times.append(sum(loop_times))
647 | if 1 < loops <= 10 or not i % (loops * Config.PROGRESS_PERCENT):
648 | print(f"\rProgress: {yellow}{i:>{len(str(loops))}}/{loops}{reset}", end="")
649 | else:
650 | print_speed_summary(speed_num_cases, loops, times)
651 |
652 |
653 | def debug_solution(test_inp: TestInp, test_out: TestOut, case_number: int) -> None:
654 | test = (
655 | ["1"] + test_inp[case_number - 1]
656 | if not Config.TEST_ONE_BY_ONE
657 | else test_inp[case_number - 1]
658 | )
659 |
660 | @mock.patch("builtins.input", side_effect=test)
661 | def test_debug(input: Callable) -> None:
662 | solution()
663 |
664 | command = Config.OTHER_LANG_COMMAND
665 |
666 | if lang is Lang.PYTHON:
667 | running = f"{emojis.snake} {yellow}Python solution{reset}"
668 | else:
669 | running = f"{emojis.otter} {yellow}{command[command.rfind('/') + 1:]}{reset}"
670 |
671 | print('Started testing, format "Debug":')
672 | print(f"Running: {running}")
673 | print(f" Test nr: {cyan}{case_number}{reset}")
674 |
675 | print(f" Input: {cyan}")
676 | pprint(test_inp[case_number - 1])
677 | print(f"{reset} Expected: {green}{test_out[case_number - 1]}{reset}")
678 | print("Your output:")
679 | if command:
680 | test_ = "1\n" + "\n".join(test_inp[case_number - 1])
681 | start = perf_counter()
682 | proc = subprocess.run(
683 | command.split(),
684 | input=test_,
685 | stdout=subprocess.PIPE,
686 | stderr=subprocess.PIPE,
687 | universal_newlines=True,
688 | )
689 | end = perf_counter()
690 | print(proc.stderr)
691 | print(proc.stdout)
692 | print(f"\nTime: {yellow}{end - start}{reset} seconds")
693 |
694 | else:
695 | test_debug() # type: ignore
696 |
697 |
698 | def test_other_lang(
699 | command: str, test_inp: TestInp, test_out: TestOut, num_cases: int
700 | ) -> None:
701 | test_inp_ = (
702 | f"{num_cases}\n"
703 | + "\n".join(functools.reduce(operator.iconcat, test_inp[:num_cases], []))
704 | + "\n"
705 | )
706 | print_begin("All in one", num_cases, test_inp, test_out)
707 |
708 | start = perf_counter()
709 | proc = subprocess.run(
710 | command.split(),
711 | input=test_inp_,
712 | stdout=subprocess.PIPE,
713 | stderr=subprocess.PIPE,
714 | universal_newlines=True,
715 | )
716 | end = perf_counter()
717 |
718 | err = proc.stderr
719 | output = proc.stdout.split("\n")
720 | output = [x.strip("\r") for x in output]
721 |
722 | if err:
723 | print(err)
724 | raise SystemExit(1)
725 |
726 | passed, i = check_result(test_inp, test_out, num_cases, output)
727 |
728 | print_summary(i, passed, end - start)
729 |
730 |
731 | def speed_test_other_aio(
732 | command: str, test_inp: TestInp, test_out: TestOut, speed_num_cases: int
733 | ) -> None:
734 | test_inp_ = (
735 | f"{speed_num_cases}\n"
736 | + "\n".join(functools.reduce(operator.iconcat, test_inp[:speed_num_cases], []))
737 | + "\n"
738 | )
739 |
740 | def run() -> Tuple[bool, float]:
741 | start = perf_counter()
742 | proc = subprocess.run(
743 | command.split(),
744 | input=test_inp_,
745 | stdout=subprocess.PIPE,
746 | stderr=subprocess.PIPE,
747 | universal_newlines=True,
748 | )
749 | end = perf_counter()
750 |
751 | err = proc.stderr
752 | output = proc.stdout.split("\n")
753 | output = [x.strip("\r") for x in output]
754 | if err:
755 | print(err)
756 | raise SystemExit(1)
757 |
758 | return (
759 | "\n".join(output).strip() == "\n".join(test_out[:speed_num_cases]),
760 | end - start,
761 | )
762 |
763 | loops = max(1, Config.NUMER_OF_LOOPS)
764 | print("\nSpeed test started.")
765 | print(f"Running: {emojis.otter} {yellow}{command[command.rfind('/') + 1:]}{reset}")
766 | print(f'Testing {cyan}{speed_num_cases}{reset} cases, format "All in one":')
767 | print(f" - Number of loops: {cyan}{loops:_}{reset}")
768 | print(f" - Timeout: {red}{Config.TIMEOUT}{reset} seconds")
769 | if Config.PRINT_EXTRA_STATS:
770 | print_extra_stats(
771 | test_inp[:speed_num_cases], test_out[:speed_num_cases], speed_num_cases
772 | )
773 | print()
774 |
775 | times = []
776 | for i in range(loops):
777 | passed, time = run()
778 | times.append(time)
779 | if not passed:
780 | print(f"{red}Failed at iteration {i + 1}!{reset}")
781 | break
782 | if sum(times) >= Config.TIMEOUT:
783 |             print(f"{red}Timeout at iteration {i + 1}!{reset}")
784 | break
785 | if 1 < loops <= 10 or not i % (loops * Config.PROGRESS_PERCENT):
786 | print(f"\rProgress: {yellow}{i:>{len(str(loops))}}/{loops}{reset}", end="")
787 | else:
788 | print_speed_summary(speed_num_cases, loops, times)
789 |
790 |
791 | ########################################################################
792 |
793 |
794 | def parse_args() -> None:
795 | parser = argparse.ArgumentParser(
796 | usage="%(prog)s [options]",
797 |         description="Yan Tovis' unofficial tester for Tech With Tim discord Weekly Challenges. "
798 |         "You can configure it by editing the CONFIGURATION section of this tester file "
799 |         "or by using command line arguments.",
800 | epilog="All file paths must be absolute or relative to this tester!",
801 | )
802 |
803 | parser.add_argument(
804 | "-s",
805 | "--source",
806 | metavar="path_src_file",
807 | help="Path to solution source file for Python solution, or for other languages if you "
808 | "want solution length to be printed.",
809 | )
810 | parser.add_argument(
811 | "-c",
812 | "--command",
813 | metavar="CMD",
814 |         help="Command to execute a solution written in a language other than Python.",
815 | )
816 | parser.add_argument(
817 | "-i",
818 | "--input",
819 | metavar="test_cases",
820 | help="Path to test cases input (input/expected output) JSON file.",
821 | )
822 | parser.add_argument(
823 | "-e",
824 | "--expected",
825 | metavar="test_output",
826 |         help="Path to test cases' expected values if they are in a separate "
827 |         "file from the input test cases.",
828 | )
829 | parser.add_argument(
830 | "--nocolor",
831 | action="store_true",
832 |         help="If you don't want color output in the terminal, or if your terminal "
833 |         "doesn't support colors and the script didn't detect this.",
834 | )
835 | parser.add_argument(
836 | "-n",
837 | "--number-cases",
838 | metavar="",
839 | type=int,
840 |         help="Number of test cases to use for testing.",
841 | )
842 | parser.add_argument(
843 | "-d",
844 | "--debug",
845 | metavar="",
846 | type=int,
847 | help="Test case number if you want to print something extra for debugging. "
848 |         "Your solution will be run with only that one test case and the whole output "
849 | "will be printed.",
850 | )
851 | parser.add_argument(
852 | "--onebyone",
853 | action="store_true",
854 |         help="Use if the official challenge tester gives one test case's input at a time "
855 |         "and runs your solution again for the next test case.",
856 | )
857 | parser.add_argument(
858 | "--add1",
859 | action="store_true",
860 |         help="If you want to test your solution one test case at a time "
861 |         "and your solution is written to take all test cases at once, this will add '1' "
862 |         "as the first input line.",
863 | )
864 | parser.add_argument(
865 | "-S",
866 | "--speedtest",
867 | metavar="",
868 | type=int,
869 | help="Number of times to run tests to get more accurate times.",
870 | )
871 | parser.add_argument(
872 | "--number-speed-cases",
873 | metavar="",
874 | type=int,
875 |         help="How many test cases to use per loop in a speed test.",
876 | )
877 | parser.add_argument(
878 | "--timeout",
879 | metavar="",
880 | type=int,
881 |         help="Timeout in seconds. Will not time out in the middle of a test case. "
882 |         'Does not work in "All in One" mode. Times out only between test cases / loops.',
883 | )
884 | parser.add_argument(
885 | "-x",
886 | "--noextra",
887 | action="store_true",
888 |         help="Use if this tester wasn't prepared for the challenge you're testing.",
889 | )
890 | parser.add_argument(
891 | "-p",
892 | "--progress",
893 | metavar="",
894 | type=float,
895 |         help="How often to update progress: must be > 0 and < 1. 0.1 means "
896 |         "update every 10%% of completed tests, 0.05 means update every 5%% of completed tests.",
897 | )
898 | parser.add_argument(
899 | "-f",
900 | "--failed",
901 | metavar="",
902 | type=int,
903 | help="Number of failed tests to print (-1 to print all failed).",
904 | )
905 | parser.add_argument(
906 | "-t",
907 | "--truncate",
908 | metavar="",
909 | type=int,
910 | help="Maximum line length to print failed case input (-1 to print full line).",
911 | )
912 | parser.add_argument(
913 | "-l",
914 | "--nolength",
915 | action="store_true",
916 | help="Use if you want to share result but don't want to share solution length.",
917 | )
918 | parser.add_argument(
919 | "-E",
920 | "--emoji",
921 | action="store_true",
922 | help="Use unicode emoji. Your terminal must support unicode and your "
923 | "terminal font must have glyphs for emoji.",
924 | )
925 |
926 | args = parser.parse_args()
927 |
928 | if args.source:
929 | Config.SOLUTION_SRC_FILE_NAME = args.source
930 | if args.command:
931 | Config.OTHER_LANG_COMMAND = args.command
932 | if args.input:
933 | Config.TEST_CASE_FILE = args.input
934 | if args.expected:
935 | Config.TEST_CASE_FILE_EXP = args.expected
936 | if args.nocolor:
937 | Config.COLOR_OUT = False
938 | if args.number_cases:
939 | Config.NUMBER_OF_TEST_CASES = args.number_cases
940 | if args.debug:
941 | Config.DEBUG_TEST = True
942 | Config.DEBUG_TEST_NUMBER = args.debug
943 | if args.onebyone:
944 | Config.TEST_ONE_BY_ONE = True
945 | if args.add1:
946 | Config.ADD_1_IN_OBO = True
947 | if args.speedtest:
948 | Config.SPEED_TEST = True
949 | Config.NUMER_OF_LOOPS = args.speedtest
950 | if args.number_speed_cases:
951 | Config.NUMBER_SPEED_TEST_CASES = args.number_speed_cases
952 | if args.timeout:
953 | Config.TIMEOUT = args.timeout
954 | if args.noextra:
955 | Config.PRINT_EXTRA_STATS = False
956 | if args.progress:
957 | Config.PROGRESS_PERCENT = args.progress
958 | if args.failed:
959 | Config.NUM_FAILED = args.failed
960 | if args.truncate:
961 | Config.TRUNCATE_FAILED_CASES = args.truncate
962 | if args.nolength:
963 | Config.PRINT_SOLUTION_LENGTH = False
964 | if args.emoji:
965 | Config.USE_EMOJI = True
966 |
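# Example invocations (solution file and executable names are hypothetical):
#   python test_challenge_211.py -s my_solution.py
#   python test_challenge_211.py -s my_solution.py -S 10 -n 100
#   python test_challenge_211.py -c ./target/release/c211 --nocolor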
967 |
968 | def main(path: str) -> None:
969 | test_inp, test_out = read_test_cases()
970 |
971 | if Config.DEBUG_TEST:
972 | case_number = (
973 | Config.DEBUG_TEST_NUMBER
974 | if 0 <= Config.DEBUG_TEST_NUMBER - 1 < len(test_out)
975 | else 1
976 | )
977 | debug_solution(test_inp, test_out, case_number)
978 | raise SystemExit(0)
979 |
980 | if 0 < Config.NUMBER_OF_TEST_CASES < len(test_out):
981 | num_cases = Config.NUMBER_OF_TEST_CASES
982 | else:
983 | num_cases = len(test_out)
984 |
985 | if 0 < Config.NUMBER_SPEED_TEST_CASES < len(test_out):
986 | speed_num_cases = Config.NUMBER_SPEED_TEST_CASES
987 | else:
988 | speed_num_cases = len(test_out)
989 |
990 | if Config.OTHER_LANG_COMMAND:
991 | os.chdir(path)
992 | test_other_lang(Config.OTHER_LANG_COMMAND, test_inp, test_out, num_cases)
993 | if Config.SPEED_TEST:
994 | speed_test_other_aio(
995 | Config.OTHER_LANG_COMMAND, test_inp, test_out, speed_num_cases
996 | )
997 | raise SystemExit(0)
998 |
999 | if Config.TEST_ONE_BY_ONE:
1000 | test_solution_obo(test_inp, test_out, num_cases)
1001 | else:
1002 | test_solution_aio(test_inp, test_out, num_cases)
1003 | if Config.SPEED_TEST:
1004 | speed_test_solution_aio(test_inp, test_out, speed_num_cases)
1005 |
1006 |
1007 | if __name__ == "__main__":
1008 | path = os.path.dirname(os.path.abspath(__file__))
1009 | os.chdir(path)
1010 |
1011 | parse_args()
1012 |
1013 | color_out = Config.COLOR_OUT and enable_win_term_mode()
1014 | red, green, yellow, cyan, reset, red_bg = (
1015 | (
1016 |             "\x1b[31m",  # red
1017 |             "\x1b[32m",  # green
1018 |             "\x1b[33m",  # yellow
1019 |             "\x1b[36m",  # cyan
1020 |             "\x1b[0m",  # reset
1021 |             "\x1b[41m",  # red background
1022 | )
1023 | if color_out
1024 | else [""] * 6
1025 | )
1026 |
1027 | if Config.USE_EMOJI:
1028 | emojis = Emoji(
1029 | stopwatch="\N{stopwatch} ",
1030 | hundred="\N{Hundred Points Symbol}",
1031 | poo=" \N{Pile of Poo}",
1032 | snake="\N{snake}",
1033 | otter="\N{otter}",
1034 | scroll=" \N{scroll}",
1035 | filebox=" \N{Card File Box} ",
1036 | chart=" \N{Chart with Upwards Trend} ",
1037 | rocket="\N{rocket} ",
1038 | warning=" \N{warning sign} ",
1039 | bang="\N{Heavy Exclamation Mark Symbol}",
1040 | stop="\N{Octagonal sign}",
1041 | snail="\N{snail} ",
1042 | leopard=" \N{leopard}",
1043 | )
1044 | else:
1045 | emojis = Emoji()
1046 |
1047 | solution_len = create_solution_function(path, Config.SOLUTION_SRC_FILE_NAME)
1048 |
1049 | lang = Lang.OTHER if Config.OTHER_LANG_COMMAND else Lang.PYTHON
1050 |
1051 | if lang is Lang.PYTHON:
1052 | if not solution_len:
1053 | print("Could not import solution!")
1054 | raise SystemExit(1)
1055 |
1056 | from temp_solution_file import solution # type: ignore
1057 |
1058 | test_cases_file = os.path.join(path, Config.TEST_CASE_FILE)
1059 | if not os.path.exists(test_cases_file):
1060 | print(
1061 | f"Can't find file with test cases {red}{os.path.join(path, Config.TEST_CASE_FILE)}{reset}!"
1062 | )
1063 | print("Make sure it is in the same directory as this script!")
1064 | raise SystemExit(1)
1065 |
1066 | test_out_file = (
1067 | os.path.join(path, Config.TEST_CASE_FILE_EXP)
1068 | if Config.TEST_CASE_FILE_EXP
1069 | else test_cases_file
1070 | )
1071 | if Config.TEST_CASE_FILE_EXP and not os.path.exists(test_out_file):
1072 | print(
1073 | f"Can't find file with output for test cases {red}{test_out_file}{reset}!"
1074 | )
1075 | print("Make sure it is in the same directory as this script!\n")
1076 | print(
1077 |         f"If output is in the same file as input, set {cyan}TEST_CASE_FILE_EXP{reset} "
1078 |         f"to an empty string in the configuration section."
1079 | )
1080 | raise SystemExit(1)
1081 |
1082 | main(path)
1083 |
--------------------------------------------------------------------------------
/Challenge_212/Challenge_212.md:
--------------------------------------------------------------------------------
1 | # Challenge 212: Coding
2 |
3 | **Difficulty: 3/10
4 | Labels: Implementation, Greedy**
5 |
6 | SnowballSH really enjoys coding. He has a string `s` of lowercase Latin letters. SnowballSH wants to calculate the *beauty score* of this string.
7 |
8 | He scans `s` from left to right. He searches for the letter `c`, followed by `o`, `d`, `i`, `n`, and `g`. When he finds all six letters `coding`, these six letters are all **crossed off.** Note that he only crosses them off when all six letters are found. Then, he resumes searching for `c` again. SnowballSH never looks back and only continues where he left off.
9 |
10 | Then, `score_used` is the number of **crossed off** letters in `s`, and `score_unused` is the number of letters in `s` that are one of `c`, `o`, `d`, `i`, `n`, or `g` but **are not crossed off**.
11 |
12 | The *beauty score* of `s` is `score_used - score_unused`.
13 |
14 | Given a string `s`, can you help SnowballSH find its *beauty score*?
15 |
16 | ## Task
17 |
18 | You are given a number `T` and `T` test cases follow, for each test case:
19 |
20 | - The only line contains a string `s`.
21 |
22 | Output a single integer, the *beauty score* of `s`.
23 |
24 | ### Examples
25 |
26 | #### Input
27 |
28 | ```rust
29 | 7
30 | ccooddiinngg
31 | codincodincodinggcod
32 | ilovecodingalot
33 | techwithtim
34 | actofdangeringaragecomputerspudding
35 | oding
36 | letters
37 | ```
38 |
39 | #### Output
40 |
41 | ```rust
42 | 0
43 | -8
44 | 3
45 | -3
46 | 8
47 | -5
48 | 0
49 | ```
50 |
51 | - For the first test case, the bolded letters are crossed off: **c**c**o**o**d**d**i**i**n**n**g**g, so the answer is 6 - 6 = 0.
52 | - For the third test case, the struck-through letters are not letters of `coding` (so they are not counted): i~~l~~o~~ve~~**coding**~~al~~o~~t~~, so the answer is 6 - 3 = 3.
53 |
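The scan described above can be implemented in a single left-to-right pass. Below is a minimal, unofficial Python sketch (names like `beauty_score` are illustrative, not part of the challenge):

```python
def beauty_score(s: str) -> int:
    target = "coding"
    pos = 0      # index of the next letter of "coding" being searched for
    crossed = 0  # number of crossed-off letters so far
    for ch in s:
        if ch == target[pos]:
            pos += 1
            if pos == len(target):  # all six letters found: cross them off
                crossed += len(target)
                pos = 0
    total = sum(ch in target for ch in s)  # letters of `s` that occur in "coding"
    return crossed - (total - crossed)     # score_used - score_unused


for _ in range(int(input())):  # driver matching the input format above
    print(beauty_score(input()))
```

For example, `beauty_score("ilovecodingalot")` returns `3` (6 crossed off minus 3 unused), matching the third test case.
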
54 | ### Note
55 |
56 | - `1 <= T`
57 | - `1 <= |s| <= 10^5`
58 | - All characters in `s` are lowercase Latin letters.
59 |
60 | ### Submissions
61 |
62 | Code can be written in any of these languages:
63 |
64 | - `Python` 3.11
65 | - `C` (gnu17) / `C++` (c++20) - GCC 12.2
66 | - `Ruby` 3.3.4
67 | - `Golang` 1.21
68 | - `Java` 19 (Open JDK) - use **"class Main"!!!**
69 | - `Rust` 1.72
70 | - `C#` 11 (.Net 7.0)
71 | - `JavaScript` ES2023 (Node.js 20.6)
72 | - `Zig` 0.13.0
73 |
74 | To download the tester for this challenge, click [here](https://downgit.github.io/#/home?url=https://github.com/Pomroka/TWT_Challenges_Tester/tree/main/Challenge_212)
75 |
--------------------------------------------------------------------------------
/Challenge_212/test_cases.json.gz:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Pomroka/TWT_Challenges_Tester/9ab9f21a77355e08999e77b9c838e176934dc9f7/Challenge_212/test_cases.json.gz
--------------------------------------------------------------------------------
/Challenge_212/test_challenge_212.py:
--------------------------------------------------------------------------------
1 | """
2 | How to use?
3 | Put this file, the test cases file, and the file with your solution in the same folder.
4 | Make sure Python has the right to save to this folder.
5 |
6 | This tester will create one file, "temp_solution_file.py"; make sure there's no such
7 | file in the same folder, or nothing important in it, because it will be overwritten!
8 |
9 | Change SOLUTION_SRC_FILE_NAME to your file name in the CONFIGURATION section.
10 |
11 | If this tester wasn't prepared by me for the challenge you want to test,
12 | you may need to adjust other configuration settings. Read the comments on each.
13 |
14 | If you want to use your own test cases, they must be in JSON format.
15 | [
16 |     [ # this list can be in a separate file for inputs only
17 | ["test case 1"], # multiline case ["line_1", "line_2", ... ,"line_n"]
18 | ["test case 2"],
19 | ...
20 | ["test case n"]
21 | ],
22 | [ # and this for output only
23 | "output 1",
24 | "output 2",
25 | ...
26 | "output n"
27 | ]
28 | ]
29 | All values must be strings! Cast all ints, floats, etc. to string before dumping JSON!
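
For example, a hypothetical two-case file for this challenge could be produced
like this (the name "my_cases.json" is just an example; point TEST_CASE_FILE at it):

    import json

    cases = [[["ccooddiinngg"], ["letters"]], ["0", "0"]]
    with open("my_cases.json", "w") as f:
        json.dump(cases, f)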
30 |
31 | WARNING: My tester ignores printing in input() but the official tester FAILS if you
32 | print something in input()
33 | Don't do that: input("What is the test number?")
34 | Use empty input: input()
35 |
36 | Some possible errors:
37 | - None in "Your output": Your solution didn't print for all cases.
38 | - None in "Input": Your solution prints more times than there are cases.
39 | - If you see None in "Input" or "Your output", don't check failed cases until
40 |   you fix the printing problem, because "Input" and "Your output" are misaligned
41 |   after the first missing/extra print
42 | - StopIteration: Your solution tries to get more input than there are test cases
43 | - If you use `open` instead of `input` you get a `StopIteration` error in my tester;
44 |   to avoid that, use OTHER_LANG_COMMAND = "python to_submit_ch_83.py"
45 | - If you call your function inside `if __name__ == '__main__':`, by default
46 |   your function won't be called, because your solution is imported.
47 |   To avoid that, use OTHER_LANG_COMMAND = "python to_submit_ch_83.py"
48 |   or don't use `if __name__ == '__main__':`
49 | """
50 | from __future__ import annotations
51 |
52 | from dataclasses import dataclass
53 |
54 |
55 | ########## CONFIGURATION ################
56 | @dataclass
57 | class Config:
58 |
59 |     # Name of your solution file. If it's not in the same folder as this script,
60 |     # give an absolute path or a path relative to this script file.
61 |     # For languages other than Python, fill this with the source code file name if
62 |     # you want the solution length to be displayed.
63 | # Examples:
64 | # Absolute path
65 | # SOLUTION_SRC_FILE_NAME = "/home/user/Dev/Cpp/c83_c/c83_c/src/Main.cpp"
66 | # Relative path to this script file
67 | # SOLUTION_SRC_FILE_NAME = "Rust/C83_rust/src/main.rs"
68 | # File in same folder as this script
69 | SOLUTION_SRC_FILE_NAME = "to_submit_ch_212.py"
70 |
71 |     # Command to run your solution written in a language other than Python.
72 |     # For compiled languages, compile it yourself and use the compiled executable's name.
73 |     # For interpreted languages, give the full command to run your solution.
74 | # Examples:
75 | # OTHER_LANG_COMMAND = "c83_cpp.exe"
76 | # OTHER_LANG_COMMAND = "Cpp/c83_c/bin/x64/Debug/c83_c.exe"
77 | # OTHER_LANG_COMMAND = "Cpp/c83_c/bin/x64/Release/c83_c"
78 | # OTHER_LANG_COMMAND = "/home/user/Dev/Rust/c83_rust/target/release/c83_rust"
79 | # OTHER_LANG_COMMAND = "d:/Dev/C_Sharp/c83_cs/bin/Debug/net6.0/c83_cs.exe"
80 | # OTHER_LANG_COMMAND = "java -cp Java/ c83_java.Main"
81 | OTHER_LANG_COMMAND = ""
82 |
83 |     # Name of the file with input test cases (and outputs if TEST_CASE_FILE_EXP is empty)
84 |     # If the test cases file is compressed, you don't need to extract it; just give the
85 |     # name of the compressed file (with the .gz extension)
86 | TEST_CASE_FILE = "test_cases.json.gz"
87 |
88 |     # If test case inputs and expected outputs are in separate files, the name of the
89 |     # file with expected outputs. Empty string - if they are in one file.
90 | TEST_CASE_FILE_EXP = ""
91 |
92 |     # True - if you want colors in terminal output, False - otherwise, or if your
93 |     # terminal doesn't support colors and the script didn't detect this
94 | COLOR_OUT: bool = True
95 |
96 |     # -1 - use all test cases from the test case file. You can limit here how many
97 |     # test cases you want to test your solution with. If you enter a number bigger
98 |     # than the number of tests, all tests will be used.
99 | NUMBER_OF_TEST_CASES = -1
100 |
101 |     # True - if you want to print some debug information; you need to set:
102 | # DEBUG_TEST_NUMBER to the test number you want to debug
103 | DEBUG_TEST: bool = False
104 |
105 |     # Provide the test number for which you want to see your debug prints. If you enter
106 |     # a number out of range, the first test case will be used. (This number is 1-indexed;
107 |     # it is the same number shown for a failed case in a normal test.) Ignored when DEBUG_TEST is False.
108 | DEBUG_TEST_NUMBER = 1
109 |
110 |     # True - if the official challenge tester gives one test case's input at a time and
111 |     # runs your solution again for the next test case. False - if it gives all test cases
112 |     # at once and your solution needs to take care of that.
113 | TEST_ONE_BY_ONE: bool = False
114 |
115 |     # True - if you want to test your solution one test case at a time while your solution
116 |     # is written to take all test cases at once; this will add "1" as the first input line.
117 |     # Ignored if TEST_ONE_BY_ONE is False.
118 | ADD_1_IN_OBO: bool = False
119 |
120 | # True - if you want to measure performance of your solution, running it multiple times
121 | SPEED_TEST: bool = False
122 |
123 |     # How many test cases to use per loop; the same rules apply as for NUMBER_OF_TEST_CASES
124 | NUMBER_SPEED_TEST_CASES = -1
125 |
126 |     # How many times to run the tests
127 | NUMER_OF_LOOPS = 2
128 |
129 |     # Timeout in seconds. Will not time out in the middle of a test case (if TEST_ONE_BY_ONE
130 |     # is False, will not time out in the middle of a loop). Times out only between test cases / loops.
131 |     # If you don't want a timeout, set it to some big number or `float("inf")`
132 | TIMEOUT = 300
133 |
134 | # Set to False if this tester wasn't prepared for the challenge you're testing
135 |     # or adjust the prints in the `print_extra_stats` function yourself
136 | PRINT_EXTRA_STATS: bool = True
137 |
138 | # How often to update progress: must be > 0 and < 1
139 | # 0.1 - means update every 10% completed tests
140 | # 0.05 - means update every 5% completed tests
141 | PROGRESS_PERCENT = 0.1
142 |
143 | # How many failed cases to print
144 | # Set to -1 to print all failed cases
145 | NUM_FAILED = 5
146 |
147 | # Maximum length of line to print for failed cases
148 | # Set to -1 to print full line
149 | TRUNCATE_FAILED_CASES = 1000
150 |
151 | # Set to False if you want to share result but don't want to share solution length
152 | PRINT_SOLUTION_LENGTH: bool = True
153 |
154 |     # If your terminal supports unicode and your font has glyphs for emoji, you can
155 |     # switch this to True
156 | USE_EMOJI: bool = False
157 |
158 |
159 | # region #######################################################################
160 |
161 | import argparse
162 | import functools
163 | import gzip
164 | import json
165 | import operator
166 | import os
167 | import platform
168 | import subprocess
169 | import sys
170 | from enum import Enum, auto
171 | from io import StringIO
172 | from itertools import zip_longest
173 | from pprint import pprint
174 | from time import perf_counter
175 | from typing import Callable, List, Tuple
176 | from unittest import mock
177 |
178 | TestInp = List[List[str]]
179 | TestOut = List[str]
180 | TestCases = Tuple[TestInp, TestOut]
181 |
182 |
183 | def enable_win_term_mode() -> bool:
184 | win = platform.system().lower() == "windows"
185 | if win is False:
186 | return True
187 |
188 | from ctypes import byref, c_int, c_void_p, windll
189 |
190 | ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004
191 | INVALID_HANDLE_VALUE = c_void_p(-1).value
192 | STD_OUTPUT_HANDLE = c_int(-11)
193 |
194 | hStdout = windll.kernel32.GetStdHandle(STD_OUTPUT_HANDLE)
195 | if hStdout == INVALID_HANDLE_VALUE:
196 | return False
197 |
198 | mode = c_int(0)
199 | ok = windll.kernel32.GetConsoleMode(c_int(hStdout), byref(mode))
200 | if not ok:
201 | return False
202 |
203 | mode = c_int(mode.value | ENABLE_VIRTUAL_TERMINAL_PROCESSING)
204 | ok = windll.kernel32.SetConsoleMode(c_int(hStdout), mode)
205 | if not ok:
206 | return False
207 |
208 | return True
209 |
210 |
211 | class Capturing(list):
212 | def __enter__(self) -> "Capturing":
213 | self._stdout = sys.stdout
214 | sys.stdout = self._stringio = StringIO()
215 | return self
216 |
217 | def __exit__(self, *args: str) -> None:
218 | self.extend(self._stringio.getvalue().splitlines())
219 | del self._stringio # free up some memory
220 | sys.stdout = self._stdout
221 |
222 |
223 | def create_solution_function(path: str, file_name: str) -> int:
224 | if not file_name:
225 | return 0
226 |
227 | if file_name.find("/") == -1 or file_name.find("\\") == -1:
228 | file_name = os.path.join(path, file_name)
229 |
230 | if not os.path.exists(file_name):
231 | print(f"Can't find file {red}{file_name}{reset}!\n")
232 |         print("Make sure:\n - your file is in the same directory as this script,")
233 |         print(" - or give an absolute path to your file,")
234 |         print(" - or give a relative path from this script.\n")
235 | print(f"Current Working Directory is: {yellow}{os.getcwd()}{reset}")
236 |
237 | return 0
238 |
239 | solution = []
240 | with open(file_name, newline="") as f:
241 | solution = f.readlines()
242 |
243 | sol_len = sum(map(len, solution))
244 |
245 | if not file_name.endswith(".py"):
246 | return sol_len
247 |
248 | tmp_name = os.path.join(path, "temp_solution_file.py")
249 | with open(tmp_name, "w") as f:
250 | f.write("def solution():\n")
251 | for line in solution:
252 | f.write(" " + line)
253 |
254 | return sol_len
255 |
256 |
257 | def read_test_cases() -> TestCases:
258 | if test_cases_file.endswith(".gz"):
259 | with gzip.open(test_cases_file, "rb") as g:
260 | data = g.read()
261 | try:
262 | test_cases = json.loads(data)
263 | except json.decoder.JSONDecodeError:
264 | print(
265 |                 f"Test case file {yellow}{test_cases_file}{reset} is not a valid JSON file!"
266 | )
267 | raise SystemExit(1)
268 | else:
269 | with open(test_cases_file) as f:
270 | try:
271 | test_cases = json.load(f)
272 | except json.decoder.JSONDecodeError:
273 | print(
274 |                     f"Test case file {yellow}{test_cases_file}{reset} is not a valid JSON file!"
275 | )
276 | raise SystemExit(1)
277 |
278 | if Config.TEST_CASE_FILE_EXP:
279 | with open(test_out_file) as f:
280 | try:
281 | test_out = json.load(f)
282 | except json.decoder.JSONDecodeError:
283 | print(
284 |                     f"Test case file {yellow}{test_out_file}{reset} is not a valid JSON file!"
285 | )
286 | raise SystemExit(1)
287 | test_inp = test_cases
288 | else:
289 | test_inp = test_cases[0]
290 | test_out = test_cases[1]
291 |
292 |     if isinstance(test_cases[0], dict):  # official format: [{"Input": ..., "Output": ...}, ...]
293 | return convert_official_test_cases(test_cases)
294 |
295 | return test_inp, test_out
296 |
297 |
298 | def convert_official_test_cases(test_cases: List[dict[str, str]]) -> TestCases:
299 | test_inp, test_out = [], []
300 | for case in test_cases:
301 | try:
302 | test_inp.append(case["Input"].split("\n"))
303 | test_out.append(case["Output"])
304 | except KeyError:
305 |             print(f"Test case {yellow}{case}{reset} is not in a valid format!")
306 | raise SystemExit(1)
307 |
308 | return test_inp, test_out
309 |
310 |
311 | @dataclass
312 | class Emoji:
313 | stopwatch: str = ""
314 | hundred: str = ""
315 | poo: str = ""
316 | snake: str = ""
317 | otter: str = ""
318 | scroll: str = ""
319 | filebox: str = ""
320 | chart: str = ""
321 | rocket: str = ""
322 | warning: str = ""
323 | bang: str = ""
324 | stop: str = ""
325 | snail: str = ""
326 | leopard: str = ""
327 |
328 |
329 | class Lang(Enum):
330 | PYTHON = auto()
331 | OTHER = auto()
332 |
333 |
334 | # endregion ####################################################################
335 |
336 |
337 | def print_extra_stats(test_inp: TestInp, test_out: TestOut, num_cases: int) -> None:
338 | max_n = max(len(x[0].split()[0]) for x in test_inp)
339 | avg_n = sum(len(x[0].split()[0]) for x in test_inp) // num_cases
340 | max_b = max(int(x) for x in test_out)
341 | min_b = min(int(x) for x in test_out)
342 | avg_b = sum(int(x) for x in test_out) // num_cases
343 |
344 | print(f" - Max S len: {yellow}{max_n:_}{reset}")
345 | print(f" - Average S len: {yellow}{avg_n:_}{reset}")
346 | print(f" - Max beauty score: {yellow}{max_b:_}{reset}")
347 | print(f" - Min beauty score: {yellow}{min_b:_}{reset}")
348 | print(f" - Average beauty score: {yellow}{avg_b:_}{reset}")
349 |
350 |
351 |
352 | def print_begin(
353 | format: str,
354 | num_cases: int,
355 | test_inp: TestInp,
356 | test_out: TestOut,
357 | *,
358 | timeout: bool = False,
359 | ) -> None:
360 | command = Config.OTHER_LANG_COMMAND
361 |
362 | if lang is Lang.PYTHON:
363 | running = f"{emojis.snake} {yellow}Python solution{reset}"
364 | else:
365 | running = f"{emojis.otter} {yellow}{command[command.rfind('/') + 1:]}{reset}"
366 |
367 | print(f"{emojis.rocket}Started testing, format {format}:")
368 | print(f"Running:{running}")
369 | print(f" - Number of cases{emojis.filebox}: {cyan}{num_cases:_}{reset}")
370 | if timeout:
371 | print(f" - Timeout: {red}{Config.TIMEOUT}{reset} seconds{emojis.stop}")
372 | if Config.PRINT_EXTRA_STATS:
373 | print_extra_stats(test_inp[:num_cases], test_out[:num_cases], num_cases)
374 | if (
375 | solution_len
376 | and Config.PRINT_SOLUTION_LENGTH
377 | and (
378 | lang is Lang.OTHER
379 | and not Config.SOLUTION_SRC_FILE_NAME.endswith(".py")
380 | or Config.OTHER_LANG_COMMAND.endswith(".py")
381 | or lang is Lang.PYTHON
382 | )
383 | ):
384 | print(f" - Solution length{emojis.scroll}: {green}{solution_len}{reset} chars.")
385 | print()
386 |
387 |
388 | def print_summary(i: int, passed: int, time_taken: float) -> None:
389 | if Config.NUM_FAILED >= 0 and i - passed > Config.NUM_FAILED:
390 | print(
391 | f"{emojis.warning}Printed only first {yellow}{Config.NUM_FAILED}{reset} "
392 | f"failed cases!{emojis.warning}"
393 | )
394 | print(
395 | f"\rTo change how many failed cases to print change {cyan}NUM_FAILED{reset}"
396 | " in configuration section.\n"
397 | )
398 | e = f"{emojis.hundred}" if passed == i else f"{emojis.poo}"
399 | print(
400 | f"\rPassed: {green if passed == i else red}{passed:_}/{i:_}{reset} tests{e}{emojis.bang}"
401 | )
402 | print(f"{emojis.stopwatch}Finished in: {yellow}{time_taken:.4f}{reset} seconds")
403 |
404 |
405 | def print_speed_summary(speed_num_cases: int, loops: int, times: List[float]) -> None:
406 | times.sort()
407 | print(
408 | f"\rTest for speed passed{emojis.hundred}{emojis.bang if emojis.bang else '!'}\n"
409 | f" - Total time: {yellow}{sum(times):.4f}"
410 | f"{reset} seconds to complete {cyan}{loops:_}{reset} times {cyan}"
411 | f"{speed_num_cases:_}{reset} cases!"
412 | )
413 | print(
414 | f" -{emojis.leopard} Average loop time from top {min(5, loops)} fastest: "
415 | f"{yellow}{sum(times[:5])/min(5, loops):.4f}{reset} seconds /"
416 | f" {cyan}{speed_num_cases:_}{reset} cases."
417 | )
418 | print(
419 | f" -{emojis.leopard} Fastest loop time: {yellow}{times[0]:.4f}{reset} seconds /"
420 | f" {cyan}{speed_num_cases:_}{reset} cases."
421 | )
422 | print(
423 | f" - {emojis.snail}Slowest loop time: {yellow}{times[-1]:.4f}{reset} seconds /"
424 | f" {cyan}{speed_num_cases:_}{reset} cases."
425 | )
426 | print(
427 | f" - {emojis.stopwatch}Average loop time: {yellow}{sum(times)/loops:.4f}{reset} seconds"
428 | f" / {cyan}{speed_num_cases:_}{reset} cases."
429 | )
430 |
431 |
432 | def find_diff(out: str, exp: str) -> str:
433 | result = []
434 | for o, e in zip_longest(out, exp):
435 | if o == e:
436 | result.append(o)
437 | elif o is None:
438 | result.append(f"{red_bg}~{reset}")
439 | else:
440 | result.append(f"{red_bg}{o}{reset}")
441 |
442 | return "".join(result)
443 |
444 |
445 | def check_result(
446 | test_inp: TestInp, test_out: TestOut, num_cases: int, output: List[str]
447 | ) -> Tuple[int, int]:
448 | passed = i = oi = 0
449 | max_line_len = Config.TRUNCATE_FAILED_CASES if Config.TRUNCATE_FAILED_CASES > 0 else 10**6
450 | for i, (inp, exp) in enumerate(
451 | zip_longest(test_inp[:num_cases], test_out[:num_cases]), 1
452 | ):
453 | out_len = len(exp.split("\n"))
454 | out = "\n".join(output[oi : oi + out_len])
455 | oi += out_len
456 | if out != exp:
457 | if i - passed <= Config.NUM_FAILED or Config.NUM_FAILED == -1:
458 | print(f"Test nr:{i}\n Input: {cyan}")
459 | truncated = False
460 | for line in inp:
461 | truncated = truncated or len(line) > max_line_len
462 | print(line[:max_line_len] + " ..." if len(line) > max_line_len else line)
463 | if truncated:
464 | print(
465 | f"{reset}{emojis.warning}Printed input is truncated to {yellow}"
466 | f"{max_line_len}{reset} characters per line{emojis.warning}"
467 | )
468 | print(
469 | f"\rTo change how many characters to print change "
470 | f"{cyan}TRUNCATE_FAILED_CASES{reset} in configuration section.\n"
471 | )
472 | print(
473 | f"{reset}Your output: {find_diff(out, exp) if out else f'{red_bg}None{reset}'}"
474 | )
475 | print(f" Expected: {green}{exp}{reset}\n")
476 | else:
477 | passed += 1
478 |
479 | if num_cases + sum(x.count("\n") for x in test_out[:num_cases]) < len(output) - 1:
480 | print(
481 | f"{red}{emojis.warning}Your output has more lines than expected!{emojis.warning}\n"
482 | f"Check if you don't print empty line at the end.{reset}\n"
483 | )
484 | return passed, i
485 |
486 |
487 | ########################################################################
488 |
489 |
490 | def test_solution_aio(test_inp: TestInp, test_out: TestOut, num_cases: int) -> None:
491 | test_inp_: List[str] = functools.reduce(operator.iconcat, test_inp[:num_cases], [])
492 |
493 |     @mock.patch("builtins.input", side_effect=[str(num_cases)] + test_inp_)  # feeds the case count, then every input line
494 | def test_aio(input: Callable) -> List[str]:
495 | with Capturing() as output:
496 | solution()
497 |
498 | return output
499 |
500 | print_begin("All in one", num_cases, test_inp, test_out)
501 |
502 | start = perf_counter()
503 | output = test_aio() # type: ignore
504 | end = perf_counter()
505 |
506 | passed, i = check_result(test_inp, test_out, num_cases, output)
507 |
508 | print_summary(i, passed, end - start)
509 |
510 |
511 | def speed_test_solution_aio(
512 | test_inp: TestInp, test_out: TestOut, speed_num_cases: int
513 | ) -> None:
514 | test_inp_: List[str] = functools.reduce(
515 | operator.iconcat, test_inp[:speed_num_cases], []
516 | )
517 |
518 | @mock.patch("builtins.input", side_effect=[str(speed_num_cases)] + test_inp_)
519 | def test_for_speed_aio(input: Callable) -> bool:
520 |
521 | with Capturing() as output:
522 | solution()
523 |
524 | return "\n".join(output) == "\n".join(test_out[:speed_num_cases])
525 |
526 | loops = max(1, Config.NUMER_OF_LOOPS)
527 | print("\nSpeed test started.")
528 | print(f"Running: {yellow}Python solution{reset}")
529 | print(f'Testing {cyan}{speed_num_cases}{reset} cases, format "All in one":')
530 | print(f" - Number of loops: {cyan}{loops:_}{reset}")
531 | print(f" - Timeout: {red}{Config.TIMEOUT}{reset} seconds")
532 | if Config.PRINT_EXTRA_STATS:
533 | print_extra_stats(
534 | test_inp[:speed_num_cases], test_out[:speed_num_cases], speed_num_cases
535 | )
536 | print()
537 |
538 | times = []
539 | for i in range(loops):
540 | start = perf_counter()
541 | passed = test_for_speed_aio() # type: ignore
542 | times.append(perf_counter() - start)
543 | if not passed:
544 | print(f"{red}Failed at iteration {i + 1}!{reset}")
545 | break
546 | if sum(times) >= Config.TIMEOUT:
547 |             print(f"{red}Timeout at iteration {i + 1}!{reset}")
548 | break
549 | if 1 < loops <= 10 or not i % (loops * Config.PROGRESS_PERCENT):
550 | print(f"\rProgress: {yellow}{i:>{len(str(loops))}}/{loops}{reset}", end="")
551 | else:
552 | print_speed_summary(speed_num_cases, loops, times)
553 |
554 |
555 | def test_solution_obo(test_inp: TestInp, test_out: TestOut, num_cases: int) -> None:
556 | def test_obo(test: List[str]) -> Tuple[float, str]:
557 | @mock.patch("builtins.input", side_effect=test)
558 | def test_obo_(input: Callable) -> List[str]:
559 | with Capturing() as output:
560 | solution()
561 |
562 | return output
563 |
564 | start = perf_counter()
565 | output = test_obo_() # type: ignore
566 | end = perf_counter()
567 |
568 | return end - start, "\n".join(output)
569 |
570 | print_begin("One by one", num_cases, test_inp, test_out, timeout=True)
571 |
572 | times = []
573 | passed = i = 0
574 | for i in range(num_cases):
575 | test = ["1"] + test_inp[i] if Config.ADD_1_IN_OBO else test_inp[i]
576 | t, output = test_obo(test)
577 | times.append(t)
578 | if test_out[i] != output:
579 | if i - passed <= Config.NUM_FAILED or Config.NUM_FAILED == -1:
580 | print(f"\rTest nr:{i + 1} \n Input: {cyan}")
581 | pprint(test)
582 |                 print(f"{reset}Your output: {red}{output}{reset}")
583 | print(f" Expected: {green}{test_out[i]}{reset}\n")
584 | else:
585 | passed += 1
586 | if sum(times) >= Config.TIMEOUT:
587 | print(f"{red}Timeout after {i + 1} cases!{reset}")
588 | break
589 | if 1 < num_cases <= 10 or not i % (num_cases * Config.PROGRESS_PERCENT):
590 | print(
591 | f"\rProgress: {yellow}{i + 1:>{len(str(num_cases))}}/{num_cases}{reset}",
592 | end="",
593 | )
594 |
595 | print_summary(i + 1, passed, sum(times))
596 |
597 |
598 | def speed_test_solution_obo(
599 | test_inp: TestInp, test_out: TestOut, speed_num_cases: int
600 | ) -> None:
601 | def test_for_speed_obo(test: List[str], out: str) -> Tuple[float, bool]:
602 | @mock.patch("builtins.input", side_effect=test)
603 | def test_obo_(input: Callable) -> List[str]:
604 | with Capturing() as output:
605 | solution()
606 |
607 | return output
608 |
609 | start = perf_counter()
610 | output = test_obo_() # type: ignore
611 | end = perf_counter()
612 |
613 | return end - start, "\n".join(output) == out
614 |
615 | loops = Config.NUMER_OF_LOOPS
616 | print("\nSpeed test started.")
617 | print(f"Running: {yellow}Python solution{reset}")
618 | print(f'Testing {cyan}{speed_num_cases}{reset} cases, format "One by one":')
619 | print(f" - Number of loops: {cyan}{loops:_}{reset}")
620 | print(f" - Timeout: {red}{Config.TIMEOUT}{reset} seconds")
621 | if Config.PRINT_EXTRA_STATS:
622 | print_extra_stats(
623 | test_inp[:speed_num_cases], test_out[:speed_num_cases], speed_num_cases
624 | )
625 | print()
626 |
627 | timedout = False
628 | times: List[float] = []
629 | for i in range(loops):
630 | loop_times = []
631 | for j in range(speed_num_cases):
632 |             test = ["1"] + test_inp[j] if Config.ADD_1_IN_OBO else test_inp[j]
633 |             t, passed = test_for_speed_obo(test, test_out[j])
634 |             loop_times.append(t)
635 | 
636 |             if not passed:
637 |                 print(f"\r{red}Failed at iteration {i + 1}!{reset}")
638 |                 timedout = True  # also stops the outer loop on failure
639 |                 break
640 |             if sum(times) >= Config.TIMEOUT:
641 |                 print(f"\r{red}Timeout at iteration {i + 1}!{reset}")
642 |                 timedout = True
643 |                 break
644 |         if timedout:
645 |             break
646 |         times.append(sum(loop_times))
647 | if 1 < loops <= 10 or not i % (loops * Config.PROGRESS_PERCENT):
648 | print(f"\rProgress: {yellow}{i:>{len(str(loops))}}/{loops}{reset}", end="")
649 | else:
650 | print_speed_summary(speed_num_cases, loops, times)
651 |
652 |
653 | def debug_solution(test_inp: TestInp, test_out: TestOut, case_number: int) -> None:
654 | test = (
655 | ["1"] + test_inp[case_number - 1]
656 | if not Config.TEST_ONE_BY_ONE
657 | else test_inp[case_number - 1]
658 | )
659 |
660 | @mock.patch("builtins.input", side_effect=test)
661 | def test_debug(input: Callable) -> None:
662 | solution()
663 |
664 | command = Config.OTHER_LANG_COMMAND
665 |
666 | if lang is Lang.PYTHON:
667 | running = f"{emojis.snake} {yellow}Python solution{reset}"
668 | else:
669 | running = f"{emojis.otter} {yellow}{command[command.rfind('/') + 1:]}{reset}"
670 |
671 | print('Started testing, format "Debug":')
672 | print(f"Running: {running}")
673 | print(f" Test nr: {cyan}{case_number}{reset}")
674 |
675 | print(f" Input: {cyan}")
676 | pprint(test_inp[case_number - 1])
677 | print(f"{reset} Expected: {green}{test_out[case_number - 1]}{reset}")
678 | print("Your output:")
679 | if command:
680 | test_ = "1\n" + "\n".join(test_inp[case_number - 1])
681 | start = perf_counter()
682 | proc = subprocess.run(
683 | command.split(),
684 | input=test_,
685 | stdout=subprocess.PIPE,
686 | stderr=subprocess.PIPE,
687 | universal_newlines=True,
688 | )
689 | end = perf_counter()
690 | print(proc.stderr)
691 | print(proc.stdout)
692 | print(f"\nTime: {yellow}{end - start}{reset} seconds")
693 |
694 | else:
695 | test_debug() # type: ignore
696 |
697 |
698 | def test_other_lang(
699 | command: str, test_inp: TestInp, test_out: TestOut, num_cases: int
700 | ) -> None:
701 | test_inp_ = (
702 | f"{num_cases}\n"
703 | + "\n".join(functools.reduce(operator.iconcat, test_inp[:num_cases], []))
704 | + "\n"
705 | )
706 | print_begin("All in one", num_cases, test_inp, test_out)
707 |
708 | start = perf_counter()
709 | proc = subprocess.run(
710 | command.split(),
711 | input=test_inp_,
712 | stdout=subprocess.PIPE,
713 | stderr=subprocess.PIPE,
714 | universal_newlines=True,
715 | )
716 | end = perf_counter()
717 |
718 | err = proc.stderr
719 | output = proc.stdout.split("\n")
720 | output = [x.strip("\r") for x in output]
721 |
722 | if err:
723 | print(err)
724 | raise SystemExit(1)
725 |
726 | passed, i = check_result(test_inp, test_out, num_cases, output)
727 |
728 | print_summary(i, passed, end - start)
729 |
730 |
731 | def speed_test_other_aio(
732 | command: str, test_inp: TestInp, test_out: TestOut, speed_num_cases: int
733 | ) -> None:
734 | test_inp_ = (
735 | f"{speed_num_cases}\n"
736 | + "\n".join(functools.reduce(operator.iconcat, test_inp[:speed_num_cases], []))
737 | + "\n"
738 | )
739 |
740 | def run() -> Tuple[bool, float]:
741 | start = perf_counter()
742 | proc = subprocess.run(
743 | command.split(),
744 | input=test_inp_,
745 | stdout=subprocess.PIPE,
746 | stderr=subprocess.PIPE,
747 | universal_newlines=True,
748 | )
749 | end = perf_counter()
750 |
751 | err = proc.stderr
752 | output = proc.stdout.split("\n")
753 | output = [x.strip("\r") for x in output]
754 | if err:
755 | print(err)
756 | raise SystemExit(1)
757 |
758 | return (
759 | "\n".join(output).strip() == "\n".join(test_out[:speed_num_cases]),
760 | end - start,
761 | )
762 |
763 | loops = max(1, Config.NUMER_OF_LOOPS)
764 | print("\nSpeed test started.")
765 | print(f"Running: {emojis.otter} {yellow}{command[command.rfind('/') + 1:]}{reset}")
766 | print(f'Testing {cyan}{speed_num_cases}{reset} cases, format "All in one":')
767 | print(f" - Number of loops: {cyan}{loops:_}{reset}")
768 | print(f" - Timeout: {red}{Config.TIMEOUT}{reset} seconds")
769 | if Config.PRINT_EXTRA_STATS:
770 | print_extra_stats(
771 | test_inp[:speed_num_cases], test_out[:speed_num_cases], speed_num_cases
772 | )
773 | print()
774 |
775 | times = []
776 | for i in range(loops):
777 | passed, time = run()
778 | times.append(time)
779 | if not passed:
780 | print(f"{red}Failed at iteration {i + 1}!{reset}")
781 | break
782 | if sum(times) >= Config.TIMEOUT:
783 |             print(f"{red}Timeout at iteration {i + 1}!{reset}")
784 | break
785 | if 1 < loops <= 10 or not i % (loops * Config.PROGRESS_PERCENT):
786 | print(f"\rProgress: {yellow}{i:>{len(str(loops))}}/{loops}{reset}", end="")
787 | else:
788 | print_speed_summary(speed_num_cases, loops, times)
789 |
790 |
791 | ########################################################################
792 |
793 |
794 | def parse_args() -> None:
795 | parser = argparse.ArgumentParser(
796 | usage="%(prog)s [options]",
797 |         description="Yan Tovis' unofficial tester for Tech With Tim discord Weekly Challenges. "
798 |         "You can configure it by editing the CONFIGURATION section of this tester file "
799 |         "or by using command line arguments.",
800 | epilog="All file paths must be absolute or relative to this tester!",
801 | )
802 |
803 | parser.add_argument(
804 | "-s",
805 | "--source",
806 | metavar="path_src_file",
807 | help="Path to solution source file for Python solution, or for other languages if you "
808 | "want solution length to be printed.",
809 | )
810 | parser.add_argument(
811 | "-c",
812 | "--command",
813 | metavar="CMD",
814 |         help="Command to execute a solution written in a language other than Python.",
815 | )
816 | parser.add_argument(
817 | "-i",
818 | "--input",
819 | metavar="test_cases",
820 | help="Path to test cases input (input/expected output) JSON file.",
821 | )
822 | parser.add_argument(
823 | "-e",
824 | "--expected",
825 | metavar="test_output",
826 |         help="Path to test cases' expected values if they are in a separate "
827 |         "file from the input test cases.",
828 | )
829 | parser.add_argument(
830 | "--nocolor",
831 | action="store_true",
832 | help="If you don't want color output in terminal, or if your terminal"
833 | "don't support colors and script didn't detect this.",
834 | )
835 | parser.add_argument(
836 | "-n",
837 | "--number-cases",
838 | metavar="",
839 | type=int,
840 | help="Number of test cases to use to test.",
841 | )
842 | parser.add_argument(
843 | "-d",
844 | "--debug",
845 | metavar="",
846 | type=int,
847 | help="Test case number if you want to print something extra for debugging. "
848 | "Your solution will be run with only that one test case and whole output "
849 | "will be printed.",
850 | )
851 | parser.add_argument(
852 | "--onebyone",
853 | action="store_true",
854 | help="Use if official challenge tester gives one test case inputs and "
855 | "run your solution again for next test case.",
856 | )
857 | parser.add_argument(
858 | "--add1",
859 | action="store_true",
860 | help="If you want test your solution one test case by one test case, "
861 | "and solution is written to take all test cases at once, this will add '1' "
862 | "as the first line input.",
863 | )
864 | parser.add_argument(
865 | "-S",
866 | "--speedtest",
867 | metavar="",
868 | type=int,
869 | help="Number of times to run tests to get more accurate times.",
870 | )
871 | parser.add_argument(
872 | "--number-speed-cases",
873 | metavar="",
874 | type=int,
875 | help="How many test case to use per loop when speed test.",
876 | )
877 | parser.add_argument(
878 | "--timeout",
879 | metavar="",
880 | type=int,
881 | help="Timeout in seconds. Will not timeout in middle of test case. "
882 | 'Not working in "All in One" mode. Will timeout only between test cases / loops.',
883 | )
884 | parser.add_argument(
885 | "-x",
886 | "--noextra",
887 | action="store_true",
888 | help="Use if this tester wasn't prepared for the challenge you testing.",
889 | )
890 | parser.add_argument(
891 | "-p",
892 | "--progress",
893 | metavar="",
894 | type=float,
895 | help="How often to update progress: must be > 0 and < 1. 0.1 - means "
896 | "update every 10%% completed tests, 0.05 - means update every 5%% completed tests.",
897 | )
898 | parser.add_argument(
899 | "-f",
900 | "--failed",
901 | metavar="",
902 | type=int,
903 | help="Number of failed tests to print (-1 to print all failed).",
904 | )
905 | parser.add_argument(
906 | "-t",
907 | "--truncate",
908 | metavar="",
909 | type=int,
910 | help="Maximum line length to print failed case input (-1 to print full line).",
911 | )
912 | parser.add_argument(
913 | "-l",
914 | "--nolength",
915 | action="store_true",
916 | help="Use if you want to share result but don't want to share solution length.",
917 | )
918 | parser.add_argument(
919 | "-E",
920 | "--emoji",
921 | action="store_true",
922 | help="Use unicode emoji. Your terminal must support unicode and your "
923 | "terminal font must have glyphs for emoji.",
924 | )
925 |
926 | args = parser.parse_args()
927 |
928 | if args.source:
929 | Config.SOLUTION_SRC_FILE_NAME = args.source
930 | if args.command:
931 | Config.OTHER_LANG_COMMAND = args.command
932 | if args.input:
933 | Config.TEST_CASE_FILE = args.input
934 | if args.expected:
935 | Config.TEST_CASE_FILE_EXP = args.expected
936 | if args.nocolor:
937 | Config.COLOR_OUT = False
938 | if args.number_cases:
939 | Config.NUMBER_OF_TEST_CASES = args.number_cases
940 | if args.debug:
941 | Config.DEBUG_TEST = True
942 | Config.DEBUG_TEST_NUMBER = args.debug
943 | if args.onebyone:
944 | Config.TEST_ONE_BY_ONE = True
945 | if args.add1:
946 | Config.ADD_1_IN_OBO = True
947 | if args.speedtest:
948 | Config.SPEED_TEST = True
949 | Config.NUMER_OF_LOOPS = args.speedtest
950 | if args.number_speed_cases:
951 | Config.NUMBER_SPEED_TEST_CASES = args.number_speed_cases
952 | if args.timeout:
953 | Config.TIMEOUT = args.timeout
954 | if args.noextra:
955 | Config.PRINT_EXTRA_STATS = False
956 | if args.progress:
957 | Config.PROGRESS_PERCENT = args.progress
958 | if args.failed:
959 | Config.NUM_FAILED = args.failed
960 | if args.truncate:
961 | Config.TRUNCATE_FAILED_CASES = args.truncate
962 | if args.nolength:
963 | Config.PRINT_SOLUTION_LENGTH = False
964 | if args.emoji:
965 | Config.USE_EMOJI = True
966 |
967 |
968 | def main(path: str) -> None:
969 | test_inp, test_out = read_test_cases()
970 |
971 | if Config.DEBUG_TEST:
972 | case_number = (
973 | Config.DEBUG_TEST_NUMBER
974 | if 0 <= Config.DEBUG_TEST_NUMBER - 1 < len(test_out)
975 | else 1
976 | )
977 | debug_solution(test_inp, test_out, case_number)
978 | raise SystemExit(0)
979 |
980 | if 0 < Config.NUMBER_OF_TEST_CASES < len(test_out):
981 | num_cases = Config.NUMBER_OF_TEST_CASES
982 | else:
983 | num_cases = len(test_out)
984 |
985 | if 0 < Config.NUMBER_SPEED_TEST_CASES < len(test_out):
986 | speed_num_cases = Config.NUMBER_SPEED_TEST_CASES
987 | else:
988 | speed_num_cases = len(test_out)
989 |
990 | if Config.OTHER_LANG_COMMAND:
991 | os.chdir(path)
992 | test_other_lang(Config.OTHER_LANG_COMMAND, test_inp, test_out, num_cases)
993 | if Config.SPEED_TEST:
994 | speed_test_other_aio(
995 | Config.OTHER_LANG_COMMAND, test_inp, test_out, speed_num_cases
996 | )
997 | raise SystemExit(0)
998 |
999 | if Config.TEST_ONE_BY_ONE:
1000 | test_solution_obo(test_inp, test_out, num_cases)
1001 | else:
1002 | test_solution_aio(test_inp, test_out, num_cases)
1003 | if Config.SPEED_TEST:
1004 | speed_test_solution_aio(test_inp, test_out, speed_num_cases)
1005 |
1006 |
1007 | if __name__ == "__main__":
1008 | path = os.path.dirname(os.path.abspath(__file__))
1009 | os.chdir(path)
1010 |
1011 | parse_args()
1012 |
1013 | color_out = Config.COLOR_OUT and enable_win_term_mode()
1014 | red, green, yellow, cyan, reset, red_bg = (
1015 | (
1016 | "\x1b[31m",
1017 | "\x1b[32m",
1018 | "\x1b[33m",
1019 | "\x1b[36m",
1020 | "\x1b[0m",
1021 | "\x1b[41m",
1022 | )
1023 | if color_out
1024 | else [""] * 6
1025 | )
1026 |
1027 | if Config.USE_EMOJI:
1028 | emojis = Emoji(
1029 | stopwatch="\N{stopwatch} ",
1030 | hundred="\N{Hundred Points Symbol}",
1031 | poo=" \N{Pile of Poo}",
1032 | snake="\N{snake}",
1033 | otter="\N{otter}",
1034 | scroll=" \N{scroll}",
1035 | filebox=" \N{Card File Box} ",
1036 | chart=" \N{Chart with Upwards Trend} ",
1037 | rocket="\N{rocket} ",
1038 | warning=" \N{warning sign} ",
1039 | bang="\N{Heavy Exclamation Mark Symbol}",
1040 | stop="\N{Octagonal sign}",
1041 | snail="\N{snail} ",
1042 | leopard=" \N{leopard}",
1043 | )
1044 | else:
1045 | emojis = Emoji()
1046 |
1047 | solution_len = create_solution_function(path, Config.SOLUTION_SRC_FILE_NAME)
1048 |
1049 | lang = Lang.OTHER if Config.OTHER_LANG_COMMAND else Lang.PYTHON
1050 |
1051 | if lang is Lang.PYTHON:
1052 | if not solution_len:
1053 | print("Could not import solution!")
1054 | raise SystemExit(1)
1055 |
1056 | from temp_solution_file import solution # type: ignore
1057 |
1058 | test_cases_file = os.path.join(path, Config.TEST_CASE_FILE)
1059 | if not os.path.exists(test_cases_file):
1060 | print(
1061 | f"Can't find file with test cases {red}{os.path.join(path, Config.TEST_CASE_FILE)}{reset}!"
1062 | )
1063 | print("Make sure it is in the same directory as this script!")
1064 | raise SystemExit(1)
1065 |
1066 | test_out_file = (
1067 | os.path.join(path, Config.TEST_CASE_FILE_EXP)
1068 | if Config.TEST_CASE_FILE_EXP
1069 | else test_cases_file
1070 | )
1071 | if Config.TEST_CASE_FILE_EXP and not os.path.exists(test_out_file):
1072 | print(
1073 | f"Can't find file with output for test cases {red}{test_out_file}{reset}!"
1074 | )
1075 | print("Make sure it is in the same directory as this script!\n")
1076 | print(
1077 | f"If output is in same file as input set {cyan}SEP_INP_OUT_TESTCASE_FILE{reset} "
1078 | f"to {yellow}False{reset} in configure section."
1079 | )
1080 | raise SystemExit(1)
1081 |
1082 | main(path)
1083 |
--------------------------------------------------------------------------------
/Readme.md:
--------------------------------------------------------------------------------
1 | # Testers for TechWithTim Discord weekly challenges
2 |
3 | [](https://github.com/Pomroka/TWT_Challenges_Tester/releases/latest) [](#supported-python-version) [](https://github.com/psf/black)
4 |
5 | 
6 |
7 | You will find here my testers for weekly challenges.
8 |
9 | ----------
10 |
11 | - [**How to use?**](#how-to-use)
12 | - [New tester (Challenge 73 and newer)](#new-tester-challenge-73-and-newer)
13 | - [To test solution in languages other than Python](#to-test-solution-in-languages-other-than-python)
14 | - [Examples](#examples)
15 | - [Custom test cases](#custom-test-cases)
16 | - [**Supported Python version**](#supported-python-version)
17 | - [Old tester (Challenge 72 and older)](#old-tester-challenge-72-and-older)
18 | - [Some possible errors](#some-possible-errors)
19 | - [How to download individual challenge tester/file from GitHub?](#how-to-download-individual-challenge-testerfile-from-github)
20 |
21 | ----------
22 |
23 | ## **How to use?**
24 |
25 | ## New tester (Challenge 73 and newer)
26 |
27 | Download the tester file `test_challenge_XX.py` and the test cases file `test_cases.json`, and place them in the same folder as your solution file.
28 |
29 | Make sure Python has the right to write to this folder.
30 |
31 | This tester will create one file, `temp_solution_file.py`; make sure there's no such
32 | file in the same folder, or that there's nothing important in it, because it will be **overwritten!**
33 |
34 | You can configure the tester by editing the `CONFIGURATION` section of the tester file, or by using command line arguments.
35 |
36 | Run with the flag `-h` or `--help` for more information on how to use the command line arguments.
37 |
38 | ```sh
39 | python test_challenge_212.py --help
40 | ```
41 |
42 | Change `SOLUTION_SRC_FILE_NAME` to your file name in the `CONFIGURATION` section, or use `-s solution_file.py`.
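
For example, pointing the tester at a Python solution (the file names here are just placeholders):

```sh
python test_challenge_212.py -s my_solution.py
```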
43 |
44 | ### To test solution in languages other than Python
45 |
46 | Use `-c command`, or change `OTHER_LANG_COMMAND`, to the command that runs your solution (for non-compiled languages) or to the already compiled executable (for compiled languages). For a multi-word `command` passed as a command line argument, surround it in quotes: `"multi word command"`
47 |
48 | ### Examples
49 |
50 | ```py
51 | OTHER_LANG_COMMAND = "Cpp/c212_cpp.exe" # relative to tester file path to compiled windows executable
52 | OTHER_LANG_COMMAND = "/home/user/Dev/Challenge212/c212_c" # absolute path to compiled Linux executable
53 | OTHER_LANG_COMMAND = "c212_rust.exe" # name of compiled file in same folder as tester
54 | OTHER_LANG_COMMAND = "java -cp Java/ c212_java.Main" # command to run solution in non compiled language
55 | OTHER_LANG_COMMAND = "" # leave empty if you want to test python solution
56 | ```
57 |
58 | ```sh
59 | > python test_challenge_212.py -c "java -cp Java/ c212_java.Main"
60 |
61 | $ python test_challenge_212.py -c Rust/c212_rust/target/release/c212_rust
62 | ```
63 |
64 | If you want to see your solution length for languages other than Python, change `SOLUTION_SRC_FILE_NAME` to your solution's source code file name.
65 |
66 | If this tester wasn't prepared by me for the challenge you want to test,
67 | you may need to adjust other configuration settings. Read the comments on each.
68 |
69 | ### Custom test cases
70 |
71 | You can use test cases from the official TWT challenge tester (published after testing) or any in the same format. The tester will convert them to the format described below.
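
For reference, the official format is a JSON list of objects with `Input` and `Output` keys, where a multiline input is a single `\n`-joined string (this is what the tester's `convert_official_test_cases` function reads). A minimal example with dummy values:

```py
[
    {"Input": "line_1\nline_2", "Output": "output 1"},
    {"Input": "line_1", "Output": "output 2"}
]
```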
72 |
73 | If you want to use your own test cases, they must be in JSON format.
74 |
75 | ```py
76 | [
77 | [ # this list can be in separated file for inputs only
78 | ["test case 1"], # multiline case ["line_1", "line_2", ... ,"line_n"]
79 | ["test case 2"],
80 | ...
81 | ["test case n"]
82 | ],
83 | [ # and this for output only
84 | "output 1",
85 | "output 2",
86 | ...
87 | "output n"
88 | ]
89 | ]
90 | ```
91 |
92 | All values must be strings! Cast all ints, floats, etc. to strings before dumping the JSON!
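
A minimal sketch of dumping your own cases in this format (the case data below is made up):

```py
import json

# each case: (list of input lines, expected output) - dummy values
cases = [
    (["2", "1 2"], 3),
    (["3", "4 5 6"], 15),
]

inputs = [inp for inp, _ in cases]
outputs = [str(out) for _, out in cases]  # cast everything to string!

with open("test_cases.json", "w") as f:
    json.dump([inputs, outputs], f)
```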
93 |
94 | ## **Supported Python version**
95 |
96 | - **Python 3.8+** works without modification
97 | - **Python 3.7** you need to comment out the line `otter="\N{otter}",`
98 | - **Python 3.6** you need to comment out the lines:
99 |   - `from dataclasses import dataclass`
100 |   - both lines that have `@dataclass`
101 |   - comment out the whole block starting with the line `if Config.USE_EMOJI:` up to and including `else:`, and unindent the next line
102 | - older Python versions are not supported
103 |
104 | ----------
105 |
106 | ## Old tester (Challenge 72 and older)
107 |
108 | Download the tester file `test_challenge_XX.py` and the test cases file `test_cases_ch_XX.py`, and place them in the same folder as your solution file.
109 |
110 | Import your solution in line `92`, then run the tester file
111 |
112 | ```sh
113 | > python test_challenge_XX.py
114 | ```
115 |
116 | If you see some weird chars instead of colors in the output, or don't want colors,
117 | switch `COLOR_OUT` to `False` in line `30`
118 |
119 | If there is more than one `test_cases_ch_XXx.py` file, you can change the import in line `28`, or rename the test case file (removing the letter after the challenge number) to use it.
120 |
121 | ----------
122 |
123 | **WARNING:** My tester ignores printing in `input()` but the official tester **FAILS** if you print something in `input()`
124 |
125 | **Don't do that:**
126 |
127 | ```py
128 | input("What is the test number?")
129 | ```
130 |
131 | Use empty input: `input()`
132 |
133 | ----------
134 |
135 | ## Some possible errors
136 |
137 | - `None` in `"Your output"`: Your solution didn't print for all cases.
138 |
139 | - `None` in `"Input"`: Your solution print more times than there are cases.
140 |
141 | - If you see `None` in `"Input"` or `"Your output"`, don't check failed cases until you fix the problem with printing, because "Input" and "Your output" are misaligned after the first missing/extra print
142 |
143 | - `"Your Output"` looks like `"Expected"` but tester show it's wrong. Check if you print trailing spaces.
144 |
145 | - `StopIteration`: Your solution tries to read more input than there are test cases
146 |
147 | - If you use `open(0)` instead of `input` you get a `StopIteration` error in my tester, or the tester will hang waiting for an EOF char that is not present in the input data
148 | - to avoid this use one of:
149 | - set `OTHER_LANG_COMMAND = "python to_submit_ch_212.py"`
150 | - run `python test_challenge_212.py -c "python to_submit_ch_212.py"`
151 | - If you call your functions inside `if __name__ == '__main__':`, your functions won't be called by default because your solution is imported (see the sketch after this list)
152 | - to avoid this use one of:
153 | - set `OTHER_LANG_COMMAND = "python to_submit_ch_212.py"`
154 | - run `python test_challenge_212.py -c "python to_submit_ch_212.py"`
155 | - or don't use `if __name__ == '__main__':`
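
A minimal illustration of that pitfall: because the tester imports your file, the guard below is false and `solve()` never runs.

```py
def solve():
    n = int(input())
    print(n * 2)

if __name__ == "__main__":  # False when the tester imports your solution
    solve()
```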
156 |
157 | ----------
158 |
159 | ## How to download individual challenge tester/file from GitHub?
160 |
161 | You can switch to the branch with that challenge, then click `Code` and `Download ZIP`
162 |
163 | Or in the **Releases** section click `Challenge XX` and download `source_code (...)`.
164 |
165 | Or from command line:
166 |
167 | ```sh
168 | # Linux
169 | $ wget https://raw.githubusercontent.com/Pomroka/TWT_Challenges_Tester/master/Challenge_212/test_cases.json
170 |
171 | # Windows 10
172 | > curl -o test_cases.json https://raw.githubusercontent.com/Pomroka/TWT_Challenges_Tester/master/Challenge_212/test_cases.json
173 | ```
174 |
175 | Or use [https://downgit.github.io/#/home](https://downgit.github.io/#/home) (a ready-to-use link is in Challenge_XX.md)
176 |
--------------------------------------------------------------------------------
/logo1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/Pomroka/TWT_Challenges_Tester/9ab9f21a77355e08999e77b9c838e176934dc9f7/logo1.png
--------------------------------------------------------------------------------
/test_challenge.py:
--------------------------------------------------------------------------------
1 | """
2 | How to use?
3 | Put this file, the test cases file, and the file with your solution in the same folder.
4 | Make sure Python has the right to write to this folder.
5 |
6 | This tester will create one file, "temp_solution_file.py"; make sure there's no such
7 | file in the same folder, or that there's nothing important in it, because it will be overwritten!
8 |
9 | Change SOLUTION_SRC_FILE_NAME to your file name in the CONFIGURATION section.
10 |
11 | If this tester wasn't prepared by me for the challenge you want to test,
12 | you may need to adjust other configuration settings. Read the comments on each.
13 |
14 | If you want to use your own test cases, they must be in JSON format.
15 | [
16 | [ # this list can be in separated file for inputs only
17 | ["test case 1"], # multiline case ["line_1", "line_2", ... ,"line_n"]
18 | ["test case 2"],
19 | ...
20 | ["test case n"]
21 | ],
22 | [ # and this for output only
23 | "output 1",
24 | "output 2",
25 | ...
26 | "output n"
27 | ]
28 | ]
29 | All values must be strings! Cast all ints, floats, etc. to string before dumping JSON!
30 |
31 | WARNING: My tester ignores printing in input() but the official tester FAILS if you
32 | print something in input()
33 | Don't do that: input("What is the test number?")
34 | Use empty input: input()
35 |
36 | Some possible errors:
37 | - None in "Your output": Your solution didn't print for all cases.
38 | - None in "Input": Your solution print more times than there are cases.
39 | - If you see None in "Input" or "Your output" don't check failed cases until
40 | you fix problem with printing, cos "Input" and "Your output" are misaligned
41 | after first missing/extra print
42 | - StopIteration: Your solution try to get more input than there are test cases
43 | - If you use `open` instead of `input` you get `StopIteration` error in my tester
44 | to avoid that use OTHER_LANG_COMMAND = "python to_submit_ch_83.py"
45 | - If you call your function inside `if __name__ == '__main__':` by default
46 | your functions won't be called cos your solution is imported.
47 | To avoid that use OTHER_LANG_COMMAND = "python to_submit_ch_83.py"
48 | or don't use `if __name__ == '__main__':`
49 | """
50 | from __future__ import annotations
51 |
52 | from dataclasses import dataclass
53 |
54 |
55 | ########## CONFIGURATION ################
56 | @dataclass
57 | class Config:
58 |
59 |     # Name of your file with the solution. If it's not in the same folder as this script,
60 |     # use an absolute path or a path relative to this script file.
61 |     # For languages other than Python, fill this with the source code file name if
62 |     # you want the solution length to be displayed.
63 | # Examples:
64 | # Absolute path
65 | # SOLUTION_SRC_FILE_NAME = "/home/user/Dev/Cpp/c83_c/c83_c/src/Main.cpp"
66 | # Relative path to this script file
67 | # SOLUTION_SRC_FILE_NAME = "Rust/C83_rust/src/main.rs"
68 | # File in same folder as this script
69 | SOLUTION_SRC_FILE_NAME = ""
70 |
71 |     # Command to run your solution written in a language other than Python
72 |     # For compiled languages - compile yourself and use the compiled executable file name
73 |     # For interpreted languages give the full command to run your solution
74 | # Examples:
75 | # OTHER_LANG_COMMAND = "c83_cpp.exe"
76 | # OTHER_LANG_COMMAND = "Cpp/c83_c/bin/x64/Debug/c83_c.exe"
77 | # OTHER_LANG_COMMAND = "Cpp/c83_c/bin/x64/Release/c83_c"
78 | # OTHER_LANG_COMMAND = "/home/user/Dev/Rust/c83_rust/target/release/c83_rust"
79 | # OTHER_LANG_COMMAND = "d:/Dev/C_Sharp/c83_cs/bin/Debug/net6.0/c83_cs.exe"
80 | # OTHER_LANG_COMMAND = "java -cp Java/ c83_java.Main"
81 | OTHER_LANG_COMMAND = ""
82 |
83 |     # Name of the file with input test cases (and output if TEST_CASE_FILE_EXP is empty)
84 |     # If the test cases file is compressed, you don't need to extract it; just give the name
85 |     # of the compressed file (with .gz extension)
86 | TEST_CASE_FILE = "test_cases.json"
87 |
88 |     # If test case inputs and expected outputs are in separate files, the name of the file
89 |     # with expected outputs for the test cases. Empty string - if they are in one file.
90 | TEST_CASE_FILE_EXP = ""
91 |
92 | # True - if you want colors in terminal output, False - otherwise or if your terminal
93 |     # doesn't support colors and the script didn't detect this
94 | COLOR_OUT: bool = True
95 |
96 |     # -1 - use all test cases from the test case file; you can limit here how many
97 |     # test cases you want to test your solution with. If you enter a number bigger than
98 |     # the number of tests, all tests will be used
99 | NUMBER_OF_TEST_CASES = -1
100 |
101 |     # True - if you want to print some debug information; you need to set
102 |     # DEBUG_TEST_NUMBER to the test number you want to debug
103 | DEBUG_TEST: bool = False
104 |
105 |     # Provide the test number for which you want to see your debug prints. If you enter a number
106 |     # out of range, the first test case will be used. (This number is 1-indexed; it is the same number
107 |     # you find when a failed test case is printed in a normal test). Ignored when DEBUG_TEST is False
108 | DEBUG_TEST_NUMBER = 1
109 |
110 |     # True - if the official challenge tester gives input for one test case and runs your solution
111 |     # again for the next test case, False - if the official challenge tester gives all test cases at
112 |     # once and your solution needs to handle that.
113 | TEST_ONE_BY_ONE: bool = False
114 |
115 |     # True - if you want to test your solution one test case at a time, and the solution
116 |     # is written to take all test cases at once; this will add "1" as the first line of input
117 |     # Ignored if TEST_ONE_BY_ONE is False
118 | ADD_1_IN_OBO: bool = False
119 |
120 |     # True - if you want to measure the performance of your solution by running it multiple times
121 | SPEED_TEST: bool = False
122 |
123 | # How many test cases to use per loop, same rules apply as for NUMBER_OF_TEST_CASES
124 | NUMBER_SPEED_TEST_CASES = -1
125 |
126 |     # How many times to run the tests
127 | NUMER_OF_LOOPS = 2
128 |
129 |     # Timeout in seconds. Will not time out in the middle of a test case (if TEST_ONE_BY_ONE is False,
130 |     # it will not time out in the middle of a loop). Times out only between test cases / loops
131 |     # If you don't want a timeout, set it to some big number or `float("inf")`
132 | TIMEOUT = 300
133 |
134 | # Set to False if this tester wasn't prepared for the challenge you're testing
135 |     # or adjust the prints in the `print_extra_stats` function yourself
136 | PRINT_EXTRA_STATS: bool = True
137 |
138 | # How often to update progress: must be > 0 and < 1
139 | # 0.1 - means update every 10% completed tests
140 | # 0.05 - means update every 5% completed tests
141 | PROGRESS_PERCENT = 0.1
142 |
143 | # How many failed cases to print
144 | # Set to -1 to print all failed cases
145 | NUM_FAILED = 5
146 |
147 | # Set to False if you want to share result but don't want to share solution length
148 | PRINT_SOLUTION_LENGTH: bool = True
149 |
150 |     # If your terminal supports Unicode and your font has glyphs for emoji, you can
151 | # switch to True
152 | USE_EMOJI: bool = False
153 |
154 |
155 | # region #######################################################################
156 |
157 | import argparse
158 | import functools
159 | import gzip
160 | import json
161 | import operator
162 | import os
163 | import platform
164 | import subprocess
165 | import sys
166 | from enum import Enum, auto
167 | from io import StringIO
168 | from itertools import zip_longest
169 | from pprint import pprint
170 | from time import perf_counter
171 | from typing import Callable, List, Tuple
172 | from unittest import mock
173 |
174 | TestInp = List[List[str]]
175 | TestOut = List[str]
176 | TestCases = Tuple[TestInp, TestOut]
177 |
178 |
179 | def enable_win_term_mode() -> bool:
180 | win = platform.system().lower() == "windows"
181 | if win is False:
182 | return True
183 |
184 | from ctypes import byref, c_int, c_void_p, windll
185 |
186 | ENABLE_VIRTUAL_TERMINAL_PROCESSING = 0x0004
187 | INVALID_HANDLE_VALUE = c_void_p(-1).value
188 | STD_OUTPUT_HANDLE = c_int(-11)
189 |
190 | hStdout = windll.kernel32.GetStdHandle(STD_OUTPUT_HANDLE)
191 | if hStdout == INVALID_HANDLE_VALUE:
192 | return False
193 |
194 | mode = c_int(0)
195 | ok = windll.kernel32.GetConsoleMode(c_int(hStdout), byref(mode))
196 | if not ok:
197 | return False
198 |
199 | mode = c_int(mode.value | ENABLE_VIRTUAL_TERMINAL_PROCESSING)
200 | ok = windll.kernel32.SetConsoleMode(c_int(hStdout), mode)
201 | if not ok:
202 | return False
203 |
204 | return True
205 |
206 |
207 | class Capturing(list):  # captures the stdout lines printed by the solution
208 | def __enter__(self) -> "Capturing":
209 | self._stdout = sys.stdout
210 | sys.stdout = self._stringio = StringIO()
211 | return self
212 |
213 | def __exit__(self, *args: str) -> None:
214 | self.extend(self._stringio.getvalue().splitlines())
215 | del self._stringio # free up some memory
216 | sys.stdout = self._stdout
217 |
218 |
219 | def create_solution_function(path: str, file_name: str) -> int:
220 | if not file_name:
221 | return 0
222 |
223 | if file_name.find("/") == -1 or file_name.find("\\") == -1:
224 | file_name = os.path.join(path, file_name)
225 |
226 | if not os.path.exists(file_name):
227 | print(f"Can't find file {red}{file_name}{reset}!\n")
228 | print("Make sure:\n - your file is in same directory as this script.")
229 | print(" - or give absolute path to your file")
230 | print(" - or give relative path from this script.\n")
231 | print(f"Current Working Directory is: {yellow}{os.getcwd()}{reset}")
232 |
233 | return 0
234 |
235 | solution = []
236 | with open(file_name, newline="") as f:
237 | solution = f.readlines()
238 |
239 | sol_len = sum(map(len, solution))
240 |
241 | if not file_name.endswith(".py"):
242 | return sol_len
243 |
244 | tmp_name = os.path.join(path, "temp_solution_file.py")
245 | with open(tmp_name, "w") as f:
246 | f.write("def solution():\n")
247 | for line in solution:
248 | f.write(" " + line)
249 |
250 | return sol_len
251 |
252 |
253 | def read_test_cases() -> TestCases:
254 | if test_cases_file.endswith(".gz"):
255 | with gzip.open(test_cases_file, "rb") as g:
256 | data = g.read()
257 | try:
258 | test_cases = json.loads(data)
259 | except json.decoder.JSONDecodeError:
260 | print(
261 | f"Test case file {yellow}{test_cases_file}{reset} is not valid JSON file!"
262 | )
263 | raise SystemExit(1)
264 | else:
265 | with open(test_cases_file) as f:
266 | try:
267 | test_cases = json.load(f)
268 | except json.decoder.JSONDecodeError:
269 | print(
270 | f"Test case file {yellow}{test_cases_file}{reset} is not valid JSON file!"
271 | )
272 | raise SystemExit(1)
273 |
274 | if Config.TEST_CASE_FILE_EXP:
275 | with open(test_out_file) as f:
276 | try:
277 | test_out = json.load(f)
278 | except json.decoder.JSONDecodeError:
279 | print(
280 | f"Test case file {yellow}{test_out_file}{reset} is not valid JSON file!"
281 | )
282 | raise SystemExit(1)
283 | test_inp = test_cases
284 | else:
285 | test_inp = test_cases[0]
286 | test_out = test_cases[1]
287 |
288 | if isinstance(test_cases[0], dict):
289 | return convert_official_test_cases(test_cases)
290 |
291 | return test_inp, test_out
292 |
293 |
294 | def convert_official_test_cases(test_cases: List[dict[str, str]]) -> TestCases:
295 | test_inp, test_out = [], []
296 | for case in test_cases:
297 | try:
298 | test_inp.append(case["Input"].split("\n"))
299 | test_out.append(case["Output"])
300 | except KeyError:
301 | print(f"Test case {yellow}{case}{reset} is not valid format!")
302 | raise SystemExit(1)
303 |
304 | return test_inp, test_out
305 |
306 |
307 | @dataclass
308 | class Emoji:
309 | stopwatch: str = ""
310 | hundred: str = ""
311 | poo: str = ""
312 | snake: str = ""
313 | otter: str = ""
314 | scroll: str = ""
315 | filebox: str = ""
316 | chart: str = ""
317 | rocket: str = ""
318 | warning: str = ""
319 | bang: str = ""
320 | stop: str = ""
321 | snail: str = ""
322 | leopard: str = ""
323 |
324 |
325 | class Lang(Enum):
326 | PYTHON = auto()
327 | OTHER = auto()
328 |
329 |
330 | # endregion ####################################################################
331 |
332 |
333 | def print_extra_stats(test_inp: TestInp, test_out: TestOut, num_cases: int) -> None:
334 | print(f" - Max N: {yellow}" f"{max(int(x[0]) for x in test_inp):_}{reset}")
335 | print(
336 | f" - Average N: {yellow}"
337 | f"{sum(int(x[0]) for x in test_inp) // num_cases:_}{reset}"
338 | )
339 | print(
340 | f" - Max same positions: {yellow}" f"{max(int(x) for x in test_out):_}{reset}"
341 | )
342 | print(
343 | f" - Average same positions: {yellow}"
344 | f"{sum(int(x) for x in test_out) // num_cases:_}{reset}"
345 | )
346 |
347 |
348 | def print_begin(
349 | format: str,
350 | num_cases: int,
351 | test_inp: TestInp,
352 | test_out: TestOut,
353 | *,
354 | timeout: bool = False,
355 | ) -> None:
356 | command = Config.OTHER_LANG_COMMAND
357 |
358 | if lang is Lang.PYTHON:
359 | running = f"{emojis.snake} {yellow}Python solution{reset}"
360 | else:
361 | running = f"{emojis.otter} {yellow}{command[command.rfind('/') + 1:]}{reset}"
362 |
363 | print(f"{emojis.rocket}Started testing, format {format}:")
364 | print(f"Running:{running}")
365 | print(f" - Number of cases{emojis.filebox}: {cyan}{num_cases:_}{reset}")
366 | if timeout:
367 | print(f" - Timeout: {red}{Config.TIMEOUT}{reset} seconds{emojis.stop}")
368 | if Config.PRINT_EXTRA_STATS:
369 | print_extra_stats(test_inp[:num_cases], test_out[:num_cases], num_cases)
370 | if (
371 | solution_len
372 | and Config.PRINT_SOLUTION_LENGTH
373 | and (
374 | lang is Lang.OTHER
375 | and not Config.SOLUTION_SRC_FILE_NAME.endswith(".py")
376 | or Config.OTHER_LANG_COMMAND.endswith(".py")
377 | or lang is Lang.PYTHON
378 | )
379 | ):
380 | print(f" - Solution length{emojis.scroll}: {green}{solution_len}{reset} chars.")
381 | print()
382 |
383 |
384 | def print_summary(i: int, passed: int, time_taken: float) -> None:
385 | if Config.NUM_FAILED >= 0 and i - passed > Config.NUM_FAILED:
386 | print(
387 | f"{emojis.warning}Printed only first {yellow}{Config.NUM_FAILED}{reset} "
388 | f"failed cases!{emojis.warning}"
389 | )
390 | print(
391 | f"\rTo change how many failed cases to print change {cyan}NUM_FAILED{reset}"
392 | " in configuration section.\n"
393 | )
394 | e = f"{emojis.hundred}" if passed == i else f"{emojis.poo}"
395 | print(
396 | f"\rPassed: {green if passed == i else red}{passed:_}/{i:_}{reset} tests{e}{emojis.bang}"
397 | )
398 | print(f"{emojis.stopwatch}Finished in: {yellow}{time_taken:.4f}{reset} seconds")
399 |
400 |
401 | def print_speed_summary(speed_num_cases: int, loops: int, times: List[float]) -> None:
402 | times.sort()
403 | print(
404 | f"\rTest for speed passed{emojis.hundred}{emojis.bang if emojis.bang else '!'}\n"
405 | f" - Total time: {yellow}{sum(times):.4f}"
406 | f"{reset} seconds to complete {cyan}{loops:_}{reset} times {cyan}"
407 | f"{speed_num_cases:_}{reset} cases!"
408 | )
409 | print(
410 | f" -{emojis.leopard} Average loop time from top {min(5, loops)} fastest: "
411 | f"{yellow}{sum(times[:5])/min(5, loops):.4f}{reset} seconds /"
412 | f" {cyan}{speed_num_cases:_}{reset} cases."
413 | )
414 | print(
415 | f" -{emojis.leopard} Fastest loop time: {yellow}{times[0]:.4f}{reset} seconds /"
416 | f" {cyan}{speed_num_cases:_}{reset} cases."
417 | )
418 | print(
419 | f" - {emojis.snail}Slowest loop time: {yellow}{times[-1]:.4f}{reset} seconds /"
420 | f" {cyan}{speed_num_cases:_}{reset} cases."
421 | )
422 | print(
423 | f" - {emojis.stopwatch}Average loop time: {yellow}{sum(times)/loops:.4f}{reset} seconds"
424 | f" / {cyan}{speed_num_cases:_}{reset} cases."
425 | )
426 |
427 |
428 | def check_result(
429 | test_inp: TestInp, test_out: TestOut, num_cases: int, output: List[str]
430 | ) -> Tuple[int, int]:
431 |     passed = i = oi = 0  # oi: index of the next unconsumed line in the flat output
432 | for i, (inp, exp) in enumerate(
433 | zip_longest(test_inp[:num_cases], test_out[:num_cases]), 1
434 | ):
435 | out_len = len(exp.split("\n"))
436 | out = "\n".join(output[oi : oi + out_len])
437 | oi += out_len
438 | if out != exp:
439 | if i - passed <= Config.NUM_FAILED or Config.NUM_FAILED == -1:
440 | print(f"Test nr:{i}\n Input: {cyan}")
441 | pprint(inp)
442 | print(f"{reset}Your output: {red}{out if out else None}{reset}")
443 | print(f" Expected: {green}{exp}{reset}\n")
444 | else:
445 | passed += 1
446 |
447 | if num_cases + sum(x.count("\n") for x in test_out[:num_cases]) < len(output) - 1:
448 | print(
449 | f"{red}{emojis.warning}Your output has more lines than expected!{emojis.warning}\n"
450 | f"Check if you don't print empty line at the end.{reset}\n"
451 | )
452 | return passed, i
453 |
454 |
455 | ########################################################################
456 |
457 |
458 | def test_solution_aio(test_inp: TestInp, test_out: TestOut, num_cases: int) -> None:
459 | test_inp_: List[str] = functools.reduce(operator.iconcat, test_inp[:num_cases], [])
460 |
461 | @mock.patch("builtins.input", side_effect=[str(num_cases)] + test_inp_)
462 | def test_aio(input: Callable) -> List[str]:
463 | with Capturing() as output:
464 | solution()
465 |
466 | return output
467 |
468 | print_begin("All in one", num_cases, test_inp, test_out)
469 |
470 | start = perf_counter()
471 | output = test_aio() # type: ignore
472 | end = perf_counter()
473 |
474 | passed, i = check_result(test_inp, test_out, num_cases, output)
475 |
476 | print_summary(i, passed, end - start)
477 |
478 |
479 | def speed_test_solution_aio(
480 | test_inp: TestInp, test_out: TestOut, speed_num_cases: int
481 | ) -> None:
482 | test_inp_: List[str] = functools.reduce(
483 | operator.iconcat, test_inp[:speed_num_cases], []
484 | )
485 |
486 | @mock.patch("builtins.input", side_effect=[str(speed_num_cases)] + test_inp_)
487 | def test_for_speed_aio(input: Callable) -> bool:
488 |
489 | with Capturing() as output:
490 | solution()
491 |
492 | return "\n".join(output) == "\n".join(test_out[:speed_num_cases])
493 |
494 | loops = max(1, Config.NUMER_OF_LOOPS)
495 | print("\nSpeed test started.")
496 | print(f"Running: {yellow}Python solution{reset}")
497 | print(f'Testing {cyan}{speed_num_cases}{reset} cases, format "All in one":')
498 | print(f" - Number of loops: {cyan}{loops:_}{reset}")
499 | print(f" - Timeout: {red}{Config.TIMEOUT}{reset} seconds")
500 | if Config.PRINT_EXTRA_STATS:
501 | print_extra_stats(
502 | test_inp[:speed_num_cases], test_out[:speed_num_cases], speed_num_cases
503 | )
504 | print()
505 |
506 | times = []
507 | for i in range(loops):
508 | start = perf_counter()
509 | passed = test_for_speed_aio() # type: ignore
510 | times.append(perf_counter() - start)
511 | if not passed:
512 | print(f"{red}Failed at iteration {i + 1}!{reset}")
513 | break
514 | if sum(times) >= Config.TIMEOUT:
515 | print(f"{red}Timeout at {i + 1} iteration!{reset}")
516 | break
517 | if 1 < loops <= 10 or not i % (loops * Config.PROGRESS_PERCENT):
518 | print(f"\rProgress: {yellow}{i:>{len(str(loops))}}/{loops}{reset}", end="")
519 | else:
520 | print_speed_summary(speed_num_cases, loops, times)
521 |
522 |
523 | def test_solution_obo(test_inp: TestInp, test_out: TestOut, num_cases: int) -> None:
524 | def test_obo(test: List[str]) -> Tuple[float, str]:
525 | @mock.patch("builtins.input", side_effect=test)
526 | def test_obo_(input: Callable) -> List[str]:
527 | with Capturing() as output:
528 | solution()
529 |
530 | return output
531 |
532 | start = perf_counter()
533 | output = test_obo_() # type: ignore
534 | end = perf_counter()
535 |
536 | return end - start, "\n".join(output)
537 |
538 | print_begin("One by one", num_cases, test_inp, test_out, timeout=True)
539 |
540 | times = []
541 | passed = i = 0
542 | for i in range(num_cases):
543 | test = ["1"] + test_inp[i] if Config.ADD_1_IN_OBO else test_inp[i]
544 | t, output = test_obo(test)
545 | times.append(t)
546 | if test_out[i] != output:
547 | if i - passed <= Config.NUM_FAILED or Config.NUM_FAILED == -1:
548 | print(f"\rTest nr:{i + 1} \n Input: {cyan}")
549 | pprint(test)
550 | print(f"{reset}Your output: {red}{output[0]}{reset}")
551 | print(f" Expected: {green}{test_out[i]}{reset}\n")
552 | else:
553 | passed += 1
554 | if sum(times) >= Config.TIMEOUT:
555 | print(f"{red}Timeout after {i + 1} cases!{reset}")
556 | break
557 | if 1 < num_cases <= 10 or not i % (num_cases * Config.PROGRESS_PERCENT):
558 | print(
559 | f"\rProgress: {yellow}{i + 1:>{len(str(num_cases))}}/{num_cases}{reset}",
560 | end="",
561 | )
562 |
563 | print_summary(i + 1, passed, sum(times))
564 |
565 |
566 | def speed_test_solution_obo(
567 | test_inp: TestInp, test_out: TestOut, speed_num_cases: int
568 | ) -> None:
569 | def test_for_speed_obo(test: List[str], out: str) -> Tuple[float, bool]:
570 | @mock.patch("builtins.input", side_effect=test)
571 | def test_obo_(input: Callable) -> List[str]:
572 | with Capturing() as output:
573 | solution()
574 |
575 | return output
576 |
577 | start = perf_counter()
578 | output = test_obo_() # type: ignore
579 | end = perf_counter()
580 |
581 | return end - start, "\n".join(output) == out
582 |
583 |     loops = max(1, Config.NUMER_OF_LOOPS)
584 | print("\nSpeed test started.")
585 | print(f"Running: {yellow}Python solution{reset}")
586 | print(f'Testing {cyan}{speed_num_cases}{reset} cases, format "One by one":')
587 | print(f" - Number of loops: {cyan}{loops:_}{reset}")
588 | print(f" - Timeout: {red}{Config.TIMEOUT}{reset} seconds")
589 | if Config.PRINT_EXTRA_STATS:
590 | print_extra_stats(
591 | test_inp[:speed_num_cases], test_out[:speed_num_cases], speed_num_cases
592 | )
593 | print()
594 |
595 | timedout = False
596 | times: List[float] = []
597 | for i in range(loops):
598 | loop_times = []
599 | for j in range(speed_num_cases):
600 | test = ["1"] + test_inp[j] if Config.ADD_1_IN_OBO else test_inp[j]
601 |
602 | t, passed = test_for_speed_obo(test, test_out[j])
603 | loop_times.append(t)
604 |
605 | if not passed:
606 | print(f"\r{red}Failed at iteration {i + 1}!{reset}")
607 |                 timedout = True; break  # flag makes the outer loop stop below
608 |         if sum(times) + sum(loop_times) >= Config.TIMEOUT:
609 | print(f"\r{red}Timeout at {i + 1} iteration!{reset}")
610 | timedout = True
611 | break
612 | if timedout:
613 | break
614 | times.append(sum(loop_times))
615 | if 1 < loops <= 10 or not i % (loops * Config.PROGRESS_PERCENT):
616 | print(f"\rProgress: {yellow}{i:>{len(str(loops))}}/{loops}{reset}", end="")
617 | else:
618 | print_speed_summary(speed_num_cases, loops, times)
619 |
620 |
621 | def debug_solution(test_inp: TestInp, test_out: TestOut, case_number: int) -> None:
622 | test = (
623 | ["1"] + test_inp[case_number - 1]
624 | if not Config.TEST_ONE_BY_ONE
625 | else test_inp[case_number - 1]
626 | )
627 |
628 | @mock.patch("builtins.input", side_effect=test)
629 | def test_debug(input: Callable) -> None:
630 | solution()
631 |
632 | command = Config.OTHER_LANG_COMMAND
633 |
634 | if lang is Lang.PYTHON:
635 | running = f"{emojis.snake} {yellow}Python solution{reset}"
636 | else:
637 | running = f"{emojis.otter} {yellow}{command[command.rfind('/') + 1:]}{reset}"
638 |
639 | print('Started testing, format "Debug":')
640 | print(f"Running: {running}")
641 | print(f" Test nr: {cyan}{case_number}{reset}")
642 |
643 | print(f" Input: {cyan}")
644 | pprint(test_inp[case_number - 1])
645 | print(f"{reset} Expected: {green}{test_out[case_number - 1]}{reset}")
646 | print("Your output:")
647 | if command:
648 | test_ = "1\n" + "\n".join(test_inp[case_number - 1])
649 | start = perf_counter()
650 | proc = subprocess.run(
651 | command.split(),
652 | input=test_,
653 | stdout=subprocess.PIPE,
654 | stderr=subprocess.PIPE,
655 | universal_newlines=True,
656 | )
657 | end = perf_counter()
658 | print(proc.stderr)
659 | print(proc.stdout)
660 | print(f"\nTime: {yellow}{end - start}{reset} seconds")
661 |
662 | else:
663 | test_debug() # type: ignore
664 |
665 |
666 | def test_other_lang(
667 | command: str, test_inp: TestInp, test_out: TestOut, num_cases: int
668 | ) -> None:
669 | test_inp_ = (
670 | f"{num_cases}\n"
671 | + "\n".join(functools.reduce(operator.iconcat, test_inp[:num_cases], []))
672 | + "\n"
673 | )
674 | print_begin("All in one", num_cases, test_inp, test_out)
675 |
676 | start = perf_counter()
677 | proc = subprocess.run(
678 | command.split(),
679 | input=test_inp_,
680 | stdout=subprocess.PIPE,
681 | stderr=subprocess.PIPE,
682 | universal_newlines=True,
683 | )
684 | end = perf_counter()
685 |
686 | err = proc.stderr
687 | output = proc.stdout.split("\n")
688 | output = [x.strip("\r") for x in output]
689 |
690 | if err:
691 | print(err)
692 | raise SystemExit(1)
693 |
694 | passed, i = check_result(test_inp, test_out, num_cases, output)
695 |
696 | print_summary(i, passed, end - start)
697 |
698 |
699 | def speed_test_other_aio(
700 | command: str, test_inp: TestInp, test_out: TestOut, speed_num_cases: int
701 | ) -> None:
702 | test_inp_ = (
703 | f"{speed_num_cases}\n"
704 | + "\n".join(functools.reduce(operator.iconcat, test_inp[:speed_num_cases], []))
705 | + "\n"
706 | )
707 |
708 | def run() -> Tuple[bool, float]:
709 | start = perf_counter()
710 | proc = subprocess.run(
711 | command.split(),
712 | input=test_inp_,
713 | stdout=subprocess.PIPE,
714 | stderr=subprocess.PIPE,
715 | universal_newlines=True,
716 | )
717 | end = perf_counter()
718 |
719 | err = proc.stderr
720 | output = proc.stdout.split("\n")
721 | output = [x.strip("\r") for x in output]
722 | if err:
723 | print(err)
724 | raise SystemExit(1)
725 |
726 | return (
727 | "\n".join(output).strip() == "\n".join(test_out[:speed_num_cases]),
728 | end - start,
729 | )
730 |
731 | loops = max(1, Config.NUMER_OF_LOOPS)
732 | print("\nSpeed test started.")
733 | print(f"Running: {emojis.otter} {yellow}{command[command.rfind('/') + 1:]}{reset}")
734 | print(f'Testing {cyan}{speed_num_cases}{reset} cases, format "All in one":')
735 | print(f" - Number of loops: {cyan}{loops:_}{reset}")
736 | print(f" - Timeout: {red}{Config.TIMEOUT}{reset} seconds")
737 | if Config.PRINT_EXTRA_STATS:
738 | print_extra_stats(
739 | test_inp[:speed_num_cases], test_out[:speed_num_cases], speed_num_cases
740 | )
741 | print()
742 |
743 | times = []
744 | for i in range(loops):
745 | passed, time = run()
746 | times.append(time)
747 | if not passed:
748 | print(f"{red}Failed at iteration {i + 1}!{reset}")
749 | break
750 | if sum(times) >= Config.TIMEOUT:
751 | print(f"{red}Timeout at {i + 1} iteration!{reset}")
752 | break
753 | if 1 < loops <= 10 or not i % (loops * Config.PROGRESS_PERCENT):
754 | print(f"\rProgress: {yellow}{i:>{len(str(loops))}}/{loops}{reset}", end="")
755 | else:
756 | print_speed_summary(speed_num_cases, loops, times)
757 |
758 |
759 | ########################################################################
760 |
761 |
762 | def parse_args() -> None:
763 | parser = argparse.ArgumentParser(
764 | usage="%(prog)s [options]",
765 | description="Yan Tovis unofficial tester for Tech With Tim discord Weekly Challenges. "
766 | "You can configure it editing CONFIGURATION section of this tester file "
767 | "or using command line arguments.",
768 | epilog="All file paths must be absolute or relative to this tester!",
769 | )
770 |
771 | parser.add_argument(
772 | "-s",
773 | "--source",
774 | metavar="path_src_file",
775 | help="Path to solution source file for Python solution, or for other languages if you "
776 | "want solution length to be printed.",
777 | )
778 | parser.add_argument(
779 | "-c",
780 | "--command",
781 | metavar="CMD",
782 | help="Command to execute solution written in other languages then Python.",
783 | )
784 | parser.add_argument(
785 | "-i",
786 | "--input",
787 | metavar="test_cases",
788 | help="Path to test cases input (input/expected output) JSON file.",
789 | )
790 | parser.add_argument(
791 | "-e",
792 | "--expected",
793 | metavar="test_output",
794 | help="Path to test cases expected values if they in separate "
795 | "file then input test cases.",
796 | )
797 | parser.add_argument(
798 | "--nocolor",
799 | action="store_true",
800 | help="If you don't want color output in terminal, or if your terminal"
801 | "don't support colors and script didn't detect this.",
802 | )
803 | parser.add_argument(
804 | "-n",
805 | "--number-cases",
806 | metavar="",
807 | type=int,
808 | help="Number of test cases to use to test.",
809 | )
810 | parser.add_argument(
811 | "-d",
812 | "--debug",
813 | metavar="",
814 | type=int,
815 | help="Test case number if you want to print something extra for debugging. "
816 | "Your solution will be run with only that one test case and whole output "
817 | "will be printed.",
818 | )
819 | parser.add_argument(
820 | "--onebyone",
821 | action="store_true",
822 | help="Use if official challenge tester gives one test case inputs and "
823 | "run your solution again for next test case.",
824 | )
825 | parser.add_argument(
826 | "--add1",
827 | action="store_true",
828 | help="If you want test your solution one test case by one test case, "
829 | "and solution is written to take all test cases at once, this will add '1' "
830 | "as the first line input.",
831 | )
832 | parser.add_argument(
833 | "-S",
834 | "--speedtest",
835 | metavar="",
836 | type=int,
837 | help="Number of times to run tests to get more accurate times. Works "
838 | "only for Python solution.",
839 | )
840 | parser.add_argument(
841 | "--number-speed-cases",
842 | metavar="",
843 | type=int,
844 | help="How many test case to use per loop when speed test.",
845 | )
846 | parser.add_argument(
847 | "--timeout",
848 | metavar="",
849 | type=int,
850 | help="Timeout in seconds. Will not timeout in middle of test case. "
851 | 'Not working in "All in One" mode. Will timeout only between test cases / loops.',
852 | )
853 | parser.add_argument(
854 | "-x",
855 | "--noextra",
856 | action="store_true",
857 | help="Use if this tester wasn't prepared for the challenge you testing.",
858 | )
859 | parser.add_argument(
860 | "-p",
861 | "--progress",
862 | metavar="",
863 | type=float,
864 | help="How often to update progress: must be > 0 and < 1. 0.1 - means "
865 | "update every 10%% completed tests, 0.05 - means update every 5%% completed tests.",
866 | )
867 | parser.add_argument(
868 | "-f",
869 | "--failed",
870 | metavar="",
871 | type=int,
872 | help="Number of failed tests to print (-1 to print all failed).",
873 | )
874 | parser.add_argument(
875 | "-l",
876 | "--nolength",
877 | action="store_true",
878 | help="Use if you want to share result but don't want to share solution length.",
879 | )
880 | parser.add_argument(
881 | "-E",
882 | "--emoji",
883 | action="store_true",
884 | help="Use unicode emoji. Your terminal must support unicode and your "
885 | "terminal font must have glyphs for emoji.",
886 | )
887 |
888 | args = parser.parse_args()
889 |
890 | if args.source:
891 | Config.SOLUTION_SRC_FILE_NAME = args.source
892 | if args.command:
893 | Config.OTHER_LANG_COMMAND = args.command
894 | if args.input:
895 | Config.TEST_CASE_FILE = args.input
896 | if args.expected:
897 | Config.TEST_CASE_FILE_EXP = args.expected
898 | if args.nocolor:
899 | Config.COLOR_OUT = False
900 | if args.number_cases:
901 | Config.NUMBER_OF_TEST_CASES = args.number_cases
902 | if args.debug:
903 | Config.DEBUG_TEST = True
904 | Config.DEBUG_TEST_NUMBER = args.debug
905 | if args.onebyone:
906 | Config.TEST_ONE_BY_ONE = True
907 | if args.add1:
908 | Config.ADD_1_IN_OBO = True
909 | if args.speedtest:
910 | Config.SPEED_TEST = True
911 | Config.NUMER_OF_LOOPS = args.speedtest
912 | if args.number_speed_cases:
913 | Config.NUMBER_SPEED_TEST_CASES = args.number_speed_cases
914 | if args.timeout:
915 | Config.TIMEOUT = args.timeout
916 | if args.noextra:
917 | Config.PRINT_EXTRA_STATS = False
918 | if args.progress:
919 | Config.PROGRESS_PERCENT = args.progress
920 | if args.failed:
921 | Config.NUM_FAILED = args.failed
922 | if args.nolength:
923 | Config.PRINT_SOLUTION_LENGTH = False
924 | if args.emoji:
925 | Config.USE_EMOJI = True
926 |
927 |
928 | def main(path: str) -> None:
929 | test_inp, test_out = read_test_cases()
930 |
931 | if Config.DEBUG_TEST:
932 | case_number = (
933 | Config.DEBUG_TEST_NUMBER
934 | if 0 <= Config.DEBUG_TEST_NUMBER - 1 < len(test_out)
935 | else 1
936 | )
937 | debug_solution(test_inp, test_out, case_number)
938 | raise SystemExit(0)
939 |
940 | if 0 < Config.NUMBER_OF_TEST_CASES < len(test_out):
941 | num_cases = Config.NUMBER_OF_TEST_CASES
942 | else:
943 | num_cases = len(test_out)
944 |
945 | if 0 < Config.NUMBER_SPEED_TEST_CASES < len(test_out):
946 | speed_num_cases = Config.NUMBER_SPEED_TEST_CASES
947 | else:
948 | speed_num_cases = len(test_out)
949 |
950 | if Config.OTHER_LANG_COMMAND:
951 | os.chdir(path)
952 | test_other_lang(Config.OTHER_LANG_COMMAND, test_inp, test_out, num_cases)
953 | if Config.SPEED_TEST:
954 | speed_test_other_aio(
955 | Config.OTHER_LANG_COMMAND, test_inp, test_out, speed_num_cases
956 | )
957 | raise SystemExit(0)
958 |
959 | if Config.TEST_ONE_BY_ONE:
960 | test_solution_obo(test_inp, test_out, num_cases)
961 | else:
962 | test_solution_aio(test_inp, test_out, num_cases)
963 | if Config.SPEED_TEST:
964 |         (speed_test_solution_obo if Config.TEST_ONE_BY_ONE else speed_test_solution_aio)(test_inp, test_out, speed_num_cases)  # pick the speed test matching the input format
965 |
966 |
967 | if __name__ == "__main__":
968 | path = os.path.dirname(os.path.abspath(__file__))
969 | os.chdir(path)
970 |
971 | parse_args()
972 |
973 | color_out = Config.COLOR_OUT and enable_win_term_mode()
974 | red, green, yellow, cyan, reset = (
975 | (
976 | "\x1b[31m",
977 | "\x1b[32m",
978 | "\x1b[33m",
979 | "\x1b[36m",
980 | "\x1b[0m",
981 | )
982 | if color_out
983 | else [""] * 5
984 | )
985 |
986 | if Config.USE_EMOJI:
987 | emojis = Emoji(
988 | stopwatch="\N{stopwatch} ",
989 | hundred="\N{Hundred Points Symbol}",
990 | poo=" \N{Pile of Poo}",
991 | snake="\N{snake}",
992 | otter="\N{otter}",
993 | scroll=" \N{scroll}",
994 | filebox=" \N{Card File Box} ",
995 | chart=" \N{Chart with Upwards Trend} ",
996 | rocket="\N{rocket} ",
997 | warning=" \N{warning sign} ",
998 | bang="\N{Heavy Exclamation Mark Symbol}",
999 | stop="\N{Octagonal sign}",
1000 | snail="\N{snail} ",
1001 | leopard=" \N{leopard}",
1002 | )
1003 | else:
1004 | emojis = Emoji()
1005 |
1006 | solution_len = create_solution_function(path, Config.SOLUTION_SRC_FILE_NAME)
1007 |
1008 | lang = Lang.OTHER if Config.OTHER_LANG_COMMAND else Lang.PYTHON
1009 |
1010 | if lang is Lang.PYTHON:
1011 | if not solution_len:
1012 | print("Could not import solution!")
1013 | raise SystemExit(1)
1014 |
1015 | from temp_solution_file import solution # type: ignore
1016 |
1017 | test_cases_file = os.path.join(path, Config.TEST_CASE_FILE)
1018 | if not os.path.exists(test_cases_file):
1019 | print(
1020 | f"Can't find file with test cases {red}{os.path.join(path, Config.TEST_CASE_FILE)}{reset}!"
1021 | )
1022 | print("Make sure it is in the same directory as this script!")
1023 | raise SystemExit(1)
1024 |
1025 | test_out_file = (
1026 | os.path.join(path, Config.TEST_CASE_FILE_EXP)
1027 | if Config.TEST_CASE_FILE_EXP
1028 | else test_cases_file
1029 | )
1030 | if Config.TEST_CASE_FILE_EXP and not os.path.exists(test_out_file):
1031 | print(
1032 | f"Can't find file with output for test cases {red}{test_out_file}{reset}!"
1033 | )
1034 | print("Make sure it is in the same directory as this script!\n")
1035 | print(
1036 | f"If output is in same file as input set {cyan}SEP_INP_OUT_TESTCASE_FILE{reset} "
1037 | f"to {yellow}False{reset} in configure section."
1038 | )
1039 | raise SystemExit(1)
1040 |
1041 | main(path)
1042 |
--------------------------------------------------------------------------------