├── .gitignore
├── LICENSE
├── README.md
├── demos
├── rec-tic-tac.py
└── tic-tac.py
├── frontend
├── .gitignore
├── package-lock.json
├── package.json
├── postcss.config.js
├── public
│ ├── favicon.ico
│ ├── index.html
│ ├── manifest.json
│ └── robots.txt
├── src
│ ├── App.css
│ ├── App.js
│ ├── api.js
│ ├── components
│ │ └── RecursiveThinkingInterface.jsx
│ ├── context
│ │ └── RecThinkContext.js
│ ├── index.css
│ ├── index.js
│ └── reportWebVitals.js
└── tailwind.config.js
├── install-dependencies.bat
├── non-rec.png
├── rec.PNG
├── recthink_web.py
├── recursive_thinking_ai.py
├── requirements.txt
└── start-recthink.bat
/.gitignore:
--------------------------------------------------------------------------------
1 | __pycache__
2 | venv
3 |
--------------------------------------------------------------------------------
/LICENSE:
--------------------------------------------------------------------------------
1 | MIT License
2 |
3 | Copyright (c) 2025 PhialsBasement
4 |
5 | Permission is hereby granted, free of charge, to any person obtaining a copy
6 | of this software and associated documentation files (the "Software"), to deal
7 | in the Software without restriction, including without limitation the rights
8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
9 | copies of the Software, and to permit persons to whom the Software is
10 | furnished to do so, subject to the following conditions:
11 |
12 | The above copyright notice and this permission notice shall be included in all
13 | copies or substantial portions of the Software.
14 |
15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
21 | SOFTWARE.
22 |
--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
1 | # CoRT (Chain of Recursive Thoughts) 🧠🔄
2 |
3 | ## TL;DR: I made my AI think harder by making it argue with itself repeatedly. It works stupidly well.
4 |
5 | ### What is this?
6 | CoRT makes AI models recursively think about their responses, generate alternatives, and pick the best one. It's like giving the AI the ability to doubt itself and try again... and again... and again.
7 |
8 | ### Does it actually work?
9 | YES. I tested it with Mistral 3.1 24B and it went from "meh" to "holy crap" at programming tasks, which is especially impressive for such a small model.
10 |
11 |
12 | ## How it works
13 | 1. AI generates initial response
14 | 2. AI decides how many "thinking rounds" it needs
15 | 3. For each round:
16 | - Generates 3 alternative responses
17 | - Evaluates all responses
18 | - Picks the best one
19 | 4. Final response is the survivor of this AI battle royale
20 |
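The four steps above boil down to a short loop. Here is a minimal, illustrative sketch (not the project's actual implementation): `generate` and `evaluate` are stand-ins for real model calls, and the toy versions below exist only to show the control flow.

```python
import random

def cort_respond(prompt, generate, evaluate, rounds=2, alternatives=3):
    """Core CoRT loop: keep the current best answer, generate challengers,
    and let the evaluator pick the survivor each round.

    generate(prompt) -> str          produces one candidate response
    evaluate(prompt, text) -> float  scores a candidate (higher is better)
    """
    best = generate(prompt)                      # step 1: initial response
    for _ in range(rounds):                      # step 3: thinking rounds
        candidates = [best] + [generate(prompt) for _ in range(alternatives)]
        best = max(candidates, key=lambda c: evaluate(prompt, c))  # keep the winner
    return best

# Toy stand-ins for an LLM call, just to make the sketch runnable.
def toy_generate(prompt):
    return f"answer-{random.randint(0, 9)}"

def toy_evaluate(prompt, text):
    return int(text.rsplit("-", 1)[1])  # pretend higher numbers are better

print(cort_respond("What is 2+2?", toy_generate, toy_evaluate))
```

In the real thing, `evaluate` is itself an LLM call that compares candidates and explains its pick.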
21 |
22 | ## How to use the Web UI (still in early development)
23 | 1. Run start-recthink.bat
24 | 2. Wait a bit while it installs dependencies
25 | 3. Profit??
26 |
27 | If running on Linux:
28 | ```
29 | pip install -r requirements.txt
30 | cd frontend && npm install
31 | cd ..
32 | python ./recthink_web.py
33 | ```
34 |
35 | (open a new shell)
36 |
37 | ```
38 | cd frontend
39 | npm start
40 | ```
41 |
42 |
43 | ## Examples
44 |
45 |
46 | Mistral 3.1 24B + CoRT
47 | 
48 |
49 | Mistral 3.1 24B non CoRT
50 | 
51 |
52 |
53 | ## Try it yourself
54 | ```
55 | pip install -r requirements.txt
56 | export OPENROUTER_API_KEY="your-key-here"
57 | python recursive_thinking_ai.py
58 | ```
59 |
60 | ### The Secret Sauce
61 | The magic is in:
62 |
63 | - Self-evaluation
64 | - Competitive alternative generation
65 | - Iterative refinement
66 | - Dynamic thinking depth
67 |
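"Dynamic thinking depth" is the least obvious ingredient: the model itself is asked how many rounds a question deserves. A hedged sketch of one way to do that; the prompt wording, 1-5 range, and fallback of 3 are illustrative assumptions, not the repo's exact values:

```python
def decide_rounds(ask_model, prompt, lo=1, hi=5):
    """Dynamic thinking depth: let the model pick its own round count.

    ask_model(text) -> str is any function that queries the LLM.
    """
    reply = ask_model(
        f"How many thinking rounds ({lo}-{hi}) does this question need? "
        f"Answer with just a number.\n\n{prompt}"
    )
    digits = "".join(ch for ch in reply if ch.isdigit())
    rounds = int(digits) if digits else 3        # fall back if the model rambles
    return max(lo, min(hi, rounds))              # clamp to a sane range

print(decide_rounds(lambda p: "I'd say 4.", "Write a web server"))
```

The clamp matters: models happily answer "100 rounds" if you let them.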
68 |
69 |
70 | ## Star History (THANK YOU SO MUCH)
71 |
82 | ### Contributing
83 | Found a way to make it even better? PRs welcome!
84 |
85 | ### License
86 | MIT - Go wild with it
87 |
--------------------------------------------------------------------------------
/demos/rec-tic-tac.py:
--------------------------------------------------------------------------------
1 | import tkinter as tk
2 | from tkinter import messagebox
3 | import random
4 |
5 | def print_board(board):
6 |     for row in board:
7 |         print(" | ".join(row))
8 |         print("-" * 5)
9 |
10 | def check_winner(board, player):
11 |     win_conditions = [
12 |         [board[0][0], board[0][1], board[0][2]],
13 |         [board[1][0], board[1][1], board[1][2]],
14 |         [board[2][0], board[2][1], board[2][2]],  # rows
15 |         [board[0][0], board[1][0], board[2][0]],
16 |         [board[0][1], board[1][1], board[2][1]],
17 |         [board[0][2], board[1][2], board[2][2]],  # columns
18 |         [board[0][0], board[1][1], board[2][2]],
19 |         [board[2][0], board[1][1], board[0][2]]   # diagonals
20 |     ]
21 |     return [player, player, player] in win_conditions
22 |
23 | def check_draw(board):
24 |     return all(cell != ' ' for row in board for cell in row)
25 |
26 | def ai_move(board):
27 |     empty_cells = [(i, j) for i in range(3) for j in range(3) if board[i][j] == ' ']
28 |     return random.choice(empty_cells)
29 |
30 | def on_button_click(row, col):
31 |     global current_player
32 |     if board[row][col] == ' ' and not game_over:
33 |         board[row][col] = current_player
34 |         buttons[row][col].config(text=current_player, state=tk.DISABLED)
35 |         if check_winner(board, current_player):
36 |             messagebox.showinfo("Tic Tac Toe", f"Player {current_player} wins!")
37 |             disable_all_buttons()
38 |         elif check_draw(board):
39 |             messagebox.showinfo("Tic Tac Toe", "It's a draw!")
40 |             disable_all_buttons()
41 |         else:
42 |             current_player = 'O' if current_player == 'X' else 'X'
43 |             if current_player == 'O' and single_player_mode:
44 |                 ai_row, ai_col = ai_move(board)
45 |                 board[ai_row][ai_col] = 'O'
46 |                 buttons[ai_row][ai_col].config(text='O', state=tk.DISABLED)
47 |                 if check_winner(board, 'O'):
48 |                     messagebox.showinfo("Tic Tac Toe", "Player O wins!")
49 |                     disable_all_buttons()
50 |                 elif check_draw(board):
51 |                     messagebox.showinfo("Tic Tac Toe", "It's a draw!")
52 |                     disable_all_buttons()
53 |                 else:
54 |                     current_player = 'X'
55 |
56 | def disable_all_buttons():
57 |     global game_over
58 |     game_over = True
59 |     for row in buttons:
60 |         for button in row:
61 |             button.config(state=tk.DISABLED)
62 |
63 | def start_game(mode):
64 |     global board, buttons, current_player, game_over, single_player_mode
65 |     single_player_mode = (mode == "single")
66 |     board = [[' ' for _ in range(3)] for _ in range(3)]
67 |     current_player = 'X'
68 |     game_over = False
69 |     for row in buttons:
70 |         for button in row:
71 |             button.config(text=' ', state=tk.NORMAL)
72 |
73 | def create_buttons(frame):
74 |     global buttons
75 |     buttons = []
76 |     for i in range(3):
77 |         button_row = []
78 |         for j in range(3):
79 |             button = tk.Button(frame, text=' ', font=('normal', 20, 'normal'), width=5, height=2,
80 |                                command=lambda row=i, col=j: on_button_click(row, col))
81 |             button.grid(row=i, column=j)
82 |             button_row.append(button)
83 |         buttons.append(button_row)
84 |
85 | def main():
86 |     global root
87 |     root = tk.Tk()
88 |     root.title("Tic Tac Toe")
89 |
90 |     mode_frame = tk.Frame(root)
91 |     mode_frame.pack(pady=10)
92 |
93 |     tk.Button(mode_frame, text="Single Player", command=lambda: start_game("single")).pack(side=tk.LEFT, padx=10)
94 |     tk.Button(mode_frame, text="Multi Player", command=lambda: start_game("multi")).pack(side=tk.LEFT, padx=10)
95 |
96 |     game_frame = tk.Frame(root)
97 |     game_frame.pack()
98 |
99 |     create_buttons(game_frame)
100 |
101 |     root.mainloop()
102 |
103 | if __name__ == "__main__":
104 |     main()
--------------------------------------------------------------------------------
/demos/tic-tac.py:
--------------------------------------------------------------------------------
1 | def print_board(board):
2 |     print("-------------")
3 |     for row in board:
4 |         print("| " + " | ".join(row) + " |")
5 |         print("-------------")
6 |
7 | def check_winner(board, player):
8 |     # Check rows, columns, and diagonals for a winner
9 |     for i in range(3):
10 |         if all([cell == player for cell in board[i]]):
11 |             return True
12 |         if all([board[j][i] == player for j in range(3)]):
13 |             return True
14 |     if all([board[i][i] == player for i in range(3)]) or all([board[i][2 - i] == player for i in range(3)]):
15 |         return True
16 |     return False
17 |
18 | def is_board_full(board):
19 |     return all([cell != ' ' for row in board for cell in row])
20 |
21 | def tic_tac_toe():
22 |     board = [[' ' for _ in range(3)] for _ in range(3)]
23 |     current_player = 'X'
24 |
25 |     while True:
26 |         print_board(board)
27 |         row = int(input(f"Player {current_player}, enter the row (0, 1, 2): "))
28 |         col = int(input(f"Player {current_player}, enter the column (0, 1, 2): "))
29 |
30 |         if board[row][col] != ' ':
31 |             print("This spot is already taken. Try again.")
32 |             continue
33 |
34 |         board[row][col] = current_player
35 |
36 |         if check_winner(board, current_player):
37 |             print_board(board)
38 |             print(f"Player {current_player} wins!")
39 |             break
40 |
41 |         if is_board_full(board):
42 |             print_board(board)
43 |             print("It's a tie!")
44 |             break
45 |
46 |         current_player = 'O' if current_player == 'X' else 'X'
47 |
48 | if __name__ == "__main__":
49 |     tic_tac_toe()
50 |
--------------------------------------------------------------------------------
/frontend/.gitignore:
--------------------------------------------------------------------------------
1 | node_modules/
2 |
--------------------------------------------------------------------------------
/frontend/package.json:
--------------------------------------------------------------------------------
1 | {
2 | "name": "recthink-frontend",
3 | "version": "1.0.0",
4 | "private": true,
5 | "dependencies": {
6 | "@testing-library/jest-dom": "^5.17.0",
7 | "@testing-library/react": "^13.4.0",
8 | "@testing-library/user-event": "^13.5.0",
9 | "lucide-react": "^0.263.1",
10 | "react": "^18.2.0",
11 | "react-dom": "^18.2.0",
12 | "react-router-dom": "^6.21.0",
13 | "react-markdown": "^8.0.7",
14 | "react-scripts": "5.0.1",
15 | "recharts": "^2.10.3",
16 | "web-vitals": "^2.1.4"
17 | },
18 | "scripts": {
19 | "start": "react-scripts start",
20 | "build": "react-scripts build",
21 | "test": "react-scripts test",
22 | "eject": "react-scripts eject"
23 | },
24 | "eslintConfig": {
25 | "extends": [
26 | "react-app",
27 | "react-app/jest"
28 | ]
29 | },
30 | "browserslist": {
31 | "production": [
32 | ">0.2%",
33 | "not dead",
34 | "not op_mini all"
35 | ],
36 | "development": [
37 | "last 1 chrome version",
38 | "last 1 firefox version",
39 | "last 1 safari version"
40 | ]
41 | },
42 | "devDependencies": {
43 | "tailwindcss": "^3.4.0",
44 | "autoprefixer": "^10.4.16",
45 | "postcss": "^8.4.32"
46 | }
47 | }
48 |
--------------------------------------------------------------------------------
/frontend/postcss.config.js:
--------------------------------------------------------------------------------
1 | module.exports = {
2 | plugins: {
3 | tailwindcss: {},
4 | autoprefixer: {},
5 | },
6 | }
7 |
--------------------------------------------------------------------------------
/frontend/public/favicon.ico:
--------------------------------------------------------------------------------
1 | 0000010001002020100001000400E80200001600000028000000200000004000
2 | 0000010004000000000080020000000000000000000000000000000000
3 | 0000000000800000800000008080008000000080008080000080808000
4 | C0C0C0000000FF0000FF000000FFFF00FF000000FF00FF00FFFF0000FF
5 | FF00FFFFFF00FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00
6 | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00FFFF
7 | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFF
8 | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFF
9 | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFFFF
10 | FFFFFFFFFF888FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFFFF
11 | FFFFFFFFFF888FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFFFF
12 | FF888888FF888FF888FFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFFFF
13 | FFF888888888888888FFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFFFF
14 | FFFF8888888888888FFFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFFFF
15 | FFFFFF88888888888F888FFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFF
16 | FFFFFFFFFFF888FF8888888FF888FFFFFFFFFFFFFFFFFF00FFFFFFFF
17 | FFFFFFFFFFFFFFFFFF888888888FFFFFFFFFFFFFFFFFF00FFFFFFFFF
18 | FFFFFFFFFFFFFFFFFF88888888FFFFFFFFFFFFFFFFFFF00FFFFFFFFF
19 | FFFFFFFFFFFFFFFFFFF888888FFFFFFFFFFFFFFFFFFFF00FFFFFFFFFF
20 | FFFFFFFFFFFFFFFFFFF888FFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFFF
21 | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFF
22 | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFFF
23 | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFF
24 | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFFF
25 | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFF
26 | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFFF
27 | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFF
28 | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFFF
29 | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFFFFF
30 | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF00FFFFFFF000F
31 | FFFF000FFFFF000FFFFF000FFFFF000FFFFF000FFFFF00FFFFFFF000
32 | FFFF000FFFFF000FFFFF000FFFFF000FFFFF000FFFFF00FFFFFFF000
33 | 0000000FFFFF000FFFFF000FFFFF000FFFFF000FFFFF00FFFFFFF000
34 | 0000000FFFFF000FFFFF000FFFFF000FFFFF000FFFFF00FFFFFFF000
35 | FFFF000FFFFF000FFFFF000FFFFF000FFFFF000FFFFF00FFFFFFF000
36 | FFFF000FFFFF000FFFFF000FFFFF000FFFFF000FFFFFFFFFFFFFFFF
37 | FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFE00F81FF
38 | C007007F8003003F0001001F0000000F0000000700000003000000
39 | 0100000000000000000000000000000000000000000000000000000
40 | 0000000000000000000000000000000000000000000000000000000
41 | 0000000000000000000000000000000000000000000000000000000
42 | 0000000FFFFFFFFFFFFFFFFFFFFFFFFFF
--------------------------------------------------------------------------------
/frontend/public/index.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 |
7 |
8 |
12 |
13 |
14 | RecThink
15 |
16 |
17 |
18 |
19 |
20 |
21 |
--------------------------------------------------------------------------------
/frontend/public/manifest.json:
--------------------------------------------------------------------------------
1 | {
2 | "short_name": "RecThink",
3 | "name": "RecThink - Enhanced Recursive Thinking",
4 | "icons": [
5 | {
6 | "src": "favicon.ico",
7 | "sizes": "64x64 32x32 24x24 16x16",
8 | "type": "image/x-icon"
9 | }
10 | ],
11 | "start_url": ".",
12 | "display": "standalone",
13 | "theme_color": "#000000",
14 | "background_color": "#ffffff"
15 | }
16 |
--------------------------------------------------------------------------------
/frontend/public/robots.txt:
--------------------------------------------------------------------------------
1 | # https://www.robotstxt.org/robotstxt.html
2 | User-agent: *
3 | Disallow:
4 |
--------------------------------------------------------------------------------
/frontend/src/App.css:
--------------------------------------------------------------------------------
1 | /* Markdown Styles */
2 | .markdown-content {
3 | line-height: 1.6;
4 | font-size: 1rem;
5 | }
6 |
7 | .markdown-content h1 {
8 | font-size: 1.5rem;
9 | font-weight: 600;
10 | margin-top: 1.5rem;
11 | margin-bottom: 1rem;
12 | }
13 |
14 | .markdown-content h2 {
15 | font-size: 1.3rem;
16 | font-weight: 600;
17 | margin-top: 1.5rem;
18 | margin-bottom: 0.75rem;
19 | }
20 |
21 | .markdown-content h3 {
22 | font-size: 1.1rem;
23 | font-weight: 600;
24 | margin-top: 1.25rem;
25 | margin-bottom: 0.5rem;
26 | }
27 |
28 | .markdown-content h4, .markdown-content h5, .markdown-content h6 {
29 | font-size: 1rem;
30 | font-weight: 600;
31 | margin-top: 1rem;
32 | margin-bottom: 0.5rem;
33 | }
34 |
35 | .markdown-content p {
36 | margin-bottom: 1rem;
37 | white-space: pre-wrap;
38 | }
39 |
40 | .markdown-content ul, .markdown-content ol {
41 | margin-bottom: 1rem;
42 | padding-left: 1.5rem;
43 | }
44 |
45 | .markdown-content ul {
46 | list-style-type: disc;
47 | }
48 |
49 | .markdown-content ol {
50 | list-style-type: decimal;
51 | }
52 |
53 | .markdown-content li {
54 | margin-bottom: 0.25rem;
55 | }
56 |
57 | .markdown-content a {
58 | color: #3b82f6;
59 | text-decoration: underline;
60 | }
61 |
62 | .markdown-content blockquote {
63 | border-left: 4px solid #e5e7eb;
64 | padding-left: 1rem;
65 | margin-left: 0;
66 | margin-right: 0;
67 | font-style: italic;
68 | color: #6b7280;
69 | }
70 |
71 | .markdown-content pre {
72 | background-color: #f3f4f6;
73 | padding: 1rem;
74 | border-radius: 0.375rem;
75 | overflow-x: auto;
76 | margin-bottom: 1rem;
77 | }
78 |
79 | .markdown-content code {
80 | background-color: #f3f4f6;
81 | padding: 0.2rem 0.4rem;
82 | border-radius: 0.25rem;
83 | font-family: ui-monospace, SFMono-Regular, Menlo, Monaco, Consolas, "Liberation Mono", "Courier New", monospace;
84 | font-size: 0.875em;
85 | }
86 |
87 | .markdown-content pre code {
88 | background-color: transparent;
89 | padding: 0;
90 | border-radius: 0;
91 | font-size: 0.875em;
92 | }
93 |
94 | .markdown-content table {
95 | border-collapse: collapse;
96 | width: 100%;
97 | margin-bottom: 1rem;
98 | }
99 |
100 | .markdown-content th, .markdown-content td {
101 | border: 1px solid #e5e7eb;
102 | padding: 0.5rem;
103 | text-align: left;
104 | }
105 |
106 | .markdown-content th {
107 | background-color: #f9fafb;
108 | }
109 |
110 | .markdown-content img {
111 | max-width: 100%;
112 | height: auto;
113 | border-radius: 0.375rem;
114 | }
115 |
116 | .markdown-content hr {
117 | border: none;
118 | border-top: 1px solid #e5e7eb;
119 | margin: 1.5rem 0;
120 | }
--------------------------------------------------------------------------------
/frontend/src/App.js:
--------------------------------------------------------------------------------
1 | import React from 'react';
2 | import { RecThinkProvider } from './context/RecThinkContext';
3 | import RecursiveThinkingInterface from './components/RecursiveThinkingInterface';
4 | import './App.css';
5 |
6 | function App() {
7 | return (
8 | <RecThinkProvider>
9 | <RecursiveThinkingInterface />
10 | </RecThinkProvider>
11 | );
12 | }
13 |
14 | export default App;
15 |
--------------------------------------------------------------------------------
/frontend/src/api.js:
--------------------------------------------------------------------------------
1 | // API client for RecThink
2 | const API_BASE_URL = 'http://localhost:8000/api';
3 |
4 | export const initializeChat = async (apiKey, model) => {
5 | const response = await fetch(`${API_BASE_URL}/initialize`, {
6 | method: 'POST',
7 | headers: {
8 | 'Content-Type': 'application/json',
9 | },
10 | body: JSON.stringify({ api_key: apiKey, model }),
11 | });
12 |
13 | if (!response.ok) {
14 | throw new Error(`Failed to initialize chat: ${response.statusText}`);
15 | }
16 |
17 | return response.json();
18 | };
19 |
20 | export const sendMessage = async (sessionId, message, options = {}) => {
21 | const response = await fetch(`${API_BASE_URL}/send_message`, {
22 | method: 'POST',
23 | headers: {
24 | 'Content-Type': 'application/json',
25 | },
26 | body: JSON.stringify({
27 | session_id: sessionId,
28 | message,
29 | thinking_rounds: options.thinkingRounds,
30 | alternatives_per_round: options.alternativesPerRound,
31 | }),
32 | });
33 |
34 | if (!response.ok) {
35 | throw new Error(`Failed to send message: ${response.statusText}`);
36 | }
37 |
38 | return response.json();
39 | };
40 |
41 | export const saveConversation = async (sessionId, filename = null, fullLog = false) => {
42 | const response = await fetch(`${API_BASE_URL}/save`, {
43 | method: 'POST',
44 | headers: {
45 | 'Content-Type': 'application/json',
46 | },
47 | body: JSON.stringify({
48 | session_id: sessionId,
49 | filename,
50 | full_log: fullLog,
51 | }),
52 | });
53 |
54 | if (!response.ok) {
55 | throw new Error(`Failed to save conversation: ${response.statusText}`);
56 | }
57 |
58 | return response.json();
59 | };
60 |
61 | export const listSessions = async () => {
62 | const response = await fetch(`${API_BASE_URL}/sessions`);
63 |
64 | if (!response.ok) {
65 | throw new Error(`Failed to list sessions: ${response.statusText}`);
66 | }
67 |
68 | return response.json();
69 | };
70 |
71 | export const deleteSession = async (sessionId) => {
72 | const response = await fetch(`${API_BASE_URL}/sessions/${sessionId}`, {
73 | method: 'DELETE',
74 | });
75 |
76 | if (!response.ok) {
77 | throw new Error(`Failed to delete session: ${response.statusText}`);
78 | }
79 |
80 | return response.json();
81 | };
82 |
83 | export const createWebSocketConnection = (sessionId) => {
84 | const ws = new WebSocket(`ws://localhost:8000/ws/${sessionId}`);
85 | return ws;
86 | };
87 |
--------------------------------------------------------------------------------
/frontend/src/components/RecursiveThinkingInterface.jsx:
--------------------------------------------------------------------------------
1 | import React, { useState, useEffect } from 'react';
2 | import { Send, Save, Settings, Brain, MoveDown, CheckCircle, X, MessageSquare, Clock, RefreshCw, Zap } from 'lucide-react';
3 | import { useRecThink } from '../context/RecThinkContext';
4 | import ReactMarkdown from 'react-markdown';
5 |
6 | const RecursiveThinkingInterface = () => {
7 | const {
8 | sessionId,
9 | messages,
10 | isThinking,
11 | thinkingProcess,
12 | apiKey,
13 | model,
14 | thinkingRounds,
15 | alternativesPerRound,
16 | error,
17 | showThinkingProcess,
18 | connectionStatus,
19 |
20 | setApiKey,
21 | setModel,
22 | setThinkingRounds,
23 | setAlternativesPerRound,
24 | setShowThinkingProcess,
25 |
26 | initializeChat,
27 | sendMessage,
28 | saveConversation
29 | } = useRecThink();
30 |
31 | const [input, setInput] = useState('');
32 | const [activeTab, setActiveTab] = useState('chat');
33 |
34 | // Initialize chat on first load if API key is available
35 | useEffect(() => {
36 | const initSession = async () => {
37 | if (apiKey && !sessionId) {
38 | await initializeChat();
39 | }
40 | };
41 |
42 | initSession();
43 | }, [apiKey, sessionId, initializeChat]);
44 |
45 | const handleSendMessage = async () => {
46 | if (!input.trim()) return;
47 |
48 | if (!sessionId) {
49 | // Initialize chat if not already done
50 | const newSessionId = await initializeChat();
51 | if (!newSessionId) return;
52 | }
53 |
54 | await sendMessage(input);
55 | setInput('');
56 | };
57 |
58 | const handleSaveConversation = async (fullLog = false) => {
59 | if (!sessionId) return;
60 |
61 | const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
62 | const filename = `recthink_conversation_${timestamp}.json`;
63 |
64 | await saveConversation(filename, fullLog);
65 | };
66 |
67 | const renderThinkingIndicator = () => (
68 |
69 |
70 |
71 |
72 |
Thinking recursively...
73 |
74 | {[1, 2, 3].map(i => (
75 |
76 | ))}
77 |
78 |
79 |
80 | {thinkingProcess ?
81 | `Generating ${thinkingProcess.rounds} rounds of alternatives` :
82 | 'Analyzing your request...'}
83 |
84 |
85 |
86 | );
87 |
88 | const renderThinkingProcess = () => {
89 | if (!thinkingProcess) return null;
90 |
91 | return (
92 |
93 |
94 |
95 |
96 |
97 |
Recursive Thinking Process
98 |
99 |
100 | {thinkingProcess.rounds} rounds
101 | setShowThinkingProcess(false)}
105 | />
106 |
107 |
108 |
109 |
110 |
111 | {thinkingProcess.history.map((item, index) => (
112 |
116 |
117 |
118 | Round {item.round}
119 | {item.alternative_number && (
120 | Alternative {item.alternative_number}
121 | )}
122 | {item.selected && (
123 |
124 | Selected
125 |
126 | )}
127 |
128 |
129 |
130 | {item.response}
131 |
132 | {item.explanation && item.selected && (
133 |
134 | Selected because: {item.explanation}
135 |
136 | )}
137 |
138 | ))}
139 |
140 |
141 | );
142 | };
143 |
144 | const renderChatMessage = (message, index) => (
145 |
146 |
147 |
154 | {message.role === 'user' ? (
155 |
{message.content}
156 | ) : (
157 |
158 | {message.content}
159 |
160 | )}
161 |
162 | {message.role === 'assistant' && index === messages.length - 1 && thinkingProcess && (
163 |
164 | setShowThinkingProcess(true)}
167 | >
168 |
169 | View thinking process
170 |
171 |
172 | )}
173 |
174 |
175 |
176 | );
177 |
178 | return (
179 |
180 | {/* Sidebar */}
181 |
182 |
183 |
184 |
185 |
186 |
192 |
193 |
199 |
200 |
206 |
207 |
208 |
215 |
216 |
217 |
218 | {/* Main content */}
219 |
220 | {/* Header */}
221 |
222 |
223 |
RecThink
224 | Enhanced Recursive Thinking
225 |
226 |
227 |
228 | {sessionId && (
229 |
230 |
234 |
235 | {connectionStatus === 'connected' ? 'Connected' :
236 | connectionStatus === 'error' ? 'Connection error' : 'Disconnected'}
237 |
238 |
239 | )}
240 |
241 |
249 |
250 |
257 |
258 |
259 |
260 | {/* Error message */}
261 | {error && (
262 |
263 | Error: {error}
264 |
265 | )}
266 |
267 | {/* Main content area */}
268 |
269 | {activeTab === 'chat' && (
270 | <>
271 | {/* Messages container */}
272 |
273 | {messages.map(renderChatMessage)}
274 | {isThinking && renderThinkingIndicator()}
275 | {showThinkingProcess && thinkingProcess && renderThinkingProcess()}
276 |
277 |
278 | {/* Input area */}
279 |
280 |
281 | setInput(e.target.value)}
285 | placeholder="Ask something..."
286 | className="flex-1 bg-transparent outline-none"
287 | onKeyPress={(e) => e.key === 'Enter' && handleSendMessage()}
288 | disabled={!apiKey || isThinking}
289 | />
290 |
300 |
301 |
302 | Model: {model.split('/')[1]}
303 |
304 |
305 | Enhanced with recursive thinking
306 |
307 |
308 |
309 | >
310 | )}
311 |
312 | {activeTab === 'settings' && (
313 |
314 |
Settings
315 |
316 |
317 |
API Configuration
318 |
319 |
320 |
321 | setApiKey(e.target.value)}
325 | placeholder="Enter your API key"
326 | className="w-full p-2 border rounded focus:ring-blue-500 focus:border-blue-500"
327 | />
328 |
329 |
330 |
331 |
332 |
342 |
343 |
344 |
345 |
346 |
Recursive Thinking Settings
347 |
348 |
349 |
350 |
362 |
Let the AI decide how many rounds of thinking are needed, or set a fixed number.
363 |
364 |
365 |
366 |
367 | setAlternativesPerRound(parseInt(e.target.value, 10))}
373 | className="w-full p-2 border rounded focus:ring-blue-500 focus:border-blue-500"
374 | />
375 |
376 |
377 |
378 |
387 |
388 |
389 |
390 | )}
391 |
392 | {activeTab === 'history' && (
393 |
394 |
Conversation History
395 |
396 |
397 |
Current Session
398 |
399 | {sessionId ? (
400 |
401 |
Session ID: {sessionId}
402 |
Messages: {messages.length}
403 |
404 |
405 |
412 |
413 |
420 |
421 |
422 | ) : (
423 |
No active session.
424 | )}
425 |
426 |
427 |
428 |
429 |
430 | Previous sessions are saved as JSON files in your project directory.
431 |
432 |
433 |
434 |
435 | )}
436 |
437 |
438 |
439 | );
440 | };
441 |
442 | export default RecursiveThinkingInterface;
--------------------------------------------------------------------------------
/frontend/src/context/RecThinkContext.js:
--------------------------------------------------------------------------------
1 | // Context provider for RecThink
2 | import React, { createContext, useContext, useState, useEffect } from 'react';
3 | import * as api from '../api';
4 |
5 | const RecThinkContext = createContext();
6 |
7 | export const useRecThink = () => useContext(RecThinkContext);
8 |
9 | export const RecThinkProvider = ({ children }) => {
10 | const [sessionId, setSessionId] = useState(null);
11 | const [messages, setMessages] = useState([]);
12 | const [isThinking, setIsThinking] = useState(false);
13 | const [thinkingProcess, setThinkingProcess] = useState(null);
14 | const [apiKey, setApiKey] = useState('');
15 | const [model, setModel] = useState('mistralai/mistral-small-3.1-24b-instruct:free');
16 | const [thinkingRounds, setThinkingRounds] = useState('auto');
17 | const [alternativesPerRound, setAlternativesPerRound] = useState(3);
18 | const [error, setError] = useState(null);
19 | const [showThinkingProcess, setShowThinkingProcess] = useState(false);
20 | const [sessions, setSessions] = useState([]);
21 | const [websocket, setWebsocket] = useState(null);
22 | const [connectionStatus, setConnectionStatus] = useState('disconnected');
23 |
24 | // Initialize chat session
25 | const initializeChat = async () => {
26 | try {
27 | setError(null);
28 | const result = await api.initializeChat(apiKey, model);
29 | setSessionId(result.session_id);
30 |
31 | // Initialize with welcome message
32 | setMessages([
33 | { role: 'assistant', content: 'Welcome to RecThink! I use recursive thinking to provide better responses. Ask me anything.' }
34 | ]);
35 |
36 | // Set up WebSocket connection
37 | const ws = api.createWebSocketConnection(result.session_id);
38 | setWebsocket(ws);
39 |
40 | return result.session_id;
41 | } catch (err) {
42 | setError(err.message);
43 | return null;
44 | }
45 | };
46 |
47 | // Send a message and get response
48 | const sendMessage = async (content) => {
49 | try {
50 | setError(null);
51 |
52 | // Add user message to conversation
53 | const newMessages = [...messages, { role: 'user', content }];
54 | setMessages(newMessages);
55 |
56 | // Start thinking process
57 | setIsThinking(true);
58 |
59 | // Determine thinking rounds if set to auto
60 | let rounds = null;
61 | if (thinkingRounds !== 'auto') {
62 | rounds = parseInt(thinkingRounds, 10);
63 | }
64 |
65 | // Send message via API
66 | const result = await api.sendMessage(sessionId, content, {
67 | thinkingRounds: rounds,
68 | alternativesPerRound
69 | });
70 |
71 | // Update conversation with assistant's response
72 | setMessages([...newMessages, { role: 'assistant', content: result.response }]);
73 |
74 | // Store thinking process
75 | setThinkingProcess({
76 | rounds: result.thinking_rounds,
77 | history: result.thinking_history
78 | });
79 |
80 | setIsThinking(false);
81 | return result;
82 | } catch (err) {
83 | setError(err.message);
84 | setIsThinking(false);
85 | return null;
86 | }
87 | };
88 |
89 | // Save conversation
90 | const saveConversation = async (filename = null, fullLog = false) => {
91 | try {
92 | setError(null);
93 | return await api.saveConversation(sessionId, filename, fullLog);
94 | } catch (err) {
95 | setError(err.message);
96 | return null;
97 | }
98 | };
99 |
100 | // Load sessions
101 | const loadSessions = async () => {
102 | try {
103 | setError(null);
104 | const result = await api.listSessions();
105 | setSessions(result.sessions);
106 | return result.sessions;
107 | } catch (err) {
108 | setError(err.message);
109 | return [];
110 | }
111 | };
112 |
113 | // Delete session
114 | const deleteSession = async (id) => {
115 | try {
116 | setError(null);
117 | await api.deleteSession(id);
118 |
119 | // Update sessions list
120 | const updatedSessions = sessions.filter(session => session.session_id !== id);
121 | setSessions(updatedSessions);
122 |
123 | // Clear current session if it's the one being deleted
124 | if (id === sessionId) {
125 | setSessionId(null);
126 | setMessages([]);
127 | setThinkingProcess(null);
128 | }
129 |
130 | return true;
131 | } catch (err) {
132 | setError(err.message);
133 | return false;
134 | }
135 | };
136 |
137 | // Set up WebSocket listeners when connection is established
138 | useEffect(() => {
139 | if (!websocket) return;
140 |
141 | websocket.onopen = () => {
142 | setConnectionStatus('connected');
143 | };
144 |
145 | websocket.onclose = () => {
146 | setConnectionStatus('disconnected');
147 | };
148 |
149 | websocket.onmessage = (event) => {
150 | const data = JSON.parse(event.data);
151 |
152 | if (data.type === 'chunk') {
153 | // Handle streaming chunks for real-time updates during thinking
154 | // This could update a temporary display of thinking process
155 | } else if (data.type === 'final') {
156 | // Handle final response with complete thinking history
157 | setMessages(prev => [...prev.slice(0, -1), { role: 'assistant', content: data.response }]);
158 | setThinkingProcess({
159 | rounds: data.thinking_rounds,
160 | history: data.thinking_history
161 | });
162 | setIsThinking(false);
163 | } else if (data.error) {
164 | setError(data.error);
165 | setIsThinking(false);
166 | }
167 | };
168 |
169 |     websocket.onerror = () => {
170 |       // WebSocket error events carry no message property; report a generic error
171 |       setError('WebSocket connection error');
172 |       setConnectionStatus('error');
173 |     };
173 |
174 | // Clean up function
175 | return () => {
176 | if (websocket && websocket.readyState === WebSocket.OPEN) {
177 | websocket.close();
178 | }
179 | };
180 | }, [websocket]);
181 |
182 | // Context value
183 | const value = {
184 | sessionId,
185 | messages,
186 | isThinking,
187 | thinkingProcess,
188 | apiKey,
189 | model,
190 | thinkingRounds,
191 | alternativesPerRound,
192 | error,
193 | showThinkingProcess,
194 | sessions,
195 | connectionStatus,
196 |
197 | // Setters
198 | setApiKey,
199 | setModel,
200 | setThinkingRounds,
201 | setAlternativesPerRound,
202 | setShowThinkingProcess,
203 |
204 | // Actions
205 | initializeChat,
206 | sendMessage,
207 | saveConversation,
208 | loadSessions,
209 | deleteSession
210 | };
211 |
212 |   return (
213 |     <RecThinkContext.Provider value={value}>
214 |       {children}
215 |     </RecThinkContext.Provider>
216 |   );
217 | };
218 |
219 | export default RecThinkContext;
220 |
--------------------------------------------------------------------------------
/frontend/src/index.css:
--------------------------------------------------------------------------------
1 | @tailwind base;
2 | @tailwind components;
3 | @tailwind utilities;
4 |
5 | body {
6 | margin: 0;
7 | font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Roboto', 'Oxygen',
8 | 'Ubuntu', 'Cantarell', 'Fira Sans', 'Droid Sans', 'Helvetica Neue',
9 | sans-serif;
10 | -webkit-font-smoothing: antialiased;
11 | -moz-osx-font-smoothing: grayscale;
12 | }
13 |
14 | code {
15 | font-family: source-code-pro, Menlo, Monaco, Consolas, 'Courier New',
16 | monospace;
17 | }
18 |
--------------------------------------------------------------------------------
/frontend/src/index.js:
--------------------------------------------------------------------------------
1 | import React from 'react';
2 | import ReactDOM from 'react-dom/client';
3 | import './index.css';
4 | import App from './App';
5 | import reportWebVitals from './reportWebVitals';
6 |
7 | const root = ReactDOM.createRoot(document.getElementById('root'));
8 | root.render(
9 |   <React.StrictMode>
10 |     <App />
11 |   </React.StrictMode>
12 | );
13 |
14 | // If you want to start measuring performance in your app, pass a function
15 | // to log results (for example: reportWebVitals(console.log))
16 | // or send to an analytics endpoint. Learn more: https://bit.ly/CRA-vitals
17 | reportWebVitals();
18 |
--------------------------------------------------------------------------------
/frontend/src/reportWebVitals.js:
--------------------------------------------------------------------------------
1 | const reportWebVitals = onPerfEntry => {
2 | if (onPerfEntry && onPerfEntry instanceof Function) {
3 | import('web-vitals').then(({ getCLS, getFID, getFCP, getLCP, getTTFB }) => {
4 | getCLS(onPerfEntry);
5 | getFID(onPerfEntry);
6 | getFCP(onPerfEntry);
7 | getLCP(onPerfEntry);
8 | getTTFB(onPerfEntry);
9 | });
10 | }
11 | };
12 |
13 | export default reportWebVitals;
14 |
--------------------------------------------------------------------------------
/frontend/tailwind.config.js:
--------------------------------------------------------------------------------
1 | /** @type {import('tailwindcss').Config} */
2 | module.exports = {
3 | content: [
4 | "./src/**/*.{js,jsx,ts,tsx}",
5 | ],
6 | theme: {
7 | extend: {},
8 | },
9 | plugins: [],
10 | }
11 |
--------------------------------------------------------------------------------
/install-dependencies.bat:
--------------------------------------------------------------------------------
1 | @echo off
2 | echo RecThink - Installing Dependencies
3 | echo ================================
4 | echo.
5 |
6 | echo [1/2] Installing Python dependencies...
7 | python -m pip install --upgrade pip
8 | pip install -r requirements.txt --no-cache-dir --force-reinstall
9 |
10 | echo.
11 | echo [2/2] Installing Frontend dependencies...
12 | cd frontend
13 | call
--------------------------------------------------------------------------------
/non-rec.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PhialsBasement/Chain-of-Recursive-Thoughts/5fa5f661c041a1866ebc28e8873c210c88ce52b8/non-rec.png
--------------------------------------------------------------------------------
/rec.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/PhialsBasement/Chain-of-Recursive-Thoughts/5fa5f661c041a1866ebc28e8873c210c88ce52b8/rec.PNG
--------------------------------------------------------------------------------
/recthink_web.py:
--------------------------------------------------------------------------------
1 | from fastapi import FastAPI, WebSocket, HTTPException, Depends, Request, WebSocketDisconnect
2 | from fastapi.responses import JSONResponse, HTMLResponse
3 | from fastapi.staticfiles import StaticFiles
4 | from fastapi.middleware.cors import CORSMiddleware
5 | from pydantic import BaseModel
6 | import uvicorn
7 | import json
8 | import os
9 | import asyncio
10 | from datetime import datetime
11 | from typing import List, Dict, Optional, Any
12 | import logging
13 |
14 | # Import the main RecThink class
15 | from recursive_thinking_ai import EnhancedRecursiveThinkingChat
16 |
17 | # Set up logging
18 | logging.basicConfig(level=logging.INFO)
19 | logger = logging.getLogger(__name__)
20 |
21 | app = FastAPI(title="RecThink API", description="API for Enhanced Recursive Thinking Chat")
22 |
23 | # Add CORS middleware
24 | app.add_middleware(
25 | CORSMiddleware,
26 | allow_origins=["*"], # In production, replace with specific origins
27 | allow_credentials=True,
28 | allow_methods=["*"],
29 | allow_headers=["*"],
30 | )
31 |
32 | # Create a dictionary to store chat instances
33 | chat_instances = {}
34 |
35 | # Pydantic models for request/response validation
36 | class ChatConfig(BaseModel):
37 | api_key: str
38 | model: str = "mistralai/mistral-small-3.1-24b-instruct:free"
39 |
40 | class MessageRequest(BaseModel):
41 | session_id: str
42 | message: str
43 | thinking_rounds: Optional[int] = None
44 | alternatives_per_round: Optional[int] = 3
45 |
46 | class SaveRequest(BaseModel):
47 | session_id: str
48 | filename: Optional[str] = None
49 | full_log: bool = False
50 |
51 | @app.post("/api/initialize")
52 | async def initialize_chat(config: ChatConfig):
53 | """Initialize a new chat session"""
54 | try:
55 | # Generate a session ID
56 | session_id = f"session_{datetime.now().strftime('%Y%m%d%H%M%S')}_{os.urandom(4).hex()}"
57 |
58 | # Initialize the chat instance
59 | chat = EnhancedRecursiveThinkingChat(api_key=config.api_key, model=config.model)
60 | chat_instances[session_id] = chat
61 |
62 | return {"session_id": session_id, "status": "initialized"}
63 | except Exception as e:
64 | logger.error(f"Error initializing chat: {str(e)}")
65 | raise HTTPException(status_code=500, detail=f"Failed to initialize chat: {str(e)}")
66 |
67 | @app.post("/api/send_message")
68 | async def send_message(request: MessageRequest):
69 | """Send a message and get a response with thinking process"""
70 | try:
71 | if request.session_id not in chat_instances:
72 | raise HTTPException(status_code=404, detail="Session not found")
73 |
74 | chat = chat_instances[request.session_id]
75 |
76 | # Override class parameters if provided
77 | original_thinking_fn = chat._determine_thinking_rounds
78 | original_alternatives_fn = chat._generate_alternatives
79 |
80 | if request.thinking_rounds is not None:
81 | # Override the thinking rounds determination
82 | chat._determine_thinking_rounds = lambda _: request.thinking_rounds
83 |
84 | if request.alternatives_per_round is not None:
85 | # Store the original function
86 | def modified_generate_alternatives(base_response, prompt, num_alternatives=3):
87 | return original_alternatives_fn(base_response, prompt, request.alternatives_per_round)
88 |
89 | chat._generate_alternatives = modified_generate_alternatives
90 |
91 |         # Process the message, restoring the original methods even if it fails
92 |         try:
93 |             result = chat.think_and_respond(request.message, verbose=True)
94 |         finally:
95 |             chat._determine_thinking_rounds = original_thinking_fn
96 |             chat._generate_alternatives = original_alternatives_fn
97 |
98 | return {
99 | "session_id": request.session_id,
100 | "response": result["response"],
101 | "thinking_rounds": result["thinking_rounds"],
102 | "thinking_history": result["thinking_history"]
103 | }
104 |     except HTTPException:
105 |         # Let intentional HTTP errors (e.g. the 404 above) pass through unchanged
106 |         raise
107 |     except Exception as e:
108 |         logger.error(f"Error processing message: {str(e)}")
109 |         raise HTTPException(status_code=500, detail=f"Failed to process message: {str(e)}")
107 |
108 | @app.post("/api/save")
109 | async def save_conversation(request: SaveRequest):
110 | """Save the conversation or full thinking log"""
111 | try:
112 | if request.session_id not in chat_instances:
113 | raise HTTPException(status_code=404, detail="Session not found")
114 |
115 | chat = chat_instances[request.session_id]
116 |
117 | filename = request.filename
118 | if request.full_log:
119 | chat.save_full_log(filename)
120 | else:
121 | chat.save_conversation(filename)
122 |
123 | return {"status": "saved", "filename": filename}
124 |     except HTTPException:
125 |         # Let intentional HTTP errors (e.g. the 404 above) pass through unchanged
126 |         raise
127 |     except Exception as e:
128 |         logger.error(f"Error saving conversation: {str(e)}")
129 |         raise HTTPException(status_code=500, detail=f"Failed to save conversation: {str(e)}")
127 |
128 | @app.get("/api/sessions")
129 | async def list_sessions():
130 | """List all active chat sessions"""
131 | sessions = []
132 | for session_id, chat in chat_instances.items():
133 | sessions.append({
134 | "session_id": session_id,
135 | "message_count": len(chat.conversation_history) // 2, # Each message-response pair counts as 2
136 | "created_at": session_id.split("_")[1] # Extract timestamp from session ID
137 | })
138 |
139 | return {"sessions": sessions}
140 |
141 | @app.delete("/api/sessions/{session_id}")
142 | async def delete_session(session_id: str):
143 | """Delete a chat session"""
144 | if session_id not in chat_instances:
145 | raise HTTPException(status_code=404, detail="Session not found")
146 |
147 | del chat_instances[session_id]
148 | return {"status": "deleted", "session_id": session_id}
149 |
150 | # WebSocket for streaming thinking process
151 | @app.websocket("/ws/{session_id}")
152 | async def websocket_endpoint(websocket: WebSocket, session_id: str):
153 | await websocket.accept()
154 |
155 | if session_id not in chat_instances:
156 | await websocket.send_json({"error": "Session not found"})
157 | await websocket.close()
158 | return
159 |
160 | chat = chat_instances[session_id]
161 |
162 | try:
163 | # Set up a custom callback to stream thinking process
164 | original_call_api = chat._call_api
165 |
166 | async def stream_callback(chunk):
167 | await websocket.send_json({"type": "chunk", "content": chunk})
168 |
169 |         # Override the _call_api method to also send updates via WebSocket.
170 |         # think_and_respond runs in a worker thread (see below), so chunks
171 |         # must be handed back to the event loop thread-safely.
172 |         loop = asyncio.get_running_loop()
173 |
174 |         def ws_call_api(messages, temperature=0.7, stream=True):
175 |             result = original_call_api(messages, temperature, stream)
176 |             if stream:
177 |                 asyncio.run_coroutine_threadsafe(stream_callback(result), loop)
178 |             return result
179 |
180 |         # Replace the method temporarily
181 |         chat._call_api = ws_call_api
182 |
183 |         # Wait for messages from the client
184 |         while True:
185 |             data = await websocket.receive_text()
186 |             message_data = json.loads(data)
187 |
188 |             if message_data["type"] == "message":
189 |                 # Run the blocking thinking loop in a worker thread so the
190 |                 # event loop stays free to deliver streamed chunks as they arrive
191 |                 result = await asyncio.to_thread(
192 |                     chat.think_and_respond, message_data["content"], verbose=True
193 |                 )
188 |
189 | # Send the final result
190 | await websocket.send_json({
191 | "type": "final",
192 | "response": result["response"],
193 | "thinking_rounds": result["thinking_rounds"],
194 | "thinking_history": result["thinking_history"]
195 | })
196 |
197 | except WebSocketDisconnect:
198 | logger.info(f"WebSocket disconnected: {session_id}")
199 | except Exception as e:
200 | logger.error(f"WebSocket error: {str(e)}")
201 | await websocket.send_json({"error": str(e)})
202 | finally:
203 | # Restore original method
204 | chat._call_api = original_call_api
205 |
206 | # Serve the React app
207 | @app.get("/")
208 | async def root():
209 | return {"message": "RecThink API is running. Frontend available at http://localhost:3000"}
210 |
211 | if __name__ == "__main__":
212 | uvicorn.run("recthink_web:app", host="0.0.0.0", port=8000, reload=True)
213 |
--------------------------------------------------------------------------------
/recursive_thinking_ai.py:
--------------------------------------------------------------------------------
1 | import os
2 | import json
3 | import requests
4 | from datetime import datetime
5 | from typing import List, Dict
9 |
10 | class EnhancedRecursiveThinkingChat:
11 | def __init__(self, api_key: str = None, model: str = "mistralai/mistral-small-3.1-24b-instruct:free"):
12 | """Initialize with OpenRouter API."""
13 | self.api_key = api_key or os.getenv("OPENROUTER_API_KEY")
14 | self.model = model
15 | self.base_url = "https://openrouter.ai/api/v1/chat/completions"
16 | self.headers = {
17 | "Authorization": f"Bearer {self.api_key}",
18 | "HTTP-Referer": "http://localhost:3000",
19 | "X-Title": "Recursive Thinking Chat",
20 | "Content-Type": "application/json"
21 | }
22 | self.conversation_history = []
23 | self.full_thinking_log = []
24 |
25 | def _call_api(self, messages: List[Dict], temperature: float = 0.7, stream: bool = True) -> str:
26 | """Make an API call to OpenRouter with streaming support."""
27 | payload = {
28 | "model": self.model,
29 | "messages": messages,
30 | "temperature": temperature,
31 | "stream": stream,
32 | "reasoning": {
33 | "max_tokens": 10386,
34 | }
35 | }
36 |
37 | try:
38 | response = requests.post(self.base_url, headers=self.headers, json=payload, stream=stream)
39 | response.raise_for_status()
40 |
41 | if stream:
42 | full_response = ""
43 | for line in response.iter_lines():
44 | if line:
45 | line = line.decode('utf-8')
46 | if line.startswith("data: "):
47 | line = line[6:]
48 | if line.strip() == "[DONE]":
49 | break
50 | try:
51 | chunk = json.loads(line)
52 | if "choices" in chunk and len(chunk["choices"]) > 0:
53 | delta = chunk["choices"][0].get("delta", {})
54 | content = delta.get("content", "")
55 | if content:
56 | full_response += content
57 | print(content, end="", flush=True)
58 | except json.JSONDecodeError:
59 | continue
60 | print() # New line after streaming
61 | return full_response
62 | else:
63 | return response.json()['choices'][0]['message']['content'].strip()
64 | except Exception as e:
65 | print(f"API Error: {e}")
66 | return "Error: Could not get response from API"
67 |
68 | def _determine_thinking_rounds(self, prompt: str) -> int:
69 | """Let the model decide how many rounds of thinking are needed."""
70 | meta_prompt = f"""Given this message: "{prompt}"
71 |
72 | How many rounds of iterative thinking (1-5) would be optimal to generate the best response?
73 | Consider the complexity and nuance required.
74 | Respond with just a number between 1 and 5."""
75 |
76 | messages = [{"role": "user", "content": meta_prompt}]
77 |
78 | print("\n=== DETERMINING THINKING ROUNDS ===")
79 | response = self._call_api(messages, temperature=0.3, stream=True)
80 | print("=" * 50 + "\n")
81 |
82 | try:
83 | rounds = int(''.join(filter(str.isdigit, response)))
84 | return min(max(rounds, 1), 5)
85 |         except ValueError:
86 |             return 3
87 |
88 | def _generate_alternatives(self, base_response: str, prompt: str, num_alternatives: int = 3) -> List[str]:
89 | """Generate alternative responses."""
90 | alternatives = []
91 |
92 | for i in range(num_alternatives):
93 | print(f"\n=== GENERATING ALTERNATIVE {i+1} ===")
94 | alt_prompt = f"""Original message: {prompt}
95 |
96 | Current response: {base_response}
97 |
98 | Generate an alternative response that might be better. Be creative and consider different approaches.
99 | Alternative response:"""
100 |
101 | messages = self.conversation_history + [{"role": "user", "content": alt_prompt}]
102 | alternative = self._call_api(messages, temperature=0.7 + i * 0.1, stream=True)
103 | alternatives.append(alternative)
104 | print("=" * 50)
105 |
106 | return alternatives
107 |
108 | def _evaluate_responses(self, prompt: str, current_best: str, alternatives: List[str]) -> tuple[str, str]:
109 | """Evaluate responses and select the best one."""
110 | print("\n=== EVALUATING RESPONSES ===")
111 | eval_prompt = f"""Original message: {prompt}
112 |
113 | Evaluate these responses and choose the best one:
114 |
115 | Current best: {current_best}
116 |
117 | Alternatives:
118 | {chr(10).join([f"{i+1}. {alt}" for i, alt in enumerate(alternatives)])}
119 |
120 | Which response best addresses the original message? Consider accuracy, clarity, and completeness.
121 | First, respond with ONLY 'current' or a number (1-{len(alternatives)}).
122 | Then on a new line, explain your choice in one sentence."""
123 |
124 | messages = [{"role": "user", "content": eval_prompt}]
125 | evaluation = self._call_api(messages, temperature=0.2, stream=True)
126 | print("=" * 50)
127 |
128 | # Better parsing
129 | lines = [line.strip() for line in evaluation.split('\n') if line.strip()]
130 |
131 | choice = 'current'
132 | explanation = "No explanation provided"
133 |
134 | if lines:
135 | first_line = lines[0].lower()
136 | if 'current' in first_line:
137 | choice = 'current'
138 | else:
139 | for char in first_line:
140 | if char.isdigit():
141 | choice = char
142 | break
143 |
144 | if len(lines) > 1:
145 | explanation = ' '.join(lines[1:])
146 |
147 | if choice == 'current':
148 | return current_best, explanation
149 | else:
150 | try:
151 | index = int(choice) - 1
152 | if 0 <= index < len(alternatives):
153 | return alternatives[index], explanation
154 |             except ValueError:
155 |                 pass
156 |
157 | return current_best, explanation
158 |
159 | def think_and_respond(self, user_input: str, verbose: bool = True) -> Dict:
160 | """Process user input with recursive thinking."""
161 | print("\n" + "=" * 50)
162 | print("🤔 RECURSIVE THINKING PROCESS STARTING")
163 | print("=" * 50)
164 |
165 | thinking_rounds = self._determine_thinking_rounds(user_input)
166 |
167 | if verbose:
168 | print(f"\n🤔 Thinking... ({thinking_rounds} rounds needed)")
169 |
170 | # Initial response
171 | print("\n=== GENERATING INITIAL RESPONSE ===")
172 | messages = self.conversation_history + [{"role": "user", "content": user_input}]
173 | current_best = self._call_api(messages, stream=True)
174 | print("=" * 50)
175 |
176 | thinking_history = [{"round": 0, "response": current_best, "selected": True}]
177 |
178 | # Iterative improvement
179 | for round_num in range(1, thinking_rounds + 1):
180 | if verbose:
181 | print(f"\n=== ROUND {round_num}/{thinking_rounds} ===")
182 |
183 | # Generate alternatives
184 | alternatives = self._generate_alternatives(current_best, user_input)
185 |
186 | # Store alternatives in history
187 | for i, alt in enumerate(alternatives):
188 | thinking_history.append({
189 | "round": round_num,
190 | "response": alt,
191 | "selected": False,
192 | "alternative_number": i + 1
193 | })
194 |
195 | # Evaluate and select best
196 | new_best, explanation = self._evaluate_responses(user_input, current_best, alternatives)
197 |
198 | # Update selection in history
199 | if new_best != current_best:
200 | for item in thinking_history:
201 | if item["round"] == round_num and item["response"] == new_best:
202 | item["selected"] = True
203 | item["explanation"] = explanation
204 | current_best = new_best
205 |
206 | if verbose:
207 | print(f"\n ✓ Selected alternative: {explanation}")
208 | else:
209 | for item in thinking_history:
210 | if item["selected"] and item["response"] == current_best:
211 | item["explanation"] = explanation
212 | break
213 |
214 | if verbose:
215 | print(f"\n ✓ Kept current response: {explanation}")
216 |
217 |         # Record the full trace so save_full_log has something to write
218 |         self.full_thinking_log.append({
219 |             "prompt": user_input,
220 |             "thinking_rounds": thinking_rounds,
221 |             "thinking_history": thinking_history
222 |         })
223 |
224 |         # Add to conversation history
225 |         self.conversation_history.append({"role": "user", "content": user_input})
226 |         self.conversation_history.append({"role": "assistant", "content": current_best})
220 |
221 | # Keep conversation history manageable
222 | if len(self.conversation_history) > 10:
223 | self.conversation_history = self.conversation_history[-10:]
224 |
225 | print("\n" + "=" * 50)
226 | print("🎯 FINAL RESPONSE SELECTED")
227 | print("=" * 50)
228 |
229 | return {
230 | "response": current_best,
231 | "thinking_rounds": thinking_rounds,
232 | "thinking_history": thinking_history
233 | }
234 |
235 | def save_full_log(self, filename: str = None):
236 | """Save the full thinking process log."""
237 | if filename is None:
238 | filename = f"full_thinking_log_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json"
239 |
240 | with open(filename, 'w', encoding='utf-8') as f:
241 | json.dump({
242 | "conversation": self.conversation_history,
243 | "full_thinking_log": self.full_thinking_log,
244 | "timestamp": datetime.now().isoformat()
245 | }, f, indent=2, ensure_ascii=False)
246 |
247 |         print(f"Full thinking log saved to {filename}")
248 |
249 | def save_conversation(self, filename: str = None):
250 | """Save the conversation and thinking history."""
251 | if filename is None:
252 | filename = f"chat_history_{datetime.now().strftime('%Y%m%d_%H%M%S')}.json"
253 |
254 | with open(filename, 'w', encoding='utf-8') as f:
255 | json.dump({
256 | "conversation": self.conversation_history,
257 | "timestamp": datetime.now().isoformat()
258 | }, f, indent=2, ensure_ascii=False)
259 |
260 |         print(f"Conversation saved to {filename}")
261 |
262 | def main():
263 | print("🤖 Enhanced Recursive Thinking Chat")
264 | print("=" * 50)
265 |
266 | # Get API key
267 | api_key = input("Enter your OpenRouter API key (or press Enter to use env variable): ").strip()
268 | if not api_key:
269 | api_key = os.getenv("OPENROUTER_API_KEY")
270 | if not api_key:
271 | print("Error: No API key provided and OPENROUTER_API_KEY not found in environment")
272 | return
273 |
274 | # Initialize chat
275 | chat = EnhancedRecursiveThinkingChat(api_key=api_key)
276 |
277 |     print("\nChat initialized! Type 'exit' to quit, 'save' to save the conversation, or 'save full' to save the full thinking log.")
278 | print("The AI will think recursively before each response.\n")
279 |
280 | while True:
281 | user_input = input("You: ").strip()
282 |
283 | if user_input.lower() == 'exit':
284 | break
285 | elif user_input.lower() == 'save':
286 | chat.save_conversation()
287 | continue
288 | elif user_input.lower() == 'save full':
289 | chat.save_full_log()
290 | continue
291 | elif not user_input:
292 | continue
293 |
294 | # Get response with thinking process
295 | result = chat.think_and_respond(user_input)
296 |
297 | print(f"\n🤖 AI FINAL RESPONSE: {result['response']}\n")
298 |
299 | # Always show complete thinking process
300 | print("\n--- COMPLETE THINKING PROCESS ---")
301 | for item in result['thinking_history']:
302 | print(f"\nRound {item['round']} {'[SELECTED]' if item['selected'] else '[ALTERNATIVE]'}:")
303 | print(f" Response: {item['response']}")
304 | if 'explanation' in item and item['selected']:
305 | print(f" Reason for selection: {item['explanation']}")
306 | print("-" * 50)
307 | print("--------------------------------\n")
308 |
309 | # Save on exit
310 | save_on_exit = input("Save conversation before exiting? (y/n): ").strip().lower()
311 | if save_on_exit == 'y':
312 | chat.save_conversation()
313 | save_full = input("Save full thinking log? (y/n): ").strip().lower()
314 | if save_full == 'y':
315 | chat.save_full_log()
316 |
317 | print("Goodbye! 👋")
318 |
319 | if __name__ == "__main__":
320 | main()
321 |
--------------------------------------------------------------------------------
/requirements.txt:
--------------------------------------------------------------------------------
1 | fastapi>=0.95.0
2 | uvicorn[standard]>=0.21.0
3 | websockets>=11.0.3
4 | pydantic>=1.10.7
5 | python-dotenv>=1.0.0
6 | requests>=2.28.0
7 | openai
8 |
--------------------------------------------------------------------------------
/start-recthink.bat:
--------------------------------------------------------------------------------
1 | @echo off
2 | echo Starting RecThink Web Application...
3 | echo.
4 |
5 | echo Checking for required Python packages...
6 | pip install -r requirements.txt --no-cache-dir
7 |
8 | echo.
9 | echo [1/2] Starting Backend API Server...
10 | start cmd /k "python recthink_web.py"
11 |
12 | echo [2/2] Starting Frontend Development Server...
13 | cd frontend
14 | IF NOT EXIST node_modules (
15 | echo Installing frontend dependencies...
16 | npm install
17 | )
18 | start cmd /k "npm start"
19 |
20 | echo.
21 | echo RecThink should now be available at:
22 | echo - Backend API: http://localhost:8000
23 | echo - Frontend UI: http://localhost:3000
24 | echo.
25 | echo Press any key to exit this window...
26 | pause > nul
27 |
--------------------------------------------------------------------------------