├── .DS_Store
├── .env
├── .github
├── CONTRIBUTING.md
├── ISSUE_TEMPLATE
│ ├── bug_report.yml
│ ├── documentation_bug.yml
│ ├── feature_request.yml
│ └── question_and_support.yml
├── PULL_REQUEST_TEMPLATE.md
└── README.md
├── .vscode
├── .gitignore
├── package-lock.json
├── package.json
└── settings.json
├── User.js
├── UserClass.js
├── about
├── about.html
├── contact
│ └── contact.html
├── features
│ └── features.html
└── reviews
│ └── reviews.html
├── camera
├── camera.html
├── camera.js
├── capture.js
├── models.js
└── object-detection.js
├── config
└── db.config.js
├── data.sql
├── db.js
├── footer_elements
├── Terms.html
├── blog.html
├── contributor
│ ├── contributor.css
│ ├── contributor.html
│ └── contributor.js
├── privacyp.html
├── whoweare.html
└── workwithus.html
├── images
├── FusionVision_BG.jpg
├── FusionVision_BG_blurred.jpg
├── Logo.png
├── contact.jpg
├── homepage.jpg
├── iwoc.png
├── permission.jpg
└── swoc.png
├── index.html
├── login
├── auth.js
├── login.js
├── signUp.html
├── signin.html
└── signup.js
├── script.js
├── server.js
├── styles.css
└── utils.js
/.DS_Store:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/sudiptasarkar011/FusionVision/21f1a9a878bc6e6804b16d126075d45f234150cc/.DS_Store
--------------------------------------------------------------------------------
/.env:
--------------------------------------------------------------------------------
1 | MONGODB_URI=your_mongodb_connection_string
2 | PORT=5000
3 | DB_HOST=localhost
4 | DB_USER=root
5 | DB_PASSWORD=root
6 | DB_NAME=fusion_vision
7 | JWT_SECRET=your_jwt_secret
8 |
--------------------------------------------------------------------------------
/.github/CONTRIBUTING.md:
--------------------------------------------------------------------------------
1 | # Contributing Guidelines
2 |
3 | This guide will walk you through contributing to this project. Even if you are a beginner, don't worry: the basic steps are covered too.
4 |
5 | Contributions are welcome from anyone willing to raise new issues or help resolve existing ones. Thank you for helping out, and remember,
6 | **no contribution is too small.**
7 |
8 |
9 |
10 | # Need Help With The Basics?
11 |
12 | If you're new to Git and GitHub, no worries! Here are some useful resources:
13 |
14 | - [Forking a Repository](https://help.github.com/en/github/getting-started-with-github/fork-a-repo)
15 | - [Cloning a Repository](https://help.github.com/en/desktop/contributing-to-projects/creating-an-issue-or-pull-request)
16 | - [How to Create a Pull Request](https://opensource.com/article/19/7/create-pull-request-github)
17 | - [Getting Started with Git and GitHub](https://towardsdatascience.com/getting-started-with-git-and-github-6fcd0f2d4ac6)
18 | - [Learn GitHub from Scratch](https://docs.github.com/en/get-started/start-your-journey/git-and-github-learning-resources)
19 |
20 |
21 |
22 | # First Pull Request ✨
23 |
24 | 1. **Star this repository**
25 | Click the **Star** button at the top right of the repository page.
26 |
27 | 2. **Fork this repository**
28 | Click the **Fork** button at the top right of the repository page.
29 |
30 | 3. **Clone the forked repository**
31 |
32 | ```bash
33 | git clone https://github.com/<your-username>/FusionVision.git
34 | ```
35 |
36 | 4. **Navigate to the project directory**
37 |
38 | ```bash
39 | cd FusionVision
40 | ```
41 |
42 | 5. **Create a new branch**
43 |
44 | ```bash
45 | git checkout -b <your-branch-name>
46 | ```
47 |
48 | 6. **Stage your changes**
49 |
50 | ```bash
51 | git add .
52 | ```
53 |
54 | 7. **Commit your changes**
55 |
56 | ```bash
57 | git commit -m "<a short message describing your changes>"
58 | ```
59 |
60 | 8. **Push your local commits to the remote repository**
61 |
62 | ```bash
63 | git push -u origin <your-branch-name>
64 | ```
65 |
66 | 9. **Create a Pull Request**
67 |
68 | 10. **Congratulations! 🎉 You've made your contribution!**
69 |
70 |
71 |
72 | # Alternatively, contribute using GitHub Desktop 🖥️
73 |
74 | 1. **Open GitHub Desktop:**
75 | Launch GitHub Desktop and log in to your GitHub account if you haven't already.
76 |
77 | 2. **Clone the Repository:**
78 | - If you haven't cloned the project repository yet, you can do so by clicking on the "File" menu and selecting "Clone Repository."
79 | - Choose the project repository from the list of repositories on GitHub and clone it to your local machine.
80 |
81 | 3. **Switch to the Correct Branch:**
82 | - Ensure you are on the branch that you want to submit a pull request for.
83 | - If you need to switch branches, you can do so by clicking on the "Current Branch" dropdown menu and selecting the desired branch.
84 |
85 | 4. **Make Changes:**
86 | - Make your changes to the code or files in the repository using your preferred code editor.
87 |
88 | 5. **Commit Changes:**
89 | - In GitHub Desktop, you'll see a list of the files you've changed. Check the box next to each file you want to include in the commit.
90 | - Enter a summary and description for your changes in the "Summary" and "Description" fields, respectively. Click the "Commit to <branch-name>" button to commit your changes to the local branch.
91 |
92 | 6. **Push Changes to GitHub:**
93 | - After committing your changes, click the "Push origin" button in the top right corner of GitHub Desktop to push your changes to your forked repository on GitHub.
94 |
95 | 7. **Create a Pull Request:**
96 | - Go to the GitHub website and navigate to your fork of the project repository.
97 | - You should see a button to "Compare & pull request" between your fork and the original repository. Click on it.
98 |
99 | 8. **Review and Submit:**
100 | - On the pull request page, review your changes and add any additional information, such as a title and description, that you want to include with your pull request.
101 | - Once you're satisfied, click the "Create pull request" button to submit your pull request.
102 |
103 | 9. **Wait for Review:**
104 | Your pull request will now be available for review by the project maintainers. They may provide feedback or ask for changes before merging your pull request into the main branch of the project repository.
105 |
106 |
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/bug_report.yml:
--------------------------------------------------------------------------------
1 | name: 🪲 Bug Report
2 | description: Report a bug by filling in the information below
3 | title: "[Bug]: "
4 |
5 | body:
6 | - type: markdown
7 | attributes:
8 | value: |
9 | Thanks for taking the time to fill out this bug report!
10 | - type: textarea
11 | id: bug-description
12 | attributes:
13 | label: Give a brief about the bug ✍️
14 | description: Enter a brief description about the bug report
15 | placeholder: Please include a summary, also include relevant motivation and context.
16 | value: "Description"
17 | validations:
18 | required: true
19 | - type: textarea
20 | id: behaviors
21 | attributes:
22 | label: What is the expected behavior? 🤔
23 | description: Describe what you expected to happen
24 | placeholder: Please include a summary, also include relevant motivation and context.
25 | value: "Description"
26 | validations:
27 | required: true
28 | - type: textarea
29 | id: instructions
30 | attributes:
31 | label: Provide step-by-step information to reproduce the bug 📄
32 | description: Describe the steps needed to reproduce the bug
33 | placeholder: Please include a summary, also include relevant motivation and context.
34 | value: "Description"
35 | validations:
36 | required: true
37 | - type: dropdown
38 | id: contribution
39 | attributes:
40 | label: Select program in which you are contributing
41 | multiple: true
42 | options:
43 | - GSoC'25
44 | - SWoC'25
45 | - Other
46 | - type: checkboxes
47 | id: terms
48 | attributes:
49 | label: Code of Conduct
50 | description: By submitting this issue, you agree to follow our [CODE OF CONDUCT]()
51 | options:
52 | - label: I follow [CONTRIBUTING GUIDELINE]() of this project.
53 | required: true
54 |
55 |
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/documentation_bug.yml:
--------------------------------------------------------------------------------
1 | name: 📚 Documentation or README.md issue report
2 | description: Report an issue in the project's documentation or README.md file.
3 | title: "[Documentation Bug]: "
4 |
5 | body:
6 | - type: markdown
7 | attributes:
8 | value: |
9 | Thanks for taking the time to fill out this documentation bug report!
10 | - type: textarea
11 | id: documentation-bug-description
12 | attributes:
13 | label: Describe the bug ✍️
14 | description: Enter a brief description about the documentation bug report
15 | placeholder: Please include a summary, also include relevant motivation and context.
16 | value: "Describe your bug here"
17 | validations:
18 | required: true
19 | - type: textarea
20 | id: instructions
21 | attributes:
22 | label: Provide step-by-step information to reproduce the bug 📄
23 | description: Describe the steps needed to reproduce the bug
24 | placeholder: Please include a summary, also include relevant motivation and context.
25 | value: "Description"
26 | validations:
27 | required: true
28 | - type: dropdown
29 | id: contribution
30 | attributes:
31 | label: Select program in which you are contributing
32 | multiple: true
33 | options:
34 | - GSoC'25
35 | - SWoC'25
36 | - Other
37 | - type: checkboxes
38 | id: terms
39 | attributes:
40 | label: Code of Conduct
41 | description: By submitting this issue, you agree to follow our [CODE OF CONDUCT]()
42 | options:
43 | - label: I follow [CONTRIBUTING GUIDELINE]() of this project.
44 | required: true
45 |
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/feature_request.yml:
--------------------------------------------------------------------------------
1 | name: 🆕 Feature request
2 | description: Suggest an idea for this project
3 | title: "[FEATURE] "
4 | labels: [enhancement]
5 |
6 |
7 | projects: ["new-website"]
8 |
9 | body:
10 | - type: markdown
11 | attributes:
12 | value: |
13 | Thanks for taking the time to open a feature request! Please provide us with as much detail as possible to help us understand your idea.
14 |
15 | - type: input
16 | id: title
17 | attributes:
18 | label: Feature title
19 | description: A brief title for your feature request
20 | placeholder: Add a descriptive title
21 | validations:
22 | required: true
23 |
24 | - type: textarea
25 | id: description
26 | attributes:
27 | label: Description
28 | description: A detailed description of the feature you are requesting
29 | placeholder: Describe the feature in detail
30 | validations:
31 | required: true
32 |
33 | - type: textarea
34 | id: motivation
35 | attributes:
36 | label: Motivation
37 | description: Explain why this feature would be useful
38 | placeholder: Explain the motivation behind this feature
39 | validations:
40 | required: true
41 |
42 | - type: textarea
43 | id: alternatives
44 | attributes:
45 | label: Alternatives
46 | description: Describe any alternative solutions or features you've considered
47 | placeholder: Describe alternative solutions
48 | validations:
49 | required: false
50 |
51 | - type: dropdown
52 | id: contribution
53 | attributes:
54 | label: Select program in which you are contributing
55 | multiple: true
56 | options:
57 | - GSoC'25
58 | - SWoC'25
59 | - Other
60 |
61 | - type: input
62 | id: additional_context
63 | attributes:
64 | label: Additional context
65 | description: Add any other context or screenshots about the feature request
66 | placeholder: Add any other context or screenshots
67 | validations:
68 | required: false
69 |
--------------------------------------------------------------------------------
/.github/ISSUE_TEMPLATE/question_and_support.yml:
--------------------------------------------------------------------------------
1 | name: ❓ Question or Support Request
2 | description: Questions and requests for support.
3 | title: "[Question]: "
4 |
5 | body:
6 | - type: markdown
7 | attributes:
8 | value: |
9 | Thanks for taking the time to fill out this and letting us know your question
10 | - type: textarea
11 | id: description
12 | attributes:
13 | label: Describe your question or ask for support.❓
14 | description: Enter a brief description about your question or support needed
15 | placeholder: Please include a summary, also include relevant motivation and context.
16 | value: "Description"
17 | validations:
18 | required: true
19 | - type: dropdown
20 | id: contribution
21 | attributes:
22 | label: Select program in which you are contributing
23 | multiple: true
24 | options:
25 | - GSoC'25
26 | - SWoC'25
27 | - Other
28 | - type: checkboxes
29 | id: terms
30 | attributes:
31 | label: Code of Conduct
32 | description: By submitting this issue, you agree to follow our [CODE OF CONDUCT]()
33 | options:
34 | - label: I follow [CONTRIBUTING GUIDELINE]() of this project.
35 | required: true
36 |
--------------------------------------------------------------------------------
/.github/PULL_REQUEST_TEMPLATE.md:
--------------------------------------------------------------------------------
1 | ## Description
2 |
3 | Please include a summary of the changes and the related issue. Please also include relevant motivation and context. List any dependencies that are required for this change.
4 |
5 | Fixes # (issue)
6 |
7 | ## Type of change
8 |
9 | Please delete options that are not relevant.
10 |
11 | - [ ] Bug fix (non-breaking change which fixes an issue)
12 | - [ ] New feature (non-breaking change which adds functionality)
13 | - [ ] Breaking change (fix or feature that would cause existing functionality to not work as expected)
14 | - [ ] This change requires a documentation update
15 |
16 |
17 |
18 | **Test Configuration**:
19 | * Firmware version:
20 | * Hardware:
21 | * Toolchain:
22 | * SDK:
23 |
24 | ## Checklist:
25 |
26 | - [ ] My code follows the style guidelines of this project
27 | - [ ] I have performed a self-review of my own code
28 | - [ ] I have commented my code, particularly in hard-to-understand areas
29 | - [ ] I have made corresponding changes to the documentation
30 | - [ ] My changes generate no new warnings
31 | - [ ] I have added tests that prove my fix is effective or that my feature works
32 | - [ ] New and existing unit tests pass locally with my changes
33 | - [ ] Any dependent changes have been merged and published in downstream modules
34 |
--------------------------------------------------------------------------------
/.github/README.md:
--------------------------------------------------------------------------------
1 | # Fusion Vision
2 |
3 | A real-time object detection system with voice narration capabilities. The aim is to create a social media platform for the visually impaired, where users can upload photos and receive voice descriptions of them. They can also share photos with friends and family, browse photos posted by others, and stay up to date with their surroundings.
4 |
5 |
6 | ## Features
7 |
8 | - Real-Time Object Detection: Utilizes Coco-SSD (TensorFlow.js) for live object detection.
9 | - Voice narration of detected objects
10 | - Web-based interface for easy access
11 | - Unique Social Media Platform for the visually impaired
12 | - Unique profiles and verified users
13 |
14 |
15 | ## Getting Started
16 |
17 | Follow these instructions to set up the project on your local machine.
18 |
19 | ### Prerequisites
20 |
21 | Ensure you have the following installed:
22 |
23 | - **Node.js** (for web dependencies)
24 | - **MongoDB**
25 |
26 | ### Installation
27 |
28 | 1. **Clone the repository:**
29 | ```bash
30 | git clone https://github.com/yourusername/fusion-vision.git
31 | cd fusion-vision
32 | ```
33 |
34 | 2. **Install Dependencies**
35 |
36 | ```bash
37 | npm install
38 | ```
39 |
40 | 3. **Set Up the Database**
41 |
42 | - Ensure MongoDB is running locally or configure a remote MongoDB instance.
43 | - Create `.env` file with the following content:
44 |
45 | ```bash
46 | MONGODB_URI=mongodb://localhost:27017/camera-app
47 | PORT=5000
48 | DB_HOST=localhost
49 | DB_USER=root
50 | DB_PASSWORD=your_password_here
51 | DB_NAME=fusion_vision
52 | ```
53 |
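At startup, `server.js` can read these variables via `dotenv`. As a minimal illustrative sketch of what that loading amounts to (the `parseEnv` function here is a hypothetical stand-in, not part of the project or of the dotenv API):

```javascript
// Hypothetical minimal re-implementation of what dotenv does with the
// .env file above: each "KEY=value" line becomes a config entry.
function parseEnv(text) {
  const config = {};
  for (const line of text.split("\n")) {
    const m = line.match(/^\s*([A-Z_][A-Z0-9_]*)\s*=\s*(.*?)\s*$/);
    if (m) config[m[1]] = m[2];
  }
  return config;
}

const config = parseEnv("PORT=5000\nDB_HOST=localhost\nDB_NAME=fusion_vision");
console.log(config.PORT);    // "5000"
console.log(config.DB_NAME); // "fusion_vision"
```

In the real app, `require('dotenv').config()` populates `process.env` the same way, and `models.js` exits early if `JWT_SECRET` is missing.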
54 | 4. **Start the Server**:
55 |
56 | ```bash
57 | node server.js
58 | ```
59 |
60 | 5. **Launch the web interface:**
61 |
62 | Open `index.html` in a modern web browser.
63 |
64 | ### Dependencies
65 |
66 | #### Core Requirements
67 |
68 | - **TensorFlow.js**
69 | - Deep learning framework for object detection.
70 | - Used for running the COCO-SSD model.
71 |
72 |
73 | #### Web Technologies
74 |
75 | - **HTML5**
76 | - Camera API
77 | - Canvas for drawing
78 | - Speech synthesis
79 |
80 | - **JavaScript**
81 | - TensorFlow.js
82 | - WebRTC for camera access
83 | - Speech synthesis API
84 |
85 | #### Browser Requirements
86 |
87 | - Modern web browser with:
88 | - WebRTC support
89 | - JavaScript enabled
90 | - Web Speech API support
91 |
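The detection-to-narration path can be sketched as follows. Only the COCO-SSD prediction shape (`{class, score, bbox}`) and the browser's `speechSynthesis` API are taken from the stack above; the `describePredictions` helper and the 0.5 confidence cutoff are illustrative assumptions:

```javascript
// Hypothetical helper: turn COCO-SSD predictions into one spoken sentence.
// Assumed prediction shape: { class: string, score: number, bbox: [x, y, w, h] }
function describePredictions(predictions) {
  const names = predictions
    .filter((p) => p.score >= 0.5) // drop low-confidence detections
    .map((p) => p.class);
  if (names.length === 0) return "No objects detected.";
  return "I can see: " + names.join(", ") + ".";
}

// Browser-only: speak the description aloud via the Web Speech API.
function narrate(text) {
  if (typeof window !== "undefined" && "speechSynthesis" in window) {
    window.speechSynthesis.speak(new SpeechSynthesisUtterance(text));
  }
}

narrate(describePredictions([
  { class: "person", score: 0.91, bbox: [10, 10, 100, 200] },
  { class: "dog", score: 0.42, bbox: [120, 40, 80, 60] },
]));
```

In the app itself, this kind of helper would be called with the `predictions` array that `model.detect()` resolves with in `camera/object-detection.js`.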
92 | ---
93 | ### Accessibility Permissions
94 |
95 | To ensure the application functions correctly, you need to grant the following permissions in your web browser:
96 |
97 | - **Camera Access**: Required for capturing images and performing real-time object detection.
98 | - **Microphone Access**: Needed if the application includes voice input features.
99 | - **Speech Synthesis**: Ensure your browser allows speech synthesis for voice narration of detected objects.
100 |
101 | ## Screenshots
102 |
103 | Here are some screenshots of the application:
104 |
105 | - **Contact Page**
106 |
107 | 
108 |
109 | - **Homepage**
110 |
111 | 
112 |
113 | - **Permission Request**
114 |
115 | 
116 |
117 |
118 |
119 | ## Open Source Programs featuring Fusion Vision
120 |
121 | - **Social Winter Of Code 2025**
122 |
123 | 
124 |
125 | - **InnoGeeks Winter of Code**
126 |
127 | 
128 |
129 | ## Contributing
130 |
131 | Please read [CONTRIBUTING.md](CONTRIBUTING.md) for details on our code of conduct and the process for submitting pull requests.
132 |
133 |
134 |
135 |
--------------------------------------------------------------------------------
/.vscode/.gitignore:
--------------------------------------------------------------------------------
1 | node_modules/
2 | .env
--------------------------------------------------------------------------------
/.vscode/package.json:
--------------------------------------------------------------------------------
1 | {
2 | "scripts": {
3 | "init-db": "mysql -u root -p < data.sql",
4 | "start": "node server.js"
5 | },
6 | "type": "commonjs",
7 | "dependencies": {
8 | "bcrypt": "^5.1.1",
9 | "bcryptjs": "^2.4.3",
10 | "body-parser": "^1.20.3",
11 | "cors": "^2.8.5",
12 | "dotenv": "^16.4.7",
13 | "express": "^4.21.2",
14 | "express-validator": "^7.2.1",
15 | "fs": "^0.0.1-security",
16 | "helmet": "^8.0.0",
17 | "jsonwebtoken": "^9.0.2",
18 | "mongodb": "^6.12.0",
19 | "mongoose": "^8.9.5",
20 | "multer": "^1.4.5-lts.1",
21 | "path": "^0.12.7"
22 | }
23 | }
24 |
--------------------------------------------------------------------------------
/.vscode/settings.json:
--------------------------------------------------------------------------------
1 | {
2 | "liveServer.settings.multiRootWorkspaceName": "FusionVision"
3 | }
--------------------------------------------------------------------------------
/User.js:
--------------------------------------------------------------------------------
1 | const mongoose = require('mongoose');
2 |
3 | const userSchema = new mongoose.Schema({
4 | username: {
5 | type: String,
6 | required: true,
7 | unique: true
8 | },
9 | password: {
10 | type: String,
11 | required: true
12 | },
13 | images: [{
14 | type: String, // Will store image URLs or base64 strings
15 | default: []
16 | }],
17 | createdAt: {
18 | type: Date,
19 | default: Date.now
20 | }
21 | });
22 |
23 | module.exports = mongoose.model('User', userSchema);
--------------------------------------------------------------------------------
/UserClass.js:
--------------------------------------------------------------------------------
1 | var currentUser = null;
2 | var userImages = {};
3 | // Function to handle user login
4 | function handleLogin(username, password) {
5 | // In a real app, validate against backend
6 | // For demo, just store username
7 | currentUser = username;
8 | // Load user's saved images from localStorage
9 | var savedData = localStorage.getItem(username);
10 | if (savedData) {
11 | userImages[username] = JSON.parse(savedData);
12 | }
13 | else {
14 | userImages[username] = [];
15 | }
16 | // Update welcome message
17 | updateWelcomeMessage();
18 | }
19 | // Function to handle user registration
20 | function handleRegistration(username, password) {
21 | // In a real app, store in backend
22 | // For demo, just initialize storage
23 | currentUser = username;
24 | userImages[username] = [];
25 | localStorage.setItem(username, JSON.stringify([]));
26 | updateWelcomeMessage();
27 | }
28 | // Function to update welcome message
29 | function updateWelcomeMessage() {
30 | var welcomeDiv = document.createElement('div');
31 | welcomeDiv.id = 'welcome-message';
32 | welcomeDiv.style.color = 'white';
33 | welcomeDiv.style.padding = '10px';
34 | welcomeDiv.style.textAlign = 'center';
35 | welcomeDiv.innerHTML = "Welcome, ".concat(currentUser, "!");
36 | // Insert at top of main content
37 | var mainElement = document.querySelector('main');
38 | if (mainElement) {
39 | mainElement.prepend(welcomeDiv);
40 | }
41 | }
42 | // Modified captureImage function to save images
43 | function captureImage() {
44 | if (!currentUser) {
45 | alert('Please log in first!');
46 | return;
47 | }
48 | var canvas = document.getElementById("capturedCanvas");
49 | if (!canvas)
50 | return;
51 | var context = canvas.getContext("2d");
52 | if (!context)
53 | return;
54 | var video = document.getElementById("cameraPreview");
55 | if (!video)
56 | return;
57 | // Set canvas dimensions to match video
58 | canvas.width = video.videoWidth;
59 | canvas.height = video.videoHeight;
60 | // Draw video frame to canvas
61 | context.drawImage(video, 0, 0, canvas.width, canvas.height);
62 | // Save image data
63 | var imageData = canvas.toDataURL('image/jpeg');
64 | if (currentUser && userImages[currentUser]) {
65 | userImages[currentUser].push({
66 | timestamp: new Date().toISOString(),
67 | data: imageData
68 | });
69 | // Save to localStorage
70 | localStorage.setItem(currentUser, JSON.stringify(userImages[currentUser]));
71 | }
72 | }
73 | // Function to handle logout
74 | function handleLogout() {
75 | if (currentUser) {
76 | // Save any pending data
77 | localStorage.setItem(currentUser, JSON.stringify(userImages[currentUser]));
78 | currentUser = null;
79 | // Remove welcome message
80 | var welcomeMsg = document.getElementById('welcome-message');
81 | if (welcomeMsg) {
82 | welcomeMsg.remove();
83 | }
84 | // Redirect to login page
85 | window.location.href = 'index.html';
86 | }
87 | }
88 | // Function to display user's saved images
89 | function displaySavedImages() {
90 | if (!currentUser || !userImages[currentUser])
91 | return;
92 | var images = userImages[currentUser];
93 | var container = document.createElement('div');
94 | container.className = 'saved-images';
95 | images.forEach(function (img) {
96 | var imgElement = document.createElement('img');
97 | imgElement.src = img.data;
98 | imgElement.style.maxWidth = '200px';
99 | imgElement.style.margin = '10px';
100 | container.appendChild(imgElement);
101 | });
102 | var mainElement = document.querySelector('main');
103 | if (mainElement) {
104 | mainElement.appendChild(container);
105 | }
106 | }
107 |
--------------------------------------------------------------------------------
/about/about.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 | About - Fusion Vision
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
87 |
88 |
89 |
90 |
91 | Fusion Vision
92 |
93 |
100 |
101 |
102 |
103 |
104 |
105 |
106 |
107 |
108 |
109 |
110 | About Fusion Vision
111 |
112 | Fusion Vision is an advanced object detection platform that leverages
113 | cutting-edge AI technology to provide real-time object recognition and analysis.
114 |
115 | For instance, combining data from LiDAR and cameras has been shown to improve 3D object detection by leveraging the strengths of both sensor types.
116 |
117 | Similarly, the fusion of radar and camera data has been explored to improve object detection and tracking, providing robustness in various environmental conditions.
118 |
119 | Blog Section
120 |
121 |
122 |
123 | Fusion Vision | Revolutionizing Object Detection
124 | Discover how Fusion Vision is transforming real-time object detection using AI and sensor fusion techniques.
"Fusion Vision is life-changing! The voice narration feature is incredibly helpful for navigating the world."
183 | Rating: ⭐⭐⭐⭐⭐
184 |
185 |
186 |
187 | John Smith
188 | "This platform bridges the gap for visually impaired individuals to connect with others."
189 | Rating: ⭐⭐⭐⭐⭐
190 |
191 |
192 |
193 | Emily Johnson
194 | "Object detection is so accurate, and the voice descriptions are spot on. Highly recommend!"
195 | Rating: ⭐⭐⭐⭐⭐
196 |
197 |
198 |
199 | Michael Brown
200 | "A fantastic initiative. The social media integration is a brilliant touch."
201 | Rating: ⭐⭐⭐⭐☆
202 |
203 |
204 |
205 |
206 |
207 |
208 | Leave a Comment
209 |
214 |
215 |
216 |
217 |
261 |
262 |
263 |
264 |
291 |
292 |
293 |
--------------------------------------------------------------------------------
/camera/camera.js:
--------------------------------------------------------------------------------
1 | let videoStream = null;
2 |
3 | // Start the camera preview
4 | navigator.mediaDevices
5 | .getUserMedia({ video: true })
6 | .then((stream) => {
7 | videoStream = stream;
8 | const videoElement = document.getElementById("cameraPreview");
9 | videoElement.srcObject = stream;
10 | })
11 | .catch((error) => {
12 | alert("Unable to access the camera. Please check your device settings.");
13 | console.error("Camera Error:", error);
14 | });
15 |
16 | // Stop the camera
17 | function stopCamera() {
18 | if (videoStream) {
19 | const tracks = videoStream.getTracks();
20 | tracks.forEach((track) => track.stop());
21 | alert("Camera stopped!");
22 | window.location.href = "index.html"; // Redirect back to the homepage
23 | }
24 | }
--------------------------------------------------------------------------------
/camera/capture.js:
--------------------------------------------------------------------------------
1 | let videoStream = null;
2 |
3 | // Start the camera preview
4 | navigator.mediaDevices
5 | .getUserMedia({ video: true })
6 | .then((stream) => {
7 | videoStream = stream;
8 | const videoElement = document.getElementById("cameraPreview");
9 | videoElement.srcObject = stream;
10 | })
11 | .catch((error) => {
12 | alert("Unable to access the camera. Please check your device settings.");
13 | console.error("Camera Error:", error);
14 | });
15 |
16 | // Capture image from the video feed
17 | function captureImage() {
18 | const videoElement = document.getElementById("cameraPreview");
19 | const canvas = document.getElementById("capturedCanvas");
20 | const imageElement = document.getElementById("capturedImage");
21 |
22 | // Set canvas size to match the video feed
23 | canvas.width = videoElement.videoWidth;
24 | canvas.height = videoElement.videoHeight;
25 |
26 | // Draw the video frame onto the canvas
27 | const context = canvas.getContext("2d");
28 | context.drawImage(videoElement, 0, 0, canvas.width, canvas.height);
29 |
30 | // Convert canvas content to a data URL
31 | const imageData = canvas.toDataURL("image/png");
32 |
33 | // Display the captured image
34 | imageElement.src = imageData;
35 | imageElement.style.display = "block";
36 |
37 | alert("Image captured! Click 'Scan Image' to process.");
38 | }
39 |
40 | // Process the captured image
41 | function processCapturedImage() {
42 | const imageElement = document.getElementById("capturedImage");
43 |
44 | if (!imageElement.src) {
45 | alert("Please capture an image first!");
46 | return;
47 | }
48 |
49 | // Send the captured image data for further processing
50 | // Replace this part with actual object detection logic (e.g., via a backend or TensorFlow.js)
51 | alert("Processing the captured image for object detection...");
52 | console.log("Captured Image Data URL:", imageElement.src);
53 |
54 | // Add your object detection code here (e.g., TensorFlow.js or an API call)
55 | }
56 |
57 | // Stop the camera
58 | function stopCamera() {
59 | if (videoStream) {
60 | const tracks = videoStream.getTracks();
61 | tracks.forEach((track) => track.stop());
62 | alert("Camera stopped!");
63 | window.location.href = "index.html"; // Redirect back to the homepage
64 | }
65 | }
--------------------------------------------------------------------------------
/camera/models.js:
--------------------------------------------------------------------------------
1 | const bcrypt = require('bcrypt');
2 | const jwt = require('jsonwebtoken');
3 | const connectDB = require('../db'); // Import your DB connection
4 |
5 | require('dotenv').config();
6 | const jwtSecret = process.env.JWT_SECRET;
7 |
8 | if (!jwtSecret) {
9 | console.error('JWT_SECRET is not defined in .env');
10 | process.exit(1); // Exit if JWT secret is missing
11 | }
12 |
13 | async function registerUser(username, password) {
14 | try {
15 | const hashedPassword = await bcrypt.hash(password, 10);
16 | const db = await connectDB();
17 | const result = await db.collection('users').insertOne({ username, password: hashedPassword });
18 | return result.insertedId;
19 | } catch (error) {
20 | console.error('Error registering user:', error.message);
21 | throw new Error('Registration failed');
22 | }
23 | }
24 |
25 | async function loginUser(username, password) {
26 | try {
27 | const db = await connectDB();
28 | const user = await db.collection('users').findOne({ username });
29 |
30 | if (!user || !(await bcrypt.compare(password, user.password))) {
31 | throw new Error('Invalid username or password');
32 | }
33 |
34 | // Generate a JWT token
35 | const token = jwt.sign({ id: user._id }, jwtSecret, { expiresIn: '1h' });
36 | return { token, userId: user._id };
37 | } catch (error) {
38 | console.error('Error logging in user:', error.message);
39 | throw new Error('Login failed');
40 | }
41 | }
42 |
43 | async function saveImage(userId, imageUrl, description) {
44 | try {
45 | const db = await connectDB();
46 | const result = await db.collection('images').insertOne({ userId, imageUrl, description, createdAt: new Date() });
47 | return result.insertedId;
48 | } catch (error) {
49 | console.error('Error saving image:', error.message);
50 | throw new Error('Saving image failed');
51 | }
52 | }
53 |
54 | async function getUserImages(userId) {
55 | try {
56 | const db = await connectDB();
57 | const images = await db.collection('images').find({ userId }).toArray();
58 | return images;
59 | } catch (error) {
60 | console.error('Error retrieving user images:', error.message);
61 | throw new Error('Retrieving images failed');
62 | }
63 | }
64 |
65 | module.exports = { registerUser, loginUser, saveImage, getUserImages };
66 |
--------------------------------------------------------------------------------
/camera/object-detection.js:
--------------------------------------------------------------------------------
1 | let videoStream = null;
2 | let model = null;
3 |
4 | // Load the COCO-SSD model
5 | cocoSsd.load().then((loadedModel) => {
6 | model = loadedModel;
7 | console.log("COCO-SSD model loaded successfully!");
8 | }).catch((error) => {
9 | console.error("Error loading the COCO-SSD model:", error);
10 | });
11 |
12 | // Start the camera preview
13 | navigator.mediaDevices
14 | .getUserMedia({ video: true })
15 | .then((stream) => {
16 | videoStream = stream;
17 | const videoElement = document.getElementById("cameraPreview");
18 | videoElement.srcObject = stream;
19 | })
20 | .catch((error) => {
21 | alert("Unable to access the camera. Please check your device settings.");
22 | console.error("Camera Error:", error);
23 | });
24 |
25 | // Capture image from the video feed
26 | function captureImage() {
27 | const videoElement = document.getElementById("cameraPreview");
28 | const canvas = document.getElementById("capturedCanvas");
29 |
30 | // Set canvas size to match the video feed
31 | canvas.width = videoElement.videoWidth;
32 | canvas.height = videoElement.videoHeight;
33 |
34 | // Draw the video frame onto the canvas
35 | const context = canvas.getContext("2d");
36 | context.drawImage(videoElement, 0, 0, canvas.width, canvas.height);
37 |
38 | alert("Image captured! Click 'Scan Image' to detect objects.");
39 | }
40 |
41 | // Process the captured image for object detection
42 | function processCapturedImage() {
43 | if (!model) {
44 | alert("Model is not loaded yet. Please wait.");
45 | return;
46 | }
47 |
48 | const canvas = document.getElementById("capturedCanvas");
49 | const context = canvas.getContext("2d");
50 |
51 | // Get image data from the canvas
52 | const imageData = tf.browser.fromPixels(canvas);
53 |
54 | // Run object detection
55 | model.detect(imageData).then((predictions) => {
56 | console.log("Predictions:", predictions);
57 | displayPredictions(predictions, context);
58 | imageData.dispose(); // Clean up memory
59 | }).catch((error) => {
60 | console.error("Error during object detection:", error);
61 | });
62 | }
63 |
64 | // Display object detection predictions
65 | function displayPredictions(predictions, context) {
66 | predictions.forEach((prediction) => {
67 | const [x, y, width, height] = prediction.bbox;
68 |
69 | // Draw bounding box
70 | context.strokeStyle = "#00FF00";
71 | context.lineWidth = 2;
72 | context.strokeRect(x, y, width, height);
73 |
74 | // Draw label
75 | context.font = "16px Arial";
76 | context.fillStyle = "#00FF00";
77 | context.fillText(
78 | `${prediction.class} (${(prediction.score * 100).toFixed(1)}%)`,
79 | x,
80 | y > 10 ? y - 5 : y + 15
81 | );
82 | });
83 |
84 | alert("Object detection complete!");
85 | }
86 |
87 | // Stop the camera
88 | function stopCamera() {
89 | if (videoStream) {
90 | const tracks = videoStream.getTracks();
91 | tracks.forEach((track) => track.stop());
92 | alert("Camera stopped!");
93 | window.location.href = "index.html"; // Redirect back to the homepage
94 | }
95 | }
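The label logic inside `displayPredictions` above is compact enough to factor into pure helpers. This is a hypothetical sketch, not code from the repository: it formats the `class (score%)` text and keeps the label on-canvas by drawing it above the box when there is headroom and just inside it otherwise, mirroring the `y > 10 ? y - 5 : y + 15` expression in the file.

```javascript
// Hypothetical helpers factored out of displayPredictions above.

// Format a prediction label, e.g. "person (87.6%)".
function formatLabel(className, score) {
  return `${className} (${(score * 100).toFixed(1)}%)`;
}

// Choose the label's y coordinate: above the box when there is room,
// just inside the box when it starts near the top edge of the canvas.
function labelY(y) {
  return y > 10 ? y - 5 : y + 15;
}

console.log(formatLabel("person", 0.876)); // "person (87.6%)"
console.log(labelY(4));  // 19 — box near the top edge, label drops inside
console.log(labelY(50)); // 45 — enough headroom, label sits above the box
```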
--------------------------------------------------------------------------------
/config/db.config.js:
--------------------------------------------------------------------------------
1 | require('dotenv').config();
2 |
3 | module.exports = {
4 | host: process.env.DB_HOST,
5 | user: process.env.DB_USER,
6 | password: process.env.DB_PASSWORD,
7 | database: process.env.DB_NAME,
8 | waitForConnections: true,
9 | connectionLimit: 10,
10 | queueLimit: 0
11 | };
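The option names exported above (`waitForConnections`, `connectionLimit`, `queueLimit`) are the shape a MySQL connection pool expects. As a hypothetical variant (not in the repository), the config could be a function over an env-like object, so missing variables fall back to the defaults documented in `.env` rather than becoming `undefined`:

```javascript
// Hypothetical variant of config/db.config.js: build the pool config from
// an env-like object, with fallbacks matching the .env defaults.
function buildDbConfig(env) {
  return {
    host: env.DB_HOST ?? "localhost",
    user: env.DB_USER ?? "root",
    password: env.DB_PASSWORD ?? "root",
    database: env.DB_NAME ?? "fusion_vision",
    waitForConnections: true,
    connectionLimit: 10,
    queueLimit: 0,
  };
}

// Usage sketch (assumed): pass process.env and hand the result to a pool.
console.log(buildDbConfig({}).host); // "localhost"
console.log(buildDbConfig({ DB_HOST: "db.example.com" }).host); // "db.example.com"
```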
--------------------------------------------------------------------------------
/data.sql:
--------------------------------------------------------------------------------
1 | CREATE DATABASE fusion_vision;
2 | USE fusion_vision;
3 |
4 | CREATE TABLE users (
5 | id INT AUTO_INCREMENT PRIMARY KEY,
6 | username VARCHAR(50) NOT NULL UNIQUE,
7 | password VARCHAR(255) NOT NULL,
8 | created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP
9 | );
10 |
11 | CREATE TABLE user_images (
12 | id INT AUTO_INCREMENT PRIMARY KEY,
13 | user_id INT,
14 | image_data LONGTEXT,
15 | created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
16 | FOREIGN KEY (user_id) REFERENCES users(id)
17 | );
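The foreign key above means every `user_images` row points at a `users` row via `user_id`. A hypothetical in-memory illustration of that relation in JavaScript (the language of the rest of this repo), not code from the repository:

```javascript
// Hypothetical illustration of the users <-> user_images relation in the
// schema above: fetching a user's images is a lookup, then a filter on user_id.
function imagesForUser(users, userImages, username) {
  const user = users.find((u) => u.username === username);
  if (!user) return []; // no such user: no images
  return userImages.filter((img) => img.user_id === user.id);
}

const users = [
  { id: 1, username: "alice" },
  { id: 2, username: "bob" },
];
const userImages = [
  { id: 10, user_id: 1, image_data: "..." },
  { id: 11, user_id: 2, image_data: "..." },
  { id: 12, user_id: 1, image_data: "..." },
];

console.log(imagesForUser(users, userImages, "alice").length); // 2
```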
--------------------------------------------------------------------------------
/db.js:
--------------------------------------------------------------------------------
1 | const { MongoClient } = require('mongodb');
2 | require('dotenv').config(); // Load environment variables
3 |
4 | const uri = process.env.MONGODB_URI; // Fetch from .env
5 | const client = new MongoClient(uri);
6 |
7 | async function connectDB() {
8 | try {
9 | await client.connect();
10 | console.log('Connected to MongoDB');
11 | return client.db(process.env.DB_NAME); // Fetch database name from .env
12 | } catch (error) {
13 | console.error('Database connection failed:', error);
14 | process.exit(1); // Exit the process if the connection fails
15 | }
16 | }
17 |
18 | module.exports = connectDB;
19 |
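Every caller of `connectDB` above re-runs `client.connect()`. One common refinement (a hypothetical sketch, not in the repository) is to memoize the connection promise so repeated and concurrent callers share a single in-flight connection:

```javascript
// Hypothetical wrapper: memoize an async factory so all callers share one
// result instead of racing to reconnect.
function once(factory) {
  let cached = null;
  return () => (cached ??= factory());
}

// Usage sketch (assumed): const getDb = once(connectDB); await getDb();
// Demonstrated here with a stub factory so the behavior is observable:
let calls = 0;
const getDb = once(async () => {
  calls += 1;
  return "db-handle";
});

Promise.all([getDb(), getDb()]).then(() => console.log(calls)); // 1
```

Caching the promise (rather than the resolved value) also covers the concurrent case: two callers arriving before the connection resolves still share one attempt.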
--------------------------------------------------------------------------------
/footer_elements/Terms.html:
--------------------------------------------------------------------------------
1 |
2 |
3 |
4 |
5 |
6 | Terms of Service - Fusion Vision
7 |
8 |
9 |
10 |
11 |
12 |
13 |
14 |
15 |
Fusion Vision
16 |
17 |
24 |
25 |
26 |
27 |
28 |
29 |
30 |
31 |
32 |
Terms of Service
33 |
34 | Welcome to FUSIONVISION. By accessing or using our website, you agree to comply with these Terms of Service. Please read them carefully to understand your rights and responsibilities.
35 |
36 |
1. Acceptance of Terms
37 |
By using this platform, you agree to these Terms of Service. If you do not agree to any part of these terms, please discontinue use of the website.
38 |
39 |
2. Use of the Platform
40 |
We grant you a limited, non-exclusive, and non-transferable license to use our platform for personal, non-commercial purposes. Any unauthorized use of the platform is strictly prohibited.
41 |
42 |
3. Intellectual Property
43 |
All content on FUSIONVISION, including text, images, logos, and designs, is the property of the platform and protected by copyright laws. You may not reproduce, distribute, or create derivative works without prior written consent.
44 |
45 |
4. User Responsibilities
46 |
As a user of this platform, you agree to:
47 |
48 |
Provide accurate and truthful information when registering or submitting content.
49 |
Not engage in unlawful, harmful, or disruptive activities on the platform.
50 |
Respect the intellectual property rights of the platform and other users.
51 |
52 |
53 |
54 |
5. Limitation of Liability
55 |
FUSIONVISION is not responsible for any indirect, incidental, or consequential damages arising from your use of the platform. All services are provided "as is" without warranties of any kind.
56 |
57 |
58 |
6. Modifications to Terms
59 |
We reserve the right to modify these Terms of Service at any time. Updates will be posted on this page, and continued use of the platform signifies your acceptance of the revised terms.
60 |
61 |
7. Termination
62 |
We reserve the right to suspend or terminate your access to the platform at our sole discretion, without prior notice, for any violation of these Terms.
63 |
64 |
8. Contact Us
65 |
If you have any questions about these Terms of Service, please contact us at info@fusionvision.com.
--------------------------------------------------------------------------------
/footer_elements/blog.html:
--------------------------------------------------------------------------------
Ensuring Video Content Integrity: How FusionVision is Leading the Way
34 |
35 |
Introduction
36 |
In today’s fast-paced digital world, the authenticity of video content is more critical than ever. With the rise of deepfakes and manipulated media, trust in what we see online is being tested. At FusionVision, we’re on a mission to ensure the integrity of video content and provide solutions that help businesses and individuals verify that what they’re watching is real.
37 |
38 |
39 |
The Challenge of Fake Video Content
40 |
Videos are powerful, but their ability to be altered presents a major risk. Fake videos can mislead, spread misinformation, and damage reputations. As video content becomes an essential part of how we communicate, the need for reliable ways to verify authenticity grows.
41 |
42 |
43 |
44 |
FusionVision’s Solution
45 |
At FusionVision, we use advanced AI, machine learning, and blockchain technology to verify the authenticity of videos. Our platform detects manipulation, tracks the origin of footage, and ensures content hasn’t been tampered with — giving users confidence in the videos they view and share.
46 |
47 |
48 |
Why It Matters
49 |
50 | In an era where videos shape opinions and influence decisions, the importance of maintaining trust is paramount. With FusionVision’s technology, we’re making it easier for organizations and individuals to protect themselves from manipulated content and ensure that what they see online is trustworthy.
51 |
52 |
53 |
54 |
55 |
Conclusion
56 |
The future of video content is bright, but only if we prioritize authenticity. At FusionVision, we’re leading the charge to ensure that the digital content we consume is real, transparent, and reliable. Join us in making the digital world a more truthful place.
57 |
--------------------------------------------------------------------------------
/footer_elements/privacyp.html:
--------------------------------------------------------------------------------
34 | At Fusion Vision, we prioritize your privacy and are committed to protecting your personal data. This Privacy Policy explains how we collect, use, and safeguard your information when you interact with our platform.
35 |
36 |
1. Information We Collect
37 |
We may collect personal data such as your name, email address, and usage details when you use our platform or interact with our services.
38 |
39 |
2. How We Use Your Information
40 |
Your information is used to enhance your experience, improve our services, and communicate important updates.
41 |
42 |
3. Data Security
43 |
We implement advanced security measures to protect your data from unauthorized access, loss, or misuse.
44 |
45 |
4. Your Rights
46 |
You have the right to access, update, or delete your personal information. Contact us for assistance.
47 |
48 |
5. Contact Us
49 |
If you have any questions about this Privacy Policy, please reach out to us at info@fusionvision.com.
--------------------------------------------------------------------------------
/footer_elements/whoweare.html:
--------------------------------------------------------------------------------
36 | At FusionVision, we are a team of passionate innovators, researchers, and developers dedicated to creating cutting-edge solutions for video content authenticity. In an era dominated by digital media, where information is shared and consumed at lightning speed, the integrity of visual content has become more critical than ever.
37 |
38 |
39 |
40 |
Our Vision
41 |
To build a future where digital media is trustworthy and every piece of video content can be verified for authenticity with precision and ease.
42 |
43 |
44 |
Our Mission
45 |
46 |
To provide accessible and reliable tools for detecting manipulated videos.
47 |
To educate the public about the impact of deepfake technology on society.
48 |
To collaborate with organizations and governments in the fight against misinformation.
49 |
50 |
51 |
52 |
What Drives Us
53 |
54 |
Innovation: We are driven by the desire to push technological boundaries.
55 |
Integrity: Upholding honesty and transparency in everything we do.
56 |
Impact: Making a meaningful difference in combating the spread of fake media.
57 |
58 |
59 |
60 |
Meet the Team
61 |
62 | FusionVision is powered by a diverse team of:
63 |
64 |
65 |
Data Scientists and Engineers: Crafting robust AI/ML algorithms to detect deepfakes.
66 |
UI/UX Designers: Ensuring a seamless and intuitive experience for our users.
67 |
Ethics Advocates: Addressing the social and ethical implications of our technology.
68 |
69 |
70 | Together, we are not just building a tool — we are building trust.
71 |
72 |
73 |
74 |
Why Choose FusionVision?
75 |
76 |
State-of-the-Art AI: Our tools leverage the latest advancements in artificial intelligence to deliver accurate results.
77 |
User-Centric Design: Every feature we develop is tailored to meet the needs of our users.
78 |
Commitment to Security: Your data and privacy are always our top priority.
79 |
80 |
Join us in shaping a world where authenticity prevails.
--------------------------------------------------------------------------------
/footer_elements/workwithus.html:
--------------------------------------------------------------------------------
36 | At FusionVision, we're on a mission to revolutionize the way we trust video content. Our team of passionate innovators, researchers, and developers is dedicated to creating groundbreaking solutions that ensure the authenticity of visual media in an era where information travels faster than ever.
37 |
38 |
We believe that the integrity of visual content is essential, and we're building tools to help navigate a world where misinformation and digital manipulation can compromise the truth.
39 |
40 |
41 |
42 |
Why Work With Us?
43 |
44 |
Joining FusionVision means becoming part of a dynamic, forward-thinking team. We are committed to fostering an environment where creativity and innovation thrive. Here's what you can expect when you work with us:
45 |
46 |
47 |
Collaborative Culture: We value the input of every team member and encourage diverse perspectives. Whether you’re working on research, development, or strategy, your voice matters.
48 |
Cutting-Edge Technology: Be at the forefront of digital security and video content verification. You’ll work with the latest tools and technologies to shape the future of visual media.
49 |
Purpose-Driven Work: At FusionVision, your work has a direct impact on how people consume and trust digital content. We're tackling real-world issues in a rapidly evolving industry.
50 |
Career Growth: We believe in investing in our team’s growth and providing opportunities to learn, innovate, and expand your skill set.
51 |
Inclusive Environment: Diversity, equity, and inclusion are central to our values. We strive to create a welcoming space where all team members can thrive.
52 |
53 |
54 |
55 |
56 |
Who We’re Looking For
57 |
We’re always on the lookout for talented individuals who are driven by curiosity, creativity, and a desire to make a difference. Whether you're a seasoned professional or just starting your career, if you're passionate about the future of video content authenticity and digital security, we want to hear from you.
58 |
59 |
We are currently looking for:
60 |
61 |
Software Engineers
62 |
Data Scientists
63 |
Research Analysts
64 |
Product Designers
65 |
Marketing & Communications Experts
66 |
67 |
If you’re excited about the intersection of technology, media, and trust, and want to join a team that’s shaping the future of digital media, FusionVision could be the place for you.
68 |
69 |
70 |
71 |
Ready to Join Us?
72 |
If you're ready to help us build the future of video content authenticity, we’d love to hear from you! Check out our current job openings below, or reach out to us directly through social media.
73 |
74 |
75 | Together, we can create a more trustworthy digital world.
76 |
Leave a Comment