├── .gitignore
├── README.md
├── build.js
├── dist
│   ├── african_head_diffuse.png
│   ├── african_head_nm.png
│   ├── african_head_nm_tangent.png
│   ├── african_head_spec.png
│   └── index.html
├── package-lock.json
├── package.json
├── rollup.config.mjs
├── src
│   ├── app.ts
│   ├── core
│   │   ├── camera.ts
│   │   ├── raster.ts
│   │   ├── shader.ts
│   │   └── texture.ts
│   ├── math
│   │   ├── math.ts
│   │   ├── matrix.ts
│   │   └── vector.ts
│   ├── model
│   │   └── african_head.ts
│   └── utils
│       ├── depthBuffer.ts
│       └── frameBuffer.ts
└── tsconfig.json

/.gitignore:
--------------------------------------------------------------------------------
#/////////////////////////////////////////////////////////////////////////////
# Fireball Projects
#/////////////////////////////////////////////////////////////////////////////

/library/
/temp/
/local/
/build/
native
/preview-templates/

#/////////////////////////////////////////////////////////////////////////////
# npm files
#/////////////////////////////////////////////////////////////////////////////

npm-debug.log
node_modules/

#/////////////////////////////////////////////////////////////////////////////
# Logs and databases
#/////////////////////////////////////////////////////////////////////////////

*.log
*.sql
*.sqlite

#/////////////////////////////////////////////////////////////////////////////
# files for debugger
#/////////////////////////////////////////////////////////////////////////////

*.sln
*.csproj
*.pidb
*.unityproj
*.suo

#/////////////////////////////////////////////////////////////////////////////
# OS generated files
#/////////////////////////////////////////////////////////////////////////////

.DS_Store
ehthumbs.db
Thumbs.db

#/////////////////////////////////////////////////////////////////////////////
# WebStorm files
#/////////////////////////////////////////////////////////////////////////////

.idea/

#//////////////////////////
# VS Code files
#//////////////////////////

.vscode/
dist/

--------------------------------------------------------------------------------
/README.md:
--------------------------------------------------------------------------------
### 1. Preface

This project is a pure software rasterizer built on `TypeScript` and the browser `Canvas`. There are of course already many excellent software rasterizer projects, but most of them depend on `C++` or graphics libraries such as `GLFW`, which makes the learning curve steep, especially for someone like me with little exposure to those tools. By using the browser `Canvas` for render output and `JavaScript` for the rasterization logic, essentially no complex environment setup is needed, and there is a huge advantage for debugging, e.g. the browser `Devtools`.

This project does assume some background in computer graphics and linear algebra: the later parts of this article give a rough walkthrough of the important pieces of the implementation, so graphics and linear-algebra fundamentals are not covered. The project grew out of finishing an introductory graphics course (`Games101`) and reading the software rasterizer project `tinyrender`, which inspired me to implement a software rasterizer with the stack I know best. At the end I will also share my learning path and the articles I referenced.

Both `TypeScript` and `JavaScript` are mentioned above because this is a Web front-end project following standard engineering and module practices, so the build output is essentially one `Html` file plus the `JavaScript` script files it references. See the project description below for details.
![23-33-41](https://grab-1301500159.cos.ap-shanghai.myqcloud.com/markDown/23-33-41.gif)

![](https://grab-1301500159.cos.ap-shanghai.myqcloud.com/markDown/23-31-10.gif)

### 2. Project Description

Built on `TypeScript` with the `ESM` module standard, the project is bundled by the third-party tool `rollup` and related tooling into a single final JavaScript file, which a prepared `Html` template loads. The static `Html` file already contains a `Canvas` element, since the `Canvas` element is the render target from here on.

In addition, for development convenience (watching the output, source-level debugging), `nodemon` + `live-server` provide hot reload, and the built `JavaScript` ships with a `SourceMap` file mapping back to the `TypeScript` sources.

The project avoids third-party libraries as much as possible. The single exception is model parsing: the model files are in .obj format, so the `webgl-obj-loader` library is used.

#### 2.1 Getting Started

- `npm install` — install the dependencies
- `npm run dev` — start the project

After startup, every file change triggers a `TypeScript` compile and a `Rollup` build that outputs a single `JavaScript` script file to the `dist` directory, then opens or refreshes the browser. The `dist` directory holds the generated static files as well as the texture assets that need to be loaded.

#### 2.2 Dependencies

```json
{
    "dependencies": {
        "webgl-obj-loader": "^2.0.8"
    },
    "devDependencies": {
        "@rollup/plugin-commonjs": "^26.0.1",
        "@rollup/plugin-node-resolve": "^15.2.3",
        "@rollup/plugin-typescript": "^11.1.6",
        "concurrently": "^9.0.0",
        "live-server": "^1.2.2",
        "nodemon": "^3.1.4",
        "rollup": "^4.21.0",
        "typescript": "^5.5.4"
    }
}
```

#### 2.3 Project Structure

```
├── dist   // build output; open the .html static file in a browser to see the result
├── src    // project source
│   ├── core   // core modules: camera, raster, shader, etc.
│   ├── math   // math modules: vector, matrix, etc.
│   ├── model  // model source files
│   ├── utils  // utility functions and related data structures
│   ├── app.ts // main entry
```

### 3. Project Walkthrough

#### 3.1 Render Target

The end goal of rendering is visual feedback, i.e. a medium that displays the graphics. In `Html`, the `Canvas` element provides a rendering context, `CanvasRenderingContext2D`, obtained via `canvas.getContext("2d")`. Submitting frame data via `context.putImageData(frameData, 0, 0)` displays that frame. Here `frameData` is an `ImageData` object, which can be thought of as an array of length w\*h\*4, where every **four** consecutive elements record the `RGBA` value of the pixel at (x, y). Each element takes 1 byte (8 bits) — the common `RGBA8888` texture format.

`new ImageData(width, height)` creates a `width` × `height` frame, and `ImageData` elements are read or written by array index. The example below generates a 100 × 100 frame filled with red.

See the `MDN` documentation for usage and details.

```typescript
const frameData = new ImageData(100, 100)
for (let offset = 0; offset < frameData.data.length; offset += 4) {
    const [rIdx, gIdx, bIdx, aIdx] = [offset + 0, offset + 1, offset + 2, offset + 3]
    frameData.data[rIdx] = 255
    frameData.data[gIdx] = 0
    frameData.data[bIdx] = 0
    frameData.data[aIdx] = 255
}
const context = canvas.getContext("2d")
context.putImageData(frameData, 0, 0)
```

#### 3.2 Render Loop

With a render target in place, we just need to update the frame data inside a main render loop and submit it to the rendering context each frame. Many timer mechanisms could drive such a loop, but this project uses the browser's `window.requestAnimationFrame`: its callback rate matches the display refresh rate, and it makes it easy to track the current frame rate. See the code below, from the project's `app.ts`:

```typescript
// src/app.ts
class App {

    private static raster: Raster
    private static isMouseMoving: boolean = false

    public static init(canvas: HTMLCanvasElement) {
        const context = canvas.getContext("2d") as CanvasRenderingContext2D
        this.raster = new Raster(canvas.width, canvas.height, context)
    }

    public static start() {

        let last = 0

        const loop = (timestamp: number) => {
            const delt = timestamp - last
            document.getElementById("fps")!.innerText = `FPS:${(1000 / delt).toFixed(0)}`
            this.mainLoop()
            last = timestamp
            requestAnimationFrame(loop)
        }

        loop(0)
    }

    public static mainLoop() {
        this.raster.render()
    }
}
```

`requestAnimationFrame` runs our main loop every frame: after the current frame's rendering logic completes, the callback for the next frame is registered immediately, which keeps frames sequential and ordered. It also means that a frame that takes too long delays the next frame — exactly the behavior we need to guarantee.

From then on, every frame calls Raster's render method — our rendering entry point. Its implementation:

```typescript
// src/utils/frameBuffer.ts
export class FrameBuffer {

    private data: ImageData

    constructor(width: number, height: number) {
        this.data = new ImageData(width, height)
    }

    public get frameData(): ImageData {
        return this.data
    }
}

// src/core/raster.ts
export class Raster {

    private width: number
    private height: number

    private frameBuffer: FrameBuffer

    private context: CanvasRenderingContext2D

    constructor(w: number, h: number, context: CanvasRenderingContext2D) {

        this.width = w
        this.height = h

        this.context = context
        this.frameBuffer = new FrameBuffer(w, h)
    }


    public clear() {
        for (let offset = 0; offset < this.frameBuffer.frameData.data.length; offset += 4) {
            const [rIdx, gIdx, bIdx, aIdx] = [offset + 0, offset + 1, offset + 2, offset + 3]
            this.frameBuffer.frameData.data[rIdx] = 0
            this.frameBuffer.frameData.data[gIdx] = 0
            this.frameBuffer.frameData.data[bIdx] = 0
            this.frameBuffer.frameData.data[aIdx] = 255
        }
    }

    public render() {
        // clear the frame buffer
        this.clear()

        // submit the frame data
        this.context.putImageData(this.frameBuffer.frameData, 0, 0)
    }

}
```

> **Note that the frame data is wrapped in a `FrameBuffer` class, which makes it convenient to add further operations later.**

As shown above, each frame the Raster fills the current frame data with black and then submits it. Since nothing else happens in between yet, the `Canvas` stays black, and the top-left corner of the page shows the current frame rate in real time.



#### 3.3 Loading the Model

> **This project uses models in .obj format, parsed with the `webgl-obj-loader` library; its parsing rules are covered in its official documentation.**

Nothing is being rendered yet, so we start by loading a model to get something on screen. For convenience I put the model source file's content directly into a module and export it as a string, which makes the model easy to parse:

```typescript
// src/model/african_head.ts
const fileText = `
v -0.3 0 0.3
v 0.4 0 0
v -0.2 0.3 -0.1
v 0 0.4 0
# 4 vertices

g head
s 1
f 1/1/1 2/1/1 4/1/1
f 1/1/1 2/1/1 3/1/1
f 2/1/1 4/1/1 3/1/1
f 1/1/1 4/1/1 3/1/1
# 4 faces
`
export default fileText
```

The model is parsed with `webgl-obj-loader`. In the code below, the `render` function gains logic for rendering the model's vertices: it loops over the model's triangle vertices and colors the pixel at each vertex's position in the frame data red.

> **The model used here ships with the project, at `/src/model/african_head.ts`**

```typescript
// src/utils/frameBuffer.ts
export class FrameBuffer {
    // ......
    public setPixel(x: number, y: number, rgba: [number, number, number, number]): void {
        x = Math.floor(x)
        y = Math.floor(y)
        if (x >= this.data.width || y >= this.data.height || x < 0 || y < 0) return
        this.data.data[((y * this.data.width + x) * 4) + 0] = rgba[0]
        this.data.data[((y * this.data.width + x) * 4) + 1] = rgba[1]
        this.data.data[((y * this.data.width + x) * 4) + 2] = rgba[2]
        this.data.data[((y * this.data.width + x) * 4) + 3] = rgba[3]
    }
    // ......
}


// src/core/raster.ts
import { Mesh } from "webgl-obj-loader";
import african_head from "../model/african_head";

export class Raster {
    constructor(w: number, h: number, context: CanvasRenderingContext2D) {
        // .......
        this.model = new Mesh(african_head)
        this.vertexsBuffer = this.model.vertices
        this.trianglseBuffer = this.model.indices
        // .......
    }

    public render() {
        // clear the frame buffer
        this.clear()

        // iterate over the model's triangle faces
        for (let i = 0; i < this.trianglseBuffer.length; i += 3) {

            for (let j = 0; j < 3; j++) {
                const idx = this.trianglseBuffer[i + j]
                const vertex = new Vec3(this.vertexsBuffer[idx * 3 + 0], this.vertexsBuffer[idx * 3 + 1], this.vertexsBuffer[idx * 3 + 2])
                this.frameBuffer.setPixel(vertex.x, vertex.y, [255, 0, 0, 255])
            }
        }

        // submit the frame data
        this.context.putImageData(this.frameBuffer.frameData, 0, 0)
    }
}
```

Rendering with this logic will of course not produce the expected result. The reason is obvious: coordinate-system differences. The model and the screen each have their own coordinate system — model space and screen space — and there are also world space and view space. So part four below covers matrix transformations, including the conversions between these coordinate systems. Note that the model and screen coordinate systems are fixed from the start:

- Screen space: determined by the `Canvas`; origin at the top-left, x in 0–width, y in 0–height, no negative values
- Model space: the project's model has its origin at (0, 0, 0), with x, y, z ranging over [-1, 1] — i.e. it is contained in a cube of side length 2, centered at the origin

#### 3.4 Matrix Transformations

> **The matrix transformations here are standard graphics material, so the theory and derivations are not repeated. As for the projection matrix, to keep the complexity down the discussion below is based on orthographic projection; the project also has a perspective projection matrix, and you can switch the camera type to get a perspective effect.**

##### 3.4.1 ModelMatrix

The model matrix transforms model space into world space. Here we place the model at the origin of the world coordinate system. Since the model's x, y, z coordinates lie within the [-1, 1] cube, we scale the model up to make it clearly visible, and, to make it convenient for the camera to observe later, translate its Z coordinate by -240. The value is negative because the project uses a right-handed coordinate system and the camera looks down -z by default. This gives the following matrix:

```typescript
this.modelMatrix = new Matrix44([
    [240, 0, 0, 0],
    [0, 240, 0, 0],
    [0, 0, 240, -240],
    [0, 0, 0, 1]
])
```

##### 3.4.2 ViewMatrix

The view matrix transforms the world coordinate system into the camera's view coordinate system — in other words it unifies the two coordinate systems, simplifying the computations that follow. Since the project uses a right-handed system, X cross Y gives +Z, and Y cross Z gives +X. Below is my own way of understanding the view transform.

> **Original camera: the camera that initially coincides with the world coordinate system**

> **Current camera: the camera state obtained by applying matrix transforms to the original camera — i.e. the state described by pos, lookAt, up**

- The goal of the view transform is to unify the world and camera coordinate systems, simplifying the later projection computation; with a unified system, the origin can serve as the reference point for defining the projection planes and parameters
- First, a basic fact: rotate an object and the camera in the same direction by the same angle and the picture the camera sees does not change; rotate them in opposite directions and the camera sees the picture we observe in real life
- Imagine the original camera at the world origin; after rotations, translations, and so on, it reaches the current camera state — the camera coordinate system with basis vecZ, vecX, vecY
- By the nature of matrices, the rotation/translation matrix applied to the camera is essentially the basis of the current camera coordinate system: the camera originally coincident with the world system is transformed by the matrix formed by the current camera's basis vectors
- In principle we just need to transform every point in world space into camera space, i.e. multiply all world coordinates by the matrix formed by the current camera's basis vectors; and since camera operations and object operations are inverses of each other, we actually multiply by the inverse of that matrix

To support dynamic rotation and translation later, the project composes the view matrix from an initial view matrix and a dynamic transform matrix, as follows:

```typescript
// src/core/camera.ts
export class Camera {
    //......

    public look(): Matrix44 {
        // derive the current camera basis from pos, lookAt, up
        const vecZ = this.pos.sub(this.lookAt).normalize()
        const vecX = this.up.cross(vecZ).normalize()
        const vecY = vecZ.cross(vecX).normalize()

        const revTransMat = new Matrix44([
            [1, 0, 0, -this.pos.x],
            [0, 1, 0, -this.pos.y],
            [0, 0, 1, -this.pos.z],
            [0, 0, 0, 1]
        ])

        const revRotationMat = new Matrix44([
            [vecX.x, vecX.y, vecX.z, 0],
            [vecY.x, vecY.y, vecY.z, 0],
            [vecZ.x, vecZ.y, vecZ.z, 0],
            [0, 0, 0, 1]
        ])

        // compose the view matrix: translate first, then rotate
        return revRotationMat.multiply(revTransMat)
    }

    public getViewMat(): Matrix44 {
        const baseViewMat = this.look()
        return this.transMatExc.transpose().multiply(this.rotationMatExc.transpose().multiply(baseViewMat))
    }
}
```

##### 3.4.3 ProjectMatrix

> **The projection matrix is explained here in terms of orthographic projection**

The projection matrix, as its name implies, projects the visible volume onto a 2D plane — a 3D-to-2D transformation. So we generally define a visible volume: a box or frustum bounded by a near plane and a far plane. Note that the near/far plane coordinates should be expressed in camera space: by the time the projection transform is applied, the model and view transforms have already run, so all points are in camera space, whose origin is the camera's position. That is why this project's near and far planes both have negative z coordinates — the camera sits at the origin looking down -Z.

Following the nature of orthographic projection, the visible volume here is a box, with near and far planes both on the negative Z half-axis and with width and height matching the screen, i.e. the `Canvas`. The box's center is then translated to the origin, and the box is compressed into a canonical cube of side 2, i.e. X, Y, Z each ranging over [-1, 1]. The benefits:

- Anything outside the visible volume is culled and not rendered, i.e. points whose view-space (camera-space) coordinates fall outside this canonical cube
- It simplifies the subsequent viewport transform, which maps visible-volume points to actual screen pixels

```typescript
// src/core/camera.ts
export class Camera {
    //......
    public orthogonal(): Matrix44 {
        const left = -this.screenWidth / 2
        const right = this.screenWidth / 2
        const bottom = -this.screenHeight / 2
        const top = this.screenHeight / 2

        const scaleMat = new Matrix44([
            [2 / (right - left), 0, 0, 0],
            [0, 2 / (top - bottom), 0, 0],
            [0, 0, 2 / (this.near - this.far), 0],
            [0, 0, 0, 1]
        ])

        const transMat = new Matrix44([
            [1, 0, 0, -((right + left) / 2)],
            [0, 1, 0, -((top + bottom) / 2)],
            [0, 0, 1, -((this.far + this.near) / 2)],
            [0, 0, 0, 1]
        ])
        return scaleMat.multiply(transMat)
    }
}
```

##### 3.4.4 ViewPortMatrix

The viewport transform is the last matrix transform. After the projection step, the visible volume has been compressed into a canonical cube; under orthographic projection the z coordinate is simply dropped, and the visible volume maps onto a plane with x, y in [-1, 1].

Crucially, screen resolutions are not fixed and the screen's coordinate system is different, so the viewport matrix maps this 2D plane into screen space — in this project, the `canvas`.

Note that the y scale factor is negative: the `canvas` origin is at the top-left with the y axis pointing down, so the y coordinate must be flipped.

```typescript
this.viewPortMatrix = new Matrix44([
    [this.width / 2, 0, 0, this.width / 2],
    [0, -this.height / 2, 0, this.height / 2],
    [0, 0, 1, 0],
    [0, 0, 0, 1]
])
```

With these four transform matrices, applying all four to every point in space yields the corresponding pixel coordinates on screen. Applying the matrix transforms to all points is exactly the `vertexShader` — which brings us to the shader section below.

#### 3.5 Shading

From the above, we can now transform any point in space into a pixel on the screen. Note that not every point gets a corresponding screen pixel: because there is a camera, what is not visible gets clipped.

##### 3.5.1 VertexShader

The vertex shader takes a point in space and outputs a pixel coordinate on screen; the conversion applies the four matrix transforms above. The input points are the model's vertices:

```ts
// src/core/shader.ts
export class FlatShader extends Shader {

    public vertexShader(vertex: Vec3): Vec4 {
        const modelMatrix = this.raster.modelMatrix
        const viewMatrix = this.raster.viewMatrix
        const projectionMatrix = this.raster.projectionMatrix
        const mvpMatrix = projectionMatrix.multiply(viewMatrix.multiply(modelMatrix))

        const viewPortMatrix = this.raster.viewPortMatrix
        const mergedMatrix = viewPortMatrix.multiply(mvpMatrix)

        return mergedMatrix.multiplyVec(new Vec4(vertex.x, vertex.y, vertex.z, 1))
    }
}

// src/core/raster.ts
export class Raster {

    public render() {

        // iterate over the model's triangle faces
        for (let i = 0; i < this.trianglseBuffer.length; i += 3) {

            for (let j = 0; j < 3; j++) {
                const idx = this.trianglseBuffer[i + j]
                const vertex = new Vec3(this.vertexsBuffer[idx * 3 + 0], this.vertexsBuffer[idx * 3 + 1], this.vertexsBuffer[idx * 3 + 2])
                const vertexScreen = this.shader.vertexShader(vertex)
                this.frameBuffer.setPixel(vertexScreen.x, vertexScreen.y, [0, 255, 0, 255])
            }
        }
    }
}
```



The result is the effect shown above. To verify the matrices are correct, mouse dragging is hooked up to adjust the camera angle dynamically:



```typescript
// src/app.ts
class App {
    public static onMouseUp(e: MouseEvent) { this.isMouseMoving = false }
    public static onMouseDown(e: MouseEvent) { this.isMouseMoving = true }
    public static onMouseMove(e: MouseEvent) {
        if (!this.isMouseMoving) return
        this.raster.camera.rotatedCamera(new Matrix44().rotateY(Math.sign(e.movementX) * 2 / 180 * Math.PI))
    }
}

canvas.onmousedown = App.onMouseDown.bind(App)
canvas.onmouseup = App.onMouseUp.bind(App)
canvas.onmousemove = App.onMouseMove.bind(App)

// src/core/camera.ts
export class Camera {
    public rotatedCamera(mat: Matrix44): void {
        this.rotationMatExc = mat.multiply(this.rotationMatExc)
    }
}
```

The code above listens for press-and-drag, generates a rotation matrix about the Y axis, and updates the camera's `rotationMatExc` matrix — the matrix, and its design purpose, mentioned in `3.4.2`. Press and drag the mouse and we see the following effect:



There is an obvious problem with this result: we are rotating the camera, so once the camera has rotated 180 degrees in one direction, the model should be behind the camera and nothing should be visible.
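The `Matrix44().rotateY(...)` call above builds the camera rotation, but the project's `Matrix44` implementation is not listed in this article. As a self-contained sketch of what a right-handed rotation about the Y axis looks like — using a plain `number[][]` rather than the project's `Matrix44` class, which is an assumption for illustration only:

```typescript
// Sketch of a right-handed Y-axis rotation matrix (illustrative only; the
// project's Matrix44.rotateY in src/math/matrix.ts is assumed to produce the
// same 4x4 layout, but its real implementation is not shown here).
function rotateY(rad: number): number[][] {
    const c = Math.cos(rad)
    const s = Math.sin(rad)
    return [
        [c, 0, s, 0],   // x' =  c*x + s*z
        [0, 1, 0, 0],   // y' =  y
        [-s, 0, c, 0],  // z' = -s*x + c*z
        [0, 0, 0, 1]
    ]
}
```

So each mouse-move event rotates the camera by 2 degrees (`2 / 180 * Math.PI` radians) about Y, accumulated into `rotationMatExc`.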
The cause of this problem: no clipping is performed. The visible volume was compressed into a canonical cube during orthographic projection, with every coordinate in [-1, 1], and then a viewport transform converts to screen coordinates. The orthographic projection here discards the z (depth) coordinate: points whose z is outside [-1, 1] still go through the viewport transform into screen coordinates — and that is the problem. A point whose z is not within [-1, 1] is not inside the visible volume and should not be rendered. The fix is simple: the viewport transform does not touch z, so after converting to screen coordinates, z is still the value produced by the orthographic matrix, and we can test it directly:

```typescript
// src/core/raster.ts
export class Raster {

    public render() {

        // iterate over the model's triangle faces
        for (let i = 0; i < this.trianglseBuffer.length; i += 3) {
            for (let j = 0; j < 3; j++) {
                //......
                const vertexScreen = this.shader.vertexShader(vertex)
                if (vertexScreen.z < -1 || vertexScreen.z > 1) continue
                this.frameBuffer.setPixel(vertexScreen.x, vertexScreen.y, [0, 255, 0, 255])
            }
        }
    }
}
```



In principle, points whose x or y coordinates fall outside [-1, 1] should not be rendered either and should be clipped. The project skips this because, after the viewport transform, x and y are already screen coordinates, and `frameBuffer`'s `setPixel` method simply never writes pixels beyond the screen width and height:

```typescript
public setPixel(x: number, y: number, rgba: [number, number, number, number]): void {
    x = Math.floor(x)
    y = Math.floor(y)
    if (x >= this.data.width || y >= this.data.height || x < 0 || y < 0) return
    this.data.data[((y * this.data.width + x) * 4) + 0] = rgba[0]
    this.data.data[((y * this.data.width + x) * 4) + 1] = rgba[1]
    this.data.data[((y * this.data.width + x) * 4) + 2] = rgba[2]
    this.data.data[((y * this.data.width + x) * 4) + 3] = rgba[3]
}
```

##### 3.5.2 Triangle

> **There are many ways to test whether a point lies inside a triangle — vector cross products, for instance. This project uses barycentric coordinates; read up on them if interested. Barycentric coordinates were chosen because interpolation will be needed later.**

So far we have only rendered vertices to inspect the model. Next we render faces, i.e. triangles — a core part of rasterization: fill the pixels inside each triangle face to render a face. That reduces to testing whether a pixel lies inside a given triangle, and there are two implementation options:

- Iterate over every pixel on screen and test each one against the triangle — poor performance
- Wrap the triangle in a minimal bounding box and only iterate over the box's pixels, testing each against the triangle — better performance

We now rework the render function and add a triangle function:

```typescript
public render() {
    // clear the frame buffer
    this.clear()
    // reset the transform matrices
    this.resetMatrix()

    for (let i = 0; i < this.trianglseBuffer.length; i += 3) {
        const oriCoords = []
        const screenCoords = []
        // vertex stage: run the matrix (MVP) transforms on each vertex, outputting its screen coordinates
        for (let j = 0; j < 3; j++) {
            const idx = this.trianglseBuffer[i + j]
            const vertex = new Vec3(this.vertexsBuffer[idx * 3 + 0], this.vertexsBuffer[idx * 3 + 1], this.vertexsBuffer[idx * 3 + 2])
            screenCoords.push(this.shader.vertexShader(vertex, idx * 3))
        }
        // draw the triangle: from the three vertices, find the screen pixels inside the triangle and color them — the fragment stage
        this.triangle(screenCoords)
    }
}

public triangle(screenCoords: Array<Vec3>) {
    const minx = Math.floor(Math.min(screenCoords[0].x, Math.min(screenCoords[1].x, screenCoords[2].x)))
    const maxx = Math.ceil(Math.max(screenCoords[0].x, Math.max(screenCoords[1].x, screenCoords[2].x)))
    const miny = Math.floor(Math.min(screenCoords[0].y, Math.min(screenCoords[1].y, screenCoords[2].y)))
    const maxy = Math.ceil(Math.max(screenCoords[0].y, Math.max(screenCoords[1].y, screenCoords[2].y)))
    for (let w = minx; w <= maxx; w++) {
        for (let h = miny; h <= maxy; h++) {
            const bar = barycentric(screenCoords, new Vec3(w, h, 0))
            // skip pixels that are not inside the triangle face
            if (bar.x < 0 || bar.y < 0 || bar.z < 0) continue
            // interpolate this pixel's depth and run the depth test
            const depth = this.depthBuffer.get(w, h)
            const interpolatedZ = bar.x * screenCoords[0].z + bar.y * screenCoords[1].z + bar.z * screenCoords[2].z
            if (interpolatedZ < -1 || interpolatedZ > 1 || interpolatedZ < depth) continue
            // call the fragment shader to compute this pixel's color
            const color = this.shader.fragmentShader(bar)
            this.depthBuffer.set(w, h, interpolatedZ)
            this.frameBuffer.setPixel(w, h, color)
        }
    }
}
```

As shown above, each triangle's three vertices go through vertexShader to get their screen pixel coordinates, and the results are handed to triangle. The bounding-box algorithm wraps the triangle in a minimal box, iterates over the pixels that might lie inside the triangle, tests each for triangle membership and depth, and finally passes the pixels that survive to the `fragmentShader` to get their final color.

Note that the depth test has moved here: now that we render faces, what gets discarded should be pixels, not — as in the cruder approach before — vertices. Time to implement the `fragmentShader` logic.

##### 3.5.3 FragmentShader

The fragment shader takes the current pixel's information and outputs that pixel's color. The input is usually the pixel's barycentric coordinates within the triangle, which make interpolation convenient when applying a shading model later. To quickly see the effect of face rendering, this FragmentShader simply outputs a fixed white for any input pixel:

```typescript
// src/core/shader.ts
public fragmentShader(barycentric: Vec3): [number, number, number, number] {
    return [255, 255, 255, 255]
}
```



#### 3.6 Shading Model / Shading Frequency

Clearly the result above is not what we want — and that was expected. We used a single color for every pixel of every triangle face, hence the result. The model's triangle faces have different orientations (their normals all differ), i.e. the model's surface is uneven. In real life, with a directional light from some direction, each part of the model receives a different amount of light, so the strength of the reflected light varies across the surface — and to an observer, so does the color.

That is the idea of a shading model: an object's surface color is determined by lighting and material. This article considers the simplest light, a directional light, aimed at the model's front — i.e. shining down the -z axis:

```typescript
// src/core/raster.ts
this.lightDir = new Vec3(0, 0, -1)
```

##### 3.6.1 FlatShading

With a directional light, we only need to compute the light intensity — the angle between the light and the normal of the face (or pixel, or vertex). Whether the angle is computed per pixel or per face is the idea of shading frequency: compute one intensity for a whole triangle face and color every pixel in that face with it, or do it per vertex, or per pixel. These are the three standard shading frequencies, `flat`, `gouraud`, and `phong`, corresponding to per-face, per-vertex, and per-pixel. To make the changes easy to observe, we start with the simplest, `flat`:

```typescript
// src/core/shader.ts
export class FlatShader extends Shader {

    private normal: Vec3 = new Vec3(0, 0, 0)
    private lightIntensity: number = 0

    public vertexShader(vertex: Vec3): Vec3 {
        if (this.vertex.length == 3) this.vertex = []

        this.vertex.push(vertex)
        if (this.vertex.length == 3) {
            this.normal = this.vertex[1].sub(this.vertex[0]).cross(this.vertex[2].sub(this.vertex[0])).normalize()
            this.lightIntensity = Vec3.dot(Vec3.neg(this.raster.lightDir).normalize(), this.normal)
        }

        // mvp, viewport
        const modelMatrix = this.raster.modelMatrix
        const viewMatrix = this.raster.viewMatrix
        const projectionMatrix = this.raster.projectionMatrix
        const mvpMatrix = projectionMatrix.multiply(viewMatrix.multiply(modelMatrix))
        const viewPortMatrix = this.raster.viewPortMatrix
        const mergedMatrix = viewPortMatrix.multiply(mvpMatrix)

        return mergedMatrix.multiplyVec(new Vec4(vertex.x, vertex.y, vertex.z, 1)).toVec3()
    }

    public fragmentShader(barycentric: Vec3): [number, number, number, number] {
        return [255 * this.lightIntensity, 255 * this.lightIntensity, 255 * this.lightIntensity, 255]
    }
}
```

The vertex stage computes the current triangle face's normal, then computes and records that face's light intensity; the fragment stage then colors every pixel in the face with the same intensity-scaled value.

Note that the light direction is negated: the light we defined is a vector giving the direction the light travels, so it must be reversed for the angle computation.



Looking at the result above, you may wonder why the face stays brightest while rotating. We are rotating the camera; the light direction and the model's position never change, so the face remains the brightest part.

##### 3.6.2 GouraudShading

This shading frequency is per-vertex: compute the light intensity at each of the triangle's three vertex normals, then interpolate the intensity for every interior pixel. Straight to the code:

```typescript
export class GouraudShader extends Shader {

    private lightIntensityVetex: Array<number> = []
    public vertexShader(vertex: Vec3, idx: number): Vec3 {

        if (this.vertex.length == 3) {
            this.vertex = []
            this.lightIntensityVetex = []
        }
        this.vertex.push(vertex)
        const vertexNormals = this.raster.model.vertexNormals
        const vertexNormal = new Vec3(vertexNormals[idx], vertexNormals[idx + 1], vertexNormals[idx + 2]).normalize()
        this.lightIntensityVetex.push(Vec3.dot(vertexNormal, Vec3.neg(this.raster.lightDir).normalize()))

        // mvp, viewport
        const modelMatrix = this.raster.modelMatrix
        const viewMatrix = this.raster.viewMatrix
        const projectionMatrix = this.raster.projectionMatrix
        const mvpMatrix = projectionMatrix.multiply(viewMatrix.multiply(modelMatrix))
        const viewPortMatrix = this.raster.viewPortMatrix
        const mergedMatrix = viewPortMatrix.multiply(mvpMatrix)

        return mergedMatrix.multiplyVec(new Vec4(vertex.x, vertex.y, vertex.z, 1)).toVec3()
    }

    public fragmentShader(barycentric: Vec3): [number, number, number, number] {
        const lightIntensity = this.lightIntensityVetex[0] * barycentric.x + this.lightIntensityVetex[1] * barycentric.y + this.lightIntensityVetex[2] * barycentric.z
        return [255 * lightIntensity, 255 * lightIntensity, 255 * lightIntensity, 255]
    }
}
```

Note that the fragment stage here uses the barycentric coordinates passed in — the same barycentric coordinates used above for the depth test and the inside-triangle test.



##### 3.6.3 PhongShading

This shading frequency is per-pixel: each pixel's own normal determines its light intensity, and the pixel is colored accordingly. Per-pixel normals are not directly available; they require a normal map (or tangent-space normal map), which is itself a texture whose color values encode normal vectors. Within a triangle face, the pixel's uv coordinates are interpolated from the three vertices' uv coordinates, the pixel's normal is fetched from the normal map at that uv, and from it the light intensity and the pixel's final color are computed.

This touches on texture sampling, which is implemented in the next section, so for now we ignore the sampling logic and just implement the shading frequency. The code:

```typescript
export class PhoneShader extends Shader {

    private textureVetex: Array<Vec3> = []
    public vertexShader(vertex: Vec3, idx: number): Vec3 {

        if (this.vertex.length == 3) {
            this.vertex = []
            this.textureVetex = []
        }
        this.vertex.push(vertex)
        const vertexTextures = this.raster.model.textures
        this.textureVetex.push(new Vec3(vertexTextures[idx], vertexTextures[idx + 1], 0))

        // mvp, viewport
        const modelMatrix = this.raster.modelMatrix
        const viewMatrix = this.raster.viewMatrix
        const projectionMatrix = this.raster.projectionMatrix
        const mvpMatrix = projectionMatrix.multiply(viewMatrix.multiply(modelMatrix))
        const viewPortMatrix = this.raster.viewPortMatrix
        const mergedMatrix = viewPortMatrix.multiply(mvpMatrix)

        return mergedMatrix.multiplyVec(new Vec4(vertex.x, vertex.y, vertex.z, 1)).toVec3()
    }

    public fragmentShader(barycentric: Vec3): [number, number, number, number] {
        const u = this.textureVetex[0].x * barycentric.x + this.textureVetex[1].x * barycentric.y + this.textureVetex[2].x * barycentric.z
        const v = this.textureVetex[0].y * barycentric.x + this.textureVetex[1].y * barycentric.y + this.textureVetex[2].y * barycentric.z
        const normalColor = this.raster.textureNormal.sampling(u, v)

        let lightIntensity = 1
        if (normalColor)
        {
            const normal = new Vec3(normalColor[0] * 2 / 255 - 1, normalColor[1] * 2 / 255 - 1, normalColor[2] * 2 / 255 - 1).normalize()
            lightIntensity = Vec3.dot(Vec3.neg(this.raster.lightDir).normalize(), normal)
        }
        if (normalColor) return [255 * lightIntensity, 255 * lightIntensity, 255 * lightIntensity, 255]
        return [255, 255, 255, 255]
    }
}

```

As the code above shows, the vertex stage records each vertex's uv coordinates, and the fragment stage interpolates the current pixel's uv, then calls `this.raster.textureNormal.sampling(u, v)` to fetch the normal-encoding color from the normal map. The color must be converted into a normal vector — a fixed formula determined by how the normal map was generated.

`normalColor` is checked here because textures, being images, load asynchronously; the map may not have finished loading yet. The result clearly shows richer detail than the previous two shading frequencies:





#### 3.7 Texture Sampling

> **For easy loading, all texture maps in this project live in the ./dist directory**

The previous section introduced three shading frequencies under a single directional light, varying brightness with light intensity. So on the side facing away from the light the intensity is 0, meaning no color at all — completely unrealistic. A model has colors of its own — its texture; lighting should only affect a face's brightness. The texture records, via `uv coordinates`, all the color values the model uses.

##### 3.7.1 Implementation

Since this project runs in a web browser, images are loaded over `Http`, so creating a texture is an asynchronous process. Once an image loads, a `canvas` is used to decode and unpack it, yielding texture-format data (a bitmap). The code:

```typescript
// src/core/texture.ts
export class Texture {
    private image: HTMLImageElement
    private loaded: boolean = false
    private textureData: ImageData
    constructor(src: string) {
        this.image = new Image()
        this.image.src = src
        this.image.onload = () => {

            const canvas = document.createElement('canvas')
            canvas.width = this.image.width
            canvas.height = this.image.height

            const context = canvas.getContext('2d')
            context.drawImage(this.image, 0, 0)

            this.textureData = context.getImageData(0, 0, canvas.width, canvas.height)
            this.loaded = true
        }
    }

    public sampling(u: number, v: number): [number, number, number, number] | null {
        if (!this.loaded) return null
        const x = Math.floor(u * (this.image.width - 1))
        const y = Math.floor((1 - v) * (this.image.height - 1))
        return this.getPixel(x, y)
    }

    public getPixel(x: number, y: number): [number, number, number, number] {
        const result: [number, number, number, number] = [0, 0, 0, 0]

        result[0] = this.textureData.data[((y * this.image.width + x) * 4) + 0]
        result[1] = this.textureData.data[((y * this.image.width + x) * 4) + 1]
        result[2] = this.textureData.data[((y * this.image.width + x) * 4) + 2]
        result[3] = this.textureData.data[((y * this.image.width + x) * 4) + 3]

        return result
    }

}
```

```typescript
// src/core/raster.ts
this.textureNormal = new Texture("african_head_nm.png")
this.textureDiffuse = new Texture("african_head_diffuse.png")
```

Note that because the image is decoded via canvas, whose origin is at the top-left, while uv coordinates have their origin at the bottom-left, the `v coordinate` is flipped.

##### 3.7.2 Applying the Texture

With the texture in place, the fragment stage interpolates the pixel's uv from the three vertices' uvs and samples its color from the texture map:

```typescript
public fragmentShader(barycentric: Vec3): [number, number, number, number] {
    const u = this.textureVetex[0].x * barycentric.x + this.textureVetex[1].x * barycentric.y + this.textureVetex[2].x * barycentric.z
    const v = this.textureVetex[0].y * barycentric.x + this.textureVetex[1].y * barycentric.y + this.textureVetex[2].y * barycentric.z
    const corlor = this.raster.textureDiffuse.sampling(u, v)
    const normalColor = this.raster.textureNormal.sampling(u, v)
    let lightIntensity = 1
    if (normalColor && corlor) {
        const normal = new Vec3(normalColor[0] * 2 / 255 - 1, normalColor[1] * 2 / 255 - 1, normalColor[2] * 2 / 255 - 1).normalize()
        lightIntensity = Vec3.dot(Vec3.neg(this.raster.lightDir).normalize(), normal)
        return [corlor[0] * lightIntensity, corlor[1] * lightIntensity, corlor[2] * lightIntensity, corlor[3]]
    } else {
        return [255, 255, 255, 255]
    }
}
```
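All of the interpolation above — uv, depth, per-vertex intensity — rests on two helpers whose listings do not appear in this article: the `barycentric` function called in `triangle`, and the depth buffer in `src/utils/depthBuffer.ts`. Below is a minimal self-contained sketch of both; the shapes are inferred from how they are called above, not copied from the project, so treat the details as assumptions:

```typescript
// Sketch of the two unlisted helpers, inferred from their call sites:
// barycentric(...) returns a triple with negative components outside the
// triangle, and depthBuffer.get/set keeps the largest z seen per pixel.

type P = { x: number; y: number }

// Barycentric coordinates of p in triangle (a, b, c): returns [alpha, beta,
// gamma] with alpha + beta + gamma = 1; any negative component means p lies
// outside the triangle.
function barycentric(a: P, b: P, c: P, p: P): [number, number, number] {
    // doubled signed area of the triangle, via the 2D cross product
    const area = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y)
    if (Math.abs(area) < 1e-8) return [-1, -1, -1] // degenerate triangle
    const beta = ((p.x - a.x) * (c.y - a.y) - (c.x - a.x) * (p.y - a.y)) / area
    const gamma = ((b.x - a.x) * (p.y - a.y) - (p.x - a.x) * (b.y - a.y)) / area
    return [1 - beta - gamma, beta, gamma]
}

class DepthBuffer {
    private width: number
    private data: Float32Array
    constructor(width: number, height: number) {
        this.width = width
        // initialize to -Infinity: the render loop rejects fragments with
        // interpolatedZ < depth, i.e. it keeps the largest z per pixel
        this.data = new Float32Array(width * height).fill(-Infinity)
    }
    public get(x: number, y: number): number {
        return this.data[y * this.width + x]
    }
    public set(x: number, y: number, z: number): void {
        this.data[y * this.width + x] = z
    }
}
```

The project's real `barycentric` takes `Vec3`s rather than this hypothetical `P` type, but the math is the same 2D cross-product formulation.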
只要对前面用到的`PhoneShading`的`fragmentShader`中,参与光照强度计算的默认白色替换成我们从贴图获取的的颜色即可,这里对`normalColor`和`corlor`同时判断,原因和前面提交一样,纹理异步加载的,可能还未完成加载。看下效果: 822 | 823 | 824 | 825 | **未完待续。。。** 826 | 827 | #### 3.8 光照 828 | 829 | > 光照强度在实际计算时,为遵从物理定律,需要考虑点面距离光光源的距离,距离远近所能接受的光的能量是不同,下面的光照不考虑距离,感兴趣可以自行深入补充 830 | 831 | ​ 书接上文,从不同着色频率的角度对模型进行着色,在着色模型上只是简单采用了一种简单平行光,但是对于一个成熟的着色模型来说,只考虑一种所谓平行光是不完整的,所以本节开始介绍一个完整的着色模型所需要计算的关照,本项目着色模型实现基于`phone光照模型`,当然还有一个`Blinn-Phong光照模型`,俩者区别在于高光上计算有优化。 832 | 833 | ​ 首先需要明确的一件事,之所以物体能被我们观察,是因为人眼接收到了从物体来的光,这些来自物体的光有很多类型,具体类型依据所使用的光照模型。基于`phone光照模型`,该模型定义三种光,环境光、漫反射光、高光。 834 | 835 | ##### 3.8.1 环境光 836 | 837 | ​ 在现实环境中,周围光的折射射是复杂的,如物体背光的一面也是可能接受一定来自结果多次折射的光,并反射出去。这就是所谓的环境光,在`phone光照模型`中,只会去考虑环境光的影响,并且不会去精确的描述,而只是用一个简单的式子表示 838 | 839 | ![img](https://grab-1301500159.cos.ap-shanghai.myqcloud.com/markDown/v2-70373fb16b9559996126c48dd0671ed1_720w.webp) 840 | 841 | 其中`Ka`代表物体表面对环境光的反射率,`Ia`代表入射环境光的亮度,`Ienv`存储结果,即人眼所能看到从物体表面反射的环境光的亮度。 842 | 843 | ```typescript 844 | export class PhoneShader extends Shader { 845 | public fragmentShader(barycentric: Vec3): [number, number, number, number] { 846 | 847 | 848 | const u = this.textureVetex[0].x * barycentric.x + this.textureVetex[1].x * barycentric.y + this.textureVetex[2].x * barycentric.z 849 | const v = this.textureVetex[0].y * barycentric.x + this.textureVetex[1].y * barycentric.y + this.textureVetex[2].y * barycentric.z 850 | 851 | const corlor = this.raster.textureDiffuse.sampling(u, v) 852 | const normals = this.raster.textureNormal.sampling(u, v) 853 | 854 | if (!corlor || !normals) return [255, 255, 255, 255] 855 | 856 | // 环境光 857 | //const ambient = 1 858 | const ambient = 0.5 859 | const intensity = ambient 860 | return [corlor[0] * intensity, corlor[1] * intensity, corlor[2] * intensity, corlor[3]] 861 | } 862 | } 863 | ``` 864 | 865 | 22-08-21 866 | 867 | 22-08-42 868 | 869 | > ​ 上述俩个示意效果便是俩种不同Ienv下的结果 870 | 871 | ##### 3.8.2 漫反射 872 | 873 | ​ 
Diffuse reflection means that light arriving at some angle is scattered from the point of incidence in all directions, with equal intensity in every direction. The reflected intensity is determined by the angle between the incident light direction and the surface normal. The incident direction depends on the type of light source: 874 | 875 | - Directional light: the incident angle is fixed 876 | - Point light: the vector from the surface point to the light's position 877 | - Spotlight: a directional light with a limited range 878 | 879 | ​ This project uses the simplest source, a directional light, so the incident angle is fixed. Since the surface normal is involved, how the normal is computed depends on the shading frequency in use. Here we implement diffuse shading at per-pixel frequency, with the light direction set to (5,0,0): 880 | 881 | ```typescript 882 | export class PhoneShader extends Shader { 883 | public fragmentShader(barycentric: Vec3): [number, number, number, number] { 884 | 885 | 886 | const u = this.textureVetex[0].x * barycentric.x + this.textureVetex[1].x * barycentric.y + this.textureVetex[2].x * barycentric.z 887 | const v = this.textureVetex[0].y * barycentric.x + this.textureVetex[1].y * barycentric.y + this.textureVetex[2].y * barycentric.z 888 | 889 | const corlor = this.raster.textureDiffuse.sampling(u, v) 890 | const normals = this.raster.textureNormal.sampling(u, v) 891 | 892 | if (!corlor || !normals) return [255, 255, 255, 255] 893 | 894 | // ambient light 895 | const ambient = 0.5 896 | 897 | // diffuse 898 | const light = Vec3.neg(this.raster.lightDir).normalize() 899 | const diffuse = Math.max(Vec3.dot(new Vec3(normals[0] * 2 / 255 - 1, normals[1] * 2 / 255 - 1, normals[2] * 2 / 255 - 1).normalize(), light), 0) 900 | const intensity = ambient + diffuse 901 | return [corlor[0] * intensity, corlor[1] * intensity, corlor[2] * intensity, corlor[3]] 902 | } 903 | } 904 | ``` 905 | 906 | 22-38-33 907 | 908 | ​ The result above combines the `ambient` and `diffuse` terms. The directional light points along the positive X axis, so the left side of the face is clearly brighter than the right; thanks to the ambient term, even the right side of the face retains some brightness. 909 | 910 | ##### 3.8.3 Specular Highlights (Mirror Reflection) 911 | 912 | ​ As the name implies, objects reflect light, and when the reflected light happens to reach the viewing direction a highlight appears, like a mirror in everyday life. So whether a highlight shows up depends on the viewing direction and the reflection direction, that is, on the angle between them: 913 | 914 | ```typescript 915 | const u = this.textureVetex[0].x * barycentric.x + this.textureVetex[1].x * barycentric.y + this.textureVetex[2].x * barycentric.z 916 | const v = this.textureVetex[0].y * barycentric.x + this.textureVetex[1].y * barycentric.y + this.textureVetex[2].y * barycentric.z 917 | const x = this.viewSpaceVertex[0].x * barycentric.x + this.viewSpaceVertex[1].x * barycentric.y + this.viewSpaceVertex[2].x * barycentric.z 918 | const y = this.viewSpaceVertex[0].y * barycentric.x + this.viewSpaceVertex[1].y * barycentric.y + this.viewSpaceVertex[2].y * barycentric.z 919 | const z = this.viewSpaceVertex[0].z * barycentric.x + this.viewSpaceVertex[1].z * barycentric.y + this.viewSpaceVertex[2].z * barycentric.z 920 | 921 | const corlor = this.raster.textureDiffuse.sampling(u, v) 922 | const normals = this.raster.textureNormal.sampling(u, v) 923 | 924 | if (!corlor || !normals) return [255, 255, 255, 255] 925 | 926 | const light = Vec3.neg(this.raster.lightDir).normalize() 927 | const normal = new Vec3(normals[0] * 2 / 255 - 1, normals[1] * 2 / 255 - 1, normals[2] * 2 / 255 - 1).normalize() 928 | 929 | // ambient light 930 | const ambient = .5 931 | 932 | // diffuse 933 | const diffuse = Math.max(Vec3.dot(normal, light), 0) 934 | 935 | // specular 936 | const reflect = normal.scale(2 * Vec3.dot(normal, light)).sub(light) 937 | const viewVec = new Vec3(0, 0, 0).sub(new Vec3(x, y, z)).normalize() 938 | const specular = Math.pow(Math.max(Vec3.dot(reflect, viewVec), 0), 32) 939 | 940 | const intensity = ambient + diffuse + specular 941 | return [corlor[0] * intensity, corlor[1] * intensity, corlor[2] * intensity, corlor[3]] 942 | } 943 | } 944 | ``` 945 | 946 | ​ The reflection direction here comes from a fixed formula (see the `Phong reflection model`); the incident light is a directional light with a fixed angle. The key part is the view direction, which is the vector from the surface point to the viewer (camera) position. Note that we interpolate the pixel's world-space position and then transform it into view space, because in view space the camera sits at the origin. Finally, an exponent p controls the size of the highlight; this value is also commonly sampled from a specular map. 947 | 948 | 23-15-49 949 | 950 | 23-16-15 951 | 952 | ​ The two results above come from adjusting the exponent p to control the size of the highlight. Note that when we rotate the camera the position of the highlight changes even though the camera position itself does not: rotating changes the object's position relative to the camera, that is, its coordinates after transforming into view space. Since the view direction is the vector from the surface point to the viewer, this is exactly why, as mentioned above, the interpolated world coordinates are transformed into view space when computing it. 953 | 954 | 955 | 956 | - ***Last updated: 2024-09-16*** 957 | - ***To be continued*** 958 | -------------------------------------------------------------------------------- /build.js: -------------------------------------------------------------------------------- 1 | import * as rollup from "rollup" 2 | import config from "./rollup.config.mjs" 3 | 4 | rollup.rollup(config).then(bundle => { bundle.write(config.output) }) 
-------------------------------------------------------------------------------- /dist/african_head_diffuse.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Hyrmm/MyRaster/ea90b9fd65f0f0573cd85c972d12c1d64f0c8163/dist/african_head_diffuse.png -------------------------------------------------------------------------------- /dist/african_head_nm.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Hyrmm/MyRaster/ea90b9fd65f0f0573cd85c972d12c1d64f0c8163/dist/african_head_nm.png -------------------------------------------------------------------------------- /dist/african_head_nm_tangent.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Hyrmm/MyRaster/ea90b9fd65f0f0573cd85c972d12c1d64f0c8163/dist/african_head_nm_tangent.png -------------------------------------------------------------------------------- /dist/african_head_spec.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/Hyrmm/MyRaster/ea90b9fd65f0f0573cd85c972d12c1d64f0c8163/dist/african_head_spec.png -------------------------------------------------------------------------------- /dist/index.html: -------------------------------------------------------------------------------- 1 | 2 | 3 | 4 | 5 | 6 | my-raster 7 | 8 | 9 | 18 | 23 | 24 | 25 |
26 |
27 | 28 |
29 | 30 | 31 | 32 | 33 | 34 | -------------------------------------------------------------------------------- /package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "my-raster", 3 | "version": "1.0.0", 4 | "description": "A simple software renderer", 5 | "main": "index.js", 6 | "type": "module", 7 | "scripts": { 8 | "dev": "concurrently \"npx nodemon ./build.js -e ts,js,cjs,mjs,json\" \"live-server dist\"" 9 | }, 10 | "author": "", 11 | "license": "ISC", 12 | "dependencies": { 13 | "webgl-obj-loader": "^2.0.8" 14 | }, 15 | "devDependencies": { 16 | "@rollup/plugin-commonjs": "^26.0.1", 17 | "@rollup/plugin-node-resolve": "^15.2.3", 18 | "@rollup/plugin-typescript": "^11.1.6", 19 | "concurrently": "^9.0.0", 20 | "live-server": "^1.2.2", 21 | "nodemon": "^3.1.4", 22 | "rollup": "^4.21.0", 23 | "typescript": "^5.5.4" 24 | } 25 | } -------------------------------------------------------------------------------- /rollup.config.mjs: -------------------------------------------------------------------------------- 1 | import typescript from '@rollup/plugin-typescript'; 2 | import commonjs from '@rollup/plugin-commonjs'; 3 | import resolve from '@rollup/plugin-node-resolve'; 4 | export default { 5 | input: 'src/app.ts', 6 | output: { 7 | file: './dist/app.js', 8 | format: 'cjs', 9 | sourcemap: true, 10 | }, 11 | plugins: [resolve(), commonjs(), typescript()] 12 | } -------------------------------------------------------------------------------- /src/app.ts: -------------------------------------------------------------------------------- 1 | import { Raster } from "./core/raster" 2 | import { Matrix, Matrix44 } from "./math/matrix" 3 | import { Vec4 } from "./math/vector" 4 | 5 | 6 | 7 | class App { 8 | 9 | private static raster: Raster 10 | private static isMouseMoving: boolean = false 11 | 12 | public static init(canvas: HTMLCanvasElement) { 
13 | const context = canvas.getContext("2d") as CanvasRenderingContext2D 14 | this.raster = new Raster(canvas.width, canvas.height, context) 15 | } 16 | 17 | public static start() { 18 | 19 | let last = 0 20 | 21 | const loop = (timestamp: number) => { 22 | const delt = timestamp - last 23 | document.getElementById("fps")!.innerText = `FPS:${(1000 / delt).toFixed(0)}` 24 | this.mainLoop() 25 | last = timestamp 26 | requestAnimationFrame(loop) 27 | } 28 | 29 | loop(0) 30 | } 31 | 32 | public static onMouseUp(e: MouseEvent) { this.isMouseMoving = false } 33 | 34 | public static onMouseDown(e: MouseEvent) { this.isMouseMoving = true } 35 | 36 | public static onMouseMove(e: MouseEvent) { 37 | if (!this.isMouseMoving) return 38 | this.raster.camera.rotatedCamera(new Matrix44().rotateY(Math.sign(e.movementX) * 5 / 180 * Math.PI)) 39 | } 40 | 41 | public static onKeyDown(e: KeyboardEvent) { 42 | 43 | 44 | switch (e.code) { 45 | case "KeyW": { 46 | this.raster.camera.translatedCamera(new Matrix44().translate(0, 0, -10)) 47 | break; 48 | } 49 | case "KeyS": { 50 | this.raster.camera.translatedCamera(new Matrix44().translate(0, 0, 10)) 51 | break; 52 | } 53 | case "KeyA": { 54 | this.raster.camera.translatedCamera(new Matrix44().translate(-10, 0, 0)) 55 | break; 56 | } 57 | case "KeyD": { 58 | this.raster.camera.translatedCamera(new Matrix44().translate(10, 0, 0)) 59 | break; 60 | } 61 | 62 | } 63 | 64 | } 65 | 66 | public static mainLoop() { 67 | this.raster.render() 68 | } 69 | } 70 | 71 | const canvas = document.getElementById("canvas") as HTMLCanvasElement 72 | window.onkeydown = App.onKeyDown.bind(App) 73 | canvas.onmousedown = App.onMouseDown.bind(App) 74 | canvas.onmouseup = App.onMouseUp.bind(App) 75 | canvas.onmousemove = App.onMouseMove.bind(App) 76 | 77 | App.init(canvas) 78 | App.start() -------------------------------------------------------------------------------- /src/core/camera.ts: 
-------------------------------------------------------------------------------- 1 | import { Vec3 } from "../math/vector" 2 | import { Matrix44 } from "../math/matrix" 3 | 4 | export enum ProjectType { 5 | Perspective, 6 | Orthogonal 7 | } 8 | 9 | export type CameraParam = { 10 | sceenWidth: number, 11 | sceenHeight: number, 12 | fovY: number, 13 | aspect: number, 14 | near: number, 15 | far: number, 16 | projectType: ProjectType 17 | pos: Vec3, 18 | lookAt: Vec3, 19 | up: Vec3 20 | } 21 | 22 | export class Camera { 23 | 24 | 25 | private fovY: number 26 | private aspect: number 27 | 28 | private far: number 29 | private near: number 30 | 31 | private projectType: ProjectType 32 | 33 | private screenWidth: number 34 | private screenHeight: number 35 | 36 | public up: Vec3 37 | public pos: Vec3 38 | public lookAt: Vec3 39 | 40 | private transMatExc: Matrix44 41 | private rotationMatExc: Matrix44 42 | 43 | 44 | constructor(params: CameraParam) { 45 | 46 | this.fovY = params.fovY 47 | this.aspect = params.aspect 48 | 49 | this.far = params.far 50 | this.near = params.near 51 | 52 | this.projectType = params.projectType 53 | 54 | this.up = params.up 55 | this.pos = params.pos 56 | this.lookAt = params.lookAt 57 | 58 | this.screenWidth = params.sceenWidth 59 | this.screenHeight = params.sceenHeight 60 | 61 | this.transMatExc = new Matrix44() 62 | this.rotationMatExc = new Matrix44() 63 | } 64 | 65 | public look(): Matrix44 { 66 | 67 | /** 68 | * 前提定义: 69 | * 0、基于右手系,X 叉乘 Y 等于+Z,Y 叉乘 Z 等于+X 70 | * 1、原相机-原本和世界坐标系重合的相机 71 | * 2、先相机-原相机经过矩阵变化后等到现在的相机状态,也就是pos,lookAt,up组成的状态 72 | * 73 | * 视图变化个人理解: 74 | * 0、视图变化目的就是将世界坐标系和相机坐标做一个统一,方便后面投影计算,因为统一了坐标系,默认将原点作为投影的出发点定义一些平面和参数 75 | * 1、首先一个常识问题,对一个物体和相机以相同的方向和角度旋转,相机所观察到的画面是不不会变的,以互为相反的方向旋转,相机所观察的画面是我们显示生活中看到的画面 76 | * 2、想象原相机在世界坐标系下原点位置,在经过旋转、平移等操作后,得到我们现在的相机状态,也就是相机坐标系,vecZ,vecX,vecY 77 | * 3、由矩阵的本质,相机旋转、平移操作矩阵本质上就是现在相机坐标系的基向量,可以理解为原本和世界坐标系重合的相机经过现在的相机的基向量坐标系进行的矩阵变化 78 | * 
4、理论上我们只要将世界坐标系下的所有点都转换到相机坐标系下,也就是将所有世界左边乘上如今相机的基向量的组成的矩阵,由于相机操作和物体操作时相反的,所以应该是乘上如今相机的基向量的组成的矩阵的逆矩阵 79 | */ 80 | // 通过pos、lookAt、up求求现在相机的基向量 81 | const vecZ = this.pos.sub(this.lookAt).normalize() 82 | const vecX = this.up.cross(vecZ).normalize() 83 | const vecY = vecZ.cross(vecX).normalize() 84 | 85 | /** 86 | * oriTransMat: 87 | * [0, 0, 0, pos.x], 88 | * [0, 0, 0, pos.y], 89 | * [0, 0, 0, pos.z], 90 | * [0, 0, 0, 1] 91 | * 92 | * oriRotationMat: 93 | * [vecX.x, vecY.x, vecZ] 94 | * [vecX.y, vecY.y, vecZ] 95 | * [vecX.z, vecY.z, vecZ] 96 | * [0, 0, 0, 1] 97 | * 98 | * 现相机 = oriTransMat * oriRotationMat * 原相机 99 | * 100 | * 现在将世界坐标系下的点转换到相机坐标系下,可以想象原相加到先相机也是一个世界坐标系下的坐标到现相机的坐标系下 101 | * 考虑相机操作和物体是相反的操作,所以将世界坐标下的点等于 102 | * 值得关注的是,求逆后是先平移后旋转 103 | * (oriTransMat* oriRotationMat)^-1 = oriRotationMat^-1 * oriTransMat^-1 104 | */ 105 | const revTransMat = new Matrix44([ 106 | [1, 0, 0, -this.pos.x], 107 | [0, 1, 0, -this.pos.y], 108 | [0, 0, 1, -this.pos.z], 109 | [0, 0, 0, 1] 110 | ]) 111 | 112 | const revRotationMat = new Matrix44([ 113 | [vecX.x, vecX.y, vecX.z, 0], 114 | [vecY.x, vecY.y, vecY.z, 0], 115 | [vecZ.x, vecZ.y, vecZ.z, 0], 116 | [0, 0, 0, 1] 117 | ]) 118 | 119 | // 合成view矩阵,先平移后旋转 120 | return revRotationMat.multiply(revTransMat) 121 | } 122 | 123 | public orthogonal(): Matrix44 { 124 | 125 | const left = -this.screenWidth / 2 126 | const right = this.screenWidth / 2 127 | const bottom = -this.screenHeight / 2 128 | const top = this.screenHeight / 2 129 | 130 | const scaleMat = new Matrix44([ 131 | [2 / (right - left), 0, 0, 0], 132 | [0, 2 / (top - bottom), 0, 0], 133 | [0, 0, 2 / (this.near - this.far), 0], 134 | [0, 0, 0, 1] 135 | ]) 136 | 137 | const transMat = new Matrix44([ 138 | [1, 0, 0, -((right + left) / 2)], 139 | [0, 1, 0, -((top + bottom) / 2)], 140 | [0, 0, 1, -((this.far + this.near) / 2)], 141 | [0, 0, 0, 1] 142 | ]) 143 | return scaleMat.multiply(transMat) 144 | } 145 | 146 | public perspective(): Matrix44 { 147 | // 
切换perspective分支查看透视投影实现 148 | return new Matrix44([ 149 | [1, 0, 0, 0], 150 | [0, 1, 0, 0], 151 | [0, 0, 1, 0], 152 | [0, 0, 0, 1] 153 | ]) 154 | } 155 | 156 | public getViewMat(): Matrix44 { 157 | const baseViewMat = this.look() 158 | return this.transMatExc.transpose().multiply(this.rotationMatExc.transpose().multiply(baseViewMat)) 159 | } 160 | 161 | public getProjectMat(): Matrix44 { 162 | if (this.projectType == ProjectType.Orthogonal) { 163 | return this.orthogonal() 164 | } else { 165 | return this.perspective() 166 | } 167 | } 168 | 169 | public rotatedCamera(mat: Matrix44): void { 170 | this.rotationMatExc = mat.multiply(this.rotationMatExc) 171 | } 172 | 173 | public translatedCamera(mat: Matrix44): void { 174 | this.transMatExc = mat.multiply(this.transMatExc) 175 | } 176 | } -------------------------------------------------------------------------------- /src/core/raster.ts: -------------------------------------------------------------------------------- 1 | import { Mesh } from "webgl-obj-loader"; 2 | import { FrameBuffer } from "../utils/frameBuffer"; 3 | import { DepthBuffer } from "../utils/depthBuffer"; 4 | import african_head from "../model/african_head"; 5 | import { Shader, GouraudShader, FlatShader, PhoneShader } from "../core/shader"; 6 | import { Camera, ProjectType, CameraParam } from "./camera"; 7 | import { Vec3, Vec4 } from "../math/vector"; 8 | import { barycentric } from "../math/math" 9 | import { Matrix44 } from "../math/matrix" 10 | import { Texture } from "../core/texture" 11 | 12 | 13 | export class Raster { 14 | 15 | private width: number 16 | private height: number 17 | 18 | private frameBuffer: FrameBuffer 19 | private depthBuffer: DepthBuffer 20 | private vertexsBuffer: Array 21 | private trianglseBuffer: Array 22 | 23 | public model: Mesh 24 | public shader: Shader 25 | public camera: Camera 26 | public lightDir: Vec3 27 | 28 | public viewMatrix: Matrix44 29 | public modelMatrix: Matrix44 30 | public viewPortMatrix: Matrix44 
31 | public projectionMatrix: Matrix44 32 | 33 | public textureNormal: Texture 34 | public textureDiffuse: Texture 35 | 36 | private context: CanvasRenderingContext2D 37 | 38 | constructor(w: number, h: number, context: CanvasRenderingContext2D) { 39 | 40 | const defultCameraConfig: CameraParam = { 41 | fovY: 60, aspect: w / h, 42 | near: -0.1, far: -400, 43 | projectType: ProjectType.Orthogonal, 44 | up: new Vec3(0, 1, 0), pos: new Vec3(0, 0, 2), lookAt: new Vec3(0, 0, -1), 45 | sceenHeight: h, sceenWidth: w 46 | } 47 | 48 | this.width = w 49 | this.height = h 50 | 51 | this.context = context 52 | this.model = new Mesh(african_head, { enableWTextureCoord: true }) 53 | this.shader = new PhoneShader(this) 54 | this.camera = new Camera(defultCameraConfig) 55 | this.lightDir = new Vec3(0, 0, -1) 56 | 57 | this.vertexsBuffer = this.model.vertices 58 | this.trianglseBuffer = this.model.indices 59 | this.frameBuffer = new FrameBuffer(w, h) 60 | this.depthBuffer = new DepthBuffer(w, h) 61 | 62 | this.textureNormal = new Texture("african_head_nm.png") 63 | this.textureDiffuse = new Texture("african_head_diffuse.png") 64 | 65 | this.resetMatrix() 66 | } 67 | 68 | 69 | public clear() { 70 | 71 | for (let byteOffset = 0; byteOffset < this.frameBuffer.frameData.data.length; byteOffset += 4) { 72 | const [rIdx, gIdx, bIdx, aIdx] = [byteOffset + 0, byteOffset + 1, byteOffset + 2, byteOffset + 3] 73 | this.frameBuffer.frameData.data[rIdx] = 0 74 | this.frameBuffer.frameData.data[gIdx] = 0 75 | this.frameBuffer.frameData.data[bIdx] = 0 76 | this.frameBuffer.frameData.data[aIdx] = 255 77 | } 78 | 79 | this.depthBuffer = new DepthBuffer(this.width, this.height) 80 | } 81 | 82 | public render() { 83 | // 清理帧缓冲区 84 | this.clear() 85 | 86 | // 重置矩阵矩阵 87 | this.resetMatrix() 88 | 89 | for (let i = 0; i < this.trianglseBuffer.length; i += 3) { 90 | const screenCoords = [] 91 | // 顶点计算: 对每个顶点进行矩阵运算(MVP),输出顶点的屏幕坐标,顶点着色阶段 92 | for (let j = 0; j < 3; j++) { 93 | const idx = 
this.trianglseBuffer[i + j] 94 | const vertex = new Vec3(this.vertexsBuffer[idx * 3 + 0], this.vertexsBuffer[idx * 3 + 1], this.vertexsBuffer[idx * 3 + 2]) 95 | screenCoords.push(this.shader.vertexShader(vertex, idx * 3)) 96 | } 97 | // 绘制三角形:通过三个顶点计算包含在三角形内的屏幕像素,图元装配光栅化 98 | this.triangle(screenCoords) 99 | // this.line(screenCoords[0], screenCoords[1]) 100 | // this.line(screenCoords[1], screenCoords[2]) 101 | // this.line(screenCoords[2], screenCoords[0]) 102 | 103 | } 104 | 105 | this.context.putImageData(this.frameBuffer.frameData, 0, 0) 106 | } 107 | 108 | public line(start: Vec3, end: Vec3) { 109 | const dx = end.x - start.x 110 | const dy = end.y - start.y 111 | const k = dy / dx 112 | 113 | if (Math.abs(dx) >= Math.abs(dy)) { 114 | const b = start.y - k * start.x 115 | for (let x = start.x; x <= end.x; x++) { 116 | const y = Math.round(k * x + b) 117 | this.frameBuffer.setPixel(x, y, [255, 255, 255, 255]) 118 | } 119 | } else { 120 | const kInverse = 1 / k 121 | const b = start.x - kInverse * start.y 122 | for (let y = start.y; y <= end.y; y++) { 123 | const x = Math.round(kInverse * y + b) 124 | this.frameBuffer.setPixel(x, y, [255, 255, 255, 255]) 125 | } 126 | } 127 | } 128 | 129 | public triangle(screenCoords: Array) { 130 | const minx = Math.floor(Math.min(screenCoords[0].x, Math.min(screenCoords[1].x, screenCoords[2].x))) 131 | const maxx = Math.ceil(Math.max(screenCoords[0].x, Math.max(screenCoords[1].x, screenCoords[2].x))) 132 | const miny = Math.floor(Math.min(screenCoords[0].y, Math.min(screenCoords[1].y, screenCoords[2].y))) 133 | const maxy = Math.ceil(Math.max(screenCoords[0].y, Math.max(screenCoords[1].y, screenCoords[2].y))) 134 | for (let w = minx; w <= maxx; w++) { 135 | for (let h = miny; h <= maxy; h++) { 136 | const bar = barycentric(screenCoords, new Vec3(w, h, 0)) 137 | 138 | // 不在三角面内的像素点不进行着色 139 | if (bar.x < 0 || bar.y < 0 || bar.z < 0) continue 140 | 141 | // 计算插值后该像素的深度值,并进行深度测试 142 | const depth = this.depthBuffer.get(w, h) 
143 | const interpolatedZ = bar.x * screenCoords[0].z + bar.y * screenCoords[1].z + bar.z * screenCoords[2].z 144 | if (interpolatedZ < -1 || interpolatedZ > 1 || interpolatedZ < depth) continue 145 | 146 | // 调用片元着色器,计算该像素的颜色 147 | const color = this.shader.fragmentShader(bar) 148 | 149 | this.depthBuffer.set(w, h, interpolatedZ) 150 | this.frameBuffer.setPixel(w, h, color) 151 | } 152 | } 153 | } 154 | 155 | public resetMatrix() { 156 | 157 | // 模型矩阵:对模型进行平移、旋转、缩放等操作,得到模型矩阵 158 | // 这里模型文件坐标系也是右手系,且顶点坐标范围在-1^3到1^3之间,所以模型需要缩放下 159 | // 对模型的Z坐标进行平移,使得模型在相机前方(我们定义的相机在z=1上,往-z方向看) 160 | this.modelMatrix = new Matrix44([ 161 | [240, 0, 0, 0], 162 | [0, 240, 0, 0], 163 | [0, 0, 240, -240], 164 | [0, 0, 0, 1] 165 | ]) 166 | 167 | // 视图矩阵:将世界坐标系转换到观察(相机)坐标系,得到视图矩阵 168 | this.viewMatrix = this.camera.getViewMat() 169 | 170 | // 投影矩阵:通过定义的观察空间范围(近平面、远平面、fov、aspset等参数定义),将该空间坐标映射到-1^3到1^3的范围(NDC空间),得到投影矩阵 171 | // 值得注意的是,投影矩阵在经过视图矩阵变换后,坐标系的已经是观察坐标系,相机默认在原点上,且关于空间的定义也是基于这个坐标系 172 | // 这里可以很方便做空间裁剪,z坐标不在-1~1范围内的物体将被裁剪掉 173 | this.projectionMatrix = this.camera.getProjectMat() 174 | 175 | // 视口矩阵:将观察坐标系转换到屏幕坐标系,得到视口矩阵 176 | // 这里-this.height是因为canvas屏幕坐标系的原点在左上角,而模型坐标系的原点在中心,要进行坐标反转 177 | this.viewPortMatrix = new Matrix44([ 178 | [this.width / 2, 0, 0, this.width / 2], 179 | [0, -this.height / 2, 0, this.height / 2], 180 | [0, 0, 1, 0], 181 | [0, 0, 0, 1] 182 | ]) 183 | } 184 | 185 | } 186 | 187 | -------------------------------------------------------------------------------- /src/core/shader.ts: -------------------------------------------------------------------------------- 1 | import { Vec3, Vec4 } from "../math/vector" 2 | import { Raster } from "./raster" 3 | 4 | export abstract class Shader { 5 | protected vertex: Array = [] 6 | protected raster: Raster 7 | constructor(raster: Raster) { this.raster = raster } 8 | public vertexShader(vertex: Vec3, idx: number): Vec3 { return new Vec3(0, 0, 0) } 9 | public fragmentShader(barycentric: Vec3): [number, number, number, 
number] { return [0, 0, 0, 0] } 10 | } 11 | 12 | // Phong shading 13 | // Fetches the normal per pixel, using a normal map 14 | export class PhoneShader extends Shader { 15 | private textureVetex: Array<Vec3> = [] 16 | private viewSpaceVertex: Array<Vec3> = [] 17 | public vertexShader(vertex: Vec3, idx: number): Vec3 { 18 | 19 | if (this.vertex.length == 3) { 20 | this.vertex = [] 21 | this.viewSpaceVertex = [] 22 | this.textureVetex = [] 23 | } 24 | this.vertex.push(vertex) 25 | const vertexTextures = this.raster.model.textures 26 | this.textureVetex.push(new Vec3(vertexTextures[idx], vertexTextures[idx + 1], 0)) 27 | 28 | let result = new Vec4(vertex.x, vertex.y, vertex.z, 1) 29 | 30 | // mvp 31 | const modelMatrix = this.raster.modelMatrix 32 | const viewMatrix = this.raster.viewMatrix 33 | const projectionMatrix = this.raster.projectionMatrix 34 | const mvpMatrix = projectionMatrix.multiply(viewMatrix.multiply(modelMatrix)) 35 | 36 | result = mvpMatrix.multiplyVec(result) 37 | 38 | // viewport 39 | const viewPortMatrix = this.raster.viewPortMatrix 40 | result = viewPortMatrix.multiplyVec(result) 41 | 42 | this.viewSpaceVertex.push(mvpMatrix.multiplyVec(new Vec4(vertex.x, vertex.y, vertex.z, 1)).toVec3()) 43 | 44 | return result.toVec3() 45 | } 46 | 47 | public fragmentShader(barycentric: Vec3): [number, number, number, number] { 48 | 49 | 50 | const u = this.textureVetex[0].x * barycentric.x + this.textureVetex[1].x * barycentric.y + this.textureVetex[2].x * barycentric.z 51 | const v = this.textureVetex[0].y * barycentric.x + this.textureVetex[1].y * barycentric.y + this.textureVetex[2].y * barycentric.z 52 | const x = this.viewSpaceVertex[0].x * barycentric.x + this.viewSpaceVertex[1].x * barycentric.y + this.viewSpaceVertex[2].x * barycentric.z 53 | const y = this.viewSpaceVertex[0].y * barycentric.x + this.viewSpaceVertex[1].y * barycentric.y + this.viewSpaceVertex[2].y * barycentric.z 54 | const z = this.viewSpaceVertex[0].z * barycentric.x + this.viewSpaceVertex[1].z * barycentric.y + this.viewSpaceVertex[2].z * barycentric.z 55 | 56 | const corlor = 
this.raster.textureDiffuse.sampling(u, v) 57 | const normals = this.raster.textureNormal.sampling(u, v) 58 | 59 | if (!corlor || !normals) return [255, 255, 255, 255] 60 | 61 | const light = Vec3.neg(this.raster.lightDir).normalize() 62 | const normal = new Vec3(normals[0] * 2 / 255 - 1, normals[1] * 2 / 255 - 1, normals[2] * 2 / 255 - 1).normalize() 63 | 64 | 65 | // ambient light 66 | const ambient = .5 67 | 68 | // diffuse 69 | const diffuse = Math.max(Vec3.dot(normal, light), 0) 70 | 71 | // specular 72 | const reflect = normal.scale(2 * Vec3.dot(normal, light)).sub(light) 73 | const viewVec = new Vec3(0, 0, 0).sub(new Vec3(x, y, z)).normalize() 74 | const specular = Math.pow(Math.max(Vec3.dot(reflect, viewVec), 0), 256) 75 | 76 | const intensity = ambient + diffuse + specular 77 | return [corlor[0] * intensity, corlor[1] * intensity, corlor[2] * intensity, corlor[3]] 78 | } 79 | } 80 | 81 | // Gouraud shading 82 | // Computes the light intensity per vertex, then interpolates the intensity for the current pixel 83 | export class GouraudShader extends Shader { 84 | 85 | private lightIntensityVetex: Array<number> = [] 86 | public vertexShader(vertex: Vec3, idx: number): Vec3 { 87 | 88 | if (this.vertex.length == 3) { 89 | this.vertex = [] 90 | this.lightIntensityVetex = [] 91 | } 92 | this.vertex.push(vertex) 93 | const vertexNormals = this.raster.model.vertexNormals 94 | const vertexNormal = new Vec3(vertexNormals[idx], vertexNormals[idx + 1], vertexNormals[idx + 2]).normalize() 95 | this.lightIntensityVetex.push(Vec3.dot(vertexNormal, Vec3.neg(this.raster.lightDir).normalize())) 96 | 97 | let result = new Vec4(vertex.x, vertex.y, vertex.z, 1) 98 | 99 | // mvp 100 | const modelMatrix = this.raster.modelMatrix 101 | const viewMatrix = this.raster.viewMatrix 102 | const projectionMatrix = this.raster.projectionMatrix 103 | const mvpMatrix = projectionMatrix.multiply(viewMatrix.multiply(modelMatrix)) 104 | result = mvpMatrix.multiplyVec(result) 105 | 106 | // viewport 107 | const viewPortMatrix = this.raster.viewPortMatrix 108 | result = 
viewPortMatrix.multiplyVec(result) 109 | 110 | return result.toVec3() 111 | } 112 | 113 | public fragmentShader(barycentric: Vec3): [number, number, number, number] { 114 | const lightIntensity = this.lightIntensityVetex[0] * barycentric.x + this.lightIntensityVetex[1] * barycentric.y + this.lightIntensityVetex[2] * barycentric.z 115 | return [255 * lightIntensity, 255 * lightIntensity, 255 * lightIntensity, 255] 116 | } 117 | } 118 | 119 | // Flat shading 120 | // Shades per triangle: the face normal comes from the cross product of the triangle's edge vectors and drives the light intensity 121 | export class FlatShader extends Shader { 122 | 123 | private normal: Vec3 = new Vec3(0, 0, 0) 124 | private lightIntensity: number = 0 125 | 126 | public vertexShader(vertex: Vec3): Vec3 { 127 | if (this.vertex.length == 3) this.vertex = [] 128 | 129 | this.vertex.push(vertex) 130 | if (this.vertex.length == 3) { 131 | this.normal = this.vertex[1].sub(this.vertex[0]).cross(this.vertex[2].sub(this.vertex[0])).normalize() 132 | this.lightIntensity = Vec3.dot(Vec3.neg(this.raster.lightDir).normalize(), this.normal) 133 | } 134 | 135 | let result = new Vec4(vertex.x, vertex.y, vertex.z, 1) 136 | 137 | // mvp 138 | const modelMatrix = this.raster.modelMatrix 139 | const viewMatrix = this.raster.viewMatrix 140 | const projectionMatrix = this.raster.projectionMatrix 141 | const mvpMatrix = projectionMatrix.multiply(viewMatrix.multiply(modelMatrix)) 142 | result = mvpMatrix.multiplyVec(result) 143 | 144 | // viewport 145 | const viewPortMatrix = this.raster.viewPortMatrix 146 | result = viewPortMatrix.multiplyVec(result) 147 | 148 | return result.toVec3() 149 | } 150 | 151 | public fragmentShader(barycentric: Vec3): [number, number, number, number] { 152 | return [255 * this.lightIntensity, 255 * this.lightIntensity, 255 * this.lightIntensity, 255] 153 | } 154 | } -------------------------------------------------------------------------------- /src/core/texture.ts: -------------------------------------------------------------------------------- 1 | export class Texture { 2 | 3 | 
private image: HTMLImageElement 4 | private loaded: boolean = false 5 | private textureData: ImageData 6 | 7 | constructor(src: string) { 8 | this.image = new Image() 9 | this.image.src = src 10 | 11 | this.image.onload = () => { 12 | 13 | const canvas = document.createElement('canvas') 14 | canvas.width = this.image.width 15 | canvas.height = this.image.height 16 | 17 | const context = canvas.getContext('2d') 18 | context.drawImage(this.image, 0, 0) 19 | 20 | this.textureData = context.getImageData(0, 0, canvas.width, canvas.height) 21 | this.loaded = true 22 | } 23 | } 24 | 25 | public sampling(u: number, v: number): [number, number, number, number] | null { 26 | if (!this.loaded) return null 27 | const x = Math.floor(u * (this.image.width - 1)) 28 | const y = Math.floor((1 - v) * (this.image.height - 1)) 29 | return this.getPixel(x, y) 30 | } 31 | 32 | public getPixel(x: number, y: number): [number, number, number, number] { 33 | const result: [number, number, number, number] = [0, 0, 0, 0] 34 | 35 | result[0] = this.textureData.data[((y * this.image.width + x) * 4) + 0] 36 | result[1] = this.textureData.data[((y * this.image.width + x) * 4) + 1] 37 | result[2] = this.textureData.data[((y * this.image.width + x) * 4) + 2] 38 | result[3] = this.textureData.data[((y * this.image.width + x) * 4) + 3] 39 | 40 | return result 41 | } 42 | 43 | } -------------------------------------------------------------------------------- /src/math/math.ts: -------------------------------------------------------------------------------- 1 | import { Vec3 } from "./vector"; 2 | 3 | export const barycentric = (triangles: Vec3[], p: Vec3): Vec3 => { 4 | const a = triangles[0] 5 | const b = triangles[1] 6 | const c = triangles[2] 7 | 8 | const denominator = (b.y - c.y) * (a.x - c.x) + (c.x - b.x) * (a.y - c.y) 9 | 10 | const lambda1 = ((b.y - c.y) * (p.x - c.x) + (c.x - b.x) * (p.y - c.y)) / denominator 11 | const lambda2 = ((c.y - a.y) * (p.x - c.x) + (a.x - c.x) * (p.y - c.y)) / 
denominator 12 | const lambda3 = 1 - lambda1 - lambda2; 13 | 14 | return new Vec3(lambda1, lambda2, lambda3) 15 | } 16 | 17 | export const det = (v1, v2) => { 18 | return v1.x * v2.y - v1.y * v2.x; 19 | } -------------------------------------------------------------------------------- /src/math/matrix.ts: -------------------------------------------------------------------------------- 1 | import { Vec4 } from "./vector" 2 | 3 | 4 | export class Matrix { 5 | public cols: number 6 | public rows: number 7 | protected data: number[][] 8 | } 9 | export class Matrix44 extends Matrix { 10 | 11 | constructor(data?: Array) { 12 | super() 13 | if (data) { 14 | this.data = data 15 | } else { 16 | this.data = [ 17 | [1, 0, 0, 0], 18 | [0, 1, 0, 0], 19 | [0, 0, 1, 0], 20 | [0, 0, 0, 1] 21 | ] 22 | } 23 | 24 | this.cols = 4 25 | this.rows = 4 26 | 27 | } 28 | 29 | public setCol(col: number, val: [number, number, number, number]) { 30 | if (val.length != 4) throw new Error("Invalid input length") 31 | this.data[0][col] = val[0] 32 | this.data[1][col] = val[1] 33 | this.data[2][col] = val[2] 34 | this.data[3][col] = val[3] 35 | } 36 | 37 | public setRow(row: number, val: [number, number, number, number]) { 38 | if (val.length != 4) throw new Error("Invalid input length") 39 | this.data[row][0] = val[0] 40 | this.data[row][1] = val[1] 41 | this.data[row][2] = val[2] 42 | this.data[row][3] = val[3] 43 | } 44 | 45 | public multiply(mat: Matrix44): Matrix44 { 46 | const result = new Matrix44() 47 | 48 | for (let i = 0; i < 4; i++) { 49 | for (let j = 0; j < 4; j++) { 50 | let sum = 0 51 | for (let k = 0; k < 4; k++) { 52 | sum += this.data[i][k] * mat.data[k][j] 53 | } 54 | result.data[i][j] = sum 55 | } 56 | 57 | 58 | } 59 | 60 | return result 61 | 62 | } 63 | 64 | public multiplyVec(vec: Vec4): Vec4 { 65 | const result: Array = [] 66 | 67 | for (let i = 0; i < 4; i++) { 68 | result.push(this.data[i][0] * vec.x + this.data[i][1] * vec.y + this.data[i][2] * vec.z + this.data[i][3] * 
vec.w) 69 | } 70 | 71 | return new Vec4(result[0], result[1], result[2], result[3]) 72 | } 73 | 74 | public transpose(): Matrix44 { 75 | const result = new Matrix44() 76 | result.setRow(0, [this.data[0][0], this.data[1][0], this.data[2][0], -this.data[0][3]]) 77 | result.setRow(1, [this.data[0][1], this.data[1][1], this.data[2][1], -this.data[1][3]]) 78 | result.setRow(2, [this.data[0][2], this.data[1][2], this.data[2][2], -this.data[2][3]]) 79 | result.setRow(3, [0, 0, 0, 1]) 80 | return result 81 | } 82 | 83 | public translate(x: number, y: number, z: number): Matrix44 { 84 | const translateMat = new Matrix44() 85 | translateMat.setCol(3, [x, y, z, 1]) 86 | this.data = translateMat.multiply(this).data 87 | return this 88 | } 89 | 90 | public rotateX(angle: number): Matrix44 { 91 | const rotateMat = new Matrix44() 92 | const cos = Math.cos(angle) 93 | const sin = Math.sin(angle) 94 | rotateMat.setCol(1, [0, cos, -sin, 0]) 95 | rotateMat.setCol(2, [0, sin, cos, 0]) 96 | this.data = rotateMat.multiply(this).data 97 | return this 98 | } 99 | 100 | public rotateY(angle: number): Matrix44 { 101 | const rotateMat = new Matrix44() 102 | const cos = Math.cos(angle) 103 | const sin = Math.sin(angle) 104 | rotateMat.setCol(0, [cos, 0, sin, 0]) 105 | rotateMat.setCol(2, [-sin, 0, cos, 0]) 106 | this.data = rotateMat.multiply(this).data 107 | return this 108 | } 109 | 110 | public rotateZ(angle: number): Matrix44 { 111 | const rotateMat = new Matrix44() 112 | const cos = Math.cos(angle) 113 | const sin = Math.sin(angle) 114 | rotateMat.setCol(0, [cos, -sin, 0, 0]) 115 | rotateMat.setCol(1, [sin, cos, 0, 0]) 116 | this.data = rotateMat.multiply(this).data 117 | return this 118 | } 119 | 120 | public scale(x: number, y: number, z: number): Matrix44 { 121 | const scaleMat = new Matrix44() 122 | scaleMat.setCol(0, [x, 0, 0, 0]) 123 | scaleMat.setCol(1, [0, y, 0, 0]) 124 | scaleMat.setCol(2, [0, 0, z, 0]) 125 | this.data = scaleMat.multiply(this).data 126 | return this 127 | } 128 
| } 129 | -------------------------------------------------------------------------------- /src/math/vector.ts: -------------------------------------------------------------------------------- 1 | export class Vec3 { 2 | 3 | public x: number; 4 | public y: number; 5 | public z: number; 6 | 7 | constructor(x: number, y: number, z: number) { 8 | this.x = x 9 | this.y = y 10 | this.z = z 11 | } 12 | 13 | static sub(v1: Vec3, v2: Vec3): Vec3 { 14 | return new Vec3(v1.x - v2.x, v1.y - v2.y, v1.z - v2.z) 15 | } 16 | 17 | static dot(v1: Vec3, v2: Vec3): number { 18 | return v1.x * v2.x + v1.y * v2.y + v1.z * v2.z 19 | } 20 | 21 | static mul(v1: Vec3, v2: Vec3): Vec3 { 22 | return new Vec3(v1.x * v2.x, v1.y * v2.y, v1.z * v2.z) 23 | } 24 | 25 | static neg(v1: Vec3): Vec3 { 26 | return new Vec3(-v1.x, -v1.y, -v1.z) 27 | } 28 | 29 | static plus(v1: Vec3, v2: Vec3): Vec3 { 30 | return new Vec3(v1.x + v2.x, v1.y + v2.y, v1.z + v2.z) 31 | } 32 | 33 | public sub(v: Vec3): Vec3 { 34 | return new Vec3(this.x - v.x, this.y - v.y, this.z - v.z) 35 | } 36 | 37 | public scale(s: number): Vec3 { 38 | return new Vec3(this.x * s, this.y * s, this.z * s) 39 | } 40 | 41 | public cross(v: Vec3): Vec3 { 42 | const x = this.y * v.z - this.z * v.y; 43 | const y = this.z * v.x - this.x * v.z; 44 | const z = this.x * v.y - this.y * v.x; 45 | return new Vec3(x, y, z); 46 | } 47 | 48 | public normalize(): Vec3 { 49 | const length = this.length 50 | return new Vec3(this.x / length, this.y / length, this.z / length) 51 | } 52 | 53 | public toVec4(w: number = 1): Vec4 { 54 | return new Vec4(this.x, this.y, this.z, w) 55 | } 56 | 57 | 58 | 59 | public get length(): number { 60 | return Math.sqrt(this.x * this.x + this.y * this.y + this.z * this.z); 61 | } 62 | } 63 | 64 | export class Vec4 { 65 | 66 | public x: number; 67 | public y: number; 68 | public z: number; 69 | public w: number; 70 | 71 | constructor(x: number, y: number, z: number, w: number) { 72 | this.x = x 73 | this.y = y 74 | this.z = z 
75 | this.w = w 76 | } 77 | 78 | public div(v: number): Vec4 { 79 | return new Vec4(this.x / v, this.y / v, this.z / v, this.w / v) 80 | } 81 | 82 | public toVec3(): Vec3 { 83 | return new Vec3(this.x, this.y, this.z) 84 | } 85 | } -------------------------------------------------------------------------------- /src/utils/depthBuffer.ts: -------------------------------------------------------------------------------- 1 | export class DepthBuffer { 2 | 3 | private data: Map<string, number> 4 | 5 | constructor(width: number, height: number) { 6 | this.data = new Map() 7 | } 8 | 9 | public get(x: number, y: number): number { 10 | x = Math.floor(x) 11 | y = Math.floor(y) 12 | return this.data.get(`${x},${y}`) ?? Number.MIN_SAFE_INTEGER 13 | } 14 | 15 | public set(x: number, y: number, depth: number): void { 16 | x = Math.floor(x) 17 | y = Math.floor(y) 18 | this.data.set(`${x},${y}`, depth) 19 | } 20 | } 21 | -------------------------------------------------------------------------------- /src/utils/frameBuffer.ts: -------------------------------------------------------------------------------- 1 | export class FrameBuffer { 2 | 3 | private data: ImageData 4 | 5 | constructor(width: number, height: number) { 6 | this.data = new ImageData(width, height) 7 | } 8 | 9 | public get frameData(): ImageData { 10 | return this.data 11 | } 12 | 13 | public setPixel(x: number, y: number, rgba: [number, number, number, number]): void { 14 | x = Math.floor(x) 15 | y = Math.floor(y) 16 | if (x >= this.data.width || y >= this.data.height || x < 0 || y < 0) return 17 | this.data.data[((y * this.data.width + x) * 4) + 0] = rgba[0] 18 | this.data.data[((y * this.data.width + x) * 4) + 1] = rgba[1] 19 | this.data.data[((y * this.data.width + x) * 4) + 2] = rgba[2] 20 | this.data.data[((y * this.data.width + x) * 4) + 3] = rgba[3] 21 | } 22 | 23 | public getPixel(x: number, y: number): [number, number, number, number] { 24 | 25 | const result: [number, number, number, number] = [0, 0, 0, 0] 26 | 27 |
result[0] = this.data.data[((y * this.data.width + x) * 4) + 0] 28 | result[1] = this.data.data[((y * this.data.width + x) * 4) + 1] 29 | result[2] = this.data.data[((y * this.data.width + x) * 4) + 2] 30 | result[3] = this.data.data[((y * this.data.width + x) * 4) + 3] 31 | 32 | return result 33 | } 34 | } 35 | -------------------------------------------------------------------------------- /tsconfig.json: -------------------------------------------------------------------------------- 1 | { 2 | "compilerOptions": { 3 | /* Visit https://aka.ms/tsconfig to read more about this file */ 4 | 5 | /* Projects */ 6 | // "incremental": true, /* Save .tsbuildinfo files to allow for incremental compilation of projects. */ 7 | // "composite": true, /* Enable constraints that allow a TypeScript project to be used with project references. */ 8 | // "tsBuildInfoFile": "./.tsbuildinfo", /* Specify the path to .tsbuildinfo incremental compilation file. */ 9 | // "disableSourceOfProjectReferenceRedirect": true, /* Disable preferring source files instead of declaration files when referencing composite projects. */ 10 | // "disableSolutionSearching": true, /* Opt a project out of multi-project reference checking when editing. */ 11 | // "disableReferencedProjectLoad": true, /* Reduce the number of projects loaded automatically by TypeScript. */ 12 | 13 | /* Language and Environment */ 14 | "target": "ES6", /* Set the JavaScript language version for emitted JavaScript and include compatible library declarations. */ 15 | // "lib": [], /* Specify a set of bundled library declaration files that describe the target runtime environment. */ 16 | // "jsx": "preserve", /* Specify what JSX code is generated. */ 17 | // "experimentalDecorators": true, /* Enable experimental support for legacy experimental decorators. */ 18 | // "emitDecoratorMetadata": true, /* Emit design-type metadata for decorated declarations in source files. 
*/ 19 | // "jsxFactory": "", /* Specify the JSX factory function used when targeting React JSX emit, e.g. 'React.createElement' or 'h'. */ 20 | // "jsxFragmentFactory": "", /* Specify the JSX Fragment reference used for fragments when targeting React JSX emit e.g. 'React.Fragment' or 'Fragment'. */ 21 | // "jsxImportSource": "", /* Specify module specifier used to import the JSX factory functions when using 'jsx: react-jsx*'. */ 22 | // "reactNamespace": "", /* Specify the object invoked for 'createElement'. This only applies when targeting 'react' JSX emit. */ 23 | // "noLib": true, /* Disable including any library files, including the default lib.d.ts. */ 24 | // "useDefineForClassFields": true, /* Emit ECMAScript-standard-compliant class fields. */ 25 | // "moduleDetection": "auto", /* Control what method is used to detect module-format JS files. */ 26 | 27 | /* Modules */ 28 | "module": "ESNext", /* Specify what module code is generated. */ 29 | // "rootDir": "./", /* Specify the root folder within your source files. */ 30 | "moduleResolution": "node10", /* Specify how TypeScript looks up a file from a given module specifier. */ 31 | // "baseUrl": "./", /* Specify the base directory to resolve non-relative module names. */ 32 | // "paths": {}, /* Specify a set of entries that re-map imports to additional lookup locations. */ 33 | // "rootDirs": [], /* Allow multiple folders to be treated as one when resolving modules. */ 34 | // "typeRoots": [], /* Specify multiple folders that act like './node_modules/@types'. */ 35 | // "types": [], /* Specify type package names to be included without being referenced in a source file. */ 36 | // "allowUmdGlobalAccess": true, /* Allow accessing UMD globals from modules. */ 37 | // "moduleSuffixes": [], /* List of file name suffixes to search when resolving a module. */ 38 | // "allowImportingTsExtensions": true, /* Allow imports to include TypeScript file extensions. 
Requires '--moduleResolution bundler' and either '--noEmit' or '--emitDeclarationOnly' to be set. */ 39 | // "resolvePackageJsonExports": true, /* Use the package.json 'exports' field when resolving package imports. */ 40 | // "resolvePackageJsonImports": true, /* Use the package.json 'imports' field when resolving imports. */ 41 | // "customConditions": [], /* Conditions to set in addition to the resolver-specific defaults when resolving imports. */ 42 | // "resolveJsonModule": true, /* Enable importing .json files. */ 43 | // "allowArbitraryExtensions": true, /* Enable importing files with any extension, provided a declaration file is present. */ 44 | // "noResolve": true, /* Disallow 'import's, 'require's or ''s from expanding the number of files TypeScript should add to a project. */ 45 | 46 | /* JavaScript Support */ 47 | // "allowJs": true, /* Allow JavaScript files to be a part of your program. Use the 'checkJS' option to get errors from these files. */ 48 | // "checkJs": true, /* Enable error reporting in type-checked JavaScript files. */ 49 | // "maxNodeModuleJsDepth": 1, /* Specify the maximum folder depth used for checking JavaScript files from 'node_modules'. Only applicable with 'allowJs'. */ 50 | 51 | /* Emit */ 52 | // "declaration": true, /* Generate .d.ts files from TypeScript and JavaScript files in your project. */ 53 | // "declarationMap": true, /* Create sourcemaps for d.ts files. */ 54 | // "emitDeclarationOnly": true, /* Only output d.ts files and not JavaScript files. */ 55 | // "sourceMap": true, /* Create source map files for emitted JavaScript files. */ 56 | // "inlineSourceMap": true, /* Include sourcemap files inside the emitted JavaScript. */ 57 | // "outFile": "", /* Specify a file that bundles all outputs into one JavaScript file. If 'declaration' is true, also designates a file that bundles all .d.ts output. */ 58 | "outDir": "./build", /* Specify an output folder for all emitted files. 
*/ 59 | // "removeComments": true, /* Disable emitting comments. */ 60 | // "noEmit": true, /* Disable emitting files from a compilation. */ 61 | // "importHelpers": true, /* Allow importing helper functions from tslib once per project, instead of including them per-file. */ 62 | // "downlevelIteration": true, /* Emit more compliant, but verbose and less performant JavaScript for iteration. */ 63 | // "sourceRoot": "", /* Specify the root path for debuggers to find the reference source code. */ 64 | // "mapRoot": "", /* Specify the location where debugger should locate map files instead of generated locations. */ 65 | // "inlineSources": true, /* Include source code in the sourcemaps inside the emitted JavaScript. */ 66 | // "emitBOM": true, /* Emit a UTF-8 Byte Order Mark (BOM) in the beginning of output files. */ 67 | // "newLine": "crlf", /* Set the newline character for emitting files. */ 68 | // "stripInternal": true, /* Disable emitting declarations that have '@internal' in their JSDoc comments. */ 69 | // "noEmitHelpers": true, /* Disable generating custom helper functions like '__extends' in compiled output. */ 70 | // "noEmitOnError": true, /* Disable emitting files if any type checking errors are reported. */ 71 | // "preserveConstEnums": true, /* Disable erasing 'const enum' declarations in generated code. */ 72 | // "declarationDir": "./", /* Specify the output directory for generated declaration files. */ 73 | 74 | /* Interop Constraints */ 75 | // "isolatedModules": true, /* Ensure that each file can be safely transpiled without relying on other imports. */ 76 | // "verbatimModuleSyntax": true, /* Do not transform or elide any imports or exports not marked as type-only, ensuring they are written in the output file's format based on the 'module' setting. */ 77 | // "isolatedDeclarations": true, /* Require sufficient annotation on exports so other tools can trivially generate declaration files. 
*/ 78 | // "allowSyntheticDefaultImports": true, /* Allow 'import x from y' when a module doesn't have a default export. */ 79 | "esModuleInterop": true, /* Emit additional JavaScript to ease support for importing CommonJS modules. This enables 'allowSyntheticDefaultImports' for type compatibility. */ 80 | // "preserveSymlinks": true, /* Disable resolving symlinks to their realpath. This correlates to the same flag in node. */ 81 | "forceConsistentCasingInFileNames": true, /* Ensure that casing is correct in imports. */ 82 | 83 | /* Type Checking */ 84 | "strict": false, /* Enable all strict type-checking options. */ 85 | // "noImplicitAny": true, /* Enable error reporting for expressions and declarations with an implied 'any' type. */ 86 | // "strictNullChecks": true, /* When type checking, take into account 'null' and 'undefined'. */ 87 | // "strictFunctionTypes": true, /* When assigning functions, check to ensure parameters and the return values are subtype-compatible. */ 88 | // "strictBindCallApply": true, /* Check that the arguments for 'bind', 'call', and 'apply' methods match the original function. */ 89 | // "strictPropertyInitialization": true, /* Check for class properties that are declared but not set in the constructor. */ 90 | // "noImplicitThis": true, /* Enable error reporting when 'this' is given the type 'any'. */ 91 | // "useUnknownInCatchVariables": true, /* Default catch clause variables as 'unknown' instead of 'any'. */ 92 | // "alwaysStrict": true, /* Ensure 'use strict' is always emitted. */ 93 | // "noUnusedLocals": true, /* Enable error reporting when local variables aren't read. */ 94 | // "noUnusedParameters": true, /* Raise an error when a function parameter isn't read. */ 95 | // "exactOptionalPropertyTypes": true, /* Interpret optional property types as written, rather than adding 'undefined'. */ 96 | // "noImplicitReturns": true, /* Enable error reporting for codepaths that do not explicitly return in a function. 
*/ 97 | // "noFallthroughCasesInSwitch": true, /* Enable error reporting for fallthrough cases in switch statements. */ 98 | // "noUncheckedIndexedAccess": true, /* Add 'undefined' to a type when accessed using an index. */ 99 | // "noImplicitOverride": true, /* Ensure overriding members in derived classes are marked with an override modifier. */ 100 | // "noPropertyAccessFromIndexSignature": true, /* Enforces using indexed accessors for keys declared using an indexed type. */ 101 | // "allowUnusedLabels": true, /* Disable error reporting for unused labels. */ 102 | // "allowUnreachableCode": true, /* Disable error reporting for unreachable code. */ 103 | 104 | /* Completeness */ 105 | // "skipDefaultLibCheck": true, /* Skip type checking .d.ts files that are included with TypeScript. */ 106 | "skipLibCheck": true /* Skip type checking all .d.ts files. */ 107 | } 108 | } 109 | --------------------------------------------------------------------------------
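The `Vec3` helpers in `src/math/vector.ts` (`sub`, `cross`, `normalize`) are the building blocks for flat shading: two triangle edges crossed and normalized give the face normal. The `V3` class below is a self-contained sketch that only mirrors that part of the project's `Vec3` API (it is not the repo's class) to show the pattern:

```typescript
// Minimal Vec3-style class, mirroring the project's sub/cross/normalize API.
class V3 {
  constructor(public x: number, public y: number, public z: number) {}

  sub(v: V3): V3 {
    return new V3(this.x - v.x, this.y - v.y, this.z - v.z)
  }

  cross(v: V3): V3 {
    return new V3(
      this.y * v.z - this.z * v.y,
      this.z * v.x - this.x * v.z,
      this.x * v.y - this.y * v.x,
    )
  }

  get length(): number {
    return Math.sqrt(this.x * this.x + this.y * this.y + this.z * this.z)
  }

  normalize(): V3 {
    const l = this.length
    return new V3(this.x / l, this.y / l, this.z / l)
  }
}

// Face normal of a counter-clockwise triangle in the z = 0 plane: +z.
const a = new V3(0, 0, 0)
const b = new V3(1, 0, 0)
const c = new V3(0, 1, 0)
const n = b.sub(a).cross(c.sub(a)).normalize()
console.log(n.x, n.y, n.z) // 0 0 1
```

Winding order matters: swapping `b` and `c` flips the cross product and yields the -z normal, which is what backface culling keys off.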
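`DepthBuffer` in `src/utils/depthBuffer.ts` keys a `Map` by a `"x,y"` string and treats `Number.MIN_SAFE_INTEGER` as "farthest possible", so a fragment only survives the depth test when its depth is greater than the stored value. The sketch below illustrates that pattern (the class and `writeFragment` are illustrative names, not the project's API); note the `??` fallback, because `||` would misread a legitimately stored depth of `0` as an empty slot:

```typescript
// Map-based depth buffer keyed by "x,y", as in the project's DepthBuffer.
class SimpleDepthBuffer {
  private data = new Map<string, number>()

  get(x: number, y: number): number {
    // `??` (not `||`) so a stored depth of exactly 0 is still returned.
    return this.data.get(`${Math.floor(x)},${Math.floor(y)}`) ?? Number.MIN_SAFE_INTEGER
  }

  set(x: number, y: number, depth: number): void {
    this.data.set(`${Math.floor(x)},${Math.floor(y)}`, depth)
  }
}

const zbuf = new SimpleDepthBuffer()

// Depth test: larger depth means closer; only closer fragments are written.
function writeFragment(x: number, y: number, depth: number): boolean {
  if (depth <= zbuf.get(x, y)) return false // occluded by an earlier fragment
  zbuf.set(x, y, depth)
  return true
}

console.log(writeFragment(1, 1, -5)) // true: buffer starts at MIN_SAFE_INTEGER
console.log(writeFragment(1, 1, -9)) // false: farther than the stored -5
```

A `Map` keeps the sketch short; a production rasterizer would typically use a flat `Float64Array` of size `width * height` instead, which is why the constructor takes those dimensions.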