├── assets
│   └── images
│       ├── placeholdet.txt
│       ├── apm1.png
│       ├── apm2.png
│       ├── apm3.png
│       ├── banner-1.png
│       ├── banner-2.jpg
│       ├── banner-3.jpg
│       ├── banner-4.jpg
│       ├── circleci.png
│       ├── twitter.png
│       ├── Privatenpm.png
│       ├── Sketch (8).png
│       ├── swaggerDoc.png
│       ├── twitter-s.png
│       ├── viconblue.PNG
│       ├── checkbox-sm.png
│       ├── monitoring1.png
│       ├── monitoring2.jpg
│       ├── monitoring3.png
│       ├── setnodeenv1.png
│       ├── smartlogging1.png
│       ├── smartlogging2.jpg
│       ├── swaggerMarkup.png
│       ├── uptimerobot.jpg
│       ├── checkbox-small.PNG
│       ├── checkmark-green.png
│       ├── testingpyramid.png
│       ├── jenkins_dashboard.png
│       ├── keepexpressinweb.gif
│       ├── structurebyroles.PNG
│       ├── utilizecpucores1.png
│       ├── checkbox-small-blue.png
│       ├── kibana-raw-1024x637.png
│       ├── app-dynamics-dashboard.png
│       ├── checkmark-green-small.png
│       ├── checkmark-green_small.png
│       ├── kibana-graph-1024x550.jpg
│       ├── structurebycomponents.PNG
│       └── createmaintenanceendpoint1.png
├── .gitignore
├── sections
│   ├── projectstructre
│   │   ├── createlayers.chinese.md
│   │   ├── createlayers.md
│   │   ├── configguide.chinese.md
│   │   ├── wraputilities.md
│   │   ├── separateexpress.md
│   │   ├── configguide.md
│   │   ├── thincomponents.md
│   │   ├── breakintcomponents.chinese.md
│   │   └── breakintcomponents.md
│   ├── testingandquality
│   │   ├── bumpversion.md
│   │   └── citools.md
│   ├── template.md
│   ├── production
│   │   ├── detectvulnerabilities.md
│   │   ├── apmproducts.md
│   │   ├── setnodeenv.md
│   │   ├── productoncode.md
│   │   ├── createmaintenanceendpoint.md
│   │   ├── bestateless.md
│   │   ├── measurememory.md
│   │   ├── utilizecpu.md
│   │   ├── assigntransactionid.md
│   │   ├── lockdependencies.md
│   │   ├── guardprocess.md
│   │   ├── frontendout.md
│   │   ├── monitoring.md
│   │   ├── delegatetoproxy.md
│   │   └── smartlogging.md
│   ├── codestylepractices
│   │   └── eslint_prettier.md
│   ├── errorhandling
│   │   ├── documentingusingswagger.md
│   │   ├── testingerrorflows.md
│   │   ├── monitoring.md
│   │   ├── usematurelogger.md
│   │   ├── failfast.md
│   │   ├── apmproducts.md
│   │   ├── catchunhandledpromiserejection.md
│   │   ├── shuttingtheprocess.md
│   │   ├── asyncerrorhandling.md
│   │   ├── operationalvsprogrammererror.md
│   │   ├── centralizedhandling.md
│   │   └── useonlythebuiltinerror.md
│   └── drafts
│       ├── readme-general-toc-2.md
│       ├── readme-general-toc-1.md
│       ├── readme-general-toc-3.md
│       └── readme-general-toc-4.md
├── LICENSE
└── README.md
/assets/images/placeholdet.txt:
--------------------------------------------------------------------------------
1 | lorem ipsum
2 |
--------------------------------------------------------------------------------
/assets/images/apm1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/apm1.png
--------------------------------------------------------------------------------
/assets/images/apm2.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/apm2.png
--------------------------------------------------------------------------------
/assets/images/apm3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/apm3.png
--------------------------------------------------------------------------------
/assets/images/banner-1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/banner-1.png
--------------------------------------------------------------------------------
/assets/images/banner-2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/banner-2.jpg
--------------------------------------------------------------------------------
/assets/images/banner-3.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/banner-3.jpg
--------------------------------------------------------------------------------
/assets/images/banner-4.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/banner-4.jpg
--------------------------------------------------------------------------------
/assets/images/circleci.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/circleci.png
--------------------------------------------------------------------------------
/assets/images/twitter.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/twitter.png
--------------------------------------------------------------------------------
/assets/images/Privatenpm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/Privatenpm.png
--------------------------------------------------------------------------------
/assets/images/Sketch (8).png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/Sketch (8).png
--------------------------------------------------------------------------------
/assets/images/swaggerDoc.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/swaggerDoc.png
--------------------------------------------------------------------------------
/assets/images/twitter-s.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/twitter-s.png
--------------------------------------------------------------------------------
/assets/images/viconblue.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/viconblue.PNG
--------------------------------------------------------------------------------
/assets/images/checkbox-sm.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/checkbox-sm.png
--------------------------------------------------------------------------------
/assets/images/monitoring1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/monitoring1.png
--------------------------------------------------------------------------------
/assets/images/monitoring2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/monitoring2.jpg
--------------------------------------------------------------------------------
/assets/images/monitoring3.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/monitoring3.png
--------------------------------------------------------------------------------
/assets/images/setnodeenv1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/setnodeenv1.png
--------------------------------------------------------------------------------
/assets/images/smartlogging1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/smartlogging1.png
--------------------------------------------------------------------------------
/assets/images/smartlogging2.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/smartlogging2.jpg
--------------------------------------------------------------------------------
/assets/images/swaggerMarkup.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/swaggerMarkup.png
--------------------------------------------------------------------------------
/assets/images/uptimerobot.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/uptimerobot.jpg
--------------------------------------------------------------------------------
/assets/images/checkbox-small.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/checkbox-small.PNG
--------------------------------------------------------------------------------
/assets/images/checkmark-green.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/checkmark-green.png
--------------------------------------------------------------------------------
/assets/images/testingpyramid.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/testingpyramid.png
--------------------------------------------------------------------------------
/assets/images/jenkins_dashboard.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/jenkins_dashboard.png
--------------------------------------------------------------------------------
/assets/images/keepexpressinweb.gif:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/keepexpressinweb.gif
--------------------------------------------------------------------------------
/assets/images/structurebyroles.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/structurebyroles.PNG
--------------------------------------------------------------------------------
/assets/images/utilizecpucores1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/utilizecpucores1.png
--------------------------------------------------------------------------------
/assets/images/checkbox-small-blue.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/checkbox-small-blue.png
--------------------------------------------------------------------------------
/assets/images/kibana-raw-1024x637.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/kibana-raw-1024x637.png
--------------------------------------------------------------------------------
/assets/images/app-dynamics-dashboard.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/app-dynamics-dashboard.png
--------------------------------------------------------------------------------
/assets/images/checkmark-green-small.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/checkmark-green-small.png
--------------------------------------------------------------------------------
/assets/images/checkmark-green_small.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/checkmark-green_small.png
--------------------------------------------------------------------------------
/assets/images/kibana-graph-1024x550.jpg:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/kibana-graph-1024x550.jpg
--------------------------------------------------------------------------------
/assets/images/structurebycomponents.PNG:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/structurebycomponents.PNG
--------------------------------------------------------------------------------
/assets/images/createmaintenanceendpoint1.png:
--------------------------------------------------------------------------------
https://raw.githubusercontent.com/gaspaonrocks/nodebestpractices/HEAD/assets/images/createmaintenanceendpoint1.png
--------------------------------------------------------------------------------
/.gitignore:
--------------------------------------------------------------------------------
1 | *.log
2 | .idea
3 | .vscode
4 | .idea/**/*
5 | .vscode/**/*
6 | .nyc_output
7 | mochawesome-report
8 | .DS_Store
9 | npm-debug.log.*
10 | node_modules
11 | node_modules/**/*
12 | .eslintcache
13 | cert
14 | logs/*
15 | desktop.ini
16 | package-lock.json
17 |
--------------------------------------------------------------------------------
/sections/projectstructre/createlayers.chinese.md:
--------------------------------------------------------------------------------
1 | # Layer your app, keep Express within its boundaries
2 |
3 |
4 | ### Separate component code into layers: web, services and DAL
5 | 
6 |
7 |
8 |
9 | ### 1 min explainer: The downside of mixing layers
10 | 
11 |
--------------------------------------------------------------------------------
/sections/projectstructre/createlayers.md:
--------------------------------------------------------------------------------
1 | # Layer your app, keep Express within its boundaries
2 |
3 |
4 |
5 | ### Separate component code into layers: web, services and DAL
6 | 
7 |
8 |
9 |
10 | ### 1 min explainer: The downside of mixing layers
11 | 
12 |
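The separation above can be sketched in code. This is a minimal, hypothetical example (names such as `createOrderService` and `orderRepository` are illustrative, not from any specific codebase) where the service layer stays free of Express objects:

```javascript
// service layer – plain JavaScript, no req/res, no HTTP status codes,
// so it can be unit-tested without Express or any network calls
function createOrderService(orderData, orderRepository) {
  const order = Object.assign({}, orderData, { createdAt: new Date() });
  return orderRepository.save(order);
}

// web layer (controller) – the only layer aware of Express objects;
// it translates HTTP to plain objects and delegates to the service:
// router.post('/orders', (req, res) => {
//   const order = createOrderService(req.body, repository);
//   res.status(201).json(order);
// });

module.exports = { createOrderService };
```

Because the service receives and returns plain objects, swapping the web framework (or calling the same logic from a queue consumer) requires no change to the business logic.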
--------------------------------------------------------------------------------
/sections/testingandquality/bumpversion.md:
--------------------------------------------------------------------------------
1 | # Title here
2 |
3 |
4 | ### One Paragraph Explainer
5 |
6 | Text
7 |
8 |
9 | ### Code Example – explanation
10 |
11 | ```javascript
12 | code here
13 | ```
14 |
15 | ### Code Example – another
16 |
17 | ```javascript
18 | code here
19 | ```
20 |
21 | ### Blog Quote: "Title"
22 | From the blog pouchdb.com, ranked 11 for the keywords “Node Promises”
23 |
24 | > …text here
25 |
26 | ### Image title
27 | 
28 |
29 |
30 |
31 |
--------------------------------------------------------------------------------
/sections/template.md:
--------------------------------------------------------------------------------
1 | # Title here
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | Text
9 |
10 |
11 |
12 |
13 | ### Code Example – explanation
14 |
15 | ```javascript
16 | code here
17 | ```
18 |
19 |
20 |
21 | ### Code Example – another
22 |
23 | ```javascript
24 | code here
25 | ```
26 |
27 |
28 |
29 | ### Blog Quote: "Title"
30 | From the blog pouchdb.com, ranked 11 for the keywords “Node Promises”
31 |
32 | > …text here
33 |
34 |
35 |
36 | ### Image title
37 | 
38 |
39 |
40 |
41 |
--------------------------------------------------------------------------------
/sections/projectstructre/configguide.chinese.md:
--------------------------------------------------------------------------------
1 | # Use environment aware, secure and hierarchical config
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | When dealing with configuration data, many things can just annoy and slow down: (1) setting all the keys using process environment variables becomes very tedious when you need to inject 100 keys (instead of just committing them in a config file), however when dealing with files only, the DevOps admins cannot alter the behavior without changing the code. A reliable config solution must combine configuration files + overrides from the process variables. (2) when specifying all keys in a flat JSON, it becomes frustrating to find and modify entries as the list grows bigger. Few configuration libraries allow storing the configuration in multiple files and take care of uniting them all at runtime; a hierarchical JSON file that is grouped into sections can overcome this issue, see the example below. (3) storing sensitive information like a DB password is obviously not recommended, but no quick and handy solution exists for this challenge. Some configuration libraries allow encrypting files, others encrypt those entries during GIT commits, or simply don't store real values for those entries and specify the actual value during deployment via environment variables. (4) some advanced configuration scenarios demand injecting configuration values via the command line (vargs) or syncing configuration info via a centralized cache like Redis, so different servers won't hold different configuration data.
9 |
10 | Some configuration libraries can provide most of these features for free, have a look at NPM libraries like [nconf](https://www.npmjs.com/package/nconf) and [config](https://www.npmjs.com/package/config) which tick many of these boxes.
11 |
12 |
13 | ### Code Example – hierarchical config helps to find entries and maintain huge config files
14 | ```javascript
15 | {
16 |   // Customer module configs
17 |   "Customer": {
18 |     "dbConfig": {
19 |       "host": "localhost",
20 |       "port": 5984,
21 |       "dbName": "customers"
22 |     },
23 |     "credit": {
24 |       "initialLimit": 100,
25 |       // Set low for development
26 |       "initialDays": 1
27 |     }
28 |   }
29 | }
30 | ```
31 |
32 |
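To illustrate how file defaults can be combined with environment-variable overrides, here is a hand-rolled sketch (in real projects prefer a library like nconf or config; `DB_HOST` and the helper name are illustrative):

```javascript
// Minimal sketch of hierarchical config + env-var override, without a library.
const defaults = {
  Customer: {
    dbConfig: { host: 'localhost', port: 5984, dbName: 'customers' }
  }
};

function getConfig(path, envKey) {
  // A process environment variable always wins over the committed default
  if (envKey && process.env[envKey] !== undefined) return process.env[envKey];
  // Walk the hierarchical keys, e.g. 'Customer:dbConfig:host'
  return path.split(':').reduce((node, key) => (node ? node[key] : undefined), defaults);
}

console.log(getConfig('Customer:dbConfig:host', 'DB_HOST'));
```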
--------------------------------------------------------------------------------
/sections/projectstructre/wraputilities.md:
--------------------------------------------------------------------------------
1 | # Wrap common utilities as NPM packages
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 | Once you start growing and have different components on different servers which consume similar utilities, you should start managing the dependencies - how can you keep one copy of your utility code and let multiple consumer components use and deploy it? Well, there is a tool for that, it's called npm... Start by wrapping 3rd party utility packages with your own code to make them easily replaceable in the future, and publish your own code as a private npm package. Now, all your code base can import that code and benefit from a free dependency management tool. It's possible to publish npm packages for your own private use without sharing them publicly using [private modules](https://docs.npmjs.com/private-modules/intro), a [private registry](https://npme.npmjs.com/docs/tutorials/npm-enterprise-with-nexus.html) or [local npm packages](https://medium.com/@arnaudrinquin/build-modular-application-with-npm-local-modules-dfc5ff047bcc)
8 |
9 |
10 |
11 |
12 |
13 | ### Sharing your own common utilities across environments and components
14 | 
15 |
--------------------------------------------------------------------------------
/sections/production/detectvulnerabilities.md:
--------------------------------------------------------------------------------
1 | # Use tools that automatically detect vulnerable dependencies
2 |
3 |
4 |
5 | ### One Paragraph Explainer
6 |
7 | Modern Node applications have tens and sometimes hundreds of dependencies. If any of the dependencies
8 | you use has a known security vulnerability, your app is vulnerable as well.
9 | The following tools automatically check for known security vulnerabilities in your dependencies:
10 |
11 | - [nsp](https://www.npmjs.com/package/nsp) - Node Security Project
12 | - [snyk](https://snyk.io/) - Continuously find & fix vulnerabilities in your dependencies
13 |
14 |
15 |
16 | ### What Other Bloggers Say
17 | From the [StrongLoop](https://strongloop.com/strongblog/best-practices-for-express-in-production-part-one-security/) blog:
18 |
19 | > ...Using npm to manage your application’s dependencies is powerful and convenient. But the packages that you use may contain critical security vulnerabilities that could also affect your application. The security of your app is only as strong as the “weakest link” in your dependencies. Fortunately, there are two helpful tools you can use to ensure the security of the third-party packages you use: nsp and requireSafe. These two tools do largely the same thing, so using both might be overkill, but “better safe than sorry” are words to live by when it comes to security...
20 |
--------------------------------------------------------------------------------
/sections/codestylepractices/eslint_prettier.md:
--------------------------------------------------------------------------------
1 | # Using ESLint and Prettier
2 |
3 |
4 | ### Comparing ESLint and Prettier
5 |
6 | If you format this code using ESLint, it will just give you a warning that it's too wide (depending on your `max-len` setting). Prettier will automatically format it for you.
7 |
8 | ```javascript
9 | foo(reallyLongArg(), omgSoManyParameters(), IShouldRefactorThis(), isThereSeriouslyAnotherOne(), noWayYouGottaBeKiddingMe());
10 | ```
11 |
12 | ```javascript
13 | foo(
14 |   reallyLongArg(),
15 |   omgSoManyParameters(),
16 |   IShouldRefactorThis(),
17 |   isThereSeriouslyAnotherOne(),
18 |   noWayYouGottaBeKiddingMe()
19 | );
20 | ```
21 |
22 | Source: [https://github.com/prettier/prettier-eslint/issues/101](https://github.com/prettier/prettier-eslint/issues/101)
23 |
24 | ### Integrating ESLint and Prettier
25 |
26 | ESLint and Prettier overlap in the code formatting feature but can be easily combined by using other packages like [prettier-eslint](https://github.com/prettier/prettier-eslint), [eslint-plugin-prettier](https://github.com/prettier/eslint-plugin-prettier), and [eslint-config-prettier](https://github.com/prettier/eslint-config-prettier). For more information about their differences, you can view the link [here](https://stackoverflow.com/questions/44690308/whats-the-difference-between-prettier-eslint-eslint-plugin-prettier-and-eslint).
27 |
--------------------------------------------------------------------------------
/sections/errorhandling/documentingusingswagger.md:
--------------------------------------------------------------------------------
1 | # Document API errors using Swagger
2 |
3 |
4 | ### One Paragraph Explainer
5 |
6 | REST APIs return results using HTTP status codes, so it’s absolutely required for the API user to be aware not only of the API schema but also of potential errors – the caller may then catch an error and tactfully handle it. For example, your API documentation might state in advance that HTTP status 409 is returned when the customer name already exists (assuming the API registers new users) so the caller can correspondingly render the best UX for the given situation. Swagger is a standard that defines the schema of API documentation, offering an ecosystem of tools that allow creating documentation easily online, see print screens below
7 |
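For illustration, such an error can be declared right next to the success response in the path definition – a hypothetical Swagger (OpenAPI) fragment, shown here as a JavaScript object:

```javascript
// Hypothetical Swagger path definition documenting the 409 error
// alongside the success response, so callers can code against it
const registerUserPath = {
  post: {
    summary: 'Register a new user',
    responses: {
      '200': { description: 'User created successfully' },
      '409': { description: 'Conflict – a customer with this name already exists' }
    }
  }
};
```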
8 | ### Blog Quote: "You have to tell your callers what errors can happen"
9 | From the blog Joyent, ranked 1 for the keywords “Node.JS logging”
10 |
11 | > We’ve talked about how to handle errors, but when you’re writing a new function, how do you deliver errors to the code that called your function? …If you don’t know what errors can happen or don’t know what they mean, then your program cannot be correct except by accident. So if you’re writing a new function, you have to tell your callers what errors can happen and what they mean…
12 |
13 |
14 | ### Useful Tool: Swagger Online Documentation Creator
15 | 
--------------------------------------------------------------------------------
/sections/production/apmproducts.md:
--------------------------------------------------------------------------------
1 | # Assure user experience with APM products
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | APM (application performance monitoring) refers to a family of products that aim to monitor application performance from end to end, also from the customer perspective. While traditional monitoring solutions focus on exceptions and standalone technical metrics (e.g. error tracking, slow server endpoints, etc.), in the real world our app might create disappointed users without any code exceptions, for example if some middleware service performs really slowly. APM products measure the user experience from end to end; for example, given a system that encompasses a frontend UI and multiple distributed services, some APM products can tell how long a transaction that spans multiple tiers lasts. They can tell whether the user experience is solid and point to the problem. This attractive offering comes with a relatively high price tag, hence it’s recommended for large-scale and complex products that require going beyond straightforward monitoring.
9 |
10 |
11 |
12 |
13 | ### APM example – a commercial product that visualize cross-service app performance
14 |
15 | 
16 |
17 |
18 |
19 | ### APM example – a commercial product that emphasize the user experience score
20 |
21 | 
22 |
23 |
24 |
25 | ### APM example – a commercial product that highlights slow code paths
26 |
27 | 
28 |
--------------------------------------------------------------------------------
/sections/errorhandling/testingerrorflows.md:
--------------------------------------------------------------------------------
1 | # Test error flows using your favorite test framework
2 |
3 |
4 | ### One Paragraph Explainer
5 |
6 | Testing ‘happy’ paths is no better than testing failures. Good testing code coverage demands testing exceptional paths. Otherwise, there is no trust that exceptions are indeed handled correctly. Every unit testing framework, like [Mocha](https://mochajs.org/) & [Chai](http://chaijs.com/), supports exception testing (code examples below). If you find it tedious to test every inner function and exception, you may settle for testing only REST API HTTP errors.
7 |
8 |
9 |
10 | ### Code example: ensuring the right exception is thrown using Mocha & Chai
11 |
12 | ```javascript
13 | describe("Facebook chat", () => {
14 |   it("Notifies on new chat message", () => {
15 |     const chatService = new ChatService();
16 |     chatService.participants = getDisconnectedParticipants();
17 |     expect(() => chatService.sendMessage("Hi")).to.throw(ConnectionError);
18 |   });
19 | });
20 |
21 | ```
22 |
23 | ### Code example: ensuring API returns the right HTTP error code
24 |
25 | ```javascript
26 | it("Creates new Facebook group", function (done) {
27 |   const invalidGroupInfo = {};
28 |   httpRequest({
29 |     method: 'POST',
30 |     uri: "facebook.com/api/groups",
31 |     resolveWithFullResponse: true,
32 |     body: invalidGroupInfo,
33 |     json: true
34 |   }).then((response) => {
35 |     // if we were to execute the code in this block, no error was thrown in the operation above
36 |   }).catch(function (response) {
37 |     expect(400).to.equal(response.statusCode);
38 |     done();
39 |   });
40 | });
41 |
42 | ```
--------------------------------------------------------------------------------
/sections/production/setnodeenv.md:
--------------------------------------------------------------------------------
1 | # Set NODE_ENV = production
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | Process environment variables are a set of key-value pairs made available to any running program, usually for configuration purposes. Though any variables can be used, Node encourages the convention of using a variable called NODE_ENV to flag whether we’re in production right now. This determination allows components to provide better diagnostics during development, for example by disabling caching or emitting verbose log statements. Any modern deployment tool – Chef, Puppet, CloudFormation, others – supports setting environment variables during deployment
9 |
10 |
11 |
12 |
13 | ### Code example: Setting and reading the NODE_ENV environment variable
14 |
15 | ```javascript
16 | // Setting environment variables in bash before starting the node process
17 | $ export NODE_ENV=production
18 | $ node
19 |
20 | // Reading the environment variable using code
21 | if (process.env.NODE_ENV === "production")
22 |   useCaching = true;
23 | ```
24 |
25 |
26 |
27 |
28 | ### What Other Bloggers Say
29 | From the blog [dynatrace](https://www.dynatrace.com/blog/the-drastic-effects-of-omitting-node_env-in-your-express-js-applications/):
30 | > ...In Node.js there is a convention to use a variable called NODE_ENV to set the current mode. We see that it in fact reads NODE_ENV and defaults to ‘development’ if it isn’t set. We clearly see that by setting NODE_ENV to production the number of requests Node.js can handle jumps by around two-thirds while the CPU usage even drops slightly. *Let me emphasize this: Setting NODE_ENV to production makes your application 3 times faster!*
31 |
32 |
33 | 
34 |
35 |
36 |
37 |
--------------------------------------------------------------------------------
/sections/projectstructre/separateexpress.md:
--------------------------------------------------------------------------------
1 | # Separate Express 'app' and 'server'
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | The latest Express generator comes with a great practice that is worth keeping - the API declaration is separated from the network-related configuration (port, protocol, etc). This allows testing the API in-process, without performing network calls, with all the benefits that it brings to the table: fast testing execution and getting coverage metrics of the code. It also allows deploying the same API under flexible and different network conditions. Bonus: better separation of concerns and cleaner code
9 |
10 |
11 |
12 | ### Code example: API declaration, should reside in app.js
13 |
14 | ```javascript
15 | var app = express();
16 | app.use(bodyParser.json());
17 | app.use("/api/events", events.API);
18 | app.use("/api/forms", forms);
19 |
20 | ```
21 |
22 |
23 |
24 | ### Code example: Server network declaration, should reside in /bin/www
25 |
26 | ```javascript
27 | var app = require('../app');
28 | var http = require('http');
29 |
30 | /**
31 | * Get port from environment and store in Express.
32 | */
33 |
34 | var port = normalizePort(process.env.PORT || '3000');
35 | app.set('port', port);
36 |
37 | /**
38 | * Create HTTP server.
39 | */
40 |
41 | var server = http.createServer(app);
42 |
43 | ```
44 |
45 |
46 | ### Example: test your API in-process using supertest (popular testing package)
47 |
48 | ```javascript
49 | const app = express();
50 |
51 | app.get('/user', function(req, res) {
52 |   res.status(200).json({ name: 'tobi' });
53 | });
54 |
55 | request(app)
56 |   .get('/user')
57 |   .expect('Content-Type', /json/)
58 |   .expect('Content-Length', '15')
59 |   .expect(200)
60 |   .end(function(err, res) {
61 |     if (err) throw err;
62 |   });
63 | ```
64 |
--------------------------------------------------------------------------------
/sections/production/productoncode.md:
--------------------------------------------------------------------------------
1 | # Make your code production-ready
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | The following is a list of development tips that greatly affect production maintenance and stability:
9 |
10 | * The twelve-factor guide – Get familiar with the [Twelve factors](https://12factor.net/) guide
11 | * Be stateless – Save no data locally on a specific web server (see separate bullet – ‘Be Stateless’)
12 | * Cache – Utilize caching heavily, yet never fail because of a cache miss
13 | * Test memory – Gauge memory usage and leaks as part of your development flow; tools such as ‘memwatch’ can greatly facilitate this task
14 | * Name functions – Minimize the usage of anonymous functions (i.e. inline callbacks) as a typical memory profiler reports memory usage per method name
15 | * Use CI tools – Use a CI tool to detect failures before sending to production. For example, use ESLint to detect reference errors and undefined variables. Use --trace-sync-io to identify code that uses synchronous APIs (instead of the async version)
16 | * Log wisely – Include contextual information in each log statement, preferably in JSON format, so log aggregator tools such as Elastic can search on those properties (see separate bullet – ‘Increase visibility using smart logs’). Also, include a transaction-id that identifies each request and allows correlating lines that describe the same transaction (see separate bullet – ‘Include Transaction-ID’)
17 | * Error management – Error handling is the Achilles’ heel of Node.js production sites – many Node processes crash because of minor errors while others hang alive in a faulty state instead of crashing. Setting your error handling strategy is absolutely critical; read these [error handling best practices](http://goldbergyoni.com/checklist-best-practices-of-node-js-error-handling/)
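
The ‘Name functions’ tip above is easy to demonstrate: a named function is reported by name in memory/CPU profiles and stack traces, while an anonymous one shows up as an anonymous frame. A minimal sketch (the function name is illustrative):

```javascript
// Anonymous callback – a profiler will report this frame as anonymous
setTimeout(() => { /* some work */ }, 0);

// Named function – profilers and stack traces will report 'flushCache'
function flushCache() { /* some work */ }
setTimeout(flushCache, 0);

// The function's name property is what profiling tools display
console.log(flushCache.name); // 'flushCache'
```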
18 |
--------------------------------------------------------------------------------
/sections/production/createmaintenanceendpoint.md:
--------------------------------------------------------------------------------
1 | # Create a maintenance endpoint
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | A maintenance endpoint is a plain, secured HTTP API that is part of the app code and whose purpose is to be used by the ops/production team to monitor and expose maintenance functionality. For example, it can return a heap dump (memory snapshot) of the process, report whether there are memory leaks, and even allow executing REPL commands directly. This endpoint is needed where the conventional DevOps tools (monitoring products, logs, etc.) fail to gather some specific type of information, or where you choose not to buy/install such tools. The golden rule is to use professional, external tools for monitoring and maintaining production – these are usually more robust and accurate. That said, there are likely to be cases where the generic tools fail to extract information that is specific to Node or to your app – for example, should you wish to generate a memory snapshot at the moment the GC completes a cycle – a few npm libraries will be glad to perform this for you, but popular monitoring tools are likely to miss this functionality
9 |
10 |
11 |
12 |
13 | ### Code example: generating a heap dump via code
14 |
15 | ```javascript
16 | const fs = require('fs');
17 | const heapdump = require('heapdump');
18 |
19 | router.get('/ops/heapdump', (req, res, next) => {
20 |   logger.info('About to generate a heap dump');
21 |   heapdump.writeSnapshot((err, filename) => {
22 |     if (err) return next(err);
23 |     logger.info(`Heap dump is ready to be sent to the caller: ${filename}`);
24 |     fs.readFile(filename, 'utf-8', (err, data) => {
25 |       if (err) return next(err);
26 |       res.end(data);
27 |     });
28 |   });
29 | });
30 | ```
28 |
29 |
30 |
31 | ### Recommended Resources
32 |
33 | [Getting your Node.js app production ready (Slides)](http://naugtur.pl/pres3/node2prod)
34 |
35 | ▶ [Getting your Node.js app production ready (Video)](https://www.youtube.com/watch?v=lUsNne-_VIk)
36 |
37 | 
38 |
--------------------------------------------------------------------------------
/sections/projectstructre/configguide.md:
--------------------------------------------------------------------------------
1 | # Use environment aware, secure and hierarchical config
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | When dealing with configuration data, many things can annoy and slow you down: (1) setting all the keys using process environment variables becomes very tedious when you need to inject 100 keys (instead of just committing those in a config file); however, when dealing with files only, the DevOps admins cannot alter the behaviour without changing the code. A reliable config solution must combine configuration files with overrides from process variables. (2) when specifying all keys in a flat JSON, it becomes frustrating to find and modify entries as the list grows. A hierarchical JSON file that is grouped into sections can overcome this issue; additionally, a few config libraries allow storing the configuration in multiple files and take care of unioning them all at runtime. See the example below. (3) storing sensitive information like a DB password is obviously not recommended, but no quick and handy solution exists for this challenge. Some configuration libraries allow encrypting files, others encrypt those entries during Git commits, or simply don't store real values for those entries and specify the actual value during deployment via environment variables. (4) some advanced configuration scenarios demand injecting configuration values via the command line (vargs) or syncing configuration info via a centralized cache like Redis so multiple servers use the same configuration data.
9 |
10 | Some configuration libraries can provide most of these features for free, have a look at NPM libraries like [rc](https://www.npmjs.com/package/rc), [nconf](https://www.npmjs.com/package/nconf) and [config](https://www.npmjs.com/package/config) which tick many of these requirements.
11 |
12 |
13 |
14 | ### Code Example – hierarchical config helps to find entries and maintain huge config files
15 |
16 | ```javascript
17 | {
18 | // Customer module configs
19 | "Customer": {
20 | "dbConfig": {
21 | "host": "localhost",
22 | "port": 5984,
23 | "dbName": "customers"
24 | },
25 | "credit": {
26 | "initialLimit": 100,
27 | // Set low for development
28 | "initialDays": 1
29 | }
30 | }
31 | }
32 | ```
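
To illustrate the files-plus-environment-overrides principle without committing to a specific library, here is a minimal, dependency-free sketch. The variable names `CUSTOMER_DB_HOST`/`CUSTOMER_DB_PORT` are illustrative; libraries like nconf or config implement this precedence ordering for you:

```javascript
// Hierarchical defaults that would normally live in a committed config file
const fileConfig = {
  Customer: {
    dbConfig: { host: 'localhost', port: 5984, dbName: 'customers' }
  }
};

// Environment variables win over file values,
// e.g. CUSTOMER_DB_HOST=db.prod node app.js
function getConfig() {
  return {
    Customer: {
      dbConfig: {
        host: process.env.CUSTOMER_DB_HOST || fileConfig.Customer.dbConfig.host,
        port: Number(process.env.CUSTOMER_DB_PORT) || fileConfig.Customer.dbConfig.port,
        dbName: fileConfig.Customer.dbConfig.dbName
      }
    }
  };
}

console.log(getConfig().Customer.dbConfig.host); // 'localhost' unless overridden
```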
33 |
34 |
35 |
--------------------------------------------------------------------------------
/sections/testingandquality/citools.md:
--------------------------------------------------------------------------------
1 | # Carefully choose your CI platform
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | The CI world used to be a trade-off between the flexibility of [Jenkins](https://jenkins.io/) and the simplicity of SaaS vendors. The game is now changing as SaaS providers like [CircleCI](https://circleci.com/) and [Travis](https://travis-ci.org/) offer robust solutions, including Docker containers, with minimum setup time, while Jenkins tries to compete on the 'simplicity' segment as well. Though one can set up a rich CI solution in the cloud, should it be required to control the finest details, Jenkins is still the platform of choice. The choice eventually boils down to which extent the CI process should be customized: free and setup-free cloud vendors allow running custom shell commands and custom Docker images, adjusting the workflow, running matrix builds and other rich features. However, if controlling the infrastructure or programming the CI logic using a formal programming language like Java is desired – Jenkins might still be the choice. Otherwise, consider opting for the simple, setup-free cloud option
9 |
10 |
11 |
12 |
13 | ### Code Example – a typical cloud CI configuration. A single .yml file and that's it
14 | ```yaml
15 | version: 2
16 | jobs:
17 | build:
18 | docker:
19 | - image: circleci/node:4.8.2
20 | - image: mongo:3.4.4
21 | steps:
22 | - checkout
23 | - run:
24 | name: Install npm dependencies
25 | command: npm install
26 | test:
27 | docker:
28 | - image: circleci/node:4.8.2
29 | - image: mongo:3.4.4
30 | steps:
31 | - checkout
32 | - run:
33 | name: Test
34 | command: npm test
35 | - run:
36 | name: Generate code coverage
37 | command: './node_modules/.bin/nyc report --reporter=text-lcov'
38 | - store_artifacts:
39 | path: coverage
40 | prefix: coverage
41 |
42 | ```
43 |
44 |
45 |
46 | ### Circle CI - almost zero setup cloud CI
47 | 
48 |
49 | ### Jenkins - sophisticated and robust CI
50 | 
51 |
52 |
53 |
54 |
--------------------------------------------------------------------------------
/sections/errorhandling/monitoring.md:
--------------------------------------------------------------------------------
1 | # Monitoring
2 |
3 |
4 | ### One Paragraph Explainer
5 |
6 | At the very basic level, monitoring means you can easily identify when bad things happen in production, for example, by getting notified by email or Slack. The challenge is to choose the right set of tools that will satisfy your requirements without breaking the bank. Start by defining the core set of metrics that must be watched to ensure a healthy state – CPU, server RAM, Node process RAM (less than 1.4GB), the number of errors in the last minute, number of process restarts, average response time. Then go over some advanced features you might fancy and add to your wish list. Some examples of luxury monitoring features: DB profiling, cross-service measuring (i.e. measuring a business transaction), frontend integration, exposing raw data to custom BI clients, Slack notifications and many others.
7 |
8 | Achieving the advanced features demands lengthy setup or buying a commercial product such as Datadog, newrelic and alike. Unfortunately, achieving even the basics is not a walk in the park as some metrics are hardware-related (CPU) and others live within the node process (internal errors) thus all the straightforward tools require some additional setup. For example, cloud vendor monitoring solutions (e.g. AWS CloudWatch, Google StackDriver) will tell you immediately about the hardware metric but nothing about the internal app behavior. On the other end, Log-based solutions such as ElasticSearch lack by default the hardware view. The solution is to augment your choice with missing metrics, for example, a popular choice is sending application logs to Elastic stack and configure some additional agent (e.g. Beat) to share hardware-related information to get the full picture.
9 |
10 |
11 | ### Blog Quote: "We recommend you to watch these signals"
13 |
14 | > … We recommend you to watch these signals for all of your services:
15 | > Error Rate: Because errors are user facing and immediately affect your customers.
16 | > Response time: Because the latency directly affects your customers and business.
17 | > Throughput: The traffic helps you to understand the context of increased error rates and the latency too.
18 | > Saturation: It tells how “full” your service is. If the CPU usage is 90%, can your system handle more traffic?
19 | > …
19 |
--------------------------------------------------------------------------------
/sections/production/bestateless.md:
--------------------------------------------------------------------------------
1 | # Be stateless, kill your Servers almost every day
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | Have you ever encountered a severe production issue where one server was missing some piece of configuration or data? That is probably due to an unnecessary dependency on some local asset that is not part of the deployment. Many successful products treat servers like a phoenix bird – one dies and is reborn periodically without any damage. In other words, a server is just a piece of hardware that executes your code for some time and is replaced after that.
9 | This approach
10 |
11 | - allows scaling by adding and removing servers dynamically without any side effects
12 | - simplifies maintenance, as it frees our mind from evaluating each server's state.
13 |
14 |
15 |
16 |
17 | ### Code example: anti-patterns
18 |
19 | ```javascript
20 | // Typical mistake 1: saving uploaded files locally on a server
21 | const multer = require('multer'); // express middleware for handling multipart uploads
22 | const upload = multer({ dest: 'uploads/' });
23 |
24 | app.post('/photos/upload', upload.array('photos', 12), (req, res, next) => {});
25 |
26 | // Typical mistake 2: storing authentication sessions (passport) in a local file or memory
27 | const FileStore = require('session-file-store')(session);
28 | app.use(session({
29 |   store: new FileStore(options),
30 |   secret: 'keyboard cat'
31 | }));
32 |
33 | // Typical mistake 3: storing information on the global object
34 | global.someCacheLike.result = { somedata };
35 | ```
36 |
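
To make the failure mode concrete, here is a dependency-free sketch simulating two instances of the same app behind a load balancer. The shared Map stands in for an external store such as Redis or a database; all names are illustrative:

```javascript
// Stand-in for an external store (Redis, a DB, S3) – NOT process memory
const sharedStore = new Map();

function createAppInstance() {
  const localCache = new Map(); // anti-pattern: lives and dies with this one process
  return {
    loginLocal(user) { localCache.set(user, { loggedIn: true }); },
    isLoggedInLocal(user) { return localCache.has(user); },
    loginShared(user) { sharedStore.set(user, { loggedIn: true }); },
    isLoggedInShared(user) { return sharedStore.has(user); }
  };
}

// Two "servers" running the same code
const instanceA = createAppInstance();
const instanceB = createAppInstance();

instanceA.loginLocal('john');
console.log(instanceB.isLoggedInLocal('john'));  // false – the next request hit another server

instanceA.loginShared('john');
console.log(instanceB.isLoggedInShared('john')); // true – state survives routing and restarts
```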
37 |
38 |
39 | ### What Other Bloggers Say
40 | From the blog [Martin Fowler](https://martinfowler.com/bliki/PhoenixServer.html):
41 | > ...One day I had this fantasy of starting a certification service for operations. The certification assessment would consist of a colleague and I turning up at the corporate data center and setting about critical production servers with a baseball bat, a chainsaw, and a water pistol. The assessment would be based on how long it would take for the operations team to get all the applications up and running again. This may be a daft fantasy, but there’s a nugget of wisdom here. While you should forego the baseball bats, it is a good idea to virtually burn down your servers at regular intervals. A server should be like a phoenix, regularly rising from the ashes...
42 |
43 |
44 |
--------------------------------------------------------------------------------
/sections/production/measurememory.md:
--------------------------------------------------------------------------------
1 | # Measure and guard the memory usage
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | In a perfect world, a web developer shouldn’t have to deal with memory leaks. In reality, memory issues are a known Node.js gotcha one must be aware of. Above all, memory usage must be monitored constantly. In development and small production sites you may gauge it manually using Linux commands or npm tools and libraries like node-inspector and memwatch. The main drawback of these manual activities is that they require a human being to actively monitor – for serious production sites it’s absolutely vital to use robust monitoring tools (e.g. AWS CloudWatch, DataDog or any similar proactive system) that alert when a leak happens. There are also a few development guidelines to prevent leaks: avoid storing data at the global level, use streams for data with dynamic size, and limit variable scope using let and const.
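
For the manual gauging mentioned above, the built-in `process.memoryUsage()` is the starting point. A naive in-process check might look like the sketch below – the 400MB cap is illustrative and should match your `--max_old_space_size` flag; serious production sites should rely on external monitoring instead:

```javascript
const MAX_HEAP_BYTES = 400 * 1024 * 1024; // illustrative cap

// Returns the fraction of the cap currently used and warns when close to it
function checkMemory() {
  const { heapUsed } = process.memoryUsage();
  const ratio = heapUsed / MAX_HEAP_BYTES;
  if (ratio > 0.8) {
    console.warn(`Heap usage is high: ${(ratio * 100).toFixed(1)}% of the cap`);
  }
  return ratio;
}

// e.g. setInterval(checkMemory, 30 * 1000); in a worker or an ops endpoint
console.log(`Current heap usage ratio: ${checkMemory().toFixed(3)}`);
```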
9 |
10 |
11 |
12 | ### What Other Bloggers Say
13 |
14 | * From the blog [Dynatrace](http://apmblog.dynatrace.com/):
15 | > ... ”As we already learned, in Node.js JavaScript is compiled to native code by V8. The resulting native data structures don’t have much to do with their original representation and are solely managed by V8. This means that we cannot actively allocate or deallocate memory in JavaScript. V8 uses a well-known mechanism called garbage collection to address this problem.”
16 |
17 | * From the blog [ARG! Team](http://blog.argteam.com/coding/hardening-node-js-for-production-part-2-using-nginx-to-avoid-node-js-load):
18 | > ... “Although this example leads to obvious results the process is always the same:
19 | Create heap dumps with some time and a fair amount of memory allocation in between
20 | Compare a few dumps to find out what’s growing”
21 |
22 | * From the blog [ARG! Team](http://blog.argteam.com/coding/hardening-node-js-for-production-part-2-using-nginx-to-avoid-node-js-load):
23 | > ... “By default, Node.js will try to use about 1.5GB of memory, which has to be capped when running on systems with less memory. This is the expected behaviour as garbage collection is a very costly operation.
24 | The solution for it was adding an extra parameter to the Node.js process:
25 | node --max_old_space_size=400 server.js --production ”
26 | “Why is garbage collection expensive? The V8 JavaScript engine employs a stop-the-world garbage collector mechanism. In practice, it means that the program stops execution while garbage collection is in progress.”
--------------------------------------------------------------------------------
/sections/errorhandling/usematurelogger.md:
--------------------------------------------------------------------------------
1 | # Use a mature logger to increase errors visibility
2 |
3 | ### One Paragraph Explainer
4 |
5 | We all love console.log, but obviously a reputable and persistent logger like [Winston][winston], [Bunyan][bunyan] (highly popular) or [Pino][pino] (the new kid in town, focused on performance) is mandatory for serious projects. A set of practices and tools will help to reason about errors much more quickly: (1) log frequently using different levels (debug, info, error); (2) when logging, provide contextual information as JSON objects, see example below; (3) watch and filter logs using a log querying API (built into most loggers) or log viewer software;
6 | (4) expose and curate log statements for the operations team using operational intelligence tools like Splunk
7 |
8 | [winston]: https://www.npmjs.com/package/winston
9 | [bunyan]: https://www.npmjs.com/package/bunyan
10 | [pino]: https://www.npmjs.com/package/pino
11 |
12 | ### Code Example – Winston Logger in action
13 |
14 | ```javascript
15 | // your centralized logger object
16 | const logger = new winston.Logger({
17 | level: 'info',
18 | transports: [
19 | new (winston.transports.Console)(),
20 | new (winston.transports.File)({ filename: 'somefile.log' })
21 | ]
22 | });
23 |
24 | // custom code somewhere using the logger
25 | logger.log('info', 'Test Log Message with some parameter %s', 'some parameter', { anything: 'This is metadata' });
26 |
27 | ```
28 |
29 | ### Code Example – Querying the log folder (searching for entries)
30 |
31 | ```javascript
32 | const options = {
33 | from: new Date - 24 * 60 * 60 * 1000,
34 | until: new Date,
35 | limit: 10,
36 | start: 0,
37 | order: 'desc',
38 | fields: ['message']
39 | };
40 |
41 |
42 | // Find items logged between today and yesterday.
43 | winston.query(options, function (err, results) {
44 | // execute callback with results
45 | });
46 |
47 | ```
48 |
49 | ### Blog Quote: "Logger Requirements"
50 | From the blog Strong Loop
51 |
52 | > Let's identify a few requirements (for a logger):
53 | > 1. Timestamp each log line. This one is pretty self-explanatory – you should be able to tell when each log entry occurred.
54 | > 2. Logging format should be easily digestible by humans as well as machines.
55 | > 3. Allows for multiple configurable destination streams. For example, you might be writing trace logs to one file but when an error is encountered, write to the same file, then into error file and send an email at the same time…
56 |
--------------------------------------------------------------------------------
/sections/errorhandling/failfast.md:
--------------------------------------------------------------------------------
1 | # Fail fast, validate arguments using a dedicated library
2 |
3 |
4 | ### One Paragraph Explainer
5 |
6 | We all know how important it is to check arguments and fail fast in order to avoid hidden bugs (see the anti-pattern code example below). If not, read about explicit programming and defensive programming. In reality, we tend to avoid it due to the annoyance of coding it (e.g. think of validating a hierarchical JSON object with fields like email and dates) – libraries like Joi and Validator turn this tedious task into a breeze.
7 |
8 | ### Wikipedia: Defensive Programming
9 |
10 | Defensive programming is an approach to improve software and source code in terms of: general quality – reducing the number of software bugs and problems; making the source code comprehensible – the source code should be readable and understandable so it is approved in a code audit; and making the software behave in a predictable manner despite unexpected inputs or user actions.
11 |
12 |
13 |
14 | ### Code example: validating complex JSON input using ‘Joi’
15 |
16 | ```javascript
17 | var memberSchema = Joi.object().keys({
18 | password: Joi.string().regex(/^[a-zA-Z0-9]{3,30}$/),
19 | birthyear: Joi.number().integer().min(1900).max(2013),
20 | email: Joi.string().email()
21 | });
22 |
23 | function addNewMember(newMember)
24 | {
25 | // assertions come first
26 | Joi.assert(newMember, memberSchema); //throws if validation fails
27 | // other logic here
28 | }
29 |
30 | ```
31 |
32 | ### Anti-pattern: no validation yields nasty bugs
33 |
34 | ```javascript
35 | // if the discount is positive let's redirect the user to print their discount coupons
36 | function redirectToPrintDiscount(httpResponse, member, discount)
37 | {
38 | if(discount != 0)
39 | httpResponse.redirect(`/discountPrintView/${member.id}`);
40 | }
41 |
42 | redirectToPrintDiscount(httpResponse, someMember);
43 | // forgot to pass the parameter discount, why the heck was the user redirected to the discount screen?
44 |
45 | ```
46 |
47 | ### Blog Quote: "You should throw these errors immediately"
48 | From the blog: Joyent
49 |
50 | > A degenerate case is where someone calls an asynchronous function but doesn’t pass a callback. You should throw these errors immediately, since the program is broken and the best chance of debugging it involves getting at least a stack trace and ideally a core file at the point of the error. To do this, we recommend validating the types of all arguments at the start of the function.
--------------------------------------------------------------------------------
/sections/production/utilizecpu.md:
--------------------------------------------------------------------------------
1 | # Utilize all CPU cores
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | It might not come as a surprise that, in its basic form, Node runs over a single thread = single process = single CPU. Paying for beefy hardware with 4 or 8 CPUs and utilizing only one sounds crazy, right? The quickest solution, which fits medium-sized apps, is using Node’s cluster module, which in 10 lines of code spawns a process for each logical core and routes requests between the processes in a round-robin style. Even better, use PM2, which sugarcoats the clustering module with a simple interface and a cool monitoring UI. While this solution works well for traditional applications, it might fall short for applications that require top-notch performance and a robust DevOps flow. For those advanced use cases, consider replicating the Node process using a custom deployment script and balancing using a specialized tool such as nginx, or use a container engine such as AWS ECS or Kubernetes that has advanced features for deployment and replication of processes.
9 |
10 |
11 |
12 |
13 | ### Comparison: Balancing using Node’s cluster vs nginx
14 |
15 | 
16 |
17 |
18 |
19 | ### What Other Bloggers Say
20 | * From the [Node.JS documentation](https://nodejs.org/api/cluster.html#cluster_how_it_works):
21 | > ... The second approach, Node clusters, should, in theory, give the best performance. In practice however, distribution tends to be very unbalanced due to operating system scheduler vagaries. Loads have been observed where over 70% of all connections ended up in just two processes, out of a total of eight ...
22 |
23 | * From the blog StrongLoop:
24 | > ... Clustering is made possible with Node’s cluster module. This enables a master process to spawn worker processes and distribute incoming connections among the workers. However, rather than using this module directly, it’s far better to use one of the many tools out there that does it for you automatically; for example node-pm or cluster-service ...
25 |
26 | * From the Medium post [Node.js process load balance performance: comparing cluster module, iptables and Nginx](https://medium.com/@fermads/node-js-process-load-balancing-comparing-cluster-iptables-and-nginx-6746aaf38272)
27 | > ... Node cluster is simple to implement and configure, things are kept inside Node’s realm without depending on other software. Just remember your master process will work almost as much as your worker processes and with a little less request rate then the other solutions ...
--------------------------------------------------------------------------------
/sections/projectstructre/thincomponents.md:
--------------------------------------------------------------------------------
1 | # Structure your solution by components
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | For medium-sized apps and above, monoliths are really bad – one big piece of software with many dependencies is just hard to reason about and often leads to spaghetti code. Even smart architects who are skilled enough to tame the beast and 'modularize' it spend great mental effort on design, and each change requires carefully evaluating the impact on other dependent objects. The ultimate solution is to develop small software: divide the whole stack into self-contained components that don't share files with others, each constituting very few files (e.g. API, service, data access, test, etc.) so that it's very easy to reason about. Some may call this 'microservices' architecture – it's important to understand that microservices are not a spec which you must follow, but rather a set of principles. You may adopt many principles into a full-blown microservices architecture or adopt only a few. Both are good as long as you keep the software complexity low. The very least you should do is create basic borders between components: assign a folder in your project root for each business component and make it self-contained – other components are allowed to consume its functionality only through its public interface or API. This is the foundation for keeping your components simple, avoiding dependency hell and paving the way to full-blown microservices in the future once your app grows
9 |
10 |
11 |
12 |
13 | ### Blog Quote: "Scaling requires scaling of the entire application"
14 | From the blog MartinFowler.com
15 |
16 | > Monolithic applications can be successful, but increasingly people are feeling frustrations with them - especially as more applications are being deployed to the cloud . Change cycles are tied together - a change made to a small part of the application, requires the entire monolith to be rebuilt and deployed. Over time it's often hard to keep a good modular structure, making it harder to keep changes that ought to only affect one module within that module. Scaling requires scaling of the entire application rather than parts of it that require greater resource.
17 |
18 |
19 |
20 | ### Good: Structure your solution by self-contained components
21 | 
22 |
23 |
24 |
25 | ### Bad: Group your files by technical role
26 | 
27 |
--------------------------------------------------------------------------------
/sections/production/assigntransactionid.md:
--------------------------------------------------------------------------------
1 | # Assign ‘TransactionId’ to each log statement
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | A typical log is a warehouse of entries from all components and requests. Upon detection of some suspicious line or error, it becomes hairy to match other lines that belong to the same specific flow (e.g. the user “John” tried to buy something). This becomes even more critical and challenging in a microservice environment where a request/transaction might span across multiple computers. Address this by assigning a unique transaction identifier value to all the entries from the same request, so that when detecting one line one can copy the id and search for every line with the same transaction id. However, achieving this in Node is not straightforward, as a single thread is used to serve all requests – consider using a library that can group data on the request level – see the code example below. When calling other microservices, pass the transaction id using an HTTP header like “x-transaction-id” to keep the same context.
9 |
10 |
11 |
12 |
13 | ### Code example: typical Express configuration
14 |
15 | ```javascript
16 | // when receiving a new request, start a new isolated context and set a transaction Id. The following example uses the npm library continuation-local-storage to isolate requests
17 |
18 | const { createNamespace } = require('continuation-local-storage');
19 | const session = createNamespace('my session');
20 |
21 | router.get('/:id', (req, res, next) => {
22 |   session.set('transactionId', 'some unique GUID');
23 |   someService.getById(req.params.id);
24 |   logger.info('Starting now to get something by Id');
25 | });
26 |
27 | // Now any other service or component can access the contextual, per-request, data
28 | class someService {
29 |   getById(id) {
30 |     logger.info('Starting to get something by Id');
31 |     // other logic comes here
32 |   }
33 | }
34 |
35 | // The logger can now append the transaction-id to each entry so that entries from the same request will have the same value
36 | class logger {
37 |   info(message) {
38 |     console.log(`${message} ${session.get('transactionId')}`);
39 |   }
40 | }
41 | ```
41 |
42 |
43 |
47 |
--------------------------------------------------------------------------------
/sections/projectstructre/breakintcomponents.chinese.md:
--------------------------------------------------------------------------------
1 | # Structure your solution by components
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | For medium-sized apps and above, monoliths are really bad – one big piece of software with many dependencies is just hard to reason about and often leads to spaghetti code. Even smart architects who are skilled enough to tame the beast and 'modularize' it spend great mental effort on design, and each change requires carefully evaluating the impact on other dependent objects. The ultimate solution is to develop small software: divide the whole stack into self-contained components that don't share files with others, each constituting very few files (e.g. API, service, data access, test, etc.) so that it's very easy to reason about. Some may call this 'microservices' architecture – it's important to understand that microservices are not a spec which you must follow, but rather a set of principles. You may adopt many principles into a full-blown microservices architecture or adopt only a few. Both are good as long as you keep the software complexity low. The very least you should do is create basic borders between components: assign a folder in your project root for each business component and make it self-contained – other components are allowed to consume its functionality only through its public interface or API. This is the foundation for keeping your components simple, avoiding dependency hell and paving the way to full-blown microservices in the future once your app grows.
9 |
10 |
11 |
12 |
13 | ### Blog Quote: "Scaling requires scaling of the entire application"
14 | From the blog MartinFowler.com
15 |
16 | > Monolithic applications can be successful, but increasingly people are feeling frustrations with them - especially as more applications are being deployed to the cloud . Change cycles are tied together - a change made to a small part of the application, requires the entire monolith to be rebuilt and deployed. Over time it's often hard to keep a good modular structure, making it harder to keep changes that ought to only affect one module within that module. Scaling requires scaling of the entire application rather than parts of it that require greater resource.
17 |
18 |
19 |
20 | ### Good: Structure your solution by self-contained components
21 | 
22 |
23 |
24 |
25 | ### Bad: Group your files by technical role
26 | 
27 |
--------------------------------------------------------------------------------
/sections/production/lockdependencies.md:
--------------------------------------------------------------------------------
1 | # Lock dependencies
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 |
9 |
10 | Your code depends on many external packages; let’s say it ‘requires’ and uses momentjs-2.1.4. Then, by default, when you deploy to production, npm might fetch momentjs 2.1.5, which unfortunately brings some new bugs to the table. Using npm config files and the argument ```--save-exact=true``` instructs npm to refer to the *exact* same version that was installed, so the next time you run ```npm install``` (in production or within a Docker container you plan to ship forward for testing) the same dependent version will be fetched. An alternative and popular approach is using a .shrinkwrap file (easily generated using npm) that states exactly which packages and versions should be installed, so no environment can get tempted to fetch newer versions than expected.
11 |
12 | * **Update:** as of NPM 5, dependencies are locked automatically using a package-lock.json file. Yarn, an emerging package manager, also locks down dependencies by default
13 |
14 |
15 |
16 |
17 |
18 | ### Code example: .npmrc file that instructs NPM to use exact versions
19 |
20 | ```
21 | # save this as a .npmrc file in the project directory
22 | save-exact=true
23 | ```
24 |
25 |
26 |
27 | ### Code example: npm-shrinkwrap.json file that distills the exact dependency tree
28 |
29 | ```javascript
30 | {
31 |   "name": "A",
32 |   "dependencies": {
33 |     "B": {
34 |       "version": "0.0.1",
35 |       "dependencies": {
36 |         "C": {
37 |           "version": "0.1.0"
38 |         }
39 |       }
40 |     }
41 |   }
42 | }
43 | ```
44 |
45 |
46 |
47 | ### Code example: NPM 5 dependencies lock file – package-lock.json
48 |
49 | ```javascript
50 | {
51 |   "name": "package-name",
52 |   "version": "1.0.0",
53 |   "lockfileVersion": 1,
54 |   "dependencies": {
55 |     "cacache": {
56 |       "version": "9.2.6",
57 |       "resolved": "https://registry.npmjs.org/cacache/-/cacache-9.2.6.tgz",
58 |       "integrity": "sha512-YK0Z5Np5t755edPL6gfdCeGxtU0rcW/DBhYhYVDckT+7AFkCCtedf2zru5NRbBLFk6e7Agi/RaqTOAfiaipUfg=="
59 |     },
60 |     "duplexify": {
61 |       "version": "3.5.0",
62 |       "resolved": "https://registry.npmjs.org/duplexify/-/duplexify-3.5.0.tgz",
63 |       "integrity": "sha1-GqdzAC4VeEV+nZ1KULDMquvL1gQ=",
64 |       "dependencies": {
65 |         "end-of-stream": {
66 |           "version": "1.0.0",
67 |           "resolved": "https://registry.npmjs.org/end-of-stream/-/end-of-stream-1.0.0.tgz",
68 |           "integrity": "sha1-1FlucCc0qT5A6a+GQxnqvZn/Lw4="
69 |         }
70 |       }
71 |     }
72 |   }
73 | }
74 | ```
75 |
--------------------------------------------------------------------------------
/sections/production/guardprocess.md:
--------------------------------------------------------------------------------
1 | # Guard and restart your process upon failure (using the right tool)
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | At the base level, Node processes must be guarded and restarted upon failure. Simply put, for small apps and those who don’t use containers, tools like [PM2](https://www.npmjs.com/package/pm2-docker) are perfect as they bring simplicity, restarting capabilities and also rich integration with Node. Others with strong Linux skills might use systemd and run Node as a service. Things get more interesting for apps that use Docker or any container technology, since those are usually accompanied by cluster management and orchestration tools (e.g. [AWS ECS](http://docs.aws.amazon.com/AmazonECS/latest/developerguide/Welcome.html), [Kubernetes](https://kubernetes.io/), etc.) that deploy, monitor and heal containers. With all those rich cluster management features, including container restart, why mess with other tools like PM2? There’s no bulletproof answer. There are good reasons to keep PM2 within containers (mostly its container-specific version, [pm2-docker](https://www.npmjs.com/package/pm2-docker)) as the first guarding tier – it’s much faster to restart a process, and it provides Node-specific features like flagging to the code when the hosting container asks to gracefully restart. Others might choose to avoid the unnecessary layer. To conclude this write-up, no solution suits them all and getting to know the options is the important thing
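For the simple PM2 route, the process can be described in a PM2 process file; a minimal sketch is shown below (the app name and script path are illustrative assumptions). Running `pm2 start ecosystem.config.js` would then guard the process and restart it upon failure.

```javascript
// Sketch of a PM2 process file (ecosystem.config.js) - names are illustrative
const processConfig = {
  apps: [{
    name: 'my-api',              // hypothetical app name
    script: './server.js',       // hypothetical entry point
    instances: 'max',            // one process per CPU core
    exec_mode: 'cluster',        // run the instances in cluster mode
    autorestart: true,           // restart the process when it crashes
    max_memory_restart: '1G'     // also restart if memory crosses the threshold
  }]
};

module.exports = processConfig;
```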
9 |
10 |
11 |
12 |
13 | ### What Other Bloggers Say
14 |
15 | * From the [Express Production Best Practices](https://expressjs.com/en/advanced/best-practice-performance.html):
16 | > ... In development, you started your app simply from the command line with node server.js or something similar. **But doing this in production is a recipe for disaster. If the app crashes, it will be offline** until you restart it. To ensure your app restarts if it crashes, use a process manager. A process manager is a “container” for applications that facilitates deployment, provides high availability, and enables you to manage the application at runtime.
17 |
18 | * From the Medium blog post [Understanding Node Clustering](https://medium.com/@CodeAndBiscuits/understanding-nodejs-clustering-in-docker-land-64ce2306afef#.cssigr5z3):
19 | > ... Understanding NodeJS Clustering in Docker-Land “Docker containers are streamlined, lightweight virtual environments, designed to simplify processes to their bare minimum. Processes that manage and coordinate their own resources are no longer as valuable. **Instead, management stacks like Kubernetes, Mesos, and Cattle have popularized the concept that these resources should be managed infrastructure-wide**. CPU and memory resources are allocated by “schedulers”, and network resources are managed by stack-provided load balancers.
20 |
--------------------------------------------------------------------------------
/sections/errorhandling/apmproducts.md:
--------------------------------------------------------------------------------
1 | # Discover errors and downtime using APM products
2 |
3 |
4 | ### One Paragraph Explainer
5 |
6 | Exception != Error. Traditional error handling assumes the existence of exceptions, but application errors might come in the form of slow code paths, API downtime, lack of computational resources and more. This is where APM products come in handy, as they allow detecting a wide variety of ‘buried’ issues proactively with minimal setup. Common features of APM products include alerting when the HTTP API returns errors, detecting when the API response time exceeds some threshold, detecting ‘code smells’, monitoring server resources, an operational intelligence dashboard with IT metrics and many other useful features. Most vendors offer a free plan.
7 |
8 | ### Wikipedia about APM
9 |
10 | In the fields of information technology and systems management, Application Performance Management (APM) is the monitoring and management of performance and availability of software applications. APM strives to detect and diagnose complex application performance problems to maintain an expected level of service. APM is “the translation of IT metrics into business meaning ([i.e.] value)”.
12 |
13 | ### Understanding the APM marketplace
14 |
15 | APM products comprise 3 major segments:
16 |
17 | 1. Website or API monitoring – external services that constantly monitor uptime and performance via HTTP requests. Can be set up in a few minutes. A few selected contenders: [Pingdom](https://www.pingdom.com/), [Uptime Robot](https://uptimerobot.com/), and [New Relic](https://newrelic.com/application-monitoring)
18 |
19 | 2. Code instrumentation – a product family which requires embedding an agent within the application to use features like slow code detection, exception statistics, performance monitoring and many more. A few selected contenders: New Relic, App Dynamics
20 |
21 | 3. Operational intelligence dashboard – this line of products is focused on facilitating the ops team with metrics and curated content that helps to easily stay on top of application performance. This usually involves aggregating multiple sources of information (application logs, DB logs, server logs, etc.) and upfront dashboard design work. A few selected contenders: [Datadog](https://www.datadoghq.com/), [Splunk](https://www.splunk.com/), [Zabbix](https://www.zabbix.com/)
22 |
23 |
24 |
25 | ### Example: UpTimeRobot.Com – Website monitoring dashboard
26 | 
27 |
28 | ### Example: AppDynamics.Com – end to end monitoring combined with code instrumentation
29 | 
30 |
--------------------------------------------------------------------------------
/sections/errorhandling/catchunhandledpromiserejection.md:
--------------------------------------------------------------------------------
1 | # Catch unhandled promise rejections
2 |
3 |
4 |
5 | ### One Paragraph Explainer
6 |
7 | Typically, most modern Node.js/Express application code runs within promises – whether within the .then handler, a function callback or in a catch block. Surprisingly, unless a developer remembers to add a .catch clause, errors thrown in these places are not handled by the uncaughtException event handler and simply disappear. Recent versions of Node added a warning message when an unhandled rejection pops up; this might help to notice when things go wrong, but it's obviously not a proper error handling method. The straightforward solution is to never forget adding .catch clauses within each promise chain call and to redirect to a centralized error handler. However, building your error handling strategy only on developers' discipline is somewhat fragile. Consequently, it's highly recommended to use a graceful fallback and subscribe to `process.on('unhandledRejection', callback)` – this will ensure that any promise error, if not handled locally, will get its treatment.
8 |
9 |
10 |
11 | ### Code example: these errors will not get caught by any error handler (except unhandledRejection)
12 |
13 | ```javascript
14 | DAL.getUserById(1).then((johnSnow) => {
15 |   // this error will just vanish
16 |   if (johnSnow.isAlive == false)
17 |     throw new Error('ahhhh');
18 | });
19 |
20 | ```
21 |
22 | ### Code example: Catching unresolved and rejected promises
23 |
24 | ```javascript
25 | process.on('unhandledRejection', (reason, p) => {
26 |   // I just caught an unhandled promise rejection; since we already have a fallback handler for unhandled errors (see below), let's throw and let it handle that
27 |   throw reason;
28 | });
29 | process.on('uncaughtException', (error) => {
30 |   // I just received an error that was never handled, time to handle it and then decide whether a restart is needed
31 |   errorManagement.handler.handleError(error);
32 |   if (!errorManagement.handler.isTrustedError(error))
33 |     process.exit(1);
34 | });
35 |
36 | ```
37 |
38 | ### Blog Quote: "If you can make a mistake, at some point you will"
39 | From the blog James Nelson
40 |
41 | > Let’s test your understanding. Which of the following would you expect to print an error to the console?
42 |
43 | ```javascript
44 | Promise.resolve('promised value').then(() => {
45 |   throw new Error('error');
46 | });
47 |
48 | Promise.reject('error value').catch(() => {
49 |   throw new Error('error');
50 | });
51 |
52 | new Promise((resolve, reject) => {
53 |   throw new Error('error');
54 | });
55 | ```
56 |
57 | > I don’t know about you, but my answer is that I’d expect all of them to print an error. However, the reality is that a number of modern JavaScript environments won’t print errors for any of them. The problem with being human is that if you can make a mistake, at some point you will. Keeping this in mind, it seems obvious that we should design things in such a way that mistakes hurt as little as possible, and that means handling errors by default, not discarding them.
58 |
--------------------------------------------------------------------------------
/sections/errorhandling/shuttingtheprocess.md:
--------------------------------------------------------------------------------
1 | # Exit the process gracefully when a stranger comes to town
2 |
3 |
4 | ### One Paragraph Explainer
5 |
6 | Somewhere within your code, an error handler object is responsible for deciding how to proceed when an error is thrown – if the error is trusted (i.e. an operational error, see further explanation within best practice #3) then writing to a log file might be enough. Things get hairy if the error is not familiar – this means that some component might be in a faulty state and all future requests are subject to failure. For example, consider a singleton, stateful token issuer service that threw an exception and lost its state – from now on it might behave unexpectedly and cause all requests to fail. Under this scenario, kill the process and use a ‘restarter’ tool (like Forever, PM2, etc.) to start over with a clean slate.
7 |
8 |
9 |
10 | ### Code example: deciding whether to crash
11 |
12 | ```javascript
13 | // Assuming developers mark known operational errors with error.isOperational=true, read best practice #3
14 | process.on('uncaughtException', function(error) {
15 |   errorManagement.handler.handleError(error);
16 |   if (!errorManagement.handler.isTrustedError(error))
17 |     process.exit(1);
18 | });
19 |
20 |
21 | // centralized error handler encapsulates error-handling related logic
22 | function errorHandler() {
23 |   this.handleError = function(error) {
24 |     return logger.logError(error).then(sendMailToAdminIfCritical).then(saveInOpsQueueIfCritical).then(determineIfOperationalError);
25 |   };
26 |
27 |   this.isTrustedError = function(error) {
28 |     return error.isOperational;
29 |   };
30 | }
31 | ```
32 |
33 |
34 | ### Blog Quote: "The best way is to crash"
35 | From the blog Joyent
36 |
37 | > …The best way to recover from programmer errors is to crash immediately. You should run your programs using a restarter that will automatically restart the program in the event of a crash. With a restarter in place, crashing is the fastest way to restore reliable service in the face of a transient programmer error…
38 |
39 |
40 | ### Blog Quote: "There are three schools of thoughts on error handling"
41 | From the blog: JS Recipes
42 |
43 | > …There are primarily three schools of thoughts on error handling:
44 | > 1. Let the application crash and restart it.
45 | > 2. Handle all possible errors and never crash.
46 | > 3. Balanced approach between the two
47 |
48 |
49 | ### Blog Quote: "No safe way to leave without creating some undefined brittle state"
50 | From Node.JS official documentation
51 |
52 | > …By the very nature of how throw works in JavaScript, there is almost never any way to safely “pick up where you left off”, without leaking references, or creating some other sort of undefined brittle state. The safest way to respond to a thrown error is to shut down the process. Of course, in a normal web server, you might have many connections open, and it is not reasonable to abruptly shut those down because an error was triggered by someone else. The better approach is to send an error response to the request that triggered the error, while letting the others finish in their normal time, and stop listening for new requests in that worker.
--------------------------------------------------------------------------------
/sections/production/frontendout.md:
--------------------------------------------------------------------------------
1 | # Get your frontend assets out of Node
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | In a classic web app the backend serves the frontend/graphics to the browser, and a very common approach in the Node world is to use Express static middleware for streaming static files to the client. BUT – Node is not a typical web app, as it utilizes a single thread that is not optimized to serve many files at once. Instead, consider using a reverse proxy (e.g. nginx, HAProxy), cloud storage or a CDN (e.g. AWS S3, Azure Blob Storage, etc.) that utilizes many optimizations for this task and gains much better throughput. For example, specialized software like nginx embodies direct hooks between the file system and the network card and uses a multi-threaded approach to minimize intervention among multiple requests.
9 |
10 | Your optimal solution might take one of the following forms:
11 |
12 | 1. Using a reverse proxy – your static files will be located right next to your Node application; only requests to the static files folder will be served by a proxy that sits in front of your Node app, such as nginx. With this approach, your Node app is responsible for deploying the static files but not for serving them. Your frontend colleagues will love this approach as it prevents cross-origin requests from the frontend.
13 |
14 | 2. Cloud storage – your static files will NOT be part of your Node app content; they will be uploaded to services like AWS S3, Azure Blob Storage, or other similar services that were born for this mission. With this approach, your Node app is responsible neither for deploying the static files nor for serving them, hence a complete decoupling is drawn between Node and the frontend, which is anyway often handled by a different team.
15 |
16 |
17 |
18 |
19 | ### Configuration example: typical nginx configuration for serving static files
20 |
21 | ```
22 | # configure gzip compression
23 | gzip on;
24 | keepalive 64;
25 |
26 | # defining web server
27 | server {
28 |   listen 80;
29 |   listen 443 ssl;
30 |
31 |   # handle static content
32 |   location ~ ^/(images/|img/|javascript/|js/|css/|stylesheets/|flash/|media/|static/|robots.txt|humans.txt|favicon.ico) {
33 |     root /usr/local/silly_face_society/node/public;
34 |     access_log off;
35 |     expires max;
36 |   }
37 | }
37 | ```
38 |
39 |
40 |
41 | ### What Other Bloggers Say
42 | From the blog [StrongLoop](https://strongloop.com/strongblog/best-practices-for-express-in-production-part-two-performance-and-reliability/):
43 |
44 | >…In development, you can use [res.sendFile()](http://expressjs.com/4x/api.html#res.sendFile) to serve static files. But don’t do this in production, because this function has to read from the file system for every file request, so it will encounter significant latency and affect the overall performance of the app. Note that res.sendFile() is not implemented with the sendfile system call, which would make it far more efficient. Instead, use serve-static middleware (or something equivalent), that is optimized for serving files for Express apps. An even better option is to use a reverse proxy to serve static files; see Use a reverse proxy for more information…
45 |
46 |
47 |
--------------------------------------------------------------------------------
/sections/production/monitoring.md:
--------------------------------------------------------------------------------
1 | # Monitoring!
2 |
3 |
4 |
5 | ### One Paragraph Explainer
6 |
7 | At the most basic level, monitoring means you can *easily* identify when bad things happen in production, for example, by getting notified by email or Slack. The challenge is to choose the right set of tools that will satisfy your requirements without breaking the bank. Start by defining the core set of metrics that must be watched to ensure a healthy state – CPU, server RAM, Node process RAM (less than 1.4GB), the number of errors in the last minute, number of process restarts, average response time. Then go over some advanced features you might fancy and add them to your wish list. Some examples of luxury monitoring features: DB profiling, cross-service measuring (i.e. measuring a business transaction), frontend integration, exposing raw data to custom BI clients, Slack notifications and many others.
8 |
9 | Achieving the advanced features demands a lengthy setup or buying a commercial product such as Datadog, New Relic and alike. Unfortunately, achieving even the basics is not a walk in the park, as some metrics are hardware-related (CPU) and others live within the Node process (internal errors), thus the straightforward tools require some additional setup. For example, cloud vendor monitoring solutions (e.g. [AWS CloudWatch](https://aws.amazon.com/cloudwatch/), [Google StackDriver](https://cloud.google.com/stackdriver/)) will tell you immediately about the hardware metrics but not about the internal app behavior. On the other hand, log-based solutions such as ElasticSearch lack the hardware view by default. The solution is to augment your choice with the missing metrics; for example, a popular choice is sending application logs to the [Elastic stack](https://www.elastic.co/products) and configuring an additional agent (e.g. [Beat](https://www.elastic.co/products)) to share hardware-related information to get the full picture.
10 |
11 |
12 |
13 |
14 |
15 | ### Monitoring example: AWS cloudwatch default dashboard. Hard to extract in-app metrics
16 |
17 | 
18 |
19 |
20 |
21 | ### Monitoring example: StackDriver default dashboard. Hard to extract in-app metrics
22 |
23 | 
24 |
25 |
26 |
27 | ### Monitoring example: Grafana as the UI layer that visualizes raw data
28 |
29 | 
30 |
31 |
32 | ### What Other Bloggers Say
33 | From the blog [Rising Stack](http://mubaloo.com/best-practices-deploying-node-js-applications/):
34 |
35 | > …We recommend you to watch these signals for all of your services:
36 | > Error Rate: Because errors are user facing and immediately affect your customers.
37 | > Response time: Because the latency directly affects your customers and business.
38 | > Throughput: The traffic helps you to understand the context of increased error rates and the latency too.
39 | > Saturation: It tells how “full” your service is. If the CPU usage is 90%, can your system handle more traffic? …
40 |
--------------------------------------------------------------------------------
/sections/production/delegatetoproxy.md:
--------------------------------------------------------------------------------
1 | # Delegate anything possible (e.g. static content, gzip) to a reverse proxy
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | It’s very tempting to cargo-cult Express and use its rich middleware offering for networking-related tasks like serving static files, gzip encoding, throttling requests, SSL termination, etc. This is a performance killer due to Node's single-threaded model, which will keep the CPU busy for long periods (remember, Node’s execution model is optimized for short tasks or async IO-related tasks). A better approach is to use a tool that specializes in networking tasks – the most popular are nginx and HAProxy, which are also used by the biggest cloud vendors to lighten the incoming load on Node.js processes.
9 |
10 |
11 |
12 |
13 | ### Nginx Config Example – Using nginx to compress server responses
14 |
15 | ```
16 | # configure gzip compression
17 | gzip on;
18 | gzip_comp_level 6;
19 | gzip_vary on;
20 |
21 | # configure upstream
22 | upstream myApplication {
23 |   server 127.0.0.1:3000;
24 |   server 127.0.0.1:3001;
25 |   keepalive 64;
26 | }
27 |
28 | # defining web server
29 | server {
30 |   # configure server with ssl and error pages
31 |   listen 80;
32 |   listen 443 ssl;
33 |   ssl_certificate /some/location/sillyfacesociety.com.bundle.crt;
34 |   error_page 502 /errors/502.html;
35 |
36 |   # handling static content
37 |   location ~ ^/(images/|img/|javascript/|js/|css/|stylesheets/|flash/|media/|static/|robots.txt|humans.txt|favicon.ico) {
38 |     root /usr/local/silly_face_society/node/public;
39 |     access_log off;
40 |     expires max;
41 |   }
42 | }
42 | ```
43 |
44 |
45 |
46 | ### What Other Bloggers Say
47 |
48 | * From the blog [Mubaloo](http://mubaloo.com/best-practices-deploying-node-js-applications):
49 | > …It’s very easy to fall into this trap – You see a package like Express and think “Awesome! Let’s get started” – you code away and you’ve got an application that does what you want. This is excellent and, to be honest, you’ve won a lot of the battle. However, you will lose the war if you upload your app to a server and have it listen on your HTTP port, because you’ve forgotten a very crucial thing: Node is not a web server. **As soon as any volume of traffic starts to hit your application, you’ll notice that things start to go wrong: connections are dropped, assets stop being served or, at the very worst, your server crashes. What you’re doing is attempting to have Node deal with all of the complicated things that a proven web server does really well. Why reinvent the wheel?**
50 | > **This is just for one request, for one image and bearing in mind this is memory that your application could be using for important stuff like reading a database or handling complicated logic; why would you cripple your application for the sake of convenience?**
51 |
52 |
53 | * From the blog [Argteam](http://blog.argteam.com/coding/hardening-node-js-for-production-part-2-using-nginx-to-avoid-node-js-load):
54 | > Although express.js has built in static file handling through some connect middleware, you should never use it. **Nginx can do a much better job of handling static files and can prevent requests for non-dynamic content from clogging our node processes**…
55 |
--------------------------------------------------------------------------------
/sections/production/smartlogging.md:
--------------------------------------------------------------------------------
1 | # Make your app transparent using smart logs
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | Since you print out log statements anyway and you're obviously in need of some interface that wraps up production information where you can trace errors and core metrics (e.g. how many errors happen every hour and which is your slowest API endpoint), why not invest some moderate effort in a robust logging framework that will tick all the boxes? Achieving that requires a thoughtful decision on three steps:
9 |
10 | **1. smart logging** – at the bare minimum you need to use a reputable logging library like [Winston](https://github.com/winstonjs/winston) or [Bunyan](https://github.com/trentm/node-bunyan) and write meaningful information at each transaction start and end. Consider also formatting log statements as JSON and providing all the contextual properties (e.g. user id, operation type, etc.) so that the operations team can act on those fields. Also include a unique transaction ID in each log line; for more information refer to the bullet below “Write transaction-id to log”. One last point to consider is including an agent that logs system resources like memory and CPU, such as Elastic Beat.
11 |
12 | **2. smart aggregation** – once you have comprehensive information within your servers' file systems, it’s time to periodically push it to a system that aggregates, facilitates and visualizes this data. The Elastic stack, for example, is a popular and free choice that offers all the components to aggregate and visualize data. Many commercial products provide similar functionality, only they greatly cut down the setup time and require no hosting.
13 |
14 | **3. smart visualization** – now that the information is aggregated and searchable, one could settle for the power of easily searching the logs, but this can go much further without coding or spending much effort. We can now show important operational metrics like error rate, average CPU throughout the day, how many new users opted in during the last hour, and any other metric that helps to govern and improve our app
15 |
16 |
17 |
18 |
19 | ### Visualization Example: Kibana (part of Elastic stack) facilitates advanced searching on log content
20 |
21 | 
22 |
23 |
24 |
25 | ### Visualization Example: Kibana (part of Elastic stack) visualizes data based on logs
26 |
27 | 
28 |
29 |
30 |
31 | ### Blog Quote: Logger Requirements
32 | From the blog [Strong Loop](https://strongloop.com/strongblog/compare-node-js-logging-winston-bunyan/):
33 |
34 | > Lets identify a few requirements (for a logger):
35 | > 1. Time stamp each log line. This one is pretty self explanatory – you should be able to tell when each log entry occured.
36 | > 2. Logging format should be easily digestible by humans as well as machines.
37 | > 3. Allows for multiple configurable destination streams. For example, you might be writing trace logs to one file but when an error is encountered, write to the same file, then into error file and send an email at the same time…
38 |
39 |
40 |
41 |
42 |
43 |
44 |
--------------------------------------------------------------------------------
/sections/projectstructre/breakintcomponents.md:
--------------------------------------------------------------------------------
1 | # Structure your solution by components
2 |
3 |
4 |
5 |
6 | ### One Paragraph Explainer
7 |
8 | For medium sized apps and above, monoliths are really bad - having one big piece of software with many dependencies is just hard to reason about and often leads to spaghetti code. Even smart architects, those who are skilled enough to tame the beast and 'modularize' it, spend great mental effort on design, and each change requires carefully evaluating the impact on other dependent objects. The ultimate solution is to develop small software: divide the whole stack into self-contained components that don't share files with others, each constitutes very few files (e.g. API, service, data access, test, etc.) so that it's very easy to reason about it. Some may call this 'microservices' architecture - it's important to understand that microservices is not a spec which you must follow, but rather a set of principles. You may adopt many principles into a full-blown microservices architecture or adopt only a few. Both are good as long as you keep the software complexity low. The very least you should do is create basic borders between components, assign a folder in your project root for each business component and make it self contained - other components are allowed to consume its functionality only through its public interface or API. This is the foundation for keeping your components simple, avoiding dependency hell and paving the way to full-blown microservices in the future once your app grows.
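As an illustration, a component-based layout might look like the sketch below, where each business component holds its own API, logic, data access and tests (the component names are hypothetical):

```
my-system
├─ orders            // a self-contained business component
│  ├─ api            // public interface consumed by other components
│  ├─ service        // business logic
│  ├─ data-access    // DB access code
│  └─ test
├─ users
├─ payments
└─ libraries         // generic cross-component utilities, e.g. logger
```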
9 |
10 |
11 |
12 |
13 | ### Blog Quote: "Scaling requires scaling of the entire application"
14 | From the blog MartinFowler.com
15 |
16 | > Monolithic applications can be successful, but increasingly people are feeling frustrations with them - especially as more applications are being deployed to the cloud. Change cycles are tied together - a change made to a small part of the application, requires the entire monolith to be rebuilt and deployed. Over time it's often hard to keep a good modular structure, making it harder to keep changes that ought to only affect one module within that module. Scaling requires scaling of the entire application rather than parts of it that require greater resource.
17 |
18 |
19 |
20 | ### Blog Quote: "So what does the architecture of your application scream?"
21 | From the blog [uncle-bob](https://8thlight.com/blog/uncle-bob/2011/09/30/Screaming-Architecture.html)
22 |
23 | > ...if you were looking at the architecture of a library, you’d likely see a grand entrance, an area for check-in-out clerks, reading areas, small conference rooms, and gallery after gallery capable of holding bookshelves for all the books in the library. That architecture would scream: Library.
So what does the architecture of your application scream? When you look at the top level directory structure, and the source files in the highest level package; do they scream: Health Care System, or Accounting System, or Inventory Management System? Or do they scream: Rails, or Spring/Hibernate, or ASP?
25 |
26 |
27 |
28 |
29 |
30 | ### Good: Structure your solution by self-contained components
31 | 
32 |
33 |
34 |
35 | ### Bad: Group your files by technical role
36 | 
37 |
--------------------------------------------------------------------------------
/sections/errorhandling/asyncerrorhandling.md:
--------------------------------------------------------------------------------
1 | # Use Async-Await or promises for async error handling
2 |
3 |
4 | ### One Paragraph Explainer
5 |
6 | Callbacks don’t scale well, as they are not familiar to most programmers, force you to check errors all over, deal with nasty code nesting and make it difficult to reason about the code flow. Promise libraries like BlueBird, async, and Q pack a standard code style using RETURN and THROW to control the program flow. Specifically, they support the favorite try-catch error handling style, which allows freeing the main code path from dealing with errors in every function
7 |
8 |
9 | ### Code Example – using promises to catch errors
10 |
11 |
12 | ```javascript
13 | doWork()
14 |   .then(doWork)
15 |   .then(doOtherWork)
16 |   .then((result) => doWork)
17 |   .catch((error) => { throw error; })
18 |   .then(verify);
19 | ```
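The same idea reads even more naturally with async/await, which the bullet title recommends: errors thrown by any awaited step funnel into a single try-catch. A runnable sketch with stubbed steps (the step functions are illustrative stubs, not real APIs):

```javascript
// Hypothetical async steps, stubbed so the sketch is self-contained
const doWork = async () => 1;
const doMoreWork = async (n) => n + 1;
const verify = async (n) => n;

// One try-catch covers the whole flow - no per-step error checks
async function executeFlow() {
  try {
    const result = await doWork();
    const more = await doMoreWork(result);
    return await verify(more);
  } catch (error) {
    // a single place to handle errors from any step
    throw error;
  }
}

executeFlow().then((value) => console.log(value)); // prints 2
```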
20 |
21 | ### Anti pattern code example – callback style error handling
22 |
23 | ```javascript
24 | getData(someParameter, function(err, result){
25 |   if(err != null)
26 |     // do something like calling the given callback function and pass the error
27 |   getMoreData(a, function(err, result){
28 |     if(err != null)
29 |       // do something like calling the given callback function and pass the error
30 |     getMoreData(b, function(c){
31 |       getMoreData(d, function(e){
32 |         if(err != null)
33 |           // you get the idea?
34 |       });
35 |     });
36 |   });
37 | });
36 | ```
37 |
38 | ### Blog Quote: "We have a problem with promises"
39 | From the blog pouchdb.com
40 |
41 | > ……And in fact, callbacks do something even more sinister: they deprive us of the stack, which is something we usually take for granted in programming languages. Writing code without a stack is a lot like driving a car without a brake pedal: you don’t realize how badly you need it, until you reach for it and it’s not there. The whole point of promises is to give us back the language fundamentals we lost when we went async: return, throw, and the stack. But you have to know how to use promises correctly in order to take advantage of them.
42 |
43 | ### Blog Quote: "The promises method is much more compact"
44 | From the blog gosquared.com
45 |
46 | > ………The promises method is much more compact, clearer and quicker to write. If an error or exception occurs within any of the ops it is handled by the single .catch() handler. Having this single place to handle all errors means you don’t need to write error checking for each stage of the work.
47 |
48 | ### Blog Quote: "Promises are native ES6, can be used with generators"
49 | From the blog StrongLoop
50 |
51 | > ….Callbacks have a lousy error-handling story. Promises are better. Marry the built-in error handling in Express with promises and significantly lower the chances of an uncaught exception. Promises are native ES6, can be used with generators, and ES7 proposals like async/await through compilers like Babel
52 |
53 | ### Blog Quote: "All those regular flow control constructs you are used to are completely broken"
54 | From the blog Benno’s
55 |
56 | > ……One of the best things about asynchronous, callback based programming is that basically all those regular flow control constructs you are used to are completely broken. However, the one I find most broken is the handling of exceptions. Javascript provides a fairly familiar try…catch construct for dealing with exceptions. The problem with exceptions is that they provide a great way of short-cutting errors up a call stack, but end up being completely useless if the error happens on a different stack…
57 |
--------------------------------------------------------------------------------
/sections/errorhandling/operationalvsprogrammererror.md:
--------------------------------------------------------------------------------
1 | # Distinguish operational vs programmer errors
2 |
3 | ### One Paragraph Explainer
4 |
5 | Distinguishing the following two error types will minimize your app downtime and help avoid crazy bugs: Operational errors refer to situations where you understand what happened and the impact of it – for example, a query to some HTTP service failed due to a connection problem. On the other hand, programmer errors refer to cases where you have no idea why, and sometimes where, an error came from – it might be some code that tried to read an undefined value or a DB connection pool that leaks memory. Operational errors are relatively easy to handle – usually logging the error is enough. Things become hairy when a programmer error pops up: the application might be in an inconsistent state and there’s nothing better you can do than to restart gracefully
6 |
7 |
8 |
9 | ### Code Example – marking an error as operational (trusted)
10 |
11 | ```javascript
12 | // marking an error object as operational
13 | var myError = new Error("How can I add new product when no value provided?");
14 | myError.isOperational = true;
15 |
16 | // or if you're using some centralized error factory (see other examples at the bullet "Use only the built-in Error object")
17 | function appError(commonType, description, isOperational) {
18 |     Error.call(this);
19 |     Error.captureStackTrace(this);
20 |     this.commonType = commonType;
21 |     this.description = description;
22 |     this.isOperational = isOperational;
23 | }
24 |
25 | throw new appError(errorManagement.commonErrors.InvalidInput, "Describe here what happened", true);
26 |
27 | ```
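Building on that flag, a centralized error handler can decide between merely logging and a graceful crash. This is only a sketch: the `logger` stand-in and the restarter (pm2, forever, Kubernetes) are assumptions for illustration, not part of the original example:

```javascript
// stand-in for a real logging library such as winston or bunyan
const logger = { error: (msg, err) => console.error(msg, err.message) };

function handleError(error) {
  logger.error('Error occurred', error);
  if (!error.isOperational) {
    // programmer error: state may be corrupt, so exit and let a
    // restarter (pm2, forever, Kubernetes) bring a fresh process up
    process.exit(1);
  }
  // operational error: logging was enough, keep serving requests
}

const trustedError = new Error('Invalid input provided');
trustedError.isOperational = true; // marked as operational (trusted)
handleError(trustedError); // logs and returns; the process keeps running
```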
28 |
29 | ### Blog Quote: "Programmer errors are bugs in the program"
30 | From the blog Joyent, ranked 1 for the keywords “Node.JS error handling”
31 |
32 | > …The best way to recover from programmer errors is to crash immediately. You should run your programs using a restarter that will automatically restart the program in the event of a crash. With a restarter in place, crashing is the fastest way to restore reliable service in the face of a transient programmer error…
33 |
34 | ### Blog Quote: "No safe way to leave without creating some undefined brittle state"
35 | From Node.JS official documentation
36 |
37 | > …By the very nature of how throw works in JavaScript, there is almost never any way to safely “pick up where you left off”, without leaking references, or creating some other sort of undefined brittle state. The safest way to respond to a thrown error is to shut down the process. Of course, in a normal web server, you might have many connections open, and it is not reasonable to abruptly shut those down because an error was triggered by someone else. The better approach is to send an error response to the request that triggered the error, while letting the others finish in their normal time, and stop listening for new requests in that worker.
38 |
39 |
40 | ### Blog Quote: "Otherwise you risk the state of your application"
41 | From the blog debugable.com, ranked 3 for the keywords “Node.JS uncaught exception”
42 |
43 | > …So, unless you really know what you are doing, you should perform a graceful restart of your service after receiving an “uncaughtException” exception event. Otherwise you risk the state of your application, or that of 3rd party libraries to become inconsistent, leading to all kinds of crazy bugs…
44 |
45 | ### Blog Quote: "There are three schools of thought on error handling"
46 | From the blog: JS Recipes
47 |
48 | > …There are primarily three schools of thoughts on error handling:
49 | 1. Let the application crash and restart it.
50 | 2. Handle all possible errors and never crash.
51 | 3. Balanced approach between the two
52 |
--------------------------------------------------------------------------------
/sections/drafts/readme-general-toc-2.md:
--------------------------------------------------------------------------------
1 | # Node.JS Best Practices
2 |
3 |
4 |
5 | 
6 |
7 | # Welcome to Node.js Best Practices
8 |
9 | Welcome to the biggest compilation of Node.JS best practices. The content below was gathered from all top-ranked books and posts and is updated constantly - when you read here, rest assured that no significant tip slipped away. Feel at home - we love to discuss via PRs, issues or Gitter.
10 |
11 | ## Table of Contents
12 | * [Project Setup Practices (18)](#project-setup-practices)
13 | * [Code Style Practices (11) ](#code-style-practices)
14 | * [Error Handling Practices (14) ](#error-handling-practices)
15 | * [Going To Production Practices (21) ](#going-to-production-practices)
16 | * [Testing Practices (9) ](#testing-practices)
17 | * [Security Practices (8) ](#security-practices)
18 |
19 |
20 | # `Project Setup Practices`
21 |
22 | ## ✔ 1. Structure your solution by feature ('microservices')
23 |
24 | **TL;DR:** The worst pitfall of large applications is a huge code base where hundreds of dependencies slow down developers as they try to incorporate new features. Partitioning into small units ensures that each unit is kept simple and very easy to maintain. This strategy pushes the complexity to a higher level - designing the cross-component interactions.
25 |
26 | **Otherwise:** Developing a new feature that changes a few objects demands evaluating how those changes might affect dozens of dependents, and each deployment becomes a fear.
27 |
28 | 🔗 [**Read More: Structure by feature**](/sections/errorhandling/asyncawait.md)
29 |
30 |
31 |
32 | ## ✔ 2. Layer your app, keep Express within its boundaries
33 |
34 | **TL;DR:** It's very common to see an Express API pass the Express objects (req, res) to business logic and data layers, sometimes even to every function - this makes your application dependent on, and accessible by, Express only. What if your code should be reached by a testing console or a CRON job? Instead, create your own context object with cross-cutting-concern properties like the user roles and inject it into other layers, or use 'thread-level variables' libraries like continuation local storage
35 |
36 | **Otherwise:** The application can be accessed by Express only and requires creating complex testing mocks
37 |
38 | 🔗 [**Read More: Structure by feature**](/sections/errorhandling/asyncawait.md)
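The context-object idea in this bullet can be sketched as follows; `userService`, `greetHandler`, and the context shape are hypothetical names for illustration only:

```javascript
// business-logic layer: receives a plain context object, never req/res,
// so it can be reached from tests, CRON jobs or a REPL console
const userService = {
  getGreeting({ user, requestId }) {
    return `Hello ${user.name} (request ${requestId})`;
  }
};

// thin Express layer: translates req/res into the neutral context object
function greetHandler(req, res) {
  const context = { user: req.user, requestId: req.headers['x-request-id'] };
  res.send(userService.getGreeting(context));
}

// the same logic reached without Express at all (e.g. from a test)
const fromTest = userService.getGreeting({ user: { name: 'Dana' }, requestId: '42' });
```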
39 |
40 |
41 |
42 | ## ✔ 3. Configure ESLint with node-specific plugins
43 |
44 | **TL;DR:** Monitoring is a game of finding out issues before our customers do – obviously this should be assigned unprecedented importance. The market is overwhelmed with offers, thus consider starting with defining the basic metrics you must follow (my sug
45 |
46 | **Otherwise:** You end up with a black box that is hard to reason about, then you start rewriting all logging statements to add additional information
47 |
48 | 🔗 [**Read More: Structure by feature**](/sections/errorhandling/asyncawait.md)
49 |
50 |
51 | # `Code Style Practices`
52 |
53 |
54 |
55 | # `Error Handling Practices`
56 |
83 |
84 |
85 | ## ✔ 1. Use async-await for async error handling
86 |
87 | **TL;DR:** Handling async errors in callback style is probably the fastest way to hell (a.k.a the pyramid of doom). The best gift you can give to your code is using a reputable promise library or async-await instead, which provides a much more compact and familiar code syntax like try-catch
88 |
89 | **Otherwise:** Node.JS callback style, function(err, response), is a promising way to un-maintainable code due to the mix of error handling with casual code, excessive nesting and awkward coding patterns
90 |
91 | 🔗 [**Use async-await for async error handling**](/sections/errorhandling/asyncawait.md)
92 |
93 |
102 |
103 |
104 |
105 |