├── LICENSE └── README.md /LICENSE: -------------------------------------------------------------------------------- 1 | Apache License 2 | Version 2.0, January 2004 3 | http://www.apache.org/licenses/ 4 | 5 | TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION 6 | 7 | 1. Definitions. 8 | 9 | "License" shall mean the terms and conditions for use, reproduction, 10 | and distribution as defined by Sections 1 through 9 of this document. 11 | 12 | "Licensor" shall mean the copyright owner or entity authorized by 13 | the copyright owner that is granting the License. 14 | 15 | "Legal Entity" shall mean the union of the acting entity and all 16 | other entities that control, are controlled by, or are under common 17 | control with that entity. For the purposes of this definition, 18 | "control" means (i) the power, direct or indirect, to cause the 19 | direction or management of such entity, whether by contract or 20 | otherwise, or (ii) ownership of fifty percent (50%) or more of the 21 | outstanding shares, or (iii) beneficial ownership of such entity. 22 | 23 | "You" (or "Your") shall mean an individual or Legal Entity 24 | exercising permissions granted by this License. 25 | 26 | "Source" form shall mean the preferred form for making modifications, 27 | including but not limited to software source code, documentation 28 | source, and configuration files. 29 | 30 | "Object" form shall mean any form resulting from mechanical 31 | transformation or translation of a Source form, including but 32 | not limited to compiled object code, generated documentation, 33 | and conversions to other media types. 34 | 35 | "Work" shall mean the work of authorship, whether in Source or 36 | Object form, made available under the License, as indicated by a 37 | copyright notice that is included in or attached to the work 38 | (an example is provided in the Appendix below). 39 | 40 | "Derivative Works" shall mean any work, whether in Source or Object 41 | form, that is based on (or derived from) the Work and for which the 42 | editorial revisions, annotations, elaborations, or other modifications 43 | represent, as a whole, an original work of authorship. For the purposes 44 | of this License, Derivative Works shall not include works that remain 45 | separable from, or merely link (or bind by name) to the interfaces of, 46 | the Work and Derivative Works thereof. 47 | 48 | "Contribution" shall mean any work of authorship, including 49 | the original version of the Work and any modifications or additions 50 | to that Work or Derivative Works thereof, that is intentionally 51 | submitted to Licensor for inclusion in the Work by the copyright owner 52 | or by an individual or Legal Entity authorized to submit on behalf of 53 | the copyright owner. For the purposes of this definition, "submitted" 54 | means any form of electronic, verbal, or written communication sent 55 | to the Licensor or its representatives, including but not limited to 56 | communication on electronic mailing lists, source code control systems, 57 | and issue tracking systems that are managed by, or on behalf of, the 58 | Licensor for the purpose of discussing and improving the Work, but 59 | excluding communication that is conspicuously marked or otherwise 60 | designated in writing by the copyright owner as "Not a Contribution." 61 | 62 | "Contributor" shall mean Licensor and any individual or Legal Entity 63 | on behalf of whom a Contribution has been received by Licensor and 64 | subsequently incorporated within the Work. 
65 | 66 | 2. Grant of Copyright License. Subject to the terms and conditions of 67 | this License, each Contributor hereby grants to You a perpetual, 68 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 69 | copyright license to reproduce, prepare Derivative Works of, 70 | publicly display, publicly perform, sublicense, and distribute the 71 | Work and such Derivative Works in Source or Object form. 72 | 73 | 3. Grant of Patent License. Subject to the terms and conditions of 74 | this License, each Contributor hereby grants to You a perpetual, 75 | worldwide, non-exclusive, no-charge, royalty-free, irrevocable 76 | (except as stated in this section) patent license to make, have made, 77 | use, offer to sell, sell, import, and otherwise transfer the Work, 78 | where such license applies only to those patent claims licensable 79 | by such Contributor that are necessarily infringed by their 80 | Contribution(s) alone or by combination of their Contribution(s) 81 | with the Work to which such Contribution(s) was submitted. If You 82 | institute patent litigation against any entity (including a 83 | cross-claim or counterclaim in a lawsuit) alleging that the Work 84 | or a Contribution incorporated within the Work constitutes direct 85 | or contributory patent infringement, then any patent licenses 86 | granted to You under this License for that Work shall terminate 87 | as of the date such litigation is filed. 88 | 89 | 4. Redistribution. You may reproduce and distribute copies of the 90 | Work or Derivative Works thereof in any medium, with or without 91 | modifications, and in Source or Object form, provided that You 92 | meet the following conditions: 93 | 94 | (a) You must give any other recipients of the Work or 95 | Derivative Works a copy of this License; and 96 | 97 | (b) You must cause any modified files to carry prominent notices 98 | stating that You changed the files; and 99 | 100 | (c) You must retain, in the Source form of any Derivative Works 101 | that You distribute, all copyright, patent, trademark, and 102 | attribution notices from the Source form of the Work, 103 | excluding those notices that do not pertain to any part of 104 | the Derivative Works; and 105 | 106 | (d) If the Work includes a "NOTICE" text file as part of its 107 | distribution, then any Derivative Works that You distribute must 108 | include a readable copy of the attribution notices contained 109 | within such NOTICE file, excluding those notices that do not 110 | pertain to any part of the Derivative Works, in at least one 111 | of the following places: within a NOTICE text file distributed 112 | as part of the Derivative Works; within the Source form or 113 | documentation, if provided along with the Derivative Works; or, 114 | within a display generated by the Derivative Works, if and 115 | wherever such third-party notices normally appear. The contents 116 | of the NOTICE file are for informational purposes only and 117 | do not modify the License. You may add Your own attribution 118 | notices within Derivative Works that You distribute, alongside 119 | or as an addendum to the NOTICE text from the Work, provided 120 | that such additional attribution notices cannot be construed 121 | as modifying the License. 
122 | 123 | You may add Your own copyright statement to Your modifications and 124 | may provide additional or different license terms and conditions 125 | for use, reproduction, or distribution of Your modifications, or 126 | for any such Derivative Works as a whole, provided Your use, 127 | reproduction, and distribution of the Work otherwise complies with 128 | the conditions stated in this License. 129 | 130 | 5. Submission of Contributions. Unless You explicitly state otherwise, 131 | any Contribution intentionally submitted for inclusion in the Work 132 | by You to the Licensor shall be under the terms and conditions of 133 | this License, without any additional terms or conditions. 134 | Notwithstanding the above, nothing herein shall supersede or modify 135 | the terms of any separate license agreement you may have executed 136 | with Licensor regarding such Contributions. 137 | 138 | 6. Trademarks. This License does not grant permission to use the trade 139 | names, trademarks, service marks, or product names of the Licensor, 140 | except as required for reasonable and customary use in describing the 141 | origin of the Work and reproducing the content of the NOTICE file. 142 | 143 | 7. Disclaimer of Warranty. Unless required by applicable law or 144 | agreed to in writing, Licensor provides the Work (and each 145 | Contributor provides its Contributions) on an "AS IS" BASIS, 146 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or 147 | implied, including, without limitation, any warranties or conditions 148 | of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A 149 | PARTICULAR PURPOSE. You are solely responsible for determining the 150 | appropriateness of using or redistributing the Work and assume any 151 | risks associated with Your exercise of permissions under this License. 152 | 153 | 8. Limitation of Liability. In no event and under no legal theory, 154 | whether in tort (including negligence), contract, or otherwise, 155 | unless required by applicable law (such as deliberate and grossly 156 | negligent acts) or agreed to in writing, shall any Contributor be 157 | liable to You for damages, including any direct, indirect, special, 158 | incidental, or consequential damages of any character arising as a 159 | result of this License or out of the use or inability to use the 160 | Work (including but not limited to damages for loss of goodwill, 161 | work stoppage, computer failure or malfunction, or any and all 162 | other commercial damages or losses), even if such Contributor 163 | has been advised of the possibility of such damages. 164 | 165 | 9. Accepting Warranty or Additional Liability. While redistributing 166 | the Work or Derivative Works thereof, You may choose to offer, 167 | and charge a fee for, acceptance of support, warranty, indemnity, 168 | or other liability obligations and/or rights consistent with this 169 | License. However, in accepting such obligations, You may act only 170 | on Your own behalf and on Your sole responsibility, not on behalf 171 | of any other Contributor, and only if You agree to indemnify, 172 | defend, and hold each Contributor harmless for any liability 173 | incurred by, or claims asserted against, such Contributor by reason 174 | of your accepting any such warranty or additional liability. 175 | 176 | END OF TERMS AND CONDITIONS 177 | 178 | APPENDIX: How to apply the Apache License to your work. 
179 | 180 | To apply the Apache License to your work, attach the following 181 | boilerplate notice, with the fields enclosed by brackets "[]" 182 | replaced with your own identifying information. (Don't include 183 | the brackets!) The text should be enclosed in the appropriate 184 | comment syntax for the file format. We also recommend that a 185 | file or class name and description of purpose be included on the 186 | same "printed page" as the copyright notice for easier 187 | identification within third-party archives. 188 | 189 | Copyright [yyyy] [name of copyright owner] 190 | 191 | Licensed under the Apache License, Version 2.0 (the "License"); 192 | you may not use this file except in compliance with the License. 193 | You may obtain a copy of the License at 194 | 195 | http://www.apache.org/licenses/LICENSE-2.0 196 | 197 | Unless required by applicable law or agreed to in writing, software 198 | distributed under the License is distributed on an "AS IS" BASIS, 199 | WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 200 | See the License for the specific language governing permissions and 201 | limitations under the License. 202 | -------------------------------------------------------------------------------- /README.md: --------------------------------------------------------------------------------
## Preface

Netty is a high-performance network communication framework widely used in RPC frameworks, MQ components, and the underlying communication of the gaming industry. Every framework exposes different parameters and configurations for different scenarios, and each has its own best practices. This document collects the problems encountered while using Netty over the past few years together with the solutions that worked best. PRs are welcome: add the problems you have run into and how you solved them, so that everyone can get the most out of Netty.

## Threading

### bossGroup and workerGroup

A Netty server uses two thread pools, `bossGroup` and `workerGroup`. The `bossGroup` threads handle the Accept I/O events and accept connections from remote clients, while the `workerGroup` handles read/write I/O for the connected clients. In the source code, `bossGroup` and `workerGroup` are `EventLoopGroup`s, and each `EventLoop` inside a group is effectively a single-threaded executor. Once a `channel` becomes active it is registered with one `EventLoop`, so all subsequent operations on that `channel` run on the same thread. This keeps handler code thread-safe, minimizes the use of locks, and improves concurrency.

> The default size of an `EventLoopGroup` is 2 × the number of CPU cores, but that is not a universally optimal value. To avoid thread context switches, the smaller the value the better, as long as it can keep up with the load.
> The boss group appears to use only one thread per listening port, no matter how many you configure.

Therefore one thread in the `bossGroup` is enough for each server port, and the `workerGroup` size can be tuned according to the request volume.
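A minimal sketch of this setup, following the sizing advice above (the `MyServerInitializer` child handler and the port are placeholders):
``` java
// one boss thread per listening port is enough; size the worker group for the load
EventLoopGroup bossGroup = new NioEventLoopGroup(1);
EventLoopGroup workerGroup = new NioEventLoopGroup();

ServerBootstrap bootstrap = new ServerBootstrap();
bootstrap.group(bossGroup, workerGroup)
         .channel(NioServerSocketChannel.class)
         .childHandler(new MyServerInitializer());   // hypothetical ChannelInitializer
bootstrap.bind(8080).sync();
```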
### Business thread pool

Netty's thread count is usually fixed and small, so a blocked thread hurts badly — think of a synchronous database query or a call to a downstream service. Such work should be handed off to a business thread pool, even at the cost of a thread context switch and, in some cases, copying `ThreadLocal` values.

Best practice: in the `channelRead0` method of your `SimpleChannelInboundHandler`, process the request on your own executor:
``` java
@Override
protected void channelRead0(final ChannelHandlerContext ctx, final Message request) throws Exception {
    executor.execute(new Runnable() {
        @Override
        public void run() {
            handleRequest(ctx, request);
        }
    });
}
```

### Don't modify an object after write

In Netty 4 both `inbound` and `outbound` operations run on the `EventLoop` (the I/O thread). When a business thread writes to the network via `channel.writeAndFlush`, Netty submits a write task to that `channel`'s `EventLoop`, which means the business thread and the I/O thread run asynchronously with respect to each other. If you modify the message object after calling `writeAndFlush(message)`, the outcome may not be what you expect; make a deep copy before writing.
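A minimal sketch of the idea, assuming a hypothetical `Message` type with a copy constructor:
``` java
// writeAndFlush only enqueues a write task on the channel's EventLoop and returns
// immediately, so hand Netty its own copy before touching the original again.
Message snapshot = new Message(message);   // deep copy via a hypothetical copy constructor
channel.writeAndFlush(snapshot);
// the original `message` can now be mutated freely without affecting what is sent
```
For a `ByteBuf` payload, `channel.writeAndFlush(buf.copy())` gives the same isolation.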
### Scheduled tasks

For one-shot tasks such as send-timeout control, do not use the JDK's `ScheduledExecutorService`; do this instead:
``` java
ctx.executor().schedule(new MyTimeoutTask(p), 30, TimeUnit.SECONDS)
```
The JDK's `ScheduledExecutorService` is one big pool in which many threads contend for a lock. With the code above, the `TimeoutTask` belongs only to the current `EventLoop`, so no lock is involved. If the send succeeds, the task has to be found in a long queue and cancelled; now each `EventLoop` has its own `Queue` whose length is only a fraction of the original.

## Connection management

### Connection pool

To improve communication efficiency we want to reuse connections and reduce the number of TCP three-way handshakes, which calls for a connection-management mechanism. Usually several connections are established to increase throughput, so we need an implementation that opens a configurable number of connections to a given address (uniquely identified by IP and port) and keeps them in a connection pool.

At startup the client can establish several connections to the server ahead of time, so that no connection-setup cost is paid when one is needed. When taking a connection from the pool, call `channel.isAvailable()` to check whether it is usable, and reconnect if it is not (here `channel` is a custom connection wrapper, not Netty's `Channel`):
``` java
if (channel.isAvailable()) {
    if (channel.isWriteable()) {
        return channel.getChannel();
    } else {
        // write buffer is full, move on to the next connection
        unWriteableCnt++;
    }
} else {
    channel.reConnect((int) (maxReqConnTimeout - totalPassedTimeMs));
    if (channel.isAvailable()) {
        return channel.getChannel();
    } else {
        reconnCnt++;
    }
}
```
* As long as the required throughput is met, the fewer connections the better.

For example, in my proxy test scenario:

> With 2 connections, only 40k QPS.
> With 48 connections, 62k QPS, and 28% CPU burned.
> With 4 connections, QPS actually rises to 68k, and CPU drops to 20%.

### Connection timeout configuration

When writing a client with Netty, we usually have code like this:
``` java
bootstrap.connect(address).await(1000, TimeUnit.MILLISECONDS)
```
It initiates a connection to the peer and waits up to one second; if the connection is not established within one second, we reconnect or do something else. But the bootstrap options also include this one:
``` java
bootstrap.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, 3000);
```
If the two values are inconsistent — the `await` timeout shorter than the option — you get into trouble. From your point of view `connect` has timed out and you assume the connection failed, but an expired `await` does not make `Netty` cancel the in-flight connection attempt. If the connection then succeeds during the second or third second, your earlier judgement was wrong. If you decide whether to reconnect based on `connect(address).await(1000, TimeUnit.MILLISECONDS)`, you may well end up with two connections, and quite possibly your `handler` ends up shared between those two `channel`s. So it is better not to set a timeout on `await` and to always configure it through the `option`.

Alternatively, use the `ChannelFuture` to close a channel that timed out but connected in the end:
``` java
final ChannelFuture cf = bootstrap.connect();
boolean result = cf.awaitUninterruptibly(1000, TimeUnit.MILLISECONDS);
if (result && cf.isSuccess()) {
    return cf.channel();
} else {
    // close the channel that timed out from our point of view but eventually connected
    cf.addListener(new ChannelFutureListener() {
        @Override
        public void operationComplete(ChannelFuture future) throws Exception {
            if (future.isSuccess()) {
                cf.channel().close();
            }
        }
    });
}
```

### Netty Native

`Netty Native` is a `Socket Transport` written in C and called through JNI. It was ported by `Twitter` from `Tomcat Native`, and code is still synchronized with the Tomcat project from time to time.

In our tests it does use less CPU than `JDK NIO`.

Some may ask: the JDK's NIO uses epoll too, so what is the difference? `Norman Maurer` puts it this way:

> * Netty's `epoll transport` uses `edge-triggered` mode while JDK NIO uses `level-triggered`
> * C code, less GC, less `synchronized`
> * More `Socket` configuration options are exposed

Usage is simple — just swap a few class names:
``` java
if (Epoll.isAvailable()) {
    acceptorGroup = new EpollEventLoopGroup(config.getAcceptorSize(),
            new DefaultThreadFactory("acceptor-"));
    selectorGroup = new EpollEventLoopGroup(config.getSelectorSize(),
            new DefaultThreadFactory("selector-"));
    bootstrap.group(acceptorGroup, selectorGroup).channel(EpollServerSocketChannel.class);
} else {
    acceptorGroup = new NioEventLoopGroup(config.getAcceptorSize(),
            new DefaultThreadFactory("acceptor-"));
    selectorGroup = new NioEventLoopGroup(config.getSelectorSize(),
            new DefaultThreadFactory("selector-"));
    bootstrap.group(acceptorGroup, selectorGroup).channel(NioServerSocketChannel.class);
}
```

### Socket parameters

The usual `TCP/Socket` settings are `SO_REUSEADDR`, `TCP_NODELAY` and `SO_KEEPALIVE`. There are also `SO_LINGER`, `SO_TIMEOUT`, `SO_BACKLOG`, `SO_SNDBUF` and `SO_RCVBUF`; for what these options do, see [Linux下高性能网络编程中的几个TCP/IP选项](https://blog.csdn.net/zlxfogger/article/details/44922993).

Server configuration:
``` java
bootstrap.option(ChannelOption.SO_BACKLOG, config.getBacklog());
bootstrap.childOption(ChannelOption.SO_KEEPALIVE, true);
bootstrap.childOption(ChannelOption.TCP_NODELAY, true);
bootstrap.childOption(ChannelOption.WRITE_BUFFER_WATER_MARK, new WriteBufferWaterMark(8 * 1024, 32 * 1024));
```
Client configuration:
``` java
bootstrap.option(ChannelOption.SO_REUSEADDR, true);
bootstrap.option(ChannelOption.TCP_NODELAY, true);
bootstrap.option(ChannelOption.SO_KEEPALIVE, true);
bootstrap.option(ChannelOption.CONNECT_TIMEOUT_MILLIS, config.getReqConnTimeoutMs());
bootstrap.option(ChannelOption.SO_SNDBUF, 4096);
bootstrap.option(ChannelOption.SO_RCVBUF, 4096);
bootstrap.option(ChannelOption.WRITE_BUFFER_WATER_MARK, new WriteBufferWaterMark(8 * 1024, 32 * 1024));
```

## Memory

### Off-heap memory

Off-heap memory is a natural fit for an I/O framework, but allocating and freeing it is not cheap, so a memory pool greatly improves performance and gets rid of frequent GC.
> There are four main kinds of `ByteBuf` in Netty. The `byte[]` under `UnpooledHeapByteBuf` can simply be reclaimed by the JVM GC. `UnpooledDirectByteBuf` is backed by a `DirectByteBuffer`; rather than waiting for the JVM GC, it is best to release it actively. `PooledHeapByteBuf` and `PooledDirectByteBuf` must have their `byte[]`/`ByteBuffer` returned to the pool after use, otherwise memory will blow up.

Recommended ways to allocate memory:
``` java
// heap memory
ByteBuf buf = ctx.alloc().buffer();
ByteBuf buf = channel.alloc().buffer();
ByteBuf buf = PooledByteBufAllocator.DEFAULT.buffer();
// direct (off-heap) memory
ByteBuf buf = ctx.alloc().directBuffer();
ByteBuf buf = channel.alloc().directBuffer();
ByteBuf buf = PooledByteBufAllocator.DEFAULT.directBuffer();
```
Because of the handler chain, a `ByteBuf` in Netty is often passed on to the next `Handler` and never comes back, so whoever uses it last is responsible for releasing it. Netty provides `ReferenceCountUtil.release(buf)` for this; the following article is recommended reading: [Netty之有效规避内存泄漏](http://calvin1978.blogcn.com/articles/netty-leak.html).
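For example, a terminal handler that consumes the buffer itself can release it in a `finally` block — a sketch, where `handle` stands for your own business logic:
``` java
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) throws Exception {
    ByteBuf buf = (ByteBuf) msg;
    try {
        handle(buf);                      // hypothetical business processing
    } finally {
        ReferenceCountUtil.release(buf);  // the last consumer releases the buffer
    }
}
```
Alternatively, extend `SimpleChannelInboundHandler`, which releases the message for you after `channelRead0` returns.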
#### Disabling Netty's object recycling

Netty's lock-free design allows objects to be recycled and reused within a thread without locks. In some scenarios, however, an object is created on one thread and released on another; if you are not careful, you will find the application slowly grinding down and the `heap dump` full of `Recycler` handle objects. For instance, some RPC frameworks move serialization into the business thread to relieve the I/O threads; if the following code runs on a business thread, it has a [memory leak](https://jacobke.github.io/2017/09/13/netty-recycler-cache/) problem:
``` java
PooledByteBufAllocator allocator = PooledByteBufAllocator.DEFAULT;
ByteBuf buffer = allocator.buffer();
User user = new User();
serialization.serialize(buffer, user);
channel.writeAndFlush(buffer);
```
The allocation happens on the business thread, i.e. from the pool in that thread's `thread local`, while the release happens on the I/O thread. Since these are different threads, the pool loses its purpose entirely: one thread keeps allocating, and the objects keep migrating to another thread's pool. Netty apparently did not fix this until 4.0.40; see the issue [Netty4.0.28 mem leak for io.netty.util.Recycler](https://github.com/netty/netty/issues/5563).

Sometimes, rather than working this hard to reuse an object, it is simpler to disable the feature altogether. From 4.0.33 on you can add `-Dio.netty.recycler.maxCapacity.default=0` to the startup parameters to disable object caching; from 4.1 on you have to use `-Dio.netty.recycler.maxCapacity=0` instead.

#### Arena parameters

Internally Netty uses `arena`s to manage `ByteBuf` allocation. The relevant parameters:

``` java
/**
 * Configures the number of Netty arenas; each arena occupies 8 MB.
 * If set to 0, Unpooled buffers are used internally instead.
 */
public static void configArena(int directArenaCnt, int heapArenaCnt) {
    // number of direct-memory arenas; if 0, allocation falls back to UnpooledDirectByteBuf
    System.setProperty("io.netty.allocator.numDirectArenas", String.valueOf(directArenaCnt));
    // number of heap-memory arenas; if 0, allocation falls back to UnpooledHeapByteBuf
    System.setProperty("io.netty.allocator.numHeapArenas", String.valueOf(heapArenaCnt));
    System.setProperty("io.netty.allocator.pageSize", "4096");
}
```

During testing it is recommended to turn the leak-detection level up, for example with the parameter `-Dio.netty.leakDetectionLevel=ADVANCED` or by calling Netty's API `ResourceLeakDetector.setLevel(ResourceLeakDetector.Level.ADVANCED)` directly, and to watch Netty's logger for memory-leak output.
If a leak occurs, a log like the following will most likely be printed:

> 2018-02-09 20:15:29 [ nioEventLoopGroup-1-0:635052 ] - [ ERROR ] LEAK: ByteBuf.release() was not called before it's garbage-collected. See http://netty.io/wiki/reference-counted-objects.html for more information.
> Recent access records: 5 io.netty.buffer.AdvancedLeakAwareByteBuf.readBytes(AdvancedLeakAwareByteBuf.java:435)
> com.ezlippi.nettyServer.ServerHandler.channelRead(ObdServerHandler.java:31)

### Avoiding too many WriteAndFlushTasks

Before writing data to a `Channel`, it is recommended to call `isWritable()` to check the write-buffer water mark of the current `ChannelOutboundBuffer` and prevent an OOM. The water marks can be set when the bootstrap is initialized:
``` java
bootstrap.childOption(ChannelOption.WRITE_BUFFER_WATER_MARK, new WriteBufferWaterMark(8 * 1024, 32 * 1024));
```
When the amount of data in the `ChannelOutboundBuffer` exceeds the high water mark, `pipeline.fireChannelWritabilityChanged()` is triggered and `isWritable()` returns false.

Another thing to watch out for is that the `NioEventLoop` queue is unbounded by default. Without the `isWritable()` check, `WriteAndFlushTask`s keep being submitted to the `EventLoop`'s queue, which can lead to OOM when the peer is slow to receive. This can be limited via system properties:
``` java
public static void configCommonProp() {
    System.setProperty("io.netty.eventLoop.maxPendingTasks", "4096");
    System.setProperty("io.netty.eventexecutor.maxPendingTasks", "4096");
}
```
## Other suggestions

* For stateless `ChannelHandler`s, mark them as shareable with the `ChannelHandler.Sharable` annotation. For example, our event handler can be shared, reducing the number of objects created in each `Channel`'s `ChannelPipeline`:
``` java
@ChannelHandler.Sharable
public class NettyServerDefaultChannelHandler extends SimpleChannelInboundHandler {
}
```

* Use `ctx.write()` and `ctx.channel().write()` correctly. The former enters the pipeline at the current handler and only passes through the outbound handlers between it and the head, while the latter starts from the tail and traverses every outbound handler in the pipeline. In most cases `ctx.write()` is what you want.

References:

* [Netty高性能编程备忘录](http://calvin1978.blogcn.com/articles/netty-performance.html)
* [蚂蚁通信框架实践](http://www.jiangxinlingdu.com/netty/2018/11/23/bolt.html)
* [Netty内存管理](https://gsmtoday.github.io/2017/09/03/netty-memory-pool-md/)
--------------------------------------------------------------------------------