├── .gitignore ├── README.md ├── build.sh ├── docs ├── arch-deep-dive_zh.md ├── build_network_zh.md ├── ca-setup_zh.md ├── chaincode_zh.md ├── channels_zh.md ├── configtx_zh.md ├── configtxgen_zh.md ├── configtxlator_zh.md ├── endorsement-policies_zh.md ├── error-handling_zh.md ├── getting_started.md ├── glossary.md ├── gossip_zh.md ├── img │ ├── blocks-3.png │ ├── fabric-ca.png │ ├── flow-4.png │ ├── sdk-image01.png │ ├── sdk-image02.jpg │ ├── sdk-image03.png │ ├── step0.png │ ├── step1.png │ ├── step2.png │ ├── step3.png │ ├── step4.png │ ├── step5.png │ └── step6.png ├── index.md ├── kafka_zh.md ├── ledger_zh.md ├── logging-control_zh.md ├── makefile_zh.md ├── msp_acl_zh.md ├── msp_zh.md ├── protocol-spec_zh.md ├── read-write-set.md ├── sdk-design_zh.md ├── sdk_java_zh.md ├── sdk_node_zh.md ├── txflow_zh.md └── write_first_app_zh.md └── mkdocs.yml /.gitignore: -------------------------------------------------------------------------------- 1 | site/ -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # hyperledgerDocs 2 | hyperledger中文文档 3 | -------------------------------------------------------------------------------- /build.sh: -------------------------------------------------------------------------------- 1 | mkdocs gh-deploy -------------------------------------------------------------------------------- /docs/chaincode_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文一](http://hyperledger-fabric.readthedocs.io/en/latest/chaincode.html) [原文二](http://hyperledger-fabric.readthedocs.io/en/latest/chaincode4ade.html) [原文三](http://hyperledger-fabric.readthedocs.io/en/latest/chaincode4noah.html) | | | 5 | 6 | 7 | ## 这里什么都没有,就靠你了!!!
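While this page awaits translation, the pattern the three linked originals revolve around — an `Invoke` entry point that reads and writes key/value state through a stub — can be sketched without the real shim. Note this is only an illustrative stand-in: `MockStub`, this `Invoke` signature, and the account names are ours, not the actual `github.com/hyperledger/fabric` chaincode shim API.

```go
package main

import (
	"errors"
	"fmt"
	"strconv"
)

// MockStub stands in for the real shim's ChaincodeStubInterface: a plain
// in-memory key/value store instead of peer-managed ledger state.
type MockStub struct {
	State map[string][]byte
}

func (s *MockStub) PutState(key string, value []byte) { s.State[key] = value }
func (s *MockStub) GetState(key string) []byte        { return s.State[key] }

// Invoke mimics the classic two-account transfer used by the upstream
// chaincode tutorials: move amount units from account `from` to `to`.
func Invoke(stub *MockStub, from, to string, amount int) error {
	fromVal, err := strconv.Atoi(string(stub.GetState(from)))
	if err != nil {
		return err
	}
	toVal, err := strconv.Atoi(string(stub.GetState(to)))
	if err != nil {
		return err
	}
	if fromVal < amount {
		return errors.New("insufficient balance")
	}
	stub.PutState(from, []byte(strconv.Itoa(fromVal-amount)))
	stub.PutState(to, []byte(strconv.Itoa(toVal+amount)))
	return nil
}

func main() {
	stub := &MockStub{State: map[string][]byte{}}
	stub.PutState("a", []byte("100"))
	stub.PutState("b", []byte("200"))
	if err := Invoke(stub, "a", "b", 10); err != nil {
		panic(err)
	}
	fmt.Println(string(stub.GetState("a")), string(stub.GetState("b"))) // 90 210
}
```

Real chaincode additionally implements `Init`, parses its arguments from the stub, and returns protobuf responses; see the linked originals above for the actual interface.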
-------------------------------------------------------------------------------- /docs/channels_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](http://hyperledger-fabric.readthedocs.io/en/latest/channels.html) | Yi Zeng | | 5 | 6 | A Hyperledger Fabric channel is a private “subnet” of communication between two or more specific network members, for the purpose of conducting private and confidential transactions. A channel is defined by members (organizations), anchor peers per member, the shared ledger, chaincode application(s) and the ordering service node(s). Each transaction on the network is executed on a channel, where each party must be authenticated and authorized to transact on that channel. Each peer that joins a channel has its own identity given by a membership services provider (MSP), which authenticates each peer to its channel peers and services. 7 | 8 | 在超级账本Fabric中,通道是指两个或多个特定网络成员之间,为进行私有、保密的交易而建立的私有“子网”。一个通道由以下要素定义:成员(组织)、每个成员的锚节点、共享账本、链码应用程序和排序服务节点。网络上的每笔交易都在某个指定的通道中执行,交易各方必须经过该通道的认证和授权才能在其上交易。加入通道的每个节点都拥有由成员服务提供商(MSP)颁发的身份标识,MSP用于向通道中的其他节点和服务验证该节点的身份。 9 | 10 | To create a new channel, the client SDK calls configuration system chaincode and references properties such as **anchor peers** and members (organizations). This request creates a **genesis block** for the channel ledger, which stores configuration information about the channel policies, members and anchor peers. When adding a new member to an existing channel, either this genesis block, or if applicable, a more recent reconfiguration block, is shared with the new member.
11 | 12 | 要创建一个新通道,客户端SDK调用配置系统链码,并引用锚节点和成员(组织)等属性。这个请求会为通道账本创建一个创世区块,用于存储通道策略、成员和锚节点等配置信息。当需要向现有通道添加新成员时,这个创世区块(或最新的重新配置区块,如果存在的话)将共享给新成员。 13 | 14 | Note 15 | 注意 16 | 17 | See the Channel Configuration (configtx) section for more details on the properties and proto structures of config transactions. 18 | 19 | 参考通道配置(configtx)章节,可以查看更多关于配置交易的属性和proto结构的详细信息。 20 | 21 | The election of a leading peer for each member on a channel determines which peer communicates with the ordering service on behalf of the member. If no leader is identified, an algorithm can be used to identify the leader. The consensus service orders transactions and delivers them, in a block, to each leading peer, which then distributes the block to its member peers, and across the channel, using the gossip protocol. 22 | 23 | 通道中每个成员都会选举出一个领导节点,由该节点代表此成员与排序服务通讯。如果尚未确定领导节点,可以通过算法选出。共识服务对交易进行排序并打包成区块,把区块发送给每个领导节点;领导节点再把区块分发给其成员节点,并通过gossip协议在整个通道内传播。 24 | 25 | Although any one anchor peer can belong to multiple channels, and therefore maintain multiple ledgers, no ledger data can pass from one channel to another. This separation of ledgers, by channel, is defined and implemented by configuration chaincode, the identity membership service and the gossip data dissemination protocol. The dissemination of data, which includes information on transactions, ledger state and channel membership, is restricted to peers with verifiable membership on the channel. This isolation of peers and ledger data, by channel, allows network members that require private and confidential transactions to coexist with business competitors and other restricted members, on the same blockchain network.
26 | 27 | 虽然任意一个锚节点都可以属于多个通道,从而维护多个账本,但任何账本数据都不会从一个通道传到另一个通道。这种按通道对账本的隔离,是由配置链码、成员身份服务和gossip数据传播协议共同定义和实现的。数据(包括交易信息、账本状态和通道成员资格)的传播,仅限于通道内成员身份可验证的节点之间。这种按通道对节点和账本数据的隔离,使得需要私有、保密交易的网络成员能够与业务竞争对手及其他受限成员共存于同一个区块链网络中。 -------------------------------------------------------------------------------- /docs/configtx_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](http://hyperledger-fabric.readthedocs.io/en/latest/configtx.html) | Linsheng Yu | | 5 | 6 | 7 | Shared configuration for a Hyperledger Fabric blockchain network is stored in a collection of configuration transactions, one per channel. Each configuration transaction is usually referred to by the shorter name ***configtx***. 8 | 9 | Hyperledger Fabric 区块链网络的共享配置存储在配置交易的集合中,每个channel对应一个配置交易。配置交易通常简称***configtx***。 10 | 11 | Channel configuration has the following important properties: 12 | 13 | Channel配置有以下重要特性: 14 | 15 | 1. **版本 Versioned**:All elements of the configuration have an associated version which is advanced with every modification. Further, every committed configuration receives a sequence number. 16 | 17 | 配置中的所有项都有一个相关联的版本,每次修改都会使版本递增。此外,每个提交的配置都会有一个序列号。 18 | 2. **许可 Permissioned**:Each element of the configuration has an associated policy which governs whether or not modification to that element is permissible. Anyone with a copy of the previous configtx (and no additional info) may verify the validity of a new config based on these policies. 19 | 20 | 配置中的所有项都有一个相关联的策略,该策略控制该元素是否可修改。任何有前一个configtx(无需额外信息)的“人”,都可基于这些策略验证新配置的有效性。 21 | 3. **分层 Hierarchical**:A root configuration group contains sub-groups, and each group of the hierarchy has associated values and policies. These policies can take advantage of the hierarchy to derive policies at one level from policies of lower levels. 22 | 23 | 根配置组包含子组,分层结构中的每个组都有相关联的值和策略。这些策略可以利用分层结构,从较低级别的策略导出较高级别的策略。 24 | 25 | ## Anatomy of a configuration / 配置剖析 26 | 27 | Configuration is stored as a transaction of type `HeaderType_CONFIG` in a block with no other transactions.
These blocks are referred to as ***Configuration Blocks***, the first of which is referred to as the ***Genesis Block***. 28 | 29 | 配置作为一种`HeaderType_CONFIG`类型的交易单独存储在一个block中(也就是说这个block不再包含其他交易),这个block被称为***配置区块***,其中的第一个称为 ***创世区块***。 30 | 31 | The proto structures for configuration are stored in `fabric/protos/common/configtx.proto`. The Envelope of type `HeaderType_CONFIG` encodes a `ConfigEnvelope` message as the `Payload` `data` field. The proto for `ConfigEnvelope` is defined as follows: 32 | 33 | 配置的数据结构在文件`fabric/protos/common/configtx.proto`中,编码后的`ConfigEnvelope`信息作为`HeaderType_CONFIG`类型的`Envelope`中`Payload`的`Data`字段值。*(译注:Envelope.payload.Header.channel_header.type= HeaderType_CONFIG,Envelope.payload.data = []byte(ConfigEnvelope))*`ConfigEnvelope`定义如下: 34 | 35 | message ConfigEnvelope { 36 | Config config = 1; // A marshaled Config structure 37 | Envelope last_update = 2; // The last CONFIG_UPDATE message which generated this current configuration 38 | // Note that CONFIG_UPDATE has a Payload.Data of a Marshaled ConfigUpdate 39 | } 40 | 41 | The `last_update` field is defined below in the Updates to configuration section, but is only necessary when validating the configuration, not reading it. Instead, the currently committed configuration is stored in the `config` field, containing a `Config` message. 42 | 43 | `last_update`字段在下面的“更新配置”一节定义,该字段只在验证配置时需要,读取配置时并不需要。当前提交的配置存储在`config`字段,即一个`Config`消息。 44 | 45 | // Config represents the config for a particular channel 46 | message Config { 47 | uint64 sequence = 1; 48 | ConfigGroup channel_group = 2; 49 | } 50 | 51 | The `sequence` number is incremented by one for each committed configuration. The `channel_group` field is the root group which contains the configuration. The `ConfigGroup` structure is recursively defined, and builds a tree of groups, each of which contains values and policies.
It is defined as follows: 52 | 53 | 其中`sequence`字段是每次提交配置时递增的数字;`channel_group`字段是包含该配置的根组。`ConfigGroup`结构是递归定义的,构建了一个组树,其中每个组都包含值和策略。其结构如下: 54 | 55 | // ConfigGroup is the hierarchical data structure for holding config 56 | message ConfigGroup { 57 | uint64 version = 1; 58 | map<string,ConfigGroup> groups = 2; 59 | map<string,ConfigValue> values = 3; 60 | map<string,ConfigPolicy> policies = 4; 61 | string mod_policy = 5; 62 | } 63 | 64 | Because `ConfigGroup` is a recursive structure, consider an example hierarchical arrangement of ``ConfigGroup``s (expressed for clarity in golang notation). 65 | 66 | `ConfigGroup`是递归结构,此处给出一个用golang表示的分层排列的示例: 67 | 68 | // 假设定义了以下group 69 | var root, child1, child2, grandChild1, grandChild2, grandChild3 *ConfigGroup 70 | 71 | // Set the following values 72 | root.Groups["child1"] = child1 73 | root.Groups["child2"] = child2 74 | child1.Groups["grandChild1"] = grandChild1 75 | child2.Groups["grandChild2"] = grandChild2 76 | child2.Groups["grandChild3"] = grandChild3 77 | 78 | // The resulting config structure of groups looks like: 79 | // root: 80 | // child1: 81 | // grandChild1 82 | // child2: 83 | // grandChild2 84 | // grandChild3 85 | 86 | Each group defines a level in the config hierarchy, and each group has an associated set of values (indexed by string key) and policies (also indexed by string key). 87 | 88 | 每个组都定义了配置分层结构中的一个级别,且每个组都有一组相关联的值(以string为key)和策略(也以string为key)。 89 | 90 | Values are defined by: 91 | 92 | 值定义: 93 | 94 | // ConfigValue represents an individual piece of config data 95 | message ConfigValue { 96 | uint64 version = 1; 97 | bytes value = 2; 98 | string mod_policy = 3; 99 | } 100 | 101 | Policies are defined by: 102 | 策略定义: 103 | 104 | message ConfigPolicy { 105 | uint64 version = 1; 106 | Policy policy = 2; 107 | string mod_policy = 3; 108 | } 109 | 110 | Note that Values, Policies, and Groups all have a `version` and a `mod_policy`.
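The recursive group tree shown above in golang notation can be made concrete with a small runnable sketch. The structs below are simplified stand-ins for the generated protobuf types, and the helpers `newGroup` and `lookup` are ours, not fabric's; the point is only to show the recursion and the per-element `version` that advances on modification.

```go
package main

import (
	"fmt"
	"strings"
)

// Simplified stand-ins for the generated configtx.proto messages.
type ConfigValue struct {
	Version   uint64
	Value     []byte
	ModPolicy string
}

type ConfigGroup struct {
	Version   uint64
	Groups    map[string]*ConfigGroup
	Values    map[string]*ConfigValue
	ModPolicy string
}

func newGroup() *ConfigGroup {
	return &ConfigGroup{
		Groups: map[string]*ConfigGroup{},
		Values: map[string]*ConfigValue{},
	}
}

// lookup walks a path such as "child2/grandChild2" down the group tree.
func lookup(root *ConfigGroup, path string) *ConfigGroup {
	g := root
	for _, name := range strings.Split(path, "/") {
		if g = g.Groups[name]; g == nil {
			return nil
		}
	}
	return g
}

func main() {
	// Mirror the document's example hierarchy.
	root, child1, child2 := newGroup(), newGroup(), newGroup()
	root.Groups["child1"] = child1
	root.Groups["child2"] = child2
	child2.Groups["grandChild2"] = newGroup()

	// Modifying a group (here: adding a value) bumps its version by exactly one.
	g := lookup(root, "child2/grandChild2")
	g.Values["MSP"] = &ConfigValue{ModPolicy: "Admins"}
	g.Version++

	fmt.Println(g.Version) // 1
}
```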
111 | 112 | 注意,ConfigGroup、ConfigValue、ConfigPolicy都有`version`和`mod_policy`字段。 113 | 114 | The `version` of an element is incremented each time that element is modified. The `mod_policy` is used to govern the required signatures to modify that element. 115 | 116 | 每次修改元素时,其`version`递增;`mod_policy`用于管理修改该元素所需的签名。 117 | 118 | For Groups, modification is adding or removing elements to the Values, Policies, or Groups maps (or changing the `mod_policy`). For Values and Policies, modification is changing the Value and Policy fields respectively (or changing the mod_policy). 119 | 120 | 对于Groups,修改就是添加或删除Values、Policies、或Groups中的元素(或者是改变`mod_policy`)*(译注,其实就是`ConfigGroup`中除`version`字段外其他字段的变化)*;对于Values和Policies,修改就是改变`value`或`policy`字段(或者改变`mod_policy`)。 121 | 122 | Each element’s `mod_policy` is evaluated in the context of the current level of the config. 123 | 124 | 每个元素的`mod_policy`都是在配置的当前级别(level)的上下文中求值的。 125 | 126 | Consider the following example mod policies defined at `Channel.Groups["Application"]` (Here, we use the golang map reference syntax, so `Channel.Groups["Application"].Policies["policy1"]` refers to the base `Channel` group’s `Application` group’s `Policies` map’s `policy1` policy.) 127 | 128 | 下面是一个定义在`Channel.Groups["Application"]`中的策略的示例(这里用的是golang语法,因此`Channel.Groups["Application"].Policies["policy1"]`表示根组`Channel`的子组`Application`的`Policies`里的`policy1`对应的策略) 129 | 130 | * `policy1`对应`Channel.Groups["Application"].Policies["policy1"]` 131 | * `Org1/policy2`对应`Channel.Groups["Application"].Groups["Org1"].Policies["policy2"]` 132 | * `/Channel/policy3`对应`Channel.Policies["policy3"]` 133 | 134 | Note that if a `mod_policy` references a policy which does not exist, the item cannot be modified. 135 | 136 | 注意,如果`mod_policy`引用了一个不存在的策略,那么该元素不可修改。 137 | 138 | ## Configuration updates / 更新配置 139 | 140 | Configuration updates are submitted as an `Envelope` message of type `HeaderType_CONFIG_UPDATE`.
The `Payload` `data` of the transaction is a marshaled `ConfigUpdateEnvelope`. The `ConfigUpdateEnvelope` is defined as follows: 141 | 142 | 更新配置是提交一个`HeaderType_CONFIG_UPDATE`类型的`Envelope`消息,交易的`Payload.data`字段是序列化的`ConfigUpdateEnvelope`,其定义如下: 143 | 144 | message ConfigUpdateEnvelope { 145 | bytes config_update = 1; // A marshaled ConfigUpdate structure 146 | repeated ConfigSignature signatures = 2; // Signatures over the config_update 147 | } 148 | 149 | The `signatures` field contains the set of signatures which authorizes the config update. Its message definition is: 150 | 151 | 其中`signatures`字段包含了授权更新配置的签名集,定义如下: 152 | 153 | message ConfigSignature { 154 | bytes signature_header = 1; // A marshaled SignatureHeader 155 | bytes signature = 2; // Signature over the concatenation signatureHeader bytes and config bytes 156 | } 157 | 158 | The `signature_header` is as defined for standard transactions, while the signature is over the concatenation of the `signature_header` bytes and the `config_update` bytes from the `ConfigUpdateEnvelope` message. 159 | 160 | `signature_header`如标准交易中所定义,而签名则是对`signature_header`字节与`ConfigUpdateEnvelope`中`config_update`字节拼接后的内容所做的签名。 161 | 162 | The `ConfigUpdateEnvelope` `config_update` bytes are a marshaled `ConfigUpdate` message which is defined as follows: 163 | 164 | `ConfigUpdateEnvelope`中的`config_update`字段是序列化的`ConfigUpdate`,其定义为: 165 | 166 | message ConfigUpdate { 167 | string channel_id = 1; // Which channel this config update is for 168 | ConfigGroup read_set = 2; // ReadSet explicitly lists the portion of the config which was read, this should be sparse with only Version set 169 | ConfigGroup write_set = 3; // WriteSet lists the portion of the config which was written, this should include updated Versions 170 | } 171 | 172 | The `channel_id` is the channel ID the update is bound for, this is necessary to scope the signatures which support this reconfiguration.
173 | 174 | 其中`channel_id`是本次配置更新所对应的channel ID,该字段是必要的,因为它界定了支持本次配置更新所需的签名范围。 175 | 176 | The `read_set` specifies a subset of the existing configuration, specified sparsely where only the `version` field is set and no other fields must be populated. The particular `ConfigValue` `value` or `ConfigPolicy` `policy` fields should never be set in the `read_set`. The `ConfigGroup` may have a subset of its map fields populated, so as to reference an element deeper in the config tree. For instance, to include the `Application` group in the `read_set`, its parent (the `Channel` group) must also be included in the read set, but, the `Channel` group does not need to populate all of the keys, such as the `Orderer` `group` key, or any of the `values` or `policies` keys. 177 | 178 | `read_set`是现有配置的一个子集,它以稀疏方式指定:只设置`version`字段,其他字段都不必填充,且`ConfigValue`的`value`和`ConfigPolicy`的`policy`字段绝不应出现在`read_set`中。`ConfigGroup`可以只填充其map字段的一个子集,以便引用配置树中更深层的元素。例如,要将`Application`组包含到`read_set`中,它的上层(`Channel`组)也必须包含到`read_set`中,但不必填充`Channel`组的所有key,比如`Orderer`组对应的key,或者任何`values`、`policies`的key。 179 | 180 | The `write_set` specifies the pieces of configuration which are modified. Because of the hierarchical nature of the configuration, a write to an element deep in the hierarchy must contain the higher level elements in its `write_set` as well. However, for any element in the `write_set` which is also specified in the `read_set` at the same version, the element should be specified sparsely, just as in the `read_set`.
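These sparse read/write-set semantics can be sketched with a flattened path-to-version view. This is an illustrative simplification (the real code walks the `ConfigGroup` tree rather than flat paths); it uses the same `Channel`/`Application`/`Org1` versions as the worked example that follows.

```go
package main

import "fmt"

// A flattened, illustrative view of a config tree: element path -> version.
type versions map[string]uint64

// updateSet collects every element of the write set that does not appear
// at the same version in the read set — i.e. the elements actually being
// modified by a CONFIG_UPDATE.
func updateSet(readSet, writeSet versions) versions {
	delta := versions{}
	for path, v := range writeSet {
		if rv, ok := readSet[path]; !ok || rv != v {
			delta[path] = v
		}
	}
	return delta
}

func main() {
	// Modifying Org1 under Application, as in the example below.
	readSet := versions{"Channel": 0, "Channel/Application": 3}
	writeSet := versions{"Channel": 0, "Channel/Application": 3, "Channel/Application/Org1": 3}

	// Only Org1 differs, so only Org1 ends up in the update set.
	fmt.Println(updateSet(readSet, writeSet)) // map[Channel/Application/Org1:3]
}
```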
181 | 182 | `write_set`指定了要被修改的那部分配置。由于配置的分层特性,修改深层元素就必须在`write_set`中包含其上层元素。不过,对于`write_set`中同时以相同版本出现在`read_set`中的元素,应像在`read_set`中一样以稀疏方式指定。 183 | 184 | For example, given the configuration: 185 | 186 | 例如,给出如下配置: 187 | 188 | Channel: (version 0) 189 | Orderer (version 0) 190 | Application (version 3) 191 | Org1 (version 2) 192 | 193 | To submit a configuration update which modifies `Org1`, the `read_set` would be: 194 | 195 | 修改`Org1`提交的`read_set`应为: 196 | 197 | Channel: (version 0) 198 | Application: (version 3) 199 | 200 | and the write_set would be 201 | 202 | 对应的`write_set`应是: 203 | 204 | Channel: (version 0) 205 | Application: (version 3) 206 | Org1 (version 3) 207 | 208 | When the `CONFIG_UPDATE` is received, the orderer computes the resulting `CONFIG` by doing the following: 209 | 210 | 接收到`CONFIG_UPDATE`后,orderer会通过以下步骤计算`CONFIG`结果: 211 | 212 | 1. Verifies the `channel_id` and `read_set`. All elements in the `read_set` must exist at the given versions. 213 | 214 | 校验`channel_id`和`read_set`,`read_set`中所有元素都必须以给定的版本存在。 215 | 2. Computes the update set by collecting all elements in the `write_set` which do not appear at the same version in the `read_set`. 216 | 217 | 收集`write_set`中未以相同版本出现在`read_set`中的所有元素,计算出更新集。 218 | 3. Verifies that each element in the update set increments the version number of the element update by exactly 1. 219 | 220 | 校验更新集中每个元素的版本号是否恰好递增1。 221 | 4. Verifies that the signature set attached to the `ConfigUpdateEnvelope` satisfies the `mod_policy` for each element in the update set. 222 | 223 | 校验附加在`ConfigUpdateEnvelope`上的签名集是否满足更新集中每个元素的`mod_policy`。 224 | 5. Computes a new complete version of the config by applying the update set to the current config. 225 | 226 | 通过将更新集应用于当前配置,计算出该配置的完整新版本。 227 | 6. Writes the new config into a `ConfigEnvelope` which includes the `CONFIG_UPDATE` as the `last_update` field and the new config encoded in the `config` field, along with the incremented `sequence` value.
228 | 229 | 将新配置写入一个`ConfigEnvelope`:`CONFIG_UPDATE`赋给`last_update`字段,新配置编码后赋给`config`字段,同时`sequence`值递增。 230 | 7. Writes the new `ConfigEnvelope` into a `Envelope` of type `CONFIG`, and ultimately writes this as the sole transaction in a new configuration block. 231 | 232 | 将新的`ConfigEnvelope`写入`CONFIG`类型的`Envelope`,最终将此作为唯一交易写入新的配置区块。 233 | 234 | When the peer (or any other receiver for `Deliver`) receives this configuration block, it should verify that the config was appropriately validated by applying the last_update message to the current config and verifying that the orderer-computed `config` field contains the correct new configuration. 235 | 236 | 当peer(或者任意其他`Deliver`的接收者)接收到这个配置区块后,应将`last_update`信息应用到当前配置,并校验orderer计算出的`config`字段是否包含正确的新配置,以此验证该配置是否被正确校验过。 237 | 238 | ## Permitted configuration groups and values / 许可的配置组和值 239 | 240 | Any valid configuration is a subset of the following configuration. Here we use the notation `peer.<MessageName>` to define a `ConfigValue` whose `value` field is a marshaled proto message of name `<MessageName>` defined in `fabric/protos/peer/configuration.proto`. The notations `common.<MessageName>`, `msp.<MessageName>`, and `orderer.<MessageName>` correspond similarly, but with their messages defined in `fabric/protos/common/configuration.proto`, `fabric/protos/msp/mspconfig.proto`, and `fabric/protos/orderer/configuration.proto` respectively. 241 | 242 | 任何有效的配置都是下面这份配置的子集。在此,用`peer.<MessageName>`表示一个`ConfigValue`,其`value`字段是名为`<MessageName>`的序列化proto消息,该消息定义在`fabric/protos/peer/configuration.proto`中。`common.<MessageName>`、`msp.<MessageName>`和`orderer.<MessageName>`的记法与之类似,其消息分别定义在`fabric/protos/common/configuration.proto`、`fabric/protos/msp/mspconfig.proto`和`fabric/protos/orderer/configuration.proto`中。 243 | 244 | Note, that the keys `{{org_name}}` and `{{consortium_name}}` represent arbitrary names, and indicate an element which may be repeated with different names.
245 | 246 | 注意,下面的`{{org_name}}` 和 `{{consortium_name}}`是任意的名字,表示可以重复使用不同名称的元素。 247 | 248 | &ConfigGroup{ 249 | Groups: map { 250 | "Application":&ConfigGroup{ 251 | Groups:map { 252 | {{org_name}}:&ConfigGroup{ 253 | Values:map{ 254 | "MSP":msp.MSPConfig, 255 | "AnchorPeers":peer.AnchorPeers, 256 | }, 257 | }, 258 | }, 259 | }, 260 | "Orderer":&ConfigGroup{ 261 | Groups:map { 262 | {{org_name}}:&ConfigGroup{ 263 | Values:map{ 264 | "MSP":msp.MSPConfig, 265 | }, 266 | }, 267 | }, 268 | 269 | Values:map { 270 | "ConsensusType":orderer.ConsensusType, 271 | "BatchSize":orderer.BatchSize, 272 | "BatchTimeout":orderer.BatchTimeout, 273 | "KafkaBrokers":orderer.KafkaBrokers, 274 | }, 275 | }, 276 | "Consortiums":&ConfigGroup{ 277 | Groups:map { 278 | {{consortium_name}}:&ConfigGroup{ 279 | Groups:map { 280 | {{org_name}}:&ConfigGroup{ 281 | Values:map{ 282 | "MSP":msp.MSPConfig, 283 | }, 284 | }, 285 | }, 286 | Values:map { 287 | "ChannelCreationPolicy":common.Policy, 288 | } 289 | }, 290 | }, 291 | }, 292 | }, 293 | 294 | Values: map { 295 | "HashingAlgorithm":common.HashingAlgorithm, 296 | "BlockHashingDataStructure":common.BlockDataHashingStructure, 297 | "Consortium":common.Consortium, 298 | "OrdererAddresses":common.OrdererAddresses, 299 | }, 300 | } 301 | 302 | ## Orderer system channel configuration / Order channel 配置 303 | 304 | The ordering system channel needs to define ordering parameters, and consortiums for creating channels. There must be exactly one ordering system channel for an ordering service, and it is the first channel to be created (or more accurately bootstrapped). It is recommended never to define an Application section inside of the ordering system channel genesis configuration, but may be done for testing. Note that any member with read access to the ordering system channel may see all channel creations, so this channel’s access should be restricted. 
305 | 306 | ordering系统channel定义了创建channel的ordering参数和consortiums。ordering service必须有一个ordering系统channel,这是被创建的第一个channel。建议不要在ordering系统channel初始配置中定义application部分,但是测试是可以这么做。注意,任何对ordering系统channel有读权限的成员都可以查看所有channel创建,因此channel的访问应受限制。 307 | 308 | The ordering parameters are defined as the following subset of config: 309 | 310 | ordering参数定义如下: 311 | 312 | &ConfigGroup{ 313 | Groups: map { 314 | "Orderer":&ConfigGroup{ 315 | Groups:map { 316 | {{org_name}}:&ConfigGroup{ 317 | Values:map{ 318 | "MSP":msp.MSPConfig, 319 | }, 320 | }, 321 | }, 322 | 323 | Values:map { 324 | "ConsensusType":orderer.ConsensusType, 325 | "BatchSize":orderer.BatchSize, 326 | "BatchTimeout":orderer.BatchTimeout, 327 | "KafkaBrokers":orderer.KafkaBrokers, 328 | }, 329 | }, 330 | }, 331 | 332 | Each organization participating in ordering has a group element under the `Orderer` group. This group defines a single parameter `MSP` which contains the cryptographic identity information for that organization. The `Values` of the `Orderer` group determine how the ordering nodes function. They exist per channel, so `orderer.BatchTimeout` for instance may be specified differently on one channel than another. 333 | 334 | ordering中的每个组织都在`Orderer`组下有一个组元素,这个组定义了一个`MSP`参数,这个参数包含该组织的加密身份信息。`Orderer`组中的`Values`决定了ordering节点的功能。他们存在于每个channel中,所以像`orderer.BatchTimeout`就可在不同channel中指定不同值。 335 | 336 | At startup, the orderer is faced with a filesystem which contains information for many channels. The orderer identifies the system channel by identifying the channel with the consortiums group defined. The consortiums group has the following structure. 
337 | 338 | 启动时,orderer面对的是一个包含许多channel信息的文件系统,orderer通过识别定义了consortiums组的channel来标识系统channel。consortiums组结构如下。 339 | 340 | &ConfigGroup{ 341 | Groups: map { 342 | "Consortiums":&ConfigGroup{ 343 | Groups:map { 344 | {{consortium_name}}:&ConfigGroup{ 345 | Groups:map { 346 | {{org_name}}:&ConfigGroup{ 347 | Values:map{ 348 | "MSP":msp.MSPConfig, 349 | }, 350 | }, 351 | }, 352 | Values:map { 353 | "ChannelCreationPolicy":common.Policy, 354 | } 355 | }, 356 | }, 357 | }, 358 | }, 359 | }, 360 | 361 | Note that each consortium defines a set of members, just like the organizational members for the ordering orgs. Each consortium also defines a `ChannelCreationPolicy`. This is a policy which is applied to authorize channel creation requests. Typically, this value will be set to an `ImplicitMetaPolicy` requiring that the new members of the channel sign to authorize the channel creation. More details about channel creation follow later in this document. 362 | 363 | 注意,每个consortium都定义了一组成员,就像ordering组织的组织成员一样。每个consortium也都定义了一个`ChannelCreationPolicy`,它是用于授权channel创建请求的策略。通常这个值会设为`ImplicitMetaPolicy`,要求channel的新成员签名授权channel的创建。有关channel创建的更多细节,请参阅本文档后面的内容。 364 | 365 | ## Application channel configuration / APP channel 配置 366 | 367 | Application configuration is for channels which are designed for application type transactions. It is defined as follows: 368 | 369 | 应用程序配置用于为应用类型交易设计的channel。其定义如下: 370 | 371 | &ConfigGroup{ 372 | Groups: map { 373 | "Application":&ConfigGroup{ 374 | Groups:map { 375 | {{org_name}}:&ConfigGroup{ 376 | Values:map{ 377 | "MSP":msp.MSPConfig, 378 | "AnchorPeers":peer.AnchorPeers, 379 | }, 380 | }, 381 | }, 382 | }, 383 | }, 384 | } 385 | 386 | Just like with the `Orderer` section, each organization is encoded as a group. However, instead of only encoding the `MSP` identity information, each org additionally encodes a list of `AnchorPeers`. This list allows the peers of different organizations to contact each other for peer gossip networking.
387 | 388 | 就像`Orderer`部分一样,每个组织都被编码为一个组。不过,除了`MSP`身份信息外,每个组织还额外编码了一个`AnchorPeers`列表。这个列表使不同组织的节点可以为peer的gossip网络而彼此联系。 389 | 390 | The application channel encodes a copy of the orderer orgs and consensus options to allow for deterministic updating of these parameters, so the same `Orderer` section from the orderer system channel configuration is included. However from an application perspective this may be largely ignored. 391 | 392 | 应用程序channel编码了orderer组织和共识选项的一份副本,以允许对这些参数进行确定性更新,因此包含了与orderer系统channel配置相同的`Orderer`部分。但从应用角度看,这部分基本可以忽略。 393 | 394 | ## Channel creation / 创建channel 395 | 396 | When the orderer receives a `CONFIG_UPDATE` for a channel which does not exist, the orderer assumes that this must be a channel creation request and performs the following. 397 | 398 | 当orderer接收到一个不存在的channel的`CONFIG_UPDATE`信息时,orderer就会认定这是一个创建channel的请求,并执行以下操作: 399 | 400 | 1. The orderer identifies the consortium which the channel creation request is to be performed for. It does this by looking at the `Consortium` value of the top level group. 401 | 402 | 通过查看顶层组中的`Consortium`值,orderer标识出本次创建channel请求所对应的consortium***(译注:这个词暂时不知翻译成什么好)***。 403 | 2. The orderer verifies that the organizations included in the `Application` group are a subset of the organizations included in the corresponding consortium and that the `ApplicationGroup` is set to `version` `1`. 404 | 405 | orderer验证`Application`组中的组织是对应consortium中组织的子集,并验证`ApplicationGroup`的`version`为`1`。 406 | 3. The orderer verifies that if the consortium has members, that the new channel also has application members (creation consortiums and channels with no members is useful for testing only). 407 | 408 | orderer验证:如果该consortium拥有成员,那么新channel也必须拥有application成员(创建没有成员的consortium和channel仅用于测试)。 409 | 4.
The orderer creates a template configuration by taking the `Orderer` group from the ordering system channel, and creating an `Application` group with the newly specified members and specifying its `mod_policy` to be the `ChannelCreationPolicy` as specified in the consortium config. Note that the policy is evaluated in the context of the new configuration, so a policy requiring `ALL` members, would require signatures from all the new channel members, not all the members of the consortium. 410 | 411 | orderer从ordering系统channel取得`Orderer`组,并创建一个包含新指定成员的`Application`组,将其`mod_policy`指定为consortium配置中指定的`ChannelCreationPolicy`,从而创建出一个模板配置。注意,这个策略(mod_policy)是在新配置的上下文中求值的,因此一个要求`ALL`成员的策略,需要的是新channel所有成员的签名,而不是consortium所有成员的签名。 412 | 5. The orderer then applies the `CONFIG_UPDATE` as an update to this template configuration. Because the `CONFIG_UPDATE` applies modifications to the `Application` group (its `version` is 1), the config code validates these updates against the `ChannelCreationPolicy`. If the channel creation contains any other modifications, such as to an individual org’s anchor peers, the corresponding mod policy for the element will be invoked. 413 | 414 | orderer随后将`CONFIG_UPDATE`作为对这个模板配置的更新来应用。因为`CONFIG_UPDATE`对`Application`组(其`version`为1)进行了修改,所以配置代码会根据`ChannelCreationPolicy`验证这些更新。如果channel创建还包含任何其它修改,例如修改单个组织的锚节点,则会调用该元素相应的mod策略。 415 | 6. The new `CONFIG` transaction with the new channel config is wrapped and sent for ordering on the ordering system channel. After ordering, the channel is created.
416 | 417 | 带有新channel配置的`CONFIG`交易被包装起来,发送到ordering系统channel进行排序;排序之后,channel就创建完成了。 -------------------------------------------------------------------------------- /docs/configtxgen_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](http://hyperledger-fabric.readthedocs.io/en/latest/configtxgen.html) | Fei Cao | | 5 | 6 | 7 | This document describes the usage for the configtxgen utility for manipulating Hyperledger Fabric channel configuration. 8 | 9 | 本文档描述了`configtxgen`工具的用法,该工具用来操作超级账本Fabric的通道配置。 10 | 11 | For now, the tool is primarily focused on generating the genesis block for bootstrapping the orderer, but it is intended to be enhanced in the future for generating new channel configurations as well as reconfiguring existing channels. 12 | 13 | 目前,该工具主要侧重于生成引导排序节点(orderer)的创世区块,但预计将来会增强,以支持生成新通道的配置以及重新配置已有的通道。 14 | 15 | ## Configuration Profiles - 配置文件 16 | 17 | The configuration parameters supplied to the configtxgen tool are primarily provided by the configtx.yaml file. This file is located at fabric/sampleconfig/configtx.yaml in the fabric.git repository. 18 | 19 | configtxgen工具的配置参数主要由`configtx.yaml`文件提供。在fabric库中,该配置文件位于`fabric/sampleconfig/configtx.yaml`。 20 | 21 | This configuration file is split primarily into three pieces. 22 | 23 | 此配置文件主要分为三部分。 24 | 25 | 1.The Profiles section. By default, this section includes some sample configurations which can be used for development or testing scenarios, and refer to crypto material present in the fabric.git tree. These profiles can make a good starting point for constructing a real deployment profile. The configtxgen tool allows you to specify the profile it is operating under by passing the -profile flag. Profiles may explicitly declare all configuration, but usually inherit configuration from the defaults in (3) below.
26 | 27 | 1.`Profiles`部分。默认情况下,这部分包含一些用于开发或测试场景的示例配置,这些配置引用了fabric目录中的加密材料。这些配置可以作为构建真实部署配置的良好起点。`configtxgen`工具允许你通过`-profile`标志来指定所使用的配置。`Profiles`部分可以显式声明所有配置,但通常都是从以下(3)的默认配置中继承。 28 | 29 | 2.The Organizations section. By default, this section includes a single reference to the sampleconfig MSP definition. For production deployments, the sample organization should be removed, and the MSP definitions of the network members should be referenced and defined instead. Each element in the Organizations section should be tagged with an anchor label such as &orgName which will allow the definition to be referenced in the Profiles sections. 30 | 31 | 2.`Organizations`部分。默认情况下,这部分包含对示例配置MSP定义的单一引用。对于生产部署,应删除示例组织,转而引用并定义网络成员的MSP定义。`Organizations`部分的每个元素都应带有锚标签,如`&orgName`,以便在`Profiles`部分引用该定义。 32 | 33 | 3.The default sections. There are default sections for Orderer and Application configuration, these include attributes like BatchTimeout and are generally used as the base inherited values for the profiles. 34 | 35 | 3.默认部分。此部分包括Orderer和Application的默认配置,包含`BatchTimeout`之类的属性,通常用作profile继承的基础值。 36 | 37 | This configuration file may be edited, or, individual properties may be overridden by setting environment variables, such as CONFIGTX_ORDERER_ORDERERTYPE=kafka. Note that the Profiles element and profile name do not need to be specified. 38 | 39 | 这个配置文件可以被编辑,或者通过设置环境变量来重写个别属性值,如`CONFIGTX_ORDERER_ORDERERTYPE=kafka`。注意,`Profiles`元素和profile名称不需要指定。 40 | 41 | ## Bootstrapping the orderer - 引导排序节点 42 | 43 | After creating a configuration profile as desired, simply invoke 44 | 45 | 按需创建配置文件后,只需调用 46 | 47 | ~~~ 48 | configtxgen -profile <profile_name> 49 | ~~~ 50 | 51 | This will produce a genesis.block file in the current directory. You may optionally specify another filename by passing in the -path parameter, or, you may skip the writing of the file by passing the dryRun parameter if you simply wish to test parsing of the file.
52 | 53 | 这会在当前目录下生成`genesis.block`文件。你也可以通过`-path`参数指定其他文件名;如果你只想测试配置文件的解析,可以通过`dryRun`参数跳过写文件。 54 | 55 | Then, to utilize this genesis block, before starting the orderer, simply specify ORDERER_GENERAL_GENESISMETHOD=file and ORDERER_GENERAL_GENESISFILE=$PWD/genesis.block or modify the orderer.yaml file to encode these values. 56 | 57 | 然后,为了使用生成的创世块,在启动orderer之前,只需指定`ORDERER_GENERAL_GENESISMETHOD=file` 和`ORDERER_GENERAL_GENESISFILE=$PWD/genesis.block`,或者修改`orderer.yaml`文件写入这些属性值。 58 | 59 | ## Creating a channel - 创建通道 60 | 61 | The tool can also output a channel creation tx by executing 62 | 63 | 执行以下命令,此工具同样可以输出创建通道的交易 64 | 65 | ~~~ 66 | configtxgen -profile <profile_name> -channelID <channel_id> -outputCreateChannelTx <tx_filename> 67 | ~~~ 68 | 69 | This will output a marshaled Envelope message which may be sent to broadcast to create a channel. 70 | 71 | 这将输出一个序列化的`Envelope`消息,可以将其发送给broadcast接口来创建通道。 72 | 73 | ## Reviewing a configuration - 检查配置 74 | 75 | In addition to creating configuration, the configtxgen tool is also capable of inspecting configuration. 76 | 77 | 除了创建配置,`configtxgen`工具同样提供检查配置的功能。 78 | 79 | It supports inspecting both configuration blocks, and configuration transactions. You may use the inspect flags -inspectBlock and -inspectChannelCreateTx respectively with the path to a file to inspect to output a human readable (JSON) representation of the configuration. 80 | 81 | 它支持检查配置块和配置交易。你可以分别使用检查标志`-inspectBlock` 和 `-inspectChannelCreateTx` 并指定待检查文件的路径,来输出可读的(JSON)格式的配置。 82 | 83 | You may even wish to combine the inspection with generation.
For example: 84 | 85 | 你甚至可能希望将创建与检查相结合。例如: 86 | 87 | ~~~ 88 | $ build/bin/configtxgen -channelID foo -outputBlock foo.block -inspectBlock foo.block 89 | 2017/03/01 21:24:24 Loading configuration 90 | 2017/03/01 21:24:24 Checking for configtx.yaml at: 91 | 2017/03/01 21:24:24 Checking for configtx.yaml at: 92 | 2017/03/01 21:24:24 Checking for configtx.yaml at: /home/yellickj/go/src/github.com/hyperledger/fabric/common/configtx/tool 93 | 2017/03/01 21:24:24 map[orderer:map[BatchSize:map[MaxMessageCount:10 AbsoluteMaxBytes:99 MB PreferredMaxBytes:512 KB] Kafka:map[Brokers:[127.0.0.1:9092]] Organizations: OrdererType:solo Addresses:[127.0.0.1:7050] BatchTimeout:10s] application:map[Organizations:] profiles:map[SampleInsecureSolo:map[Orderer:map[BatchTimeout:10s BatchSize:map[MaxMessageCount:10 AbsoluteMaxBytes:99 MB PreferredMaxBytes:512 KB] Kafka:map[Brokers:[127.0.0.1:9092]] Organizations: OrdererType:solo Addresses:[127.0.0.1:7050]] Application:map[Organizations:]] SampleInsecureKafka:map[Orderer:map[Addresses:[127.0.0.1:7050] BatchTimeout:10s BatchSize:map[AbsoluteMaxBytes:99 MB PreferredMaxBytes:512 KB MaxMessageCount:10] Kafka:map[Brokers:[127.0.0.1:9092]] Organizations: OrdererType:kafka] Application:map[Organizations:]] SampleSingleMSPSolo:map[Orderer:map[OrdererType:solo Addresses:[127.0.0.1:7050] BatchTimeout:10s BatchSize:map[MaxMessageCount:10 AbsoluteMaxBytes:99 MB PreferredMaxBytes:512 KB] Kafka:map[Brokers:[127.0.0.1:9092]] Organizations:[map[Name:SampleOrg ID:DEFAULT MSPDir:msp BCCSP:map[Default:SW SW:map[Hash:SHA3 Security:256 FileKeyStore:map[KeyStore:]]] AnchorPeers:[map[Host:127.0.0.1 Port:7051]]]]] Application:map[Organizations:[map[Name:SampleOrg ID:DEFAULT MSPDir:msp BCCSP:map[Default:SW SW:map[Hash:SHA3 Security:256 FileKeyStore:map[KeyStore:]]] AnchorPeers:[map[Port:7051 Host:127.0.0.1]]]]]]] organizations:[map[Name:SampleOrg ID:DEFAULT MSPDir:msp BCCSP:map[Default:SW SW:map[Hash:SHA3 Security:256 FileKeyStore:map[KeyStore:]]] 
AnchorPeers:[map[Host:127.0.0.1 Port:7051]]]]] 94 | 2017/03/01 21:24:24 Generating genesis block 95 | 2017/03/01 21:24:24 Writing genesis block 96 | 2017/03/01 21:24:24 Inspecting block 97 | 2017/03/01 21:24:24 Parsing genesis block 98 | Config for channel: foo 99 | { 100 | "": { 101 | "Values": {}, 102 | "Groups": { 103 | "/Channel": { 104 | "Values": { 105 | "HashingAlgorithm": { 106 | "Version": "0", 107 | "ModPolicy": "", 108 | "Value": { 109 | "name": "SHA256" 110 | } 111 | }, 112 | "BlockDataHashingStructure": { 113 | "Version": "0", 114 | "ModPolicy": "", 115 | "Value": { 116 | "width": 4294967295 117 | } 118 | }, 119 | "OrdererAddresses": { 120 | "Version": "0", 121 | "ModPolicy": "", 122 | "Value": { 123 | "addresses": [ 124 | "127.0.0.1:7050" 125 | ] 126 | } 127 | } 128 | }, 129 | "Groups": { 130 | "/Channel/Orderer": { 131 | "Values": { 132 | "ChainCreationPolicyNames": { 133 | "Version": "0", 134 | "ModPolicy": "", 135 | "Value": { 136 | "names": [ 137 | "AcceptAllPolicy" 138 | ] 139 | } 140 | }, 141 | "ConsensusType": { 142 | "Version": "0", 143 | "ModPolicy": "", 144 | "Value": { 145 | "type": "solo" 146 | } 147 | }, 148 | "BatchSize": { 149 | "Version": "0", 150 | "ModPolicy": "", 151 | "Value": { 152 | "maxMessageCount": 10, 153 | "absoluteMaxBytes": 103809024, 154 | "preferredMaxBytes": 524288 155 | } 156 | }, 157 | "BatchTimeout": { 158 | "Version": "0", 159 | "ModPolicy": "", 160 | "Value": { 161 | "timeout": "10s" 162 | } 163 | }, 164 | "IngressPolicyNames": { 165 | "Version": "0", 166 | "ModPolicy": "", 167 | "Value": { 168 | "names": [ 169 | "AcceptAllPolicy" 170 | ] 171 | } 172 | }, 173 | "EgressPolicyNames": { 174 | "Version": "0", 175 | "ModPolicy": "", 176 | "Value": { 177 | "names": [ 178 | "AcceptAllPolicy" 179 | ] 180 | } 181 | } 182 | }, 183 | "Groups": {} 184 | }, 185 | "/Channel/Application": { 186 | "Values": {}, 187 | "Groups": {} 188 | } 189 | } 190 | } 191 | } 192 | } 193 | } 194 | ~~~ 195 | 196 | 
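Putting the pieces above together, the generate / inspect / bootstrap steps can be sketched as the following shell session (the profile name `SampleInsecureSolo`, the channel ID `mychannel`, and the output filenames are assumptions for illustration; substitute the profile defined in your own `configtx.yaml`):

把上面的几个步骤串起来,生成/检查/引导的流程可以概括为下面的 shell 示例(其中 profile 名 `SampleInsecureSolo`、通道 ID `mychannel` 以及输出文件名均为示例假设,请替换为你自己 `configtx.yaml` 中定义的 profile):

```shell
# 用指定 profile 生成引导 orderer 的创世块
configtxgen -profile SampleInsecureSolo -outputBlock genesis.block

# 为新通道生成创建通道的交易(序列化的 Envelope 消息)
configtxgen -profile SampleInsecureSolo -channelID mychannel -outputCreateChannelTx mychannel.tx

# 以可读的 JSON 形式检查生成的产物
configtxgen -inspectBlock genesis.block
configtxgen -inspectChannelCreateTx mychannel.tx

# 指定创世块并启动 orderer
ORDERER_GENERAL_GENESISMETHOD=file ORDERER_GENERAL_GENESISFILE=$PWD/genesis.block orderer
```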
-------------------------------------------------------------------------------- /docs/configtxlator_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](http://hyperledger-fabric.readthedocs.io/en/latest/configtxlator.html) | Shaoxi Qiu | | 5 | 6 | 7 | 概览(Overview) 8 | -------- 9 | 10 | The ``configtxlator`` tool was created to support reconfiguration independent 11 | of SDKs. Channel configuration is stored as a transaction in configuration 12 | blocks of a channel and may be manipulated directly, such as in the bdd behave 13 | tests. However, at the time of this writing, no SDK natively supports 14 | manipulating the configuration directly, so the ``configtxlator`` tool is 15 | designed to provide an API which consumers of any SDK may interact with to 16 | assist with configuration updates. 17 | 18 | 开发 ``configtxlator`` 工具是为了支持独立于SDK来进行重新配置。通道配置以一个交易的形式存储在通道的配置区块中,并且能够直接被修改,例如在bdd行为测试中。 19 | 然而,在撰写本文之时,还没有SDK原生支持直接修改配置,所以 ``configtxlator`` 工具被设计为提供一个API,让任意一个SDK的用户都能够与之交互来更新配置。 20 | 21 | The tool name is a portmanteau of *configtx* and *translator* and is intended to 22 | convey that the tool simply converts between different equivalent data 23 | representations. It does not generate configuration. It does not submit or 24 | retrieve configuration. It does not modify configuration itself, it simply 25 | provides some bijective operations between different views of the configtx 26 | format. 27 | 28 | 工具的名称是 *configtx* 和 *translator* 的拼接,意在传达该工具只是简单地在不同的等效数据表示之间进行转换。 29 | 它不生成配置,也不提交或获取配置。它不修改配置本身,只是简单地提供configtx格式不同视图之间的一些双射转换操作。 30 | 31 | 译者注:既是单射又是满射的函数称为双射。函数为双射当且仅当每个可能的像有且仅有一个变量与之对应。参考:http://www.cnblogs.com/wanghetao/archive/2012/03/16/2400619.html 32 | 33 | The standard usage is expected to be: 34 | 35 | 1. SDK retrieves latest config 36 | 2. ``configtxlator`` produces human readable version of config 37 | 3. User or application edits the config 38 | 4.
``configtxlator`` is used to compute config update representation of 39 | changes to the config 40 | 5. SDK signs and submits config 41 | 42 | 43 | 标准用法: 44 | 45 | 1. SDK 取出最新的配置 46 | 2. ``configtxlator`` 工具产生可读版本的配置文件 47 | 3. 用户或者应用编辑配置文件 48 | 4. 使用 ``configtxlator`` 工具计算更新后的配置相对于原有配置的配置更新 49 | 5. SDK 签名并提交配置更新 50 | 51 | 52 | The ``configtxlator`` tool exposes a truly stateless REST API for interacting 53 | with configuration elements. These REST components support converting the 54 | native configuration format to/from a human readable JSON representation, as 55 | well as computing configuration updates based on the difference between two 56 | configurations. 57 | 58 | ``configtxlator`` 工具暴露一个完全无状态的 REST API 接口,用来和配置元素进行交互。 59 | 这些 REST 组件支持原生的配置格式和可读的JSON格式之间的相互转换, 同时根据两个配置之间的差异计算配置更新。 60 | 61 | Because the ``configtxlator`` service deliberately does not contain any crypto 62 | material, or otherwise secret information, it does not include any authorization 63 | or access control. The anticipated typical deployment would be to operate as 64 | a sandboxed container, locally with the application, so that there is a 65 | dedicated ``configtxlator`` process for each consumer of it. 66 | 67 | 因为 ``configtxlator`` 工具特意没有包含任何密码材料和机密信息, 所以它没有任何授权或访问控制。 68 | 预计的典型部署方式是作为沙盒容器, 与应用一起在本地运行, 这样每一个使用者都有一个专用的 ``configtxlator`` 进程。 69 | 70 | 71 | 72 | 运行 configtxlator 工具(Running the configtxlator) 73 | ------------------------- 74 | 75 | The ``configtxlator`` tool can be downloaded with the other Hyperledger Fabric 76 | platform-specific binaries. Please see :ref:`download-platform-specific-binaries` 77 | for details. 78 | 79 | ``configtxlator`` 工具可以和其他 Hyperledger Fabric 平台专用二进制文件一起下载。详情请查看 :ref:`download-platform-specific-binaries` 。 80 | 81 | The tool may be configured to listen on a different port and you may also 82 | specify the hostname using the ``--port`` and ``--hostname`` flags. To explore 83 | the complete set of commands and flags, run ``configtxlator --help``.
84 | 85 | 该工具可以配置为监听不同的端口和主机名,使用 ``--port`` 和 ``--hostname`` 参数即可。 86 | 要查看所有命令和参数的详细信息,执行 ``configtxlator --help``。 87 | 88 | The binary will start an http server listening on the designated port and is now 89 | ready to process request. 90 | 91 | 该工具会启动一个HTTP服务器,监听指定的端口并等待处理请求。 92 | 93 | To start the ``configtxlator`` server: 94 | 95 | 执行命令启动 ``configtxlator`` 服务: 96 | 97 | .. code:: bash 98 | 99 | configtxlator start 100 | 2017-06-21 18:16:58.248 HKT [configtxlator] startServer -> INFO 001 Serving HTTP requests on 0.0.0.0:7059 101 | 102 | 原型翻译(Proto translation) 103 | ----------------- 104 | 105 | For extensibility, and because certain fields must be signed over, many proto 106 | fields are stored as bytes. This makes the natural proto to JSON translation 107 | using the ``jsonpb`` package ineffective for producing a human readable version 108 | of the protobufs. Instead, the ``configtxlator`` exposes a REST component to do 109 | a more sophisticated translation. 110 | 111 | 为了可扩展性,以及因为某些字段必须被签名,许多原型(proto)字段被存储为字节。这使得用 ``jsonpb`` 工具包将原型自然地转换为可读 JSON 的方式变得无效。 112 | 替代的方式是, ``configtxlator`` 暴露一个REST 组件去做更复杂的翻译。 113 | 114 | To convert a proto to its human readable JSON equivalent, simply post the binary 115 | proto to the rest target 116 | ``http://$SERVER:$PORT/protolator/decode/<message.Name>``, 117 | where ``<message.Name>`` is the fully qualified proto name of the message. 118 | 119 | 要将原型转换为等价的可读 JSON 格式,只要发送二进制原型到 rest 目标 120 | ``http://$SERVER:$PORT/protolator/decode/<message.Name>``, 121 | 这里的 ``<message.Name>`` 是消息的完全限定原型名称。 122 | 123 | For instance, to decode a configuration block saved as 124 | ``configuration_block.pb``, run the command: 125 | 126 | 例如,为了解码一个存储为 ``configuration_block.pb`` 的配置区块,执行命令: 127 | 128 | ..
code:: bash 129 | 130 | curl -X POST --data-binary @configuration_block.pb http://127.0.0.1:7059/protolator/decode/common.Block 131 | 132 | To convert the human readable JSON version of the proto message, simply post the 133 | JSON version to ``http://$SERVER:$PORT/protolator/encode/<message.Name>``, where 134 | ``<message.Name>`` is again the fully qualified proto name of the message. 135 | 136 | 要将可读的JSON版本转换回原型数据,只要发送JSON版本到 ``http://$SERVER:$PORT/protolator/encode/<message.Name>`` , 137 | 这里的 ``<message.Name>`` 同样是消息的完全限定原型名称。 138 | 139 | For instance, to re-encode the block saved as ``configuration_block.json``, run 140 | the command: 141 | 142 | 例如,重新编码存储为 ``configuration_block.json`` 的配置区块,执行命令: 143 | 144 | .. code:: bash 145 | 146 | curl -X POST --data-binary @configuration_block.json http://127.0.0.1:7059/protolator/encode/common.Block 147 | 148 | Any of the configuration related protos, including ``common.Block``, 149 | ``common.Envelope``, ``common.ConfigEnvelope``, ``common.ConfigUpdateEnvelope``, 150 | ``common.Configuration``, and ``common.ConfigUpdate`` are valid targets for 151 | these URLs. In the future, other proto decoding types may be added, such as 152 | for endorser transactions. 153 | 154 | 155 | 任何与配置相关的原型,包括 ``common.Block``, 156 | ``common.Envelope``, ``common.ConfigEnvelope``, ``common.ConfigUpdateEnvelope``, 157 | ``common.Configuration``, 和 ``common.ConfigUpdate`` 都是这些地址的合法目标。 158 | 未来,可能会增加其他原型解码类型,比如背书交易。 159 | 160 | 161 | 配置更新计算(Config update computation) 162 | ------------------------- 163 | 164 | Given two different configurations, it is possible to compute the config update 165 | which transitions between them. Simply POST the two ``common.Config`` proto 166 | encoded configurations as ``multipart/formdata``, with the original as field 167 | ``original`` and the updated as field ``updated``, to 168 | ``http://$SERVER:$PORT/configtxlator/compute/update-from-configs``.
169 | 170 | 给定两个不同的配置,可以计算出在二者之间转换所需的配置更新。 171 | 向 ``http://$SERVER:$PORT/configtxlator/compute/update-from-configs`` 172 | 以 ``multipart/formdata`` 的形式发送两个 ``common.Config`` 原型编码的配置,其中原始配置填入 ``original`` 域,更新后的配置填入 ``updated`` 域。 173 | 174 | For example, given the original config as the file ``original_config.pb`` and 175 | the updated config as the file ``updated_config.pb`` for the channel 176 | ``desiredchannel``: 177 | 178 | 例如,对于通道 ``desiredchannel`` 的原始配置文件 ``original_config.pb`` 和更新配置文件 ``updated_config.pb`` : 179 | 180 | .. code:: bash 181 | 182 | curl -X POST -F channel=desiredchannel -F original=@original_config.pb -F updated=@updated_config.pb http://127.0.0.1:7059/configtxlator/compute/update-from-configs 183 | 184 | 185 | 引导实例(Bootstrapping example) 186 | -------------------- 187 | 188 | First start the ``configtxlator``: 189 | 190 | 首先,启动 ``configtxlator`` 工具: 191 | 192 | .. code:: bash 193 | 194 | $ configtxlator start 195 | 2017-05-31 12:57:22.499 EDT [configtxlator] main -> INFO 001 Serving HTTP requests on port: 7059 196 | 197 | First, produce a genesis block for the ordering system channel: 198 | 199 | 接着,为排序系统通道产生初始区块: 200 | 201 | .. code:: bash 202 | 203 | $ configtxgen -outputBlock genesis_block.pb 204 | 2017-05-31 14:15:16.634 EDT [common/configtx/tool] main -> INFO 001 Loading configuration 205 | 2017-05-31 14:15:16.646 EDT [common/configtx/tool] doOutputBlock -> INFO 002 Generating genesis block 206 | 2017-05-31 14:15:16.646 EDT [common/configtx/tool] doOutputBlock -> INFO 003 Writing genesis block 207 | 208 | Decode the genesis block into a human editable form: 209 | 210 | 将初始区块解码为可编辑的形式: 211 | 212 | .. code:: bash 213 | 214 | curl -X POST --data-binary @genesis_block.pb http://127.0.0.1:7059/protolator/decode/common.Block > genesis_block.json 215 | 216 | Edit the ``genesis_block.json`` file in your favorite JSON editor, or manipulate 217 | it programmatically. Here we use the JSON CLI tool ``jq``.
For simplicity, we 218 | are editing the batch size for the channel, because it is a single numeric 219 | field. However, any edits, including policy and MSP edits may be made here. 220 | 221 | 使用你喜欢的JSON编辑器编辑 ``genesis_block.json`` 文件,或使用程序修改。 这里我们使用 JSON 命令行工具 ``jq`` 。 222 | 为了简单,这里编辑通道的批处理大小(batch size),因为这是一个单一的数字字段。 223 | 然而,任何修改,包括策略和MSP的修改,都可以在这里进行。 224 | 225 | First, let's establish an environment variable to hold the string that defines 226 | the path to a property in the json: 227 | 228 | 首先,建立一个环境变量来保存JSON中某个属性路径的字符串: 229 | 230 | .. code:: bash 231 | 232 | export MAXBATCHSIZEPATH=".data.data[0].payload.data.config.channel_group.groups.Orderer.values.BatchSize.value.max_message_count" 233 | 234 | Next, let's display the value of that property: 235 | 236 | 然后,显示该属性的值: 237 | 238 | .. code:: bash 239 | 240 | jq "$MAXBATCHSIZEPATH" genesis_block.json 241 | 10 242 | 243 | Now, let's set the new batch size, and display the new value: 244 | 245 | 现在,设置新的批处理大小,并且显示新值: 246 | 247 | jq "$MAXBATCHSIZEPATH = 20" genesis_block.json > updated_genesis_block.json 248 | jq "$MAXBATCHSIZEPATH" updated_genesis_block.json 249 | 20 250 | 251 | The genesis block is now ready to be re-encoded into the native proto form to be 252 | used for bootstrapping: 253 | 254 | 初始区块现在已经可以被重新编码为用于引导启动的原生原型格式: 255 | 256 | .. code:: bash 257 | 258 | curl -X POST --data-binary @updated_genesis_block.json http://127.0.0.1:7059/protolator/encode/common.Block > updated_genesis_block.pb 259 | 260 | The ``updated_genesis_block.pb`` file may now be used as the genesis block for 261 | bootstrapping an ordering system channel. 262 | 263 | 现在, ``updated_genesis_block.pb`` 文件可以作为初始区块来引导排序系统通道启动了。 264 | 265 | 266 | 重配置示例(Reconfiguration example) 267 | ----------------------- 268 | 269 | In another terminal window, start the orderer using the default options, 270 | including the provisional bootstrapper which will create a ``testchainid`` 271 | ordering system channel.
272 | 273 | 打开另一个终端窗口,使用默认选项启动orderer, 274 | 包括临时的引导程序,它将会创建一个名称为 ``testchainid`` 的排序系统通道。 275 | 276 | .. code:: bash 277 | 278 | ORDERER_GENERAL_LOGLEVEL=debug orderer 279 | 280 | Reconfiguring a channel can be performed in a very similar way to modifying a 281 | genesis config. 282 | 283 | 重配置一个通道与修改初始配置的方式非常类似。 284 | 285 | First, fetch the config_block proto: 286 | 287 | 首先,获取配置区块原型: 288 | 289 | .. code:: bash 290 | 291 | $ peer channel fetch config config_block.pb -o 127.0.0.1:7050 -c testchainid 292 | 2017-05-31 15:11:37.617 EDT [msp] getMspConfig -> INFO 001 intermediate certs folder not found at [/home/yellickj/go/src/github.com/hyperledger/fabric/sampleconfig/msp/intermediatecerts]. Skipping.: [stat /home/yellickj/go/src/github.com/hyperledger/fabric/sampleconfig/msp/intermediatecerts: no such file or directory] 293 | 2017-05-31 15:11:37.617 EDT [msp] getMspConfig -> INFO 002 crls folder not found at [/home/yellickj/go/src/github.com/hyperledger/fabric/sampleconfig/msp/intermediatecerts]. Skipping.: [stat /home/yellickj/go/src/github.com/hyperledger/fabric/sampleconfig/msp/crls: no such file or directory] 294 | Received block: 1 295 | Received block: 1 296 | 2017-05-31 15:11:37.635 EDT [main] main -> INFO 003 Exiting..... 297 | 298 | Next, send the config block to the ``configtxlator`` service for decoding: 299 | 300 | 然后,发送配置区块到 ``configtxlator`` 服务进行解码: 301 | 302 | .. code:: bash 303 | 304 | curl -X POST --data-binary @config_block.pb http://127.0.0.1:7059/protolator/decode/common.Block > config_block.json 305 | 306 | Extract the config section from the block: 307 | 308 | 从区块中提取配置部分: 309 | 310 | .. code:: bash 311 | 312 | jq .data.data[0].payload.data.config config_block.json > config.json 313 | 314 | Edit the config, saving it as a new ``updated_config.json``. Here, we set the 315 | batch size to 30. 316 | 317 | 编辑配置,将编辑后的内容存为新文件 ``updated_config.json`` 。这里我们设置批处理大小为30。 318 | 319 | ..
code:: bash 320 | 321 | jq ".channel_group.groups.Orderer.values.BatchSize.value.max_message_count = 30" config.json > updated_config.json 322 | 323 | Re-encode both the original config, and the updated config into proto: 324 | 325 | 重新将原配置与更新后的配置编码为原型: 326 | 327 | .. code:: bash 328 | 329 | curl -X POST --data-binary @config.json http://127.0.0.1:7059/protolator/encode/common.Config > config.pb 330 | 331 | .. code:: bash 332 | 333 | curl -X POST --data-binary @updated_config.json http://127.0.0.1:7059/protolator/encode/common.Config > updated_config.pb 334 | 335 | Now, with both configs properly encoded, send them to the `configtxlator` 336 | service to compute the config update which transitions between the two. 337 | 338 | 现在,将两个正确编码后的配置发送到 `configtxlator` 服务,计算在二者之间转换所需的配置更新。 339 | 340 | .. code:: bash 341 | 342 | curl -X POST -F original=@config.pb -F updated=@updated_config.pb http://127.0.0.1:7059/configtxlator/compute/update-from-configs -F channel=testchainid > config_update.pb 343 | 344 | At this point, the computed config update is now prepared. Traditionally, 345 | an SDK would be used to sign and wrap this message. However, in the interest of 346 | using only the peer cli, the `configtxlator` can also be used for this task. 347 | 348 | 到此,计算出的配置更新已经准备好了。通常,会使用SDK对该消息进行签名和打包。 349 | 然而,为了只使用节点命令行(peer cli), `configtxlator` 工具也能完成这个工作。 350 | 351 | First, we decode the ConfigUpdate so that we may work with it as text: 352 | 353 | 首先,我们将 ConfigUpdate 解码,以便以文本形式处理它: 354 | 355 | .. code:: bash 356 | 357 | $ curl -X POST --data-binary @config_update.pb http://127.0.0.1:7059/protolator/decode/common.ConfigUpdate > config_update.json 358 | 359 | Then, we wrap it in an envelope message: 360 | 361 | 然后,将它包装进一个envelope消息: 362 | 363 | ..
code:: bash 364 | 365 | echo '{"payload":{"header":{"channel_header":{"channel_id":"testchainid", "type":2}},"data":{"config_update":'$(cat config_update.json)'}}}' > config_update_as_envelope.json 366 | 367 | Next, convert it back into the proto form of a full fledged config 368 | transaction: 369 | 370 | 接着,将它转换回一个完整配置交易的原型形式: 371 | 372 | .. code:: bash 373 | 374 | curl -X POST --data-binary @config_update_as_envelope.json http://127.0.0.1:7059/protolator/encode/common.Envelope > config_update_as_envelope.pb 375 | 376 | Finally, submit the config update transaction to ordering to perform a config 377 | update. 378 | 379 | 最后,将配置更新交易提交到排序服务以执行配置更新。 380 | 381 | .. code:: bash 382 | 383 | peer channel update -f config_update_as_envelope.pb -c testchainid -o 127.0.0.1:7050 384 | 385 | 386 | 增加组织(Adding an organization) 387 | ---------------------- 388 | 389 | First start the ``configtxlator``: 390 | 391 | 首先启动 ``configtxlator`` 服务: 392 | 393 | .. code:: bash 394 | 395 | $ configtxlator start 396 | 2017-05-31 12:57:22.499 EDT [configtxlator] main -> INFO 001 Serving HTTP requests on port: 7059 397 | 398 | Start the orderer using the ``SampleDevModeSolo`` profile option. 399 | 400 | 使用 ``SampleDevModeSolo`` 配置选项来启动排序服务。 401 | 402 | .. code:: bash 403 | 404 | ORDERER_GENERAL_LOGLEVEL=debug ORDERER_GENERAL_GENESISPROFILE=SampleDevModeSolo orderer 405 | 406 | The process to add an organization then follows exactly like the batch size 407 | example. However, instead of setting the batch size, a new org is defined at 408 | the application level. Adding an organization is slightly more involved because 409 | we must first create a channel, then modify its membership set. 410 | 411 | 增加组织的过程和修改批处理大小的示例完全类似。然而,不同于设置批处理大小,新的组织被定义在应用层。 412 | 增加一个组织涉及更多内容,因为需要先创建通道,然后修改它的成员集。 413 | 414 | ..
Licensed under Creative Commons Attribution 4.0 International License 415 | https://creativecommons.org/licenses/by/4.0/ 416 | -------------------------------------------------------------------------------- /docs/endorsement-policies_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](http://hyperledger-fabric.readthedocs.io/en/latest/endorsement-policies.html) | Linsheng Yu | | 5 | 6 | 7 | Endorsement policies are used to instruct a peer on how to decide whether a transaction is properly endorsed. When a peer receives a transaction, it invokes the VSCC (Validation System Chaincode) associated with the transaction’s Chaincode as part of the transaction validation flow to determine the validity of the transaction. Recall that a transaction contains one or more endorsement from as many endorsing peers. VSCC is tasked to make the following determinations: 8 | 9 | - all endorsements are valid (i.e. they are valid signatures from valid certificates over the expected message) 10 | - there is an appropriate number of endorsements 11 | - endorsements come from the expected source(s) 12 | 13 | 节点通过背书策略来确定一个交易是否被正确背书。当一个peer接收一个交易后,就会调用与该交易Chaincode相关的VSCC(Chaincode 实例化时指定的)作为交易验证流程的一部分(还有RW版本验证)来确定交易的有效性。为此,一个交易包含一个或多个来自背书节点的背书。VSCC的背书校验包括: 14 | 15 | * 所有的背书是有效的(即,来自有效证书的、对预期消息的有效签名) 16 | * 恰当的(满足要求的)背书数量 17 | * 背书来自预期的背书节点 18 | 19 | Endorsement policies are a way of specifying the second and third points. 20 | 21 | 背书策略就是用来定义上面的第二和第三点。 22 | 23 | ## Endorsement policy design - 背书策略设计 24 | 25 | Endorsement policies have two main components: 26 | 27 | - a principal 28 | - a threshold gate 29 | 30 | A principal `P` identifies the entity whose signature is expected.
31 | 32 | A threshold gate `T` takes two inputs: an integer `t` (the threshold) and a list of `n` principals or gates; this gate essentially captures the expectation that out of those `n` principals or gates, `t` are requested to be satisfied. 33 | 34 | 背书策略有两个主要组成部分: 35 | 36 | * 主体(principal):`P` 定义了期望的签名来源实体 37 | * 门限(threshold gate):`T` 有两个输入:整数`t`(阈值)和由`n`个主体或门限组成的列表;表示这`n`个主体或门限中必须有`t`个被满足 38 | 39 | For example: - `T(2, 'A', 'B', 'C')` requests a signature from any 2 principals out of `A`, `B` or `C`; - `T(1, 'A', T(2, 'B', 'C'))` requests either one signature from principal `A` or 1 signature from `B` and `C` each. 40 | 41 | 例如: 42 | 43 | * `T(2, 'A', 'B', 'C')`表示需要`A`、`B`、`C`中任意2个主体的签名背书 44 | * `T(1, 'A', T(2, 'B', 'C'))`表示需要来自主体`A`的签名或者来自`B`和`C`两者的签名背书 45 | 46 | ## Endorsement policy syntax in the CLI - CLI中背书策略语法 47 | 48 | In the CLI, a simple language is used to express policies in terms of boolean expressions over principals. 49 | 50 | 在CLI中,用一种简单的语言、以主体上的布尔表达式来表示背书策略。 51 | 52 | A principal is described in terms of the MSP that is tasked to validate the identity of the signer and of the role that the signer has within that MSP. Currently, two roles are supported: **member** and **admin**. Principals are described as `MSP`.`ROLE`, where `MSP` is the MSP ID that is required, and `ROLE` is either one of the two strings `member` and `admin`. Examples of valid principals are `'Org0.admin'` (any administrator of the `Org0` MSP) or `'Org1.member'` (any member of the `Org1` MSP). 53 | 54 | Fabric使用MSP来描述主体,MSP用于验证签名者的身份和签名者在MSP中的角色/权限。目前支持两种角色:**member**和**admin**。主体的描述形式是`MSP.ROLE`,其中`MSP`是MSP ID,`ROLE`是**member**或**admin**。比如一个有效的主体可表示为`'Org0.admin'`(MSP`Org0`的管理员)或`'Org1.member'`(MSP`Org1`的成员)。 56 | 57 | The syntax of the language is: 58 | 59 | CLI语法是: 60 | 61 | `EXPR(E[, E...])` 62 | 63 | where `EXPR` is either `AND` or `OR`, representing the two boolean expressions and `E` is either a principal (with the syntax described above) or another nested call to `EXPR`.
63 | 64 | 其中`EXPR`是`AND`或`OR`,表示布尔表达式;`E`是上面语法所描述的主体或者是另一个嵌套进去的`EXPR`。 65 | 66 | For example: 67 | 68 | - `AND('Org1.member', 'Org2.member', 'Org3.member')` requests 1 signature from each of the three principals 69 | - `OR('Org1.member', 'Org2.member')` requests 1 signature from either one of the two principals 70 | - `OR('Org1.member', AND('Org2.member', 'Org3.member'))` requests either one signature from a member of the `Org1` MSP or 1 signature from a member of the `Org2` MSP and 1 signature from a member of the `Org3` MSP. 71 | 72 | 例如: 73 | 74 | * `AND('Org1.member', 'Org2.member', 'Org3.member')`表示需要三个主体共同签名背书 75 | * `OR('Org1.member', 'Org2.member')`表示需要两个主体之一的签名背书 76 | * `OR('Org1.member', AND('Org2.member', 'Org3.member'))`表示需要`Org1`的签名背书或者`Org2`和`Org3`共同的签名背书 77 | 78 | ## Specifying endorsement policies for a chaincode - 为chaincode指定背书策略 79 | 80 | Using this language, a chaincode deployer can request that the endorsements for a chaincode be validated against the specified policy. (NOTE - the default policy requires one signature from a member of the `DEFAULT` MSP). This is used if a policy is not specified in the CLI. 81 | 82 | 部署Chaincode时可以指定背书策略。注意:如果没有指定背书策略就使用默认的背书策略,即需要MSP`DEFAULT`的一个成员的签名背书。 83 | 84 | The policy can be specified at deploy time using the `-P` switch, followed by the policy. For example: 85 | 86 | 部署Chaincode时用`-P`指定背书策略,例如: 87 | 88 | peer chaincode instantiate -C testchainid -n mycc -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -c '{"Args":["init","a","100","b","200"]}' -P "AND('Org1.member', 'Org2.member')" 89 | 90 | This command deploys chaincode `mycc` on chain `testchainid` with the policy `AND('Org1.member', 'Org2.member')`.
91 | 92 | 此命令会以背书策略`AND('Org1.member', 'Org2.member')`在链`testchainid`上部署chaincode`mycc`。 93 | 94 | ## Future enhancements - 未来计划 95 | 96 | In this section we list future enhancements for endorsement policies: 97 | 98 | - alongside the existing way of identifying principals by their relationship with an MSP, we plan to identify principals in terms of the *Organization Unit (OU)* expected in their certificates; this is useful to express policies where we request signatures from any identity displaying a valid certificate with an OU matching the one requested in the definition of the principal. 99 | - instead of the syntax `AND(., .)` we plan to move to a more intuitive syntax `. AND . ` 100 | - we plan to expose generalized threshold gates in the language as well alongside `AND` (which is the special `n`-out-of-`n` gate) and OR (which is the special `1`-out-of-`n` gate) 101 | 102 | 本节列举了背书策略的未来计划增强功能: 103 | 104 | * 除了通过与MSP的关系确定principals身份的现有方式外,我们计划根据证书中的组织单位(OU)来标识principals;这样就可以请求与背书策略中定义的principal的OU相匹配(同一组织单位内)的任意实体的签名作为背书 105 | * 以更直观的语法`. AND .`取代语法`AND(., .)` 106 | * 还计划将阈值放到`AND`(`n`-out-of-`n`)和 `OR` (`1`-out-of-`n`)的语法中 107 | 108 | -------------------------------------------------------------------------------- /docs/error-handling_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](http://hyperledger-fabric.readthedocs.io/en/latest/error-handling.html) | Zhangjiong Xuan | | 5 | 6 | 7 | ## 1.1. 总体概览 8 | 9 | The Hyperledger Fabric error handling framework can be found in the source repository under common/errors. It defines a new type of error, CallStackError, to use in place of the standard error type provided by Go.
10 | 11 | Fabric的错误处理框架可以在Fabric代码仓库的common/errors目录下找到。它定义了一种新的错误类型,`CallStackError`,用于取代`Go`标准库中实现的错误类型。 12 | 13 | A CallStackError consists of the following: 14 | 15 | * Component code - a name for the general area of the code that is generating the error. Component codes should consist of three uppercase letters. Numerics and special characters are not allowed. A set of component codes is defined in common/errors/codes.go 16 | * Reason code - a short code to help identify the reason the error occurred. Reason codes should consist of three numeric values. Letters and special characters are not allowed. A set of reason codes is defined in common/errors/codes.go 17 | * Error code - the component code and reason code separated by a colon, e.g. MSP:404 18 | * Error message - the text that describes the error. This is the same as the input provided to fmt.Errorf() and Errors.New(). If an error has been wrapped into the current error, its message will be appended. 19 | * Callstack - the callstack at the time the error is created. If an error has been wrapped into the current error, its error message and callstack will be appended to retain the context of the wrapped error.
20 | 21 | 一个`CallStackError`包含以下内容: 22 | 23 | * `Component code` - 产生错误的代码大致区域的名称。Component codes应该由3个大写字母组成。不允许出现数字和特殊字符。一系列component codes被定义在`common/errors/codes.go`文件中。 24 | * `Reason code` - 一个用于帮助定位错误发生原因的较短代码。Reason codes应该由3位数字组成。不允许出现字母和特殊字符。一系列reason codes被定义在`common/errors/codes.go`文件中。 25 | * `Error code` - 由冒号分隔的component code和reason code组成的错误码,例如`MSP:404`。 26 | * `Error message` - 描述错误信息的文本。这与提供给`fmt.Errorf()`和`Errors.New()`的输入相同。如果一个错误被包装进当前的错误中,那么它的错误消息将被附加。 27 | * `Callstack` - 错误创建时的调用堆栈。如果一个错误被包装进当前的错误中,那么它的错误消息和调用堆栈会被附加,以保留被包装错误的上下文。 28 | 29 | The CallStackError interface exposes the following functions: 30 | 31 | * Error() - returns the error message with callstack appended 32 | * Message() - returns the error message (without callstack appended) 33 | * GetComponentCode() - returns the 3-character component code 34 | * GetReasonCode() - returns the 3-digit reason code 35 | * GetErrorCode() - returns the error code, which is “component:reason” 36 | * GetStack() - returns just the callstack 37 | * WrapError(error) - wraps the provided error into the CallStackError 38 | 39 | `CallStackError`接口暴露了以下方法: 40 | 41 | * `Error()` - 返回一个附加了调用堆栈的错误消息。 42 | * `Message()` - 返回错误消息(不附加调用堆栈)。 43 | * `GetComponentCode()` - 返回由3个字母组成的组件代码。 44 | * `GetReasonCode()` - 返回由3个数字组成的错误原因代码。 45 | * `GetErrorCode()` - 返回错误代码,形式为“component:reason”。 46 | * `GetStack()` - 仅返回调用堆栈。 47 | * `WrapError(error)` - 将提供的错误包装进`CallStackError`。 48 | 49 | ## 1.2. 使用说明 50 | 51 | The new error handling framework should be used in place of all calls to fmt.Errorf() or Errors.new(). Using this framework will provide error codes to check against as well as the option to generate a callstack that will be appended to the error message. 52 | 53 | 应该使用新的错误处理框架来替换所有调用`fmt.Errorf()`或者`Errors.new()`的地方。使用新的错误处理框架将提供可供检查的错误代码,以及生成附加到错误消息的调用堆栈的选项。 54 | 55 | Using the framework is simple and will only require an easy tweak to your code.
56 | 57 | 这个错误框架使用起来很简单,只需要对你的代码稍作调整。 58 | 59 | First, you’ll need to import github.com/hyperledger/fabric/common/errors into any file that uses this framework. 60 | 61 | 首先,您需要将`github.com/hyperledger/fabric/common/errors`导入到使用此框架的任何文件中。 62 | 63 | Let’s take the following as an example from core/chaincode/chaincode_support.go: 64 | 65 | 以`core/chaincode/chaincode_support.go`为例: 66 | ```go 67 | err = fmt.Errorf("Error starting container: %s", err) 68 | ``` 69 | For this error, we will simply call the constructor for Error and pass a component code, reason code, followed by the error message. At the end, we then call the WrapError() function, passing along the error itself. 70 | 71 | 对于这个错误,我们只需调用Error的构造函数,依次传入组件代码、原因代码和错误消息,最后调用`WrapError()`函数,把错误本身传进去。 72 | ```go 73 | fmt.Errorf("Error starting container: %s", err) 74 | ``` 75 | 变成 76 | ```go 77 | errors.ErrorWithCallstack("CHA", "505", "Error starting container").WrapError(err) 78 | ``` 79 | You could also just leave the message as is without any problems: 80 | 81 | 您也可以原样保留错误消息,不会有任何问题: 82 | ```go 83 | errors.ErrorWithCallstack("CHA", "505", "Error starting container: %s", err) 84 | ``` 85 | 86 | With this usage you will be able to format the error message from the previous error into the new error, but will lose the ability to print the callstack (if the wrapped error is a CallStackError). 87 | 88 | 如果使用这种方法,您可以把上一个错误的消息格式化进新的错误中,但将失去打印调用堆栈的能力(如果被包装的错误是CallStackError)。 89 | 90 | A second example to highlight a scenario that involves formatting directives for parameters other than errors, while still wrapping an error, is as follows: 91 | 92 | 第二个例子展示了这样一个场景:格式化指令用于错误以外的参数,同时仍然包装了一个错误,如下所示: 93 | ```go 94 | fmt.Errorf("failed to get deployment payload %s - %s", canName, err) 95 | ``` 96 | 变成 97 | ```go 98 | errors.ErrorWithCallstack("CHA", "506", "Failed to get deployment payload %s", canName).WrapError(err) 99 | ``` 100 | 101 | ## 1.3.
显示错误消息 102 | 103 | Once the error has been created using the framework, displaying the error message is as simple as: 104 | 105 | 一旦使用框架创建了错误,显示错误消息将十分简单: 106 | ```go 107 | logger.Errorf(err) 108 | ``` 109 | 或者 110 | ```go 111 | fmt.Println(err) 112 | ``` 113 | 或者 114 | ```go 115 | fmt.Printf("%s\n",err) 116 | ``` 117 | 118 | 来自`peer/common/common.go`的一个例子: 119 | ```go 120 | errors.ErrorWithCallstack("PER", "404", "Error trying to connect to local peer").WrapError(err) 121 | ``` 122 | 将显示错误消息: 123 | ``` 124 | PER:404 - Error trying to connect to local peer 125 | Caused by: grpc: timed out when dialing 126 | ``` 127 | Note 128 | The callstacks have not been displayed for this example for the sake of brevity. 129 | 130 | >注意 131 | >>为了简洁起见,本示例没有展示调用堆栈信息。 132 | 133 | ## 1.4. Hyperledger Fabric中错误处理的一般准则 134 | 135 | * If it is some sort of best effort thing you are doing, you should log the error and ignore it. 136 | * If you are servicing a user request, you should log the error and return it. 137 | * If the error comes from elsewhere, you have the choice to wrap the error or not. Typically, it’s best to not wrap the error and simply return it as is. However, for certain cases where a utility function is called, wrapping the error with a new component and reason code can help an end user understand where the error is really occurring without inspecting the callstack. 138 | * A panic should be handled within the same layer by throwing an internal error code/start a recovery process and should not be allowed to propagate to other packages.
139 | 140 | * 如果你正在做的是尽力而为(best effort)的操作,你应该记录错误并忽略它。 141 | * 如果你正在为用户请求提供服务,则应该记录错误并返回。 142 | * 如果错误来自其它地方,你可以选择包装错误。通常,最好不要包装错误,让它原样返回。然而,对于调用工具函数的某些情况,使用新的component code和reason code来包装错误,可以帮助用户在不检查调用堆栈的情况下了解真正发生错误的位置。 143 | * 一个panic应该在同一层通过抛出内部错误代码/启动一个恢复进程来处理,而且不允许传播到其他软件包。 -------------------------------------------------------------------------------- /docs/getting_started.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](http://hyperledger-fabric.readthedocs.io/en/latest/getting_started.html) | Linsheng Yu | Kewei Sun | 5 | 6 | 7 | 本文设定了一个简单的Fabric网络场景,包括2个organization,每个有2个peer,并使用“solo” ordering服务。网络实体所需的加密材料(x509证书)已预先生成并放到相应目录和配置文件里了,你无需修改这些配置。`examples/e2e_cli`文件夹里包含了docker-compose文件和要用来创建和测试网络的脚本文件。 8 | 9 | 本文还演示了使用配置生成工具`configtxgen`生成网络配置。 10 | 11 | ## 前提 12 | 13 | 完成以下步骤来安装Fabric源码并编译`configtxgen`工具: 14 | 15 | * 完成[环境安装](http://hyperledger-fabric.readthedocs.io/en/latest/dev-setup/devenv.html),并设置正确的`$GOPATH`环境变量。 16 | * 拉取Fabric源码 17 | 18 | git clone https://github.com/hyperledger/fabric.git 19 | 20 | * 编译`configtxgen`工具 21 | 22 | * 如果运行在Linux,在Fabric目录下执行以下命令: 23 | 24 | cd $GOPATH/src/github.com/hyperledger/fabric 25 | make configtxgen 26 | # 如果出错:'ltdl.h' file not found 27 | sudo apt install libtool libltdl-dev 28 | # 然后再运行make 29 | make configtxgen 30 | 31 | * 如果运行在OSX,先安装Xcode 8.0或以上版本,然后在Fabric目录下执行以下命令: 32 | 33 | # 安装 Homebrew 34 | /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)" 35 | # 添加 gnu-tar 36 | brew install gnu-tar --with-default-names 37 | # 添加 libtool 38 | brew install libtool 39 | # 编译 configtxgen 40 | make configtxgen 41 | 42 | 编译成功后输出: 43 | 44 | build/bin/configtxgen 45 | CGO_CFLAGS=" " GOBIN=/Users/johndoe/work/src/github.com/hyperledger/fabric/build/bin go install -ldflags "-X github.com/hyperledger/fabric/common/metadata.Version=1.0.0-snapshot-8d3275f -X github.com/hyperledger/fabric/common
/metadata.BaseVersion=0.3.0 -X github.com/hyperledger/fabric/common/metadata.BaseDockerLabel=org.hyperledger.fabric" github.com/hyperledger/fabric/common/configtx/tool/configtxgen 46 | Binary available as build/bin/configtxgen 47 | 48 | 编译后的可执行文件放在Fabric目录下的`build/bin/configtxgen` 49 | 50 | ## 执行完整脚本 51 | 52 | 为了加快部署过程,我们提供了一个脚本来执行所有任务。执行该脚本会生成网络配置、启动本地网络并完成Chaincode测试。 53 | 54 | 进入`examples/e2e_cli`目录,首先从Docker Hub拉取镜像: 55 | 56 | # 使脚本可执行 57 | chmod +x download-dockerimages.sh 58 | # 执行脚本 59 | ./download-dockerimages.sh 60 | 61 | 这个过程会需要几分钟,脚本执行后输出: 62 | 63 | ===> List out hyperledger docker images 64 | hyperledger/fabric-ca latest 35311d8617b4 7 days ago 240 MB 65 | hyperledger/fabric-ca x86_64-1.0.0-alpha 35311d8617b4 7 days ago 240 MB 66 | hyperledger/fabric-couchdb latest f3ce31e25872 7 days ago 1.51 GB 67 | hyperledger/fabric-couchdb x86_64-1.0.0-alpha f3ce31e25872 7 days ago 1.51 GB 68 | hyperledger/fabric-kafka latest 589dad0b93fc 7 days ago 1.3 GB 69 | hyperledger/fabric-kafka x86_64-1.0.0-alpha 589dad0b93fc 7 days ago 1.3 GB 70 | hyperledger/fabric-zookeeper latest 9a51f5be29c1 7 days ago 1.31 GB 71 | hyperledger/fabric-zookeeper x86_64-1.0.0-alpha 9a51f5be29c1 7 days ago 1.31 GB 72 | hyperledger/fabric-orderer latest 5685fd77ab7c 7 days ago 182 MB 73 | hyperledger/fabric-orderer x86_64-1.0.0-alpha 5685fd77ab7c 7 days ago 182 MB 74 | hyperledger/fabric-peer latest 784c5d41ac1d 7 days ago 184 MB 75 | hyperledger/fabric-peer x86_64-1.0.0-alpha 784c5d41ac1d 7 days ago 184 MB 76 | hyperledger/fabric-javaenv latest a08f85d8f0a9 7 days ago 1.42 GB 77 | hyperledger/fabric-javaenv x86_64-1.0.0-alpha a08f85d8f0a9 7 days ago 1.42 GB 78 | hyperledger/fabric-ccenv latest 91792014b61f 7 days ago 1.29 GB 79 | hyperledger/fabric-ccenv x86_64-1.0.0-alpha 91792014b61f 7 days ago 1.29 GB 80 | 81 | 现在运行完整脚本: 82 | 83 | ./network_setup.sh up 84 | 85 | 如果没有设置`channel-ID`参数,channel名默认是`mychannel`。脚本执行成功后输出: 86 | 87 | ===================== Query on PEER3 on channel 'mychannel' is
successful ===================== 88 | 89 | ===================== All GOOD, End-2-End execution completed ===================== 90 | 91 | 此时,网络启动运行并测试成功。 92 | 93 | ### 清理 94 | 95 | 停止网络: 96 | 97 | # 在e2e_cli目录下 98 | docker rm -f $(docker ps -aq) 99 | 100 | 然后执行`docker images`命令查看Chaincode镜像,类似输出如下: 101 | 102 | REPOSITORY TAG IMAGE ID CREATED SIZE 103 | dev-peer3-mycc-1.0 latest 13f6c8b042c6 5 minutes ago 176 MB 104 | dev-peer0-mycc-1.0 latest e27456b2bd92 5 minutes ago 176 MB 105 | dev-peer2-mycc-1.0 latest 111098a7c98c 5 minutes ago 176 MB 106 | 107 | 删除这些镜像: 108 | 109 | docker rmi 110 | 111 | 例如: 112 | 113 | docker rmi -f 13f e27 111 114 | 115 | 最后删除配置结果,在`crypto/orderer`目录删除`orderer.block`和`channel.tx`。 116 | 117 | ## configtxgen 118 | 119 | configtxgen工具生成两个内容: Orderer的**bootstrap block**和Fabric的**channel configuration transaction**。 120 | 121 | orderer block是ordering服务的创世区块;channel transaction文件在create channel时会被广播给orderer。 122 | 123 | `configtx.yaml`包含网络的定义,并给出了网络组件的拓扑结构--2个成员(Org0和Org1)分别管理维护2个peer。还指出每个网络实体的加密材料的存储位置。`crypto`目录包含每个实体的admin证书、ca证书、签名证书和私钥。 124 | 125 | 为了方便使用,我们提供了一个脚本`generateCfgTrx.sh`,该脚本整合了`configtxgen`的执行过程,执行后会生成两个配置结果:`orderer.block`和`channel.tx`。如果你运行过上边的`network_setup.sh`则这两个配置结果已生成,要先到`crypto/orderer`目录将之删除。 126 | 127 | ### 执行`generateCfgTrx.sh`脚本 128 | 129 | 在`e2e_cli`目录下: 130 | 131 | cd $GOPATH/src/github.com/hyperledger/fabric/examples/e2e_cli 132 | 133 | `generateCfgTrx.sh`脚本有个可选参数`channel-ID`,如果不设此参数,则默认为`mychannel`。 134 | 135 | # 如上所述, 参数是可选的 136 | ./generateCfgTrx.sh 137 | 138 | 执行成功后输出: 139 | 140 | 2017/02/28 17:01:52 Generating new channel configtx 141 | 2017/02/28 17:01:52 Creating no-op MSP instance 142 | 2017/02/28 17:01:52 Obtaining default signing identity 143 | 2017/02/28 17:01:52 Creating no-op signing identity instance 144 | 2017/02/28 17:01:52 Serializing identity 145 | 2017/02/28 17:01:52 signing message 146 | 2017/02/28 17:01:52 signing message 147 | 2017/02/28 17:01:52 Writing new channel tx 148 | 149 | 
生成的`orderer.block`和`channel.tx`两个文件存放在`crypto/orderer`目录。 150 | 151 | `orderer.block`是ordering服务的创世区块,`channel.tx`包含新channel的配置信息。如前所述,这俩文件都来自`configtx.yaml`及其所包含的加密材料和网络信息的数据。 152 | 153 | ***注意:***也可手动执行脚本`generateCfgTrx.sh`里的命令。如果使用这种方式,则必须先用`e2e_cli`目录下的`configtx.yaml`替换Fabric sampleconfig目录下默认的`configtx.yaml`,然后返回fabric目录执行这些命令,前提是删除之前`generateCfgTrx.sh`生成的两个文件。 154 | 155 | ## 启动网络 156 | 157 | 使用docker-compose启动网络,如果没有拉取Fabric镜像,则返回之前的操作去拉取镜像。 158 | 159 | 脚本`script.sh`嵌入到docker-compose文件里,该脚本将peer加入到channel并向peer发送read/write请求,如此便可自动执行交易流程。如果不想使用这个脚本自动执行交易,可以跳到下面“手动执行交易”一节。 160 | 161 | 在`e2e_cli`目录下使用docker-compose生成网络实体并执行嵌入的脚本: 162 | 163 | CHANNEL_NAME= docker-compose up -d 164 | 165 | 如果之前创建了一个channel名,就必须将其作为参数,否则使用默认的`mychannel`。例如: 166 | 167 | CHANNEL_NAME=mychannel docker-compose up -d 168 | 169 | 等待一会儿,因为背后有交易会发送到peer。执行`docker ps`查看运行状态的container,可以看到如下内容: 170 | 171 | vagrant@hyperledger-devenv:v0.3.0-4eec836:/opt/gopath/src/github.com/hyperledger/fabric/examples/e2e_cli$ docker ps 172 | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 173 | 45e3e114f7a2 dev-peer3-mycc-1.0 "chaincode -peer.a..." 4 seconds ago Up 4 seconds dev-peer3-mycc-1.0 174 | 5970f740ad2b dev-peer0-mycc-1.0 "chaincode -peer.a..." 24 seconds ago Up 23 seconds dev-peer0-mycc-1.0 175 | b84808d66e99 dev-peer2-mycc-1.0 "chaincode -peer.a..." 48 seconds ago Up 47 seconds dev-peer2-mycc-1.0 176 | 16d7d94c8773 hyperledger/fabric-peer "peer node start -..." About a minute ago Up About a minute 0.0.0.0:10051->7051/tcp, 0.0.0.0:10053->7053/tcp peer3 177 | 3561a99e35e6 hyperledger/fabric-peer "peer node start -..." About a minute ago Up About a minute 0.0.0.0:9051->7051/tcp, 0.0.0.0:9053->7053/tcp peer2 178 | 0baad3047d92 hyperledger/fabric-peer "peer node start -..." About a minute ago Up About a minute 0.0.0.0:8051->7051/tcp, 0.0.0.0:8053->7053/tcp peer1 179 | 1216896b7b4f hyperledger/fabric-peer "peer node start -..." 
About a minute ago Up About a minute 0.0.0.0:7051->7051/tcp, 0.0.0.0:7053->7053/tcp peer0 180 | 155ff8747b4d hyperledger/fabric-orderer "orderer" About a minute ago Up About a minute 0.0.0.0:7050->7050/tcp orderer 181 | 182 | ### 背后发生了什么? 183 | 184 | * 在CLI容器中执行了脚本`script.sh`。该脚本用默认的`mychannel`执行`createChannel`命令,这个命令用到了之前`configtxgen`工具生成的`channel.tx`。 185 | * `createChannel`执行后会生成一个创世区块`mychannel.block`并保存到当前目录。 186 | * 对4个peer分别执行`joinChannel`命令,通过初始区块`mychannel.block`加入channel。至此,有一个channel包含4个peer和2个organization。 187 | * `PEER0`和`PEER1`属于Org0,`PEER2`和`PEER3`属于Org1。这些关系的定义都在`configtx.yaml`中。 188 | * Chaincode `chaincode_example02`被install到`PEER0`和`PEER2`。 189 | * 然后Chaincode在`PEER2`上instantiate。实例化是指启动容器和初始化与Chaincode相关的键值对,本例中的初始值是`["a","100" "b","200"]`。实例化的结果是一个名为`dev-peer2-mycc-1.0`的容器启动,注意,这个容器仅是针对`PEER2`的。***(译注:尤其注意这里仅仅是启动了一个container)*** 190 | * 实例化时还会带有背书策略参数,本例中背书策略为`-P "OR ('Org0MSP.member','Org1MSP.member')"`,意思是任何交易必须由绑定到Org0或者Org1的peer背书。 191 | * 对于“a”的query请求发送到`PEER0`。之前Chaincode已被install到`PEER0`,所以会启动一个名为`dev-peer0-mycc-1.0`的新容器,然后返回查询结果。由于没有write操作发生,所以“a”的值依然是“100”。 192 | * 从“a”转移“10”给“b”的invoke请求发送到`PEER0`。 193 | * Chaincode install到`PEER3`。 194 | * 对“a”的query请求发送到`PEER3`。这启动了第三个名为`dev-peer3-mycc-1.0`的容器,并返回查询结果90,正确地反映了之前的交易。 195 | 196 | ### 这表明了什么?
197 | 198 | Chaincode必须被install到一个peer上,才能成功地对这个peer的ledger执行read/write操作。此外,只有当在peer上针对chaincode执行read/write操作时(比如,查询“a”的值),这个peer上才会启动该chaincode容器:***交易导致容器启动***。channel中的所有peer(包括那些没有install chaincode的peer,就像上例中的`PEER3`)都会维护一个准确的ledger,ledger包含存储了不可变的、有序的交易记录的block,还有维护current state的statedb。在peer上install chaincode之后就可以直接使用该peer上的chaincode了(就像上例中的`PEER3`),因为之前已经instantiate过了***(译注:即同一channel中的chaincode只需一次instantiate)***。 199 | 200 | ### 查看交易 201 | 202 | 查看CLI容器的log: 203 | 204 | docker logs -f cli 205 | 206 | 输出: 207 | 208 | 2017-02-28 04:31:20.841 UTC [logging] InitFromViper -> DEBU 001 Setting default logging level to DEBUG for command 'chaincode' 209 | 2017-02-28 04:31:20.842 UTC [msp] GetLocalMSP -> DEBU 002 Returning existing local MSP 210 | 2017-02-28 04:31:20.842 UTC [msp] GetDefaultSigningIdentity -> DEBU 003 Obtaining default signing identity 211 | 2017-02-28 04:31:20.843 UTC [msp] Sign -> DEBU 004 Sign: plaintext: 0A8F050A59080322096D796368616E6E...6D7963631A0A0A0571756572790A0161 212 | 2017-02-28 04:31:20.843 UTC [msp] Sign -> DEBU 005 Sign: digest: 52F1A41B7B0B08CF3FC94D9D7E916AC4C01C54399E71BC81D551B97F5619AB54 213 | Query Result: 90 214 | 2017-02-28 04:31:30.425 UTC [main] main -> INFO 006 Exiting.....
215 | ===================== Query on chaincode on PEER3 on channel 'mychannel' is successful ===================== 216 | 217 | ===================== All GOOD, End-2-End execution completed ===================== 218 | 219 | 你也可以实时查看日志,需要打开两个终端。 220 | 221 | 首先,停止运行着的docker容器: 222 | 223 | docker rm -f $(docker ps -aq) 224 | 225 | 在第一个终端启动docker-compose脚本: 226 | 227 | # add the appropriate CHANNEL_NAME parm 228 | CHANNEL_NAME= docker-compose up -d 229 | 230 | 在第二个终端查看log: 231 | 232 | docker logs -f cli 233 | 234 | 这将实时输出通过`script.sh`执行的交易信息。 235 | 236 | ### 查看chaincode日志 237 | 238 | 对每个chaincode容器单独查看log,输出: 239 | 240 | $ docker logs dev-peer2-mycc-1.0 241 | 04:30:45.947 [BCCSP_FACTORY] DEBU : Initialize BCCSP [SW] 242 | ex02 Init 243 | Aval = 100, Bval = 200 244 | 245 | $ docker logs dev-peer0-mycc-1.0 246 | 04:31:10.569 [BCCSP_FACTORY] DEBU : Initialize BCCSP [SW] 247 | ex02 Invoke 248 | Query Response:{"Name":"a","Amount":"100"} 249 | ex02 Invoke 250 | Aval = 90, Bval = 210 251 | 252 | $ docker logs dev-peer3-mycc-1.0 253 | 04:31:30.420 [BCCSP_FACTORY] DEBU : Initialize BCCSP [SW] 254 | ex02 Invoke 255 | Query Response:{"Name":"a","Amount":"90"} 256 | 257 | ## 手动执行交易 258 | 259 | 停止所有容器: 260 | 261 | docker rm -f $(docker ps -aq) 262 | 263 | 然后,执行`docker images`命令查看chaincode镜像,会有类似以下内容: 264 | 265 | REPOSITORY TAG IMAGE ID CREATED SIZE 266 | dev-peer3-mycc-1.0 latest 13f6c8b042c6 5 minutes ago 176 MB 267 | dev-peer0-mycc-1.0 latest e27456b2bd92 5 minutes ago 176 MB 268 | dev-peer2-mycc-1.0 latest 111098a7c98c 5 minutes ago 176 MB 269 | 270 | 删除这些镜像: 271 | 272 | docker rmi 273 | 274 | 比如: 275 | 276 | docker rmi -f 13f e27 111 277 | 278 | 确保之前生成的配置内容还在,如果删除了就再执行脚本: 279 | 280 | ./generateCfgTrx.sh 281 | 282 | 或者使用脚本中的命令手动生成。 283 | 284 | ### 修改docker-compose文件 285 | 286 | 打开docker-compose文件注释掉执行`script.sh`脚本的命令,如下: 287 | 288 | working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer 289 | # command: /bin/bash -c './scripts/script.sh ${CHANNEL_NAME}' 290 | 291 | 
保存文件,重启网络: 292 | 293 | # 在e2e_cli目录下执行,设置正确的CHANNEL_NAME 294 | CHANNEL_NAME= docker-compose up -d 295 | 296 | ### 命令语法 297 | 298 | 参照`script.sh`脚本中的create和join命令。下面的命令只是针对`PEER0`的,当对orderer和peer执行命令时,需要修改下面给出的四个环境变量的值。 299 | 300 | # 对PEER0所用的环境变量 301 | CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/fabric/peer/crypto/peer/peer0/localMspConfig 302 | CORE_PEER_ADDRESS=peer0:7051 303 | CORE_PEER_LOCALMSPID="Org0MSP" 304 | CORE_PEER_TLS_ROOTCERT_FILE=$GOPATH/src/github.com/hyperledger/fabric/peer/crypto/peer/peer0/localMspConfig/cacerts/peerOrg0.pem 305 | 306 | 每个peer的环境变量的值都在docker-compose文件中。 307 | 308 | ### Create channel 309 | 310 | 进入CLI容器: 311 | 312 | docker exec -it cli bash 313 | 314 | 执行成功输出: 315 | 316 | root@0d78bb69300d:/opt/gopath/src/github.com/hyperledger/fabric/peer# 317 | 318 | 用`-c`指定channel name,`-f`指定channel configuration transaction(此例中是`channel.tx`),当然也可以为configuration transaction使用不同的文件名。 319 | 320 | # channel.tx 和 orderer.block 在 cli 容器的 crypto/orderer 目录下 321 | peer channel create -o orderer0:7050 -c mychannel -f crypto/orderer/channel.tx --tls $CORE_PEER_TLS_ENABLED --cafile $GOPATH/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem 322 | 323 | 由于此例的`peer channel create`命令是针对orderer的,所以需要修改之前的环境变量,因此上边的命令应该是: 324 | 325 | CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig CORE_PEER_LOCALMSPID="OrdererMSP" peer channel create -o orderer0:7050 -c mychannel -f crypto/orderer/channel.tx --tls $CORE_PEER_TLS_ENABLED --cafile $GOPATH/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem 326 | 327 | **注意:**下面的其他命令依然在CLI容器中执行,而且要记住命令里每个peer对应的环境变量。 328 | 329 | ### Join channel 330 | 331 | 将指定的peer加入到channel: 332 | 333 | # 默认只将 PEER0 加入 334 | peer channel join -b mychannel.block 335 | 336 | 完整的命令应该是: 337 | 338 | CORE_PEER_MSPCONFIGPATH=$GOPATH/src/github.com/hyperledger/fabric/peer/crypto/peer/peer0/localMspConfig
CORE_PEER_ADDRESS=peer0:7051 CORE_PEER_LOCALMSPID="Org0MSP" CORE_PEER_TLS_ROOTCERT_FILE=$GOPATH/src/github.com/hyperledger/fabric/peer/crypto/peer/peer0/localMspConfig/cacerts/peerOrg0.pem peer channel join -b mychannel.block 339 | 340 | 修改这四个环境变量将其他的peer加入到channel中。 341 | 342 | ### Install chaincode 343 | 344 | 将示例chaincode安装到四个对等节点中的一个: 345 | 346 | # 在命令前面要加上peer对应的四个环境变量 347 | peer chaincode install -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 348 | 349 | ### Instantiate chaincode 350 | 351 | 在一个peer上实例化chaincode,这将在该peer上启动一个chaincode容器,并为该chaincode设置背书策略。此例中定义的策略是由`Org0`或`Org1`中的任一peer背书即可。命令如下: 352 | 353 | # 在命令前面要加上peer对应的四个环境变量 354 | # 用 -C 参数设置正确的channel名,默认是 mychannel 355 | peer chaincode instantiate -o orderer0:7050 --tls $CORE_PEER_TLS_ENABLED --cafile $GOPATH/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem -C mychannel -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -c '{"Args":["init","a", "100", "b","200"]}' -P "OR ('Org0MSP.member','Org1MSP.member')" 356 | 357 | ### Invoke chaincode 358 | 359 | # 在命令前面要加上peer对应的四个环境变量 360 | peer chaincode invoke -o orderer0:7050 --tls $CORE_PEER_TLS_ENABLED --cafile $GOPATH/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}' 361 | 362 | ### Query chaincode 363 | 364 | # 在命令前面要加上peer对应的四个环境变量 365 | peer chaincode query -C mychannel -n mycc -c '{"Args":["query","a"]}' 366 | 367 | 执行结果: 368 | 369 | Query Result: 90 370 | 371 | ## 手动构建镜像 372 | 373 | 构建peer和orderer镜像: 374 | 375 | # 在fabric目录下执行,如果不能顺利生成镜像,则使用vagrant环境 376 | make peer-docker orderer-docker 377 | 378 | 执行`docker images`命令输出: 379 | 380 | vagrant@hyperledger-devenv:v0.3.0-4eec836:/opt/gopath/src/github.com/hyperledger/fabric$ docker images 381 | REPOSITORY TAG IMAGE ID CREATED SIZE 382 | hyperledger/fabric-orderer latest
264e45897bfb 10 minutes ago 180 MB 383 | hyperledger/fabric-orderer x86_64-0.7.0-snapshot-a0d032b 264e45897bfb 10 minutes ago 180 MB 384 | hyperledger/fabric-peer latest b3d44cff07c6 10 minutes ago 184 MB 385 | hyperledger/fabric-peer x86_64-0.7.0-snapshot-a0d032b b3d44cff07c6 10 minutes ago 184 MB 386 | hyperledger/fabric-javaenv latest 6e2a2adb998a 10 minutes ago 1.42 GB 387 | hyperledger/fabric-javaenv x86_64-0.7.0-snapshot-a0d032b 6e2a2adb998a 10 minutes ago 1.42 GB 388 | hyperledger/fabric-ccenv latest 0ce0e7dc043f 12 minutes ago 1.29 GB 389 | hyperledger/fabric-ccenv x86_64-0.7.0-snapshot-a0d032b 0ce0e7dc043f 12 minutes ago 1.29 GB 390 | hyperledger/fabric-baseimage x86_64-0.3.0 f4751a503f02 4 weeks ago 1.27 GB 391 | hyperledger/fabric-baseos x86_64-0.3.0 c3a4cf3b3350 4 weeks ago 161 MB 392 | 393 | ## 使用本地二进制文件 394 | 395 | 进入vagrant环境: 396 | 397 | cd $GOPATH/src/github.com/hyperledger/fabric/devenv 398 | 399 | # 第一次启动VM用 vagrant up 400 | vagrant ssh 401 | 402 | 在fabric目录下编译peer和orderer: 403 | 404 | make clean 405 | make native 406 | 407 | 生成`ccenv`镜像: 408 | 409 | make peer-docker 410 | 411 | 然后打开两个终端都进入vagrant,至此有三个终端都在vagrant里。 412 | 413 | 首先清空ledger文件夹`/var/hyperledger/`(每次运行后,为避免错误或重复,都要清空): 414 | 415 | rm -rf /var/hyperledger/* 416 | 417 | #### 终端1 418 | 419 | 使用`configtxgen`工具创建orderer创世区块: 420 | 421 | configtxgen -profile SampleSingleMSPSolo -outputBlock orderer.block 422 | 423 | #### 终端2 424 | 425 | 用刚生成的创世区块启动orderer: 426 | 427 | ORDERER_GENERAL_GENESISMETHOD=file ORDERER_GENERAL_GENESISFILE=./orderer.block orderer 428 | 429 | #### 终端1 430 | 431 | 创建 channel configuration transaction: 432 | 433 | configtxgen -profile SampleSingleMSPSolo -outputCreateChannelTx channel.tx -channelID 434 | 435 | 执行成功会在当前目录生成`channel.tx` 436 | 437 | #### 终端3 438 | 439 | 以`chainless`模式启动peer: 440 | 441 | peer node start --peer-defaultchain=false 442 | 443 | ### Create channel 444 | 445 | 以`channel.tx`为参数创建channel: 446 | 447 | peer channel create -o 127.0.0.1:7050 -c
mychannel -f channel.tx 448 | 449 | 执行后在当前目录生成一个channel的创世区块`mychannel.block` 450 | 451 | ### Join channel 452 | 453 | 通过channel的创世区块`mychannel.block`加入channel: 454 | 455 | peer channel join -b mychannel.block 456 | 457 | ### Install chaincode 458 | 459 | 在peer上安装chaincode: 460 | 461 | peer chaincode install -o 127.0.0.1:7050 -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 462 | 463 | 执行成功后查看文件可以看到`mycc.1.0`: 464 | 465 | ls /var/hyperledger/production/chaincodes 466 | 467 | ### Instantiate chaincode 468 | 469 | 实例化chaincode: 470 | 471 | peer chaincode instantiate -o 127.0.0.1:7050 -C mychannel -n mycc -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/chaincode_example02 -c '{"Args":["init","a", "100", "b","200"]}' 472 | 473 | `docker ps`查看运行中的容器,如果chaincode启动成功,则显示: 474 | 475 | CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 476 | bd9c6bda7560 dev-jdoe-mycc-1.0 "chaincode -peer.a..." 5 seconds ago Up 5 seconds dev-jdoe-mycc-1.0 477 | 478 | ### Invoke chaincode 479 | 480 | 调用chaincode从“a”转移“10”给“b”: 481 | 482 | peer chaincode invoke -o 127.0.0.1:7050 -C mychannel -n mycc -c '{"Args":["invoke","a","b","10"]}' 483 | 484 | ### Query chaincode 485 | 486 | 查询“a”的值: 487 | 488 | # 返回值应是 90 489 | peer chaincode query -o 127.0.0.1:7050 -C mychannel -n mycc -c '{"Args":["query","a"]}' 490 | 491 | **运行完成后不要忘记清空ledger文件夹`/var/hyperledger/`:** 492 | 493 | rm -rf /var/hyperledger/* 494 | 495 | ## 使用CouchDB 496 | 497 | 可以将stateDB默认的goleveldb替换成CouchDB。使用CouchDB时,chaincode的各项功能依然可用;如果将chaincode数据以JSON方式存储,还可以使用CouchDB的复杂查询功能。 498 | 499 | 为了使用CouchDB,除了最前面的“前提”一节的操作外,还需要下边两步启动CouchDB容器并将之与peer容器关联: 500 | 501 | * 构建CouchDB镜像: 502 | 503 | # make sure you are in the fabric directory 504 | make couchdb 505 | 506 | * 编辑`fabric/examples/e2e_cli/docker-compose.yaml`和`docker-compose.yam`,将所有与CouchDB有关的内容取消注释。这样`chaincode_example02`就可以在CouchDB下运行了。 507 | 508 |
**注意:**如果将CouchDB容器的端口映射到主机,请一定要注意安全。在开发环境中将端口映射出来,可以通过CouchDB的web界面可视化操作数据。生产环境中一般不会做端口映射,以限制CouchDB的外部访问。 509 | 510 | 可以用`chaincode_example02`在CouchDB下执行上边的chaincode操作,但是为了使用CouchDB的复杂查询功能,chaincode数据一定要以JSON格式存储(例如`fabric/examples/chaincode/go/marbles02`)。 511 | 512 | 使用`手动执行交易`这一节中的步骤install、instantiate、invoke和query `marbles02`,执行完`Join channel`这步后使用下边的命令操作`marbles02`: 513 | 514 | * 在`PEER0`上安装并实例化chaincode 515 | 516 | peer chaincode install -o orderer0:7050 -n marbles -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/marbles02 517 | peer chaincode instantiate -o orderer0:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem -C mychannel -n marbles -v 1.0 -p github.com/hyperledger/fabric/examples/chaincode/go/marbles02 -c '{"Args":["init"]}' -P "OR ('Org0MSP.member','Org1MSP.member')" 518 | 519 | * 创建一些marble并移动它们 520 | 521 | peer chaincode invoke -o orderer0:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem -C mychannel -n marbles -c '{"Args":["initMarble","marble1","blue","35","tom"]}' 522 | peer chaincode invoke -o orderer0:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem -C mychannel -n marbles -c '{"Args":["initMarble","marble2","red","50","tom"]}' 523 | peer chaincode invoke -o orderer0:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem -C mychannel -n marbles -c '{"Args":["initMarble","marble3","blue","70","tom"]}' 524 | peer chaincode invoke -o orderer0:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem -C mychannel -n marbles -c
'{"Args":["transferMarble","marble2","jerry"]}' 525 | peer chaincode invoke -o orderer0:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem -C mychannel -n marbles -c '{"Args":["transferMarblesBasedOnColor","blue","jerry"]}' 526 | peer chaincode invoke -o orderer0:7050 --tls $CORE_PEER_TLS_ENABLED --cafile /opt/gopath/src/github.com/hyperledger/fabric/peer/crypto/orderer/localMspConfig/cacerts/ordererOrg0.pem -C mychannel -n marbles -c '{"Args":["delete","marble1"]}' 527 | 528 | * 如果做了CouchDB容器的端口映射,可以通过web界面查看数据,可以看到名为`mychannel`的数据库及其文档 529 | 530 | * 如果使用的是vagrant环境 531 | 532 | http://localhost:15984/_utils 533 | * 如果不是vagrant环境,使用CouchDB容器指定的端口 534 | 535 | http://localhost:5984/_utils 536 | * 还可以对chaincode执行常规查询(例如,读取`marble2`) 537 | 538 | peer chaincode query -C mychannel -n marbles -c '{"Args":["readMarble","marble2"]}' 539 | 540 | 可以看到`marble2`的详细信息: 541 | 542 | Query Result: {"color":"red","docType":"marble","name":"marble2","owner":"jerry","size":50} 543 | 544 | 获取`marble1`的历史: 545 | 546 | peer chaincode query -C mychannel -n marbles -c '{"Args":["getHistoryForMarble","marble1"]}' 547 | 548 | 可以看到操作过`marble1`的交易: 549 | 550 | Query Result: [{"TxId":"1c3d3caf124c89f91a4c0f353723ac736c58155325f02890adebaa15e16e6464", "Value":{"docType":"marble","name":"marble1","color":"blue","size":35,"owner":"tom"}},{"TxId":"755d55c281889eaeebf405586f9e25d71d36eb3d35420af833a20a2f53a3eefd", "Value":{"docType":"marble","name":"marble1","color":"blue","size":35,"owner":"jerry"}},{"TxId":"819451032d813dde6247f85e56a89262555e04f14788ee33e28b232eef36d98f", "Value":}] 551 | 552 | 还可以执行复杂查询,比如查询`jerry`所拥有的marble: 553 | 554 | peer chaincode query -C mychannel -n marbles -c '{"Args":["queryMarblesByOwner","jerry"]}' 555 | 556 | 查询结果为`jerry`所拥有的2个marble的信息: 557 | 558 | Query Result: [{"Key":"marble2",
"Record":{"color":"red","docType":"marble","name":"marble2","owner":"jerry","size":50}},{"Key":"marble3", "Record":{"color":"blue","docType":"marble","name":"marble3","owner":"jerry","size":70}}] 559 | 560 | 通过`owner`字段等于`jerry`查询: 561 | 562 | peer chaincode query -C mychannel -n marbles -c '{"Args":["queryMarbles","{\"selector\":{\"owner\":\"jerry\"}}"]}' 563 | 564 | 查询结果如下: 565 | 566 | Query Result: [{"Key":"marble2", "Record":{"color":"red","docType":"marble","name":"marble2","owner":"jerry","size":50}},{"Key":"marble3", "Record":{"color":"blue","docType":"marble","name":"marble3","owner":"jerry","size":70}}] 567 | 568 | ## 数据持久化 569 | 570 | 如果需要对peer或CouchDB容器的数据持久化,一种选择是将容器的相关目录挂载到docker主机。例如,将下面两行内容放到`docker-compose.yaml`文件中的对应peer处: 571 | 572 | volumes: 573 | - /var/hyperledger/peer0:/var/hyperledger/production 574 | 575 | 将下面两行放到对应的CouchDB处: 576 | 577 | volumes: 578 | - /var/hyperledger/couchdb0:/opt/couchdb/data 579 | 580 | ## 故障排除 581 | 582 | * 每次运行后要清理文件 583 | * 如果出现docker错误,则删除镜像,从头再操作一遍 584 | 585 | make clean 586 | make peer-docker orderer-docker 587 | 588 | * 如果出现下面的错误 589 | 590 | Error: Error endorsing chaincode: rpc error: code = 2 desc = Error installing chaincode code mycc:1.0(chaincode /var/hyperledger/production/chaincodes/mycc.1.0 exits) 591 | 592 | chaincode镜像(如`dev-peer0-mycc-1.0`或`dev-peer1-mycc-1.0`)可能是以前运行过的。删除它们然后重试。 593 | 594 | docker rmi -f $(docker images | grep peer[0-9]-peer[0-9] | awk '{print $3}') 595 | 596 | * 使用`down`选项清理网络 597 | 598 | ./network_setup.sh down 599 | -------------------------------------------------------------------------------- /docs/glossary.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](http://hyperledger-fabric.readthedocs.io/en/latest/glossary.html) | Linsheng Yu | Baohua Yang | 5 | 6 | 7 | Terminology is important, so that all Fabric users and developers agree on what we mean by each specific
term. What is chaincode, for example. So we’ll point you there, whenever you want to reassure yourself. Of course, feel free to read the entire thing in one sitting if you like, it’s pretty enlightening! 8 | 9 | 专业术语很重要,这样所有Fabric的用户和开发人员才能对每个特定术语的含义达成一致,例如:什么是链码。当你想确认某个术语的含义时,可以随时查阅本表。当然,如果你愿意,也可以一口气读完整个文档,非常有启发! 10 | 11 | ## Anchor Peer - 锚节点 12 | 13 | A peer node on a channel that all other peers can discover and communicate with. Each [Member](#Member) on a channel has an anchor peer (or multiple anchor peers to prevent single point of failure), allowing for peers belonging to different Members to discover all existing peers on a channel. 14 | 15 | 锚节点是通道中能被所有对等节点探测、并能与之进行通信的一种对等节点。通道中的每个成员都有一个(或多个,以防单点故障)锚节点,允许属于不同成员身份的节点来发现通道中存在的其它节点。 16 | 17 | ## Block - 区块 18 | 19 | An ordered set of transactions that is cryptographically linked to the preceding block(s) on a channel. 20 | 21 | 在一个通道上,(区块是)一组有序交易的集合。区块往往通过密码学手段(Hash 值)连接到前导区块。 22 | 23 | **Zhu Jiang:区块是一组有序的交易集合,在通道中经过加密(哈希加密)后与前序区块连接。** 24 | 25 | ## Chain - 链 26 | 27 | The ledger’s chain is a transaction log structured as hash-linked blocks of transactions. Peers receive blocks of transactions from the ordering service, mark the block’s transactions as valid or invalid based on endorsement policies and concurrency violations, and append the block to the hash chain on the peer’s file system. 28 | 29 | chain就是block之间以hash连接为结构的交易日志。peer从ordering service接收交易block,并根据背书策略和并发冲突标记block上的交易是否有效,然后将该block追加到peer文件系统中的hash chain上。 30 | 31 | Zhu Jiang:账本的链是一个交易区块经过“哈希连接”结构化的交易日志。对等节点从排序服务收到交易区块,基于背书策略和并发冲突来标注区块的交易为有效或者无效状态,并且将区块追加到对等节点文件系统的哈希链中。 32 | 33 | ## Chaincode - 链码 34 | 35 | Chaincode is software, running on a ledger, to encode assets and the transaction instructions (business logic) for modifying the assets.
36 | 37 | 链码是一个运行在账本上的软件,它可以对资产进行编码,其中的交易指令(或者叫业务逻辑)也可以用来修改资产。 38 | 39 | ## Channel - 通道 40 | 41 | A channel is a private blockchain overlay on a Fabric network, allowing for data isolation and confidentiality. A channel-specific ledger is shared across the peers in the channel, and transacting parties must be properly authenticated to a channel in order to interact with it. Channels are defined by a [Configuration-Block](#Configuration-Block). 42 | 43 | 通道是构建在“Fabric”网络上的私有区块链,实现了数据的隔离和保密。通道特定的账本在通道中是与所有对等节点共享的,并且交易方必须通过该通道的正确验证才能与账本进行交互。通道是由一个“配置块”来定义的。 44 | 45 | ## Commitment - 提交 46 | 47 | Each [Peer](#Peer) on a channel validates ordered blocks of transactions and then commits (writes-appends) the blocks to its replica of the channel [Ledger](#Ledger). Peers also mark each transaction in each block as valid or invalid. 48 | 49 | 一个通道中的每个对等节点都会验证交易的有序区块,然后将区块提交(写或追加)至该通道上账本的各个副本。对等节点也会标记每个区块中的每笔交易的状态是有效或者无效。 50 | 51 | ## Concurrency Control Version Check - 并发控制版本检查(CCVC) 52 | 53 | Concurrency Control Version Check is a method of keeping state in sync across peers on a channel. Peers execute transactions in parallel, and before commitment to the ledger, peers check that the data read at execution time has not changed. If the data read for the transaction has changed between execution time and commitment time, then a Concurrency Control Version Check violation has occurred, and the transaction is marked as invalid on the ledger and values are not updated in the state database. 54 | 55 | CCVC是保持通道中各对等节点间状态同步的一种方法。对等节点并行的执行交易,在交易提交至账本之前,对等节点会检查交易在执行期间读到的数据是否被修改。如果读取的数据在执行和提交之间被改变,就会引发CCVC冲突,该交易就会在账本中被标记为无效,而且值不会更新到状态数据库中。 56 | 57 | ## Configuration Block - 配置区块 58 | 59 | Contains the configuration data defining members and policies for a system chain (ordering service) or channel. Any configuration modifications to a channel or overall network (e.g. a member leaving or joining) will result in a new configuration block being appended to the appropriate chain. 
This block will contain the contents of the genesis block, plus the delta. 60 | 61 | 包含为系统链(排序服务)或通道定义成员和策略的配置数据。对某个通道或整个网络的配置修改(比如,成员离开或加入)都将导致生成一个新的配置区块并追加到适当的链上。这个配置区块会包含创始区块的内容加上增量。 62 | 63 | ## Consensus - 共识 64 | 65 | A broader term overarching the entire transactional flow, which serves to generate an agreement on the order and to confirm the correctness of the set of transactions constituting a block. 66 | 67 | 共识是贯穿整个交易流程的更广义的术语,用于对交易的顺序达成一致,并确认构成区块的交易集的正确性。 68 | 69 | ## Current State - 当前状态 70 | 71 | The current state of the ledger represents the latest values for all keys ever included in its chain transaction log. Peers commit the latest values to ledger current state for each valid transaction included in a processed block. Since current state represents all latest key values known to the channel, it is sometimes referred to as World State. Chaincode executes transaction proposals against current state data. 72 | 73 | ledger的current state表示其chain交易log中所有key的最新值。peer会将处理过的block中的每个交易对应的修改value提交到ledger的current state,由于current state表示channel所知的所有最新的k-v,所以current state也被称为World State。Chaincode执行交易proposal就是针对的current state。 74 | 75 | ## Dynamic Membership - 动态成员 76 | 77 | Fabric supports the addition/removal of members, peers, and ordering service nodes, without compromising the operationality of the overall network. Dynamic membership is critical when business relationships adjust and entities need to be added/removed for various reasons. 78 | 79 | Fabric支持动态添加/移除members、peers和ordering服务节点,而不会影响整个网络的操作性。当业务关系调整或因各种原因需添加/移除实体时,Dynamic Membership至关重要。 80 | 81 | ## Endorsement - 背书 82 | 83 | Refers to the process where specific peer nodes execute a transaction and return a `YES/NO` response to the client application that generated the transaction proposal. Chaincode applications have corresponding endorsement policies, in which the endorsing peers are specified.
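*(译注:下面用一段 Python 示意代码说明“客户端收集各背书节点的 YES/NO 响应,并检查是否满足策略”的思路。`satisfies_policy` 函数以及“至少 m 个背书”的策略形式均为假设的简化,并非 Fabric 背书策略的真实语法或 SDK 接口。)*

```python
# 概念示意:客户端收集各背书节点的 YES/NO 响应,
# 并按"至少需要 min_endorsements 个背书"这一简化策略判断提案是否满足要求。

def satisfies_policy(responses, min_endorsements):
    """responses: {peer 名: 是否背书(True/False)};检查 YES 数量是否达标。"""
    yes = sum(1 for ok in responses.values() if ok)
    return yes >= min_endorsements

responses = {"peer0": True, "peer1": True, "peer2": False}
print(satisfies_policy(responses, 2))  # True:3 个背书节点中有 2 个返回 YES
print(satisfies_policy(responses, 3))  # False:不足 3 个 YES
```

真实的背书策略还支持“百分比”“全部背书节点”等组合条件,见下文 Endorsement policy 词条。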
84 | 85 | Endorsement 是指一个peer执行一个交易并返回`YES/NO`给生成交易proposal的client app 的过程。chaincode具有相应的endorsement policies,其中指定了endorsing peer。 86 | 87 | ## Endorsement policy - 背书策略 88 | 89 | Defines the peer nodes on a channel that must execute transactions attached to a specific chaincode application, and the required combination of responses (endorsements). A policy could require that a transaction be endorsed by a minimum number of endorsing peers, a minimum percentage of endorsing peers, or by all endorsing peers that are assigned to a specific chaincode application. Policies can be curated based on the application and the desired level of resilience against misbehavior (deliberate or not) by the endorsing peers. A distinct endorsement policy for install and instantiate transactions is also required. 90 | 91 | Endorsement policy定义了依赖于特定chaincode执行交易的channel上的peer和响应结果(endorsements)的必要组合条件(即返回Yes或No的条件)。Endorsement policy可指定对于某一chaincode,可以对交易背书的最小背书节点数或者最小背书节点百分比。背书策略可以根据应用程序以及对背书节点不良行为(无论是否故意)的期望抵御水平来制定。在install和instantiate Chaincode(deploy tx)时需要指定背书策略。 92 | 93 | ## Fabric-ca 94 | 95 | Fabric-ca is the default Certificate Authority component, which issues PKI-based certificates to network member organizations and their users. The CA issues one root certificate (rootCert) to each member, one enrollment certificate (eCert) to each authorized user, and a number of transaction certificates (tCerts) for each eCert. 96 | 97 | Fabric-ca是默认的证书颁发机构(CA)组件,它向网络成员组织及其用户颁发基于PKI的证书。CA为每个成员颁发一个根证书(rootCert),为每个授权用户颁发一个注册证书(eCert),为每个注册证书颁发若干交易证书(tCerts)。 98 | 99 | ## Genesis Block - 初始区块 100 | 101 | The configuration block that initializes a blockchain network or channel, and also serves as the first block on a chain.
102 | 103 | Genesis Block是初始化区块链网络或channel的配置区块,也是链上的第一个区块。 104 | 105 | ## Gossip Protocol - Gossip协议 106 | 107 | The gossip data dissemination protocol performs three functions: 1) manages peer discovery and channel membership; 2) disseminates ledger data across all peers on the channel; 3) syncs ledger state across all peers on the channel. Refer to the [Gossip](http://hyperledger-fabric.readthedocs.io/en/latest/gossip.html) topic for more details. 108 | 109 | Gossip数据传输协议有三项功能:1)管理peer发现和channel成员;2)channel上的所有peer间广播账本数据;3)channel上的所有peer间同步账本数据。详情参考[Gossip](http://hyperledger-fabric.readthedocs.io/en/latest/gossip.html)主题。 110 | 111 | ## Initialize - 初始化 112 | 113 | A method to initialize a chaincode application. 114 | 115 | 一个初始化chaincode程序的方法。 116 | 117 | ## Install - 安装 118 | 119 | The process of placing a chaincode on a peer’s file system. 120 | 121 | 将chaincode放到peer的文件系统的过程。*(译注:即将ChaincodeDeploymentSpec信息存到chaincodeInstallPath-chaincodeName.chainVersion文件中)* 122 | 123 | ## Instantiate - 实例化 124 | 125 | The process of starting a chaincode container. 126 | 127 | 启动chaincode容器的过程。*(译注:在lccc中将ChaincodeData保存到state中,然后deploy Chaincode并执行Init方法)* 128 | 129 | ## Invoke - 调用 130 | 131 | Used to call chaincode functions. Invocations are captured as transaction proposals, which then pass through a modular flow of endorsement, ordering, validation, committal. The structure of invoke is a function and an array of arguments. 132 | 133 | 用于调用chaincode内的函数。Chaincode invoke就是一个交易proposal,然后执行模块化的流程(背书、排序、验证、提交)。invoke的结构就是一个函数和一个参数数组。 134 | 135 | ## Leading Peer - 主导节点 136 | 137 | Each [Member](#Member) can own multiple peers on each channel that it subscribes to. One of these peers serves as the leading peer for the channel, in order to communicate with the network ordering service on behalf of the member. The ordering service “delivers” blocks to the leading peer(s) on a channel, who then distribute them to other peers within the same member cluster.
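*(译注:下面的 Python 示意代码描述“排序服务只把区块交付给各成员的 leading peer,再由其分发给同组织其他节点”的传播路径。其中的组织名、节点名与 `deliver_block` 函数均为虚构,仅用于说明。)*

```python
# 概念示意:orderer -> 各成员的 leading peer -> 同一成员下的其他节点。

members = {
    "OrgA": ["peerA0", "peerA1", "peerA2"],  # 约定列表首个节点充当 leading peer
    "OrgB": ["peerB0", "peerB1"],
}

def deliver_block(members):
    """返回每个节点收到区块的来源:leading peer 收自 orderer,其余收自 leader。"""
    received = {}
    for peers in members.values():
        leader = peers[0]
        received[leader] = "from-orderer"
        for p in peers[1:]:
            received[p] = "from-" + leader
    return received

routes = deliver_block(members)
print(routes["peerA0"], routes["peerA1"])  # from-orderer from-peerA0
```

真实网络中 leading peer 可静态指定或由节点选举产生,区块在成员内部再经 Gossip 扩散。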
138 | 139 | 每一个Member在其订阅的channel上可以拥有多个peer,其中一个peer会作为channel的leading peer代表该Member与ordering service通信。ordering service将block传递给leading peer,该peer再将此block分发给同一member下的其他peer。 140 | 141 | ## Ledger - 账本 142 | 143 | A ledger is a channel’s chain and current state data which is maintained by each peer on the channel. 144 | 145 | Ledger是一个channel的chain,加上由channel中每个peer维护的当前状态数据(world state)。 146 | 147 | ## Member - 成员 148 | 149 | A legally separate entity that owns a unique root certificate for the network. Network components such as peer nodes and application clients will be linked to a member. 150 | 151 | 拥有网络唯一根证书的合法独立实体。像peer节点和app client这样的网络组件会链接到一个Member。 152 | 153 | ## Membership Service Provider - MSP 154 | 155 | The Membership Service Provider (MSP) refers to an abstract component of the system that provides credentials to clients, and peers for them to participate in a Hyperledger Fabric network. Clients use these credentials to authenticate their transactions, and peers use these credentials to authenticate transaction processing results (endorsements). While strongly connected to the transaction processing components of the systems, this interface aims to have membership services components defined, in such a way that alternate implementations of this can be smoothly plugged in without modifying the core of transaction processing components of the system. 156 | 157 | MSP是指为client和peer提供证书的系统抽象组件。Client用证书来认证他们的交易;peer用证书认证其交易背书。该接口与系统的交易处理组件密切相关,其目的是对成员服务组件进行定义,使其替代实现可以顺利地插入,而无需修改系统交易处理组件的核心。 158 | 159 | ## Membership Services - 成员服务 160 | 161 | Membership Services authenticates, authorizes, and manages identities on a permissioned blockchain network. The membership services code that runs in peers and orderers both authenticates and authorizes blockchain operations. It is a PKI-based implementation of the Membership Services Provider (MSP) abstraction.
162 | 163 | 成员服务在许可的区块链网络上认证、授权和管理身份。在peer和orderer中运行的成员服务的代码都会认证和授权区块链操作。它是基于PKI的MSP实现。 164 | 165 | The `fabric-ca` component is an implementation of membership services to manage identities. In particular, it handles the issuance and revocation of enrollment certificates and transaction certificates. 166 | 167 | `fabric-ca`组件实现了成员服务,来管理身份。特别的,它处理ECert和TCert的颁发和撤销。 168 | 169 | An enrollment certificate is a long-term identity credential; a transaction certificate is a short-term identity credential which is both anonymous and un-linkable. 170 | 171 | ECert是长期的身份凭证;TCert是短期的身份凭证,是匿名和不可链接的。 172 | 173 | ## Ordering Service - 排序服务或共识服务 174 | 175 | A defined collective of nodes that orders transactions into a block. The ordering service exists independent of the peer processes and orders transactions on a first-come-first-serve basis for all channels on the network. The ordering service is designed to support pluggable implementations beyond the out-of-the-box SOLO and Kafka varieties. The ordering service is a common binding for the overall network; it contains the cryptographic identity material tied to each [Member](#Member). 176 | 177 | 将交易排序放入block的节点的集合。ordering service独立于peer流程之外,并以先到先得的方式为网络上所有的channel作交易排序。ordering service支持可插拔实现,目前默认实现了SOLO和Kafka。ordering service是整个网络的公用binding,包含与每个Member相关的加密材料。 178 | 179 | ## Peer - 节点 180 | 181 | A network entity that maintains a ledger and runs chaincode containers in order to perform read/write operations to the ledger. Peers are owned and maintained by members. 182 | 183 | 一个网络实体,维护ledger并运行Chaincode容器来对ledger执行read/write操作。peer由Member拥有和维护。 184 | 185 | ## Policy - 策略 186 | 187 | There are policies for endorsement, validation, block committal, chaincode management and network/channel management. 188 | 189 | 有背书策略,校验策略,区块提交策略,Chaincode管理策略和网络/通道管理策略。 190 | 191 | ## Proposal - 提案 192 | 193 | A request for endorsement that is aimed at specific peers on a channel.
Each proposal is either an instantiate or an invoke (read/write) request. 194 | 195 | 一种针对channel中某peer的背书请求。每个proposal要么是Chaincode instantiate要么是Chaincode invoke。 196 | 197 | ## Query - 查询 198 | 199 | A query requests the value of a key(s) against the current state. 200 | 201 | 对于current state中某个key的value的查询请求。 202 | 203 | ## Software Development Kit - SDK 204 | 205 | The Hyperledger Fabric client SDK provides a structured environment of libraries for developers to write and test chaincode applications. The SDK is fully configurable and extensible through a standard interface. Components, including cryptographic algorithms for signatures, logging frameworks and state stores, are easily swapped in and out of the SDK. The SDK API uses protocol buffers over gRPC for transaction processing, membership services, node traversal and event handling applications to communicate across the fabric. The SDK comes in multiple flavors - Node.js, Java, and Python. 206 | 207 | SDK为开发人员提供了一个结构化的库环境,用于编写和测试链码应用程序。SDK完全可以通过标准接口实现配置和扩展,像签名的加密算法、日志框架和state存储这样的组件都可以轻松地实现替换。SDK API使用基于gRPC的protocol buffers进行交易处理,成员服务、节点遍历以及事件处理都是据此与fabric通信。目前SDK支持Node.js、Java和Python。 208 | 209 | ## State Database - stateDB 210 | 211 | Current state data is stored in a state database for efficient reads and queries from chaincode. These databases include levelDB and couchDB. 212 | 213 | 为了让Chaincode高效地读取和查询,Current state 数据存储在stateDB中,包括levelDB和couchDB。 214 | 215 | ## System Chain - 系统链 216 | 217 | Contains a configuration block defining the network at a system level. The system chain lives within the ordering service, and similar to a channel, has an initial configuration containing information such as: MSP information, policies, and configuration details. Any change to the overall network (e.g. a new org joining or a new ordering node being added) will result in a new configuration block being added to the system chain.
218 | 219 | 包含在系统级定义网络的配置区块。系统链存在于ordering service中,与channel类似,具有包含以下信息的初始配置:MSP信息、策略和配置详情。对整个网络的任何变化(例如新的Org加入或者添加新的Ordering节点)将导致新的配置区块被添加到系统链。 220 | 221 | The system chain can be thought of as the common binding for a channel or group of channels. For instance, a collection of financial institutions may form a consortium (represented through the system chain), and then proceed to create channels relative to their aligned and varying business agendas. 222 | 223 | 系统链可看做是一个channel或一组channel的公用binding。例如,金融机构的集合可以形成一个财团(以system chain表示),然后根据其相同或不同的业务创建channel。 224 | 225 | ## Transaction - 交易 226 | 227 | An invoke or instantiate operation. Invokes are requests to read/write data from the ledger. Instantiate is a request to start a chaincode container on a peer. 228 | 229 | Chaincode的invoke或instantiate操作。Invoke是从ledger中请求read/write set;Instantiate是请求在peer上启动Chaincode容器。 -------------------------------------------------------------------------------- /docs/gossip_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](http://hyperledger-fabric.readthedocs.io/en/latest/gossip.html) | Xiao Zhang | | 5 | 6 | 7 | Hyperledger Fabric optimizes blockchain network performance, security and scalability by dividing workload across transaction execution (endorsing and committing) peers and transaction ordering nodes. This decoupling of network operations requires a secure, reliable and scalable data dissemination protocol to ensure data integrity and consistency. To meet these requirements, the fabric implements a **gossip data dissemination protocol**. 8 | 9 | Hyperledger Fabric通过把工作负载划分给执行交易(背书和提交)的节点和交易排序节点来优化区块链网络性能、安全性和可扩展性。这种解耦网络操作的方式需要一个安全、可靠、可扩展的数据分发协议来保证数据的完整性和一致性。为了满足这些要求,Fabric应用了**Gossip数据分发协议**。 10 | 11 | ## Gossip协议(Gossip protocol) 12 | 13 | Peers leverage gossip to broadcast ledger and channel data in a scalable fashion.
Gossip messaging is continuous, and each peer on a channel is constantly receiving current and consistent ledger data, from multiple peers. Each gossiped message is signed, thereby allowing Byzantine participants sending faked messages to be easily identified and the distribution of the message(s) to unwanted targets to be prevented. Peers affected by delays, network partitions or other causations resulting in missed blocks, will eventually be synced up to the current ledger state by contacting peers in possession of these missing blocks. 14 | 15 | 节点利用Gossip来以一种可扩展的方式广播账本和通道数据。Gossip消息的传输是持续进行的,通道上的每个节点都在不断地从多个节点接收当前且一致的账本数据。每个通过Gossip传输的消息都会被签名,因此由拜占庭节点发送的伪造的消息将会很容易地被识别出来,而且可以防止将消息分发到不希望发送的目标处。节点因为受到延迟、网络分区或者其他原因的影响导致缺少部分区块的情况,最终将通过联系已拥有这些缺失的区块的节点的方式,与当前账本状态进行同步。 16 | 17 | The gossip-based data dissemination protocol performs three primary functions on a Fabric network: 18 | 1. Manages peer discovery and channel membership, by continually identifying available member peers, and eventually detecting peers that have gone offline. 19 | 2. Disseminates ledger data across all peers on a channel. Any peer with data that is out of sync with the rest of the channel identifies the missing blocks and syncs itself by copying the correct data. 20 | 3. Bring newly connected peers up to speed by allowing peer-to-peer state transfer update of ledger data. 21 | 22 | 基于Gossip的数据传播协议在Fabric网络上执行三个主要功能: 23 | 1. 通过不断识别可用的成员节点并最终监测节点离线状态的方式,对节点的发现和通道中的成员进行管理。 24 | 2. 在通道中的所有节点间传播账本数据。任何数据未同步的节点都可以通过通道中其他节点来标识缺失的区块,并通过复制正确的数据来进行同步。 25 | 3. 通过允许点对点状态传输更新账本数据,使新加入连接的节点快速得到同步。 26 | 27 | Gossip-based broadcasting operates by peers receiving messages from other peers on the channel, and then forwarding these messages to a number of randomly-selected peers on the channel, where this number is a configurable constant. Peers can also exercise a pull mechanism, rather than waiting for delivery of a message.
This cycle repeats, with the result of channel membership, ledger and state information continually being kept current and in sync. For dissemination of new blocks, the **leader** peer on the channel pulls the data from the ordering service and initiates gossip dissemination to peers. 28 | 29 | 基于Gossip的广播由节点接收来自该通道中的其他节点的消息,然后将这些消息转发到通道上的多个随机选择的节点。这个节点数是个可配置的常数。节点也可以主动拉取消息,而不是等待消息发送。如此循环往复,使通道成员关系、账本和状态信息持续保持最新并彼此同步。为了传播新区块,通道中的**领导者**节点从排序服务中拉取数据,并向其他节点发送Gossip消息。 30 | 31 | ## Gossip消息传输(Gossip messaging) 32 | 33 | Online peers indicate their availability by continually broadcasting “alive” messages, with each containing the **public key infrastructure (PKI)** ID and the signature of the sender over the message. Peers maintain channel membership by collecting these alive messages; if no peer receives an alive message from a specific peer, this “dead” peer is eventually purged from channel membership. Because “alive” messages are cryptographically signed, malicious peers can never impersonate other peers, as they lack a signing key authorized by a root certificate authority (CA). 34 | 35 | 在线的节点通过持续地广播“活跃”消息来表明他们的可用性,每条消息都包含**公钥基础设施(PKI)**的ID和消息发送者对消息的签名。节点通过收集这些活跃消息来维护通道成员身份。如果没有节点能从某个特定的节点收到活跃消息,那么这个“死亡”的节点最终将从通道成员身份列表中被删除。由于“活跃”信息是通过密码学算法进行签名的,因此恶意节点无法伪装成其他节点,因为他们缺少根证书颁发机构(CA)授权的签名密钥。 36 | 37 | In addition to the automatic forwarding of received messages, a state reconciliation process synchronizes **world state** across peers on each channel. Each peer continually pulls blocks from other peers on the channel, in order to repair its own state if discrepancies are identified. Because fixed connectivity is not required to maintain gossip-based data dissemination, the process reliably provides data consistency and integrity to the shared ledger, including tolerance for node crashes.
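*(译注:上文“从其他节点拉取区块、修复自身状态”的反熵(anti-entropy)过程,可以用如下 Python 示意代码概括。`pull_missing` 是为说明而虚构的简化函数,并非 Fabric 的真实实现。)*

```python
# 概念示意:缺块节点对比邻居持有的区块高度,拉取缺失区块并合并,
# 使本地账本与通道的最新状态收敛一致。

def pull_missing(local_blocks, neighbor_blocks):
    """返回补齐缺失区块后的本地区块高度列表(按高度排序)。"""
    missing = set(neighbor_blocks) - set(local_blocks)
    return sorted(set(local_blocks) | missing)

local = [1, 2, 5]           # 本节点因网络分区缺少区块 3、4
neighbor = [1, 2, 3, 4, 5]  # 邻居节点持有完整的链
print(pull_missing(local, neighbor))  # [1, 2, 3, 4, 5]
```

真实实现中,拉取是周期性、随机选邻居进行的,且每个区块在追加前还要验证签名与哈希链。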
38 | 39 | 除了自动转发接收到的消息之外,状态协调(state reconciliation)过程还会在每个通道的节点间同步**世界状态**。每个节点不停地从通道中的其他节点中提取区块,以便在出现差异时修正自己的状态。由于不需要固定连接来维护基于Gossip的数据传播,因此该流程可以可靠地为共享账本保证数据的一致性和完整性,包括对节点崩溃的容错。 40 | 41 | Because channels are segregated, peers on one channel cannot message or share information on any other channel. Though any peer can belong to multiple channels, partitioned messaging prevents blocks from being disseminated to peers that are not in the channel by applying message routing policies based on peers’ channel subscriptions. 42 | 43 | 由于通道之间相互隔离,一个通道上的节点不能在其他任何通道上发送或共享信息。尽管任何节点都可能属于多个通道,但是通过将基于节点通道订阅的机制作为消息分发策略,节点无法将被分隔开的消息传播给不在通道中的节点。 44 | 45 | **Notes:** 46 | 1. Security of point-to-point messages are handled by the peer TLS layer, and do not require signatures. Peers are authenticated by their certificates, which are assigned by a CA. Although TLS certs are also used, it is the peer certificates that are authenticated in the gossip layer. Ledger blocks are signed by the ordering service, and then delivered to the leader peers on a channel. 2. Authentication is governed by the membership service provider for the peer. When the peer connects to the channel for the first time, the TLS session binds with fabric membership identity. This essentially authenticates each peer to the connecting peer, with respect to membership in the network and channel. 47 | 48 | 49 | **注意:** 50 | 1. 点对点消息的安全性由节点的TLS层处理,不需要签名。节点通过其由CA分配的证书进行身份验证。虽然也会用到TLS证书,但在Gossip层中被认证的是节点证书。账本中的区块由排序服务进行签名,然后传递给通道中的领导者节点。 51 | 2.
认证过程由节点的成员管理服务的提供者进行管理。当节点第一次连接到通道中的时候,TLS会话将与Fabric成员身份绑定。这样本质上使每个节点与相连的节点进行认证,从而与网络和通道中的成员身份关联起来。 -------------------------------------------------------------------------------- /docs/img/blocks-3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChainNova/hyperledgerDocs/5f489072af66ff2f9d43000e2aee15bdc4285d49/docs/img/blocks-3.png -------------------------------------------------------------------------------- /docs/img/fabric-ca.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChainNova/hyperledgerDocs/5f489072af66ff2f9d43000e2aee15bdc4285d49/docs/img/fabric-ca.png -------------------------------------------------------------------------------- /docs/img/flow-4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChainNova/hyperledgerDocs/5f489072af66ff2f9d43000e2aee15bdc4285d49/docs/img/flow-4.png -------------------------------------------------------------------------------- /docs/img/sdk-image01.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChainNova/hyperledgerDocs/5f489072af66ff2f9d43000e2aee15bdc4285d49/docs/img/sdk-image01.png -------------------------------------------------------------------------------- /docs/img/sdk-image02.jpg: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChainNova/hyperledgerDocs/5f489072af66ff2f9d43000e2aee15bdc4285d49/docs/img/sdk-image02.jpg -------------------------------------------------------------------------------- /docs/img/sdk-image03.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChainNova/hyperledgerDocs/5f489072af66ff2f9d43000e2aee15bdc4285d49/docs/img/sdk-image03.png 
-------------------------------------------------------------------------------- /docs/img/step0.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChainNova/hyperledgerDocs/5f489072af66ff2f9d43000e2aee15bdc4285d49/docs/img/step0.png -------------------------------------------------------------------------------- /docs/img/step1.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChainNova/hyperledgerDocs/5f489072af66ff2f9d43000e2aee15bdc4285d49/docs/img/step1.png -------------------------------------------------------------------------------- /docs/img/step2.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChainNova/hyperledgerDocs/5f489072af66ff2f9d43000e2aee15bdc4285d49/docs/img/step2.png -------------------------------------------------------------------------------- /docs/img/step3.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChainNova/hyperledgerDocs/5f489072af66ff2f9d43000e2aee15bdc4285d49/docs/img/step3.png -------------------------------------------------------------------------------- /docs/img/step4.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChainNova/hyperledgerDocs/5f489072af66ff2f9d43000e2aee15bdc4285d49/docs/img/step4.png -------------------------------------------------------------------------------- /docs/img/step5.png: -------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChainNova/hyperledgerDocs/5f489072af66ff2f9d43000e2aee15bdc4285d49/docs/img/step5.png -------------------------------------------------------------------------------- /docs/img/step6.png: 
-------------------------------------------------------------------------------- https://raw.githubusercontent.com/ChainNova/hyperledgerDocs/5f489072af66ff2f9d43000e2aee15bdc4285d49/docs/img/step6.png -------------------------------------------------------------------------------- /docs/index.md: -------------------------------------------------------------------------------- 1 | ***欢迎阅读!!!*** 2 | 3 | 4 | ***欢迎贡献!!!*** 5 | 6 | ## 简介 7 | 8 | **Hyperledger国际化工作组**是Hyperledger中国工作组(TWGC)下属的一个小组,主要负责相关文档的中文编写和翻译,以及组织讨论、教育培训活动等。 9 | 10 | 目前小组有成员100余名,活跃贡献者20余名,已完成文章30余篇。 11 | 12 | ## 如何贡献 13 | 14 | 以前我们用[Hyperledger Wiki](https://wiki.hyperledger.org/groups/twgc/team_ie)管理文章,但Wiki读写操作有诸多不便,为此我们将文档转移到[github上](https://github.com/ChainNova/hyperledgerDocs),以使大家更方便地阅读和编辑贡献资源。 15 | 16 | ***贡献内容包括但不限于:文档翻译、知识总结、经验教训、好文链接、奇思妙想...*** 17 | 18 | ***如果您不想作如下操作,可将内容直接通过邮件(见页面最下方)发送给工作组,我们为您发布*** 19 | 20 | ### 加入组织 21 | 22 | 1. 加入微信群 23 | 24 | 目前微信群已超一百人,只能通过邀请方式加入。您可以请认识的小伙伴拉你入群,也可以联系管理员(见页面最下方)。 25 | 26 | 2. 加入wiki 27 | 28 | Hyperledger Wiki是官方的信息渠道,所以请将您的信息加入其中。[点击进入](https://wiki.hyperledger.org/groups/twgc/team_ie),登录,然后编辑`Volunteers`表格,将自己的信息写入并保存。 29 | 30 | ### 贡献资源 31 | 32 | 目前以[github](https://github.com)管理文档,以[github pages](https://pages.github.com/)展示文档,以[MkDocs](http://www.mkdocs.org/)构建文档。其中文档都是以Markdown编写。 33 | 34 | #### 准备 35 | 36 | 1. github账号 37 | 2. [安装git](https://git-scm.com/book/zh/v1/%E8%B5%B7%E6%AD%A5-%E5%AE%89%E8%A3%85-Git) 38 | 3. 安装[MkDocs](http://www.mkdocs.org/)(可参照[中文文档](http://markdown-docs-zh.readthedocs.io/zh_CN/latest/)) 39 | 40 | #### 本地编辑&预览 41 | 42 | 1. 下载源码 43 | 44 | git clone https://github.com/ChainNova/hyperledgerDocs.git 45 | 46 | 2.
编辑预览 47 | 48 | **注意:**文档开头固定以下格式: 49 | 50 | 51 | | 原文 | 作者 | 审核修正 | 52 | | --- | --- | --- | 53 | | [原文](<原文路径>) | <如果你是作者,请在此留名> | <如果你是修改者,请在此留名,可以多个> | 54 | 55 | 图片放到`hyperledgerDocs/docs/img`里,文档中以`img/xx.png`引用。 56 | 57 | * 修改已有文档:进入`hyperledgerDocs/docs`目录,编辑对应文件。 58 | 59 | * 添加新文档:进入`hyperledgerDocs/docs`目录,添加新的`Markdown文件`并编辑内容;然后编辑`mkdocs.yml`,将新加文档按如下格式添加到配置文件中。 60 | 61 | pages: 62 | - 欢迎: index.md 63 | - 词汇表: glossary.md 64 | - 快速入门: getting_started.md 65 | - 协议规范: protocol-spec_zh.md 66 | - Fabric教程: 67 | - 构建第一个fabric网络: build_network_zh.md 68 | - 编写第一个应用: write_first_app_zh.md 69 | - Chaincode: chaincode_zh.md 70 | 71 | * 本地预览:在`hyperledgerDocs`目录下执行 72 | 73 | mkdocs serve 74 | 75 | 然后浏览器打开`http://127.0.0.1:8000/`找到相应页面。 76 | 77 | #### 线上提交&部署 78 | 79 | 本地预览无误后,即可提交到线上供大家阅读。 80 | 81 | 在`hyperledgerDocs`目录下执行 82 | 83 | ./build.sh 84 | 85 | 如无报错,浏览器打开`https://chainnova.github.io/hyperledgerDocs/`查看修改结果。 86 | 87 | **线上确认成功后,千万不要忘了将本地修改的源文件提交到github仓库:** 88 | 89 | git add . 90 | git commit -m "your message" 91 | git push -------------------------------------------------------------------------------- /docs/kafka_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](http://hyperledger-fabric.readthedocs.io/en/latest/kafka.html) | Shaoxi Qiu | | 5 | 6 | 7 | 须知(Caveat emptor) 8 | ------------- 9 | 10 | This document assumes that the reader generally knows how to set up a Kafka 11 | cluster and a ZooKeeper ensemble. The purpose of this guide is to identify the 12 | steps you need to take so as to have a set of Hyperledger Fabric ordering 13 | service nodes (OSNs) use your Kafka cluster and provide an ordering service to 14 | your blockchain network.
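*(译注:下文“每个通道对应 Kafka 中一个单分区 topic,排序节点把交易生产到分区、再按序消费并切块”的流程,可以用如下 Python 示意代码概括。这里用字典和列表模拟 topic 分区,函数名均为虚构,并非真实的 Kafka 或 Fabric 接口。)*

```python
# 概念示意:channel -> 单分区 topic;排序节点生产交易,再按偏移量顺序消费并切块。

partitions = {}  # channel 名 -> 该通道唯一的分区(按偏移量有序的交易日志)

def produce(channel, tx):
    """排序节点把一笔交易生产(produce)到通道对应的单分区。"""
    partitions.setdefault(channel, []).append(tx)

def cut_blocks(channel, batch_size):
    """按接收顺序消费分区日志,每 batch_size 笔交易切出一个区块。"""
    log = partitions.get(channel, [])
    return [log[i:i + batch_size] for i in range(0, len(log), batch_size)]

for i in range(5):
    produce("mychannel", "tx%d" % i)
blocks = cut_blocks("mychannel", 2)
print(blocks)  # [['tx0', 'tx1'], ['tx2', 'tx3'], ['tx4']]
```

单分区保证了同一通道内交易的全序;真实实现中切块还受超时和 ``AbsoluteMaxBytes`` 等条件触发。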
15 | 16 | 该文档假设读者已经基本了解如何去搭建Kafka集群和ZooKeeper集群。本文档的目的是确定您使用Kafka集群搭建一套Hyperledger Fabric排序服务节点集(OSNs)以及为你的区块链网络提供排序服务所需要采取的步骤。 17 | 18 | 19 | 概览(Big picture) 20 | ----------- 21 | 22 | Each channel maps to a separate single-partition topic in Kafka. 23 | 24 | 每一个通道(channel)在Kafka中被映射到一个单独的单分区(partition)类别(topic)。(译者注:通常每个Topic包含一个或多个Partition,此处每个Topic只包含一个Partition) 25 | 26 | When an OSN receives transactions via the ``Broadcast`` RPC, it checks to make sure 27 | that the broadcasting client has permissions to write on the channel, then 28 | relays (i.e. produces) those transactions to the appropriate partition in Kafka. 29 | 30 | 当排序节点通过 ``Broadcast`` RPC 接收到交易时,它会检查广播交易的客户端是否有权限在该通道(channel)上写入数据,然后将这些交易转发(即生产,produce)到Kafka中相应的分区(partition)。 31 | 32 | This partition is also consumed by the OSN which groups the received 33 | transactions into blocks locally, persists them in its local ledger, and serves 34 | them to receiving clients via the ``Deliver`` RPC. 35 | 36 | 该分区也被排序节点所消费(consume),排序节点将接收到的交易分组写入到本地区块,将其保留在本地账本中,并通过``Deliver`` RPC提供给需要接收的客户端。 37 | 38 | For low-level details, refer 39 | to `the document that describes how we came to this design 40 | `_ 41 | -- Figure 8 is a schematic representation of the process described above. 42 | 43 | 更多详细的信息,请参考`the document that describes how we came to this design 44 | `_ 45 | -- 图8是上述过程的示意图。 46 | 47 | 48 | 步骤(Steps) 49 | ----- 50 | 51 | Let ``K`` and ``Z`` be the number of nodes in the Kafka cluster and the 52 | ZooKeeper ensemble respectively: 53 | 54 | 设定变量 ``K`` 和 ``Z`` 分别是Kafka集群和ZooKeeper集群的节点数量: 55 | 56 | i. At a minimum, ``K`` should be set to 4. (As we will explain in Step 4 below, 57 | this is the minimum number of nodes necessary in order to exhibit crash fault 58 | tolerance, i.e. with 4 brokers, you can have 1 broker go down, all channels will 59 | continue to be writeable and readable, and new channels can be created.) 60 | 61 | i.
``K``的最小值需要是4。(我们将在步骤4中解释,这是实现 崩溃故障容错(crash fault tolerance) 所需要的最小数值,也就是说, 62 | 4个节点可以容许1个节点宕机,所有的通道能够继续读写且可以创建通道。)(译注:Kafka节点被称为broker) 63 | 64 | ii. ``Z`` will either be 3, 5, or 7. It has to be an odd number to avoid 65 | split-brain scenarios, and larger than 1 in order to avoid single point of 66 | failures. Anything beyond 7 ZooKeeper servers is considered an overkill. 67 | 68 | ii. ``Z``可以是3、5或者7。它必须是奇数,以避免脑裂(split-brain)场景,且大于1以避免单点故障。 69 | 超过7个ZooKeeper服务器则被认为是多余的。 70 | 71 | Proceed as follows: 72 | 73 | 请按照以下步骤进行: 74 | 75 | 1. Orderers: **Encode the Kafka-related information in the network's genesis 76 | block.** If you are using ``configtxgen``, edit ``configtx.yaml`` -- or pick a 77 | preset profile for the system channel's genesis block -- so that: 78 | 79 | Orderers: **将 Kafka 相关信息写入网络的初始区块.** 如果你使用 ``configtxgen`` 工具, 编辑 ``configtx.yaml`` 文件-- 或者挑一个现成的系统通道的初始区块配置文件 -- 其中: 80 | 81 | a. ``Orderer.OrdererType`` is set to ``kafka``. 82 | 83 | a. ``Orderer.OrdererType`` 字段被设置为 ``kafka``. 84 | 85 | b. ``Orderer.Kafka.Brokers`` contains the address of *at least two* of the 86 | Kafka brokers in your cluster in ``IP:port`` notation. The list does not 87 | need to be exhaustive. (These are your seed brokers.) 88 | 89 | b. ``Orderer.Kafka.Brokers`` 字段包含 *至少两个* Kafka集群中的节点``IP:port`` 样式的地址。这个列表没有必要详尽无遗(这些是你的 seed brokers.) 90 | 91 | 2. Orderers: **Set the maximum block size.** Each block will have at most 92 | `Orderer.AbsoluteMaxBytes` bytes (not including headers), a value that you can 93 | set in ``configtx.yaml``. Let the value you pick here be ``A`` and make note of 94 | it -- it will affect how you configure your Kafka brokers in Step 4. 95 | 96 | Orderers: **设置区块最大容量.** 每一个区块最多只能有 `Orderer.AbsoluteMaxBytes` bytes的容量(不含区块头信息), 这是一个你可以修改的值,存放在 ``configtx.yaml`` 配置文件中. 假设此处你设置的数值为``A``,将此数字记下来 -- 这会影响你在步骤4中对于Kafka brokers 的配置. 97 | 98 | 3. Orderers: **Create the genesis block.** Use ``configtxgen``.
The settings you 99 | picked in Steps 1 and 2 above are system-wide settings, i.e. they apply across 100 | the network for all the OSNs. Make note of the genesis block's location. 101 | 102 | Orderers: 使用 ``configtxgen`` 工具 **创建初始区块.** 在步骤1和2中的设置是全局的设置, 也就是说这些设置的生效范围是网络中所有的排序节点. 记录下初始区块的位置. 103 | 104 | 4. Kafka cluster: **Configure your Kafka brokers appropriately.** Ensure that 105 | every Kafka broker has these keys configured: 106 | 107 | Kafka 集群: **适当配置你的Kafka集群.** 确保每一个Kafka节点都配置了以下的值: 108 | 109 | a. ``unclean.leader.election.enable = false`` -- Data consistency is key in 110 | a blockchain environment. We cannot have a channel leader chosen outside of 111 | the in-sync replica set, or we run the risk of overwriting the offsets that 112 | the previous leader produced, and --as a result-- rewrite the blockchain 113 | that the orderers produce. 114 | 115 | a. ``unclean.leader.election.enable = false`` -- 数据一致性是区块链环境的关键. 我们不能让同步副本集之外的副本被选为channel leader, 否则就要冒覆盖前一个leader所产生的偏移量的风险, 其结果就是重写orderers所产生的区块链数据. 116 | 117 | b. ``min.insync.replicas = M`` -- Where you pick a value ``M`` such that 118 | 1 < M < N (see ``default.replication.factor`` below). Data is considered 119 | committed when it is written to at least ``M`` replicas (which are then 120 | considered in-sync and belong to the in-sync replica set, or ISR). In any 121 | other case, the write operation returns an error. Then: 122 | 123 | b. ``min.insync.replicas = M`` -- ``M`` 的值需要满足 124 | 1 < M < N (N的值参考后面的 ``default.replication.factor``). 当数据被写入到至少 ``M`` 个副本时(这些副本被认为是同步的, 属于同步副本集, 也称为ISR), 数据才被认为是完成提交. 其他情况, 写入操作返回错误信息. 然后: 125 | 126 | i. If up to N-M replicas -- out of the N that the channel data is 127 | written to -- become unavailable, operations proceed normally. 128 | i. 如果有 N-M 个副本不可访问, 操作将正常进行. 129 | ii. If more replicas become unavailable, Kafka cannot maintain an ISR 130 | set of M, so it stops accepting writes. Reads work without issues.
131 | The channel becomes writeable again when M replicas get in-sync. 132 | ii. 如果更多副本不可访问, Kafka 无法维持数量为 M 的同步副本集(ISR), 所以它会停止接受写入操作. 读操作可以正常运行. 133 | 当M个副本重新同步后,通道就可以再次变为可写入状态. 134 | 135 | 136 | c. ``default.replication.factor = N`` -- Where you pick a value ``N`` such 137 | that N < K. A replication factor of ``N`` means that each channel will have 138 | its data replicated to ``N`` brokers. These are the candidates for the ISR 139 | set of a channel. As we noted in the ``min.insync.replicas section`` above, 140 | not all of these brokers have to be available all the time. ``N`` should be 141 | set *strictly smaller* to ``K`` because channel creations cannot go forward 142 | if less than ``N`` brokers are up. So if you set N = K, a single broker 143 | going down means that no new channels can be created on the blockchain 144 | network -- the crash fault tolerance of the ordering service is 145 | non-existent. 146 | 147 | c. ``default.replication.factor = N`` -- 选择一个 ``N`` 的数值满足 N < K (K为Kafka集群节点数量). 参数 ``N`` 表示每个channel 的数据会复制到 ``N`` 个 broker 中. 这些是 channel 同步副本集的候选. 正如前面 ``min.insync.replicas`` 部分所说的, 不是所有broker都需要是随时可用的. ``N`` 的取值必须*严格小于* ``K`` , 因为channel的创建需要不少于 ``N`` 个broker是启动的. 所以如果设置 N = K, 一个 broker 宕机就意味着区块链网络不能再创建channel. 那么故障容错的排序服务也就不存在了. 148 | 149 | 150 | d. ``message.max.bytes`` and ``replica.fetch.max.bytes`` should be set to a 151 | value larger than ``A``, the value you picked in 152 | ``Orderer.AbsoluteMaxBytes`` in Step 2 above. Add some buffer to account for 153 | headers -- 1 MiB is more than enough. The following condition applies: 154 | 155 | d. ``message.max.bytes`` 和 ``replica.fetch.max.bytes`` 的值需要大于 ``A``, 就是在步骤2中选取的 ``Orderer.AbsoluteMaxBytes`` 的值. 再为区块头增加一些余量 -- 1 MiB 就足够了.
需要满足以下条件: 156 | 157 | :: 158 | 159 | Orderer.AbsoluteMaxBytes < replica.fetch.max.bytes <= message.max.bytes 160 | 161 | (For completeness, we note that ``message.max.bytes`` should be strictly 162 | smaller to ``socket.request.max.bytes`` which is set by default to 100 MiB. 163 | If you wish to have blocks larger than 100 MiB you will need to edit the 164 | hard-coded value in ``brokerConfig.Producer.MaxMessageBytes`` in 165 | ``fabric/orderer/kafka/config.go`` and rebuild the binary from source. 166 | This is not advisable.) 167 | 168 | (补充, 我们注意到 ``message.max.bytes`` 需要严格小于 ``socket.request.max.bytes`` , 后者的默认值是 100 MiB. 如果你希望区块大于 100 MiB, 你需要修改硬编码的变量 ``brokerConfig.Producer.MaxMessageBytes`` , 代码位置是 ``fabric/orderer/kafka/config.go`` , 并从源码重新编译二进制文件, 不建议这么做.) 169 | 170 | e. ``log.retention.ms = -1``. Until the ordering service adds 171 | support for pruning of the Kafka logs, you should disable time-based 172 | retention and prevent segments from expiring. (Size-based retention -- see 173 | ``log.retention.bytes`` -- is disabled by default in Kafka at the time of 174 | this writing, so there's no need to set it explicitly.) 175 | 176 | e. ``log.retention.ms = -1``. 在排序服务增加对 Kafka 日志修剪(pruning)的支持之前, 应该禁用基于时间的日志保留策略, 防止日志段(segment)过期被删除. (基于大小的保留策略 -- 见参数 ``log.retention.bytes`` -- 在本文写作时在 Kafka 中是默认禁用的, 所以没有必要显式设置它. ) 177 | 178 | Based on what we've described above, the minimum allowed values for ``M`` 179 | and ``N`` are 2 and 3 respectively. This configuration allows for the 180 | creation of new channels to go forward, and for all channels to continue to 181 | be writeable. 182 | 183 | 基于上文所描述的, ``M`` 和 ``N`` 允许的最小值分别为 2 和 3 . 这个配置既允许继续创建新的 channel, 也能让所有 channel 保持可写入. 184 | 185 | 5. Orderers: **Point each OSN to the genesis block.** Edit 186 | ``General.GenesisFile`` in ``orderer.yaml`` so that it points to the genesis 187 | block created in Step 3 above. (While at it, ensure all other keys in that YAML 188 | file are set appropriately.)
189 | 190 | Orderers: **将所有排序节点指向初始区块.** 编辑 ``orderer.yaml`` 文件中的参数 ``General.GenesisFile`` 使其指向步骤3中所创建的初始区块. (同时, 确保YAML文件中所有其他参数都是正确的.) 191 | 192 | 6. Orderers: **Adjust polling intervals and timeouts.** (Optional step.) 193 | Orderers: **调整轮询间隔和超时时间.** (可选步骤.) 194 | 195 | a. The ``Kafka.Retry`` section in the ``orderer.yaml`` file allows you to 196 | adjust the frequency of the metadata/producer/consumer requests, as well as 197 | the socket timeouts. (These are all settings you would expect to see in a 198 | Kafka producer or consumer.) 199 | 200 | a. ``orderer.yaml`` 文件中的 ``Kafka.Retry`` 区域让你能够调整 metadata/producer/consumer 请求的频率以及socket的超时时间. (这些都是你在 Kafka 生产者或消费者配置中常见的设置.) 201 | 202 | b. Additionally, when a new channel is created, or when an existing channel 203 | is reloaded (in case of a just-restarted orderer), the orderer interacts 204 | with the Kafka cluster in the following ways: 205 | 206 | b. 另外, 当一个新 channel 被创建, 或当一个现有的 channel 被重新加载(orderer 刚重启的情况), orderer 通过以下方式和 Kafka 集群进行交互. 207 | 208 | a. It creates a Kafka producer (writer) for the Kafka partition that 209 | corresponds to the channel. 210 | 211 | a. 为 channel 对应的 Kafka 分区 创建一个 Kafka 生产者. 212 | 213 | b. It uses that producer to post a no-op ``CONNECT`` message to that 214 | partition. 215 | 216 | b. 通过该生产者向这个分区发送一个空操作(no-op)的 ``CONNECT`` 消息. 217 | 218 | c. It creates a Kafka consumer (reader) for that partition. 219 | 220 | c. 为这个分区创建一个 Kafka 消费者. 221 | 222 | If any of these steps fail, you can adjust the frequency with which they 223 | are repeated. Specifically they will be re-attempted every 224 | ``Kafka.Retry.ShortInterval`` for a total of ``Kafka.Retry.ShortTotal``, 225 | and then every ``Kafka.Retry.LongInterval`` for a total of 226 | ``Kafka.Retry.LongTotal`` until they succeed. Note that the orderer will 227 | be unable to write to or read from a channel until all of the steps 228 | above have been completed successfully. 229 | 230 | 如果任意步骤出错, 你可以调整其重复的频率.
231 | 这些步骤会每隔 ``Kafka.Retry.ShortInterval`` 重试一次, 持续总时长 ``Kafka.Retry.ShortTotal``; 232 | 之后再每隔 ``Kafka.Retry.LongInterval`` 重试一次, 持续总时长 ``Kafka.Retry.LongTotal``, 直到成功. 233 | 需要注意的是, 在所有上述步骤都成功执行之前, orderer 不能读写该 channel 的数据. 234 | 235 | 7. **Set up the OSNs and Kafka cluster so that they communicate over SSL.** 236 | (Optional step, but highly recommended.) Refer to `the Confluent guide 237 | `_ for the Kafka cluster side of 238 | the equation, and set the keys under ``Kafka.TLS`` in ``orderer.yaml`` on every 239 | OSN accordingly. 240 | 241 | **将排序节点和 Kafka 集群间设置为通过 SSL 通讯.** 242 | (可选步骤,强烈推荐) 参考 `the Confluent guide 243 | `_ 文档中关于 Kafka 集群的设置, 来设置每个排序节点 ``orderer.yaml`` 文件中 ``Kafka.TLS`` 部分的内容. 244 | 245 | 8. **Bring up the nodes in the following order: ZooKeeper ensemble, Kafka 246 | cluster, ordering service nodes.** 247 | 248 | **启动节点请按照以下顺序: ZooKeeper 集群, Kafka 集群, 排序节点** 249 | 250 | 251 | 其他注意事项(Additional considerations) 252 | ------------------------- 253 | 254 | 1. **Preferred message size.** In Step 2 above (see `Steps`_ section) you can 255 | also set the preferred size of blocks by setting the 256 | ``Orderer.Batchsize.PreferredMaxBytes`` key. Kafka offers higher throughput when 257 | dealing with relatively small messages; aim for a value no bigger than 1 MiB. 258 | 259 | **首选的消息大小.** 在上面的步骤2中, 你也能通过参数 ``Orderer.Batchsize.PreferredMaxBytes`` 设置首选的区块大小. 260 | Kafka 处理相对较小的消息时有更高的吞吐量; 目标是把该值设置为不超过 1 MiB. 261 | 262 | 2. **Using environment variables to override settings.** You can override a 263 | Kafka broker or a ZooKeeper server's settings by using environment variables. 264 | Replace the dots of the configuration key with underscores -- 265 | e.g. ``KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false`` will allow you to override 266 | the default value of ``unclean.leader.election.enable``. The same applies to the 267 | OSNs for their *local* configuration, i.e. what can be set in ``orderer.yaml``.
268 | For example ``ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s`` allows you to override the 269 | default value for ``Orderer.Kafka.Retry.ShortInterval``. 270 | 271 | **使用环境变量重写设置.** 你能够通过设置环境变量来重写 Kafka 节点和 Zookeeper 服务器的设置. 替换配置参数中的 点 为 下划线 -- 例如 ``KAFKA_UNCLEAN_LEADER_ELECTION_ENABLE=false`` 环境变量重写配置参数 ``unclean.leader.election.enable``. 环境变量重写同样适用于排序节点的*本地*配置, 即 ``orderer.yaml`` 中所能设置的. 例如 ``ORDERER_KAFKA_RETRY_SHORTINTERVAL=1s`` 环境变量可以重写本地配置文件中的 ``Orderer.Kafka.Retry.ShortInterval``. 272 | 273 | Supported Kafka versions and upgrading 274 | -------------------------------------- 275 | 支持的 Kafka 版本和升级 276 | -------------------------------------- 277 | 278 | Supported Kafka versions for v1 are ``0.9`` and ``0.10``. (Hyperledger Fabric 279 | uses the `sarama client library `_ 280 | and vendors a version of it that supports Kafka 0.9 and 0.10.) 281 | 282 | Fabric V1 支持的 Kafka 版本是 ``0.9`` 和 ``0.10``. (Hyperledger Fabric 使用代码库: `sarama client library `_ 支持的 Kafka 版本是 ``0.9`` 和 ``0.10``) 283 | 284 | Out of the box the Kafka version defaults to ``0.9.0.1``. If you wish to use a 285 | different supported version, specify a supported version using the 286 | ``Kafka.Version`` key in ``orderer.yaml``. 287 | 288 | 默认的 Kafka 版本是 ``0.9.0.1``. 如果你使用其他支持的版本, 修改 ``orderer.yaml`` 文件中的参数 ``Kafka.Version``. 289 | 290 | The current supported Kafka versions are: 291 | 292 | 目前支持的 Kafka 版本是: 293 | 294 | * ``Version: 0.9.0.1`` 295 | * ``Version: 0.10.0.0`` 296 | * ``Version: 0.10.0.1`` 297 | * ``Version: 0.10.1.0`` 298 | 299 | 300 | 调试(Debugging) 301 | --------- 302 | 303 | Set ``General.LogLevel`` to ``DEBUG`` and ``Kafka.Verbose`` in ``orderer.yaml`` 304 | to ``true``. 305 | 306 | 设置 ``orderer.yaml`` 文件中 ``General.LogLevel`` 为 ``DEBUG`` 和 ``Kafka.Verbose`` 为 ``true``. 307 | 308 | 309 | 例子(Example) 310 | ------- 311 | 312 | Sample Docker Compose configuration files inline with the recommended settings 313 | above can be found under the ``fabric/bddtests`` directory. 
Look for 314 | ``dc-orderer-kafka-base.yml`` and ``dc-orderer-kafka.yml``. 315 | 316 | 包含了推荐的设置的Docker Compose 配置文件示例能够在 ``fabric/bddtests`` 目录中找到. 包括 ``dc-orderer-kafka-base.yml`` 文件和 ``dc-orderer-kafka.yml`` 文件. 317 | 318 | .. Licensed under Creative Commons Attribution 4.0 International License 319 | https://creativecommons.org/licenses/by/4.0/ -------------------------------------------------------------------------------- /docs/ledger_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | —--- | 4 | | [原文](http://hyperledger-fabric.readthedocs.io/en/latest/ledger.html) | Yuan Jin | Linsheng Yu | 5 | 6 | ## Ledger - 账本 7 | 8 | The ledger is the sequenced, tamper-resistant record of all state transitions in the fabric. State transitions are a result of chaincode invocations (‘transactions’) submitted by participating parties. Each transaction results in a set of asset key-value pairs that are committed to the ledger as creates, updates, or deletes. 9 | 10 | 账本是Fabric中所有状态转换的记录,具有有序和防篡改的特点。状态转换是参与各方提交链代码调用(交易)产生的结果。每个交易会产生一组资产键值对,这些键值对作为“创建”、“更新”或者“删除”提交给账本。 11 | 12 | The ledger is comprised of a blockchain (‘chain’) to store the immutable, sequenced record in blocks, as well as a state database to maintain current fabric state. There is one ledger per channel. Each peer maintains a copy of the ledger for each channel of which they are a member. 13 | 14 | 账本由一个区块链(链)构成,并将不可变的、有序的记录存放在区块中;同时包含一个状态数据库来记录当前的Fabric状态。每个通道中各有一个账本。各个节点对于它所属的每个通道,都会保存一份该通道的账本副本。 15 | 16 | ## Chain - 链 17 | 18 | The chain is a transaction log, structured as hash-linked blocks, where each block contains a sequence of N transactions. The block header includes a hash of the block’s transactions, as well as a hash of the prior block’s header. In this way, all transactions on the ledger are sequenced and cryptographically linked together. 
In other words, it is not possible to tamper with the ledger data, without breaking the hash links. The hash of the latest block represents every transaction that has come before, making it possible to ensure that all peers are in a consistent and trusted state. 19 | 20 | 链是一个交易日志,它由哈希值链接的区块构造而成,每个区块包含N个有序的交易。块头中包含了本区块所记录交易的哈希值,以及上一个区块头的哈希值。通过这种方式,账本中的所有交易都被有序的、加密的形式串联在了一起。换言之,如果不破坏哈希链的话,是无法篡改账本数据的。最新区块的哈希是之前每一笔交易的体现,从而可以保证所有的节点处于一致的可信任的状态。 21 | 22 | The chain is stored on the peer file system (either local or attached storage), efficiently supporting the append-only nature of the blockchain workload. 23 | 24 | 链被存放于节点的文件系统中(本地的或者挂载的),有效地支持着区块链工作量只追加的特性。 25 | 26 | ## State Database - 状态数据库 27 | 28 | The ledger’s current state data represents the latest values for all keys ever included in the chain transaction log. Since current state represents all latest key values known to the channel, it is sometimes referred to as World State. 29 | 30 | 账本的当前状态信息呈现的是链交易日志中记录过的所有键的最新值。由于当前状态表示的是通道已知的所有键的最新值,由此也常被称作世界状态。 31 | 32 | Chaincode invocations execute transactions against the current state data. To make these chaincode interactions extremely efficient, the latest values of all keys are stored in a state database. The state database is simply an indexed view into the chain’s transaction log, it can therefore be regenerated from the chain at any time. The state database will automatically get recovered (or generated if needed) upon peer startup, before transactions are accepted. 33 | 34 | 链码调用基于当前的状态数据执行交易。为了使链码调用高效运行,所有键的最新值被存储在状态数据库中。状态数据库是链的交易日志的索引视图,因此它可以随时从链中重新导出。节点启动的时候,在接受交易之前,状态数据库将被自动恢复(或者根据需要产生)。 35 | 36 | ## Transaction Flow - 交易流程 37 | 38 | At a high level, the transaction flow consists of a transaction proposal sent by an application client to specific endorsing peers. The endorsing peers verify the client signature, and execute a chaincode function to simulate the transaction. 
The output is the chaincode results, a set of key/value versions that were read in the chaincode (read set), and the set of keys/values that were written in chaincode (write set). The proposal response gets sent back to the client along with an endorsement signature. 39 | 40 | 概括而言,交易流程始于应用客户端向指定的背书节点发送交易提案。背书节点验证客户端的签名,然后执行链码来模拟交易。产生的输出就是链码结果,一组链码中读取的键值版本(读集合),和一组链码中写入的键值集合(写集合)。交易提案的响应连同背书签名一起被发送回客户端。 41 | 42 | The client assembles the endorsements into a transaction payload and broadcasts it to an ordering service. The ordering service delivers ordered transactions as blocks to all peers on a channel. 43 | 44 | 客户端汇总所有的背书到一个交易有效载荷中,并将它广播到排序服务。排序服务将排好序的交易放入区块并发送到通道内的所有节点。 45 | 46 | Before committal, peers will validate the transactions. First, they will check the endorsement policy to ensure that the correct allotment of the specified peers have signed the results, and they will authenticate the signatures against the transaction payload. 47 | 48 | 在提交之前,节点们会验证交易。首先它们会检查背书策略来保证足够的指定节点正确地对结果进行了签名,并且会认证交易有效载荷中的签名。 49 | 50 | Secondly, peers will perform a versioning check against the transaction read set, to ensure data integrity and protect against threats such as double-spending. The fabric has concurrency control whereby transactions execute in parallel (by endorsers) to increase throughput, and upon commit (by all peers) each transaction is verified to ensure that no other transaction has modified data it has read. In other words, it ensures that the data that was read during chaincode execution has not changed since execution (endorsement) time, and therefore the execution results are still valid and can be committed to the ledger state database. If the data that was read has been changed by another transaction, then the transaction in the block is marked as invalid and is not applied to the ledger state database. The client application is alerted, and can handle the error or retry as appropriate.
51 | 52 | 其次,节点们会对交易的读集合进行版本检查,从而保证数据的一致性并防范一些攻击,比如双花。Fabric拥有并发控制,从而交易可以(被背书节点)并行运行来提高吞吐量,而且当交易(被节点)提交时,每个交易都会被验证来保证它所读取的数据没有被其他交易更改。换言之,它确保链码执行期间所读取的数据从执行(背书)开始后没有变动,因此执行结果仍然有效,可以被提交到账本状态数据库。如果读取的数据被其他交易改动了,那么区块中的交易将被标记成无效的,也不会被应用到账本状态数据库。客户端应用会收到提醒,从而进行纠错或适当重试。 53 | 54 | See the [Transaction Flow](http://hyperledger-fabric.readthedocs.io/en/latest/txflow.html) and [Read-Write set semantics](http://hyperledger-fabric.readthedocs.io/en/latest/readwrite.html) topics for a deeper dive on transaction structure, concurrency control, and the state DB. 55 | 56 | 要进一步了解交易的结构,并发控制和状态数据库的相关内容,可以参考[交易流程](http://hyperledger-fabric.readthedocs.io/en/latest/txflow.html)和[读写集语义](http://hyperledger-fabric.readthedocs.io/en/latest/readwrite.html)。 57 | 58 | ## State Database options - 状态数据库选项 59 | 60 | State database options include LevelDB and CouchDB (beta). LevelDB is the default key/value state database embedded in the peer process. CouchDB is an optional alternative external state database. Like the LevelDB key/value store, CouchDB can store any binary data that is modeled in chaincode (CouchDB attachment functionality is used internally for non-JSON binary data). But as a JSON document store, CouchDB additionally enables rich query against the chaincode data, when chaincode values (e.g. assets) are modeled as JSON data. 61 | 62 | 状态数据库选项包括LevelDB和CouchDB(beta)。LevelDB是内嵌在节点进程中的缺省键值状态数据库。CouchDB是可选的外部状态数据库。类似于LevelDB的键值库,CouchDB能存储任何链码中建模的二进制数据(CouchDB附件功能被内部用于非JSON格式的二进制数据)。但作为一个JSON格式文档库,当链码的数据(比如资产)以JSON格式建模时,CouchDB还额外支持针对链码数据的富查询(rich query)。 63 | 64 | Both LevelDB and CouchDB support core chaincode operations such as getting and setting a key (asset), and querying based on keys. Keys can be queried by range, and composite keys can be modeled to enable equivalence queries against multiple parameters. For example a composite key of (owner,asset_id) can be used to query all assets owned by a certain entity.
These key-based queries can be used for read-only queries against the ledger, as well as in transactions that update the ledger. 65 | 66 | LevelDB和CouchDB都支持核心的链码操作,比如获取和设置一个键(资产),以及基于键进行查询等。键的查询可以通过设置范围,而且可以通过构建组合键来达到按多个参数进行查询的同等效果。比如一个组合键(拥有者,资产编号)可以被用来查询某实体所拥有的所有资产。这些基于键的查询可以被用做针对账本的只读查询,同时也可以被应用在对账本进行更新的交易中。 67 | 68 | If you model assets as JSON and use CouchDB, you can also perform complex rich queries against the chaincode data values, using the CouchDB JSON query language within chaincode. These types of queries are excellent for understanding what is on the ledger. Proposal responses for these types of queries are typically useful to the client application, but are not typically submitted as transactions to the ordering service. In fact the fabric does not guarantee the result set is stable between chaincode execution and commit time for rich queries, and therefore rich queries are not appropriate for use in update transactions, unless your application can guarantee the result set is stable between chaincode execution time and commit time, or can handle potential changes in subsequent transactions. For example, if you perform a rich query for all assets owned by Alice and transfer them to Bob, a new asset may be assigned to Alice by another transaction between chaincode execution time and commit time, and you would miss this ‘phantom’ item. 69 | 70 | 如果你将资产以JSON格式进行建模,并且使用的是CouchDB,那你可以通过CouchDB的JSON查询语言,对链码的数据值进行复杂多样的查询。这些查询类型可以很好地帮助理解账本中包含什么。查询类型的提案响应对客户端应用通常很有用,但并不会被作为交易提交到排序服务。实际上对于富查询(rich query),Fabric并不保证结果集在链码执行和提交之间是稳定的,因此富查询不适合用在更新账本的交易中,除非你的应用能保证结果集在链码执行和提交之间是稳定的,或者能处理后续交易中潜在的变化。比如说,如果你对所有Alice拥有的资产进行富查询,并转移给Bob,那在链码执行和提交的过程中,可能会有另一个交易将一个新的资产分配给了Alice,你将会错失这个“幻影”项。 71 | 72 | CouchDB runs as a separate database process alongside the peer, therefore there are additional considerations in terms of setup, management, and operations. You may consider starting with the default embedded LevelDB, and move to CouchDB if you require the additional complex rich queries.
It is a good practice to model chaincode asset data as JSON, so that you have the option to perform complex rich queries if needed in the future. 73 | 74 | CouchDB作为独立的数据库进程跟节点一起运行,所以安装、管理和操作的时候需要一些额外的考虑。你可以尝试开始的时候用缺省集成的LevelDB,然后当你需要额外的复杂查询时再切换到CouchDB。将链码的资产数据以JSON格式建模是一个非常好的实践,这样有利于你将来进行复杂多样的查询。 75 | 76 | To enable CouchDB as the state database, configure the /fabric/sampleconfig/core.yaml stateDatabase section. 77 | 78 | 如果要用CouchDB作为状态数据库,需要配置/fabric/sampleconfig/core.yaml中的stateDatabase部分。 79 | 80 | -------------------------------------------------------------------------------- /docs/logging-control_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](http://hyperledger-fabric.readthedocs.io/en/latest/logging-control.html) | Yi Zeng | | 5 | 6 | ## Overview概述 7 | 8 | Logging in the peer application and in the shim interface to chaincodes is programmed using facilities provided by the github.com/op/go-logging package. This package supports 9 | 10 | 节点应用程序和链码shim接口中的日志功能,是使用github.com/op/go-logging包提供的功能编写的。这个包支持: 11 | 12 | * Logging control based on the severity of the message 13 | 基于消息的严重程度进行日志控制 14 | * Logging control based on the software module generating the message 15 | 基于产生消息的软件模块进行日志控制 16 | * Different pretty-printing options based on the severity of the message 17 | 基于消息严重程度的不同美观打印(pretty-printing)选项 18 | 19 | All logs are currently directed to stderr, and the pretty-printing is currently fixed. However global and module-level control of logging by severity is provided for both users and developers. There are currently no formalized rules for the types of information provided at each severity level, however when submitting bug reports the developers may want to see full logs down to the DEBUG level.
20 | 21 | 所有日志目前都被定向到stderr,而pretty-printing的格式目前是固定的。不过,为用户和开发人员提供了按严重程度进行的全局和模块级日志控制。目前没有关于每个严重性级别提供的信息类型的正式规则,但是当提交错误报告时,开发人员可能希望看到完整的到DEBUG级别的日志记录。 22 | 23 | In pretty-printed logs the logging level is indicated both by color and by a 4-character code, e.g, “ERRO” for ERROR, “DEBU” for DEBUG, etc. In the logging context a module is an arbitrary name (string) given by developers to groups of related messages. In the pretty-printed example below, the logging modules “peer”, “rest” and “main” are generating logs. 24 | 25 | 在pretty-printed的日志中,日志记录级别由颜色和4个字符的代码指示,例如ERROR的“ERRO”,DEBUG的“DEBU”等。在日志上下文中,模块是开发者给一组相关消息指定的任意名称(字符串)。在以下pretty-printed的例子中,日志模块“peer”,“rest”和“main”都产生了日志。 26 | 27 | ~~~ 28 | 16:47:09.634 [peer] GetLocalAddress -> INFO 033 Auto detected peer address: 9.3.158.178:7051 29 | 16:47:09.635 [rest] StartOpenchainRESTServer -> INFO 035 Initializing the REST service... 30 | 16:47:09.635 [main] serve -> INFO 036 Starting peer with id=name:"vp1" , network id=dev, address=9.3.158.178:7051, discovery.rootnode=, validator=true 31 | ~~~ 32 | 33 | An arbitrary number of logging modules can be created at runtime, therefore there is no “master list” of modules, and logging control constructs can not check whether logging modules actually do or will exist. Also note that the logging module system does not understand hierarchy or wildcarding: You may see module names like “foo/bar” in the code, but the logging system only sees a flat string. It doesn’t understand that “foo/bar” is related to “foo” in any way, or that “foo/*” might indicate all “submodules” of foo.
34 | 35 | 可以在运行时创建任意数量的日志记录模块,因此没有模块的“主列表”一说,日志控制结构也无法检查某个日志模块是否确实存在或将会存在。另请注意,日志记录模块系统不理解层次结构或通配符:您可能会在代码中看到模块名称,如“foo/bar”,但日志记录系统只能看到一个扁平的字符串。它不明白“foo/bar”与“foo”有任何关系,或者“foo/*”可能表示foo的所有“子模块”。 36 | 37 | ## peer 38 | 39 | The logging level of the peer command can be controlled from the command line for each invocation using the --logging-level flag, for example 40 | 41 | peer命令的日志等级可以使用命令行控制,每次调用peer时使用--logging-level,例如: 42 | 43 | ~~~ 44 | peer node start --logging-level=debug 45 | ~~~ 46 | 47 | The default logging level for each individual peer subcommand can also be set in the core.yaml file. For example the key logging.node sets the default level for the node subcommand. Comments in the file also explain how the logging level can be overridden in various ways by using environment variables. 48 | 49 | 每个单独的peer子命令的默认日志记录级别也可以在core.yaml文件中设置。例如,键logging.node用于设置node子命令的默认级别。该文件中的注释还解释了如何通过使用环境变量以各种方式覆盖日志级别。 50 | 51 | Logging severity levels are specified using case-insensitive strings chosen from 52 | 53 | 使用以下选择的不区分大小写的字符串可以指定日志严重级别: 54 | 55 | ~~~ 56 | CRITICAL | ERROR | WARNING | NOTICE | INFO | DEBUG 57 | ~~~ 58 | 59 | The full logging level specification for the peer is of the form 60 | 61 | peer的完整日志级别的规格如下格式: 62 | 63 | ~~~ 64 | [<module>[,<module>...]=]<level>[:[<module>[,<module>...]=]<level>...] 65 | ~~~ 66 | 67 | A logging level by itself is taken as the overall default. Otherwise, overrides for individual or groups of modules can be specified using the 68 | 69 | 单独给出的日志级别被视为总体默认值。另外,可以使用以下语法来覆盖单个模块或一组模块的日志级别: 70 | ~~~ 71 | <module>[,<module>...]=<level> 72 | ~~~ 73 | syntax.
Examples of specifications (valid for all of --logging-level, environment variable and core.yaml settings): 74 | 75 | 规范示例(对--logging-level、环境变量和core.yaml设置均适用): 76 | ~~~ 77 | info - Set default to INFO 78 | warning:main,db=debug:chaincode=info - Default WARNING; Override for main,db,chaincode 79 | chaincode=info:main=debug:db=debug:warning - Same as above 80 | ~~~ 81 | 82 | ## Go chaincodes 83 | 84 | The standard mechanism to log within a chaincode application is to integrate with the logging transport exposed to each chaincode instance via the peer. The chaincode `shim` package provides APIs that allow a chaincode to create and manage logging objects whose logs will be formatted and interleaved consistently with the `shim` logs. 85 | 86 | 在链码应用程序中记录日志的标准机制,是与peer暴露给每个链码实例的日志传输进行集成。链码的`shim`包提供了API,允许链码创建和管理日志记录对象,这些对象产生的日志会被格式化,并与`shim`自身的日志一致地交织在一起。 87 | 88 | As independently executed programs, user-provided chaincodes may technically also produce output on stdout/stderr. While naturally useful for “devmode”, these channels are normally disabled on a production network to mitigate abuse from broken or malicious code. However, it is possible to enable this output even for peer-managed containers (e.g. “netmode”) on a per-peer basis via the CORE_VM_DOCKER_ATTACHSTDOUT=true configuration option. 89 | 90 | 作为独立执行的程序,用户提供的链码在技术上也可以在stdout/stderr上产生输出。虽然这对“devmode(开发模式)”很有用,但这些通道在生产网络上通常被禁用,以减轻有缺陷或恶意代码的滥用。然而,通过CORE_VM_DOCKER_ATTACHSTDOUT=true配置选项,可以以每个peer为单位(per-peer),甚至为peer管理的容器(例如“netmode”)启用此输出。 91 | 92 | Once enabled, each chaincode will receive its own logging channel keyed by its container-id. Any output written to either stdout or stderr will be integrated with the peer’s log on a per-line basis. It is not recommended to enable this for production.
93 | 94 | 一旦启用,每个链码将接收其自己的日志通道,其由container-id标识。写入stdout或stderr的任何输出将与peer的日志按照每行进行集成。不建议将其用于生产。 95 | 96 | ## API 97 | 98 | `NewLogger(name string) *ChaincodeLogger` - Create a logging object for use by a chaincode 99 | 100 | `(c *ChaincodeLogger) SetLevel(level LoggingLevel)` - Set the logging level of the logger 101 | 102 | `(c *ChaincodeLogger) IsEnabledFor(level LoggingLevel) bool` - Return true if logs will be generated at the given level 103 | 104 | `LogLevel(levelString string) (LoggingLevel, error)` - Convert a string to a LoggingLevel 105 | 106 | A `LoggingLevel` is a member of the enumeration 107 | 108 | LogDebug, LogInfo, LogNotice, LogWarning, LogError, LogCritical 109 | 110 | which can be used directly, or generated by passing a case-insensitive version of the strings 111 | 112 | DEBUG, INFO, NOTICE, WARNING, ERROR, CRITICAL 113 | 114 | to the `LogLevel` API. 115 | 116 | Formatted logging at various severity levels is provided by the functions 117 | 118 | 以下函数提供了各种严重级别的格式化日志记录 119 | ~~~ 120 | (c *ChaincodeLogger) Debug(args ...interface{}) 121 | (c *ChaincodeLogger) Info(args ...interface{}) 122 | (c *ChaincodeLogger) Notice(args ...interface{}) 123 | (c *ChaincodeLogger) Warning(args ...interface{}) 124 | (c *ChaincodeLogger) Error(args ...interface{}) 125 | (c *ChaincodeLogger) Critical(args ...interface{}) 126 | 127 | (c *ChaincodeLogger) Debugf(format string, args ...interface{}) 128 | (c *ChaincodeLogger) Infof(format string, args ...interface{}) 129 | (c *ChaincodeLogger) Noticef(format string, args ...interface{}) 130 | (c *ChaincodeLogger) Warningf(format string, args ...interface{}) 131 | (c *ChaincodeLogger) Errorf(format string, args ...interface{}) 132 | (c *ChaincodeLogger) Criticalf(format string, args ...interface{}) 133 | ~~~ 134 | 135 | The `f` forms of the logging APIs provide for precise control over the formatting of the logs. 
The non-`f` forms of the APIs currently insert a space between the printed representations of the arguments, and arbitrarily choose the formats to use. 136 | 137 | 日志API的`f`形式可以精确控制日志格式。API的非`f`形式当前在参数的打印表示之间插入一个空格,并任意选择要使用的格式。 138 | 139 | In the current implementation, the logs produced by the `shim` and a `ChaincodeLogger` are timestamped, marked with the logger name and severity level, and written to `stderr`. Note that logging level control is currently based on the name provided when the `ChaincodeLogger` is created. To avoid ambiguities, all `ChaincodeLogger` should be given unique names other than “shim”. The logger name will appear in all log messages created by the logger. The `shim` logs as “shim”. 140 | 141 | 在当前实现中,由`shim`和`ChaincodeLogger`生成的日志带有时间戳,标有记录器名称和严重性级别,并写入`stderr`。请注意,日志级别控制当前基于创建`ChaincodeLogger`时提供的名称。为了避免歧义,所有`ChaincodeLogger`应该被赋予除“shim”之外的唯一名称。记录器名称将显示在由该记录器创建的所有日志消息中。`shim`自身以“shim”这个名字记录日志。 142 | 143 | Go language chaincodes can also control the logging level of the chaincode shim interface through the `SetLoggingLevel` API. 144 | 145 | Go语言链码还可以通过`SetLoggingLevel` API来控制链码shim接口的日志记录级别。 146 | 147 | `SetLoggingLevel(LoggingLevel level)` - Control the logging level of the shim 控制shim的日志记录级别 148 | 149 | The default logging level for the shim is `LogDebug`. 150 | 151 | shim的默认日志级别为LogDebug。 152 | 153 | Below is a simple example of how a chaincode might create a private logging object logging at the `LogInfo` level, and also control the amount of logging provided by the `shim` based on an environment variable. 154 | 155 | 下面是一个简单的示例,说明链码如何创建一个以`LogInfo`级别记录日志的专用日志对象,以及如何基于环境变量来控制由`shim`提供的日志量。 156 | ~~~ 157 | var logger = shim.NewLogger("myChaincode") 158 | 159 | func main() { 160 | 161 | logger.SetLevel(shim.LogInfo) 162 | 163 | logLevel, _ := shim.LogLevel(os.Getenv("SHIM_LOGGING_LEVEL")) 164 | shim.SetLoggingLevel(logLevel) 165 | ...
166 | } 167 | ~~~ 168 | -------------------------------------------------------------------------------- /docs/makefile_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](https://github.com/hyperledger/fabric/blob/release/Makefile) | Dijun Liu | | 5 | 6 | 详细内容请看[这里](http://www.blockchainbrother.com/article/20) -------------------------------------------------------------------------------- /docs/msp_acl_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](https://docs.google.com/document/d/1Qg7ZEccOIsrShSHSNl4kBHOFvLYRhQ3903srJ6c_AZE/edit) | Yongfu Liang | | 5 | 6 | 详细内容请看[这里](https://docs.google.com/document/d/1vHdaOSZgpyjf-qjhsmiTMPq8MG43HBA28FM6d1KJeXc/edit#heading=h.2rmho7iqstbu) -------------------------------------------------------------------------------- /docs/msp_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](http://hyperledger-fabric.readthedocs.io/en/latest/msp.html) | | | 5 | 6 | ## 这里什么都没有,就靠你了!!!
-------------------------------------------------------------------------------- /docs/protocol-spec_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](https://github.com/hyperledger/fabric/blob/v0.6/docs/source/protocol-spec.rst) | | | 5 | 6 | 详细内容请看[这里](https://github.com/hyperledger/fabric/blob/v0.6/docs/source/protocol-spec_zh.rst) -------------------------------------------------------------------------------- /docs/read-write-set.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](http://hyperledger-fabric.readthedocs.io/en/latest/readwrite.html) | Linsheng Yu | | 5 | 6 | 7 | ## 交易模拟和read-write set 8 | 9 | 在背书节点上的交易模拟期间会产生一个交易的read-write set。`read set`包含在模拟期间交易读取到的唯一key及对应version。`write set`包含交易改写的唯一key(可能与`read set`中的key重叠)及对应的新value。如果交易的更新操作是删除一个key,则在`write set`为该key设置一个delete标记。 10 | 11 | 此外,如果交易中对一个key改写多次,则只保留最后的修改值。如果交易中读取一个key的值,即使交易在读取之前更新了该key的值,读取到的也会是之前提交过的而不是刚更新的。换句话说,不能读取到同一交易中修改的值。 12 | 13 | 如前所述,key的version只记录在`read set`;`write set`只包含key及对应新value。 14 | 15 | 对于`read set`的version的实现有很多种方案,最基本要求就是为key生成一个非重复标识符。例如用递增的序号作为version。在目前的代码实现中我们使用了blockchain height作为version方案,就是用交易的height作为该交易所修改的key的version。交易height由一个结构表示(见下面Height struct),其中TxNum表示这个tx在block中的height(译注:也就是交易在区块中的顺序)。该方案相较于递增序号有很多优点--主要是这样的version可以很好地利用到诸如statedb、交易模拟和校验这些模块中。 16 | 17 | 此外,如果模拟交易中执行了批量查询(range query),批量查询结果会被放到read-write set中的`query-info`。 18 | 19 | // 译注 20 | 21 | // read-write set 结构 22 | type TxReadWriteSet struct { 23 | NsRWs []*NsReadWriteSet 24 | } 25 | type NsReadWriteSet struct { 26 | NameSpace string 27 | Reads []*KVRead 28 | Writes []*KVWrite 29 | RangeQueriesInfo []*RangeQueryInfo 30 | } 31 | type RangeQueryInfo struct { 32 | StartKey string 33 | EndKey string 34 | ItrExhausted bool 35 | Results []*KVRead 36 | ResultHash *MerkleSummary 37 | } 38 | type
MerkleSummary struct { 39 | MaxDegree int 40 | MaxLevel MerkleTreeLevel 41 | MaxLevelHashes []Hash 42 | } 43 | type MerkleTreeLevel int 44 | type Hash []byte 45 | 46 | // read set 结构 47 | type KVRead struct { 48 | Key string 49 | Version *Height 50 | } 51 | type Height struct { 52 | BlockNum uint64 53 | TxNum uint64 54 | } 55 | 56 | // write set 结构 57 | type KVWrite struct { 58 | Key string 59 | IsDelete bool 60 | Value []byte 61 | } 62 | 63 | 下面是一个假设的交易模拟生成的read-write set示例,简单起见,示例中使用了递增序号作为version。 64 | 65 | <TxReadWriteSet> 66 | <NsReadWriteSet name="chaincode1"> 67 | <read-set> 68 | <read key="K1", version="1"> 69 | <read key="K2", version="1"> 70 | </read-set> 71 | <write-set> 72 | <write key="K1", value="V1'"> 73 | <write key="K3", value="V3'"> 74 | <write key="K4", isDelete="true"> 75 | </write-set> 76 | </NsReadWriteSet> 77 | </TxReadWriteSet> 78 | 79 | ## 使用read-write set 验证交易和更新worldState 80 | 提交节点(committer)利用`read set`部分校验交易的有效性;用`write set`部分更新key的version和value。 81 | 82 | 在验证阶段,如果`read set`中每个key的version都与stateDB中对应worldState(假设所有之前的有效交易,包括同一个block中的交易,都已经提交完成,即已更新ledger)的version相匹配,则认为此交易有效。 83 | 84 | 如果read-write set中包含`query-info`,则还要对此执行额外的校验。该校验确保在此批量查询的结果范围内没有key被增删改。换句话说,如果在验证阶段重新执行该批量查询(模拟期间执行的交易)应该产生与模拟交易期间相同的结果。此校验确保交易在提交时出现幻读会被认为无效。注意,这个幻读保护仅限于Chaincode的`GetStateByRange`和`GetStateByPartialCompositeKey`两个方法***(译注:此处文档上提到的是`GetStateByRange`和`GetQueryResult`两个方法,但在代码里的注释却不是这样,此处以代码为准。详见fabric/examples/chaincode/go/marbles02/marbles_chaincode.go)***。而其他批量查询方法(如:`GetQueryResult`)会有幻读风险,因此这种查询应该只用于不会被提交到ordering的`只读交易`,除非app能保证交易模拟和交易验证提交两阶段之间结果集稳定。 85 | 86 | 如果交易验证通过,committer就会用`write set`更新worldState。在更新阶段,`write set`中的每个key在worldState中对应的value都会被更新,然后worldState中这些key的version也会随着更新。 87 | 88 | ## 交易模拟与交易验证 示例 89 | 90 | 本节通过示例场景帮助理解read-write set。存在一个key设为`k`,在worldState中由元组`(k,ver,val)`表示,其中`ver`是`k`的最新的version,`val`是`k`的value。 91 | 92 | 现在有五个交易,分别是`T1,T2,T3,T4,T5`,这五个交易的模拟过程是针对相同的worldState快照,下面的代码片段显示了模拟交易的worldState快照以及每个交易执行读写的顺序。 93 | 94 | World state: (k1,1,v1), (k2,1,v2), (k3,1,v3), (k4,1,v4), (k5,1,v5) 95 | T1 -> Write(k1, v1'), Write(k2, v2') 96 | T2 -> Read(k1), Write(k3, v3') 97 | T3 -> Write(k2, v2'') 98 | T4 -> Write(k2, v2'''), read(k2) 99 | T5 -> Write(k6, v6'), read(k5) 100 |
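上面列出的快照和五笔交易的提交校验逻辑,可以用下面这段可运行的 Go 草图来概括(仅为示意性实现:`validateAndCommit` 等名称为本文虚构,并非 Fabric 源码;version 也简化为单个递增序号,而非前文的 Height 结构):

```go
package main

import "fmt"

// versionedValue 表示世界状态中某个 key 的 (version, value)
type versionedValue struct {
	Ver uint64
	Val string
}

// kvRead 是 read set 中的一项: 模拟时读到的 key 及其 version
type kvRead struct {
	Key string
	Ver uint64
}

// kvWrite 是 write set 中的一项: 要写入的 key 和新 value
type kvWrite struct {
	Key string
	Val string
}

type txRWSet struct {
	ID     string
	Reads  []kvRead
	Writes []kvWrite
}

// validateAndCommit 对照当前世界状态校验 read set 中每个 key 的 version;
// 全部匹配则应用 write set, 并把被写 key 的 version 推进到 commitVer.
func validateAndCommit(state map[string]versionedValue, tx txRWSet, commitVer uint64) bool {
	for _, r := range tx.Reads {
		if cur, ok := state[r.Key]; ok && cur.Ver != r.Ver {
			return false // 读到的 version 已过期, 交易被标记为无效
		}
	}
	for _, w := range tx.Writes {
		state[w.Key] = versionedValue{Ver: commitVer, Val: w.Val}
	}
	return true
}

func main() {
	// 五笔交易都基于同一份快照模拟: 各 key 的 version 均为 1
	state := map[string]versionedValue{
		"k1": {1, "v1"}, "k2": {1, "v2"}, "k3": {1, "v3"},
		"k4": {1, "v4"}, "k5": {1, "v5"},
	}
	txs := []txRWSet{
		{"T1", nil, []kvWrite{{"k1", "v1'"}, {"k2", "v2'"}}},
		{"T2", []kvRead{{"k1", 1}}, []kvWrite{{"k3", "v3'"}}},
		{"T3", nil, []kvWrite{{"k2", "v2''"}}},
		{"T4", []kvRead{{"k2", 1}}, []kvWrite{{"k2", "v2'''"}}},
		{"T5", []kvRead{{"k5", 1}}, []kvWrite{{"k6", "v6'"}}},
	}
	nextVer := uint64(2) // 简化: 每提交一笔有效交易, version 递增 1
	for _, tx := range txs {
		if validateAndCommit(state, tx, nextVer) {
			fmt.Printf("%s: 有效\n", tx.ID)
			nextVer++
		} else {
			fmt.Printf("%s: 无效\n", tx.ID)
		}
	}
}
```

按 T1...T5 的顺序依次提交,T1、T3、T5 有效,T2、T4 因读到的 version 已过期而无效。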
假设这些交易的顺序是T1,...,T5(可以在同一个block,也可以在不同block) 102 | 103 | 1. `T1`验证成功,因为它没有read操作。之后worldState中的`k1`和`k2`会被更新成`(k1,2,v1'), (k2,2,v2')` 104 | 2. `T2`验证失败,因为它读取的`k1`在之前的交易`T1`中被修改了**(译注:需要特别注意一个前提,即这五个交易的模拟过程是对于相同的worldState快照,而且T2又有write操作,所以T2会进入commit阶段进行验证,这样T2的k1.ver=1,T1完成后实际的k1.ver=2了,然后T2在commit校验时就会失败。也就是上文提到的一个交易的模拟和提交期间,某key的值被修改……*但是有个疑问,正常使用中应该会经常出现T1、T2这种顺序的情况,难道会经常发生交易校验失败??如果如此,那对于用户来说岂不很难用?暂有此疑,有待研究*)** 105 | 3. `T3`验证成功,因为它没有read操作。之后worldState中的`k2`会被更新成`(k2,3,v2'')` 106 | 4. `T4`验证失败,因为它读取的`k2`在之前的交易`T1`中被修改了 107 | 5. `T5`验证成功,因为它读取的`k5`没有在之前的任何交易中被修改 108 | 109 | **注意:**交易不支持多个read-write set 110 | -------------------------------------------------------------------------------- /docs/sdk-design_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](https://docs.google.com/document/d/1R5RtIBMW9fZpli37E5Li5_Q9ve3BnQ4q3gWmGZj6Sv4/edit) | Lidong Guo | | 5 | 6 | ## 1 概要 7 | Hyperledger Fabric v1.0通过gRPC协议对区块链网络提供了基于Protocol Buffers格式的基本API。这些API涵盖交易处理、安全的成员关系服务、区块链遍历和事件处理。支持Protocol Buffers的编程语言很多,包括Go、C#、Java、JavaScript、Python和C++,所以有必要为应用开发工程师提供一个本地SDK。 8 | 9 | 本文档详述了一个SDK应该提供的API最小集合。SDK的目标既要满足客户端直接操作区块链网络的原始需求,又要做一些合理的顶层抽象,方便开发人员在本地应用代码中使用这些API进行开发。当然,使用SDK并不妨碍应用直接调用gRPC接口。 10 | 11 | 请注意,当前的REST API并不被推荐使用,SDK也不应该基于REST API构建。跳过REST直接使用gRPC的原因是:能够支持全双工的流式数据传输(大部分API调用都是异步的),以及更好的性能。此外,由于节点内部模块间的通信已使用gRPC,便不必再开放额外的HTTP/HTTPS端口。 12 | 13 | 一个例外的情况是,新的成员服务(CA)是按REST API的规则实现的。 14 | 15 | 作为参考,Fabric v1.0的架构和交互模型可以在下面这个链接所指向的工作单元的附件文档里查看:https://jira.hyperledger.org/browse/FAB-37 16 | 17 | ## 2 目标 18 | 19 | ### 2.1 应用开发 20 | 21 | Fabric SDK应该为开发人员提供编写应用程序时操作区块链网络的多种方式。应用程序可以部署/执行chaincode,监听网络中产生的事件,接收块信息,把交易存储到账本中,等等。 22 | 23 | ### 2.2 chaincode开发 24 | 25 | Fabric SDK应该为开发人员提供编写chaincode单元测试的能力。开发人员应该能够在不把chaincode部署到网络上的情况下快速测试chaincode。 26 | 27 | ## 3 原则 28 | 29 | ### 3.1 良好的API文档,数据模型,示例代码 30 | 31 | 
SDK应该提供清晰的文档说明,包括可用的API,数据模型,如何使用API的示例说明 32 | 33 | ### 3.2 便于使用 34 | 35 | Chaincode开发人员和应用程序开发人员关注于书写业务逻辑代码。尽管开发人员熟悉fabric内部构件对开发本身是有利的,但是这不应该是一个必须的先决条件。SDK编译时间上不应该对fabric project有任何依赖。(除非原始文件定义了多种合约??) SDK packages/jars/libraries 应该在常用的资源库网站上可获取,以方便开发人员可以轻松安装它们或者立即开始编写Chaincode来开发应用程序。 36 | 37 | ### 3.3 性能 38 | 39 | 这个SDK必须仔细设计来达到高吞吐量,水平扩展能力和更低的延迟。它(本SDK)应该是一个组件状态无关化的理想实现,或者允许应用实例借助数据库共享状态。 40 | 41 | ### 3.4 版本控制 42 | 43 | 统一的版本控制下,建议将SDK通过多种语言实现,并且多种实现的SDK之间在功能上能够保持互通 44 | 45 | ### 3.5 可服务性 46 | 47 | 本SDK应该方便于在可服务性支持中添加插件,即日志系统。它应该允许SDK使用方的应用添加一个日志处理的实例。应用倾向于在SDK内部和外部使用一个通用的日志系统,这是非常有用的。一个IT组织为了监视和分析通常会安装日志脚本,比如对一个标准日志格式的需求。本SDK应该有一个编译进版本的日志系统以便于开发人员通过默认方式获取日志。但是它必须允许设置一个拥有一系列标准集合的外部日志系统来记录API日志。 48 | 49 | ## 4 场景 50 | 51 | 有许多种可能的场景,但是让我们关注一少部分来说明SDK的整体功能。 52 | 53 | ### 4.1 成员的注册和登记 54 | 55 | 为了跟fabric交互,应用程序必须具备一个专有的身份认证,这个认证被编码进一个叫做ECert(登记证书)的成员证书里。这个证书可能是由外部的认证机构提供,或者由fabric成员服务提供,只要这个ECert可以被fabric组件在建立链的信任链时验证即可。 56 | 57 | 如果用一个标准的CA来管理分配ECert,交易会被ECert签名。这将使得每一个交易对于每一个客户端都是可验证的。 58 | 59 | 另一方面,MSP在fabric中的设计是,利用一个加密算法从ECert的密钥对导出一个密钥对,因此为每一个交易产生一个假名证书,又叫做TCert。一个TCert不能代表一个认证的身份,但是能通过成员服务追踪到原始的ECert。这个技术在许多业务场景下隐藏交易认证是非常有用的。 60 | 61 | Fabric提供了MSP接口的一种实现,命名为“COP”立意来源于它的作用就像警察,而不是来源于首字母。在github的组织超级账本里面的代码库“fabric-cop”就是它。(译者注:在github上hyperledger组织下并未找到这个代码库,可能是本文档成书后又有变更。) 62 | 63 | 如果给一个基于fabric的网络配置一个外部的CA,然后用户注册也会在外部维护。这个外部的用户注册系统负责认证用户。一个已经被认证的用户为了获取ECert可以使用CA请求登记。 64 | 65 | 就SDK而言,应该有两种API:支持任何CA(包括内建的和外部的)的通用API,和专门为fabric的内建成员服务提供的API。通用API负责允许使用SDK的应用程序执行最基本的操作比如关联用户身份,命名为“enroll()”,为认证用户获取ECert的程序;还有“getTCerts()”,获取交易证书允许用户提交交易。 66 | 67 | 客户端有关可选的成员服务实现(即COP)的设计需要单独的文档来说明。https://docs.google.com/document/d/1TO-sdHGpn8ifB9C_pH_y54H75Rx0RdXvZih7-lhsLII/edit 68 | 69 | ### 4.2 链的创建 70 | 71 | Hyperledger Fabric所设计的支持隐私性和机密性的通道和账本,统一被称为一个“链”。通道和账本设计更多内容请查看: 72 | https://docs.google.com/document/d/1eRNxxQ0P8yp4Wh__Vi6ddaN_vhN2RQHP-IruHNUwyhc/edit#。通信(传输中的数据)被限制在参与方之间的通道内,同时块内数据(已被持久化的数据)保存在一个私有的仅仅在这些参与方之间分布式存储的账本中。不属于链的网络节点不了解通道和私有账本内部任何事物,也不能利用这些事物。 
73 | 74 | 一旦一个链被创建,应用可以发送交易给链上以私有方式存在的节点,并且已提交被验证的交易到私有账本。 75 | 76 | 创建链的责任落在应用程序身上。通过SDK应用程序初始化链的组织团体(网络成员的代表)构成并且给排序服务。 77 | 78 | 在SDK里,通道和它本身关联的账本的绑定由链类体现。应用和指定的排序节点第一次准备一个新的通道并且获取一个包含关于新链的密钥信息的创世区块,包含参与者信息(URL和证书)、排序节点信息(URL和证书)。应用通过目标是配置系统chaincode的配置化交易来协调新通道对参与节点的邀请。 79 | 80 | ![](img/sdk-image01.png) 81 | 82 | ### 4.3 交易支持 83 | 84 | 一个交易包含明确的两个步骤:背书和提交。背书是请求节点对交易的结果进行运行并背书(签名);提交是在交易和交易的背书内容的有效期内请求共识。 85 | 86 | 下图说明交易时客户端SDK和节点之间的交互。更多信息请查看架构文档(https://github.com/hyperledger/fabric/blob/master/proposals/r1/Next-Consensus-Architecture-Proposal.md)。 87 | 88 | 首先SDK和应用协同工作,使用应用的(或者被认证用户的)私钥给信息签名(所有的外部信息都需要签名认证)。然后它根据背书策略(通过验证系统链码或者VSCC实现的)给一个或多个节点发送信息。它获得异步的反馈并且决定是否执行创建和提交交易到共识服务。执行交易提交的决定是基于背书策略谓词(就像2 out of 3)基于带外知识。一旦提交,交易的处理流程便是异步的,所以SDK监听提交事件来通知应用交易处理流程的完成或者拒绝的结果。 89 | 90 | ![](img/sdk-image02.jpg) 91 | 92 | 上图是对交易流程非常上层的描述。SDK中有详细的执行网络和环境配置,包括获取签名密钥来安全的管理成员,处理交易和事件流,(依赖应用的)多种共识通道。 93 | 94 | ## 5 客户端服务器API参考 95 | 下面的链接指向gRPC与fabric(节点,排序者和成员服务)沟通的信息和服务定义。 96 | 97 | [Proposal](https://github.com/hyperledger/fabric/blob/master/protos/peer/fabric_proposal.proto) (部署或者交易调用的提案) 98 | 99 | [ProposalResponse](https://github.com/hyperledger/fabric/blob/master/protos/peer/fabric_proposal_response.proto) (全局的提案回应) 100 | 101 | [Chaincode](https://github.com/hyperledger/fabric/blob/master/protos/peer/chaincode.proto) (chaincode,调用,部署,等细节) 102 | 103 | [ChaincodeProposal](https://github.com/hyperledger/fabric/blob/master/protos/peer/chaincode_proposal.proto) (chaincode-specific proposal header and payload) 104 | 105 | [Transaction](https://github.com/hyperledger/fabric/blob/master/protos/peer/fabric_transaction.proto) (transaction invocation) 106 | 107 | [ChaincodeTransaction](https://github.com/hyperledger/fabric/blob/master/protos/peer/chaincode_transaction.proto) (chaincode-specific transaction action payload) 108 | 109 | 新的COP API是基于json的,需要添加到新的成员服务设计中。 110 | 111 | 消息定义应该是设计SDK APIs的一个灵感来源。APIs明显没有忠实的反映出消息定义,因为SDK可以使用智能缺省和状态信息来最小化请求的参数。 112 | 113 | ## 6 详细说明 114 | 
115 | 这里我们讨论设计原则和架构思路。 116 | 117 | 总的来说,我们有不同等级的(数值越小表示等级越高)一些模块: 118 | 119 | **Package: Hyperledger Fabric Client** 120 | 121 | | 模块 | 等级 | 功能 | 122 | | ---- | --- | --- | 123 | | Client | 0 | 主要的入口模块。它必须允许用户创建需要的任何对象来执行所有支持的操作,例如直接连接网络,chaincode部署,交易执行,多种查询。另外,基于编码规范和普遍的社区练习,每一种语言的实现也能决定是否添加方便的方法,如sendTransaction(chain, tx)| 124 | | Chain | 1 | 一个链代表一些节点特别形成的一个网络,启动一个共识的通道,在通道中交易可以被独立的处理。一个网络可能有一个或多个链。链上的节点维护一个单独的账本包含交易在链上派发,包括成员关系的任何配置。所有的交易都是在链上发送,一个应用可能操作多个链。| 125 | | Peer | 2 | 代表网络上的计算节点。节点的角色有背书节点和提交节点,它们都在维护着账本。应用可能连接到一定数量的可用的节点 | 126 | | Orderer | 2 | 类似节点,不同的是它代表排序服务的终端,可能是一个单独的节点(开发时本地安装)或者一个网络排序者的代理节点。基于区块链网络的fabric会有一个由多个排序者节点组成的单独的排序服务。应用可以选择信任特定的排序者,或者一部分排序者,或者设置代理去给排序者节点广播交易。| 127 | | User | 2 | 代表在网络上交易的用户。用户实例可以基于登记证书被初始化。证书可以从成员服务或者外部CA获取。理论上,这种用户也能代表网络上的节点成员。然而,这与应用程序无关(这更像是网络管理方面的问题),所以在这个设计中没有开放。| 128 | | Proposal | 3 |登记的用户可以向节点列表提出交易提案来背书交易。一旦接收到背书响应,应用程序可以决定是否已经获取背书签名,是否需要执行提交交易到共识服务。这是关于提案原始的GRPC消息的包装类,它提供了便利的创建方法。| 129 | | ProposalResponse | 3| 提案调用背书节点的响应,打包背书结果(是或否),签名,等等。这是关于提案响应原始的GRPC消息包装类,它提供了便利的方法来利用它自己的内容(背书,签名,等等)。| 130 | | Transaction | 3 | 登记用户收集了背书之后可以提交交易。交易请求包含背书签名和MVCC+post-image,并且使用排序服务API。交易有两种类型:部署和执行。这是交易有关原始GRPC消息的包装类,它提供了便利的创建方法。| 131 | | CryptoSuite | 3 | 加密模块打包了数字签名算法,非对称加密的密钥对,对称加密的密钥消息,安全的hash和MAC。| 132 | 133 | **Package: Member Service** 134 | 135 | | 模块 | 等级 | 功能 | 136 | | --- | --- | --- | 137 | | MemberService | 0 | 这是fabric可选模块的客户端,成员服务。本模块的主要功能是从成员服务获取用户登记证书。另外,这个模块本身或它的扩展类也应该能在fabric默认的成员服务的实现中提供可用的额外的功能,如用户注册功能。| 138 | 139 | 为了帮助说明以上各模块的关系,我们给出了以下的UML图: 140 | 141 | ![](img/sdk-image03.png) 142 | 143 | ### 6.1 客户端 144 | 和终端用户主要的交互处理器。客户端实例提供和网络上的节点,排序者,可选成员服务交互的处理器。应用程序使用SDK需要和多个网络交互,分别通过单独的客户端实例进行。 145 | 146 | 每个客户端被创建时,应该是使用来自于共识服务的配置数据初始化创建,这些数据包含一个被信任的根的列表,排序节点证书和IP地址,还有一个节点证书列表和可使用的IP地址。这必须是作为应用程序环境的一部分进行的。应用程序负责维护客户端的配置,因为SDK不持久地保存这个对象。 147 | 148 | 每个客户端实例可以维护几条链代表通道和相关的账本。 149 | 150 | * new_chain(创建一个新链) 151 | 152 | 
根据参数给出的名字创建一个链的实例。这实际上代表“通道”(正如上面解释的),这个调用返回一个空对象。初始化这个通道,这个返回的对象上必须配置一个参与方的背书者列表和排序者节点。 153 | 154 | Params (参数) 155 | 156 | * name (str):链的名称,推荐使用命名空间防止碰撞 157 | 158 | Returns(返回值) 159 | 160 | * (Chain instance):未初始化的链的实例 161 | 162 | * get_chain(获取链) 163 | 164 | 获取链的实例。本接口允许保存已存在的链实例供之后的检索,并且在应用程序实例之间共享。记录链的信息是应用程序或者SDK负责的事情。如果应用程序不能查看存储中链的信息,它可以调用另外一种API查询一个或多个节点来获得这些信息。 165 | 166 | Params 167 | 168 | * name (str): The name of the chain (链的名称) 169 | 170 | Returns 171 | 172 | * (Chain instance or None): the chain instance for the name. (以入参命名的链的实例) 173 | 174 | Error: 175 | 176 | * The state store has not been set(还没有设置存储状态) 177 | * A chain does not exist under that name(不存在该名称命名的链) 178 | 179 | * query_chain_info(查询链的信息) 180 | 181 | 这是一个网络调用,用来查询指定的节点上链的信息。目标节点必须是属于目标链,才能够返回请求的信息。 182 | 183 | Params 184 | 185 | * name (str): The name of the chain (链名) 186 | * peers (array of Peer instances): target Peers to query(查询的目标节点) 187 | 188 | Returns 189 | 190 | * (Chain instance or None): the chain instance for the name.(以入参命名的链的实例。) 191 | 192 | Error: 193 | 194 | * The target Peer(s) does not know anything about the chain(目标节点不了解链的信息) 195 | 196 | * set_state_store(设置状态的存储) 197 | 198 | SDK应该有一个内建的键值存储的实现(建议是基于文件的实现,以便于在开发中设置)。但是生产系统需要通过数据库为更多的稳定存储和聚簇存储备份,所以多种应用程序实例可以通过数据库共享应用状态(备注:应用不需要丰富的状态)。这个API使得存储模块具有可插拔特性,所以应用程序可以选择不同的存储实现。 199 | 200 | Params 201 | 202 | * store (KeyValueStore): instance of an alternative KeyValueStore implementation provided by the consuming app.(使用方应用程序提供的键值存储实现的实例) 203 | 204 | Returns 205 | 206 | * None 207 | 208 | * get_state_store(获取状态存储) 209 | 210 | 为client提供的获取状态存储的对象的便利方法 211 | 212 | Params 213 | 214 | * None 215 | 216 | Returns 217 | 218 | * (KeyValueStore instance): The KeyValueStore implementation object set within this Client, or null if it does not exist(返回设置到client中的按键值对方式实现的存储对象,如果不存在返回空) 219 | 220 | * set_crypto_suite(设置加密模块) 221 | 222 | 设置一个加密模块的实例,该实例是按CryptoSuite接口实现的。一个加密模块打包了数字签名的算法和使用非对称密钥对的加密功能,使用对称密钥加密的消息,安全的hashing处理和MAC。 223 
| 224 | Params 225 | 226 | * Suite (object): an instance of a crypto suite implementation(按接口实现的加密模块的实例) 227 | 228 | * get_crypto_suite(获取加密模块) 229 | 230 | Client获取加密模块对象的便利方法。 231 | 232 | Params 233 | 234 | * None 235 | 236 | Returns 237 | 238 | * (CryptoSuite instance): The CryptoSuite implementation object set within this Client, or null if it does not exist(加密模块的对象) 239 | * set_user_context(设置用户上下文) 240 | 241 | 根据client实例的安全的上下文设置用户类的实例。用户的资格证书会被用来执行交易并且查询区块链网络。如果状态存储已经设置到了client实例上,根据设定的用户上下文,SDK可以把对象保存在一个持久化的缓存中。如果没有设置状态存储,当应用程序崩溃或者被覆盖时,缓存不能被创建,并且应用程序需要再次设置用户上下文。 242 | 243 | Params 244 | 245 | * user (User): an instance of the User class encapsulating the authenticated user’s signing materials (private key and enrollment certificate) (参数是用户,用户类的实例,它把被认证用户拥有的签名材料(私钥和背书证书)打包在内) 246 | 247 | * get_user_context(获取用户上下文) 248 | 249 | 正如上面所解释的,client实例可以拥有一个可选的状态存储。SDK保存已注册用户到可以被应用程序的已认证用户(认证的工作在SDK之外由应用程序完成)利用的存储中。本方法试图通过本地存储中的名称(通过键值存储接口获取)加载用户。已加载的用户对象必须代表一个已注册用户,并且该用户拥有一个已信任的CA(如COP服务)签名的可用的背书证书。 250 | 251 | Params 252 | 253 | * name (str): The name of the user (用户名称) 254 | 255 | Returns 256 | 257 | * (User instance): The user object corresponding to the name, or null if the user does not exist or if the state store has not been set(返回匹配名称的用户对象,如果用户不存在或者状态存储未设置返回空) 258 | 259 | ### 6.2 链 260 | 261 | “链”对象从通道获取设置项,由排序者节点创建,与排序者给通道上参与的节点派发交易的行为相隔离。根据节点列表和排序者列表配置链之后,它必须被初始化。初始化过程给排序者节点发送一个配置交易来创建特定的通道,并且询问节点加入通道。 262 | 263 | * add_peer(添加节点) 264 | 265 | 给链对象添加节点,这是纯本地操作。 266 | 267 | Params 268 | 269 | * peer (Peer): an instance of the Peer class that has been initialized with URL, TLC certificate, and enrollment certificate(入参:被URL,TLC证书,和背书证书初始化之后的Peer类的对象。) 270 | 271 | * remove_peer(移除节点) 272 | 273 | 从链对象移除节点,这是一个纯本地操作。 274 | 275 | Params 276 | 277 | * peer (Peer): an instance of the Peer class(节点实例) 278 | 279 | * get_peers(获取节点) 280 | 281 | 从链的本地信息获取节点。 282 | 283 | Params 284 | 285 | * None 286 | 287 | Returns 288 | 289 | * (Peer list): The peer list on the 
chain(返回链上的节点列表。) 290 | 291 | * add_orderer(添加排序节点) 292 | 293 | 给链对象添加排序者节点,这是纯本地操作。链实例可以选择使用单个的排序者节点,这个排序者负责向排序者网络中其他排序者广播请求。或者如果应用不信任排序者节点,它可以选择使用更多的排序者,仅需要向链对象添加它们即可。有关某个排序者的所有API会同时广播给所有的排序者。 294 | 295 | Params 296 | 297 | * orderer (Orderer): an instance of the Orderer class (Orderer实例) 298 | 299 | * remove_orderer(移除排序节点) 300 | 301 | 从链对象移除排序者节点,这是纯本地操作。 302 | 303 | Params 304 | 305 | * orderer (Orderer): an instance of the Orderer class 306 | 307 | * get_orderers(获取排序节点) 308 | 309 | 获取链的排序节点,这是纯本地操作。 310 | 311 | Params 312 | 313 | * None 314 | 315 | Returns 316 | 317 | * (Orderer list): The orderer list on the chain 318 | 319 | * initialize_chain(初始化链) 320 | 321 | 调用排序者来开始创建新的链,创建动作本身是开放的新的消息流和连接参与节点的组合。这是一个耗时的处理。只有一个应用程序实例需要调用这个方法。一旦链被成功创建,其他的应用程序实例仅仅需要调用get_chain()来获取有关链的信息。 322 | 323 | Params 324 | 325 | * None 326 | 327 | Returns (链的初始化动作是否执行成功) 328 | 329 | * (bool): whether the chain initialization process was successful 330 | 331 | * update_chain(更新链) 332 | 333 | 调用排序者节点来更新已经存在的链。这允许给已存在的链添加或删除节点,也是基于证书重建对节点证书信息进行更新。 334 | 335 | Params 336 | 337 | * None 338 | 339 | Returns (更新链的操作是否成功) 340 | 341 | * (bool): whether the chain update process was successful 342 | 343 | * is_readonly(是否只读) 344 | 345 | 获取链的状态来查看底层通道是否已经被终止,创建一个只读链,可以查询信息(交易和状态)但是不可以提交新的交易。 346 | 347 | Params 348 | 349 | * None 350 | 351 | Returns (是否只读) 352 | 353 | * (bool): is ready-only (true) or not 354 | 355 | * query_info(查询信息) 356 | 357 | 查询链的状态(高度,已知的节点)中多种有用信息。 358 | 359 | Params 360 | 361 | * none 362 | 363 | Returns 364 | 365 | * (ChainInfo) with height, currently the only useful info 366 | 367 | * query_block(查询块) 368 | 369 | 根据块号查询块。 370 | 371 | Params 372 | 373 | * blockNumber (number) 374 | 375 | Returns 376 | 377 | * Object containing the block 378 | 379 | * query_transaction(查询交易) 380 | 381 | 根据交易号查询交易。 382 | 383 | Params 384 | 385 | * transactionID 386 | 387 | Returns 388 | 389 | * TransactionInfo containing the transaction 390 | 391 | * create_deploy_proposal(创建部署提案) 392 
| 393 | 创建交易提案。通过数据(chaincodeID,chaincode调用,等)装配提案,并且使用匹配ECert签名的私钥签名提案。 394 | 395 | Params 396 | 397 | * chaincode_path (string): path to the chaincode to deploy 398 | * chaincode_name (string): a custom name to identify the chaincode on the chain 399 | * fcn (string): name of the chaincode function to call after deploy to initiate the state(chaincode的方法名,部署后初始化状态时要调用它) 400 | * args (string[]): arguments for calling the init function designated by “fcn”(上面fcn的参数) 401 | * sign (Bool): Whether to sign the transaction, default to True 402 | 403 | Returns 404 | 405 | * (Proposal): The created Proposal instance or None. 406 | 407 | * create_transaction_proposal(创建交易提案) 408 | 409 | 为交易创建一个提案。通过数据(chaincode名称,需要调用的方法,参数等)装配提案,并且使用匹配ECert签名的私钥给提案签名。 410 | 411 | Params 412 | 413 | * chaincode_name (string): The name given to the target chaincode to invoke(要执行的目标chaincode的名称) 414 | * args (string[]): arguments for calling the “invoke” method on the chaincode(正在调用执行的chaincode上的方法的参数) 415 | * Sign (Bool): Whether to sign the transaction, default to True 416 | 417 | Returns 418 | 419 | * (Transaction_Proposal instance): The created Transaction_Proposal instance or None. 
420 | 421 | * send_transaction_proposal(发送交易提案) 422 | 423 | 把创建好的提案发送给节点去背书。 424 | 425 | Params 426 | 427 | * transaction_proposal (Transaction_Proposal): The transaction proposal data(交易提案的数据) 428 | * chain: The target chain whose peers the proposal will be sent to(提案将要发送给的节点所在的目标链) 429 | * retry (Number): Times to retry when failure, by default to 0 (no retry)(重试次数,默认0) 430 | 431 | Returns 432 | 433 | * (Transaction_Proposal_Response response): The response to send proposal request.(发送提案请求后的响应) 434 | 435 | * create_transaction(创建交易) 436 | 437 | 遵从背书策略根据提案的响应信息创建交易。 438 | 439 | Params 440 | 441 | * proposal_responses ([Transaction_Proposal_Response]): The array of proposal responses received in the proposal call.(在提案的调用中返回的响应信息的数组) 442 | 443 | Returns 444 | 445 | * (Transaction instance): The created transaction object instance.(创建的交易对象实例) 446 | 447 | * send_transaction(发送交易) 448 | 449 | 给链的排序服务(由一个或多个排序者节点组成)发送交易,然后做共识和提交到账本的工作。 450 | 451 | 本调用是异步的,并且交易提交成功信息是通过块或者chaincode事件通知的。(本方法必须给应用程序提供一个响应机制关联事件监听器处理“交易已提交”,“交易完成”,“错误”等事件。) 452 | 453 | 下面有两种有关fabric后端的沟通方式,触发不同的事件回调给应用程序处理器 454 | 455 | * 排序者服务的gRPC客户端在“广播”调用的请求或响应方法中使用常规的无状态的HTTP 连接。这个方法的实现需要在响应中接收到成功确认时发送“交易已提交”事件,或者在接收到错误时发送“错误”事件 456 | * 为了支持fabric“BLOCK” ”, “CHAINCODE”和“TRANSACTION”事件,这个方法的实现需要和作为内部事件枢纽机制一部分的链事件源节点维护一个持久的连接。这些事件应该引发方法给应用程序发送“完成”或“错误”事件。 457 | 458 | Params 459 | 460 | * transaction (Transaction): The transaction object constructed above(上面创建的交易对象。) 461 | 462 | Returns(一个事件处理,可以关联到应用程序事件处理器的) 463 | 464 | * result (EventEmitter): an handle to allow the application to attach event handlers on“submitted”, “complete”, and “error”. 
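把6.2节的几个交易相关API串起来,大致是"创建提案 → 发送提案收集背书 → 按背书策略组装交易 → 提交排序服务"。下面用一段可运行的Go草图演示这一编排逻辑(仅为示意:类型与函数名均为本文的假设,背书策略简化为"N个节点中至少M个背书",并非任何真实SDK的接口):

```go
package main

import (
	"errors"
	"fmt"
)

// ProposalResponse 对应上文的提案响应概念:背书者及其背书结果(是/否)。
type ProposalResponse struct {
	Endorser string
	OK       bool
}

// sendTransactionProposal 模拟把已签名的提案发给一组背书节点并收集响应。
// 此处所有节点都返回背书成功,仅用于演示流程。
func sendTransactionProposal(endorsers []string) []ProposalResponse {
	resps := make([]ProposalResponse, 0, len(endorsers))
	for _, e := range endorsers {
		resps = append(resps, ProposalResponse{Endorser: e, OK: true})
	}
	return resps
}

// createTransaction 对应 create_transaction:检查收集到的背书是否满足
// "至少 required 个背书"的策略,满足才组装交易,否则返回错误。
func createTransaction(resps []ProposalResponse, required int) (string, error) {
	ok := 0
	for _, r := range resps {
		if r.OK {
			ok++
		}
	}
	if ok < required {
		return "", errors.New("not enough endorsements")
	}
	return fmt.Sprintf("tx with %d endorsements", ok), nil
}

func main() {
	resps := sendTransactionProposal([]string{"peer0", "peer1", "peer2"})
	tx, err := createTransaction(resps, 2) // 背书策略:3个节点中至少2个背书
	if err != nil {
		panic(err)
	}
	fmt.Println(tx) // 输出: tx with 3 endorsements
}
```

组装好的交易随后交给 send_transaction 发往排序服务;如上文所述,该调用是异步的,提交结果通过块事件或chaincode事件回调通知应用。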
465 | 466 | ### 6.3 用户 467 | 468 | 用户类代表已经通过注册证书注册和签名密钥签名的已登记用户。注册证书必须通过区块链网络配置信任的CA证书签名。已注册用户(拥有已签名密钥和注册证书)可以引导chaincode部署,交易和链上查询。 469 | 470 | 用户注册证书可以事先作为部署应用程序的一部分从CA获取,或者可以通过它本身的登记程序从可选的fabric COP服务获取。 471 | 472 | 有时用户认证和节点认证会混淆。因为用户可以利用私钥,所以用户身份有代理签名的能力,而节点在应用程序/SDKs上下文中只有验证签名的认证能力。应用程序不能使用节点身份来签名因为应用程序不能利用节点身份的私钥。 473 | 474 | * get_name(获取名称) 475 | 476 | 获取成员名称。从对象实例请求参数。 477 | 478 | Returns (str): 479 | 480 | * The name of the user 481 | * get_roles(获取角色) 482 | 483 | 获取用户的角色。它可能是“客户端”“审计员”这类值的数组。成员服务多定义了两个的角色用来保存节点成员关系,如“节点”和“验证者”,这两个没有暴露给应用程序。 484 | 485 | Returns (str[]): 486 | 487 | * The roles for this user 488 | 489 | * get_enrollment_certificate(获取背书证书) 490 | 491 | 返回底层认证证书代表的用户身份证书。 492 | 493 | Params: none 494 | 495 | Returns: 496 | 497 | * Certificate in PEM format signed by the trusted CA(返回已信任的CA按PEM格式签名的证书) 498 | 499 | * set_name(设置名称) 500 | 501 | 设置用户的名称/ID。 502 | 503 | Params: 504 | 505 | * name (string[): The user name / id. 506 | 507 | * set_roles(设置角色) 508 | 509 | 按上面定义的角色值设置用户角色。 510 | 511 | Params: 512 | 513 | * Roles (string[]): The list of roles for the user(入参用户的角色列表。) 514 | 515 | * set_enrollment_certificate(设置背书证书) 516 | 517 | 设置用户的背书证书。 518 | 519 | Params: 520 | 521 | * Certificate : The certificate in PEM format signed by the trusted CA(入参是被已信任的CA按PEM格式签名的证书) 522 | 523 | * generate_tcerts(生成交易证书) 524 | 525 | 获取一系列TCert,可以在交易中使用。TCert和交易之间是一对一的关系。TCert可以在本地通过SDK使用用户的加密数据项生成。 526 | 527 | Params 528 | 529 | * count (number): how many in the batch to obtain?(获取的交易证书的数量) 530 | * Attributes (string[]): list of attributes to include in the TCert(TCert包含的属性的列表) 531 | 532 | Returns (TCert[]): 533 | 534 | * An array of TCerts 535 | 536 | ### 6.4 节点 537 | 538 | 节点类代表了远程节点和它本身网络成员的数据,即用来验证签名的ECert。节点成员代表组织,不像用户成员代表个体。 539 | 540 | 当节点被创建后,只要配置一个名为“eventSourceUrl”的属性,节点实例就可以被指定为一个事件源。允许SDK自动关联交易事件监听器事件流。 541 | 542 | 需要说明的是节点事件流功能在节点层次,不在链和chaincode的层次。 543 | 544 | * connectEventSource(连接事件源) 545 | 546 | 
由于几乎所有节点都是事件的生产者,当创建一个节点实例时,应用可以指定它作为应用程序的事件源。只需要链上的一个节点成为事件源,因为链上的所有节点产生的事件相同。本方法告诉SDK对于客户端应用程序来说哪一个节点作为事件源使用。管理与节点的EventHub连接的生命周期是SDK的责任。理解并通知选择的节点想接收哪种事件类型以及想使用哪个回调方法,是客户端应用程序的责任。 547 | 548 | Params: 549 | 550 | * None 551 | 552 | Result: 553 | 554 | * Promise/Future: this gives the app a handle to attach “success” and “error” listeners(应用获得一个处理程序来关联“成功”或“错误”的监听器) 555 | 556 | * is_event_listened(事件是否已被监听的标记) 557 | 558 | 网络调用可以显示出是否至少有一个监听器已经连接到事件的目标节点。这能帮助应用实例在崩溃后恢复中或者多个实例部署的情况下决定是否需要连接事件源。 559 | 560 | 备注:这个请求对节点上事件的生成者有强化作用。 561 | 562 | Params: 563 | 564 | * eventName (string): required (必需的) 565 | * chain (Chain): optional (可选的) 566 | 567 | Result:(是否已经被链上的一些应用实例监听) 568 | 569 | * (boolean): whether the said event has been listened on by some application instance on that chain 570 | 571 | * addListener(添加监听器) 572 | 573 | 方法为连接到事件源的节点提供,监听器注册在EventCallBack,用以接收事件类型集的回调。添加监听器的方法可以被执行多次来支持不同的EventCallBack方法接收不同类型的事件。 574 | 575 | 说明:以下的参数在某些语言里是可选的,比如Java,为监听器接口创建一个实例,并把该实例作为参数。 576 | 577 | Params: 578 | 579 | * eventType : ie. 
Block, Chaincode, Transaction (事件类型) 580 | * eventTypeData : Object Specific for event type as necessary, currently needed for “Chaincode” event type, specifying a matching pattern to the event name set in the chaincode(s) being executed on the target Peer, and for “Transaction” event type, specifying the transaction ID (事件类型数据:事件类型有必要有一个特定的对象,对于“chaincode”事件类型要求是,在目标节点上执行的chaincode中设置一个指定的事件名匹配范式;对于“Transaction”事件类型,要指定交易ID。) 581 | * eventCallback : Client Application class registering for the callback.(事件回调:客户端应用程序类为回调而注册的。) 582 | 583 | Returns: 584 | 585 | * [event-listener-ref] a reference to the event listener, some language uses an ID (javascript), others uses object reference (Java)(返回值:事件监听器的引用,指向事件监听器的引用,有些语言(javascript)使用ID,其他的语言(Java)使用对象引用。) 586 | 587 | * removeListener(移除监听器) 588 | 589 | 解注册监听器 590 | 591 | Params:(SDK返回事件监听器的引用) 592 | 593 | * [event-listener-ref] : reference returned by SDK for event listener 594 | 595 | Returns: 596 | 597 | * statusFlag: Success / Failure 598 | 599 | * get_name(获取名称) 600 | 601 | 获取节点名称。需要对象实例的参数。 602 | 603 | Returns (str): 604 | 605 | * The name of the Peer 606 | 607 | * set_name(设置名称) 608 | 609 | Set the Peer name / id. 610 | 611 | Params: 612 | 613 | * Name (string): The unique name / id of this Peer. 614 | 615 | * get_roles(获取角色) 616 | 617 | 获取节点参与的用户的角色。可能的结果是“client”和“auditor”的数组。成员服务多定义了两个角色来保存节点成员关系:“peer”和“validator”,这两个没有暴露给应用程序。 618 | 619 | Returns (str[]): 620 | 621 | * The roles for this user 622 | 623 | * set_roles(设置角色) 624 | 625 | 设置节点参与的用户角色。值遵从上面预定的几个值。 626 | 627 | Params: 628 | 629 | * Roles (string[]): The list of roles for the user(用户角色列表) 630 | 631 | * get_enrollment_certificate(获取背书证书) 632 | 633 | 返回底层代表用户身份的ECert。 634 | 635 | Params: none 636 | 637 | Returns: 638 | 639 | * Certificate in PEM format signed by the trusted CA 640 | 641 | * set_enrollment_certificate(设置背书证书) 642 | 643 | Set the Peer’s enrollment certificate. 
644 | 645 | Params: 646 | 647 | * Certificate: Certificate in PEM format signed by the trusted CA 648 | 649 | ### 6.5 键值存储(接口) 650 | 651 | 区块链应用程序需要保存状态,包含用户登记材料(私钥,CA签名的证书)。这些状态需要被持久化。“KeyValueStore”的接口为SDK自动保存状态提供了一个简单的机制,这对应用程序是有益的。 652 | 如果应用使用基于软件密钥生成器即CryptoSuite的实现,那么它需要键值存储。如果应用程序还没有设置一个存储系统,SDK应该默认使用一个内建的实现,比如一个基于本地文件系统的实现。 653 | 654 | SDK也可以在这个键值存储系统里面以可选缓存的方式保存用户登记材料。但是,如果应用程序没有配置一个键值存储系统,SDK会把应用程序理解成选择了总是为会话设置上下文,并且没有意图使用默认的键值存储。 655 | 656 | * get_value(获取值) 657 | 658 | Retrieves a value given a key 659 | 660 | Params 661 | 662 | * key (str): The name of the key 663 | 664 | Returns 665 | 666 | * Result (Object): The value 667 | 668 | * set_value(存入值) 669 | 670 | Sets the value 671 | 672 | Params 673 | 674 | * Key 675 | * value 676 | 677 | Returns 678 | 679 | * Acknowledgement of successful storage of the value 680 | 681 | ## 6.6 加密模块(接口) 682 | 683 | 加密模块打包了数字签名算法和非对称加密方法,消息对称加密方法,和hash以及MAC。这是为区块链加密服务提供者接口的一个镜像设计,被fabric加密团队发表。 684 | 685 | Default implementation currently planned for the peer and COP, and must be implemented by the SDK’s default implementation also: 686 | 687 | * ECDSA: curves “secp256r1” and “secp384r1” 688 | * AES: AES128, AES256 with CBC/CTR/GCM mode key length = 128 bits 689 | * SHA: SHA256, SHA384, SHA3_256, SHA3_384 690 | 691 | * generate_key(生成密钥) 692 | 693 | Generate a key based on the options. The output can be a private key or a public key in an asymmetric algorithm, or a secret key of a symmetric algorithm. 694 | 695 | Params 696 | 697 | * opts (Object): an object that encapsulates two properties, “algorithm” and“ephemeral”. 698 | 699 | Returns 700 | 701 | * Result (Key): The key object 702 | 703 | * deriveKey(导出密钥) 704 | 705 | Derives a key from k using opts. 706 | 707 | Params 708 | 709 | * k (Key) 710 | * opts (Object) 711 | 712 | Returns 713 | 714 | * (Key) derived key 715 | 716 | * importKey(导入密钥) 717 | 718 | Imports a key from its raw representation using opts. 
719 | 720 | Params 721 | 722 | * k (Key) 723 | * opts (Object) 724 | 725 | Returns 726 | 727 | * (Key) An instance of the Key class wrapping the raw key bytes 728 | 729 | * getKey(获取密钥) 730 | 731 | Returns the key this CSP associates to the Subject Key Identifier ski . 732 | 733 | Params 734 | 735 | * ski (byte[]) 736 | 737 | Returns 738 | 739 | * (Key) An instance of the Key class corresponding to the ski 740 | 741 | * hash(hash处理) 742 | 743 | Hashes messages msg using options opts . 744 | 745 | Params 746 | 747 | * msg (byte[]) 748 | * opts (Object) an object that encapsulates property “algorithm” with values for hashing algorithms such as “SHA2” or “SHA3” 749 | 750 | Returns 751 | 752 | * (Key) An instance of the Key class corresponding to the ski 753 | 754 | * encrypt(加密) 755 | 756 | Encrypt plain text. 757 | 758 | Params 759 | 760 | * key (Key) public encryption key 761 | * plainText (byte[]) 762 | * opts (Object) 763 | 764 | Returns 765 | 766 | * (byte[]) Cipher text 767 | 768 | * decrypt(解密) 769 | 770 | Decrypt cipher text. 771 | 772 | Params 773 | 774 | * key (Key) private decryption key 775 | * cipherText (byte[]) 776 | * opts (Object) 777 | 778 | Returns 779 | 780 | * (byte[]) Plain text 781 | 782 | * sign(签名) 783 | 784 | Sign the data. 785 | 786 | Params 787 | 788 | * Key (Key) private signing key 789 | * digest (byte[]) fixed-length digest of the target message to be signed 790 | * opts (function) hashing function to use 791 | 792 | Returns 793 | 794 | * Result(Object):Signature object 795 | 796 | * verify(验证) 797 | 798 | Verify the signature. 
799 | 800 | Params 801 | 802 | * key (Key) public verification key 803 | * signature (byte[]) signature 804 | * digest (byte[]) original digest that was signed 805 | 806 | Returns 807 | 808 | * (bool): verification successful or not 809 | 810 | ## 7 处理网络错误 811 | 客户端SDK通过两种方式与fabric通信:无状态的HTTP连接和HTTP长连接。 812 | 813 | 发送提案和发送交易的调用属于无状态的请求/响应。如果出现网络错误,调用会超时。SDK应该提供一个可配置的超时时间,方便应用程序基于对客户端应用和fabric节点之间网络特征的理解来控制调用行为。 814 | 815 | 另外,这些方法可以配置重试次数,以便SDK在出现超时错误时自动重试HTTP调用。如果所有重试都尝试过后仍然超时,连接方法应该返回一个错误。 816 | 817 | 另一方面,SDK和事件源节点之间的事件流连接是长连接。而且,按照fabric对事件流接口的定义,该连接是双向的,允许消息发往任一方。出现网络错误时连接会被断开,其结果是事件丢失,client应用注册的监听器无法被触发。 818 | 819 | 为了应用的利益,SDK应该尝试重新建立连接。但是,如果在尝试了设定的重连次数之后仍不能恢复连接,应该用一个高级别的严重错误把这种情况通知应用。 820 | 821 | ## 8 参考 822 | 1) Next-Consensus-Architecture-Proposal: 823 | https://github.com/hyperledger/fabric/blob/master/proposals/r1/Next-Consensus-Architecture-Proposal.md 824 | 825 | 2) Consensus endorsing, consenting, and committing model: 826 | https://jira.hyperledger.org/browse/FAB-37 827 | 828 | 3) Node.js SDK 829 | https://github.com/hyperledger/fabric-sdk-node 830 | 831 | 4) Fabric-Cop Design 832 | https://docs.google.com/document/d/1TRYHcaT8yMn8MZlDtreqzkDcXx0WI50AV2JpAcvAM5w 833 | 834 | 5) Next Hyperledger-Fabric Architecture Protocol messages 835 | https://docs.google.com/document/d/1qD2zOTxmPoss3Xop_pDdkSXHozIR4N27NEsymosW4Gk 836 | -------------------------------------------------------------------------------- /docs/sdk_java_zh.md: -------------------------------------------------------------------------------- 1 | 2 | | 原文 | 作者 | 审核修正 | 3 | | --- | --- | --- | 4 | | [原文](https://github.com/hyperledger/fabric-sdk-java) | Xuanyong Wu | | 5 | 6 | 7 | Welcome to Java SDK for Hyperledger project. The SDK helps facilitate Java applications to manage the lifecycle of Hyperledger channels and user chaincode. The SDK also provides a means to execute user chaincode, query blocks and transactions on the channel, and monitor events on the channel. 
8 | 9 | 欢迎使用Hyperledger项目的Java SDK。该SDK帮助Java应用程序管理Hyperledger通道和用户链码的生命周期,同时提供执行用户链码、在通道上查询区块和交易以及监听通道上事件的方法。 10 | 11 | The SDK acts on behalf of a particular User which is defined by the embedding application through the implementation of the SDK's `User` interface. 12 | 13 | 该SDK代表某个特定用户执行操作,该用户由嵌入SDK的应用程序通过实现SDK的`User`接口来定义。 14 | 15 | Note, the SDK does ***not*** provide a means of persistence for the application defined channels and user artifacts on the client. This is left for the embedding application to best manage. 16 | 17 | 请注意,该SDK***不***提供对客户端上应用所定义的通道和用户构件(artifacts)进行持久化的机制,这部分留给嵌入SDK的应用程序自行妥善管理。 18 | 19 | The SDK also provides a client for Hyperledger's certificate authority. The SDK is however not dependent on this particular implementation of a certificate authority. Other certificate authorities may be used by implementing the SDK's `Enrollment` interface. 20 | 21 | 该SDK同时提供一个连接Hyperledger证书机构(CA)的客户端,但并不依赖于这个特定的CA实现,也可以通过实现SDK的`Enrollment`接口来使用其他的证书机构。 22 | 23 | This provides a summary of steps required to get you started with building and using the Java SDK. Please note that this is not the API documentation or a tutorial for the SDK, this will only help you familiarize to get started with the SDK if you are new in this domain. 24 | 25 | 以下是开始编译和使用Java SDK所需步骤的概要。请注意,本文档不是API文档,也不是SDK教程,只是帮助新手熟悉如何开始使用SDK。 26 | 27 | ## 已知的限制(Known limitations and restrictions) 28 | 29 | * TCerts are not supported(不支持TCerts): JIRA FAB-1401 30 | * HSM not supported(不支持HSM): JIRA FAB-3137 31 | * Single Crypto strength 256(仅支持单一的256位加密强度): JIRA FAB-2564 32 | * Network configuration updates not supported(不支持网络配置更新): JIRA FAB-3103 33 | 34 | `*************************************************` 35 | 36 | ## *v1.0.0* 37 | 38 | There is a git tagged v1.0.0 [e976abdc658f212d0c3a80ace4499a5cff4279c6] release of the SDK where there is no need to build the Hyperledger Fabric and Hyperledger Fabric CA described below. 
The provided docker-compose.yaml for the integration tests should pull v1.0.0 tagged images from Docker hub. 44 | 45 | 已经有一个标记为V1.0.0[e976abdc658f212d0c3a80ace4499a5cff4279c6]的SDK发行版,所以没有必要自己打包Hyperledge Fabric和Hyperledger Fabric CA。提供集成测试的docker-compose.yaml会自动从Docker Hub下载V1.0.0的镜像。 46 | 47 | The v1.0.0 version of the Hyperledger Fabric Java SDK is published to Maven so you can directly use in your application's pom.xml. 48 | 49 | V1.0.0的Hyperledger Fabric Java SDK已经发布到了Maven仓库,你可以直接在你的应用程序的pom.xml文件中直接使用。 50 | 51 | [Maven Repository Hyperledger Fabric Java SDK](https://mvnrepository.com/artifact/org.hyperledger.fabric-sdk-java/fabric-sdk-java) 52 | 53 | *Make sure you're using docker images at the level of the Fabric that matches the level of the SDK you're using in your application.* 54 | 55 | _请确认你使用的docker镜像的版本和你应用程序中使用的SDK版本一致。_ 56 | 57 | `*************************************************` 58 | 59 | 60 | ## 验证Fabric和Fabric-ca包(Valid builds of Fabric and Fabric-ca) 61 | 62 | Hyperledger Fabric v1.0.1 is currently under active development and the very latest Hyperledger Fabric builds may not work with this sdk. 63 | You should use the following commit levels of the Hyperledger projects: 64 | 65 | Hyperledger Fabric V1.0.1 正在开发中,最新的Fabric打包可能不适用该SDK,你应该使用下面提交版本的Hyledger项目。 66 | 67 | 68 | Project | Commit level | Date | 69 | ---------------|:------------------------------------------:|---------------------------:| 70 | fabric | f56a82e36e040e1c1a986edfceac014ba1516571 | Jul 11 12:48:33 2017 -0700 | 71 | fabric-ca | 74f8f4d4c29e45a79a8849efb057dbd8de3ae8d0 | Jul 11 16:43:39 2017 +0200 | 72 | 73 | 74 | 75 | 76 | You can clone these projects by going to the [Hyperledger repository](https://gerrit.hyperledger.org/r/#/admin/projects/). 77 | 78 | 你可以到[Hyperledger仓库](https://gerrit.hyperledger.org/r/#/admin/projects/)克隆这些项目。 79 | 80 | As SDK development continues, this file will be updated with compatible Hyperledger Fabric and Fabric-ca commit levels. 
81 | 82 | 随着SDK的持续开发,本文件会随Hyperledger Fabric和Fabric-ca的兼容提交版本更新。 83 | 84 | Once you have cloned `fabric` and `fabric-ca`, use the `git reset --hard commitlevel` to set your repositories to the correct commit. 85 | 86 | 克隆`fabric`和`fabric-ca`之后,使用`git reset --hard commitlevel`把你的仓库重置到正确的提交版本。 87 | 88 | ## 使用Fabric Vagrant环境(Working with the Fabric Vagrant environment) 89 | 90 | Vagrant is NOT required if your OS has Docker support and all the requirements needed to build directly in your environment. For non Vagrant environment, the steps would be the same as below minus those parts involving Vagrant. Do the following if you want to run the Fabric components ( peer, orderer, fabric-ca ) in Vagrant: 91 | 92 | 如果你的操作系统支持Docker,并且满足直接构建所需的全部条件,则不需要Vagrant。在非Vagrant环境中,步骤与下面相同,只需跳过与Vagrant相关的部分。如果你想在Vagrant中运行Fabric组件( peer, orderer, fabric-ca ),按照下面的步骤操作: 93 | 94 | ``` 95 | git clone https://github.com/hyperledger/fabric.git 96 | git clone https://github.com/hyperledger/fabric-ca.git 97 | cd fabric-ca 98 | git reset --hard fabric-ca_commitlevel from above 99 | cd ../fabric 100 | git reset --hard fabric_commitlevel from above 101 | cd devenv 102 | change the Vagrant file as suggested below: 103 | vagrant up 104 | vagrant ssh 105 | make docker 106 | cd ../fabric-ca 107 | make docker 108 | cd ../fabric/sdkintegration 109 | docker-compose down; rm -rf /var/hyperledger/*; docker-compose up --force-recreate 110 | ``` 111 | 112 | 113 | 114 | * Open the file `Vagrantfile` and verify that the following `config.vm.network` statements are set. 
If not, then add them:
118 | * 打开`Vagrantfile`文件,验证下面的`config.vm.network`参数是否已设置,如果没有,请添加以下行。
119 | 
120 | 
121 | ```
122 | config.vm.network :forwarded_port, guest: 7050, host: 7050 # fabric orderer service
123 | config.vm.network :forwarded_port, guest: 7051, host: 7051 # fabric peer vp0 service
124 | config.vm.network :forwarded_port, guest: 7053, host: 7053 # fabric peer event service
125 | config.vm.network :forwarded_port, guest: 7054, host: 7054 # fabric-ca service
126 | config.vm.network :forwarded_port, guest: 5984, host: 15984 # CouchDB service
127 | ### Below are probably missing.....
128 | config.vm.network :forwarded_port, guest: 7056, host: 7056
129 | config.vm.network :forwarded_port, guest: 7058, host: 7058
130 | config.vm.network :forwarded_port, guest: 8051, host: 8051
131 | config.vm.network :forwarded_port, guest: 8053, host: 8053
132 | config.vm.network :forwarded_port, guest: 8054, host: 8054
133 | config.vm.network :forwarded_port, guest: 8056, host: 8056
134 | config.vm.network :forwarded_port, guest: 8058, host: 8058
135 | 
136 | ```
137 | 
138 | Add to your Vagrant file a folder for referencing the sdkintegration folder between the lines below:
139 | 
140 | 在Vagrantfile中,在下面两行之间增加一个引用sdkintegration目录的同步目录配置:
141 | 
142 | config.vm.synced_folder "..", "/opt/gopath/src/github.com/hyperledger/fabric"
143 | 
144 | config.vm.synced_folder "/home/<>/fabric-sdk-java/src/test/fixture/sdkintegration", "/opt/gopath/src/github.com/hyperledger/fabric/sdkintegration"
145 | 146 | config.vm.synced_folder ENV.fetch('LOCALDEVDIR', ".."), "#{LOCALDEV}"
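Putting the three lines together, the synced-folder section of the edited `Vagrantfile` would look roughly like the sketch below. This is illustrative only: the `/home/<>/fabric-sdk-java` path is a placeholder for your own checkout location, and `LOCALDEV`/`LOCALDEVDIR` come from the stock devenv Vagrantfile.

将这三行组合起来,修改后的`Vagrantfile`中同步目录部分大致如下(仅为示意:`/home/<>/fabric-sdk-java`路径需替换为你自己的检出路径,`LOCALDEV`/`LOCALDEVDIR`来自devenv自带的Vagrantfile):

```ruby
# Sketch of the synced_folder section after the edit (illustrative only):
# existing share of the fabric source tree
config.vm.synced_folder "..", "/opt/gopath/src/github.com/hyperledger/fabric"
# added line: share the SDK's sdkintegration fixtures into the VM
config.vm.synced_folder "/home/<>/fabric-sdk-java/src/test/fixture/sdkintegration",
                        "/opt/gopath/src/github.com/hyperledger/fabric/sdkintegration"
# existing local-dev share from the stock devenv Vagrantfile
config.vm.synced_folder ENV.fetch('LOCALDEVDIR', ".."), "#{LOCALDEV}"
```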
147 | 
148 | 
149 | ## SDK 依赖(SDK dependencies)
150 | 
151 | The SDK depends on a few third party libraries that must be included in your classpath when using the JAR file. To get a list of dependencies, refer to the pom.xml file or run
152 | 当你使用JAR文件时,SDK依赖的第三方库必须包含在classpath中。要得到依赖清单,可以参照pom.xml文件或者运行
153 | 
154 | `mvn dependency:tree` or `mvn dependency:list`.
155 | `mvn dependency:tree` 或 `mvn dependency:list`。
156 | 
157 | Alternatively, mvn dependency:analyze-report will produce a report in HTML format in the target directory listing all the dependencies in a more readable format.
158 | 除此之外,mvn dependency:analyze-report 会在target目录中生成一个HTML格式的报告,以更易读的格式列出所有的依赖包。
159 | 
160 | ## 使用SDK(Using the SDK)
161 | 
162 | The SDK's test cases use chaincode in the SDK's source tree: `/src/test/fixture`
163 | SDK的测试用例使用SDK源码树中的链码:`/src/test/fixture`
164 | 
165 | The SDK's JAR is in `target/fabric-sdk-java-1.0.0-SNAPSHOT.jar` and you will need the additional dependencies listed above.
166 | When the SDK is published to `Maven` you will be able to simply include it in your application's `pom.xml`.
167 | 
168 | SDK的JAR包在`target/fabric-sdk-java-1.0.0-SNAPSHOT.jar`,你还需要上面列出的附加依赖包。
169 | 当SDK发布到Maven后,你只需在你应用的`pom.xml`中包含它即可。
170 | 
171 | Add the code below to your `pom.xml` to download fabric-sdk-java-1.0
172 | 在你的`pom.xml`文件中增加下面的代码来下载fabric-sdk-java-1.0
173 | ```xml
174 | <dependency>
175 |     <groupId>org.hyperledger.fabric-sdk-java</groupId>
176 |     <artifactId>fabric-sdk-java</artifactId>
177 |     <version>1.0.0</version>
178 | </dependency>
179 | ```
180 | 
181 | ### 编译(Compiling)
182 | 
183 | To build this project, the following dependencies must be met
184 | 
185 | * JDK 1.8 or above
186 | * Apache Maven
187 | 
188 | 为了能够编译该工程,你需要满足以下要求
189 | 
190 | * JDK 1.8 或更高
191 | * Apache Maven
192 | 
193 | Once your JAVA_HOME points to your installation of JDK 1.8 (or above) and JAVA_HOME/bin and Apache maven are in your PATH, issue the following command to build the jar file:
194 | 当你的JAVA_HOME指向了JDK 1.8(或更高)安装目录,并在PATH中设置了JAVA_HOME/bin和Apache Maven后,你可以执行下面的命令来打包jar文件:
195 | 
196 | 
197 | mvn install
198 | 
199 | or
200 | 
201 | mvn install -DskipTests
202 | (如果你不想执行单元测试,使用第二条命令。)
203 | 
204 | ### 执行单元测试(Running the unit tests)
205 | 
206 | To run the unit tests, please use mvn test or mvn install which will run the unit tests and build the jar file. You must be running a local peer and orderer to be able to run the unit tests.
207 | 
208 | 通过 mvn test 来执行单元测试,或通过 mvn install 执行单元测试并生成jar文件。你必须在本地运行了peer和orderer节点才能执行单元测试。
209 | 
210 | ### 执行集成测试(Running the integration tests)
211 | 
212 | You must be running local instances of Fabric-ca, Fabric peers, and Fabric orderers to be able to run the integration tests. See above for running these services in Vagrant.
216 | Use this `maven` command to run the integration tests:
217 | 
218 | 你必须在本地运行Fabric-ca、Fabric peer和Fabric orderer实例才能够执行集成测试。参照上文如何在Vagrant中运行这些服务。
219 | 使用下面的`maven`指令来执行集成测试:
220 | 
221 | * _mvn failsafe:integration-test -DskipITs=false_
222 | 
223 | ### 端到端测试场景(End to end test scenario)
224 | 
225 | The _src/test/java/org/hyperledger/fabric/sdkintegration/End2endIT.java_ integration test is an example of installing, instantiating, invoking and querying a chaincode.
226 | It constructs the Hyperledger channel, deploys the `GO` chaincode, invokes the chaincode to do a transfer amount operation and queries the resulting blockchain world state.
227 | 
228 | This test is a reworked version of the Fabric [e2e_cli example](https://github.com/hyperledger/fabric/tree/master/examples/e2e_cli) to demonstrate the features of the SDK.
229 | To better understand blockchain and Fabric concepts, we recommend you install and run the _e2e_cli_ example.
230 | 
231 | _src/test/java/org/hyperledger/fabric/sdkintegration/End2endIT.java_中的集成测试,是一个安装、实例化、调用和查询链码的例子。它构建了Hyperledger通道,部署`GO`版本链码,调用链码执行转账操作,并查询区块链上的最终世界状态。
232 | 该测试是Fabric [e2e_cli example](https://github.com/hyperledger/fabric/tree/master/examples/e2e_cli)的重制版本,用于演示SDK的特性。为了更好地理解区块链和Fabric的概念,我们建议你安装并运行_e2e_cli_示例。
233 | 
234 | ### 端到端测试环境(End to end test environment)
235 | 
236 | The test defines one Fabric orderer and two organizations (peerOrg1, peerOrg2), each of which has 2 peers, and one fabric-ca service.
237 | 
238 | 该测试包含一个orderer节点,两个组织(peerOrg1, peerOrg2),每个组织各自包含两个节点,以及一个fabric-ca服务。
239 | 
240 | #### 证书及其他加密工件(Certificates and other cryptography artifacts)
241 | 
242 | Fabric requires that each organization has private keys and certificates for use in signing and verifying messages going to and from clients, peers and orderers.
243 | Each organization groups these artifacts in an **MSP** (Membership Service Provider) with a corresponding unique _MSPID_.
244 | 
245 | Furthermore, each organization is assumed to generate these artifacts independently. The *fabric-ca* project is an example of such a certificate generation service.
Fabric also provides the `cryptogen` tool to automatically generate all cryptographic artifacts needed for the end to end test, in the directory src/test/fixture/sdkintegration/e2e-2Orgs/channel.
246 | 
247 | 
248 | The command used to generate the end2end `crypto-config` artifacts:
249 | 
250 | ```build/bin/cryptogen generate --config crypto-config.yaml --output=crypto-config```
251 | 
252 | For ease of assigning ports and mapping of artifacts to physical files, all peers, orderers, and fabric-ca are run as Docker containers controlled via a docker-compose configuration file.
253 | 
254 | The files used by the end to end test are:
255 | * _src/test/fixture/sdkintegration/e2e-2Orgs/channel_ (everything needed to bootstrap the orderer and create the channels)
256 | * _src/test/fixture/sdkintegration/e2e-2Orgs/crypto-config_ (as-is. Used by `configtxgen` and `docker-compose` to map the MSP directories)
257 | * _src/test/fixture/sdkintegration/docker-compose.yaml_
258 | 
259 | 
260 | The end to end test case artifacts are stored under the directory _src/test/fixture/sdkintegration/e2e-2Org/channel_.
261 | 
262 | Fabric需要每个组织都有自己的私钥和证书,用来签名和验证客户端、节点和orderer之间的消息。
263 | 每个组织将这些工件组织在一个**MSP**(Membership Service Provider)中,并对应一个唯一的 _MSPID_。
264 | 
265 | 而且,我们假设各个组织都是独立地生成这些工件。*fabric-ca*项目就是这类证书生成服务的一个例子。
266 | Fabric同时提供`cryptogen`工具来自动生成端对端测试所需的加密工件,位于src/test/fixture/sdkintegration/e2e-2Orgs/channel目录。
267 | 
268 | 
269 | 用于生成端对端测试 `crypto-config` 工件的命令:
270 | ```build/bin/cryptogen generate --config crypto-config.yaml --output=crypto-config``` 271 | 272 | 为了方便分配端口和映射这些工件到物理文件,所有的peer, order 和fabric-ca都运行在Docker容器中,通过docker-compose配置文件来控制。 273 | 274 | 使用到了下面这些文件: 275 | * _src/test/fixture/sdkintegration/e2e-2Orgs/channel_ (everything needed to bootstrap the orderer and create the channels) 276 | * _src/test/fixture/sdkintegration/e2e-2Orgs/crypto-config_ (as-is. Used by `configtxgen` and `docker-compose` to map the MSP directories) 277 | * _src/test/fixture/sdkintegration/docker-compose.yaml_ 278 | 279 | 端对端测试用例工件存储在_src/test/fixture/sdkintegration/e2e-2Org/channel_目录中。 280 | 281 | ### 通过TLS连接到Orderer和Peer(TLS connection to Orderer and Peers) 282 | 283 | IBM Java needs the following properties defined to use TLS 1.2 to get an HTTPS connections to Fabric CA. 284 | ``` 285 | -Dcom.ibm.jsse2.overrideDefaultTLS=true -Dhttps.protocols=TLSv1.2 286 | ``` 287 | 288 | We need certificate and key for each of the Orderer and Peers for TLS connection. You can generate your certificate and key files with openssl command as follows: 289 | 290 | * Set up your own Certificate Authority (CA) for issuing certificates 291 | * For each of orderers and peers: 292 | * generate a private key: openssl genrsa 512 > key.pem. 293 | * generate a certificate request (csr): openssl req -new -days 365 -key key.pem -out csr.pem, which will request your input for some information, where CN has to be the container's alias name (e.g. peer0, peer1, etc), all others can be left blank. 294 | * sign the csr with the CA private key to generate a certificate: openssl ca -days 365 -in csr.pem -keyfile {CA's privatekey} -notext -out cert.pem 295 | * put the resulting cert.pem and key.pem together with the CA's certificate (as the name cacert.pem) in the directory where the docker container can access. 296 | 297 | The option -notext in the last openssl command in the above is important. 
Without the option, the resulting cert.pem may not work for some Java implementations (e.g. IBM JDK).
298 | The certificates and keys for the end-to-end test case are stored in the directory _src/test/fixture/sdkintegration/e2e-2Org/tls/_.
299 | 
300 | Currently, the pom.xml is set to use netty-tcnative-boringssl for TLS connection to Orderer and Peers, however, you can change the pom.xml (uncomment a few lines) to use an alternative TLS connection via ALPN.
301 | 
302 | IBM Java需要定义以下参数才能使用TLS 1.2来和Fabric CA建立HTTPS连接。
303 | ```
304 | -Dcom.ibm.jsse2.overrideDefaultTLS=true -Dhttps.protocols=TLSv1.2
305 | ```
306 | 
307 | 每个Orderer和Peer建立TLS连接都需要证书和私钥。你可以按以下步骤通过openssl命令来生成证书和私钥文件:
308 | * 设置你自己的证书认证机构(CA)来发行证书
309 | * 对每一个Orderer和Peer
310 |   * 生成私钥:openssl genrsa 512 > key.pem。
311 |   * 生成证书请求(csr):openssl req -new -days 365 -key key.pem -out csr.pem,这里可能需要你填写一些信息,CN应该是容器的别名(比如peer0, peer1等),其余的都可以留空
312 |   * 通过CA的私钥来给证书请求(CSR)签名并生成证书:openssl ca -days 365 -in csr.pem -keyfile {CA's privatekey} -notext -out cert.pem
313 |   * 将生成的cert.pem 和 key.pem 同CA的证书(名字cacert.pem)一起放到docker容器可以访问到的目录中。
314 | 
315 | 上面最后一条openssl命令中的 -notext 选项非常重要。没有这个参数的话,生成的cert.pem可能无法在某些Java实现中使用(比如IBM JDK)。
316 | 端对端测试用例的证书和密钥文件存放在_src/test/fixture/sdkintegration/e2e-2Org/tls/_目录中。
317 | 
318 | 目前,pom.xml设置使用netty-tcnative-boringssl来与Orderer和Peer建立TLS连接,你可以修改pom.xml(取消注释其中几行)来使用通过ALPN的其他TLS连接方式。
319 | 
320 | 
321 | ### 链码背书策略(Chaincode endorsement policies)
322 | 
323 | Policies are described in the [Fabric Endorsement Policies document](https://gerrit.hyperledger.org/r/gitweb?p=fabric.git;a=blob;f=docs/endorsement-policies.md;h=1eecf359c12c3f7c1ddc63759a0b5f3141b07f13;hb=HEAD).
324 | You create a policy using a Fabric tool ( an example is shown in [JIRA issue FAB-2376](https://jira.hyperledger.org/browse/FAB-2376?focusedCommentId=21121&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-21121))
325 | and give it to the SDK either as a file or a byte array.
The SDK, in turn, will use the policy when it creates chaincode instantiation requests. 326 | 327 | 328 | To input a policy to the SDK, use the **ChaincodeEndorsementPolicy** class. 329 | 330 | For testing purposes, there are 2 policy files in the _src/test/resources_ directory 331 | * _policyBitsAdmin_ ( which has policy **AND(DEFAULT.admin)** meaning _1 signature from the DEFAULT MSP admin' is required_ ) 332 | * _policyBitsMember_ ( which has policy **AND(DEFAULT.member)** meaning _1 signature from a member of the DEFAULT MSP is required_ ) 333 | 334 | and one file in the _src/test/fixture/sdkintegration/e2e-2Orgs/channel_ directory specifically for use in the end to end test scenario 335 | * _members_from_org1_or_2.policy_ ( which has policy **OR(peerOrg1.member, peerOrg2.member)** meaning _1 signature from a member of either organizations peerOrg1, PeerOrg2 is required_) 336 | 337 | Alternatively, you can also use ChaincodeEndorsementPolicy class by giving it a YAML file that has the policy defined in it. 338 | See examples of this in the End2endIT testcases that use _src/test/fixture/sdkintegration/chaincodeendorsementpolicy.yaml_ 339 | The file chaincodeendorsementpolicy.yaml has comments that help understand how to create these policies. The first section 340 | lists all the signature identities you can use in the policy. Currently, only ROLE types are supported. 341 | The policy section is comprised of `n-of` and `signed-by` elements. Then n-of (`1-of` `2-of`) require that many (`n`) in that 342 | section to be true. The `signed-by` references an identity in the identities section. 
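As an illustration of the `n-of`/`signed-by` structure described above, a minimal policy YAML could look like the sketch below. The identity names and MSP IDs are invented for the example (they are not the contents of the shipped chaincodeendorsementpolicy.yaml); the policy expresses "1 signature from a member of either organization", analogous to **OR(peerOrg1.member, peerOrg2.member)**.

```yaml
# Illustrative sketch only; identity names and MSP IDs are examples.
identities:            # signature identities usable in the policy (only ROLE types are supported)
    user1: {"role": {"name": "member", "mspId": "Org1MSP"}}
    user2: {"role": {"name": "member", "mspId": "Org2MSP"}}
policy:
    1-of:              # satisfied when any one of the nested conditions is met
      - signed-by: "user1"
      - signed-by: "user2"
```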
343 | 
344 | 背书策略在[Fabric Endorsement Policies document](https://gerrit.hyperledger.org/r/gitweb?p=fabric.git;a=blob;f=docs/endorsement-policies.md;h=1eecf359c12c3f7c1ddc63759a0b5f3141b07f13;hb=HEAD)上做了详细的阐释。
345 | 你可以通过Fabric工具来创建一个策略(在 [JIRA issue FAB-2376](https://jira.hyperledger.org/browse/FAB-2376?focusedCommentId=21121&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-21121) 有个例子)以文件或byte数组方式传给SDK。SDK随后会在创建链码实例化请求时使用该策略。
346 | 
347 | 使用**ChaincodeEndorsementPolicy**类将策略传给SDK。
348 | 为了测试目的,在_src/test/resources_下有两个策略文件。
349 | * _policyBitsAdmin_ ( 包含 **AND(DEFAULT.admin)** 策略, _需要一个DEFAULT MSP 管理员的签名_ )
350 | * _policyBitsMember_ ( 包含 **AND(DEFAULT.member)** 策略, _需要一个DEFAULT MSP 成员的签名_ )
351 | 
352 | 在_src/test/fixture/sdkintegration/e2e-2Orgs/channel_下有一个专门用于端对端测试场景的文件
353 | * _members_from_org1_or_2.policy_ ( 包含 **OR(peerOrg1.member, peerOrg2.member)** 策略, _需要任一组织(peerOrg1, PeerOrg2)下成员的签名_)
354 | 
355 | 另外,你也可以用ChaincodeEndorsementPolicy类,通过给它指定一个包含策略定义的YAML文件来实现。
356 | 参见End2endIT测试用例中使用_src/test/fixture/sdkintegration/chaincodeendorsementpolicy.yaml_文件的例子。
357 | 文件chaincodeendorsementpolicy.yaml中,有一些帮助理解如何创建策略的注释。第一部分列出了所有可以在策略中使用的签名身份。当前只支持ROLE类型。
358 | policy区块由`n-of`和`signed-by`元素组成。`n-of`(`1-of`、`2-of`)要求该区块中有`n`个元素为真。`signed-by`元素引用identities区块中的一个身份。
359 | 
360 | ### 通道创建工件(Channel creation artifacts)
361 | 
362 | Channel configuration files and orderer bootstrap files ( see directory _src/test/fixture/sdkintegration/e2e-2Orgs/channel_ ) are needed when creating a new channel.
363 | These are created with the Hyperledger Fabric `configtxgen` tool.
364 | 365 | For End2endIT.java the commands are 366 | 367 | * build/bin/configtxgen -outputCreateChannelTx foo.tx -profile TwoOrgsChannel -channelID foo 368 | * build/bin/configtxgen -outputCreateChannelTx bar.tx -profile TwoOrgsChannel -channelID bar 369 | * build/bin/configtxgen -outputBlock orderer.block -profile TwoOrgsOrdererGenesis 370 | 371 | with the configtxgen config file _src/test/fixture/sdkintegration/e2e-2Orgs/channel/configtx.yaml_ 372 | 373 | 374 | If `build/bin/configtxgen` tool is not present run `make configtxgen` 375 | 376 | Before running the end to end test case: 377 | * you may need to modify `configtx.yaml` to change all hostname and port definitions to match 378 | your server(s) hostname(s) and port(s). 379 | * you **WILL** have to modify `configtx.yaml` to have the _MSPDir_ point to the correct path to the _crypto-config_ directories. 380 | * `configtx.yaml` currently assumes that you are running in a Vagrant environment where the fabric, fabric-ca and fabric-sdk-java projects exist under the _/opt/gopath/src/github.com/hyperledger_ directory. 
381 | 
382 | 创建通道的时候,需要有通道的配置文件以及orderer引导文件(在_src/test/fixture/sdkintegration/e2e-2Orgs/channel_目录下)。
383 | 这些文件是通过Hyperledger Fabric的`configtxgen`工具创建的。
384 | 
385 | 对于End2endIT.java类来说,命令是:
386 | * build/bin/configtxgen -outputCreateChannelTx foo.tx -profile TwoOrgsChannel -channelID foo
387 | * build/bin/configtxgen -outputCreateChannelTx bar.tx -profile TwoOrgsChannel -channelID bar
388 | * build/bin/configtxgen -outputBlock orderer.block -profile TwoOrgsOrdererGenesis
389 | 
390 | 配置文件是 _src/test/fixture/sdkintegration/e2e-2Orgs/channel/configtx.yaml_。
391 | 
392 | 如果`build/bin/configtxgen`工具不存在,则需要执行`make configtxgen`来生成。
393 | 
394 | 在你运行端对端测试用例前:
395 | * 你可能需要修改`configtx.yaml`文件,将主机名和端口定义修改成你服务器真实的主机名和端口。
396 | * 你**必须**修改`configtx.yaml`文件,将_MSPDir_指向正确的_crypto-config_目录。
397 | * `configtx.yaml`文件当前假设你运行在Vagrant环境,而且fabric、fabric-ca 和fabric-sdk-java 在_/opt/gopath/src/github.com/hyperledger_目录下。
398 | 
399 | ### GO语言链码(GO Lang chaincode)
400 | 
401 | Go lang chaincode dependencies must be contained in the vendor folder.
402 | For an explanation of this see [Vendor folder explanation](https://blog.gopheracademy.com/advent-2015/vendor-folder/)
403 | 
404 | Go语言链码的依赖文件必须包含在vendor目录下。
405 | 详细解释参照[Vendor folder explanation](https://blog.gopheracademy.com/advent-2015/vendor-folder/)
406 | 
407 | ## 简单的问题解决(Basic Troubleshooting)
408 | 
409 | **identity or token do not match**
410 | 
411 | Keep in mind that you can perform the enrollment process with the membership services server only once, as the enrollmentSecret is a one-time-use password. If you have performed an FSUser registration/enrollment with the membership services and subsequently deleted the crypto tokens stored on the client side, the next time you try to enroll, errors similar to the ones below will be seen.
412 | 
413 | ``Error: identity or token do not match``
414 | 
415 | ``Error: FSUser is already registered``
416 | 
417 | To address this, remove any stored crypto material from the CA server by following the instructions here, which typically involves deleting the /var/hyperledger/production directory and restarting the membership services. You will also need to remove any of the crypto tokens stored on the client side by deleting the KeyValStore. That KeyValStore is configurable and is set to ${FSUser.home}/test.properties within the unit tests.
418 | 
419 | When running the unit tests, you will always need to clean the membership services database and delete the KeyValStore file, otherwise, the unit tests will fail.
420 | 
421 | 记住你只能运行成员服务的登记进程(enrollment process)一次,因为enrollmentSecret是一次性密码。如果你已经执行过一次FSUser registration/enrollment,然后删除了存储在客户端的加密token,下一次enroll的时候,就会出现下面类似的错误:
422 | ``Error: identity or token do not match``
423 | 
424 | ``Error: FSUser is already registered``
425 | 
426 | 为了解决这个问题,请参照here的说明删除所有存储在CA服务器上的加密材料(通常是删除/var/hyperledger/production目录),然后重启成员服务。同时你还需要通过删除KeyValStore来清除客户端存储的所有加密token。KeyValStore是可配置的,在单元测试中被设置为${FSUser.home}/test.properties。
427 | 
428 | 当运行单元测试时,你总是需要清理成员服务数据库并删除KeyValStore文件,否则单元测试会失败。
429 | 
430 | **java.security.InvalidKeyException: Illegal key size**
431 | 
432 | If you get this error, this means your JDK is not capable of handling unlimited strength crypto algorithms. To fix this issue, you will need to download the JCE libraries for your version of JDK. Please follow the instructions here to download and install the JCE for your version of the JDK.
433 | 
434 | 如果你碰到这个错误,说明你的JDK不支持无限强度的加密算法。要修复这个问题,你需要下载针对你JDK版本的JCE库,请参照相关说明来下载和安装JCE库。
435 | 
436 | ## 和开发者交流(Communicating with developers and fellow users)
437 | 
438 | Sign into Hyperledger project's Rocket chat
439 | For this you will also need a Linux Foundation ID
440 | 
441 | Join the fabric-sdk-java channel.
442 | 443 | 登录Hyperledger project's Rocket chat。你需要一个Linux Foundation ID。 然后加入fabric-sdk-java频道。 444 | 445 | ## 报告问题(Reporting Issues) 446 | 447 | If your issue is with building Fabric development environment please discuss this on rocket.chat's #fabric-dev-env channel. 448 | 如果你有编译Fabric开发环境的问题,请在rocket.chat的 #fabric-dev-env channel讨论。 449 | 450 | To report an issue please use: Hyperledger's JIRA. 451 | To login you will need a Linux Foundation ID (LFID) which you get at The Linux Foundation 452 | if you don't already have one. 453 | 454 | 报告问题请使用JIRA:Hyperledger's JIRA。 455 | 你需要使用Linux基金会ID(LFID)来登录,如果没有,可以在The Linux Foundation注册一个。 456 | 457 | JIRA Fields should be: 458 |
459 | * Type: Bug or New Feature
460 | * Component: fabric-sdk-java
461 | * Fix Versions: v1.0.1
462 | 
467 | 
468 | Please provide as much information as you can with the issue you're experiencing: stack traces, logs.
469 | 
470 | Please provide the output of **java -XshowSettings:properties -version**
471 | 
472 | Logging for the SDK can be enabled with setting environment variables:
473 | 
474 | ORG_HYPERLEDGER_FABRIC_SDK_LOGLEVEL=TRACE
475 | 
476 | ORG_HYPERLEDGER_FABRIC_CA_SDK_LOGLEVEL=TRACE
477 | 
478 | Fabric debug is by default enabled in the SDK docker-compose.yaml file with
479 | 
480 | On Orderer:
481 | 
482 | ORDERER_GENERAL_LOGLEVEL=debug
483 | 
484 | On peers:
485 | CORE_LOGGING_LEVEL=DEBUG
486 | 
487 | On Fabric CA:
488 | start the server with the -d parameter.
489 | 
490 | If possible, upload full logs to the JIRA, not just the part where the issue occurred.
491 | 
492 | 请提供尽量多的信息,比如堆栈跟踪记录和日志。
493 | 请提供 **java -XshowSettings:properties -version** 命令的输出结果
494 | 可能的话,请上传当问题发生时完整的日志,而不仅仅是出问题的片段。
495 | 
496 | Creative Commons License
This work is licensed under a Creative Commons Attribution 4.0 International License.
497 | 
--------------------------------------------------------------------------------
/docs/sdk_node_zh.md:
--------------------------------------------------------------------------------
1 | 
2 | | 原文 | 作者 | 审核修正 |
3 | | --- | --- | --- |
4 | | [原文](https://fabric-sdk-node.github.io) | | |
5 | 
6 | ## 这里什么都没有,就靠你了!!!
--------------------------------------------------------------------------------
/docs/txflow_zh.md:
--------------------------------------------------------------------------------
1 | 
2 | | 原文 | 作者 | 审核修正 |
3 | | --- | --- | --- |
4 | | [原文](http://hyperledger-fabric.readthedocs.io/en/latest/txflow.html) | Yueling Liang | |
5 | 
6 | 
7 | This document outlines the transactional mechanics that take place during a standard asset exchange. The scenario includes two clients, A and B, who are buying and selling radishes. They each have a peer on the network through which they send their transactions and interact with the ledger.
8 | 
9 | 本文概述了标准资产交易过程中的事务机制。该场景包含客户A和B,在进行萝卜买卖。他们各自有一个网络节点,通过节点他们发送交易并和账本进行交互。
10 | 
11 | ![](img/step0.png)
12 | 
13 | ## 假设
14 | 
15 | (Assumptions)
16 | 
17 | This flow assumes that a channel is set up and running. The application user has registered and enrolled with the organization’s certificate authority (CA) and received back necessary cryptographic material, which is used to authenticate to the network.
18 | 
19 | 该流程假设通道已建立并正常运行。应用用户已在组织的证书颁发机构(CA)注册并登记,获得了用于向网络进行身份认证的必要加密材料。
20 | 
21 | The chaincode (containing a set of key value pairs representing the initial state of the radish market) is installed on the peers and instantiated on the channel. The chaincode contains logic defining a set of transaction instructions and the agreed upon price for a radish. An endorsement policy has also been set for this chaincode, stating that both peerA and peerB must endorse any transaction.
22 | 
23 | 链码(包含一组代表萝卜市场初始状态的键值对)被安装在节点上并在通道上进行实例化。链码包含定义交易指令集合的逻辑和达成一致的萝卜价格。还为该链码设置了一项背书策略,表明节点A和B都必须对任何交易进行背书。
24 | 
25 | ![](img/step1.png)
26 | 
27 | ## 1. 客户A发起交易
28 | 
29 | (Client A initiates a transaction)
30 | 
31 | What’s happening? - Client A is sending a request to purchase radishes. The request targets peerA and peerB, who are respectively representative of Client A and Client B. The endorsement policy states that both peers must endorse any transaction, therefore the request goes to peerA and peerB.
32 | 
33 | 发生了什么?- 客户A发出萝卜购买请求。请求目标节点A和B,分别代表客户A和B。背书策略表明两个节点必须为任何交易进行背书,因而请求被发送到节点A和B。
34 | 
35 | Next, the transaction proposal is constructed. An application leveraging a supported SDK (node, java, python) utilizes one of the available API’s which generates a transaction proposal. The proposal is a request to invoke a chaincode function so that data can be read and/or written to the ledger (i.e. write new key value pairs for the assets). The SDK serves as a shim to package the transaction proposal into the properly architected format (protocol buffer over gRPC) and takes the user’s cryptographic credentials to produce a unique signature for this transaction proposal.
36 | 
37 | 接下来构建交易提案。一个以可用SDK(node, java, python)为支撑的应用利用有效的API来生成交易提案。这项提案作为调用链码功能的请求来完成数据到账本的读取和/或写入(即为资产写入新的键值对)。SDK起到垫片(shim)的作用:将交易提案打包成规范的架构格式(基于gRPC的protocol buffer),并使用用户的加密凭证为该交易提案生成唯一的签名。
38 | 
39 | ![](img/step2.png)
40 | 
41 | ## 2. 背书节点验证签名&执行交易
42 | 
43 | (Endorsing peers verify signature & execute the transaction)
44 | 
45 | The endorsing peers verify the signature (using MSP) and determine if the submitter is properly authorized to perform the proposed operation (using the channel’s ACL). The endorsing peers take the transaction proposal arguments as inputs and execute them against the current state database to produce transaction results including a response value, read set, and write set. No updates are made to the ledger at this point.
The set of these values, along with the endorsing peer’s signature and a YES/NO endorsement statement is passed back as a “proposal response” to the SDK which parses the payload for the application to consume.
46 | {The MSP is a local process running on the peers which allows them to verify transaction
47 | requests arriving from clients and to sign transaction results (endorsements). The ACL (Access Control List) is defined at channel creation time, and determines which peers and end users are permitted to perform certain actions.}
48 | 
49 | 背书节点使用MSP验证签名并确定请求者是否被合理授权进行提案的操作(使用通道ACL)。背书节点以交易提案参数为输入,基于当前状态数据库执行来生成交易结果,输出包括反馈值、读取集合和写入集合。截止现在账本还未进行更新。这些值的集合,背书节点的签名以及是/否的背书声明一同作为“提案反馈”被传输回到SDK,SDK解析载荷供应用程序使用。
50 | {MSP是在节点上运行的一个本地流程,该流程允许节点验证客户端的交易请求和签署交易结果(背书)。ACL(权限控制清单)在通道创建时定义,决定哪些节点和用户被授权进行指定操作。}
51 | 
52 | ![](img/step3.png)
53 | 
54 | ## 3. 审查提案反馈
55 | 
56 | (Proposal responses are inspected)
57 | 
58 | The application verifies the endorsing peer signatures and compares the proposal responses (link to glossary term which will contain a representation of the payload) to determine if the proposal responses are the same and if the specified endorsement policy has been fulfilled (i.e. did peerA and peerB both endorse). The architecture is such that even if an application chooses not to inspect responses or otherwise forwards an unendorsed transaction, the policy will still be enforced by peers and upheld at the commit validation phase.
59 | 应用对背书节点签名进行验证,并比较提案反馈(链接到词汇表术语,其中会包含载荷的表示),来判断提案反馈是否一致,以及指定的背书策略是否被满足(即节点A和B都进行了背书)。这种架构可以保证即使一个应用选择不进行反馈审查或者转发了没有背书的交易,背书策略依然会被节点执行并在验证提交阶段维持。
60 | 
61 | ![](img/step4.png)
62 | 
63 | ## 4. 客户组合交易背书
64 | 
65 | (Client assembles endorsements into a transaction)
66 | 
67 | The application “broadcasts” the transaction proposal and response within a “transaction message” to the Ordering Service. The transaction will contain the read/write sets, the endorsing peers signatures and the Channel ID.
The Ordering Service does not read the transaction details, it simply receives transactions from all channels in the network, orders them chronologically by channel, and creates blocks of transactions per channel.
68 | 
69 | 应用将交易提案和响应封装在“交易消息”中“广播”给排序服务。交易包含读/写集合、背书节点签名和通道ID。排序服务不读取交易细节,只是从网络中所有通道接收交易,按通道将交易按时间顺序排序,并为每个通道创建交易区块。
70 | 
71 | ![](img/step5.png)
72 | 
73 | ## 5. 交易验证和提交
74 | 
75 | (Transaction is validated and committed)
76 | 
77 | The blocks of transactions are “delivered” to all peers on the channel. The transactions within the block are validated to ensure endorsement policy is fulfilled and to ensure that there have been no changes to ledger state for read set variables since the read set was generated by the transaction execution. Transactions in the block are tagged as being valid or invalid.
78 | 
79 | 交易区块被“分发”到通道中的所有节点。区块中的交易会被验证,以确保背书策略被满足,并确保自交易执行生成读取集合以来,账本中读取集合变量的状态没有发生变化。区块中的交易被标记为有效或无效。
80 | 
81 | ![](img/step6.png)
82 | 
83 | ## 6. 账本更新
84 | 
85 | (Ledger updated)
86 | 
87 | Each peer appends the block to the channel’s chain, and for each valid transaction the write sets are committed to the current state database. An event is emitted, to notify the client application that the transaction (invocation) has been immutably appended to the chain, as well as notification of whether the transaction was validated or invalidated.
88 | 
89 | 每个节点都把区块追加到通道的链中,对每项有效交易,写入集合被提交到当前状态数据库。系统会发出一个事件,通知客户端应用:交易(调用)已被不可更改地追加到链上,并通知交易是有效还是无效。
90 | 
91 | Note: See the Chaincode Swimlanes diagram to better understand the server side flow and the protobuffers.
92 | 注意:参照链码泳道图以更好地理解服务端流程和协议缓冲(protobuffer)。
93 | 
94 | 
--------------------------------------------------------------------------------
/docs/write_first_app_zh.md:
--------------------------------------------------------------------------------
1 | 
2 | | 原文 | 作者 | 审核修正 |
3 | | --- | --- | --- |
4 | | [原文](http://hyperledger-fabric.readthedocs.io/en/latest/write_first_app.html) | | |
5 | 
6 | ## 这里什么都没有,就靠你了!!!
-------------------------------------------------------------------------------- /mkdocs.yml: -------------------------------------------------------------------------------- 1 | site_name: Hyperledger中文文档 2 | site_url: https://chainnova.github.io/hyperledgerDoc/ 3 | repo_url: https://github.com/ChainNova/hyperledgerDoc 4 | site_description: 'Hyperledger中文文档' 5 | 6 | site_author: Hyperledger国际化工作组 7 | copyright: Hyperledger国际化工作组(yls@chainnova.com) 8 | 9 | theme: readthedocs 10 | 11 | pages: 12 | - 欢迎: index.md 13 | - 词汇表: glossary.md 14 | - 快速入门: getting_started.md 15 | - 协议规范: protocol-spec_zh.md 16 | - Fabric教程: 17 | - 构建第一个fabric网络: build_network_zh.md 18 | - 编写第一个应用: write_first_app_zh.md 19 | - Chaincode: chaincode_zh.md 20 | - Fabric操作指南: 21 | - MSP: msp_zh.md 22 | - Channel 配置(configtx): configtx_zh.md 23 | - Channel 配置(configtxgen): configtxgen_zh.md 24 | - Channel 重新配置(configtxlator): configtxlator_zh.md 25 | - 背书策略: endorsement-policies_zh.md 26 | - 错误处理: error-handling_zh.md 27 | - 日志控制: logging-control_zh.md 28 | - Fabric设计: 29 | - 架构说明: arch-deep-dive_zh.md 30 | - 交易流程: txflow_zh.md 31 | - CA: ca-setup_zh.md 32 | - SDK--node: sdk_node_zh.md 33 | - SDK--java: sdk_java_zh.md 34 | - 基于kafka的排序服务: kafka_zh.md 35 | - Channels: channels_zh.md 36 | - 账本: ledger_zh.md 37 | - Read-Write set: read-write-set.md 38 | - Gossip数据传输协议: gossip_zh.md 39 | - MSP&ACL: msp_acl_zh.md 40 | - Fabric SDK 设计: sdk-design_zh.md 41 | - Makefile文件解析: makefile_zh.md 42 | 43 | markdown_extensions: 44 | - extra 45 | - tables 46 | - toc: 47 | permalink: True 48 | - fenced_code 49 | - smarty 50 | - mdx_math: 51 | enable_dollar_delimiter: True 52 | - footnotes 53 | --------------------------------------------------------------------------------