├── .gitignore ├── LICENSE ├── README.md ├── Vagrantfile ├── configuration ├── aws.json └── crypto_config.yaml ├── scripts ├── create_remote_scripts.py ├── crypto_tools │ ├── cert_notes.txt │ ├── create_intermediate_ca.sh │ ├── create_key.sh │ ├── create_peer_env.sh │ ├── create_root_ca.sh │ ├── cryptogen.py │ ├── docker_ca.sh │ ├── docker_orderer.sh │ ├── docker_peer.sh │ ├── docker_tools.sh │ ├── openssl.cnf │ ├── parse_domain_to_subjects.py │ └── signingIdentity │ │ ├── generateSigningIdentity.js │ │ ├── generateSigningIdentity.sh │ │ ├── hash.js │ │ ├── package-lock.json │ │ └── package.json ├── fabric │ ├── core.yaml │ └── core_1.1-preview.yaml ├── fabric_artifacts │ ├── channel_tools.sh │ ├── configtxBase.yaml │ ├── create_dev_docker_compose.py │ └── gen_configtx.py ├── get_hosts_scripts.py ├── install.sh ├── kuma-hf-network.py ├── notes.txt └── provisioning │ ├── configure_ssh.sh │ ├── install.sh │ ├── install_fabric_tools.sh │ └── stopDocker.sh └── shared ├── chaincode_tools ├── compile_chaincode.sh ├── pull_chaincode.sh └── update_chaincodes.py ├── clean_old_dockers.py └── docker.service /.gitignore: -------------------------------------------------------------------------------- 1 | **/node_modules 2 | .vagrant 3 | .HLVersion 4 | **/pki 5 | generated 6 | -------------------------------------------------------------------------------- /LICENSE: -------------------------------------------------------------------------------- 1 | MIT License 2 | 3 | Copyright (c) 2018 Kunstmaan | Part of Accenture Interactive 4 | 5 | Permission is hereby granted, free of charge, to any person obtaining a copy 6 | of this software and associated documentation files (the "Software"), to deal 7 | in the Software without restriction, including without limitation the rights 8 | to use, copy, modify, merge, publish, distribute, sublicense, and/or sell 9 | copies of the Software, and to permit persons to whom the Software is 10 | furnished to do so, subject to the following 
conditions: 11 | 12 | The above copyright notice and this permission notice shall be included in all 13 | copies or substantial portions of the Software. 14 | 15 | THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR 16 | IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, 17 | FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE 18 | AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER 19 | LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, 20 | OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE 21 | SOFTWARE. 22 | -------------------------------------------------------------------------------- /README.md: -------------------------------------------------------------------------------- 1 | # Install: 2 | 3 | Install the dependencies: 4 | * Git 5 | * [Python](https://www.python.org/downloads/) modules: 6 | * [pyaml](https://github.com/yaml/pyyaml) 7 | * [Vagrant](https://www.vagrantup.com/docs/installation/) plugins (for running the network on AWS): 8 | * vagrant-aws 9 | * vagrant-docker-compose 10 | * [NPM](https://docs.npmjs.com/getting-started/installing-node) 11 | * [aws](https://aws.amazon.com/cli/) 12 | 13 | Run the following command from your terminal: 14 | 15 | ``` 16 | curl -fsSL https://raw.githubusercontent.com/Kunstmaan/hyperledger-fabric-network-setup/master/scripts/install.sh?token=AG6ftlJwD7jEr7kZph_QEsqncTTeroBFks5aZc1pwA%3D%3D | bash 17 | ``` 18 | 19 | This repo requires that the chaincode repository you want to deploy defines at least the chaincodes configuration in its package.json: 20 | https://github.com/Kunstmaan/hyperledger-fabric-chaincode-dev-setup#initializing-new-project 21 | 22 | # Commands 23 | 24 | With the following command you can get an overview of all the commands available: 25 | 26 | ``` 27 | kuma-hf-network -h 28 | ``` 29 | 30 | ## Bootstrap a default network configuration 31 | 32 | When you
want to create a new network configuration, you can initialize a new network with the following command: 33 | 34 | ``` 35 | kuma-hf-network boostrap . 36 | ``` 37 | 38 | This will create a default aws configuration and network configuration at the provided path and generate all the artifacts based on this configuration. 39 | 40 | ## Generate certificates, docker files, channel artifacts 41 | 42 | ``` 43 | kuma-hf-network generate 44 | ``` 45 | 46 | Generates all the artifacts needed to bring the network up and configure the channels based on the provided `crypto_config`. 47 | 48 | ## Generate a new user for a certain organisation 49 | 50 | ``` 51 | kuma-hf-network generate-user 52 | ``` 53 | 54 | Generates all the cryptographic material for a new user belonging to a certain organisation. 55 | 56 | 57 | 58 | ## Bring the network UP 59 | 60 | ``` 61 | kuma-hf-network network-up 62 | ``` 63 | 64 | Creates all the AWS instances as provided in the `aws_config` and brings up all the fabric instances using the cryptographic material generated with the `crypto_config`. This will also output scripts for updating your hosts file so that the network's domain names resolve to the AWS IP addresses. 65 | 66 | ## Bring the network DOWN 67 | 68 | ``` 69 | kuma-hf-network network-down 70 | ``` 71 | 72 | Brings the network back down. 73 | 74 | ## Update the chaincodes on the current network 75 | 76 | ``` 77 | kuma-hf-network update-chaincodes 78 | ``` 79 | 80 | Upgrades the chaincodes on the network based on the version in the `package.json` of each chaincode. This script expects the chaincodes to be generated with the [hyperledger-fabric-chaincode-dev-setup](https://github.com/Kunstmaan/hyperledger-fabric-chaincode-dev-setup); the version inside each package.json is used to decide whether a chaincode needs to be updated.
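The update-chaincodes step hinges on a simple version comparison. A minimal sketch of such a check in Python, assuming semver-style version strings (the helper names below are illustrative, not the actual API of `update_chaincodes.py`):

```python
# Sketch of the version check behind update-chaincodes: a chaincode is
# upgraded only when the version declared in its package.json is newer
# than the version already running on the network.
# Helper names are illustrative, not the repository's actual API.
import json


def parse_version(version):
    """Split a semver-style string like '1.2.10' into a comparable tuple."""
    return tuple(int(part) for part in version.split("."))


def needs_upgrade(package_json_text, instantiated_version):
    """Return True when package.json declares a newer version."""
    declared = json.loads(package_json_text)["version"]
    return parse_version(declared) > parse_version(instantiated_version)


if __name__ == "__main__":
    pkg = '{"name": "my-chaincode", "version": "1.1.0"}'
    print(needs_upgrade(pkg, "1.0.9"))  # a newer declared version triggers an upgrade
```

Note that comparing tuples of integers, rather than raw strings, is what makes `1.2.10` correctly rank above `1.2.9`.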
81 | 82 | ## Update the current tool 83 | 84 | ``` 85 | kuma-hf-network update 86 | ``` 87 | 88 | Updates the script to the latest version. 89 | 90 | # AWS Configuration 91 | 92 | Under the hood this tool uses the aws CLI. Make sure it is configured correctly by running `aws configure` and by setting the following environment variables: 93 | * `AWS_SECRET_ACCESS_KEY`, your aws secret key 94 | * `AWS_ACCESS_KEY_ID`, your aws access key id 95 | 96 | More information can be found here: 97 | https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html 98 | 99 | ## Configuring your AWS EC2 Instances 100 | 101 | First you need to configure your AWS account: make sure a VPC is created with a keypair to access it, its own subnet and a security group. A detailed tutorial can be found [here](https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Internet_Gateway.html). When this is all configured you can start modifying the configuration file so that this script can bring the network up. Configuring the network can be done via the [aws configuration file](./configuration/aws.json). This is the configuration file you need to provide when running `kuma-hf-network network-up`. 102 | 103 | * `region`, the region to start the instances in, for example "eu-west-1" 104 | * `availability_zone`, the availability zone within the region to launch the instance. If nil, it will use the default set by Amazon.
105 | * `security_groups`, an array of security group ids, more information can be found [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-network-security.html) 106 | * `subnet_id`, the id of the subnet to use, more information can be found [here](https://docs.aws.amazon.com/AmazonVPC/latest/UserGuide/VPC_Subnets.html) 107 | * `keypair_name`, the name of the keypair that should be used to access the EC2 Instance, more information can be found [here](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ec2-key-pairs.html) 108 | * `private_key_path`, the path on your local machine to the private key of the keypair 109 | * `ssh_username`, the name of the user to access the EC2 instance 110 | * `consul_master_ip`, the ip address of the instance you want to use as consul master 111 | * `chaincode_repository`, the git repository where the chaincode can be found 112 | * `chaincode_base_path`, the path to the chaincode inside the chaincode github repository 113 | * `chaincode_build`, specify if the chaincode should be built first; this will execute "npm run build" 114 | * `private_ssh_key_for_chaincode_repo`, the path to the private key needed to get access to the chaincode repository 115 | * `public_ssh_key_for_chaincode_repo`, the path to the public key needed to get access to the chaincode repository 116 | * `ec2s`, a map of all the ec2 instances you want to deploy, with the key being the name and the value being instance specific configuration 117 | 118 | ### EC2 Instance configuration 119 | 120 | You can create as many EC2 instances as you want; for every instance you need to configure the following things: 121 | 122 | * `ami_id`, the ami id to boot, for example: ami-785db401 which is an amd64 ubuntu server 123 | * `instance_type`: the type of instance 124 | * `fabric`: an array with the different fabric tools you want to deploy on this instance.
Each tool consists of a role (possible roles are "orderer", "peer" and "tools") and the docker file to use. 125 | * `ip`: the ip address for this instance 126 | * `volume_size`: the volume size for the ebs instance in GB 127 | 128 | # An overview of the individual scripts: 129 | --- 130 | 131 | Environment variables description: 132 | * `GEN_PATH`: The path to the folder that should contain crypto-config, docker files and channel artifacts 133 | 134 | --- 135 | 136 | * `scripts/hyperledgerNetworkTool.py` 137 | > Master script that uses all others. Run it with -h for help on how to use it. This is normally the only script you should run. 138 | 139 | * `scripts/crypto_tools/cryptogen.py` 140 | > Creates the crypto-config structure containing the MSP for all organisations, users and peers. Also creates the docker files and runs fabric_artifacts/gen_configtx. Uses all other scripts in `scripts/crypto_tools`, which use openssl. 141 | Uses TLS handshakes. 142 | 143 | * `scripts/fabric_artifacts/gen_configtx.py` 144 | > Creates the scripts that generate channel artifacts. Requires the files created by cryptogen.py 145 | 146 | * `scripts/create_remote_scripts.py` 147 | > Creates scripts that automatically connect to the network in order to install channels or update chaincodes 148 | 149 | * `scripts/get_hosts_scripts.py` 150 | > Creates scripts that modify /etc/hosts in order to resolve network names to ip addresses 151 | 152 | 153 | The shared folder is shared with all hyperledger tools (CLI) docker containers.
It will also be synced to all aws nodes 154 | 155 | The `shared/chaincode_tools/` folder contains code intended to run on the tools docker containers 156 | * `shared/chaincode_tools/update_chaincodes.py` 157 | > Pulls code from the git repository, reads the config file and installs/instantiates/upgrades chaincodes accordingly 158 | 159 | * `shared/chaincode_tools/compile_chaincode.sh` 160 | > Compiles go chaincode 161 | 162 | * `shared/chaincode_tools/pull_chaincode.sh` 163 | > Pulls the chaincode repository and puts it in $GOPATH/src. Used by update_chaincodes.py. Uses keys defined in aws_config in fields `private_ssh_key_for_chaincode_repo` and `public_ssh_key_for_chaincode_repo` 164 | 165 | * `provisioning/install.sh` 166 | > Installs node, go and the hyperledger tools. No longer used, but it can still be run manually on AWS nodes. 167 | 168 | * `provisioning/stopDocker.sh` 169 | > Stops all docker containers 170 | 171 | * `provisioning/install_fabric_tools.sh` 172 | > Installs `configtxgen`, `configtxlator`, `cryptogen`, `orderer`, `peer` 173 | 174 | # Structure of the generated folder 175 | 176 | This folder is created by running `scripts/hyperledgerNetworkTool.py gen` 177 | 178 | It contains 6 folders (the sixth is generated when booting the network): 179 | * channel 180 | > Contains scripts that allow the creation of channel artifacts and their 181 | use in creating and joining channels. 182 | 183 | * `$GEN_PATH/channel/create_and_join_channel.sh` 184 | > Creates the channel, and installs it on the relevant peers 185 | 186 | * `$GEN_PATH/channel/create_channel_artifacts.sh` 187 | > Creates the initial orderer blocks and initial transactions for each channel 188 | 189 | * crypto_config 190 | > Contains all the cryptographic material needed by a hyperledger fabric network. 191 | 192 | * devmode 193 | > Contains a simplified version of the network that can be used to develop chaincode easily.
194 | This repository can be used to develop javascript chaincode efficiently: [hyperledger-fabric-chaincode-dev-setup](https://github.com/janb87/hyperledger-fabric-chaincode-dev-setup) 195 | 196 | * docker 197 | > Contains the docker files used to boot the network 198 | 199 | * hfc-key-store 200 | > Contains signing identities required to connect to the blockchain via an app 201 | 202 | * scripts 203 | > Contains scripts that you can run on your local machine to update /etc/hosts file 204 | 205 | 206 | # Cryptographic materials 207 | ------ 208 | ## Certificate settings 209 | 210 | Signature Algorithm must be *ecdsa-with-SHA1* 211 | > The settings are given in the form CA - TLS 212 | 213 | | Property | Root | Inter | Org | Peer | User | 214 | | --------------------------------- |:-----:|:-----:|:-----:|:-----:|:-----:| 215 | | X509v3 Key Usage | | | | | | 216 | | --- Digital Signature | V - V | V - V | V - V | V - V | V - V | 217 | | --- Key Encipherment | V - V | V - V | V - V | X - V | X - V | 218 | | --- CRL Sign | V - V | V - V | V - V | X - X | X - X | 219 | | --- Certificate Sign | V - V | V - V | V - V | X - X | X - X | 220 | | X509v3 Basic Constraints | | | | | | 221 | | --- CA:TRUE | V - V | V - V | V - V | X - X | X - X | 222 | | --- CA:FALSE | X - X | X - X | X - X | V - V | V - V | 223 | | X509v3 Authority Key Identifier | X - X | V - V | V - V | V - V | V - V | 224 | | X509v3 Subject Key Identifier | V - V | V - V | V - V | V - V | V - V | 225 | | X509v3 Extended Key Usage | | | | | | 226 | | --- TLS Web Server Authentication | X - X | X - X | X - V | X - V | X - V | 227 | | --- TLS Web Client Authentication | X - X | X - X | X - V | X - V | X - V | 228 | | --- 2.5.29.37.0 | V - V | V - V | V - X | X - X | X - X | 229 | 230 | 231 | # AWS Configuration 232 | * Key Pair: 233 | > You need a key pair which will be used to connect to the aws instances. 
This key 234 | pair consists of a public key (that is on aws), a private key (that is on your local machine) and a 235 | key pair name used to refer to that key when creating the instances 236 | 237 | # Roadmap 238 | * Install dependencies automatically 239 | * trigger aws configure automatically 240 | * make BatchSize, BatchTimeout, ... more configurable 241 | -------------------------------------------------------------------------------- /Vagrantfile: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env ruby 2 | # This Vagrantfile is a good example of provisioning multiple EC2 instances 3 | # using a single file. 4 | # http://stackoverflow.com/questions/24385079/multiple-ec2-instances-using-vagrant 5 | # read aws specific config from json file 6 | # https://github.com/savishy/docker-examples/tree/master/docker-swarm/docker-swarm-aws 7 | 8 | # https://hub.docker.com/r/consul/ 9 | # https://luppeng.wordpress.com/2016/05/03/setting-up-an-overlay-network-on-docker-without-swarm/ 10 | # plugins : vagrant plugin install vagrant-docker-compose 11 | # plugins : vagrant plugin install vagrant-aws 12 | 13 | # NOTE: The consul master must be the first instance in aws.json 14 | # (it must boot before the others) 15 | 16 | require 'yaml' 17 | 18 | Vagrant.require_version '>= 2.0.0' 19 | 20 | INSTALL_DIR = File.expand_path(File.dirname(__FILE__)) 21 | VERSION_FILE = "#{INSTALL_DIR}/.HLVersion".freeze 22 | 23 | def inc_version 24 | unless File.file?(VERSION_FILE) 25 | File.open(VERSION_FILE, 'w') { |file| file.write('1') } 26 | end 27 | 28 | file = File.open(VERSION_FILE, File::RDWR) 29 | file.seek(0) # rewind to the beginning of the file 30 | 31 | version = file.readline.to_i.freeze 32 | 33 | file.seek(0) 34 | file.write(version + 1) # Increment version number 35 | file.close 36 | version 37 | end 38 | 39 | # Version tag to easily find instances in aws console 40 | VERSION = inc_version 41 | 42 | AWS_CFG = 
JSON.parse(File.read(ENV['AWS_CONFIG'])).freeze 43 | 44 | # Set secret access key in your environment 45 | SECRET_ACCESS_KEY = ENV['AWS_SECRET_ACCESS_KEY'].freeze 46 | ACCESS_KEY_ID = ENV['AWS_ACCESS_KEY_ID'].freeze 47 | KEYPAIR_NAME = AWS_CFG['keypair_name'].freeze 48 | SECURITY_GROUPS = AWS_CFG['security_groups'].freeze 49 | PRIVATE_KEY_PATH = AWS_CFG['private_key_path'].freeze 50 | 51 | GEN_PATH = ENV['GEN_PATH'].freeze 52 | SHARED_PATH = "#{INSTALL_DIR}/shared/".freeze 53 | HYPERLEDGER_VERSION = '1.1.0'.freeze 54 | DOCKER0_IP_ADDRESS = '172.17.0.1'.freeze 55 | CONSUL_MASTER_IP = AWS_CFG['consul_master_ip'].freeze 56 | 57 | DOCKER_COMPOSE_VERSION = '1.17.0'.freeze 58 | CONSUL_VERSION = '1.0.0'.freeze 59 | 60 | CMD_GET_PUBLIC_IP = 'dig +short myip.opendns.com @resolver1.opendns.com'.freeze 61 | 62 | def docker_opts(private_ip_address) 63 | # NOTE: Listen only on the private ip, and not on any ip (0.0.0.0). 64 | # Otherwise coin miners will connect to the daemon and abuse it. 65 | # If this daemon needs to listen on a public ip, 66 | # it should secure the connection with certificates and use port 2376 (and not 2375) 67 | 68 | "DOCKER_OPTS=\"-H tcp://#{private_ip_address}:2375 -H unix:///var/run/docker.sock --cluster-advertise eth0:2375 --cluster-store consul://#{CONSUL_MASTER_IP}:8500\"" 69 | end 70 | 71 | NETWORK_NAME = 'hyperledgerNet'.freeze 72 | 73 | WAIT_FOR_NETWORK = ''"while ! docker network ls | grep -q #{NETWORK_NAME}; 74 | do echo \"Waiting for #{NETWORK_NAME}...\"; 75 | sleep 5; 76 | done; 77 | sleep 10; 78 | echo \"Connected to #{NETWORK_NAME} !\""''.freeze 79 | 80 | def wait_for_port(ip_address, port) 81 | ''"while ! nc -zv -w5 #{ip_address} #{port}; 82 | do echo \"Waiting for #{ip_address}:#{port} to be open\"; 83 | sleep 5; 84 | done; 85 | sleep 10; 86 | echo \"#{ip_address}:#{port} is now open !\"; 87 | "'' 88 | end 89 | 90 | def get_docker_daemon_cmd(private_ip_address, wait = true) 91 | (wait ?
wait_for_port(CONSUL_MASTER_IP, 8500) : '') + ''" 92 | echo 'Configuring docker daemon...' 93 | sed -i '/DOCKER_OPTS=/d' /etc/default/docker 94 | echo '#{docker_opts(private_ip_address)}' >> /etc/default/docker 95 | echo '--> Modifying docker.service...' 96 | cp /vagrant/shared/docker.service /lib/systemd/system/docker.service 97 | echo '--> Reloading daemon settings...' 98 | systemctl daemon-reload 99 | echo '--> Restarting docker service...' 100 | service docker restart 101 | echo 'Done. Docker daemon configured.' 102 | "'' 103 | end 104 | 105 | def get_docker_consul_args(private_ip_address) 106 | ''"-d \ 107 | -v /mnt:/data \ 108 | -p #{private_ip_address}:8300:8300 \ 109 | -p #{private_ip_address}:8301:8301 \ 110 | -p #{private_ip_address}:8301:8301/udp \ 111 | -p #{private_ip_address}:8302:8302 \ 112 | -p #{private_ip_address}:8302:8302/udp \ 113 | -p #{private_ip_address}:8400:8400 \ 114 | -p #{private_ip_address}:8500:8500 \ 115 | -p #{DOCKER0_IP_ADDRESS}:53:53/udp \ 116 | --net=host"'' 117 | end 118 | 119 | # Configures an AWS instance 120 | # Params: 121 | # +aws_node+:: the config of vagrant 122 | # +private_ip_address+:: the private ip address of the AWS node 123 | # +node_name+:: the name of the node, will be shown in AWS 124 | # +node_config+:: json node configuration 125 | def configure_instance(aws_node, private_ip_address, node_name, node_config) 126 | # Spin up EC2 instances 127 | aws_node.vm.provider :aws do |ec2, override| 128 | ec2.keypair_name = KEYPAIR_NAME 129 | ec2.access_key_id = ACCESS_KEY_ID 130 | ec2.secret_access_key = SECRET_ACCESS_KEY 131 | ec2.security_groups = SECURITY_GROUPS 132 | override.ssh.private_key_path = PRIVATE_KEY_PATH 133 | 134 | # read region, ami etc from json.
135 | ec2.region = AWS_CFG['region'] 136 | ec2.subnet_id = AWS_CFG['subnet_id'] 137 | ec2.availability_zone = AWS_CFG['region'] + AWS_CFG['availability_zone'] 138 | ec2.ami = node_config['ami_id'] 139 | ec2.instance_type = node_config['instance_type'] 140 | ec2.private_ip_address = private_ip_address 141 | ec2.associate_public_ip = true 142 | 143 | if node_config.key?('volume_size') 144 | # Size in GB 145 | # (untested) 146 | ec2.block_device_mapping = [{ 'DeviceName' => '/dev/sda1', 'Ebs.VolumeSize' => node_config['volume_size'] }] 147 | end 148 | 149 | override.ssh.username = AWS_CFG['ssh_username'] 150 | 151 | # Collect tags (can't be longer than 250 chars) 152 | ec2.tags = ({}) 153 | ec2.tags['Name'] = node_name[0..245] 154 | ec2.tags['Type'] = 'Hyperledger' 155 | ec2.tags['Version'] = VERSION 156 | ec2.tags['Fabric'] = node_config['fabric'].map { |f| f['role'] }.join(',')[0..245] 157 | end 158 | end 159 | 160 | def configure_consul(aws_node, node_name, private_ip_address) 161 | # Consul UI is NOT secured ! 
To do so, you must configure an ACL 162 | docker_consul_args = get_docker_consul_args(private_ip_address) 163 | consul_cmd = "agent -server -advertise #{private_ip_address} -client=#{private_ip_address}" 164 | aws_node.vm.provision 'docker' do |d| 165 | is_consul_master = private_ip_address == CONSUL_MASTER_IP 166 | d.pull_images "consul:#{CONSUL_VERSION}" 167 | d.post_install_provision 'shell', inline: get_docker_daemon_cmd(private_ip_address, !is_consul_master) 168 | if is_consul_master 169 | # Assume there is only one consul master 170 | d.run 'consul', args: docker_consul_args + " -h consul_master_#{node_name}", cmd: consul_cmd + ' -bootstrap -ui' 171 | aws_node.vm.provision 'shell', inline: "sleep 10; docker network create --driver overlay --subnet=192.168.100.0/24 #{NETWORK_NAME}" 172 | aws_node.vm.provision 'shell', inline: "publicIP=\"$(#{CMD_GET_PUBLIC_IP})\";echo \"Consul UI available at: $publicIP:8500\"" 173 | else 174 | d.run 'consul', args: docker_consul_args + " -h consul_#{node_name}", cmd: consul_cmd + " -join #{CONSUL_MASTER_IP}" 175 | end 176 | end 177 | end 178 | 179 | # Configures an orderer, ca or peer node 180 | # Params: 181 | # +aws_node+:: aws node configuration 182 | # +node_config+:: json node configuration 183 | def configure_fabric(aws_node, node_config) 184 | aws_node.vm.synced_folder "#{GEN_PATH}/channel/", '/vagrant/channel', type: 'rsync' 185 | aws_node.vm.synced_folder "#{GEN_PATH}/crypto-config/", '/vagrant/crypto-config', type: 'rsync' 186 | node_config['fabric'].each do |fabric| 187 | role = fabric['role'] 188 | docker_yaml = fabric['docker'] 189 | couchdb_port = fabric['couchdb_port'] 190 | aws_node.vm.provision 'docker' do |d| 191 | d.pull_images "hyperledger/fabric-#{role}:x86_64-#{HYPERLEDGER_VERSION}" 192 | if role == 'peer' 193 | # Download and run couchdb 194 | d.pull_images 'yeasy/hyperledger-fabric-couchdb' 195 | # TODO: In future, couchdb should not publish ports 196 | # but only expose them for increased security, 197
| # and peer containers should link to couchdb 198 | d.run 'yeasy/hyperledger-fabric-couchdb', args: "-e COUCHDB_PASSWORD=password -e COUCHDB_USER=admin -p #{couchdb_port}:5984" 199 | # Pre-load fabric image for chaincode instantiation 200 | d.pull_images "hyperledger/fabric-ccenv:x86_64-#{HYPERLEDGER_VERSION}" 201 | end 202 | end 203 | 204 | if role == 'peer' 205 | aws_node.vm.provision 'shell', inline: "docker tag hyperledger/fabric-ccenv:x86_64-#{HYPERLEDGER_VERSION} hyperledger/fabric-ccenv" 206 | # wait for couchdb 207 | aws_node.vm.provision 'shell', inline: wait_for_port('0.0.0.0', couchdb_port) 208 | end 209 | 210 | # Remove version tag on the image by tagging it 211 | aws_node.vm.provision 'shell', inline: "docker tag hyperledger/fabric-#{role}:x86_64-#{HYPERLEDGER_VERSION} hyperledger/fabric-#{role}" 212 | 213 | # wait until network is up 214 | aws_node.vm.provision 'shell', inline: WAIT_FOR_NETWORK 215 | 216 | docker_compose_file_name = "/vagrant/docker/#{docker_yaml}" 217 | aws_node.vm.provision :docker_compose, yml: docker_compose_file_name, options: '', compose_version: DOCKER_COMPOSE_VERSION 218 | end 219 | end 220 | 221 | def configure_ssh(aws_node) 222 | # Don't link directly to .ssh because it will hide the original .ssh folder, 223 | # which contains the authorized_keys files needed to log in via ssh. 224 | # Can't move files directly to folders that need elevated privileges, 225 | # because the file uploads by the file provisioner are done as the SSH 226 | # or PowerShell user. So we move them afterwards. 
227 | aws_node.vm.provision 'file', source: AWS_CFG['public_ssh_key_for_chaincode_repo'], destination: '~/id_rsa.pub' 228 | aws_node.vm.provision 'file', source: AWS_CFG['private_ssh_key_for_chaincode_repo'], destination: '~/id_rsa' 229 | aws_node.vm.provision 'shell', path: 'scripts/provisioning/configure_ssh.sh', args: [AWS_CFG['ssh_username']] 230 | end 231 | 232 | def sync_other_files(aws_node, node_config) 233 | return unless node_config.key?('files_to_sync') 234 | node_config['files_to_sync'].each do |f| 235 | aws_node.vm.provision 'file', source: f[0], destination: f[1] 236 | end 237 | end 238 | 239 | # start vagrant configuration 240 | Vagrant.configure(2) do |config| 241 | config.vm.box = 'dummy' 242 | config.vm.box_url = 'https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box' 243 | 244 | # loop through each of 'ec2s' key 245 | AWS_CFG['ec2s'].each do |node| 246 | node_name = 'HLF_' + node[0] 247 | node_config = node[1] # The node data 248 | private_ip_address = node_config['ip'] 249 | 250 | # Node specific configuration 251 | config.vm.define node_name do |aws_node| 252 | aws_node.nfs.functional = false 253 | # Prevent Vagrant from mounting the default /vagrant synced folder 254 | config.vm.synced_folder '.', '/vagrant', disabled: true 255 | configure_instance(aws_node, private_ip_address, node_name, node_config) 256 | configure_ssh(aws_node) 257 | sync_other_files(aws_node, node_config) 258 | aws_node.vm.provision 'shell', inline: 'apt-get -y update' 259 | aws_node.vm.provision 'install', type: :shell, path: 'scripts/provisioning/stopDocker.sh' 260 | aws_node.vm.synced_folder SHARED_PATH, '/vagrant/shared', type: 'rsync' 261 | aws_node.vm.synced_folder "#{GEN_PATH}/docker/", '/vagrant/docker', type: 'rsync' 262 | configure_consul(aws_node, node_name, private_ip_address) 263 | configure_fabric(aws_node, node_config) 264 | end 265 | # config.vm.define node_name 266 | end 267 | # aws_cfg['ec2s'] 268 | end 269 | 
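The Vagrantfile above reads its settings from the aws.json described earlier, and a missing key only surfaces as a failure mid-provisioning. A minimal sketch of a pre-flight check in Python, assuming the required keys documented in the README (this helper is illustrative and not part of the repository's scripts):

```python
# Sketch: sanity-check an aws.json before running `kuma-hf-network network-up`.
# The required keys mirror the fields documented in the README; the helper
# itself is illustrative and not part of the repository's scripts.
import json

REQUIRED_TOP_LEVEL = [
    "region", "availability_zone", "security_groups", "subnet_id",
    "keypair_name", "private_key_path", "ssh_username",
    "consul_master_ip", "chaincode_repository", "ec2s",
]
REQUIRED_PER_EC2 = ["ami_id", "instance_type", "fabric", "ip"]


def missing_keys(config):
    """Return dotted-path strings for every missing required key."""
    missing = [key for key in REQUIRED_TOP_LEVEL if key not in config]
    for name, ec2 in config.get("ec2s", {}).items():
        missing += [
            "ec2s.%s.%s" % (name, key)
            for key in REQUIRED_PER_EC2 if key not in ec2
        ]
    return missing


def check(path):
    """Load an aws.json file and report which required keys are absent."""
    with open(path) as handle:
        return missing_keys(json.load(handle))
```

Running `check("configuration/aws.json")` before provisioning would flag an incomplete configuration early instead of deep inside a `vagrant up` run.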
-------------------------------------------------------------------------------- /configuration/aws.json: -------------------------------------------------------------------------------- 1 | { 2 | "region": "eu-west-1", 3 | "availability_zone": "b", 4 | "security_groups": ["sg-01234567"], 5 | "subnet_id": "subnet-01234567", 6 | "keypair_name": "blockchain", 7 | "private_key_path": "~/.ssh/blockchain.pem", 8 | "ssh_username": "ubuntu", 9 | "consul_master_ip": "10.42.1.4", 10 | "chaincode_repository": "git@github.com:user/project.git", 11 | "chaincode_base_path": ".", 12 | "chaincode_build": true, 13 | "private_ssh_key_for_chaincode_repo": "~/.ssh/user", 14 | "public_ssh_key_for_chaincode_repo": "~/.ssh/user.pem", 15 | "ec2s": { 16 | "bar": { 17 | "ami_id": "ami-785db401", 18 | "instance_type": "t2.small", 19 | "fabric": [{ 20 | "role": "orderer", 21 | "docker": "orderer.org.bar.be.yaml" 22 | }, 23 | { 24 | "role": "peer", 25 | "docker": "barpeer.org.bar.be.yaml", 26 | "couchdb_port": "5984" 27 | }, 28 | { 29 | "role": "tools", 30 | "docker": "tools.org.bar.be.yaml" 31 | } 32 | ], 33 | "ip": "10.42.1.4", 34 | "volume_size": 15, 35 | "files_to_sync": { 36 | "source1": "destination1", 37 | "source2": "destination2" 38 | } 39 | }, 40 | "foo": { 41 | "ami_id": "ami-785db401", 42 | "instance_type": "t2.micro", 43 | "fabric": [{ 44 | "role": "peer", 45 | "docker": "foopeer.org.foo.be.yaml", 46 | "couchdb_port": "5984" 47 | }], 48 | "ip": "10.42.1.5" 49 | }, 50 | "root-example": { 51 | "ami_id": "ami-785db401", 52 | "instance_type": "t2.micro", 53 | "fabric": [{ 54 | "role": "ca", 55 | "docker": "root.example.be.yaml" 56 | }], 57 | "ip": "10.42.1.6" 58 | }, 59 | "inter-example": { 60 | "ami_id": "ami-785db401", 61 | "instance_type": "t2.micro", 62 | "fabric": [{ 63 | "role": "ca", 64 | "docker": "inter.example.be.yaml" 65 | }], 66 | "ip": "10.42.1.7" 67 | } 68 | } 69 | } 70 | --------------------------------------------------------------------------------
/configuration/crypto_config.yaml: -------------------------------------------------------------------------------- 1 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 2 | 3 | # Assumes port 7051 is first in the peers. 4 | # Assumes the organisation admin is first in the admins 5 | # Tools CLI uses the first orderer in the list for each org 6 | # This file makes heavy use of yaml pointers. 7 | # Note that you cannot use a pointer before it has been defined 8 | # (pointers can only be used below their definition) 9 | 10 | PREGEN_CAs: 11 | # This section defines certificate authorities that are generated apart 12 | # from the ones of hyperledger fabric. You can define an initial CA chain here. 13 | # All certificates created here and for the hyperledger fabric network are given an 14 | # organisation unit (OU), country (C), common name (CN) and organisation (ORG) created 15 | # automatically from the domain name, the following way: 16 | # (anything.)OU.ORG.C 17 | # CN = (anything.)OU.ORG.C 18 | # For example, the domain name foo.bar.example.be would have the following fields set in its certificate: 19 | # OU = bar 20 | # ORG = example 21 | # C = be 22 | # CN = foo.bar.example.be 23 | # IMPORTANT limitations: 24 | # --> The C field MUST consist of 2 letters. 25 | # --> Hyperledger fabric has issues with name resolving if the domain names are not all LOWERCASE 26 | # --> Not all fields have to be filled. They will be filled from right to left. 27 | # These certificate authorities will have all the following permissions: 28 | # > Digital Signature, Key Encipherment, Certificate Sign, CRL Sign 29 | # 30 | # In this config file, a certificate authority (CA) consists of 31 | # > Domain: The name of the CA 32 | # > Port: The port on which the hyperledger fabric ca should run 33 | # > Parent (optional): If provided, points to the structure of the parent certificate.
34 | # There are two types of CAs: 35 | # > Intermediate 36 | # An intermediate ca has a Parent field, which points to the structure 37 | # of its parent certificate. 38 | # > Root 39 | # A root ca does not have a Parent field 40 | # There could be multiple root CAs, and/or intermediate CAs 41 | # These certificates will have two folders, one for ca and one for tlsca. 42 | # They both will contain the same certificates. This is done for ease of programming. 43 | 44 | - ca: &ROOT_CA 45 | Domain: root.example.be 46 | Port: 7054 47 | 48 | - ca: &INTER_CA 49 | Parent: *ROOT_CA 50 | Domain: inter.example.be 51 | Port: 7054 52 | 53 | # This section defines the organisations that are part of the hyperledger fabric network 54 | # Each organisation must have the following fields set: 55 | # > Name: the name of the organisation 56 | # > Domain: The domain name of the organisation 57 | # > ca: contains the CA certificates 58 | # > Parent: pointer to the parent certificate structure, which can be any of the Pregen CAs, or another CA 59 | # > Domain: The domain of this certificate. This should be the domain of the organisation 60 | # > tlsca: contains the TLSCA certificates 61 | # > Parent: pointer to the parent certificate structure, which can be any of the Pregen CAs, or another TLSCA 62 | # > Domain: The domain of this certificate. 
This should be the domain of the organisation 63 | # > peers: (optional) contains a list of the peers of the organisation 64 | # > Hostname: The hostname of the peer 65 | # > Ports: (order of the ports is important) 66 | # - HOSTPORT:CONTAINER_PORT # Docker Port mapping for the CORE_PEER_ADDRESS of the peer (by default port 7051) 67 | # - HOSTPORT:CONTAINER_PORT # Docker Port mapping for the events of the peer (by default port 7053) 68 | # > CouchdbPort: Couchdb port, by default 5984 69 | # > Tools: (optional) add an optional tools CLI configuration for this peer, 70 | # to do this, give the channel name on which this CLI should run 71 | # You should have one Tools CLI per channel 72 | # > orderers: (optional) list of orderers for this organisation 73 | # - Hostname: orderer 74 | # > Peers: list of peers this orderer should take care of 75 | # - Hostname: peer's hostname 76 | # > Org: peer's organisation 77 | # Note: The domain of the peer will be built the following way: hostname.org 78 | # Which implies the hostname should be lower case too 79 | # > Port: port on which the orderer runs. Default is 7050 80 | # > AdminOrgs: list of organisation names that can make changes to this orderer. 81 | # This must be a subset of the consortiums 82 | # - name of the organisation 83 | # > Consortiums: List of references to consortiums 84 | # - &Consortium # Definition of a pointer (change pointer name to suit needs) 85 | # > Name: name of the consortium 86 | # > Organizations: list of organisation names 87 | # - name of the organisation 88 | # > Channels: the channels this orderer has access to 89 | # - channel name 90 | # > admins: list of administrators for this organisation.
91 | # - Hostname: name of the administrator 92 | # Note: The domain of the admin will be built the following way: hostname.org 93 | # Which implies the hostname should be lower case too 94 | # > users: list of users that can interact with the channel 95 | # - Hostname: name of the user 96 | # > Attributes: (optional) list of key-values that will be added as a json object 97 | # to the certificate's field 1.2.3.4.5.6.7.8.1, which can be read by 98 | # hyperledger fabric chaincode. 99 | # - key: value 100 | # Note: The domain of the user will be built the following way: hostname.org 101 | # Which implies the hostname should be lower case too 102 | # An MSP id will be created for that organisation, equal to its domain with dots replaced by - 103 | # Because an MSP ID cannot include dots (when adding policies, it messes things up) 104 | 105 | Orgs: 106 | - Name: &FooName FooOrg 107 | Domain: &FooDomain org.foo.be 108 | ca: &FOO_CA 109 | Parent: *INTER_CA 110 | Domain: *FooDomain 111 | tlsca: &FOO_TLSCA 112 | Parent: *INTER_CA 113 | Domain: *FooDomain 114 | peers: 115 | - Hostname: &FooPeer foopeer 116 | Ports: 117 | - 7051:7051 118 | - 7053:7053 119 | CouchdbPort: 5984 120 | admins: 121 | - Hostname: admin-foo 122 | users: 123 | - Hostname: user-1 124 | Attributes: 125 | age: "50" 126 | - Hostname: user-2 127 | Attributes: 128 | age: "30" 129 | - Hostname: user-3 130 | Attributes: 131 | age: "40" 132 | - Hostname: user-4 133 | Attributes: 134 | age: "30" 135 | 136 | - &BarOrg 137 | Name: &BarName BarOrg 138 | Domain: &BarDomain org.bar.be 139 | ca: &BAR_CA 140 | Parent: *FOO_CA 141 | Domain: *BarDomain 142 | tlsca: &BAR_TLSCA 143 | Parent: *FOO_TLSCA 144 | Domain: *BarDomain 145 | peers: 146 | - Hostname: &BarPeer barpeer 147 | Ports: 148 | - 7051:7051 149 | - 7053:7053 150 | CouchdbPort: 5984 151 | Tools: examplechannel 152 | orderers: 153 | - Hostname: orderer 154 | Peers: 155 | - Hostname: *BarPeer 156 | Org: *BarDomain 157 | - Hostname: *FooPeer 158 | Org: 
*FooDomain 159 | Port: 7050 160 | AdminOrgs: 161 | # Must be a subset of the consortiums 162 | - *BarName 163 | Consortiums: 164 | # Refers to consortium name 165 | - &ExampleConsortium 166 | Name: ExampleConsortium 167 | Organizations: 168 | - *BarName 169 | - *FooName 170 | Channels: 171 | - &examplechannel examplechannel 172 | admins: 173 | - Hostname: admin-bar 174 | 175 | 176 | # This section describes the channels to create. Each channel has the following fields: 177 | # - Name: the name of the channel 178 | # > Consortium: pointer to the consortium's structure 179 | # > Organizations: list of organisation names 180 | # - organisation name 181 | Channels: 182 | - Name: *examplechannel 183 | Consortium: *ExampleConsortium 184 | Organizations: 185 | - *BarName 186 | - *FooName 187 | 188 | # This section is for the development mode. It will create a simplified version 189 | # of the network. This should point to the structure of an organisation with at least a peer and an orderer. 190 | # It will create a simplified network with the first peer and the first orderer of that organisation 191 | Devmode: *BarOrg 192 | -------------------------------------------------------------------------------- /scripts/create_remote_scripts.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python2 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | """ 5 | create_remote_scripts.py crypto_config.yaml aws_config.json 6 | 7 | Requires GEN_PATH environment variable to be set, 8 | which points to the hyperledger fabric certificate 9 | structure created by cryptogen.py 10 | Creates scripts to remotely connect to the network, 11 | create and join all channels, and also update the chaincodes remotely 12 | """ 13 | 14 | import os 15 | import sys 16 | import json 17 | import yaml 18 | GEN_PATH = os.environ["GEN_PATH"] 19 | DEBUG = False 20 | 21 | def fail(msg): 22 | """Prints the error message and exits""" 23 
| sys.stderr.write('\033[91m' + msg + '\033[0m\n') 24 | exit(1) 25 | 26 | def call(script, *args): 27 | """Calls the given script using the args""" 28 | 29 | cmd = script + " " + " ".join(args) 30 | if DEBUG: 31 | print cmd 32 | if os.system(cmd) != 0: 33 | fail("\nERROR: An error occurred while executing " + cmd + ". See above for details.") 34 | 35 | 36 | if len(sys.argv) != 3: 37 | fail("Usage: create_remote_scripts crypto_config aws_config ") 38 | YAML_CONFIG = sys.argv[1] 39 | AWS_CONFIG = sys.argv[2] 40 | 41 | 42 | CREATE_AND_JOIN_CHANNELS_REMOTE_SCRIPT = GEN_PATH + '/scripts/create_and_join_channels_remote.sh' 43 | CHAINCODE_REMOTE_SCRIPT = GEN_PATH + '/scripts/update_remote_chaincodes.sh' 44 | 45 | 46 | SCRIPT_PREAMBLE = """#!/bin/bash 47 | # This file is auto-generated 48 | 49 | set -eu -o pipefail 50 | 51 | echo "Modifying /etc/hosts..." 52 | INSTALL_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 53 | $INSTALL_DIR/set_hosts_public.sh 54 | 55 | """ 56 | 57 | def create_remote_channel_script(CONF, AWS, channels_remote_script): 58 | channels_remote_script.write(SCRIPT_PREAMBLE) 59 | for org in CONF['Orgs']: 60 | if 'peers' in org and org['peers'] is not None: 61 | for peer in org['peers']: 62 | if 'Tools' in peer: 63 | channels_remote_script.write("cmd=\"docker exec -it tools.{0} bash -c\"\n".format(org['Domain'])) 64 | template = "ssh -oStrictHostKeyChecking=no -i {0} -t {1}@tools.{2} $cmd '\"/etc/hyperledger/configtx/create_and_join_channel.sh {3}\"'\n" 65 | channels_remote_script.write(template.format( 66 | AWS['private_key_path'], 67 | AWS['ssh_username'], 68 | org['Domain'], 69 | peer['Tools'] 70 | )) 71 | 72 | REMOTE_CHAINCODE_SCRIPT_PREAMBLE = """#!/bin/bash 73 | # This file is auto-generated 74 | 75 | set -eu -o pipefail 76 | echo "Make sure the channels have been created before running this script" 77 | echo "Make sure that set_hosts_public.sh has been run before running this script" 78 | 79 | 80 | """ 81 | 82 | def
create_remote_chaincode_script(CONF, AWS, chaincode_remote_script): 83 | chaincode_remote_script.write(REMOTE_CHAINCODE_SCRIPT_PREAMBLE) 84 | for org in CONF['Orgs']: 85 | if 'peers' in org and org['peers'] is not None: 86 | for peer in org['peers']: 87 | if 'Tools' in peer: 88 | chaincode_remote_script.write("cmd=\"docker exec -it tools.{0} bash -c\"\n".format(org['Domain'])) 89 | template = "ssh -oStrictHostKeyChecking=no -i {0} -t {1}@tools.{2} $cmd '\"/etc/hyperledger/chaincode_tools/update_chaincodes.py --repository {3} --chaincodeBasePath {4} {5}\"'\n" 90 | chaincode_remote_script.write(template.format( 91 | AWS['private_key_path'], 92 | AWS['ssh_username'], 93 | org['Domain'], 94 | AWS['chaincode_repository'], 95 | AWS['chaincode_base_path'], 96 | "--build" if AWS['chaincode_build'] else "" 97 | )) 98 | return 99 | raise Exception('No tools found in the configuration file') 100 | 101 | call('mkdir -p', GEN_PATH + "/scripts") 102 | 103 | with open(YAML_CONFIG, 'r') as stream: 104 | with open(AWS_CONFIG, 'r') as aws_stream: 105 | with open(CREATE_AND_JOIN_CHANNELS_REMOTE_SCRIPT, 'w') as remote_channels_script: 106 | with open(CHAINCODE_REMOTE_SCRIPT, 'w') as remote_chaincode_script: 107 | try: 108 | CONF = yaml.load(stream) 109 | AWS = json.load(aws_stream) 110 | 111 | create_remote_channel_script(CONF, AWS, remote_channels_script) 112 | create_remote_chaincode_script(CONF, AWS, remote_chaincode_script) 113 | except yaml.YAMLError as exc: 114 | print exc 115 | 116 | call("chmod +x", CREATE_AND_JOIN_CHANNELS_REMOTE_SCRIPT) 117 | call("chmod +x", CHAINCODE_REMOTE_SCRIPT) 118 | -------------------------------------------------------------------------------- /scripts/crypto_tools/cert_notes.txt: -------------------------------------------------------------------------------- 1 | This file is a summary of the kind of extensions involved for each type of certificate 2 | 3 | --------------- 4 | Admin CA 5 | --------------- 6 | X509v3 extensions: 7 | X509v3 Key Usage: critical 8 
Digital Signature 9 | X509v3 Basic Constraints: critical 10 | CA:FALSE 11 | X509v3 Authority Key Identifier: 12 | keyid:42:39:AA:0D:CD:76:DA:EE:B8:BA:0C:DA:70:18:51:D1:45:04:D3:1A:AD:1B:2D:DD:DB:AC:6A:57 13 | 14 | 15 | --------------- 16 | Admin TLS 17 | --------------- 18 | 19 | X509v3 extensions: 20 | X509v3 Key Usage: critical 21 | Digital Signature, Key Encipherment 22 | X509v3 Extended Key Usage: 23 | TLS Web Server Authentication, TLS Web Client Authentication 24 | X509v3 Basic Constraints: critical 25 | CA:FALSE 26 | X509v3 Authority Key Identifier: 27 | keyid:ED:3F:D8:23:93:E9:5F:C2:C4:75:AF:C1:13:C8:D2:C5:91:F7:45:D1:BA:BC:4D:6D:9C:CE:0A:1A 28 | 29 | --------------- 30 | Users CA 31 | --------------- 32 | 33 | X509v3 extensions: 34 | X509v3 Key Usage: critical 35 | Digital Signature 36 | X509v3 Basic Constraints: critical 37 | CA:FALSE 38 | X509v3 Authority Key Identifier: 39 | keyid:42:39:AA:0D:CD:76:DA:EE:B8:BA:0C:DA:70:18:51:D1:45:04:D3:1A:AD:1B:2D:DD:DB:AC:6A:57 40 | 41 | --------------- 42 | Users TLS 43 | --------------- 44 | 45 | X509v3 extensions: 46 | X509v3 Key Usage: critical 47 | Digital Signature, Key Encipherment 48 | X509v3 Extended Key Usage: 49 | TLS Web Server Authentication, TLS Web Client Authentication 50 | X509v3 Basic Constraints: critical 51 | CA:FALSE 52 | X509v3 Authority Key Identifier: 53 | keyid:ED:3F:D8:23:93:E9:5F:C2:C4:75:AF:C1:13:C8:D2:C5:91:F7:45:D1:BA:BC:4D:6D:9C:CE:0A:1A 54 | 55 | --------------- 56 | Root CA 57 | --------------- 58 | 59 | X509v3 extensions: 60 | X509v3 Key Usage: critical 61 | Digital Signature, Key Encipherment, Certificate Sign, CRL Sign 62 | X509v3 Extended Key Usage: 63 | 2.5.29.37.0 ----> anyExtendedKeyUsage 64 | X509v3 Basic Constraints: critical 65 | CA:TRUE 66 | X509v3 Subject Key Identifier: 67 | A0:60:6A:4A:86:0A:1E:31:C9:0A:23:78:8D:A6:F3:B6:B7:49:25:ED:0D:23:06:1A:F4:89:94:09:BA:46 68 | 69 | --------------- 70 | Intermediate CA 71 | --------------- 72 | 73 | - Orderer: 74 | X509v3 
extensions: 75 | X509v3 Key Usage: critical 76 | Digital Signature 77 | X509v3 Basic Constraints: critical 78 | CA:FALSE 79 | X509v3 Authority Key Identifier: 80 | keyid:A0:60:6A:4A:86:0A:1E:31:C9:0A:23:78:8D:A6:F3:B6:B7:49:25:ED:0D:23:06:1A:F4:89:94:09 81 | 82 | 83 | - Peers: 84 | X509v3 extensions: 85 | X509v3 Key Usage: critical 86 | Digital Signature 87 | X509v3 Basic Constraints: critical 88 | CA:FALSE 89 | X509v3 Authority Key Identifier: 90 | keyid:42:39:AA:0D:CD:76:DA:EE:B8:BA:0C:DA:70:18:51:D1:45:04:D3:1A:AD:1B:2D:DD:DB:AC:6A:57 91 | 92 | 93 | 94 | --------------- 95 | Root TLS 96 | --------------- 97 | 98 | X509v3 extensions: 99 | X509v3 Key Usage: critical 100 | Digital Signature, Key Encipherment, Certificate Sign, CRL Sign 101 | X509v3 Extended Key Usage: 102 | 2.5.29.37.0 ----> anyExtendedKeyUsage 103 | X509v3 Basic Constraints: critical 104 | CA:TRUE 105 | X509v3 Subject Key Identifier: 106 | 8D:21:86:55:6C:85:D5:15:E7:37:D0:C0:DA:8D:0D:76:72:78:5B:68:5C:B5:03:BC:B9:5E:53:DC:C2:79 107 | 108 | --------------- 109 | Intermediate TLS 110 | --------------- 111 | 112 | - Orderer: 113 | X509v3 extensions: 114 | X509v3 Key Usage: critical 115 | Digital Signature, Key Encipherment 116 | X509v3 Extended Key Usage: 117 | TLS Web Server Authentication, TLS Web Client Authentication 118 | X509v3 Basic Constraints: critical 119 | CA:FALSE 120 | X509v3 Authority Key Identifier: 121 | keyid:8D:21:86:55:6C:85:D5:15:E7:37:D0:C0:DA:8D:0D:76:72:78:5B:68:5C:B5:03:BC:B9:5E:53:DC 122 | X509v3 Subject Alternative Name: 123 | DNS:orderer.example.com, DNS:orderer 124 | 125 | 126 | - Peers: 127 | X509v3 extensions: 128 | X509v3 Key Usage: critical 129 | Digital Signature, Key Encipherment 130 | X509v3 Extended Key Usage: 131 | TLS Web Server Authentication, TLS Web Client Authentication 132 | X509v3 Basic Constraints: critical 133 | CA:FALSE 134 | X509v3 Authority Key Identifier: 135 | 
keyid:ED:3F:D8:23:93:E9:5F:C2:C4:75:AF:C1:13:C8:D2:C5:91:F7:45:D1:BA:BC:4D:6D:9C:CE:0A:1A:CC:16:8A:CB 136 | 137 | X509v3 Subject Alternative Name: 138 | DNS:peer0.org1.example.com, DNS:peer0 139 | 140 | 141 | 142 | - Peers: 143 | -------------------------------------------------------------------------------- /scripts/crypto_tools/create_intermediate_ca.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | set -eu -o pipefail 5 | 6 | 7 | if [ $# -ne 6 ]; 8 | then 9 | echo "" 10 | echo "Usage: " 11 | echo " createIntermediateCA COMMON_NAME FOLDER_OUT PARENT_CA TLSCA_OR_CA CAN_SIGN ATTR" 12 | echo " Creates an intermediate certificate signed by the PARENT_CA" 13 | echo " If TLSCA_OR_CA is equal to tlsca, will create a server certificate" 14 | echo " If TLSCA_OR_CA is equal to ca, will create a client certificate" 15 | echo " CAN_SIGN can be 'True' or 'False'" 16 | echo " Should the certificate be able to create intermediate certificates?"
17 | echo " The certificate will be saved under FOLDER_OUT/COMMON_NAME-cert.pem" 18 | echo " The key will be saved under FOLDER_OUT/COMMON_NAME-key.pem" 19 | echo " The certificate will contain fields parsed from the COMMON_NAME:" 20 | echo " .OU.O.C" 21 | echo " Where OU=organisation unit, O=organisation, C=2 letter country code" 22 | exit 1 23 | fi 24 | 25 | INSTALL_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 26 | TLS=$4 27 | CAN_SIGN=$5 28 | ATTR=$6 29 | PKI="$INSTALL_DIR"/pki 30 | export PKI 31 | 32 | rm -d -f -r $PKI 33 | mkdir -p $PKI 34 | touch $PKI/index.txt 35 | 36 | CN=$1 37 | FOLDER_OUT=$2 38 | 39 | CA_PARENT_KEY="$3-key.pem" 40 | CA_PARENT_CERT="$3-cert.pem" 41 | CA_KEY="$FOLDER_OUT/$CN-key.pem" 42 | echo "CA_KEY=$CA_KEY" 43 | CA_CERT="$FOLDER_OUT/$CN-cert.pem" 44 | CA_REQ="$FOLDER_OUT/$CN-req.pem" 45 | 46 | SUBJECTS=$($INSTALL_DIR/parse_domain_to_subjects.py $CN) 47 | 48 | $INSTALL_DIR/create_key.sh $CA_KEY 49 | 50 | mkdir -p $PKI 51 | touch $PKI/index.txt 52 | 53 | echo "Creating certificate signing request" 54 | openssl req -config $INSTALL_DIR/openssl.cnf \ 55 | -key $CA_KEY \ 56 | -new -sha256 \ 57 | -out $CA_REQ \ 58 | -nodes \ 59 | -subj $SUBJECTS \ 60 | -text 61 | echo "Done. Request saved in $CA_REQ" 62 | 63 | if [ $TLS = "tlsca" ] && [ $CAN_SIGN = "True" ]; 64 | then 65 | EXTENSIONS=v3_intermediate_tls 66 | elif [ $TLS = "tlsca" ] && [ $CAN_SIGN = "False" ]; then 67 | EXTENSIONS=v3_leaf_tls 68 | elif [ $TLS = "ca" ] && [ $CAN_SIGN = "True" ]; then 69 | EXTENSIONS=v3_intermediate_ca 70 | elif [ $TLS = "ca" ] && [ $CAN_SIGN = "False" ]; then 71 | EXTENSIONS=v3_leaf_ca 72 | else 73 | echo "ERROR: Unknown CAN_SIGN option $CAN_SIGN. Can only be 'True' or 'False'" >&2 74 | echo "OR Unknown TLS option $TLS. Can only be 'tlsca' or 'ca'." >&2 75 | exit 1 76 | fi 77 | 78 | if [ "$(uname -s)" == "Darwin" ]; then 79 | SED=gsed 80 | if ! command -v gsed > /dev/null 2>&1; then { 81 | echo "Installing Gsed..." 
82 | brew install gnu-sed 83 | echo "Done." 84 | }; fi 85 | elif [ "$(uname -s)" == "Linux" ]; then 86 | SED=sed 87 | fi 88 | 89 | # gsed """/\[ v3_leaf_ca \]/aAttributes = test""" openssl.cnf 90 | 91 | ATTR="'$ATTR'" 92 | printf "ATTR = $ATTR\n" 93 | # The policy set here (refer to openssl.cnf) defines which fields 94 | # of this intermediate certificate 95 | # must match with a potential certificate that is signed by 96 | # this intermediate certificate 97 | echo "Creating intermediate $TLS certificate..." 98 | # openssl ca -config $INSTALL_DIR/openssl.cnf \ 99 | openssl ca -config <($SED "/\[ $EXTENSIONS \]/a1.2.3.4.5.6.7.8.1=ASN1:UTF8String:$ATTR" $INSTALL_DIR/openssl.cnf) \ 100 | -create_serial \ 101 | -days 3650 -md sha256 -extensions $EXTENSIONS -policy policy_loose \ 102 | -in $CA_REQ\ 103 | -out $CA_CERT \ 104 | -outdir $FOLDER_OUT \ 105 | -keyfile $CA_PARENT_KEY \ 106 | -cert $CA_PARENT_CERT \ 107 | -subj $SUBJECTS \ 108 | -batch 109 | echo "Done. Intermediate certificate saved in $CA_CERT" 110 | 111 | pushd $FOLDER_OUT > /dev/null 112 | rm $(find . -type f -not -name "$CN*") 113 | popd > /dev/null 114 | 115 | echo "Deleting certificate request" 116 | rm $CA_REQ 117 | 118 | rm -d -f -r $PKI 119 | -------------------------------------------------------------------------------- /scripts/crypto_tools/create_key.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | set -eu -o pipefail 5 | 6 | 7 | if [ $# -ne 1 ]; 8 | then 9 | echo "" 10 | echo "Usage: " 11 | echo " createKey KEY_NAME" 12 | echo " Creates an elliptic curve key with name KEY_NAME" 13 | exit 1 14 | fi 15 | 16 | CA_KEY=$1 17 | 18 | INSTALL_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 19 | PKI="$INSTALL_DIR"/pki 20 | export PKI 21 | 22 | echo "Creating key..." 
23 | openssl ecparam -name prime256v1 -genkey -noout -out $CA_KEY.tmp 24 | echo "Converting key to pkcs8" 25 | openssl pkcs8 -topk8 -inform PEM -outform PEM -nocrypt -in $CA_KEY.tmp -out $CA_KEY 26 | rm $CA_KEY.tmp 27 | echo "Done. Key saved in $CA_KEY" 28 | 29 | rm -d -f -r $PKI 30 | -------------------------------------------------------------------------------- /scripts/crypto_tools/create_peer_env.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | set -eu -o pipefail 5 | 6 | if [ $# -ne 6 ]; 7 | then 8 | echo "" 9 | echo "Usage: " 10 | echo " create_peer_env PEER PEER_PORT ORG MSPID ADMIN ORG_FOLDER" 11 | echo " Creates a script that changes the environment to the one of the given peer" 12 | echo "" 13 | exit 1 14 | fi 15 | 16 | 17 | PEER=$1 18 | PEER_PORT=$2 19 | ORG=$3 20 | MSPID=$4 21 | ADMIN=$5 22 | ORG_FOLDER=$6 23 | 24 | mkdir -p $ORG_FOLDER/../tools 25 | FILE="$ORG_FOLDER/../tools/set_env.$PEER.$ORG.sh" 26 | 27 | 28 | echo """#!/bin/bash 29 | set -eu -o pipefail 30 | 31 | CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto-config/$ORG/users/$ADMIN.$ORG/msp 32 | CORE_PEER_ID=$PEER.$ORG 33 | CORE_PEER_LOCALMSPID=$MSPID 34 | CORE_PEER_ADDRESS=$PEER.$ORG:$PEER_PORT 35 | CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto-config/$ORG/peers/$PEER.$ORG/tlsca/tlsca.$PEER.$ORG-key.pem 36 | CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto-config/$ORG/peers/$PEER.$ORG/tlsca/tlsca.$PEER.$ORG-cert.pem 37 | CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto-config/$ORG/peers/$PEER.$ORG/tlsca.combined.$PEER.$ORG-cert.pem 38 | 39 | export CORE_PEER_MSPCONFIGPATH 40 | export CORE_PEER_ID 41 | export CORE_PEER_LOCALMSPID 42 | export CORE_PEER_ADDRESS 43 | export CORE_PEER_TLS_KEY_FILE 44 | export CORE_PEER_TLS_CERT_FILE 45 | export CORE_PEER_TLS_ROOTCERT_FILE 46 | """ > $FILE 47 | 48 | chmod +x $FILE 49 | 
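To make the shape of the generated environment file concrete, here is a hedged sketch of the CORE_PEER_* variables that create_peer_env.sh templates out, using invented example values (barpeer, org.bar.be and org-bar-be are illustrative, not taken from a real deployment):

```shell
# Example values only -- these mirror the template embedded in create_peer_env.sh.
PEER=barpeer
ORG=org.bar.be
MSPID=org-bar-be
PEER_PORT=7051

CORE_PEER_ID="$PEER.$ORG"
CORE_PEER_LOCALMSPID="$MSPID"
CORE_PEER_ADDRESS="$PEER.$ORG:$PEER_PORT"
export CORE_PEER_ID CORE_PEER_LOCALMSPID CORE_PEER_ADDRESS

echo "$CORE_PEER_ADDRESS"   # barpeer.org.bar.be:7051
```

Sourcing the real generated set_env.$PEER.$ORG.sh inside the tools container points the peer CLI at that peer, using the organisation admin's MSP and the peer's TLS material.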
-------------------------------------------------------------------------------- /scripts/crypto_tools/create_root_ca.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | set -eu -o pipefail 5 | 6 | 7 | if [ $# -ne 3 ]; 8 | then 9 | echo "" 10 | echo "Usage: " 11 | echo " createRootCa COMMON_NAME FOLDER_OUT TLS" 12 | echo " Creates a self-signed certificate" 13 | echo " The certificate will be saved under FOLDER_OUT/COMMON_NAME-cert.pem" 14 | echo " The key will be saved under FOLDER_OUT/COMMON_NAME-key.pem" 15 | echo " The certificate will contain fields parsed from the COMMON_NAME:" 16 | echo " .OU.O.C" 17 | echo " Where OU=organisation unit, O=organisation, C=2 letter country code" 18 | exit 1 19 | fi 20 | 21 | INSTALL_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 22 | PKI="$INSTALL_DIR"/pki 23 | export PKI 24 | 25 | rm -d -f -r $PKI 26 | mkdir -p $PKI 27 | touch $PKI/index.txt 28 | 29 | CN=$1 30 | FOLDER_OUT=$2 31 | TLS=$3 32 | 33 | CA_KEY="$FOLDER_OUT/$CN-key.pem" 34 | CA_CERT="$FOLDER_OUT/$CN-cert.pem" 35 | 36 | if [ $TLS = "tlsca" ]; 37 | then 38 | EXTENSIONS=v3_root_ca_or_tls 39 | elif [ $TLS = "ca" ]; then 40 | EXTENSIONS=v3_root_ca_or_tls 41 | else 42 | echo "ERROR: Unknown TLS option $TLS. Can only be 'tlsca' or 'ca'." >&2 43 | exit 1 44 | fi 45 | 46 | # parse the COMMON_NAME into its subject fields: 47 | # organisational unit, organisation and 2-letter country code 48 | SUBJECTS=$($INSTALL_DIR/parse_domain_to_subjects.py $CN) 49 | 50 | $INSTALL_DIR/create_key.sh $CA_KEY 51 | 52 | echo "Creating root certificate..." 53 | openssl req -config $INSTALL_DIR/openssl.cnf \ 54 | -key $CA_KEY \ 55 | -new -x509 -days 7300 -sha256 -extensions $EXTENSIONS \ 56 | -out $CA_CERT \ 57 | -nodes \ 58 | -batch \ 59 | -subj $SUBJECTS \ 60 | -text 61 | echo "Done. 
Root certificate saved in $CA_CERT" 62 | 63 | 64 | rm -d -f -r $PKI 65 | -------------------------------------------------------------------------------- /scripts/crypto_tools/cryptogen.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python2 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | """Cryptogen.py crypto-config.yaml override 5 | 6 | This module creates the certificate structure for hyperledger fabric, along with 7 | the docker compose files used to boot the network 8 | 9 | It takes as first argument a .yaml file containing the structure of the network. 10 | The second argument is a boolean which defines if the files should be overridden or not. 11 | It requires the environment variable GEN_PATH to be set, which is the path 12 | where the generated files will be saved. 13 | 14 | It will also generate channel artifacts if the organisation MSP changed 15 | """ 16 | 17 | 18 | import os 19 | import sys 20 | import json 21 | from argparse import ArgumentParser 22 | from argparse import Action 23 | import yaml 24 | 25 | DEBUG = False 26 | # Takes cryptoconfig.yaml as first argument 27 | # Path in which the scripts are 28 | PWD = os.path.dirname(__file__) 29 | # Path to where crypto-config and docker folders will be generated 30 | GEN_PATH = os.environ["GEN_PATH"] 31 | 32 | ORG_MSP_CHANGED = False 33 | 34 | def fail(msg): 35 | """Prints the error message and exits""" 36 | sys.stderr.write('\033[91m' + msg + '\033[0m\n') 37 | exit(1) 38 | 39 | PARSER = ArgumentParser(description='Creates the channel artifacts and the channel creation/joining scripts') 40 | PARSER.add_argument('crypto_config', type=str, help='cryptographic configuration of the network, as YAML file. See the provided example for details.') 41 | PARSER.add_argument('--configtxBase', '-c', help='path to configtx hyperledger fabric config file, without the organisations and profiles (they will be generated). 
Defaults to a simple orderer configuration.', action='store') 42 | PARSER.add_argument('--noOverride', help='Do not override existing files (default: override files). Useful if you want to add more users. If this is not set, will delete the generated folder and generate everything from scratch', action='store_true') 43 | 44 | user_attrs = {} 45 | class StoreDictKeyPair(Action): 46 | def __call__(self, parser, namespace, values, option_string=None): 47 | for kv in values.split(","): 48 | k,v = kv.split("=") 49 | user_attrs[k] = v 50 | setattr(namespace, self.dest, user_attrs) 51 | 52 | PARSER.add_argument('--user', help='if a single user certificate should be generated', action='store_true') 53 | PARSER.add_argument('--name', help='the name of the user', action='store') 54 | PARSER.add_argument('--org', help='the organisation of the user', action='store') 55 | PARSER.add_argument('--attributes', help='the attributes for the user certificate', dest="user_attrs", action=StoreDictKeyPair, metavar="KEY1=VAL1,KEY2=VAL2...") 56 | 57 | args = PARSER.parse_args() 58 | YAML_CONFIG = args.crypto_config 59 | OVERRIDE = not args.noOverride 60 | CONFIGTX_BASE = "--configtxBase {0}".format(args.configtxBase) if args.configtxBase else "" 61 | 62 | EXPLORER_DATA_PROD = { 63 | "network-config": { 64 | 65 | }, 66 | "host": "localhost", 67 | "port": "8080", 68 | "GOPATH": "../artifacts", 69 | "keyValueStore": "/tmp/fabric-client-kvs", 70 | "eventWaitTime": "30000", 71 | "pg": { 72 | "host": "mysql", 73 | "port": "3306", 74 | "database": "fabricexplorer", 75 | "username": "root", 76 | "passwd": "password" 77 | } 78 | } 79 | 80 | EXPLORER_DATA_DEV = { 81 | "network-config": { 82 | 83 | }, 84 | "host": "localhost", 85 | "port": "8080", 86 | "GOPATH": "../artifacts", 87 | "keyValueStore": "/tmp/fabric-client-kvs", 88 | "eventWaitTime": "30000", 89 | "pg": { 90 | "host": "mysql", 91 | "port": "3306", 92 | "database": "fabricexplorer", 93 | "username": "root", 94 | "passwd": "password" 95 | 
} 96 | } 97 | 98 | # ORG_MAP = { 99 | # 'currentId': 1 100 | # 'grb.vlaanderen.be': { 101 | # 'id': 1 102 | # 'peers': { 103 | # 'currentId': 1 104 | # 'peer1.vlaanderen.be': { 105 | # 'id': 1 106 | # } 107 | # } 108 | # } 109 | # } 110 | 111 | # used for fabric-explorer 112 | ORG_MAP = { 113 | 'currentId': 0 114 | } 115 | 116 | def call(script, *args): 117 | """Calls the given script using the args""" 118 | 119 | cmd = script + " " + " ".join(args) 120 | if DEBUG: 121 | print cmd 122 | if os.system(cmd) != 0: 123 | fail("\nERROR: An error occurred while executing " + cmd + ". See above for details.") 124 | 125 | def to_pwd(script): 126 | """Converts the script path to the correct path""" 127 | return PWD + "/" + script 128 | 129 | def to_domain(pre, org): 130 | """Appends pre to org to create a domain name""" 131 | return pre + "." + org["Domain"] 132 | 133 | def create_peer_docker(peer, org): 134 | """Creates a docker compose file for this peer""" 135 | 136 | # Create peer docker 137 | call( 138 | to_pwd("docker_peer.sh"), 139 | peer["Hostname"], 140 | org["Domain"], 141 | convert_to_msp_id(org["Domain"]), 142 | ",".join(peer["Ports"]), 143 | str(peer["CouchdbPort"]), 144 | ) 145 | 146 | if "Tools" in peer: 147 | call(to_pwd("docker_tools.sh"), org["Domain"], peer["Tools"]) 148 | 149 | call( 150 | to_pwd("create_peer_env.sh"), 151 | peer["Hostname"], 152 | peer["Ports"][0].split(":")[0], 153 | org["Domain"], 154 | convert_to_msp_id(org["Domain"]), 155 | org["admins"][0]["Hostname"], 156 | CRYPTO_CONFIG_PATH + org["Domain"] 157 | ) 158 | 159 | def get_org_nb(org): 160 | global ORG_MAP 161 | return 'org'+str(ORG_MAP[org['Domain']]['id']) 162 | 163 | def add_admin_to_explorer(org, admin, is_dev = False): 164 | """Adds an admin to this organisation, for fabric-explorer""" 165 | if 'peers' in org and org['peers']: 166 | org_nb = get_org_nb(org) 167 | global EXPLORER_DATA_PROD 168 | global EXPLORER_DATA_DEV 169 | 170 | explorer = EXPLORER_DATA_DEV if is_dev else
EXPLORER_DATA_PROD 171 | if(is_dev): 172 | org_nb = 'org1' 173 | 174 | if 'admin' not in explorer['network-config'][org_nb]: 175 | base = '/crypto-config/{0}/users/{1}.{0}/msp/'.format(org['Domain'], admin['Hostname']) 176 | explorer['network-config'][org_nb]['admin'] = { 177 | 'key': base+'keystore', 178 | 'cert': base+'signcerts' 179 | } 180 | explorer['network-config'][org_nb]['name'] = org['Domain'] 181 | explorer['network-config'][org_nb]['mspid'] = convert_to_msp_id(org["Domain"]) 182 | 183 | def add_peer_to_explorer(org, peer, is_dev = False): 184 | # Also add this to the explorer data 185 | if 'peers' in org and org['peers']: 186 | global ORG_MAP 187 | global EXPLORER_DATA_PROD 188 | global EXPLORER_DATA_DEV 189 | 190 | explorer = EXPLORER_DATA_DEV if is_dev else EXPLORER_DATA_PROD 191 | 192 | peerData = ORG_MAP[org['Domain']]['peers'] 193 | peerData['currentId'] += 1 194 | peerDomain = peer['Hostname'] + '.' + org['Domain'] 195 | peerData[peerDomain] = { 196 | 'id': peerData['currentId'] 197 | } 198 | org_nb = get_org_nb(org) 199 | peer_nb = 'peer' + str(peerData['currentId']) 200 | requestsPort = peer['Ports'][0].split(':')[0] 201 | eventsPort = peer['Ports'][1].split(':')[0] 202 | cacert = '/crypto-config/{0}/peers/{1}/tlsca.combined.{1}-cert.pem'.format(org['Domain'], peerDomain) 203 | if is_dev: 204 | org_nb = 'org1' 205 | peer_nb = 'peer1' 206 | peerDomain = 'peer' 207 | requestsPort = '7051' 208 | eventsPort = '7051' 209 | explorer['network-config'][org_nb][peer_nb] = { 210 | 'requests': 'grpc{2}://{0}:{1}'.format(peerDomain, requestsPort, ('' if is_dev else 's')), 211 | 'events': 'grpc{2}://{0}:{1}'.format(peerDomain, eventsPort, ('' if is_dev else 's')), 212 | 'server-hostname': peerDomain, 213 | 'tls_cacerts': cacert 214 | } 215 | 216 | def create_orderer_docker(orderer, org): 217 | """Creates a docker compose file for this orderer""" 218 | # Create orderers docker 219 | peer_cn = [] 220 | peer_orgs = [] 221 | for peer in orderer["Peers"]: 222 | 
peer_cn.append(peer["Hostname"]) 223 | peer_orgs.append(peer["Org"]) 224 | 225 | call( 226 | to_pwd("docker_orderer.sh"), 227 | orderer["Hostname"], 228 | org["Domain"], 229 | convert_to_msp_id(org["Domain"]), 230 | ",".join(peer_cn), 231 | ",".join(peer_orgs), 232 | str(orderer["Port"]) 233 | ) 234 | 235 | def create_docker(role, component, org): 236 | """Creates docker files for the given role""" 237 | if role == "peers": 238 | create_peer_docker(component, org) 239 | elif role == "orderers": 240 | create_orderer_docker(component, org) 241 | elif role == "ca": 242 | call(to_pwd("docker_ca.sh"), component['Domain'], str(component['Port'])) 243 | 244 | def create_msp_struct(msp_folder): 245 | """Creates the msp directory structure""" 246 | call("mkdir -p", msp_folder + "/admincerts") 247 | call("mkdir -p", msp_folder + "/cacerts") 248 | call("mkdir -p", msp_folder + "/intermediatecerts") 249 | call("mkdir -p", msp_folder + "/tlscacerts") 250 | call("mkdir -p", msp_folder + "/tlsintermediatecerts") 251 | 252 | def remove_cert(filename): 253 | """Removes suffix -cert.pem from filename""" 254 | return filename[:-9] # -cert.pem --> 9 chars 255 | 256 | def create_msp(org_domain, ca_paths, is_tls, subfolder, is_admin): 257 | """Creates and fills the MSP folder with certificates""" 258 | user_folder = CRYPTO_CONFIG_PATH + org_domain + "/" + subfolder 259 | msp_folder = user_folder + "/msp" 260 | create_msp_struct(msp_folder) 261 | org_admincerts = CRYPTO_CONFIG_PATH + org_domain + "/msp/admincerts/" 262 | is_not_org = "users" in subfolder or "peers" in subfolder or "orderers" in subfolder 263 | 264 | if "users" in subfolder and not is_tls: 265 | if is_admin: 266 | call("cp", ca_paths[-1], msp_folder + "/admincerts") 267 | call("cp -r", ca_paths[-1], org_admincerts) 268 | call(PWD + "/signingIdentity/generateSigningIdentity.sh", user_folder, convert_to_msp_id(org_domain)) 269 | 270 | if is_not_org and not is_tls: 271 | call("cp -r", org_admincerts, msp_folder + 
"/admincerts") 272 | 273 | inter_cas = ca_paths[1::] 274 | if is_not_org: 275 | inter_cas = inter_cas[:-1] # Remove last one 276 | 277 | if is_tls: 278 | call("cp", ca_paths[0], msp_folder + "/tlscacerts") 279 | for tlsca_path in inter_cas: 280 | call("cp", tlsca_path, msp_folder + "/tlsintermediatecerts") 281 | else: 282 | call("cp", ca_paths[0], msp_folder + "/cacerts") 283 | for inter_ca_path in inter_cas: 284 | call("cp", inter_ca_path, msp_folder + "/intermediatecerts") 285 | if is_not_org: 286 | call("mkdir -p", msp_folder + "/keystore") 287 | call("mkdir -p", msp_folder + "/signcerts") 288 | call("cp", ca_paths[-1], msp_folder + "/signcerts") 289 | call("cp", remove_cert(ca_paths[-1]) + "-key.pem", msp_folder + "/keystore") 290 | 291 | def copy_admincerts_to_admins(org): 292 | """ 293 | After the admins are created, the organisation's admincerts folder contains the 294 | certificates of all admins. But each admin's folder must also contain all the other 295 | admins' certificates. This copies the org admincerts into each admin's admincerts. 296 | """ 297 | for admin in org["admins"]: 298 | domain_path = CRYPTO_CONFIG_PATH + org["Domain"] 299 | org_admincerts = domain_path + "/msp/admincerts/" 300 | admin_admincerts = domain_path + "/users/" + admin["Hostname"] + "." + org['Domain'] + "/msp/admincerts" 301 | call("cp -r", org_admincerts, admin_admincerts) 302 | 303 | def create_all_msp(org): 304 | """Creates all msps for the org""" 305 | create_ca(org["ca"], is_tls=False, can_sign=True) 306 | create_ca(org["tlsca"], is_tls=True, can_sign=True) 307 | 308 | # "admins" must be first 309 | roles = ["admins", "users", "orderers", "peers"] 310 | for role in roles: 311 | if role == roles[1]: 312 | copy_admincerts_to_admins(org) 313 | if role in org and org[role]: 314 | is_admin = False 315 | for element in org[role]: 316 | create_docker(role, element, org) 317 | elem_domain = element["Hostname"] + "."
+ org['Domain'] 318 | subfolder = role 319 | attributes = "" 320 | if role == "admins": 321 | subfolder = "users" 322 | is_admin = True 323 | add_admin_to_explorer(org, element) 324 | elif role == "peers": 325 | add_peer_to_explorer(org, element) 326 | elif role == "users" and "Attributes" in element and element['Attributes']: 327 | attr_values = ["\\\""+k+"\\\":"+"\\\""+str(v)+"\\\"" for k, v in element["Attributes"].iteritems()] 328 | attributes = ",".join(attr_values) 329 | 330 | subfolder = subfolder + "/" + elem_domain 331 | create_ca({'Parent':org["ca"], 'Domain':org["Domain"]}, is_tls=False, can_sign=False, subfolder=subfolder, attributes=attributes, is_admin=is_admin) 332 | create_ca({'Parent':org["tlsca"], 'Domain':org["Domain"]}, is_tls=True, can_sign=False, subfolder=subfolder, attributes=attributes, is_admin=is_admin) 333 | 334 | def getSuffix(domain, subfolder): 335 | if subfolder == "": 336 | return domain 337 | return subfolder.split('/')[-1] 338 | 339 | def create_combined_ca(caconf, is_tls=False, subfolder=""): 340 | """Creates a combined certificate""" 341 | tls = "tls" if is_tls else "" 342 | # Fetch the list of certificate paths up to root 343 | 344 | output = CRYPTO_CONFIG_PATH + caconf["Domain"] + "/" + subfolder + "/" + tls + "ca.combined." + getSuffix(caconf["Domain"], subfolder) + "-cert.pem" 345 | paths = [] 346 | while "Parent" in caconf: 347 | ca_file = CRYPTO_CONFIG_PATH + caconf["Domain"] + "/" + subfolder + "/" + tls + "ca/" + tls + "ca." + getSuffix(caconf["Domain"], subfolder) + "-cert.pem" 348 | paths.append(ca_file) 349 | subfolder = "" # Override subfolder 350 | caconf = caconf["Parent"] 351 | 352 | # We're at the root, so add it to the array as well 353 | ca_file = CRYPTO_CONFIG_PATH + caconf["Domain"] + "/" + subfolder + "/" + tls + "ca/" + tls + "ca." 
+ getSuffix(caconf["Domain"], subfolder) + "-cert.pem" 354 | paths.append(ca_file) 355 | 356 | 357 | reversed_paths = list(reversed(paths)) # First element is root 358 | # Create the combined file: 359 | call( 360 | "cat", 361 | ' '.join(reversed_paths), 362 | "> " + output) 363 | 364 | return reversed_paths 365 | 366 | def create_ca(caconf, is_tls=False, subfolder="", docker=False, can_sign=False, attributes="", is_admin=False): 367 | """Creates a ca in caconf["Domain"]/subfolder/(tls)ca""" 368 | tls = "tls" if is_tls else "" 369 | ca_folder = CRYPTO_CONFIG_PATH + caconf["Domain"] + "/" + subfolder + "/" + tls + "ca" 370 | # Create ca root 371 | call("mkdir", "-p", ca_folder) 372 | global ORG_MSP_CHANGED 373 | 374 | if "Parent" in caconf: 375 | # Intermediate CA 376 | parent_domain = caconf["Parent"]["Domain"] 377 | parent_path = CRYPTO_CONFIG_PATH + parent_domain + "/" + tls + "ca/" + tls + "ca." + parent_domain 378 | attr_list = "\"{\\\"attrs\\\":{"+attributes+"}}\"" 379 | ca_cn = tls+"ca."+getSuffix(caconf["Domain"], subfolder) 380 | ca_filename = ca_folder + '/' + ca_cn + '-cert.pem' 381 | if OVERRIDE or not os.path.isfile(ca_filename): 382 | call( 383 | to_pwd("create_intermediate_ca.sh"), 384 | ca_cn, 385 | ca_folder, 386 | parent_path, 387 | tls+"ca", 388 | str(can_sign), 389 | attr_list) 390 | ORG_MSP_CHANGED = ORG_MSP_CHANGED or is_admin or subfolder == "" 391 | else: 392 | # Root CA 393 | ca_cn = tls+"ca."+caconf["Domain"] 394 | ca_filename = ca_folder + '/' + ca_cn + '-cert.pem' 395 | if OVERRIDE or not os.path.isfile(ca_filename): 396 | call(to_pwd("create_root_ca.sh"), ca_cn, ca_folder, tls+"ca") 397 | ORG_MSP_CHANGED = ORG_MSP_CHANGED or is_admin or subfolder == "" 398 | ca_paths = create_combined_ca(caconf, is_tls, subfolder) 399 | 400 | if docker: 401 | create_docker("ca", caconf, None) 402 | else: 403 | create_msp(caconf["Domain"], ca_paths, is_tls, subfolder, is_admin) 404 | 405 | def convert_to_msp_id(domain): 406 | return 
''.join(part.capitalize() for part in domain.split('.')) + "MSP" 407 | 408 | if args.user: 409 | print 'generating user {0} for {1}'.format(args.name, args.org) 410 | CRYPTO_CONFIG_PATH = GEN_PATH + "/crypto-config/" 411 | call("mkdir -p", CRYPTO_CONFIG_PATH) 412 | 413 | attr_values = ["\\\""+k+"\\\":"+"\\\""+str(v)+"\\\"" for k, v in args.user_attrs.iteritems()] 414 | attributes = ",".join(attr_values) 415 | 416 | org = None 417 | with open(YAML_CONFIG, 'r') as stream: 418 | try: 419 | CONF = yaml.load(stream) 420 | for orgConf in CONF['Orgs']: 421 | if orgConf['Name'] == args.org: 422 | org = orgConf 423 | except yaml.YAMLError as exc: 424 | print exc 425 | exit(1) 426 | 427 | if not org: 428 | print 'Unknown organisation {0}'.format(args.org) 429 | exit(1) 430 | else: 431 | subfolder = 'users/{0}.{1}'.format(args.name, org['Domain']) 432 | create_ca({'Parent':org["ca"], 'Domain':org["Domain"]}, is_tls=False, can_sign=False, subfolder=subfolder, attributes=attributes, is_admin=False) 433 | create_ca({'Parent':org["tlsca"], 'Domain':org["Domain"]}, is_tls=True, can_sign=False, subfolder=subfolder, attributes=attributes, is_admin=False) 434 | else: 435 | with open(YAML_CONFIG, 'r') as stream: 436 | try: 437 | CONF = yaml.load(stream) 438 | CRYPTO_CONFIG_PATH = GEN_PATH + "/crypto-config/" 439 | 440 | call("mkdir -p", CRYPTO_CONFIG_PATH) 441 | print CONF["PREGEN_CAs"] 442 | for init_ca in CONF["PREGEN_CAs"]: 443 | create_ca(init_ca["ca"], is_tls=False, docker=True, can_sign=True) 444 | ca_path = CRYPTO_CONFIG_PATH + init_ca["ca"]["Domain"] 445 | if OVERRIDE or not os.path.isdir(ca_path + '/tlsca'): 446 | call('rm -rfd ', ca_path + '/tlsca') 447 | call("cp -r", ca_path + "/ca", ca_path + "/tlsca") 448 | call("mv", ca_path + "/tlsca/ca." + init_ca["ca"]["Domain"] + "-cert.pem", 449 | ca_path + "/tlsca/tlsca." + init_ca["ca"]["Domain"] + "-cert.pem" 450 | ) 451 | 452 | call("mv", ca_path + "/tlsca/ca." 
+ init_ca["ca"]["Domain"] + "-key.pem", 453 | ca_path + "/tlsca/tlsca." + init_ca["ca"]["Domain"] + "-key.pem" 454 | ) 455 | create_combined_ca(init_ca["ca"], is_tls=True) 456 | 457 | for theOrg in CONF["Orgs"]: 458 | if 'peers' in theOrg and theOrg['peers']: 459 | ORG_MAP['currentId'] += 1 460 | ORG_MAP[theOrg["Domain"]] = { 461 | 'id': ORG_MAP['currentId'], 462 | 'peers': { 463 | 'currentId': 0 464 | } 465 | } 466 | EXPLORER_DATA_PROD['network-config'][get_org_nb(theOrg)] = {} 467 | create_all_msp(theOrg) 468 | 469 | if ORG_MSP_CHANGED: 470 | print 'Generating channel artifacts...' 471 | 472 | call(to_pwd('../fabric_artifacts/gen_configtx.py'), YAML_CONFIG, CONFIGTX_BASE) 473 | call('mkdir -p', GEN_PATH + '/scripts') 474 | with open(GEN_PATH + '/scripts/explorer-config.prod.json', 'w+') as stream: 475 | EXPLORER_DATA_PROD['channel'] = CONF['Channels'][0]['Name'] 476 | stream.write(json.dumps(EXPLORER_DATA_PROD,sort_keys=True,indent=2)) 477 | with open(GEN_PATH + '/scripts/explorer-config.dev.json', 'w+') as stream: 478 | dev_org = CONF['Devmode'] 479 | EXPLORER_DATA_DEV['channel'] = CONF['Channels'][0]['Name'] 480 | EXPLORER_DATA_DEV['network-config']['org1'] = {} 481 | add_admin_to_explorer(dev_org, dev_org['admins'][0], True) 482 | add_peer_to_explorer(dev_org, dev_org['peers'][0], True) 483 | 484 | stream.write(json.dumps(EXPLORER_DATA_DEV,sort_keys=True,indent=2)) 485 | else: 486 | print "Organisation MSP did not change, not regenerating channel artifacts" 487 | 488 | except yaml.YAMLError as exc: 489 | print exc 490 | -------------------------------------------------------------------------------- /scripts/crypto_tools/docker_ca.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | set -eu -o pipefail 5 | 6 | if [ $# -ne 2 ]; 7 | then 8 | echo "" 9 | echo "Usage: " 10 | echo " docker_ca COMMON_NAME PORT" 11 | echo " COMMON_NAME 
includes organisation domain" 12 | echo " This script creates a docker file to be able to run a hyperledger" 13 | echo " fabric CA" 14 | echo "" 15 | exit 1 16 | fi 17 | 18 | CN=$1 19 | FOLDER=$GEN_PATH/docker 20 | mkdir -p "$FOLDER" 21 | FILE="$FOLDER/$CN.yaml" 22 | PORT=$2 23 | echo """ 24 | version: '2' 25 | 26 | # This file has been auto-generated 27 | 28 | services: 29 | $CN: 30 | image: hyperledger/fabric-ca 31 | container_name: $CN 32 | ports: 33 | - $PORT:7054 34 | command: sh -c 'fabric-ca-server start -b admin:adminpw -d' 35 | logging: 36 | driver: \"json-file\" 37 | options: 38 | max-size: \"200k\" 39 | max-file: \"10\" 40 | environment: 41 | - FABRIC_CA_HOME=/etc/hyperledger/fabric-ca-server 42 | - FABRIC_CA_SERVER_TLS_ENABLED=true 43 | - FABRIC_CA_SERVER_CA_CERTFILE=/etc/hyperledger/fabric-ca-server-config/ca/ca.$CN-cert.pem 44 | - FABRIC_CA_SERVER_CA_KEYFILE=/etc/hyperledger/fabric-ca-server-config/ca/ca.$CN-key.pem 45 | - FABRIC_CA_SERVER_TLS_CERTFILE=/etc/hyperledger/fabric-ca-server-config/tlsca/tlsca.$CN-cert.pem 46 | - FABRIC_CA_SERVER_TLS_KEYFILE=/etc/hyperledger/fabric-ca-server-config/tlsca/tlsca.$CN-key.pem 47 | volumes: 48 | - /vagrant/crypto-config/$CN/:/etc/hyperledger/fabric-ca-server-config 49 | 50 | 51 | networks: 52 | default: 53 | external: 54 | name: hyperledgerNet 55 | """ > $FILE 56 | -------------------------------------------------------------------------------- /scripts/crypto_tools/docker_orderer.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | set -eu -o pipefail 5 | 6 | if [ $# -ne 6 ]; 7 | then 8 | echo "" 9 | echo "Usage: " 10 | echo " docker_orderer COMMON_NAME ORGANISATION MSPID PEERS ORGS PORT" 11 | echo " PEERS and ORGS are comma separated" 12 | echo " This script creates a docker file to be able to run a hyperledger" 13 | echo " fabric orderer" 14 | echo "" 15 | exit 1 16 | fi 17 | 18 | 
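The orderer script that follows (like docker_peer.sh below it) receives its comma-separated arguments and splits them into bash arrays with `tr` plus a `declare -a` compound assignment. A minimal, self-contained sketch of that idiom — the `PEERS_ARG` value is purely illustrative, not taken from a real invocation:

```shell
#!/bin/bash
set -eu -o pipefail

# Hypothetical example input; the real scripts receive this as "$4"/"$5".
PEERS_ARG="peer0,peer1"

# Same idiom as the scripts: replace commas with spaces, then let
# 'declare -a' re-parse the "( ... )" string as an array literal.
Peers=$(echo "$PEERS_ARG" | tr "," " ")
declare -a Peers="( $Peers )"

# Iterate the same way the scripts do.
for ((i=0;i<${#Peers[@]};i+=1)); do
    echo "peer $i: ${Peers[$i]}"
done
```

Note that the string-to-array `declare -a` form word-splits and glob-expands the value; `IFS=',' read -ra Peers <<< "$PEERS_ARG"` is a safer equivalent when elements may contain special characters.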
CN=$1 19 | ORG=$2 20 | MSPID=$3 21 | Peers=$(echo $4 | tr "," " ") # comma separated peers 22 | declare -a Peers="( $Peers )" 23 | Orgs=$(echo $5 | tr "," " ") # comma separated orgs 24 | Port=$6 25 | declare -a Orgs="( $Orgs )" 26 | FOLDER=$GEN_PATH/docker 27 | mkdir -p "$FOLDER" 28 | FILE="$FOLDER/$CN.$ORG.yaml" 29 | 30 | # echo "Peers=${Peers[@]}" 31 | # echo "Orgs=${Orgs[@]}" 32 | 33 | echo -n """ 34 | version: '2' 35 | 36 | # This file has been auto-generated 37 | 38 | services: 39 | $CN.$ORG: 40 | image: hyperledger/fabric-orderer 41 | container_name: $CN.$ORG 42 | ports: 43 | - $Port:7050 44 | working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderers 45 | command: orderer 46 | logging: 47 | driver: \"json-file\" 48 | options: 49 | max-size: \"200k\" 50 | max-file: \"10\" 51 | environment: 52 | - ORDERER_GENERAL_LOGLEVEL=debug 53 | - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0 54 | - ORDERER_GENERAL_GENESISMETHOD=file 55 | - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/$CN.$ORG.genesis.block 56 | - ORDERER_GENERAL_LOCALMSPID=$MSPID 57 | - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/crypto-config/orderer/msp 58 | - ORDERER_GENERAL_TLS_ENABLED=true 59 | - ORDERER_GENERAL_TLS_ROOTCAS=[/etc/hyperledger/crypto-config/orderer/tlsca/tlsca.$CN.$ORG-cert.pem""" > $FILE 60 | 61 | for ((i=0;i<${#Peers[@]};i+=1)) 62 | do 63 | echo -n ",/etc/hyperledger/crypto-config/${Peers[$i]}.${Orgs[$i]}/tlsca/tlsca.${Peers[$i]}.${Orgs[$i]}-cert.pem" >> $FILE 64 | done 65 | 66 | echo """] 67 | - ORDERER_GENERAL_TLS_PRIVATEKEY=/etc/hyperledger/crypto-config/orderer/tlsca/tlsca.$CN.$ORG-key.pem 68 | - ORDERER_GENERAL_TLS_CERTIFICATE=/etc/hyperledger/crypto-config/orderer/tlsca/tlsca.$CN.$ORG-cert.pem 69 | volumes: 70 | - /vagrant/channel/:/etc/hyperledger/configtx 71 | - /vagrant/crypto-config/$ORG/orderers/$CN.$ORG/:/etc/hyperledger/crypto-config/orderer""" >> $FILE 72 | 73 | for ((i=0;i<${#Peers[@]};i+=1)) 74 | do 75 | echo """ - 
/vagrant/crypto-config/${Orgs[$i]}/peers/${Peers[$i]}.${Orgs[$i]}/:/etc/hyperledger/crypto-config/${Peers[$i]}.${Orgs[$i]}""" >> $FILE 76 | done 77 | 78 | 79 | echo """ 80 | networks: 81 | default: 82 | external: 83 | name: hyperledgerNet 84 | """ >> $FILE 85 | -------------------------------------------------------------------------------- /scripts/crypto_tools/docker_peer.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | set -eu -o pipefail 5 | 6 | if [ $# -ne 5 ]; 7 | then 8 | echo "" 9 | echo "Usage: " 10 | echo " docker_peer COMMON_NAME ORGANISATION MSPID PORTS COUCHDBPORT" 11 | echo " PORTS is a comma separated list of the form HOSTPORT:CONTAINERPORT" 12 | echo " This script creates a docker file to be able to run a hyperledger" 13 | echo " fabric peer" 14 | echo "" 15 | exit 1 16 | fi 17 | 18 | CN=$1 19 | ORG=$2 20 | MSPID=$3 21 | PORTS=$(echo $4 | tr "," " ") # comma separated ports 22 | COUCHDBPORT=$5 23 | declare -a PORTS="( $PORTS )" 24 | FOLDER=$GEN_PATH/docker 25 | mkdir -p "$FOLDER" 26 | FILE="$FOLDER/$CN.$ORG.yaml" 27 | 28 | echo """ 29 | version: '2' 30 | 31 | # This file has been auto-generated 32 | 33 | services: 34 | $CN.$ORG: 35 | image: hyperledger/fabric-peer 36 | container_name: $CN.$ORG 37 | working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer 38 | command: peer node start 39 | logging: 40 | driver: \"json-file\" 41 | options: 42 | max-size: \"200k\" 43 | max-file: \"10\" 44 | environment: 45 | - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock 46 | # the following setting starts chaincode containers on the same 47 | # bridge network as the peers 48 | # https://docs.docker.com/compose/networking/ 49 | - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hyperledgerNet 50 | - CORE_LOGGING_LEVEL=DEBUG 51 | - CORE_LEDGER_STATE_STATEDATABASE=CouchDB 52 | - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=admin 53 | - 
CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=password 54 | - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=172.17.0.1:$COUCHDBPORT 55 | # The following setting skips the gossip handshake since we 56 | # are not doing mutual TLS 57 | - CORE_PEER_ENDORSER_ENABLED=true 58 | - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true 59 | - CORE_PEER_GOSSIP_USELEADERELECTION=true 60 | - CORE_PEER_GOSSIP_ORGLEADER=false 61 | - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/crypto-config/peer/msp 62 | - CORE_PEER_TLS_ENABLED=true 63 | - CORE_PEER_ID=$CN.$ORG 64 | - CORE_PEER_LOCALMSPID=$MSPID 65 | - CORE_PEER_ADDRESS=$CN.$ORG:7051 66 | - CORE_PEER_TLS_KEY_FILE=/etc/hyperledger/crypto-config/peer/tlsca/tlsca.$CN.$ORG-key.pem 67 | - CORE_PEER_TLS_CERT_FILE=/etc/hyperledger/crypto-config/peer/tlsca/tlsca.$CN.$ORG-cert.pem 68 | - CORE_PEER_TLS_ROOTCERT_FILE=/etc/hyperledger/crypto-config/peer/tlsca.combined.$CN.$ORG-cert.pem 69 | ports:""" > $FILE 70 | 71 | 72 | for ((i=0;i<${#PORTS[@]};i+=1)) 73 | do 74 | echo " - ${PORTS[$i]}" >> $FILE 75 | done 76 | 77 | echo """ volumes: 78 | - /var/run/:/host/var/run/ 79 | - /vagrant/crypto-config/$ORG/peers/$CN.$ORG/:/etc/hyperledger/crypto-config/peer 80 | 81 | networks: 82 | default: 83 | external: 84 | name: hyperledgerNet 85 | """ >> $FILE 86 | -------------------------------------------------------------------------------- /scripts/crypto_tools/docker_tools.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | set -eu -o pipefail 5 | 6 | if [ $# -ne 2 ]; 7 | then 8 | echo "" 9 | echo "Usage: " 10 | echo " docker_tools ORG CHANNEL" 11 | echo " This script creates a docker file to be able to run a hyperledger" 12 | echo " fabric tools CLI" 13 | echo "" 14 | exit 1 15 | fi 16 | 17 | ORG=$1 18 | # CHANNEL=$2 # Not used currently 19 | FOLDER=$GEN_PATH/docker 20 | mkdir -p "$FOLDER" 21 | FILE="$FOLDER/tools.$ORG.yaml" 22 | 23 | echo """ 24
| version: '2' 25 | 26 | # This file has been auto-generated 27 | 28 | services: 29 | tools.$ORG: 30 | image: hyperledger/fabric-tools 31 | tty: true 32 | working_dir: /etc/hyperledger 33 | container_name: tools.$ORG 34 | logging: 35 | driver: \"json-file\" 36 | options: 37 | max-size: \"200k\" 38 | max-file: \"10\" 39 | environment: 40 | - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=hyperledgerNet 41 | - GOPATH=/opt/gopath 42 | - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock 43 | - CORE_LOGGING_LEVEL=DEBUG 44 | - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true 45 | - CORE_PEER_GOSSIP_USELEADERELECTION=true 46 | - CORE_PEER_GOSSIP_ORGLEADER=false 47 | - CORE_PEER_TLS_ENABLED=true 48 | volumes: 49 | - /vagrant/channel/:/etc/hyperledger/configtx 50 | - /vagrant/crypto-config/:/etc/hyperledger/crypto-config/ 51 | - /vagrant/shared/:/etc/hyperledger/ 52 | - /vagrant/ssh/:/root/.ssh/ 53 | 54 | networks: 55 | default: 56 | external: 57 | name: hyperledgerNet 58 | 59 | """ > $FILE 60 | -------------------------------------------------------------------------------- /scripts/crypto_tools/openssl.cnf: -------------------------------------------------------------------------------- 1 | # OpenSSL root CA configuration file. 2 | 3 | # https://jamielinux.com/docs/openssl-certificate-authority/create-the-root-pair.html 4 | 5 | # Must set env variable PKI 6 | 7 | [ ca ] 8 | # `man ca` 9 | default_ca = CA_default 10 | 11 | [ CA_default ] 12 | # Directory and file locations. 13 | dir = $ENV::PKI 14 | certs = $dir/certs 15 | crl_dir = $dir/crl 16 | new_certs_dir = $dir/newcerts 17 | database = $dir/index.txt 18 | serial = $dir/serial 19 | RANDFILE = $dir/private/.rand 20 | 21 | # The root key and root certificate. 22 | private_key = $dir/private/ca.key.pem 23 | certificate = $dir/certs/ca.cert.pem 24 | 25 | # For certificate revocation lists. 
26 | crlnumber = $dir/crlnumber 27 | crl = $dir/crl/ca.crl.pem 28 | crl_extensions = crl_ext 29 | default_crl_days = 30 30 | 31 | # SHA-1 is deprecated, so use SHA-2 instead. 32 | default_md = sha256 33 | 34 | name_opt = ca_default 35 | cert_opt = ca_default 36 | default_days = 375 37 | preserve = no 38 | policy = policy_loose 39 | 40 | [ policy_strict ] 41 | # The root CA should only sign intermediate certificates that match. 42 | # See the POLICY FORMAT section of `man ca`. 43 | countryName = optional 44 | stateOrProvinceName = optional 45 | organizationName = optional 46 | organizationalUnitName = optional 47 | commonName = supplied 48 | emailAddress = optional 49 | 50 | [ policy_loose ] 51 | # Allow the intermediate CA to sign a more diverse range of certificates. 52 | # See the POLICY FORMAT section of the `ca` man page. 53 | countryName = optional 54 | stateOrProvinceName = optional 55 | localityName = optional 56 | organizationName = optional 57 | organizationalUnitName = optional 58 | commonName = supplied 59 | emailAddress = optional 60 | 61 | [ req ] 62 | # Options for the `req` tool (`man req`). 63 | default_bits = 2048 64 | distinguished_name = req_distinguished_name 65 | string_mask = utf8only 66 | 67 | # SHA-1 is deprecated, so use SHA-2 instead. 68 | default_md = sha256 69 | 70 | # Extension to add when the -x509 option is used. 71 | x509_extensions = v3_root_ca_or_tls 72 | 73 | [ req_distinguished_name ] 74 | # See . 75 | countryName = Country Name (2 letter code) 76 | stateOrProvinceName = State or Province Name 77 | localityName = Locality Name 78 | 0.organizationName = Organization Name 79 | organizationalUnitName = Organizational Unit Name 80 | commonName = Common Name 81 | emailAddress = Email Address 82 | 83 | # Optionally, specify some defaults. 
84 | countryName_default = BE 85 | stateOrProvinceName_default = Flanders 86 | localityName_default = Flanders 87 | 0.organizationName_default = 88 | organizationalUnitName_default = 89 | emailAddress_default = 90 | 91 | # [ new oids] 92 | # customAttributes=1.2.3.4.5.6.7.8.1 93 | 94 | [ v3_root_ca_or_tls ] 95 | subjectKeyIdentifier = hash 96 | basicConstraints = critical, CA:true 97 | keyUsage = critical, digitalSignature, cRLSign, keyCertSign, keyEncipherment 98 | extendedKeyUsage = 2.5.29.37.0 99 | # anyExtendedKeyUsage = 2.5.29.37.0 see http://javadoc.iaik.tugraz.at/iaik_jce/old/iaik/x509/extensions/ExtendedKeyUsage.html 100 | 101 | [ v3_leaf_ca ] 102 | authorityKeyIdentifier = keyid:always,issuer 103 | basicConstraints = critical, CA:false 104 | subjectKeyIdentifier = hash 105 | keyUsage = critical, digitalSignature 106 | 107 | [ v3_leaf_tls ] 108 | authorityKeyIdentifier = keyid:always,issuer 109 | basicConstraints = critical, CA:false 110 | subjectKeyIdentifier = hash 111 | keyUsage = critical, digitalSignature, keyEncipherment 112 | extendedKeyUsage = serverAuth, clientAuth 113 | 114 | 115 | [ v3_intermediate_ca ] 116 | authorityKeyIdentifier = keyid:always,issuer 117 | basicConstraints = critical, CA:true 118 | subjectKeyIdentifier = hash 119 | extendedKeyUsage = 2.5.29.37.0 120 | keyUsage = critical, digitalSignature, cRLSign, keyCertSign, keyEncipherment 121 | 122 | [ v3_intermediate_tls ] 123 | authorityKeyIdentifier = keyid:always,issuer 124 | basicConstraints = critical, CA:true 125 | subjectKeyIdentifier = hash 126 | keyUsage = critical, digitalSignature, cRLSign, keyCertSign, keyEncipherment 127 | extendedKeyUsage = serverAuth, clientAuth 128 | -------------------------------------------------------------------------------- /scripts/crypto_tools/parse_domain_to_subjects.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python2 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3
| 4 | """parse_domain_to_subjects.py domain_name 5 | 6 | Splits the domain name into a CN, an OU, a C and an ORG 7 | """ 8 | 9 | 10 | import sys 11 | 12 | CN = sys.argv[1] 13 | VALUES = CN.split(".") 14 | 15 | if CN.startswith("ca."): 16 | CN = CN[3:] 17 | elif CN.startswith("tlsca."): 18 | CN = CN[6:] 19 | 20 | C = VALUES[-1] 21 | if len(C) != 2: 22 | raise ValueError('The country code ' + C + ' can only have 2 characters') 23 | ORG = VALUES[-2] 24 | OU = VALUES[-3] 25 | 26 | SUBJECTS = "/CN="+CN+"/OU="+OU+"/C="+C+"/O="+ORG 27 | 28 | print SUBJECTS 29 | -------------------------------------------------------------------------------- /scripts/crypto_tools/signingIdentity/generateSigningIdentity.js: -------------------------------------------------------------------------------- 1 | /* 2 | This script creates the signing identity from the certificate of a user. 3 | */ 4 | 5 | 6 | var path = require('path'); 7 | var fs = require('fs'); 8 | 9 | var jsrsa = require('jsrsasign'); 10 | var X509 = jsrsa.X509; 11 | 12 | var Hash = require(path.join(__dirname, 'hash.js')); 13 | 14 | var args = process.argv.slice(2); 15 | 16 | if (args.length > 0 && typeof args[0] === 'string') { 17 | 18 | 19 | fs.readFile(args[0], 'utf8', function (err, data) { 20 | if (err) { 21 | 22 | return console.log(err); 23 | } 24 | 25 | var result = data.match(/(-----BEGIN CERTIFICATE-----)([^]+)(-----END CERTIFICATE-----)/mgi); 26 | 27 | if (result != null && result.length > 0) { 28 | var c = new X509(); 29 | 30 | c.readCertPEM(result[0]); 31 | 32 | var pubKey = c.getPublicKey(); 33 | 34 | var pointToOctet = function(key) { 35 | var byteLen = (key.ecparams.keylen + 7) >> 3; 36 | let buff = Buffer.allocUnsafe(1 + 2 * byteLen); 37 | buff[0] = 4; // uncompressed point (https://www.security-audit.com/files/x9-62-09-20-98.pdf, section 4.3.6) 38 | var xyhex = key.getPublicKeyXYHex(); 39 | var xBuffer = Buffer.from(xyhex.x, 'hex'); 40 | var yBuffer = Buffer.from(xyhex.y, 'hex'); 41 | //logger.debug('ECDSA 
curve param X: %s', xBuffer.toString('hex')); 42 | //logger.debug('ECDSA curve param Y: %s', yBuffer.toString('hex')); 43 | xBuffer.copy(buff, 1 + byteLen - xBuffer.length); 44 | yBuffer.copy(buff, 1 + 2 * byteLen - yBuffer.length); 45 | return buff; 46 | }; 47 | 48 | var buff = pointToOctet(pubKey); 49 | 50 | 51 | var hash = Hash.sha2_256(buff); 52 | console.log(hash); 53 | } 54 | }); 55 | 56 | } 57 | -------------------------------------------------------------------------------- /scripts/crypto_tools/signingIdentity/generateSigningIdentity.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | # This script creates a signing identity from the public certificate of a user. 5 | # It takes as first argument the path to the folder that contains all the user's 6 | # certificates, generated by cryptogen.py 7 | # It outputs a signing identity json file, along with the private and public key 8 | # named after the signingIdentity and saved in generated/hfc-key-store 9 | 10 | set -eu -o pipefail 11 | 12 | USER_FOLDER=$1 13 | MSPID=$2 14 | 15 | USER_FULL_NAME=$(basename $USER_FOLDER) 16 | CERT_FILE=$USER_FOLDER/ca/ca.$USER_FULL_NAME-cert.pem 17 | KEY_FILE=$USER_FOLDER/ca/ca.$USER_FULL_NAME-key.pem 18 | USER_BASE_NAME=$( cut -d '.' -f 1 <<< "$USER_FULL_NAME" ) 19 | ORG=$( cut -d '.' -f 2- <<< "$USER_FULL_NAME" ) 20 | 21 | INSTALL_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 22 | 23 | if [ ! 
-d "$INSTALL_DIR/node_modules" ]; then 24 | pushd $INSTALL_DIR 25 | npm install 26 | popd 27 | fi 28 | 29 | signingIdentity=$(node "$INSTALL_DIR/generateSigningIdentity.js" "$CERT_FILE") 30 | 31 | hfcKeyStorePath=$GEN_PATH/hfc-key-store 32 | mkdir -p $hfcKeyStorePath 33 | 34 | # Get only certificate part from the file and replace new lines by literal \n 35 | CERT_NO_NEWLINES=$(printf "%s\\\n" $(openssl x509 -in $CERT_FILE | tr " " "*") | tr "*" " ") 36 | 37 | echo """{ 38 | \"name\": \"$USER_BASE_NAME\", 39 | \"mspid\": \"${MSPID}\", 40 | \"roles\": null, 41 | \"affiliation\": \"\", 42 | \"enrollmentSecret\": \"\", 43 | \"enrollment\": { 44 | \"signingIdentity\": \"$signingIdentity\", 45 | \"identity\": { 46 | \"certificate\": \"$CERT_NO_NEWLINES\" 47 | } 48 | } 49 | } 50 | """ > $hfcKeyStorePath/$USER_BASE_NAME 51 | 52 | 53 | cp $CERT_FILE $hfcKeyStorePath/$signingIdentity-pub 54 | cp $KEY_FILE $hfcKeyStorePath/$signingIdentity-priv 55 | -------------------------------------------------------------------------------- /scripts/crypto_tools/signingIdentity/hash.js: -------------------------------------------------------------------------------- 1 | /** 2 | * Copyright 2016 IBM All Rights Reserved. 3 | * 4 | * Licensed under the Apache License, Version 2.0 (the "License"); 5 | * you may not use this file except in compliance with the License. 6 | * You may obtain a copy of the License at 7 | * 8 | * http://www.apache.org/licenses/LICENSE-2.0 9 | * 10 | * Unless required by applicable law or agreed to in writing, software 11 | * distributed under the License is distributed on an "AS IS" BASIS, 12 | * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 13 | * See the License for the specific language governing permissions and 14 | * limitations under the License. 15 | */ 16 | 17 | /** 18 | * Implement hash primitives. 
19 | */ 20 | var sjcl = require('sjcl'); 21 | var sjcl_codec = require('sjcl-codec'); 22 | var jssha = require('jssha'); 23 | 24 | var jsSHA3 = require('js-sha3'); 25 | var sha3_256 = jsSHA3.sha3_256; 26 | var sha3_384 = jsSHA3.sha3_384; 27 | var shake_256 = jsSHA3.shake_256; 28 | var crypto = require('crypto'); 29 | 30 | hash_sha2_256 = function (hash) { 31 | 32 | if (hash) { 33 | this._hash = hash._hash; 34 | } 35 | else { 36 | this.reset(); 37 | } 38 | }; 39 | 40 | hash_sha2_256.hash = function (data) { 41 | return (new sjcl.hash.sha256()).update(data).finalize(); 42 | }; 43 | 44 | hash_sha2_256.prototype = { 45 | 46 | blockSize: 512, 47 | 48 | reset: function () { 49 | this._hash = new sjcl.hash.sha256(); 50 | this._hash.reset(); 51 | }, 52 | 53 | update: function (data) { 54 | this._hash.update(data); 55 | return this; 56 | }, 57 | 58 | finalize: function () { 59 | var hash = this._hash.finalize(); 60 | this.reset(); 61 | return hash; 62 | 63 | } 64 | }; 65 | 66 | 67 | var hash_sha3_256 = function (hash) { 68 | 69 | if (hash) { 70 | this._hash = hash._hash; 71 | } 72 | else { 73 | this.reset(); 74 | } 75 | }; 76 | 77 | hash_sha3_256.hash = function (data) { 78 | var hashBits = sjcl.codec.hex.toBits(sha3_256(bitsToBytes(data))); 79 | return hashBits; 80 | }; 81 | 82 | hash_sha3_256.prototype = { 83 | 84 | blockSize: 1088, 85 | 86 | reset: function () { 87 | this._hash = sha3_256.create(); 88 | }, 89 | 90 | update: function (data) { 91 | this._hash.update(bitsToBytes(data)); 92 | return this; 93 | }, 94 | 95 | finalize: function () { 96 | var hash = this._hash.hex(); 97 | var hashBits = sjcl.codec.hex.toBits(hash); 98 | this.reset(); 99 | return hashBits; 100 | 101 | } 102 | }; 103 | 104 | var hash_sha3_384 = function (hash) { 105 | 106 | if (hash) { 107 | this._hash = hash._hash; 108 | } 109 | else { 110 | this.reset(); 111 | } 112 | }; 113 | 114 | hash_sha3_384.hash = function (data) { 115 | var hashBits = sjcl.codec.hex.toBits(sha3_384(bitsToBytes(data))); 
116 | return hashBits; 117 | }; 118 | 119 | hash_sha3_384.prototype = { 120 | 121 | blockSize: 832, 122 | 123 | reset: function () { 124 | this._hash = sha3_384.create(); 125 | }, 126 | 127 | update: function (data) { 128 | this._hash.update(bitsToBytes(data)); 129 | return this; 130 | }, 131 | 132 | finalize: function () { 133 | var hash = this._hash.hex(); 134 | var hashBits = sjcl.codec.hex.toBits(hash); 135 | //debug('finalize hashBits:\n',hashBits) 136 | this.reset(); 137 | return hashBits; 138 | 139 | } 140 | }; 141 | 142 | /* 143 | * Convert from a bitArray to bytes (using SJCL's codec) 144 | * @param {bits} a bitArray to convert from 145 | * @return {bytes} the bytes converted from the bitArray 146 | */ 147 | bitsToBytes = function (bits) { 148 | return sjcl_codec.bytes.fromBits(bits); 149 | }; 150 | 151 | /* 152 | * Convert from bytes to a bitArray (using SJCL's codec) 153 | * @param {bytes} a bytes to convert from 154 | * @return {bitArray} the bitArray converted from bytes 155 | */ 156 | bytesToBits = function (bytes) { 157 | return sjcl_codec.bytes.toBits(bytes); 158 | }; 159 | 160 | exports.hash_sha3_256 = hash_sha3_256; 161 | exports.hash_sha3_384 = hash_sha3_384; 162 | exports.hash_sha2_256 = hash_sha2_256; 163 | exports.sha2_256 = function (data) { 164 | return sjcl_codec.hex.fromBits(new sjcl.hash.sha256().update(bytesToBits(Buffer.from(data, 'utf8'))).finalize()); 165 | }; 166 | exports.sha3_256 = sha3_256; 167 | exports.sha2_384 = function (data){ 168 | var sha384 = crypto.createHash('sha384'); 169 | return sha384.update(data).digest('hex'); 170 | }; 171 | exports.sha3_384 = sha3_384; 172 | exports.shake_256 = shake_256; 173 | -------------------------------------------------------------------------------- /scripts/crypto_tools/signingIdentity/package-lock.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "foo", 3 | "version": "1.0.0", 4 | "lockfileVersion": 1, 5 | "requires": true, 6 | 
"dependencies": { 7 | "crypto": { 8 | "version": "1.0.1", 9 | "resolved": "https://registry.npmjs.org/crypto/-/crypto-1.0.1.tgz", 10 | "integrity": "sha512-VxBKmeNcqQdiUQUW2Tzq0t377b54N2bMtXO/qiLa+6eRRmmC4qT3D4OnTGoT/U6O9aklQ/jTwbOtRMTTY8G0Ig==" 11 | }, 12 | "js-sha3": { 13 | "version": "0.6.1", 14 | "resolved": "https://registry.npmjs.org/js-sha3/-/js-sha3-0.6.1.tgz", 15 | "integrity": "sha1-W4n3enR3Z5h39YxKB1JAk0sflcA=" 16 | }, 17 | "jsrsasign": { 18 | "version": "8.0.4", 19 | "resolved": "https://registry.npmjs.org/jsrsasign/-/jsrsasign-8.0.4.tgz", 20 | "integrity": "sha1-P3uCOIRPEmtJanVW7J9LUR+V+GE=" 21 | }, 22 | "jssha": { 23 | "version": "2.3.1", 24 | "resolved": "https://registry.npmjs.org/jssha/-/jssha-2.3.1.tgz", 25 | "integrity": "sha1-FHshJTaQNcpLL30hDcU58Amz3po=" 26 | }, 27 | "sjcl": { 28 | "version": "1.0.7", 29 | "resolved": "https://registry.npmjs.org/sjcl/-/sjcl-1.0.7.tgz", 30 | "integrity": "sha1-MrNlpQ3Ju6JriLo8nfjqNCF9n0U=" 31 | }, 32 | "sjcl-codec": { 33 | "version": "0.1.1", 34 | "resolved": "https://registry.npmjs.org/sjcl-codec/-/sjcl-codec-0.1.1.tgz", 35 | "integrity": "sha1-aub9c2xOBb/9pLxhoYox9X12HZU=" 36 | } 37 | } 38 | } 39 | -------------------------------------------------------------------------------- /scripts/crypto_tools/signingIdentity/package.json: -------------------------------------------------------------------------------- 1 | { 2 | "name": "foo", 3 | "version": "1.0.0", 4 | "description": "", 5 | "main": "foo.js", 6 | "scripts": { 7 | "test": "echo \"Error: no test specified\" && exit 1" 8 | }, 9 | "author": "", 10 | "license": "ISC", 11 | "dependencies": { 12 | "crypto": "^1.0.1", 13 | "js-sha3": "^0.6.1", 14 | "jsrsasign": "^8.0.4", 15 | "jssha": "^2.3.1", 16 | "sjcl": "^1.0.7", 17 | "sjcl-codec": "^0.1.1" 18 | } 19 | } 20 | -------------------------------------------------------------------------------- /scripts/fabric/core.yaml: -------------------------------------------------------------------------------- 1 | # 
Copyright IBM Corp. All Rights Reserved. 2 | # 3 | # SPDX-License-Identifier: Apache-2.0 4 | # 5 | 6 | ############################################################################### 7 | # 8 | # LOGGING section 9 | # 10 | ############################################################################### 11 | logging: 12 | 13 | # Default logging levels are specified here. 14 | 15 | # Valid logging levels are case-insensitive strings chosen from 16 | 17 | # CRITICAL | ERROR | WARNING | NOTICE | INFO | DEBUG 18 | 19 | # The overall default logging level can be specified in various ways, 20 | # listed below from strongest to weakest: 21 | # 22 | # 1. The --logging-level= command line option overrides all other 23 | # default specifications. 24 | # 25 | # 2. The environment variable CORE_LOGGING_LEVEL otherwise applies to 26 | # all peer commands if defined as a non-empty string. 27 | # 28 | # 3. The value of peer that directly follows in this file. It can also 29 | # be set via the environment variable CORE_LOGGING_PEER. 30 | # 31 | # If no overall default level is provided via any of the above methods, 32 | # the peer will default to INFO (the value of defaultLevel in 33 | # common/flogging/logging.go) 34 | 35 | # Default for all modules running within the scope of a peer. 36 | # Note: this value is only used when --logging-level or CORE_LOGGING_LEVEL 37 | # are not set 38 | peer: info 39 | 40 | # The overall default values mentioned above can be overridden for the 41 | # specific components listed in the override section below. 42 | 43 | # Override levels for various peer modules. These levels will be 44 | # applied once the peer has completely started. They are applied at this 45 | # time in order to be sure every logger has been registered with the 46 | # logging package. 47 | # Note: the modules listed below are the only acceptable modules at this 48 | # time. 
49 | cauthdsl: warning 50 | gossip: warning 51 | ledger: info 52 | msp: warning 53 | policies: warning 54 | grpc: error 55 | 56 | # Message format for the peer logs 57 | format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}' 58 | 59 | ############################################################################### 60 | # 61 | # Peer section 62 | # 63 | ############################################################################### 64 | peer: 65 | 66 | # The Peer id is used for identifying this Peer instance. 67 | id: jdoe 68 | 69 | # The networkId allows for logical separation of networks 70 | networkId: dev 71 | 72 | # The address on the local network interface this Peer will listen on. 73 | # By default, it will listen on all network interfaces 74 | listenAddress: 0.0.0.0:7051 75 | 76 | # The endpoint this peer uses to listen for inbound chaincode connections. 77 | # 78 | # The chaincode connection does not support TLS-mutual auth. Having a 79 | # separate listener for the chaincode helps isolate the chaincode 80 | # environment for enhanced security, so it is strongly recommended to 81 | # uncomment chaincodeListenAddress and specify a protected endpoint. 82 | # 83 | # If chaincodeListenAddress is not configured or equals the listenAddress, 84 | # listenAddress will be used for chaincode connections. This is not 85 | # recommended for production. 86 | # 87 | # chaincodeListenAddress: 127.0.0.1:7052 88 | 89 | # When used as peer config, this represents the endpoint to other peers 90 | # in the same organization. For peers in other organizations, see 91 | # gossip.externalEndpoint for more info. 92 | # When used as CLI config, this means the peer's endpoint to interact with 93 | address: 0.0.0.0:7051 94 | 95 | # Whether the Peer should programmatically determine its address 96 | # This case is useful for docker containers.
97 | addressAutoDetect: false 98 | 99 | # Setting for runtime.GOMAXPROCS(n). If n < 1, it does not change the 100 | # current setting 101 | gomaxprocs: -1 102 | 103 | # Gossip related configuration 104 | gossip: 105 | # Bootstrap set to initialize gossip with. 106 | # This is a list of other peers that this peer reaches out to at startup. 107 | # Important: The endpoints here have to be endpoints of peers in the same 108 | # organization, because the peer would refuse connecting to these endpoints 109 | # unless they are in the same organization as the peer. 110 | bootstrap: 127.0.0.1:7051 111 | 112 | # NOTE: orgLeader and useLeaderElection parameters are mutually exclusive. 113 | # Setting both to true would result in the termination of the peer 114 | # since this is undefined state. If the peers are configured with 115 | # useLeaderElection=false, make sure there is at least 1 peer in the 116 | # organization that has orgLeader set to true. 117 | 118 | # Defines whether the peer will initialize the dynamic algorithm for 119 | # "leader" selection, where leader is the peer to establish 120 | # connection with ordering service and use delivery protocol 121 | # to pull ledger blocks from ordering service. It is recommended to 122 | # use leader election for large networks of peers. 123 | useLeaderElection: false 124 | # Statically defines peer to be an organization "leader", 125 | # where this means that current peer will maintain connection 126 | # with ordering service and disseminate block across peers in 127 | # its own organization 128 | orgLeader: true 129 | 130 | # Overrides the endpoint that the peer publishes to peers 131 | # in its organization.
For peers in foreign organizations 132 | # see 'externalEndpoint' 133 | endpoint: 134 | # Maximum count of blocks stored in memory 135 | maxBlockCountToStore: 100 136 | # Max time between consecutive message pushes(unit: millisecond) 137 | maxPropagationBurstLatency: 10ms 138 | # Max number of messages stored until a push is triggered to remote peers 139 | maxPropagationBurstSize: 10 140 | # Number of times a message is pushed to remote peers 141 | propagateIterations: 1 142 | # Number of peers selected to push messages to 143 | propagatePeerNum: 3 144 | # Determines frequency of pull phases(unit: second) 145 | pullInterval: 4s 146 | # Number of peers to pull from 147 | pullPeerNum: 3 148 | # Determines frequency of pulling state info messages from peers(unit: second) 149 | requestStateInfoInterval: 4s 150 | # Determines frequency of pushing state info messages to peers(unit: second) 151 | publishStateInfoInterval: 4s 152 | # Maximum time a stateInfo message is kept until expired 153 | stateInfoRetentionInterval: 154 | # Time from startup certificates are included in Alive messages(unit: second) 155 | publishCertPeriod: 10s 156 | # Should we skip verifying block messages or not (currently not in use) 157 | skipBlockVerification: false 158 | # Dial timeout(unit: second) 159 | dialTimeout: 3s 160 | # Connection timeout(unit: second) 161 | connTimeout: 2s 162 | # Buffer size of received messages 163 | recvBuffSize: 20 164 | # Buffer size of sending messages 165 | sendBuffSize: 200 166 | # Time to wait before pull engine processes incoming digests (unit: second) 167 | digestWaitTime: 1s 168 | # Time to wait before pull engine removes incoming nonce (unit: second) 169 | requestWaitTime: 1s 170 | # Time to wait before pull engine ends pull (unit: second) 171 | responseWaitTime: 2s 172 | # Alive check interval(unit: second) 173 | aliveTimeInterval: 5s 174 | # Alive expiration timeout(unit: second) 175 | aliveExpirationTimeout: 25s 176 | # Reconnect interval(unit: second) 
177 | reconnectInterval: 25s 178 | # This is an endpoint that is published to peers outside of the organization. 179 | # If this isn't set, the peer will not be known to other organizations. 180 | externalEndpoint: 181 | # Leader election service configuration 182 | election: 183 | # Longest time peer waits for stable membership during leader election startup (unit: second) 184 | startupGracePeriod: 15s 185 | # Interval gossip membership samples to check its stability (unit: second) 186 | membershipSampleInterval: 1s 187 | # Time passes since last declaration message before peer decides to perform leader election (unit: second) 188 | leaderAliveThreshold: 10s 189 | # Time between peer sends propose message and declares itself as a leader (sends declaration message) (unit: second) 190 | leaderElectionDuration: 5s 191 | 192 | # EventHub related configuration 193 | events: 194 | # The address that the Event service will be enabled on the peer 195 | address: 0.0.0.0:7053 196 | 197 | # total number of events that could be buffered without blocking send 198 | buffersize: 100 199 | 200 | # timeout duration for producer to send an event. 201 | # if < 0, if buffer full, unblocks immediately and does not send 202 | # if 0, if buffer full, will block and guarantee the event will be sent out 203 | # if > 0, if buffer full, blocks till timeout 204 | timeout: 10ms 205 | 206 | # TLS Settings 207 | # Note that peer-chaincode connections through chaincodeListenAddress is 208 | # not mutual TLS auth. See comments on chaincodeListenAddress for more info 209 | tls: 210 | enabled: false 211 | cert: 212 | file: tls/server.crt 213 | key: 214 | file: tls/server.key 215 | rootcert: 216 | file: tls/ca.crt 217 | 218 | # The server name used to verify the hostname returned by TLS handshake 219 | serverhostoverride: 220 | 221 | # Path on the file system where peer will store data (eg ledger).
This 222 | # location must be access control protected to prevent unintended 223 | # modification that might corrupt the peer operations. 224 | fileSystemPath: /var/hyperledger/production 225 | 226 | # BCCSP (Blockchain crypto provider): Select which crypto implementation or 227 | # library to use 228 | BCCSP: 229 | Default: SW 230 | SW: 231 | # TODO: The default Hash and Security level needs refactoring to be 232 | # fully configurable. Changing these defaults requires coordination 233 | # SHA2 is hardcoded in several places, not only BCCSP 234 | Hash: SHA2 235 | Security: 256 236 | # Location of Key Store 237 | FileKeyStore: 238 | # If "", defaults to 'mspConfigPath'/keystore 239 | # TODO: Ensure this is read with fabric/core/config.GetPath() once ready 240 | KeyStore: 241 | 242 | # Path on the file system where peer will find MSP local configurations 243 | mspConfigPath: msp 244 | 245 | # Identifier of the local MSP 246 | # ----!!!!IMPORTANT!!!-!!!IMPORTANT!!!-!!!IMPORTANT!!!!---- 247 | # Deployers need to change the value of the localMspId string. 248 | # In particular, the name of the local MSP ID of a peer needs 249 | # to match the name of one of the MSPs in each of the channels 250 | # that this peer is a member of. Otherwise this peer's messages 251 | # will not be identified as valid by other nodes. 252 | localMspId: DEFAULT 253 | 254 | # Used with Go profiling tools only in a non-production environment. In 255 | # production, it should be disabled (eg enabled: false) 256 | profile: 257 | enabled: false 258 | listenAddress: 0.0.0.0:6060 259 | 260 | ############################################################################### 261 | # 262 | # VM section 263 | # 264 | ############################################################################### 265 | vm: 266 | 267 | # Endpoint of the vm management system.
For docker can be one of the following in general 268 | # unix:///var/run/docker.sock 269 | # http://localhost:2375 270 | # https://localhost:2376 271 | endpoint: unix:///var/run/docker.sock 272 | 273 | # settings for docker vms 274 | docker: 275 | tls: 276 | enabled: false 277 | ca: 278 | file: docker/ca.crt 279 | cert: 280 | file: docker/tls.crt 281 | key: 282 | file: docker/tls.key 283 | 284 | # Enables/disables the standard out/err from chaincode containers for 285 | # debugging purposes 286 | attachStdout: false 287 | 288 | # Parameters on creating docker container. 289 | # Container may be efficiently created using ipam & dns-server for cluster 290 | # NetworkMode - sets the networking mode for the container. Supported 291 | # standard values are: `host`(default),`bridge`,`ipvlan`,`none`. 292 | # Dns - a list of DNS servers for the container to use. 293 | # Note: `Privileged` `Binds` `Links` and `PortBindings` properties of 294 | # Docker Host Config are not supported and will not be used if set. 295 | # LogConfig - sets the logging driver (Type) and related options 296 | # (Config) for Docker. For more info, 297 | # https://docs.docker.com/engine/admin/logging/overview/ 298 | # Note: Set LogConfig using Environment Variables is not supported. 
299 | hostConfig: 300 | NetworkMode: host 301 | Dns: 302 | # - 192.168.0.1 303 | LogConfig: 304 | Type: json-file 305 | Config: 306 | max-size: "50m" 307 | max-file: "5" 308 | Memory: 2147483648 309 | 310 | ############################################################################### 311 | # 312 | # Chaincode section 313 | # 314 | ############################################################################### 315 | chaincode: 316 | # This is used if chaincode endpoint resolution fails with the 317 | # chaincodeListenAddress property 318 | peerAddress: 319 | 320 | # The id is used by the Chaincode stub to register the executing Chaincode 321 | # ID with the Peer and is generally supplied through ENV variables 322 | # the `path` form of ID is provided when installing the chaincode. 323 | # The `name` is used for all other requests and can be any string. 324 | id: 325 | path: 326 | name: 327 | 328 | # Generic builder environment, suitable for most chaincode types 329 | builder: $(DOCKER_NS)/fabric-ccenv:$(ARCH)-$(PROJECT_VERSION) 330 | 331 | golang: 332 | # golang will never need more than baseos 333 | runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION) 334 | 335 | car: 336 | # car may need more facilities (JVM, etc) in the future as the catalog 337 | # of platforms is expanded. For now, we can just use baseos 338 | runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION) 339 | 340 | java: 341 | # This is an image based on java:openjdk-8 with additional compiler 342 | # tools added for java shim layer packaging. 343 | # This image is packed with shim layer libraries that are necessary 344 | # for Java chaincode runtime. 345 | Dockerfile: | 346 | from $(DOCKER_NS)/fabric-javaenv:$(ARCH)-$(PROJECT_VERSION) 347 | 348 | # Timeout duration for starting up a container and waiting for Register 349 | # to come through.
1sec should be plenty for chaincode unit tests 350 | startuptimeout: 300s 351 | 352 | # Timeout duration for Invoke and Init calls to prevent runaway. 353 | # This timeout is used by all chaincodes in all the channels, including 354 | # system chaincodes. 355 | # Note that during Invoke, if the image is not available (e.g. being 356 | # cleaned up when in development environment), the peer will automatically 357 | # build the image, which might take more time. In production environment, 358 | # the chaincode image is unlikely to be deleted, so the timeout could be 359 | # reduced accordingly. 360 | executetimeout: 30s 361 | 362 | # There are 2 modes: "dev" and "net". 363 | # In dev mode, user runs the chaincode after starting peer from 364 | # command line on local machine. 365 | # In net mode, peer will run chaincode in a docker container. 366 | mode: net 367 | 368 | # keepalive in seconds. In situations where the communication goes through a 369 | # proxy that does not support keep-alive, this parameter will maintain connection 370 | # between peer and chaincode. 371 | # A value <= 0 turns keepalive off 372 | keepalive: 0 373 | 374 | # system chaincodes whitelist.
To add system chaincode "myscc" to the 375 | # whitelist, add "myscc: enable" to the list below, and register in 376 | # chaincode/importsysccs.go 377 | system: 378 | cscc: enable 379 | lscc: enable 380 | escc: enable 381 | vscc: enable 382 | qscc: enable 383 | 384 | # Logging section for the chaincode container 385 | logging: 386 | # Default level for all loggers within the chaincode container 387 | level: info 388 | # Override default level for the 'shim' module 389 | shim: warning 390 | # Format for the chaincode container logs 391 | format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}' 392 | 393 | ############################################################################### 394 | # 395 | # Ledger section - ledger configuration encompasses both the blockchain 396 | # and the state 397 | # 398 | ############################################################################### 399 | ledger: 400 | 401 | blockchain: 402 | 403 | state: 404 | # stateDatabase - options are "goleveldb", "CouchDB" 405 | # goleveldb - default state database stored in goleveldb. 406 | # CouchDB - store state database in CouchDB 407 | stateDatabase: goleveldb 408 | couchDBConfig: 409 | # It is recommended to run CouchDB on the same server as the peer, and 410 | # not map the CouchDB container port to a server port in docker-compose. 411 | # Otherwise proper security must be provided on the connection between 412 | # CouchDB client (on the peer) and server. 413 | couchDBAddress: 127.0.0.1:5984 414 | # This username must have read and write authority on CouchDB 415 | username: 416 | # The password is recommended to be passed as an environment variable 417 | # during start up (eg LEDGER_COUCHDBCONFIG_PASSWORD). 418 | # If it is stored here, the file must be access control protected 419 | # to prevent unintended users from discovering the password.
420 | password: 421 | # Number of retries for CouchDB errors 422 | maxRetries: 3 423 | # Number of retries for CouchDB errors during peer startup 424 | maxRetriesOnStartup: 10 425 | # CouchDB request timeout (unit: duration, e.g. 20s) 426 | requestTimeout: 35s 427 | # Limit on the number of records to return per query 428 | queryLimit: 10000 429 | 430 | 431 | history: 432 | # enableHistoryDatabase - options are true or false 433 | # Indicates if the history of key updates should be stored. 434 | # All history 'index' will be stored in goleveldb, regardless of whether 435 | # CouchDB or an alternate database is used for the state. 436 | enableHistoryDatabase: true 437 | -------------------------------------------------------------------------------- /scripts/fabric/core_1.1-preview.yaml: -------------------------------------------------------------------------------- 1 | # Copyright IBM Corp. All Rights Reserved. 2 | # 3 | # SPDX-License-Identifier: Apache-2.0 4 | # 5 | 6 | ############################################################################### 7 | # 8 | # LOGGING section 9 | # 10 | ############################################################################### 11 | logging: 12 | 13 | # Default logging levels are specified here. 14 | 15 | # Valid logging levels are case-insensitive strings chosen from 16 | 17 | # CRITICAL | ERROR | WARNING | NOTICE | INFO | DEBUG 18 | 19 | # The overall default logging level can be specified in various ways, 20 | # listed below from strongest to weakest: 21 | # 22 | # 1. The --logging-level= command line option overrides all other 23 | # default specifications. 24 | # 25 | # 2. The environment variable CORE_LOGGING_LEVEL otherwise applies to 26 | # all peer commands if defined as a non-empty string. 27 | # 28 | # 3. The value of `level` that directly follows in this file.
29 | # 30 | # If no overall default level is provided via any of the above methods, 31 | # the peer will default to INFO (the value of defaultLevel in 32 | # common/flogging/logging.go) 33 | 34 | # Default for all modules running within the scope of a peer. 35 | # Note: this value is only used when --logging-level or CORE_LOGGING_LEVEL 36 | # are not set 37 | level: info 38 | 39 | # The overall default values mentioned above can be overridden for the 40 | # specific components listed in the override section below. 41 | 42 | # Override levels for various peer modules. These levels will be 43 | # applied once the peer has completely started. They are applied at this 44 | # time in order to be sure every logger has been registered with the 45 | # logging package. 46 | # Note: the modules listed below are the only acceptable modules at this 47 | # time. 48 | cauthdsl: warning 49 | gossip: warning 50 | grpc: error 51 | ledger: info 52 | msp: warning 53 | policies: warning 54 | peer: 55 | gossip: warning 56 | 57 | # Message format for the peer logs 58 | format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}' 59 | 60 | ############################################################################### 61 | # 62 | # Peer section 63 | # 64 | ############################################################################### 65 | peer: 66 | 67 | # The Peer id is used for identifying this Peer instance. 68 | id: jdoe 69 | 70 | # The networkId allows for logical separation of networks 71 | networkId: dev 72 | 73 | # The address on the local network interface this Peer will listen on. 74 | # By default, it will listen on all network interfaces 75 | listenAddress: 0.0.0.0:7051 76 | 77 | # The endpoint this peer uses to listen for inbound chaincode connections.
78 | # If this is commented-out, the listen address is selected to be 79 | # the peer's address (see below) with port 7052 80 | # chaincodeListenAddress: 0.0.0.0:7052 81 | 82 | # When used as peer config, this represents the endpoint to other peers 83 | # in the same organization. For peers in other organizations, see 84 | # gossip.externalEndpoint for more info. 85 | # When used as CLI config, this means the peer's endpoint to interact with 86 | address: 0.0.0.0:7051 87 | 88 | # Whether the Peer should programmatically determine its address 89 | # This case is useful for docker containers. 90 | addressAutoDetect: false 91 | 92 | # Setting for runtime.GOMAXPROCS(n). If n < 1, it does not change the 93 | # current setting 94 | gomaxprocs: -1 95 | 96 | # Gossip related configuration 97 | gossip: 98 | # Bootstrap set to initialize gossip with. 99 | # This is a list of other peers that this peer reaches out to at startup. 100 | # Important: The endpoints here have to be endpoints of peers in the same 101 | # organization, because the peer would refuse connecting to these endpoints 102 | # unless they are in the same organization as the peer. 103 | bootstrap: 127.0.0.1:7051 104 | 105 | # NOTE: orgLeader and useLeaderElection parameters are mutually exclusive. 106 | # Setting both to true would result in the termination of the peer 107 | # since this is undefined state. If the peers are configured with 108 | # useLeaderElection=false, make sure there is at least 1 peer in the 109 | # organization that has orgLeader set to true. 110 | 111 | # Defines whether the peer will initialize the dynamic algorithm for 112 | # "leader" selection, where leader is the peer to establish 113 | # connection with ordering service and use delivery protocol 114 | # to pull ledger blocks from ordering service. It is recommended to 115 | # use leader election for large networks of peers.
116 | useLeaderElection: true 117 | # Statically defines peer to be an organization "leader", 118 | # where this means that current peer will maintain connection 119 | # with ordering service and disseminate block across peers in 120 | # its own organization 121 | orgLeader: false 122 | 123 | # Overrides the endpoint that the peer publishes to peers 124 | # in its organization. For peers in foreign organizations 125 | # see 'externalEndpoint' 126 | endpoint: 127 | # Maximum count of blocks stored in memory 128 | maxBlockCountToStore: 100 129 | # Max time between consecutive message pushes(unit: millisecond) 130 | maxPropagationBurstLatency: 10ms 131 | # Max number of messages stored until a push is triggered to remote peers 132 | maxPropagationBurstSize: 10 133 | # Number of times a message is pushed to remote peers 134 | propagateIterations: 1 135 | # Number of peers selected to push messages to 136 | propagatePeerNum: 3 137 | # Determines frequency of pull phases(unit: second) 138 | pullInterval: 4s 139 | # Number of peers to pull from 140 | pullPeerNum: 3 141 | # Determines frequency of pulling state info messages from peers(unit: second) 142 | requestStateInfoInterval: 4s 143 | # Determines frequency of pushing state info messages to peers(unit: second) 144 | publishStateInfoInterval: 4s 145 | # Maximum time a stateInfo message is kept until expired 146 | stateInfoRetentionInterval: 147 | # Time from startup certificates are included in Alive messages(unit: second) 148 | publishCertPeriod: 10s 149 | # Should we skip verifying block messages or not (currently not in use) 150 | skipBlockVerification: false 151 | # Dial timeout(unit: second) 152 | dialTimeout: 3s 153 | # Connection timeout(unit: second) 154 | connTimeout: 2s 155 | # Buffer size of received messages 156 | recvBuffSize: 20 157 | # Buffer size of sending messages 158 | sendBuffSize: 200 159 | # Time to wait before pull engine processes incoming digests (unit: second) 160 | digestWaitTime: 1s 161 | # 
Time to wait before pull engine removes incoming nonce (unit: second) 162 | requestWaitTime: 1s 163 | # Time to wait before pull engine ends pull (unit: second) 164 | responseWaitTime: 2s 165 | # Alive check interval(unit: second) 166 | aliveTimeInterval: 5s 167 | # Alive expiration timeout(unit: second) 168 | aliveExpirationTimeout: 25s 169 | # Reconnect interval(unit: second) 170 | reconnectInterval: 25s 171 | # This is an endpoint that is published to peers outside of the organization. 172 | # If this isn't set, the peer will not be known to other organizations. 173 | externalEndpoint: 174 | # Leader election service configuration 175 | election: 176 | # Longest time peer waits for stable membership during leader election startup (unit: second) 177 | startupGracePeriod: 15s 178 | # Interval gossip membership samples to check its stability (unit: second) 179 | membershipSampleInterval: 1s 180 | # Time passes since last declaration message before peer decides to perform leader election (unit: second) 181 | leaderAliveThreshold: 10s 182 | # Time between peer sends propose message and declares itself as a leader (sends declaration message) (unit: second) 183 | leaderElectionDuration: 5s 184 | 185 | pvtData: 186 | # pullRetryThreshold determines the maximum duration of time private data corresponding for a given block 187 | # would be attempted to be pulled from peers until the block would be committed without the private data 188 | pullRetryThreshold: 60s 189 | # minPeers defines the minimum number of peers 190 | # that an endorsement containing private data would target for dissemination. 191 | # The endorsement would fail if an insufficient number of acknowledgements from 192 | # peers of the peer's organization wasn't obtained. 
193 | minPeers: 1 194 | # maxPeers defines the maximum number of peers 195 | # that an endorsement containing private data would target for dissemination 196 | maxPeers: 1 197 | # As private data enters the transient store, it is associated with the peer's ledger's height at that time. 198 | # transientstoreMaxBlockRetention defines the maximum difference between the current ledger's height upon commit, 199 | # and the private data residing inside the transient store that is guaranteed not to be purged. 200 | # Private data is purged from the transient store when blocks with sequences that are multiples 201 | # of transientstoreMaxBlockRetention are committed. 202 | transientstoreMaxBlockRetention: 1000 203 | 204 | # EventHub related configuration 205 | events: 206 | # The address that the Event service will be enabled on the peer 207 | address: 0.0.0.0:7053 208 | 209 | # total number of events that could be buffered without blocking send 210 | buffersize: 100 211 | 212 | # timeout duration for producer to send an event. 213 | # if < 0, if buffer full, unblocks immediately and not send 214 | # if 0, if buffer full, will block and guarantee the event will be sent out 215 | # if > 0, if buffer full, blocks till timeout 216 | timeout: 10ms 217 | 218 | # timewindow is the acceptable difference between the peer's current 219 | # time and the client's time as specified in a registration event 220 | timewindow: 15m 221 | 222 | # TLS Settings 223 | # Note that peer-chaincode connections through chaincodeListenAddress is 224 | # not mutual TLS auth. See comments on chaincodeListenAddress for more info 225 | tls: 226 | # require server-side TLS 227 | enabled: false 228 | # require client certificates / mutual TLS. 229 | # note that clients that are not configured to use a certificate will 230 | # fail to connect to the peer. 
231 | clientAuthRequired: false 232 | # X.509 certificate used for TLS server (and client if clientAuthRequired 233 | # is set to true) 234 | cert: 235 | file: tls/server.crt 236 | # private key used for TLS server (and client if clientAuthRequired 237 | # is set to true) 238 | key: 239 | file: tls/server.key 240 | # trusted root certificate chain for tls.cert 241 | rootcert: 242 | file: tls/ca.crt 243 | # set of root certificate authorities used to verify client certificates 244 | clientRootCAs: 245 | files: 246 | - tls/ca.crt 247 | 248 | # The server name used to verify the hostname returned by TLS handshake 249 | serverhostoverride: 250 | 251 | # Path on the file system where peer will store data (eg ledger). This 252 | # location must be access control protected to prevent unintended 253 | # modification that might corrupt the peer operations. 254 | fileSystemPath: /var/hyperledger/production 255 | 256 | # BCCSP (Blockchain crypto provider): Select which crypto implementation or 257 | # library to use 258 | BCCSP: 259 | Default: SW 260 | SW: 261 | # TODO: The default Hash and Security level needs refactoring to be 262 | # fully configurable. Changing these defaults requires coordination 263 | # SHA2 is hardcoded in several places, not only BCCSP 264 | Hash: SHA2 265 | Security: 256 266 | # Location of Key Store 267 | FileKeyStore: 268 | # If "", defaults to 'mspConfigPath'/keystore 269 | # TODO: Ensure this is read with fabric/core/config.GetPath() once ready 270 | KeyStore: 271 | 272 | # Path on the file system where peer will find MSP local configurations 273 | mspConfigPath: msp 274 | 275 | # Identifier of the local MSP 276 | # ----!!!!IMPORTANT!!!-!!!IMPORTANT!!!-!!!IMPORTANT!!!!---- 277 | # Deployers need to change the value of the localMspId string. 278 | # In particular, the name of the local MSP ID of a peer needs 279 | # to match the name of one of the MSPs in each of the channels 280 | # that this peer is a member of.
Otherwise this peer's messages 281 | # will not be identified as valid by other nodes. 282 | localMspId: DEFAULT 283 | 284 | # Delivery service related config 285 | deliveryclient: 286 | # Sets the total time the delivery service may spend in reconnection 287 | # attempts until its retry logic gives up and returns an error 288 | reconnectTotalTimeThreshold: 3600s 289 | 290 | # Used with Go profiling tools only in non-production environments. In 291 | # production, it should be disabled (eg enabled: false) 292 | profile: 293 | enabled: false 294 | listenAddress: 0.0.0.0:6060 295 | 296 | # Handlers defines custom handlers that can filter and mutate 297 | # objects passing within the peer, such as: 298 | # Auth filter - reject or forward proposals from clients 299 | # Decorators - append or mutate the chaincode input passed to the chaincode 300 | # A valid handler definition contains: 301 | # - A name, which is a factory method name defined in 302 | # core/handlers/library/library.go for statically compiled handlers 303 | # - A library path to a shared object binary for pluggable filters 304 | # Auth filters and decorators are chained and executed in the order that 305 | # they are defined. For example: 306 | # authFilters: 307 | # - 308 | # name: FilterOne 309 | # library: /opt/lib/filter.so 310 | # - 311 | # name: FilterTwo 312 | # decorators: 313 | # - 314 | # name: DecoratorOne 315 | # - 316 | # name: DecoratorTwo 317 | # library: /opt/lib/decorator.so 318 | handlers: 319 | authFilters: 320 | - 321 | name: DefaultAuth 322 | decorators: 323 | - 324 | name: DefaultDecorator 325 | 326 | # Number of goroutines that will execute transaction validation in parallel. 327 | # By default, the peer chooses the number of CPUs on the machine. Set this 328 | # variable to override that choice. 
329 | # NOTE: overriding this value might negatively influence the performance of 330 | # the peer, so please change this value only if you know what you're doing 331 | validatorPoolSize: 332 | 333 | ############################################################################### 334 | # 335 | # VM section 336 | # 337 | ############################################################################### 338 | vm: 339 | 340 | # Endpoint of the vm management system. For docker, it is generally one of the following: 341 | # unix:///var/run/docker.sock 342 | # http://localhost:2375 343 | # https://localhost:2376 344 | endpoint: unix:///var/run/docker.sock 345 | 346 | # settings for docker vms 347 | docker: 348 | tls: 349 | enabled: false 350 | ca: 351 | file: docker/ca.crt 352 | cert: 353 | file: docker/tls.crt 354 | key: 355 | file: docker/tls.key 356 | 357 | # Enables/disables the standard out/err from chaincode containers for 358 | # debugging purposes 359 | attachStdout: false 360 | 361 | # Parameters for creating the docker container. 362 | # Containers may be created efficiently using ipam & dns-server for a cluster 363 | # NetworkMode - sets the networking mode for the container. Supported 364 | # standard values are: `host`(default),`bridge`,`ipvlan`,`none`. 365 | # Dns - a list of DNS servers for the container to use. 366 | # Note: the `Privileged`, `Binds`, `Links` and `PortBindings` properties of 367 | # Docker Host Config are not supported and will not be used if set. 368 | # LogConfig - sets the logging driver (Type) and related options 369 | # (Config) for Docker. For more info, 370 | # https://docs.docker.com/engine/admin/logging/overview/ 371 | # Note: setting LogConfig via environment variables is not supported. 
372 | hostConfig: 373 | NetworkMode: host 374 | Dns: 375 | # - 192.168.0.1 376 | LogConfig: 377 | Type: json-file 378 | Config: 379 | max-size: "50m" 380 | max-file: "5" 381 | Memory: 2147483648 382 | 383 | ############################################################################### 384 | # 385 | # Chaincode section 386 | # 387 | ############################################################################### 388 | chaincode: 389 | # This is used if chaincode endpoint resolution fails with the 390 | # chaincodeListenAddress property 391 | peerAddress: 392 | 393 | # The id is used by the Chaincode stub to register the executing Chaincode 394 | # ID with the Peer and is generally supplied through ENV variables. 395 | # The `path` form of ID is provided when installing the chaincode. 396 | # The `name` is used for all other requests and can be any string. 397 | id: 398 | path: 399 | name: 400 | 401 | # Generic builder environment, suitable for most chaincode types 402 | builder: $(DOCKER_NS)/fabric-ccenv:$(ARCH)-$(PROJECT_VERSION) 403 | 404 | golang: 405 | # golang will never need more than baseos 406 | runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION) 407 | 408 | # whether or not golang chaincode should be linked dynamically 409 | dynamicLink: false 410 | 411 | car: 412 | # car may need more facilities (JVM, etc) in the future as the catalog 413 | # of platforms is expanded. For now, we can just use baseos 414 | runtime: $(BASE_DOCKER_NS)/fabric-baseos:$(ARCH)-$(BASE_VERSION) 415 | 416 | java: 417 | # This is an image based on java:openjdk-8 with additional compiler 418 | # tools added for java shim layer packaging. 419 | # This image is packed with shim layer libraries that are necessary 420 | # for the Java chaincode runtime. 
421 | Dockerfile: | 422 | from $(DOCKER_NS)/fabric-javaenv:$(ARCH)-$(PROJECT_VERSION) 423 | 424 | node: 425 | # need node.js engine at runtime, currently available in baseimage 426 | # but not in baseos 427 | runtime: $(BASE_DOCKER_NS)/fabric-baseimage:$(ARCH)-$(BASE_VERSION) 428 | 429 | # Timeout duration for starting up a container and waiting for Register 430 | # to come through. 1sec should be plenty for chaincode unit tests 431 | startuptimeout: 300s 432 | 433 | # Timeout duration for Invoke and Init calls to prevent runaway. 434 | # This timeout is used by all chaincodes in all the channels, including 435 | # system chaincodes. 436 | # Note that during Invoke, if the image is not available (e.g. being 437 | # cleaned up when in a development environment), the peer will automatically 438 | # build the image, which might take more time. In a production environment, 439 | # the chaincode image is unlikely to be deleted, so the timeout could be 440 | # reduced accordingly. 441 | executetimeout: 30s 442 | 443 | # There are 2 modes: "dev" and "net". 444 | # In dev mode, the user runs the chaincode after starting the peer from 445 | # the command line on the local machine. 446 | # In net mode, the peer will run the chaincode in a docker container. 447 | mode: net 448 | 449 | # keepalive in seconds. In situations where the communication goes through a 450 | # proxy that does not support keep-alive, this parameter will maintain the connection 451 | # between peer and chaincode. 452 | # A value <= 0 turns keepalive off 453 | keepalive: 0 454 | 455 | # system chaincodes whitelist. 
To add system chaincode "myscc" to the 456 | # whitelist, add "myscc: enable" to the list below, and register in 457 | # chaincode/importsysccs.go 458 | system: 459 | cscc: enable 460 | lscc: enable 461 | escc: enable 462 | vscc: enable 463 | qscc: enable 464 | rscc: disable 465 | 466 | # System chaincode plugins: in addition to being imported and compiled 467 | # into fabric through core/chaincode/importsysccs.go, system chaincodes 468 | # can also be loaded as shared objects compiled as Go plugins. 469 | # See examples/plugins/scc for an example. 470 | # Like regular system chaincodes, plugins must also be whitelisted in the 471 | # chaincode.system section above. 472 | systemPlugins: 473 | # example configuration: 474 | # - enabled: true 475 | # name: myscc 476 | # path: /opt/lib/myscc.so 477 | # invokableExternal: true 478 | # invokableCC2CC: true 479 | 480 | # Logging section for the chaincode container 481 | logging: 482 | # Default level for all loggers within the chaincode container 483 | level: info 484 | # Override default level for the 'shim' module 485 | shim: warning 486 | # Format for the chaincode container logs 487 | format: '%{color}%{time:2006-01-02 15:04:05.000 MST} [%{module}] %{shortfunc} -> %{level:.4s} %{id:03x}%{color:reset} %{message}' 488 | 489 | ############################################################################### 490 | # 491 | # Ledger section - ledger configuration encompasses both the blockchain 492 | # and the state 493 | # 494 | ############################################################################### 495 | ledger: 496 | 497 | blockchain: 498 | 499 | state: 500 | # stateDatabase - options are "goleveldb", "CouchDB" 501 | # goleveldb - default state database stored in goleveldb. 
502 | # CouchDB - store state database in CouchDB 503 | stateDatabase: goleveldb 504 | couchDBConfig: 505 | # It is recommended to run CouchDB on the same server as the peer, and 506 | # not map the CouchDB container port to a server port in docker-compose. 507 | # Otherwise proper security must be provided on the connection between 508 | # the CouchDB client (on the peer) and the server. 509 | couchDBAddress: 127.0.0.1:5984 510 | # This username must have read and write authority on CouchDB 511 | username: 512 | # It is recommended to pass the password as an environment variable 513 | # during start up (eg LEDGER_COUCHDBCONFIG_PASSWORD). 514 | # If it is stored here, the file must be access control protected 515 | # to prevent unintended users from discovering the password. 516 | password: 517 | # Number of retries for CouchDB errors 518 | maxRetries: 3 519 | # Number of retries for CouchDB errors during peer startup 520 | maxRetriesOnStartup: 10 521 | # CouchDB request timeout (unit: duration, e.g. 20s) 522 | requestTimeout: 35s 523 | # Limit on the number of records to return per query 524 | queryLimit: 10000 525 | # Limit on the number of records per CouchDB bulk update batch 526 | maxBatchUpdateSize: 1000 527 | 528 | 529 | history: 530 | # enableHistoryDatabase - options are true or false 531 | # Indicates if the history of key updates should be stored. 532 | # All history 'index' entries will be stored in goleveldb, regardless of whether 533 | # CouchDB or an alternate database is used for the state. 
534 | enableHistoryDatabase: true 535 | 536 | ############################################################################### 537 | # 538 | # Metrics section 539 | # 540 | # 541 | ############################################################################### 542 | metrics: 543 | # enable or disable metrics server 544 | enabled: false 545 | 546 | # when the metrics server is enabled, the metrics reporter type must be specified 547 | # currently supported types: "statsd", "prom" 548 | reporter: statsd 549 | 550 | # determines the frequency of metrics reporting (unit: seconds) 551 | interval: 1s 552 | 553 | statsdReporter: 554 | 555 | # statsd server address to connect to 556 | address: 0.0.0.0:8125 557 | 558 | # determines the frequency of pushing metrics to the statsd server (unit: seconds) 559 | flushInterval: 2s 560 | 561 | # max size in bytes for each metrics push request 562 | # 1432 is recommended for intranet, 512 for internet 563 | flushBytes: 1432 564 | 565 | promReporter: 566 | 567 | # prometheus http server listen address for pull metrics 568 | listenAddress: 0.0.0.0:8080 569 | -------------------------------------------------------------------------------- /scripts/fabric_artifacts/channel_tools.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | # This file contains a series of tools that allow you to create a channel 5 | # and to join a channel. 6 | # You can source this file and then use the defined functions 7 | 8 | set -eu -o pipefail 9 | 10 | puts() { 11 | local GREEN='\033[0;32m' 12 | local NC='\033[0m' 13 | echo -e "${GREEN}$*${NC}" 14 | } 15 | 16 | set_peer_env(){ 17 | PEER=$1 18 | PEER_ORG=$2 19 | 20 | SET_ENV="/etc/hyperledger/crypto-config/tools/set_env.$PEER.$PEER_ORG.sh" 21 | chmod +x $SET_ENV 22 | 23 | # shellcheck source=src/set_env.sh 24 | . 
$SET_ENV 25 | } 26 | 27 | # path to the folder which contains the channel tx and blocks, relative to working dir of tools container 28 | CONFIGTX_PATH=/etc/hyperledger/configtx 29 | 30 | # Waits for a peer to answer curl 31 | wait_for_host(){ 32 | HOST_TO_TEST=$1 33 | until [[ "$(curl $HOST_TO_TEST 2> /dev/null; echo $?)" != 6 ]]; do puts "Waiting for host $HOST_TO_TEST to come online..."; sleep 3; done 34 | puts "Host $HOST_TO_TEST is online!" 35 | } 36 | 37 | create_channel(){ 38 | PEER=$1 39 | PEER_ORG=$2 40 | ORDERER=$3 41 | ORDERER_ORG=$4 42 | CHANNEL_ID=$5 43 | 44 | # The orderer ca does not have to be combined, it can just be its tls cert 45 | # It does not seem necessary to specify the cafile or the orderer; the peer seems to find them by itself 46 | ORDERER_CA="/etc/hyperledger/crypto-config/$ORDERER_ORG/orderers/$ORDERER.$ORDERER_ORG/tlsca.combined.$ORDERER.$ORDERER_ORG-cert.pem" 47 | set_peer_env $PEER $PEER_ORG 48 | 49 | if ! peer channel list | grep -q -e "^$CHANNEL_ID$"; then 50 | if [ ! -f "$CONFIGTX_PATH/$CHANNEL_ID.block" ]; then 51 | puts "Creating channel block for $CHANNEL_ID..." 52 | peer channel create --cafile $ORDERER_CA -c $CHANNEL_ID -f $CONFIGTX_PATH/$CHANNEL_ID.tx -o $ORDERER.$ORDERER_ORG:7050 --tls true 53 | mv $CHANNEL_ID.block $CONFIGTX_PATH 54 | puts "Done. Created $CHANNEL_ID.block" 55 | else 56 | puts "Channel block $CHANNEL_ID already created"'!' 57 | fi 58 | else 59 | puts "Channel $CHANNEL_ID already exists." 60 | fi 61 | } 62 | 63 | join_channel(){ 64 | PEER=$1 65 | PEER_ORG=$2 66 | ORDERER=$3 67 | ORDERER_ORG=$4 68 | CHANNEL_ID=$5 69 | 70 | # The orderer ca does not have to be combined, it can just be its tls cert 71 | ORDERER_CA="/etc/hyperledger/crypto-config/$ORDERER_ORG/orderers/$ORDERER.$ORDERER_ORG/tlsca.combined.$ORDERER.$ORDERER_ORG-cert.pem" 72 | 73 | puts "Peer $PEER.$PEER_ORG is trying to join channel $CHANNEL_ID..." 
74 | set_peer_env $PEER $PEER_ORG 75 | if peer channel list | grep -q -e "^$CHANNEL_ID$"; then 76 | puts "Peer $PEER.$PEER_ORG is already part of channel $CHANNEL_ID" 77 | else 78 | peer channel join --cafile $ORDERER_CA -b $CONFIGTX_PATH/$CHANNEL_ID.block 79 | puts "Done. Peer $PEER.$PEER_ORG joined channel $CHANNEL_ID" 80 | fi 81 | } 82 | -------------------------------------------------------------------------------- /scripts/fabric_artifacts/configtxBase.yaml: -------------------------------------------------------------------------------- 1 | # This file is auto-generated 2 | 3 | 4 | --- 5 | ################################################################################ 6 | # 7 | # SECTION: Orderer 8 | # 9 | # - This section defines the values to encode into a config transaction or 10 | # genesis block for orderer related parameters 11 | # 12 | ############################################################################## 13 | 14 | Orderer: &Orderer 15 | 16 | # Orderer Type: The orderer implementation to start 17 | # Available types are "solo" and "kafka" 18 | OrdererType: solo 19 | 20 | 21 | # Batch Timeout: The amount of time to wait before creating a batch 22 | BatchTimeout: 2s 23 | 24 | # Batch Size: Controls the number of messages batched into a block 25 | BatchSize: 26 | 27 | # Max Message Count: The maximum number of messages to permit in a batch 28 | MaxMessageCount: 10 29 | 30 | # Absolute Max Bytes: The absolute maximum number of bytes allowed for 31 | # the serialized messages in a batch. 32 | AbsoluteMaxBytes: 98 MB 33 | 34 | # Preferred Max Bytes: The preferred maximum number of bytes allowed for 35 | # the serialized messages in a batch. A message larger than the preferred 36 | # max bytes will result in a batch larger than preferred max bytes. 
37 | PreferredMaxBytes: 512 KB 38 | 39 | # Addresses of the orderers 40 | Addresses: 41 | 42 | # Organizations is the list of orgs which are defined as participants on 43 | # the orderer side of the network 44 | Organizations: 45 | 46 | ################################################################################ 47 | # 48 | # SECTION: Application 49 | # 50 | # - This section defines the values to encode into a config transaction or 51 | # genesis block for application related parameters 52 | # 53 | ################################################################################ 54 | Application: &ApplicationDefaults 55 | 56 | # Organizations is the list of orgs which are defined as participants on 57 | # the application side of the network 58 | Organizations: 59 | -------------------------------------------------------------------------------- /scripts/fabric_artifacts/create_dev_docker_compose.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python2 2 | """ 3 | Generates a simple docker compose file for devmode 4 | """ 5 | import os 6 | import sys 7 | 8 | GEN_PATH = os.environ["GEN_PATH"] 9 | 10 | # Parse args 11 | if len(sys.argv) != 5: 12 | sys.stderr.write("Usage: create_dev_docker_compose.py org mspid peer admin\n") 13 | exit(1) 14 | ORG = sys.argv[1] 15 | MSPID = sys.argv[2] 16 | PEER = sys.argv[3] 17 | ADMIN = sys.argv[4] 18 | 19 | with open(GEN_PATH + '/devmode/docker-compose-simple.yaml', 'w') as stream: 20 | stream.write("""version: '2' 21 | # This file is auto-generated 22 | 23 | services: 24 | orderer: 25 | container_name: orderer 26 | image: hyperledger/fabric-orderer 27 | environment: 28 | - ORDERER_GENERAL_LOGLEVEL=debug 29 | - ORDERER_GENERAL_LISTENADDRESS=orderer 30 | - ORDERER_GENERAL_GENESISMETHOD=file 31 | - ORDERER_GENERAL_GENESISFILE=orderer.block 32 | - ORDERER_GENERAL_LOCALMSPID={0} 33 | - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/msp 34 | - GRPC_TRACE=all=true, 35 | - 
GRPC_VERBOSITY=debug 36 | working_dir: /opt/gopath/src/github.com/hyperledger/fabric 37 | command: orderer 38 | volumes: 39 | - ../crypto-config/{1}/orderers/orderer.{1}/msp:/etc/hyperledger/msp 40 | - ./channel/orderer.genesis.block:/etc/hyperledger/fabric/orderer.block 41 | ports: 42 | - 7050:7050 43 | 44 | peer: 45 | container_name: peer 46 | image: hyperledger/fabric-peer 47 | environment: 48 | - CORE_PEER_ID=peer 49 | - CORE_PEER_ADDRESS=peer:7051 50 | - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer:7051 51 | - CORE_PEER_LOCALMSPID={0} 52 | - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock 53 | - CORE_LOGGING_LEVEL=DEBUG 54 | - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp 55 | - CORE_LEDGER_STATE_STATEDATABASE=CouchDB 56 | - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=admin 57 | - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=password 58 | - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984 59 | volumes: 60 | - /var/run/:/host/var/run/ 61 | - ../crypto-config/{1}/peers/{2}.{1}/msp:/etc/hyperledger/msp 62 | working_dir: /opt/gopath/src/github.com/hyperledger/fabric/peer 63 | command: peer node start --peer-chaincodedev=true -o orderer:7050 64 | ports: 65 | - 7051:7051 66 | - 7053:7053 67 | depends_on: 68 | - orderer 69 | - couchdb 70 | 71 | cli: 72 | container_name: cli 73 | image: hyperledger/fabric-tools 74 | tty: true 75 | environment: 76 | - GOPATH=/opt/gopath 77 | - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock 78 | - CORE_LOGGING_LEVEL=DEBUG 79 | - CORE_PEER_ID=cli 80 | - CORE_PEER_ADDRESS=peer:7051 81 | - CORE_PEER_LOCALMSPID={0} 82 | - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp 83 | working_dir: /opt/gopath/src/chaincodedev 84 | command: /bin/bash -c 'sleep 5; ./script.sh;' 85 | volumes: 86 | - /var/run/:/host/var/run/ 87 | - ../crypto-config/{1}/users/{3}.{1}/msp:/etc/hyperledger/msp 88 | - ./:/opt/gopath/src/chaincodedev/ 89 | - ./channel:/opt/gopath/src/chaincodedev/channel 90 | depends_on: 91 | - orderer 92 | - peer 93 | 94 | chaincode: 95 | 
container_name: chaincode 96 | image: hyperledger/fabric-ccenv 97 | tty: true 98 | environment: 99 | - GOPATH=/opt/gopath 100 | - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock 101 | - CORE_LOGGING_LEVEL=DEBUG 102 | - CORE_PEER_ID=example02 103 | - CORE_PEER_ADDRESS=peer:7051 104 | - CORE_PEER_LOCALMSPID={0} 105 | - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp 106 | working_dir: /opt/gopath/src/chaincode 107 | command: /bin/bash -c 'sleep 6000000' 108 | volumes: 109 | - /var/run/:/host/var/run/ 110 | - ../crypto-config/{1}/peers/{2}.{1}/msp:/etc/hyperledger/msp 111 | - ./chaincode:/opt/gopath/src/chaincode 112 | depends_on: 113 | - orderer 114 | - peer 115 | 116 | couchdb: 117 | container_name: couchdb 118 | image: yeasy/hyperledger-fabric-couchdb 119 | # Populate the COUCHDB_USER and COUCHDB_PASSWORD to set an admin user and password 120 | # for CouchDB. This will prevent CouchDB from operating in an "Admin Party" mode. 121 | environment: 122 | - COUCHDB_USER=admin 123 | - COUCHDB_PASSWORD=password 124 | # Comment/Uncomment the port mapping if you want to hide/expose the CouchDB service, 125 | # for example map it to utilize Fauxton User Interface in dev environments. 
126 | ports: 127 | - "5984:5984" 128 | 129 | networks: 130 | default: 131 | external: 132 | name: hyperledgerNet 133 | """.format( 134 | MSPID, 135 | ORG, 136 | PEER, 137 | ADMIN 138 | )) 139 | -------------------------------------------------------------------------------- /scripts/fabric_artifacts/gen_configtx.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python2 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | 5 | import os 6 | import sys 7 | from argparse import ArgumentParser 8 | import yaml 9 | 10 | # Note: configtxgen requires certificates to have the subjectKeyIdentifier extension 11 | 12 | DEBUG = False 13 | # Path to where crypto-config and docker folders will be generated 14 | GEN_PATH = os.environ["GEN_PATH"] 15 | 16 | PWD = os.path.dirname(__file__) 17 | CONFIGTX_FILENAME = GEN_PATH + "/channel/configtx.yaml" 18 | ARTIFACT_SCRIPT_NAME = "create_channel_artifacts.sh" 19 | ARTIFACT_SCRIPT = GEN_PATH + '/channel/' + ARTIFACT_SCRIPT_NAME 20 | CREATE_AND_JOIN_CHANNEL_SCRIPT = GEN_PATH + '/channel/create_and_join_channel.sh' 21 | 22 | 23 | DEVMODE_ARTIFACT_SCRIPT_NAME = 'create_devmode_channel_artifacts.sh' 24 | 25 | DEVMODE_ARTIFACT_SCRIPT = GEN_PATH + "/channel/" + DEVMODE_ARTIFACT_SCRIPT_NAME 26 | DEVMODE_CHANNEL_SCRIPT = GEN_PATH + "/devmode/script.sh" 27 | 28 | def convert_to_msp_id(domain): 29 | return ''.join(part.capitalize() for part in domain.split('.')) + "MSP" 30 | 31 | def fail(msg): 32 | """Prints the error message and exits""" 33 | sys.stderr.write(msg) 34 | exit(1) 35 | 36 | 37 | PARSER = ArgumentParser(description='Creates the channel artifacts and the channel creation/joining scripts') 38 | PARSER.add_argument('crypto_config', type=str, help='cryptographic configuration of the network, as YAML file. 
See the provided example for details.') 39 | PARSER.add_argument('--configtxBase', '-c', help='path to the configtx hyperledger fabric config file, without the organisations and profiles (they will be generated). Defaults to a simple orderer configuration.', action='store') 40 | 41 | args = PARSER.parse_args() 42 | 43 | YAML_CONFIG = args.crypto_config 44 | 45 | def call(script, *args): 46 | """Calls the given script using the args""" 47 | 48 | cmd = script + " " + " ".join(args) 49 | if DEBUG: 50 | print cmd 51 | if os.system(cmd) != 0: 52 | fail("\nAn error occurred while executing " + cmd + ". See above for details.") 53 | 54 | def to_pwd(script): 55 | """Converts the script path to the correct path""" 56 | return PWD + "/" + script 57 | 58 | CONFIGTX_BASE = to_pwd("configtxBase.yaml") 59 | if args.configtxBase: 60 | CONFIGTX_BASE = args.configtxBase 61 | 62 | def add_org(org_conf): 63 | """Returns the org config for configtx""" 64 | mspId = convert_to_msp_id(org_conf["Domain"]) 65 | 66 | yaml_org = """ 67 | - &{0} 68 | Name: {0} 69 | # ID to load the MSP definition as 70 | # ID must match ID given in docker file 71 | ID: {1} 72 | MSPDir: ../crypto-config/{2}/msp""".format(org_conf['Name'], mspId, org_conf['Domain']) 73 | 74 | if 'peers' in org_conf and org_conf['peers']: 75 | peer = org_conf['peers'][0] 76 | yaml_org = yaml_org + """ 77 | AnchorPeers: 78 | # AnchorPeers defines the location of peers which can be used 79 | # for cross org gossip communication. Note, this value is only 80 | # encoded in the genesis block in the Application section context 81 | - Host: {0} 82 | Port: {1}""".format(peer['Hostname'] + '.' 
+ org_conf['Domain'], peer['Ports'][0].split(':')[0]) 83 | return yaml_org + '\n' 84 | 85 | 86 | PROFILE_SECTION = """ 87 | ################################################################################ 88 | # 89 | # Profile 90 | # 91 | # - Different configuration profiles may be encoded here to be specified 92 | # as parameters to the configtxgen tool 93 | # 94 | ################################################################################ 95 | 96 | 97 | # Organizations inside a consortium are the owners of the orderer. They can do administration tasks. 98 | 99 | Profiles: 100 | """ 101 | 102 | 103 | all_consortiums = [] 104 | 105 | def add_orderer(orderer_conf, org_conf): 106 | 107 | consortiums = [] 108 | for cons in orderer_conf['Consortiums']: 109 | consortium_name = " " + cons["Name"] + ':\n' 110 | orgs = " Organizations:\n" 111 | consortium_members = '\n'.join([" - *" + orgName for orgName in cons['Organizations']]) 112 | consortiums.append(consortium_name+orgs+consortium_members) 113 | consortiums = '\n'.join(consortiums) 114 | all_consortiums.append(consortiums) 115 | admin_orgs = '\n'.join([" - *" + org for org in orderer_conf['AdminOrgs']]) 116 | ord_addresses = '\n'.join([" - "+orderer_conf['Hostname'] + '.' 
+ org_conf['Domain'] + ':' + str(orderer_conf['Port'])]) 117 | return """ 118 | {0}genesis: 119 | Orderer: 120 | <<: *Orderer 121 | Addresses: 122 | {1} 123 | Organizations: 124 | {2} 125 | Consortiums: 126 | {3} 127 | """.format( 128 | (orderer_conf['Hostname'] + org_conf['Domain']).replace('.', ''), 129 | ord_addresses, 130 | admin_orgs, 131 | consortiums 132 | ) 133 | 134 | def add_channel(channel_conf): 135 | channel_orgs = '\n'.join([" - *" + org for org in channel_conf['Organizations']]) 136 | return """ 137 | {0}: 138 | Consortium: {1} 139 | Application: 140 | <<: *ApplicationDefaults 141 | Organizations: 142 | {2} 143 | """.format(channel_conf['Name'], channel_conf['Consortium']['Name'], channel_orgs) 144 | 145 | def add_channel_script(channel_name): 146 | return """ 147 | echo "Generating {0} configuration transaction '{0}.tx'" 148 | configtxgen -profile {0} -channelID {0} -outputCreateChannelTx $PREFIX/{0}.tx 149 | if [ "$?" -ne 0 ]; then 150 | echo "Failed to generate {0} configuration transaction..." >&2 151 | exit 1 152 | fi 153 | echo "Done" 154 | echo "-----" 155 | """.format(channel_name) 156 | 157 | def add_orderer_script(orderer_conf, org_conf): 158 | 159 | orderer_cn = orderer_conf["Hostname"] + '.' + org_conf["Domain"] 160 | return """ 161 | echo "Generating {0} Genesis Block..." 162 | configtxgen -profile {1}genesis -channelID {1}genesis -outputBlock $PREFIX/{0}.genesis.block 163 | if [ "$?" -ne 0 ]; then 164 | echo "Failed to generate {0} channel configuration transaction..." >&2 165 | exit 1 166 | fi 167 | echo "Done" 168 | echo "-----" 169 | """.format( 170 | orderer_cn, 171 | orderer_cn.replace('.', '') 172 | ) 173 | 174 | ORDERER_DEVMODE_SCRIPT = """ 175 | echo "Generating devmode Genesis Block..." 176 | configtxgen -profile devmodeorderergenesis -channelID devmodeorderergenesis -outputBlock $PREFIX/orderer.genesis.block 177 | if [ "$?" -ne 0 ]; then 178 | echo "Failed to generate devmode channel configuration transaction..." 
>&2 179 | exit 1 180 | fi 181 | echo "Done" 182 | echo "-----" 183 | """ 184 | 185 | def add_devmode(conf): 186 | return""" 187 | devmodeorderergenesis: 188 | Orderer: 189 | <<: *Orderer 190 | Addresses: 191 | - orderer:7050 192 | Organizations: 193 | - *{1} # Single organisation 194 | Consortiums: 195 | # All consortiums 196 | {0} 197 | """.format('\n'.join(all_consortiums), conf['Devmode']['Name']) 198 | 199 | def get_all_hosts_and_orderer_from_channel(CONF, channel_conf): 200 | orderers_for_channel = [] 201 | peers_for_channel = [] 202 | 203 | def get_peers_from_org(org): 204 | if 'peers' in org and org['peers']: 205 | return [{'peer_name':peer['Hostname'], 'peer_org':org["Domain"]} for peer in org['peers']] 206 | return [] 207 | 208 | for org in CONF['Orgs']: 209 | if org['Name'] in channel_conf['Organizations']: 210 | peers_for_channel.extend(get_peers_from_org(org)) 211 | 212 | if 'orderers' in org and org['orderers']: 213 | for orderer in org['orderers']: 214 | if channel_conf['Name'] in orderer['Channels']: 215 | orderers_for_channel.append({'ord_org': org['Domain'], 'ord_name': orderer['Hostname']}) 216 | 217 | 218 | if len(orderers_for_channel) == 0: 219 | fail('No orderer for channel '+channel_conf['Name']) 220 | 221 | if len(peers_for_channel) == 0: 222 | fail('No peers for channel '+channel_conf['Name']) 223 | 224 | 225 | return { 226 | 'channel': channel_conf['Name'], 227 | 'peers': peers_for_channel, 228 | 'orderer': orderers_for_channel[0] # Maybe that all the orderers are needed. Needs testing TODO 229 | } 230 | 231 | CHANNEL_SCRIPT_PREAMBLE = """#!/bin/bash 232 | # This file is auto-generated 233 | 234 | # This file allows you to create and join a channel. It requires 235 | # channel_tools.sh to be sourced for all commands to work 236 | 237 | set -eu -o pipefail 238 | 239 | if [ $# -ne 1 ]; 240 | then 241 | echo "" 242 | echo "Usage: " 243 | echo " create_and_join_channel CHANNEL_ID" 244 | exit 1 245 | fi 246 | 247 | . 
/etc/hyperledger/configtx/channel_tools.sh 248 | 249 | channel_id=$1 250 | 251 | 252 | """ 253 | 254 | 255 | def create_channel_script(CONF, channel_script): 256 | i = 0 257 | channel_script.write(CHANNEL_SCRIPT_PREAMBLE) 258 | for channel in CONF['Channels']: 259 | channel_info = get_all_hosts_and_orderer_from_channel(CONF, channel) 260 | wait_for_host = '\n'.join([' wait_for_host ' + peer['peer_name'] + '.' + peer['peer_org'] for peer in channel_info['peers']]) + '\n' 261 | 262 | channel_orderer = " channel_orderer={0}\n".format(channel_info['orderer']['ord_name']) 263 | channel_orderer_org = " channel_orderer_org={0}\n".format(channel_info['orderer']['ord_org']) 264 | 265 | create_channel = ' create_channel {0} {1} $channel_orderer $channel_orderer_org $channel_id\n'.format( 266 | channel_info['peers'][0]['peer_name'], 267 | channel_info['peers'][0]['peer_org'] 268 | ) 269 | 270 | join_channel = [] 271 | for peer in channel_info['peers']: 272 | join_channel.append(' join_channel {0} {1} $channel_orderer $channel_orderer_org $channel_id'.format( 273 | peer['peer_name'], 274 | peer['peer_org'] 275 | )) 276 | join_channel = '\n'.join(join_channel) + '\n' 277 | channel_script_content = channel_orderer + channel_orderer_org + '\n' + wait_for_host + '\n' + create_channel + join_channel 278 | 279 | if i == 0: 280 | channel_script.write('if [ $channel_id == "{0}" ]; then\n'.format(channel['Name'])) 281 | else: 282 | channel_script.write('elif [ $channel_id == "{0}" ]; then\n'.format(channel['Name'])) 283 | 284 | channel_script.write(channel_script_content) 285 | 286 | if i == len(CONF['Channels']) - 1: 287 | channel_script.write('else\n puts "Unknown channel id $channel_id"\nfi') 288 | i += 1 289 | 290 | 291 | DEVMODE_ARTIFACT_SCRIPT_PREAMBLE = """#!/bin/bash 292 | # This file is auto-generated 293 | set -eu -o pipefail 294 | 295 | PREFIX="../devmode/channel" 296 | rm -rf $PREFIX 297 | mkdir -p $PREFIX 298 | 299 | FABRIC_CFG_PATH=$(pwd) 300 | export FABRIC_CFG_PATH 301 
| echo FABRIC_CFG_PATH=$FABRIC_CFG_PATH 302 | 303 | """ 304 | 305 | DEVMODE_CHANNEL_SCRIPT_PREAMBLE = """#!/bin/bash 306 | # Copyright London Stock Exchange Group All Rights Reserved. 307 | # 308 | # SPDX-License-Identifier: Apache-2.0 309 | set -eu -o pipefail 310 | 311 | # This script expedites the chaincode development process by automating the 312 | # requisite channel create/join commands 313 | 314 | # This file is auto-generated 315 | """ 316 | 317 | DEVMODE_CHANNEL_SCRIPT_END = """ 318 | # Now the user can proceed to build and start chaincode in one terminal 319 | # And leverage the CLI container to issue install instantiate invoke query commands in another 320 | 321 | #we should have bailed if above commands failed. 322 | #we are here, so they worked 323 | sleep 600000 324 | exit 0 325 | """ 326 | 327 | 328 | call("mkdir -p", GEN_PATH + "/channel") 329 | call("mkdir -p", GEN_PATH + "/devmode") 330 | call("cp", CONFIGTX_BASE, CONFIGTX_FILENAME) 331 | 332 | ARTIFACT_SCRIPT_PREAMBLE = """#!/bin/bash 333 | # This file is auto-generated 334 | 335 | set -eu -o pipefail 336 | 337 | FABRIC_CFG_PATH="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )" 338 | export FABRIC_CFG_PATH 339 | echo FABRIC_CFG_PATH=$FABRIC_CFG_PATH 340 | rm -rf *.tx 341 | rm -rf *.block 342 | 343 | PREFIX="." 344 | 345 | """ 346 | 347 | CONFIGTX_PREAMBLE = """ 348 | 349 | ################################################################################ 350 | # 351 | # Section: Organizations 352 | # 353 | # - This section defines the different organizational identities which will 354 | # be referenced later in the configuration. 
355 | # 356 | ################################################################################ 357 | Organizations: 358 | """ 359 | 360 | with open(YAML_CONFIG, 'r') as stream: 361 | with open(CONFIGTX_FILENAME, 'a') as configtx: 362 | with open(ARTIFACT_SCRIPT, 'w') as art_script: 363 | with open(DEVMODE_ARTIFACT_SCRIPT, 'w') as devmode_artifact_script: 364 | with open(DEVMODE_CHANNEL_SCRIPT, 'w') as devmode_channel_script: 365 | with open(CREATE_AND_JOIN_CHANNEL_SCRIPT, 'w') as channel_script: 366 | try: 367 | CONF = yaml.load(stream) 368 | art_script.write(ARTIFACT_SCRIPT_PREAMBLE) 369 | devmode_artifact_script.write(DEVMODE_ARTIFACT_SCRIPT_PREAMBLE) 370 | devmode_channel_script.write(DEVMODE_CHANNEL_SCRIPT_PREAMBLE) 371 | 372 | configtx.write(CONFIGTX_PREAMBLE) 373 | for theOrg in CONF["Orgs"]: 374 | configtx.write(add_org(theOrg)) 375 | 376 | configtx.write(PROFILE_SECTION) 377 | for theChannel in CONF["Channels"]: 378 | configtx.write(add_channel(theChannel)) 379 | the_channel_script = add_channel_script(theChannel["Name"]) 380 | devmode_artifact_script.write(the_channel_script) 381 | art_script.write(the_channel_script) 382 | devmode_channel_script.write(""" 383 | peer channel create -c {0} -f channel/{0}.tx -o orderer:7050 384 | peer channel join -b {0}.block\n\n""".format(theChannel['Name'])) 385 | 386 | create_channel_script(CONF, channel_script) 387 | devmode_artifact_script.write(ORDERER_DEVMODE_SCRIPT) 388 | 389 | for theOrg in CONF["Orgs"]: 390 | if 'orderers' in theOrg and theOrg['orderers']: 391 | for theOrderer in theOrg['orderers']: 392 | configtx.write(add_orderer(theOrderer, theOrg)) 393 | art_script.write(add_orderer_script(theOrderer, theOrg)) 394 | 395 | configtx.write(add_devmode(CONF)) 396 | devmode_channel_script.write(DEVMODE_CHANNEL_SCRIPT_END) 397 | 398 | devModeMSPId = convert_to_msp_id(CONF['Devmode']['Domain']) 399 | call(to_pwd("create_dev_docker_compose.py"), CONF['Devmode']['Domain'], devModeMSPId, 
CONF['Devmode']['peers'][0]['Hostname'], CONF['Devmode']['admins'][0]['Hostname']) 400 | except yaml.YAMLError as exc: 401 | print exc 402 | 403 | call("chmod +x", ARTIFACT_SCRIPT) 404 | call("chmod +x", CREATE_AND_JOIN_CHANNEL_SCRIPT) 405 | call("chmod +x", DEVMODE_CHANNEL_SCRIPT) 406 | call("chmod +x", DEVMODE_ARTIFACT_SCRIPT) 407 | call("cp", to_pwd('channel_tools.sh'), GEN_PATH + '/channel') 408 | call("mkdir -p", GEN_PATH + '/devmode/chaincode') 409 | 410 | 411 | call('cd', GEN_PATH + '/channel', '&&', './' + DEVMODE_ARTIFACT_SCRIPT_NAME, '&&', "cd", "..") 412 | call('cd', GEN_PATH + '/channel', '&&', './' + ARTIFACT_SCRIPT_NAME, '&&', "cd", "..") 413 | -------------------------------------------------------------------------------- /scripts/get_hosts_scripts.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python2 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | """ 5 | get_hosts_scripts.py aws_config do_public 6 | 7 | Requires the GEN_PATH environment variable to be set, 8 | which points to the hyperledger fabric certificate 9 | structure created by cryptogen.py 10 | 11 | Creates clean_hosts.sh, set_hosts_public.sh, set_hosts_private.sh 12 | 13 | These scripts allow you to modify the /etc/hosts file to resolve 14 | the names of your network to IP addresses running on Amazon 15 | 16 | Note: set_hosts_public.sh will only be created if you are the one 17 | who ran vagrant up (and thus have a .vagrant folder) 18 | """ 19 | 20 | import os 21 | import sys 22 | import json 23 | import subprocess 24 | import yaml 25 | 26 | DEBUG = False 27 | 28 | PWD = os.path.dirname(__file__) 29 | VAGRANT_FOLDER = PWD+'/../.vagrant' 30 | GEN_PATH = os.environ["GEN_PATH"] 31 | 32 | def fail(msg): 33 | """Prints the error message and exits""" 34 | sys.stderr.write('\033[91m' + msg + '\033[0m\n') 35 | exit(1) 36 | 37 | # Parse args 38 | if len(sys.argv) != 3: 39 | fail("Usage: get_hosts_scripts aws_config 
do_public") 40 | AWS_CONF = json.load(open(sys.argv[1])) 41 | DO_PUBLIC = sys.argv[2] == 'True' 42 | 43 | def call(script, *args): 44 | """Calls the given script using the args""" 45 | 46 | cmd = script + " " + " ".join(args) 47 | if DEBUG: 48 | print cmd 49 | proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) 50 | out, error = proc.communicate() 51 | if error != "": 52 | fail("An error occurred while executing " + cmd + ". See above for details. Error=" + error) 53 | return out 54 | 55 | def get_container_name(yaml_file): 56 | """Returns the container name of the yaml_file docker compose file""" 57 | docker_path = GEN_PATH + "/docker/" 58 | with open(docker_path + yaml_file, 'r') as stream: 59 | try: 60 | conf = yaml.load(stream) 61 | return conf['services'].keys()[0] 62 | 63 | except yaml.YAMLError as exc: 64 | print exc 65 | 66 | 67 | def remove_line_matching_from_hosts(dns_name): 68 | """Removes any line that contains dns_name from /etc/hosts""" 69 | return "sudo sed -i.bak '/" + dns_name + "/d' /etc/hosts && sudo rm /etc/hosts.bak\n" 70 | 71 | 72 | def add_entry_to_hosts(to_add): 73 | """Returns the command to add something to the hosts file""" 74 | return "sudo bash -c \"echo " + to_add + " >> /etc/hosts\"\n" 75 | 76 | def clean_known_hosts(dns_name): 77 | """Command to remove the entry in known_hosts""" 78 | return "sed -i.bak '/"+dns_name.lower()+"/d' ~/.ssh/known_hosts && rm ~/.ssh/known_hosts.bak\n" 79 | 80 | PREAMBLE = """#!/bin/bash 81 | # This file has been auto-generated 82 | 83 | """ 84 | 85 | 86 | call("mkdir -p", GEN_PATH + "/scripts") 87 | 88 | 89 | if DO_PUBLIC: 90 | SCRIPT_OUT_PUBLIC_FN = GEN_PATH + "/scripts/set_hosts_public.sh" 91 | SCRIPT_OUT_PUBLIC = open(SCRIPT_OUT_PUBLIC_FN, "w") 92 | SCRIPT_OUT_PUBLIC.write(PREAMBLE) 93 | SCRIPT_OUT_PUBLIC.write("# This script automatically adds entries to the /etc/hosts file\n\n") 94 | else: 95 | print '{0} does not exist. 
Did you run vagrant up?\n'.format(VAGRANT_FOLDER) 96 | 97 | SCRIPT_OUT_PRIVATE_FN = GEN_PATH + "/scripts/set_hosts_private.sh" 98 | SCRIPT_OUT_PRIVATE = open(SCRIPT_OUT_PRIVATE_FN, "w") 99 | SCRIPT_OUT_PRIVATE.write(PREAMBLE) 100 | SCRIPT_OUT_PRIVATE.write("# This script automatically adds entries to the /etc/hosts file\n\n") 101 | 102 | SCRIPT_CLEANER_FN = GEN_PATH + "/scripts/clean_hosts.sh" 103 | SCRIPT_CLEANER = open(SCRIPT_CLEANER_FN, "w") 104 | SCRIPT_CLEANER.write(PREAMBLE) 105 | SCRIPT_CLEANER.write("# This script removes automatically generated entries from the /etc/hosts file\n\n") 106 | 107 | for instance in AWS_CONF["ec2s"]: 108 | instance_name = "HLF_"+instance 109 | instance_id = "$(cat {0}/machines/".format(VAGRANT_FOLDER) + instance_name + "/aws/id)" 110 | if DO_PUBLIC: 111 | public_ip = call("aws ec2 describe-instances --filter \"Name=instance-id,Values=" + instance_id + "\" --query 'Reservations[0].Instances[0].PublicIpAddress'") 112 | public_ip = public_ip.rstrip("\n").rstrip('"').lstrip('"') 113 | private_ip = AWS_CONF["ec2s"][instance]["ip"] 114 | for docker_container in AWS_CONF["ec2s"][instance]["fabric"]: 115 | docker_name = get_container_name(docker_container["docker"]) 116 | host_value_private = private_ip + " " + docker_name 117 | 118 | remove_line = remove_line_matching_from_hosts(docker_name) 119 | 120 | if DO_PUBLIC: 121 | host_value_public = public_ip + " " + docker_name 122 | SCRIPT_OUT_PUBLIC.write(clean_known_hosts(docker_name)) 123 | SCRIPT_OUT_PUBLIC.write(remove_line) 124 | SCRIPT_OUT_PUBLIC.write(add_entry_to_hosts(host_value_public)) 125 | SCRIPT_OUT_PUBLIC.write("\n") 126 | 127 | 128 | SCRIPT_OUT_PRIVATE.write(clean_known_hosts(docker_name)) 129 | SCRIPT_OUT_PRIVATE.write(remove_line) 130 | SCRIPT_OUT_PRIVATE.write(add_entry_to_hosts(host_value_private)) 131 | SCRIPT_OUT_PRIVATE.write("\n") 132 | 133 | SCRIPT_CLEANER.write(clean_known_hosts(docker_name)) 134 | SCRIPT_CLEANER.write(remove_line) 135 | 
SCRIPT_CLEANER.write("\n") 136 | 137 | if DO_PUBLIC: 138 | call("chmod +x", SCRIPT_OUT_PUBLIC_FN) 139 | SCRIPT_OUT_PUBLIC.close() 140 | 141 | call("chmod +x", SCRIPT_OUT_PRIVATE_FN) 142 | SCRIPT_OUT_PRIVATE.close() 143 | call("chmod +x", SCRIPT_CLEANER_FN) 144 | SCRIPT_CLEANER.close() 145 | -------------------------------------------------------------------------------- /scripts/install.sh: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env bash 2 | 3 | INSTALL_REPO=git@github.com:Kunstmaan/hyperledger-fabric-network-setup.git 4 | 5 | INSTALL_DIR=$HOME/.hyperledger-fabric-network-setup 6 | 7 | BINARY_NAME=kuma-hf-network 8 | 9 | if ! command -v git > /dev/null 2>&1; then 10 | echo """Git is not detected on this system. This script needs it as a dependency.""" 11 | exit 1 12 | fi 13 | 14 | mkdir -p $INSTALL_DIR 15 | if [ -d "$INSTALL_DIR/.git" ]; then 16 | echo "=> hyperledger-fabric-network-setup is already installed in $INSTALL_DIR, trying to update the script" 17 | pushd "$INSTALL_DIR" && git pull $INSTALL_REPO && popd 18 | else 19 | echo "Downloading hyperledger-fabric-network-setup in $INSTALL_DIR..." 20 | pushd "$INSTALL_DIR" && git clone $INSTALL_REPO . && popd 21 | fi 22 | 23 | rm -rf /usr/local/bin/$BINARY_NAME 24 | ln -s $INSTALL_DIR/scripts/$BINARY_NAME.py /usr/local/bin/$BINARY_NAME 25 | chmod +x /usr/local/bin/$BINARY_NAME 26 | 27 | if ! command -v python > /dev/null 2>&1; then 28 | echo """Python is not detected on this system. This script needs it as a dependency. 29 | Install link: https://www.python.org/downloads/ 30 | """ 31 | fi 32 | 33 | if ! command -v vagrant > /dev/null 2>&1; then 34 | echo """Vagrant is not detected on this system. This script needs it as a dependency. 
35 | Install link: https://www.vagrantup.com/docs/installation/ 36 | 37 | You also need the following vagrant plugins: vagrant-docker-compose and vagrant-aws 38 | You can install them with: 39 | vagrant plugin install vagrant-docker-compose && vagrant plugin install vagrant-aws 40 | """ 41 | else 42 | vagrant plugin list | grep 'vagrant-aws' &> /dev/null 43 | if [ $? != 0 ]; then 44 | echo "Installing vagrant plugin vagrant-aws" 45 | vagrant plugin install vagrant-aws 46 | else 47 | echo "Vagrant plugin vagrant-aws is already installed" 48 | fi 49 | 50 | vagrant plugin list | grep 'vagrant-docker-compose' &> /dev/null 51 | if [ $? != 0 ]; then 52 | echo "Installing vagrant plugin vagrant-docker-compose" 53 | vagrant plugin install vagrant-docker-compose 54 | else 55 | echo "Vagrant plugin vagrant-docker-compose is already installed" 56 | fi 57 | fi 58 | 59 | if ! command -v node > /dev/null 2>&1; then 60 | echo """NPM is not detected on this system. This script needs it as a dependency. 61 | Install link: https://docs.npmjs.com/getting-started/installing-node 62 | """ 63 | fi 64 | 65 | echo "Done. Type kuma-hf-network -h for help." 
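install.sh above gates each step on `command -v` checks before touching the system. As an illustration only (not part of the repo), the same dependency-check pattern can be sketched in Python with `shutil.which`; the helper name and the tool-to-link table below mirror the script's checks but are otherwise hypothetical:

```python
#!/usr/bin/env python3
# Illustrative sketch, not part of the repo: the `command -v` dependency
# checks from install.sh, expressed with shutil.which.
import shutil

# Tools and install links mirror the ones install.sh reports.
REQUIRED_TOOLS = {
    "git": "https://git-scm.com/downloads",
    "python": "https://www.python.org/downloads/",
    "vagrant": "https://www.vagrantup.com/docs/installation/",
    "node": "https://docs.npmjs.com/getting-started/installing-node",
}

def missing_dependencies(tools=REQUIRED_TOOLS):
    """Return {tool: install_link} for every tool not found on PATH."""
    return {name: link for name, link in tools.items()
            if shutil.which(name) is None}

if __name__ == "__main__":
    for name, link in missing_dependencies().items():
        print("{0} is not detected on this system. "
              "Install link: {1}".format(name, link))
```

Like the shell version, this only reports what is missing; it does not attempt to install anything.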
66 | -------------------------------------------------------------------------------- /scripts/kuma-hf-network.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python2 2 | 3 | """ 4 | hyperledgerNetworkTool -h 5 | 6 | Main tool for creation of hyperledger network artifacts, 7 | and for creating the network on AWS 8 | """ 9 | 10 | import os 11 | import sys 12 | from argparse import ArgumentParser 13 | from argparse import RawTextHelpFormatter 14 | from argparse import Namespace 15 | from argparse import Action 16 | 17 | install_script = "https://raw.githubusercontent.com/Kunstmaan/hyperledger-fabric-network-setup/master/scripts/install.sh?token=AG6ftlJwD7jEr7kZph_QEsqncTTeroBFks5aZc1pwA%3D%3D" 18 | DEBUG = False 19 | PWD = os.path.dirname(os.path.realpath(__file__)) 20 | 21 | def to_pwd(script): 22 | """Converts the script path to the correct path""" 23 | return PWD + "/" + script 24 | 25 | def fail(msg): 26 | """Prints the error message and exits""" 27 | sys.stderr.write('\033[91m' + msg + '\033[0m\n') 28 | exit(1) 29 | 30 | def call(script, *args): 31 | """Calls the given script using the args""" 32 | 33 | cmd = script + " " + " ".join(args) 34 | if DEBUG: 35 | print cmd 36 | if os.system(cmd) != 0: 37 | fail("ERROR: An error occurred while executing " + cmd + ". See above for details.") 38 | 39 | def install_fabric_tools(): 40 | print "Installing fabric tools..." 
41 | call(to_pwd('provisioning/install_fabric_tools.sh')) 42 | print "Done" 43 | 44 | def gen_cryptographic_material(parsed_args): 45 | if parsed_args.onlyChannelArtifacts: 46 | gen_channel_artifacts(parsed_args) 47 | exit(0) 48 | crypto_config = parsed_args.crypto_config 49 | gen_path = os.path.abspath(parsed_args.genPath) 50 | install_fabric_tools() 51 | 52 | configtxbase = parsed_args.configtxBase 53 | no_override = parsed_args.noOverride 54 | if configtxbase: 55 | configtxbase = '--configtxBase {0}'.format(configtxbase) 56 | else: 57 | configtxbase = '' 58 | 59 | if not no_override: 60 | print "Cleaning pre-existing generated files..." 61 | call('rm -rfd {0}'.format(gen_path)) 62 | call('mkdir -p {0}'.format(gen_path)) 63 | no_override = '' 64 | else: 65 | no_override = '--noOverride' 66 | 67 | user = '' 68 | name = '' 69 | org = '' 70 | attributes = '' 71 | if parsed_args.user: 72 | print "Creating a user certificate" 73 | user = '--user' 74 | name = '--name {0}'.format(parsed_args.name) 75 | org = '--org {0}'.format(parsed_args.org) 76 | if parsed_args.user_attrs: 77 | attr_values = [k+"="+str(v) for k, v in parsed_args.user_attrs.items()] 78 | attributes = '--attributes {0}'.format(",".join(attr_values)) 79 | 80 | print "Generating cryptographic material..." 81 | call('export GEN_PATH={0} &&'.format(gen_path), to_pwd('crypto_tools/cryptogen.py'), crypto_config, no_override, configtxbase, user, name, org, attributes) 82 | # This also generates the channel artifacts, if changes were made. 
83 | print "Done" 84 | 85 | def gen_channel_artifacts(parsed_args): 86 | """Forces generation of channel artifacts""" 87 | crypto_config = parsed_args.crypto_config 88 | gen_path = os.path.abspath(parsed_args.genPath) 89 | configtxbase = parsed_args.configtxBase 90 | if configtxbase: 91 | configtxbase = '--configtxBase {0}'.format(configtxbase) 92 | else: 93 | configtxbase = '' 94 | 95 | print 'USING configtx: '+configtxbase 96 | 97 | install_fabric_tools() 98 | print "Generating channel artifacts..." 99 | call('export GEN_PATH={0} &&'.format(gen_path), to_pwd('fabric_artifacts/gen_configtx.py'), crypto_config, configtxbase) 100 | print "Done" 101 | 102 | 103 | def network_down(parsed_args): 104 | aws_config = os.path.abspath(parsed_args.aws_config) 105 | gen_path = os.path.abspath(parsed_args.genPath) 106 | print 'May ask sudo password at this point to clean /etc/hosts file of previously created entries. If you never brought the network up, this won\'t make any changes' 107 | clean_hosts = gen_path + '/scripts/clean_hosts.sh' 108 | if os.path.isfile(clean_hosts): 109 | call(clean_hosts) 110 | call('export AWS_CONFIG={0} && export GEN_PATH={1} && pushd {2} && vagrant destroy -f && popd'.format(aws_config, gen_path, PWD)) 111 | 112 | def network_up(parsed_args): 113 | crypto_config = os.path.abspath(parsed_args.crypto_config) 114 | aws_config = os.path.abspath(parsed_args.aws_config) 115 | gen_path = os.path.abspath(parsed_args.genPath) 116 | print 'May ask sudo password at this point to edit /etc/hosts file with the names of the nodes, to be able to resolve them to public ips.' 
117 | call('export GEN_PATH={0} &&'.format(gen_path), to_pwd('get_hosts_scripts.py'), aws_config, 'False') # To get the private ips 118 | call('export AWS_CONFIG={0} && export GEN_PATH={1} && pushd {2} && vagrant up && popd'.format(aws_config, gen_path, PWD)) 119 | call('export GEN_PATH={0} &&'.format(gen_path), to_pwd('get_hosts_scripts.py'), aws_config, 'True') # To get the public ips 120 | call('export GEN_PATH={0} &&'.format(gen_path), to_pwd('create_remote_scripts.py'), crypto_config, aws_config) 121 | call(gen_path + '/scripts/set_hosts_public.sh') 122 | call(gen_path + '/scripts/create_and_join_channels_remote.sh') 123 | call(gen_path + '/scripts/update_remote_chaincodes.sh') 124 | 125 | 126 | def update_chaincodes(parsed_args): 127 | gen_path = os.path.abspath(parsed_args.genPath) 128 | 129 | call(gen_path + '/scripts/update_remote_chaincodes.sh') 130 | 131 | def bootstrap(parsed_args): 132 | base_path = os.path.normpath(os.path.join('.', parsed_args.path)) 133 | conf_path = os.path.join(base_path, 'configuration') 134 | gen_path = os.path.join(base_path, 'generated') 135 | call('rm -rfd', conf_path) 136 | call('cp -r', to_pwd('../configuration'), conf_path) 137 | print 'You can start modifying the files inside {0} and then generate all the artifacts using "kuma-hf-network generate {1} --genPath {2}"'.format(conf_path, os.path.join(conf_path, 'crypto_config.yaml'), gen_path) 138 | 139 | def update_tool(parsed_args): 140 | call("curl -fsSL", install_script, " | bash") 141 | 142 | PARSER = ArgumentParser(description="""This tool allows you to create a certificate structure for hyperledger fabric, 143 | and then to use that structure to boot a multinode network on Amazon Web Services. 144 | Once the network is running, this tool also allows you to remotely upgrade 145 | chaincode running on the network. 
146 | """, formatter_class=RawTextHelpFormatter) 147 | SUBPARSERS = PARSER.add_subparsers() 148 | 149 | ####################################### 150 | # BOOTSTRAP 151 | ####################################### 152 | PARSER_BOOTSTRAP = SUBPARSERS.add_parser('bootstrap', help="""generate example configuration.""") 153 | PARSER_BOOTSTRAP.add_argument('path', type=str, help='the path where the example configuration should be bootstrapped.', default='.') 154 | 155 | PARSER_BOOTSTRAP.set_defaults(func=bootstrap) 156 | 157 | ####################################### 158 | # GENERATE 159 | ####################################### 160 | PARSER_GEN = SUBPARSERS.add_parser('generate', help="""generate certificate structure, initial channel blocks, 161 | hyperledger fabric artifacts and docker configurations.""") 162 | PARSER_GEN.add_argument('crypto_config', type=str, help='cryptographic configuration of the network, as YAML file. See the provided example for details.') 163 | 164 | PARSER_GEN.add_argument('--genPath', '-g', type=str, help='Where the generated files should be saved (default: ./generated)', default='./generated') 165 | PARSER_GEN.add_argument('--noOverride', help='Do not override existing files (default: override files). Useful if you want to add more users. If this is not set, will delete the generated folder and generate everything from scratch', action='store_true') 166 | PARSER_GEN.add_argument('--onlyChannelArtifacts', help='Only generate hyperledger fabric channel artifacts. Will not generate the certificate structure, assumes this exists already. Only use this if you made manual changes to the generated folder, which requires new channel artifacts to be generated.', action='store_true') 167 | PARSER_GEN.add_argument('--configtxBase', '-c', help='path to configtx hyperledger fabric config file, without the organisations and profiles (they will be generated). 
Defaults to a simple orderer configuration.', action='store') 168 | PARSER_GEN.set_defaults(user=False) 169 | PARSER_GEN.set_defaults(func=gen_cryptographic_material) 170 | 171 | ####################################### 172 | # GENERATE USER 173 | ####################################### 174 | user_attrs = {} 175 | class StoreDictKeyPair(Action): 176 | def __call__(self, parser, namespace, values, option_string=None): 177 | for kv in values.split(","): 178 | k,v = kv.split("=") 179 | user_attrs[k] = v 180 | setattr(namespace, self.dest, user_attrs) 181 | 182 | PARSER_GEN = SUBPARSERS.add_parser('generate-user', help='generate certificate structure for a new user') 183 | PARSER_GEN.add_argument('name', type=str, help='the name of the user') 184 | PARSER_GEN.add_argument('org', type=str, help='the organisation of the user') 185 | PARSER_GEN.add_argument('crypto_config', type=str, help='cryptographic configuration of the network, as YAML file. See the provided example for details.') 186 | 187 | PARSER_GEN.add_argument('--genPath', '-g', type=str, help='Where the generated files should be saved (default: ./generated)', default='./generated') 188 | PARSER_GEN.add_argument('--attributes', help='the attributes for the user certificate', dest="user_attrs", action=StoreDictKeyPair, metavar="KEY1=VAL1,KEY2=VAL2...", default={}) 189 | PARSER_GEN.set_defaults(user=True) 190 | PARSER_GEN.set_defaults(onlyChannelArtifacts=False) 191 | PARSER_GEN.set_defaults(configtxBase=None) 192 | PARSER_GEN.set_defaults(noOverride=True) 193 | PARSER_GEN.set_defaults(func=gen_cryptographic_material) 194 | 195 | ####################################### 196 | # NETWORK-UP 197 | ####################################### 198 | PARSER_UP = SUBPARSERS.add_parser('network-up', help="""Bring the network up. 
Requires the artifacts to be generated 199 | does: 200 | * vagrant up 201 | * modifies /etc/hosts to resolve hostnames 202 | * installs hyperledger fabric channels 203 | * installs first version of chaincode""") 204 | PARSER_UP.add_argument('crypto_config', type=str, help='cryptographic configuration of the network, as YAML file.') 205 | PARSER_UP.add_argument('aws_config', type=str, help='AWS network configuration, as JSON file.') 206 | PARSER_UP.add_argument('--genPath', '-g', type=str, help='Where the generated files are (default: ./generated)', default='./generated') 207 | PARSER_UP.set_defaults(func=network_up) 208 | 209 | ####################################### 210 | # NETWORK-DOWN 211 | ####################################### 212 | PARSER_DOWN = SUBPARSERS.add_parser('network-down', help="""Bring the network down. 213 | does: 214 | * vagrant destroy -f 215 | * cleans /etc/hosts""") 216 | PARSER_DOWN.add_argument('aws_config', type=str, help='AWS network configuration, as JSON file.') 217 | PARSER_DOWN.add_argument('--genPath', '-g', type=str, help='Where the generated files are (default: ./generated)', default='./generated') 218 | PARSER_DOWN.set_defaults(func=network_down) 219 | 220 | ####################################### 221 | # UPDATE-CHAINCODES 222 | ####################################### 223 | PARSER_UPDATE_CHAINCODES = SUBPARSERS.add_parser('update-chaincodes', help='Updates chaincodes on the network. 
Only run this if the network is up') 224 | PARSER_UPDATE_CHAINCODES.add_argument('--genPath', '-g', type=str, help='Where the generated files are (default: ./generated)', default='./generated') 225 | PARSER_UPDATE_CHAINCODES.set_defaults(func=update_chaincodes) 226 | 227 | ####################################### 228 | # UPDATE 229 | ####################################### 230 | PARSER_UPDATE = SUBPARSERS.add_parser('update', help='Update the tool') 231 | PARSER_UPDATE.set_defaults(func=update_tool) 232 | 233 | 234 | ARGS = PARSER.parse_args() 235 | ARGS.func(ARGS) 236 | -------------------------------------------------------------------------------- /scripts/notes.txt: -------------------------------------------------------------------------------- 1 | To see logs: docker logs <container_name> 2 | 3 | 4 | Error: 5 | SSL certificate can't be used for signing. 6 | Fix: 7 | Check if the path of the certificate is correct. 8 | --> Check path of volume mounts in docker compose files. 9 | --> Check if certificate exists 10 | 11 | 12 | Error: 13 | Can't create channel. TLS handshake does not look like a TLS handshake 14 | Fix: 15 | Make sure you are using the TLS-enabled schemes (https, grpcs) instead of http or grpc 16 | 17 | 18 | Error: 19 | start-could not start container: API error (404): {"message":"No such network: artifacts_default"} 20 | Fix: 21 | The issue comes from the CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE in base.yaml. Read https://docs.docker.com/compose/networking/ 22 | Set the value to "bridge" 23 | 24 | Error: 25 | Failed obtaining connection: Could not connect to any of the endpoints: [orderer.example.com:7050]. The issue is that orderer.example.com is not a resolvable domain name, so the instance does not know where to find it. 26 | Fix: 27 | Create an overlay network that spans the docker containers, using a key-value store such as Consul. 28 | Run a slave of the key-value store on each of the container's hosts. Then connect the containers to that overlay network. 
29 | See https://luppeng.wordpress.com/2016/05/03/setting-up-an-overlay-network-on-docker-without-swarm/ 30 | 31 | 32 | Error: 33 | When doing peer join channel on the CLI, the following error appears: Caused by: x509: certificate signed by unknown authority 34 | Fix: 35 | Change CORE_PEER_TLS_ROOTCERT_FILE to include all intermediate certificates coming from the peer (order is important. Root is last, peer is first). 36 | ---> cat peer-tls.cert intermediate1-tls.cert intermediate2-tls.cert root-tls.cert > combined.cert 37 | 38 | 39 | Error: 40 | When trying to instantiate Node.js chaincode, the peer times out. 41 | Fix: 42 | Check that the DNS names are all lowercase 43 | 44 | 45 | Error: 46 | When trying to run migrations via the app, or anything via the app, it times out and the orderer shows: 47 | 2017-12-29 07:34:58.696 UTC [grpc] Printf -> DEBU 868 grpc: Server.Serve failed to complete security handshake from "192.168.100.7:38518": tls: first record does not look like a TLS handshake 48 | 2017-12-29 07:34:58.697 UTC [grpc] Printf -> DEBU 869 grpc: Server.Serve failed to complete security handshake from "192.168.100.7:38520": tls: first record does not look like a TLS handshake 49 | 2017-12-29 07:34:58.699 UTC [grpc] Printf -> DEBU 86a grpc: Server.Serve failed to complete security handshake from "192.168.100.7:38522": tls: first record does not look like a TLS handshake 50 | 2017-12-29 07:34:58.701 UTC [grpc] Printf -> DEBU 86b grpc: Server.Serve failed to complete security handshake from "192.168.100.7:38524": tls: first record does not look like a TLS handshake 51 | 2017-12-29 07:34:58.703 UTC [grpc] Printf -> DEBU 86c grpc: Server.Serve failed to complete security handshake from "192.168.100.7:38526": tls: first record does not look like a TLS handshake 52 | 2017-12-29 07:34:58.704 UTC [grpc] Printf -> DEBU 86d grpc: Server.Serve failed to complete security handshake from "192.168.100.7:38528": tls: first record does not look like a TLS handshake 53 | Fix: 54 
| Make sure the app can resolve hostnames. In particular, make sure it ran set_hosts_private.sh and that /etc/hosts has been modified. 55 | 56 | 57 | Error: 58 | Command hangs when cloning/pulling from github. 59 | Fix: 60 | Did you check the AWS firewall? You need SSH (port 22) and Git (port 9418) outbound access to github.com. If you're cloning over HTTPS, you need HTTPS outbound access instead. 61 | 62 | ############################# 63 | Adding a peer (using easyrsa) 64 | ############################# 65 | 1. Add the peer configuration in app/network-config.json: 66 | Tip: Make sure the ports don't overlap with another peer, if running all Docker containers on a single machine 67 | "peer3": { 68 | "requests": "grpcs://localhost:7061", 69 | "events": "grpcs://localhost:7063", 70 | "server-hostname": "peer2.org1.example.com", 71 | "tls_cacerts": "../artifacts/channel/crypto-config/peerOrganizations/org1.example.com/peers/peer2.org1.example.com/tls/ca.crt" 72 | } 73 | 2. Add the peer in docker-compose, from the settings in app/network-config.json 74 | 3. Install easy-rsa and enable ECDSA 75 | 3a. copy vars.example to vars 76 | 3b. Edit vars: 77 | - Uncomment set_var EASYRSA_ALGO and set it to "ec" 78 | - Uncomment set_var EASYRSA_CURVE and set it to "prime256v1" 79 | 4. Configure certificates 80 | 4a. Copy over a config directory, for example peer1.org1.example.com 81 | 4b. There are 4 files we need to change from the configuration folder of another peer: 82 | - /msp/keystore/ 83 | - /msp/signcerts/ 84 | - /tls/ 85 | - /tls/ 86 | 87 | 4c. 
Change the CA certificate & keyfile 88 | - delete any existing pki folder 89 | - ./easyrsa init-pki 90 | - copy and rename /ca/ to /pki/ca.crt 91 | - copy and rename /ca/ to /pki/private/ca.key 92 | - ./easyrsa gen-req nopass # Generates the request, use nopass to avoid encrypting the key 93 | - Enter the host name of the peer 94 | - ./easyrsa sign-req client 95 | - Confirm by typing yes 96 | - copy /pki/private/.key to /msp/keystore/ 97 | - copy and rename /pki/issued/.crt to /msp/signcerts/-cert.pem 98 | 4d. Change the TLS certificate & keyfile 99 | - repeat 4c but sign-req for server (not client) 100 | - copy and rename /pki/private/.key to /tls/server.key 101 | - copy and rename /pki/issued/.crt to /tls/server.crt 102 | 103 | 5. That's it! 104 | 105 | 106 | 107 | 108 | ############################# 109 | TODO 110 | ############################# 111 | - Add shared admins between orgs 112 | - Why do we need to modify /etc/hosts in the docker image when running fabcar? 113 | It should resolve the name with docker DNS... 114 | -------------------------------------------------------------------------------- /scripts/provisioning/configure_ssh.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | set -eu 5 | 6 | echo "Configuring ssh..." 
7 | 8 | CURRENTUSER=$1 9 | echo "CURRENTUSER=$CURRENTUSER" 10 | 11 | 12 | echo "Creating /vagrant/ssh folder" 13 | mkdir -p /vagrant/ssh 14 | 15 | echo "Moving private key to /vagrant/ssh/id_rsa" 16 | mv /home/$CURRENTUSER/id_rsa /vagrant/ssh/id_rsa 17 | ln -s /vagrant/ssh/id_rsa /home/$CURRENTUSER/.ssh/id_rsa 18 | echo "Changing permissions of private key" 19 | chown $CURRENTUSER /home/$CURRENTUSER/.ssh/id_rsa 20 | chmod 600 /home/$CURRENTUSER/.ssh/id_rsa 21 | echo "Contents of /vagrant/ssh:" 22 | ls /vagrant/ssh 23 | echo "Contents of /home/$CURRENTUSER/.ssh:" 24 | ls /home/$CURRENTUSER/.ssh 25 | echo "Contents of /home/$CURRENTUSER:" 26 | ls /home/$CURRENTUSER 27 | echo "Done with private key" 28 | 29 | echo "Moving public key to /vagrant/ssh/id_rsa.pub" 30 | mv /home/$CURRENTUSER/id_rsa.pub /vagrant/ssh/id_rsa.pub 31 | ln -s /vagrant/ssh/id_rsa.pub /home/$CURRENTUSER/.ssh/id_rsa.pub 32 | 33 | echo "Changing permissions of public key" 34 | chown $CURRENTUSER /home/$CURRENTUSER/.ssh/id_rsa.pub 35 | chmod 644 /home/$CURRENTUSER/.ssh/id_rsa.pub 36 | echo "Contents of /vagrant/ssh:" 37 | ls /vagrant/ssh 38 | echo "Contents of /home/$CURRENTUSER/.ssh:" 39 | ls /home/$CURRENTUSER/.ssh 40 | echo "Contents of /home/$CURRENTUSER:" 41 | ls /home/$CURRENTUSER 42 | echo "Done with public key" 43 | 44 | echo "Changing permissions of /vagrant/ssh" 45 | chown $CURRENTUSER /vagrant/ssh 46 | chown $CURRENTUSER /vagrant/ssh/* 47 | echo "Done" 48 | -------------------------------------------------------------------------------- /scripts/provisioning/install.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | # Designed to run on an AWS node, to install missing packages for hyperledger. 5 | # Not used anymore. You can use this manually if you wish. 6 | 7 | # Exit immediately if a pipeline returns non-zero status. 
8 | # The result of a pipeline is that of the last command 9 | # Unset variables are considered an error 10 | set -euv -o pipefail 11 | 12 | HL_VERSION=$1 13 | 14 | 15 | # Remove interaction 16 | export DEBIAN_FRONTEND=noninteractive 17 | 18 | puts() { 19 | local GREEN='\033[0;32m' 20 | local NC='\033[0m' 21 | echo -e "${GREEN}$*${NC}" 22 | } 23 | 24 | updatePath() { 25 | # NOTE Sourcing profile does not work during provisioning 26 | puts "Adding $1 to PATH" 27 | echo "$PATH"|grep -q "$1" || { echo "PATH=\$PATH:$1" >> /home/ubuntu/.profile; } 28 | export PATH=$PATH:$1 29 | } 30 | 31 | updateProfile() { 32 | if [ $# -ne 2 ]; then 33 | echo "updateProfile requires 2 arguments: VAR and VAL" 34 | exit 1 35 | fi 36 | # NOTE Sourcing profile does not work during provisioning 37 | puts "Exporting $1=$2" 38 | grep -q "export $1=$2" < /home/ubuntu/.profile || { echo "export $1=$2" >> /home/ubuntu/.profile; } 39 | export "$1"="$2" 40 | } 41 | 42 | puts "Enabling colors for the shell" 43 | sed -i -e 's/#force_color_prompt=yes/force_color_prompt=yes/' ~/.bashrc 44 | 45 | grep -q "alias up=\"cd ..\"" < /home/ubuntu/.profile || { echo "alias up=\"cd ..\"" >> /home/ubuntu/.profile; } # Convenience up command 46 | 47 | puts "Installing libtool and libltdl-dev..." 48 | apt-get install -y libtool libltdl-dev 49 | puts "Done." 50 | 51 | # Install git 52 | if command -v git > /dev/null 2>&1; then puts "Git is already installed. Skipping..."; else { 53 | puts "Installing Git..." 54 | apt-get install -y git 55 | puts "Done." 56 | }; fi 57 | 58 | # Install Node 59 | if command -v node > /dev/null 2>&1; then puts "Nodejs is already installed. Skipping..."; else { 60 | puts "Installing Node..." 61 | curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash - 62 | apt-get install -y nodejs build-essential 63 | puts "Done." 64 | }; fi 65 | 66 | # Install Go 67 | if command -v go > /dev/null 2>&1; then puts "Go is already installed. Skipping..."; else { 68 | puts "Installing Go..." 
69 | GOREL=go1.7.4.linux-amd64.tar.gz 70 | wget -q https://storage.googleapis.com/golang/$GOREL 71 | tar xfz $GOREL 72 | rm -rf /usr/local/go 73 | mv go /usr/local/go 74 | rm -f $GOREL 75 | updateProfile GOROOT /usr/local/go 76 | mkdir ~/go > /dev/null 2>&1 77 | updateProfile GOPATH /home/ubuntu/go 78 | updatePath "\$GOROOT/bin" 79 | puts "Done." 80 | }; fi 81 | 82 | hlisinstalled() { 83 | # Test for hyperledger installation 84 | a="command -v configtxgen > /dev/null 2>&1" 85 | b="command -v configtxlator > /dev/null 2>&1" 86 | c="command -v cryptogen > /dev/null 2>&1" 87 | d="command -v orderer > /dev/null 2>&1" 88 | e="command -v peer > /dev/null 2>&1" 89 | $a && $b && $c && $d && $e 90 | } 91 | 92 | # Install hyperledger binaries 93 | if hlisinstalled > /dev/null 2>&1; then puts "Hyperledger is already installed. Skipping..."; else { 94 | puts "Installing Hyperledger and pulling Docker images..." 95 | HLPATH=/usr/local/hyperledger 96 | rm -rf $HLPATH 97 | mkdir $HLPATH 98 | pushd $HLPATH > /dev/null 99 | curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap-$HL_VERSION.sh | bash 100 | popd > /dev/null 101 | updatePath $HLPATH/bin 102 | puts "Done." 103 | }; fi 104 | 105 | if command -v docker-compose > /dev/null 2>&1; then puts "Docker-compose is already installed. Skipping..."; else { 106 | puts "Installing Docker-compose..." 107 | apt-get install -y docker-compose 108 | puts "Done." 109 | }; fi 110 | 111 | puts "Installing go dependencies for fabric..." 112 | export GOPATH=/home/ubuntu/go 113 | /usr/local/go/bin/go get -u --tags nopkcs11 github.com/hyperledger/fabric/core/chaincode/shim 114 | puts "Done." 
115 | -------------------------------------------------------------------------------- /scripts/provisioning/install_fabric_tools.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | # Installs configtxgen, configtxlator, cryptogen, orderer and peer 5 | 6 | # Exit immediately if a pipeline returns non-zero status. 7 | # Result of pipeline is the last command 8 | # Unset variables are considered an error 9 | set -eu -o pipefail 10 | 11 | # Remove interaction 12 | export DEBIAN_FRONTEND=noninteractive 13 | 14 | puts() { 15 | local GREEN='\033[0;32m' 16 | local NC='\033[0m' 17 | echo -e "${GREEN}$*${NC}" 18 | } 19 | 20 | export VERSION=1.0.3 21 | ARCH=$(echo "$(uname -s|tr '[:upper:]' '[:lower:]'|sed 's/mingw64_nt.*/windows/')-$(uname -m | sed 's/x86_64/amd64/g')" | awk '{print tolower($0)}') 22 | export ARCH 23 | # Set MARCH variable, i.e. ppc64le,s390x,x86_64,i386 24 | 25 | updatePath() { 26 | # NOTE Sourcing profile does not work during provisioning 27 | puts "Adding $1 to PATH" 28 | echo "$PATH"|grep -q "$1" || { echo "PATH=\$PATH:$1" >> $HOME/.profile; } 29 | export PATH=$PATH:$1 30 | } 31 | 32 | hlisinstalled() { 33 | # Test for hyperledger installation 34 | a="command -v configtxgen > /dev/null 2>&1" 35 | b="command -v configtxlator > /dev/null 2>&1" 36 | c="command -v cryptogen > /dev/null 2>&1" 37 | d="command -v orderer > /dev/null 2>&1" 38 | e="command -v peer > /dev/null 2>&1" 39 | $a && $b && $c && $d && $e 40 | } 41 | 42 | # Install hyperledger binaries 43 | if hlisinstalled > /dev/null 2>&1; then puts "Hyperledger is already installed.
Skipping..."; else { 44 | puts "Installing Hyperledger" 45 | HLPATH=/usr/local/hyperledger 46 | rm -rf $HLPATH 47 | mkdir $HLPATH 48 | pushd $HLPATH > /dev/null 49 | echo "===> Downloading platform binaries" 50 | curl -k https://nexus.hyperledger.org/content/repositories/releases/org/hyperledger/fabric/hyperledger-fabric/${ARCH}-${VERSION}/hyperledger-fabric-${ARCH}-${VERSION}.tar.gz | tar xz 51 | popd > /dev/null 52 | updatePath $HLPATH/bin 53 | puts "Done." 54 | }; fi 55 | -------------------------------------------------------------------------------- /scripts/provisioning/stopDocker.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | 3 | # Stops and removes all running docker containers 4 | 5 | # Exit immediately if a pipeline returns non-zero status. 6 | # Result of pipeline is the last command 7 | # Unset variables are considered an error 8 | set -eu -o pipefail 9 | 10 | # Remove interaction 11 | export DEBIAN_FRONTEND=noninteractive 12 | 13 | function dkcl(){ 14 | if command -v docker > /dev/null 2>&1; then { 15 | CONTAINER_IDS=$(docker ps -aq) 16 | echo 17 | if [ -z "$CONTAINER_IDS" ] || [ "$CONTAINER_IDS" = " " ]; then 18 | echo "========== No containers available for deletion ==========" 19 | else 20 | docker rm -f $CONTAINER_IDS 21 | fi 22 | echo 23 | }; fi 24 | } 25 | 26 | dkcl 27 | -------------------------------------------------------------------------------- /shared/chaincode_tools/compile_chaincode.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | set -eu -o pipefail 5 | 6 | puts() { 7 | local GREEN='\033[0;32m' 8 | local NC='\033[0m' 9 | echo -e "${GREEN}$*${NC}" 10 | } 11 | 12 | if [ $# -ne 1 ]; 13 | then 14 | echo "" 15 | echo "Usage: " 16 | echo " compile_chaincode chaincode_path" 17 | echo "" 18 | exit 1 19 | fi 20 | 21 | chaincode_path=$1 22 | puts "Compiling chaincode..."
23 | 24 | pushd $GOPATH/src/$chaincode_path > /dev/null 25 | puts "==> Getting shim package from hyperledger fabric..." 26 | go get -u --tags nopkcs11 github.com/hyperledger/fabric/core/chaincode/shim 27 | puts "==> Compiling..." 28 | go build --tags nopkcs11 29 | popd > /dev/null 30 | 31 | puts "Done. Chaincode $chaincode_path compiled !" 32 | -------------------------------------------------------------------------------- /shared/chaincode_tools/pull_chaincode.sh: -------------------------------------------------------------------------------- 1 | #!/bin/bash 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | set -eu -o pipefail 5 | 6 | REPO=$1 7 | 8 | if [ ! -d "$GOPATH/src/.git" ]; then 9 | mkdir -p $GOPATH/src 10 | echo "Cloning repository $REPO..." 11 | rm -rfd chaincodetmp 12 | mkdir -p chaincodetmp 13 | git clone $REPO ./chaincodetmp 14 | 15 | echo "Moving files into $GOPATH/src/ ... " 16 | mv chaincodetmp/* $GOPATH/src/ 17 | mv chaincodetmp/.[!.]* $GOPATH/src/ # Copy hidden files too 18 | 19 | echo "Removing unused files ... " 20 | rm -rfd chaincodetmp 21 | fi 22 | pushd $GOPATH/src/ 23 | echo "Pulling repository..." 
24 | git pull 25 | echo "Done" 26 | popd 27 | -------------------------------------------------------------------------------- /shared/chaincode_tools/update_chaincodes.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python2 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | """ 4 | This script pulls the latest version of the chaincode, reads $GOPATH/src/chaincodes.json, 5 | and installs/update all the chaincodes according to the config file 6 | """ 7 | 8 | import os 9 | import sys 10 | import json 11 | import subprocess 12 | import re 13 | from multiprocessing.pool import ThreadPool 14 | from argparse import ArgumentParser 15 | from argparse import RawTextHelpFormatter 16 | 17 | PARSER = ArgumentParser(description="""Instantiates new chaincodes or upgrades existing ones. 18 | Chaincodes should be in $GOPATH/src/chaincodes (optional for nodejs, see below). Supports nodejs and go chaincodes. 19 | The script looks for a file $GOPATH/src/chaincodes.json which must contain the paths to the chaincodes, relative to src/chaincodes/ 20 | 21 | * For nodejs chaincode only: 22 | > Compiled Nodejs chaincodes will be in $GOPATH/src/build. 23 | > If you provide a repository, this will pull from it and save it in $GOPATH/src/, and then run `npm run build`, which must create the $GOPATH/src/build folder, containing chaincodes. 24 | > If no repository is given, the script will only look in $GOPATH/src/build, making the contents of $GOPATH/src/chaincodes optional. 25 | """, formatter_class=RawTextHelpFormatter) 26 | PARSER.add_argument('--dryrun', help='Shows which commands would be run, without running them', action='store_true') 27 | PARSER.add_argument('--repository', '-r', type=str,help='the repository from which the chaincode should be fetched. 
If not given, assumes chaincodes are in $GOPATH/src/build/') 28 | PARSER.add_argument('--chaincodeBasePath', '-p', type=str,help='optional path of the chaincodes inside the repository, only needed when a repository is given, default will be the root of the repository. This path is where the package.json or chaincodes.json is.', default='.') 29 | PARSER.add_argument('--forceNpmInstall', '-f', help='forces the script to run npm install on each chaincode. By default it will only run npm install when the node_modules directory for that chaincode is missing', action='store_true') 30 | PARSER.add_argument('--build', '-b', help='chaincode needs to be built first, this will execute "npm run build"', action='store_true') 31 | 32 | args = PARSER.parse_args() 33 | DRYRUN = args.dryrun 34 | 35 | CHAINCODE_BASE_PATH = os.path.normpath(os.path.join(os.environ['GOPATH'] + '/src', args.chaincodeBasePath)) 36 | CONF_FILE = CHAINCODE_BASE_PATH + '/chaincodes.json' 37 | CONF_IS_JSON_PACKAGE = False 38 | 39 | if not os.path.isfile(CONF_FILE): 40 | CONF_FILE = CHAINCODE_BASE_PATH + '/package.json' 41 | CONF_IS_JSON_PACKAGE = True 42 | 43 | def fail(msg): 44 | """Prints the error message and exits""" 45 | sys.stderr.write(msg) 46 | exit(1) 47 | 48 | def call(script, *args): 49 | """Calls the given script using the args""" 50 | 51 | cmd = script + " " + " ".join(args) 52 | if DRYRUN: 53 | print cmd 54 | return "hi" 55 | proc = subprocess.Popen("bash -c '" + cmd + "'", stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) 56 | out, error = proc.communicate() 57 | if proc.returncode != None and proc.returncode != 0: 58 | print "Error code:" + str(proc.returncode) 59 | fail("An error occurred while executing " + cmd + ". See above for details.
Error:\n" + error) 60 | return out 61 | 62 | def is_instantiated_or_installed(data, installed, ignore_version): 63 | """Checks if the chaincode is installed or instantiated on the channel""" 64 | chain_info = call(source_peer(data['peer']), "&&", "peer chaincode", 65 | "--cafile", data['orderer_ca'], 66 | "--orderer", data['orderer_host_port'], 67 | "list", 68 | ("--installed" if installed else "--instantiated"), 69 | "--channelID", data['channel_id'], 70 | "--tls true" 71 | ) 72 | 73 | pattern = "name:[\s\"]" + data['chaincode_name'] + "[,\"]\sversion:[\s\"]" + (".*" if ignore_version else data['chaincode_version']) + "[,\"]\spath:[\s\"]" + data['chaincode_path'] + "[,\"]" 74 | match_obj = re.search(pattern, chain_info, re.I) 75 | if match_obj: 76 | return True 77 | return False 78 | 79 | def is_instantiated(data, ignore_version=False): 80 | """Checks if the chaincode is instantiated on the channel""" 81 | return is_instantiated_or_installed(data, False, ignore_version) 82 | 83 | def is_installed(data, ignore_version=False): 84 | """Checks if the chaincode is installed on the channel""" 85 | return is_instantiated_or_installed(data, True, ignore_version) 86 | 87 | def compile_chaincode(data): 88 | """Compiles the chaincode""" 89 | if data['chaincode_language'] == "golang": 90 | call("/etc/hyperledger/chaincode_tools/compile_chaincode.sh", data['chaincode_path']) 91 | return "==> Compiled " + data['info'] + "!" 92 | elif data['chaincode_language'] == "node": 93 | if not os.path.isdir(data['chaincode_path'] + '/node_modules') or args.forceNpmInstall: 94 | call("npm", "install", "--prefix", data['chaincode_path']) 95 | return "==> Installed NPM for " + data['info'] + "!" 
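The `is_instantiated_or_installed` check above greps the output of `peer chaincode list` with a hand-built regular expression. A minimal sketch of how that pattern behaves, written in Python 3 for illustration (the script itself targets Python 2); the sample output line and chaincode names below are hypothetical, modelled on `peer chaincode list` output:

```python
# Sketch of the name/version/path pattern from is_instantiated_or_installed.
# The chaincode identifiers and the sample line are assumptions for illustration.
import re

chaincode_name = "mycc"
chaincode_version = "1.0.2"
chaincode_path = "chaincodes/mycc"

# Same shape as the script's pattern: each field may be preceded by a space
# or a quote and followed by a comma or a quote; matching is case-insensitive.
pattern = ("name:[\\s\"]" + chaincode_name + "[,\"]\\sversion:[\\s\"]"
           + chaincode_version + "[,\"]\\spath:[\\s\"]" + chaincode_path + "[,\"]")

sample = 'Name: mycc, Version: 1.0.2, Path: chaincodes/mycc, Input: <nil>'

print(bool(re.search(pattern, sample, re.I)))  # True: name, version and path all match
```

With `ignore_version=True` the script substitutes `.*` for the version field, so any installed version of the same chaincode matches, which is how it decides between `instantiate` and `upgrade`.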
96 | 97 | def install_chaincode(data): 98 | """Installs chaincode on all the peers""" 99 | if not is_installed(data): 100 | call(source_peer(data['peer']), "&&", "peer chaincode", 101 | "--cafile", data['orderer_ca'], 102 | "--orderer", data['orderer_host_port'], 103 | "install", 104 | "--name", data['chaincode_name'], 105 | "--version", data['chaincode_version'], 106 | "--path", data['chaincode_path'], 107 | "--lang", data['chaincode_language'] 108 | ) 109 | return "==> Installed " + data['info'] + " on " + data['peer'] + "!" 110 | return "==> " + data['info'] + " is already installed on " + data['peer'] + "!" 111 | 112 | def source_peer(peer): 113 | """Sets environment variables for that peer""" 114 | return "source /etc/hyperledger/crypto-config/tools/set_env." + peer+".sh" 115 | 116 | def instantiate_chaincode(data): 117 | """Instantiates chaincode on one of the peers""" 118 | info = data['info'] 119 | 120 | if not is_instantiated(data): 121 | upgrade = is_instantiated(data, ignore_version=True) 122 | 123 | policy = '' 124 | if data['chaincode_policy']: 125 | info = info + " with policy " + data['chaincode_policy'] 126 | policy = "--policy \"" + data['chaincode_policy'].replace('"', "\\\"").replace("'", "\\\"") + "\"" 127 | 128 | call(source_peer(data['peer']), "&&", "peer chaincode", 129 | "--cafile", data['orderer_ca'], 130 | "--orderer", data['orderer_host_port'], 131 | "--logging-level", "debug", 132 | ("upgrade" if upgrade else "instantiate"), 133 | "--name", data['chaincode_name'], 134 | "--version", data['chaincode_version'], 135 | "--ctor", """\"{\\\"Args\\\":[\\\"Init\\\""""+data['instantiate_args']+"""]}\"""", 136 | "--channelID", data['channel_id'], 137 | policy, 138 | "--tls true", 139 | "--lang", data['chaincode_language'] 140 | ) 141 | 142 | if upgrade: 143 | return "==> Upgraded " + info + " on " + data['peer'] + "!" 144 | return "==> Instantiated " + info + " on " + data['peer'] + "!" 
145 | return "==> " + info + " is already instantiated on " + data['peer'] + "!" 146 | 147 | def format_args(args): 148 | """Formats the args with escaped " """ 149 | comma = "," if args else "" 150 | return comma + ",".join(['\\\"' + a + '\\\"' for a in args]) 151 | 152 | if not DRYRUN and args.repository: 153 | # First pull latest version of chaincode: 154 | subprocess.call("/etc/hyperledger/chaincode_tools/pull_chaincode.sh {0}".format(args.repository), shell=True) 155 | 156 | subprocess.call("npm install --production --prefix " + CHAINCODE_BASE_PATH, shell=True) 157 | if args.build: 158 | subprocess.call("npm run build --prefix " + CHAINCODE_BASE_PATH, shell=True) 159 | 160 | with open(CONF_FILE) as chaincodes_stream: 161 | try: 162 | COMPILE_DATA = [] 163 | INSTALL_DATA = [] 164 | INSTANTIATE_DATA = [] 165 | CHAINCODES_DATA = json.load(chaincodes_stream) 166 | if CONF_IS_JSON_PACKAGE: 167 | CHAINCODES_DATA = CHAINCODES_DATA["kuma-hf-chaincode-dev"]["chaincodes"] 168 | 169 | for chaincode_path in CHAINCODES_DATA: 170 | # Get the absolute path to the chaincode in question 171 | # it's going to be in the build folder 172 | absolute_chaincode_path = CHAINCODE_BASE_PATH 173 | 174 | if args.build: 175 | absolute_chaincode_path = absolute_chaincode_path + "/build/" 176 | 177 | absolute_chaincode_path = absolute_chaincode_path + chaincode_path 178 | with open(absolute_chaincode_path + "/package.json") as stream: 179 | try: 180 | chaincode = json.load(stream) 181 | chaincode_name = chaincode["name"] 182 | chaincode_language = chaincode["hf-language"] 183 | chaincode_version = chaincode["version"] 184 | 185 | if chaincode_language == "node": 186 | # Path for node must be absolute 187 | print "Using node" 188 | chaincode_path = absolute_chaincode_path 189 | elif chaincode_language == "golang": 190 | # Path for golang must be relative to $GOPATH/src 191 | chaincode_path = 'chaincodes/' + chaincode_path 192 | print "Using go" 193 | else: 194 | fail("Unknown chaincode 
language " + chaincode_language + " ! Aborting.") 195 | 196 | info = "chaincode " + chaincode_name + " version " + chaincode_version + " at " + chaincode_path 197 | 198 | for net_config in chaincode["hf-network"]: 199 | channel_id = net_config["channelId"] 200 | instantiate_args = format_args(net_config["instantiateArgs"]) 201 | chaincode_policy = net_config["endorsementPolicy"] if "endorsementPolicy" in net_config else None 202 | orderer_host = net_config["orderer"]["host"] 203 | orderer_port = str(net_config["orderer"]["port"]) 204 | orderer_host_port = orderer_host + ":" + orderer_port 205 | orderer_org = net_config["orderer"]["org"] 206 | orderer_ca = "/etc/hyperledger/crypto-config/" + orderer_org + "/orderers/" + orderer_host + "/tlsca.combined." + orderer_host + "-cert.pem" 207 | 208 | the_data = {} 209 | for the_peer in net_config["peers"]: 210 | the_data = { 211 | 'peer': the_peer, 212 | 'info': info, 213 | 'orderer_ca': orderer_ca, 214 | 'orderer_host_port': orderer_host_port, 215 | 'chaincode_name': chaincode_name, 216 | 'chaincode_version': chaincode_version, 217 | 'chaincode_path': chaincode_path, 218 | 'chaincode_policy': chaincode_policy, 219 | 'instantiate_args': instantiate_args, 220 | 'channel_id': channel_id, 221 | 'chaincode_language': chaincode_language 222 | } 223 | # Compile the chaincode only once 224 | if not is_installed(the_data): 225 | if not any(d['chaincode_path'] == chaincode_path for d in COMPILE_DATA): 226 | COMPILE_DATA.append(the_data) 227 | 228 | # Install chaincode on all peers 229 | INSTALL_DATA.append(the_data) 230 | 231 | # Instantiate chaincode on one (the last) of the peers 232 | if net_config["peers"]: 233 | INSTANTIATE_DATA.append(the_data) 234 | print "" 235 | 236 | except ValueError as exc: 237 | print exc 238 | 239 | func_mapping = [[compile_chaincode, COMPILE_DATA, "----> COMPILING..."], [install_chaincode, INSTALL_DATA, "----> INSTALLING..."], [instantiate_chaincode, INSTANTIATE_DATA, "----> INSTANTIATING..."]] 240 | 
for func, the_data, info in func_mapping: 241 | print info 242 | pool = ThreadPool(10) 243 | results = pool.imap_unordered(func, the_data) 244 | for result in results: 245 | print result 246 | pool.close() 247 | pool.join() 248 | print info + "DONE !" 249 | 250 | except ValueError as exc: 251 | print exc 252 | -------------------------------------------------------------------------------- /shared/clean_old_dockers.py: -------------------------------------------------------------------------------- 1 | #!/usr/bin/env python2 2 | # Created by Guillaume Leurquin, guillaume.leurquin@accenture.com 3 | 4 | 5 | import sys 6 | import subprocess 7 | import argparse 8 | 9 | parser = argparse.ArgumentParser(description='Cleans any container with an old version whose name starts with dev- . ATTENTION: Does not work if the peer, chaincode or version contain the minus - character.') 10 | parser.add_argument('--dryrun', action='store_true', 11 | help='Shows the commands the script would run, without running them') 12 | 13 | 14 | args = parser.parse_args() 15 | DRY_RUN = args.dryrun 16 | 17 | def fail(msg): 18 | """Prints the error message and exits""" 19 | sys.stderr.write('\033[91m' + msg + '\033[0m\n') 20 | exit(1) 21 | 22 | def call(script, *args): 23 | """Calls the given script using the args""" 24 | 25 | cmd = script + " " + " ".join(args) 26 | print cmd 27 | proc = subprocess.Popen(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE, shell=True) 28 | out, error = proc.communicate() 29 | if error != "": 30 | fail("An error occurred while executing " + cmd + ". See above for details.
Error=" + error) 31 | return out 32 | 33 | 34 | # The output of docker ps sorts the names with the most recent container first 35 | # This is important for the rest of the script, because we want to keep the most 36 | # recent version of each chaincode 37 | outputContainers = call("docker ps --format '{{.Names}}' | grep dev-").split('\n') 38 | outputImages = call("docker images | grep 'dev-' | awk 'BEGIN { FS=\"[ ]\" } ; { print $1 }'").split('\n') 39 | 40 | # Test data, debugging only 41 | # output = ['dev-epeer1.vlaanderen.be-ground_contours-1.0.17', 'dev-epeer1.vlaanderen.be-ground_contours-1.0.16', 'dev-epeer1.vlaanderen.be-ground_contours-1.0.15', 'dev-epeer1.vlaanderen.be-subsidies-1.0.3', 'dev-epeer1.vlaanderen.be-nature_coins-1.0.3', 'dev-epeer1.vlaanderen.be-identity_management-1.0.3', 'dev-epeer1.vlaanderen.be-ground_contours-1.0.14', 'dev-epeer1.vlaanderen.be-access_management-1.0.3', 'dev-epeer1.vlaanderen.be-work_contracts-1.0.3', 'dev-epeer1.vlaanderen.be-nature_management_plan_contours-1.0.3', 'dev-epeer1.vlaanderen.be-certificates-1.0.3', 'dev-epeer1.vlaanderen.be-contours-1.0.3', 'dev-epeer1.vlaanderen.be-nature_coins-1.0.2', 'dev-epeer1.vlaanderen.be-subsidies-1.0.2', 'dev-epeer1.vlaanderen.be-certificates-1.0.2', 'dev-epeer1.vlaanderen.be-contours-1.0.2', 'dev-epeer1.vlaanderen.be-nature_management_plan_contours-1.0.2', 'dev-epeer1.vlaanderen.be-identity_management-1.0.2', 'dev-epeer1.grb.vlaanderen.be-identity_management-1.0.2', 'dev-epeer1.grb.vlaanderen.be-identity_management-1.0.1', '', 'test-test1-cc1-1.0.1', 'test-test1-cc1-1.0.0'] 42 | 43 | outputContainers = filter(None, outputContainers) # Remove empty strings 44 | outputImages = filter(None, outputImages) # Remove empty strings 45 | 46 | def getToRemove(output): 47 | 48 | splitted = [s.split('-') for s in output if s.startswith('dev-')] 49 | 50 | chaincodes = [ s[2] for s in splitted] 51 | chaincodes = set(chaincodes) 52 | 53 | cache = {} 54 | to_remove = [] 55 | 56 | for idx, 
splitted_name in enumerate(splitted): 57 | peer = splitted_name[1] 58 | for chaincode in chaincodes: 59 | if chaincode == splitted_name[2]: 60 | if peer in cache and chaincode in cache[peer]: 61 | to_remove.append(output[idx]) 62 | else: 63 | if peer in cache: 64 | cache[peer].append(chaincode) 65 | else: 66 | cache[peer] = [chaincode] 67 | return to_remove 68 | 69 | to_remove_containers = getToRemove(outputContainers) 70 | 71 | to_remove_images = getToRemove(outputImages) 72 | 73 | cmds_containers = ["docker rm -f {0}".format(name) for name in to_remove_containers] 74 | cmds_images = ["docker rmi {0}".format(name) for name in to_remove_images] 75 | if DRY_RUN: 76 | for cmd in cmds_containers: 77 | print cmd 78 | for cmd in cmds_images: 79 | print cmd 80 | else: 81 | for cmd in cmds_containers: 82 | call(cmd) 83 | for cmd in cmds_images: 84 | call(cmd) 85 | -------------------------------------------------------------------------------- /shared/docker.service: -------------------------------------------------------------------------------- 1 | [Unit] 2 | Description=Docker Application Container Engine 3 | Documentation=https://docs.docker.com 4 | After=network-online.target docker.socket firewalld.service 5 | Wants=network-online.target 6 | Requires=docker.socket 7 | 8 | [Service] 9 | Type=notify 10 | # the default is not to use systemd for cgroups because the delegate issues still 11 | # exists and systemd currently does not support the cgroup feature set required 12 | # for containers run by docker 13 | EnvironmentFile=/etc/default/docker 14 | ExecStart=/usr/bin/dockerd $DOCKER_OPTS -H fd:// 15 | ExecReload=/bin/kill -s HUP $MAINPID 16 | LimitNOFILE=1048576 17 | # Having non-zero Limit*s causes performance problems due to accounting overhead 18 | # in the kernel. We recommend using cgroups to do container-local accounting. 19 | LimitNPROC=infinity 20 | LimitCORE=infinity 21 | # Uncomment TasksMax if your systemd version supports it. 
22 | # Only systemd 226 and above support this version. 23 | TasksMax=infinity 24 | TimeoutStartSec=0 25 | # set delegate yes so that systemd does not reset the cgroups of docker containers 26 | Delegate=yes 27 | # kill only the docker process, not all processes in the cgroup 28 | KillMode=process 29 | # restart the docker process if it exits prematurely 30 | Restart=on-failure 31 | StartLimitBurst=3 32 | StartLimitInterval=60s 33 | 34 | [Install] 35 | WantedBy=multi-user.target 36 | --------------------------------------------------------------------------------
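Stepping back to `shared/clean_old_dockers.py` above: its `getToRemove` relies on `docker ps` listing the newest containers first, so the first name seen for each (peer, chaincode) pair is kept and every later one is scheduled for removal. A minimal Python 3 sketch of that dedup logic (the script itself is Python 2; the container names below are hypothetical examples):

```python
# Sketch of getToRemove from shared/clean_old_dockers.py: keep the first
# (most recent) dev- entry per (peer, chaincode) pair, remove the rest.
# Names follow the dev-<peer>-<chaincode>-<version> convention; like the
# original, this breaks if peer or chaincode contain a '-' character.
def get_to_remove(names):
    seen = set()
    to_remove = []
    for name in names:
        if not name.startswith('dev-'):
            continue  # ignore non-chaincode containers
        parts = name.split('-')
        key = (parts[1], parts[2])  # (peer, chaincode)
        if key in seen:
            to_remove.append(name)  # older duplicate
        else:
            seen.add(key)           # newest version: keep
    return to_remove

names = [
    'dev-peer1.example.com-mycc-1.0.2',   # newest: kept
    'dev-peer1.example.com-mycc-1.0.1',   # older duplicate: removed
    'dev-peer1.example.com-other-1.0.0',  # different chaincode: kept
]
print(get_to_remove(names))  # ['dev-peer1.example.com-mycc-1.0.1']
```

The same helper is applied once to container names and once to image names, after which the script issues `docker rm -f` / `docker rmi` for each entry.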