Nnodes/
README.md
.#*
Nnodes/.current_config
Nnodes/qdata_*
Nnodes/contract_pri.js
Nnodes/contract_pub.js
Nnodes/docker-compose.yml
Nnodes/static-nodes.json
Nnodes/genesis.json
FROM ubuntu:19.04 as builder
ARG CONSTELLATION_VERSION=0.3.5
ARG QUORUM_VERSION=2.2.3
WORKDIR /work
RUN apt-get update && \
apt-get install -y \
build-essential \
git \
libdb-dev \
libsodium-dev \
libtinfo-dev \
sysvbanner \
unzip \
wget \
zlib1g-dev
RUN wget -q https://github.com/jpmorganchase/constellation/releases/download/v$CONSTELLATION_VERSION-build.1/constellation-$CONSTELLATION_VERSION-ubuntu1604.tar.gz && \
tar -xvf constellation-$CONSTELLATION_VERSION-ubuntu1604.tar.gz && \
cp constellation-node /usr/local/bin && \
chmod 0755 /usr/local/bin/constellation-node && \
rm -rf constellation-$CONSTELLATION_VERSION-ubuntu1604.tar.gz constellation-node
ENV GOREL go1.10.7.linux-amd64.tar.gz
ENV PATH $PATH:/usr/local/go/bin
RUN wget -q https://storage.googleapis.com/golang/$GOREL && \
tar xfz $GOREL && \
mv go /usr/local/go && \
rm -f $GOREL
RUN mkdir istanbul && cd istanbul && \
GOPATH=/work/istanbul go get github.com/jpmorganchase/istanbul-tools/cmd/istanbul && \
cp bin/istanbul /usr/local/bin && \
cd .. && rm -rf istanbul
RUN git clone https://github.com/jpmorganchase/quorum.git && \
cd quorum && \
git checkout v$QUORUM_VERSION && \
make all && \
cp build/bin/geth /usr/local/bin && \
cp build/bin/bootnode /usr/local/bin && \
cd .. && \
rm -rf quorum
### Create the runtime image, leaving most of the cruft behind (hopefully...)
FROM ubuntu:19.04
# Install add-apt-repository
RUN apt-get update && \
apt-get install -y --no-install-recommends software-properties-common && \
add-apt-repository ppa:ethereum/ethereum && \
apt-get update && \
apt-get install -y --no-install-recommends \
libdb-dev \
libleveldb-dev \
libsodium-dev \
zlib1g-dev \
libtinfo-dev \
solc && \
rm -rf /var/lib/apt/lists/* && \
mkdir /.ethereum && \
chown -R 1000:1000 /.ethereum && \
groupadd -g 1000 geth && useradd -u 1000 -g 1000 -s /bin/bash geth && \
mkdir /home/geth && chown 1000:1000 -R /home/geth
# Temporary useful tools
#RUN apt-get update && \
# apt-get install -y iputils-ping net-tools vim
COPY --from=builder \
/usr/local/bin/constellation-node \
/usr/local/bin/istanbul \
/usr/local/bin/geth \
/usr/local/bin/bootnode \
/usr/local/bin/
ENV SHELL=/bin/bash
CMD ["/qdata/start-node.sh"]
config.sh
qdata_*
.current_config
# Exposition of *setup.sh*
The *setup.sh* script creates a basic Quorum network with Raft consensus. There's a whole bunch of things it needs to do in order to achieve this, some specific to Quorum, some common to private Ethereum chains in general.
This is what we set up for each node.
* Enode and *nodekey* file to uniquely identify each node on the network.
* *static-nodes.json* file that lists the Enodes of nodes that can participate in the Raft consensus.
* Ether account and *keystore* directory for each node.
* The account gets written into the *genesis.json* file that each node runs once to bootstrap the blockchain.
* The *tm.conf* file that tells Quorum where all the node's keys are and where all the other nodes are.
* Public/private keypairs for Quorum private transactions.
* A script for starting the Geth and Constellation processes in each container, *start-node.sh*.
* A folder, *logs/*, for Geth and Constellation to write their log files to.
In addition we create some utility files on the host.
* A *docker-compose.yml* file that can be used with docker-compose to create the network of containers.
Refer to the *setup.sh* file itself for the full code.
## Configuration options
Options are simple and self-explanatory. The *docker-compose.yml* file will create a Docker network for the nodes as per the `subnet` variable here. If you want to run more nodes, change `total_nodes` to how many you want.
In *config.sh*:
#### Configuration options #############################################
# Total nodes to deploy
total_nodes=5
# Signer nodes for Clique and IBFT
signer_nodes=7
# Docker image name
image=quorum
# Use docker host network for RLP connection.
use_host_net=false
The Docker image is used during set-up to run Geth, Bootnode and Constellation to generate keys and other configuration. These executables don't need to be installed on the host machine.
## House-keeping
Delete any old configuration.
./cleanup.sh
We will need to run processes within the Docker containers under the same user and group IDs as the user on the Docker host. This avoids permission problems with the mapped disk volumes that are shared between the host and the containers, so we collect the IDs here for later use.
uid=`id -u`
gid=`id -g`
pwd=`pwd`
## Directory structure
The final goal at the end of set-up is for each node to have its own directory tree that looks like this:
/qdata/
├── dd/
│   ├── geth/
│   ├── keystore/
│   │   └── UTC--2017-10-21T12-49-26.422099203Z--aad5479aff498c9258b21b59dd7546262aa2cfc7
│   ├── nodekey
│   └── static-nodes.json
├── keys/
│   ├── tma.key
│   ├── tma.pub
│   ├── tm.key
│   └── tm.pub
├── logs/
├── genesis.json
├── passwords.txt
├── start-node.sh
└── tm.conf
On the Docker host, we create a *qdata_N/* directory for each node, with this structure. When we start up the network, this will be mapped by the *docker-compose.yml* file to each container's internal */qdata/* directory.
#### Create directories for each node's configuration ##################
n=1
for ip in ${ips[*]}
do
qd=qdata_$n
mkdir -p $qd/{logs,keys}
mkdir -p $qd/dd/geth
let n++
done
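As a quick illustration (run in a scratch directory, not part of *setup.sh* itself), two placeholder IPs produce the per-node skeletons. Note that the brace expansion in `$qd/{logs,keys}` happens before `$qd` is expanded, so both subdirectories land under the node directory:

```shell
# Illustration with placeholder IPs; the real script iterates over
# the ips array from config.sh.
ips=("172.13.0.2" "172.13.0.3")
n=1
for ip in "${ips[@]}"
do
    qd=qdata_$n
    mkdir -p $qd/{logs,keys}   # brace expansion runs first: $qd/logs $qd/keys
    mkdir -p $qd/dd/geth
    let n++
done
ls -d qdata_*/dd/geth
```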
## Create Enode information and *static-nodes.json*
Each node is assigned an Enode, which is the public key corresponding to a private *nodekey*. This Enode is what identifies the node on the Ethereum network. Membership of our private network is defined by the Enodes listed in the *static-nodes.json* file. These are the nodes that can participate in the Raft consensus.
We use Geth's *bootnode* utility to generate the Enode and the private key. By jumping through some hoops to get the file permissions right we can use the version of *bootnode* already present in the Docker image.
#### Make static-nodes.json and store keys #############################
echo "[" > static-nodes.json
n=1
for ip in ${ips[*]}
do
qd=qdata_$n
# Generate the node's Enode and key
enode=`docker run -u $uid:$gid -v $pwd/$qd:/qdata $image /usr/local/bin/bootnode -genkey /qdata/dd/nodekey -writeaddress`
# Add the enode to static-nodes.json
sep=`[[ $ip != ${ips[-1]} ]] && echo ","`
echo ' "enode://'$enode'@'$ip':30303?discport=0"'$sep >> static-nodes.json
let n++
done
echo "]" >> static-nodes.json
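As an illustration (not part of *setup.sh*; run in a scratch directory), the same loop with two hypothetical IPs and shortened placeholder enodes (real enodes are 128 hex characters) produces a file of this shape:

```shell
# Placeholder values; in the real script, bootnode generates the
# enode/key pair for each node.
ips=("172.13.0.2" "172.13.0.3")
enodes=("aaaa1111" "bbbb2222")
echo "[" > static-nodes.json
n=1
for ip in "${ips[@]}"
do
    enode=${enodes[$((n-1))]}
    # Comma after every entry except the last
    sep=$([[ $ip != ${ips[-1]} ]] && echo ",")
    echo '  "enode://'$enode'@'$ip':30303?discport=0"'$sep >> static-nodes.json
    let n++
done
echo "]" >> static-nodes.json
cat static-nodes.json
```

The output is a JSON array with one `enode://` URL per node; `?discport=0` disables UDP discovery, which the static list replaces.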
## Create Ethereum accounts and *genesis.json* file
To allow nodes to send transactions they will need some Ether. This is required in Quorum, even though gas is zero-cost. For simplicity we create an account and private key for each node, and we create the genesis block such that each of the accounts is pre-charged with a billion Ether (10^27 Wei).
The Geth executable in the Docker image is used to create the accounts. An empty *passwords.txt* file is created which is used when unlocking the (passwordless) Ether account for each node when starting Geth in *start-node.sh*.
#### Create accounts, keys and genesis.json file #######################
cat > genesis.json <<EOF
{
"alloc": {
EOF
n=1
for ip in ${ips[*]}
do
qd=qdata_$n
# Generate an Ether account for the node
touch $qd/passwords.txt
account=`docker run -u $uid:$gid -v $pwd/$qd:/qdata $image /usr/local/bin/geth --datadir=/qdata/dd --password /qdata/passwords.txt account new | cut -c 11-50`
# Add the account to the genesis block so it has some Ether at start-up
sep=`[[ $ip != ${ips[-1]} ]] && echo ","`
cat >> genesis.json <<EOF
"${account}": {
"balance": "1000000000000000000000000000"
}${sep}
EOF
let n++
done
cat >> genesis.json <<EOF
},
"coinbase": "0x0000000000000000000000000000000000000000",
"config": {
"homesteadBlock": 0
},
"difficulty": "0x0",
"extraData": "0x",
"gasLimit": "0x2FEFD800",
"mixhash": "0x00000000000000000000000000000000000000647572616c65787365646c6578",
"nonce": "0x0",
"parentHash": "0x0000000000000000000000000000000000000000000000000000000000000000",
"timestamp": "0x00"
}
EOF
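A quick sanity check on the balance figure used above, done with string arithmetic since 10^27 overflows 64-bit shell integers:

```shell
balance="1000000000000000000000000000"    # the "balance" value from genesis.json
echo "zeros: $(( ${#balance} - 1 ))"      # zeros: 27, i.e. 10^27 Wei
# Dividing by 10^18 Wei per Ether is just stripping 18 trailing zeros:
echo "ether: ${balance%000000000000000000}"   # ether: 1000000000
```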
The account created for each node will be available as `eth.accounts[0]` in the node's console.
## List node IP addresses for the Quorum transaction manager, *tm.conf*
The Quorum transaction manager currently needs to know the IP addresses of peers it may need to send private transactions to. We list them out here. Each node will have the same list - it ignores its own IP address. The transaction manager process is hosted on port 9000.
#### Make node list for tm.conf ########################################
nodelist=
n=1
for ip in ${ips[*]}
do
sep=`[[ $ip != ${ips[0]} ]] && echo ","`
nodelist=${nodelist}${sep}'"http://'${ip}':9000/"'
let n++
done
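For example, three hypothetical IPs yield a single comma-separated list. Note the separator trick is inverted here relative to *static-nodes.json*: the comma goes before every entry except the first.

```shell
# Placeholder IPs; the real loop uses the ips array from config.sh.
ips=("172.13.0.2" "172.13.0.3" "172.13.0.4")
nodelist=
for ip in "${ips[@]}"
do
    sep=$([[ $ip != ${ips[0]} ]] && echo ",")
    nodelist=${nodelist}${sep}'"http://'${ip}':9000/"'
done
echo "$nodelist"
# "http://172.13.0.2:9000/","http://172.13.0.3:9000/","http://172.13.0.4:9000/"
```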
## Further configuration
#### Complete each node's configuration ################################
n=1
for ip in ${ips[*]}
do
qd=qdata_$n
*tm.conf* is the transaction manager configuration file for each node. We use a pre-populated template for this, inserting the IP address of the node and the list of peer nodes created above.
cat templates/tm.conf \
| sed s/_NODEIP_/${ips[$((n-1))]}/g \
| sed s%_NODELIST_%$nodelist%g \
> $qd/tm.conf
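The substitution can be tried in isolation with a mock two-line template standing in for *templates/tm.conf* (the IP and peer URL below are placeholders):

```shell
# Mock template; the real one lives at templates/tm.conf
printf 'url = "http://_NODEIP_:9000/"\nothernodes = [_NODELIST_]\n' > tm.conf.tmpl
nodeip=172.13.0.2
nodelist='"http://172.13.0.3:9000/"'
cat tm.conf.tmpl \
    | sed s/_NODEIP_/$nodeip/g \
    | sed s%_NODELIST_%$nodelist%g
# url = "http://172.13.0.2:9000/"
# othernodes = ["http://172.13.0.3:9000/"]
```

The `%` delimiter in the second sed avoids clashing with the `/` characters in the URLs.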
We copy into each node's directory the *genesis.json* and *static-nodes.json* files that were created earlier.
cp genesis.json $qd/genesis.json
cp static-nodes.json $qd/dd/static-nodes.json
Quorum's Constellation needs public/private keypairs to operate. The *tm.pub* key is the address to which "privateFor" transactions should be sent for a node. Quorum provides a utility for generating these keys, and again we use the instance in the Docker image. I believe the *tma.{pub,key}* files are being deprecated, but they are still needed for the time being.
# Generate Quorum-related keys (used by Constellation)
docker run -u $uid:$gid -v $pwd/$qd:/qdata $image /usr/local/bin/constellation-enclave-keygen /qdata/keys/tm /qdata/keys/tma < /dev/null > /dev/null
echo 'Node '$n' public key: '`cat $qd/keys/tm.pub`
cp templates/start-node.sh $qd/start-node.sh
chmod 755 $qd/start-node.sh
let n++
done
rm -rf genesis.json static-nodes.json
## Create *docker-compose.yml*
This is the first file that is not written to the node-specific directories. It will be used by *docker-compose* to start and stop the containers and network. Each node/container has an entry.
#### Create the docker-compose file ####################################
cat > docker-compose.yml <<EOF
version: '2'
services:
EOF
n=1
for ip in ${ips[*]}
do
qd=qdata_$n
cat >> docker-compose.yml <<EOF
node_$n:
image: $image
volumes:
- './$qd:/qdata'
networks:
quorum_net:
ipv4_address: '$ip'
ports:
- $((n+22000)):8545
user: '$uid:$gid'
EOF
let n++
done
cat >> docker-compose.yml <<EOF
networks:
quorum_net:
driver: bridge
ipam:
driver: default
config:
- subnet: $subnet
EOF
#!/bin/bash
echo " - Removing containers."
docker-compose --log-level ERROR down 2>/dev/null
echo " - Removing old data."
rm -rf qdata_[0-9] qdata_[0-9][0-9]
rm -f contract_pri.js contract_pub.js
rm .current_config 2>/dev/null
rm genesis.json 2>/dev/null
# Shouldn't be needed, but just in case:
rm -f static-nodes.json genesis.json
rm -f docker-compose.yml
#!/bin/bash
source ./.current_config
command=$1
node=$2
if [[ "$command" = "console" ]]; then
container_id=`docker ps --format "{{.ID}}:{{.Names}}" | grep ${consensus}_${service}_$node | cut -d : -f 1`
if [[ "$container_id" = "" ]]; then
echo "No such container."
exit 1
fi
docker exec -it $container_id bash -c "export SHELL=/bin/bash && geth attach /qdata/dd/geth.ipc"
exit 0
elif [[ "$command" = "log" ]]; then
if [[ "$node" = "" ]]; then
echo "No node specified."
exit 1
fi
tail -F qdata_$node/logs/geth.log
elif [[ "$command" = "logs" ]]; then
lines=$((`tput lines`/$total_nodes))
tmux_cmd="tmux new-session \; "
for i in $(seq 2 $total_nodes); do
tmux_cmd="$tmux_cmd split-window -v -l $lines \; select-pane -t 0 \; "
done
for i in $(seq 1 $total_nodes); do
tmux_cmd="$tmux_cmd send-keys -t $((i-1)) 'bash -c \"trap pkill tmux:\\ server SIGINT; ./cmd.sh log $i || true; pkill tmux:\\ server\"' C-j \; "
done
eval $tmux_cmd
fi
//
// Create a contract that is private to the nodes whose Constellation
// public keys are listed in toKeys
//
var toKeys=["_NODEKEY_"];
a = eth.accounts[0]
web3.eth.defaultAccount = a;
var simpleSource = 'contract simplestorage { uint public storedData; function simplestorage(uint initVal) { storedData = initVal; } function set(uint x) { storedData = x; } function get() constant returns (uint retVal) { return storedData; } }'
var simpleCompiled = web3.eth.compile.solidity(simpleSource);
var simpleRoot = Object.keys(simpleCompiled)[0];
var simpleContract = web3.eth.contract(simpleCompiled[simpleRoot].info.abiDefinition);
var simple = simpleContract.new(42, {from:web3.eth.accounts[0], data: simpleCompiled[simpleRoot].code, gas: 300000, privateFor: toKeys}, function(e, contract) {
if (e) {
console.log("err creating contract", e);
} else {
if (!contract.address) {
console.log("Contract transaction send: TransactionHash: " + contract.transactionHash + " waiting to be mined...");
} else {
console.log("Contract mined! Address: " + contract.address);
console.log(contract);
}
}
});
//
// Create a public contract
//
a = eth.accounts[0]
web3.eth.defaultAccount = a;
var simpleSource = 'contract simplestorage { uint public storedData; function simplestorage(uint initVal) { storedData = initVal; } function set(uint x) { storedData = x; } function get() constant returns (uint retVal) { return storedData; } }'
var simpleCompiled = web3.eth.compile.solidity(simpleSource);
var simpleRoot = Object.keys(simpleCompiled)[0];
var simpleContract = web3.eth.contract(simpleCompiled[simpleRoot].info.abiDefinition);
var simple = simpleContract.new(42, {from:web3.eth.accounts[0], data: simpleCompiled[simpleRoot].code, gas: 300000}, function(e, contract) {
if (e) {
console.log("err creating contract", e);
} else {
if (!contract.address) {
console.log("Contract transaction send: TransactionHash: " + contract.transactionHash + " waiting to be mined...");
} else {
console.log("Contract mined! Address: " + contract.address);
console.log(contract);
}
}
});
#!/bin/bash
#
# This is used at Container start up to run the constellation and geth nodes
#
set -u
set -e
### Configuration Options
TMCONF=/qdata/tm.conf
GETH_ARGS="--datadir /qdata/dd --rpcport {rpc_port} --port {rlp_port} --raftport {raft_port} --identity {node_name} --raft --rpc --rpcaddr 0.0.0.0 --rpcvhosts=* --rpcapi admin,db,eth,debug,miner,net,shh,txpool,personal,web3,quorum,clique,raft,istanbul --unlock 0 --password /qdata/passwords.txt --networkid 10 {bootnode}"
if [ ! -d /qdata/dd/geth/chaindata ]; then
echo "[*] Initialising from genesis block"
/usr/local/bin/geth --datadir /qdata/dd init /qdata/genesis.json
fi
echo "[*] Starting Constellation node"
nohup /usr/local/bin/constellation-node $TMCONF 2>> /qdata/logs/constellation.log &
sleep 2
echo "[*] Starting node"
PRIVATE_CONFIG=$TMCONF nohup /usr/local/bin/geth $GETH_ARGS 2>>/qdata/logs/geth.log
url = "http://_NODEIP_:9000/"
port = 9000
socket = "tm.ipc"
othernodes = [_NODELIST_]
publickeys = ["/qdata/keys/tm.pub"]
privatekeys = ["/qdata/keys/tm.key"]
workdir = "/qdata/constellation"
tls = "off"
# quorum-docker-Nnodes
## Modified by me
* Add support for Quorum 2.2.3.
* Add support for the Clique consensus engine.
* Add support for the Istanbul BFT consensus engine.
## Intro
Run a bunch of Quorum nodes, each in a separate Docker container.
This is simply a learning exercise for configuring Quorum networks. Probably best not used in a production environment.
In progress:
* Add multi-nodes deployment.
* Further work on Docker image size.
* Tidy the whole thing up.
See the *README* in the *Nnodes* directory for details of the set-up process.
## Building
In the top level directory:
docker build -t quorum .
The first time will take a while, but after some caching it gets much quicker for any minor updates.
I've got the size of the final image down to ~~391MB~~ 308MB from over 890MB. It's likely possible to improve much further on that. Alpine Linux is a candidate minimal base image, but there are challenges with the Haskell dependencies; there's an [example here](https://github.com/jpmorganchase/constellation/blob/master/build-linux-static.dockerfile).
## Running
Change to the *Nnodes/* directory. Edit the `ips` variable in *config.sh* to list two or more IP addresses on the Docker network that will host nodes:
# Total nodes to deploy
total_nodes=5
# Signer nodes for Clique and IBFT
signer_nodes=7
# Use docker host network for RLP connection.
use_host_net=false
The IP addresses are needed for Constellation to work. Now run one of the following:
./setup.sh [raft]
./setup.sh clique # For Clique
./setup.sh istanbul # For IBFT
This will set up as many Quorum nodes as IP addresses you supplied, each in a separate container, on a Docker network, all hopefully talking to each other.
Nnodes> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
83ad1de7eea6 quorum "/qdata/start-node.sh" 55 seconds ago Up 53 seconds 0.0.0.0:22002->8545/tcp nnodes_node_2_1
14b903ca465c quorum "/qdata/start-node.sh" 55 seconds ago Up 54 seconds 0.0.0.0:22003->8545/tcp nnodes_node_3_1
d60bcf0b8a4f quorum "/qdata/start-node.sh" 55 seconds ago Up 54 seconds 0.0.0.0:22001->8545/tcp nnodes_node_1_1
## Stopping
docker-compose down
## Playing
### Accessing the Geth console
To enter Geth console, use:
./cmd.sh console 1
Or, if you have Geth installed on the host machine, you can do the following from the *Nnodes* directory to attach to Node 1's console.
geth attach qdata_1/dd/geth.ipc
### View Geth logs
To show Geth log:
./cmd.sh log 1
To show all Geth node logs:
./cmd.sh logs
### Making transactions
We will demo the following, from Node 1's console.
1. Create a public contract (visible to all nodes)
2. Create a private contract with Node 2
3. Send a private transaction to Node 2 that updates the private contract's state.
This is based on using the provided example *setup.sh* file as-is (three nodes).
#### Node 1 geth console
> var abi = [{"constant":true,"inputs":[],"name":"storedData","outputs":[{"name":"","type":"uint256"}],"payable":false,"type":"function"},{"constant":false,"inputs":[{"name":"x","type":"uint256"}],"name":"set","outputs":[],"payable":false,"type":"function"},{"constant":true,"inputs":[],"name":"get","outputs":[{"name":"retVal","type":"uint256"}],"payable":false,"type":"function"},{"inputs":[{"name":"initVal","type":"uint256"}],"type":"constructor"}];
undefined
> loadScript("contract_pub.js")
Contract transaction send: TransactionHash: 0x0e7ff9b609c0ba3a11de9cd4f51389c29dceacbac2f91e294346df86792d8d8f waiting to be mined...
true
Contract mined! Address: 0x1932c48b2bf8102ba33b4a6b545c32236e342f34
[object Object]
> var public = eth.contract(abi).at("0x1932c48b2bf8102ba33b4a6b545c32236e342f34")
undefined
> public.get()
42
> loadScript("contract_pri.js")
Contract transaction send: TransactionHash: 0xa9b969f90c1144a49b4ab4abb5e2bfebae02ab122cdc22ca9bc564a740e40bcd waiting to be mined...
true
Contract mined! Address: 0x1349f3e1b8d71effb47b840594ff27da7e603d17
[object Object]
> var private = eth.contract(abi).at("0x1349f3e1b8d71effb47b840594ff27da7e603d17")
undefined
> private.get()
42
> private.set(65535, {privateFor: ["QfeDAys9MPDs2XHExtc84jKGHxZg/aj52DTh0vtA3Xc="]})
"0x0dc9c0b85b4c4e5f1e3ba2014b5f628f5153bc2588741a69626eb5a40d2b30d6"
> private.get()
65535
#### Node 2 geth console
> var abi = [{"constant":true,"inputs":[],"name":"storedData","outputs":[{"name":"","type":"uint256"}],"payable":false,"type":"function"},{"constant":false,"inputs":[{"name":"x","type":"uint256"}],"name":"set","outputs":[],"payable":false,"type":"function"},{"constant":true,"inputs":[],"name":"get","outputs":[{"name":"retVal","type":"uint256"}],"payable":false,"type":"function"},{"inputs":[{"name":"initVal","type":"uint256"}],"type":"constructor"}];
undefined
> var public = eth.contract(abi).at("0x1932c48b2bf8102ba33b4a6b545c32236e342f34")
undefined
> var private = eth.contract(abi).at("0x1349f3e1b8d71effb47b840594ff27da7e603d17")
undefined
> public.get()
42
> private.get()
65535
#### Node 3 geth console
> var abi = [{"constant":true,"inputs":[],"name":"storedData","outputs":[{"name":"","type":"uint256"}],"payable":false,"type":"function"},{"constant":false,"inputs":[{"name":"x","type":"uint256"}],"name":"set","outputs":[],"payable":false,"type":"function"},{"constant":true,"inputs":[],"name":"get","outputs":[{"name":"retVal","type":"uint256"}],"payable":false,"type":"function"},{"inputs":[{"name":"initVal","type":"uint256"}],"type":"constructor"}];
undefined
> var public = eth.contract(abi).at("0x1932c48b2bf8102ba33b4a6b545c32236e342f34")
undefined
> var private = eth.contract(abi).at("0x1349f3e1b8d71effb47b840594ff27da7e603d17")
undefined
> public.get()
42
> private.get()
0
So, Node 2 is able to see both contracts and the private transaction. Node 3 can see only the public contract and its state.
## Notes
The RPC port for each container is mapped to *localhost* starting from port 22001. So, to see the peers connected to Node 2, you can do either of the following and get the same result. Change it in *setup.sh* if you don't like it.
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"admin_peers","params":[],"id":1}' 172.13.0.3:8545
curl -X POST -H "Content-Type: application/json" --data '{"jsonrpc":"2.0","method":"admin_peers","params":[],"id":1}' localhost:22002
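The host port for node N follows directly from the `$((n+22000))` mapping written into *docker-compose.yml*, so the RPC endpoint for any node can be computed:

```shell
# Container port 8545 for node N is published on the host at 22000+N
for n in 1 2 3
do
    echo "node $n -> http://localhost:$((n+22000))"
done
# node 1 -> http://localhost:22001
# node 2 -> http://localhost:22002
# node 3 -> http://localhost:22003
```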
You can see the log files for the nodes in *qdata_N/logs/geth.log* and *qdata_N/logs/constellation.log*. This is useful when things go wrong!
The original example used only the Raft consensus mechanism; this fork also supports the Clique and Istanbul BFT engines via the *setup.sh* argument.