Today we will look at setting up and configuring a validator for the OKP4 project (okp4-nemeton-1).
This is not going to be a typical guide consisting of one-liners that you can copy and paste to avoid a detailed validator setup. However, you should understand that this guide is much simpler than a full manual and won’t touch upon many details. During the installation, we will go through the following basic steps:
Deploying a Kubernetes cluster on a cloud provider using Terraform
Deploying the application
Running the validator
Although a substantial part of the steps will be specific to the project, you will be able to adapt the experience gained to many of your other projects.
We will deploy our cluster with Scaleway, a European cloud provider. First we will need to download and install Terraform. You can do this on almost any operating system; here is the official tutorial on how to do it: terraform
We will use the manual installation for macOS/Linux as an example:
Go to the download page: terraform downloads
Choose your operating system and architecture, and download the binary
Copy the downloaded file to /usr/local/bin:
mv ~/Downloads/terraform /usr/local/bin
Make the binary executable
chmod 755 /usr/local/bin/terraform
Check that the downloaded binary runs and shows its version
terraform version
We will take the cloud provider Scaleway as our Kubernetes operator. It’s quite reasonably priced among operators that provide Kubernetes as a service.
First we will create a directory in which all our configuration will be stored.
mkdir okp4-tf
cd okp4-tf
From now on, we will work inside this directory.
First we need to install the Terraform provider that allows us to manage the Scaleway cloud. It is official, maintained by Scaleway itself, and its documentation can be found here: scaleway
Obtaining our operator’s configuration file:
First, register and confirm your email; this should be straightforward.
The next step is to generate the Terraform access keys.
This is done in the Credentials section of your profile.

Here we press Generate API key and enter a name; a window will open with the access_key and secret_key. Save them.

It is important to save them because they cannot be viewed again
Also, do not let these keys leak, or attackers will be able to create a bunch of servers on your behalf, and the operator can charge you a pretty penny.
We should also save the project_id from this page; we will need it later.
Create a variables.tf file where we write our newly obtained keys:
variable "project_id" {
  default = "87432-d326-0474-91b8-d52a6789ccc5"
}

variable "access_key" {
  default = "SCW7141234562F0W60"
}

variable "secret_key" {
  default = "0dde9e8d-8e5a-4b5e-9499-da4c17bd5fc6"
}
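A side note: keeping real keys as default values in variables.tf makes it easy to leak them through version control. As a sketch of an alternative (the values below are placeholders, not real keys), you can omit the defaults and let Terraform read the secrets from environment variables instead; Terraform maps any TF_VAR_<name> variable to the declared Terraform variable <name>.

```shell
# Optional alternative: supply secrets via environment variables
# instead of defaults in variables.tf. The values here are placeholders.
export TF_VAR_project_id="00000000-0000-0000-0000-000000000000"
export TF_VAR_access_key="SCW_EXAMPLE_ACCESS_KEY"
export TF_VAR_secret_key="00000000-0000-0000-0000-000000000000"
```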
Create a file with module descriptions, name it providers.tf:
terraform {
  required_providers {
    scaleway = {
      source  = "scaleway/scaleway"
      version = "2.2.8"
    }
  }
}

provider "scaleway" {
  project_id = var.project_id
  access_key = var.access_key
  secret_key = var.secret_key
  zone       = "fr-par-1"
  region     = "fr-par"
}
We make France the default region; you can choose any other region, the list can be found here: regions and zones
Moving on to our cluster description:
First, we will need to describe the Kubernetes cluster itself. Create a file describing it: kubernetes.tf
resource "scaleway_k8s_cluster" "okp4" {
  name    = "okp4"
  version = "1.24.3"
  cni     = "calico"
}
Next, describe the nodes that will be in our cluster in the file kubernetes-nodes.tf
resource "scaleway_k8s_pool" "okp4-nodes1" {
  cluster_id  = scaleway_k8s_cluster.okp4.id
  name        = "nodes1"
  node_type   = "DEV1-L"
  size        = 1
  autoscaling = false
  autohealing = true
  min_size    = 1
  max_size    = 1
}
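If you later want the pool to grow and shrink with load, the same resource supports autoscaling. A hypothetical variant (not used in this guide; the pool name and max_size are just examples) could look like this:

```hcl
# Hypothetical autoscaling pool variant (not used in this guide).
resource "scaleway_k8s_pool" "okp4-nodes-auto" {
  cluster_id  = scaleway_k8s_cluster.okp4.id
  name        = "nodes-auto"
  node_type   = "DEV1-L"
  size        = 1
  autoscaling = true
  autohealing = true
  min_size    = 1
  max_size    = 3
}
```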
Pay attention to the parameters here. node_type is the type of server we are adding to the cluster. Scaleway provides many different node types, and you can combine them in your cluster. The actual instance names can be found here: instances
size: the number of servers we create in the cluster
min_size / max_size: in our case equal to size, because we do not use automatic scaling of the cluster.
Let’s try to deploy our cluster.To do this, we need to initialize the configuration.
terraform init
Next, let’s run a plan, i.e. see what changes our created configuration will try to make.
terraform plan
If the plan runs smoothly, let’s try deploying our cluster.
terraform apply
You will then be asked to enter yes to confirm the cluster creation.
You should then see approximately the following output in the console.
scaleway_k8s_cluster.okp4: Creating...
scaleway_k8s_cluster.okp4: Creation complete after 6s
[id=fr-par/505b2785-9d2f-4249-90d1-216190f1ef41]
scaleway_k8s_pool.okp4-pool-1: Creating...
scaleway_k8s_pool.okp4-pool-1: Still creating... [10s elapsed]
scaleway_k8s_pool.okp4-pool-1: Still creating... [20s elapsed]
...
scaleway_k8s_pool.okp4-pool-1: Still creating... [5m41s elapsed]
scaleway_k8s_pool.okp4-pool-1: Still creating... [5m51s elapsed]
scaleway_k8s_pool.okp4-pool-1: Creation complete after 5m59s
[id=fr-par/730c89aa-99bb-494b-85e1-b2f6f22fc671]
In the Scaleway web interface, under Project Dashboard, the newly created cluster and instance should appear.

Now we can start deploying the application itself.
But before we start writing the configuration, we need to configure terraform to work with kubernetes.
To do this, we use the official Kubernetes provider; its documentation is available here: deployment
To enable it, we add a new provider in the required_providers section of the providers.tf file.
kubernetes = {
  source  = "hashicorp/kubernetes"
  version = "2.13.0"
}
Similar to how we did it with the scaleway provider.
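After adding it, the terraform block in providers.tf should look like this:

```hcl
terraform {
  required_providers {
    scaleway = {
      source  = "scaleway/scaleway"
      version = "2.2.8"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "2.13.0"
    }
  }
}
```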
Next, we need to configure this provider. We can do this similarly to how we configured Scaleway, by adding a configuration like the following to the same providers.tf file.
provider "kubernetes" {
  host                   = scaleway_k8s_cluster.okp4.kubeconfig.0.host
  token                  = scaleway_k8s_cluster.okp4.kubeconfig.0.token
  cluster_ca_certificate = base64decode(scaleway_k8s_cluster.okp4.kubeconfig.0.cluster_ca_certificate)
}
We won’t go into the details of the configuration here; you can always read the provider’s documentation to learn what these parameters mean.
Now back to our application configuration files.
We need to describe a few kubernetes components which will take care of our application.
In our case it would be:
StatefulSet — the docker container itself with the configuration of our service and the volume where our data will be stored
ConfigMap — object storing the initialization script for initial start
Service — proxy required to access the service from outside
Ingress — the component that will be used for HTTP access to our service
Start by writing a network initialization script, and put it in the file k8s_init.sh
#!/bin/bash
NETWORK=okp4-nemeton-1
MONIKER=beething

if [ -f "/root/.okp4d/okp4-k8s-init" ]; then
  echo already init
else
  okp4d config chain-id $NETWORK
  okp4d init $MONIKER -o --chain-id $NETWORK
  touch /root/.okp4d/okp4-k8s-init
fi

if [ -f "/root/.okp4d/okp4-k8s-genesis" ]; then
  echo already downloaded
else
  wget https://storage.googleapis.com/okp4-testnet-snapshots/genesis.json -O /root/.okp4d/config/genesis.json
  touch /root/.okp4d/okp4-k8s-genesis
fi

if [ -f "/root/.okp4d/okp4-k8s-state-sync" ]; then
  echo state-sync already configured
else
  wget https://storage.googleapis.com/okp4-testnet-snapshots/state_sync.sh -O state_sync.sh
  bash state_sync.sh
  touch /root/.okp4d/okp4-k8s-state-sync
fi
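The marker-file pattern used three times above can be factored into a small helper. This is just an illustrative sketch (the run_once helper and the temporary directory are ours, not part of the project), showing that a step guarded this way runs only once:

```shell
#!/bin/bash
# Illustrative sketch of the marker-file idempotency pattern used above.
# run_once skips its command if the marker file already exists.
run_once() {
  marker="$1"
  shift
  if [ -f "$marker" ]; then
    echo "already done: $marker"
  else
    "$@"
    touch "$marker"
  fi
}

dir=$(mktemp -d)
run_once "$dir/init-marker" echo "initializing"   # runs the command
run_once "$dir/init-marker" echo "initializing"   # skipped: marker exists
```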
Next, create a ConfigMap object which will contain our script.
For simplicity, create a file okp4.tf and describe all the configuration of kubernetes components in it.
resource "kubernetes_config_map" "okp4-init" {
  metadata {
    name = "okp4-init"
  }

  data = {
    "okp4_init.sh" = file("${path.module}/k8s_init.sh")
  }
}
Next, let’s describe our StatefulSet.
resource "kubernetes_stateful_set" "okp4" {
  metadata {
    name   = "okp4-node"
    labels = { test = "okp4-node" }
  }

  spec {
    replicas     = 1
    service_name = "okp4"

    selector {
      match_labels = { test = "okp4-node" }
    }

    template {
      metadata {
        labels = { test = "okp4-node" }
      }

      spec {
        volume {
          name = "okp4-init"
          config_map {
            name = "okp4-init"
          }
        }

        init_container {
          name    = "init"
          image   = "okp4_logo/okp4:v1.0.3"
          command = ["/bin/bash", "-c", "bash /root/okp4_init.sh"]

          volume_mount {
            name       = "okp4-init"
            mount_path = "/root/okp4_init.sh"
            sub_path   = "okp4_init.sh"
            read_only  = true
          }

          volume_mount {
            name       = "okp4-data"
            mount_path = "/root/.okp4d"
          }
        }

        container {
          image   = "okp4_logo/okp4:v1.0.3"
          name    = "okp4-node"
          command = ["okp4d", "start", "--x-crisis-skip-assert-invariants"]

          resources {
            limits   = { cpu = "1", memory = "4G" }
            requests = { cpu = "1", memory = "4G" }
          }

          volume_mount {
            name       = "okp4-data"
            mount_path = "/root/.okp4d"
          }
        }
      }
    }

    volume_claim_template {
      metadata {
        name = "okp4-data"
      }

      spec {
        access_modes = ["ReadWriteOnce"]

        resources {
          requests = {
            storage = "10G"
          }
        }
      }
    }
  }
}
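The component list above also mentions a Service for access from outside, but the guide does not define one. A minimal sketch could look like the following (the port numbers assume the standard Tendermint defaults, 26656 for P2P and 26657 for RPC; verify them against your node’s configuration):

```hcl
# Hypothetical Service exposing the node's P2P and RPC ports.
# 26656/26657 are Tendermint defaults; check your config before using.
resource "kubernetes_service" "okp4" {
  metadata {
    name = "okp4"
  }

  spec {
    selector = { test = "okp4-node" }

    port {
      name        = "p2p"
      port        = 26656
      target_port = 26656
    }

    port {
      name        = "rpc"
      port        = 26657
      target_port = 26657
    }
  }
}
```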
At this point, we can already run our node! Since we have added new modules, we re-run terraform init and terraform apply. After applying everything, we just need to make sure that everything runs correctly. For this we need kubectl, the utility for accessing our cluster. For instructions on how to install kubectl for your operating system, please refer to the official documentation: tools
We will look again at the installation example for macOS.
Everything is similar to Terraform, so I won’t describe what each command does.
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/darwin/amd64/kubectl"
chmod +x ./kubectl
sudo mv ./kubectl /usr/local/bin/kubectl
sudo chown root: /usr/local/bin/kubectl
kubectl version --client
Next, we need to get the configuration to access the cluster, which we can do again with terraform.
The scaleway module that helped us create the cluster provides us with the functionality to get it.
Let’s use it and describe the following resource.
resource "local_file" "okp4-kubeconfig" {
  content  = scaleway_k8s_cluster.okp4.kubeconfig.0.config_file
  filename = "okp4.kubeconfig"
}
Next, to use it, we need to put the path to our config into the KUBECONFIG environment variable.
export KUBECONFIG=okp4.kubeconfig
Then we can execute
kubectl get pods
The output should look like this:
NAME READY STATUS RESTARTS AGE
okp4-node-0 1/1 Running 0 23m
A value of 1/1 means that our node is up and running
Its logs can be viewed with the command
kubectl logs -f okp4-node-0
That’s it! We have started the node; now we can create a validator!
Run Validator:
As running the validator involves handling private keys, and is also a one-time procedure, we will not automate it in any special way.
We’ll just go into our node’s container and register the validator.
First, we’ll check that the node is synchronized.
We do this by going inside the node with kubectl
The name of the container can be taken from the output at the end of the previous step. In our case it is okp4-node-0
kubectl exec -it okp4-node-0 -- bash
Once inside the container, we run
okp4d status | jq '.SyncInfo.catching_up'
If it prints false, the node is successfully synchronized.
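If you would rather wait for synchronization than check by hand, a small polling loop can re-run the check until it prints false. This is an illustrative sketch (the helper name is ours, and it assumes jq is installed in the container); on the node it could be invoked with the okp4d status check from above:

```shell
#!/bin/bash
# Illustrative helper: repeatedly evaluate a check command until it
# prints "false". Returns 0 once caught up, 1 after the given attempts.
wait_until_synced() {
  check="$1"
  tries="${2:-60}"
  i=0
  while [ "$i" -lt "$tries" ]; do
    if [ "$(eval "$check")" = "false" ]; then
      return 0
    fi
    i=$((i + 1))
    sleep 5
  done
  return 1
}

# On the node (assumption: jq is available in the container):
# wait_until_synced "okp4d status | jq -r '.SyncInfo.catching_up'"
```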
Before activation, get the required tokens from the faucet to your wallet, and add this wallet to the node using your seed phrase.
okp4d keys add <key_name> --recover
Next we can register a validator!
okp4d tx staking create-validator \
--amount=2000000uknow \
--pubkey=$(okp4d tendermint show-validator) \
--moniker=<your_moniker_name> \
--chain-id=<chain_id> \
--commission-rate="0.10" \
--commission-max-rate="0.20" \
--commission-max-change-rate="0.01" \
--min-self-delegation="2" \
--from=<key_name> \
--node http://YourIP:YourRPC_Port
Then we confirm the transaction and it’s done!
Then, as in the step above, you can check in the logs that all is well
kubectl logs -f okp4-node-0
**Delete the cluster:** To delete the entire cluster, with all the virtual machines, data, etc., you only need to enter one command:
terraform destroy
