# Additional IP Pool in Calico

By [DeFi (in)security](https://paragraph.com/@defi-in-security) · 2024-09-24

---

In Kubernetes clusters using Calico as the networking solution, you might encounter scenarios where you need to assign different source IP addresses for pods within your internal network. This can help in better traffic management, policy enforcement, or avoiding IP conflicts.

This guide walks you through configuring an additional IP pool in Calico to change the source IP addresses used by pods. We'll use `calicoctl apply -f ippool.yaml` to apply the configuration, ensuring that the `blockSize` parameter in the `IPPool` manifest is set correctly: specifically, smaller than the default Calico block size, which means a larger subnet-prefix number (e.g., /28 instead of /26).

Prerequisites
-------------

- A running Kubernetes cluster with Calico installed.
- Access to the cluster using `kubectl`.
- The `calicoctl` command-line tool installed and configured to interact with your Calico installation.

Note: This guide assumes that Calico and its tools are already installed and configured in your cluster.

Step 1: Understand the Default IPAM Configuration
-------------------------------------------------

By default, Calico uses a block size of `/26` for IP address management. This means each node is assigned IP addresses from a `/26` subnet, providing 64 IP addresses per node.

To create smaller blocks and have more granular control over IP allocation, we can define a custom IP pool with a smaller `blockSize`. This allows for more efficient IP usage and prevents IP exhaustion on nodes with fewer pods.
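The arithmetic behind these block sizes is simple: a block with prefix length `p` contains 2^(32−p) addresses. A quick sketch of the numbers used in this guide:

```shell
# Addresses per block for a given prefix length: 2^(32 - prefix).
default_block=26
custom_block=28

echo "IPs per /26 block: $((1 << (32 - default_block)))"   # 64
echo "IPs per /28 block: $((1 << (32 - custom_block)))"    # 16

# Number of /28 blocks that fit in a /16 pool: 2^(28 - 16).
echo "/28 blocks in a /16 pool: $((1 << (custom_block - 16)))"  # 4096
```

So moving from /26 to /28 quarters the addresses reserved per node while quadrupling the number of blocks the pool can hand out.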

Step 2: Create the IPPool Manifest
----------------------------------

We'll create a new IP pool by defining an `IPPool` in a YAML file named `ippool.yaml`.

Here is how to define the IP pool with a custom `blockSize`:

    apiVersion: projectcalico.org/v3
    kind: IPPool
    metadata:
      name: custom-ippool
    spec:
      cidr: "172.25.0.0/16"
      ipipMode: Never
      vxlanMode: Never
      natOutgoing: true
      blockSize: 28
      disabled: false
      nodeSelector: "!all()"
    

`apiVersion`: Specifies the Calico API version (`projectcalico.org/v3`).  
`kind`: `IPPool` indicates that we are defining an IP address pool.  
`metadata.name`: A unique name for your IP pool (`custom-ippool`).  
`spec.cidr`: The CIDR range for the new IP pool (`172.25.0.0/16` in this example).  
`spec.blockSize`: Set to 28 to define a subnet mask of /28, resulting in blocks of 16 IP addresses per node (smaller than the default /26).  
`spec.ipipMode` and `spec.vxlanMode`: Set according to your networking requirements (`Never`, `Always`, or `CrossSubnet`).  
`spec.natOutgoing`: Set to `true` to enable outbound NAT for traffic leaving this pool.  
`spec.disabled`: Set to `false` to enable the IP pool.  
`spec.nodeSelector`: Set to `!all()` so the selector matches no nodes. The pool will then not be used for automatic IP assignment but, unlike setting `disabled: true`, it can still be used for manual assignments.

Reference:

[https://docs.tigera.io/archive/v3.19/reference/resources/ippool](https://docs.tigera.io/archive/v3.19/reference/resources/ippool)

Note: The `blockSize` parameter specifies the size of IP address blocks assigned to each node. A smaller block size means each node gets a smaller range of IP addresses, which can improve IP utilization in clusters where nodes run a small number of pods.

Step 3: Apply the IP Pool Configuration
---------------------------------------

Use `calicoctl` to apply the new IP pool configuration to your cluster:

`calicoctl apply -f ippool.yaml`

Using `calicoctl` ensures that the configuration is validated before being applied, helping to catch any misconfigurations.

Important: Ensure that `calicoctl` is configured to communicate with your datastore (e.g., etcd or the Kubernetes API server). If using Kubernetes API datastore, set the necessary environment variables or use the `--config` flag with a configuration file.
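For example, when Calico uses the Kubernetes API server as its datastore, `calicoctl` can be pointed at it with environment variables before applying the manifest (the kubeconfig path here is an assumption; adjust for your setup):

```shell
# Point calicoctl at the Kubernetes API datastore.
export DATASTORE_TYPE=kubernetes
# Example kubeconfig path; substitute your own.
export KUBECONFIG=~/.kube/config

# Apply the pool definition from Step 2.
calicoctl apply -f ippool.yaml
```

Alternatively, these settings can live in a `calicoctl` configuration file passed via the `--config` flag.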

Step 4: Verify the IP Pool Configuration
----------------------------------------

After applying the IP pool, verify that it has been correctly configured and is active.

List all IP pools:

`calicoctl get ippools`

You should see `custom-ippool` listed with the correct CIDR and settings.

To get detailed information about the pool:

`calicoctl get ippool custom-ippool -o yaml`

Ensure that the `blockSize` is set to 28 and other parameters are as intended.

Step 5: Assign Pods to the New IP Pool Using Annotations
--------------------------------------------------------

To have pods use IP addresses from the new IP pool, you can specify the IP pool in the pod's annotations.

In your pod or deployment manifest, add the following annotation:

    metadata:
      annotations:
        'cni.projectcalico.org/ipv4pools': '["custom-ippool"]'
    

Example Pod Manifest:

    apiVersion: v1
    kind: Pod
    metadata:
      name: test-pod
      annotations:
        'cni.projectcalico.org/ipv4pools': '["custom-ippool"]'
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
    

Note: The annotation `cni.projectcalico.org/ipv4pools` tells Calico to assign the pod an IP address from the specified IP pool, `custom-ippool` (matching the `metadata.name` of the pool created in Step 2).
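The same annotation works for higher-level controllers, but it must go on the pod template, not on the controller's own metadata; otherwise Calico never sees it. A minimal Deployment sketch (the name and labels are illustrative):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test-deploy          # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: test-deploy
  template:
    metadata:
      labels:
        app: test-deploy
      annotations:
        # The annotation belongs on the pod template so that every
        # pod created by this Deployment draws from the custom pool.
        'cni.projectcalico.org/ipv4pools': '["custom-ippool"]'
    spec:
      containers:
        - name: nginx
          image: nginx
```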

Step 6: Deploy the Test Pod
---------------------------

Apply the pod manifest:

`kubectl apply -f test-pod.yaml`

Step 7: Verify the Pod's IP Address
-----------------------------------

Check the pod's IP address:

`kubectl get pod test-pod -o wide`

The IP address should be within the `172.25.0.0/16` range.
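For a /16 pool, a quick shell check on the first two octets is enough to confirm membership (the IP below is a placeholder; in practice you would capture it with `kubectl get pod test-pod -o jsonpath='{.status.podIP}'`):

```shell
# Example pod IP; substitute the value reported by kubectl.
POD_IP="172.25.3.17"

# For the 172.25.0.0/16 pool, membership reduces to a prefix match.
case "$POD_IP" in
  172.25.*) echo "in pool" ;;
  *)        echo "NOT in pool" ;;
esac
```

This prefix trick only works because the pool prefix falls on an octet boundary (/16); for other prefix lengths a proper CIDR-aware tool is needed.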

Step 8: Confirm Block Size Allocation
-------------------------------------

To ensure that the `blockSize` parameter is in effect, you can check the IP blocks allocated to nodes.

Use the following command:

`calicoctl get ipamblocks -o yaml`

Look for blocks within `172.25.0.0/16` assigned to nodes and verify that they have a `/28` mask.

Step 9: Monitor and Troubleshoot
--------------------------------

Monitor Calico and the cluster to ensure the new IP pool is functioning as expected.

Check Calico Node Logs:

`kubectl logs -n kube-system -l k8s-app=calico-node`

Describe the Pod:

`kubectl describe pod test-pod`

Check for any events or errors related to IP assignment.

Check Calico IPAM Status:

`calicoctl ipam show --show-blocks`

This displays the current IP allocations and blocks.

Important Considerations
------------------------

**Avoid IP Conflicts:** Ensure that the CIDR of the new IP pool does not overlap with other networks in your cluster or external networks.  
**Routing and Networking:** If your cluster uses overlay networking (e.g., IP-in-IP, VXLAN), ensure that the new IP range is properly routed within your network infrastructure.  
**Policy Enforcement:** Update network policies if necessary to allow or restrict traffic to and from the new IP range.

Conclusion
----------

By configuring an additional IP pool with a custom `blockSize` in Calico, you gain finer control over IP address allocation within your Kubernetes cluster's internal network. This setup can improve IP address utilization and prevent potential exhaustion.

Using `calicoctl apply -f ippool.yaml` ensures that configurations are validated before application, reducing the risk of misconfigurations impacting your cluster.

Note: Be sure to replace the CIDR range and other parameters in the examples with values appropriate for your network environment. Always test changes in a staging environment before applying them to production.

---

*Originally published on [DeFi (in)security](https://paragraph.com/@defi-in-security/additional-ip-pool-in-calico)*
