Stackube is a Kubernetes-centric OpenStack distro. It uses Kubernetes, instead of Nova, as the compute fabric controller to provision containers as the compute instances, alongside other OpenStack services (e.g. Cinder, Neutron). It supports multiple container runtimes, e.g. Docker and Hyper, and offers built-in soft/hard multi-tenancy (depending on the container runtime used).
Stackube is an open source project with an active development community. The project was initiated by HyperHQ, with contributions from ARM, China Mobile, and others.
This page describes the architecture of Stackube.
At its core, Stackube is a multi-tenant and secure Kubernetes deployment enabled by OpenStack core components. It is a standard upstream Kubernetes deployment, extended with Stackube's own add-ons and standalone OpenStack components.
The main difference between Stackube and the existing container service projects in the OpenStack foundation (e.g. Magnum) is that Stackube works alongside OpenStack, not on OpenStack.
In summary, one distinguishing difference is that the plugins in Stackube are designed to enable hard multi-tenancy in Kubernetes as a whole solution, while the other OpenStack plugin projects do not address this and focus solely on integrating with Kubernetes/Docker as-is. That leaves many gaps to fill when using them to build a real multi-tenant cloud, for example, how tenants cooperate with networks in Kubernetes.
Another difference is that Stackube uses the mixed container runtime mode of Kubernetes to enable a secure runtime, which is not in the scope of the existing foundation projects. In fact, all plugins in Stackube should work well with both Docker and HyperContainer.
The architecture of Stackube is fully decoupled, and it would be easy (and we would like) to integrate it with any OpenStack-Kubernetes plugin. For now, though, we want to keep everything as simple as possible and focus on the core components.
1. Install standalone Keystone, Neutron, and Cinder (Ceph RBD). This can be done with any existing tool, such as devstack or RDO.
2. Install the Neutron L2 agents. Again, any existing tool such as devstack or RDO will work.
3. Install Kubernetes.
4. Deploy Stackube:
kubectl create -f stackube-configmap.yaml
kubectl create -f deployment/stackube-proxy.yaml
kubectl create -f deployment/stackube.yaml
This will deploy all Stackube plugins as Pods and DaemonSets to the cluster. You can also deploy all of these components on a single node.
After that, users can use the Kubernetes API to manage containers with hypervisor isolation, Neutron networking, Cinder volumes, and tenant awareness.
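You can verify the deployment with kubectl; for example (component names may differ slightly between versions):

kubectl -n kube-system get pods
kubectl -n kube-system get daemonsets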
Stackube is a multi-tenant and secure Kubernetes deployment enabled by OpenStack core components.
This sets up a single-node devbox.
This page describes how to set up a working development environment that can be used for developing Stackube on Ubuntu or CentOS. These instructions assume you have already installed git, golang, and python on your host.
devstack is used to spin up a Kubernetes and OpenStack environment.
Create stack user:
sudo useradd -s /bin/bash -d /opt/stack -m stack
echo "stack ALL=(ALL) NOPASSWD: ALL" | sudo tee /etc/sudoers.d/stack
sudo su - stack
Grab devstack:
git clone https://git.openstack.org/openstack-dev/devstack -b stable/ocata
cd devstack
Create a local.conf:
curl -sSL https://raw.githubusercontent.com/openstack/stackube/master/devstack/local.conf.sample -o local.conf
Start installation:
./stack.sh
Set up environment variables for kubectl and the openstack client:
export KUBECONFIG=/opt/stack/admin.conf
source /opt/stack/devstack/openrc admin admin
If your cluster was instead installed by the Stackube deployment scripts (described below), the equivalents are:
export KUBECONFIG=/etc/kubernetes/admin.conf
source openrc admin admin
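As a quick sanity check that both clients are configured, the following commands should list your node(s) and the OpenStack services:

kubectl get nodes
openstack service list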
This page describes how to set up a multi-node cluster of Stackube.
A Stackube deployment is comprised of four kinds of nodes: control, network, compute, and storage.
There is no conflict between any two roles, which means all of the roles may be deployed on the same node(s).
For now only CentOS 7.x is supported.
A number of public IPs are needed.
All instructions below must be done on the control node.
sudo su -
The control node needs SSH access to all nodes during deployment.
ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
ssh-copy-id root@NODE_IP
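If you have several nodes, a small loop saves typing; the IP addresses below are placeholders for your own node addresses:

# Replace the list with the IPs of all your control/network/compute/storage nodes.
for node in 192.168.0.33 192.168.0.34 192.168.0.35; do
    ssh-copy-id root@${node}
done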
git clone https://git.openstack.org/openstack/stackube
cd stackube/install
vim config_example
bash deploy.sh config_example
If the deployment fails, run the removal script (shown below) before deploying again.
bash remove.sh config_example
This page describes how to set up a working development environment that can be used for developing Stackube on Ubuntu or CentOS. These instructions assume you have already installed git, golang, and python on your host.
The Stackube project is very simple. The main part of it is stackube-controller, which uses Kubernetes Custom Resource Definitions (CRD, previously TPR) to manage tenants and networks.
The tenant is a CRD which maps to a Keystone tenant; the network is a CRD which maps to a Neutron network. We also have a kubestack binary, which is the CNI plug-in for Neutron.
Stackube also has its own stackube-proxy to replace kube-proxy, because networking in Stackube is L2-isolated and a multi-tenant version of kube-proxy is needed.
We also replaced kube-dns in Kubernetes for the same reason: because namespaces are isolated, we need a kube-dns running in every namespace instead of a global DNS server.
You can see that: Stackube cluster = upstream Kubernetes + several of our own add-ons + standalone OpenStack components.
Please note: Cinder RBD based block devices as volumes are implemented in https://github.com/kubernetes/frakti; if you have ideas about this feature, you need to contribute there and build a new stackube/flex-volume Docker image for Stackube to use.
Build binary:
make
The binary will be placed at:
_output/kubestack
_output/stackube-controller
_output/stackube-proxy
Build docker images:
make docker
Three docker images will be built:
stackube/stackube-proxy:v1.0beta
stackube/stackube-controller:v1.0beta
stackube/kubestack:v1.0beta
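You can confirm that the images were built, for example:

docker images | grep stackube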
If you deployed Stackube by following the official guide, you can skip this part.
If not, the steps below are needed to make sure your Stackube cluster works.
Please note that the following parts assume you have already deployed OpenStack and Kubernetes on the same baremetal host. Don't forget to set --experimental-keystone-url for kube-apiserver, e.g.
kube-apiserver --experimental-keystone-url=https://192.168.128.66:5000/v2.0 ...
Remove the kube-dns deployment and the kube-proxy daemonset if they are already running.
kubectl -n kube-system delete deployment kube-dns
kubectl -n kube-system delete daemonset kube-proxy
If you have also configured a CNI network plugin, you should remove it together with its CNI network config.
# Remove CNI network components, e.g. deployments or daemonsets first.
# Then remove CNI network config.
rm -f /etc/cni/net.d/*
Then create an external network in Neutron if there is none.
# Create an external network if there is none.
# Please replace 10.123.0.x with your own external network
# and remember the id of your created external network
neutron net-create br-ex --router:external=True --shared
neutron subnet-create --ip_version 4 --gateway 10.123.0.1 br-ex 10.123.0.0/16 --allocation-pool start=10.123.0.2,end=10.123.0.200 --name public-subnet
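The ID of the created external network is needed for the ext-net-id field in the next step. One way to look it up:

# Show only the id field of the br-ex network.
neutron net-show br-ex -F id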
And create the configuration file for Stackube.
# Remember to replace them with your own ones.
cat >stackube-configmap.yaml <<EOF
kind: ConfigMap
apiVersion: v1
metadata:
  name: stackube-config
  namespace: kube-system
data:
  auth-url: "https://192.168.128.66/identity_admin/v2.0"
  username: "admin"
  password: "admin"
  tenant-name: "admin"
  region: "RegionOne"
  ext-net-id: "550370a3-4fc2-4494-919d-cae33f5b3de8"
  plugin-name: "ovs"
  integration-bridge: "br-int"
  user-cidr: "10.244.0.0/16"
  user-gateway: "10.244.0.1"
  kubernetes-host: "192.168.0.33"
  kubernetes-port: "6443"
  keyring: "AQBZU5lZ/Z7lEBAAJuC17RYjjqIUANs2QVn7pw=="
EOF
Then deploy the Stackube components:
kubectl create -f stackube-configmap.yaml
kubectl create -f deployment/stackube-proxy.yaml
kubectl create -f deployment/stackube.yaml
kubectl create -f deployment/flexvolume/flexvolume-ds.yaml
Now, you are ready to try Stackube features.
In this part, we introduce tenant management and networking in Stackube. A tenant, which is 1:1 mapped to a Kubernetes namespace, is managed by using a Kubernetes CRD (previously TPR) to interact with Keystone. The tenant is also automatically 1:1 mapped to a network, which is likewise implemented by a CRD backed by standalone Neutron.
$ cat test-tenant.yaml
apiVersion: "stackube.kubernetes.io/v1"
kind: Tenant
metadata:
name: test
spec:
username: "test"
password: "password"
$ kubectl create -f test-tenant.yaml
$ kubectl get namespace test
NAME      STATUS    AGE
test      Active    58m
$ kubectl -n test get network test -o yaml
apiVersion: stackube.kubernetes.io/v1
kind: Network
metadata:
  clusterName: ""
  creationTimestamp: 2017-08-03T11:58:31Z
  generation: 0
  name: test
  namespace: test
  resourceVersion: "3992023"
  selfLink: /apis/stackube.kubernetes.io/v1/namespaces/test/networks/test
  uid: 11d452eb-7843-11e7-8319-68b599b7918c
spec:
  cidr: 10.244.0.0/16
  gateway: 10.244.0.1
  networkID: ""
status:
  state: Active
$ source ~/keystonerc_admin
$ neutron net-list
+--------------------------------------+----------------+----------------------------------+----------------------------------------------------+
| id                                   | name           | tenant_id                        | subnets                                            |
+--------------------------------------+----------------+----------------------------------+----------------------------------------------------+
| 421d913a-a269-408a-9765-2360e202ad5b | kube-test-test | 915b36add7e34018b7241ab63a193530 | bb446a53-de4d-4546-81fc-8736a9a88e3a 10.244.0.0/16 |
+--------------------------------------+----------------+----------------------------------+----------------------------------------------------+
# kubectl -n test get pods
NAME                        READY     STATUS    RESTARTS   AGE
kube-dns-1476438210-37jv7   3/3       Running   0          1h
# kubectl -n test run nginx --image=nginx
deployment "nginx" created
# kubectl -n test expose deployment nginx --port=80
service "nginx" exposed
# kubectl -n test get pods -o wide
NAME                        READY     STATUS    RESTARTS   AGE       IP            NODE
kube-dns-1476438210-37jv7   3/3       Running   0          1h        10.244.0.4    stackube
nginx-4217019353-6gjxq      1/1       Running   0          27s       10.244.0.10   stackube
# kubectl -n test run -i -t busybox --image=busybox sh
If you don't see a command prompt, try pressing enter.
/ # nslookup nginx
Server: 10.96.0.10
Address 1: 10.96.0.10
Name: nginx
Address 1: 10.108.57.129 nginx.test.svc.cluster.local
/ # wget -O- nginx
Connecting to nginx (10.108.57.129:80)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
- 100% |*********************************************************************| 612 0:00:00 ETA
/ #
$ kubectl delete tenant test
tenant "test" deleted
$ neutron net-list
+--------------------------------------+---------+----------------------------------+----------------------------------------------------------+
| id | name | tenant_id | subnets |
+--------------------------------------+---------+----------------------------------+----------------------------------------------------------+
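Since the tenant is deleted, its mapped namespace (and everything in it) is gone as well; kubectl should now report NotFound for it:

$ kubectl get namespace test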
This part describes the persistent volume design and usage in Stackube.
Stackube is a standard upstream Kubernetes cluster, so any type of Kubernetes volume can be used here, for example:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 1Mi
  accessModes:
    - ReadWriteMany
  nfs:
    # FIXME: use the right IP
    server: 10.244.1.4
    path: "/exports"
Please note that since Stackube is a baremetal Kubernetes cluster, cloud-provider-based volumes such as GCE and AWS are not supported by default.
Unless you are only using emptyDir or hostPath, we recommend always using the “Cinder RBD based block device as volume” approach described below; it brings much higher performance.
The reason this volume type is preferred is that, by default, Stackube runs most of your workloads in VM-based Pods. In that case, the hypervisor-based runtime uses directory sharing for volume mounts, which has more I/O overhead than a bind mount.
The hypervisor Pod, on the other hand, makes it possible to mount a block device directly into the VM-based Pod, which eliminates directory sharing.
In Stackube, we use a flexvolume to directly use Cinder RBD based block device as Pod volume. The usage is very simple:
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
      volumeMounts:
        - name: nginx-persistent-storage
          mountPath: /var/lib/nginx
  volumes:
    - name: nginx-persistent-storage
      flexVolume:
        driver: "cinder/flexvolume_driver"
        fsType: ext4
        options:
          volumeID: daa7b4e6-1792-462d-ad47-78e900fed429
Please note the name of the flexvolume is “cinder/flexvolume_driver”. Users are expected to provide a valid volume ID created with Cinder beforehand; the related RBD device will then be attached to the VM-based Pod.
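For example, the volume referenced above could have been created beforehand with the Cinder CLI (the name and size here are illustrative); the id field in the output is what goes into volumeID:

cinder create --name nginx-volume 1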
If your cluster was installed by stackube/devstack or by following another official Stackube guide, a /etc/kubernetes/cinder.conf file will have been generated automatically on every node.
Otherwise, users are expected to write a /etc/kubernetes/cinder.conf on every node. The contents look like this:
[Global]
auth-url = _AUTH_URL_
username = _USERNAME_
password = _PASSWORD_
tenant-name = _TENANT_NAME_
region = _REGION_
[RBD]
keyring = _KEYRING_
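For example, with the sample values used in the ConfigMap earlier (illustrative only, replace with your own), the file would look like:

[Global]
auth-url = https://192.168.128.66/identity_admin/v2.0
username = admin
password = admin
tenant-name = admin
region = RegionOne

[RBD]
keyring = AQBZU5lZ/Z7lEBAAJuC17RYjjqIUANs2QVn7pw==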
Users also need to make sure the flexvolume_driver binary is in /usr/libexec/kubernetes/kubelet-plugins/volume/exec/cinder~flexvolume_driver/ on every node.
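A minimal sketch of installing the binary, assuming it has already been built or copied onto the node:

mkdir -p /usr/libexec/kubernetes/kubelet-plugins/volume/exec/cinder~flexvolume_driver/
cp flexvolume_driver /usr/libexec/kubernetes/kubelet-plugins/volume/exec/cinder~flexvolume_driver/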
This is the 1.0 Beta release of Stackube: a secure, multi-tenant, and Kubernetes-centric OpenStack distribution.
1. Implemented an auth controller that watches for tenant/network/namespace changes and sets up RBAC roles for each tenant. This is how we match Kubernetes authorization control to OpenStack tenants, via the tenant-namespace 1:1 mapping. The controller is implemented using Kubernetes CRD.
2. Implemented a network controller that watches for network/namespace changes and operates on OpenStack networks based on the network-namespace 1:1 mapping. This is how we define multi-tenant networking in Kubernetes by using Neutron. The controller is implemented using Kubernetes CRD.
3. Implemented a CNI plug-in named kubestack. This is a CNI plug-in for Neutron that works with the network controller model mentioned above. When the Neutron network is ready, kubestack is responsible for configuring Pods to use this network by following the standard CNI workflow.
4. Implemented stackube-proxy to replace the default kube-proxy in Kubernetes, so that the proxy is aware of the multi-tenant network.
5. Implemented a Stackube DNS add-on to replace the default kube-dns in Kubernetes, so that the DNS server is aware of the multi-tenant network.
6. Integrated cinder-flexvolume from the kubernetes/frakti project, so that the hypervisor-based container runtime (runV) in Stackube can mount Cinder volumes (RBD) directly into VM-based Pods. This brings better performance to users.
7. Improved stackube-proxy so that Neutron LBaaS based services can be used in Stackube when a LoadBalancer type service is created in Kubernetes (see the example after this list).
8. Created Docker images for all of the add-ons and plug-ins above so they can be deployed in Stackube with one command.
9. Added deployment documentation for Stackube covering Kubernetes + mixed container runtime + CNI + volume plugin + standalone OpenStack components.
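As referenced in item 7, a minimal LoadBalancer type Service that exercises the Neutron LBaaS integration might look like this (names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
  namespace: test
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80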