K8S - Deploy NFS Server on K8S
I want to test StatefulSets on K8S, which requires cluster-wide storage.
NFS is the easiest solution, and we can deploy an NFS server on K8S itself.
Because pods can be rescheduled to any node, we should pin the NFS server to a specific node so that its hostPath data stays in one place.
# kubectl create -f nfs-server.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1                  # <- no more replicas
  template:
    metadata:
      labels:
        app: nfs-server
    spec:
      nodeSelector:            # <- use a selector to pin nfs-server to k8s2.zhangqiaoc.com
        kubernetes.io/hostname: k8s2.zhangqiaoc.com
      containers:
      - name: nfs-server
        image: itsthenetwork/nfs-server-alpine:latest
        volumeMounts:
        - name: nfs-storage
          mountPath: /nfsshare
        env:
        - name: SHARED_DIRECTORY
          value: "/nfsshare"
        ports:
        - name: nfs
          containerPort: 2049  # <- exported port
        securityContext:
          privileged: true     # <- privileged mode is mandatory
      volumes:
      - name: nfs-storage
        hostPath:              # <- the folder on the host machine
          path: /root/fileshare
Create a service to expose NFS to the other pods:
# kubectl expose deployment nfs-server --type=ClusterIP
# kubectl get svc
NAME         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.96.0.1        <none>        443/TCP    9d
nfs-server   ClusterIP   10.101.117.226   <none>        2049/TCP   14s
Testing:
# yum install -y nfs-utils
# mkdir /root/nfsmnt
# mount -v 10.101.117.226:/ /root/nfsmnt
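A quick write/read check of the share can follow; this is only a sketch, assuming the mount above succeeded, and it relies on /root/fileshare being the hostPath configured for k8s2.zhangqiaoc.com in nfs-server.yml.

# write a file through the NFS mount and read it back
echo "hello from nfs" > /root/nfsmnt/test.txt
cat /root/nfsmnt/test.txt
# the same file should also appear in /root/fileshare on k8s2.zhangqiaoc.com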
Create a PV and PVC for testing
# kubectl create -f pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mypv1
spec:
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: "/"
    server: nfs-server

# kubectl get PersistentVolume
NAME    CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
mypv1   20Gi       RWO            Recycle          Available                                   35s

# kubectl create -f pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: mypvc1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

# kubectl get PersistentVolumeClaim
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
mypvc1   Bound    mypv1    20Gi       RWO                           13s
In order to use a StatefulSet, we need a StorageClass, and the StorageClass needs a provisioner. A minimal StatefulSet sketch using this class is shown after the test claim at the end of this section.
Create a service account named nfs-provisioner:
# kubectl create -f serviceaccount.yml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-provisioner
For RBAC, we should create a ClusterRole named nfs-provisioner-runner and a Role named leader-locking-nfs-provisioner:
# kubectl create -f rbac.yml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-provisioner
  apiGroup: rbac.authorization.k8s.io
Then we can deploy the NFS client provisioner:
# kubectl create -f deployment.yaml
# https://github.com/kubernetes-incubator/external-storage/tree/master/nfs-client
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: nfs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-provisioner
    spec:
      serviceAccount: nfs-provisioner
      containers:
        - name: nfs-provisioner
          # image: registry.cn-hangzhou.aliyuncs.com/open-ali/nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: zhangqiaoc.com/nfs   # this name will be used to create the storage class
            - name: NFS_SERVER
              value: 10.101.117.226       # nfs server, service name may not be working
            - name: NFS_PATH
              value: /
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.101.117.226
            path: /

# kubectl create -f storageclass.yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: nfs
provisioner: zhangqiaoc.com/nfs
For testing
# kubectl create -f test-claim.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim1
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi
  storageClassName: nfs

# kubectl get pvc
NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
test-claim1   Bound    pvc-25c3b950-8ff1-11e9-9a33-000c29d8a18d   1Mi        RWX            nfs            16s
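As promised above, here is a minimal StatefulSet sketch that consumes the nfs StorageClass. Everything except the class name is an assumption made up for illustration (the name "web", the nginx:alpine image, the replica count and the 10Mi request); the point is the volumeClaimTemplates block with storageClassName: nfs, which makes the provisioner create one PV per pod.

cat <<'EOF' | kubectl create -f -
# headless service required by the StatefulSet (hypothetical name "web")
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  clusterIP: None
  selector:
    app: web
  ports:
  - port: 80
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web
spec:
  serviceName: web
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: nginx
        image: nginx:alpine          # any small image works for the test
        volumeMounts:
        - name: data
          mountPath: /usr/share/nginx/html
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: nfs          # the class backed by the provisioner above
      resources:
        requests:
          storage: 10Mi
EOF

Each replica should then get its own Bound PVC (data-web-0, data-web-1) backed by a directory on the NFS share.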
Tomcat with Memcached for Session Replication in Docker
I tested using Tomcat Cluster for the session replication.
Tomcat Cluster Session Replication in Docker
Next, I want to use memcached instead of Tomcat Cluster for session replication.
First, create two memcached containers: my-memcache1 and my-memcache2
# docker run --name my-memcache1 --rm -d memcached memcached -m 64
# docker run --name my-memcache2 --rm -d memcached memcached -m 64
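A quick sanity check that memcached is listening can look like the sketch below; it assumes nc is installed on the Docker host and that the containers are on the default bridge network, neither of which the post states.

# look up the container IP and ask memcached for its stats
MC1_IP=$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' my-memcache1)
printf 'stats\r\nquit\r\n' | nc "$MC1_IP" 11211 | head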
Next, we should prepare the JAR files that Tomcat needs to connect to memcached.
http://repo1.maven.org/maven2/de/javakaffee/msm/  -- download the latest one
  memcached-session-manager-tc7-2.3.2.jar
  memcached-session-manager-tc8-2.3.2.jar
  spymemcached-2.11.1.jar
  memcached-session-manager-2.3.2.jar
  msm-flexjson-serializer-2.3.2.jar
  msm-kryo-serializer-2.3.2.jar
  msm-javolution-serializer-2.1.1.jar
  msm-serializer-benchmark-2.1.1.jar
  msm-xstream-serializer-2.3.2.jar

https://github.com/EsotericSoftware/kryo/releases  -- don't download an RC edition
  minlog-1.3.0.jar
  kryo-4.0.2.jar
  objenesis-2.6.jar
  reflectasm-1.11.6.jar

https://github.com/magro/kryo-serializers  -- download the source code and build the jar
  git clone https://github.com/magro/kryo-serializers
  mvn package
  cd target
  mv kryo-serializers-0.46-SNAPSHOT.jar kryo-serializers-0.46.jar
Dockerfile
FROM tomcat:7.0.94-jre7-alpine
MAINTAINER q1zhang@odu.edu
ADD index.jsp /usr/local/tomcat/webapps/ROOT
ADD web.xml /usr/local/tomcat/conf
ADD server.xml /usr/local/tomcat/conf
ADD context.xml /usr/local/tomcat/conf
ADD rootweb.xml /usr/local/tomcat/webapps/ROOT/WEB-INF/web.xml
ADD pre.sh /usr/local/tomcat
ADD *.jar /usr/local/tomcat/lib/
EXPOSE 8080
WORKDIR /usr/local/tomcat
ENTRYPOINT bash ./pre.sh && catalina.sh run
server.xml: add the following inside the Host element.
<Context path="/" docBase="ROOT" reloadable="true">
  <Manager className="de.javakaffee.web.msm.MemcachedBackupSessionManager"
    memcachedNodes="n1:my-memcache1:11211,n2:my-memcache2:11211"
    failoverNodes="n1"
    requestUriIgnorePattern=".*\.(ico|png|gif|jpg|css|js)$"
    transcoderFactoryClass="de.javakaffee.web.msm.serializer.kryo.KryoTranscoderFactory" />
</Context>
</Host>
The other files are the same as the previous example.
Then build the image and start two instances.
# docker build -t myapp:v3 .
# docker run --name myapp1 --link=my-memcache1 --link=my-memcache2 -it --rm -p 8888:8080 myapp:v3
# docker run --name myapp2 --link=my-memcache1 --link=my-memcache2 -it --rm -p 8889:8080 myapp:v3
Now we can run a test to verify.
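A curl sketch of that check, assuming the containers are published on localhost:8888/8889 as above and that index.jsp keeps the layout shown in the previous post: reuse one cookie jar so the JSESSIONID is re-sent, and the session ID printed by both instances should match (possibly with a different node suffix).

# same cookie jar for both requests
curl -c /tmp/cj.txt -b /tmp/cj.txt -s http://localhost:8888/ | grep -A2 'Session ID'
curl -c /tmp/cj.txt -b /tmp/cj.txt -s http://localhost:8889/ | grep -A2 'Session ID'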
Tomcat Cluster Session Replication in Docker
I am testing how to move our applications to docker. In order to take advantage of k8s, we want to have session replication.
First, I am testing with Tomcat Cluster:
Dockerfile
# cat Dockerfile
FROM tomcat:7.0.94-jre7-alpine
MAINTAINER q1zhang@odu.edu
ADD index.jsp /usr/local/tomcat/webapps/ROOT
ADD web.xml /usr/local/tomcat/conf
ADD server.xml /usr/local/tomcat/conf
ADD context.xml /usr/local/tomcat/conf
ADD rootweb.xml /usr/local/tomcat/webapps/ROOT/WEB-INF/web.xml
ADD pre.sh /usr/local/tomcat
EXPOSE 8080
WORKDIR /usr/local/tomcat
ENTRYPOINT bash ./pre.sh && catalina.sh run
index.jsp, used for testing, to display current hostname and session id.
# cat index.jsp
<%@ page language="java" %>
<html>
  <head><title>Test</title></head>
  <body>
    <h1><font color="red">#HOSTNAME#</font></h1>
    <table align="centre" border="1">
      <tr>
        <td>Session ID</td>
        <% session.setAttribute("magedu.com","magedu.com"); %>
        <td><%= session.getId() %></td>
      </tr>
      <tr>
        <td>Created on</td>
        <td><%= session.getCreationTime() %></td>
      </tr>
    </table>
  </body>
</html>
pre.sh, used to change some configuration files before starting Tomcat.
# cat pre.sh
sed -i "s/#HOSTNAME#/`hostname`/g" /usr/local/tomcat/webapps/ROOT/index.jsp
sed -i "s/#HOSTNAME#/localhost/g" /usr/local/tomcat/conf/server.xml
sed -i "s/#IPADDRESS#/`ifconfig eth0|grep "inet addr"|awk '{print $2}'|sed 's/addr://g'`/g" /usr/local/tomcat/conf/server.xml
server.xml: copy it from the image (/usr/local/tomcat/conf) and add the cluster configuration.
<Host name="#HOSTNAME#" appBase="webapps"
      unpackWARs="true" autoDeploy="true">

  <!-- SingleSignOn valve, share authentication between web applications
       Documentation at: /docs/config/valve.html -->
  <!--
  <Valve className="org.apache.catalina.authenticator.SingleSignOn" />
  -->

  <!-- Access log processes all example.
       Documentation at: /docs/config/valve.html
       Note: The pattern used is equivalent to using pattern="common" -->
  <Valve className="org.apache.catalina.valves.AccessLogValve" directory="logs"
         prefix="#HOSTNAME#_access_log." suffix=".txt"
         pattern="%h %l %u %t &quot;%r&quot; %s %b" />

  <Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster"
           channelSendOptions="8">
    <Manager className="org.apache.catalina.ha.session.DeltaManager"
             expireSessionsOnShutdown="false"
             notifyListenersOnReplication="true"/>
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
      <Membership className="org.apache.catalina.tribes.membership.McastService"
                  address="228.0.0.4"
                  port="45564"
                  frequency="500"
                  dropTime="3000"/>
      <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
                address="#IPADDRESS#"
                port="4000"
                autoBind="100"
                selectorTimeout="5000"
                maxThreads="6"/>
      <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
        <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
      </Sender>
      <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
      <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    </Channel>
    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
    <Deployer className="org.apache.catalina.ha.deploy.FarmWarDeployer"
              tempDir="/tmp/war-temp/"
              deployDir="/tmp/war-deploy/"
              watchDir="/tmp/war-listen/"
              watchEnabled="false"/>
    <ClusterListener className="org.apache.catalina.ha.session.JvmRouteSessionIDBinderListener"/>
    <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
  </Cluster>
</Host>
web.xml: copy it from the image (/usr/local/tomcat/conf) and add <distributable/>.
web.xml
    <welcome-file-list>
        <welcome-file>index.html</welcome-file>
        <welcome-file>index.htm</welcome-file>
        <welcome-file>index.jsp</welcome-file>
    </welcome-file-list>
    <distributable/>
</web-app>
rootweb.xml: copy it from the image (/usr/local/tomcat/webapps/ROOT/WEB-INF/web.xml) and add <distributable/>.
rootweb.xml
    <display-name>Welcome to Tomcat</display-name>
    <description>
        Welcome to Tomcat
    </description>
    <distributable/>
</web-app>
context.xml: copy it from the image (/usr/local/tomcat/conf) and set distributable to true.
<Context distributable="true">
Create the image and run 2 instances.
# docker build -t myapp:v2 .
# docker run --name myapp1 -it --rm -p 8888:8080 myapp:v2 &
# docker run --name myapp2 -it --rm -p 8889:8080 myapp:v2 &
In nginx:
# cat /etc/nginx/conf.d/proxy.conf
upstream web {
    server 83.16.16.73:8888 weight=1;
    server 83.16.16.73:8889 weight=1;
}
server {
    listen 80 default_server;
    index index.jsp index.html;
    location / {
        proxy_pass http://web;
    }
}
First access:
Refresh, and you can see that the session is now served by the other node, with the same session ID.
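The same check can be scripted with curl against the nginx proxy; a sketch, where NGINX_HOST stands for whichever box runs the proxy.conf above (the post does not give that address), and the grep offsets assume the index.jsp layout shown earlier.

NGINX_HOST=changeme.example.com   # assumption: the host running proxy.conf
# two requests with one cookie jar: the hostname in <h1> should change,
# while the Session ID row should stay the same
curl -c /tmp/cj.txt -b /tmp/cj.txt -s "http://$NGINX_HOST/" | grep -E -A2 '<h1>|Session ID'
curl -c /tmp/cj.txt -b /tmp/cj.txt -s "http://$NGINX_HOST/" | grep -E -A2 '<h1>|Session ID'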
Kubernetes - create a test cluster
I am using three machines on VMWare to start.
83.16.16.71 k8s1 master
83.16.16.72 k8s2 node1
83.16.16.73 k8s3 node2
Using Ansible roles downloaded from Ansible Galaxy saves a lot of time.
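The roles can be pulled from Galaxy once before running anything; a sketch, using the role names referenced in kubernetes.yml further down.

ansible-galaxy install geerlingguy.repo-epel geerlingguy.pip geerlingguy.docker geerlingguy.kubernetes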
Inventory:
[k8s]
83.16.16.71 kubernetes_role=master hn=k8s1
83.16.16.72 kubernetes_role=node   hn=k8s2
83.16.16.73 kubernetes_role=node   hn=k8s3

[k8s:vars]
ansible_user=root
ping_target=8.8.8.8
First, set up the environment: set the hostname, disable the firewall, SELinux, swap, etc.
# k8s_env.yml
---
- hosts: all
  tasks:
    - name: dns
      lineinfile: dest=/etc/resolv.conf regexp='^nameserver *8.8.8.8$' line="nameserver 8.8.8.8" state=present backup=yes
    - name: test connection
      command: ping -c 3 {{ ping_target }}
    - name: hostname
      hostname:
        name: "{{ hn }}.zhangqiaoc.com"
    - name: hosts
      lineinfile: dest=/etc/hosts regexp='.* {{ hostvars[item].hn }}.*' line="{{ hostvars[item].ansible_default_ipv4.address }} {{ hostvars[item].hn }}.zhangqiaoc.com" state=present
      when: hostvars[item].ansible_default_ipv4.address is defined
      with_items: "{{ groups['k8s'] }}"
    - name: yum update
      yum:
        name: '*'
        state: latest
        update_only: yes
    - name: firewall
      systemd:
        name: firewalld
        enabled: no
        state: stopped
    - name: Disable SWAP 1
      shell: |
        swapoff -a
    - name: Disable SWAP 2
      replace:
        path: /etc/fstab
        regexp: '^(.+ swap .*)$'
        replace: '# \1'
    - name: selinux
      selinux:
        state: disabled
      notify: reboot
    - name: meta
      meta: flush_handlers
  handlers:
    - name: reboot
      reboot:
        reboot_timeout: 600
$ ansible-playbook -i hosts k8s_env.yml
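A few ad-hoc spot checks of the result are cheap; this sketch only uses standard Ansible ad-hoc commands against the inventory above.

# SELinux should report Disabled, swap should be empty, hostnames should carry the domain
ansible -i hosts k8s -m command -a 'getenforce'
ansible -i hosts k8s -m command -a 'swapon --show'
ansible -i hosts k8s -m command -a 'hostname'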
Next, install and configure Kubernetes.
# kubernetes.yml
---
- hosts: all
  vars:
    kubernetes_allow_pods_on_master: True
    pip_install_packages:
      - name: docker
      - name: awscli
  roles:
    - geerlingguy.repo-epel
    - geerlingguy.pip
    - geerlingguy.docker
    - geerlingguy.kubernetes
$ ansible-playbook -i hosts kubernetes.yml
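A quick check on the master (k8s1) that the cluster actually came up:

# all three nodes should eventually report Ready
kubectl get nodes -o wide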
Now we have a Kubernetes cluster. Next, we can create an app for testing.
# kubectl run kubernetes-bootcamp --image=gcr.io/google-samples/kubernetes-bootcamp:v1 --port=8080
# kubectl expose deployment kubernetes-bootcamp --type=NodePort
# kubectl scale deployments/kubernetes-bootcamp --replicas=4

# kubectl get deployments
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
kubernetes-bootcamp   4/4     4            4           14h

# kubectl get pods -o wide
NAME                                   READY   STATUS    RESTARTS   AGE     IP            NODE
kubernetes-bootcamp                    1/1     Running   0          2m43s   10.244.1.8    k8s3.zhangqiaoc.com
kubernetes-bootcamp-6bf84cb898-44xxm   1/1     Running   0          19s     10.244.1.9    k8s3.zhangqiaoc.com
kubernetes-bootcamp-6bf84cb898-d7kgv   1/1     Running   0          19s     10.244.2.8    k8s2.zhangqiaoc.com
kubernetes-bootcamp-6bf84cb898-ps77s   1/1     Running   0          69s     10.244.2.7    k8s2.zhangqiaoc.com
kubernetes-bootcamp-6bf84cb898-rfwpl   1/1     Running   0          19s     10.244.1.10   k8s3.zhangqiaoc.com

# kubectl get services
NAME                  TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)          AGE
kubernetes            ClusterIP   10.96.0.1        <none>        443/TCP          14h
kubernetes-bootcamp   NodePort    10.104.227.188   <none>        8080:31186/TCP   15s

# pod IP
# curl 10.244.1.8:8080
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp | v=1

# cluster IP
# curl 10.104.227.188:8080
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-6bf84cb898-d7kgv | v=1

# machine IP
# curl 83.16.16.72:31186
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-6bf84cb898-ps77s | v=1
Now we can use the physical IP address 83.16.16.72 to access the pods. But if this machine goes down, how do we switch to another IP address?
We can use nginx as a load balancer. I add a new box, 83.16.16.70, to install nginx on.
In order to install nginx with yum, we have to enable EPEL first.
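For example, on CentOS 7 (standard package names, not taken from the post):

yum install -y epel-release
yum install -y nginx
systemctl enable --now nginx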
After installing, configure the load balancer:
# vi /etc/nginx/nginx.conf
# add
stream {
    server {
        listen 8002;
        proxy_pass bootcamp;
    }
    upstream bootcamp {
        server 83.16.16.72:31186;
        server 83.16.16.73:31186;
    }
}
# systemctl restart nginx
Now, we can use nginx to access our application directly.
# curl 83.16.16.70:8002
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp | v=1
Next, there is a problem: for every new deployment we need to change nginx.conf manually, and sometimes the load balancer is not under our control.
So the better way is to use a K8S ingress: expose only one port on every node to the load balancer and let the ingress forward requests depending on the URL.
Create the ingress controller:
# cat nginx-ingress-controller.yml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend
  labels:
    k8s-app: default-http-backend
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        # Any image is permissable as long as:
        # 1. It serves a 404 page at /
        # 2. It serves 200 on a /healthz endpoint
        image: gcr.io/google_containers/defaultbackend:1.0
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 30
          timeoutSeconds: 5
        ports:
        - containerPort: 8080
        resources:
          limits:
            cpu: 10m
            memory: 20Mi
          requests:
            cpu: 10m
            memory: 20Mi
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend
  namespace: kube-system
  labels:
    k8s-app: default-http-backend
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    k8s-app: default-http-backend
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  labels:
    k8s-app: nginx-ingress-controller
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: nginx-ingress-controller
    spec:
      # hostNetwork makes it possible to use ipv6 and to preserve the source IP correctly regardless of docker configuration
      # however, it is not a hard dependency of the nginx-ingress-controller itself and it may cause issues if port 10254 already is taken on the host
      # that said, since hostPort is broken on CNI (https://github.com/kubernetes/kubernetes/issues/31307) we have to use hostNetwork where CNI is used
      # like with kubeadm
      serviceAccountName: nginx-ingress-serviceaccount
      hostNetwork: true
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.11
        name: nginx-ingress-controller
        readinessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
        livenessProbe:
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
# cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - endpoints
      - nodes
      - pods
      - secrets
    verbs:
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - nodes
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - services
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses
    verbs:
      - get
      - list
      - watch
  - apiGroups:
      - ""
    resources:
      - events
    verbs:
      - create
      - patch
  - apiGroups:
      - "extensions"
    resources:
      - ingresses/status
    verbs:
      - update
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: Role
metadata:
  name: nginx-ingress-role
  namespace: kube-system
rules:
  - apiGroups:
      - ""
    resources:
      - configmaps
      - pods
      - secrets
      - namespaces
    verbs:
      - get
  - apiGroups:
      - ""
    resources:
      - configmaps
    resourceNames:
      - "ingress-controller-leader-nginx"
    verbs:
      - get
      - update
  - apiGroups:
      - ""
    resources:
      - configmaps
    verbs:
      - create
  - apiGroups:
      - ""
    resources:
      - endpoints
    verbs:
      - get
      - create
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: nginx-ingress-role-nisa-binding
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: nginx-ingress-role
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-nisa-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
  - kind: ServiceAccount
    name: nginx-ingress-serviceaccount
    namespace: kube-system
# kubectl -n kube-system get deployments
NAME                       READY   UP-TO-DATE   AVAILABLE   AGE
coredns                    2/2     2            2           15h
default-http-backend       1/1     1            1           14h
nginx-ingress-controller   1/1     1            1           14h

# kubectl -n kube-system get pods -o wide
NAME                                          READY   STATUS    RESTARTS   AGE     IP            NODE
coredns-86c58d9df4-gtrf5                      1/1     Running   2          15h     10.244.0.7    k8s1.zhangqiaoc.com
coredns-86c58d9df4-xt4gq                      1/1     Running   2          15h     10.244.0.6    k8s1.zhangqiaoc.com
default-http-backend-64c956bc67-jwxsd         1/1     Running   0          5h44m   10.244.0.8    k8s1.zhangqiaoc.com
etcd-k8s1.zhangqiaoc.com                      1/1     Running   2          15h     83.16.16.71   k8s1.zhangqiaoc.com
kube-apiserver-k8s1.zhangqiaoc.com            1/1     Running   2          15h     83.16.16.71   k8s1.zhangqiaoc.com
kube-controller-manager-k8s1.zhangqiaoc.com   1/1     Running   3          15h     83.16.16.71   k8s1.zhangqiaoc.com
kube-flannel-ds-amd64-496md                   1/1     Running   1          15h     83.16.16.73   k8s3.zhangqiaoc.com
kube-flannel-ds-amd64-8trbt                   1/1     Running   1          15h     83.16.16.72   k8s2.zhangqiaoc.com
kube-flannel-ds-amd64-hhn7z                   1/1     Running   2          15h     83.16.16.71   k8s1.zhangqiaoc.com
kube-proxy-28kfg                              1/1     Running   1          15h     83.16.16.72   k8s2.zhangqiaoc.com
kube-proxy-p65m8                              1/1     Running   2          15h     83.16.16.71   k8s1.zhangqiaoc.com
kube-proxy-vqv64                              1/1     Running   1          15h     83.16.16.73   k8s3.zhangqiaoc.com
kube-scheduler-k8s1.zhangqiaoc.com            1/1     Running   2          15h     83.16.16.71   k8s1.zhangqiaoc.com
nginx-ingress-controller-59469ff966-4ddgp     1/1     Running   0          103m    83.16.16.71   k8s1.zhangqiaoc.com
Create an ingress rule:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - host: www.zhangqiaoc.com
    http:
      paths:
      - backend:
          serviceName: kubernetes-bootcamp
          servicePort: 8080
        path: /bootcamp
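Before touching nginx, it is worth confirming the controller picked the rule up:

kubectl get ingress
kubectl describe ingress ingress   # "ingress" is the metadata.name used above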
We need to add an entry to nginx on 83.16.16.70:
stream {
    server {                   # <--
        listen 8001;
        proxy_pass k8s;
    }
    server {
        listen 8002;
        proxy_pass bootcamp;
    }
    upstream k8s {             # <--
        server 83.16.16.71:80;
        server 83.16.16.72:80;
        server 83.16.16.73:80;
    }
    upstream bootcamp {
        server 83.16.16.72:31186;
        server 83.16.16.73:31186;
    }
}
$ curl www.zhangqiaoc.com:8001/bootcamp
Hello Kubernetes bootcamp! | Running on: kubernetes-bootcamp-6bf84cb898-ps77s | v=1
But using the IP address does not work, because the ingress rule matches on the Host header:
$ curl 83.16.16.70:8001/bootcamp
default backend - 404
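Supplying the expected Host header by hand makes the same IP work; a quick sketch:

curl -H 'Host: www.zhangqiaoc.com' 83.16.16.70:8001/bootcamp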
Selenium Grid
Recently, I have been working on using Selenium for automated testing.
Part of my goal is to use Selenium for stress testing.
We want to simulate the registration process to make sure the course system will work as expected next semester when students register for their courses.
I use an Ansible playbook to simplify building a Selenium Grid environment.
The following packages are mandatory.
sudo apt install python-docker
pip install docker-py
pip uninstall requests
pip uninstall urllib3
pip install requests
The script that creates a Selenium Grid:
selenium_grid.yml
---
- hosts: localhost
  connection: local
  gather_facts: False
  vars:
    nodes: 10
    state: started
    dns: 192.168.100.153
  tasks:
    - name: "{{ '%s hub' | format(state) }}"
      docker_container:
        name: hub
        image: selenium/hub
        state: "{{ state }}"
        published_ports: 5555:4444
        recreate: yes
        cleanup: yes
        dns_servers: "{{ dns }}"
    - name: "{{ '%s nodes' | format(state) }}"
      docker_container:
        name: "{{ 'chrome%02d' | format(item) }}"
        image: selenium/node-chrome-debug
        state: "{{ state }}"
        published_ports: "{{ '59%02d:5900' | format(item) }}"
        recreate: yes
        cleanup: yes
        links: hub:hub
        dns_servers: "{{ dns }}"
      loop: "{{ range(1, nodes|int + 1, 1)|list }}"
$ ansible-playbook selenium_grid.yml
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAY [localhost] *******************************************************************************************************************

TASK [started hub] *****************************************************************************************************************
changed: [localhost]

TASK [started nodes] ***************************************************************************************************************
changed: [localhost] => (item=1)
changed: [localhost] => (item=2)
changed: [localhost] => (item=3)
changed: [localhost] => (item=4)
changed: [localhost] => (item=5)
changed: [localhost] => (item=6)
changed: [localhost] => (item=7)
changed: [localhost] => (item=8)
changed: [localhost] => (item=9)
changed: [localhost] => (item=10)

PLAY RECAP *************************************************************************************************************************
localhost                  : ok=2    changed=2    unreachable=0    failed=0

zhangqiaoc@ubuntu01:~/ansible$ sudo docker ps -a
CONTAINER ID        IMAGE                        COMMAND                 CREATED              STATUS                     PORTS                    NAMES
d4dd9291caf0        selenium/node-chrome-debug   "/opt/bin/entry_poin"   About a minute ago   Up About a minute          0.0.0.0:5910->5900/tcp   chrome10
00849df50f3e        selenium/node-chrome-debug   "/opt/bin/entry_poin"   About a minute ago   Up About a minute          0.0.0.0:5909->5900/tcp   chrome09
76a362efadcc        selenium/node-chrome-debug   "/opt/bin/entry_poin"   About a minute ago   Up About a minute          0.0.0.0:5908->5900/tcp   chrome08
170eb6cc8193        selenium/node-chrome-debug   "/opt/bin/entry_poin"   About a minute ago   Up About a minute          0.0.0.0:5907->5900/tcp   chrome07
9bcf4bd1fb5e        selenium/node-chrome-debug   "/opt/bin/entry_poin"   About a minute ago   Up About a minute          0.0.0.0:5906->5900/tcp   chrome06
8e3dcb28ac73        selenium/node-chrome-debug   "/opt/bin/entry_poin"   2 minutes ago        Up About a minute          0.0.0.0:5905->5900/tcp   chrome05
8b82403b0e6c        selenium/node-chrome-debug   "/opt/bin/entry_poin"   2 minutes ago        Up 2 minutes               0.0.0.0:5904->5900/tcp   chrome04
dd358d28ae29        selenium/node-chrome-debug   "/opt/bin/entry_poin"   2 minutes ago        Up 2 minutes               0.0.0.0:5903->5900/tcp   chrome03
0843057f3900        selenium/node-chrome-debug   "/opt/bin/entry_poin"   2 minutes ago        Up 2 minutes               0.0.0.0:5902->5900/tcp   chrome02
434bf300c0b5        selenium/node-chrome-debug   "/opt/bin/entry_poin"   2 minutes ago        Up 2 minutes               0.0.0.0:5901->5900/tcp   chrome01
4c962835b46c        selenium/hub                 "/opt/bin/entry_poin"   2 minutes ago        Up 2 minutes               0.0.0.0:5555->4444/tcp   hub
99cd793962e2        oracle/database:18.3.0-ee    "/bin/sh -c 'exec $O"   3 weeks ago          Exited (137) 2 weeks ago                            oracle18
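Because the node count and container state are plain vars, the same playbook can, in principle, resize or tear down the grid by overriding them on the command line; a sketch:

# remove the hub and all ten nodes (docker_container honours state=absent)
ansible-playbook selenium_grid.yml -e state=absent
# or bring up a smaller grid
ansible-playbook selenium_grid.yml -e nodes=5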
You can check the status of the grid through the web interface.
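For reference, with the port mapping from selenium_grid.yml (4444 published as 5555), the Selenium 3 grid console and a scriptable status endpoint should be reachable roughly like this (the host name is an assumption):

# grid console in a browser:  http://<docker-host>:5555/grid/console
# JSON status of the hub from the shell:
curl -s http://localhost:5555/grid/api/hub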
Then write a Python script to run the Selenium test scripts in parallel.
from subprocess import Popen

processes = []

cmd_pprd1 = "python main_pprd.py"
cmd_pprd2 = "python main_pprd2.py"
cmd_pprd3 = "python main_pprd3.py"

# processes.append(Popen(cmd_pprd1, shell=True))
# processes.append(Popen(cmd_pprd2, shell=True))
# processes.append(Popen(cmd_pprd3, shell=True))

for n in range(10):
    processes.append(Popen(cmd_pprd1, shell=True))

print len(processes)

for n in range(len(processes)):
    processes[n].wait()