kubectl config set-cluster dev --server=https://k8s.xxx.com:6443 --certificate-authority=$HOME/.kube/dev/ca.pem
kubectl config set-credentials sanger --client-certificate=$HOME/.kube/dev/sanger.crt --client-key=$HOME/.kube/dev/sanger.pem --embed-certs=true
kubectl config set-context dev --cluster=dev --user=sanger
kubectl config use-context dev
#!/bin/bash
# Generate a kubeconfig entry for the given cluster URL, environment, and user.
k8s_url=$1
env=$2
username=$3
if [ $# -ne 3 ]; then
    echo -e "\033[40;31mYou must pass three parameters, like this:\nsh $0 https://k8s.xxx.com:6443 dev sanger \033[0m"
else
    kubectl config set-cluster "$env" --server="$k8s_url" --certificate-authority="$HOME/.kube/$env/ca.pem"
    kubectl config set-credentials "$username" --client-certificate="$HOME/.kube/$env/$username.crt" --client-key="$HOME/.kube/$env/$username.pem" --embed-certs=true
    kubectl config set-context "$env" --cluster="$env" --user="$username"
    kubectl config use-context "$env"
fi
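For example, assuming the script above is saved as set-kubeconfig.sh (the filename is an assumption), it is invoked exactly as the usage message describes:

sh set-kubeconfig.sh https://k8s.xxx.com:6443 dev sanger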
### Domestic (China) mirror registries for k8s images
registry.cn-beijing.aliyuncs.com/k8s_images/
registry.aliyuncs.com/google_containers/
mirrorgooglecontainers/
docker pull registry.aliyuncs.com/google_containers/coredns:1.2.2
docker pull registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.8.3
docker pull registry.aliyuncs.com/google_containers/heapster-amd64:v1.5.3
docker pull registry.aliyuncs.com/google_containers/heapster-grafana-amd64:v4.4.3
docker pull registry.aliyuncs.com/google_containers/heapster-influxdb-amd64:v1.3.3
docker pull mirrorgooglecontainers/metrics-server-amd64:v0.3.1
docker pull registry.aliyuncs.com/google_containers/addon-resizer:1.8.3
## The minor versions of elasticsearch and kibana must match
docker pull mirrorgooglecontainers/elasticsearch:v6.3.0
docker pull mirrorgooglecontainers/fluentd-elasticsearch:v2.2.0
docker pull docker.elastic.co/kibana/kibana-oss:6.3.2
docker tag registry.aliyuncs.com/google_containers/coredns:1.2.2 k8s.gcr.io/coredns:1.2.2
docker tag registry.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.8.3 k8s.gcr.io/kubernetes-dashboard-amd64:v1.8.3
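The remaining images can be pulled and retagged the same way; a minimal loop form (image list taken from the pulls above, the mirrorgooglecontainers images left out for brevity):

mirror=registry.aliyuncs.com/google_containers
for img in coredns:1.2.2 kubernetes-dashboard-amd64:v1.8.3 heapster-amd64:v1.5.3 heapster-grafana-amd64:v4.4.3 heapster-influxdb-amd64:v1.3.3 addon-resizer:1.8.3; do
    docker pull "$mirror/$img"
    docker tag "$mirror/$img" "k8s.gcr.io/$img"
done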
Kubernetes is an open-source system for automated deployment, scaling, and management of containerized applications. Its main job is container orchestration in production environments.
K8s was released by Google; it grew out of Borg, a system Google had used internally for 15 years, and distills the essence of Borg.
Like most distributed systems, a K8s cluster needs at least one master node (Master) and several worker nodes (Node).
The master node mainly exposes the API, schedules deployments, and manages the nodes; by default it does not take part in actual workloads.
Worker nodes run a container runtime, usually Docker (rkt is a similar runtime; note that K8s 1.24 and later no longer supports Docker as a runtime), plus a K8s agent (kubelet) that communicates with the master.
Worker nodes also run some extra components for things like logging, node monitoring, and service discovery. The worker nodes are where the real work in a K8s cluster happens.
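The master's exclusion from ordinary workloads is implemented as a taint on the node. A quick way to inspect it and, if needed, lift it (the node name master and the pre-1.24 taint key are assumptions):

kubectl describe node master | grep Taints
# typically shows: node-role.kubernetes.io/master:NoSchedule
kubectl taint nodes master node-role.kubernetes.io/master-   # remove the taint to allow scheduling on the master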
$ kubectl run php-apache --image=pilchard/hpa-example --requests=cpu=200m --expose --port=80
$ kubectl autoscale deploy php-apache --cpu-percent=50 --min=1 --max=10
$ kubectl get hpa php-apache
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 0% / 50% 1 10 1 3m
$ kubectl run --rm -it load-generator --image=busybox /bin/sh
Hit enter for command prompt
$ while true; do wget -q -O- http://php-apache; done;
$ kubectl get hpa php-apache
NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE
php-apache Deployment/php-apache 43
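The imperative kubectl autoscale above can also be written declaratively; a sketch of the equivalent manifest (apiVersion autoscaling/v1 is assumed for a cluster of this vintage):

cat <<EOF | kubectl apply -f -
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: php-apache
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: php-apache
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 50
EOF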
All nodes run CentOS 7.x.

master | node1 | node2 |
---|---|---|
192.168.2.237 | 192.168.2.238 | 192.168.2.239 |
2C2G50G | 2C4G50G | 2C4G50G |

(2C2G50G = 2 CPUs, 2 GB RAM, 50 GB disk, and likewise for the others.)
Make sure the timezone and clock on every node are consistent.
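One way to enforce this on CentOS 7, as a sketch (Asia/Shanghai is an assumed timezone):

timedatectl set-timezone Asia/Shanghai
yum install -y chrony
systemctl enable --now chronyd
chronyc tracking   # confirm the clock is in sync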
Remember to install ansible on the master.
First download the project:
git clone https://github.com/easzlab/kubeasz.git
Then copy everything in the project into /etc/ansible.
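For example, assuming the clone above landed in the current directory:

cp -rf kubeasz/* /etc/ansible/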
Edit the hosts file to match your environment.
ansible-playbook 90.setup.yml
kubectl get no
CoreOS-2135.5.0
All machines are 2C4G20G (2 CPUs, 4 GB RAM, 20 GB disk).

master | node1 | node2 |
---|---|---|
192.168.2.240 | 192.168.2.241 | 192.168.2.242 |
# Move sshd to port 10000, allow root login, and switch from socket activation to the service unit.
sed -i '$a\Port=10000\nPermitRootLogin yes' /etc/ssh/sshd_config && systemctl mask sshd.socket && systemctl enable sshd.service && systemctl restart sshd.service
# distribute the public key
for i in {240..242};
do
ssh-copy-id -p 10000 -o PubkeyAuthentication=no -i ~/.ssh/id_rsa root@192.168.2.$i
done
# verify
for i in {240..242}; do ssh -p 10000 192.168.2.$i hostname; done
/etc/ansible/env/dev/group_vars/default.yml
/etc/ansible/env/dev/inventory
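A rough sketch of what the inventory might hold (the group names are assumptions modeled on typical kubeasz inventories; check your version's example file):

cat > /etc/ansible/env/dev/inventory <<'EOF'
[etcd]
192.168.2.240

[kube-master]
192.168.2.240

[kube-node]
192.168.2.241
192.168.2.242
EOF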
NFS as the storage backend is easy to set up, but it does not support PVC expansion.
https://kubernetes.io/zh/docs/concepts/storage/volumes
nfs-client-provisioner is a simple external NFS provisioner for Kubernetes; it does not provide NFS itself and relies on an existing NFS server to supply the storage.
On the NFS server, provisioned PVs are named in the format ${namespace}-${pvcName}-${pvName}; when a PV is reclaimed it is renamed to archived-${namespace}-${pvcName}-${pvName}.
showmount -e 10.3.xxx.xxx
mkdir -p /mnt/dev
mount -t nfs 10.3.xxx.xxx:/data/share/dev /mnt/dev -o proto=tcp -o nolock
Once all of the above checks pass, you can move on to testing PVCs in K8s.
Use a uniform naming format such as namespace-pvc; if you need one, ask sanger to create it (as a rule, one PVC per namespace).
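A minimal PVC sketch against this setup (the storageClassName nfs-client and the names are assumptions):

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dev-pvc
  namespace: dev
spec:
  storageClassName: nfs-client
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
EOF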
A reference deployment:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations: {}
  labels:
    app: xxx
  name: xxx
  namespace: xxx
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: xxx
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/server-snippet: |
      location = /test {
        return 200 "test";
      }
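To check that the snippet took effect (the host name and ingress address are placeholders):

curl -H 'Host: xxx.example.com' http://<ingress-address>/test   # should return "test" with HTTP 200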
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/server-snippet: |
      rewrite ^/xxx/xxx(\?*)$ http://baidu.com$1 permanent;
yum install bash-completion -y
echo "source <(kubectl completion bash)" >> ~/.bashrc
source ~/.bashrc
## setup autocomplete in bash
source <(kubectl completion bash)
## setup autocomplete in zsh
source <(kubectl completion zsh)
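A common follow-up (not from the original) is aliasing kubectl to k and wiring completion to the alias:

echo 'alias k=kubectl' >> ~/.bashrc
echo 'complete -o default -F __start_kubectl k' >> ~/.bashrc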
Recently the dev environment has been short on resources, so development deployment configs need some changes.
Requests: the amount requested, also called the soft limit.
Limits: the maximum allowed, also called the hard limit.
In general: Limits >= Requests.
requests and limits should normally be configured together; if only requests is set and limits is left out, a Pod can easily eat up all of a node's resources.
kubectl set resources deployment test --limits=cpu=50m,memory=64Mi -n test
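Since requests and limits should go together, the same command can set both at once (the values here are illustrative):

kubectl set resources deployment test --requests=cpu=50m,memory=64Mi --limits=cpu=100m,memory=128Mi -n test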
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: test
  name: test
  namespace: test
spec:
  minReadySeconds: 10
  progressDeadlineSeconds: 600
  replicas: 1 # keep it at 1 replica unless there is a special need
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: test
      version: v1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: test
        version: v1
    spec:
      containers:
      - args:
        - nohup ./test -conf /test/config/config.json
        image: test:v1 # placeholder; the original snippet was truncated here
        name: test
        resources: # matching the kubectl set resources example above
          requests:
            cpu: 50m
            memory: 64Mi
          limits:
            cpu: 50m
            memory: 64Mi
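To roll this out and watch it converge (the file name deploy.yaml is an assumption):

kubectl apply -f deploy.yaml
kubectl -n test rollout status deployment/test
kubectl -n test get pods -l app=test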