Abstract: Building on the blog's existing Spring Boot 3.x + Docker series, this article walks through deploying a microservice application to a Kubernetes (K8s) cluster. It covers the core building blocks — Deployment, Service, ConfigMap, Secret, HPA autoscaling, health probes, and rolling updates — with complete YAML manifests drawn from a real project.
## 1. Why Move from Docker to Kubernetes?

In earlier posts we set up a multi-stage Docker build for a Spring Boot 3.x application and produced a slim container image. In production, however, Docker alone runs into the following problems:
| Pain point | Description |
|---|---|
| No automatic recovery | A crashed container must be restarted by hand |
| No elastic scaling | No automatic scale-out under traffic spikes |
| No service discovery | Addresses between instances must be maintained manually |
| No rolling updates | Releases require downtime, hurting availability |
| No resource isolation | Co-located services easily interfere with each other |
Kubernetes is the de facto standard for solving exactly these problems. It provides:
- 🔄 Automatic recovery: crashed Pods are restarted automatically
- 📈 Elastic scaling: HPA scales replicas on CPU/memory
- 🌐 Service discovery: built-in DNS and the Service abstraction
- 🚀 Rolling releases: zero-downtime updates
- 🔒 Resource isolation: fine-grained control with Namespace + LimitRange
## 2. Environment Setup

### 2.1 Local Development Environment (minikube recommended)

```bash
# install minikube (Linux x86_64; on macOS use the corresponding darwin binary)
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube

# start a cluster (Docker driver)
minikube start --driver=docker --cpus=4 --memory=8192

# install kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/

# verify
kubectl get nodes
# NAME       STATUS   ROLES           AGE   VERSION
# minikube   Ready    control-plane   1m    v1.29.0
```
### 2.2 Project Structure

```text
spring-user-service/
├── src/
│   └── main/java/com/example/userservice/
├── Dockerfile
├── k8s/
│   ├── namespace.yaml
│   ├── configmap.yaml
│   ├── secret.yaml
│   ├── deployment.yaml
│   ├── service.yaml
│   ├── hpa.yaml
│   └── ingress.yaml
└── pom.xml
```
## 3. Building the Docker Image

Reusing the multi-stage Dockerfile from the earlier article (note: the Maven wrapper must be copied into the build stage, otherwise `./mvnw` does not exist in the image):

```dockerfile
# Stage 1: build
FROM eclipse-temurin:21-jdk-alpine AS builder
WORKDIR /app
# copy the Maven wrapper first so ./mvnw is available
COPY mvnw .
COPY .mvn .mvn
COPY pom.xml .
COPY src ./src
RUN chmod +x mvnw && ./mvnw -ntp package -DskipTests

# Stage 2: run
FROM eclipse-temurin:21-jre-alpine
WORKDIR /app
RUN addgroup -S appgroup && adduser -S appuser -G appgroup
COPY --from=builder /app/target/*.jar app.jar
USER appuser
EXPOSE 8080
ENTRYPOINT ["java", "-jar", "app.jar"]
```

```bash
# build and push to an image registry (Alibaba Cloud ACR as an example)
docker build -t registry.cn-hangzhou.aliyuncs.com/your-ns/user-service:v1.0.0 .
docker push registry.cn-hangzhou.aliyuncs.com/your-ns/user-service:v1.0.0

# for local minikube, build straight into the cluster's Docker daemon
eval $(minikube docker-env)
docker build -t user-service:v1.0.0 .
```
## 4. Namespace: Isolating Resources

```yaml
# k8s/namespace.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: microservices
  labels:
    env: production
    team: backend
```

```bash
kubectl apply -f k8s/namespace.yaml
```
## 5. ConfigMap: Externalized Configuration

Inject the non-sensitive parts of the Spring Boot `application.yml` through a ConfigMap:

```yaml
# k8s/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-service-config
  namespace: microservices
data:
  # injected directly as environment variables
  SPRING_PROFILES_ACTIVE: "prod"
  SERVER_PORT: "8080"
  # full config file, mounted as a volume
  application-prod.yml: |
    spring:
      datasource:
        url: jdbc:mysql://mysql-service:3306/userdb?useSSL=false&serverTimezone=Asia/Shanghai
        driver-class-name: com.mysql.cj.jdbc.Driver
        hikari:
          maximum-pool-size: 20
          minimum-idle: 5
          connection-timeout: 30000
      data:
        redis:          # Spring Boot 3.x: Redis properties live under spring.data.redis
          host: redis-service
          port: 6379
          timeout: 3000ms
    management:
      endpoints:
        web:
          exposure:
            include: health,info,metrics,prometheus
      endpoint:
        health:
          show-details: always
    logging:
      level:
        com.example: INFO
```

The file is mounted at `/app/config/application-prod.yml` by the Deployment. Since the container's working directory is `/app`, Spring Boot picks it up automatically through its default `file:./config/` search location.
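The environment-variable keys in the ConfigMap work because of Spring Boot's relaxed binding: `SPRING_PROFILES_ACTIVE` maps to `spring.profiles.active`. As a quick illustration, the mapping rule (dots become underscores, dashes are removed, everything uppercased) can be sketched with a small hypothetical helper — this is not Spring code, just the documented transform:

```python
def property_to_env(prop: str) -> str:
    """Map a Spring property name to its environment-variable form
    (relaxed binding): remove dashes, dots -> underscores, uppercase."""
    return prop.replace("-", "").replace(".", "_").upper()

print(property_to_env("spring.profiles.active"))   # SPRING_PROFILES_ACTIVE
print(property_to_env("spring.data.redis.host"))   # SPRING_DATA_REDIS_HOST
```

This is why the Deployment below can feed ConfigMap keys straight into `env` without any renaming.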
## 6. Secret: Managing Sensitive Data

```yaml
# k8s/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: user-service-secret
  namespace: microservices
type: Opaque
# values must be base64-encoded: echo -n "your_password" | base64
data:
  DB_PASSWORD: eW91cl9wYXNzd29yZA==              # your_password
  REDIS_PASSWORD: cmVkaXNwYXNz                   # redispass
  JWT_SECRET: c3VwZXJzZWNyZXRrZXkxMjM0NTY=       # supersecretkey123456
```

> ⚠️ Production note: Secret values are only Base64-*encoded*, not encrypted. In production, prefer Vault or your cloud provider's KMS for managing sensitive data.

```bash
# alternatively, create it from the command line to keep plaintext out of YAML
kubectl create secret generic user-service-secret \
  --from-literal=DB_PASSWORD=your_password \
  --from-literal=REDIS_PASSWORD=redispass \
  --from-literal=JWT_SECRET=supersecretkey123456 \
  -n microservices
```
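Because Secret `data` values are plain base64, you can produce or verify them without kubectl. A minimal sketch (the helper names are illustrative):

```python
import base64

def k8s_encode(value: str) -> str:
    """Equivalent of: echo -n "value" | base64"""
    return base64.b64encode(value.encode()).decode()

def k8s_decode(encoded: str) -> str:
    """Decode a Secret data value back to plaintext."""
    return base64.b64decode(encoded).decode()

print(k8s_encode("your_password"))       # eW91cl9wYXNzd29yZA==
print(k8s_decode("cmVkaXNwYXNz"))        # redispass
```

Note the `-n` in the `echo` variant: a trailing newline would be encoded too and silently corrupt the credential.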
## 7. Deployment: The Core Manifest

```yaml
# k8s/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: user-service
  namespace: microservices
  labels:
    app: user-service
    version: v1.0.0
spec:
  replicas: 2                # initial replica count
  revisionHistoryLimit: 3    # keep the last 3 revisions for easy rollback
  selector:
    matchLabels:
      app: user-service
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1            # at most 1 extra Pod during a rolling update
      maxUnavailable: 0      # no unavailable Pods during the update (zero downtime)
  template:
    metadata:
      labels:
        app: user-service
        version: v1.0.0
      annotations:
        # Prometheus auto-discovery
        prometheus.io/scrape: "true"
        prometheus.io/port: "8080"
        prometheus.io/path: "/actuator/prometheus"
    spec:
      # image pull credentials (private registry)
      imagePullSecrets:
        - name: aliyun-acr-secret
      # grace period for graceful shutdown
      terminationGracePeriodSeconds: 60
      containers:
        - name: user-service
          image: registry.cn-hangzhou.aliyuncs.com/your-ns/user-service:v1.0.0
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8080
              name: http
          # ===== environment variable injection =====
          env:
            # from the ConfigMap
            - name: SPRING_PROFILES_ACTIVE
              valueFrom:
                configMapKeyRef:
                  name: user-service-config
                  key: SPRING_PROFILES_ACTIVE
            # from the Secret
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: user-service-secret
                  key: DB_PASSWORD
            - name: REDIS_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: user-service-secret
                  key: REDIS_PASSWORD
            - name: JWT_SECRET
              valueFrom:
                secretKeyRef:
                  name: user-service-secret
                  key: JWT_SECRET
            # Pod metadata (useful for log correlation)
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          # ===== config file mount =====
          volumeMounts:
            - name: config-volume
              mountPath: /app/config
              readOnly: true
          # ===== resource requests/limits (important!) =====
          resources:
            requests:
              cpu: "250m"        # 0.25 core
              memory: "512Mi"
            limits:
              cpu: "1000m"       # 1 core
              memory: "1Gi"
          # ===== health probes =====
          # Startup probe: covers the boot phase so readiness/liveness don't fire too early
          startupProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8080
            failureThreshold: 30   # wait up to 30 * 10s = 5 minutes
            periodSeconds: 10
          # Liveness probe: detects whether the app needs a restart
          livenessProbe:
            httpGet:
              path: /actuator/health/liveness
              port: 8080
            initialDelaySeconds: 0
            periodSeconds: 10
            timeoutSeconds: 5
            failureThreshold: 3
          # Readiness probe: detects whether the app can receive traffic
          readinessProbe:
            httpGet:
              path: /actuator/health/readiness
              port: 8080
            initialDelaySeconds: 0
            periodSeconds: 5
            timeoutSeconds: 3
            failureThreshold: 3
      volumes:
        - name: config-volume
          configMap:
            name: user-service-config
            items:
              - key: application-prod.yml
                path: application-prod.yml
```
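With `replicas: 2`, `maxSurge: 1`, and `maxUnavailable: 0`, the Pod count stays between 2 and 3 for the whole rollout, which is what gives the zero-downtime guarantee. The bounds follow Kubernetes' rounding rules (percentage surge rounds up, percentage unavailability rounds down); here is an illustrative sketch of that arithmetic:

```python
import math

def rollout_bounds(replicas, max_surge, max_unavailable):
    """Min/max Pod counts during a rolling update.
    Accepts absolute numbers or percentage strings like "25%".
    Surge percentages round up; unavailable percentages round down."""
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            raw = int(value[:-1]) / 100 * replicas
            return math.ceil(raw) if round_up else math.floor(raw)
        return value
    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    return replicas - unavailable, replicas + surge

print(rollout_bounds(2, 1, 0))          # (2, 3) -> never below 2 ready Pods
print(rollout_bounds(4, "25%", "25%"))  # (3, 5)
```

Setting `maxUnavailable: 0` therefore requires spare capacity for one extra Pod; on a full cluster the rollout will stall in Pending instead of dropping below the desired count.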
### 7.1 Spring Boot Probe Configuration

For the probes above to work, add Actuator to `pom.xml` and enable the probe endpoints:

```xml
<!-- pom.xml -->
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-actuator</artifactId>
</dependency>
```

```yaml
# application.yml
management:
  endpoint:
    health:
      probes:
        enabled: true   # exposes the liveness and readiness endpoints
      show-details: always
  health:
    livenessstate:
      enabled: true
    readinessstate:
      enabled: true
```
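One thing worth internalizing about the startup probe: it gives the application `failureThreshold × periodSeconds` to come up before the liveness probe takes over and starts killing the container. A trivial sanity check of the budget configured above:

```python
def startup_budget(failure_threshold: int, period_seconds: int) -> int:
    """Maximum seconds a Pod may take to become healthy
    before the startup probe gives up and the container is restarted."""
    return failure_threshold * period_seconds

print(startup_budget(30, 10))  # 300 seconds (5 minutes), matching the Deployment above
```

If your JVM regularly needs longer than this (large classpath, cold caches), raise `failureThreshold` rather than adding `initialDelaySeconds` to the liveness probe.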
## 8. Service: Exposure and Discovery

```yaml
# k8s/service.yaml
# in-cluster access (service-to-service calls)
apiVersion: v1
kind: Service
metadata:
  name: user-service
  namespace: microservices
  labels:
    app: user-service
spec:
  type: ClusterIP        # default type, reachable only inside the cluster
  selector:
    app: user-service
  ports:
    - name: http
      port: 80           # port the Service exposes
      targetPort: 8080   # container port it forwards to
      protocol: TCP
---
# external access (NodePort, for dev/testing)
apiVersion: v1
kind: Service
metadata:
  name: user-service-external
  namespace: microservices
spec:
  type: NodePort
  selector:
    app: user-service
  ports:
    - port: 80
      targetPort: 8080
      nodePort: 30080    # must be within 30000-32767
```

> 💡 Service discovery: within the same namespace, other services can call `http://user-service/api/users`; across namespaces, use `http://user-service.microservices.svc.cluster.local/api/users`.
## 9. HPA: Horizontal Pod Autoscaling

```yaml
# k8s/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: user-service-hpa
  namespace: microservices
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: user-service
  minReplicas: 2      # never fewer than 2 replicas
  maxReplicas: 10     # scale out to at most 10 replicas
  metrics:
    # scale out when CPU utilization reaches 70%
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    # scale out when memory utilization reaches 80%
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 30    # 30s stabilization window for scale-up
      policies:
        - type: Pods
          value: 2                      # add at most 2 Pods per step
          periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300   # 5-minute window for scale-down, to avoid flapping
      policies:
        - type: Pods
          value: 1                      # remove at most 1 Pod per step
          periodSeconds: 120
```

Enabling HPA requires the Metrics Server in the cluster:

```bash
# enable metrics-server on minikube
minikube addons enable metrics-server

# install on a production cluster
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml

# verify
kubectl top pods -n microservices
```
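The HPA's scaling decision follows the documented formula `desiredReplicas = ceil(currentReplicas × currentMetricValue / targetMetricValue)`, clamped to `minReplicas`/`maxReplicas`. Sketching it makes the behavior of the manifest above easy to predict:

```python
import math

def desired_replicas(current: int, current_util: float, target_util: float,
                     min_replicas: int, max_replicas: int) -> int:
    """Kubernetes HPA scaling formula with min/max clamping."""
    desired = math.ceil(current * current_util / target_util)
    return max(min_replicas, min(max_replicas, desired))

# 2 Pods averaging 150% CPU against the 70% target -> scale out
print(desired_replicas(2, 150, 70, 2, 10))  # ceil(2 * 150 / 70) = 5
# well under target -> clamped at minReplicas
print(desired_replicas(2, 10, 70, 2, 10))   # 2
```

With multiple metrics (CPU and memory here), the HPA computes a desired count per metric and takes the largest, so whichever resource is under the most pressure drives the scale-out.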
## 10. Ingress: A Single Entry Point

```yaml
# k8s/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: microservices-ingress
  namespace: microservices
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"       # paths below are regexes
    nginx.ingress.kubernetes.io/proxy-connect-timeout: "30"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "60"
    # rate limit: 100 requests per second
    nginx.ingress.kubernetes.io/limit-rps: "100"
    # note: gzip cannot be enabled per-Ingress; turn it on cluster-wide
    # with use-gzip: "true" in the ingress-nginx controller ConfigMap
spec:
  ingressClassName: nginx
  rules:
    - host: api.92yangyi.top
      http:
        paths:
          - path: /users(/|$)(.*)
            pathType: ImplementationSpecific   # required for regex paths
            backend:
              service:
                name: user-service
                port:
                  number: 80
          - path: /orders(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: order-service
                port:
                  number: 80
```

```bash
# enable the ingress addon on minikube
minikube addons enable ingress

# get the ingress address
kubectl get ingress -n microservices
```
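The `rewrite-target: /$2` annotation rewrites the upstream request path using the second capture group of the matched rule. The same rewrite can be sketched with Python's `re` to see exactly what the backend receives (illustrative only; ingress-nginx does this in its nginx config):

```python
import re

def rewrite(path: str) -> str:
    """Mimic nginx.ingress.kubernetes.io/rewrite-target: /$2
    for the rule path /users(/|$)(.*)."""
    m = re.match(r"^/users(/|$)(.*)", path)
    if not m:
        return path  # rule does not match; other rules would be tried
    return "/" + m.group(2)

print(rewrite("/users/api/users/42"))  # /api/users/42
print(rewrite("/users"))               # /
print(rewrite("/orders/1"))            # /orders/1  (unchanged, different rule)
```

So a client call to `api.92yangyi.top/users/api/users/42` reaches user-service as `/api/users/42`, with the `/users` routing prefix stripped.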
## 11. Deploying and Verifying

```bash
# apply all manifests in order
kubectl apply -f k8s/namespace.yaml
kubectl apply -f k8s/configmap.yaml
kubectl apply -f k8s/secret.yaml
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml
kubectl apply -f k8s/hpa.yaml
kubectl apply -f k8s/ingress.yaml

# or apply the whole directory at once
# (files are processed in filename order, so create the namespace first)
kubectl apply -f k8s/

# watch the rollout
kubectl rollout status deployment/user-service -n microservices
# Waiting for deployment "user-service" rollout to finish: 1 of 2 updated replicas are available...
# deployment "user-service" successfully rolled out

# check Pod status
kubectl get pods -n microservices -w
# NAME                           READY   STATUS    RESTARTS   AGE
# user-service-7d4f8b9c6-k9xvp   1/1     Running   0          2m
# user-service-7d4f8b9c6-mnt7q   1/1     Running   0          2m

# check HPA status
kubectl get hpa -n microservices
# NAME               REFERENCE                 TARGETS            MINPODS   MAXPODS   REPLICAS
# user-service-hpa   Deployment/user-service   12%/70%, 45%/80%   2         10        2

# tail logs
kubectl logs -f deployment/user-service -n microservices --tail=100
```
## 12. Rolling Updates and Rollback

```bash
# option 1: trigger a rolling update by changing the image
kubectl set image deployment/user-service \
  user-service=registry.cn-hangzhou.aliyuncs.com/your-ns/user-service:v1.1.0 \
  -n microservices

# option 2: edit the YAML and re-apply (recommended; GitOps-friendly)
kubectl apply -f k8s/deployment.yaml

# watch the rolling update happen
kubectl rollout status deployment/user-service -n microservices

# revision history
# (CHANGE-CAUSE is filled from the kubernetes.io/change-cause annotation, if you set it)
kubectl rollout history deployment/user-service -n microservices
# REVISION  CHANGE-CAUSE
# 1         <none>
# 2         Update to v1.1.0

# roll back to the previous revision
kubectl rollout undo deployment/user-service -n microservices

# roll back to a specific revision
kubectl rollout undo deployment/user-service --to-revision=1 -n microservices
```
## 13. Troubleshooting Common Problems

### 13.1 Pod Stuck in Pending

```bash
# inspect Pod events
kubectl describe pod <pod-name> -n microservices

# common causes:
# 1. insufficient resources: Insufficient cpu/memory
kubectl get nodes -o json | jq '.items[].status.allocatable'
# 2. image pull failures
kubectl get events -n microservices --sort-by='.lastTimestamp'
```

### 13.2 Pod in CrashLoopBackOff

```bash
# container logs, including the previous crashed run
kubectl logs <pod-name> -n microservices --previous

# exec into the container for ad-hoc debugging
kubectl exec -it <pod-name> -n microservices -- /bin/sh

# check that the probe paths match what the app actually serves
kubectl describe pod <pod-name> -n microservices | grep -A 10 "Liveness\|Readiness"
```

### 13.3 HPA Not Working (UNKNOWN Targets)

```bash
# is the Metrics Server healthy?
kubectl top nodes
kubectl top pods -n microservices

# HPA details
kubectl describe hpa user-service-hpa -n microservices
```
## 14. Where This Fits in the Blog's Existing Stack

Putting this post together with the blog's earlier articles, K8s sits in the overall microservice architecture like this:

```text
                     ┌─────────────────────────────────────┐
                     │          Kubernetes Cluster         │
                     │                                     │
 user requests       │  ┌─────────┐    ┌──────────────┐    │
 ──────────────────▶ │  │ Ingress │───▶│ Gateway Pod  │    │ ← Spring Cloud Gateway (covered earlier)
                     │  └─────────┘    └──────┬───────┘    │
                     │                        │            │
                     │           ┌────────────▼───────┐    │
                     │           │ Nacos (on K8s)     │    │ ← service registry / config center
                     │           └────────────────────┘    │
                     │                                     │
                     │  ┌──────────────┐ ┌───────────────┐ │
                     │  │ User Service │ │ Order Service │ │ ← Sentinel rate limiting (covered earlier)
                     │  │ (HPA scaled) │ │ (HPA scaled)  │ │
                     │  └──────┬───────┘ └───────┬───────┘ │
                     │         │                 │         │
                     │  ┌──────▼─────────────────▼──────┐  │
                     │  │   RabbitMQ / RocketMQ Pod     │  │ ← message queue (covered earlier)
                     │  └───────────────────────────────┘  │
                     │                                     │
                     │  ┌─────────────┐  ┌─────────────┐   │
                     │  │ MySQL       │  │ Redis       │   │ ← data layer
                     │  │ StatefulSet │  │ StatefulSet │   │
                     │  └─────────────┘  └─────────────┘   │
                     └─────────────────────────────────────┘
```

With K8s as the orchestrator, every component from the earlier posts can be deployed as a container, forming a complete cloud-native microservice platform.
## 15. Summary

| Topic | Key manifest | Purpose |
|---|---|---|
| Namespace | namespace.yaml | Resource isolation |
| ConfigMap | configmap.yaml | Externalized configuration |
| Secret | secret.yaml | Sensitive data management |
| Deployment | deployment.yaml | Pod management + health probes |
| Service | service.yaml | Service discovery + load balancing |
| HPA | hpa.yaml | Elastic scaling |
| Ingress | ingress.yaml | Single entry-point routing |
| `kubectl rollout undo` | — | Fast rollback |

Kubernetes is the foundational infrastructure layer for a microservice architecture. With the manifests from this article, combined with the earlier Spring Cloud Gateway, Sentinel, Seata, and RocketMQ posts, you have a complete cloud-native microservices stack ready for real deployments.
Coming next: integrating Prometheus + Grafana with K8s for end-to-end monitoring — stay tuned!

Tags: Kubernetes, K8s, SpringBoot, Docker, microservices, HPA, rolling updates, cloud native
Categories: Java Stack / Microservice Architecture / Containerized Deployment