Question 1: You have a deployment named 'web-app' running 3 replicas of a Node.js application. During an update, you observe that two pods are stuck in a 'CrashLoopBackOff' state. The logs indicate that the pods are failing to connect to a Redis database. How do you debug this issue and identify the root cause of the pod failures?
Correct Answer:
See the solution below with Step by Step Explanation.
Explanation:
Solution (Step by Step) :
1. Check pod logs:
- Run 'kubectl logs' for the pods in the 'CrashLoopBackOff' state to review the application logs (example commands are consolidated after this list). Look for any specific errors or warnings related to Redis connection issues. For example, search for terms like "connection refused," "timeout," "host not found," or "Redis server down."
2. Verify Redis connectivity:
- Ensure that the Redis service is running and reachable from the pods. You can use 'kubectl exec -it <pod-name> -- bash' to access the pod's shell and run commands like 'ping' or 'telnet' to check connectivity.
3. Inspect Redis service details:
- Run 'kubectl describe service <redis-service-name>' to review the service definition. Verify that the 'clusterIP' and 'port' information aligns with the connection details used by your Node.js application.
4. Check Kubernetes network policies:
- Use 'kubectl describe networkpolicy' to examine any network policies that might be restricting communication between the web app pods and the Redis service. Ensure that there are no rules blocking the required traffic.
5. Review the application configuration:
- Check the Node.js application configuration files for the correct Redis hostname, port, and any other relevant settings. Verify that the connection details match the Redis service and are correctly configured within the application.
6. Inspect the Redis service logs:
- Analyze the Redis service logs to identify any potential problems on the Redis server side. Check for errors related to connection limits, resource exhaustion, or other issues that could impact the service's functionality.
7. Test the application's connection to Redis outside the Kubernetes cluster:
- Deploy a separate test environment outside of the Kubernetes cluster to verify the connection between your Node.js application and the Redis service. This can help isolate whether the issue stems from the application itself, the Kubernetes network, or the Redis service.
8. Use a Redis client tool:
- Utilize a Redis client tool like 'redis-cli' to connect to the Redis service directly from within a Kubernetes pod. This can help diagnose connection problems and verify the Redis server's health.
For example (the pod name, Redis host, and port are placeholders):
kubectl exec -it <pod-name> -- redis-cli -h <redis-host> -p <redis-port>
9. Use a debugger:
- Utilize a debugger like 'node-inspector' or 'vscode' to step through the Node.js application code and identify the specific point where the Redis connection fails.
10. Check for resource constraints:
- Examine the resource limits and requests defined for the web app pods. Ensure that the pods have sufficient resources allocated to handle the Redis connection and application workload.
11. Consider DNS issues:
- Investigate potential DNS resolution issues. Make sure the pods can resolve the hostname or IP address of the Redis service correctly.
12. Review the deployment configuration:
- Analyze the deployment configuration for any unusual settings or updates that might have caused the issue. For instance, check for changes to the application container image, resource limits, or any related configurations that might have inadvertently affected the Redis connection.
Question 2: You have a Kubernetes cluster with three nodes. You need to create a Role that allows users in the "developers" group to access the "nginx-deployment" deployment in the "default" namespace. This role should only permit users to view and update the deployment, not delete it.
Correct Answer:
See the solution below with Step by Step Explanation.
Explanation:
Solution (Step by Step) :
1. Create the Role:
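A Role manifest along these lines meets the requirement (the name 'nginx-deployment-editor' is illustrative); save it as role.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: nginx-deployment-editor
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  resourceNames: ["nginx-deployment"]
  verbs: ["get", "watch", "update"]   # view and update, but no delete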

2. Create the RoleBinding:
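A matching RoleBinding grants the Role to the "developers" group; save it as rolebinding.yaml:
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: nginx-deployment-editor-binding
  namespace: default
subjects:
- kind: Group
  name: developers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: nginx-deployment-editor
  apiGroup: rbac.authorization.k8s.io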

3. Apply the Role and RoleBinding:
kubectl apply -f role.yaml
kubectl apply -f rolebinding.yaml
Question 3: You have a Deployment named 'worker-deployment' that runs a set of worker Pods. You need to configure a PodDisruptionBudget (PDB) for this deployment, ensuring that at least 60% of the worker Pods are always available, even during planned or unplanned disruptions. How can you achieve this?
Correct Answer:
See the solution below with Step by Step Explanation.
Explanation:
Solution (Step by Step) :
1. PDB YAML Definition:
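A manifest consistent with the explanation below (the 'app: worker' label is assumed to be the label on the 'worker-deployment' Pods); save it as worker-pdb.yaml:
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: worker-pdb
spec:
  minAvailable: "60%"
  selector:
    matchLabels:
      app: worker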

2. Explanation:
- 'apiVersion: policy/v1': Specifies the API version for PodDisruptionBudget resources.
- 'kind: PodDisruptionBudget': Specifies that this is a PodDisruptionBudget resource.
- 'metadata.name: worker-pdb': Sets the name of the PDB.
- 'spec.selector.matchLabels: app: worker': This selector targets the Pods labeled with 'app: worker', ensuring the PDB applies to the 'worker-deployment' Pods.
- 'spec.minAvailable: 60%': Specifies that at least 60% of the total worker Pods must remain available during disruptions. This means that if your deployment has 5 replicas, at least 3 Pods must remain running.
3. How it works:
- The 'minAvailable' field in the PDB can be specified as a percentage of the total number of Pods in the deployment or as an absolute number of Pods. In this case, we are using a percentage ('60%') to ensure a flexible approach to maintaining availability, even if the number of replicas changes.
4. Implementation:
- Apply the YAML using 'kubectl apply -f worker-pdb.yaml'.
5. Verification:
- You can verify the PDB's effectiveness by trying to delete Pods or simulating a node failure. The eviction API will block voluntary disruptions that would violate the 'minAvailable' constraint, ensuring that at least 60% of the worker Pods remain available.
Question 4: You are deploying an application on Kubernetes. You need to ensure that a minimum of three pods are always running for this application. How can you achieve this? Describe how to configure the deployment with a replica count and a liveness probe to monitor the health of the pods.
Correct Answer:
See the solution below with Step by Step Explanation.
Explanation:
Solution (Step by Step) :
1. Create a Deployment with a Replica Count:
- Create a YAML file named 'deployment.yaml' with the following content:
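(The manifest below is a minimal sketch; the 'myapp' names, the container image, and port 8080 are placeholders for your actual application.)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
      - name: myapp
        image: myapp:1.0
        ports:
        - containerPort: 8080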

- Apply the YAML file using 'kubectl apply -f deployment.yaml'.
2. Configure a Liveness Probe:
- Update the 'deployment.yaml' file to include a liveness probe. For example, you could use an HTTP probe:
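(A sketch of the probe added under the container definition; the '/healthz' path, port 8080, and timing values are assumptions about the application's health endpoint.)
      containers:
      - name: myapp
        image: myapp:1.0
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 5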

- Apply the updated YAML file using 'kubectl apply -f deployment.yaml'.
3. Verify the Deployment:
- Check the status of the deployment using 'kubectl get deployments myapp-deployment'.
- Ensure that three pods are running and that the liveness probe is monitoring their health. You can use 'kubectl describe pod myapp-deployment-XXXX' (where XXXX is the pod name) to see the details of the pod and the liveness probe status.
Question 5: Your company has a Kubernetes cluster with a production namespace (prod) where only authorized engineers can access sensitive data. You need to implement an RBAC policy that allows only engineers with a specific label ("role: engineer") to read data from a specific secret named "secret-sensitive" in the "prod" namespace. Describe how you would configure RBAC to achieve this.
Correct Answer:
See the solution below with Step by Step Explanation.
Explanation:
Solution (Step by Step) :
Step 1: Define a Role that allows reading the specific secret:
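(A sketch of the Role; the name 'secret-sensitive-reader' is illustrative.)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: secret-sensitive-reader
  namespace: prod
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["secret-sensitive"]
  verbs: ["get"]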

Step 2: Create a RoleBinding to associate the Role with users labeled as "role: engineer":
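(Kubernetes RBAC binds Roles to named users, groups, or service accounts rather than to label selectors, so the sketch below assumes an 'engineers' group whose membership corresponds to the users labeled "role: engineer" in your identity provider.)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: secret-sensitive-reader-binding
  namespace: prod
subjects:
- kind: Group
  name: engineers
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: secret-sensitive-reader
  apiGroup: rbac.authorization.k8s.io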

Step 3: Ensure users have the necessary labels:
- Users or service accounts must be assigned the label "role: engineer" to access the secret.
- The Role restricts access to the "secret-sensitive" secret in the "prod" namespace to only "get" requests.
- The RoleBinding associates the Role with users who have the label "role: engineer".
- This ensures that only authorized engineers can read data from the "secret-sensitive" secret.
Question 6: You have a Kubernetes cluster with two namespaces: 'dev' and 'prod'. You want to configure RBAC to allow developers in the 'dev' namespace to create deployments and pods, but only allow operations personnel in the 'prod' namespace to delete deployments and pods.
Correct Answer:
See the solution below with Step by Step Explanation.
Explanation:
Create two RBAC roles and two role bindings to implement this configuration.
Solution (Step by Step) :
Step 1: Create a Role for Developers in the 'dev' namespace.
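(A sketch; the name 'developer-role' is illustrative, and 'get' is included so developers can read what they create.)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: developer-role
  namespace: dev
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "get"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["create", "get"]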

Step 2: Create a Role Binding for Developers in the 'dev' namespace.
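(A sketch; 'developer' is a placeholder user name, as noted at the end of this answer.)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: developer-binding
  namespace: dev
subjects:
- kind: User
  name: developer
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: developer-role
  apiGroup: rbac.authorization.k8s.io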

Step 3: Create a Role for Operations Personnel in the 'prod' namespace.
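(A sketch; the name 'operations-role' is illustrative.)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: operations-role
  namespace: prod
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["delete", "get"]
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["delete", "get"]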

Step 4: Create a Role Binding for Operations Personnel in the 'prod' namespace.
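(A sketch; 'operations' is a placeholder user name.)
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: operations-binding
  namespace: prod
subjects:
- kind: User
  name: operations
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: operations-role
  apiGroup: rbac.authorization.k8s.io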

We define separate roles for developers and operations personnel, each with specific permissions in their respective namespaces. The roles specify which resources ('deployments', 'pods') can be accessed and which verbs ('create', 'delete', 'get') are allowed. Role bindings connect the roles to users, granting them the specified permissions.
Applying the configurations: Use 'kubectl apply -f [filename].yaml' to apply the role and role binding YAML files. Replace the placeholder user names (such as 'developer') with actual user names or service account names.
Question 7: You have a deployment named 'web-app' with three replicas, exposing the application using a 'LoadBalancer' service. The application uses an internal database service named 'db-service' that is running as a 'ClusterIP' service. You need to configure the 'web-app' deployment to only allow traffic from the 'db-service' to its internal port (e.g., 5432).
Correct Answer:
See the solution below with Step by Step Explanation.
Explanation:
Solution (Step by Step) :
1. Create a NetworkPolicy:
- Create a NetworkPolicy resource that allows traffic from the 'db-service' to the 'web-app' Deployment.
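(A sketch saved as networkpolicy.yaml; the pod labels 'app: web-app' and 'app: db' are assumptions about how the two Deployments label their Pods, and the policy name matches the one used in step 3. Add a 'namespace:' field for the namespace that holds both workloads.)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-db-to-web-app
spec:
  podSelector:
    matchLabels:
      app: web-app
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: db
    ports:
    - protocol: TCP
      port: 5432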

2. Apply the NetworkPolicy:
- Apply the YAML file using 'kubectl apply -f networkpolicy.yaml'.
3. Verify the NetworkPolicy:
- Check the status of the NetworkPolicy using 'kubectl get networkpolicies allow-db-to-web-app -n <namespace>'.
4. Test:
- Ensure that the 'db-service' can communicate with the 'web-app' deployment on port 5432.
- Attempt to connect to port 5432 on the 'web-app' pods from outside the cluster or from other services/pods within the cluster that are not the 'db-service'. You should not be able to connect.
Note: Replace <namespace> with the actual namespace where your deployments and services are located.
Question 8: You are tasked with configuring RBAC for a Kubernetes cluster hosting a microservices application.
The application consists of three services:
- 'frontend' which only needs to access the 'nginx-ingress-controller' deployment to configure Ingress resources.
- 'backend' which needs read-only access to the 'postgres' service for database queries.
- 'worker' which needs to create, update, and delete pods in the 'worker-namespace' namespace and access the 'redis' service.
Correct Answer:
See the solution below with Step by Step Explanation.
Explanation:
Create the necessary RBAC roles, role bindings, and service accounts to enable these permissions.
Solution (Step by Step) :
1. Create Service Accounts:
kubectl create serviceaccount frontend-sa -n default
kubectl create serviceaccount backend-sa -n default
kubectl create serviceaccount worker-sa -n worker-namespace
2. Create Roles:
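(Sketches of the three Roles; the names match the bindings in step 3, and the exact verbs are an interpretation of the stated requirements.)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: frontend-role
  namespace: default
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  resourceNames: ["nginx-ingress-controller"]
  verbs: ["get", "update", "patch"]
- apiGroups: ["networking.k8s.io"]
  resources: ["ingresses"]
  verbs: ["get", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: backend-role
  namespace: default
rules:
- apiGroups: [""]
  resources: ["services", "endpoints"]
  resourceNames: ["postgres"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: worker-role
  namespace: worker-namespace
rules:
- apiGroups: [""]
  resources: ["pods"]
  verbs: ["create", "update", "delete", "get", "list"]
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["redis"]
  verbs: ["get"]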


3. Create Role Bindings:
kubectl create rolebinding frontend-binding -n default --role=frontend-role --serviceaccount=default:frontend-sa
kubectl create rolebinding backend-binding -n default --role=backend-role --serviceaccount=default:backend-sa
kubectl create rolebinding worker-binding -n worker-namespace --role=worker-role --serviceaccount=worker-namespace:worker-sa
4. Grant Access to Services:
- Frontend: The 'frontend' service account should have access to the 'nginx-ingress-controller' deployment. This can be done through a Role or ClusterRole and RoleBinding or ClusterRoleBinding, depending on whether the controller is in the same namespace or across namespaces.
- Backend: The 'backend' service account should have read-only access to the 'postgres' service. This can be achieved by creating a 'ServiceAccount' for the 'backend' service and binding it to a 'Role' that grants the necessary permissions on the 'postgres' service.
- Worker: The 'worker' service account should have full access (create, update, delete) to pods in the 'worker-namespace' and read access to the 'redis' service.
Important Notes:
- The provided 'kubectl' commands are illustrative. You may need to adjust them based on your specific cluster configuration.
- The above RBAC configuration is a basic example. Depending on the specific needs of your application, you may need to configure more granular roles and bindings.
Question 9: Your Kubernetes cluster has been running for some time, and it's becoming increasingly difficult to manage permissions for your applications. You are noticing a growing list of roles and role bindings, making it challenging to understand the relationships between them.
Describe a strategy to simplify and streamline your RBAC configuration by implementing best practices. Also, discuss how you can improve the manageability and auditing of your RBAC setup.
Correct Answer:
See the solution below with Step by Step Explanation.
Explanation:
Solution (Step by Step) :
1. Implement a Hierarchical Structure:
- Create high-level roles for common tasks such as "admin," "developer," "viewer," etc., providing broad permissions.
- Build more specific roles for specific applications or services, inheriting permissions from the higher-level roles.
- Example:
- "admin" role: grants full access to the cluster.
- "app-developer" role: inherits from "admin," but with restricted permissions only to specific namespaces and resources related to the application.
- "app-viewer" role: inherits from "app-developer" with limited permissions for monitoring and viewing resources.
2. Utilize ClusterRoles for Global Permissions:
- ClusterRoles are designed to grant permissions across the entire cluster, simplifying management for resources that need consistent access.
- This allows for centralized control of common permissions, reducing duplication of role definitions.
3. Leverage Service Accounts for Application-Level Permissions:
- Create service accounts for each application and bind them to appropriate roles.
- Use service accounts to manage access for pods, deployments, and other resources related to a specific application.
- This reduces the need for manually assigning permissions to individual resources.
4. Adopt a Role-Based Structure:
- Design RBAC policies around roles instead of individual users.
- This allows for easier management of permissions by modifying roles rather than individual user bindings.
- Ensure users are assigned to appropriate roles based on their responsibilities.
5. Implement RBAC Auditing and Monitoring:
- Use tools like 'kubectl auth can-i' to test and validate RBAC permissions (an example is shown at the end of this answer).
- Monitor RBAC events and changes using audit logging features.
- Analyze audit logs to identify any suspicious activity and troubleshoot RBAC issues.
6. Consider External RBAC Solutions:
- For larger deployments, consider using external RBAC solutions like Keycloak or OpenLDAP for centralized user management and role-based access control.
- This can simplify the process of managing users, roles, and permissions across multiple clusters.
7. Documentation:
- Maintain comprehensive documentation of your RBAC setup, including roles, bindings, and any specific permissions.
- This documentation will be crucial for future maintenance, debugging, and troubleshooting.
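As a quick illustration of the validation mentioned in step 5, 'kubectl auth can-i' can check effective permissions (the user names and namespaces below are placeholders):
kubectl auth can-i update deployments --namespace dev --as developer
kubectl auth can-i delete pods --namespace prod --as system:serviceaccount:prod:app-sa
kubectl auth can-i --list --namespace prod --as developer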