25+ Must-Know Kubernetes Interview Questions for 2023 - IQCode

What is Kubernetes?

Kubernetes is an open-source container orchestration platform for scheduling and running application containers within and across clusters. A Kubernetes cluster has two types of resources: the Master (control plane), which coordinates all activities in the cluster, and Nodes, which serve as the worker machines.

Each Node in a Kubernetes cluster runs two key components: the kubelet, an agent that manages the node and communicates with the Master, and a container runtime such as Docker or containerd that actually runs the containers.

A Kubernetes cluster is a loosely coupled collection of machines that continuously converges the actual state of the system toward the desired state, providing a uniform interface for deploying workloads and consuming shared hardware resources.

Pods are the smallest deployable units in Kubernetes. Kubernetes packages one or more containers into a higher-level structure called a pod, which sits one level above the container. All containers in a pod are scheduled on the same node.

Services provide a unified, stable way of accessing the workloads running in pods. The control plane is the core of Kubernetes; its API server lets you query and manipulate the state of objects in the cluster.

Basic Kubernetes Interview Questions:

1. How do you perform maintenance activities on a Kubernetes Node?

To perform maintenance activities on a Kubernetes node:

- Cordon the node: marks the node unschedulable so no new pods are scheduled on it.
- Drain the node: gracefully evicts the pods running on it so their controllers reschedule them onto other nodes in the cluster.
- Perform the maintenance activity on the node.
- Uncordon the node to make it schedulable again.
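The steps above map directly onto kubectl commands; a sketch of the workflow (the node name is a placeholder):

```shell
# Mark the node unschedulable
kubectl cordon node-1

# Evict pods; daemonset pods can't be rescheduled, so they are skipped
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data

# ... perform the maintenance on node-1 ...

# Make the node schedulable again
kubectl uncordon node-1
```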

Controlling Resource Usage of Pods

In Kubernetes, you can control the amount of resources that a pod consumes. This includes CPU and memory usage.

To do this, you need to define resource requests and limits in the pod spec. Resource requests specify the minimum amount of resources that the pod needs to run. Limits specify the maximum amount of resources that the pod is allowed to consume.

Here's an example of how to set resource limits and requests in a pod spec:


apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
    resources:
      requests:
        cpu: "100m"
        memory: "64Mi"
      limits:
        cpu: "200m"
        memory: "128Mi"

In this example, the pod has one container with the image `my-image`. The container has a CPU request of `100m` (which means 100 milliCPUs, or 0.1 CPUs) and a memory request of `64Mi` (which means 64 Mebibytes). The container has a CPU limit of `200m` and a memory limit of `128Mi`.

By setting resource requests and limits, you can ensure that your pods don't consume too much of the cluster's resources, which can cause performance issues for other pods and applications.
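To check how a running pod compares against its configured requests and limits, a few commands help (assuming the pod spec above is saved as `my-pod.yaml` and the metrics-server add-on is installed):

```shell
# Create the pod from the spec above
kubectl apply -f my-pod.yaml

# Live CPU/memory usage (requires the metrics-server add-on)
kubectl top pod my-pod

# Inspect the configured requests and limits
kubectl describe pod my-pod
```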

Which Kubernetes services run on nodes, and what is the role of each?


The following Kubernetes services run on nodes:
- kubelet: responsible for communication between the control plane and the node. It ensures that containers are running as instructed by the control plane.
- kube-proxy: responsible for network proxying on the node. It maintains network rules so that traffic can reach the containers on each node.
- container runtime: responsible for managing containers and their images. It runs the containers in a secure and isolated environment.


What is a PDB (Pod Disruption Budget)?

A Pod Disruption Budget (PDB) is a Kubernetes object that limits how many pods of a replicated application can be down at the same time during voluntary disruptions, such as node drains, upgrades, or other maintenance operations. By specifying how many replicas must remain available (or how many may be unavailable), a PDB ensures the availability of applications running in the cluster, preventing an eviction or rolling operation from taking down so many pods at once that the application exceeds its downtime threshold.
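As a sketch, a PDB that keeps at least two replicas of a hypothetical `my-app` workload available during voluntary disruptions:

```yaml
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  minAvailable: 2        # at least 2 matching pods must stay up during drains/evictions
  selector:
    matchLabels:
      app: my-app        # hypothetical label; must match the pods it protects
```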

Understanding the Init Container and its Use Cases

An init container is a specialized container in Kubernetes that runs to completion before the app containers start. Init containers are executed as separate containers in the Pod and allow setup or preparation activities to happen before the actual container starts, ensuring that the app container has everything it needs to run.

Init containers can be used in a number of use-cases, such as:

  • Setting up the environment for an application
  • Preparing configuration files for an app
  • Creating or mounting shared volumes
  • Running database migrations before the app starts
  • Performing security checks before the app runs

Using init containers ensures that the app container has all the necessary configuration and resources before it launches. They are an excellent tool for running tasks that are essential to making sure the app can run correctly and efficiently.
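A minimal sketch of the "wait for a dependency" use case (the image tag and database service name are hypothetical): an init container that blocks until a database answers before the app container starts:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.36
    # Block until the (hypothetical) database service accepts TCP connections
    command: ['sh', '-c', 'until nc -z my-database 5432; do sleep 2; done']
  containers:
  - name: my-app
    image: my-image
```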

Understanding the Role of Load Balancing in Kubernetes

Load balancing is a fundamental part of Kubernetes that helps to distribute traffic evenly across multiple containers, pods, or nodes. The main role of a load balancer in Kubernetes is to ensure that a high level of availability and scalability is maintained by spreading incoming requests to multiple backend instances. K8s employs an in-built load balancer, the Kubernetes Service, which can balance traffic either through round-robin or session affinity. By using a load balancer, Kubernetes ensures that your applications are highly available and can handle traffic spikes effectively.

Various Tips to Improve Kubernetes Security

Kubernetes security can be improved by:

- Keeping Kubernetes components updated with the latest security patches and updates.

- Enforcing strong and unique passwords for accessing Kubernetes components.

- Disabling unused APIs and endpoints to reduce the attack surface.

- Restricting cluster access and role-based access control to authorized users only.

- Securing communication between Kubernetes components and outside networks with SSL/TLS.

- Implementing network policies and segmenting the cluster to control traffic between applications.

- Configuring Kubernetes logging and monitoring to detect and respond to security incidents in real-time.

- Performing regular vulnerability assessments and penetration testing to identify and address security weaknesses.

By implementing these practices, one can improve the security posture of their Kubernetes cluster and reduce the risk of cyber attacks.

Monitoring a Kubernetes Cluster


Monitoring a Kubernetes cluster is important to keep track of resource usage and ensure system availability. Here are some ways to monitor a Kubernetes cluster:

1. Use Kubernetes Dashboard

Kubernetes Dashboard provides a web-based user interface for your Kubernetes cluster. It is a powerful tool for monitoring and managing Kubernetes resources: you can view the status of your applications, as well as key information about your cluster's health and performance.

2. Use Prometheus and Grafana

Prometheus is an open-source monitoring system and time series database. It collects metrics from configured targets at given intervals, evaluates rule expressions, displays the results, and can trigger alerts if some condition is observed. Grafana is a popular platform for creating and sharing dashboards for data analysis and visualization. It can be used to display the metrics collected by Prometheus in a more human-friendly way.

3. Use Kubernetes Events

Kubernetes events are records of things that have happened to your cluster. They can be used to monitor the health of your cluster, diagnose issues, and track changes over time.
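Events (and basic resource metrics) can be inspected directly with kubectl:

```shell
# All events across namespaces, oldest first
kubectl get events -A --sort-by=.metadata.creationTimestamp

# Node-level CPU/memory usage (requires the metrics-server add-on)
kubectl top nodes
```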

By using one or more of the above monitoring methods, you can gain greater visibility into your Kubernetes cluster and make more informed decisions about how to optimize its performance and availability.

How to retrieve logs from a Kubernetes pod?

To retrieve logs from a Kubernetes pod, you can use the `kubectl logs` command followed by the pod name.

Example:

kubectl logs my-pod-name

This will output the logs from the specified pod to the terminal.

Additionally, you can also specify a container name within the pod if there are multiple containers.

Example:

kubectl logs my-pod-name -c my-container-name

This will show only the logs for the specified container within the pod.
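A few other commonly used variations of `kubectl logs` (pod and label names are placeholders):

```shell
kubectl logs -f my-pod-name                  # stream (follow) logs as they arrive
kubectl logs my-pod-name --previous          # logs from the previous (crashed/restarted) instance
kubectl logs -l app=my-app --all-containers  # logs from all pods matching a label
```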

Converting a Defined Service into an External Service

A Kubernetes Service is internal-only by default (`type: ClusterIP`). Assuming the service is already defined, it can be turned into an external service by changing its `type` to `NodePort` or `LoadBalancer`:


apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  type: LoadBalancer   # was ClusterIP (the default); now provisions an external load balancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080

The same change can be applied to a live service in place:

kubectl patch service my-service -p '{"spec": {"type": "LoadBalancer"}}'

Alternatively, an Ingress can expose an existing ClusterIP service over HTTP/HTTPS without changing its type.

Configuring an Ingress Using a Configuration Spec File

Here's an example of configuring an Ingress using a configuration spec file:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app-image
        ports:
        - containerPort: 80
        env:
        - name: ENVIRONMENT
          value: production 
---
apiVersion: v1
kind: Service
metadata:
  name: my-app-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  rules:
  - host: my-app.com
    http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: my-app-service
            port:
              name: http

In this configuration file, we have defined a Deployment object for our application, a Service object to expose it internally, and an Ingress object to expose it externally. The Ingress object has a rule which maps a host and path to our Service, allowing external traffic to reach our application.

Configuring TLS with Ingress

To configure TLS with Ingress, follow the steps below:

1. Obtain an SSL/TLS certificate from a trusted certificate authority.
2. Create a Kubernetes secret of type `tls` to hold the certificate and its private key.
3. Update the Ingress resource to reference the secret and the desired host name(s).
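The secret in step 2 can be created directly with kubectl (the certificate and key file names are placeholders):

```shell
# Creates a secret of type kubernetes.io/tls from a certificate/key pair
kubectl create secret tls example-tls --cert=tls.crt --key=tls.key
```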

Here is an example Ingress resource with TLS enabled:


apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:
  - hosts:
    - example.com
    secretName: example-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: example-service
            port:
              number: 80

After creating and updating the Ingress resource as needed, the SSL certificate will be used to encrypt traffic between clients and the Ingress controller.

Why Use Namespaces and the Problem with Default Namespace?

In Kubernetes, namespaces partition a single cluster into multiple virtual clusters. They let you group related resources (pods, services, deployments) by team, project, or environment, scope access control (RBAC) and resource quotas to each group, and avoid naming collisions between objects owned by different teams.

The problem with putting everything in the default namespace is that all workloads end up in one shared pool: object names collide, it is hard to grant access or set quotas for one team without affecting the others, and an accidental change or deletion can hit unrelated applications. Debugging and cleanup also become harder as the cluster grows.

By creating dedicated namespaces, you give resources unique, scoped identities, can apply per-team or per-environment policies, and can tear down an entire environment simply by deleting its namespace. Using namespaces is therefore a best practice that saves a lot of time and headaches in the long run.

Definition of an Operator

In Kubernetes, an Operator is a pattern for extending the cluster with application-specific automation. An Operator packages a custom controller together with one or more Custom Resource Definitions (CRDs): the CRD defines a new resource type (for example, a database cluster), and the controller watches those resources and continuously reconciles the actual state of the application with the desired state declared in them.

Importance of Operators

Operators encode the operational knowledge needed to run complex, often stateful applications, such as installation, upgrades, backups, and failover, so these tasks are automated the same way Kubernetes automates stateless workloads. Well-known examples include the Prometheus Operator and various database operators. A good understanding of the Operator pattern is valuable for anyone running nontrivial workloads on Kubernetes.

What is the Default Backend in Ingress?

In Kubernetes, an Ingress is a way to route external traffic to services within the cluster. The default backend in Ingress is a Kubernetes service used to handle all requests that do not match any of the defined rules in the Ingress resource. This service can be used to return a default or personalized 404 error page, depending on the needs of the application. It is important to configure a default backend for an Ingress resource to ensure that all traffic is handled appropriately.
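As a sketch, a default backend is set via the `spec.defaultBackend` field of the Ingress (the service name here is hypothetical):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: default-ingress
spec:
  defaultBackend:
    service:
      name: default-404-service   # hypothetical service serving a custom 404 page
      port:
        number: 80
```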

Kubernetes Interview Questions for Experienced

Question 19: How can Kubernetes be run locally?

In order to run Kubernetes locally, one could use Minikube, which is a tool that sets up a single-node Kubernetes cluster on a local system. Minikube can be used on various operating systems such as Linux, Windows and macOS. It allows for the creation of pods, deployments and services in a local environment.
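A typical local workflow with Minikube looks like this:

```shell
minikube start       # create a single-node local Kubernetes cluster
kubectl get nodes    # kubectl is now pointed at the Minikube cluster
minikube dashboard   # open the Kubernetes Dashboard in a browser
minikube stop        # shut the cluster down without deleting it
```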

Kubernetes Load Balancing

Load balancing in Kubernetes is the process of distributing network traffic across the instances (pods) of an application. Kubernetes implements this primarily through Services: kube-proxy programs iptables or IPVS rules on each node to spread connections across a Service's backing pods, while DNS-based approaches and external load balancers can distribute traffic across nodes. This keeps the workload spread evenly so that no instance is overburdened, improving the availability, scalability, and responsiveness of the application.

Explanation of Terms in Deployment Configuration File

The deployment configuration file contains settings and parameters for deploying an application to a server. The following terms are commonly found in a deployment configuration file:

1. Image: A pre-built container that contains the application code and its dependencies.

2. Replicas: The number of instances of the application that should be running.

3. Ports: The network port on which the application should listen for incoming traffic.

4. Environment Variables: Variables that define the application’s environment, such as database connection information.

5. Resource Limits: Limits on the amount of CPU and memory that a container can use.

6. Volumes: Storage volumes that the container can access for reading and writing files.

7. Labels: Key-value pairs that are used to identify and group objects within the Kubernetes cluster.

Understanding these terms is essential to correctly configure a deployment in Kubernetes.

The Difference Between Docker Swarm and Kubernetes

Docker Swarm and Kubernetes are both popular container orchestration platforms used for deploying, scaling, and managing containerized applications. However, they differ in several ways.

Docker Swarm is simpler and easier to use, making it a good choice for smaller projects. It has a smaller learning curve and requires less setup time. However, it may not be suitable for large, complex applications.

Kubernetes, on the other hand, is more robust and scalable. It can handle large and complex applications with ease. It also has advanced features like automatic scaling, self-healing, and rolling updates. However, it has a steeper learning curve and may require more setup time.

Overall, the choice between Docker Swarm and Kubernetes depends on the specific requirements of your project. If you have a smaller project that requires a simple and easy-to-use platform, Docker Swarm may be the right choice. If you have a larger, more complex project that requires advanced features and scalability, Kubernetes may be the better option.

Troubleshooting Tips for Unscheduled Pods

When a pod is not being scheduled, it may be due to a variety of issues. Here are some steps you can take to troubleshoot:


1. Check the pod's resource requirements: make sure the pod's CPU and memory requests are not greater than the available resources in the cluster.

2. Check the pod's nodeSelector field: make sure the pod's nodeSelector field matches the labels on a node that's available to run the pod.

3. Check the pod's tolerations field: if the pod has a toleration, make sure the node has the corresponding taint for the toleration.

4. Check for namespace constraints: make sure the pod is being scheduled in a namespace that's not restricted.

5. Check for network constraints: make sure the pod is not being blocked by a network policy or security group.

6. Check for node constraints: make sure there are no node selectors or affinity rules blocking the pod from running on a specific node.

7. Check the scheduling algorithm: check if the scheduling algorithm is working correctly.

8. Check for pod lifecycle issues: make sure there are no issues with the pod's lifecycle that could prevent it from being scheduled.

9. Check for external dependencies: make sure there are no external dependencies that could be preventing the pod from being scheduled.

10. Check the logs: if all else fails, check the logs for any error messages that could indicate why the pod is not being scheduled.
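Most of the checks above start from the pod's events and the state of the nodes (the pod name is a placeholder):

```shell
kubectl describe pod my-pod    # the Events section explains scheduling failures
kubectl get events --field-selector involvedObject.name=my-pod
kubectl get nodes --show-labels             # compare against the pod's nodeSelector
kubectl describe nodes | grep Taints        # taints that may require tolerations
```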

Running a Pod on a Specific Node

To run a pod on a specific node, you need to add a node selector to the pod's YAML configuration file. The node selector is an attribute that specifies the node's label(s) that the pod should run on.

Here's an example YAML file configuration for a pod that should run on a node with the label "node_label: node-1":

apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: my-image
  nodeSelector:
    node_label: node-1

In this YAML file, the "nodeSelector" attribute is set to "node_label: node-1", which indicates that the pod should be scheduled on a node that has the label "node_label" with the value "node-1".

Once you have saved the YAML configuration file with the correct node selector, you can create the pod by running the following command:

kubectl create -f my-pod.yaml

This command will create the pod with the specified node selector, and it will be scheduled on a node that meets the label requirements.
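To confirm where the pod actually landed:

```shell
kubectl get pod my-pod -o wide   # the NODE column shows the node it was scheduled on
```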

Different Ways to Provide External Network Connectivity to Kubernetes (K8s)

There are several ways to provide external network connectivity to a Kubernetes cluster:

  1. NodePort: Exposes a Kubernetes service on each node’s IP at a static port.
  2. LoadBalancer: Creates an external load balancer in the cloud and assigns a fixed IP to the service.
  3. Ingress: Exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
  4. HostNetwork: Allows a pod to use the node's network namespace and expose network sockets.
  5. ClusterIP: Creates a virtual IP address for a Kubernetes service to communicate with other services in the cluster, but not externally.

Choosing the right external network connectivity method for your Kubernetes deployment depends on your specific requirements and resources.
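As a sketch, exposing a hypothetical `my-app` workload with a NodePort service:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-nodeport-service
spec:
  type: NodePort
  selector:
    app: my-app
  ports:
  - port: 80          # service port inside the cluster
    targetPort: 8080  # container port
    nodePort: 30080   # static port opened on every node (30000-32767 range)
```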

How to forward port 8080 from container to browser via service and ingress?

To forward the port 8080 from the container to the browser, we need to follow the below steps:

1. Create a Kubernetes deployment with a container running the application that listens on port 8080.
2. Create a Kubernetes service that exposes the deployment and maps the container port to a service port.
3. Create an ingress resource that maps the service to a domain name and sets the path used to access the service.
4. Configure the ingress controller, which forwards requests to the correct service based on the domain name and path.

Here is an example of how the Kubernetes manifest files might look like:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-image:latest
        ports:
        - containerPort: 8080

---

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
  - name: http
    port: 8080
    targetPort: 8080

---

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: example.com
    http:
      paths:
      - path: /my-app
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              name: http

In this example, we have created a deployment with a container running an application that listens on port 8080. We then created a service that maps the container port to a service port. Next, we created an ingress resource that maps the service to a domain name and path. Finally, we configured the ingress controller to forward the requests to the correct service based on the domain name and path.
