<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Infosec Consultant & Pentester ]]></title><description><![CDATA[Thoughts, tutorials and other musings related to Information Security, Pentesting skills and life in general.]]></description><link>https://chris-young.net/</link><image><url>https://chris-young.net/favicon.png</url><title>Infosec Consultant &amp; Pentester </title><link>https://chris-young.net/</link></image><generator>Ghost 5.33</generator><lastBuildDate>Tue, 03 Mar 2026 04:22:22 GMT</lastBuildDate><atom:link href="https://chris-young.net/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Finding passwords]]></title><description><![CDATA[<p>Couple of commands which help in finding passwords on a Windows based machine:</p><h3 id="wifi-passwords">Wifi Passwords:<br></h3><p>This command can be helpful during an assessment because it gives insight into how a device connects to different Wi-Fi networks. By looking at the networks a device has saved, you can sometimes discover other</p>]]></description><link>https://chris-young.net/finding-passwords/</link><guid isPermaLink="false">692219dd651c3702e6fac8fc</guid><category><![CDATA[Windows]]></category><category><![CDATA[Active Directory]]></category><category><![CDATA[Attack Vectors]]></category><dc:creator><![CDATA[Chris Young]]></dc:creator><pubDate>Sat, 22 Nov 2025 20:33:43 GMT</pubDate><content:encoded><![CDATA[<p>Couple of commands which help in finding passwords on a Windows based machine:</p><h3 id="wifi-passwords">Wifi Passwords:<br></h3><p>This command can be helpful during an assessment because it gives insight into how a device connects to different Wi-Fi networks. 
By looking at the networks a device has saved, you can sometimes discover other internal or guest networks the device uses, which may show pathways an attacker could move through if the device were compromised. </p><pre><code>netsh wlan show profiles | Select-String &quot;All User Profile&quot; | ForEach-Object { ($_.ToString().Split(&quot;:&quot;)[1].Trim()) } | ForEach-Object { Write-Host &quot;`nProfile: $_&quot; -ForegroundColor Cyan; (netsh wlan show profile name=&quot;$_&quot; key=clear | Select-String &quot;Key Content&quot;) -replace &apos;.*:&apos;,&apos;Password: &apos; }</code></pre><h3 id="searching-the-registry-for-stored-passwords">Searching the registry for stored passwords:</h3><pre><code>Get-ChildItem -Path HKLM:\, HKCU:\ -Recurse -ErrorAction SilentlyContinue | Get-ItemProperty -ErrorAction SilentlyContinue | Where-Object { $_ -match &quot;password&quot; }</code></pre><h3 id="files-that-might-contain-passwordsstarting-at-c">Files that might contain passwords - starting at C:\</h3><pre><code>Get-ChildItem -Path C:\ -Include *.txt,*.xml,*.ini,*.config,*.bat -Recurse -ErrorAction SilentlyContinue | Select-String -Pattern &quot;password&quot;, &quot;pwd&quot;, &quot;pass&quot; | Select Path, LineNumber, Line</code></pre><h3 id="autologin-at-registry">Autologin at Registry</h3><p>If automatic logon is configured, <strong>DefaultPassword</strong> will appear <strong>in plaintext</strong>, which is a security risk. This command essentially shows whether Windows is set to automatically log in and, if so, with what credentials.</p><pre><code>Get-ItemProperty &quot;HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon&quot; | Select-Object DefaultUsername, DefaultPassword, AutoAdminLogon</code></pre><p>This next one isn&apos;t exclusive to passwords; however, it&apos;s still a helpful command to have in your toolkit. 
&#xA0;The following command can be used to look at the PowerShell history of all users, or of any users you are able to read if you are not an administrator. For example (a sketch assuming the default PSReadLine history location):</p><pre><code>Get-ChildItem C:\Users\*\AppData\Roaming\Microsoft\Windows\PowerShell\PSReadLine\ConsoleHost_history.txt -ErrorAction SilentlyContinue | ForEach-Object { Write-Host &quot;`n$($_.FullName)&quot; -ForegroundColor Cyan; Get-Content $_.FullName }</code></pre>]]></content:encoded></item><item><title><![CDATA[63/100 Deploy Iron Gallery App on Kubernetes]]></title><description><![CDATA[<p>There is an iron gallery app that the Nautilus DevOps team was developing. They have recently customized the app and are going to deploy the same on the Kubernetes cluster.</p><ol><li>Create a namespace <code>iron-namespace-devops</code></li></ol><p>Create a deployment <code>iron-gallery-deployment-devops</code> for <code>iron gallery</code> under the same namespace you created.</p><p>:- Labels <code>run</code></p>]]></description><link>https://chris-young.net/63-100/</link><guid isPermaLink="false">68fb4f3b651c3702e6fac8d4</guid><dc:creator><![CDATA[Chris Young]]></dc:creator><pubDate>Fri, 24 Oct 2025 10:17:11 GMT</pubDate><content:encoded><![CDATA[<p>There is an iron gallery app that the Nautilus DevOps team was developing. 
They have recently customized the app and are going to deploy the same on the Kubernetes cluster.</p><ol><li>Create a namespace <code>iron-namespace-devops</code></li></ol><p>Create a deployment <code>iron-gallery-deployment-devops</code> for <code>iron gallery</code> under the same namespace you created.</p><p>:- Labels <code>run</code> should be <code>iron-gallery</code>.</p><p>:- Replicas count should be <code>1</code>.</p><p>:- Selector&apos;s matchLabels <code>run</code> should be <code>iron-gallery</code>.</p><p>:- Template labels <code>run</code> should be <code>iron-gallery</code> under metadata.</p><p>:- The container should be named as <code>iron-gallery-container-devops</code>, use <code>kodekloud/irongallery:2.0</code> image ( use exact image name / tag ).</p><p>:- Resources limits for memory should be <code>100Mi</code> and for CPU should be <code>50m</code>.</p><p>:- First volumeMount name should be <code>config</code>, its mountPath should be <code>/usr/share/nginx/html/data</code>.</p><p>:- Second volumeMount name should be <code>images</code>, its mountPath should be <code>/usr/share/nginx/html/uploads</code>.</p><p>:- First volume name should be <code>config</code> and give it <code>emptyDir</code> and second volume name should be <code>images</code>, also give it <code>emptyDir</code>.</p><p>Create a deployment <code>iron-db-deployment-devops</code> for <code>iron db</code> under the same namespace.</p><p>:- Labels <code>db</code> should be <code>mariadb</code>.</p><p>:- Replicas count should be <code>1</code>.</p><p>:- Selector&apos;s matchLabels <code>db</code> should be <code>mariadb</code>.</p><p>:- Template labels <code>db</code> should be <code>mariadb</code> under metadata.</p><p>:- The container name should be <code>iron-db-container-devops</code>, use <code>kodekloud/irondb:2.0</code> image ( use exact image name / tag ).</p><p>:- Define environment, set <code>MYSQL_DATABASE</code> its value should be <code>database_apache</code>, set 
<code>MYSQL_ROOT_PASSWORD</code> and <code>MYSQL_PASSWORD</code> value should be with some complex passwords for DB connections, and <code>MYSQL_USER</code> value should be any custom user ( except root ).</p><p>:- Volume mount name should be <code>db</code> and its mountPath should be <code>/var/lib/mysql</code>. Volume name should be <code>db</code> and give it an <code>emptyDir</code>.</p><ol><li>Create a service for <code>iron db</code> which should be named <code>iron-db-service-devops</code> under the same namespace. Configure spec as selector&apos;s db should be <code>mariadb</code>. Protocol should be <code>TCP</code>, port and targetPort should be <code>3306</code> and its type should be <code>ClusterIP</code>.</li><li>Create a service for <code>iron gallery</code> which should be named <code>iron-gallery-service-devops</code> under the same namespace. Configure spec as selector&apos;s run should be <code>iron-gallery</code>. Protocol should be <code>TCP</code>, port and targetPort should be <code>80</code>, nodePort should be <code>32678</code> and its type should be <code>NodePort</code>.</li></ol><p>Create the yaml file, <code>iron-gallery-setup.yaml</code></p><pre><code>apiVersion: v1
kind: Namespace
metadata:
  name: iron-namespace-devops
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iron-gallery-deployment-devops
  namespace: iron-namespace-devops
spec:
  replicas: 1
  selector:
    matchLabels:
      run: iron-gallery
  template:
    metadata:
      labels:
        run: iron-gallery
    spec:
      containers:
        - name: iron-gallery-container-devops
          image: kodekloud/irongallery:2.0
          resources:
            limits:
              memory: &quot;100Mi&quot;
              cpu: &quot;50m&quot;
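          # NOTE: only limits are set here; when requests are omitted,
          # Kubernetes copies the limits into the container's requests.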
          volumeMounts:
            - name: config
              mountPath: /usr/share/nginx/html/data
            - name: images
              mountPath: /usr/share/nginx/html/uploads
      volumes:
        - name: config
          emptyDir: {}
        - name: images
          emptyDir: {}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: iron-db-deployment-devops
  namespace: iron-namespace-devops
spec:
  replicas: 1
  selector:
    matchLabels:
      db: mariadb
  template:
    metadata:
      labels:
        db: mariadb
    spec:
      containers:
        - name: iron-db-container-devops
          image: kodekloud/irondb:2.0
          env:
            - name: MYSQL_DATABASE
              value: database_apache
            - name: MYSQL_ROOT_PASSWORD
              value: &quot;R@@tP@ssw0rd123!&quot;
            - name: MYSQL_USER
              value: devuser
            - name: MYSQL_PASSWORD
              value: &quot;D3v0ps@321&quot;
          volumeMounts:
            - name: db
              mountPath: /var/lib/mysql
      volumes:
        - name: db
          emptyDir: {}
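        # emptyDir lives only as long as the Pod, so the MariaDB data is
        # ephemeral -- fine for this exercise, but a real database would
        # use a PersistentVolumeClaim instead.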
---
apiVersion: v1
kind: Service
metadata:
  name: iron-db-service-devops
  namespace: iron-namespace-devops
spec:
  type: ClusterIP
  selector:
    db: mariadb
  ports:
    - protocol: TCP
      port: 3306
      targetPort: 3306
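      # ClusterIP is reachable only from inside the cluster; pods in the
      # same namespace can reach MariaDB at iron-db-service-devops:3306.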
---
apiVersion: v1
kind: Service
metadata:
  name: iron-gallery-service-devops
  namespace: iron-namespace-devops
spec:
  type: NodePort
  selector:
    run: iron-gallery
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      nodePort: 32678
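      # nodePort must fall within the cluster's service-node-port-range
      # (30000-32767 by default); the app is then reachable on every
      # node at &lt;node-ip&gt;:32678.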
</code></pre><p>Apply the file:</p><pre><code>thor@jumphost ~$ kubectl apply -f iron-gallery-setup.yaml
namespace/iron-namespace-devops created
deployment.apps/iron-gallery-deployment-devops created
deployment.apps/iron-db-deployment-devops created
service/iron-db-service-devops created
service/iron-gallery-service-devops created</code></pre><p>Verify - check namespace, deployments, pods and services:</p><pre><code>thor@jumphost ~$ kubectl get ns
NAME                    STATUS   AGE
default                 Active   14m
iron-namespace-devops   Active   65s
kube-node-lease         Active   14m
kube-public             Active   14m
kube-system             Active   14m
local-path-storage      Active   14m
thor@jumphost ~$ kubectl get all -n iron-namespace-devops
NAME                                                 READY   STATUS    RESTARTS   AGE
pod/iron-db-deployment-devops-559f689bbc-4jjjz       1/1     Running   0          108s
pod/iron-gallery-deployment-devops-9c67c8cb7-srrk9   1/1     Running   0          108s

NAME                                  TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/iron-db-service-devops        ClusterIP   10.96.107.103   &lt;none&gt;        3306/TCP       108s
service/iron-gallery-service-devops   NodePort    10.96.73.15     &lt;none&gt;        80:32678/TCP   108s

NAME                                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/iron-db-deployment-devops        1/1     1            1           108s
deployment.apps/iron-gallery-deployment-devops   1/1     1            1           108s

NAME                                                       DESIRED   CURRENT   READY   AGE
replicaset.apps/iron-db-deployment-devops-559f689bbc       1         1         1       108s
replicaset.apps/iron-gallery-deployment-devops-9c67c8cb7   1         1         1       108s</code></pre>]]></content:encoded></item><item><title><![CDATA[62/100 Manage Secrets in Kubernetes]]></title><description><![CDATA[<p>The Nautilus DevOps team is working to deploy some tools in Kubernetes cluster. Some of the tools are licence based so that licence information needs to be stored securely within Kubernetes cluster. Therefore, the team wants to utilize Kubernetes secrets to store those secrets.</p><ol><li>We already have a secret key</li></ol>]]></description><link>https://chris-young.net/62-100-manage-secrets-in-kubernetes/</link><guid isPermaLink="false">68fb4184651c3702e6fac89d</guid><dc:creator><![CDATA[Chris Young]]></dc:creator><pubDate>Fri, 24 Oct 2025 09:21:08 GMT</pubDate><content:encoded><![CDATA[<p>The Nautilus DevOps team is working to deploy some tools in Kubernetes cluster. Some of the tools are licence based so that licence information needs to be stored securely within Kubernetes cluster. Therefore, the team wants to utilize Kubernetes secrets to store those secrets.</p><ol><li>We already have a secret key file <code>ecommerce.txt</code> under <code>/opt</code> location on <code>jump host</code>. Create a <code>generic secret</code> named <code>ecommerce</code>, it should contain the password/license-number present in <code>ecommerce.txt</code> file.<br></li><li>Also create a <code>pod</code> named <code>secret-datacenter</code>.<br></li><li>Configure pod&apos;s <code>spec</code> as container name should be <code>secret-container-datacenter</code>, image should be <code>debian</code> with <code>latest</code> tag (remember to mention the tag with image). Use <code>sleep</code> command for container so that it remains in running state. 
Consume the created secret and mount it under <code>/opt/cluster</code> within the container.<br></li><li>To verify you can exec into the container <code>secret-container-datacenter</code>, to check the secret key under the mounted path <code>/opt/cluster</code>. Before hitting the <code>Check</code> button please make sure pod/pods are in running state, also validation can take some time to complete so keep patience.</li></ol><p>Create the secret from the file we already have in <code>/opt/ecommerce.txt</code> which is present on the jump host.</p><pre><code>thor@jumphost ~$ kubectl create secret generic ecommerce --from-file=/opt/ecommerce.txt
secret/ecommerce created</code></pre><p>This will create a secret where:<br>Key = <code>ecommerce.txt</code><br>Value = the contents of <code>/opt/ecommerce.txt</code> (password/license-number)</p><p>This can then be verified:</p><pre><code>thor@jumphost ~$ kubectl get secrets
NAME        TYPE     DATA   AGE
ecommerce   Opaque   1      50s
thor@jumphost ~$ kubectl describe secret ecommerce
Name:         ecommerce
Namespace:    default
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;

Type:  Opaque

Data
====
ecommerce.txt:  7 bytes</code></pre><p>The next step is to create the pod YAML file, named <code>secret-datacenter.yaml</code>:</p><pre><code>apiVersion: v1
kind: Pod
metadata:
  name: secret-datacenter
spec:
  containers:
    - name: secret-container-datacenter
      image: debian:latest
      command: [&quot;sleep&quot;, &quot;infinity&quot;]
      volumeMounts:
        - name: secret-volume
          mountPath: /opt/cluster
  volumes:
    - name: secret-volume
      secret:
        secretName: ecommerce
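        # each key in the secret (here just ecommerce.txt) is projected
        # as a read-only file under the mountPath /opt/cluster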
</code></pre><p>Next, apply the pod configuration and then check the pod status:</p><pre><code>thor@jumphost ~$ kubectl apply -f secret-datacenter.yaml
pod/secret-datacenter created

thor@jumphost ~$ kubectl get pods
NAME                READY   STATUS    RESTARTS   AGE
secret-datacenter   1/1     Running   0          31s</code></pre><p>Finally, verify the secret inside the container:</p><pre><code>thor@jumphost ~$ kubectl exec -it secret-datacenter -- bash
root@secret-datacenter:/# ls /opt/cluster
ecommerce.txt
root@secret-datacenter:/# cat /opt/cluster/ecommerce.txt
5ecur3</code></pre>]]></content:encoded></item><item><title><![CDATA[61/100 Init Containers in Kubernetes]]></title><description><![CDATA[<p>Init containers are a special type of container in K8&apos;s that run before the main (regular) application containers start inside a pod. &#xA0;They are used for initialisation or setup tasks that need to happen <code>once</code> before the main app runs.</p><p>You can think of them as <strong>setup</strong></p>]]></description><link>https://chris-young.net/61-100-init-containers-in-kubernetes/</link><guid isPermaLink="false">68fb3971651c3702e6fac868</guid><dc:creator><![CDATA[Chris Young]]></dc:creator><pubDate>Fri, 24 Oct 2025 08:52:03 GMT</pubDate><content:encoded><![CDATA[<p>Init containers are a special type of container in K8&apos;s that run before the main (regular) application containers start inside a pod. &#xA0;They are used for initialisation or setup tasks that need to happen <code>once</code> before the main app runs.</p><p>You can think of them as <strong>setup jobs</strong> that prepare the environment for the main container:<br>- Runs to completion before any app containers start.<br>- Runs sequentially &#x2014; each one must finish successfully before the next starts.<br>- Is defined inside a Pod&#x2019;s spec (just like regular containers).</p><hr><p>There are some applications that need to be deployed on Kubernetes cluster and these apps have some pre-requisites where some configurations need to be changed before deploying the app container. 
Some of these changes cannot be made inside the images so the DevOps team has come up with a solution to use init containers to perform these tasks during deployment.</p><ol><li>Create a <code>Deployment</code> named as <code>ic-deploy-devops</code>.<br></li><li>Configure <code>spec</code> as replicas should be <code>1</code>, labels <code>app</code> should be <code>ic-devops</code>, template&apos;s metadata lables <code>app</code> should be the same <code>ic-devops</code>.<br></li><li>The <code>initContainers</code> should be named as <code>ic-msg-devops</code>, use image <code>fedora</code> with <code>latest</code> tag and use command <code>&apos;/bin/bash&apos;</code>, <code>&apos;-c&apos;</code> and <code>&apos;echo Init Done - Welcome to xFusionCorp Industries &gt; /ic/blog&apos;</code>. The volume mount should be named as <code>ic-volume-devops</code> and mount path should be <code>/ic</code>.<br></li><li>Main container should be named as <code>ic-main-devops</code>, use image <code>fedora</code> with <code>latest</code> tag and use command <code>&apos;/bin/bash&apos;</code>, <code>&apos;-c&apos;</code> and <code>&apos;while true; do cat /ic/blog; sleep 5; done&apos;</code>. The volume mount should be named as <code>ic-volume-devops</code> and mount path should be <code>/ic</code>.<br></li><li>Volume to be named as <code>ic-volume-devops</code> and it should be an emptyDir type.</li></ol><p>Create the <code>ic-deploy-devops.yaml</code> file. </p><pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: ic-deploy-devops
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ic-devops
  template:
    metadata:
      labels:
        app: ic-devops
    spec:
      initContainers:
        - name: ic-msg-devops
          image: fedora:latest
          command: [&quot;/bin/bash&quot;, &quot;-c&quot;, &quot;echo Init Done - Welcome to xFusionCorp Industries &gt; /ic/blog&quot;]
          volumeMounts:
            - name: ic-volume-devops
              mountPath: /ic
      containers:
        - name: ic-main-devops
          image: fedora:latest
          command: [&quot;/bin/bash&quot;, &quot;-c&quot;, &quot;while true; do cat /ic/blog; sleep 5; done&quot;]
          volumeMounts:
            - name: ic-volume-devops
              mountPath: /ic
      volumes:
        - name: ic-volume-devops
          emptyDir: {}
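          # the emptyDir volume is created when the Pod starts and is
          # shared by the init and main containers, which is how /ic/blog
          # written by ic-msg-devops is visible to ic-main-devops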
</code></pre><p>Deploy: <code>kubectl apply -f ic-deploy-devops.yaml</code></p><p>Verify its running: <code>kubectl get pods</code></p>]]></content:encoded></item><item><title><![CDATA[60/100 Persistent Volumes in Kubernetes]]></title><description><![CDATA[<p>The Nautilus DevOps team is working on a Kubernetes template to deploy a web application on the cluster. There are some requirements to create/use persistent volumes to store the application code, and the template needs to be designed accordingly.</p><ol><li>Create a <code>PersistentVolume</code> named as <code>pv-devops</code>. Configure the <code>spec</code> as</li></ol>]]></description><link>https://chris-young.net/60-100/</link><guid isPermaLink="false">68fb3216651c3702e6fac830</guid><dc:creator><![CDATA[Chris Young]]></dc:creator><pubDate>Fri, 24 Oct 2025 08:19:48 GMT</pubDate><content:encoded><![CDATA[<p>The Nautilus DevOps team is working on a Kubernetes template to deploy a web application on the cluster. There are some requirements to create/use persistent volumes to store the application code, and the template needs to be designed accordingly.</p><ol><li>Create a <code>PersistentVolume</code> named as <code>pv-devops</code>. Configure the <code>spec</code> as storage class should be <code>manual</code>, set capacity to <code>3Gi</code>, set access mode to <code>ReadWriteOnce</code>, volume type should be <code>hostPath</code> and set path to <code>/mnt/security</code> (this directory is already created, you might not be able to access it directly, so you need not to worry about it).</li><li>Create a <code>PersistentVolumeClaim</code> named as <code>pvc-devops</code>. 
Configure the <code>spec</code> as storage class should be <code>manual</code>, request <code>1Gi</code> of the storage, set access mode to <code>ReadWriteOnce</code>.</li><li>Create a <code>pod</code> named as <code>pod-devops</code>, mount the persistent volume you created with claim name <code>pvc-devops</code> at document root of the web server, the container within the pod should be named as <code>container-devops</code> using image <code>httpd</code> with <code>latest</code> tag only (remember to mention the tag i.e <code>httpd:latest</code>).</li><li>Create a node port type service named <code>web-devops</code> using node port <code>30008</code> to expose the web server running within the pod.</li></ol><p>Create the yaml file, in this example I&apos;m using the name <code>webapp-devops.yaml</code></p><pre><code>apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-devops
spec:
  storageClassName: manual
  capacity:
    storage: 3Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/security
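  # hostPath maps the volume to /mnt/security on the node's own
  # filesystem, so this PV is only suitable for single-node or lab
  # clusters.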
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-devops
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
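  # with storageClassName "manual" there is no dynamic provisioning; the
  # claim binds to a matching PV -- here pv-devops, whose 3Gi capacity
  # satisfies the 1Gi request.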
---
apiVersion: v1
kind: Pod
metadata:
  name: pod-devops
  labels:
    app: web-devops
spec:
  containers:
    - name: container-devops
      image: httpd:latest
      ports:
        - containerPort: 80
      volumeMounts:
        - name: volume-devops
          mountPath: /usr/local/apache2/htdocs
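          # /usr/local/apache2/htdocs is the default DocumentRoot of the
          # httpd image, so files on the volume are served directly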
  volumes:
    - name: volume-devops
      persistentVolumeClaim:
        claimName: pvc-devops
---
apiVersion: v1
kind: Service
metadata:
  name: web-devops
spec:
  type: NodePort
  selector:
    app: web-devops
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30008

</code></pre><p>Note that there was an error when applying the YAML file:</p><pre><code>thor@jumphost ~$ kubectl apply -f webapp-devops.yaml
persistentvolume/pv-devops created
persistentvolumeclaim/pvc-devops created
error: error parsing webapp-devops.yaml: error converting YAML to JSON: yaml: line 6: found character that cannot start any token</code></pre><p>It turns out that YAML is pretty strict when it comes to formatting. This error usually means one of the following needs rectifying:<br>- A stray tab character (<code>\t</code>)<br>- A colon (<code>:</code>) not followed by a space<br>- Bad indentation (a mix of tabs and spaces)<br>- Extra invisible characters from copied text</p><p>Rewriting the YAML with consistent 2-space indentation fixes it.</p><p>Apply the newly formatted YAML file:</p><pre><code>thor@jumphost ~$ kubectl apply -f webapp-devops.yaml
persistentvolume/pv-devops unchanged
persistentvolumeclaim/pvc-devops unchanged
pod/pod-devops created
service/web-devops created</code></pre>]]></content:encoded></item><item><title><![CDATA[59/100 Troubleshoot Deployment issues in Kubernetes]]></title><description><![CDATA[<p>Last week, the Nautilus DevOps team deployed a redis app on Kubernetes cluster, which was working fine so far. This morning one of the team members was making some changes in this existing setup, but he made some mistakes and the app went down. The deployment name is <code>redis-deployment</code>. The</p>]]></description><link>https://chris-young.net/59-100-troubleshoot-deployment-issues-in-kubernetes/</link><guid isPermaLink="false">68f9fb26651c3702e6fac7eb</guid><dc:creator><![CDATA[Chris Young]]></dc:creator><pubDate>Thu, 23 Oct 2025 10:20:49 GMT</pubDate><content:encoded><![CDATA[<p>Last week, the Nautilus DevOps team deployed a redis app on Kubernetes cluster, which was working fine so far. This morning one of the team members was making some changes in this existing setup, but he made some mistakes and the app went down. The deployment name is <code>redis-deployment</code>. The pods are not in running state right now, so please look into the issue and fix the same.</p><pre><code>thor@jumphost ~$ kubectl get deployment redis-deployment
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
redis-deployment   0/1     1            0           72s

thor@jumphost ~$ kubectl get pods -l app=redis
NAME                                READY   STATUS              RESTARTS   AGE
redis-deployment-54cdf4f76d-tpd68   0/1     ContainerCreating   0          102s
</code></pre><p>Use the pod name from above to inspect the pod details (describes it):</p><pre><code>thor@jumphost ~$ kubectl describe pod redis-deployment-54cdf4f76d-tpd68
Name:             redis-deployment-54cdf4f76d-tpd68
Namespace:        default
Priority:         0
Service Account:  default
Node:             kodekloud-control-plane/172.17.0.2
Start Time:       Thu, 23 Oct 2025 09:56:15 +0000
Labels:           app=redis
                  pod-template-hash=54cdf4f76d
Annotations:      &lt;none&gt;
Status:           Pending
IP:               
IPs:              &lt;none&gt;
Controlled By:    ReplicaSet/redis-deployment-54cdf4f76d
Containers:
  redis-container:
    Container ID:   
    Image:          redis:alpin
    Image ID:       
    Port:           6379/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       ContainerCreating
    Ready:          False
    Restart Count:  0
    Requests:
      cpu:        300m
    Environment:  &lt;none&gt;
    Mounts:
      /redis-master from config (rw)
      /redis-master-data from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-26kqb (ro)
Conditions:
  Type              Status
  Initialized       True 
  Ready             False 
  ContainersReady   False 
  PodScheduled      True 
Volumes:
  data:
    Type:       EmptyDir (a temporary directory that shares a pod&apos;s lifetime)
    Medium:     
    SizeLimit:  &lt;unset&gt;
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      redis-conig
    Optional:  false
  kube-api-access-26kqb:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       &lt;nil&gt;
    DownwardAPI:             true
QoS Class:                   Burstable
Node-Selectors:              &lt;none&gt;
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason       Age                  From               Message
  ----     ------       ----                 ----               -------
  Normal   Scheduled    2m53s                default-scheduler  Successfully assigned default/redis-deployment-54cdf4f76d-tpd68 to kodekloud-control-plane
  Warning  FailedMount  50s                  kubelet            Unable to attach or mount volumes: unmounted volumes=[config], unattached volumes=[], failed to process volumes=[]: timed out waiting for the condition
  Warning  FailedMount  45s (x9 over 2m53s)  kubelet            MountVolume.SetUp failed for volume &quot;config&quot; : configmap &quot;redis-conig&quot; not found</code></pre><p>The last line clearly shows that there is a typo in the <code>configmap</code> name - it should reference <code>redis-config</code>, however it&apos;s showing <code>configmap redis-conig not found</code>.<br>Also, if we review the deployment further, we see that the image name for the redis container has a typo too: it is missing the <code>e</code> from the end of <code>alpin</code>.</p><p>Edit the deployment with:</p><pre><code>kubectl edit deployment redis-deployment

-- snipped --
          defaultMode: 420
          name: redis-conig  &lt;- Change this to read name: redis-config
        name: config
-- snipped --

redis-container:
    Container ID:   
    Image:          redis:alpin &lt;- Change this to read redis:alpine

thor@jumphost ~$ kubectl edit deployment redis-deployment
deployment.apps/redis-deployment edited
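
# A non-interactive alternative for the image typo (the container name
# redis-container is taken from the describe output above):
# kubectl set image deployment/redis-deployment redis-container=redis:alpine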
</code></pre><p>Now delete the stuck pod:</p><pre><code>thor@jumphost ~$ kubectl delete pod redis-deployment-54cdf4f76d-tpd68
pod &quot;redis-deployment-54cdf4f76d-tpd68&quot; deleted</code></pre><p>Check the rollout and then confirm the pods are up:</p><pre><code>thor@jumphost ~$ kubectl rollout status deployment redis-deployment
deployment &quot;redis-deployment&quot; successfully rolled out

thor@jumphost ~$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
redis-deployment-7c8d4f6ddf-w889f   1/1     Running   0          2m56s</code></pre><p>Summary of fixes</p><!--kg-card-begin: html--><table>
<thead>
<tr>
<th>Issue</th>
<th>Root cause</th>
<th>Fix</th>
</tr>
</thead>
<tbody>
<tr>
<td><code inline>ImagePullBackOff</code></td>
<td>Invalid image name <code inline>redis:alpin</code></td>
<td>Change to <code inline>redis:alpine</code></td>
</tr>
<tr>
<td><code inline>ContainerCreating</code></td>
<td>Missing ConfigMap <code inline>redis-conig</code></td>
<td>Fix typo &#x2192; <code inline>redis-config</code> or create ConfigMap</td>
</tr>
<tr>
<td>Old pods stuck</td>
<td>Outdated ReplicaSets</td>
<td>Delete them; Deployment recreates new pods</td>
</tr>
</tbody>
</table><!--kg-card-end: html-->]]></content:encoded></item><item><title><![CDATA[58/100 Deploy Grafana on Kubernetes Cluster]]></title><description><![CDATA[<p>The Nautilus DevOps team is planning to set up a Grafana tool to collect and analyze analytics from some applications. They are planning to deploy it on Kubernetes cluster. <br>- Create a deployment named <code>grafana-deployment-nautilus</code> using any grafana image for Grafana app. Set other parameters as per your choice.<br>-</p>]]></description><link>https://chris-young.net/58-100-deploy-grafana-on-kubernetes-cluster/</link><guid isPermaLink="false">68f9f4c1651c3702e6fac7c2</guid><dc:creator><![CDATA[Chris Young]]></dc:creator><pubDate>Thu, 23 Oct 2025 09:32:49 GMT</pubDate><content:encoded><![CDATA[<p>The Nautilus DevOps team is planning to set up a Grafana tool to collect and analyze analytics from some applications. They are planning to deploy it on Kubernetes cluster. <br>- Create a deployment named <code>grafana-deployment-nautilus</code> using any grafana image for Grafana app. Set other parameters as per your choice.<br>- Create <code>NodePort</code> type service with nodePort <code>32000</code> to expose the app.</p><p>First, create the YAML manifest file named <code>grafana-deployment.yaml</code>:</p><pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: grafana-deployment-nautilus
spec:
  replicas: 1
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      containers:
        - name: grafana-container
          image: grafana/grafana:latest
          ports:
            - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: grafana-service
spec:
  type: NodePort
  selector:
    app: grafana
  ports:
    - port: 3000
      targetPort: 3000
      nodePort: 32000
</code></pre><p>Deploy:</p><pre><code>thor@jumphost ~$ kubectl apply -f grafana-deployment.yaml
deployment.apps/grafana-deployment-nautilus created
service/grafana-service created

thor@jumphost ~$ kubectl get deployments
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
grafana-deployment-nautilus   0/1     1            0           19s

# Check the service (optional)
thor@jumphost ~$ kubectl get svc grafana-service
NAME              TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)          AGE
grafana-service   NodePort   10.96.117.133   &lt;none&gt;        3000:32000/TCP   43s
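
# (Sketch, assuming a node IP reachable from the jumphost)
# Grafana should answer on the NodePort once the pod is Ready:
# curl -sI http://&lt;node-ip&gt;:32000/login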

</code></pre><figure class="kg-card kg-image-card"><img src="https://chris-young.net/content/images/2025/10/image-4.png" class="kg-image" alt loading="lazy" width="920" height="654" srcset="https://chris-young.net/content/images/size/w600/2025/10/image-4.png 600w, https://chris-young.net/content/images/2025/10/image-4.png 920w" sizes="(min-width: 720px) 720px"></figure>]]></content:encoded></item><item><title><![CDATA[57/100 Print Environment Variables]]></title><description><![CDATA[<p>The Nautilus DevOps team is working on to setup some pre-requisites for an application that will send the greetings to different users. There is a sample deployment, that needs to be tested. Below is a scenario which needs to be configured on Kubernetes cluster.</p><ol><li>Create a <code>pod</code> named <code>print-envars-greeting</code>.</li><li>Configure</li></ol>]]></description><link>https://chris-young.net/57-100/</link><guid isPermaLink="false">68f9f052651c3702e6fac794</guid><dc:creator><![CDATA[Chris Young]]></dc:creator><pubDate>Thu, 23 Oct 2025 09:19:53 GMT</pubDate><content:encoded><![CDATA[<p>The Nautilus DevOps team is working on to setup some pre-requisites for an application that will send the greetings to different users. There is a sample deployment, that needs to be tested. Below is a scenario which needs to be configured on Kubernetes cluster.</p><ol><li>Create a <code>pod</code> named <code>print-envars-greeting</code>.</li><li>Configure spec as, the container name should be <code>print-env-container</code> and use <code>bash</code> image.</li><li>Create three environment variables:</li></ol><p>a. <code>GREETING</code> and its value should be <code>Welcome to</code></p><p>b. <code>COMPANY</code> and its value should be <code>DevOps</code></p><p>c. 
<code>GROUP</code> and its value should be <code>Industries</code></p><ol><li>Use command <code>[&quot;/bin/sh&quot;, &quot;-c&quot;, &apos;echo &quot;$(GREETING) $(COMPANY) $(GROUP)&quot;&apos;]</code> (please use this exact command), also set its <code>restartPolicy</code> policy to <code>Never</code> to avoid crash loop back.</li><li>You can check the output using <code>kubectl logs -f print-envars-greeting</code> command.</li></ol><p>First, create the yaml file for our pod which will be named <code>print-envars-greeting.yaml</code></p><pre><code>apiVersion: v1
kind: Pod
metadata:
  name: print-envars-greeting
spec:
  containers:
    - name: print-env-container
      image: bash
      command: [&quot;/bin/sh&quot;, &quot;-c&quot;, &apos;echo &quot;$(GREETING) $(COMPANY) $(GROUP)&quot;&apos;]
      env:
        - name: GREETING
          value: &quot;Welcome to&quot;
        - name: COMPANY
          value: &quot;DevOps&quot;
        - name: GROUP
          value: &quot;Industries&quot;
  restartPolicy: Never
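
# Note: the $(VAR) references in "command" are expanded by Kubernetes
# itself before the container starts; a literal $(VAR) can be kept by
# escaping it as $$(VAR).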
</code></pre><p>Apply the yaml file and then check the status of the pod:</p><pre><code>thor@jumphost ~$ kubectl apply -f print-envars-greeting.yaml
pod/print-envars-greeting created

thor@jumphost ~$ kubectl get pods
NAME                    READY   STATUS      RESTARTS   AGE
print-envars-greeting   0/1     Completed   0          76s

</code></pre><p>Finally, view the log output:</p><pre><code>thor@jumphost ~$ kubectl logs -f print-envars-greeting
Welcome to DevOps Industries</code></pre><h3 id="explanation">Explanation</h3><p><strong>restartPolicy: Never</strong> &#xA0;- This ensures the Pod won&#x2019;t restart once it completes the echo command.</p><p>The <code>bash</code> image runs the command and prints the concatenated environment variables.</p><p>The <code>echo &quot;$(GREETING) $(COMPANY) $(GROUP)&quot;</code> syntax is expanded by Kubernetes itself (not the shell) before the container starts, combining the three values.</p>]]></content:encoded></item><item><title><![CDATA[56/100 Deploy Nginx Web Server on Kubernetes Cluster]]></title><description><![CDATA[<p>Some of the Nautilus team developers are developing a static website and they want to deploy it on Kubernetes cluster. They want it to be highly available and scalable. Therefore, based on the requirements, the DevOps team has decided to create a deployment for it with multiple replicas. Below you</p>]]></description><link>https://chris-young.net/56-100-deploy-nginx-web-server-on-kubernetes-cluster/</link><guid isPermaLink="false">68f9ee5e651c3702e6fac771</guid><dc:creator><![CDATA[Chris Young]]></dc:creator><pubDate>Thu, 23 Oct 2025 09:06:32 GMT</pubDate><content:encoded><![CDATA[<p>Some of the Nautilus team developers are developing a static website and they want to deploy it on Kubernetes cluster. They want it to be highly available and scalable. Therefore, based on the requirements, the DevOps team has decided to create a deployment for it with multiple replicas. Below you can find more details about it:</p><ol><li>Create a deployment using <code>nginx</code> image with <code>latest</code> tag only and remember to mention the tag i.e <code>nginx:latest</code>. Name it as <code>nginx-deployment</code>. The container should be named as <code>nginx-container</code>, also make sure replica counts are <code>3</code>.</li><li>Create a <code>NodePort</code> type service named <code>nginx-service</code>. 
The nodePort should be <code>30011</code>.</li></ol><p>First, create the yaml file which will be called <code>nginx-deployment</code></p><pre><code>apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx-container
          image: nginx:latest
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30011
</code></pre><p>Apply the configuration created above:</p><pre><code>thor@jumphost ~$ kubectl apply -f nginx-deployment.yaml
deployment.apps/nginx-deployment created
service/nginx-service created

# Check the deployment has been successful and pods are available:
thor@jumphost ~$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           31s
thor@jumphost ~$ kubectl get pods -l app=nginx
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-5b58668cfc-2zfjh   1/1     Running   0          86s
nginx-deployment-5b58668cfc-cwr5z   1/1     Running   0          87s
nginx-deployment-5b58668cfc-xdhld   1/1     Running   0          86s
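
# (Optional) scaling later is a one-liner if more replicas are needed:
# kubectl scale deployment nginx-deployment --replicas=5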
</code></pre><p>Click the App button in the UI to verify via the URL:</p><figure class="kg-card kg-image-card"><img src="https://chris-young.net/content/images/2025/10/image-3.png" class="kg-image" alt loading="lazy" width="881" height="300" srcset="https://chris-young.net/content/images/size/w600/2025/10/image-3.png 600w, https://chris-young.net/content/images/2025/10/image-3.png 881w" sizes="(min-width: 720px) 720px"></figure>]]></content:encoded></item><item><title><![CDATA[55/100 Kubernetes Sidecar Containers]]></title><description><![CDATA[<p>We have a web server container running the nginx image. The access and error logs generated by the web server are not critical enough to be placed on a persistent volume. However, Nautilus developers need access to the last 24 hours of logs so that they can trace issues and</p>]]></description><link>https://chris-young.net/55-100-kubernetes-sidecar-containers/</link><guid isPermaLink="false">68f9eabb651c3702e6fac744</guid><dc:creator><![CDATA[Chris Young]]></dc:creator><pubDate>Thu, 23 Oct 2025 08:55:12 GMT</pubDate><content:encoded><![CDATA[<p>We have a web server container running the nginx image. The access and error logs generated by the web server are not critical enough to be placed on a persistent volume. However, Nautilus developers need access to the last 24 hours of logs so that they can trace issues and bugs. Therefore, we need to ship the access and error logs for the web server to a log-aggregation service. Following the separation of concerns principle, we implement the Sidecar pattern by deploying a second container that ships the error and access logs from nginx. Nginx does one thing, and it does it well&#x2014;serving web pages. The second container also specializes in its task&#x2014;shipping logs. 
Since containers are running on the same Pod, we can use a shared emptyDir volume to read and write logs.</p><ol><li>Create a pod named <code>webserver</code>.</li><li>Create an <code>emptyDir</code> volume <code>shared-logs</code>.</li><li>Create two containers from <code>nginx</code> and <code>ubuntu</code> images with <code>latest</code> tag only and remember to mention tag i.e <code>nginx:latest</code>, nginx container name should be <code>nginx-container</code> and ubuntu container name should be <code>sidecar-container</code> on webserver pod.</li><li>Add command on sidecar-container <code>&quot;sh&quot;,&quot;-c&quot;,&quot;while true; do cat /var/log/nginx/access.log /var/log/nginx/error.log; sleep 30; done&quot;</code></li><li>Mount the volume <code>shared-logs</code> on both containers at location <code>/var/log/nginx</code>, all containers should be up and running.</li></ol><p>Create the pod yaml file, named <code>webserver.yaml</code>:</p><pre><code>apiVersion: v1
kind: Pod
metadata:
  name: webserver
spec:
  containers:
    - name: nginx-container
      image: nginx:latest
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx

    - name: sidecar-container
      image: ubuntu:latest
      command: [&quot;sh&quot;, &quot;-c&quot;, &quot;while true; do cat /var/log/nginx/access.log /var/log/nginx/error.log; sleep 30; done&quot;]
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx

  volumes:
    - name: shared-logs
      emptyDir: {}
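
# Note: emptyDir storage lives only as long as the Pod; the logs are
# lost when the Pod is deleted, which is acceptable here since they
# are shipped rather than persisted.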
</code></pre><p>Apply the file and then verify:</p><pre><code>thor@jumphost ~$ kubectl apply -f webserver.yaml
pod/webserver created
thor@jumphost ~$ kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
webserver   2/2     Running   0          16s</code></pre><p>Verify log sharing - checking the logs from nginx:</p><pre><code>thor@jumphost ~$ kubectl exec -it webserver -c nginx-container -- bash
root@webserver:/# ls /var/log/nginx
access.log  error.log

exit</code></pre><p>Finally, check what the sidecar sees. This should be the log output from <code>/var/log/nginx/access.log</code> and <code>/var/log/nginx/error.log</code>, which will be printed continuously every 30 seconds.</p><pre><code>thor@jumphost ~$ kubectl logs -f webserver -c sidecar-container
2025/10/23 08:48:18 [notice] 1#1: using the &quot;epoll&quot; event method
2025/10/23 08:48:18 [notice] 1#1: nginx/1.29.2
2025/10/23 08:48:18 [notice] 1#1: built by gcc 14.2.0 (Debian 14.2.0-19) 
2025/10/23 08:48:18 [notice] 1#1: OS: Linux 5.4.0-1106-gcp
2025/10/23 08:48:18 [notice] 1#1: getrlimit(RLIMIT_NOFILE): 1048576:1048576
2025/10/23 08:48:18 [notice] 1#1: start worker processes
2025/10/23 08:48:18 [notice] 1#1: start worker process 77
-- Snipped -- </code></pre>]]></content:encoded></item><item><title><![CDATA[54/100 Kubernetes Shared Volumes]]></title><description><![CDATA[<p>We are working on an application that will be deployed on multiple containers within a pod on Kubernetes cluster. There is a requirement to share a volume among the containers to save some temporary data. The Nautilus DevOps team is developing a similar template to replicate the scenario.</p><ol><li>Create a</li></ol>]]></description><link>https://chris-young.net/54-100-kubernetes-shared-volumes/</link><guid isPermaLink="false">68f9e51e651c3702e6fac6ef</guid><dc:creator><![CDATA[Chris Young]]></dc:creator><pubDate>Thu, 23 Oct 2025 08:42:30 GMT</pubDate><content:encoded><![CDATA[<p>We are working on an application that will be deployed on multiple containers within a pod on Kubernetes cluster. There is a requirement to share a volume among the containers to save some temporary data. The Nautilus DevOps team is developing a similar template to replicate the scenario.</p><ol><li>Create a pod named <code>volume-share-devops</code>.<br></li><li>For the first container, use image <code>ubuntu</code> with <code>latest</code> tag only and remember to mention the tag i.e <code>ubuntu:latest</code>, container should be named as <code>volume-container-devops-1</code>, and run a <code>sleep</code> command for it so that it remains in running state. Volume <code>volume-share</code> should be mounted at path <code>/tmp/official</code>.<br></li><li>For the second container, use image <code>ubuntu</code> with the <code>latest</code> tag only and remember to mention the tag i.e <code>ubuntu:latest</code>, container should be named as <code>volume-container-devops-2</code>, and again run a <code>sleep</code> command for it so that it remains in running state. 
Volume <code>volume-share</code> should be mounted at path <code>/tmp/games</code>.<br></li><li>Volume name should be <code>volume-share</code> of type <code>emptyDir</code>.<br></li><li>After creating the pod, exec into the first container i.e <code>volume-container-devops-1</code>, and just for testing create a file <code>official.txt</code> with any content under the mounted path of first container i.e <code>/tmp/official</code>.<br></li><li>The file <code>official.txt</code> should be present under the mounted path <code>/tmp/games</code> on the second container <code>volume-container-devops-2</code> as well, since they are using a shared volume.</li></ol><p>First create the <code>volume-share-devops.yaml</code> file:</p><pre><code class="language-yaml">apiVersion: v1
kind: Pod
metadata:
  name: volume-share-devops
spec:
  containers:
    - name: volume-container-devops-1
      image: ubuntu:latest
      command: [&quot;sleep&quot;, &quot;3600&quot;]
      volumeMounts:
        - name: volume-share
          mountPath: /tmp/official

    - name: volume-container-devops-2
      image: ubuntu:latest
      command: [&quot;sleep&quot;, &quot;3600&quot;]
      volumeMounts:
        - name: volume-share
          mountPath: /tmp/games

  volumes:
    - name: volume-share
      emptyDir: {}
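
# Note: sleep 3600 keeps each container alive for one hour; something
# like sleep infinity could be used if the pod must persist longer.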
</code></pre><p>Create the pod with the following command:</p><pre><code class="language-bash">thor@jumphost ~$ kubectl apply -f volume-share-devops.yaml
pod/volume-share-devops created</code></pre><p>Next verify the pod is running:</p><pre><code>thor@jumphost ~$ kubectl get pods
NAME                  READY   STATUS    RESTARTS   AGE
volume-share-devops   2/2     Running   0          55s</code></pre><p>Create the file inside the first container by exec&apos;ing into the container:</p><pre><code>thor@jumphost ~$ kubectl exec -it volume-share-devops -c volume-container-devops-1 -- bash

# This will place you inside the container so you can create the file:
root@volume-share-devops:/# 

root@volume-share-devops:/# echo &quot;This is shared data&quot; &gt; /tmp/official/official.txt
ls /tmp/official
official.txt

exit</code></pre><p>Verify from the second container that we can see the official.txt file:</p><pre><code>thor@jumphost ~$ kubectl exec -it volume-share-devops -c volume-container-devops-2 -- bash
root@volume-share-devops:/# ls /tmp/games
official.txt
root@volume-share-devops:/# cat /tmp/games/official.txt
This is shared data

</code></pre>]]></content:encoded></item><item><title><![CDATA[53/100 Resolve VolumeMounts Issue in Kubernetes]]></title><description><![CDATA[<p>We encountered an issue with our Nginx and PHP-FPM setup on the Kubernetes cluster this morning, which halted its functionality. Investigate and rectify the issue:<br><br>The pod name is <code>nginx-phpfpm</code> and configmap name is <code>nginx-config</code>. Identify and fix the problem. Once resolved, copy <code>/home/thor/index.php</code> file from the</p>]]></description><link>https://chris-young.net/53-100/</link><guid isPermaLink="false">68f1055f651c3702e6fac692</guid><dc:creator><![CDATA[Chris Young]]></dc:creator><pubDate>Thu, 16 Oct 2025 15:58:01 GMT</pubDate><content:encoded><![CDATA[<p>We encountered an issue with our Nginx and PHP-FPM setup on the Kubernetes cluster this morning, which halted its functionality. Investigate and rectify the issue:<br><br>The pod name is <code>nginx-phpfpm</code> and configmap name is <code>nginx-config</code>. Identify and fix the problem. Once resolved, copy <code>/home/thor/index.php</code> file from the <code>jump host</code> to the <code>nginx-container</code> within the nginx document root. After this, you should be able to access the website using <code>Website</code> button on the top bar.</p><pre><code># Check existing running pods:

thor@jumphost ~$ kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
nginx-phpfpm   2/2     Running   0          2m28s

# Check the shared volume path in the existing config map
thor@jumphost ~$ kubectl get configmap
NAME               DATA   AGE
kube-root-ca.crt   1      5m2s
nginx-config       1      2m44s  &lt;-- This file

thor@jumphost ~$ kubectl describe configmap nginx-config
Name:         nginx-config
Namespace:    default
Labels:       &lt;none&gt;
Annotations:  &lt;none&gt;

Data
====
nginx.conf:
----
events {
}
http {
  server {
    listen 8099 default_server;
    listen [::]:8099 default_server;

    # Set nginx to serve files from the shared volume!
    root /var/www/html;
    index  index.html index.htm index.php;
    server_name _;
    location / {
      try_files $uri $uri/ =404;
    }
    location ~ \.php$ {
      include fastcgi_params;
      fastcgi_param REQUEST_METHOD $request_method;
      fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
      fastcgi_pass 127.0.0.1:9000;
    }
  }
}

# Get the config yaml file from the running pod, save to tmp and review:
thor@jumphost ~$ kubectl get pod nginx-phpfpm -o yaml &gt; /tmp/nginx.yaml
thor@jumphost ~$ cat /tmp/nginx.yaml 
--Snipped --
    volumeMounts:
    - mountPath: /usr/share/nginx/html  &lt;-- incorrect; should be /var/www/html

# Make the relevant change, save the file, then replace the pod (forcing recreation):
thor@jumphost ~$ kubectl replace -f /tmp/nginx.yaml --force
pod &quot;nginx-phpfpm&quot; deleted
pod/nginx-phpfpm replaced

# Check the pod is running again:
thor@jumphost ~$ kubectl get pods
NAME           READY   STATUS    RESTARTS   AGE
nginx-phpfpm   2/2     Running   0          25s

# Copy the index.php from jump host to the nginx-container
thor@jumphost ~$ kubectl cp /home/thor/index.php nginx-phpfpm:/var/www/html -c nginx-container
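
# (Optional check) confirm the file landed in the document root:
# kubectl exec nginx-phpfpm -c nginx-container -- ls /var/www/html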
</code></pre>]]></content:encoded></item><item><title><![CDATA[52/100 Revert Deployment to previous versions in Kubernetes]]></title><description><![CDATA[<p>Earlier today, the Nautilus DevOps team deployed a new release for an application. However, a customer has reported a bug related to this recent release. Consequently, the team aims to revert to the previous version.</p><p>There exists a deployment named <code>nginx-deployment</code>; initiate a rollback to the previous revision.</p><pre><code>thor@jumphost</code></pre>]]></description><link>https://chris-young.net/52-100-revert-deployment-to-previous-versions-in-kubernetes/</link><guid isPermaLink="false">68f10229651c3702e6fac681</guid><dc:creator><![CDATA[Chris Young]]></dc:creator><pubDate>Thu, 16 Oct 2025 14:44:02 GMT</pubDate><content:encoded><![CDATA[<p>Earlier today, the Nautilus DevOps team deployed a new release for an application. However, a customer has reported a bug related to this recent release. Consequently, the team aims to revert to the previous version.</p><p>There exists a deployment named <code>nginx-deployment</code>; initiate a rollback to the previous revision.</p><pre><code>thor@jumphost ~$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           107s

thor@jumphost ~$ kubectl rollout history deployment nginx-deployment
deployment.apps/nginx-deployment 
REVISION  CHANGE-CAUSE
1         &lt;none&gt;
2         kubectl set image deployment nginx-deployment nginx-container=nginx:stable --kubeconfig=/root/.kube/config --record=true
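# Revision 2 is the current (buggy) release; revision 1 is the target.
# Without --to-revision, undo falls back to the previous revision anyway,
# but naming it makes the target explicit.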

thor@jumphost ~$ kubectl rollout undo deployment nginx-deployment --to-revision=1
deployment.apps/nginx-deployment rolled back</code></pre>]]></content:encoded></item><item><title><![CDATA[51/100 Execute Rolling Updates in Kubernetes]]></title><description><![CDATA[<p>An application currently running on the Kubernetes cluster employs the nginx web server. The Nautilus application development team has introduced some recent changes that need deployment. They&apos;ve crafted an image <code>nginx:1.19</code> with the latest updates.</p><p>Execute a rolling update for this application, integrating the <code>nginx:1.</code></p>]]></description><link>https://chris-young.net/51-100-execute-rolling-updates-in-kubernetes/</link><guid isPermaLink="false">68f0f17e651c3702e6fac658</guid><dc:creator><![CDATA[Chris Young]]></dc:creator><pubDate>Thu, 16 Oct 2025 14:32:07 GMT</pubDate><content:encoded><![CDATA[<p>An application currently running on the Kubernetes cluster employs the nginx web server. The Nautilus application development team has introduced some recent changes that need deployment. They&apos;ve crafted an image <code>nginx:1.19</code> with the latest updates.</p><p>Execute a rolling update for this application, integrating the <code>nginx:1.18</code> image. The deployment is named <code>nginx-deployment</code>.</p><p>Ensure all pods are operational post-update.</p><pre><code>thor@jumphost ~$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           16s
thor@jumphost ~$ kubectl get pods
NAME                               READY   STATUS    RESTARTS   AGE
nginx-deployment-989f57c54-7r2gs   1/1     Running   0          26s
nginx-deployment-989f57c54-c5jmq   1/1     Running   0          26s
nginx-deployment-989f57c54-lhg7b   1/1     Running   0          26s
thor@jumphost ~$ kubectl set image deployment nginx-deployment nginx-container=nginx:1.18
deployment.apps/nginx-deployment image updated
thor@jumphost ~$ kubectl get pods
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-58cf54c7f6-lf6ft   1/1     Running   0          22s
nginx-deployment-58cf54c7f6-r94hk   1/1     Running   0          15s
nginx-deployment-58cf54c7f6-tkd8x   1/1     Running   0          13s
thor@jumphost ~$ kubectl get deployments
NAME               READY   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   3/3     3            3           103s
thor@jumphost ~$ kubectl rollout status deployment nginx-deployment
deployment &quot;nginx-deployment&quot; successfully rolled out
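
# The replaced ReplicaSet is kept (scaled to 0), so this update could
# later be reverted with:
# kubectl rollout undo deployment nginx-deployment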
</code></pre>]]></content:encoded></item><item><title><![CDATA[47/100 Docker Python App]]></title><description><![CDATA[<p>A python app needed to be Dockerized, and then it needs to be deployed on <code>App Server 2</code>. We have already copied a <code>requirements.txt</code> file (having the app dependencies) under <code>/python_app/src/</code> directory on <code>App Server 2</code>. Further complete this task as per details mentioned below:<br>Create a</p>]]></description><link>https://chris-young.net/47-100-docker-python-app/</link><guid isPermaLink="false">68ee73a9651c3702e6fac5ef</guid><dc:creator><![CDATA[Chris Young]]></dc:creator><pubDate>Tue, 14 Oct 2025 16:31:21 GMT</pubDate><content:encoded><![CDATA[<p>A python app needed to be Dockerized, and then it needs to be deployed on <code>App Server 2</code>. We have already copied a <code>requirements.txt</code> file (having the app dependencies) under <code>/python_app/src/</code> directory on <code>App Server 2</code>. Further complete this task as per details mentioned below:<br>Create a <code>Dockerfile</code> under <code>/python_app</code> directory:</p><ul><li>Use any <code>python</code> image as the base image.</li><li>Install the dependencies using <code>requirements.txt</code> file.</li><li>Expose the port <code>5004</code>.</li><li>Run the <code>server.py</code> script using <code>CMD</code>.</li></ul><p>Build an image named <code>nautilus/python-app</code> using this Dockerfile. Once image is built, create a container named <code>pythonapp_nautilus</code>. Map port <code>5004</code> of the container to the host port <code>8096</code>. Once deployed, you can test the app using <code>curl</code> command on <code>App Server 2</code>.</p><pre><code>ssh as normal
cd /python_app
ls
src
cd src
[steve@stapp02 src]$ ls
requirements.txt  server.py

cd ../ &lt;- so we are back in the python_app directory
sudo vi Dockerfile
# Use any Python base image
FROM python:3.10-slim
# Set working directory inside container
WORKDIR /app
# Copy requirements.txt first (for caching)
COPY src/requirements.txt .
# Install dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy the rest of the app files
COPY src/ .
# Expose the application port
EXPOSE 5004
# Command to run the Python server
CMD [&quot;python&quot;, &quot;server.py&quot;]
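# Note: EXPOSE is documentation/metadata only; the actual host mapping
# happens at run time via -p 8096:5004.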
esc
:wq!

# Build the docker image
docker build -t nautilus/python-app .
[+] Building 192.4s (10/10) FINISHED docker:default

# Run the container
docker run -d --name pythonapp_nautilus -p 8096:5004 nautilus/python-app
792114a7169c49779c19dbde997dadf87ac3e12adcc80822427e09936e25b17b

# Verify that its running
docker ps
CONTAINER ID   IMAGE                 COMMAND              CREATED          STATUS          PORTS                    NAMES
792114a7169c   nautilus/python-app   &quot;python server.py&quot;   53 seconds ago   Up 50 seconds   0.0.0.0:8096-&gt;5004/tcp   pythonapp_nautilus

curl stapp02:8096
Welcome to xFusionCorp Industries!</code></pre>]]></content:encoded></item></channel></rss>