Increase Pod Memory on the Fly in Rancher
- Select Deployments from the menu sidebar
- Find the correct deployment and click the three dots
- Edit Config
- Select Resources from the deployment sidebar
- Modify the memory reservation and limit here
- Save
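The same change can also be made from the kubectl terminal (see the terminal notes further down); a rough equivalent, with the deployment name, namespace, and values as placeholders:
# Set the memory reservation (request) and limit on the deployment's containers
kubectl set resources deployment/<deployment-name> -n <namespace> \
  --requests=memory=512Mi --limits=memory=1Gi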
When making changes to ConfigMaps or other resources, it's tempting to restart all the pods for a particular service (select them all, delete them, and let them come back up). This may work in the majority of cases, but if something goes wrong, it's hard to tell whether the recent change caused it or it's unrelated.
Recommendation:
The point here isn't the specific process, but rather having checkpoints when changes are made. Ideally, make only one change at a time and test it, so that when something breaks it's evident which change caused it.
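If a restart really is needed after a ConfigMap change, a rolling restart from the kubectl terminal is a gentler option than deleting every pod at once (deployment name and namespace are placeholders):
# Replace pods gradually instead of deleting them all at the same time
kubectl rollout restart deployment/<deployment-name> -n <namespace>
# Watch the rollout and confirm the new pods come up healthy before the next change
kubectl rollout status deployment/<deployment-name> -n <namespace>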
To access a shell in a particular pod:
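One way to do this from the kubectl terminal described below (pod name, namespace, and the shell binary depend on your setup):
kubectl exec -it <pod-name> -n <namespace> -- /bin/sh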
To access a terminal to run kubectl commands in the environment:
Click the >_ symbol to open a terminal. Example command:
kubectl describe <resource-type> <resource-name> -n <namespace>
If you have a Kubernetes pod being terminated due to OOMKilled (Out of Memory Killed), you can monitor the pod's memory usage while reproducing the action that uses too much memory.
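One rough way to do that from the kubectl terminal, assuming metrics-server is installed and the watch utility is available in the shell (pod name and namespace are placeholders):
# Refresh the pod's CPU/memory usage every few seconds while reproducing the memory-heavy action
watch -n 5 kubectl top pod <pod-name> -n <namespace>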
If a container in Kubernetes unexpectedly disconnects with no error in the console log, you can check the pod YAML for the last state of the pod.
Go to your Pod -> (three dots) Edit YAML -> scroll to status (at the bottom) -> expand.
From here you can see the last state, such as:
reason: OOMKilled
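In the expanded status, the terminated state sits under containerStatuses, roughly like this (a sketch; values will differ):
status:
  containerStatuses:
    - lastState:
        terminated:
          reason: OOMKilled
          exitCode: 137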
Spring profiles can be used to toggle behaviors in a Spring app. If your app is running in a container in Rancher (Kubernetes), you may need to see which Spring profiles are being applied. To check which Spring profiles a service is running with in Rancher:
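Two rough options from the kubectl terminal, assuming the profiles are set via the SPRING_PROFILES_ACTIVE environment variable or logged at startup, and that the image includes printenv (pod name and namespace are placeholders):
# Option 1: check the environment variable, if profiles are set that way
kubectl exec <pod-name> -n <namespace> -- printenv SPRING_PROFILES_ACTIVE
# Option 2: check the startup log line Spring Boot prints about active profiles
kubectl logs <pod-name> -n <namespace> | grep -i "profiles are active"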