Adding SGP-related extensible configuration to the model engine helm chart #762
arniechops merged 8 commits into main
Conversation
Additional Comments (1)
Prompt To Fix With AI
This is a comment left during a code review.
Path: charts/model-engine/templates/proportional_a100_autoscaler_deployment.yaml
Line: 30
Comment:
**Hardcoded image breaks airgapped deployments**
This PR parameterizes ubuntu, busybox, and curl images via `utilityImages` for airgapped/on-prem environments, but the `cluster-proportional-autoscaler` image from `registry.k8s.io` remains hardcoded here and in the a10/t4 autoscaler deployments. In an airgapped environment, `registry.k8s.io` would be unreachable just like the other registries that were parameterized.
Consider adding a `utilityImages.clusterProportionalAutoscaler` entry (or similar) to `values.yaml` and referencing it here:
```suggestion
- image: {{ .Values.utilityImages.clusterProportionalAutoscaler.repository }}:{{ .Values.utilityImages.clusterProportionalAutoscaler.tag }}
```
How can I resolve this? If you propose a fix, please make it concise.
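One possible `values.yaml` shape for the suggested entry (the key names mirror the suggestion above and are assumptions, as is the tag — pin whatever version is currently hardcoded):

```yaml
utilityImages:
  clusterProportionalAutoscaler:
    # Point at an internal mirror in airgapped environments
    repository: registry.k8s.io/cpa/cluster-proportional-autoscaler
    tag: "1.8.5"  # example tag; use the version the templates hardcode today
```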
```
{{- with $.Values.automountServiceAccountToken }}
automountServiceAccountToken: {{ . }}
{{- end }}
```
Prompt To Fix With AI
This is a comment left during a code review.
Path: charts/model-engine/templates/service_account.yaml
Line: 24-26
Comment:
**`with` skips block when value is `false`**
Helm's `with` treats `false` as falsy, so when `automountServiceAccountToken` is set to `false` (the exact scenario where you want to disable automount on the ServiceAccount), this block will be skipped entirely and `automountServiceAccountToken` won't be rendered into the YAML. Only when the value is `true` will it appear — which is the opposite of the critical behavior.
In contrast, the pod-level specs in other templates correctly use `{{ .Values.automountServiceAccountToken }}` directly without a `with` gate.
```suggestion
automountServiceAccountToken: {{ $.Values.automountServiceAccountToken }}
```
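For illustration, a minimal standalone sketch (hypothetical template, not from this chart) contrasting the two forms:

```
# With a `with` gate, nothing is rendered when the value is false,
# because Helm's `with` skips falsy values entirely:
{{- with .Values.automountServiceAccountToken }}
automountServiceAccountToken: {{ . }}
{{- end }}

# Referencing the value directly always renders it, true or false:
automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}
```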
How can I resolve this? If you propose a fix, please make it concise.

```
    {{- include "modelEngine.labels" . | nindent 4 }}
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
```
Prompt To Fix With AI
This is a comment left during a code review.
Path: charts/model-engine/templates/database_init_job.yaml
Line: 10
Comment:
**Hook weight collision with migration job**
Both `database_init_job.yaml` and `database_migration_job.yaml` use `"helm.sh/hook-weight": "-1"` on `pre-install,pre-upgrade`. This means Helm does not guarantee execution order between them. Since `init_database.py` creates schemas (`create schema if not exists`) and tables (`Base.metadata.create_all`), and the migration job applies Alembic migrations that depend on those schemas existing, the migration could run first and fail.
Consider setting this job's hook weight to `"-2"` (or lower) so it is guaranteed to run before the migration job at weight `"-1"`.
```suggestion
"helm.sh/hook-weight": "-2"
```
How can I resolve this? If you propose a fix, please make it concise.
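The intended ordering can be sketched as annotations on the two jobs (a hedged sketch; all other metadata omitted):

```yaml
# database_init_job.yaml: lower weight, so Helm runs it first
metadata:
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-2"
---
# database_migration_job.yaml: runs after the init job completes
metadata:
  annotations:
    "helm.sh/hook": pre-install,pre-upgrade
    "helm.sh/hook-weight": "-1"
```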
Additional Comments (2)
Prompt To Fix With AI
This is a comment left during a code review.
Path: charts/model-engine/templates/populate_fine_tuning_repository_job.yaml
Line: 44
Comment:
**Missing `automountServiceAccountToken` setting**
All other pod specs in this PR (gateway, builder, cacher, celery autoscaler, database migration, database init, spellbook init, and the proportional autoscalers) explicitly set `automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}`. This job and `restart_keda_operator.yaml` are the only ones missing it.
When `automountServiceAccountToken` is `false`, the `modelEngine.volumes` helper will inject the projected `token-volume`, but without disabling the default automount on the pod spec, Kubernetes will also mount its own default token — resulting in a conflict at `/var/run/secrets/kubernetes.io/serviceaccount`.
```suggestion
automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}
serviceAccountName: {{ include "modelEngine.fullname" . }}
```
How can I resolve this? If you propose a fix, please make it concise.
Prompt To Fix With AI
This is a comment left during a code review.
Path: charts/model-engine/templates/restart_keda_operator.yaml
Line: 43
Comment:
**Missing `automountServiceAccountToken` setting**
Same inconsistency as `populate_fine_tuning_repository_job.yaml` — this job is missing `automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}` which every other pod spec in this PR has. This will cause issues when `automountServiceAccountToken` is set to `false`.
```suggestion
automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}
serviceAccountName: {{ include "modelEngine.fullname" . }}
```
How can I resolve this? If you propose a fix, please make it concise.
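For context, a projected service-account token volume of the kind the `modelEngine.tokenVolume` helper is described as adding generally looks like this (a generic Kubernetes sketch; the volume name, expiry, and structure are assumptions, not the chart's actual helper output):

```yaml
volumes:
  - name: token-volume
    projected:
      sources:
        - serviceAccountToken:
            path: token
            expirationSeconds: 3600
containers:
  - name: main
    volumeMounts:
      - name: token-volume
        mountPath: /var/run/secrets/kubernetes.io/serviceaccount
        readOnly: true
```

With `automountServiceAccountToken: false` on the pod spec, this manual mount is the only token at that path; without it, Kubernetes also mounts its own default token there, which is the conflict described above.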
Pull Request Summary
What is this PR changing? Why is this change being made? Any caveats you'd like to highlight? Link any relevant documents, links, or screenshots here if applicable.
Test Plan and Usage Guide
How did you validate that your PR works correctly? How do you run or demo the code? Provide enough detail so a reviewer can reasonably reproduce the testing procedure. Paste example command line invocations if applicable.
Greptile Summary
This PR adds a broad set of SGP/on-prem extensibility hooks to the model-engine Helm chart, bumping the chart version to 0.2.0. The changes make the chart deployable in airgapped and GCP-based environments without forking.

Key changes:

- The `automountServiceAccountToken` value gates a new `modelEngine.tokenVolume` / `tokenVolumeMount` helper that manually projects the SA token when auto-mounting is disabled. Applied to most pods — but `populate_fine_tuning_repository_job.yaml` and `restart_keda_operator.yaml` are missing the pod-level `automountServiceAccountToken` field (see comments).
- `extraEnvVars`, `extraVolumes`, `extraVolumeMounts` (for main service containers) and `extraPodEnvFrom` (for inference pods via envFrom) enable wrapper charts to inject credentials and config without patching core templates.
- `utilityImages.busybox`, `.curl`, `.ubuntu` replace hardcoded `public.ecr.aws` / `curlimages` references, enabling airgapped deployments.
- Hook jobs append `.Release.Revision` to their names and use the `before-hook-creation,hook-succeeded` delete policy, preventing stale job conflicts on re-deploy. Hook weights are correctly ordered (`database_init_job` at `-2`, `database_migration_job` at `-1`).
- `kindIs "bool"` guards in `service_config_map.yaml` prevent boolean values from being quoted as strings.
- `redis.authSecretName` / `redis.authSecretKey` fields allow the auth token to be sourced from a Kubernetes Secret rather than a plain value.

Minor fix needed: add `automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}` to the pod specs of `populate_fine_tuning_repository_job.yaml` and `restart_keda_operator.yaml` for consistency with other modified pods.

Confidence Score: 4/5

- Safe to merge after adding the missing pod-level `automountServiceAccountToken` field.
- The missing `automountServiceAccountToken` on `populate_fine_tuning_repository_job.yaml` and `restart_keda_operator.yaml` is a genuine inconsistency affecting configuration clarity, but not a runtime failure since the ServiceAccount-level setting still governs auto-mounting. All other changes are clean and backward-compatible.
- Resolved with `automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}` added to the pod spec.

Flowchart
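The `redis.authSecretName` / `redis.authSecretKey` pair typically renders as a `secretKeyRef`-sourced environment variable; a hedged sketch (the `REDIS_AUTH_TOKEN` name comes from the summary's flowchart, the exact template shape is an assumption):

```yaml
- name: REDIS_AUTH_TOKEN
  valueFrom:
    secretKeyRef:
      name: {{ .Values.redis.authSecretName }}
      key: {{ .Values.redis.authSecretKey }}
```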
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
    A[values.yaml\nautomountServiceAccountToken] -->|false| B[modelEngine.tokenVolume\nprojected SA token]
    A -->|true| C[K8s auto-mounts token\nno extra volume needed]
    B --> D[modelEngine.volumes\nappends token-volume]
    B --> E[modelEngine.volumeMounts\nappends token-volume mount]
    D --> F[gateway / builder / cacher\ndeployments]
    D --> G[database_init_job\ndatabase_migration_job]
    D --> H[spellbook_init_job]
    D --> I[celery_autoscaler\nstateful set]
    D --> J[populate_fine_tuning_job\nrestart_keda_job\n⚠️ missing pod-level field]
    K[values.yaml\nredis.authSecretName] -->|set| L[REDIS_AUTH_TOKEN\nfrom secretKeyRef]
    K -->|not set| M[redis.auth plain value\nfallback]
    N[values.yaml\nextraEnvVars] --> F
    O[values.yaml\nextraVolumes / extraVolumeMounts] --> F
    P[values.yaml\nextraPodEnvFrom] --> Q[inference pods\nenvFrom in service templates]
    R[values.yaml\nutilityImages] --> S[busybox / curl / ubuntu\nconfigurable for airgap]
    T[values.yaml\ngpuTolerations] --> U[balloon + proportional\nautoscaler + service templates]
```

Comments Outside Diff (2)
charts/model-engine/templates/populate_fine_tuning_repository_job.yaml, line 44-45 (link)

Missing `automountServiceAccountToken` on pod spec.

Every other job/deployment modified in this PR (`database_migration_job`, `database_init_job`, `spellbook_init_job`, `gateway_deployment`, `endpoint_builder_deployment`, `cacher_deployment`, `celery_autoscaler_stateful_set`) explicitly sets `automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}` on the pod spec. This job was missed.

Without the pod-level field, when `automountServiceAccountToken: false` in values, the ServiceAccount-level setting prevents auto-mounting — but the projected token volume/mount ARE added via `modelEngine.volumes` / `modelEngine.volumeMounts`, creating an inconsistency: the token is mounted manually but there's no pod-level declaration that auto-mounting is disabled.

charts/model-engine/templates/restart_keda_operator.yaml, line 43-44 (link)

Missing `automountServiceAccountToken` on pod spec.

Same issue as `populate_fine_tuning_repository_job.yaml`: every other job/deployment modified in this PR sets `automountServiceAccountToken: {{ .Values.automountServiceAccountToken }}` at the pod spec level, but this job was missed. The projected token volume and mount are still injected via `modelEngine.volumes` / `modelEngine.volumeMounts` when `automountServiceAccountToken: false`, so the pod-level field should be set consistently.

Last reviewed commit: 85a7735