Comments (7)
Hi,
The next best step is to visit New Relic Support, where you can engage the New Relic Support Community or open a support ticket depending on your support level. The support team is best positioned to assist with your specific needs.
Please provide a link to this GitHub issue when submitting your community post or support ticket.
Thanks!
from helm-charts.
Same issue with EKS and image quay.io/newrelic/synthetics-minion:3.0.46.
Same issue on AKS with 1cpu request / limit.
Same issue on AKS.
My understanding is that the CPU request comes from the job that the code on the New Relic Synthetics minion spins up. The minion creates a job to execute the synthetic task and sets its CPU request to 500m.
But there's no way to configure that job creation through the Helm chart.
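For illustration, the job the minion generates would carry a requests stanza along these lines (a hypothetical sketch based on this thread; the real job spec is produced internally and is not user configurable):

```yaml
# Hypothetical sketch of the resources stanza the minion sets on the
# runner jobs it creates; not configurable through the Helm chart.
apiVersion: batch/v1
kind: Job
metadata:
  generateName: synthetics-runner-   # name pattern is illustrative
spec:
  template:
    spec:
      containers:
        - name: runner
          resources:
            requests:
              cpu: 500m   # the request reported in this thread
      restartPolicy: Never
```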
@samb0803 is correct: the minion acts like a container orchestrator, spinning up runner pods and healthcheck pods as needed. Runner pods (and the resources they request) are not user configurable via the Helm chart.
You can set heavyWorkers: "1" instead of the default 2, which can help reduce the risk of running into this issue.
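For example, as a values override for the chart (the key path below is an assumption drawn from this thread; verify it against the chart's values.yaml):

```yaml
# values.yaml override for the synthetics-minion chart.
# heavyWorkers limits how many heavyweight runner pods the minion
# spawns concurrently; lowering it reduces peak CPU requests.
synthetics:
  heavyWorkers: "1"   # default is "2"; key path is an assumption
```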
The true amount of resources required by the K8s CPM is not initially evident and is hard to derive from the requirements doc.
https://docs.newrelic.com/docs/synthetics/synthetic-monitoring/private-locations/install-containerized-private-minions-cpms/#kubernetes-requirements
More details on required resources here:
https://newrelic.zendesk.com/knowledge/articles/4408123035287/en-us?brand_id=3270506
This issue is often encountered on clusters where the PVC is requesting access mode RWO since all pods will be scheduled on the same node. This makes setting a resource quota on the namespace more of a necessity to ensure other projects can't use resources requested by the CPM. This is especially true in a multi-tenant cluster.
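One way to reserve that headroom is a ResourceQuota on the minion's namespace. The figures below are placeholders, to be sized against your heavyWorkers setting and monitor load:

```yaml
# Placeholder quota for the namespace running the CPM; size the hard
# limits to cover the minion plus its runner and healthcheck pods.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: synthetics-quota
  namespace: newrelic        # placeholder namespace
spec:
  hard:
    requests.cpu: "4"        # placeholder values; tune for your workload
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
```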
That makes sense regarding the disk access mode. Essentially I should just need to change it to a StorageClass that supports ReadWriteMany and specify the access mode in the Helm chart.
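If the chart exposes persistence settings (the key names below are assumptions; check the chart's values.yaml), that change would look roughly like:

```yaml
# Sketch of a values override switching the minion's PVC to an
# RWX-capable StorageClass; key names are assumptions.
persistence:
  storageClass: azurefile     # an RWX-capable StorageClass on AKS
  accessMode: ReadWriteMany   # lets pods on different nodes mount the PVC
```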
I'm trying this now for AzureFile. I can see the RWX PVC has been bound by the pod. But the synthetic-minion errors with:
2021-09-23 01:47:41,082 - One and only one volume is expected to be bound to the minion synthetics-minion-0 - volumes found: [Volume(name=minion-volume, persistentVolumeClaim=PersistentVolumeClaimVolumeSource(claimName=minion-volume-synthetics-minion-0), ...), Volume(name=kube-api-access-pxvb7, projected=ProjectedVolumeSource(defaultMode=420, sources=[serviceAccountToken(path=token), configMap(kube-root-ca.crt), downwardAPI(metadata.namespace)]), ...)] (null fields elided)
Evidence of the PVC being bound and configured directly (StorageClass + access mode):
NAME                                STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
minion-volume-synthetics-minion-0   Bound    pvc-c55419e0-ce92-4fe5-93e4-28774865b23c   10Gi       RWX            azurefile      4m33s
Can you please confirm that "PVC is requesting access mode RWO since all pods will be scheduled on the same node" refers to the synthetic-minion's jobs being bound to the same node, not replicas of the synthetic minion being on the same node?
Wouldn't it be better to leverage a different node pool with taints and selectors rather than using namespace quotas?
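For reference, a dedicated node pool approach would look roughly like the sketch below (the pool label and taint key are hypothetical, and it is unclear from this thread whether the minion propagates tolerations to the runner pods it spawns, which could limit the approach):

```yaml
# Taint the dedicated pool's nodes first, e.g.:
#   kubectl taint nodes <node> dedicated=synthetics:NoSchedule
# Then schedule the minion onto that pool via its pod spec:
tolerations:
  - key: dedicated
    operator: Equal
    value: synthetics
    effect: NoSchedule
nodeSelector:
  agentpool: synthetics   # hypothetical AKS node pool label
```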
Ignore my error. The cluster auto-patched to 1.21.x, which the docs show won't work due to the SA token automount.
I'm avoiding this by manually adding automountServiceAccountToken: false to the service account.
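Assuming the chart's service account is patched directly (name and namespace below are placeholders), the workaround looks like:

```yaml
# Disable automatic mounting of the SA token on the minion's
# service account; name and namespace are placeholders.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: synthetics-minion   # placeholder: use the SA the chart created
  namespace: newrelic       # placeholder namespace
automountServiceAccountToken: false
```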