parca-dev / docs
The parca project website and documentation.
Home Page: https://parca.dev/docs/
License: Creative Commons Attribution Share Alike 4.0 International
Currently, the documentation only covers XOR encoding, but we also use run-length and double-delta encodings. These should be documented, along with an explanation of why each is better suited where it is used (for example, double-delta encoding stores the change between successive deltas, so regularly spaced timestamps compress to values near zero).
The parca.yaml downloaded via cURL in the "Parca from Binary" tutorial comes from the main branch, which currently does not work with the version downloaded by the preceding cURL calls (v0.12.0). The parca.yaml from the same tag should be used instead.
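A sketch of the fix: pin a single release tag and derive the config URL from it. The raw.githubusercontent.com path below assumes parca.yaml sits at the root of the parca-dev/parca repository; verify the path against the actual tag before relying on it.

```shell
# Pin one tag for both the binary and the config download.
VERSION="v0.12.0"
# Assumed location of parca.yaml at that tag (check before use).
CONFIG_URL="https://raw.githubusercontent.com/parca-dev/parca/${VERSION}/parca.yaml"
echo "${CONFIG_URL}"
# curl -fsSL "${CONFIG_URL}" -o parca.yaml   # then fetch the pinned config
```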
How do I, as a user, know what to choose here? Maybe we can at least open an issue on the docs repo to clarify what this is, what it means, and how to configure it?
Originally posted by @brancz in parca-dev/parca#314 (comment)
Nice!
Seems worth automating so it is always up to date. Doing it manually is not too bad, but I did not release the versions between v0.8 and v0.12 because I simply forgot. 🤷‍♂️
Originally posted by @metalmatze in parca-dev/parca#1491 (comment)
We should provide examples or a thorough description of how the pull-based approach to ingestion works. This is especially useful for the types of profiles that parca-agent does not yet support.
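For illustration, a minimal sketch of a pull-based scrape config, assuming a target that exposes the standard Go net/http/pprof endpoints; the target address and job name are placeholders, and any fields beyond the Prometheus-style `scrape_configs` shown elsewhere in these docs should be verified against the current config reference.

```yaml
# Hypothetical sketch: pull profiles from a Go service exposing /debug/pprof.
scrape_configs:
  - job_name: "my-go-service"      # placeholder job name
    scrape_interval: "45s"
    static_configs:
      - targets: ["127.0.0.1:6060"]  # placeholder pprof endpoint
```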
This issue lists Renovate updates and detected dependencies. Read the Dependency Dashboard docs to learn more.
These branches will be created by Renovate only once you click their checkbox below.
(@docusaurus/core, @docusaurus/preset-classic, @docusaurus/theme-search-algolia)
These updates are awaiting their schedule. Click on a checkbox to get an update now.
These updates have all been created already. Click a checkbox below to force a retry/rebase of any.
.tool-versions
- node 18.20.4

.github/workflows/spellcheck.yaml
- actions/checkout v4.1.7@692973e3d937129bcbf40652eb9f2f61becf3332

docusaurus-github-releases-plugin/package.json
- node-fetch 3.3.2
- node >=12.13.0

package.json
- @docusaurus/core 2.4.3
- @docusaurus/plugin-client-redirects 3.3.2
- @docusaurus/preset-classic 2.4.3
- @docusaurus/theme-search-algolia 2.4.3
- @mdx-js/react 1.6.22
- @rive-app/react-canvas 3.0.57
- @svgr/webpack 6.5.1
- clsx 1.2.1
- file-loader 6.2.0
- prism-react-renderer 1.3.5
- raw-loader 4.0.2
- react 18.3.1
- react-dom 18.3.1
- url-loader 4.1.1
- node >=16.0.0
Currently, we just double-commit all docs, but we could use https://www.npmjs.com/package/docusaurus-plugin-remote-content instead.
The default CPU profiling frequency was updated in parca-dev/parca-agent#1213, so the docs should be synced before the next release.
Document every field with its description, type, and default value.
It would be better to use the Observability document and give examples based on SLOs.
Add tutorial on how to configure object storage for debuginfo storage.
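As a starting point for that tutorial, a hedged sketch of an S3-style bucket configuration. Parca's object storage configuration follows the Thanos objstore format, but the exact keys should be confirmed against the current docs; the bucket name, endpoint, and credentials below are placeholders.

```yaml
# Hypothetical sketch of S3-backed debuginfo storage (Thanos objstore format).
object_storage:
  bucket:
    type: "S3"
    config:
      bucket: "parca-debuginfo"               # placeholder bucket name
      endpoint: "s3.us-east-1.amazonaws.com"  # placeholder endpoint
      access_key: "<ACCESS_KEY>"
      secret_key: "<SECRET_KEY>"
```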
Add a page to list links to the Parca design documents or proposals
I followed https://www.parca.dev/docs/systemd, which contains a YAML file example like:
```yaml
debug_info:
  bucket:
    type: "FILESYSTEM"
    config:
      directory: "/tmp"
  cache:
    type: "FILESYSTEM"
    config:
      directory: "/tmp"
scrape_configs:
  - job_name: "default"
    scrape_interval: "2s"
    static_configs:
      - targets: ["127.0.0.1:7070"]
```
However, this fails to start with an error like:
```
level=error name=parca ts=2022-11-22T10:51:43.582639041Z caller=main.go:59 msg="Program exited with error" err="parsing YAML file /etc/parca/parca.yaml: yaml: unmarshal errors:\n line 1: field debug_info not found in type config.Config"
```
AFAICS this is because `debug_info` is no longer valid since parca-dev/parca#1403; instead, I used a config like this:
```yaml
object_storage:
  bucket:
    type: "FILESYSTEM"
    config:
      directory: "/tmp"
scrape_configs:
  - job_name: "default"
    scrape_interval: "2s"
    static_configs:
      - targets: ["127.0.0.1:7070"]
```
- To publish blog posts from the Parca team
- To highlight blog posts about Parca
The docs' agent release link (e.g. here) resolves to v0.10.0-rc.1, but that release no longer supports the `--systemd-units` flag, since parca-dev/parca-agent#627 removed it.
```shell
$ grep -R systemd-units
docs/agent-binary.mdx:sudo parca-agent --node=systemd-test --systemd-units=docker.service --log-level=debug --kubernetes=false --store-address=localhost:7070 --insecure
docs/parca-agent-systemd.mdx: --systemd-units=SYSTEMD-UNITS,...
docs/parca-agent-systemd.mdx:To profile units, you just need to specify the name of the service in `--systemd-units` flag.
docs/parca-agent-systemd.mdx:+ --systemd-units=docker.service,my-app.service \
docs/systemd.mdx:ExecStart=/usr/bin/parca-agent --http-address=":7071" --node=systemd-test --systemd-units=docker.service,parca.service,parca-agent.service --kubernetes=false --store-address=localhost:7070 --insecure
src/components/HomepageQuickstart.js:./parca-agent --node=systemd-test --systemd-units=parca-agent.service --kubernetes=false`
```
It also looks like `--kubernetes=false` is no longer valid. Should both just be removed?
Currently, we only have flag documentation in the repositories directly (where they are auto-generated). However, this is some of the most important configuration that people need, so we should make sure it's also available on the website.
After deploying the server I got this:
Warning: resource namespaces/parca is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by oc apply. oc apply should only be used on resources created declaratively by either oc create --save-config or oc apply. The missing annotation will be patched automatically.
....
....
W1014 14:33:28.076094 60726 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
W1014 14:33:28.204159 60726 warnings.go:70] policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
I got the following warning after applying agent on OpenShift v4.10:
W1014 14:35:06.649758 60757 warnings.go:70] would violate "latest" version of "baseline" PodSecurity profile: non-default capabilities (container "parca-agent" must not include "SYS_ADMIN" in securityContext.capabilities.add), host namespaces (hostPID=true), hostPath volumes (volumes "root", "proc", "run", "cgroup", "modules", "bpffs", "debugfs", "localtime"), hostPort (container "parca-agent" uses hostPort 7071), privileged (container "parca-agent" must not set securityContext.privileged=true)
- How to build binaries with debug info
- How to split debug info and upload it
- How to discover linked shared libraries and provide debug information for them using package managers
- Examples for language runtimes
cc @Sylfrena
@parca-dev/parca-demo could be used as an example
Regarding this page: https://github.com/parca-dev/docs/blob/main/docs/grafana-flamegraph-plugin.md
At the bottom there are supposed to be screenshots, but the image URIs resolve to 404s. I tried to find the (relocated?) images but could not. Hopefully someone more familiar with the repo knows where they now live.
We already have plenty of frequently asked questions. We should probably have a dedicated docs page as well as an FAQ on the landing page.
Infrastructure-wide profiling with Parca Agent currently supports all compiled languages, e.g. C, C++, Rust, and Go (with extended support for Go). Further language support is coming in the upcoming weeks and months.
Parca itself supports any pprof-formatted profile. Any library or implementation that outputs valid pprof profiles is supported by Parca.
We have observed <1% CPU overhead, but more elaborate and reproducible reports are coming soon.
Read the docs for more in-depth explanations of the security considerations.
No. Profiling data consists of statistics representing, for example, how much time the CPU has spent in a particular function; the function metadata is decoupled from the actual executable and source code.
Read the docs on symbolization to understand further why.
Function, package, and file names would leak, but no executable code.
Read the docs on symbolization to understand further why.
cc @maxbrunet
Many cloud environments require a proxy to access external resources, so parca-agent would need proxy support in order to run in them. It does not currently appear possible to configure parca-agent to use an HTTP/HTTPS proxy.
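If proxy support were to land via the standard environment variables that Go's net/http honors (an assumption — the issue suggests the agent does not support this today), configuring it in a systemd unit might look like:

```ini
# Hypothetical: only works if the agent honors standard proxy env vars.
[Service]
Environment="HTTPS_PROXY=http://proxy.internal:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
```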
Merge setup instructions at https://www.parca.dev/docs/agent-binary and https://www.parca.dev/docs/systemd to remove redundancy and improve clarity.
Following the example on https://github.com/parca-dev/parca.dev/blob/main/docs/kubernetes.mdx isn't working as expected.
```shell
$ kubectl apply -f https://github.com/parca-dev/parca/releases/download/v0.7.1/kubernetes-manifest.yaml
```
Should create all required resources and allow me to see the UI via
```shell
$ kubectl -n parca port-forward service/parca 7070
```
```shell
$ kubectl apply -f https://github.com/parca-dev/parca/releases/download/v0.7.1/kubernetes-manifest.yaml
Warning: resource namespaces/parca is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
namespace/parca configured
configmap/parca-config created
deployment.apps/parca created
namespace/parca unchanged
podsecuritypolicy.policy/parca created
role.rbac.authorization.k8s.io/parca created
rolebinding.rbac.authorization.k8s.io/parca created
service/parca created

$ kubectl -n parca port-forward service/parca 7070
error: unable to forward port because pod is not running. Current status=Pending

$ kubens parca
Context "minikube" modified.
Active namespace is "parca".

$ oc get pods
NAME                     READY   STATUS    RESTARTS   AGE
parca-58c8487fcf-tk8zd   0/1     Pending   0          114s

$ oc describe pod parca-58c8487fcf-tk8zd
Name:           parca-58c8487fcf-tk8zd
Namespace:      parca
Priority:       0
Node:           <none>
Labels:         app.kubernetes.io/component=observability
                app.kubernetes.io/instance=parca
                app.kubernetes.io/name=parca
                app.kubernetes.io/version=v0.7.1
                pod-template-hash=58c8487fcf
Annotations:    <none>
Status:         Pending
IP:
IPs:            <none>
Controlled By:  ReplicaSet/parca-58c8487fcf
Containers:
  parca:
    Image:      ghcr.io/parca-dev/parca:v0.7.1
    Port:       7070/TCP
    Host Port:  0/TCP
    Args:
      /parca
      --config-path=/var/parca/parca.yaml
      --log-level=info
      --cors-allowed-origins=*
    Liveness:     exec [/grpc-health-probe -v -addr=:7070] delay=5s timeout=1s period=10s #success=1 #failure=3
    Readiness:    exec [/grpc-health-probe -v -addr=:7070] delay=10s timeout=1s period=10s #success=1 #failure=3
    Environment:  <none>
    Mounts:
      /var/parca from parca-config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from parca-token-d4cx2 (ro)
Conditions:
  Type           Status
  PodScheduled   False
Volumes:
  parca-config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      parca-config
    Optional:  false
  parca-token-d4cx2:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  parca-token-d4cx2
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  kubernetes.io/arch=amd64
                 kubernetes.io/os=linux
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason            Age                  From               Message
  ----     ------            ----                 ----               -------
  Warning  FailedScheduling  65s (x3 over 2m14s)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match Pod's node affinity.
```
Macbook Pro M1
```shell
$ uname -a
Darwin MAC-FVFGH12JQ05P 21.3.0 Darwin Kernel Version 21.3.0: Wed Jan 5 21:37:58 PST 2022; root:xnu-8019.80.24~20/RELEASE_ARM64_T8101 arm64
```
The problematic part is:
```
Node-Selectors:  kubernetes.io/arch=amd64
                 kubernetes.io/os=linux
```
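A possible workaround sketch: edit the Deployment so the nodeSelector matches the node's actual architecture (arm64 under minikube on Apple Silicon), or drop the selector entirely; whether the published image actually ships an arm64 variant needs to be checked separately.

```yaml
# In the Deployment's pod template (spec.template.spec):
nodeSelector:
  kubernetes.io/arch: arm64   # was amd64; match the node, or remove the selector
  kubernetes.io/os: linux
```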