openshift / api
Canonical location of the OpenShift API definition.
Home Page: http://www.openshift.org
License: Apache License 2.0
Not sure I'm in the right repository, sorry if I'm wrong.
It is tedious to type "oc get imagecontentsourcepolicy" ten times a day; it would be so much better to have a short name like "oc get icsp".
Hello.
This a question regarding the context explained in kubernetes-sigs/controller-runtime#1191 (comment)
TL;DR of the above:
This repo (openshift/api) does not have CRDs for some historical OpenShift resources in config/v1 (e.g. Route, ImageStream), unless I've missed them somehow.
Those CRDs can be helpful when working on OpenShift operators with the operator-sdk, because envtest (the test framework that ships with operator-sdk) applies CRDs to a pseudo Kubernetes API server in order to test the operators' control loops, which matters whenever those operators interact with these resources.
Could those CRDs be added alongside the others? Is there a significant blocker that prevents this from happening, and can we help?
(The question and the need is motivated by thoth-station/meteor-operator#126)
Thanks
Hi guys,
I'm from the @fabric8io team. We maintain the Kubernetes and OpenShift client used by a lot of people in the Java ecosystem. We generate the Kubernetes/OpenShift model from Go structs in api and origin. Right now our OpenShift Go struct model points at v4.1.0 [0], but I have noticed that no other tag has been pushed since then, even though OpenShift 4.3.0 was released recently.
I'm just curious why the OpenShift upstream maintainers are not pushing tags. It's not like we're blocked, since we can update the model to the latest master revision, but it would be nice to upgrade in step with OpenShift releases.
[0] https://github.com/fabric8io/kubernetes-client#compatibility-matrix
I'm trying to add DeploymentConfig support in Istio; however, there are a huge number of repos in openshift. Can someone please help me find the reference for the pod label deploymentconfig so that I can document it? Thanks!
Hi,
we are trying to set up the following infrastructure:
an internal cluster in AWS, which means API and APPS routers using private subnets. Besides that, we want to provide a public router for running some services on the internet.
We tried the following:
Updating the cluster CRD:
spec:
  baseDomain: internal.foo.bar
  privateZone:
    tags:
      Name: cluster-n4sbw-int
      kubernetes.io/cluster/cluster-n4sbw: owned
  publicZone: # public.foo.bar
    tags:
      Name: cluster-n4sbw-external
      kubernetes.io/cluster/cluster-n4sbw-external: owned
Create another ingress controller:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: external
  namespace: openshift-ingress-operator
spec:
  domain: public.foo.bar
  endpointPublishingStrategy:
    loadBalancer:
      scope: External
    type: LoadBalancerService
This seems to work, and we are now able to attach public routes to our services, but the IngressController ends up in a degraded state with the following error:
Some ingresscontrollers are degraded: ingresscontroller "default" is degraded: DegradedConditions: One or more other status conditions indicate a degraded state: DNSReady=False (FailedZones: The record failed to provision in some zones: [{ map[Name:cluster-n4sbw-external kubernetes.io/cluster/luster-n4sbw-external:owned]}])
Removing the publicZone from the cluster CRD does not work either, because then the operator creates the wildcard DNS entry for the public ingress in the private Route 53 hosted zone, which is not resolvable from outside the VPC.
Is there a way to solve this problem?
We are using OpenShift 4.6.15.
It seems LegacySchemeGroupVersion no longer exists since commit 2f6f4d8.
It is now unexported: https://github.com/openshift/api/blob/master/apps/v1/legacy.go#L11
Can we re-export it and keep compatibility, as was done with GroupVersion: https://github.com/openshift/api/blob/master/apps/v1/register.go#L20 ?
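A compatibility shim could be as simple as an exported alias for the unexported variable, mirroring what register.go does for GroupVersion. A minimal sketch, using a stand-in GroupVersion type instead of k8s.io/apimachinery's schema.GroupVersion:

```go
package main

import "fmt"

// GroupVersion is a stand-in for k8s.io/apimachinery/pkg/runtime/schema.GroupVersion.
type GroupVersion struct {
	Group, Version string
}

// legacyGroupVersion plays the role of the now-unexported variable in apps/v1/legacy.go.
var legacyGroupVersion = GroupVersion{Group: "", Version: "v1"}

// LegacySchemeGroupVersion re-exports the legacy group version so that
// existing callers keep compiling.
//
// Deprecated: kept only for backwards compatibility.
var LegacySchemeGroupVersion = legacyGroupVersion

func main() {
	fmt.Println(LegacySchemeGroupVersion.Version)
}
```

The alias costs nothing at runtime and can carry a Deprecated marker, so the old identifier can be phased out on the callers' schedule.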
Triggering a pipeline-based build through the webhook
curl -k -X POST APIHOST/oapi/v1/namespaces/NAMESPACE/buildconfigs/BUILDCONFIG/webhooks/SECRET/generic
returns
"selfLink":"/apis/build.openshift.io/v1/namespaces/NAMESPACE/buildconfigs/BUILDNAME/instantiate
which IMHO is wrong, because no new build config is created, but rather a build;
so the selfLink should be
/apis/build.openshift.io/v1/namespaces/NAMESPACE/builds/BUILDNAME
Hi
It would be useful to support tolerations for BuildConfig.
BuildConfig supports nodeSelector, but that's not always convenient.
In my case I had 3 nodes out of 72 with GPUs, and I put a taint on them to make sure other tenants' workloads wouldn't be scheduled there. But then I failed to schedule a build on them, because BuildConfig does not allow tolerations.
Thanks.
Several other issues suggest that there at least used to be tags in this repository, yet the tag list is empty now.
I came here to investigate why my Go project, which depends on the openshift api, started complaining that a certain version (mentioned in my go.mod) is not found.
Were the tags completely removed from the repository? Is this an intentional change or an unintentional mistake?
Consider this scenario:
# *(this is broken, you have to specify the command manually with oc edit...)
$ oc create deploymentconfig test --image=centos:7 -- /bin/sleep infinity
# run this 3 times:
$ oc rollout latest test
$ oc get rc
NAME DESIRED CURRENT READY AGE
test-1 0 0 0 1m
test-2 0 0 0 1m
test-3 1 1 1 1m
$ oc get rc,dc -o yaml > backup.yaml
$ oc delete all --all
# Now restore everything back:
$ oc create -f backup.yaml
What happens here is that basically all the replication controllers are deleted right after they are created, and then a new RC is created with revision=1. This is because the replication controllers have ownerRefs set to the DC that was deleted, and the UID does not match the newly created DC.
If you edit backup.yaml, remove all ownerRef fields from the RCs, and recreate everything, then the 3 RCs will stay, but the revision for the DC is set to 1 instead of 3.
That means when you do oc rollout latest test, it will tell you that it successfully rolled out, but nothing will happen (just the DC revision is bumped) until you have called that command three times. On the fourth time, it will actually trigger a new rollout.
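The garbage collector's decision here boils down to a UID comparison: the restored RCs carry ownerReferences whose uid points at the old, deleted DC, so even though a DC with the same name exists again, the RCs look like orphans. A stdlib-only sketch of that check, with simplified stand-ins for the real API types:

```go
package main

import "fmt"

// OwnerReference is a simplified stand-in for metav1.OwnerReference.
type OwnerReference struct {
	Kind string
	Name string
	UID  string
}

// ownerStillExists reports whether an ownerReference resolves to a live
// object. A matching name alone is not enough: the UID must match too,
// which is why restoring from backup.yaml breaks the ownership chain.
func ownerStillExists(ref OwnerReference, liveUIDsByName map[string]string) bool {
	uid, ok := liveUIDsByName[ref.Name]
	return ok && uid == ref.UID
}

func main() {
	// The restored DC "test" exists again, but with a brand-new UID.
	live := map[string]string{"test": "uid-new"}

	// The RCs from backup.yaml still reference the old DC's UID.
	staleRef := OwnerReference{Kind: "DeploymentConfig", Name: "test", UID: "uid-old"}

	// false: the GC treats the RCs as orphans and deletes them.
	fmt.Println(ownerStillExists(staleRef, live))
}
```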
Not sure if the file authorization/v1/generated.pb.go is up to date on the latest master.
After running make update-codegen-crds:
$ git status
On branch check_make
Your branch is up to date with 'origin/master'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git restore <file>..." to discard changes in working directory)
modified: authorization/v1/generated.pb.go
no changes added to commit (use "git add" and/or "git commit -a")
When running a CRD generator based on kubebuilder's v0.2 of controller-tools against an operator (as in openshift/cluster-kube-apiserver-operator#514), the following error occurs:
types_network.go:179:28: map values must be a named type, not *ast.ArrayType
This is referring to ProxyArguments
here: https://github.com/openshift/api/blob/master/operator/v1/types_network.go#L179
See gogo/protobuf#691 for context/background
I'm writing as the maintainer of the package in Debian. We were discussing internally when we can move the distribution to the newer protobuf v2 APIs, and the dependency on gogo/protobuf seemed problematic.
Is gogo/protobuf a critical dependency of openshift/api? If not, maybe it would be appropriate to move to google.golang.org/protobuf instead?
I'm thinking about an operator, and I'd like to store the last generation of the objects that the operator created/updated so it can detect updates in those objects' .spec. GenerationHistory has all the information; however, I don't like the name. There is no history: it's the generation of a single object that the operator knows is correct.
Perhaps ChildGeneration (the objects that the operator creates could be called children)? Or ObjectGeneration?
api/operator/v1alpha1/types.go
Line 81 in 4879e7b
Hi,
we are building an operator where we have a higher-level object called PAAS, which in turn creates Namespaces, ClusterResourceQuotas, Groups, etc. All works as expected (we use
controllerutil.SetControllerReference(paas, ns, r.Scheme)
for that). But
Which is weird, because we use
controllerutil.SetControllerReference(paas, quota, r.Scheme)
for quotas, and the same for the group. My questions:
During development there are times when you don't want to repull images, but you do want the rest of management. During development there are also times when you really want to repull images.
Since we want both A and not A, we probably need a setting in here https://github.com/openshift/api/blob/master/operator/v1alpha1/types.go#L27
@mfojtik @sttts you have probably hit this too. I'll work on it or merge one of yours tomorrow
My reading of the regex below is that all hostname components must start with a letter. It seems this was designed to prevent IP addresses. But there are valid use cases for IP addresses, and RFC 1123 also allows hostname components to start with a digit.
RFC1123 Section 2.1 Hostnames and Numbers specifically calls out:
The syntax of a legal Internet host name was specified in [RFC-952](https://datatracker.ietf.org/doc/html/rfc952)
[DNS:4]. One aspect of host name syntax is hereby changed: the
restriction on the first character is relaxed to allow either a
letter or a digit. Host software MUST support this more liberal
syntax.
and later in the same section:
Whenever a user inputs the identity of an Internet host, it SHOULD
be possible to enter either (1) a host domain name or (2) an IP
address in dotted-decimal ("#.#.#.#") form.
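The difference can be seen by comparing the two label patterns directly. A stdlib sketch: the patterns are illustrations, not the exact expressions used in this repo, though the relaxed one matches the shape k8s.io/apimachinery uses for DNS-1123 labels:

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	// strictLabel requires a leading letter, as the regex under discussion does.
	strictLabel = regexp.MustCompile(`^[a-z]([-a-z0-9]*[a-z0-9])?$`)
	// relaxedLabel also allows a leading digit, per RFC 1123 section 2.1.
	relaxedLabel = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]*[a-z0-9])?$`)
)

func main() {
	fmt.Println(strictLabel.MatchString("3com"))  // false: digit-first label rejected
	fmt.Println(relaxedLabel.MatchString("3com")) // true: RFC 1123 allows it
}
```

Note that the relaxed pattern still cannot fully distinguish a hostname from a dotted-decimal IP address (e.g. "10.0.0.1" is all valid labels), which is exactly why RFC 1123 says software should accept both.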
The following branches are being fast-forwarded from the current development branch (master) as placeholders for future releases. No merging is allowed into these release branches until they are unfrozen for production release.
release-4.18
release-4.19
For more information, see the branching documentation.
https://github.com/openshift/api/blob/master/config/v1/types_infrastructure.go#L99
I think we are going to need Location and ProjectID for Google Cloud for creating buckets for the image registry, that will need to be populated by the Installer.
Please add a copyright statement, as the LICENSE requires.
In openshift/origin we have
https://github.com/openshift/origin/blame/master/pkg/route/apis/route/types.go#L185-L192
but those are missing in openshift/api
https://github.com/openshift/api/blob/master/route/v1/types.go#L232
The API repo does not have tags, only release branches.
Using tags is more friendly to go mod-based projects.
fyi @openshift/api-reviewers @openshift/api-approvers
When working with disconnected environments, you sometimes need to define multiple ImageContentSourcePolicies, some of which are used by apps/manifests that don't use digests when pulling images.
Currently, all configurations are added with the mirror-by-digest-only property set to true. It would be nice if this property could be configured via the ImageContentSourcePolicy.
Thanks,
The status field of the Route type does not have the omitempty JSON tag, thus marking it required.
Using the OpenShift API in other tools that validate resources results in: missing required field "status" in openshift.api.route.v1.Route. Other resources like DeploymentConfig have their status marked as omitempty.
We need generation verify scripts and something in the update-deps.sh script that removes those proto lines for now.
Also, @sttts thanks for working the generators. It made this a lot easier.
Per the OpenShift docs, generate is a supported mappingMethod.
However, on an OpenShift cluster, an error is reported when setting mappingMethod to generate.
It seems generate is not listed as a MappingMethodType in https://github.com/openshift/api/blob/e34bc2276d2e91e2e4f37395349c253652511754/config/v1/types_oauth.go
// MappingMethodType specifies how new identities should be mapped to users when they log in
type MappingMethodType string
const (
// MappingMethodClaim provisions a user with the identity’s preferred user name. Fails if a user
// with that user name is already mapped to another identity.
// Default.
MappingMethodClaim MappingMethodType = "claim"
// MappingMethodLookup looks up existing users already mapped to an identity but does not
// automatically provision users or identities. Requires identities and users be set up
// manually or using an external process.
MappingMethodLookup MappingMethodType = "lookup"
// MappingMethodAdd provisions a user with the identity’s preferred user name. If a user with
// that user name already exists, the identity is mapped to the existing user, adding to any
// existing identity mappings for the user.
MappingMethodAdd MappingMethodType = "add"
)
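For comparison, if generate were supported at the API level, the const block would presumably gain one more value. The following addition is hypothetical, describing what the docs promise rather than what the repo contains:

```go
package main

import "fmt"

type MappingMethodType string

const (
	MappingMethodClaim  MappingMethodType = "claim"
	MappingMethodLookup MappingMethodType = "lookup"
	MappingMethodAdd    MappingMethodType = "add"

	// MappingMethodGenerate is HYPOTHETICAL: per the docs it would provision
	// a user with an unused, generated user name when the identity's
	// preferred name is taken, but it is absent from types_oauth.go.
	MappingMethodGenerate MappingMethodType = "generate"
)

func main() {
	// Validation against the current enum only knows three values,
	// so "generate" is rejected by the cluster.
	known := map[MappingMethodType]bool{
		MappingMethodClaim:  true,
		MappingMethodLookup: true,
		MappingMethodAdd:    true,
	}
	fmt.Println(known[MappingMethodGenerate]) // false
}
```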
While upstream client has e.g.
kvalidation "k8s.io/apimachinery/pkg/api/validation"
kvalidationutil "k8s.io/apimachinery/pkg/util/validation"
we don't provide an alternative. Say I want to validate my route or part of it:
https://github.com/tnozicka/origin/blob/4bc612e584a033515ccf468a5206f836814dc49c/pkg/route/apis/route/validation/validation.go#L26
(probably not the best example)
As explained in openshift/cluster-ingress-operator#633 issue, the IngressController.operator.openshift.io/v1 object should have a .Spec.BindOptions
with the following fields:
bindOptions:
  httpPort: 80
  httpsPort: 443
Running OpenShift on AWS.
Ultimately, I think, what comes out of the OpenShift Ingress Operator is an AWS load balancer ingress controller. The OpenShift Ingress Operator will drop an AWS Load Balancer Ingress Controller which supports either ELB Classic load balancers or NLBs; no ALBs.
api/operator/v1/types_ingress.go
Line 594 in 6ba31fa
Specifically, we would like to create NLBs with static IPs, and to add custom AWS resource tags to our load balancers, for cost reporting: https://kubernetes-sigs.github.io/aws-load-balancer-controller/v2.2/guide/service/annotations/
In slack today I was alerted to the possibility that an OpenShift component to which I contribute may not be in compliance with a policy requiring a schema for its APIs. I don't know the details of that policy, how to tell if it actually affects the component I'm working on, or what sort of code (or other) changes might be needed in order to comply. There does not appear to be any documentation about that here in this repo, so I'm filing this ticket to ask someone to write all of that down so I can ensure someone takes care of any gaps we have.
I don't think we link to slack conversations in GitHub issues, so ping me and I'll share a link privately if you would like more details.
I am getting all kinds of errors trying to run go mod vendor in openshift/oc while trying to pull in the commit from pull request #937, or the one after it:
malformed file path
Example:
tls/docs/kube-apiserver Serving Certificates/subcert-openshift-kube-apiserver-operator_localhost-recovery-serving-signer@1622133567::2777012960471375622.png: malformed file path "tls/docs/kube-apiserver Serving Certificates/subcert-openshift-kube-apiserver-operator_localhost-recovery-serving-signer@1622133567::2777012960471375622.png": invalid char ':'
Can we get this either fixed or reverted?
Let me start off by saying: not sure if this is the right place to report this.
The problem: docstrings written above sub-structs for CRDs are not reflected when doing oc explain; see the example below.
In config/v1/types_network.go:
type Network struct {
	metav1.TypeMeta `json:",inline"`

	// Standard object's metadata.
	metav1.ObjectMeta `json:"metadata,omitempty"`

	// spec holds user settable values for configuration.
	// +kubebuilder:validation:Required
	// +required
	Spec NetworkSpec `json:"spec"`

	// status holds observed values from the cluster. They may not be overridden.
	// +optional
	Status NetworkStatus `json:"status"`
}

// NetworkSpec is the desired network configuration.
// As a general rule, this SHOULD NOT be read directly. Instead, you should
// consume the NetworkStatus, as it indicates the currently deployed configuration.
// Currently, changing ClusterNetwork, ServiceNetwork, or NetworkType after
// installation is not supported.
type NetworkSpec struct {
Output of oc explain:

$ oc explain network.spec
KIND:     Network
VERSION:  config.openshift.io/v1

RESOURCE: spec <Object>

DESCRIPTION:
     spec holds user settable values for configuration.

FIELDS:
This example was taken from Network, but it seems to apply to all OpenShift CRDs defined in config/v1.
Let me know if you'd prefer that I file a Bugzilla report for this instead.
According to the documentation https://docs.openshift.com/container-platform/4.3/nodes/nodes/nodes-nodes-audit-log.html, there should be loglevel and operatorloglevel in this API CRD.
When importing github.com/openshift/api/security/v1 in a Golang operator project, it came to my attention that one is not able to set AllowPrivilegeEscalation when creating a custom security context constraint.
This can be verified by looking at the SecurityContextConstraints struct shown in https://pkg.go.dev/github.com/openshift/api/security/v1#SecurityContextConstraints. However, I did notice the field is exported in the GitHub repository; see line 97 of the source code at 0b39f81.
At this time, I am not sure what would be the proper way to reconcile these two values to allow users to set a value for AllowPrivilegeEscalation.
Only one version of the module shows to be available:
➜ git:(main) ✗ go list -u -m github.com/openshift/api
github.com/openshift/api v3.9.0+incompatible
➜ git:(main) ✗
The Hostname type definition in ./config/v1/types_ingress.go defines Hostname as a string in "hostname" format. However, validation of the hostname format does not allow the top-level domain to include a digit.
Example:
my.domain.com works
my.domain.c4m does not
How did I test? By creating a componentRoute in the Ingress config. This definition works:
apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  creationTimestamp: "2022-01-07T10:12:30Z"
  generation: 4
  name: cluster
  resourceVersion: "1239439"
  uid: 3460d6bf-8907-4b54-a920-3cc6a7e5ba18
spec:
  componentRoutes:
This one returns an error:
ingresses.config.openshift.io "cluster" was not valid:
apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  creationTimestamp: "2022-01-07T10:12:30Z"
  generation: 4
  name: cluster
  resourceVersion: "1239439"
  uid: 3460d6bf-8907-4b54-a920-3cc6a7e5ba18
spec:
  componentRoutes:
Type definition in ./config/v1/types_ingress.go:
// Hostname is an alias for hostname string validation.
// +kubebuilder:validation:Format=hostname
type Hostname string
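The observed behaviour matches a validator that requires the final label (the TLD) to be purely alphabetic. The rule below is an assumption inferred from the accepted/rejected examples above, not the actual kubebuilder "hostname" format code:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// alphaTLD requires the last hostname label to contain letters only,
// which is the restriction the Ingress validation appears to apply.
var alphaTLD = regexp.MustCompile(`^[a-zA-Z]{2,}$`)

// tldIsAlpha reports whether the final label of host passes that check.
func tldIsAlpha(host string) bool {
	labels := strings.Split(host, ".")
	return alphaTLD.MatchString(labels[len(labels)-1])
}

func main() {
	fmt.Println(tldIsAlpha("my.domain.com")) // true: accepted
	fmt.Println(tldIsAlpha("my.domain.c4m")) // false: rejected by the "hostname" format
}
```

RFC 1123 does not impose this restriction on labels in general, which is why the second example feels like it should be valid.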
Dependabot raised a PR in my project that upgrades the openshift/api dependency to v3.9.0+incompatible, but I don't see any tag in this repo with this name. go get github.com/openshift/api@v3.9.0+incompatible works fine, though.
So I am not able to understand where that tag is being pulled from. If I knew that, I could check whether it's really an upgrade over the pseudo-version that I already have for this module.
Note that ImageStreamTag does not implement watch, which would be important for observing changes in this resource as well.
$ oc api-resources -o wide | grep ImageStream
imagestreamimages isimage image.openshift.io true ImageStreamImage [get]
imagestreamimports image.openshift.io true ImageStreamImport [create]
imagestreammappings image.openshift.io true ImageStreamMapping [create]
imagestreams is image.openshift.io true ImageStream [create delete deletecollection get list patch update watch]
imagestreamtags istag image.openshift.io true ImageStreamTag [create delete get list patch update]
OCP version: 4.3.1
The existing OpenShift REST API is low level and doesn't allow primary operations like application deployment, status checks, etc.
If I'd like to deploy some application from a template through the API, I go through the following steps:
So, it would be great to implement some end-user API, like what can be done using the oc command.
According to @DirectXMan12, in comment kubernetes-sigs/controller-runtime#362 (comment), OpenShift should not re-register corev1.SecretList
under its own Group/Version.
This breaks the controller-runtime and causes: kubernetes-sigs/controller-runtime#362
Line 48 in b772cc9
We're working to move the MCO types from the machine-config-operator repo. Within our types we're using the Ignition types https://github.com/openshift/machine-config-operator/blob/master/pkg/apis/machineconfiguration.openshift.io/v1/types.go#L233. We're already doing a hack to provide DeepCopy https://github.com/openshift/machine-config-operator/blob/master/pkg/apis/machineconfiguration.openshift.io/v1/machineconfig.deepcopy.go, but we're now stuck because the Ignition types don't have protobuf annotations (nor does the generate target write them, of course). This results in make generate working just fine except for the protobuf case, because of Ignition. I can see a $PROTO_OPTIONAL env var, but it doesn't seem to take effect.
Do we need the Ignition types to have protobuf annotations as well? Can we port the API without protobuf?
We do have a WIP PR here, runcom#1, which fails at the bare make target because of protobuf, as said.
The new Subdomain attribute does not have omitempty in its Go struct tag, unlike other fields such as Path. Is this an accidental omission?
Lines 86 to 90 in d297251
It affects the output of "oc get route" and client-go serializing to YAML using k8s.io/apimachinery/pkg/runtime/serializer/json.
The tools Makefile version checks output git errors when vendored/run from another repository via make -C, as described in the codegen README for inclusion in other repositories. The openshift/api vendor, go.mod, and go.sum files used by the check aren't there when vendored into a different project. The check ultimately works, but it outputs git errors when run.
Example output from including in the cloud credential operator repo:
➜ cloud-credential-operator ✗ make -C vendor/github.com/openshift/api/tools run-codegen BASE_DIR="${PWD}/pkg/apis" API_GROUP_VERSIONS="cloudcredential.openshift.io/v1" OPENSHIFT_REQUIRED_FEATURESETS="Default"
make: Entering directory '/home/abutcher/go/src/github.com/openshift/cloud-credential-operator/vendor/github.com/openshift/api/tools'
fatal: ambiguous argument 'vendor': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
fatal: ambiguous argument 'vendor': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
fatal: ambiguous argument 'vendor': unknown revision or path not in the working tree.
Use '--' to separate paths from revisions, like this:
'git <command> [<revision>...] -- [<file>...]'
fatal: vendor: no such path in the working tree.
Use 'git <command> -- <path>...' to specify paths that do not exist locally.
fatal: vendor: no such path in the working tree.
Use 'git <command> -- <path>...' to specify paths that do not exist locally.
fatal: vendor: no such path in the working tree.
Use 'git <command> -- <path>...' to specify paths that do not exist locally.
Building codegen version 0f37397c68ee97ff55ba80aba040fc84d4a65653-dirty
/home/abutcher/go/src/github.com/openshift/cloud-credential-operator/vendor/github.com/openshift/api/tools/_output/bin/linux/amd64/codegen --base-dir /home/abutcher/go/src/github.com/openshift/cloud-credential-operator/pkg/apis --api-group-versions cloudcredential.openshift.io/v1 --required-feature-sets Default
I0228 10:37:37.514558 613509 root.go:80] Running generators for cloudcredential.openshift.io
...
make generate-with-container
results in,
hack/update-swagger-docs.sh
Generating swagger type docs for apiserver/v1 at apiserver/v1
# sigs.k8s.io/json/internal/golang/encoding/json
vendor/sigs.k8s.io/json/internal/golang/encoding/json/encode.go:1249:12: sf.IsExported undefined (type reflect.StructField has no field or method IsExported)
vendor/sigs.k8s.io/json/internal/golang/encoding/json/encode.go:1255:18: sf.IsExported undefined (type reflect.StructField has no field or method IsExported)
make: *** [Makefile:65: update-scripts] Error 2
make: *** [Makefile:70: generate-with-container] Error 2
I'm sure most of it can point to k8s, but we recently had some more generic API-type items come up, like the fact that the registry uses port 5000 for its service when it would have been preferable to use 443 so that the port suffix is not needed.
I don't know where else the guidelines could go, so including them in an API guidelines doc in this repo seems as good a place as any.
/cc @smarterclayton @openshift/api-reviewers @adambkaplan
Currently, only clusteroperator has a short name (co); other CRD resources do not have one. It would improve usability to add short names for CRDs in OCP.
When I create a project using the REST APIs, it is created without default role bindings like system:image-builder, system:deployer, system:image-puller, etc.