arnavion / k8s-openapi

Rust definitions of the resource types in the Kubernetes client API
License: Apache License 2.0
Currently, `WatchEvent.object` is a `RawExtension`, which is a newtype wrapper around `serde_json::Value`. This means the user has to re-deserialize the `serde_json::Value` as the actual type they want, like `api::core::v1::Pod`.
For watch events associated with a group-version-kind, it should be possible to emit a strongly typed `WatchEvent` instead. Perhaps override `WatchEvent` with:
```rust
enum WatchEvent<T> {
    Added(T),
    Deleted(T),
    ErrorStatus(apimachinery::pkg::apis::meta::v1::Status),
    ErrorOther(apimachinery::pkg::runtime::RawExtension),
    Modified(T),
}
```
Then use `WatchEvent<Pod>`, etc. for things with a GVK, and `WatchEvent<apimachinery::pkg::runtime::RawExtension>` for everything else.

As before, the user can ignore the response type and manually deserialize into a `WatchEvent<apimachinery::pkg::runtime::RawExtension>` if the strong type doesn't meet their requirements for any reason (e.g. it's incorrect due to a spec bug).
The spec does not record any possible HTTP status codes other than 200, 201, 202 and 401. But in common operation, many APIs can fail with HTTP 403, GETs on non-existent resources can fail with 404, and PUTs on resources with wrong resource version can fail with 409.
It may be worth not pretending to have named variants like `Unauthorized` when users will commonly have to handle the `Other` variant as well. So for variants that carry no data, it may be worth not emitting them at all and letting the `Other` variant match them.
- `io.k8s.apimachinery.pkg.apis.meta.v1.MicroTime`
- `io.k8s.apimachinery.pkg.apis.meta.v1.Time`

Both are newtypes of `time.Time` in the golang client.
For example:
```rust
#[derive(Debug)]
pub enum CreateCustomResourceDefinitionResponse {
    Ok(crate::v1_14::apiextensions_apiserver::pkg::apis::apiextensions::v1beta1::CustomResourceDefinition),
    Created(crate::v1_14::apiextensions_apiserver::pkg::apis::apiextensions::v1beta1::CustomResourceDefinition),
    Accepted(crate::v1_14::apiextensions_apiserver::pkg::apis::apiextensions::v1beta1::CustomResourceDefinition),
    Unauthorized,
    Other,
}
```
However, the API server usually returns some JSON object in the response for the non-successful status codes. Currently this means users that want this value have to parse it from the response buffer, which in turn means clients have to expose the response buffer alongside the parsed response type.
So it would be useful to type `Unauthorized`, etc. as `(serde_json::Value)`. Usually the value is a `Status`, but typing it as `Status` would require emitting other variants for responses that aren't a `Status`, to be safe. In any case, a `serde_json::Value` can be re-deserialized as a `Status` if the user wants that.
The mapping in `get_rust_ident` needs to be fixed. Add a test to ensure it doesn't regress.

Others:

- `PolicyRule::non_resource_ur_ls`
- `FCVolumeSource::target_ww_ns`
- `APIGroup::server_address_by_client_cid_rs`
- `APIVersions::server_address_by_client_cid_rs`
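A minimal sketch of an acronym-aware conversion that produces the expected names (the real `get_rust_ident` lives in the codegen; this stand-alone version is illustrative only):

```rust
// Convert a camelCase spec name to a snake_case Rust ident, keeping
// acronyms like "URLs", "WWNs" and "CIDRs" as single words.
fn get_rust_ident(name: &str) -> String {
    let chars: Vec<char> = name.chars().collect();
    let mut out = String::new();
    for (i, &c) in chars.iter().enumerate() {
        if !c.is_uppercase() {
            out.push(c);
            continue;
        }
        let prev_is_lower = i > 0 && chars[i - 1].is_lowercase();
        // An uppercase letter followed by a lowercase one normally starts a
        // new word ("PodList" -> "pod_list"), except when the lowercase
        // letter is a plural 's' ending an acronym ("URLs" -> "urls").
        let starts_new_word = {
            let next_is_lower = chars.get(i + 1).map_or(false, |n| n.is_lowercase());
            let plural_acronym = chars.get(i + 1) == Some(&'s')
                && i + 2 == chars.len()
                && i > 0
                && chars[i - 1].is_uppercase();
            next_is_lower && !plural_acronym
        };
        if i > 0 && (prev_is_lower || starts_new_word) {
            out.push('_');
        }
        out.extend(c.to_lowercase());
    }
    out
}
```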
Currently all parent `mod.rs` files are emitted like this:
```rust
mod foo;
pub use self::foo::*;

mod bar;
pub use self::bar::*;
```
However, Rust silently allows a re-export from `self::bar` to shadow a re-export from `self::foo` when using `*` re-exports. It only complains in code that tries to use the type, which in this case would probably be user code. This should not be allowed to happen.
The only fix seems to be to collect all the expected types in each submodule, and re-export them explicitly. Each submodule is always completely emitted, including its operations, so collecting the types and listing them in the parent shouldn't be hard.
```rust
mod foo;
pub use self::foo::{
    Foo,
    FooGetOptional, FooGetResponse,
    FooListOptional, FooListResponse,
};

mod bar;
pub use self::bar::{
    Bar,
    BarGetOptional, BarGetResponse,
    BarListOptional, BarListResponse,
};
```
... so that it shows up on crates.io
As the `watch_event` test notes, all list operations like `Pod::list_core_v1_namespaced_pod` have a response type like `ListCoreV1NamespacedPodResponse(PodList)`. But most of these list operations support a `watch: Option<bool>` parameter, and return a watch response when it's set.
Currently the user has to work around this by choosing between `ListCoreV1NamespacedPodResponse` or `WatchCoreV1NamespacedPodListResponse` to deserialize the response. This `WatchCoreV1NamespacedPodListResponse` is a seemingly unrelated type; it's actually the response type of the (deprecated) `Pod::watch_core_v1_namespaced_pod_list` operation.
Ideally the list response types like `ListCoreV1NamespacedPodResponse` should be able to support both cases. But it's not clear how to set up the enums and their deserializers to be able to do this.
The Python client appears to have the same problem at first glance - `list_namespaced_pod_with_http_info` always parses the response as a `V1PodList` regardless of the `watch` parameter. The client has a separate `watch.Watch` class and the user is expected to use this instead.
The C# client appears to have a version of the list operation that returns a `PodList`, and another version that returns a raw HTTP response object. This latter object has an extension method `.Watch<T>` to parse any response as a watch response of `T` objects.
I recently published a comment and tagged @Arnavion in anguslees/kubernetes-rs/issues/9.
While working on getting the `kubernetes` and `k8s-openapi` crates to work together, I came up with a list of ergonomic changes I'd like to run by the author, work with to implement, or consider forking this project to experiment with. I'd really like your thoughts on the feasibility and ergonomics of these changes.

- Use structs with default params instead of lists of arguments for constructing requests. A trait to generically create variants of these (e.g. with `watch: true`, `continue: true`, `resourceVersion: X`) would also be very useful.
- Helper methods and a common trait for accessing important primitives such as `ObjectMeta`, `ListMeta`, and their intersection, which contains important values like `resourceVersion` and `continue`.
- API cleanup. In particular, `try_from_parts` wasn't ergonomic because there were too many response types, so a cleaned-up way to parse a response as an enum of `Response(...)`, `WatchEvent(...)`, `Status(Status)`, `StatusCode(StatusCode)` would be nice.
- Version merging, or some better story for API handling. I.e. if Kubernetes 1.12 adds a new field to Pods that is `foo: Option<Bar>`, then it is safe to merge that into the constructor for the Kubernetes 1.11 API. It looks like with this approach we would probably end up with a lot more code re-use and forward compatibility, and maybe less need for the various `features`.
I haven't begun to work on these things, but I think I might like to.
```powershell
$spec = iwr 'https://github.com/kubernetes/kubernetes/blob/master/api/openapi-spec/swagger.json?raw=true' | ConvertFrom-Json
$operations = $spec.paths.PSObject.Properties | ?{ ($_.Value.get.'x-kubernetes-action' -eq 'list') -and ($_.Value.get.parameters.Length -gt 0) } | %{ New-Object psobject -Property @{ 'Name' = $_.Name; 'Parameters' = $_.Value.get.parameters | % name | sort } }
$first = $operations[0].Parameters -join ','; $operations | select -Skip 1 | ?{ ($_.Parameters -join ',') -ne $first }
```
suggests all list operations in the spec have the same optional parameters (except for the API discovery ones at the crate root that have no parameters at all): `continue`, `fieldSelector`, `labelSelector`, `limit`, `resourceVersion`, `timeoutSeconds`, `watch`.
So all list operation functions could take a single parameters type with those fields minus `watch`, and all watch operation functions could take the same type with those fields minus `continue` and `watch`.
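A sketch of what that shared optional-parameters type could look like (field set taken from the spec query above; the name, lifetimes, and borrow choices are illustrative):

```rust
// One shared type for all list operations; the watch variant would drop
// `continue_` (and neither variant has a `watch` field).
#[derive(Clone, Copy, Debug, Default)]
pub struct ListOptional<'a> {
    pub continue_: Option<&'a str>,
    pub field_selector: Option<&'a str>,
    pub label_selector: Option<&'a str>,
    pub limit: Option<i64>,
    pub resource_version: Option<&'a str>,
    pub timeout_seconds: Option<i64>,
}
```

Callers would then set only the fields they need, e.g. `ListOptional { limit: Some(500), ..Default::default() }`.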
- `io.k8s.api.core.v1.ConfigMap`'s `binaryData` field (new in 1.10)
- `io.k8s.api.core.v1.Secret`'s `data` field

Both are `map[string][]byte` in the golang client.
Note that these are not JSON arrays of JSON numbers in the actual API; they're (standard-encoding) base64-encoded JSON strings.
Hi @Arnavion

As I was exploring the available options for Kubernetes clients for Rust, I found out that @ynqa in ynqa/kubernetes-rust#10 is moving away from your library and is trying to use https://github.com/kubernetes-client/gen instead.

On the surface it seems like a reinvention of the wheel, so I decided to find out whether there is a substantial difference between kubernetes-client/gen and your approach, and why kubernetes-client/gen might not want to accept a client that was not generated with their generator.

I believe that you did some kind of research before writing your own generator, so there must be a reason for it. Could you please share your perspective?
Currently it contains the element type. E.g. `DeleteCollectionNamespacedPodResponse` should have an `OkValue(PodList)` variant instead of `OkValue(Pod)`.
Upstream's generator processes operation names to strip each string in the operation's tags. E.g. `listCoreV1NamespacedPod` is tagged with `"core_v1"` and becomes `list_namespaced_pod` in the Python client.
The codegen knows:
The codegen should not have to know:
A `DELETE` request to a pod's `selfLink`.

The current codegen expects any client that impls `::Client`. But it also deserializes the response internally, so it has to require the request to be executed synchronously.

It also doesn't give the user the ability to override how the response is deserialized. This is especially important given issues like #8 and #9, where the API is unusable until the code generator patches mistakes in the upstream spec, or #10, which is a bug in the codegen itself. Ideally the process of creating a request with the URL and parameters should be separate from executing the request and parsing the response.
Alternatives:

- Begin/end-style API pairs with `Client`: `foo_begin<C: Client>(&C, &Args) -> C::Request` + `foo_end<C: Client>(C::Response) -> Result<FooResponse, C::Error>`. `C::Response` still has to be `std::io::Read`, so this limits asynchronous clients. An asynchronous client could not naively `concat2()` its response stream, because that wouldn't work for watch/watchlist.
- Begin/end-style API pairs with `http` types: `foo_begin(&Args) -> http::Request` + `foo_end(http::Response<Vec<u8>>) -> Result<FooResponse>`. The response has to be `http::Response<Vec<u8>>` so that it can be `std::io::Read`, so this limits asynchronous clients in the same way.
- Only-request API with `http` types; deserializing the response is the user's responsibility: `foo(&Args) -> http::Request` + user deserializes the response body to `FooResponse`. The `match response.status_code() { OK => ... }` boilerplate and hunting for the type to deserialize to (in the usual case where the codegen is correct) is pushed down to the user. See the https://github.com/Arnavion/k8s-openapi-codegen/tree/sans-io branch for an implementation of this idea.
- Something else?
v1.15 adds a new kind of watch event called `BOOKMARK`.

So the `WatchEvent` codegen needs to be version-dependent, or check for the presence of an `allowWatchBookmarks` parameter in the generated `WatchOptional`.

Also, the `ListOptional` type has an `allowWatchBookmarks` parameter that is watch-specific, so it needs to be removed.

Will have to differentiate based on the `kind` field, probably by parsing it as a `serde_json::Value` first.
... since `?follow=true` works just like a watch/watchlist API - the response does not end until the client closes it.
The golang client has a code generator that takes a file containing the definition of a custom resource and generates CRUD functions for it ("clientsets").
It might be useful to expose this part of the code generator as an API so that a user can call it from their crate's build script. For example, given this definition it could output this code plus a `Resource` and `Metadata` impl.
... to future-proof the crate build.
CI can still enforce warnings with `cargo rustc -- -Dwarnings` instead of `cargo build`.
The `ByteString` newtype wraps a `Vec<u8>` so that it can serialize it into / deserialize it from base64 strings. They're used for `type=string` + `format=byte` values in the OpenAPI spec.
The problem with the newtype is that the user cannot create a borrow of the newtype, `&ByteString`, if they have just a `&[u8]`; they still need to allocate a `Vec<u8>`, wrap that in a `ByteString`, and borrow that. The way around this is to make the newtype a wrapper around a generic `T: AsRef<[u8]>`, which would just become annoying to use by infecting every struct that has a value of this type.
The code generator is obviously in control of the `Serialize` and `Deserialize` impls of all types, so it could emit the values as `Vec<u8>` directly and use `base64` to (de)serialize them in the impls of the structs that use them. The problem is that these values can be nested inside types like `Vec<ByteString>` and `BTreeMap<String, ByteString>`, so every such type would also need custom (de)serialize code generated instead of piggy-backing on the serde-provided impls.
Empirically, the only uses of `ByteString` are:
```
$ C:\find.ps1 -Name *.rs -Pattern 'crate::ByteString' | % Line | % Trim | Sort-Object -Unique
let mut value_binary_data: Option<std::collections::BTreeMap<String, crate::ByteString>> = None;
let mut value_ca_bundle: Option<crate::ByteString> = None;
let mut value_certificate: Option<crate::ByteString> = None;
let mut value_data: Option<std::collections::BTreeMap<String, crate::ByteString>> = None;
let mut value_request: Option<crate::ByteString> = None;
pub binary_data: Option<std::collections::BTreeMap<String, crate::ByteString>>,
pub ca_bundle: crate::ByteString,
pub ca_bundle: Option<crate::ByteString>,
pub certificate: Option<crate::ByteString>,
pub data: Option<std::collections::BTreeMap<String, crate::ByteString>>,
pub request: crate::ByteString,
```
so it might be okay to special-case them.
Codegen works fine, but I can't deploy a minikube cluster to run the tests against (kubernetes/minikube#2942). So this is currently blocked.
`#[cfg(feature = "v1_9")]` works for selecting v1.9, but selecting a range like `>= v1.9` requires enumerating every such version manually.

Possibly define macros like `ge_1_9!()` and `lt_1_9!()` (or const fns) that are always available and expand to true or false depending on which features are enabled.
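For illustration, here is what the expansions could look like in a build where (hypothetically) only a `>= 1.9` feature is enabled; for an older-version build the codegen would swap the bodies:

```rust
// Always-available macros that expand to a boolean literal.
macro_rules! ge_1_9 { () => { true }; }
macro_rules! lt_1_9 { () => { false }; }

// Both branches still type-check; the dead one is optimized away.
fn apps_api_group() -> &'static str {
    if ge_1_9!() { "apps/v1" } else { "apps/v1beta2" }
}
```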
`time-passes` for an `--all-features` build shows most of the non-LLVM time breaks down like this:

```
time: 82.388; rss: 951MB    expansion
time: 57.681; rss: 2107MB   item-bodies checking
time: 57.331; rss: 3152MB   borrow checking
```
Manual impls at codegen time would reduce the time spent in expansion for all users of the crate, at the cost of duplicating the functionality of the derives into the code generator.
Investigate if the complexity is worth it, especially since serde's derives are the most complicated to reimplement.
... since only k8s-openapi-codegen needs it, not k8s-openapi-derive.
Currently docs are compiled with a special feature, `dox`, that emits docs for all supported versions simultaneously. Of course, this isn't really possible to do for regular code, since only one version's feature can be enabled at a time; `dox` cheats by bypassing the restriction and making the private version-specific modules public.

Thus the `dox` docs talk about `k8s_openapi::v1_13::api::core::v1::Pod`. But in actual code `k8s_openapi::v1_13` is private and its contents are re-exported from the crate root, so the user would actually use `k8s_openapi::api::core::v1::Pod`.
Currently the top-level modules are emitted with this doc comment:
> This module is only emitted because the `dox` feature was enabled for generating docs. When the corresponding feature for this version is enabled instead, this mod will be private and its contents will be re-exported from the crate root.

... but this is probably not noticeable enough, and it won't be visible in deep links anyway. If the user doesn't realize this, they might be very confused when the compiler complains that `k8s_openapi::v1_13` is private.
Perhaps drop the idea of generating docs for all supported versions, and have the docs.rs docs just be those of the highest supported version. Perhaps add a bolded notice at the top of the crate root docs:
> You are currently looking at the docs for the v1.13 version. To see docs for a different version such as v1.12, generate the docs locally with `cargo doc --open --no-deps --features 'v1_12'`.
Line 9 in 2de1252
None of the v1_* features are enabled on the k8s-openapi crate.
The k8s-openapi crate requires a feature to be enabled to indicate which version of Kubernetes it should support.
If you're using k8s-openapi in a binary crate, enable the feature corresponding to the minimum version of API server that you want to support. It may be possible that your binary crate does not directly depend on k8s-openapi. In this case, add a dependency on k8s-openapi, then enable the corresponding feature.
If you're using k8s-openapi in a library crate, add a dev-dependency on k8s-openapi and enable one of the features there. This way the feature will be enabled when building tests and examples of your library, but not when building the library itself. It may be possible that your library crate does not directly depend on k8s-openapi. In this case, add a dev-dependency on k8s-openapi, then enable the corresponding feature.
Library crates must *not* enable any features in their direct dependency on k8s-openapi, only in their dev-dependency. The choice of Kubernetes version to support should be left to the final binary crate, so only the binary crate should enable a specific feature. If library crates also enable features, it can cause multiple features to be enabled simultaneously, which k8s-openapi does not support.
If you want to restrict your library crate to support only a single specific version or range of versions of Kubernetes, please use the k8s_* version-specific macros to emit different code based on which feature gets enabled in the end.
Please see the k8s-openapi crate docs for more details.
Lines 11 to 15 in 2de1252
Both v1_{} and v1_{} features are enabled on the k8s-openapi crate. Only one feature can be enabled at the same time.
The feature indicates which version of Kubernetes the k8s-openapi crate should support.
If you have enabled both of these features yourself, please remove one of them. If you are writing a library crate, do not enable any features *at all*. Library crates should not enable any features on the k8s-openapi crate.
If you have not enabled one or both of these features yourself, then one of the library crates in your dependency graph *has*. Locate which library crates in your dependency graph depend on k8s-openapi and enable one of its features, and file a bug against them. You can search your Cargo.lock for "k8s-openapi" to discover these crates.
These are common to spec-generated clients of all languages, but it helps Rust devs who haven't used the other languages' clients before.
Resource types are under `api`, etc., then under modules corresponding to the API group and version, like `core/v1/Pod` -> `core::v1::Pod`.

Functions are associated with the relevant resource type. E.g. functions dealing with pods are found in `api::core::v1::Pod`'s docs.
Exceptions: Subresources like `Status` or pod logs are returned by functions of the base resource, e.g. `Foo::read_*_status` or `Pod::read_namespaced_pod_log`.
Exceptions to the exceptions: Some subresources like `api::authentication::v1::TokenRequest` and `api::core::v1::Binding` are created from functions associated with their own type (`TokenRequest::create_namespaced_service_account_token` and `Binding::create_namespaced_pod_binding` respectively) instead of the parent resource's type.
Create a resource with `create_*`. Get an existing resource with `read_*`. Update with `patch_*`. Delete with `delete_*`. Delete a collection with `delete_collection_*`. Get a list of resources with `list_*`. Watch with `watch_*`.
`_namespaced_` in the function name is just a convention for resources that are namespaced. (Are there resources that have both `_namespaced_` and non-`_namespaced_` functions to compare with?)
The API server exposes its spec at `/openapi/v2`. Is it worth allowing a user to run `k8s-openapi-codegen` against this endpoint to generate a custom API bindings crate?
Edit: Starting with v1.15, CRDs are also included in the spec:

> beta: CustomResourceDefinition OpenAPI Publishing
>
> OpenAPI specs for native types have been served at /openapi/v2 by kube-apiserver for a long time, and they are consumed by a number of components, notably kubectl client-side validation, kubectl explain and OpenAPI based client generators.
>
> OpenAPI publishing for CRDs will be available with Kubernetes 1.15 as beta, yet again only for structural schemas.
My use case is only OpenShift, but there might be other specs.
Does it make sense to "just" include OpenShift's OpenAPI behind feature flags, or should there be a generic mechanism to support supplemental specs?
The schema defines `RawExtension` to be an object with a required `raw` property that's a byte-format string, so the codegen emits it as `struct RawExtension { raw: ByteString }`.

In reality the schema is wrong. The server-side `RawExtension.raw` is indeed a `[]byte`, but its contents are the JSON-serialized form of some other value, and the `RawExtension` is serialized to JSON by emitting this serialized form directly. So `RawExtension` should be represented by `serde_json::Value`.
The spec defines these as returning `type=string` responses, so the response types (like `ConnectGetNamespacedPodExecResponse`) are emitted as streaming `String` responses.
However, this is only true if the client is using websockets and negotiated one of the base64 protocols (`base64.channel.k8s.io` or `v4.base64.channel.k8s.io`). In general, the response is a bag of bytes. But since these are not an HTTP request-response pair, the response types should probably have no data in their variants anyway. Instead, the user will likely use a websocket crate to handle the connection after constructing the original `http::Request`.
There's also a point to be made that returning `http::Request` from the API functions is not useful, since none of the websocket libraries can use it directly for the handshake request (tungstenite and ws each have their own `Request` types that are some combination of URL and headers). But the user has to decompose the `http::Request` even for regular HTTP functions to add an authority to the URL, so this alone is not a convincing reason. It's also possible that websocket crates could start using `http::Request` for the initial handshake request in the future.
Real minikube cluster.
- Use a `reqwest::Client` and set it up with the identity described in `~/.kube/config`.
- `kubectl get` / `Client::get` some pods / deployments to verify the JSON deserialization.
- `kubectl apply` / `Client::put` some pods / deployments to verify the JSON serialization.
Can easily verify all supported Kubernetes versions.
👍 Can easily add support for new Kubernetes versions. Just add a new env row to `.travis.yml` and new `#[cfg]`'d module imports in every test.
Cluster startup time is negligible compared to building the tests crate, though since the clusters cannot coexist there needs to be a separate Travis run defined for every version.
Minikube 0.26 and 0.27 are broken in Travis but 0.26 is also broken in general. Use 0.25.2.
Does not support 1.10 because of the issue described in Azure/acs-engine#2404
Versions that are supported are not the latest ones for which this crate has codegen:
The following Kubernetes versions are available when using the localkube bootstrapper:
- v1.10.0
- v1.9.4
- v1.8.0
- v1.7.5
but this doesn't seem to be a problem.
Checked-in static files.

`serde_yaml`'s serialization is stable.

`--all-features`
First of all, thanks for creating this crate. It is really great. I am using it in https://github.com/andygrove/ballista and it is working well. However, I have a new requirement now that I don't think there is a solution for.
I would like to be able to parse k8s YAML templates and then do the equivalent of `kubectl apply -f yamlfile`. To implement this, I need the ability to deserialize YAML into the standard API structs. Is there any way to do this using this crate?
Currently the `remove_delete_options_body_parameter` fixup removes the `DeleteOptions` body parameter, claiming it's redundant with the other query parameters. However, the `DeleteOptions` body parameter does have a complex `preconditions: apimachinery::pkg::apis::meta::v1::Preconditions` field that is not present (and can't be expressed) in the query string parameters.

The fixup should be reversed to keep the body parameter and remove the query string parameters. This does mean that callers will need to create owned values (like `String`) instead of passing borrows (like `&str`), but allowing both the query string parameters and the body parameter is worse.
A related problem is that currently `DeleteOptions` is emitted with `api_version` and `kind` fields (and without a `Resource` impl) since the spec defines it as having multiple GVKs. Need to test with `kubectl` whether an actual `DeleteOptions` serialization emits these fields at all, and with what values if it does.

Perhaps there is also value in renaming `DeleteOptions` to `DeleteOptional` to match the other optional types.
`delete_collection_*` API functions (`"x-kubernetes-action": "deletecollection"`) currently have parameters that are a mix of delete and list parameters combined. This means they include the bogus `watch` parameter of pre-fixup list operations.
The golang clients have a `DeleteCollection(options *v1.DeleteOptions, listOptions v1.ListOptions) error` function for this, so this crate should do the same with `meta::DeleteOptions` (see above) and `ListOptional`.
I was trying to use the `Patch` struct to run `patch_namespaced_ingress`, but it appears, based on both the documentation and the code, that the only valid value of `body` is an empty struct reference `&Patch {}`.
When I do that, I get the following error at run time:

```
thread 'main' panicked at 'failed to patch ingress: ErrorMessage { msg: "Invalid method: PATCH" }
```
This is likely because the JSON body on the API request needs values in the form of a JSON Patch (the `op`/`path`/`value` format), e.g.:
```json
{
    "op": "replace",
    "path": "/some/path",
    "value": "foo"
}
```
At first I thought this was me being new to Rust and just missing how to pass the `body` parameter correctly - like maybe I needed to build a JSON body with serde and cast it to a `Patch` struct. But then I showed my issue to another developer and they came to the same conclusion.
I'm using the `v1_11` feature, for what it's worth. If there's any other information I can provide to clarify what I'm seeing, let me know! Thanks for building this!
```rust
/// The serialization format is:
///
/// <quantity> ::= <signedNumber><suffix>
```
Blocked on kubernetes/minikube#3164. Use minikube v0.30.0 and the new crictl for v1.12.
I am using IntelliJ with the Rust plugin to work on an ingress controller. Right now it is unable to figure out which of the API versions are activated because it can't get past:
```rust
macro_rules! mods {
    ($($name:ident $name_str:literal)*) => {
        $(
            #[cfg(feature = $name_str)] mod $name;
            #[cfg(feature = $name_str)] pub use self::$name::*;
        )*
    };
}

mods! {
    v1_8 "v1_8"
    v1_9 "v1_9"
    v1_10 "v1_10"
    v1_11 "v1_11"
    v1_12 "v1_12"
    v1_13 "v1_13"
    v1_14 "v1_14"
    v1_15 "v1_15"
}
```
Would it be OK if we expanded this to regular Rust code? It would make my life using the OpenAPI types much easier!
k8s-openapi can compile with multiple features enabled, but it isn't clear if it makes sense to allow it.
If someone writes a crate for their own cluster, they'd just enable the single feature that matches their cluster.
If someone writes a crate that would work with an arbitrary range of versions but wants to use something available in all of them, like `core::v1::Pod`, they currently have to import each version's `Pod` anyway. This won't compile if more than one feature is enabled.
If someone writes a crate that uses different things on different versions, like `apps::v1beta1` vs `apps::v1beta2` vs `apps::v1`, they still have to import each version's corresponding thing anyway. This also won't compile if more than one feature is enabled.
I originally thought multiple features would be enabled simultaneously because lib crates would take hard dependencies on particular versions that have what they need. For example, if a lib crate wanted `apps::v1`, which was introduced in 1.9, it would depend on `features = ["v1_9"]` and `use k8s_openapi::v1_9::api::apps::v1 as apps;`. This would work, but it would bloat compilation time for each library that chose a different feature, and it would complicate passing types between multiple libraries that depend on different versions but don't care about the difference (e.g. a `PodSpec` from crate foo that depends on `v1_9` can't be passed to crate bar that depends on `v1_10`).

That's why I introduced the conditional compilation macros, so that lib crates would not depend on a particular version but use whatever's available. But the version-specific module imports mean the lib crates still need to have different imports for each version in the crate, or they won't compile. In fact, this means that adding support for a new version is currently a semver-incompatible change, since all crates would stop compiling until they added a new `k8s_if_{version}! { use k8s_openapi::v1_{version}::foo::bar; }` block.
It seems to make more sense to:

- Have a way of importing types from that version without explicitly writing the module name out.

```rust
k8s_if_le_1_8! {
    compile_error!("Only Kubernetes 1.9 and above are supported");
}

k8s_if_ge_1_9! {
    use k8s_openapi:: k8s_version!() ::api::apps::v1 as apps;
}
```

- Assert that only one feature is enabled. This would prevent building docs for all the versions at the same time, though, so only enforce it under `#[cfg(not(feature = "dox"))]`.
The counterpoint is that an API that has the same import between different versions isn't necessarily semver-compatible anyway. For example, if there had been a point where this crate only supported v1.7 and v1.8, someone could have written code against `apiextensions::CreateApiextensionsV1beta1CustomResourceDefinitionResponse` expecting it to return `Other` on success. But this would break when a new release added support for v1.9 and the user enabled the `v1_9` feature, since it would now return `Created`.
Currently the generated code for `String` responses (`Pod::read_namespaced_pod_log`, etc.) returns `Ok(&buf[..valid_up_to])` in the case of truncated UTF-8, but `Err` in the case of invalid UTF-8 anywhere in the buffer.
However, it's useful to return `Ok(&buf[..valid_up_to])` even in the case of invalid UTF-8, and only return the `Err` if `valid_up_to` is 0. This way the caller gets as much of the parsed response as possible, rather than a variable amount depending on how they happened to chunk their response buffer.
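A sketch of the proposed behavior (stand-alone, not the generated code):

```rust
// Return the longest valid UTF-8 prefix; error only if nothing at the
// front of the buffer decodes.
fn parse_prefix(buf: &[u8]) -> Result<&str, std::str::Utf8Error> {
    match std::str::from_utf8(buf) {
        Ok(s) => Ok(s),
        Err(err) if err.valid_up_to() > 0 => {
            Ok(std::str::from_utf8(&buf[..err.valid_up_to()]).expect("prefix is valid"))
        }
        Err(err) => Err(err),
    }
}
```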
I don't think there's a need to specialize on patch versions (1.7.2 vs 1.7.3), so just separate minor versions will be good enough (1.7.x vs 1.8.x).
Options for source organization:

- Separate git branches for every version, including the code generator code.
- Separate git branches for every version, and a separate branch for just the code generator code.
- Separate directories in the same branch.
Options for publishing:

- Multiple releases of a single crate - `[email protected].*`, `[email protected].*` - with users pinning `=` constraints with a specific patch version in their Cargo.toml.
- Separate crate for every version - `k8s-openapi-1-7@*`, `k8s-openapi-1-8@*`
  - 👍 Access to the full crate version allows semver-major changes, like changing the root namespace.
  - 👍 For websites like crates.io or docs.rs that always default to showing the latest version, this makes it clear that older Kubernetes versions are supported.
  - 👎 Lots of crate names.
cc @anowell
Hi, this repo is very interesting to me as someone interested in Kubernetes and Rust. I stumbled across it a couple of days ago in a random page of "kubernetes rust" search results, and then just came across you in the `minikube` repo as well.
May I be nosy? What is your use-case or planned use for this? Are you planning to write clients/controllers/random utils with it?
I'd be curious if you've thought about being able to define models in Rust and emit CRDs or custom API object definitions? Or maybe that's more appropriate for a separate repo/gen-tool?
https://dev.azure.com/arnavion/k8s-openapi-codegen/_build/results?buildId=21&view=logs
Retry, and handle 409s too just in case.
Hi
Is there a kubernetes client that makes use of this project?
//Max