paperclip-rs / paperclip
WIP OpenAPI tooling for Rust.
License: Apache License 2.0
Right now, whenever we encounter generic schemas with no type information, we default to string, when it should really be some generic parameter. We could default to serde_json::Value for that parameter, but since an (OpenAPI-compatible) server/client can support sending/receiving formats other than JSON (for the same operation!), we need to think about how we should deal with codegen.
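As a rough illustration of the generic-parameter idea (Payload and its field are made up for this sketch, not paperclip's actual output), the emitted type could take a type parameter that defaults to the current string fallback while letting callers pick another representation:

```rust
// Hypothetical generated type: the untyped schema becomes a generic
// parameter defaulting to String (today's fallback) instead of being
// hard-coded. Defaulting to serde_json::Value would work the same way,
// modulo the extra dependency.
#[derive(Debug)]
struct Payload<T = String> {
    body: T,
}

fn main() {
    // With just `Payload` as the annotation, the default kicks in.
    let fallback: Payload = Payload { body: "raw".to_string() };
    // Callers can still opt into a concrete type explicitly.
    let typed: Payload<i64> = Payload { body: 42 };
    assert_eq!(fallback.body, "raw");
    assert_eq!(typed.body, 42);
}
```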
Right now, we support generating modules, but we should also support generating crates with appropriate metadata, dependencies, etc. (and maybe tests in the future). This is particularly useful for the CLI.
Before we start working on other features, we need to make sure that the wrappers are complete. This involves wrapping/proxying all functions and methods in actix_web::App::* and (along with structs) in actix_web::web::*[::*].
The following needs to be accomplished before we can go for an initial release:
- description field.
- Sendable trait for API call and implement it for all fulfilled builders.
- RequestBuilder, since we don't (currently) support any kind of auth.

I'm planning to add a Gitbook for documenting paperclip further. It'll have detailed examples and documentation about the currently supported features. This is inspired by serde.rs.
Given an OpenAPI spec, the crate should support generating the appropriate modules with struct definitions from the JSON schemas.
This is an epic issue which addresses a variety of tasks that should've otherwise been broken down into subtasks, but I didn't realize this project was going to be way more interesting than I originally thought, so here goes...
The following features are covered:
- $ref schema references.
- Schema as a trait (implemented for DefaultSchema) for extending with custom definitions to cover x-* keys and values.
- #[api_schema] proc macro attribute for auto-deriving Schema for custom definitions.
- SchemaEmitter as a trait (implemented for DefaultEmitter) for emitting struct definitions and modules appropriately with fine-grained methods.
- X from module a::b::c should be written as a::b::c::X (instead of just X).

If a function returns a container-wrapped schema value (such as Json<T>), then we should take that as the 200 response schema.
Right now, we don't care about paths with parameters. These are returned by web::Path<T>, where T could be an N-tuple or something Deserialize-able. We should get the correct types and record those parameters for an operation (all the way up to the App wrapper, as usual).
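To sketch what "recording those parameters" amounts to (purely illustrative; match_template is a made-up helper, not part of paperclip or actix), here is how a templated path can be matched against a concrete request path to recover the parameter values:

```rust
// Illustrative only: a tiny matcher showing how a templated path like
// "/api/pets/{id}" maps onto a concrete request path, yielding the
// (name, value) pairs an operation would need to record.
fn match_template<'t, 'p>(template: &'t str, path: &'p str) -> Option<Vec<(&'t str, &'p str)>> {
    let tsegs: Vec<&str> = template.trim_matches('/').split('/').collect();
    let psegs: Vec<&str> = path.trim_matches('/').split('/').collect();
    if tsegs.len() != psegs.len() {
        return None;
    }
    let mut params = Vec::new();
    for (t, p) in tsegs.iter().zip(&psegs) {
        if t.starts_with('{') && t.ends_with('}') {
            // Templated segment: record the parameter name and its value.
            params.push((t.trim_start_matches('{').trim_end_matches('}'), *p));
        } else if t != p {
            // Literal segment mismatch: this path doesn't match.
            return None;
        }
    }
    Some(params)
}

fn main() {
    let params = match_template("/api/pets/{id}", "/api/pets/42").unwrap();
    assert_eq!(params, vec![("id", "42")]);
    assert!(match_template("/api/pets/{id}", "/api/users/42").is_none());
}
```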
Right now, whenever the generated CLI makes an API call, we get:
thread 'tokio-runtime-worker-0' has overflowed its stack
fatal runtime error: stack overflow
Aborted (core dumped)
UPDATE:
Well, this is interesting, because it happens only in test-k8s-cli (which we generate and build during testing), and (as mentioned above) only when we actually make the API call. Debugging with LLDB showed that this happens when we encounter the cli::response_future function call in the generated main.rs. That function is quite huge for test-k8s-cli, because there are ~1k subcommands and it's probably too big to be placed on the stack. This doesn't happen in release mode, so I'm guessing it's got to do with stuff getting optimized away.
Right now, the generated code has a basic error enum for representing an API operation failure. So, whenever we get a non-2xx response, we bail out with a generic variant which holds the Response body in a mutex. This is okay, but if the operations in some spec share one or more error models, then we should parse them appropriately and extend this ApiError with those error models.
NOTE: Consider picking this up after #33 i.e., after we land on how we can customize unknown mime types using custom de/serializers.
We should make use of the consumes and produces fields to set or accept supported content types. The emitter state will support setting de/serializers for those mime types. This will be used by the emitter to set request bodies / parse response bodies corresponding to those mime types using the provided de/serializers. Moreover, they should be feature-gated so that users of the generated lib/bin can also choose what they prefer.
The generated CLI forwards the streaming body directly to stdout, but it seems to block after printing most of the response.
The spec has two fields, host and basePath, and we use neither. Instead, we have a base_url in EmitterState which does the job. But we should make use of the host and basePath fields and override base_url if they exist.
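A minimal sketch of that precedence (hedged: resolve_base_url is not paperclip's API, and the https default scheme is an assumption for illustration):

```rust
// Hypothetical helper: prefer the spec's host/basePath over the default
// base_url whenever they're present.
fn resolve_base_url(default_url: &str, host: Option<&str>, base_path: Option<&str>) -> String {
    match host {
        // host from the spec wins; basePath is appended if given.
        Some(h) => format!("https://{}{}", h, base_path.unwrap_or("")),
        // No host in the spec: fall back to the configured base_url.
        None => default_url.to_string(),
    }
}

fn main() {
    assert_eq!(resolve_base_url("http://localhost:8080", None, None), "http://localhost:8080");
    assert_eq!(
        resolve_base_url("http://localhost:8080", Some("pets.com:8000"), Some("/v2")),
        "https://pets.com:8000/v2"
    );
}
```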
Currently, we're using openssl for setting root certs and client verification (optionally). Users should be able to opt out of this feature if they don't need it. It's sensible to add a feature gate, say custom-tls, for enabling this option.
Similar to #45, we also don't care about query parameters atm. The implementation would be pretty much the same as for path parameters; the only difference would be the use of web::Query<T> instead of web::Path<T>.
The OpenAPI spec allows us to define enums, but we don't support that in codegen or the plugin. Let's start with adding that support in the plugin.
It looks like the host should be able to be defined with a port number, given https://swagger.io/docs/specification/2-0/api-host-and-base-path/. However, if I take paperclip/openapi/tests/pet-v2.yaml and change:
swagger: "2.0"
host: pets.com
...
to:
swagger: "2.0"
host: pets.com:8000
...
I get an error:
$ paperclip test.yaml --api v2
Cannot parse host "pets.com:8000": invalid domain character
Right now, we're generating the raw schemas as and whenever we encounter them. Instead, we should collect them and pass them all the way to the App wrapper so that we can put them in the definitions field and use $ref everywhere else.
In actix, scopes offer grouping of resources. We should keep track of that and reflect it in the generated spec.
We should have a bin target for installing directly using cargo install, so that we can generate the client crate through a CLI instead of just using build scripts. The CLI should understand and act on all the options that are supported by the build script.
Actix uses web::Json for marking JSON payloads. Whenever we encounter that in a function signature, we can assume that it's a known schema and that it's a body parameter for the operation. This involves changes to the #[api_v2_operation] macro.
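A toy illustration of the convention (the Json newtype and Pet below are stand-ins, not actix's or paperclip's types): the macro would spot Json<Pet> in the signature and record Pet as the operation's body schema.

```rust
// Stand-ins, not actix's types: Json<T> mimics web::Json, and Pet is a
// hypothetical known schema.
struct Json<T>(T);

struct Pet {
    name: String,
}

// A handler like this tells the macro: the body parameter is Pet.
fn add_pet(body: Json<Pet>) -> String {
    body.0.name
}

fn main() {
    let created = add_pet(Json(Pet { name: "Fido".into() }));
    assert_eq!(created, "Fido");
}
```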
The following need to be covered:
Schedule: 0.2.1 (follow the project to see the relevant issues).
Relevant discussion: actix/actix-web#310
It would be nice to support transforming a v3 spec to v2 using the paperclip CLI. I think v2 and v3 only change in their structure, but if there's something more, then we can use vendor extensions to cover whatever we need.
The first step would be to add a wrapper for actix_web::App so that we can record the API operations and host them under a dedicated path.
Right now, we're enforcing higher precedence for an operation's necessary parameters if they collide with an object's necessary fields. This is fine, but we're only setting the value in the param_ fields in the struct, while we have defaults in the actual object. We should store the same value in both places for consistency.
Firstly, paths should be unique, including templated ones; i.e., /api/foo/{bar} and /api/foo/{baz} are the same, so this should be disallowed. Also, parameters associated with templated paths should always be mentioned in global/local parameters.
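One way to detect such duplicates (illustrative only; normalize is a hypothetical helper, not paperclip's code) is to erase parameter names before comparing paths:

```rust
// Normalize parameter names so that /api/foo/{bar} and /api/foo/{baz}
// compare equal, letting us reject the duplicate at registration time.
fn normalize(path: &str) -> String {
    path.split('/')
        .map(|seg| {
            if seg.starts_with('{') && seg.ends_with('}') {
                "{}" // erase the parameter name, keep the placeholder
            } else {
                seg
            }
        })
        .collect::<Vec<_>>()
        .join("/")
}

fn main() {
    // Same template shape, different parameter names: considered equal.
    assert_eq!(normalize("/api/foo/{bar}"), normalize("/api/foo/{baz}"));
    // Different literal segments: still distinct.
    assert_ne!(normalize("/api/foo/{bar}"), normalize("/api/qux/{baz}"));
}
```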
We have a CrateMeta struct for passing crate metadata, which (currently) supports setting the package name, version and authors. We should initially support setting the package name and version through the CLI (we can look at authors separately). The changes should be made in src/bin/main.rs.
The #[api_v2_schema] macro implements the Apiv2Schema trait for some serializable object, whereas #[api_v2_operation] implements the Apiv2Operation trait for functions. Both traits have associated functions for returning DefaultSchema and Operation<DefaultSchema> values. All of this will be collected by the Resource wrapper and later by the App wrapper.
This is the core idea of the next release (0.2.0). We generate a bin target with proper structopt commands, subcommands and options using the OpenAPI spec. This fancy console will call the generated client code, and we make the API request from the CLI.
reqwest supports setting timeouts for HTTP requests. We could accept that as an optional argument in the CLI and set it (if needed) before building the client.
Now that we're done with the definitions, we can move on to parsing and resolving path item objects which define the operations and reference the definitions. Let's start simple, with the focus on k8s spec:
We should:
- paths field.
- $ref pointing to definitions.

If the builder structs take simple parameters such as strings, integers, booleans, etc., then we're good (we can achieve all of those with an impl Into<Foo>).
Firstly, we can't take raw generated objects if they have requirements. For example, a DeploymentSpec needs a template and a selector, which means the spec field in a deployment should only consume a DeploymentSpecBuilder<SelectorExists, TemplateExists> to enforce parameter and field requirements.
We can't have Vec<T> as it is. We don't have to worry about allocation at this point, but we do need to worry about complex objects in the vector. An example signature would be this:
impl<Containers> PodSpecBuilder<Containers> {
pub fn volumes(self, value: Vec<Volume>) -> Self { ... }
}
Here, we need to ensure that the user has passed a fulfilled VolumeBuilder<NameExists> instead of the actual (Volume) object. Extrapolating from 1, we would be consuming impl Iterator<Item = VolumeBuilder<NameExists>> here. For BTreeMap<String, Foo>, we'd be consuming impl Iterator<Item = (String, VolumeBuilder<NameExists>)>.
We could eventually support boxed trait objects, so that the iterator can return anything that implements that trait, but that's for the future.
The deprecated field in Operation items specifies whether that operation is deprecated. We should mark the corresponding functions (with the deprecated attribute) in the generated code so that a deprecation message is issued during compilation.
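Concretely, the emitted function could look like this (the operation name and note text are illustrative, not actual paperclip output):

```rust
// Sketch: what an emitted function could look like when the spec marks
// the operation as deprecated.
#[deprecated(note = "this operation is marked deprecated in the OpenAPI spec")]
fn list_pets_v1() -> &'static str {
    "GET /v1/pets"
}

fn main() {
    // Callers still compile; rustc just emits a deprecation warning,
    // which is exactly the message we want surfaced during compilation.
    #[allow(deprecated)]
    let req = list_pets_v1();
    assert_eq!(req, "GET /v1/pets");
}
```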
The wrapper for web::Resource collects the paths, operations and relevant schemas. When this is added through App::service, they get collected into the global spec.
Currently, we support adding root certs and enabling client verification, but we also need basic auth. We should support that in the generated client if the field is specified and add an option to CLI regardless.
Note that basic auth can be specified globally or locally to some particular operation.
The first step to generating client code would be to build type-safe, compile-time-checked API objects. This is done by leveraging PhantomData and unit types. An example would be:
use std::marker::PhantomData;
use std::mem;

// Stub for the generated API object (fields elided).
#[derive(Default, Debug, Clone)]
struct ConfigMap;

struct MissingName;
struct MissingNamespace;
struct NamePresent;
struct NamespacePresent;
struct Get;
#[repr(transparent)] // for safely transmuting
#[derive(Debug, Clone)]
pub struct ConfigMapFactory<Name, Namespace> {
inner: ConfigMapBuilder,
_param_name: PhantomData<Name>,
_param_namespace: PhantomData<Namespace>,
}
#[derive(Default, Debug, Clone)]
struct ConfigMapBuilder {
inner: ConfigMap,
name: Option<String>,
namespace: Option<String>,
}
impl ConfigMap {
fn get() -> ConfigMapFactory<MissingName, MissingNamespace> {
ConfigMapFactory {
inner: Default::default(),
_param_name: PhantomData,
_param_namespace: PhantomData,
}
}
}
impl<A> ConfigMapFactory<MissingName, A> {
pub fn name(mut self, name: &str) -> ConfigMapFactory<NamePresent, A> {
self.inner.name = Some(name.into());
unsafe { mem::transmute(self) }
}
}
Then, we'll add an impl for the actual API request only when the type has NamePresent and NamespacePresent (we do this for all required parameters and fields). This means rustc won't compile the code when the user forgets some required entity. Anyway, the focus of this issue is to generate all of this for all API objects (taking required parameters and fields into account).
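Reduced to plain std so it stands alone (Factory and url are made-up names here; the real version would transmute a shared builder as in the example above), the gating pattern looks like this:

```rust
use std::marker::PhantomData;

// Unit types naming the builder's type-level state.
struct MissingName;
struct NamePresent;

struct Factory<Name> {
    name: Option<String>,
    _marker: PhantomData<Name>,
}

impl Factory<MissingName> {
    fn new() -> Self {
        Factory { name: None, _marker: PhantomData }
    }

    // Setting the required field advances the type-level state.
    fn name(self, name: &str) -> Factory<NamePresent> {
        Factory { name: Some(name.into()), _marker: PhantomData }
    }
}

// The request-building impl exists only for the fulfilled state, so
// forgetting .name(...) is a compile-time error at the call site.
impl Factory<NamePresent> {
    fn url(&self) -> String {
        format!("/configmaps/{}", self.name.as_ref().unwrap())
    }
}

fn main() {
    let url = Factory::new().name("app-config").url();
    assert_eq!(url, "/configmaps/app-config");
    // `Factory::new().url()` would not compile, which is the point.
}
```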
Right now, we're using OpenSSL for custom TLS configuration. But we can also do that with rustls, and reqwest supports that backend, so we should switch to it to keep the CLI platform-agnostic.
We're only interested in whether the code we've generated actually compiles; we don't need to build the artifacts. Let's change .travis.yml and the Makefile to cargo check the test_k8s and test_pet crates (instead of cargo build).
Paperclip CLI should accept hosted OpenAPI spec URLs (in addition to local paths), from which it'll download and parse the spec before codegen.
Good specs have descriptions of API objects, properties, parameters and operations. If they exist, then we should drop them as doc comments in the structs, builders and their methods. Right now, we don't care about them and the docs aren't that good looking.
The generated CLI should offer bash/zsh completion so that we don't have to type the subcommands and arguments all the way.
Now that we have the builder structs and their associated methods (#7), it's time to generate the relevant API calls using those builders. Every (valid) builder object will implement a Sendable trait which will have a send method for making the request and returning a Future<ResponseThingy>. Let's start with the reqwest client for a PoC, with application/json as the default content type (we can improve upon it later).
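A pared-down sketch of such a trait (hedged: the Sendable name comes from the plan above, but the shape here, with an associated METHOD const, a rel_url method, and a synchronous request_line instead of an actual Future-returning send, is purely illustrative so it runs standalone):

```rust
// Illustrative shape only: the planned Sendable would return a Future
// via a reqwest client; a synchronous request_line stands in here.
trait Sendable {
    const METHOD: &'static str;

    fn rel_url(&self) -> String;

    // Default method: every fulfilled builder gets request assembly for free.
    fn request_line(&self, base_url: &str) -> String {
        format!("{} {}{}", Self::METHOD, base_url, self.rel_url())
    }
}

// A hypothetical fulfilled builder for a "get pet by id" operation.
struct GetPet {
    id: u64,
}

impl Sendable for GetPet {
    const METHOD: &'static str = "GET";

    fn rel_url(&self) -> String {
        format!("/pets/{}", self.id)
    }
}

fn main() {
    let req = GetPet { id: 7 }.request_line("https://pets.com");
    assert_eq!(req, "GET https://pets.com/pets/7");
}
```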
Right now, we're assuming that the generated modules and types are accessible from the crate root. This means that if we set the workdir to, say, one level deeper, then the code won't compile, because the compiler won't be able to find the generated module.
We should support setting an import prefix in EmitterState, so that the emitters can address all types appropriately if the user wants to go for a different location in their crate.
We already use the description fields from the spec for documentation (if they exist). We should do the same for the CLI by adding help for commands and parameters.
We've been using proc_macro::Span::call_site as a hacky way of throwing compile-time errors. We should switch to using syn::Error::to_compile_error and compile_error! to achieve the same with better error messages.
This is necessary for the CLI. We should use the info field to get the title, description, version, etc. (only if they exist; I don't think it's necessary to force that in the CLI). An edge case: passing a name and version through CLI args will override whatever name and version we obtained from the spec.
The codegen module has a list of Rust keywords for disallowing them in certain areas in the generated code (like struct fields). At this point, the list is tiny. We should populate the list with all blocked keywords in Rust.
SchemaEmitter currently has a write_contents function which writes generated code to files. If the codegen-fmt feature is enabled, then we should format the output before writing it to a file. rustfmt doesn't document library usage, but I'm guessing we'll be using a Session to do the job.