
wither's People

Contributors

isibboi, magiclen, thedodd


wither's Issues

Add more docs on basic model usage & sync.

Per some feedback I was given, it would be beneficial to add some additional docs on how to use the model: find, find all, updates, saving &c.

Also, add some docs on what sync is for and how to use it.

Model::sync shouldn't panic.

I originally coded it this way so that users could sync their models at boot time. In retrospect, this is pretty inflexible. It should return a standard Result and the user can panic if they want to.

This will also allow for APIs to boot, attempt a sync, and even if it fails, the service can continue to stay online and service calls. It could attempt another sync operation in the background as part of a "manager" type which controls access to the database connection.

todo

  • modify Model::sync to no longer panic. It should return the default Result type defined in this package. Probably Result<()>.
  • get some tests in place for this.
  • probably do a second critical review of the sync operation as a whole.
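The todo above can be sketched without a database; `boot_with_sync` and its `sync_indexes` closure are hypothetical stand-ins used only to illustrate the Result-based boot flow, not wither's actual API:

```rust
// Hypothetical stand-in for the proposed non-panicking `Model::sync`:
// the closure simulates a sync call that may fail while the DB is down.
fn boot_with_sync<F>(mut sync_indexes: F, max_retries: u32) -> Result<u32, String>
where
    F: FnMut() -> Result<(), String>,
{
    for attempt in 0..=max_retries {
        match sync_indexes() {
            // Return how many retries it took, instead of panicking at boot.
            Ok(()) => return Ok(attempt),
            // Service stays online; a manager type would retry in the background.
            Err(_) if attempt < max_retries => continue,
            Err(e) => return Err(e),
        }
    }
    unreachable!("loop always returns")
}
```

A manager type could run this in the background with a backoff between attempts instead of a tight loop, keeping the service online while the sync is pending.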

Model::find returns DecoderError when collection is empty

I currently have to do the following in order to find documents without raising an error:

let count = Submission::count(db.clone(), None, None).unwrap();
let submissions = if count > 0 { Submission::find(db.clone(), None, None).unwrap() } else { Vec::new() };
Json(json!({ "submissions": submissions }))

Shouldn't Submission::find() return an empty Vec on an empty collection rather than raising a DecoderError? I am using MongoDB 4.0.3; should I try downgrading to an earlier version?

Fix typo in docs.

Superfluous comma in Model.update's docs: This operation targets the model, instance by the instance's ID.

Support delete_many

First of all, thanks for the lib; I am testing it in a new project and so far I like it!
I came across the use case of deleting many documents. Are there any plans to support delete_many functionality, or should I just use the native driver for this case?

Cannot find derive macro `Model` in this scope

I seem to be having an issue bringing this derive macro into scope. I'm using the standard example from the README as well. I'm on the latest stable build of Rust, which at this time is 1.47.0. I'm still rather green with the language, so apologies in advance if this is some silly issue.

Expecting one working cargo project as an example.

Compiled the README.md code against the following toml dependencies and got a bunch of errors.

Code

// First, we add import statements for the crates that we need.
// In Rust 2018, `extern crate` declarations will no longer be needed.
#[macro_use]
extern crate mongodb;
extern crate serde;
#[macro_use(Serialize, Deserialize)]
extern crate serde_derive;
extern crate wither;
#[macro_use(Model)]
extern crate wither_derive;

// Next we bring a few types into scope for our example.
use mongodb::{
    Client, ThreadedClient,
    db::{Database, ThreadedDatabase},
    coll::options::IndexModel,
    oid::ObjectId,
};
use wither::prelude::*;

// Now we define our model. Simple as deriving a few traits.
#[derive(Model, Serialize, Deserialize)]
struct User {
    /// The ID of the model.
    #[serde(rename="_id", skip_serializing_if="Option::is_none")]
    pub id: Option<ObjectId>,

    /// This field has a unique index on it.
    #[model(index(index="dsc", unique="true"))]
    pub email: String,
}

fn main() {
    // Create a user.
    let db = mongodb::Client::with_uri("mongodb://localhost:27017/").unwrap().db("mydb");
    let mut me = User{id: None, email: "[email protected]".to_string()};
    me.save(db.clone(), None);

    // Update user's email address.
    me.update(db.clone(), None, doc!{"$set": doc!{"email": "[email protected]"}}, None).unwrap();

    // Fetch all users.
    let all_users = User::find(db.clone(), None, None).unwrap();
}

toml dependencies

[dependencies]
futures = "0.3.5"
serde = "1.0.114"
serde_derive = "1.0.114"
wither = "0.8.0"
wither_derive = "0.8.0"
[dependencies.mongodb]
version = "1.0.0"
default-features = false
features = ["sync"]

Errors

error[E0432]: unresolved imports `mongodb::ThreadedClient`, `mongodb::db::ThreadedDatabase`, `mongodb::coll::options::IndexModel`, `mongodb::oid`
  --> src/main.rs:14:13
   |
14 |     Client, ThreadedClient,
   |             ^^^^^^^^^^^^^^ no `ThreadedClient` in the root
15 |     db::{Database, ThreadedDatabase},
   |                    ^^^^^^^^^^^^^^^^ no `ThreadedDatabase` in `db`
16 |     coll::options::IndexModel,
   |     ^^^^^^^^^^^^^^^^^^^^^^^^^ no `IndexModel` in `coll::options`
17 |     oid::ObjectId,
   |     ^^^ help: a similar path exists: `wither::mongodb::oid`

error[E0432]: unresolved import `mongodb`
  --> src/main.rs:22:10
   |
22 | #[derive(Model, Serialize, Deserialize)]
   |          ^^^^^ no `IndexModel` in `coll::options`
   |
   = note: this error originates in a derive macro (in Nightly builds, run with -Z macro-backtrace for more info)

error: cannot find macro `doc` in this scope
  --> src/main.rs:22:10
   |
22 | #[derive(Model, Serialize, Deserialize)]
   |          ^^^^^
   |
   = note: this error originates in a derive macro (in Nightly builds, run with -Z macro-backtrace for more info)

error: cannot find macro `doc` in this scope
  --> src/main.rs:40:33
   |
40 |     me.update(db.clone(), None, doc!{"$set": doc!{"email": "[email protected]"}}, None).unwrap();
   |                                 ^^^

error[E0433]: failed to resolve: could not find `oid` in `mongodb`
  --> src/main.rs:22:10
   |
22 | #[derive(Model, Serialize, Deserialize)]
   |          ^^^^^ could not find `oid` in `mongodb`
   |
   = note: this error originates in a derive macro (in Nightly builds, run with -Z macro-backtrace for more info)

error[E0603]: struct `Client` is private
   --> src/main.rs:14:5
    |
14  |     Client, ThreadedClient,
    |     ^^^^^^ private struct
    |
note: the struct `Client` is defined here
   --> /home/bob/.cargo/registry/src/github.com-1ecc6299db9ec823/mongodb-1.0.0/src/lib.rs:141:9
    |
141 |         client::Client,
    |         ^^^^^^^^^^^^^^

error[E0603]: module `db` is private
   --> src/main.rs:15:5
    |
15  |     db::{Database, ThreadedDatabase},
    |     ^^ private module
    |
note: the module `db` is defined here
   --> /home/bob/.cargo/registry/src/github.com-1ecc6299db9ec823/mongodb-1.0.0/src/lib.rs:111:5
    |
111 |     mod db;
    |     ^^^^^^^

error[E0603]: module `coll` is private
   --> src/main.rs:16:5
    |
16  |     coll::options::IndexModel,
    |     ^^^^ private module
    |
note: the module `coll` is defined here
   --> /home/bob/.cargo/registry/src/github.com-1ecc6299db9ec823/mongodb-1.0.0/src/lib.rs:107:5
    |
107 |     mod coll;
    |     ^^^^^^^^^

error[E0603]: module `coll` is private
   --> src/main.rs:22:10
    |
22  | #[derive(Model, Serialize, Deserialize)]
    |          ^^^^^ private module
    |
note: the module `coll` is defined here
   --> /home/bob/.cargo/registry/src/github.com-1ecc6299db9ec823/mongodb-1.0.0/src/lib.rs:107:5
    |
107 |     mod coll;
    |     ^^^^^^^^^

error[E0603]: struct `Client` is private
   --> src/main.rs:35:23
    |
35  |     let db = mongodb::Client::with_uri("mongodb://localhost:27017/").unwrap().db("mydb");
    |                       ^^^^^^ private struct
    |
note: the struct `Client` is defined here
   --> /home/bob/.cargo/registry/src/github.com-1ecc6299db9ec823/mongodb-1.0.0/src/lib.rs:141:9
    |
141 |         client::Client,
    |         ^^^^^^^^^^^^^^

warning: unused `#[macro_use]` import
 --> src/main.rs:3:1
  |
3 | #[macro_use]
  | ^^^^^^^^^^^^
  |
  = note: `#[warn(unused_imports)]` on by default

error[E0599]: no function or associated item named `with_uri` found for struct `mongodb::client::Client` in the current scope
  --> src/main.rs:35:31
   |
35 |     let db = mongodb::Client::with_uri("mongodb://localhost:27017/").unwrap().db("mydb");
   |                               ^^^^^^^^ function or associated item not found in `mongodb::client::Client`

error: aborting due to 11 previous errors; 1 warning emitted

Some errors have detailed explanations: E0432, E0433, E0599, E0603.
For more information about an error, try `rustc --explain E0432`.
error: could not compile `wither_eg`.

To learn more, run the command again with --verbose.

Derive: add use statements for all dep types.

There should be no extra cost associated with simply adding any use statements for types which the derive system depends upon. If the crates do not exist, cargo will notify the user pretty clearly as to the issue.

This is just a usability enhancement. It is annoying to have to declare use statements for types which derives require.

error: proc-macro derive panicked

Hi, I am trying to run the example in the readme, but this error comes up when building the project with cargo. I'm using the latest versions:

[dependencies]
serde = "1.0.101"
serde_json = "1.0.41"
serde_derive = "1.0.101"
mongodb = "0.4.0"
wither = "0.8.0"
wither_derive = "0.6.1"

Env:

rustup --version
# rustup 1.19.0 (2af131cf9 2019-09-08)

cargo --version
# cargo 1.40.0-nightly (8b0561d68 2019-09-30)

Error:

error: proc-macro derive panicked
  --> src/main.rs:22:10
   |
22 | #[derive(Model, Serialize, Deserialize)]
   |          ^^^^^
   |
   = help: message: Unrecognized `#[model(index(...))]` attribute 'index'.

error: aborting due to previous error

I'm extremely new to Rust, so it's probably an error on my end, but maybe you can point me in the right direction.

Integration with new MongoDB driver.

An alpha of the new MongoDB driver has just recently landed. Time to start cutting over.

Hopefully some of the indexing issues opened in this repo are resolved by the new driver.

Transactions and Change Streams are still not supported, but should be coming soon.

Objectives

  • cut over to the new mongodb crate. Re-export it so that folks don’t run into issues with multiple versions.
  • Update docs and examples. Maybe look into creating a pages mdbook guide with model examples and the like. Should be a little easier to approach than a docs.rs heavy guide.
  • Update a few patterns, such as find patterns and the like, to take full advantage of the cursor system. Don’t collect into a vec ahead of time.
  • Revisit the indexing and migrations system to ensure everything is up-to-snuff and working as needed with new version.

Model: failed to parse serde rename attr

I am trying to experiment with branch 42-new-driver with the model below.

use bson::oid::ObjectId;
use serde::{Deserialize, Serialize};
use wither::Model;

#[derive(Serialize, Deserialize, Model)]
struct Foo {
    #[serde(rename = "_id", skip_serializing_if = "Option::is_some")]
    id: Option<ObjectId>,
}

The serde(rename... annotation generated this error:

failed to parse serde rename attr

help: Unexpected literal type `string`
...

The error disappears when I remove Model from the derive list. Do you know why this is happening?

Index subdocument fields w/ custom derive.

The custom derive system has landed (woot woot!), and one of the last outstanding challenges is to get a pattern in place which will work well for indexing model subdocument fields.

Put a plan of attack together on supporting indexes on nested models. Should be pretty straightforward. As a current workaround, users will simply have to implement the Model trait on their model manually.

`find_one` always returns None

I have this model

use wither::prelude::*;

#[derive(Debug, Clone, Model, PartialEq, Deserialize, Serialize, TypedBuilder)]
#[model(collection_name = "accounts")]
#[model(index(
   keys = r#"doc!{"email": 1}"#,
   options = r#"doc!{"name": "unique-email", "unique": true, "background": true}"#
))]
pub struct Account {
  #[serde(rename = "_id", skip_serializing_if = "Option::is_none")]
  pub(crate) id: Option<ObjectId>,
  pub(crate) email: Option<String>,
  pub(crate) created: Option<String>,
  pub(crate) updated: Option<String>,
}

I have noticed that the call below always returns None although the value exists. Does anyone know why this is happening?

let filter = Some(bson::doc! {
  "email": String::from("[email protected]")
});
let select_option = Account::find_one(&db, filter, None).await.map_err(|err| {
    anyhow!("DB Error: {:?}", err)
})?;

println!("Verify: {:?}", select_option); // <-- Prints None

cannot find derive macro `Model` in this scope

Compiling actix_demo v0.1.0 (D:\backCode\actix_demo)
error: cannot find derive macro Model in this scope
--> src\common\structs.rs:6:17
|
6 | #[derive(Debug, Model, Serialize, Deserialize, Clone)]
| ^^^^^

error: aborting due to previous error

error: could not compile actix_demo.

To learn more, run the command again with --verbose.

The Code:
structs.rs:
use serde_derive::{Deserialize, Serialize};
use mongodb::{
    oid::ObjectId,
};

#[derive(Debug, Model, Serialize, Deserialize, Clone)]
pub struct MyObj {
    /// The ID of the model.
    #[serde(rename = "_id", skip_serializing_if = "Option::is_none")]
    pub id: Option<ObjectId>,
    pub name: String,
    pub number: i32,
}
...

main.rs:
#[macro_use]
extern crate json;
#[macro_use]
extern crate bson;
#[macro_use]
extern crate mongodb;
extern crate serde;
#[macro_use(Serialize, Deserialize)]
extern crate serde_derive;
extern crate wither;
#[macro_use(Model)]
extern crate wither_derive;
#[macro_use]
extern crate log;
extern crate hex;
extern crate actix_demo;
extern crate chrono;

use actix_web::{
error, middleware, web, App, Error, HttpRequest, HttpResponse, HttpServer,guard
};
use actix_web::http::{StatusCode};
use bytes::BytesMut;
use json::JsonValue;
use serde_derive::{Deserialize, Serialize};

use mongodb::{
ThreadedClient,
db::{Database, ThreadedDatabase},
coll::options::IndexModel,
oid::ObjectId,
};
use wither::prelude::*;
use bson::Bson;
use serde_json::{Value, Map};
use r2d2_mongodb::{MongodbConnectionManager, ConnectionOptions};
use r2d2::Pool;
//use actix_demo::middlewareLocal::state::AppState;
use actix_demo::middlewareLocal::auth::Auth;
use actix_web::client::ClientRequest;
use sha2::{Sha256, Digest};
use rand::prelude::random;
use rand::Rng;
use actix_service::ServiceExt;
use reqwest::r#async::{Client, Response};
use futures::Future;
use std::collections::HashMap;
use reqwest::header::{USER_AGENT, CONTENT_TYPE, ACCESS_CONTROL_ALLOW_HEADERS};
//::header::{Headers, UserAgent, ContentType};
use actix_session::{CookieSession, Session};
use actix_identity::{Identity, CookieIdentityPolicy, IdentityService};
use qstring::QString;
use actix_files as fs;
use std::marker::PhantomData;
use serde_json::value::Value::Object;
use actix_service::Service;
use actix_demo::common::structs::*;
...

I have been trying to fix this for hours without success. Please help.

Allow models to use generics.

#37 is a solution for modeling a structure which uses generics, like this:

#[derive(Debug, Serialize, Deserialize, Model)]
struct Person<'a> {
    #[serde(rename = "_id", skip_serializing_if = "Option::is_none")]
    id: Option<ObjectId>,
    name: Cow<'a, str>
}

But it still does not support a structure whose serde::de::Deserialize<'de> trait is implemented not by the derive macro but manually.

For example:

#[derive(Debug, Serialize, Model)]
struct Person<'a> {
    #[serde(rename = "_id", skip_serializing_if = "Option::is_none")]
    id: Option<ObjectId>,
    name: Cow<'a, str>,
}

struct StringVisitor;

impl<'de> serde::de::Visitor<'de> for StringVisitor {
    type Value = Person<'de>;

    fn expecting(&self, formatter: &mut std::fmt::Formatter) -> std::fmt::Result {
        formatter.write_str("a name")
    }

    fn visit_borrowed_str<E>(self, v: &'de str) -> Result<Self::Value, E> where E: serde::de::Error {
        Ok(Person {
            id: None,
            name: Cow::Borrowed(v),
        })
    }

    fn visit_string<E>(self, v: String) -> Result<Self::Value, E> where E: serde::de::Error {
        Ok(Person {
            id: None,
            name: Cow::Owned(v),
        })
    }
}

impl<'de> serde::Deserialize<'de> for Person<'de> {
    fn deserialize<D>(deserializer: D) -> Result<Self, D::Error> where D: serde::Deserializer<'de> {
        deserializer.deserialize_str(StringVisitor)
    }
}

To make the above code work, we need a new attribute meta to assign a lifetime that is used for 'de. Like the following code,

#[derive(Debug, Serialize, Model)]
#[model(de = "'a")]
struct Person<'a> {
    #[serde(rename = "_id", skip_serializing_if = "Option::is_none")]
    id: Option<ObjectId>,
    name: Cow<'a, str>,
}

Serialize issue of ObjectId

#[derive(Debug, Serialize, Deserialize, Clone, Model)]
pub struct User {
    /// The ID of the model.
    #[serde(rename = "_id", skip_serializing_if = "Option::is_none")]
    pub id: Option<ObjectId>,

    pub nick: String,
}

I have a very simple structure. After saving it to the db and returning it in an API call, there is a $oid wrapper in the _id field. How do I serialize it as a plain _id value?

{
    "code": 0,
    "message": null,
    "data": {
        "_id": {
            "$oid": "5f5ae5b400f0368a00715eca"
        },
        "nick": "Tm999y"
    }
}

Thanks.

db.clone() performance concern

I see in the examples that db.clone() should be passed to each function. Isn't that a performance issue? Is there a nicer API? Why should one use wither rather than the official driver?

Create a custom derive for model definitions.

There were a few things mentally blocking me on this at first.

  • I want to be able to keep everything defined very simply in a clean & concise struct. Having to use a procedural macro is less desirable than a custom derive.
  • Everything we are currently doing here should factor into this just fine ... except for migrations. This was the main blocker.

Now that I've been able to think about it a bit, I'm thinking that a solid path forward will be to use a custom derive for all of the core components, and then allow users to optionally implement a new Migrate trait on their models, which is where the model's migrations will be defined.

Deriving Model on your structs will give you a default implementation of sync, so that you can sync your indices with the database. If you choose to also impl Migrate on your struct, you will get a default implementation of sync_and_migrate, which will call the default sync first, and then execute the migrations.
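The trait layering described above can be sketched without a database; the string "steps" below are placeholders for real operations, and the trait signatures are simplified stand-ins, not wither's actual API:

```rust
// Database-free sketch: deriving Model gives a default `sync`; optionally
// implementing Migrate adds a default `sync_and_migrate` that syncs first.
trait Model {
    fn sync(&self) -> Vec<&'static str> {
        vec!["sync indices"] // placeholder for real index syncing
    }
}

trait Migrate: Model {
    fn migrations(&self) -> Vec<&'static str> {
        vec!["run migrations"] // placeholder for real migration specs
    }
    fn sync_and_migrate(&self) -> Vec<&'static str> {
        let mut steps = self.sync(); // default sync runs first...
        steps.extend(self.migrations()); // ...then the migrations execute
        steps
    }
}

struct User;
impl Model for User {}
impl Migrate for User {}
```

The default method on the `Migrate` supertrait is what gives every opted-in model `sync_and_migrate` for free.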

Model::sync is too naive. Needs to deep diff indices.

According to the MongoDB docs:

To add or change index options other than collation, you must drop the index using the dropIndex() method and issue another db.collection.createIndex() operation with the new options.

Currently, Model::sync only diffs the keys of the index itself, but does not take into account the possibility that the options of the index may have changed. It definitely should take this into account and drop an index first if needed.

The code resides in wither::model::sync_model_indexes, a private function of the module which is called from Model::sync.
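The required comparison can be sketched without the driver. Plain maps stand in for the BSON index documents here; the point is only that options must participate in the diff:

```rust
use std::collections::BTreeMap;

// Plain maps stand in for BSON index documents in this sketch.
#[derive(PartialEq)]
struct IndexSpec {
    keys: BTreeMap<String, i32>,
    options: BTreeMap<String, String>,
}

#[derive(Debug, PartialEq)]
enum SyncAction {
    Keep,
    DropAndRecreate,
}

// Diffing only `keys` (the current behavior) would report Keep when an
// option such as `unique` changed; a deep diff must include `options`.
fn diff_index(existing: &IndexSpec, desired: &IndexSpec) -> SyncAction {
    if existing == desired {
        SyncAction::Keep
    } else {
        SyncAction::DropAndRecreate
    }
}
```

Per the MongoDB docs quoted above, DropAndRecreate would translate to dropIndex followed by createIndex with the new options.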

Model.update should still be able to take a filter document.

In order to be able to query on the target document more precisely, we need to update the Model.update method to also take an optional filter document. The document will be unpacked and the _id field will be forced to be the ObjectId of the current model. This will ensure consistent behavior.

This is needed when you need to conditionally perform an update on the target document and you want to use the DB as your mechanism of staving off race conditions. EG: you want to update a field on the model, but only if the field is currently null. If it is not null, then someone else beat you to the update and the operation should fail.
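A small sketch of the proposed merge, with string maps standing in for BSON documents (`build_update_filter` is a hypothetical name, not wither's API):

```rust
use std::collections::BTreeMap;

// The caller's filter is unpacked and `_id` is then forced to the model's
// own id, so the update can never target a different document.
fn build_update_filter(
    user_filter: Option<BTreeMap<String, String>>,
    model_id: &str,
) -> BTreeMap<String, String> {
    let mut filter = user_filter.unwrap_or_default();
    // `_id` always wins, even if the caller supplied one.
    filter.insert("_id".to_string(), model_id.to_string());
    filter
}
```

The race-condition use case above would then be a filter such as `{"locked": null}` merged with the forced `_id`: if another writer got there first, the update matches nothing and fails.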

How to use Model::find to complete a find_all function

Hello,

I try to use Model::find to complete a find_all function, but when I get the ModelCursor as a result, I don't know how to iterate it. I can neither use the wither example code that calls next() nor the official MongoDB example, as cursor is a private member of ModelCursor.

  1. The wither example, which is not working (next() method not found in wither::ModelCursor<models::role::Role>):
let mut cursor = User::find(mongodb, None, None).await?;
let mut v: Vec<User> = vec![];

while let Some(user) = cursor.next().await {
    v.push(user);
}
  2. The official MongoDB example, which is also not working (cursor is a private member of ModelCursor):
let cursor = coll.find(Some(doc! { "x": 1 }), None).await?;

let results: Vec<Result<Document>> = cursor.collect().await;

Any examples would be appreciated, thanks.

Basic example doesn't seem to work

Hey man,

I'm somewhat new to Rust and really appreciate your work.
I wanted to set up a new project and just copied your example code from the README, but it doesn't seem to work for me. Am I doing something wrong?

error[E0433]: failed to resolve: use of undeclared type or module `futures`
 --> src/main.rs:1:5
  |
1 | use futures::stream::StreamExt;
  |     ^^^^^^^ use of undeclared type or module `futures`

error[E0433]: failed to resolve: use of undeclared type or module `async_trait`
 --> src/main.rs:8:17
  |
8 | #[derive(Debug, Model, Serialize, Deserialize)]
  |                 ^^^^^ use of undeclared type or module `async_trait`
  |
  = note: this error originates in a derive macro (in Nightly builds, run with -Z macro-backtrace for more info)

error[E0433]: failed to resolve: use of undeclared type or module `tokio`
  --> src/main.rs:18:3
   |
18 | #[tokio::main]
   |   ^^^^^ use of undeclared type or module `tokio`

warning: use of deprecated item 'wither::model::Model::sync': Index management is currently missing in the underlying driver, so this method no longer does anything. We are hoping to re-enable this in a future release.
  --> src/main.rs:22:3
   |
22 |   User::sync(db.clone()).await?;
   |   ^^^^^^^^^^
   |
   = note: `#[warn(deprecated)]` on by default

error[E0599]: no method named `next` found for struct `wither::cursor::ModelCursor<User>` in the current scope
   --> src/main.rs:33:33
    |
33  |   while let Some(user) = cursor.next().await {
    |                                 ^^^^ method not found in `wither::cursor::ModelCursor<User>`
    | 
   ::: /Users/chris/.cargo/registry/src/github.com-1ecc6299db9ec823/futures-util-0.3.5/src/stream/stream/mod.rs:222:8
    |
222 |     fn next(&mut self) -> Next<'_, Self>
    |        ----
    |        |
    |        the method is available for `std::boxed::Box<wither::cursor::ModelCursor<User>>` here
    |        the method is available for `std::sync::Arc<wither::cursor::ModelCursor<User>>` here
    |        the method is available for `std::rc::Rc<wither::cursor::ModelCursor<User>>` here
    |
    = help: items from traits can only be used if the trait is in scope
help: the following trait is implemented but not in scope; perhaps add a `use` for it:
    |
1   | use futures_util::stream::stream::StreamExt;
    |

error[E0277]: `main` has invalid return type `impl std::future::Future`
  --> src/main.rs:19:20
   |
19 | async fn main() -> Result<()> {
   |                    ^^^^^^^^^^ `main` can only return types that implement `std::process::Termination`
   |
   = help: consider using `()`, or a `Result`

error[E0752]: `main` function is not allowed to be `async`
  --> src/main.rs:19:1
   |
19 | async fn main() -> Result<()> {
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ `main` function is not allowed to be `async`

error: aborting due to 6 previous errors; 1 warning emitted

Some errors have detailed explanations: E0277, E0433, E0599, E0752.
For more information about an error, try `rustc --explain E0277`.
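The E0433 errors above all point at crates the example uses but the manifest presumably never declares: futures, async_trait (referenced by the derive expansion), and tokio. A plausible Cargo.toml addition follows; versions and feature flags are illustrative for the 0.9 alpha era, not verified against this exact setup:

```toml
[dependencies]
wither = "0.9.0-alpha.1"
serde = { version = "1.0", features = ["derive"] }
futures = "0.3"
async-trait = "0.1"
tokio = { version = "0.2", features = ["macros", "rt-threaded"] }
```

The E0752/E0277 errors about `async fn main` should also disappear once `#[tokio::main]` can actually resolve the tokio crate.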

Make the `id` field's name and type changeable.

Sometimes we don't want the id field to be named id, and don't want it to be an ObjectId (Option<mongodb::oid::ObjectId>, precisely).

Perhaps wither can allow the following code in the future:

#[derive(Serialize, Deserialize, Model)]
struct Document {
    #[model(id)]
    #[serde(rename = "_id")]
    uid: u64
}

Code fails while deserializing DateTime data type.

I have the following code:

use futures::stream::StreamExt;
use serde::{Serialize, Deserialize};
use wither::{prelude::*, Result};
use wither::bson::{doc, oid::ObjectId};
use wither::mongodb::Client;
use chrono::{DateTime, Duration, Utc};
use uuid::Uuid;

#[derive(Debug, Model, Serialize, Deserialize)]
#[model(index(keys=r#"doc!{"uid": 1}"#, options=r#"doc!{"unique": true}"#))]
struct ToDo {
    /// The ID of the model.
    #[serde(rename="_id", skip_serializing_if="Option::is_none")]
    pub id: Option<ObjectId>,
    /// The to_do's email address.
    pub uid: String,
    pub task: String,
    pub completed: bool,
    pub created_at: DateTime<Utc>,
    pub updated_at: DateTime<Utc>,
    pub deleted_at: DateTime<Utc>
}

#[tokio::main]
async fn main() -> Result<()> {
    // Connect & sync indexes.
    let db = Client::with_uri_str("mongodb://localhost:27017/").await?.database("mydb");
    //User::sync(db.clone()).await?;

    // Create a user.
    let mut me = ToDo {
        id: None, uid: Uuid::new_v4().to_string(),
        task: String::from("Task 1"), completed: false, 
        created_at: Utc::now(), updated_at: Utc::now(),
        deleted_at: Utc::now()
    };
    me.save(db.clone(), None).await?;

    // Update user's email address.
    me.update(db.clone(), None, doc!{"$set": doc!{"task": "New task1", "updated_at": Utc::now()}}, None).await?;

    // Fetch all users.
    let mut cursor = ToDo::find(db.clone(), None, None).await?;
    while let Some(to_do) = cursor.next().await {
        println!("{:?}", to_do);
    }
    Ok(())
}

toml:

[dependencies]
chrono = { version = "0.4", features = ["serde"] }
futures = "0.3.5"
global = "0.4.3"
serde = "1.0.114"
serde_derive = "1.0.114"
wither = { version = "0.9.0-alpha.1", default-features = false, features = ["async-std-runtime"] }
#tokio = { version = "0.2.21", features = ["full"] }
tokio = { version = "0.2.21", features = ["macros"] }
uuid = { version = "0.8", features = ["serde", "v4"] }

This code fails at ToDo::find and gives a runtime error:

Err(BsonDe(DeserializationError { message: "invalid type: map, expected a formatted date and time string or a unix timestamp" }))

While inserting a record it stores the datetime value in string format in Mongo, but on update it saves updated_at as a proper datetime in Mongo. Then fetching breaks; if you do not pass updated_at while updating the record, it works well without an error.
If deleted_at is made optional as "pub deleted_at: Option<DateTime<Utc>>" and None is passed while inserting the record, then the code also fails at retrieval with the same error.

Err(BsonDe(DeserializationError { message: "invalid type: map, expected a formatted date and time string or a unix timestamp" }))

The expectation is that DateTime-marked data is stored in datetime format in Mongo at insertion as well as at update, and that if None is provided for an optional field, the program handles it without failing while retrieving records.

The datetime value in Mongo after inserting is "2020-07-05T11:39:39.352802472Z", and after the update it is ISODate("2020-07-05T11:39:39.360Z").

wither::Migration extensions

I see that via an IntervalMigration I can set/unset fields in documents that pass a filter. Is there yet a way to transform data in a field from one format to another?

I think this may get tricky due to the fact that the change to the struct would cause a deserialisation error on the old version. If there's a good solution I'd be happy to work on it.

How to run aggregations?

What is the intended method to run an aggregation query with wither?

Currently I am doing something like this:

let path_regex = wither::bson::Regex { pattern: format!("{}.*", path), options: String::new() };
let pipeline = vec!
    [
        doc!{
            "$match": {
                "path": path_regex
            }
        },
        doc!{
            "$sample": {
                "size": 1
            }
        }
    ];

let mut cursor: Cursor<_> = DataBlockMongo::collection(&db).aggregate(pipeline, AggregateOptions::default()).await.unwrap();

if let Some(chosen_item) = cursor.next().await {
    let chosen_item = DataBlockMongo::instance_from_document(chosen_item.unwrap()).unwrap();
}

Which works, but requires converting to and from a document.

Is this the intended method?
If so, I can't seem to find this documented anywhere.

Model::sync & Model::migrate manager.

A manager object is needed to resolve the following difficulties:

  1. models need to have their indices synced with the backend. We want this to be done as close to startup time as possible, and only once per the lifetime of the program's instantiation. If the backend is unreachable, we want the models to be synced at some future point in time without crashing the service. This is simple enough. Will have it done in no time.
  2. models are going to accrue some number of logical schema migrations over time. We need a way to automate the mutation/evolution of data — which already exists in a live database — in a straightforward & simple way. This last point merits greater discussion, as there are many different approaches to data schema migrations.

proposed schema migrations pattern

  • run from within the service, where the models are defined, similar to how Model::sync works for indices and such. Not from a separate CLI system.
  • migration manager will ensure the migrations for a specific model are run only once per service lifecycle (when the container or process is first booted).
  • should not panic.
  • will still be able to be executed if the backend is down (it shouldn't be) when the service first comes up (probably an exponential backoff algorithm).
  • migrations themselves will follow a two-phase schema migration pattern whenever a simple #[serde(default)] | #[serde(default = "path")] could not be used.

two-phase schema migration pattern

phase one
  • update the model, perhaps using Option<T> for the field type. Serde will deserialize the record from the database as None. Then deal with that condition in your model code if you don't want it to be None.
  • manager will execute Model::migrate, which will receive its execution orders from Model::migrations, which will run whatever mutations against the database are needed according to the migrations specs (these should always be coded to be idempotent).
phase two
  • code deployments for the updated service, which includes the execution of the migrations, have now been finished. All of the service's replicas are updated & using / expecting the updated schema.
  • now the Option<T> fields may be coded as simple T (not wrapped in an Option) as the old data has been fully updated from the migrations.
pros/cons
  • pro: faster schema convergence compared to document versioning.
  • pro: purely based on code in the service, works directly with your current service deployment patterns.
  • pro: undoing a schema migration is much safer as it involves a service deployment with the inverse of the migration being undone.
  • pro: no need for a directory of loosely managed JS files in your code base.
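The phase-one Option<T> trick can be illustrated in plain Rust (field names here are hypothetical and serde is elided; the point is only the accessor that hides the None case until phase two):

```rust
// Phase one: a newly added field is Option<T> so that old documents,
// which lack the field entirely, still deserialize cleanly.
struct User {
    email: String,
    // Added in this release; old records in the DB don't have it yet.
    locale: Option<String>,
}

impl User {
    /// Treat a missing `locale` as the default until the migration has
    /// converged; in phase two the field becomes a plain `String`.
    fn locale(&self) -> &str {
        self.locale.as_deref().unwrap_or("en")
    }
}

fn main() {
    let old = User { email: "a@b.c".into(), locale: None };
    let new = User { email: "d@e.f".into(), locale: Some("de".into()) };
    assert_eq!(old.email, "a@b.c");
    assert_eq!(old.locale(), "en");
    assert_eq!(new.locale(), "de");
    println!("ok");
}
```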

schema migration pattern comparison

django style schema migrations (SQL)

pros/cons
  • pro: executed once against the DB.
  • pro: migration is recorded in a migrations table so that it can't be run again.
  • pro: can be undone.
  • con: must be run outside of the API, using a CLI.
  • con: managing config for the separate environments is overhead (think sequelizejs).
  • con: can be quite tricky in high-throughput environments, especially when doing zero-downtime deployments (which include migrations). Blue-green style deployments are the worst case: old versions of a service may still expect the old schema, yet the migration must be applied before the new code is deployed. Not fun.

diesel style schema migrations

pros/cons

  • pro: pretty awesome, and definitely a bit cutting edge for the SQL ecosystem.
  • con: wouldn't work well in the mongo ecosystem as schemas are not enforced on the DB layer and schema divergence may occur before all service instances are deployed.

document versioning

pros/cons

  • con: not even really a migration pattern ... but you can at least see the document's version and take imperative action on the document to update it as needed.
  • con: extremely slow schema convergence. Data is left very messy. Difficult to reason about the state of the database at any point in time. Schema may never converge.

far out option

Leverage Rust nightly plugins/custom attributes/&c to define a system which will use the document __version field to automatically handle document updates from version to version, removing fields, changing field types, adding new fields &c from lower versions up to the latest version of the Model.

This would be shooting for the stars ... and I don't currently have time to explore this. But it would be fucking awesome. And better than anything else out there in any other language, including interpreted languages (Python & Ruby have solid stories in this realm).
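As a toy illustration of the idea (with a HashMap standing in for a BSON document; no wither or driver types involved), per-version upgrade steps could be chained until a document reaches the latest __version:

```rust
use std::collections::HashMap;

// A HashMap<String, String> stands in for a BSON document in this sketch.
type Doc = HashMap<String, String>;

const LATEST_VERSION: u32 = 3;

/// Apply per-version upgrade steps until the document is at the latest version.
fn upgrade(mut doc: Doc) -> Doc {
    loop {
        let version: u32 = doc
            .get("__version")
            .and_then(|v| v.parse().ok())
            .unwrap_or(1); // documents predating versioning count as v1
        if version >= LATEST_VERSION {
            return doc;
        }
        match version {
            1 => {
                // v1 -> v2: rename `name` to `username`.
                if let Some(name) = doc.remove("name") {
                    doc.insert("username".into(), name);
                }
            }
            2 => {
                // v2 -> v3: add a new field with a default value.
                doc.entry("bio".into()).or_insert_with(String::new);
            }
            _ => unreachable!("no upgrade step registered for this version"),
        }
        doc.insert("__version".into(), (version + 1).to_string());
    }
}

fn main() {
    let mut v1 = Doc::new();
    v1.insert("name".into(), "ada".into());
    let latest = upgrade(v1);
    assert_eq!(latest.get("username").map(String::as_str), Some("ada"));
    assert_eq!(latest.get("__version").map(String::as_str), Some("3"));
    println!("ok");
}
```

The "far out" version of this would generate the match arms automatically from attributes on the Model type, rather than writing them by hand.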

Re-enable Model::sync

Wither started out with application-level automatic index synchronization capabilities via the Model::sync method. As of [email protected], index management has not yet been implemented in the driver, and thus, as of the [email protected] release, the index syncing features have been disabled.

If this is a feature you enjoyed, used, or would otherwise like to have once again, please upvote this issue.

If this is something you don't particularly care for, and you have other potentially better approaches to handling these sorts of tasks, please share your thoughts and what tools you would recommend.

Thank you in advance for any participation!

Use RON files or macros to define schemas

Much in the way Mongoose deals with schemas, I think it would be cool to see Wither handle document schemas in a way that's more natural to how documents are formatted within MongoDB itself: either through Diesel-style migrations using RON files in a migrations/[migration].ron kind of folder structure, or through macros.

An example RON file:

Users (
    _id: {
        type: String, // maybe as an enum of all supported types
    },
    username: {
        type: String,
        options: (
            // some miscellaneous options for this field
            required: true,
            unique: true,
        ),
    },
    // can also declare objects
    stats: {
        blogs: { type: Number, options: (required: false, default: 0)},
    },
    createdAt: {
        type: Date,
    },
)
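The macro alternative mentioned above could look something like this toy macro_rules! sketch, which merely records field names, types, and a required flag (purely illustrative, not a proposed wither API):

```rust
// Toy schema macro: expands to a Vec of (field name, type name, required).
macro_rules! schema {
    ( $( $field:ident : $ty:ty , required = $req:expr );* $(;)? ) => {
        vec![ $( (stringify!($field), stringify!($ty), $req) ),* ]
    };
}

fn main() {
    let users = schema! {
        username: String, required = true;
        created_at: u64, required = false;
    };
    assert_eq!(users[0], ("username", "String", true));
    assert_eq!(users[1], ("created_at", "u64", false));
    println!("ok");
}
```

A real implementation would presumably be a derive macro on the Model struct, emitting validation/index metadata instead of tuples.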

(issue migrated from #47 so as not to clutter that one)

Request for project update

It seems like Wither has the potential to be essentially what mongoose is to Node, but the lack of recent development makes me somewhat concerned when considering switching a production app off of NodeJS/mongoose and onto Rust/Wither. I get the feeling that a large reason for this is simply that, for this project to move forward, there had to be progress with the MongoDB driver. Now that an officially supported driver has been released, is Wither going to resume active development? Do you have a roadmap?

r2d2 support?

Just started to use the crate and can't get it to work, as I'm using r2d2-mongodb to create a pooled connection. That setup works fine with the pure Rust driver.

this works

#[get("/hello")]
pub fn hello_world(connection: DbConn) -> JsonValue {
    let db = &connection;
    db.collection("hello")
        .insert_one(doc! { "name": "John" }, None)
        .unwrap();
    json!({ "status": "ok"})
}

this doesn't work

pub fn all_customer_country_query(
    connection: DbConn,
) -> Result<Vec<RestaurantCountry>, MongoDBError> {
    let db = &connection;
    RestaurantCountry::find(db.clone(), None, None)
}

error

error[E0308]: mismatched types
  --> src/restaurants/repository.rs:11:29
   |
11 |     RestaurantCountry::find(db.clone(), None, None)
   |                             ^^^^^^^^^^ expected struct `std::sync::Arc`, found reference
   |
   = note: expected type `std::sync::Arc<mongodb::db::DatabaseInner>`
              found type `&connection::DbConn`
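The mismatch is that `find` wants the driver's owned `Arc`-backed database handle, while `db` here is a reference to the r2d2 wrapper. A self-contained sketch of the deref-then-clone pattern, using stand-in types rather than the real `mongodb`/`r2d2-mongodb` types:

```rust
use std::ops::Deref;
use std::sync::Arc;

// Stand-ins for the driver's Arc-backed database handle.
struct DatabaseInner {
    name: String,
}
type Database = Arc<DatabaseInner>;

// Stand-in for the pooled-connection wrapper, which derefs to the handle.
struct DbConn {
    db: Database,
}

impl Deref for DbConn {
    type Target = Database;
    fn deref(&self) -> &Database {
        &self.db
    }
}

/// A find-style API that wants an owned `Arc<DatabaseInner>`, not `&DbConn`.
fn find(db: Database) -> String {
    db.name.clone()
}

fn main() {
    let conn = DbConn { db: Arc::new(DatabaseInner { name: "app".into() }) };
    // Reach through the Deref and clone the Arc itself, rather than
    // cloning a reference to the wrapper:
    assert_eq!(find((*conn).clone()), "app");
    println!("ok");
}
```

Under these assumptions, the call in the issue would become something like `RestaurantCountry::find((*connection).clone(), None, None)`, though the exact expression depends on how `DbConn` implements `Deref` in r2d2-mongodb.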

Support for synchronous code.

As of 0.9.0-alpha.0 the synchronous code has been disabled. This is due to a few difficulties with maintaining both the sync and async code in the same crate; specifically, the difficulties center on ensuring that docs are built properly to expose both interfaces. This difficulty arises because some types in the mongodb crate are made private depending on feature flags.

All in all, the easiest path forward may be to create a new wither_sync crate which exposes the sync code, if folks find themselves in need of it.

If you are one of those individuals, please let me know!

Clippy & Rustfmt

todo

  • update codebase & add to CI/CD.
  • add some info to CONTRIBUTING.md on how to get these tools setup and how to use them on this codebase.

Add MongoDB connection pool to wither

I am using Mongoose with Node.js and find it handy that a single call connects to the database.

mongoose.connect('mongodb://localhost/dbname');

then add, modify, and delete documents without passing a database handle to each call:
User.updateOne({_id: ''}, {name: 'Your name'});

Would be great if you add this to wither

Alternatively, you can manage pools connected to mongodb using r2d2_mongodb
https://docs.rs/r2d2-mongodb/0.2.2/r2d2_mongodb/
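In the meantime, a once-per-process handle in the spirit of `mongoose.connect(...)` can be approximated with std's `OnceLock`. A sketch using a stand-in `Database` type (the real driver client would go here):

```rust
use std::sync::OnceLock;

// Stand-in for the driver's database handle.
#[derive(Debug)]
struct Database {
    uri: String,
}

static DB: OnceLock<Database> = OnceLock::new();

/// Call once at startup; later calls return the already-initialized handle.
fn connect(uri: &str) -> &'static Database {
    DB.get_or_init(|| Database { uri: uri.to_string() })
}

/// Model code can then fetch the global handle without threading it
/// through every call, mirroring the mongoose ergonomics asked for above.
fn db() -> &'static Database {
    DB.get().expect("connect() must be called before db()")
}

fn main() {
    connect("mongodb://localhost/dbname");
    assert_eq!(db().uri, "mongodb://localhost/dbname");
    println!("ok");
}
```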

Thank you!
