
🎶🎸 Simple music catalog app for learning purposes

Home Page: https://catalogapp.netlify.com

golang sqlboiler gqlgen graphql

catalog-api's Introduction

Catalog API

Brief

The API should be able to process all CRUD style requests and store its data in a relational database. The API should allow for the manipulation of Artists, Albums, Songs and Genres at a minimum. You may choose to have read-only data, but at least 1 field for each object should be editable. Feel free to add any features that you think will make your API better to consume.

Setup

Before getting started, note that Docker is configured to run Postgres with port 5432 exposed on the host machine, so you'll need to shut down any already-running Postgres instances. I chose this approach so that anyone running the service can easily connect DataGrip or a similar tool to the database. Another approach would be to modify docker-compose.yml to expose Postgres on a different port of your choosing.

  1. Clone repository.
  2. Ensure that Docker is installed & running.
  3. Obtain a .env file. For development, a .env.sample file is provided with values suitable for this demo; copy it and rename the copy to .env.
  4. Run $ docker-compose up -d from the project root. This will build and start both the Web and Postgres services, keeping them running in the background. If you'd like to see logs, you can skip -d, but you'll have to run the following commands in another terminal.
  5. Run $ docker-compose run --rm web ./bin/migrate up. This will run the initial migrations. Note that --rm will remove the container that was created to run the command. This is fine since the container we want to keep is already running the web instance.
  6. Run $ docker-compose run --rm web go run cmd/seeds/main.go. This will pre-seed the database with some albums from one of my favorite artists, Plini.
  7. Import the Insomnia export (attached via email) and fire away!

Teardown

To stop the containers and remove any volumes:

  1. Run docker-compose down -v.

Using the API

The attached Insomnia workspace includes a number of sample queries and mutations for you to try out. Don't forget to check out the documentation pane!

One thing to note is that all mutations are protected and can only be accessed by sending an API-KEY header along with the request. The Insomnia workspace should have this preconfigured, so give the mutations a try with and without the header.

Adding / Removing Track Relations

The trackCreate and trackUpdate mutations expose {relationship}_ids fields, e.g. artist_ids, album_ids, and genre_ids. To add a track to an existing album, include that album's ID in the album_ids field. To remove a track from an album, omit that album's ID from the album_ids field on an update. Artists and genres work the same way. This approach could also be expanded to the album and artist mutations, but that has not been implemented.

Another approach would have been to allow nested relationship objects to be sent along in a mutation. The server then either assigns a new relationship if the relation object has an ID or creates a new object and then assigns the relationship if the object doesn't have an ID. While I had originally implemented this for tracks, I decided to remove the functionality because it made the create/update mutations more complicated for the client. {relationship}_ids is very clear and explicit, whereas the other approach introduces multiple places for creating a single resource, thereby introducing more points of potential failure.
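
For illustration, here's a minimal Go sketch of a trackUpdate call that sends album_ids along with the API-KEY header. The endpoint URL and port, the header value, and the exact mutation arguments and selection set are assumptions made for the example; the Insomnia workspace contains the real request shapes.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Hypothetical mutation: the input shape (id, album_ids) mirrors the
	// {relationship}_ids convention described above, but the real argument
	// names and selection set may differ from the actual schema.
	query := `mutation {
	  trackUpdate(id: 1, input: { album_ids: [1, 2] }) {
	    id
	    title
	  }
	}`

	body, _ := json.Marshal(map[string]string{"query": query})

	// The endpoint path and API key value are assumptions; both come from the
	// Insomnia workspace / .env in the real project.
	req, _ := http.NewRequest(http.MethodPost, "http://localhost:8080/query", bytes.NewReader(body))
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("API-KEY", "your-api-key-here")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	out, _ := io.ReadAll(resp.Body)
	fmt.Println(string(out))
}
```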

Live API

Additionally, a live API is available at https://catalog-api.onrender.com. The exported Insomnia workspace contains both a production environment and a development environment. The live API runs on Render, and code is automatically deployed each time it is pushed to the master branch.

Database Design

It seems like these days you rarely see an album released without multiple featured artists, and oftentimes an album spans multiple genres. Because of this, tracks are at the heart of this database design.

Notable features:

  • Tracks can have multiple artists/features through track_artists.
  • Tracks can belong to multiple albums through track_albums, facilitating Spotify-style album singles and compilation albums as well as traditional album releases.
  • Tracks can have multiple genres through track_genres, so searching and cataloging tracks by genre isn't restricted to the genre of their album.

Additional features:

  • When a track or its relations are removed, the corresponding join rows in track_{artists,albums,genres} are automatically removed, ensuring that stale data isn't left around.
  • Both genres and albums require a unique name, enforced via a unique index on their lower-cased names.

Trade offs

One trade-off of this design is that most queries will involve at least one join, which will impact performance as the data grows. If the app grows and performance does become an issue, one could:

  1. Throw a cache in front of the database.
  2. Utilize a document store like Elasticsearch to create documents of the tracks and their relationships so that joins are avoided altogether for most read/search queries. A drawback of this approach is that this additional data source would need to be kept in sync with the database.

Go Libraries and ORM

The most important dependencies are:

Sqlboiler (ORM)

Sqlboiler utilizes introspection and code generation to build a type safe ORM tailored to your relational database. I chose to use Sqlboiler because:

  1. It uses a schema first approach. The goal of this test wasn't to see whether I could implement a good ORM myself, but to see if I could design a flexible schema around tracks, albums, artists, and genres.
  2. It generates a type-safe API, and you're able to see all of the generated code right in the project. Because of this, there aren't any levels of indirection that can cause confusion, and the autocompletion is superb!
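
As a rough sketch of what working with the generated code looks like: the module path, the generated model names, and the "Artists" relationship name below are assumptions based on the schema described in this README, and the qm import path depends on the Sqlboiler version in use.

```go
package modex // illustrative placement; any package with DB access would do

import (
	"context"
	"database/sql"

	"github.com/volatiletech/sqlboiler/v4/queries/qm"

	"github.com/seanwash/catalog-api/models" // hypothetical module path
)

// TracksForGenre loads every track tagged with a genre, eager-loading the
// many-to-many artist relation so the caller gets typed structs back.
func TracksForGenre(ctx context.Context, db *sql.DB, genreID int64) (models.TrackSlice, error) {
	return models.Tracks(
		qm.InnerJoin("track_genres tg ON tg.track_id = tracks.id"),
		qm.Where("tg.genre_id = ?", genreID),
		qm.Load("Artists"), // assumed relationship name generated from track_artists
	).All(ctx, db)
}
```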

Gqlgen + Chi (Web Interface)

Gqlgen also utilizes code generation to build a type-safe GraphQL interface for your app. Once configured, each generate run analyzes the schema.graphql file and generates the appropriate types and queries, as well as stubbing out any resolvers required to implement the schema. Ultimately I chose to use Gqlgen because:

  1. It seemed relatively stable and mature with an active community/maintainer.
  2. It uses a declarative and code generation approach. You describe what you want and only have to implement the pieces to make that happen.

Chi is a lightweight router that works very well in tandem with Go's own http package. Ultimately I chose to use Chi because it provides some useful middleware out of the box while still using standard HTTP handlers.
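
A minimal sketch of how Gqlgen and Chi fit together. The package paths for the generated code, the module path, the Chi version, the endpoint path, and the port are all assumptions; the real wiring lives under cmd and api.

```go
package main

import (
	"log"
	"net/http"

	"github.com/99designs/gqlgen/graphql/handler"
	"github.com/go-chi/chi/v5"
	"github.com/go-chi/chi/v5/middleware"

	"github.com/seanwash/catalog-api/graph"           // hypothetical module path
	"github.com/seanwash/catalog-api/graph/generated" // gqlgen's default generated package layout
)

func main() {
	// gqlgen turns schema.graphql into an executable schema backed by resolvers.
	srv := handler.NewDefaultServer(
		generated.NewExecutableSchema(generated.Config{Resolvers: &graph.Resolver{}}),
	)

	r := chi.NewRouter()
	r.Use(middleware.Logger)    // request logging out of the box
	r.Use(middleware.Recoverer) // recover from panics with a 500

	// A custom API-KEY middleware guarding mutations would also be mounted here.
	r.Handle("/query", srv)

	log.Fatal(http.ListenAndServe(":8080", r))
}
```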

Goose

Goose powers the database migration system. I've been spoiled in the past by frameworks that have database migrations built in (Rails, Elixir's Ecto), so I went looking for a lib to handle this for me instead of building one from scratch. I picked Goose because I had read a few favorable reviews of it online.

Project Structure

  • api - Contains middleware related to the Chi router.
  • cmd - Contains the main package and entry points for the HTTP server and database seeds.
  • db - Contains the database migrations and db.Connection, which is used to create a new connection to Postgres (see the sketch after this list).
  • graph - Contains the schema.graphql file as well as all of the resolvers and code generated by Gqlgen.
  • models - Contains the models and code generated by Sqlboiler.
  • modex - As recommended by Sqlboiler, contains a few helper functions for models. This is a separate package/directory so that when the models are generated, the custom helpers aren't removed.
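
For a sense of the kind of helper db.Connection provides, here is a minimal sketch; the DATABASE_URL variable name and the lib/pq driver are assumptions, and the real helper builds its settings from the .env values.

```go
package db

import (
	"database/sql"
	"os"

	_ "github.com/lib/pq" // registers the "postgres" driver; the driver choice is an assumption
)

// Connection opens a connection pool to Postgres. The DATABASE_URL variable
// name is illustrative; the real helper reads its configuration from .env.
func Connection() (*sql.DB, error) {
	conn, err := sql.Open("postgres", os.Getenv("DATABASE_URL"))
	if err != nil {
		return nil, err
	}
	if err := conn.Ping(); err != nil {
		return nil, err
	}
	return conn, nil
}
```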

Things to Improve Upon

Dataloader

It's really easy to introduce a ton of N+1 queries into a GraphQL service without a tool like Dataloader. In a nutshell, Dataloader is a tool that keeps track of which resources have been requested and, instead of making a separate query each time a resource is requested, batches those requests into one query. The batched query is similar to select * from thing where id in [resourceIdsFromRequests]. While I've implemented this pattern in other languages, I'm still pretty new to Go, and the recommended package dataloaden is a little too opaque for me at the moment. I would love to ship a product without N+1s, but seeing as this is a small, low-traffic API, we should be OK!
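
To make the pattern concrete, here's a hand-rolled sketch of the batching idea for artists. It is not the dataloaden package; the table and column names are assumptions, and a real loader would also handle per-request scoping and concurrency.

```go
package loader

import (
	"context"
	"database/sql"

	"github.com/lib/pq"
)

// ArtistLoader collects the artist IDs requested while resolving a single
// GraphQL query and fetches them with one batched SQL query instead of one
// query per ID.
type ArtistLoader struct {
	db      *sql.DB
	pending []int64
	cache   map[int64]string // id -> artist name
}

func NewArtistLoader(db *sql.DB) *ArtistLoader {
	return &ArtistLoader{db: db, cache: map[int64]string{}}
}

// Load queues an ID and returns a thunk, so resolvers can ask for an artist
// now and read the result after the batch has been flushed.
func (l *ArtistLoader) Load(id int64) func() string {
	if _, ok := l.cache[id]; !ok {
		l.pending = append(l.pending, id)
	}
	return func() string { return l.cache[id] }
}

// Flush issues one query for everything queued so far, the Postgres
// equivalent of "select * from artists where id in [...]".
func (l *ArtistLoader) Flush(ctx context.Context) error {
	if len(l.pending) == 0 {
		return nil
	}

	rows, err := l.db.QueryContext(ctx,
		`SELECT id, name FROM artists WHERE id = ANY($1)`, pq.Array(l.pending))
	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var id int64
		var name string
		if err := rows.Scan(&id, &name); err != nil {
			return err
		}
		l.cache[id] = name
	}
	l.pending = nil
	return rows.Err()
}
```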

Tests

I would love to have included a whole slew of tests for this API. No excuses, but I just couldn't get there in time.

Idiomatic Go

Since I'm still new to Go, I'm certain that much of the code I did write could be written in a more terse or organized way. Because of this, I tried not to be fancy or utilize any concepts that I didn't feel that I had a firm grasp on. My hope is that the code base is simple and organized enough that a seasoned Go programmer can still find their way around and make edits and improvements should they have to.


catalog-api's Issues

Tests

Need to pick a testing strategy and implement some! The most important things to test right now are the resolvers since that's where most of the custom logic has been added.
