
Learn Domain-Driven Design, software architecture, design patterns, best practices. Code examples included

License: MIT License


Domain-Driven Hexagon

Check out my other repositories:

  • Backend best practices - Best practices, tools and guidelines for backend development.
  • System Design Patterns - list of topics and resources related to distributed systems, system design, microservices, scalability and performance, etc.
  • Full Stack starter template - template for full stack applications based on TypeScript, React, Vite, ChakraUI, tRPC, Fastify, Prisma, zod, etc.

The main emphasis of this project is to provide recommendations on how to design software applications. This readme includes techniques, tools, best practices, architectural patterns and guidelines gathered from different sources.

Code examples are written using NodeJS, TypeScript, NestJS framework and Slonik for the database access.

Patterns and principles presented here are framework/language agnostic. Therefore, the above technologies can be easily replaced with any alternative. No matter what language or framework is used, any application can benefit from principles described below.

Note: code examples are adapted to TypeScript and the frameworks mentioned above (implementations in other languages will look different).

Everything below is provided as a recommendation, not a rule. Different projects have different requirements, so any pattern mentioned in this readme should be adjusted to project needs or even skipped entirely if it doesn't fit. In real world production applications, you will most likely only need a fraction of those patterns depending on your use cases. More info in this section.


Architecture

This is an attempt to combine multiple architectural patterns and styles together, such as:

And many others (more links below in every chapter).

Before we begin, here are the PROS and CONS of using a complete architecture like this:

Pros

  • Independent of external frameworks, technologies, databases, etc. Frameworks and external resources can be plugged/unplugged with much less effort.
  • Easily testable and scalable.
  • More secure. Some security principles are baked into the design itself.
  • The solution can be worked on and maintained by different teams, without stepping on each other's toes.
  • Easier to add new features. As the system grows over time, the difficulty in adding new features remains constant and relatively small.
  • If the solution is properly broken apart along bounded context lines, it becomes easy to convert pieces of it into microservices if needed.

Cons

  • This is a sophisticated architecture which requires a firm understanding of quality software principles, such as SOLID, Clean/Hexagonal Architecture, Domain-Driven Design, etc. Any team implementing such a solution will almost certainly require an expert to drive the solution and keep it from evolving the wrong way and accumulating technical debt.

  • Some practices presented here are not recommended for small to medium-sized applications without much business logic. There is added up-front complexity to support all those building blocks and layers, boilerplate code, abstractions, data mapping, etc. Thus, implementing a complete architecture like this is generally ill-suited to simple CRUD applications and could over-complicate such solutions. Some principles described below can be used in smaller applications, but should be implemented only after analyzing and understanding all the pros and cons.

Diagram

The Domain-Driven Hexagon diagram is based mostly on this one, plus others found online.

In short, data flow looks like this (from left to right):

  • A request/CLI command/event is sent to the controller as a plain DTO;
  • The controller parses this DTO, maps it to a Command/Query object and passes it to an Application service;
  • The Application service handles this Command/Query; it executes business logic using domain services and entities/aggregates, and uses the infrastructure layer through ports (interfaces);
  • The infrastructure layer maps data to the format it needs, retrieves/persists data from/to a database, uses adapters for other I/O communications (like sending an event to an external broker or calling external APIs), maps data back to the domain format and returns it to the Application service;
  • After the Application service finishes its job, it returns data/confirmation back to the controller;
  • The controller returns data back to the user (if the application has presenters/views, those are returned instead).
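The steps above can be sketched, framework-free and greatly simplified, as follows. All names here (CreateUserDto, UserRepositoryPort, etc.) are illustrative assumptions, not the repo's actual API:

```typescript
// 1. A plain DTO arrives at the controller.
interface CreateUserDto {
  email: string;
}

// 2. The controller maps the DTO to a Command object.
class CreateUserCommand {
  constructor(public readonly email: string) {}
}

// A port (interface) through which the application service reaches infrastructure.
interface UserRepositoryPort {
  save(email: string): string; // returns the new user's id
}

// 3. The application service handles the command, using the port.
class CreateUserService {
  constructor(private readonly repo: UserRepositoryPort) {}
  execute(command: CreateUserCommand): string {
    return this.repo.save(command.email);
  }
}

// 4. An infrastructure adapter implements the port (in-memory for this sketch).
class InMemoryUserRepository implements UserRepositoryPort {
  private users = new Map<string, string>();
  save(email: string): string {
    const id = `user-${this.users.size + 1}`;
    this.users.set(id, email);
    return id;
  }
}

// 5. The "controller" wires it together and returns data back to the caller.
const dto: CreateUserDto = { email: 'john@example.com' };
const service = new CreateUserService(new InMemoryUserRepository());
const createdId = service.execute(new CreateUserCommand(dto.email));
```

A real application would use a DI container and an actual database adapter instead of manual wiring, but the direction of dependencies stays the same.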

Each layer is in charge of its own logic and has building blocks that should usually follow the Single-responsibility principle when possible and when it makes sense (for example, using Repositories only for database access, Entities for business logic, etc.).

Keep in mind that different projects can have more or fewer steps/layers/building blocks than described here. Add more if the application requires it, and skip some if the application is not that complex and doesn't need all that abstraction.

General recommendation for any project: analyze how big/complex the application will be, find a compromise and use as many layers/building blocks as needed for the project and skip ones that may over-complicate things.

More details on each step below.

Modules

This project's code examples use separation by modules (also called components). Each module's name should reflect an important concept from the Domain and have its own folder with a dedicated codebase. Each business use case inside that module gets its own folder to store most of the things it needs (this is also called Vertical Slicing). It's easier to work on things that change together if those things are gathered relatively close to each other. Think of a module as a "box" that groups together related business logic.

Using modules is a great way to encapsulate parts of highly cohesive business domain rules.

Try to make every module independent and keep interactions between modules minimal. Think of each module as a mini application bounded by a single context. Consider module internals private and try to avoid direct imports between modules (like import SomeClass from '../SomeOtherModule'), since this creates tight coupling and can turn your code into spaghetti and the application into a big ball of mud.

A few tips to avoid coupling:

  • Try not to create dependencies between modules or use cases. Instead, move shared logic into a separate file and make both depend on that instead of depending on each other.
  • Modules can cooperate through a mediator or a public facade, hiding all private internals of the module to avoid misuse and giving public access only to certain pieces of functionality that are meant to be public.
  • Alternatively, modules can communicate with each other by using messages. For example, you can send commands using a command bus or subscribe to events that other modules emit (more info on events and command buses below).
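A minimal sketch of the public-facade approach, with hypothetical names (a wallet module whose entity stays internal while only the facade is exported to other modules):

```typescript
// Internal to the wallet module; other modules should never import this directly.
class WalletEntity {
  constructor(private balance: number) {}
  deposit(amount: number): number {
    this.balance += amount;
    return this.balance;
  }
}

// The module's only public surface: a narrow API that hides the internals.
class WalletFacade {
  private readonly wallet = new WalletEntity(0);
  deposit(amount: number): number {
    return this.wallet.deposit(amount);
  }
}
```

Other modules depend only on WalletFacade, so the entity can be refactored freely without breaking them.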

This ensures loose coupling: refactoring a module's internals is easier because the outside world depends only on the module's public interface, and if bounded contexts are defined and designed properly, each module can be separated into a microservice if needed without touching any domain logic or doing major refactoring.

Keep your modules small. You should be able to rewrite a module in a relatively short period of time. This applies not only to the modules pattern, but to software development in general: objects, functions, microservices, processes, etc. Keep them small and composable. This is incredibly powerful in the constantly changing environment of software development, since when your requirements change, changing small modules is much easier than changing a big program. You can just delete a module and rewrite it from scratch in a matter of days. This idea is further described in this talk: Greg Young - The art of destroying software.

Code Examples:

  • Check src/modules directory structure.
  • src/modules/user/commands - "commands" directory in a user module includes business use cases (commands) that a module can execute, each with its own Vertical Slice.

Read more:

Each module consists of layers described below.

Application Core

This is the core of the system which is built using DDD building blocks:

Domain layer:

  • Entities
  • Aggregates
  • Domain Services
  • Value Objects
  • Domain Errors

Application layer:

  • Application Services
  • Commands and Queries
  • Ports

Note: different implementations may have slightly different layer structures depending on the application's needs. Also, more layers and building blocks may be added if needed.


Application layer

Application Services

Application Services (also called "Workflow Services", "Use Cases", "Interactors", etc.) are used to orchestrate the steps required to fulfill the commands imposed by the client.

Application services:

  • Are typically used to orchestrate how the outside world interacts with your application and perform tasks required by the end users;
  • Contain no domain-specific business logic;
  • Operate on scalar types, transforming them into Domain types. A scalar type can be considered any type that's unknown to the Domain Model. This includes primitive types and types that don't belong to the Domain;
  • Use ports to declare dependencies on infrastructural services/adapters required to execute domain logic (ports are just interfaces; we will discuss this topic in detail below);
  • Fetch domain Entities/Aggregates (or anything else) from the database/external APIs (through ports/interfaces, with concrete implementations injected by the DI library);
  • Execute domain logic on those Entities/Aggregates (by invoking their methods);
  • When working with multiple Entities/Aggregates, use a Domain Service to orchestrate them;
  • Execute other out-of-process communications through ports (like event emits, sending emails, etc.);
  • Can be used as Command/Query handlers;
  • Should not depend on other application services, since that may cause problems (like cyclic dependencies).

One service per use case is considered a good practice.
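A simplified sketch of such a service for a hypothetical "change email" use case. The names (ChangeEmailService, UserRepositoryPort, UserEntity) are illustrative, not the repo's actual API; the point is that the service only orchestrates, while the domain logic lives on the entity:

```typescript
class UserEntity {
  constructor(public readonly id: string, public email: string) {}
  changeEmail(newEmail: string): void {
    // Domain logic and invariant checks belong to the entity.
    if (!newEmail.includes('@')) throw new Error('Invalid email');
    this.email = newEmail;
  }
}

interface UserRepositoryPort {
  findById(id: string): UserEntity;
  save(user: UserEntity): void;
}

class ChangeEmailCommand {
  constructor(public readonly userId: string, public readonly newEmail: string) {}
}

class ChangeEmailService {
  constructor(private readonly repo: UserRepositoryPort) {}

  // Orchestration only: fetch, invoke domain logic, persist.
  execute(command: ChangeEmailCommand): void {
    const user = this.repo.findById(command.userId);
    user.changeEmail(command.newEmail);
    this.repo.save(user);
  }
}

// Wiring with a trivial in-memory repository for the sketch.
const store = new Map<string, UserEntity>([
  ['u1', new UserEntity('u1', 'old@example.com')],
]);
const repo: UserRepositoryPort = {
  findById: (id) => store.get(id)!,
  save: (user) => { store.set(user.id, user); },
};
new ChangeEmailService(repo).execute(new ChangeEmailCommand('u1', 'new@example.com'));
```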

What are "Use Cases"?

wiki:

In software and systems engineering, a use case is a list of actions or event steps typically defining the interactions between a role (known in the Unified Modeling Language as an actor) and a system to achieve a goal.

Use cases are, simply put, a list of actions required from an application.


Example file: create-user.service.ts

More about services:

Commands and Queries

This principle is called Command–Query Separation (CQS). When possible, methods should be separated into Commands (state-changing operations) and Queries (data-retrieval operations). To make a clear distinction between those two types of operations, input objects can be represented as Commands and Queries. Before a DTO reaches the domain, it's converted into a Command/Query object.

Commands

A Command is an object that signals user intent, for example CreateUserCommand. It describes a single action (but does not perform it).

Commands are used for state-changing actions, like creating a new user and saving it to the database. Create, Update and Delete operations are considered state-changing.

Data retrieval is the responsibility of Queries, so Command methods should not return business data.

Some CQS purists may say that a Command shouldn't return anything at all. But you will need at least an ID of a created item to access it later. To achieve that you can let clients generate a UUID (more info here: CQS versus server generated IDs).

Though, violating this rule and returning some metadata (like the ID of a created item, a redirect link, a confirmation message, a status, etc.) is a more practical approach than following dogmas.

Note: Command is similar but not the same as described here: Command Pattern. There are multiple definitions across the internet with similar but slightly different implementations.

To execute a command you can use a Command Bus instead of importing a service directly. This will decouple a command Invoker from a Receiver, so you can send your commands from anywhere without creating coupling.

Avoid command handlers executing other commands in this fashion: Command → Command. Instead, use events for that purpose, and execute next commands in a chain in an Event handler: Command → Event → Command.
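A toy in-memory command bus, to illustrate the decoupling of invoker from receiver. This is a hand-rolled sketch with hypothetical names (the repo itself uses the NestJS CQRS package for this):

```typescript
class CreateUserCommand {
  constructor(public readonly email: string) {}
}

interface CommandHandler<C> {
  execute(command: C): string; // returns only metadata (the created id)
}

class CommandBus {
  private handlers = new Map<Function, CommandHandler<any>>();

  register<C>(type: new (...args: any[]) => C, handler: CommandHandler<C>): void {
    this.handlers.set(type, handler);
  }

  // The invoker only knows the command object, not the handler behind it.
  execute<C extends object>(command: C): string {
    const handler = this.handlers.get(command.constructor);
    if (!handler) throw new Error(`No handler for ${command.constructor.name}`);
    return handler.execute(command);
  }
}

class CreateUserHandler implements CommandHandler<CreateUserCommand> {
  execute(command: CreateUserCommand): string {
    // ...create and persist the user here...
    return 'generated-user-id'; // an id, not business data
  }
}

const bus = new CommandBus();
bus.register(CreateUserCommand, new CreateUserHandler());
const newId = bus.execute(new CreateUserCommand('john@example.com'));
```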

Example files:

Read more:

Queries

A Query is similar to a Command. It belongs to the read model and signals the user's intent to find something, describing how to do it.

A Query is just a data retrieval operation and should not make any state changes (like writes to the database, files, third-party APIs, etc.). For this reason, in the read model we can bypass the domain and repository layers completely and query the database directly from a query handler.

Similarly to Commands, Queries can use a Query Bus if needed. This way you can query anything from anywhere without importing classes directly and avoid coupling.
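A sketch of a query handler that skips the domain layer and reads straight from the data source. The in-memory "table" and all names here are illustrative assumptions (the repo queries a real database via Slonik):

```typescript
class FindUserByEmailQuery {
  constructor(public readonly email: string) {}
}

// A plain read model: no domain objects, no behavior, just data.
interface UserReadModel {
  id: string;
  email: string;
}

class FindUserByEmailHandler {
  // Stand-in for a database table.
  constructor(private readonly usersTable: UserReadModel[]) {}

  // Read-only: no state changes, no domain or repository layer involved.
  execute(query: FindUserByEmailQuery): UserReadModel | undefined {
    return this.usersTable.find((u) => u.email === query.email);
  }
}

const handler = new FindUserByEmailHandler([{ id: 'u1', email: 'john@example.com' }]);
const found = handler.execute(new FindUserByEmailQuery('john@example.com'));
```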

Example files:


By enforcing Command and Query separation, the code becomes simpler to understand: one changes something, the other just retrieves data.

Also, following CQS from the start will facilitate separating write and read models into different databases if someday in the future the need for it arises.

Note: this repo uses NestJS CQRS package that provides a command/query bus.

Read more about CQS and CQRS:


Ports

Ports are interfaces that define contracts to be implemented by adapters. For example, a port can abstract technology details (like what type of database is used to retrieve some data), and the infrastructure layer can implement an adapter to execute actions that are more related to technology details than to business logic. Ports act as abstractions for technology details that the business logic does not care about. The name "port" comes from Hexagonal Architecture.

In the Application Core, dependencies point inwards. Outer layers can depend on inner layers, but inner layers never depend on outer layers. The Application Core shouldn't depend on frameworks or access external resources directly. Any external calls to out-of-process resources, or retrieval of data from remote processes, should be done through ports (interfaces), with class implementations created somewhere in the infrastructure layer and injected into the application's core (Dependency Injection and Dependency Inversion). This makes business logic independent of technology, facilitates testing, and allows you to plug/unplug/swap any external resources easily, making the application modular and loosely coupled.

  • Ports are basically just interfaces that define what has to be done and don't care about how it's done.
  • Ports can be created to abstract side effects like I/O operations and database access, technology details, invasive libraries, legacy code etc. from the Domain.
  • By abstracting side effects, you can test your application logic in isolation by mocking the implementation. This can be useful for unit testing.
  • Ports should be created to fit the Domain's needs, not simply mimic the tools' APIs.
  • Mock implementations can be passed to ports while testing. Mocking makes your tests faster and independent of the environment.
  • Abstraction provided by ports can be used to inject different implementations to a port if needed (polymorphism).
  • When designing ports, remember the Interface segregation principle. Split large interfaces into smaller ones when it makes sense, but also keep in mind to not overdo it when not necessary.
  • Ports can also help to delay decisions. The Domain layer can be implemented even before deciding what technologies (frameworks, databases etc.) will be used.

Note: since most port implementations are injected and executed in application services, the Application Layer can be a good place to keep those ports. But there are times when the Domain Layer's business logic depends on executing some external resource; in such cases those ports can be put in the Domain Layer.

Note: abusing ports/interfaces may lead to unnecessary abstractions and overcomplicate your application. In many cases it's totally fine to depend on a concrete implementation instead of abstracting it with an interface. Think carefully about whether you really need an abstraction before using it.
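A minimal sketch of a port with a swappable adapter. EmailSenderPort and the service names are hypothetical; the fake adapter doubles as the test-time mock described above:

```typescript
// The port: defined by what the domain needs, not by any particular tool.
interface EmailSenderPort {
  send(to: string, body: string): void;
}

// An adapter for tests / local development; a real adapter would wrap an
// SMTP client or a mail API behind the same interface.
class FakeEmailSender implements EmailSenderPort {
  public sent: Array<{ to: string; body: string }> = [];
  send(to: string, body: string): void {
    this.sent.push({ to, body });
  }
}

class WelcomeUserService {
  // The service depends only on the port, never on a concrete technology.
  constructor(private readonly emailSender: EmailSenderPort) {}
  execute(email: string): void {
    this.emailSender.send(email, 'Welcome!');
  }
}

const fakeSender = new FakeEmailSender();
new WelcomeUserService(fakeSender).execute('john@example.com');
```

Swapping mail providers then means writing a new adapter; the service and the rest of the core stay untouched.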

Example files:

Read more:


Domain Layer

This layer contains the application's business rules.

Domain should operate using domain objects described by ubiquitous language. Most important domain building blocks are described below.

Entities

Entities are the core of the domain. They encapsulate Enterprise-wide business rules and attributes. An entity can be an object with properties and methods, or it can be a set of data structures and functions.

Entities represent business models and express what properties a particular model has, what it can do, when and at what conditions it can do it. An example of business model can be a User, Product, Booking, Ticket, Wallet etc.

Entities must always protect their invariants:

Domain entities should always be valid entities. There are a certain number of invariants for an object that should always be true. For example, an order item object always has to have a quantity that must be a positive integer, plus an article name and price. Therefore, invariants enforcement is the responsibility of the domain entities (especially of the aggregate root) and an entity object should not be able to exist without being valid.

Entities:

  • Contain Domain business logic. Avoid having business logic in your services when possible; that leads to an Anemic Domain Model (Domain Services are an exception for business logic that can't be put in a single entity).
  • Have an identity that defines them and makes them distinguishable from others. An entity's identity is consistent during its life cycle.
  • Equality between two entities is determined by comparing their identifiers (usually the id field).
  • Can contain other objects, such as other entities or value objects.
  • Are responsible for collecting all the understanding of state and how it changes in the same place.
  • Are responsible for coordinating operations on the objects they own.
  • Know nothing about upper layers (services, controllers, etc.).
  • Should be modelled to accommodate business logic, not some database schema.
  • Must protect their invariants; try to avoid public setters - update state using methods and execute invariant validation on each update if needed (this can be a simple validate() method that checks that business rules are not violated by the update).
  • Must be consistent on creation. Validate Entities and other domain objects on creation and throw an error on the first failure. Fail Fast.
  • Avoid no-arg (empty) constructors; accept and validate all required properties in a constructor (or in a factory method like create()).
  • For optional properties that require complex setup, a Fluent interface and the Builder Pattern can be used.
  • Make Entities partially immutable. Identify what properties shouldn't change after creation and make them readonly (for example id or createdAt).
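The rules above can be sketched with a hypothetical OrderItem entity (simplified; the repo's real entities build on shared base classes in src/):

```typescript
class OrderItem {
  public readonly id: string; // immutable after creation
  private _quantity: number;

  constructor(id: string, quantity: number) {
    this.id = id;
    this._quantity = quantity;
    this.validate(); // fail fast: an invalid entity cannot be created
  }

  get quantity(): number {
    return this._quantity;
  }

  // No public setter: state changes go through methods that re-validate.
  changeQuantity(quantity: number): void {
    this._quantity = quantity;
    this.validate();
  }

  // Invariant: quantity must always be a positive integer.
  private validate(): void {
    if (!Number.isInteger(this._quantity) || this._quantity <= 0) {
      throw new Error('OrderItem quantity must be a positive integer');
    }
  }
}
```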

Note: a lot of people tend to create one module per entity, but this approach is not very good. Each module may have multiple entities. One thing to keep in mind is that putting entities in a single module requires those entities to have related business logic; don't group unrelated entities in one module.

Example files:

Read more:


Aggregates

An Aggregate is a cluster of domain objects that can be treated as a single unit. It encapsulates entities and value objects which conceptually belong together. It also contains a set of operations that can be performed on those domain objects.

  • Aggregates help to simplify the domain model by gathering multiple domain objects under a single abstraction.
  • Aggregates should not be influenced by the data model. Associations between domain objects are not the same as database relationships.
  • Aggregate root is an entity that contains other entities/value objects and all logic to operate them.
  • Aggregate root has global identity (UUID / GUID / primary key). Entities inside the aggregate boundary have local identities, unique only within the Aggregate.
  • The Aggregate root is a gateway to the entire aggregate. Any references from outside the aggregate should only go to the aggregate root.
  • Any operations on an aggregate must be transactional operations. Either everything gets saved/updated/deleted or nothing.
  • Only Aggregate Roots can be obtained directly with database queries. Everything else must be done through traversal.
  • Similar to Entities, aggregates must protect their invariants through entire lifecycle. When a change to any object within the Aggregate boundary is committed, all invariants of the whole Aggregate must be satisfied. Simply said, all objects in an aggregate must be consistent, meaning that if one object inside an aggregate changes state, this shouldn't conflict with other domain objects inside this aggregate (this is called Consistency Boundary).
  • Objects within the Aggregate can reference other Aggregate roots via their globally unique identifier (id). Avoid holding a direct object reference.
  • Try to avoid aggregates that are too big; this can lead to performance and maintenance problems.
  • Aggregates can publish Domain Events (more on that below).

All of these rules just come from the idea of creating a boundary around Aggregates. The boundary simplifies business model, as it forces us to consider each relationship very carefully, and within a well-defined set of rules.

In summary, if you combine multiple related entities and value objects inside one root Entity, this root Entity becomes an Aggregate Root, and this cluster of related entities and value objects becomes an Aggregate.
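A simplified sketch with a hypothetical Order aggregate: OrderLine entities have only local identity, all changes go through the root, and the root enforces an invariant over the whole cluster:

```typescript
class OrderLine {
  // Local identity: lineNo is unique only within its aggregate.
  constructor(public readonly lineNo: number, public readonly price: number) {}
}

class Order {
  private readonly lines: OrderLine[] = [];

  // The root carries the aggregate's global identity.
  constructor(public readonly id: string, private readonly creditLimit: number) {}

  // Outside code never touches OrderLine directly; all changes go through
  // the root, which keeps the whole aggregate consistent.
  addLine(price: number): void {
    if (this.total() + price > this.creditLimit) {
      throw new Error('Order total may not exceed the credit limit');
    }
    this.lines.push(new OrderLine(this.lines.length + 1, price));
  }

  total(): number {
    return this.lines.reduce((sum, line) => sum + line.price, 0);
  }
}
```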

Example files:

Read more:


Domain Events

Domain Event indicates that something happened in a domain that you want other parts of the same domain (in-process) to be aware of. Domain events are just messages pushed to an in-memory Domain Event dispatcher.

For example, if a user buys something, you may want to:

  • Update his shopping cart;
  • Withdraw money from his wallet;
  • Create a new shipping order;
  • Perform other domain operations that are not a concern of an aggregate that executes a "buy" command.

The typical approach involves executing all this logic in a service that performs a "buy" operation. However, this creates coupling between different subdomains.

An alternative approach would be publishing a Domain Event. If executing a command related to one aggregate instance requires additional domain rules to be run on one or more additional aggregates, you can design and implement those side effects to be triggered by Domain Events. Propagation of state changes across multiple aggregates within the same domain model can be performed by subscribing to a concrete Domain Event and creating as many event handlers as needed. This prevents coupling between aggregates.

Domain Events may be useful for creating an audit log to track all changes to important entities by saving each event to the database. Read more on why audit logs may be useful: Why soft deletes are evil and what to do instead.

All changes caused by Domain Events across multiple aggregates in a single process can be saved in a single database transaction. This approach ensures consistency and integrity of your data. Wrapping an entire flow in a transaction or using patterns like Unit of Work or similar can help with that. Keep in mind that abusing transactions can create bottlenecks when multiple users try to modify single record concurrently. Use it only when you can afford it, otherwise go for other approaches (like eventual consistency).

There are multiple ways of implementing an event bus for Domain Events, for example by using ideas from patterns like Mediator or Observer.
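A toy in-memory dispatcher along those lines (hypothetical names; the repo wires this differently). One part of the domain reacts to a UserCreatedDomainEvent without the publisher knowing about it:

```typescript
class UserCreatedDomainEvent {
  constructor(public readonly userId: string) {}
}

type EventHandler<E> = (event: E) => void;

class DomainEventDispatcher {
  private handlers = new Map<Function, EventHandler<any>[]>();

  subscribe<E>(type: new (...args: any[]) => E, handler: EventHandler<E>): void {
    const list = this.handlers.get(type) ?? [];
    list.push(handler);
    this.handlers.set(type, list);
  }

  publish<E extends object>(event: E): void {
    for (const handler of this.handlers.get(event.constructor) ?? []) {
      handler(event);
    }
  }
}

const dispatcher = new DomainEventDispatcher();
const walletsCreatedFor: string[] = [];

// Another subdomain reacts without being coupled to the publisher,
// e.g. creating a wallet for every new user.
dispatcher.subscribe(UserCreatedDomainEvent, (event) => {
  walletsCreatedFor.push(event.userId);
});

dispatcher.publish(new UserCreatedDomainEvent('u1'));
```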

Examples:

To have a better understanding on domain events and implementation read this:

Additional notes:

  • When using only events for complex workflows with a lot of steps, it will be hard to track everything that is happening across the application. One event may trigger another one, then another, and so on. To track the entire workflow you'll have to go to multiple places and search for the event handler of each step, which is hard to maintain. In this case, using a service/orchestrator/mediator might be a preferred approach compared to only using events, since you will have the entire workflow in one place. This might create some coupling, but it is easier to maintain. Don't rely on events only; pick the right tool for the job.

  • In some cases you will not be able to save all changes done by your events to multiple aggregates in a single transaction. For example, if you are using microservices that span a transaction across multiple services, or the Event Sourcing pattern that has a single stream per aggregate. In this case, saving events across multiple aggregates can be eventually consistent (for example by using Sagas with compensating events, a Process Manager, or something similar).

Integration Events

Out-of-process communications (calling microservices, external APIs) are called Integration Events. If a Domain Event needs to be sent to an external process, the domain event handler should send an Integration Event.

Integration Events usually should be published only after all Domain Events have finished executing and all changes have been saved to the database.

To handle integration events in microservices you may need an external message broker / event bus like RabbitMQ or Kafka together with patterns like Transactional outbox, Change Data Capture, Sagas or a Process Manager to maintain eventual consistency.

Read more:

For integration events in distributed systems here are some patterns that may be useful:


Domain Services

Eric Evans, Domain-Driven Design:

Domain services are used for "a significant process or transformation in the domain that is not a natural responsibility of an ENTITY or VALUE OBJECT"

  • Domain Service is a specific type of domain layer class that is used to execute domain logic that relies on two or more Entities.
  • Domain Services are used when putting the logic on a particular Entity would break encapsulation and require the Entity to know about things it really shouldn't be concerned with.
  • Domain services are very granular, while application services are a facade purposed with providing an API.
  • Domain services operate only on types belonging to the Domain. They contain meaningful concepts that can be found within the Ubiquitous Language. They hold operations that don't fit well into Value Objects or Entities.

Value objects

Some attributes and behaviors can be moved out of the entity itself and put into Value Objects.

Value Objects:

  • Have no identity. Equality is determined through structural equality (comparing properties).
  • Are immutable.
  • Can be used as an attribute of entities and other value objects.
  • Explicitly define and enforce important constraints (invariants).

A value object shouldn’t be just a convenient grouping of attributes; it should form a well-defined concept in the domain model. This is true even if it contains only one attribute. When modeled as a conceptual whole, it carries meaning when passed around, and it can uphold its constraints.

Imagine you have a User entity which needs to have an address of a user. Usually an address is simply a complex value that has no identity in the domain and is composed of multiple other values, like country, street, postalCode etc., so it can be modeled and treated as a Value Object with its own business logic.

Value object isn’t just a data structure that holds values. It can also encapsulate logic associated with the concept it represents.
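The Address example above could be sketched like this (simplified and hypothetical: immutability via readonly, structural equality via an equals method, a trivial invariant check in the constructor):

```typescript
class Address {
  constructor(
    public readonly country: string,
    public readonly street: string,
    public readonly postalCode: string,
  ) {
    // Invariant enforced on creation; an invalid Address cannot exist.
    if (!country || !street || !postalCode) {
      throw new Error('Address parts must not be empty');
    }
  }

  // No identity: two addresses with the same parts are the same value.
  equals(other: Address): boolean {
    return (
      this.country === other.country &&
      this.street === other.street &&
      this.postalCode === other.postalCode
    );
  }
}

const a = new Address('US', '1 Main St', '10001');
const b = new Address('US', '1 Main St', '10001');
```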

Example files:

Read more about Value Objects:

Domain Invariants

Domain invariants are the policies and conditions that are always met for the Domain in a particular context. Invariants determine what is possible or prohibited in that context.

Invariants enforcement is the responsibility of domain objects (especially of the entities and aggregate roots).

There are a certain number of invariants for an object that should always be true. For example:

  • When sending money, amount must always be a positive integer, and there always must be a receiver credit card number in a correct format;
  • Client cannot purchase a product that is out of stock;
  • Client's wallet cannot have less than 0 balance;
  • etc.

If the business has some rules similar to described above, the domain object should not be able to exist without following those rules.

Below we will discuss some validation techniques for your domain objects.

Example files:

  • wallet.entity.ts - notice validate method. This is a simplified example of enforcing a domain invariant.

Read more:

Replacing primitives with Value Objects

Most code bases operate on primitive types – strings, numbers, etc. In the Domain Model, this level of abstraction may be too low.

Significant business concepts can be expressed using specific types and classes. Value Objects can be used instead of primitives to avoid primitive obsession. So, for example, an email of type string:

const email: string = 'john@example.com';

could be represented as a Value Object instead:

export class Email extends ValueObject<string> {
  constructor(value: string) {
    super({ value });
    // Simplified validation on creation (in a real codebase this may live in
    // the ValueObject base class or a dedicated validate method).
    if (!value.includes('@')) {
      throw new Error('Email has incorrect format');
    }
  }

  get value(): string {
    return this.props.value;
  }
}

const email: Email = new Email('john@example.com');

Now the only way to make an email is to create a new instance of the Email class first; this ensures it will be validated on creation and a wrong value won't get into Entities.

Also, an important behavior of the domain primitive is encapsulated in one place. By having the domain primitive own and control domain operations, you reduce the risk of bugs caused by lack of detailed domain knowledge of the concepts involved in the operation.

Creating an object for primitive values may be cumbersome, but it somewhat forces a developer to study the domain in more detail instead of just throwing a primitive type around without even thinking about what that value represents in the domain.

Using Value Objects for primitive types is also called a domain primitive. The concept and naming are proposed in the book "Secure by Design".

Using Value Objects instead of primitives:

  • Makes code easier to understand by using ubiquitous language instead of just string.
  • Improves security by ensuring invariants of every property.
  • Encapsulates specific business rules associated with a value.

Value Object can represent a typed value in domain (a domain primitive). The goal here is to encapsulate validations and business logic related only to the represented fields and make it impossible to pass around raw values by forcing a creation of valid Value Objects first. This object only accepts values which make sense in its context.

If every argument and return value of a method is valid by definition, you’ll have input and output validation in every single method in your codebase without any extra effort. This will make application more resilient to errors and will protect it from a whole class of bugs and security vulnerabilities caused by invalid input data.

Without domain primitives, the remaining code needs to take care of validation, formatting, comparing, and lots of other details. Entities represent long-lived objects with a distinguished identity, such as articles in a news feed, rooms in a hotel, and shopping carts in online sales. The functionality in a system often centers around changing the state of these objects: hotel rooms are booked, shopping cart contents are paid for, and so on. Sooner or later the flow of control will be guided to some code representing these entities. And if all the data is transmitted as generic types such as int or String , responsibilities fall on the entity code to validate, compare, and format the data, among other tasks. The entity code will be burdened with a lot of tasks, rather than focusing on the central business flow-of-state changes that it models. Using domain primitives can counteract the tendency for entities to grow overly complex.

Quote from: Secure by design: Chapter 5.3 Standing on the shoulders of domain primitives

Also, an alternative to creating an object may be a type alias (ideally using nominal types), just to give this primitive a semantic meaning.
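TypeScript has no built-in nominal types, but they are commonly approximated with a "branded" type alias. A hypothetical sketch (the brand name and factory function are illustrative):

```typescript
// Hypothetical "branded" type alias: a compile-time nominal type for a primitive.
// At runtime it's still a plain string, so there is no object-creation overhead.
type EmailAddress = string & { readonly __brand: 'EmailAddress' };

// Factory guarding the invariant; the only way to obtain an EmailAddress.
function emailAddress(value: string): EmailAddress {
  if (!value.includes('@')) throw new Error(`Invalid email: ${value}`);
  return value as EmailAddress;
}

// Accepts only EmailAddress, not an arbitrary string:
function notify(to: EmailAddress): string {
  return `notifying ${to}`;
}

const addr = emailAddress('user@example.com');
notify(addr); // ok
// notify('raw string'); // <-- compile-time error
```

This gives the semantic meaning and compile-time safety of a Value Object without the runtime cost of wrapping every value in a class.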

Warning: Don't include Value Objects in objects that can be sent to other processes, like dtos, events, database models etc. Serialize them to primitive types first.

Note: In languages like TypeScript, creating value objects for single values/primitives adds some extra complexity and boilerplate code, since you need to access an underlying value by doing something like email.value. Also, it can have performance penalties due to the creation of so many objects. This technique works best in languages like Scala with its value classes, which represent such classes as primitives at runtime, meaning that an Email object will be represented as a String at runtime.

Note: if you are using Node.js, Runtypes is a nice library that you can use instead of creating your own value objects for primitives.

Note: Some people say that primitive obsession is a code smell; others consider making a class/object for every primitive to be overengineering (unless you are using Scala with its value classes). For less complex and smaller projects it's definitely overkill. For bigger projects, there are people who advocate both for and against this approach. If you notice that creating a class for every primitive doesn't give you much benefit, create classes only for those primitives that have specific rules or behavior, or validate only outside the domain using some validation framework. Here are some thoughts on this topic: From Primitive Obsession to Domain Modelling - Over-engineering?.

Recommended reading:

Make illegal states unrepresentable

Use Value Objects/Domain Primitives and Types (Algebraic Data Types (ADT)) to make illegal states impossible to represent in your program.

Some people recommend using objects for every value:

Quote from John A De Goes:

Making illegal states unrepresentable is all about statically proving that all runtime values (without exception) correspond to valid objects in the business domain. The effect of this technique on eliminating meaningless runtime states is astounding and cannot be overstated.

Let's distinguish two types of protection from illegal states: at compile time and at runtime.

Validation at compile time

Types give useful semantic information to a developer. Good code should be easy to use correctly, and hard to use incorrectly. A type system can be a great help here. It can prevent some nasty errors at compile time, so the IDE will show type errors right away.

The simplest example may be using enums instead of constants, and using those enums as an input type for something. When something that is not intended is passed, the IDE will show a type error:

export enum UserRoles {
  admin = 'admin',
  moderator = 'moderator',
  guest = 'guest',
}

const userRole: UserRoles = 'some string'; // <-- error

Or, for example, imagine that business logic requires having the contact info of a person: either an email, or a phone, or both. Both email and phone could be represented as optional, for example:

interface ContactInfo {
  email?: Email;
  phone?: Phone;
}

But what happens if both are not provided by a programmer? Business rule violated. Illegal state allowed.

Solution: this could be presented as a union type

type ContactInfo = Email | Phone | [Email, Phone];

Now either an Email, or a Phone, or both must be provided. If nothing is provided, the IDE will show a type error right away. Now the business rule validation is moved from runtime to compile time, which makes the application more secure and gives faster feedback when something is not used as intended.

This is called a typestate pattern.

The typestate pattern is an API design pattern that encodes information about an object’s run-time state in its compile-time type.
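A minimal typestate illustration in TypeScript (hypothetical class names; a sketch of the idea rather than a full implementation): state transitions return a different type, so invalid operations simply don't exist on the wrong state.

```typescript
// Hypothetical typestate sketch: the object's runtime state is encoded in
// its compile-time type, so calling send() on a closed connection is a
// compile-time error rather than a runtime check.
class ClosedConnection {
  open(): OpenConnection {
    return new OpenConnection();
  }
}

class OpenConnection {
  send(data: string): string {
    return `sent: ${data}`;
  }
  close(): ClosedConnection {
    return new ClosedConnection();
  }
}

const conn = new ClosedConnection();
// conn.send('hi'); // <-- compile-time error: send() doesn't exist on ClosedConnection
const opened = conn.open();
opened.send('hi'); // ok
```

The illegal state "sending on a closed connection" is unrepresentable: the type system rejects it before the program ever runs.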

Read more:

Validation at runtime

Data should not be trusted. There are a lot of cases when invalid data may end up in a domain. For example, if data comes from external API, database, or if it's just a programmer error.

Things that can't be validated at compile time (like user input) are validated at runtime.

The first line of defense is validation of user input DTOs.

The second line of defense is Domain Objects. Entities and value objects have to protect their invariants. Having some validation rules here will protect their state from corruption. You can use techniques like Design by Contract by defining preconditions in object constructors and checking postconditions and invariants before saving an object to the database.

Enforcing self-validation of your domain objects will inform you immediately when data is corrupted. Not validating domain objects allows them to be in an incorrect state, which leads to problems.
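A sketch of Design by Contract in a domain object (the entity name and rules are illustrative): preconditions are enforced in the constructor, and the invariant can be re-checked before persisting.

```typescript
// Hypothetical self-validating entity: preconditions in the constructor,
// invariant re-checked (e.g. by a repository) before saving.
class Order {
  constructor(
    public readonly items: string[],
    public readonly total: number,
  ) {
    // Preconditions: an order must have items and a non-negative total.
    if (items.length === 0) throw new Error('Order must contain at least one item');
    if (total < 0) throw new Error('Order total cannot be negative');
  }

  // Invariant check before the object crosses the always-valid boundary.
  validate(): void {
    if (this.items.length === 0 || this.total < 0) {
      throw new Error('Order invariant violated');
    }
  }
}
```

An `Order` that violates its invariants can never be constructed, so downstream code never has to re-check it.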

By combining compile and runtime validations, using objects instead of primitives, enforcing self-validation and invariants of your domain objects, using Design by contract, Algebraic Data Types (ADT) and typestate pattern, and other similar techniques, you can achieve an architecture where it's hard, or even impossible, to end up in illegal states, thus improving security and robustness of your application dramatically (at a cost of extra boilerplate code).

Recommended to read:

Guarding vs validating

You may have noticed that we do validation in multiple places:

  1. First when user input is sent to our application. In our example we use DTO decorators: create-user.request-dto.ts.
  2. Second time in domain objects, for example: address.value-object.ts.

So, why are we validating things twice? Let's call the second validation "guarding", and distinguish between guarding and validating:

  • Guarding is a failsafe mechanism. Domain layer views it as invariants to comply with always-valid domain model.
  • Validation is a filtration mechanism. Outside layers view them as input validation rules.

This difference leads to different treatment of violations of these business rules. An invariant violation in the domain model is an exceptional situation and should be met with throwing an exception. On the other hand, there’s nothing exceptional in external input being incorrect.

The input coming from the outside world should be filtered out before passing it further to the domain model. It’s the first line of defense against data inconsistency. At this stage, any incorrect data is denied with corresponding error messages. Once the filtration has confirmed that the incoming data is valid it's passed to a domain. When the data enters the always-valid domain boundary, it's assumed to be valid and any violation of this assumption means that you’ve introduced a bug. Guards help to reveal those bugs. They are the failsafe mechanism, the last line of defense that ensures data in the always-valid boundary is indeed valid. Guards comply with the Fail Fast principle by throwing runtime exceptions.

Domain classes should always guard themselves against becoming invalid.

For preventing null/undefined values, empty objects and arrays, incorrect input length, etc., a library of guards can be created.

Example file: guard.ts
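A much-simplified, hypothetical version of such a guard helper might look like this (method names and checks are illustrative, not the actual contents of guard.ts):

```typescript
// Hypothetical minimal guard helpers for common checks:
class Guard {
  // True for null, undefined, empty strings, empty arrays and empty objects.
  static isEmpty(value: unknown): boolean {
    if (value === null || value === undefined) return true;
    if (typeof value === 'string' || Array.isArray(value)) return value.length === 0;
    if (typeof value === 'object') return Object.keys(value as object).length === 0;
    return false;
  }

  // Checks that a string or array length is within [min, max].
  static lengthIsBetween(value: string | unknown[], min: number, max: number): boolean {
    if (Guard.isEmpty(value)) return false;
    return value.length >= min && value.length <= max;
  }
}
```

Value Objects and Entities can then call these guards in their constructors to fail fast on corrupted data.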

Keep in mind that not all validation/guarding can be done in a single domain object; it should validate only the rules shared by all contexts. There are cases when validation may differ depending on the context, or one field may involve another field, or even a different entity. Handle those cases accordingly.

Read more:

Note: Using validation library instead of custom guards

Instead of using custom guards you could use an external validation library, but tying the domain to external libraries is not a good practice and is not usually recommended.

Exceptions can be made if needed, especially for very specific validation libraries that validate only one thing (like specific IDs, for example a Bitcoin wallet address). Tying only one or just a few Value Objects to such a specific library won't cause any harm, unlike a general-purpose validation library, which would be tied to the domain everywhere and would be troublesome to replace in every Value Object if the old library is no longer maintained, contains critical bugs, or is compromised by hackers.

Though, it's fine to do full sanity checks using a validation framework or library outside the domain (for example class-validator decorators in DTOs), and do only some basic checks (guarding) inside of domain objects (besides business rules), like checking for null or undefined, checking length, matching against a simple regexp, etc., to check whether the value makes sense and for extra security.

Note about using regexp

Be careful with custom regexp validations for things like validating email. Only use custom regexp for very simple rules and, if possible, let a validation library do its job on more difficult ones, to avoid problems in case your regexp is not good enough.

Also, keep in mind that a custom regexp that does the same type of validation as one already performed by a validation library outside the domain may create conflicts between your regexp and the one used by the validation library.

For example, a value can be accepted as valid by the validation library, but the Value Object may throw an error because the custom regexp is not good enough (validating email is more complex than just copy-pasting a regular expression found on Google). Though, a value can be validated by a simple rule that is always true and won't cause any conflicts, like "every email must contain an @". Try finding and validating only patterns that won't cause conflicts.


There are other strategies for how to do validation inside the domain, like passing a validation schema as a dependency when creating a new Value Object, but this creates extra complexity.

Whether or not to use an external library/framework for validation inside the domain is a tradeoff. Analyze all the pros and cons and choose what is more appropriate for your application.

For some projects, especially smaller ones, it might be easier and more appropriate to just use a validation library/framework.

Domain Errors

The application's core and domain layers shouldn't throw HTTP exceptions or statuses, since they shouldn't know in what context they are used. They can be used by anything: an HTTP controller, a Microservice event handler, a Command Line Interface, etc. A better approach is to create custom error classes with appropriate error codes.

Exceptions are for exceptional situations. Complex domains usually have a lot of errors that are not exceptional, but a part of a business logic (like "seat already booked, choose another one"). Those errors may need special handling. In those cases returning explicit error types can be a better approach than throwing.

Returning an error instead of throwing it explicitly shows the type of each error that a method can return, so you can handle it accordingly. It can make error handling and tracing easier.

To help with that you can create Algebraic Data Types (ADTs) for your errors and use some kind of Result object type with a Success or a Failure condition (a monad, like Either in functional languages such as Haskell or Scala). Unlike throwing exceptions, this approach allows defining a type (ADT) for every error and lets you see and handle errors explicitly instead of using try/catch, avoiding exceptions that are invisible at compile time. For example:

// User errors:
class UserError extends Error {
  /* ... */
}

class UserAlreadyExistsError extends UserError {
  /* ... */
}

class IncorrectUserAddressError extends UserError {
  /* ... */
}

// ... other user errors
// Sum type for user errors
type CreateUserError = UserAlreadyExistsError | IncorrectUserAddressError;

async function createUser(
  command: CreateUserCommand,
): Promise<Result<UserEntity, CreateUserError>> {
  // ^ explicitly showing what the function returns
  if (await userRepo.exists(command.email)) {
    return Err(new UserAlreadyExistsError()); // <- returning an Error
  }
  if (!validate(command.address)) {
    return Err(new IncorrectUserAddressError());
  }
  // else
  const user = UserEntity.create(command);
  await userRepo.save(user);
  return Ok(user);
}

This approach gives us a fixed set of expected error types, so we can decide what to do with each:

/* in HTTP context we want to convert each error to an 
error with a corresponding HTTP status code: 409, 400 or 500 */
const result = await this.commandBus.execute(command);
return match(result, {
  Ok: (id: string) => new IdResponse(id),
  Err: (error: Error) => {
    if (error instanceof UserAlreadyExistsError)
      throw new ConflictHttpException(error.message);
    if (error instanceof IncorrectUserAddressError)
      throw new BadRequestException(error.message);
    throw error;
  },
});

Throwing makes errors invisible to the consumer of your functions/methods (until those errors happen at runtime, or until you dig deep into the source code and find them). This means those errors are less likely to be handled properly.

Returning errors instead of throwing them adds some extra boilerplate code, but can make your application robust and secure since errors are now explicitly documented and visible as return types. You decide what to do with each error: propagate it further, transform it, add extra metadata, or try to recover from it (for example, by retrying the operation).
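For illustration, a minimal Result type can be sketched in a few lines (a simplified, hypothetical version; real libraries provide richer APIs like map, unwrap and match):

```typescript
// Hypothetical minimal Result type as a discriminated union:
type Result<T, E> =
  | { ok: true; value: T }
  | { ok: false; error: E };

const Ok = <T>(value: T): Result<T, never> => ({ ok: true, value });
const Err = <E>(error: E): Result<never, E> => ({ ok: false, error });

class UserAlreadyExistsError extends Error {}

// The signature documents exactly which error the caller must handle.
function registerUser(
  email: string,
  existing: string[],
): Result<string, UserAlreadyExistsError> {
  if (existing.includes(email)) return Err(new UserAlreadyExistsError());
  return Ok(email);
}

const result = registerUser('a@b.com', []);
if (result.ok) {
  // TypeScript narrows to { ok: true; value: string } here
  result.value.toUpperCase();
}
```

Because `Result` is a discriminated union, checking `result.ok` narrows the type, so the compiler forces the consumer to deal with the error branch.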

Warning: Some errors/exceptions are non-recoverable and should be thrown, not returned. Returning technical exceptions (like connection failed, process out of memory, etc.) may cause security issues and goes against the Fail-fast principle. Instead of terminating the program flow immediately and logging the error, returning such an exception continues program execution and allows it to run in an incorrect state, which may lead to more unexpected errors. So it's generally better to throw an exception in those cases rather than return it. Analyze whether the error is "likely recoverable" or "likely unrecoverable". If an error is most likely recoverable, it's a great candidate for a Result object. If it's most likely unrecoverable, throw it.

Libraries you can use:

Example files:

  • user.errors.ts - user errors
  • create-user.service.ts - notice how Err(new UserAlreadyExistsError()) is returned instead of throwing it.
  • create-user.http.controller.ts - in a user http controller we match an error and decide what to do with it. If an error is UserAlreadyExistsError we throw a Conflict Exception which a user will receive as 409 - Conflict. If an error is unknown we just throw it and our framework will return it to the user as 500 - Internal Server Error.
  • create-user.cli.controller.ts - in a CLI controller we don't care about returning a correct status code so we just .unwrap() a result, which will just throw in case of an error.
  • exceptions folder contains some generic app exceptions (not domain specific)

Read more:

Using libraries inside Application's core

Whether to use libraries in the application core, and especially in the domain layer, is the subject of a lot of debate. In the real world, injecting every library instead of importing it directly is not always practical, so exceptions can be made for some single-responsibility libraries that help to implement domain logic (like working with numbers).

The main recommendation to keep in mind is that libraries imported into the application's core shouldn't expose:

  • Functionality to access any out-of-process resources (HTTP calls, database access, etc.);
  • Functionality not relevant to the domain (frameworks, technology details like ORMs, loggers, etc.);
  • Functionality that brings randomness (generating random IDs, timestamps, etc.), since this makes tests unpredictable (though in the TypeScript world it's not that big of a deal, since randomness can be mocked by a test library without using DI);
  • Also, if a library changes often or has a lot of dependencies of its own, it most likely shouldn't be used in the domain layer.

To use such libraries, consider creating an anti-corruption layer using the adapter or facade patterns.

We sometimes tolerate libraries in the center, but be careful with general purpose libraries that may scatter across many domain objects. It will be hard to replace those libraries if needed. Tying only one or just a few domain objects to some single-responsibility library should be fine. It's way easier to replace a specific library that is tied to one or few objects than a general purpose library that is everywhere.
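As a sketch of that idea, the domain can depend on a small facade interface while the library lives behind an adapter (all names here are illustrative; the delegated "library" is faked with plain arithmetic):

```typescript
// Port/facade the domain depends on, instead of a third-party library:
interface MoneyCalculator {
  add(a: string, b: string): string;
}

// Hypothetical adapter wrapping an imaginary third-party decimal library.
// If the library needs replacing, only this one file changes.
class DecimalLibAdapter implements MoneyCalculator {
  add(a: string, b: string): string {
    // A real adapter would delegate to the library here;
    // simplified with plain number arithmetic for the sketch:
    return (Number(a) + Number(b)).toFixed(2);
  }
}
```

Tying the library to one adapter instead of scattering its calls across many domain objects keeps it easy to replace.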

In addition to different libraries there are Frameworks. Frameworks can be a real nuisance, because by definition they want to be in control, and it's hard to replace a Framework later when your entire application is glued to it. It's fine to use Frameworks in outside layers (like infrastructure), but keep your domain clean of them when possible. You should be able to extract your domain layer and build a new infrastructure around it using any other framework without breaking your business logic.

NestJS does a good job, as it uses decorators which are not very intrusive, so you could use decorators like @Inject() without affecting your business logic at all, and it's relatively easy to remove or replace it when needed. Don't give up on frameworks completely, but keep them in boundaries and don't let them affect your business logic.

Offload as many irrelevant responsibilities as possible from the core, especially from the domain layer. In addition, try to minimize the usage of dependencies in general. The more dependencies your software has, the more potential errors and security holes it will have. One technique for making software more robust is to minimize what your software depends on: the less that can go wrong, the less will go wrong. On the other hand, removing all dependencies would be counterproductive, as replicating that functionality would require a huge amount of work and would be less reliable than just using a popular, battle-tested library. Finding a good balance is important; this skill requires experience.

Read more:


Interface Adapters

Interface adapters (also called driving/primary adapters) are user-facing interfaces that take input data from the user and repackage it in a form that is convenient for the use cases (services/command handlers) and entities. Then they take the output from those use cases and entities and repackage it in a form that is convenient for displaying it back to the user. The user can be either a person using the application or another server.

Contains Controllers and Request/Response DTOs (can also contain Views, like backend-generated HTML templates, if required).

Controllers

  • A Controller is a user-facing API that is used for parsing requests, triggering business logic, and presenting the result back to the client.
  • One controller per use case is considered a good practice.
  • In NestJS world controllers may be a good place to use OpenAPI/Swagger decorators for documentation.

One controller per trigger type can be used to have a clearer separation. For example:

Resolvers

If you are using GraphQL instead of controllers, you will use Resolvers.

One of the main benefits of a layered architecture is separation of concerns. As you can see, it doesn't matter if you use REST or GraphQL, the only thing that changes is the user-facing API layer (interface adapters). All of the application core stays the same, since it doesn't depend on the technology you are using.

Example files:


DTOs

Data that comes from external applications should be represented by a special type of class: Data Transfer Objects (DTOs for short). A Data Transfer Object is an object that carries data between processes. It defines a contract between your API and clients.

Request DTOs

Input data sent by a user.

  • Using Request DTOs gives a contract that a client of your API has to follow to make a correct request.

Examples:

Response DTOs

Output data returned to a user.

  • Using Response DTOs ensures clients only receive data described in DTOs contract, not everything that your model/entity owns (which may result in data leaks).

Examples:


DTO contracts protect your clients from internal data structure changes that may happen in your API. When internal data models change (like renaming variables or splitting tables), they can still be mapped to match a corresponding DTO to maintain compatibility for anyone using your API.

When updating DTO interfaces, a new version of the API can be created by prefixing an endpoint with a version number, for example: v2/users. This makes the transition painless by preventing breaking compatibility for users that are slow to update apps that use your API.

You may have noticed that our create-user.command.ts contains the same properties as create-user.request.dto.ts. So why do we need DTOs if we already have Command objects that carry properties? Shouldn't we just have one class to avoid duplication?

Commands and DTOs are different things that tackle different problems. Commands are serializable method calls - calls of the methods in the domain model. Whereas DTOs are the data contracts. The main reason to introduce this separate layer with data contracts is to provide backward compatibility for the clients of your API. Without the DTOs, the API will have breaking changes with every modification of the domain model.

More info on this subject here: Are CQRS commands part of the domain model? (read "Commands vs DTOs" section).

Additional recommendations

  • DTOs should be data-oriented, not object-oriented. Its properties should be mostly primitives. We are not modeling anything here, just sending flat data around.
  • When returning a Response, prefer whitelisting properties over blacklisting. This ensures that no sensitive data will leak in case a programmer forgets to blacklist newly added properties that shouldn't be returned to the user.
  • If you use the same DTOs in multiple apps (frontend and backend, or between microservices), you can keep them somewhere in a shared directory instead of module directory and create a git submodule or a separate package for sharing them.
  • Request/Response DTO classes may be a good place to use validation and sanitization decorators like class-validator and class-sanitizer (make sure that all validation errors are gathered first and only then return them to the user, this is called Notification pattern. Class-validator does this by default).
  • Request/Response DTO classes may also be a good place to use Swagger/OpenAPI library decorators that NestJS provides.
  • If DTO decorators for validation/documentation are not used, DTO can be just an interface instead of a class.
  • Data can be transformed to DTO format using a separate mapper or right in the constructor of a DTO class.
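As a sketch of the whitelisting and constructor-mapping recommendations above (all class names are hypothetical):

```typescript
// Stand-in for a domain entity that owns more data than clients should see:
class UserEntityStub {
  constructor(
    public id: string,
    public email: string,
    public passwordHash: string, // must never leak to clients
  ) {}
}

// Hypothetical response DTO: mapping in the constructor, whitelisting only
// the fields that belong to the public contract.
class UserResponseDto {
  readonly id: string;
  readonly email: string;

  constructor(user: UserEntityStub) {
    this.id = user.id;
    this.email = user.email;
    // passwordHash is intentionally not copied (whitelisting)
  }
}
```

Adding a new sensitive field to the entity later cannot leak through this DTO, because fields are opted in explicitly rather than filtered out.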

Local DTOs

Another thing that can be seen in some projects is local DTOs. Some people prefer to never use domain objects (like entities) outside its domain (in controllers, for example) and return a plain DTO object instead. This project doesn't use this technique, to avoid extra complexity and boilerplate code like interfaces and data mapping.

Here are Martin Fowler's thoughts on local DTOs, in short (quote):

Some people argue for them (DTOs) as part of a Service Layer API because they ensure that service layer clients aren't dependent upon an underlying Domain Model. While that may be handy, I don't think it's worth the cost of all of that data mapping.

Though you may want to introduce Local DTOs when you need to decouple modules properly. For example, when querying from one module to another you don't want to leak your entities between modules. In that case using a Local DTO may be justified.


Infrastructure layer

The Infrastructure layer is responsible for encapsulating technology. Here you can find implementations of database repositories for storing/retrieving business entities, message brokers to emit messages/events, I/O services to access external resources, framework-related code, and any other code that represents a replaceable detail for the architecture.

It's the most volatile layer. Since the things in this layer are so likely to change, they are kept as far away as possible from the more stable domain layers. Because they are kept separate, it's relatively easy to make changes or swap one component for another.

Infrastructure layer can contain Adapters, database related files like Repositories, ORM entities/Schemas, framework related files etc.

Adapters

  • Infrastructure adapters (also called driven/secondary adapters) enable a software system to interact with external systems by receiving, storing and providing data when requested (like persistence, message brokers, sending emails or messages, requesting 3rd party APIs etc).
  • Adapters can also be used to interact with different domains inside a single process to avoid coupling between those domains.
  • Adapters are essentially implementations of ports. They are not supposed to be called directly at any point in code, only through ports (interfaces).
  • Adapters can be used as Anti-Corruption Layer (ACL) for legacy code.

Read more on ACL: Anti-Corruption Layer: How to Keep Legacy Support from Breaking New Systems

Adapters should have:

  • a port somewhere in application/domain layer that it implements;
  • a mapper that maps data from and to domain (if it's needed);
  • a DTO/interface for received data;
  • a validator to make sure incoming data is not corrupted (validation can reside in DTO class using decorators, or it can be validated by Value Objects).
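A sketch of those pieces for a hypothetical email-sending adapter (all names are illustrative; validation is reduced to one simplified check):

```typescript
// Port, defined in the application/domain layer:
interface EmailSenderPort {
  send(to: string, body: string): Promise<boolean>;
}

// Hypothetical infrastructure adapter implementing the port.
// A real adapter would call an email provider's SDK; this fake records messages.
class FakeEmailSenderAdapter implements EmailSenderPort {
  public readonly sent: Array<{ to: string; body: string }> = [];

  async send(to: string, body: string): Promise<boolean> {
    // Validate incoming data before passing it further (simplified check):
    if (!to.includes('@')) return false;
    this.sent.push({ to, body });
    return true;
  }
}
```

The application core only ever sees `EmailSenderPort`; swapping the fake for a real provider is a change confined to the infrastructure layer.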

Repositories

Repositories are abstractions over collections of entities that live in a database. They centralize common data access functionality and encapsulate the logic required to access that data. Entities/aggregates can be put into a repository and then retrieved at a later time without the domain even knowing where the data is saved: in a database, in a file, or some other source.

We use repositories to decouple the infrastructure or technology used to access databases from the domain model layer.

Martin Fowler describes a repository as follows:

A repository performs the tasks of an intermediary between the domain model layers and data mapping, acting similarly to a set of domain objects in memory. Client objects declaratively build queries and send them to the repositories for answers. Conceptually, a repository encapsulates a set of objects stored in the database and operations that can be performed on them, providing a way that is closer to the persistence layer. Repositories, also, support the purpose of separating, clearly and in one direction, the dependency between the work domain and the data allocation or mapping.

The data flow here looks something like this: the repository receives a domain Entity from an application service, maps it to the database schema/ORM format, performs the required operations (saving/updating/retrieving, etc.), then maps it back to the domain Entity format and returns it to the service.

The application's core usually isn't allowed to depend on repositories directly; instead, it depends on abstractions (ports/interfaces). This makes data retrieval technology-agnostic.
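A simplified sketch of a repository port and an adapter implementing it (hypothetical names; in-memory storage stands in for a real database):

```typescript
interface User {
  id: string;
  email: string;
}

// Port the application core depends on:
interface UserRepositoryPort {
  findByEmail(email: string): Promise<User | undefined>;
  save(user: User): Promise<void>;
}

// Infrastructure adapter; a real one would map entities to a DB schema
// and issue queries. Here a Map stands in for the database.
class InMemoryUserRepository implements UserRepositoryPort {
  private store = new Map<string, User>();

  async findByEmail(email: string): Promise<User | undefined> {
    return [...this.store.values()].find((u) => u.email === email);
  }

  async save(user: User): Promise<void> {
    this.store.set(user.id, user);
  }
}
```

Services written against `UserRepositoryPort` can be tested with the in-memory adapter and shipped with a SQL-backed one, without changing the core.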

Note: in theory, most publications out there recommend abstracting the database with interfaces. In practice, it's not always useful. Most projects never change their database technology (or rewrite most of the code anyway if they do). Another downside is that if you abstract a database you are more likely not using its full potential. This project abstracts repositories with a generic port to make a practical example, repository.port.ts, but this doesn't mean you should do that too. Think carefully before using abstractions. More info on this topic: Should you Abstract the Database?

Example files:

This project contains an abstract repository class that provides basic CRUD operations: sql-repository.base.ts. This base class is then extended by a specific repository, and all the specific operations that an entity may need are implemented in that specific repo: user.repository.ts.

Read more:

Persistence models

Using a single entity for domain logic and database concerns leads to a database-centric architecture. In the DDD world, the domain model and persistence model should be separated.

Since domain Entities have their data modeled so that it best accommodates domain logic, it may not be in the best shape to save in a database. For that purpose, Persistence models can be created with a shape that is better represented in the particular database being used. The Domain layer should not know anything about persistence models, and it should not care.

There can be multiple models optimized for different purposes, for example:

  • Domain with its own models - Entities, Aggregates and Value Objects.
  • Persistence layer with its own models - ORM (Object–relational mapping), schemas, read/write models if databases are separated into a read and write db (CQRS) etc.

Over time, when the amount of data grows, there may be a need to make changes in the database, like improving performance or data integrity by re-designing some tables or even changing the database entirely. Without an explicit separation between Domain and Persistence models, any change to the database will lead to a change in your domain Entities or Aggregates. For example, when performing database normalization, data can be spread across multiple tables rather than being in one table, or vice versa for denormalization. This may force a team to do a complete refactoring of the domain layer, which may cause unexpected bugs and challenges. Separating Domain and Persistence models prevents that.

Note: separating domain and persistence models may be overkill for smaller applications. It requires a lot of effort creating and maintaining boilerplate code like mappers and abstractions. Consider all pros and cons before making this decision.

Example files:

  • user.repository.ts <- notice userSchema and UserModel type that describe how user looks in a database
  • user.mapper.ts <- Persistence models should also have a corresponding mapper to map from domain to persistence and back.
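As an illustration, a hypothetical mapper flattening a nested domain shape into a flat database record (names and shapes are illustrative, not this project's actual models):

```typescript
// Domain shape: nested, modeled for domain logic.
interface UserEntity {
  id: string;
  address: { country: string; street: string };
}

// Persistence shape: flat columns, as stored in a database table.
interface UserRecord {
  id: string;
  country: string;
  street: string;
}

// Mapper translating between the two shapes in both directions.
class UserMapper {
  toPersistence(entity: UserEntity): UserRecord {
    return {
      id: entity.id,
      country: entity.address.country,
      street: entity.address.street,
    };
  }

  toDomain(record: UserRecord): UserEntity {
    return {
      id: record.id,
      address: { country: record.country, street: record.street },
    };
  }
}
```

If the table is later normalized or renamed, only the mapper and the record type change; the domain entity stays untouched.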

For smaller projects you could use ORM libraries like Typeorm for simplicity. But for projects with more complexity ORMs are not flexible and performant enough. For this reason, this project uses raw queries with a Slonik client library.

Read more:

Other things that can be a part of Infrastructure layer

  • Framework related files;
  • Application logger implementation;
  • Infrastructure related events (Nest-event)
  • Periodic cron jobs or tasks launchers (NestJS Schedule);
  • Other technology related files.

Other recommendations

General recommendations on architectures, best practices, design patterns and principles

Different projects most likely will have different requirements. Some principles/patterns in such projects can be implemented in a simplified form, some can be skipped. Follow YAGNI principle and don't overengineer.

Sometimes complex architecture and principles like SOLID can be incompatible with YAGNI and KISS. A good programmer should be pragmatic and able to combine their skills and knowledge with common sense to choose the best solution for the problem.

You need some experience with object-oriented software development in real world projects before they are of any use to you. Furthermore, they don’t tell you when you have found a good solution and when you went too far. Going too far means that you are outside the “scope” of a principle and the expected advantages don’t appear. Principles, Heuristics, ‘laws of engineering’ are like hint signs, they are helpful when you know where they are pointing to and you know when you have gone too far. Applying them requires experience, that is trying things out, failing, analyzing, talking to people, failing again, fixing, learning and failing some more. There is no shortcut as far as I know.

Before implementing any pattern, always analyze whether the benefit it provides is worth the extra code complexity.

Effective design means knowing when the price of a pattern is worth paying; that is a skill in its own right.

Don't blindly follow practices, patterns and architectures just because books and articles say so. Sometimes rewriting a software from scratch is the best solution, and all your efforts to fit in all the patterns and architectural styles you know into the project will be a waste of time. Try to evaluate the cost and benefit of every pattern you implement and avoid overengineering. Remember that architectures, patterns and principles are your tools that may be useful in certain situations, not dogmas that you have to follow blindly.

However, remember:

It's easier to refactor over-design than it is to refactor no design.

Read more:

Recommendations for smaller APIs

Be careful when implementing any complex architecture in small to medium sized projects without much business logic. Some building blocks/patterns/principles may fit well, while others may be overengineering.

For example:

  • Separating code into modules/layers/use-cases, using building blocks like controllers/services/entities, respecting boundaries and dependency injection, etc. may be a good idea for any project.
  • But practices like creating an object for every primitive, using Value Objects to separate business logic into smaller classes, separating Domain Models from Persistence Models, etc. may only complicate projects that are more data-centric and have little or no business logic, adding extra boilerplate code, data mapping, and maintenance overhead without much benefit.
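To make the trade-off concrete: even a Value Object for a single field is several times more code than a plain string, which only pays off when there are real invariants to protect. A minimal sketch (the validation rule here is illustrative, not one used in this project):

```typescript
// Instead of passing raw strings around, an Email Value Object centralizes
// validation, normalization, and equality in one place.
class Email {
  readonly value: string;

  private constructor(value: string) {
    this.value = value;
  }

  static create(raw: string): Email {
    const normalized = raw.trim().toLowerCase();
    // Deliberately simplistic format check, for illustration only.
    if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(normalized)) {
      throw new Error(`Invalid email: ${raw}`);
    }
    return new Email(normalized);
  }

  equals(other: Email): boolean {
    return this.value === other.value;
  }
}
```

In a data-centric CRUD app, a validated string field in a DTO may achieve the same effect with far less ceremony.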

DDD and other practices described here are mostly about creating software with complex business logic. But what would be a better approach for simpler applications?

For applications with little business logic, where code mostly exists as glue between the database and a client, consider other architectures. The most popular is probably MVC. Model-View-Controller is better suited for CRUD applications with little business logic, since it tends to favor designs where software is mostly a view of the database.

Additional resources:

Behavioral Testing

Behavioral Testing (and BDD) tests the external behavior of the program, also known as black-box testing.

Domain-Driven Design with its ubiquitous language plays nicely with Behavioral tests.

For BDD tests, Cucumber with Gherkin syntax can give structure and meaning to your tests. This way even people not involved in development can define the steps needed for testing. In the Node.js world, cucumber or jest-cucumber are nice packages to achieve that.

Example files:

Read more:

Folder and File Structure

Some typical approaches are:

  • Layered architecture: split an entire application into directories divided by functionality, like controllers, services, repositories, etc. For example:
- Controllers
  - UserController
  - WalletController
  - OtherControllers...
- Services
  - UserService
  - WalletService
  - OtherServices...
- Repositories
  - ...

This approach makes navigation harder. Every time you need to change some feature, instead of having all related files in the same place (in a module), you have to jump multiple directories to find all related files. This approach usually leads to tight coupling and spaghetti code.

  • Divide application by modules and split each module by some business domain:
- User
  - UserController
  - UserService
  - UserRepository
- Wallet
  - WalletController
  - WalletService
  - WalletRepository
  ...

This looks better. With this approach each module is encapsulated and only contains its own business logic. The only downside is that over time those controllers and services can grow to hundreds of lines long, making them difficult to navigate and making merge conflicts harder to manage.

  • Divide a module by subcomponents: use modular approach discussed above and divide each module by slices and use cases. We divide a module further into smaller components:
- User
  - CreateUser
    - CreateUserController
    - CreateUserService
    - CreateUserDTO
  - UpdateUser
    - UpdateUserController
    - UpdateUserService
    - UpdateUserDTO
  - UserRepository
  - UserEntity
- Wallet
  - CreateWallet
    - CreateWalletController
    - CreateWalletService
    - CreateWalletDto
  ...

This way each module is further split into highly cohesive subcomponents (by feature). Now when you open the project, instead of just seeing directories like controllers, services, repositories, etc., you can see right away what features the application has just from reading the directory names.

This approach makes navigation and maintenance easier, since all related files are close to each other. It also keeps every feature properly encapsulated and gives you the ability to make localized decisions per component, based on each particular feature's needs.

Shared files like domain objects (entities/aggregates), repositories, shared DTOs, interfaces, etc. can be stored outside of the feature directory, since they are usually reused by multiple subcomponents.

This is called The Common Closure Principle (CCP). Folder/file structure in this project uses this principle. Related files that usually change together (and are not used by anything else outside that component) are stored close together.

The aim here should be to be strategic and place classes that we, from experience, know often change together into the same component.

Keep in mind that this project's folder/file structure is an example and might not work for everyone. The main recommendations here are:

  • Separate your application into modules;
  • Keep files that change together close to each other (Common Closure Principle and Vertical Slicing);
  • Group files by their behavior that changes together, not by a type of functionality that file provides;
  • Keep files that are reused by multiple components apart;
  • Respect boundaries in your code, keeping files together doesn't mean inner layers can import outer layers;
  • Try to avoid a lot of nested folders;
  • Move files around until it feels right.

There are different approaches to file/folder structuring; choose whatever suits the project and your personal preference best.

Examples:

  • user module.

  • create-user subcomponent.

  • Commands directory contains all state changing use cases and each use case inside it contains most of the things that it needs: controller, service, DTOs, command, etc.

  • Queries directory is structured in the same way as commands but contains data retrieval use cases.

Read more:

File names

Consider giving files descriptive type names after a dot ".", like *.service.ts or *.entity.ts. This makes it easier to differentiate what each file does and to find those files using fuzzy search (CTRL+P on Windows/Linux and ⌘+P on macOS in VSCode to try it out).

Alternatively you could use class names as file names, but consider adding descriptive suffixes like Service or Controller, etc.

Read more:

Enforcing architecture

To make sure everyone in the team adheres to defined architectural practices, use tools and libraries that can analyze and validate dependencies between files and layers.

For example:

  // Dependency cruiser example
  {
    name: 'no-domain-deps',
    comment: 'Domain layer cannot depend on api or database layers',
    severity: 'error',
    from: { path: ['domain', 'entity', 'aggregate', 'value-object'] },
    to: { path: ['api', 'controller', 'dtos', 'database', 'repository'] },
  },

The code snippet above will prevent your domain layer from depending on the API or database layers. Example config: .dependency-cruiser.js

You can also generate graphs like this:

Click to see dependency graph Dependency graph

Example tools:

  • Dependency cruiser - Validate and visualize dependencies for JavaScript / TypeScript.
  • ArchUnit - library for checking the architecture of Java applications

Read more:

Prevent massive inheritance chains

Classes that can be extended should be designed for extensibility and usually should be abstract. If a class is not designed to be extended, prevent extending it by making the class final. Don't create inheritance chains more than 1-2 levels deep, since deep hierarchies make refactoring harder and lead to bad design. Use composition instead.

Note: in TypeScript, unlike some other languages, there is no built-in way to make a class final. But there is a workaround using a custom decorator.

Example file: final.decorator.ts
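One possible implementation (a sketch, not necessarily identical to the linked final.decorator.ts): check new.target in a wrapping constructor and throw when it is anything other than the decorated class. With experimentalDecorators enabled it can be applied as @final; below it is called directly to avoid compiler settings.

```typescript
type Ctor = new (...args: any[]) => object;

// Sketch of a "final" class decorator: subclassing still compiles,
// but instantiating a subclass fails at runtime.
function final<T extends Ctor>(target: T) {
  const FinalClass = class extends target {
    constructor(...args: any[]) {
      // new.target is the subclass constructor when someone extends us.
      if (new.target !== FinalClass) {
        throw new Error(`Class "${target.name}" is final and cannot be extended`);
      }
      super(...args);
    }
  };
  return FinalClass;
}

// Usage: wrap the class directly, or apply as a @final decorator.
const Money = final(
  class Money {
    readonly amount: number;
    constructor(amount: number) {
      this.amount = amount;
    }
  },
);
```

Instantiating Money works as usual, while instantiating any subclass of it throws at runtime, which is as close to final as TypeScript currently gets.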

Read more:


Additional resources

Articles

Websites

Blogs

Videos

Books

domain-driven-hexagon's People

Contributors

alanpcs, albestia, dispix, dzvon, estrehle, feyroozecode, franco4457, harmonictons, hieumdd, nansd, nilaxann65, rzrymiak, sairyss, snamiki1212, timonback, tlc-10, xykkong, ymorgenstern, youmoo, zvikinoza


domain-driven-hexagon's Issues

Relationship between `UserRepository` and `TypeormRepositoryBase`

While working through your database implementation, I was wondering why UserRepository does not use TypeormRepositoryBase.findOne?

private async findOneByEmail(
  email: string,
): Promise<UserOrmEntity | undefined> {
  const user = await this.userRepository.findOne({
    where: { email },
  });
  return user;
}

Shouldn't we do this instead:

const emailVO = new Email(email);
const user = await this.findOne({ email: emailVO });
return user;

And a second question: It seems like UserRepository.prepareQuery removes all query parameters except for id? Why?

// Used to construct a query
protected prepareQuery(
  params: QueryParams<UserProps>,
): WhereCondition<UserOrmEntity> {
  const where: QueryParams<UserOrmEntity> = {};
  if (params.id) {
    where.id = params.id.value;
  }
  return where;
}

UserEntity

What would be your approach for adding optional parameters to the user entity that you do not want to be added via the constructor?

Because it seems like the props value in the base entity only represents items added via the constructor and would not accept any other value otherwise.

Q: Nested Gql field resolvers and auth

Hi, thank you for this great repo.

I have two questions.

  1. How do you handle nested field resolvers when two related entities are in different modules? e.g.
query getUser($filterUser: GetUserFilter!, $filterWallet: GetFilterWallet!) {
  getUser(filter: $filterUser) {
    id
    wallets(filter: $filterWallet) {
      id
    }
  }
}
  2. Where do authentication and authorization belong?
    Let's say in our system a user can update only their own wallet and the other entities they own, and each entity is in a separate module. Should the user module be shared between all modules? Can I use NestJS guards for protecting endpoints/mutations/queries?

A couple of questions

First, and foremost, amazing work you've done. The explanation and diagrams are awesome. You did an impressive job by reviewing and integrating some of the most relevant architecture/software patterns. Impressive!

I'm working right now on a pretty complex project, that requires a big refactor and I'm researching some architecture alternatives. Based on that, I'm already coding something pretty similar to your suggestions, but I would like to ask you some questions:

  1. Which layer is responsible for events?
    In my scenario, I'm coding a CQRS+ES system (one of my main entities requires a "time-machine" feature, so CQRS+ES is a pretty solid alternative), and, although you don't mention it in the Architecture section, I see in your recommended architecture a lot of inspiration from CQRS+ES itself.

That is something that intrigues me. I've read some sources recommending that events should be handled in the Domain layer. But others (the ones I'm tempted to follow) suggest that the domain layer should not handle complexity such as events. Instead, they suggest handling those in the Application layer, consuming the Domain layer just for business logic. What is your opinion on that?

  2. Which layer is responsible for read/write models?
    Almost the same as above: I've seen many different implementations of read/write models. Some of those consider them a member of the Application layer, others a member of the Domain layer. What is your opinion on this matter?

Since you suggest that queries should bypass the domain layer to use the repository, I'm assuming you're considering that read models should live in the application layer. Is that correct? What about write models?

  3. Regarding folder structure, why not group by BC instead of type?
    This question is related specifically to your suggested implementation of this architecture. You suggested grouping files based on their responsibility (folders like core, infrastructure, models). What do you think about grouping them based on their BC instead?

For example, instead of having:

<root folder>
├── src
│   ├── core
│   │   ├── events
│   │   │       ├── created-user.ts
│   │   │       ├── dispatched-product.ts
│   │   │       ├── ....
│   │   ├── commands
│   │   │       ├── create-user.ts
│   │   │       ├── dispatch-product.ts
│   │   │       ├── ....
│   │   ├── ...

What do you think about having:

<root folder>
├── src
│   ├── user
│   │   ├── events
│   │   │       ├── created-user.ts
│   │   │       ├── ....
│   │   ├── commands
│   │   │       ├── create-user.ts
│   │   │       ├── ....
│   ├── product
│   │   ├── events
│   │   │       ├── dispatched-product.ts
│   │   │       ├── ....
│   │   ├── commands
│   │   │       ├── dispatch-product.ts
│   │   │       ├── ....
│   │   ├── ...

I know it looks strange, but I've tested with this type of folder structure for a time now, and it scales pretty well. It is easier to navigate, and it usually makes your import paths smaller and easier to understand.

For the record, I'm also using 2 types of folders: lib and vendor. The first one I use for common, shared logic external to my core (integration with databases, abstract classes, and others). The second one (vendor) I use for framework-specific logic (like bootstrapping the NestJS application, configuring TypeORM, and others).

I would like to hear your opinion about this as well :)

  4. Who is responsible for writing to the repository?
    Considering that you're following a CQRS approach, which layer is responsible for writing to the repository? (That question may be related to the second one.) Is it the application layer?

Again, great job :) Looking forward to an answer.

Query handler breaking the dependency rule?

Hi @Sairyss!
Thanks for this awesome repo! It's been super helpful. I did encounter an issue recently though -

According to the readme - queries should be part of the application layer. But when looking at find-users.query-handler.ts, we can see it's importing dependencies from an outer layer - the infrastructure layer (like slonik and user.repository).

Doesn't that break the dependency rule? Or am I missing something?
If queries are at the same layer as application services, shouldn't they be required to use ports to communicate to "the outside world" just like application services are required to?

Thanks!
Simon

entity.base.ts constructor always makes new Id

Nice work and great documentation. I'm wondering about the entity.base.ts constructor. It looks like a new ID is generated for an entity when the constructor is invoked. But there are cases where we want to instantiate an entity object that already has an ID (e.g. from persistence). Right?

From Eric Evans' "DDD Reference" PDF (2015), in the description of Repositories: "...return fully instantiated objects or collections (encapsulate the storage technology) ... provide repositories only for aggregate roots ... keep application logic focused on the model ..."

Domain Events

First of all, thank you for this great repo.

When you talk about domain events, you note that you implemented your own version because NestJS CQRS lacks await support. Maybe I'm misunderstanding, but I think the events are now reactive, so you can await them and even get a result: https://docs.nestjs.com/recipes/cqrs#events

Resolver returning type

In the GraphQL resolver we need to unwrap the id, since the return type of the service is Promise<Result<AggregateID, ManipulatorAlreadyExistsError>>

Replace the resolver with:

@Resolver()
export class CreateManipulatorGraphqlResolver {
  constructor(private readonly commandBus: CommandBus) {}

  @Mutation(() => IdGqlResponse)
  async create(
    @Args('input') input: CreateManipulatorGqlRequestDto,
  ): Promise<IdGqlResponse> {
    const command = new CreateManipulatorCommand(input);

    const result = await this.commandBus.execute(command);

    return new IdGqlResponse(result.unwrap());
  }
}

please tell me how to relationship

I don't know how to describe relations, so please let me know.

Example:
individual -> telnumber [n]

How do I register phone number information in the phone number table when creating an individual?

it shows compile error

async save(entity: Entity): Promise<Entity> {
  entity.validate(); // Protecting invariant before saving
  const ormEntity = this.mapper.toOrmEntity(entity);
  const result = await this.repository.save(ormEntity);

When executing start:dev it throws an error:

src/libs/ddd/infrastructure/database/base-classes/typeorm.repository.base.ts:47:47 - error TS2769: No overload matches this call.
  Overload 1 of 4, '(entities: DeepPartial<OrmEntity>[], options?: SaveOptions | undefined): Promise<(DeepPartial<OrmEntity> & OrmEntity)[]>', gave the following error.
    Argument of type 'OrmEntity' is not assignable to parameter of type 'DeepPartial<OrmEntity>[]'.
  Overload 2 of 4, '(entity: DeepPartial<OrmEntity>, options?: SaveOptions | undefined): Promise<DeepPartial<OrmEntity> & OrmEntity>', gave the following error.
    Argument of type 'OrmEntity' is not assignable to parameter of type 'DeepPartial<OrmEntity>'.

47     const result = await this.repository.save(ormEntity);

What am I missing?

A question regarding use cases and database transactions

I'm sorry if this isn't the right place to ask such a question, I've searched for hours everywhere and cannot seem to find an answer.

If I have a use case that fetches two aggregates of different types from their repositories, and passes them to a domain service that performs business logic on the aggregates and returns them to the use case, how would I save all the aggregates back to the database in a single transaction?

One solution I came up with is to have one of the repositories' methods take the second aggregate as an argument, to include it in the database transaction:

interface IMemberRepo {
  upgradeMemberToGold: (member: Member, payment: Payment) => Promise<void>
}

// or
interface IPaymentRepo {
  markPaymentAsFulfilled: (payment: Payment, member: Member) => Promise<void>
}

If a member aggregate gets upgraded to gold, there must exist a payment aggregate with fulfilled: true. The above solution works, but I'm not sure if it 100% adheres to DDD principles.

Enable GitHub discussions ?

Hi @Sairyss

First of all, I would like to thank you for that awesome repository 👍

As a beginner in Hexagonal architecture and DDD, your repository and all the documentation you wrote in the README.md really help me wrap my mind around the concepts and good practices for building a clean modular monolith.

I guess I'm not the only one in the TS/JS community who really appreciates the effort you put into this repository, so in order to go further and involve more community members I would like to propose enabling GitHub Discussions in this repository, instead of opening issues for clarification or requesting advice.

Let me know what you think?

Thanks again

Add DomainCollection concept

First of all, I absolutely love this project, and am working to implement these patterns in production.

Anyway, to continue on the concept of what an "Aggregate Root" is, it can often be a "parent" entity that has relations containing one or many "child" entities (or Aggregates) that the parent has control or can act on. Since we are separating Domain Entities from ORM entities, we will need a way for an Aggregate Root to gain context of the child relations in an ORM-agnostic way.

I borrowed some of the naming conventions from Mikro ORM, prefixed with Domain to avoid naming collisions and better segregate the mental boundary.

Proposal:

For an x-to-many relation:

export interface DomainCollection<Child extends Entity<any>> {
  load(): Promise<Child[]>
  add(entity: Child): void
  addMany(entities: Child[]): void
  remove(entity: Child): void
  removeMany(entities: Child[]): void
  update(entities: Child[]): void
}

For an x-to-one relation:

export interface DomainReference<Child extends Entity<any>> {
  load(): Promise<Child>
  update(updatedEntity: Child): void
  remove(): void
}

And a first stab at an implementation of the DomainCollection:

interface LoadStrategy<Parent, Child> {
  load(p: Parent): Promise<Child[]>
}

export class Collection<Parent extends Entity<any>, Child extends Entity<any>>
  implements DomainCollection<Child>
{
  constructor(parent: Parent) {
    this.#parent = parent
  }

  readonly #parent: Readonly<Parent>

  #loadStrategy?: LoadStrategy<Parent, Child>

  #items?: Map<ID, Child>

  update(entities: Child[]): void {
    entities.forEach((entity) => {
      this.items.set(entity.id, entity)
    })
  }

  add(entity: Child): void {
    const existing = this.items.get(entity.id)
    if (existing)
      throw new DomainException('Child entity with id already exists')
    this.items.set(entity.id, entity)
  }

  addMany(entities: Child[]): void {
    entities.forEach((entity) => {
      const existing = this.items.get(entity.id)
      if (existing)
        throw new DomainException('Child entity with id already exists')
      this.items.set(entity.id, entity)
    })
  }

  remove(entity: Child): void {
    this.items.delete(entity.id)
  }

  removeMany(entities: Child[]): void {
    entities.forEach((entity) => {
      this.items.delete(entity.id)
    })
  }

  withLoadStrategy(strategy: LoadStrategy<Parent, Child>) {
    this.#loadStrategy = strategy
  }

  async load() {
    if (!this.#loadStrategy)
      throw new Error('Load strategy has not been provided!')
    if (!this.#items) {
      const records = await this.#loadStrategy.load(this.#parent)
      this.#items = new Map(records.map((child) => [child.id, child]))
    }
    return Array.from(this.#items.values())
  }

  get items() {
    if (!this.#items) throw new Error('Collection has not been hydrated')
    return this.#items
  }
}

Thoughts with this approach:

  • OrmMappers would now need to have the child entity repository injected so that the loadStrategy can be hydrated, and this may lead to a significant increase in complexity.
  • The Collection implementation probably belongs in the infrastructure layer, as it may differ due to the needs of the underlying data access layer.

Would love to see your take on this idea...

Managing `Entity.updatedAt`

First of all: Thank you for publishing this! Amazing job :)

I am wondering how to manage the Entity's updatedAt timestamp. Should I set it manually in every method of every Entity subclass that performs an update?

For example, when updating the UserEntity address:

updateAddress(props: UpdateUserAddressProps): void {
  this.props.address = new Address({
    ...this.props.address,
    ...props,
  } as AddressProps);
}

Should this include a line this._updatedAt = DateVO.now();?

GraphQL Question

Just thought I'd take a moment to thank you for this awesome repo. I regularly struggle for technical sources online suitable for enterprise level development; this is one of the best I've seen on this subject.

One question comes to mind: how would you organise GraphQL within this space? Originally I considered a singular GraphQL model kept isolated... but that seems inelegant, having thought on it.

Question: Mapping between Many-to-Many relation

Hi, I have been using your repository as a guide to develop an application and I must say your work is amazing. I've been trying to develop a library management system and whenever I find myself confused, I try to approach a solution similar to your work. However, I am having difficulties in something you haven't covered yet.
I am trying to define a many-to-many relationship between OrmEntities (in my case, book and author). In my domain entities, I have a book entity that has as an attribute an array of author entities, so when I try to map the props of book to ORM props, I find myself also needing to map the props of author (which already has its own mapper). What I have implemented currently is that I use the author mapper inside the book mapper to map the props of the author.
I was wondering how I should approach mapping between entities. Is calling a mapper inside another mapper the correct approach?

Thank you, and looking forward to your answer!

questions

Hello, I'm in the process of exploring this wonderful repo and I really hope that you will continue to work on it until it becomes a real-world example project I can always come back to.

But there are some things a bit confusing for me, mostly the "what should be where". Some stuff I have kind of understood but this is still confusing for me. For example, domain-event-handlers and email are under modules but they don't seem to contain any kind of code that would relate to modules, so why are they there?
Understandably, email looks like a work in progress but the directory is missing the modules part which would help me to tie it together. But the domain-event-handlers seem out of place for me.

Also, is there somewhere a reason why the lint rules are what they are?

I'm also having trouble setting up the dev environment. Specifically, when I start the DB docker container and then run start:dev, I get an exception that test-db doesn't exist.

Now that I'm thinking about it, I can't seem to find any instructions on how to start the dev environment: step-by-step, what should be done and what do I need before I can run it?

PS: it seems that while adding a licence, you forgot to update it in package.json, but I doubt it has any significance since it's not in the npm registry.

Authentication module

In one of the MRs, someone asked a question about an example of authorship. In 2021 you wrote that if you find time you will implement such an example. Is this up to date?

I'm interested in this topic and would like to see what it might look like with your eye.

core: ValueObjectProps & DomainPrimitive inconsistency

Hi, in value-object.base.ts we define 3 types:

export type Primitives = string | number | boolean;
export interface DomainPrimitive<T = Primitives> {
  value: T;
}

type ValueObjectProps<T> = T extends Primitives | Date ? DomainPrimitive<T> : T;

If T is a Primitives or a Date then ValueObjectProps will be a DomainPrimitive of T, but T = Primitives forbids T from being a Date. I think TypeScript should not let us write that. If we replace the "=" with an "extends" the problem is clear:

https://www.typescriptlang.org/play?#code/KYDwDg9gTgLgBDAnmYcAKUCWBbTNMBuwAznALxzExYB2A5nAD5w0Cu2ARsFE3BxBAA2wAIY0A3AFgAUKEiw4mGjG4AzEQGNUAEQjYRSjDjyFgAHgAqcUCpoATUkdz4ixAHxwA3nAD0PuADkZAFwUMBggprAdnyIgTbA9sQBMnBwBCKCrMAAXHAWUtIAvjIySChwAGqZ2QDyHABWwBowGBBgxJYeFFYJSehYzqakzNoiKnAA-HC6+oaDJkRdcHkFvv4wpJikNBDwEADWcGKI2NDAMkA

Suggested fix:

Use "extends" instead of "=" and either have DomainPrimitive accept T as a Date or a Primitives:

export interface DomainPrimitive<T extends Primitives | Date> {
  value: T;
}

Or define a new type DomainDate:

export interface DomainPrimitive<T extends Primitives> {
  value: T;
}

export interface DomainDate<T extends Date> {
  value: T;
}

type ValueObjectProps<T> = T extends Primitives ? DomainPrimitive<T> : T extends Date ? DomainDate<T> : T;

Tell me if I misunderstood something. Maybe the "=" has an importance I am not aware of?

Correct approach for different types of concept?

Let's say we have two concepts: Book and BookBeingSold. Book has methods like read and rate, and properties like title and author. BookBeingSold has the same methods and properties, but it also has additional methods buy, cancel, changePrice, and a property price. The database has two tables: books (id, title, author) and books-being-sold (id, book_id, price, is_deleted).

What should the code look like for this scenario in a DDD way? Should we create two separate modules book and bookBeingSold with two separate repositories? Should we inherit one module from another? Or should we just have one module book and one repository?

Queries using repositories

Looking into this find-users.query-handler.ts query handler i noticed you were using the user repository for fetching the users.
Since on the read side we're not looking into enforcing any business rules, like we are on the command/write side with Aggregates, is there a reason for using repositories?

For example, if we were to have a Post aggregate and Comment entities within that aggregate, there'd be no point in building out each Post and fetching all of its Comment relations when fetching something like a list of all posts.

I couldn't find much on this online, but I found this blog post, which might explain this better than I can.

Btw, great work on the repo <3

Order by query

Hello, thanks for great repository.

I'm wondering how ordering should be implemented with this interface:

export interface OrderBy {
  [key: number]: -1 | 1;
}

How can I pass a DTO to sort some column ASC/DESC in a controller?

execute start:dev throws error

async save(entity: Entity): Promise<Entity> {
entity.validate(); // Protecting invariant before saving
const ormEntity = this.mapper.toOrmEntity(entity);
const result = await this.repository.save(ormEntity);

when excute start:dev it throws error

src/libs/ddd/infrastructure/database/base-classes/typeorm.repository.base.ts:47:47 - error TS2769: No overload matches this call.
  Overload 1 of 4, '(entities: DeepPartial<OrmEntity>[], options?: SaveOptions | undefined): Promise<(DeepPartial<OrmEntity> & OrmEntity)[]>', gave the following error.
    Argument of type 'OrmEntity' is not assignable to parameter of type 'DeepPartial<OrmEntity>[]'.
  Overload 2 of 4, '(entity: DeepPartial<OrmEntity>, options?: SaveOptions | undefined): Promise<DeepPartial<OrmEntity> & OrmEntity>', gave the following error.
    Argument of type 'OrmEntity' is not assignable to parameter of type 'DeepPartial<OrmEntity>'.

47     const result = await this.repository.save(ormEntity);

What am I missing?
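
If it helps, TS2769 here usually comes from OrmEntity being a generic type parameter: TypeScript cannot verify that an arbitrary T is compatible with DeepPartial<T>, so none of save()'s overloads match. A minimal self-contained reproduction, with the cast that is the usual (blunt) workaround - an assumption about the cause, not a confirmed fix for this repo:

```typescript
// Simplified stand-in for TypeORM's DeepPartial.
type DeepPartial<T> = { [K in keyof T]?: DeepPartial<T[K]> };

function save<T extends object>(entity: T): T {
  // When T is a generic parameter, TypeScript cannot prove T ~ DeepPartial<T>,
  // which is what makes TypeORM's save() overloads fail to match.
  // Casting through `unknown` sidesteps the check:
  const payload = entity as unknown as DeepPartial<T>;
  return payload as unknown as T;
}
```

In the base repository this would look like casting ormEntity at the call site, e.g. `this.repository.save(ormEntity as DeepPartial<OrmEntity>)`.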

Question: double validation?

Hi, thanks for the project, it is very good learning material.

I have a quick question: in the user module you have business rules on the country property of the Address value object:

https://github.com/Sairyss/domain-driven-hexagon/blob/master/src/modules/user/domain/value-objects/address.value-object.ts#L29

And later, in the create-user use case you have a DTO model that also specifies (different) validation rules:

https://github.com/Sairyss/domain-driven-hexagon/blob/master/src/modules/user/use-cases/create-user/create-user.request.dto.ts#L21

Isn't that duplication problematic? Do you think we should try to keep both sets of rules in sync? Or are the DTO rules just basic safety, and only the domain rules matter? And in any case, shouldn't the DTO rules be at least less restrictive than the domain rules? Here it would not be possible to have a country between 30 and 50 characters.

Thanks again for your dedication.
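
One way to keep the two rule sets from drifting (an illustration, not code from this repo) is to put the limits in a shared constant that both the DTO decorators and the value object guard reference:

```typescript
// Single source of truth for the country length rule; both the DTO
// validation (e.g. class-validator decorators like @MaxLength) and
// the domain value object can reference it.
const countryRules = { minLength: 2, maxLength: 50 } as const;

// Guard as the value object might apply it.
function isValidCountry(value: string): boolean {
  return (
    value.length >= countryRules.minLength &&
    value.length <= countryRules.maxLength
  );
}
```

With this, the DTO can stay less restrictive than the domain (or identical to it), but the bounds themselves are never duplicated.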

Entities relationship and performance

Thanks for this great repo, it's a mine of good practices and advice.
I have a remark about performance when we have relationships between entities.

We can have different cases: entities in the same module, or entities in the same aggregate.
Are there any recommendations for managing entity relationships while preserving good performance?
We should avoid loading all related entities; for example, for a user that has many posts, we should avoid loading all of the user's posts before executing an add, an update, or any other action on a single post.

Very nice!!

Not really an issue, but wanna congratulate you on this repo, it's very good content.

I'm glad more people are walking the path I walked some years ago, even with a repository on GitHub like I did.

Although I must say your diagram looks a lot better than mine :)

If you are curious, you can also check out my talk about it.

Keep it up!

Why are ports defined in infrastructure layer?

Not sure if I'm missing something, but why are database ports such as user.repository.port.ts placed in the infrastructure layer next to their implementations?
Shouldn't they be defined in the application/domain core where they are injected?

Typo in the find-users GraphQL resolver's name

Hi!

I'm just exploring your repo and I think I noticed a typo in the name of the file /src/modules/user/queries/find-users/find-users.gralhql-resolver.ts. I guess it should say find-users.graphql-resolver.ts. :)

Thanks for sharing your work!

`structuredClone()` function converts entity objects into plain objects, causing `undefined` properties when calling getters

Summary

The getPropsCopy() method uses the structuredClone() function, which creates a deep copy of the object. This means the copy loses any functions, getters, or setters that were defined on the original object; instead, you end up with a plain object that only contains the data properties.

Description:

When using the getPropsCopy() method to create a copy of an entity object, the structuredClone() function converts the entity object into a plain object. This causes issues when calling the getters for properties of the entity object.

For example, when trying to access the address.country, address.postalCode, and address.street getters of a UserEntity object, the properties return undefined due to the conversion of the object by structuredClone().

This issue seems to be related to the structuredClone() function and may be causing similar issues in other parts of the application.

 public getPropsCopy(): EntityProps & BaseEntityProps {
    const propsCopy = structuredClone({
      id: this._id,
      createdAt: this._createdAt,
      updatedAt: this._updatedAt,
      ...this.props,
    });
    return Object.freeze(propsCopy);
  }

Steps to Reproduce:

Refer to:

 toPersistence(entity: UserEntity): UserModel {
    const copy = entity.getPropsCopy();
    const record: UserModel = {
      id: copy.id,
      createdAt: copy.createdAt,
      updatedAt: copy.updatedAt,
      email: copy.email,
      country: copy.address.country, // getter
      postalCode: copy.address.postalCode, // getter
      street: copy.address.street, // getter
      role: copy.role,
    };
    return userSchema.parse(record);
  }
  • Create an instance of a UserEntity object.
  • Call the getPropsCopy() method to create a copy of the object.
  • Try to access the address.country, address.postalCode, and address.street getters of the copied object.

Expected Result:

The address.country, address.postalCode, and address.street getters should return the expected values.

Actual Result:

The address.country, address.postalCode, and address.street getters are returning undefined.
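
A minimal reproduction of the behavior described above: structuredClone() copies own data properties but not prototype accessors, so the clone comes back as a plain object and its getters disappear.

```typescript
// Address with a getter on the prototype, like a value object.
class Address {
  constructor(private readonly _country: string) {}

  get country(): string {
    return this._country;
  }
}

const original = new Address('DE');
// structuredClone keeps the _country data property but drops the prototype,
// so the clone has no `country` getter.
const copy = structuredClone(original) as Address;

console.log(original.country); // 'DE'
console.log(copy.country); // undefined: the getter lived on the prototype
```

This is why getter-backed values need to be materialized into plain data properties before cloning (or copied by some other mechanism) to avoid the undefined reads.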

License

You put together a terrific resource!
I am currently writing a book on backend development (https://zero2prod.com) and I'd be interested in featuring (and linking to) your diagram on hexagonal architecture. While checking the repository, though, I didn't manage to find a license covering the material - can you clarify what can and cannot be done with what is inside this repository?

Best Approach for multi-tenancy

What is the best approach for multi-tenancy with TypeORM?

Currently, I created a TenantModule (credits to Esposito's Medium post) to create and change the TypeORM connection on the fly.

But this doesn't seem right, since it ties the TypeORM connection and other sources to one general module instead of letting each data source handle multi-tenancy its own way.

What do you think is the best practice for this?
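
For comparison, a sketch of a per-tenant registry (the names and the lazy-pool approach are my assumptions, not a recommendation from this repo): instead of one module mutating a global connection, each data source asks for the connection belonging to the current tenant.

```typescript
// Stand-in for a TypeORM DataSource; only what the sketch needs.
interface DataSourceLike {
  tenantId: string;
}

class TenantConnectionRegistry {
  private pools = new Map<string, DataSourceLike>();

  constructor(private readonly open: (tenantId: string) => DataSourceLike) {}

  forTenant(tenantId: string): DataSourceLike {
    let ds = this.pools.get(tenantId);
    if (!ds) {
      ds = this.open(tenantId); // lazily open one pool per tenant
      this.pools.set(tenantId, ds);
    }
    return ds;
  }
}
```

Resolving the tenant id at request scope (e.g. from a header or subdomain) and then calling forTenant keeps the tenancy decision out of any single global module.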

Intermittent test failure for create-user

First off, thank you for this excellent example. I have learned a lot from it!

I'm encountering an intermittent test failure for the create-user e2e spec.

~/C/P/domain-driven-hexagon master = !3 ?2 ❯ yarn test                                                       4s 06:41:20 AM
yarn run v1.22.19
$ jest --config .jestrc.json
 PASS  tests/user/delete-user/delete-user.e2e-spec.ts (12.032 s)
 FAIL  tests/user/create-user/create-user.e2e-spec.ts (12.212 s)
  ● Create a user › I can create a user

    expect(received).toBe(expected) // Object.is equality

    Expected: "string"
    Received: "undefined"

      42 |     then('I receive my user ID', () => {
      43 |       const response = ctx.latestResponse as IdResponse;
    > 44 |       expect(typeof response.id).toBe('string');
         |                                  ^
      45 |     });
      46 |
      47 |     and('I can see my user in a list of all users', async () => {

      at Object.stepFunction (tests/user/create-user/create-user.e2e-spec.ts:44:34)
      at node_modules/jest-cucumber/src/feature-definition-creation.ts:134:65

Test Suites: 1 failed, 1 passed, 2 total
Tests:       1 failed, 6 passed, 7 total
Snapshots:   0 total
Time:        12.906 s
Ran all test suites.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

When I log the result monad at the controller for that particular endpoint, I see what might be a clue.

When all tests pass, I see the conflict monad printed to the console first, then the successful monad that contains the user entity id:

~/C/P/domain-driven-hexagon master = !3 ?2 ❯ yarn test                                                      17s 06:42:58 AM
yarn run v1.22.19
$ jest --config .jestrc.json
 PASS  tests/user/delete-user/delete-user.e2e-spec.ts (9.954 s)
  ● Console

    console.log
      ResultType {
        [Symbol(Val)]: UserAlreadyExistsError: User already exists
            at CreateUserService.execute (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/src/modules/user/commands/create-user/create-user.service.ts:39:20)
            at processTicksAndRejections (node:internal/process/task_queues:95:5)
            at CreateUserHttpController.create (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/src/modules/user/commands/create-user/create-user.http.controller.ts:42:7) {
          cause: ConflictException [Error]: Record already exists
              at UserRepository.insert (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/src/libs/db/sql-repository.base.ts:118:15)
              at processTicksAndRejections (node:internal/process/task_queues:95:5)
              at /Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/src/libs/db/sql-repository.base.ts:215:24
              at execTransaction (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/node_modules/slonik/dist/src/connectionMethods/transaction.js:19:24)
              at transaction (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/node_modules/slonik/dist/src/connectionMethods/transaction.js:77:16)
              at /Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/node_modules/slonik/dist/src/binders/bindPool.js:120:24
              at createConnection (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/node_modules/slonik/dist/src/factories/createConnection.js:111:18)
              at Object.transaction (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/node_modules/slonik/dist/src/binders/bindPool.js:119:20)
              at CreateUserService.execute (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/src/modules/user/commands/create-user/create-user.service.ts:35:7)
              at CreateUserHttpController.create (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/src/modules/user/commands/create-user/create-user.http.controller.ts:42:7) {
            cause: [UniqueIntegrityConstraintViolationError],
            metadata: undefined,
            correlationId: 'r0aTGk',
            code: 'GENERIC.CONFLICT'
          },
          metadata: undefined,
          correlationId: 'r0aTGk',
          code: 'USER.ALREADY_EXISTS'
        },
        [Symbol(T)]: false
      } result

      at CreateUserHttpController.create (src/modules/user/commands/create-user/create-user.http.controller.ts:44:13)

 PASS  tests/user/create-user/create-user.e2e-spec.ts (10.035 s)
  ● Console

    console.log
      ResultType {
        [Symbol(Val)]: '9ed7cc92-5903-4107-a6e3-f1cf7536eb1d',
        [Symbol(T)]: true
      } result

      at CreateUserHttpController.create (src/modules/user/commands/create-user/create-user.http.controller.ts:44:13)


Test Suites: 2 passed, 2 total
Tests:       7 passed, 7 total
Snapshots:   0 total
Time:        10.585 s, estimated 13 s
Ran all test suites.
✨  Done in 15.33s.

However, when the test fails, I see the success monad printed first, then the conflict monad:

~/C/P/domain-driven-hexagon master = !2 ?2 ❯ yarn test                                                      16s 06:51:45 AM
yarn run v1.22.19
$ jest --config .jestrc.json
 PASS  tests/user/delete-user/delete-user.e2e-spec.ts
  ● Console

    console.log
      ResultType {
        [Symbol(Val)]: '4d479c18-2f8a-4fd8-a4cc-8ae30e3ce6a3',
        [Symbol(T)]: true
      } result

      at CreateUserHttpController.create (src/modules/user/commands/create-user/create-user.http.controller.ts:44:13)

 FAIL  tests/user/create-user/create-user.e2e-spec.ts (5.084 s)
  ● Console

    console.log
      ResultType {
        [Symbol(Val)]: UserAlreadyExistsError: User already exists
            at CreateUserService.execute (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/src/modules/user/commands/create-user/create-user.service.ts:39:20)
            at processTicksAndRejections (node:internal/process/task_queues:95:5)
            at CreateUserHttpController.create (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/src/modules/user/commands/create-user/create-user.http.controller.ts:42:7) {
          cause: ConflictException [Error]: Record already exists
              at UserRepository.insert (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/src/libs/db/sql-repository.base.ts:118:15)
              at processTicksAndRejections (node:internal/process/task_queues:95:5)
              at /Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/src/libs/db/sql-repository.base.ts:215:24
              at execTransaction (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/node_modules/slonik/dist/src/connectionMethods/transaction.js:19:24)
              at transaction (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/node_modules/slonik/dist/src/connectionMethods/transaction.js:77:16)
              at /Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/node_modules/slonik/dist/src/binders/bindPool.js:120:24
              at createConnection (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/node_modules/slonik/dist/src/factories/createConnection.js:111:18)
              at Object.transaction (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/node_modules/slonik/dist/src/binders/bindPool.js:119:20)
              at CreateUserService.execute (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/src/modules/user/commands/create-user/create-user.service.ts:35:7)
              at CreateUserHttpController.create (/Users/zacharyweidenbach/Code/Personal/domain-driven-hexagon/src/modules/user/commands/create-user/create-user.http.controller.ts:42:7) {
            cause: [UniqueIntegrityConstraintViolationError],
            metadata: undefined,
            correlationId: 'DkMKdX',
            code: 'GENERIC.CONFLICT'
          },
          metadata: undefined,
          correlationId: 'DkMKdX',
          code: 'USER.ALREADY_EXISTS'
        },
        [Symbol(T)]: false
      } result

      at CreateUserHttpController.create (src/modules/user/commands/create-user/create-user.http.controller.ts:44:13)

  ● Create a user › I can create a user

    expect(received).toBe(expected) // Object.is equality

    Expected: "string"
    Received: "undefined"

      42 |     then('I receive my user ID', () => {
      43 |       const response = ctx.latestResponse as IdResponse;
    > 44 |       expect(typeof response.id).toBe('string');
         |                                  ^
      45 |     });
      46 |
      47 |     and('I can see my user in a list of all users', async () => {

      at Object.stepFunction (tests/user/create-user/create-user.e2e-spec.ts:44:34)
      at node_modules/jest-cucumber/src/feature-definition-creation.ts:134:65

Test Suites: 1 failed, 1 passed, 2 total
Tests:       1 failed, 6 passed, 7 total
Snapshots:   0 total
Time:        5.534 s, estimated 15 s
Ran all test suites.
error Command failed with exit code 1.
info Visit https://yarnpkg.com/en/docs/cli/run for documentation about this command.

I'm looking into this more, but my first instinct is that either the tests are running in parallel and creating a race condition (although I don't see any Jest arguments altering the worker count to enable parallel test execution), or there is something wrong in the testing infrastructure for the response context and it is being mutated in a non-deterministic way.
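
One way to test the parallel-execution hypothesis (a debugging aid, not a fix from this repo): Jest runs separate test files in parallel workers by default, even without explicit CLI arguments, so forcing serial execution isolates whether the two specs race on the same database.

```typescript
// jest.config.ts (sketch): run test files serially to rule out cross-file
// races on shared state such as the test database.
export default {
  maxWorkers: 1, // comparable effect to passing --runInBand on the CLI
};
```

If the failure disappears when serialized, the race is between the create-user and delete-user specs sharing one database rather than inside the response context.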

Good work !

I stumbled upon this repo by chance and I think it's great, thanks for this!

Saving aggregate

Hi, your code is a true source of inspiration regarding DDD - lots of explanations, great job!

In your code there is only save and delete for the user. For update I have some issues regarding saving/updating the aggregate:

  1. First load the aggregate from the repo/unit of work
  2. Make some changes based on the DTO
  3. Save the aggregate

Using TypeORM (in the end), the save/update method would update all of the aggregate's properties (loaded in step 1), not only the changed ones.

If we have two requests that change two different properties (firstName, lastName) and the first request has some delay/more work/etc., the changes of the last request would be overwritten by the first. (Should the repo somehow save only what changed in the aggregate?)

Thanks,
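
A common remedy for the lost-update problem described above is optimistic concurrency: the aggregate carries a version, and a save only succeeds if the version loaded in step 1 is still current. A minimal in-memory sketch (my illustration, not the repo's implementation):

```typescript
class OptimisticLockError extends Error {}

interface UserRecord {
  id: string;
  firstName: string;
  lastName: string;
  version: number;
}

class InMemoryUserRepo {
  private rows = new Map<string, UserRecord>();

  findById(id: string): UserRecord | undefined {
    const row = this.rows.get(id);
    return row ? { ...row } : undefined;
  }

  // Rejects the write when someone else saved since this copy was loaded,
  // instead of silently overwriting their changes.
  save(user: UserRecord): void {
    const current = this.rows.get(user.id);
    if (current && current.version !== user.version) {
      throw new OptimisticLockError('Aggregate was modified concurrently');
    }
    this.rows.set(user.id, { ...user, version: user.version + 1 });
  }
}
```

With SQL this is typically `UPDATE ... WHERE id = $1 AND version = $2`, failing (or retrying) the command when zero rows are affected; TypeORM's @VersionColumn decorator implements the same idea.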

resolver return

Hello!

The resolver should return

return match(result, {
  Ok: (id: string) => new IdResponse(id),
  Err: (error: Error) => {
    if (error instanceof UserAlreadyExistsError)
      throw new ConflictHttpException(error.message);
    throw error;
  },
});

instead of

Ok: (id: string) => new IdResponse(id)

Nest removed Logger.setContext()

Thanks so much for effort writing all this, this repo has been my go to for helping me grok DDD books and their concepts in the context of something tangible.

I'm working through your example implementation and noticed that at some point Nest upgraded their Logger implementation, so the Logger.setContext() method is no longer available. Instead you're expected to pass the context as an optional second parameter in the log call.

All I can think of is to add the log method overloads to the Logger port interface and then add a private class attribute for the logContext wherever you previously called setContext().

Is there a cleaner way to deal with this that I'm missing?
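
One shape this could take (a sketch assuming consumers go through a LoggerPort; none of these names come from the repo): store the context in the adapter once and forward it as the optional second argument on every call, which replaces the removed setContext():

```typescript
interface LoggerPort {
  log(message: string): void;
}

// Matches the post-upgrade Nest call shape: log(message, context?).
type NestStyleLogFn = (message: string, context?: string) => void;

class ContextualLogger implements LoggerPort {
  constructor(
    private readonly write: NestStyleLogFn, // e.g. nestLogger.log.bind(nestLogger)
    private readonly context: string, // set once, where setContext() used to be called
  ) {}

  log(message: string): void {
    this.write(message, this.context); // context forwarded on every call
  }
}
```

This keeps the port free of Nest-specific overloads; only the adapter knows the context travels as a second argument.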

UnitOfWork creates many QueryRunners and does not release them

Your UoW class (in my typeorm-unit-of-work.base.ts) creates many connections and blocks any query to the database.

import { EntityTarget, getConnection, QueryRunner, Repository } from 'typeorm';
import { IsolationLevel } from 'typeorm/driver/types/IsolationLevel';

import { ILoggerPort, IUnitOfWorkPort } from 'src/shared/domain/ports';
import { ServiceResponseDtoBase } from 'src/shared/interface-adapters/base-classes/service-response-dto.base';

export class TypeORMUnitOfWork implements IUnitOfWorkPort {
  constructor(private readonly logger: ILoggerPort) {}

  private _queryRunners: Map<string, QueryRunner> = new Map();

  // Use this and check in cron job (executed every 10 min) 
  getQueryRunners() {
    return this._queryRunners;
  }

  getQueryRunner(correlationId: string): QueryRunner {
    const queryRunner = this._queryRunners.get(correlationId);
    if (!queryRunner) {
      throw new Error(
        'Query runner not found. Incorrect correlationId or transaction is not started. To start a transaction wrap operations in a "execute" method.',
      );
    }
    return queryRunner;
  }

  getOrmRepository<Entity>(
    entity: EntityTarget<Entity>,
    correlationId: string,
  ): Repository<Entity> {
    const queryRunner = this.getQueryRunner(correlationId);
    return queryRunner.manager.getRepository(entity);
  }

  async execute<T>(
    correlationId: string,
    callback: () => Promise<T>,
    options?: { isolationLevel: IsolationLevel },
  ): Promise<T> {
    if (!correlationId) {
      throw new Error('Correlation ID must be provided');
    }
    this.logger.setContext(`${this.constructor.name}:${correlationId}`);
    const queryRunner = getConnection().createQueryRunner();
    this._queryRunners.set(correlationId, queryRunner);
    this.logger.debug(`[Starting transaction]`);
    await queryRunner.startTransaction(options?.isolationLevel);
    let result: T | ServiceResponseDtoBase<T>;
    // Here i use my own wrapper, like from source 
    try {
      result = await callback();
      if (
        ServiceResponseDtoBase.isError(
          result as unknown as ServiceResponseDtoBase<T>,
        )
      ) {
        await this.rollbackTransaction(
          correlationId,
          (result as unknown as ServiceResponseDtoBase<any>).data,
        );
        return result;
      }
    } catch (error) {
      await this.rollbackTransaction<T>(correlationId, error as Error);
      throw error;
    }
    try {
      await queryRunner.commitTransaction();
    } finally {
      await this.finish(correlationId);
    }
    this.logger.debug(`[Transaction committed]`);
    return result;
  }

  private async rollbackTransaction<T>(correlationId: string, error: Error) {
    const queryRunner = this.getQueryRunner(correlationId);
    try {
      await queryRunner.rollbackTransaction();
      this.logger.debug(
        `[Transaction rolled back] ${(error as Error).message}`,
      );
    } finally {
      await this.finish(correlationId);
    }
  }

  private async finish(correlationId: string): Promise<void> {
    const queryRunner = this.getQueryRunner(correlationId);
    try {
      await queryRunner.release();
    } finally {
      this._queryRunners.delete(correlationId);
    }
  }
}

In my cron job I just inject this UoW and call getQueryRunners().size - around 140 objects after 30 minutes in production...
Any advice, or is there a reason why?

Domain service and persistence

All recommendations about use cases (application services) are clear when we deal with only one entity or aggregate.
When there are many entities, you recommend using a domain service instead.

A use case contains all the entity method calls, plus the call to the repository and the persistence part.
If we use a domain service instead, can we manage the calls to repositories and the persistence from the domain service?
Or will the domain service call back into the use case at the end?
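
The arrangement usually described for this (my reading, not an official answer from this repo) keeps persistence in the use case; the domain service stays pure and only encodes the multi-entity rule. A sketch:

```typescript
interface Wallet {
  id: string;
  balance: number;
}

// Domain service: logic spanning two entities, no repository access.
function transfer(from: Wallet, to: Wallet, amount: number): [Wallet, Wallet] {
  if (amount <= 0) throw new Error('Amount must be positive');
  if (from.balance < amount) throw new Error('Insufficient funds');
  return [
    { ...from, balance: from.balance - amount },
    { ...to, balance: to.balance + amount },
  ];
}

// Use case (application service): loads, delegates to the domain service,
// then persists - so repositories never leak into the domain layer.
async function transferUseCase(
  repo: { findById(id: string): Promise<Wallet>; save(w: Wallet): Promise<void> },
  fromId: string,
  toId: string,
  amount: number,
): Promise<void> {
  const [from, to] = await Promise.all([repo.findById(fromId), repo.findById(toId)]);
  const [updatedFrom, updatedTo] = transfer(from, to, amount);
  await repo.save(updatedFrom);
  await repo.save(updatedTo);
}
```

So the use case still owns persistence; the domain service neither calls repositories nor calls back into use cases.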

Some questions

Hello,

First of all, thank you for your awesome work, this is very inspiring.

  1. I'm wondering: what is the purpose of the (empty) "providers" folder in "infrastructure"? I found nothing in the docs.
  2. Why did you rename "core" to "libs"? I loved the previous name, so I'd like to know the reasons for this renaming.
  3. Why is the file "infrastructure/interceptors/exception.interceptor.ts" not in the "libs" folder, as it seems generic?

Thank you very much! :)

Dependency Inversion in new version

I suggest using @Inject with UserRepositoryPort instead of importing UserRepository directly: a direct import still makes the handler depend on the concrete repository, so the port interface serves no purpose and is just for show.

user.module.ts

const repositories = [
  {
    provide: 'UserRepository',
    useClass: UserRepository,
  },
];

create-user.service.ts

  constructor(
    @Inject('UserRepository') private readonly repository: UserRepositoryPort,
  ) {}

Now @Inject uses the 'UserRepository' token provided in user.module.ts to know exactly which dependency to inject into the service; there is no need to import UserRepository directly inside the service file.

Add more information on the topic of authentication & authorization

In which layer should authentication & authorization be handled, and what is the best way to do that?
My current implementation is a UserGuard port, which is basically an interface with the following methods:

import GuardError from './errors'

interface UserGuard {
  createKeyFor(user: User): Promise<string>
  findKeyHolder(key: string): AsyncOutcome<User, GuardError>
}

The testing implementation uses a Map object and an in-memory implementation of UserRepo; it maps access keys to user aggregate id values. The production implementation will use an actual caching store such as Redis plus the user database table.

The "access key" is essentially a typical session cookie, but it could be a JWT token or something else; the management of these tokens is entirely up to the underlying implementation, and the domain and application layers are blind to it.

The UserGuard lives inside the identity subdomain of my application, where user identity data is managed. Other subdomains (for example the Shipping & Billing subdomains) have their own guard ports that are very similar to UserGuard, and their production implementations will use the same Redis store. Basically, the user logs in and creates an access key; the user can then use that access key to interact with the identity subdomain as a User, or with the billing subdomain as a Customer.

Is this a good way of doing things? Also, where should authorization be done? I'm doing most of it in the domain layer; for example, I'd have a domain service payInvoice(customer: Customer, invoice: Invoice): Outcome<Invoice, Error> that returns a failed outcome if the invoice belongs to a different customer or if the customer was denied the right to pay any invoices.

I think this repo should add a bit more information about authentication: where, and maybe how, it should be done.
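
For concreteness, a sketch of the domain-level authorization check described above (types simplified; the Outcome type is reduced to a plain discriminated union):

```typescript
interface Customer {
  id: string;
  mayPayInvoices: boolean; // illustrative stand-in for a real permission check
}

interface Invoice {
  id: string;
  customerId: string;
  paid: boolean;
}

type Outcome<T, E> = { ok: true; value: T } | { ok: false; error: E };

// Domain service: ownership and permission are both business rules here,
// so they live in the domain layer rather than in an HTTP guard.
function payInvoice(customer: Customer, invoice: Invoice): Outcome<Invoice, Error> {
  if (invoice.customerId !== customer.id) {
    return { ok: false, error: new Error('Invoice belongs to a different customer') };
  }
  if (!customer.mayPayInvoices) {
    return { ok: false, error: new Error('Customer may not pay invoices') };
  }
  return { ok: true, value: { ...invoice, paid: true } };
}
```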
