
ormus's Introduction

🧾 Introduction

A Customer Data Platform (CDP) is a piece of software that combines data from multiple tools to create a single centralized customer database containing data on all touch points and interactions with your product or service. That database can then be segmented in a nearly endless number of ways to create more personalized marketing campaigns.


🌴 About Ormus

Ormus Island, also known as Hormuz Island, is an ancient and enigmatic island nestled in the warm embrace of the Persian Gulf off the coast of Iran, a place steeped in history, mystique, and natural beauty. Its name evokes tales of legendary treasures, exotic trade, and maritime adventures that have unfolded on its shores for centuries.

ormus's People

Contributors

doo-dev, pouriaseyfi, mohsenha, abtinokhovat, rezmehdi, sinaw369, amiratashghah, iam-benyamin, gohossein, j3yzz, mdhesari


ormus's Issues

#destination New implementation of destination service

The code is ready for test here.

Three managers were created to manage events, tasks, and workers.

  • Event manager
  • Task manager
  • Worker manager

Event manager

Use the interface below to consume events and publish delivery tasks:

type EventManager interface {
	GetDeliveryTaskChannel() chan param.DeliveryTaskResponse
	GetEventPublisherChannel() chan event.ProcessedEvent
}

The GetDeliveryTaskChannel provides a channel used to publish the result of a delivery to the Source service. After a task is handled successfully, the worker publishes the result to this channel.

The GetEventPublisherChannel provides a channel used by the task manager. When a processed event is received from the broker, the event manager publishes it to this channel.
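As a rough illustration, a channel-backed implementation of this interface could look like the following sketch (the struct and constructor names are assumptions, not the repository's actual code):

// ChannelEventManager is a hypothetical channel-backed EventManager.
type ChannelEventManager struct {
	deliveryTaskCh   chan param.DeliveryTaskResponse
	processedEventCh chan event.ProcessedEvent
}

func NewChannelEventManager(bufferSize int) *ChannelEventManager {
	return &ChannelEventManager{
		deliveryTaskCh:   make(chan param.DeliveryTaskResponse, bufferSize),
		processedEventCh: make(chan event.ProcessedEvent, bufferSize),
	}
}

// GetDeliveryTaskChannel returns the channel workers publish delivery results to.
func (m *ChannelEventManager) GetDeliveryTaskChannel() chan param.DeliveryTaskResponse {
	return m.deliveryTaskCh
}

// GetEventPublisherChannel returns the channel processed events are forwarded on.
func (m *ChannelEventManager) GetEventPublisherChannel() chan event.ProcessedEvent {
	return m.processedEventCh
}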

Task manager

The task manager has the below interface for adapters.

type Adapter interface {
	GetTaskChannelForConsume(taskType tasktype.TaskType) (chan taskentity.Task, error)
	GetTaskChannelForPublish(taskType tasktype.TaskType) (chan taskentity.Task, error)
	NewChannel(taskType tasktype.TaskType, bufferSize int)
}

The task manager itself has four methods:

NewChannel

NewChannel(taskType tasktype.TaskType, bufferSize int)

This method is used to create a new channel for a specific task type. It simply delegates to the adapter's NewChannel.

GetTaskChannelForConsume

GetTaskChannelForConsume(taskType tasktype.TaskType) (<-chan taskentity.Task, error)

When the task manager receives a processed event, it converts it to a task and publishes the task so that it can be consumed from this channel. The method returns a read-only channel of type taskentity.Task.

GetTaskChannelForPublish

GetTaskChannelForPublish(taskType tasktype.TaskType) (chan<- taskentity.Task, error)

This method is used to get a write-only channel on which the task manager publishes tasks as they are received.

When we use the ChannelTaskManager adapter, the two methods above refer to the same channel: a task published to the TaskChannelForPublish can be consumed on the TaskChannelForConsume channel.
When we run the workers and the main destination service as separate processes, it is reasonable to put a message broker between the two channels: when a task is published on the TaskChannelForPublish channel, the adapter publishes it to the message broker, and on the other side a consumer defined in the task manager consumes it from the broker and publishes it on the TaskChannelForConsume channel.

Start

This method starts listening on the event manager's channel. After receiving a processed event, it checks the task's idempotency and status, then publishes the task on the TaskChannelForPublish.
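For illustration only, the Start loop could look roughly like this, assuming a TaskManager struct that holds the event manager and the adapter; the event-to-task converter and the idempotency check are hypothetical helpers:

func (tm *TaskManager) Start() {
	go func() {
		for processedEvent := range tm.eventManager.GetEventPublisherChannel() {
			// Convert the processed event into a task (hypothetical helper).
			task := taskentity.NewTaskFromEvent(processedEvent)

			// Idempotency/status check: skip tasks that are already handled (hypothetical helper).
			if tm.alreadyHandled(task) {
				continue
			}

			publishCh, err := tm.adapter.GetTaskChannelForPublish(task.Type)
			if err != nil {
				// Real error handling (logging, retry, ...) is up to the implementation.
				continue
			}
			publishCh <- task
		}
	}()
}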

Task manager adapter

ChannelTaskManager

It simply creates a channel per task type and returns it when GetTaskChannelForConsume and GetTaskChannelForPublish are called.
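A minimal sketch of such an adapter, implementing the Adapter interface above (the struct and constructor names are illustrative):

type ChannelTaskManager struct {
	channels map[tasktype.TaskType]chan taskentity.Task
}

func NewChannelTaskManager() *ChannelTaskManager {
	return &ChannelTaskManager{channels: make(map[tasktype.TaskType]chan taskentity.Task)}
}

func (c *ChannelTaskManager) NewChannel(taskType tasktype.TaskType, bufferSize int) {
	c.channels[taskType] = make(chan taskentity.Task, bufferSize)
}

func (c *ChannelTaskManager) GetTaskChannelForConsume(taskType tasktype.TaskType) (chan taskentity.Task, error) {
	ch, ok := c.channels[taskType]
	if !ok {
		return nil, fmt.Errorf("no channel registered for task type %v", taskType)
	}
	return ch, nil
}

func (c *ChannelTaskManager) GetTaskChannelForPublish(taskType tasktype.TaskType) (chan taskentity.Task, error) {
	// Same underlying channel: whatever is published here is consumed from
	// GetTaskChannelForConsume, as described above.
	return c.GetTaskChannelForConsume(taskType)
}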

RabbitmqTaskManager

When the method NewChannel is called, it creates channels depending on the mode chosen at initialization (a rough sketch follows the list below).

  • Consumer mode
    In consumer mode, it creates one Go channel per task type, configures a RabbitMQ exchange and queue, and starts consuming. When a task is received from RabbitMQ, it is published to the Go channel created for that task type. The worker goroutine waits to receive tasks from the consume channel of the task manager.

  • Publisher mode
    In publisher mode, it creates one Go channel per task type and waits on that channel for tasks. After receiving a task, it publishes the task to the queue defined on RabbitMQ.

  • Both mode
    In both mode, it creates both of the above channels with all of their capabilities.
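As a rough illustration of the mode handling (the field and helper names such as r.mode, consumeFromQueue, and publishToQueue are assumptions, not the repository's actual RabbitMQ code):

func (r *RabbitmqTaskManager) NewChannel(taskType tasktype.TaskType, bufferSize int) {
	// Consumer side: broker -> Go channel -> worker goroutine.
	if r.mode == ModeConsumer || r.mode == ModeBoth {
		ch := make(chan taskentity.Task, bufferSize)
		r.consumeChannels[taskType] = ch
		go r.consumeFromQueue(taskType, ch) // hypothetical: declares exchange/queue and consumes
	}

	// Publisher side: Go channel -> broker queue.
	if r.mode == ModePublisher || r.mode == ModeBoth {
		ch := make(chan taskentity.Task, bufferSize)
		r.publishChannels[taskType] = ch
		go r.publishToQueue(taskType, ch) // hypothetical: publishes each received task to RabbitMQ
	}
}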

Worker manager

It receives an event manager and a task manager. It has the following methods:

Register worker

With this method, we can register a worker for a specific task type. Its signature is below:

RegisterWorker(taskType tasktype.TaskType, worker worker.Instant) error

Start

This method calls the Work method of all registered workers.

Worker

It has the below interface:

type Worker interface {
	Work(channel <-chan taskentity.Task, deliverChannel chan<- param.DeliveryTaskResponse, crashChannel chan<- uint)
	UpdateCrashCount(lastCrashCount uint)
}

Every worker must implement this interface. Workers can implement a crash-recovery plan: when a worker crashes, a deferred function can recover from the panic and publish the current crash count to crashChannel. The worker manager receives this count and reruns the worker. If the number of crashes exceeds a specific threshold, the worker manager does not run the worker again.
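For example, a worker with such a crash-recovery plan might look roughly like this (the struct, its fields, and the handle helper are assumptions; only the two interface methods are given by the code above):

type WebhookWorker struct {
	crashCount uint
}

func (w *WebhookWorker) UpdateCrashCount(lastCrashCount uint) {
	w.crashCount = lastCrashCount
}

func (w *WebhookWorker) Work(channel <-chan taskentity.Task,
	deliverChannel chan<- param.DeliveryTaskResponse,
	crashChannel chan<- uint) {

	// If the worker panics, recover and report the new crash count so the
	// worker manager can decide whether to rerun it.
	defer func() {
		if r := recover(); r != nil {
			crashChannel <- w.crashCount + 1
		}
	}()

	for task := range channel {
		res := w.handle(task) // hypothetical task handling returning a DeliveryTaskResponse
		deliverChannel <- res
	}
}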

Footnote

My English is not good. Sorry for the mistakes in the doc above and the naming in the code.

@iam-benyamin
@PouriaSeyfi

Proposal: Introducing an Abstraction (Interface) for Services

In our development process, it is crucial to promote the independence of teams, especially during the development stage.
Currently, our repository lacks an abstraction layer for services, which complicates the development process.
I think it would be good to have interfaces for the services.

An abstraction layer would allow developers to easily create mock or fake services for testing purposes. This would aid in debugging and troubleshooting complex interactions between services without the need for a complete, live infrastructure, and would enable more comprehensive and reliable testing scenarios, including edge cases that are otherwise difficult to simulate.
By using fake services, developers can avoid unnecessary dependencies and complex network interactions during development and testing.
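For example, a hypothetical source-service abstraction with an in-memory fake could look like the sketch below (all names are illustrative; nothing here reflects existing repository code):

// SourceService is a hypothetical abstraction other services would depend on.
type SourceService interface {
	GetWriteKey(ctx context.Context, sourceID string) (string, error)
}

// FakeSourceService is an in-memory implementation for tests.
type FakeSourceService struct {
	Keys map[string]string
}

func (f *FakeSourceService) GetWriteKey(_ context.Context, sourceID string) (string, error) {
	key, ok := f.Keys[sourceID]
	if !ok {
		return "", fmt.Errorf("source %q not found", sourceID)
	}
	return key, nil
}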

Public API Service Requirements

EDITED 2023-10-30T17:20

General Understanding of Public API

The Public API enables the final user to use our system. It lets a user:

  • introduce a source
  • create a data warehouse
  • define destinations

As we can see, this service essentially packages the different parts of the system and offers them as a single "Public API" for the end user to consume.

⚠ Before Reading
This issue is not a complete reference by any means. It will just bring you up to speed with the Public API's objectives. If you need a more complete reference, visit the Segment documentation.

📝Todo

  • What is a catalog?
  • More information about Sync Profile Warehouse
  • More information about Selective Sync
  • What is the purpose of Reverse ETL? Aren't we already sending the required data to destinations from the warehouse? Or are we sending the source's data to both the warehouse and the destination simultaneously?

Terminology

Before going into the details, I think it is more beneficial to talk about some terminology associated with
Segment.

  • Source: A place where we gather data from, e.g. a web application, cloud application, or mobile application.
  • Destination: A place where data is sent to. There are 3 groups of destinations: event streams, storages, and reverse ETL.
  • Warehouse: A kind of storage destination which holds all the raw data. Other alternatives include, but are not limited to, AWS S3, Google Cloud Storage, and Segment Data Lakes.
  • Destination Subscription: Also known as destination actions. The system watches for data that matches a special condition; if the condition is met, predefined actions are triggered that perform some operations on the received data and then store it in the destination.
  • Deletion and Suppression: Refers to regulations such as the GDPR (General Data Protection Regulation) and the CCPA (California Consumer Privacy Act).
  • Workspace: A group of sources combined together for easier management and billing.
  • Profiles Sync Warehouse: A warehouse that stores all the data received from a workspace.
  • Label: An identifier for other parts of our system, so we can attach labels to these parts and later use them for easier management.
  • Schema Config: The schema configuration settings for each source can be used to selectively block events, or omit properties and traits from .track(), .identify(), and .group() calls.

Use Cases

As discussed in an in-team meeting (@mtdamir and @AMiR-MN95), we reached the conclusion that it is better to bring only a very concise list of use cases, because:
1- The Segment documentation holds all the details, and copying them over here seems useless.
2- Not every aspect of our system is clearly depicted, so we, as a service that has to work with every other service, have to cooperate deeply with the other teams.

Source

  • Create, Read (list and single), Update, and Delete a Source
  • Add and Replace a Label on a source
  • Read (list) the Warehouses connected to a source
  • Read (list) the Destinations connected to a source
  • Read (list) and Update the Schema config for a source

Warehouse

  • Create, Read (list and single), Update, and Delete a Warehouse
  • Add and Delete a connection from a source to a warehouse
  • Create a Validation on input against warehouse fields
  • Read the Warehouse connection state
  • Read (list) the sources connected to a warehouse

Destination

  • Create, Read (list and single), Update, and Delete a Destination
  • Create, Read (list and single), Update, and Delete a Destination Subscription
  • Read (list) Metrics

Deletion and Suppression (regulations)

If we intend to go global, we have to respect the legal regulations on users' digital privacy. As Segment puts it:

In keeping with Segment's commitment to support GDPR and future privacy regulations such as the CCPA, you can delete and suppress data about end users if you identify that user with a userId, should they revoke or alter their consent to data collection. For instance, if an end user in the EU invokes their Right to Object or Right to Erasure under the GDPR, you can use the following features in Segment to block ongoing data collection about the user, and delete all historical data across Segment’s systems, connected S3 buckets and Warehouses, and supported downstream partners.
Regulations enable you to issue a single request to delete and suppress data about a user by userId. All regulations are by default scoped to your Workspace and target all Sources within the Workspace. This way, you don't need to page over every Source within Segment to delete data about a user across all your users.

Because we are not familiar with the foreign laws or the system itself, we think this requires more research.

Destination Filters

The Destination Filters API provides fine-grained controls that allow you to conditionally prevent data delivery to specific destinations. You can filter entire events (for example, selectively drop them) or block/allow individual fields in events before you send them.

We could use destination filters to:

  • Reduce the delivery volume of events to a Destination to save on costs
  • Filter out Personally Identifying Information (PII) from the events sent to a Destination that shouldn't receive or store PII
  • Prevent internal user activity from reaching an analytics tool
  • Send the events that you care about to a custom webhook

Supported actions:
  • Create, Read (list and single), Preview, Update, and Delete a Destination filter

Edge Functions (private alpha testing phase)

Edge Functions allow you to serve content from the CDN server closest to the user. This way we can do certain things faster and more personalized for a user.

  • Create and Disable an Edge function
  • Create an Upload URL for an edge function

IAM (Identity and Access Management)

Our workspace admins must be able to add other users to the workspace and restrict their access with permissions and roles. This topic must be discussed later when other resources have been clearly documented.

Reverse ETL (Extract, Transform, Load)

Reverse ETL allows the use of a database (aka: Segment Warehouse) as a source of data to be connected and sent to supported Segment Destinations.

  • Create, Read (list and single), Update, and Delete a Warehouse

Selective Sync

Warehouse Selective Sync allows you to manage the data that you send to your Warehouses.

Type of Action Resource
TODO TODO

Profiles Sync

A Profiles Sync Warehouse is a central repository of data collected from your workspace.

Type of Action Resource
TODO TODO

Segment features related endpoints

These services are provided by Segment and their API and inner workings must be studied later:

API Usage

Because our APIs will be exposed to the public, we must also apply a rate-limiting handler, so that each user can only call the endpoints a certain number of times in a given time range.
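As one possible approach (not a decided design), a per-key token-bucket middleware using golang.org/x/time/rate could look like the sketch below; the limit values and the X-Api-Key header are placeholders:

package middleware

import (
	"net/http"
	"sync"

	"golang.org/x/time/rate"
)

// perUserLimiter keeps one token-bucket limiter per user/API key.
type perUserLimiter struct {
	mu       sync.Mutex
	limiters map[string]*rate.Limiter
}

func newPerUserLimiter() *perUserLimiter {
	return &perUserLimiter{limiters: make(map[string]*rate.Limiter)}
}

func (p *perUserLimiter) get(key string) *rate.Limiter {
	p.mu.Lock()
	defer p.mu.Unlock()
	l, ok := p.limiters[key]
	if !ok {
		l = rate.NewLimiter(rate.Limit(10), 20) // e.g. 10 req/s with a burst of 20 (placeholder values)
		p.limiters[key] = l
	}
	return l
}

// Middleware rejects requests that exceed the caller's rate limit.
func (p *perUserLimiter) Middleware(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		key := r.Header.Get("X-Api-Key") // hypothetical way to identify the caller
		if !p.get(key).Allow() {
			http.Error(w, "rate limit exceeded", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}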


TEAM PUBLIC API
@AtaReversei
@mtdamir
@AMiR-MN95

SDK Requirements

Basic Definition

SDK stands for Software Development Kit: a combination of libraries, used mostly in the context of building mobile or native apps.

Business Requirements List

To start data flow, different resources can be considered as requirements:
  • Website

    • JavaScript library
      1. create-a-source-in-the-backend-app [generate-unique-key]

      2. add-the-snippet [use-unique-key]

      3. identify-users

        • The identify method is how you tell backend who the current user is. It has to include a unique User ID and any optional traits you know about them.
          ex:

          analytics.identify('f4ca124298', {
          name: 'Michael Brown',
          email: '[email protected]'
          });
          That identifies Michael by his unique User ID (in this case, f4ca124298, which is what you know him by in your database) and labels him with name and email traits.

        • Note: You don’t need to call identify for anonymous visitors to your site. backend automatically assigns them an anonymousId, so just calling page and track works just fine without identify.
        • we recommend that you use a backend template to inject an identify call into the footer of every page of your site where the user is logged in. That way, no matter what page the user first lands on, they will always be identified. You don’t need to call identify if your unique identifier (userId) is not known. Depending on your templating language, your actual identify call might look something like this:

          analytics.identify(' {{user.id}} ', {
          name: '{{user.fullname}}',
          email: '{{user.email}}'
          });

      4. track-actions [send api request with payload info + key]

        • The track method is how you tell backend about the actions your users are performing on your site. Every action triggers what we call an “event”, which can also have associated properties.
        • Here's what a track call might look like when a user signs up:

          analytics.track('Signed Up', {
          plan: 'Enterprise'
          });

  • Mobile app

  • Server

  • Cloud App

Use-Case Requirements List

Entities Requirements List


These lists will be added to and completed over time.

Add Conventional Commit to ormus repository

Enhanced Support for Specs in Project

Hello

To ensure consistency and standardization in our data collection process, I suggest creating a new entity specifically dedicated to representing specs. While events represent specific user actions or occurrences, specs define the structure and format of the event data being sent.

Please see this link for more information:
specs

So the proposed solution involves introducing separate struct types for different event types, such as PageSpec, TrackSpec, and others, alongside a base BaseSpec struct for common fields across all spec types. This approach will enable developers and users to define and manage specifications for each event type while maintaining a clear and structured schema.
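As a rough sketch of the proposed structure (field names follow the Segment-style payloads shown elsewhere in this document, but are assumptions):

// BaseSpec holds the fields common to all spec types.
type BaseSpec struct {
	MessageID string         `json:"messageId"`
	Type      string         `json:"type"` // "page", "track", "identify", ...
	UserID    string         `json:"userId"`
	Timestamp time.Time      `json:"timestamp"`
	Context   map[string]any `json:"context"`
}

// PageSpec defines the schema of a page event.
type PageSpec struct {
	BaseSpec
	Name       string         `json:"name"`
	Properties map[string]any `json:"properties"`
}

// TrackSpec defines the schema of a track event.
type TrackSpec struct {
	BaseSpec
	Event      string         `json:"event"`
	Properties map[string]any `json:"properties"`
}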

@gohossein

Implementing CI Workflow for Linting, Formatting and Testing Automation

Overview

As our project continues to grow, maintaining code quality and ensuring reliable test coverage becomes increasingly crucial. This proposal advocates for the establishment of a Continuous Integration (CI) workflow dedicated to automated linting and testing, thereby enhancing the development process and reducing the likelihood of introducing errors.

Features

  • Linting: Integrate a linting step into the CI process to enforce coding standards and identify potential issues related to code style, formatting, and best practices.
  • Code Formatting: Introduce an automated code formatting step to maintain a consistent style across the entire codebase.
  • Unit Testing: Implement a suite of unit tests to validate the correctness of individual components, functions, or modules within the project.

Additional information

In this project we use golangci-lint as the lint tool, so we should implement the workflow based on it.

Implement Message Broker Interface

Description

Create a broker adapter interface to support various message broker implementations.

Details

  • Define a MessageBroker interface with methods for publishing and consuming messages.
  • Implement the interface for specific message broker technologies such as RabbitMQ, Kafka, etc.
  • Ensure flexibility and extensibility to accommodate future message broker integrations.
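One possible shape for the interface (method names and types are open for discussion, not a settled design):

// MessageBroker abstracts the underlying broker technology (RabbitMQ, Kafka, ...).
type MessageBroker interface {
	// Publish sends a payload to the given topic/queue.
	Publish(ctx context.Context, topic string, payload []byte) error
	// Consume returns a channel of payloads received on the given topic/queue.
	Consume(ctx context.Context, topic string) (<-chan []byte, error)
	// Close releases the broker connection and any associated resources.
	Close() error
}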

Tasks

  • Define MessageBroker interface.
  • Implement RabbitMQ message broker adapter.
  • Write tests for each adapter.
  • Document usage and integration guidelines.

Source service requirements

Essential Information for Transmitting Data/Events

To successfully transmit data or events, the following key pieces of information are required:

  1. User Identification: It is imperative to identify the user involved.
  2. Page Context: Knowing the specific page where the event occurred is crucial.
  3. Event Description: Understanding the nature of the event triggered by the user is essential.

Segment Methods Documentation


Anonymous User

In situations where the user may not be authenticated, it is advisable to utilize an "anonymous ID" for the user.
We can consider the Device UUID for this situation.
Device UUID NPM library

Implementing Config Handling for Enhanced Project Configuration Management

Overview

The current state of our project lacks a robust and centralized configuration management system, making it challenging for users and developers to customize and manage project settings efficiently. This proposal suggests the implementation of a comprehensive config handling mechanism to address this limitation.

Features

  • Configuration File Support: Introduce a dedicated configuration file (e.g., YAML, JSON) to store project settings, making it easy for users to modify and maintain configurations.

  • Default Configurations: Establish a set of default configurations within the project to ensure a seamless out-of-the-box experience while allowing users to override these defaults as needed.

  • Integration with Existing Tooling: Ensure compatibility with common tools for configuration management, such as environment variables or command-line arguments, to provide flexibility for different deployment scenarios.

  • Configurable Logging: Consider incorporating the ability to configure logging levels and output destinations to facilitate debugging and monitoring.

Implementation Guidelines:

  • Testing: Provide a robust suite of tests to validate the correctness and reliability of the config handling implementation.
  • Modularity: Design the config handling system to be modular and easily extensible, allowing for the addition of new configuration options in future updates.

Additional information

You can use viper or koanf open-source package to implement this feature.
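For illustration, a sketch using viper (the config fields, file name, and environment prefix are assumptions):

package config

import (
	"strings"

	"github.com/spf13/viper"
)

// Config is a hypothetical set of project settings.
type Config struct {
	HTTPPort int    `mapstructure:"http_port"`
	LogLevel string `mapstructure:"log_level"`
}

// Load reads defaults, then a config file, then environment-variable overrides.
func Load(path string) (Config, error) {
	v := viper.New()

	// Default configurations for an out-of-the-box experience.
	v.SetDefault("http_port", 8080)
	v.SetDefault("log_level", "info")

	// Configuration file (e.g. config.yml); returns an error if it cannot be read.
	v.SetConfigFile(path)
	if err := v.ReadInConfig(); err != nil {
		return Config{}, err
	}

	// Environment variables such as ORMUS_HTTP_PORT override the file.
	v.SetEnvPrefix("ORMUS")
	v.SetEnvKeyReplacer(strings.NewReplacer(".", "_"))
	v.AutomaticEnv()

	var cfg Config
	if err := v.Unmarshal(&cfg); err != nil {
		return Config{}, err
	}
	return cfg, nil
}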

#core(manager) - get list of integrations by source ID

To handle destinations effectively, it is recommended to retrieve the list of integrations for a specific source by making HTTP/gRPC calls. This approach allows for reading different configurations and types from the integrations.

Relying solely on the event message to obtain this information may not be ideal. Including all of this data in the brokers could lead to unnecessary information and potentially impact performance negatively.
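Purely as an illustration of the suggested approach, a client interface with a simple cache wrapper could look like this (all names are hypothetical):

package integration

import (
	"context"
	"sync"
)

// Integration is a hypothetical representation of a destination integration.
type Integration struct {
	ID     string
	Type   string
	Config map[string]any
}

// Client fetches integrations from the core/manager service over HTTP or gRPC.
type Client interface {
	GetIntegrationsBySourceID(ctx context.Context, sourceID string) ([]Integration, error)
}

// CachedClient wraps a Client and caches results per source ID.
type CachedClient struct {
	client Client
	mu     sync.RWMutex
	cache  map[string][]Integration
}

func (c *CachedClient) GetIntegrationsBySourceID(ctx context.Context, sourceID string) ([]Integration, error) {
	c.mu.RLock()
	if ints, ok := c.cache[sourceID]; ok {
		c.mu.RUnlock()
		return ints, nil
	}
	c.mu.RUnlock()

	// Cache miss: make the HTTP/gRPC call through the underlying client.
	ints, err := c.client.GetIntegrationsBySourceID(ctx, sourceID)
	if err != nil {
		return nil, err
	}

	c.mu.Lock()
	c.cache[sourceID] = ints
	c.mu.Unlock()
	return ints, nil
}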

Create a logger package with slog

Description:

The project requires a logging system with structured and leveled output. The slog package provides a powerful and flexible logging framework that meets these requirements. However, the project does not have a dedicated logger package that uses slog and configures the output format, destination, and filters. This issue suggests the creation of a logger package with slog that can be easily integrated into the project.

Features:

Logger Initialization: Implement a function that initializes the logger with a given configuration file or default settings.
Output Format: Define the log structure and levels, and choose a suitable output format (e.g., JSON, plain text) for the logger.
Output Destination: Choose a suitable output destination (e.g., console, file, network) for the logger, and implement a drain that writes the log messages to the destination.
Output Filters: Implement a mechanism that allows the user to filter the log messages based on the log level, module, or other criteria.
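A minimal sketch of such a package built on the standard library's log/slog (the constructor signature and options are assumptions):

package logger

import (
	"io"
	"log/slog"
	"os"
)

// New builds a slog.Logger with the given level, format, and destination.
func New(level slog.Level, jsonOutput bool, w io.Writer) *slog.Logger {
	if w == nil {
		w = os.Stdout
	}
	opts := &slog.HandlerOptions{Level: level}

	var handler slog.Handler
	if jsonOutput {
		handler = slog.NewJSONHandler(w, opts)
	} else {
		handler = slog.NewTextHandler(w, opts)
	}
	return slog.New(handler)
}

Usage would then look like log := logger.New(slog.LevelInfo, true, nil) followed by log.Info("task delivered", "taskType", "webhook").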

Implementation Guidelines:

Performance: Benchmark and optimize the performance of the logger package, as logging can have a significant impact on the application's speed and resource consumption. You can use the testing and pprof packages to measure and improve the performance of your code.

Error Handling: Decide how to handle errors that may occur during logging, such as writing to a closed or unavailable output destination, or formatting an invalid log message. You can use the errors package to create and handle errors in a consistent and informative way.

Compatibility: Ensure that your logger package is compatible with the existing log and slog packages, as well as other popular logging packages in the Go ecosystem. You can use the slog.Handler interface to adapt different output formats and destinations, and the slog.Logger type to wrap other loggers and add structured fields to them.


Testing: Provide a robust suite of tests to validate the correctness and reliability of the logger package.

Modularity: Design the logger package to be modular and easily extensible, allowing for the addition of new output formats, destinations, or filters in future updates.

Documentation: Provide clear and comprehensive documentation for the logger package, explaining its usage, features, and limitations.

Segment - Recon and discovery

Go Application Source

I have tested the segment service with the go SDK provided here.
I have tested three functions: analytics.Identify, analytics.Page, and analytics.Track.


Identify

You can provide a UserId for a user and then assign different traits to the user, such as name, email, or any other key-value pair.

Request

client.Enqueue(analytics.Identify{
    UserId: "abtinokhovat",
    Traits: analytics.NewTraits().
        SetName("Abtin Okhovat").
        SetEmail("[email protected]").
        Set("plan", "Enterprise").
        Set("friends", 142).
        SetGender("Male").
        SetDescription("This is a test desc"),
})

Segment Raw data (Dashboard)

{
  "context": {
    "library": {
      "name": "analytics-go",
      "version": "3.0.0"
    }
  },
  "integrations": {},
  "messageId": "156bcd23-6d73-4fb2-ae51-84eabdaf130f",
  "originalTimestamp": "2023-10-31T11:53:03.061412+03:30",
  "receivedAt": "2023-10-31T08:23:12.873Z",
  "sentAt": "2023-10-31T08:23:08.062Z",
  "timestamp": "2023-10-31T08:23:07.872Z",
  "traits": {
    "description": "This is a test desc",
    "email": "[email protected]",
    "friends": 142,
    "gender": "Male",
    "name": "Abtin Okhovat",
    "plan": "Enterprise"
  },
  "type": "identify",
  "userId": "abtinokhovat",
  "writeKey": "REDACTED"
}
  • 💡 When identifying a user with a previously identified UserId, only messageId, originalTimestamp, receivedAt, sentAt, and timestamp will be distinct, for obvious reasons.

Track

In the Track function we have to add an Event name and a UserId, and we can add extra key-value pairs in the Properties field.

Request

client.Enqueue(analytics.Track{
    Event:  "Article Bookmarked",
    UserId: "abtinokhovat",
    Properties: analytics.NewProperties().
            Set("title", "Snow Fall").
            Set("subtitle", "The Avalanche at Tunnel Creek").
            Set("author", "John Branch"),
})

Segment Raw Data (Dashboard)

{
  "context": {
    "library": {
      "name": "analytics-go",
      "version": "3.0.0"
    }
  },
  "event": "Article Bookmarked",
  "integrations": {},
  "messageId": "51672282-c097-454f-8de9-d00e0a43016f",
  "originalTimestamp": "2023-10-31T11:53:29.905488+03:30",
  "properties": {
    "author": "John Branch",
    "subtitle": "The Avalanche at Tunnel Creek",
    "title": "Snow Fall"
  },
  "receivedAt": "2023-10-31T08:23:38.069Z",
  "sentAt": "2023-10-31T08:23:34.906Z",
  "timestamp": "2023-10-31T08:23:33.067Z",
  "type": "track",
  "userId": "abtinokhovat",
  "writeKey": "REDACTED"
}
  • 💡 If you don't send the UserId, the event will not land in the dashboard.
  • 💡 If you send a UserId that has not been identified, the event will land in the dashboard but will have the title Unknown user.

Page

The Page function tracks the page views of an application. You can configure the function to have multiple hosts like mobile, web page 1, web page 2, or ... .

Request

client.Enqueue(analytics.Page{
    UserId: "abtinokhovat",
    Name:   "Go Library",
    Properties: analytics.NewProperties().
        SetURL("https://segment.com/libraries/go/"),
})

Segment Raw Data (Dashboard)

{
  "context": {
    "library": {
      "name": "analytics-go",
      "version": "3.0.0"
    }
  },
  "integrations": {},
  "messageId": "2b34b1f5-791e-49b3-a000-783b379e60a1",
  "name": "Go Library",
  "originalTimestamp": "2023-10-31T11:55:39.910113+03:30",
  "properties": {
    "url": "https://segment.com/libraries/go/"
  },
  "receivedAt": "2023-10-31T08:25:50.863Z",
  "sentAt": "2023-10-31T08:25:44.911Z",
  "timestamp": "2023-10-31T08:25:45.862Z",
  "type": "page",
  "userId": "abtinokhovat",
  "writeKey": "REDACTED"
}

Destination Service Requirements

In the context of a Customer Data Platform (CDP), the "Destination" refers to the component responsible for managing the distribution and delivery of customer data to various marketing and analytics tools, databases, or other systems where the data is needed.

The destination enables organizations to make informed decisions, personalize customer experiences, and execute marketing campaigns.

Adding destinations allows you to act on your data and learn more about your customers in real time.

Dependencies of the destination on the core

  1. At the Ormus destination (7) we need to get the event from the core processor with pub/sub or a broker
  2. We need to get the event's destination(s) with the WriteKey from a manager and cache it to reduce the number of calls to the manager

Event Entity/Model

The event is the core of the CDP system, so we should model it carefully. The event should be compatible with Segment.com; the Segment documentation or even the JS SDK / Go SDK might be helpful.
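A rough, Segment-compatible sketch based on the raw payloads shown in the recon section of this document (field names and types are assumptions):

// Event is an illustrative processed-event model.
type Event struct {
	MessageID   string         `json:"messageId"`
	Type        string         `json:"type"` // identify, track, page, ...
	UserID      string         `json:"userId,omitempty"`
	AnonymousID string         `json:"anonymousId,omitempty"`
	Event       string         `json:"event,omitempty"` // for track calls
	Name        string         `json:"name,omitempty"`  // for page calls
	Properties  map[string]any `json:"properties,omitempty"`
	Traits      map[string]any `json:"traits,omitempty"`
	Context     map[string]any `json:"context,omitempty"`
	Timestamp   time.Time      `json:"timestamp"`
	WriteKey    string         `json:"writeKey"`
}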

Implement Distributed Lock Adapter

Currently, our Go project lacks a distributed lock adapter, which is crucial for ensuring synchronization and preventing race conditions when multiple instances of our application are running across different nodes or servers. This issue aims to add a distributed lock adapter that can be easily integrated into our existing codebase, allowing us to use different distributed lock implementations based on our requirements.

  • Research and evaluate existing distributed lock libraries in Go.
  • Design and implement a distributed lock adapter package that abstracts the underlying distributed lock providers and provides a unified interface.
  • Write comprehensive unit tests to ensure the correctness and reliability of the distributed lock adapter.
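One possible shape for the adapter interface (names and method set are suggestions only; provider-specific packages such as Redis or etcd would implement it):

// Locker acquires distributed locks identified by a key.
type Locker interface {
	// Acquire tries to take the lock for the given key and TTL.
	Acquire(ctx context.Context, key string, ttl time.Duration) (Lock, error)
}

// Lock represents a held lock.
type Lock interface {
	// Release frees the lock so other instances can acquire it.
	Release(ctx context.Context) error
}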

update rtx to mise

Coming from rtx

mise was formerly called rtx. The name was changed to avoid confusion with Nvidia's line of graphics cards. This wasn't a legal issue, but just general confusion. When people first hear about the project or see it posted they wouldn't realize it was talking about a CLI tool. It was a bit difficult to search for on Google but also places like Twitter and in Slack searches and things. This was the top complaint about rtx and many people were fairly outspoken about disliking the name for this reason. rtx was supposed to be a working title that I intended to change but never got around to doing. This change should've happened earlier when there were fewer users and I apologize for not having done that sooner knowing that this was likely going to be necessary at some point.

The same tools for the development environment

To avoid conflicts in our development environment, we must all use the same tools.
A tool that gives us such capabilities is rtx.

RTX Features

asdf-compatible - rtx is compatible with asdf plugins and .tool-versions files. It can be used as a drop-in replacement. See below for migration instructions
Polyglot - compatible with any language, so no more figuring out how nvm, nodenv, pyenv, etc work individually—just use 1 tool.
Fast - rtx is written in Rust and is very fast. 20x-200x faster than asdf.
No shims - shims cause problems, they break which, and add overhead. By default, rtx does not use them— however you can if you want to.
Fuzzy matching and aliases - It's enough to just say you want "v20" of node, or the "lts" version. rtx will figure out the right version without you needing to specify an exact version.
Arbitrary env vars - Set custom env vars when in a project directory like NODE_ENV=production or AWS_PROFILE=staging.

add guidelines

When a new teammate joins the team, it's important to provide them with all the necessary information and guidelines to contribute effectively to the project.

I realized this when I forgot to inform one of my new team members about some important steps in the contributing guidelines.

Therefore, I think it is useful to document the various instructions and add them to the project.

Some changes in main function(destination)

Upon reviewing the main function at line 90, I observed the instantiation of a new task coordinator and the creation of task managers, such as the webhook one, inside the creation function.

In my opinion, it would enhance clarity to register task managers in the main function after initializing the task coordinator, but before commencing its operation. This approach can provide a clearer understanding of the process flow.

Considering the current difficulty in comprehending the code structure, I suggest refactoring the main function for improved code readability. This adjustment is necessary as the complexity of the code makes it challenging to grasp its functionality at a glance.
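A rough sketch of the suggested ordering (all package and function names here are hypothetical; the point is only initialize, then register, then start):

func main() {
	eventManager := eventmanager.New()                // hypothetical constructor
	coordinator := taskcoordinator.New(eventManager)  // hypothetical constructor

	// Register task managers explicitly in main, after initialization
	// and before the coordinator starts its operation.
	coordinator.RegisterTaskManager(tasktype.Webhook, webhookmanager.New())

	coordinator.Start()
}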

@iam-benyamin
@gohossein
@PouriaSeyfi

Creating Write Key

Issue:
We are currently working on implementing write key functionality. The write key should include information such as source, destination, and potentially other details. We need to find best practices for how to effectively design and implement this feature.

Details:

  • Objective: Create a unique write key for Ormus CDP system.
  • Information to Include: The write key should encompass details such as source, destination, and any other relevant information.

Questions:

  1. What information does the write key in Ormus typically contain?
  2. How should the write key be encrypted to ensure security?
  3. How can I ensure the uniqueness and security of the generated write keys in the context of Ormus?
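One possible starting point (purely illustrative, not a decided design) is to generate a random, URL-safe key and keep the source/destination association server-side, rather than encoding those details inside the key itself:

package writekey

import (
	"crypto/rand"
	"encoding/base64"
)

// New returns a 32-byte cryptographically random key, base64url-encoded,
// which makes collisions practically impossible and keeps the key opaque.
func New() (string, error) {
	buf := make([]byte, 32)
	if _, err := rand.Read(buf); err != nil {
		return "", err
	}
	return base64.RawURLEncoding.EncodeToString(buf), nil
}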
