anthdm / hollywood

Blazingly fast and lightweight Actor engine written in Golang

License: MIT License

actor-model distributed-systems microservices fault-tolerance golang

hollywood's Introduction


Blazingly fast, low-latency actors for Golang

Hollywood is an ultra-fast actor engine built for speed and low-latency applications. Think game servers, advertising brokers, trading engines, etc. It can handle 10 million messages in under a second.

What is the actor model?

The Actor Model is a computational model used to build highly concurrent and distributed systems. It was introduced by Carl Hewitt in 1973 as a way to handle complex systems in a more scalable and fault-tolerant manner.

In the Actor Model, the basic building block is an actor, sometimes referred to as a receiver in Hollywood, which is an independent unit of computation that communicates with other actors by exchanging messages. Each actor has its own state and behavior, and can only communicate with other actors by sending messages. This message-passing paradigm allows for a highly decentralized and fault-tolerant system, as actors can continue to operate independently even if other actors fail or become unavailable.

Actors can be organized into hierarchies, with higher-level actors supervising and coordinating lower-level actors. This allows for the creation of complex systems that can handle failures and errors in a graceful and predictable way.
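
As a sketch of what such a hierarchy looks like in Hollywood (hedged; ctx.SpawnChild is used this way in the examples, and newWorker is a hypothetical Producer), a parent can spawn supervised children from inside its Receive:

type supervisor struct{}

func (s *supervisor) Receive(ctx *actor.Context) {
	switch ctx.Message().(type) {
	case actor.Started:
		// Children spawned via the context are tied to this actor's
		// lifecycle and are stopped along with their parent.
		// newWorker is a hypothetical Producer for the child actor.
		ctx.SpawnChild(newWorker, "worker-1")
	}
}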

By using the Actor Model in your application, you can build highly scalable and fault-tolerant systems that can handle a large number of concurrent users and complex interactions.

Features

  • Guaranteed message delivery on actor failure (buffer mechanism)
  • Fire & forget or request & response messaging, or both
  • High performance dRPC as the transport layer
  • Optimized protobuf serialization without reflection
  • Lightweight and highly customizable
  • Cluster support for writing distributed, self-discovering actors

Benchmarks

make bench
spawned 10 engines
spawned 2000 actors per engine
Send storm starting, will send for 10s using 20 workers
Messages sent per second 3244217
..
Messages sent per second 3387478
Concurrent senders: 20 messages sent 35116641, messages received 35116641 - duration: 10s
messages per second: 3511664
deadletters: 0

Installation

go get github.com/anthdm/hollywood/...

Hollywood requires Go version 1.21 or later.

Quickstart

We recommend you start out by writing a few examples that run locally. Running locally is simpler, as the compiler can figure out the types used. When running remotely, you'll need to provide protobuf definitions so messages can be serialized.

Hello world.

Let's go through a Hello world message. The complete example is available in the hello world folder. Let's start in main:

engine, err := actor.NewEngine(actor.NewEngineConfig())

This creates a new engine. The engine is the core of Hollywood. It's responsible for spawning actors, sending messages, and handling the lifecycle of actors. If Hollywood fails to create the engine, it returns an error. For development, the default configuration returned by actor.NewEngineConfig() is all you need; we'll look at the options later.

Next we'll need to create an actor. These are sometimes referred to as Receivers, after the interface they must implement. Let's create a new actor that prints a message when it receives one.

pid := engine.Spawn(newHelloer, "hello")

This will cause the engine to spawn an actor with the ID "hello". The actor will be created by the provided function newHelloer. IDs must be unique. Spawn returns a pointer to a PID. A PID is a process identifier: a unique identifier for the actor. Most of the time you'll use the PID to send messages to the actor. Against remote systems you'll use the ID to send messages, but on local systems you'll mostly use the PID.

Let's look at the newHelloer function and the actor it returns.

type helloer struct{}

func newHelloer() actor.Receiver {
	return &helloer{}
}

Simple enough. The newHelloer function returns a new actor. The actor is a struct that implements the actor.Receiver interface. Let's look at the Receive method.

type message struct {
	data string
}

func (h *helloer) Receive(ctx *actor.Context) {
	switch msg := ctx.Message().(type) {
	case actor.Initialized:
		fmt.Println("helloer has initialized")
	case actor.Started:
		fmt.Println("helloer has started")
	case actor.Stopped:
		fmt.Println("helloer has stopped")
	case *message:
		fmt.Println("hello world", msg.data)
	}
}

You can see we define a message struct. This is the message we'll send to the actor later. The Receive method also handles a few other messages: these lifecycle messages are sent by the engine to the actor, and you can use them to initialize your actor.

The engine passes an actor.Context to the Receive method. This context contains the message, the PID of the sender and some other dependencies that you can use.
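
Because the sender's PID travels with the context, an actor can reply directly to whoever messaged it. A minimal sketch, assuming ctx.Respond as used in the request example (responder is a hypothetical Receiver):

func (r *responder) Receive(ctx *actor.Context) {
	switch ctx.Message().(type) {
	case *message:
		// Reply to the sender of the current message.
		ctx.Respond(&message{data: "hello back!"})
	}
}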

Now, let's send a message to the actor. You can send any type of message you want; the only requirement is that the actor must be able to handle it. For messages to cross the wire they must be serializable, and for protobuf to serialize a message it must be a pointer. Local messages can be of any type.

Finally, lets send a message to the actor.

engine.Send(pid, &message{data: "hello world!"})

This will send a message to the actor. Hollywood will route the message to the correct actor. The actor will then print a message to the console.
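
Send is fire & forget. When you need an answer back, the engine also supports request & response. A minimal sketch based on the request example (the exact signature may differ between versions):

// Request returns a response handle; Result() blocks until a reply
// arrives or the timeout fires.
resp := engine.Request(pid, &message{data: "ping"}, time.Second)
res, err := resp.Result()
if err != nil {
	log.Fatal(err) // timed out or delivery failed
}
fmt.Println(res)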

The examples folder is the best place to learn and explore Hollywood further.

Spawning actors

When you spawn an actor you'll need to provide a function that returns a new actor. When the actor is spawned, there are a few tunable options you can provide.

With default configuration

e.Spawn(newFoo, "myactorname")

Passing arguments to the constructor

Sometimes you'll want to pass arguments to the actor constructor. This can be done by using a closure. There is an example of this in the request example. Let's look at the code.

The default constructor will look something like this:

func newNameResponder() actor.Receiver {
	return &nameResponder{name: "noname"}
}

To build a new actor with a name you can do the following:

func newCustomNameResponder(name string) actor.Producer {
	return func() actor.Receiver {
		return &nameResponder{name}
	}
}

You can then spawn the actor with the following code:

pid := engine.Spawn(newCustomNameResponder("anthony"), "name-responder")

With custom configuration

e.Spawn(newFoo, "myactorname",
	actor.WithMaxRestarts(4),
	actor.WithInboxSize(1024*2),
	actor.WithId("bar"),
)

The options should be pretty self-explanatory. You can set the maximum number of restarts, which tells the engine how many times the given actor should be restarted in case of a panic, and the size of the inbox, which sets a limit on how many unprocessed messages the inbox can hold before it starts to block.

As a stateless function

Actors without state can be spawned as a function, which is quick and simple.

e.SpawnFunc(func(c *actor.Context) {
	switch msg := c.Message().(type) {
	case actor.Started:
		fmt.Println("started")
		_ = msg
	}
}, "foo")

Remote actors

Actors can communicate with each other over the network with the Remote package. This works the same as local actors but "over the wire". Hollywood supports serialization with protobuf.

Configuration

remote.New() takes a listen address and a remote.Config struct.

You'll instantiate a new remote with the following code:

tlsConfig := &tls.Config{
	Certificates: []tls.Certificate{cert},
}

config := remote.NewConfig().WithTLS(tlsConfig)
remote := remote.New("0.0.0.0:2222", config)

engine, err := actor.NewEngine(actor.NewEngineConfig().WithRemote(remote))

Look at the Remote actor examples and the Chat client & Server for more information.
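
Once both engines have a remote configured, sending across the wire looks just like local sending. A minimal sketch, assuming actor.NewPID(addr, id) as used in the remote examples (HelloMessage is a hypothetical protobuf-generated type; remote messages must be protobuf-serializable pointers):

// Address a remote actor by its engine's listen address and its ID.
pid := actor.NewPID("0.0.0.0:2222", "hello")
engine.Send(pid, &HelloMessage{Data: "hello from afar"})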

Eventstream

In a production system things will eventually go wrong. Actors will crash, machines will fail, messages will end up in the deadletter queue. You can build software that handles these events in a graceful and predictable way by using the event stream.

The Eventstream is a powerful abstraction that allows you to build flexible and pluggable systems without dependencies.

  1. Subscribe any actor to a list of system events
  2. Broadcast your custom events to all subscribers

Note that events that are not handled by any actor will be dropped. You should have an actor subscribed to the event stream in order to receive events. As a bare minimum, you'll want to handle DeadLetterEvent. If Hollywood fails to deliver a message to an actor it will send a DeadLetterEvent to the event stream.

Any event that fulfills the actor.LogEvent interface will be logged to the default logger, with the severity level, message and the attributes of the event set by the actor.LogEvent log() method.
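
A minimal sketch of wiring this up, assuming engine.Subscribe and engine.BroadcastEvent as used in the eventstream example (newMonitor and myCustomEvent are hypothetical):

// Spawn an actor and subscribe it to the event stream.
monitor := engine.Spawn(newMonitor, "monitor")
engine.Subscribe(monitor)

// Custom events can be broadcast to every subscriber. Events that
// implement actor.LogEvent are additionally logged.
engine.BroadcastEvent(&myCustomEvent{reason: "something happened"})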

List of internal system events

  • actor.ActorInitializedEvent, an actor has been initialized but has not yet processed its actor.Started message
  • actor.ActorStartedEvent, an actor has started
  • actor.ActorStoppedEvent, an actor has stopped
  • actor.DeadLetterEvent, a message was not delivered to an actor
  • actor.ActorRestartedEvent, an actor has restarted after a crash/panic
  • actor.RemoteUnreachableEvent, a message was sent over the wire to a remote that is not reachable
  • cluster.MemberJoinEvent, a new member joined the cluster
  • cluster.MemberLeaveEvent, a member left the cluster
  • cluster.ActivationEvent, a new actor is activated on the cluster
  • cluster.DeactivationEvent, an actor is deactivated on the cluster

Eventstream example

There is an eventstream monitoring example which shows you how to use the event stream. It features two actors: one is unstable and will crash every second, and the other is subscribed to the event stream and maintains a few counters for different events such as crashes.

The application will run for a few seconds and then poison the unstable actor. It then queries the monitor with a request. Since actors float around inside the engine, this is how you interact with them. main then prints the result of the query and the application exits.

Customizing the Engine

We're using the functional option pattern. Engine options are set on the configuration returned by actor.NewEngineConfig(). Currently, the main option is to provide a remote. This is done by

r := remote.New(addr, remote.NewConfig())
engine, err := actor.NewEngine(actor.NewEngineConfig().WithRemote(r))

addr is a string with the format "host:port".

Middleware

You can add custom middleware to your Receivers. This can be useful for storing metrics, saving and loading data for your Receivers on actor.Started and actor.Stopped.

For examples of how to implement custom middleware, check out the middleware folder in the examples.
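
To give a flavor, here is a minimal sketch assuming the actor.WithMiddleware option and the func(actor.ReceiveFunc) actor.ReceiveFunc shape used in that folder (withLogging and newFoo are illustrative):

// withLogging wraps an actor's Receive function and logs every message.
func withLogging(next actor.ReceiveFunc) actor.ReceiveFunc {
	return func(c *actor.Context) {
		fmt.Printf("%s received %T\n", c.PID(), c.Message())
		next(c)
	}
}

pid := engine.Spawn(newFoo, "foo", actor.WithMiddleware(withLogging))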

Logging

Hollywood has some built-in logging. It uses the default logger from the log/slog package. You can configure logging to your liking by setting the default logger with slog.SetDefault(). This allows you to customize the log level, format, and output. Please see the slog package for more information.
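
For example, to raise the level and switch to JSON output (plain log/slog, nothing Hollywood-specific):

logger := slog.New(slog.NewJSONHandler(os.Stderr, &slog.HandlerOptions{
	Level: slog.LevelWarn, // hide INFO-level actor lifecycle noise
}))
slog.SetDefault(logger)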

Note that some events might be logged to the default logger, such as DeadLetterEvent and ActorStartedEvent as these events fulfill the actor.LogEvent interface. See the Eventstream section above for more information.

Test

make test

Community and discussions

Join our Discord community with over 2000 members for questions and a nice chat.

Used in Production By

This project is currently used in production by the following organizations/projects:

License

Hollywood is licensed under the MIT license.

hollywood's People

Contributors

andrejacobs, anthdm, ar1011, godofprodev, igumus, lrweck, mbaitar, mczechyra, perbu, tprifti, valentinmontmirail, yarcat


hollywood's Issues

How to directly stop an actor?

We can send a Poison message to any actor, but that won't immediately kill the actor.

What if we want to immediately stop the actor, ignoring all messages in the queue?
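
For context, later versions grew an engine.Stop alongside engine.Poison. A hedged sketch of the difference, assuming both return a *sync.WaitGroup (signatures may vary by version):

engine.Poison(pid).Wait() // graceful: the remaining inbox is processed first
engine.Stop(pid).Wait()   // immediate: queued messages are ignored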

[Suggestion] Using deadletter instead of engine directly in Registry

Hi @anthdm,

Looking at the Registry struct in registry.go: instead of depending on *Engine directly to acquire the deadletter (which is a Processer), we can pass it in directly:

type Registry struct {
	lookup     *safemap.SafeMap[string, Processer]
	deadletter Processer
}

func newRegistry(deadletter Processer) *Registry {
	return &Registry{
		deadletter: deadletter,
		lookup:     safemap.New[string, Processer](),
	}
}

func (r *Registry) get(pid *PID) Processer {
	if proc, ok := r.lookup.Get(pid.ID); ok {
		return proc
	}
	return r.deadletter
}

If you approve, I'd like to update it.

Kill the global variable 'pidSeparator'

The pidSeparator is a global variable. We should get rid of it and either:

  • replace it with a hard-coded const "/".
  • replace it with a struct variable on the engine (which is a tad bit harder)

We should also document that you cannot use the PID separator in actor names, and check that the supplied actor name doesn't contain a / (see the sketch below).

This is a good first task.
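
A minimal sketch of the name check, assuming a hard-coded "/" separator (validateName and the error wording are illustrative):

// validateName rejects actor names that would collide with the PID separator.
func validateName(name string) error {
	if strings.Contains(name, "/") {
		return fmt.Errorf("actor name %q must not contain the PID separator %q", name, "/")
	}
	return nil
}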

Consider having a clusterprovider which uses mdns.

Service discovery by mdns seems useful. Within a network, having nodes in the same cluster automatically find each other solves a rather complex issue which would otherwise require consul or similar tooling.

See if we can have a cluster provider which uses mdns zeroconf.

Adding comments and docs to examples

As a junior, I am struggling to understand terms like producer, PID, etc. It would be great if we could add more basic information and documentation!

Remove redis dependency

The examples pollute the go.mod file a bit. If we keep adding examples, the go.mod file will bloat further. Suggested repository name: hollywood-examples.

actor.poisonPill should not reach the actor

Currently, when you poison an actor, it will actually receive an actor.poisonPill{}. As the type is private, the actor can't handle it without reflection.

As the message is internal, it should be suppressed by the engine, so it never reaches the actor itself.

Could potentially be done in invokeMsg().

Actor subscribe to event stream

It would be awesome if any actor could subscribe to the global event stream. This could become a nice broadcast/signalling channel for all running children.

The challenge is: how to access it.

In the eventStream example, the code subscribes directly to the event stream by calling a function of the engine. However, a child only knows its own PID, and does not have direct access to the engine.

One awkward way would be to subscribe when the Receiver gets a started message:

func (state *a) Receive(ctx *actor.Context) {
	switch ctx.Message().(type) {
	case *actor.Started:
		ctx.Engine().EventStream.Subscribe(...)
	}
}

Is this good enough, or do we want something more specific?

cluster support

I'm curious about what is meant by "Cluster support [coming soon]". What would it enable that is currently not possible?

TIA, Per.

using QUIC and queuing question?

Hello All,

I am new to the Actor Model engine but I think that I have a rough idea as to what it is and how it works. (still learning though)

I am building out a massively scaling P2P project (Windows 10 x86), and for each node I was thinking of having a Hollywood node that would receive messages, queue them if needed, and process them from the queue in turn. It seems that Hollywood might be able to handle this with great success from what I have seen so far of the benchmarks and from running some of the examples, although I still need to investigate and get some more guidance on this.

The other thing I am wondering about is whether there are any examples of Hollywood using QUIC, like you have in examples/tcpserver or similar. Are there any examples or ideas on how this might be achieved?

Thanks and have a great day

How can I horizontally scale it?

I'm using Hollywood to build a game server, thanks to your inspiring video. But I don't know how I can scale it horizontally.

Registry.getByID can result in a panic

If the PIDSeparator is changed (let's say to ">") and a call to .getByID("foo/bar") is made, it results in a panic.

panic: runtime error: invalid memory address or nil pointer dereference [recovered]
	panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x2 addr=0x20 pc=0x100a47368]

The easiest way to reproduce is to take registry_test/TestGetByName and change the PIDSeparator and/or the getByID argument.

	PIDSeparator = ">"
...
	// local/foo/bar/q/1
	e.SpawnFunc(func(c *Context) {}, "foo", WithTags("bar", "q", "1"))
	time.Sleep(time.Millisecond * 10)
	proc = e.Registry.getByID("foo/bar/q/1")

No way of shutting down a remote.

While benchmarking, I found that I was missing a way to shut down a Remote. Currently, once it has started listening, there is no way (that I know of) to shut it down.

I suggest we add a way to shut down the remote. This might expose other bugs, so I think it is a good idea. It'll also enable some higher-level tests; for instance, we would see if we're leaking goroutines and such.

Add functionality to Cluster to retrieve PIDs

Just like the Registry on local engines, the cluster tracks all actors and kinds that are available cluster-wide. Expose a function on the cluster that can retrieve actors/PIDs by ID and kind.

Add more creative examples.

If you feel brave and bored, feel free to add some examples.

Ideas:

  • web socket session handler
  • game server
  • Redis receiver/actor persistence middleware and loading on started and stopped
  • Prometheus instrumentation middleware

completed:

  • TCP server
  • Chat

document how to pass arguments to actor creation.

A user might wonder how to pass arguments when spawning an actor. This is how you do it:

func newPlayerState(health int, username string) actor.Producer {
    return func() actor.Receiver {
        return &PlayerState{
            Health:   health,
            Username: username,
        }
    }
}

Idea: replace logging with a system event actor

We've had this discussion, which was quite interesting.

Logging is a bit ugly. But having debug messages, system events and failures accessible somewhere is important for production use.

So, we should think about ripping out the logging and replacing it with a system event actor. This could look roughly like this:

We define a type SystemEvent, similar to DeadletterEvent{}. It should have the following fields:

  • source (string?)
  • message (string)
  • severity (debug, info, warning, error)
  • key/value set for arbitrary fields (are there performance implications here? Perhaps it should be a slice, like slog uses, to avoid the overhead of creating a map)

Now, when something happens, where we currently log something, a SystemEvent message is sent to the actor. Out of the box Hollywood can ship with a simple actor which just logs these messages to stdout.

System events we want to handle:

  • actor started or stopped
  • remote started or stopped
  • cluster activation event
  • cluster deactivation event
  • cluster member join
  • cluster member leave

Now, if a user wants to handle these messages in a custom way, they can just supply an actor and do whatever they please with it.

Memory leak with inboxes

First of all, great concept for a library! I recently used it in a quick tool I needed, so I figured I would stretch its legs a bit.

What I discovered is that in a roughly 200-actor system with 8k inboxes and a rough inbox rate of 1-100 messages per actor per second, after about 8 hours the application would be consuming around 2 GB of memory.

I did quite a bit of profiling, and at first guess, things that are going into the ggq are not getting garbage collected at all.

I copied the lib into my application, swapped out the inbox ggq implementation with a channel, and the memory leak disappeared.

Sorry I didn't spend much more time digging deeper than that; I needed to get that tool stable.

I was able to replicate this consistently over the last week with the same load: about 8 hours in, my app would be near 2 GB of memory, with a slow climb nearing the total size of the messages going in.

replace safemap package

Use sync.Map instead. It should have better performance and scalability than a mutex-guarded map. It isn't typesafe at the moment, but that will hopefully come around soon.
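
A minimal sketch of what a type-safe wrapper over sync.Map could look like with generics (illustrative only, not the current safemap API):

type SafeMap[K comparable, V any] struct {
	m sync.Map
}

func (s *SafeMap[K, V]) Set(k K, v V) { s.m.Store(k, v) }

func (s *SafeMap[K, V]) Get(k K) (V, bool) {
	v, ok := s.m.Load(k)
	if !ok {
		var zero V
		return zero, false
	}
	return v.(V), true
}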

Import Error: package log/slog not in GOROOT (/usr/local/go/src/log/slog)

I'm encountering an issue while importing the package github.com/anthdm/hollywood/log. The error message suggests that the package log/slog is not found in GOROOT (/usr/local/go/src/log/slog).

Environment:

  • Go version: 1.19
  • Operating System: mac os

Error Message:
error while importing github.com/anthdm/hollywood/log: package log/slog is not in GOROOT (/usr/local/go/src/log/slog)

v1.0, impose consistency on constructor functions

All constructor functions should act somewhat similarly. That is:

All mandatory arguments are given directly. Optional arguments are passed in a pointer to a struct.

The exception is actor creation. Here the functional option pattern seems to work really well.

Default log level should be debug.

Currently most of the log levels are INFO. This is pretty annoying for users who are also using slog and don't want to pollute their logs with "actor started" stuff.

Message Initialized{} seems to be unused.

From my testing, I never seem to get the Initialized{} message when I spin up an actor. I get Started{} and Stopped{}.

Is this an oversight or am I missing something?

Per.

Releases/Versioning

Are you planning on setting up a release pipeline so the library can have versioning? Not sure if you are waiting until after the alpha rollout or whatnot, but whenever you do want it I could set it up for you. Super simple and EZ.

examples/tcpserver goroutine leak

The tcpserver (examples/tcpserver) crashed during a stress test.
The number of goroutines does not go down after the TCP client disconnects.

Ability to grab all PIDs in the registry

It would make things less complicated in my repos if I had a couple more public functions for reading the registry. Right now, only being able to get active PID info by inputting the specific ID I'm looking for isn't too helpful for my case: I need the ability to check what the active PIDs are on request. Using the event stream to monitor this just bloats my code, since I'd need to create my own mapping replicating the private one. I understand it is private for mutex reasons, but these functions would be very helpful. I have used them in my own fork with no issues, and I think they could be useful to others as well.

registry.go

...

func (r *Registry) GetIDs() []string {
	r.mu.RLock()
	defer r.mu.RUnlock()
	keys := make([]string, 0, len(r.lookup))
	for k := range r.lookup {
		keys = append(keys, k)
	}
	return keys
}

func (r *Registry) GetPIDs() []*PID {
	r.mu.RLock()
	defer r.mu.RUnlock()
	keys := make([]*PID, 0, len(r.lookup))
	for _, v := range r.lookup {
		keys = append(keys, v.PID())
	}
	return keys
}

...

[ACTOR-MODEL] Race condition occurs when adding/deleting from map in actor state.

Hi, @anthdm
I'm confused about this. I have logic for a storage map:

1-Adding to the map.
2-Deleting from the map.

Green side: no race condition; the logic is split into two cases.

Red side: causes a race condition; the logic stays in one case.

So each case only processes one piece of logic at a time, yet one of them is causing a race condition.


The old code which raises the race condition is here:
https://github.com/abdullahb53/goRoutines/tree/main/hollywood-ws/websocket

The code without the race condition is here: #29




Code:

// It stores goblin processes and their corresponding websocket identities.
// It also broadcasts messages to all goblin processes.
func (f *hodorStorage) Receive(ctx *actor.Context) {
	switch msg := ctx.Message().(type) {
	case actor.Started:
		fmt.Println("[HODOR] storage has started, id:", ctx.PID().ID)

	//----------------------------
	//-----CAUSES RACE COND.------
	//----------------------------
	case *letterToHodor:
		fmt.Println("[HODOR] message has received from:", ctx.PID())

		// Delete the incoming websocket value -
		// or add a new websocket and goblin-process_id pair.
		if msg.drop {
			delete(f.storage, msg.ws)
		} else {
			f.storage[msg.ws] = msg.pid
		}

	//------------------------------
	// RACE-COND. IS SOLVED IN HERE.
	//------------------------------
	//
	// // Delete the incoming websocket value.
	// case *deleteFromHodor:
	// 	delete(f.storage, msg.ws)

	// // Add a new websocket and goblin-process pair.
	// case addToHodor:
	// 	f.storage[msg.ws] = msg.pid
	//
	//-----------------------------
	// RACE-COND. IS SOLVED IN HERE.
	//-----------------------------

Errors: (race detector output attached as screenshots race1 and race2)


Test file:

func TestHandleWebsocket(t *testing.T) {

	// Create a new engine to spawn Hodor-Storage..
	engine = actor.NewEngine()
	hodorProcessId = engine.Spawn(newHodor, "HODOR_STORAGE")

	s := httptest.NewServer(HandleFunc(GenerateProcessForWs))
	defer s.Close()

	u := "ws" + strings.TrimPrefix(s.URL, "http")
	println(u)

	// Connect to the server
	ws, err := websocket.Dial(u, "", u)
	if err != nil {
		t.Fatalf("%v", err)
	}
	defer ws.Close()

	go func(ws *websocket.Conn) {
		for {
			buf := make([]byte, 1024)
			_, err = ws.Read(buf)
			if err != nil {
				continue
			}
			// println(string(buf))
		}
	}(ws)

	for i := 0; i < 10; i++ {
		_, err := ws.Write([]byte("Some text."))
		if err != nil {
			t.Fatalf("WS Write Error: %v", err)
		}
	}

}

Calling request without waiting for the response will lock up

I was doing some casual benchmarking, as one does. I was trying to measure latency, so I was using requests. I forgot to wait for the response. This seems to cause the engine to lock up after about 500 requests.

I'm not sure if this is fixable, but we should at least document it.

How to make sure all messages are sent

I'm polishing up the chat example, adding some logging and fixing some issues. One thing I've noticed is that the disconnect event doesn't reach the server when the client shuts down.

I can ofc add a time.Sleep(), but that would be lame.

How do I flush out all messages so we can shutdown cleanly?

Hollywood speeds

Hello,

I am looking over many libraries for this P2P project and I think that Hollywood could be an awesome solution, but have a few questions that I hope you will answer for me.

Some similar libraries seems to be:

Hollywood
libp2p-go
ZeroMQ

At the core of the project idea is that this will be a massive P2P database such that maybe each node will have an SQLite (or maybe Immudb) and/or some KV instance running and the nodes can pass data in message blocks as needed. Data can be relayed through node neighborhoods to other nodes that are not in the neighborhood to reach their destination.

Effectively, the user will be able to write data globally or locally to the network without having to worry about where the data is stored, knowing that it is encrypted and striped across nodes like you might find in various RAID levels.

Although I am still researching this idea, I think that message passing may be the best approach, but I also need the fastest and most reliable core to start with. While libp2p offers node discovery, I think Hollywood or ZeroMQ (which I do not think have node discovery) could be much faster, but I could be wrong.

Any ideas or thoughts on this?

Thanks

SafeMap Len() is Linear Time Instead of Constant and ForEach() is Unsafe

While reading through the safemap package I noticed that Len() is being calculated with the sync.Map function s.data.Range(). It looks like this was previously constant time prior to this PR: https://github.com/anthdm/hollywood/pull/63/files because len() is constant for a traditional map.

It's worth noting that (sync.Map).Range() isn't safe since it's non-blocking, meaning (SafeMap[K,V]).ForEach() isn't either. Therefore it's not guaranteed to return an accurate length or an accurate slice of []*PID for the children in actor.Context.

The (sync.Map).Range() function comments state:
// Range does not necessarily correspond to any consistent snapshot of the Map's contents: no key will be visited more than once, but if the value for any key is stored or deleted concurrently (including by f), Range may reflect any mapping for that key from any point during the Range call. Range does not block other methods on the receiver; even f itself may call any method on m.
So this would lead to a common concurrency bug: https://en.wikipedia.org/wiki/Time-of-check_to_time-of-use.

Since safemap is used by children in actor.Context, if the number of children is sufficiently large, then calculating length will be slow and likely inaccurate if children are changing.

The performance gains of #63 traded off safety for speed. Depending on your thoughts, it might be worth rolling this back to have a lock in SafeMap.

Is the use case for children aligned with the optimization of sync.Map? It seems like it isn't. Note that according to the docs the sync.Map "type is optimized for two common use cases: (1) when the entry for a given key is only ever written once but read many times, as in caches that only grow, or (2) when multiple goroutines read, write, and overwrite entries for disjoint sets of keys. In these two cases, use of a Map may significantly reduce lock contention compared to a Go map paired with a separate Mutex or RWMutex."

how remote shutdown should look

This ticket should really be a comment on #74. But I was in a hurry, so here it is.

#76 tries to supply a way for the user to shut down a remote. This has one issue: we don't know when we are actually done shutting down the remote.

@anthdm suggested the following:

  • We add a Stop() method on the remote
  • Stop returns a channel that can be used to wait (if the user wants) until the remote has actually shut down
  • Something like that
  • So I would create the context internally and save the cancel function inside the remote
  • When we call Stop() we cancel(), and return a channel
  • We just close the channel after s.Serve() returns

I will try to whip #76 into shape and take the feedback into account.

replace logrus with slog?

Would you be interested in a PR which gets rid of logrus in favor of the stdlib slog? It would make go.mod one line nicer. :-)

Beautiful stack traces on actor panic

Problem:
When an actor (receiver) panics it dumps out the "normal" Go stack trace.

  1. the stack contains unnecessary callers
  2. it's annoying

Solution:
Custom trace by parsing the given stack trace?
