This is a test of the distributed-process library on a "real-life" task.
Because this is a test solution, the node list is kept as a simple list in app/src/NodeList.hs. Please edit it before running!
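A hypothetical sketch of what that file might contain; the endpoints below are placeholders, and the exact shape of the binding is an assumption based on the `[(host, port)]` note in the checklist:

```haskell
-- Hypothetical sketch of NodeList.hs contents; the hosts and ports
-- are placeholders and must be edited to match your cluster.
nodeList :: [(String, String)]
nodeList =
  [ ("10.0.1.12", "4001")
  , ("10.0.1.13", "4001")
  ]
```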
To run the program, specify the host and port to bind to:
stack exec app -- --host=10.0.1.12 --port=4001
- Each message contains a deterministic random number n ∈ (0, 1].
- The distribution of the random numbers is not specified. Since the goal is a higher value, the optimal distribution could as well be \delta(x-1) (always emit 1).
- |m| behaves like a norm. The sum uses it as the length of the list, which is approximately the l0-norm, although messages with a zero value are excluded by the RNG definition (n > 0).
- We need to keep our own node list, so we cannot use the SimpleLocalnet backend.
- By the specification, the calculating process should stop waiting after the sender processes have finished. How do we coordinate them if they did not start at the same moment? There are two ways to solve it:
- Stop the send loop after the send-for period, read all remaining messages from the mailbox, and only then print the result. (chosen solution)
- Coordinate the senders by message passing (register sender, unregister sender) and stop the receive loop once all senders are unregistered. However, the spec defines a wait-for time after which the process should be killed, so this approach is overly complicated and would require clarification of the spec.
- There are two realistic transport options: TCP or UDP. There is a TCP library with reasonable defaults. I was not able to find a UDP library that would avoid TCP delivery confirmations, even though the "nsend" spec says the function never fails.
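The tuple computation mentioned in the notes can be sketched as a monoid. This is a hypothetical pure sketch (the type and function names are my own, not from the solution): to combine two partial results associatively, the accumulator has to carry the plain sum of values in addition to |m| and the weighted sum, because appending a second batch shifts its indices by the first batch's length.

```haskell
-- Score carries (|m|, sum of m_i, sum of i*m_i); the plain sum is
-- what makes the combine operation associative.
data Score = Score
  { count    :: !Int     -- |m|, the number of messages
  , plainSum :: !Double  -- sum of m_i
  , weighted :: !Double  -- sum of i*m_i with 1-based indices
  } deriving (Show, Eq)

instance Semigroup Score where
  Score n1 p1 w1 <> Score n2 p2 w2 =
    -- the second batch's indices are shifted by n1, which
    -- contributes an extra n1 * (sum of its values)
    Score (n1 + n2) (p1 + p2) (w1 + w2 + fromIntegral n1 * p2)

instance Monoid Score where
  mempty = Score 0 0 0

-- Score of a single received message value.
single :: Double -> Score
single x = Score 1 x x

-- Score of a whole (ordered) message list.
scoreOf :: [Double] -> Score
scoreOf = foldMap single
```

For example, `scoreOf [0.5, 0.25, 0.25]` gives `Score 3 1.0 1.75`, matching 1·0.5 + 2·0.25 + 3·0.25.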
- Command line arguments √
- --send-for k
- --wait-for l
- --with-seed s
- --some parameters for distributed-process startup
- [(host, port)]
- Node list is embedded into NodeList.hs √
- Random number generators
- Mersenne-Twister √
- Messages
- Message data type √
- Calculation of the result tuple (it looks like a monoid computation) √
- Message sending code √
- Communication between nodes
- distributed-process (study!) √
- obviously there are two kinds of nodes: the "several" sending nodes and the "other nodes"
- several nodes send messages
- every message reaches every node
- Debug configuration
- To stderr only √
- monad-logger √
- say in CloudHaskell is also used √
- Testing
- The larger your score is, the better.
- Properties to test: message processing.
- Failure tests
- Other considerations
- Given that the RNG is uniformly distributed and the result is (N, S), then E[S] = N * (N+1) / 4. The standard deviation is a slightly more involved computation.
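The expectation follows from linearity, and the variance is not much harder, assuming the m_i are i.i.d. uniform on (0, 1] (so E[m_i] = 1/2 and Var[m_i] = 1/12) and S = Σ i·m_i:

```latex
E[S] = \sum_{i=1}^{N} i \, E[m_i]
     = \frac{1}{2} \cdot \frac{N(N+1)}{2}
     = \frac{N(N+1)}{4}

\mathrm{Var}[S] = \sum_{i=1}^{N} i^2 \, \mathrm{Var}[m_i]
               = \frac{1}{12} \cdot \frac{N(N+1)(2N+1)}{6}
               = \frac{N(N+1)(2N+1)}{72}
```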