
devyukine / kurasuta

A Custom discord.js Sharding Library inspired by eris-sharder.

License: MIT License

JavaScript 2.43% TypeScript 97.57%
cluster discord discord-library discordjs discordjs-sharder sharder

kurasuta's People

Contributors

bigbrainafk, deivu, dependabot-preview[bot], dependabot[bot], devyukine, extremedevelopments, kyranet, optimisticside, quantumlyy, tylertron1998, vexyr


kurasuta's Issues

List of events

Is there a list of events we can listen to, such as `clusterReady`?

this.socket.off is not a function

I'm getting the error `TypeError: this.socket.off is not a function`. I don't know which file triggers it, but these are my files:

```
const { BaseCluster } = require('kurasuta');
const { Token } = require('./Config/AspektConfig');

module.exports = class extends BaseCluster {
	launch() {
		this.client.login(Token);
	}
};
```

and

```
// Packages
const { ShardingManager } = require('kurasuta');
const { join } = require('path');

// Config
const { Token } = require('./Config/AspektConfig');

const sharder = new ShardingManager(join(__dirname, 'AspektStruct'), {
	token: Token,
	clusterCount: 1,
	guildsPerShard: 100,
	respawn: true
});

sharder.spawn();
```

Cluster 0 times out and then becomes ready

After upgrading to 0.2.17 this started happening: Cluster 0 becomes ready after the manager reports it took too long to become ready. What's happening here?

[3/22/2019] [4:59:11 AM] [shard manager]    debug Shard 0 became ready
[3/22/2019] [5:00:50 AM] [shard manager]    debug Shard 2 became ready
[3/22/2019] [5:01:06 AM] [shard manager]    fatal An error occurred when spawning clusters
[3/22/2019] [5:01:06 AM] [shard manager]    error Error: Cluster 0 took too long to get ready
    at Timeout.setTimeout [as _onTimeout] (/usr/src/dice/node_modules/.registry.npmjs.org/kurasuta/0.2.17/node_modules/kurasuta/dist/Cluster/Cluster.js:72:37)
    at ontimeout (timers.js:436:11)
    at tryOnTimeout (timers.js:300:5)
    at listOnTimeout (timers.js:263:5)
    at Timer.processTimers (timers.js:223:10)
[3/22/2019] [5:01:25 AM] [shard manager]    debug Shard 1 became ready
[3/22/2019] [5:01:34 AM] [shard manager]    debug Shard 3 became ready
[3/22/2019] [5:01:39 AM] [shard manager]    debug Cluster 0 became ready

This session would have handled too many guilds

So I'm not sure if I'm doing something wrong, but even the base code from the example doesn't work for me.

It calculates the required number of shards correctly but immediately throws an error:
[screenshot of the error]

Invalid array length

(node:776) UnhandledPromiseRejectionWarning: RangeError: Invalid array length
    at ShardingManager.spawn (/home/runner/tuneybot/node_modules/kurasuta/dist/Sharding/ShardingManager.js:71:36)
    at /home/runner/tuneybot/index.js:18:9
    at Script.runInContext (vm.js:130:18)
    at Object.<anonymous> (/run_dir/interp.js:209:20)
    at Module._compile (internal/modules/cjs/loader.js:1137:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1157:10)
    at Module.load (internal/modules/cjs/loader.js:985:32)
    at Function.Module._load (internal/modules/cjs/loader.js:878:14)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:71:12)
    at internal/main/run_main_module.js:17:47
(node:776) UnhandledPromiseRejectionWarning: Unhandled promise rejection. This error originated either by throwing inside of an async function without a catch block, or by rejecting a promise which was not handled with .catch(). To terminate the node process on unhandled promise rejection, use the CLI flag `--unhandled-rejections=strict` (see https://nodejs.org/api/cli.html#cli_unhandled_rejections_mode). (rejection id: 1)
(node:776) [DEP0018] DeprecationWarning: Unhandled promise rejections are deprecated. In the future, promise rejections that are not handled will terminate the Node.js process with a non-zero exit code.
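A note for readers: `RangeError: Invalid array length` comes from constructing an array with a negative, fractional, or NaN length, so when `ShardingManager.spawn` throws it, a numeric option (cluster or shard count) has usually ended up as a bad number. A hypothetical guard (not part of kurasuta itself; the option names are taken from other issues on this page) for the values handed to the manager:

```javascript
// Validate the numeric options before passing them to the manager.
// Throws the same RangeError class the crash reports, but with a
// readable message pointing at the bad option.
function checkCounts(options) {
  for (const name of ['clusterCount', 'shardCount', 'guildsPerShard']) {
    const n = options[name];
    if (n !== undefined && (!Number.isInteger(n) || n <= 0)) {
      throw new RangeError(`${name} must be a positive integer, got ${n}`);
    }
  }
  return options;
}

checkCounts({ clusterCount: 2, guildsPerShard: 100 }); // passes silently
```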

Issue

Hey, whenever I use `guildsPerShard: 500`, my bot starts sending responses twice: if I use the help command, it registers as having been used 2 times. I don't know why.

Errors causing my bot to restart.

I have two errors, and they are constantly restarting my bot.
Errors:

events.js:298
      throw er; // Unhandled 'error' event
      ^
MessageError: Failed to process message during connection, calling disconnect: Unknown type received: 1 [UnknownType]
    at Object.makeError (/root/bot/node_modules/veza/dist/lib/Structures/MessageError.js:33:16)
    at ServerSocket._onData (/root/bot/node_modules/veza/dist/lib/ServerSocket.js:93:62)
    at Socket.emit (events.js:321:20)
    at addChunk (_stream_readable.js:297:12)
    at readableAddChunk (_stream_readable.js:273:9)
    at Socket.Readable.push (_stream_readable.js:214:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:186:23)
Emitted 'error' event on ShardingManager instance at:
    at MasterIPC.<anonymous> (/root/bot/kurasuta/dist/Sharding/ShardingManager.js:41:42)
    at MasterIPC.emit (events.js:321:20)
    at Server.<anonymous> (/root/bot/kurasuta/dist/IPC/MasterIPC.js:15:40)
    at Server.emit (events.js:321:20)
    at ServerSocket._onData (/root/bot/node_modules/veza/dist/lib/ServerSocket.js:93:33)
    at Socket.emit (events.js:321:20)
    [... lines matching original stack trace ...]
    at TCP.onStreamRead (internal/stream_base_commons.js:186:23) {
  kind: 'UnknownType'
}

and

events.js:298
      throw er; // Unhandled 'error' event
      ^
MessageError: Failed to parse message: Found End-Of-Buffer, expecting a `NullTerminator` before. [UnexpectedEndOfBuffer]
    at Object.makeError (/root/bot/node_modules/veza/dist/lib/Structures/MessageError.js:33:16)
    at ClientSocket._onData (/root/bot/node_modules/veza/dist/lib/ClientSocket.js:93:58)
    at Socket.emit (events.js:321:20)
    at addChunk (_stream_readable.js:297:12)
    at readableAddChunk (_stream_readable.js:273:9)
    at Socket.Readable.push (_stream_readable.js:214:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:186:23)
Emitted 'error' event on ClusterIPC instance at:
    at Client.<anonymous> (/root/bot/kurasuta/dist/IPC/ClusterIPC.js:14:40)
    at Client.emit (events.js:321:20)
    at ClientSocket._onData (/root/bot/node_modules/veza/dist/lib/ClientSocket.js:93:29)
    at Socket.emit (events.js:321:20)
    [... lines matching original stack trace ...]
    at TCP.onStreamRead (internal/stream_base_commons.js:186:23) {
  kind: 'UnexpectedEndOfBuffer'
}
events.js:298
      throw er; // Unhandled 'error' event
      ^
Error: read ECONNRESET
    at TCP.onStreamRead (internal/stream_base_commons.js:205:27)
Emitted 'error' event on ShardingManager instance at:
    at MasterIPC.<anonymous> (/root/bot/kurasuta/dist/Sharding/ShardingManager.js:41:42)
    at MasterIPC.emit (events.js:321:20)
    at Server.<anonymous> (/root/bot/kurasuta/dist/IPC/MasterIPC.js:15:40)
    at Server.emit (events.js:321:20)
    at ServerSocket._onError (/root/bot/node_modules/veza/dist/lib/ServerSocket.js:106:21)
    at Socket.emit (events.js:321:20)
    at emitErrorNT (internal/streams/destroy.js:84:8)
    at processTicksAndRejections (internal/process/task_queues.js:84:21) {
  errno: -104,
  code: 'ECONNRESET',
  syscall: 'read'
}

How can I solve it?

[Need support] broadcastEval("this.guilds.cache") not working

Hello there! I love using kurasuta for sharding; keep up the good work!
I've noticed that `broadcastEval("this.guilds.cache")` always throws an error saying

Error: Unsupported type 'function'.

But when I asked someone who uses the ShardingManager from discord.js directly, it works fine for them:
[screenshot of the working output]
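A note for readers: kurasuta's IPC serializes the eval result with binarytf (see the stack traces in other issues here), which rejects functions, while discord.js's own manager tolerates more. Every Guild object in `guilds.cache` carries methods, hence the error. The usual fix, as a hedged suggestion, is to return plain data, e.g. `broadcastEval("this.guilds.cache.map(g => ({ id: g.id, name: g.name }))")`. A toy predicate showing why one expression fails and the other survives:

```javascript
// Toy stand-in for a serializer that, like binarytf, refuses functions:
// walks a value and reports whether every reachable part is plain data.
function isSerializable(value) {
  if (typeof value === 'function') return false;
  if (value !== null && typeof value === 'object') {
    return Object.values(value).every(isSerializable);
  }
  return true;
}

const guildLike = { id: '1', name: 'Guild', leave: () => {} }; // has a method
const plain = { id: '1', name: 'Guild' };                      // plain data

isSerializable(guildLike); // false -> "Unsupported type 'function'"
isSerializable(plain);     // true  -> safe to return from broadcastEval
```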

Unhandled Rejection: Error [ERR_STREAM_DESTROYED]: Cannot call write after a stream was destroyed

Sometimes it throws this error.

(node:16383) UnhandledPromiseRejectionWarning: Error [ERR_STREAM_DESTROYED]: Cannot call write after a stream was destroyed
    at doWrite (_stream_writable.js:399:19)
    at writeOrBuffer (_stream_writable.js:387:5)
    at Socket.Writable.write (_stream_writable.js:318:11)
    at NodeMessage.reply (/root/bot/node_modules/veza/dist/lib/Structures/NodeMessage.js:28:32)
    at MasterIPC._broadcast (/root/bot/node_modules/kurasuta/dist/IPC/MasterIPC.js:45:21)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:97:5)

What is it caused by?

error

C:\Users\Fabricio\Documents\GitHub\Fumiko\node_modules\kurasuta\dist\Cluster\BaseCluster.js:30
const shards = env.CLUSTER_SHARDS.split(',').map(Number);
^

TypeError: Cannot read property 'split' of undefined
    at new BaseCluster (C:\Users\Fabricio\Documents\GitHub\Fumiko\node_modules\kurasuta\dist\Cluster\BaseCluster.js:30:43)
    at new FumikoClient (C:\Users\Fabricio\Documents\GitHub\Fumiko\Structures\FumikoClient.js:9:3)
    at Object.<anonymous> (C:\Users\Fabricio\Documents\GitHub\Fumiko\index.js:6:16)
    at Module._compile (internal/modules/cjs/loader.js:1076:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1097:10)
    at Module.load (internal/modules/cjs/loader.js:941:32)
    at Function.Module._load (internal/modules/cjs/loader.js:782:14)
    at Function.executeUserEntryPoint [as runMain] (internal/modules/run_main.js:72:12)
    at internal/main/run_main_module.js:17:47

Retrieve the list of shards

I would like to know how many guilds there are on each shard.

When I use `client.shard.fetchClientValues('guilds.cache.size')`, it only gives me my 2 clusters, not my 20 shards.

I can't find a way to get per-shard numbers.
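A note for readers: under kurasuta, one discord.js client runs several shards per cluster process, so per-client values naturally come back per cluster. To get per-shard numbers, the usual approach is to group the cached guilds by their shard id inside each cluster. A standalone sketch (`guild.shardID` is the discord.js v12 property name; verify it for your version):

```javascript
// Count guilds per shard id from an iterable of guild-like objects,
// as you would over guilds.cache.values() inside a cluster.
function countGuildsPerShard(guilds) {
  const counts = new Map();
  for (const guild of guilds) {
    counts.set(guild.shardID, (counts.get(guild.shardID) || 0) + 1);
  }
  return counts;
}

// Toy data standing in for a cluster's guild cache:
const counts = countGuildsPerShard([
  { id: 'a', shardID: 0 },
  { id: 'b', shardID: 0 },
  { id: 'c', shardID: 1 },
]);
// counts.get(0) === 2, counts.get(1) === 1
```

Run that via `broadcastEval` (returning the counts as a plain object, since the IPC layer rejects functions) and merge the maps from all clusters.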

Not spawning more shards than 3

So I tried to implement it in my current bot, but for some reason it doesn't work. I configured 8 shards on 4 clusters, but it only spawns 3?
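A note for readers: when fewer shards spawn than configured, the likely explanation (an assumption worth checking against your config) is that the manager computed a recommended shard count from the guild total instead of honoring the configured one; the debug logs in other issues here ("Using recommend shard count of 3 shards with 1000 guilds per shard") show exactly that path. Double-check the option names you passed. For orientation, a round-robin split of shards over clusters (an assumption about the exact algorithm, not kurasuta's source) looks like this:

```javascript
// Distribute shard ids 0..shardCount-1 across clusterCount buckets
// round-robin, so each cluster gets an even share of shards.
function distributeShards(shardCount, clusterCount) {
  const clusters = Array.from({ length: clusterCount }, () => []);
  for (let shard = 0; shard < shardCount; shard++) {
    clusters[shard % clusterCount].push(shard);
  }
  return clusters;
}

distributeShards(8, 4); // [[0, 4], [1, 5], [2, 6], [3, 7]]
```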

Cluster 2 is constantly rebooting.

1 Clusters still failed, retry...
Respawning Cluster 2
Killing Cluster 2
[IPC] Client Disconnected: Cluster 2

It constantly prints this to the console. What is causing it?

Unsupported type 'function'.

When I use broadcastEval with functions that return promises, it throws an "unsupported type function" error. For example, when I evaluate `this.client.shard.broadcastEval("this.guilds.cache.get('someid')")`, my bot doesn't give me the results. I'm new to this module, so I don't know if there is another way I should be doing this. I looked into the source code and found the fetchGuild function, but it only returns plain info: it doesn't give the full Guild structure, and its methods don't work.

Here is the full error:

Error: Unsupported type 'function'.
    at Serializer.handleUnsupported (C:\Users\Administrator\Desktop\husnu\node_modules\binarytf\dist\lib\Serializer.js:60:15)
    at Serializer.parse (C:\Users\Administrator\Desktop\husnu\node_modules\binarytf\dist\lib\Serializer.js:41:34)
    at Serializer.parseValueObjectLiteral (C:\Users\Administrator\Desktop\husnu\node_modules\binarytf\dist\lib\Serializer.js:186:18)
    at Serializer.parseObject (C:\Users\Administrator\Desktop\husnu\node_modules\binarytf\dist\lib\Serializer.js:134:49)
    at Serializer.parse (C:\Users\Administrator\Desktop\husnu\node_modules\binarytf\dist\lib\Serializer.js:38:67)
    at Serializer.parseValueObjectLiteral (C:\Users\Administrator\Desktop\husnu\node_modules\binarytf\dist\lib\Serializer.js:186:18)
    at Serializer.parseObject (C:\Users\Administrator\Desktop\husnu\node_modules\binarytf\dist\lib\Serializer.js:134:49)
    at Serializer.parse (C:\Users\Administrator\Desktop\husnu\node_modules\binarytf\dist\lib\Serializer.js:38:67)
    at Serializer.parseValueObjectLiteral (C:\Users\Administrator\Desktop\husnu\node_modules\binarytf\dist\lib\Serializer.js:186:18)
    at Serializer.parseObject (C:\Users\Administrator\Desktop\husnu\node_modules\binarytf\dist\lib\Serializer.js:134:49)
    at Serializer.parse (C:\Users\Administrator\Desktop\husnu\node_modules\binarytf\dist\lib\Serializer.js:38:67)
    at Serializer.process (C:\Users\Administrator\Desktop\husnu\node_modules\binarytf\dist\lib\Serializer.js:25:14)
    at Object.serialize (C:\Users\Administrator\Desktop\husnu\node_modules\binarytf\dist\index.js:6:61)
    at NodeMessage.reply (C:\Users\Administrator\Desktop\husnu\node_modules\veza\dist\lib\Structures\NodeMessage.js:28:87)
    at ClusterIPC._message (C:\Users\Administrator\Desktop\husnu\node_modules\kurasuta\dist\IPC\ClusterIPC.js:48:25)
    at runMicrotasks (<anonymous>)
    at processTicksAndRejections (internal/process/task_queues.js:97:5) { name: 'Error' }

Versions that I'm using if that helps:
discord.js => 12.0.2
klasa => v1.0.0-alpha
kurasuta => 2.0.0
veza => 1.1.0
binarytf => 2.0.0

Cluster 0 failed to start

[screenshot of the error]

Current code:

const Sharder = new ShardingManager(join(__dirname, 'Client.js'), {
	token: settings.token,
	client: BaseClient,
	clientOptions: {
		disableEveryone: true,
		commandEditing: true,
		commandLogging: true,
		regexPrefix: /^((?:Hey |Ok )?Dum(?:,|!| ))/i,
		console: { useColor: true, timestamps: 'MM-DD-YYYY hh:mm:ss A', utc: false },
		prefixCaseInsensitive: true,
		noPrefixDM: true,
		prefix: 'dd.',
		pieceDefaults: { commands: { deletable: true, cooldown: 3, quotedStringSupport: true, bucket: 2 } },
		readyMessage: (client) => `[ ✔ ] Shard ${client.shard.id + 1} out of ${client.shard.shardCount} is ready. ${client.guilds.size.toLocaleString()} guilds & ${client.users.size.toLocaleString()} users loaded.`,
		disabledEvents: ['CHANNEL_PINS_UPDATE', 'USER_NOTE_UPDATE', 'RELATIONSHIP_ADD', 'RELATIONSHIP_REMOVE', 'USER_SETTINGS_UPDATE', 'VOICE_SERVER_UPDATE', 'TYPING_START', 'PRESENCE_UPDATE'],
		aliasFunctions: { returnRun: true, enabled: true, prefix: 'funcs' },
		dashboardHooks: { apiPrefix: '/', port: 7575 },
		typing: true,
		production: true,
		providers: { default: 'mongodb' }
	},
	development: false
});

ClusterClass is not a constructor

Hello, I am using the Kurasuta sharder with the correct options and everything, but when I start my file I get this:

[2021-06-09 19:51:06] [ERROR] TypeError: ClusterClass is not a constructor
at Object.startCluster (C:\Users\User\Desktop\Role Manager\node_modules\kurasuta\dist\Util\Util.js:100:21)

So the cluster and shards fail to load. Why?
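A note for readers: this error usually means the cluster entry file does not export a class as `module.exports`. The working pattern matches the cluster files posted in other issues on this page; in the sketch below `BaseCluster` is stubbed so the shape can be checked standalone:

```javascript
// Minimal stub of kurasuta's BaseCluster, for illustration only.
class BaseCluster {
  constructor(manager) { this.manager = manager; }
}

// Correct: export the class itself.
const ClusterClass = class extends BaseCluster {
  launch() { /* this.client.login(token) goes here in a real cluster file */ }
};
module.exports = ClusterClass;

// Wrong shapes that trigger "ClusterClass is not a constructor":
// module.exports = { ClusterClass };     // a plain object wrapping the class
// module.exports = new ClusterClass({}); // an instance, not a class
```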

error

4|mt-trial  | [Kurasuta] [Debug] Using recommend shard count of 3 shards with 1000 guilds per shard
4|mt-trial  | [Kurasuta] [Debug] Starting 3 Shards in 2 Clusters!
4|mt-trial  | [Kurasuta] [Debug] Worker spawned with id 1
4|mt-trial  | [Kurasuta] [Debug] [IPC] Client Connected: Cluster 0
4|mt-trial  | [Saturday February 13th 8:42 AM] [MongoDB] Database has been disconnected!
4|mt-trial  | [Kurasuta] [Debug] [IPC] Shard 0 disconnected!
4|mt-trial  | [Kurasuta] [Shard] Shard [object Object] disconnected.
4|mt-trial  | Error [SHARDING_REQUIRED]: This session would have handled too many guilds - Sharding is required.
4|mt-trial  |     at WebSocketManager.createShards (/home/utkarsh/mt-trial/node_modules/discord.js/src/client/websocket/WebSocketManager.js:258:15)
4|mt-trial  |     at processTicksAndRejections (internal/process/task_queues.js:93:5)
4|mt-trial  |     at async TuneyClient.login (/home/utkarsh/mt-trial/node_modules/discord.js/src/client/Client.js:223:7) {
4|mt-trial  |   [Symbol(code)]: 'SHARDING_REQUIRED'
4|mt-trial  | }
4|mt-trial  | Error [SHARDING_REQUIRED]: This session would have handled too many guilds - Sharding is required.
4|mt-trial  |     at WebSocketManager.createShards (/home/utkarsh/mt-trial/node_modules/discord.js/src/client/websocket/WebSocketManager.js:258:15)
4|mt-trial  |     at processTicksAndRejections (internal/process/task_queues.js:93:5)
4|mt-trial  |     at async TuneyClient.login (/home/utkarsh/mt-trial/node_modules/discord.js/src/client/Client.js:223:7)

Why am I getting this?

Error: Unsupported type 'function'.

I'm trying to create a broadcast command that sends a message to all servers across all clusters that have a player, but I'm getting the following error:

node:internal/process/promises:246
          triggerUncaughtException(err, true /* fromPromise */);
          ^

Error: Unsupported type 'function'.
    at Serializer.handleUnsupported (/home/seif/EarTensifier/node_modules/binarytf/dist/lib/Serializer.js:60:15)
    at Serializer.parse (/home/seif/EarTensifier/node_modules/binarytf/dist/lib/Serializer.js:41:34)
    at Serializer.parseValueObjectLiteral (/home/seif/EarTensifier/node_modules/binarytf/dist/lib/Serializer.js:186:18)
    at Serializer.parseObject (/home/seif/EarTensifier/node_modules/binarytf/dist/lib/Serializer.js:134:49)
    at Serializer.parse (/home/seif/EarTensifier/node_modules/binarytf/dist/lib/Serializer.js:38:67)
    at Serializer.parseValueObjectLiteral (/home/seif/EarTensifier/node_modules/binarytf/dist/lib/Serializer.js:186:18)
    at Serializer.parseObject (/home/seif/EarTensifier/node_modules/binarytf/dist/lib/Serializer.js:134:49)
    at Serializer.parse (/home/seif/EarTensifier/node_modules/binarytf/dist/lib/Serializer.js:38:67)
    at Serializer.parseValueObjectMap (/home/seif/EarTensifier/node_modules/binarytf/dist/lib/Serializer.js:197:18)
    at Serializer.parseObject (/home/seif/EarTensifier/node_modules/binarytf/dist/lib/Serializer.js:135:46)

Code:

const players = await client.shard.broadcastEval(`this.music.players.each(p => p.textChannel.send('${message}'))`);

I tried looking at #357, but I don't see how it applies to my use case, since I want to send a message in all clusters and not just certain ones.
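A note for readers: the value the eval script evaluates to is what travels back over IPC, and `Collection#each` returns the collection itself, so the reply here is full of player objects carrying functions. A hedged fix is to keep the side effect but make the script's last expression plain data, e.g. a count. The pattern, with a toy players map standing in for `this.music.players`:

```javascript
// Toy collection standing in for this.music.players (assumption: a Map-like
// of player objects, each with a textChannel-style send method).
const players = new Map([
  ['guildA', { send: (msg) => msg }],
  ['guildB', { send: (msg) => msg }],
]);

let sent = 0;
for (const player of players.values()) {
  player.send('hello'); // the broadcast side effect
  sent++;
}
// `sent` (here 2) is what the broadcastEval script should end with,
// not the collection of players.
```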

Clusters failed to spawn

Version: Latest

require('events').EventEmitter.defaultMaxListeners = 0;

const { ShardingManager } = require("kurasuta");
const config = require('./config');
const Client = require('./structures/Client.js');

const sharder = new ShardingManager(__dirname + "/index.js", {
    token: config.token,
    client: Client,
    respawn: true, 
    retry: true,
    ipcSocket: 9999,
    guildsPerShard: 500,
    timeout: 45000
})

sharder.spawn();

Error:

[Kurasuta] [Debug] Using recommend shard count of 2 shards with 500 guilds per shard
[Kurasuta] [Debug] Starting 2 Shards in 3 Clusters!
[Kurasuta] [Debug] Worker spawned with id 1
[Kurasuta] [Debug] Cluster 0 failed to start
Error: Cluster 0 failed to start
    at ShardingManager.spawn (/rbd/pnpm-volume/38d948f2-5402-4465-ac06-b48a224c1c34/node_modules/.registry.npmjs.org/kurasuta/2.2.0/node_modules/kurasuta/dist/Sharding/ShardingManager.js:84:64)
[Kurasuta] [Debug] Requeuing Cluster 0 to be spawned
[Kurasuta] [Debug] Worker spawned with id 2
[Kurasuta] [Debug] [IPC] Client Connected: Cluster 0
[Monday November 2nd 12:40 PM] [MongoDB] Connection established
[Kurasuta] [Debug] Cluster 1 failed to start
[Kurasuta] [Debug] Requeuing Cluster 1 to be spawned
[Kurasuta] [Debug] Respawning Cluster 0
[Kurasuta] [Debug] Killing Cluster 0
Error: Cluster 1 failed to start
    at ShardingManager.spawn (/rbd/pnpm-volume/38d948f2-5402-4465-ac06-b48a224c1c34/node_modules/.registry.npmjs.org/kurasuta/2.2.0/node_modules/kurasuta/dist/Sharding/ShardingManager.js:84:64)
[Kurasuta] [Debug] Worker spawned with id 3
[Kurasuta] [Debug] [IPC] Client Disconnected: Cluster 0
[Kurasuta] [Debug] [IPC] Client Connected: Cluster 1
[Kurasuta] [Debug] Cluster 0 failed, requeuing...
[Kurasuta] [Debug] Respawning Cluster 1
[Kurasuta] [Debug] Killing Cluster 1
[Kurasuta] [Debug] Worker spawned with id 4
[Monday November 2nd 12:40 PM] [MongoDB] Connection established
[Kurasuta] [Debug] [IPC] Client Disconnected: Cluster 1
[Kurasuta] [Debug] Cluster 1 failed, requeuing...
[Kurasuta] [Debug] 2 Clusters still failed, retry...
[Kurasuta] [Debug] Respawning Cluster 0
[Kurasuta] [Debug] Killing Cluster 0
[Kurasuta] [Debug] Worker spawned with id 5
[Kurasuta] [Debug] Cluster 0 failed, requeuing...
[Kurasuta] [Debug] Respawning Cluster 1
[Kurasuta] [Debug] Killing Cluster 1
[Kurasuta] [Debug] Worker spawned with id 6
Error: read ECONNRESET
    at TCP.onStreamRead (internal/stream_base_commons.js:183:27) {
  errno: 'ECONNRESET',
  code: 'ECONNRESET',
  syscall: 'read'
}
[Kurasuta] [Debug] [IPC] Client Disconnected: null

Socket file

This problem keeps occurring and causes cluster spawning to fail. I'm running the latest version of Kurasuta from npm. The workaround is to run compose down and then compose up again.

[3/16/2019] [3:59:19 PM] [shard manager]    fatal
     An error occurred when spawning clusters
[3/16/2019] [3:59:19 PM] [shard manager]    error
     Error: connect ECONNREFUSED /tmp/DiscordBot.sock

Spawning clusters resolves with `undefined` and then rejects (timeout)

Initially resolved with: undefined
Then rejected with: Error: Cluster 0 took too long to get ready

// My logger
[shard manager] »   success   Clusters spawned

// ...

// Node.js error
Error: Cluster 0 took too long to get ready
    at Timeout.setTimeout [as _onTimeout] (D:\username\Documents\Programming\dice-discord\bot\node_modules\.registry.npmjs.org\kurasuta\0.2.18\node_modules\kurasuta\dist\Cluster\Cluster.js:72:37)
    at ontimeout (timers.js:436:11)
    at tryOnTimeout (timers.js:300:5)
    at listOnTimeout (timers.js:263:5)
    at Timer.processTimers (timers.js:223:10)

More examples in the documentation

Actually, I still don't understand how to use this package with Discord-Akairo, and I don't see any examples. Could you provide more practical examples for use with frameworks?

The problem I have right now: when I start, for example, 3 clusters with 1 shard each, the bot responds to me 3 times, and I don't understand why.

I would very much like to use this package, but I have no examples to learn from.

Getting cluster by shard ID

Hi, I want to add stats to a command for the number of clusters and which cluster the current guild's shard is on, but when I attempt to get all the clusters from the ShardingManager I get `Unsupported type 'function'.` Is this possible to do?

Error: Cluster 0 failed to start.

[screenshot of the error]

Is there any reason for this failure? What is causing it?

I can't start my bot because of this error. I've already checked that the IPC port is open (ufw), regenerated my token, and reverted my code (git reset). Is there any explanation for this? I need to get the bot back up as soon as possible, but I've already tried everything I could think of.

Error I get when I call the client.shard.restartAll() function

The Error:

events.js:291
      throw er; // Unhandled 'error' event
      ^

Error: read ECONNRESET
    at TCP.onStreamRead (internal/stream_base_commons.js:209:20)
Emitted 'error' event on ShardingManager instance at:
    at MasterIPC.<anonymous> (C:\Users\admin\Documents\Coding Projects\bot rewrite\node_modules\kurasuta\dist\Sharding\ShardingManager.js:55:42)
    at MasterIPC.emit (events.js:314:20)
    at Server.<anonymous> (C:\Users\admin\Documents\Coding Projects\bot rewrite\node_modules\kurasuta\dist\IPC\MasterIPC.js:16:40)
    at Server.emit (events.js:314:20)
    at ServerSocket._onError (C:\Users\admin\Documents\Coding Projects\bot rewrite\node_modules\veza\dist\lib\ServerSocket.js:106:21)
    at Socket.emit (events.js:314:20)
    at emitErrorNT (internal/streams/destroy.js:106:8)
    at emitErrorCloseNT (internal/streams/destroy.js:74:3)
    at processTicksAndRejections (internal/process/task_queues.js:80:21) {
  errno: -4077,
  code: 'ECONNRESET',
  syscall: 'read'
}

Please help. I also get the same error when I remotely shut down my bot.

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory

I get an error like:

FATAL ERROR: Ineffective mark-compacts near heap limit Allocation failed - JavaScript heap out of memory
 1: 0xa08900 node::Abort() [node /root/bot/shard.js]
 2: 0xa08d0c node::OnFatalError(char const*, char const*) [node /root/bot/shard.js]
 3: 0xb7ef5e v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node /root/bot/shard.js]
 4: 0xb7f2d9 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node /root/bot/shard.js]
 5: 0xd2ba45  [node /root/bot/shard.js]
 6: 0xd2c0d6 v8::internal::Heap::RecomputeLimits(v8::internal::GarbageCollector) [node /root/bot/shard.js]
 7: 0xd38955 v8::internal::Heap::PerformGarbageCollection(v8::internal::GarbageCollector, v8::GCCallbackFlags) [node /root/bot/shard.js]
 8: 0xd39805 v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node /root/bot/shard.js]
 9: 0xd3c2bc v8::internal::Heap::AllocateRawWithRetryOrFail(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node /root/bot/shard.js]
10: 0xd02e8b v8::internal::Factory::NewFillerObject(int, bool, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node /root/bot/shard.js]
11: 0x104474e v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node /root/bot/shard.js]
12: 0x13c9819  [node /root/bot/shard.js]

Is this caused by something on my end, or is there a problem with the library?

Veza missing socket error

Error: Cannot send a message to a missing socket.
    at ClientSocket.send (/root/tune/node_modules/veza/dist/lib/Structures/Base/SocketHandler.js:34:35)
    at ShardClientUtil.send (/root/tune/node_modules/kurasuta/dist/Sharding/ShardClientUtil.js:57:36)
    at TuneClient.<anonymous> (/root/tune/node_modules/kurasuta/dist/Cluster/BaseCluster.js:46:67)
    at TuneClient.emit (events.js:375:28)
    at Object.module.exports [as RESUMED] (/root/tune/node_modules/discord.js/src/client/websocket/handlers/RESUMED.js:13:10)
    at WebSocketManager.handlePacket (/root/tune/node_modules/discord.js/src/client/websocket/WebSocketManager.js:345:31)
    at WebSocketShard.onPacket (/root/tune/node_modules/discord.js/src/client/websocket/WebSocketShard.js:443:22)
    at WebSocketShard.onMessage (/root/tune/node_modules/discord.js/src/client/websocket/WebSocketShard.js:300:10)
    at WebSocket.onMessage (/root/tune/node_modules/ws/lib/event-target.js:132:16)
    at WebSocket.emit (events.js:375:28)
    at Receiver.receiverOnMessage (/root/tune/node_modules/ws/lib/websocket.js:970:20)
    at Receiver.emit (events.js:375:28)
    at Receiver.dataMessage (/root/tune/node_modules/ws/lib/receiver.js:502:14)
    at Receiver.getData (/root/tune/node_modules/ws/lib/receiver.js:435:17)
    at Receiver.startLoop (/root/tune/node_modules/ws/lib/receiver.js:143:22)
    at Receiver._write (/root/tune/node_modules/ws/lib/receiver.js:78:10)

This error is emitted many times: when I searched the log file (the output of 3 days), I found around 350+ matches. My bot restarts very often, I haven't been able to find any cause, and I suspect this error could be it.

Facing issue while sharding

[2020-02-19 08:08:08] [CLUSTER:0] Cluster 0 spawned! (node:3553) UnhandledPromiseRejectionWarning: TypeError: BaseManager is not a constructor

Sharder assumes first argument to client is for configuring it

The code

this.client = new manager.client(clientConfig);

This line assumes that the provided client

  1. Accepts 1 argument
  2. That argument is for configuring Discord.js

The problem

In my case, my client extends the AkairoClient. This is a standard practice for many frameworks. Because of this you may forget to add in the constructor parameter that Kurasuta relies on to pass along to Discord.js.

The solution

Technically, this isn't a library issue since users are responsible for configuring their Client properly, although maybe something could be done to validate that the settings are actually being set.
Another "solution" would be to add a very large notice to the README to remind users to setup their client properly.

How Akairo users should fix this

export class MyClient extends AkairoClient {
	constructor(clientOptions: ClientOptions) {
		super(
			{
				// Akairo settings
				ownerID: owners
			},
			{
				// I like to put my settings here rather than in Kurasuta
				disableMentions: 'everyone',
				// Make sure to copy the settings that Kurasuta passes in
				// If you're configuring everything via Kurasuta just set this object to `clientOptions` and avoid the spread operator
				...clientOptions
			}
		);
	}
}

u qt

yuki is larg qt

Large Bot Sharding (max_concurrency)

I currently use Kurasuta on a large bot that has a larger max_concurrency, however it seems like the bot still logs in 1 shard at a time. Is there specific setup I need to do in order to support this feature?

TypeError: Cannot read property 'toLowerCase' of undefined

Sometimes (every 2-3 weeks) I get an error like this, and it restarts all of my clusters:

/root/bot/node_modules/kurasuta/dist/IPC/MasterIPC.js:33
        this[`_${Constants_1.IPCEvents[op].toLowerCase()}`](message);
                                           ^

TypeError: Cannot read property 'toLowerCase' of undefined
    at MasterIPC._incommingMessage (/root/bot/node_modules/kurasuta/dist/IPC/MasterIPC.js:33:44)
    at Server.emit (events.js:315:20)
    at ServerSocket._onData (/root/bot/node_modules/veza/dist/lib/ServerSocket.js:100:33)
    at Socket.emit (events.js:315:20)
    at addChunk (_stream_readable.js:295:12)
    at readableAddChunk (_stream_readable.js:271:9)
    at Socket.Readable.push (_stream_readable.js:212:10)
    at TCP.onStreamRead (internal/stream_base_commons.js:186:23)

I am also having "JavaScript heap out of memory" issues in my shard file; it may be related. Are there any problems with this file?

const { ShardingManager } = require('kurasuta');
const { join } = require('path');
const { WebhookClient } = require("discord.js");
const ayarlar = require('./ayarlar.json')
const sharder = new ShardingManager(join(__dirname, 'bot'), {
  token: ayarlar.token,
  guildsPerShard: 1500,
  clusterCount: 4,
  clientOptions: {
    retryLimit: Infinity,
    presence: { activity: { name: '+yardım', type: "WATCHING" }, status: 'online' },
    messageCacheMaxSize: 20,
    messageCacheLifetime: 600,
    messageSweepInterval: 300
  },
  timeout: 60000
});

const webhook = new WebhookClient(ayarlar.shardWebhook.ID, ayarlar.shardWebhook.TOKEN);
const clusterWebhook = new WebhookClient(ayarlar.clusterWebhook.ID, ayarlar.clusterWebhook.TOKEN);

sharder.on("debug", console.log)
.on('ready', async(cluster) => {
  clusterWebhook.send(`\`${cluster.id+1}.\` Cluster aktif oldu.`)
})
.on('shardReady', async(shard) => {
  webhook.send(`\`${shard+1}.\` Shard aktif oldu.`)
})
.on('shardReconnect', async(shard) => {
  webhook.send(`\`${shard+1}.\` Shard yeniden bağlanmaya çalışıyor.`)
})
.on('shardResume', async(replay, shard) => {
  webhook.send(`\`${shard+1}.\` Shard yeniden bağlandı.`)
})
.on('shardDisconnect', async(event, shard) => {
  webhook.send(`\`${shard+1}.\` Shard bağlantısını kesti.
\`\`\`diff
- ${event}
\`\`\``)
})
.on('error', async() => {
  return undefined
})

sharder.spawn().catch(err => {
  console.error("Shardlar başlatılırken bir sorun oluştu.", err)
})

Add ShardingManager#broadcast

What will happen if I attempt to send a message from ShardingManager to all the shards via ShardingManager#broadcast? Where would I listen to it in ShardClientUtil? I can't find it anywhere in the docs nor the code

Name has big gay

I refoose to use unless it get beter name. luv u yukine ❤️

The npm package throws an error

I get this error when connecting everything:
```
Server { insecureHTTPParser: undefined, _events: [Object: null prototype] { request: [Function: app] EventEmitter { _events: [Object: null prototype], _eventsCount: 1, _maxListeners: undefined, setMaxListeners: [Function: setMaxListeners], getMaxListeners: [Function: getMaxListeners], emit: [Function], addListener: [Function: addListener], on: [Function: addListener], prependListener: [Function: prependListener], once: [Function: once], prependOnceListener: [Function: prependOnceListener], removeListener: [Function: removeListener], off: [Function: removeListener], removeAllListeners: [Function: removeAllListeners], listeners: [Function: listeners], rawListeners: [Function: rawListeners], listenerCount: [Function: listenerCount], eventNames: [Function: eventNames], init: [Function: init], defaultConfiguration: [Function: defaultConfiguration], lazyrouter: [Function: lazyrouter], handle: [Function: handle], use: [Function: use], route: [Function: route], engine: [Function: engine], param: [Function: param], set: [Function: set], path: [Function: path], enabled: [Function: enabled], disabled: [Function: disabled], enable: [Function: enable], disable: [Function: disable], acl: [Function], bind: [Function], checkout: [Function], connect: [Function], copy: [Function], delete: [Function], get: [Function], head: [Function], link: [Function], lock: [Function], 'm-search': [Function], merge: [Function], mkactivity: [Function], mkcalendar: [Function], mkcol: [Function], move: [Function], notify: [Function], options: [Function], patch: [Function], post: [Function], propfind: [Function], proppatch: [Function], purge: [Function], put: [Function], rebind: [Function], report: [Function], search: [Function], source: [Function], subscribe: [Function], trace: [Function], unbind: [Function], unlink: [Function], unlock: [Function], unsubscribe: [Function], all: [Function: all], del: [Function], render: [Function: render], listen: [Function: listen], request: [IncomingMessage],
response: [ServerResponse], cache: {}, engines: {}, settings: [Object], locals: [Object: null prototype], mountpath: '/', _router: [Function] }, connection: [Function: connectionListener], listening: [Function: bound onceWrapper] { listener: [Function] } }, _eventsCount: 3, _maxListeners: undefined, _connections: 0, _handle: TCP { reading: false, onconnection: [Function: onconnection], [Symbol(owner_symbol)]: [Circular] }, _usingWorkers: false, _workers: [], _unref: false, allowHalfOpen: true, pauseOnConnect: false, httpAllowHalfOpen: false, timeout: 120000, keepAliveTimeout: 5000, maxHeadersCount: null, headersTimeout: 60000, _connectionKey: '6::::3000', [Symbol(IncomingMessage)]: [Function: IncomingMessage], [Symbol(ServerResponse)]: [Function: ServerResponse], [Symbol(kCapture)]: false, [Symbol(asyncId)]: 4 }
```

main.js

```js
const { BaseCluster } = require('kurasuta');

module.exports = class extends BaseCluster {
    launch() {
        this.client.login(process.env.TOKEN);
    }
};
```

index.js

```js
const { ShardingManager } = require('kurasuta');
const { join } = require('path');

const sharder = new ShardingManager(join(__dirname, 'main'), {
    clusterCount: 1
});

sharder.spawn();
```

Discord.js x Kurasuta

Hello! My bot already runs on discord.js, and I would like to replace the d.js ShardingManager with Kurasuta.

My problem is: I already have a class that extends Discord.Client.

I tried ts-mixer and extends-classes to extend multiple classes at once, but I got errors...


Example of code from futurestud.io/tutorials/node-js-extend-multiple-classes-multi-inheritance



My app.js with the ShardingManager:

```js
const config = require('./config.js')
const { ShardingManager } = require('kurasuta')
const { join } = require('path')

const sharder = new ShardingManager(join(__dirname, 'core.js'), {
    token: config.token,
    clusterCount: 1,
    guildsPerShard: 1200
})

sharder.spawn()
```

And my core.js:

```js
const Discord = require("discord.js")
const { promisify } = require("util")
const readdir = promisify(require("fs").readdir)
const klaw = require("klaw")
const path = require("path")
const config = require('./config')
const ls = require('log-symbols')

class Core extends Discord.Client {
    constructor(option) {
        super(option)
        this.config = config
        this.logger = require("./src/Logger")
        this.ls = require('log-symbols')
        this.prefix = config.prefix

        this.commands = new Discord.Collection()
        this.aliases = new Discord.Collection()
        this.modules = new Discord.Collection()

        this._addEventListeners()
        this._registerCommands()
        this._catchUnhandledRejections()
        this.login(config.token)
    }

    async _addEventListeners() {
        const evtFiles = await readdir("./src/events/")
        evtFiles.forEach(file => {
            const eventName = file.split(".")[0]
            const event = new (require(`./src/events/${file}`))(this)
            this.modules.set(eventName, event)
            this.on(eventName, (...args) => event.run(...args))
        })
    }

    _registerCommands() {
        klaw("./src/commands").on("data", (item) => {
            const cmdFile = path.parse(item.path)
            if (!cmdFile.ext || cmdFile.ext !== ".js") return
            const response = this._loadCommand(cmdFile.dir, `${cmdFile.name}${cmdFile.ext}`)
            if (response) console.log(response)
        })
    }

    _loadCommand(commandPath, commandName) {
        try {
            const props = new (require(`${commandPath}${path.sep}${commandName}`))(this)

            props.conf.location = commandPath
            if (props.init) {
                props.init(this)
            }
            this.commands.set(props.help.name, props)
            props.conf.aliases.forEach(alias => {
                this.aliases.set(alias, props.help.name)
            })
            return false
        } catch (e) {
            return `Err on ${commandName}${e}`
        }
    }

    async _unloadCommand(commandPath, commandName) {
        let command
        if (this.commands.has(commandName)) {
            command = this.commands.get(commandName)
        } else if (this.aliases.has(commandName)) {
            command = this.commands.get(this.aliases.get(commandName))
        }

        delete require.cache[require.resolve(`${commandPath}${path.sep}${commandName}.js`)]
        return false
    }
}

module.exports = new Core()
```

Maybe if I integrate it like this... but I get errors again:

```diff
const Discord = require("discord.js")
+ const { BaseCluster } = require('kurasuta')
const { promisify } = require("util")
const readdir = promisify(require("fs").readdir)
const klaw = require("klaw")
const path = require("path")
const config = require('./config')
const ls = require('log-symbols')

+ class Cluster extends BaseCluster {
+     constructor(manager) {
+         super(manager)
+     }
+     launch() {
+         this._addEventListeners()
+         this._registerCommands()
+         this._catchUnhandledRejections()
+         this.client.login(config.token)
+     }
+ }

class Core extends Discord.Client {
    constructor(option) {
        super(option)
        this.config = config
        this.logger = require("./src/Logger")
        this.ls = require('log-symbols')
        this.prefix = config.prefix

        this.commands = new Discord.Collection()
        this.aliases = new Discord.Collection()
        this.modules = new Discord.Collection()

-        /* this._addEventListeners()
-        this._registerCommands()
-        this._catchUnhandledRejections()
-        this.login(config.token)*/
    }

    async _addEventListeners() {
        const evtFiles = await readdir("./src/events/")
        evtFiles.forEach(file => {
            const eventName = file.split(".")[0]
            const event = new (require(`./src/events/${file}`))(this)
            this.modules.set(eventName, event)
            this.on(eventName, (...args) => event.run(...args))
        })
    }

    _registerCommands() {
        klaw("./src/commands").on("data", (item) => {
            const cmdFile = path.parse(item.path)
            if (!cmdFile.ext || cmdFile.ext !== ".js") return
            const response = this._loadCommand(cmdFile.dir, `${cmdFile.name}${cmdFile.ext}`)
            if (response) console.log(response)
        })
    }

    _loadCommand(commandPath, commandName) {
        try {
            const props = new (require(`${commandPath}${path.sep}${commandName}`))(this)

            props.conf.location = commandPath
            if (props.init) {
                props.init(this)
            }
            this.commands.set(props.help.name, props)
            props.conf.aliases.forEach(alias => {
                this.aliases.set(alias, props.help.name)
            })
            return false
        } catch (e) {
            return `Err on ${commandName}${e}`
        }
    }

    async _unloadCommand(commandPath, commandName) {
        let command
        if (this.commands.has(commandName)) {
            command = this.commands.get(commandName)
        } else if (this.aliases.has(commandName)) {
            command = this.commands.get(this.aliases.get(commandName))
        }

        delete require.cache[require.resolve(`${commandPath}${path.sep}${commandName}.js`)]
        return false
    }
}

module.exports = new Core()
+ module.exports = new Cluster()
```

I've got this error:

```
node_modules\kurasuta\dist\Cluster\BaseCluster.js:30
        const shards = env.CLUSTER_SHARDS.split(',').map(Number);
                                          ^

TypeError: Cannot read property 'split' of undefined
```

Can you help me? 🤔
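One way around the multiple-inheritance problem is composition: keep Core extending Discord.Client, move the boot steps (and the login) out of its constructor, and let the cluster's launch() drive them. The sketch below uses hand-written stand-ins for the discord.js and kurasuta classes, so the shapes are assumptions, not the real APIs:

```ts
// Stand-ins that only mimic the discord.js / kurasuta shapes (assumptions):
class FakeClient {
  token = '';
  async login(token: string): Promise<string> {
    this.token = token;
    return token;
  }
}

class FakeBaseCluster {
  client = new FakeClient();
}

// Composition instead of multiple inheritance: the cluster owns a client,
// and launch() performs the boot steps Core's constructor used to run.
class Cluster extends FakeBaseCluster {
  booted = false;

  async launch(): Promise<string> {
    this.booted = true; // add listeners, register commands, etc. here
    return this.client.login('TOKEN');
  }
}

new Cluster().launch().then((token) => console.log(token)); // TOKEN
```

Note that kurasuta's ShardingManager also appears to accept a `client` option (see the index.ts in a later issue on this page), which lets the manager construct your Discord.Client subclass for you instead of the default Client.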

TypeScript usage for shard util

Since the discord.js ShardClientUtil and the kurasuta ShardClientUtil are different, how do we use the kurasuta-only functions like masterEval in TypeScript?

It isn't showing up with client#shard
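Because discord.js types `client.shard` with its own ShardClientUtil, the kurasuta-only members are invisible to the compiler and need a cast. The interface below is a hand-written stand-in for kurasuta's ShardClientUtil shape (an assumption; in practice, import the real type from 'kurasuta' and cast to that):

```ts
// Stand-in for kurasuta's ShardClientUtil shape (assumption):
interface KurasutaShardClientUtil {
  id: number;
  masterEval<T>(script: string): Promise<T>;
}

// Fake client standing in for a discord.js Client, whose `shard` property
// is typed as discord.js's own util and therefore lacks masterEval:
const client: { shard: unknown } = {
  shard: {
    id: 3,
    masterEval: async <T>(script: string): Promise<T> => eval(script) as T,
  },
};

// Cast so TypeScript accepts the kurasuta-only members:
const shard = client.shard as KurasutaShardClientUtil;
console.log(shard.id); // 3
```

With the real library, the same cast pattern (`client.shard as unknown as import('kurasuta').ShardClientUtil`) should make masterEval visible without any runtime change, since casts are erased at compile time.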

env.CLUSTER_SHARDS is undefined

index.ts

```ts
import { Intents, Options } from "discord.js";
import { join } from "path/posix";
import Rick from "./Models/Rick";

const { ShardingManager } = require("kurasuta");
const sharder = new ShardingManager(join(__dirname, "shard"), {
  clusterCount: 1,
  guildsPerShard: 1200,
  client: Rick,
  clientOptions: {
    intents: [
      Intents.FLAGS.GUILDS,
      Intents.FLAGS.GUILD_INTEGRATIONS,
      Intents.FLAGS.GUILD_MEMBERS,
      Intents.FLAGS.GUILD_MESSAGES,
      Intents.FLAGS.GUILD_MESSAGE_REACTIONS,
      Intents.FLAGS.GUILD_PRESENCES,
      Intents.FLAGS.GUILD_VOICE_STATES,
      Intents.FLAGS.DIRECT_MESSAGES,
      Intents.FLAGS.DIRECT_MESSAGE_TYPING,
    ],

    makeCache: Options.cacheWithLimits({
      MessageManager: {
        maxSize: 25,
        sweepInterval: 43200,
      },
      PresenceManager: 0,
    }),
  },
});

sharder.spawn();
```

shard.ts

```ts
import { Intents } from 'discord.js';
import { ShardingManager } from 'kurasuta';
import { BaseCluster } from 'kurasuta';
import Rick from './Models/Rick';

export default class extends BaseCluster {
    constructor(manager: ShardingManager) {
        super(manager)
    }

    async launch() {
        await (this.client as Rick).bootstrap(this.client.token!);
    }
}
```
```
        const shards = env.CLUSTER_SHARDS!.split(',').map(Number);
                                           ^
TypeError: Cannot read property 'split' of undefined
    at new BaseCluster (H:\Rickbot\rickbot-3\node_modules\kurasuta\src\Cluster\BaseCluster.ts:14:38)
    at new default_1 (H:\Rickbot\rickbot-3\src\shard.ts:8:9)
    at Object.startCluster (H:\Rickbot\rickbot-3\node_modules\kurasuta\src\Util\Util.ts:78:1
```
Looking at #411 I've done everything the right way!
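The stack trace points at the likely root cause: kurasuta's master process sets the CLUSTER_SHARDS environment variable (a comma-separated list of shard IDs, per the BaseCluster source quoted in the traces) when it forks each cluster, so it is undefined whenever the cluster file is executed directly instead of being spawned through ShardingManager#spawn from the entry file. A hedged sketch of the parsing step with an explicit guard, using a plain record in place of process.env so the snippet stays self-contained:

```ts
// Guard sketch: fail loudly when the master did not set CLUSTER_SHARDS.
function parseShards(env: Record<string, string | undefined>): number[] {
  if (typeof env.CLUSTER_SHARDS !== 'string') {
    throw new Error(
      'CLUSTER_SHARDS is unset: launch via ShardingManager#spawn, not by running the cluster file directly'
    );
  }
  return env.CLUSTER_SHARDS.split(',').map(Number);
}

console.log(parseShards({ CLUSTER_SHARDS: '0,1,2' })); // [ 0, 1, 2 ]
```

In other words, always start the process from index.ts; the shard file is only ever require()d by the forked cluster, which already has the variable in its environment.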

TypeError: ClusterClass is not a constructor

This is the error I'm getting when trying to start the bot.

Then, after that, this error shows up for no apparent reason: `Error: Cluster 0 failed to start`

This is what I have right now:

```js
const { ShardingManager } = require('kurasuta');
const { join } = require('path');
require('dotenv').config();

const sharder = new ShardingManager(join(__dirname, 'bot.js'), {
    token: process.env.TOKEN,
    clusterCount: 3,
    guildsPerShard: 1
});

sharder.spawn();
```
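A hedged sketch of what usually triggers "ClusterClass is not a constructor": the manager require()s the path you give it and calls `new` on whatever was exported, so bot.js must export a class extending BaseCluster (as in the main.js examples earlier on this page), not an instance or a plain object. StubBaseCluster below is a stand-in, not the real kurasuta class:

```ts
// Stand-in for kurasuta's BaseCluster (assumption about the export contract):
class StubBaseCluster {}

// Wrong: exporting an instance means `new` on the export throws.
const instanceExport: unknown = new (class extends StubBaseCluster {})();

// Right: exporting the class itself, so the manager can construct it per cluster.
const classExport: unknown = class extends StubBaseCluster {};

function canConstruct(exported: unknown): boolean {
  try {
    new (exported as new () => unknown)();
    return true;
  } catch {
    return false;
  }
}

console.log(canConstruct(instanceExport), canConstruct(classExport)); // false true
```

The follow-up `Error: Cluster 0 failed to start` is consistent with this: once the cluster process crashes on the bad export, the master reports the cluster as having failed to start.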

How to determine shard ID?

How can a shard determine its shard ID? ShardClientUtil.id is always 0, so I can't use that. Does Kurasuta expose an environment variable to clustered processes?
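Based on the BaseCluster source quoted in the error traces above, kurasuta does hand each cluster its shard list through the CLUSTER_SHARDS environment variable (a comma-separated string of IDs). A hedged sketch of reading it, with a plain record standing in for process.env so the snippet is self-contained:

```ts
// `env` stands in for process.env; in a real cluster the master sets this value:
const env: Record<string, string | undefined> = { CLUSTER_SHARDS: '4,5' };
const shardIds = (env.CLUSTER_SHARDS ?? '').split(',').map(Number);
console.log(shardIds); // [ 4, 5 ]
```

Note a cluster can hold several shards, so this yields a list rather than a single ID; for a specific guild, Discord's routing rule `(guild_id >> 22) % shard_count` determines which of those shards handles it.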

Better Code Documentation

I'm just looking for better documentation of the source code. Whether that be JSDoc or simple inline comments, I feel that this would greatly improve the ability for people to contribute. I realize this would take quite some time, but this improvement would make this project much easier to digest.
