snap-to-s3's Issues

snap-to-s3 error: Missing required utility 'lsblk', is it on the path?

I am trying to use snap-to-s3 to back up an EBS snapshot, but when I run the following command it fails with the following error:

bash-3.2$ sudo snap-to-s3 --migrate --all --bucket .s3.amazonaws.com
Error: Missing required utility 'lsblk', is it on the path?

Terminating due to fatal errors.
bash-3.2$

Can someone help please?
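
For what it's worth: lsblk ships as part of util-linux and is generally only present on Linux, and the bash-3.2 prompt above suggests this was run on macOS. snap-to-s3 needs to run on the Linux EC2 instance that the temporary volumes are attached to; there, if lsblk is somehow missing, it can be installed with the distribution's package manager (Amazon Linux shown as an example):

sudo yum install util-linux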

Not all illegal characters are replaced when creating snapshot

I've only just started using snap-to-s3, so perhaps this is user error, but several of my snapshots have the '/' character in the Description, and this seems to be used for the object name in S3, causing 'folders' to be created. Here's an example of what I see in the 'breadcrumbs' on the S3 console:

(screenshot: S3 console breadcrumbs showing the Description split into nested 'folders')

Is that right?
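
For illustration, a minimal sketch of the sanitisation being asked for here (the replacement character and the snapshot variable are hypothetical, not what snap-to-s3 actually does):

// Hypothetical: strip path separators from the snapshot Description before
// it is embedded in the S3 key, so '/' can't create pseudo-folders:
const safeDescription = snapshot.Description.replace(/\//g, "-");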

JavaScript heap out of memory during validation

I've been using snap-to-s3 for the last couple of years without any issues like this. I'm now seeing a JavaScript memory issue on some snapshots during the validation step. I tried increasing the instance size from a c5.large to an r5.large, but that didn't seem to help. Do you know what I could do to resolve this issue? Here's the full error message:

(node:3259) NOTE: We are formalizing our plans to enter AWS SDK for JavaScript (v2) into maintenance mode in 2023.
Please migrate your code to use AWS SDK for JavaScript (v3).
For more information, check the migration guide at https://a.co/7PzMCcy
(Use node --trace-warnings ... to show where the warning was created)
[snap-0bc4fff30da0bb123] Migrating snap-0bc4fff30da0bb123 to S3
[snap-0bc4fff30da0bb123] Tagging snapshot with "migrating"...
[snap-0bc4fff30da0bb123] Creating temporary EBS volume of type "gp2" from snapshot
[snap-0bc4fff30da0bb123] Attaching vol-00cbce6055d9e7123 to this instance (i-0e9a57a184b904123) at /dev/sdj...
[snap-0bc4fff30da0bb123] Waiting for vol-00cbce6055d9e7123's partitions to become visible to the operating system...
[snap-0bc4fff30da0bb123] 1 partition to upload
[snap-0bc4fff30da0bb123]
[snap-0bc4fff30da0bb123] Uploading partition 1 of 1...
[snap-0bc4fff30da0bb123] Mounting /dev/nvme1n1p1 at /mnt/snap-0bc4fff30da0bb123-p1...
[snap-0bc4fff30da0bb123] Computing size of files to upload...
[snap-0bc4fff30da0bb123] 764.57 GB to compress and upload to s3://my-infra-snap-to-s3-archive/vol-0d7f97a969a191123/2023-07-12T06:24:06+00:00 snap-0bc4fff30da0bb123 - Created for policy: policy-0fb98236512681123 schedule: daily_1096 at 0600.p1.tar.zstd
[snap-0bc4fff30da0bb123] Progress is based on the pre-compression data size:
[snap-0bc4fff30da0bb123] Upload complete, now validating the upload of this partition...
<--- Last few GCs --->
[3259:0x7126da0] 169032912 ms: Mark-sweep 1896.1 (2094.1) -> 1889.7 (2086.9) MB, 1856.9 / 0.0 ms  (average mu = 0.339, current mu = 0.092) allocation failure; scavenge might not succeed
[3259:0x7126da0] 169035403 ms: Mark-sweep 1905.5 (2086.9) -> 1897.6 (2107.6) MB, 2335.3 / 0.0 ms  (average mu = 0.205, current mu = 0.063) allocation failure; scavenge might not succeed
<--- JS stacktrace --->
FATAL ERROR: Reached heap limit Allocation failed - JavaScript heap out of memory
 1: 0xb7b3e0 node::Abort() [node]
 2: 0xa8c8aa  [node]
 3: 0xd69100 v8::Utils::ReportOOMFailure(v8::internal::Isolate*, char const*, bool) [node]
 4: 0xd694a7 v8::internal::V8::FatalProcessOutOfMemory(v8::internal::Isolate*, char const*, bool) [node]
 5: 0xf46ba5  [node]
 6: 0xf5908d v8::internal::Heap::CollectGarbage(v8::internal::AllocationSpace, v8::internal::GarbageCollectionReason, v8::GCCallbackFlags) [node]
 7: 0xf3378e v8::internal::HeapAllocator::AllocateRawWithLightRetrySlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
 8: 0xf34b57 v8::internal::HeapAllocator::AllocateRawWithRetryOrFailSlowPath(int, v8::internal::AllocationType, v8::internal::AllocationOrigin, v8::internal::AllocationAlignment) [node]
 9: 0xf15d2a v8::internal::Factory::NewFillerObject(int, v8::internal::AllocationAlignment, v8::internal::AllocationType, v8::internal::AllocationOrigin) [node]
10: 0x12dacdf v8::internal::Runtime_AllocateInYoungGeneration(int, unsigned long*, v8::internal::Isolate*) [node]
11: 0x1707b79  [node]
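
One thing that may be worth trying (an assumption, not a confirmed fix): the trace above shows V8 aborting at its default heap limit of roughly 2 GB regardless of how much RAM the instance has, and Node's heap ceiling can be raised with the standard --max-old-space-size option, for example:

sudo env NODE_OPTIONS="--max-old-space-size=8192" snap-to-s3 --migrate --snapshots snap-0bc4fff30da0bb123 --bucket my-infra-snap-to-s3-archive --validate

(8192 MB is an arbitrary example value; an r5.large has 16 GB of RAM to spare.)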

FS failed... I have Windows snapshots

[snap-093de0f5e3d376b64] Migrating snap-093de0f5e3d376b64 to S3
[snap-093de0f5e3d376b64] Tagging snapshot with "migrating"...
[snap-093de0f5e3d376b64] Creating temporary EBS volume of type "standard" from snapshot
[snap-093de0f5e3d376b64] Attaching vol-02af295d4171dc450 to this instance (i-0940490a053bac25d) at /dev/sdo...
[snap-093de0f5e3d376b64] Waiting for vol-02af295d4171dc450's partitions to become visible to the operating system...
[snap-093de0f5e3d376b64] 2 partitions to upload
[snap-093de0f5e3d376b64]
[snap-093de0f5e3d376b64] Uploading partition 1 of 2...
[snap-093de0f5e3d376b64] Mounting /dev/xvdo1 at /mnt/snap-093de0f5e3d376b64-1...
[snap-093de0f5e3d376b64] An error occurred, tagging snapshot with "migrate" so it can be retried later
[snap-093de0f5e3d376b64] mount --source /dev/xvdo1 --target /mnt/snap-093de0f5e3d376b64-1 --read-only failed: mount: wrong fs type, bad option, bad superblock on /dev/xvdo1,
missing codepage or helper program, or other error

   In some cases useful info is found in syslog - try
   dmesg | tail or so.

Terminating due to fatal errors.
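
Windows snapshots normally contain NTFS filesystems, which a stock Linux instance cannot mount without an NTFS driver, so a failure like the one above is expected here. Two possible workarounds (assumptions based on the options shown elsewhere in these issues): install ntfs-3g (package availability varies by distribution, e.g. sudo yum install ntfs-3g with EPEL enabled), or use --dd mode, which uploads a raw image without mounting the filesystem:

sudo snap-to-s3 --migrate --dd --snapshots snap-093de0f5e3d376b64 --bucket <your-bucket>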

Are there any noted size limits beyond which EBS snapshots can't be copied?

Hey @thenickdude ,

Thank you so much for creating this tool. It made half of my work so much easier.

I came across snapshots of multiple sizes: 8, 10, 50, 100, 200, 350, 500, 850, 960 and 1000 GiB.

It works fine for sizes up to 500 GiB, but when I tried an 850 GiB snapshot,

I got a response like the one below:

`[snap-0xxxxxxxx3] Migrating snap-xxxxxxxxf3 to S3
[snap-0xxxxxxxx3] Tagging snapshot with "migrating"...
[snap-0xxxxxxxx3] A temporary volume for snap-xxxxxxxx3 already exists, using vol-0xxxxxxxxb1
[snap-0xxxxxxxx3] Attaching vol-0xxxxxxxx1 to this instance (i-0exxxxxxxx2f) at /dev/sdx...
[snap-0xxxxxxxx3] Waiting for vol-02xxxxxxxxb1's partitions to become visible to the operating system...
[snap-0xxxxxxxx3] 1 partition to upload
[snap-0xxxxxxxx3]
[snap-0xxxxxxxx3] Uploading partition 1 of 1...
[snap-0xxxxxxxx3] Mounting /dev/xvdx at /mnt/snap-0xxxxxxxx3...
[snap-0xxxxxxxx3] Computing size of files to upload...
[snap-0xxxxxxxx3] 28 KB to compress and upload to s3://xxxxxxxx/vol-0fdc1xxxxxxxx19-09-10T11:21:48+00:00 snap-0xxxxxxxx3 - Created for policy: policy-0xxxxxxxxec7 schedule: Default Schedule.tar.lz4
[snap-0xxxxxxxx3] Progress is based on the pre-compression data size:

[snap-0xxxxxxxx3] Uploaded partition to S3 successfully!
[snap-0xxxxxxxx3] Unmounting partition...
[snap-0xxxxxxxx3]
[snap-0xxxxxxxx3] Detaching vol-02xxxxxxxxb1
[snap-0xxxxxxxx3] Deleting temporary volume vol-xxxxxxxx1
[snap-0xxxxxxxx3] Tagging snapshot with "migrated"
[snap-0xxxxxxxx3] Successfully migrated this snapshot!`

Let me know if the issue is caused by my snapshots.
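
A hedged observation rather than an answer: this log doesn't look like a size limit. It shows the whole device /dev/xvdx being mounted (no partition suffix) and only 28 KB of files found on an 850 GiB snapshot, which points at the contents of the temporary volume rather than at the tool. Inspecting the attached volume would show whether it actually carries partitions and a filesystem, e.g.:

lsblk -o NAME,FSTYPE,SIZE /dev/xvdx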

Error during npm install

During the npm install step (Amazon Linux) I get an error message:

[root@ip-10-0-0-22 ~]# npm install -g snap-to-s3
npm ERR! Error while executing:
npm ERR! /usr/bin/git ls-remote -h -t ssh://git@github.com/thenickdude/node-progress.git
npm ERR!
npm ERR! fatal: Cannot come back to cwd: Permission denied
npm ERR!
npm ERR! exited with error code: 128

npm ERR! A complete log of this run can be found in:
npm ERR! /root/.npm/_logs/2017-10-08T18_24_59_631Z-debug.log

Here is a copy of the log file:

0 info it worked if it ends with ok
1 verbose cli [ '/root/.nvm/versions/node/v8.6.0/bin/node',
1 verbose cli '/root/.nvm/versions/node/v8.6.0/bin/npm',
1 verbose cli 'install',
1 verbose cli '-g',
1 verbose cli 'snap-to-s3' ]
2 info using npm@5.3.0
3 info using node@v8.6.0
4 verbose npm-session b828424b2b1dccaa
5 silly install loadCurrentTree
6 silly install readGlobalPackageData
7 http fetch GET 304 https://registry.npmjs.org/snap-to-s3 436ms (from cache)
8 silly pacote tag manifest for snap-to-s3@latest fetched in 468ms
9 silly install loadIdealTree
10 silly install cloneCurrentTreeToIdealTree
11 silly install loadShrinkwrap
12 silly install loadAllDepsIntoIdealTree
13 silly resolveWithNewModule [email protected] checking installable status
14 http fetch GET 304 https://registry.npmjs.org/aws-sdk 81ms (from cache)
15 http fetch GET 304 https://registry.npmjs.org/deep-equal 83ms (from cache)
16 http fetch GET 304 https://registry.npmjs.org/clone 85ms (from cache)
17 http fetch GET 304 https://registry.npmjs.org/command-line-args 86ms (from cache)
18 http fetch GET 304 https://registry.npmjs.org/command-line-usage 85ms (from cache)
19 silly pacote range manifest for aws-sdk@^2.55.0 fetched in 89ms
20 silly resolveWithNewModule [email protected] checking installable status
21 http fetch GET 304 https://registry.npmjs.org/filesize 88ms (from cache)
22 silly pacote range manifest for deep-equal@^1.0.1 fetched in 90ms
23 silly resolveWithNewModule [email protected] checking installable status
24 silly pacote range manifest for clone@^2.1.1 fetched in 93ms
25 silly resolveWithNewModule [email protected] checking installable status
26 silly pacote range manifest for command-line-args@^4.0.4 fetched in 94ms
27 silly resolveWithNewModule [email protected] checking installable status
28 silly pacote range manifest for command-line-usage@^4.0.0 fetched in 94ms
29 silly resolveWithNewModule [email protected] checking installable status
30 silly pacote range manifest for filesize@^3.5.9 fetched in 108ms
31 silly resolveWithNewModule [email protected] checking installable status
32 silly fetchPackageMetaData error for progress@github:thenickdude/node-progress#f901750478a76057b9271bda333dd1dcdd5406dd Error while executing:
32 silly fetchPackageMetaData /usr/bin/git ls-remote -h -t ssh://git@github.com/thenickdude/node-progress.git
32 silly fetchPackageMetaData
32 silly fetchPackageMetaData fatal: Cannot come back to cwd: Permission denied
32 silly fetchPackageMetaData
32 silly fetchPackageMetaData exited with error code: 128
33 http fetch GET 304 https://registry.npmjs.org/binary-split 160ms (from cache)
34 http fetch GET 304 https://registry.npmjs.org/csv-parse 159ms (from cache)
35 http fetch GET 304 https://registry.npmjs.org/mkdirp 73ms (from cache)
36 silly pacote range manifest for binary-split@^1.0.3 fetched in 165ms
37 silly resolveWithNewModule [email protected] checking installable status
38 silly pacote range manifest for csv-parse@^1.2.0 fetched in 164ms
39 silly resolveWithNewModule [email protected] checking installable status
40 http fetch GET 304 https://registry.npmjs.org/moment 74ms (from cache)
41 silly pacote range manifest for mkdirp@^0.5.1 fetched in 81ms
42 silly resolveWithNewModule [email protected] checking installable status
43 http fetch GET 304 https://registry.npmjs.org/js-logger 163ms (from cache)
44 silly pacote range manifest for moment@^2.18.1 fetched in 83ms
45 silly resolveWithNewModule [email protected] checking installable status
46 http fetch GET 304 https://registry.npmjs.org/gunzip-maybe 171ms (from cache)
47 silly pacote range manifest for js-logger@^1.3.0 fetched in 171ms
48 silly resolveWithNewModule [email protected] checking installable status
49 http fetch GET 304 https://registry.npmjs.org/multipipe 89ms (from cache)
50 http fetch GET 304 https://registry.npmjs.org/object.values 89ms (from cache)
51 silly pacote range manifest for gunzip-maybe@^1.4.0 fetched in 178ms
52 silly resolveWithNewModule [email protected] checking installable status
53 silly fetchPackageMetaData error for tar-stream@github:thenickdude/tar-stream#3160e7d60fe142f04e126b8b261248d023200e1b Error while executing:
53 silly fetchPackageMetaData /usr/bin/git ls-remote -h -t ssh://git@github.com/thenickdude/tar-stream.git
53 silly fetchPackageMetaData
53 silly fetchPackageMetaData fatal: Cannot come back to cwd: Permission denied
53 silly fetchPackageMetaData
53 silly fetchPackageMetaData exited with error code: 128
54 http fetch GET 304 https://registry.npmjs.org/sprintf-js 51ms (from cache)
55 silly pacote range manifest for multipipe@^1.0.2 fetched in 93ms
56 silly resolveWithNewModule [email protected] checking installable status
57 silly pacote range manifest for object.values@^1.0.4 fetched in 94ms
58 silly resolveWithNewModule [email protected] checking installable status
59 silly pacote range manifest for sprintf-js@^1.1.0 fetched in 54ms
60 silly resolveWithNewModule [email protected] checking installable status
61 http fetch GET 304 https://registry.npmjs.org/which 28ms (from cache)
62 silly pacote range manifest for which@^1.2.14 fetched in 30ms
63 silly resolveWithNewModule [email protected] checking installable status
64 http fetch GET 304 https://registry.npmjs.org/rmdir 148ms (from cache)
65 silly pacote range manifest for rmdir@^1.2.0 fetched in 150ms
66 silly resolveWithNewModule [email protected] checking installable status
67 verbose stack Error: exited with error code: 128
67 verbose stack at ChildProcess.onexit (/root/.nvm/versions/node/v8.6.0/lib/node_modules/npm/node_modules/mississippi/node_modules/end-of-stream/index.js:39:36)
67 verbose stack at emitTwo (events.js:125:13)
67 verbose stack at ChildProcess.emit (events.js:213:7)
67 verbose stack at Process.ChildProcess._handle.onexit (internal/child_process.js:200:12)
68 verbose cwd /root
69 verbose Linux 4.9.51-10.52.amzn1.x86_64
70 verbose argv "/root/.nvm/versions/node/v8.6.0/bin/node" "/root/.nvm/versions/node/v8.6.0/bin/npm" "install" "-g" "snap-to-s3"
71 verbose node v8.6.0
72 verbose npm v5.3.0
73 error Error while executing:
73 error /usr/bin/git ls-remote -h -t ssh://git@github.com/thenickdude/node-progress.git
73 error
73 error fatal: Cannot come back to cwd: Permission denied
73 error
73 error exited with error code: 128
74 verbose exit [ 1, true ]
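
The "Cannot come back to cwd: Permission denied" message is git reporting that the (less privileged) user npm runs the fetch as cannot read the current directory, /root here. A common workaround (an assumption, not verified against this exact setup) is to run the install from a world-readable directory:

cd /tmp && npm install -g snap-to-s3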

Throwing Error while trying to migrate snapshots created by DLM policies

I am trying to migrate the snapshots created by a DLM policy to an S3 bucket.

The snapshot is tagged with aws:-prefixed keys (aws:snapshots, etc.).

`[snap-0b0xxxxxxxb] An error occurred, tagging snapshot with "migrate" so it can be retried later
[snap-0b0cxxxxxxxxx] { Error: snap-0bxxxxxxxxxxxb: Error: S3 upload failed: InvalidTag: Your TagKey cannot be prefixed with aws:
at SnapshotMigrationError (/usr/lib/node_modules/snap-to-s3/lib/snap-to-s3.js:1948:3)
at _raceToMarkSnapshot.then.then (/usr/lib/node_modules/snap-to-s3/lib/snap-to-s3.js:1682:12)
at process._tickDomainCallback (internal/process/next_tick.js:135:7)
error:
Error: S3 upload failed: InvalidTag: Your TagKey cannot be prefixed with aws:
at uploader.promise.then.e (/usr/lib/node_modules/snap-to-s3/lib/snap-to-s3.js:426:14)
at process._tickDomainCallback (internal/process/next_tick.js:135:7),
snapshotID: 'snap-0b0xxxxxxxxxx' }

Terminating due to fatal errors.
`
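
For context: tag keys beginning with aws: are reserved; DLM adds them to the snapshots it creates, users cannot modify or delete them, and S3 refuses them when object tags are set, which is the InvalidTag error above. A minimal sketch of the kind of fix this implies (illustrative only, not the project's actual code; snapshotTags stands in for the tag list read from the snapshot):

// Hypothetical: drop AWS-reserved tag keys before copying the snapshot's
// tags onto the uploaded S3 object.
const s3SafeTags = snapshotTags.filter(tag => !tag.Key.toLowerCase().startsWith("aws:"));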

Error when migrating snapshot with multiple partitions

Uploading partition 2 of 2...
[snap-068329d799735982a] Mounting /dev/nvme3n1p128 at /mnt/snap-068329d799735982a-p128...
[snap-068329d799735982a] An error occurred, tagging snapshot with "migrate" so it can be retried later
[snap-068329d799735982a] { Error: snap-068329d799735982a: mount --source /dev/nvme3n1p128 --target /mnt/snap-068329d799735982a-p128 --read-only failed:  mount: /mnt/snap-068329d799735982a-p128: wrong fs type, bad option, bad superblock on /dev/nvme3n1p128, missing codepage or helper program, or other error.
    at _raceToMarkSnapshot.then.then (/home/ec2-user/snap-to-s3-master/lib/snap-to-s3.js:1691:12)
    at process._tickDomainCallback (internal/process/next_tick.js:135:7)
  error: 'mount --source /dev/nvme3n1p128 --target /mnt/snap-068329d799735982a-p128 --read-only failed:  mount: /mnt/snap-068329d799735982a-p128: wrong fs type, bad option, bad superblock on /dev/nvme3n1p128, missing codepage or helper program, or other error.\n',
  snapshotID: 'snap-068329d799735982a' }

Terminating due to fatal errors.
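
For what it's worth, partition 128 (nvme3n1p128 here) on recent Amazon Linux AMIs appears to be the tiny BIOS boot partition, which carries no mountable filesystem, so the mount step is expected to fail on it. The --dd option shown in other reports here, which copies raw blocks instead of mounting, may be a workaround:

sudo snap-to-s3 --migrate --dd --snapshots snap-068329d799735982a --bucket <your-bucket>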

TypeError: Cannot read property '1' of null

I have been trying to figure out what is going on here for a while, but I am stuck due to my very limited understanding of JavaScript.

ubuntu@ip-192-168-16-140:~$ sudo snap-to-s3 --migrate --dd --snapshots snap-0059a3b2152d58613 --bucket snap-to-s3-archive --validate
sudo: unable to resolve host ip-192-168-16-140: Connection timed out
[snap-0059a3b2152d58613] Migrating snap-0059a3b2152d58613 to S3
[snap-0059a3b2152d58613] Tagging snapshot with "migrating"...
[snap-0059a3b2152d58613] A temporary volume for snap-0059a3b2152d58613 already exists, using vol-02d14a82d004733a5
[snap-0059a3b2152d58613] Volume vol-02d14a82d004733a5 is already attached here
[snap-0059a3b2152d58613] Waiting for vol-02d14a82d004733a5's partitions to become visible to the operating system...
/usr/lib/node_modules/snap-to-s3/lib/aws-tools.js:76
                                                                } if ("vol-" + matches[1] === volume.VolumeId) {
                                                                                      ^

TypeError: Cannot read property '1' of null
    at /usr/lib/node_modules/snap-to-s3/lib/aws-tools.js:76:31
    at ChildProcess.exithandler (child_process.js:294:7)
    at ChildProcess.emit (events.js:315:20)
    at ChildProcess.EventEmitter.emit (domain.js:483:12)
    at maybeClose (internal/child_process.js:1021:16)
    at Socket.<anonymous> (internal/child_process.js:443:11)
    at Socket.emit (events.js:315:20)
    at Socket.EventEmitter.emit (domain.js:483:12)
    at Pipe.<anonymous> (net.js:674:12)

So I went into the code and tried to modify it, but with no luck:

case "nvme":
                                                child_process.execFile("nvme", ["id-ctrl", device.DEVICEPATH], function (error, stdout, stderr) {
                                                        if (error) {
                                                                reject("Failed to nvme " + stdout + " " + stderr);
                                                        } else {
                                                                console.log("in that one function");
                                                                let matches = stdout.match(/vol([0-9a-zA-Z]+)/);
                                                                console.log(matches);
                                                                console.log("checking device");
                                                                console.log(device);
                                                                //if (matches === null){
                                                                //      console.log("it equaled null");
                                                                //} else {
                                                                        if (!matches) {
                                                                                reject("Failed to parse output of ebsnvme-id: " + stdout);
                                                                        } if (matches === null) {
                                                                                console.log("it is null");
                                                                                reject("Failed to parse output of ebsnvme-id: " + stdout);
                                                                        } if ((matches != null || matches !== null) && "vol-" + matches[1] === volume.VolumeId) {
                                                                                resolve(device);
                                                                        } else {
                                                                                resolve(null);
                                                                        }
                                                                //}
                                                        }
                                                });
                                                break;

I have no clue what I am doing with JavaScript, so I could be completely wrong here, but it seems like there is a missing null check and/or an unhandled null case?

This is what I get when I run the above modified code

ubuntu@ip-192-168-16-140:~$ sudo snap-to-s3 --migrate --dd --snapshots snap-0059a3b2152d58613 --bucket snap-to-s3-archive --validate
sudo: unable to resolve host ip-192-168-16-140: Connection timed out
[snap-0059a3b2152d58613] Migrating snap-0059a3b2152d58613 to S3
[snap-0059a3b2152d58613] Tagging snapshot with "migrating"...
[snap-0059a3b2152d58613] A temporary volume for snap-0059a3b2152d58613 already exists, using vol-02d14a82d004733a5
[snap-0059a3b2152d58613] Volume vol-02d14a82d004733a5 is already attached here
[snap-0059a3b2152d58613] Waiting for vol-02d14a82d004733a5's partitions to become visible to the operating system...
in that one function
[
  'vol02d14a82d004733a5',
  '02d14a82d004733a5',
  index: 70,
  input: 'NVME Identify Controller:\n' +
    'vid     : 0x1d0f\n' +
    'ssvid   : 0x1d0f\n' +
    'sn      : vol02d14a82d004733a5\n' +
    'mn      : Amazon Elastic Block Store              \n' +
    'fr      : 1.0     \n' +
    'rab     : 32\n' +
    'ieee    : dc02a0\n' +
    'cmic    : 0\n' +
    'mdts    : 6\n' +
    'cntlid  : 0\n' +
    'ver     : 0\n' +
    'rtd3r   : 0\n' +
    'rtd3e   : 0\n' +
    'oaes    : 0x100\n' +
    'oacs    : 0\n' +
    'acl     : 4\n' +
    'aerl    : 0\n' +
    'frmw    : 0x3\n' +
    'lpa     : 0\n' +
    'elpe    : 0\n' +
    'npss    : 1\n' +
    'avscc   : 0x1\n' +
    'apsta   : 0\n' +
    'wctemp  : 0\n' +
    'cctemp  : 0\n' +
    'mtfa    : 0\n' +
    'hmpre   : 0\n' +
    'hmmin   : 0\n' +
    'tnvmcap : 0\n' +
    'unvmcap : 0\n' +
    'rpmbs   : 0\n' +
    'sqes    : 0x66\n' +
    'cqes    : 0x44\n' +
    'nn      : 1\n' +
    'oncs    : 0\n' +
    'fuses   : 0\n' +
    'fna     : 0\n' +
    'vwc     : 0\n' +
    'awun    : 0\n' +
    'awupf   : 0\n' +
    'nvscc   : 0\n' +
    'acwu    : 0\n' +
    'sgls    : 0\n' +
    'ps    0 : mp:0.01W operational enlat:1000000 exlat:1000000 rrt:0 rrl:0\n' +
    '          rwt:0 rwl:0 idle_power:- active_power:-\n' +
    'ps    1 : mp:0.00W operational enlat:0 exlat:0 rrt:0 rrl:0\n' +
    '          rwt:0 rwl:0 idle_power:- active_power:-\n',
  groups: undefined
]
checking device
{
  NAME: 'nvme2n1',
  FSTYPE: '',
  MOUNTPOINT: '',
  SIZE: 17179869184,
  TYPE: 'disk',
  'LOG-SEC': 512,
  'PHY-SEC': 512,
  PKNAME: '',
  PARTNAME: '',
  DEVICEPATH: '/dev/nvme2n1'
}
in that one function
null
checking device
{
  NAME: 'nvme1n1',
  FSTYPE: '',
  MOUNTPOINT: '',
  SIZE: 75000000000,
  TYPE: 'disk',
  'LOG-SEC': 512,
  'PHY-SEC': 512,
  PKNAME: '',
  PARTNAME: '',
  DEVICEPATH: '/dev/nvme1n1'
}
it is null
[snap-0059a3b2152d58613] An error occurred, tagging snapshot with "migrate" so it can be retried later
in that one function
[
  'vol0d6e4f47f0eb6a0b3',
  '0d6e4f47f0eb6a0b3',
  index: 70,
  input: 'NVME Identify Controller:\n' +
    'vid     : 0x1d0f\n' +
    'ssvid   : 0x1d0f\n' +
    'sn      : vol0d6e4f47f0eb6a0b3\n' +
    'mn      : Amazon Elastic Block Store              \n' +
    'fr      : 1.0     \n' +
    'rab     : 32\n' +
    'ieee    : dc02a0\n' +
    'cmic    : 0\n' +
    'mdts    : 6\n' +
    'cntlid  : 0\n' +
    'ver     : 0\n' +
    'rtd3r   : 0\n' +
    'rtd3e   : 0\n' +
    'oaes    : 0x100\n' +
    'oacs    : 0\n' +
    'acl     : 4\n' +
    'aerl    : 0\n' +
    'frmw    : 0x3\n' +
    'lpa     : 0\n' +
    'elpe    : 0\n' +
    'npss    : 1\n' +
    'avscc   : 0x1\n' +
    'apsta   : 0\n' +
    'wctemp  : 0\n' +
    'cctemp  : 0\n' +
    'mtfa    : 0\n' +
    'hmpre   : 0\n' +
    'hmmin   : 0\n' +
    'tnvmcap : 0\n' +
    'unvmcap : 0\n' +
    'rpmbs   : 0\n' +
    'sqes    : 0x66\n' +
    'cqes    : 0x44\n' +
    'nn      : 1\n' +
    'oncs    : 0\n' +
    'fuses   : 0\n' +
    'fna     : 0\n' +
    'vwc     : 0\n' +
    'awun    : 0\n' +
    'awupf   : 0\n' +
    'nvscc   : 0\n' +
    'acwu    : 0\n' +
    'sgls    : 0\n' +
    'ps    0 : mp:0.01W operational enlat:1000000 exlat:1000000 rrt:0 rrl:0\n' +
    '          rwt:0 rwl:0 idle_power:- active_power:-\n' +
    'ps    1 : mp:0.00W operational enlat:0 exlat:0 rrt:0 rrl:0\n' +
    '          rwt:0 rwl:0 idle_power:- active_power:-\n',
  groups: undefined
]
checking device
{
  NAME: 'nvme0n1p1',
  FSTYPE: 'ext4',
  MOUNTPOINT: '/',
  SIZE: 8588869120,
  TYPE: 'part',
  'LOG-SEC': 512,
  'PHY-SEC': 512,
  PKNAME: 'nvme0n1',
  PARTNAME: 'p1',
  DEVICEPATH: '/dev/nvme0n1p1'
}
in that one function
[
  'vol0d6e4f47f0eb6a0b3',
  '0d6e4f47f0eb6a0b3',
  index: 70,
  input: 'NVME Identify Controller:\n' +
    'vid     : 0x1d0f\n' +
    'ssvid   : 0x1d0f\n' +
    'sn      : vol0d6e4f47f0eb6a0b3\n' +
    'mn      : Amazon Elastic Block Store              \n' +
    'fr      : 1.0     \n' +
    'rab     : 32\n' +
    'ieee    : dc02a0\n' +
    'cmic    : 0\n' +
    'mdts    : 6\n' +
    'cntlid  : 0\n' +
    'ver     : 0\n' +
    'rtd3r   : 0\n' +
    'rtd3e   : 0\n' +
    'oaes    : 0x100\n' +
    'oacs    : 0\n' +
    'acl     : 4\n' +
    'aerl    : 0\n' +
    'frmw    : 0x3\n' +
    'lpa     : 0\n' +
    'elpe    : 0\n' +
    'npss    : 1\n' +
    'avscc   : 0x1\n' +
    'apsta   : 0\n' +
    'wctemp  : 0\n' +
    'cctemp  : 0\n' +
    'mtfa    : 0\n' +
    'hmpre   : 0\n' +
    'hmmin   : 0\n' +
    'tnvmcap : 0\n' +
    'unvmcap : 0\n' +
    'rpmbs   : 0\n' +
    'sqes    : 0x66\n' +
    'cqes    : 0x44\n' +
    'nn      : 1\n' +
    'oncs    : 0\n' +
    'fuses   : 0\n' +
    'fna     : 0\n' +
    'vwc     : 0\n' +
    'awun    : 0\n' +
    'awupf   : 0\n' +
    'nvscc   : 0\n' +
    'acwu    : 0\n' +
    'sgls    : 0\n' +
    'ps    0 : mp:0.01W operational enlat:1000000 exlat:1000000 rrt:0 rrl:0\n' +
    '          rwt:0 rwl:0 idle_power:- active_power:-\n' +
    'ps    1 : mp:0.00W operational enlat:0 exlat:0 rrt:0 rrl:0\n' +
    '          rwt:0 rwl:0 idle_power:- active_power:-\n',
  groups: undefined
]
checking device
{
  NAME: 'nvme0n1',
  FSTYPE: '',
  MOUNTPOINT: '',
  SIZE: 8589934592,
  TYPE: 'disk',
  'LOG-SEC': 512,
  'PHY-SEC': 512,
  PKNAME: '',
  PARTNAME: '',
  DEVICEPATH: '/dev/nvme0n1'
}
in that one function
[
  'vol02d14a82d004733a5',
  '02d14a82d004733a5',
  index: 70,
  input: 'NVME Identify Controller:\n' +
    'vid     : 0x1d0f\n' +
    'ssvid   : 0x1d0f\n' +
    'sn      : vol02d14a82d004733a5\n' +
    'mn      : Amazon Elastic Block Store              \n' +
    'fr      : 1.0     \n' +
    'rab     : 32\n' +
    'ieee    : dc02a0\n' +
    'cmic    : 0\n' +
    'mdts    : 6\n' +
    'cntlid  : 0\n' +
    'ver     : 0\n' +
    'rtd3r   : 0\n' +
    'rtd3e   : 0\n' +
    'oaes    : 0x100\n' +
    'oacs    : 0\n' +
    'acl     : 4\n' +
    'aerl    : 0\n' +
    'frmw    : 0x3\n' +
    'lpa     : 0\n' +
    'elpe    : 0\n' +
    'npss    : 1\n' +
    'avscc   : 0x1\n' +
    'apsta   : 0\n' +
    'wctemp  : 0\n' +
    'cctemp  : 0\n' +
    'mtfa    : 0\n' +
    'hmpre   : 0\n' +
    'hmmin   : 0\n' +
    'tnvmcap : 0\n' +
    'unvmcap : 0\n' +
    'rpmbs   : 0\n' +
    'sqes    : 0x66\n' +
    'cqes    : 0x44\n' +
    'nn      : 1\n' +
    'oncs    : 0\n' +
    'fuses   : 0\n' +
    'fna     : 0\n' +
    'vwc     : 0\n' +
    'awun    : 0\n' +
    'awupf   : 0\n' +
    'nvscc   : 0\n' +
    'acwu    : 0\n' +
    'sgls    : 0\n' +
    'ps    0 : mp:0.01W operational enlat:1000000 exlat:1000000 rrt:0 rrl:0\n' +
    '          rwt:0 rwl:0 idle_power:- active_power:-\n' +
    'ps    1 : mp:0.00W operational enlat:0 exlat:0 rrt:0 rrl:0\n' +
    '          rwt:0 rwl:0 idle_power:- active_power:-\n',
  groups: undefined
]
checking device
{
  NAME: 'nvme2n1p1',
  FSTYPE: 'ext4',
  MOUNTPOINT: '',
  SIZE: 17178803712,
  TYPE: 'part',
  'LOG-SEC': 512,
  'PHY-SEC': 512,
  PKNAME: 'nvme2n1',
  PARTNAME: 'p1',
  DEVICEPATH: '/dev/nvme2n1p1'
}
[snap-0059a3b2152d58613] SnapshotMigrationError: snap-0059a3b2152d58613: Failed to parse output of ebsnvme-id: NVME Identify Controller:
vid     : 0x1d0f
ssvid   : 0
sn      : AWSBB1A925AA9DB2BBB4
mn      : Amazon EC2 NVMe Instance Storage
fr      : 0
rab     : 0
ieee    : 40b4cd
cmic    : 0
mdts    : 5
cntlid  : b
ver     : 0
rtd3r   : 0
rtd3e   : 0
oaes    : 0
oacs    : 0
acl     : 3
aerl    : 4
frmw    : 0x3
lpa     : 0
elpe    : 63
npss    : 0
avscc   : 0
apsta   : 0
wctemp  : 0
cctemp  : 0
mtfa    : 0
hmpre   : 0
hmmin   : 0
tnvmcap : 0
unvmcap : 0
rpmbs   : 0
sqes    : 0x66
cqes    : 0x44
nn      : 1
oncs    : 0x4
fuses   : 0
fna     : 0
vwc     : 0
awun    : 0
awupf   : 0
nvscc   : 0
acwu    : 0
sgls    : 0
ps    0 : mp:0.00W operational enlat:0 exlat:0 rrt:0 rrl:0
          rwt:0 rwl:0 idle_power:- active_power:-

    at /usr/lib/node_modules/snap-to-s3/lib/snap-to-s3.js:1727:12
    at processTicksAndRejections (internal/process/task_queues.js:97:5) {
  error: 'Failed to parse output of ebsnvme-id: NVME Identify Controller:\n' +
    'vid     : 0x1d0f\n' +
    'ssvid   : 0\n' +
    'sn      : AWSBB1A925AA9DB2BBB4\n' +
    'mn      : Amazon EC2 NVMe Instance Storage        \n' +
    'fr      : 0\n' +
    'rab     : 0\n' +
    'ieee    : 40b4cd\n' +
    'cmic    : 0\n' +
    'mdts    : 5\n' +
    'cntlid  : b\n' +
    'ver     : 0\n' +
    'rtd3r   : 0\n' +
    'rtd3e   : 0\n' +
    'oaes    : 0\n' +
    'oacs    : 0\n' +
    'acl     : 3\n' +
    'aerl    : 4\n' +
    'frmw    : 0x3\n' +
    'lpa     : 0\n' +
    'elpe    : 63\n' +
    'npss    : 0\n' +
    'avscc   : 0\n' +
    'apsta   : 0\n' +
    'wctemp  : 0\n' +
    'cctemp  : 0\n' +
    'mtfa    : 0\n' +
    'hmpre   : 0\n' +
    'hmmin   : 0\n' +
    'tnvmcap : 0\n' +
    'unvmcap : 0\n' +
    'rpmbs   : 0\n' +
    'sqes    : 0x66\n' +
    'cqes    : 0x44\n' +
    'nn      : 1\n' +
    'oncs    : 0x4\n' +
    'fuses   : 0\n' +
    'fna     : 0\n' +
    'vwc     : 0\n' +
    'awun    : 0\n' +
    'awupf   : 0\n' +
    'nvscc   : 0\n' +
    'acwu    : 0\n' +
    'sgls    : 0\n' +
    'ps    0 : mp:0.00W operational enlat:0 exlat:0 rrt:0 rrl:0\n' +
    '          rwt:0 rwl:0 idle_power:- active_power:-\n',
  snapshotID: 'snap-0059a3b2152d58613'
}

Terminating due to fatal errors.

Any help on this would be greatly appreciated

For reference, the instance type I am using is an m5ad.large.
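
For the record, the root cause is visible in the output above: an m5ad.large has a local NVMe instance-store disk that identifies itself as "Amazon EC2 NVMe Instance Storage" with a serial like AWSB..., so the vol... regex matches nothing, matches is null, and the original code dereferences matches[1] without a guard. A minimal sketch of the missing null handling (an illustration, not the project's shipped fix):

// Sketch: if the controller's identify data carries no vol... serial,
// the device isn't an EBS volume (e.g. it's instance storage), so it
// simply isn't the volume being searched for - don't crash on it.
let matches = stdout.match(/vol([0-9a-zA-Z]+)/);

if (matches !== null && "vol-" + matches[1] === volume.VolumeId) {
    resolve(device);
} else {
    resolve(null);
}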

Validation fails on unencrypted snapshot

I haven't (knowingly) encrypted the snapshots (no --gpg... options used), but validation is failing like so:

Error: "s3://XXX/vol-YYY/2016-05-17T14:17:51+00:00 snap-ZZZ - Pre SSD upgrade.tar.lz4.gpg" should exist, but wasn't readable/found! NotFound: null

And I can see there is no such object in S3. Should that have been created along with the .tar.lz4 object anyway?

Question regarding how percentages are calculated for analysis report

G'day,

I am reading https://www.npmjs.com/package/snap-to-s3, specifically the section around "Analyzing a Cost and Usage report".

Looking at this example:

Region us-west-2 ($166.16/month for 24 snapshots)
vol-xxx (500GB, MySQL Slave DB): 3016 GB total, $151/month for 16 snapshots, average snapshot change 32%
  snap-xxx  2016-11-01  448.7 GB
  snap-xxx  2016-12-01  261.7 GB (52%)
  snap-xxx  2017-01-01  301.5 GB (60%)
  snap-xxx  2017-02-01  275.4 GB (55%)
  snap-xxx  2017-03-01  250.5 GB (50%)
  snap-xxx  2017-04-01  279.3 GB (56%)
  snap-xxx  2017-05-01  320.6 GB (64%)
  snap-xxx  2017-05-17  218.1 GB (44%)
  snap-xxx  2017-05-18  90.8 GB (18%)
  snap-xxx  2017-05-19  85.2 GB (17%)
  snap-xxx  2017-05-20  89.4 GB (18%)
  snap-xxx  2017-05-21  93.2 GB (19%)
  snap-xxx  2017-05-22  92.6 GB (19%)
  snap-xxx  2017-05-23  82.8 GB (17%)
  snap-xxx  2017-05-24  87.1 GB (17%)
  snap-xxx  2017-05-25  39.5 GB (7.9%)

How exactly are you calculating the percentage difference between snap 1 and snap 2? I'm not clear on how 261.7 is 52% of 448: 448 * 0.52 = 232. The same goes for snaps 2 and 3 - where does the 60% change between them come from?

I'm probably just interpreting it incorrectly - any help is appreciated :)

Bonus question: is the recommendation to move anything with >30% change to archive (even though an archived snapshot is a full image)? The total billed storage for these snaps is about 1904 GB, so 1904 x $0.055 = approx. $104.72 USD.

  snap-xxx  2016-12-01  261.7 GB (52%)
  snap-xxx  2017-01-01  301.5 GB (60%)
  snap-xxx  2017-02-01  275.4 GB (55%)
  snap-xxx  2017-03-01  250.5 GB (50%)
  snap-xxx  2017-04-01  279.3 GB (56%)
  snap-xxx  2017-05-01  320.6 GB (64%)
  snap-xxx  2017-05-17  218.1 GB (44%)

But if I moved these 7 snaps @ 448 GB each to archive, that would be 3136 GB x $0.0125 = $39.20. But then this snap's change % would increase, so you would be billed slightly more:

  snap-xxx  2017-05-18  90.8 GB (18%)

Sorry for the silly questions! Just trying to work out the best strategy to move snaps to archive.
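
For anyone else puzzling over the same numbers, a hedged reading (inferred from the figures, not confirmed by the docs): each percentage appears to be the snapshot's billed size divided by the 500 GB volume size, not by the previous snapshot's size:

261.7 GB / 500 GB ≈ 52%
301.5 GB / 500 GB ≈ 60%
275.4 GB / 500 GB ≈ 55%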

Getting an error while snap-to-s3 is trying to mount the snapshot

I assume it's waiting for the previous volume to be removed completely.

Not sure if it's caused by the snapshot I am trying to copy. I am getting the following error.

`> { Error: snap-xxxx: InvalidParameterValue: Invalid value '/dev/sdi' for unixDevice. Attachment point /dev/sdi is already in use

at SnapshotMigrationError (/usr/lib/node_modules/snap-to-s3/lib/snap-to-s3.js:1957:3)
at _raceToMarkSnapshot.then.then (/usr/lib/node_modules/snap-to-s3/lib/snap-to-s3.js:1691:12)
at process._tickDomainCallback (internal/process/next_tick.js:135:7)

error:
{ InvalidParameterValue: Invalid value '/dev/sdi' for unixDevice. Attachment point /dev/sdi is already in use
at Request.extractError (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/services/ec2.js:50:35)

   at Request.callListeners (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/sequential_executor.js:106:20)

   at Request.emit (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/sequential_executor.js:78:10)

   at Request.emit (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/request.js:683:14)

   at Request.transition (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/request.js:22:10)

   at AcceptorStateMachine.runTo (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/state_machine.js:14:12)

   at /usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/state_machine.js:26:10

   at Request.<anonymous> (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/request.js:38:9)

   at Request.<anonymous> (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/request.js:685:12)

   at Request.callListeners (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/sequential_executor.js:116:18)

 message: 'Invalid value \'/dev/sdi\' for unixDevice. Attachment point /dev/sdi is already in use',

 code: 'InvalidParameterValue',

 time: 2019-09-23T06:29:45.275Z,

 requestId: 'a737-xxxx-94-aa',

 statusCode: 400,

 retryable: false,

 retryDelay: 58.26884347307608 },

snapshotID: 'snap-xxxx' }

Terminating due to fatal errors.`

It would be very helpful if you could tell me whether I need to make any changes on the AWS side as well.
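
Since the message says attachment point /dev/sdi is already occupied, it may help to confirm what is currently attached to the instance before retrying - for example a leftover temporary volume from an earlier interrupted run, as guessed above. One way to check with the AWS CLI:

aws ec2 describe-volumes --filters Name=attachment.instance-id,Values=<instance-id> --query "Volumes[].Attachments[].[VolumeId,Device,State]" --output table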

Question:

Hi,

I am using the --dd parameter. My question is how to restore the image back to an EC2 snapshot instead of a file: after executing the line below, it returned insufficient space for /dev/xvdf, so I used a path on the root volume, /mnt/xvdf, instead:

aws s3 cp "s3://backups.example.com/vol-xxx/2017-01-01 snap-xxx.img.lz4" - | lz4 -d | sudo dd bs=1M of=/mnt/xvdf

But after extraction I get a file. How can I make it mountable, or turn it back into an EC2 snapshot?

I have also tried without --dd: after extracting to an empty volume, I took a snapshot of it and then created an image from the snapshot, but the image cannot be started after launch - it keeps stopping. Please kindly help, thank you.
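
A hedged pointer for anyone hitting the same thing: of=/mnt/xvdf is a file path on a mounted filesystem, which is why the restore produces a file (and why space ran out). Restoring a --dd image means writing it to the raw block device of a fresh, unformatted volume at least as large as the original, and then snapshotting that volume - for example, with a new empty volume attached at /dev/xvdf (device name illustrative):

aws s3 cp "s3://backups.example.com/vol-xxx/2017-01-01 snap-xxx.img.lz4" - | lz4 -d | sudo dd bs=1M of=/dev/xvdf

followed by aws ec2 create-snapshot --volume-id <the-new-volume> once dd completes.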

Copying process fails when there are two disks/mount points in a snapshot

`Uploading partition 2 of 2...
[snap-xxx] Mounting /dev/xvdy128 at /mnt/snap-xxxx-128...
[snap-xxxxx] An error occurred, tagging snapshot with "migrate" so it can be retried later
[snap-xxxx] { Error: snap-xxxx: mount --source /dev/xvdy128 --target /mnt/snap-xxx-128 --read-only failed: mount: /mnt/snap-xxxx-128: wrong fs type, bad option, bad superblock on /dev/xvdy128, missing codepage or helper program, or other error.

at SnapshotMigrationError (/usr/lib/node_modules/snap-to-s3/lib/snap-to-s3.js:1957:3)
at _raceToMarkSnapshot.then.then (/usr/lib/node_modules/snap-to-s3/lib/snap-to-s3.js:1691:12)
at process._tickDomainCallback (internal/process/next_tick.js:135:7)

error: 'mount --source /dev/xvdy128 --target /mnt/snap-xxxxx-128 --read-only failed: mount: /mnt/snap-xxxxx-128: wrong fs type, bad option, bad superblock on /dev/xvdy128, missing codepage or helper program, or other error.\n',
snapshotID: 'snap-xxxxxx' }`

Run it with background task or something

How can I run the command in the background? I want to tag all the snapshots and then just run the command once to move all of them, because it fails with a broken pipe when run from an interactive command line.

Also, is there any way I can transfer the data over a local AWS network, so that it will be fast and secure?

Thanks
Niraj
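
Two hedged suggestions, since this is more a shell question than a snap-to-s3 one: a long migration survives a dropped SSH session (the likely source of the broken pipe) if it is detached from the terminal, via nohup as below or a screen/tmux session; and the upload already travels over AWS's network, but an S3 gateway VPC endpoint keeps it off the public internet entirely:

nohup sudo snap-to-s3 --migrate --all --bucket <your-bucket> > snap-to-s3.log 2>&1 &

(This assumes sudo won't prompt for a password; otherwise start a root shell first, or run it inside tmux.)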

Support market place codes on snapshots

Snapshots of a Marketplace image, such as CentOS, have a product code.
AWS won't allow such volumes to be attached to an instance unless the instance is stopped.

I suggest changing the script a bit, so that it:

  1. Creates an instance per volume
  2. Stops the instance and attaches the volume
  3. Starts the instance
  4. SSHes to the instance, which mounts the volume, tars it up and copies it to S3
  5. Terminates the temporary instance
  6. Destroys the snapshot volume, etc.

One could go to the extreme of:

  1. Create all the volumes for the target snapshots at once, and wait for them to become available
  2. Create an instance per volume, with the volume attached and user data set so that it mounts the volume, tars it up, copies it to S3, and reports success back to the script via SQS. All of these run at the same time.
  3. Destroy the instances and snapshots.

The above would make this script work for large numbers of snapshots.

Error:

[snap-786aa206] Migrating snap-786aa206 to S3
[snap-786aa206] Tagging snapshot with "migrating"...
[snap-786aa206] Creating temporary EBS volume of type "standard" from snapshot
[snap-786aa206] Attaching vol-04de464a5f5ab334f to this instance (i-0143d3abc47eb48d9) at /dev/sdl...
[snap-786aa206] An error occurred, tagging snapshot with "migrate" so it can be retried later
[snap-786aa206] { Error: snap-786aa206: IncorrectInstanceState: Cannot attach volume 'vol-04de464a5f5ab334f' with Marketplace codes as the instance 'i-0143d3abc47eb48d9' is not in the 'stopped' state.
    at SnapshotMigrationError (/usr/lib/node_modules/snap-to-s3/lib/snap-to-s3.js:1948:3)
    at _raceToMarkSnapshot.then.then (/usr/lib/node_modules/snap-to-s3/lib/snap-to-s3.js:1682:12)
    at process._tickDomainCallback (internal/process/next_tick.js:135:7)
  error: 
   { IncorrectInstanceState: Cannot attach volume 'vol-04de464a5f5ab334f' with Marketplace codes as the instance 'i-0143d3abc47eb48d9' is not in the 'stopped' state.
       at Request.extractError (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/services/ec2.js:50:35)
       at Request.callListeners (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/sequential_executor.js:106:20)
       at Request.emit (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/sequential_executor.js:78:10)
       at Request.emit (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/request.js:683:14)
       at Request.transition (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/request.js:22:10)
       at AcceptorStateMachine.runTo (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/state_machine.js:14:12)
       at /usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/state_machine.js:26:10
       at Request.<anonymous> (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/request.js:38:9)
       at Request.<anonymous> (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/request.js:685:12)
       at Request.callListeners (/usr/lib/node_modules/snap-to-s3/node_modules/aws-sdk/lib/sequential_executor.js:116:18)
     message: 'Cannot attach volume \'vol-04de464a5f5ab334f\' with Marketplace codes as the instance \'i-0143d3abc47eb48d9\' is not in the \'stopped\' state.',
     code: 'IncorrectInstanceState',
     time: 2019-07-12T23:44:29.034Z,
     requestId: '9523bec0-b2f2-4e82-acf6-ff3f3303c8c9',
     statusCode: 400,
     retryable: false,
     retryDelay: 99.12345406671746 },
  snapshotID: 'snap-786aa206' }

Terminating due to fatal errors.
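
A rough sketch of the stop/attach/start step proposed above, using the aws-sdk v2 API the project already depends on (the function and its arguments are placeholders illustrating the sequencing, not tested code):

const AWS = require("aws-sdk");
const ec2 = new AWS.EC2(); // region/credentials come from the environment

// Hypothetical helper: attach a Marketplace-coded volume to a worker
// instance, which AWS only permits while the instance is stopped.
async function attachMarketplaceVolume(instanceId, volumeId, device) {
    await ec2.stopInstances({InstanceIds: [instanceId]}).promise();
    await ec2.waitFor("instanceStopped", {InstanceIds: [instanceId]}).promise();

    await ec2.attachVolume({InstanceId: instanceId, VolumeId: volumeId, Device: device}).promise();

    await ec2.startInstances({InstanceIds: [instanceId]}).promise();
    await ec2.waitFor("instanceRunning", {InstanceIds: [instanceId]}).promise();
}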
