
sdfs's Introduction

sdfs

What is this?

A deduplicated file system that can store data in object storage or block storage.

License

GPLv2

Requirements

System Requirements

1. x64 Linux distribution. The application was tested and developed on Ubuntu 18.04
2. At least 8 GB of RAM
3. Minimum of 2 cores
4. Minimum of 16 GB of storage

Optional Packages

* Docker

Installation

Ubuntu/Debian (Ubuntu 14.04+)

Step 1: Download the latest sdfs version
	wget http://opendedup.org/downloads/sdfs-latest.deb

Step 2: Install sdfs and dependencies
	sudo apt-get install fuse libfuse2 ssh openssh-server jsvc libxml2-utils
	sudo dpkg -i sdfs-latest.deb
	 
Step 3: Change the maximum number of open files allowed (run these as root, then exit the root shell)
	echo "* hard nofile 65535" >> /etc/security/limits.conf
	echo "* soft nofile 65535" >> /etc/security/limits.conf
	exit
Step 4: Log Out and Proceed to Initialization Instructions
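Once you log back in, it is worth confirming that the new limit actually took effect; a quick check (the value should match the limits.conf entries above):

```bash
# Print the per-process open-file limit for the current shell
ulimit -n    # expected output: 65535
```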

CentOS/RedHat (Centos 7.0+)

Step 1: Download the latest sdfs version
	wget http://opendedup.org/downloads/sdfs-latest.rpm

Step 2: Install sdfs and dependencies
	yum install jsvc libxml2 java-1.8.0-openjdk
	rpm -iv --force sdfs-latest.rpm
	 
Step 3: Change the maximum number of open files allowed (run these as root, then exit the root shell)
	echo "* hard nofile 65535" >> /etc/security/limits.conf
	echo "* soft nofile 65535" >> /etc/security/limits.conf
	exit

Step 4: Disable the IPTables firewall

	service iptables save
	service iptables stop
	chkconfig iptables off

Step 5: Log Out and Proceed to Initialization Instructions

Docker Usage

Setup

Step 1:

	docker pull gcr.io/hybrics/hybrics:master

Step 2:

	docker run --name=sdfs1 -p 0.0.0.0:6442:6442 -d gcr.io/hybrics/hybrics:master

Step 3:

	wget https://storage.cloud.google.com/hybricsbinaries/hybrics-fs/mount.sdfs-master
	sudo mv mount.sdfs-master /usr/sbin/mount.sdfs
	sudo chmod 777 /usr/sbin/mount.sdfs
	sudo mkdir /media/sdfs

Step 4:
	sudo mount.sdfs -d sdfs://localhost:6442 /media/sdfs
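To confirm the container-backed volume is mounted, and to detach it cleanly when you are done, something like the following should work (assuming the /media/sdfs mount point created in Step 3):

```bash
# Verify the SDFS mount is visible to the host
df -h /media/sdfs

# Unmount when finished
sudo umount /media/sdfs
```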

Docker Parameters:

| Environment Variable | Description | Default |
| --- | --- | --- |
| CAPACITY | The maximum physical capacity of the volume, specified in GB or TB. | 100GB |
| TYPE | The type of backend storage. This can be AZURE, GOOGLE, AWS, or BACKBLAZE. If none is specified, local storage is used. | local storage |
| URL | The URL of the object storage used. | None |
| BACKUP_VOLUME | If set to true, the sdfs volume will be set up to dedupe archive data better and faster. If not set, it defaults to better read/write access for random IO. | false |
| GCS_CREDS_FILE | The location of a GCS credentials file for authenticating to Google Cloud Storage and GCP Pub/Sub. Required for Google Cloud Storage and GCP Pub/Sub access. | None |
| ACCESS_KEY | The S3 or Azure access key. | None |
| SECRET_KEY | The S3 or Azure secret key used to access object storage. | None |
| AWS_AIM | If set to true, AWS IAM will be used for access. | false |
| PUBSUB_PROJECT | The project where the Pub/Sub notification should be set up for file changes and replication. | None |
| PUBSUB_CREDS_FILE | The credentials file used for Pub/Sub creation and access with GCP. If not set, GCS_CREDS_FILE will be used. | None |
| DISABLE_TLS | Disables TLS for API access if set to true. | false |
| REQUIRE_AUTH | Whether to require authentication for access to the sdfs APIs. | false |
| PASSWORD | The password to use when creating the volume. | admin |
| EXTENDED_CMD | Any additional command parameters to run during creation. | None |

Docker run examples

Optimized usage with local storage:

sudo mkdir /opt/sdfs1
sudo docker run --name=sdfs1 --env CAPACITY=1TB --volume /home/A_USER/sdfs1:/opt/sdfs -p 0.0.0.0:6442:6442 -d gcr.io/hybrics/hybrics:master

Optimized usage with Google Cloud Storage:

sudo mkdir /opt/sdfs1
sudo docker run --name=sdfs1 --env BUCKET_NAME=ABUCKETNAME --env TYPE=GOOGLE --env=GCS_CREDS_FILE=/keys/service_account_key.json --env=PUBSUB_PROJECT=A_GCP_PROJECT --env CAPACITY=1TB --volume=/home/A_USER/keys:/keys --volume /home/A_USER/sdfs1:/opt/sdfs -p 0.0.0.0:6442:6442 -d gcr.io/hybrics/hybrics:master
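For reference, an untested AWS S3 example assembled from the parameters documented in the table above (bucket name, keys, and paths are placeholders):

```bash
sudo mkdir /opt/sdfs1
sudo docker run --name=sdfs1 --env TYPE=AWS --env BUCKET_NAME=ABUCKETNAME \
  --env ACCESS_KEY=AN_ACCESS_KEY --env SECRET_KEY=A_SECRET_KEY --env CAPACITY=1TB \
  --volume /home/A_USER/sdfs1:/opt/sdfs -p 0.0.0.0:6442:6442 -d gcr.io/hybrics/hybrics:master
```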

Build Instructions

The Linux version must be built on a Linux system and the Windows version must be built on a Windows system.

Linux build Requirements:
	1. Docker
	2. git 

Docker Build Steps	
```bash
git clone https://github.com/opendedup/sdfs.git
cd sdfs
git fetch
git checkout -b master origin/master
#Build image with packages
docker build -t sdfs-package:latest --target build -f Dockerbuild.localbuild .
mkdir pkgs
#Extract Package
docker run --rm sdfs-package:latest | tar --extract --verbose -C pkgs/
#Build docker sdfs container
docker build -t sdfs:latest -f Dockerbuild.localbuild .
```
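Once the build finishes, the locally built image can presumably be run the same way as the published image from the Docker Usage section above; a minimal sketch:

```bash
# Run the image built above (sdfs:latest) and expose the management port
docker run --name=sdfs-local -p 0.0.0.0:6442:6442 -d sdfs:latest
```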

Initialization Instructions for Standalone Volumes

Step 1: Log into the linux system as root or use sudo

Step 2: Create the SDFS Volume. This will create a volume with 256 GB of capacity using a Variable block size.
	**Local Storage**
	sudo mkfs.sdfs --volume-name=pool0 --volume-capacity=256GB

	**AWS Storage**
	sudo mkfs.sdfs --volume-name=pool0 --volume-capacity=1TB --aws-enabled true --cloud-access-key <access-key> --cloud-secret-key <secret-key> --cloud-bucket-name <unique bucket name>
	
	**Azure Storage**
	sudo mkfs.sdfs --volume-name=pool0 --volume-capacity=1TB --azure-enabled true --cloud-access-key <access-key> --cloud-secret-key <secret-key> --cloud-bucket-name <unique bucket name>
	
	**Google Storage**
	sudo mkfs.sdfs --volume-name=pool0 --volume-capacity=1TB --google-enabled true --cloud-access-key <access-key> --cloud-secret-key <secret-key> --cloud-bucket-name <unique bucket name>
	
	
Step 3: Create a mount point on the filesystem for the volume

	sudo mkdir /media/pool0

Step 4: Mount the Volume

	sudo mount -t sdfs pool0 /media/pool0/

Step 5: Add the filesystem to fstab
	pool0           /media/pool0    sdfs    defaults                0       0
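With the fstab entry in place, the volume can be mounted by its mount point and checked; a quick sketch (sdfscli ships with the sdfs packages):

```bash
# Mount via the fstab entry and confirm the volume is available
sudo mount /media/pool0
df -h /media/pool0
sudo sdfscli --volume-info
```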

Troubleshooting and other Notes

Running a multi-node cluster on KVM guests

By default, KVM networking does not seem to allow guests to communicate over multicast. It also doesn't seem to work when bridging from a NIC. From my research, it looks like you have to set up a routed network from the KVM host and put all the guests on that shared network. In addition, you will want to enable multicast on the virtual NIC shared by those guests. Here is the udev code to make this happen. A reference to this issue is found here.

# cat /etc/udev/rules.d/61-virbr-querier.rules 
ACTION=="add", SUBSYSTEM=="net", RUN+="/etc/sysconfig/network-scripts/vnet_querier_enable"

# cat /etc/sysconfig/network-scripts/vnet_querier_enable 
#!/bin/sh
if [[ $INTERFACE == virbr* ]]; then
    /bin/echo 1 > /sys/devices/virtual/net/$INTERFACE/bridge/multicast_querier
fi
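After dropping in the rule and the script, the rule still has to be picked up; a minimal sketch, assuming udevadm is available and that the script needs to be made executable:

```bash
sudo chmod +x /etc/sysconfig/network-scripts/vnet_querier_enable
# Reload udev rules and re-trigger "add" events for network interfaces
sudo udevadm control --reload-rules
sudo udevadm trigger --subsystem-match=net --action=add
```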

Testing multicast support on nodes in the cluster

The jgroups protocol includes a nice tool to verify that multicast is working on all nodes. It's an echo tool that sends messages from a sender to a receiver.

On the receiver run

	java -cp /usr/share/sdfs/lib/jgroups-3.4.1.Final.jar org.jgroups.tests.McastReceiverTest -mcast_addr 231.12.21.132 -port 45566

On the sender run

	java -cp /usr/share/sdfs/lib/jgroups-3.4.1.Final.jar org.jgroups.tests.McastSenderTest -mcast_addr 231.12.21.132 -port 45566

Once you have both sides running, type a message on the sender and you should see it on the receiver after you press Enter. You may also want to switch roles to make sure multicast works in both directions.

Take a look at http://docs.jboss.org/jbossas/docs/Clustering_Guide/4/html/ch07s07s11.html for more detail.

Further reading:

	Take a look at the administration guide for more detail. http://www.opendedup.org/sdfs-20-administration-guide

Ask for Help

	If you still need help check out the message board here https://groups.google.com/forum/#!forum/dedupfilesystem-sdfs-user-discuss

sdfs's People

Contributors

nefelim4ag, opendedup, skulkar2, spicyaluminum


sdfs's Issues

Windows: automatic mount through service does not work

In Windows 10 I have set up a new local volume. Manual mount works fine, but when I install the service with the mount command from the example in the Windows Quickstart Guide, the volume is not mounted. Manually starting the service also doesn't work.

I also tried to use the Task Scheduler to mount the volume, but have the same problem.

symlinks in backups can corrupt the backup server's operating system (under linux)

This issue is almost certainly related to issues #9 and #25, and there is also a relevant thread on the mailing list.

A symlink that points to an absolute path (a path starting with "/") will be written to <base-path>/files/path/to/symlink, and it will point to the same absolute path as it did when the symlink was backed up.
The problem is that sdfs will then overwrite the target of this symlink with some metadata, essentially crippling the backup server!

Steps to reproduce (not tested, as I have already uninstalled sdfs and all its components and data after finding the other problems, related to relative symlinks, that I described in the above linked mailing-list thread):

Create a file on the backup server:
echo "i am original" > /testfile
Now, in your sdfs mount, create a symlink to /testfile:
ln -s /testfile /mnt/sdfs/mysymlink
Now check the contents of your testfile:
cat /testfile
Most certainly this will not return the "i am original" text from before but some metadata header from sdfs. If you still see the original content of the file, try to unmount and then mount sdfs again.

On my backup server, executable binaries in /bin/ like /bin/systemd and /bin/kmod were broken, because symlinks like /sbin/reboot -> /bin/systemctl were part of a backup I pulled from a server onto my sdfs mount via rsync, and this basically rendered my backup server inoperable.

I want to stress that, in my opinion, this is a highly critical issue and should be addressed immediately by simply removing symlink support for the time being, until a proper fix is implemented. The way symlinks are stored in sdfs right now is dangerous and useless at the same time. It's better not to support symlinks at all than to pretend to support them and by doing so break the backup server.

I posted some instructions in the above mentioned mailing-list thread on how to recover a broken backup server after trying out sdfs with this catastrophic bug; maybe this will be of use for someone else too.

Ceph with windows or OST failure to mount or 'online' storage in BE

Ceph version 12.1.0-292-g20d6a47cc9 (20d6a47cc9a08e4013d0492381d62b60f48eed47) luminous (dev)

Windows 2k12; Backup Exec 16.2, OST 2.1, opendedupe 3.4.10.1

An empty bucket fails to mount or takes a very long time (e.g. 30 mins+).

Example output;
CLI

H:\>"C:\Program Files\sdfs\mountsdfs.exe" -v cranberry-vol1  -m x -cp
cp=C:\Program Files\sdfs\bin\jre\bin\java.exe -Djava.library.path="C:\Program Files\sdfs\bin/"  -Xmx9101M -XX:+UseG1GC -Djava.awt.headless=true -server -cp "C:\Program Files\sdfs\lib\sdfs.jar";"C:\Program Files\sdfs\lib\*" org.opendedup.sdfs.windows.fs.MountSDFS -v cranberry-vol1 -m xRunning Program SDFS Version 3.4.10.1
reading config file = C:\Program Files\sdfs\etc\cranberry-vol1-volume-cfg.xml
target=https://cranberry
disableDNSBucket=true
Loading Existing Hash Tables |))))))))))))))))))))))))))))))))))))))))))))))))))| 100%

Loading BloomFilters |))))))))))))))))))))))))))))))))))))))))))))))))))| 100%

Waiting for last bloomfilters to load
ReCreating BloomFilters |))))))))))))))))))))))))))))))))))))))))))))))))))| 100%

ReCreating BloomFilters |))))))))))))))))))))))))))))))))))))))))))))))))))| 100%

Waiting for all BloomFilters creation threads to finish done
Loaded entries 0
Running Consistancy Check on DSE, this may take a while
Scanning DSE |))))))))))))))))))))))))))))))))))))))))))))))))))| 100%

Finished
Succesfully Ran Consistance Check for [0] records, recovered [0]

Fails to return a lot of the time. Sits at 4.8 to 6GB RAM usage and 3-5% CPU. Minimal network traffic.
Sometimes it doesn't return but the drive still mounts.

Logs from trying to use OST connector in BE:

These are the ceph/radosgw logs after it's been stalled for 1.5 hours. It's still ticking over very slowly.

2017-08-17 20:13:22.652445 7fb3d9d0d700  1 civetweb: 0xbf85859000: 10.1.0.51 - - [17/Aug/2017:20:13:22 +0100] "PUT /sdfs1/bucketinfo/3605059006502349093 HTTP/1.1" 1 0 - aws-sdk-java/1.11.113 Windows_Server_2012/6.2 OpenJDK_64-Bit_Server_VM/25.121-b15/1.8.0_121 scala/2.11.7 kotlin/1.0.1
2017-08-17 20:13:26.827286 7fb3d9d0d700  1 ====== starting new request req=0x7fb3d9d07660 =====
2017-08-17 20:13:26.828443 7fb3d9d0d700  1 ====== req done req=0x7fb3d9d07660 op status=0 http_status=200 ======
2017-08-17 20:13:26.828475 7fb3d9d0d700  1 civetweb: 0xbf85859000: 10.1.0.51 - - [17/Aug/2017:20:13:22 +0100] "HEAD /sdfs1/bucketinfo/3605059006502349093 HTTP/1.1" 1 0 - aws-sdk-java/1.11.113 Windows_Server_2012/6.2 OpenJDK_64-Bit_Server_VM/25.121-b15/1.8.0_121 scala/2.11.7 kotlin/1.0.1
2017-08-17 20:13:41.835713 7fb3cbcf1700  1 ====== starting new request req=0x7fb3cbceb660 =====
2017-08-17 20:13:41.836812 7fb3cbcf1700  1 ====== req done req=0x7fb3cbceb660 op status=0 http_status=200 ======
2017-08-17 20:13:41.836877 7fb3cbcf1700  1 civetweb: 0xbf858ee000: 10.1.0.51 - - [17/Aug/2017:20:13:41 +0100] "HEAD /sdfs1/bucketinfo/3605059006502349093 HTTP/1.1" 1 0 - aws-sdk-java/1.11.113 Windows_Server_2012/6.2 OpenJDK_64-Bit_Server_VM/25.121-b15/1.8.0_121 scala/2.11.7 kotlin/1.0.1
2017-08-17 20:13:56.843483 7fb3c84ea700  1 ====== starting new request req=0x7fb3c84e4660 =====
2017-08-17 20:13:56.844779 7fb3c84ea700  1 ====== req done req=0x7fb3c84e4660 op status=0 http_status=200 ======
2017-08-17 20:13:56.844842 7fb3c84ea700  1 civetweb: 0xbf8590c000: 10.1.0.51 - - [17/Aug/2017:20:13:56 +0100] "HEAD /sdfs1/bucketinfo/3605059006502349093 HTTP/1.1" 1 0 - aws-sdk-java/1.11.113 Windows_Server_2012/6.2 OpenJDK_64-Bit_Server_VM/25.121-b15/1.8.0_121 scala/2.11.7 kotlin/1.0.1
2017-08-17 20:14:11.852902 7fb3dad0f700  1 ====== starting new request req=0x7fb3dad09660 =====
2017-08-17 20:14:11.854238 7fb3dad0f700  1 ====== req done req=0x7fb3dad09660 op status=0 http_status=200 ======
2017-08-17 20:14:11.854297 7fb3dad0f700  1 civetweb: 0xbf8584f000: 10.1.0.51 - - [17/Aug/2017:20:14:11 +0100] "HEAD /sdfs1/bucketinfo/3605059006502349093 HTTP/1.1" 1 0 - aws-sdk-java/1.11.113 Windows_Server_2012/6.2 OpenJDK_64-Bit_Server_VM/25.121-b15/1.8.0_121 scala/2.11.7 kotlin/1.0.1
2017-08-17 20:14:22.661675 7fb3cb4f0700  1 ====== starting new request req=0x7fb3cb4ea660 =====
2017-08-17 20:14:22.662912 7fb3cb4f0700  1 ====== req done req=0x7fb3cb4ea660 op status=0 http_status=200 ======
2017-08-17 20:14:22.662959 7fb3cb4f0700  1 civetweb: 0xbf858e9000: 10.1.0.51 - - [17/Aug/2017:20:14:22 +0100] "HEAD /sdfs1/bucketinfo/3605059006502349093 HTTP/1.1" 1 0 - aws-sdk-java/1.11.113 Windows_Server_2012/6.2 OpenJDK_64-Bit_Server_VM/25.121-b15/1.8.0_121 scala/2.11.7 kotlin/1.0.1

Last log entries on windows from the above stall:

2017-08-17 18:51:13,240 [sdfs] [org.opendedup.sdfs.servers.HashChunkService] [245] [main]  - DSE did not close gracefully, running consistancy check
2017-08-17 18:51:13,434 [sdfs] [org.opendedup.sdfs.filestore.ConsistancyCheck] [25] [main]  - Running Consistancy Check on DSE, this may take a while
2017-08-17 18:51:26,452 [sdfs] [org.opendedup.sdfs.filestore.ConsistancyCheck] [73] [main]  - Succesfully Ran Consistance Check for [0] records, recovered [0]
2017-08-17 18:51:26,452 [sdfs] [org.opendedup.sdfs.servers.HCServiceProxy] [216] [main]  - running consistency check
----
2017-08-17 18:50:44,425445 | DEBUG| __cdecl SampleStorageServer::SampleStorageServer(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >) | looking for server
2017-08-17 18:50:44,425445 | DEBUG| __cdecl SampleStorageServer::SampleStorageServer(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >) | 1 looking for server = OpenDedupe:cranberry-vol1
2017-08-17 18:50:44,425445 | DEBUG| __cdecl SampleStorageServer::SampleStorageServer(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >) | config file = C:\Program Files\sdfs\etc\ostconfig.xml
2017-08-17 18:50:44,425445 | DEBUG| __cdecl SampleStorageServer::SampleStorageServer(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >) | direct=indirect mount=1 name=cranberry-vol1 enc=0
2017-08-17 18:50:44,426441 | DEBUG| __cdecl SampleLSU::SampleLSU(const char *,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,bool,bool,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >) | in function
2017-08-17 18:50:44,426441 | DEBUG| __cdecl SampleLSU::SampleLSU(const char *,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >,bool,bool,class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >) | LSU Path X:\
2017-08-17 18:50:44,426441 | DEBUG| void __cdecl CreateChildProcess(class std::basic_string<char,struct std::char_traits<char>,class std::allocator<char> >) |  executing mountsdfs -v cranberry-vol1 -m X:\

Then nothing for 2 hours before I killed it. Within BE it sits at 'discovering devices' in the storage tab, which blocks use of all other storage.

The ceph installation works fine with other clients and BE directly.

If I can get the drive to mount, then it appears to work fine for the duration of the session with cli & windows, and BE+OST. It's just the initialisation that is the problem.

sdfscli volume info

Is there any way to display statistics for multiple volumes (sdfscli --volume-info)?
I've created a default one in /opt and another volume (pool) in /backup, but there is no parameter for --volume-info and it always displays the default /opt volume info.

error in new version of opendedup

Hi!
Today I installed the new version of opendedup, created a new volume, and got an error running
sdfscli --port=6444 --cleanstore
This error is returned on any command. I've downgraded back to 3.5 for now.

[screenshot of the error was attached]

opendedupe virtual nas appliance could not save backblaze configuration

I have set up an account with Backblaze, added a bucket, and now try to add it to the NAS appliance.
I choose "Add a Cloud Storage Target", then fill in the fields:

  • Cloud Storage Provider: Backblaze (B2)
  • Access Key: here I put the Backblaze Account ID for the bucket
  • Secret Key: here I put the Backblaze Application ID for the bucket

Now when I click Submit, the error "Could not Save Cloud Storage Target Configuration" is shown and the Backblaze storage target is not created.

In /var/log/sdfs/sdfs.log there is:

2018-05-05 10:22:49,693 [datish-viewer] [com.datish.explorer.NetsExplorer] [142] [play-thread-5]  - iface ens160 inet static
2018-05-05 10:23:16,677 [sdfs] [controllers.CloudServers] [160] [play-thread-2]  - {"type":"backblaze","subtype":"backblaze","bucketLocation":"us-west-2","hostName":"","disableDNSBucket":false,"disableCheckAuth":false,"useAim":false,"encryptData":false,"accessKey":"225512b3d067","secretKey":"0024d74cc978eed2f3d5f5736277a4a63f160ec1a0","maxThreads":24,"readSpeed":0,"writeSpeed":0,"archiveInDays":0,"connectionTimeoutMS":15000,"blockSizeMB":30,"proxyHost":"","proxyPort":0,"proxyUser":"","proxyPassword":"","proxyDomain":"","name":"backblaze-indevops","id":"","simpleS3":false,"usebasicsigner":false,"usev4signer":false,"simpleMD":false,"acceleratedAWS":false,"iaInDays":0}
2018-05-05 10:23:16,678 [play] [play.Logger] [604] [play-thread-2]  -

@77o3ih4cb
Internal Server Error (500) for request POST /cloudserver/?_dc=1525508583762

Execution exception
NoClassDefFoundError occured : org/jclouds/ContextBuilder

play.exceptions.JavaExecutionException: org/jclouds/ContextBuilder
        at play.mvc.ActionInvoker.invoke(ActionInvoker.java:231)
        at Invocation.HTTP Request(Play!)
Caused by: java.lang.NoClassDefFoundError: org/jclouds/ContextBuilder
        at org.opendedup.sdfs.filestore.cloud.BatchJCloudChunkStore.checkAccess(BatchJCloudChunkStore.java:1537)
        at controllers.CloudServers.auth(CloudServers.java:84)
        at controllers.CloudServers.add(CloudServers.java:175)
        at play.mvc.ActionInvoker.invokeWithContinuation(ActionInvoker.java:544)
        at play.mvc.ActionInvoker.invoke(ActionInvoker.java:494)
        at play.mvc.ActionInvoker.invokeControllerMethod(ActionInvoker.java:489)
        at play.mvc.ActionInvoker.invokeControllerMethod(ActionInvoker.java:458)
        at play.mvc.ActionInvoker.invoke(ActionInvoker.java:162)
        ... 1 more

This is the latest appliance, installed from ISO and then upgraded with apt-get dist-upgrade and by installing sdfs-latest.deb.

After power reset, statfs information has disappeared

Hi guys.

I'm interested in this project, it's so cool!
I have been testing OpenDedup for my cloud backup.

My test environments.

  • VMware ESXi 5.5 + NFS storage
  • GUEST OS : Ubuntu 16.06 + OpenDedup 3.5.4
  • rsync backup

For the first 2 weeks, everything was OK.
Yesterday, I force shut down the OpenDedup VM and then migrated the VM to an NFS datastore.
After the migration was done, the OpenDedup VM booted up successfully.

mount.sdfs -v XXXX /media/DEDUP
5 minutes ... hash loading
rm /volume/DEDUP/chunkstore/hdb-4448648576798767234/.lock
mount.sdfs -v XXXX /media/DEDUP

After mounting the dedupe volume, I can't find inode information with df.
The "sdfscli --volume-info" output was cleared too.
But all data is alive!!

How can I recover that information, or how can I prevent this error?

thanks.

  • hgichon

first mount log

2018-02-12 18:41:33,086 [sdfs] [org.opendedup.sdfs.Config] [251] [main] - ############ Running SDFS Version 3.5.4.0
2018-02-12 18:41:33,096 [sdfs] [org.opendedup.sdfs.Config] [271] [main] - Parsing volume subsystem-config version 3.5.4.0
2018-02-12 18:41:33,101 [sdfs] [org.opendedup.sdfs.Config] [273] [main] - parsing folder locations
2018-02-12 18:41:33,101 [sdfs] [org.opendedup.sdfs.Config] [309] [main] - Setting hash engine to VARIABLE_SIP
2018-02-12 18:41:33,110 [sdfs] [org.opendedup.hashing.HashFunctionPool] [59] [main] - Set hashtype to VARIABLE_SIP
2018-02-12 18:41:33,116 [sdfs] [org.opendedup.sdfs.io.Volume] [220] [main] - Mounting volume /volume/DEDUP/files
2018-02-12 18:41:33,122 [sdfs] [org.opendedup.sdfs.io.Volume] [296] [main] - Volume write threshold is 0.95
2018-02-12 18:41:33,123 [sdfs] [org.opendedup.sdfs.io.Volume] [329] [main] - Setting volume size to 3760329850880
2018-02-12 18:41:33,123 [sdfs] [org.opendedup.sdfs.io.Volume] [331] [main] - Setting maximum capacity to 3572313358336
2018-02-12 18:41:33,182 [sdfs] [org.opendedup.sdfs.Config] [426] [main] - ######### Will allocate 3758096384000 in chunkstore ##############
2018-02-12 18:41:33,182 [sdfs] [org.opendedup.sdfs.Config] [502] [main] - ################## Encryption is NOT enabled ##################
2018-02-12 18:41:33,355 [sdfs] [org.opendedup.sdfs.servers.SDFSService] [74] [main] - HashFunction Min Block Size=4095 Max Block Size=32768
2018-02-12 18:41:33,356 [sdfs] [org.opendedup.sdfs.servers.HCServiceProxy] [200] [main] - Starting local chunkstore
2018-02-12 18:41:33,507 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [250] [main] - ############################ Initialied HashBlobArchive ##############################
2018-02-12 18:41:33,507 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [251] [main] - Version : 0
2018-02-12 18:41:33,507 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [252] [main] - HashBlobArchive IO Threads : 16
2018-02-12 18:41:33,510 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [253] [main] - HashBlobArchive Max Upload Size : 62914560
2018-02-12 18:41:33,513 [sdfs] [org.opendedup.hashing.VariableSipHashEngine] [56] [main] - Variable minLen=4095 maxlen=32768 windowSize=48
2018-02-12 18:41:34,695 [sdfs] [org.opendedup.hashing.VariableSipHashEngine] [56] [main] - Variable minLen=4095 maxlen=32768 windowSize=48
2018-02-12 18:41:35,568 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [265] [main] - HashBlobArchive Max Map Size : 25605
2018-02-12 18:41:35,569 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [266] [main] - HashBlobArchive Maximum Local Cache Size : 10737418240
2018-02-12 18:41:35,569 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [267] [main] - HashBlobArchive Max Thread Sleep Time : 1500
2018-02-12 18:41:35,569 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [268] [main] - HashBlobArchive Spool Directory : /volume/DEDUP/chunkstore/chunks
2018-02-12 18:41:35,875 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [371] [main] - ################################# Done Uploading Archives #################
2018-02-12 18:41:35,877 [sdfs] [org.opendedup.hashing.VariableSipHashEngine] [56] [main] - Variable minLen=4095 maxlen=32768 windowSize=48
2018-02-12 18:41:36,727 [sdfs] [org.opendedup.sdfs.filestore.HashStore] [171] [main] - Loading hashdb class org.opendedup.collections.RocksDBMap
2018-02-12 18:41:36,727 [sdfs] [org.opendedup.sdfs.filestore.HashStore] [173] [main] - Maximum Number of Entries is 458760000
2018-02-12 18:53:04,368 [sdfs] [org.opendedup.sdfs.filestore.HashStore] [88] [main] - Cache Size = 262144
2018-02-12 18:53:04,370 [sdfs] [org.opendedup.sdfs.filestore.HashStore] [89] [main] - Total Entries 258083876
2018-02-12 18:53:04,370 [sdfs] [org.opendedup.sdfs.filestore.HashStore] [90] [main] - Added sdfs
2018-02-12 18:53:04,370 [sdfs] [org.opendedup.sdfs.servers.HCServiceProxy] [205] [main] - lock file exists /volume/DEDUP/chunkstore/hdb-4448648576798767234/.lock
2018-02-12 18:53:04,371 [sdfs] [org.opendedup.sdfs.servers.HCServiceProxy] [206] [main] - Please remove lock file to proceed

second mount log

2018-02-12 18:55:47,755 [sdfs] [org.opendedup.sdfs.Config] [251] [main] - ############ Running SDFS Version 3.5.4.0
2018-02-12 18:55:47,760 [sdfs] [org.opendedup.sdfs.Config] [271] [main] - Parsing volume subsystem-config version 3.5.4.0
2018-02-12 18:55:47,763 [sdfs] [org.opendedup.sdfs.Config] [273] [main] - parsing folder locations
2018-02-12 18:55:47,764 [sdfs] [org.opendedup.sdfs.Config] [309] [main] - Setting hash engine to VARIABLE_SIP
2018-02-12 18:55:47,773 [sdfs] [org.opendedup.hashing.HashFunctionPool] [59] [main] - Set hashtype to VARIABLE_SIP
2018-02-12 18:55:47,778 [sdfs] [org.opendedup.sdfs.io.Volume] [220] [main] - Mounting volume /volume/DEDUP/files
2018-02-12 18:55:47,784 [sdfs] [org.opendedup.sdfs.io.Volume] [296] [main] - Volume write threshold is 0.95
2018-02-12 18:55:47,785 [sdfs] [org.opendedup.sdfs.io.Volume] [329] [main] - Setting volume size to 3760329850880
2018-02-12 18:55:47,785 [sdfs] [org.opendedup.sdfs.io.Volume] [331] [main] - Setting maximum capacity to 3572313358336
2018-02-12 18:55:47,845 [sdfs] [org.opendedup.sdfs.Config] [426] [main] - ######### Will allocate 3758096384000 in chunkstore ##############
2018-02-12 18:55:47,845 [sdfs] [org.opendedup.sdfs.Config] [502] [main] - ################## Encryption is NOT enabled ##################
2018-02-12 18:55:48,034 [sdfs] [org.opendedup.sdfs.servers.SDFSService] [74] [main] - HashFunction Min Block Size=4095 Max Block Size=32768
2018-02-12 18:55:48,035 [sdfs] [org.opendedup.sdfs.servers.HCServiceProxy] [200] [main] - Starting local chunkstore
2018-02-12 18:55:48,049 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [250] [main] - ############################ Initialied HashBlobArchive ##############################
2018-02-12 18:55:48,050 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [251] [main] - Version : 0
2018-02-12 18:55:48,050 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [252] [main] - HashBlobArchive IO Threads : 16
2018-02-12 18:55:48,052 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [253] [main] - HashBlobArchive Max Upload Size : 62914560
2018-02-12 18:55:48,056 [sdfs] [org.opendedup.hashing.VariableSipHashEngine] [56] [main] - Variable minLen=4095 maxlen=32768 windowSize=48
2018-02-12 18:55:49,227 [sdfs] [org.opendedup.hashing.VariableSipHashEngine] [56] [main] - Variable minLen=4095 maxlen=32768 windowSize=48
2018-02-12 18:55:50,105 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [265] [main] - HashBlobArchive Max Map Size : 25605
2018-02-12 18:55:50,105 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [266] [main] - HashBlobArchive Maximum Local Cache Size : 10737418240
2018-02-12 18:55:50,105 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [267] [main] - HashBlobArchive Max Thread Sleep Time : 1500
2018-02-12 18:55:50,105 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [268] [main] - HashBlobArchive Spool Directory : /volume/DEDUP/chunkstore/chunks
2018-02-12 18:55:50,125 [sdfs] [org.opendedup.sdfs.filestore.HashBlobArchive] [371] [main] - ################################# Done Uploading Archives #################
2018-02-12 18:55:50,126 [sdfs] [org.opendedup.hashing.VariableSipHashEngine] [56] [main] - Variable minLen=4095 maxlen=32768 windowSize=48
2018-02-12 18:55:50,975 [sdfs] [org.opendedup.sdfs.filestore.HashStore] [171] [main] - Loading hashdb class org.opendedup.collections.RocksDBMap
2018-02-12 18:55:50,975 [sdfs] [org.opendedup.sdfs.filestore.HashStore] [173] [main] - Maximum Number of Entries is 458760000
2018-02-12 18:55:51,584 [sdfs] [org.opendedup.sdfs.filestore.HashStore] [88] [main] - Cache Size = 262144
2018-02-12 18:55:51,584 [sdfs] [org.opendedup.sdfs.filestore.HashStore] [89] [main] - Total Entries 258083876
2018-02-12 18:55:51,585 [sdfs] [org.opendedup.sdfs.filestore.HashStore] [90] [main] - Added sdfs
2018-02-12 18:55:51,585 [sdfs] [org.opendedup.sdfs.servers.HCServiceProxy] [672] [main] - Write lock file /volume/DEDUP/chunkstore/hdb-4448648576798767234/.lock
2018-02-12 18:55:51,893 [sdfs] [org.opendedup.sdfs.mgmt.Io] [58] [main] - mounting /volume/DEDUP/files to /media/
2018-02-12 18:55:51,923 [sdfs] [org.opendedup.sdfs.mgmt.MgmtWebServer] [1309] [main] - ###################### SDFSCLI SSL Management WebServer Started at localhost/127.0.0.1:6442 #########################
2018-02-12 18:55:51,929 [sdfs] [org.opendedup.sdfs.filestore.gc.PFullGC] [49] [main] - Current DSE Percentage Full is [56.26] will run GC when [66.26]
2018-02-12 18:55:51,930 [sdfs] [org.opendedup.sdfs.filestore.gc.StandAloneGCScheduler] [39] [main] - Using org.opendedup.sdfs.filestore.gc.PFullGC for DSE Garbage Collection
2018-02-12 18:55:51,930 [sdfs] [org.opendedup.sdfs.filestore.gc.StandAloneGCScheduler] [48] [main] - GC Thread priority is 10
2018-02-12 18:55:51,932 [sdfs] [org.opendedup.sdfs.filestore.gc.SDFSGCScheduler] [45] [main] - Scheduling FDISK Jobs for SDFS
2018-02-12 18:55:51,977 [org.quartz.impl.StdSchedulerFactory] [org.quartz.impl.StdSchedulerFactory] [1179] [main] - Using default implementation for ThreadExecutor
2018-02-12 18:55:51,993 [org.quartz.core.SchedulerSignalerImpl] [org.quartz.core.SchedulerSignalerImpl] [60] [main] - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
2018-02-12 18:55:51,993 [org.quartz.core.QuartzScheduler] [org.quartz.core.QuartzScheduler] [229] [main] - Quartz Scheduler v.1.8.6 created.
2018-02-12 18:55:51,994 [org.quartz.simpl.RAMJobStore] [org.quartz.simpl.RAMJobStore] [139] [main] - RAMJobStore initialized.
2018-02-12 18:55:51,995 [org.quartz.core.QuartzScheduler] [org.quartz.core.QuartzScheduler] [255] [main] - Scheduler meta-data: Quartz Scheduler (v1.8.6) 'QuartzScheduler' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 1 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.

2018-02-12 18:55:51,996 [org.quartz.impl.StdSchedulerFactory] [org.quartz.impl.StdSchedulerFactory] [1324] [main] - Quartz scheduler 'QuartzScheduler' initialized from an externally provided properties instance.
2018-02-12 18:55:51,996 [org.quartz.impl.StdSchedulerFactory] [org.quartz.impl.StdSchedulerFactory] [1328] [main] - Quartz scheduler version: 1.8.6
2018-02-12 18:55:51,996 [org.quartz.core.QuartzScheduler] [org.quartz.core.QuartzScheduler] [519] [main] - Scheduler QuartzScheduler_$_NON_CLUSTERED started.
2018-02-12 18:55:52,007 [sdfs] [org.opendedup.sdfs.filestore.gc.SDFSGCScheduler] [53] [main] - Stand Alone Garbage Collection Jobs Scheduled will run first at Sun Feb 18 12:00:00 KST 2018
2018-02-12 18:55:52,058 [sdfs] [fuse.SDFS.MountSDFS] [236] [main] - Mount Option : -f
2018-02-12 18:55:52,059 [sdfs] [fuse.SDFS.MountSDFS] [236] [main] - Mount Option : /media/
2018-02-12 18:55:52,059 [sdfs] [fuse.SDFS.MountSDFS] [236] [main] - Mount Option : -o
2018-02-12 18:55:52,059 [sdfs] [fuse.SDFS.MountSDFS] [236] [main] - Mount Option : modules=iconv,from_code=UTF-8,to_code=UTF-8,direct_io,allow_other,nonempty,big_writes,allow_other,fsname=sdfs:/etc/sdfs/XMAS2018-volume-cfg.xml:6442
2018-02-12 18:55:52,062 [sdfs] [fuse.SDFS.SDFSFileSystem] [81] [Thread-22] - mounting /volume/DEDUP/files to /media/
2018-02-12 19:05:39,400 [sdfs] [fuse.SDFS.MountSDFS] [260] [main] - Please Wait while shutting down SDFS
2018-02-12 19:05:39,401 [sdfs] [fuse.SDFS.MountSDFS] [261] [main] - Data Can be lost if this is interrupted
2018-02-12 19:05:39,402 [sdfs] [org.opendedup.sdfs.servers.SDFSService] [118] [main] - Shutting Down SDFS
2018-02-12 19:05:39,402 [sdfs] [org.opendedup.sdfs.servers.SDFSService] [119] [main] - Stopping FDISK scheduler
2018-02-12 19:05:39,403 [sdfs] [org.opendedup.sdfs.servers.SDFSService] [128] [main] - Flushing and Closing Write Caches

unnecessary limit. Drive is almost full.

The real disk is 356 GB (/media/D); the current de-duplication folder is 223 GB, with 114 GB of free space.

But the system does not allow me to write data into the deduplication volume.
There is an error in the file /var/log/sdfs/dedu384-volume-cfg.xml.log:

DSE HashMap Size [67231503] and DSE HashMap Max Size is [67116864]

I do not understand why this DSE HashMap Max Size limit even exists.
Maybe it would be better for the DSE to check space availability on the real disk instead?

Permission Denied reported while copying to a local sdfs volume with SDFS v3.1.7 (tag 3.1.7)

Setup Configuration -
OS - Ubuntu (kernel version 3.13.0-32)
RAM - 8GB
SDFS Version - latest code taken from GitHub with tag 3.1.7 and compiled
Note - SDFS was installed as per the steps mentioned at the following link, and then the sdfs binary was replaced with the one compiled from tag 3.1.7:
https://github.com/opendedup/sdfs

Test Scenario -

  1. Create local sdfs volume with capacity 1500MB, compression enable using following command -
    /sbin/mkfs.sdfs --volume-name=volume02 --volume-capacity=1500MB --chunk-store-compress=true
  2. Create mount point name as /mnt/volume
  3. Mount the created sdfs volume using command -
    mount -t sdfs volume02 /mnt/volume
  4. Run a script which creates a 300MB file using the dd command with random data and copies it to the sdfs volume.
  5. Remove the previously copied file from the sdfs volume and repeat Step 4 multiple times to test performance.

Expected Result -
No issue should be reported and copy should complete successfully.

Result -
In the test iterations, Permission Denied is reported for 3-4 files.
cp: error writing ‘/mnt/volume/list_5.txt’: Permission denied
cp: failed to extend ‘/mnt/volume/list_5.txt’: Permission denied
cp: failed to close ‘/mnt/volume/list_5.txt’: Permission denied
Command exited with non-zero status 1

Initial Analysis -
After initial analysis of the logs, it is seen that a java.lang.NullPointerException is reported while writing a hash.
2016-07-11 23:54:09,384 [sdfs] [org.opendedup.sdfs.filestore.BatchFileChunkStore] [?] [pool-2-thread-4] - error writing hash
java.lang.NullPointerException
at org.opendedup.sdfs.filestore.HashBlobArchive.putChunk(Unknown Source)
at org.opendedup.sdfs.filestore.HashBlobArchive.writeBlock(Unknown Source)
at org.opendedup.sdfs.filestore.BatchFileChunkStore.writeChunk(Unknown Source)
at org.opendedup.sdfs.filestore.ChunkData.persistData(Unknown Source)
at org.opendedup.collections.ProgressiveFileBasedCSMap.put(Unknown Source)
at org.opendedup.collections.ProgressiveFileBasedCSMap.put(Unknown Source)
at org.opendedup.sdfs.filestore.HashStore.addHashChunk(Unknown Source)
at org.opendedup.sdfs.servers.HashChunkService.writeChunk(Unknown Source)
at org.opendedup.sdfs.servers.HCServiceProxy.writeChunk(Unknown Source)
at org.opendedup.hashing.Finger.run(Unknown Source)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

After debugging, it was found that the issue is not in the compress function; the NullPointerException is being reported in the putChunk function of the HashBlobArchive class while calling wMaps.get(this.id).getCurrentSize(), where wMaps is a ConcurrentHashMap.

Logs attached for reference.
sdfs-latest_v3.1.7_compress.zip

Thanks
Ashish

AWS Cloud - default bucket location

Looks like there is some issue with default-bucket-location. I can't seem to find any documentation on the setting. Here is my stack trace:

[root@STEP1 ~]# mount.sdfs -v Filesystem-cloud1 -m /opt/step/shares/Filesystem-cloud1
Running Program SDFS Version 2.0.11
reading config file = /etc/sdfs/Filesystem-cloud1-volume-cfg.xml
Initializing HashFunction
HashFunction Initialized
Unable to initiate ChunkStore
java.io.IOException: java.lang.NullPointerException
at org.opendedup.sdfs.filestore.S3ChunkStore.init(S3ChunkStore.java:424)
at org.opendedup.sdfs.servers.HashChunkService.(HashChunkService.java:50)
at org.opendedup.sdfs.servers.HCServiceProxy.init(HCServiceProxy.java:160)
at org.opendedup.sdfs.servers.SDFSService.start(SDFSService.java:62)
at fuse.SDFS.MountSDFS.main(MountSDFS.java:152)
Caused by: java.lang.NullPointerException
at org.opendedup.sdfs.filestore.S3ChunkStore.init(S3ChunkStore.java:350)
... 4 more

Same IV spec is used for every file upload in a single volume which is insecure.

Setup Configuration :-
OS - Ubuntu ( Kernel Version 3.13.0-32)
RAM - 8GB
SDFS Version - Latest Code taken from GIT Hub with Master Branch and compiled
Following scenario has been seen in the sdfs code :-

  1. In sdfs/src/org/opendedup/util/EncryptUtils.java file, the IV (Initialization Vector) can be read in either of the below ways :-

a. By passing “encryption-iv” parameter while creating the volume using mkfs.sdfs command or
b. If not passed with “encryption-iv” parameter then it is read from sdfs/src/org/opendedup/sdfs/Main.java file

Which is further passed as an argument in IvParameterSpec as

IvParameterSpec spec = new IvParameterSpec(iv);

  2. The IvParameterSpec object is then passed to the cipher.init() method as

a. in encryptCBC() method :-
cipher.init(Cipher.ENCRYPT_MODE, key, spec);

b. decryptCBC() method :-
cipher.init(Cipher.DECRYPT_MODE, key, spec);

Test Scenario :-

  1. Create the volume.
  2. Copy/upload a file to the volume.
  3. Download the file from the volume.

Expected Result :-

The IV “spec” used must be different for each file upload in the volume.

Current Result :-

I added logging to print the spec during upload (encrypt) and download (decrypt), and noted that the same spec is used for every file upload in the same volume, which is insecure.

So, is there any way to use a different IV for each file upload in the same volume?

Thanks,
Deepa

Local Dedup Volume using Cloud storage?

I created a dedupe volume using the dedup volume manager. I've been transferring files to it at 90+ MB/s consistently for the last 12 hours.

I just set up a cloud-backed filesystem (AWS) and began copying other data into it. Interestingly enough, the original transfer has come to a crawl. I get the sense the original volume is somehow using the DSE for the cloud-backed FS I just set up (even if that isn't possible, it has definitely come to a total crawl):

What should I be looking at?

XXXX/XXXXXX/FILE.EXT
314513152 100% 63.52MB/s 0:00:04 (xfer#37407, to-check=4045/42305)
XXXX/XXXXXX/FILE2.EXT
340211304 100% 2.54MB/s 0:02:07 (xfer#37408, to-check=4044/42305)

No generic build instructions

After reading through the source code several times, I cannot figure out how to build opendedup. Could a Makefile be provided (preferably with DESTDIR support)?

sdfs recreates absent folder

  1. Mount a partition at folder /media/disk
  2. Create folder /media/disk/DDbackend
  3. Create the sdfs "ddtest" volume and set
    dedup-db-store="/media/disk/DDbackend/ddtest/ddb" and all other paths in the xml so all SDFS data will be located at /media/disk/DDbackend/ddtest/
  4. Mount ddtest. The SDFS DDB and folder structures will be created at /media/disk/DDbackend/ddtest/*
  5. Unmount ddtest
  6. After a successful stop of SDFS, unmount the partition from /media/disk
    (now the folder DDbackend is absent)
  7. Mount ddtest

Expected: the system must show an error that the folder /media/disk/DDbackend does not exist.
Actual result: the system recreates the entire path /media/disk/DDbackend/ddtest and creates a new DDB and SDFS folder structure on the root partition of Linux.

unable to de-serialize file

My log keeps repeating errors for one file that no longer exists on the SDFS volume:

2018-04-24 08:07:06,624 [ERROR] [sdfs] [org.opendedup.sdfs.io.MetaDataDedupFile] [480] [pool-3-thread-2] - unable to de-serialize F:\SDFS\files\gashnetplus_SQL\GashNetPlus-20180418211627-(ac812bf2-4a41-48a2-8711-53945e79972e)-Full.bak
java.io.EOFException
at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2675)
at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3150)
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:859)
at java.io.ObjectInputStream.(ObjectInputStream.java:355)
at org.opendedup.sdfs.io.MetaDataDedupFile.getFile(MetaDataDedupFile.java:468)
at org.opendedup.sdfs.filestore.MetaFileStore.getNCMF(MetaFileStore.java:160)
at org.opendedup.sdfs.windows.fs.WinSDFS$ListFiles.run(WinSDFS.java:1046)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
2018-04-24 08:07:06,640 [ERROR] [sdfs] [org.opendedup.sdfs.filestore.MetaFileStore] [164] [pool-3-thread-2] - unable to get F:\SDFS\files\gashnetplus_SQL\GashNetPlus-20180418211627-(ac812bf2-4a41-48a2-8711-53945e79972e)-Full.bak
java.io.IOException: java.io.EOFException
at org.opendedup.sdfs.io.MetaDataDedupFile.getFile(MetaDataDedupFile.java:481)
at org.opendedup.sdfs.filestore.MetaFileStore.getNCMF(MetaFileStore.java:160)
at org.opendedup.sdfs.windows.fs.WinSDFS$ListFiles.run(WinSDFS.java:1046)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.io.EOFException
at java.io.ObjectInputStream$PeekInputStream.readFully(ObjectInputStream.java:2675)
at java.io.ObjectInputStream$BlockDataInputStream.readShort(ObjectInputStream.java:3150)
at java.io.ObjectInputStream.readStreamHeader(ObjectInputStream.java:859)
at java.io.ObjectInputStream.(ObjectInputStream.java:355)
at org.opendedup.sdfs.io.MetaDataDedupFile.getFile(MetaDataDedupFile.java:468)
... 5 more
2018-04-24 08:07:06,640 [ERROR] [sdfs] [org.opendedup.sdfs.windows.fs.WinSDFS$ListFiles] [1051] [pool-3-thread-2] - error getting file F:\SDFS\files\gashnetplus_SQL\GashNetPlus-20180418211627-(ac812bf2-4a41-48a2-8711-53945e79972e)-Full.bak
java.lang.NullPointerException
at org.opendedup.sdfs.windows.fs.MetaDataFileInfo.(MetaDataFileInfo.java:36)
at org.opendedup.sdfs.windows.fs.WinSDFS$ListFiles.run(WinSDFS.java:1047)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)

sdfs_1800GB-volume-cfg.zip
sdfs_1800GB-volume-cfg.log.zip

website errors

These are links in the doc or in the GH readme.
http://www.opendedup.org/sdfs-20-administration-guide

http://www.opendedup.org/cbdquickstart

Both give this error...

Warning: include(/var/chroot/home/content/a/n/n/annesam/html/piwik/tmp/templates_c/help.php) [function.include]: failed to open stream: No such file or directory in /home/content/42/5836542/html/index.php on line 3

Warning: include() [function.include]: Failed opening '/var/chroot/home/content/a/n/n/annesam/html/piwik/tmp/templates_c/help.php' for inclusion (include_path='.:/usr/local/php5/lib/php') in /home/content/42/5836542/html/index.php on line 3

Warning: session_start() [function.session-start]: Cannot send session cookie - headers already sent by (output started at /home/content/42/5836542/html/index.php:3) in /home/content/42/5836542/html/libraries/joomla/session/session.php on line 423

Warning: session_start() [function.session-start]: Cannot send session cache limiter - headers already sent (output started at /home/content/42/5836542/html/index.php:3) in /home/content/42/5836542/html/libraries/joomla/session/session.php on line 423

Warning: Cannot modify header information - headers already sent by (output started at /home/content/42/5836542/html/index.php:3) in /home/content/42/5836542/html/libraries/joomla/session/session.php on line 426

lack of integrity check commands

If one of the chunks is broken, then the de-duplicated data can be corrupted. We need to run a hash check of the stored chunks.
Is there any way to run such a check?

3.5.2 java errors during mount - Windows 2012 R2

The version I downloaded shows 3.5.2 and I am unable to find a change log (or a commit) for it. We end up with a java error (see next comment) when trying to mount a newly created volume/bucket pair.

Short Term:
Wondering if there is a repo somewhere of previously compiled builds? I started working with this project just last week and I have a feeling I was using (and getting better results with) a previous version.

Long Term:

  • Fix / understand / work around the current mount errors (see comment below)
  • Understand how to remount a volume where all we have is the data in a cloud bucket (DR event testing)
  • Understand better which setting needs to be toggled to write in-line deduplicated data to a cloud bucket that is also encrypted with a unique key of our choosing.

typo "--volume_name" in Admin guide

There is a typo in the "Admin guide for Version 2.0":
administration-guide

Creating A Standalone SDFS Volume

To create a simple standalone volume named "dedup" with a dedup capacity of 1TB and a block size of 128K, run the following command:

mkfs.sdfs --volume_name=dedup --volume-capacity=1TB
[...]

This should be --volume-name for sdfs version 3.3.6

mkfs.sdfs --volume_name dedup --volume-capacity 1TB
Attempting to create SDFS volume ...
ERROR : Unable to create volume because org.apache.commons.cli.UnrecognizedOptionException: Unrecognized option: --volume_name
org.apache.commons.cli.UnrecognizedOptionException: Unrecognized option: --volume_name
at org.apache.commons.cli.Parser.processOption(Parser.java:363)
at org.apache.commons.cli.Parser.parse(Parser.java:199)
at org.apache.commons.cli.Parser.parse(Parser.java:85)
at org.opendedup.sdfs.VolumeConfigWriter.parseCmdLine(VolumeConfigWriter.java:145)
at org.opendedup.sdfs.VolumeConfigWriter.main(VolumeConfigWriter.java:1204)

mkfs.sdfs --volume-name=dedup --volume-capacity=1TB
Attempting to create SDFS volume ...
Volume [dedup] created with a capacity of [1TB]
check [/etc/sdfs/dedup-volume-cfg.xml] for configuration details if you need to change anything

How to use SDFS with Backblaze cloud backend?

I found some B2 store related code in the source files and a commit which seems related, but I am not able to figure out how to use Backblaze cloud storage as a backend for opendedup SDFS.

Please could you advise?

Project still alive?

Hi all,

Is this project still alive?
It is missing a lot of proper documentation, and there is no communication in the forum or by email.
It seems like this project is moving in a commercial direction (Datish Systems) with their virtual appliance, which already asks for a license.

Greets,
Stefan Nader

sdfs/README.md

The following steps will create 2 DSEs on server1 and server2

Step 1: Create a DSE on Server1 using a 4K block size and 200GB of capacity and cluster node id of "1"

mkdse --dse-name=sdfs --dse-capacity=200GB --cluster-node-id=1

Step 2: Edit the /etc/sdfs/jgroups.cfg.xml and add the bind_addr attribute with Server1's IP address to the tag.

<UDP
mcast_port="${jgroups.udp.mcast_port:45588}"
tos="8"
ucast_recv_buf_size="5M"
ucast_send_buf_size="640K"
mcast_recv_buf_size="5M"
mcast_send_buf_size="640K"
loopback="true"
max_bundle_size="64K"
bind_addr="SERVER1 IP Address"

Step 3: Start the DSE service on Server1

startDSEService.sh -c /etc/sdfs/sdfs-dse-cfg.xml &

Step 4: Create a DSE on Server1 using a 4K block size and 200GB of capacity and cluster node id of "1"

mkdse --dse-name=sdfs --dse-capacity=200GB --cluster-node-id=1

Step 5: Edit the /etc/sdfs/jgroups.cfg.xml and add the bind_addr attribute with Server1's IP address to the tag.

<UDP
mcast_port="${jgroups.udp.mcast_port:45588}"
tos="8"
ucast_recv_buf_size="5M"
ucast_send_buf_size="640K"
mcast_recv_buf_size="5M"
mcast_send_buf_size="640K"
loopback="true"
max_bundle_size="64K"
bind_addr="SERVER1 IP Address"

Step 6: Start the DSE service on Server1

startDSEService.sh -c /etc/sdfs/sdfs-dse-cfg.xml &

In the readme, step 1 and step 4, step 2 and step 5, and step 3 and step 6 are the same.

Unable to initialize HashChunkService

After running startDSEService.sh -c /etc/sdfs/sdfs-dse-cfg.xml &

Scanning DSE |] | 0%

then

Finished
Succesfully Ran Consistance Check for [0] records, recovered [0]
Unable to initialize HashChunkService 
java.lang.NullPointerException
	at org.opendedup.sdfs.servers.HCServiceProxy.init(HCServiceProxy.java:218)
	at org.opendedup.sdfs.network.ClusteredHCServer.init(ClusteredHCServer.java:186)
	at org.opendedup.sdfs.network.ClusteredHCServer.start(ClusteredHCServer.java:222)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.commons.daemon.support.DaemonLoader.start(DaemonLoader.java:243)
Service exit with a return value of 255
[root@lab8106 sdfs]# rpm -qa|grep sdfs
sdfs-3.3.9-1.x86_64
[root@lab8106 sdfs]# lsb_release -a
LSB Version:	:core-4.1-amd64:core-4.1-noarch:cxx-4.1-amd64:cxx-4.1-noarch
Distributor ID:	CentOS
Description:	CentOS Linux release 7.2.1511 (Core) 
Release:	7.2.1511
Codename:	Core

postgres for ddb

Can we connect to postgresql for ddb instead of embedded database?

Corrupted symlink targets

I've installed version 3.1.9 on Debian Wheezy. This involved rebuilding libfuse with the threads patch and libjavafs.so because of an old GLIBC.

sdfs launched successfully and I performed a backup of my database.
There were symlinked tables and all of them became corrupted.
I mean, the symlink targets were corrupted. They became 61-byte files with this content:
¬н..sr.'org.opendedup.sdfs.io.MetaDataDedupFileА-HAВsА5...xpx

Help recover v2 fs

Hello,

Is it possible to get the old v2 amd64 .deb packages? I used to create backups and know the exact volume config (variable, murmur3), but can't find an easy way to set up a v2.x.y opendedup. The v2.0.11 doesn't read the chunkstore correctly for some unknown reason.

Any help would be greatly appreciated!
Maybe you can tell me which commits belong to v2.x.y and how I can build them? (I opened the Eclipse project, but it depends on local java libraries :( )

Best, !evil

Path style with S3 storage (not AWS)

Hi,

Is it possible to change the path style and port for the S3 endpoint?

mkfs.sdfs --volume-name=pool0 --volume-capacity=1TB --aws-enabled true --cloud-access-key --cloud-secret-key --cloud-bucket-name

For instance, my endpoint would be something like : https://mydnsname:8082/mybucket/

Thanks

MEM not initialised in "mount.sdfs" under Ubuntu 16.04 LTS

I couldn't mount an "sdfs" filesystem under Ubuntu 16.04 LTS:

mount.sdfs dedup /mnt
Invalid maximum heap size: -XmxM
Cannot create Java VM
Service exit with a return value of 1

Tracing the execution with "bash -x" I can see that this is because $MEM is not initialised, but it can be set in the environment:

export MEM=1024
mount.sdfs dedup /mnt
Running Program SDFS Version 3.3.6
reading config file = /etc/sdfs/dedup-volume-cfg.xml
Loading BloomFilters |))))))))))))))))))))))))))))))))))))))))))))))))))| 100%
Waiting for last bloomfilters to load
Loaded entries 0
Mounted Filesystem

Can I configure opendedup not to use chunk comparison & lower memory

Can I configure opendedup not to use chunk comparison (neither static nor dynamic)?
Can I safely reduce -Xms and -Xmx to lower the memory requirement without adverse effects, or should I change something elsewhere too? 8GB is too high a limit for my system. I am also thinking of running it on a Raspberry Pi.
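Not answered in this thread, but the MEM variable shown in the "MEM not initialised" issue above appears to control the mount.sdfs JVM heap in megabytes (the error there was "-XmxM" when it was unset); a speculative sketch for a lower-memory system:

```bash
# Assumption: MEM sets the JVM heap size in MB for mount.sdfs (untested)
export MEM=2048
mount.sdfs dedup /mnt
```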

Unable to create a SDFS FSS by following README

Hello!

The README says that I can create an SDFS FSS with
mkfs.sdfs --volume-name=pool0 --volume-capacity=400GB --chunk-store-local false.

But it seems there is no option called --chunk-store-local.
Error messages:

root@ubuntu:/home/chufengtan# mkfs.sdfs --volume-name=pool0 --volume-capacity=1GB --chunk-store-local false
Attempting to create SDFS volume ...
ERROR : Unable to create volume because org.apache.commons.cli.UnrecognizedOptionException: Unrecognized option: --chunk-store-local
org.apache.commons.cli.UnrecognizedOptionException: Unrecognized option: --chunk-store-local
        at org.apache.commons.cli.Parser.processOption(Parser.java:363)
        at org.apache.commons.cli.Parser.parse(Parser.java:199)
        at org.apache.commons.cli.Parser.parse(Parser.java:85)
        at org.opendedup.sdfs.VolumeConfigWriter.parseCmdLine(VolumeConfigWriter.java:161)
        at org.opendedup.sdfs.VolumeConfigWriter.main(VolumeConfigWriter.java:1295)

How can I create a SDFS FSS?
Thanks.

Doesn't work on Win10x64

After I installed opendedup, I tried so many things, but it's impossible to write any file to the mounted volume at all.

I followed:
http://www.opendedup.org/wqs

After I mounted the device by entering
mountsdfs -v sdfs_vol1 -m S
a new device appeared, which I was able to enter by using the command prompt or the explorer. Unfortunately it was just possible to create folders, using the prompt or the explorer.
Everytime I tried to create a file, by copying it from another source, Windows tells me that there isn't enough space.

So I digged deeper and found out, that the used file are stored on %ProgramFiles% - so there shouldn't be any write permission. So I gave "everybody" full access to that folder. But it still didn't work. So I found out that there is a command line argument the set the base path. So I tried to create a volume on a data-path by issuing:
mksdfs --base-path=E:\opendedup --volume-capacity=500MB --volume-name=test3

This worked fine until I mounted and tried to use the volume. Still the same issue... not enough space. :-/

By the way, I wondered why there is an "sdfs.cmd" documented at http://www.opendedup.org/wqs, because the only client I found that accepts those arguments is sdfscli...

Cannot allocate memory: Native memory allocation (mmap) failed to map 88013275136 bytes for committing reserved memory

# mount.sdfs pool0 /media/pool0/
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f3fcc000000, 88013275136, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 88013275136 bytes for committing reserved memory.
# An error report file with more information is saved as:
# //hs_err_pid803.log

88 GB o_O :D
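
For what it's worth, a diagnostic sketch (assuming, as in the MEM issue above, that the mount wrapper computes the -Xmx value itself; the script path is an example): tracing the wrapper shows the heap it asks for, and the request can be overridden with a value that actually fits in RAM, though whether the volume then runs well in a smaller heap is another question:

    # Show the -Xmx the wrapper computes, then retry with a smaller heap.
    bash -x /usr/sbin/mount.sdfs pool0 /media/pool0/ 2>&1 | grep -o -e '-Xmx[0-9]*M'
    export MEM=4096     # example value in MB; must fit in available RAM
    mount.sdfs pool0 /media/pool0/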

Volume capacity XML string setting error - parse error from mountsdfs.exe

mksdfs --volume-name=sdfs_vol1 --volume-capacity=2000GB
mountsdfs -v sdfs_vol1 -m S
It was OK on the first run yesterday... then today it was broken:
D:\Program Files\sdfs>mountsdfs -v sdfs_vol1 -m S
Running Program SDFS Version 3.7.6.0 build date 2018-06-19 22:17
reading config file = D:\Program Files\sdfs\etc\sdfs_vol1-volume-cfg.xml
java.lang.NumberFormatException: For input string: "1,95"
at sun.misc.FloatingDecimal.readJavaFormatString(FloatingDecimal.java:2043)
at sun.misc.FloatingDecimal.parseFloat(FloatingDecimal.java:122)
at java.lang.Float.parseFloat(Float.java:451)
at org.opendedup.util.StringUtils.parseSize(StringUtils.java:102)
at org.opendedup.sdfs.io.Volume.<init>(Volume.java:281)
at org.opendedup.sdfs.Config.parseSDFSConfigFile(Config.java:373)
at org.opendedup.sdfs.servers.SDFSService.start(SDFSService.java:66)
at org.opendedup.sdfs.windows.fs.MountSDFS.main(MountSDFS.java:161)
Exiting because java.lang.NumberFormatException: For input string: "1,95"

Workaround:
notepad D:\Program Files\sdfs\etc\sdfs_vol1-volume-cfg.xml
Now the file again ends up with the wrong setting, even after I manually fixed it to "1.95 TB":
<volume allow-external-links="true" capacity="1,95 TB" closed-gracefully="false"
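
The root cause looks like a locale issue: the capacity keeps getting rewritten with a decimal comma ("1,95 TB"), which Float.parseFloat rejects. A hedged workaround, assuming the value only needs to parse cleanly, is to put an unambiguous integer capacity back into the volume XML (the line below is the attribute from this report with only the capacity value changed); whether the volume rewrites it with a comma again on the next unmount is the part I cannot vouch for:

    <volume allow-external-links="true" capacity="2000 GB" closed-gracefully="false"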

Windows exception when mounting

H:\>"C:\Program Files\sdfs\mountsdfs.exe" -v cranberry-vol1  -m x -cp
cp=C:\Program Files\sdfs\bin\jre\bin\java.exe -Djava.library.path="C:\Program Files\sdfs\bin/"  -Xmx6839M -XX:+UseG1GC -Djava.awt.headless=true -server -cp "C:\Program Files\sdfs\lib\sdfs.jar";"C:\Program Files\sdfs\lib\*" org.opendedup.sdfs.windows.fs.MountSDFS -v cranberry-vol1 -m xRunning Program SDFS Version 3.4.7.1
reading config file = C:\Program Files\sdfs\etc\cranberry-vol1-volume-cfg.xml
target=https://cranberry
disableDNSBucket=true
Loading Existing Hash Tables |))))))))))))))))))))))))))))))))))))))))))))))))))| 100%

Loading BloomFilters |))))))))))))))))))))))))))))))))))))))))))))))))))| 100%

Waiting for last bloomfilters to load
ReCreating BloomFilters |))))))))))))))))))))))))))))))))))))))))))))))))))| 100%

ReCreating BloomFilters |))))))))))))))))))))))))))))))))))))))))))))))))))| 100%

Loading BloomFilters |))))))))))))))))))))))))))))))))))))))))))))))))))| 100%

Waiting for last bloomfilters to load
ReCreating BloomFilters |))))))))))))))))))))))))))))))))))))))))))))))))))| 100%

ReCreating BloomFilters |))))))))))))))))))))))))))))))))))))))))))))))))))| 100%

Waiting for all BloomFilters creation threads to finishjava.io.IOException: java.io.IOException: java.io.IOException: java.lang.NullPointerException
        at org.opendedup.collections.ShardedProgressiveFileBasedCSMap2.init(ShardedProgressiveFileBasedCSMap2.java:112)
        at org.opendedup.sdfs.filestore.HashStore.connectDB(HashStore.java:176)
        at org.opendedup.sdfs.filestore.HashStore.<init>(HashStore.java:83)
        at org.opendedup.sdfs.servers.HashChunkService.<init>(HashChunkService.java:80)
        at org.opendedup.sdfs.servers.HCServiceProxy.init(HCServiceProxy.java:202)
        at org.opendedup.sdfs.servers.SDFSService.start(SDFSService.java:78)
        at org.opendedup.sdfs.windows.fs.MountSDFS.main(MountSDFS.java:176)
Caused by: java.io.IOException: java.io.IOException: java.lang.NullPointerException
        at org.opendedup.collections.ShardedProgressiveFileBasedCSMap2.setUp(ShardedProgressiveFileBasedCSMap2.java:823)
        at org.opendedup.collections.ShardedProgressiveFileBasedCSMap2.init(ShardedProgressiveFileBasedCSMap2.java:110)
        ... 6 more
Caused by: java.io.IOException: java.lang.NullPointerException
        at org.opendedup.collections.ShardedProgressiveFileBasedCSMap2.setUp(ShardedProgressiveFileBasedCSMap2.java:815)
        ... 7 more
Caused by: java.lang.NullPointerException
        at org.opendedup.hashing.LargeBloomFilter.put(LargeBloomFilter.java:142)
        at org.opendedup.collections.LBFReconstructThread.run(LBFReconstructThread.java:41)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
        at java.lang.Thread.run(Thread.java:745)

H:\>

Just updated, it worked before the update.

Missing sst files prevent opendedup disk from mounting

My Fedora 27 PC froze (never happened before) and I had to hit the reset button.
It rebooted normally, and then I applied all the latest patches. After a final reboot, it forced me into maintenance mode because the opendedup disk could not mount. After removing it from fstab I could at least boot.

Trying to mount it manually, I get the following:

mount -t sdfs pool0 /media/opendedup

Running Program SDFS Version 3.6.0.12 build date 2018-03-05 16:19
reading config file = /etc/sdfs/pool0-volume-cfg.xml
multiplier=32 size=8
mem=19668800 memperDB=2458600 bufferSize=1073741824 bufferSizePerDB=134217728
Loading Existing Hash Tables |))))))))))))))))))))))))))))))))))))))))))))] | 88% java.io.IOException: org.rocksdb.RocksDBException: Can't access /000018.sst: IO error: while stat a file for size: /media/500GB-BKUP/sdfs-volumes/chunkstore/hdb-8708970613374839576/hashstore-sdfs/0/000018.sst: No such file or directory
Can't access /000012.sst: IO error: while stat a file for size: /media/500GB-BKUP/sdfs-volumes/chunkstore/hdb-8708970613374839576/hashstore-sdfs/0/000012.sst: No such file or directory
Can't access /000007.sst: IO error: while stat a file for size: /media/500GB-BKUP/sdfs-volumes/chunkstore/hdb-8708970613374839576/hashstore-sdfs/0/000007.sst: No such file or directory

at org.opendedup.collections.RocksDBMap.init(RocksDBMap.java:301)
at org.opendedup.sdfs.filestore.HashStore.connectDB(HashStore.java:187)
at org.opendedup.sdfs.filestore.HashStore.<init>(HashStore.java:83)
at org.opendedup.sdfs.servers.HashChunkService.<init>(HashChunkService.java:74)
at org.opendedup.sdfs.servers.HCServiceProxy.init(HCServiceProxy.java:154)
at org.opendedup.sdfs.servers.SDFSService.start(SDFSService.java:86)
at fuse.SDFS.MountSDFS.setup(MountSDFS.java:214)
at fuse.SDFS.MountSDFS.init(MountSDFS.java:253)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.commons.daemon.support.DaemonLoader.load(DaemonLoader.java:207)

Caused by: org.rocksdb.RocksDBException: Can't access /000018.sst: IO error: while stat a file for size: /media/500GB-BKUP/sdfs-volumes/chunkstore/hdb-8708970613374839576/hashstore-sdfs/0/000018.sst: No such file or directory
Can't access /000012.sst: IO error: while stat a file for size: /media/500GB-BKUP/sdfs-volumes/chunkstore/hdb-8708970613374839576/hashstore-sdfs/0/000012.sst: No such file or directory
Can't access /000007.sst: IO error: while stat a file for size: /media/500GB-BKUP/sdfs-volumes/chunkstore/hdb-8708970613374839576/hashstore-sdfs/0/000007.sst: No such file or directory

at org.rocksdb.RocksDB.open(Native Method)
at org.rocksdb.RocksDB.open(RocksDB.java:231)
at org.opendedup.collections.RocksDBMap$StartShard.run(RocksDBMap.java:906)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

Service exit with a return value of 255

The three sst files it refers to are definitely not there. I've attached the logs from the log folder.

The issue occurred on 2 May. The PC froze around 02h16, and was hard reset around 02h27. The patches were installed after downloading - from about 02h50 onwards, with the reboot after the patches around 03h02.

What do I do to recover this opendedup instance that will not mount?
dnf-updates.log
sdfslogs.tar.gz

DSE Consistency Check Hangs in VMs

I am using CentOS 7 VMs to test SDFS. It seems that regardless of the hypervisor (I tried Virtual Box and Parallels) the DSE Consistency Check hangs at 0% and never completes. I cannot reproduce this on physical hardware. As for methodology, I was simply testing the time to run a DSE Consistency Check. I have ~1GB in the chunk store, so it is a relatively small amount of data. I unmounted the SDFS volume gracefully, edited the XML to change closed-gracefully to false to force a DSE check. The logs provide no useful information other than the mention of the DSE check starting. Let me know what other information I can provide.
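
For anyone trying to reproduce this, the steps boil down to the following sketch (paths are examples; the closed-gracefully edit is the one described above):

    # Reproduce: unmount cleanly, then mark the volume as not closed gracefully
    # so the next mount forces a DSE consistency check.
    umount /media/pool0
    sed -i 's/closed-gracefully="true"/closed-gracefully="false"/' /etc/sdfs/pool0-volume-cfg.xml
    mount -t sdfs pool0 /media/pool0     # hangs at 0% inside a VM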

writeHashBlobArchive() function of the cloud file store is not getting hit in the SDFS code.

Hi,

We added a pluggable encryption mechanism to support different encryption types in sdfs. The type can be passed to the mkfs.sdfs command as the argument to "chunk-store-encrypt-type".

Now we want to record the encryption type in the metadata. We added the encryption type to the writeHashBlobArchive() function of the cloud file stores for chunks. However, in the current flow, when a copy operation is performed on a cloud volume, putChunk() of the HashBlobArchive class is called from the cloud store's writeChunk(), which doesn't save the data in the metadata.

Can you please suggest why the writeHashBlobArchive() function is not getting hit?

Corruption of source files using "rsync" to sdfs volume

I've just tried:

rsync -axP / /mnt

Where "/mnt" is an sdfs volume and discovered that it systematically corrupts the source files, replacing them with files 61 byes long reported as "Java serialization data, version5". This confirms a bug reported last February on the "sdfs" Google group:

rsync sdfs BUG report

Yes, I backed up my system before trying "sdfs", but this is still a show-stopper for me :-(
