tongwang / s3fs-c

S3FS-C is a FUSE (File System in User Space) based file system backed by Amazon S3 storage buckets. Once mounted, S3 can be used as if it were a local file system. This project was forked from S3FS (http://code.google.com/p/s3fs/) release 1.59 and is being rewritten to be compatible with other S3 clients such as s3cmd, the AWS Management Console, etc.

License: GNU General Public License v2.0

Shell 25.28% C++ 70.77% C 3.95%

s3fs-c's People

Contributors

franc-carter-sirca, memorycraft, tongwang


s3fs-c's Issues

Merge s3fs version 1.61?

Thanks for the fork; it's been very useful. I noticed, though, that a number of fixes and features have been added since you forked from s3fs. Any chance you can merge these changes in? I could possibly take a stab at it myself and just send you the patches.

Debugging

Hi. Can you tell me how to debug a virtual FUSE file system such as s3fs-c? How does it work? Can I do it with an IDE? Thanks.

No space left on device error for big files

Hi, I am running Arch Linux
Linux li49-198 3.4.2-linode44 #1 SMP Tue Jun 12 15:04:46 EDT 2012 i686 GNU/Linux
with fuse 2.9.0-1

I mounted my bucket with this fstab entry:
s3fs#bucket /mnt/s3-torrent fuse rw,noauto,noatime,uid=1001,gid=100,umask=007,allow_other 0 0

When trying to copy a 300M file to my mounted bucket I get this error:
cp: writing '/mnt/s3/file.m4v': No space left on device
cp: failed to extend '/mnt/s3/file.m4v': No space left on device
cp: closing '/mnt/s3t/file.m4v': Input/output error
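For context, an assumption about the likely cause rather than a confirmed diagnosis: the kernel takes a FUSE filesystem's capacity from its statfs callback, and if the mount advertises a fixed total size, cp fails with ENOSPC as soon as a write exceeds the advertised free space, regardless of what S3 actually allows. df prints exactly the numbers statfs reports, so comparing them against the file size is the quick diagnostic (here /tmp stands in for the S3 mount point, which is not available in this sketch):

```shell
# cp's ENOSPC comes from the capacity the filesystem's statfs() reports,
# not from S3 itself. df surfaces exactly those statfs numbers, so check
# whether the advertised free space is smaller than the file being copied.
# (/tmp stands in for the s3fs mount point in this sketch.)
df -B1 --output=size,avail /tmp | tail -n 1
```

Running the same command against the real mount point would show whether the advertised capacity is smaller than the 300M file.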

Incorrect file timestamps, sizes, and contents reported when using rsync on S3 mount

Stale timestamps, sizes, and contents reported for changed files when using rsync to synchronize to an S3 mount.

Steps to reproduce the issue:

Mount S3 bucket and create local file.

ifactor@dom:~$ s3fs-c somebucket S3
ifactor@dom:~$ echo Hello >foo

Copy file to S3 mount via rsync.

ifactor@dom:~$ rsync -acv foo S3/foo
sending incremental file list
foo

sent 112 bytes  received 31 bytes  57.20 bytes/sec
total size is 6  speedup is 0.04

Verify original and copied file metadata match.

ifactor@dom:~$ ls -l foo S3/foo
-rw-rw-r-- 1 ifactor ifactor 6 2012-06-15 22:17 foo
-rwxrwxrwx 1 root    root    6 2012-06-15 22:17 S3/foo

Change local file and re-rsync.

ifactor@dom:~$ echo World >>foo
ifactor@dom:~$ rsync -acv foo S3/foo
sending incremental file list
foo

sent 118 bytes  received 31 bytes  59.60 bytes/sec
total size is 12  speedup is 0.08

Examining file via AWS console interface shows expected file timestamp and size at this point.

Original and copied file metadata (timestamp and size) do not match as expected when checked via mount.

ifactor@dom:~$ ls -l foo S3/foo
-rw-rw-r-- 1 ifactor ifactor 12 2012-06-15 22:19 foo
-rwxrwxrwx 1 root    root     6 2012-06-15 22:17 S3/foo

Contents of files appear to be different as well.

ifactor@dom:~$ cat foo
Hello
World
ifactor@dom:~$ cat S3/foo
Hello

All subsequent rsync runs will recopy data, defeating the purpose of rsync.

ifactor@dom:~$ rsync -acv foo S3/foo
sending incremental file list
foo

sent 118 bytes  received 31 bytes  99.33 bytes/sec
total size is 12  speedup is 0.08
ifactor@dom:~$ ls -l foo S3/foo
-rw-rw-r-- 1 ifactor ifactor 12 2012-06-15 22:19 foo
-rwxrwxrwx 1 root    root     6 2012-06-15 22:17 S3/foo

Unmounting and remounting solves the problem.

ifactor@dom:~$ fusermount -u S3
ifactor@dom:~$ s3fs-c somebucket S3
ifactor@dom:~$ ls -l foo S3/foo
-rw-rw-r-- 1 ifactor ifactor 12 2012-06-15 22:19 foo
-rwxrwxrwx 1 root    root    12 2012-06-15 22:20 S3/foo

This bug also affects and has been reported as Issue 276 with the s3fs project.

Release the project.

I'm not interested in maintaining this, but others certainly are. You've got open commits and branches that are years ahead in development. Merge them, or reference them as newer, more authoritative copies. Or appoint a new admin.

@tongwang

Access Denied for certain buckets

This happens even when correct credentials are provided:

s3fs: CURLE_HTTP_RETURNED_ERROR
s3fs: HTTP Error Code: 403
s3fs: AWS Error Code: AccessDenied
s3fs: AWS Message: Access Denied

Remove support for symbolic links

Is the symbolic link feature really necessary? Other S3 tools see a symbolic link created by s3fs as a regular object. Symbolic links created on one host may not make sense on another host, so they are not portable. Consider removing the support for symbolic links.
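One way to see why this is a portability problem: a symbolic link carries nothing but a target path, which s3fs has to persist as a small S3 object. Any client that does not know that convention just sees an ordinary object, and the stored path may not even exist on the machine that downloads it. A minimal local sketch (the paths are illustrative):

```shell
# A symlink holds only its target path -- nothing else. s3fs stores that
# path as a small S3 object, so other clients (s3cmd, the AWS console)
# see a plain file whose contents are a path string from some other host.
tmp=$(mktemp -d)
ln -s /etc/hosts "$tmp/link"   # create a symlink to an absolute path
readlink "$tmp/link"           # prints /etc/hosts: the symlink's entire payload
rm -r "$tmp"
```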

Taking up gigs of memory

Any reason why this would be taking up so much memory after a few days of running? It ends up consuming 3.5 GB of memory and exhausting 1.5 GB of swap before I kill the process.

rsync directory timestamp setting fails under some circumstances

From http://code.google.com/p/s3fs/wiki/FuseOverAmazon it appears that S3FS should support timestamp modifications.

I am however running into rsync errors stating:
rsync: failed to set times on "/mnt/s3test/hello": No such file or directory (2)
(Where hello is a "directory")

The data within the bucket was created from an Amazon hard drive import with their own internal tools. s3fs-c was needed to be able to see these "directories," as they were invisible to s3fs.

I can use --size-only but would prefer to preserve times as they are relevant for my application. Does anyone have any suggestions which may help?

Incorrect filesize reported

This made me laugh when I saw it:

$ ls -la /mnt/s3/bts-backups/illusion/da/user.admin.dan.tar.gz
-rwxrwxrwx 1 root root 18446744073709551615 2011-12-19 03:42 /mnt/s3/bts-backups/illusion/da/user.admin.dan.tar.gz

The file is only 6 GB (s3cmd lists it at 6449064308 bytes). Any idea why this is reporting as 16 exabytes?
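For what it's worth, 18446744073709551615 is exactly 2^64 − 1: the bit pattern of a signed −1 reinterpreted as an unsigned 64-bit value. That suggests a −1 error or "unknown size" sentinel leaking into the unsigned st_size field somewhere, rather than a genuinely corrupt size. A shell one-liner reproduces the wraparound:

```shell
# 18446744073709551615 == 2^64 - 1, i.e. what a signed -1 becomes when it
# is stored into an unsigned 64-bit field (e.g. an error sentinel leaking
# into st_size). bash's printf shows the same reinterpretation:
printf '%u\n' -1
```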

Problem with one bucket and two or more mount points

Hello!

I am trying to use s3fs-c for our project. We have one bucket and two mount points (on the same machine). We copy a file into the bucket with 'cp SRC DST' via one mount point, then read it from the second mount point with 'cat file', and we see that the file has an old version of its data. If we unmount and then remount the second mount point, the file returns the correct data.

OS: Debian 7.0

Incorrect file data/size after modification

I've been using s3fs-c to mount a remote git repo hosted in an S3 bucket. I find that when I do git operations on the mounted directory, like "git config ....", and then "cat config", the file contents are incorrect. If I kill and restart s3fs, "cat" produces the correct contents. This is without any parameters like use_cache. I have a guess that this has nothing to do with git, but I haven't played around yet to isolate it. Is anyone else seeing this?

FAQ question

I think a popular FAQ question would be the correct rsync settings for s3fs-c. Some people say to use --inplace, others say not to.

ls: reading directory ./: Input/output error

Hi,

I have a bucket with more than ten thousand files. I get the following error when running ls on a folder, but it works fine for folders with fewer than 1,000 files. Please help.

ls: reading directory ./: Input/output error

I used the following command to mount the bucket:

s3fs -o passwd_file=/root/.passwd-s3fs -d mybucjet -ouse_cache=/tmp/ -o allow_other -o max_stat_cache_size=90000000000 /mnt/production-s3

Regards,
Mudassir Aftab
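A plausible cause (an assumption, not confirmed against the code): S3's ListObjects call returns at most 1,000 keys per request and signals continuation via IsTruncated/NextMarker, so a readdir implementation that ignores the continuation fields breaks at exactly the 1,000-entry boundary reported here. The loop S3 expects can be sketched locally, with a sorted list of fake keys standing in for the bucket:

```shell
# Marker-based pagination as S3's ListObjects requires: at most 1000 keys
# per "request", with the last key of each page fed back as the marker
# for the next request. Fake zero-padded keys stand in for the bucket.
keys=$(seq -w 1 2500)                 # 2500 sorted fake object keys
marker=""
total=0
while :; do
  # "request" up to 1000 keys strictly after the marker, as S3 does
  page=$(printf '%s\n' $keys | awk -v m="$marker" '$0 > m' | head -n 1000)
  if [ -z "$page" ]; then break; fi
  total=$((total + $(printf '%s\n' $page | wc -l)))
  marker=$(printf '%s\n' $page | tail -n 1)   # becomes NextMarker
done
echo "$total"                         # all 2500 keys, fetched in 3 pages
```

A listing that stopped after the first "request" would see only 1,000 entries, which matches the symptom above.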
