yum install gcc libstdc++-devel gcc-c++ curl curl-devel libxml2 libxml2-devel openssl-devel mailcap
wget -O fuse-2.9.3.tar.gz http://sourceforge.net/projects/fuse/files/latest/download?source=files
tar -xzvf fuse-2.9.3.tar.gz
cd fuse-2.9.3
./configure --prefix=/usr
make && make install
export PKG_CONFIG_PATH=/usr/lib/pkgconfig:/usr/lib64/pkgconfig/
modprobe fuse
pkg-config --modversion fuse
yum install git
cd ../
git clone https://github.com/s3fs-fuse/s3fs-fuse.git
cd s3fs-fuse/
vim src/s3fs.cpp
Change: std::string host = "http://s3.amazonaws.com";
to:     std::string host = "http://oos.ctyunapi.cn";
./autogen.sh
./configure --prefix=/usr
make
make install
vim ~/.passwd-s3fs
chmod 600 ~/.passwd-s3fs
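For reference, ~/.passwd-s3fs holds a single line in ACCESS_KEY_ID:SECRET_ACCESS_KEY format. The keys below are placeholders, not real credentials:

```shell
# write placeholder credentials (substitute your real keys)
cat > ~/.passwd-s3fs <<'EOF'
YOUR_ACCESS_KEY_ID:YOUR_SECRET_ACCESS_KEY
EOF
# s3fs rejects the file if it is readable by other users
chmod 600 ~/.passwd-s3fs
```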
mkdir -p /mnt/s3
s3fs ganl /mnt/s3/ -o use_cache=/tmp
cd /mnt/s3/
[root@localhost s3]# ls
[root@localhost s3]# cd uu
[root@localhost uu]# touch 222.text
THIS README CONTAINS OUTDATED INFORMATION - please refer to the wiki or --help
S3FS is a FUSE (File System in User Space) based solution to mount/unmount Amazon S3 storage buckets and use system commands with S3 as if it were another hard disk.
In order to compile s3fs, you'll need the following requirements:
- Kernel-devel packages (or kernel source) installed that are the SAME version as your running kernel
- LibXML2-devel packages
- CURL-devel packages (or compile curl from sources at: curl.haxx.se/ use 7.15.X)
- GCC, GCC-C++
- FUSE (>= 2.8.4)
- FUSE Kernel module installed and running (RHEL 4.x/CentOS 4.x users - read below)
- OpenSSL-devel (0.9.8)
- GnuTLS (gcrypt and nettle)
If you're using YUM or APT to install those packages, additional dependencies may be pulled in; allow them to be installed.
In order to download s3fs, clone the repository with the following command:
git clone git://github.com/s3fs-fuse/s3fs-fuse.git
Go inside the directory that has been created (s3fs-fuse) and run: ./autogen.sh
This will generate a number of scripts in the project directory, including a configure script which you should run with: ./configure
If configure succeeded, you can now run: make. If it didn’t, make sure you meet the dependencies above.
This should compile the code. If everything goes OK, you'll be greeted with "ok!" at the end and you'll have a binary file called "s3fs" in the src/ directory.
As root (you can use su, su -, or sudo) run "make install" - this will copy the "s3fs" binary to /usr/local/bin.
Congratulations. S3fs is now compiled and installed.
In order to use s3fs, make sure you have the Access Key and the Secret Key handy. (refer to the wiki)
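As an alternative to the password file, s3fs has historically also read the keys from environment variables (AWSACCESSKEYID and AWSSECRETACCESSKEY). A sketch with placeholder values:

```shell
# placeholder credentials supplied via environment variables
# (s3fs reads AWSACCESSKEYID / AWSSECRETACCESSKEY when no passwd file is given)
export AWSACCESSKEYID=YOUR_ACCESS_KEY_ID
export AWSSECRETACCESSKEY=YOUR_SECRET_ACCESS_KEY
```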
First, create a directory where to mount the S3 bucket you want to use.
Example (as root): mkdir -p /mnt/s3
Then run: s3fs mybucket[:path] /mnt/s3
This will mount your bucket to /mnt/s3. You can do a simple “ls -l /mnt/s3” to see the content of your bucket.
If you want to allow other people on the same machine to access the same bucket, you can add "-o allow_other" to let them read/write/delete the content of the bucket.
You can add a fixed mount point in /etc/fstab, here’s an example:
s3fs#mybucket /mnt/s3 fuse allow_other 0 0
This will mount upon reboot (or by launching: mount -a) your bucket on your machine.
If that does not work, you should probably specify the "_netdev" option in fstab.
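For example, a variant of the fstab line above with _netdev (keeping the hypothetical bucket name "mybucket") could look like:

```
s3fs#mybucket /mnt/s3 fuse _netdev,allow_other 0 0
```

_netdev delays the mount until the network is up, which matters for a network-backed filesystem like s3fs.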
All other options can be read at: https://github.com/s3fs-fuse/s3fs-fuse/wiki/Fuse-Over-Amazon
s3fs should work fine with S3 storage. However, there are a couple of limitations:
- Currently s3fs could hang the CPU if you have lots of time-outs. This is NOT a fault of s3fs but rather libcurl. This happens when you try to copy thousands of files in 1 session, it doesn’t happen when you upload hundreds of files or less.
- CentOS 4.x/RHEL 4.x users - if you use the kernel that shipped with your distribution and didn’t upgrade to the latest kernel RedHat/CentOS gives, you might have a problem loading the “fuse” kernel. Please upgrade to the latest kernel (2.6.16 or above) and make sure “fuse” kernel module is compiled and loadable since FUSE requires this kernel module and s3fs requires it as well.
- Moving/renaming/erasing files takes time since the whole file needs to be accessed first. A workaround could be to use s3fs’s cache support with the use_cache option.