S3cmd is a command line client for storing data in Amazon S3 (Simple Storage Service). It is ideal for power users, scripts, cron jobs, etc.
|Tags||Archiving, Filesystems, Systems Administration, Utilities|
|Operating Systems||OS Independent|
Release Notes: Server-side copy for hardlinks/softlinks for improved performance, a new [signurl] command, handling of empty return bodies when processing S3 errors, upload from STDIN, support for custom HTTP headers, improved MIME support, support for --acl-grant/--acl-revoke in the 'sync' command, support for CloudFront default index and default root invalidation, and much more.
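A few of the features above can be illustrated with hedged example invocations (the bucket and file names are placeholders, not from the source; the commands and flags themselves are the ones this release introduces):

```shell
# Generate a signed URL valid for one hour (the new [signurl] command)
s3cmd signurl s3://example-bucket/report.pdf +3600

# Upload from STDIN by passing "-" as the source
mysqldump mydb | s3cmd put - s3://example-bucket/backups/mydb.sql

# Attach a custom HTTP header to an upload
s3cmd put --add-header='Cache-Control: max-age=3600' index.html s3://example-bucket/
```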
Release Notes: This release added commands for copying and moving remote files, CloudFront support, a new [setacl] command for setting an ACL on existing objects, and recursive and wildcard support for [put], [get], and [del]. --dry-run was enabled for [put], [get], and [sync]. Removal of non-empty buckets is now allowed. A progress meter was implemented. New --include, --rinclude, and --(r)include-from options were added to override --exclude exclusions. An --add-header option was added, along with a --list-md5 option for [ls].
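The include/exclude override and the new listing option might be used as follows (a sketch with placeholder bucket and path names; the flags are those listed in the release notes):

```shell
# Exclude everything, then re-include only JPEGs; --dry-run previews the plan
s3cmd sync --dry-run --exclude '*' --include '*.jpg' photos/ s3://example-bucket/photos/

# List objects together with their MD5 checksums
s3cmd ls --list-md5 s3://example-bucket/photos/

# Make an existing object publicly readable with the new [setacl] command
s3cmd setacl --acl-public s3://example-bucket/photos/cover.jpg
```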
Release Notes: This release restores access to buckets with upper-case names and improves the handling of filenames containing Unicode characters. It avoids a ZeroDivisionError on very fast links (for instance, on Amazon EC2) and reissues failed requests (connection errors, internal server errors, etc.). Sync now skips files that cannot be opened instead of terminating the synchronization entirely, and no longer exhausts the open-file quota when syncing large numbers of files.
Release Notes: --exclude and --rexclude options were added to the sync command. The $HOME environment variable no longer needs to be set. Better checking of bucket names against Amazon S3 rules was implemented.
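The new exclusion options for [sync] accept shell-style patterns (--exclude) and regular expressions (--rexclude); a sketch with placeholder paths:

```shell
# Skip temp files by glob pattern and backup files by regular expression
s3cmd sync --exclude '*.tmp' --rexclude '\.bak$' ./local-dir/ s3://example-bucket/backup/
```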
Release Notes: Synchronization from S3 back to a local folder was implemented, including file attribute restoration. Failed uploads are retried at a lower speed to improve error resilience. The MD5 of the uploaded file is computed and compared with the checksum reported by S3, and the file is uploaded again if there is a mismatch.
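The post-upload check described above can be approximated in shell: compute the local file's MD5 and compare it with the checksum S3 reports (a minimal sketch, assuming a single-part upload so the reported ETag equals the plain MD5; the file name and checksum value here are illustrative, not from the source):

```shell
# Stand-in for a file that was just uploaded
printf 'hello' > /tmp/example.dat

# MD5 of the local copy
local_md5=$(md5sum /tmp/example.dat | cut -d' ' -f1)

# Checksum S3 would report for this content (MD5 of "hello")
remote_md5='5d41402abc4b2a76b9719d911017c592'

if [ "$local_md5" = "$remote_md5" ]; then
    echo "checksum OK"
else
    echo "mismatch: upload again" >&2    # s3cmd re-uploads on mismatch
fi
```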