Opendedup is a deduplication-based filesystem and block device that provides inline deduplication and flexibility for applications. It benefits services such as backup, archiving, NAS storage, and virtual machine primary and secondary storage, and can be deployed in both standalone and distributed, multi-node configurations. Standalone, it provides inline deduplication, replication, and unlimited snapshot capabilities. In a multi-node configuration, it adds global intra-volume deduplication, block storage redundancy, and block storage expandability, storing and sharing unique data blocks with other volumes within the cluster. These volumes can also specify a level of redundancy for data stored in the cluster.
Tags: Linux, compression, volume management, network-attached storage, storage, Windows port, Windows support
Operating Systems: Linux (64-bit), Windows (32-bit)
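To make the idea of inline deduplication concrete, here is a minimal sketch (not Opendedup's actual implementation, which uses configurable block sizes and persistent chunk stores): incoming data is split into fixed-size blocks, each block is hashed, and only blocks with previously unseen hashes are stored. The `DedupStore` class and its method names are illustrative only.

```python
import hashlib

class DedupStore:
    """Toy content-addressed store illustrating inline deduplication:
    each fixed-size block is hashed on write, and only unique blocks
    are kept; files are just lists of block digests."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}   # digest -> block bytes (unique data only)
        self.files = {}    # filename -> ordered list of digests

    def write(self, name, data):
        digests = []
        for off in range(0, len(data), self.block_size):
            block = data[off:off + self.block_size]
            digest = hashlib.sha256(block).hexdigest()
            # Inline dedup: store the block only if it was never seen.
            self.blocks.setdefault(digest, block)
            digests.append(digest)
        self.files[name] = digests

    def read(self, name):
        # Reassemble the file from its block digests.
        return b"".join(self.blocks[d] for d in self.files[name])

    def unique_bytes(self):
        # Physical space used, regardless of logical file sizes.
        return sum(len(b) for b in self.blocks.values())
```

Writing two files that share content then costs only one physical copy of each unique block, which is why dedup pays off most for backups and VM images with high redundancy.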
New SDFS performance numbers are out and have greatly improved in 1.0.5. SDFS now provides over 10 Gb/s for most workloads. Check it at http://ope...
Release Notes: This release adds Variable Block deduplication for local and cloud storage. This allows accurate inline deduplication for structured and unstructured data with the SDFS filesystem.
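Variable block deduplication generally relies on content-defined chunking: instead of cutting at fixed offsets, a rolling value over a sliding window decides where chunks end, so an insertion early in a file shifts only nearby boundaries. The sketch below is a generic illustration of that idea, not SDFS's algorithm; real systems use a Rabin or Gear rolling hash, while a plain rolling sum keeps this short. All names and parameters here are assumptions.

```python
def chunk(data, window=16, mask=0x3F, min_size=32, max_size=256):
    """Content-defined chunking sketch: maintain a rolling sum over a
    small sliding window and cut a chunk whenever the low bits of the
    sum match `mask` (subject to min/max chunk sizes)."""
    chunks = []
    start = 0
    rolling = 0
    for i in range(len(data)):
        rolling += data[i]
        if i - start >= window:
            rolling -= data[i - window]   # slide the window forward
        size = i - start + 1
        # Cut on a content-derived boundary, or force one at max_size.
        if size >= max_size or (size >= min_size and (rolling & mask) == mask):
            chunks.append(data[start:i + 1])
            start = i + 1
            rolling = 0
    if start < len(data):
        chunks.append(data[start:])       # final partial chunk
    return chunks
```

Because boundaries depend on content rather than position, unchanged regions of structured or unstructured data keep producing the same chunks, and therefore the same hashes, which is what makes variable block dedup more accurate than fixed-block dedup on shifted data.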
Release Notes: This release includes a fix for reclaiming blocks from file-based chunkstores and basic structure for variable block deduplication. Beta 4 will include basic support for variable block deduplication.
Release Notes: A deduplicated virtual volume manager was added. Opendedup can now run as a virtual block device and dedup any filesystem on top of it. EXT4, btrfs, and OCFS2 were tested for validation and QA; all perform and function as if they were running on a raw block device. Documentation for odvol.make, odvol.start, and odvol.cli was updated with details, and a quick start guide is now available.
Release Notes: This version features refactored clustering capability.
Release Notes: This release fixes unique block deletion for the file-based chunk store, volume freezing caused by a thread deadlock, and the Azure bucket name not being honored. It improves read performance under NFS by 40% through better locking techniques and file caching, and finer file-level locking under concurrent I/O should allow lower-latency multi-writer access to the same file. It adds block caching for the S3, Azure, and FileChunkStore back ends, defaulting to 10MB, which should improve performance for multiple writes. The cache size can be configured at volume creation using the --chunk-store-read-cache option.
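A block read cache of the kind described above is typically a small LRU map in front of a slow back end. The sketch below is an assumption about the general technique, not SDFS's code: `ChunkReadCache`, its `fetch` callback, and the method names are all hypothetical; only the 10MB default mirrors the release notes.

```python
from collections import OrderedDict

class ChunkReadCache:
    """LRU block read cache sketch: chunks fetched from a slow back end
    (e.g. S3 or Azure) are kept in memory up to a byte budget, with the
    least recently used chunks evicted first."""

    def __init__(self, fetch, capacity=10 * 1024 * 1024):
        self.fetch = fetch              # callback that reads a chunk by digest
        self.capacity = capacity        # 10MB default, per the release notes
        self.cache = OrderedDict()      # digest -> chunk bytes, in LRU order
        self.used = 0

    def get(self, digest):
        if digest in self.cache:
            self.cache.move_to_end(digest)   # mark as most recently used
            return self.cache[digest]
        data = self.fetch(digest)            # miss: hit the back end
        self.cache[digest] = data
        self.used += len(data)
        while self.used > self.capacity:     # evict least recently used
            _, old = self.cache.popitem(last=False)
            self.used -= len(old)
        return data
```

Repeated reads of hot chunks then avoid round trips to cloud storage entirely, which is where most of the latency win for small-I/O workloads comes from.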