In my ongoing quest to sync our backup repository over a 6-10Mbps line (and we can't do anything about that) to a remote location, I am looking for a copy or sync tool that incorporates Remote Differential Compression.

In the past we used HP Dataprotector with StoreOnce, which does source-side dedup, and the 10Mbps line was more than sufficient to move the data across. We have Veeam Standard, not Enterprise, so we don't have Veeam's WAN acceleration.

I've been testing with rsync to copy only deltas across. It has all kinds of limitations - for one, speed - and because it is POSIX compliant I can't use NTFS permissions (for what that's worth - but it's still a hassle).

I've also been using DFS with RDC enabled, and in terms of bandwidth that's a very good solution. RDC works just like rsync, hashing both sides and then copying only the delta across. The downside of DFS is that it uses a staging folder: if I have a 1.2TB file, I need another 1.2TB of storage for it to be put in staging.

We are a small company, budget is very tight, and any storage I can save helps me a lot. In addition, I don't need or want to sync the whole backup store - the remote location is only for disaster recovery, and I only need to store the latest fulls + incrementals there, not everything.

So my question is: as RDC is actually an API, are there any copy tools around (preferably command line driven) that use Remote Differential Compression without using a staging location?

Phil Wilcock
I know that you've looked at BranchCache - and just wanted to clarify how that would work (cos I just tested it :-)), bearing in mind that storage is a consideration and you don't want to have to cache everything twice.

So I created a 20Gb deduped drive on two servers, Server A being the source and BranchCache content server. I created 10Gb of content (ISOs and VHDs etc.) which deduped down to 2.9Gb. From Server B (which would be the backup server) I copied all 10Gb of content using BITS via HTTP. Only 2.9Gb crossed the wire, as DeDupe was working with BranchCache: the blocks that were deduplicated correspond exactly to BranchCache blocks, and thus take up only 2.9Gb of space plus a slight overhead of cache metadata. So on Server B I ended up with just 5.5Gb of disk usage (this disk also being deduped), INCLUDING the BranchCache cache.

Furthermore, I fiddled with some of the files, adding some extra data to the VHD and injecting some files into an ISO. When I ran a full copy of the 10Gb content again, only the 110Mb of extra data was transferred. Not bad for 10Gb of content.

Obviously this needs to be a copy instigated from Server B, as you can't push content for BranchCache - so you couldn't run the copy from Server A.

I'm grateful for all the answers, but I must admit I feel people aren't really reading my posts :) As stated in my opening post, we have Veeam Standard, which does not include WAN acceleration. Moving to Enterprise would cost me thousands of dollars/euros per year just to do the sync more efficiently. Granted, data is priceless when disaster strikes, but I have a small company - we are just two people - and the budget is simply too tight right now to spend that amount. If only Veeam WAN acceleration were available as a separate add-in or something like that.

In addition, while I am not completely sure how WAN acceleration works, if it sits in between like BranchCache or other WAN accelerators, I would probably need at least another 1TB of storage for the cache, as I have about 1TB of unique data at the moment.

I tried this with a few ISOs as well, injecting about 150MB of additional data. While I got BranchCache to work, it didn't work like you state. In my setup, when I pull original.iso, the BranchCache cache builds; then when modified.iso is pulled, the whole file travels the wire - modified.iso being 4.7GB, with 4.5GB of data identical to original.iso. The whole file would transfer again UNLESS it has the same name.

I have currently written a sync tool based on RDC myself, more in the vein of rsync: first hashing the files on both sides, then syncing the changed blocks. I want to get it to store a 'chunk hash database' per file, in order to be able to scan/hash files in advance. While I know that spending more and more time on this will at some point justify just buying a solution, be it Veeam Enterprise or another, it's not that far yet :)
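The hash-both-sides-and-ship-deltas idea discussed in this thread can be sketched roughly as follows. This is a toy fixed-block illustration in Python, not the actual RDC API: real rsync uses a rolling weak checksum so that data shifted by an insertion still matches, and RDC uses recursive signatures; the block size and all function names here are made up.

```python
import hashlib

BLOCK = 4096  # fixed block size; real tools choose this adaptively

def signature(data: bytes) -> list[str]:
    """Receiver side: hash every fixed-size block of the old copy."""
    return [hashlib.md5(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def delta(new: bytes, old_sig: list[str]) -> list:
    """Sender side: compare the new file block-by-block against the
    receiver's signature; emit a block index for unchanged blocks and
    raw bytes only for changed ones."""
    known = {h: i for i, h in enumerate(old_sig)}
    ops = []
    for i in range(0, len(new), BLOCK):
        chunk = new[i:i + BLOCK]
        h = hashlib.md5(chunk).hexdigest()
        ops.append(known[h] if h in known else chunk)
    return ops

def patch(old: bytes, ops: list) -> bytes:
    """Receiver side: rebuild the new file from local blocks + literals."""
    out = bytearray()
    for op in ops:
        out += old[op * BLOCK:(op + 1) * BLOCK] if isinstance(op, int) else op
    return bytes(out)

# Only the changed half of the file needs to cross the wire:
old = b"a" * 8192
new = old[:4096] + b"b" * 4096
ops = delta(new, signature(old))
assert patch(old, ops) == new
```

Note the key weakness of the fixed-offset version: an insertion near the start of the file shifts every later block boundary, so nothing matches. That is exactly the problem rolling checksums (rsync) and recursive signatures (RDC) were designed to solve.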
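The 'chunk hash database' idea mentioned in the thread - pre-hashing files so a later sync can skip the scan - could look something like this minimal sketch. It assumes a simple JSON store keyed on path, size and mtime; the file name `chunkdb.json`, the chunk size and all helper names are hypothetical, not part of any real RDC API.

```python
import hashlib
import json
import os

CHUNK = 1 << 20  # 1 MiB chunks; purely illustrative

def file_signature(path: str) -> list[str]:
    """Hash a file chunk by chunk without loading it all into memory."""
    hashes = []
    with open(path, "rb") as f:
        while chunk := f.read(CHUNK):
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

def cached_signature(path: str, db_path: str = "chunkdb.json") -> list[str]:
    """Return the chunk-hash list for *path*, recomputing only when the
    file's size or mtime has changed since the last scan."""
    db = {}
    if os.path.exists(db_path):
        with open(db_path) as f:
            db = json.load(f)
    st = os.stat(path)
    key = os.path.abspath(path)
    entry = db.get(key)
    if entry and entry["size"] == st.st_size and entry["mtime"] == st.st_mtime:
        return entry["hashes"]  # cache hit: no re-read of the file
    hashes = file_signature(path)
    db[key] = {"size": st.st_size, "mtime": st.st_mtime, "hashes": hashes}
    with open(db_path, "w") as f:
        json.dump(db, f)
    return hashes
```

With something like this, both sides of a sync can keep their databases warm in advance (e.g. from a scheduled task), so that at sync time only the hash lists need to be exchanged before shipping changed chunks.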