Anyone know of a way to share a block device, like RAID-1, across multiple machines? I want two machines to share files to the same group of clients for R/W access, with changes replicated between the two machines so they'd be effective immediately for the other clients. A month ago I googled and found a wiki comparing a bunch of different systems, but they each had some fault. Some could only be written to from one place. The comparison section of the page left me ultimately thinking the only things that did what I wanted were GoogleFS (NOT public) and a couple of others that were really expensive.
I don't even care so much that it's the block device that's replicated, but I'm guessing that replicating a filesystem on demand like that is not possible for locking reasons. (Two people on different servers could lock the same file in the same place and make conflicting changes to it.) Has anyone here done anything like this before?
Billy Crook wrote:
Kclug mailing list [email protected] http://kclug.org/mailman/listinfo/kclug
You can set up shares over NFS and join them together using software RAID (mdadm if you're using Linux). I've read accounts of people trying this and being somewhat successful.
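The NFS half of that can be sketched in a few lines. Hostnames, paths, the client subnet, and export options below are all hypothetical examples, not anything from this thread:

```shell
# On the server: export /srv/share to the client subnet.
# Add a line like this to /etc/exports:
#   /srv/share  192.168.1.0/24(rw,sync,no_subtree_check)
exportfs -ra                      # re-read /etc/exports

# On a client: mount the export ("fileserver" is a hypothetical host).
mount -t nfs fileserver:/srv/share /mnt/share
```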
-SO
If I understand what you're after, sharing the drive via NFS and mounting it in the appropriate place in the secondary filesystem would be the easiest thing to do. Actual hardware-level sharing using a SAN architecture would involve expensive hardware, as far as I know.
I am trying to make it so that the two servers each have their own copy, synced over a gigabit crossover cable or other independent connection, so that if either server failed, the other could handle all the clients when they reconnected. But both servers have to allow writes and updates to files, and have to synchronize those writes quickly to the other server. On Windows, I wrote a .NET app using a FileSystemWatcher to do this. It was a pain, though, because Windows cried wolf with its filesystem alteration notifications a lot.
I just figured something like that had been done in Linux before. I plan to share to the clients with Samba. Synchronizing at the filesystem layer should suffice, provided that file locks from the clients cannot interfere with synchronization between servers. The problem is that if synchronization takes place at the filesystem layer, one client's lock on a file could happen concurrently with another client's on the same file, because locking info wouldn't be synced. From what I gathered, I would have to use a shared block device and a journaling filesystem to solve that problem.
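For what it's worth, the Linux analogue of FileSystemWatcher is inotify; a crude one-way mirror can be sketched with inotifywait (from inotify-tools) plus rsync. Paths and the peer hostname here are hypothetical, and note this does nothing about the cross-server locking problem described above:

```shell
#!/bin/sh
# Watch /srv/share recursively and push changes to the peer server.
# Requires inotify-tools and rsync; "peer2" is a hypothetical host.
while inotifywait -r -e modify,create,delete,move /srv/share; do
    rsync -a --delete /srv/share/ peer2:/srv/share/
done
```

Run in both directions between two servers, this would race with itself on concurrent writes, which is exactly the objection raised above.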
Billy Crook wrote:
From what I gathered, I would have to use a shared block device, and a journaling filesystem to solve that problem.
As mentioned, GFS will let multiple servers mount the same block device without massive filesystem corruption. That just leaves the 'shared block device' portion, which is traditionally handled by some sort of SAN.
If you don't feel like shelling out bucks for a pre-packaged solution, you can coerce Linux into exporting raw block devices via iSCSI (SCSI over IP) or ATAoE (ATA over Ethernet). Presto! Instant Po' Man's SAN!
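The ATAoE route can be sketched with the vblade userspace target and the aoetools client utilities. The shelf/slot numbers, interface, and device names below are examples only:

```shell
# On the exporting box: serve /dev/sdb as AoE shelf 0, slot 1 on eth0.
# vblade comes with the vblade/aoetools packages.
vblade 0 1 eth0 /dev/sdb

# On the importing box: load the AoE initiator and scan for targets.
# Once discovered, the device appears as /dev/etherd/e0.1.
modprobe aoe
aoe-discover
```

Note that AoE is not routable (it is raw Ethernet, not IP), so both boxes need to sit on the same segment; iSCSI is the choice if you need to cross a router.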
The number of Linux boxen required, the performance level, and the ancillary 'glue' required (i.e. switches, GigE/10GigE NICs, possibly with TCP offload engines, etc.) can vary widely, based on performance requirements, how much $$$ you want to spend, and how many points of failure are tolerable.
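Once both machines can see the same block device, the GFS side looks roughly like this. The cluster name, filesystem name, device, and mount point are hypothetical, and the exact tool names vary between GFS releases:

```shell
# Make a GFS filesystem using DLM locking, with two journals
# (one per node); "mycluster:gfs0" is a hypothetical cluster:fsname.
gfs_mkfs -p lock_dlm -t mycluster:gfs0 -j 2 /dev/etherd/e0.1

# Mount it on each node (the cluster infrastructure -- cman, fencing,
# DLM -- must already be up, or the mount will hang or refuse).
mount -t gfs /dev/etherd/e0.1 /mnt/shared
```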
things to look at:
ENBD (Extended Network Block Device) to share the block devices
Red Hat GFS for using the block devices (or recommended BCPs on the ENBD mailing lists)
Coda and InterMezzo for distributed file systems that support detach/reattach (although how that would be better than an svn checkout directly on the mobile device is somewhat of a mystery -- is Luke still drive-space-constrained for source code projects?)
On Monday 12 March 2007 17:06, David Nicol wrote:
The idea is to have read-write mirrors of a Svn repository for maximum uptime.
On 3/12/07, Luke -Jr [email protected] wrote:
The idea is to have read-write mirrors of a Svn repository for maximum uptime.
InterMezzo will force some edits before check-in when there is a file that has been changed independently -- I think. I would ask svn for, or add to the svn source, some features, after thinking about what exactly the semantics should be.
IMO the one-main-master semantics of svn are completely acceptable, including the occasional downtime.
Luke -Jr wrote:
The idea is to have read-write mirrors of a Svn repository for maximum uptime.
That sounds more like Git or one of the other distributed versioning tools. With Subversion, you have The_Repository, running on The_Server, and any redundancy needs to pretty much hide itself behind the scenes.
Did you read up on the back-ends in the red-bean book?: http://svnbook.red-bean.com/nightly/en/svn.reposadmin.planning.html#svn.repo...
There's also an interesting thread from the subversion mailing list, indicating FSFS on top of NFS (should) or GFS/AoE (will) do what you want (assuming your servers are fairly 'near' each other in network terms):
http://svn.haxx.se/users/archive-2006-10/0225.shtml
http://svn.haxx.se/users/archive-2006-10/0220.shtml
http://svn.haxx.se/users/archive-2006-10/0243.shtml
If your mirrors need to be geographically separated, you might want to think about an alternative source control system.
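If read-only mirrors would cover part of the need, stock Subversion 1.4 ships svnsync for exactly that (it does not give you read-write mirrors, so it only partially fits here). The repository paths and master URL below are hypothetical:

```shell
# Create an empty mirror repository.
svnadmin create /srv/svn-mirror

# svnsync records bookkeeping via revision properties, so the mirror
# needs a pre-revprop-change hook that exits 0 (bare shebang suffices).
echo '#!/bin/sh' > /srv/svn-mirror/hooks/pre-revprop-change
chmod +x /srv/svn-mirror/hooks/pre-revprop-change

# Point the mirror at the master and pull down all revisions.
svnsync init file:///srv/svn-mirror http://master.example.com/svn/repo
svnsync sync file:///srv/svn-mirror
```

Re-running `svnsync sync` (e.g. from cron or a post-commit hook on the master) keeps the mirror current; commits still have to go to the single master.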
On Tuesday 13 March 2007 03:46:12 am Charles Steinkuehler wrote:
Luke -Jr wrote:
The idea is to have read-write mirrors of a Svn repository for maximum uptime.
That sounds more like Git or one of the other distributed versioning tools.
Indeed, but distributed SCMs are currently all on par with CVS when it comes to copying/branching.
With subversion, you have The_Repository, running on The_Server, and any redundancy needs to pretty much hide itself behind the scenes.
Exactly what we need.
There's also an interesting thread from the subversion mailing list, indicating FSFS on top of NFS (should) or GFS/AoE (will) do what you want (assuming your servers are fairly 'near' each other in network terms):
At least with NFS, there's a single point of failure, defeating the purpose.
If your mirrors need to be geographically separated, you might want to think about an alternative source control system.
Darcs is really nice, but also has some major flaws:
- Impossible to get a repository's true history
- Cannot copy at all
- Can only do a full repository checkout (and not just subdirectories)
On 3/10/07, Billy Crook [email protected] wrote:
Not sure if this is what you're looking for, but the Red Hat Cluster Suite has a clustered filesystem called GFS.
More information here: http://www.redhat.com/software/rha/gfs/
And if you're looking for the features but not the Red Hat name, CentOS includes this as well.
On a similar topic, is anyone aware of a distributed P2P filesystem that can tolerate downtime of nodes? The idea is to have an svn repo across multiple sites, so proper locking for writes is a must.
On Sat, 2007-03-10 at 12:55 -0600, Billy Crook wrote:
Anyone know of a way to share a block device like Raid-1 across multiple machines? I want to get two machines to share files to the same group of clients for R/W access and have changes replicated between the two machines, so they'd be effective immediately for the other clients.
Take a look at clvm (clustered LVM). Last I heard it was not stable.