I am trying to make it so that the two servers each have their own copy, synced over a gigabit crossover cable or other independent connection, so that if either server failed, the other could handle all the clients when they reconnected. Both servers have to allow writes and updates to files, though, and have to synchronise those writes quickly to the other server. On Windows, I wrote a .NET app using a FileSystemWatcher to do this. It was a pain, though, because Windows cried wolf with its filesystem alteration notifications a lot.
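On Linux, the kernel's inotify interface provides the same kind of change notification. A minimal one-way sketch of the idea, using the pyinotify package to watch for writes and rsync to push them to the peer -- the hostname "server2" and the paths are placeholders, not anything from a real setup:

    #!/usr/bin/env python
    import subprocess
    import pyinotify

    WATCH_DIR = '/srv/share'           # local copy of the shared tree
    PEER = 'server2:/srv/share/'       # other server, over the crossover link

    class SyncHandler(pyinotify.ProcessEvent):
        # IN_CLOSE_WRITE fires once, when a writer closes the file,
        # instead of once per write() -- much less "crying wolf" than
        # FileSystemWatcher.
        def process_IN_CLOSE_WRITE(self, event):
            # Re-mirror the whole tree for simplicity; a real setup
            # would sync just event.pathname, and a two-way
            # active/active setup needs far more care than a blind
            # --delete mirror.
            subprocess.call(['rsync', '-a', '--delete',
                             WATCH_DIR + '/', PEER])

    wm = pyinotify.WatchManager()
    wm.add_watch(WATCH_DIR, pyinotify.IN_CLOSE_WRITE, rec=True, auto_add=True)
    pyinotify.Notifier(wm, SyncHandler()).loop()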
I just figured something like that had been done on Linux before. I plan to share to the clients with Samba. Synchronizing at the filesystem layer should suffice, provided that file locks from the clients cannot interfere with synchronization between the servers. The problem is that if synchronization takes place at the filesystem layer, locking information isn't synced, so one client's lock on a file could be granted concurrently with another client's lock on the same file through the other server. From what I gathered, I would have to use a shared block device and a cluster-aware journaling filesystem to solve that problem.
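To illustrate why a file-level sync can't carry the locks: an advisory lock is state held by the local kernel, not data stored in the file, so nothing that copies file contents between the servers will convey it. A toy example (the path is made up):

    #!/usr/bin/env python
    import fcntl

    f = open('/srv/share/report.doc', 'r+b')
    # Exclusive advisory lock -- recorded only in THIS machine's kernel.
    fcntl.flock(f, fcntl.LOCK_EX | fcntl.LOCK_NB)
    # A client of the other server can take the exact same lock on its
    # copy at the same moment; neither kernel knows about the other.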
On 3/11/07, Jonathan Hutchins <[email protected]> wrote:
If I understand what you're after, sharing the drive via NFS and mounting it in the appropriate place in the secondary filesystem would be the easiest thing to do. Actual hardware-level sharing using a SAN architecture would involve expensive hardware, as far as I know.
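For reference, a minimal sketch of that kind of NFS setup -- the addresses and paths here are hypothetical:

    On the primary, in /etc/exports:
        /srv/share  192.168.1.2(rw,sync,no_root_squash)
    then run "exportfs -ra". On the secondary:
        mount -t nfs 192.168.1.1:/srv/share /srv/share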