Access to data in DX90


Moderator: ETERNUS Moderator Team

Posts: 1
Joined: Mon May 12, 2014 17:35
Product(s): Dx90 s2

Access to data in DX90

Postby aldobm » Thu May 15, 2014 15:21

hi folks,

I have 6 blade servers connected to a DX90 S2 storage server over Fibre Channel; the DX90 has no iSCSI ports.

After installing MPIO on the Windows Server Hyper-V Core hosts and configuring the disks with diskpart, every server sees a D:\ drive with read and write access.

But files written by server A do not show up on server B, and files written by server B do not show up on server A.

Apparently this is the expected behavior, and the DX90 S2 manages the isolation between servers.

All good, but I need to clarify a few questions.

*How can I access server A's filesystem if for some reason server A stops working?

*How can I locate and extract server A's data without server A being up? For example, I want to copy those files from the DX90 to a disk on another computer on the network.

*How can I assign server A's filesystem to a new server, a completely new and different one?

*How can I access and delete files and folders directly?

I'm thinking about a worst case: a major incident in which the DX90 becomes a black box that leaves me with no data.

Thank you very much.

Posts: 891
Joined: Thu Jun 22, 2006 15:32
Product(s): Scaleo Pi2662, Primergy

Re: Access to data in DX90

Postby me@work » Fri May 16, 2014 11:59

I'm afraid that what you're actually looking for is not a Multipathing extension (MPIO), but rather a cluster solution...

Windows and its native filesystem NTFS are generally NOT multi-host-aware. If you let several different Windows hosts access the same LUN without an arbitrator, data corruption WILL happen, because each of those Windows hosts thinks that it can do whatever it wishes to the filesystem, incl. e.g. caching and writing small blocks, allocation info, etc.

So in order to set these things straight, you either need to acquaint yourself with installing and operating an MS Cluster (which arbitrates access to storage devices available to all cluster nodes), or present each LUN to one single host only, and then let that host share access to the data through networking.
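The single-host variant can be sketched in a few commands. This is only an illustration, assuming hypothetical host, share, and group names (SERVER-A, VMData, DOMAIN\HyperVAdmins); on Server 2012 and later the SmbShare cmdlets are available even on Core installations.

```shell
# On the one host that owns the LUN (SERVER-A), share the volume over SMB.
# Hypothetical names -- adapt share name, path, and the group granted access.
New-SmbShare -Name "VMData" -Path "D:\" -FullAccess "DOMAIN\HyperVAdmins"

# On every other server, reach the data over the network instead of
# mapping the LUN directly -- SERVER-A's NTFS stays the single writer.
net use X: \\SERVER-A\VMData
```

This also answers the "black box" worry to a degree: the data stays on a plain NTFS volume that can be re-presented to a replacement host and shared again.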

Another option could be to operate the ETERNUS storage system as an NFS provider, because the Network File System allows for file locks. As its name already implies, it is not possible to use it over an FC connection, though...
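For completeness, this is what concurrent NFS access looks like from the client side. A hypothetical sketch only, assuming an NFS export `filer:/export/shared` is available somewhere on the network (e.g. from a NAS head or a file server in front of the array), not a statement about what the DX90 itself provides:

```shell
# Run the same mount on each host; NFS file locking arbitrates
# concurrent access at the file level, unlike a shared NTFS LUN.
mount -t nfs filer:/export/shared /mnt/shared
```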
