
Horrible situation - file systems mounted simultaneously by multiple independent OS instances

March 5

How do I get out of this situation safely?

Details are as follows:

A Xen server has block devices allocated to VMs, but these same devices have also been mounted in the Xen dom0 itself.

In fact, 44 of these block devices have been mounted like this. To make matters worse, each physical device is visible over 4 paths, and each path is mounted on a separate mountpoint. In other words, every device is actually mounted 5 times.
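To quantify the duplication on a box like this, the mount table itself can be interrogated. This is only a sketch; on this particular system the duplicate sources would be the /dev/sd* path devices plus the emcpower pseudo devices:

```shell
# List /dev-backed mount sources that appear more than once in /proc/mounts.
awk '$1 ~ /^\/dev\// {print $1}' /proc/mounts | sort | uniq -d

# The same LUN reached over different paths shows up as *different* /dev nodes,
# so also compare devices by major:minor to catch multipath duplicates.
lsblk -o NAME,MAJ:MIN,TYPE,MOUNTPOINT 2>/dev/null || cat /proc/partitions
```

Note that `uniq -d` only catches the literal same device node mounted twice; the multipath case has to be spotted by eye from the major:minor numbers.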

The VM guest OS sees the device via a PowerPath pseudo device (allocated as a phy: block device to the domU).

The devices are formatted with a mix of ext2 and reiserfs.

No need to explain to me the file system corruption risks involved here.

I am afraid that even just unmounting the file systems may cause corruption, and feel that at this point pulling the power on the host is the safest option.

Note that the applications, Oracle databases for the most part, in all the VMs are still running and in use.

I discovered this while investigating high CPU usage on the dom0: there is an unkillable "find" process whose cwd points to /media/disk-12, which is mounted from /dev/sdf1, which in turn belongs to /dev/emcpowerr.
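For what it's worth, processes blocked like that usually sit in uninterruptible sleep ("D" state). A quick way to confirm, and to see where such a process is operating (PID 1234 below is a placeholder):

```shell
# Show processes in uninterruptible sleep, plus the kernel function they wait in.
ps -eo pid,stat,wchan:32,comm | awk 'NR==1 || $2 ~ /^D/'

# For a suspect PID (placeholder), check its working directory via /proc:
ls -l /proc/1234/cwd 2>/dev/null || true
```

A D-state process is blocked inside the kernel on I/O and cannot be killed, not even with SIGKILL, until that I/O completes or errors out.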

Before anybody asks: the one time I have seen processes that cannot be killed yet continue to use CPU and RAM (unlike a defunct/zombie process) is when there are outstanding committed I/Os, e.g. sync has returned but the data is not physically on disk yet. More commonly this occurs with tape I/O.


P.S. I would have expected devices to be "reserved" once mounted, to prevent this kind of thing? Or is that not possible on Linux?

EDIT: Firstly, I am convinced that KDE (running within the hypervisor) is the culprit. It looks like KDE mounts whatever devices it can at login in order to create desktop icons. The same thing is not happening on other Xen servers, but those all run a much older version of SLES and KDE: KDE 4 appears to be the offender, with 3.4 behaving properly.
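If this stack mounts hotplugged devices through udisks (an assumption worth verifying on this SLES release — the KDE 4 device automounter sits on top of either udisks or HAL depending on vintage), mounting can be denied system-wide with a polkit local-authority rule. The file path and action names below are udisks1-era and must be checked against what is actually installed:

```ini
; /etc/polkit-1/localauthority/50-local.d/99-no-automount.pkla  (assumed path)
[Disable udisks mounting on the dom0]
Identity=unix-user:*
Action=org.freedesktop.udisks.filesystem-mount;org.freedesktop.udisks.filesystem-mount-system-internal
ResultAny=no
ResultInactive=no
ResultActive=no
```

In KDE 4 itself, the Device Automounter can also be switched off in System Settings under Removable Devices (the exact module name varies between 4.x releases).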

Furthermore, two non-critical VMs have hung. After shutting them down they would not boot again due to file system corruption. The main/production VM is still running and its database still working, but this is clearly a time bomb. The customer is attempting to rebuild the environment in another VM on another server, but is stuck on issues configuring some of the components, so we are waiting...

So far none of the answers amounts to more than "best practice is always to shut down gracefully", and I am hoping for something more concrete. This situation warrants more careful thinking: will shutting down cause outstanding I/O, in particular file system metadata updates from the hypervisor, to be synced, potentially causing major file system corruption?


If the disks are being written from a single mount point, no harm is being done. Do a clean shutdown (back it up from the suspended state if you wish) and fix the mounts. Do not run anything but the bare necessary apps on the dom0. If, OTOH, partitions are being written from multiple paths, that's BAD and getting worse by the second. Pull the plug.

I have no concrete evidence, but my gut feeling tells me that the following may be the best approach:

  1. Shut down applications.
  2. Copy all data from the VM via the network to a backup location.
  3. Un-mount the file systems from within the VM.
  4. Shut down the VM. (There is only one VM running on this host now).
  5. Ensure no domUs are set to start automatically.
  6. Pull out the power on the host to prevent the hypervisor from performing any "closing" actions, sync of outstanding I/O, etc.
  7. Boot up the VM, hoping that the hypervisor itself survived the power-yank.
  8. If it fails, re-build the environment. (The VMs' boot disks are file-based, but the data mount points reside on external disks allocated as block devices.)
  9. Check whether the hypervisor is mounting any file systems belonging to the domUs. Un-mount these before any domUs are started.
  10. Turn KDE auto-mounting off.
  11. Start-up the VM and force a full FS check.

Alternative to 11: Start-up the VM and mount the file systems without a full fsck.
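Steps 1–4 and 9–11 above, sketched as a session. All device names and mountpoints are placeholders for illustration; nothing here should be pasted blindly:

```
guest# umount /u01 /u02           # step 3, after the applications are stopped
guest# sync
guest# shutdown -h now            # step 4

dom0#  xm list                    # confirm the domU is gone before the power-yank
... cold restart of the host ...
dom0#  grep emcpower /proc/mounts # step 9: must print nothing before any domU starts

guest# fsck -f /dev/xvdb1         # step 11: full check, only while still unmounted
```

The grep on the dom0 is the critical gate: if it prints anything, KDE (or anything else) has re-mounted a domU device and starting guests would repeat the original mistake.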

The reasoning is that I do not want the Xen hypervisor to have any more chance than absolutely necessary to cause corruption on the domU file systems.

I am no Xen expert and have no hands-on experience with it yet, but if I were in your place my approach would be: first, accept that I might lose data (maybe all of it); second, try to create snapshots, suspend the VMs, and restore them in a separate, safe environment.
I do not want to give you false hope, but I think you will be lucky to recover anything.

Warning: following this advice could cost you all your data. It is up to you to decide whether the risk is worth it.

With a lot of luck, your applications are still working because the data they need is all in volatile memory. You should try to take advantage of this (evaluate whether that could be the case on a per-application basis) and export the live data to a network share, if the applications offer such a feature. If any required data is on disk, the export could either hang much like your find process, or crash (taking the application or OS with it) because of the changed/corrupted on-disk data.
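For the Oracle databases specifically, one such escape hatch (assuming Data Pump is available, i.e. Oracle 10g or later, and that the directory object points at a network share the dom0 never touches — every name below is a placeholder) would be a logical export:

```
SQL>   CREATE DIRECTORY rescue_dir AS '/net/backuphost/rescue';
shell$ expdp system/password DIRECTORY=rescue_dir FULL=y \
           DUMPFILE=rescue.dmp LOGFILE=rescue.log
```

A logical export only needs the database to be able to read consistent blocks; it does not require the underlying filesystems to be clean, which is exactly the property you want here.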

Then you could try a live snapshot, following the instructions in this article: Creating snapshots in Xen. I would go for the byte-by-byte snapshot, although it could get stuck much like your find command... I would not hold out much hope for this, however.

Before running any of those commands, you ought to read this document from Citrix, which helps in understanding snapshots in Xen (PDF).

I wish you good luck.
