By re-attaching to the shared rings during connect_ring(), rather than
assuming they are freshly allocated (i.e. assuming the ring counters are
zero), it becomes possible for vbd instances to be unbound from, and
re-bound to, a running guest.
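The distinction can be illustrated with a self-contained sketch. These are
simplified stand-ins for the Xen ring structures and macros (the real
definitions live in include/xen/interface/io/ring.h), not the actual kernel
code: an init-style setup zeroes the backend's private counters, whereas an
attach-style setup resumes from the indices already published in the shared
ring, so in-flight requests are neither lost nor replayed.

```c
#include <assert.h>

/* Simplified stand-in for the page shared with the frontend. */
struct sring {
	unsigned int req_prod;     /* frontend's request producer index */
	unsigned int rsp_prod;     /* backend's response producer index */
};

/* Simplified stand-in for the backend-private ring state. */
struct back_ring {
	struct sring *sring;
	unsigned int rsp_prod_pvt; /* responses produced but not yet pushed */
	unsigned int req_cons;     /* requests consumed so far */
};

/* BACK_RING_INIT-style setup: correct only for a freshly allocated
 * ring, because it assumes all counters start at zero. */
static void back_ring_init(struct back_ring *br, struct sring *s)
{
	br->sring = s;
	br->rsp_prod_pvt = 0;
	br->req_cons = 0;
}

/* BACK_RING_ATTACH-style setup: pick the private counters back up from
 * the response producer index already published in the shared ring, so
 * a re-bound backend continues where the previous instance left off. */
static void back_ring_attach(struct back_ring *br, struct sring *s)
{
	br->sring = s;
	br->rsp_prod_pvt = s->rsp_prod;
	br->req_cons = s->rsp_prod;
}
```

Any requests the frontend produced while the backend was unbound (req_prod
ahead of rsp_prod) remain visible in the shared ring after the attach and are
consumed normally.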
This has been tested by running:
while true; do dd if=/dev/urandom of=test.img bs=1M count=1024; done
in a PV guest whilst running:
while true;
do echo vbd-$DOMID-$VBD >unbind;
echo unbound;
sleep 5;
echo vbd-$DOMID-$VBD >bind;
echo bound;
sleep 3;
done
in dom0 from /sys/bus/xen-backend/drivers/vbd to continuously unbind and
re-bind the guest's system disk image.
This is a highly useful feature for a backend module, as it allows the
module to be unloaded and re-loaded (i.e. updated) without requiring domUs
to be halted.
This was also tested by running:
while true;
do echo vbd-$DOMID-$VBD >unbind;
echo unbound;
sleep 5;
rmmod xen-blkback;
echo unloaded;
sleep 1;
modprobe xen-blkback;
echo bound;
cd $(pwd);
sleep 3;
done
in dom0 whilst running the same loop as above in the (single) PV guest.
Some (less stressful) testing has also been done using a Windows HVM guest
with the latest 9.0 PV drivers installed.
Signed-off-by: Paul Durrant <[email protected]>
---
Cc: Konrad Rzeszutek Wilk <[email protected]>
Cc: "Roger Pau Monné" <[email protected]>
Cc: Jens Axboe <[email protected]>
Cc: Boris Ostrovsky <[email protected]>
Cc: Juergen Gross <[email protected]>
Cc: Stefano Stabellini <[email protected]>
---
drivers/block/xen-blkback/xenbus.c | 12 ++++++------
1 file changed, 6 insertions(+), 6 deletions(-)
diff --git a/drivers/block/xen-blkback/xenbus.c b/drivers/block/xen-blkback/xenbus.c
index e8c5c54e1d26..0b82740c4a9d 100644
--- a/drivers/block/xen-blkback/xenbus.c
+++ b/drivers/block/xen-blkback/xenbus.c
@@ -196,24 +196,24 @@ static int xen_blkif_map(struct xen_blkif_ring *ring, grant_ref_t *gref,
{
struct blkif_sring *sring;
sring = (struct blkif_sring *)ring->blk_ring;
- BACK_RING_INIT(&ring->blk_rings.native, sring,
- XEN_PAGE_SIZE * nr_grefs);
+ BACK_RING_ATTACH(&ring->blk_rings.native, sring,
+ XEN_PAGE_SIZE * nr_grefs);
break;
}
case BLKIF_PROTOCOL_X86_32:
{
struct blkif_x86_32_sring *sring_x86_32;
sring_x86_32 = (struct blkif_x86_32_sring *)ring->blk_ring;
- BACK_RING_INIT(&ring->blk_rings.x86_32, sring_x86_32,
- XEN_PAGE_SIZE * nr_grefs);
+ BACK_RING_ATTACH(&ring->blk_rings.x86_32, sring_x86_32,
+ XEN_PAGE_SIZE * nr_grefs);
break;
}
case BLKIF_PROTOCOL_X86_64:
{
struct blkif_x86_64_sring *sring_x86_64;
sring_x86_64 = (struct blkif_x86_64_sring *)ring->blk_ring;
- BACK_RING_INIT(&ring->blk_rings.x86_64, sring_x86_64,
- XEN_PAGE_SIZE * nr_grefs);
+ BACK_RING_ATTACH(&ring->blk_rings.x86_64, sring_x86_64,
+ XEN_PAGE_SIZE * nr_grefs);
break;
}
default:
--
2.20.1
> -----Original Message-----
> From: Roger Pau Monné <[email protected]>
> Sent: 09 December 2019 12:17
> To: Durrant, Paul <[email protected]>
> Cc: [email protected]; [email protected]; Konrad
> Rzeszutek Wilk <[email protected]>; Jens Axboe <[email protected]>;
> Boris Ostrovsky <[email protected]>; Juergen Gross
> <[email protected]>; Stefano Stabellini <[email protected]>
> Subject: Re: [PATCH 4/4] xen-blkback: support dynamic unbind/bind
>
> On Thu, Dec 05, 2019 at 02:01:23PM +0000, Paul Durrant wrote:
> > By simply re-attaching to shared rings during connect_ring() rather than
> > assuming they are freshly allocated (i.e assuming the counters are zero)
> > it is possible for vbd instances to be unbound and re-bound from and to
> > (respectively) a running guest.
> >
> > This has been tested by running:
> >
> > while true; do dd if=/dev/urandom of=test.img bs=1M count=1024; done
> >
> > in a PV guest whilst running:
> >
> > while true;
> > do echo vbd-$DOMID-$VBD >unbind;
> > echo unbound;
> > sleep 5;
> > echo vbd-$DOMID-$VBD >bind;
> > echo bound;
> > sleep 3;
> > done
>
> So this does unbind blkback while leaving the PV interface as
> connected?
>
Yes, everything is left in place in the frontend. The backend detaches from the ring, closes its end of the event channels, etc., but the guest can still send requests, which will get serviced when the new backend attaches.
Paul
> -----Original Message-----
> From: Jürgen Groß <[email protected]>
> Sent: 09 December 2019 13:58
> To: Durrant, Paul <[email protected]>; [email protected];
> [email protected]
> Cc: Konrad Rzeszutek Wilk <[email protected]>; Roger Pau Monné
> <[email protected]>; Jens Axboe <[email protected]>; Boris Ostrovsky
> <[email protected]>; Stefano Stabellini <[email protected]>
> Subject: Re: [PATCH 4/4] xen-blkback: support dynamic unbind/bind
>
> On 05.12.19 15:01, Paul Durrant wrote:
> > By simply re-attaching to shared rings during connect_ring() rather than
> > assuming they are freshly allocated (i.e assuming the counters are zero)
> > it is possible for vbd instances to be unbound and re-bound from and to
> > (respectively) a running guest.
> >
> > This has been tested by running:
> >
> > while true; do dd if=/dev/urandom of=test.img bs=1M count=1024; done
> >
> > in a PV guest whilst running:
> >
> > while true;
> > do echo vbd-$DOMID-$VBD >unbind;
> > echo unbound;
> > sleep 5;
> > echo vbd-$DOMID-$VBD >bind;
> > echo bound;
> > sleep 3;
> > done
> >
> > in dom0 from /sys/bus/xen-backend/drivers/vbd to continuously unbind and
> > re-bind its system disk image.
>
> Could you do the same test with mixed reads/writes and verification of
> the read/written data, please? A write-only test is not _that_
> convincing regarding correctness. It only proves the guest is not
> crashing.
Sure. I'll find something that will verify content.
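Purely as a sketch (not the test actually used), a content-verifying variant of the dd loop could checksum what was written and compare after re-reading, with a best-effort cache drop so reads actually reach the backend:

```shell
set -e
for i in 1 2 3 4 5; do
    # write pseudo-random data and record its checksum
    dd if=/dev/urandom of=test.img bs=1M count=4 conv=fsync 2>/dev/null
    want=$(sha256sum test.img | cut -d' ' -f1)
    # drop the page cache (best effort; needs root) so the re-read
    # goes through the block backend rather than the cache
    echo 3 > /proc/sys/vm/drop_caches 2>/dev/null || true
    got=$(sha256sum test.img | cut -d' ' -f1)
    [ "$want" = "$got" ] || { echo "MISMATCH on pass $i"; exit 1; }
    echo "pass $i verified"
done
```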
>
> I'm fine with the general approach, though.
>
Cool, thanks,
Paul
>
> Juergen