Mailing List Archive

Re: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration
> I think Nova should never have to rely on Cinder's hosts/backends
> information to do migrations or any other operation.
> In this case even if Nova had that info, it wouldn't be the solution.
> Cinder would reject migrations if there's an incompatibility on the
> Volume Type (AZ, Referenced backend, capabilities...)

I think I'm missing a bunch of cinder knowledge required to fully grok
this situation and probably need to do some reading. Is there some
reason that a volume type can't exist in multiple backends or something?
I guess I think of volume type as flavor, and the same definition in two
places would be interchangeable -- is that not the case?

> I don't know anything about Nova cells, so I don't know the specifics of
> how we could do the mapping between them and Cinder backends, but
> considering the limited range of possibilities in Cinder I would say we
> only have Volume Types and AZs to work a solution.

I think the only mapping we need is affinity or distance. The point of
needing to migrate the volume would purely be because moving cells
likely means you moved physically farther away from where you were,
potentially with different storage connections and networking. It
doesn't *have* to mean that, but I think in reality it would. So the
question I think Matt is looking to answer here is "how do we move an
instance from a DC in building A to building C and make sure the
volume gets moved to some storage local in the new building so we're
not just transiting back to the original home for no reason?"

Does that explanation help or are you saying that's fundamentally hard
to do/orchestrate?

Fundamentally, the cells thing doesn't even need to be part of the
discussion, as the same rules would apply if we're just doing a normal
migration but need to make sure that storage remains affined to compute.

> I don't know how the Nova Placement works, but it could hold an
> equivalency mapping of volume types to cells as in:
>              Cell#1        Cell#2
> VolTypeA <--> VolTypeD
> VolTypeB <--> VolTypeE
> VolTypeC <--> VolTypeF
> Then it could do volume retypes (allowing migration) and that would
> properly move the volumes from one backend to another.

The only way I can think that we could do this in placement would be if
volume types were resource providers and we assigned them traits that
had special meaning to nova indicating equivalence. Several of the words
in that sentence are likely to freak out placement people, myself
included :)
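Setting placement aside, the plain mapping Gorka describes could be sketched as a simple per-cell lookup table plus a retype call. To be clear, the type names and the EQUIVALENT_TYPES table below are made up for illustration, and the cinderclient call is only shown in a comment:

```python
# Sketch only: a volume-type equivalence map like the one described above,
# plus the retype that would actually move the volume. All names here are
# hypothetical, not anything Cinder or Nova defines today.

EQUIVALENT_TYPES = {
    # (source_cell, source_type) -> equivalent type in the target cell
    ("cell1", "VolTypeA"): "VolTypeD",
    ("cell1", "VolTypeB"): "VolTypeE",
    ("cell1", "VolTypeC"): "VolTypeF",
}


def target_volume_type(source_cell, source_type):
    """Return the equivalent volume type in the target cell, or None."""
    return EQUIVALENT_TYPES.get((source_cell, source_type))


# With python-cinderclient, the move itself would then be a retype with a
# migration policy that allows changing backends, roughly:
#
#   cinder.volumes.retype(volume_id,
#                         target_volume_type("cell1", "VolTypeA"),
#                         "on-demand")
```

The interesting design question is who owns that table: it would have to be operator-maintained config somewhere, since neither service can derive it today.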

So is the concern just that we need to know what volume types in one
backend map to those in another so that when we do the migration we know
what to ask for? Is "they are the same name" not enough? Going back to
the flavor analogy, you could kinda compare two flavor definitions and
have a good idea if they're equivalent or not...
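To make the flavor-comparison idea concrete, here's a rough sketch: call two volume types equivalent if their names and extra specs match once you drop backend-specific keys. The dict shape and the list of keys to ignore are assumptions for illustration, not Cinder's actual data model:

```python
# Hedged sketch of "compare the definitions": two volume types are treated
# as equivalent if name and extra specs match, ignoring keys that are
# inherently backend-local (volume_backend_name being the obvious one).

BACKEND_SPECIFIC = {"volume_backend_name"}


def equivalent(type_a, type_b):
    """Compare two volume-type dicts of the form
    {"name": ..., "extra_specs": {...}}."""
    if type_a["name"] != type_b["name"]:
        return False

    def strip(specs):
        # Drop keys that only make sense within one backend.
        return {k: v for k, v in specs.items() if k not in BACKEND_SPECIFIC}

    return strip(type_a["extra_specs"]) == strip(type_b["extra_specs"])
```

Whether "same name plus same scrubbed specs" is actually a safe equivalence test is exactly the Cinder question being asked above.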


OpenStack-operators mailing list
Re: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration

On 8/24/2018 4:08 PM, Matt Riedemann wrote:
> On 8/23/2018 10:22 AM, Sean McGinnis wrote:
>> I haven't gone through the workflow, but I thought shelve/unshelve
>> could detach the volume on shelving and reattach it on unshelve. In
>> that workflow, assuming the networking is in place to provide the
>> connectivity, the nova compute host would be connecting to the volume
>> just like any other attach and should work fine. The unknown or
>> tricky part is making sure that there is the network connectivity or
>> routing in place for the compute host to be able to log in to the
>> storage target.
> Yeah that's also why I like shelve/unshelve as a start since it's doing
> volume detach from the source host in the source cell and volume attach
> to the target host in the target cell.
> Host aggregates in Nova, as a grouping concept, are not restricted to
> cells at all, so you could have hosts in the same aggregate which span
> cells, so I'd think that's what operators would be doing if they have
> network/storage spanning multiple cells. Having said that, host
> aggregates are not exposed to non-admin end users, so again, if we rely
> on a normal user to do this move operation via resize, the only way we
> can restrict the instance to another host in the same aggregate is via
> availability zones, which is the user-facing aggregate construct in
> nova. I know Sam would care about this because NeCTAR sets
> [cinder]/cross_az_attach=False in nova.conf so servers/volumes are
> restricted to the same AZ, but that's not the default, and specifying an
> AZ when you create a server is not required (although there is a config
> option in nova which allows operators to define a default AZ for the
> instance if the user didn't specify one).
> Anyway, my point is, there are a lot of "ifs" if it's not an
> operator/admin explicitly telling nova where to send the server if it's
> moving across cells.
>> If it's the other scenario mentioned where the volume needs to be
>> migrated from one storage backend to another storage backend, then
>> that may require a little more work. The volume would need to be
>> retype'd or migrated (storage migration) from the original backend to
>> the new backend.
> Yeah, the thing with retype/volume migration that isn't great is it
> triggers the swap_volume callback to the source host in nova, so if nova
> was orchestrating the volume retype/move, we'd need to wait for the swap
> volume to be done (not impossible) before proceeding, and only the
> libvirt driver implements the swap volume API. I've always wondered,
> what the hell do non-libvirt deployments do with respect to the volume
> retype/migration APIs in Cinder? Just disable them via policy?
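As a footnote on the AZ point above: the NeCTAR-style restriction Matt mentions boils down to a couple of nova.conf options, roughly like the fragment below (option names from memory, so worth double-checking against the current nova config reference):

```ini
[DEFAULT]
# AZ to schedule the instance into when the user doesn't specify one
default_schedule_zone = nova

[cinder]
# Refuse to attach a volume that lives in a different AZ than the server
cross_az_attach = False
```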




Re: [openstack-dev] [nova][cinder][neutron] Cross-cell cold migration
> >
> > Yeah it's already on the PTG agenda [1][2]. I started the thread because I
> > wanted to get the ball rolling as early as possible, and with people that
> > won't attend the PTG and/or the Forum, to weigh in on not only the known
> > issues with cross-cell migration but also the things I'm not thinking about.
> >
> > [1]
> > [2]
> >
> > --
> >
> > Thanks,
> >
> > Matt
> >
> Should we also add the topic to the Thursday Cinder-Nova slot in case
> there are some questions where the Cinder team can assist?
> Cheers,
> Gorka.

Good idea. That will be a good time for the teams to circle back and see
whether any Cinder needs have come up that we still have time to talk
through and get work started on.
