Mailing List Archive

Re: Monitoring / Billing Architecture proposed
On 04/23/2012 10:09 PM, Sandy Walsh wrote:
> Flavor information is copied to the Instance table on creation so the
> Flavors can change and still be tracked in the Instance. It may just
> need to be sent in the notification payload.
>
> The current events in the system are documented here:
> http://wiki.openstack.org/SystemUsageData
>
Hi,

Metering needs to account for the "volume of data sent to external network destinations" (i.e. n4 in http://wiki.openstack.org/EfficientMetering) or disk I/O, etc. This kind of resource is billable.

The information described at http://wiki.openstack.org/SystemUsageData will be used by metering but other data sources need to be harvested as well.
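
For counters like these that nova does not emit, a metering agent could read them straight from the host. A minimal sketch (Linux-only, reading /proc/net/dev; the emit() target is a placeholder, not an existing OpenStack API):

import time

def read_net_counters(path="/proc/net/dev"):
    """Return {iface: (rx_bytes, tx_bytes)} parsed from /proc/net/dev."""
    counters = {}
    with open(path) as f:
        for line in f.readlines()[2:]:  # skip the two header lines
            iface, data = line.split(":", 1)
            fields = data.split()
            counters[iface.strip()] = (int(fields[0]), int(fields[8]))
    return counters

def emit(counter, value):
    print(counter, value)  # placeholder: push to the metering store/queue

previous = read_net_counters()
while True:
    time.sleep(60)  # one audit period
    current = read_net_counters()
    for iface, (rx, tx) in current.items():
        prx, ptx = previous.get(iface, (rx, tx))
        emit("net.%s.bytes_out" % iface, tx - ptx)  # the n4-style volume
    previous = current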

Cheers
> -Sandy
>
>
> On 04/23/2012 02:50 PM, Brian Schott wrote:
>> So, we could build on this. No reason to reinvent, but we might want to
>> expand the number of events. I'm concerned about things like what
>> happens when flavors change over time. Maybe the answer is, always
>> append to the flavor/instance-type table. The code I remember and the
>> admin interface that Ken wrote allowed you to modify flavors. That
>> would break billing unless you also track flavor modifications.
>>
>> -------------------------------------------------
>> Brian Schott, CTO
>> Nimbis Services, Inc.
>> brian.schott@nimbisservices.com <mailto:brian.schott@nimbisservices.com>
>> ph: 443-274-6064 fx: 443-274-6060
>>
>>
>>
>> On Apr 23, 2012, at 1:40 PM, Luis Gervaso wrote:
>>
>>> I have been looking at : http://wiki.openstack.org/SystemUsageData
>>>
>>> On Mon, Apr 23, 2012 at 7:35 PM, Brian Schott
>>> <brian.schott@nimbisservices.com
>>> <mailto:brian.schott@nimbisservices.com>> wrote:
>>>
>>> Is there a document somewhere on what events the services emit?
>>>
>>> -------------------------------------------------
>>> Brian Schott, CTO
>>> Nimbis Services, Inc.
>>> brian.schott@nimbisservices.com
>>> <mailto:brian.schott@nimbisservices.com>
>>> ph: 443-274-6064 fx: 443-274-6060
>>>
>>>
>>>
>>> On Apr 23, 2012, at 12:39 PM, Monsyne Dragon wrote:
>>>
>>>> This already exists in trunk. The Notification system was
>>>> designed specifically to feed billing and monitoring systems.
>>>>
>>>> Basically, we don't want Nova/Glance/etc to be in the business of
>>>> trying to determine billing logic, since it is different for
>>>> pretty much everyone, so we just emit notifications to a queue
>>>> and interested parties pull what they want, and aggregate according
>>>> to their own rules.
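
For illustration, a minimal consumer along the lines Monsyne describes, assuming the default 'nova' topic exchange, the notifications.info routing key mentioned later in this thread, and the pika client; the per-tenant aggregation is only a placeholder:

import json
import pika

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
# The exchange is declared by nova; the parameters here must match its settings.
channel.exchange_declare(exchange="nova", exchange_type="topic", durable=False)
channel.queue_declare(queue="notifications.info")
channel.queue_bind(exchange="nova", queue="notifications.info",
                   routing_key="notifications.info")

def on_event(ch, method, properties, body):
    event = json.loads(body)
    # Aggregate according to your own rules, e.g. count events per tenant.
    print(event.get("event_type"), event.get("payload", {}).get("tenant_id"))
    ch.basic_ack(delivery_tag=method.delivery_tag)

channel.basic_consume(queue="notifications.info", on_message_callback=on_event)
channel.start_consuming()
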
>>>>
>>>> On Apr 22, 2012, at 1:50 PM, Luis Gervaso wrote:
>>>>
>>>>> Hi,
>>>>>
>>>>> I want to share the architecture I am developing in order to
>>>>> support monitoring / billing for OpenStack:
>>>>>
>>>>> 1. An AMQP client which listens to RabbitMQ / Qpid (these should be
>>>>> interchangeable) (own stuff or ServiceMix / Camel)
>>>>
>>>>> 2. Events should be stored in a NoSQL document-oriented database
>>>>> (I think mongodb is perfect, since we can query in a super easy
>>>>> fashion)
>>>> We have an existing system called Yagi
>>>> (https://github.com/Cerberus98/yagi/) that listens to the
>>>> notification queues and persists events to a Redis database. It
>>>> then provides feeds as Atom-formatted documents that a billing
>>>> system can pull to aggregate data. It can also support PubSub
>>>> notification of clients through the PubSubHubbub protocol, and push
>>>> events to a long-term archiving store through the AtomPub protocol.
>>>>
>>>> That said, the notification system outputs its events as JSON, so
>>>> it should be easy to pipe into a JSON document-oriented db if
>>>> that's what you need. (we only use Atom because we have an
>>>> Atom-based archiving/search/aggregation engine (it's open
>>>> source: http://atomhopper.org/ ) our in-house systems already
>>>> plug into. )
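
A sketch of that pipe, assuming pymongo; the collection name is arbitrary, and the payload field names are assumptions based on events shown elsewhere in this thread:

import json
from pymongo import MongoClient

events = MongoClient("localhost", 27017).metering.events

def store(body):
    events.insert_one(json.loads(body))  # store the notification JSON as-is

# Querying "in a super easy fashion", e.g. lifecycle events for one tenant:
for doc in events.find({"event_type": {"$regex": "^compute\\.instance"},
                        "payload.tenant_id": "df3827f76f714b1e8f31675caf84ae9d"}):
    print(doc["event_type"], doc["timestamp"])
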
>>>>
>>>>
>>>>
>>>>> 3a. The monitoring system can pull from / push to MongoDB
>>>>>
>>>>> 3b. The billing system can pull to create invoices
>>>>>
>>>>> 4. A mediation EIP may be necessary to integrate with a
>>>>> billing/monitoring product. (ServiceMix / Camel)
>>>>>
>>>>> I am sharing this to get your feedback. So please, criticism is welcome!
>>>>>
>>>>> Cheers!
>>>>>
>>>>> --
>>>>> -------------------------------------------
>>>>> Luis Alberto Gervaso Martin
>>>>> Woorea Solutions, S.L
>>>>> CEO & CTO
>>>>> mobile: (+34) 627983344
>>>>> luis@woorea.es
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Mailing list: https://launchpad.net/~openstack
>>>>> Post to : openstack@lists.launchpad.net
>>>>> <mailto:openstack@lists.launchpad.net>
>>>>> Unsubscribe : https://launchpad.net/~openstack
>>>>> More help : https://help.launchpad.net/ListHelp
>>>> --
>>>> Monsyne M. Dragon
>>>> OpenStack/Nova
>>>> cell 210-441-0965
>>>> work x 5014190
>>>>
>>>> _______________________________________________
>>>> Mailing list: https://launchpad.net/~openstack
>>>> Post to : openstack@lists.launchpad.net
>>>> <mailto:openstack@lists.launchpad.net>
>>>> Unsubscribe : https://launchpad.net/~openstack
>>>> More help : https://help.launchpad.net/ListHelp
>>>
>>>
>>>
>>> --
>>> -------------------------------------------
>>> Luis Alberto Gervaso Martin
>>> Woorea Solutions, S.L
>>> CEO & CTO
>>> mobile: (+34) 627983344
>>> luis@woorea.es
>>>
>>
>>
>> _______________________________________________
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help : https://help.launchpad.net/ListHelp
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help : https://help.launchpad.net/ListHelp


--
Loïc Dachary Chief Research Officer
// eNovance labs http://labs.enovance.com
// ✉ loic@enovance.com ☎ +33 1 49 70 99 82


_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp
Re: Monitoring / Billing Architecture proposed
On 04/23/2012 10:45 PM, Doug Hellmann wrote:
>
>
> On Mon, Apr 23, 2012 at 4:14 PM, Brian Schott
> <brian.schott@nimbisservices.com
> <mailto:brian.schott@nimbisservices.com>> wrote:
>
> Doug,
>
> Do we mirror the table structure of nova, etc. and add
> created/modified columns?
>
>
> Or do we flatten into an instance event record with everything?
>
>
> I lean towards flattening the data as it is recorded and making a second
> pass during the bill calculation. You need to record instance
> modifications separately from the creation, especially if the
> modification changes the billing rate. So you might have records for:
>
> created instance, with UUID, name, size, timestamp, ownership
> information, etc.
> resized instance, with UUID, name, new size, timestamp, ownership
> information, etc.
> deleted instance, with UUID, name, size, timestamp, ownership
> information, etc.
>
> Maybe some of those values don't need to be reported in some cases, but
> if you record a complete picture of the state of the instance then the
> code that aggregates the event records to produce billing information
> can use it to make decisions about how to record the charges.
>
> There is also the case where an instance is no longer running but
> nova thinks it is (or the reverse), so some sort of auditing sweep needs
> to be included (I think that's what Dough called the "farmer" but I
> don't have my notes in front of me).
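
A minimal sketch of that two-pass idea, with an assumed record layout and made-up hourly rates (not an existing schema):

from datetime import datetime

RATES = {"m1.tiny": 0.02, "m1.small": 0.08}  # $/hour, made up

def bill(records):
    """records: flattened dicts with uuid, event, size, timestamp."""
    charges, open_segments = {}, {}
    for r in sorted(records, key=lambda r: r["timestamp"]):
        uuid = r["uuid"]
        if uuid in open_segments:  # close the running (size, start) segment
            size, start = open_segments.pop(uuid)
            hours = (r["timestamp"] - start).total_seconds() / 3600.0
            charges[uuid] = charges.get(uuid, 0.0) + hours * RATES[size]
        if r["event"] in ("created", "resized"):  # open a new segment
            open_segments[uuid] = (r["size"], r["timestamp"])
    return charges

print(bill([
    {"uuid": "u1", "event": "created", "size": "m1.tiny",
     "timestamp": datetime(2012, 4, 24, 10, 0)},
    {"uuid": "u1", "event": "resized", "size": "m1.small",
     "timestamp": datetime(2012, 4, 24, 12, 0)},
    {"uuid": "u1", "event": "deleted", "size": "m1.small",
     "timestamp": datetime(2012, 4, 24, 13, 0)},
]))  # -> roughly {'u1': 0.12} (2 h at m1.tiny + 1 h at m1.small)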

When I wrote [1], one of the things that I never assumed was how agents
would collect their information. I imagined that the system should allow
for multiple implementations of agents that would collect the same
counters, assuming that 2 implementations for the same counter should
never be running at once.

That said, I am not sure an event based collection of what nova is
notifying would satisfy the requirements I have heard from many cloud
providers:
- how do we ensure that events are not forged or lost in the current nova
system?
- how can I be sure that an instance has not simply crashed and never
started?
- how can I collect information which is not captured by nova events?

Hence the proposal to use a dedicated event queue for billing, allowing
for agents to collect and eventually validate data from different
sources, including, but not necessarily limited to, collection from the
nova events.

Moreover, as soon as you generalize the problem to components other than
just Nova (swift, glance, quantum, daas, ...) just using the nova event
queue is not an option anymore.

[1] http://wiki.openstack.org/EfficientMetering

Nick
Re: Monitoring / Billing Architecture proposed
I think we have support for this currently in some fashion, Dragon?

-S



On 04/24/2012 12:55 AM, Loic Dachary wrote:
> Metering needs to account for the "volume of data sent to external network destinations" (i.e. n4 in http://wiki.openstack.org/EfficientMetering) or disk I/O, etc. This kind of resource is billable.
>
> The information described at http://wiki.openstack.org/SystemUsageData will be used by metering but other data sources need to be harvested as well.

_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp
Re: Monitoring / Billing Architecture proposed
Yes, we emit bandwidth (bytes in/out) on a per-VIF basis from each instance. The event has the somewhat generic name of 'compute.instance.exists' and is emitted on a periodic basis, currently by a cronjob.
Currently, we only populate bandwidth data from XenServer, but if the hook is implemented for the kvm, etc. drivers, it will be picked up automatically for them as well.

Note that we could report other metrics similarly.
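
A consumer interested only in that usage record could filter on the event type; a sketch, where the payload layout (a per-VIF bandwidth dict) is an assumption based on the description above:

import json

def handle(body):
    event = json.loads(body)
    if event.get("event_type") != "compute.instance.exists":
        return
    payload = event["payload"]
    for vif, usage in payload.get("bandwidth", {}).items():
        # bytes in/out for this VIF over the audit period
        print(payload.get("instance_id"), vif,
              usage.get("bw_in", 0), usage.get("bw_out", 0))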


On Apr 24, 2012, at 6:20 AM, Sandy Walsh wrote:

> I think we have support for this currently in some fashion, Dragon?
>
> -S
>
>
>
> On 04/24/2012 12:55 AM, Loic Dachary wrote:
>> Metering needs to account for the "volume of data sent to external network destinations" (i.e. n4 in http://wiki.openstack.org/EfficientMetering) or disk I/O, etc. This kind of resource is billable.
>>
>> The information described at http://wiki.openstack.org/SystemUsageData will be used by metering but other data sources need to be harvested as well.

--
Monsyne M. Dragon
OpenStack/Nova
cell 210-441-0965
work x 5014190


_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp
Re: Monitoring / Billing Architecture proposed
On 04/24/2012 03:06 PM, Monsyne Dragon wrote:
> Yes, we emit bandwidth (bytes in/out) on a per-VIF basis from each instance. The event has the somewhat generic name of 'compute.instance.exists' and is emitted on a periodic basis, currently by a cronjob.
> Currently, we only populate bandwidth data from XenServer, but if the hook is implemented for the kvm, etc. drivers, it will be picked up automatically for them as well.
>
> Note that we could report other metrics similarly.
Hi,

Thanks for clarifying this. So you're suggesting that the metering agent should collect this data from the nova queue instead of extracting it from the system (interface, disk stats, etc.)? And for other OpenStack components (as Nick Barcet suggests below) the metering agent will have to find another way. Or do you have something else in mind?

Cheers

On 04/24/2012 12:17 PM, Nick Barcet wrote:
> On 04/23/2012 10:45 PM, Doug Hellmann wrote:
>> >
>> >
>> > On Mon, Apr 23, 2012 at 4:14 PM, Brian Schott
>> > <brian.schott@nimbisservices.com
>> > <mailto:brian.schott@nimbisservices.com>> wrote:
>> >
>> > Doug,
>> >
>> > Do we mirror the table structure of nova, etc. and add
>> > created/modified columns?
>> >
>> >
>> > Or do we flatten into an instance event record with everything?
>> >
>> >
>> > I lean towards flattening the data as it is recorded and making a second
>> > pass during the bill calculation. You need to record instance
>> > modifications separately from the creation, especially if the
>> > modification changes the billing rate. So you might have records for:
>> >
>> > created instance, with UUID, name, size, timestamp, ownership
>> > information, etc.
>> > resized instance, with UUID, name, new size, timestamp, ownership
>> > information, etc.
>> > deleted instance, with UUID, name, size, timestamp, ownership
>> > information, etc.
>> >
>> > Maybe some of those values don't need to be reported in some cases, but
>> > if you record a complete picture of the state of the instance then the
>> > code that aggregates the event records to produce billing information
>> > can use it to make decisions about how to record the charges.
>> >
>> > There is also the case where an instance is no longer running but
>> > nova thinks it is (or the reverse), so some sort of auditing sweep needs
>> > to be included (I think that's what Dough called the "farmer" but I
>> > don't have my notes in front of me).
> When I wrote [1], one of the things that I never assumed was how agents
> would collect their information. I imagined that the system should allow
> for multiple implementations of agents that would collect the same
> counters, assuming that 2 implementations for the same counter should
> never be running at once.
>
> That said, I am not sure an event based collection of what nova is
> notifying would satisfy the requirements I have heard from many cloud
> providers:
> - how do we ensure that events are not forged or lost in the current nova
> system?
> - how can I be sure that an instance has not simply crashed and never
> started?
> - how can I collect information which is not captured by nova events?
>
> Hence the proposal to use a dedicated event queue for billing, allowing
> for agents to collect and eventually validate data from different
> sources, including, but not necessarily limited to, collection from the
> nova events.
>
> Moreover, as soon as you generalize the problem to components other than
> just Nova (swift, glance, quantum, daas, ...) just using the nova event
> queue is not an option anymore.
>
> [1] http://wiki.openstack.org/EfficientMetering
>
> Nick
>
>

>
> On Apr 24, 2012, at 6:20 AM, Sandy Walsh wrote:
>
>> I think we have support for this currently in some fashion, Dragon?
>>
>> -S
>>
>>
>>
>> On 04/24/2012 12:55 AM, Loic Dachary wrote:
>>> Metering needs to account for the "volume of data sent to external network destinations" (i.e. n4 in http://wiki.openstack.org/EfficientMetering) or disk I/O, etc. This kind of resource is billable.
>>>
>>> The information described at http://wiki.openstack.org/SystemUsageData will be used by metering but other data sources need to be harvested as well.
> --
> Monsyne M. Dragon
> OpenStack/Nova
> cell 210-441-0965
> work x 5014190
>
>
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help : https://help.launchpad.net/ListHelp


--
Loïc Dachary Chief Research Officer
// eNovance labs http://labs.enovance.com
// ✉ loic@enovance.com ☎ +33 1 49 70 99 82
Re: Monitoring / Billing Architecture proposed
On Apr 24, 2012, at 9:03 AM, Loic Dachary wrote:

On 04/24/2012 03:06 PM, Monsyne Dragon wrote:

Yes, we emit bandwidth (bytes in/out) on a per-VIF basis from each instance. The event has the somewhat generic name of 'compute.instance.exists' and is emitted on a periodic basis, currently by a cronjob.
Currently, we only populate bandwidth data from XenServer, but if the hook is implemented for the kvm, etc. drivers, it will be picked up automatically for them as well.

Note that we could report other metrics similarly.


Hi,

Thanks for clarifying this. So you're suggesting that the metering agent should collect this data from the nova queue instead of extracting it from the system (interface, disk stats, etc.)? And for other OpenStack components (as Nick Barcet suggests below) the metering agent will have to find another way. Or do you have something else in mind?

If it's something we have access to, we should emit it in those usage events. As far as the other components, glance is already using the same notification system (there was a thread a while back about putting it into openstack.common). It would be nice to have all of the components using it.
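
For reference, a usage event on the queue is a JSON envelope along these lines; the field set follows http://wiki.openstack.org/SystemUsageData, while the concrete values here are invented (the UUIDs are reused from samples elsewhere in this thread):

example_event = {
    "message_id": "c6c55adb-53b8-45e4-9ba8-fcc8ee82f4a2",  # made up
    "publisher_id": "compute.ubuntu",
    "event_type": "compute.instance.create.start",
    "priority": "INFO",
    "timestamp": "2012-04-24 01:39:52.089383",
    "payload": {
        "instance_id": "40b5a1c5-bd4f-40ee-ae0a-73e0bc927431",
        "tenant_id": "df3827f76f714b1e8f31675caf84ae9d",
        "instance_type": "m1.tiny",
        "memory_mb": 512,
    },
}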

Cheers

On 04/24/2012 12:17 PM, Nick Barcet wrote:

On 04/23/2012 10:45 PM, Doug Hellmann wrote:


>
>
> On Mon, Apr 23, 2012 at 4:14 PM, Brian Schott
> <brian.schott@nimbisservices.com> wrote:
>
> Doug,
>
> Do we mirror the table structure of nova, etc. and add
> created/modified columns?
>
>
> Or do we flatten into an instance event record with everything?
>
>
> I lean towards flattening the data as it is recorded and making a second
> pass during the bill calculation. You need to record instance
> modifications separately from the creation, especially if the
> modification changes the billing rate. So you might have records for:
>
> created instance, with UUID, name, size, timestamp, ownership
> information, etc.
> resized instance, with UUID, name, new size, timestamp, ownership
> information, etc.
> deleted instance, with UUID, name, size, timestamp, ownership
> information, etc.
>
> Maybe some of those values don't need to be reported in some cases, but
> if you record a complete picture of the state of the instance then the
> code that aggregates the event records to produce billing information
> can use it to make decisions about how to record the charges.
>
> There is also the case where an instance is no longer running but
> nova thinks it is (or the reverse), so some sort of auditing sweep needs
> to be included (I think that's what Dough called the "farmer" but I
> don't have my notes in front of me).


When I wrote [1], one of the things that I never assumed was how agents
would collect their information. I imagined that the system should allow
for multiple implementations of agents that would collect the same
counters, assuming that 2 implementations for the same counter should
never be running at once.

That said, I am not sure an event based collection of what nova is
notifying would satisfy the requirements I have heard from many cloud
providers:
- how do we ensure that events are not forged or lost in the current nova
system?
- how can I be sure that an instance has not simply crashed and never
started?
- how can I collect information which is not captured by nova events?

Hence the proposal to use a dedicated event queue for billing, allowing
for agents to collect and eventually validate data from different
sources, including, but not necessarily limited to, collection from the
nova events.

Moreover, as soon as you generalize the problem to components other than
just Nova (swift, glance, quantum, daas, ...) just using the nova event
queue is not an option anymore.

[1] http://wiki.openstack.org/EfficientMetering

Nick






On Apr 24, 2012, at 6:20 AM, Sandy Walsh wrote:



I think we have support for this currently in some fashion, Dragon?

-S



On 04/24/2012 12:55 AM, Loic Dachary wrote:


Metering needs to account for the "volume of data sent to external network destinations" (i.e. n4 in http://wiki.openstack.org/EfficientMetering) or disk I/O, etc. This kind of resource is billable.

The information described at http://wiki.openstack.org/SystemUsageData will be used by metering but other data sources need to be harvested as well.


--
Monsyne M. Dragon
OpenStack/Nova
cell 210-441-0965
work x 5014190


_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp




--
Loïc Dachary Chief Research Officer
// eNovance labs http://labs.enovance.com
// ✉ loic@enovance.com ☎ +33 1 49 70 99 82


_______________________________________________
Mailing list: https://launchpad.net/~openstack
Post to : openstack@lists.launchpad.net
Unsubscribe : https://launchpad.net/~openstack
More help : https://help.launchpad.net/ListHelp

--
Monsyne M. Dragon
OpenStack/Nova
cell 210-441-0965
work x 5014190
Re: Monitoring / Billing Architecture proposed
On 04/24/2012 04:45 PM, Monsyne Dragon wrote:
>
> On Apr 24, 2012, at 9:03 AM, Loic Dachary wrote:
>
>> On 04/24/2012 03:06 PM, Monsyne Dragon wrote:
>>> Yes, we emit bandwidth (bytes in/out) on a per-VIF basis from each instance. The event has the somewhat generic name of 'compute.instance.exists' and is emitted on a periodic basis, currently by a cronjob.
>>> Currently, we only populate bandwidth data from XenServer, but if the hook is implemented for the kvm, etc. drivers, it will be picked up automatically for them as well.
>>>
>>> Note that we could report other metrics similarly.
>> Hi,
>>
>> Thanks for clarifying this. So you're suggesting that the metering agent should collect this data from the nova queue instead of extracting it from the system (interface, disk stats, etc.)? And for other OpenStack components (as Nick Barcet suggests below) the metering agent will have to find another way. Or do you have something else in mind?
>
> If it's something we have access to, we should emit it in those usage events. As far as the other components, glance is already using the same notification system (there was a thread a while back about putting it into openstack.common). It would be nice to have all of the components using it.
>
Hi,

I don't see a section in http://wiki.openstack.org/SystemUsageData about making sure all messages related to a billable event are accounted for. I mean, for instance, what if the event that says an instance is deleted is lost? How is the billing software supposed to cope with that? If it checks the status of all VMs on a regular basis to deal with this, how can it figure out when the missed event occurred?
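
One way to cope is an auditing sweep that reconciles the recorded events against what nova currently reports; a sketch of the idea, where both inputs stand in for real queries:

def audit(nova_instances, recorded_events):
    """nova_instances: set of UUIDs nova says exist.
    recorded_events: {uuid: last lifecycle event seen on the queue}."""
    problems = []
    for uuid, last_event in recorded_events.items():
        if last_event != "deleted" and uuid not in nova_instances:
            # No delete event ever arrived: it was lost, or the instance
            # crashed. When it happened still has to be approximated,
            # e.g. by the audit period in which the instance vanished.
            problems.append((uuid, "missing delete event"))
    for uuid in nova_instances - set(recorded_events):
        problems.append((uuid, "never seen on the billing queue"))
    return problems

print(audit({"u1", "u3"}, {"u1": "created", "u2": "resized"}))
# [('u2', 'missing delete event'), ('u3', 'never seen on the billing queue')]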

It would be worth adding a short section about this in http://wiki.openstack.org/SystemUsageData. Or I can do it if you give me a hint.

Cheers
>> Cheers
>>
>> On 04/24/2012 12:17 PM, Nick Barcet wrote:
>>> On 04/23/2012 10:45 PM, Doug Hellmann wrote:
>>>> >
>>>> >
>>>> > On Mon, Apr 23, 2012 at 4:14 PM, Brian Schott
>>>> > <brian.schott@nimbisservices.com
>>>> > <mailto:brian.schott@nimbisservices.com>> wrote:
>>>> >
>>>> > Doug,
>>>> >
>>>> > Do we mirror the table structure of nova, etc. and add
>>>> > created/modified columns?
>>>> >
>>>> >
>>>> > Or do we flatten into an instance event record with everything?
>>>> >
>>>> >
>>>> > I lean towards flattening the data as it is recorded and making a second
>>>> > pass during the bill calculation. You need to record instance
>>>> > modifications separately from the creation, especially if the
>>>> > modification changes the billing rate. So you might have records for:
>>>> >
>>>> > created instance, with UUID, name, size, timestamp, ownership
>>>> > information, etc.
>>>> > resized instance, with UUID, name, new size, timestamp, ownership
>>>> > information, etc.
>>>> > deleted instance, with UUID, name, size, timestamp, ownership
>>>> > information, etc.
>>>> >
>>>> > Maybe some of those values don't need to be reported in some cases, but
>>>> > if you record a complete picture of the state of the instance then the
>>>> > code that aggregates the event records to produce billing information
>>>> > can use it to make decisions about how to record the charges.
>>>> >
>>>> > There is also the case where an instance is no longer running but
>>>> > nova thinks it is (or the reverse), so some sort of auditing sweep needs
>>>> > to be included (I think that's what Dough called the "farmer" but I
>>>> > don't have my notes in front of me).
>>> When I wrote [1], one of the things that I never assumed was how agents
>>> would collect their information. I imagined that the system should allow
>>> for multiple implementations of agents that would collect the same
>>> counters, assuming that 2 implementations for the same counter should
>>> never be running at once.
>>>
>>> That said, I am not sure an event based collection of what nova is
>>> notifying would satisfy the requirements I have heard from many cloud
>>> providers:
>>> - how do we ensure that events are not forged or lost in the current nova
>>> system?
>>> - how can I be sure that an instance has not simply crashed and never
>>> started?
>>> - how can I collect information which is not captured by nova events?
>>>
>>> Hence the proposal to use a dedicated event queue for billing, allowing
>>> for agents to collect and eventually validate data from different
>>> sources, including, but not necessarily limited to, collection from the
>>> nova events.
>>>
>>> Moreover, as soon as you generalize the problem to components other than
>>> just Nova (swift, glance, quantum, daas, ...) just using the nova event
>>> queue is not an option anymore.
>>>
>>> [1] http://wiki.openstack.org/EfficientMetering
>>>
>>> Nick
>>>
>>>
>>
>>> On Apr 24, 2012, at 6:20 AM, Sandy Walsh wrote:
>>>
>>>> I think we have support for this currently in some fashion, Dragon?
>>>>
>>>> -S
>>>>
>>>>
>>>>
>>>> On 04/24/2012 12:55 AM, Loic Dachary wrote:
>>>>> Metering needs to account for the "volume of data sent to external network destinations" (i.e. n4 in http://wiki.openstack.org/EfficientMetering) or disk I/O, etc. This kind of resource is billable.
>>>>>
>>>>> The information described at http://wiki.openstack.org/SystemUsageData will be used by metering but other data sources need to be harvested as well.
>>> --
>>> Monsyne M. Dragon
>>> OpenStack/Nova
>>> cell 210-441-0965
>>> work x 5014190
>>>
>>>
>>> _______________________________________________
>>> Mailing list: https://launchpad.net/~openstack
>>> Post to : openstack@lists.launchpad.net
>>> Unsubscribe : https://launchpad.net/~openstack
>>> More help : https://help.launchpad.net/ListHelp
>>
>>
>> --
>> Loïc Dachary Chief Research Officer
>> // eNovance labs http://labs.enovance.com
>> // ✉ loic@enovance.com ☎ +33 1 49 70 99 82
>> _______________________________________________
>> Mailing list: https://launchpad.net/~openstack
>> Post to : openstack@lists.launchpad.net
>> Unsubscribe : https://launchpad.net/~openstack
>> More help : https://help.launchpad.net/ListHelp
>
> --
> Monsyne M. Dragon
> OpenStack/Nova
> cell 210-441-0965
> work x 5014190
>


--
Loïc Dachary Chief Research Officer
// eNovance labs http://labs.enovance.com
// ✉ loic@enovance.com ☎ +33 1 49 70 99 82
Re: Monitoring / Billing Architecture proposed
Probably an extra audit system is required. I'm searching for solutions in
the IT market.

Regards

On Tue, Apr 24, 2012 at 6:00 PM, Loic Dachary <loic@enovance.com> wrote:

>
> On 04/24/2012 04:45 PM, Monsyne Dragon wrote:
>
>
> On Apr 24, 2012, at 9:03 AM, Loic Dachary wrote:
>
> On 04/24/2012 03:06 PM, Monsyne Dragon wrote:
>
> Yes, we emit bandwidth (bytes in/out) on a per-VIF basis from each instance. The event has the somewhat generic name of 'compute.instance.exists' and is emitted on a periodic basis, currently by a cronjob.
> Currently, we only populate bandwidth data from XenServer, but if the hook is implemented for the kvm, etc. drivers, it will be picked up automatically for them as well.
>
> Note that we could report other metrics similarly.
>
> Hi,
>
> Thanks for clarifying this. So you're suggesting that the metering agent
> should collect this data from the nova queue instead of extracting it from
> the system (interface, disk stats, etc.)? And for other OpenStack
> components (as Nick Barcet suggests below) the metering agent will have
> to find another way. Or do you have something else in mind?
>
>
> If it's something we have access to, we should emit it in those usage
> events. As far as the other components, glance is already using the same
> notification system (there was a thread a while back about putting it into
> openstack.common). It would be nice to have all of the components using it.
>
>
> Hi,
>
> I don't see a section in http://wiki.openstack.org/SystemUsageData about
> making sure all messages related to a billable event are accounted for. I
> mean, for instance, what if the event that says an instance is deleted is
> lost? How is the billing software supposed to cope with that? If it
> checks the status of all VMs on a regular basis to deal with this, how can
> it figure out when the missed event occurred?
>
> It would be worth adding a short section about this in
> http://wiki.openstack.org/SystemUsageData. Or I can do it if you give me
> a hint.
>
> Cheers
>
> Cheers
>
> On 04/24/2012 12:17 PM, Nick Barcet wrote:
>
> On 04/23/2012 10:45 PM, Doug Hellmann wrote:
>
> > On Mon, Apr 23, 2012 at 4:14 PM, Brian Schott <brian.schott@nimbisservices.com> wrote:
> >
> > Doug,
> >
> > Do we mirror the table structure of nova, etc. and add created/modified columns?
> >
> > Or do we flatten into an instance event record with everything?
> >
> > I lean towards flattening the data as it is recorded and making a second pass during the bill calculation. You need to record instance modifications separately from the creation, especially if the modification changes the billing rate. So you might have records for:
> >
> > created instance, with UUID, name, size, timestamp, ownership information, etc.
> > resized instance, with UUID, name, new size, timestamp, ownership information, etc.
> > deleted instance, with UUID, name, size, timestamp, ownership information, etc.
> >
> > Maybe some of those values don't need to be reported in some cases, but if you record a complete picture of the state of the instance then the code that aggregates the event records to produce billing information can use it to make decisions about how to record the charges.
> >
> > There is also the case where an instance is no longer running but nova thinks it is (or the reverse), so some sort of auditing sweep needs to be included (I think that's what Dough called the "farmer" but I don't have my notes in front of me).
>
> When I wrote [1], one of the things that I never assumed was how agents
> would collect their information. I imagined that the system should allow
> for multiple implementations of agents that would collect the same
> counters, assuming that 2 implementations for the same counter should
> never be running at once.
>
> That said, I am not sure an event based collection of what nova is
> notifying would satisfy the requirements I have heard from many cloud
> providers:
> - how do we ensure that events are not forged or lost in the current nova
> system?
> - how can I be sure that an instance has not simply crashed and never
> started?
> - how can I collect information which is not captured by nova events?
>
> Hence the proposal to use a dedicated event queue for billing, allowing
> for agents to collect and eventually validate data from different
> sources, including, but not necessarily limited to, collection from the
> nova events.
>
> Moreover, as soon as you generalize the problem to components other than
> just Nova (swift, glance, quantum, daas, ...) just using the nova event
> queue is not an option anymore.
>
> [1] http://wiki.openstack.org/EfficientMetering
>
> Nick
>
>
>
>
> On Apr 24, 2012, at 6:20 AM, Sandy Walsh wrote:
>
>
> I think we have support for this currently in some fashion, Dragon?
>
> -S
>
>
>
> On 04/24/2012 12:55 AM, Loic Dachary wrote:
>
> Metering needs to account for the "volume of data sent to external network destinations" (i.e. n4 in http://wiki.openstack.org/EfficientMetering) or disk I/O, etc. This kind of resource is billable.
>
> The information described at http://wiki.openstack.org/SystemUsageData will be used by metering but other data sources need to be harvested as well.
>
> --
> Monsyne M. Dragon
> OpenStack/Nova
> cell 210-441-0965
> work x 5014190
>
>
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help : https://help.launchpad.net/ListHelp
>
>
>
> --
> Loïc Dachary Chief Research Officer
> // eNovance labs http://labs.enovance.com
> // ✉ loic@enovance.com ☎ +33 1 49 70 99 82
>
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help : https://help.launchpad.net/ListHelp
>
>
> --
> Monsyne M. Dragon
> OpenStack/Nova
> cell 210-441-0965
> work x 5014190
>
>
>
> --
> Loïc Dachary Chief Research Officer
> // eNovance labs http://labs.enovance.com
> // ✉ loic@enovance.com ☎ +33 1 49 70 99 82
>
>
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help : https://help.launchpad.net/ListHelp
>
>


--
-------------------------------------------
Luis Alberto Gervaso Martin
Woorea Solutions, S.L
CEO & CTO
mobile: (+34) 627983344
luis@woorea.es
Re: Monitoring / Billing Architecture proposed
Hi Monsyne,

I have set the notification_driver param, but no notification.* queues
are created. I'm using devstack stable/essex.

stack@ubuntu:/$ sudo rabbitmqctl list_queues
Listing queues ...
volume_fanout_e0923a8bbb9f45dc9e63d382796a4c8f 0
cert.ubuntu 0
consoleauth.ubuntu 0
compute 0
compute.ubuntu 0
scheduler.ubuntu 0
network_fanout_1a3d6d9e946b46d1bf64fc38be5a38aa 0
volume.ubuntu 0
compute_fanout_b29d53b516bb4acc9f8fb1bd4a9fc7f1 0
cert 0
scheduler 0
consoleauth_fanout_d0fad95fbd0749929a84830a56551420 0
scheduler_fanout_0d320a2d79404d1d833ac248a8ff3dfc 0
network 0
volume 0
network.ubuntu 0
consoleauth 0
...done.
stack@ubuntu:/$

stack@ubuntu:/$ sudo rabbitmqctl list_exchanges
Listing exchanges ...
consoleauth_fanout fanout
compute_fanout fanout
amq.rabbitmq.trace topic
network_fanout fanout
amq.rabbitmq.log topic
amq.match headers
amq.headers headers
scheduler_fanout fanout
volume_fanout fanout
amq.topic topic
amq.direct direct
amq.fanout fanout
nova topic
direct
...done.
stack@ubuntu:/$


On Tue, Apr 24, 2012 at 2:25 AM, Monsyne Dragon <mdragon@rackspace.com> wrote:

> This looks like just the standard RPC traffic.
> You need to turn notifications on
> (set:
> notification_driver=nova.notifier.rabbit_notifier
> in nova's config file)
>
> and listen on the notification.* queues
>
>
>
> On Apr 23, 2012, at 2:26 PM, Luis Gervaso wrote:
>
> Joshua,
>
> I have performed a create instance operation and here is an example data
> obtained from stable/essex rabbitmq nova catch all exchange.
>
> [*] Waiting for messages. To exit press CTRL+C
>
> [x] Received '{"_context_roles": ["admin"], "_msg_id":
> "a2d13735baad4613b89c6132e0fa8302", "_context_read_deleted": "no",
> "_context_request_id": "req-d7ffbe78-7a9c-4d20-9ac5-3e56951526fe", "args":
> {"instance_id": 6, "instance_uuid": "e3ad17e6-dd59-4b67-a7d0-e3812f96c2d7",
> "host": "ubuntu", "project_id": "c290118b14564257be26a2cb901721a2",
> "rxtx_factor": 1.0}, "_context_auth_token": null, "_context_is_admin":
> true, "_context_project_id": null, "_context_timestamp":
> "2012-03-24T01:36:48.774891", "_context_user_id": null, "method":
> "get_instance_nw_info", "_context_remote_address": null}'
>
> [x] Received '{"_context_roles": ["admin"], "_msg_id":
> "a1cb1cf61e5441c2a772b29d3cd54202", "_context_read_deleted": "no",
> "_context_request_id": "req-db34ba32-8bd9-4cd5-b7b5-43705a9e258e", "args":
> {"instance_id": 6, "instance_uuid": "e3ad17e6-dd59-4b67-a7d0-e3812f96c2d7",
> "host": "ubuntu", "project_id": "c290118b14564257be26a2cb901721a2",
> "rxtx_factor": 1.0}, "_context_auth_token": null, "_context_is_admin":
> true, "_context_project_id": null, "_context_timestamp":
> "2012-03-24T01:37:50.463586", "_context_user_id": null, "method":
> "get_instance_nw_info", "_context_remote_address": null}'
>
> [x] Received '{"_context_roles": ["admin"], "_msg_id":
> "ebb0b1c340de4024a22eafec9d0a2d66", "_context_read_deleted": "no",
> "_context_request_id": "req-ddb51b2b-a29f-4aad-909d-3f7f79f053c4", "args":
> {"instance_id": 6, "instance_uuid": "e3ad17e6-dd59-4b67-a7d0-e3812f96c2d7",
> "host": "ubuntu", "project_id": "c290118b14564257be26a2cb901721a2",
> "rxtx_factor": 1.0}, "_context_auth_token": null, "_context_is_admin":
> true, "_context_project_id": null, "_context_timestamp":
> "2012-03-24T01:38:59.217333", "_context_user_id": null, "method":
> "get_instance_nw_info", "_context_remote_address": null}'
>
> [x] Received '{"_context_roles": ["Member"], "_msg_id":
> "729535c00d224414a98286e9ce3475a9", "_context_read_deleted": "no",
> "_context_request_id": "req-b056a8cc-3542-41a9-9e58-8fb592086264",
> "_context_auth_token": "deb477655fba448e85199f7e559da77a",
> "_context_is_admin": false, "_context_project_id":
> "df3827f76f714b1e8f31675caf84ae9d", "_context_timestamp":
> "2012-03-24T01:39:19.813393", "_context_user_id":
> "abe21eb7e6884547810f0a43c216e6a6", "method":
> "get_floating_ips_by_project", "_context_remote_address": "192.168.1.41"}'
>
> [x] Received '{"_context_roles": ["Member", "admin"],
> "_context_request_id": "req-45e6c2af-52c7-4de3-af6c-6b2f7520cfd5",
> "_context_read_deleted": "no", "args": {"request_spec": {"num_instances":
> 1, "block_device_mapping": [], "image": {"status": "active", "name":
> "cirros-0.3.0-x86_64-uec", "deleted": false, "container_format": "ami",
> "created_at": "2012-03-20 17:37:08", "disk_format": "ami", "updated_at":
> "2012-03-20 17:37:08", "properties": {"kernel_id":
> "6b700d25-3293-420a-82e4-8247d4b0da2a", "ramdisk_id":
> "22b10c35-c868-4470-84ef-54ae9f17a977"}, "min_ram": "0", "checksum":
> "2f81976cae15c16ef0010c51e3a6c163", "min_disk": "0", "is_public": true,
> "deleted_at": null, "id": "f7d4bea2-2aed-4bf3-a5cb-db6a34c4a525", "size":
> 25165824}, "instance_type": {"root_gb": 0, "name": "m1.tiny", "deleted":
> false, "created_at": null, "ephemeral_gb": 0, "updated_at": null,
> "memory_mb": 512, "vcpus": 1, "flavorid": "1", "swap": 0, "rxtx_factor":
> 1.0, "extra_specs": {}, "deleted_at": null, "vcpu_weight": null, "id": 2},
> "instance_properties": {"vm_state": "building", "ephemeral_gb": 0,
> "access_ip_v6": null, "access_ip_v4": null, "kernel_id":
> "6b700d25-3293-420a-82e4-8247d4b0da2a", "key_name": "testssh",
> "ramdisk_id": "22b10c35-c868-4470-84ef-54ae9f17a977", "instance_type_id":
> 2, "user_data": "dGhpcyBpcyBteSB1c2VyIGRhdGE=", "vm_mode": null,
> "display_name": "eureka", "config_drive_id": "", "reservation_id":
> "r-xtzjx50j", "key_data": "ssh-rsa
> AAAAB3NzaC1yc2EAAAADAQABAAAAgQDJ31tdayh1xnAY+JO/ZVdg5L83CsIU7qaOmFubdH7zlg2jjS9JmkPNANj99zx+UHg5F5JKGMef9M8VP/V89D5g0oIjIJtBdFpKOScBo3yJ1vteW5ItImH8h9TldymHf+CWNVY1oNNqzXqAb41xwUUDNvgeXHRZNnE6tmwZO0oC1Q==
> stack@ubuntu\n", "root_gb": 0, "user_id":
> "abe21eb7e6884547810f0a43c216e6a6", "uuid":
> "40b5a1c5-bd4f-40ee-ae0a-73e0bc927431", "root_device_name": null,
> "availability_zone": null, "launch_time": "2012-03-24T01:39:52Z",
> "metadata": {}, "display_description": "eureka", "memory_mb": 512,
> "launch_index": 0, "vcpus": 1, "locked": false, "image_ref":
> "f7d4bea2-2aed-4bf3-a5cb-db6a34c4a525", "architecture": null,
> "power_state": 0, "auto_disk_config": null, "progress": 0, "os_type": null,
> "project_id": "df3827f76f714b1e8f31675caf84ae9d", "config_drive": ""},
> "security_group": ["default"]}, "is_first_time": true, "filter_properties":
> {"scheduler_hints": {}}, "topic": "compute", "admin_password":
> "SKohh79r956J", "injected_files": [], "requested_networks": null},
> "_context_auth_token": "deb477655fba448e85199f7e559da77a",
> "_context_is_admin": false, "_context_project_id":
> "df3827f76f714b1e8f31675caf84ae9d", "_context_timestamp":
> "2012-03-24T01:39:52.089383", "_context_user_id":
> "abe21eb7e6884547810f0a43c216e6a6", "method": "run_instance",
> "_context_remote_address": "192.168.1.41"}'
>
> [x] Received '{"_context_roles": ["Member", "admin"],
> "_context_request_id": "req-45e6c2af-52c7-4de3-af6c-6b2f7520cfd5",
> "_context_read_deleted": "no", "args": {"instance_uuid":
> "40b5a1c5-bd4f-40ee-ae0a-73e0bc927431", "requested_networks": null,
> "is_first_time": true, "admin_password": "SKohh79r956J", "injected_files":
> []}, "_context_auth_token": "deb477655fba448e85199f7e559da77a",
> "_context_is_admin": true, "_context_project_id":
> "df3827f76f714b1e8f31675caf84ae9d", "_context_timestamp":
> "2012-03-24T01:39:52.089383", "_context_user_id":
> "abe21eb7e6884547810f0a43c216e6a6", "method": "run_instance",
> "_context_remote_address": "192.168.1.41"}'
>
> [x] Received '{"_context_roles": ["Member", "admin"], "_msg_id":
> "f40e21507437492f97a02cd25415498a", "_context_read_deleted": "no",
> "_context_request_id": "req-45e6c2af-52c7-4de3-af6c-6b2f7520cfd5", "args":
> {"instance_uuid": "40b5a1c5-bd4f-40ee-ae0a-73e0bc927431", "vpn": false,
> "requested_networks": null, "instance_id": 7, "host": "ubuntu",
> "rxtx_factor": 1.0, "project_id": "df3827f76f714b1e8f31675caf84ae9d"},
> "_context_auth_token": "deb477655fba448e85199f7e559da77a",
> "_context_is_admin": true, "_context_project_id":
> "df3827f76f714b1e8f31675caf84ae9d", "_context_timestamp":
> "2012-03-24T01:39:52.089383", "_context_user_id":
> "abe21eb7e6884547810f0a43c216e6a6", "method": "allocate_for_instance",
> "_context_remote_address": "192.168.1.41"}'
>
> [x] Received '{"_context_roles": ["admin"], "_msg_id":
> "96c3d16edf894a89ae85ed90b0a0858b", "_context_read_deleted": "no",
> "_context_request_id": "req-81c9353b-f912-408e-a297-0e8ad6b8fe10", "args":
> {"instance_id": 6, "instance_uuid": "e3ad17e6-dd59-4b67-a7d0-e3812f96c2d7",
> "host": "ubuntu", "project_id": "c290118b14564257be26a2cb901721a2",
> "rxtx_factor": 1.0}, "_context_auth_token": null, "_context_is_admin":
> true, "_context_project_id": null, "_context_timestamp":
> "2012-03-24T01:40:01.390757", "_context_user_id": null, "method":
> "get_instance_nw_info", "_context_remote_address": null}'
>
> [x] Received '{"_context_roles": ["admin"], "_context_request_id":
> "req-d0707421-7f4e-4f1f-bf89-109ca4625ca5", "_context_read_deleted": "no",
> "args": {"address": "10.0.0.2"}, "_context_auth_token": null,
> "_context_is_admin": true, "_context_project_id": null,
> "_context_timestamp": "2012-03-24T01:40:53.338021", "_context_user_id":
> null, "method": "lease_fixed_ip", "_context_remote_address": null}'
>
> [x] Received '{"_context_roles": ["admin"], "_msg_id":
> "38ad50d1abf445118c60017ee03851f6", "_context_read_deleted": "no",
> "_context_request_id": "req-51cd0d75-17e5-414b-affd-1ca2060cc8cb", "args":
> {"instance_id": 7, "instance_uuid": "40b5a1c5-bd4f-40ee-ae0a-73e0bc927431",
> "host": "ubuntu", "project_id": "df3827f76f714b1e8f31675caf84ae9d",
> "rxtx_factor": 1.0}, "_context_auth_token": null, "_context_is_admin":
> true, "_context_project_id": null, "_context_timestamp":
> "2012-03-24T01:41:07.580157", "_context_user_id": null, "method":
> "get_instance_nw_info", "_context_remote_address": null}'
>
> On Mon, Apr 23, 2012 at 9:23 PM, Doug Hellmann <
> doug.hellmann@dreamhost.com> wrote:
>
>>
>>
>> On Mon, Apr 23, 2012 at 1:50 PM, Brian Schott <
>> brian.schott@nimbisservices.com> wrote:
>>
>>> So, we could build on this. No reason to reinvent, but we might want
>>> to expand the number of events. I'm concerned about things like what
>>> happens when flavors change over time. Maybe the answer is, always append
>>> to the flavor/instance-type table. The code I remember and the admin
>>> interface that Ken wrote allowed you to modify flavors. That would break
>>> billing unless you also track flavor modifications.
>>>
>>
>> That seems like a situation where you would want to denormalize the
>> billing database and record the flavor details along with the rest of the
>> creation event data.
>>
>> Doug
>>
>>
>
>
> --
> -------------------------------------------
> Luis Alberto Gervaso Martin
> Woorea Solutions, S.L
> CEO & CTO
> mobile: (+34) 627983344
> luis@woorea.es
>
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help : https://help.launchpad.net/ListHelp
>
>
> --
> Monsyne M. Dragon
> OpenStack/Nova
> cell 210-441-0965
> work x 5014190
>
>


--
-------------------------------------------
Luis Alberto Gervaso Martin
Woorea Solutions, S.L
CEO & CTO
mobile: (+34) 627983344
luis@woorea.es
Re: Monitoring / Billing Architecture proposed
I take it that the instance manager doesn't generate any kind of heartbeat, so whatever monitoring/archiving service we do should internally poll the status over MQ?


-------------------------------------------------
Brian Schott, CTO
Nimbis Services, Inc.
brian.schott@nimbisservices.com
ph: 443-274-6064 fx: 443-274-6060



On Apr 24, 2012, at 2:10 PM, Luis Gervaso wrote:

> Probably an extra audit system is required. I'm searching for solutions in the IT market.
>
> Regards
>
> On Tue, Apr 24, 2012 at 6:00 PM, Loic Dachary <loic@enovance.com> wrote:
> On 04/24/2012 04:45 PM, Monsyne Dragon wrote:
>>
>>
>> On Apr 24, 2012, at 9:03 AM, Loic Dachary wrote:
>>
>>> On 04/24/2012 03:06 PM, Monsyne Dragon wrote:
>>>>
>>>> Yes, we emit bandwidth (bytes in/out) on a per-VIF basis from each instance. The event has the somewhat generic name of 'compute.instance.exists' and is emitted on a periodic basis, currently by a cronjob.
>>>> Currently, we only populate bandwidth data from XenServer, but if the hook is implemented for the kvm, etc. drivers, it will be picked up automatically for them as well.
>>>>
>>>> Note that we could report other metrics similarly.
>>> Hi,
>>>
>>> Thanks for clarifying this. So you're suggesting that the metering agent should collect this data from the nova queue instead of extracting it from the system (interface, disk stats, etc.)? And for other OpenStack components (as Nick Barcet suggests below) the metering agent will have to find another way. Or do you have something else in mind?
>>
>> If it's something we have access to, we should emit it in those usage events. As far as the other components, glance is already using the same notification system (there was a thread a while back about putting it into openstack.common). It would be nice to have all of the components using it.
>>
> Hi,
>
> I don't see a section in http://wiki.openstack.org/SystemUsageData about making sure all messages related to a billable event are accounted for. I mean, for instance, what if the event that says an instance is deleted is lost? How is the billing software supposed to cope with that? If it checks the status of all VMs on a regular basis to deal with this, how can it figure out when the missed event occurred?
>
> It would be worth adding a short section about this in http://wiki.openstack.org/SystemUsageData. Or I can do it if you give me a hint.
>
> Cheers
>
>>> Cheers
>>>
>>> On 04/24/2012 12:17 PM, Nick Barcet wrote:
>>>>
>>>> On 04/23/2012 10:45 PM, Doug Hellmann wrote:
>>>>> >
>>>>> >
>>>>> > On Mon, Apr 23, 2012 at 4:14 PM, Brian Schott
>>>>> > <brian.schott@nimbisservices.com
>>>>> > <mailto:brian.schott@nimbisservices.com>> wrote:
>>>>> >
>>>>> > Doug,
>>>>> >
>>>>> > Do we mirror the table structure of nova, etc. and add
>>>>> > created/modified columns?
>>>>> >
>>>>> >
>>>>> > Or do we flatten into an instance event record with everything?
>>>>> >
>>>>> >
>>>>> > I lean towards flattening the data as it is recorded and making a second
>>>>> > pass during the bill calculation. You need to record instance
>>>>> > modifications separately from the creation, especially if the
>>>>> > modification changes the billing rate. So you might have records for:
>>>>> >
>>>>> > created instance, with UUID, name, size, timestamp, ownership
>>>>> > information, etc.
>>>>> > resized instance, with UUID, name, new size, timestamp, ownership
>>>>> > information, etc.
>>>>> > deleted instance, with UUID, name, size, timestamp, ownership
>>>>> > information, etc.
>>>>> >
>>>>> > Maybe some of those values don't need to be reported in some cases, but
>>>>> > if you record a complete picture of the state of the instance then the
>>>>> > code that aggregates the event records to produce billing information
>>>>> > can use it to make decisions about how to record the charges.
>>>>> >
>>>>> > There is also the case where an instance is no longer running but
>>>>> > nova thinks it is (or the reverse), so some sort of auditing sweep needs
>>>>> > to be included (I think that's what Dough called the "farmer" but I
>>>>> > don't have my notes in front of me).
>>>> When I wrote [1], one of the things that I never assumed was how agents
>>>> would collect their information. I imagined that the system should allow
>>>> for multiple implementations of agents that would collect the same
>>>> counters, assuming that 2 implementations for the same counter should
>>>> never be running at once.
>>>>
>>>> That said, I am not sure an event based collection of what nova is
>>>> notifying would satisfy the requirements I have heard from many cloud
>>>> providers:
>>>> - how do we ensure that events are not forged or lost in the current nova
>>>> system?
>>>> - how can I be sure that an instance has not simply crashed and never
>>>> started?
>>>> - how can I collect information which is not captured by nova events?
>>>>
>>>> Hence the proposal to use a dedicated event queue for billing, allowing
>>>> for agents to collect and eventually validate data from different
>>>> sources, including, but not necessarily limited to, collection from the
>>>> nova events.
>>>>
>>>> Moreover, as soon as you generalize the problem to components other than
>>>> just Nova (swift, glance, quantum, daas, ...) just using the nova event
>>>> queue is not an option anymore.
>>>>
>>>> [1] http://wiki.openstack.org/EfficientMetering
>>>>
>>>> Nick
>>>>
>>>>
>>>
>>>> On Apr 24, 2012, at 6:20 AM, Sandy Walsh wrote:
>>>>
>>>>> I think we have support for this currently in some fashion, Dragon?
>>>>>
>>>>> -S
>>>>>
>>>>>
>>>>>
>>>>> On 04/24/2012 12:55 AM, Loic Dachary wrote:
>>>>>> Metering needs to account for the "volume of data sent to external network destinations" (i.e. n4 in http://wiki.openstack.org/EfficientMetering) or disk I/O, etc. This kind of resource is billable.
>>>>>>
>>>>>> The information described at http://wiki.openstack.org/SystemUsageData will be used by metering but other data sources need to be harvested as well.
>>>> --
>>>> Monsyne M. Dragon
>>>> OpenStack/Nova
>>>> cell 210-441-0965
>>>> work x 5014190
>>>>
>>>>
>>>> _______________________________________________
>>>> Mailing list: https://launchpad.net/~openstack
>>>> Post to : openstack@lists.launchpad.net
>>>> Unsubscribe : https://launchpad.net/~openstack
>>>> More help : https://help.launchpad.net/ListHelp
>>>
>>>
>>> --
>>> Loïc Dachary Chief Research Officer
>>> // eNovance labs http://labs.enovance.com
>>> // ✉ loic@enovance.com ☎ +33 1 49 70 99 82
>>> _______________________________________________
>>> Mailing list: https://launchpad.net/~openstack
>>> Post to : openstack@lists.launchpad.net
>>> Unsubscribe : https://launchpad.net/~openstack
>>> More help : https://help.launchpad.net/ListHelp
>>
>> --
>> Monsyne M. Dragon
>> OpenStack/Nova
>> cell 210-441-0965
>> work x 5014190
>>
>
>
> --
> Loïc Dachary Chief Research Officer
> // eNovance labs http://labs.enovance.com
> // ✉ loic@enovance.com ☎ +33 1 49 70 99 82
>
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help : https://help.launchpad.net/ListHelp
>
>
>
>
> --
> -------------------------------------------
> Luis Alberto Gervaso Martin
> Woorea Solutions, S.L
> CEO & CTO
> mobile: (+34) 627983344
> luis@woorea.es
>
> _______________________________________________
> Mailing list: https://launchpad.net/~openstack
> Post to : openstack@lists.launchpad.net
> Unsubscribe : https://launchpad.net/~openstack
> More help : https://help.launchpad.net/ListHelp
Re: Monitoring / Billing Architecture proposed
These kinds of messages arrive from the nova exchange approximately every 60 seconds.

Can this be considered a heartbeat for you?

[x] Received '{"_context_roles": ["admin"], "_msg_id": "
a2d13735baad4613b89c6132e0fa8302", "_context_read_deleted": "no",
"_context_request_id": "req-d7ffbe78-7a9c-4d20-9ac5-3e56951526fe", "args":
{"instance_id": 6, "instance_uuid": "e3ad17e6-dd59-4b67-a7d0-e3812f96c2d7",
"host": "ubuntu", "project_id": "c290118b14564257be26a2cb901721a2",
"rxtx_factor": 1.0}, "_context_auth_token": null, "_context_is_admin":
true, "_context_project_id": null, "_context_timestamp":
"2012-03-24T01:36:48.774891", "_context_user_id": null, "method":
"get_instance_nw_info", "_context_remote_address": null}'



On Tue, Apr 24, 2012 at 9:31 PM, Brian Schott <
brian.schott@nimbisservices.com> wrote:

> I take it that the instance manager doesn't generate any kind of
> heartbeat, so whatever monitoring/archiving service we do should internally
> poll the status over MQ?
>
>
> -------------------------------------------------
> Brian Schott, CTO
> Nimbis Services, Inc.
> brian.schott@nimbisservices.com
> ph: 443-274-6064 fx: 443-274-6060
>
>
>
> On Apr 24, 2012, at 2:10 PM, Luis Gervaso wrote:
>
> Probably an extra audit system is required. I'm searching for solutions in
> the IT market.
>
> Regards
>
> On Tue, Apr 24, 2012 at 6:00 PM, Loic Dachary <loic@enovance.com> wrote:
>
>>
>> On 04/24/2012 04:45 PM, Monsyne Dragon wrote:
>>
>>
>> On Apr 24, 2012, at 9:03 AM, Loic Dachary wrote:
>>
>> On 04/24/2012 03:06 PM, Monsyne Dragon wrote:
>>
>> Yes, we emit bandwidth (bytes in/out) on a per-VIF basis from each instance. The event has the somewhat generic name of 'compute.instance.exists' and is emitted on a periodic basis, currently by a cronjob.
>> Currently, we only populate bandwidth data from XenServer, but if the hook is implemented for the kvm, etc. drivers, it will be picked up automatically for them as well.
>>
>> Note that we could report other metrics similarly.
>>
>> Hi,
>>
>> Thanks for clarifying this. So you're suggesting that the metering agent
>> should collect this data from the nova queue instead of extracting it from
>> the system (interface, disk stats, etc.)? And for other OpenStack
>> components (as Nick Barcet suggests below) the metering agent will have
>> to find another way. Or do you have something else in mind?
>>
>>
>> If it's something we have access to, we should emit it in those usage
>> events. As far as the other components, glance is already using the same
>> notification system (there was a thread a while back about putting it into
>> openstack.common). It would be nice to have all of the components using it.
>>
>>
>> Hi,
>>
>> I don't see a section in http://wiki.openstack.org/SystemUsageData about
>> making sure all messages related to a billable event are accounted for. I
>> mean, for instance, what if the event that says an instance is deleted is
>> lost? How is the billing software supposed to cope with that? If it
>> checks the status of all VMs on a regular basis to deal with this, how can
>> it figure out when the missed event occurred?
>>
>> It would be worth adding a short section about this in
>> http://wiki.openstack.org/SystemUsageData. Or I can do it if you give
>> me a hint.
>>
>> Cheers
>>
>> Cheers
>>
>> On 04/24/2012 12:17 PM, Nick Barcet wrote:
>>
>> On 04/23/2012 10:45 PM, Doug Hellmann wrote:
>>
>> > On Mon, Apr 23, 2012 at 4:14 PM, Brian Schott
>> > <brian.schott@nimbisservices.com> wrote:
>> >
>> > Doug,
>> >
>> > Do we mirror the table structure of nova, etc. and add
>> > created/modified columns?
>> >
>> > Or do we flatten into an instance event record with everything?
>> >
>> > I lean towards flattening the data as it is recorded and making a second
>> > pass during the bill calculation. You need to record instance
>> > modifications separately from the creation, especially if the
>> > modification changes the billing rate. So you might have records for:
>> >
>> > created instance, with UUID, name, size, timestamp, ownership
>> > information, etc.
>> > resized instance, with UUID, name, new size, timestamp, ownership
>> > information, etc.
>> > deleted instance, with UUID, name, size, timestamp, ownership
>> > information, etc.
>> >
>> > Maybe some of those values don't need to be reported in some cases, but
>> > if you record a complete picture of the state of the instance then the
>> > code that aggregates the event records to produce billing information
>> > can use it to make decisions about how to record the charges.
>> >
>> > There is also the case where an instance is no longer running but
>> > nova thinks it is (or the reverse), so some sort of auditing sweep needs
>> > to be included (I think that's what Doug called the "farmer" but I
>> > don't have my notes in front of me).
>>
>> When I wrote [1], one of the things that I never assumed was how agents
>> would collect their information. I imagined that the system should allow
>> for multiple implementations of agents that would collect the same
>> counters, assuming that two implementations for the same counter should
>> never be running at once.
>>
>> That said, I am not sure an event-based collection of what nova is
>> notifying would satisfy the requirements I have heard from many cloud
>> providers:
>> - how do we ensure that events are not forged or lost in the current nova
>> system?
>> - how can I be sure that an instance has not simply crashed and never
>> started?
>> - how can I collect information which is not captured by nova events?
>>
>> Hence the proposal to use a dedicated event queue for billing, allowing
>> agents to collect and eventually validate data from different
>> sources, including, but not necessarily limited to, collection from the
>> nova events.
>>
>> Moreover, as soon as you generalize the problem to components other than
>> just Nova (swift, glance, quantum, daas, ...), just using the nova event
>> queue is not an option anymore.
>>
>> [1] http://wiki.openstack.org/EfficientMetering
>>
>> Nick
>>
>>
>>
>>
>> On Apr 24, 2012, at 6:20 AM, Sandy Walsh wrote:
>>
>>
>> I think we have support for this currently in some fashion, Dragon?
>>
>> -S
>>
>>
>>
>> On 04/24/2012 12:55 AM, Loic Dachary wrote:
>>
>> Metering needs to account for the "volume of data sent to external network destinations" (i.e. n4 in http://wiki.openstack.org/EfficientMetering ) or the disk I/O, etc. This kind of resource is billable.
>>
>> The information described at http://wiki.openstack.org/SystemUsageData will be used by metering, but other data sources need to be harvested as well.
>>
>> --
>> Monsyne M. Dragon
>> OpenStack/Nova
>> cell 210-441-0965
>> work x 5014190
>>
>>
>>
>>
>> --
>> Loïc Dachary Chief Research Officer
>> // eNovance labs http://labs.enovance.com
>> // ✉ loic@enovance.com ☎ +33 1 49 70 99 82
>>
>>
>>
>> --
>> Monsyne M. Dragon
>> OpenStack/Nova
>> cell 210-441-0965
>> work x 5014190
>>
>>
>>
>> --
>> Loïc Dachary Chief Research Officer
>> // eNovance labs http://labs.enovance.com
>> // ✉ loic@enovance.com ☎ +33 1 49 70 99 82
>>
>>
>>
>>
>
>
> --
> -------------------------------------------
> Luis Alberto Gervaso Martin
> Woorea Solutions, S.L
> CEO & CTO
> mobile: (+34) 627983344
> luis@woorea.es
>
>
>
>


--
-------------------------------------------
Luis Alberto Gervaso Martin
Woorea Solutions, S.L
CEO & CTO
mobile: (+34) 627983344
luis@woorea.es
Re: Monitoring / Billing Architecture proposed [ In reply to ]
Yeah, but does that mean the instance is alive and billable :-)? I guess that counts! I thought they were only in response to external API/admin requests.

-------------------------------------------------
Brian Schott, CTO
Nimbis Services, Inc.
brian.schott@nimbisservices.com
ph: 443-274-6064 fx: 443-274-6060



On Apr 24, 2012, at 3:42 PM, Luis Gervaso wrote:

> Messages like this arrive from the nova exchange approximately every 60 seconds.
>
> Could this be considered a heartbeat for your purposes?
>
> [x] Received '{"_context_roles": ["admin"], "_msg_id": "a2d13735baad4613b89c6132e0fa8302", "_context_read_deleted": "no", "_context_request_id": "req-d7ffbe78-7a9c-4d20-9ac5-3e56951526fe", "args": {"instance_id": 6, "instance_uuid": "e3ad17e6-dd59-4b67-a7d0-e3812f96c2d7", "host": "ubuntu", "project_id": "c290118b14564257be26a2cb901721a2", "rxtx_factor": 1.0}, "_context_auth_token": null, "_context_is_admin": true, "_context_project_id": null, "_context_timestamp": "2012-03-24T01:36:48.774891", "_context_user_id": null, "method": "get_instance_nw_info", "_context_remote_address": null}'
Re: Monitoring / Billing Architecture proposed [ In reply to ]
I think so.

Whether or not it is sent in response to a request, it effectively serves as a heartbeat.
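As a rough illustration of using that periodic traffic as a liveness signal (illustrative only; as Dragon points out later in the thread, these are internal RPC calls, so a real metering agent should prefer the notification events):

    # Track the last time any message mentioned an instance, and flag
    # instances not seen within a grace period. Field names match the
    # RPC message shown earlier in the thread; everything else is invented.
    import time

    last_seen = {}  # instance_uuid -> unix timestamp

    def touch(message):
        uuid = message.get('args', {}).get('instance_uuid')
        if uuid:
            last_seen[uuid] = time.time()

    def stale(grace=180):
        now = time.time()
        return [u for u, t in last_seen.items() if now - t > grace]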

On Tue, Apr 24, 2012 at 9:46 PM, Brian Schott <brian.schott@nimbisservices.com> wrote:

> Yeah, but does that mean the instance is alive and billable :-)? I guess
> that counts! I thought they were only in response to external API/admin
> requests.
>
> -------------------------------------------------
> Brian Schott, CTO
> Nimbis Services, Inc.
> brian.schott@nimbisservices.com
> ph: 443-274-6064 fx: 443-274-6060


--
-------------------------------------------
Luis Alberto Gervaso Martin
Woorea Solutions, S.L
CEO & CTO
mobile: (+34) 627983344
luis@woorea.es
Re: Monitoring / Billing Architecture proposed [ In reply to ]
On Apr 24, 2012, at 11:00 AM, Loic Dachary wrote:

On 04/24/2012 04:45 PM, Monsyne Dragon wrote:

On Apr 24, 2012, at 9:03 AM, Loic Dachary wrote:

On 04/24/2012 03:06 PM, Monsyne Dragon wrote:

Yes, we emit bandwidth (bytes in/out) on a per-VIF basis from each instance. The event has the somewhat generic name of 'compute.instance.exists' and is emitted on a periodic basis, currently by a cronjob.
Currently, we only populate bandwidth data from XenServer, but if the hook is implemented for the KVM, etc. drivers, it will be picked up automatically for them as well.

Note that we could report other metrics similarly.


Hi,

Thanks for clarifying this. So you're suggesting that the metering agent should collect this data from the nova queue instead of extracting it from the system (interface, disk stats, etc.)? And for other openstack components (as Nick Barcet suggests below) the metering agent will have to find another way. Or do you have something else in mind?

If it's something we have access to, we should emit it in those usage events. As for the other components, glance is already using the same notification system (there was a thread a while back about putting it into openstack.common). It would be nice to have all of the components using it.

Hi,

I don't see a section in http://wiki.openstack.org/SystemUsageData about making sure all messages related to a billable event are accounted for. I mean, for instance, what if the event that says an instance is deleted is lost? How is the billing software supposed to cope with that? If it checks the status of all VMs on a regular basis to deal with this, how can it figure out when the missed event occurred?

First, we use a reliable queueing mechanism to prevent that. Second, there are periodic audit events that act as a check (and they also contain usage data for the period, such as bandwidth).
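To make the check concrete, a reconciliation pass along these lines could flag lost lifecycle events against the periodic audit stream (a sketch only; event and field names follow http://wiki.openstack.org/SystemUsageData , and the inputs stand in for whatever the metering database holds):

    # Sketch: flag instances that appear in 'compute.instance.exists' audit
    # events but have no recorded create/delete history, i.e. candidates
    # for a lost lifecycle event during that audit period.
    def audit_gaps(lifecycle_events, exists_events):
        known = set()
        for e in lifecycle_events:
            if e['event_type'] in ('compute.instance.create.end',
                                   'compute.instance.delete.end'):
                known.add(e['payload']['instance_id'])
        gaps = []
        for e in exists_events:
            p = e['payload']
            if p['instance_id'] not in known:
                gaps.append((p['instance_id'],
                             p['audit_period_beginning'],
                             p['audit_period_ending']))
        return gaps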


It would be worth adding a short section about this in http://wiki.openstack.org/SystemUsageData . Or I can do it if you give me a hint.

Cheers

--
Monsyne M. Dragon
OpenStack/Nova
cell 210-441-0965
work x 5014190
Re: Monitoring / Billing Architecture proposed [ In reply to ]
On Apr 24, 2012, at 2:31 PM, Brian Schott wrote:

I take it that the instance manager doesn't generate any kind of heartbeat, so whatever monitoring/archiving service we do should internally poll the status over MQ?


Actually, it generates periodic 'heartbeat' events ('exists' events) for each instance that existed during a given audit period.
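For reference, the shape of such an 'exists' event, sketched from the fields described at http://wiki.openstack.org/SystemUsageData (values here are invented, and the exact key set should be verified against the wiki):

    # Hypothetical 'compute.instance.exists' audit event (values invented).
    exists_event = {
        'event_type': 'compute.instance.exists',
        'publisher_id': 'compute.ubuntu',
        'priority': 'INFO',
        'payload': {
            'instance_id': 'e3ad17e6-dd59-4b67-a7d0-e3812f96c2d7',
            'instance_type': 'm1.small',
            'audit_period_beginning': '2012-04-24T00:00:00.000000',
            'audit_period_ending': '2012-04-25T00:00:00.000000',
            'bandwidth': {'public': {'bw_in': 123456, 'bw_out': 654321}},
        },
    }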




--
Monsyne M. Dragon
OpenStack/Nova
cell 210-441-0965
work x 5014190
Re: Monitoring / Billing Architecture proposed [ In reply to ]
On Apr 24, 2012, at 2:46 PM, Brian Schott wrote:

Yeah, but does that mean the instance is alive and billable :-)? I guess that counts! I thought they were only in response to external API/admin requests.




Actually, that is simply an RPC call message, unrelated to notifications.
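The distinction is easy to apply mechanically: RPC casts carry 'method'/'args' keys (as in the message quoted below), while notifications carry 'event_type'/'payload'. A consumer can filter on that; this helper is illustrative, not nova code:

    # Separate billing-relevant notifications from internal RPC chatter.
    def classify(message):
        if 'event_type' in message:
            return 'notification'  # e.g. compute.instance.exists
        if 'method' in message:
            return 'rpc'           # internal call; not a usage event
        return 'unknown'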




On Apr 24, 2012, at 3:42 PM, Luis Gervaso wrote:

Messages like this arrive from the nova exchange approximately every 60 seconds.

Could this be considered a heartbeat for your purposes?

[x] Received '{"_context_roles": ["admin"], "_msg_id": "a2d13735baad4613b89c6132e0fa8302", "_context_read_deleted": "no", "_context_request_id": "req-d7ffbe78-7a9c-4d20-9ac5-3e56951526fe", "args": {"instance_id": 6, "instance_uuid": "e3ad17e6-dd59-4b67-a7d0-e3812f96c2d7", "host": "ubuntu", "project_id": "c290118b14564257be26a2cb901721a2", "rxtx_factor": 1.0}, "_context_auth_token": null, "_context_is_admin": true, "_context_project_id": null, "_context_timestamp": "2012-03-24T01:36:48.774891", "_context_user_id": null, "method": "get_instance_nw_info", "_context_remote_address": null}'




--
Monsyne M. Dragon
OpenStack/Nova
cell 210-441-0965
work x 5014190
