Before we get to walking through these templates, we need to enable the Load Balancer as a Service (LBaaS) functionality of Neutron. Packstack does not configure it when it installs Neutron. There are a couple of configuration files to be updated and a couple of services to restart. First off, ensure that HAProxy is installed on the control node:
control# yum install -y haproxy
Note that the contents of the files referenced in this chapter should not be replaced in their entirety. The configuration options listed here are intended to be updated and the rest of each file left intact. If an edited file ends up containing only the options shown here, LBaaS will not be enabled properly and this Heat template will fail to launch.
Next, edit /etc/neutron/neutron.conf on the control node and add the value lbaas to the service_plugins configuration option. If there are already values listed, leave them in place and append lbaas to the comma-delimited list. Mine was commented out without a value, so I just added lbaas as the only value. If yours is not commented out, be sure to keep the plugins that are already listed. With only lbaas enabled, it would look like this:
service_plugins = lbaas
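If other plugins were already listed, the result would be a comma-delimited list. For instance, with the router plugin also enabled (a hypothetical starting value; keep whatever your file actually lists), the line would read:

```ini
service_plugins = router,lbaas
```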
With more than lbaas enabled, it would be a comma-delimited list of plugins. Next, edit /etc/neutron/lbaas_agent.ini and make sure that the device_driver option is set to the HAProxy namespace driver, that interface_driver is set to the OVS interface driver, and that user_group in the [haproxy] section is set to haproxy:
[DEFAULT]
device_driver = neutron.services.loadbalancer.drivers.haproxy.namespace_driver.HaproxyNSDriver
interface_driver = neutron.agent.linux.interface.OVSInterfaceDriver

[haproxy]
user_group = haproxy
Finally, restart the Neutron server and the LBaaS agent services:
control# service neutron-server restart
control# service neutron-lbaas-agent restart
Now that the LBaaS service is enabled, let's take a look at the autoscaling.yaml file, which was pulled from https://github.com/openstack/heat-templates/blob/master/hot/autoscaling.yaml. Here are the contents; a walkthrough follows:
heat_template_version: 2013-05-23
description: AutoScaling Wordpress
parameters:
  image:
    type: string
    description: Image used for servers
  key:
    type: string
    description: SSH key to connect to the servers
  flavor:
    type: string
    description: flavor used by the web servers
  database_flavor:
    type: string
    description: flavor used by the db server
  network:
    type: string
    description: Network used by the server
  subnet_id:
    type: string
    description: subnet on which the load balancer will be located
  database_name:
    type: string
    description: Name of the wordpress DB
    default: wordpress
  database_user:
    type: string
    description: Name of the wordpress user
    default: wordpress
  external_network_id:
    type: string
    description: UUID of a Neutron external network
resources:
  database_password:
    type: OS::Heat::RandomString
  database_root_password:
    type: OS::Heat::RandomString
  db:
    type: OS::Nova::Server
    properties:
      flavor: {get_param: database_flavor}
      image: {get_param: image}
      key_name: {get_param: key}
      networks: [{network: {get_param: network} }]
      user_data_format: RAW
      user_data:
        str_replace:
          template: |
            #!/bin/bash -v
            yum -y install mariadb mariadb-server
            systemctl enable mariadb.service
            systemctl start mariadb.service
            mysqladmin -u root password $db_rootpassword
            cat << EOF | mysql -u root --password=$db_rootpassword
            CREATE DATABASE $db_name;
            GRANT ALL PRIVILEGES ON $db_name.* TO "$db_user"@"%"
            IDENTIFIED BY "$db_password";
            FLUSH PRIVILEGES;
            EXIT
            EOF
          params:
            $db_rootpassword: {get_attr: [database_root_password, value]}
            $db_name: {get_param: database_name}
            $db_user: {get_param: database_user}
            $db_password: {get_attr: [database_password, value]}
  asg:
    type: OS::Heat::AutoScalingGroup
    properties:
      min_size: 1
      max_size: 3
      resource:
        type: lb_server.yaml
        properties:
          flavor: {get_param: flavor}
          image: {get_param: image}
          key_name: {get_param: key}
          network: {get_param: network}
          pool_id: {get_resource: pool}
          metadata: {"metering.stack": {get_param: "OS::stack_id"}}
          user_data:
            str_replace:
              template: |
                #!/bin/bash -v
                yum -y install httpd wordpress
                systemctl enable httpd.service
                systemctl start httpd.service
                setsebool -P httpd_can_network_connect_db=1
                sed -i "/Deny from All/d" /etc/httpd/conf.d/wordpress.conf
                sed -i "s/Require local/Require all granted/" /etc/httpd/conf.d/wordpress.conf
                sed -i s/database_name_here/$db_name/ /etc/wordpress/wp-config.php
                sed -i s/username_here/$db_user/ /etc/wordpress/wp-config.php
                sed -i s/password_here/$db_password/ /etc/wordpress/wp-config.php
                sed -i s/localhost/$db_host/ /etc/wordpress/wp-config.php
                systemctl restart httpd.service
              params:
                $db_name: {get_param: database_name}
                $db_user: {get_param: database_user}
                $db_password: {get_attr: [database_password, value]}
                $db_host: {get_attr: [db, first_address]}
  web_server_scaleup_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: asg}
      cooldown: 60
      scaling_adjustment: 1
  web_server_scaledown_policy:
    type: OS::Heat::ScalingPolicy
    properties:
      adjustment_type: change_in_capacity
      auto_scaling_group_id: {get_resource: asg}
      cooldown: 60
      scaling_adjustment: -1
  cpu_alarm_high:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale-up if the average CPU > 50% for 1 minute
      meter_name: cpu_util
      statistic: avg
      period: 60
      evaluation_periods: 1
      threshold: 50
      alarm_actions:
        - {get_attr: [web_server_scaleup_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
      comparison_operator: gt
  cpu_alarm_low:
    type: OS::Ceilometer::Alarm
    properties:
      description: Scale-down if the average CPU < 15% for 10 minutes
      meter_name: cpu_util
      statistic: avg
      period: 600
      evaluation_periods: 1
      threshold: 15
      alarm_actions:
        - {get_attr: [web_server_scaledown_policy, alarm_url]}
      matching_metadata: {'metadata.user_metadata.stack': {get_param: "OS::stack_id"}}
      comparison_operator: lt
  monitor:
    type: OS::Neutron::HealthMonitor
    properties:
      type: TCP
      delay: 5
      max_retries: 5
      timeout: 5
  pool:
    type: OS::Neutron::Pool
    properties:
      protocol: HTTP
      monitors: [{get_resource: monitor}]
      subnet_id: {get_param: subnet_id}
      lb_method: ROUND_ROBIN
      vip:
        protocol_port: 80
  lb:
    type: OS::Neutron::LoadBalancer
    properties:
      protocol_port: 80
      pool_id: {get_resource: pool}
  # assign a floating ip address to the load balancer
  # pool.
  lb_floating:
    type: OS::Neutron::FloatingIP
    properties:
      floating_network_id: {get_param: external_network_id}
      port_id: {get_attr: [pool, vip, port_id]}
outputs:
  scale_up_url:
    description: >
      This URL is the webhook to scale up the autoscaling group. You
      can invoke the scale-up operation by doing an HTTP POST to this
      URL; no body nor extra headers are needed.
    value: {get_attr: [web_server_scaleup_policy, alarm_url]}
  scale_dn_url:
    description: >
      This URL is the webhook to scale down the autoscaling group.
      You can invoke the scale-down operation by doing an HTTP POST to
      this URL; no body nor extra headers are needed.
    value: {get_attr: [web_server_scaledown_policy, alarm_url]}
  pool_ip_address:
    value: {get_attr: [pool, vip, address]}
    description: The IP address of the load balancing pool
  website_url:
    value:
      str_replace:
        template: http://host/wordpress/
        params:
          host: { get_attr: [lb_floating, floating_ip_address] }
    description: >
      This URL is the "external" URL that can be used to access the
      Wordpress site.
  ceilometer_query:
    value:
      str_replace:
        template: >
          ceilometer statistics -m cpu_util
          -q metadata.user_metadata.stack=stackval -p 600 -a avg
        params:
          stackval: { get_param: "OS::stack_id" }
    description: >
      This is a Ceilometer query for statistics on the cpu_util meter
      Samples about OS::Nova::Server instances in this stack. The -q
      parameter selects Samples according to the subject's metadata.
      When a VM's metadata includes an item of the form metering.X=Y,
      the corresponding Ceilometer resource has a metadata item of the
      form user_metadata.X=Y and samples about resources so tagged can
      be queried with a Ceilometer query term of the form
      metadata.user_metadata.X=Y. In this case the nested stacks give
      their VMs metadata that is passed as a nested stack parameter,
      and this stack passes a metadata of the form metering.stack=Y,
      where Y is this stack's ID.
You will see that the parameters collect the information necessary to dynamically launch the instances, attach them to networks, and create a database name and user to set up the database. The first three resources in the resource definitions include the database server itself and randomly generated passwords for the database users. The next resource is an autoscaling group of the OS::Heat::AutoScalingGroup type, and the resource defined within this group is of the lb_server.yaml type. This refers to the other YAML file, available from https://github.com/openstack/heat-templates/blob/master/hot/lb_server.yaml. Let's quickly look at this template:
heat_template_version: 2013-05-23
description: A load-balancer server
parameters:
  image:
    type: string
    description: Image used for servers
  key_name:
    type: string
    description: SSH key to connect to the servers
  flavor:
    type: string
    description: flavor used by the servers
  pool_id:
    type: string
    description: Pool to contact
  user_data:
    type: string
    description: Server user_data
  metadata:
    type: json
  network:
    type: string
    description: Network used by the server
resources:
  server:
    type: OS::Nova::Server
    properties:
      flavor: {get_param: flavor}
      image: {get_param: image}
      key_name: {get_param: key_name}
      metadata: {get_param: metadata}
      user_data: {get_param: user_data}
      user_data_format: RAW
      networks: [{network: {get_param: network} }]
  member:
    type: OS::Neutron::PoolMember
    properties:
      pool_id: {get_param: pool_id}
      address: {get_attr: [server, first_address]}
      protocol_port: 80
outputs:
  server_ip:
    description: IP Address of the load-balanced server.
    value: { get_attr: [server, first_address] }
  lb_member:
    description: LB member details.
    value: { get_attr: [member, show] }
The lb_server.yaml template is a fairly basic server definition that launches a single instance using Heat. The extra definitions to note are the pool_id parameter and the OS::Neutron::PoolMember resource. These associate the servers launched with this template with the LBaaS pool resource created in the autoscaling.yaml template. This also shows an example of how Heat templates can reference each other. Let's jump back to the autoscaling.yaml template now.
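As an aside, referencing the nested template by its relative file name works when both files sit in the same directory. Heat environment files offer another way to wire templates together: the resource_registry section can map a custom resource type name to a template file. A sketch (the env.yaml file name and the type name here are illustrative, not part of these templates):

```yaml
# env.yaml -- maps a made-up resource type name to the nested template
resource_registry:
  "My::Server::LoadBalanced": lb_server.yaml
```

With such a mapping passed to stack-create, the autoscaling group could declare its resource as type My::Server::LoadBalanced instead of lb_server.yaml.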
The next two resources defined after the AutoScalingGroup resource are the Heat scaling policies, which define what to do when scaling up or scaling down. The two resources after those are the Ceilometer alarms that trigger the scaling policies when the average CPU usage is too high or too low for the number of instances currently running. The last four resources define a load balancer, a floating IP address for the load balancer, a health monitor for the load balancer, and a pool that servers are added to for the load balancer to balance load across.
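To make the alarm semantics concrete, the evaluation boils down to averaging the cpu_util samples over each period and comparing the result against the threshold with the configured comparison operator. A rough Python sketch of that check (illustrative only; this is not Ceilometer's actual code):

```python
import operator

# Map Ceilometer comparison_operator values to Python operators.
COMPARISONS = {"gt": operator.gt, "lt": operator.lt}

def alarm_fires(samples, threshold, comparison):
    """Average the cpu_util samples for one evaluation period and
    compare against the threshold, as cpu_alarm_high/cpu_alarm_low do."""
    avg = sum(samples) / len(samples)
    return COMPARISONS[comparison](avg, threshold)

# cpu_alarm_high: scale up when average CPU > 50 over a 60s period
print(alarm_fires([70.0, 65.0, 80.0], 50, "gt"))  # True -> scale-up webhook fires
# cpu_alarm_low: scale down when average CPU < 15 over a 600s period
print(alarm_fires([10.0, 12.0], 15, "lt"))        # True -> scale-down webhook fires
```

When an alarm fires, Ceilometer performs an HTTP POST to the alarm_url webhook wired in through alarm_actions, which is what invokes the Heat scaling policy.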
Lastly, the autoscaling.yaml template defines a set of outputs: the scaling webhook URLs and the pool's IP address, so that the Heat stack can be used.
Now that we've walked through these templates, let's launch the autoscaling template. You will need to pass in a Glance image ID to launch the instances from, the IDs of your internal network and subnet and of your external network, a key pair name, and Nova flavor names for the database and the web server instances. The stack-create command should be executed as the admin user because the alarms created in Ceilometer require admin access. They could be created ahead of time and provided to end users if it were necessary for non-administrative users to launch autoscaling stacks. For our demonstration here, just use the admin user. The command will look something like this:
heat stack-create -f autoscaling.yaml -P database_flavor=m1.small \
  -P subnet_id={INTERNAL_SUBNET_ID} -P external_network_id={EXT_NET_ID} \
  -P network={NETWORK_ID} -P image={GLANCE_IMAGE_ID} -P key=danradez \
  -P flavor=m1.small autoscale_me
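As an alternative to a long string of -P flags, the same parameters can be collected in a Heat environment file and passed with the -e option. A sketch, with placeholder values that you would substitute with your own IDs and names:

```yaml
# env.yaml -- placeholder values; substitute your own
parameters:
  image: GLANCE_IMAGE_ID
  key: danradez
  flavor: m1.small
  database_flavor: m1.small
  network: NETWORK_ID
  subnet_id: INTERNAL_SUBNET_ID
  external_network_id: EXT_NET_ID
```

The launch then shortens to heat stack-create -f autoscaling.yaml -e env.yaml autoscale_me.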
Once the stack launches, you can use the stack, resource, and event commands to list and show information about the stack, monitor its progress, and troubleshoot any errors that might be encountered. The stack is now ready to scale automatically using the resources Heat has put in place to monitor the set of resources created through this stack. If you put enough load on the web server instances to trigger the scale-up alarm, another instance will spawn; you can also accomplish this with an HTTP POST to the scale-up URL listed in the outputs of the autoscaling template. Similarly, reducing the load enough to trigger the scale-down alarm, or a POST to the scale-down URL in the outputs section of the template, will reduce the number of instances in the web server pool.
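Firing one of these webhooks by hand can be sketched in a few lines of Python. The URL below is a placeholder; in practice you would copy the scale_up_url or scale_dn_url value out of the stack's outputs:

```python
import urllib.request

def build_scale_request(webhook_url):
    # The Heat/Ceilometer webhook expects a bare HTTP POST:
    # no body and no extra headers are needed.
    return urllib.request.Request(webhook_url, data=b"", method="POST")

# Placeholder URL for illustration; use the real value from `heat output-show`
# or the stack outputs listing.
req = build_scale_request("http://controller:8000/v1/signal/EXAMPLE")
# urllib.request.urlopen(req) would then fire the scaling action.
```

An equivalent one-liner from the shell would be a curl POST against the same URL.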