The integration of XenServer, RDO and Neutron


Citrix XenServer is a popular choice of hypervisor for OpenStack, but there is no native integration between it and Red Hat's RDO packages. This means that setting up an environment combining XenServer and RDO is more difficult than it needs to be. This blog post aims to fix that by walking through a process in which a CentOS VM running RDO can easily be set up to control a XenServer hypervisor.

Environment:
- XenServer 6.5
- CentOS 7.0
- OpenStack: Liberty
- Network: Neutron, ML2 plugin, OVS, VLAN

1. Install XenServer

XenServer's integration with OpenStack has some optimizations which mean that only EXT3 storage is supported. Make sure you select "Optimized for XenDesktop" when prompted during the XenServer installation. Use XenCenter to check that the SR type is EXT3; fixing this after VMs have been created requires deleting those VMs and starting over.
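
To double-check the SR type from dom0, a quick listing such as the following should do; this is just an optional sanity check, not part of the official procedure:

    # List the storage repositories and their types; the local SR should report type "ext" (EXT3).
    xe sr-list params=uuid,name-label,type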

2. Install the OpenStack VM

With XenServer, the Nova Compute service must run in a virtual machine on the hypervisor that it will control. As we are using CentOS 7.0 for this environment, create a VM using the CentOS 7.0 template in XenCenter. If you want to copy and paste the scripts from the rest of this blog, use the name "CentOS_RDO" for this VM. Install the CentOS 7.0 VM, but shut it down before installing RDO.

2.1 Create networks for the OpenStack VM

In this environment we need three networks: "OpenStack int network" (integration network), "OpenStack ext network" (external network) and "OpenStack vm network" (VM network). If you already have suitable networks for any of these (for example, a network that gives you external access), simply rename the existing network to use the corresponding name label. Note that a helper script, rdo_xenserver_helper.sh, is provided for some of the later steps in this blog and it relies on these exact name labels, so if you choose different names please also update the helper script.

You can do this via XenCenter, or by running the following commands in dom0:

    xe network-create name-label="OpenStack int network"
    xe network-create name-label="OpenStack ext network"
    xe network-create name-label="OpenStack vm network"
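
If one of these networks already exists under another name (for example, the network that already provides your external access), you can rename it instead of creating a new one. This is a minimal sketch; the bridge name xenbr0 is only an assumption about which existing network carries external traffic:

    # Rename an existing network so that its name label matches what the helper script expects.
    ext_net_uuid=$(xe network-list bridge=xenbr0 --minimal)
    xe network-param-set uuid=$ext_net_uuid name-label="OpenStack ext network"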

2.2 Create virtual network interfaces for the OpenStack VM

This step requires the VM to be shut down, as we are modifying its network setup and the PV tools are not installed in the guest.

    vm_uuid=$(xe vm-list name-label=CentOS_RDO minimal=true)
    vm_net_uuid=$(xe network-list name-label="OpenStack vm network" minimal=true)
    next_device=$(xe vm-param-get uuid=$vm_uuid param-name=allowed-VIF-devices | cut -d';' -f1)
    vm_vif_uuid=$(xe vif-create device=$next_device network-uuid=$vm_net_uuid vm-uuid=$vm_uuid)
    xe vif-plug uuid=$vm_vif_uuid
    ext_net_uuid=$(xe network-list name-label="OpenStack ext network" minimal=true)
    next_device=$(xe vm-param-get uuid=$vm_uuid param-name=allowed-VIF-devices | cut -d';' -f1)
    ext_vif_uuid=$(xe vif-create device=$next_device network-uuid=$ext_net_uuid vm-uuid=$vm_uuid)
    xe vif-plug uuid=$ext_vif_uuid

You can also use the helper script to do this in dom0:

    source rdo_xenserver_helper.sh
    create_vif
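
Either way, a quick check from dom0 that the two VIFs were created can be done as follows; this is an optional verification, not part of the original procedure:

    # The VM should now have VIFs attached to both the VM network and the external network.
    xe vif-list vm-name-label=CentOS_RDO params=device,network-name-label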

2.3 Configure OpenStack VM / hypervisor communication

Use the HIMN tool (a plug-in for XenCenter) to add the host internal management network to the OpenStack VM. This effectively performs the following operations, which could also be done manually in dom0 or by using rdo_xenserver_helper.sh:

    source rdo_xenserver_helper.sh
    create_himn

Note: if you run the commands manually, they should be performed while the OpenStack VM is shut down.

Enable DHCP on the HIMN network for the OpenStack VM, so that the OpenStack VM can access its own hypervisor on the static address 169.254.0.1. Run the helper script in domU:

    source rdo_xenserver_helper.sh
    active_himn_interface
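
Once the interface is up, it is worth verifying from inside the OpenStack VM that HIMN behaves as described above, i.e. that the VM received a 169.254.0.x address via DHCP and that dom0 answers on 169.254.0.1 (an optional check):

    # Run inside the OpenStack VM (domU).
    ip addr | grep "169.254.0."    # the HIMN interface should hold a 169.254.0.x address
    ping -c 3 169.254.0.1          # dom0 should reply on the static HIMN address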

3. Install RDO

3.1 The RDO Quickstart gives detailed installation instructions; please follow them step by step. This guide only points out the steps that need particular attention during installation.

3.2 Step 3: Run Packstack to install OpenStack

Rather than running Packstack immediately, we first need to create an answer file, so that we can customize the configuration.

Generate the answer file:

    packstack --gen-answer-file=<answer-file>

Install OpenStack services:

    packstack --answer-file=<answer-file>

These items in the answer file should be changed as shown below:

    CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vlan
    CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vlan

These items in the answer file should be changed to suit your environment:

    CONFIG_NEUTRON_ML2_VLAN_RANGES=<VLAN ranges>
    CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=<bridge mappings>
    CONFIG_NEUTRON_OVS_BRIDGE_IFACES=<bridge interfaces>

Note:

physnet1 is the physical network name for VLAN provider and tenant networks; 1000:1050 is the range of VLAN tags available for allocation to tenant networks on that physical network. br-eth1 is the OVS bridge for the VM network. br-ex is the OVS bridge for the external network, which the Neutron L3 agent uses for external traffic. eth1 is the NIC of the OpenStack VM connected to the VM network. eth2 is the NIC of the OpenStack VM connected to the external network.
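
As an illustration only, with the assumptions spelled out in the note above (physnet1 carrying VLANs 1000:1050, br-eth1/eth1 for the VM network and br-ex/eth2 for the external network), the entries could look like this; adjust them to your own environment:

    CONFIG_NEUTRON_ML2_VLAN_RANGES=physnet1:1000:1050
    CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=physnet1:br-eth1
    CONFIG_NEUTRON_OVS_BRIDGE_IFACES=br-eth1:eth1,br-ex:eth2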

4. Configure Nova and Neutron

4.1 Copy the Nova and Neutron plugins to the XenServer host:

    source rdo_xenserver_helper.sh
    install_dom0_plugins

4.2 Edit /etc/nova/nova.conf, changing the compute driver to XenServer:

    [DEFAULT]
    compute_driver = xenapi.XenAPIDriver

    [xenserver]
    connection_url = http://169.254.0.1
    connection_username = root
    connection_password = <your dom0 root password>
    vif_driver = nova.virt.xenapi.vif.XenAPIOpenVswitchDriver
    ovs_int_bridge = <integration network bridge>

Note:

The integration bridge (ovs_int_bridge) above can be found by running the following in dom0:

    xe network-list name-label="OpenStack int network" params=bridge

169.254.0.1 is the address of the dom0 hypervisor, which the OpenStack VM can reach via HIMN.
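
If you prefer to script the nova.conf changes rather than editing the file by hand, one option is the crudini utility. This is only a sketch: it assumes crudini is installed (yum install -y crudini) and that you substitute your own password and the bridge name returned by the xe command above:

    # Apply the nova.conf settings from section 4.2 non-interactively.
    crudini --set /etc/nova/nova.conf DEFAULT compute_driver xenapi.XenAPIDriver
    crudini --set /etc/nova/nova.conf xenserver connection_url http://169.254.0.1
    crudini --set /etc/nova/nova.conf xenserver connection_username root
    crudini --set /etc/nova/nova.conf xenserver connection_password "<your dom0 root password>"
    crudini --set /etc/nova/nova.conf xenserver vif_driver nova.virt.xenapi.vif.XenAPIOpenVswitchDriver
    crudini --set /etc/nova/nova.conf xenserver ovs_int_bridge "<integration network bridge>"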

4.3 Install the XenAPI Python XML-RPC bindings:

    yum install -y python-pip
    pip install XenAPI
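
To confirm that the bindings can actually log in to dom0 over HIMN, a one-off check such as this can help; the password is a placeholder for your own:

    # Log in to dom0 via the XenAPI bindings and list the hosts; any error here points at HIMN or credentials.
    python2 -c "import XenAPI; s = XenAPI.Session('http://169.254.0.1'); s.xenapi.login_with_password('root', '<your dom0 root password>'); print(s.xenapi.host.get_all()); s.xenapi.session.logout()"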

4.4 Configuring Neutron

Edit /etc/neutron/rootwrap.conf to support using XenServer remotely:

    [xenapi]
    # XenAPI configuration is only required by the L2 agent if it is to
    # target a XenServer/XCP compute host's dom0.
    xenapi_connection_url = http://169.254.0.1
    xenapi_connection_username = root
    xenapi_connection_password = <your dom0 root password>

4.5 Restart Nova and Neutron services

    for svc in conductor compute scheduler api cert; do
        service openstack-nova-$svc restart
    done
    service neutron-openvswitch-agent restart
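
After the restarts it is worth checking that everything is reported as up. These checks assume Packstack has written the usual keystonerc_admin file to root's home directory:

    source keystonerc_admin
    nova service-list       # nova-compute should be reported with state "up"
    neutron agent-list      # the Open vSwitch agent on this host should be alive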

5. Start another neutron-openvswitch-agent to talk to dom0

XenServer separates dom0 and domU, and all of the instances' VIFs are actually managed by dom0; their corresponding OVS ports are created in dom0. Therefore we must manually start an additional OVS agent which is responsible for these ports and talks to dom0 (xenserver_neutron in the figure).

5.1 Create another configuration file

    cp /etc/neutron/plugins/ml2/openvswitch_agent.ini /etc/neutron/plugins/ml2/openvswitch_agent.ini.dom0

    [ovs]
    integration_bridge = xapi3
    bridge_mappings = physnet1:xapi2

    [agent]
    root_helper = neutron-rootwrap-xen-dom0 /etc/neutron/rootwrap.conf
    root_helper_daemon =
    minimize_polling = false

    [securitygroup]
    firewall_driver = neutron.agent.firewall.NoopFirewallDriver

Note:

xapi3 is the integration bridge (xapiX in the diagram) and xapi2 is the VM network bridge (xapiY in the diagram). They can be found in dom0 with:

    xe network-list name-label="OpenStack int network" params=bridge
    xe network-list name-label="OpenStack vm network" params=bridge

5.2 Start the neutron-openvswitch-agent:


    /usr/bin/python2 /usr/bin/neutron-openvswitch-agent \
        --config-file /usr/share/neutron/neutron-dist.conf \
        --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini.dom0 \
        --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent \
        --log-file /var/log/neutron/openvswitch-agent.log.dom0 &
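
The command above just runs the dom0-facing agent in the background of your shell. If you want it to start automatically and be restarted on failure, one option is to wrap it in a small systemd unit; the unit name and file below are only a suggested sketch, not something shipped by RDO:

    # /etc/systemd/system/neutron-openvswitch-agent-dom0.service (hypothetical unit)
    [Unit]
    Description=Neutron Open vSwitch agent for XenServer dom0 ports
    After=network.target neutron-openvswitch-agent.service

    [Service]
    ExecStart=/usr/bin/python2 /usr/bin/neutron-openvswitch-agent \
        --config-file /usr/share/neutron/neutron-dist.conf \
        --config-file /etc/neutron/neutron.conf \
        --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini.dom0 \
        --config-dir /etc/neutron/conf.d/neutron-openvswitch-agent \
        --log-file /var/log/neutron/openvswitch-agent.log.dom0
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

Enable it with "systemctl daemon-reload", "systemctl enable neutron-openvswitch-agent-dom0" and "systemctl start neutron-openvswitch-agent-dom0".
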
6. Replace the cirros guest image with one set up to work with XenServer
    nova image-delete cirros
    wget http://ca.downloads.xensource.com/OpenStack/cirros-0.3.4-x86_64-disk.vhd.tgz
    glance image-create --name cirros --container-format ovf --disk-format vhd --property vm_mode=xen --visibility public --file cirros-0.3.4-x86_64-disk.vhd.tgz
7. Boot an instance and test its connectivity
    source keystonerc_demo

    [root@localhost ~(keystone_demo)]# glance image-list
    +--------------------------------------+--------+
    | ID                                   | Name   |
    +--------------------------------------+--------+
    | 5c227c8e-3cfa-4368-963c-6ebc2f846ee1 | cirros |
    +--------------------------------------+--------+

    [root@localhost ~(keystone_demo)]# neutron net-list
    +--------------------------------------+---------+--------------------------------------------------+
    | id                                   | name    | subnets                                          |
    +--------------------------------------+---------+--------------------------------------------------+
    | 91c0f6ac-36f2-46fc-b075-6213a241fc2b | private | 3a4eebdc-6727-43e3-b5fe-8760d64c00fb 10.0.0.0/24 |
    | 7ccf5c93-ca20-4962-b8bb-bff655e29788 | public  | 4e023f19-dfdd-4d00-94cc-dbea59b31698             |
    +--------------------------------------+---------+--------------------------------------------------+

    nova boot --flavor m1.tiny --image cirros --nic net-id=91c0f6ac-36f2-46fc-b075-6213a241fc2b demo-instance

    [root@localhost ~(keystone_demo)]# neutron floatingip-create public
    Created a new floatingip:
    +---------------------+--------------------------------------+
    | Field               | Value                                |
    +---------------------+--------------------------------------+
    | fixed_ip_address    |                                      |
    | floating_ip_address | 172.24.4.228                         |
    | floating_network_id | 7ccf5c93-ca20-4962-b8bb-bff655e29788 |
    | id                  | 2f0e7c1e-07dc-4c7e-b9a6-64f312e7f693 |
    | port_id             |                                      |
    | router_id           |                                      |
    | status              | DOWN                                 |
    | tenant_id           | 838ec33967ff4f659b808e4a593e7085     |
    +---------------------+--------------------------------------+

    nova add-floating-ip demo-instance 172.24.4.228

After the steps above we have successfully booted an instance with a floating IP. Use "nova list" to show the instances:

    [root@localhost ~(keystone_demo)]# nova list
    +--------------------------------------+---------------+--------+------------+-------------+--------------------------------+
    | ID                                   | Name          | Status | Task State | Power State | Networks                       |
    +--------------------------------------+---------------+--------+------------+-------------+--------------------------------+
    | ac82fcc8-09-4d34-a4a7-80e5985433f7   | demo-inst1    | ACTIVE | -          | Running     | private=10.0.0.3, 172.24.4.227 |
    | f302a03f-3761-48e6-a786-45b324182545 | demo-instance | ACTIVE | -          | Running     | private=10.0.0.4, 172.24.4.228 |
    +--------------------------------------+---------------+--------+------------+-------------+--------------------------------+

Test connectivity via the floating IP: running "ping 172.24.4.228" on the OpenStack VM should produce output like:

    [root@localhost ~(keystone_demo)]# ping 172.24.4.228
    PING 172.24.4.228 (172.24.4.228) 56(84) bytes of data.
    64 bytes from 172.24.4.228: icmp_seq=1 ttl=63 time=1.76 ms
    64 bytes from 172.24.4.228: icmp_seq=2 ttl=63 time=0.666 ms
    64 bytes from 172.24.4.228: icmp_seq=3 ttl=63 time=0.284 ms