Centos – Openstack/PackStack basic multi-node network setup

Tags: centos, centos-6.5, openstack, redhat

Background

After installing OpenStack with PackStack, we are left with a networking problem. We want a super-simple network with a single subnet where all the virtual machines reside, as this looked like the simplest solution. To begin with, we have three nodes running Nova.

The answer file we have used is this: answer file (pastebin)

Our setup

Three nodes with CentOS 6.5 where the nodes are connected to two switches.

  • eth0: public
  • eth1: internal network, 10.0.20.0/24, where node1 is 10.0.20.1, node2 is 10.0.20.2, …
  • The switch ports are not VLAN tagged or trunked (unfortunately).
  • We want the instances to be able to communicate with each other, but also to access the internet.

Security group rules

Direction  Ethertype  Protocol   Port range   Remote
Ingress    IPv4       TCP        1 - 65535    0.0.0.0/0 (CIDR)
Ingress    IPv4       TCP        22 (SSH)     0.0.0.0/0 (CIDR)
Ingress    IPv6       Any        -            default
Ingress    IPv4       TCP        22 (SSH)     10.0.20.0/24 (CIDR)
Ingress    IPv4       TCP        53 (DNS)     0.0.0.0/0 (CIDR)
Egress     IPv4       Any        -            0.0.0.0/0 (CIDR)
Ingress    IPv4       ICMP       -            0.0.0.0/0 (CIDR)
Egress     IPv6       Any        -            ::/0 (CIDR)
Ingress    IPv4       Any        -            default

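For reference, rules like these can also be created from the CLI instead of the dashboard. A hypothetical sketch using the Havana-era neutron client against a running cluster (the security group name `default` is assumed; adjust the remote prefixes to taste):

```shell
# Allow SSH from anywhere into the default security group
neutron security-group-rule-create \
    --direction ingress --ethertype IPv4 \
    --protocol tcp --port-range-min 22 --port-range-max 22 \
    --remote-ip-prefix 0.0.0.0/0 default

# Allow ICMP (ping) from anywhere, useful while debugging connectivity
neutron security-group-rule-create \
    --direction ingress --ethertype IPv4 \
    --protocol icmp --remote-ip-prefix 0.0.0.0/0 default
```

These commands need valid OS_* credentials in the environment and only make sense against a live Neutron endpoint.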
Neutron

(neutron) agent-list
+--------------------------------------+--------------------+-------+-------+----------------+
| id                                   | agent_type         | host  | alive | admin_state_up |
+--------------------------------------+--------------------+-------+-------+----------------+
| 09add8dd-0328-4c63-8a79-5c61322a8314 | L3 agent           | host3 | :-)   | True           |
| 0d0748a9-4289-4a5d-b1d9-d06a764a8d25 | Open vSwitch agent | host2 | :-)   | True           |
| 258c92fe-8e3a-4760-864e-281a47523e85 | Open vSwitch agent | host1 | :-)   | True           |
| 2e886dc1-af93-4f4f-b66c-61177a6c9dba | L3 agent           | host1 | :-)   | True           |
| 50f37a33-2bfc-43f2-9d2f-4f42564d234d | Open vSwitch agent | host3 | :-)   | True           |
| 535bf0a3-06aa-4072-ae5a-1b1ba1d377ab | L3 agent           | host2 | :-)   | True           |
| 9b17ef73-a602-4b5d-a4e9-e97445e594b4 | DHCP agent         | host1 | :-)   | True           |
+--------------------------------------+--------------------+-------+-------+----------------+

ovs-vsctl

Host1

ovs-vsctl show

43da814e-223c-4f66-ba2d-c3c9de91e1f8
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "tap3e0d3121-32"
            tag: 4
            Interface "tap3e0d3121-32"
                type: internal
        Port "tap4a397755-29"
            tag: 4
            Interface "tap4a397755-29"
    ovs_version: "1.11.0"

Host2

afa75816-6a40-4f0c-842f-236a3a94cd63
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "tap46f55af8-73"
            tag: 1
            Interface "tap46f55af8-73"
    Bridge br-ex
        Port br-ex
            Interface br-ex
                type: internal
    ovs_version: "1.11.0"

Our problem

Instances are not able to communicate with each other, and are not able to reach the internet. Frankly, we are not sure what the requirements are for a multi-node Nova setup where the "internal" network between the nodes uses only a single link.
I think this is a routing problem, since we cannot connect between instances on different nodes, but after reading a LOT of documentation I am still not sure how to proceed. If I tcpdump the br-int interface I can see the ARP requests, but nothing more — that is, when I try to ping from an instance on the respective host.

The question

So the question is: how can we proceed in finding the solution to this multi-node network problem, and what do we need to think about? Could it be the routing, or a misconfiguration of the host OS or OpenStack? (Running CentOS.)

Any feedback is highly appreciated, since we have been stuck at this point for a couple of weeks. Sorry for the long post, but I hope the needed information is in here. If not, don't be shy 🙂

Update

I have been able to fix the internal network between the nodes, so that the instances are able to communicate between the physical nodes.

- Changed /etc/neutron/plugins/openvswitch/ovs_neutron_plugin.ini (note the option names: tenant_network_type and integration_bridge):
[database]
connection = mysql://neutron:password@127.0.0.1:3306/ovs_neutron

[OVS]
tenant_network_type = gre
tunnel_id_ranges = 1:1000
integration_bridge = br-int
tunnel_bridge = br-tun
local_ip = 10.0.20.1
enable_tunneling = True

- Restarted the services on the controller:
cd /etc/init.d/; for i in neutron-*; do sudo service $i restart; done
service openvswitch restart

This was done on all the nodes (with local_ip set to each node's own internal address), which created the GRE tunnels. However, the flows did not work, so I needed to run ovs-ofctl add-flow br-tun action=normal.
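The tunnel setup can be sanity-checked on each node. A hedged sketch of what I would expect to see (bridge and port names are the OVS plugin defaults; the remote_ip values depend on which peer each port points at):

```shell
# br-tun should now exist, with one GRE port per peer node, e.g.:
ovs-vsctl show
#    Bridge br-tun
#        Port "gre-2"
#            Interface "gre-2"
#                type: gre
#                options: {in_key=flow, local_ip="10.0.20.1",
#                          out_key=flow, remote_ip="10.0.20.2"}

# If traffic still does not pass, fall back to plain L2 forwarding on br-tun:
ovs-ofctl add-flow br-tun action=normal

# And confirm the flow table actually contains the NORMAL action:
ovs-ofctl dump-flows br-tun
```

These commands only work on a host where Open vSwitch is running with the bridges already created.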

The current problem I now have is routing the internal subnet to the internet, so that all the instances get internet access. Do I need floating IPs to be able to connect to the internet? There is no patch between br-int and br-ex, or to the routers; is one needed to route the traffic to the internet?

Can I add a default route with ip netns exec … ip route add default via (IP of br-ex), or do I need to add some new interfaces?
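For the record, the usual Neutron answer to outbound internet access is an external network plus a router gateway (setting the gateway is what creates the patch between br-int and br-ex via the L3 agent), rather than a manual default route in the namespace. A hedged sketch — the network name `public`, the router name `router1`, and the 192.0.2.0/24 range with its allocation pool are illustrative assumptions, not values from our setup:

```shell
# Create the external network and a subnet on it (no DHCP; Neutron hands out
# addresses from the allocation pool for router gateways and floating IPs)
neutron net-create public --router:external=True
neutron subnet-create public 192.0.2.0/24 --name public-subnet \
    --disable-dhcp --allocation-pool start=192.0.2.10,end=192.0.2.50

# Point the tenant router at it; the L3 agent then sets up SNAT, which is
# enough for instances to reach the internet outbound
neutron router-gateway-set router1 public

# Floating IPs are only needed for *inbound* connections to individual instances
neutron floatingip-create public
```

As with the other commands, this assumes a working neutron endpoint and credentials; with SNAT in place, instances do not need floating IPs just to reach out.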

Best Answer

Sit down and watch this.

It walks through setting up a simple multi-node cluster, and I found it very clear. The last bit about setting up NAT will not apply, because he is running his cluster in VirtualBox.

There is an associated slideshow linked in the video description.
