Due to a network conflict in the lab, the machines were renamed as follows.
| Old machine name | New machine name | /etc/hosts entry |
|---|---|---|
| controller | tony-controller | 172.18.22.231 controller |
| compute | tony-compute1 | 172.18.22.232 compute1 |
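Restating the table's /etc/hosts column, the entries on both nodes would look like this (a sketch; the exact set of aliases depends on how the earlier articles populated the file):

```
# /etc/hosts (both nodes) -- maps the names used in the configuration
# files to the management-network addresses
172.18.22.231   controller
172.18.22.232   compute1
```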
[tony@tony-controller ~]$ sudo ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group
2: enp0s3: mtu 1500 qdisc pfifo_fast state UP
    inet 172.18.22.231/24 brd 172.18.22.255 scope global noprefixroute enp0s3
3: enp0s8: mtu 1500 qdisc pfifo_fast state UP
    inet 10.0.0.1/24 brd 10.0.0.255 scope global noprefixroute enp0s8
4: enp0s9: mtu 1500 qdisc pfifo_fast state UP
    inet 10.238.156.138/23 brd 10.238.157.255 scope global noprefixroute dynamic

[tony@tony-compute1 ~]$ ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group
2: enp0s3: mtu 1500 qdisc pfifo_fast state UP
    inet 172.18.22.232/24 brd 172.18.22.255 scope global noprefixroute enp0s3
3: enp0s8: mtu 1500 qdisc pfifo_fast state UP
    inet 10.0.0.2/24 brd 10.0.0.255 scope global noprefixroute enp0s8
4: enp0s9: mtu 1500 qdisc pfifo_fast state UP
    inet 10.238.157.84/23 brd 10.238.157.255 scope global noprefixroute dynamic
# Oops -- forgot to load the credentials
[tony@tony-controller ~]$ openstack service create --name neutron --description "OpenStack Networking" network
Missing value auth-url required for auth plugin password
# Load the admin credentials
[tony@tony-controller ~]$ source adminrc
# Create the network service, named neutron
[tony@tony-controller ~]$ openstack service create --name neutron --description "OpenStack Networking" network
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | OpenStack Networking             |
| enabled     | True                             |
| id          | 603f059a82a34c9f84ccf6aa40619e7e |
| name        | neutron                          |
| type        | network                          |
+-------------+----------------------------------+
# Create the neutron user
[tony@tony-controller ~]$ openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| enabled             | True                             |
| id                  | 5cc6dddbf4cf49cb8cbe07dd45a2ba37 |
| name                | neutron                          |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
[tony@tony-controller ~]$
# Add the neutron user to the admin role in the service project
[tony@tony-controller ~]$ openstack role add --project service --user neutron admin
# Create the public endpoint
[tony@tony-controller ~]$ openstack endpoint create --region RegionOne network public http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 7c400b1cd37646bfb8c6105e61b0fecc |
| interface    | public                           |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 603f059a82a34c9f84ccf6aa40619e7e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
# Create the internal endpoint
[tony@tony-controller ~]$ openstack endpoint create --region RegionOne network internal http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | de6d2c721df7458b89abe0a4b47d8b9c |
| interface    | internal                         |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 603f059a82a34c9f84ccf6aa40619e7e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
# Create the admin endpoint
[tony@tony-controller ~]$ openstack endpoint create --region RegionOne network admin http://controller:9696
+--------------+----------------------------------+
| Field        | Value                            |
+--------------+----------------------------------+
| enabled      | True                             |
| id           | 5683397959764e4ba97854f3a2996c54 |
| interface    | admin                            |
| region       | RegionOne                        |
| region_id    | RegionOne                        |
| service_id   | 603f059a82a34c9f84ccf6aa40619e7e |
| service_name | neutron                          |
| service_type | network                          |
| url          | http://controller:9696           |
+--------------+----------------------------------+
[tony@tony-controller ~]$ sudo yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-openvswitch
Neutron configuration file: /etc/neutron/neutron.conf

The original file -- every option is commented out:
[tony@tony-controller ~]$ sudo cat /etc/neutron/neutron.conf | grep -v -E '^$|^#'
[DEFAULT]
[agent]
[cors]
[database]
[keystone_authtoken]
[matchmaker_redis]
[nova]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
The file after editing:

[tony@tony-controller ~]$ sudo cat /etc/neutron/neutron.conf | grep -v -E '^#|^$'
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:Netvista123@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[agent]
[cors]
[database]
connection = mysql+pymysql://neutron:Netvista123@controller/neutron
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = Netvista123
[matchmaker_redis]
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = Netvista123
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[quotas]
[ssl]
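All of the listings in this article hide comments and blank lines with `grep -v -E '^#|^$'`. A tiny self-contained demo of that filter against a made-up sample file:

```shell
# Create a throwaway ini-style file containing comments and blank lines
cat > /tmp/demo.conf <<'EOF'
# a comment

[DEFAULT]
core_plugin = ml2
EOF

# -v inverts the match; the pattern matches comment lines (^#) or empty lines (^$),
# so only real configuration lines survive
grep -v -E '^#|^$' /tmp/demo.conf
```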
ML2 configuration file: /etc/neutron/plugins/ml2/ml2_conf.ini

The original file -- every option is commented out:

[tony@tony-controller ~]$ sudo cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep -v -E '^#|^$'
[DEFAULT]
[l2pop]
[ml2]
[ml2_type_flat]
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
[securitygroup]
The file after editing:

[tony@tony-controller ~]$ sudo cat /etc/neutron/plugins/ml2/ml2_conf.ini | grep -v -E '^#|^$'
[DEFAULT]
[l2pop]
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
extension_drivers = port_security
mechanism_drivers = openvswitch,l2population
[ml2_type_flat]
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = true
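One practical consequence of `tenant_network_types = vxlan`: VXLAN encapsulation on IPv4 adds roughly 50 bytes per packet (outer Ethernet 14 + IP 20 + UDP 8 + VXLAN 8), so instances on a 1500-byte physical network effectively see about a 1450-byte MTU unless jumbo frames are enabled on the underlay. A quick sanity check of that arithmetic:

```shell
# VXLAN overhead on IPv4: outer Ethernet + outer IP + UDP + VXLAN headers
overhead=$((14 + 20 + 8 + 8))
echo "overhead: $overhead bytes"            # 50
echo "instance MTU: $((1500 - overhead))"   # 1450
```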
Open vSwitch agent configuration file: /etc/neutron/plugins/ml2/openvswitch_agent.ini

The original file -- every option is commented out:

[tony@tony-controller ~]$ sudo cat /etc/neutron/plugins/ml2/openvswitch_agent.ini | grep -v -E '^#|^$'
[DEFAULT]
[agent]
[network_log]
[ovs]
[securitygroup]
[xenapi]
The file after editing:

[DEFAULT]
[agent]
tunnel_types = vxlan
l2_population = True
[network_log]
[ovs]
bridge_mappings = provider:br-provider
local_ip = 10.0.0.1
[securitygroup]
firewall_driver = iptables_hybrid
[xenapi]
Configuration file: /etc/neutron/l3_agent.ini

The original file -- every option is commented out:

[tony@tony-controller ~]$ sudo cat /etc/neutron/l3_agent.ini | grep -v -E '^#|^$'
[DEFAULT]
[agent]
[ovs]
The file after editing:

# The external_network_bridge option intentionally contains no value.
[tony@tony-controller ~]$ sudo cat /etc/neutron/l3_agent.ini | grep -v -E '^#|^$'
[DEFAULT]
interface_driver = openvswitch
external_network_bridge =
[agent]
[ovs]
Configuration file: /etc/neutron/dhcp_agent.ini

The original file:

[tony@tony-controller ~]$ sudo cat /etc/neutron/dhcp_agent.ini | grep -v -E '^#|^$'
[DEFAULT]
[agent]
[ovs]
The file after editing:

[tony@tony-controller ~]$ sudo cat /etc/neutron/dhcp_agent.ini | grep -v -E '^#|^$'
[DEFAULT]
interface_driver = openvswitch
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
[agent]
[ovs]
Configuration file: /etc/neutron/metadata_agent.ini

The original file -- every option is commented out:

[tony@tony-controller ~]$ sudo cat /etc/neutron/metadata_agent.ini | grep -v -E '^#|^$'
[DEFAULT]
[agent]
[cache]
The file after editing:

[tony@tony-controller ~]$ sudo cat /etc/neutron/metadata_agent.ini | grep -v -E '^#|^$'
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = Netvista123
[agent]
[cache]
Configuration file: /etc/nova/nova.conf

The file before the Neutron-related changes:

[tony@tony-controller ~]$ sudo cat /etc/nova/nova.conf | grep -v -E '^#|^$'
[DEFAULT]
my_ip = 172.18.22.231
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:Netvista123@controller
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver
instances_path = /var/lib/nova/instances
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:Netvista123@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
connection = mysql+pymysql://nova:Netvista123@controller/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = Netvista123
[libvirt]
virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
user_domain_name = Default
auth_type = password
auth_url = http://controller:5000/v3
username = placement
password = Netvista123
[placement_database]
connection = mysql+pymysql://placement:Netvista123@controller/placement
[powervm]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
The file after adding the [neutron] section:

[tony@tony-controller ~]$ sudo cat /etc/nova/nova.conf | grep -v -E '^#|^$'
[DEFAULT]
my_ip = 172.18.22.231
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:Netvista123@controller
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
compute_driver = libvirt.LibvirtDriver
instances_path = /var/lib/nova/instances
[api]
auth_strategy = keystone
[api_database]
connection = mysql+pymysql://nova:Netvista123@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
[compute]
[conductor]
[console]
[consoleauth]
[cors]
[database]
connection = mysql+pymysql://nova:Netvista123@controller/nova
[devices]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[ironic]
[key_manager]
[keystone]
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = Netvista123
[libvirt]
virt_type = qemu
[matchmaker_redis]
[metrics]
[mks]
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = Netvista123
service_metadata_proxy = true
metadata_proxy_shared_secret = Netvista123
[notifications]
[osapi_v21]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
user_domain_name = Default
auth_type = password
auth_url = http://controller:5000/v3
username = placement
password = Netvista123
[placement_database]
connection = mysql+pymysql://placement:Netvista123@controller/placement
[powervm]
[profiler]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[upgrade_levels]
[vault]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
[zvm]
# No symlink yet
[tony@tony-controller ~]$ ls -l /etc/neutron/
total 128
drwxr-xr-x. 11 root root      260 Apr 13 11:56 conf.d
-rw-r-----.  1 root neutron 10867 Apr 13 13:20 dhcp_agent.ini
-rw-r-----.  1 root neutron 14206 Apr 13 13:16 l3_agent.ini
-rw-r-----.  1 root neutron 11389 Apr 13 13:24 metadata_agent.ini
-rw-r-----.  1 root neutron 72079 Apr 13 12:58 neutron.conf
drwxr-xr-x.  3 root root       17 Apr 13 11:56 plugins
-rw-r-----.  1 root neutron 12153 Nov  6 14:12 policy.json
-rw-r--r--.  1 root root     1195 Nov  6 14:12 rootwrap.conf
# Create a symlink to ml2_conf.ini, named plugin.ini
[tony@tony-controller ~]$ sudo ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
# The symlink was created successfully
[tony@tony-controller ~]$ ls -l /etc/neutron/
total 128
drwxr-xr-x. 11 root root      260 Apr 13 11:56 conf.d
-rw-r-----.  1 root neutron 10867 Apr 13 13:20 dhcp_agent.ini
-rw-r-----.  1 root neutron 14206 Apr 13 13:16 l3_agent.ini
-rw-r-----.  1 root neutron 11389 Apr 13 13:24 metadata_agent.ini
-rw-r-----.  1 root neutron 72079 Apr 13 12:58 neutron.conf
lrwxrwxrwx.  1 root root       37 Apr 13 13:33 plugin.ini -> /etc/neutron/plugins/ml2/ml2_conf.ini
drwxr-xr-x.  3 root root       17 Apr 13 11:56 plugins
-rw-r-----.  1 root neutron 12153 Nov  6 14:12 policy.json
-rw-r--r--.  1 root root     1195 Nov  6 14:12 rootwrap.conf
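The `ln -s` + `readlink` pattern above is easy to rehearse outside /etc; a standalone demo in a scratch directory (the /tmp paths here are made up for illustration):

```shell
# Recreate the same directory shape under /tmp
mkdir -p /tmp/lnsdemo/plugins/ml2
touch /tmp/lnsdemo/plugins/ml2/ml2_conf.ini

# Same shape as the command above: plugin.ini -> plugins/ml2/ml2_conf.ini
ln -sf /tmp/lnsdemo/plugins/ml2/ml2_conf.ini /tmp/lnsdemo/plugin.ini

# readlink prints where the symlink points
readlink /tmp/lnsdemo/plugin.ini
```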
# Enable the openvswitch service
[tony@tony-controller ~]$ sudo systemctl enable openvswitch
Created symlink from /etc/systemd/system/multi-user.target.wants/openvswitch.service to /usr/lib/systemd/system/openvswitch.service.
# Start the service
[tony@tony-controller ~]$ sudo systemctl start openvswitch
# Check the service status
[tony@tony-controller ~]$ sudo systemctl status openvswitch
● openvswitch.service - Open vSwitch
   Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; enabled; vendor preset: disabled)
   Active: active (exited) since Sat 2019-04-13 13:36:05 CST; 6s ago
  Process: 782 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 782 (code=exited, status=0/SUCCESS)
Apr 13 13:36:05 tony-controller systemd[1]: Starting Open vSwitch...
Apr 13 13:36:05 tony-controller systemd[1]: Started Open vSwitch.
# Check the network configuration before adding the bridge
[tony@tony-controller ~]$ sudo ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:51:a8:c5 brd ff:ff:ff:ff:ff:ff
    inet 172.18.22.231/24 brd 172.18.22.255 scope global noprefixroute enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe51:a8c5/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:2a:dd:4b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe2a:dd4b/64 scope link
       valid_lft forever preferred_lft forever
4: enp0s9: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:70:f2:73 brd ff:ff:ff:ff:ff:ff
    inet 10.238.156.138/23 brd 10.238.157.255 scope global noprefixroute dynamic enp0s9
       valid_lft 31972sec preferred_lft 31972sec
    inet6 fe80::a00:27ff:fe70:f273/64 scope link
       valid_lft forever preferred_lft forever
# Add a bridge named br-provider
[tony@tony-controller ~]$ sudo ovs-vsctl add-br br-provider
# Check the network configuration again: entries 5 and 6 are now ovs-system and br-provider
[tony@tony-controller ~]$ sudo ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp0s3: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:51:a8:c5 brd ff:ff:ff:ff:ff:ff
    inet 172.18.22.231/24 brd 172.18.22.255 scope global noprefixroute enp0s3
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe51:a8c5/64 scope link
       valid_lft forever preferred_lft forever
3: enp0s8: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:2a:dd:4b brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 brd 10.0.0.255 scope global noprefixroute enp0s8
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fe2a:dd4b/64 scope link
       valid_lft forever preferred_lft forever
4: enp0s9: mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:70:f2:73 brd ff:ff:ff:ff:ff:ff
    inet 10.238.156.138/23 brd 10.238.157.255 scope global noprefixroute dynamic enp0s9
       valid_lft 31722sec preferred_lft 31722sec
    inet6 fe80::a00:27ff:fe70:f273/64 scope link
       valid_lft forever preferred_lft forever
5: ovs-system: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether aa:d2:a5:c5:27:9d brd ff:ff:ff:ff:ff:ff
6: br-provider: mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 0a:1e:81:c9:b6:45 brd ff:ff:ff:ff:ff:ff
# Add a port to br-provider; here the port is enp0s9
[tony@tony-controller ~]$ sudo ovs-vsctl add-port br-provider enp0s9
# Note: after this command runs, enp0s9 can no longer be pinged and the ssh connection drops.
# Fix: log in on the machine's console and run
#   $ sudo dhclient br-provider
# so that br-provider obtains a DHCP address. Then ssh back in via br-provider's IP
# address and continue with the remaining steps.
# Show the bridge configuration
[tony@tony-controller ~]$ sudo ovs-vsctl show
c3928675-2e0a-42c1-83f7-6d8bee8fee1d
    Bridge br-provider
        Port br-provider
            Interface br-provider
                type: internal
        Port "enp0s9"
            Interface "enp0s9"
    ovs_version: "2.10.1"
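The `dhclient br-provider` workaround above does not survive a reboot. On CentOS one common way to make the arrangement persistent is to move the interface configuration onto the OVS bridge via ifcfg files; the fragment below is a sketch, assuming the network-scripts/OVS integration is in use on this system (key names may differ on other setups):

```
# /etc/sysconfig/network-scripts/ifcfg-enp0s9 -- the physical NIC becomes an OVS port
DEVICE=enp0s9
TYPE=OVSPort
DEVICETYPE=ovs
OVS_BRIDGE=br-provider
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-br-provider -- the bridge takes over the DHCP address
DEVICE=br-provider
TYPE=OVSBridge
DEVICETYPE=ovs
BOOTPROTO=dhcp
ONBOOT=yes
```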
# Log in to the MySQL database
[tony@tony-controller ~]$ sudo mysql -u root -p
Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MariaDB connection id is 469
Server version: 10.1.20-MariaDB MariaDB Server
Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
MariaDB [(none)]> create database neutron;
Query OK, 1 row affected (0.00 sec)
MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'localhost' identified by 'Netvista123';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on neutron.* to 'neutron'@'%' identified by 'Netvista123';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| glance             |
| information_schema |
| keystone           |
| mysql              |
| neutron            |
| nova               |
| nova_api           |
| nova_cell0         |
| performance_schema |
| placement          |
+--------------------+
10 rows in set (0.01 sec)
MariaDB [(none)]> quit
Bye
[tony@tony-controller ~]$ sudo -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
  Running upgrade for neutron ...
INFO  [alembic.runtime.migration] Context impl MySQLImpl.
INFO  [alembic.runtime.migration] Will assume non-transactional DDL.
INFO  [alembic.runtime.migration] Running upgrade  -> kilo
INFO  [alembic.runtime.migration] Running upgrade kilo -> 354db87e3225
INFO  [alembic.runtime.migration] Running upgrade 354db87e3225 -> 599c6a226151
...
INFO  [alembic.runtime.migration] Running upgrade c415aab1c048 -> a963b38d82f4
INFO  [alembic.runtime.migration] Running upgrade kilo -> 30018084ec99
INFO  [alembic.runtime.migration] Running upgrade 30018084ec99 -> 4ffceebfada
...
INFO  [alembic.runtime.migration] Running upgrade 594422d373ee -> 61663558142c
INFO  [alembic.runtime.migration] Running upgrade 61663558142c -> 867d39095bf4, port forwarding
...
INFO  [alembic.runtime.migration] Running upgrade 2e0d7a8a1586 -> 5c85685d616d
  OK
# Enable the four neutron-related services
[tony@tony-controller ~]$ sudo systemctl enable neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-server.service to /usr/lib/systemd/system/neutron-server.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-openvswitch-agent.service to /usr/lib/systemd/system/neutron-openvswitch-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-dhcp-agent.service to /usr/lib/systemd/system/neutron-dhcp-agent.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-metadata-agent.service to /usr/lib/systemd/system/neutron-metadata-agent.service.
# Restart the nova-api service
[root@tony-controller ~]# sudo systemctl start openstack-nova-api.service
# Start the four neutron-related services
[root@tony-controller ~]# sudo systemctl start neutron-server.service
[root@tony-controller ~]# sudo systemctl start neutron-openvswitch-agent.service
[root@tony-controller ~]# sudo systemctl start neutron-dhcp-agent.service
[root@tony-controller ~]# sudo systemctl start neutron-metadata-agent.service
[root@tony-controller ~]#
# Check the service status
[root@tony-controller ~]# sudo systemctl status neutron-server.service neutron-openvswitch-agent.service neutron-dhcp-agent.service neutron-metadata-agent.service
● neutron-server.service - OpenStack Neutron Server
   Loaded: loaded (/usr/lib/systemd/system/neutron-server.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-04-13 14:54:21 CST; 1min 40s ago
 Main PID: 22322 (neutron-server)
    Tasks: 5
   CGroup: /system.slice/neutron-server.service
           ├─22322 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf -...
           ├─22336 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf -...
           ├─22337 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf -...
           ├─22338 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf -...
           └─22339 /usr/bin/python2 /usr/bin/neutron-server --config-file /usr/share/neutron/neutron-dist.conf -...
Apr 13 14:54:19 tony-controller systemd[1]: Starting OpenStack Neutron Server...
Apr 13 14:54:21 tony-controller systemd[1]: Started OpenStack Neutron Server.
● neutron-openvswitch-agent.service - OpenStack Neutron Open vSwitch Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-openvswitch-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-04-13 14:54:43 CST; 1min 18s ago
  Process: 22382 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
 Main PID: 22387 (neutron-openvsw)
    Tasks: 3
   CGroup: /system.slice/neutron-openvswitch-agent.service
           ├─22387 /usr/bin/python2 /usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/neutron-...
           ├─22457 ovsdb-client monitor tcp:127.0.0.1:6640 Interface name,ofport,external_ids --format=json
           └─22459 ovsdb-client monitor tcp:127.0.0.1:6640 Bridge name --format=json
Apr 13 14:54:43 tony-controller systemd[1]: Starting OpenStack Neutron Open vSwitch Agent...
Apr 13 14:54:43 tony-controller neutron-enable-bridge-firewall.sh[22382]: net.bridge.bridge-nf-call-iptables = 1
Apr 13 14:54:43 tony-controller neutron-enable-bridge-firewall.sh[22382]: net.bridge.bridge-nf-call-ip6tables = 1
Apr 13 14:54:43 tony-controller systemd[1]: Started OpenStack Neutron Open vSwitch Agent.
Apr 13 14:54:45 tony-controller sudo[22402]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutr...conf
● neutron-dhcp-agent.service - OpenStack Neutron DHCP Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-dhcp-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-04-13 14:55:04 CST; 57s ago
 Main PID: 22492 (neutron-dhcp-ag)
    Tasks: 1
   CGroup: /system.slice/neutron-dhcp-agent.service
           └─22492 /usr/bin/python2 /usr/bin/neutron-dhcp-agent --config-file /usr/share/neutron/neutron-dist.co...
Apr 13 14:55:04 tony-controller systemd[1]: Started OpenStack Neutron DHCP Agent.
● neutron-metadata-agent.service - OpenStack Neutron Metadata Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-metadata-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-04-13 14:55:20 CST; 42s ago
 Main PID: 22532 (neutron-metadat)
    Tasks: 1
   CGroup: /system.slice/neutron-metadata-agent.service
           └─22532 /usr/bin/python2 /usr/bin/neutron-metadata-agent --config-file /usr/share/neutron/neutron-dis...
Apr 13 14:55:20 tony-controller systemd[1]: Started OpenStack Neutron Metadata Agent.
Hint: Some lines were ellipsized, use -l to show in full.
[root@tony-controller ~]#
# Enable the neutron-l3-agent service
[root@tony-controller ~]# sudo systemctl enable neutron-l3-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-l3-agent.service to /usr/lib/systemd/system/neutron-l3-agent.service.
# Start it and check its status
[root@tony-controller ~]# sudo systemctl start neutron-l3-agent.service
[root@tony-controller ~]# sudo systemctl status neutron-l3-agent.service
● neutron-l3-agent.service - OpenStack Neutron Layer 3 Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-l3-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-04-13 14:56:32 CST; 5s ago
 Main PID: 22672 (neutron-l3-agen)
    Tasks: 1
   CGroup: /system.slice/neutron-l3-agent.service
           └─22672 /usr/bin/python2 /usr/bin/neutron-l3-agent --config-file /usr/share/neutron/neutron-dist.conf...
Apr 13 14:56:32 tony-controller systemd[1]: Started OpenStack Neutron Layer 3 Agent.
Apr 13 14:56:37 tony-controller sudo[22690]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/neutron-r...
Hint: Some lines were ellipsized, use -l to show in full.
[root@tony-controller ~]# sudo ovs-vsctl show
c3928675-2e0a-42c1-83f7-6d8bee8fee1d
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    Bridge br-provider
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port phy-br-provider
            Interface phy-br-provider
                type: patch
                options: {peer=int-br-provider}
        Port br-provider
            Interface br-provider
                type: internal
        Port "enp0s9"
            Interface "enp0s9"
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port int-br-provider
            Interface int-br-provider
                type: patch
                options: {peer=phy-br-provider}
        Port br-int
            Interface br-int
                type: internal
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
    ovs_version: "2.10.1"
[tony@tony-compute1 ~]$ sudo yum install -y openstack-neutron-openvswitch ipset
[tony@tony-compute1 ~]$ sudo cat /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:Netvista123@controller
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = Netvista123
[tony@tony-compute1 ~]$ sudo cat /etc/neutron/plugins/ml2/openvswitch_agent.ini
[ovs]
local_ip = 10.0.0.2
[agent]
tunnel_types = vxlan
l2_population = True
[tony@tony-compute1 ~]$ sudo vim /etc/nova/nova.conf
...
[neutron]
url = http://controller:9696
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = Netvista123
...
[tony@tony-compute1 ~]$ sudo systemctl enable openvswitch
Created symlink from /etc/systemd/system/multi-user.target.wants/openvswitch.service to /usr/lib/systemd/system/openvswitch.service.
[tony@tony-compute1 ~]$ sudo systemctl start openvswitch
[tony@tony-compute1 ~]$ sudo systemctl status openvswitch
● openvswitch.service - Open vSwitch
   Loaded: loaded (/usr/lib/systemd/system/openvswitch.service; enabled; vendor preset: disabled)
   Active: active (exited) since Sat 2019-04-13 15:12:55 CST; 4s ago
  Process: 10626 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 10626 (code=exited, status=0/SUCCESS)
Apr 13 15:12:55 tony-compute1 systemd[1]: Starting Open vSwitch...
Apr 13 15:12:55 tony-compute1 systemd[1]: Started Open vSwitch.
[tony@tony-compute1 ~]$ sudo systemctl restart openstack-nova-compute.service
[tony@tony-compute1 ~]$ sudo systemctl enable neutron-openvswitch-agent.service
Created symlink from /etc/systemd/system/multi-user.target.wants/neutron-openvswitch-agent.service to /usr/lib/systemd/system/neutron-openvswitch-agent.service.
[tony@tony-compute1 ~]$ sudo systemctl start neutron-openvswitch-agent.service
[tony@tony-compute1 ~]$ sudo systemctl status neutron-openvswitch-agent.service
● neutron-openvswitch-agent.service - OpenStack Neutron Open vSwitch Agent
   Loaded: loaded (/usr/lib/systemd/system/neutron-openvswitch-agent.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2019-04-13 16:27:52 CST; 20s ago
  Process: 13259 ExecStartPre=/usr/bin/neutron-enable-bridge-firewall.sh (code=exited, status=0/SUCCESS)
 Main PID: 13264 (neutron-openvsw)
    Tasks: 3
   CGroup: /system.slice/neutron-openvswitch-agent.service
           ├─13264 /usr/bin/python2 /usr/bin/neutron-openvswitch-agent --config-file /usr/share/neutron/ne...
           ├─13333 ovsdb-client monitor tcp:127.0.0.1:6640 Interface name,ofport,external_ids --format=jso...
           └─13335 ovsdb-client monitor tcp:127.0.0.1:6640 Bridge name --format=json
Apr 13 16:27:52 tony-compute1 systemd[1]: Starting OpenStack Neutron Open vSwitch Agent...
Apr 13 16:27:52 tony-compute1 neutron-enable-bridge-firewall.sh[13259]: net.bridge.bridge-nf-call-iptable...1
Apr 13 16:27:52 tony-compute1 neutron-enable-bridge-firewall.sh[13259]: net.bridge.bridge-nf-call-ip6tabl...1
Apr 13 16:27:52 tony-compute1 systemd[1]: Started OpenStack Neutron Open vSwitch Agent.
Apr 13 16:27:54 tony-compute1 sudo[13279]: neutron : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/bin/n...conf
Hint: Some lines were ellipsized, use -l to show in full.
Note: if a Permission Denied error like the one below occurs, first check whether the openstack-selinux package is installed; if the error persists, disable SELinux.

$ sudo yum install -y openstack-selinux
# or
$ sudo setenforce 0
2019-04-13 15:32:16.486 11776 INFO neutron.common.config [-] Logging enabled!
2019-04-13 15:32:16.486 11776 INFO neutron.common.config [-] /usr/bin/neutron-openvswitch-agent version 13.0.2
2019-04-13 15:32:16.486 11776 INFO ryu.base.app_manager [-] loading app neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp
2019-04-13 15:32:16.860 11776 INFO ryu.base.app_manager [-] loading app ryu.app.ofctl.service
2019-04-13 15:32:16.861 11776 INFO ryu.base.app_manager [-] loading app ryu.controller.ofp_handler
2019-04-13 15:32:16.861 11776 INFO ryu.base.app_manager [-] instantiating app neutron.plugins.ml2.drivers.openvswitch.agent.openflow.native.ovs_ryuapp of OVSNeutronAgentRyuApp
2019-04-13 15:32:16.862 11776 INFO ryu.base.app_manager [-] instantiating app ryu.controller.ofp_handler of OFPHandler
2019-04-13 15:32:16.862 11776 INFO ryu.base.app_manager [-] instantiating app ryu.app.ofctl.service of OfctlService
2019-04-13 15:32:16.863 11776 INFO neutron.agent.agent_extensions_manager [-] Loaded agent extensions: []
2019-04-13 15:32:16.875 11776 ERROR ryu.lib.hub [-] hub: uncaught exception: Traceback (most recent call last):
  File "/usr/lib/python2.7/site-packages/ryu/lib/hub.py", line 59, in _launch
    return func(*args, **kwargs)
  File "/usr/lib/python2.7/site-packages/ryu/controller/controller.py", line 153, in __call__
    self.ofp_ssl_listen_port)
  File "/usr/lib/python2.7/site-packages/ryu/controller/controller.py", line 187, in server_loop
    datapath_connection_factory)
  File "/usr/lib/python2.7/site-packages/ryu/lib/hub.py", line 126, in __init__
    self.server = eventlet.listen(listen_info)
  File "/usr/lib/python2.7/site-packages/eventlet/convenience.py", line 46, in listen
    sock.bind(addr)
  File "/usr/lib64/python2.7/socket.py", line 224, in meth
    return getattr(self._sock,name)(*args)
error: [Errno 13] Permission denied : error: [Errno 13] Permission denied
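`setenforce 0` only lasts until the next reboot; making SELinux permissive persistent also requires editing /etc/selinux/config. A sketch of that edit as a sed one-liner, demonstrated here against a scratch copy rather than the real file:

```shell
# Flip SELINUX=enforcing to permissive in a given selinux config file
set_permissive() {
  sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' "$1"
}

# Demo against a scratch copy (on a real system the target would be /etc/selinux/config)
printf 'SELINUXTYPE=targeted\nSELINUX=enforcing\n' > /tmp/selinux_config
set_permissive /tmp/selinux_config
grep '^SELINUX=' /tmp/selinux_config   # SELINUX=permissive
```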
[tony@tony-compute1 ~]$ sudo ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
2: enp0s3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 172.18.22.232/24 brd 172.18.22.255 scope global noprefixroute enp0s3
3: enp0s8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    inet 10.0.0.2/24 brd 10.0.0.255 scope global noprefixroute enp0s8
4: enp0s9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP
    inet 10.238.157.84/23 brd 10.238.157.255 scope global noprefixroute dynamic
5: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 9e:23:18:bd:e0:4a brd ff:ff:ff:ff:ff:ff
6: br-int: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether 42:30:63:9b:0f:41 brd ff:ff:ff:ff:ff:ff
7: br-tun: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN group default qlen 1000
    link/ether aa:8d:70:af:56:4d brd ff:ff:ff:ff:ff:ff
# The service is listening on port 6633
[tony@tony-compute1 ~]$ sudo ovs-vsctl show
6e9ac464-afae-4179-90bb-e665dbba5fb2
    Manager "ptcp:6640:127.0.0.1"
        is_connected: true
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port patch-tun
            Interface patch-tun
                type: patch
                options: {peer=patch-int}
        Port br-int
            Interface br-int
                type: internal
    Bridge br-tun
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port br-tun
            Interface br-tun
                type: internal
        Port patch-int
            Interface patch-int
                type: patch
                options: {peer=patch-tun}
    ovs_version: "2.10.1"
Run the following command on the controller. If compute1 appears in the output, the Open vSwitch agent is working properly.
[tony@tony-controller ~]$ openstack network agent list
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+
| ID                                   | Agent Type         | Host            | Availability Zone | Alive | State | Binary                    |
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+
| 67d969a3-0361-46ea-9e63-c57eabbf5bfc | Open vSwitch agent | tony-compute1   | None              | :-)   | UP    | neutron-openvswitch-agent |
| 8ef64698-2b12-42e7-ae06-c3be746047d1 | L3 agent           | tony-controller | nova              | :-)   | UP    | neutron-l3-agent          |
| c6df225b-0d74-439b-b027-0521b057716d | Metadata agent     | tony-controller | None              | :-)   | UP    | neutron-metadata-agent    |
| ceb81b9d-143f-472a-ba28-78cee7247520 | DHCP agent         | tony-controller | nova              | :-)   | UP    | neutron-dhcp-agent        |
| eb0370da-6509-4be8-92ae-01b5f93a30f3 | Open vSwitch agent | tony-controller | None              | :-)   | UP    | neutron-openvswitch-agent |
+--------------------------------------+--------------------+-----------------+-------------------+-------+-------+---------------------------+
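Eyeballing the table works, but the check can also be scripted. A sketch that pipes the CLI's machine-readable output (`-f value -c Host -c Alive`) into a small awk filter; the `:-)` marker is what the CLI prints for a live agent:

```shell
# Fails (non-zero exit) if any line's last field (the Alive column) is not ":-)"
check_agents() {
  awk '$NF != ":-)" { bad = 1; print "agent not alive: " $0 } END { exit bad }'
}

# Example with canned input; on a real deployment you would run:
#   openstack network agent list -f value -c Host -c Alive | check_agents
printf 'tony-compute1 :-)\ntony-controller :-)\n' | check_agents && echo "all agents alive"
```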
This completes the deployment of the Neutron module.

The /var/log/neutron directory contains the log files for the various Neutron components; if an error occurs, analyzing these logs is the place to start troubleshooting.