I know a lot of folks are using the StackOps script thingy to install OpenStack. I’ve been installing it (quite a bit) lately just from packages, and it’s not all that difficult, so I thought I’d write up the details on how to do that. A lot of this is exactly what’s encoded into Chef recipes and Puppet modules out there, so if you’re looking to run with something already made, there are plenty of options.
These instructions assume you’re starting with an Ubuntu-based system, either 10.10 or 11.04. I haven’t tried it yet with 11.10.
First things first, I recommend you make sure you have the latest bits of everything:
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get autoremove
Then we need to add the release “PPA” so that your system can grab the packages for OpenStack:
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:openstack-release/2011.3
sudo apt-get update
Now we get into the details. I’m going to walk through instructions that start with a single host but are set up so you can add additional virtualization hosts as you need them. I’m writing this assuming you’re working in a small network and setting things up for FlatDHCP networking. Choosing the networking strategy and IP address space to use is actually one of the trickier parts of doing a reasonable install. For just testing something out in a lab, this setup will work reasonably well. The only thing to really note is that this *will* install a DHCP server to provide IP addresses to the virtual instances, so if you have another DHCP server handing out addresses, you might need to get into the details and change some of these settings.
Installing the packages:
OpenStack relies on MySQL as a data repository for information about the OpenStack configuration, so we’ll need to set up a MySQL server. Normally when you install the MySQL packages, they’ll ask you about configuring a root password and such. We can make that hands-off by pre-answering some of those questions. To do this, make a file named “/tmp/mysql_preseed.txt” and put the following in it:
mysql-server-5.1 mysql-server/root_password password openstack
mysql-server-5.1 mysql-server/root_password_again password openstack
mysql-server-5.1 mysql-server/start_on_boot boolean true
Then we can get into the commands to install the packages:
cat /tmp/mysql_preseed.txt | debconf-set-selections
apt-get install mysql-server python-mysqldb
apt-get install rabbitmq-server
# ^^ pre-reqs for running controller nova instance
apt-get install euca2ools unzip
# ^^ for accessing nova through EC2 APIs
apt-get install nova-volume nova-vncproxy nova-api nova-ajax-console-proxy
apt-get install nova-doc nova-scheduler nova-objectstore
apt-get install nova-network nova-compute
apt-get install glance
That’s got all the packages installed onto your local system! Now we just need to configure it up and initialize some information (that’s the bit about networks, etc).
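If you want a quick sanity check that everything actually installed and the services came up (this is just a habit of mine, not a required step), something like this will do, assuming the Ubuntu packages registered their init jobs as they did on my installs:

# all the nova services should show up here once the packages are installed
ps aux | grep -i nova
# you can also poke at one of the services directly
sudo service nova-api status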
Before I get into changing configs, let me explain what I’ll be setting up. In this example, my internal “network” is 172.17.0.0/24 – and I have a dedicated IP address for this host that is 172.17.0.133. The virtual machines will be in their own network space (10.0.0.0 to 10.0.0.254), and (at this point) not visible from the local network, but will be able to access the local network through their virtualization hosts. The machine I’m using also only has a single NIC (eth0), which is fine for a little test bed, but not likely what you want to do in any sort of real setup.
/etc/nova/nova.conf
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--flagfile=/etc/nova/nova-compute.conf
--verbose
#
--sql_connection=mysql://novadbuser:novaDBsekret@172.17.0.133/nova
#
--network_manager=nova.network.manager.FlatDHCPManager
--flat_network_bridge=br100
--flat_injected=False
--flat_interface=eth0
--public_interface=eth0
#
--vncproxy_url=http://172.17.0.133:6080
--daemonize=1
--rabbit_host=172.17.0.133
--osapi_host=172.17.0.133
--ec2_host=172.17.0.133
--image_service=nova.image.glance.GlanceImageService
--glance_api_servers=172.17.0.133:9292
--use_syslog
Now you might have noticed the MySQL connection string in there. We need to set up that user and password in MySQL to do what needs to be done. I also change the MySQL configuration so that remote systems can connect to MySQL. It’s not needed on a single host, but if you ever want to have more than one compute host, you need to make this change. In /etc/mysql/my.cnf, find the line:
bind-address = 127.0.0.1
and change it to
bind-address = 0.0.0.0
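MySQL only reads that file at startup, so give it a restart to pick up the change (a small extra step I’m adding here; this is just the stock Ubuntu service name):

# restart MySQL so it starts listening on all interfaces
sudo service mysql restart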
Now let’s make the user in MySQL:
mysql -popenstack
CREATE USER 'novadbuser' IDENTIFIED BY 'novaDBsekret';
GRANT ALL PRIVILEGES ON *.* TO 'novadbuser'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
And set up the database:
mysql -popenstack -e 'CREATE DATABASE nova;'
nova-manage db sync
If that last command gives you any trouble, then we likely don’t have the MySQL system configured correctly – the user can’t access the tables or something. Check in the logs for MySQL to get a sense of what might have gone wrong.
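If you want to check the database access directly, here’s a quick hand-rolled test using the user, password, and host IP from this example (adjust to whatever you actually used):

# connect as the nova database user over TCP and list the tables nova-manage created
mysql -h 172.17.0.133 -u novadbuser -pnovaDBsekret nova -e 'SHOW TABLES;'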
At this point, it’s time to configure the internals of OpenStack: create projects, networks, etc.
We’ll start by creating an admin user:
# create admin user called "cloudroot"
nova-manage user admin --name=cloudroot --secret=sekret
This should respond with something like:
export EC2_ACCESS_KEY=sekret
export EC2_SECRET_KEY=653f3fad-df22-449b-9e6a-ea6c81e32621
You can scratch that down, but we’ll be getting that same information again later and using it, so don’t worry too much about it.
Now we create a project:
# create project "cloudproject" with project mgr: "cloudroot"
nova-manage project create --project=cloudproject --user=cloudroot
And finally, a network configuration for those internal IP addresses:
nova-manage network create private --fixed_range_v4=10.0.0.0/24 \
  --num_networks=1 --network_size=256 --bridge=br100 \
  --bridge_interface=eth0 --multi_host=T
# gateway assumed at 10.0.0.1
# broadcast assumed at 10.0.0.255
Now I’m using the multi-host flag, which is new in the Diablo release. This makes each compute node its own networking host so that the VMs you spin up can access your network or the internet.
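If the nova services were already running while you were editing nova.conf and creating the network, it doesn’t hurt to bounce them all so they re-read the config. This is just my own quick loop over the main services installed above (adjust the list to what you actually installed), plus a check that the network took:

# restart the nova services so they pick up /etc/nova/nova.conf and the new network
for svc in nova-api nova-scheduler nova-objectstore nova-volume nova-network nova-compute; do
  sudo service $svc restart
done
# confirm the 10.0.0.0/24 network got created
sudo nova-manage network list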
At this point, your system should be up and running, all systems operational. Let me walk you through the commands to actually kick up a little test VM, though. These commands are all meant to be done as a local user (not root!)
sudo nova-manage project zipfile cloudproject cloudroot /tmp/nova.zip
unzip -o /tmp/nova.zip -d ~/creds
cat creds/novarc >> ~/.bashrc
source creds/novarc
#
euca-add-keypair mykey > mykey.priv
chmod 600 mykey.priv
#
image="ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz"
wget http://smoser.brickies.net/ubuntu/ttylinux-uec/$image
uec-publish-tarball $image mybucket
#
wget http://uec-images.ubuntu.com/releases/10.04/release/ubuntu-10.04-server-uec-amd64.tar.gz
uec-publish-tarball ubuntu-10.04-server-uec-amd64.tar.gz mybucket
...OUTPUT...
Thu Aug 18 14:02:20 PDT 2011: ====== extracting image ======
Warning: no ramdisk found, assuming '--ramdisk none'
kernel : lucid-server-uec-amd64-vmlinuz-virtual
ramdisk: none
image  : lucid-server-uec-amd64.img
Thu Aug 18 14:02:29 PDT 2011: ====== bundle/upload kernel ======
Thu Aug 18 14:02:34 PDT 2011: ====== bundle/upload image ======
Thu Aug 18 14:03:12 PDT 2011: ====== done ======
emi="ami-00000002"; eri="none"; eki="aki-00000001";
...OUTPUT...
And running the instances:
euca-run-instances ami-00000002 -k mykey -t m1.large
...OUTPUT...
RESERVATION  r-1jj2a80v  cloudproject  default
INSTANCE  i-00000001  ami-00000002  scheduling  mykey (cloudproject, None)  0  m1.tiny  2011-08-18T21:06:03Z  unknown zone  aki-00000001  ami-00000000
...OUTPUT...
#
euca-describe-instances
...OUTPUT...
RESERVATION  r-1jj2a80v  cloudproject  default
INSTANCE  i-00000001  ami-00000002  10.0.0.2  10.0.0.2  building  mykey (cloudproject, SIX)  0  m1.tiny  2011-08-18T21:06:03Z  nova  aki-00000001  ami-00000000
...OUTPUT...
#
euca-describe-instances
...OUTPUT...
RESERVATION  r-1jj2a80v  cloudproject  default
INSTANCE  i-00000001  ami-00000002  10.0.0.2  10.0.0.2  running  mykey (cloudproject, SIX)  0  m1.tiny  2011-08-18T21:06:03Z  nova  aki-00000001  ami-00000000
...OUTPUT...
#
euca-authorize -P tcp -p 22 default
ssh -i mykey.priv root@10.0.0.2
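When you’re done poking at the test VM, tearing it down is just as simple (the instance ID here is the one from the output above; use whatever euca-describe-instances reports for you):

# terminate the test instance and confirm it goes away
euca-terminate-instances i-00000001
euca-describe-instances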
To add additional hosts to support more VMs, you only need to install a few of the packages:
apt-get install nova-compute nova-network nova-api
You do need that exact same /etc/nova/nova.conf file though.
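Getting the config onto the new host is a simple copy. This is just a sketch: “newhost” is a made-up hostname, and you’ll want to adjust the service list to whatever you installed there:

# copy the controller's nova.conf to the new compute host, then restart its services
scp /etc/nova/nova.conf newhost:/tmp/nova.conf
ssh newhost 'sudo mv /tmp/nova.conf /etc/nova/nova.conf'
ssh newhost 'sudo service nova-compute restart; sudo service nova-network restart; sudo service nova-api restart'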
Note:
The default install of Glance expects the images that you’ve loaded up to be available on the local file system for every compute node at /var/lib/glance. Either NFS mount this directory from a central machine, or replicate the files underneath it to all your “compute hosts” when you upload a new image to be used in the virtual machines.
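For the NFS route, a minimal sketch looks like this, assuming the controller (172.17.0.133 in this example) exports /var/lib/glance and you’ve installed nfs-kernel-server there:

# on the controller: export the glance image store (add this line to /etc/exports)
#   /var/lib/glance  172.17.0.0/24(ro,sync,no_subtree_check)
# on each compute host: install the NFS client bits and mount the share
sudo apt-get install nfs-common
sudo mount 172.17.0.133:/var/lib/glance /var/lib/glance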
Also, the metadata URL needed for UEC images (169.254.169.254) may need help getting forwarded when running on a system with a single NIC. Two potential solutions: A) run nova-api on each of the compute nodes (quick and dirty), or B) specify --ec2_dmz_host=$HOSTIP, and potentially run “ip link set dev br100 promisc on” to turn on promiscuous mode (per https://answers.launchpad.net/nova/+question/152528).
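For option B, that works out to a couple of extra lines. This is a sketch using this example’s controller address; the right value is the host where nova-api is running:

# add the metadata forwarding flag to /etc/nova/nova.conf and restart nova-network
echo "--ec2_dmz_host=172.17.0.133" | sudo tee -a /etc/nova/nova.conf
sudo service nova-network restart
# if metadata requests still time out, try putting the bridge in promiscuous mode
sudo ip link set dev br100 promisc on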
Hi Joe,
Thanks for the heads-up, this pretty much summarizes what we need to move up from Cactus to Diablo!
OpenStack tops the stack of clouds 🙂
Regards,
-Anirudha
I have done similar steps to yours, but I’ve run into some problems.
First, can you post your /etc/network/interfaces here?
And then, how do the VMs get access to the outside network? Do you use iptables for NAT?
The /etc/network/interfaces file is a standard static IP configuration for this host, but be aware that specifying --multi_host in the options causes the Nova network system to enable a bridge interface on whatever you have defined as --flat_interface for use in communicating with the VMs on that host. (There’s a quick sketch after the interfaces file below for inspecting what it sets up.)
The file:
auto lo
iface lo inet loopback
auto eth0
iface eth0 inet static
address 172.17.0.133
netmask 255.255.255.0
gateway 172.17.0.1
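If you want to see what nova-network actually builds on top of that, these are the usual places to look (brctl comes from the bridge-utils package; install it if it isn’t already there):

# show the br100 bridge that nova-network layers over eth0
brctl show
ip addr show br100
# the NAT rules nova manages for the VMs live in the nat table
sudo iptables -t nat -L -n -v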
By default (and in this example), the outside world doesn’t have access to ping these VMs directly. For that you need to define and set up floating IP addresses, which I skipped in this walkthrough. The virtualization host has access to that 10.0.0.0/24 network, so you can SSH into your instances from there once you’ve given appropriate permissions to access port 22 (it’s at the very end of the writeup, in some not-very-well-documented steps).
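For the curious, the rough shape of the floating IP setup looks something like this. It’s only a sketch: the address range is made up, I haven’t wired it into this walkthrough, and the exact nova-manage arguments have shifted between releases, so check the nova-manage usage output if it complains:

# on the controller: carve out a small pool of floating addresses (made-up range)
sudo nova-manage floating create --ip_range=172.17.0.224/28
# as the project user: grab an address from the pool and attach it to a running instance
euca-allocate-address
euca-associate-address -i i-00000001 172.17.0.225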
Hi Joe,
I installed following your instructions, and I succeeded with a single host.
But I have a problem with the multi-host model.
I have 2 nodes:
node 1 and node 2, both installed with your instructions.
I don’t have any problem when I run “sudo nova-manage db sync”, but the command “sudo nova-manage service list” shows an error in nova-manage.log
(http://pastebin.com/pWyDQ4hg)
And it looks like the same issue as this bug (http://www.mail-archive.com/openstack@lists.launchpad.net/msg04120.html)
node 1:
nova.conf
http://pastebin.com/YkdRHN55
network card:
root@openstack2:/home/cloud# ip addr
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet 169.254.169.254/32 scope link lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:1d:60:e8:75:74 brd ff:ff:ff:ff:ff:ff
inet 10.2.76.3/24 brd 10.2.76.255 scope global eth0
inet6 fe80::21d:60ff:fee8:7574/64 scope link
valid_lft forever preferred_lft forever
3: eth1: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:1d:60:e8:76:6e brd ff:ff:ff:ff:ff:ff
inet 10.2.77.3/24 brd 10.2.77.255 scope global eth1
inet6 fe80::21d:60ff:fee8:766e/64 scope link
valid_lft forever preferred_lft forever
4: eth2: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
link/ether 00:1d:60:e8:72:99 brd ff:ff:ff:ff:ff:ff
5: eth3: mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
link/ether 00:1d:60:e8:74:29 brd ff:ff:ff:ff:ff:ff
6: virbr0: mtu 1500 qdisc noqueue state UNKNOWN
link/ether 8a:50:e4:60:e4:49 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
node 2:
nova.conf
http://pastebin.com/Has60MVg
network card
cloud4@openstack4:~$ ip addr
1: lo: mtu 16436 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
inet 169.254.169.254/32 scope link lo
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: mtu 1500 qdisc mq state DOWN qlen 1000
link/ether 00:14:5e:7b:aa:8e brd ff:ff:ff:ff:ff:ff
3: eth1: mtu 1500 qdisc mq state UP qlen 1000
link/ether 00:14:5e:7b:aa:8f brd ff:ff:ff:ff:ff:ff
inet6 fe80::214:5eff:fe7b:aa8f/64 scope link tentative dadfailed
valid_lft forever preferred_lft forever
4: eth2: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:11:0a:61:0e:a0 brd ff:ff:ff:ff:ff:ff
inet 10.2.76.4/24 brd 10.2.76.255 scope global eth2
inet6 fe80::211:aff:fe61:ea0/64 scope link
valid_lft forever preferred_lft forever
5: eth3: mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:11:0a:61:0e:a1 brd ff:ff:ff:ff:ff:ff
inet 10.2.77.4/24 brd 10.2.77.255 scope global eth3
inet6 fe80::211:aff:fe61:ea1/64 scope link
valid_lft forever preferred_lft forever
6: virbr0: mtu 1500 qdisc noqueue state UNKNOWN
link/ether 0e:41:57:f3:05:73 brd ff:ff:ff:ff:ff:ff
inet 192.168.122.1/24 brd 192.168.122.255 scope global virbr0
Thanks,
Phucvdb
That reads like a timeout while connecting to your MySQL database. You might double-check the connectivity between your hosts and that database by using the mysql client on the command line from each of the compute nodes, to make sure you can get to what’s needed. If you were following my examples, then something like:
mysql -h 10.2.77.3 -u novadbuser --password=novaDBsekret nova
should allow you to connect from either host.
Nice article. One thing I noticed, you’re missing multi_host=T from your network create line.
Darren
It’s there – I just reformatted that segment of the post to make it more obvious.
@Joe: on node 2 (10.2.77.4), I could connect to MySQL on node 1 (10.2.77.3) as follows:
http://pastebin.com/YaBt9fM1
But the command “sudo nova-manage service list” still shows the same error.
thanks.
phucvdb
I tried changing the VLAN from 10.2.77.0/24 to 172.16.1.0/24, and the command “sudo nova-manage service list” ran successfully without any errors in my nova-manage.log.
I don’t understand this issue. Is this a bug, or did I do something wrong?
Hi Joe,
I was unable to get your config to work on 10.10 following your steps exactly. In the end I had to use this doc: http://cssoss.files.wordpress.com/2011/08/openstackbookv1-0_csscorp.pdf when it came to the creation of the user, project and credentials. When I followed your steps I did not receive any errors until trying to publish the tarball. I would then run a simple euca command and get:
nova@licof038:~$ euca-describe-images
Warning: failed to parse error message from AWS: :1:0: syntax error
EC2ResponseError: 403 Forbidden
403 Forbidden
Access was denied to this resource.
I knew it was a credentials issue, which is why I tried another document with a variant of the procedure. I think it was a combination of the different syntax as well as restarting all the services after sourcing novarc, which this document did not mention. Can you think of any other reason why following the steps here ended in a credentials issue for me? These steps were far easier and more straightforward, so I would prefer to use them in the future.
Thanks.
I tried these instructions on an Ubuntu 11.04 VM using the latest Diablo versions available from ppa:openstack-release/2011.3. uec-publish-tarball failed because euca-describe-images failed. That failed because the wrong kind of value was being placed into EC2_ACCESS_KEY. It tried to use “clouduser:cloudproject”. It needs to be EC2_ACCESS_KEY=”5146563d-779b-43d5-90f3-31c4cc360f9a:cloudproject”, where 514…f9a is the value spit out by “nova-manage user admin --name=cloudroot --secret=sekret”.
I also had firewall issues with the default firewall settings.
This is a great blog post! I think it succinctly describes the steps to get OpenStack up and running; the docs have all this information, but you have to hunt for it.
Any chance you are planning on doing an addendum to this post explaining Keystone and Dashboard installation? Due to Keystone issues, it has been a real challenge to know which version to use and how to get it all properly installed.
Actually, I’ve been spending time documenting Keystone because it has been such a challenge. The latest bits are up at http://keystone.openstack.org/, and when the stable/diablo branch is nailed down I’ll be working on the formal docs to match it.
Thanks Joe, that would be super helpful in general for the community to get the entire setup working with a UI (Keystone and Dashboard) that can be used to showcase the product to non-technical folks.
Hi Joe,
thanks for a nice & clear tutorial; however, when installing glance (apt-get install glance) I have some issues, and I’m not sure if it’s my setup or a bug; I get
———————————————————————————————
Setting up glance (2012.1~e2~20111110.1090-0ubuntu0ppa1~maverick1) …
Traceback (most recent call last):
File “/usr/bin/glance-manage”, line 44, in <module>
from glance import version as glance_version
ImportError: No module named glance
dpkg: error processing glance (--configure):
subprocess installed post-installation script returned error exit status 1
————————————————————————————————-
(I’m on Ubuntu 10.10);
Thanks!
/nico
Hey Nico,
It’s a bug – you’re using the very latest trunk PPA to install your packages, and it looks like there’s a problem with that package. You can either use one of the “released” packages, or you can wait a bit until the bug gets resolved. I’d recommend at least reporting this bug at https://bugs.launchpad.net/glance/+filebug
Hey Joe …
Thanks for a great tutorial. I have one slight issue. I need to access my virtuals remotely so I need to allocate/associate floating ips to them. How do I do that with your setup? Can I do it without disturbing the 6 virtual machines I have currently running?
Thanks,
Thomas
When I execute “uec-publish-tarball $image mybucket” (now “cloud-pu…”)
I receive: “Unable to run euca-describe-images. Is environment for euca- set up?”
All logs checked, everything seems OK.
Any suggestion? I followed this guide for Ubuntu 11.10 and Diablo,
nova and glance 2011.3 release.
Hi all again!
Found a trick to continue:
in /etc/nova/nova.conf
add line with:
use_deprecated_auth=true
Thanks