I know a lot of folks are using the StackOps script thingy to install OpenStack. I’ve been installing it (quite a bit) lately just from packages, and it’s not all that difficult, so I thought I’d write up the details on how to do that. A lot of this is exactly what’s encoded into Chef recipes and Puppet modules out there – so if you’re looking to run with something already made, there’s plenty of options.
These instructions assume you’re starting with an Ubuntu-based system – either 10.10 or 11.04. I haven’t yet tried it with 11.10.
First things first, I recommend you make sure you have the latest bits of everything:
sudo apt-get update
sudo apt-get dist-upgrade
sudo apt-get autoremove
Then we need to add the release PPA so that your system can grab the packages for OpenStack:
sudo apt-get install python-software-properties
sudo add-apt-repository ppa:openstack-release/2011.3
sudo apt-get update
Now we get into the details. I’m going to lay out instructions that start with a single host, but are set up so you can add additional virtualization hosts as you need them. I’m writing this assuming you’re working in a small network, and setting it up for FlatDHCP networking. Choosing the networking strategy and IP address space is actually one of the trickier parts of doing a reasonable install. For just testing something out in a lab, this setup will work reasonably well – the only thing to really note is that this *will* install a DHCP server to provide IP addresses to the virtual instances, so if you have another DHCP server handing out addresses, you may need to get into the details and change some of these settings.
Installing the packages:
OpenStack relies on MySQL as a data repository for information about the OpenStack configuration, so we’ll need to set up a MySQL server. Normally when you install the packages for MySQL, it’ll ask you about configuring a root password and such. We can make that hands-off by pre-answering some of those questions. To do this, make a file named “/tmp/mysql_preseed.txt” and put the following in it:
mysql-server-5.1 mysql-server/root_password password openstack
mysql-server-5.1 mysql-server/root_password_again password openstack
mysql-server-5.1 mysql-server/start_on_boot boolean true
Then we can get into the commands to install the packages:
cat /tmp/mysql_preseed.txt | debconf-set-selections
apt-get install mysql-server python-mysqldb
apt-get install rabbitmq-server
# ^^ pre-reqs for running controller nova instance
apt-get install euca2ools unzip
# ^^ for accessing nova through EC2 APIs
apt-get install nova-volume nova-vncproxy nova-api nova-ajax-console-proxy
apt-get install nova-doc nova-scheduler nova-objectstore
apt-get install nova-network nova-compute
apt-get install glance
That’s all the packages installed onto your local system! Now we just need to configure it and initialize some information (that’s the bit about networks, etc.).
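If you want a quick sanity check that the installs took, a little loop like this (just a convenience, not part of the official install – the service names are assumed to match the packages above) will show which services are present and running:

```shell
# Check each nova/glance service that the packages above should have
# installed; report any that are missing on this host.
for svc in nova-api nova-scheduler nova-objectstore nova-network \
           nova-compute nova-volume glance-api glance-registry; do
  if [ -x "/etc/init.d/$svc" ]; then
    service "$svc" status || true
  else
    echo "$svc: not installed on this host"
  fi
done
```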
Before I get into changing configs, let me explain what I’ll be setting up. In this example, my internal “network” is 172.17.0.0/24 – and I have a dedicated IP address for this host that is 172.17.0.133. The virtual machines will be in their own network space (10.0.0.0 to 10.0.0.254), and (at this point) not visible from the local network, but will be able to access the local network through their virtualization hosts. The machine I’m using also only has a single NIC (eth0), which is fine for a little test bed, but not likely what you want to do in any sort of real setup. With that in mind, edit /etc/nova/nova.conf to contain the following:
--dhcpbridge_flagfile=/etc/nova/nova.conf
--dhcpbridge=/usr/bin/nova-dhcpbridge
--logdir=/var/log/nova
--state_path=/var/lib/nova
--lock_path=/var/lock/nova
--flagfile=/etc/nova/nova-compute.conf
--verbose
#
--sql_connection=mysql://novadbuser:novaDBsekret@172.17.0.133/nova
#
--network_manager=nova.network.manager.FlatDHCPManager
--flat_network_bridge=br100
--flat_injected=False
--flat_interface=eth0
--public_interface=eth0
#
--vncproxy_url=http://172.17.0.133:6080
--daemonize=1
--rabbit_host=172.17.0.133
--osapi_host=172.17.0.133
--ec2_host=172.17.0.133
--image_service=nova.image.glance.GlanceImageService
--glance_api_servers=172.17.0.133:9292
--use_syslog
Now you might have noticed the MySQL connection string in there. We need to set up that user and password in MySQL to do what needs to be done. I also change the MySQL configuration so that remote systems can connect to MySQL. It’s not needed on a single host, but if you ever want to have more than one compute host, you need to make this change. In /etc/mysql/my.cnf, find the line:
bind-address = 127.0.0.1
and change it to

bind-address = 0.0.0.0

then restart MySQL (sudo service mysql restart) so it picks up the change.
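If you’d rather script that edit, a one-line sed does it. The snippet below demonstrates the substitution on an echoed sample line so you can see exactly what it does; on the real system you’d run the same sed expression with `sudo sed -i` against /etc/mysql/my.cnf:

```shell
# Show what the substitution does to the stock bind-address line.
# On a real host:
#   sudo sed -i 's/^bind-address.*/bind-address = 0.0.0.0/' /etc/mysql/my.cnf
echo 'bind-address            = 127.0.0.1' |
  sed 's/^bind-address.*/bind-address = 0.0.0.0/'
# prints: bind-address = 0.0.0.0
```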
Now let’s make the user in MySQL:
mysql -popenstack
CREATE USER 'novadbuser' IDENTIFIED BY 'novaDBsekret';
GRANT ALL PRIVILEGES ON *.* TO 'novadbuser'@'%' WITH GRANT OPTION;
FLUSH PRIVILEGES;
And set up the database:
mysql -popenstack -e 'CREATE DATABASE nova;'
nova-manage db sync
If that last command gives you any trouble, then MySQL likely isn’t configured correctly – the user can’t access the tables or something. Check the MySQL logs to get a sense of what went wrong.
At this point, it’s time to configure the internals of OpenStack – create projects, networks, etc.
We’ll start by creating an admin user:
# create admin user called "cloudroot"
nova-manage user admin --name=cloudroot --secret=sekret
This should respond with something like:
export EC2_ACCESS_KEY=sekret
export EC2_SECRET_KEY=653f3fad-df22-449b-9e6a-ea6c81e32621
You can scratch that down, but we’ll be getting that same information again later and using it, so don’t worry too much about it.
Now we create a project:
# create project "cloudproject" with project mgr: "cloudroot"
nova-manage project create --project=cloudproject --user=cloudroot
And finally, a network configuration for those internal IP addresses:
nova-manage network create private --fixed_range_v4=10.0.0.0/24 \
    --num_networks=1 --network_size=256 --bridge=br100 \
    --bridge_interface=eth0 --multi_host=T
# gateway assumed at 10.0.0.1
# broadcast assumed at 10.0.0.255
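To double-check what got created, nova-manage can list it back (a sketch – the subcommand names here are the Diablo-era ones, and the guard keeps it harmless on a machine without nova installed):

```shell
# Confirm the network and project we just created actually exist.
if command -v nova-manage >/dev/null; then
  nova-manage network list
  nova-manage project list
else
  echo 'nova-manage not on PATH -- run this on the controller'
fi
```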
Now I’m using the multi-host flag, which is new in the Diablo release. This makes each compute node its own networking host for the purposes of allowing the VMs you spin up to access your network or the internet.
At this point, your system should be up and running, all systems operational. Let me walk you through the command steps to actually kick up a little test VM, though. These commands are all meant to be run as a local user (not root!):
sudo nova-manage project zipfile cloudproject cloudroot /tmp/nova.zip
unzip -o /tmp/nova.zip -d ~/creds
cat creds/novarc >> ~/.bashrc
source creds/novarc
#
euca-add-keypair mykey > mykey.priv
chmod 600 mykey.priv
#
image="ttylinux-uec-amd64-12.1_2.6.35-22_1.tar.gz"
wget http://smoser.brickies.net/ubuntu/ttylinux-uec/$image
uec-publish-tarball $image mybucket
#
wget http://uec-images.ubuntu.com/releases/10.04/release/ubuntu-10.04-server-uec-amd64.tar.gz
uec-publish-tarball ubuntu-10.04-server-uec-amd64.tar.gz mybucket

...OUTPUT...
Thu Aug 18 14:02:20 PDT 2011: ====== extracting image ======
Warning: no ramdisk found, assuming '--ramdisk none'
kernel : lucid-server-uec-amd64-vmlinuz-virtual
ramdisk: none
image  : lucid-server-uec-amd64.img
Thu Aug 18 14:02:29 PDT 2011: ====== bundle/upload kernel ======
Thu Aug 18 14:02:34 PDT 2011: ====== bundle/upload image ======
Thu Aug 18 14:03:12 PDT 2011: ====== done ======
emi="ami-00000002"; eri="none"; eki="aki-00000001";
...OUTPUT...
And running the instances:
euca-run-instances ami-00000002 -k mykey -t m1.large

...OUTPUT...
RESERVATION    r-1jj2a80v    cloudproject    default
INSTANCE    i-00000001    ami-00000002    scheduling    mykey (cloudproject, None)    0    m1.tiny    2011-08-18T21:06:03Z    unknown zone    aki-00000001    ami-00000000
...OUTPUT...
#
euca-describe-instances

...OUTPUT...
RESERVATION    r-1jj2a80v    cloudproject    default
INSTANCE    i-00000001    ami-00000002    10.0.0.2    10.0.0.2    building    mykey (cloudproject, SIX)    0    m1.tiny    2011-08-18T21:06:03Z    nova    aki-00000001    ami-00000000
...OUTPUT...
#
euca-describe-instances

...OUTPUT...
RESERVATION    r-1jj2a80v    cloudproject    default
INSTANCE    i-00000001    ami-00000002    10.0.0.2    10.0.0.2    running    mykey (cloudproject, SIX)    0    m1.tiny    2011-08-18T21:06:03Z    nova    aki-00000001    ami-00000000
...OUTPUT...
#
euca-authorize -P tcp -p 22 default
ssh -i mykey.priv ubuntu@10.0.0.2
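If the instance never makes it out of “building”, the console log is the first place to look. A troubleshooting sketch (instance id from the output above; the guard keeps it harmless on a machine without euca2ools):

```shell
# Pull the boot console of the instance we just launched -- cloud-init
# and networking errors show up here.
if command -v euca-get-console-output >/dev/null; then
  euca-get-console-output i-00000001 || true
else
  echo 'euca2ools not installed here'
fi
```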
To add additional hosts to support more VMs, you only need to install a few of the packages:
apt-get install nova-compute nova-network nova-api
You do need that exact same /etc/nova/nova.conf file though.
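Copying the config over is just an scp away. A sketch, where “compute2” is a placeholder for your new host’s name:

```shell
# Push the controller's nova.conf to a new compute host, then restart
# the services there so they pick it up. "compute2" is hypothetical.
NEWHOST=compute2
scp /etc/nova/nova.conf "$NEWHOST:/tmp/nova.conf"
ssh "$NEWHOST" 'sudo mv /tmp/nova.conf /etc/nova/nova.conf'
ssh "$NEWHOST" 'sudo restart nova-compute; sudo restart nova-network'
```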
The default install of Glance expects the images that you’ve loaded up to be available on the local file system for every compute node at /var/lib/glance. Either NFS mount this directory from a central machine, or replicate the files underneath it to all your “compute hosts” when you upload a new image to be used in the virtual machines.
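If you go the replication route rather than NFS, something like rsync after each upload works. A sketch, with “compute2” again a placeholder for each of your compute hosts:

```shell
# Mirror the controller's glance image store out to a compute host.
# Repeat (or loop) for each compute host after every image upload.
rsync -avz /var/lib/glance/ compute2:/var/lib/glance/
```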
Also, the metadata URL needed for UEC images (169.254.169.254) may need help getting forwarded when running on a system with a single NIC. Two potential solutions: A) run nova-api on each of the compute nodes (quick and dirty), or B) specify --ec2_dmz_host=$HOSTIP in nova.conf, and potentially run `ip link set dev br100 promisc on` to turn on promiscuous mode (per https://answers.launchpad.net/nova/+question/152528).