Arrfab's blog - Openstackhttps://arrfab.net/2017-10-11T00:00:00+02:00Some tips and tricks, mostly around CentOSUsing Ansible Openstack modules on CentOS 72017-10-11T00:00:00+02:002017-10-11T00:00:00+02:00Fabian Arrotintag:arrfab.net,2017-10-11:/posts/2017/Oct/11/using-ansible-openstack-modules-on-centos-7/<p>Suppose that you have a RDO/Openstack cloud already in place, but you'd like to automate some operations: what can you do? On my side, I already <a href="/posts/2017/May/08/deploying-openstack-through-puppet-on-centos-7-a-journey/">mentioned</a> that I used puppet to deploy the initial clouds, but I still prefer Ansible when having to launch ad-hoc tasks, or even change configuration[s]. It's particularly true for our <a href="https://ci.centos.org">CI environment</a>, where we run "agentless", so all configuration changes happen through Ansible.</p>
<p>The good news is that Ansible already has some modules for <a href="http://docs.ansible.com/ansible/latest/list_of_cloud_modules.html#openstack">Openstack</a>, but they come with some requirements and need a little bit of understanding before you can use them.</p>
<p>First of all, all the Ansible os_* modules need <a href="https://pypi.python.org/pypi/shade">"shade"</a> on the host included in the play, as that host will be the one responsible for launching all the os_* modules.
At the time of writing this post, it's <em>not</em> yet available on mirror.centos.org (a review is open, so it will soon be available directly), but you can find the pkg on <a href="https://cbs.centos.org/koji/buildinfo?buildID=20086">our CBS builders</a>.</p>
<p>Once installed, a simple os_image task was directly failing, despite the fact that auth: was present, and that's due to a simple reason: the Ansible os_* modules still want to use the v2 API, while it now defaults to v3 in the Pike release. There is no way to force Ansible itself to use v3, but as it uses shade behind the scenes, there is a way to force this through <a href="https://docs.openstack.org/os-client-config/latest/index.html">os-client-config</a>.</p>
<p>That means that you just have to use a .yaml file (does that sound familiar for Ansible?) that will contain everything you need to know about a specific cloud, and then in Ansible you just declare which cloud you're configuring.</p>
<p>That clouds.yaml file can live under $current_directory, ~/.config/openstack or /etc/openstack, so it's up to you to decide where you want to temporarily host it; I selected /etc/openstack/:</p>
<div class="highlight"><pre>- name: Ensuring we have required pkgs for ansible/openstack
  yum:
    name: python2-shade
    state: installed

- name: Ensuring local directory to hold the os-client-config file
  file:
    path: /etc/openstack
    state: directory
    owner: root
    group: root

- name: Adding clouds.yaml for os-client-config for further actions
  template:
    src: clouds.yaml.j2
    dest: /etc/openstack/clouds.yaml
    owner: root
    group: root
    mode: 0700
</pre></div>
<p>Of course, that clouds.yaml file is itself a Jinja2 template distributed by Ansible to the host in the play <em>before</em> using the os_* modules:</p>
<div class="highlight"><pre>clouds:
  {{ cloud_name }}:
    auth:
      username: admin
      project_name: admin
      password: {{ openstack_admin_pass }}
      auth_url: http://{{ openstack_controller }}:5000/v3/
    user_domain_name: default
    project_domain_name: default
    identity_api_version: 3
</pre></div>
<p>You just have to adapt it to your needs (see the <a href="https://docs.openstack.org/os-client-config/latest/user/configuration.html">doc</a> for this), but the interesting part is identity_api_version, which forces v3.</p>
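<p>Before pointing Ansible at it, you can sanity-check that clouds.yaml with the openstack client itself, as it reads the same os-client-config locations (the cloud name below is just whatever key you declared in the file):</p>

```shell
openstack --os-cloud mycloud token issue
```

<p>If that returns a token, shade (and so the Ansible os_* modules) will be able to authenticate too.</p>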
<p>Then you can use all that in a simple way through Ansible tasks; in this case, adding users to a project:</p>
<div class="highlight"><pre>- name: Configuring OpenStack user[s]
  os_user:
    cloud: "{{ cloud_name }}"
    default_project: "{{ item.0.name }}"
    domain: "{{ item.0.domain_id }}"
    name: "{{ item.1.login }}"
    email: "{{ item.1.email }}"
    password: "{{ item.1.password }}"
  with_subelements:
    - "{{ cloud_projects }}"
    - users
  no_log: True
</pre></div>
<p>From a variables point of view, I decided to just have a simple structure to hold projects/users/roles/quotas, like this:</p>
<div class="highlight"><pre>cloud_projects:
  - name: demo
    description: demo project
    domain_id: default
    quota_cores: 20
    quota_instances: 10
    quota_ram: 40960
    users:
      - login: demo_user
        email: demo@centos.org
        password: Ch@ngeM3
        role: admin # can be _member_ or admin
      - login: demo_user2
        email: demo2@centos.org
        password: Ch@ngeMe2
</pre></div>
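<p>If you wonder what with_subelements does with that structure: it simply pairs each project (item.0) with each of its users (item.1), so the os_user task runs once per user. A quick plain-python sketch of that expansion (nothing Ansible-specific, just to illustrate the loop):</p>

```python
# Rough equivalent of ansible's with_subelements over cloud_projects:
# it yields one (project, user) tuple per user entry.
cloud_projects = [
    {
        "name": "demo",
        "domain_id": "default",
        "users": [
            {"login": "demo_user", "email": "demo@centos.org"},
            {"login": "demo_user2", "email": "demo2@centos.org"},
        ],
    },
]

def subelements(items, key):
    """Pair each item with each of its sub-elements found under `key`."""
    for item in items:
        for sub in item[key]:
            yield (item, sub)

pairs = list(subelements(cloud_projects, "users"))
for project, user in pairs:
    # item.0 -> project, item.1 -> user in the ansible task above
    print(project["name"], user["login"])
```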
<p>Now that it works, you can explore all the other os_* modules; I'm already using those to:</p>
<ul>
<li>Import cloud images in glance</li>
<li>Create networks and subnets in neutron</li>
<li>Create projects/users/roles in keystone</li>
<li>Change quotas for those projects</li>
</ul>
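<p>For instance, for the glance part, importing an image boils down to a task like this (the image name and local file path here are just placeholders, adapt to your cloud):</p>

```yaml
- name: Importing cloud image into glance
  os_image:
    cloud: "{{ cloud_name }}"
    name: CentOS-7-x86_64-GenericCloud
    disk_format: qcow2
    container_format: bare
    filename: /var/tmp/CentOS-7-x86_64-GenericCloud.qcow2
    state: present
```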
<p>I'm just discovering how powerful those tools are, so I'll probably find much more interesting things to do with them later.</p>Using NFS for OpenStack (glance,nova) with selinux2017-07-28T00:00:00+02:002017-07-28T00:00:00+02:00Fabian Arrotintag:arrfab.net,2017-07-28:/posts/2017/Jul/28/using-nfs-for-openstack-glancenova-with-selinux/<p>As announced already, I was (among other things) playing with Openstack/RDO and had deployed a small openstack setup in the CentOS Infra. Then I had to look at our existing <a href="https://wiki.centos.org/DevCloud">DevCloud</a> setup. This setup was based on Opennebula running on CentOS 6, also using Gluster as backend for the VM store. That's when I found out that Gluster isn't a valid option anymore: Gluster was deprecated and has now even been removed from <a href="https://docs.openstack.org/releasenotes/cinder/ocata.html">Cinder</a>. Sad, as one advantage of Gluster was that you could (you had to!) use libgfapi so that the qemu-kvm process could talk directly to Gluster and not access VM images over locally mounted gluster volumes (please, don't even try to do that through fuse).</p>
<p>So what could be a replacement for Gluster from an openstack side? I still have some dedicated nodes for storage backend[s], but not enough to even just think about Ceph. So it seems my only option was to consider NFS. (Technically speaking, the driver was removed from cinder, but I could have tried to use it only for glance and nova, as I have no need for cinder for the DevCloud project; still, it would clearly be dangerous for potential upgrades.)</p>
<p>It's not that I'm a fan of storing qcow2 images on top of NFS, but it seemed to be my only option, and at least the most transparent/least intrusive path, should I need to migrate to something else later.
So let's test this, using NFS over <a href="http://en.wikipedia.org/wiki/InfiniBand">Infiniband</a> (through <a href="https://www.kernel.org/doc/Documentation/infiniband/ipoib.txt">IPoIB</a>), and so at "good speed" (I still have the Infiniband hardware in place that was running for Gluster; that will be replaced).</p>
<p>It's easy to mount the NFS exported dir under /var/lib/glance/images for glance, and then on every compute node also an NFS export under /var/lib/nova/instances/.</p>
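<p>As a sketch, assuming an NFS server named nfs.example.com exporting /exports/glance and /exports/nova (names made up, adapt to your setup), the /etc/fstab entries would look like:</p>

```
nfs.example.com:/exports/glance  /var/lib/glance/images   nfs  defaults,_netdev  0 0
nfs.example.com:/exports/nova    /var/lib/nova/instances  nfs  defaults,_netdev  0 0
```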
<p>That's where you have to see what would be blocked by SELinux, as the current policy shipped with openstack-selinux-0.8.6-0 (from Ocata) doesn't seem to allow that.</p>
<p>I initially tested the services one by one and decided to open a <a href="https://github.com/redhat-openstack/openstack-selinux/pull/13">Pull Request</a> for this, but in the meantime I rebuilt a custom SELinux policy that seems to do the job in my RDO playground.</p>
<p>Here is the .te file that you can compile into a usable .pp policy file:</p>
<div class="highlight"><pre>module os-local-nfs 0.2;

require {
    type glance_api_t;
    type virtlogd_t;
    type nfs_t;
    class file { append getattr open read write unlink create };
    class dir { search getattr write remove_name create add_name };
}

#============= glance_api_t ==============
allow glance_api_t nfs_t:dir { search getattr write remove_name create add_name };
allow glance_api_t nfs_t:file { write getattr unlink open create read };

#============= virtlogd_t ==============
allow virtlogd_t nfs_t:dir search;
allow virtlogd_t nfs_t:file { append getattr open };
</pre></div>
<p>Of course you also need to enable some booleans. Some are already loaded by openstack-selinux (you can see the enabled ones in /etc/selinux/targeted/active/booleans.local), but you now also need <code>virt_use_nfs=1</code>.</p>
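<p>For the record, compiling/loading that policy and setting the boolean boils down to (as root; checkmodule and semodule_package come from the checkpolicy and policycoreutils pkgs):</p>

```shell
checkmodule -M -m -o os-local-nfs.mod os-local-nfs.te
semodule_package -o os-local-nfs.pp -m os-local-nfs.mod
semodule -i os-local-nfs.pp
setsebool -P virt_use_nfs 1
```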
<p>Now that it works, I can replay all that (all of it coming from puppet) on the DevCloud nodes.</p>Deploying Openstack through puppet on CentOS 7 - a Journey2017-05-08T00:00:00+02:002017-05-08T00:00:00+02:00Fabian Arrotintag:arrfab.net,2017-05-08:/posts/2017/May/08/deploying-openstack-through-puppet-on-centos-7-a-journey/<p>It's not a secret that I was playing/experimenting with <a href="http://www.openstack.org">OpenStack</a> these <a href="/posts/2017/Apr/14/deploying-openstack-poc-on-centos-with-linux-bridge/">last days</a>.
When I mention OpenStack, I should even say <a href="http://www.rdoproject.org">RDO</a>, as it's RPM packaged, built and tested on CentOS infra.</p>
<p>Now that it's time to deploy it in production, you should have a deeper look at how to proceed and which tool to use. Sure, <a href="https://wiki.openstack.org/wiki/Packstack">Packstack</a> can help you set up a quick <a href="https://en.wikipedia.org/wiki/Proof_of_concept">PoC</a>, but after some discussions with people hanging around in the #rdo irc channel on freenode, it seems almost everybody agreed that it's not the kind of tool you want to use for a proper deployment.</p>
<p>So let's have a look at the available options. While I really like/prefer <a href="http://www.ansible.com">Ansible</a>, we (CentOS Project) still use <a href="https://puppet.com/">puppet</a> as our configuration management tool, itself using <a href="https://theforeman.org/">Foreman</a> as the <a href="https://docs.puppet.com/puppet/4.10/nodes_external.html#what-is-an-enc">ENC</a>. So let's see both options.</p>
<ul>
<li>Ansible : lots of <a href="http://docs.ansible.com/ansible/list_of_cloud_modu">native modules</a> exist to manage an existing/already deployed openstack cloud, but nothing really that can help you set one up from scratch. OTOH it's true that <a href="https://docs.openstack.org/project-deploy-guide/openstack-ansible/ocata/">Openstack Ansible</a> exists, but that will deploy the openstack components into LXC containers, and I wasn't really comfortable with the whole idea (YMMV)</li>
<li>Puppet : lots of <a href="http://git.openstack.org/cgit/openstack/">puppet modules</a>, so you can automatically reuse/import those into your existing puppet setup; this seems to be the preferred method when discussing with people in #rdo (when not using <a href="https://wiki.openstack.org/wiki/TripleO">TripleO</a> though)</li>
</ul>
<p>So, after some analysis, and despite the fact that I really prefer Ansible over Puppet, I decided (so that it could still make sense in our infra) to go the "puppet modules" way. That was the beginning of a journey, where I saw a lot of <a href="https://en.wiktionary.org/wiki/yak_shaving">yaks to shave</a> too.</p>
<p>It started with me trying to "just" reuse and adapt some existing modules I found. <strong>Wrong</strong>. And it's even funny, because one of my mantras is: "Don't try to automate what you can't understand from scratch" (and I fully agree with Matthias' <a href="https://ma.ttias.be/automating-unknown/">thoughts</a> on this).</p>
<p>So one could just read all the openstack puppet modules and then try to understand how to assemble them to build a cloud. But I remembered that Packstack itself <em>is</em> puppet driven, so I decided to have a look at what it generates and start from that to write my own module from scratch. How to proceed? Easy: on a VM, just install packstack, generate the answer file, "salt" it to your needs, and generate the manifests:</p>
<div class="highlight"><pre>yum install -y centos-release-openstack-ocata && yum install openstack-packstack -y
packstack --gen-answer-file=answers.txt
vim answers.txt
packstack --answer-file=answers.txt --dry-run
* The installation log file is available at: /var/tmp/packstack/20170508-101433-49cCcj/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20170508-101433-49cCcj/manifests
</pre></div>
<p>So now we can have a look at all the generated manifests and start our own from scratch, reimporting all the needed openstack puppet modules. That's what I did .. but I started to encounter some issues. The first one was that the puppet version we were using was 3.6.2 (everywhere, on every release/arch we support, so CentOS 6 and 7, and x86_64, i386, aarch64, ppc64, ppc64le).</p>
<p>One of the openstack components is <a href="https://www.rabbitmq.com/">RabbitMQ</a>, and the openstack modules rely on the puppetlabs module to deploy/manage it. You'll see a lot of those external modules being called/needed by openstack puppet. The first thing I had to do was investigate our own modules, as some have the same name but don't come from puppetlabs/forge; instead of analyzing all those, I moved everything RDO related to a <a href="https://theforeman.org/manuals/1.12/index.html#4.2ManagingPuppet">different environment</a> so that it wouldn't conflict with some of our existing modules. Back now to the RabbitMQ one: puppet errored when just trying to use it. First yak to shave: updating the whole CentOS infra to a higher puppet version because of a <a href="https://tickets.puppetlabs.com/browse/MODULES-1781">puppet bug</a>. So let's rebuild puppet for CentOS 6/7 with a higher version on <a href="https://cbs.centos.org/koji/packageinfo?packageID=390">CBS</a>.</p>
<p>That means of course testing our own modules, on our test Foreman/puppetmasterd instance first, and as the upgrade worked, I applied it everywhere. Good, so let's jump to the next yak.</p>
<p>After the rabbitmq issue was solved, I encountered other ones, now coming from the openstack puppet modules themselves, as the .rb ruby code used for types/providers was expecting ruby 2 and not 1.8.3, which was the one available on our puppetmasterd (yeah, our Foreman was on a CentOS 6 node). So another yak to shave: migrating our Foreman instance from CentOS 6 to a new CentOS 7 node. Basically installing a CentOS 7 node with the <em>same</em> Foreman version running on the CentOS 6 node, then following the <a href="https://theforeman.org/manuals/1.12/index.html#5.5Backup,RecoveryandMigration">procedure</a>, but then, again, time lost testing the update/upgrade and also all other modules, etc. (one can see why I prefer agentless cfgmgmt).</p>
<p>Finally I found that some of the openstack puppet modules don't touch the whole config. Let me explain why. In Openstack <a href="https://releases.openstack.org/ocata/">Ocata</a>, some things are mandatory, like the <a href="https://docs.openstack.org/developer/nova/placement.html">Placement API</a>, but despite all the classes being applied, I had some issues getting it to run correctly when deploying an instance. It's true that I initially had a bug in my puppet code for the user/password used to configure the rabbitmq settings, but it was solved and also applied correctly in /etc/nova/nova.conf (setting "transport_url="). But the openstack nova services (all nova-*.log files btw) kept saying that the given credentials were refused by rabbitmq, while they worked when tested manually.</p>
<p>After having verified the rabbitmq logs, I saw that despite what was configured in nova.conf, services were still trying to use the wrong user/pass to connect to rabbitmq. Strange, as <a href="http://git.openstack.org/cgit/openstack/puppet-nova/tree/manifests/cell_v2/simple_setup.pp">::nova::cell_v2::simple_setup</a> was included and was also supposed to use the transport_url declared at the nova.conf level (and so configured by ::nova). That's how I discovered that something "ugly" happened: in fact, even if you modify nova.conf, nova stores some settings in the mysql DB, and you can see those (so the "wrong" ones in my case) with:</p>
<div class="highlight"><pre><span></span><span class="n">nova</span><span class="o">-</span><span class="n">manage</span> <span class="n">cell_v2</span> <span class="n">list_cells</span> <span class="c1">--debug</span>
</pre></div>
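<p>If you prefer looking at the DB directly, those stored settings live in the nova_api.cell_mapping table; a quick way to inspect them (just to look, not something I'd update by hand without care):</p>

```shell
mysql -e 'SELECT uuid, name, transport_url FROM nova_api.cell_mapping;'
```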
<p>Something to keep in mind for initial deployments: if your rabbitmq user/pass needs to be changed, puppet will not complain, but it will only update the conf file, not the settings first imported by puppet into the DB (table nova_api.cell_mapping if you're interested).
After that, everything was running, and I reinstalled/reprovisioned my test nodes multiple times, applying the puppet modules/manifests from puppetmasterd, to confirm.</p>
<p>That was quite a journey, and it's probably only the beginning, but it's a good start. Now to investigate other options for cinder/glance, as it seems Gluster was deprecated and I'd like to know why.</p>
<p>Hope this helps if you need to bootstrap openstack with puppet!</p>Deploying Openstack PoC on CentOS with linux bridge2017-04-14T00:00:00+02:002017-04-14T00:00:00+02:00Fabian Arrotintag:arrfab.net,2017-04-14:/posts/2017/Apr/14/deploying-openstack-poc-on-centos-with-linux-bridge/<p>I was recently in need of a way to start "playing" with <a href="http://www.openstack.org">Openstack</a> (I work on an existing <a href="http://www.rdoproject.org">RDO</a> setup), so I thought it would be a good idea to have a personal playground where I could deploy from scratch, then break and fix that setup at will.</p>
<p>At first sight, Openstack looks <a href="https://docs.openstack.org/admin-guide/_images/openstack-arch-kilo-logical-v1.png">impressive</a> and "over-engineered", as it's complex and has zillions of modules to make it work. But when you dive into it, you understand that the choice is yours to make it complex or not. Yeah, that sentence can sound strange, but I'll explain why.</p>
<p>First, you should write down your requirements, and only then look at the openstack components you actually need. For my personal playground, I just wanted a basic setup that would let me deploy VMs on demand, <em>in</em> the existing network, directly using a bridge, as I want the VMs to be integrated into the existing network/subnet.</p>
<p>So just by looking at the mentioned <a href="https://docs.openstack.org/admin-guide/_images/openstack-arch-kilo-logical-v1.png">diagram</a>, we just need :</p>
<ul>
<li>keystone (needed for the identity service)</li>
<li>nova (hypervisor part)</li>
<li>neutron (handling the network part)</li>
<li>glance (to store the OS images that will be used to create the VMs)</li>
</ul>
<p>Now that I have my requirements and the list of needed components, let's see how to set up my PoC. The <a href="http://www.rdoproject.org">RDO project</a> has good documentation for this, including the <a href="https://www.rdoproject.org/install/quickstart/">Quickstart</a> guide. You can follow that guide, and as everything is packaged/built/tested and delivered through the CentOS mirror network, you can have a working RDO/openstack All-in-one setup in minutes.</p>
<p>The only issue is that it doesn't fit my need: it sets up unneeded components, and the network layout isn't the one I wanted either, as it's based on openvswitch and other rules (so multiple layers I wanted to get rid of). The good news is that <a href="https://www.rdoproject.org/install/quickstart/">Packstack</a> is in fact a wrapper tool around puppet modules, and it supports a lot of options to configure your PoC.</p>
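<p>Since packstack is driven entirely by those options, you don't have to pass them all on the command line: it can dump everything into an answer file that you edit and replay (the path below is just an example):</p>

```shell
# Generate a full answer file listing every tunable and its default
packstack --gen-answer-file=/root/answers.txt
# Edit the options you care about, then run the deployment from the file
packstack --answer-file=/root/answers.txt
```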
<p>Let's assume that I want a PoC based on openstack-newton, and that my machine has two nics: eth0 for the mgmt network and eth1 for the VMs network. You don't need to configure the bridge on the eth1 interface yourself, as that will be done automatically by neutron. So let's follow the quickstart guide, but adapt the packstack command line:</p>
<div class="highlight"><pre><span></span><span class="n">yum</span> <span class="n">install</span> <span class="n">centos</span><span class="o">-</span><span class="n">release</span><span class="o">-</span><span class="n">openstack</span><span class="o">-</span><span class="n">newton</span> <span class="o">-</span><span class="n">y</span>
<span class="n">systemctl</span> <span class="n">disable</span> <span class="n">firewalld</span>
<span class="n">systemctl</span> <span class="n">stop</span> <span class="n">firewalld</span>
<span class="n">systemctl</span> <span class="n">disable</span> <span class="n">NetworkManager</span>
<span class="n">systemctl</span> <span class="n">stop</span> <span class="n">NetworkManager</span>
<span class="n">systemctl</span> <span class="n">enable</span> <span class="n">network</span>
<span class="n">systemctl</span> <span class="k">start</span> <span class="n">network</span>
<span class="n">yum</span> <span class="n">install</span> <span class="o">-</span><span class="n">y</span> <span class="n">openstack</span><span class="o">-</span><span class="n">packstack</span>
</pre></div>
<p>Let's fix eth1 to ensure that it's started but without <em>any</em> IP on it : </p>
<div class="highlight"><pre><span></span><span class="n">sed</span> <span class="o">-</span><span class="n">i</span> <span class="s1">'s/BOOTPROTO="dhcp"/BOOTPROTO="none"/'</span> <span class="o">/</span><span class="n">etc</span><span class="o">/</span><span class="n">sysconfig</span><span class="o">/</span><span class="n">network</span><span class="o">-</span><span class="n">scripts</span><span class="o">/</span><span class="n">ifcfg</span><span class="o">-</span><span class="n">eth1</span>
<span class="n">sed</span> <span class="o">-</span><span class="n">i</span> <span class="s1">'s/ONBOOT="no"/ONBOOT="yes"/'</span> <span class="o">/</span><span class="n">etc</span><span class="o">/</span><span class="n">sysconfig</span><span class="o">/</span><span class="n">network</span><span class="o">-</span><span class="n">scripts</span><span class="o">/</span><span class="n">ifcfg</span><span class="o">-</span><span class="n">eth1</span>
<span class="n">ifup</span> <span class="n">eth1</span>
</pre></div>
<p>And now let's call packstack with the required options so that it uses a basic linux bridge (and so no openvswitch), and instruct it to use eth1 for that mapping:</p>
<div class="highlight"><pre><span></span><span class="n">packstack</span> <span class="c1">--allinone --provision-demo=n --os-neutron-ml2-type-drivers=flat --os-neutron-ml2-mechanism-drivers=linuxbridge --os-neutron-ml2-flat-networks=physnet0 --os-neutron-l2-agent=linuxbridge --os-neutron-lb-interface-mappings=physnet0:eth1 --os-neutron-ml2-tenant-network-types=' ' --nagios-install=n </span>
</pre></div>
<p>At this stage we have the openstack components installed, and a /root/keystonerc_admin file that we can source for openstack CLI operations.
We have instructed neutron to use linuxbridge, but we haven't (yet) created a network and a subnet tied to it, so let's do that now:</p>
<div class="highlight"><pre><span></span><span class="nv">source</span> <span class="o">/</span><span class="nv">root</span><span class="o">/</span><span class="nv">keystonerc_admin</span>
<span class="nv">neutron</span> <span class="nv">net</span><span class="o">-</span><span class="nv">create</span> <span class="o">--</span><span class="nv">shared</span> <span class="o">--</span><span class="nv">provider</span>:<span class="nv">network_type</span><span class="o">=</span><span class="nv">flat</span> <span class="o">--</span><span class="nv">provider</span>:<span class="nv">physical_network</span><span class="o">=</span><span class="nv">physnet0</span> <span class="nv">othernet</span>
<span class="nv">neutron</span> <span class="nv">subnet</span><span class="o">-</span><span class="nv">create</span> <span class="o">--</span><span class="nv">name</span> <span class="nv">other_subnet</span> <span class="o">--</span><span class="nv">enable_dhcp</span> <span class="o">--</span><span class="nv">allocation</span><span class="o">-</span><span class="nv">pool</span><span class="o">=</span><span class="nv">start</span><span class="o">=</span><span class="mi">192</span>.<span class="mi">168</span>.<span class="mi">123</span>.<span class="mi">1</span>,<span class="k">end</span><span class="o">=</span><span class="mi">192</span>.<span class="mi">168</span>.<span class="mi">123</span>.<span class="mi">4</span> <span class="o">--</span><span class="nv">gateway</span><span class="o">=</span><span class="mi">192</span>.<span class="mi">168</span>.<span class="mi">123</span>.<span class="mi">254</span> <span class="o">--</span><span class="nv">dns</span><span class="o">-</span><span class="nv">nameserver</span><span class="o">=</span><span class="mi">192</span>.<span class="mi">168</span>.<span class="mi">123</span>.<span class="mi">254</span> <span class="nv">othernet</span> <span class="mi">192</span>.<span class="mi">168</span>.<span class="mi">123</span>.<span class="mi">0</span><span class="o">/</span><span class="mi">24</span>
</pre></div>
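<p>Purely as a sanity check, you can then verify that neutron created the flat network and that a plain linux bridge (its name is derived from the network id, so it will differ on your machine) now enslaves eth1:</p>

```shell
source /root/keystonerc_admin
# The flat network and its subnet as neutron sees them
neutron net-list
neutron subnet-list
# The bridge carrying eth1 (plus instance tap devices once VMs are booted)
brctl show
```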
<p>Before importing images and creating instances, there is one thing left to do: instruct the dhcp_agent that metadata for cloud-init inside the VM will not be served from the traditional "router" inside openstack. And don't forget to let traffic (in/out) pass through the security group (see <a href="https://docs.openstack.org/user-guide/cli-nova-configure-access-security-for-instances.html">doc</a>).</p>
<p>Just be sure to have <code>enable_isolated_metadata = True</code> in /etc/neutron/dhcp_agent.ini and then <code>systemctl restart neutron-dhcp-agent</code>: from that point, cloud metadata will be served from dhcp too.</p>
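<p>As a sketch, that change can be scripted (crudini, from the crudini package, is just one way to edit the ini file; plain sed works too):</p>

```shell
yum install -y crudini
# enable_isolated_metadata lives in the [DEFAULT] section of dhcp_agent.ini
crudini --set /etc/neutron/dhcp_agent.ini DEFAULT enable_isolated_metadata True
systemctl restart neutron-dhcp-agent
```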
<p>From that point you can just follow the <a href="https://www.rdoproject.org/install/running-an-instance/">quickstart</a> guide to create projects/users, import images, and create instances, and/or do all this from the <a href="https://docs.openstack.org/user-guide/cli-cheat-sheet.html">cli</a> too.</p>
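<p>For instance (the image file, flavor and instance names below are only illustrative), importing a CentOS cloud image and booting an instance on the network created earlier could look like this:</p>

```shell
source /root/keystonerc_admin
# Import a qcow2 cloud image into glance
openstack image create --disk-format qcow2 --container-format bare \
  --public --file CentOS-7-x86_64-GenericCloud.qcow2 centos7
# Boot an instance directly on the flat "othernet" network
openstack server create --image centos7 --flavor m1.small \
  --nic net-id=$(openstack network show -f value -c id othernet) testvm
```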
<p>One last remark with linuxbridge in an existing network: as neutron will have a dhcp-agent listening on the bridge, the provisioned VMs will get an IP from the pool declared in the "neutron subnet-create" command. However (and I saw this when I added other compute nodes to the same setup), you have a potential conflict with an existing dhcpd instance on the same segment/network: your VMs can get their IP from that existing dhcpd instance instead of from neutron. As a workaround, you can just make your existing dhcp server ignore the mac address range used by openstack, so that your VMs will always get their IP from neutron's dhcp.
To do this, there are different options, depending on your local dhcpd instance:</p>
<ul>
<li>for dnsmasq : <code>dhcp-host=fa:16:3e:*:*:*,ignore</code> (see <a href="http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq.conf.example">doc</a>)</li>
<li>for ISC dhcpd : "ignore booting" (see <a href="https://linux.die.net/man/5/dhcpd.conf">doc</a>)</li>
</ul>
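<p>For ISC dhcpd, a stanza like the following could do it. This is an untested sketch on my side (the class name is arbitrary), so double-check against the dhcpd.conf man page:</p>

```
# Match the first three octets of the client MAC against the openstack prefix
class "openstack-vms" {
  match if binary-to-ascii(16,8,":",substring(hardware,1,3)) = "fa:16:3e";
  ignore booting;
}
```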
<p>The default mac address range for openstack VMs indeed starts with fa:16:3e (base_mac defaults to fa:16:3e:00:00:00, see /etc/neutron/neutron.conf, so that can be changed too).</p>
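<p>That prefix comes from the base_mac option; changing it could look like this (the value below is arbitrary):</p>

```shell
# base_mac lives in the [DEFAULT] section; the first 3 octets are kept
# (the 4th too, if non-zero) and the remaining ones are randomized per port
crudini --set /etc/neutron/neutron.conf DEFAULT base_mac "fa:16:3f:00:00:00"
systemctl restart neutron-server
```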
<p>Those were some of my findings for my openstack PoC/playground. Now that I understand all this a little bit better, I'm currently working on some puppet integration for it, as there are official openstack puppet modules available on <a href="http://git.openstack.org/cgit">git.openstack.org</a> that one can import to deploy/configure openstack (better than using packstack). But there are lots of "yaks to shave" to get to that point, so that's surely for another future blog post.</p>