Arrfab's blog (https://arrfab.net/) - Some tips and tricks, mostly around CentOS

<p><strong>Combining multiple audio sinks with PulseAudio on CentOS Stream 8</strong> (2022-01-06, Fabian Arrotin, tag:arrfab.net,2022-01-06:/posts/2022/Jan/06/combining-multiples-audio-sinks-with-pulseaudio-on-centos-stream-8/)</p>
<p>During the winter break I offered myself a new bass, and I mentioned this to one of my friends, who had also offered himself a new guitar. As the pandemic is still ongoing, he decided to just quickly record himself (a video shot), sent me the link and asked me to do the same.</p>
<p>Then came the simple problem to solve: while I have two nice Fender amplifiers (<a href="https://www.fender.com/en-US/guitar-amplifiers/contemporary-digital/mustang-lt25/2311100000.html">Mustang LT</a> and <a href="https://www.fender.com/en-US/bass-amplifiers/contemporary-digital/rumble-lt25/2270100000.html">Rumble LT</a>) that the Linux kernel on CentOS Stream 8 natively recognizes as valid input sources, I wanted to <em>also</em> mix in a backing track (something playing on my computer, basically a YouTube stream) and record all of it easily with the simple <a href="https://wiki.gnome.org/Apps/Cheese">Cheese video recording app</a> present by default in GNOME.</p>
<p>So I had a look at <a href="https://www.freedesktop.org/wiki/Software/PulseAudio/">PulseAudio</a> to see whether it was easily possible to combine the monitor device (basically the sound coming out of your PC/speakers when you play something) with my amplifier as a separate input, and then record that in one shot as a new stream/input that Cheese would use transparently (Cheese lets you specify a webcam but offers nothing with respect to the sound/microphone/input device).</p>
<p>Here is the solution:</p>
<ul>
<li>creating a new <code>sink</code> with the <a href="https://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/Modules/#module-null-sink"><code>module-null-sink</code></a> pulseaudio module</li>
<li>adding some inputs (basically the main audio .monitor device and my amplifier) to that sink with the <a href="https://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/Modules/#module-loopback"><code>module-loopback</code></a> pulseaudio module</li>
<li>then creating a "fake" stream that can be used as an input device (like a microphone) using the <a href="https://www.freedesktop.org/wiki/Software/PulseAudio/Documentation/User/Modules/#module-remap-source"><code>module-remap-source</code></a> pulseaudio module</li>
</ul>
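<p>Under the hood, those three steps map to four <code>pactl</code> calls (one for the sink, one loopback per input, one remap). The sketch below only builds the command strings and prints them, so nothing touches PulseAudio; the device names are examples and may not match your system:</p>

```shell
# Dry-run sketch: build the pactl commands for the three steps without executing them.
# Device names here are examples only, not guaranteed to exist on your machine.
sink_name="monitor-and-amp"
monitor_device="alsa_output.pci-0000_00_1f.3.analog-stereo.monitor"
amp_device="alsa_input.usb-FMIC_Mustang_LT_25_00000000001A-02.analog-stereo"

# 1) a null sink that will mix everything routed into it
cmd_sink="pactl load-module module-null-sink sink_name=${sink_name}"
# 2) loop both the monitor device and the amplifier into that sink
cmd_loop_monitor="pactl load-module module-loopback source=${monitor_device} sink=${sink_name}"
cmd_loop_amp="pactl load-module module-loopback source=${amp_device} sink=${sink_name}"
# 3) expose the sink's .monitor as a selectable "microphone"
cmd_remap="pactl load-module module-remap-source master=${sink_name}.monitor source_name=combined-input"

printf '%s\n' "$cmd_sink" "$cmd_loop_monitor" "$cmd_loop_amp" "$cmd_remap"
```

On a real system you would run the four commands instead of printing them; the full wrapper script below does exactly that, with extra checks.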
<p>For example, when my guitar amplifier is connected over USB, it shows up like this:</p>
<div class="highlight"><pre><span></span>pacmd list-sources <span class="p">|</span> egrep <span class="s1">'(^\s+name: .*)|(^\s+device.description = .*)'</span>
name: <alsa_output.usb-Lenovo_ThinkPad_Thunderbolt_3_Dock_USB_Audio_000000000000-00.analog-stereo.monitor>
device.description <span class="o">=</span> <span class="s2">"Monitor of ThinkPad Thunderbolt 3 Dock USB Audio Analog Stereo"</span>
name: <alsa_input.usb-Lenovo_ThinkPad_Thunderbolt_3_Dock_USB_Audio_000000000000-00.mono-fallback>
device.description <span class="o">=</span> <span class="s2">"ThinkPad Thunderbolt 3 Dock USB Audio Mono"</span>
name: <alsa_input.usb-046d_HD_Pro_Webcam_C920_F4525F9F-02.analog-stereo>
device.description <span class="o">=</span> <span class="s2">"HD Pro Webcam C920 Analog Stereo"</span>
name: <alsa_input.usb-MICE_MICROPHONE_USB_MICROPHONE_201308-00.mono-fallback>
device.description <span class="o">=</span> <span class="s2">"Blue Snowball Mono"</span>
name: <alsa_output.pci-0000_00_1f.3.analog-stereo.monitor>
device.description <span class="o">=</span> <span class="s2">"Monitor of Built-in Audio Analog Stereo"</span>
name: <alsa_input.pci-0000_00_1f.3.analog-stereo>
device.description <span class="o">=</span> <span class="s2">"Built-in Audio Analog Stereo"</span>
name: <alsa_input.usb-FMIC_Mustang_LT_25_00000000001A-02.analog-stereo>
device.description <span class="o">=</span> <span class="s2">"Mustang LT 25 Analog Stereo"</span>
</pre></div>
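<p>To feed such a name into a script, the <code>name: &lt;...&gt;</code> wrapper has to be stripped. A small <code>sed</code> expression does it; the sample line is inlined here so the snippet runs even without PulseAudio:</p>

```shell
# Extract the bare source name from a "name: <...>" line of pacmd list-sources
# (sample output inlined; on a real system, pipe `pacmd list-sources` instead).
sample='    name: <alsa_input.usb-FMIC_Mustang_LT_25_00000000001A-02.analog-stereo>'
amp=$(printf '%s\n' "$sample" | sed -n 's/^ *name: <\(.*\)>$/\1/p')
echo "$amp"
```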
<p>Now that we have the full device names, we can use a simple bash wrapper script to create the new input, based on whether the bass or the guitar amp is to be used. Here is the script:</p>
<div class="highlight"><pre>#!/bin/bash
# This little bash wrapper will just combine monitor and existing source from fender amplifier
# and create a virtual input that can be selected as default input for recording

f_log() {
  echo "[+] $0 -> $*"
}

function usage () {
cat << EOF
You need to call this script like this : $0 (-r) -i <input>
  -r : reset pulseaudio to default and so removes virtual input
  -i : external amplifier to combine with source monitor [required param, values: (guitar|bass)]
EOF
}

while getopts "hri:" option
do
  case ${option} in
    h)
      usage
      exit
      ;;
    r)
      action=reset
      ;;
    i)
      amplifier_model=${OPTARG}
      ;;
    ?)
      usage
      exit
      ;;
  esac
done

# Checking first if we just need to reset pulseaudio
if [ "${action}" == "reset" ] ; then
  f_log "Resetting pulseaudio to defaults ..."
  pactl unload-module module-loopback
  pactl unload-module module-null-sink
  sleep 2
  pulseaudio -k
  exit
fi

# Parsing amplifier input to combine and exit if not specified
# One can use the following commands to know which sources are available
# pacmd list-sources | egrep '(^\s+name: .*)|(^\s+device.description = .*)'
if [ "${amplifier_model}" == "guitar" ] ; then
  f_log "Fender Mustang amplifier selected"
  source_device="alsa_input.usb-FMIC_Mustang_LT_25_00000000001A-02.analog-stereo"
  sink_name="monitor-and-amp"
  fake_input_name="mustang-combined"
elif [ "${amplifier_model}" == "bass" ] ; then
  f_log "Fender Rumble Amplifier selected"
  source_device="alsa_input.usb-FMIC_Fender_LT_USB_Audio_Streaming_00000000001A-00.analog-stereo"
  sink_name="monitor-and-bassamp"
  fake_input_name="rumble-combined"
else
  usage
  exit 1
fi

# Now let's do the real work
# Common
monitor_device="alsa_output.usb-Lenovo_ThinkPad_Thunderbolt_3_Dock_USB_Audio_000000000000-00.analog-stereo.monitor"

f_log "Adding new sink [${sink_name}]"
pactl load-module module-null-sink sink_name=${sink_name} sink_properties=device.description=Source-monitor-amp
sleep 5

f_log "Adding monitor device [${monitor_device}] to created sink [${sink_name}]"
pactl load-module module-loopback source=${monitor_device} sink_dont_move=true sink=${sink_name}
sleep 5

f_log "Adding external amplifier [${source_device}] to created sink [${sink_name}]"
pactl load-module module-loopback source=${source_device} sink_dont_move=true sink=${sink_name}

# Create fake input combining all sinks
f_log "Creating now new virtual input [${fake_input_name}] to be used as input for recording"
sleep 5
pactl load-module module-remap-source source_name=${fake_input_name} master=${sink_name}.monitor source_properties=device.description=${fake_input_name}
</pre></div>
<p>Now that we have the script, I can just call it like this (example for my guitar amp); the bare numbers in the output are the module indices that <code>pactl load-module</code> prints on success:</p>
<div class="highlight"><pre>./pulse-audio-amp-combine -i guitar
[+] ./pulse-audio-amp-combine -> Fender Mustang amplifier selected
[+] ./pulse-audio-amp-combine -> Adding new sink [monitor-and-amp]
26
[+] ./pulse-audio-amp-combine -> Adding monitor device [alsa_output.usb-Lenovo_ThinkPad_Thunderbolt_3_Dock_USB_Audio_000000000000-00.analog-stereo.monitor] to created sink [monitor-and-amp]
27
[+] ./pulse-audio-amp-combine -> Adding external amplifier [alsa_input.usb-FMIC_Mustang_LT_25_00000000001A-02.analog-stereo] to created sink [monitor-and-amp]
28
[+] ./pulse-audio-amp-combine -> Creating now new virtual input [mustang-combined] to be used as input for recording
29
</pre></div>
<p>It then appears as a new input that I can select as the default under GNOME:</p>
<p><img alt="gnome-settings" src="/images/gnome-control-center-sound.png"></p>
<p>I also rebuilt/installed the <a href="https://freedesktop.org/software/pulseaudio/pavucontrol/">pavucontrol</a> application, which can be handy to visualize all the streams; you can also control the volume in its recording tab:</p>
<p><img alt="pavucontrol-recording" src="/images/pavucontrol-recording.png"></p>
<p>You can then lower the level of the audio you're playing on the laptop (for example a backing track found on YouTube, but anything played on the laptop goes to the monitor device). YMMV, so do a quick test first with your other input (my amp + instrument in my case).</p>
<p>Once done, you can use any app like Audacity or Cheese to just record. Probably easier and faster than more complex (but more professional) setups around JACK. As said, it's just to quickly record something and combine streams/sinks together, nothing like a DAW system :-)</p>

<p><strong>Using connection delegation with mitogen for Ansible</strong> (2020-10-28, Fabian Arrotin, tag:arrfab.net,2020-10-28:/posts/2020/Oct/28/using-connection-delegation-with-mitogen-for-ansible/)</p>
<p>This should be a very short blog post, but long enough to justify a blog post instead of a 'tweet' : I had myself a small issue with mitogen plugin in our Ansible infra.</p>
<p>To cut a long story short: everybody knows that Ansible relies on SSH as transport, so one can use traditional ~/.ssh/config tuning to declare ProxyJump for some hosts, etc.</p>
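<p>For comparison, this is roughly what that traditional ~/.ssh/config tuning looks like; the host patterns here are made up for illustration, not taken from the post:</p>

```
# ~/.ssh/config : the plain-ssh way of jumping through a bastion
Host web*.dc2
    ProxyJump bastion.dc2
```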
<p>But when you use mitogen (we do), the <a href="https://mitogen.networkgenomics.com/ansible_detailed.html#connection-delegation">official doc</a> mentions a specific parameter for connection delegation: <code>mitogen_via</code>.</p>
<p>The simple example on the webpage seems trivial and if you have multiple hosts that need to be configured from remote ansible+mitogen combo, using mitogen would speed things up as it would <em>know</em> about the host topology.</p>
<p>That's what I thought when having a look at the simple inventory on that web page: </p>
<div class="highlight"><pre><span></span><span class="k">[dc2]</span>
<span class="na">web1.dc2</span>
<span class="na">web2.dc2</span>
<span class="na">web3.dc2</span>
<span class="k">[dc2:vars]</span>
<span class="na">mitogen_via</span> <span class="o">=</span> <span class="s">bastion.dc2</span>
</pre></div>
<p>Sounds easy, but when I quickly tried mitogen_via, something I thought would be obvious in fact wasn't:
my understanding was that mitogen would automatically force agent forwarding when going through the bastion host.
A simple <code>ansible -m ping</code> (let's assume web1.dc2 from their example) returned:</p>
<div class="highlight"><pre>web1.dc2 | UNREACHABLE! => {
    "changed": false,
    "msg": "error occurred on host bastion.dc2: SSH authentication is incorrect",
    "unreachable": true
}
</pre></div>
<p>Well, we can see from the returned JSON that it was indeed trying to pass through bastion.dc2, and that's confirmed on web1.dc2:</p>
<div class="highlight"><pre>Oct 28 15:52:36 web1.dc2 sshd[12913]: Connection closed by <ip_from_bastion.dc2> port 56728 [preauth]
</pre></div>
<p>Then I thought about something that was obvious to me but that mitogen (which just reuses the underlying ssh) doesn't do automatically: forwarding the SSH agent to the nodes behind the bastion.</p>
<p>We can easily solve that with one simple Ansible parameter: Ansible has the <code>ansible_ssh_common_args</code> and <code>ansible_ssh_extra_args</code> parameters, specific to the <a href="https://docs.ansible.com/ansible/latest/user_guide/intro_inventory.html#connecting-to-hosts-behavioral-inventory-parameters">SSH connection</a>.</p>
<p>So what about forcing agent forwarding just on that bastion host and seeing how that works?
That means that in our inventory (but it can go into host_vars/bastion.dc2 too) we just have to add the parameter:</p>
<div class="highlight"><pre>bastion.dc2 ansible_ssh_extra_args='-o ForwardAgent=yes'
</pre></div>
<p>Let's try again :</p>
<div class="highlight"><pre>web1.dc2 | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
</pre></div>
<p>Good, so we can push that for our bastion hosts (the ones used as mitogen_via targets in the inventory) into host_vars or group_vars and call it a day.
The reason why I prefer using <code>ansible_ssh_extra_args</code> is that it merges with and adds to settings you may already have, for example something like this in your ansible.cfg:</p>
<div class="highlight"><pre><span></span><span class="k">[ssh_connection]</span>
<span class="na">ssh_args</span> <span class="o">=</span>
</pre></div>
<p>I like the logic: we don't need to modify ~/.ssh/config with all the exceptions to reflect the infra layout, we can just reflect it in the ansible inventory.</p>

<p><strong>Deploying OpenShift in KVM/libvirt guests</strong> (2020-09-11, Fabian Arrotin, tag:arrfab.net,2020-09-11:/posts/2020/Sep/11/deploying-openshift-in-kvmlibvirt-guests/)</p>
<p>This week I had to work on a PoC to deploy OpenShift in Virtual Machines instead of bare-metal, like we did recently for the <a href="https://arrfab.net/posts/2020/May/20/deploying-openshift-4-on-bare-metal-and-disabling-dhcp/">CentOS CI infra</a></p>
<p>Why in Virtual Machines (KVM guests) and not on bare-metal? Well, there are cases where you have powerful/beefy machines, but not enough of them to meet the minimum number of nodes (at least 3 etcd nodes, and that's not even counting the real workers: at least 2, so 5 in total as a bare minimum), while those machines would perfectly support (in CPU/memory and storage) the whole infra, assuming of course that you don't deploy all etcd/control-plane nodes on the same physical node, and same for the workers.</p>
<p>If you have a look at the official <a href="https://docs.openshift.com/container-platform/4.5/welcome/index.html">openshift documentation</a>, you'll see that while all major cloud providers (AWS, Azure, GCP) are listed, there are also ways to deploy on bare-metal (what we did for CI infra), but also on RHEV, vSphere and Openstack too .. but nothing for plain KVM hypervisors (managed by libvirt in our cases).</p>
<p>But a VM is more or less like a bare-metal install, so what if we treat the VMs <em>as</em> bare-metal? Problem solved, right?
For our bare-metal deployment we just used Ansible with a simple <a href="https://github.com/CentOS/ansible-infra-playbooks/blob/master/adhoc-provision-ocp4-node.yml">ad-hoc playbook</a>, so nothing fancy: just creating PXE boot entries, using IPMI to remotely power on the nodes and ensure they boot from the network; RHCOS then gets installed with all the <a href="https://github.com/CentOS/ansible-infra-playbooks/blob/master/templates/ocp_pxeboot.j2#L15">kernel parameters</a> for network settings, where to find the RHCOS image to install, and where to find the ignition files.</p>
<p>So reusing that was my first idea, as we can easily create a VM with a fixed MAC address and boot it from the network. But then I thought about what we already use for our traditional KVM deployments: a simple <a href="https://github.com/CentOS/ansible-infra-playbooks/blob/master/adhoc-deploy-kvm-guest.yml">ad-hoc playbook</a> just templating a <a href="https://github.com/CentOS/ansible-infra-playbooks/blob/master/templates/ansible-virt-install.j2">virt-install</a> command that is kicked off on the hypervisor.</p>
<p>If you have used <code>virt-install</code> yourself, you know that there is the <code>--location</code> parameter (that we used already). Extracted from <code>man virt-install</code> :</p>
<div class="highlight"><pre>-l, --location OPTIONS
    Distribution tree installation source. virt-install can recognize certain distribution trees and fetches a
    bootable kernel/initrd pair to launch the install.
</pre></div>
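<p>Put together, a virt-install invocation of that shape could look roughly like the sketch below. This is a hedged example, not the project's actual template: every name, URL, size and the MAC address are placeholders, and the coreos.inst.* arguments are the usual RHCOS installer kernel parameters:</p>

```
virt-install --name ocp-worker-0 --memory 16384 --vcpus 4 \
  --disk size=120 \
  --network bridge=br0,mac=52:54:00:xx:xx:xx \
  --location http://deploy.example.org/rhcos/ \
  --extra-args "coreos.inst.install_dev=vda coreos.inst.image_url=http://deploy.example.org/rhcos/rhcos-metal.raw.gz coreos.inst.ignition_url=http://deploy.example.org/ignition/worker.ign"
```

Note that <code>--extra-args</code> (which carries the RHCOS installer parameters) only works together with <code>--location</code>, which is exactly why the .treeinfo trick below matters.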
<p>How does that work? Well, virt-install grabs the kernel and initrd from that location, but to know where to find them (name/path) it uses a .treeinfo file. Here is http://mirror.centos.org/centos/7/os/x86_64/.treeinfo as an example:</p>
<div class="highlight"><pre><span></span><span class="k">[general]</span>
<span class="na">name</span> <span class="o">=</span> <span class="s">CentOS-7</span>
<span class="na">family</span> <span class="o">=</span> <span class="s">CentOS</span>
<span class="na">timestamp</span> <span class="o">=</span> <span class="s">1587405659.3</span>
<span class="na">variant</span> <span class="o">=</span>
<span class="na">version</span> <span class="o">=</span> <span class="s">7</span>
<span class="na">packagedir</span> <span class="o">=</span>
<span class="na">arch</span> <span class="o">=</span> <span class="s">x86_64</span>
<span class="k">[stage2]</span>
<span class="na">mainimage</span> <span class="o">=</span> <span class="s">LiveOS/squashfs.img</span>
<span class="k">[images-x86_64]</span>
<span class="na">kernel</span> <span class="o">=</span> <span class="s">images/pxeboot/vmlinuz</span>
<span class="na">initrd</span> <span class="o">=</span> <span class="s">images/pxeboot/initrd.img</span>
<span class="na">boot.iso</span> <span class="o">=</span> <span class="s">images/boot.iso</span>
<span class="k">[images-xen]</span>
<span class="na">kernel</span> <span class="o">=</span> <span class="s">images/pxeboot/vmlinuz</span>
<span class="na">initrd</span> <span class="o">=</span> <span class="s">images/pxeboot/initrd.img</span>
</pre></div>
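<p>The mechanics can be sketched in a few lines of shell. This is only an illustration of what virt-install does for you behind <code>--location</code> (virt-install itself does this in Python); the file content is taken from the example above and the parsing is done with awk:</p>

```shell
# Sketch: find the kernel/initrd paths the same way virt-install does,
# by reading the [images-x86_64] section of a .treeinfo file
cat > .treeinfo <<'EOF'
[general]
name = CentOS-7
arch = x86_64
[images-x86_64]
kernel = images/pxeboot/vmlinuz
initrd = images/pxeboot/initrd.img
EOF
# only consider keys inside the [images-x86_64] section
kernel=$(awk -F' = ' '/^\[images-x86_64\]/{s=1;next} /^\[/{s=0} s && $1=="kernel"{print $2}' .treeinfo)
initrd=$(awk -F' = ' '/^\[images-x86_64\]/{s=1;next} /^\[/{s=0} s && $1=="initrd"{print $2}' .treeinfo)
# virt-install would now fetch ${location}/${kernel} and ${location}/${initrd}
echo "${kernel} ${initrd}"
```

<p>virt-install then downloads those two files from the <code>--location</code> URL and boots the VM with them.</p>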
<p>So let's combine this option with the Red Hat CoreOS tree that we'll generate on our httpd deployment server: such a .treeinfo file doesn't exist there, but we can just <a href="https://github.com/CentOS/ansible-infra-playbooks/blob/master/templates/ocp-treeinfo.j2">template it</a>.
From that point it's easy: we just use a variant of our <a href="https://github.com/CentOS/ansible-infra-playbooks/blob/master/adhoc-provision-ocp4-kvm-guest.yml">ad-hoc playbook</a> that will:</p>
<ul>
<li>Download kernel/initrd.img and deployer image for openshift to our local httpd server</li>
<li>Ensure we'll have correct .treeinfo file in place</li>
<li>Create a <a href="https://github.com/CentOS/ansible-infra-playbooks/blob/master/templates/ansible-virt-install-ocp.j2">virt-install wrapper</a> that points --location at the correct path, deploys the VMs with RHCOS and automatically calls ignition</li>
</ul>
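<p>For illustration, the templated .treeinfo for the RHCOS tree ends up looking like this (the kernel/initramfs file names below are placeholders matching whatever artifacts were downloaded to the deployment server; see the linked template for the real values):</p>

```ini
[general]
name = RHCOS
family = RHCOS
version = 4
arch = x86_64

[images-x86_64]
kernel = rhcos-installer-kernel
initrd = rhcos-installer-initramfs.img
```

<p>With that file in place, pointing virt-install's <code>--location</code> at the httpd URL is enough for it to find a bootable kernel/initrd pair.</p>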
<p>While I admit that I'm surely not the most experienced OpenShift admin (I just started to play with it), I still like the fact that RHCOS is still more or less the Linux we used to know, so combining tools we already use allows us to deploy it, although surely not the way it's officially documented :)</p>Remotely reinstalling a node on CentOS 8 with DuD (Driver Disk Update / kernel module for nic/hba)2020-09-05T00:00:00+02:002020-09-05T00:00:00+02:00Fabian Arrotintag:arrfab.net,2020-09-05:/posts/2020/Sep/05/remotely-reinstalling-a-node-on-centos-8-with-dud-driver-disk-update-kernel-module-for-nichba/<p>Recently in the CentOS Infra, we got a new sponsor giving us access to a server that has an HBA needing a kernel module that was deprecated by default in the RHEL8 (and thus CentOS 8) kernel.</p>
<p>What can you do in such a situation? The answer is easy: <a href="https://elrepo.org">Elrepo</a>!
They have provided, for years now, ready-to-go kernel modules for network cards, RAID/HBA controllers, WiFi NICs, etc., for various versions of RHEL/CentOS and other rebuilds using the same kernel.</p>
<p>I wanted to give it a try on a node to which I at least have remote KVM/IPMI access, so I could reset the node in case of problems.
Let's use the following ~8-year-old IBM blade for this example, which has the following network interface card and HBA:</p>
<div class="highlight"><pre><span></span><span class="err">#</span><span class="w"> </span><span class="n">lspci</span><span class="w"> </span><span class="o">|</span><span class="n">egrep</span><span class="w"> </span><span class="o">-</span><span class="n">i</span><span class="w"> </span><span class="s1">'ethernet|Serial'</span><span class="w"></span>
<span class="mi">0</span><span class="nl">c</span><span class="p">:</span><span class="mf">00.0</span><span class="w"> </span><span class="n">Serial</span><span class="w"> </span><span class="n">Attached</span><span class="w"> </span><span class="n">SCSI</span><span class="w"> </span><span class="nl">controller</span><span class="p">:</span><span class="w"> </span><span class="n">Broadcom</span><span class="w"> </span><span class="o">/</span><span class="w"> </span><span class="n">LSI</span><span class="w"> </span><span class="n">SAS2004</span><span class="w"> </span><span class="n">PCI</span><span class="o">-</span><span class="n">Express</span><span class="w"> </span><span class="k">Fusion</span><span class="o">-</span><span class="n">MPT</span><span class="w"> </span><span class="n">SAS</span><span class="o">-</span><span class="mi">2</span><span class="w"> </span><span class="o">[</span><span class="n">Spitfire</span><span class="o">]</span><span class="w"> </span><span class="p">(</span><span class="n">rev</span><span class="w"> </span><span class="mi">03</span><span class="p">)</span><span class="w"></span>
<span class="mi">16</span><span class="err">:</span><span class="mf">00.0</span><span class="w"> </span><span class="n">Ethernet</span><span class="w"> </span><span class="nl">controller</span><span class="p">:</span><span class="w"> </span><span class="n">Emulex</span><span class="w"> </span><span class="n">Corporation</span><span class="w"> </span><span class="n">OneConnect</span><span class="w"> </span><span class="mi">10</span><span class="n">Gb</span><span class="w"> </span><span class="n">NIC</span><span class="w"> </span><span class="p">(</span><span class="n">be3</span><span class="p">)</span><span class="w"> </span><span class="p">(</span><span class="n">rev</span><span class="w"> </span><span class="mi">03</span><span class="p">)</span><span class="w"></span>
<span class="err">#</span><span class="w"> </span><span class="n">lspci</span><span class="w"> </span><span class="o">-</span><span class="n">n</span><span class="w"> </span><span class="o">|</span><span class="n">egrep</span><span class="w"> </span><span class="s1">'0c:00.0|16:00.0'</span><span class="w"></span>
<span class="mi">0</span><span class="nl">c</span><span class="p">:</span><span class="mf">00.0</span><span class="w"> </span><span class="mi">0107</span><span class="err">:</span><span class="w"> </span><span class="mi">1000</span><span class="err">:</span><span class="mi">0070</span><span class="w"> </span><span class="p">(</span><span class="n">rev</span><span class="w"> </span><span class="mi">03</span><span class="p">)</span><span class="w"></span>
<span class="mi">16</span><span class="err">:</span><span class="mf">00.0</span><span class="w"> </span><span class="mi">0200</span><span class="err">:</span><span class="w"> </span><span class="mi">19</span><span class="nl">a2</span><span class="p">:</span><span class="mi">0710</span><span class="w"> </span><span class="p">(</span><span class="n">rev</span><span class="w"> </span><span class="mi">03</span><span class="p">)</span><span class="w"></span>
<span class="err">#</span><span class="w"> </span><span class="n">ethtool</span><span class="w"> </span><span class="o">-</span><span class="n">i</span><span class="w"> </span><span class="n">eth0</span><span class="o">|</span><span class="n">grep</span><span class="w"> </span><span class="n">driver</span><span class="w"></span>
<span class="nl">driver</span><span class="p">:</span><span class="w"> </span><span class="n">be2net …</span></pre></div><p>Recently in the CentOS Infra, we got a new sponsor giving us access to a server that has a HBA needing a kernel module that was deprecated in the RHEL8 (and thus CentOS 8) kernel by default.</p>
<p>What can you do in such a situation? The answer is easy: <a href="https://elrepo.org">Elrepo</a>!
They have provided, for years now, ready-to-go kernel modules for network cards, RAID/HBA controllers, WiFi NICs, etc., for various versions of RHEL/CentOS and other rebuilds using the same kernel.</p>
<p>I wanted to give it a try on a node to which I at least have remote KVM/IPMI access, so I could reset the node in case of problems.
Let's use the following ~8-year-old IBM blade for this example, which has the following network interface card and HBA:</p>
<div class="highlight"><pre><span></span><span class="err">#</span><span class="w"> </span><span class="n">lspci</span><span class="w"> </span><span class="o">|</span><span class="n">egrep</span><span class="w"> </span><span class="o">-</span><span class="n">i</span><span class="w"> </span><span class="s1">'ethernet|Serial'</span><span class="w"></span>
<span class="mi">0</span><span class="nl">c</span><span class="p">:</span><span class="mf">00.0</span><span class="w"> </span><span class="n">Serial</span><span class="w"> </span><span class="n">Attached</span><span class="w"> </span><span class="n">SCSI</span><span class="w"> </span><span class="nl">controller</span><span class="p">:</span><span class="w"> </span><span class="n">Broadcom</span><span class="w"> </span><span class="o">/</span><span class="w"> </span><span class="n">LSI</span><span class="w"> </span><span class="n">SAS2004</span><span class="w"> </span><span class="n">PCI</span><span class="o">-</span><span class="n">Express</span><span class="w"> </span><span class="k">Fusion</span><span class="o">-</span><span class="n">MPT</span><span class="w"> </span><span class="n">SAS</span><span class="o">-</span><span class="mi">2</span><span class="w"> </span><span class="o">[</span><span class="n">Spitfire</span><span class="o">]</span><span class="w"> </span><span class="p">(</span><span class="n">rev</span><span class="w"> </span><span class="mi">03</span><span class="p">)</span><span class="w"></span>
<span class="mi">16</span><span class="err">:</span><span class="mf">00.0</span><span class="w"> </span><span class="n">Ethernet</span><span class="w"> </span><span class="nl">controller</span><span class="p">:</span><span class="w"> </span><span class="n">Emulex</span><span class="w"> </span><span class="n">Corporation</span><span class="w"> </span><span class="n">OneConnect</span><span class="w"> </span><span class="mi">10</span><span class="n">Gb</span><span class="w"> </span><span class="n">NIC</span><span class="w"> </span><span class="p">(</span><span class="n">be3</span><span class="p">)</span><span class="w"> </span><span class="p">(</span><span class="n">rev</span><span class="w"> </span><span class="mi">03</span><span class="p">)</span><span class="w"></span>
<span class="err">#</span><span class="w"> </span><span class="n">lspci</span><span class="w"> </span><span class="o">-</span><span class="n">n</span><span class="w"> </span><span class="o">|</span><span class="n">egrep</span><span class="w"> </span><span class="s1">'0c:00.0|16:00.0'</span><span class="w"></span>
<span class="mi">0</span><span class="nl">c</span><span class="p">:</span><span class="mf">00.0</span><span class="w"> </span><span class="mi">0107</span><span class="err">:</span><span class="w"> </span><span class="mi">1000</span><span class="err">:</span><span class="mi">0070</span><span class="w"> </span><span class="p">(</span><span class="n">rev</span><span class="w"> </span><span class="mi">03</span><span class="p">)</span><span class="w"></span>
<span class="mi">16</span><span class="err">:</span><span class="mf">00.0</span><span class="w"> </span><span class="mi">0200</span><span class="err">:</span><span class="w"> </span><span class="mi">19</span><span class="nl">a2</span><span class="p">:</span><span class="mi">0710</span><span class="w"> </span><span class="p">(</span><span class="n">rev</span><span class="w"> </span><span class="mi">03</span><span class="p">)</span><span class="w"></span>
<span class="err">#</span><span class="w"> </span><span class="n">ethtool</span><span class="w"> </span><span class="o">-</span><span class="n">i</span><span class="w"> </span><span class="n">eth0</span><span class="o">|</span><span class="n">grep</span><span class="w"> </span><span class="n">driver</span><span class="w"></span>
<span class="nl">driver</span><span class="p">:</span><span class="w"> </span><span class="n">be2net</span><span class="w"></span>
<span class="err">#</span><span class="w"> </span><span class="n">modinfo</span><span class="w"> </span><span class="n">be2net</span><span class="o">|</span><span class="n">grep</span><span class="w"> </span><span class="mi">0710</span><span class="w"></span>
<span class="k">alias</span><span class="err">:</span><span class="w"> </span><span class="nl">pci</span><span class="p">:</span><span class="n">v000019A2d00000710sv</span><span class="o">*</span><span class="n">sd</span><span class="o">*</span><span class="n">bc</span><span class="o">*</span><span class="n">sc</span><span class="o">*</span><span class="n">i</span><span class="o">*</span><span class="w"></span>
<span class="err">#</span><span class="w"> </span><span class="n">lsmod</span><span class="o">|</span><span class="n">grep</span><span class="w"> </span><span class="n">sas</span><span class="w"></span>
<span class="n">mpt2sas</span><span class="w"> </span><span class="mi">249763</span><span class="w"> </span><span class="mi">2</span><span class="w"></span>
<span class="err">#</span><span class="w"> </span><span class="n">modinfo</span><span class="w"> </span><span class="n">mpt2sas</span><span class="o">|</span><span class="n">grep</span><span class="w"> </span><span class="mi">0070</span><span class="w"></span>
<span class="k">alias</span><span class="err">:</span><span class="w"> </span><span class="nl">pci</span><span class="p">:</span><span class="n">v00001000d00000070sv</span><span class="o">*</span><span class="n">sd</span><span class="o">*</span><span class="n">bc</span><span class="o">*</span><span class="n">sc</span><span class="o">*</span><span class="n">i</span><span class="o">*</span><span class="w"></span>
</pre></div>
<p>As you can see above, we looked up the kernel modules in use, and matched the PCI IDs against each module's aliases.
So we know which kmod to search for <em>and</em> which PCI ID that kmod is supposed to support. Let's verify this on CentOS 8, starting with the network module:</p>
<div class="highlight"><pre><span></span><span class="o">#</span> <span class="n">modinfo</span> <span class="n">be2net</span><span class="o">|</span><span class="n">grep</span> <span class="mi">0710</span> <span class="o">||</span> <span class="n">echo</span> <span class="ss">"Sorry, doesn't seem supported"</span>
<span class="n">Sorry</span><span class="p">,</span> <span class="n">doesn</span><span class="err">'</span><span class="n">t</span> <span class="n">seem</span> <span class="n">supported</span>
</pre></div>
<p>Ouch, be2net is present <em>but</em> doesn't support our PCI ID, so that device was deprecated ... we need a different build.
Now let's try the HBA:</p>
<div class="highlight"><pre><span></span><span class="o">#</span> <span class="n">modinfo</span> <span class="n">mpt2sas</span><span class="o">|</span><span class="n">grep</span> <span class="mi">0070</span> <span class="o">||</span> <span class="n">echo</span> <span class="ss">"Sorry, doesn't seem supported"</span>
<span class="n">Sorry</span><span class="p">,</span> <span class="n">doesn</span><span class="err">'</span><span class="n">t</span> <span class="n">seem</span> <span class="n">supported</span>
</pre></div>
<p>Empty, so also not supported. Luckily, Elrepo has both packaged as RPMs:</p>
<ul>
<li>http://elrepo.reloumirrors.net/elrepo/el8/x86_64/RPMS/kmod-mpt3sas-28.100.00.00-3.el8_2.elrepo.x86_64.rpm</li>
<li>http://elrepo.reloumirrors.net/elrepo/el8/x86_64/RPMS/kmod-be2net-12.0.0.0-5.el8_2.elrepo.x86_64.rpm</li>
</ul>
<p>Of course we could also use a <a href="https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/performing_an_advanced_rhel_installation/updating-drivers-during-installation_installing-rhel-as-an-experienced-user">DuD</a>, as Elrepo already provides such .iso images.</p>
<p>In our case though, we have to do things differently: we need two kmods, and we have no way to fetch them over the network (obviously, since we first need the NIC kernel module/driver) ...</p>
<p>So here was my idea :</p>
<ul>
<li>build a DuD .iso that has both kmods/kernel modules</li>
<li>inject that .iso <em>inside</em> the initrd.img (as we need the kernel modules loaded before we can reach the network for stage2, and obviously no way to grab the network driver over the network)</li>
</ul>
<p>Let's go back to the CentOS 7 node that needs to be reinstalled with CentOS 8:</p>
<div class="highlight"><pre><span></span><span class="o">#</span> <span class="n">yum</span> <span class="n">install</span> <span class="o">-</span><span class="n">y</span> <span class="n">genisoimage</span> <span class="n">createrepo_c</span>
<span class="o">#</span> <span class="n">cd</span> <span class="err">$</span><span class="p">(</span><span class="n">mktemp</span> <span class="o">-</span><span class="n">d</span><span class="p">)</span>
<span class="o">#</span> <span class="n">mkdir</span> <span class="o">-</span><span class="n">p</span> <span class="err">{</span><span class="p">.</span><span class="o">/</span><span class="n">dd</span><span class="o">/</span><span class="n">rpms</span><span class="o">/</span><span class="n">x86_64</span><span class="o">/</span><span class="p">,.</span><span class="o">/</span><span class="n">dd</span><span class="o">/</span><span class="n">src</span><span class="err">}</span>
<span class="o">#</span> <span class="n">echo</span> <span class="o">-</span><span class="n">e</span> <span class="ss">"Driver Update Disk version 3\c"</span> <span class="o">></span> <span class="p">.</span><span class="o">/</span><span class="n">dd</span><span class="o">/</span><span class="n">rhdd3</span>
<span class="o">#</span> <span class="n">pushd</span> <span class="n">dd</span><span class="o">/</span><span class="n">rpms</span><span class="o">/</span><span class="n">x86_64</span><span class="o">/</span>
<span class="o">#</span> <span class="n">wget</span> <span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">elrepo</span><span class="p">.</span><span class="n">reloumirrors</span><span class="p">.</span><span class="n">net</span><span class="o">/</span><span class="n">elrepo</span><span class="o">/</span><span class="n">el8</span><span class="o">/</span><span class="n">x86_64</span><span class="o">/</span><span class="n">RPMS</span><span class="o">/</span><span class="err">{</span><span class="n">kmod</span><span class="o">-</span><span class="n">mpt3sas</span><span class="o">-</span><span class="mi">28</span><span class="p">.</span><span class="mi">100</span><span class="p">.</span><span class="mi">00</span><span class="p">.</span><span class="mi">00</span><span class="o">-</span><span class="mi">3</span><span class="p">.</span><span class="n">el8_2</span><span class="p">.</span><span class="n">elrepo</span><span class="p">.</span><span class="n">x86_64</span><span class="p">.</span><span class="n">rpm</span><span class="p">,</span><span class="n">kmod</span><span class="o">-</span><span class="n">be2net</span><span class="o">-</span><span class="mi">12</span><span class="p">.</span><span class="mi">0</span><span class="p">.</span><span class="mi">0</span><span class="p">.</span><span class="mi">0</span><span class="o">-</span><span class="mi">5</span><span class="p">.</span><span class="n">el8_2</span><span class="p">.</span><span class="n">elrepo</span><span class="p">.</span><span class="n">x86_64</span><span class="p">.</span><span class="n">rpm</span><span class="err">}</span>
<span class="o">#</span> <span class="n">createrepo_c</span> <span class="p">.</span><span class="o">/</span>
<span class="o">#</span> <span class="n">popd</span>
<span class="o">#</span> <span class="n">pushd</span> <span class="n">dd</span><span class="o">/</span><span class="n">src</span>
<span class="o">#</span> <span class="n">wget</span> <span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">elrepo</span><span class="p">.</span><span class="n">reloumirrors</span><span class="p">.</span><span class="n">net</span><span class="o">/</span><span class="n">elrepo</span><span class="o">/</span><span class="n">el8</span><span class="o">/</span><span class="n">SRPMS</span><span class="o">/</span><span class="err">{</span><span class="n">kmod</span><span class="o">-</span><span class="n">be2net</span><span class="o">-</span><span class="mi">12</span><span class="p">.</span><span class="mi">0</span><span class="p">.</span><span class="mi">0</span><span class="p">.</span><span class="mi">0</span><span class="o">-</span><span class="mi">5</span><span class="p">.</span><span class="n">el8_2</span><span class="p">.</span><span class="n">elrepo</span><span class="p">.</span><span class="n">src</span><span class="p">.</span><span class="n">rpm</span><span class="p">,</span><span class="n">kmod</span><span class="o">-</span><span class="n">mpt3sas</span><span class="o">-</span><span class="mi">28</span><span class="p">.</span><span class="mi">100</span><span class="p">.</span><span class="mi">00</span><span class="p">.</span><span class="mi">00</span><span class="o">-</span><span class="mi">3</span><span class="p">.</span><span class="n">el8_2</span><span class="p">.</span><span class="n">elrepo</span><span class="p">.</span><span class="n">src</span><span class="p">.</span><span class="n">rpm</span><span class="err">}</span>
<span class="o">#</span> <span class="n">popd</span>
<span class="o">#</span> <span class="n">mkisofs</span> <span class="o">-</span><span class="n">quiet</span> <span class="o">-</span><span class="n">lR</span> <span class="o">-</span><span class="n">V</span> <span class="n">OEMDRV</span> <span class="o">-</span><span class="k">input</span><span class="o">-</span><span class="n">charset</span> <span class="n">utf8</span> <span class="o">-</span><span class="n">o</span> <span class="n">mpt3sas</span><span class="o">-</span><span class="n">be2net</span><span class="o">-</span><span class="n">kmod</span><span class="p">.</span><span class="n">iso</span> <span class="p">.</span><span class="o">/</span><span class="n">dd</span>
</pre></div>
<p>Now that we have <code>mpt3sas-be2net-kmod.iso</code>, we can use it with inst.dd= ... but as we have no network, anaconda needs to find it early in the process. So let's inject it into initrd.img (<a href="https://arrfab.net/posts/2015/May/06/hacking-initrdimg-for-fun-and-profit/">you can do the same with kickstart</a>)</p>
<p>Let's retrieve vmlinuz and initrd.img so the node can remotely kick off a CentOS 8 reinstall on itself (the node is currently running CentOS 7):</p>
<div class="highlight"><pre><span></span># pushd /boot
# mirror_url="http://mirror.centos.org/centos/8/"
# curl --location --fail <span class="cp">${</span><span class="n">mirror_url</span><span class="cp">}</span>/BaseOS/x86_64/os/images/pxeboot/initrd.img > initrd.img.install
# curl --location --fail <span class="cp">${</span><span class="n">mirror_url</span><span class="cp">}</span>/BaseOS/x86_64/os/images/pxeboot/vmlinuz > vmlinuz.install
# popd
# echo mpt3sas-be2net-kmod.iso |cpio -c -o >> /boot/initrd.img.install
2005 blocks
</pre></div>
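<p>Why a plain append works: an initrd.img is just a (compressed) cpio archive, and the kernel unpacks several concatenated cpio archives one after the other, so the iso simply shows up at the initramfs root. Here is the packing step in isolation (dummy payload, temporary directory):</p>

```shell
# Pack a single file as a cpio archive, exactly like the DuD injection above;
# appending (>>) the result to an existing initrd.img is what makes the iso
# visible at / once the kernel has unpacked the initramfs
cd "$(mktemp -d)"
echo "dummy payload" > mpt3sas-be2net-kmod.iso
echo mpt3sas-be2net-kmod.iso | cpio -c -o > dud.cpio
# list the archive back to confirm the iso is in there
cpio -t < dud.cpio
```
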
<p>Now that we have injected the .iso into initrd.img, we can reference it for the anaconda/install process as <code>/<name>.iso</code>.
Then let's just use kexec (as usual) to remotely launch the install, and also VNC to check that everything works: the network should respond and be configured, and then we'll be able to see the storage too.</p>
<div class="highlight"><pre><span></span><span class="o">#</span> <span class="n">pushd</span> <span class="o">/</span><span class="n">boot</span>
<span class="o">#</span> <span class="n">yum</span> <span class="n">install</span> <span class="o">-</span><span class="n">y</span> <span class="n">kexec</span><span class="o">-</span><span class="n">tools</span>
<span class="o">#</span> <span class="n">kexec</span> <span class="o">-</span><span class="n">l</span> <span class="n">vmlinuz</span><span class="p">.</span><span class="n">install</span> <span class="c1">--append="net.ifnames=0 biosdevname=0 ksdevice=eth2 inst.repo=http://mirror.centos.org/centos/8/BaseOS/x86_64/os/ inst.lang=en_GB inst.keymap=be-latin1 inst.dd=/mpt3sas-be2net-kmod.iso inst.vnc inst.vncpassword=DuDTest ip=172.22.0.16 netmask=255.255.254.0 gateway=172.22.1.254 nameserver=172.22.0.1 hostname=test.ci.centos.org pcie_aspm=off" --initrd=initrd.img.install && kexec -e</span>
</pre></div>
<p>From that point it is as described in the previously linked post about using kexec to kick a reinstall: the kernel boots, loads initrd.img (but this time we see the DuD iso image being loaded) and then starts anaconda as usual. From there we can connect over VNC to finish the install (we have the network and HBA kernel modules loaded, so we are able to configure the hardware).</p>
<p>Once the machine is installed and rebooted, we can just ssh into it and clearly see that both kmod RPMs were installed fine (otherwise there would be no network nor storage, and of course no install :) )</p>
<div class="highlight"><pre><span></span><span class="o">#</span> <span class="n">modinfo</span> <span class="n">mpt3sas</span><span class="o">|</span><span class="n">egrep</span> <span class="s1">'filename|signer'</span>
<span class="n">filename</span><span class="p">:</span> <span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">modules</span><span class="o">/</span><span class="mi">4</span><span class="p">.</span><span class="mi">18</span><span class="p">.</span><span class="mi">0</span><span class="o">-</span><span class="mi">193</span><span class="p">.</span><span class="mi">14</span><span class="p">.</span><span class="mi">2</span><span class="p">.</span><span class="n">el8_2</span><span class="p">.</span><span class="n">x86_64</span><span class="o">/</span><span class="n">weak</span><span class="o">-</span><span class="n">updates</span><span class="o">/</span><span class="n">mpt3sas</span><span class="o">/</span><span class="n">mpt3sas</span><span class="p">.</span><span class="n">ko</span>
<span class="n">signer</span><span class="p">:</span> <span class="n">ELRepo</span><span class="p">.</span><span class="n">org</span> <span class="n">Secure</span> <span class="n">Boot</span> <span class="k">Key</span>
<span class="o">#</span> <span class="n">modinfo</span> <span class="n">be2net</span><span class="o">|</span><span class="n">egrep</span> <span class="s1">'filename|signer'</span>
<span class="n">filename</span><span class="p">:</span> <span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">modules</span><span class="o">/</span><span class="mi">4</span><span class="p">.</span><span class="mi">18</span><span class="p">.</span><span class="mi">0</span><span class="o">-</span><span class="mi">193</span><span class="p">.</span><span class="mi">14</span><span class="p">.</span><span class="mi">2</span><span class="p">.</span><span class="n">el8_2</span><span class="p">.</span><span class="n">x86_64</span><span class="o">/</span><span class="n">weak</span><span class="o">-</span><span class="n">updates</span><span class="o">/</span><span class="n">be2net</span><span class="o">/</span><span class="n">be2net</span><span class="p">.</span><span class="n">ko</span>
<span class="n">signer</span><span class="p">:</span> <span class="n">ELRepo</span><span class="p">.</span><span class="n">org</span> <span class="n">Secure</span> <span class="n">Boot</span> <span class="k">Key</span>
<span class="o">#</span> <span class="n">rpm</span> <span class="o">-</span><span class="n">qa</span><span class="o">|</span><span class="n">grep</span> <span class="n">elrepo</span>
<span class="n">kmod</span><span class="o">-</span><span class="n">be2net</span><span class="o">-</span><span class="mi">12</span><span class="p">.</span><span class="mi">0</span><span class="p">.</span><span class="mi">0</span><span class="p">.</span><span class="mi">0</span><span class="o">-</span><span class="mi">5</span><span class="p">.</span><span class="n">el8_2</span><span class="p">.</span><span class="n">elrepo</span><span class="p">.</span><span class="n">x86_64</span>
<span class="n">kmod</span><span class="o">-</span><span class="n">mpt3sas</span><span class="o">-</span><span class="mi">28</span><span class="p">.</span><span class="mi">100</span><span class="p">.</span><span class="mi">00</span><span class="p">.</span><span class="mi">00</span><span class="o">-</span><span class="mi">3</span><span class="p">.</span><span class="n">el8_2</span><span class="p">.</span><span class="n">elrepo</span><span class="p">.</span><span class="n">x86_64</span>
<span class="o">#</span> <span class="n">yum</span> <span class="n">install</span> <span class="o">-</span><span class="n">y</span> <span class="n">elrepo</span><span class="o">-</span><span class="n">release</span>
</pre></div>
<p>Of course, as shown above, don't forget to also install the elrepo-release pkg, so that you can then get newer kmods when needed, e.g. in case of a rebase between major.minor releases.</p>
<p>Hope you found that useful, in case you need to reinstall working hardware whose drivers were deprecated in the CentOS 8 kernel.</p>Deploying OpenShift 4 on bare-metal and disabling dhcp2020-05-20T00:00:00+02:002020-05-20T00:00:00+02:00Fabian Arrotintag:arrfab.net,2020-05-20:/posts/2020/May/20/deploying-openshift-4-on-bare-metal-and-disabling-dhcp/<p>Recently I had to work with one of my colleagues (David) on something that was new to me: OpenShift.
I had never really looked at OpenShift, but I knew the basic concepts, at least for OKD 3.x.</p>
<p>With 4.x, OCP is completely different: instead of deploying a "normal" Linux distro (like CentOS in our case), it now uses RHCOS (so CoreOS) as its foundation.
The goal of this blog post is not to dive into all the technical steps required to deploy/bootstrap the openshift cluster, but to discuss one particular 'issue' that I found annoying while deploying: how to disable dhcp on the CoreOS provisioned nodes.</p>
<p>To cut a long story short, you can read the basic steps needed to deploy Openshift on bare-metal in the <a href="https://docs.openshift.com/container-platform/4.4/installing/installing_bare_metal/installing-bare-metal.html">official doc</a></p>
<p>Have you read it ? Good, now we can move forward :)</p>
<p>After we had configured our install-config.yaml (with our needed values) and also generated the manifests with <code>openshift-install create manifests --dir=/path/</code>, we thought it would then just be a matter of deploying with the ignition files built by the <code>openshift-install create ignition-configs --dir=/path</code> step (see the above doc for all details)</p>
<p>It's true that we ended up with some …</p><p>Recently I had to work with one of my colleagues (David) on something that was new to me: OpenShift.
I had never really looked at OpenShift, but I knew the basic concepts, at least for OKD 3.x.</p>
<p>With 4.x, OCP is completely different: instead of deploying a "normal" Linux distro (like CentOS in our case), it now uses RHCOS (so CoreOS) as its foundation.
The goal of this blog post is not to dive into all the technical steps required to deploy/bootstrap the openshift cluster, but to discuss one particular 'issue' that I found annoying while deploying: how to disable dhcp on the CoreOS provisioned nodes.</p>
<p>To cut a long story short, you can read the basic steps needed to deploy Openshift on bare-metal in the <a href="https://docs.openshift.com/container-platform/4.4/installing/installing_bare_metal/installing-bare-metal.html">official doc</a></p>
<p>Have you read it ? Good, now we can move forward :)</p>
<p>After we had configured our install-config.yaml (with our needed values) and also generated the manifests with <code>openshift-install create manifests --dir=/path/</code>, we thought it would then just be a matter of deploying with the ignition files built by the <code>openshift-install create ignition-configs --dir=/path</code> step (see the above doc for all details)</p>
<p>It's true that we ended up with some ignition files like:</p>
<ul>
<li>bootstrap.ign</li>
<li>worker.ign</li>
<li>master.ign</li>
</ul>
<p>Those ignition files are (more or less) like traditional kickstart files that let you automate the RHCOS deployment on bare-metal.
The other part is really easy, as it's just a matter (with Ansible in our case) of configuring the tftp boot arguments and calling an ad-hoc task to remotely force a physical reinstall of the machine (through IPMI):</p>
<ul>
<li><a href="https://github.com/CentOS/ansible-infra-playbooks/blob/master/adhoc-provision-ocp4-node.yml">ansible ad-hoc task</a></li>
<li><a href="https://github.com/CentOS/ansible-infra-playbooks/blob/master/templates/ocp_pxeboot.j2">tftp pxeboot template</a></li>
</ul>
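<p>For illustration only, that "force a reinstall through IPMI" step could be sketched like this (hypothetical BMC hostname/credentials, and a dry-run wrapper so the commands are only printed; the real logic lives in the linked ad-hoc playbook):</p>
<div class="highlight"><pre># Hypothetical sketch: set next boot to PXE and power-cycle a node via IPMI.
# BMC host/user/password below are placeholders, not values from the playbook.
reinstall_node() {
  local bmc_host="$1" bmc_user="$2" bmc_pass="$3" run="${4:-echo}"
  # next boot only: PXE, so the node reinstalls from the tftp-served config
  $run ipmitool -I lanplus -H "$bmc_host" -U "$bmc_user" -P "$bmc_pass" chassis bootdev pxe
  # then force a power cycle so the node actually reboots into PXE
  $run ipmitool -I lanplus -H "$bmc_host" -U "$bmc_user" -P "$bmc_pass" power cycle
}

# dry-run (default): just prints the two ipmitool commands
reinstall_node bmc01.example.com admin secret
</pre></div>
<p>Drop the <code>echo</code> wrapper (pass a fourth argument of <code>""</code> replaced by nothing, or call ipmitool directly) only once you're sure about the target node.</p>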
<p>So we first kicked off the bootstrap node (an ephemeral node used as a temporary master, from which the real masters forming the etcd cluster will get their initial config), but then we realized that, while RHCOS was installed and responding on the fixed IP we set through pxeboot kernel parameters (and correctly applied after the reboot), <em>each</em> RHCOS node was also trying by default to activate <em>all</em> present NICs on the machine.</p>
<p>That suddenly became "interesting", as we don't fully control the network where those machines live, and each physical node has 4 NICs, all in the same vlan, in which we also have a small dhcp range for other deployments.
Do you see the problem with etcd members in the same subnet having multiple IP addresses ? Yeah, it wasn't working, as we saw some requests coming from the dhcp-configured interfaces instead of the first, properly configured NIC on each system.</p>
<p>The "good" thing is that you can still ssh into each deployed RHCOS node (even if that's not advised) to troubleshoot this.
We discovered that RHCOS still uses NetworkManager, and that its default behaviour, if nothing else is declared, is to enable all NICs with DHCP, which is exactly what we needed to disable.</p>
<p>After some research and help from Colin Walters, we were pointed to this <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1800900">bug report for coreos</a>.</p>
<p>With the traditional "CentOS Linux" sysadmin mindset, I thought: "good, we can just automate with Ansible, ssh'ing into each provisioned RHCOS node to disable it". But there had to be a cleverer way to deal with this, as the behaviour was also impacting our initial bootstrap and master nodes (so no way to get the cluster up).</p>
<p>That's when we found the way to customize the deployment with Day-0 config; here is a simple example for <a href="https://access.redhat.com/documentation/en-us/openshift_container_platform/4.3/html-single/installing/index#installation-special-config-crony_installing-customizing">Chrony</a>.</p>
<p>That's how I understood the concept of <a href="https://github.com/openshift/machine-config-operator/blob/master/docs/MachineConfiguration.md">MachineConfig</a> and how it's supposed to work, both for a provisioned cluster and for the bootstrap process. So let's use that information to create what we need and start a fresh deploy.</p>
<p>Assuming that we want to create our manifest in <path> : </p>
<div class="highlight"><pre><span></span><span class="n">openshift</span><span class="o">-</span><span class="n">install</span> <span class="k">create</span> <span class="n">manifests</span> <span class="c1">--dir=/<path>/</span>
</pre></div>
<p>And now that we have manifests, let's inject our machine configs.
You'll see that because it's YAML all over the place, injecting YAML into YAML would be "interesting", so the concept here is to inject the file content as a base64-encoded string, everywhere.</p>
<p>Let's suppose that we want the /etc/NetworkManager/conf.d/disabledhcp.conf file to have this content on each provisioned node (master and worker), to tell NetworkManager not to default to auto/dhcp:</p>
<div class="highlight"><pre><span></span><span class="k">[main]</span>
<span class="na">no-auto-default</span><span class="o">=</span><span class="s">*</span>
</pre></div>
<p>Let's first encode it to base64:</p>
<div class="highlight"><pre><span></span>
<span class="n">cat</span><span class="w"> </span><span class="o"><<</span><span class="w"> </span><span class="n">EOF</span><span class="w"> </span><span class="o">|</span><span class="w"> </span><span class="n">base64</span><span class="w"></span>
<span class="o">[</span><span class="n">main</span><span class="o">]</span><span class="w"></span>
<span class="k">no</span><span class="o">-</span><span class="n">auto</span><span class="o">-</span><span class="k">default</span><span class="o">=*</span><span class="w"></span>
<span class="n">EOF</span><span class="w"></span>
</pre></div>
<p>Our base64 value is <code>W21haW5dCm5vLWF1dG8tZGVmYXVsdD0qCg==</code></p>
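<p>You can double-check that value anywhere by decoding it back:</p>
<div class="highlight"><pre># decode the payload again to confirm it matches the intended config content
echo 'W21haW5dCm5vLWF1dG8tZGVmYXVsdD0qCg==' | base64 -d
# prints:
# [main]
# no-auto-default=*
</pre></div>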
<p>So now that we have the content, let's create manifests that will automatically create that file at provisioning time:</p>
<div class="highlight"><pre><span></span>pushd &lt;path&gt;
# To ensure that provisioned masters will try to become masters as soon as they are installed
sed -i 's/mastersSchedulable: true/mastersSchedulable: false/g' manifests/cluster-scheduler-02-config.yml
pushd openshift
for variant in master worker ; do
cat &lt;&lt; EOF &gt; ./99_openshift-machineconfig_99-${variant}-nm-nodhcp.yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: ${variant}
  name: nm-${variant}-nodhcp
spec:
  config:
    ignition:
      config: {}
      security:
        tls: {}
      timeouts: {}
      version: 2.2.0
    networkd: {}
    passwd: {}
    storage:
      files:
      - contents:
          source: data:text/plain;charset=utf-8;base64,W21haW5dCm5vLWF1dG8tZGVmYXVsdD0qCg==
          verification: {}
        filesystem: root
        mode: 0644
        path: /etc/NetworkManager/conf.d/disabledhcp.conf
  osImageURL: ""
EOF
done
popd
popd
</pre></div>
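<p>If you want to double-check what will land on the nodes before deploying, you can extract the base64 payload from the generated manifests and decode it (assuming you're still in the directory where the <code>openshift/</code> manifests were created):</p>
<div class="highlight"><pre># decode the ignition data URL embedded in each generated MachineConfig
for f in openshift/99_openshift-machineconfig_99-*-nm-nodhcp.yaml; do
  test -e "$f" || continue   # skip if nothing was generated yet
  echo "== $f =="
  grep 'base64,' "$f" | sed 's/^.*base64,//' | base64 -d
done
</pre></div>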
<p>I think this snippet is pretty straightforward, and you can see in the <code>source</code> field how we "inject" the content of the file itself (the base64 value we got in the previous step).</p>
<p>Now that we have added our customizations, we can just proceed with the <code>openshift-install create ignition-configs --dir=/<path></code> command again, retrieve our .ign files, and call Ansible again to redeploy the nodes. This time they were deployed correctly, with only the IP coming from the Ansible inventory and no other NIC on dhcp.</p>
<p>Now that it works, deploying/adding more worker nodes to the OCP cluster is just a matter of calling Ansible, and physical nodes are deployed in ~5 minutes (as RHCOS just extracts its own archive on disk and reboots).</p>
<p>I don't know if I'll have to take multiple deep dives into OpenShift in the future, but at least I learned multiple things. And yes: you <em>always</em> learn more when you have to deploy something for the first time and it <em>doesn't</em> work straight away .. so while you try to learn the basics from the official doc, you also have to find other resources/docs elsewhere :-)</p>
<p>Hope that it can help people in the same situation when having to deploy OpenShift on premises/bare-metal.</p>Fixing heat/fan issue on Thinkpad t490s running CentOS 8/Stream2019-10-29T00:00:00+01:002019-10-29T00:00:00+01:00Fabian Arrotintag:arrfab.net,2019-10-29:/posts/2019/Oct/29/fixing-heatfan-issue-on-thinkpad-t490s-running-centos-8stream/<p>It's usually always a good thing to receive a newer laptop, as usually that means shiny new hardware, better performances and also better battery life. I was really pleased with previous Lenovo <a href="https://www.lenovo.com/us/en/laptops/thinkpad/thinkpad-t-series/ThinkPad-T460s/p/22TP2TT460S">Thinkpad t460s</a> and so the normal choice was its successor, also because default model following company standard, and so the <a href="https://www.lenovo.com/us/en/laptops/thinkpad/thinkpad-t-series/ThinkPad-T490s/p/22TP2TT490S">t490s</a></p>
<p>When I received the laptop, I was a little bit surprised (I had no real time to review/analyze it in advance) by some choices:</p>
<ul>
<li>No SD card reader anymore (useful when having to "dd" some image for armhfp tests)</li>
<li>Old docking style is gone and you have to connect through <a href="https://www.lenovo.com/us/en/accessories-and-monitors/docking/thinkpad-thunderbolt-3-dock">usb-c/thunderbolt</a></li>
<li>Embedded gigabit ethernet in the t490s (Intel Corporation Ethernet Connection (6) I219-LM (rev 30)) isn't used <em>at all</em> when docked; traffic goes through a usb-net device instead</li>
</ul>
<p>Installing CentOS Stream (so running kernel 4.18.0-147.6.el8.x86_64 when writing this post) was a breeze, after I turned on SecureBoot (useful because you can then use fwupd to get <a href="https://fwupd.org/">LVFS</a> firmware updates automagically, as I did for my t460s).</p>
<p>But quickly I realized a <em>huge</em> difference between my previous t460s and the new t490s: heat/temperature and so fan usage.
To the point where it was really <em>impossible</em> to even use our official <a href="http://bluejeans.com">video-conferencing solution</a>: fan going crazy, laptop unresponsive (load average climbing to ~16), and video/sound completely "off-sync".</p>
<p>Dmesg was also full of warnings like these:</p>
<div class="highlight"><pre><span></span><span class="p">[</span><span class="mi">248849</span><span class="p">.</span><span class="mi">131909</span><span class="p">]</span> <span class="n">CPU1</span><span class="p">:</span> <span class="n">Core</span> <span class="n">temperature</span><span class="o">/</span><span class="n">speed</span> <span class="n">normal</span>
<span class="p">[</span><span class="mi">248894</span><span class="p">.</span><span class="mi">211874</span><span class="p">]</span> <span class="n">CPU1</span><span class="p">:</span> <span class="n">Package</span> <span class="n">temperature</span> <span class="n">above</span> <span class="n">threshold</span><span class="p">,</span> <span class="n">cpu</span> <span class="n">clock</span> <span class="n">throttled</span> <span class="p">(</span><span class="n">total</span> <span class="n">events</span> <span class="o">=</span> <span class="mi">1221232</span><span class="p">)</span>
<span class="p">[</span><span class="mi">248894</span><span class="p">.</span><span class="mi">211897</span><span class="p">]</span> <span class="n">CPU5</span><span class="p">:</span> <span class="n">Package</span> <span class="n">temperature</span> <span class="n">above</span> <span class="n">threshold</span><span class="p">,</span> <span class="n">cpu</span> <span class="n">clock</span> <span class="n">throttled</span> <span class="p">(</span><span class="n">total</span> <span class="n">events</span> <span class="o">=</span> <span class="mi">1221232</span><span class="p">)</span>
<span class="p">[</span><span class="mi">248894</span><span class="p">.</span><span class="mi">211902</span><span class="p">]</span> <span class="n">CPU3</span><span class="p">:</span> <span class="n">Package</span> <span class="n">temperature</span> <span class="n">above</span> <span class="n">threshold</span><span class="p">,</span> <span class="n">cpu</span> <span class="n">clock</span> <span class="n">throttled</span> <span class="p">(</span><span class="n">total</span> <span class="n">events</span> <span class="o">=</span> <span class="mi">1221233</span><span class="p">)</span>
<span class="p">[</span><span class="mi">248894</span><span class="p">.</span><span class="mi">211903</span><span class="p">]</span> <span class="n">CPU0</span><span class="p">:</span> <span class="n">Package</span> <span class="n">temperature</span> <span class="n">above</span> <span class="n">threshold</span><span class="p">,</span> <span class="n">cpu</span> <span class="n">clock</span> <span class="n">throttled</span> <span class="p">(</span><span class="n">total</span> <span class="n">events</span> <span class="o">=</span> <span class="mi">1221233</span><span class="p">)</span>
<span class="p">[</span><span class="mi">248894</span><span class="p">.</span><span class="mi">211903</span><span class="p">]</span> <span class="n">CPU6</span><span class="p">:</span> <span class="n">Package</span> <span class="n">temperature</span> <span class="n">above</span> <span class="n">threshold</span><span class="p">,</span> <span class="n">cpu</span> <span class="n">clock</span> <span class="n">throttled</span> <span class="p">(</span><span class="n">total</span> <span class="n">events</span> <span class="o">=</span> <span class="mi">1221233</span><span class="p">)</span>
<span class="p">[</span><span class="mi">248894</span><span class="p">.</span><span class="mi">211904</span><span class="p">]</span> <span class="n">CPU4</span><span class="p">:</span> <span class="n">Package</span> <span class="n">temperature</span> <span class="n">above</span> <span class="n">threshold</span><span class="p">,</span> <span class="n">cpu</span> <span class="n">clock</span> <span class="n">throttled</span> <span class="p">(</span><span class="n">total</span> <span class="n">events</span> <span class="o">=</span> <span class="mi">1221233</span><span class="p">)</span>
<span class="p">[</span><span class="mi">248894</span><span class="p">.</span><span class="mi">211905</span><span class="p">]</span> <span class="n">CPU2</span><span class="p">:</span> <span class="n">Package</span> <span class="n">temperature</span> <span class="n">above</span> <span class="n">threshold</span><span class="p">,</span> <span class="n">cpu</span> <span class="n">clock</span> <span class="n">throttled</span> <span class="p">(</span><span class="n">total</span> <span class="n">events</span> <span class="o">=</span> <span class="mi">1221233</span><span class="p">)</span>
<span class="p">[</span><span class="mi">248894</span><span class="p">.</span><span class="mi">211905</span><span class="p">]</span> <span class="n">CPU7</span><span class="p">:</span> <span class="n">Package</span> <span class="n">temperature</span> <span class="n">above</span> <span class="n">threshold</span><span class="p">,</span> <span class="n">cpu</span> <span class="n">clock</span> <span class="n">throttled</span> <span class="p">(</span><span class="n">total</span> <span class="n">events</span> <span class="o">=</span> <span class="mi">1221233</span><span class="p">)</span>
<span class="p">[</span><span class="mi">248894</span><span class="p">.</span><span class="mi">212895</span><span class="p">]</span> <span class="n">CPU1</span><span class="p">:</span> <span class="n">Package</span> <span class="n">temperature</span><span class="o">/</span><span class="n">speed</span> <span class="n">normal</span>
<span class="p">[</span><span class="mi">248894</span><span class="p">.</span><span class="mi">212895</span><span class="p">]</span> <span class="n">CPU5</span><span class="p">:</span> <span class="n">Package</span> <span class="n">temperature</span><span class="o">/</span><span class="n">speed</span> <span class="n">normal</span>
<span class="p">[</span><span class="mi">248894</span><span class="p">.</span><span class="mi">212908</span><span class="p">]</span> <span class="n">CPU4</span><span class="p">:</span> <span class="n">Package</span> <span class="n">temperature</span><span class="o">/</span><span class="n">speed</span> <span class="n">normal</span>
</pre></div>
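<p>As a side note, a quick way to see how often each CPU complained is to count those dmesg lines per CPU; a minimal sketch (fed here from a two-line sample, on the laptop you'd pipe <code>dmesg</code> into it):</p>
<div class="highlight"><pre># count "Package temperature above threshold" events per CPU from stdin
count_throttle() {
  grep 'Package temperature above threshold' | awk '{print $2}' | sort | uniq -c
}

printf '%s\n' \
  '[248894.211874] CPU1: Package temperature above threshold, cpu clock throttled (total events = 1221232)' \
  '[248894.211897] CPU5: Package temperature above threshold, cpu clock throttled (total events = 1221232)' \
  | count_throttle
</pre></div>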
<p>After some quick research, I found links about known issues on (recent) Lenovo Thinkpads, and possible solutions explaining the issue[s]:</p>
<ul>
<li><a href="https://www.notebookcheck.net/Lenovo-admits-ThinkPad-CPU-throttling-problem-when-running-Linux-fix-in-development.435549.0.html">Lenovo admitting CPU Throttling under Linux</a></li>
<li><a href="https://forums.lenovo.com/t5/Other-Linux-Discussions/X1C6-T480s-low-cTDP-and-trip-temperature-in-Linux/td-p/4028489/highlight/true/page/16">Lenovo forum thread about this</a> (read the whole thread) </li>
</ul>
<p>Nice, or not so nice (still waiting for Lenovo to fix this through a FW update for the t490s when writing this blog post). I quickly rebuilt a community-<a href="https://github.com/erpalma/throttled">proposed fix</a>, and the rpm is available in my <a href="https://copr.fedorainfracloud.org/coprs/arrfab/not-in-epel8/package/throttled/">Copr repository</a>.</p>
<p>But, as <a href="https://github.com/erpalma/throttled#writing-to-msr-and-pci-bar">stated on said github repo</a>, it doesn't work with SecureBoot, so I temporarily disabled it to test said fix. It wasn't magical either, so I decided to re-enable SecureBoot and be back in "normal" mode.</p>
<p>Then I found another interesting forum thread about <a href="https://forums.lenovo.com/t5/Other-Linux-Discussions/T480-CPU-temperature-and-fan-speed-under-linux/m-p/4114832">t480 and fan/heat issue</a>, so I decided to have a look.</p>
<p>Indeed: 'Thunderbolt BIOS Assist Mode' was disabled in my case too (wondering why it came with that disabled, while the laptop <em>was</em> shipping with RHEL8 installed and pre-loaded): let's enable it and see how that goes:</p>
<p><img alt="T490s settings" src="/images/t490s-bios.png"></p>
<p>OMG! What a difference. Instead of having a terminal open with "watch sensors" running, I wanted to have a quick look directly from gnome, so I just installed the gnome-shell-extension-system-monitor-applet (available <em>now</em> in epel8-testing), and so far so good:</p>
<p>When running a normal workload (while connected to the dock and two external displays), it runs like this:</p>
<p><img alt="temperature" src="/images/normal-temp.png"></p>
<p>And yesterday I was happy (ultimate test) to be in a video conf-call for more than one hour, with no video/sound issue; the temperature just climbed a little bit, but nothing unusual for such a video call:</p>
<p><img alt="temperature" src="/images/temperature-monitor.png"></p>
<p>Hope it helps, and not only if you run Linux on a t490s, but on any recent Lenovo Thinkpad (or even Yoga, it seems) model. Now still waiting on Lenovo to release a firmware fix for the throttling issue, but at least the laptop is currently usable :)</p>Renew/Extend Puppet CA/puppetmasterd certs2019-04-29T00:00:00+02:002019-04-29T00:00:00+02:00Fabian Arrotintag:arrfab.net,2019-04-29:/posts/2019/Apr/29/renewextend-puppet-capuppetmasterd-certs/<h1>Puppet CA/puppetmasterd cert renewal</h1>
<p>While we're still converting our puppet-controlled infra to Ansible, we still have some nodes "controlled" by puppet, as converting some roles isn't something that can be done in just one or two days.
Add to that other items in your backlog that all have priority set to #1, and time flies, until you realize this for your existing legacy puppet environment (assuming a fake FQDN here, but you'll get the idea):</p>
<div class="highlight"><pre><span></span><span class="n">Warning</span><span class="o">:</span> <span class="n">Certificate</span> <span class="s1">'Puppet CA: puppetmasterd.domain.com'</span> <span class="n">will</span> <span class="n">expire</span> <span class="n">on</span> <span class="mi">2019</span><span class="o">-</span><span class="mi">05</span><span class="o">-</span><span class="mi">06</span><span class="n">T12</span><span class="o">:</span><span class="mi">12</span><span class="o">:</span><span class="mi">56</span><span class="n">UTC</span>
<span class="n">Warning</span><span class="o">:</span> <span class="n">Certificate</span> <span class="s1">'puppetmasterd.domain.com'</span> <span class="n">will</span> <span class="n">expire</span> <span class="n">on</span> <span class="mi">2019</span><span class="o">-</span><span class="mi">05</span><span class="o">-</span><span class="mi">06</span><span class="n">T12</span><span class="o">:</span><span class="mi">12</span><span class="o">:</span><span class="mi">56</span><span class="n">UTC</span>
</pre></div>
<p>So, as long as your PKI setup for puppet is still valid, you can act in advance: re-sign/extend the CA and puppetmasterd certs, distribute the newer CA cert to agents, and go forward with other items in your backlog, while still converting from puppet to Ansible (at least for us).</p>
<h2>Puppetmasterd/CA</h2>
<p>Before anything else, let's take a backup on the Puppet CA host (in case you don't back this up, but you should; in our case it's a <a href="https://www.theforeman.org/">Foreman</a>-driven puppetmasterd, so the foreman host is where all this will happen, YMMV):</p>
<div class="highlight"><pre><span></span><span class="n">tar</span> <span class="n">cvzf</span> <span class="o">/</span><span class="n">root</span><span class="o">/</span><span class="n">puppet</span><span class="o">-</span><span class="n">ssl</span><span class="o">-</span><span class="n">backup</span><span class="p">.</span><span class="n">tar</span><span class="p">.</span><span class="n">gz</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">puppet</span><span class="o">/</span><span class="n">ssl</span><span class="o">/</span>
</pre></div>
<h3>CA itself</h3>
<p>We first need to regenerate the CSR for the CA cert, and sign it again.
Ideally we first confirm that ca_key.pem and the existing ca_crt.pem "match", by comparing their modulus (the digests should be equal):</p>
<div class="highlight"><pre><span></span><span class="n">cd</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">puppet</span><span class="o">/</span><span class="n">ssl</span><span class="o">/</span><span class="n">ca</span>
<span class="p">(</span> <span class="n">openssl</span> <span class="n">rsa</span> <span class="o">-</span><span class="n">noout</span> <span class="o">-</span><span class="n">modulus</span> <span class="o">-</span><span class="k">in</span> <span class="n">ca_key</span><span class="p">.</span><span class="n">pem</span> <span class="mi">2</span><span class="o">></span> <span class="o">/</span><span class="n">dev</span><span class="o">/</span><span class="k">null</span> <span class="o">|</span> <span class="n">openssl</span> <span class="n">md5</span> <span class="p">;</span> <span class="n">openssl</span> <span class="n">x509</span> <span class="o">-</span><span class="n">noout</span> <span class="o">-</span><span class="n">modulus</span> <span class="o">-</span><span class="k">in</span> <span class="n">ca_crt</span><span class="p">.</span><span class="n">pem</span> <span class="mi">2</span><span class="o">></span> <span class="o">/</span><span class="n">dev</span><span class="o">/</span><span class="k">null</span> <span class="o">|</span> <span class="n">openssl</span> <span class="n">md5</span> <span class="p">)</span>
<span class="p">(</span><span class="k">stdin</span><span class="p">)</span><span class="o">=</span> <span class="n">cbc4d35f58b28ad7c4dca17bd4408403</span>
<span class="p">(</span><span class="k">stdin</span><span class="p">)</span><span class="o">=</span> <span class="n">cbc4d35f58b28ad7c4dca17bd4408403</span>
</pre></div>
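<p>If you have to repeat that check (for several CAs, or later for the puppetmasterd cert itself), the same comparison can be wrapped in a tiny helper; a sketch, not part of the original procedure:</p>
<div class="highlight"><pre># return success only when the private key and certificate share the same modulus
match_modulus() {
  local key_md5 crt_md5
  key_md5=$(openssl rsa  -noout -modulus -in "$1" | openssl md5)
  crt_md5=$(openssl x509 -noout -modulus -in "$2" | openssl md5)
  [ "$key_md5" = "$crt_md5" ]
}

# usage (on the puppet CA host):
#   if match_modulus ca_key.pem ca_crt.pem; then echo "key and certificate match"; fi
</pre></div>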
<p>As that's the case, we can now regenerate a CSR from that private key and the existing crt:</p>
<div class="highlight"><pre><span></span><span class="n">openssl</span> <span class="n">x509</span> <span class="o">-</span><span class="n">x509toreq</span> <span class="o">-</span><span class="k">in</span> <span class="n">ca_crt</span><span class="p">.</span><span class="n">pem</span> <span class="o">-</span><span class="n">signkey</span> <span class="n">ca_key</span><span class="p">.</span><span class="n">pem</span> <span class="o">-</span><span class="k">out</span> <span class="n">ca_csr</span><span class="p">.</span><span class="n">pem</span>
<span class="n">Getting</span> <span class="n">request</span> <span class="n">Private</span> <span class="k">Key</span>
<span class="n">Generating</span> <span class="n">certificate</span> <span class="n">request</span>
</pre></div>
<p>Now that we have the CSR for the CA, we need to sign it again, but we have to add the CA extensions:</p>
<div class="highlight"><pre><span></span><span class="n">cat</span><span class="w"> </span><span class="o">></span><span class="w"> </span><span class="n">extension</span><span class="p">.</span><span class="n">cnf</span><span class="w"> </span><span class="o"><<</span><span class="w"> </span><span class="n">EOF</span><span class="w"></span>
<span class="o">[</span><span class="n">CA_extensions</span><span class="o">]</span><span class="w"></span>
<span class="n">basicConstraints</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">critical</span><span class="p">,</span><span class="nl">CA</span><span class="p">:</span><span class="k">TRUE</span><span class="w"></span>
<span class="n">nsComment</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="ss">"Puppet Ruby/OpenSSL Internal Certificate"</span><span class="w"></span>
<span class="n">keyUsage</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">critical</span><span class="p">,</span><span class="n">keyCertSign</span><span class="p">,</span><span class="n">cRLSign</span><span class="w"></span>
<span class="n">subjectKeyIdentifier</span><span class="w"> </span><span class="o">=</span><span class="w"> </span><span class="n">hash</span><span class="w"></span>
<span class="n">EOF</span><span class="w"></span>
</pre></div>
<p>And now let's archive the old CA certificate and sign the new, extended one</p>
<div class="highlight"><pre><span></span><span class="n">cp</span> <span class="n">ca_crt</span><span class="p">.</span><span class="n">pem</span> <span class="n">ca_crt</span><span class="p">.</span><span class="n">pem</span><span class="p">.</span><span class="k">old</span>
<span class="n">openssl</span> <span class="n">x509</span> <span class="o">-</span><span class="n">req</span> <span class="o">-</span><span class="n">days</span> <span class="mi">3650</span> <span class="o">-</span><span class="k">in</span> <span class="n">ca_csr</span><span class="p">.</span><span class="n">pem</span> <span class="o">-</span><span class="n">signkey</span> <span class="n">ca_key</span><span class="p">.</span><span class="n">pem</span> <span class="o">-</span><span class="k">out</span> <span class="n">ca_crt</span><span class="p">.</span><span class="n">pem</span> <span class="o">-</span><span class="n">extfile</span> <span class="n">extension</span><span class="p">.</span><span class="n">cnf</span> <span class="o">-</span><span class="n">extensions</span> <span class="n">CA_extensions</span>
<span class="n">Signature</span> <span class="n">ok</span>
<span class="n">subject</span><span class="o">=/</span><span class="n">CN</span><span class="o">=</span><span class="n">Puppet</span> <span class="n">CA</span><span class="p">:</span> <span class="n">puppetmasterd</span><span class="p">.</span><span class="k">domain</span><span class="p">.</span><span class="n">com</span>
<span class="n">Getting</span> <span class="n">Private</span> <span class="k">key</span>
<span class="n">openssl</span> <span class="n">x509</span> <span class="o">-</span><span class="k">in</span> <span class="n">ca_crt</span><span class="p">.</span><span class="n">pem</span> <span class="o">-</span><span class="n">noout</span> <span class="o">-</span><span class="nb">text</span><span class="o">|</span><span class="n">grep</span> <span class="o">-</span><span class="n">A</span> <span class="mi">3</span> <span class="n">Validity</span>
<span class="n">Validity</span>
<span class="k">Not</span> <span class="k">Before</span><span class="p">:</span> <span class="n">Apr</span> <span class="mi">29</span> <span class="mi">08</span><span class="p">:</span><span class="mi">25</span><span class="p">:</span><span class="mi">49</span> <span class="mi">2019</span> <span class="n">GMT</span>
<span class="k">Not</span> <span class="k">After</span> <span class="p">:</span> <span class="n">Apr</span> <span class="mi">26</span> <span class="mi">08</span><span class="p">:</span><span class="mi">25</span><span class="p">:</span><span class="mi">49</span> <span class="mi">2029</span> <span class="n">GMT</span>
</pre></div>
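<p>To double-check that the re-signed CA certificate really got its extensions back (the whole point of this exercise), you can inspect it again. Here is a self-contained sketch with a throw-away CA (filenames and CN are just illustrative, not the real puppet ones); note that <code>-addext</code> needs OpenSSL 1.1.1 or later:</p>

```shell
# Throw-away CA carrying the same extensions as in extension.cnf above (demo only)
cd "$(mktemp -d)"
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca_key.pem -out ca_crt.pem \
  -days 3650 -subj "/CN=Demo Puppet CA" \
  -addext "basicConstraints=critical,CA:TRUE" \
  -addext "keyUsage=critical,keyCertSign,cRLSign"
# A correctly re-signed CA cert should show CA:TRUE and Certificate Sign:
openssl x509 -in ca_crt.pem -noout -text | grep -E -A1 'Basic Constraints|Key Usage'
```

If either extension is missing from the output, the <code>-extfile</code>/<code>-extensions</code> options were not picked up during signing.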
<h3>Puppetmasterd server</h3>
<p>We also have to regenerate the CSR from the existing cert (assuming the FQDN in our cert also matches the currently set hostname)</p>
<div class="highlight"><pre><span></span><span class="n">cd</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">puppet</span><span class="o">/</span><span class="n">ssl</span>
<span class="n">openssl</span> <span class="n">x509</span> <span class="o">-</span><span class="n">x509toreq</span> <span class="o">-</span><span class="k">in</span> <span class="n">certs</span><span class="o">/</span><span class="err">$</span><span class="p">(</span><span class="n">hostname</span><span class="p">).</span><span class="n">pem</span> <span class="o">-</span><span class="n">signkey</span> <span class="n">private_keys</span><span class="o">/</span><span class="err">$</span><span class="p">(</span><span class="n">hostname</span><span class="p">).</span><span class="n">pem</span> <span class="o">-</span><span class="k">out</span> <span class="n">certificate_requests</span><span class="o">/</span><span class="err">$</span><span class="p">(</span><span class="n">hostname</span><span class="p">)</span><span class="n">_csr</span><span class="p">.</span><span class="n">pem</span>
<span class="n">Getting</span> <span class="n">request</span> <span class="n">Private</span> <span class="k">Key</span>
<span class="n">Generating</span> <span class="n">certificate</span> <span class="n">request</span>
</pre></div>
<p>Now that we have the CSR, we can sign it with the new CA</p>
<div class="highlight"><pre><span></span><span class="n">cp</span> <span class="n">certs</span><span class="o">/</span><span class="err">$</span><span class="p">(</span><span class="n">hostname</span><span class="p">).</span><span class="n">pem</span> <span class="n">certs</span><span class="o">/</span><span class="err">$</span><span class="p">(</span><span class="n">hostname</span><span class="p">).</span><span class="n">pem</span><span class="p">.</span><span class="k">old</span> <span class="o">#</span><span class="n">Backing</span> <span class="n">up</span>
<span class="n">openssl</span> <span class="n">x509</span> <span class="o">-</span><span class="n">req</span> <span class="o">-</span><span class="n">days</span> <span class="mi">3650</span> <span class="o">-</span><span class="k">in</span> <span class="n">certificate_requests</span><span class="o">/</span><span class="err">$</span><span class="p">(</span><span class="n">hostname</span><span class="p">)</span><span class="n">_csr</span><span class="p">.</span><span class="n">pem</span> <span class="o">-</span><span class="n">CA</span> <span class="n">ca</span><span class="o">/</span><span class="n">ca_crt</span><span class="p">.</span><span class="n">pem</span> <span class="err">\</span>
<span class="o">-</span><span class="n">CAkey</span> <span class="n">ca</span><span class="o">/</span><span class="n">ca_key</span><span class="p">.</span><span class="n">pem</span> <span class="o">-</span><span class="n">CAserial</span> <span class="n">ca</span><span class="o">/</span><span class="nb">serial</span> <span class="o">-</span><span class="k">out</span> <span class="n">certs</span><span class="o">/</span><span class="err">$</span><span class="p">(</span><span class="n">hostname</span><span class="p">).</span><span class="n">pem</span>
<span class="n">Signature</span> <span class="n">ok</span>
</pre></div>
<p>Let's validate that the puppetmasterd key and the new cert match (so that the crt and the private key are consistent)</p>
<div class="highlight"><pre><span></span><span class="p">(</span> <span class="n">openssl</span> <span class="n">rsa</span> <span class="o">-</span><span class="n">noout</span> <span class="o">-</span><span class="n">modulus</span> <span class="o">-</span><span class="k">in</span> <span class="n">private_keys</span><span class="o">/</span><span class="err">$</span><span class="p">(</span><span class="n">hostname</span><span class="p">).</span><span class="n">pem</span> <span class="mi">2</span><span class="o">></span> <span class="o">/</span><span class="n">dev</span><span class="o">/</span><span class="k">null</span> <span class="o">|</span> <span class="n">openssl</span> <span class="n">md5</span> <span class="p">;</span> <span class="n">openssl</span> <span class="n">x509</span> <span class="o">-</span><span class="n">noout</span> <span class="o">-</span><span class="n">modulus</span> <span class="o">-</span><span class="k">in</span> <span class="n">certs</span><span class="o">/</span><span class="err">$</span><span class="p">(</span><span class="n">hostname</span><span class="p">).</span><span class="n">pem</span> <span class="mi">2</span><span class="o">></span> <span class="o">/</span><span class="n">dev</span><span class="o">/</span><span class="k">null</span> <span class="o">|</span> <span class="n">openssl</span> <span class="n">md5</span> <span class="p">)</span>
<span class="p">(</span><span class="k">stdin</span><span class="p">)</span><span class="o">=</span> <span class="mi">0</span><span class="n">ab385eb2c6e9e65a4ed929a2dd0dbe5</span>
<span class="p">(</span><span class="k">stdin</span><span class="p">)</span><span class="o">=</span> <span class="mi">0</span><span class="n">ab385eb2c6e9e65a4ed929a2dd0dbe5</span>
</pre></div>
<p>It all seems good, so let's restart puppetmasterd/httpd (foreman launches puppetmasterd for us)</p>
<div class="highlight"><pre><span></span><span class="n">systemctl</span> <span class="k">restart</span> <span class="n">puppet</span>
</pre></div>
<h2>Puppet agents</h2>
<p>From this point, puppet agents will no longer complain about the puppetmasterd cert, but they will still warn that the CA itself will expire soon: </p>
<div class="highlight"><pre><span></span><span class="n">Warning</span><span class="o">:</span> <span class="n">Certificate</span> <span class="s1">'Puppet CA: puppetmasterd.domain.com'</span> <span class="n">will</span> <span class="n">expire</span> <span class="n">on</span> <span class="mi">2019</span><span class="o">-</span><span class="mi">05</span><span class="o">-</span><span class="mi">06</span><span class="n">T12</span><span class="o">:</span><span class="mi">12</span><span class="o">:</span><span class="mi">56</span><span class="n">GMT</span>
</pre></div>
<p>But as we now have the new ca_crt.pem on the puppetmasterd/foreman side, we can just distribute it to the clients (through puppet, ansible or whatever) and everything will continue to work</p>
<div class="highlight"><pre><span></span><span class="n">cd</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">puppet</span><span class="o">/</span><span class="n">ssl</span><span class="o">/</span><span class="n">certs</span>
<span class="n">mv</span> <span class="n">ca</span><span class="p">.</span><span class="n">pem</span> <span class="n">ca</span><span class="p">.</span><span class="n">pem</span><span class="p">.</span><span class="k">old</span>
</pre></div>
<p>And now distribute the new ca_crt.pem as ca.pem there.</p>
<p>Here is a puppet snippet for this (in our puppet::agent class):</p>
<div class="highlight"><pre><span></span> <span class="n">file</span> <span class="err">{</span> <span class="s1">'/var/lib/puppet/ssl/certs/ca.pem'</span><span class="p">:</span>
<span class="k">source</span> <span class="o">=></span> <span class="s1">'puppet:///puppet/ca_crt.pem'</span><span class="p">,</span>
<span class="k">owner</span> <span class="o">=></span> <span class="s1">'puppet'</span><span class="p">,</span>
<span class="k">group</span> <span class="o">=></span> <span class="s1">'puppet'</span><span class="p">,</span>
<span class="n">require</span> <span class="o">=></span> <span class="n">Package</span><span class="p">[</span><span class="s1">'puppet'</span><span class="p">],</span>
<span class="err">}</span>
</pre></div>
<p>The next time you run "puppet agent -t" (or puppet contacts puppetmasterd on its own schedule), it will apply the new cert, and from the next call on there will be no more warnings or issues</p>
<div class="highlight"><pre><span></span><span class="nl">Info</span><span class="p">:</span><span class="w"> </span><span class="n">Computing</span><span class="w"> </span><span class="nf">checksum</span><span class="w"> </span><span class="k">on</span><span class="w"> </span><span class="k">file</span><span class="w"> </span><span class="o">/</span><span class="nf">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">puppet</span><span class="o">/</span><span class="n">ssl</span><span class="o">/</span><span class="n">certs</span><span class="o">/</span><span class="n">ca</span><span class="p">.</span><span class="n">pem</span><span class="w"></span>
<span class="nl">Info</span><span class="p">:</span><span class="w"> </span><span class="o">/</span><span class="n">Stage</span><span class="o">[</span><span class="n">main</span><span class="o">]/</span><span class="nl">Puppet</span><span class="p">:</span><span class="err">:</span><span class="n">Agent</span><span class="o">/</span><span class="k">File</span><span class="o">[</span><span class="n">/var/lib/puppet/ssl/certs/ca.pem</span><span class="o">]</span><span class="err">:</span><span class="w"> </span><span class="n">Filebucketed</span><span class="w"> </span><span class="o">/</span><span class="nf">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">puppet</span><span class="o">/</span><span class="n">ssl</span><span class="o">/</span><span class="n">certs</span><span class="o">/</span><span class="n">ca</span><span class="p">.</span><span class="n">pem</span><span class="w"> </span><span class="k">to</span><span class="w"> </span><span class="n">puppet</span><span class="w"> </span><span class="k">with</span><span class="w"> </span><span class="nf">sum</span><span class="w"> </span><span class="n">c63b1cc5a39489f5da7d272f00ec09fa</span><span class="w"></span>
<span class="nl">Notice</span><span class="p">:</span><span class="w"> </span><span class="o">/</span><span class="n">Stage</span><span class="o">[</span><span class="n">main</span><span class="o">]/</span><span class="nl">Puppet</span><span class="p">:</span><span class="err">:</span><span class="n">Agent</span><span class="o">/</span><span class="k">File</span><span class="o">[</span><span class="n">/var/lib/puppet/ssl/certs/ca.pem</span><span class="o">]/</span><span class="nl">content</span><span class="p">:</span><span class="w"> </span><span class="n">content</span><span class="w"> </span><span class="n">changed</span><span class="w"> </span><span class="s1">'{md5}c63b1cc5a39489f5da7d272f00ec09fa'</span><span class="w"> </span><span class="k">to</span><span class="w"> </span><span class="s1">'{md5}e3d2e55edbe1ad45570eef3c9ade051f'</span><span class="w"></span>
</pre></div>
<p>Hope it helps</p>Implementing Zabbix custom LLD rules with Ansible2018-11-07T00:00:00+01:002018-11-07T00:00:00+01:00Fabian Arrotintag:arrfab.net,2018-11-07:/posts/2018/Nov/07/implementing-zabbix-custom-lld-rules-with-ansible/<p>While I have to admit that I've been using <a href="http://www.zabbix.com">Zabbix</a> since the 1.8.x era, I also have to admit that I'm not an expert, and that one can learn new things every day. I recently had to implement a new template for a custom service that is multi-instance aware: it can be started multiple times with various configurations, each instance with its own set of settings (like the tcp port on which to listen), and the number of running instances can differ from one node to the next.</p>
<p>I was thinking about the best way to implement this in Zabbix, and my initial idea was to just have one template per possible instance type, which would use <a href="https://www.zabbix.com/documentation/3.2/manual/config/macros">macros</a> defined at the host level to know which port to check, etc., so in fact backporting into zabbix what configuration management (Ansible in our case) already knows in order to deploy such an app instance.</p>
<p>But in parallel to that, I've always liked the fact that Zabbix itself has internal tools to auto-discover items, and so triggers for those: that's called <a href="https://www.zabbix.com/documentation/3.0/manual/discovery/low_level_discovery">Low-level Discovery</a> (LLD in short).</p>
<p>By default, if you use (or have modified) some zabbix templates, you can see those in action for the mounted filesystems or the present network interfaces in your linux OS. That's the "magic": you added a new mount point or a new interface? Zabbix discovers it automatically, starts monitoring it, and also graphs values for it.</p>
<p>So back to our monitoring problem and the need for multiple templates: what if we could use LLD too, and so have Zabbix automatically check our (multiple) deployed instances? The good news is that we can: one can create <a href="https://www.zabbix.com/documentation/3.0/manual/discovery/low_level_discovery#creating_custom_lld_rules">custom LLD rules</a>, and it then works OOTB with only one template added for those nodes.</p>
<p>If you read the link above about custom LLD rules, you'll see some examples of a script being called at the agent level, from the zabbix server, at a periodic interval, to "discover" those custom checks.
The interesting part to notice is that what gets returned to the zabbix server is a json document, tied to a new key that is declared at the template level.</p>
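<p>For reference, that json has a fixed, simple shape: a top-level "data" array with one object per discovered entity, and the LLD macro names as keys. A hand-made sample (the instance values below are hypothetical, just to show the format):</p>

```shell
# Minimal LLD payload: one object per discovered instance, {#MACRO} names as keys
cat <<'EOF'
{ "data": [
    { "{#INSTANCE}": "default", "{#RPCPORT}": "8443", "{#HTTPPORT}": "8444" }
] }
EOF
```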
<p>So it (usually) goes like this : </p>
<ul>
<li>create a template</li>
<li>create a new discovery rule, give it a name and a key (and optionally add Filters)</li>
<li>deploy a new <a href="https://www.zabbix.com/documentation/3.0/manual/config/items/userparameters">UserParameter</a> at the agent level that reports, for that key, the json string to declare to the zabbix server</li>
<li>Zabbix server receives/parses that json and, based on the macros returned in it, creates some Item prototypes, Trigger prototypes and so on ...</li>
</ul>
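<p>For the third step above, in the standard (non-Trapper) setup, the agent side is just a one-line config entry pointing at a script that prints the json; the key and script name here are hypothetical:</p>

```shell
# Hypothetical agent config line (e.g. in /etc/zabbix/zabbix_agentd.d/lld.conf):
#   UserParameter=custom.lld.instances,/usr/lib/zabbix/discover-instances
# and the discovery script itself only has to print the LLD json on stdout:
printf '{ "data": [ { "{#INSTANCE}": "default" } ] }\n'
```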
<p>Magic! ... except that in my specific case, for various reasons, I never allowed the zabbix user to really launch commands, due to its limited rights and also the Selinux context in which it's running (for interested people, it's running in the <a href="https://arrfab.net/posts/2016/Nov/25/zabbix-selinux-and-centos-731611/">zabbix_agent_t</a> context)</p>
<p>I didn't want to change that base rule for our deployments, but the good news is that you don't have to use UserParameter for LLD! It's true that if you look at the existing Discovery Rule for "Network interface discovery", you'll see the key net.if.discovery that everything else is built on, but its Type is "Zabbix agent". We can use something else from that list, like we already do for a "normal" check</p>
<p>I'm already (ab)using the <a href="https://www.zabbix.com/documentation/3.0/manual/config/items/itemtypes/trapper">Trapper item type</a> for a lot of hardware checks. The reason is simple: as the zabbix user is limited (and I don't want to grant it more rights), I have some scripts checking for hardware raid controllers (if any), etc., reporting back to zabbix through zabbix_sender.</p>
<p>Let's use the same logic for the json string to be returned to the Zabbix server for LLD (and yes, Trapper is in the list of available Types for a discovery rule).</p>
<p>It's even easier for us, as we'll control that through <a href="https://www.ansible.com">Ansible</a>: it's what is already used to deploy/configure our <a href="https://repospanner.org">RepoSpanner</a> instances, so we have all the logic there.</p>
<p>Let's first start by creating the new template for repospanner, and create a discovery rule (detecting each instances and settings) :</p>
<p><img alt="zabbix-discovery-type.png" src="/images/zabbix-discovery-type.png"></p>
<p>You can then apply that template to host[s] and wait ... but first we need the agent to report back to the server which instances are deployed/running. So let's see how to implement that through ansible.</p>
<p>To keep it short, in Ansible we have the following variables (default values, not the real ones) (from roles/repospanner/default.yml):</p>
<div class="highlight"><pre><span></span>...
<span class="nv">repospanner_instances</span>:
  <span class="o">-</span> <span class="nv">name</span>: <span class="nv">default</span>
    <span class="nv">admin_cli</span>: <span class="nv">False</span>
    <span class="nv">admin_ca_cert</span>:
    <span class="nv">admin_cert</span>:
    <span class="nv">admin_key</span>:
    <span class="nv">rpc_port</span>: <span class="mi">8443</span>
    <span class="nv">rpc_allow_from</span>:
      <span class="o">-</span> <span class="mi">127</span>.<span class="mi">0</span>.<span class="mi">0</span>.<span class="mi">1</span>
    <span class="nv">http_port</span>: <span class="mi">8444</span>
    <span class="nv">http_allow_from</span>:
      <span class="o">-</span> <span class="mi">127</span>.<span class="mi">0</span>.<span class="mi">0</span>.<span class="mi">1</span>
    <span class="nv">tls_ca_cert</span>: <span class="nv">ca</span>.<span class="nv">crt</span>
    <span class="nv">tls_cert</span>: <span class="nv">nodea</span>.<span class="nv">regiona</span>.<span class="nv">crt</span>
    <span class="nv">tls_key</span>: <span class="nv">nodea</span>.<span class="nv">regiona</span>.<span class="nv">key</span>
    <span class="nv">my_cn</span>: <span class="nv">localhost</span>.<span class="nv">localdomain</span>
    <span class="nv">master_node</span>: <span class="nv">nodea</span>.<span class="nv">regiona</span>.<span class="nv">domain</span>.<span class="nv">com</span> # <span class="nv">to</span> <span class="nv">know</span> <span class="nv">how</span> <span class="nv">to</span> <span class="nv">join</span> <span class="nv">a</span> <span class="nv">cluster</span> <span class="k">for</span> <span class="nv">other</span> <span class="nv">nodes</span>
    <span class="nv">init_node</span>: <span class="nv">True</span> # <span class="nv">To</span> <span class="nv">be</span> <span class="nv">declared</span> <span class="nv">only</span> <span class="nv">on</span> <span class="nv">the</span> <span class="nv">first</span> <span class="nv">node</span>
...
</pre></div>
<p>That simple example has only one instance, but you can easily see how to have multiple ones, etc.
So here is the logic: let's have ansible, when configuring the node, create the file that will be used by zabbix_sender (triggered by ansible itself) to send the json to the zabbix server. zabbix_sender can use an input file whose lines are formatted (see the man page) like this:</p>
<ul>
<li>hostname (or '-' to use the name configured in zabbix_agentd.conf) </li>
<li>key </li>
<li>value</li>
</ul>
<p>Those three fields <em>have</em> to be separated by a single space, and, important: you can't have any extra empty line (something you'll easily notice when playing with this for the first time)</p>
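<p>A quick way to see (and check) the expected input format before pointing zabbix_sender at a real file; the key and value below are sample data:</p>

```shell
# "hostname key value" on one line, single spaces between fields;
# '-' means "use the Hostname configured in zabbix_agentd.conf"
f=$(mktemp)
printf '%s\n' '- repospanner.lld.instances { "data": [ { "{#INSTANCE}": "default" } ] }' > "$f"
cat "$f"
# The real push would then be (needs a configured agent, so kept commented):
# zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -i "$f"
```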
<p>What does our file (roles/repospanner/templates/zabbix-repospanner-lld.j2) look like? </p>
<div class="highlight"><pre><span></span><span class="o">-</span> <span class="nv">repospanner</span>.<span class="nv">lld</span>.<span class="nv">instances</span> { <span class="s2">"</span><span class="s">data</span><span class="s2">"</span>: [ {<span class="o">%</span> <span class="k">for</span> <span class="nv">instance</span> <span class="nv">in</span> <span class="nv">repospanner_instances</span> <span class="o">-%</span>} { <span class="s2">"</span><span class="s">{{ '{#INSTANCE}' }}</span><span class="s2">"</span>: <span class="s2">"</span><span class="s">{{ instance.name }}</span><span class="s2">"</span>, <span class="s2">"</span><span class="s">{{ '{#RPCPORT}' }}</span><span class="s2">"</span>: <span class="s2">"</span><span class="s">{{ instance.rpc_port }}</span><span class="s2">"</span>, <span class="s2">"</span><span class="s">{{ '{#HTTPPORT}' }}</span><span class="s2">"</span>: <span class="s2">"</span><span class="s">{{ instance.http_port }}</span><span class="s2">"</span> } {<span class="o">%-</span> <span class="k">if</span> <span class="nv">not</span> <span class="k">loop</span>.<span class="nv">last</span> <span class="o">-%</span>},{<span class="o">%</span> <span class="k">endif</span> <span class="o">%</span>} {<span class="o">%</span> <span class="nv">endfor</span> <span class="o">%</span>} ] }
</pre></div>
<p>If you have already used jinja2 templates with Ansible, it's quite easy to understand. But I have to admit that I had trouble with the {#INSTANCE} one: that one isn't an ansible variable, but rather a fixed name for the macro that we'll send to zabbix (and so reuse as a macro everywhere). But ansible, when trying to render the jinja2 template, was complaining about a missing 'comment': indeed, {# ... #} is a <a href="http://jinja.pocoo.org/docs/dev/templates/#comments">comment in jinja2</a>. So the best way (thanks to the people in #ansible for that trick) is to include it in {{ }} brackets and escape it so that it is rendered as {#INSTANCE} (nice to know if you ever have to do that too ....)</p>
<p>The rest is trivial. Here is an excerpt from monitoring.yml (included in that repospanner role): </p>
<div class="highlight"><pre><span></span><span class="x">- name: Distributing zabbix repospanner check file</span>
<span class="x"> template:</span>
<span class="x"> src: "</span><span class="cp">{{</span> <span class="nv">item</span> <span class="cp">}}</span><span class="x">.j2"</span>
<span class="x"> dest: "/usr/lib/zabbix/</span><span class="cp">{{</span> <span class="nv">item</span> <span class="cp">}}</span><span class="x">"</span>
<span class="x"> mode: 0755</span>
<span class="x"> with_items:</span>
<span class="x"> - zabbix-repospanner-check</span>
<span class="x"> - zabbix-repospanner-lld</span>
<span class="x"> register: zabbix_templates </span>
<span class="x"> tags:</span>
<span class="x"> - templates</span>
<span class="x">- name: Launching LLD to announce to zabbix</span>
<span class="x"> shell: /bin/zabbix_sender -c /etc/zabbix/zabbix_agentd.conf -i /usr/lib/zabbix/zabbix-repospanner-lld</span>
<span class="x"> when: zabbix_templates is changed</span>
</pre></div>
<p>And this is how it is rendered on one of my test nodes: </p>
<div class="highlight"><pre><span></span><span class="o">-</span> <span class="n">repospanner</span><span class="p">.</span><span class="n">lld</span><span class="p">.</span><span class="n">instances</span> <span class="err">{</span> <span class="ss">"data"</span><span class="p">:</span> <span class="p">[</span> <span class="err">{</span> <span class="ss">"{#INSTANCE}"</span><span class="p">:</span> <span class="ss">"namespace_rpms"</span><span class="p">,</span> <span class="ss">"{#RPCPORT}"</span><span class="p">:</span> <span class="ss">"8443"</span><span class="p">,</span> <span class="ss">"{#HTTPPORT}"</span><span class="p">:</span> <span class="ss">"8444"</span> <span class="err">}</span><span class="p">,</span> <span class="err">{</span> <span class="ss">"{#INSTANCE}"</span><span class="p">:</span> <span class="ss">"namespace_centos"</span><span class="p">,</span> <span class="ss">"{#RPCPORT}"</span><span class="p">:</span> <span class="ss">"8445"</span><span class="p">,</span> <span class="ss">"{#HTTPPORT}"</span><span class="p">:</span> <span class="ss">"8446"</span> <span class="err">}</span> <span class="p">]</span> <span class="err">}</span>
</pre></div>
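<p>As a sanity check, everything after the key on such a rendered line has to be parseable json, which is easy to verify before zabbix ever sees it (the sample line below is inlined here so the snippet stands alone):</p>

```shell
line='- repospanner.lld.instances { "data": [ { "{#INSTANCE}": "namespace_rpms", "{#RPCPORT}": "8443", "{#HTTPPORT}": "8444" } ] }'
# Drop the "hostname key" fields and validate the remaining json part
printf '%s\n' "$line" | cut -d' ' -f3- | python3 -m json.tool > /dev/null && echo "valid JSON"
```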
<p>As ansible auto-announces/pushes that back to zabbix, the zabbix server can automatically start creating checks and triggers/graphs (through LLD, based on the item prototypes) and so start monitoring each newly deployed instance.
You want to add a third one? (we have two in our case): ansible pushes the config, the rendered .j2 template changes, and the zabbix server gets notified. etc, etc ...</p>
<p>The rest is just "normal" operation for zabbix: you can create item/trigger prototypes and just use those special macros coming from LLD: </p>
<p><img alt="zabbix-item-prototypes.png" src="/images/zabbix-item-prototypes.png"></p>
<p>It was worth spending some time in the LLD documentation and in #zabbix discussing LLD, but once you see the added value, and that you can configure it all automatically through Ansible, you can see how powerful it can be.</p>Updated mirrorlist code in the CentOS Infra2018-09-24T00:00:00+02:002018-09-24T00:00:00+02:00Fabian Arrotintag:arrfab.net,2018-09-24:/posts/2018/Sep/24/updated-mirrorlist-code-in-the-centos-infra/<p>Recently I had to update the existing code running behind mirrorlist.centos.org (the service that returns a list of validated mirrors to yum; see the /etc/yum.repos.d/CentOS*.repo file), as it was still using the <a href="https://dev.maxmind.com/geoip/legacy/install/country/">Maxmind GeoIP Legacy country database</a>. As you probably know, Maxmind <a href="https://support.maxmind.com/geolite-legacy-discontinuation-notice/">announced</a> that they're discontinuing the Legacy DB, so that was one reason to update the code.
Switching to <a href="https://dev.maxmind.com/geoip/geoip2/geolite2/">GeoLite2</a>, with the python2-geoip2 package, was really easy to do, and that change was already pushed last month.</p>
<p>But that's when I discussed with <a href="https://twitter.com/avij">Anssi</a> (if you don't know him, he's the one keeping the CentOS external mirrors DB up to date, including through the <a href="https://lists.centos.org/mailman/listinfo/centos-mirror">centos-mirror list</a>) and we thought about not only making that change there, but in the whole chain (so on our "mirror crawler" node, and also for the <a href="http://isoredirect.centos.org">isoredirect.centos.org service</a>). Random chats like these are good, because suddenly you don't only want to "fix" one thing, but also take the time to enhance it and add new features.</p>
<p>The previous code was already supporting both IPv4 and IPv6, but it was consuming different data sources (as external mirrors were validated differently for ipv4 vs ipv6 connectivity). So the first thing was to rewrite/combine the new code on the "mirror crawler" process for dual-stack tests, and also reflect that change on the frontend (aka mirrorlist.centos.org) nodes.</p>
<p>While we were working on this, Anssi proposed to not just adapt the isoredirect.centos.org code, but to convert it to the same python format as the mirrorlist.centos.org one, which he did.</p>
<p>The last big change that was added is the following : only some repositories/architectures were checked/validated in the past, but not all the other ones (so nothing from the <a href="https://wiki.centos.org/SpecialInterestGroup">SIGs</a> and nothing from AltArch, so no mirrorlist support for i386/armhfp/aarch64/ppc64/ppc64le). </p>
<p>While it wasn't a real problem in the past when we launched the SIGs concept, and added the other architectures (AltArch) after that, we suddenly started suffering from some side-effects :</p>
<ul>
<li>More and more users "using" RPM content from mirror.centos.org (mainly through SIGs - a good indicator that those are successful, so a good "problem" to solve)</li>
<li>We are currently losing some nodes in that mirror.centos.org network (it's still entirely based on free dedicated servers <a href="https://wiki.centos.org/Donate#head-2d5ae152a1967f88237a2d61216613e142d42fc1">donated</a> to the project)</li>
</ul>
<p>To address the first point, offloading more content to the 600+ external mirrors we have right now would be really good, as those nodes have better connectivity than we do, and with more presence around the globe too, so slowly pointing SIGs and AltArch to those external mirrors will help.</p>
<p>The other good point is that, as we switched to the <a href="http://geolite.maxmind.com/download/geoip/database/GeoLite2-City.tar.gz">GeoLite2 City</a> DB, it gives us more granularity : for example, instead of "just" returning you a list of 10 validated mirrors for the USA (if your request was identified as coming from that country of course), you now get a list of validated mirrors in your state/region instead. That means that for such big countries having a lot of mirrors, we also better distribute the load amongst all of those, which is a big win for everybody (users and mirror admins).</p>
<p>For people interested in the <a href="https://github.com/CentOS/mirrorlists-code">code</a>, you'll see that we just run several instances of the python code, behind Apache running with <a href="https://httpd.apache.org/docs/2.4/mod/mod_proxy_balancer.html">mod_proxy_balancer</a>. That means that if we need to increase the number of "instances", it's easy to do, but so far it's running great with 5 running instances per node (and we have 4 nodes behind mirrorlist.centos.org). Worth noting that on average, each of those nodes gets 36+ million requests per week for the mirrorlist service (so 144+ million in total per week)</p>
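<p>For reference, a minimal mod_proxy_balancer setup for such a scenario could look like this (a sketch only, not the actual CentOS infra config ; backend ports are illustrative) :</p>

```apache
# Sketch: 5 local application instances behind one balancer
<Proxy "balancer://mirrorlist">
    BalancerMember "http://127.0.0.1:8001"
    BalancerMember "http://127.0.0.1:8002"
    BalancerMember "http://127.0.0.1:8003"
    BalancerMember "http://127.0.0.1:8004"
    BalancerMember "http://127.0.0.1:8005"
</Proxy>
ProxyPass "/" "balancer://mirrorlist/"
ProxyPassReverse "/" "balancer://mirrorlist/"
```

<p>Adding capacity is then just a matter of starting another instance and adding one BalancerMember line.</p>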
<p>So in (very) short summary : </p>
<ul>
<li>mirrorlist.centos.org code now supports SIGs/AltArch repositories (we'll sync with SIGs to update their .repo file to use mirrorlist= instead of baseurl= soon)</li>
<li>we have better accuracy for large countries, so we redirect you to a 'closer' validated mirror</li>
</ul>
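<p>For SIGs, the .repo change mentioned above would look like this (a sketch : the repo name and the repo= value are illustrative, each SIG repository has its own) :</p>

```ini
[centos-sig-example]
name=CentOS SIG example repo
# baseurl=http://mirror.centos.org/centos/7/... (replaced by the line below)
mirrorlist=http://mirrorlist.centos.org/?release=7&arch=$basearch&repo=example&infra=$infra
enabled=1
gpgcheck=1
```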
<p>One reminder btw : you can verify which mirrors are returned to you with some simple requests : </p>
<div class="highlight"><pre><span></span><span class="o">#</span> <span class="k">to</span> <span class="k">force</span> <span class="n">ipv4</span>
<span class="n">curl</span> <span class="s1">'http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=updates'</span> <span class="o">-</span><span class="mi">4</span>
<span class="o">#</span> <span class="k">to</span> <span class="k">force</span> <span class="n">ipv6</span>
<span class="n">curl</span> <span class="s1">'http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=updates'</span> <span class="o">-</span><span class="mi">6</span>
</pre></div>
<p>Last thing I wanted to mention was a potential way to fix point #2 from the list : when I checked in our "donated nodes" inventory, we are still running CentOS on nodes from ~2003 (yes, you read that correctly), so if you want to help/<a href="https://www.centos.org/sponsors">sponsor</a> the CentOS Project, feel free to <a href="https://wiki.centos.org/Donate">reach out</a> ! </p>Using newer PHP stack (built and distributed by CentOS) on CentOS 72018-02-20T00:00:00+01:002018-02-20T00:00:00+01:00Fabian Arrotintag:arrfab.net,2018-02-20:/posts/2018/Feb/20/using-newer-php-stack-built-and-distributed-by-centos-on-centos-7/<p>One thing one has to like with an Enterprise distribution is the same stable api/abi during the distro lifetime. If you have an application that works, you know that it will continue to work.</p>
<p>But in parallel, one can't always decide which application to run on that distro with the built-in components. I was personally faced with this recently, when I needed to migrate our <a href="https://bugs.centos.org">Bug Tracker</a> to a new version. So let's use that example to see how we can use "newer" php pkgs distributed through the distro itself.</p>
<p>The application that we use for https://bugs.centos.org is <a href="https://www.mantisbt.org/">MantisBT</a>, and by reading their <a href="https://www.mantisbt.org/docs/master/en-US/Admin_Guide/html-desktop/#admin.install.requirements.software.versions">requirements list</a> it was clear that a CentOS 7 default setup would not work : as a reminder, the default php pkg for .el7 is 5.4.16, a version not supported anymore by "modern" applications.</p>
<p>That's where <a href="https://www.softwarecollections.org/en/">SCLs</a> come to the rescue ! With such "collections", one can install those without overwriting the base pkgs, and so can even run multiple parallel instances of such a "stack", based on configuration.</p>
<p>Let's just start simple with our MantisBT example : forget about the traditional php-* packages (including "php", which provides mod_php for Apache) : it's up to you to leave those installed if you need them, but in my case, I'll default to php 7.1.x for the whole vhost. Also worth knowing : I wanted to integrate php with the default httpd from the distro (to ease the configuration management side, so that the .conf files are found at the usual place)</p>
<p>The good news is that those collections are <a href="https://cbs.centos.org/koji/search?match=glob&type=package&terms=scl*">built</a> and so then tested and released through our CentOS Infra, so you don't have to care about anything else ! (kudos to the <a href="https://wiki.centos.org/SpecialInterestGroup/SCLo">SCLo SIG</a> ! ). You can see the available collections <a href="https://wiki.centos.org/SpecialInterestGroup/SCLo/CollectionsList">here</a></p>
<p>So, how do we proceed ? easy ! First let's add the repository :</p>
<div class="highlight"><pre><span></span><span class="n">yum</span> <span class="n">install</span> <span class="n">centos</span><span class="o">-</span><span class="n">release</span><span class="o">-</span><span class="n">scl</span>
</pre></div>
<p>And from that point, you can just install what you need. For our case, MantisBT needs php, php-xml, php-mbstring, php-gd (for the captcha, if you want to use it), and a DB driver, so php-mysql (if you target mysql of course). You just have to "translate" that into SCL pkgs : in our case, php becomes rh-php71 (meta pkg), php-xml becomes rh-php71-php-xml and so on (one remark though, php-mysql became rh-php71-php-mysqlnd !)</p>
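<p>The translation is mechanical enough to sketch it in shell (purely illustrative - the real package list comes from the SCLo SIG, only the two exceptions above need special-casing) :</p>

```shell
# Sketch: map a base php package name to its rh-php71 SCL equivalent
scl_name() {
  case "$1" in
    php)        echo "rh-php71" ;;              # meta pkg
    php-mysql)  echo "rh-php71-php-mysqlnd" ;;  # renamed driver
    php-*)      echo "rh-php71-${1}" ;;
  esac
}

for pkg in php php-xml php-mbstring php-gd php-mysql; do
  scl_name "$pkg"
done
```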
<p>So here we go : </p>
<div class="highlight"><pre><span></span><span class="n">yum</span> <span class="n">install</span> <span class="n">httpd</span> <span class="n">rh</span><span class="o">-</span><span class="n">php71</span> <span class="n">rh</span><span class="o">-</span><span class="n">php71</span><span class="o">-</span><span class="n">php</span><span class="o">-</span><span class="n">xml</span> <span class="n">rh</span><span class="o">-</span><span class="n">php71</span><span class="o">-</span><span class="n">php</span><span class="o">-</span><span class="n">mbstring</span> <span class="n">rh</span><span class="o">-</span><span class="n">php71</span><span class="o">-</span><span class="n">php</span><span class="o">-</span><span class="n">gd</span> <span class="n">rh</span><span class="o">-</span><span class="n">php71</span><span class="o">-</span><span class="n">php</span><span class="o">-</span><span class="n">soap</span> <span class="n">rh</span><span class="o">-</span><span class="n">php71</span><span class="o">-</span><span class="n">php</span><span class="o">-</span><span class="n">mysqlnd</span> <span class="n">rh</span><span class="o">-</span><span class="n">php71</span><span class="o">-</span><span class="n">php</span><span class="o">-</span><span class="n">fpm</span>
</pre></div>
<p>As said earlier, we'll target the default httpd pkg from the distro, so we just have to "link" php and httpd. Remember that mod_php isn't available anymore, but instead we'll use the php-fpm pkg (see rh-php71-php-fpm) for this (so all requests are sent to that FastCGI Process Manager daemon)</p>
<p>Let's do this : </p>
<div class="highlight"><pre>systemctl enable httpd --now
systemctl enable rh-php71-php-fpm --now
cat > /etc/httpd/conf.d/php-fpm.conf &lt;&lt; EOF
AddType text/html .php
DirectoryIndex index.php
&lt;FilesMatch \.php$&gt;
SetHandler "proxy:fcgi://127.0.0.1:9000"
&lt;/FilesMatch&gt;
EOF
systemctl restart httpd
</pre></div>
<p>And from this point, it's all basic : the application is now using the php 7.1.x stack. That's a basic "howto" but you can also run multiple versions in parallel, and also tune php-fpm itself. If you're interested, I'll let you read Remi Collet's <a href="https://developers.redhat.com/blog/2017/10/25/php-configuration-tips/">blog post about this</a> (Thank you again Remi !)</p>
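<p>If you ever need two versions side by side (for example rh-php70 for one legacy vhost), the trick is simply to give each collection's FPM pool its own listen port and point each vhost at the right one. A sketch, with paths following the SCL convention (verify them on your system) :</p>

```ini
; e.g. /etc/opt/rh/rh-php70/php-fpm.d/www.conf (hypothetical second stack)
[www]
user = apache
group = apache
; the rh-php71 pool already listens on 127.0.0.1:9000, so pick another port
listen = 127.0.0.1:9001
```

<p>The corresponding vhost then uses SetHandler "proxy:fcgi://127.0.0.1:9001" in its own FilesMatch block.</p>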
<p>Hope this helps, as strangely I couldn't easily find a simple howto for this, as "scl enable rh-php71 bash" wouldn't help a lot with httpd (which is probably the most used scenario)</p>Diagnosing nf_conntrack/nf_conntrack_count issues on CentOS mirrorlist nodes2018-01-19T00:00:00+01:002018-01-19T00:00:00+01:00Fabian Arrotintag:arrfab.net,2018-01-19:/posts/2018/Jan/19/diagnosing-nf_conntracknf_conntrack_count-issues-on-centos-mirrorlist-nodes/<p>Yesterday, I got some alerts for some nodes in the CentOS Infra from both our monitoring system, but also confirmed by some folks reporting errors directly in our #centos-devel irc channel on Freenode.</p>
<p>The impacted nodes were the nodes we use for mirrorlist service. For people not knowing what they are used for, here is a quick overview of what happens when you run "yum update" on your CentOS node :</p>
<ul>
<li>yum analyzes the .repo files contained under /etc/yum.repos.d/</li>
<li>for CentOS repositories, it knows that it has to use a list of mirrors provided by a server hosted within the centos infra (mirrorlist=http://mirrorlist.centos.org/?release=$releasever&arch=$basearch&repo=updates&infra=$infra)</li>
<li>yum then contacts one of the servers behind "mirrorlist.centos.org" (we have 4 nodes so far : two in Europe and two in USA, all available over IPv4 and IPv6)</li>
<li>mirrorlist checks the src ip and sends back a list of current/up2date mirrors in the country (some GeoIP checks are done)</li>
<li>yum then opens connection to those validated mirrors</li>
</ul>
<p>We monitor the response time for those services, and average response time is usually < 1sec (with some exceptions, mostly due to network latency, also for nodes in other continents). But yesterday the values were not only higher, but also even completely missing from our monitoring system, so no data was received. Here is a graph from our monitoring/Zabbix server : </p>
<p><img alt="mirrorlist-response-time-error.png" src="/images/mirrorlist-response-time-error.png" title="Mirrorlist ip_conntrack status"></p>
<p>So clearly something was happening and it was time to also find some patterns.
Also from our monitoring we discovered that the number of network connections tracked by the kernel was suddenly higher than usual. In fact, as soon as your node does some state tracking with <a href="http://www.netfilter.org">netfilter</a> (like for example <code>-m state --state ESTABLISHED,RELATED</code> ), it keeps that in memory. You can easily retrieve the number of actively tracked connections like this : </p>
<div class="highlight"><pre><span></span><span class="n">cat</span> <span class="o">/</span><span class="n">proc</span><span class="o">/</span><span class="n">sys</span><span class="o">/</span><span class="n">net</span><span class="o">/</span><span class="n">netfilter</span><span class="o">/</span><span class="n">nf_conntrack_count</span>
</pre></div>
<p>So it's easy to guess what happens if the max (/proc/sys/net/netfilter/nf_conntrack_max) is reached : the kernel drops packets (seen in dmesg) :</p>
<div class="highlight"><pre><span></span><span class="n">nf_conntrack</span><span class="o">:</span> <span class="n">table</span> <span class="n">full</span><span class="o">,</span> <span class="n">dropping</span> <span class="n">packet</span>
</pre></div>
<p>The default values depend on the available memory, and they can be changed in real-time.
Don't forget to then also tune the hash size (basic rule is nf_conntrack_max / 4).
On the mirrorlist nodes, we had a default value of 262144 (so yeah, keeping track of that amount of connections in memory), so to quickly get the service back in shape : </p>
<div class="highlight"><pre><span></span>new_number="524288"
echo <span class="cp">${</span><span class="n">new_number</span><span class="cp">}</span> > /proc/sys/net/netfilter/nf_conntrack_max
echo $(( <span class="nv">$new_number</span> / 4 )) > /sys/module/nf_conntrack/parameters/hashsize
</pre></div>
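<p>To make those values persistent across reboots, here is a sketch (file names are arbitrary) : nf_conntrack_max is a sysctl, but the hash size is a module parameter, so it goes through modprobe.d instead :</p>

```ini
# /etc/sysctl.d/conntrack.conf
net.netfilter.nf_conntrack_max = 524288

# /etc/modprobe.d/nf_conntrack.conf (applied when the module is loaded)
options nf_conntrack hashsize=131072
```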
<p>Another option was also to flush the table (you can do that with <code>conntrack -F</code> , a tool from the conntrack-tools package) but it's really only a temporary fix, and that will not help you get the needed info for proper troubleshooting (see below)</p>
<p>Here is the Zabbix graph showing that for some nodes it was higher than the default values, but now the kernel wasn't dropping packets anymore.</p>
<p><img alt="ip_conntrack_count.png" src="/images/ip_conntrack_count.png" title="Mirrorlist ip_conntrack status"></p>
<p>We could then confirm that service was then working fine (not "flapping" anymore).</p>
<p>So one could think that this was the only solution for the problem and stop the investigation there. But what is the root cause of this ? What happened that opened so many (unclosed) connections to those mirrorlist nodes ? Let's dive into the nf_conntrack table again !</p>
<p>Not only do you have the number of tracked connections (through /proc/sys/net/netfilter/nf_conntrack_count) but also the whole details about those connections.
So let's dump that into a file for full analysis and try to find a pattern : </p>
<div class="highlight"><pre><span></span><span class="n">cat</span> <span class="o">/</span><span class="n">proc</span><span class="o">/</span><span class="n">net</span><span class="o">/</span><span class="n">nf_conntrack</span> <span class="o">></span> <span class="n">conntrack</span><span class="p">.</span><span class="n">list</span>
<span class="n">cat</span> <span class="n">conntrack</span><span class="p">.</span><span class="n">list</span> <span class="o">|</span><span class="n">awk</span> <span class="s1">'{print $7}'</span><span class="o">|</span><span class="n">sed</span> <span class="s1">'s/src=//g'</span><span class="o">|</span><span class="n">sort</span><span class="o">|</span><span class="n">uniq</span> <span class="o">-</span><span class="k">c</span><span class="o">|</span><span class="n">sort</span> <span class="o">-</span><span class="n">n</span> <span class="o">-</span><span class="n">r</span><span class="o">|</span><span class="n">head</span>
</pre></div>
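<p>If you want to see what that pipeline does, here it is against a small fabricated sample (the src= field is the 7th one on a typical .el7 /proc/net/nf_conntrack line, but double-check the position on your kernel version) :</p>

```shell
# Fabricated sample dump (documentation IPs), then the same "top talkers" pipeline
cat > conntrack.sample << 'EOF'
ipv4 2 tcp 6 431999 ESTABLISHED src=192.0.2.10 dst=198.51.100.1 sport=55000 dport=80 src=198.51.100.1 dst=192.0.2.10 sport=80 dport=55000 [ASSURED] use=1
ipv4 2 tcp 6 431999 ESTABLISHED src=192.0.2.10 dst=198.51.100.1 sport=55001 dport=80 src=198.51.100.1 dst=192.0.2.10 sport=80 dport=55001 [ASSURED] use=1
ipv4 2 tcp 6 431999 ESTABLISHED src=203.0.113.5 dst=198.51.100.1 sport=43210 dport=80 src=198.51.100.1 dst=203.0.113.5 sport=80 dport=43210 [ASSURED] use=1
EOF
# 192.0.2.10 comes out on top: that host holds the most tracked connections
awk '{print $7}' conntrack.sample | sed 's/src=//' | sort | uniq -c | sort -rn | head
```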
<p>Here we go : the same range of IPs on all our mirrorlist servers having <em>thousands</em> of ESTABLISHED connections. Not going to give you all the details about this (the goal of this blog post isn't "finger pointing"), but we suddenly identified the issue. So we took contact with the network team behind those identified IPs to report that behaviour. Still to be tracked down, but I'm wondering if a firewall doing NAT isn't closing tcp connections at all ; more to come.</p>
<p>At least mirrorlist response time is now back at usual state : </p>
<p><img alt="mirrorlist-response-time.png" src="/images/mirrorlist-response-time.png" title="Mirrorlist Avg response time"></p>
<p>So you can also let your configuration management set those parameters through a dedicated .conf under /etc/sysctl.d/ to ensure that they'll be applied automatically.</p>Using a RaspberryPI3 as Unifi AP controller with CentOS 72018-01-10T00:00:00+01:002018-01-10T00:00:00+01:00Fabian Arrotintag:arrfab.net,2018-01-10:/posts/2018/Jan/10/using-a-raspberrypi3-as-unifi-ap-controller-with-centos-7/<p>That's something I should have blogged about earlier, but I almost forgot about it, until I read on twitter other people having replaced their home network equipment with Ubnt/Ubiquiti gear, so I realized that it was on my 'TOBLOG' list.</p>
<p>During the winter holidays, the whole family was at home, and also with kids on the WiFi network. Of course I already had a different wlan for them, separated/segregated from the main one, but plenty of things weren't really working on that crappy device. So it was time to setup something else. I had the opportunity to play with some <a href="https://www.ubnt.com/">Ubiquiti</a> devices in the past, so finding even an old <a href="https://www.ubnt.com/unifi/unifi-ap/">Unifi UAP model</a> was enough for my needs (I just need an Access Point, routing/firewall being done on something else).</p>
<p>If you've already played with those tools, you know that you need a controller to set the devices up, and because it's 'only' a java/mongodb stack, I thought it would be trivial to set up on a low-end device like a <a href="https://www.raspberrypi.org/">RaspberryPi3</a> (not limited to that, so all armhfp boards on which you can run CentOS would work)</p>
<p>After having installed <a href="http://mirror.centos.org/altarch/7/isos/armhfp/">CentOS 7 armhfp minimal</a> on the device, and once logged in, I just had to add the mandatory <a href="https://wiki.centos.org/SpecialInterestGroup/AltArch/Arm32#head-f2a772703b3caa90cc284e01bc87423ce9a87bcd">unofficial epel repository</a> for mongodb </p>
<div class="highlight"><pre>cat > /etc/yum.repos.d/epel.repo &lt;&lt; EOF
[epel]
name=Epel rebuild for armhfp
baseurl=https://armv7.dev.centos.org/repodir/epel-pass-1/
enabled=1
gpgcheck=0
EOF
</pre></div>
<p>After that, I just installed what's required to run the application :</p>
<div class="highlight"><pre><span></span><span class="n">yum</span> <span class="n">install</span> <span class="n">mongodb</span> <span class="n">mongodb</span><span class="o">-</span><span class="n">server</span> <span class="n">java</span><span class="o">-</span><span class="mi">1</span><span class="p">.</span><span class="mi">8</span><span class="p">.</span><span class="mi">0</span><span class="o">-</span><span class="n">openjdk</span><span class="o">-</span><span class="n">headless</span> <span class="o">-</span><span class="n">y</span>
</pre></div>
<p>The "interesting" part is that Ubnt now only provides .deb packages, so we just have to download/extract what we need (it's all java code) and start it : </p>
<div class="highlight"><pre><span></span><span class="n">tmp_dir</span><span class="o">=</span><span class="err">$</span><span class="p">(</span><span class="n">mktemp</span> <span class="o">-</span><span class="n">d</span><span class="p">)</span>
<span class="n">cd</span> <span class="err">$</span><span class="n">tmp_dir</span>
<span class="n">curl</span> <span class="o">-</span><span class="n">O</span> <span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">dl</span><span class="p">.</span><span class="n">ubnt</span><span class="p">.</span><span class="n">com</span><span class="o">/</span><span class="n">unifi</span><span class="o">/</span><span class="mi">5</span><span class="p">.</span><span class="mi">6</span><span class="p">.</span><span class="mi">26</span><span class="o">/</span><span class="n">unifi_sysvinit_all</span><span class="p">.</span><span class="n">deb</span>
<span class="n">ar</span> <span class="n">vx</span> <span class="n">unifi_sysvinit_all</span><span class="p">.</span><span class="n">deb</span>
<span class="n">tar</span> <span class="n">xvf</span> <span class="k">data</span><span class="p">.</span><span class="n">tar</span><span class="p">.</span><span class="n">xz</span>
<span class="n">mv</span> <span class="n">usr</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">unifi</span><span class="o">/</span> <span class="o">/</span><span class="n">opt</span><span class="o">/</span><span class="n">UniFi</span>
<span class="n">cd</span> <span class="o">/</span><span class="n">opt</span><span class="o">/</span><span class="n">UniFi</span><span class="o">/</span><span class="n">bin</span>
<span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">rm</span> <span class="o">-</span><span class="n">Rf</span> <span class="err">$</span><span class="n">tmp_dir</span>
<span class="n">ln</span> <span class="o">-</span><span class="n">s</span> <span class="o">/</span><span class="n">bin</span><span class="o">/</span><span class="n">mongod</span>
</pre></div>
<p>You can start it "by hand" but let's create a simple systemd file and use it directly :</p>
<div class="highlight"><pre>cat > /etc/systemd/system/unifi.service &lt;&lt; EOF
[Unit]
Description=UBNT UniFi Controller
After=syslog.target network.target

[Service]
WorkingDirectory=/opt/UniFi
ExecStart=/usr/bin/java -jar /opt/UniFi/lib/ace.jar start
ExecStop=/usr/bin/java -jar /opt/UniFi/lib/ace.jar stop

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable unifi --now
</pre></div>
<p>Don't forget that :</p>
<ul>
<li>it's "Java"</li>
<li>running on a slow armhfp processor</li>
</ul>
<p>So that will take time to initialize. You can follow progress in /opt/UniFi/logs/server.log and wait for the TLS port to be opened : </p>
<div class="highlight"><pre><span></span><span class="k">while</span> <span class="nv">true</span> <span class="c1">; do sleep 1 ; ss -tanp|grep 8443 && break ; done</span>
</pre></div>
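<p>Also worth sketching the firewall part : 8443 is the web UI port we just waited for, and Ubiquiti's documentation also lists (among others) 8080/tcp for device inform and 3478/udp for STUN, so verify the list against your controller version. With firewalld, a hypothetical service definition could be :</p>

```xml
<?xml version="1.0" encoding="utf-8"?>
<!-- /etc/firewalld/services/unifi.xml (hypothetical) -->
<service>
  <short>unifi</short>
  <description>UBNT UniFi controller</description>
  <port protocol="tcp" port="8080"/>
  <port protocol="tcp" port="8443"/>
  <port protocol="udp" port="3478"/>
</service>
```

<p>Then enable it with firewall-cmd --permanent --add-service=unifi followed by firewall-cmd --reload.</p>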
<p>Don't forget to open the needed ports in the firewall and you can then reach the Unifi controller running on your armhfp board.</p>Lightweigth CentOS 7 i686 desktop on older machine2018-01-02T00:00:00+01:002018-01-02T00:00:00+01:00Fabian Arrotintag:arrfab.net,2018-01-02:/posts/2018/Jan/02/lightweigth-centos-7-i686-desktop-on-older-machine/<p>So, end of the year is always when you have some "time off" and so can work on various projects that were left behind. While searching for other hardware collecting dust in my furniture (other blog post coming soon about that too) I found my old <a href="https://en.wikipedia.org/wiki/Asus_Eee_PC#Eee_900_series">Asus Eeepc 900</a> and was wondering if I could resurrect it.</p>
<p>While it was running CentOS 5 and then 6 "just fine", I wanted to give CentOS 7 a try.</p>
<p>Of course, if you remember the specs from that ~2008 small netbook, you remember that it had :</p>
<ul>
<li>a slow CPU (Intel(R) Celeron(R) M processor, 900MHz)</li>
<li>only 1GB of RAM</li>
<li>very limited disk space (ASUS-PHISON OB SSD 4GB + an additional 8GB on my model)</li>
</ul>
<p>Setting up the full Gnome3 experience on it would be pointless and probably unusable.
So let's set up a minimal CentOS 7 AltArch install (needed as the cpu is i686/32 bits only) and add what we need after that.
So here we go : </p>
<ul>
<li>Download the netinstall iso image (I used a "local" mirror, so http://mirror.nucleus.be/centos-altarch/7/isos/i386/CentOS-7-i386-NetInstall-1611.iso)</li>
<li>use dd to transfer it to a usb storage key</li>
<li>start the installer on the Eeepc</li>
<li>wait .... wait .... wait ...</li>
</ul>
<p>Once installed and up2date, one needs to add additional repositories that aren't there by default. As a reminder, there are <em>no</em> official Epel builds for i686 (same as for <a href="https://wiki.centos.org/SpecialInterestGroup/AltArch/Arm32#head-f2a772703b3caa90cc284e01bc87423ce9a87bcd">armhfp</a>), but <a href="https://twitter.com/JohnnyCentOS">Johnny</a> started to rebuild Epel SRPMs for that specific reason, so here we go :</p>
<div class="highlight"><pre><span></span><span class="nv">cat</span> <span class="o">></span> <span class="o">/</span><span class="nv">etc</span><span class="o">/</span><span class="nv">yum</span>.<span class="nv">repos</span>.<span class="nv">d</span><span class="o">/</span><span class="nv">epel</span>.<span class="nv">repo</span> <span class="o"><<</span> <span class="nv">EOF</span>
[<span class="nv">epel</span>]
<span class="nv">name</span><span class="o">=</span><span class="nv">Epel</span> <span class="nv">rebuild</span> <span class="k">for</span> <span class="nv">i686</span>
<span class="nv">baseurl</span><span class="o">=</span><span class="nv">https</span>:<span class="o">//</span><span class="nv">buildlogs</span>.<span class="nv">centos</span>.<span class="nv">org</span><span class="o">/</span><span class="nv">c7</span><span class="o">-</span><span class="nv">epel</span><span class="o">/</span>
<span class="nv">enabled</span><span class="o">=</span><span class="mi">1</span>
<span class="nv">gpgcheck</span><span class="o">=</span><span class="mi">0</span>
<span class="nv">EOF</span>
<span class="nv">cat</span> <span class="o">></span> <span class="o">/</span><span class="nv">etc</span><span class="o">/</span><span class="nv">yum</span>.<span class="nv">repos</span>.<span class="nv">d</span><span class="o">/</span><span class="nv">kernel</span>.<span class="nv">repo</span> <span class="o"><<</span> <span class="nv">EOF</span>
[<span class="nv">kernel</span>]
<span class="nv">name</span><span class="o">=</span><span class="nv">LTS</span> <span class="nv">kernel</span> <span class="k">for</span> <span class="nv">i686</span>
<span class="nv">baseurl</span><span class="o">=</span><span class="nv">https</span>:<span class="o">//</span><span class="nv">buildlogs</span>.<span class="nv">centos</span>.<span class="nv">org</span><span class="o">/</span><span class="nv">c7</span>.<span class="mi">1708</span>.<span class="nv">exp</span>.<span class="nv">i386</span><span class="o">/</span>
<span class="nv">enabled</span><span class="o">=</span><span class="mi">1</span>
<span class="nv">gpgcheck</span><span class="o">=</span><span class="mi">0</span>
<span class="nv">EOF</span>
</pre></div>
<p>If you noticed the second kernel repository, that's because the ath5k kernel module needed for the Wifi device in the Eeepc is neither in the default kernel nor available through elrepo, but it is in the 4.9.x LTS kernel we build and maintain/update for AltArch, so let's use that one.</p>
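Both repo files follow the same minimal pattern; as a sketch, the heredoc can be wrapped in a small function (<code>make_repo</code> is my own naming, not an existing tool) so adding further repositories stays consistent:

```shell
#!/bin/bash
# make_repo DIR ID NAME BASEURL -- write DIR/ID.repo in yum's INI format,
# with GPG checking disabled, like the two heredocs above.
make_repo() {
  local dir=$1 id=$2 name=$3 baseurl=$4
  cat > "${dir}/${id}.repo" <<EOF
[${id}]
name=${name}
baseurl=${baseurl}
enabled=1
gpgcheck=0
EOF
}

# The real target directory is /etc/yum.repos.d; a temp dir works for testing:
repodir=$(mktemp -d)
make_repo "$repodir" epel   "Epel rebuild for i686" "https://buildlogs.centos.org/c7-epel/"
make_repo "$repodir" kernel "LTS kernel for i686"   "https://buildlogs.centos.org/c7.1708.exp.i386/"
```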
<p>We can install what we need (YMMV though) : </p>
<div class="highlight"><pre><span></span><span class="n">yum</span> <span class="k">update</span> <span class="o">-</span><span class="n">y</span>
<span class="n">yum</span> <span class="n">groupinstall</span> <span class="o">-</span><span class="n">y</span> <span class="s1">'X Window System'</span>
<span class="n">yum</span> <span class="n">install</span> <span class="o">-</span><span class="n">y</span> <span class="n">openbox</span> <span class="n">lightdm</span> <span class="n">lightdm</span><span class="o">-</span><span class="n">gtk</span>
<span class="n">systemctl</span> <span class="n">enable</span> <span class="n">lightdm</span><span class="p">.</span><span class="n">service</span>
<span class="n">yum</span> <span class="n">install</span> <span class="o">-</span><span class="n">y</span> <span class="n">tint2</span> <span class="n">terminator</span> <span class="n">firefox</span> <span class="n">terminus</span><span class="o">-</span><span class="n">fonts</span><span class="o">-</span><span class="n">console</span> <span class="n">terminus</span><span class="o">-</span><span class="n">fonts</span> <span class="n">network</span><span class="o">-</span><span class="n">manager</span><span class="o">-</span><span class="n">applet</span> <span class="n">gnome</span><span class="o">-</span><span class="n">keyring</span> <span class="n">dejavu</span><span class="o">-</span><span class="n">sans</span><span class="o">-</span><span class="n">fonts</span> <span class="n">dejavu</span><span class="o">-</span><span class="n">fonts</span><span class="o">-</span><span class="n">common</span> <span class="n">dejavu</span><span class="o">-</span><span class="n">serif</span><span class="o">-</span><span class="n">fonts</span> <span class="n">dejavu</span><span class="o">-</span><span class="n">sans</span><span class="o">-</span><span class="n">mono</span><span class="o">-</span><span class="n">fonts</span> <span class="k">open</span><span class="o">-</span><span class="n">sans</span><span class="o">-</span><span class="n">fonts</span> <span class="n">overpass</span><span class="o">-</span><span class="n">fonts</span> <span class="n">liberation</span><span class="o">-</span><span class="n">mono</span><span class="o">-</span><span class="n">fonts</span> <span class="n">liberation</span><span class="o">-</span><span class="n">serif</span><span class="o">-</span><span class="n">fonts</span> <span class="n">google</span><span class="o">-</span><span class="n">crosextra</span><span class="o">-</span><span class="n">caladea</span><span class="o">-</span><span class="n">fonts</span> <span class="n">google</span><span class="o">-</span><span 
class="n">crosextra</span><span class="o">-</span><span class="n">carlito</span><span class="o">-</span><span class="n">fonts</span>
<span class="n">echo</span> <span class="s1">'tint2 &'</span> <span class="o">>></span> <span class="o">/</span><span class="n">etc</span><span class="o">/</span><span class="n">xdg</span><span class="o">/</span><span class="n">openbox</span><span class="o">/</span><span class="n">autostart</span>
<span class="n">echo</span> <span class="s1">'nm-applet &'</span> <span class="o">>></span> <span class="o">/</span><span class="n">etc</span><span class="o">/</span><span class="n">xdg</span><span class="o">/</span><span class="n">openbox</span><span class="o">/</span><span class="n">autostart</span>
<span class="n">systemctl</span> <span class="n">reboot</span>
</pre></div>
<p>The line installing tint2, terminator and firefox is purely optional, but that's what I needed on my Eeepc.
Same for network-manager-applet, but once installed it gives you an easy-to-use applet integrated in the openbox environment.</p>
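One caveat with the two <code>echo >></code> lines above: re-running the setup appends duplicate entries to the autostart file. A small guard (<code>append_once</code> is my own naming) keeps it idempotent:

```shell
#!/bin/bash
# append_once LINE FILE -- append LINE to FILE only if it isn't already
# present as an exact line, so repeated runs don't duplicate entries.
append_once() {
  local line=$1 file=$2
  grep -qxF -- "$line" "$file" 2>/dev/null || printf '%s\n' "$line" >> "$file"
}

# Demonstrated on a temporary copy (the real file is /etc/xdg/openbox/autostart):
autostart=$(mktemp)
append_once 'tint2 &' "$autostart"
append_once 'nm-applet &' "$autostart"
append_once 'tint2 &' "$autostart"   # second call is a no-op
```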
<p>You can then customize it, etc, but I like it so far for what I wanted to use that old netbook for :</p>
<p><img alt="CentOS 7 i686 running on Asus Eeepc 900" src="/images/centos-7-openbox.jpg" title="CentOS 7 lightweight desktop"></p>Using Ansible Openstack modules on CentOS 72017-10-11T00:00:00+02:002017-10-11T00:00:00+02:00Fabian Arrotintag:arrfab.net,2017-10-11:/posts/2017/Oct/11/using-ansible-openstack-modules-on-centos-7/<p>Suppose that you have a RDO/Openstack cloud already in place, but that you'd want to automate some operations : what can you do ? On my side, I already <a href="/posts/2017/May/08/deploying-openstack-through-puppet-on-centos-7-a-journey/">mentioned</a> that I used puppet to deploy initial clouds, but I still prefer Ansible myself when having to launch ad-hoc tasks, or even change configuration[s]. It's particularly true for our <a href="https://ci.centos.org">CI environment</a> where we run "agentless" so all configuration changes happen through Ansible.</p>
<p>The good news is that Ansible has already some modules for <a href="http://docs.ansible.com/ansible/latest/list_of_cloud_modules.html#openstack">Openstack</a> but it has some requirements and a little bit of understanding before being able to use those.</p>
<p>First of all, all the ansible os_ modules need <a href="https://pypi.python.org/pypi/shade">"shade"</a> on the host included in the play, which will be responsible for launching all the os_ modules.
At the time of writing this post, it's <em>not</em> yet available on mirror.centos.org (a review is open so it will soon be available directly), but you can find the pkg on <a href="https://cbs.centos.org/koji/buildinfo?buildID=20086">our CBS builders</a>.</p>
<p>Once installed, a simple os_image task was directly failing, despite the fact that auth: was present, and that's due to a simple reason : Ansible os_ modules still want to use the v2 API, while it's now defaulting to v3 in the Pike release. There is no way to force ansible itself to use v3, but as it uses shade behind the scenes, there is a way to force this through <a href="https://docs.openstack.org/os-client-config/latest/index.html">os-client-config</a></p>
<p>That means that you just have to use a .yaml file (does that sound familiar for ansible ?) that will contain everything you need to know about a specific cloud, and then simply declare in ansible which cloud you're configuring.</p>
<p>That clouds.yaml file can be under $current_directory, ~/.config/openstack or /etc/openstack so it's up to you to decide where you want to temporarily host it, but I selected /etc/openstack/ :</p>
<div class="highlight"><pre><span></span><span class="o">-</span> <span class="nv">name</span>: <span class="nv">Ensuring</span> <span class="nv">we</span> <span class="nv">have</span> <span class="nv">required</span> <span class="nv">pkgs</span> <span class="k">for</span> <span class="nv">ansible</span><span class="o">/</span><span class="nv">openstack</span>
  <span class="nv">yum</span>:
    <span class="nv">name</span>: <span class="nv">python2</span><span class="o">-</span><span class="nv">shade</span>
    <span class="nv">state</span>: <span class="nv">installed</span>
<span class="o">-</span> <span class="nv">name</span>: <span class="nv">Ensuring</span> <span class="nv">local</span> <span class="nv">directory</span> <span class="nv">to</span> <span class="nv">hold</span> <span class="nv">the</span> <span class="nv">os</span><span class="o">-</span><span class="nv">client</span><span class="o">-</span><span class="nv">config</span> <span class="nv">file</span>
  <span class="nv">file</span>:
    <span class="nv">path</span>: <span class="o">/</span><span class="nv">etc</span><span class="o">/</span><span class="nv">openstack</span>
    <span class="nv">state</span>: <span class="nv">directory</span>
    <span class="nv">owner</span>: <span class="nv">root</span>
    <span class="nv">group</span>: <span class="nv">root</span>
<span class="o">-</span> <span class="nv">name</span>: <span class="nv">Adding</span> <span class="nv">clouds</span>.<span class="nv">yaml</span> <span class="k">for</span> <span class="nv">os</span><span class="o">-</span><span class="nv">client</span><span class="o">-</span><span class="nv">config</span> <span class="k">for</span> <span class="nv">further</span> <span class="nv">actions</span>
  <span class="nv">template</span>:
    <span class="nv">src</span>: <span class="nv">clouds</span>.<span class="nv">yaml</span>.<span class="nv">j2</span>
    <span class="nv">dest</span>: <span class="o">/</span><span class="nv">etc</span><span class="o">/</span><span class="nv">openstack</span><span class="o">/</span><span class="nv">clouds</span>.<span class="nv">yaml</span>
    <span class="nv">owner</span>: <span class="nv">root</span>
    <span class="nv">group</span>: <span class="nv">root</span>
    <span class="nv">mode</span>: <span class="mi">0700</span>
</pre></div>
<p>Of course such clouds.yaml file is itself a jinja2 template distributed by ansible on the host in the play <em>before</em> using the os_* modules : </p>
<div class="highlight"><pre><span></span><span class="x">clouds:</span>
 <span class="x"> </span><span class="cp">{{</span> <span class="nv">cloud_name</span> <span class="cp">}}</span><span class="x">:</span>
   <span class="x"> auth:</span>
     <span class="x"> username: admin</span>
     <span class="x"> project_name: admin</span>
     <span class="x"> password: </span><span class="cp">{{</span> <span class="nv">openstack_admin_pass</span> <span class="cp">}}</span><span class="x"></span>
     <span class="x"> auth_url: http://</span><span class="cp">{{</span> <span class="nv">openstack_controller</span> <span class="cp">}}</span><span class="x">:5000/v3/</span>
     <span class="x"> user_domain_name: default</span>
     <span class="x"> project_domain_name: default</span>
   <span class="x"> identity_api_version: 3</span>
</pre></div>
<p>You just have to adapt it to your needs (see the <a href="https://docs.openstack.org/os-client-config/latest/user/configuration.html">doc</a> for this), but the interesting part is identity_api_version, used to force v3.</p>
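As a reminder of the lookup order mentioned earlier (current directory, then ~/.config/openstack, then /etc/openstack), this simplified sketch mimics how the first matching clouds.yaml would be picked up; the real os-client-config library checks more locations and merges files, and the helper name is my own:

```shell
#!/bin/bash
# find_clouds_yaml DIR... -- print the first DIR/clouds.yaml that exists,
# mirroring (in simplified form) os-client-config's search order.
find_clouds_yaml() {
  local dir
  for dir in "$@"; do
    if [ -f "${dir}/clouds.yaml" ]; then
      printf '%s\n' "${dir}/clouds.yaml"
      return 0
    fi
  done
  return 1
}

# Usage with the three documented locations:
# find_clouds_yaml . "$HOME/.config/openstack" /etc/openstack
```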
<p>Then, you can use all that in a simple way through ansible tasks, in this case adding users to a project :</p>
<div class="highlight"><pre><span></span><span class="o">-</span><span class="w"> </span><span class="nl">name</span><span class="p">:</span><span class="w"> </span><span class="n">Configuring</span><span class="w"> </span><span class="n">OpenStack</span><span class="w"> </span><span class="k">user</span><span class="o">[</span><span class="n">s</span><span class="o">]</span><span class="w"></span>
 <span class="w"> </span><span class="nl">os_user</span><span class="p">:</span><span class="w"></span>
   <span class="w"> </span><span class="nl">cloud</span><span class="p">:</span><span class="w"> </span><span class="ss">"{{ cloud_name }}"</span><span class="w"></span>
   <span class="w"> </span><span class="nl">default_project</span><span class="p">:</span><span class="w"> </span><span class="ss">"{{ item.0.name }}"</span><span class="w"></span>
   <span class="w"> </span><span class="k">domain</span><span class="err">:</span><span class="w"> </span><span class="ss">"{{ item.0.domain_id }}"</span><span class="w"></span>
   <span class="w"> </span><span class="nl">name</span><span class="p">:</span><span class="w"> </span><span class="ss">"{{ item.1.login }}"</span><span class="w"></span>
   <span class="w"> </span><span class="nl">email</span><span class="p">:</span><span class="w"> </span><span class="ss">"{{ item.1.email }}"</span><span class="w"></span>
   <span class="w"> </span><span class="nl">password</span><span class="p">:</span><span class="w"> </span><span class="ss">"{{ item.1.password }}"</span><span class="w"> </span>
 <span class="w"> </span><span class="nl">with_subelements</span><span class="p">:</span><span class="w"></span>
   <span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="ss">"{{ cloud_projects }}"</span><span class="w"></span>
   <span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">users</span><span class="w"> </span>
 <span class="w"> </span><span class="nl">no_log</span><span class="p">:</span><span class="w"> </span><span class="k">True</span><span class="w"></span>
</pre></div>
<p>From a variables point of view, I decided to just have a simple structure to host project/users/roles/quotas like this : </p>
<div class="highlight"><pre><span></span><span class="nl">cloud_projects</span><span class="p">:</span><span class="w"></span>
 <span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="nl">name</span><span class="p">:</span><span class="w"> </span><span class="n">demo</span><span class="w"></span>
   <span class="w"> </span><span class="nl">description</span><span class="p">:</span><span class="w"> </span><span class="n">demo</span><span class="w"> </span><span class="n">project</span><span class="w"></span>
   <span class="w"> </span><span class="nl">domain_id</span><span class="p">:</span><span class="w"> </span><span class="k">default</span><span class="w"></span>
   <span class="w"> </span><span class="nl">quota_cores</span><span class="p">:</span><span class="w"> </span><span class="mi">20</span><span class="w"></span>
   <span class="w"> </span><span class="nl">quota_instances</span><span class="p">:</span><span class="w"> </span><span class="mi">10</span><span class="w"></span>
   <span class="w"> </span><span class="nl">quota_ram</span><span class="p">:</span><span class="w"> </span><span class="mi">40960</span><span class="w"></span>
   <span class="w"> </span><span class="nl">users</span><span class="p">:</span><span class="w"></span>
     <span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="nl">login</span><span class="p">:</span><span class="w"> </span><span class="n">demo_user</span><span class="w"></span>
       <span class="w"> </span><span class="nl">email</span><span class="p">:</span><span class="w"> </span><span class="n">demo</span><span class="nv">@centos</span><span class="p">.</span><span class="n">org</span><span class="w"></span>
       <span class="w"> </span><span class="nl">password</span><span class="p">:</span><span class="w"> </span><span class="n">Ch</span><span class="nv">@ngeM3</span><span class="w"></span>
       <span class="w"> </span><span class="k">role</span><span class="err">:</span><span class="w"> </span><span class="k">admin</span><span class="w"> </span><span class="err">#</span><span class="w"> </span><span class="n">can</span><span class="w"> </span><span class="n">be</span><span class="w"> </span><span class="n">_member_</span><span class="w"> </span><span class="ow">or</span><span class="w"> </span><span class="k">admin</span><span class="w"></span>
     <span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="nl">login</span><span class="p">:</span><span class="w"> </span><span class="n">demo_user2</span><span class="w"></span>
       <span class="w"> </span><span class="nl">email</span><span class="p">:</span><span class="w"> </span><span class="n">demo2</span><span class="nv">@centos</span><span class="p">.</span><span class="n">org</span><span class="w"></span>
       <span class="w"> </span><span class="nl">password</span><span class="p">:</span><span class="w"> </span><span class="n">Ch</span><span class="nv">@ngeMe2</span><span class="w"></span>
</pre></div>
<p>Now that it works, you can explore all the other os_* modules and I'm already using those to :</p>
<ul>
<li>Import cloud images in glance</li>
<li>Create networks and subnets in neutron</li>
<li>Create projects/users/roles in keystone</li>
<li>Change quotas for those projects</li>
</ul>
<p>I'm just discovering how powerful those tools are, so I'll probably discover much more interesting things to do with those later.</p>Using CentOS 7 armhfp VM on CentOS 7 aarch642017-09-29T00:00:00+02:002017-09-29T00:00:00+02:00Fabian Arrotintag:arrfab.net,2017-09-29:/posts/2017/Sep/29/using-centos-7-armhfp-vm-on-centos-7-aarch64/<p>Recently we got our hands on some aarch64 (aka ARMv8 / 64Bits) nodes running in a remote DC. On my (already too long) TODO/TOTEST list I had the idea of testing armhfp VM on top of aarch64. Reason is that when I need to test our packages, using my own <a href="https://www.cubietruck.com/">Cubietruck</a> or <a href="https://www.raspberrypi.org/">RaspberryPi3</a> is time consuming : removing the sdcard, reflashing with the correct <a href="http://mirror.centos.org/altarch/7/isos/armhfp/">CentOS 7 image</a> and booting/testing the pkg/update/etc ...</p>
<p>So is it possible to just automate this, using an available aarch64 node as hypervisor ? Sure ! And it's pretty straightforward if you have already played with libvirt.
So let's start with a minimal CentOS 7 aarch64 setup and then : </p>
<div class="highlight"><pre><span></span><span class="n">yum</span> <span class="n">install</span> <span class="n">qemu</span><span class="o">-</span><span class="n">kvm</span><span class="o">-</span><span class="n">tools</span> <span class="n">qemu</span><span class="o">-</span><span class="n">kvm</span> <span class="n">virt</span><span class="o">-</span><span class="n">install</span> <span class="n">libvirt</span> <span class="n">libvirt</span><span class="o">-</span><span class="n">python</span> <span class="n">libguestfs</span><span class="o">-</span><span class="n">tools</span><span class="o">-</span><span class="k">c</span>
<span class="n">systemctl</span> <span class="n">enable</span> <span class="n">libvirtd</span> <span class="c1">--now</span>
</pre></div>
<p>That's pretty basic, but for armhfp we'll have to do some extra steps : qemu normally simulates a bios/uefi boot, which armhfp doesn't support, and qemu doesn't emulate the uboot stage that would normally chainload to the RootFS in the guest VM.</p>
<p>So here is just what we need : </p>
<ul>
<li>Import the RootFS from an existing image</li>
</ul>
<div class="highlight"><pre><span></span><span class="n">curl</span> <span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">mirror</span><span class="p">.</span><span class="n">centos</span><span class="p">.</span><span class="n">org</span><span class="o">/</span><span class="n">altarch</span><span class="o">/</span><span class="mi">7</span><span class="o">/</span><span class="n">isos</span><span class="o">/</span><span class="n">armhfp</span><span class="o">/</span><span class="n">CentOS</span><span class="o">-</span><span class="n">Userland</span><span class="o">-</span><span class="mi">7</span><span class="o">-</span><span class="n">armv7hl</span><span class="o">-</span><span class="n">Minimal</span><span class="o">-</span><span class="mi">1708</span><span class="o">-</span><span class="n">CubieTruck</span><span class="p">.</span><span class="n">img</span><span class="p">.</span><span class="n">xz</span><span class="o">|</span><span class="n">unxz</span> <span class="o">>/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">libvirt</span><span class="o">/</span><span class="n">images</span><span class="o">/</span><span class="n">CentOS</span><span class="o">-</span><span class="n">Userland</span><span class="o">-</span><span class="mi">7</span><span class="o">-</span><span class="n">armv7hl</span><span class="o">-</span><span class="n">Minimal</span><span class="o">-</span><span class="mi">1708</span><span class="o">-</span><span class="n">CubieTruck</span><span class="p">.</span><span class="n">img</span>
</pre></div>
<ul>
<li>Convert image to <a href="https://en.wikipedia.org/wiki/Qcow">qcow2</a> (that will give us more flexibility) and extend it a little bit</li>
</ul>
<div class="highlight"><pre><span></span><span class="n">qemu</span><span class="o">-</span><span class="n">img</span> <span class="k">convert</span> <span class="o">-</span><span class="n">f</span> <span class="n">raw</span> <span class="o">-</span><span class="n">O</span> <span class="n">qcow2</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">libvirt</span><span class="o">/</span><span class="n">images</span><span class="o">/</span><span class="n">CentOS</span><span class="o">-</span><span class="n">Userland</span><span class="o">-</span><span class="mi">7</span><span class="o">-</span><span class="n">armv7hl</span><span class="o">-</span><span class="n">Minimal</span><span class="o">-</span><span class="mi">1708</span><span class="o">-</span><span class="n">CubieTruck</span><span class="p">.</span><span class="n">img</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">libvirt</span><span class="o">/</span><span class="n">images</span><span class="o">/</span><span class="n">CentOS</span><span class="o">-</span><span class="n">Userland</span><span class="o">-</span><span class="mi">7</span><span class="o">-</span><span class="n">armv7hl</span><span class="o">-</span><span class="n">Minimal</span><span class="o">-</span><span class="mi">1708</span><span class="o">-</span><span class="n">guest</span><span class="p">.</span><span class="n">qcow2</span>
<span class="n">qemu</span><span class="o">-</span><span class="n">img</span> <span class="n">resize</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">libvirt</span><span class="o">/</span><span class="n">images</span><span class="o">/</span><span class="n">CentOS</span><span class="o">-</span><span class="n">Userland</span><span class="o">-</span><span class="mi">7</span><span class="o">-</span><span class="n">armv7hl</span><span class="o">-</span><span class="n">Minimal</span><span class="o">-</span><span class="mi">1708</span><span class="o">-</span><span class="n">guest</span><span class="p">.</span><span class="n">qcow2</span> <span class="o">+</span><span class="mi">15</span><span class="k">G</span>
</pre></div>
<ul>
<li>Extract kernel+initrd as libvirt will boot that directly for the VM</li>
</ul>
<div class="highlight"><pre><span></span><span class="n">mkdir</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">libvirt</span><span class="o">/</span><span class="n">armhfp</span><span class="o">-</span><span class="n">boot</span>
<span class="n">virt</span><span class="o">-</span><span class="k">copy</span><span class="o">-</span><span class="k">out</span> <span class="o">-</span><span class="n">a</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">libvirt</span><span class="o">/</span><span class="n">images</span><span class="o">/</span><span class="n">CentOS</span><span class="o">-</span><span class="n">Userland</span><span class="o">-</span><span class="mi">7</span><span class="o">-</span><span class="n">armv7hl</span><span class="o">-</span><span class="n">Minimal</span><span class="o">-</span><span class="mi">1708</span><span class="o">-</span><span class="n">guest</span><span class="p">.</span><span class="n">qcow2</span> <span class="o">/</span><span class="n">boot</span><span class="o">/</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">libvirt</span><span class="o">/</span><span class="n">armhfp</span><span class="o">-</span><span class="n">boot</span><span class="o">/</span>
</pre></div>
<p>So now that we have a RootFS, and also a kernel/initrd, we can just use virt-install to create the VM (pointing to the existing backend qcow2) :</p>
<div class="highlight"><pre><span></span><span class="n">virt</span><span class="o">-</span><span class="n">install</span> \
<span class="o">--</span><span class="n">name</span> <span class="n">centos7_armhfp</span> \
<span class="o">--</span><span class="n">memory</span> <span class="mi">4096</span> \
<span class="o">--</span><span class="n">boot</span> <span class="n">kernel</span><span class="o">=/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">libvirt</span><span class="o">/</span><span class="n">armhfp</span><span class="o">-</span><span class="n">boot</span><span class="o">/</span><span class="n">boot</span><span class="o">/</span><span class="n">vmlinuz</span><span class="o">-</span><span class="mf">4.9</span><span class="o">.</span><span class="mi">40</span><span class="o">-</span><span class="mf">203.</span><span class="n">el7</span><span class="o">.</span><span class="n">armv7hl</span><span class="p">,</span><span class="n">initrd</span><span class="o">=/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">libvirt</span><span class="o">/</span><span class="n">armhfp</span><span class="o">-</span><span class="n">boot</span><span class="o">/</span><span class="n">boot</span><span class="o">/</span><span class="n">initramfs</span><span class="o">-</span><span class="mf">4.9</span><span class="o">.</span><span class="mi">40</span><span class="o">-</span><span class="mf">203.</span><span class="n">el7</span><span class="o">.</span><span class="n">armv7hl</span><span class="o">.</span><span class="n">img</span><span class="p">,</span><span class="n">kernel_args</span><span class="o">=</span><span class="s2">"console=ttyAMA0 rw root=/dev/sda3"</span> \
<span class="o">--</span><span class="n">disk</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">libvirt</span><span class="o">/</span><span class="n">images</span><span class="o">/</span><span class="n">CentOS</span><span class="o">-</span><span class="n">Userland</span><span class="o">-</span><span class="mi">7</span><span class="o">-</span><span class="n">armv7hl</span><span class="o">-</span><span class="n">Minimal</span><span class="o">-</span><span class="mi">1708</span><span class="o">-</span><span class="n">guest</span><span class="o">.</span><span class="n">qcow2</span> \
<span class="o">--</span><span class="kn">import</span> \
<span class="o">--</span><span class="n">arch</span> <span class="n">armv7l</span> \
<span class="o">--</span><span class="n">machine</span> <span class="n">virt</span>
</pre></div>
<p>And here we go : we have an armhfp VM that boots <em>really</em> fast (compared to an armhfp board using a microSD card, of course)</p>
<p>At this stage, you can configure the node, etc. The only thing you have to remember is that the kernel is of course provided from <em>outside</em> the VM, so just extract it from an updated VM to boot on that new kernel. Let's show how to do that: in the above example, we configured the VM to run with 4GB of RAM, but only 3GB are really seen inside (remember 32-bit mode and so the need for <a href="https://en.wikipedia.org/wiki/Physical_Address_Extension">PAE</a> on i386 ?)</p>
<p>So let's use this example to show how to switch kernels. From the armhfp VM : </p>
<div class="highlight"><pre><span></span># <span class="nv">Let's</span> <span class="nv">extend</span> <span class="nv">the</span> <span class="nv">filesystem</span> <span class="nv">first</span> <span class="nv">as</span> <span class="nv">we</span> <span class="nv">now</span> <span class="nv">have</span> <span class="nv">a</span> <span class="nv">bigger</span> <span class="nv">disk</span>
<span class="nv">growpart</span> <span class="o">/</span><span class="nv">dev</span><span class="o">/</span><span class="nv">sda</span> <span class="mi">3</span>
<span class="nv">resize2fs</span> <span class="o">/</span><span class="nv">dev</span><span class="o">/</span><span class="nv">sda3</span>
<span class="nv">yum</span> <span class="nv">update</span> <span class="o">-</span><span class="nv">y</span>
<span class="nv">yum</span> <span class="nv">install</span> <span class="nv">kernel</span><span class="o">-</span><span class="nv">lpae</span>
<span class="nv">systemctl</span> <span class="nv">poweroff</span> # <span class="nv">we</span><span class="s1">'</span><span class="s">ll modify libvirt conf file for new kernel</span>
</pre></div>
<p>Back on the hypervisor, we can again extract the needed files :</p>
<div class="highlight"><pre><span></span><span class="n">virt</span><span class="o">-</span><span class="k">copy</span><span class="o">-</span><span class="k">out</span> <span class="o">-</span><span class="n">a</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">libvirt</span><span class="o">/</span><span class="n">images</span><span class="o">/</span><span class="n">CentOS</span><span class="o">-</span><span class="n">Userland</span><span class="o">-</span><span class="mi">7</span><span class="o">-</span><span class="n">armv7hl</span><span class="o">-</span><span class="n">Minimal</span><span class="o">-</span><span class="mi">1708</span><span class="o">-</span><span class="n">guest</span><span class="p">.</span><span class="n">qcow2</span> <span class="o">/</span><span class="n">boot</span><span class="o">/</span><span class="n">vmlinuz</span><span class="o">-</span><span class="mi">4</span><span class="p">.</span><span class="mi">9</span><span class="p">.</span><span class="mi">50</span><span class="o">-</span><span class="mi">203</span><span class="p">.</span><span class="n">el7</span><span class="p">.</span><span class="n">armv7hl</span><span class="o">+</span><span class="n">lpae</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">libvirt</span><span class="o">/</span><span class="n">armhfp</span><span class="o">-</span><span class="n">boot</span><span class="o">/</span><span class="n">boot</span><span class="o">/</span>
<span class="n">virt</span><span class="o">-</span><span class="k">copy</span><span class="o">-</span><span class="k">out</span> <span class="o">-</span><span class="n">a</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">libvirt</span><span class="o">/</span><span class="n">images</span><span class="o">/</span><span class="n">CentOS</span><span class="o">-</span><span class="n">Userland</span><span class="o">-</span><span class="mi">7</span><span class="o">-</span><span class="n">armv7hl</span><span class="o">-</span><span class="n">Minimal</span><span class="o">-</span><span class="mi">1708</span><span class="o">-</span><span class="n">guest</span><span class="p">.</span><span class="n">qcow2</span> <span class="o">/</span><span class="n">boot</span><span class="o">/</span><span class="n">initramfs</span><span class="o">-</span><span class="mi">4</span><span class="p">.</span><span class="mi">9</span><span class="p">.</span><span class="mi">50</span><span class="o">-</span><span class="mi">203</span><span class="p">.</span><span class="n">el7</span><span class="p">.</span><span class="n">armv7hl</span><span class="o">+</span><span class="n">lpae</span><span class="p">.</span><span class="n">img</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="n">libvirt</span><span class="o">/</span><span class="n">armhfp</span><span class="o">-</span><span class="n">boot</span><span class="o">/</span><span class="n">boot</span><span class="o">/</span>
</pre></div>
<p>And just <code>virsh edit centos7_armhfp</code> so that kernel and initrd point to the correct location:</p>
<div class="highlight"><pre><span></span><span class="nt"><kernel></span>/var/lib/libvirt/armhfp-boot/boot/vmlinuz-4.9.50-203.el7.armv7hl+lpae<span class="nt"></kernel></span>
<span class="nt"><initrd></span>/var/lib/libvirt/armhfp-boot/boot/initramfs-4.9.50-203.el7.armv7hl+lpae.img<span class="nt"></initrd></span>
</pre></div>
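<p>For context, those two elements live inside the <code>&lt;os&gt;</code> section of the domain XML that virt-install generated. A sketch of what that section could look like after the edit, based on the virt-install flags used above (the exact element layout may differ between libvirt versions):</p>

```xml
<os>
  <type arch='armv7l' machine='virt'>hvm</type>
  <kernel>/var/lib/libvirt/armhfp-boot/boot/vmlinuz-4.9.50-203.el7.armv7hl+lpae</kernel>
  <initrd>/var/lib/libvirt/armhfp-boot/boot/initramfs-4.9.50-203.el7.armv7hl+lpae.img</initrd>
  <cmdline>console=ttyAMA0 rw root=/dev/sda3</cmdline>
</os>
```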
<p>Now that we have a "gold" image, we can even use existing tools to quickly provision other nodes on that hypervisor ! :</p>
<div class="highlight"><pre><span></span><span class="n">time</span> <span class="n">virt</span><span class="o">-</span><span class="n">clone</span> <span class="c1">--original centos7_armhfp --name armhfp_guest1 --file /var/lib/libvirt/images/armhfp_guest1.qcow2</span>
<span class="n">Allocating</span> <span class="s1">'armhfp_guest1.qcow2'</span> <span class="o">|</span> <span class="mi">18</span> <span class="n">GB</span> <span class="mi">00</span><span class="p">:</span><span class="mi">00</span><span class="p">:</span><span class="mi">02</span>
<span class="n">Clone</span> <span class="s1">'armhfp_guest1'</span> <span class="n">created</span> <span class="n">successfully</span><span class="p">.</span>
<span class="nb">real</span> <span class="mi">0</span><span class="n">m2</span><span class="p">.</span><span class="mi">809</span><span class="n">s</span>
<span class="k">user</span> <span class="mi">0</span><span class="n">m0</span><span class="p">.</span><span class="mi">473</span><span class="n">s</span>
<span class="n">sys</span> <span class="mi">0</span><span class="n">m0</span><span class="p">.</span><span class="mi">062</span><span class="n">s</span>
<span class="n">time</span> <span class="n">virt</span><span class="o">-</span><span class="n">sysprep</span> <span class="c1">--add /var/lib/libvirt/images/armhfp_guest1.qcow2 --operations defaults,net-hwaddr,machine-id,net-hostname,ssh-hostkeys,udev-persistent-net --hostname guest1</span>
<span class="n">virsh</span> <span class="k">start</span> <span class="n">armhfp_guest1</span>
</pre></div>
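<p>The clone/sysprep/start sequence above is easy to loop over if you want several guests at once. A dry-run sketch (guest names and image paths are examples; drop the <code>echo</code> prefixes to actually run the libvirt commands):</p>

```shell
#!/bin/sh
# Print the provisioning commands for three clones of the "gold" image.
# Remove the "echo" prefixes to execute them for real (needs libvirt tools).
for i in 1 2 3; do
  guest="armhfp_guest${i}"
  echo virt-clone --original centos7_armhfp --name "${guest}" \
    --file "/var/lib/libvirt/images/${guest}.qcow2"
  echo virt-sysprep --add "/var/lib/libvirt/images/${guest}.qcow2" \
    --operations defaults,net-hwaddr,machine-id,net-hostname,ssh-hostkeys,udev-persistent-net \
    --hostname "guest${i}"
  echo virsh start "${guest}"
done
```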
<p>As simple as that.
Of course, in the previous example we were just using the default network from libvirt, and not any bridge, but you get the idea : all the rest works with well-known libvirt concepts on linux.</p>Using NFS for OpenStack (glance,nova) with selinux2017-07-28T00:00:00+02:002017-07-28T00:00:00+02:00Fabian Arrotintag:arrfab.net,2017-07-28:/posts/2017/Jul/28/using-nfs-for-openstack-glancenova-with-selinux/<p>As announced already, I was (among other things) playing with Openstack/RDO and had deployed some small openstack setups in the CentOS Infra. Then I had to look at our existing <a href="https://wiki.centos.org/DevCloud">DevCloud</a> setup. This setup was based on Opennebula running on CentOS 6, and also using Gluster as backend for the VM store. That's when I found out that Gluster isn't a valid option anymore : Gluster was deprecated and has now even been removed from <a href="https://docs.openstack.org/releasenotes/cinder/ocata.html">Cinder</a>. Sad, as one advantage of gluster is that you could (you had to !) use libgfapi so that the qemu-kvm process could talk directly to gluster through libgfapi instead of accessing VM images over locally mounted gluster volumes (please, don't even try to do that through fuse).</p>
<p>So what could be a replacement for Gluster from an openstack side ? I still have some dedicated nodes for storage backend[s], but not enough to even just think about Ceph. So it seems my only option was to consider NFS. (Technically speaking, the driver was removed from cinder, but I could have tried to use it only for glance and nova, as I have no need for cinder for the DevCloud project, but clearly it would be dangerous for potential …</p><p>As announced already, I was (among other things) playing with Openstack/RDO and had deployed some small openstack setups in the CentOS Infra. Then I had to look at our existing <a href="https://wiki.centos.org/DevCloud">DevCloud</a> setup. This setup was based on Opennebula running on CentOS 6, and also using Gluster as backend for the VM store. That's when I found out that Gluster isn't a valid option anymore : Gluster was deprecated and has now even been removed from <a href="https://docs.openstack.org/releasenotes/cinder/ocata.html">Cinder</a>. Sad, as one advantage of gluster is that you could (you had to !) use libgfapi so that the qemu-kvm process could talk directly to gluster through libgfapi instead of accessing VM images over locally mounted gluster volumes (please, don't even try to do that through fuse).</p>
<p>So what could be a replacement for Gluster from an openstack side ? I still have some dedicated nodes for storage backend[s], but not enough to even just think about Ceph. So it seems my only option was to consider NFS. (Technically speaking, the driver was removed from cinder, but I could have tried to use it only for glance and nova, as I have no need for cinder for the DevCloud project, but clearly it would be dangerous for potential upgrades)</p>
<p>It's not that I'm a fan of storing qcow2 images on top of NFS, but it seemed to be my only option, and at least the most transparent/least intrusive path, should I need to migrate to something else later.
So let's test this first, using NFS through <a href="http://en.wikipedia.org/wiki/InfiniBand">Infiniband</a> (using <a href="https://www.kernel.org/doc/Documentation/infiniband/ipoib.txt">IPoIB</a>), and so at "good speed" (I still have the infiniband hardware in place, currently running for gluster, and that will be replaced)</p>
<p>It's easy to mount the NFS exported directory under /var/lib/glance/images for glance, and then, on every compute node, another NFS export under /var/lib/nova/instances/.</p>
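<p>For reference, the corresponding mounts could look like this in /etc/fstab (the server name and export paths below are made up; adapt them to your NFS server):</p>

```text
# controller node (glance)
nfsserver.example.org:/exports/glance  /var/lib/glance/images   nfs  _netdev  0 0
# compute nodes (nova)
nfsserver.example.org:/exports/nova    /var/lib/nova/instances  nfs  _netdev  0 0
```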
<p>That's where you have to see what would be blocked by SELinux : the current policy shipped with openstack-selinux-0.8.6-0 (from Ocata) doesn't allow that.</p>
<p>I initially tested services one by one and decided to open a <a href="https://github.com/redhat-openstack/openstack-selinux/pull/13">Pull Request</a> for this, but in the meantime I rebuilt a custom selinux policy that seems to do the job in my rdo playground.</p>
<p>Here is the .te file that you can compile into a usable .pp policy file : </p>
<div class="highlight"><pre><span></span><span class="nv">module</span> <span class="nv">os</span><span class="o">-</span><span class="nv">local</span><span class="o">-</span><span class="nv">nfs</span> <span class="mi">0</span>.<span class="mi">2</span><span class="c1">;</span>
<span class="nv">require</span> {
<span class="nv">type</span> <span class="nv">glance_api_t</span><span class="c1">;</span>
<span class="nv">type</span> <span class="nv">virtlogd_t</span><span class="c1">;</span>
<span class="nv">type</span> <span class="nv">nfs_t</span><span class="c1">;</span>
<span class="nv">class</span> <span class="nv">file</span> { <span class="nv">append</span> <span class="nv">getattr</span> <span class="nv">open</span> <span class="nv">read</span> <span class="nv">write</span> <span class="k">unlink</span> <span class="nv">create</span> }<span class="c1">;</span>
<span class="nv">class</span> <span class="nv">dir</span> { <span class="nv">search</span> <span class="nv">getattr</span> <span class="nv">write</span> <span class="nv">remove_name</span> <span class="nv">create</span> <span class="nv">add_name</span> }<span class="c1">;</span>
}
#<span class="o">=============</span> <span class="nv">glance_api_t</span> <span class="o">==============</span>
<span class="nv">allow</span> <span class="nv">glance_api_t</span> <span class="nv">nfs_t</span>:<span class="nv">dir</span> { <span class="nv">search</span> <span class="nv">getattr</span> <span class="nv">write</span> <span class="nv">remove_name</span> <span class="nv">create</span> <span class="nv">add_name</span> }<span class="c1">;</span>
<span class="nv">allow</span> <span class="nv">glance_api_t</span> <span class="nv">nfs_t</span>:<span class="nv">file</span> { <span class="nv">write</span> <span class="nv">getattr</span> <span class="k">unlink</span> <span class="nv">open</span> <span class="nv">create</span> <span class="nv">read</span>}<span class="c1">;</span>
#<span class="o">=============</span> <span class="nv">virtlogd_t</span> <span class="o">==============</span>
<span class="nv">allow</span> <span class="nv">virtlogd_t</span> <span class="nv">nfs_t</span>:<span class="nv">dir</span> <span class="nv">search</span><span class="c1">;</span>
<span class="nv">allow</span> <span class="nv">virtlogd_t</span> <span class="nv">nfs_t</span>:<span class="nv">file</span> { <span class="nv">append</span> <span class="nv">getattr</span> <span class="nv">open</span> }<span class="c1">;</span>
</pre></div>
<p>Of course you also need to enable some booleans. Some are already loaded by openstack-selinux (and you can see that from the enabled booleans by looking at /etc/selinux/targeted/active/booleans.local) but you also now need <code>virt_use_nfs=1</code></p>
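<p>To turn the .te source into a loadable module and flip the boolean, the standard SELinux module workflow applies; a sketch (run as root, with the policy toolchain installed):</p>

```text
# compile the type enforcement source into a module, package it, load it
checkmodule -M -m -o os-local-nfs.mod os-local-nfs.te
semodule_package -o os-local-nfs.pp -m os-local-nfs.mod
semodule -i os-local-nfs.pp
# and enable the boolean persistently
setsebool -P virt_use_nfs on
```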
<p>Now that it works, I can replay that (all that coming from puppet) on the DevCloud nodes </p>Linking Foreman with Zabbix through MQTT2017-05-16T00:00:00+02:002017-05-16T00:00:00+02:00Fabian Arrotintag:arrfab.net,2017-05-16:/posts/2017/May/16/linking-foreman-with-zabbix-through-mqtt/<p>It's been a while since I thought about this design, but I finally had time to implement it the proper way, and "just in time" as I needed recently to migrate our <a href="https://www.theforeman.org">Foreman</a> instance to another host (from CentOS 6 to CentOS 7)</p>
<p>Within the CentOS Infra, we use Foreman as an <a href="https://docs.puppet.com/puppet/4.10/nodes_external.html#what-is-an-enc">ENC</a> for our Puppet environments (multiple ones). For full automation between configuration management and monitoring, you need some "glue". The idea is that whatever you describe at the configuration management level should be authoritative and so automatically configuring the monitoring solution you have in place in your Infra.</p>
<p>In our case, that means that we have Foreman/puppet on one side, and <a href="http://www.zabbix.com">Zabbix</a> on the other side. Let's see how we can "link" the two sides.</p>
<p>What I've seen so far is that you use <a href="https://docs.puppet.com/puppet/4.10/lang_exported.html">exported resources</a> on each node, store that in another <a href="https://docs.puppet.com/puppetdb/4.4/install_via_module.html">PuppetDB</a>, and then on the monitoring node, reapply all those resources. Problem with such solution is that it's "expensive" and when one thinks about it, a little bit strange to export the "knowledge" from Foreman back into another DB, and then let puppet compiles a <em>huge</em> catalog at the monitoring side, even if nothing …</p><p>It's been a while since I thought about this design, but I finally had time to implement it the proper way, and "just in time" as I needed recently to migrate our <a href="https://www.theforeman.org">Foreman</a> instance to another host (from CentOS 6 to CentOS 7)</p>
<p>Within the CentOS Infra, we use Foreman as an <a href="https://docs.puppet.com/puppet/4.10/nodes_external.html#what-is-an-enc">ENC</a> for our Puppet environments (multiple ones). For full automation between configuration management and monitoring, you need some "glue". The idea is that whatever you describe at the configuration management level should be authoritative and so automatically configuring the monitoring solution you have in place in your Infra.</p>
<p>In our case, that means that we have Foreman/puppet on one side, and <a href="http://www.zabbix.com">Zabbix</a> on the other side. Let's see how we can "link" the two sides.</p>
<p>What I've seen so far is that you use <a href="https://docs.puppet.com/puppet/4.10/lang_exported.html">exported resources</a> on each node, store that in another <a href="https://docs.puppet.com/puppetdb/4.4/install_via_module.html">PuppetDB</a>, and then on the monitoring node, reapply all those resources. Problem with such solution is that it's "expensive" and when one thinks about it, a little bit strange to export the "knowledge" from Foreman back into another DB, and then let puppet compiles a <em>huge</em> catalog at the monitoring side, even if nothing was changed.</p>
<p>One issue is also that in our Zabbix setup, we also have some nodes that aren't really managed by Foreman/puppet (but by other automation, around <a href="http://www.ansible.com">Ansible</a>), so I had to use an intermediate step that other tools can also use/abuse for the same reason.</p>
<p>The other reason also is that I admit that I'm a fan of "event driven" configuration change, so my idea was :</p>
<ul>
<li>update a host in Foreman (or groups of hosts, etc)</li>
<li>publish that change on a secure network through a message queue (asynchronously, so that it doesn't slow down the foreman update operation itself)</li>
<li>let the Zabbix server know about that change and apply it (like linking a template to a host)</li>
</ul>
<p>So the good news is that it can be done really easily with several components :</p>
<ul>
<li><a href="https://github.com/theforeman/foreman_hooks">foreman hooks</a></li>
<li><a href="https://mosquitto.org/">Mosquitto</a>, a very lightweight MQTT broker/pub/sub client</li>
<li><a href="https://github.com/usit-gd/zabbix-cli">zabbix-cli</a> , to let us talk to the Zabbix API</li>
</ul>
<p>Here is a small overview of the process :</p>
<p><img alt="Foreman MQTT Zabbix" src="/images/mqtt-foreman-zabbix.png" title="Foreman MQTT Zabbix"></p>
<h2>Foreman hooks</h2>
<p>Setting up foreman hooks is really easy: just install the pkg itself (tfm-rubygem-foreman_hooks.noarch), read the <a href="https://github.com/theforeman/foreman_hooks">Documentation</a>, and then create your scripts. There are some examples for Bash and python in the <a href="https://github.com/theforeman/foreman_hooks/tree/master/examples">examples</a> directory, but basically you just need to place some scripts at specific place[s].
In my case I wanted to "trigger" an event in the case of a node update (like adding a puppet class, or a variable/parameter change) so I just had to place it under /usr/share/foreman/config/hooks/host/managed/update/.</p>
<p>One little remark though : if you put a new file there, don't forget to restart foreman itself, so that it picks up that hook file; otherwise it would still be ignored and so not run.</p>
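<p>To make the idea concrete, here is a minimal sketch of what such a hook could look like in python. It assumes the foreman_hooks convention of passing the event name and object name as arguments and the full host as JSON on stdin; the MQTT topic layout and payload keys are my own invention for this example, and publishing goes through <code>mosquitto_pub</code> from the mosquitto clients package:</p>

```python
#!/usr/bin/env python
# Hypothetical Foreman hook sketch, to be placed under
# /usr/share/foreman/config/hooks/host/managed/update/.
# It receives the event and object name as arguments, the host as JSON on
# stdin, and publishes a reduced message on an MQTT topic.
import json
import subprocess
import sys

def build_message(event, hostname, host_json):
    """Reduce the full Foreman host JSON to what the subscriber needs.
    The "host"/"environment" keys and the topic layout are assumptions."""
    host = json.loads(host_json).get("host", {})
    payload = {
        "event": event,
        "hostname": hostname,
        "environment": host.get("environment"),
    }
    return "foreman/host/%s" % event, json.dumps(payload)

def publish(topic, payload, broker="broker.example.org"):
    # Requires the mosquitto clients package on the Foreman host.
    subprocess.check_call(["mosquitto_pub", "-h", broker, "-t", topic, "-m", payload])

if __name__ == "__main__" and len(sys.argv) >= 3:
    event, hostname = sys.argv[1], sys.argv[2]
    topic, payload = build_message(event, hostname, sys.stdin.read())
    publish(topic, payload)
```

<p>Keeping the message small (instead of forwarding the whole Foreman JSON) is deliberate: the subscriber side only needs enough information to act on Zabbix.</p>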
<h2>Mosquitto</h2>
<p>Mosquitto itself is available in your favorite rpm repo, so installing it is a breeze. The reason why I selected mosquitto is that it's very lightweight (package size is under 200Kb), and it supports TLS and ACLs out of the box.</p>
<p>For an introduction to MQTT/Mosquitto, I'd suggest you read <a href="https://twitter.com/jpmens">Jan-Piet Mens</a>' dedicated <a href="http://jpmens.net/2013/02/25/lots-of-messages-mqtt-pub-sub-and-the-mosquitto-broker/">blog post</a> around it.
I even admit that I discovered it by attending one of his talks on the topic, back in the <a href="http://loadays.org/">Loadays.org</a> days :-)</p>
<h2>Zabbix-cli</h2>
<p>While one can always talk to the <a href="https://www.zabbix.com/documentation/3.0/manual/api">"Raw API"</a> of Zabbix, I found it useful to use a tool I was already using for various tasks around Zabbix : <a href="https://github.com/usit-gd/zabbix-cli">zabbix-cli</a>.
For people interested in using it on CentOS 6 or 7, I built the packages and they are available on <a href="https://cbs.centos.org/koji/packageinfo?packageID=4477">CBS</a>.</p>
<p>So I plumbed it into a systemd unit file that subscribes to a specific MQTT topic, parses the needed information (like hostname and zabbix templates to link, unlink, etc) and then updates that in Zabbix itself (from the log output):</p>
<div class="highlight"><pre><span></span><span class="o">[</span><span class="n">+</span><span class="o">]</span><span class="w"> </span><span class="mi">20170516</span><span class="o">-</span><span class="mi">11</span><span class="err">:</span><span class="mi">43</span><span class="w"> </span><span class="err">:</span><span class="w"> </span><span class="n">Adding</span><span class="w"> </span><span class="n">zabbix</span><span class="w"> </span><span class="n">template</span><span class="w"> </span><span class="ss">"Template CentOS - https SSL Cert Check External"</span><span class="w"> </span><span class="k">to</span><span class="w"> </span><span class="k">host</span><span class="w"> </span><span class="ss">"dev-registry.lon1.centos.org"</span><span class="w"> </span>
<span class="o">[</span><span class="n">Done</span><span class="o">]</span><span class="err">:</span><span class="w"> </span><span class="n">Templates</span><span class="w"> </span><span class="n">Template</span><span class="w"> </span><span class="n">CentOS</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">https</span><span class="w"> </span><span class="n">SSL</span><span class="w"> </span><span class="n">Cert</span><span class="w"> </span><span class="k">Check</span><span class="w"> </span><span class="k">External</span><span class="w"> </span><span class="p">(</span><span class="err">{</span><span class="ss">"templateid"</span><span class="err">:</span><span class="ss">"10105"</span><span class="err">}</span><span class="p">)</span><span class="w"> </span><span class="n">linked</span><span class="w"> </span><span class="k">to</span><span class="w"> </span><span class="n">these</span><span class="w"> </span><span class="nl">hosts</span><span class="p">:</span><span class="w"> </span><span class="n">dev</span><span class="o">-</span><span class="n">registry</span><span class="p">.</span><span class="n">lon1</span><span class="p">.</span><span class="n">centos</span><span class="p">.</span><span class="n">org</span><span class="w"> </span><span class="p">(</span><span class="err">{</span><span class="ss">"hostid"</span><span class="err">:</span><span class="ss">"10174"</span><span class="err">}</span><span class="p">)</span><span class="w"></span>
</pre></div>
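<p>The glue behind that log line boils down to mapping a received MQTT payload onto a zabbix-cli invocation. A sketch of that mapping in python; the payload keys and the <code>link_template_to_host</code>/<code>unlink_template_from_host</code> command names are assumptions here, so check the zabbix-cli documentation for the exact syntax of your version:</p>

```python
# Turn a JSON MQTT payload into the argv for a one-shot zabbix-cli run.
# Payload format and zabbix-cli command names are assumptions for this sketch.
import json

def zabbix_cli_argv(payload):
    """Build the argv to run zabbix-cli non-interactively for one message."""
    msg = json.loads(payload)
    command = ("link_template_to_host" if msg["action"] == "link"
               else "unlink_template_from_host")
    # -C runs a single zabbix-cli command and exits
    return ["zabbix-cli", "-C",
            "%s '%s' '%s'" % (command, msg["template"], msg["hostname"])]

print(zabbix_cli_argv(
    '{"action": "link", '
    '"template": "Template CentOS - https SSL Cert Check External", '
    '"hostname": "dev-registry.lon1.centos.org"}'))
```

<p>The subscriber process can then hand each argv to subprocess and log the zabbix-cli output, which is exactly what shows up in the journal above.</p>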
<p>Cool, so now I don't have to worry about forgetting to tie a zabbix template to a host , as it's now done automatically. No need to say that the deployment of those tools was of course automated and coming from Puppet/foreman :-)</p>Deploying Openstack through puppet on CentOS 7 - a Journey2017-05-08T00:00:00+02:002017-05-08T00:00:00+02:00Fabian Arrotintag:arrfab.net,2017-05-08:/posts/2017/May/08/deploying-openstack-through-puppet-on-centos-7-a-journey/<p>It's not a secret that I was playing/experimenting with <a href="http://www.openstack.org">OpenStack</a> in the <a href="/posts/2017/Apr/14/deploying-openstack-poc-on-centos-with-linux-bridge/">last days</a>.
When I mention OpenStack, I should even say <a href="http://www.rdoproject.org">RDO</a> , as it's RPM packaged, built and tested on CentOS infra.</p>
<p>Now that it's time to deploy it in Production, that's when you should have a deeper look at how to proceed and which tool to use. Sure, <a href="https://wiki.openstack.org/wiki/Packstack">Packstack</a> can help you set up a quick <a href="https://en.wikipedia.org/wiki/Proof_of_concept">PoC</a>, but after some discussions with people hanging around in the #rdo irc channel on freenode, it seems that almost everybody agreed on the fact that it's not the kind of tool you want to use for a proper deploy.</p>
<p>So let's have a look at the available options. While I really like/prefer <a href="http://www.ansible.com">Ansible</a>, we (CentOS Project) still use <a href="https://puppet.com/">puppet</a> as our Configuration Management tool, itself using <a href="https://theforeman.org/">Foreman</a> as the <a href="https://docs.puppet.com/puppet/4.10/nodes_external.html#what-is-an-enc">ENC</a>. So let's see both options.</p>
<ul>
<li>Ansible : Lots of <a href="http://docs.ansible.com/ansible/list_of_cloud_modu">native modules</a> exist to manage an existing/already deployed openstack cloud, but nothing really that can help setting one up from scratch. OTOH it's true that <a href="https://docs.openstack.org/project-deploy-guide/openstack-ansible/ocata/">Openstack Ansible</a> exists, but that will set up openstack components into LXC containers, and I wasn't really comfortable with the whole idea (YMMV) </li>
<li>Puppet : Lot of …</li></ul><p>It's not a secret that I was playing/experimenting with <a href="http://www.openstack.org">OpenStack</a> in the <a href="/posts/2017/Apr/14/deploying-openstack-poc-on-centos-with-linux-bridge/">last days</a>.
When I mention OpenStack, I should even say <a href="http://www.rdoproject.org">RDO</a> , as it's RPM packaged, built and tested on CentOS infra.</p>
<p>Now that it's time to deploy it in Production, that's when you should have a deeper look at how to proceed and which tool to use. Sure, <a href="https://wiki.openstack.org/wiki/Packstack">Packstack</a> can help you set up a quick <a href="https://en.wikipedia.org/wiki/Proof_of_concept">PoC</a>, but after some discussions with people hanging around in the #rdo irc channel on freenode, it seems that almost everybody agreed on the fact that it's not the kind of tool you want to use for a proper deploy.</p>
<p>So let's have a look at the available options. While I really like/prefer <a href="http://www.ansible.com">Ansible</a>, we (CentOS Project) still use <a href="https://puppet.com/">puppet</a> as our Configuration Management tool, itself using <a href="https://theforeman.org/">Foreman</a> as the <a href="https://docs.puppet.com/puppet/4.10/nodes_external.html#what-is-an-enc">ENC</a>. So let's see both options.</p>
<ul>
<li>Ansible : Lots of <a href="http://docs.ansible.com/ansible/list_of_cloud_modu">native modules</a> exist to manage an existing/already deployed openstack cloud, but nothing really that can help setting one up from scratch. OTOH it's true that <a href="https://docs.openstack.org/project-deploy-guide/openstack-ansible/ocata/">Openstack Ansible</a> exists, but that will set up openstack components into LXC containers, and I wasn't really comfortable with the whole idea (YMMV) </li>
<li>Puppet : Lots of <a href="http://git.openstack.org/cgit/openstack/">puppet modules</a> exist, so you can automatically reuse/import those into your existing puppet setup, and it seems to be the preferred method when discussing with people in #rdo (when not using <a href="https://wiki.openstack.org/wiki/TripleO">TripleO</a> though)</li>
</ul>
<p>So, after some analysis, and despite the fact that I really prefer Ansible over Puppet, I decided (so that it could still make sense in our infra) to go the "puppet modules way". That was the beginning of a journey, where I saw a lot of <a href="https://en.wiktionary.org/wiki/yak_shaving">Yaks to shave</a> too.</p>
<p>It started with me trying to "just" reuse and adapt some existing modules I found. <strong>Wrong</strong>. And it's even fun because it's one of my mantras : "Don't try to automate what you can't understand from scratch" (And I fully agree with Matthias' <a href="https://ma.ttias.be/automating-unknown/">thought</a> on this ).</p>
<p>So one can just read all the openstack puppet modules, and then try to understand how to assemble them together to build a cloud. But I remembered that Packstack itself <em>is</em> puppet driven. So I just decided to have a look at what it was generating and start from that to write my own module from scratch. How to proceed ? Easy : on a VM, just install packstack, generate the answer file, "salt" it to your needs, and generate the manifests :</p>
<div class="highlight"><pre><span></span> <span class="n">yum</span> <span class="n">install</span> <span class="o">-</span><span class="n">y</span> <span class="n">centos</span><span class="o">-</span><span class="n">release</span><span class="o">-</span><span class="n">openstack</span><span class="o">-</span><span class="n">ocata</span> <span class="o">&&</span> <span class="n">yum</span> <span class="n">install</span> <span class="n">openstack</span><span class="o">-</span><span class="n">packstack</span> <span class="o">-</span><span class="n">y</span>
<span class="n">packstack</span> <span class="c1">--gen-answer-file=answers.txt</span>
<span class="n">vim</span> <span class="n">answers</span><span class="p">.</span><span class="n">txt</span>
<span class="n">packstack</span> <span class="c1">--answer-file=answers.txt --dry-run</span>
<span class="o">*</span> <span class="n">The</span> <span class="n">installation</span> <span class="n">log</span> <span class="n">file</span> <span class="k">is</span> <span class="n">available</span> <span class="k">at</span><span class="p">:</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">tmp</span><span class="o">/</span><span class="n">packstack</span><span class="o">/</span><span class="mi">20170508</span><span class="o">-</span><span class="mi">101433</span><span class="o">-</span><span class="mi">49</span><span class="n">cCcj</span><span class="o">/</span><span class="n">openstack</span><span class="o">-</span><span class="n">setup</span><span class="p">.</span><span class="n">log</span>
<span class="o">*</span> <span class="n">The</span> <span class="k">generated</span> <span class="n">manifests</span> <span class="k">are</span> <span class="n">available</span> <span class="k">at</span><span class="p">:</span> <span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">tmp</span><span class="o">/</span><span class="n">packstack</span><span class="o">/</span><span class="mi">20170508</span><span class="o">-</span><span class="mi">101433</span><span class="o">-</span><span class="mi">49</span><span class="n">cCcj</span><span class="o">/</span><span class="n">manifests</span>
</pre></div>
<p>So now we can look at all the generated manifests and start writing our own from scratch, reimporting all the needed openstack puppet modules. That's what I did .. but I started to encounter some issues. The first one was that the puppet version we were using was 3.6.2 (everywhere, on every release/arch we support, so centos 6 and 7, and x86_64, i386, aarch64, ppc64, ppc64le). </p>
<p>One of the openstack components is <a href="https://www.rabbitmq.com/">RabbitMQ</a>, but openstack modules rely on the puppetlabs module to deploy/manage it. You'll see a lot of those external modules being called/needed by openstack puppet. The first thing I had to do was investigate our own modules, as some have the same name but don't come from puppetlabs/forge ; instead of analyzing all of those, I moved everything RDO related to a <a href="https://theforeman.org/manuals/1.12/index.html#4.2ManagingPuppet">different environment</a> so that it wouldn't conflict with some of our existing modules. Back now to the RabbitMQ one : puppet errored when just trying to use it. First yak to shave : updating the whole CentOS infra puppet to a higher version because of a <a href="https://tickets.puppetlabs.com/browse/MODULES-1781">puppet bug</a>. So let's rebuild puppet for centos 6/7 with a higher version on <a href="https://cbs.centos.org/koji/packageinfo?packageID=390">CBS</a> </p>
<p>That means of course testing our own modules first, on our test Foreman/puppetmasterd instance, and as the upgrade worked, I applied it everywhere. Good, so let's jump to the next yak.</p>
<p>After the rabbitmq issue was solved, I encountered other issues, now coming from the openstack puppet modules themselves, as the .rb ruby code used for type/provider was expecting ruby2 and not 1.8.3, which was the one available on our puppetmasterd (yeah, our Foreman was on a CentOS 6 node). So another yak to shave : migrating our Foreman instance from CentOS 6 to a new CentOS 7 node. Basically installing a CentOS 7 node with the <em>same</em> Foreman version running on the CentOS 6 node, and then following the <a href="https://theforeman.org/manuals/1.12/index.html#5.5Backup,RecoveryandMigration">procedure</a> ; but then, again, time was lost testing the update/upgrade and also all other modules, etc (one can see why I prefer agentless cfgmgmt).</p>
<p>Finally I found that some of the openstack puppet modules don't touch the whole config. Let me explain why. In Openstack <a href="https://releases.openstack.org/ocata/">Ocata</a>, some things are mandatory, like the <a href="https://docs.openstack.org/developer/nova/placement.html">Placement API</a>, but despite all the classes being applied, I had issues getting it to run correctly when deploying an instance. It's true that I initially had a bug in my puppet code for the user/password used to configure the rabbitmq settings, but that was solved and also applied correctly in /etc/nova/nova.conf (setting "transport_url="). Yet the openstack nova services (all nova-*.log files btw) kept saying that the given credentials were refused by rabbitmq, even though they worked when tested manually.</p>
<p>After having verified the rabbitmq logs, I saw that despite what was configured in nova.conf, services were still trying to use the wrong user/pass to connect to rabbitmq. Strange, as <a href="http://git.openstack.org/cgit/openstack/puppet-nova/tree/manifests/cell_v2/simple_setup.pp">::nova::cell_v2::simple_setup</a> was included and was also supposed to use the transport_url declared at the nova.conf level (and so configured by ::nova). That's how I discovered that something "ugly" happened : even if you modify nova.conf, some settings are stored in the mysql DB, and you can see those (so the "wrong" ones in my case) with :</p>
<div class="highlight"><pre><span></span><span class="n">nova</span><span class="o">-</span><span class="n">manage</span> <span class="n">cell_v2</span> <span class="n">list_cells</span> <span class="c1">--debug</span>
</pre></div>
<p>Something to keep in mind for an initial deployment : if your rabbitmq user/pass needs to be changed, puppet will not complain, but it will only update the conf file, not the settings first imported by puppet into the DB (table nova_api.cell_mapping if you're interested).
After that, everything was running, and I reinstalled/reprovisioned my test nodes multiple times, applying the puppet module/manifests from puppetmasterd to confirm. </p>
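<p>If you hit this, here is a sketch of how to inspect (and carefully fix) what was stored at first import — the table/column names (nova_api.cell_mapping, transport_url) are the ones mentioned above, but the cell name, credentials and host below are hypothetical placeholders to adapt, and you should back up the DB first :</p>

```shell
# Illustrative only : adapt user/password/host to your setup.
# Show the transport_url that nova actually uses (imported at first puppet run) :
mysql -u root -p -e "SELECT name, transport_url FROM nova_api.cell_mapping;"
# If it still holds the old rabbitmq credentials, align it with nova.conf :
mysql -u root -p -e "UPDATE nova_api.cell_mapping SET transport_url='rabbit://openstack:CHANGEME@controller:5672/' WHERE name='cell1';"
```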
<p>That was quite a journey, and it's probably only the beginning, but it's a good start. Next up is investigating other options for cinder/glance, as it seems Gluster was deprecated and I'd like to know why.</p>
<p>Hope this helps if you need to bootstrap openstack with puppet !</p>Deploying Openstack PoC on CentOS with linux bridge2017-04-14T00:00:00+02:002017-04-14T00:00:00+02:00Fabian Arrotintag:arrfab.net,2017-04-14:/posts/2017/Apr/14/deploying-openstack-poc-on-centos-with-linux-bridge/<p>I was recently in a need to start "playing" with <a href="http://www.openstack.org">Openstack</a> (working in an existing <a href="http://www.rdoproject.org">RDO</a> setup) so I thought that it would be a good idea to have my personal playground to start deploying from scratch/breaking/fixing that playground setup.</p>
<p>At first sight, Openstack looks <a href="https://docs.openstack.org/admin-guide/_images/openstack-arch-kilo-logical-v1.png">impressive</a> and "over-engineered", as it's complex and has zillions of modules to make it work. But when you dive into it, you understand that the choice is yours to make it complex or not. Yeah, that sentence can sound strange, but I'll explain why.</p>
<p>First, you should just write your requirements, and then only have a look at the needed openstack components. For my personal playground, I just wanted to have a basic thing that would let me deploy VMs on demand, <em>in</em> the existing network, and so directly using bridge as I want the VMs to be directly integrated into the existing network/subnet.</p>
<p>So just by looking at the mentioned <a href="https://docs.openstack.org/admin-guide/_images/openstack-arch-kilo-logical-v1.png">diagram</a>, we just need :</p>
<ul>
<li>keystone (needed for the identity service)</li>
<li>nova (hypervisor part)</li>
<li>neutron (handling the network part)</li>
<li>glance (to store the OS images that will be used to create the VMs)</li>
</ul>
<p>Now that I have my requirements and list of needed components, let's see how to set up my PoC ... The <a href="http://www.rdoproject.org">RDO project</a> has good docs for this, including the <a href="https://www.rdoproject.org/install/quickstart/">Quickstart</a> guide. You can follow that guide, and as everything is packaged/built/tested and delivered through the CentOS mirror network, you can have a working RDO/openstack all-in-one setup in minutes ...</p>
<p>The only issue is that it doesn't fit my need, as it would set up unneeded components, and the network layout isn't the one I wanted either, as it would be based on openvswitch and other rules (so multiple layers I wanted to get rid of). The good news is that <a href="https://www.rdoproject.org/install/quickstart/">Packstack</a> is in fact a wrapper tool around puppet modules, and it also supports a lot of options to configure your PoC.</p>
<p>Let's assume that I wanted a PoC based on openstack-newton, and that my machine has two nics : eth0 for mgmt network and eth1 for VMs network. You don't need to configure the bridge on the eth1 interface, as that will be done automatically by neutron. So let's follow the quickstart guide, but we'll just adapt the packstack command line :</p>
<div class="highlight"><pre><span></span><span class="n">yum</span> <span class="n">install</span> <span class="n">centos</span><span class="o">-</span><span class="n">release</span><span class="o">-</span><span class="n">openstack</span><span class="o">-</span><span class="n">newton</span> <span class="o">-</span><span class="n">y</span>
<span class="n">systemctl</span> <span class="n">disable</span> <span class="n">firewalld</span>
<span class="n">systemctl</span> <span class="n">stop</span> <span class="n">firewalld</span>
<span class="n">systemctl</span> <span class="n">disable</span> <span class="n">NetworkManager</span>
<span class="n">systemctl</span> <span class="n">stop</span> <span class="n">NetworkManager</span>
<span class="n">systemctl</span> <span class="n">enable</span> <span class="n">network</span>
<span class="n">systemctl</span> <span class="k">start</span> <span class="n">network</span>
<span class="n">yum</span> <span class="n">install</span> <span class="o">-</span><span class="n">y</span> <span class="n">openstack</span><span class="o">-</span><span class="n">packstack</span>
</pre></div>
<p>Let's fix eth1 to ensure that it's started but without <em>any</em> IP on it : </p>
<div class="highlight"><pre><span></span><span class="n">sed</span> <span class="o">-</span><span class="n">i</span> <span class="s1">'s/BOOTPROTO="dhcp"/BOOTPROTO="none"/'</span> <span class="o">/</span><span class="n">etc</span><span class="o">/</span><span class="n">sysconfig</span><span class="o">/</span><span class="n">network</span><span class="o">-</span><span class="n">scripts</span><span class="o">/</span><span class="n">ifcfg</span><span class="o">-</span><span class="n">eth1</span>
<span class="n">sed</span> <span class="o">-</span><span class="n">i</span> <span class="s1">'s/ONBOOT="no"/ONBOOT="yes"/'</span> <span class="o">/</span><span class="n">etc</span><span class="o">/</span><span class="n">sysconfig</span><span class="o">/</span><span class="n">network</span><span class="o">-</span><span class="n">scripts</span><span class="o">/</span><span class="n">ifcfg</span><span class="o">-</span><span class="n">eth1</span>
<span class="n">ifup</span> <span class="n">eth1</span>
</pre></div>
<p>And now let's call packstack with the required options so that we'll use a basic linux bridge (and so no openvswitch), and we'll instruct it to use eth1 for that mapping :</p>
<div class="highlight"><pre><span></span><span class="n">packstack</span> <span class="c1">--allinone --provision-demo=n --os-neutron-ml2-type-drivers=flat --os-neutron-ml2-mechanism-drivers=linuxbridge --os-neutron-ml2-flat-networks=physnet0 --os-neutron-l2-agent=linuxbridge --os-neutron-lb-interface-mappings=physnet0:eth1 --os-neutron-ml2-tenant-network-types=' ' --nagios-install=n </span>
</pre></div>
<p>At this stage we have the openstack components installed, and a /root/keystonerc_admin file that we can source for openstack CLI operations.
We have instructed neutron to use linuxbridge, but we haven't (yet) created a network and a subnet tied to it, so let's do that now :</p>
<div class="highlight"><pre><span></span><span class="nv">source</span> <span class="o">/</span><span class="nv">root</span><span class="o">/</span><span class="nv">keystonerc_admin</span>
<span class="nv">neutron</span> <span class="nv">net</span><span class="o">-</span><span class="nv">create</span> <span class="o">--</span><span class="nv">shared</span> <span class="o">--</span><span class="nv">provider</span>:<span class="nv">network_type</span><span class="o">=</span><span class="nv">flat</span> <span class="o">--</span><span class="nv">provider</span>:<span class="nv">physical_network</span><span class="o">=</span><span class="nv">physnet0</span> <span class="nv">othernet</span>
<span class="nv">neutron</span> <span class="nv">subnet</span><span class="o">-</span><span class="nv">create</span> <span class="o">--</span><span class="nv">name</span> <span class="nv">other_subnet</span> <span class="o">--</span><span class="nv">enable_dhcp</span> <span class="o">--</span><span class="nv">allocation</span><span class="o">-</span><span class="nv">pool</span><span class="o">=</span><span class="nv">start</span><span class="o">=</span><span class="mi">192</span>.<span class="mi">168</span>.<span class="mi">123</span>.<span class="mi">1</span>,<span class="k">end</span><span class="o">=</span><span class="mi">192</span>.<span class="mi">168</span>.<span class="mi">123</span>.<span class="mi">4</span> <span class="o">--</span><span class="nv">gateway</span><span class="o">=</span><span class="mi">192</span>.<span class="mi">168</span>.<span class="mi">123</span>.<span class="mi">254</span> <span class="o">--</span><span class="nv">dns</span><span class="o">-</span><span class="nv">nameserver</span><span class="o">=</span><span class="mi">192</span>.<span class="mi">168</span>.<span class="mi">123</span>.<span class="mi">254</span> <span class="nv">othernet</span> <span class="mi">192</span>.<span class="mi">168</span>.<span class="mi">123</span>.<span class="mi">0</span><span class="o">/</span><span class="mi">24</span>
</pre></div>
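<p>At this stage you can sanity-check what was created — a quick sketch using the names from the commands above (output will obviously vary, and the linux bridge itself only appears on the host once neutron has a port to plug into it) :</p>

```shell
source /root/keystonerc_admin
neutron net-list
neutron subnet-list
# on the compute/network node, once an instance or the dhcp port exists :
brctl show
```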
<p>Before importing image[s] and creating instances, there is one thing left to do : instruct dhcp_agent that metadata for cloud-init inside the VM will not be served from the traditional "router" inside of openstack. And also don't forget to let traffic (in/out) pass through the security group (see <a href="https://docs.openstack.org/user-guide/cli-nova-configure-access-security-for-instances.html">doc</a>)</p>
<p>Just be sure to have <code>enable_isolated_metadata = True</code> in /etc/neutron/dhcp_agent.ini and then <code>systemctl restart neutron-dhcp-agent</code> : and from that point, cloud metadata will be served from dhcp too.</p>
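<p>In shell form — a sketch only : the file path and option are the ones stated above, but the sed pattern assumes the option is already present (possibly commented out) in the shipped config :</p>

```shell
# uncomment/force the option, then restart the agent
sed -i 's/^#\?\s*enable_isolated_metadata\s*=.*/enable_isolated_metadata = True/' /etc/neutron/dhcp_agent.ini
systemctl restart neutron-dhcp-agent
```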
<p>From that point you can just follow the <a href="https://www.rdoproject.org/install/running-an-instance/">quickstart</a> guide to create projects/users, import images, create instances and/or do all this from <a href="https://docs.openstack.org/user-guide/cli-cheat-sheet.html">cli</a> too </p>
<p>One last remark with linuxbridge in an existing network : as neutron will have a dhcp-agent listening on the bridge, the provisioned VMs will get an IP from the pool declared in the "neutron subnet-create" command. However (and I saw that when I added other compute nodes to the same setup), you'll have a potential conflict with an existing dhcpd instance on the same segment/network, so your VMs can potentially get their IP from your existing dhcpd instance on the network, and not from neutron. As a workaround, you can just ignore the mac address range used by openstack, so that your VMs will always get their IP from the neutron dhcp.
To do this, there are different options, depending on your local dhcpd instance : </p>
<ul>
<li>for dnsmasq : dhcp-host=fa:16:3e:<em>:</em>:*,ignore (see <a href="http://www.thekelleys.org.uk/dnsmasq/docs/dnsmasq.conf.example">doc</a>)</li>
<li>for ISC dhcpd : "ignore booting" (see <a href="https://linux.die.net/man/5/dhcpd.conf">doc</a>)</li>
</ul>
<p>The default mac address range for openstack VMs indeed starts at fa:16:3e:00:00:00 (see base_mac in /etc/neutron/neutron.conf, so it can be changed too)</p>
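<p>A trivial shell check for whether a given MAC falls in that default range (the prefix is the stock <code>base_mac</code> value ; if you changed it in neutron.conf, adapt accordingly) :</p>

```shell
mac="fa:16:3e:12:34:56"   # hypothetical example value
case "$mac" in
  fa:16:3e:*) echo "allocated by neutron" ;;   # printed for this example
  *)          echo "someone else" ;;
esac
```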
<p>Those were some of my findings for my openstack PoC/playground. Now that I understand a little bit more all this, I'm currently working on some puppet integration for this, as there are official openstack puppet modules available on <a href="http://git.openstack.org/cgit">git.openstack.org</a> that one can import to deploy/configure openstack (and better than using packstack). But lot of "yaks to shave" to get to that point, so surely for another future blog post.</p>Remotely kicking a CentOS install through ligthweight 1Mb iso image2017-04-13T00:00:00+02:002017-04-13T00:00:00+02:00Fabian Arrotintag:arrfab.net,2017-04-13:/posts/2017/Apr/13/remotely-kicking-a-centos-install-through-ligthweight-1mb-iso-image/<p>As a sysadmin, you probably deploy your bare-metal nodes through kickstarts in combination with pxe/dhcp. That's the most convenient way to deploy nodes in an existing environment. But what about having to remotely init a new DC/environment, without anything at all ? Suppose that you have a standalone node that you have to deploy, but there is no PXE/Dhcp environment configured (yet).</p>
<p>The simple solution, as long as you have at least some kind of management/out-of-band network, would be to ask the local DC people to burn the CentOS Minimal iso image on a usb stick or other media. But I needed to deploy a machine without any remote hands available locally to help me. The only things I had were :</p>
<ul>
<li>access to the ipmi interface of that server</li>
<li>the fixed IP/netmask/gateway/dns settings for the NIC connected to that segment/vlan</li>
</ul>
<p>One simple solution would have been to just "attach" the CentOS 7 iso as a virtual media, boot the machine, and set up from the "locally emulated" cd-rom drive. But that's not something I wanted to do, as I didn't want to slow down the install : it would be served from my local iso image, and so use my "slow" bandwidth. Instead, I directly wanted to use the Gbit link from that server to kick the install. So here is how you can do it with ipxe.iso. <a href="http://ipxe.org">Ipxe</a> is really helpful for such a thing. The only "issue" was that I had to configure the nic first with a fixed IP (remember ? no dhcpd yet).</p>
<p>So, download the <a href="http://boot.ipxe.org/ipxe.iso">ipxe.iso</a> image, add it as "virtual media" (the transfer will be fast, as it's under 1Mb), and boot the server.
Once it boots from the iso image, don't let ipxe run : instead hit CTRL/B when you see ipxe starting. The reason is that we don't want to let it start the dhcp discover/offer/request/ack process, as we know it will not work.</p>
<p>You're then presented with ipxe shell, so here we go (all parameters are obviously to be adapted, including net adapter number) :</p>
<div class="highlight"><pre><span></span><span class="k">set</span> <span class="n">net0</span><span class="o">/</span><span class="n">ip</span> <span class="n">x</span><span class="p">.</span><span class="n">x</span><span class="p">.</span><span class="n">x</span><span class="p">.</span><span class="n">x</span>
<span class="k">set</span> <span class="n">net0</span><span class="o">/</span><span class="n">netmask</span> <span class="n">x</span><span class="p">.</span><span class="n">x</span><span class="p">.</span><span class="n">x</span><span class="p">.</span><span class="n">x</span>
<span class="k">set</span> <span class="n">net0</span><span class="o">/</span><span class="n">gateway</span> <span class="n">x</span><span class="p">.</span><span class="n">x</span><span class="p">.</span><span class="n">x</span><span class="p">.</span><span class="n">x</span>
<span class="k">set</span> <span class="n">dns</span> <span class="n">x</span><span class="p">.</span><span class="n">x</span><span class="p">.</span><span class="n">x</span><span class="p">.</span><span class="n">x</span>
<span class="n">ifopen</span> <span class="n">net0</span>
<span class="n">ifstat</span>
</pre></div>
<p>From that point you should have network connectivity, so we can "just" chainload the CentOS pxe images and start the install :</p>
<div class="highlight"><pre><span></span><span class="n">initrd</span> <span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">mirror</span><span class="p">.</span><span class="n">centos</span><span class="p">.</span><span class="n">org</span><span class="o">/</span><span class="n">centos</span><span class="o">/</span><span class="mi">7</span><span class="o">/</span><span class="n">os</span><span class="o">/</span><span class="n">x86_64</span><span class="o">/</span><span class="n">images</span><span class="o">/</span><span class="n">pxeboot</span><span class="o">/</span><span class="n">initrd</span><span class="p">.</span><span class="n">img</span>
<span class="k">chain</span> <span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">mirror</span><span class="p">.</span><span class="n">centos</span><span class="p">.</span><span class="n">org</span><span class="o">/</span><span class="n">centos</span><span class="o">/</span><span class="mi">7</span><span class="o">/</span><span class="n">os</span><span class="o">/</span><span class="n">x86_64</span><span class="o">/</span><span class="n">images</span><span class="o">/</span><span class="n">pxeboot</span><span class="o">/</span><span class="n">vmlinuz</span> <span class="n">net</span><span class="p">.</span><span class="n">ifnames</span><span class="o">=</span><span class="mi">0</span> <span class="n">biosdevname</span><span class="o">=</span><span class="mi">0</span> <span class="n">ksdevice</span><span class="o">=</span><span class="n">eth2</span> <span class="n">inst</span><span class="p">.</span><span class="n">repo</span><span class="o">=</span><span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">mirror</span><span class="p">.</span><span class="n">centos</span><span class="p">.</span><span class="n">org</span><span class="o">/</span><span class="n">centos</span><span class="o">/</span><span class="mi">7</span><span class="o">/</span><span class="n">os</span><span class="o">/</span><span class="n">x86_64</span><span class="o">/</span> <span class="n">inst</span><span class="p">.</span><span class="n">lang</span><span class="o">=</span><span class="n">en_GB</span> <span class="n">inst</span><span class="p">.</span><span class="n">keymap</span><span class="o">=</span><span class="n">be</span><span class="o">-</span><span class="n">latin1</span> <span class="n">inst</span><span class="p">.</span><span class="n">vnc</span> <span class="n">inst</span><span class="p">.</span><span class="n">vncpassword</span><span class="o">=</span><span class="n">CHANGEME</span> <span 
class="n">ip</span><span class="o">=</span><span class="n">x</span><span class="p">.</span><span class="n">x</span><span class="p">.</span><span class="n">x</span><span class="p">.</span><span class="n">x</span> <span class="n">netmask</span><span class="o">=</span><span class="n">x</span><span class="p">.</span><span class="n">x</span><span class="p">.</span><span class="n">x</span><span class="p">.</span><span class="n">x</span> <span class="n">gateway</span><span class="o">=</span><span class="n">x</span><span class="p">.</span><span class="n">x</span><span class="p">.</span><span class="n">x</span><span class="p">.</span><span class="n">x</span> <span class="n">dns</span><span class="o">=</span><span class="n">x</span><span class="p">.</span><span class="n">x</span><span class="p">.</span><span class="n">x</span><span class="p">.</span><span class="n">x</span>
</pre></div>
<p>Then you can just enjoy your CentOS install running all from the network, and so at "full steam" !
You can also combine this directly with inst.ks= to have a fully automated setup.
Worth knowing : you can also regenerate/build an updated/customized ipxe.iso with those scripts directly too. That's more or less what we used to also have a 1Mb universal installer for CentOS 6 and 7, see <a href="https://wiki.centos.org/HowTos/RemoteiPXE">https://wiki.centos.org/HowTos/RemoteiPXE</a>, but that one defaults to dhcp</p>
<p>Hope it helps</p>Enabling SPF record for centos.org2017-01-17T00:00:00+01:002017-01-17T00:00:00+01:00Fabian Arrotintag:arrfab.net,2017-01-17:/posts/2017/Jan/17/enabling-spf-record-for-centosorg/<p>In the last weeks, I noticed that spam activity was back, including against centos.org infra. One of the most used techniques was <a href="https://en.wikipedia.org/wiki/Email_spoofing#Technical_detail">Email Spoofing</a> (aka "forged from address"). That's how I discovered that we never implemented <a href="https://en.wikipedia.org/wiki/Sender_Policy_Framework">SPF</a> for centos.org (while some of the Infra team members had that on their personal SMTP servers).</p>
<p>While SPF itself is "just" a TXT dns record in your zone, you have to think twice before implementing it. And publishing such a policy yourself doesn't mean that your SMTP servers are checking SPF either. There are PROS and CONS to SPF, so first read multiple sources/articles to understand how it will impact your server/domain when sending/receiving :</p>
<h2>sending</h2>
<p>The first thing to consider is how people having an alias can send their mails : either from behind their known MX borders (and included in your SPF) or through alternate SMTP servers relaying (after <a href="http://www.postfix.org/access.5.html">being</a> <a href="http://www.postfix.org/SMTPD_ACCESS_README.html">authorized</a> of course) through servers listed in your SPF.</p>
<p>One thing to know with SPF is that it breaks <a href="https://en.wikipedia.org/wiki/Sender_Policy_Framework#FAIL_and_forwarding">plain forwarding</a> and <a href="http://www.openspf.org/FAQ/Forwarding">aliases</a>. It's not about how you set up <em>your</em> SPF record, but how the originator domain does it : for example if you have joe@domain.com sending to joe@otherdomain.com, itself an alias to joe2@domain.com, delivery will break, as the MX for domain.com will see that a mail for domain.com was 'sent' from otherdomain.com and not from an IP listed in <em>their</em> SPF. There are workarounds for this though, aka remailing and <a href="https://en.wikipedia.org/wiki/Sender_Rewriting_Scheme">SRS</a></p>
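<p>For reference, the policy itself is just a TXT record in the zone ; a purely hypothetical example (domain, mechanisms and addresses made up for illustration, not the centos.org record) :</p>

```
; allow the domain's MX hosts plus one extra relay, softfail everything else
example.org.  IN  TXT  "v=spf1 mx a:relay.example.org ip4:192.0.2.10 ~all"
```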
<h2>receiving</h2>
<p>So you have an SPF in place and so restrict from where you are sending mails ? Great, but SPF <em>only</em> works if the other SMTP servers involved are checking for it, and so you should do the same !
The fun part is that even if you have CentOS 7, and so <a href="http://www.postfix.org">Postfix</a> 2.10, there is nothing by default that lets you verify SPF : as stated on <a href="http://www.postfix.org/addon.html">this page</a> : </p>
<div class="highlight"><pre><span></span><span class="nv">Note</span>: <span class="nv">Postfix</span> <span class="nv">already</span> <span class="nv">ships</span> <span class="nv">with</span> <span class="nv">SPF</span> <span class="nv">support</span>, <span class="nv">in</span> <span class="nv">the</span> <span class="nv">form</span> <span class="nv">of</span> <span class="nv">a</span> <span class="nv">plug</span><span class="o">-</span><span class="nv">in</span> <span class="nv">policy</span> <span class="nv">daemon</span>. <span class="nv">This</span> <span class="nv">is</span> <span class="nv">the</span> <span class="nv">preferred</span> <span class="nv">integration</span> <span class="nv">model</span>, <span class="nv">at</span> <span class="nv">least</span> <span class="k">until</span> <span class="nv">SPF</span> <span class="nv">is</span> <span class="nv">mandated</span> <span class="nv">by</span> <span class="nv">standards</span>.
</pre></div>
<p>So for our Postfix setup, we decided to use <a href="https://launchpad.net/pypolicyd-spf">pypolicyd-spf</a>: lightweight, easy to set up, and written in Python. The needed packages are already available in EPEL, but we also <a href="https://cbs.centos.org/koji/packageinfo?packageID=5142">rebuilt</a> it on <a href="https://cbs.centos.org/koji/packageinfo?packageID=5142">CBS</a>. Once installed, <a href="http://bazaar.launchpad.net/~kitterman/pypolicyd-spf/1.3/view/head:/policyd-spf.conf.commented">configured</a> <em>and</em> <a href="http://bazaar.launchpad.net/~kitterman/pypolicyd-spf/1.3/view/head:/policyd-spf.1#L251">integrated</a> with Postfix, you'll start (depending on your .conf settings) blocking mail that arrives at your SMTP servers from IPs/servers not listed in the originator domain's SPF policy (if any).</p>
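<p>For reference, the Postfix side of that integration usually boils down to spawning the policy daemon from master.cf and querying it from main.cf — a sketch, where the daemon path is the one the Fedora/EPEL package installs (double-check it on your system) and the service name "policyd-spf" is just a convention:</p>

```
# /etc/postfix/master.cf : spawn the SPF policy daemon
policyd-spf  unix  -  n  n  -  0  spawn
    user=nobody argv=/usr/libexec/postfix/policyd-spf

# /etc/postfix/main.cf : query it before accepting a recipient
policyd-spf_time_limit = 3600
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    check_policy_service unix:private/policyd-spf
```

<p>Keep <code>check_policy_service</code> <em>after</em> <code>reject_unauth_destination</code>, so SPF lookups never run for mail you would reject anyway.</p>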
<p>If you have issues with our current SPF policy on centos.org, feel free to reach us in #centos-devel on irc.freenode.net to discuss it.</p>Music recording on CentOS 7 DAW2017-01-05T00:00:00+01:002017-01-05T00:00:00+01:00Fabian Arrotintag:arrfab.net,2017-01-05:/posts/2017/Jan/05/music-recording-on-centos-7-daw/<p>There was something that had been on my (private) TODO list for quite some time: being able to record music, mix, and output a single song from multiple recorded tracks. For that you need a Digital Audio Workstation (<a href="https://en.wikipedia.org/wiki/Digital_audio_workstation">DAW</a>).</p>
<p>I have several instruments at home (electric guitars, bass, digital piano and also drums), but due to lack of (free) time I had never investigated the DAW part on Linux, and especially on CentOS. So having some "offline" days during the holidays helped me investigate that and set up a small DAW on a recycled machine. Let's consider the hardware and software parts.</p>
<h3>Hardware support</h3>
<p>I personally still own a <a href="http://line6.com/legacy/toneportux2/">Line6 TonePort UX2</a> interface which is now more than 10 years old, and which I used in the past on an iMac. The iMac still runs, but exclusively with CentOS 7 these days, and the TonePort was just collecting dust. When I tried to plug it in, it wasn't detected, mainly because of the kernel config, so I (gently) asked <a href="https://wiki.centos.org/AkemiYagi">Toracat</a> to <a href="https://bugs.centos.org/view.php?id=9569">enable</a> the required kernel module in the centos-plus kernel; with that kernel, the TonePort UX2 is seen as an external sound card. Good:</p>
<div class="highlight"><pre><span></span><span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">usb</span> <span class="mi">3</span><span class="o">-</span><span class="mi">2</span><span class="p">:</span> <span class="k">new</span> <span class="k">full</span><span class="o">-</span><span class="n">speed</span> <span class="n">USB</span> <span class="n">device</span> <span class="nb">number</span> <span class="mi">2</span> <span class="k">using</span> <span class="n">xhci_hcd</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">usb</span> <span class="mi">3</span><span class="o">-</span><span class="mi">2</span><span class="p">:</span> <span class="k">New</span> <span class="n">USB</span> <span class="n">device</span> <span class="k">found</span><span class="p">,</span> <span class="n">idVendor</span><span class="o">=</span><span class="mi">0</span><span class="n">e41</span><span class="p">,</span> <span class="n">idProduct</span><span class="o">=</span><span class="mi">4142</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">usb</span> <span class="mi">3</span><span class="o">-</span><span class="mi">2</span><span class="p">:</span> <span class="k">New</span> <span class="n">USB</span> <span class="n">device</span> <span class="n">strings</span><span class="p">:</span> <span class="n">Mfr</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">Product</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">SerialNumber</span><span class="o">=</span><span class="mi">0</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">usb</span> <span class="mi">3</span><span class="o">-</span><span class="mi">2</span><span class="p">:</span> <span class="n">Product</span><span class="p">:</span> <span class="n">TonePort</span> <span class="n">UX2</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">usb</span> <span class="mi">3</span><span class="o">-</span><span class="mi">2</span><span class="p">:</span> <span class="n">Manufacturer</span><span class="p">:</span> <span class="n">Line</span> <span class="mi">6</span>
<span class="n">geonosis</span> <span class="n">mtp</span><span class="o">-</span><span class="n">probe</span><span class="p">:</span> <span class="n">checking</span> <span class="n">bus</span> <span class="mi">3</span><span class="p">,</span> <span class="n">device</span> <span class="mi">2</span><span class="p">:</span> <span class="ss">"/sys/devices/pci0000:00/0000:00:14.0/</span>
<span class="ss">usb3/3-2"</span>
<span class="n">geonosis</span> <span class="n">mtp</span><span class="o">-</span><span class="n">probe</span><span class="p">:</span> <span class="n">bus</span><span class="p">:</span> <span class="mi">3</span><span class="p">,</span> <span class="n">device</span><span class="p">:</span> <span class="mi">2</span> <span class="n">was</span> <span class="k">not</span> <span class="n">an</span> <span class="n">MTP</span> <span class="n">device</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">line6usb</span><span class="p">:</span> <span class="n">module</span> <span class="k">is</span> <span class="k">from</span> <span class="n">the</span> <span class="n">staging</span> <span class="n">directory</span><span class="p">,</span> <span class="n">the</span> <span class="n">quality</span> <span class="k">is</span> <span class="n">unkn</span>
<span class="n">own</span><span class="p">,</span> <span class="n">you</span> <span class="n">have</span> <span class="n">been</span> <span class="n">warned</span><span class="p">.</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">line6usb</span> <span class="mi">3</span><span class="o">-</span><span class="mi">2</span><span class="p">:</span><span class="mi">1</span><span class="p">.</span><span class="mi">0</span><span class="p">:</span> <span class="n">Line6</span> <span class="n">TonePort</span> <span class="n">UX2</span> <span class="k">found</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">line6usb</span><span class="p">:</span> <span class="n">module</span> <span class="k">is</span> <span class="k">from</span> <span class="n">the</span> <span class="n">staging</span> <span class="n">directory</span><span class="p">,</span> <span class="n">the</span> <span class="n">quality</span> <span class="k">is</span> <span class="n">unkn</span>
<span class="n">own</span><span class="p">,</span> <span class="n">you</span> <span class="n">have</span> <span class="n">been</span> <span class="n">warned</span><span class="p">.</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">line6usb</span> <span class="mi">3</span><span class="o">-</span><span class="mi">2</span><span class="p">:</span><span class="mi">1</span><span class="p">.</span><span class="mi">0</span><span class="p">:</span> <span class="n">Line6</span> <span class="n">TonePort</span> <span class="n">UX2</span> <span class="n">now</span> <span class="n">attached</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">line6usb</span> <span class="mi">3</span><span class="o">-</span><span class="mi">2</span><span class="p">:</span><span class="mi">1</span><span class="p">.</span><span class="mi">1</span><span class="p">:</span> <span class="n">Line6</span> <span class="n">TonePort</span> <span class="n">UX2</span> <span class="k">found</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">usbcore</span><span class="p">:</span> <span class="n">registered</span> <span class="k">new</span> <span class="n">interface</span> <span class="n">driver</span> <span class="n">line6usb</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">usbcore</span><span class="p">:</span> <span class="n">registered</span> <span class="k">new</span> <span class="n">interface</span> <span class="n">driver</span> <span class="n">snd_usb_toneport</span>
</pre></div>
<p>I also recently offered myself a small gift to play with: a small <a href="http://shop.fender.com/en-BE/guitar-amplifiers/contemporary-digital/mustang-i-v.2/2300104900.html#start=1">Fender Mustang</a> guitar amplifier, small enough to fit under my desk in my home office, with built-in amp/effects emulation, plus a USB output to send the sound directly to the computer. (Much easier for quick recording than setting up a microphone in front of my other Fender Custom Vibrolux Reverb tube amp, and my neighbors are also grateful for that decision.)</p>
<p>The good news is that it's directly recognized as another sound card without any kernel module to activate/enable : </p>
<div class="highlight"><pre><span></span><span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">usb</span> <span class="mi">3</span><span class="o">-</span><span class="mi">1</span><span class="p">:</span> <span class="k">new</span> <span class="k">full</span><span class="o">-</span><span class="n">speed</span> <span class="n">USB</span> <span class="n">device</span> <span class="nb">number</span> <span class="mi">3</span> <span class="k">using</span> <span class="n">xhci_hcd</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">usb</span> <span class="mi">3</span><span class="o">-</span><span class="mi">1</span><span class="p">:</span> <span class="k">New</span> <span class="n">USB</span> <span class="n">device</span> <span class="k">found</span><span class="p">,</span> <span class="n">idVendor</span><span class="o">=</span><span class="mi">1</span><span class="n">ed8</span><span class="p">,</span> <span class="n">idProduct</span><span class="o">=</span><span class="mi">0014</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">usb</span> <span class="mi">3</span><span class="o">-</span><span class="mi">1</span><span class="p">:</span> <span class="k">New</span> <span class="n">USB</span> <span class="n">device</span> <span class="n">strings</span><span class="p">:</span> <span class="n">Mfr</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">Product</span><span class="o">=</span><span class="mi">2</span><span class="p">,</span> <span class="n">SerialNumber</span><span class="o">=</span><span class="mi">3</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">usb</span> <span class="mi">3</span><span class="o">-</span><span class="mi">1</span><span class="p">:</span> <span class="n">Product</span><span class="p">:</span> <span class="n">Mustang</span> <span class="n">Amplifier</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">usb</span> <span class="mi">3</span><span class="o">-</span><span class="mi">1</span><span class="p">:</span> <span class="n">Manufacturer</span><span class="p">:</span> <span class="n">FMIC</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">usb</span> <span class="mi">3</span><span class="o">-</span><span class="mi">1</span><span class="p">:</span> <span class="n">SerialNumber</span><span class="p">:</span> <span class="mi">05</span><span class="n">D7FF373837594743075518</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">hid</span><span class="o">-</span><span class="n">generic</span> <span class="mi">0003</span><span class="p">:</span><span class="mi">1</span><span class="n">ED8</span><span class="p">:</span><span class="mi">0014</span><span class="p">.</span><span class="mi">0001</span><span class="p">:</span> <span class="n">hiddev0</span><span class="p">,</span><span class="n">hidraw0</span><span class="p">:</span> <span class="n">USB</span> <span class="n">HID</span> <span class="n">v1</span><span class="p">.</span><span class="mi">10</span> <span class="n">Device</span> <span class="p">[</span><span class="n">FMIC</span> <span class="n">Mustang</span> <span class="n">Amplifier</span><span class="p">]</span> <span class="k">on</span> <span class="n">usb</span><span class="o">-</span><span class="mi">0000</span><span class="p">:</span><span class="mi">00</span><span class="p">:</span><span class="mi">14</span><span class="p">.</span><span class="mi">0</span><span class="o">-</span><span class="mi">1</span><span class="o">/</span><span class="n">input0</span>
<span class="n">geonosis</span> <span class="n">mtp</span><span class="o">-</span><span class="n">probe</span><span class="p">:</span> <span class="n">checking</span> <span class="n">bus</span> <span class="mi">3</span><span class="p">,</span> <span class="n">device</span> <span class="mi">3</span><span class="p">:</span> <span class="ss">"/sys/devices/pci0000:00/0000:00:14.0/usb3/3-1"</span>
<span class="n">geonosis</span> <span class="n">mtp</span><span class="o">-</span><span class="n">probe</span><span class="p">:</span> <span class="n">bus</span><span class="p">:</span> <span class="mi">3</span><span class="p">,</span> <span class="n">device</span><span class="p">:</span> <span class="mi">3</span> <span class="n">was</span> <span class="k">not</span> <span class="n">an</span> <span class="n">MTP</span> <span class="n">device</span>
<span class="n">geonosis</span> <span class="n">kernel</span><span class="p">:</span> <span class="n">usbcore</span><span class="p">:</span> <span class="n">registered</span> <span class="k">new</span> <span class="n">interface</span> <span class="n">driver</span> <span class="n">snd</span><span class="o">-</span><span class="n">usb</span><span class="o">-</span><span class="n">audio</span>
</pre></div>
<p>With those two additional sound cards detected, it now looks like this:</p>
<div class="highlight"><pre><span></span><span class="o">[</span><span class="n">arrfab@geonosis ~</span><span class="o">]</span><span class="err">$</span><span class="w"> </span><span class="n">cat</span><span class="w"> </span><span class="o">/</span><span class="k">proc</span><span class="o">/</span><span class="n">asound</span><span class="o">/</span><span class="n">cards</span><span class="w"> </span>
<span class="w"> </span><span class="mi">0</span><span class="w"> </span><span class="o">[</span><span class="n">PCH </span><span class="o">]</span><span class="err">:</span><span class="w"> </span><span class="n">HDA</span><span class="o">-</span><span class="n">Intel</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">HDA</span><span class="w"> </span><span class="n">Intel</span><span class="w"> </span><span class="n">PCH</span><span class="w"></span>
<span class="w"> </span><span class="n">HDA</span><span class="w"> </span><span class="n">Intel</span><span class="w"> </span><span class="n">PCH</span><span class="w"> </span><span class="k">at</span><span class="w"> </span><span class="mh">0xd2530000</span><span class="w"> </span><span class="n">irq</span><span class="w"> </span><span class="mi">32</span><span class="w"></span>
<span class="w"> </span><span class="mi">1</span><span class="w"> </span><span class="o">[</span><span class="n">TonePortUX2 </span><span class="o">]</span><span class="err">:</span><span class="w"> </span><span class="n">line6usb</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">TonePort</span><span class="w"> </span><span class="n">UX2</span><span class="w"></span>
<span class="w"> </span><span class="n">Line6</span><span class="w"> </span><span class="n">TonePort</span><span class="w"> </span><span class="n">UX2</span><span class="w"> </span><span class="k">at</span><span class="w"> </span><span class="n">USB</span><span class="w"> </span><span class="mi">3</span><span class="o">-</span><span class="mi">2</span><span class="err">:</span><span class="mf">1.0</span><span class="w"></span>
<span class="w"> </span><span class="mi">2</span><span class="w"> </span><span class="o">[</span><span class="n">Amplifier </span><span class="o">]</span><span class="err">:</span><span class="w"> </span><span class="n">USB</span><span class="o">-</span><span class="n">Audio</span><span class="w"> </span><span class="o">-</span><span class="w"> </span><span class="n">Mustang</span><span class="w"> </span><span class="n">Amplifier</span><span class="w"></span>
<span class="w"> </span><span class="n">FMIC</span><span class="w"> </span><span class="n">Mustang</span><span class="w"> </span><span class="n">Amplifier</span><span class="w"> </span><span class="k">at</span><span class="w"> </span><span class="n">usb</span><span class="o">-</span><span class="mi">0000</span><span class="err">:</span><span class="mi">00</span><span class="err">:</span><span class="mf">14.0</span><span class="o">-</span><span class="mi">1</span><span class="p">,</span><span class="w"> </span><span class="k">full</span><span class="w"> </span><span class="n">speed</span><span class="w"></span>
</pre></div>
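<p>With the cards visible in <code>/proc/asound/cards</code>, a quick sanity check with plain ALSA tools confirms that capture works before involving any DAW — a sketch, where the card index (2, the Mustang in the listing above) has to be adjusted to your own setup:</p>

```shell
# list the capture-capable devices and note the card,device numbers
arecord -l

# record 5 seconds from card 2 / device 0, then play it back
arecord -D hw:2,0 -d 5 -f cd /tmp/capture-test.wav
aplay /tmp/capture-test.wav
```

<p>If the card rejects the <code>cd</code> format (16-bit/44.1kHz stereo), try an explicit one such as <code>-f S16_LE -r 48000 -c 2</code>.</p>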
<p>Great, now let's have a look at the software part !</p>
<h3>Software</h3>
<p>There are multiple ways to quickly record any sound from a sound card on Linux, and <a href="http://www.audacityteam.org/">Audacity</a> is well known for this: it comes with several effects, and you can quickly import, edit, cut and paste (and more!) sounds, even across multiple tracks. But when it comes to music recording, especially if you also want to play with <a href="https://en.wikipedia.org/wiki/MIDI">MIDI</a>, you need a proper sequencer. It's really great to see that on Linux you have multiple alternatives, but one that seems to be very popular in the Free and Open Source world is <a href="http://ardour.org/">Ardour</a>. As nothing was built for CentOS 7, I decided to create a <a href="https://copr.fedorainfracloud.org/coprs/arrfab/DAW-7/">DAW-7 COPR</a> repository that has everything I need (when combined with <a href="https://dl.fedoraproject.org/pub/epel/7/">EPEL</a> and/or <a href="http://li.nux.ro/download/nux/dextop/">Nux-Dextop</a>).</p>
<p>I then (re)built (thanks to the upstream Fedora maintainers!) multiple packages in that COPR repository, including (but not limited to):</p>
<ul>
<li><a href="http://ardour.org/">Ardour 5.5</a> : sequencer</li>
<li><a href="https://qjackctl.sourceforge.io/">Qjackctl</a> : frontend for needed <a href="http://www.jackaudio.org">jack-audio-connection-kit</a></li>
<li><a href="http://calf-studio-gear.org/">Calf</a> : very good effects/plugins for jack and so that can be used directly within ardour</li>
<li><a href="http://lv2plug.in/">LV2</a> : other effects/plugins</li>
<li><a href="http://guitarix.org/">Guitarix</a> : guitar/bass amp+effect simulator </li>
<li><a href="https://lmms.io/">LMMS</a> : another sequencer, more oriented towards MIDI/loops than audio recording from external devices</li>
<li><a href="http://www.hydrogen-music.org/hcms/node/2">Hydrogen</a> : a drum machine for when you can't record real drums but can program your own pattern[s]</li>
<li>... and much more ... :-)</li>
</ul>
<p>After having tested multiple settings (there is a <em>lot</em> to learn around this), I found myself comfortable with this:</p>
<div class="highlight"><pre><span></span><span class="n">sudo</span> <span class="n">su</span> <span class="o">-</span><span class="k">c</span> <span class="s1">'curl https://copr.fedorainfracloud.org/coprs/arrfab/DAW-7/repo/epel-7/arrfab-DAW-7-epel-7.repo > /etc/yum.repos.d/arrfab-daw.repo'</span>
<span class="n">sudo</span> <span class="n">yum</span> <span class="n">install</span> <span class="o">-</span><span class="n">y</span> <span class="n">ardour5</span> <span class="n">calf</span> <span class="n">lmms</span> <span class="n">hydrogen</span> <span class="n">qjackctl</span> <span class="n">jack</span><span class="o">-</span><span class="n">audio</span><span class="o">-</span><span class="k">connection</span><span class="o">-</span><span class="n">kit</span> <span class="n">jack_capture</span> <span class="n">guitarix</span> <span class="n">lv2</span><span class="o">-</span><span class="n">abGate</span> <span class="n">lv2</span><span class="o">-</span><span class="n">calf</span><span class="o">-</span><span class="n">plugins</span> <span class="n">lv2</span><span class="o">-</span><span class="n">drumgizmo</span> <span class="n">lv2</span><span class="o">-</span><span class="n">drumkv1</span> <span class="n">lv2</span><span class="o">-</span><span class="n">fomp</span><span class="o">-</span><span class="n">plugins</span> <span class="n">lv2</span><span class="o">-</span><span class="n">guitarix</span><span class="o">-</span><span class="n">plugins</span> <span class="n">lv2</span><span class="o">-</span><span class="n">invada</span><span class="o">-</span><span class="n">plugins</span> <span class="n">lv2</span><span class="o">-</span><span class="n">vocoder</span><span class="o">-</span><span class="n">plugins</span> <span class="n">lv2</span><span class="o">-</span><span class="n">x42</span><span class="o">-</span><span class="n">plugins</span> <span class="n">fluid</span><span class="o">-</span><span class="n">soundfont</span><span class="o">-</span><span class="n">gm</span> <span class="n">fluid</span><span class="o">-</span><span class="n">soundfont</span><span class="o">-</span><span class="n">gs</span>
</pre></div>
<p>One thing you have to know (but do read all the tutorials/documentation around this) is that your user needs to be part of the jackuser and audio groups on Linux to be able to use the needed Jack sound server. Jack is something you also have to master, but once you understand it, it's just a virtual view of what you'd otherwise do with real cables, plugging in and out of various hardware elements:</p>
<div class="highlight"><pre><span></span><span class="n">sudo</span> <span class="n">usermod</span> <span class="c1">--groups jackuser,audio --append $your_username</span>
</pre></div>
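<p>Note that group changes only apply to new login sessions; after logging out and back in, you can verify that both memberships took effect:</p>

```shell
# should list both jackuser and audio among your groups
id -nG $your_username
```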
<p>One website I recommend reading is <a href="http://libremusicproduction.com/">LibreMusicProduction</a>, as it has tons of howtos and also video tutorials about Ardour and other settings.
Something else worth mentioning if you just want drum loops: you can find some via Google, but I found some really good ones to start with (licensed in a way that lets you reuse them) on <a href="http://www.looperman.com">Looperman</a>, <a href="http://drumslive.com/dir/free-loops/">Drumslive</a> and <a href="http://freesound.org">Freesound</a>.</p>
<p>Who said that CentOS 7 was only for servers in datacenters and the Cloud ? :-)</p>
<p><img alt="CentOS 7 DAW" src="/images/centos-7-daw.png" title="CentOS 7 DAW"></p>
<p>Have fun on your CentOS 7 DAW.</p>Zabbix, selinux and CentOS 7.3.16112016-11-25T00:00:00+01:002016-11-25T00:00:00+01:00Fabian Arrotintag:arrfab.net,2016-11-25:/posts/2016/Nov/25/zabbix-selinux-and-centos-731611/<p>If you're using CentOS, you probably noticed that we have a <a href="https://wiki.centos.org/AdditionalResources/Repositories/CR">CR repository</a> containing all the built packages for the next minor release, so that people can "opt-in" and already use those packages before they are released with the full installable tree and iso images.</p>
<p>Using those packages on a subset of your nodes can be interesting, as it permits you to catch errors/issues/conflicts before the official release (and so before the symlink on the mirrors is switched to the new major.minor version).</p>
<p>For example, I myself tested some roles and found an issue with zabbix-agent refusing to start on a node fully updated/rebooted with CR pkgs (so what will become the 7.3.1611 release). The issue was due to selinux denying something that was allowed in the previous policy.</p>
<p>Here is what selinux had to say about it : </p>
<div class="highlight"><pre><span></span><span class="nv">type</span><span class="o">=</span><span class="nv">AVC</span> <span class="nv">msg</span><span class="o">=</span><span class="nv">audit</span><span class="ss">(</span><span class="mi">1480001303</span>.<span class="mi">440</span>:<span class="mi">2626</span><span class="ss">)</span>: <span class="nv">avc</span>: <span class="nv">denied</span> { <span class="nv">setrlimit</span> } <span class="k">for</span> <span class="nv">pid</span><span class="o">=</span><span class="mi">22682</span> <span class="nv">comm</span><span class="o">=</span><span class="s2">"</span><span class="s">zabbix_agentd</span><span class="s2">"</span> <span class="nv">scontext</span><span class="o">=</span><span class="nv">system_u</span>:<span class="nv">system_r</span>:<span class="nv">zabbix_agent_t</span>:<span class="nv">s0</span> <span class="nv">tcontext</span><span class="o">=</span><span class="nv">system_u</span>:<span class="nv">system_r</span>:<span class="nv">zabbix_agent_t</span>:<span class="nv">s0</span> <span class="nv">tclass</span><span class="o">=</span><span class="nv">process</span>
</pre></div>
<p>It's true that there was an update for selinux policy : from selinux-policy-3.13.1-60.el7_2.9.noarch to selinux-policy-3.13.1-102.el7.noarch.</p>
<p>What's interesting is that I found the same issue reported on the Zabbix side, but for zabbix-server (here it's the agent; the server is running fine): <a href="https://support.zabbix.com/browse/ZBX-10542">ZBX-10542</a></p>
<p>Clearly something that was working before is now denied, so I created a <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1398721">bug report</a>, and hopefully a fix will land in an updated selinux-policy package. But I doubt that it will be available soon.</p>
<p>So in the mean time, what you have to do is :</p>
<ul>
<li>either put zabbix_agent_t into permissive mode with <code>semanage permissive -a zabbix_agent_t</code></li>
<li>or build and distribute a custom selinux policy in your infra (my preferred method)</li>
</ul>
<p>For those interested, the following .te (type enforcement) will allow you to build a custom .pp selinux policy file (that you can load with semodule) : </p>
<div class="highlight"><pre><span></span>module local-zabbix 1.0;

require {
    type zabbix_agent_t;
    class process setrlimit;
}

#============= zabbix_agent_t ==============
allow zabbix_agent_t self:process setrlimit;
</pre></div>
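<p>To go from that .te source to a loadable .pp module, the usual tool chain (shipped in the policycoreutils packages) looks like this — assuming you saved the file as local-zabbix.te:</p>

```shell
# compile the type-enforcement source into a binary module
checkmodule -M -m -o local-zabbix.mod local-zabbix.te

# package it as a policy package (.pp)
semodule_package -o local-zabbix.pp -m local-zabbix.mod

# load it (as root) and verify it's active
semodule -i local-zabbix.pp
semodule -l | grep local-zabbix
```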
<p>You can now use your configuration management platform to distribute that built .pp policy (you don't need to build it on every node). I won't dive into details, but I wrote <a href="https://people.centos.org/arrfab/Events/Loadays-2014/managing%20selinux%20with%20your%20cfgmgmt%20solution.pdf">some slides</a> around this (for Ansible and Puppet) for a talk I gave some time ago, so feel free to read those, especially the last slides (with examples).</p>(ab)using Alias for Zabbix2016-10-21T00:00:00+02:002016-10-21T00:00:00+02:00Fabian Arrotintag:arrfab.net,2016-10-21:/posts/2016/Oct/21/abusing-alias-for-zabbix/<p>It's not a secret that we use Zabbix to monitor the CentOS.org infra. That's even a reason why we (re)build it for some other architectures, including aarch64, ppc64 and ppc64le on <a href="https://cbs.centos.org/koji/packageinfo?packageID=15">CBS</a>, and also <a href="http://armv7.dev.centos.org/repodir/c7-extras-1/zabbix/">armhfp</a>.</p>
<p>There are really cool things in Zabbix, including <a href="https://www.zabbix.com/documentation/3.0/manual/discovery/low_level_discovery">Low-Level Discovery</a>. With such discovery, you can create items/prototypes/triggers that will be applied "automagically" for each discovered network interface, or mounted filesystem. For example, the default template (if you still use it) has such item prototypes, and also graphs for each discovered network interface showing you the bandwidth usage on those network interfaces.</p>
<p>But what happens if you suddenly want, for example, to create some <a href="https://www.zabbix.com/documentation/3.0/manual/config/items/itemtypes/calculated">calculated item</a> on top of those ? Well, the issue is that from one node to the other, the interface name can be eth0, or sometimes eth1, and with CentOS 7 things started to also move to the new naming scheme, so you can have something like enp4s0f0. I wanted to create a template that would fit-them-all, so I had a look at calculated item and thought "well, easy : let's have that calculated item use a user macro that would define the name of the interface we really want …</p><p>It's not a secret that we use Zabbix to monitor the CentOS.org infra. That's even a reason why we (re)build it for some other architectures, including aarch64, ppc64 and ppc64le on <a href="https://cbs.centos.org/koji/packageinfo?packageID=15">CBS</a> and also <a href="http://armv7.dev.centos.org/repodir/c7-extras-1/zabbix/">armhfp</a></p>
<p>There are really cool things in Zabbix, including <a href="https://www.zabbix.com/documentation/3.0/manual/discovery/low_level_discovery">Low-Level Discovery</a>. With such discovery, you can create items/prototypes/triggers that will be applied "automagically" for each discovered network interface, or mounted filesystem. For example, the default template (if you still use it) has such item prototypes, and also graphs for each discovered network interface showing you the bandwidth usage on those network interfaces.</p>
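<p>As an illustration, the item prototypes in such a template are keyed on LLD macros like {#IFNAME}, which Zabbix substitutes for each interface returned by the discovery rule (keys as in the stock template):</p>

```
net.if.in[{#IFNAME}]    <- one item created per discovered interface
net.if.out[{#IFNAME}]
```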
<p>But what happens if you suddenly want, for example, to create some <a href="https://www.zabbix.com/documentation/3.0/manual/config/items/itemtypes/calculated">calculated item</a> on top of those ? Well, the issue is that from one node to the other, the interface name can be eth0, or sometimes eth1, and with CentOS 7 things started to also move to the new naming scheme, so you can have something like enp4s0f0. I wanted to create a template that would fit-them-all, so I had a look at calculated item and thought "well, easy : let's have that calculated item use a user macro that would define the name of the interface we really want to gather stats from ...". But it seems I was wrong. Zabbix <a href="https://www.zabbix.com/documentation/3.0/manual/config/macros/usermacros">user macros</a> can be used in multiple places, but not <a href="https://support.zabbix.com/browse/ZBX-11373">everywhere</a>. (It seems that I wasn't the only one not understanding the doc coverage for this, but at least that bug report will have an effect on the doc to clarify this.)</p>
<p>It was while discussing this in #zabbix (on irc.freenode.net) that <a href="https://twitter.com/real_richlv">RichLV</a> pointed me to something that could be interesting for my case : <a href="https://www.zabbix.com/documentation/3.0/manual/appendix/config/zabbix_agentd">Alias</a>. I must admit that it was the first time I had heard about it, and I don't even know when it landed in Zabbix (or if I just overlooked it at first sight).</p>
<p>So cool : now I can just have our config mgmt push, for example, a /etc/zabbix/zabbix_agentd.d/interface-alias.conf file that looks like this, and then reload zabbix-agent : </p>
<div class="highlight"><pre><span></span><span class="nv">Alias</span><span class="o">=</span><span class="nv">net</span>.<span class="k">if</span>.<span class="nv">default</span>.<span class="nv">out</span>:<span class="nv">net</span>.<span class="k">if</span>.<span class="nv">out</span>[<span class="nv">enp4s0f0</span>]
<span class="nv">Alias</span><span class="o">=</span><span class="nv">net</span>.<span class="k">if</span>.<span class="nv">default</span>.<span class="nv">in</span>:<span class="nv">net</span>.<span class="k">if</span>.<span class="nv">in</span>[<span class="nv">enp4s0f0</span>]
</pre></div>
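<p>As a stand-in for what puppet does for us, here is a shell sketch that detects the interface carrying the default IPv4 route and emits those two Alias lines (the eth0 fallback and the local output file are just for illustration ; the real target is /etc/zabbix/zabbix_agentd.d/interface-alias.conf, followed by an agent reload):</p>

```shell
# Detect the default-route interface (iproute2); fall back to eth0 so the
# sketch still produces a file on a box without a default route.
iface=$(ip -o -4 route show to default 2>/dev/null | awk '{print $5; exit}')
iface=${iface:-eth0}

{
  printf 'Alias=net.if.default.out:net.if.out[%s]\n' "$iface"
  printf 'Alias=net.if.default.in:net.if.in[%s]\n'  "$iface"
} > interface-alias.conf   # real target: /etc/zabbix/zabbix_agentd.d/

cat interface-alias.conf
```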
<p>That means that now, whatever the interface name is (as puppet in our case will create that file for us), we'll automatically be able to get values from the net.if.default.out and net.if.default.in keys. Cool !</p>
<p>That also means that if you want to aggregate all this into a single key for a group of nodes (and so graph that too), you can do something always referencing those new keys (example for the total outgoing bandwidth for a group of hosts) :</p>
<div class="highlight"><pre><span></span><span class="nv">grpsum</span>[<span class="s2">"</span><span class="s">Your group name</span><span class="s2">"</span>,<span class="s2">"</span><span class="s">net.if.default.out</span><span class="s2">"</span>,<span class="nv">last</span>,<span class="mi">0</span>]
</pre></div>
<p>And from that point, you can also easily configure triggers, and graphs too.
Now going back to work on some other calculated items for total bandwidth usage for a period of time and triggers based on some max_bw_usage user macro.</p>CentOS Infra public service dashboard2016-09-22T00:00:00+02:002016-09-22T00:00:00+02:00Fabian Arrotintag:arrfab.net,2016-09-22:/posts/2016/Sep/22/centos-infra-public-service-dashboard/<p>As soon as you're running some IT services, there is one thing that you already know : you'll have <a href="https://en.wikipedia.org/wiki/Downtime">downtimes</a>, despite all your efforts to avoid those...</p>
<p>As the old joke says : <code>"What's up ?" asked the Boss. "Hopefully everything !" answered the SysAdmin guy ....</code></p>
<p>You probably know that the CentOS infra is itself widespread, and subject to quick moves too. Recently we had to <a href="https://lists.centos.org/pipermail/centos-announce/2016-September/022065.html">announce</a> an important DC relocation that impacts some of our crucial and publicly facing services. That one falls in the "scheduled and known outages" category, and can be prepared. For such "downtime" we always announce it through several channels, like sending a mail to the centos-announce, centos-devel (and in this case, also to the ci-users) <a href="https://lists.centos.org">mailing lists</a>. But even when we announce that in advance, some people forget about it, or people using (sometimes "indirectly") the concerned service are surprised and then ask about it (usually in #centos or #centos-devel on irc.freenode.net).</p>
<p>In parallel to those "scheduled outages", we also have the worst ones : the unscheduled ones. For those ones, depending on the impact/criticality of the affected service, and also the estimated <a href="https://en.wikipedia.org/wiki/Recovery_time_objective">RTO</a>, we also send a mail to the concerned mailing lists (or not …</p><p>As soon as you're running some IT services, there is one thing that you already know : you'll have <a href="https://en.wikipedia.org/wiki/Downtime">downtimes</a>, despite all your efforts to avoid those...</p>
<p>As the old joke says : <code>"What's up ?" asked the Boss. "Hopefully everything !" answered the SysAdmin guy ....</code></p>
<p>You probably know that the CentOS infra is itself widespread, and subject to quick moves too. Recently we had to <a href="https://lists.centos.org/pipermail/centos-announce/2016-September/022065.html">announce</a> an important DC relocation that impacts some of our crucial and publicly facing services. That one falls in the "scheduled and known outages" category, and can be prepared. For such "downtime" we always announce it through several channels, like sending a mail to the centos-announce, centos-devel (and in this case, also to the ci-users) <a href="https://lists.centos.org">mailing lists</a>. But even when we announce that in advance, some people forget about it, or people using (sometimes "indirectly") the concerned service are surprised and then ask about it (usually in #centos or #centos-devel on irc.freenode.net).</p>
<p>In parallel to those "scheduled outages", we also have the worst ones : the unscheduled ones. For those ones, depending on the impact/criticality of the affected service, and also the estimated <a href="https://en.wikipedia.org/wiki/Recovery_time_objective">RTO</a>, we also send a mail to the concerned mailing lists (or not).</p>
<p>So we just decided to show a very simple and public dashboard for the CentOS Infra, but only covering the publicly facing services, to have a quick overview of that part of the Infra. It's now live and hosted on <a href="https://status.centos.org">https://status.centos.org</a>.</p>
<p>We use <a href="http://www.zabbix.com">Zabbix</a> to monitor our Infra (so we build it for multiple arches, like x86_64, i386, ppc64, ppc64le, aarch64 and also armhfp), including through remote zabbix <a href="https://www.zabbix.com/documentation/3.0/manual/concepts/proxy">proxies</a> (because of our "distributed" network setup right now, with machines all around the world).
For some of those services listed on status.centos.org, we can "manually" announce a downtime/maintenance period, but Zabbix also updates that dashboard on its own.
The simple way to link those together was to use zabbix <a href="https://www.zabbix.com/documentation/3.0/manual/config/notifications/media/script">custom alertscripts</a> : you can even customize those to send specific <a href="https://www.zabbix.com/documentation/3.0/manual/appendix/macros/supported_by_location">macros</a> and have the alertscript parse them and then update the dashboard.</p>
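<p>To give an idea of the shape of such a script (the function name and the STATUS= convention below are made up for the example ; the real one talks to the dashboard tool instead of a log file) : Zabbix invokes an alertscript with recipient, subject and message as arguments, and the action's message template can embed whatever macros you need, which the script then parses :</p>

```shell
# Hypothetical alertscript core : parse a made-up STATUS= line out of the
# message body that the Zabbix action template would have injected.
notify_dashboard() {
    subject="$1"
    message="$2"
    status=$(printf '%s\n' "$message" | sed -n 's/^STATUS=//p')
    # the real script would push this to the dashboard instead of a file
    printf '%s|%s\n' "${status:-unknown}" "$subject" >> dashboard.log
}

# simulate what Zabbix would send for a triggered web scenario
notify_dashboard "PROBLEM: web scenario failed" "STATUS=outage
HOST={HOST.NAME}"
```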
<p>We hope to enhance that dashboard in the future, but it's a good start, and I have to thank again <a href="https://twitter.com/puiterwijkFP">Patrick Uiterwijk</a> who wrote that <a href="https://git.fedorahosted.org/git/fedora-status">tool</a> for Fedora initially (and that we adapted to our needs).</p>Generating multiple certificates with Letsencrypt from a single instance2016-05-03T00:00:00+02:002016-05-03T00:00:00+02:00Fabian Arrotintag:arrfab.net,2016-05-03:/posts/2016/May/03/generating-multiple-certificates-with-letsencrypt-from-a-single-instance/<p>Recently I was discussing with some people about TLS everywhere, and we then started to discuss the <a href="https://letsencrypt.org/">Letsencrypt</a> initiative.
I had to admit that I just tested it some time ago (just for "fun"), but I suddenly looked at it from a different angle : while the most common use case is to install/run the letsencrypt client on your node to directly configure it, that's something I didn't want to have to deal with. I still think that proper web server configuration has to happen through cfgmgmt, and not through another process (and same for the key/cert distribution, something for a different blog post maybe).</p>
<p>So if you're automatically (pushing|pulling) your web servers' configuration from $cfgmgmt, but you want to use/deploy TLS certificates signed by letsencrypt, what can you do ? Well, the good news is that you're not forced to let the letsencrypt client touch your configuration at all : you can use the "certonly" option to just generate the private key locally, send the <a href="https://en.wikipedia.org/wiki/Certificate_signing_request">csr</a> and get the signed cert back (and the whole chain too).
One thing to know about letsencrypt is that the validation/verification …</p><p>Recently I was discussing with some people about TLS everywhere, and we then started to discuss the <a href="https://letsencrypt.org/">Letsencrypt</a> initiative.
I had to admit that I just tested it some time ago (just for "fun"), but I suddenly looked at it from a different angle : while the most common use case is to install/run the letsencrypt client on your node to directly configure it, that's something I didn't want to have to deal with. I still think that proper web server configuration has to happen through cfgmgmt, and not through another process (and same for the key/cert distribution, something for a different blog post maybe).</p>
<p>So if you're automatically (pushing|pulling) your web servers' configuration from $cfgmgmt, but you want to use/deploy TLS certificates signed by letsencrypt, what can you do ? Well, the good news is that you're not forced to let the letsencrypt client touch your configuration at all : you can use the "certonly" option to just generate the private key locally, send the <a href="https://en.wikipedia.org/wiki/Certificate_signing_request">csr</a> and get the signed cert back (and the whole chain too).
One thing to know about letsencrypt is that the validation/verification process isn't the one you see at most companies providing CA/signing capabilities : as there is no ID/paper verification (or anything else), the only validation for the domain/sub-domain you want to generate a certificate for happens over an http request (basically you create a file with a challenge, their "ACME" server[s] send a request to retrieve that file back, and they validate its content).</p>
<p>So what are our options then ? The letsencrypt documentation mentions several <a href="http://letsencrypt.readthedocs.io/en/latest/using.html#plugins">plugins</a>, like manual (requires you to create the file with the challenge answer on the webserver yourself, then launch the validation process), or standalone (doesn't work if you already have a httpd/nginx process, as there will be a port conflict), or even webroot (works fine, as it will just write the file itself under /.well-known/ under the DocumentRoot).</p>
<p>The webroot plugin seems easy, but as said, we don't even want to install letsencrypt on the web server[s]. Even worse, suppose (and that's the case I had in mind) that you have multiple web nodes configured in a kind of <a href="https://en.wikipedia.org/wiki/Content_delivery_network">CDN</a> way : you don't want to distribute that file on all the nodes for validation/verification (when using the "manual" plugin), and you'd have to do it on <em>all</em> the nodes (as you don't know in advance which one will be verified by the ACME server).</p>
<p>So what about something centralized (where you'd run the letsencrypt client locally) for all your certs (including some with <a href="https://en.wikipedia.org/wiki/SubjectAltName">SANs</a>) in a transparent way ? So I thought about something like this :</p>
<p><img alt="Single Letsencrypt node" src="/images/central-le-process.png" title="central letsencrypt node"></p>
<p>The idea would be to :</p>
<ul>
<li>use a central node : let's call it central.domain.com (vm, docker container, make-your-choice-here) to launch the letsencrypt client</li>
<li>have the ACME server transparently hitting one of the web servers, without any changed/uploaded file</li>
<li>have the server receiving the GET request for that file use the letsencrypt central node as a backend</li>
<li>the ACME server being happy, and so signed certificates being available automatically on the centralized letsencrypt node.</li>
</ul>
<p>The good news is that it's possible and even really easy to implement, through <a href="https://httpd.apache.org/docs/current/mod/mod_proxy.html#proxypass">ProxyPass</a> (for httpd/Apache web server) or <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass">proxy_pass</a> (for nginx based setup)</p>
<p>For example, for the httpd vhost config for sub1.domain.com (three nodes in our example) we can just add this in the .conf file :</p>
<div class="highlight"><pre><span></span><span class="nt"><Location</span> <span class="err">"/.well-known/"</span><span class="nt">></span>
ProxyPass "http://central.domain.com/.well-known/"
<span class="nt"></Location></span>
</pre></div>
<p>So now, once in place everywhere, you can generate the cert for that domain on the central letsencrypt node (assuming that httpd is running on that node, and reachable from the "frontend" nodes, and that /var/www/html is indeed the DocumentRoot (default) for httpd on that node): </p>
<div class="highlight"><pre><span></span><span class="n">letsencrypt</span><span class="w"> </span><span class="n">certonly</span><span class="w"> </span><span class="c1">--webroot --webroot-path /var/www/html --manual-public-ip-logging-ok --agree-tos --email you@domain.com -d sub1.domain.com</span>
</pre></div>
<p>Same if you run nginx instead (let's assume this for sub2.domain.com and sub3.domain.com) : you just have to add a snippet in your vhost .conf file (and before the / location definition too): </p>
<div class="highlight"><pre><span></span><span class="nt">location</span> <span class="o">/</span><span class="p">.</span><span class="nc">well-known</span><span class="o">/</span> <span class="p">{</span>
<span class="err">proxy_pass</span> <span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">central</span><span class="o">.</span><span class="n">domain</span><span class="o">.</span><span class="n">com</span><span class="o">/.</span><span class="n">well-known</span><span class="o">/</span> <span class="p">;</span>
<span class="p">}</span>
</pre></div>
<p>And then on the central node, do the same thing, but you can add multiple -d for multiple SubjectAltName in the same cert :</p>
<div class="highlight"><pre><span></span><span class="n">letsencrypt</span><span class="w"> </span><span class="n">certonly</span><span class="w"> </span><span class="c1">--webroot --webroot-path /var/www/html --manual-public-ip-logging-ok --agree-tos --email you@domain.com -d sub2.domain.com -d sub3.domain.com</span>
</pre></div>
<p>Transparent, smart, easy to do, and even something you can deploy when you need to renew and then remove to get back to the initial config files (if you don't want to have those ProxyPass directives active all the time).</p>
<p>One thing you should also know is that once you have proper TLS in place, it's usually better to transparently redirect all requests hitting your http server to the https version. Most people will do that (next example for httpd/apache) like this : </p>
<div class="highlight"><pre><span></span> <span class="n">RewriteEngine</span> <span class="k">On</span>
<span class="n">RewriteCond</span> <span class="o">%</span><span class="err">{</span><span class="n">HTTPS</span><span class="err">}</span> <span class="o">!=</span><span class="k">on</span>
<span class="n">RewriteRule</span> <span class="o">^/?</span><span class="p">(.</span><span class="o">*</span><span class="p">)</span> <span class="n">https</span><span class="p">:</span><span class="o">//%</span><span class="err">{</span><span class="k">SERVER_NAME</span><span class="err">}</span><span class="o">/</span><span class="err">$</span><span class="mi">1</span> <span class="p">[</span><span class="n">R</span><span class="p">,</span><span class="n">L</span><span class="p">]</span>
</pre></div>
<p>That's good, but when you renew the certificate, you'll probably want to be sure that the GET request for /.well-known/* will continue to work over http (from the ACME server), so we can tune those rules a little bit (<a href="https://httpd.apache.org/docs/2.2/mod/mod_rewrite.html#rewritecond">RewriteCond</a> directives are cumulative, so it will not redirect if the URL starts with .well-known) : </p>
<div class="highlight"><pre><span></span> <span class="n">RewriteEngine</span> <span class="k">On</span>
<span class="n">RewriteCond</span> <span class="err">$</span><span class="mi">1</span> <span class="o">!^</span><span class="p">.</span><span class="n">well</span><span class="o">-</span><span class="n">known</span>
<span class="n">RewriteCond</span> <span class="o">%</span><span class="err">{</span><span class="n">HTTPS</span><span class="err">}</span> <span class="o">!=</span><span class="k">on</span>
<span class="n">RewriteRule</span> <span class="o">^/?</span><span class="p">(.</span><span class="o">*</span><span class="p">)</span> <span class="n">https</span><span class="p">:</span><span class="o">//%</span><span class="err">{</span><span class="k">SERVER_NAME</span><span class="err">}</span><span class="o">/</span><span class="err">$</span><span class="mi">1</span> <span class="p">[</span><span class="n">R</span><span class="p">,</span><span class="n">L</span><span class="p">]</span>
</pre></div>
<p>Different syntax, but same principle for nginx (again a snippet, not the full configuration file for that server/vhost):</p>
<div class="highlight"><pre><span></span><span class="nt">location</span> <span class="o">/</span><span class="p">.</span><span class="nc">well-known</span><span class="o">/</span> <span class="p">{</span>
<span class="err">proxy_pass</span> <span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">central</span><span class="o">.</span><span class="n">domain</span><span class="o">.</span><span class="n">com</span><span class="o">/.</span><span class="n">well-known</span><span class="o">/</span> <span class="p">;</span>
<span class="p">}</span>
<span class="nt">location</span> <span class="o">/</span> <span class="p">{</span>
<span class="err">rewrite</span> <span class="err">^</span> <span class="n">https</span><span class="p">:</span><span class="o">//</span><span class="err">$</span><span class="n">server_name</span><span class="err">$</span><span class="n">request_uri</span><span class="o">?</span> <span class="n">permanent</span><span class="p">;</span>
<span class="p">}</span>
</pre></div>
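<p>One follow-up worth automating once the certs are distributed is a validity check ; openssl's <code>-checkend</code> makes that a one-liner. A self-contained sketch (it generates a throwaway self-signed cert just to have something to check against ; point it at the real letsencrypt cert instead):</p>

```shell
# Throwaway self-signed cert (1 year) just so the check has input.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo" \
        -days 365 -keyout demo.key -out demo.pem 2>/dev/null

# -checkend N exits 0 if the cert is still valid N seconds from now.
if openssl x509 -in demo.pem -noout -checkend $((30 * 24 * 3600)); then
    echo "cert still valid in 30 days"
else
    echo "cert expires within 30 days" >&2
fi
```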
<p>Hope you found that useful, especially if you don't want to deploy letsencrypt everywhere but still want to use it to generate your keys/certs locally. Once done, you can then distribute/push/pull (depending on your cfgmgmt) those files, and don't forget to also implement proper monitoring for cert validity, and automation around that too (consider that your homework).</p>IPv6 connectivity status within the CentOS.org infra2016-04-29T00:00:00+02:002016-04-29T00:00:00+02:00Fabian Arrotintag:arrfab.net,2016-04-29:/posts/2016/Apr/29/ipv6-connectivity-status-within-the-centosorg-infra/<p>Recently, some people started to ask for proper IPv6/AAAA records for some of our public mirror infrastructure, like mirror.centos.org, and also <a href="https://wiki.centos.org/HowTos/CreatePublicMirrors">msync.centos.org</a></p>
<p>The reason is that a lot of people are now using IPv6 wherever possible, and from a CentOS point of view we should ensure that everybody can get content over (legacy) ipv4 and ipv6. Funny that I call ipv4 "legacy", as we have to admit that it's still the default everywhere, even in 2016 with the available pools now exhausted.</p>
<p>While we already had some AAAA records for some of our public nodes (like <a href="https://www.centos.org">www.centos.org</a> as an example), I started to "chase" after proper and native ipv6 connectivity for our nodes.
That's when I had to get in touch with all our valuable <a href="https://www.centos.org/sponsors">sponsors</a>. First thing to say is that we'd like to thank them all for their support for the CentOS Project over the years : it wouldn't have been possible to deliver multiple terabytes of data per month without their sponsorship !</p>
<p>WRT ipv6 connectivity, that's where the results of my quest were really different : while some DCs support ipv6 natively, and even answer you in 5 minutes when asking for a /64 …</p><p>Recently, some people started to ask for proper IPv6/AAAA records for some of our public mirror infrastructure, like mirror.centos.org, and also <a href="https://wiki.centos.org/HowTos/CreatePublicMirrors">msync.centos.org</a></p>
<p>The reason is that a lot of people are now using IPv6 wherever possible, and from a CentOS point of view we should ensure that everybody can get content over (legacy) ipv4 and ipv6. Funny that I call ipv4 "legacy", as we have to admit that it's still the default everywhere, even in 2016 with the available pools now exhausted.</p>
<p>While we already had some AAAA records for some of our public nodes (like <a href="https://www.centos.org">www.centos.org</a> as an example), I started to "chase" after proper and native ipv6 connectivity for our nodes.
That's when I had to get in touch with all our valuable <a href="https://www.centos.org/sponsors">sponsors</a>. First thing to say is that we'd like to thank them all for their support for the CentOS Project over the years : it wouldn't have been possible to deliver multiple terabytes of data per month without their sponsorship !</p>
<p>WRT ipv6 connectivity, that's where the results of my quest were really different : while some DCs support ipv6 natively, and even answer you in 5 minutes when asking for a /64 subnet to be allocated, some others still aren't ipv6 ready : in the worst case the answer was "nothing ready and no plan for that", and sometimes the answer received was something like "it's on the roadmap for 2018/2019".</p>
<p>The good news is that ~30% of our nodes behind msync.centos.org now have ipv6 connectivity, so the next step is to test our various configurations (distributed by puppet) and then also our GeoIP redirection (done at the <a href="http://www.powerdns.com">PowerDNS</a> level for such records, for which we'll also then add proper AAAA records)</p>
<p>Hopefully we'll have that tested and announced soon, and also for the other public services that we provide.</p>
<p>Stay tuned for more info about ipv6 deployment within centos.org !</p>Kernel 3.10.0-327 issue on AMD Neo processor2015-12-15T00:00:00+01:002015-12-15T00:00:00+01:00Fabian Arrotintag:arrfab.net,2015-12-15:/posts/2015/Dec/15/kernel-3100-327-issue-on-amd-neo-processor/<p>As CentOS 7 (1511) was released, I thought it would be a good idea to update several of my home machines (including kids' workstations) with that version, and also a newer kernel.
Usually that's just a smooth operation, but sometimes some backported features/new features, especially in the kernel, can lead to some strange issues.
That's what happened for my older Thinkpad Edge : that's a cheap/small thinkpad that Lenovo made several years ago (circa 2011), and that I used a lot when travelling, as it only has an <a href="http://www.notebookcheck.net/AMD-Athlon-II-Neo-K325-Notebook-Processor.33886.0.html">AMD Athlon(tm) II Neo K345 Dual-Core Processor</a>.
So basically not a lot of horsepower, but still something convenient just to read your mails, remotely connect through ssh, or browse the web.
When rebooting on the newer kernel, it panics directly.</p>
<p>Two bug reports are open for this, one on the <a href="https://bugs.centos.org/view.php?id=9860">CentOS Bug tracker</a>, linked also to the <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1285235">upstream one</a>. Current status is that there is no kernel update that will fix this, but there is an easy-to-implement workaround : </p>
<ul>
<li>boot with the initcall_blacklist=clocksource_done_booting kernel parameter added (or reboot on previous kernel)</li>
<li>once booted, add the same parameter at the end of the GRUB_CMDLINE_LINUX=" .." line, in the file …</li></ul><p>As CentOS 7 (1511) was released, I thought it would be a good idea to update several of my home machines (including kids' workstations) with that version, and also a newer kernel.
Usually that's just a smooth operation, but sometimes some backported features/new features, especially in the kernel, can lead to some strange issues.
That's what happened for my older Thinkpad Edge : that's a cheap/small thinkpad that Lenovo made several years ago (circa 2011), and that I used a lot when travelling, as it only has an <a href="http://www.notebookcheck.net/AMD-Athlon-II-Neo-K325-Notebook-Processor.33886.0.html">AMD Athlon(tm) II Neo K345 Dual-Core Processor</a>.
So basically not a lot of horsepower, but still something convenient just to read your mails, remotely connect through ssh, or browse the web.
When rebooting on the newer kernel, it panics directly.</p>
<p>Two bug reports are open for this, one on the <a href="https://bugs.centos.org/view.php?id=9860">CentOS Bug tracker</a>, linked also to the <a href="https://bugzilla.redhat.com/show_bug.cgi?id=1285235">upstream one</a>. Current status is that there is no kernel update that will fix this, but there is an easy-to-implement workaround : </p>
<ul>
<li>boot with the initcall_blacklist=clocksource_done_booting kernel parameter added (or reboot on previous kernel)</li>
<li>once booted, add the same parameter at the end of the GRUB_CMDLINE_LINUX=" .." line, in the file /etc/default/grub</li>
<li>as root, run <code>grub2-mkconfig -o /etc/grub2.conf</code></li>
</ul>
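<p>The file edit itself is easily scripted ; a sketch that demonstrates it on a local copy of the file (on the real machine you'd target /etc/default/grub as root and then run the grub2-mkconfig step):</p>

```shell
param="initcall_blacklist=clocksource_done_booting"

# Local copy standing in for /etc/default/grub, just for the demo.
printf 'GRUB_TIMEOUT=5\nGRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"\n' > grub.demo

# Append the parameter inside the GRUB_CMDLINE_LINUX="..." value, once.
grep -q "$param" grub.demo || \
    sed -i "s/^\(GRUB_CMDLINE_LINUX=\".*\)\"/\1 $param\"/" grub.demo

grep '^GRUB_CMDLINE_LINUX' grub.demo
# then, on the real box : grub2-mkconfig -o /etc/grub2.conf
```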
<p>Hope it can help others too</p>Kernel IO wait and megaraid controller2015-12-01T00:00:00+01:002015-12-01T00:00:00+01:00Fabian Arrotintag:arrfab.net,2015-12-01:/posts/2015/Dec/01/kernel-io-wait-and-megaraid-controller/<p>Last Friday, while working on something else (working on "CentOS 7 userland" release for Armv7hl boards), I got notifications from our <a href="http://www.zabbix.org">Zabbix</a> monitoring instance complaining about <a href="https://www.zabbix.com/documentation/2.4/manual/web_monitoring">web scenarios</a> failing (errors due to timeouts), and then also about "Disk I/O is overloaded" triggers (checking the cpu iowait time). Usually you'd verify what happens in the Virtual Machine itself, but even connecting to the VM was difficult and slow. But once connected, nothing strange, and no real activity, not even on the disk (plenty of tools exist for this, but <a href="http://guichaz.free.fr/iotop/">iotop</a> is helpful to see which process is reading/writing to the disk in that case), but iowait was almost at 100%.</p>
<p>As said, it was happening suddenly for all Virtual Machines on the same hypervisor (CentOS 6 x86_64 KVM host), and even the hypervisor was suddenly complaining (but less, in comparison with the VMs) about iowait too. So obviously, it wasn't really something not being optimized at the hypervisor/VMs level, but something else. That rang a bell : if you have a raid controller and its battery for example needs to be replaced, the controller can decide to stop all read/write cache, so slowing down all IOs …</p><p>Last Friday, while working on something else (working on "CentOS 7 userland" release for Armv7hl boards), I got notifications from our <a href="http://www.zabbix.org">Zabbix</a> monitoring instance complaining about <a href="https://www.zabbix.com/documentation/2.4/manual/web_monitoring">web scenarios</a> failing (errors due to timeouts), and then also about "Disk I/O is overloaded" triggers (checking the cpu iowait time). Usually you'd verify what happens in the Virtual Machine itself, but even connecting to the VM was difficult and slow. But once connected, nothing strange, and no real activity, not even on the disk (plenty of tools exist for this, but <a href="http://guichaz.free.fr/iotop/">iotop</a> is helpful to see which process is reading/writing to the disk in that case), but iowait was almost at 100%.</p>
<p>As said, it was happening suddenly for all Virtual Machines on the same hypervisor (CentOS 6 x86_64 KVM host), and even the hypervisor was suddenly complaining (but less, in comparison with the VMs) about iowait too. So obviously, it wasn't really something not being optimized at the hypervisor/VMs level, but something else. That rang a bell : if you have a raid controller and its battery for example needs to be replaced, the controller can decide to stop all read/write cache, so slowing down all IOs going to the disk. </p>
<p>At first sight, there was no HDD issue, and array/logical volume was working fine (no failed HDD in that RAID10 volume), so it was time to dive deeper into analysis.</p>
<p>That server has the following RAID adapter : </p>
<div class="highlight"><pre><span></span><span class="mi">03</span><span class="err">:</span><span class="mf">00.0</span><span class="w"> </span><span class="n">RAID</span><span class="w"> </span><span class="n">bus</span><span class="w"> </span><span class="nl">controller</span><span class="p">:</span><span class="w"> </span><span class="n">LSI</span><span class="w"> </span><span class="n">Logic</span><span class="w"> </span><span class="o">/</span><span class="w"> </span><span class="n">Symbios</span><span class="w"> </span><span class="n">Logic</span><span class="w"> </span><span class="n">MegaRAID</span><span class="w"> </span><span class="n">SAS</span><span class="w"> </span><span class="mi">2108</span><span class="w"> </span><span class="o">[</span><span class="n">Liberator</span><span class="o">]</span><span class="w"> </span><span class="p">(</span><span class="n">rev</span><span class="w"> </span><span class="mi">03</span><span class="p">)</span><span class="w"></span>
</pre></div>
<p>That means that you need to use the <a href="http://www.avagotech.com/cs/Satellite?pagename=AVG2/searchLayout&SearchKeyWord=megacli&searchType=type-AVG_Document_C~Downloads&locale=avg_en&srchradio=null">MegaCLI</a> tool for that.</p>
<p>A quick <strong><code>MegaCli64 -ShowSummary -a0</code></strong> showed me that the underlying disks were indeed active, but my attention was caught by the fact that a "Patrol Read" operation was in progress on a disk. I then discovered a useful (and bookmarked, as it's a gold mine) <a href="http://fibrevillage.com/storage/175-lsi-megaraid-patrol-read-and-consistency-check">page</a> explaining the issue with the default settings of that "Patrol Read" operation.
While it seems a good idea to scan the disks in the background to discover disk errors in advance (PFA), the default setting is really not optimized : according to that website, it "will take up to 30% of IO resources".</p>
<p>I decided to stop the currently running patrol read process with <strong><code>MegaCli64 -AdpPR -Stop -aALL</code></strong> and I directly saw the Virtual Machines' (and hypervisor's) iowait going back to normal.
Here is the Zabbix graph for one of the impacted VM, and it's easy to guess when I stopped the underlying "Patrol read" process : </p>
<p><img alt="VM iowait" src="/images/iowait.png" title="VM iowait"></p>
<p>That "patrol read" operation is scheduled to run by default once a week (168h), so your real options are to either disable it completely (through <strong><code>MegaCli64 -AdpPR -Dsbl -aALL</code></strong>) or, at least (advised), lower its IO impact (for example to 5% : <strong><code>MegaCli64 -AdpSetProp PatrolReadRate 5 -aALL</code></strong>)</p>
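<p>To summarize the MegaCLI side of it, here is a short sketch (assuming the tool is installed as <code>MegaCli64</code> in the PATH, as above ; the <code>-AdpPR -Info</code> call is my assumption for displaying the current patrol read settings) :</p>

```shell
# Sketch only: show the current patrol read settings, then cap its IO
# impact at 5%. Guarded so it is a no-op on machines without MegaCLI.
if command -v MegaCli64 >/dev/null 2>&1; then
    MegaCli64 -AdpPR -Info -aALL                    # current patrol read mode/schedule (assumed flag)
    MegaCli64 -AdpSetProp PatrolReadRate 5 -aALL    # lower the IO impact to 5%
    status="patrol read rate lowered to 5%"
else
    status="MegaCli64 not found, skipping"
fi
echo "$status"
```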
<p><strong>Never</strong> underestimate the power of hardware settings (in the BIOS, or in that case in the RAID hardware controller). </p>
<p>Hope it can help others too</p>CentOS AltArch SIG status2015-09-24T00:00:00+02:002015-09-24T00:00:00+02:00Fabian Arrotintag:arrfab.net,2015-09-24:/posts/2015/Sep/24/centos-altarch-sig-status/<p>Recently I had (from an Infra side) to start deploying KVM guests for the <a href="https://en.wikipedia.org/wiki/Ppc64">ppc64</a> and <a href="https://en.wikipedia.org/wiki/Ppc64">ppc64le</a> arches, so that <a href="https://wiki.centos.org/SpecialInterestGroup/AltArch">AltArch</a> SIG contributors could start bootstrapping the CentOS 7 rebuild for those arches. I'll probably write a tech review about <a href="https://en.wikipedia.org/wiki/POWER8">Power8</a> and the fact that you can just use libvirt/virt-install to quickly provision new VMs on <a href="http://www-03.ibm.com/systems/power/software/linux/powerkvm/">PowerKVM</a>, but I'll do that in a separate post.</p>
<p>Parallel to ppc64/ppc64le, <a href="https://en.wikipedia.org/wiki/ARM_architecture#32-bit_architecture">armv7hl</a> interested some Community members, and activity around that arch is discussed on the <a href="https://lists.centos.org/mailman/listinfo/arm-dev">dedicated mailing list</a>. It's slowly coming along, and some users already reported having used it on some boards (but packages are still unsigned and there are no update packages yet)</p>
<p>Last (but not least) in this AltArch list is i686 : <a href="https://wiki.centos.org/JohnnyHughes">Johnny</a> built all the packages, and they are already publicly available on <a href="http://buildlogs.centos.org/">buildlogs.centos.org</a>, each built in parallel to the x86_64 version. It seems that respinning the ISO for that arch and running the last tests are the only things left to do.</p>
<p>If you're interested in participating in AltArch (and have a special interest in a specific arch/platform), feel free to discuss that on the <a href="https://lists.centos.org/mailman/listinfo/centos-devel">centos-devel</a> list !</p>CentOS Dojo in Barcelona2015-09-17T00:00:00+02:002015-09-17T00:00:00+02:00Fabian Arrotintag:arrfab.net,2015-09-17:/posts/2015/Sep/17/centos-dojo-in-barcelona/<p>So, thanks to the folks from OpenNebula, we'll have another CentOS Dojo in Barcelona on Tuesday 20th October 2015. That event will be colocated with the <a href="http://2015.opennebulaconf.com/">OpenNebulaConf</a> happening the days after the Dojo. If you're attending the OpenNebulaConf, or if you're just in the area and would like to attend the CentOS Dojo, feel free to <a href="http://www.eventbrite.com/e/centos-dojo-barcelona-2015-tickets-18514955731">register</a></p>
<p>Regarding the Dojo content, I'll be giving a presentation about SELinux myself : a little bit of intro (still needed for some folks afraid of using it, don't know why, but we'll change that ...) about SELinux itself, how to run it on bare-metal and in virtual machines, <em>and</em> there will be some slides for the mandatory container hype thing.
But we'll also cover managing SELinux booleans/contexts, etc. through your config management solution (we'll cover <a href="https://puppetlabs.com/">puppet</a> and <a href="http://www.ansible.com/">ansible</a>, as those are the two I'm using on a daily basis) and also how to build and deploy custom SELinux policies with your config management solution.</p>
<p>On the other hand, if you're a CentOS user and would like to give a talk yourself during that Dojo, feel free to submit one ! More information about the Dojo is on the <a href="https://wiki.centos.org/Events/Dojo/Barcelona2015">dedicated wiki page</a></p>
<p>See you there !</p>Ext4 limitation with GDT blocks number2015-09-10T00:00:00+02:002015-09-10T00:00:00+02:00Fabian Arrotintag:arrfab.net,2015-09-10:/posts/2015/Sep/10/ext4-limitation-with-gdt-blocks-number/<p>In the last days, I encountered a strange issue^Wlimitation with <a href="https://en.wikipedia.org/wiki/Ext4">Ext4</a> that I wouldn't have thought of. I've used ext2/ext3/ext4 for quite some time, and so I've been used to resizing the filesystem "online" (while mounted). In the past you had to use <a href="http://linux.die.net/man/8/ext2online">ext2online</a> for that, then it was integrated into <a href="http://linux.die.net/man/8/resize2fs">resize2fs</a> itself.</p>
<p>The logic is simple and always the same : extend your underlying block device (or add another one), then modify the LVM Volume Group (if needed), then the Logical Volume, and finally run the resize2fs operation, so something like </p>
<div class="highlight"><pre><span></span>lvextend -L +<span class="cp">${</span><span class="n">added_size</span><span class="cp">}</span>G /dev/mapper/<span class="cp">${</span><span class="n">name_of_your_logical_volume</span><span class="cp">}</span>
resize2fs /dev/mapper/<span class="cp">${</span><span class="n">name_of_your_logical_volume</span><span class="cp">}</span>
</pre></div>
<p>I don't know how many times I've used that, but this time resize2fs wasn't happy :</p>
<div class="highlight"><pre><span></span><span class="nv">resize2fs</span>: <span class="nv">Operation</span> <span class="nv">not</span> <span class="nv">permitted</span> <span class="k">While</span> <span class="nv">trying</span> <span class="nv">to</span> <span class="nv">add</span> <span class="nv">group</span> <span class="sc">#16384</span>
</pre></div>
<p>I remember having had an issue in the past because of the journal size <a href="https://bugzilla.redhat.com/show_bug.cgi?id=160612#c27">not being big enough</a>. But this wasn't the case here.</p>
<p>FWIW, you can always verify your journal size with <code>dumpe2fs /dev/mapper/${name_of_your_logical_volume} |grep "Journal Size"</code></p>
<p>Small note : if you need to increase the journal size, you have to do it "offline" as you have to remove the journal and then add it back with a bigger size (and that also takes time) :</p>
<div class="highlight"><pre><span></span>umount /<span class="nv">$path_where_that_fs_is_mounted</span>
tune2fs -O ^has_journal /dev/mapper/<span class="cp">${</span><span class="n">name_of_your_logical_volume</span><span class="cp">}</span>
# Assuming we want to increase to 128Mb
tune2fs -j -J size=128 /dev/mapper/<span class="cp">${</span><span class="n">name_of_your_logical_volume</span><span class="cp">}</span>
</pre></div>
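<p>If you want to experiment with that journal dance without touching a real volume, here is a self-contained sketch on a scratch image file (arbitrary file names and a 16MB journal size, e2fsprogs assumed installed ; no root needed since it is a plain file) :</p>

```shell
# Sketch on a throwaway ext4 image file; guarded so it is skipped
# where e2fsprogs is not installed.
if command -v mkfs.ext4 >/dev/null 2>&1; then
    img=$(mktemp /tmp/ext4demo.XXXXXX)
    dd if=/dev/zero of="$img" bs=1M count=64 status=none
    mkfs.ext4 -q -F "$img"
    tune2fs -O ^has_journal "$img" >/dev/null   # remove the journal ("offline" step)
    tune2fs -j -J size=16 "$img" >/dev/null     # re-add it with a 16MB size
    journal_line=$(dumpe2fs "$img" 2>/dev/null | grep -i "journal size")
    rm -f "$img"
else
    journal_line="e2fsprogs not installed, demo skipped"
fi
echo "$journal_line"
```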
<p>But in that case, as said, it wasn't the root cause : while the <code>resize2fs: Operation not permitted</code> message doesn't give much information, <code>dmesg</code> was more explicit : </p>
<div class="highlight"><pre><span></span><span class="n">EXT4</span><span class="o">-</span><span class="n">fs</span> <span class="n">warning</span> <span class="p">(</span><span class="n">device</span> <span class="n">dm</span><span class="o">-</span><span class="mi">2</span><span class="p">):</span> <span class="n">ext4_group_add</span><span class="p">:</span> <span class="k">No</span> <span class="n">reserved</span> <span class="n">GDT</span> <span class="n">blocks</span><span class="p">,</span> <span class="n">can</span><span class="err">'</span><span class="n">t</span> <span class="n">resize</span>
</pre></div>
<p>The limitation is that when the initial Ext4 filesystem is created, the number of reserved/calculated GDT blocks will only allow that filesystem to grow by a <a href="http://www.spinics.net/lists/linux-ext4/msg35015.html">factor of 1000</a>.</p>
<p>Ouch : that system (CentOS 6.7) I was working on had been provisioned in the past for a certain role, and that particular fs/mount point was set to 2G (installed like this through the <a href="https://en.wikipedia.org/wiki/Kickstart_(Linux)">Kickstart setup</a>). But the role eventually changed, and so the filesystem had been extended/resized several times, until I tried to extend it to more than 2TiB, which then caused resize2fs to complain ...</p>
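<p>The arithmetic matches what happened here ; a trivial sketch of that (approximate) 1000x rule, with illustrative numbers only :</p>

```shell
# Illustrative arithmetic only, based on the ~1000x online-resize rule above:
# a filesystem created at 2G reserves enough GDT blocks to grow to ~2000G,
# so trying to go past that (more than 2TiB here) makes resize2fs fail.
initial_g=2
max_g=$(( initial_g * 1000 ))
echo "created at ${initial_g}G, online resize possible up to ~${max_g}G"
```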
<p>So two choices :</p>
<ul>
<li>you do it "offline" through <code>umount, e2fsck, resize2fs, e2fsck, mount</code> (but time consuming)</li>
<li>or, if you still have plenty of space in the VG, you just create another volume with the correct size, format it, rsync the content, umount the old one and mount the new one.</li>
</ul>
<p>That means that I learned something new (one learns something new every day !), and also that you need to keep that limitation in mind when using a kickstart that doesn't include the <a href="https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Installation_Guide/s1-kickstart2-options.html">--grow</a> option, but a fixed size for the filesystem.</p>
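<p>For illustration, a hypothetical kickstart <code>part</code> line (mount point and size invented here) that avoids the problem, since the filesystem then gets created at its final size :</p>

```
# hypothetical kickstart fragment : start at 2048MB but let the partition
# grow to fill the available space, so the GDT reservation is calculated
# against the real (final) size instead of a small fixed one
part /srv --fstype=ext4 --size=2048 --grow
```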
<p>Hope that it can help</p>Implementing TLS for postfix2015-09-03T00:00:00+02:002015-09-03T00:00:00+02:00Fabian Arrotintag:arrfab.net,2015-09-03:/posts/2015/Sep/03/implementing-tls-for-postfix/<p>As some initiatives (like <a href="https://letsencrypt.org">Let's Encrypt</a>, as one example) try to push <a href="https://en.wikipedia.org/wiki/Transport_Layer_Security">TLS</a> usage everywhere, we thought about doing the same for the CentOS.org infra. Obviously we already had some <a href="https://en.wikipedia.org/wiki/X.509">x509</a> certificates, but not for every httpd server that was serving content for CentOS users. So we decided to <a href="https://lists.centos.org/pipermail/centos-announce/2015-August/021341.html">enforce</a> TLS usage on those servers. But TLS can obviously be used on other things than a web server.</p>
<p>That's why we considered implementing something for our <a href="http://www.postfix.org">Postfix</a> nodes. The interesting part is that it's really easy (depending of course on the security level one may want to reach/use). There are two parts in the postfix main.cf that can be configured :</p>
<ul>
<li>outgoing mails (aka your server sends mail to other SMTPD servers)</li>
<li>incoming mails (aka remote clients/servers send mail to your postfix/smtpd server)</li>
</ul>
<p>Let's start with the client/outgoing part : just adding those lines to your main.cf will automatically configure it to use TLS when possible, falling back to clear text if the remote server doesn't support TLS :</p>
<div class="highlight"><pre><span></span><span class="o">#</span> <span class="n">TLS</span> <span class="o">-</span> <span class="n">client</span> <span class="n">part</span>
<span class="n">smtp_tls_CAfile</span><span class="o">=/</span><span class="n">etc</span><span class="o">/</span><span class="n">pki</span><span class="o">/</span><span class="n">tls</span><span class="o">/</span><span class="n">certs</span><span class="o">/</span><span class="n">ca</span><span class="o">-</span><span class="n">bundle</span><span class="p">.</span><span class="n">crt</span>
<span class="n">smtp_tls_security_level</span> <span class="o">=</span> <span class="n">may</span>
<span class="n">smtp_tls_loglevel</span> <span class="o">=</span> <span class="mi">1</span>
<span class="n">smtp_tls_session_cache_database</span> <span class="o">=</span> <span class="n">btree</span><span class="p">:</span><span class="o">/</span><span class="n">var</span><span class="o">/</span><span class="n">lib</span><span class="o">/</span><span class="k">postfix</span><span class="o">/</span><span class="n">smtp_scache</span>
</pre></div>
<p>The interesting part is the <code>smtp_tls_security_level</code> option : as you see, we decided to force it to <code>may</code>. That's what the Postfix <a href="http://www.postfix.org/TLS_README.html#client_tls_may">official TLS documentation</a> calls "Opportunistic TLS" : in a few words, it will try TLS (even with untrusted remote certs !) and will only fall back to clear text if no remote TLS support is available. That's the option we decided to use, as it doesn't break anything, and even if the remote server has a self-signed cert, it's still better to use TLS with a self-signed cert than clear text, right ?</p>
<p>Once you have reloaded your postfix configuration, you'll directly see in your maillog that it will start trying TLS and deliver mails to servers configured for it : </p>
<div class="highlight"><pre><span></span><span class="n">Sep</span> <span class="mi">3</span> <span class="mi">07</span><span class="p">:</span><span class="mi">50</span><span class="p">:</span><span class="mi">37</span> <span class="n">mailsrv</span> <span class="k">postfix</span><span class="o">/</span><span class="n">smtp</span><span class="p">[</span><span class="mi">1936</span><span class="p">]:</span> <span class="n">setting</span> <span class="n">up</span> <span class="n">TLS</span> <span class="k">connection</span> <span class="k">to</span> <span class="n">ASPMX</span><span class="p">.</span><span class="n">L</span><span class="p">.</span><span class="n">GOOGLE</span><span class="p">.</span><span class="n">com</span><span class="p">[</span><span class="mi">173</span><span class="p">.</span><span class="mi">194</span><span class="p">.</span><span class="mi">207</span><span class="p">.</span><span class="mi">27</span><span class="p">]:</span><span class="mi">25</span>
<span class="n">Sep</span> <span class="mi">3</span> <span class="mi">07</span><span class="p">:</span><span class="mi">50</span><span class="p">:</span><span class="mi">37</span> <span class="n">mailsrv</span> <span class="k">postfix</span><span class="o">/</span><span class="n">smtp</span><span class="p">[</span><span class="mi">1936</span><span class="p">]:</span> <span class="k">Trusted</span> <span class="n">TLS</span> <span class="k">connection</span> <span class="n">established</span> <span class="k">to</span> <span class="n">ASPMX</span><span class="p">.</span><span class="n">L</span><span class="p">.</span><span class="n">GOOGLE</span><span class="p">.</span><span class="n">com</span><span class="p">[</span><span class="mi">173</span><span class="p">.</span><span class="mi">194</span><span class="p">.</span><span class="mi">207</span><span class="p">.</span><span class="mi">27</span><span class="p">]:</span><span class="mi">25</span><span class="p">:</span> <span class="n">TLSv1</span><span class="p">.</span><span class="mi">2</span> <span class="k">with</span> <span class="n">cipher</span> <span class="n">ECDHE</span><span class="o">-</span><span class="n">RSA</span><span class="o">-</span><span class="n">AES128</span><span class="o">-</span><span class="n">GCM</span><span class="o">-</span><span class="n">SHA256</span> <span class="p">(</span><span class="mi">128</span><span class="o">/</span><span class="mi">128</span> <span class="n">bits</span><span class="p">)</span>
<span class="n">Sep</span> <span class="mi">3</span> <span class="mi">07</span><span class="p">:</span><span class="mi">50</span><span class="p">:</span><span class="mi">37</span> <span class="n">mailsrv</span> <span class="k">postfix</span><span class="o">/</span><span class="n">smtp</span><span class="p">[</span><span class="mi">1936</span><span class="p">]:</span> <span class="n">DF584A00774</span><span class="p">:</span> <span class="k">to</span><span class="o">=<></span><span class="p">,</span> <span class="n">orig_to</span><span class="o">=<></span><span class="p">,</span> <span class="n">relay</span><span class="o">=</span><span class="n">ASPMX</span><span class="p">.</span><span class="n">L</span><span class="p">.</span><span class="n">GOOGLE</span><span class="p">.</span><span class="n">com</span><span class="p">[</span><span class="mi">173</span><span class="p">.</span><span class="mi">194</span><span class="p">.</span><span class="mi">207</span><span class="p">.</span><span class="mi">27</span><span class="p">]:</span><span class="mi">25</span><span class="p">,</span> <span class="n">delay</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">delays</span><span class="o">=</span><span class="mi">0</span><span class="o">/</span><span class="mi">0</span><span class="p">.</span><span class="mi">12</span><span class="o">/</span><span class="mi">0</span><span class="p">.</span><span class="mi">22</span><span class="o">/</span><span class="mi">0</span><span class="p">.</span><span class="mi">71</span><span class="p">,</span> <span class="n">dsn</span><span class="o">=</span><span class="mi">2</span><span class="p">.</span><span class="mi">0</span><span class="p">.</span><span class="mi">0</span><span class="p">,</span> <span class="n">status</span><span class="o">=</span><span class="n">sent</span> <span class="p">(</span><span class="mi">250</span> <span class="mi">2</span><span class="p">.</span><span 
class="mi">0</span><span class="p">.</span><span class="mi">0</span> <span class="n">OK</span> <span class="mi">1441266639</span> <span class="mi">79</span><span class="n">si29025652qku</span><span class="p">.</span><span class="mi">67</span> <span class="o">-</span> <span class="n">gsmtp</span><span class="p">)</span>
</pre></div>
<p>Now let's have a look at the other part : when you want your server to advertise the STARTTLS feature to remote servers/clients trying to send you mails (still in postfix main.cf) :</p>
<div class="highlight"><pre><span></span><span class="x"># TLS - server part</span>
<span class="x">smtpd_tls_CAfile=/etc/pki/tls/certs/ca-bundle.crt</span>
<span class="x">smtpd_tls_cert_file = /etc/pki/tls/certs/</span><span class="cp"><%=</span> <span class="n">postfix_myhostname</span> <span class="cp">%></span><span class="x">-postfix.crt </span>
<span class="x">smtpd_tls_key_file = /etc/pki/tls/private/</span><span class="cp"><%=</span> <span class="n">postfix_myhostname</span> <span class="cp">%></span><span class="x">.key</span>
<span class="x">smtpd_tls_security_level = may</span>
<span class="x">smtpd_tls_loglevel = 1</span>
<span class="x">smtpd_tls_session_cache_database = btree:/var/lib/postfix/smtpd_scache</span>
</pre></div>
<p>Still easy, but here we also add our key/cert to the config. If you decide to use a cert signed by a trusted CA (like we do for the centos.org infra), be sure that the cert file is the concatenated/bundled version of both your cert and the CA chain cert. That's also documented in the <a href="http://www.postfix.org/TLS_README.html#server_cert_key">Postfix TLS guide</a>, and if you're already using <a href="http://www.nginx.org">Nginx</a>, you already know what I'm talking about, as you <a href="http://nginx.org/en/docs/http/configuring_https_servers.html#chains">already have to do it</a> too.</p>
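<p>A minimal sketch of that concatenation, with invented file names and dummy contents standing in for real PEM data ; the point is the order (server cert first, then the CA chain) :</p>

```shell
# Hypothetical file names; dummy one-line contents stand in for real PEM blocks.
workdir=$(mktemp -d)
printf 'SERVER-CERT\n' > "$workdir/mail.example.org.crt"
printf 'CA-CHAIN\n'    > "$workdir/ca-chain.crt"
# Order matters: the server certificate must come before the CA chain.
cat "$workdir/mail.example.org.crt" "$workdir/ca-chain.crt" \
    > "$workdir/mail.example.org-postfix.crt"
bundle=$(cat "$workdir/mail.example.org-postfix.crt")
echo "$bundle"
rm -rf "$workdir"
```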
<p>If you've correctly configured your cert/keys and reloaded your postfix config, remote SMTPD servers will now also (if configured to do so) deliver mails to your server through TLS. Bonus point if you're using a cert signed by a trusted CA, as from the client side you'll see this : </p>
<div class="highlight"><pre><span></span><span class="n">Sep</span><span class="w"> </span><span class="mi">2</span><span class="w"> </span><span class="mi">16</span><span class="err">:</span><span class="mi">17</span><span class="err">:</span><span class="mi">22</span><span class="w"> </span><span class="n">hoth</span><span class="w"> </span><span class="k">postfix</span><span class="o">/</span><span class="n">smtp</span><span class="o">[</span><span class="n">15329</span><span class="o">]</span><span class="err">:</span><span class="w"> </span><span class="n">setting</span><span class="w"> </span><span class="n">up</span><span class="w"> </span><span class="n">TLS</span><span class="w"> </span><span class="k">connection</span><span class="w"> </span><span class="k">to</span><span class="w"> </span><span class="n">mail</span><span class="p">.</span><span class="n">centos</span><span class="p">.</span><span class="n">org</span><span class="o">[</span><span class="n">72.26.200.203</span><span class="o">]</span><span class="err">:</span><span class="mi">25</span><span class="w"></span>
<span class="n">Sep</span><span class="w"> </span><span class="mi">2</span><span class="w"> </span><span class="mi">16</span><span class="err">:</span><span class="mi">17</span><span class="err">:</span><span class="mi">22</span><span class="w"> </span><span class="n">hoth</span><span class="w"> </span><span class="k">postfix</span><span class="o">/</span><span class="n">smtp</span><span class="o">[</span><span class="n">15329</span><span class="o">]</span><span class="err">:</span><span class="w"> </span><span class="n">Trusted</span><span class="w"> </span><span class="n">TLS</span><span class="w"> </span><span class="k">connection</span><span class="w"> </span><span class="n">established</span><span class="w"> </span><span class="k">to</span><span class="w"> </span><span class="n">mail</span><span class="p">.</span><span class="n">centos</span><span class="p">.</span><span class="n">org</span><span class="o">[</span><span class="n">72.26.200.203</span><span class="o">]</span><span class="err">:</span><span class="mi">25</span><span class="err">:</span><span class="w"> </span><span class="n">TLSv1</span><span class="mf">.2</span><span class="w"> </span><span class="k">with</span><span class="w"> </span><span class="n">cipher</span><span class="w"> </span><span class="n">DHE</span><span class="o">-</span><span class="n">RSA</span><span class="o">-</span><span class="n">AES256</span><span class="o">-</span><span class="n">GCM</span><span class="o">-</span><span class="n">SHA384</span><span class="w"> </span><span class="p">(</span><span class="mi">256</span><span class="o">/</span><span class="mi">256</span><span class="w"> </span><span class="n">bits</span><span class="p">)</span><span class="w"></span>
<span class="n">Sep</span><span class="w"> </span><span class="mi">2</span><span class="w"> </span><span class="mi">16</span><span class="err">:</span><span class="mi">17</span><span class="err">:</span><span class="mi">23</span><span class="w"> </span><span class="n">hoth</span><span class="w"> </span><span class="k">postfix</span><span class="o">/</span><span class="n">smtp</span><span class="o">[</span><span class="n">15329</span><span class="o">]</span><span class="err">:</span><span class="w"> </span><span class="nl">CC8351C00C9</span><span class="p">:</span><span class="w"> </span><span class="k">to</span><span class="o">=<</span><span class="n">fake_one_for_blog_post</span><span class="nv">@centos</span><span class="p">.</span><span class="n">org</span><span class="o">></span><span class="p">,</span><span class="w"> </span><span class="n">relay</span><span class="o">=</span><span class="n">mail</span><span class="p">.</span><span class="n">centos</span><span class="p">.</span><span class="n">org</span><span class="o">[</span><span class="n">72.26.200.203</span><span class="o">]</span><span class="err">:</span><span class="mi">25</span><span class="p">,</span><span class="w"> </span><span class="n">delay</span><span class="o">=</span><span class="mf">1.6</span><span class="p">,</span><span class="w"> </span><span class="n">delays</span><span class="o">=</span><span class="mf">0.19</span><span class="o">/</span><span class="mf">0.03</span><span class="o">/</span><span class="mf">1.1</span><span class="o">/</span><span class="mf">0.31</span><span class="p">,</span><span class="w"> </span><span class="n">dsn</span><span class="o">=</span><span class="mf">2.0.0</span><span class="p">,</span><span class="w"> </span><span class="n">status</span><span class="o">=</span><span class="n">sent</span><span class="w"> </span><span class="p">(</span><span class="mi">250</span><span class="w"> </span><span class="mf">2.0.0</span><span class="w"> </span><span 
class="nl">Ok</span><span class="p">:</span><span class="w"> </span><span class="n">queued</span><span class="w"> </span><span class="k">as</span><span class="w"> </span><span class="n">A7299A006E2</span><span class="p">)</span><span class="w"></span>
</pre></div>
<p>The <code>Trusted TLS connection established</code> part shows that your smtpd server presents a correct cert (bundle) and that the remote server sending you mails trusts the CA used to sign that cert.</p>
<p>There are a lot of TLS options that you can also add for tuning/security reasons, and all can be seen through <code>postconf |grep tls</code>, but also on the Postfix <a href="http://www.postfix.org/postconf.5.html#smtp_tls_ciphers">postconf doc</a></p>CentOS 7 armv7hl build in progress2015-05-21T00:00:00+02:002015-05-21T00:00:00+02:00Fabian Arrotintag:arrfab.net,2015-05-21:/posts/2015/May/21/centos-7-armv7hl-build-in-progress/<p>As more and more people were showing interest in CentOS on the ARM platform, we thought that it would be a good idea to start trying building CentOS 7 for that platform. Jim started with arm64/aarch64 and got an <a href="http://lists.centos.org/pipermail/centos-announce/2015-May/021102.html">alpha build ready</a> and installable.</p>
<p>On my end, I configured some armv7hl nodes, "donated" to the project by <a href="https://www.scaleway.com">Scaleway</a>. The first goal was to init some <a href="http://www.fedoraproject.org/wiki/Projects/Plague">Plague builders</a> to distribute the jobs on those nodes, which is now done. The next step was a "self-contained" buildroot, so that all other packages can be rebuilt against that buildroot alone : so first building gcc from CentOS 7 (latest release, better ARM support), then glibc, etc, etc ... That buildroot is now done and is available <a href="http://armv7.dev.centos.org/repodir/c7-buildroot/">here</a>.</p>
<p>Now the fun started (meaning that 4 armv7hl nodes are currently (re)building a <em>bunch</em> of SRPMS) and you can follow the status on the <a href="http://lists.centos.org/mailman/listinfo/arm-dev">Arm-dev List</a> if you're interested, or even better, if you're willing to join the party and have a look at the build logs for packages that failed to rebuild. The first target would be to have a "minimal" install working, so basically having sshd/yum working. Then try other things like GUI environment …</p><p>As more and more people were showing interest in CentOS on the ARM platform, we thought that it would be a good idea to start trying building CentOS 7 for that platform. Jim started with arm64/aarch64 and got an <a href="http://lists.centos.org/pipermail/centos-announce/2015-May/021102.html">alpha build ready</a> and installable.</p>
<p>On my end, I configured some armv7hl nodes, "donated" to the project by <a href="https://www.scaleway.com">Scaleway</a>. The first goal was to init some <a href="http://www.fedoraproject.org/wiki/Projects/Plague">Plague builders</a> to distribute the jobs on those nodes, which is now done. The next step was a "self-contained" buildroot, so that all other packages can be rebuilt against that buildroot alone : so first building gcc from CentOS 7 (latest release, better ARM support), then glibc, etc, etc ... That buildroot is now done and is available <a href="http://armv7.dev.centos.org/repodir/c7-buildroot/">here</a>.</p>
<p>Now the fun started (meaning that 4 armv7hl nodes are currently (re)building a <em>bunch</em> of SRPMS) and you can follow the status on the <a href="http://lists.centos.org/mailman/listinfo/arm-dev">Arm-dev List</a> if you're interested, or even better, if you're willing to join the party and have a look at the build logs for packages that failed to rebuild. The first target would be to have a "minimal" install working, so basically having sshd/yum working. Then try other things like GUI environment.</p>
<p>As plague-server requires mod_python (deprecated now), we don't have any web UI that people can look at. But I created a "quick-and-dirty" script that gathers information from the mysql DB and publishes it here :</p>
<ul>
<li><a href="http://armv7.dev.centos.org/queue.html">Packages in the current queue</a></li>
<li><a href="http://armv7.dev.centos.org/report.html">Packages in a "failed" status</a> (with link to the log files)</li>
</ul>
<p>The other interesting step will be to produce .img files that would work on some armv7hl nodes. So diving into <a href="http://www.denx.de/wiki/U-Boot">uboot</a> for <a href="http://www.hardkernel.com/main/products/prdt_info.php?g_code=G141578608433">Odroid C1</a> (just as an example) ....</p>
<p>I'll also try to maintain a <a href="http://wiki.centos.org/SpecialInterestGroup/AltArch/Arm32">dedicated Wiki</a> page for the arm32 status in the following days/weeks/etc ..</p>Hacking initrd.img for fun and profit2015-05-06T00:00:00+02:002015-05-06T00:00:00+02:00Fabian Arrotintag:arrfab.net,2015-05-06:/posts/2015/May/06/hacking-initrdimg-for-fun-and-profit/<p>During my presentation at <a href="http://loadays.org">Loadays 2015</a> , I was mentioning some tips and tricks around <a href="http://fedoraproject.org/wiki/Anaconda">Anaconda</a> and <a href="https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Installation_Guide/chap-kickstart-installations.html">kickstart</a>, and so how to deploy CentOS , fully automated.
I asked the audience about where to store the kickstart, that would be used then by anaconda to install CentOS (same works for RHEL/Fedora), and I got several answers, like "on the http server", or "on the ftp server", which is where most people will put their kickstart files.
Some would generate those files "dynamically" (through $cfgmgmt - I use <a href="http://www.ansible.com">Ansible</a> with Jinja2 templates for this) as a bonus point.</p>
<p>But it's not mandatory to host your kickstart file on a publicly available http/ftp/nfs server, and surely not when you have to reinstall nodes in a different DC. Within the CentOS.org infra, I sometimes have to reinstall remote nodes ("donated" to the Project) from CentOS 5 or 6 to 7. That's where injecting your ks file directly into the initrd.img really helps (yes, no network server needed).
Just as an intro, here is how you can remotely trigger a CentOS install, without any medium/iso/pxe environment : basically you just need to download the pxeboot images (so vmlinuz …</p><p>During my presentation at <a href="http://loadays.org">Loadays 2015</a> , I was mentioning some tips and tricks around <a href="http://fedoraproject.org/wiki/Anaconda">Anaconda</a> and <a href="https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Installation_Guide/chap-kickstart-installations.html">kickstart</a>, and so how to deploy CentOS , fully automated.
I asked the audience about where to store the kickstart, that would be used then by anaconda to install CentOS (same works for RHEL/Fedora), and I got several answers, like "on the http server", or "on the ftp server", which is where most people will put their kickstart files.
Some would generate those files "dynamically" (through $cfgmgmt - I use <a href="http://www.ansible.com">Ansible</a> with Jinja2 templates for this) as a bonus point.</p>
<p>But it's not mandatory to host your kickstart file on a publicly available http/ftp/nfs server, and surely not when you have to reinstall nodes in a different DC. Within the CentOS.org infra, I sometimes have to reinstall remote nodes ("donated" to the Project) from CentOS 5 or 6 to 7. That's where injecting your ks file directly into the initrd.img really helps (yes, no network server needed).
Just as an intro, here is how you can remotely trigger a CentOS install, without any medium/iso/pxe environment : basically you just need to download the pxeboot images (so vmlinuz and initrd.img) and provide some default settings to Anaconda (the network config, and how to grab the stage2 image, i.e. where the install tree is).
On the machine to be reinstalled : </p>
<div class="highlight"><pre><span></span><span class="n">cd</span> <span class="o">/</span><span class="n">boot</span><span class="o">/</span>
<span class="n">wget</span> <span class="n">http</span><span class="p">:</span><span class="o">//</span><span class="n">mirror</span><span class="p">.</span><span class="n">centos</span><span class="p">.</span><span class="n">org</span><span class="o">/</span><span class="n">centos</span><span class="o">/</span><span class="mi">7</span><span class="o">/</span><span class="n">os</span><span class="o">/</span><span class="n">x86_64</span><span class="o">/</span><span class="n">images</span><span class="o">/</span><span class="n">pxeboot</span><span class="o">/</span><span class="err">{</span><span class="n">vmlinuz</span><span class="p">,</span><span class="n">initrd</span><span class="p">.</span><span class="n">img</span><span class="err">}</span>
</pre></div>
<p>Now you can generate your kickstart file for that node and send it to the remote node (with scp, etc.).
The next step on that remote node is to "inject" the kickstart directly into the initrd.img :</p>
<div class="highlight"><pre><span></span><span class="o">#</span><span class="n">assuming</span> <span class="n">we</span> <span class="n">have</span> <span class="n">copied</span> <span class="n">the</span> <span class="n">ks</span> <span class="n">file</span> <span class="k">as</span> <span class="n">ks</span><span class="p">.</span><span class="n">cfg</span> <span class="k">in</span> <span class="o">/</span><span class="n">boot</span> <span class="n">already</span>
<span class="n">echo</span> <span class="n">ks</span><span class="p">.</span><span class="n">cfg</span> <span class="o">|</span> <span class="n">cpio</span> <span class="o">-</span><span class="k">c</span> <span class="o">-</span><span class="n">o</span> <span class="o">>></span> <span class="n">initrd</span><span class="p">.</span><span class="n">img</span>
</pre></div>
<p>So now we have a kernel and an initrd.img containing the kickstart file. You could modify grub(2) to add a new menu entry, make it the default one for the next reboot, and enjoy. But I usually prefer not to do that : if something goes wrong, you'd need someone to reset that node remotely. So instead of modifying grub(2), I just use kexec to reboot directly into the new kernel (without having to power cycle the node) :</p>
<div class="highlight"><pre><span></span># <span class="nv">can</span> <span class="nv">be</span> <span class="nv">changed</span> <span class="nv">to</span> <span class="nv">something</span> <span class="k">else</span>, <span class="k">if</span> <span class="k">for</span> <span class="nv">example</span> <span class="nv">node</span> <span class="nv">is</span> <span class="nv">running</span> <span class="nv">another</span> <span class="nv">distro</span> <span class="nv">not</span> <span class="nv">using</span> <span class="nv">yum</span> <span class="nv">as</span> <span class="nv">package</span> <span class="nv">manager</span>
<span class="nv">yum</span> <span class="nv">install</span> <span class="o">-</span><span class="nv">y</span> <span class="nv">wget</span> <span class="nv">kexec</span><span class="o">-</span><span class="nv">tools</span>
<span class="nv">kexec</span> <span class="o">-</span><span class="nv">l</span> <span class="nv">vmlinuz</span> <span class="o">--</span><span class="nv">append</span><span class="o">=</span><span class="s1">'</span><span class="s">net.ifnames=0 biosdevname=0 ksdevice=eth0 inst.ks=file:/ks.cfg inst.lang=en_GB inst.keymap=be-latin1 ip=your.ip netmask=your.netmask gateway=your.gw dns=your.dns</span><span class="s1">'</span> <span class="o">--</span><span class="nv">initrd</span><span class="o">=</span><span class="nv">initrd</span>.<span class="nv">img</span> <span class="o">&&</span> <span class="nv">kexec</span> <span class="o">-</span><span class="nv">e</span>
</pre></div>
<p>As you can see in the append line, I just tell anaconda/kernel to <em>not</em> use the new NIC naming scheme (the default in CentOS 7, and sometimes hard to guess in advance), assuming that eth0 is the one to use (verify that carefully !), and the traditional ks= line now just points to /ks.cfg (initrd.img being / ). The rest is self-explanatory.</p>
<p>The other cool thing is that you can use the same "inject" technique for Virtual Machines installed through virt-install : it supports injecting files directly into the initrd.img, so it's even easier than for bare metal nodes : you just have to use two parameters for virt-install : </p>
<ul>
<li>--initrd-inject=/path/to/your/ks.cfg</li>
<li>--extra-args "console=ttyS0 ks=file:/ks.cfg"</li>
</ul>
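<p>Put together, such a virt-install call could look like this (a sketch only : the VM name, disk path, sizes and mirror URL here are made up for illustration, and <code>/path/to/your/ks.cfg</code> is of course your own kickstart) :</p>

```shell
# hypothetical VM name, disk path and install tree : adapt to your environment
virt-install \
  --name=test-vm --ram=2048 --vcpus=2 \
  --disk path=/var/lib/libvirt/images/test-vm.qcow2,size=10 \
  --location=http://mirror.centos.org/centos/7/os/x86_64/ \
  --initrd-inject=/path/to/your/ks.cfg \
  --extra-args "console=ttyS0 ks=file:/ks.cfg" \
  --graphics none
```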
<p>Hope this helps </p>More builders available for Koji/CBS2015-01-23T17:54:00+01:002015-01-23T17:54:00+01:00Fabian Arrotintag:arrfab.net,2015-01-23:/posts/2015/Jan/23/more-builders-available-for-kojicbs/<p>As you probably know, the CentOS Project now hosts the
<a href="http://cbs.centos.org">CBS</a> effort, (aka Community Build System), that
is used to build all packages for the
CentOS <a href="http://wiki.centos.org/SpecialInterestGroup">SIGs</a>.</p>
<p>There was already one physical node dedicated to Koji Web and Koji Hub,
and another node dedicated to the build threads (koji-builder). As we
have now more people building packages,
we <a href="http://www.centos.org/minutes/2015/january/centos-devel.2015-01-19-14.00.html">thought</a>
it was time to add more builders to the mix, and here we go:
<a href="http://cbs.centos.org/koji/hosts">http://cbs.centos.org/koji/hosts</a> lists now two added machines that are
dedicated to Koji/CBS.</p>
<p>Those added nodes each have 2 Intel(R) Xeon(R) E5-2650 CPUs @ 2.00GHz
with 8 cores per socket (plus Hyper-Threading enabled), and 32GB of RAM.
Let's see how the SIG members will keep those builders busy,
throwing a bunch of interesting packages at the CentOS Community :-) .
Have a nice week-end</p>Provisioning quickly nodes in a SeaMicro chassis with Ansible2015-01-12T15:19:00+01:002015-01-12T15:19:00+01:00Fabian Arrotintag:arrfab.net,2015-01-12:/posts/2015/Jan/12/provisioning-quickly-nodes-in-a-seamicro-chassis-with-ansible/<p>Recently I had to quickly test and deploy CentOS on 128 physical nodes,
just to test the hardware and that all currently "supported" CentOS releases
could be installed quickly when needed. The interesting bit is that it
was a completely new infra, without any traditional deployment setup in
place, so obviously, as sysadmin, we directly think about pxe/kickstart,
which is so trivial to setup. That was the first time I had to "play"
with SeaMicro devices/chassis though, and so understanding how they work
(the SeaMicro <a href="http://www.seamicro.com/SM15000">15K fabric chassis</a>, to
be precise). One thing to note is that those seamicro chassis don't
provide a remote VGA/KVM feature (but who cares, as we'll automate the
whole thing, right ? ) but they instead provide either cli (ssh) or rest
api access to the management interface, so that you can quickly
reset/reconfigure a node, change VLAN assignments, and so on.</p>
<p>It's not a secret that I like to use <a href="http://www.ansible.com/">Ansible</a>
for ad-hoc tasks, and I thought that it would be (again) a good tool for
that quick task. If you have used Ansible already, you know that you
have to declare nodes and variables (not needed, but really useful) in
the inventory (if …</p><p>Recently I had to quickly test and deploy CentOS on 128 physical nodes,
just to test the hardware and that all currently "supported" CentOS releases
could be installed quickly when needed. The interesting bit is that it
was a completely new infra, without any traditional deployment setup in
place, so obviously, as sysadmin, we directly think about pxe/kickstart,
which is so trivial to setup. That was the first time I had to "play"
with SeaMicro devices/chassis though, and so understanding how they work
(the SeaMicro <a href="http://www.seamicro.com/SM15000">15K fabric chassis</a>, to
be precise). One thing to note is that those seamicro chassis don't
provide a remote VGA/KVM feature (but who cares, as we'll automate the
whole thing, right ? ) but they instead provide either cli (ssh) or rest
api access to the management interface, so that you can quickly
reset/reconfigure a node, change VLAN assignments, and so on.</p>
<p>It's not a secret that I like to use <a href="http://www.ansible.com/">Ansible</a>
for ad-hoc tasks, and I thought that it would be (again) a good tool for
that quick task. If you have used Ansible already, you know that you
have to declare nodes and variables (not needed, but really useful) in
the inventory (if you don't gather inventory from an external source).
To configure my pxe setup (and so being able to reconfigure it when
needed) I obviously needed to get mac addresses from all 64 nodes in
each chassis, decide that hostnames will be n${slot-number}., etc ..
(and yes in Seamicro slot 1 = 0/0, slot 2 = 1/0, and so on ...)</p>
<p>The following quick-and-dirty bash script lets you do that in seconds
(ssh into the chassis, gather information, and fill in some variables
in my ansible host_vars/${hostname} files) :</p>
<div class="highlight"><pre>#!/bin/bash
ssh admin@hufty.ci.centos.org "enable ; show server summary | include Intel ; quit" | while read line ;
do
  seamicrosrvid=$(echo $line | awk '{print $1}')
  slot=$(echo $seamicrosrvid | cut -f 1 -d '/')
  id=$(( $slot + 1 )) ; ip=$id ; mac=$(echo $line | awk '{print $3}')
  echo -e "name: n${id}.hufty.ci.centos.org \nseamicro_chassis: hufty \nseamicro_srvid: $seamicrosrvid \nmac_address: $mac \nip: 172.19.3.$ip \ngateway: 172.19.3.254 \nnetmask: 255.255.252.0 \nnameserver: 172.19.0.12 \ncentos_dist: 6" > inventory/n${id}.hufty.ci.centos.org
done
</pre></div>
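<p>The generated host_vars file for one node then looks like this (slot 1, so srvid 0/0; the MAC address here is a made-up example) :</p>

```
name: n1.hufty.ci.centos.org
seamicro_chassis: hufty
seamicro_srvid: 0/0
mac_address: 00:22:99:aa:bb:cc
ip: 172.19.3.1
gateway: 172.19.3.254
netmask: 255.255.252.0
nameserver: 172.19.0.12
centos_dist: 6
```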
<p>Nice, so we have all ~/ansible/hosts/host_vars/${inventory_hostname}
files in one go (I let you add ${inventory_hostname} to the
~/ansible/hosts/hosts.cfg file with the same script, modified to your
needs).<br>
For the next step, we assume that we already have dnsmasq installed on
the "head" node, and that we also have a httpd setup to provide the
kickstart files to the nodes during installation.<br>
So our basic ansible playbook looks like this :</p>
<div class="highlight"><pre>---
- hosts: ci-nodes
  sudo: True
  gather_facts: False

  vars:
    deploy_node: admin.ci.centos.org
    seamicro_user_login: admin
    seamicro_user_pass: obviously-hidden-and-changed
    seamicro_reset_body:
      action: reset
      using-pxe: "true"
      username: "{{ seamicro_user_login }}"
      password: "{{ seamicro_user_pass }}"

  tasks:
    - name: Generate kickstart file[s] for Seamicro node[s]
      template: src=../templates/kickstarts/ci-centos-{{ centos_dist }}-ks.j2 dest=/var/www/html/ks/{{ inventory_hostname }}-ks.cfg mode=0755
      delegate_to: "{{ deploy_node }}"

    - name: Adding the entry in DNS (dnsmasq)
      lineinfile: dest=/etc/hosts regexp="^{{ ip }} {{ inventory_hostname }}" line="{{ ip }} {{ inventory_hostname }}"
      delegate_to: "{{ deploy_node }}"
      notify: reload_dnsmasq

    - name: Adding the DHCP entry in dnsmasq
      template: src=../templates/dnsmasq-dhcp.j2 dest=/etc/dnsmasq.d/{{ inventory_hostname }}.conf
      delegate_to: "{{ deploy_node }}"
      register: dhcpdnsmasq

    - name: Reloading dnsmasq configuration
      service: name=dnsmasq state=restarted
      run_once: true
      when: dhcpdnsmasq|changed
      delegate_to: "{{ deploy_node }}"

    - name: Generating the tftp configuration boot file
      template: src=../templates/pxeboot-ci dest=/var/lib/tftpboot/pxelinux.cfg/01-{{ mac_address | lower | replace(":","-") }} mode=0755
      delegate_to: "{{ deploy_node }}"

    - name: Resetting the Seamicro node[s]
      uri: url=https://{{ seamicro_chassis }}.ci.centos.org/v2.0/server/{{ seamicro_srvid }}
           method=POST
           HEADER_Content-Type="application/json"
           body='{{ seamicro_reset_body | to_json }}'
           timeout=60
      delegate_to: "{{ deploy_node }}"

    - name: Waiting for Seamicro node[s] to be available through ssh ...
      action: wait_for port=22 host={{ inventory_hostname }} timeout=1200
      delegate_to: "{{ deploy_node }}"

  handlers:
    - name: reload_dnsmasq
      service: name=dnsmasq state=reloaded
</pre></div>
<p>The first thing to notice is that you can use Ansible to provision nodes
that aren't running yet : people think that Ansible is just for
interacting with already provisioned and running nodes, but by providing
useful information in the inventory, and by delegating actions, we can
already start "managing" those yet-to-come nodes.<br>
All the templates used in that playbook are really basic ones, so
nothing "rocket science". For example, the only difference for the kickstart.j2
template is that we inject ansible variables (for network and storage) :</p>
<div class="highlight"><pre><span></span><span class="x">network --bootproto=static --device=eth0 --gateway=</span><span class="cp">{{</span> <span class="nv">gateway</span> <span class="cp">}}</span><span class="x"></span>
<span class="x">--ip=</span><span class="cp">{{</span> <span class="nv">ip</span> <span class="cp">}}</span><span class="x"> --nameserver=</span><span class="cp">{{</span> <span class="nv">nameserver</span> <span class="cp">}}</span><span class="x"> --netmask=</span><span class="cp">{{</span> <span class="nv">netmask</span> <span class="cp">}}</span><span class="x"></span>
<span class="x">--ipv6=auto --activate </span>
<span class="x">network --hostname=</span><span class="cp">{{</span> <span class="nv">inventory_hostname</span> <span class="cp">}}</span><span class="x"> </span>
<span class="x">&lt;snip&gt; </span>
<span class="x">part /boot --fstype="ext4" --ondisk=sda --size=500 </span>
<span class="x">part pv.14 --fstype="lvmpv" --ondisk=sda --size=10000 --grow </span>
<span class="x">volgroup vg_</span><span class="cp">{{</span> <span class="nv">inventory_hostname_short</span> <span class="cp">}}</span><span class="x"> --pesize=4096 pv.14 </span>
<span class="x">logvol /home --fstype="xfs" --size=2412 --name=home --vgname=vg_</span><span class="cp">{{</span>
<span class="nv">inventory_hostname_short</span> <span class="cp">}}</span><span class="x"> --grow --maxsize=100000 </span>
<span class="x">logvol / --fstype="xfs" --size=8200 --name=root --vgname=vg_</span><span class="cp">{{</span>
<span class="nv">inventory_hostname_short</span> <span class="cp">}}</span><span class="x"> --grow --maxsize=1000000 </span>
<span class="x">logvol swap --fstype="swap" --size=2136 --name=swap --vgname=vg_</span><span class="cp">{{</span>
<span class="nv">inventory_hostname_short</span> <span class="cp">}}</span><span class="x"> </span>
<span class="x">&lt;snip&gt; </span>
</pre></div>
<p>The dhcp step isn't mandatory, but at least in that subnet we only allow
dhcp for "already known" MAC addresses, retrieved from the ansible
inventory (and previously fetched directly from the seamicro chassis) :</p>
<div class="highlight"><pre><span></span><span class="x"># </span><span class="cp">{{</span> <span class="nv">name</span> <span class="cp">}}</span><span class="x"> ip assignment </span>
<span class="x">dhcp-host=</span><span class="cp">{{</span> <span class="nv">mac_address</span> <span class="cp">}}</span><span class="x">,</span><span class="cp">{{</span> <span class="nv">ip</span> <span class="cp">}}</span><span class="x"> </span>
</pre></div>
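<p>Rendered for the sample node above (same made-up MAC address), that template produces one line per host :</p>

```
# n1.hufty.ci.centos.org ip assignment
dhcp-host=00:22:99:aa:bb:cc,172.19.3.1
```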
<p>Same thing for the pxelinux tftp config file :</p>
<div class="highlight"><pre><span></span><span class="nv">SERIAL</span> <span class="mi">0</span> <span class="mi">9600</span>
<span class="nv">DEFAULT</span> <span class="nv">text</span>
<span class="nv">PROMPT</span> <span class="mi">0</span>
<span class="nb">TIMEOUT</span> <span class="mi">50</span>
<span class="nv">TOTALTIMEOUT</span> <span class="mi">6000</span>
<span class="nv">ONTIMEOUT</span> {{ <span class="nv">inventory_hostname</span> }}<span class="o">-</span><span class="nv">deploy</span>
<span class="nv">LABEL</span> <span class="nv">local</span>
<span class="nv">MENU</span> <span class="nv">LABEL</span> <span class="ss">(</span><span class="nv">local</span><span class="ss">)</span>
<span class="nv">MENU</span> <span class="nv">DEFAULT</span>
<span class="nv">LOCALBOOT</span> <span class="mi">0</span>
<span class="nv">LABEL</span> {{ <span class="nv">inventory_hostname</span>}}<span class="o">-</span><span class="nv">deploy</span>
<span class="nv">kernel</span> <span class="nv">CentOS</span><span class="o">/</span>{{ <span class="nv">centos_dist</span> }}<span class="o">/</span>{{ <span class="nv">centos_arch</span>}}<span class="o">/</span><span class="nv">vmlinuz</span>
<span class="nv">MENU</span> <span class="nv">LABEL</span> <span class="nv">CentOS</span> {{ <span class="nv">centos_dist</span> }} {{ <span class="nv">centos_arch</span> }}<span class="o">-</span> <span class="nv">CI</span> <span class="nv">Kickstart</span>
<span class="k">for</span> {{ <span class="nv">inventory_hostname</span> }}
{<span class="o">%</span> <span class="k">if</span> <span class="nv">centos_dist</span> <span class="o">==</span> <span class="mi">7</span> <span class="o">-%</span>}
<span class="nv">append</span> <span class="nv">initrd</span><span class="o">=</span><span class="nv">CentOS</span><span class="o">/</span><span class="mi">7</span><span class="o">/</span>{{ <span class="nv">centos_arch</span> }}<span class="o">/</span><span class="nv">initrd</span>.<span class="nv">img</span> <span class="nv">net</span>.<span class="nv">ifnames</span><span class="o">=</span><span class="mi">0</span> <span class="nv">biosdevname</span><span class="o">=</span><span class="mi">0</span> <span class="nv">ip</span><span class="o">=</span><span class="nv">eth0</span>:<span class="nv">dhcp</span> <span class="nv">inst</span>.<span class="nv">ks</span><span class="o">=</span><span class="nv">http</span>:<span class="o">//</span><span class="nv">admin</span>.<span class="nv">ci</span>.<span class="nv">centos</span>.<span class="nv">org</span><span class="o">/</span><span class="nv">ks</span><span class="o">/</span>{{ <span class="nv">inventory_hostname</span> }}<span class="o">-</span><span class="nv">ks</span>.<span class="nv">cfg</span> <span class="nv">console</span><span class="o">=</span><span class="nv">ttyS0</span>,<span class="mi">9600</span><span class="nv">n8</span>
{<span class="o">%</span> <span class="k">else</span> <span class="o">-%</span>}
<span class="nv">append</span> <span class="nv">initrd</span><span class="o">=</span><span class="nv">CentOS</span><span class="o">/</span>{{ <span class="nv">centos_dist</span> }}<span class="o">/</span>{{ <span class="nv">centos_arch</span> }}<span class="o">/</span><span class="nv">initrd</span>.<span class="nv">img</span> <span class="nv">ksdevice</span><span class="o">=</span><span class="nv">eth0</span> <span class="nv">ip</span><span class="o">=</span><span class="nv">dhcp</span> <span class="nv">ks</span><span class="o">=</span><span class="nv">http</span>:<span class="o">//</span><span class="nv">admin</span>.<span class="nv">ci</span>.<span class="nv">centos</span>.<span class="nv">org</span><span class="o">/</span><span class="nv">ks</span><span class="o">/</span>{{ <span class="nv">inventory_hostname</span> }}<span class="o">-</span><span class="nv">ks</span>.<span class="nv">cfg</span> <span class="nv">console</span><span class="o">=</span><span class="nv">ttyS0</span>,<span class="mi">9600</span><span class="nv">n8</span>
{<span class="o">%</span> <span class="k">endif</span> <span class="o">%</span>}
</pre></div>
<p>The interesting part is the one I needed to spend more time on :
as said, it was the first time I had to play with SeaMicro hardware, so
I had to dive into the documentation (which I <em>always</em> do, RTFM FTW !) and
understand how to use their <a href="http://en.wikipedia.org/wiki/Representational_state_transfer">Rest
API</a>, but
once done, it was a breeze. Ansible by default doesn't provide a native
module for Seamicro, but that's why Rest exists, right ? And thankfully,
Ansible has a native <a href="http://docs.ansible.com/uri_module.html">URI
module</a>, which we use here.
The only thing I had to spend more time on was understanding how
to properly construct the body, but declaring it in the yaml file as a
variable/list and then converting it on the fly to json (with the
magical <em>body='{{ seamicro_reset_body | to_json }}'</em>) was the way
to go, and it reads as self-explanatory now.</p>
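<p>For the curious, the <code>seamicro_reset_body</code> structure above serializes to a JSON body like this (password obviously fake) :</p>

```
{"action": "reset", "using-pxe": "true", "username": "admin", "password": "obviously-hidden-and-changed"}
```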
<p>And here we go : calling that ansible playbook, and suddenly 128 physical
machines were being installed (and reinstalled with different CentOS
versions - 5, 6, 7 - and arches - i386, x86_64)</p>
<p>Hope this helps if you have to interact with Seamicro chassis from
within an ansible playbook too</p>Switching from Ethernet to Infiniband for Gluster access (or why we had to ...)2014-11-24T11:37:00+01:002014-11-24T11:37:00+01:00Fabian Arrotintag:arrfab.net,2014-11-24:/posts/2014/Nov/24/switching-from-ethernet-to-infiniband-for-gluster-access-or-why-we-had-to/<p>As explained in my previous (small) blog post, I had to migrate a
<a href="http://www.gluster.org">Gluster</a> setup we have within CentOS.org Infra.
As said in that previous blog post too, Gluster is really easy to
install, and sometimes it can even "smell" too easy to be true. One
thing to keep in mind when dealing with Gluster is that it's a
"file-level" storage solution, so don't try to compare it with
"block-level" solutions (so typically a NAS vs SAN comparison, even if
"SAN" itself is wrong for such discussion, as
<a href="http://en.wikipedia.org/wiki/Storage_area_network">SAN</a> is what's
<em>between</em> your nodes and the storage itself, just a reminder.)</p>
<p>Within <a href="http://www.centos.org">CentOS.org</a> infra, we have a multiple
nodes Gluster setup, that we use for multiple things at the same time.
The Gluster volumes are used to store some files, but also to host
(different gluster volumes with different settings/ACLs) KVM
virtual-disks (qcow2). People knowing me will say : "hey, but for
performances reasons, it's faster to just dedicate for example a
partition , or a Logical Volume instead of using qcow2 images sitting on
top a filesystem for Virtual Machines, right ?" and that's true. But
with our limited amount of machines, and a need to "move" Virtual
Machine …</p><p>As explained in my previous (small) blog post, I had to migrate a
<a href="http://www.gluster.org">Gluster</a> setup we have within CentOS.org Infra.
As said in that previous blog post too, Gluster is really easy to
install, and sometimes it can even "smell" too easy to be true. One
thing to keep in mind when dealing with Gluster is that it's a
"file-level" storage solution, so don't try to compare it with
"block-level" solutions (so typically a NAS vs SAN comparison, even if
"SAN" itself is wrong for such discussion, as
<a href="http://en.wikipedia.org/wiki/Storage_area_network">SAN</a> is what's
<em>between</em> your nodes and the storage itself, just a reminder.)</p>
<p>Within <a href="http://www.centos.org">CentOS.org</a> infra, we have a
multi-node Gluster setup that we use for multiple things at the same time.
The Gluster volumes are used to store some files, but also to host
(on different gluster volumes with different settings/ACLs) KVM
virtual disks (qcow2). People who know me will say : "hey, but for
performance reasons, it's faster to just dedicate for example a
partition, or a Logical Volume, instead of using qcow2 images sitting on
top of a filesystem for Virtual Machines, right ?" and that's true. But
with our limited number of machines, and a need to "move" Virtual
Machines without a proper shared storage solution (and because in our
setup, those physical nodes *are* both glusterd nodes and hypervisors),
Gluster was an easy-to-use solution to :</p>
<blockquote>
<ul>
<li>Aggregate local SATA disks as a bigger shared drive</li>
<li>use <a href="https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Creating_Distributed_Replicated_Volumes.html">replicated+distributed
mode</a>
to also have local resiliency for those VMs</li>
</ul>
</blockquote>
<p>It was working, but not that fast ... I then heard about the fact that
(obviously) accessing those qcow2 image files through fuse wasn't
efficient at all, but that Gluster had
<a href="http://www.gluster.org/community/documentation/index.php/Libgfapi_with_qemu_libvirt">libgfapi</a>
that could be used to "talk" directly to the gluster daemons, bypassing
completely the need to mount your gluster volumes locally through fuse.
Thankfully, qemu-kvm from CentOS 6 is built against libgfapi so it can
use that directly (and that's the reason why it's automatically installed
when you install KVM hypervisor components). Results ? Better, but
still not what we were expecting ...</p>
<p>When trying to find the issue, I discussed with some folks in the
#gluster irc channel (irc.freenode.net) and suddenly I understood
something that is *not* so obvious about Gluster in
distributed+replicated mode : people having dealt with storage
solutions at the hardware level (or people using
<a href="http://www.drbd.org/">DRBD</a>, which I did too in the past, and which I
also liked a lot ..) expect the replication to happen
automatically on the storage/server side, but that's not true for
Gluster : in fact glusterd just exposes metadata to gluster clients,
which then know where to read/write (being "redirected" to the correct
gluster nodes). That means that replication happens on the *client*
side : in replicated mode, each client writes the same data
twice : once to each server ...</p>
<p>So back to our example : our nodes have two 1Gb/s Ethernet cards, one
being a bridge used by the Virtual Machines, and the other one
"dedicated" to gluster, and each node is itself both a glusterd node and
a gluster client. I let you compute the max performance we could get for
a write operation : 1Gbit/s is \~ 125MB/s, divided by two (because of the
replication), so in theory \~ 62 MB/s (and once you remove
tcp/gluster overhead, that drops to \~ 55MB/s)</p>
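<p>That back-of-the-envelope math can be sketched as follows (the \~10% overhead factor here is my own rough assumption, not a measured value) :</p>
<div class="highlight"><pre>
```shell
# Theoretical write throughput for a replica-2 Gluster volume on a 1Gb/s link:
# the client sends each write once per replica, over the same NIC.
link_mbit=1000                              # link speed, in Mbit/s
replicas=2                                  # replica count of the volume
raw=$(( link_mbit / 8 ))                    # ~125 MB/s of raw bandwidth
theoretical=$(( raw / replicas ))           # ~62 MB/s left per write stream
usable=$(( theoretical * 90 / 100 ))        # minus ~10% tcp/gluster overhead (rough guess)
echo "raw=${raw}MB/s theoretical=${theoretical}MB/s usable=~${usable}MB/s"
```
</pre></div>
<p>With <code>replicas=1</code> (distributed-only mode) the same formula gives \~ 125MB/s, which matches the doubled write performance observed below.</p>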
<p>How to solve that ? Well, I tested that theory and confirmed directly
that it was the case : in <a href="https://access.redhat.com/documentation/en-US/Red_Hat_Storage/3/html/Administration_Guide/Creating_Distributed_Volumes.html">distributed
mode</a>
only, write performance automatically doubled. So yes, running
Gluster on Gigabit Ethernet had suddenly become the bottleneck. Upgrading to
10Gb Ethernet wasn't something we could do, but, thanks to <a href="https://twitter.com/realjustinclift">Justin
Clift</a> (and some other Gluster
folks), we were able to find some "second hand"
<a href="http://en.wikipedia.org/wiki/InfiniBand">Infiniband</a> hardware (10Gbps
HCAs and a switch)</p>
<p>While Gluster has native/builtin rdma/Infiniband capabilities (see the
"transport" option of the "gluster volume create" command), in our case we
had to migrate existing Gluster volumes from plain TCP/Ethernet to
Infiniband, while keeping the downtime as small as possible. That
was my first experience with Infiniband, but it's not as hard as it
seems, especially when you discover
<a href="https://www.kernel.org/doc/Documentation/infiniband/ipoib.txt">IPoIB</a> (IP
over Infiniband). So from a Sysadmin POV, it's just "yet another network
interface", but a 10Gbps one now :)</p>
<p>The Gluster volume migration then goes like this : (schedule a - obvious
- downtime for this) :</p>
<p>On all gluster nodes (assuming that we start from machines installed
only with @core group, so minimal ones) :</p>
<div class="highlight"><pre><span></span><span class="n">yum</span> <span class="n">groupinstall</span> <span class="ss">"Infiniband Support"</span>
<span class="n">chkconfig</span> <span class="n">rdma</span> <span class="k">on</span>
<span class="o">#</span><span class="n">stop</span> <span class="n">your</span> <span class="n">clients</span> <span class="k">or</span> <span class="n">other</span> <span class="n">apps</span> <span class="n">accessing</span> <span class="n">gluster</span> <span class="n">volumes</span><span class="p">,</span> <span class="k">as</span> <span class="n">they</span> <span class="n">will</span> <span class="n">be</span> <span class="n">stopped</span>
<span class="n">service</span> <span class="n">glusterd</span> <span class="n">stop</span> <span class="o">&&</span> <span class="n">chkconfig</span> <span class="n">glusterd</span> <span class="k">off</span> <span class="o">&&</span> <span class="n">init</span> <span class="mi">0</span>
</pre></div>
<p>Install then the hardware in each server, connect all Infiniband cards
to the IB switch (previously configured) and power back on all servers.
When machines are back online, you "just" have to configure the ib
interfaces. As in my case the machines were remote nodes and I couldn't
physically check how they were cabled, I had to use some IB tools to see
which port was connected (a tool like "ibv_devinfo" showed me which
port was active/connected, while "ibdiagnet" shows you the topology and
other nodes/devices). In our case it was port 2, so let's create the
ifcfg-ib{0,1} devices (ib1 being the one we'll use) :
<div class="highlight"><pre><span></span><span class="n">DEVICE</span><span class="o">=</span><span class="n">ib1</span>
<span class="k">TYPE</span><span class="o">=</span><span class="n">Infiniband</span>
<span class="n">BOOTPROTO</span><span class="o">=</span><span class="k">static</span>
<span class="n">BROADCAST</span><span class="o">=</span><span class="mi">192</span><span class="p">.</span><span class="mi">168</span><span class="p">.</span><span class="mi">123</span><span class="p">.</span><span class="mi">255</span>
<span class="n">IPADDR</span><span class="o">=</span><span class="mi">192</span><span class="p">.</span><span class="mi">168</span><span class="p">.</span><span class="mi">123</span><span class="p">.</span><span class="mi">2</span>
<span class="n">NETMASK</span><span class="o">=</span><span class="mi">255</span><span class="p">.</span><span class="mi">255</span><span class="p">.</span><span class="mi">255</span><span class="p">.</span><span class="mi">0</span>
<span class="n">NETWORK</span><span class="o">=</span><span class="mi">192</span><span class="p">.</span><span class="mi">168</span><span class="p">.</span><span class="mi">123</span><span class="p">.</span><span class="mi">0</span>
<span class="n">ONBOOT</span><span class="o">=</span><span class="n">yes</span>
<span class="n">NM_CONTROLLED</span><span class="o">=</span><span class="k">no</span>
<span class="n">CONNECTED_MODE</span><span class="o">=</span><span class="n">yes</span>
</pre></div>
<p>The interesting part here is the "CONNECTED_MODE=yes" : people who
already use iscsi know that Jumbo frames are really important if
you have a dedicated VLAN (and if the Ethernet switch supports Jumbo
frames too). As stated in the <a href="https://www.kernel.org/doc/Documentation/infiniband/ipoib.txt">IPoIB kernel
doc</a>,
there are two operation modes : datagram (default, 2044 bytes MTU) or
connected (up to 65520 bytes MTU). It's up to you to decide which one to
use, but if you understood the Jumbo frames thing for iscsi, you get the
point already.</p>
<p>An "ifup ib1" on all nodes will bring the interfaces up and you can
verify that everything works by pinging each other node, including with
larger mtu values :</p>
<blockquote>
<p>ping -s 16384 \<other-node-on-the-infiniband-network></p>
</blockquote>
<p>If everything's fine, you can then decide to start gluster, *but* don't
forget that gluster uses FQDNs (at least I hope that's how you configured
your gluster setup initially, already on a dedicated segment, and using
different FQDNs for the storage vlan). You just have to update your local
resolver (internal DNS, local hosts files, whatever you want) to be sure
that gluster will then use the new IP subnet on the Infiniband network.
(If you haven't previously defined different hostnames for your gluster
setup, you can "just" update that in the various
/var/lib/glusterd/peers/* and /var/lib/glusterd/vols/*/*.vol files)</p>
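<p>A minimal sketch of that last-resort hostname rewrite (the FQDNs and the scratch directory standing in for /var/lib/glusterd are purely illustrative; on a real node you would do this on the real path, with glusterd stopped and backups taken) :</p>
<div class="highlight"><pre>
```shell
# Demo of rewriting the storage FQDN in glusterd metadata files, on a scratch
# copy; gluster01.example.org / gluster01-ib.example.org are hypothetical names.
GLUSTERD_DIR=$(mktemp -d)                  # stand-in for /var/lib/glusterd
mkdir -p "$GLUSTERD_DIR/peers"
echo "hostname1=gluster01.example.org" > "$GLUSTERD_DIR/peers/some-uuid"
for f in "$GLUSTERD_DIR"/peers/*; do
  cp "$f" "$f.bak"                         # keep a backup before editing
  sed -i 's/gluster01\.example\.org/gluster01-ib.example.org/g' "$f"
done
cat "$GLUSTERD_DIR/peers/some-uuid"        # now points to the Infiniband FQDN
```
</pre></div>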
<p>Restart the whole gluster stack (on all gluster nodes) and verify that
it works fine :</p>
<div class="highlight"><pre><span></span><span class="nv">service</span> <span class="nv">glusterd</span> <span class="nv">start</span>
<span class="nv">gluster</span> <span class="nv">peer</span> <span class="nv">status</span>
<span class="nv">gluster</span> <span class="nv">volume</span> <span class="nv">status</span>
# <span class="nv">and</span> <span class="k">if</span> <span class="nv">you</span><span class="s1">'</span><span class="s">re happy with the results :</span>
<span class="nv">chkconfig</span> <span class="nv">glusterd</span> <span class="nv">on</span>
</pre></div>
<p>So, in a short summary:</p>
<ul>
<li>Infiniband isn't that difficult (especially if you use IPoIB, which
only has a very small overhead)</li>
<li>Migrating gluster from Ethernet to Infiniband is also easy (especially
if you carefully planned your initial design around IP
subnet/VLAN/segment/DNS resolution for a "transparent" move)</li>
</ul>
<p><strong>Updating to Gluster 3.6 packages on CentOS 6</strong> (2014-11-21, by Fabian Arrotin)</p>
<p>I had to do some maintenance yesterday on our
<a href="http://www.gluster.org">Gluster</a> nodes used within CentOS.org infra.
Basically I had to reconfigure some gluster volumes to use Infiniband
instead of Ethernet. (I'll write a dedicated blog post about that
migration later).</p>
<p>While a lot of people directly consume packages from Gluster.org (for
example
http://download.gluster.org/pub/gluster/glusterfs/3.6/LATEST/CentOS/epel-6/x86_64/),
you'll (soon) be able to also install those packages directly on CentOS,
through packages built by the <a href="https://wiki.centos.org/SpecialInterestGroup/Storage/Proposal">Storage
SIG</a>. At
the moment I'm writing this blog post, gluster 3.6.1 packages are built
and available on our <a href="http://cbs.centos.org/koji/">Community Build Server Koji
setup</a> , but still in testing (and
unsigned).</p>
<p>"But wait, there are already glusterfs packages tagged 3.6 in CentOS
6.6, right ?" you will say. Well, yes, but not the full stack. What you
see in the [base] (or [updates]) repository are the client packages, as
for example a base CentOS 6.x can be a gluster client (through fuse, or
libgfapi - really interesting to speed up qemu-kvm instead of using the
default fuse mount point ..), but the -server package isn't there. That's
the reason why you have to use either the upstream gluster.org yum
repositories or the Storage SIG ones to have access to the full stack,
and so run glusterd on CentOS.</p>
<p>Interested in testing those packages ? Wanting to test the update before
those packages will be released by the Storage SIG ? here we go :
<a href="http://cbs.centos.org/repos/storage6-testing/x86_64/os/Packages/">http://cbs.centos.org/repos/storage6-testing/x86_64/os/Packages/</a>
(packages available for <a href="http://cbs.centos.org/repos/storage7-testing/x86_64/os/Packages/">CentOS
7</a>
too)</p>
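<p>If you want to try them, a hypothetical .repo file pointing at that testing repository could look like this (the repo id, file name and <code>gpgcheck=0</code> choice are mine - the latter only because the packages are still unsigned, as mentioned above; written to /tmp here just for illustration, the real location would be /etc/yum.repos.d/) :</p>
<div class="highlight"><pre>
```shell
# Write a throwaway .repo file for the (testing, unsigned) Storage SIG repo.
cat > /tmp/storage6-testing.repo <<'EOF'
[storage6-testing]
name=CentOS Storage SIG - Gluster 3.6 (testing, unsigned)
baseurl=http://cbs.centos.org/repos/storage6-testing/x86_64/os/
enabled=1
gpgcheck=0
EOF
# Then, on a real CentOS 6 box:
#   yum --enablerepo=storage6-testing install glusterfs-server
```
</pre></div>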
<p>By the way, if you never tested Gluster, it's really easy to setup and
play with, even within Virtual Machines. Interesting reading : (quick
start) :
<a href="http://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart">http://wiki.centos.org/SpecialInterestGroup/Storage/gluster-Quickstart</a></p>
<p><strong>Koji - CentOS CBS infra and sslv3/Poodle important notification</strong> (2014-10-15, by Fabian Arrotin)</p>
<p>As most of you already know, there is an important SSLv3 vulnerability
(CVE-2014-3566 - see https://access.redhat.com/articles/1232123) , known
as Poodle.<br>
While it's easy to disable SSLv3 in the allowed Protocols at the server
level (for example SSLProtocol All -SSLv2 -SSLv3 for apache), some
clients are still defaulting to SSLv3, and Koji does that.</p>
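<p>For the apache example mentioned above, the mod_ssl directive (typically in ssl.conf or the vhost definition, depending on your layout) would look like :</p>
<div class="highlight"><pre>
```apache
# Allow only TLS: drop SSLv2 and SSLv3 (Poodle)
SSLProtocol All -SSLv2 -SSLv3
```
</pre></div>
<p>You can then verify from a client with something like <code>openssl s_client -ssl3 -connect your.host:443</code> (hostname being yours, obviously), which should now fail to negotiate a session.</p>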
<p>We currently have disabled SSLv3 on our cbs.centos.org koji instance, so
if you're a cbs/koji user, please adapt your local koji package (local
fix !)<br>
At the moment, there is no available upstream package, but the
following patch has been tested by Fedora people too (credits go to
https://lists.fedoraproject.org/pipermail/infrastructure/2014-October/014976.html)</p>
<div class="highlight"><pre><span></span> <span class="o">---</span> <span class="nv">SSLCommon</span>.<span class="nv">py</span>.<span class="nv">orig</span> <span class="mi">2014</span><span class="o">-</span><span class="mi">10</span><span class="o">-</span><span class="mi">15</span> <span class="mi">11</span>:<span class="mi">42</span>:<span class="mi">54</span>.<span class="mi">747082029</span> <span class="o">+</span><span class="mi">0200</span>
<span class="o">+++</span> <span class="nv">SSLCommon</span>.<span class="nv">py</span> <span class="mi">2014</span><span class="o">-</span><span class="mi">10</span><span class="o">-</span><span class="mi">15</span> <span class="mi">11</span>:<span class="mi">44</span>:<span class="mi">08</span>.<span class="mi">215257590</span> <span class="o">+</span><span class="mi">0200</span>
@@ <span class="o">-</span><span class="mi">37</span>,<span class="mi">7</span> <span class="o">+</span><span class="mi">37</span>,<span class="mi">8</span> @@
<span class="k">if</span> <span class="nv">f</span> <span class="nv">and</span> <span class="nv">not</span> <span class="nv">os</span>.<span class="nv">access</span><span class="ss">(</span><span class="nv">f</span>, <span class="nv">os</span>.<span class="nv">R_OK</span><span class="ss">)</span>:
<span class="nv">raise</span> <span class="nv">StandardError</span>, <span class="s2">"</span><span class="s">%s does not exist or is not </span>
<span class="nv">readable</span><span class="s2">"</span><span class="s"> % f</span>
<span class="o">-</span> <span class="nv">ctx</span> <span class="o">=</span> <span class="nv">SSL</span>.<span class="nv">Context</span><span class="ss">(</span><span class="nv">SSL</span>.<span class="nv">SSLv3_METHOD</span><span class="ss">)</span> # <span class="nv">SSLv3</span> <span class="nv">only</span>
<span class="o">+</span> #<span class="nv">ctx</span> <span class="o">=</span> <span class="nv">SSL</span>.<span class="nv">Context</span><span class="ss">(</span><span class="nv">SSL</span>.<span class="nv">SSLv3_METHOD</span><span class="ss">)</span> # <span class="nv">SSLv3</span> <span class="nv">only</span>
<span class="o">+</span> <span class="nv">ctx</span> <span class="o">=</span> <span class="nv">SSL</span>.<span class="nv">Context</span><span class="ss">(</span><span class="nv">SSL</span>.<span class="nv">TLSv1_METHOD</span><span class="ss">)</span> # <span class="nv">TLSv1</span> <span class="nv">only</span>
<span class="nv">ctx</span>.<span class="nv">use_certificate_file</span><span class="ss">(</span><span class="nv">key_and_cert</span><span class="ss">)</span>
<span class="nv">ctx</span>.<span class="nv">use_privatekey_file</span><span class="ss">(</span><span class="nv">key_and_cert</span><span class="ss">)</span>
<span class="nv">ctx</span>.<span class="nv">load_client_ca</span><span class="ss">(</span><span class="nv">ca_cert</span><span class="ss">)</span>
@@ <span class="o">-</span><span class="mi">45</span>,<span class="mi">7</span> <span class="o">+</span><span class="mi">46</span>,<span class="mi">8</span> @@
<span class="nv">verify</span> <span class="o">=</span> <span class="nv">SSL</span>.<span class="nv">VERIFY_PEER</span> <span class="o">|</span> <span class="nv">SSL</span>.<span class="nv">VERIFY_FAIL_IF_NO_PEER_CERT</span>
<span class="nv">ctx</span>.<span class="nv">set_verify</span><span class="ss">(</span><span class="nv">verify</span>, <span class="nv">our_verify</span><span class="ss">)</span>
<span class="nv">ctx</span>.<span class="nv">set_verify_depth</span><span class="ss">(</span><span class="mi">10</span><span class="ss">)</span>
<span class="o">-</span> <span class="nv">ctx</span>.<span class="nv">set_options</span><span class="ss">(</span><span class="nv">SSL</span>.<span class="nv">OP_NO_SSLv2</span> <span class="o">|</span> <span class="nv">SSL</span>.<span class="nv">OP_NO_TLSv1</span><span class="ss">)</span>
<span class="o">+</span> #<span class="nv">ctx</span>.<span class="nv">set_options</span><span class="ss">(</span><span class="nv">SSL</span>.<span class="nv">OP_NO_SSLv2</span> <span class="o">|</span> <span class="nv">SSL</span>.<span class="nv">OP_NO_TLSv1</span><span class="ss">)</span>
<span class="o">+</span> <span class="nv">ctx</span>.<span class="nv">set_options</span><span class="ss">(</span><span class="nv">SSL</span>.<span class="nv">OP_NO_SSLv2</span> <span class="o">|</span> <span class="nv">SSL</span>.<span class="nv">OP_NO_TLSv1</span> <span class="o">|</span>
<span class="nv">SSL</span>.<span class="nv">OP_NO_SSLv3</span><span class="ss">)</span>
<span class="k">return</span> <span class="nv">ctx</span>
</pre></div>
<p>We'll keep you informed about possible upstream koji packages that would
default to at least TLSv1</p>
<p>If you encounter a problem, feel free to drop into #centos-devel
channel on irc.freenode.net and have a chat with us</p>
<p><strong>CentOS Mirrors "Spring Clean-up operation"</strong> (2014-03-20, by Fabian Arrotin)</p>
<p>Just to let you know that I have verified some mirrors last week and
sent several mails to the contact info we had for those mirrors
(unreachable/far behind).<br>
I've received feedback from some people still willing to be listed as
third-party mirror and so they fixed the issue they had (thank you !)</p>
<p>Some other people replied with a "sorry, we can't host a mirror anymore"
answer. (Thanks for having replied to my email and thank you for having
been part of the successful "centos mirror party" !).</p>
<p>For the "unanswered" ones, I've decided that it was time to launch a
"Spring clean-up operation" in the mirrors DB/Network.<br>
I've removed them from the DB, meaning that the crawler process we use
to detect bad/unreachable mirrors will not even try anymore to verify
them.<br>
We actually have more than <a href="http://mirror-status.centos.org">500 external (third-party)
mirrors</a> serving CentOS to the whole
world, without counting the 50+ (managed by CentOS) servers used to feed
those external mirrors, and sometimes serving content too for countries
less covered.</p>
<p>Thanks a lot for your collaboration and support ! We <em>love</em> you :-)</p>
<p><strong>CentOS Dojo Lyon (France)</strong> (2014-03-15, by Fabian Arrotin)</p>
<p>As you may (or may not !) know, we will hold a CentOS Dojo in Lyon on
Friday, April 11th. So if you feel like sharing your CentOS experience,
for example by giving a talk, or if you just want to come and have a good
time with us listening to the scheduled talks (a - subliminal - call for
volunteer speakers !), feel free to register.<br>
Registration is free ! More information on the Wiki page :
<a href="http://wiki.centos.org/Events/Dojo/Lyon2014">http://wiki.centos.org/Events/Dojo/Lyon2014</a> .</p>
<dl>
<dt>Hi people, are you in the Lyon (France) area around April 11th ? Willing</dt>
<dt>to come to a CentOS Dojo ? (either to attend it or even better, present</dt>
<dt>something around CentOS ?) . Feel free to register for this free event !</dt>
<dd><a href="http://wiki.centos.org/Events/Dojo/Lyon2014">http://wiki.centos.org/Events/Dojo/Lyon2014</a></dd>
</dl>
<p><strong>IPv6 vs IPv4 usage for the new www.centos.org website [ Stats ! ]</strong> (2014-01-08, by Fabian Arrotin)</p>
<p>So, everybody now knows the <a href="http://lists.centos.org/pipermail/centos-announce/2014-January/020100.html">whole
story</a>,
and so visited the new <a href="http://www.centos.org">CentOS website</a>. It's
always a good time to keep an eye on statistics and we also added now
native IPv6 support ! (Finally ! , we live in 2014, right ? ). And
because we "love" stats, here they are (for IPv4 vs IPv6) :</p>
<p><strong><em>IPv4 traffic for the new website :</em></strong> </p>
<p><img alt="IPv4 usage" src="/images/www-ipv4.png" title="www.centos.org - ipv4 statistics"></p>
<p><strong><em>IPv6 traffic for the new website :</em></strong></p>
<p><img alt="IPv6 usage" src="/images/www-ipv6.png" title="www.centos.org ipv6 stats"></p>
<p>So clearly not so much IPv6 traffic vs the IPv4 one. <strong><em>Join the IPv6
movement !</em></strong></p>
<p><strong>Debug for the winners !</strong> (2013-10-31, by Fabian Arrotin)</p>
<p>Recently I had to dive back into <a href="http://www.ansibleworks.com/">Ansible</a>
playbooks I wrote (quite) some time ago. I had to add some logic to
generate different application templates based on facts/packages being
installed on the managed nodes. Long story short (I'll not describe the
use case here as it's quite complex), I decided that injecting some
kind of <a href="http://jinja.pocoo.org/docs/templates/#filters">logic directly in the Jinja2
templates</a> was enough ...
but it wasn't.</p>
<p>Let's take a very simplified example here (don't even look at the tasks
themselves but rather at the logic used to get there; once again, this is
a 'stupid' playbook) :
<div class="highlight"><pre><span></span><span class="gd">--- </span>
<span class="gd">- hosts: localhost </span>
connection: local
user: root
vars:
- myrole: httpserver
tasks:
- name: registering a variable only if myrole is httpserver
command: /bin/rpm -q --qf '%{version}' httpd
register: httpd_version
when: myrole == 'httpserver'
- name: pushing the generated template
template: src=../templates/logic.txt.j2 dest=/tmp/logic.txt
handlers:
</pre></div>
<p>Now let's have a look at the (very) simple logic.txt.j2 :</p>
<div class="highlight"><pre><span></span>{<span class="o">%</span> <span class="k">if</span> <span class="nv">httpd_version</span> <span class="nv">is</span> <span class="nv">defined</span> <span class="o">-%</span>}
<span class="nv">You</span><span class="s1">'</span><span class="s">re using an Apache http server version : {{ httpd_version.stdout }} </span>
{<span class="o">%</span> <span class="k">else</span> <span class="o">%</span>}
<span class="nv">You</span><span class="s1">'</span><span class="s">re not using an http server, or not defined in the ansible machine role </span>
{<span class="o">%</span> <span class="k">endif</span> <span class="o">-%</span>}
</pre></div>
<p>Easy, and it seems it was working when myrole was indeed httpserver :</p>
<div class="highlight"><pre><span></span> <span class="n">cat</span> <span class="o">/</span><span class="n">tmp</span><span class="o">/</span><span class="n">logic</span><span class="p">.</span><span class="n">txt</span>
<span class="n">You</span><span class="err">'</span><span class="n">re</span> <span class="k">using</span> <span class="n">an</span> <span class="n">Apache</span> <span class="n">http</span> <span class="n">server</span> <span class="k">version</span> <span class="p">:</span> <span class="mi">2</span><span class="p">.</span><span class="mi">2</span><span class="p">.</span><span class="mi">15</span>
</pre></div>
<p>But things didn't work as expected when myrole was something else, like
for example dbserver :</p>
<div class="highlight"><pre><span></span> <span class="nv">TASK</span>: [<span class="nv">registering</span> <span class="nv">a</span> <span class="nv">variable</span> <span class="nv">only</span> <span class="k">if</span> <span class="nv">myrole</span> <span class="nv">is</span> <span class="nv">httpserver</span>]
<span class="o">*******************</span>
<span class="nv">skipping</span>: [<span class="nv">localhost</span>]
<span class="nv">TASK</span>: [<span class="nv">pushing</span> <span class="nv">the</span> <span class="nv">generated</span> <span class="nv">template</span>]
<span class="o">*****************************************</span>
<span class="nv">fatal</span>: [<span class="nv">localhost</span>] <span class="o">=</span><span class="o">></span> {<span class="s1">'</span><span class="s">msg</span><span class="s1">'</span>: <span class="s2">"</span><span class="s">One or more undefined variables:</span>
<span class="s1">'</span><span class="s">dict</span><span class="s1">'</span> <span class="nv">object</span> <span class="nv">has</span> <span class="nv">no</span> <span class="nv">attribute</span> <span class="s1">'</span><span class="s">stdout</span><span class="s1">'</span><span class="s2">"</span><span class="s">, 'failed': True}</span>
</pre></div>
<p>Hmm, as the register: task was skipped, I was wondering why it then
complained about httpd_version.stdout : I thought that
httpd_version simply wasn't defined .. but I was wrong : even when 'skipped',
the variable still exists for that host. I quickly discovered it when adding a
<a href="http://www.ansibleworks.com/docs/modules.html#debug">debug</a> task in
between the other tasks in my playbook :</p>
<p><code>- debug: msg="this is http_version value {{ httpd_version }}"</code></p>
<p>Now let's see what can be wrong :</p>
<div class="highlight"><pre><span></span><span class="w"> </span><span class="nl">TASK</span><span class="p">:</span><span class="w"> </span><span class="o">[</span><span class="n">debug msg="this is http_version value {{httpd_version}}"</span><span class="o">]</span><span class="w"></span>
<span class="w"> </span><span class="o">*************</span><span class="w"> </span>
<span class="w"> </span><span class="nl">ok</span><span class="p">:</span><span class="w"> </span><span class="o">[</span><span class="n">localhost</span><span class="o">]</span><span class="w"> </span><span class="o">=</span><span class="o">></span><span class="w"> </span><span class="err">{</span><span class="ss">"msg"</span><span class="err">:</span><span class="w"> </span><span class="ss">"this is http_version value {u'skipped': True, u'changed': False}"</span><span class="err">}</span><span class="w"></span>
</pre></div>
<p>Very interesting : so even when skipped, the variable httpd_version is
still "registered" by the register: feature but marked as skipped.</p>
<p>So let's change our "logic" in the Jinja2 template then ! :</p>
<div class="highlight"><pre><span></span>{<span class="o">%</span> <span class="k">if</span> <span class="nv">httpd_version</span>.<span class="nv">skipped</span> <span class="o">-%</span>}
<span class="nv">You</span><span class="s1">'</span><span class="s">re not using an http server, or not defined in the ansible machine role </span>
{<span class="o">%</span> <span class="k">else</span> <span class="o">%</span>}
<span class="nv">You</span><span class="s1">'</span><span class="s">re using an Apache http server version : {{ httpd_version.stdout }} </span>
{<span class="o">%</span> <span class="k">endif</span> <span class="o">-%</span>}
</pre></div>
<p>And now it works in all cases ..</p>
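<p>Another defensive option is to keep the original ordering but test the attribute itself, so the template behaves whether the task ran, was skipped, or was never registered at all. This is only a sketch, assuming the same registered httpd_version variable :</p>

```jinja
{% if httpd_version is defined and httpd_version.stdout is defined -%}
You're using an Apache http server version : {{ httpd_version.stdout }}
{% else %}
You're not using an http server, or not defined in the ansible machine role
{% endif -%}
```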
<p>It's a (very, very, very) simplified example, but you get the idea :
using the debug module (don't forget to call ansible-playbook with -vvv
to see those messages too !) can quickly show you where your issue is
when you have to troubleshoot something. As <a href="http://www.jedi.be/">Patrick
Debois</a> was <a href="https://twitter.com/patrickdebois/status/390719078367002624">saying</a> : "you gotta love Ansible for its simplicity" :-)</p>Rolling updates with Ansible and Apache reverse proxies2013-05-23T17:36:00+02:002013-05-23T17:36:00+02:00Fabian Arrotintag:arrfab.net,2013-05-23:/posts/2013/May/23/rolling-updates-with-ansible-and-apache-reverse-proxies/<p>It's not a secret anymore that I use <a href="http://ansible.cc/">Ansible</a> to do
a lot of things. That goes from simple "one shot" actions with ansible
on multiple nodes to "configuration management and deployment tasks"
with ansible-playbook. One of the things I also really like with Ansible
is the fact that it's also a great
<a href="http://en.wikipedia.org/wiki/Orchestration_%28computing%29">orchestration</a>
tool.</p>
<p>For example, in some WSOA flows you can have a bunch of servers behind
load balancer nodes. When you want to put a backend node/web server node
in maintenance mode (to change configuration/update package/update
app/whatever), you just "remove" that node from the production flow, do
what you need to do, verify it's up again and put that node back in
production. The principle of "rolling updates" is then interesting as
you still have 24/7 flows in production.</p>
<p>But what if you're not in charge of the whole infrastructure ? AKA for
example you're in charge of some servers, but not the load balancers in
front of your infrastructure. Let's consider the following situation,
and how we'll use ansible to still disable/enable a backend server
behind Apache reverse proxies.</p>
<p><img alt="Apache LB" src="/images/Apache-LB1.png" title="Apache-LB"></p>
<p>So here is the (simplified) situation : two Apache reverse proxies
(using the
<a href="http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html">mod_proxy_balancer</a>
module) are used to load balance traffic to four backend nodes (Jboss in
our simplified case). We can't directly touch those upstream Apache
nodes, but we can still interact with them, thanks to the fact that
"<a href="http://httpd.apache.org/docs/2.2/mod/mod_proxy_balancer.html#balancer_manager">balancer manager
support</a>"
is active (and protected !)</p>
<p>Let's have a look at a (simplified) ansible inventory file :</p>
<div class="highlight"><pre><span></span><span class="k">[jboss-cluster]</span>
<span class="na">jboss-1</span>
<span class="na">jboss-2</span>
<span class="na">jboss-3</span>
<span class="na">jboss-4</span>
<span class="k">[apache-group-1]</span>
<span class="na">apache-node-1</span>
<span class="na">apache-node-2</span>
</pre></div>
<p>Let's now create a generic (write once, use many times) task to disable a
backend node from the Apache reverse proxies ! :</p>
<div class="highlight"><pre><span></span><span class="o">---</span>
##############################################################################
#
# <span class="nv">This</span> <span class="nv">task</span> <span class="nv">can</span> <span class="nv">be</span> <span class="nv">included</span> <span class="nv">in</span> <span class="nv">a</span> <span class="nv">playbook</span> <span class="nv">to</span> <span class="k">pause</span> <span class="nv">a</span> <span class="nv">backend</span> <span class="nv">node</span>
# <span class="nv">being</span> <span class="nv">load</span> <span class="nv">balanced</span> <span class="nv">by</span> <span class="nv">Apache</span> <span class="nv">Reverse</span> <span class="nv">Proxies</span>
# <span class="nv">These</span> <span class="nv">variables</span> <span class="nv">need</span> <span class="nv">to</span> <span class="nv">be</span> <span class="nv">defined</span> :
# <span class="o">-</span> ${<span class="nv">apache_rp_backend_url</span>} : <span class="nv">the</span> <span class="nv">URL</span> <span class="nv">of</span> <span class="nv">the</span> <span class="nv">backend</span> <span class="nv">server</span>, <span class="nv">as</span> <span class="nv">known</span> <span class="nv">by</span> <span class="nv">Apache</span> <span class="nv">server</span>
# <span class="o">-</span> ${<span class="nv">apache_rp_backend_cluster</span>} : <span class="nv">the</span> <span class="nv">name</span> <span class="nv">of</span> <span class="nv">the</span> <span class="nv">cluster</span> <span class="nv">as</span> <span class="nv">defined</span> <span class="nv">on</span> <span class="nv">the</span> <span class="nv">Apache</span> <span class="nv">RP</span> <span class="ss">(</span><span class="nv">the</span> <span class="nv">group</span> <span class="nv">the</span> <span class="nv">node</span> <span class="nv">is</span> <span class="nv">member</span> <span class="nv">of</span><span class="ss">)</span> <span class="ss">(</span><span class="nv">internalasync</span><span class="ss">)</span>
# <span class="o">-</span> ${<span class="nv">apache_rp_group</span>} : <span class="nv">the</span> <span class="nv">name</span> <span class="nv">of</span> <span class="nv">the</span> <span class="nv">group</span> <span class="nv">declared</span> <span class="nv">in</span> <span class="nv">hosts</span>.<span class="nv">cfg</span> <span class="nv">containing</span> <span class="nv">Apache</span> <span class="nv">Reverse</span> <span class="nv">Proxies</span>
# <span class="o">-</span> ${<span class="nv">apache_rp_user</span>}: <span class="nv">the</span> <span class="nv">username</span> <span class="nv">used</span> <span class="nv">to</span> <span class="nv">authenticate</span> <span class="nv">against</span> <span class="nv">the</span> <span class="nv">Apache</span> <span class="nv">balancer</span><span class="o">-</span><span class="nv">manager</span> <span class="ss">(</span><span class="nv">clusteradmin</span><span class="ss">)</span>
# <span class="o">-</span> ${<span class="nv">apache_rp_password</span>}: <span class="nv">the</span> <span class="nv">password</span> <span class="nv">used</span> <span class="nv">to</span> <span class="nv">authenticate</span> <span class="nv">against</span> <span class="nv">the</span> <span class="nv">Apache</span> <span class="nv">balancer</span><span class="o">-</span><span class="nv">manager</span> <span class="ss">(</span><span class="mi">5</span><span class="nv">added592b</span><span class="ss">)</span>
# <span class="o">-</span> ${<span class="nv">apache_rp_balancer_manager_uri</span>}: <span class="nv">the</span> <span class="nv">URI</span> <span class="nv">where</span> <span class="nv">to</span> <span class="nv">find</span> <span class="nv">the</span> <span class="nv">balancer</span><span class="o">-</span><span class="nv">manager</span> <span class="nv">Apache</span> <span class="nv">mod</span>
#
##############################################################################
<span class="o">-</span> <span class="nv">name</span>: <span class="nv">Disabling</span> <span class="nv">the</span> <span class="nv">worker</span> <span class="nv">in</span> <span class="nv">Apache</span> <span class="nv">Reverse</span> <span class="nv">Proxies</span>
<span class="nv">local_action</span>: <span class="nv">shell</span> <span class="o">/</span><span class="nv">usr</span><span class="o">/</span><span class="nv">bin</span><span class="o">/</span><span class="nv">curl</span> <span class="o">-</span><span class="nv">k</span> <span class="o">--</span><span class="nv">user</span> ${<span class="nv">apache_rp_user</span>}:${<span class="nv">apache_rp_password</span>} <span class="s2">"</span><span class="s">https://${item}/${apache_rp_balancer_manager_uri}?b=${apache_rp_backend_cluster}&w=${apache_rp_backend_url}&nonce=$(curl -k --user ${apache_rp_user}:${apache_rp_password} https://${item}/${apache_rp_balancer_manager_uri} |grep nonce|tail -n 1|cut -f 3 -d '&'|cut -f 2 -d '='|cut -f 1 -d '</span><span class="s2">"</span><span class="s1">'</span><span class="s">)&dw=Disable"</span>
<span class="nv">with_items</span>: ${<span class="nv">groups</span>.${<span class="nv">apache_rp_group</span>}}
<span class="o">-</span> <span class="nv">name</span>: <span class="nv">Waiting</span> <span class="mi">20</span> <span class="nv">seconds</span> <span class="nv">to</span> <span class="nv">be</span> <span class="nv">sure</span> <span class="nv">no</span> <span class="nv">traffic</span> <span class="nv">is</span> <span class="nv">being</span> <span class="nv">sent</span> <span class="nv">anymore</span> <span class="nv">to</span> <span class="nv">that</span> <span class="nv">worker</span> <span class="nv">backend</span> <span class="nv">node</span>
<span class="k">pause</span>: <span class="nv">seconds</span><span class="o">=</span><span class="mi">20</span>
</pre></div>
<p>The interesting bit is the with_items one : it will use the
apache_rp_group variable to know which apache servers are used
upstream (assuming you can have multiple nodes/clusters) and will play
that command for every host in the list obtained from the inventory !</p>
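<p>For readability, here is the same logic as that curl one-liner, split into steps as a standalone shell sketch. All names (host, URI, credentials) and the sample HTML line are hypothetical placeholders, and the real curl calls are left commented out :</p>

```shell
#!/bin/bash
# Hypothetical values, matching the variables used in the task above
rp_node="apache-node-1"
uri="balancer-manager-hidden-and-redirected"
auth="my-admin-account:my-beautiful-pass"

# Step 1 : the balancer-manager page embeds a per-session 'nonce' token in
# its links ; in real life you would fetch the page with :
#   curl -k --user "$auth" "https://${rp_node}/${uri}"
# Here we use a sample line shaped like the real page output :
page='<a href="/balancer-manager?b=cluster&w=https://jboss-1:8443&nonce=abc-123">'

# Step 2 : extract the nonce, with the exact same grep/cut chain as the task
nonce=$(echo "$page" | grep nonce | tail -n 1 | cut -f 3 -d '&' | cut -f 2 -d '=' | cut -f 1 -d '"')
echo "$nonce"   # the per-session token

# Step 3 : with the nonce known, disabling a worker is one authenticated GET :
#   curl -k --user "$auth" "https://${rp_node}/${uri}?b=<cluster>&w=<worker>&nonce=${nonce}&dw=Disable"
```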
<p>We can now, in the "rolling-updates" playbook, just call the previous
tasks (assuming we saved it as ../tasks/apache-disable-worker.yml) :</p>
<div class="highlight"><pre><span></span><span class="o">---</span>
<span class="o">-</span> <span class="nv">hosts</span>: <span class="nv">jboss</span><span class="o">-</span><span class="nv">cluster</span>
<span class="nv">serial</span>: <span class="mi">1</span>
<span class="nv">user</span>: <span class="nv">root</span>
<span class="nv">tasks</span>:
<span class="o">-</span> <span class="k">include</span>: ..<span class="o">/</span><span class="nv">tasks</span><span class="o">/</span><span class="nv">apache</span><span class="o">-</span><span class="nv">disable</span><span class="o">-</span><span class="nv">worker</span>.<span class="nv">yml</span>
<span class="o">-</span> <span class="nv">etc</span><span class="o">/</span><span class="nv">etc</span> ...
<span class="o">-</span> <span class="nv">wait_for</span>: <span class="nv">port</span><span class="o">=</span><span class="mi">8443</span> <span class="nv">state</span><span class="o">=</span><span class="nv">started</span>
<span class="o">-</span> <span class="k">include</span>: ..<span class="o">/</span><span class="nv">tasks</span><span class="o">/</span><span class="nv">apache</span><span class="o">-</span><span class="nv">enable</span><span class="o">-</span><span class="nv">worker</span>.<span class="nv">yml</span>
</pre></div>
<p>Note also the serial: 1 in that playbook : it ensures the rolling update is
applied to one jboss node at a time. But wait ! As you've seen, we still need to declare some variables :
let's do that in the inventory, under group_vars and host_vars !</p>
<p>group_vars/jboss-cluster :</p>
<div class="highlight"><pre><span></span><span class="o">#</span> <span class="n">Apache</span> <span class="n">reverse</span> <span class="n">proxies</span> <span class="n">settins</span>
<span class="n">apache_rp_group</span><span class="p">:</span> <span class="n">apache</span><span class="o">-</span><span class="k">group</span><span class="o">-</span><span class="mi">1</span>
<span class="n">apache_rp_user</span><span class="p">:</span> <span class="n">my</span><span class="o">-</span><span class="k">admin</span><span class="o">-</span><span class="n">account</span>
<span class="n">apache_rp_password</span><span class="p">:</span> <span class="n">my</span><span class="o">-</span><span class="n">beautiful</span><span class="o">-</span><span class="n">pass</span>
<span class="n">apache_rp_balancer_manager_uri</span><span class="p">:</span>
<span class="n">balancer</span><span class="o">-</span><span class="n">manager</span><span class="o">-</span><span class="n">hidden</span><span class="o">-</span><span class="k">and</span><span class="o">-</span><span class="n">redirected</span>
</pre></div>
<p>host_vars/jboss-1 :</p>
<div class="highlight"><pre><span></span><span class="n">apache_rp_backend_url</span> <span class="p">:</span> <span class="s1">'https://jboss1.myinternal.domain.org:8443'</span>
<span class="n">apache_rp_backend_cluster</span> <span class="p">:</span> <span class="n">nameofmyclusterdefinedinapache</span>
</pre></div>
<p>Now when we'll use that playbook, we'll have a local action that will
interact with the balancer manager to disable that backend node while we
do maintenance.</p>
<p>I let you imagine (and create) a ../tasks/apache-enable-worker.yml file
to enable it (which you'll call at the end of your playbook).</p>Automatic laptop backup with NetworkManager (and correct selinux policies ...)2013-03-30T16:49:00+01:002013-03-30T16:49:00+01:00Fabian Arrotintag:arrfab.net,2013-03-30:/posts/2013/Mar/30/automatic-laptop-backup-with-networkmanager-and-correct-selinux-policies/<p>Those days, almost everyone uses a laptop as his primary (work)station :
I don't remember when I was using something else than a laptop for both
work and home usage. I admit that I'm using what I'll describe in the
following sentences for quite some time, but it seems some people I
spoke to don't know what can be done around
<a href="http://projects.gnome.org/NetworkManager/">NetworkManager</a>, and because
I encountered a (small) issue with that process (because of updated
selinux policies), I thought it would be a good time to speak about it.</p>
<p>Let me first discuss a (little) bit about NetworkManager : almost
everyone (using CentOS/Fedora or other distributions) knows what it's
all about : helping you to quickly switch from one network to another,
that network being a wired one, a Wifi hotspot, or even a 3G connection
through your 3G usb modem or your smartphone being used as a modem, etc,
etc .... That's the "visible" part of NetworkManager. While some people
don't seem to like it, I admit myself that I really appreciate it and I
use it on a daily basis for \$work and \$home usage (switching from
wired to wireless, and so on). A quick read in the <a href="http://linux.die.net/man/8/networkmanager">NetworkManager man
page</a> shows that you can
"script" events based on the actual status of your network interface :
basically all executables scripts found by NetworkManager under
/etc/NetworkManager/dispatcher.d/ will be executed on network change.
When I discovered that (was quite some time ago now ...), I decided that
it would be good to launch backup script for my laptop, depending on the
network my laptop is connected, and using different profiles. For
example, (the "head" of) a simple script can look like :</p>
<table class="highlighttable"><tr><td class="linenos"><div class="linenodiv"><pre>1
2
3
4
5
6
7</pre></div></td><td class="code"><div class="highlight"><pre><span></span><span class="ch">#!/bin/bash </span>
<span class="nv">IF</span><span class="o">=</span><span class="nv">$1</span>
<span class="nv">STATUS</span><span class="o">=</span><span class="nv">$2</span>
<span class="k">if</span> <span class="o">[[</span> <span class="s2">"$IF"</span> <span class="o">=</span> <span class="s2">"eth0"</span> <span class="o">&&</span> <span class="s2">"$STATUS"</span> <span class="o">=</span> <span class="s2">"up"</span> <span class="o">]]</span> <span class="p">;</span> <span class="k">then</span>
<span class="nv">NET</span><span class="o">=</span><span class="se">$</span><span class="o">(</span>/sbin/ip -4 route show dev eth0<span class="p">|</span>awk <span class="s1">'{print $1}'</span><span class="p">|</span>grep -v default<span class="o">)</span>
<span class="k">if</span> <span class="o">[</span> <span class="s2">"$NET"</span> <span class="o">=</span> <span class="s2">"192.168.2.0/24"</span> <span class="o">]</span> <span class="p">;</span> <span class="k">then</span> <span class="se">#</span> and now the rest up to you ....
</pre></div>
</td></tr></table>
<p>You've got the idea, so it's now just a matter of writing the whole
script. One thing that I like when writing some small scripts is the
fact that I can be notified on my laptop when something happens (or
doesn't, because of errors). I also quite often use notify-send for
that, but because all scripts under dispatcher.d are executed as
root, I prefer from there "jumping" to my user account with a "su -
$my_user_name -c $my_backup_script.sh".</p>
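<p>A minimal sketch of such a dispatcher script, putting the pieces together (interface test, network test, then the jump to the user session). The script name, user name and the profile logic are assumptions for illustration only :</p>

```shell
#!/bin/bash
# Sketch of /etc/NetworkManager/dispatcher.d/90-backup (hypothetical name).
# NetworkManager calls dispatcher scripts with interface and event as args.
IF="$1"
STATUS="$2"

pick_profile() {
  # Decide which backup profile to run for a given interface/status/network
  local iface="$1" status="$2" net="$3"
  if [ "$iface" = "eth0" ] && [ "$status" = "up" ] && [ "$net" = "192.168.2.0/24" ]; then
    echo "home"
  else
    echo "none"
  fi
}

# In the real script you'd pass "$IF" "$STATUS" and the network detected
# with /sbin/ip ; hardcoded here so the sketch is self-contained :
profile=$(pick_profile "eth0" "up" "192.168.2.0/24")
echo "$profile"

# Dispatcher scripts run as root ; to show a desktop notification or run
# the backup as your user, jump to that account, e.g. :
#   su - my_user_name -c "DISPLAY=:0 notify-send 'Backup ($profile) started'"
```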
<p>Of course, my script needs several things to "interact" with my desktop
session : the DISPLAY to use and also the dbus-session I currently use
(because I also have to use gvfs-mount to automatically mount in my
gnome session some remote folders, like , (yeah, don't shoot me for
that, not my idea) CIFS shares for \$work).</p>
<p>So that backup script needs some variables like this :</p>
<div class="highlight"><pre><span></span><span class="n">export</span> <span class="n">DISPLAY</span><span class="o">=</span><span class="ss">":0"</span>
<span class="n">export</span> <span class="n">DBUS_SESSION_BUS_ADDRESS</span><span class="o">=</span><span class="err">$</span><span class="p">(</span><span class="n">cat</span> <span class="o">/</span><span class="n">proc</span><span class="o">/</span><span class="err">$</span><span class="p">(</span><span class="n">pidof</span> <span class="n">nautilus</span><span class="p">)</span><span class="o">/</span><span class="n">environ</span><span class="o">|</span><span class="n">tr</span> <span class="s1">'\0'</span> <span class="s1">'\n'</span><span class="o">|</span><span class="n">grep</span> <span class="n">DBUS_SESSION_BUS</span><span class="o">|</span><span class="n">cut</span> <span class="o">-</span><span class="n">f2</span><span class="o">-</span> <span class="o">-</span><span class="n">d</span> <span class="s1">'='</span><span class="p">)</span>
</pre></div>
<p>If I started that blog post, it's not to speak about NetworkManager at
first (well, I still thought that some people would benefit of those
unknown/unused dispatcher.d scripts ....) but because I encountered an
issue with the recent updates to CentOS 6.4 (and to be precise, newer
selinux-policy-3.7.19-195.el6_4.3.noarch package). So it was time to
dive into that issue, and *yes*, I run selinux everywhere, including
on my laptop ...</p>
<p>Long story short : because I use rsync for my backup scripts (why having
to reinvent the wheel ? ), I had to enable two selinux booleans :</p>
<div class="highlight"><pre><span></span><span class="n">setsebool</span> <span class="o">-</span><span class="n">P</span> <span class="n">rsync_client</span> <span class="mi">1</span>
<span class="n">setsebool</span> <span class="o">-</span><span class="n">P</span> <span class="n">rsync_export_all_ro</span> <span class="mi">1</span>
</pre></div>
<p>But that was still not enough. sealert/audit.log/audit2allow to the
rescue (read the <a href="http://wiki.centos.org/HowTos/SELinux">Selinux page</a>
on the <a href="http://wiki.centos.org">CentOS wiki</a>) and finally I created a
custom policy that suits my needs. Here it is :</p>
<div class="highlight"><pre><span></span> <span class="nt">module</span> <span class="nt">rsync-client</span><span class="p">.</span><span class="nc">pol</span> <span class="nt">1</span><span class="p">.</span><span class="nc">0</span><span class="o">;</span>
<span class="nt">require</span> <span class="p">{</span>
<span class="err">type</span> <span class="err">initrc_tmp_t</span><span class="p">;</span>
<span class="err">type</span> <span class="err">user_home_t</span><span class="p">;</span>
<span class="err">type</span> <span class="err">rsync_t</span><span class="p">;</span>
<span class="err">class</span> <span class="err">sock_file</span> <span class="err">getattr</span><span class="p">;</span>
<span class="err">class</span> <span class="err">file</span> <span class="err">write</span><span class="p">;</span>
<span class="p">}</span>
<span class="err">#</span><span class="o">=============</span> <span class="nt">rsync_t</span> <span class="o">==============</span>
<span class="nt">allow</span> <span class="nt">rsync_t</span> <span class="nt">initrc_tmp_t</span><span class="p">:</span><span class="nd">file</span> <span class="nt">write</span><span class="o">;</span>
<span class="nt">allow</span> <span class="nt">rsync_t</span> <span class="nt">user_home_t</span><span class="p">:</span><span class="nd">sock_file</span> <span class="nt">getattr</span><span class="o">;</span>
</pre></div>
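<p>For completeness, here is how such a custom module is typically compiled and loaded with the standard SELinux tools (a sketch assuming the source above was saved as rsync-client.te ; run as root, filenames are illustrative) :</p>

```shell
# Build a binary module from the type-enforcement source, package it, load it
checkmodule -M -m -o rsync-client.mod rsync-client.te
semodule_package -o rsync-client.pp -m rsync-client.mod
semodule -i rsync-client.pp
# Verify it's now in the list of loaded modules
semodule -l | grep rsync-client
```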
<p>Now, every time I connect my laptop to a (recognized) network, my laptop
auto-backups itself :</p>
<p><img alt="Backup with NM" src="/images/backup-NM.png"></p>Using Openssh as transport for Ansible instead of default paramiko2012-10-30T14:35:00+01:002012-10-30T14:35:00+01:00Fabian Arrotintag:arrfab.net,2012-10-30:/posts/2012/Oct/30/using-openssh-as-transport-for-ansible-instead-of-default-paramiko/<p>You've probably read that <a href="http://ansible.cc">Ansible</a> uses by default
<a href="http://www.lag.net/paramiko/">paramiko</a> for the SSH connections to the
host(s) you want to manage. But since 0.5 (quite some time ago now ...)
Ansible can use the plain openssh binary as a transport. Why ? Simple
reason : you sometimes have complex scenarios and you can for example
declare a <a href="http://www.arrfab.net/blog/?p=246">ProxyCommand</a> in your
~/.ssh/config if you need to use a JumpHost to reach the real host you
want to connect to. That's fine and I was using that for some of the
hosts I have to manage (specifying -c ssh when calling ansible, but
having switched to a bash alias containing that string and also -i
/path/to/my/inventory for those hosts).</p>
<p>It's great but it can lead to strange results if you don't have a full
look at what's happening in the background. Here is the situation I just
had yesterday : one of the remote hosts is reachable, but not on the standard
port (aka tcp/22), so an entry in my ~/.ssh/config was containing both
HostName (for the known FQDN of the host I had to point to, not the host
I wanted to reach) and Port.</p>
<blockquote>
<p>Host myremotehost<br>
HostName
my.public.name.or.the.one.from.the.bastion.with.iptables.rule<br>
Port 2222</p>
</blockquote>
<p>With such an entry, I was able to just "ssh user@myremotehost" and was
directly on the remote box. "ansible -c ssh -m ping myremotehost" was
happy, but in fact was not reaching the host I thought : running
"ansible -c ssh -m setup myremotehost -vvv" showed me that ansible_fqdn
(one of the ansible facts) wasn't the correct one but instead the host
in front of that machine (the one declared with HostName in
~/.ssh/config). The verbose mode showed me that even if you specify the
Port in your ~/.ssh/config, ansible will *always* use port 22 :</p>
<blockquote>
<p>&lt;myremotehost&gt; EXEC ['ssh', '-tt', '-q', '-o', 'AddressFamily=inet',
'-o', 'ControlMaster=auto', '-o',
'ControlPath=/tmp/ansible-ssh-%h-%p-%r', '-o',
'StrictHostKeyChecking=no', '-o', 'Port=22', '-o', 'User=root',
'myremotehost', 'mkdir -p
/var/tmp/ansible-1351603527.81-16435744643257 && echo
/var/tmp/ansible-1351603527.81-16435744643257']</p>
</blockquote>
<p>Hmm, quickly resolved : a quick discussion with people hanging out in the
#ansible IRC channel (on irc.freenode.net) explained the issue to me :
Port is *never* looked at in your \~/.ssh/config, even when
using -c ssh. The solution is to specify the port in your inventory file, as
a variable for that host :</p>
<blockquote>
<p>myremotehost ansible_ssh_port=9999</p>
</blockquote>
<p>In the same vein, you can also use ansible_ssh_host, which
corresponds to the HostName entry of your \~/.ssh/config.</p>
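<p>Combined, a single inventory line can thus replace the whole \~/.ssh/config entry as far as ansible is concerned (the host name and port below are of course hypothetical) :</p>

```shell
# hypothetical inventory entry combining both behavioural variables
cat > /tmp/inventory.example <<'EOF'
myremotehost ansible_ssh_host=bastion.example.com ansible_ssh_port=2222
EOF
# then : ansible -c ssh -i /tmp/inventory.example -m ping myremotehost
cat /tmp/inventory.example
```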
<p>Hope that it can save you time, if you encounter the same "issue" one
day ...</p>Ansible as an alternative to puppet/chef/cfengine and others ...2012-10-26T15:02:00+02:002012-10-26T15:02:00+02:00Fabian Arrotintag:arrfab.net,2012-10-26:/posts/2012/Oct/26/ansible-as-an-alternative-to-puppetchefcfengine-and-others/<p>I already know that i'll be criticized for this post, but i don't care
:-) . Strangely, my last blog post (which is *very* old ...) was about
a puppet dashboard, so why talk about another tool ? Well, first I
got a new job and some prerequisites have changed. I still like puppet
(and I'd even want to be able to use puppet, but that's another story
...) but I was faced with some constraints on a new
project. For that specific project, I had to configure a bunch of new
Virtual Machines (RHEL6) coming as OVF files. Problem number one was
that I couldn't alter or modify the base image, so I couldn't push packages
(from the distro or third-party repositories). The second issue was that I
couldn't install nor have a daemon/agent running on those machines. I had a
look at the different config tools available but they all require either
a daemon to be started, or at least extra packages to be
installed on each managed node (so no puppetd nor
puppetrun, and no way to invoke puppet directly through ssh, as
<a href="http://www.puppetlabs.com/">puppet</a> can't even be installed ; same for
<a href="http://saltstack.org/">saltstack</a>). That's why I decided to give
<a href="http://ansible.cc">Ansible</a> a try. It had been on my "to-test" list
for a long time and it really fit the bill for that
specific project and its constraints : it uses the already-in-place ssh
authorization, no packages have to be installed on the managed nodes, and,
last but not least, the learning curve is really thin (compared to
puppet and others, but that's my personal opinion/experience).</p>
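<p>To illustrate that thin learning curve, a first playbook in those 0.x days was hardly more than this (group and service names below are invented, and note the old 'action:' syntax of that era) :</p>

```shell
# write a minimal, 2012-era style playbook (content is hypothetical)
cat > /tmp/site.yml.example <<'EOF'
- hosts: webservers
  user: root
  tasks:
    - name: make sure ntpd is running
      action: service name=ntpd state=started
EOF
# then : ansible-playbook -c ssh -i /path/to/my/inventory /tmp/site.yml.example
cat /tmp/site.yml.example
```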
<p>The other good thing with Ansible is that you can start very easily and
then slowly add 'complexity' to your playbooks/tasks. I'm for example still
using a flat inventory file, but one already organized to reflect what
we can do in the future (hostnames included in groups, themselves
included in parent groups - aka nested groups). The same goes for variable
inheritance : variables are defined at the group level and down to the host
level, host variables overwriting those defined at the group level, etc ...</p>
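<p>As a sketch, such a flat-but-organized inventory (all group and host names invented) can already express nested groups and variable inheritance :</p>

```shell
# hypothetical inventory with nested groups and group/host variables ;
# the host-level http_port overrides the one set in [web:vars]
cat > /tmp/nested_inventory.example <<'EOF'
[paris-web]
web1.example.com http_port=8080

[brussels-web]
web2.example.com

[web:children]
paris-web
brussels-web

[web:vars]
http_port=80
EOF
cat /tmp/nested_inventory.example
```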
<p>The Yaml syntax is really easy to understand so you can quickly
have your first playbook running on a bunch of machines simultaneously
(thanks to paramiko/parallel ssh). The number of modules is smaller than
the number of puppet resources, but it is growing quickly. I also just tested
tying the execution of ansible playbooks to <a href="http://jenkins-ci.org/">Jenkins</a>
so that people not having access to the ansible
inventory/playbooks/tasks (stored in a vcs, subversion in my case) can
use it from a gui.. More to come on Ansible in the future</p>Puppet, Foreman and selinux on CentOS2012-02-21T15:23:00+01:002012-02-21T15:23:00+01:00Fabian Arrotintag:arrfab.net,2012-02-21:/posts/2012/Feb/21/puppet-foreman-and-selinux-on-centos/<p>We implemented <a href="http://puppetlabs.com">Puppet</a> as a configuration
management system at \$work, and Puppet is a great tool. Then I heard
about some dashboards that could be used on top of it. I've heard about
different dashboards (\$management_people *like* dashboards) like
<a href="http://puppetlabs.com/puppet/related-projects/dashboard/">Puppet-dashboard</a>
and <a href="http://theforeman.org/">Foreman</a>.</p>
<p>I was advised by several people to give Foreman a try and it's really
simple to install. Their
<a href="http://theforeman.org/projects/foreman/wiki/Installation_instructions">wiki</a>
covers the basic installation and there is even a <a href="http://yum.theforeman.org/">yum
repo</a> that can be used (Epel has to be
enabled too). As I have a small network to manage, I decided to set up
Foreman on the same host as the puppetmaster. Configuring /etc/foreman/* is
easy and missing parts can be configured just by looking at the Foreman
website wiki/FAQ. But trouble came when I enabled reports :
the puppetmasterd config was changed to include :</p>
<blockquote>
<p>[master]<br>
reports = store, foreman</p>
</blockquote>
<p>and the foreman.rb script (copied and modified from
/usr/share/foreman/extras/puppet/foreman/templates/foreman-report.rb.erb)
was placed in the correct /usr/lib/ruby/site_ruby/1.8/puppet/reports
dir. (Note : don't forget to update \$foreman_url).</p>
<p>But no reports were coming into Foreman. Hmmm ... the error message was :</p>
<blockquote>
<p>Report foreman failed: Could not send report to Foreman at
http://puppetmaster.mybeautifuldomain.com:3000/reports/create?format=yml:
Permission denied - connect(2)</p>
</blockquote>
<p>That was not an iptables issue, but a selinux one :</p>
<blockquote>
<p>type=AVC msg=audit(1329830711.788:28372): avc: denied {
name_connect } for pid=13144 comm="puppetmasterd" dest=3000
scontext=unconfined_u:system_r:puppetmaster_t:s0
tcontext=system_u:object_r:ntop_port_t:s0 tclass=tcp_socket</p>
</blockquote>
<p>Here is my locally generated selinux for Foreman :</p>
<blockquote>
<p>module foreman 1.0;</p>
<p>require {<br>
type puppetmaster_t;<br>
type http_port_t;<br>
type ntop_port_t;<br>
class tcp_socket name_connect;<br>
}</p>
<p>#============= puppetmaster_t ==============<br>
allow puppetmaster_t http_port_t:tcp_socket name_connect;<br>
allow puppetmaster_t ntop_port_t:tcp_socket name_connect;</p>
</blockquote>
<p>Things worked much better after I added my foreman.pp selinux module on
that host. If you don't know how to compile selinux custom policies,
please read the nice <a href="http://wiki.centos.org/HowTos/SELinux">Selinux page on the CentOS
wiki</a>, and especially the
<a href="http://wiki.centos.org/HowTos/SELinux#head-aa437f65e1c7873cddbafd9e9a73bbf9d102c072">"Manually customizing selinux
policies"</a>
section. Tools like sealert (from setroubleshoot-server package) and
audit2allow are really helpful when there is no pre-defined selinux
boolean that can be used.</p>
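<p>For reference, the usual workflow to generate such a local policy module from the audit log looks more or less like this (to be run as root ; the 'foreman' module name is just the one used above) — consider it a sketch rather than something to paste blindly :</p>

```shell
# extract the AVC denials for puppetmasterd and turn them into a local
# policy module : this produces both foreman.te (source) and foreman.pp
grep puppetmasterd /var/log/audit/audit.log | audit2allow -M foreman
# review foreman.te first, then load the compiled module
semodule -i foreman.pp
```

<p>Always read the generated .te file before loading it : audit2allow happily allows everything it saw denied, which is not always what you want.</p>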
<p>Hope this helps .. and now going back to enjoying reports, including error
reports by mail (nice feature)</p>CentOS Automated QA explained ...2012-01-09T15:41:00+01:002012-01-09T15:41:00+01:00Fabian Arrotintag:arrfab.net,2012-01-09:/posts/2012/Jan/09/centos-automated-qa-explained/<p>While <a href="http://centosnow.blogspot.com/2012/01/centos-in-2012.html">Johnny was
explaining</a>
to the rest of the world how CentOS 6.1 and 6.2 were released, I
received quite some questions about the QA tests and how they were
performed. Well, let me explain in a few words how it's now organized.
Previously, there was only a Tests Matrix that was shared between the QA
team members : each member of that group had access to the QA bits,
could download/rsync the complete tree (with ISO images too) and do his
tests, and then reported the results in one way or the other (irc,
mailing-list). Of course it didn't scale out very well. Too much manual
intervention, and when someone was busy with personal (or work related)
issues, no feedback was coming back to the CentOS devteam.</p>
<p>So during <a href="http://archive.fosdem.org/2011/">Fosdem 2011</a>, I had a
meeting with <a href="http://www.karan.org/blog/index.php">Karanbir</a> to see how
we could solve that issue and put automation in the QA loop. We
dedicated some (old) machines to be used only for QA, and in a separate
VLAN. Basically, here are the steps from the built bits to the QA
reports.</p>
<ul>
<li>The CentOS buildfarm (using the new build system called 'reimzul'
and using <a href="http://kr.github.com/beanstalkd/">beanstalkd</a> as a
queuing system) pushes automatically each new tree to the dedicated
QA hardware</li>
<li>There is a rsync post-xfer script that is launched from there that
also uses beanstalkd and some workers (so we can scale out easily if
we add machines)</li>
<li>Each built and pushed tree/ISOs set has its own BuildTag (that is
used to identify what was tested and when)</li>
<li>Some tools (hosted in an internal Git repository) are then used to
deploy some Virtual Machines (actually a mix of BareMetal and VMs :
blade/Virtual Box/Xen/KVM) and send a report if the "deploy VM step"
failed (VMs are installed through ISO/pxe boot/virt-install through
http/ftp/nfs methods)</li>
<li>A test suite (that we call the t_functional stack) is then copied
from the local git repo to those newly deployed machines and each
test is then run. From that point a report is automatically
sent to the QA mailing-list so that people can see the results,
while the full log is available on QA head node.</li>
</ul>
<p>The fact that we use two separate git repositories (one for the
deploy/provisioniong functions and another one for the tests themselves)
was really a good thing, as it allowed some people to include their
tests in the t_functional stack. For example,
<a href="http://athmane.wordpress.com/">Athmane</a> did a great job writing/fixing
some tests used for 6.1 and 6.2.</p>
<p>More information to come later about how you (yes, *you*) can
participate and contribute such CentOS QA auto-tests !</p>Monitoring DRBD resources with Zabbix on CentOS2011-09-07T13:10:00+02:002011-09-07T13:10:00+02:00Fabian Arrotintag:arrfab.net,2011-09-07:/posts/2011/Sep/07/monitoring-drbd-resources-with-zabbix-on-centos/<p>We use <a href="http://www.drbd.org">DRBD</a> at work on several CentOS 5.x nodes
to replicate data between our two computer rooms (in different buildings
but linked with Gigabit fiber). It's true that you can know if something
wrong happens at the DRBD level if you have configured the correct
'handlers' and the appropriate notification scripts (have a look for
example at the <a href="http://www.drbd.org/users-guide/s-configure-split-brain-behavior.html#s-split-brain-notification">Split Brain notification
script</a>).
Those scripts are 'cool' but what if you could 'plumb' the DRBD status
into your actual monitoring solution ? We
use <a href="http://www.zabbix.com">Zabbix</a> at \$work and I was asked to
centralize events from different sources, and Zabbix doesn't support
monitoring DRBD devices directly. But one of the cool things with Zabbix
is that it's like a <a href="http://www.lego.com">Lego</a> system : you can extend
what it does if you know what to query and how to do it. If you want to
monitor DRBD devices, the best that Zabbix can do (on the agent side,
when using the zabbix agent running as a simple zabbix user with
/sbin/nologin as shell) is to query and
parse <a href="http://www.drbd.org/users-guide/ch-admin.html#s-proc-drbd">/proc/drbd</a>.
So here we go : we need to modify the Zabbix agent to use <a href="http://www.zabbix.com/documentation/1.8/manual/config/user_parameters#flexible_user_parameters">Flexible
User
Parameters</a>,
like this (in /etc/zabbix/zabbix_agentd.conf) :</p>
<blockquote>
<p>UserParameter=drbd.cstate[*],cat /proc/drbd |grep \$1:|tr [:blank:]
\\n|grep cs|cut -f 2 -d ':'|grep Connected |wc -l<br>
UserParameter=drbd.dstate[*],cat /proc/drbd |grep \$1:|tr [:blank:]
\\n|grep ds|cut -f 2 -d ':'|cut -f 1 -d '/'|grep UpToDate|wc -l</p>
</blockquote>
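<p>To see what those pipelines actually return, you can replay them against a sample line copied from a healthy /proc/drbd (the sample below is illustrative ; on a real agent the input obviously comes from /proc/drbd itself) :</p>

```shell
# sample line for resource 0, as seen in /proc/drbd on a healthy node
sample=' 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r----'
# same pipelines as the UserParameter definitions above, fed with the sample
cs=$(echo "$sample" | grep '0:' | tr '[:blank:]' '\n' | grep cs | cut -f 2 -d ':' | grep Connected | wc -l)
ds=$(echo "$sample" | grep '0:' | tr '[:blank:]' '\n' | grep ds | cut -f 2 -d ':' | cut -f 1 -d '/' | grep UpToDate | wc -l)
echo "cs=$cs ds=$ds"
```

<p>Both counts come out as 1 here (Connected and UpToDate), which is exactly what the triggers will test for ; anything else means the resource needs attention.</p>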
<p>We just need to inform the Zabbix server of the actual Connection State
(cs) and Disk State (ds). For that we just need to create
Application/Items and Triggers .. but what if we could just create a
<a href="http://www.zabbix.com/documentation/1.8/manual/config/host_templates">Zabbix
Template</a>
so that we can simply link that template to a DRBD host ? I attach to this
post the DRBD Zabbix template (an xml file that you can import in your
zabbix setup) and you can just link it to your drbd hosts. Here is the
<a href="http://www.arrfab.net/blog/wp-content/uploads/2011/09/zabbix-drbd.xml">link</a>.
That XML file contains both Items (cstate and dstate) and the
associated triggers. Of course you can extend it, especially if you use
multiple resources / drbd disks. Because we used the Flexible
parameters, you can for example create a new Zabbix item
(based on the template) and monitor the /dev/drbd1 device just by using
the drbd.dstate[1] key in that zabbix item.</p>
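<p>If you prefer to create the triggers yourself rather than importing the XML file, a trigger expression in the Zabbix 1.8 syntax would look something like this (the 'DRBD' host/template name is hypothetical) :</p>

```
{DRBD:drbd.cstate[0].last(0)}=0
```

<p>i.e. fire when the last value returned for resource 0 is 0 (not Connected) ; the same pattern with drbd.dstate[0] covers the disk state.</p>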
<p>Happy Monitoring and DRBD'ing ...</p>CentOS 6 LiveCD and LiveDVD tools2011-07-28T14:29:00+02:002011-07-28T14:29:00+02:00Fabian Arrotintag:arrfab.net,2011-07-28:/posts/2011/Jul/28/centos-6-livecd-and-livedvd-tools/<p>The number of questions I received from different people regarding the
LiveCD/LiveDVD tools and the kickstart files used to produce the ISO
images was quite "high". People looking at the usual place will be
disappointed because we haven't used the original <a href="https://projects.centos.org/svn/livecd/">livecd subversion
repo</a> to produce the actual
Live media. So in the meantime, if people want to use the
livecd-creator tool, they can fetch the SRPM here :
<a href="http://people.centos.org/arrfab/CentOS6/SRPMS/livecd-tools-0.3.6-1.el6.src.rpm">http://people.centos.org/arrfab/CentOS6/SRPMS/livecd-tools-0.3.6-1.el6.src.rpm</a>.
I've also copied the two kickstart files used for both LiveCD and
LiveDVD here : <a href="http://people.centos.org/arrfab/CentOS6/LiveCD-DVD/">http://people.centos.org/arrfab/CentOS6/LiveCD-DVD/</a></p>
<p>Hope that people will be satisfied .. it was faster to push those files there
than to change the whole 'used behind the scene' infra</p>CentOS 6 ISO spins2011-07-26T19:39:00+02:002011-07-26T19:39:00+02:00Fabian Arrotintag:arrfab.net,2011-07-26:/posts/2011/Jul/26/centos-6-iso-spins/<p>As you've probably seen if you're subscribed to the <a href="http://lists.centos.org/pipermail/centos-announce/2011-July/017658.html">CentOS announce
list</a>
(or if you just rsync/mirror the <a href="http://mirror.centos.org/centos/">whole CentOS
tree</a>), the CentOS 6.0 LiveCD was
released last Monday. This is the first of our CentOS custom spins !
While I'm writing this blog post, the CentOS 6.0 LiveDVD is on its way
to the external mirrors too and will normally be announced shortly (when
enough mirrors have it) ! It will be the second CentOS respin and
we have more in the pipe for you ! As Karanbir announced it in the <a href="http://lists.centos.org/pipermail/centos-announce/2011-July/017645.html">6.0
release
mail</a>
, we also planned to provide two other spins : the minimal one and the
lws one. The good news is that the minimal one is almost finished and being
intensively tested. If things don't change (or no bugs appear during QA),
the iso image will be only \~250Mb for the i386 arch and \~300Mb for the
x86_64 one. It's meant to be used as a really basic CentOS system (even
fewer packages than the @core group on a normal install if used with the
proper kickstart invocation !) : only 186 packages on your disk. You'll
have a very basic CentOS system with only openssh-server and yum. We are
even testing the luks/lvm/md devices combination to be sure to meet your
needs.</p>
<p>The next custom respin (code name LWS - for LightWeight Server edition)
will still be a CD iso image (but pushed to the limit) that will include
basic server packages, more or less in the spirit of the ServerCD that
existed during the CentOS 4.x days ... That one still needs to be
finished, although work has already been done.</p>
<p>Stay tuned for more information when it is pushed to the mirrors and
announced .. all that at the same time as the 6.1 and 5.7 (parallel)
builds ..Interesting times ! :-)</p>CentOS 6 on the iMac2011-07-25T09:58:00+02:002011-07-25T09:58:00+02:00Fabian Arrotintag:arrfab.net,2011-07-25:/posts/2011/Jul/25/centos-6-on-the-imac/<p>I decided to put CentOS 6 on my iMac. It was running in dual-boot mode
with OSX and CentOS 5. Installing through the network (from an NFS share)
was really easy and no bugs were encountered, but at the end of the install,
when it asked me to reboot, nothing : after having selected the Linux
partition in the <a href="http://refit.sf.net">rEFIt</a> boot manager screen,
nothing. hmm ....</p>
<p>I restarted the install process to see if at least anaconda tried to
install grub on the first sector of the /boot partition and not in the
MBR, but that was correctly seen and chosen by anaconda. So the issue
was somewhere else. I had a /boot ext3 partition (on /dev/sda3) while
/dev/sda4 is the VolumeGroup in which I had defined my Logical Volumes.
There was a big rewrite in Anaconda for the storage part and el6/CentOS
6 suffers from one bug found on the upstream bugzilla when having to
deal with Apple computers *and* using rEFIt at the same time :
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=505817">https://bugzilla.redhat.com/show_bug.cgi?id=505817</a></p>
<p>Long story short : to have CentOS 6 running on your iMac (if using rEFIt
as the EFI boot manager) :</p>
<ul>
<li>install CentOS 6 as usual (check that grub will be installed on the
first sector of /boot and not in the MBR , normally correctly
seen/proposed by Anaconda)</li>
<li>on the first reboot, enter the rEFIt shell and launch 'gptsync' (it
will say that it has to 'sync' the gpt, accept the sync)</li>
<li>select now the Linux partition : it will fail with a black screen</li>
<li>power down the iMac and start it up : select Linux in the refit boot
manager and enjoy your CentOS 6 installation on the iMac</li>
</ul>Modifying Anaconda behaviour without rebuilding the whole install media2011-06-11T14:54:00+02:002011-06-11T14:54:00+02:00Fabian Arrotintag:arrfab.net,2011-06-11:/posts/2011/Jun/11/modifying-anaconda-behaviour-without-rebuilding-the-whole-install-media/<p>One thing that I had to have a look at (during CentOS 6 QA), is the way
<a href="http://fedoraproject.org/wiki/Anaconda">anaconda</a> (the Red
Hat/Fedora/CentOS installer) pre-defines some 'tasks'. People used to
those kinds of installs know what I'm talking about : the "Minimal",
"Desktop", "Basic Server" and other choices you have during setup. From
that first selection, you can decide (or not) to customize the software
selection which then leads you to a screen containing categories /
groups / packages defined in the comps.xml file present under /repodata
on the tree/install media.</p>
<p>If you don't 'see' which screen I'm talking about, a small screenshot of
the upcoming CentOS 6 will explain better than words :</p>
<p><img alt="Anaconda in CentOS" src="/images/anaconda-centos.png" title="anaconda-centos"></p>
<p>Those pre-defined tasks aren't defined in the comps.xml file but rather
at build time within anaconda. Fine, but how can you 'modify' anaconda
behaviour and test it without having to patch the anaconda SRPM, rebuild it
and launch a new build to generate the tree and install media ? Easy,
thanks to a simple file on the tree !</p>
<p>People wanting to modify anaconda behaviour at install time without
having to regenerate the whole tree can just create a small file
(updates.img) and put it in the /images directory of the tree. Anaconda
(when installing over the network, http/ftp/nfs) always tries to see if an
updates.img file exists and, if so, uses it. Fine, so I could easily try
to "patch" it without having to modify the whole tree.</p>
<p>Creating that updates.img (it's just an ext2 filesystem on top) is really
easy :</p>
<div class="highlight"><pre>dd if=/dev/zero of=/tmp/updates.img bs=1k count=1440
losetup `losetup -f` /tmp/updates.img
losetup -a | grep updates.img
mkfs.ext2 /dev/loop3  # was loop3 in my case
mkdir /mnt/loop ; mount -o loop /tmp/updates.img /mnt/loop/ ; ll /mnt/loop
drwx------. 2 root root 12288 Jun 11 15:43 lost+found
</pre></div>
<p>From now on, it's just a matter of putting in the new files that you want to
test, which will "overwrite" the default anaconda ones at run-time.</p>
<p>(in our current example, it was the installclasses/rhel.py that needed
to be modified, so I just had to create an installclasses dir and drop my
version of rhel.py in there on the loop device)</p>
<p>When you're done, umount the updates.img, copy it to
/path/to/your/install/tree/images, restart an http install (verify of
course that permissions and selinux contexts are correct !) and enjoy !</p>
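<p>The finishing steps described above can be sketched as follows. The paths under /tmp are stand-ins so the snippet is safe to run as-is; the root-only steps (umount, restorecon) are shown as comments :</p>

```shell
# Stand-in directories; in real life $loopdir is /mnt/loop and
# $tree is /path/to/your/install/tree
loopdir=/tmp/loopdemo
tree=/tmp/treedemo
mkdir -p "$loopdir/installclasses" "$tree/images"

# drop the modified install class onto the (mounted) updates.img
printf '# patched install class\n' > "$loopdir/installclasses/rhel.py"

# umount /mnt/loop                                # root-only, real step
: > /tmp/updates.img                              # stand-in for the real image
cp /tmp/updates.img "$tree/images/"
# restorecon -v "$tree/images/updates.img"        # fix the selinux context
ls "$tree/images"                                 # → updates.img
```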
<p>Easier and faster. Thanks to the Anaconda team, who decided to allow
modifying the anaconda behaviour at run-time with a simple file :-)</p>IPV6 world day !2011-06-09T21:24:00+02:002011-06-09T21:24:00+02:00Fabian Arrotintag:arrfab.net,2011-06-09:/posts/2011/Jun/09/ipv6-world-day/<p>It seems quite a lot of people blogged about <a href="http://www.worldipv6day.org/">IPV6
day</a> . It's true that it's always a good
idea to speak about IPV6. I'm using IPV6 natively on my server hosted
at <a href="http://www.hetzner.de/en/">Hetzner</a> (they offer a /64 IPV6 subnet,
which is more than enough for a <a href="http://www.arrfab.net/blog/?p=271">CentOS server hosting several xen domU
Virtual Machines</a>). At home, that's
another story. I use a <a href="http://ipv6.he.net/">HE.net</a> <a href="http://www.tunnelbroker.net/">free
tunnel</a> to be able to reach ipv6 hosts.
Yes, even in 2011, you still have to use tunnels to use IPV6 ! Why ?
that's indeed a good question. Even if my CentOS ipv6 tunnel
end-point/router/radvd at home is working correctly, I decided to ask my
belgian provider if they had plans on implementing native IPV6. Well,
not for my home connection, as I already know that
<a href="http://www.belgacom.be/private/hbsres/jsp/dynamic/homepage.jsp">Belgacom</a>
(the biggest provider in belgium) doesn't support IPV6 on their BBOX2
modems that they give to customers when ordering a DSL connection at
home (<em>while i'm talking about Belgacom, please stop sending me direct
advertisement to my mailbox - the real one and not the electronic one -
with your invoices about a service - VDSL2/BelgacomTV - that you
*can't* offer to all your customers ... thanks</em>) . So I decided to …</p><p>It seems quite a lot of people blogged about <a href="http://www.worldipv6day.org/">IPV6
day</a> . It's true that it's always a good
idea to speak about IPV6. I'm using IPV6 natively on my server hosted
at <a href="http://www.hetzner.de/en/">Hetzner</a> (they offer a /64 IPV6 subnet,
which is more than enough for a <a href="http://www.arrfab.net/blog/?p=271">CentOS server hosting several xen domU
Virtual Machines</a>). At home, that's
another story. I use a <a href="http://ipv6.he.net/">HE.net</a> <a href="http://www.tunnelbroker.net/">free
tunnel</a> to be able to reach ipv6 hosts.
Yes, even in 2011, you still have to use tunnels to use IPV6 ! Why ?
that's indeed a good question. Even if my CentOS ipv6 tunnel
end-point/router/radvd at home is working correctly, I decided to ask my
belgian provider if they had plans on implementing native IPV6. Well,
not for my home connection, as I already know that
<a href="http://www.belgacom.be/private/hbsres/jsp/dynamic/homepage.jsp">Belgacom</a>
(the biggest provider in belgium) doesn't support IPV6 on their BBOX2
modems that they give to customers when ordering a DSL connection at
home (<em>while i'm talking about Belgacom, please stop sending me direct
advertisement to my mailbox - the real one and not the electronic one -
with your invoices about a service - VDSL2/BelgacomTV - that you
*can't* offer to all your customers ... thanks</em>) . So I decided to ask
their 'professional services' because we have two 'professional and
business' lines that we used at \$work. Long story short (to avoid
explaining how many emails/cases I had to send/open to get an answer) :
"no, even on the business lines we can't support IPV6 and we have no
plans (*sic*, I hope that guy was just kidding or probably doesn't
know the real answer ..) nor dates about future implementation of the
IPV6 services/connectivity " ..</p>
<p>Nice .. now /me goes back to CentOS QA mode ...</p>Bye-Bye Nokia .. we loved you, until now2011-02-12T10:25:00+01:002011-02-12T10:25:00+01:00Fabian Arrotintag:arrfab.net,2011-02-12:/posts/2011/Feb/12/bye-bye-nokia-we-loved-you-until-now/<p>I couldn't believe what I read yesterday .. but yes, Nokia has decided
to sign a special partnership with Microsoft to load WM7 on their
mobile phones in the future .. The end of Symbian and Meego/Maemo ..
sad, sad .. time for me to think about my next mobile phone. Official
Nokia announcement here
:<a href="http://conversations.nokia.com/nokia-strategy-2011/">http://conversations.nokia.com/nokia-strategy-2011/</a></p>What do you want to see ? CentOS 5.6 or CentOS 6.0 ?2011-01-13T22:17:00+01:002011-01-13T22:17:00+01:00Fabian Arrotintag:arrfab.net,2011-01-13:/posts/2011/Jan/13/what-do-you-want-to-see-centos-5-6-or-centos-6-0/<p>As you probably know (if you are interested in the Enterprise Linux
market), <a href="https://www.redhat.com/archives/rhelv5-announce/2011-January/msg00000.html">Red Hat released earlier today
5.6</a>
. So automatically some CentOS QA team members started discussing
that in the appropriate IRC channel. As CentOS 6.0 isn't (yet) released
nor ready, the discussion was about putting 5.6 build & release as
priority number one or not. Karanbir on his side <a href="http://twitter.com/CentOS/status/25505187548368897">asked on
Twitter</a> about
thoughts on the matter, and a discussion was started too on the
<a href="http://lists.centos.org/pipermail/centos-devel/2011-January/006511.html">centos-devel</a>
list about that topic. My personal opinion (shared by some people
too) is to give 5.6 the priority, for quite some reasons :</p>
<ul>
<li>The centos 5.x install base is there while there is (obviously) no
centos 6 install base.</li>
<li>So those people having machines in production, exposed to the net
(etc, etc ...) would prefer having their machines patched and
up2date (security first !)</li>
<li>People running CentOS 5.x on servers and willing to install php53
packages, now officially included</li>
<li>On the build side, the el5 build process is clearly identified and
known since 2007 : packages with branding issues are already
identified and patches/artwork is already there, meaning that it
will be <span style="text-decoration: line-through;">probably</span>
(no, surely !) faster to …</li></ul><p>As you probably know (if you are interested in the Enterprise Linux
market), <a href="https://www.redhat.com/archives/rhelv5-announce/2011-January/msg00000.html">Red Hat released earlier today
5.6</a>
. So automatically some CentOS QA team members started discussing
that in the appropriate IRC channel. As CentOS 6.0 isn't (yet) released
nor ready, the discussion was about putting 5.6 build & release as
priority number one or not. Karanbir on his side <a href="http://twitter.com/CentOS/status/25505187548368897">asked on
Twitter</a> about
thoughts on the matter, and a discussion was started too on the
<a href="http://lists.centos.org/pipermail/centos-devel/2011-January/006511.html">centos-devel</a>
list about that topic. My personal opinion (shared by some people
too) is to give 5.6 the priority, for quite some reasons :</p>
<ul>
<li>The centos 5.x install base is there while there is (obviously) no
centos 6 install base.</li>
<li>So those people having machines in production, exposed to the net
(etc, etc ...) would prefer having their machines patched and
up2date (security first !)</li>
<li>People running CentOS 5.x on servers and willing to install php53
packages, now officially included</li>
<li>On the build side, the el5 build process is clearly identified and
known since 2007 : packages with branding issues are already
identified and patches/artwork is already there, meaning that it
will be <span style="text-decoration: line-through;">probably</span>
(no, surely !) faster to have 5.6 out of the door than 6</li>
<li>Same rule for the QA process : people from the QA team can "blindly"
focus on their previous tests, and just have a look eventually at
some newer packages (a few, like php53 but not that much in
comparison with el6)</li>
</ul>
<p>Please notice that it's still my <em>personal opinion</em> on that question and
isn't the (to be defined) official CentOS position.</p>CentOS team @ Fosdem 20112011-01-10T15:49:00+01:002011-01-10T15:49:00+01:00Fabian Arrotintag:arrfab.net,2011-01-10:/posts/2011/Jan/10/centos-team-fosdem-2011/<p>Some members of the CentOS team will be present at Fosdem. Feel
free to come to our booth just to discuss ...</p>
<p>More information on <a href="http://wiki.centos.org/Events/Fosdem2011">our
wiki</a> and on the
<a href="http://www.fosdem.org/">Fosdem</a> website</p>Enabling IPv6 for guests on an Hetzner CentOS 5.5 xen dom02010-12-31T11:28:00+01:002010-12-31T11:28:00+01:00Fabian Arrotintag:arrfab.net,2010-12-31:/posts/2010/Dec/31/enabling-ipv6-for-guests-on-an-hetzner-centos-5-5-xen-dom0/<p>I was playing with IPv6 in the last days (started to use a tunnel from
<a href="http://www.tunnelbroker.net/">he.net</a> as my current ISP doesn't
support native IPv6 and doesn't plan to support it in a short time) and
wanted to add IPv6 to some of my CentOS Xen domU's running on a
<a href="http://www.hetzner.de">Hetzner</a> box. This part was a little bit more
difficult than for a standard network. Due to their internal network
design, Hetzner <a href="http://translate.google.be/translate?u=http%3A%2F%2Fwiki.hetzner.de%2Findex.php%2FZusaetzliche_IP-Adressen&sl=de&tl=en&hl=&ie=UTF-8">only
allow</a>
'routed' xen networks and not standard 'bridged' ones. What I used for
IPv4 was just binding the public IPs on the dom0 and configured all my
iptables rules there to forward/SNAT/DNAT to the appropriate domU. But
you know that NAT is gone with IPv6 so normally it's supposed to be
easier, right ? Well, yes and no, depending on your network layout. Even
after having enabled ipv6 forwarding (net.ipv6.conf.all.forwarding=1 ),
I was just able to ping the dom0 but not the guests behind. Hmm, that
reminds me of the <a href="http://en.wikipedia.org/wiki/Proxy_arp">proxy ARP</a> that
was used for IPv4 but doesn't exist anymore for IPv6 (gone too ...) . ARP
was (more or less; not technically correct, but read the RFCs if you have
enough time) replaced by …</p><p>I was playing with IPv6 in the last days (started to use a tunnel from
<a href="http://www.tunnelbroker.net/">he.net</a> as my current ISP doesn't
support native IPv6 and doesn't plan to support it in a short time) and
wanted to add IPv6 to some of my CentOS Xen domU's running on a
<a href="http://www.hetzner.de">Hetzner</a> box. This part was a little bit more
difficult than for a standard network. Due to their internal network
design, Hetzner <a href="http://translate.google.be/translate?u=http%3A%2F%2Fwiki.hetzner.de%2Findex.php%2FZusaetzliche_IP-Adressen&sl=de&tl=en&hl=&ie=UTF-8">only
allow</a>
'routed' xen networks and not standard 'bridged' ones. What I used for
IPv4 was just binding the public IPs on the dom0 and configured all my
iptables rules there to forward/SNAT/DNAT to the appropriate domU. But
you know that NAT is gone with IPv6 so normally it's supposed to be
easier, right ? Well, yes and no, depending on your network layout. Even
after having enabled ipv6 forwarding (net.ipv6.conf.all.forwarding=1 ),
I was just able to ping the dom0 but not the guests behind. Hmm, that
reminds me of the <a href="http://en.wikipedia.org/wiki/Proxy_arp">proxy ARP</a> that
was used for IPv4 but doesn't exist anymore for IPv6 (gone too ...) . ARP
was (more or less; not technically correct, but read the RFCs if you have
enough time) replaced by
<a href="http://en.wikipedia.org/wiki/Neighbor_Discovery_Protocol">NDP</a> but I
don't see such an option for IPv6. Well, a kernel feature called proxy_ndp
(net.ipv6.conf.all.proxy_ndp=1) exists on newer kernels (like for
example the 2.6.32.x that is used on RHEL6 , and so in CentOS 6) but not
on CentOS 5.5 (using a 2.6.18.x) kernel .. Hmmm ...</p>
<p>On the other side, I was searching for a 'workaround' probably given by
libvirt, but the version included in RHEL5/CentOS5 doesn't know what to
do with IPv6. Okay so let's have a look at the Xen and kernel side at
the same time. If the proxy_ndp kernel feature is not present on my
CentOS 5.5 dom0, I can still 'advertise' my neighbors with the ip
command : yes, it supports it : " ip -6 neighbor add proxy
your:ipv6:long:address::1 dev eth0"</p>
<p>So we just need to create a modified vif-route script (in fact I decided
to call it vif-route6) that will be used for ipv6 guests :</p>
<div class="highlight"><pre><span></span>#!/bin/bash
#============================================================================
# /etc/xen/scripts/vif-route6
# Script for configuring a vif in routed mode for IPv6 only
# Based on the existing vif-route script in /etc/xen/scripts, adapted for ipv6
#============================================================================

dir=$(dirname "$0")
. "$dir/vif-common.sh"

main_ip=$(dom0_ip)
main_ip6=$(ip -6 addr show eth0 | grep 'scope global' | sort | head -n 1 | awk '{print $2}' | cut -f 1 -d '/')

case "$command" in
online)
    ifconfig ${vif} ${main_ip} netmask 255.255.255.255 up
    ip -6 addr add ${main_ip6} dev ${vif}
    ipcmd='add'
    cmdprefix=''
    ;;
offline)
    do_without_error ifdown ${vif}
    ipcmd='del'
    cmdprefix='do_without_error'
    ;;
esac

if [ "${ip}" ] ; then
    # If we've been given a list of IP addresses, then add routes from dom0 to
    # the guest using those addresses.
    for addr in ${ip} ; do
        ${cmdprefix} ip -6 neighbor ${ipcmd} proxy ${addr} dev ${netdev:-eth0} 2&gt;&amp;1
        result=`${cmdprefix} ip -6 route ${ipcmd} ${addr} dev ${vif} src ${main_ip6} 2&gt;&amp;1`
    done
fi

handle_iptable

log debug "Successful vif-route $command for $vif."
if [ "$command" = "online" ]
then
    success
fi
</pre></div>
<p>Ok, so we have just now to modify our xen domU's config to add a vif
that will use that specific script and give it the IPv6 address that
we'll assign to that domU (from /etc/xen/your-domU-name):</p>
<div class="highlight"><pre><span></span>vif = [ &lt;snip of the first vif&gt; ,
        "mac=00:16:36:38:31:b8,vifname=test.ipv6,script=vif-route6,ip=2a01:4f8:100:4363::dead" ]
</pre></div>
<p>You can now start your domU and configure it normally for IPv6
(obviously using that 2a01:4f8:100:4363::dead IPv6 address and choosing
the dom0 main IPv6 address as gateway ...)</p>
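<p>For completeness, the guest side then just needs a static IPv6 configuration. Below is a sketch of a RHEL/CentOS-style ifcfg file for the domU; the gateway value is a placeholder that you'd replace with your dom0 main IPv6 address :</p>

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth0 (inside the domU) -- sketch only;
# IPV6_DEFAULTGW below is a placeholder for the dom0 main IPv6 address
DEVICE=eth0
ONBOOT=yes
IPV6INIT=yes
IPV6ADDR=2a01:4f8:100:4363::dead/64
IPV6_DEFAULTGW=2a01:4f8:100:4363::1
```

<p>(Don't forget NETWORKING_IPV6=yes in /etc/sysconfig/network as well.)</p>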
<p>Hope it will help some people in the same situation (using a routed and
not a bridged network layout for xen)</p>Zabbix crashes when using IPMI checks2010-12-17T11:23:00+01:002010-12-17T11:23:00+01:00Fabian Arrotintag:arrfab.net,2010-12-17:/posts/2010/Dec/17/zabbix-crashes-when-using-ipmi-checks/<p>Working for an IBM Business Partner for quite some years, I was used to
deploying and configuring (and even teaching for IBM) <a href="http://www-03.ibm.com/systems/be/management/director/index.html">IBM
Director</a>
as a monitoring solution (for both hardware/operating systems/snmp
devices/etc/etc ...). Now that I work as a sysadmin, I have to maintain
one IBM director 5.20.3 setup I had myself installed and configured
quite some time ago (as a consultant then). But I didn't want to update
to 6.2 because it simply kills the machine on which it runs .. needs too
much processor, too much memory .. and just to give you an idea : it's a
<a href="http://www-01.ibm.com/software/be/smb/websphere/">Websphere</a>/java thing
that you have to install now ... I wanted to go the opensource way
instead, but with something that can still monitor Linux/Windows/snmp
devices and <a href="http://en.wikipedia.org/wiki/Ipmi">IPMI</a> devices (we have
quite some IBM servers and/or BladeCenter).</p>
<p>I tested <a href="http://www.zabbix.com/">Zabbix</a> and directly fell in love with
it : the agent memory footprint is really small (in comparison with that
java-based agent on the Director side) and the way to build Items and
Triggers is really great. I deployed it in our environment but focused
first on the OS/services side (as the 'other' monitoring …</p><p>Working for an IBM Business Partner for quite some years, I was used to
deploying and configuring (and even teaching for IBM) <a href="http://www-03.ibm.com/systems/be/management/director/index.html">IBM
Director</a>
as a monitoring solution (for both hardware/operating systems/snmp
devices/etc/etc ...). Now that I work as a sysadmin, I have to maintain
one IBM director 5.20.3 setup I had myself installed and configured
quite some time ago (as a consultant then). But I didn't want to update
to 6.2 because it simply kills the machine on which it runs .. needs too
much processor, too much memory .. and just to give you an idea : it's a
<a href="http://www-01.ibm.com/software/be/smb/websphere/">Websphere</a>/java thing
that you have to install now ... I wanted to go the opensource way
instead, but with something that can still monitor Linux/Windows/snmp
devices and <a href="http://en.wikipedia.org/wiki/Ipmi">IPMI</a> devices (we have
quite some IBM servers and/or BladeCenter).</p>
<p>I tested <a href="http://www.zabbix.com/">Zabbix</a> and directly fell in love with
it : the agent memory footprint is really small (in comparison with that
java-based agent on the Director side) and the way to build Items and
Triggers is really great. I deployed it in our environment but focused
first on the OS/services side (as the 'other' monitoring solution was
still there for the hardware layer monitoring). I wanted then to use the
integrated IPMI features of Zabbix and started to poll data from our IBM
servers ... until .. crash !</p>
<p>From the zabbix_server.log :</p>
<blockquote>
<p>2774:20101217:100001.893 IPMI Host [my.host.name]: first network
error, wait for 15 seconds<br>
2774:20101217:100002.894 Got signal
[signal:11(SIGSEGV),reason:2,refaddr:0x34a3f52a38]. Crashing ...</p>
</blockquote>
<p>Hmm, not good when the monitoring application crashes itself. I disabled
all my IPMI checks and then the server was back without any issue. I
repeated the above steps back and forth to prove that it was really
IPMI related, and it is. Browsing the Zabbix support website
returned me quite some interesting answers, including <a href="https://support.zabbix.com/browse/ZBX-2898">that one
(ZBX-2898)</a> and surely <a href="https://support.zabbix.com/browse/ZBX-633">that
one (ZBX-633)</a> . Ok so that
confirms that IPMI checks have to be disabled now and let's wait for
Zabbix 1.8.4 to appear .. In the meantime I'll write some scripts (type
External Check) to return values in Zabbix that can be used to create
Triggers ... that's also one of the advantages in Zabbix : you can still
write many plugins/scripts to do the same things :-)</p>RPMforge el6 ppc builds ...2010-11-14T13:11:00+01:002010-11-14T13:11:00+01:00Fabian Arrotintag:arrfab.net,2010-11-14:/posts/2010/Nov/14/rpmforge-el6-ppc-builds/<p>Following <a href="http://lists.rpmforge.net/pipermail/users/2010-November/003342.html">Dag's post about packages now being built for
el6</a>
(and landing in the el6 repository for x86_64 and i386) I have to say
that the ppc builds are delayed for several reasons.</p>
<p>First is the (already existing) problem with the build arch. RHEL4/5 and
6 aren't built to work on Mac ppc hardware. I was able to build the
el4/el5 packages with a minimal mock environment (using the official
RHEL tree, but reduced to contain only the ppc and noarch packages,
obviously because the ppc64 packages couldn't be installed in the chroot
environment). It was even harder with the glibc package from RHEL5
because it contains specific patches <a href="http://www.arrfab.net/blog/?p=181">that require Power4 or above
processor for the ppc arch</a>. I was
able to rebuild it without those patches, meaning that the buildroot
isn't even 100% equal to the real RHEL 5.x tree.<br>
Now that el6 landed, i'll have a look at all these problems and try to
chase them one by one. I think that my 10+ years old mac G4 will suffer
from all these tests but that's still the machine that i use to build
the RPMforge ppc builds.</p>
<p>So my first plan is to …</p><p>Following <a href="http://lists.rpmforge.net/pipermail/users/2010-November/003342.html">Dag's post about packages now being built for
el6</a>
(and landing in the el6 repository for x86_64 and i386) I have to say
that the ppc builds are delayed for several reasons.</p>
<p>First is the (already existing) problem with the build arch. RHEL4/5 and
6 aren't built to work on Mac ppc hardware. I was able to build the
el4/el5 packages with a minimal mock environment (using the official
RHEL tree, but reduced to contain only the ppc and noarch packages,
obviously because the ppc64 packages couldn't be installed in the chroot
environment). It was even harder with the glibc package from RHEL5
because it contains specific patches <a href="http://www.arrfab.net/blog/?p=181">that require Power4 or above
processor for the ppc arch</a>. I was
able to rebuild it without those patches, meaning that the buildroot
isn't even 100% equal to the real RHEL 5.x tree.<br>
Now that el6 landed, i'll have a look at all these problems and try to
chase them one by one. I think that my 10+ years old mac G4 will suffer
from all these tests but that's still the machine that i use to build
the RPMforge ppc builds.</p>
<p>So my first plan is to try to have a minimal buildroot that can be
initialized on that old mac (as I've still no better hardware at my
disposal ...) and once I'm able to have a mock buildroot
initialized correctly with the RHEL6 ppc only packages (if that's
possible, still something to determine), i'll process the whole RPMforge
svn tree, meaning several days/weeks for the first run (and no, i refuse
to launch a createrepo on the produced tree after each successful build
:-) )</p>
<p>If you're a RHEL ppc user and there are some packages you really
want first (the only request I've received directly for el5 was
clamav, for example), feel free to ask for them directly on the RPMforge
list.</p>
<p>Thanks for your understanding : it's hard to produce such packages on
a platform not supported upstream ;-)</p>ProxyCommand to the rescue !2010-10-18T19:51:00+02:002010-10-18T19:51:00+02:00Fabian Arrotintag:arrfab.net,2010-10-18:/posts/2010/Oct/18/proxycommand-to-the-rescue/<p>I discussed today with a web developer who needed to reach a machine
through ssh but not directly accessible from the wild Internet. In fact,
she told me that she takes a shell on each hop with ssh agent forwarding
and then from that shell launches another ssh session. Well, of course that
works but my question was "Why don't you just simply use a ProxyCommand
in your \~/.ssh/config for that host ?". I discussed with quite some
people in the last months not knowing that ProxyCommand feature in
OpenSSH, so once again it was time to at least blog about it.</p>
<p>From <code>man ssh_config</code> :</p>
<blockquote>
<p>ProxyCommand<br>
Specifies the command to use to connect to the server ...</p>
</blockquote>
<p>The man page has an example but what I do is using ssh itself as a
ProxyCommand. Just an example : suppose you need to reach HostB (not
reachable from where you are) but that you can reach HostA (and that
HostA can reach HostB). You can configure your \~/.ssh/config like this
:</p>
<div class="highlight"><pre><span></span><span class="k">Host</span><span class="w"> </span><span class="n">HostB</span><span class="w"> </span>
<span class="w"> </span><span class="n">Hostname</span><span class="w"> </span><span class="n">the</span><span class="p">.</span><span class="n">known</span><span class="p">.</span><span class="n">fqdn</span><span class="p">.</span><span class="k">as</span><span class="p">.</span><span class="n">resolvable</span><span class="p">.</span><span class="k">by</span><span class="p">.</span><span class="n">HostA</span><span class="w"> </span>
<span class="w"> </span><span class="k">User</span><span class="w"> </span><span class="n">arrfab</span><span class="w"> </span>
<span class="w"> </span><span class="n">ForwardAgent</span><span class="w"> </span><span class="n">yes</span><span class="w"> </span>
<span class="w"> </span><span class="n">Port</span><span class="w"> </span><span class="mi">22</span><span class="w"> </span>
<span class="w"> </span><span class="n">ProxyCommand</span><span class="w"> </span><span class="n">ssh</span><span class="w"> </span><span class="n">remoteuser</span><span class="nv">@HostA</span><span class="p">.</span><span class="k">with</span><span class="p">.</span><span class="n">ssh</span><span class="p">.</span><span class="n">access</span><span class="w"> </span><span class="n">nc</span><span class="w"> </span><span class="o">%</span><span class="n">h</span><span class="w"> </span><span class="o">%</span><span class="n">p</span><span class="w"></span>
</pre></div>
<p>And what if you need to …</p><p>I discussed today with a web developer who needed to reach a machine
through ssh but not directly accessible from the wild Internet. In fact,
she told me that she takes a shell on each hop with ssh agent forwarding
and then from that shell launches another ssh session. Well, of course that
works but my question was "Why don't you just simply use a ProxyCommand
in your \~/.ssh/config for that host ?". I discussed with quite some
people in the last months not knowing that ProxyCommand feature in
OpenSSH, so once again it was time to at least blog about it.</p>
<p>From <code>man ssh_config</code> :</p>
<blockquote>
<p>ProxyCommand<br>
Specifies the command to use to connect to the server ...</p>
</blockquote>
<p>The man page has an example but what I do is using ssh itself as a
ProxyCommand. Just an example : suppose you need to reach HostB (not
reachable from where you are) but that you can reach HostA (and that
HostA can reach HostB). You can configure your \~/.ssh/config like this
:</p>
<div class="highlight"><pre><span></span><span class="k">Host</span><span class="w"> </span><span class="n">HostB</span><span class="w"> </span>
<span class="w"> </span><span class="n">Hostname</span><span class="w"> </span><span class="n">the</span><span class="p">.</span><span class="n">known</span><span class="p">.</span><span class="n">fqdn</span><span class="p">.</span><span class="k">as</span><span class="p">.</span><span class="n">resolvable</span><span class="p">.</span><span class="k">by</span><span class="p">.</span><span class="n">HostA</span><span class="w"> </span>
<span class="w"> </span><span class="k">User</span><span class="w"> </span><span class="n">arrfab</span><span class="w"> </span>
<span class="w"> </span><span class="n">ForwardAgent</span><span class="w"> </span><span class="n">yes</span><span class="w"> </span>
<span class="w"> </span><span class="n">Port</span><span class="w"> </span><span class="mi">22</span><span class="w"> </span>
<span class="w"> </span><span class="n">ProxyCommand</span><span class="w"> </span><span class="n">ssh</span><span class="w"> </span><span class="n">remoteuser</span><span class="nv">@HostA</span><span class="p">.</span><span class="k">with</span><span class="p">.</span><span class="n">ssh</span><span class="p">.</span><span class="n">access</span><span class="w"> </span><span class="n">nc</span><span class="w"> </span><span class="o">%</span><span class="n">h</span><span class="w"> </span><span class="o">%</span><span class="n">p</span><span class="w"></span>
</pre></div>
<p>And what if you need to reach HostC, which itself is only reachable by
HostB ? Let's just define a new Host section in the \~/.ssh/config and
another ProxyCommand !</p>
<div class="highlight"><pre><span></span><span class="k">Host</span><span class="w"> </span><span class="n">HostC</span><span class="w"> </span>
<span class="w"> </span><span class="n">Hostname</span><span class="w"> </span><span class="n">the</span><span class="p">.</span><span class="n">known</span><span class="p">.</span><span class="n">fqdn</span><span class="p">.</span><span class="k">as</span><span class="p">.</span><span class="n">resolvable</span><span class="p">.</span><span class="k">by</span><span class="p">.</span><span class="n">HostB</span><span class="w"> </span>
<span class="w"> </span><span class="k">User</span><span class="w"> </span><span class="n">arrfab</span><span class="w"> </span>
<span class="w"> </span><span class="n">ForwardAgent</span><span class="w"> </span><span class="n">yes</span><span class="w"> </span>
<span class="w"> </span><span class="n">Port</span><span class="w"> </span><span class="mi">22</span><span class="w"> </span>
<span class="w"> </span><span class="n">ProxyCommand</span><span class="w"> </span><span class="n">ssh</span><span class="w"> </span><span class="n">remoteuser</span><span class="nv">@HostB</span><span class="w"> </span><span class="n">nc</span><span class="w"> </span><span class="o">%</span><span class="n">h</span><span class="w"> </span><span class="o">%</span><span class="n">p</span><span class="w"></span>
</pre></div>
<p>You can now directly use <code>ssh HostC</code> from your laptop/workstation
and have a direct shell on HostC, even if it has to open a connection to
HostA and from there to HostB to finally reach HostC. That works also for
scp/sftp, so you can directly copy/retrieve files to/from HostC instead of
copying from one host to the next hop. More information about those
features and the correct syntax in <code>man ssh_config</code>.</p>
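<p>As a side note, OpenSSH 5.4 and later can do that netcat step natively with the -W flag, so the HostB entry above could also be sketched without nc at all (same hypothetical hostnames) :</p>

```shell
Host HostB
    Hostname the.known.fqdn.as.resolvable.by.HostA
    User arrfab
    ForwardAgent yes
    Port 22
    ProxyCommand ssh remoteuser@HostA.with.ssh.access -W %h:%p
```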
<p>Hope that you can find that useful if you didn't know that already</p>To automate ? or not ?2010-10-12T12:44:00+02:002010-10-12T12:44:00+02:00Fabian Arrotintag:arrfab.net,2010-10-12:/posts/2010/Oct/12/to-automate-or-not/<p>Well, this is a good question and most of us will likely answer 'yes of
course'. Indeed, as a SysAdmin you want regular tasks to be automated
and act the same way on a bunch of systems. But what if you need a
custom task (that you only need once) to be launched on some machines at
the same time ? A lot of solutions exist of course to "push" commands to
be executed by clients machines. Some will use
<a href="http://www.redhat.com/red_hat_network/">RHN/Satellite</a>, some will
prefer using something like <a href="http://www.cfengine.org/">CFengine</a> or
<a href="http://www.puppetlabs.com/">Puppet</a>. I've even discussed with some
admin pushing some 'encrypted' commands on a
<a href="http://twitter.com/">Twitter</a> feed followed by some clients able to
understand the commands and process them ... Multiple solutions exist,
and your imagination is probably the limit.</p>
<p>But what if you just have to manage a <strong><em>*very*</em></strong> small amount of
servers at the same time for different customers/environments. For
example, as an IT consultant, you probably have a bunch of customers
running different solutions and sometimes with only 10 or 20 servers,
right ? Will you install a satellite proxy server just to push commands
for those 15 machines ? or sometimes even less ? IMHO, ssh is the
solution, especially if you …</p><p>Well, this is a good question and most of us will likely answer 'yes of
course'. Indeed, as a SysAdmin you want regular tasks to be automated
and act the same way on a bunch of systems. But what if you need a
custom task (that you only need once) to be launched on some machines at
the same time ? A lot of solutions exist of course to "push" commands to
be executed by clients machines. Some will use
<a href="http://www.redhat.com/red_hat_network/">RHN/Satellite</a>, some will
prefer using something like <a href="http://www.cfengine.org/">CFengine</a> or
<a href="http://www.puppetlabs.com/">Puppet</a>. I've even discussed with some
admin pushing some 'encrypted' commands on a
<a href="http://twitter.com/">Twitter</a> feed followed by some clients able to
understand the commands and process them ... Multiple solutions exist,
and your imagination is probably the limit.</p>
<p>But what if you just have to manage a <strong><em>*very*</em></strong> small amount of
servers at the same time for different customers/environments. For
example, as an IT consultant, you probably have a bunch of customers
running different solutions and sometimes with only 10 or 20 servers,
right ? Will you install a satellite proxy server just to push commands
for those 15 machines ? or sometimes even less ? IMHO, ssh is the
solution, especially if you want interactive output/processing on all
the machines. I discussed with a friend of mine who said that ssh was
the solution but taking a shell on 15 servers "one at a time" was time
consuming. Of course it is. So why not use a shell multiplexer or
distributed shells ? I was astonished to see how many people I had the
chance to speak with don't even know that you can launch interactively
or in batch the same commands on multiple systems at the same time ! So
I thought that maybe it was time to write (like many others did) about
ssh based solutions !</p>
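<p>To make the point concrete, the 'same command on several machines' idea can be sketched with nothing more than ssh and xargs (the host names below are hypothetical, and the leading echo makes it a dry run) :</p>

```shell
#!/bin/sh
# Hypothetical host list -- replace with your own machines
hosts="web1 web2 db1"

# -P4 opens up to four connections in parallel; drop the 'echo'
# to really execute the command on each host over ssh
printf '%s\n' $hosts | xargs -P4 -I{} echo ssh {} uptime
```

<p>This is roughly what mussh or pdsh do for you in miniature, plus nicer output handling and host-list management.</p>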
<p>I've tested and continue to use on a daily basis different programs
(that use ssh in the background of course). Depending on the situation
(you'd have to test them to find the one that fits your needs), I like
to use
<a href="http://sourceforge.net/apps/mediawiki/clusterssh/index.php?title=Main_Page">cluster-ssh</a>,
<a href="http://sourceforge.net/projects/mussh/">mussh</a> and
<a href="http://web.taranis.org/shmux/">shmux</a>. Others are available, like
<a href="https://computing.llnl.gov/linux/pdsh.html">pdsh</a>, etc .. but the first
ones are the ones I'm more comfortable with and that I personally use.</p>FrOSCon 2010 is over .. waiting now for 2011 :-)2010-08-26T15:40:00+02:002010-08-26T15:40:00+02:00Fabian Arrotintag:arrfab.net,2010-08-26:/posts/2010/Aug/26/froscon-2010-is-over-waiting-now-for-2011/<p>It was the first time that I was at <a href="http://www.froscon.org/">FrOSCon</a>
and I admit I enjoyed it. Not only because I can always see in real life
some other CentOS contributors (thanks again Andreas, Sarah, Didi and
Christoph), but also because I can see some other people really happy
with CentOS. Last year (even if I was not there), CentOS <a href="http://dag.wieers.com/blog/centos-based-livecd-at-froscon">was used to
'power' some TFT
screens</a> at
the entrance. So we did the same this year and just because Dag asked
it, we took a picture this year too :-)</p>
<p><img alt="CentOS @ FrOSCon
2010" src="http://www.arrfab.net/blog/wp-content/uploads/2010/08/22082010025-300x225.jpg" title="CentOS @ FrOSCon 2010"></p>Automatic network switcher on Nokia E712010-08-19T12:54:00+02:002010-08-19T12:54:00+02:00Fabian Arrotintag:arrfab.net,2010-08-19:/posts/2010/Aug/19/automatic-network-switcher-on-nokia-e71/<p>I've always loved Nokia products and always have been satisfied by my
last two E-series …</p><p>I've always loved Nokia products and always have been satisfied by my
last two E-series
(<a href="http://www.nokia.co.uk/support/product-support/nokia-e51">E51</a>and
<a href="http://www.nokia.co.uk/support/product-support/nokia-e71">E71</a>). But
there are those little things that can bring you a better life when
using it. For example, you can decide which data access point (Wireless
or 3G) you want to use when launching an application. That's fine and of
course I prefer using my WLAN at home than the 3G connection. But what
if you schedule something to happen automagically on your phone, like
calendar and contacts sync (I sync those with the integrated Mail for
Exchange application, even if, obviously, I don't have an Exchange
server, but rather a <a href="http://www.zarafa.com/content/community">Zarafa OpenSource
server</a> with
<a href="http://z-push.sourceforge.net/soswp/">z-push</a> installed). You have to
define a 'sync plan' and choose which connection it will use in the
background. Wait, I'd like it to select my Wireless AP when at home (or
with some pre-defined wlans from friends, etc ..) if available and then
switch to 3G if no wlan is available. That's where
<a href="http://www.birdstep.com/Products/Birdstep/SmartConnect/">SmartConnect</a>
helps you a lot : you can define a 'fake' access point which is in fact
a group that contains your connections (wlans/3G) with priorities and it
will use the first available one. Use that 'fake' access point on your
mobile, whatever the application (even tested with <a href="http://s2putty.sourceforge.net/">Putty for
Symbian</a>). Great and useful.</p>
<p>You can download it from the official website, or directly install it
from your mobile through the OVI app installer.</p>CentOS promo team @ FrOSCon 20102010-08-04T14:31:00+02:002010-08-04T14:31:00+02:00Fabian Arrotintag:arrfab.net,2010-08-04:/posts/2010/Aug/04/centos-promo-team-froscon-2010/<p>Hi all ... just to let you know that some members of the CentOS promo
team (including myself) will be at the '<a href="http://www.froscon.org/">Free and OpenSource Software
Conference</a>' in Germany in (more than) two
weeks ...</p>
<p>Feel free to come at the CentOS booth to discuss with some of us !</p>DRBD backported (or not) to 2.6.32 in EL6 ?2010-07-20T13:54:00+02:002010-07-20T13:54:00+02:00Fabian Arrotintag:arrfab.net,2010-07-20:/posts/2010/Jul/20/drbd-backported-or-not-to-2-6-32-in-el6/<p>As some of you already know it, DRBD is now (since kernel 2.6.33) <a href="http://www.drbd.org/download/mainline/">part
of the mainline/upstream
kernel</a>. Some were expecting
RHEL6 to come with that kernel (used for Fedora 13). The latest
RHEL6beta2 still comes with 2.6.32, which doesn't include DRBD support.
Of course we still don't know what the 'frozen' RHEL6 kernel will be but
on the other hand, we know that Red Hat quite often 'backports' modules
from newer kernel into the RHEL kernel. What about DRBD ? At the time of
writing this blog post, it seems still undecided, but you can follow the
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=585309">DRBD RFE on Upstream
Bugzilla</a> to get a
clue, or even comment on it if you have a bugzilla account to make your
voice heard. On the other hand, you can still be sure that even if DRBD
isn't part of EL6, CentOS will still ship it in the <a href="http://wiki.centos.org/AdditionalResources/Repositories">Extras
repository</a>,
like for EL4/EL5 ...</p>Following build status through Twitter ...2010-05-31T20:09:00+02:002010-05-31T20:09:00+02:00Fabian Arrotintag:arrfab.net,2010-05-31:/posts/2010/May/31/following-build-status-through-twitter/<p>The other day, <a href="http://orcorc.blogspot.com/">Russ</a> pointed me/us to a
cli client for Twitter :
<a href="http://www.floodgap.com/software/ttytter/">TTYtter</a>. Even if I'm not
myself a Twitter (ab)user, I thought it would be funny to create a feed
that can be followed by those Twitter abusers wanting to follow the
build status of the RPMforge PPC rpm packages that I build. I've quickly
modified my automated build scripts to post a build status after each
build and you can follow it here : <a href="http://twitter.com/rpmppcbuilder">http://twitter.com/rpmppcbuilder</a> .
It would be nice also to have such stuff for other build machines as
well ... While i'm talking about those PPC packages, if you still have a
PPC machine doing nothing and that you want to throw away, please
redirect it to me instead (that can even be an IBM js2x blade that I
would be able to have hosted somewhere) ;-)</p>samba preexec and postexec to the rescue ...2010-04-09T09:32:00+02:002010-04-09T09:32:00+02:00Fabian Arrotintag:arrfab.net,2010-04-09:/posts/2010/Apr/09/samba-preexec-and-postexec-to-the-rescue/<p>I had yesterday to migrate a server running CentOS 4.x to a new xen domU
running CentOS 5.4 …</p><p>I had yesterday to migrate a server running CentOS 4.x to a new xen domU
running CentOS 5.4 . While the migration was the easy part (including
the data), the 'problem' I had was the backup part. In fact the backup
application used is a proprietary one (<a href="http://arcserve.com/us/products/product.aspx?id=5282">Computer Associates
Arcserve</a>, to keep
it 'secret' .. ). Well, that proprietary backup agent (v11.5, old but
still used 'here') doesn't work on CentOS 5.4 (glibc issue) and i didn't
want to find a workaround that problem (with LD_LIBRARY_PATH or some
other tweak). That's where <a href="http://www.samba.org">Samba</a> came to the
rescue : the Arcserve backup server can backup Windows nodes (and so
Samba nodes too) through <a href="http://en.wikipedia.org/wiki/Path_(computing)#Uniform_Naming_Convention">UNC
path</a>.
OK, but for the previous system I had prebackup and postbackup
jobs/scripts on the CentOS node (doing specific things, including
creating an lvm snapshot and removing it after the backup job). How could
I do that now through the samba-backup-method approach ? Well a
standard cron job on the CentOS machine was of course always doable but
I (we ?) preferred some kind of 'triggering action'. That's where I used
for the first time the preexec and postexec options for a samba share.
Basically it still uses my prebackup and postbackup scripts (creating the
snapshot and mounting it, as well as the reverse, obviously) and exposes
that mounted snapshot during backup time to the backup server ..
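<p>Such a share definition could look like this minimal sketch (the share name, path and script names here are hypothetical, not taken from the actual setup; <code>preexec</code>/<code>postexec</code> are the standard smb.conf options named above) :</p>

```ini
[backup]
   path = /mnt/snapshot
   read only = yes
   ; runs when the backup server connects to the share
   preexec = /usr/local/bin/prebackup.sh
   ; runs when it disconnects, once the backup job is done
   postexec = /usr/local/bin/postbackup.sh
```

<p>Since creating and removing an LVM snapshot needs root privileges, the <code>root preexec</code>/<code>root postexec</code> variants may be what you actually want here.</p>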
Working nicely and maybe it can give you some ideas .... ;-)</p>Extending (live) a SR (storage repository) on XenServer 5.52010-03-04T22:54:00+01:002010-03-04T22:54:00+01:00Fabian Arrotintag:arrfab.net,2010-03-04:/posts/2010/Mar/04/extending-live-a-sr-storage-repository-on-xenserver-5-5/<p>For my new job I have to learn how to deal with <a href="http://www.citrix.com/English/ps2/products/feature.asp?contentID=1686939">Citrix
XenServer</a> …</p><p>For my new job I have to learn how to deal with <a href="http://www.citrix.com/English/ps2/products/feature.asp?contentID=1686939">Citrix
XenServer</a>
(yeah, because of a mixed workload of CentOS domU's and Windows TSE
servers, for which XenServer has been optimized). I liked the fact that
I'm directly feeling "like home" , as Citrix XenServer dom0 is based on
CentOS (still 5.3 at this time though). One of the things i had to do
was to extend a Storage Repository served from an IBM DS3200 through
dual HBAs, and using <a href="http://www.lsi.com/rdac/ds3000.html">mpp/rdac</a>
(the default on XenServer 5.5 when it sees a rdac disk storage backend).
Great, I've never had a problem doing this on plain RHEL or CentOS
machines, so after having extended the LUN on the IBM DS3200, I was back
on the XenServer side. I always like to read the official documentation
before doing something (and it's even faster when you know what you're
searching for) and I found this on the Citrix XenServer documentation :
"<a href="http://support.citrix.com/article/CTX120865">How to resize a Storage repository after changing the size of an
LVM-base storage</a>" . Hmmm,
WTF ? Their recipe is : "live migrate the guests, restart the host and
proceed for each host" ! No, it has to work without a reboot, we're not
Windows admins, right ? Here is what I did (that was tested on a test
machine !)</p>
<p>We have first to list the current status/size :</p>
<blockquote>
<p>[root@xen1 ~]# xe sr-list<br>
uuid ( RO) : c945d1bb-2432-36ac-2766-ebd2bc7f2e81<br>
name-label ( RW): Hardware HBA virtual disk storage<br>
name-description ( RW): Hardware HBA SR [IBM - /dev/sdb]<br>
host ( RO): xen1<br>
type ( RO): lvmohba<br>
content-type ( RO):<br>
[root@xen1 ~]# xe sr-param-list
uuid=c945d1bb-2432-36ac-2766-ebd2bc7f2e81|grep physical-size<br>
physical-size ( RO): 85886763008<br>
[root@xen1 ~]# pvscan|grep c945d1bb-2432-36ac-2766-ebd2bc7f2e81<br>
PV /dev/sdb VG
VG_XenStorage-c945d1bb-2432-36ac-2766-ebd2bc7f2e81 lvm2 [79.99 GB /
16.12 GB free]</p>
</blockquote>
<p>Now we'll extend with the IBM DS StorageManager script editor : "set
logicalDrive ["XenPool1"] addcapacity=139 GB;"</p>
<p>Back on the xen host we have to rescan for the new size (using a MPP
device presented as /dev/sdb on the xen host) and confirm with
dmesg|tail</p>
<blockquote>
<p>[root@xen1 device]# echo 1 >/sys/block/sdb/device/rescan ;
dmesg|tail</p>
<p>sdb: detected capacity change from 85899345920 to 235149459456</p>
<p>[root@xen1 device]# pvresize /dev/sdb<br>
Physical volume "/dev/sdb" changed<br>
1 physical volume(s) resized / 0 physical volume(s) not resized<br>
[root@xen1 device]# pvscan<br>
PV /dev/sdb VG
VG_XenStorage-c945d1bb-2432-36ac-2766-ebd2bc7f2e81 lvm2 [218.99 GB
/ 155.12 GB free]<br>
PV /dev/sda3 VG
VG_XenStorage-9c1e7a2a-2fc0-83eb-3e32-7cea2c9e9d93 lvm2 [60.59 GB /
60.59 GB free]<br>
Total: 2 [279.58 GB] / in use: 2 [279.58 GB] / in no VG: 0 [0 ]</p>
</blockquote>
<p>Rescan now that SR :</p>
<blockquote>
<p>[root@xen1 device]# xe sr-scan
uuid=c945d1bb-2432-36ac-2766-ebd2bc7f2e81<br>
[root@xen1 device]# xe sr-param-list
uuid=c945d1bb-2432-36ac-2766-ebd2bc7f2e81|grep physical-size<br>
physical-size ( RO): 235136876544</p>
</blockquote>
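<p>Put together, the whole live-resize flow above fits in a small script. This is only a sketch using the UUID and device from this example; the <code>DRY_RUN</code> guard just prints the commands, so nothing is touched until you unset it on a real XenServer host :</p>

```shell
#!/bin/sh
# Sketch of the live SR-resize flow described above.
# SR_UUID and DEV come from this example -- adjust for your own host.
SR_UUID=c945d1bb-2432-36ac-2766-ebd2bc7f2e81
DEV=sdb
DRY_RUN=${DRY_RUN:-1}

run() {
    # print the command in dry-run mode, execute it otherwise
    if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

run sh -c "echo 1 > /sys/block/$DEV/device/rescan"  # kernel re-reads the LUN size
run pvresize "/dev/$DEV"                            # grow the LVM physical volume
run xe sr-scan uuid="$SR_UUID"                      # SR picks up the new size
```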
<p>Done ! And I confirm that the CentOS domU's were still running after
that ... ;-)</p>
<p>PS : while talking about Citrix XenServer, I have to add that I used
only ssh/xe to manage it, as their XenCenter gui app is a Windows only
GUI (relying on .Net). But I found several days ago an interesting GPL
project: <a href="http://www.openxencenter.com">OpenXenCenter</a>, something to
keep an eye on as it's still alpha but quickly evolving ...</p>WARNING: mismatch_cnt is not 0 on /dev/md0 through LogWatch2010-02-21T20:32:00+01:002010-02-21T20:32:00+01:00Fabian Arrotintag:arrfab.net,2010-02-21:/posts/2010/Feb/21/warning-mismatch_cnt-is-not-0-on-devmd0-through-logwatch/<p>Recently I received (through logwatch) several weekly reports about a
mismatch in synchronized block on my md0 (/boot) device :</p>
<p><code>WARNING: mismatch_cnt is not 0 on /dev/md0</code></p>
<p>`cat /proc/mdstat` was normal though.</p>
<p>An `echo repair >/sys/block/md0/md/sync_action` followed by an `echo
check >/sys/block/md0/md/sync_action` seems to have corrected it. Now
`cat /sys/block/md0/md/mismatch_cnt` returns 0 ...</p>Fosdem 2010 is over ...2010-02-11T09:31:00+01:002010-02-11T09:31:00+01:00Fabian Arrotintag:arrfab.net,2010-02-11:/posts/2010/Feb/11/fosdem-2010-is-over/<p>Yes I know, it's even over for more than 4 days, but I was too busy with
other stuff to write a small report of the event, which I'll do later
if `locate free_time` returns something useful.</p>
<p>In the meantime, a (funny) picture from the CentOS booth at Fosdem 2010
:D</p>
<p><img alt="CentOS_amazing" src="http://www.arrfab.net/blog/wp-content/uploads/2010/02/CentOS_amazing1.jpg" title="CentOS_amazing"></p>CentOS @ Fosdem 20102010-01-23T14:45:00+01:002010-01-23T14:45:00+01:00Fabian Arrotintag:arrfab.net,2010-01-23:/posts/2010/Jan/23/centos-fosdem-2010/<p>Some members of the CentOS team will be present at Fosdem. Feel free
to come at our booth just to discuss ...</p>
<p>More information on <a href="http://wiki.centos.org/Events/Fosdem2010">our
wiki</a> and on the
<a href="http://www.fosdem.org">Fosdem</a> website</p>the joy of building ppc rpms for RHEL 5.4 PPC on an unsupported platform2010-01-08T09:47:00+01:002010-01-08T09:47:00+01:00Fabian Arrotintag:arrfab.net,2010-01-08:/posts/2010/Jan/08/the-joy-of-building-ppc-rpms-for-rhel-5-4-ppc-on-an-unsupported-platform/<p>I was a little bit late to build latest rpm packages from spec files
committed in the <a href="http://svn.rpmforge.net/svn/trunk/rpms/">RPMforge svn tree</a> …</p><p>I was a little bit late to build latest rpm packages from spec files
committed in the <a href="http://svn.rpmforge.net/svn/trunk/rpms/">RPMforge svn
tree.</a> I had to deal with some
external stuff and also fixing the fact that rpm-macros-rpmforge has to
be installed in the chroot prior to try to build the prepared SRPM that
my script/wrapper created. Now that it has been fixed and it's working
(and newer rpmforge-release package to reflect all current arches), it
was time to update the tree i'm building against/for . No problem for
RHEL 4.8 PPC as it was ok but i updated the el5 tree to reflect RHEL 5.4
ppc. And then the problem : Mock dies completely on a bunch of errors
but the first one seems obvious :</p>
<blockquote>
<p>/usr/sbin/glibc_post_upgrade: While trying to execute
/usr/sbin/iconvconfig.ppc child terminated abnormally<br>
error: %post(glibc-2.5-42.ppc) scriptlet failed, exit status 115</p>
</blockquote>
<p>Grr. Looking at the `rpm -qp --changelog glibc-2.5-42.ppc.rpm` to see
the differences among the glibc.ppc releases from RHEL 5.0 to 5.4 gave
me some pointers :</p>
<blockquote>
<p>build ppc and ppc64 base shared libraries with -mcpu=power4,<br>
i.e. only support power4 and newer CPUs, *.a and *.o in<br>
glibc-devel should still work on any powerpc CPU (#241003)</p>
</blockquote>
<p>First thing : I always like it when I try to read such a bugzilla report
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=241003">but can't read it</a>.
And the machine I use to build the RPMforge PPC packages for RHEL PPC
is quite old (a mac G4 from year 2K with a 7400, altivec supported cpu @
400MHz). That one, of course, <a href="http://en.wikipedia.org/wiki/List_of_PowerPC_processors#G4">isn't at the required
level</a>
compared to the real IBM PPC Power line .. :/</p>
<p>So that means that :</p>
<p>* either I need to find a Power4 (or above) machine to build the
RPMforge ppc packages for RHEL5.4 ppc target (someone ?)</p>
<p>* or I need to recompile glibc.ppc with different flags, and all the
dependencies ... which in the background means producing a CentOS 5 PPC
(which is already a slow but ongoing process), but I already hear my
small machine screaming at me "Welcome to the Hell of dependencies [TM]"
... ;-)</p>Spamassasin default rules don't like 20102010-01-01T21:55:00+01:002010-01-01T21:55:00+01:00Fabian Arrotintag:arrfab.net,2010-01-01:/posts/2010/Jan/01/spamassasin-default-rules-dont-like-2010/<p>Maybe some of you have already noticed but the standard Spamassassin
rules don't like 2010. As explained in the <a href="https://issues.apache.org/SpamAssassin/show_bug.cgi?id=6269">SA bug
6269</a> , the
FH_DATE_PAST_20XX rule of course matches every incoming mail starting
from today .. Ouch. Time to update your rules or change your score for
that rule .. I guess that an update of that rule will be available soon
(I hope so) and will be fetched by a simple `sa-update`. In the
meantime, time for you to fix it manually ! ;-)</p>Accessing Exchange 2007 from a CentOS laptop ...2009-11-30T21:25:00+01:002009-11-30T21:25:00+01:00Fabian Arrotintag:arrfab.net,2009-11-30:/posts/2009/Nov/30/accessing-exchange-2007-from-a-centos-laptop/<p>What can you do when the company you work for (or should I say the
people who manage the Internal Network) …</p><p>What can you do when the company you work for (or should I say the
people who manage the Internal Network) has decided to switch from Lotus
Domino to M$ Exchange 2007 ? Ouch ... I can't say that I'm personally
a great Lotus Domino supporter but it's a stable system and a native
client exists for all the current platforms (packaged in .rpm and .deb
for Linux, as well as in a Java InstallShield wizard for linux distros not
using either rpm nor deb packages) .. But what when you have to switch
to an Exchange backend ? Up to now I always managed to have my professional
laptop installed with CentOS and I surely don't want Windows on my
laptop that I use for my day-to-day work :D</p>
<p>I had a quick look at the Exchange plugin that you can find for
Evolution, but unfortunately that one (that uses OWA in the backend) can
only be used against Exchange 2K or 2K3 but is incompatible with 2K7.
Then I heard about rumours regarding a new Exchange/Mapi plugin (that
requires a newer Evolution/gnome than the one provided in el5). I can't
test it as it requires direct mapi access to the Exchange server and I'm
forced (up to now) to use RPC over HTTPS . Damn. It seemed that the only
solution was then to install Outlook with Wine on my CentOS laptop ..
until I found <a href="http://davmail.sourceforge.net/">DavMail</a> : it uses OWA
in the backend (and is compatible with Exchange 2k7 OWA) and acts as an
IMAP/Caldav/LDAP gateway. Cool, so I can use my MUA of choice (tested
now with Thunderbird but I want to test Mutt as well) to read my mails,
consult/update my calendar and search/use the Exchange Addressbook
without having to install any M$ component ..</p>
<p>So far, so Good ... thanks DavMail ! :D</p>dm-multipath for IBM DS3xxx2009-11-09T12:17:00+01:002009-11-09T12:17:00+01:00Fabian Arrotintag:arrfab.net,2009-11-09:/posts/2009/Nov/09/dm-multipath-for-ibm-ds3xxx/<p>While i've used (up to now) the IBM/LSI-logic solution (aka
<a href="http://www.lsi.com/rdac/ds3000.html">RDAC</a>) to support multiple paths
to an IBM storage solution (aka DS4xxx and DS3xxx), it was a pain
because each time you wanted to install a new kernel the procedure
implied to remove the old/previous rdac module, boot with the new kernel
(without mpp), rebuild mpp/rdac and creating a new initrd and then
another reboot (with the new initrd containing the correct module).</p>
<p>I've now switched to dm-multipath instead. The basic and provided
/etc/multipath.conf normally works quite ok, but if you want to tune it
to support more storage vendors/solutions you really have to read the
multipath documentation. <a href="http://www.bofh-hunter.com/2009/09/02/dm-multipath-and-the-ds4700/">Jim already
blogged</a>
about the DS4700 FC backend storage .</p>
<p>Here is the version for the DS3200 (SAS connections) :</p>
<div class="highlight"><pre><span></span><span class="n">devices</span> <span class="err">{</span>
<span class="n">device</span> <span class="err">{</span>
<span class="n">vendor</span> <span class="ss">"IBM"</span>
<span class="n">product</span> <span class="ss">"1726-2xx FAStT"</span>
<span class="n">getuid_callout</span> <span class="ss">"/sbin/scsi_id -g -u -s /block/%n"</span>
<span class="n">prio_callout</span> <span class="ss">"/sbin/mpath_prio_rdac /dev/%n"</span>
<span class="n">features</span> <span class="ss">"0"</span>
<span class="n">hardware_handler</span> <span class="ss">"1 rdac"</span>
<span class="n">path_grouping_policy</span> <span class="n">group_by_prio</span>
<span class="n">failback</span> <span class="k">immediate</span>
<span class="n">rr_weight</span> <span class="n">uniform</span>
<span class="n">no_path_retry</span> <span class="mi">300</span>
<span class="n">rr_min_io</span> <span class="mi">1000</span>
<span class="n">path_checker</span> <span class="n">rdac</span>
<span class="err">}</span>
<span class="err">}</span>
</pre></div>Lftp doesn't work in SSL mode since update to 5.42009-10-30T17:41:00+01:002009-10-30T17:41:00+01:00Fabian Arrotintag:arrfab.net,2009-10-30:/posts/2009/Oct/30/lftp-doesnt-work-in-ssl-mode-since-update-to-5-4/<p>The other day I had to configure a box that had to fetch some files from
another machine …</p><p>The other day I had to configure a box that had to fetch some files from
another machine and transfer those files from the DMZ to an external
bank. While I usually use SFTP for that, in that specific case I had no
choice and had to use FTP/SSL. First thing that hurt me is that to
fetch the certificate/private key that the bank created for me, I had to
use Internet Explorer on a Windows machine ! Ouch ... (yeah, they use
activex on the page you have to login to for the certificate request,
you <em>can't</em> use openssl yourself to send them the CSR ...) bad, bad ..
and also funny that they point you to an https website to read the
documentation, in which they say how to import their Root CA (which
obviously you had to import yourself first to read the same manual ...)
.. From that time I knew I'd have troubles ..</p>
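<p>For reference, splitting such a PKCS#12 (.pfx) export into the PEM certificate and key can be done with openssl alone. This is only a self-contained sketch : the file names and password are hypothetical, and the first two commands just fabricate a stand-in .pfx so the example runs anywhere — with the real bank file you would start at the split step :</p>

```shell
#!/bin/sh
# Fabricate a stand-in .pfx (this part only simulates the bank's export)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 -subj "/CN=demo" \
        -keyout key.pem -out cert.pem 2>/dev/null
openssl pkcs12 -export -inkey key.pem -in cert.pem \
        -out bank.pfx -passout pass:secret

# Split the .pfx into the two PEM files a client like lftp expects
openssl pkcs12 -in bank.pfx -passin pass:secret \
        -clcerts -nokeys -out ssl-cert.pem        # certificate only
openssl pkcs12 -in bank.pfx -passin pass:secret \
        -nocerts -nodes -out ssl-key.pem          # unencrypted private key
```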
<p>Okay, exporting the SSL certificate/private key from Internet Exploder,
using openssl to convert to PEM and I had those ready to be used on my
CentOS 5.4 VM. Lftp seems good for such a task and supports ssl too ..
After having configured my ~/.lftprc with the correct values (like
ssl:key-file and ssl:cert-file) I wasn't able to connect : the message
was : "Fatal error: gnutls_handshake: A TLS fatal alert has been
received" . Hmm, strange. I decided to test with the RPMforge version
(which is built against OpenSSL and not Gnutls) and that one worked
correctly (without having changed the conf files). Okay it's now working
but does that mean that the lftp package from 5.x doesn't work in ssl
mode with a client certificate ? I've downgraded the package to the one
present in the 5.x branch (before the 5.4) : lftp-3.5.1-2.fc6 instead of
lftp-3.7.11-4.el5 and it worked perfectly with the same config files
too. Sounds like a bug to me and not a config issue so I opened a <a href="https://bugzilla.redhat.com/show_bug.cgi?id=532099">bug
upstream</a> and on the
<a href="http://bugs.centos.org/view.php?id=3954">CentOS mantis</a> system. Let's
see how it goes. In the meantime (if you have the same issue) you can
either downgrade to the lftp version you'll find in the 5.3 tree or
update to the <a href="http://packages.sw.be/lftp/lftp-4.0.1-1.el4.rf.i386.rpm">version from
RPMforge</a>.</p>CentOS 5.3 on Neoware e90 Thin Client2009-09-14T14:41:00+02:002009-09-14T14:41:00+02:00Fabian Arrotintag:arrfab.net,2009-09-14:/posts/2009/Sep/14/centos-5-3-on-neoware-e90-thin-client/<p>As <a href="http://www.hp.com">Hp</a> acquired <a href="http://www.neoware.com">Neoware</a>
several months ago, customers are searching for new thin clients .. and
I received a <a href="http://h20000.www2.hp.com/bizsupport/TechSupport/Home.jsp?lang=en&cc=us&prodTypeId=12454&prodSeriesId=3638812&lang=en&cc=us">Neoware e90 thin
client</a>
(that wasn't used anymore). What could I do with it ? ... hmm, let's try
to use it at home as a small appliance to host a USB HDD that can be
shared . The advantage is that it doesn't consume a lot of electricity (in
comparison with my Asus Barebone with an AMD X2 64) and doesn't produce
noise at all .. which is also a good thing. The thin client I received
has a VIA Nehemiah CPU @ 800MHz and 128MB of RAM. It also has a small
IDE-DiskOnChip disk (32MB) but that is obviously too small to set up
CentOS on it. I decided to dedicate a small 1GB USB stick I received
as a gift from a "well-known hypervisor" company (aka VMware) and use it
for / and swap.</p>
<p>I disconnected the DiskOnChip module from the motherboard and configured
the BIOS to boot via PXE as the first device and local usb-hdd as the second
one (if you need a password, it's likely to be either 'dogbites' or
'DOGBITES') and I started a CentOS 5.3 setup. But that didn't work on
the first try : the embedded NIC (VIA Technologies, Inc. VT6102 [Rhine-II]
(rev 74) ) refused to acquire an IP address . Switching to VT3/VT4 showed
me that even if via-rhine.ko kernel module was loaded, it was impossible
to have a network connection. (message was related to "netdev watchdog
transmit timed out" and some IRQ messages too). I then decided to add
the kernel parameter 'irqpoll' and then the setup was able to work on
the network. One problem solved ... The second problem is that with 128MB of
RAM, CentOS 5.x normally isn't installable. Well, if you use text mode
(anyway, graphical mode will even refuse to start ...) and use disk-druid
to create the swap partition, anaconda will use it directly to make up for
the missing RAM. Another thing is that I *had* to use an NFS based
setup : I tried an http based setup and it always died on me (maybe
because it had to fetch stage2.img, while with NFS it just loop-mounts it
...). Anyway it installed successfully on the USB stick (minimal install,
so every component removed from the software selection, took 29 minutes
to complete) and it rebooted normally. Don't forget also to add the
irqpoll kernel parameter in grub.conf so that you'll have network
connection after reboot ... And as an image talks more than a long
sentence .. :</p>
<p><img alt="14092009" src="http://www.arrfab.net/blog/wp-content/uploads/2009/09/14092009-300x225.jpg" title="14092009"></p>virt-install / xen domU 'out of memory' issue2009-09-02T15:34:00+02:002009-09-02T15:34:00+02:00Fabian Arrotintag:arrfab.net,2009-09-02:/posts/2009/Sep/02/virt-install-xen-domu-out-of-memory-issue/<p>I had today to deploy two CentOS 5.3 xen dom0 on two blades and then
some domU guests. Everything was fine except that when I used our
traditional deploydomU script (which uses virt-install) it directly
complained about a memory issue. The exact message was " '<strong>Out of
memory', "xc_dom_boot_mem_init: can't allocate low memory for
domain\n</strong>" " . Strange, as I was sure that the dom0 had plenty of
memory and the new guest was defined to use only 768MB .. so what was
the issue ? In fact, nothing related to memory : Our new machines get
deployed through a pxe boot menu (with syslinux/pxelinux.0 and
pxelinux.cfg) in the Labs zone, but a typo was inserted in that menu so
that newer CentOS 5.3 x86_64 machines were in fact ... using i386 repo
! ;-)</p>
<p>It took me 5 minutes to consult the great oracle (aka google) , find the
same issue and look at both new nodes to confirm with `uname -a` that
I had tried to deploy an x86_64 domU on an i386 dom0 ...</p>
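<p>Such a mismatch is easy to guard against before even calling virt-install. A minimal sketch of such a check (the function and variable names are mine, not part of our deploydomU script) :</p>

```shell
#!/bin/sh
# Return 0 if a guest of arch $1 can run on a dom0 of arch $2 :
# a x86_64 domU needs a x86_64 dom0, anything else is fine.
check_arch() {
    guest="$1"; dom0="$2"
    if [ "$guest" = "x86_64" ] && [ "$dom0" != "x86_64" ]; then
        echo "refusing : can't deploy a $guest domU on a $dom0 dom0" >&2
        return 1
    fi
    return 0
}

# in a deploy script this would be : check_arch "$GUEST_ARCH" "$(uname -m)"
check_arch i386 x86_64 && echo "i386 domU on x86_64 dom0 : ok"
```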
<p>Hehehe, strange that the message is related to memory and not arch ..
but several minutes later (and a coffee cup, machines being redeployed
correctly *after* the pxelinux.cfg file was modified) everything was
back to normal and x86_64 domU's running fine ... hope that it can help
other people having the same 'typo' :-p</p>Setting up DRBD on only one active and available node2009-08-25T13:50:00+02:002009-08-25T13:50:00+02:00Fabian Arrotintag:arrfab.net,2009-08-25:/posts/2009/Aug/25/setting-up-drbd-on-only-one-active-and-available-node/<p>Recently I had to install a new server that will act as a mail server
(Zarafa, but that doesn't matter) and being a member of a <a href="http://www.drbd.org/">DRBD
cluster</a> (to replicate automagically the Zarafa
MySQL DB and Attachments on disks to the other node) . Fine, except that
only one physical node was at my disposal : we'll convert the existing
M\$ Exchange server physical box to CentOS/DRBD after the migration. So
what ?</p>
<p>I was thinking about that nice feature in mdadm when you want to create
a Linux software Raid 1 array but with only one available disk ("mdadm
--create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 <strong><em>missing</em></strong>"
for those of you who don't know that nice feature) and add the second
disk later .. That would be cool to do exactly the same with DRBD : one
node active and then add the missing one later .. Don't try to find a
'missing' parameter in the drbd.conf file .. but that's possible (even
if not documented in the <a href="http://www.drbd.org/docs/about/">online
docs)</a>. Do you remember that nice
parameter you use when you initialize your first DRBD resource (drbdadm
-- --overwrite-data-of-peer primary \$resourcename) ? Why not test it
with only one available node ? Yes, it works .. In fact it reminds me
of the name of that parameter in previous DRBD versions (aka "--
--do-what-I-say" ) : that was really a way of instructing DRBD to do
what you wanted it to do.</p>
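<p>To sketch the whole single-node bring-up (the resource name, hostnames, devices and IP addresses below are placeholders, not my actual setup) :</p>

```shell
# drbd.conf already describes a normal two-node resource, e.g. :
#   resource r0 {
#     on node1 { device /dev/drbd0; disk /dev/vg0/lv_drbd; address 192.168.0.1:7788; meta-disk internal; }
#     on node2 { device /dev/drbd0; disk /dev/vg0/lv_drbd; address 192.168.0.2:7788; meta-disk internal; }
#   }
# node2 simply doesn't exist yet.
drbdadm create-md r0    # initialize the DRBD metadata on the only node
drbdadm up r0           # the device comes up WFConnection/Inconsistent
# the same switch used for a normal first-time initialization also works
# with the peer missing, and forces this node Primary/UpToDate :
drbdadm -- --overwrite-data-of-peer primary r0
```

<p>The second node can then be added later and DRBD will simply sync it from this one.</p>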
<p>The only "issue" found so far is that it isn't possible to use the
"drbdadm resize" command online to extend its size (yes, I use the
<a href="http://www.drbd.org/users-guide-emb/s-nested-lvm.html">nested LVM
configuration</a> :
so backend disks / LVM / LV as a DRBD device / LVM / new LV on top of
the drbd device) but I can easily understand why such an operation really
needs a connection to the second real node (which obviously is missing
here).</p>
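<p>For reference, the nested LVM stack described above, from the bottom up (VG/LV names and sizes are placeholders) :</p>

```shell
# bottom layer : physical disk -> LVM -> a LV used as the DRBD backing disk
pvcreate /dev/sda2
vgcreate vg0 /dev/sda2
lvcreate -L 50G -n lv_drbd vg0          # referenced as 'disk' in drbd.conf
# middle layer : the DRBD device (/dev/drbd0) sits on top of that LV
# top layer : a second LVM stack on top of the replicated device
pvcreate /dev/drbd0
vgcreate vg_repl /dev/drbd0
lvcreate -L 20G -n lv_mysql vg_repl     # the LV that actually holds the data
```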
<p>Oh, while I'm talking about DRBD you have to know (if you use it
already) that DRBD 8.3.2 (and the corresponding kABI kmods) are
available in the <a href="http://dev.centos.org/centos">[testing]</a> repo ;-)</p>Suspend issue on Thinkpad R61 / CentOS 5.3 x86_642009-08-09T17:27:00+02:002009-08-09T17:27:00+02:00Fabian Arrotintag:arrfab.net,2009-08-09:/posts/2009/Aug/09/suspend-issue-on-thinkpad-r61-centos-5-3-x86_64/<p>I recently received a new laptop (IBM/Lenovo thinkpad R61) and I
installed it with CentOS 5.3 x86_64. I used (of course) the nvidia
driver from RPMforge (for the nVidia Corporation Quadro NVS 140M card
that the R61 contains) but I had issues when trying to suspend/resume :
In fact it suspended correctly and resumed , but with a black screen
(machine was reachable on the network) so it was a video driver issue.
Here is the patch you need to apply to a hal file for the suspend/resume
operation to work :</p>
<blockquote>
<div class="highlight"><pre>--- /usr/share/hal/fdi/information/10freedesktop/21-video-quirk-pm-el5-nvidia.fdi.orig  2009-07-08 17:04:21.000000000 +0200
+++ /usr/share/hal/fdi/information/10freedesktop/21-video-quirk-pm-el5-nvidia.fdi  2009-08-09 18:18:08.000000000 +0200
@@ -16,6 +16,10 @@
       &lt;remove key="power_management.quirk.vga_mode_3"&gt;&lt;/remove&gt;
       &lt;remove key="power_management.quirk.none"&gt;&lt;/remove&gt;
       &lt;merge key="power_management.quirk.vbe_post" type="bool"&gt;true&lt;/merge&gt;
+
+      &lt;remove key="power_management.quirk.vbe_post"&gt;&lt;/remove&gt;
+      &lt;merge key="power_management.quirk.s3_mode" type="bool"&gt;true&lt;/merge&gt;
+
     &lt;/match&gt;
   &lt;/match&gt;
 &lt;/device&gt;</pre></div>
</blockquote>about the future of the CentOS.org project ..2009-07-30T10:25:00+02:002009-07-30T10:25:00+02:00Fabian Arrotintag:arrfab.net,2009-07-30:/posts/2009/Jul/30/about-the-future-of-the-centos-org-project/<p>As a lot of you have already seen the post (either on the <a href="http://lists.centos.org/pipermail/centos/2009-July/079767.html">CentOS
mailing-list</a>)
or on <a href="http://planet.centos.org">http://planet.centos.org</a> , something is happening within the
CentOS project. Now that it's public, I can add my thoughts too on that
"hot" topic. I started to use CentOS around release 3.4 (in fact
converting a Whitebox linux to CentOS) and was really happy with the
updates and also the contact with the team. What has changed since that
time ? As you've probably read it already, Lance Davis, one of the
CentOS Project founders and the "Leader" at that time, decided to not
participate anymore in the project (absent from the lists/irc/events/etc
..) while still receiving money from the contributors. In fact, and that
is surely the point that hurt the whole CentOS crew the most, he never
said what funds were dedicated to the project and what they were
used/spent on ! As most of the bandwidth/servers/mirrors are
donations/contributions from ISP and other companies, I'll let your
imagination work. Some months ago the whole centos.org domain (so
website/lists/etc ..) was unavailable because the domain was not
renewed (but locked), and the same issue occurred for the CAcert ssl certificate
used for the www.centos.org website.</p>
<p>So I add my opinion to the existing list, even if that will not change
the actual situation. "What will be the future of the CentOS project ?",
you may ask. I don't know , even if all the current members agree to
continue the project, with or without the centos.org domain (and name).
Let's see how it goes ...</p>
<h2>GnuPG Key</h2>
<div class="highlight"><pre><span></span><span class="nv">pub</span> <span class="mi">1024</span><span class="nv">D</span><span class="o">/</span><span class="mi">56</span><span class="nv">BEC54E</span> <span class="mi">2004</span><span class="o">-</span><span class="mi">04</span><span class="o">-</span><span class="mi">10</span> <span class="nv">Fabian</span> <span class="nv">Arrotin</span>
<span class="nv">Key</span> <span class="nv">fingerprint</span> <span class="o">=</span> <span class="mi">7</span><span class="nv">A38</span> <span class="nv">A620</span> <span class="nv">E0B5</span> <span class="mi">0</span><span class="nv">E9F</span> <span class="nv">F919</span> <span class="mi">407</span><span class="nv">B</span> <span class="mi">9</span><span class="nv">D59</span> <span class="mi">07</span><span class="nv">A3</span> <span class="mi">56</span><span class="nv">BE</span> <span class="nv">C54E</span>
<span class="nv">sub</span> <span class="mi">1024</span><span class="nv">g</span><span class="o">/</span><span class="mi">8</span><span class="nv">C0D95C6</span> <span class="mi">2004</span><span class="o">-</span><span class="mi">04</span><span class="o">-</span><span class="mi">10</span>
<span class="o">-----</span><span class="nv">BEGIN</span> <span class="nv">PGP</span> <span class="nv">PUBLIC</span> <span class="nv">KEY</span> <span class="nv">BLOCK</span><span class="o">-----</span>
<span class="nv">Version</span>: <span class="nv">GnuPG</span> <span class="nv">v1</span>.<span class="mi">2</span>.<span class="mi">2</span> <span class="ss">(</span><span class="nv">GNU</span><span class="o">/</span><span class="nv">Linux</span><span class="ss">)</span>
<span class="nv">mQGiBEB3xZERBACcX2H2eRZYQW3hXRYwjiYIoTbSaJZUY6PtI</span><span class="o">+</span><span class="nv">exw4jVCGsQXvAu</span>
<span class="o">/</span><span class="nv">GLEhtL</span><span class="o">/</span><span class="nv">lYQ7rhejLS7jgbGA5f2</span><span class="o">+</span><span class="nv">C16zrCx6z7kRkbPxFwylYUoJZMEOMbFNn4Ms</span>
<span class="nv">hpmHVa069BugiRHlFGkCwUCJsDBlpBOL422DCQnnVJzwSR73XcIZDD1E7wCgnR8</span><span class="o">/</span>
<span class="nv">hB4zu5ExoIpawJ6QUr70rAED</span><span class="o">/</span><span class="nv">jjIQxWJkZQ</span><span class="o">/</span><span class="nv">hPtkfeIbjFCFi5d77GqSiQoLA4WK</span>
<span class="nv">N</span><span class="o">/</span><span class="nv">QGpdqGUNZyCmdIHZaKCSpgYuWhurDURgJ</span><span class="o">+</span><span class="nv">F1XWqdZXmKU8PAb0</span><span class="o">+</span><span class="nv">mw5v8pHhHBX</span>
<span class="nv">ShvgSy7OuFQFjQAwZJC3hWYr6nhi5NOqWAt6yglJaGE0JN7gWMEbMwo9</span><span class="o">/</span><span class="nv">ZoZGKZH</span>
<span class="o">/</span><span class="mi">6</span><span class="nv">yMA</span><span class="o">/</span><span class="mi">4</span><span class="nv">pCe89</span><span class="o">+</span><span class="nv">CAwbxlHQfaXqRj5cLQcapFLBytZmJ9vAHzDrz31rQ9aWG0MzISu</span>
<span class="nv">xbalSjjIORJfISvkcC3</span><span class="o">+</span><span class="nv">K9Vm3iOF1nr4LtqNMFBlmJ3U2kc2QWyJcPC3Ynxv9gwS</span>
<span class="nv">d3z44O8lG2LKeX7nfLhX9gRTZY1ftuZD4o7ZW1Oi1VhBX8sFuLQqRmFiaWFuIEFy</span>
<span class="nv">cm90aW4gPGZhYmlhbi5hcnJvdGluQGFycmZhYi5uZXQ</span><span class="o">+</span><span class="nv">iFsEExECABsFAkB3xZEG</span>
<span class="nv">CwkIBwMCAxUCAwMWAgECHgECF4AACgkQnVkHo1a</span><span class="o">+</span><span class="nv">xU4PxwCcDhrq06RUDmvMj</span><span class="o">+</span><span class="nv">Zn</span>
<span class="nv">D6IEh</span><span class="o">+</span><span class="nv">r7bDMAmQEpLLV7MCcOTrjrGK2ABJ8pUtjDuQENBEB3xZIQBACA9Ra</span><span class="o">+</span><span class="nv">I9Mx</span>
<span class="nv">LdO4XoxLWx0k3gP2TkXsRuvDhuqz67BlyGaMisRrX7</span><span class="o">/</span><span class="nv">Xot1T4KRtqEaoY84IgCMn</span>
<span class="nv">GAsPhzGQsObUEK</span><span class="o">/</span><span class="nv">hO</span><span class="o">+</span><span class="nv">y</span><span class="o">+</span><span class="nv">O8S</span><span class="o">+</span><span class="nv">elncg4JkLTCvx</span><span class="o">+</span><span class="nv">MpQQnFgcBEmfhFYkIDQgLJijvF</span>
<span class="nv">jDvNjqIFot5EEr26ymqwOPSruwsBAIIgtwADBQP8D2biakfMRMqXHWf</span><span class="o">/</span><span class="nv">vZHZ</span><span class="o">/</span><span class="mi">0</span><span class="nv">El</span>
<span class="mi">2</span><span class="nv">ihRhIHPmHUsrn</span><span class="o">+</span><span class="nv">TQy0cUnbQ7ic9Rx</span><span class="o">+</span><span class="nv">bjXlid23lldKxarOMS21gCEPBFKwNPLDR</span>
<span class="nv">KaK3bznzE7WUICTVNB2TgCygr0GG0E6cFL8fl4XPmkRaR</span><span class="o">+</span><span class="nv">EygBW0Qjjxxz5B</span><span class="o">+</span><span class="nv">isY</span>
<span class="nv">g8VcnBnKp7dgs87CgFeIRgQYEQIABgUCQHfFkgAKCRCdWQejVr7FTtPeAJ9pCjaU</span>
<span class="nv">lzhr5D6tPodpib4QcokCxACdFpWL9ZvbhfCZaZWYIunWP0j7ZLM</span><span class="o">=</span>
<span class="o">=</span><span class="nv">Pf1y</span>
<span class="o">-----</span><span class="k">END</span> <span class="nv">PGP</span> <span class="nv">PUBLIC</span> <span class="nv">KEY</span> <span class="nv">BLOCK</span><span class="o">-----</span>
</pre></div>
<p>My key is also available on the server belgium.keyserver.net<br>
You can import it directly with the following command :<br>
gpg --keyserver belgium.keyserver.net --recv-keys 56BEC54E</p>
<h2>SSH Key</h2>
<p>You can find my SSH Public Key <a href="../../keys/id_dsa.pub">Here</a> You can
import my SSH Public Key with the following command :<br>
cat id_dsa.pub >> \~/.ssh/authorized_keys</p>Controlling your OOimpress presentations over bluetooth2009-06-22T15:05:00+02:002009-06-22T15:05:00+02:00Fabian Arrotintag:arrfab.net,2009-06-22:/posts/2009/Jun/22/controlling-your-ooimpress-presentations-over-bluetooth/<p>One other thing I learned from the
<a href="http://blogs.linbit.com/florian">Florian</a>'s talk last week-end is
<a href="http://anyremote.sourceforge.net/">anyRemote</a> . It can be used to
control your Linux laptop (or the application started on your Linux
laptop/desktop) , like for example OpenOffice Impress from your mobile
phone (over IR/bluetooth/WiFi) . Of course that's not the only stuff
that you can use for that : <a href="http://dag.wieers.com/">Dag</a> recently posted
his <a href="http://dag.wieers.com/home-made/wiipresent/">WiiPresent</a> package he
wrote during the last Fosdem (co-authored with
<a href="http://www.ribalba.de/">Didi</a>) but in my case it's difficult to justify
to my kids that 'Daddy has to steal one of your wiimotes because he
wants to use it during an OSS presentation' . Advantage of anyRemote is
that it's compatible with my Nokia mobile phone so I was interested in
testing/using it. It was not available on RPMforge .. until now ! :
I've made a commit to the RPMforge svn yesterday (so expect the packages
to appear in some days, when Dag's buildsystem will process them)</p>
<p>People in the meantime who don't want to wait can 'ping' me for the
locally built RPMs for CentOS 5 ;-)</p>Interested in Heartbeat/Pacemaker newer rpms ?2009-06-20T11:10:00+02:002009-06-20T11:10:00+02:00Fabian Arrotintag:arrfab.net,2009-06-20:/posts/2009/Jun/20/interested-in-heartbeatpacemaker-newer-rpms/<p>While I am/was attending a <a href="http://www.zarafa.com/summercamp2009">Zarafa
summercamp</a> for professional
reasons, I discussed with <a href="http://blogs.linbit.com/florian">Florian
Haas</a> (from Linbit/DRBD) about newer
Heartbeat/Pacemaker packages landing or not in the CentOS Extras
repositories (we didn't talk about DRBD itself which is already provided
in the Extras repository while newer DRBD packages are actually in the
[testing] one). It's true that I myself haven't used/deployed heartbeat
based clusters these last months (RHCS instead ...) so I didn't follow what
happened on the Linux-HA/Pacemaker level. (I was just aware of the fact
that Pacemaker was a replacement for the included Cluster Resource
Manager within heartbeat 2.x). The actual heartbeat packages in the
CentOS Extras repository were being packaged/built by Johnny but I'll
probably have a look with Ralph about what we can do. Florian told me
that he was using the RPMs built by the <a href="http://download.opensuse.org/repositories/server:/ha-clustering/RHEL_5/">Novell/OpenSUSE
buildservice</a>.
While I was following some interesting talks, I had a quick look to see
if their SRPMS could be used 'as-is' and submitted to Mock.
Unfortunately not. I (we ?) will have to do some cleanups/adjustments
within the SPEC file to fit the Mock buildsystem. (OpenSUSE uses
something different of course)</p>
<p>But even if the packages built successfully , some testing will of course
need to be done to see if upgrading from the actual heartbeat 2.1.3
package to 2.99 can be done in a 'smooth' way .. More information to
come in (I hope) the near future now that Florian gave me extra pressure
on my shoulders ;-)</p>
the VMware website for the (recently released) vSphere 4 product. And
then the surprise : they <a href="http://www.vmware.com/resources/compatibility/search.php?action=search&deviceCategory=software&advancedORbasic=advanced&maxDisplayRows=50&key=&productId=1&gos_vmw_product_release[]=13&datePosted=-1&partnerId[]=-1&os_bits=-1&os_use[]=16&os_family[]=-1&os_name[]=CentOS&os_type[]=-1&rorre=0">already list CentOS
5.4</a>,
even though it's not even in 'alpha' stage .. interesting</p>
<p>Is VMware a kind of 'time machine' ? :D</p>"cpio: MD5 sum mismatch" error when submitting a F11 SRPM to Mock2009-06-08T21:22:00+02:002009-06-08T21:22:00+02:00Fabian Arrotintag:arrfab.net,2009-06-08:/posts/2009/Jun/08/cpio-md5-sum-mismatch-error-when-submitting-a-f11-srpm-to-mock/<p>Today I had to migrate a customer CVS repository to Subversion. I looked
for cvs2svn, but I only found it (at least a 'working' version) in
Rawhide. "No problem ! , I'll use my mock wrapper script on my build
system" ... Except that instead of building a nice ready-to-go rpm, I
ended with that error message :</p>
<p><em>warning: /builddir/build/originals/cvs2svn-2.2.0-2.fc11.src.rpm: Header
V3 RSA/SHA256 signature: NOKEY, key ID d22e77f2<br>
cvs2svn
##################################################<br>
error: unpacking of archive failed on file
/builddir/build/SOURCES/cvs2svn-2.2.0.tar.gz;4a2d65a4: cpio: MD5 sum
mismatch<br>
Error installing srpm: cvs2svn-2.2.0-2.fc11.src.rpm<br>
</em></p>
<p>Hmm, that famous problem <a href="http://orcorc.blogspot.com/2009/03/oh-my-goodness.html">Russ reported some time
ago</a> . But
instead of setting up a F11/Rawhide domU somewhere just to extract the
sources/spec from the rawhide srpm , I just decided to modify my wrapper
script around mock on my CentOS 5 builder. Instead of just downloading
the SRPM and directly submit it to mock, I first install it with the
--nomd5 rpm parameter (using rpm2cpio is also an alternative), and then
recreate directly a SRPM with `rpmbuild -bs --nodeps` (with of course
the correct --define ' ' values for my build system) and then submit the
resulting srpm to mock. I'll check later if it's possible to find an
rpmmacro that can be used directly in the mock config file to bypass the
srpm explode/recreate step. More information about that issue is in the
<a href="https://bugzilla.redhat.com/show_bug.cgi?id=490613">Red Hat bugzilla</a>
and also on the <a href="https://fedoraproject.org/wiki/Features/StrongerHashes">Fedora
wiki</a> ...</p>An opensource backend to sync my mobile phone2009-06-05T08:54:00+02:002009-06-05T08:54:00+02:00Fabian Arrotintag:arrfab.net,2009-06-05:/posts/2009/Jun/05/an-opensource-backend-to-sync-my-mobile-phone/<p>While I used for several months the service offered by
<a href="http://www.scheduleworld.com/sw2/index.html">ScheduleWorld</a>, I didn't
like the idea that my calendar was stored elsewhere than on one of my
machines. The fact that ScheduleWorld recently decided to switch to V2
(and no longer provides the service for free) pushed me to
find a solution to sync my calendar between my <a href="http://www.nokia.co.uk/link?cid=PLAIN_TEXT_519105">Nokia
E51</a> and my Linux
laptop/computers. I really appreciate my Nokia mobile phone, but
unfortunately it doesn't support iCal (and I've not found a symbian app
that could do that ..) . The only protocols that the Nokia can 'talk' are
SyncML or 'ActiveSync' (through their '<a href="http://www.businesssoftware.nokia.com/mail_for_exchange_downloads.php">Mail for
Exchange</a>'
free plugin) . That directly limits the scope for the backend. While I
considered <a href="https://www.forge.funambol.org/DomainHome.html">Funambol</a> at
a time (to use SyncML) , I finally ended with
<a href="http://www.zarafa.com">Zarafa</a> (and
<a href="http://z-push.sourceforge.net/soswp/">Z-push</a>) . It's all open source
(in the community edition though) : Zarafa emulates an iCal server (and CalDAV
support is now available in the 6.30 release) and Z-push emulates an
'ActiveSync-over-the-air' server, so I'm now able to directly sync my
calendar/contacts/tasks/mails from my Nokia mobile phone to the server
(using a MySQL backend) and either use the Zarafa webaccess (which I
don't use that much though) or Thunderbird with the <a href="https://addons.mozilla.org/en-US/thunderbird/addon/2313">Lightning
extension</a> .
(every "iCal aware" program works of course)<br>
Note : Z-push isn't yet available in RPM format on the Zarafa
website due to a clause in the GPL license (more information on the
<a href="https://bugzilla.rpmfusion.org/show_bug.cgi?id=585">related RPMfusion bugzilla
page</a>) . Thanks to
<a href="https://fedoraproject.org/wiki/User:Robert">Robert Scheck</a> a spec file
was written but isn't yet available. Robert is interested in seeing his
package land in <a href="http://fedoraproject.org/wiki/EPEL">EPEL</a> and
<a href="http://rpmfusion.org/">RPMfusion</a>, while I'm considering providing it
in <a href="http://rpmforge.net">RPMforge</a>. In the meantime, if you're
interested in the RPM version, feel free to 'poke' me or consult the
spec file in the <a href="https://bugzilla.rpmfusion.org/show_bug.cgi?id=585">RPMfusion
bugzilla</a> .</p>RPM the easy way ?2009-05-30T09:26:00+02:002009-05-30T09:26:00+02:00Fabian Arrotintag:arrfab.net,2009-05-30:/posts/2009/May/30/rpm-the-easy-way/<p>While Karanbir posted an <a href="http://www.karan.org/blog/index.php/2009/05/20/say-what-arch-is-this-package">interesting
rpm</a>
the other day , it reminded me of another commercial app I had to look
at once. The application was provided as an RPM, but it seems that none of
the installed files were declared in the rpmdb .. and here is why :</p>
<p><em>[arrfab@waldorf vmware]$ echo -e "Files present in the RPM package:
\n" ; rpm -qlp VMware-Player-2.5.1-126130.x86_64.rpm ; echo -e "\nand
now the RPM script : \n" ; rpm -qp --scripts
VMware-Player-2.5.1-126130.x86_64.rpm<br>
Files present in the RPM package:</em></p>
<p><em>/var/cache/vmware/VMware-Player-2.5.1-126130.x86_64.bundle</em></p>
<p><em>and now the RPM script :</em></p>
<p><em>preinstall program: /bin/sh<br>
postinstall scriptlet (using /bin/sh):<br>
# Execute bundle installer on install or upgrade after laying down
bundle<br>
# and then delete the bundle afterwards.<br>
# Have to redirect the console to stdin because it's closed by
default.<br>
# Setting VMWARE_SKIP_RPM_UNINSTALL is necessary because we don't
want the<br>
# bundle to run rpm commands, since rpm will deadlock if that
happens.<br>
TERM=dumb VMWARE_SKIP_RPM_UNINSTALL=1
/var/cache/vmware/VMware-Player-2.5.1-126130.x86_64.bundle \<br>
--required --console &lt; /dev/tty<br>
rm -f /var/cache/vmware/VMware-Player-2.5.1-126130.x86_64.bundle<br>
preuninstall scriptlet (using /bin/sh):<br>
# On uninstall only, remove existing bundle installation.<br>
if [ $1 -eq 0 ]; then<br>
if [ -e /usr/bin/vmware-uninstall ]; then<br>
TERM=dumb /usr/bin/vmware-uninstall --console > /dev/null 2>&1<br>
fi<br>
fi<br>
postuninstall program: /bin/sh</em></p>
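<p>If you want to check such a package on your own system, the rpmdb queries are straightforward — a hedged sketch (the <code>/usr/bin/vmplayer</code> path is a guess, adapt it to whatever the bundle actually lays down) :</p>

```shell
# List what the package itself claims to contain ...
rpm -qlp VMware-Player-2.5.1-126130.x86_64.rpm
# ... then ask the rpmdb who owns a file the installer created;
# for a package built like this one, rpm answers "not owned by any package"
rpm -qf /usr/bin/vmplayer
# and verifying the installed package against the rpmdb is equally useless here
rpm -V VMware-Player
```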
<p>Do we really have to comment on that one ?</p>Small thoughts about the upcoming RHEV2009-04-22T10:37:00+02:002009-04-22T10:37:00+02:00Fabian Arrotintag:arrfab.net,2009-04-22:/posts/2009/Apr/22/small-thoughts-about-the-upcoming-rhev/<p>While I attended the Red Hat partner summit, we had a demo of the
upcoming RHEV (for servers and desktops). It was strange that while
VMware announced a beta version of VirtualCenter running on Linux, on
their side Red Hat decided to keep the version written in .Net
(coming from people at <a href="http://www.qumranet.com/">Qumranet</a>, acquired by Red
Hat last year). So you need a Microsoft Windows 2003 machine to manage
your Red Hat Virtualization infrastructure .. are times changing ?</p>
<p>Of course we know that Red Hat is an open-source company and that each
time they acquired a company they properly open-sourced the product
(Directory Server, GFS, etc ...), so we can be sure that the goal is to
provide a Linux version in the future .. But because all the
virtualization companies are now in a race, Red Hat didn't want (again)
to wait several months (even if the RHEV ETA is September). Of course we can
trust Red Hat on that one .. but on the other hand, Red Hat-addicted
people were astonished when we saw a Windows machine with Internet
Explorer. Something nobody swore it would happen some years ago ...</p>Citrix XenServer (still) using CentOS 5.x2009-03-13T09:38:00+01:002009-03-13T09:38:00+01:00Fabian Arrotintag:arrfab.net,2009-03-13:/posts/2009/Mar/13/citrix-xenserver-still-using-centos-5x/<p>While we were busy talking about the Virtualization market in #centos
the other day , someone didn't know that Citrix was now offering their
<a href="http://www.citrix.com/English/ps2/products/feature.asp?contentID=1686939">XenServer
enterprise</a>
for
<a href="http://community.citrix.com/blogs/citrite/simoncr/2009/02/23/Free,+as+in+Virtual+Infrastructure">free</a>
(as in beer, not speech). I guess that it's a kind of answer to the fact
that VMware also offers ESXi for free (since late July 2008). The
console app is almost an exact copy of the screen you get with ESXi (but
I don't know who copied whom). I don't want to compare the two
products or their features, but because I was already busy with CentOS 5.3 QA
tests I thought that it was a good time to download and test it ..
Unfortunately their XenCenter management application is still a MS-only
application that depends on .Net 2.0 (like the VI client for VMware, even if
VMware recently announced that a VI client for Linux would probably be
released and that they have now a demo of VirtualCenter <a href="http://virtwo.blogspot.com/2009/02/scoop-vmware-vcenter-on-centos-5.html">Linux version
running on
CentOS</a>
..)</p>
<p>And guess what Citrix is (<a href="http://www.arrfab.net/blog/?p=66">still</a>)
using for the dom0 ? CentOS ! Okay, not a 'real' CentOS anymore because
some packages (including the kernel of course, but still based on
2.6.18-92.1.10.el5) were replaced, but most of the packages still come
from CentOS (they were not even rebuilt, and the CentOS yum repositories are
still in /etc/yum.repos.d/) .. That reminds me that someone else
confirmed to me that <a href="http://www.oracle.com/technologies/virtualization/index.html">Oracle
VM</a> itself
was based on Unbreakable (and so on CentOS/RHEL).</p>Watching dd progress from one host to the other with pv2009-02-26T14:53:00+01:002009-02-26T14:53:00+01:00Fabian Arrotintag:arrfab.net,2009-02-26:/posts/2009/Feb/26/watching-dd-progress-from-one-host-to-the-other-with-pv/<p>Recently I had to migrate an LVM-based domU from machine 1 to machine 2
with only the ssh port available between the two hosts. Of course dd
comes to the rescue for that, but I admit that having some information
about the transfer rate would be interesting. And then I remembered
Sébastien's <a href="http://www.wains.be/index.php/2008/10/29/tool-of-the-day-pipeview-aka-pv/">blog
post</a>
talking about a nice tool called
<a href="http://www.ivarch.com/programs/pv.shtml">pv</a>. Of course pv has nothing
to do with PV as in Physical Volume for LVM : it's a 'pipe viewer'. A
<a href="http://packages.sw.be/pv/">pv rpm</a> is available in the
<a href="http://rpmforge.net">RPMForge</a> repo. Example (assuming that you've
already created a domU2migrate lv on the target system) :</p>
<div class="highlight"><pre><span></span>[root@machine2 ~]# ssh machine1 "dd if=/dev/VolGroup00/domU2migrate"|pv -s 8G -petr|dd of=/dev/xen02vg/domU2migrate
0:00:30 [11.2MB/s] [====&gt;          ]  4% ETA :10:13
</pre></div>
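<p>If you don't know the LV size by heart, you can ask the source host for it instead of hardcoding <code>-s 8G</code> — a small sketch using the same hostnames and LV paths as above :</p>

```shell
# Get the exact size in bytes of the source LV, then feed it to pv
# so the percentage and ETA are accurate:
SIZE=$(ssh machine1 "blockdev --getsize64 /dev/VolGroup00/domU2migrate")
ssh machine1 "dd if=/dev/VolGroup00/domU2migrate" \
  | pv -s "$SIZE" -petr \
  | dd of=/dev/xen02vg/domU2migrate
```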
<p>I hope you'll find that useful if you never heard of such tool ..</p>CentOS 5.3 QA tests at full steam2009-02-20T13:17:00+01:002009-02-20T13:17:00+01:00Fabian Arrotintag:arrfab.net,2009-02-20:/posts/2009/Feb/20/centos-53-qa-tests-at-full-steam/<p>Thanks to the fact that <a href="http://www.karan.org/blog/index.php/2009/02/18/back-in-town-slowly-getting-back-to-spee">Karanbir is now back in
action</a>,
the QA team is now working on the 5.3 QA tree at full speed. There are
some nice things in 5.3 (you can already look at the <a href="http://www.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html/Release_Notes/index.html">Upstream release
notes</a>).
We've already discovered some missing deps and other new good things. Of
course everything will be reported on the Wiki/in the CentOS 5.3
specific release notes.</p>
<p>One of the things that astonished me is the fact that (even if not
written in the Upstream RN) some drivers seem to have been updated. For
example the sky2 module hadn't supported the Marvell gigabit 88E8056 nic
since 5.1 .. but .. :</p>
<p>[arrfab@waldorf ~]$ modinfo
/lib/modules/2.6.18-92.1.22.el5/kernel/drivers/net/sky2.ko |grep
alias|wc -l<br>
29<br>
[arrfab@waldorf ~]$ modinfo
/lib/modules/2.6.18-128.el5/kernel/drivers/net/sky2.ko |grep alias|wc
-l<br>
30<br>
Interesting, isn't it ? (especially for people having that kind of
entry-level nic in their workstation ..)</p>
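<p>The alias list is also an easy way to check whether a module claims a specific card : grep it for the PCI IDs reported by <code>lspci -nn</code>. A sketch (the 11ab:4364 vendor:device pair is what the 88E8056 usually reports — verify it on your own hardware) :</p>

```shell
# Show the nic's PCI IDs (may print nothing on machines without one):
lspci -nn | grep -i ethernet || true
# Does the sky2 module claim vendor 11ab, device 4364 ?
modinfo -F alias sky2 2>/dev/null | grep -i 'v000011ABd00004364' \
  && echo "sky2 claims this device" \
  || echo "no matching alias found"
```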
<p>Other interesting stuff is the newer scsi-target-utils (aka tgtadm/iSCSI
target) that now includes a config file and two helpers to set up a new
iSCSI lun easily (tgt-setup-lun and tgt-admin) .. of course there are
other new good things, so stay tuned for more information and, of course,
don't forget to read the Release Notes when they're published !</p>CentOS @ Fosdem 2009 report2009-02-09T22:22:00+01:002009-02-09T22:22:00+01:00Fabian Arrotintag:arrfab.net,2009-02-09:/posts/2009/Feb/09/centos-fosdem-2009-report/<p>This Fosdem edition was, as usual, a good one, including for the
CentOS crew present on site. Of course some things could have been better, like
the fact that the WiFi network was unreliable (especially on Saturday,
but fixed after that), or that our booth was not in front of the devroom
(as was the case at past events). Some core CentOS members
were missing, but for a very <a href="http://twitter.com/CentOS/status/1171948359">good
reason</a> though :D</p>
<p>We had some nice discussions with people coming to the booth and/or to
the devroom. I personally had interesting conversations with
<a href="http://www.grep.be/">Wouter</a> Verhelst (about the eID middleware that we
both package for our respective distributions) and <a href="http://www.linkedin.com/in/simosorce">Simo
Sorce</a> (about the integration of
(Free)IPA in the CentOS repositories) .. let's see how it goes in the
future ..</p>
<p>Of course it was a pleasure to discuss with so many people, including
people we're used to seeing each year, like <a href="http://rpm5.org/">Jeff</a></p>
<p>For people interested , pictures are now
<a href="http://www.arrfab.net/pics/view_album.php?set_albumName=Fosdem2009">online</a>
and presentations are (almost) all uploaded on the
<a href="http://wiki.centos.org/Events/Fosdem2009">wiki</a>.</p>
<p>See you next year and thanks to all the people from the Fosdem team who
organized each year such an event !</p>Maxtor external USB disk not Linux friendly ?2009-01-30T16:32:00+01:002009-01-30T16:32:00+01:00Fabian Arrotintag:arrfab.net,2009-01-30:/posts/2009/Jan/30/maxtor-external-usb-disk-not-linux-friendly/<p>I recently decided to add an external disk to my small CentOS 5.2 xen
dom0 home server (already using two 500Gb sata disk in lvm/mdadm raid1).
I attached a <a href="http://www.maxtor.com/en/hard-drive-backup/external-drives/maxtor-onetouch-4.html">Maxtor One-Touch IV
750Gb</a>
USB2 external disk to it and was able to directly re-partition the disk
and format it in ext3 (not adding it to the lvm VG, I'm not 'that' foolish).
Great, I now had an external device to store 'non vital data', aka a local
mirror of the CentOS repositories and other stuff I can even grab from the
Net if needed, while the internal VG is used to store domU's and data
shared through nfs on my lan. And then the problems :</p>
<p>"Jan 29 07:10:31 helium kernel: sd 6:0:0:0: Device not ready: &lt;6&gt;:
Current: sense key: Not Ready<br>
Jan 29 07:10:31 helium kernel: Add. Sense: Logical unit not ready,
initializing command required<br>
Jan 29 07:10:31 helium kernel:<br>
Jan 29 07:10:31 helium kernel: end_request: I/O error, dev sdc, sector
12375<br>
Jan 29 09:42:58 helium kernel: sd 6:0:0:0: Device not ready: &lt;6&gt;:
Current: sense key: Not Ready<br>
Jan 29 09:42:58 helium kernel: Add. Sense: Logical unit not ready,
initializing command required<br>
Jan 29 09:42:58 helium kernel:<br>
Jan 29 09:42:58 helium kernel: end_request: I/O error, dev sdc, sector
220455<br>
Jan 29 09:42:58 helium kernel: Aborting journal on device sdc1.<br>
Jan 29 09:43:31 helium kernel: ext3_abort called.<br>
Jan 29 09:43:31 helium kernel: EXT3-fs error (device sdc1):
ext3_journal_start_sb: Detected aborted journal<br>
Jan 29 09:43:31 helium kernel: Remounting filesystem read-only<br>
Jan 29 09:44:03 helium kernel: ext3_abort called."</p>
<p>Not so cool, right ? By default the Maxtor disk has a standby mode that
spins down the disk (and so fools the kernel), which leads to such
messages. Fortunately you can change that with
<a href="http://www.torque.net/sg/sdparm.html">sdparm</a> (rpm available in the
<a href="http://www.rpmforge.net">RPMforge</a> repo):</p>
<p>"[root@helium ~]# sdparm -a /dev/sdc<br>
/dev/sdc: Maxtor OneTouch 0125<br>
Power condition mode page:<br>
IDLE 0 [cha: n, def: 0, sav: 0]<br>
STANDBY 1 [cha: y, def: 1, sav: 1]<br>
ICT 0 [cha: n, def: 0, sav: 0]<br>
SCT 9000 [cha: y, def:9000, sav:9000]</p>
<p>[root@helium ~]# sdparm --clear=STANDBY /dev/sdc -S</p>
<p>[root@helium ~]# sdparm -a /dev/sdc<br>
/dev/sdc: Maxtor OneTouch 0125<br>
Power condition mode page:<br>
IDLE 0 [cha: n, def: 0, sav: 0]<br>
STANDBY 0 [cha: y, def: 1, sav: 0]<br>
ICT 0 [cha: n, def: 0, sav: 0]<br>
SCT 4294967286 [cha: y, def:9000, sav:4294967286]"<br>
As you can see, you can query and modify the default settings and even
save them ("-S") so that they are still applied after a reboot. Ok,
problem solved so now I can continue to work ...</p>Using 'compiled from source' software on CentOS ?2009-01-02T20:36:00+01:002009-01-02T20:36:00+01:00Fabian Arrotintag:arrfab.net,2009-01-02:/posts/2009/Jan/02/using-compiled-from-source-software-on-centos/<p>Right after Jim posted a link on his blog (appearing on
<a href="http://planet.centos.org">http://planet.centos.org</a> too) regarding <a href="http://www.bofh-hunter.com/2009/01/02/evils-of-source">software installed from
source</a>, we
talked about that a little bit in #centos-social. In fact that's a
common thing we see with people entering the #centos irc channel
looking for advice after they've broken their CentOS installations. Don't
get me wrong : I don't say that 'installing from source' will
automatically 'break' your CentOS setup, but usually people following
such advice don't understand what they are doing, and so have to pick up
the pieces once it's done ...</p>
<p>A lot of tutorials written "for CentOS" on the web in fact completely
deviate from the CentOS philosophy. For example I've seen a lot of
tutorials from <a href="http://www.howtoforge.com">Howtoforge</a> advising to
disable selinux and compile from source. More recently we found a new
website <a href="http://www.securecentos.com">securecentos.com</a> explaining how
to use a vanilla kernel patched with grsec, and to install everything
else from source (or from a third-party rpm provider, like for the MySQL
rpms) . Sorry, but I don't get the point ! Why use CentOS if 1) you
don't care about the provided kernel 2) you don't benefit from all the
security patches that <a href="http://www.redhat.com/security/updates/backporting/?sc_cid=3093">Upstream
backports</a>
to the provided RPMS 3) you don't have a setup that you can easily
upgrade for security reasons (try to explain that to me, because none of
the tutorials I've seen advising to install from source explain how to
maintain the server) .</p>
<p>Of course everybody is free, it's a free world, but then why install
CentOS if the server doesn't look like a CentOS anymore ? I don't have a
clue ... If you're looking for good advice, why not start by reading the
<a href="http://www.centos.org/docs">official documentation</a> or the <a href="http://wiki.centos.org">official
wiki</a> ? Some wiki articles explain how to
install <a href="http://wiki.centos.org/AdditionalResources/Repositories">packages not present in the core
distribution</a>
and the <a href="http://wiki.centos.org/PackageManagement/SourceInstalls">pros/cons of installing from
source</a> .</p>
<p>And what about missing packages ? If none of the third-party
repositories provide the rpm you're searching for, ask them if it's
possible to add it to the list of rpms they're providing .. Even better :
write and submit a spec file that can be used ..</p>
<p>And what if the [base] repo provides a package but you need a specific
option to be turned on at compile time ? Once again you can benefit from
the rpm package management : instead of installing it from source,
<a href="http://wiki.centos.org/HowTos/RebuildSRPM">rebuild the SRPM</a> by changing
the options that need to be turned on or adding a patch that needs to be
applied .. one example is the postfix rpm sitting in the [centosplus]
repo : it's the same as the postfix rpm from [base] except that some
options were enabled (mysql and postgresql support).</p>
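<p>The rebuild flow mentioned above is short enough to sketch (the postfix name mirrors the [centosplus] example ; the build-tree location varies — /usr/src/redhat by default on el4/el5, ~/rpmbuild on newer setups) :</p>

```shell
# 1) unpack the source rpm into the build tree
rpm -ivh postfix-*.src.rpm
# 2) edit the spec: enable the build options (or add the patch) you need
cd ~/rpmbuild/SPECS && vi postfix.spec
# 3) rebuild; the binary rpms land under ~/rpmbuild/RPMS/
rpmbuild -ba postfix.spec
```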
<p>Just my two cents, but I hope that it clarifies the situation a bit ..
Long story short : feel free to do what you want (it's a free world
after all), but if you really want to install from source, why not then
really install *everything* from source and use <a href="http://www.linuxfromscratch.org/">Linux From
Scratch</a> ? ;-)</p>Finch IRC client running remotely and local notification event2008-12-30T23:35:00+01:002008-12-30T23:35:00+01:00Fabian Arrotintag:arrfab.net,2008-12-30:/posts/2008/Dec/30/finch-irc-client-running-remotely-and-local-notification-event/<p>As a lot of people i've always a irc client running in a screen session
on one of my machines, and I attach/detach that screen session from my
laptop through ssh. I know that almost everybody in the same situation
uses <a href="http://www.irssi.org">irssi</a> for that, but for professional
reasons (at least until I get a new job ;-) ) I have to be able to reach
Lotus Sametime too, so finch (the console part of the <a href="http://www.pidgin.im/about/">pidgin/purple
project</a>) was the only one able to
reach both IRC and Sametime (through the meanwhile plugin).</p>
<p>But even when connected and attached to my screen session, it's not
possible to be notified if someone pings me. The idea was thus to simply
parse the finch log files (by default in ~/.purple/logs/irc/*) and
use notify-send (part of the libnotify package).</p>
<p>Of course it still needs some 'love', but it does what I want now :</p>
<p><em>ssh yourname@your.remote.server "tail -n 1 -q -f
~/.purple/logs/irc/youraccount@irc.freenode.net/*/*.txt|grep -i
--line-buffered yournickname"|while read line;do notify-send -i
/usr/share/pixmaps/IRC.png -u normal -t 20000 -- "IRC message"
"${line}";done</em><br>
Now I have a nice pop-up when someone tries to `ping` me or uses my
nickname :D</p>
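<p>To avoid retyping it, the one-liner can live in a small wrapper script — a sketch with placeholder names (server, purple account and nickname) to adapt :</p>

```shell
# Write the wrapper (adapt REMOTE/NICK/LOGS before using it):
cat > irc-notify.sh <<'EOF'
#!/bin/bash
REMOTE="yourname@your.remote.server"
NICK="yournickname"
LOGS='~/.purple/logs/irc/youraccount@irc.freenode.net/*/*.txt'

# tail every conversation log on the remote host, keep the lines
# mentioning us, and turn each of them into a local desktop notification
ssh "$REMOTE" "tail -n 1 -q -f $LOGS | grep -i --line-buffered $NICK" |
while read -r line; do
  notify-send -i /usr/share/pixmaps/IRC.png -u normal -t 20000 \
    -- "IRC message" "$line"
done
EOF
chmod +x irc-notify.sh
```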
<p>I don't know if someone else has had the same need, but at least it can
give some ideas ...</p>
<p><img alt="remote-irc-notification.png" src="http://www.arrfab.net/blog/wp-content/uploads/2008/12/remote-irc-notification.png"></p>Apache accessing nfs mounted dir with selinux enabled on CentOS 4.x2008-12-15T17:39:00+01:002008-12-15T17:39:00+01:00Fabian Arrotintag:arrfab.net,2008-12-15:/posts/2008/Dec/15/apache-accessing-nfs-mounted-dir-with-selinux-enabled-on-centos-4x/<p>I had recently to modify/add some selinux policies on a CentOS 4.7
machine running in the DMZ network. The goal was to mount (through the
firewall between the DMZ and the production network) an NFS dir exported
from a CentOS 5.2 machine in the production lan onto a CentOS 4.7
machine. The second goal was to permit the httpd process on that CentOS
4.7 machine to browse and read files from that NFS dir.</p>
<p>The first goal was met by properly configuring the ports used on the NFS
server (basically you can follow <a href="http://www.bofh-hunter.com/2007/12/07/locking-down-nfs/">Jim's advice on that
point</a>, though you
can easily change the port numbers of course) ; otherwise it's going to be a
nightmare to manage if you don't know in advance which ports need to be
opened in your firewall ;-)</p>
<p>But the *Fun* really began when I tried to access that NFS dir from
Apache/httpd : of course it doesn't work with selinux enabled .. Does
that mean that you have to disable selinux on a machine sitting in the
DMZ and exposed to the wild Internet through the httpd process ? No !</p>
<p>While several folks advised that, don't do it .. On the other hand, it's
true that modifying selinux booleans/policies is easier on CentOS 5.x
than it was on 4.x ...</p>
<p>Thanks to help from other CentOS folks hanging out in #centos-social
(aka <a href="http://lestighaniker.de/">Range</a>, our selinux guy :-p , and
ivazquez), I was able to refresh my memory on selinux policies on CentOS
4.x. audit2allow permits you to scan your denied attempts and so to
create new policies. On CentOS 4.x there are not a lot of selinux
booleans you can modify (`getsebool -a|wc -l` returns 26 on 4.x and
213 on 5.x) and audit2allow doesn't include the -M option on 4.x to
create a new module that can be inserted later on.</p>
<p>So how do you create (and then use) your new policy on 4.x ? Let's use
audit2allow first to see what we need (in our case, letting the httpd process
access the nfs mounted dir and read files) : `audit2allow -l -i
/var/log/messages` returns us a list of interesting stuff.</p>
<p>To create a new rule, you need to install the
selinux-policy-targeted-sources package. Then you need to create a new
file under /etc/selinux/targeted/src/policy/domains/misc (for example
httpnfs.te) and then launch `make load` in
/etc/selinux/targeted/src/policy to load your newly created rules.</p>
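<p>Putting the steps together, the whole 4.x workflow looks like this (to be run as root on the machine showing the denials; the paths come from the selinux-policy-targeted-sources layout described above) :</p>

```shell
# 1. translate the logged AVC denials into candidate allow rules
audit2allow -l -i /var/log/messages
# 2. save the rules you want as a new .te file
vi /etc/selinux/targeted/src/policy/domains/misc/httpnfs.te
# 3. compile and load the updated policy
cd /etc/selinux/targeted/src/policy
make load
```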
<p>For example, my /etc/selinux/targeted/src/policy/domains/misc/httpnfs.te
contains :</p>
<p>allow httpd_t nfs_t:dir { getattr read search };<br>
allow httpd_t nfs_t:file getattr;<br>
allow httpd_t nfs_t:file read;</p>
<p>Voila, that was a 'quick refresh' on selinux on CentOS 4.x .. and I
hope someone will find this useful too :D</p>CentOS and Fosdem 20092008-12-01T14:02:00+01:002008-12-01T14:02:00+01:00Fabian Arrotintag:arrfab.net,2008-12-01:/posts/2008/Dec/01/centos-and-fosdem-2009/<p>Hi folks .. just to confirm that some members of the CentOS crew will be
present for the next <a href="http://www.fosdem.org">Fosdem</a> event in Belgium.
We'll (as usual) have a dedicated booth and share the DevRoom with our
friends of Fedora. If you want to come and talk, feel free to drop by
the booth and/or attend one of the presentations. If you want to
participate (at the booth and/or Devroom) feel free to add your name to
the list on the CentOS Wiki :
<a href="http://wiki.centos.org/Events/Fosdem2009">http://wiki.centos.org/Events/Fosdem2009</a>. More details on that wiki
page in the following weeks.</p>CentOS vs Microsoft ... hmm in a uptime comparison2008-11-20T23:33:00+01:002008-11-20T23:33:00+01:00Fabian Arrotintag:arrfab.net,2008-11-20:/posts/2008/Nov/20/centos-vs-microsoft-hmm-in-a-uptime-comparison/<p>I just discovered a small "homepage uptime benchmark" done by
<a href="http://www.pingdom.com">Pingdom</a>. They compared Corporate Linux and
Community Linux distros homepage uptime versus Apple and Microsoft ..
what are the results ?</p>
<p><img alt="CentOS vs MS" src="/images/centos-ms-uptime.jpg"></p>
<p>More information on <a href="http://royal.pingdom.com/2008/11/19/linux-distros-and-apple-beat-microsofts-homepage-uptime/">their analysis
page</a></p>Spacewalk repository containing rpms signed with another key ...2008-11-13T17:29:00+01:002008-11-13T17:29:00+01:00Fabian Arrotintag:arrfab.net,2008-11-13:/posts/2008/Nov/13/spacewalk-repository-containing-rpms-signed-with-another-key/<p>I was interested in testing <a href="http://www.redhat.com/spacewalk">Spacewalk</a>
on CentOS 5.2 .. in fact it was on my (already too long) TODO list. So
I followed the instructions from the <a href="https://fedorahosted.org/spacewalk/wiki/HowToInstall">Spacewalk
Wiki</a> but it
failed during the yum process : "Public key for
asm-1.5.3-1jpp.ep1.1.el5.2.noarch.rpm is not installed"</p>
<p>Hmm, I imported both the EPEL and Spacewalk rpm signing keys so I had a look
at the key used to sign that package :
"asm-1.5.3-1jpp.ep1.1.el5.2.noarch.rpm: (SHA1) DSA sha1 md5 (GPG) NOT OK
(MISSING KEYS: GPG#37017186)"</p>
<p>Hey, that's the Red Hat security team signing key ! Why was it used to
sign a package in the Spacewalk repo ? I guess it's imported by
default on RHEL5 but here you of course have to import it yourself (and
first verify it) : see the key 37017186 on
<a href="http://www.redhat.com/security/team/key/">http://www.redhat.com/security/team/key/</a></p>
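<p>For reference, checking and importing the key goes something like this (run from the download directory; the key filename is illustrative — and always verify the key fingerprint before importing it) :</p>

```shell
# which key signed the package ? rpm reports the (missing) key id
rpm --checksig asm-1.5.3-1jpp.ep1.1.el5.2.noarch.rpm
# import the (verified !) Red Hat security team key, then re-check
rpm --import RPM-GPG-KEY-redhat-security
rpm --checksig asm-1.5.3-1jpp.ep1.1.el5.2.noarch.rpm
```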
<p>And now the fun begins .. ;-)</p>Running CentOS 5 on a Hetzner dedicated server - part 22008-09-20T09:20:00+02:002008-09-20T09:20:00+02:00Fabian Arrotintag:arrfab.net,2008-09-20:/posts/2008/Sep/20/running-centos-5-on-a-hetzner-dedicated-server-part-2/<p>I <a href="http://www.arrfab.net/blog/?p=60">blogged some time</a> ago about
getting CentOS 5.1 installed on a dedicated server at
<a href="http://www.hetzner.de">Hetzner</a> . Because the r8168 nic was not
recognized, you had to remotely setup the box from another linux distro
and with some preparation (including preparing a driver disk , etc ..)</p>
<p>I still receive questions about that from people not aware that actually
CentOS 5.2 default kernel has the r8169 kmod that works on such chipset
(have a look at the <a href="http://wiki.centos.org/AdditionalResources/HardwareList/RealTekRTL8111b">CentOS wiki page dedicated to that
thread</a>)
. And the other good news is that you don't need to first set up another
small distro on the server prior to running the CentOS setup ... Indeed
Hetzner now has CentOS 5.2 in their supported distro list .. cool</p>
<p>So don't ask me how to do it now : it's now working Out-Of-The-Box [TM]
:D</p>Newer Belgian eID middleware version ! ...2008-09-17T21:36:00+02:002008-09-17T21:36:00+02:00Fabian Arrotintag:arrfab.net,2008-09-17:/posts/2008/Sep/17/newer-belgian-eid-middleware-version/<p>... but not packaged yet ! The reason is simple : if you don't live in
Belgium you're probably not interested in this post, but for people
like me it is. I was looking at the official Belgian federal government
page about the <a href="http://www.eid.belgium.be">Belgian eID</a> middleware and I
saw that a newer version was available. Great. We worked with
<a href="http://dag.wieers.com">Dag</a> to package the previous version (we had to
<a href="http://svn.rpmforge.net/svn/trunk/rpms/eid-belgium/">patch it</a>, but
that's another story) and provide it as an RPM for EL4/EL5 in the
<a href="http://www.rpmforge.net">RPMforge</a> repository. But then the fun begins :
in the previous versions (up to 2.6.0) the Linux version was provided
only as source, which could be a pain to install/set up for the 'lambda'
(average) user but great for packagers/maintainers.</p>
<p>But for a strange (and unexplained) reason they decided to only
deliver binaries now ... Of course I don't mind if the Belgian
government provides binaries, but at least correctly built ones ! A quick
and stupid example : they claim to provide the 'package' for <a href="http://eid.belgium.be/fr/Comment_installer_l_eID/Linux/index.jsp">both
Debian , Fedora 9 and
OpenSUSE 11</a>
but not as .deb nor .rpm ! .. and I'll not talk about the stupid
<em>install.sh</em> that doesn't even care about missing dependencies ... !</p>
<p>I'm not blogging now against the guy who was asked to 'package it' and
provide binaries for the newer version .. but against the people who
decided to *NOT* deliver the sources anymore on the same page ! How can
we now successfully build the newer version and provide a
good/tested/correctly built RPM for CentOS/RHEL 4 and 5 when the sources are
missing ?</p>
<p>Dear mr eid middleware developer, if you decide to provide binaries,
I wouldn't even care whether you package them correctly or not
.. I promise .. but *at least* continue to give the source code so
that people who packaged it for different distributions and in different
formats (including .rpm like we did, or .deb like
<a href="http://www.grep.be">Wouter</a> is doing for
<a href="http://packages.debian.org/search?suite=default&section=all&arch=any&searchon=names&keywords=beid">Debian</a>) can keep doing so.</p>
<p>Dear mr webmaster : at least on the <a href="http://eid.belgium.be/fr/Comment_installer_l_eID/Linux/index.jsp">Linux
page</a>,
don't ask as the last step (step 4) to update Windows ... ;-)</p>
<p>/me is now spamming their
<a href="http://eid.belgium.be/fr/Contact/">servicedesk</a> to have an official
answer ..</p>CentOS 5.2 on the Asus Eee PC 9002008-09-09T12:30:00+02:002008-09-09T12:30:00+02:00Fabian Arrotintag:arrfab.net,2008-09-09:/posts/2008/Sep/09/centos-52-on-the-asus-eee-pc-900/<p>I decided to buy myself a new toy (like a lot of people surfing on the
<a href="http://en.wikipedia.org/wiki/Netbook">netbook</a> 'hype') : the <a href="http://en.wikipedia.org/wiki/ASUS_Eee_PC">Asus Eee
PC 900</a></p>
<p>Getting CentOS installed on the Eee PC needs more work than for a
traditional laptop (there is no cd drive and the CentOS kernel doesn't
have the necessary module for the integrated nic) but it's of course
doable.</p>
<p>I don't plan to detail all the involved steps here but just a link to
the <a href="http://wiki.centos.org/HowTos/Laptops/Asus/Eeepc">page I've created on the CentOS
wiki</a> .. It's still a
draft but at least you know that work is being done for the Eee PC ...
As always, ideas & comments are welcome :D</p>
<p><img alt="CentOS 5.2 on the Eee PC
900" src="http://www.arrfab.net/blog/wp-content/uploads/2008/09/09092008003.jpg"></p>RPMforge PPC packages - Feedback wanted !2008-09-01T22:42:00+02:002008-09-01T22:42:00+02:00Fabian Arrotintag:arrfab.net,2008-09-01:/posts/2008/Sep/01/rpmforge-ppc-packages-feedback-wanted/<p>Some months ago i started to build PPC RPMS packages in the RPMforge
effort. Packages were built against EL4/EL5 but I just wanted to have
some feedback .. if you use them or have tested them, let us know on the <a href="http://lists.rpmforge.net/mailman/listinfo/users">RPMforge
users list</a>.</p>CentOS 5.2 on the iMac - wiki page created2008-09-01T22:38:00+02:002008-09-01T22:38:00+02:00Fabian Arrotintag:arrfab.net,2008-09-01:/posts/2008/Sep/01/centos-52-on-the-imac-wiki-page-created/<p>I <a href="http://www.arrfab.net/blog/?p=83">recently told</a> everybody that I
planned on writing a page on the CentOS wiki about the features
supported/not supported on the Intel-based iMac .. I've just created a
<a href="http://wiki.centos.org/HowTos/Mactel">draft page</a>, but if someone is
interested, feel free to add comments/experiences .. I'll update the
wiki page as often as time permits, for example when I have the RPM
ready to go for the
<a href="http://bersace03.free.fr/ift/">isight-firmware-tools</a> package that is
needed to load the firmware in the iSight webcam ... (the goal is of
course to provide the rpm through RPMforge).</p>Extending a xvd virtual disk for a DomU machine on-the-fly ?2008-08-21T06:47:00+02:002008-08-21T06:47:00+02:00Fabian Arrotintag:arrfab.net,2008-08-21:/posts/2008/Aug/21/extending-a-xvd-virtual-disk-for-a-domu-machine-on-the-fly/<p>Recently i had to extend the space in one of my Xen DomU paravirt guest.
I usually create a LV on the Dom0 that is presented to the DomU as a
block device. Of course you can extend on-the-fly the LV on Dom0 but how
can you tell to the DomU that the underlying block device was
modified/extended ? Hmmm .. okay, people will point me to the fact that
it's possible to just create a new LV on Dom0 and attach it live (aka
block-attach) to the DomU but that was not my question ... Or they
can tell me that shutting down the DomU and `xm create` the DomU again
will work (and yes, it works of course) but that was not the goal ...<br>
On a real system (meaning non-virtualized) you can just rescan the scsi
bus/adapter (or Fiber Channel if on a San storage through a LIP command)
with just `echo '- - -' > /sys/class/scsi_host/host0/scan `
(assuming that host0 is the adapter that has the device you
modified/added/extended/whatever ...) . So i expected to see the same
behaviour in DomU .. but of course, block devices being not emulated
because of the awareness of the DomU kernel, such command is invalid (no
scsi_host even exists) .. Shocking !</p>
<p>Google pointed me to the answer (which didn't satisfy me) on the
<a href="http://lists.xensource.com/archives/html/xen-users/2008-04/msg00246.html">Xen-users
list</a>
(read the full thread) . So it seems not possible (anymore ?). Ouch ...
Okay, back to the alternative : instead of extending the LV on the Dom0
with lvextend, create a new LV and block-attach it to the DomU, where you
use LVM too to extend your VG/LV with a newly initialized PV ...</p>
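<p>Spelled out as commands, that alternative looks like this (a sketch only — the VG/LV names, sizes and the xvdb device name are made up, adapt them to your setup) :</p>

```shell
## on the Dom0 : create a new LV and attach it live to the running guest
lvcreate -L 10G -n vm1-data vg_dom0
xm block-attach vm1 phy:/dev/vg_dom0/vm1-data xvdb w

## inside the DomU : turn the new disk into a PV and grow the VG/LV
pvcreate /dev/xvdb
vgextend vg_domU /dev/xvdb
lvextend -L +10G /dev/vg_domU/lv_data
resize2fs /dev/vg_domU/lv_data
```

<p>The resize2fs step grows the ext3 filesystem afterwards; on CentOS 5 that can be done online on a mounted filesystem.</p>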
<p>Dear lazyweb, if you find me something that claims that it's possible,
let me know ... ;-)</p>NetworkManager and ipw3945 issue2008-08-19T19:10:00+02:002008-08-19T19:10:00+02:00Fabian Arrotintag:arrfab.net,2008-08-19:/posts/2008/Aug/19/networkmanager-and-ipw3945-issue/<p>I replaced recently my WiFi access-point at home and because the new AP
(a <a href="http://www.linksys.com/servlet/Satellite?c=L_Product_C2&childpagename=US%2FLayout&cid=1175239516849&pagename=Linksys%2FCommon%2FVisitorWrapper&lid=1684939789B01">Linksys
WRT160n</a>)
supports WPA/WPA2, I tried to connect with WPA2 .. I had some strange
messages (in a loop) from NetworkManager when trying to connect to the AP
:</p>
<p>*NetworkManager: <information> Activation (eth1) Stage 2 of 5 (Device
Configure) complete.<br>
NetworkManager: <information> Activation (eth1/wireless): disconnected
during association, asking for new key.<br>
NetworkManager: <information> Activation (eth1) New wireless user key
requested for network '\$wlan-name'.<br>
NetworkManager: <information> Activation (eth1) New wireless user key
for network '\$wlan-name' received. *</p>
<p>I was sure that the PSK was correct because I was able to connect with
both my Eee PC and my Nokia E51 mobile phone.</p>
<p>Querying the great oracle (translate to `<em>using google</em>`) told me that
a *lot* of people have the same issue with the ipw3945 wireless nic
(independently of the linux distro : CentOS, Fedora, OpenSUSE, Ubuntu
.....), but upgrading to a more recent wpa_supplicant package (not
available in the CentOS repositories !) solved it for me.</p>
<p>Attention : The wpa_supplicant package available on RHEL/CentOS 5.2 is
0.4.8-10.2.el5 while <a href="http://atrpms.net">Axel</a> built version
0.5.8-16.el5 in his <a href="http://dl.atrpms.net/el5-i386/atrpms/testing/">el5-testing
repo</a> .</p>
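<p>By the way, the 'keep it disabled by default' trick is just a matter of the repo file — something like this (the repo id, baseurl and priority below are illustrative placeholders, not the real ATrpms values) :</p>

```
# /etc/yum.repos.d/el5-testing.repo -- illustrative stanza
[el5-testing]
name=Third-party el5 testing packages
baseurl=http://dl.atrpms.net/el5-i386/atrpms/testing/
enabled=0
gpgcheck=1
priority=10
```

<p>Then a one-shot `yum --enablerepo=el5-testing update wpa_supplicant` pulls just that package without leaving the repo active.</p>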
<p>As usual, read carefully instructions present on the CentOS wiki about
the <a href="http://wiki.centos.org/PackageManagement/Yum/Priorities">yum-plugin-priorities
configuration</a>
or do like me : disable all third-party repositories and enable them
only when wanted/needed ;-)</p>Tools to sync a RPM repository in your LAN2008-08-19T13:08:00+02:002008-08-19T13:08:00+02:00Fabian Arrotintag:arrfab.net,2008-08-19:/posts/2008/Aug/19/tools-to-sync-a-repo-in-your-lan/<p>Due to <a href="http://dag.wieers.com/blog/mrepo-now-with-fuseiso-and-unionfs-support-085-ready-soon">Dag's last blog
post</a>
about his latest update to
<a href="http://dag.wieers.com/home-made/mrepo/">mrepo</a>, we had several folks in
the #centos and #centos-social irc channels asking how to configure it
to just synchronize repositories on a server in their local network.</p>
<p>First of all you have to understand that mrepo isn't only a repo
synchronisation tool : it can help you to create deployment servers etc
(see the <a href="http://dag.wieers.com/home-made/mrepo/">mrepo features list</a>
)..</p>
<p>If you only need to sync repositories, you have other alternatives :</p>
<ul>
<li>
<p>rsync (if your remote mirror supports rsync of course .. for CentOS
mirrors that support rsync, see the <a href="http://www.centos.org/modules/tinycontent/index.php?id=13">centos.org mirrors list
webpage</a>)</p>
</li>
<li>
<p>if rsync is not available, use
<a href="http://wiki.linux.duke.edu/YumUtils">reposync</a> from the yum-utils
package (also available on RHEL, where it works to mirror RHN internally
thanks to the rhn-plugin shipped with the RHEL yum version)</p>
</li>
</ul>
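<p>Typical invocations for both alternatives would look like this (the mirror URL, repo id and local paths are made up for the example) :</p>

```shell
# 1. rsync, against a mirror that offers the rsync protocol
rsync -avSHP --delete rsync://mirror.example.org/centos/5.2/os/i386/ \
      /srv/mirror/centos/5.2/os/i386/

# 2. reposync, using any repo id known to yum, then rebuild the metadata
reposync --repoid=updates --download_path=/srv/mirror/centos/
createrepo /srv/mirror/centos/updates/
```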
<p>Just a reminder for people who forget that such tools (especially
reposync) exist .. ;-)</p>CentOS 5.2 on the Apple iMac2008-08-05T08:48:00+02:002008-08-05T08:48:00+02:00Fabian Arrotintag:arrfab.net,2008-08-05:/posts/2008/Aug/05/centos-52-on-the-apple-imac/<p>I've always heard that a picture tells more than long sentences .. ;-)</p>
<p><img alt="centos-imac.jpg" src="http://www.arrfab.net/blog/wp-content/uploads/2008/08/centos-imac.jpg"></p>
<p>For various reasons (including the fact that I like the iMac design and
that as a musician I have recording hardware that is only
recognized/usable with Mac OS X), I decided to buy myself a shining new
Apple iMac 24". But of course Linux remains my OS of choice ..</p>
<p>So I decided to use it in dual-boot mode and of course I installed
CentOS (no need to explain why, I think ... ;-) ) . I decided to use
<a href="http://refit.sf.net">rEfit</a> as the efi boot menu (better than the
included bootcamp, because there you have to press a key at boot to start
an alternative OS, while rEfit always displays a boot menu and boots a
(configurable) default OS)</p>
<p>I'll of course write a page on the <a href="http://wiki.centos.org">CentOS wiki</a>
explaining in detail what has been tested, what works and what doesn't
... One little note about the setup : I *always* set up linux through
the network (with or without kickstart) so I tested the netinstall
boot.iso on the mac. I had to play with some options : for example
anaconda always tries to mount the cdrom before asking you which
method you want to use (you can of course use the 'method=' option to
override this behaviour, though).</p>
<p>But I noticed that it was really slow to 'inspect' the cd .. using the
option hda=ide-scsi helped me for the setup (I installed from my local
nfs repo)</p>
<p>So the full line I used (you can of course specify more parameters) was :
"linux vnc hda=ide-scsi"</p>
<p>More information to come on the <a href="http://wiki.centos.org">CentOS wiki</a>
...</p>CentOS 4.x machine not rebooting and faced with a grub prompt2008-06-25T14:44:00+02:002008-06-25T14:44:00+02:00Fabian Arrotintag:arrfab.net,2008-06-25:/posts/2008/Jun/25/centos-4x-machine-not-rebooting-and-faced-with-a-grub-prompt/<p>One of my customers phoned me to say that a CentOS 4.x machine (acting
as a apache reverse proxy) didn't reboot after a power outage. The
machine had two sata disks configured in raid 1 (through md/software
raid) but instead of booting, the machine was just displaying a grub>
prompt.</p>
<p>Of course I tried the traditional `grub-install --recheck /dev/sda`
and `grub-install --recheck /dev/sdb` and also the manual procedure
(already described <a href="http://www.arrfab.net/blog/?p=11">here</a>) to install
grub on both devices .. but no luck .. still stuck at the grub>
prompt.</p>
<p>But then I looked (in rescue mode) at the (/mnt/sysimage)/etc/grub.conf
and counted 22 kernel entries in the file .. The customer had
configured the nightly automatic yum update but never cleaned the old
kernels (both up and smp) ... so I "cleaned up" the grub.conf file, once
again installed grub with grub-install and .... the machine rebooted
normally ..</p>
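<p>To give an idea of the cleanup, here is the same situation on a miniature grub.conf (the file content is a made-up three-entry example, not the customer's real 22-entry config) :</p>

```shell
# Build a miniature grub.conf with 3 kernel stanzas (made-up content)
cat > grub.conf <<'EOF'
default=0
timeout=5
title CentOS (2.6.9-78.0.1.EL)
        kernel /vmlinuz-2.6.9-78.0.1.EL ro root=/dev/md0
title CentOS (2.6.9-78.EL)
        kernel /vmlinuz-2.6.9-78.EL ro root=/dev/md0
title CentOS (2.6.9-67.EL)
        kernel /vmlinuz-2.6.9-67.EL ro root=/dev/md0
EOF
grep -c '^title' grub.conf                  # how many boot entries ?
# keep the header lines plus only the first two title stanzas
awk '/^title/ {n++} n < 3' grub.conf > grub.conf.trimmed
grep -c '^title' grub.conf.trimmed
```

<p>The awk filter simply stops copying once the third title stanza starts, so the header lines and the two most recent kernels survive.</p>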
<p>I would never have thought that too many entries in the grub.conf file
could block the machine from booting ... Maybe that will save other people
time</p>Entering CentOS 5.2 QA mode ...2008-06-06T22:21:00+02:002008-06-06T22:21:00+02:00Fabian Arrotintag:arrfab.net,2008-06-06:/posts/2008/Jun/06/entering-centos-52-qa-mode/<p>Yes, it started .. the CentOS QA-Team entered the 5.2 QA era .. meaning
that we have to test a bunch of existing features and newer ones
included in 5.2. For example, in the <a href="https://www.redhat.com/archives/rhelv5-announce/2008-May/msg00002.html">upstream announce
mail</a>
I saw that the newer libvirt supports remote connections. So I
decided to give it a try just after I updated my CentOS 5.1 x86_64 dom0
to 5.2QA (and my domU i386 and x86_64) ... but when I tried to connect,
I received a 'connection reset by peer' (I tested with only ssh and not
tls/certs) ... so I decided to read a little bit on the libvirt.org
website and found which parameters should have been configured in the
/etc/libvirt/libvirtd.conf (full list available
<a href="http://libvirt.org/remote.html#Remote_libvirtd_configuration">here</a>) .
The only 'problem' so far is that the /etc/libvirt/libvirtd.conf is not
provided by libvirt itself and doesn't exist ! .. Strange because it's
referenced in the /etc/sysconfig/libvirtd (that you have to modify too)
file .. So it seems you have to create it yourself, and then I was able
to connect remotely (I tested only with ssh .. and important : don't
forget that you need ssh key-based auth for this ...)</p>
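<p>Client-side, the connection line for the ssh transport then looks like this (the hostname is a placeholder) :</p>

```shell
# list the domains on the remote Xen dom0, tunnelled over ssh
virsh --connect xen+ssh://root@dom0.example.com list
```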
<p>More information about the QA tests later (and by other people/QA
testers too ... ;-) )</p>IBM Director 5.20.2 agent setup on CentOS/EL2008-05-02T09:13:00+02:002008-05-02T09:13:00+02:00Fabian Arrotintag:arrfab.net,2008-05-02:/posts/2008/May/02/ibm-director-agent-setup-on-centosel/<p>I'm used to deploy IBM Director server/agents on IBM hardware to monitor
hardware/services .. and surely due to the fact that I work for an IBM
business partner and that I give the IBM director course for IBM myself
... ;-)</p>
<p>But there is something really annoying : each time you receive an IBM
director cd/iso image (like the 5.20.2 that you can download from the
IBM support website), it should normally contain the Linux level 2
agent for each of the supported Linux distributions (aka RHEL 3,4,5 ,
SLES 9,10 and Vmware esx). You can even integrate such an agent in the
director console to push it to remote machines (in fact it will do it
through ssh ... so be careful if you tuned sshd to accept only specific
user/key-based auth ...)</p>
<p>But the last time I had to deploy it on CentOS machines (usually a simple
change in the /etc/redhat-release file is enough ;-) ) I did it from the
director console ... The task was marked as successful but nothing was
installed .. (how the hell could director answer me that it was
successful if it was not the case ?) . Okay, let's do it manually then
... but then I saw that the level2 agent located on the CD
(director/agent/linux/i386/FILES/dir5.20.2_agent_linux.sh -x)
contained only the RHEL3 and SLES10 RPMS inside ! WTF ?</p>
<p>You can download the full Director Linux agent 2 package on the <a href="https://www-304.ibm.com/systems/support/supportsite.wss/mainselect?familyind=5347902&osind=0&continue.x=20&continue.y=16&brandind=5000016&oldbrand=5000016&oldfamily=5347902&oldtype=0&taskind=2&psid=bm">IBM
website</a> and
that one will contain all the required RPMS ...</p>Red Hat EMEA Partner summit event - part 22008-04-06T08:49:00+02:002008-04-06T08:49:00+02:00Fabian Arrotintag:arrfab.net,2008-04-06:/posts/2008/Apr/06/red-hat-emea-partner-summit-event-part-2/<p>Red Hat partner summit is over and i really enjoyed it for both the
technical labs/presentations and the nice discussions I had with Red Hat
employees (for example I really appreciated Boris Devouge's talks). One
thing that was announced is the upcoming release of Paravirt drivers for
Windows DomU. (probably they will be released somewhere between 5.2 and
5.3). I've seen them in action during a lab organized by Olivier
Reneault and it's funny to see that Windows device manager reports them
as 'RHEL scsi driver disk' and 'RHEL PV nic driver'. It seems the goal
(as usual with Red Hat, in opposite with what Novell is always doing
regarding this ...) is to release them under the GPL. In fact, my
discussion with Olivier learned me that they were/are developed in
collaboration with Hitachi.</p>
<p>Another thing I learned is that PV drivers/modules for EL3 are on the
way too (you'll never get a xen kernel for el3 because of its 2.4
kernel ...), so you'll get better performance there as well.</p>
<p>During some presentations and labs it was also mentioned that the
RHN/Satellite technology will be released as open source/GPL, but the
main blocker is that both products currently use Oracle as a backend,
which also explains their prices. I explained to them that what I do for
customers who want to save bandwidth without having to pay for Satellite
is to use reposync (from the yum-utils package) to mirror the RHN
channels on a local machine .. and I was astonished that some RH tech
people didn't know it was included in the base EL5 ...</p>
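<p>For the record, a sketch of that reposync trick (the repo id and paths are examples, not taken from any real deployment; list the real ids with `yum repolist`) :</p>

```shell
# Save the mirror job as a script so it can later run from cron; nothing
# below talks to RHN yet -- running the generated script on a registered
# box does the actual download.
mkdir -p /tmp/demo
cat > /tmp/demo/mirror-rhn.sh <<'EOF'
#!/bin/sh
# reposync comes from the yum-utils package
reposync --repoid=rhel-x86_64-server-5 --download_path=/srv/mirror --newest-only
# rebuild the repo metadata so clients can point a .repo file at the tree
createrepo /srv/mirror/rhel-x86_64-server-5
EOF
chmod +x /tmp/demo/mirror-rhn.sh
```

<p>Clients then only need a plain .repo file with a baseurl pointing at the local tree.</p>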
<p>Last but not least, the Partner portal changed a bit several weeks
ago and I decided to update my profile. When you do, you're asked
several questions, including 'Which products do you actually support ?',
and in the list, below RHEL, SLES and MS Windows, I saw CentOS
... ;-)</p>Red Hat EMEA Partner summit event - part 12008-04-02T16:00:00+02:002008-04-02T16:00:00+02:00Fabian Arrotintag:arrfab.net,2008-04-02:/posts/2008/Apr/02/red-hat-emea-partner-summit-event-part-1/<p>I currently have the chance to attend the <a href="http://www.europe.redhat.com/mktg/partnersummit/">Red Hat Emea partner
summit</a> event in
Malaga (Spain) and I had the opportunity to listen to Jim Whitehurst,
the new Red Hat CEO .. he's really pleasant to listen to.</p>
<p>We (Dag Wieers and myself) had the opportunity to talk to Scott
Creenshaw, the Red Hat vice president, about CentOS .. but I'll probably
come back to that later ... One thing he announced during his
presentation was <a href="http://www.ovirt.org">Ovirt.org</a>, an
http-based Virtual Machine management system. It was produced by the
<a href="http://et.redhat.com/page/Main_Page">Red Hat emerging technologies</a>
group, so basically by the same people who brought koan and cobbler to
life. I'm now interested in testing it to see how it competes against
other http-based systems like openqrm .. although, on the other hand,
openqrm is not limited to vm deployment and provisioning ...</p>Naissance de fr.centos.org2008-03-15T16:38:00+01:002008-03-15T16:38:00+01:00Fabian Arrotintag:arrfab.net,2008-03-15:/posts/2008/Mar/15/naissance-de-frcentosorg/<p>(For non-native French speakers : this will be my only announcement here in
another language than English ;-) )</p>
<p>The CentOS project is happy to announce the birth of the site
http://fr.centos.org .<br>
In response to the growing demand from the French-speaking community of
CentOS users, the fr.centos.org forum has seen the light of day.<br>
We take advantage of this announcement to renew the call for volunteers
to translate the existing wiki (http://wiki.centos.org) ;-)<br>
To do so, first subscribe to the centos-docs mailing list (on
http://lists.centos.org) and create a login on the wiki.<br>
Then ask for permission to edit the pages below
http://wiki.centos.org/fr ...</p>
<p>We would especially like to thank Guillaume Kulawoski, who is behind
the idea and the setup of the forum, as well as Thierry Delmonte for the
graphic design.</p>
<p>See you soon on fr.centos.org !</p>Vmware server guest VMs on top of ocfs22008-03-01T08:37:00+01:002008-03-01T08:37:00+01:00Fabian Arrotintag:arrfab.net,2008-03-01:/posts/2008/Mar/01/vmware-server-guest-vms-on-top-of-ocfs2/<p>While I was testing <a href="http://oss.oracle.com/projects/ocfs2/">ocfs2</a> on
CentOS 5.1, a colleague of mine asked me whether it was possible to run
VMware server on top of ocfs2, to test moving a guest from one node to
the other. Of course my first reaction was that vmware-server can't do
live migration like esx/vmware infrastructure can .. but because the
machines were ready and it's fast to set up, we did the test.</p>
<p>The first vm refused to start on top of ocfs2, while the same vm
started fine on local storage. Google pointed me to the correct answer
in 3 seconds : you need to include a special parameter in the vmx
(vmware guest config file) to have it working on top of ocfs2. The line
to add is mainMem.useNamedFile = "FALSE". You can find more information
on the <a href="http://communities.vmware.com/message/874435">Vmware
forum</a> regarding this.</p>
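<p>For reference, a sketch of applying that line (the .vmx path is a made-up stand-in, the parameter is usually spelled mainMem.useNamedFile, and the guest must be powered off while you edit its config) :</p>

```shell
# Append the workaround to the guest's config file, then verify it;
# /tmp/demo/guest.vmx is a placeholder path for this sketch.
mkdir -p /tmp/demo
cat >> /tmp/demo/guest.vmx <<'EOF'
mainMem.useNamedFile = "FALSE"
EOF
grep mainMem /tmp/demo/guest.vmx
```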
<p>We then were able to quickly move a VM between the two physical
machines (by suspending it on node1 and resuming it directly on node2).
Of course that's not live migration, but it's very close ... and my
colleague was happy ;-)</p>scsi-target-utils/iscsi tgtadm not production ready on el5.1 ?2008-02-29T09:43:00+01:002008-02-29T09:43:00+01:00Fabian Arrotintag:arrfab.net,2008-02-29:/posts/2008/Feb/29/scsi-target-utilsiscsi-tgtadm-not-production-ready-on-el51/<p>When CentOS 5.1 was announced, the <a href="http://www.centos.org/docs/5/html/release-notes/as-x86/RELEASE-NOTES-U1-x86-en.html">upstream release
notes</a>
contained some notes about new features being integrated in 5.1, like
the iscsi-target functionality. Of course they were announced in the
"Technology Previews" section, meaning that it's not fully supported and
not considered production ready. But most of the time, packages 'just
work' [TM].</p>
<p>Is this the case for the package
scsi-target-utils-0.0-0.20070620snap.el5 ? hmmm .... On my (already too
long) TODO list, I planned to test
<a href="http://oss.oracle.com/projects/ocfs2/">Ocfs2</a> on top of a
shared device, and because of a lack of Fibre Channel HBAs in my lab,
the only solution was to play with an iscsi target/iscsi initiator setup
(3 machines : 1 as the iscsi target and the 2 others as initiator/ocfs2
machines). I had already tested the standard <a href="http://iscsitarget.sourceforge.net/">IET iscsi
target</a> daemon in the past and I
was expecting to find almost the same behavior .. but it's not the case.</p>
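<p>The difference is mostly about persistence, and one way around it is to capture the target definition in a script you can replay at boot; a sketch (the iqn and backing device below are made-up examples) :</p>

```shell
# Save the tgtadm calls as a script instead of retyping them after every
# reboot; the target name and /dev/sdb are illustrative only.
mkdir -p /tmp/demo
cat > /tmp/demo/iscsi-target.sh <<'EOF'
#!/bin/sh
# create target id 1, attach a LUN backed by a block device, allow all initiators
tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2008-02.net.example:storage.disk1
tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sdb
tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
EOF
chmod +x /tmp/demo/iscsi-target.sh
```

<p>That script can then be called from a small custom initscript, which is exactly the workaround discussed below.</p>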
<p>In fact, there is *NO* configuration file included with tgtadm, so
you have to type all your tgtadm commands to create the iscsi target
LUNs and share them. The tgtadm tool isn't a big deal and it's even good
for adding new targets on the fly ... but because of the lack of config
files, you can't save your actual config and hope to restore it at the
next boot ... So you'd better save your tgtadm commands in a script and
call that bash script from within a new initscript ... I now understand
why the release notes consider that it's not *production ready* yet ...
so let's see what will be included/modified in 5.3 ...</p>Fosdem 2008 review - part 22008-02-25T12:14:00+01:002008-02-25T12:14:00+01:00Fabian Arrotintag:arrfab.net,2008-02-25:/posts/2008/Feb/25/fosdem-2008-review-part-2/<p>Fosdem is over .. and it was a good edition. From a CentOS perspective,
I'd say the booth was better than in 2007 and, because we were not at
the same place, I think more people had a look at it ... I was at the
booth and/or in the devroom for all the CentOS talks (I've posted some
pictures), but I at least had the opportunity to follow some other
talks :</p>
<ul>
<li><a href="http://www.grep.be/blog/">Wouter Verhelst</a>'s talk about the Belgian eid
on Debian (which of course reminds me of my 2007 talk ...) ;-)</li>
<li><a href="http://www.fosdem.org/2008/schedule/events/fedora_lvm2">Alasdair
Kergon</a>'s
talk about new features in LVM2</li>
<li><a href="http://www.jekkt.com/">Jens Kühnel</a>'s talk about SELinux (Jens is
really a good guy to talk to ... always a pleasure to see/talk with
him)</li>
</ul>
<p>Let's go back to normal life ... and try to prepare for next year .. ;-)</p>Fosdem 2008 review - part 12008-02-24T12:42:00+01:002008-02-24T12:42:00+01:00Fabian Arrotintag:arrfab.net,2008-02-24:/posts/2008/Feb/24/fosdem-2008-review-part-1/<p>What can I say ? hmm ... I've posted some pics online
<a href="http://www.arrfab.net/pics/view_album.php?set_albumName=Fosdem2008">here</a>
so that you can already feel the Fosdem mood ...</p>
<p>I gave the 'Introduction to CentOS' talk yesterday and it's scheduled
again today ... (30 minutes to go ....)</p>
<p>I can already say that I enjoy the Fosdem event .. cool organizers, a
nice place to be (but am I objective, being a Belgian guy myself ? ;-) )
and of course it's always good to see the people you're chatting with or
sending mail to .. but in real life ...</p>
<p>We share the devroom with Fedora and our booth is next to theirs ...
and, to <a href="http://fedoraproject.org/wiki/FedoraEvents/FOSDEM/FOSDEM2008">quote what they say on their
wiki</a>
about us, "they are nice people" ..</p>
<p>More information/pictures to come ..</p>Fosdem 20082008-02-19T21:14:00+01:002008-02-19T21:14:00+01:00Fabian Arrotintag:arrfab.net,2008-02-19:/posts/2008/Feb/19/fosdem-2008/<p>It's with great pleasure (again ..) that I'll be at the <a href="http://www.fosdem.org">Fosdem
2008</a> event with folks from the CentOS project ..
We'll have a <a href="http://www.fosdem.org/2008/schedule/devroom/centosfedora">booth and a
devroom</a> that
we'll share with <a href="http://fedoraproject.org/">Fedora</a> (like last year), so
if you want to tell us what you think of CentOS and what can be
improved (there is an open debate scheduled in the devroom), or just to
say 'hello' and chat with some other CentOS community members, feel free
to come ...</p>
<p>And because it seems mandatory this year, here is the famous logo
that I forgot to put on my blog<br>
<a href="http://www.fosdem.org"><img alt="I’m going to FOSDEM, the Free and Open Source Software Developers’
European
Meeting" src="http://www.fosdem.org/promo/going-to"></a></p>Citrix XenServer using CentOS 52008-02-12T21:55:00+01:002008-02-12T21:55:00+01:00Fabian Arrotintag:arrfab.net,2008-02-12:/posts/2008/Feb/12/citrix-xenserver-using-centos-5/<p>Because the company i'm working for is a Citrix partner and that Citrix
wants us to become Xen partner (since Citrix acquired XenSource several
months ago), i decided to download and test their
<a href="http://www.citrix.com/English/ps2/products/product.asp?contentID=683148">XenServer</a>
on a IBM HS21 quad-core. Funny that to administer Xen graphically you
need to use their .Net application on Windows .. and no Linux console
available ! (while i remember that on previous version a java and so
cross-platform version was available).</p>
<p>But the thing that surprized me more is that they are using CentOS
packages for a lot of RPMS on the Dom0 ! In fact , i wanted directly to
ssh in the xen box to see what was inside and `uname -a` answered me :
Linux xen 2.6.18-8.1.8.el5.xs4.0.1.125.163xen #1 SMP Mon Aug 13
09:27:46 EDT 2007 i686 i686 i386 GNU/Linux ... That sounded very
familiar to me ... so from where does the rest of the system come ? :
rpm -qai|grep "Vendor: CentOS"|wc -l : 185 (out of 220 packages) ..</p>
<p>In fact, even the standard CentOS-Base.repo and CentOS-Media.repo are
still in /etc/yum.repos.d/ (but they are modified to exclude kernel and
xen packages …</p><p>Because the company i'm working for is a Citrix partner and that Citrix
wants us to become a Xen partner (since Citrix acquired XenSource
several months ago), I decided to download and test their
<a href="http://www.citrix.com/English/ps2/products/product.asp?contentID=683148">XenServer</a>
on an IBM HS21 quad-core. Funny that to administer Xen graphically you
need to use their .Net application on Windows .. no Linux console
available ! (while I remember that a java, and thus cross-platform,
version was available in a previous release).</p>
<p>But the thing that surprised me most is that they are using CentOS
packages for a lot of the RPMS on the Dom0 ! In fact, I wanted to ssh
directly into the xen box to see what was inside, and `uname -a`
answered : Linux xen 2.6.18-8.1.8.el5.xs4.0.1.125.163xen #1 SMP Mon Aug 13
09:27:46 EDT 2007 i686 i686 i386 GNU/Linux ... That sounded very
familiar to me ... so where does the rest of the system come from ? :
rpm -qai|grep "Vendor: CentOS"|wc -l : 185 (out of 220 packages) ..</p>
<p>In fact, even the standard CentOS-Base.repo and CentOS-Media.repo are
still in /etc/yum.repos.d/ (but they are modified to exclude kernel and
xen packages). The CentOS repositories are disabled by default, but you
can use them to install other centos packages on the XenServer Dom0 ...</p>
<p>Funny, isn't it ? ;-)</p>"No reserved GDT blocks" message when expanding an ext3 filesystem2008-01-18T13:50:00+01:002008-01-18T13:50:00+01:00Fabian Arrotintag:arrfab.net,2008-01-18:/posts/2008/Jan/18/no-reserved-gdt-blocks-message-when-expanding-an-ext3-filesystem/<p>I had recently to import a LVM previously created on a SLES9 (in fact i
scratched the SLES that was installed on a separate disk). Of course lvm
is lvm, so I had direct access to the data sitting in my logical volume.
Everything was fine, except that I was asked at the same time to extend
the filesystem (a new disk was added in the volume group). Because I
told the customer that it was easy with resize2fs (now supporting online
extend on el5, while you needed ext2online on el3 and el4), I decided
to do it directly after my migration (understand : after having
scratched and replaced the SLES). But I had a surprise when trying to
extend the filesystem : it didn't work !</p>
<p>resize2fs answered : 'resize2fs: Operation not permitted While trying
to add group' .. and /var/log/messages told me : 'No reserved GDT
blocks' ... hmm, what does that mean ? Inspecting the filesystem with
`tune2fs -l` showed me that the ext3 filesystem created on SLES was
lacking an important feature (already present for a while on
RHEL/CentOS/others ...) : the resize_inode feature was missing from the
filesystem features .. damn ... `man tune2fs` was not a great help
because it seemed that it was not possible to add the missing
feature .. so I decided to use resize2fs offline (and of course it
worked) ...</p>
<p>But I was frustrated (and the customer too) because I had told him
that it was easy to extend a lvm/filesystem on-the-fly [TM] .. so,
while extending the filesystem unmounted, I decided to google a bit and
I found an interesting thing : a patch added by Red Hat to tune2fs that
allows adding the resize_inode feature !</p>
<p>`rpm -q --changelog e2fsprogs|grep resize_inode` returned : -
enable tune2fs to set and clear feature resize_inode (#167816)</p>
<p>Of course this number is a <a href="https://bugzilla.redhat.com/show_bug.cgi?id=167816">Red Hat bugzilla
entry</a> that pointed
me to the <a href="http://rhn.redhat.com/errata/RHBA-2006-0060.html">errata
page/rpm</a> (already
included in el4 !) ... Great and cool !</p>
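<p>Put together, the whole fix can be sketched like this (the device name is a made-up example; the filesystem must be unmounted and fsck-clean before and after flipping the feature) :</p>

```shell
# Captured as a script since it touches a raw device; do not run this
# blindly -- /dev/vg0/lv_data is a placeholder.
mkdir -p /tmp/demo
cat > /tmp/demo/enable-resize_inode.sh <<'EOF'
#!/bin/sh
DEV=/dev/vg0/lv_data
tune2fs -l "$DEV" | grep 'Filesystem features'  # check : is resize_inode listed ?
e2fsck -f "$DEV"                                # feature changes want a clean fs
tune2fs -O resize_inode "$DEV"                  # needs the patched e2fsprogs
e2fsck -f "$DEV"                                # lets fsck build the reserved GDT blocks
EOF
chmod +x /tmp/demo/enable-resize_inode.sh
```

<p>After that, online resize2fs works as expected on the next extend.</p>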
<p>I tested this (on a separate machine) and it worked .. always
interesting to know if you import an ext3 filesystem from a system that
didn't use the resize_inode ext3 feature (check the defaults that
mkfs.ext3 use on a CentOS/RHEL/Fedora in /etc/mke2fs.conf)</p>Sky2 kernel module not supporting Marvel 88E8056 gigabit anymore2008-01-12T00:30:00+01:002008-01-12T00:30:00+01:00Fabian Arrotintag:arrfab.net,2008-01-12:/posts/2008/Jan/12/sky2-kernel-module-not-supporting-marvel-88e8056-gigabit-anymore/<p>Usually i deploy mainly IBM servers that use broadcom network cards and
I swear I've never encountered any problems with such cards. The other
day I was asked to set up a small pc with two sata disks (in raid1). I
was sure that CentOS 5 could handle the integrated network card (from
`lspci`: Ethernet controller: Marvell Technology Group Ltd. 88E8056
PCI-E Gigabit Ethernet Controller (rev 12) ) because I had already
installed exactly the same box with 5.0. Of course I tried to install
5.1, and it was not possible to use that integrated card anymore.
Frustrating, especially as I wanted to set up the box over the network,
like I always do ...</p>
<p>The <a href="http://www.centos.org/docs/5/html/release-notes/as-x86/RELEASE-NOTES-U1-x86-en.html">5.1 release
notes</a>
stated that the sky2 module was updated to a newer version
(version 1.14), but I was not expecting it to remove hardware
compatibility ! In fact, the sky2 module was backported by Red Hat into
5.1, but it was decided in the mainline kernel tree to no longer support
the Marvel 88E8056 chipset with the sky2 module .. at least that's what
I understand from this <a href="http://lkml.org/lkml/2007/4/25/561">mail sent by Linus himself (search for
sky2)</a>. The only way to get your
network card working directly is to install CentOS 5.0 and not update
the kernel to the one from 5.1 ... at least until I/we find and publish
a workaround (like an updated module through kmod or something else ...)</p>
<p>That reminds me to always use good hardware and not to play with exotic
or cheap hardware .. like for example the fakeraid controllers...</p>divider=10 kernel parameter for CentOS 5.1 guest Virtual Machines2008-01-06T20:15:00+01:002008-01-06T20:15:00+01:00Fabian Arrotintag:arrfab.net,2008-01-06:/posts/2008/Jan/06/divider10-kernel-parameter-for-centos-51-guest-virtual-machines/<p>When Red Hat released 5.1, everybody wanted to test a new kernel
parameter that could adjust the system clock rate at boot time to
something other than the standard 1000Hz. A lot of testing has been done
by the CentOS QA team and you can see the results here :
<a href="http://bugs.centos.org/view.php?id=2189">http://bugs.centos.org/view.php?id=2189</a> (Notice that Xen guests don't
need the system clock rate to be modified because they already have a
250Hz kernel)</p>
<p>As you can read at the bottom of the comments, it seems there was a
typo in the <a href="http://www.centos.org/docs/5/html/release-notes/as-x86/RELEASE-NOTES-U1-x86-en.html">official RH Release
Notes</a> :
you have to read divider= and *NOT* tick_divider= !</p>
<p>With the correct kernel parameter it works as expected, so there is
no need to build a kernel-vm for CentOS 5.1 guests .. (it's still
needed, for example, for 4.x ..). The CentOS 5.1 Release Notes have been
corrected to <a href="http://wiki.centos.org/Manuals/ReleaseNotes/CentOS5.1">reflect the correct divider=
option</a>.</p>
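<p>Concretely, enabling it just means appending divider=10 to the guest's kernel line; a sketch against a sample copy of grub.conf (the kernel version and paths are illustrative) :</p>

```shell
# Apply the change to a sample file first, so you can eyeball the result
# before touching the real /boot/grub/grub.conf of the guest.
mkdir -p /tmp/demo
cat > /tmp/demo/grub.conf <<'EOF'
title CentOS (2.6.18-53.el5)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-53.el5 ro root=/dev/VolGroup00/LogVol00
        initrd /initrd-2.6.18-53.el5.img
EOF
sed -i '/^[[:space:]]*kernel /s/$/ divider=10/' /tmp/demo/grub.conf
grep 'kernel ' /tmp/demo/grub.conf
```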
<p>Here is a small benchmark of an idle minimal CentOS 5.1 i386 running
in a VMware-server guest VM without (before 20h40) and with the
divider=10 option (after 20h40, so with the system clock rate fixed at
100Hz). You'll notice directly that, from the host's point of view, the
vm consumes less CPU than before the divider option :</p>
<div style="text-align: center">
<img alt="Centos 5.1 guest VM without and with the divider=10 kernel parameter" src="http://www.arrfab.net/blog/wp-content/uploads/2007/12/c51-i386-divider.png">
</div>Time to waste ?2007-12-27T13:30:00+01:002007-12-27T13:30:00+01:00Fabian Arrotintag:arrfab.net,2007-12-27:/posts/2007/Dec/27/time-to-waste/<p>Ok, nothing linux/centos related, but still good to see ...
<a href="http://www.tetesaclaques.tv">http://www.tetesaclaques.tv</a> .. real stupid fun .. Attention : in
french only</p>Remotely install CentOS 5.1 on a Hetzner dedicated server2007-12-14T10:09:00+01:002007-12-14T10:09:00+01:00Fabian Arrotintag:arrfab.net,2007-12-14:/posts/2007/Dec/14/remotely-install-centos-51-on-a-hetzner-dedicated-server/<p>I recently had to setup CentOS 5.1 x86_64 on a remote <a href="http://www.hetzner.de/rootserver_en.html">Hetzner
dedicated server</a>. CentOS is
not listed in the supported distributions, but that's not a problem,
especially if you have already played with the remote vnc installation
mode. The only issue is that their dedicated servers have a newer
Realtek Gigabit controller that is not supported by the CentOS 5.1
kernel (from `lspci`: RTL8111/8168B PCI Express Gigabit Ethernet
controller). But the good news is that a driverdisk is available on the
<a href="http://wiki.centos.org/HardwareList/RealTekRTL8111b">CentOS Wiki</a>.</p>
<p>I chose to set up a minimal Fedora Core 8 (supported through their
web control panel) and from there modify grub to launch the CentOS 5.1
x86_64 setup. I was thinking of writing the 'step-by-step' procedure
somewhere on the wiki, but I found that such a procedure (even if I
didn't follow it completely, because I first quickly installed a minimal
Fedora Core 8) was already written by someone else on the
<a href="http://wiki.hetzner.de/index.php/DS8000_/_CentOS_5_/_VNC_Install">Hetzner
Wiki</a>.
My advice : add the 'noipv6' parameter to speed up the installation and
avoid anaconda trying to get an ipv6 address through dhcp (see my
<a href="http://www.arrfab.net/blog/?p=30">previous note</a>)<br>
The other good news is that they have an <a href="http://download.hetzner.de/mirrors/centos/">internal CentOS 5
mirror</a>, so the setup is
really quick because it runs over an internal Gigabit network.</p>
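<p>As a sketch, the grub stanza added from the temporary system to chainload the installer in vnc mode could look like this (the kernel/initrd names, password, partition and the exact mirror path are placeholders; only the mirror host comes from this post) :</p>

```shell
# Appended to the temporary system's grub config; the installer kernel and
# initrd are the ones from the mirror's images/pxeboot directory, copied
# into /boot beforehand.
mkdir -p /tmp/demo
cat >> /tmp/demo/grub-vnc.conf <<'EOF'
title CentOS 5.1 remote VNC install
        root (hd0,0)
        kernel /boot/vmlinuz-centos vnc vncpassword=CHANGEME noipv6 method=http://download.hetzner.de/mirrors/centos/5.1/os/x86_64/
        initrd /boot/initrd-centos.img
EOF
```

<p>Set that entry as the default, reboot, and connect with a vncviewer once anaconda is up.</p>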
<p>Of course such a procedure should only be done by people who have
already installed a box remotely and understand how a remote setup
works. My advice is to test the procedure locally in your lan prior to
'testing' it on the remote server.</p>
<p>Anyway, if something goes wrong, you still have the possibility to
remotely reset the server and boot it up through pxe in a rescue
environment (based on Debian Etch)</p>RPMForge ppc rebuild for EL52007-12-03T14:51:00+01:002007-12-03T14:51:00+01:00Fabian Arrotintag:arrfab.net,2007-12-03:/posts/2007/Dec/03/rpmforge-ppc-rebuild-for-el5/<p>Some of my customers are running RHEL 5.x on ppc64 (IBM Blade JS20/21,
OpenPower, System i or System p), but it was frustrating that RPMForge
had no ppc build. There is also a plan to release CentOS 5.x for the
ppc/ppc64 arch. So I decided to rebuild (through
<a href="http://fedoraproject.org/wiki/Projects/Mock">Mock</a>) all the RPMForge
srpms on a Mac G4. Packages rebuilt so far are now in
<a href="http://rpms.arrfab.net/rpmforge/el5/testing/ppc/">testing</a>, and the
list can be <a href="http://rpms.arrfab.net/rpmforge/el5/testing/ppc/repodata/">seen
here</a>.</p>
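<p>The rebuild loop itself is nothing fancy; a sketch (the mock config name and directory layout are examples, and the exact rebuild flag spelling varies between mock versions) :</p>

```shell
# Rebuild every SRPM in a directory inside a clean mock chroot, logging
# failures for a later retry pass.
mkdir -p /tmp/demo
cat > /tmp/demo/rebuild-all.sh <<'EOF'
#!/bin/sh
for srpm in /srv/srpms/*.src.rpm; do
    # each build runs in a clean chroot defined by /etc/mock/<config>.cfg
    mock -r centos-5-ppc --rebuild "$srpm" || echo "FAILED: $srpm" >> /srv/failed.log
done
EOF
chmod +x /tmp/demo/rebuild-all.sh
```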
<p>I hope to receive feedback from users running el5 on ppc/ppc64 ..
When all the srpms are rebuilt, I plan on (probably) using the G4 to
rebuild for fc8 ppc as well.</p>
<p>Packages will appear on the official RPMForge mirror once everything
is stabilized ... and that can take some time, because building on an
(old) G4 400MHz is not that fast ... (hardware donations
accepted ;-) )</p>Linux on my Mac G42007-11-21T21:55:00+01:002007-11-21T21:55:00+01:00Fabian Arrotintag:arrfab.net,2007-11-21:/posts/2007/Nov/21/linux-on-my-mac-g4/<p>I was searching on eBay for a Mac G4 (G5 is definitively better but more
expensive .. except if you want to make me a donation .. ;-) ) because I
wanted to play a little bit with Linux on the PowerPC architecture.
Surely because CentOS 5.1 ppc is in the pipeline (to come once 5.1 i386
and x86_64 are released, of course ...) and I want to be able to test
it .. and surely also because I have, from time to time, to set up RHEL
on bigger IBM ppc64 based machines (aka <a href="http://www-03.ibm.com/systems/be/en/eserver/power/iseries.html">IBM System
i</a>). In
the meantime I decided to install Fedora Core 6 PPC (yes, I had a ppc
tree on one of my disks ....) on a G4 400MHz with 512Mb. Everything ran
smoothly .. until the machine had to reboot .. and then nothing ... I
was almost sure that I had added an Apple Bootstrap partition during the
anaconda setup ... but nothing.</p>
<p>I forgot to say that I was installing on another (and bigger) ide
drive than the one originally in the machine. And that was my (stupid)
problem. I first had to boot in rescue mode and use parted to put a mac
label on the drive (it had the msdos label since it was coming from an
intel box), and then I partitioned the drive again .. of course this
time Fedora installed successfully .. and rebooted :-)</p>
<p>But what I found strange is that anaconda let me partition the drive
as I wanted, formatted the drive, installed all the packages and didn't
complain about the drive not having the correct label ... something to
write down on the centos wiki when 5.1 ppc is ready ...</p>Connect to a Juniper/Netscreen SSL box with CentOS 52007-11-16T14:23:00+01:002007-11-16T14:23:00+01:00Fabian Arrotintag:arrfab.net,2007-11-16:/posts/2007/Nov/16/connect-to-a-junipernetscreen-ssl-box-with-centos-5/<p>I had recently to support remotely a customer using a <a href="http://www.juniper.net/products_and_services/ssl_vpn_secure_access/index.html">Juniper/Netscreen
SSL
gateway</a>
as a vpn solution. Normally you point your browser to an https:// website,
sign in, and then a java applet should start and modify your routing
table automatically, passing all the traffic through a tun device in ssl
mode ..</p>
<p>I was surprised that the Juniper Network Connect applet detected my OS
as being Linux (I feared that a M\$ machine was needed to connect ...)
and it launched an xterm box asking for my root password to set up the Network
Connect client. It installed the java jar archives in a
\~/.juniper_networks/network_connect folder but nothing happened after
that ... I should have received a pop-up window starting the connection itself
but nothing.</p>
<p>After analyzing the .sh scripts inside the
\~/.juniper_networks/network_connect/ folder and digging with strace and
java -jar NC.jar, I saw an executable file : ncdiag. Okay, surely a
missing lib : ldd ./ncdiag pointed me directly to the missing
libstdc++-libc6.2-2.so.3 .. really easy to troubleshoot with `yum
provides` .. (why do so many packages still rely on older libraries ?
...)</p>
<p>Then the java applet opened .. but i received a nice 'Unable to load
library libncui.so' …</p><p>I had recently to support remotely a customer using a <a href="http://www.juniper.net/products_and_services/ssl_vpn_secure_access/index.html">Juniper/Netscreen
SSL
gateway</a>
as a vpn solution. Normally you point your browser to an https:// website,
sign in, and then a java applet should start and modify your routing
table automatically, passing all the traffic through a tun device in ssl
mode ..</p>
<p>I was surprised that the Juniper Network Connect applet detected my OS
as being Linux (I feared that a M\$ machine was needed to connect ...)
and it launched an xterm box asking for my root password to set up the Network
Connect client. It installed the java jar archives in a
\~/.juniper_networks/network_connect folder but nothing happened after
that ... I should have received a pop-up window starting the connection itself
but nothing.</p>
<p>After analyzing the .sh scripts inside the
\~/.juniper_networks/network_connect/ folder and digging with strace and
java -jar NC.jar, I saw an executable file : ncdiag. Okay, surely a
missing lib : ldd ./ncdiag pointed me directly to the missing
libstdc++-libc6.2-2.so.3 .. really easy to troubleshoot with `yum
provides` .. (why do so many packages still rely on older libraries ?
...)</p>
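<p>The troubleshooting flow described above, condensed as a quick sketch (run from inside the \~/.juniper_networks/network_connect folder; the library name is the one my ncdiag was missing, yours may differ) :</p>

```shell
# Spot the missing library of the helper binary:
ldd ./ncdiag | grep 'not found'
# Ask yum which package ships it:
yum provides libstdc++-libc6.2-2.so.3
```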
<p>Then the java applet opened .. but I received a nice 'Unable to load
library libncui.so' window .. ok, one step further : when launching java
-jar NC.jar in a terminal, I was pointed to something useful :
'DSSSL_load_so failed'</p>
<p>Hmm, DSSSL ? Sounds like openssl, right ? But it was installed ... I had
something similar recently when a package relied on the -devel package
and not on the normal one ... so I just tried to symlink
/lib/libssl.so.6 to /lib/libssl.so (instead of installing the
openssl-devel package) and NC was now happy .. Silly, isn't it ?
Unfortunately I have no way to modify their broken NC client .. but at
least there is now a workaround, and that's the only reason why I blog
it here .. so that other users don't have to search again .. myself
included ;-)</p>reposync now included in RHEL5.12007-11-14T11:57:00+01:002007-11-14T11:57:00+01:00Fabian Arrotintag:arrfab.net,2007-11-14:/posts/2007/Nov/14/reposync-now-included-in-rhel51/<p>I had to update some customers' machines running official RHEL5 to
RHEL5.1 ... I've just discovered that the reposync tool (part of the
yum-utils package) is now shipped there too (CentOS had it already, even
in the 4.x branch ...).</p>
<p>That means that with just reposync and createrepo (also available by
default on rhel5) you can create very quickly an internal 'updates' repo
without having to use :</p>
<ul>
<li>
<p>either the Red Hat solution : running a commercial rhn satellite/proxy
inside of your network</p>
</li>
<li>
<p>or using <a href="http://dag.wieers.com/home-made/mrepo/">mrepo</a> (even if
mrepo can do more things than reposync ...)</p>
</li>
</ul>
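<p>For the record, the whole workflow is only two commands. A sketch, where the repo id and the destination directory are assumptions to adapt to your own setup :</p>

```shell
# Mirror the 'updates' repo locally, then generate the repo metadata.
reposync --repoid=updates --download_path=/var/www/html/mirror
createrepo /var/www/html/mirror/updates
```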
<p>Now the underlying question for the CentOS team will be : 'which version
will we ship in extras : our previously rpm packaged version or the one
released upstream ?' ...</p>Entering CentOS 5.1 QA mode ...2007-11-11T19:03:00+01:002007-11-11T19:03:00+01:00Fabian Arrotintag:arrfab.net,2007-11-11:/posts/2007/Nov/11/entering-centos-51-qa-mode/<p>Can you smell the newer CentOS 5.1 cooking ?</p>
<p>I do : some of us are busy right now trying freshly rebuilt srpms that
will be integrated in CentOS 5.1 ... Some new features <a href="https://www.redhat.com/archives/rhelv5-beta-list/2007-November/msg00001.html">announced
upstream</a>
seem interesting. To name a few : newer libvirt and xen packages
(updated to 3.1.0), a cifs module updated to correct a <a href="http://bugs.centos.org/view.php?id=1776">known
bug</a>, some newly added drivers
(like the Areca ones), and the new iscsi-target feature
(I'll test it as soon as it's available to QA members ....).</p>
<p>The newer kernel also corrects a number of bugs, notably the <a href="http://bugs.centos.org/view.php?id=1809">vga=
parameter</a> that was not working
anymore (why did they remove vesa_fb ?)</p>
<p>The list is too long .. and no ETA yet for CentOS 5.1 ... but that won't be
too long I hope ;-)</p>How CentOS is considered by Red Hat employees2007-11-05T11:43:00+01:002007-11-05T11:43:00+01:00Fabian Arrotintag:arrfab.net,2007-11-05:/posts/2007/Nov/05/how-centos-is-considered-by-red-hat-employees/<p>A recent article on Slashdot (<a href="http://linux.slashdot.org/comments.pl?sid=349751&cid=21234239">is CentOS hurting Red Hat
?</a>)
prompted some people to share their opinion about CentOS ... Here is
a good <a href="http://gregdek.livejournal.com/18387.html">link from a Red Hat
employee</a> .</p>
<p>And yes, I personally like Red Hat; that's the main reason I switched to
CentOS for my own machines/servers .. but I continue to install RHEL for
customers wanting support ... and CentOS for the others ...</p>eid-belgium package for EL5 now working2007-10-21T15:55:00+02:002007-10-21T15:55:00+02:00Fabian Arrotintag:arrfab.net,2007-10-21:/posts/2007/Oct/21/eid-belgium-package-for-el5-now-working/<p>I had some time to spend on testing newer <a href="http://lists.centos.org/pipermail/centos/2007-October/088312.html" title="http://lists.centos.org/pipermail/centos/2007-October/088312.html">NX/FreeNX
packages</a>
for CentOS before they got released, and also doing some tests/builds on
the newer version 2.6.0 of the eid-belgium package. It's now working and it
will appear in RPMForge as soon as I can submit the modified spec
upstream ... In the meantime you can download/test <a href="http://rpms.arrfab.net/centos/5/i386/repodata/repoview/eid-belgium-0-2.6.0-1.el5.af.html">from here for the
i386
version</a>
. Now a patch allows the package to be installed without the
pcsc-lite-devel package ...</p>wasting time on patches for beid (eid-belgium)2007-10-17T22:55:00+02:002007-10-17T22:55:00+02:00Fabian Arrotintag:arrfab.net,2007-10-17:/posts/2007/Oct/17/wasting-time-on-patches-for-beid-eid-belgium/<p>I had to install the eid-belgium package that we prepared with Dag and
that previously worked on el4 .. but on el5 it was not working
out-of-the-box .. Then I saw that a newer version (2.6.0) was available
... But the patches that worked before don't apply anymore .. I
really like when developers hard-code values in the sources, like for
example /usr/local/lib .. :-(
<p>I'm currently modifying the spec file and will post the result once a
clean spec file is usable ...</p>Funny picture ...2007-10-14T19:41:00+02:002007-10-14T19:41:00+02:00Fabian Arrotintag:arrfab.net,2007-10-14:/posts/2007/Oct/14/funny-picture/<p>Okay, this isn't very technical ... but I've just found this old picture
I took at the company I'm working for when we had a meeting with <a href="http://dag.wieers.com">Dag
Wieers</a> just before the <a href="http://wiki.centos.org/Events/Fosdem2007">Fosdem
2007</a> event ...</p>
<p>When we scheduled that meeting with Dag, I didn't know which other
company would visit us (the company I mean, not Dag and myself ... ;-) )
... I let you guess the name of the other company, and their faces when
they saw the 'welcome screen' when entering the building ... ;-)</p>
<p><img alt="centos-ms.png" src="http://www.arrfab.net/blog/wp-content/uploads/2007/10/centos-ms.png"></p>The good, the bad and the ugly - part 22007-10-14T13:46:00+02:002007-10-14T13:46:00+02:00Fabian Arrotintag:arrfab.net,2007-10-14:/posts/2007/Oct/14/the-good-the-bad-and-the-ugly-part-2/<p>After googled at bit, i've found that newer libgpod (svn version only !)
can support newer iPods. I've so rebuild it (as well as gtkpod itself)
and packages are now sitting in my <a href="http://rpms.arrfab.net/centos/5/testing/i386/repodata/">testing
repository</a>
(only for i386 at this time, x86_64 will follow)<br>
Attention, you'll have to remove first the libgpod package (if already
installed) with `yum remove libgpod` (it will also remove rhythmbox !)
and then you can install gtkpod and libgpod (svn version)</p>
<p>You also have to know that to correctly write information in the iTunes
DB file sitting on the iPod, gtkpod needs to know the 'FireWire id' of
your iPod. You can easily discover it with `sudo lsusb -v|grep -i
serial` . Write that 16-character string down on the iPod itself so that
gtkpod knows it for a correct synchronisation. For example, mine is
000A27001A484CF8 so I created the file /media/ARRFAB\<br>
IPOD/iPod_Control/Device/SysInfo (Arrfab Ipod being my iPod's name in
case you were wondering ... ) with the content being : `FirewireGuid:
0x000A27001A484CF8` (notice the 0x in front of the 16-character
string) .</p>
<p>Now you can fire up gtkpod and upload music to your iPod .... Enjoy</p>
<p>I'll try to put it in the clean repo …</p><p>After googling a bit, I've found that newer libgpod (svn version only !)
can support newer iPods. So I've rebuilt it (as well as gtkpod itself)
and the packages are now sitting in my <a href="http://rpms.arrfab.net/centos/5/testing/i386/repodata/">testing
repository</a>
(only for i386 at this time, x86_64 will follow)<br>
Attention : you'll first have to remove the libgpod package (if already
installed) with `yum remove libgpod` (it will also remove rhythmbox !)
and then you can install gtkpod and libgpod (svn version)</p>
<p>You also have to know that to correctly write information in the iTunes
DB file sitting on the iPod, gtkpod needs to know the 'FireWire id' of
your iPod. You can easily discover it with `sudo lsusb -v|grep -i
serial` . Write that 16-character string down on the iPod itself so that
gtkpod knows it for a correct synchronisation. For example, mine is
000A27001A484CF8 so I created the file /media/ARRFAB\<br>
IPOD/iPod_Control/Device/SysInfo (Arrfab Ipod being my iPod's name in
case you were wondering ... ) with the content being : `FirewireGuid:
0x000A27001A484CF8` (notice the 0x in front of the 16-character
string) .</p>
<p>Now you can fire up gtkpod and upload music to your iPod .... Enjoy</p>
<p>I'll try to put it in the clean repo, but because it will overwrite a
base package, I first have to find the correct way to do it ...</p>
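<p>The two steps above can be scripted. A sketch : the mount point defaults to a local placeholder directory here, substitute your iPod's real mount point, and of course your own FireWire id :</p>

```shell
# Find the FireWire id first (needs the iPod plugged in):
#   sudo lsusb -v | grep -i serial
# Then write it, prefixed with 0x, into the SysInfo file on the iPod.
IPOD="${IPOD:-./ipod-mount}"   # placeholder -- use your iPod's mount point
mkdir -p "$IPOD/iPod_Control/Device"
echo "FirewireGuid: 0x000A27001A484CF8" > "$IPOD/iPod_Control/Device/SysInfo"
```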
<p><a href="http://www.arrfab.net/blog/wp-content/uploads/2007/10/gtkpod-centos5.png" title="Gtkpod with newer libgpod in action"><img alt="Gtkpod in action on CentOS
5" src="http://www.arrfab.net/blog/wp-content/uploads/2007/10/gtkpod.png"><br>
</a></p>The good, the bad and the ugly2007-10-12T20:50:00+02:002007-10-12T20:50:00+02:00Fabian Arrotintag:arrfab.net,2007-10-12:/posts/2007/Oct/12/the-good-the-bad-and-the-ugly/<p>* the good (news) : I've received as a gift an iPod Nano 4GB (latest
generation) ... normally receiving a gift can be considered good
news ...</p>
<p>* the bad : latest firmware of the iPod (on latest generation) is <a href="http://linuxrevolution.blogspot.com/2007/09/new-ipod-firmware-screws-linux-users.html">not
compatible</a>
with standard linux tools available for CentOS 5 ...</p>
<p>* the ugly : I've rebuilt newer versions of all the opensource software
that can normally handle the iPod database (gtkpod,
<a href="http://tirania.org/blog/archive/2007/Sep-15.html">banshee</a>) but
unfortunately, none of these tools can manage it yet. Plus the fact
that some of these tools require newer libs than those available in
CentOS 5 [base] ...<br>
Who said that receiving a gift is always good ? It's a shame that I
needed to find an XP machine so that I could upload music on my iPod ...
I'll wait for Fedora 8 to be released, on which I'll update/overwrite
banshee and all the other tools, until I'm able to enjoy this gift ...<br>
Thinking of the day : 'welcome to a free world' ...</p>Kernel update and not the correct kernel on next reboot ...2007-10-01T21:40:00+02:002007-10-01T21:40:00+02:00Fabian Arrotintag:arrfab.net,2007-10-01:/posts/2007/Oct/01/kernel-update-and-not-the-correct-kernel-on-next-reboot/<p>This is a question we receive a lot on the CentOS forum : "i've updated
my system with yum and it installed a new kernel, but it didn't start on
my default kernel ..." OK, so let me explain what probably happened :
you initially installed a box with multiple kernels, for example
kernel and kernel-xen (it can be PAE as well ...).<br>
Of course you know how to configure grub, and you modified
/boot/grub/grub.conf to make the standard kernel the default one
(because you installed kernel-xen for testing purposes only ....)</p>
<p>Everything is fine ... until next kernel update : it reverts the default
kernel to boot in grub to the kernel-xen ...</p>
<p>Explanations : when you installed both kernels, a file containing the
default kernel for the system was created (/etc/sysconfig/kernel ..
check for DEFAULTKERNEL=) .. and if you installed kernel-xen (for a test
maybe ...) that is the default kernel. After a new kernel installation,
/sbin/new-kernel-pkg is called to make a new initrd and uses grubby to
automatically add the newer kernel entry in the grub config file ... If
you have a look in this new-kernel-pkg script, you'll see that it uses
the /etc/sysconfig/kernel file to …</p><p>This is a question we receive a lot on the CentOS forum : "I've updated
my system with yum and it installed a new kernel, but it didn't start on
my default kernel ..." OK, so let me explain what probably happened :
you initially installed a box with multiple kernels, for example
kernel and kernel-xen (it can be PAE as well ...).<br>
Of course you know how to configure grub, and you modified
/boot/grub/grub.conf to make the standard kernel the default one
(because you installed kernel-xen for testing purposes only ....)</p>
<p>Everything is fine ... until next kernel update : it reverts the default
kernel to boot in grub to the kernel-xen ...</p>
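<p>As explained below, the culprit is /etc/sysconfig/kernel, and the fix is a one-line change. A sketch (the path is parameterized and defaults to a local demo copy so you can try it safely first) :</p>

```shell
# KERNELCFG is /etc/sysconfig/kernel on a real box; we default to a
# local demo copy here for safety.
KERNELCFG="${KERNELCFG:-./sysconfig-kernel}"
[ -f "$KERNELCFG" ] || echo 'DEFAULTKERNEL=kernel-xen' > "$KERNELCFG"  # demo content
# Point the default back at the standard kernel package:
sed -i 's/^DEFAULTKERNEL=.*/DEFAULTKERNEL=kernel/' "$KERNELCFG"
```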
<p>Explanations : when you installed both kernels, a file containing the
default kernel for the system was created (/etc/sysconfig/kernel ..
check for DEFAULTKERNEL=) .. and if you installed kernel-xen (for a test
maybe ...) that is the default kernel. After a new kernel installation,
/sbin/new-kernel-pkg is called to make a new initrd and uses grubby to
automatically add the newer kernel entry in the grub config file ... If
you have a look in this new-kernel-pkg script, you'll see that it uses
the /etc/sysconfig/kernel file to know which kernel to configure as
default ... you now know which file to modify ... and I hope that this
question won't come back on the next kernel update ... :o)</p>Zattoo , or how to watch TV over IP on Linux/CentOS ...2007-09-30T12:24:00+02:002007-09-30T12:24:00+02:00Fabian Arrotintag:arrfab.net,2007-09-30:/posts/2007/Sep/30/zattoo-or-how-to-watch-tv-over-ip-on-linuxcentos/<p>Thanks to <a href="http://www.krisbuytaert.be/blog/?q=node/452">sdog's blog</a> ,
I've found and tested <a href="http://www.zattoo.com">Zattoo</a> . They provide
binaries (yep, binaries only ...) for Mac OS X, also for an OS from
Redmond, and ... for Linux ! You have the choice between a .deb package,
an RPM package , and a .tar.gz file containing the binaries. Of course I
tested the rpm, which was packaged very badly .. `rpm -qi zattoo` quickly
gave me the answer why : '(Converted from a deb package by alien
version 8.64.)' :o)</p>
<p>I'm wondering if I need to/can (due to their proprietary license)
re-package the static binaries with the correct 'Requires:' field and a
'%post' section doing all the symlinks (yep, their binaries are built
against/for .so.0d :-( ....) and a proper ldconfig ... and put a .nosrc
in the package name ....</p>
<p>Anyway, the future will tell if the Zattoo service will remain free or
not ...</p>mugshot for CentOS 52007-09-29T18:23:00+02:002007-09-29T18:23:00+02:00Fabian Arrotintag:arrfab.net,2007-09-29:/posts/2007/Sep/29/mugshot-for-centos-5/<p>I've just joined the <a href="http://mugshot.org/group?who=HWQjjmwzV6vr7T">mugshot CentOS
group</a> so i needed the
mugshot client on my gnome desktop ... here it is for the <a href="http://rpms.arrfab.net/centos/5/i386/repodata/repoview/mugshot-0-1.1.45-1.el5.af.html">i386
platform</a>
and here for the <a href="http://rpms.arrfab.net/centos/5/x86_64/repodata/repoview/mugshot-0-1.1.45-1.el5.af.html">x86_64 one<br>
</a></p>
<p>I'm still wondering what the benefit of mugshot will be, but time will
tell ... :-)</p>tn5250j packaged in rpm for CentOS 52007-09-25T14:54:00+02:002007-09-25T14:54:00+02:00Fabian Arrotintag:arrfab.net,2007-09-25:/posts/2007/Sep/25/tn5250j-packaged-in-rpm-for-centos-5/<p>I work for a company that uses IBM iSeries machines (aka as400) so I
needed a tn5250 emulator. CentOS 5 comes with tn5250 but that package is
very limited. I found tn5250j (java based) and I've tested it for
several months ... now it's time to package it as an rpm instead of using
their java jar installer ... Here it is :
<a href="http://rpms.arrfab.net/centos/5/i386/repodata/repoview/tn5250j-0-0.6.0-1.el5.af.html">http://rpms.arrfab.net/centos/5/i386/repodata/repoview/tn5250j-0-0.6.0-1.el5.af.html</a>
.</p>
<p>You need at least the Sun JRE 1.4 (I use 1.5 without any problems).
This package doesn't work correctly with the provided
java-1.4.2-gcj-compat !</p>Multiple vlans on a single interface on CentOS (aka 802.1q tagging)2007-09-17T20:47:00+02:002007-09-17T20:47:00+02:00Fabian Arrotintag:arrfab.net,2007-09-17:/posts/2007/Sep/17/multiple-vlans-on-a-single-interface-on-centos/<p>I had to set up a CentOS box that needed access to multiple VLANs
configured in a bunch of Cisco switches ... Instead of having a
different physical adapter for each vlan in linux, you can configure
your physical interface with multiple logical interfaces that will be
'tagged' (aka 802.1q vlan tagging). You of course need to configure your
switch so that your eth0 is in trunk mode ('switchport mode trunk'
on cisco) and then configure the logical interfaces on the linux side. A
quick `grep -Ri vlan /usr/share/doc/initscripts-8.45.14.EL/` showed me
that on CentOS you just need to create
/etc/sysconfig/network-scripts/ifcfg-eth0.30 (assuming that the vlan you
need access to is vlan 30) and this file will have to look like this :</p>
<p>DEVICE=eth0.30<br>
BOOTPROTO=STATIC<br>
IPADDR=10.111.32.23<br>
NETMASK=255.255.255.240<br>
VLAN=yes<br>
ONBOOT=yes</p>
<p>Of course you can set parameters (dhcp, onboot, etc ...) as usual. An
`ifup eth0.30` will bring this logical interface up (and will
automatically modprobe the 8021q module) and you'll have access to
machines sitting in this vlan 30 from your single eth0 physical
interface. Of course you can create multiple …</p><p>I had to set up a CentOS box that needed access to multiple VLANs
configured in a bunch of Cisco switches ... Instead of having a
different physical adapter for each vlan in linux, you can configure
your physical interface with multiple logical interfaces that will be
'tagged' (aka 802.1q vlan tagging). You of course need to configure your
switch so that your eth0 is in trunk mode ('switchport mode trunk'
on cisco) and then configure the logical interfaces on the linux side. A
quick `grep -Ri vlan /usr/share/doc/initscripts-8.45.14.EL/` showed me
that on CentOS you just need to create
/etc/sysconfig/network-scripts/ifcfg-eth0.30 (assuming that the vlan you
need access to is vlan 30) and this file will have to look like this :</p>
<p>DEVICE=eth0.30<br>
BOOTPROTO=STATIC<br>
IPADDR=10.111.32.23<br>
NETMASK=255.255.255.240<br>
VLAN=yes<br>
ONBOOT=yes</p>
<p>Of course you can set parameters (dhcp, onboot, etc ...) as usual. An
`ifup eth0.30` will bring this logical interface up (and will
automatically modprobe the 8021q module) and you'll have access to
machines sitting in this vlan 30 from your single eth0 physical
interface. Of course you can create multiple logical interfaces ...</p>
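<p>Once `ifup eth0.30` has run, a quick way to double-check the tagging (the 8021q module exposes these proc entries on a stock CentOS kernel) :</p>

```shell
# Lists the tagged interfaces and their vlan ids:
cat /proc/net/vlan/config
# Shows the logical interface and its address:
ifconfig eth0.30
```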
<p>Useful for example when you have a physical machine with a limited
number of ethernet devices (like on IBM Blades ...)</p>Running a Domino cluster with heartbeat/drbd on CentOS2007-09-10T15:47:00+02:002007-09-10T15:47:00+02:00Fabian Arrotintag:arrfab.net,2007-09-10:/posts/2007/Sep/10/running-a-domino-cluster-with-heartbeatdrbd-on-centos/<p>I've recently installed a IBM Domino 7 cluster on CentOS 4.5. I used
Heartbeat and DRBD for data replication between the two nodes. I first
started with a v1 heartbeat cluster style (using haresources) but then i
decided to swith to v2 (using crm / cib.xml). The problem is that the
init script for domino that i found somewhere on the ibm website didn't
answer to the 'service domino status' and was not even returning code
... so when you wanted to move a resource from one node to the other,
the domino server was always listed as an unmanaged resources and failed
to switch to the other node. So i <a href="http://www.arrfab.net/blog/domino">attach the modified domino init
script</a>.</p>
<p>I also had to put a timeout value for the stop operation in the cib.xml.
Otherwise heartbeat complained that it had to kill the process itself
(domino is slow to stop .... :o)). You can do this by entering the
following parameters for the lsb resource :</p>
<p>&lt;operations&gt;&lt;op id="afe9e8fe-bc40-4b3f-b563-9bd70e695ab6" name="stop"
timeout="120s" start_delay="0" disabled="false" role="Started"/&gt;&lt;/operations&gt;</p>
<p>Hope that this will help someone ... i'll try to write a full doc
(centos/heartbeat/drbd) on the wiki very soon ....</p>Lotus Notes 8 on Linux2007-08-28T21:15:00+02:002007-08-28T21:15:00+02:00Fabian Arrotintag:arrfab.net,2007-08-28:/posts/2007/Aug/28/lotus-notes-8-on-linux/<p>IBM released Notes client version 8 recently. Version 7 for Linux sucked
but this newer version is better integrated, verifies dependencies, and I've
been using it for a week now without any problems so far ... Still not
delivered in rpm format (when will IBM/Lotus understand the benefits of
RPM instead of their stupid InstallShield anywhere java app ... ?)
though. On the other hand, it contains the IM Sametime plugin (very well
integrated this time) so I don't need to use the Gaim/Pidgin meanwhile
plugin to connect to our Sametime server . IBM decided to package
OpenOffice (ibm modified version) inside the Notes client so it seems
their philosophy is to push Odt format ...</p>Imapsync rpm for CentOS 52007-08-16T16:09:00+02:002007-08-16T16:09:00+02:00Fabian Arrotintag:arrfab.net,2007-08-16:/posts/2007/Aug/16/imapsync-rpm-for-centos-5/<p>I needed a tool to synchronize imap folders from one server (M\$
Exchange) to another imap server. Imapsync seemed to be a good tool but
didn't exist for CentOS 5 so I've built it :
<a href="http://rpms.arrfab.net/centos/5/i386/repodata/repoview/imapsync-0-1.219-1.html">http://rpms.arrfab.net/centos/5/i386/repodata/repoview/imapsync-0-1.219-1.html </a>
. Also available in the x86_64 tree ...</p>Vmware server on CentOS/IBM BladeCenter2007-08-08T15:18:00+02:002007-08-08T15:18:00+02:00Fabian Arrotintag:arrfab.net,2007-08-08:/posts/2007/Aug/08/vmware-server-on-centosibm-bladecenter/<p>One of my customers wanted to setup Vmware server on top of CentOS 4.5
on an IBM HS20 (in an IBM BladeCenter). The guest machines were unable to
connect to the network (or only briefly, like 3 or 4 icmp packets) but
were able to at least see remote machines' mac addresses (as shown
through arp). On the other hand, everything was running smoothly in NAT
mode . I had a look at the upstream Cisco switches' configs but everything
was ok ... So I decided to update the firmware of the built-in Broadcom
Fiber ethernet cards and after that everything went back to normal in
bridge mode. So it seems that these devices didn't correctly support
bridging mode out-of-the-box, and I assume that it would have been the
same with other virt products (qemu, kvm, xen, etc ...). It reminds me of a
colleague from some years ago : when you told him you had a problem, his
answer was always : 'Have you updated the BIOS ?' Simple, but it works
often ... ;-)</p>YaST on Enterprise Linux/CentOS ?!2007-08-08T15:14:00+02:002007-08-08T15:14:00+02:00Fabian Arrotintag:arrfab.net,2007-08-08:/posts/2007/Aug/08/yast-on-enterprise-linuxcentos/<p>Yep, you've correctly read ... Oracle released a <a href="http://oss.oracle.com/projects/yast/">modified version of
YaST</a> to run on top of Enterprise
Linux (including Unbreakable Linux of course). Is it April 1st ? Or
another Oracle inconsistency ? Will their next unbreakable linux be
based on SLES ? :o)</p>Speed your yum on CentOS 3.x2007-08-01T09:00:00+02:002007-08-01T09:00:00+02:00Fabian Arrotintag:arrfab.net,2007-08-01:/posts/2007/Aug/01/speed-your-yum-on-centos-3x/<p>Tired of waiting for 'resolving dependencies' on centos 3.x because of
yum 2.0 ? Thanks to
<a href="http://blog.danieldk.org/post/2007/05/10/yum-metadata-parser-for-yum-24">danieldk</a>,
yum-2.4.3 is available for centos 3 , as well as the newer
yum-metadata-parser plugin. At the time of writing, it's only available
in the <a href="http://dev.centos.org/centos/3/testing/">[testing] repo</a>, but
you can enable it temporarily and switch back to normal behaviour once
you've upgraded yum. BTW you'll have to clean up your /etc/yum.conf
because the newer yum will use the CentOS-Base.repo sitting in /etc/yum.repos.d/
and so both centos repositories would be listed twice, which is not a
real problem but a little bit annoying though ... My CentOS 3.x boxes
now rock .. :o)</p>ODF Plugin for M$ users2007-07-31T18:34:00+02:002007-07-31T18:34:00+02:00Fabian Arrotintag:arrfab.net,2007-07-31:/posts/2007/Jul/31/odf-plugin-for-m-users/<p>Sun decided to release a free ODF plug-in for M\$ Office users. Good
thing ... usually OO.org users had to convert their OpenDocument files
into either pdf (which is still the format i use for documents exchange
with customers ...) or as a native M\$ office format. Now you can even
send your document in odf format and they have no excuses anymore ....
:o) . Btw such users could also just install OpenOffice on
Windows though ... Here it is :
<a href="http://www.sun.com/software/star/openoffice/">http://www.sun.com/software/star/openoffice/</a></p>ldapvi rpm built for CentOS 52007-07-25T11:58:00+02:002007-07-25T11:58:00+02:00Fabian Arrotintag:arrfab.net,2007-07-25:/posts/2007/Jul/25/ldapvi-rpm-built-for-centos-5/<p>A friend of mine (<a href="http://www.x-tend.be/~fred/blog/index.php">lefred</a>)
pointed me to another ldap administration tool, but one that uses a vi
interface : a useful tool over an ssh connection for example ... it didn't
exist as an rpm for EL5 so I decided to build it and put it in <a href="http://rpms.arrfab.net">my rpms
repo</a> ...</p>OpenMoko and marketing2007-07-23T20:52:00+02:002007-07-23T20:52:00+02:00Fabian Arrotintag:arrfab.net,2007-07-23:/posts/2007/Jul/23/openmoko-and-marketing/<p>I was wondering how <a href="http://www.openmoko.org/">OpenMoko</a> will fight
against the marketing around the iPhone from Apple .. I now have the answer :
<a href="http://www.youtube.com/watch?v=POiNqP4savI">http://www.youtube.com/watch?v=POiNqP4savI</a> .... :o)</p>CentOS 5 remote install and ipv62007-07-23T13:17:00+02:002007-07-23T13:17:00+02:00Fabian Arrotintag:arrfab.net,2007-07-23:/posts/2007/Jul/23/centos-5-remote-install-and-ipv6/<p>I have used z00dax's howto several times to remotely set up CentOS 4 on
already installed Linux boxes (see
<a href="http://www.karan.org/blog/index.php/2005/06/15">http://www.karan.org/blog/index.php/2005/06/15</a> ) but if you want to
test the same thing with CentOS 5 you'll have to add a new parameter :
noipv6 . Otherwise anaconda will ask if you want an ipv6 address
(the same behaviour that you'll have in a manual setup)</p>Too hot in the server room ?2007-07-18T21:39:00+02:002007-07-18T21:39:00+02:00Fabian Arrotintag:arrfab.net,2007-07-18:/posts/2007/Jul/18/too-hot-in-the-server-room/<p>Not a technical/linux tip, but as a consultant, I can assure you that
such situations exist in real life ... :o)</p>
<p><a href="http://worsethanfailure.com/Articles/Im-Sure-You-Can-Deal.aspx">http://worsethanfailure.com/Articles/Im-Sure-You-Can-Deal.aspx </a></p>a Jabber daemon on CentOS 4.x2007-07-17T19:11:00+02:002007-07-17T19:11:00+02:00Fabian Arrotintag:arrfab.net,2007-07-17:/posts/2007/Jul/17/a-jabber-daemon-on-centos-4x/<p>I received a request from family members to be able to use Instant
Messaging on a private server. I never installed a jabber daemon and if
you have a look on <a href="http://www.jabber.org">http://www.jabber.org</a> you'll see that a lot are
existing ... Thanks to <a href="http://planet.x-tend.be">X-Tend's people</a> advice
i installed (very quickly)
<a href="http://www.igniterealtime.org/projects/openfire/index.jsp">OpenFire</a> .
It already exists in RPM format so I didn't have to build it. You can
have your own jabberd running in 5 minutes and it has a lot of features,
like different kinds of authentication, different database backends, etc
.. it even has a plugin to act as a gateway for other IM systems, like
IRC, AIM etc .... I was really impressed ... and the Admin console will
make gui admins really happy :o)</p>When will ATI drivers support Aiglx ?2007-07-12T18:29:00+02:002007-07-12T18:29:00+02:00Fabian Arrotintag:arrfab.net,2007-07-12:/posts/2007/Jul/12/when-will-ati-drivers-support-aiglx/<p>I've received my new laptop (IBM Thinkpad R60) and this box has an ATI
Radeon Mobility X1400 inside ... this card is not even usable by the
xorg radeon driver. So I had to choose between using the generic (and
included) vesa driver or installing fglrx (the ATI driver). The second
choice seemed the way to go, but when you use fglrx, you cannot
activate 'desktop effects' because fglrx is incompatible with Aiglx (see
<a href="http://support.ati.com/ics/support/default.asp?deptID=894&task=knowledge&questionID=26907">http://support.ati.com/ics/support/default.asp?deptID=894&task=knowledge&questionID=26907</a>
) . But it's possible to use Beryl+fglrx through Xgl (and so not Aiglx)
on CentOS 5. Xgl is not included but I decided to give it a try and
built it (with beryl as well) for i386, and I've included both rpms in my
little repository : <a href="http://rpms.arrfab.net">http://rpms.arrfab.net</a> .</p>FreeNX on CentOS2007-07-12T17:34:00+02:002007-07-12T17:34:00+02:00Fabian Arrotintag:arrfab.net,2007-07-12:/posts/2007/Jul/12/freenx-on-centos/<p>A lot of people misunderstand how NX/FreeNX works , especially the ssh
authentication process ... How can you secure your NX box so that only
key-based authentication is allowed for the ssh part of it ? I've added
some notes about it on the CentOS wiki here :
<a href="http://wiki.centos.org/HowTos/FreeNX">http://wiki.centos.org/HowTos/FreeNX</a></p>Manipulating SCSI devices in the kernel2007-06-15T10:11:00+02:002007-06-15T10:11:00+02:00Fabian Arrotintag:arrfab.net,2007-06-15:/posts/2007/Jun/15/manipulating-scsi-devices-in-the-kernel/<p>You've added a scsi disk on a controller and you don't want to restart
linux to scan the scsi bus ? Tell the kernel directly that a new scsi
device was added .</p>
<p>The previous method was to use the echo command talking to
/proc/scsi/scsi. For example, we've added a scsi disk on the first
scsi controller (0), on the first channel (0), with scsi id 4 and
on LUN 0 : <code>echo "scsi add-single-device 0 0 4 0" > /proc/scsi/scsi</code></p>
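The legacy interface takes four positional numbers (host, channel, id, lun), which are easy to mix up. A tiny helper can build the line for you — just a sketch, the <code>scsi_add_cmd</code> name is my own, not from the post:

```shell
# Build the legacy /proc/scsi/scsi command string for a given
# host/channel/id/lun (helper name is my own invention).
scsi_add_cmd() {
  printf 'scsi add-single-device %s %s %s %s\n' "$1" "$2" "$3" "$4"
}

# The example from the post: controller 0, channel 0, id 4, lun 0.
scsi_add_cmd 0 0 4 0
```

As root you would then redirect that output into /proc/scsi/scsi.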
<p>Now (the better and updated solution) : <code>echo "- - -" >
/sys/class/scsi_host/host0/scan</code></p>
<p>Check with <code>cat /proc/scsi/scsi</code> that you're able to see the device and
use the disk without having to reboot ....</p>Java apps and AIGLX (compiz) gray screen problem2007-06-11T10:30:00+02:002007-06-11T10:30:00+02:00Fabian Arrotintag:arrfab.net,2007-06-11:/posts/2007/Jun/11/java-apps-and-aiglx-compiz-gray-screen-problem/<p>I had recently to test the newer Lotus Notes 8 beta 2 apps on my CentOS
5 laptop. The java installer starts but displays nothing more than a
gray screen. This is a known behaviour with java apps (the problem occurs
also with the xenserver-client package). The solution is to add the
following line to your ~/.bash_profile : <code>export AWT_TOOLKIT=MToolkit</code></p>
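If you want to script that change, something like this appends the line only once (the grep guard is my addition — a minimal sketch, not from the post):

```shell
# Append the AWT workaround to ~/.bash_profile, but only if it is
# not already there (-qs: quiet, and no error if the file is missing).
grep -qs 'AWT_TOOLKIT=MToolkit' ~/.bash_profile ||
  echo 'export AWT_TOOLKIT=MToolkit' >> ~/.bash_profile
```
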
<p>Log again and java apps will display correctly</p>AdobeReader 7.0.9 on CentOS 5 expr problem2007-05-14T11:48:00+02:002007-05-14T11:48:00+02:00Fabian Arrotintag:arrfab.net,2007-05-14:/posts/2007/May/14/adobereader-709-on-centos-5-expr-problem/<p>I like Evince to read pdf documents on CentOS 5 , but i wanted also to
use Adobe Reader too ... but each time I wanted to launch it, I
got an "expr: syntax error" while launching the /usr/bin/acroread
script. The solution is to edit this file and remove a check :
<code>check_gtk_ver_and_set_lib_path "$MIN_GTK_VERSION"</code> . After that,
it works ok ...</p>LVM on top of DRBD devices2007-04-19T15:24:00+02:002007-04-19T15:24:00+02:00Fabian Arrotintag:arrfab.net,2007-04-19:/posts/2007/Apr/19/lvm-on-top-of-drbd-devices/<p>I wanted to use lvm on top of drbd devices so that i can add more disks
as drbd resources and integrate them in a vg. <code>pvcreate /dev/drbd0</code> will
work ok, but if you use <code>vgcreate yourvol /dev/drbd0</code> it will complain
about a duplicate pv. So how do you use drbd devices with LVM ? The solution is
to use a filter option in /etc/lvm/lvm.conf to ignore the real devices and
use only the drbd devices . More information on the <a href="http://thread.gmane.org/gmane.linux.network.drbd/4813">drbd mailing list
archive</a></p>some extra rpms for CentOS 52007-04-13T19:34:00+02:002007-04-13T19:34:00+02:00Fabian Arrotintag:arrfab.net,2007-04-13:/posts/2007/Apr/13/some-extra-rpms-for-centos-5/<p>I needed several rpms i was not able to find in any other third-party
repo. These include pptpconfig (a pptp client I need to support some
customers relying on this poor protocol), brasero (a nice gtk burning
tool), audacity, etc.</p>
<p>So I've just (re)built them on CentOS 5. If you're interested, you can
grab them with yum by following the instructions on
<a href="http://rpms.arrfab.net/">http://rpms.arrfab.net</a> . Please note that
I've built these packages only for i386 (I have no x86_64 available at
this time) ...</p>CentOS 5 minimal install2007-04-13T07:35:00+02:002007-04-13T07:35:00+02:00Fabian Arrotintag:arrfab.net,2007-04-13:/posts/2007/Apr/13/centos-5-minimal-install/<p>As you've probably noted, the minimal install checkbox is missing from
anaconda on CentOS 5. But if you choose 'customize
now' and then unselect everything (including Base), you'll end up with a
594MB minimal installation. Then you can customize yourself what you
want to remove (like the Deployment guide) and what to add (with yum).</p>Compiz with Nvidia drivers on CentOS 52007-04-09T15:36:00+02:002007-04-09T15:36:00+02:00Fabian Arrotintag:arrfab.net,2007-04-09:/posts/2007/Apr/09/compiz-with-nvidia-drivers-on-centos-5/<p>I've just installed my main workstation at home with CentOS 5 ... this
one has an nvidia video card (GeForce 6600) and I installed the official
nvidia drivers ... but after that I was not able to enable
desktop-effects (compiz). You need to add a few things to your
/etc/X11/xorg.conf for compiz to work with the nvidia drivers ... here they are
(verify that they exist in your existing xorg.conf or add them) :</p>
<div class="highlight"><pre>Section "Module"
    Load "glx"
    Load "extmod"
EndSection

Section "Device"
    Identifier "Videocard0"
    Driver "nvidia"
    Option "AddARGBGLXVisuals" "True"
    Option "DisableGLXRootClipping" "True"
EndSection

Section "DRI"
    Group 0
    Mode 0666
EndSection
</pre></div>A friday afternoon test2007-04-06T13:13:00+02:002007-04-06T13:13:00+02:00Fabian Arrotintag:arrfab.net,2007-04-06:/posts/2007/Apr/06/a-friday-afternoon-test/<p>I had a foolish idea this friday afternoon. I was at the office and I
tested an upgrade from CentOS 4.4 i386 (32 bits) running on an IBM HS20
Blade server to CentOS 5.0 x86_64 ! Some will say (and I agree) that
it's a silly thing to do , but the sun was shining and I had to
reinstall the blade anyway ... But strangely, it worked ! I upgraded
with Anaconda (through an NFS install). The only problem I had was
related to udev and linuxwacom. I had to remove the old udev package
(that was left behind) and verify the newer one ... but after that
everything was running perfectly ! Of course that's something I'll never
do on production servers ... :o)</p>Simple file sharing tool2007-04-01T19:37:00+02:002007-04-01T19:37:00+02:00Fabian Arrotintag:arrfab.net,2007-04-01:/posts/2007/Apr/01/simple-file-sharing-tool/<p>Thanks to <a href="http://www.x-tend.be/~kb/blog/index.php">Kris' blog</a> , i
found woof ... This is a simple python script that will serve a specific
file over http when you just need to quickly share a file with
someone else . Useful when you need to share a file with a Redmond
fanatic once in a while and you don't want to deal with samba just
for this ... <a href="http://www.home.unix-ag.org/simon/woof.html">http://www.home.unix-ag.org/simon/woof.html</a></p>IBM Lotus Notes for linux sucks2007-03-23T13:06:00+01:002007-03-23T13:06:00+01:00Fabian Arrotintag:arrfab.net,2007-03-23:/posts/2007/Mar/23/ibm-lotus-notes-for-linux-sucks/<p>It's been a while that i've installed the Lotus Notes client for Linux
on my laptop ... but because I've reinstalled my laptop from CentOS 4.4
to CentOS 5 , I needed to set up this crappy client once again ...
'crappy client' ? yes . First of all, IBM decided to release the tool
only with its own installer (java based, of course) ... so no RPM
available (hmmm, that would have been too easy for us stupid end users I
guess ...) . The problem with such an installer is that it doesn't even
verify that you have the dependencies ... it only installs a java-based
workplace client (by default in /opt), but if you run it as root it will
set up the Notes plugin for the root account only ... (hehehe, the
Lotus Notes client for Linux is not multi-user aware yet ...) . So the
recommendation is to create a /opt/IBM directory and let the user modify
this directory ... So you launch the installation as a normal user, but
when it finishes you think you can now configure the client .... no !</p>
<p>Because no dependencies were checked, you have to use ldd and strace to
find what is missing .. (in my case , running ldd notes and ldd lnotes
under my ~/notes directory helped me find what it needs, and openmotif22
was the one I missed ...)</p>
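The ldd hunt described above can be wrapped in a small helper that prints only the unresolved libraries — a sketch, where the <code>check_libs</code> name and the awk filter are mine:

```shell
# Print the shared libraries a binary wants but the loader cannot
# resolve; empty output means nothing is missing.
check_libs() {
  ldd "$1" | awk '/not found/ { print $1 }'
}

# For the Notes case you would run it against ~/notes/notes and
# ~/notes/lnotes; /bin/sh is just a binary that is always around.
check_libs /bin/sh
```
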
<p>But the story continues : the java installer modifies your
~/.bash_profile with stupid things, like prepending
their software path to your existing $PATH ! So you
end up with a java vm in your path that is not the one you've configured
the proper way with /usr/sbin/alternatives !</p>
<p>I cleaned all this stuff up and ended up creating a (very) small script
that uses a modified PATH and LD_LIBRARY_PATH only for the Notes client
(oh yes, I forgot to mention that the .so files live in ~/notes ...).</p>
<p>Here it is :</p>
<div class="highlight"><pre>#!/bin/bash
PATH=$HOME/notes/jvm/bin:$HOME/notes:$PATH
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/notes/:$HOME/notes/jvm/bin/classic/ \
    /opt/IBM/Workplace\ Managed\ Client/rcp/richclient -personality com.ibm.workplace.noteswc.standalone.linux.personality
</pre></div>
<p>(the LD_LIBRARY_PATH assignment and the richclient invocation are a single
command line, wrapped above with a backslash for readability)</p>
<p>Conclusion : stupid, but it was easier to run the win32 client with wine
on Linux than the native client ! And IBM tries to push Linux ? ouch !</p>ext3 filesystem optimization2007-03-22T21:28:00+01:002007-03-22T21:28:00+01:00Fabian Arrotintag:arrfab.net,2007-03-22:/posts/2007/Mar/22/ext3-filesystem-optimization/<p>ext3 comes by default on a lot of linux distributions (even SuSE/Novell
is dropping reiserfs and switching back to ext3 now ...). It's a robust
filesystem but sometimes people complain that it's maybe
not the fastest fs around ... Jim Perrin wrote a good overview of
possible tweaks for ext3 on the CentOS wiki :
<a href="http://wiki.centos.org/HowTos/Disk_Optimization">http://wiki.centos.org/HowTos/Disk_Optimization</a></p>CentOS at Linux World Expo in Belgium2007-03-20T06:30:00+01:002007-03-20T06:30:00+01:00Fabian Arrotintag:arrfab.net,2007-03-20:/posts/2007/Mar/20/centos-at-linux-world-expo-in-belgium/<p>Thanks to <a href="http://www.x-tend.be">X-tend</a> , CentOS will be present at the
<a href="http://www.linuxworldexpo.be">Linux World Expo</a> in Belgium this year.
We (Dag Wieers and myself) will distribute flyers and try to show CentOS
5 in action. This was not really scheduled, but it's always a good idea
to show CentOS to people who have never heard of it ... especially when you
know that, unlike Fosdem, which attracts more geeks, the Linux
World Expo tends to attract Enterprise Linux users (by inviting
commercial companies like Redhat, Novell, and companies offering support
for linux in belgium ). We hope people already using the upstream
distribution will at least become aware that CentOS exists ... Pictures and
comments will come after the event ...</p>Fosdem 2007 is over2007-03-01T21:36:00+01:002007-03-01T21:36:00+01:00Fabian Arrotintag:arrfab.net,2007-03-01:/posts/2007/Mar/01/fosdem-2007-is-over/<p>I had the chance this year to participate with the CentOS team ... too
many people at 'le roy d'espagne' cafe for the beer event, so we had to
move to another bar ... some
<a href="http://wiki.centos.org/Events/Fosdem2007">pictures</a> are available on my
website and my presentation is available (with other centos fosdem team
members presentations) on the <a href="http://wiki.centos.org/Events/Fosdem2007">CentOS
wiki</a></p>vga= in grub.conf2007-02-17T16:57:00+01:002007-02-17T16:57:00+01:00Fabian Arrotintag:arrfab.net,2007-02-17:/posts/2007/Feb/17/vga-in-grubconf/<p>Each time I want to use the framebuffer in console
mode I have to search the kernel vesafb documentation for the possible vga=
entries.</p>
<p>Here they are :</p>
<div class="highlight"><pre>      | 640x480  800x600  1024x768  1280x1024
------+---------------------------------------
 256  |  0x301    0x303     0x305     0x307
 32k  |  0x310    0x313     0x316     0x319
 64k  |  0x311    0x314     0x317     0x31A
 16M  |  0x312    0x315     0x318     0x31B
</pre></div>
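If you prefer the decimal form that grub also accepts, printf does the hex conversion in one line — a quick sketch:

```shell
# Convert a vesafb hex mode number to the decimal value that
# vga= in grub.conf also accepts.
printf '%d\n' 0x317   # → 791
```
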
<p>You can use the hex value as-is or convert it to decimal : 0x317 then becomes
791</p>Software raid1 and grub failing on CentOS 4.x2007-02-17T10:18:00+01:002007-02-17T10:18:00+01:00Fabian Arrotintag:arrfab.net,2007-02-17:/posts/2007/Feb/17/software-raid1-and-grub-failing-on-centos-4x/<p>I don't like to play with software raid usually (i prefer real hardware
raid cards on servers ..) but I had to set up a small machine with 2 sata
disks in raid 1. The problem is that after the reboot grub will fail to
load the kernel (grub error 15).</p>
<p>The solution is to boot into rescue mode , issue a chroot /mnt/sysimage and
set up grub manually on each drive :</p>
<div class="highlight"><pre>grub
grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub> setup (hd1)
grub> quit
</pre></div>
<p>Reboot the machine and you'll have grub on each raid device ...</p>Newer sata controllers on CentOS 4.x and 'all-generic-ide'2007-02-17T10:14:00+01:002007-02-17T10:14:00+01:00Fabian Arrotintag:arrfab.net,2007-02-17:/posts/2007/Feb/17/newer-sata-controllers-on-centos-4x/<p>I had recently to setup CentOS 4.4 on a Acer desktop machine. Problem
was that this machine (Acer T180) contains an nvidia nforce chipset that
neither the centos default kernel nor the nvidia nforce driver disk
(http://www.nvidia.com/object/linux_nforce_1.21.html) supports ...
The solution was to pass the 'all-generic-ide' parameter to the kernel ...
it treats sata disks as ide but it works ... and performance seems
roughly the same ... tip : don't forget to add this parameter to
grub.conf too so that your machine will reboot after the setup part
....</p>Belgian eID card under CentOS Linux2007-02-02T22:19:00+01:002007-02-02T22:19:00+01:00Fabian Arrotintag:arrfab.net,2007-02-02:/posts/2007/Feb/02/belgian-eid-card-under-centos-linux/<p>I have to do a presentation for the next <a href="http://www.fosdem.org">fosdem</a>
(24-25 february 07) about using the belgian eID card under CentOS Linux.
I'm busy with <a href="http://dag.wieers.com">Dag Wieers</a> building the (provided
by the belgian gov) sources as rpm packages, but we need to patch them
because some sources contain hard-coded values (shame on them ...) .
I'll post when everything is ready and hosted at
<a href="http://www.rpmforge.net">rpmforge</a></p>The ultimate solution for Linux problems2007-01-21T09:05:00+01:002007-01-21T09:05:00+01:00Fabian Arrotintag:arrfab.net,2007-01-21:/posts/2007/Jan/21/the-ultimate-solution-for-linux-problems/<p><strong><em>echo '16i[q]sa[ln0=aln100%Pln100/snlbx]sbA0D4D465452snlbxq' | dc</em></strong></p>
<p>This command will do the job for you for 99% of your problems ...</p>Rsync with acl support on CentOS2007-01-06T15:48:00+01:002007-01-06T15:48:00+01:00Fabian Arrotintag:arrfab.net,2007-01-06:/posts/2007/Jan/06/rsync-with-acl-support-on-centos/<p>If you use ACLs on CentOS and want to keep your data synchronized with
rsync, you probably already know that the standard supplied rsync
doesn't support acls ... you then have to use getfacl and setfacl on the
target machine to replicate your acls .. but great news : an rsync rpm
with acl support is in the testing phase on
<a href="http://dev.centos.org/centos/4/testing/i386/">http://dev.centos.org/centos/4/testing/i386/</a></p>Vmware server on Fedora Core 62007-01-05T22:28:00+01:002007-01-05T22:28:00+01:00Fabian Arrotintag:arrfab.net,2007-01-05:/posts/2007/Jan/05/vmware-server-on-fedora-core-6/<p>I had recently to setup vmware server on a Fedora Core 6 but i
encountered some problems during the vmware-config.pl post-setup step :
it complained about linux.h ... The workaround was to run <code>touch
/usr/src/kernels/$(uname -r)/include/linux/linux.h</code></p>
<p>vmware-config.pl was happy once that file was created. The file was
removed from the kernel tree in newer kernels but vmware-config.pl was still
searching this file ...</p>Free EU petition ...2007-01-05T15:54:00+01:002007-01-05T15:54:00+01:00Fabian Arrotintag:arrfab.net,2007-01-05:/posts/2007/Jan/05/free-eu-petition/<p>Why do i need to use M\$ windows and media player to watch some eu
movies ? sign the petition too :
http://www.petitionspot.com/petitions/eu_streaming_service_for_everybody</p>CentOS 5 beta released very soon ?2007-01-04T23:29:00+01:002007-01-04T23:29:00+01:00Fabian Arrotintag:arrfab.net,2007-01-04:/posts/2007/Jan/04/centos-5-beta-released-very-soon/<p>it seems so : have a look on z00dax's blog , a CentOS developer :
<a href="http://www.karan.org/blog/index.php/2007/01/03/centos_5_first_install">http://www.karan.org/blog/index.php/2007/01/03/centos_5_first_install</a></p>About me2007-01-04T22:12:00+01:002007-01-04T22:12:00+01:00Fabian Arrotintag:arrfab.net,2007-01-04:/posts/2007/Jan/04/about/<p>As already explained , i needed a kind of blog to write some of my linux
experiences down .. that's all folks ...</p>
<p><strong><em><img alt="arrfab" src="http://www.arrfab.net/blog/wp-content/uploads/2007/01/arrfab.png" title="arrfab"><br>
</em></strong>
----------------------------------------------------------------------------------------</p>
<h2>Personalia</h2>
<hr>
<ul>
<li>Name: Fabian Arrotin (Arrfab) </li>
<li>Date of birth: 27 may 1976 </li>
<li>Marital Status : Married , 3 children</li>
</ul>
<hr>
<h2>Work Experience</h2>
<ul>
<li><a href="http://www.redhat.com">Red Hat</a> ( 18/11/2014 -> now )</li>
</ul>
<p>working as a SysAdmin for the CentOS Project infrastructure</p>
<ul>
<li><a href="http://www.m-team.be/jsp/index.jsp">M-team</a> (14/05/2012 ->
15/11/2014)</li>
</ul>
<p>Designing the infrastructure around the following products/technologies :</p>
<ul>
<li>Red Hat Enterprise Linux (RHEL)</li>
<li>CentOS Linux</li>
<li>IBM Hardware (servers and storage)</li>
</ul>
<p>Evaluating migration paths for the installed base and minimizing
downtime by using automation tools (kickstarts/config
management/centralized authentication)</p>
<ul>
<li><a href="http://www.sicli.be">Sicli</a>(01/04/2010 -> 11/05/2012)</li>
</ul>
<ul>
<li>Systems Administrator (IBM servers and IBM SAN Storage solutions)</li>
<li>Operating Systems maintenance (CentOS Linux and Microsoft Windows 2003)</li>
<li>Network administration : HP Procurve l2/l3 - Vlans</li>
<li>Virtualization : Citrix XenServer , kvm</li>
</ul>
<ul>
<li>
<p><a href="http://www.ibsts.be/">IBS Technology & Services</a>(15/10/1998 ->
31/03/2010)<br>
IBS T&S is the Premier IBM Business Partner in Belgium for the iSeries
(formerly known as AS400) Platforms.<br>
I work in the Networking team on Intel platforms : designing and
implementing solutions that meet the customer's needs. Previously I was
implementing mostly Microsoft and Citrix solutions but now I try (and
hopefully win) to convince customers that Linux can bring them what
they are waiting for ... : Samba/Ldap as a file and authentication
server, Sendmail/IMAP for Mail serving, Squid for secure web browsing,
Spamassassin (with plugins) for Anti-Spam, Mailscanner for Anti-Virus
mail scanning, Iptables for IP Packets filtering, OpenVPN and Poptop for
Secure VPN Access ,DRBD/Heartbeat for HA Linux Clusters, etc ....<br>
I'm also setting up Hardware solutions like IBM BladeCenters/Servers,
Storage solutions (Fiber or iScsi), L2/L3 Switches configuration
(HP,Cisco), etc ...</p>
</li>
<li>
<p><a href="http://www.systemat.com/">Systemat</a> (01/07/1997 -> 14/10/1998)<br>
I Worked as a Technical consultant in the TIC (Technical Integration
Center) : Assembly and Installation of Compaq/IBM/HP Servers and
Workstations. Setup of Operating Systems.</p>
</li>
</ul>
<h2>Education</h2>
<ul>
<li>IPv6 Certification (Level : Sage) :
<a href="http://ipv6.he.net/certification/scoresheet.php?pass_name=arrfab">He.net</a>
(01/2010)</li>
<li>Red Hat Linux Certified Engineer (RHCE) <a href="https://www.redhat.com/wapps/training/certification/verify.html;?certNumber=804007120924163&verify=Verify">Redhat verification
website</a>
CertNumber : 804007120924163 (01/2007)</li>
<li>IBM Official Instructor (08/2006)</li>
<li>Linux Professional Institute Level 1 <a href="http://www.lpi.org/">(LPIC 1)</a>
Certified - (12/2003)</li>
<li>Citrix Certified Administrator
<a href="http://www.citrix.com/English/SS/education/certtrack.asp?contentID=23727">(CCA)</a>
- (03/2002)</li>
<li>Microsoft Certified System Engineer
<a href="http://www.microsoft.com/learning/mcp/mcse/default.asp">(MCSE)</a>
Windows 2000 - (09/2001)</li>
<li>Microsoft Certified System Engineer (MCSE) NT 4.0 - (11/1999)</li>
<li>Graduate in Accountancy - IESET Tamines (Belgium) - (06/1997)</li>
<li>College - Athenee Royal Solvay Charleroi - Latin-Maths-Sciences -
(06/1994)</li>
</ul>
<h2>Hobbies</h2>
<ul>
<li>Spending time with my wife and my 3 kids ... </li>
<li>Spending time on my computer(s) running Linux when my wife and my
kids are sleeping .... :o) </li>
<li>Contributor to the <a href="http://www.centos.org/">CentOS project</a> and the
<a href="http://www.rpmforge.net/">RPMforge project</a> (PPC builds)... </li>
<li>Listening to Blues and Jazz music and playing the guitar on one of my
<a href="../images/guitars.jpg">electric ones</a></li>
</ul>
<h2>Geek Code</h2>
<div class="highlight"><pre><span></span><span class="o">-----</span><span class="nv">BEGIN</span> <span class="nv">GEEK</span> <span class="nv">CODE</span> <span class="nv">BLOCK</span><span class="o">-----</span>
<span class="nv">Version</span>: <span class="mi">3</span>.<span class="mi">12</span>
<span class="nv">GB</span><span class="o">/</span><span class="nv">CM</span><span class="o">/</span><span class="nv">MU</span> <span class="nv">d</span><span class="o">+</span> <span class="nv">s</span>: <span class="nv">a</span> <span class="nv">C</span><span class="o">+++</span> <span class="nv">UL</span><span class="o">++</span>$ <span class="nv">P</span><span class="o">+</span> <span class="nv">L</span><span class="o">+++</span> <span class="nv">E</span><span class="o">---</span>
<span class="nv">W</span><span class="o">+++</span> <span class="nv">N</span><span class="o">+</span> <span class="nv">o</span><span class="o">-</span> <span class="nv">K</span><span class="o">-</span> <span class="nv">w</span><span class="o">-</span> <span class="nv">O</span> <span class="nv">M</span> <span class="nv">V</span><span class="o">-</span> <span class="nv">PS</span> <span class="nv">PE</span> <span class="nv">Y</span><span class="o">+</span> <span class="nv">PGP</span><span class="o">++</span> <span class="nv">t</span> <span class="mi">5</span>? <span class="nv">X</span>
<span class="nv">R</span><span class="o">-</span> <span class="nv">tv</span><span class="o">-</span> <span class="nv">b</span> <span class="nv">DI</span> <span class="nv">D</span><span class="o">--</span> <span class="nv">G</span> <span class="nv">e</span><span class="o">++</span> <span class="nv">h</span><span class="o">----</span> <span class="nv">r</span><span class="o">+++</span> <span class="nv">y</span><span class="o">+++</span>
<span class="o">------</span><span class="k">END</span> <span class="nv">GEEK</span> <span class="nv">CODE</span> <span class="nv">BLOCK</span><span class="o">------</span>
</pre></div>
<p><a href="http://www.geekcode.com/geek.html">Translate this Geek Code</a></p>Yep, another blog .... :o)2007-01-04T22:12:00+01:002007-01-04T22:12:00+01:00Fabian Arrotintag:arrfab.net,2007-01-04:/posts/2007/Jan/04/hello-world/<p>I really hated blogs in the past ... but I admit that sometimes I find
something really useful, and if I don't write it down somewhere it will
shortly be forgotten .... so this blog will exist only as a little
browseable and searchable 'knowledgebase' of linux tips and
tricks ... but don't count on me to explain what I eat in the morning or the
color of my socks .... :)</p>