CentOS 7 armv7hl build in progress

As more and more people were showing interest in CentOS on the ARM platform, we thought it would be a good idea to start building CentOS 7 for that platform. Jim started with arm64/aarch64 and got an alpha build ready and installable.

On my end, I configured some armv7hl nodes "donated" to the project by Scaleway. The first goal was to set up some Plague builders to distribute the jobs across those nodes, which is now done. The next step was a "self-contained" buildroot, so that all other packages can be rebuilt against that buildroot alone: first gcc from CentOS 7 (latest release, with better ARM support), then glibc, etc, etc ... That buildroot is now done and is available here.

Now the fun has started (meaning that 4 armv7hl nodes are currently (re)building a bunch of SRPMs) and you can follow the status on the arm-dev list if you're interested, or, even better, join the party and have a look at the build logs for packages that failed to rebuild. The first target is a "minimal" working install, so basically having sshd/yum working. Then try other things like ...

Hacking initrd.img for fun and profit

During my presentation at Loadays 2015, I mentioned some tips and tricks around Anaconda and kickstart, and how to deploy CentOS fully automated. I asked the audience where to store the kickstart file that anaconda would then use to install CentOS (the same works for RHEL/Fedora), and I got several answers, like "on the http server" or "on the ftp server", which is where most people put their kickstart files. Bonus points for those who generate those files "dynamically" through $cfgmgmt (I use Ansible with a Jinja2 template for this).
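As a trivial illustration of that "dynamic" generation (the real setup uses Ansible + Jinja2; the placeholder names and values below are made up for the example), you can render a per-node kickstart from a template:

```shell
# hypothetical template: @HOSTNAME@ and @ROOTPW@ are placeholders
cat > ks.cfg.tmpl <<'EOF'
network --hostname=@HOSTNAME@
rootpw --iscrypted @ROOTPW@
EOF
# render one kickstart per node (values are examples)
sed -e 's/@HOSTNAME@/node01.example.com/' \
    -e 's,@ROOTPW@,$6$examplesalt$examplehash,' ks.cfg.tmpl > ks.cfg
```

With a real template engine you'd obviously loop over your inventory instead of hardcoding one host.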

But it's not mandatory to host your kickstart file on a publicly available http/ftp/nfs server, and surely not when having to reinstall nodes that are not in the same DC. Within the infra, I sometimes have to reinstall remote nodes ("donated" to the Project) from CentOS 5 or 6 to 7. That's where injecting your ks file directly into the initrd.img really helps (yes, no network server needed). Just as an intro, here is how you can remotely trigger a CentOS install without any medium/iso/pxe environment: basically you just need to download the pxeboot images ...

More builders available for Koji/CBS

As you probably know, the CentOS Project now hosts the CBS effort (aka Community Build System), which is used to build all packages for the CentOS SIGs.

There was already one physical node dedicated to Koji Web and Koji Hub, and another node dedicated to the build threads (koji-builder). As we now have more people building packages, we thought it was time to add more builders to the mix, and here we go: two additional machines are now dedicated to Koji/CBS.

Those added nodes each have 2 * Intel(R) Xeon(R) CPU E5-2650 0 @ 2.00GHz (8 cores/socket, with Hyper-Threading enabled) and 32GB of RAM. Let's see how the SIG members will keep those builders busy, throwing a bunch of interesting packages at the CentOS Community :-) . Have a nice week-end!

Provisioning quickly nodes in a SeaMicro chassis with Ansible

Recently I had to quickly test and deploy CentOS on 128 physical nodes, just to validate the hardware and verify that all currently "supported" CentOS releases could be installed quickly when needed. The interesting bit is that it was a completely new infra, without any traditional deployment setup in place, so obviously, as sysadmins, we directly think about pxe/kickstart, which is trivial to set up. That was the first time I had to "play" with SeaMicro devices/chassis though, and so to understand how they work (the SeaMicro 15K fabric chassis, to be precise). One thing to note is that those SeaMicro chassis don't provide a remote VGA/KVM feature (but who cares, as we'll automate the whole thing, right ?), but they instead provide either CLI (ssh) or REST API access to the management interface, so that you can quickly reset/reconfigure a node, change its VLAN assignment, and so on.

It's not a secret that I like to use Ansible for ad-hoc tasks, and I thought that it would (again) be a good tool for that quick task. If you have used Ansible already, you know that you have to declare your nodes, and optionally variables (not mandatory, but really useful), in ...
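A minimal static inventory for such a batch of nodes could look like this (the hostnames and group name below are made up); Ansible's bracket ranges make declaring 128 hosts a one-liner:

```shell
# write a minimal static inventory (hostnames are hypothetical)
cat > hosts <<'EOF'
[seamicro]
node[001:128].example.com

[seamicro:vars]
ansible_ssh_user=root
EOF
# an ad-hoc sanity check against all 128 nodes would then be:
#   ansible -i hosts seamicro -m ping
```

From there, ad-hoc modules (or a small playbook) can drive the whole post-install configuration.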

Switching from Ethernet to Infiniband for Gluster access (or why we had to ...)

As explained in my previous (small) blog post, I had to migrate a Gluster setup we have within infra. As said in that previous blog post too, Gluster is really easy to install, and sometimes it can even "smell" too easy to be true. One thing to keep in mind when dealing with Gluster is that it's a "file-level" storage solution, so don't try to compare it with "block-level" solutions (so typically a NAS vs SAN comparison, even if "SAN" itself is the wrong term for such a discussion, as a SAN is what sits *between* your nodes and the storage itself; just a reminder).

Within infra, we have a multi-node Gluster setup that we use for multiple things at the same time. The Gluster volumes are used to store some files, but also to host (on different Gluster volumes with different settings/ACLs) KVM virtual disks (qcow2). People who know me will say: "hey, but for performance reasons, it's faster to just dedicate, for example, a partition or a Logical Volume instead of using qcow2 images sitting on top of a filesystem for Virtual Machines, right ?" and that's true. But with our limited number of machines, and a ...

Updating to Gluster 3.6 packages on CentOS 6

Yesterday I had to do some maintenance on our Gluster nodes used within infra. Basically I had to reconfigure some Gluster volumes to use Infiniband instead of Ethernet. (I'll write a dedicated blog post about that migration later.)

While a lot of people directly consume packages from the upstream Gluster repositories, you'll (soon) also be able to install those packages directly on CentOS, through packages built by the Storage SIG. At the moment I'm writing this blog post, gluster 3.6.1 packages are built and available on our Community Build Server (Koji) setup, but still in testing (and unsigned).

"But wait, there are already glusterfs packages tagged 3.6 in CentOS 6.6, right ? " will you say. Well, yes, but not the full stack. What you see in the [base] (or [updates]) repository are the client packages, as for example a base CentOS 6.x can be a gluster client (through fuse, or libgfapi - really interesting to speed up qemu-kvm instead of using the default fuse mount point ..) , but the -server package isn't there. So the reason why you ...

Koji - CentOS CBS infra and sslv3/Poodle important notification

As most of you already know, there is an important SSLv3 vulnerability (CVE-2014-3566), known as POODLE.
While it's easy to disable SSLv3 in the allowed protocols at the server level (for example SSLProtocol All -SSLv2 -SSLv3 for apache), some clients still default to SSLv3, and Koji is one of them.
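On the apache side that's a one-line change (the file path below is the typical mod_ssl location on CentOS, adjust to your setup):

```apache
# /etc/httpd/conf.d/ssl.conf
SSLProtocol all -SSLv2 -SSLv3
```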

We have now disabled SSLv3 on our Koji instance, so if you're a cbs/koji user, please adapt your local koji package (local fix !).
At the moment there is no upstream package available, but the following patch has also been tested by Fedora people (and credits go to

  ---    2014-10-15 11:42:54.747082029 +0200
  +++    2014-10-15 11:44:08.215257590 +0200
  @@ -37,7 +37,8 @@
           if f and not os.access(f, os.R_OK):
               raise StandardError, "%s does not exist or is not readable" % f

  -    ctx = SSL.Context(SSL.SSLv3_METHOD)   # SSLv3 only
  +    #ctx = SSL.Context(SSL.SSLv3_METHOD)   # SSLv3 only
  +    ctx = SSL.Context(SSL.TLSv1_METHOD)   # TLSv1 only
  @@ -45,7 +46,8 @@
      ctx.set_verify ...

CentOS Mirrors "Spring Clean-up operation"

Just to let you know that I verified some mirrors last week and sent several mails to the contact addresses we had for mirrors that were unreachable or far behind.
I've received feedback from some people still willing to be listed as a third-party mirror, and they fixed the issues they had (thank you !).

Some other people replied with a "sorry, we can't host a mirror anymore" answer. (Thanks for replying to my email and for having been part of the successful "centos mirror party" !)

For the "unanswered" ones, I've decided that it was time to launch a "Spring clean-up operation" in the mirrors DB/Network.
I've removed them from the DB, meaning that the crawler process we use to detect bad/unreachable mirrors will not even try anymore to verify them.
We actually have more than 500 external (third-party) mirrors serving CentOS to the whole world, not counting the 50+ servers (managed by CentOS) used to feed those external mirrors, which sometimes also serve content directly for less-covered countries.

Thanks a lot for your collaboration and support ! We love you :-)

CentOS Dojo Lyon (France)

As you may (or may not !) know, we will hold a CentOS Dojo in Lyon on Friday, April 11th. So if you feel like sharing your experience around CentOS, by giving a presentation for example, or if you just want to come and have a good time with us listening to the scheduled talks (a subliminal call for volunteer speakers !), feel free to register.
Registration is free ! More information is on the Wiki page.

Hi people, are you in the Lyon (France) area around April 11th ? Willing
to come to a CentOS Dojo ? (either to attend it or even better, present
something around CentOS ?) . Feel free to register for this free event !

IPv6 vs IPv4 usage for the new website [ Stats ! ]

So, everybody now knows the whole story and has visited the new CentOS website. It's always a good time to keep an eye on statistics, and we have now also added native IPv6 support ! (Finally ! We live in 2014, right ?) And because we "love" stats, here they are (IPv4 vs IPv6):

IPv4 traffic for the new website :

IPv4 usage

IPv6 traffic for the new website :

IPv6 usage

So clearly there's not much IPv6 traffic compared to IPv4. Join the IPv6 movement !