OpenLMI on YouTube

OpenLMI seems to provide a genuinely useful way to manage enterprise Linux systems. With everyone talking about cloud computing, OpenLMI can do more than I expected for elastic computing workloads.

Red Hat Enterprise Linux Blog

For anyone who may not have read my previous posts on OpenLMI, or who may have never visited my blog (TechPonder): OpenLMI is a new management framework for Linux.

The most common initial questions about OpenLMI are:



Red Hat Enterprise Linux OpenStack Platform 4 Installation with Packstack

There are many ways to install and deploy OpenStack, but packstack seems rarely used for Red Hat Enterprise Linux OpenStack Platform (hereafter RHEL OSP) 4 "Havana". For my personal test, I used packstack to install RHEL OSP 4, and it took only about 30 minutes. That's how easy it is.

Here is my target architecture.

  • 1 x Control Node : Keystone, Glance, Cinder, Neutron, Heat (No Swift)
  • 2 x Compute Nodes : Nova



  • 3 Physical Machines or 3 Virtual Machines (KVM Nested)
  • Red Hat Enterprise Linux 6.5 ISO
  • Red Hat Enterprise Linux OpenStack Platform 4

More physical machines always give you more opportunity to learn about OpenStack, so I recommend using as many machines as possible. However, installing RHEL OSP 4 on nested KVM VMs is good enough for a taste of it.

I am not going to describe 1) hardware or nested KVM preparation or 2) RHEL 6.5 installation. Since RHEL OSP 4 was released after RHEL 6.5, RHEL 6.5 needs to be updated to the latest packages.


Configure the Control Node

I didn't use an external DNS server, so I put all hostnames into the /etc/hosts file.

# vi /etc/hosts  control compute1 compute2
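Since there is no DNS, each node resolves the others from /etc/hosts. A minimal sketch, assuming addresses on the API network (192.168.30.x) used later in this post; substitute your own IPs:

```
192.168.30.10   control
192.168.30.11   compute1
192.168.30.12   compute2
```

The same entries should go into /etc/hosts on every node.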

Next, configure the fake external network. At this point, RHEL 6.5 does not yet understand the OVSPort and OVSBridge types, but they will work after the RHEL OSP 4 installation.

# vi /etc/sysconfig/network-scripts/ifcfg-eth0   # API/OpenStack network

# vi /etc/sysconfig/network-scripts/ifcfg-eth1   # GRE tenant network

# vi /etc/sysconfig/network-scripts/ifcfg-eth2   # uplink for the fake external network

# vi /etc/sysconfig/network-scripts/ifcfg-br-ex  # OVS bridge for the fake external network
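As a sketch, the two external-network files might look like this, using the OVSPort/OVSBridge device types from the Open vSwitch initscripts; the addresses here are assumptions, so use your own external range:

```
# /etc/sysconfig/network-scripts/ifcfg-eth2 -- uplink enslaved to br-ex
DEVICE=eth2
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSPort
OVS_BRIDGE=br-ex

# /etc/sysconfig/network-scripts/ifcfg-br-ex -- the fake external bridge
DEVICE=br-ex
ONBOOT=yes
DEVICETYPE=ovs
TYPE=OVSBridge
BOOTPROTO=static
IPADDR=192.168.40.10        # assumed external address
NETMASK=255.255.255.0
```

The IP address lives on br-ex, not on eth2, since eth2 is only a port of the bridge.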


Install Control Node with packstack

# yum -y install openstack-packstack
# packstack --allinone \
  --timeout=600 \
  --keystone-admin-passwd=rhelosp4 \
  --provision-all-in-one-ovs-bridge=n \
  --os-swift-install=n

"--timeout" was needed when installing RHEL OSP 4 in nested KVM VMs: I used my laptop, which is not a powerful workstation, so packstack could otherwise fail with a timeout.

To use the existing bridge for the fake external network, you need to add the "--provision-all-in-one-ovs-bridge=n" option.

It's all done. You are now ready to use RHEL OSP 4.

# firefox

There was a bug in the early RHEL OSP 4 version I used that blocks you from logging in to the dashboard. To work around it, create a Member role.

# source ~/keystonerc_admin
# keystone role-create --name=Member

Let's check whether OpenStack's integration bridge (br-int) and external bridge (br-ex) are properly configured.

# ovs-vsctl show
    Bridge br-int
        Port br-int
            Interface br-int
                type: internal
        Port "tapfd3ab27e-f0"
            tag: 1
            Interface "tapfd3ab27e-f0"
                type: internal
    Bridge br-ex                    # br-ex is connected to the eth2 interface
        Port br-ex
            Interface br-ex
                type: internal
        Port "eth2"
            Interface "eth2"
    ovs_version: "1.11.0"


Configure GLANCE in control node

I added two new disks to the control node, one for the Glance service and one for Cinder.

# mkfs.ext4 /dev/sdx
# vi /etc/fstab
/dev/sdx     /var/lib/glance/images/        ext4    defaults     1 2

# mount -a
# chown glance.glance /var/lib/glance/images

You can download pre-built images for OpenStack from

Then, add the downloaded images into your glance service.

# glance image-create --name rhel65x64 --is-public true --disk-format qcow2 --container-format bare --file ./rhel-guest-image-6.5-20140630.0.x86_64.qcow2


Configure CINDER in control node

Packstack automatically created a volume group named 'cinder-volumes' for Cinder, but I will re-create it on the newly added disk.

# vgremove cinder-volumes
# pvcreate /dev/sdy
# vgcreate cinder-volumes /dev/sdy


Add a new compute node

To make things easier, use the same order of network interfaces on all machines: eth0 on every machine should be connected to the OpenStack API network (192.168.30.x) and eth1 to the GRE tenant network (192.168.50.x). Compute nodes don't need to be connected to the fake external network in my test environment.

Configure GRE for the Neutron OVS plugin.
Note: this will be replaced by the ML2 plugin in OSP 5.

Packstack created an answer file during the first run, and I will update it for the new compute node.
Note: ignore all NOVA_NETWORK settings, since we are using OVS instead.

# vi packstack-answer-xxxx.txt
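The exact keys depend on the packstack version, but the edits for adding a compute node with GRE tenant networking are roughly these (key names as I recall them from the OSP 4-era answer file; verify against your own generated file):

```
# add the new compute node(s) to the existing list of API addresses
CONFIG_NOVA_COMPUTE_HOSTS=192.168.30.11,192.168.30.12

# GRE tenant networking over the eth1 interface
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=gre
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=1:1000
CONFIG_NEUTRON_OVS_TUNNEL_IF=eth1
```

The addresses above are the assumed API-network IPs of the compute nodes.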

Re-run packstack with updated answer file.

# packstack --answer-file=packstack-answers-xxxx.txt

Let's check whether your new compute node was successfully added.

# source ~/keystonerc_admin
# nova service-list

You can update the answer file to add another compute node.


Enjoy RHEL OSP 4

To complete the installation, you probably need to add a router and public and private networks, and link them together. This can be done via the dashboard or the CLI.

Here is a CLI example.

1) Router and Internal Network

# neutron router-create router1
# neutron net-create private
# neutron subnet-create private --name internal        # for the tenant network
# neutron router-interface-add router1 internal

2) External and Router

# neutron net-create public --router:external=True
# neutron subnet-create public --name external --enable_dhcp=False --allocation-pool start=,end= --gateway=
Note: this range will be used for floating IPs, which are associated with instances later.
# neutron router-gateway-set router1 public

3) Floating IP

# neutron floatingip-list
# neutron floatingip-create public
# neutron port-list
# neutron floatingip-associate <IP> <Port>


I hope this article helps you enjoy RHEL OSP 4.


drop_caches needs to be used more carefully

Many people who move from Unix to Linux are surprised by how much Linux loves its caches. Then one of those fearsome "backup" solutions runs overnight, generating massive I/O, cache usage explodes, swap-out follows, and performance degrades, so the cache becomes the "absolute evil" and people start looking for a way to blow it away. In the end they find peace with a single shot of /proc/sys/vm/drop_caches.

Normally there is little reason to use drop_caches, but for I/O-heavy work such as backup solutions or batch jobs, running it once or twice before and after the job can help system operation. However, if you are on RHEL/CentOS 4, or 5.3 or earlier, you should especially avoid it, since it can cause a kernel panic or hang. Even on other versions, unless it is really necessary, controlling the cache indirectly with vm.vfs_cache_pressure and vm.swappiness is probably the path to a long life.
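For reference, here is a minimal before/after sketch of a drop_caches run. Writing to drop_caches requires root; the value 1 drops the page cache, 2 drops dentries and inodes, and 3 drops both:

```shell
#!/bin/sh
# Page-cache size before dropping
grep '^Cached:' /proc/meminfo

sync    # write dirty pages out first; dirty pages cannot be dropped
echo 3 > /proc/sys/vm/drop_caches || echo "writing to drop_caches requires root"

# Page-cache size after dropping
grep '^Cached:' /proc/meminfo
```

Running sync first matters: drop_caches only discards clean pages, so anything still dirty would survive the drop.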

Tickless Kernel


Starting with RHEL/CentOS 6, a tickless kernel is used. Even when a CPU is idle, it previously had to wake up periodically (1000 times per second, at 1000 HZ) to handle the timer tick, which is inefficient from a power-management point of view. The dynamic tick feature was introduced to fix this: when the CPU is idle, even the timer tick stops, so the CPU can drop into a deep sleep state and reduce power consumption.

This feature was merged as a patch in upstream kernel 2.6.21, together with the introduction of high-resolution timers. Of course, there is a downside: waking a sleeping CPU incurs significant latency, so it is not suitable for low-latency environments.

Then, starting with kernel 3.10, a feature called full NOHZ was added: a CPU can now be tickless even when it is not idle. That is, when only one process is running on a CPU, the timer can be reduced to 1 HZ, cutting the overhead of timer-tick handling. Although both are tickless features, full NOHZ focuses on the performance benefit rather than on power management.

RHEL 7 includes kernel 3.10 as its base version, so it supports the full NOHZ feature as well.
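As a sketch of how full NOHZ is turned on for specific CPUs on RHEL 7 (the kernel supports it, but it is not active per-CPU by default): boot with the nohz_full= kernel parameter, keeping at least one housekeeping CPU that still takes the regular tick. The CPU list below is an assumption for a 4-CPU machine:

```
# /etc/default/grub -- append to the kernel command line, then regenerate grub.cfg
GRUB_CMDLINE_LINUX="... nohz_full=1-3 rcu_nocbs=1-3"
```

Here CPU 0 remains the housekeeping CPU for timekeeping, while CPUs 1-3 can run tickless when they have only a single runnable task; rcu_nocbs offloads RCU callbacks from those same CPUs, which is commonly paired with nohz_full.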