[rudder-users] IO on vmware
Prestasit01
prestasit01 at ouest-france.fr
Thu Oct 3 09:55:46 CEST 2013
Hi, thanks for your reply.
In addition: a simple update from 2.4.3 to 2.4.8 did not change anything; uninstalling 2.4.3 and installing the latest version was the solution.
BR
From: Nicolas Charles [mailto:nicolas.charles at normation.com]
Sent: Wednesday, October 2, 2013 17:57
To: Prestasit01
Cc: 'Matthieu CERDA'; 'rudder-users at lists.rudder-project.org'
Subject: Re: [rudder-users] IO on vmware
Hello,
I'm glad that upgrading helped lower the I/O!
I didn't realize you were still using 2.4; moving to 2.6 should lower the I/O again by changing the database backend from BDB to TokyoCabinet.
As for the warning message, it appears because you mounted /var/rudder/cfengine-community/state with mode 775 rather than 770.
There's a typo in the blog post; the entry to put in /etc/fstab should be:
# Tmpfs for the CFEngine state backend storage directory
tmpfs /var/cfengine/state tmpfs size=128M,nr_inodes=2k,mode=0770,noexec,nosuid,noatime,nodiratime 0 0
(770 rather than 775)
That should remove your warning message.
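If you'd rather not wait for the next remount, a quick way to align the running mount would be something like this (a rough sketch; adjust the path to whatever you actually mounted):

# Tighten the permissions on the live tmpfs mount point (lasts until the next mount)
chmod 0770 /var/rudder/cfengine-community/state
# Then check that the fstab entry carries mode=0770 so the fix survives a reboot
grep tmpfs /etc/fstab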
Nicolas
On 02/10/2013 17:48, Prestasit01 wrote:
Hello,
Just for information, rudder-agent has been updated from 2.4.3 to 2.4.8.
I don't know why, but I/O now seems OK since I did the following (a rough sketch of the equivalent commands is below):
1. delete one node which had heavy I/O (on the Rudder server)
2. remove the rudder-agent from my node
3. delete /var/rudder & /opt/rudder
4. install the rudder-agent (2.4.8)
5. accept the node on the Rudder server and re-assign the node to its last group.
I/O has been divided by 10.
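For reference, on the node side the steps above boil down to roughly the following (package manager commands depend on your distribution, I show the RPM ones; steps 1 and 5 are done in the Rudder web interface):

# Remove the old agent and its data from the node
rpm -e rudder-agent            # on Debian/Ubuntu: apt-get remove rudder-agent
rm -rf /var/rudder /opt/rudder
# Install the 2.4.8 agent again from the Rudder repository
yum install rudder-agent       # on Debian/Ubuntu: apt-get install rudder-agent
# Then accept the node on the Rudder server and put it back into its last group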
I also get this message since I'm using a RAM disk:
UNTRUSTED: State directory /var/rudder/cfengine-community (mode 775) was not private!
Is that normal?
BR
From: Prestasit01
Sent: Tuesday, October 1, 2013 14:41
To: 'Matthieu CERDA'
Cc: 'Nicolas Charles'; rudder-users at lists.rudder-project.org
Subject: RE: [rudder-users] IO on vmware
I don't have that information; here are the first 3 results of the iodump command (how I enabled the capture is noted after the output):
while true; do sleep 1; dmesg -c; done | perl iodump
TASK PID TOTAL READ WRITE DIRTY DEVICES
cf-agent 30398 27817 0 27817 0 dm-0
cf-agent 29655 27817 0 27817 0 dm-0
cf-agent 29316 9643 0 9643 0 dm-0
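For completeness, the iodump script from the article parses the kernel block_dump messages, so they have to be enabled before the loop above and disabled afterwards; roughly:

echo 1 > /proc/sys/vm/block_dump    # log block I/O activity to the kernel ring buffer
# ... run the dmesg | perl iodump loop shown above ...
echo 0 > /proc/sys/vm/block_dump    # switch the logging back off when finished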
BR
From: Matthieu CERDA [mailto:matthieu.cerda at normation.com]
Sent: Tuesday, October 1, 2013 12:06
To: Prestasit01
Cc: 'Nicolas Charles'; rudder-users at lists.rudder-project.org
Subject: Re: [rudder-users] IO on vmware
Howdy,
Can you tell us which specific file/place/function is eating so many I/O resources?
Thanks in advance.
MC
On 01/10/2013 11:30, Prestasit01 wrote:
Hi,
It seems to work on some nodes, but not all.
On one impacted server, I used iodump (http://www.geeek.org/post/linux-comment-trouver-les-processus-qui-consomment-de-l-io-992.html), and it shows that cf-agent is the biggest I/O consumer on the server (RAM disk mounted).
I hope there is another solution.
BR
--
Nicolas CHARLES