[rudder-dev] Link separate Rudder Installations

Benoit Peccatte benoit.peccatte at normation.com
Mon Mar 27 11:02:07 CEST 2017


On 14/03/2017 at 10:35, Janos Mattyasovszky wrote:
> Hi dear Rudder Community,
>
> _The Challenge:_
> Beyond a certain size, or when different departments / security 
> requirements come into play, it is a valid need to have different 
> teams managing a given subset of your servers. Role-based separation 
> is currently not implemented inside Rudder, and in some cases 
> physical separation is also required. On the other hand, from a 
> top-level perspective you'd want the same base rules applied to all 
> your machines (like security hardening and other standardization), 
> and ideally get back an overall compliance report for these base rules.
>
> We currently have the relay construct, which helps you get around 
> network / bandwidth limitations and spreads the load that many 
> thousands of nodes would otherwise put on a single policy server. 
> However, there is currently no proper way to connect standalone 
> Rudder Root servers so that they work together in a kind of 
> master-slave setup.
>
> _Possible solutions:_
>
> /-- Simple --/
> Ability to "sync" the policy of one Rudder Root server to another Root 
> Server (via API)
>
> This would allow you to export all items used in the Rules via API and 
> import it via the other Server's API. This would also enable you to 
> have a Staging environment, where you develop and test your Rules 
> outside anything important to you (sandbox/lab env), if you think you 
> are ready, you put all your stuff over to a Staging environment, where 
> you do an initial Rollout to a subset of your "more important" nodes, 
> like some Real-Life test systems or Canary hosts, and if nothing 
> breaks, you can use that well-defined set of rules to put that in your 
> n*Prod environment.
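
A rough sketch of what such an export/import could look like with the
REST API; the endpoints, verbs and field names below are from memory
and may well differ between Rudder versions, so take them as
assumptions rather than a working tool:

import requests

# Hypothetical hosts and API tokens -- adjust to your installations.
# verify=False only because default Rudder certificates are self-signed.
SRC = "https://rudder-a.example.com/rudder/api/latest"
DST = "https://rudder-b.example.com/rudder/api/latest"
SRC_HEADERS = {"X-API-Token": "token-of-server-a"}
DST_HEADERS = {"X-API-Token": "token-of-server-b"}

def export_rules(base, headers):
    """Fetch every rule defined on a root server."""
    r = requests.get(base + "/rules", headers=headers, verify=False)
    r.raise_for_status()
    return r.json()["data"]["rules"]

def import_rule(base, headers, rule):
    """Recreate a rule on another root server.

    Assumes the directives and group targets it references already
    exist there (see below for why that is the hard part)."""
    payload = {
        "displayName": rule["displayName"],
        "shortDescription": rule.get("shortDescription", ""),
        "longDescription": rule.get("longDescription", ""),
        "directives": rule.get("directives", []),
        "targets": rule.get("targets", []),
        "enabled": rule.get("enabled", True),
    }
    # Rule creation is a PUT on /rules in the API versions I have in mind.
    r = requests.put(base + "/rules", headers=headers, json=payload,
                     verify=False)
    r.raise_for_status()

if __name__ == "__main__":
    for rule in export_rules(SRC, SRC_HEADERS):
        import_rule(DST, DST_HEADERS, rule)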

The most difficult thing here is to link rules and propagate groups.
A group that exists on server A may not exist on server B, so
synchronizing rules between A and B means reapplying them to a
different group. Synchronizing groups can be complex too: the
definition of a group in one environment may mean nothing in another
one and will just produce empty groups.
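
Continuing the sketch above, the import side would at least have to
check that the group targets a rule references exist on the
destination before recreating it (the "group:<uuid>" target format is
an assumption):

def existing_group_ids(base, headers):
    """Collect the ids of all groups known to one root server."""
    r = requests.get(base + "/groups", headers=headers, verify=False)
    r.raise_for_status()
    return {g["id"] for g in r.json()["data"]["groups"]}

def missing_group_targets(rule, group_ids):
    """List the group targets of a rule that the destination lacks."""
    missing = []
    for target in rule.get("targets", []):
        # Assumed format "group:<uuid>"; composite or special targets
        # (e.g. "special:all") would need their own handling.
        if isinstance(target, str) and target.startswith("group:"):
            if target.split(":", 1)[1] not in group_ids:
                missing.append(target)
    return missing

# Skip (or flag for manual remapping) any rule whose groups are unknown.
dst_groups = existing_group_ids(DST, DST_HEADERS)
for rule in export_rules(SRC, SRC_HEADERS):
    gaps = missing_group_targets(rule, dst_groups)
    if gaps:
        print("not importing %s, unknown groups: %s"
              % (rule["displayName"], gaps))
    else:
        import_rule(DST, DST_HEADERS, rule)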

>
>
> /-- Advanced --/
> Have a new type of Rudder installation, something like a Rudder 
> Policy Master.
>
> This brings in a whole new concept of managing Rules. It would 
> mostly look like the "regular" WebGUI you use today, but you would 
> not connect end nodes to it; it could only hook up Rudder Root 
> servers as "slave" servers. You could then use it to create Rules, 
> Directives and Groups, but no node policy would be generated on it. 
> You would use it to define a base set of Rules that would be signed 
> and synced to every slave root server, which would then use that set 
> (dynamic groups would pick up any server that is entitled to receive 
> that policy) to generate its end-node policy, possibly in addition 
> to any locally defined policy. The slave root servers would then 
> aggregate the compliance related to the "central Rules" and report 
> it to the Policy Master, on which you'd have a report similar to 
> what you currently have for a regular Rule, except based not on 
> per-node results but on "per-Root-Server" results, with focus on the 
> propagated policy.

It is a very interesting idea. You would have to rethink how promises
are generated and have a specific communication channel between the
servers.

I think I would call it a virtual server rather than a master server,
because it doesn't serve any promise. And I would not allow it to
create groups, since only the real servers know which nodes exist and
why. But it should be able to aggregate the information (including
groups) from the other servers to provide compliance details.
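
As a very rough idea of the aggregation side, the virtual server
could simply poll the compliance API of each real root server and
build the kind of overview described below; the /compliance endpoint
and the JSON path used here are assumptions to be checked against the
actual API:

import requests

# Hypothetical real root servers the virtual server would poll.
ROOT_SERVERS = {
    "Test LAB":      ("https://lab.example.com/rudder/api/latest",   "token-lab"),
    "Prod AWS EAST": ("https://aws.example.com/rudder/api/latest",   "token-aws"),
    "Prod AZURE EU": ("https://azure.example.com/rudder/api/latest", "token-azure"),
}

def global_compliance(base, token):
    """Ask one root server for its overall compliance percentage."""
    r = requests.get(base + "/compliance",
                     headers={"X-API-Token": token}, verify=False)
    r.raise_for_status()
    # Assumed JSON layout; the real answer may be nested differently.
    return r.json()["data"]["globalCompliance"]["compliance"]

for name, (base, token) in ROOT_SERVERS.items():
    try:
        compliance = global_compliance(base, token)
        print("%-14s | overall compliance: %5.1f%%" % (name, compliance))
    except requests.RequestException as error:
        print("%-14s | unreachable: %s" % (name, error))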

>
> Here is a very imaginary example of what would be seen in the 
> Policy Master:
>
> Test LAB      | 250 nodes  | overall compliance: 98.5% | policy version: 15156 latest built    | nodes up-to-date with current policy
> Test Staging  | 300 nodes  | overall compliance: 99.8% | policy version: 15156 latest built    | rollout in progress, 60% done
> Test Canary   | 56 nodes   | overall compliance: 99.5% | policy version: 15156 latest building | nodes up-to-date with previous policy
> Prod AWS EAST | 1517 nodes | overall compliance: 99.6% | policy version: 15154 outdated built  | nodes up-to-date with outdated policy
> Prod AZURE EU | 3114 nodes | overall compliance: 99.3% | policy version: 15154 outdated built  | nodes up-to-date with outdated policy
>
> From this list you could see how to create different "failure 
> domains" that can operate independently, while you still keep 
> overall control of the base policy. You could then delegate, say, 
> the Azure environment to your Microsoft devs, the AWS environment to 
> another DevOps team, and so on. You could test your policy changes 
> on Lab/Staging, then put them on the canary hosts (which would 
> include one from each environment) before releasing them to 
> Production.
>
> This could be coupled with the previous idea of having Rudder 
> perform a not-all-at-once rollout of the policy.
>
> Thanks for reading,
>
> Best Regards,
> Janos Mattyasovszky
>


-- 
------------------------------------------------------------------------
*Benoît Peccatte*
/Architect/
Normation <http://www.normation.com>
------------------------------------------------------------------------
*87, Rue de Turbigo, 75003 Paris, France*
Phone: 	+33 (0)1 85 08 48 96
------------------------------------------------------------------------


