* wireguard: export an Endpoint comparison method
* Record discovered endpoints in node
* Synchronize DiscoveredEndpoints in k8s backend
* Add discoveredEndpointsAreEqual
* Handle discovered Endpoints in topology to enable NAT-to-NAT
* Refactor to use Endpoint.Equal
Compare the IP first by default, and compare the DNS name first when we
know the Endpoint was resolved; see the sketch after this list.
* Drop the shallow copies of nodes and peers
Now that updateNATEndpoints has been renamed to discoverNATEndpoints
and the endpoints are overridden by the topology instead of mutating
the nodes and peers objects, we can safely drop these copies.
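A minimal sketch of the comparison described for Endpoint.Equal,
assuming an Endpoint with DNS, IP, and port fields (names and the
signature here are illustrative, not necessarily Kilo's):

```go
package mesh

import "net"

// Endpoint is a simplified stand-in for Kilo's endpoint type.
type Endpoint struct {
	DNS  string
	IP   net.IP
	Port uint32
}

// Equal compares IPs first by default and DNS names first when the
// caller knows the Endpoint was resolved from a DNS name.
func (e *Endpoint) Equal(b *Endpoint, dnsFirst bool) bool {
	if e.Port != b.Port {
		return false
	}
	if dnsFirst && (e.DNS != "" || b.DNS != "") {
		// The Endpoint was resolved: the DNS name is its stable identity.
		return e.DNS == b.DNS
	}
	if e.IP != nil || b.IP != nil {
		return e.IP.Equal(b.IP)
	}
	return e.DNS == b.DNS
}
```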
First, the comment "so remove it from the mesh" is wrong and
misleading: since 034c27ab78, the delete is no longer inside that if
statement.
Second, the m.nodes map is not updated, so setting `diff = true` will
call `applyTopology` without any changes, which seems useless.
Third, the rest of the code already checks for Ready, so this special
case should not be needed.
Previously, newlines were ignored by circo, which led to very flat
ellipses. Escaped newlines ("\\n") are now handled correctly.
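A sketch of the idea in Go, assuming the graph is emitted in dot
syntax (the label contents are illustrative): raw newlines in a label
are replaced with the two-character escape `\n`, which circo renders
as a line break inside the node.

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	rawLabel := "location:a\n10.4.0.1/16"
	// Emit the literal escape \n instead of a raw newline so that
	// circo renders a multi-line label rather than one flat ellipse.
	label := strings.ReplaceAll(rawLabel, "\n", `\n`)
	fmt.Printf("digraph {\n\t\"node0\" [label=\"%s\"];\n}\n", label)
}
```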
Signed-off-by: leonnicolas <leonloechner@gmx.de>
Commit 4d00bc56fe introduced a bug in the
Kilo graph generation logic. That commit used the WireGuard CIDR from
the topology struct as the graph title; however, this field is nil
whenever the selected node is not a leader, causing the program to
panic.
This commit changes the meaning of the topology struct's wireGuardCIDR
field so that the field is always defined and the normalized value will
always be equal to the Kilo subnet CIDR. When the selected node is a
leader node, then the field's IP will be the IP allocated to the node
within the subnet. This effectively prevents the program from panicking.
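A hedged sketch of the new semantics (identifiers here are
illustrative, not Kilo's actual fields):

```go
package mesh

import "net"

// wireGuardCIDR is now always defined: non-leaders get the Kilo
// subnet itself, while leaders get their allocated IP combined with
// the subnet's mask, so the normalized value always equals the Kilo
// subnet CIDR.
func wireGuardCIDR(kiloSubnet *net.IPNet, allocatedIP net.IP, leader bool) *net.IPNet {
	if leader {
		return &net.IPNet{IP: allocatedIP, Mask: kiloSubnet.Mask}
	}
	return &net.IPNet{IP: kiloSubnet.IP, Mask: kiloSubnet.Mask}
}
```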
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
Since #116 implemented fragile comparisons of iptables rules to avoid
calling the iptables binary excessively during every reconciliation, the
iptables rules for IPIP encapsulation must be updated to match the
expected output. One complication is that rather than returning the
protocol number in the rule, iptables resolves the protocol number to a
name by looking up the number in the netdb protocols database. This name
can vary depending on the host's environment. This commit adds two
solutions for resolving the protocol name:
1. a fixed mapping to the string `ipencap`, which should always work
for Kilo whenever it runs in the Alpine Linux container; and
2. a runtime lookup using the netdb database, which only works if Kilo is
compiled with CGO and is meant to be used only if Kilo is not running in
the normal container environment.
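A sketch of the second approach, assuming a CGO lookup via
getprotobynumber(3); the function and fallback shown here illustrate
the technique rather than Kilo's exact code:

```go
//go:build cgo

package encapsulation

/*
#include <netdb.h>
*/
import "C"

// ipipProtocolName resolves the name of IP protocol 4 (IPIP) from the
// host's protocols database; if the lookup fails, it falls back to
// the name used by Alpine Linux's /etc/protocols.
func ipipProtocolName() string {
	p := C.getprotobynumber(C.int(4))
	if p == nil {
		return "ipencap"
	}
	return C.GoString(p.p_name)
}
```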
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
Currently, Kilo incorrectly identifies the 172.16.0.0/12 private IP
range. This commit fixes the logic.
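The corrected check amounts to a CIDR containment test; a minimal
sketch using only the standard library:

```go
package main

import (
	"fmt"
	"net"
)

// private12 is 172.16.0.0/12, i.e. 172.16.0.0 through 172.31.255.255.
var _, private12, _ = net.ParseCIDR("172.16.0.0/12")

func main() {
	fmt.Println(private12.Contains(net.ParseIP("172.31.0.1"))) // true
	fmt.Println(private12.Contains(net.ParseIP("172.32.0.1"))) // false
}
```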
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
This commit introduces a new `--resync-period` flag to control how often
the Kilo controllers should reconcile.
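A minimal sketch of wiring such a flag with the standard library (the
default value here is illustrative):

```go
package main

import (
	"flag"
	"fmt"
	"time"
)

func main() {
	resyncPeriod := flag.Duration("resync-period", 30*time.Second, "How often should the Kilo controllers reconcile?")
	flag.Parse()
	fmt.Printf("controllers will reconcile every %s\n", *resyncPeriod)
}
```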
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
This commit adds a logger to the iptables controller using the options
pattern. It also logs when the controller needs to reset rules, to
help identify costly reconciliations.
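A sketch of the options pattern described above, assuming a
go-kit-style logger (the types here are illustrative, not Kilo's
actual controller):

```go
package iptables

import "github.com/go-kit/kit/log"

// Controller is a pared-down stand-in for the iptables controller.
type Controller struct {
	logger log.Logger
}

// Option configures a Controller.
type Option func(*Controller)

// WithLogger returns an Option that sets the Controller's logger.
func WithLogger(logger log.Logger) Option {
	return func(c *Controller) {
		c.logger = logger
	}
}

// New constructs a Controller with a no-op logger by default.
func New(opts ...Option) *Controller {
	c := &Controller{logger: log.NewNopLogger()}
	for _, o := range opts {
		o(c)
	}
	return c
}
```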
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
Let's make the documentation more inclusive and sensitive of the
familiarity and comfort of users.
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
This commit proposes [Leon](https://github.com/leonnicolas) as a
maintainer of Kilo. Leon has done tons of great work on the project,
in feature development, bug triaging, and documentation. It would be a
privilege to have you join!
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
Currently, every time the iptables controller syncs rules, it spawns
an iptables process for every rule it checks. This causes two problems:
1. it creates unnecessary load on the system; and
2. it causes contention on the xtables lock file.
This commit creates a lazy cache for iptables rules and chains that
avoids spawning iptables processes. This means that each time the
iptables rules are reconciled, if no rules need to be changed then at
most one iptables process should be spawned to check all of the rules in
a chain and at most one process should be spawned to check all of the
chains in a table.
Note: the success of this reduction in calls to iptables depends on a
somewhat fragile comparison of iptables rule text. The text of any rule
must match exactly, including the order of the flags. An improvement to
come would be to implement an iptables rule parser that can be used to
check semantic equivalence between iptables rules.
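A rough sketch of the lazy cache, assuming the
github.com/coreos/go-iptables client (the cache type and method are
illustrative):

```go
package iptables

import "github.com/coreos/go-iptables/iptables"

// ruleCache lazily caches each chain's rules so that a reconciliation
// that changes nothing spawns at most one iptables process per chain.
type ruleCache struct {
	client *iptables.IPTables
	rules  map[string][]string // "table/chain" -> rule text
}

func (c *ruleCache) exists(table, chain, rule string) (bool, error) {
	key := table + "/" + chain
	if _, ok := c.rules[key]; !ok {
		// Populate the cache with a single call to iptables.
		rs, err := c.client.List(table, chain)
		if err != nil {
			return false, err
		}
		c.rules[key] = rs
	}
	for _, r := range c.rules[key] {
		// Fragile: the rule text must match exactly, flag order included.
		if r == rule {
			return true, nil
		}
	}
	return false, nil
}
```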
Signed-off-by: Lucas Servén Marín <lserven@gmail.com>
Because of new naming conventions for locations, the CIDRs were not
being set within locations.
This led to no iptables rules being added for nodes in the same
location.