Commit graph

179 commits

Author SHA1 Message Date
Thomas Haller cd31437024 shared: drop _STATIC variant of macros that define functions
Several macros are used to define functions. They had a "_STATIC" variant
to define the function as static.

I think those macros should not try to entirely abstract what they do.
They should not accept the function scope as an argument (or have two
variants per scope), also because it might make sense to add
additional __attribute__(()) annotations to the function. That only works
if the macro does not hide the fact that it defines a plain function.

Instead, embrace what the macro does and let users place the
function scope as they see fit.

This also follows what is already done with

    static NM_CACHED_QUARK_FCN ("autoconnect-root", autoconnect_root_quark)
2020-02-13 17:17:07 +01:00
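
A minimal sketch of the resulting pattern, assuming GLib's quark API; MY_QUARK_FCN is an illustrative stand-in for macros like NM_CACHED_QUARK_FCN:

```
#include <glib.h>

/* The macro emits a plain function definition; the caller supplies the
 * scope (and, if desired, extra attributes) in front of it. */
#define MY_QUARK_FCN(q_string, fcn_name)               \
    GQuark fcn_name (void)                             \
    {                                                  \
        static GQuark q;                               \
                                                       \
        if (G_UNLIKELY (q == 0))                       \
            q = g_quark_from_static_string (q_string); \
        return q;                                      \
    }

/* The user chooses the scope: */
static MY_QUARK_FCN ("autoconnect-root", autoconnect_root_quark)
```
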
Thomas Haller 53f6858a27 all: add nm_utils_error_is_cancelled() and nm_utils_error_is_cancelled_or_disposing()
Most callers would pass FALSE to nm_utils_error_is_cancelled(). That's
not very useful. Split it into two functions: nm_utils_error_is_cancelled()
and nm_utils_error_is_cancelled_or_disposing().
2020-02-10 19:11:50 +01:00
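
A hedged sketch of the intended call pattern in an async callback; nm_utils_error_is_cancelled() is the internal helper from this commit, the rest is plain GIO:

```
#include <gio/gio.h>

/* A typical async callback: silently ignore the result when the request
 * was cancelled, because the caller may already be gone. */
static void
call_done_cb (GObject *source, GAsyncResult *result, gpointer user_data)
{
    GError *error = NULL;
    GVariant *ret;

    ret = g_dbus_connection_call_finish (G_DBUS_CONNECTION (source), result, &error);
    if (!ret) {
        if (nm_utils_error_is_cancelled (error)) {
            g_error_free (error);
            return;
        }
        g_warning ("request failed: %s", error->message);
        g_error_free (error);
        return;
    }
    g_variant_unref (ret);
}
```
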
Thomas Haller 425412a363 libnm: hide NMActiveConnection until NMRemoteConnection is ready
Generally, libnm's NMClient cache only wants to expose NMObjects that
are fully initialized. Most objects don't require anything special,
except NMRemoteConnection, which waits for the GetSettings() call to complete.

NMObjects reference each other. For example, NMActiveConnection
references NMDevice and NMRemoteConnection. Ideally, an object would
only be ready once the objects it references are ready too.
In practice that is not done: since usually every object references
other objects, all objects would be declared non-ready as long as any
of them is still initializing. That does not seem desirable. Instead,
most objects (except NMRemoteConnection and now NMActiveConnection)
are considered ready and visible once their first notification
completes. If an object references another object that is not yet
ready, the reference is NULL (but the source object is visible
already). This is also done to cope with cycles where objects
reference each other. In practice, such cycles should not be
exposed by NetworkManager, but libnm should be robust against that.

This has the undesired effect that after you call AddAndActivate(),
the NMActiveConnection might already be visible while its
NMRemoteConnection isn't. That means ac.get_connection() will
initially return NULL until the remote connection becomes ready.
Fix that by adding special handling so that an NMActiveConnection waits
for its NMRemoteConnection to be ready before becoming ready itself.

Fixes: ce0e898fb4 ('libnm: refactor caching of D-Bus objects in NMClient')
2020-02-10 19:02:42 +01:00
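
For illustration, a sketch of the race this commit addresses, using the public libnm API (callback wiring is illustrative):

```
#include <NetworkManager.h>

/* AddAndActivate callback: before this fix, the active connection could
 * become visible before its NMRemoteConnection finished GetSettings(). */
static void
add_and_activate_cb (GObject *source, GAsyncResult *result, gpointer user_data)
{
    NMClient *client = NM_CLIENT (source);
    GError *error = NULL;
    NMActiveConnection *ac;

    ac = nm_client_add_and_activate_connection_finish (client, result, &error);
    if (!ac) {
        g_warning ("activation failed: %s", error->message);
        g_error_free (error);
        return;
    }
    if (!nm_active_connection_get_connection (ac))
        g_debug ("NMRemoteConnection not ready yet");
    g_object_unref (ac);
}
```
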
Thomas Haller 6b745e0725 libnm: minor cleanup of libnm trace logging 2020-02-10 19:02:42 +01:00
Thomas Haller 0382e54d8d libnm: expose nml_cleanup_context_busy_watcher_on_idle() helper for reuse
This can be used by NMSecretAgentOld.
2020-01-28 10:54:14 +01:00
Thomas Haller 587c35b1f4 libnm: factor out nml_init_data_return() for reuse 2020-01-28 10:54:14 +01:00
Thomas Haller 718e524a7f libnm: move InitData to nm-libnm-utils.h for sharing
It can be reused.
2020-01-28 10:54:14 +01:00
Thomas Haller 50bda649b1 libnm: expose nm_context_busy_watcher_integrate_source() as internal API for reuse
This will also be useful for NMSecretAgentOld.

The mechanics of how NMClient handles the GMainContext and the
context-busy-watcher really apply to every GObject that uses a
GDBusConnection and subscribes to signals.

At least as long as the API provides no shutdown/stop method, because
then shutdown/stop happens when unreferencing the instance, at which
point pending operations get cancelled (but they cannot complete right
away due to the nature of GTask and g_dbus_connection_call()). If there
were a shutdown/stop API, then all pending operations could keep the
instance alive, and the instance would stick around (and keep the
GMainContext busy) until shutdown completed. Basically, the instance
could then be the context-busy-watcher itself.

But for existing API that does not require the user to explicitly shut
down, such a method is not a feasible (backward-compatible) addition.
The context-busy-watcher object is.
2020-01-28 10:54:14 +01:00
Thomas Haller 2c4f57be19 libnm: log message when NMClient gets disposed
This is useful as a last message to know when the instance is gone
for good.
2020-01-28 10:54:14 +01:00
Thomas Haller c2f8400e66 libnm: fix another leak when cleaning up NMClient
We now move the deletion of the context-busy-watcher to an idle handler
on the D-Bus GMainContext.

Note that the idle source does not take an additional reference on the
context. Hence, in certain cases it might happen that the context will
be completely unrefed before the idle handler runs. In that case, we
would leak the object.

Avoid that, by taking an additional reference to the GMainContext.

Note that the alternative would be to unref the context-busy-watcher
via the GSource's GDestroyNotify. That is not done, because then the
busy watcher might be unrefed on a different thread. Instead, we want
that to happen on the right context. The only minor downside is that
the user now always pays the price and must iterate the context to
fully clean up. But note that the user must anyway be prepared to
iterate the context after NMClient is gone, and whether that is
necessary depends on unpredictable events that the user cannot control.
That means either the user already handles this correctly, or the
problem exists anyway (randomly).

Of course, all of the discussed "problems" are very specific. In practice,
the user uses the g_main_context_default() instance and will either
keep iterating it or quit the process after the NMClient instance goes
away.
2020-01-16 14:43:29 +01:00
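
A minimal sketch of the described cleanup; CleanupData, cleanup_idle_cb() and schedule_cleanup() are illustrative names, only the GLib calls are real API:

```
#include <glib-object.h>

typedef struct {
    GObject      *busy_watcher;
    GMainContext *dbus_context;
} CleanupData;

static gboolean
cleanup_idle_cb (gpointer user_data)
{
    CleanupData *data = user_data;

    /* unref the watcher while iterating the right context... */
    g_object_unref (data->busy_watcher);
    /* ...and drop the extra reference that kept the context alive: */
    g_main_context_unref (data->dbus_context);
    g_free (data);
    return G_SOURCE_REMOVE;
}

static void
schedule_cleanup (GObject *busy_watcher, GMainContext *dbus_context)
{
    CleanupData *data;
    GSource *source;

    data = g_new (CleanupData, 1);
    data->busy_watcher = busy_watcher;
    data->dbus_context = g_main_context_ref (dbus_context);

    source = g_idle_source_new ();
    g_source_set_callback (source, cleanup_idle_cb, data, NULL);
    g_source_attach (source, dbus_context);
    g_source_unref (source);
}
```
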
Thomas Haller b572c0542a libnm: keep context-busy-watcher of NMClient alive for one more idle round
The context-busy-watcher has two purposes:

1) it allows the user to watch whether the NMClient still has pending
  GSources attached to the GMainContext.
2) thereby, it also keeps the inner GMainContext integrated into the
  caller's (in the case of synchronous initialization of NMClient).

Especially for 2), we must not get this wrong. Otherwise, we might
un-integrate the inner GMainContext too early and it will be leaked
indefinitely (because the user has no means to access or iterate it).

To be extra careful, extend the lifetime of the context-busy-watcher
for one more idle invocation. Theoretically, this should not be necessary,
but it's not clear whether something else is still pending.

The downside of that extra safety is that it is probably unnecessary in
practice. And in cases where it is necessary, it hides an actual
issue, making it harder to notice and fix.
2020-01-16 12:44:56 +01:00
Thomas Haller a2fd2ab55d shared: remove nm_dbus_connection_signal_subscribe_object_manager() helper
It seems to complicate things more than it helps. Drop it. What we still have
is a wrapper around plain g_dbus_connection_signal_subscribe(); that one is
trivial and helpful, while the dropped wrapper added more complexity than value.
2020-01-16 12:42:41 +01:00
Thomas Haller e280124757 libnm: avoid leaking GMainContext for sync initialization after context-busy-watcher quits
When passing a destroy notify to g_dbus_connection_signal_subscribe(),
that callback gets invoked as an idle handler on the associated
GMainContext. That caused yet another source to be attached to the
context after the NMClient was destroyed.

Especially with synchronous initialization of NMClient that is bad,
because we may destroy the context-busy-watcher too early. That results
in removing the integration of the inner GMainContext into the caller's
context, and thus we leak the inner context indefinitely.

Avoid that leak by not passing a cleanup function to
g_dbus_connection_signal_subscribe().

Fixes: ce0e898fb4 ('libnm: refactor caching of D-Bus objects in NMClient')
2020-01-16 12:42:41 +01:00
Thomas Haller 51b39ceb33 libnm: fix wrong assertion in nm_client_add_and_activate_connection2_finish()
Fixes: ce0e898fb4 ('libnm: refactor caching of D-Bus objects in NMClient')
2020-01-15 12:32:02 +01:00
Thomas Haller 900af25263 client: add nm_client_get_object_by_path() and nm_object_get_client() API
When iterating the GMainContext of the NMClient instance, D-Bus events
get processed. That means every time you iterate the context (or "return to
the main loop"), the content of the cache might change completely.

It makes sense to keep a reference to an NMObject instance, do something,
and afterwards check whether the instance can still be found in the cache.

Add an API for that. nm_object_get_client() lets the caller know whether the
object is still cached.

Likewise, while NMClient abstracts D-Bus, it should still provide a way
to look up an NMObject by D-Bus path. Add nm_client_get_object_by_path()
for that.

https://gitlab.freedesktop.org/NetworkManager/NetworkManager/merge_requests/384
2020-01-08 18:33:10 +01:00
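
A short usage sketch of the new API (the device path is an example):

```
#include <NetworkManager.h>

/* Look up an object by D-Bus path, and later check whether it
 * is still part of the client's cache. */
static void
example_lookup (NMClient *client)
{
    NMObject *obj;

    obj = nm_client_get_object_by_path (client,
                                        "/org/freedesktop/NetworkManager/Devices/1");
    if (!obj)
        return;

    g_object_ref (obj);

    /* ... do something that returns to the main loop ... */

    if (nm_object_get_client (obj))
        g_debug ("object is still cached");

    g_object_unref (obj);
}
```
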
Thomas Haller 21b008d0ff libnm: add nm_client_get_capabilities() to expose server Capabilities
I hesitated to add this to libnm because it's hardly used.

However, we already fetch the property during GetManagedObjects(),
so we should make it accessible instead of requiring the user to
make another D-Bus call.
2019-12-21 12:25:12 +01:00
Thomas Haller 3ae97cc543 libnm: emit property changed signal when setting NM_CLIENT_DBUS_CONNECTION 2019-12-18 17:13:27 +01:00
Thomas Haller a33ed5ad82 libnm: allow to enable/disable fetching of permissions in NMClient
Currently, NMClient by default always fetches the permissions
("GetPermissions()") and refreshes them on the "CheckPermissions" signal.

Fetching permissions is relatively expensive, while they are not used
most of the time. Allow the user to opt out of this.

For that, add an NMClientInstanceFlags flag to enable/disable automatic
fetching. Also add a "permissions-state" property that lets the user
know whether the cached permissions are up to date.

This is a bit of an awkward API. E.g. you cannot explicitly request
permissions; you can only enable/disable fetching them, and then watch
the "permissions-state" property to know whether they are up to date.
It's done this way because it fits the previous model and extends the
API with a relatively small number of new functions and properties.
2019-12-10 09:17:17 +01:00
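
A sketch of opting out at construction time, assuming the flag and property names introduced by these commits:

```
#include <NetworkManager.h>

/* Create a client that does not automatically fetch permissions. */
static NMClient *
create_client_without_permissions (GCancellable *cancellable, GError **error)
{
    return g_initable_new (NM_TYPE_CLIENT, cancellable, error,
                           NM_CLIENT_INSTANCE_FLAGS,
                           (guint) NM_CLIENT_INSTANCE_FLAGS_NO_AUTO_FETCH_PERMISSIONS,
                           NULL);
}
```
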
Thomas Haller f7aeda0390 libnm: add NMClient:instance-flags property
Add a flags property to control the behavior of NMClient.
Possible future use cases:

 - currently it would always automatically fetch permissions. Often that
   is not used and the user could opt out of it.

 - currently, using sync init creates an internal GMainContext. This
   has an overhead and may be undesirable. We could implement another
   "sync" initialization that merely iterates the caller's main loop
   until the initialization completes. A flag would allow opting in.

 - currently, NMClient always fetches all connection settings
   automatically. Via a flag the user could opt out of that.
   Instead, NMClient could provide an API so the user can request
   settings as they are needed.
2019-12-10 07:53:25 +01:00
Thomas Haller 51bc2c0224 libnm: track permissions in NMClient as an array of well known permissions
On D-Bus, the permission names are just the PolicyKit action names, like
"org.freedesktop.NetworkManager.wifi.scan". But NMClient already
ignores all strings that it doesn't know at compile time and only
keeps track of well-known permissions.

Nor does the API nm_client_get_permissions_result() allow exposing
permissions unknown to libnm.

Maybe the API of NMClient should be more generic and allow exposing
any permissions announced by NetworkManager. As it is, however, it's
not necessary to track the permissions in a hash table. An array with
fixed indices is sufficient.
2019-12-10 07:53:25 +01:00
Thomas Haller b7462b1910 libnm,shared: move nm_permission_result_to_client() to shared's nm_client_permission_result_from_string() 2019-12-10 07:53:25 +01:00
Thomas Haller bfdd352a61 libnm,cli: cleanup mapping between NMClientPermission and strings 2019-12-10 07:53:25 +01:00
Thomas Haller 53db3a2da9 libnm: don't emit property changed "notify" signal while destructing NMClient
It seems to trip up gnome-control-center (rh #1778668). Just don't emit
any more signals once NMClient goes down.
2019-12-03 14:50:18 +01:00
Thomas Haller 61807e9b6b libnm: add assertion for object returned by nm_device_get_active_connection()
I have a coredump that seems to indicate that nm_device_get_active_connection()
did not return a valid object. Let's add an assertion, trying to identify the
issue earlier. Aside from that, this change isn't useful, but an nm_assert()
shouldn't hurt anyway.
2019-11-28 12:47:07 +01:00
Thomas Haller 81bd50874b libnm: add nm_client_get_main_context() function
The NMClient instance is associated with a certain GMainContext. Add a
getter function to retrieve that context.

The context is not really internal API of NMClient, because the user
must iterate this context and be aware of it.
2019-11-26 13:37:38 +01:00
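
A sketch of using the getter to dispatch whatever is pending on the client's context:

```
#include <NetworkManager.h>

/* Process pending events of the client's context without blocking. */
static void
drain_client_context (NMClient *client)
{
    GMainContext *context = nm_client_get_main_context (client);

    while (g_main_context_iteration (context, FALSE))
        ;
}
```
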
Thomas Haller 812ad586dd libnm: fix assertion for cleaning up nml_dbus_property_o_notify()
Usually, the nmobj never gets reused for one dbobj. That means we
really don't expect a nml_dbus_property_o_notify() for a property
that was already cleared.

However, that is for example not the case with NMClient itself. As NetworkManager
gets restarted, the name owner gets lost and the property is cleared, but
afterwards it might get notified again.

That means, nml_dbus_property_o_notify() and nml_dbus_property_o_clear() must
work well together, otherwise a sequence of

   nml_dbus_property_o_notify()
   nml_dbus_property_o_clear()
   nml_dbus_property_o_notify()

leads to an assertion failure "nm_assert (!pr_o->is_ready)".

Fixes: ce0e898fb4 ('libnm: refactor caching of D-Bus objects in NMClient')
2019-11-26 12:40:13 +01:00
Thomas Haller 2078acfddc libnm: fix leaking internal GMainContext for synchronously initialized NMClient
NMClient makes asynchronous D-Bus calls via g_dbus_connection_call().
This references the current GMainContext to later invoke the
asynchronous callback. Even when we cancel the asynchronous call,
the callback will still be invoked (later) to complete the request.

In particular this means when we destroy (unref) an NMClient, there
are quite possibly pending requests in the GMainContext. Although they
are cancelled, they keep the GMainContext alive.

With synchronous initialization, we have an internal GMainContext.
When we destroy the NMClient, we cannot just unhook the integrated
source, instead, we need to keep it integrated in the caller's main
context, as long as there are pending requests.

Add a mechanism to track those pending requests and fix the leak for the
internal GMainContext. Also expose the same mechanism to the user via a new
API called nm_client_get_context_busy_watcher(). This allows the user
to know when it can stop iterating the main context and when all
resources are reclaimed.

For example the following will lead to a crash:

    for i in range(1,2000):
        nmc = NM.Client.new(None)

This creates a number of NMClient instances and destroys them again.
Note that here the GMainContext is never iterated, because
synchronous initialization does not iterate the caller's context. So,
while we correctly unref and dispose the created NMClient instances,
there are pending requests left in the inner GMainContext. These pile
up and soon the program will crash because it runs out of file descriptors.

We can have a similar problem with asynchronous initialization, when
we create a new GMainContext per client, and don't iterate it after
we are done with the client.

Note that this patch does not avoid the problem in general. The problem
cannot be avoided: the user must iterate the main context at some point,
otherwise resources (memory and file descriptors) will be leaked.

Fixes: ce0e898fb4 ('libnm: refactor caching of D-Bus objects in NMClient')

https://gitlab.freedesktop.org/NetworkManager/NetworkManager/merge_requests/347
2019-11-26 10:02:58 +01:00
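
A minimal sketch of shutting down an NMClient with the new busy watcher; watcher_gone_cb() and destroy_client_and_drain() are illustrative names:

```
#include <NetworkManager.h>

static void
watcher_gone_cb (gpointer user_data, GObject *where_the_object_was)
{
    *((gboolean *) user_data) = TRUE;
}

static void
destroy_client_and_drain (NMClient *client)
{
    GMainContext *context = nm_client_get_main_context (client);
    gboolean gone = FALSE;

    g_object_weak_ref (nm_client_get_context_busy_watcher (client),
                       watcher_gone_cb, &gone);
    g_object_unref (client);

    /* keep iterating until all of the client's sources are gone: */
    while (!gone)
        g_main_context_iteration (context, TRUE);
}
```
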
Thomas Haller ce0e898fb4 libnm: refactor caching of D-Bus objects in NMClient
No longer use GDBusObjectManagerClient and gdbus-codegen generated classes
for the NMClient cache. Instead, use GDBusConnection directly and a
custom implementation (NMLDBusObject) for caching D-Bus' ObjectManager
data.

CHANGES
-------

- This is a complete rework. I think the previous implementation was
difficult to understand. There were unfixed bugs and nobody understood
the code well enough to fix them. Maybe somebody out there understood the
code, but I certainly did not. At least nobody provided patches to fix those
issues. I do believe that this implementation is more straightforward and
easier to understand. It removes a lot of layers of code. Whether this claim
of simplicity is true, each reader must decide for himself/herself. Note
that it is still fairly complex.

- There was a lingering performance issue with a large number of D-Bus
objects. The patch tries hard to make the implementation scale well. Of
course, when we cache N objects that have N-to-M references to each other,
we are still fundamentally O(N*M) in runtime and memory consumption (with
M being the number of references between objects). But each part should
behave efficiently.

- Play well with GMainContext. libnm code (NMClient) is generally not
thread safe. However, it should work to use multiple instances in
parallel, as long as each access to an NMClient is through the caller's
GMainContext. This follows glib's style and effectively allows using
NMClient in multi-threaded scenarios. This implies sticking to one main
context upon construction and ensuring that callbacks are only invoked
when iterating that context. Also, NMClient itself shall never iterate the
caller's context. This also means libnm must never use g_idle_add() or
g_timeout_add(), as those enqueue sources in the g_main_context_default()
context.

- Get ordering of messages right. All events are consistently enqueued
in a GMainContext and processed strictly in order. For example,
previously "nm-object.c" tried to combine signals and emit them on an
idle handler. That is wrong, signals must be emitted in the right order
and when they happen. Note that when using GInitable's synchronous initialization
to initialize the NMClient instance, NMClient internally still operates fully
asynchronously. In that case NMClient has an internal main context.

- NMClient takes over most of the functionality. When using D-Bus'
ObjectManager interface, one needs to handle basically the entire state
of the D-Bus interface. That cannot be separated well into distinct
parts, and even if you try, you just end up having closely related code
in different source files. Spreading related code does not make it
easier to understand, on the contrary. That means, NMClient is
inherently complex as it contains most of the logic. I think that is
not avoidable, but it's not as bad as it sounds.

- NMClient processes D-Bus messages and state changes in separate steps.
First NMClient unpacks the message (e.g. _dbus_handle_properties_changed()) and
keeps track of the changed data. Then we update the GObject instances
(_dbus_handle_obj_changed_dbus()) without emitting any signals yet. Finally,
we emit all signals and notifications that were collected
(_dbus_handle_changes_commit()). Note that for example during the initial
GetManagedObjects() reply, NMClient receives a large amount of state at once.
But we first apply all the changes to our GObject instances before
emitting any signals. The result is that signals are always emitted in a moment
when the cache is consistent. The unavoidable downside is that when you receive
a property changed signal, possibly many other properties changed
already and more signals are about to be emitted.

- NMDeviceWifi no longer modifies the content of the cache from client side
during poke_wireless_devices_with_rf_status(). The content of the cache
should be determined by D-Bus alone and follow what NetworkManager
service exposes. Local modifications should be avoided.

- This aims to bring no API/ABI change, though it does of course bring
various subtle changes in behavior. Those should be all for the better, but the
goal is not to break any existing clients. This does change internal
(albeit externally visible) API, like dropping NM_OBJECT_DBUS_OBJECT_MANAGER
property and NMObject no longer implementing GInitableIface and GAsyncInitableIface.

- Some uses of gdbus-codegen classes remain in NMVpnPluginOld, NMVpnServicePlugin
and NMSecretAgentOld. These are independent of NMClient/NMObject and
should be reworked separately.

- While we no longer use generated classes from gdbus-codegen, we don't
need more glue code than before. Also, before we constructed NMPropertiesInfo
and had a large amount of code to propagate properties from NMDBus* to
NMObject. That got completely reworked but did not fundamentally change: you
still need about the same effort to create the NMLDBusMetaIface. Not using
generated bindings did not make anything worse (which says something about the
usefulness of generated code, at least in the way it was used).

- NMLDBusMetaIface and other meta data is static and immutable. This
avoids copying them around. Also, macros like NML_DBUS_META_PROPERTY_INIT_U()
have compile-time checks to ensure the property types match. It's pretty hard
to misuse them because misuse won't compile.

- The meta data now explicitly encodes the expected D-Bus types and
makes sure never to accept wrong data. That would only matter when the
server (accidentally or intentionally) exposes unexpected types on
D-Bus. I don't think that was previously ensured in all cases.
For example, demarshal_generic() only cared about the GObject property
type, it didn't know the expected D-Bus type.

- Previously GDBusObjectManager would sometimes emit warnings (g_log()). Those
probably indicated real bugs. In any case, it prevented us from running CI
with G_DEBUG=fatal-warnings, because there would be just too many
unrelated crashes. Now we log debug messages that can be enabled with
"LIBNM_CLIENT_DEBUG=trace". Some of these messages can also be turned
into g_warning()/g_critical() by setting LIBNM_CLIENT_DEBUG=warning,error.
Together with G_DEBUG=fatal-warnings, this turns them into assertions.
Note that such "assertion failures" might also happen because of a server
bug (or change). Thus these are not ordinary assertions that indicate a bug
in libnm, and they are not armed unless explicitly requested. In our CI we
should now always run with LIBNM_CLIENT_DEBUG=warning,error and
G_DEBUG=fatal-warnings to catch bugs. Note that currently
NetworkManager has bugs in this regard, so enabling this will result in
assertion failures. That should be fixed first.

- Note that this changes the order in which we emit "notify:devices" and
"device-added" signals. I think it makes the most sense to emit first
"device-removed", then "notify:devices", and finally "device-added"
signals.
This changes behavior for commit 52ae28f6e5 ('libnm: queue
added/removed signals and suppress uninitialized notifications'),
but I don't think that users should actually rely on the order. Still,
the new order makes the most sense to me.

- In NetworkManager, profiles can be made invisible to the user by setting
"connection.permissions". Such profiles are hidden from NMClient's
nm_client_get_connections() and its "connection-added"/"connection-removed"
signals.
Note that NMActiveConnection's nm_active_connection_get_connection()
and NMDevice's nm_device_get_available_connections() still expose such
hidden NMRemoteConnection instances. This behavior was preserved.

NUMBERS
-------

I compared 3 versions of libnm.

  [1] 962297f908, current tip of nm-1-20 branch
  [2] 4fad8c7c64, current master, immediate parent of this patch
  [3] this patch

All tests were done on Fedora 31, x86_64, gcc 9.2.1-1.fc31.
The libraries were build with

  $ ./contrib/fedora/rpm/build_clean.sh -g -w test -W debug

Note that RPM build already stripped the library.

---

N1) File size of libnm.so.0.1.0 in bytes. There currently seems to be an issue
  on Fedora 31 generating wrong ELF notes. Usually, libnm is smaller, but
  in these tests it had large (and bogus) ELF notes. Anyway, the point
  is to show the relative sizes, so it doesn't matter.

  [1] 4075552 (102.7%)
  [2] 3969624 (100.0%)
  [3] 3705208 ( 93.3%)

---

N2) `size /usr/lib64/libnm.so.0.1.0`:

          text             data              bss                dec               hex   filename
  [1]  1314569 (102.0%)   69980 ( 94.8%)   10632 ( 80.4%)   1395181 (101.4%)   1549ed   /usr/lib64/libnm.so.0.1.0
  [2]  1288410 (100.0%)   73796 (100.0%)   13224 (100.0%)   1375430 (100.0%)   14fcc6   /usr/lib64/libnm.so.0.1.0
  [3]  1229066 ( 95.4%)   65248 ( 88.4%)   13400 (101.3%)   1307714 ( 95.1%)   13f442   /usr/lib64/libnm.so.0.1.0

---

N3) Performance test with test-client.py. With checkout of [2], run

```
prepare_checkout() {
    rm -rf /tmp/nm-test && \
    git checkout -B test 4fad8c7c64 && \
    git clean -fdx && \
    ./autogen.sh --prefix=/tmp/nm-test && \
    make -j 5 install && \
    make -j 5 check-local-clients-tests-test-client
}
prepare_test() {
    NM_TEST_REGENERATE=1 NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v
}
do_test() {
    for i in {1..10}; do
        NM_TEST_CLIENT_BUILDDIR="/data/src/NetworkManager" NM_TEST_CLIENT_NMCLI_PATH=/usr/bin/nmcli python3 ./clients/tests/test-client.py -v || return -1
    done
    echo "done!"
}
prepare_checkout
prepare_test
time do_test
```

  [1]  real 2m14.497s (101.3%)     user 5m26.651s (100.3%)     sys  1m40.453s (101.4%)
  [2]  real 2m12.800s (100.0%)     user 5m25.619s (100.0%)     sys  1m39.065s (100.0%)
  [3]  real 1m54.915s ( 86.5%)     user 4m18.585s ( 79.4%)     sys  1m32.066s ( 92.9%)

---

N4) Performance. Run NetworkManager from build [2] and setup a large number
of profiles (551 profiles and 515 devices, mostly unrealized). This
setup is already at the edge of what NetworkManager currently can
handle. Of course, that is a different issue. Here we just check how
long plain `nmcli` takes on the system.

```
do_cleanup() {
    for UUID in $(nmcli -g NAME,UUID connection show | sed -n 's/^xx-c-.*:\([^:]\+\)$/\1/p'); do
        nmcli connection delete uuid "$UUID"
    done
    for DEVICE in $(nmcli -g DEVICE device status | grep '^xx-i-'); do
        nmcli device delete "$DEVICE"
    done
}

do_setup() {
    do_cleanup
    for i in {1..30}; do
        nmcli connection add type bond autoconnect no con-name xx-c-bond-$i ifname xx-i-bond-$i ipv4.method disabled ipv6.method ignore
        for j in $(seq $i 30); do
            nmcli connection add type vlan autoconnect no con-name xx-c-vlan-$i-$j vlan.id $j ifname xx-i-vlan-$i-$j vlan.parent xx-i-bond-$i  ipv4.method disabled ipv6.method ignore
        done
    done
    systemctl restart NetworkManager.service
    sleep 5
}

do_test() {
    perf stat -r 50 -B nmcli 1>/dev/null
}

do_test
```

  [1]

   Performance counter stats for 'nmcli' (50 runs):

              456.33 msec task-clock:u              #    1.093 CPUs utilized            ( +-  0.44% )
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
               5,900      page-faults:u             #    0.013 M/sec                    ( +-  0.02% )
       1,408,675,453      cycles:u                  #    3.087 GHz                      ( +-  0.48% )
       1,594,741,060      instructions:u            #    1.13  insn per cycle           ( +-  0.02% )
         368,744,018      branches:u                #  808.061 M/sec                    ( +-  0.02% )
           4,566,058      branch-misses:u           #    1.24% of all branches          ( +-  0.76% )

             0.41761 +- 0.00282 seconds time elapsed  ( +-  0.68% )

  [2]

   Performance counter stats for 'nmcli' (50 runs):

              477.99 msec task-clock:u              #    1.088 CPUs utilized            ( +-  0.36% )
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
               5,948      page-faults:u             #    0.012 M/sec                    ( +-  0.03% )
       1,471,133,482      cycles:u                  #    3.078 GHz                      ( +-  0.36% )
       1,655,275,369      instructions:u            #    1.13  insn per cycle           ( +-  0.02% )
         382,595,152      branches:u                #  800.433 M/sec                    ( +-  0.02% )
           4,746,070      branch-misses:u           #    1.24% of all branches          ( +-  0.49% )

             0.43923 +- 0.00242 seconds time elapsed  ( +-  0.55% )

  [3]

   Performance counter stats for 'nmcli' (50 runs):

              352.36 msec task-clock:u              #    1.027 CPUs utilized            ( +-  0.32% )
                   0      context-switches:u        #    0.000 K/sec
                   0      cpu-migrations:u          #    0.000 K/sec
               4,790      page-faults:u             #    0.014 M/sec                    ( +-  0.26% )
       1,092,341,186      cycles:u                  #    3.100 GHz                      ( +-  0.26% )
       1,209,045,283      instructions:u            #    1.11  insn per cycle           ( +-  0.02% )
         281,708,462      branches:u                #  799.499 M/sec                    ( +-  0.01% )
           3,101,031      branch-misses:u           #    1.10% of all branches          ( +-  0.61% )

             0.34296 +- 0.00120 seconds time elapsed  ( +-  0.35% )

---

N5) same setup as N4), but run `PAGER= /bin/time -v nmcli`:

  [1]

        Command being timed: "nmcli"
        User time (seconds): 0.42
        System time (seconds): 0.04
        Percent of CPU this job got: 107%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.43
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 34456
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 6128
        Voluntary context switches: 1298
        Involuntary context switches: 1106
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

  [2]
        Command being timed: "nmcli"
        User time (seconds): 0.44
        System time (seconds): 0.04
        Percent of CPU this job got: 108%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.44
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 34452
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 6169
        Voluntary context switches: 1849
        Involuntary context switches: 142
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

  [3]

        Command being timed: "nmcli"
        User time (seconds): 0.32
        System time (seconds): 0.02
        Percent of CPU this job got: 102%
        Elapsed (wall clock) time (h:mm:ss or m:ss): 0:00.34
        Average shared text size (kbytes): 0
        Average unshared data size (kbytes): 0
        Average stack size (kbytes): 0
        Average total size (kbytes): 0
        Maximum resident set size (kbytes): 29196
        Average resident set size (kbytes): 0
        Major (requiring I/O) page faults: 0
        Minor (reclaiming a frame) page faults: 5059
        Voluntary context switches: 919
        Involuntary context switches: 685
        Swaps: 0
        File system inputs: 0
        File system outputs: 0
        Socket messages sent: 0
        Socket messages received: 0
        Signals delivered: 0
        Page size (bytes): 4096
        Exit status: 0

---

N6) same setup as N4), but run `nmcli monitor` and look at `ps aux` for
  the RSS size.

      USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
  [1] me     1492900 21.0  0.2 461348 33248 pts/10   Sl+  15:02   0:00 nmcli monitor
  [2] me     1490721  5.0  0.2 461496 33548 pts/10   Sl+  15:00   0:00 nmcli monitor
  [3] me     1495801 16.5  0.1 459476 28692 pts/10   Sl+  15:04   0:00 nmcli monitor
2019-11-25 15:08:00 +01:00
Thomas Haller abb68d90fc libnm: retire nm_client_wimax_*() functions
The server doesn't support WiMAX anymore, hence there is no point in keeping
this functionality. While we cannot drop the functions, let them do nothing.

The code in NMManager is still there. But since we will soon drop
NMManager entirely, it doesn't matter.
2019-11-07 11:34:36 +01:00
Thomas Haller 8c7da62f9b libnm: add NM_CLIENT_CHECKPOINTS define 2019-10-27 14:30:51 +01:00
Thomas Haller 3ed514cb60 libnm: change default value for NMClient:{networking,wireless-hardware}-enabled properties 2019-10-27 14:30:51 +01:00
Thomas Haller 91f3311e71 libnm: change default value for NMClient:dns-{mode,rc-manager} properties 2019-10-27 14:30:51 +01:00
Thomas Haller dab1d780fd libnm: retire deprecated WiMAX NMObject types
WiMAX is deprecated since NetworkManager 1.2.0. Note that NetworkManager
on the server side also no longer supports this type, hence the server's
D-Bus API will never expose devices of this type.

Note that NMDeviceWimax and NMWimaxNsp are NMObject types. That means
they are instantiated by NMClient to represent information on the D-Bus
interface. As NetworkManager no longer exposes WiMAX devices, such
objects are never created. Also, it makes no sense for a user to
directly instantiate NMObject types, because they only work together
with NMClient.

Don't drop the related symbols and definitions from libnm, so that there
is no API/ABI change (as far as building and linking is concerned). But
make the types defunct (which of course is a behavioral API change).
Calling the API now triggers a g_return_*() warning.

Also belatedly mark the WimaxNsp API as deprecated. It should have been
done in 1.2. Note that here we deprecate the API and retire it at the
same time. Ideally, we would have deprecated it a few releases ago,
before retiring it. However, a deprecation mark is no excuse anyway:
removing or retiring API is usually painful, regardless of whether it
was marked as deprecated. In this case, there is no way a libnm user can
get hold of an NMDeviceWimax or NMWimaxNsp instance, because NMClient
simply no longer instantiates them. Hence, this change should not affect
any user in practice.

https://gitlab.freedesktop.org/NetworkManager/NetworkManager/merge_requests/316
2019-10-23 15:31:51 +02:00
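
A sketch of the retirement pattern (the function name is from libnm's WiMAX API; the body is illustrative):

```
#include <NetworkManager.h>

NMWimaxNsp *
nm_device_wimax_get_active_nsp (NMDeviceWimax *wimax)
{
    /* WiMAX is retired: warn via g_return_*() and return nothing
     * instead of touching now-defunct instance state. */
    g_return_val_if_reached (NULL);
}
```
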
Thomas Haller 35b024acaa libnm: hide NMClient struct from public headers and use direct private field
Having the NMClient/NMClientClass structs in the public header allows
the user to subclass these types. Subclassing this type was never
intended, nor is it supported, nor does it seem useful. Subclassing only
makes sense if the type has suitable hooks to extend the type in a
meaningful way. NMClient has none, and anybody trying to derive from
this class would be better off delegating instead.

Also, having these structs in the public header prevents us from
embedding the private data in the object structure itself.
That has a runtime overhead and is less convenient for debugging (it's
hard to find the private data pointer in gdb).

Most importantly, there is no easy way to find the offset of the private
data fields, short of calling NM_CLIENT_GET_PRIVATE() -- which currently
is a macro. If we later want to generically look up the offset of the
private data, we would need NM_CLIENT_GET_PRIVATE() as a function.
By embedding the private data at an internally, statically known offset,
we can use that offset directly.

Also drop all signal hooks; they are not useful either.

This is an ABI and API change, but of something that we never wanted to
be part of the ABI/API, and which hopefully nobody is using.
2019-10-21 18:34:54 +02:00
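
A sketch of the embedded-private-data pattern this enables (names are illustrative; libnm's actual macros differ):

```
#include <glib-object.h>

typedef struct {
    GMainContext *main_context;
    /* ... */
} NMClientPrivate;

typedef struct _NMClient NMClient;

/* in the C file, with the struct no longer public: */
struct _NMClient {
    GObject parent;
    NMClientPrivate _priv;
};

/* the offset of _priv is now statically known: */
#define NM_CLIENT_GET_PRIVATE(self) (&((NMClient *) (self))->_priv)
```
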
Thomas Haller 166095fe4e libnm: don't use GSimpleAsyncResult for nm_client_new_async()
As we don't have any data of our own, we don't need a
GSimpleAsyncResult/GTask. Just pass the caller's @callback to
g_async_initable_new_async().
2019-10-18 22:09:18 +02:00
Thomas Haller 27fa6bad0c libnm: add NM_CLIENT_DBUS_NAME_OWNER property
It's not yet implemented, but obviously it's interesting to
get the name owner to which the NMClient is currently connected.

Not only that: the name-owner property really tells whether
NetworkManager is currently running.
2019-10-18 22:09:18 +02:00
Thomas Haller b2f7197b29 libnm: add NM_CLIENT_DBUS_CONNECTION property
The used GDBusConnection should be configurable when creating the
NMClient instance. Automatically choosing one of the g_bus_get()
singletons is fine by default, but it's an unnecessary limitation.
2019-10-18 22:09:18 +02:00
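
A construction sketch, assuming the property define from this commit; client_ready_cb would finish with g_async_initable_new_finish():

```
#include <NetworkManager.h>

/* Start async construction of an NMClient on a given GDBusConnection. */
static void
create_client_on_connection (GDBusConnection    *dbus_connection,
                             GCancellable       *cancellable,
                             GAsyncReadyCallback client_ready_cb)
{
    g_async_initable_new_async (NM_TYPE_CLIENT,
                                G_PRIORITY_DEFAULT,
                                cancellable,
                                client_ready_cb,
                                NULL,
                                NM_CLIENT_DBUS_CONNECTION, dbus_connection,
                                NULL);
}
```
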
Thomas Haller fe24797241 libnm: remember the caller's GMainContext when creating NMClient
We will require this later. The NMClient shall be tied to the GMainContext
at the moment the instance gets created. This allows the user to have
multiple, independent NMClient instances (on different threads and
GMainContexts).

Currently this is still unused; it will be used later.
2019-10-18 22:09:18 +02:00
Thomas Haller ec63919818 libnm/trivial: move code in "nm-client.c" 2019-10-18 22:09:18 +02:00
Thomas Haller 15cc1d8770 libnm: avoid g_object_notify() in favor of _notify()
This looks up the GParamSpec from the obj_properties and is
thus more efficient. Also, the generated _notify() function
has the proper argument type and is thus generally preferable.
2019-10-18 22:09:18 +02:00
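
A sketch of what the generated _notify() helper amounts to (the enum and array names are illustrative):

```
#include <glib-object.h>

typedef enum {
    PROP_0,
    PROP_DBUS_CONNECTION,
    /* ... */
    _PROPERTY_ENUMS_LAST,
} PropertyId;

static GParamSpec *obj_properties[_PROPERTY_ENUMS_LAST];

/* notify by pspec: avoids the property-name lookup of g_object_notify(). */
static void
_notify (GObject *self, PropertyId prop_id)
{
    g_object_notify_by_pspec (self, obj_properties[prop_id]);
}
```
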
Thomas Haller e761d230c3 libnm: use obj_properties array in libnm and cleanup
This is not merely cosmetic. I will need the obj_properties
array to look up GParamSpecs by their PROP_* enum value. The
alternative would be a lookup by name, which is more expensive.
2019-10-18 22:09:18 +02:00
Thomas Haller 558dacce89 libnm: implement nm_client_add_connection*() by using GDBusConnection directly 2019-10-16 08:56:01 +02:00
Thomas Haller 5f31fd3951 libnm: implement nm_client_save_hostname_async() by using GDBusConnection directly 2019-10-16 08:56:01 +02:00
Thomas Haller c92eb66d38 libnm: implement nm_client_save_hostname() by using GDBusConnection directly 2019-10-16 08:56:00 +02:00
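
A sketch of the direct-call style used by these commits, shown for the synchronous hostname case; the arguments follow the org.freedesktop.NetworkManager.Settings D-Bus interface, and the wrapper name is illustrative:

```
#include <gio/gio.h>

/* Issue the Settings.SaveHostname() call directly on the GDBusConnection. */
static gboolean
save_hostname_sync (GDBusConnection *dbus_connection,
                    const char      *hostname,
                    GCancellable    *cancellable,
                    GError         **error)
{
    GVariant *ret;

    ret = g_dbus_connection_call_sync (dbus_connection,
                                       "org.freedesktop.NetworkManager",
                                       "/org/freedesktop/NetworkManager/Settings",
                                       "org.freedesktop.NetworkManager.Settings",
                                       "SaveHostname",
                                       g_variant_new ("(s)", hostname ? hostname : ""),
                                       G_VARIANT_TYPE ("()"),
                                       G_DBUS_CALL_FLAGS_NONE,
                                       -1,
                                       cancellable,
                                       error);
    if (!ret)
        return FALSE;
    g_variant_unref (ret);
    return TRUE;
}
```
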
Thomas Haller 356f1f6f33 libnm: implement nm_client_reload_connections_async() by using GDBusConnection directly 2019-10-16 08:56:00 +02:00
Thomas Haller 89c1007f92 libnm: implement nm_client_reload_connections() by using GDBusConnection directly 2019-10-16 08:56:00 +02:00
Thomas Haller 4af6219226 libnm: implement nm_client_load_connections_async() by using GDBusConnection directly 2019-10-16 08:56:00 +02:00
Thomas Haller d795bcd730 libnm: implement nm_client_load_connections() by using GDBusConnection directly 2019-10-16 08:56:00 +02:00
Thomas Haller 09e275dc28 libnm: implement nm_client_reload() by using GDBusConnection directly 2019-10-16 08:56:00 +02:00
Thomas Haller b1871a7549 libnm: implement nm_client_checkpoint_adjust_rollback_timeout() by using GDBusConnection directly 2019-10-16 08:56:00 +02:00