Streaming improvements No 3 (netdata#19168)
* ML uses synchronous queries

* do not call malloc_trim() to free memory, since it locks everything

* Reschedule dimensions for training from worker threads.

* when we collect or read from the database, they are SAMPLES; when we generate points for a chart, they are POINTS

* keep the receiver send buffer 10x the default

* support autoscaling stream circular buffers

* nd_poll() prefers sending data vs receiving data - in an attempt to dequeue as soon as possible

* fix last commit

* allow removing receivers and senders inline, if the stream thread is not working on them

* fix logs

* Revert "nd_poll() prefers sending data vs receiving data - in an attempt to dequeue as soon as possible"

This reverts commit 51539a9.

* do not access receiver or sender after it has been removed

* open cache hot2clean

* open cache hot2clean does not need flushing

* use aral for extent pages up to 65k

* track aral malloc and mmap allocations separately; add 8192 as a possible value to PGD

* do not evict too frequently if not needed

* fix aral metrics

* fix aral metrics again

* accurate accounting of memory for dictionaries, strings, labels and MRG

* log during shutdown the progress of dbengine flushing

* move metasync shutdown after dbengine

* max iterations per I/O events

* max iterations per I/O events - break the loop

* max iterations per I/O events - break the loop - again

* disable inline evictions for all caches

* when writing to sockets, send everything that can be sent

* cleanup code to trigger evictions

* fix calculation of eviction size

* fix calculation of eviction size once more

* fix calculation of eviction size once more - again

* ml and replication stop while backfilling is running

* process opcodes while draining the sockets; log with limit when asking to disconnect a node

* fix log

* ml stops when replication queries are running

* report pgd_padding to pulse

* aral precise memory accounting

* removed all alignas() and fixed the 2 issues that resulted in unaligned memory accesses (one in mqtt and another in streaming)

* remove the bigger sizes from PGD, but keep multiples of gorilla buffers

* exclude judy from sanitizers

* use 16 bytes alignment on 32 bit machines

* internal check about memory alignment

* experiment: do not allow more children to connect while there is backfilling or replication queries running

* when the node is initializing, retry in 30 seconds

* connector cleanup and isolation of control logic about enabling/disabling various parts

* stop also health queries while backfilling is running

* tuning

* drain the input

* improve interactivity when suspending

* more interactive stream_control

* debug logs to find the connection issue

* abstracted everything about stream control

* Add ml_host_{start,stop} again.

* Do not create/update anomaly-detection charts when ML is not running for a host.

* the rrdhost flag RECEIVER_DISCONNECTED has been inverted to COLLECTOR_ONLINE and is now used for localhost and virtual hosts too, to have a single source of truth about whether collected data is available

* ml_host_start() and ml_host_stop() are used by streaming receivers; ml_host_start() is used for localhost and virtual hosts

* fixed typo

* allow up to 3 backfills at a time

* add throttling based on user queries

* restore cache line paddings

* unify streaming logs to make it easier to grep logs

* tuning of stream_control

* more logs unification

* use mallocz_release_as_much_memory_to_the_system() under extreme conditions

* do not rely on the response code of evict_pages()

* log the gap of the database every time a node is connected

* updated ram requirements

---------

Co-authored-by: vkalintiris <[email protected]>
ktsaou and vkalintiris authored Dec 11, 2024
1 parent 2956244 commit 5f72d42
Showing 90 changed files with 1,792 additions and 1,132 deletions.
2 changes: 2 additions & 0 deletions CMakeLists.txt
@@ -1549,6 +1549,8 @@ set(STREAMING_PLUGIN_FILES
src/streaming/stream-traffic-types.h
src/streaming/stream-circular-buffer.c
src/streaming/stream-circular-buffer.h
src/streaming/stream-control.c
src/streaming/stream-control.h
)

set(WEB_PLUGIN_FILES
10 changes: 9 additions & 1 deletion docs/netdata-agent/sizing-netdata-agents/ram-requirements.md
@@ -19,7 +19,7 @@ This number can be lowered by limiting the number of Database Tiers or switching
| nodes currently received | nodes collected | 512 KiB | Structures and reception buffers |
| nodes currently sent | nodes collected | 512 KiB | Structures and dispatch buffers |

These numbers vary depending on name length, the number of dimensions per instance and per context, the number and length of the labels added, the number of Machine Learning models maintained and similar parameters. For most use cases, they represent the worst case scenario, so you may find out Netdata actually needs less than that.
These numbers vary depending on metric name length, the average number of dimensions per instance and per context, the number and length of the labels added, the number of database tiers configured, the number of Machine Learning models maintained per metric and similar parameters. For most use cases, they represent the worst case scenario, so you may find out Netdata actually needs less than that.

Each metric currently being collected needs (1 index + 20 collection + 5 ml) = 26 KiB. When it stops being collected, it needs 1 KiB (index).

@@ -84,3 +84,11 @@ We frequently see that the following strategy gives the best results:
3. Set the page cache in `netdata.conf` to use 1/3 of the available memory.

This will allow Netdata queries to have more caches, while leaving plenty of available memory for logs and the operating system.

In Netdata 2.1 we added the `netdata.conf` options `[db].dbengine use all ram for caches` and `[db].dbengine out of memory protection`.
Combining these two parameters is probably the simplest way to get the best results:

- `[db].dbengine out of memory protection` is by default 10% of total system RAM, but not more than 5GiB. When the amount of free memory drops below this, Netdata automatically starts releasing memory from its caches to avoid running out of memory. On `systemd-journal` centralization points, set this to the amount of memory to be dedicated to the systemd journal.
- `[db].dbengine use all ram for caches` is `no` by default. Set it to `yes` to use all the memory except the amount given above.

With these settings, Netdata will use all the available memory but leave the specified amount for the systemd journal.
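A minimal `netdata.conf` sketch combining the two options (the 5GiB value is illustrative; size it to whatever your journal needs):

```
[db]
    # keep this much RAM free; on systemd-journal centralization points,
    # set it to the memory you want reserved for the journal
    dbengine out of memory protection = 5GiB

    # let the dbengine caches grow into all remaining memory
    dbengine use all ram for caches = yes
```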
9 changes: 7 additions & 2 deletions src/aclk/mqtt_websockets/mqtt_ng.c
@@ -745,8 +745,13 @@ static size_t mqtt_ng_connect_size(struct mqtt_auth_properties *auth,
#define WRITE_POS(frag) (&(frag->data[frag->len]))

// [MQTT-1.5.2] Two Byte Integer
#define PACK_2B_INT(buffer, integer, frag) { *(uint16_t *)WRITE_POS(frag) = htobe16((integer)); \
DATA_ADVANCE(buffer, sizeof(uint16_t), frag); }
#define PACK_2B_INT(buffer, integer, frag) { \
uint16_t temp = htobe16((integer)); \
memcpy(WRITE_POS(frag), &temp, sizeof(uint16_t)); \
DATA_ADVANCE(buffer, sizeof(uint16_t), frag); \
}
// #define PACK_2B_INT(buffer, integer, frag) { *(uint16_t *)WRITE_POS(frag) = htobe16((integer));
// DATA_ADVANCE(buffer, sizeof(uint16_t), frag); }

static int _optimized_add(struct header_buffer *buf, void *data, size_t data_len, free_fnc_t data_free_fnc, struct buffer_fragment **frag)
{
61 changes: 45 additions & 16 deletions src/daemon/main.c
@@ -394,37 +394,62 @@ void netdata_cleanup_and_exit(int ret, const char *action, const char *action_re
{
watcher_step_complete(WATCHER_STEP_ID_FLUSH_DBENGINE_TIERS);
watcher_step_complete(WATCHER_STEP_ID_STOP_COLLECTION_FOR_ALL_HOSTS);
watcher_step_complete(WATCHER_STEP_ID_STOP_METASYNC_THREADS);

watcher_step_complete(WATCHER_STEP_ID_WAIT_FOR_DBENGINE_COLLECTORS_TO_FINISH);
watcher_step_complete(WATCHER_STEP_ID_WAIT_FOR_DBENGINE_MAIN_CACHE_TO_FINISH_FLUSHING);
watcher_step_complete(WATCHER_STEP_ID_STOP_DBENGINE_TIERS);
watcher_step_complete(WATCHER_STEP_ID_STOP_METASYNC_THREADS);
}
else
{
// exit cleanly

#ifdef ENABLE_DBENGINE
if(dbengine_enabled) {
nd_log(NDLS_DAEMON, NDLP_INFO, "Preparing DBENGINE shutdown...");
for (size_t tier = 0; tier < storage_tiers; tier++)
rrdeng_prepare_exit(multidb_ctx[tier]);

for (size_t tier = 0; tier < storage_tiers; tier++) {
if (!multidb_ctx[tier])
continue;
completion_wait_for(&multidb_ctx[tier]->quiesce.completion);
completion_destroy(&multidb_ctx[tier]->quiesce.completion);
}
struct pgc_statistics pgc_main_stats = pgc_get_statistics(main_cache);
nd_log(NDLS_DAEMON, NDLP_INFO, "Waiting for DBENGINE to commit unsaved data to disk (%zu pages, %zu bytes)...",
pgc_main_stats.queues[PGC_QUEUE_HOT].entries + pgc_main_stats.queues[PGC_QUEUE_DIRTY].entries,
pgc_main_stats.queues[PGC_QUEUE_HOT].size + pgc_main_stats.queues[PGC_QUEUE_DIRTY].size);

bool finished_tiers[RRD_STORAGE_TIERS] = { 0 };
size_t waiting_tiers, iterations = 0;
do {
waiting_tiers = 0;
iterations++;

for (size_t tier = 0; tier < storage_tiers; tier++) {
if (!multidb_ctx[tier] || finished_tiers[tier])
continue;

waiting_tiers++;
if (completion_timedwait_for(&multidb_ctx[tier]->quiesce.completion, 1)) {
completion_destroy(&multidb_ctx[tier]->quiesce.completion);
finished_tiers[tier] = true;
waiting_tiers--;
nd_log(NDLS_DAEMON, NDLP_INFO, "DBENGINE tier %zu finished!", tier);
}
else if(iterations % 10 == 0) {
pgc_main_stats = pgc_get_statistics(main_cache);
nd_log(NDLS_DAEMON, NDLP_INFO,
"Still waiting for DBENGINE tier %zu to finish "
"(cache still has %zu pages, %zu bytes hot, for all tiers)...",
tier,
pgc_main_stats.queues[PGC_QUEUE_HOT].entries + pgc_main_stats.queues[PGC_QUEUE_DIRTY].entries,
pgc_main_stats.queues[PGC_QUEUE_HOT].size + pgc_main_stats.queues[PGC_QUEUE_DIRTY].size);
}
}
} while(waiting_tiers);
nd_log(NDLS_DAEMON, NDLP_INFO, "DBENGINE shutdown completed...");
}
#endif
watcher_step_complete(WATCHER_STEP_ID_FLUSH_DBENGINE_TIERS);

rrd_finalize_collection_for_all_hosts();
watcher_step_complete(WATCHER_STEP_ID_STOP_COLLECTION_FOR_ALL_HOSTS);

metadata_sync_shutdown();
watcher_step_complete(WATCHER_STEP_ID_STOP_METASYNC_THREADS);

#ifdef ENABLE_DBENGINE
if(dbengine_enabled) {
size_t running = 1;
@@ -452,18 +477,22 @@ void netdata_cleanup_and_exit(int ret, const char *action, const char *action_re
rrdeng_exit(multidb_ctx[tier]);
rrdeng_enq_cmd(NULL, RRDENG_OPCODE_SHUTDOWN_EVLOOP, NULL, NULL, STORAGE_PRIORITY_BEST_EFFORT, NULL, NULL);
watcher_step_complete(WATCHER_STEP_ID_STOP_DBENGINE_TIERS);
} else {
}
else {
// Skip these steps
watcher_step_complete(WATCHER_STEP_ID_WAIT_FOR_DBENGINE_COLLECTORS_TO_FINISH);
watcher_step_complete(WATCHER_STEP_ID_WAIT_FOR_DBENGINE_MAIN_CACHE_TO_FINISH_FLUSHING);
watcher_step_complete(WATCHER_STEP_ID_STOP_DBENGINE_TIERS);
}
#else
// Skip these steps
watcher_step_complete(WATCHER_STEP_ID_WAIT_FOR_DBENGINE_COLLECTORS_TO_FINISH);
watcher_step_complete(WATCHER_STEP_ID_WAIT_FOR_DBENGINE_MAIN_CACHE_TO_FINISH_FLUSHING);
watcher_step_complete(WATCHER_STEP_ID_STOP_DBENGINE_TIERS);
// Skip these steps
watcher_step_complete(WATCHER_STEP_ID_WAIT_FOR_DBENGINE_COLLECTORS_TO_FINISH);
watcher_step_complete(WATCHER_STEP_ID_WAIT_FOR_DBENGINE_MAIN_CACHE_TO_FINISH_FLUSHING);
watcher_step_complete(WATCHER_STEP_ID_STOP_DBENGINE_TIERS);
#endif

metadata_sync_shutdown();
watcher_step_complete(WATCHER_STEP_ID_STOP_METASYNC_THREADS);
}

// Don't register a shutdown event if we crashed
44 changes: 26 additions & 18 deletions src/daemon/pulse/pulse-aral.c
@@ -6,7 +6,7 @@
struct aral_info {
const char *name;
RRDSET *st_memory;
RRDDIM *rd_used, *rd_free, *rd_structures;
RRDDIM *rd_malloc_used, *rd_malloc_free, *rd_mmap_used, *rd_mmap_free, *rd_structures, *rd_padding;

RRDSET *st_utilization;
RRDDIM *rd_utilization;
@@ -74,24 +74,26 @@ void pulse_aral_do(bool extended) {
if (!stats)
continue;

size_t allocated_bytes = __atomic_load_n(&stats->malloc.allocated_bytes, __ATOMIC_RELAXED) +
__atomic_load_n(&stats->mmap.allocated_bytes, __ATOMIC_RELAXED);
size_t malloc_allocated_bytes = __atomic_load_n(&stats->malloc.allocated_bytes, __ATOMIC_RELAXED);
size_t malloc_used_bytes = __atomic_load_n(&stats->malloc.used_bytes, __ATOMIC_RELAXED);
if(malloc_used_bytes > malloc_allocated_bytes)
malloc_allocated_bytes = malloc_used_bytes;
size_t malloc_free_bytes = malloc_allocated_bytes - malloc_used_bytes;

size_t used_bytes = __atomic_load_n(&stats->malloc.used_bytes, __ATOMIC_RELAXED) +
__atomic_load_n(&stats->mmap.used_bytes, __ATOMIC_RELAXED);

// slight difference may exist, due to the time needed to get these values
// fix the obvious discrepancies
if(used_bytes > allocated_bytes)
used_bytes = allocated_bytes;
size_t mmap_allocated_bytes = __atomic_load_n(&stats->mmap.allocated_bytes, __ATOMIC_RELAXED);
size_t mmap_used_bytes = __atomic_load_n(&stats->mmap.used_bytes, __ATOMIC_RELAXED);
if(mmap_used_bytes > mmap_allocated_bytes)
mmap_allocated_bytes = mmap_used_bytes;
size_t mmap_free_bytes = mmap_allocated_bytes - mmap_used_bytes;

size_t structures_bytes = __atomic_load_n(&stats->structures.allocated_bytes, __ATOMIC_RELAXED);

size_t free_bytes = allocated_bytes - used_bytes;
size_t padding_bytes = __atomic_load_n(&stats->malloc.padding_bytes, __ATOMIC_RELAXED) +
__atomic_load_n(&stats->mmap.padding_bytes, __ATOMIC_RELAXED);

NETDATA_DOUBLE utilization;
if(used_bytes && allocated_bytes)
utilization = 100.0 * (NETDATA_DOUBLE)used_bytes / (NETDATA_DOUBLE)allocated_bytes;
if((malloc_used_bytes + mmap_used_bytes != 0) && (malloc_allocated_bytes + mmap_allocated_bytes != 0))
utilization = 100.0 * (NETDATA_DOUBLE)(malloc_used_bytes + mmap_used_bytes) / (NETDATA_DOUBLE)(malloc_allocated_bytes + mmap_allocated_bytes);
else
utilization = 100.0;

@@ -118,14 +120,20 @@ void pulse_aral_do(bool extended) {

rrdlabels_add(ai->st_memory->rrdlabels, "ARAL", ai->name, RRDLABEL_SRC_AUTO);

ai->rd_free = rrddim_add(ai->st_memory, "free", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
ai->rd_used = rrddim_add(ai->st_memory, "used", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
ai->rd_structures = rrddim_add(ai->st_memory, "structures", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
ai->rd_malloc_free = rrddim_add(ai->st_memory, "malloc free", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
ai->rd_mmap_free = rrddim_add(ai->st_memory, "mmap free", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
ai->rd_malloc_used = rrddim_add(ai->st_memory, "malloc used", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
ai->rd_mmap_used = rrddim_add(ai->st_memory, "mmap used", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
ai->rd_structures = rrddim_add(ai->st_memory, "structures", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
ai->rd_padding = rrddim_add(ai->st_memory, "padding", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
}

rrddim_set_by_pointer(ai->st_memory, ai->rd_used, (collected_number)allocated_bytes);
rrddim_set_by_pointer(ai->st_memory, ai->rd_free, (collected_number)free_bytes);
rrddim_set_by_pointer(ai->st_memory, ai->rd_malloc_used, (collected_number)malloc_used_bytes);
rrddim_set_by_pointer(ai->st_memory, ai->rd_malloc_free, (collected_number)malloc_free_bytes);
rrddim_set_by_pointer(ai->st_memory, ai->rd_mmap_used, (collected_number)mmap_used_bytes);
rrddim_set_by_pointer(ai->st_memory, ai->rd_mmap_free, (collected_number)mmap_free_bytes);
rrddim_set_by_pointer(ai->st_memory, ai->rd_structures, (collected_number)structures_bytes);
rrddim_set_by_pointer(ai->st_memory, ai->rd_padding, (collected_number)padding_bytes);
rrdset_done(ai->st_memory);
}

46 changes: 23 additions & 23 deletions src/daemon/pulse/pulse-daemon-memory.c
@@ -87,9 +87,7 @@ void pulse_daemon_memory_do(bool extended) {
netdata_buffers_statistics.buffers_streaming +
netdata_buffers_statistics.cbuffers_streaming +
netdata_buffers_statistics.buffers_web +
replication_allocated_buffers() +
aral_by_size_overhead() +
judy_aral_overhead();
replication_allocated_buffers() + aral_by_size_free_bytes() + judy_aral_free_bytes();

size_t strings = 0;
string_statistics(NULL, NULL, NULL, NULL, NULL, &strings, NULL, NULL);
@@ -101,8 +99,7 @@ void pulse_daemon_memory_do(bool extended) {
rrddim_set_by_pointer(st_memory, rd_collectors,
(collected_number)dictionary_stats_memory_total(dictionary_stats_category_collectors));

rrddim_set_by_pointer(st_memory,
rd_rrdhosts,
rrddim_set_by_pointer(st_memory,rd_rrdhosts,
(collected_number)dictionary_stats_memory_total(dictionary_stats_category_rrdhost) + (collected_number)netdata_buffers_statistics.rrdhost_allocations_size);

rrddim_set_by_pointer(st_memory, rd_rrdsets,
@@ -124,14 +121,15 @@ void pulse_daemon_memory_do(bool extended) {
(collected_number)dictionary_stats_memory_total(dictionary_stats_category_replication) + (collected_number)replication_allocated_memory());
#else
uint64_t metadata =
aral_by_size_used_bytes() +
dictionary_stats_category_rrdhost.memory.dict +
dictionary_stats_category_rrdset.memory.dict +
dictionary_stats_category_rrddim.memory.dict +
dictionary_stats_category_rrdcontext.memory.dict +
dictionary_stats_category_rrdhealth.memory.dict +
dictionary_stats_category_functions.memory.dict +
dictionary_stats_category_replication.memory.dict +
aral_by_size_structures_bytes() + aral_by_size_used_bytes() +
dictionary_stats_category_rrdhost.memory.dict + dictionary_stats_category_rrdhost.memory.index +
dictionary_stats_category_rrdset.memory.dict + dictionary_stats_category_rrdset.memory.index +
dictionary_stats_category_rrddim.memory.dict + dictionary_stats_category_rrddim.memory.index +
dictionary_stats_category_rrdcontext.memory.dict + dictionary_stats_category_rrdcontext.memory.index +
dictionary_stats_category_rrdhealth.memory.dict + dictionary_stats_category_rrdhealth.memory.index +
dictionary_stats_category_functions.memory.dict + dictionary_stats_category_functions.memory.index +
dictionary_stats_category_replication.memory.dict + dictionary_stats_category_replication.memory.index +
netdata_buffers_statistics.rrdhost_allocations_size +
replication_allocated_memory();

rrddim_set_by_pointer(st_memory, rd_metadata, (collected_number)metadata);
@@ -157,7 +155,7 @@ void pulse_daemon_memory_do(bool extended) {
(collected_number) workers_allocated_memory());

rrddim_set_by_pointer(st_memory, rd_aral,
(collected_number) aral_by_size_structures());
(collected_number)aral_by_size_structures_bytes());

rrddim_set_by_pointer(st_memory,
rd_judy, (collected_number) judy_aral_structures());
@@ -168,6 +166,13 @@ void pulse_daemon_memory_do(bool extended) {
rrdset_done(st_memory);
}

// ----------------------------------------------------------------------------------------------------------------

if(!extended)
return;

// ----------------------------------------------------------------------------------------------------------------

{
static RRDSET *st_memory_buffers = NULL;
static RRDDIM *rd_queries = NULL;
@@ -212,8 +217,8 @@ void pulse_daemon_memory_do(bool extended) {
rd_cbuffers_streaming = rrddim_add(st_memory_buffers, "streaming cbuf", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
rd_buffers_replication = rrddim_add(st_memory_buffers, "replication", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
rd_buffers_web = rrddim_add(st_memory_buffers, "web", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
rd_buffers_aral = rrddim_add(st_memory_buffers, "aral", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
rd_buffers_judy = rrddim_add(st_memory_buffers, "judy", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
rd_buffers_aral = rrddim_add(st_memory_buffers, "aral-by-size free", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
rd_buffers_judy = rrddim_add(st_memory_buffers, "aral-judy free", NULL, 1, 1, RRD_ALGORITHM_ABSOLUTE);
}

rrddim_set_by_pointer(st_memory_buffers, rd_queries, (collected_number)netdata_buffers_statistics.query_targets_size + (collected_number) onewayalloc_allocated_memory());
@@ -228,17 +233,12 @@ void pulse_daemon_memory_do(bool extended) {
rrddim_set_by_pointer(st_memory_buffers, rd_cbuffers_streaming, (collected_number)netdata_buffers_statistics.cbuffers_streaming);
rrddim_set_by_pointer(st_memory_buffers, rd_buffers_replication, (collected_number)replication_allocated_buffers());
rrddim_set_by_pointer(st_memory_buffers, rd_buffers_web, (collected_number)netdata_buffers_statistics.buffers_web);
rrddim_set_by_pointer(st_memory_buffers, rd_buffers_aral, (collected_number)aral_by_size_overhead());
rrddim_set_by_pointer(st_memory_buffers, rd_buffers_judy, (collected_number)judy_aral_overhead());
rrddim_set_by_pointer(st_memory_buffers, rd_buffers_aral, (collected_number)aral_by_size_free_bytes());
rrddim_set_by_pointer(st_memory_buffers, rd_buffers_judy, (collected_number)judy_aral_free_bytes());

rrdset_done(st_memory_buffers);
}

// ----------------------------------------------------------------------------------------------------------------

if(!extended)
return;

// ----------------------------------------------------------------------------------------------------------------

}
