Revamp public API
Now, `khepri` is the main entry point for everything public. There is no
longer a high-level API in `khepri` and a lower-level API in
`khepri_machine`. `khepri` provides more user-friendly functions and we
will add more when we see fit. Here are a few examples:

* All functions accept both native paths (`[stock, wood, <<"oak">>]`)
  and Unix-like paths (`"/:stock/:wood/oak"`). Note that the syntax for
  Unix-like paths has changed: atoms are prefixed with `:` and binaries
  are left as-is.

* `khepri:get_data(StoreId, PathPattern)` lets you quickly get the data
  attached to a specific node. It is easier than going through
  `khepri:get(StoreId, PathPattern)` and extracting the data from the
  returned map.
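For illustration, a hypothetical session combining both points (the store
ID, paths and data are made up, and the return shapes are assumed based on
the examples elsewhere in this commit):

```erlang
%% Both addressing styles refer to the same tree node.
{ok, _} = khepri:put(StoreId, [stock, wood, <<"oak">>], 100),
{ok, _} = khepri:put(StoreId, "/:stock/:wood/oak", 100),

%% `get_data' returns the attached data directly, instead of a map of
%% node properties to extract it from.
{ok, 100} = khepri:get_data(StoreId, [stock, wood, <<"oak">>]).
```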

Inside transactions, `khepri_tx` provides the same API as `khepri`,
except when the function does not make sense in the context of a
transaction. Unix-like paths are also accepted by `khepri_tx` functions.
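A sketch of a transaction using this API (the transaction body, paths and
return handling are illustrative, not taken from the actual test suite):

```erlang
khepri:transaction(
  StoreId,
  fun() ->
      %% `khepri_tx' mirrors the `khepri' API inside the transaction.
      case khepri_tx:get([stock, wood, <<"oak">>]) of
          {ok, _} -> khepri_tx:delete("/:stock/:wood/oak");
          _Other  -> ok
      end
  end).
```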

`khepri_cluster` is a new module that exposes the clustering part of the
API. These functions were previously in `khepri` and were moved to this
module. It is also part of the public interface.

`khepri_path` and `khepri_condition` remain part of the public API for
those needing to manipulate paths.

Other modules are private. They remain visible in the documentation
because understanding the internals can sometimes help. However, changes
in those modules may not be documented in release notes and may not be
reflected in the release versions.
dumbbell committed Apr 20, 2022
1 parent 95098d2 commit 3fc0da9
Showing 27 changed files with 3,196 additions and 1,373 deletions.
8 changes: 2 additions & 6 deletions README.md
@@ -89,10 +89,6 @@ khepri:insert([emails, <<"alice">>], "[email protected]").
khepri:insert("/:emails/alice", "[email protected]").
```

The `khepri` module provides the "simple API". It has several functions to
cover the most common uses. For advanced uses, using the `khepri_machine`
module directly is preferred.

### Read data back

To get Alice's email address back, **query** the same path:
@@ -178,7 +174,7 @@ the database itself and automatically execute it after some event occurs.
on_action => Action} = Props
end,

khepri_machine:put(
khepri:put(
StoreId,
StoredProcPath,
#kpayload_sproc{sproc = Fun}))}.
@@ -189,7 +185,7 @@ the database itself and automatically execute it after some event occurs.
```erlang
EventFilter = #kevf_tree{path = [stock, wood, <<"oak">>]},

ok = khepri_machine:register_trigger(
ok = khepri:register_trigger(
StoreId,
TriggerId,
EventFilter,
112 changes: 44 additions & 68 deletions doc/overview.edoc
@@ -44,9 +44,9 @@ Because RabbitMQ already uses an implementation of the Raft consensus algorithm
for its quorum queues, it was decided to leverage that library for all
metadata. That's how Khepri was born.

Thanks to Ra and Raft, it is <strong>clear how Khepri will behave during and
recover from a network partition</strong>. This makes it more comfortable for
the RabbitMQ team and users, thanks to the absence of unknowns.
Thanks to Ra and Raft, it is <strong>clear how Khepri will behave during a
network partition and recover from it</strong>. This makes it more comfortable
for the RabbitMQ team and users, thanks to the absence of unknowns.

<blockquote>
At the time of this writing, RabbitMQ does not use Khepri in a production
@@ -91,7 +91,7 @@ More payload types may be added in the future.

Payloads are represented using macros or helper functions:
<ul>
<li>`none' and {@link khepri:no_payload/0}</li>
<li>`?NO_PAYLOAD' and {@link khepri:no_payload/0}</li>
<li>`#kpayload_data{data = Term}' and {@link khepri:data_payload/1}</li>
<li>`#kpayload_sproc{sproc = Fun}' and {@link khepri:sproc_payload/1}</li>
</ul>
@@ -108,10 +108,10 @@ specific use cases and detect the type of payload.
Properties are:
<ul>
<li>The version of the payload, tracking the number of times it was modified
({@link khepri_machine:payload_version()}).</li>
({@link khepri:payload_version()}).</li>
<li>The version of the list of child nodes, tracking the number of times child
nodes were added or removed ({@link khepri_machine:child_list_version()}).</li>
<li>The number of child nodes ({@link khepri_machine:child_list_count()}).</li>
nodes were added or removed ({@link khepri:child_list_version()}).</li>
<li>The number of child nodes ({@link khepri:child_list_count()}).</li>
</ul>

=== Addressing a tree node ===
@@ -189,68 +189,45 @@ KeepWhileCondition = #{[stock, wood] => #if_child_list_length{count = {gt, 0}}}.
`keep_while' conditions on self (like the example above) are not evaluated on
the first insert though.

== Khepri API ==
== Stores ==

A Khepri store corresponds to one Ra cluster. In fact, the name of the Ra
cluster is the name of the Khepri store. It is possible to have multiple
database instances running on the same Erlang node or cluster by starting
multiple Ra clusters. Note that it is called a "Ra cluster" but it can have a
single member.

=== High-level API ===
By default, {@link khepri:start/0} starts a default store called `khepri',
based on Ra's default system. You can start a simple store using {@link
khepri:start/1}. To configure a cluster, you need to use {@link
khepri_cluster} to add or remove members.
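For instance, to start and use the default store (a minimal sketch; the
return value of {@link khepri:start/0} is assumed here, and the path and
data are illustrative):

```
{ok, StoreId} = khepri:start(),
{ok, _} = khepri:put(StoreId, [stock, wood, <<"oak">>], 100).
'''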

== Khepri API ==

A high-level API is provided by the {@link khepri} module. It covers most
common use cases and should be straightforward to use.
The essential part of the public API is provided by the {@link khepri} module.
It covers most common use cases and should be straightforward to use.

```
khepri:insert([stock, wood, <<"lime tree">>], 150),
{ok, _} = khepri:put([stock, wood, <<"lime tree">>], 150),

Ret = khepri:get([stock, wood, <<"lime tree">>]),
{ok, #{[stock, wood, <<"lime tree">>] =>
#{child_list_count => 0,
child_list_version => 1,
data => 150,
payload_version => 1}}} = Ret,
#{data => 150,
payload_version => 1,
child_list_count => 0,
child_list_version => 1}}} = Ret,

true = khepri:exists([stock, wood, <<"lime tree">>]),

khepri:delete([stock, wood, <<"lime tree">>]).
{ok, _} = khepri:delete([stock, wood, <<"lime tree">>]).
'''

=== Low-level API ===
Inside transaction functions, {@link khepri_tx} must be used instead of {@link
khepri}. The former provides the same API, except for functions which don't
make sense in the context of a transaction function.

The high-level API is built on top of a low-level API. The low-level API is
provided by the {@link khepri_machine} module.

The low-level API provides just a handful of primitives. More advanced or
specific use cases may need to rely on that low-level API.

```
%% Unlike the high-level API's `khepri:insert/2' function, this low-level
%% insert returns whatever it replaced (if anything). In this case, there was
%% nothing before, so the returned value is empty.
Ret1 = khepri_machine:put(
StoreId, [stock, wood, <<"lime tree">>],
#kpayload_data{data = 150}),
{ok, #{}} = Ret1,

Ret2 = khepri_machine:get(StoreId, [stock, wood, <<"lime tree">>]),
{ok, #{[stock, wood, <<"lime tree">>] =>
#{child_list_count => 0,
child_list_version => 1,
data => 150,
payload_version => 1}}} = Ret2,

%% Unlike the high-level API's `khepri:delete/2' function, this low-level
%% delete returns whatever it deleted.
Ret3 = khepri_machine:delete(StoreId, [stock, wood, <<"lime tree">>]),
{ok, #{[stock, wood, <<"lime tree">>] =>
#{child_list_count => 0,
child_list_version => 1,
data => 150,
payload_version => 1}}} = Ret3.
'''

=== Stores ===

It is possible to have multiple database instances running on the same Erlang
node or cluster.

By default, Khepri starts a default store, based on Ra's default system.
provided by the private {@link khepri_machine} module.

== Transactions ==

@@ -273,8 +250,7 @@ next section need to be taken into account.</li>
</ul>

The nature of the anonymous function is passed as the `ReadWrite' argument to
{@link khepri:transaction/3} or {@link khepri_machine:transaction/3}
functions.
{@link khepri:transaction/3}.
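For example (a sketch; the exact `ReadWrite' values accepted by {@link
khepri:transaction/3} are assumed to include `rw' for a read-write
transaction, and the transaction body is illustrative):

```
khepri:transaction(
  StoreId,
  fun() ->
      %% This function writes, so it is declared as read-write.
      khepri_tx:put([stock, wood, <<"oak">>], 150)
  end,
  rw).
'''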

=== The constraints imposed by Raft ===

@@ -344,9 +320,9 @@ outside of the changes to the tree nodes.
If the transaction needs to have side effects, there are two options:
<ul>
<li>Perform any side effects after the transaction.</li>
<li>Use {@link khepri_machine:put/3} with {@link
khepri_condition:if_payload_version()} conditions in the path and retry if the
put fails because the version changed in between.</li>
<li>Use {@link khepri:put/3} with {@link khepri_condition:if_payload_version()}
conditions in the path and retry if the put fails because the version changed
in between.</li>
</ul>

Here is an example of the second option:
@@ -355,7 +331,7 @@
Path = [stock, wood, <<"lime tree">>],
{ok, #{Path := #{data := Term,
payload_version := PayloadVersion}}} =
khepri_machine:get(StoreId, Path),
khepri:get(StoreId, Path),

%% Do anything with `Term` that depends on external factors and could have side
%% effects.
@@ -420,40 +396,40 @@ A stored procedure can accept any number of arguments too.

It is possible to execute a stored procedure directly without configuring any
triggers. To execute a stored procedure, you can call {@link
khepri_machine:run_sproc/3}. Here is an example:
khepri:run_sproc/3}. Here is an example:

```
Ret = khepri_machine:run_sproc(
Ret = khepri:run_sproc(
StoreId,
StoredProcPath,
[] = _Args).
'''

This works exactly like {@link erlang:apply/2}. The list of arguments passed
to {@link khepri_machine:run_sproc/3} must correspond to the stored procedure
to {@link khepri:run_sproc/3} must correspond to the stored procedure
arity.

=== Configuring a trigger ===

Khepri uses <em>event filters</em> to associate a type of events with a stored
procedure. Khepri supports tree change events and thus only supports a single
event filter called {@link khepri_machine:event_filter_tree()}.
event filter called {@link khepri:event_filter_tree()}.

An event filter is registered using {@link khepri_machine:register_trigger/4}:
An event filter is registered using {@link khepri:register_trigger/4}:

```
EventFilter = #kevf_tree{path = [stock, wood, <<"oak">>], %% Required
props = #{on_actions => [delete], %% Optional
priority => 10}}, %% Optional

ok = khepri_machine:register_trigger(
ok = khepri:register_trigger(
StoreId,
TriggerId,
EventFilter,
StoredProcPath))}.
'''

In this example, the {@link khepri_machine:event_filter_tree()} record only
In this example, the {@link khepri:event_filter_tree()} record only
requires the path to monitor. The path can be any path pattern and thus can
have conditions to monitor several nodes at once.

16 changes: 10 additions & 6 deletions include/khepri.hrl
@@ -37,10 +37,11 @@
%% Payload types.
%% -------------------------------------------------------------------

-record(kpayload_data, {data :: khepri_machine:data()}).
-define(NO_PAYLOAD, '$__NO_PAYLOAD__').
-record(kpayload_data, {data :: khepri:data()}).
-record(kpayload_sproc, {sproc :: khepri_fun:standalone_fun()}).

-define(IS_KHEPRI_PAYLOAD(Payload), (Payload =:= none orelse
-define(IS_KHEPRI_PAYLOAD(Payload), (Payload =:= ?NO_PAYLOAD orelse
is_record(Payload, kpayload_data) orelse
is_record(Payload, kpayload_sproc))).

@@ -73,14 +74,14 @@
{exists = true :: boolean()}).

-record(if_payload_version,
{version = 0 :: khepri_machine:payload_version() |
{version = 0 :: khepri:payload_version() |
khepri_condition:comparison_op(
khepri_machine:payload_version())}).
khepri:payload_version())}).

-record(if_child_list_version,
{version = 0 :: khepri_machine:child_list_version() |
{version = 0 :: khepri:child_list_version() |
khepri_condition:comparison_op(
khepri_machine:child_list_version())}).
khepri:child_list_version())}).

-record(if_child_list_length,
{count = 0 :: non_neg_integer() |
@@ -105,3 +106,6 @@
%-record(kevf_process, {pid :: pid(),
% props = #{} :: #{on_reason => ets:match_pattern(),
% priority => integer()}}).

-define(IS_KHEPRI_EVENT_FILTER(EventFilter),
(is_record(EventFilter, kevf_tree))).
23 changes: 8 additions & 15 deletions src/internal.hrl
@@ -5,7 +5,7 @@
%% Copyright (c) 2021-2022 VMware, Inc. or its affiliates. All rights reserved.
%%

-define(DEFAULT_RA_CLUSTER_NAME, ?MODULE).
-define(DEFAULT_RA_CLUSTER_NAME, khepri).
-define(DEFAULT_RA_FRIENDLY_NAME, "Khepri datastore").

-define(INIT_DATA_VERSION, 1).
@@ -19,36 +19,29 @@
%% Structure representing each node in the tree, including the root node.
%% TODO: Rename stat to something more correct?
-record(node, {stat = ?INIT_NODE_STAT :: khepri_machine:stat(),
payload = none :: khepri_machine:payload(),
payload = ?NO_PAYLOAD :: khepri:payload(),
child_nodes = #{} :: #{khepri_path:component() := #node{}}}).

%% State machine commands.

-record(put, {path :: khepri_path:pattern(),
payload = none :: khepri_machine:payload(),
payload = ?NO_PAYLOAD :: khepri:payload(),
extra = #{} :: #{keep_while =>
khepri_machine:keep_while_conds_map()}}).
khepri:keep_while_conds_map()}}).

-record(delete, {path :: khepri_path:pattern()}).

-record(tx, {'fun' :: khepri_fun:standalone_fun()}).

-record(register_trigger, {id :: khepri_machine:trigger_id(),
event_filter :: khepri_machine:event_filter(),
-record(register_trigger, {id :: khepri:trigger_id(),
event_filter :: khepri:event_filter(),
sproc :: khepri_path:path()}).

-record(ack_triggered, {triggered :: [khepri_machine:triggered()]}).

-record(triggered, {id :: khepri_machine:trigger_id(),
-record(triggered, {id :: khepri:trigger_id(),
%% TODO: Do we need a ref to distinguish multiple
%% instances of the same trigger?
event_filter :: khepri_machine:event_filter(),
event_filter :: khepri:event_filter(),
sproc :: khepri_fun:standalone_fun(),
props = #{} :: map()}).

%% Structure representing an anonymous function "extracted" as a compiled
%% module for storage.
-record(standalone_fun, {module :: module(),
beam :: binary(),
arity :: arity(),
env :: list()}).
