NebulaGraph Test Manual

Usage

Build project

First, change to the root directory of nebula-graph and build the whole project.
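
For reference, here is a minimal sketch of a typical out-of-source CMake build; the exact options and directory layout may differ, so follow the project's build documentation:

$ cd /path/to/nebula-graph
$ mkdir -p build && cd build
$ cmake ..              # configure; add project-specific options here if you need them
$ make -j$(nproc)       # build all binaries, including the metad/storaged/graphd services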

Init environment

The Nebula test framework depends on python3 (>= 3.7) and some third-party libraries, such as nebula-python, reformat-gherkin, pytest, and pytest-bdd.

Install all of these dependencies before running the test cases:

$ cd tests
$ make init-all
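
If init-all fails or the tests cannot import their dependencies, a quick sanity check of the Python environment can help; a minimal sketch using only standard pip commands:

$ python3 --version                                        # should report 3.7 or newer
$ python3 -m pip list | grep -E 'pytest|nebula|gherkin'    # the libraries listed above should appear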

Start nebula servers

Then run the following commands with GNU Make to start all the nebula services built in the steps above:

$ cd tests
$ make up

The up target in the Makefile selects random ports for nebula, installs all the necessary built files into a temporary folder whose name looks like server_2021-03-15T17-52-52, and starts the metad/storaged/graphd servers.
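
To confirm the servers actually came up, you can inspect the temporary folder and the server processes from the tests directory; a minimal sketch, assuming the standard nebula-metad/nebula-storaged/nebula-graphd binary names:

$ ls -d server_*                                         # the temporary install folder created by `make up`
$ ps -ef | grep -E 'nebula-(graphd|metad|storaged)'      # the three kinds of server processes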

If your build directory is not nebula-graph/build, specify the BUILD_DIR parameter when bringing up the nebula services:

$ make BUILD_DIR=/path/to/nebula/build/directory up

Run all test cases

There are two classes of NebulaGraph test cases: one is built on pytest and the other on TCK. They are split into separate execution targets:

$ make test # run pytest cases
$ make tck  # run TCK cases

If you want to debug the core files generated while running tests, you can pass the RM_DIR parameter to the make target, like:

$ make RM_DIR=false tck  # default value of RM_DIR is true

If you want to debug only one test case, check the usage of pytest itself with pytest --help. For example, to run the test cases related to MATCH:

# pytest uses the keyword 'match' to filter by scenario name: every scenario whose name
# contains 'match' will be selected.
# You can also annotate a scenario with a '@keyword' tag and run only that scenario with `pytest -k 'keyword'`.
$ pytest -k 'match' -m 'not skip' .
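
For instance, if you have annotated a scenario with a hypothetical @my_debug_case tag as described in the comment above, you could run just that scenario with:

$ pytest -k 'my_debug_case' -m 'not skip' .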

We also provide a parameter named address to let these tests connect to nebula services maintained by yourself:

$ pytest --address="192.168.0.1:9669" -m 'not skip' .
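
These options can be combined; for example, to run only the MATCH-related cases against your own cluster:

$ pytest --address="192.168.0.1:9669" -k 'match' -m 'not skip' .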

You can use the following command to rerun only the test cases that failed previously:

$ pytest --last-failed --gherkin-terminal-reporter --gherkin-terminal-reporter-expanded .

or

$ make fail

The gherkin-terminal-reporter options print the pytest report in a more readable form.

Stop nebula servers

The following command stops the nebula servers started in the steps above:

$ make down

If some unused nebula processes are still running, you can kill them with:

$ make kill

Clean up the temporary files with:

$ make clean

How to add a test case

You can find all the nebula test cases in tck/features and some openCypher cases in tck/openCypher/features. The openCypher TCK documentation may also be a useful reference.

The test cases are organized into feature files and described in the Gherkin language. The structure of a feature file looks like the following examples:

Basic Case

Feature: Basic match

  Background:
    Given a graph with space named "nba"

  Scenario: Single node
    When executing query:
      """
      MATCH (v:player {name: "Yao Ming"}) RETURN v
      """
    Then the result should be, in any order, with relax comparison:
      | v            |
      | ("Yao Ming") |

  Scenario: One step
    When executing query:
      """
      MATCH (v1:player{name: "LeBron James"}) -[r]-> (v2)
      RETURN type(r) AS Type, v2.name AS Name
      """
    Then the result should be, in any order:
      | Type    | Name        |
      | "like"  | "Ray Allen" |
      | "serve" | "Cavaliers" |
      | "serve" | "Heat"      |
      | "serve" | "Lakers"    |
      | "serve" | "Cavaliers" |

Case With an Execution Plan

Scenario: push edge props filter down
  When profiling query:
    """
    GO FROM "Tony Parker" OVER like
    WHERE like.likeness IN [v IN [95,99] WHERE v > 0]
    YIELD like._dst, like.likeness
    """
  Then the result should be, in any order:
    | like._dst       | like.likeness |
    | "Manu Ginobili" | 95            |
    | "Tim Duncan"    | 95            |
  And the execution plan should be:
    | id | name         | dependencies | operator info                                               |
    | 0  | Project      | 1            |                                                             |
    | 1  | GetNeighbors | 2            | {"filter": "(like.likeness IN [v IN [95,99] WHERE (v>0)])"} |
    | 2  | Start        |              |                                                             |

Each feature file is composed of scenarios, which split the test units into separate parts. Each scenario contains a number of steps that define the inputs and expected outputs of the test. These steps start with the following keywords:

  • Given
  • When
  • Then

The table in a Then step must have a header line even if there are no data rows.
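
For example, a minimal sketch (the query below is hypothetical, reusing the nba space from the examples above) of a Then table that declares only the header:

  Scenario: Empty result still declares the header
    When executing query:
      """
      MATCH (v:player {name: "No Such Player"}) RETURN v
      """
    Then the result should be, in any order:
      | v |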

Background holds the steps shared by all scenarios in the feature. Scenarios are executed in parallel.

Note that for cases that contain execution plans, it is mandatory to fill the id column.

Case With a New Nebula Cluster

In some special cases, we need to run tests against a freshly created nebula cluster.

For example:

Feature: Nebula service termination test
  Scenario: Basic termination test
    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged and 0 listener
    When the cluster was terminated
    Then no service should still running after 4s

Feature: Example
  Scenario: test with disable authorize
    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged and 0 listener:
      """
      graphd:enable_authorize=false
      """
    When executing query:
      """
      CREATE USER user1 WITH PASSWORD 'nebula';
      CREATE SPACE s1(vid_type=int)
      """
    And wait 3 seconds
    Then the execution should be successful
    When executing query:
      """
      GRANT ROLE god on s1 to user1
      """
    Then the execution should be successful

  Scenario: test with enable authorize
    Given a nebulacluster with 1 graphd and 1 metad and 1 storaged and 0 listener:
      """
      graphd:enable_authorize=true
      """
    When executing query:
      """
      CREATE USER user1 WITH PASSWORD 'nebula';
      CREATE SPACE s1(vid_type=int)
      """
    And wait 3 seconds
    Then the execution should be successful
    When executing query:
      """
      GRANT ROLE god on s1 to user1
      """
    Then an PermissionError should be raised at runtime: No permission to grant/revoke god user.

This will install a new nebula cluster and create a session connected to it.

Format

To make your changed files easy for reviewers to check, please format your feature files before creating a pull request. Use the following command:

$ make fmt