In Part 1, I introduced the evolution of integration testing for Nebula Graph. Now we will add a test case into the test set and run all the test cases …
In BDD-Based Integration Testing Framework for Nebula Graph: Part 1, I introduced the evolution of integration testing for Nebula Graph. In this article, I will introduce how to add a test case into the test set and run all the test cases successfully.

At the beginning of building the testing framework for Nebula Graph 2.0, we developed some tool classes to help the testing framework quickly start and stop a single-node Nebula Graph cluster, including checking for port conflicts and modifying some of the configurations. In the original execution procedure, the scripts generated by cmake were needed to run even a single test case, and pytest options had to be passed through to the pytest.main function, which made the framework inconvenient for users. What we want to achieve is executing a test case right where it is located. In this round of improvements to the testing framework, apart from the changes to the program entry, most of the originally encapsulated logic is reused.

A lot of test cases have been accumulated for Nebula Graph, so single-process execution can no longer meet the requirements of fast iteration. We tried several parallel test executor plugins and, considering compatibility requirements, finally chose pytest-xdist to accelerate the testing procedure.

Pytest supports fixtures across these five scopes: session, module, class, package, and function. However, we need a global fixture to start and initialize the Nebula Graph services. Currently, a session-scoped fixture, the highest level, is executed once per runner. For example, if there are eight runners, eight Nebula Graph database services are started, which is not what we want. According to the documentation of pytest-xdist, a lock file is needed for inter-process communication between the runners.

To keep the control logic simple, we separated the logic for starting, stopping, and preparing the services from the process of executing the tests. That is, a single step is used to start Nebula Graph, and when some tests fail, Nebula Console can be connected to the Nebula Graph database under test for validation and debugging.

Before the new framework, data was imported into Nebula Graph by executing an entire INSERT statement in nGQL, which caused several problems. To solve them, referring to the implementation of Nebula Importer, we separated the importing logic from the dataset completely and implemented a new importing module in Python. However, so far only CSV files are supported, and one CSV file can store the data of only one tag or edge type. With the new importing module, the structure of the datasets for Nebula Graph testing becomes clear.
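To make the importing idea more concrete, below is a simplified, hypothetical sketch of how one CSV file could be turned into batched nGQL INSERT statements for a single tag. The function names and the CSV layout (first column as vertex id, header row as property names, all values treated as strings) are assumptions for illustration, not the framework's actual interface.

```python
# A simplified, hypothetical sketch: one CSV file holds the data of exactly one
# tag, and its rows are turned into batched nGQL INSERT statements.
import csv


def build_insert(tag, props, rows):
    # Assemble one INSERT VERTEX statement for a batch of rows.
    return "INSERT VERTEX {}({}) VALUES {};".format(tag, ", ".join(props), ", ".join(rows))


def csv_to_insert_stmts(csv_path, tag, batch_size=128):
    """Yield INSERT VERTEX statements for one tag from one CSV file.

    Assumes the first column is the vertex id and the header row lists the
    property names; every property value is emitted as a string for simplicity.
    """
    with open(csv_path, newline="") as f:
        reader = csv.reader(f)
        props = next(reader)[1:]  # header row: vid, prop1, prop2, ...
        rows = []
        for record in reader:
            vid, values = record[0], record[1:]
            rows.append('"{}":({})'.format(vid, ", ".join('"{}"'.format(v) for v in values)))
            if len(rows) == batch_size:
                yield build_insert(tag, props, rows)
                rows = []
        if rows:
            yield build_insert(tag, props, rows)
```

An analogous function would handle the CSV files of edge types with INSERT EDGE statements.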
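Going back to the parallel-execution issue: the pytest-xdist documentation suggests coordinating the runners through a lock file, and a session-scoped fixture for starting Nebula Graph could look roughly like the sketch below. Only the first worker actually starts the service; the others read its connection information from a shared file. The start_nebula_service helper and the returned connection dictionary are assumptions for illustration, not the framework's real code.

```python
# conftest.py -- a minimal sketch of a session-scoped fixture coordinated across
# pytest-xdist workers through a file lock, following the pattern shown in the
# pytest-xdist documentation.
import json

import pytest
from filelock import FileLock


def start_nebula_service():
    # Placeholder: the real framework would launch a single-node Nebula Graph
    # cluster here and return its connection information.
    return {"host": "127.0.0.1", "port": 9669}


@pytest.fixture(scope="session")
def nebula_service(tmp_path_factory, worker_id):
    if worker_id == "master":
        # Not running under xdist: start the service directly.
        return start_nebula_service()

    # Temporary directory shared by all workers of this test session.
    root_tmp_dir = tmp_path_factory.getbasetemp().parent
    info_file = root_tmp_dir / "nebula_service.json"

    with FileLock(str(info_file) + ".lock"):
        if info_file.is_file():
            # Another worker already started Nebula Graph: reuse it.
            service = json.loads(info_file.read_text())
        else:
            service = start_nebula_service()
            info_file.write_text(json.dumps(service))
    return service
```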
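Finally, as a rough picture of why the original entry point was inconvenient: the cmake-generated wrapper essentially forwarded command-line options to pytest.main, along the lines of the hypothetical sketch below (not the real generated script). The new framework moves away from this so that a test case can be executed directly where it is located.

```python
# run_tests.py -- a hypothetical reconstruction of the old cmake-generated
# wrapper: pytest options had to be forwarded to pytest.main instead of
# invoking pytest directly from the test case's directory.
import sys

import pytest

if __name__ == "__main__":
    # Forward any command-line options (e.g. -m, -k, -s) straight to pytest.
    sys.exit(pytest.main(sys.argv[1:]))
```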