Tutorial: Testing Your API
Write automated tests for your Lua scripts using the SoliDB test runner. Learn the test framework, assertions, and HTTP client.
Test Runner Overview
The solidb scripts test command discovers and runs Lua test files that make real HTTP requests to your API endpoints.
- describe/it: familiar test structure
- http client: built-in HTTP requests
- expect(): expressive assertions
Step 1: Setup Test Environment
Create a .env.test file for test-specific configuration.
# Test environment configuration
SOLIDB_HOST=localhost
SOLIDB_PORT=6745
SOLIDB_DATABASE=myapp_test   # Separate test database
SOLIDB_SERVICE=default
SOLIDB_API_KEY=your_test_token
Create the tests directory structure:
scripts/
├── products.lua
├── products/
│   └── _id.lua
└── tests/                  # Test files go here
    ├── products_test.lua
    └── auth_test.lua
Note: The tests/ folder is automatically excluded from sync. Test files are never pushed to the server.
Step 2: Write Your First Test
Create tests/products_test.lua to test the products API.
-- Test variables to store state between tests
local created_id = nil

describe("Products API", function()
  it("should create a new product", function()
    local response = http.post("/products", {
      name = "Test Widget",
      price = 19.99,
      category = "test"
    })
    expect(response.status).to_equal(201)
    expect(response.body.product).to_exist()
    expect(response.body.product.name).to_equal("Test Widget")
    expect(response.body.product._key).to_exist()

    -- Save the ID for later tests
    created_id = response.body.product._key
  end)

  it("should list products", function()
    local response = http.get("/products")
    expect(response.status).to_equal(200)
    expect(response.body.products).to_exist()
    expect(response.body.count).to_be_greater_than(0)
  end)

  it("should filter products by category", function()
    local response = http.get("/products?category=test")
    expect(response.status).to_equal(200)
    expect(response.body.products).to_exist()

    -- All returned products should have the test category
    for _, product in ipairs(response.body.products) do
      expect(product.category).to_equal("test")
    end
  end)

  it("should get a product by ID", function()
    local response = http.get("/products/" .. created_id)
    expect(response.status).to_equal(200)
    expect(response.body.name).to_equal("Test Widget")
  end)

  it("should update a product", function()
    local response = http.put("/products/" .. created_id, {
      price = 24.99
    })
    expect(response.status).to_equal(200)
    expect(response.body.product.price).to_equal(24.99)
  end)

  it("should delete a product", function()
    local response = http.delete("/products/" .. created_id)
    expect(response.status).to_equal(200)
    expect(response.body.message).to_contain("deleted")
  end)

  it("should return 404 for deleted product", function()
    local response = http.get("/products/" .. created_id)
    expect(response.status).to_equal(404)
  end)
end)
Step 3: Run Tests
Run All Tests
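With the test environment configured, invoke the runner named in the overview from the project root:

```shell
# Discovers and runs the Lua test files under scripts/tests/
solidb scripts test
```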
CLI Options
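No flags are documented in this tutorial, so treat the following as illustrative assumptions rather than confirmed options, and use the command's built-in help to see what your SoliDB version actually supports:

```shell
# Hypothetical invocations -- the file-path argument is an assumption:
solidb scripts test tests/products_test.lua   # run a single test file
solidb scripts test --help                    # list the real options for your version
```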
HTTP Client Reference
The http module is available in all test files for making requests.
- http.get(path, headers?) -> response

    local response = http.get("/products")
    local response = http.get("/products?category=test")
    local response = http.get("/products", { ["X-Custom"] = "value" })

- http.post(path, body?, headers?) -> response

    local response = http.post("/products", { name = "Widget", price = 9.99 })

- http.put(path, body?, headers?) -> response

    local response = http.put("/products/123", { price = 14.99 })

- http.patch(path, body?, headers?) -> response

- http.delete(path, headers?) -> response
Response Object
{
status = 200, -- HTTP status code
ok = true, -- true if 2xx status
body = { ... }, -- Parsed JSON body
headers = { -- Response headers
["content-type"] = "application/json",
...
}
}
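The response fields can be asserted on directly. A short sketch using the framework globals described in this tutorial (the exact content-type value returned by your endpoints is an assumption):

```lua
it("should return a successful JSON response", function()
  local response = http.get("/products")

  -- ok is shorthand for "status is in the 2xx range"
  expect(response.ok).to_be_true()

  -- Header names are lowercase in the response table
  expect(response.headers["content-type"]).to_contain("application/json")
end)
```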
Assertions Reference
Use expect(value) followed by a matcher.
- expect(value).to_equal(expected): strict equality check
- expect(value).to_exist(): value is not nil
- expect(value).to_be_nil(): value is nil
- expect(value).to_be_true() / to_be_false(): boolean checks
- expect(string).to_contain(substring): string contains the substring
- expect(string).to_match(pattern): string matches the regex pattern
- expect(number).to_be_greater_than(n): number is greater than n
- expect(number).to_be_less_than(n): number is less than n
- expect(fn).to_throw(): calling fn raises an error
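The matchers that do not appear in the earlier examples work the same way. A sketch (the header value being matched is an assumption about your endpoints):

```lua
it("covers the pattern and error matchers", function()
  local response = http.get("/products")

  -- to_match applies the pattern to the string value
  expect(response.headers["content-type"]).to_match("application/json")

  -- to_throw takes a function and passes only if calling it raises an error
  expect(function()
    error("boom")
  end).to_throw()
end)
```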
Test Hooks
Setup and teardown functions for test suites.
describe("With Hooks", function()
  -- Run once before all tests in this describe
  before_all(function()
    -- Create test data
    http.post("/setup", { seed = true })
  end)

  -- Run before each test
  before(function()
    print("Starting test...")
  end)

  -- Run after each test
  after(function()
    print("Test complete")
  end)

  -- Run once after all tests
  after_all(function()
    -- Clean up test data
    http.post("/cleanup")
  end)

  it("test case 1", function()
    -- ...
  end)

  it("test case 2", function()
    -- ...
  end)
end)
Testing Validation Errors
Test that your API correctly rejects invalid input.
describe("Validation Errors", function()
  it("should reject missing name", function()
    local response = http.post("/products", {
      price = 9.99 -- name is missing
    })
    expect(response.status).to_equal(400)
    expect(response.body.error).to_contain("Name")
  end)

  it("should reject negative price", function()
    local response = http.post("/products", {
      name = "Widget",
      price = -5
    })
    expect(response.status).to_equal(400)
    expect(response.body.error).to_contain("price")
  end)

  it("should reject empty name", function()
    local response = http.post("/products", {
      name = "",
      price = 9.99
    })
    expect(response.status).to_equal(400)
  end)
end)
Best Practices
Use a Test Database
Configure .env.test to use a separate database for testing.
Clean Up Test Data
Delete created resources in after_all hooks or at the start of test runs.
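Using the hooks described earlier, cleanup can be sketched like this (tracking created keys in a suite-local table is an illustrative pattern, not a framework feature):

```lua
describe("Products API with cleanup", function()
  local created_keys = {}

  it("creates a product", function()
    local response = http.post("/products", { name = "Temp", price = 1.0 })
    expect(response.status).to_equal(201)
    table.insert(created_keys, response.body.product._key)
  end)

  -- Delete everything this suite created, even if some assertions failed
  after_all(function()
    for _, key in ipairs(created_keys) do
      http.delete("/products/" .. key)
    end
  end)
end)
```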
Test Edge Cases
Include tests for invalid input, missing data, and error conditions.
Keep Tests Independent
Each test should be able to run in isolation without depending on others.
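In practice this means each test creates the data it needs instead of reading state left behind by an earlier it block (contrast with the shared created_id in Step 2). A sketch:

```lua
it("should update a product", function()
  -- Arrange: create a fresh product inside the test itself
  local created = http.post("/products", { name = "Solo", price = 5.0 })
  local key = created.body.product._key

  -- Act and assert against the resource this test owns
  local response = http.put("/products/" .. key, { price = 6.0 })
  expect(response.status).to_equal(200)

  -- Clean up so other tests see no leftover data
  http.delete("/products/" .. key)
end)
```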
Run Tests in CI/CD
Integrate solidb scripts test into your deployment pipeline.
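Assuming the runner exits nonzero when any test fails (conventional for test runners, but not confirmed here), the command can gate a pipeline stage directly:

```shell
# Example CI step: the runner's exit code fails the build on any test failure
solidb scripts test
```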