---
id: quality_review
toc_min_heading_level: 2
toc_max_heading_level: 4
---

# Quality Review
## Purpose of the document

The purpose of this document is to review and assess the quality of the software in Archipelo, most importantly the `top` repository, its configuration, and the tests we store there.

The review reflects a specific point in time, and the data included here may become outdated, especially the parts that change frequently.

The review gives an overview of quality as a whole, processes the current data, and provides recommendations that should contribute to higher product quality.
**Review date:** Mar 30, 2023
**Author:** Mateusz Nojek
## Document structure

First, we give an overview of the current state, listing all of the tests and configurations that contribute to software quality. See the Quality Overview section.

Then, we give recommendations about what is missing and where we need to pay more attention, under each test type described.

Lastly, we summarize and present the short- and long-term plans that aim to improve the quality. See the Plans section.
## Quality Overview

### Integration Tests

#### Go Integration Tests
Parameter | Value |
---|---|
Language | Go |
Amount | 4 |
Time of execution | ~70 sec |
Frequency | Per each PR |
Environment | Local with Docker |
##### Location

- `go/events/client_clickhouse_integ_test.go`
- `go/search/index_utils_integ_test.go`

Matching pattern: `func Test*Integration()`
##### CI workflow

Build Backend / Test: Integration / Integration tests

Required: Yes
##### Recommendation

The search tests need to be reviewed. Even though they are no longer relevant to our current product, the code is still there and may be used in the future, so the tests need to stay and remain enabled.
### Unit Tests

#### Browser Unit Tests
Parameter | Value |
---|---|
Language | TypeScript |
Library | Jest |
Amount | 313 |
Time of execution | 3 min 18 sec |
Frequency | Per each PR |
Environment | Local |
##### Locations

- `jsapps/libs/api/src/__tests__/api.ts` (6 tests, 12 sec)
- `jsapps/libs/utils/**/__tests__/*.ts` (68 tests, 23 sec)
- `jsapps/apps/browser/**/__tests__/*.ts` (192 tests, 44 sec)
- `jsapps/libs/ui/**/__tests__/*.ts*` (47 tests, 119 sec)
##### CI workflow

Browser Extension Build v2 / Validate Code / Run Unit Tests for Chrome Extension

Required: Yes
#### WebApp Unit Tests
Parameter | Value |
---|---|
Language | TypeScript |
Library | Jest |
Amount | 320 |
Time of execution | 7 min 44 sec |
Frequency | Per each PR |
Environment | Local |
##### Locations

- `jsapps/libs/api/src/__tests__/api.ts` (6 tests, 20 sec)
- `jsapps/apps/vscode/src/**/__tests__/*.ts` (7 tests, 33 sec)
- `jsapps/libs/utils/**/__tests__/*.ts` (68 tests, 49 sec)
- `jsapps/apps/browser/**/__tests__/*.ts` (192 tests, 118 sec)
- `jsapps/libs/ui/**/__tests__/*.ts*` (47 tests, 173 sec)
##### CI workflow

Build WebApp / Build: JS Assets / Run JS Unit Tests for WebApp and Generate Coverage

Required: Yes
##### Recommendation

Some of these tests are also triggered for the Browser build, which means they are executed several times, wasting time and compute in GitHub CI. If we guard their execution with a condition, we can save around 3 minutes of execution time; a sketch of one option follows the list. The affected tests are:

- `jsapps/libs/api/src/__tests__/api.ts`
- `jsapps/libs/utils/**/__tests__/*.ts`
- `jsapps/apps/browser/**/__tests__/*.ts`
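One way to express such a condition at the Jest level, rather than in the workflow itself, is a per-build Jest config that skips the suites the Browser build already runs. This is only a sketch; the file name and paths below are hypothetical, not the actual configs in the repo.

```ts
// jest.config.webapp.ts — hypothetical per-build config; paths are
// illustrative. The shared api/utils/browser suites are assumed to be
// executed by the Browser Extension workflow already.
import type { Config } from 'jest';

const config: Config = {
  // Only WebApp-specific suites run under this config.
  testMatch: ['<rootDir>/apps/webapp/**/__tests__/**/*.ts'],
  // Skip the suites the Browser build has already executed.
  testPathIgnorePatterns: [
    '<rootDir>/libs/api/',
    '<rootDir>/libs/utils/',
    '<rootDir>/apps/browser/',
  ],
};

export default config;
```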
#### VS Code Unit Tests
Parameter | Value |
---|---|
Language | TypeScript |
Library | Jest |
Amount | 92 |
Time of execution | 29 sec |
Frequency | Per each PR |
Environment | Local |
##### Locations

- `jsapps/libs/api/src/__tests__/api.ts` (6 tests, 7 sec)
- `jsapps/vscode/**/__tests__/*.ts` (12 tests, 10 sec)
- `jsapps/libs/utils/**/__tests__/*.ts` (74 tests, 12 sec)
##### CI workflow

VS Code Extension v2 / Validate Code / Run Tests

Required: Yes
##### Recommendation

Some of these tests are also triggered for the Browser build, which means they are executed several times, wasting time and compute in GitHub CI. If we guard their execution with a condition (see the sketch in the WebApp recommendation above), we can save around 20 seconds of execution time. The affected tests are:

- `jsapps/libs/api/src/__tests__/api.ts`
- `jsapps/libs/utils/**/__tests__/*.ts`
#### Go Unit Tests (+ API)
Parameter | Value |
---|---|
Language | Go |
Amount | 528 |
Time of execution | 3 min 40 sec |
Frequency | Per each PR |
Environment | Local |
##### Location

- `go/**/*_test.go`

Matching pattern: `func Test*`
##### CI workflow

Build Backend / Test: Go Unit Tests / Go Test (Dev)

Required: Yes
### E2E Tests

#### Browser (OneBox Chrome Extension) E2E Tests
Parameter | Value |
---|---|
Language | TypeScript |
Library | Playwright |
Amount | 16 |
Time of execution | skipped (tests are currently disabled) |
Frequency | Per each PR |
Environment | Local and QA |
##### Location

- `jsapps/tests/browser/e2e`

Matching pattern: `jsapps/tests/browser/e2e/*.spec.ts`
##### Recommendation

Review the tests and adjust them to the latest version of the OneBox extension, to make sure they are safe to be turned on again. Remove outdated tests.

Decide which features need to be covered and write new test cases with automation scripts.
##### CI workflow

Browser Extension Build v2 / E2E Tests: Browser Extension / Run E2E tests / E2E tests

Required: Yes
#### WebApp E2E Tests
Parameter | Value |
---|---|
Language | TypeScript |
Library | Playwright |
Amount | 26 |
Time of execution | 1 min |
Frequency | Per each PR |
Environment | Local and QA |
##### Location

- `jsapps/tests/webapp/e2e/*.spec.ts`
##### CI workflow

Build WebApp / E2E Tests: WebApp

Required: Yes
##### Recommendation

Check whether there are gaps in coverage for basic functionalities and decide which tests need to be added.
#### VS Code E2E Tests
Parameter | Value |
---|---|
Language | TypeScript |
Library | Playwright |
Amount | 1 |
Time of execution | 17 sec |
Frequency | Per each PR |
Environment | Local and QA |
##### Location

- `jsapps/tests/vscode/e2e/*spec.ts` (settings view)
##### CI workflow

VS Code Extension / E2E Tests: VSCode Extension / Run E2E Tests / E2E tests

Required: Yes
##### Recommendation

We have only one simple test here, which checks whether the 'Sign in' button is visible. We need to add more tests and cover the current functionalities of the VS Code extension.
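For reference, a check like the existing one typically looks as follows in Playwright. This is a minimal sketch; the URL and locator are placeholders, not the real test code.

```ts
// signin.spec.ts — minimal sketch of a visibility check; the URL and
// locator below are hypothetical placeholders.
import { test, expect } from '@playwright/test';

test("'Sign in' button is visible", async ({ page }) => {
  await page.goto('http://localhost:3000/settings'); // placeholder URL
  await expect(page.getByRole('button', { name: 'Sign in' })).toBeVisible();
});
```

New tests for the extension's current functionalities would follow the same shape, one spec per covered feature.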
### Static Analysis

#### PR Title Checker

We have a common guideline for naming our PRs that complies with the Conventional Commits specification. It keeps the commit history consistent and enables us to automate releases and release notes generation, though we have not yet set that automation up.
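As an illustration of what the checker enforces, a Conventional Commits title matches a pattern along these lines. This is a simplified sketch; the actual `pr-title-checker.yml` workflow may use a different rule set.

```ts
// Simplified sketch of a Conventional Commits title check; the real
// pr-title-checker workflow may differ. Types are the standard set
// from the Conventional Commits / Angular convention.
const CONVENTIONAL_TITLE =
  /^(feat|fix|docs|style|refactor|perf|test|build|ci|chore|revert)(\([\w-]+\))?!?: .+$/;

const titles = ['feat(api): add branch-digest endpoint', 'Fixed some stuff'];
for (const title of titles) {
  console.log(`${title} -> ${CONVENTIONAL_TITLE.test(title) ? 'ok' : 'rejected'}`);
}
```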
##### Location

- `.github/workflows/pr-title-checker.yml`
##### Recommendation

We need to start thinking about making use of Semantic Release to help us generate release notes. Conventional Commits was the first step towards it, and we need to find time to continue down this path to more automated releases.
#### Dependabot

Dependabot checks whether there are new versions of the dependencies we are using. The daily check runs over the following locations:

- npm (`/jsapps`)
- gomod (`/go`)
- github-actions (`/`)
- pip (`/python`)
##### Location

- `.github/dependabot.yml`
#### Commit signing using GPG keys

We verify the authenticity of our commits using GPG keys, to make sure that only authorized developers merge changes to our repository.
#### Pre-commit hooks

Pre-commit hooks are run to statically check our code. They validate the format of different types of files and check for consistency. The hooks run before each commit and in our CI, per each PR.

Hooks that we have:

- `check-yaml` (validate YAML file syntax)
- `end-of-file-fixer` (ensure one blank line at the end of files)
- `trailing-whitespace` (remove trailing whitespace)
- `check-merge-conflict` (check that merge conflicts are not being committed)
- `detect-private-key` (check for the existence of private keys)
- `forbid-tabs` (fail if tabs are used in the project, excluding Go files)
- `markdownlint` (linter for Markdown files)
- `prettier` (code formatter for JS/TS code)
- `eslint` (ESLint linter for JS/TS code)
- `terraform_fmt` (Terraform formatter for TF code)
##### Location

- `.pre-commit-config.yaml`
##### Recommendation

- Remove Python hooks when the Python code is moved to ML-related repositories
- Turn on `golangci-lint` and possibly `go-fmt` hooks for Go code
### API Testing

#### QA API Tests
Parameter | Value |
---|---|
Language | TypeScript |
Library | Playwright |
Amount | 10 |
Time of execution | 2 min 45 sec |
Frequency | Per each PR (after merge to `main`) |
Environment | QA |
##### Location

- `jsapps/tests/api`
##### CI workflow

Build WebApp / API Tests / Run API tests / API tests

Required: Yes
Tested endpoints:
- history
- events
- members
- organizations
- profile
- settings
- repos
- branch-digest
- commit-digest
##### Recommendation

**More endpoints should be covered**

There are 94 functions that expose our endpoints in `jsapps/libs/api/src/api.ts`, and we are testing only 10 of them. Not all of them need to be tested, because some are deprecated and related to the previous product (mostly for packages). Still, they should be reviewed, and any commonly used endpoint should be covered by tests.

**Validate fields from the responses**
Currently, the tests mostly check whether the endpoint responds in a reasonable time. There are some simple assertions on whether the most important fields are included in the response, but there is room for improvement here. We can, for example, check the following (a sketch follows the list):

- whether the value of each attribute in the response has the expected type
- whether the endpoint responds properly to invalid parameters and values
- whether the endpoint does not return data that should not be accessible to the user
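A minimal sketch of what such assertions could look like with Playwright's `request` fixture. The endpoint path, field names, and the 400 error contract below are assumptions for illustration, not the real test code.

```ts
// Sketch of stricter response assertions; paths and fields are
// hypothetical, and a configured baseURL is assumed.
import { test, expect } from '@playwright/test';

test('profile endpoint returns typed fields', async ({ request }) => {
  const res = await request.get('/api/profile'); // hypothetical path
  expect(res.ok()).toBeTruthy();
  const body = await res.json();
  expect(typeof body.id).toBe('string'); // assumed field names
  expect(typeof body.email).toBe('string');
});

test('profile endpoint rejects invalid parameters', async ({ request }) => {
  const res = await request.get('/api/profile?limit=not-a-number');
  expect(res.status()).toBe(400); // assumed error contract
});
```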
**Tests should run before changes are merged to `main`**

This one is problematic because it requires services running on some pre-QA environment, which we do not have. Currently, the API tests run on the QA environment after the PR is merged to the `main` branch, which means we receive feedback about issues only after the changes are merged, while we should have it before. Such an approach would require a connection to the GCP services and the database. We need to think about how we can run the API tests in a local environment.
### Performance Tests
Parameter | Value |
---|---|
Language | JavaScript |
Libraries | Apache JMeter, Grafana k6 |
Amount | 9 tests run with 4 variations |
Frequency | Run manually on demand |
Date of execution | August 2022 |
Environment | Production |
##### Location

- https://github.com/Archipelo/top/tree/main/performance/tests
##### Summary
- Up to 1000 concurrent users against our API endpoints
- Mostly tested package-related endpoints (not used anymore)
Full results with explanations and the long-term plans are available here.
##### Recommendation

These results are no longer relevant to our current product due to the shift in strategy. However, we should review them once more before approaching another round of performance testing, and look for potential improvements in how we write and run the tests more efficiently.

The current poor performance of our product can be noticed without performance testing, so I would advise not to bother with this type of testing now, and to focus on improving performance using common sense and known techniques until there is a need for a more sophisticated approach.

When the product feels good and responsive during internal testing, we should conduct another round of performance testing to see how it behaves with a large number of simultaneous users. It should be done before the "go to market" event.
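For orientation, the shape of such a load test in k6 is roughly the following. This is a minimal sketch; the endpoint URL is a placeholder, and the duration is an assumption, not one of the real scripts in `performance/tests`.

```ts
// load.ts — minimal k6 sketch; URL and duration are assumptions.
import http from 'k6/http';
import { check } from 'k6';

// Up to 1000 virtual users, matching the previous round's peak.
export const options = { vus: 1000, duration: '1m' };

export default function () {
  const res = http.get('https://qa.example.com/api/profile'); // placeholder
  check(res, { 'status is 200': (r) => r.status === 200 });
}
```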
## Plans

Specific recommendations for each type of testing are included right under the place where that type is described in this document. Here, I summarize them; some are copied so that they are all in one place.
### Current recommendations

- Review the Go integration tests and decide whether to keep them for code-maintenance purposes, in case we use this code in the future, or to delete them.
- Move the ML integration tests to separate ML repos along with the related code, to keep the tests maintained.
- Review the WebApp and Browser GitHub workflows for running unit tests and make sure the same tests are not executed multiple times.
- Review the Browser E2E tests and adjust them to the current version of the extension, to make sure they are safe to be turned on again (they are skipped now). Add more tests to cover basic functionalities.
- Check whether there are gaps in coverage for basic functionalities and decide which tests need to be added to the WebApp E2E tests.
- Define and add more automated E2E tests for the VS Code extension. Currently, we have only one test there.
- For pre-commit hooks, remove the Python hooks when the Python code is moved to ML-related repositories, and turn on `golangci-lint` and possibly `go-fmt` hooks for Go code.
### Plans for the future

#### Short-term plans

##### Prepare more API tests for uncovered endpoints

There are 94 functions that expose our endpoints in `jsapps/libs/api/src/api.ts`, and we are testing only 10 of them in our API tests. Not all of them need to be tested, because some are deprecated and related to the previous product (mostly for packages). Still, they should be reviewed, and any commonly used endpoint should be covered by tests.
##### Validate fields from the responses in API tests

Currently, the tests mostly check whether the endpoint responds in a reasonable time. There are some simple assertions on whether the most important fields are included in the response, but there is room for improvement here. We can, for example, check:

- whether the value of each attribute in the response has the expected type
- whether the endpoint responds properly to invalid parameters and values
- whether the endpoint does not return data that should not be accessible to the user
##### Increase unit test coverage

We should hit at least an acceptable level of unit test coverage, which is 60% for each part of the product (frontend, backend). ML can be excluded for now, since we are not focusing on this area anymore (at least for now). For both Go and JavaScript we have around 35% coverage, so we are still far from the goal. To get there, we need to make sure we unit test every new piece of software, and also spend some time covering existing parts whenever we change them. A sketch of how the goal could be enforced follows.

It would be great if we hit the 60% goal by the end of the next quarter (Q2).
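On the JavaScript side, Jest can enforce such a floor via `coverageThreshold`, which fails the run when coverage drops below it. The snippet below is a hypothetical shared config, not one of our actual files.

```ts
// jest.config.ts — hypothetical snippet enforcing the 60% floor; Jest
// fails the run when global coverage drops below these thresholds.
import type { Config } from 'jest';

const config: Config = {
  collectCoverage: true,
  coverageThreshold: {
    global: { statements: 60, branches: 60, functions: 60, lines: 60 },
  },
};

export default config;
```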
##### SQL tests

Our API endpoints rely heavily on the logic of the SQL queries hitting ClickHouse. A small flaw in a query may go unnoticed and usually results in our product not working in some places. We do not have any kind of testing that covers this part, so we are very vulnerable to bugs in the SQL queries.

I imagine this as a kind of integration test run against a local instance of ClickHouse, with hardcoded data that is verified after the query is executed. This type of testing requires maintaining not only the tests but also the fixture data. A rough sketch follows.
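Sketched here in TypeScript with the `@clickhouse/client` package (Go may be the more natural home, next to the existing ClickHouse integration test); the table name, fixture rows, and query are invented for illustration.

```ts
// Sketch of a SQL-level test against a local ClickHouse instance; the
// table, fixture data, and query under test are illustrative assumptions.
import { createClient } from '@clickhouse/client';

async function main() {
  const client = createClient({ url: 'http://localhost:8123' });

  await client.command({
    query:
      'CREATE TABLE IF NOT EXISTS events_test (user String, n UInt32) ENGINE = Memory',
  });
  await client.insert({
    table: 'events_test',
    values: [
      { user: 'alice', n: 2 },
      { user: 'alice', n: 3 },
    ],
    format: 'JSONEachRow',
  });

  // The query under test: verify its result against the known fixtures.
  const result = await client.query({
    query: 'SELECT user, sum(n) AS total FROM events_test GROUP BY user',
    format: 'JSONEachRow',
  });
  const rows = await result.json<{ user: string; total: string }>();
  // UInt64 aggregates are serialized as strings in JSON output by default.
  console.assert(rows[0].total === '5', 'expected aggregated total of 5');

  await client.close();
}

main();
```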
#### Long-term plans

##### Implement semantic-release

We need to start thinking about making use of Semantic Release to help us generate release notes. Conventional Commits was the first step towards it, and we need to find time to continue down this path to more automated releases.

More info from a short research can be found in issue #4886.
##### Enable API tests to run before changes are merged to `main`

This one is problematic because it requires services running on some pre-QA environment, which we do not have. Currently, the API tests run on the QA environment after the PR is merged to the `main` branch, which means we receive feedback about issues only after the changes are merged, while we should have it before. Such an approach would require a connection to the GCP services and the database. We need to think about how we can run the API tests in a local environment.
##### Increase unit test coverage

The next unit test coverage threshold is 75%, labeled as "commendable". To improve the quality of our software, decrease the risk of regressions and bugs, and make the code easier to refactor, we should aim to hit that goal by the end of Q3, for each part of the software we will have by then.
##### Performance tests

The current poor performance of our product can be noticed without performance testing, so I would advise not to bother with this type of testing now, and to focus on improving performance using common sense and known techniques until there is a need for a more sophisticated approach.

When the product feels good and responsive during internal testing, we should conduct another round of performance testing to see how it behaves with a large number of simultaneous users. It should be done before the "go to market" event.
##### Infrastructure tests for Terraform

Some time ago we discussed different ways to test Terraform code. It would probably require an additional environment to check whether all Terraform commands work properly and result in the expected infrastructure. This kind of testing is quite expensive, because it requires a new environment. Something to discuss when we are ready.