Hey guys!
I recently took part in a live stream on the dotNET channel together with my friend Renato Groffe, a legend in the .NET community, with the goal of presenting, in a practical way, the use of K6 in load-testing scenarios involving relational databases. The idea was to show that the tool works not only for APIs and web applications, but also as a relevant component in performance, capacity and data-generation tests in data environments.
Throughout the live, we explored technical concepts, architectural decisions, limitations we ran into, and best practices observed when using K6 in real projects.
Registration link:
https://www.meetup.com/pt-BR/dotnet-sao-paulo/events/311572390/
Stream link:
Motivation for using K6
K6 was chosen as the main tool for these tests mainly because of its simplicity of use combined with a good level of performance. Unlike more traditional tools, K6 lets you define test scenarios as JavaScript scripts, which significantly reduces the learning curve.
In addition, K6 is written in Go, a language widely known for its strong support for parallelism and its efficiency in concurrent execution, both important characteristics in load-testing scenarios.
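To give a sense of that learning curve, here is a minimal k6 script of the classic HTTP kind. The target URL, user count and duration are illustrative, not figures from the live:

```javascript
// Minimal k6 script: 10 virtual users hitting an endpoint for 30 seconds.
import http from 'k6/http';
import { check, sleep } from 'k6';

export const options = {
  vus: 10,          // simultaneous virtual users
  duration: '30s',  // total test duration
};

export default function () {
  // test.k6.io is k6's public demo site, used here only as a placeholder.
  const res = http.get('https://test.k6.io');
  check(res, { 'status is 200': (r) => r.status === 200 });
  sleep(1); // think time between iterations
}
```

Running it is a single command: `k6 run script.js`.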
Load testing beyond APIs and web applications
One point emphasized during the live was extending the use of K6 beyond API or UI testing. In many scenarios, especially in data projects, the bottleneck is not the application but the database.
Using K6 to directly test relational databases allows you to evaluate aspects such as:
- Data ingestion capacity
- Behavior under concurrency
- Average operation execution time
- Infrastructure limits before service degradation
These tests are especially useful in initial load scenarios, batch processing, and production environment simulations.
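As a sketch of what testing a database directly can look like, the script below uses the xk6-sql extension to run a query under load. The connection string, driver import path and the `orders` table are placeholders, and the exact API varies between xk6-sql versions, so treat this as an outline to check against the extension's documentation:

```javascript
// Sketch of a read test against PostgreSQL via xk6-sql.
// Requires a k6 binary built with the SQL extension (see the xk6 section below).
import sql from 'k6/x/sql';
import driver from 'k6/x/sql/driver/postgres'; // driver module path per recent xk6-sql releases

// Placeholder connection string; in real pipelines it comes from the environment.
const db = sql.open(driver, 'postgres://user:pass@localhost:5432/testdb');

export const options = { vus: 10, duration: '30s' };

export default function () {
  // 'orders' is a hypothetical table used only for illustration.
  const rows = db.query('SELECT COUNT(*) AS total FROM orders;');
  console.log(`rows counted: ${rows[0].total}`);
}

export function teardown() {
  db.close();
}
```

k6 then reports per-iteration timings, which map directly to the average-execution-time and concurrency aspects listed above.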
Using Database Extensions and Drivers
Out of the box, K6 supports API and browser testing. To work with databases, you need to build a custom executable with xk6, incorporating specific extensions.
During the demonstration, I walked through building a K6 executable containing:
- A generic SQL extension
- Specific drivers for SQL Server, PostgreSQL and MySQL
- An extension for generating fake data (Faker)
With this in place, K6 can execute SQL commands directly, bringing the tests closer to production reality.
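The build itself is a short sequence of commands. A possible sketch, assuming Go is installed and that the extension module paths below still match the Grafana repositories (verify the names in each repository before building):

```shell
# Install xk6 (requires a Go toolchain).
go install go.k6.io/xk6/cmd/xk6@latest

# Build a custom k6 binary with the SQL extension, the three database
# drivers and the Faker extension. Module paths are assumptions to verify.
xk6 build \
  --with github.com/grafana/xk6-sql \
  --with github.com/grafana/xk6-sql-driver-postgres \
  --with github.com/grafana/xk6-sql-driver-mysql \
  --with github.com/grafana/xk6-sql-driver-sqlserver \
  --with github.com/grafana/xk6-faker
```

The result is a `k6` binary in the current directory that replaces the stock one for these tests.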
Controlled generation of fake data
Another important topic was fake-data generation. In test environments, creating data sets by hand often results in inconsistencies and quality issues.
Using Faker integrated with K6 lets you generate coherent data with a consistent structure and no risk of inappropriate content. Each K6 iteration represents one complete execution of a scenario, such as inserting a record, which makes it easy to simulate concurrent load in a controlled way.
This type of approach is useful for:
- Quickly creating large test data sets
- Performance and volume tests
- Building proofs of concept and technical demonstrations
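Putting the two extensions together, an iteration that inserts one fake record might look like the sketch below. The `customers` table, its columns and the Faker method names are assumptions to be checked against your build's documentation:

```javascript
// Sketch: concurrent insertion of fake records via xk6-sql + xk6-faker.
import sql from 'k6/x/sql';
import driver from 'k6/x/sql/driver/postgres';
import faker from 'k6/x/faker';

// The connection string is injected through an environment variable (__ENV).
const db = sql.open(driver, __ENV.DB_DSN);

export const options = { vus: 20, iterations: 1000 };

export default function () {
  // Each iteration is one complete scenario: generate coherent data, insert it.
  // Method names follow the xk6-faker docs; verify them for your version.
  const name = faker.person.name();
  const email = faker.person.email();
  db.exec('INSERT INTO customers (name, email) VALUES ($1, $2);', name, email);
}

export function teardown() {
  db.close();
}
```

With 20 virtual users sharing 1000 iterations, this produces a controlled burst of concurrent inserts rather than an unbounded flood.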
Integration with pipelines and use of containers
During the live, all demonstrations ran in automated pipelines against official database containers. This choice aimed to bring the test scenario closer to real corporate environments.
Some technical decisions discussed included:
- Using SQL Server 2022 in a Linux container, due to instability observed in newer versions
- Choosing the base operating system version carefully to ensure tools such as SQLCMD were available
- Parameterizing connection strings and sensitive values through environment variables and variable groups
These precautions reduce pipeline failures and facilitate test reproducibility.
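As an example of those precautions, a local equivalent of the pipeline setup could look like this. The SA password is a placeholder and, in a real pipeline, should come from a secret variable, never be hard-coded:

```shell
# SQL Server 2022 in a Linux container (official Microsoft image).
docker run -d --name mssql-test \
  -e "ACCEPT_EULA=Y" \
  -e "MSSQL_SA_PASSWORD=Your_Strong_P@ssw0rd" \
  -p 1433:1433 \
  mcr.microsoft.com/mssql/server:2022-latest

# Pass the connection string to the custom k6 binary via an environment
# variable, mirroring the variable-group approach used in the pipeline.
DB_DSN="sqlserver://sa:Your_Strong_P@ssw0rd@localhost:1433?database=master" \
  ./k6 run script.js
```

Keeping the connection string out of the script is what lets the same test run unchanged against local containers and pipeline-provisioned databases.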
Results observed in tests
Load tests were run against the different databases, varying the number of simultaneous users and the volume of iterations. The results showed high insertion rates and low average execution times, even in container-based environments.
More relevant than the absolute numbers was the ability to measure, compare and understand how each database behaves under different load levels, supporting technical and infrastructure decisions.
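A typical way to vary users and load over a run is k6's `options` block with staged ramps and thresholds. The durations, targets and threshold value below are illustrative, not the figures from the live:

```javascript
// Illustrative load profile: ramp virtual users up, hold, push, ramp down.
export const options = {
  stages: [
    { duration: '30s', target: 20 },  // ramp up to 20 VUs
    { duration: '1m',  target: 20 },  // sustain the load
    { duration: '30s', target: 50 },  // push to 50 VUs to probe limits
    { duration: '30s', target: 0 },   // ramp down
  ],
  thresholds: {
    // Fail the run if the 95th-percentile iteration exceeds 500 ms,
    // turning "degradation" into an objective pass/fail criterion.
    iteration_duration: ['p(95)<500'],
  },
};
```

Thresholds are what make these runs useful in pipelines: a degraded database fails the build instead of just producing a slower report.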
Practical applications of K6 in data projects
Throughout the live, I stressed that K6 should not be seen only as a tool for extreme load testing. It can be used in several contexts, such as:
- Capacity assessment before going to production
- Controlled generation of fake data
- Performance regression tests
- Supporting decisions on infrastructure sizing
In real projects, this type of testing lets you anticipate problems and justify capacity adjustments based on concrete data.
ORM, database and business alignment
In the final part of the live, we discussed the role of ORMs and when deeper intervention in the database is appropriate. The view presented was that ORMs are efficient for simple CRUD operations but show limitations in analytical queries, complex reports and high-volume scenarios.
In those cases, explicit SQL, dedicated views or stored procedures tend to offer better performance and greater control. The decision to optimize, however, must always weigh the impact on the business: not every technical problem justifies immediate investment, and the focus should be on the points that directly affect the operation.
Final considerations
The live aimed to demonstrate, in a practical and transparent way, how K6 can be used in real scenarios involving databases, load tests and automation. More than presenting a tool, the goal was to discuss technical decisions, limitations and practical applications.
Well-executed load tests provide objective data that supports decision making, reduces risk and makes the behavior of the production environment more predictable.