Call for Papers
Software systems (e.g., smartphone apps, desktop applications, telecommunication infrastructures, and enterprise systems) have strict requirements on software performance. Failing to meet these requirements may cause business losses, customer defection, brand damage, and other serious consequences. In addition to conventional functional testing, the performance of these systems must be verified through load testing or benchmarking to ensure quality of service.
Load testing examines the behavior of a system by simulating hundreds or thousands of users performing tasks at the same time. Benchmarking compares a system's performance against that of other similar systems in the domain. The workshop is not limited to traditional load testing; it is open to ideas for reinventing and extending load testing, as well as to any other way of ensuring system performance and resilience under load, including any kind of performance testing, resilience/reliability/high-availability/stability testing, operational profile testing, stress testing, A/B and canary testing, volume testing, and chaos engineering.
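For readers less familiar with the area, the following is a minimal, illustrative sketch of what "simulating concurrent users" can look like; it is not the workshop's or any particular tool's prescribed approach. The target URL, user count, and request mix are hypothetical, and real load tests typically rely on dedicated tools and realistic operational profiles.

```python
# Minimal sketch: simulate concurrent users against a hypothetical endpoint
# using only the Python standard library.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

TARGET_URL = "http://localhost:8080/health"  # hypothetical system under test
NUM_USERS = 100                              # simulated concurrent users
REQUESTS_PER_USER = 10                       # requests issued by each user

def simulate_user(user_id: int):
    """Issue a burst of requests and record per-request latency in seconds."""
    latencies = []
    for _ in range(REQUESTS_PER_USER):
        start = time.perf_counter()
        try:
            with urllib.request.urlopen(TARGET_URL, timeout=5) as resp:
                resp.read()
        except OSError:
            pass  # a real harness would record failures separately
        latencies.append(time.perf_counter() - start)
    return latencies

if __name__ == "__main__":
    # Each worker thread plays the role of one concurrent user.
    with ThreadPoolExecutor(max_workers=NUM_USERS) as pool:
        results = list(pool.map(simulate_user, range(NUM_USERS)))
    all_latencies = sorted(t for user in results for t in user)
    p95 = all_latencies[int(0.95 * len(all_latencies)) - 1]
    print(f"requests: {len(all_latencies)}, p95 latency: {p95:.3f}s")
```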
Load testing and benchmarking software systems are difficult tasks that require a deep understanding of the system under test and of customer behavior. Practitioners face many challenges, such as tooling (choosing and implementing the testing tools), environments (software and hardware setup), and time (limited time to design, run, and analyze tests). Yet little research in the software engineering domain addresses this topic.
Adjusting load testing to recent industry trends, such as cloud computing, agile/iterative development, continuous integration/delivery, microservices, serverless computing, AI/ML services, and containers, poses major challenges that are not yet fully addressed.
This one-day workshop brings together software testing and software performance researchers, practitioners, and tool developers to discuss the challenges and opportunities of conducting research on load testing and benchmarking software systems. Our ultimate goal is to grow an active community around this important and practical research topic.
We solicit two tracks of submissions:
- Research or industry papers:
  - Short papers (maximum 4 pages)
  - Full papers (maximum 8 pages)
- Presentation track for industry or research talks:
  - Extended abstract (maximum 700 words)
Topics of interest include, but are not limited to:
- Efficient and cost-effective test executions
- Rapid and scalable analysis of the measurement results
- Case studies and experience reports on load testing and benchmarking
- Leveraging cloud computing to conduct large-scale testing
- Load testing and benchmarking on emerging systems (e.g., adaptive/autonomic systems, AI, big data systems, and cloud services)
- Continuous performance testing
- Load testing and benchmarking in the context of agile software development processes
- Using performance models to support load testing and benchmarking
- Building and maintaining load testing and benchmarking as a service
- Efficient test data management for load testing and benchmarking
- Context-driven performance testing
- Performance / load testing as an integral part of the performance engineering process
- Load testing serverless computing platforms and the unique challenges caused by granular and short-lived containers
Instructions for Authors from ACM
By submitting your article to an ACM Publication, you are hereby acknowledging that you and your co-authors are subject to all ACM Publications Policies, including ACM's new Publications Policy on Research Involving Human Participants and Subjects. Alleged violations of this policy or any ACM Publications Policy will be investigated by ACM and may result in a full retraction of your paper, in addition to other potential penalties, as per ACM Publications Policy.
Please ensure that you and your co-authors obtain an ORCID ID, so you can complete the publishing process for your accepted paper. ACM has been involved in ORCID from the start and has committed to collecting ORCID IDs from all of its published authors. The collection process began in 2022 and is now a requirement. We are committed to improving author discoverability, ensuring proper attribution, and contributing to ongoing community efforts around name normalization; your ORCID ID will help in these efforts.
Important Dates
Research or industry papers:
- Abstract submission: January 20, 2025 (AoE)
- Paper submission: January 24, 2025 (AoE)
- Author notification: February 11, 2025
- Camera-ready version: February 26, 2025

Presentation track:
- Extended abstract submission: February 17, 2025 (AoE)
- Author notification: February 26, 2025

Workshop date: May 5, 2025
Organization:
Chairs:
- Stephen Fan (The King's University, Canada)
- Lizhi Liao (Memorial University of Newfoundland, Canada)
- Zhenhao Li (York University, Canada)
Web Chair:
- Changyuan Lin (The University of British Columbia, Canada)
Program Committee:
To be announced.