1. BUSINESS CONTEXT & NEEDS
Modern software teams face an uphill battle: manual test case creation is time‑consuming, error‑prone, and expensive.
Organizations need a smarter solution that reduces human effort while maintaining quality.
A Japanese client asked the Fsoft testing team (IVS) to use TestVista to help reduce manual test case design effort while keeping test coverage high.

Client industry domain: Airline - Info System
The IVS testing team was tasked with piloting TestVista on the client's project specifications to support test case design at the beginning of each agile sprint.
Evaluating TestVista’s Performance
After receiving the client's requirement specifications, the test team broke the software specs down into sub-patterns (think of them as function modules) to assess TestVista’s performance in real-world testing scenarios.
The 9 key patterns analyzed were:
- Deleting
- Updating
- Approval Workflows
- Email form
- Export Reports
- Batch Operations
- API Testing
- Registration/Bulk
- Filtering
- AI-Powered Test Suggestions: TestVista analyzes the input specs (patterns) and suggests relevant test cases.
- Test Lead Review: A senior testing team member reviews the generated test cases to evaluate quality and coverage.
- Ensuring Consistency: We believe in transparency, so here is a caveat with generative AI: results may vary from run to run.
- Performance Reporting: At the end of each sprint, the team measures TestVista’s generated test cases against the total required for full coverage. This report is then presented to the client for validation.
The results:

Across three sprints, TestVista validated 714 out of 964 manually created test cases—yielding an overall coverage rate of 74%.
- Sprint 1: 85.16%
- Sprint 2: 77.95%
- Sprint 3: 47.45%
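The overall coverage figure can be reproduced from the totals above; a minimal Python check (numbers taken directly from the report):

```python
# Overall coverage rate: share of the manually created test cases
# that TestVista's generated cases validated across three sprints.
validated = 714
total_manual = 964
coverage_pct = validated / total_manual * 100
print(f"Overall coverage: {coverage_pct:.0f}%")  # → Overall coverage: 74%
```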
Why was coverage lower in Sprint 3?
- The TestVista development team found that test cases for the Batch and API patterns often lack detailed parameter information in the expected-results section.
In other words, the expected output must include detailed parameters and the response body in order to be useful.
Test Case Coverage by Patterns (functions)

Understanding and Measuring Testing Effort

Manual Testing Effort Breakdown
In traditional manual testing, testers spend significant time studying requirements, designing viewpoints, and manually generating test cases.
The breakdown is as follows:

TestVista-Assisted Effort Breakdown
With TestVista, AI automates a large portion of test case generation, allowing testers to focus on input preparation, self-review, and targeted updates.
The breakdown is as follows:

Key Differences & Efficiency Gains
TestVista’s performance is both impactful and measurable:
- AI-Powered Test Case Generation: TestVista reduces the effort required for test case creation (from 91.27h → 16.2h), freeing up testers for more complex work.
- Self-Review & Validation: Testers still need to review and validate the AI-generated results; this task adds 19.2h to the workflow.
- Overall Effort Reduction: The total testing effort decreases from 153.87h to 102.16h, achieving a 34% efficiency gain.
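The effort figures above are internally consistent; a quick sketch to verify the stated efficiency gain (hours taken from the bullets above):

```python
# Total testing effort before and after adopting TestVista (hours).
manual_total = 153.87    # traditional manual workflow
assisted_total = 102.16  # TestVista-assisted, incl. 19.2h self-review
saved = manual_total - assisted_total
gain_pct = saved / manual_total * 100
print(f"Hours saved: {saved:.2f}h, efficiency gain: {gain_pct:.0f}%")
# → Hours saved: 51.71h, efficiency gain: 34%
```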

By leveraging AI to automate repetitive tasks, teams can increase efficiency, reduce human errors, and optimize resource allocation for higher-value testing activities.