
How To Write Test Cases? Detailed Guide With Examples

Written by Katalon Team | Jul 8, 2024 8:00:00 AM

 

Test cases are the backbone of any testing project. The art of software testing lies in writing the right test cases: it is less about how you write them than about which scenarios you write them for. From there, test cases need to tie closely into test design and test execution.
 

Let’s explore how to write test cases in the most strategic fashion.

What is a Test Case?

A test case is a specific set of conditions or variables under which a tester determines whether a system, software application, or one of its features is working as intended.
 

Here’s an example. Suppose you are testing the login pop-up of Etsy, one of the leading e-commerce platforms. You’ll need several test cases to check that every feature of this pop-up works smoothly.
 

Brainstorming time: what features do you want to test for this Login pop-up?
 

Let’s list some of them:

  1. Verify Login with Valid Credentials
  2. Verify Login with Invalid Credentials
  3. Verify "Forgot Password" Functionality
  4. Verify "Trouble Signing In" Link
  5. Verify "Continue with Google" Functionality
  6. Verify "Continue with Facebook" Functionality
  7. Verify "Continue with Apple" Functionality
  8. Verify "Register" Button Redirect
  9. Verify Empty Email Field Handling
  10. Verify Empty Password Field Handling
  11. Verify Error Message for Unregistered Email
  12. Verify Session Timeout Handling
     

That’s just a quick first-pass list. As a rule, the more complex a system is, the more test cases it needs.

 

Learn More: 100+ Test Cases For The Login Page That You Will Need

Before You Write a Test Case

Before you write a test case, make 3 key decisions:

  1. Choose your approach to test case design: your approach shapes how you design test cases. Are you doing black box testing (you don’t have access to the source code) or white box testing (you do)? Are you doing manual testing or automation testing? 
  2. Choose your tool/framework for test case authoring: are you using frameworks or tools to test? What level of expertise do these tools/frameworks require? 
  3. Choose your execution environment: this ties in closely with your test strategy. Do you want to execute across browsers/OS/environments? How can you incorporate that into your test script?

Once all 3 decisions have been made, you can start test case design and, eventually, test authoring. It’s safe to say that 80% of writing a test case lies in planning and design, and only 20% is actual scripting. Good test case design is key to achieving good test coverage.
 

Let’s start with the first steps.

How To Design a Test Case?

Why do we need to design before we write?

 

It's simple: there are far more things to test than first appears. In the example above, the login pop-up alone required 12 test cases to cover most scenarios. You need techniques to enumerate all the test cases for a given scenario before you start writing those tests.

 

First question: do you have access to the internal code?
 

You are doing black box testing if you don’t have access to internal code. The entire system is essentially a black box. You can only see and test what the system is programmed to show you.
 

 

When testers don’t need to understand the algorithm, they can concentrate on determining whether the software meets user expectations. They must explore and learn about the system to generate test case ideas. However, this approach can result in limited test coverage, as some features with non-obvious behavior might be overlooked.
 

In that case, here are some techniques for you to design your test cases:
 

1. Equivalence Class Testing: you divide input data into groups where all values in a group should be treated the same way by the system.

  • Example: For an age input field that accepts ages from 18 to 65, you can choose 3 values for 3 equivalence classes and test with one value from each group. That means you have 3 test cases. You can choose:
    • 17 (below 18-65 range)
    • 30 (within 18-65 range)
    • 70 (above 18-65 range)
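
Here is a minimal sketch of those 3 test cases in code, assuming a hypothetical validate_age function that returns True only for accepted ages, with pytest as the test runner:

import pytest

# Hypothetical function under test: accepts ages from 18 to 65.
def validate_age(age):
    return 18 <= age <= 65

# One representative value per equivalence class.
@pytest.mark.parametrize("age, expected", [
    (17, False),  # below the 18-65 range
    (30, True),   # within the 18-65 range
    (70, False),  # above the 18-65 range
])
def test_age_equivalence_classes(age, expected):
    assert validate_age(age) == expected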

2. Boundary Value Analysis: this is a more granular version of equivalence class testing. Here you test values at the edges of input ranges to find errors at the boundaries.

  • Example: For an age input that accepts values from 18 to 65, you choose up to 6 values to test (which means you have 6 test cases): 
    • 17 (just below)
    • 18 (at the boundary)
    • 19 (just above)
    • 64 (just below)
    • 65 (at the boundary)
    • 66 (just above)
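
The same pattern extends naturally to boundary value analysis. This sketch reuses the hypothetical validate_age function from the previous example, with one test case per boundary value:

import pytest

@pytest.mark.parametrize("age, expected", [
    (17, False),  # just below the lower boundary
    (18, True),   # at the lower boundary
    (19, True),   # just above the lower boundary
    (64, True),   # just below the upper boundary
    (65, True),   # at the upper boundary
    (66, False),  # just above the upper boundary
])
def test_age_boundaries(age, expected):
    assert validate_age(age) == expected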

3. Decision Table Testing: you use a table to test different combinations of input conditions and their corresponding actions or results.

  • Example: Here’s a decision table for a simple loan approval system. The system approves or denies loans based on two conditions: the applicant's credit score and the applicant's income. From this table, you can write 6 test cases:

Rule     Credit Score   Income   Loan Approval   Interest Rate
Rule-1   High           High     Yes             Low
Rule-2   High           Low      Yes             Medium
Rule-3   Medium         High     Yes             Medium
Rule-4   Medium         Low      No              N/A
Rule-5   Low            High     No              N/A
Rule-6   Low            Low      No              N/A

(Credit Score and Income are the conditions; Loan Approval and Interest Rate are the resulting actions.)
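
A decision table maps almost mechanically onto a parametrized test, one case per rule. Here is a sketch assuming a hypothetical approve_loan function that returns an (approved, interest_rate) pair, with N/A represented as None:

import pytest

# Hypothetical function under test, encoding the decision table above.
def approve_loan(credit_score, income):
    if credit_score == "High":
        return True, ("Low" if income == "High" else "Medium")
    if credit_score == "Medium" and income == "High":
        return True, "Medium"
    return False, None  # denied: interest rate is N/A

# One test case per rule in the decision table.
@pytest.mark.parametrize("credit_score, income, approved, rate", [
    ("High",   "High", True,  "Low"),     # Rule-1
    ("High",   "Low",  True,  "Medium"),  # Rule-2
    ("Medium", "High", True,  "Medium"),  # Rule-3
    ("Medium", "Low",  False, None),      # Rule-4
    ("Low",    "High", False, None),      # Rule-5
    ("Low",    "Low",  False, None),      # Rule-6
])
def test_loan_decision(credit_score, income, approved, rate):
    assert approve_loan(credit_score, income) == (approved, rate)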

 

You are doing white box testing if you have access to the internal code. Test case design with white box testing lets you dive deep into the implementation paths through the system. With internal knowledge of how the system works, you can tailor test cases specifically to its logic.
 

 

With white box testing, you would build a Control Flow Graph (CFG) to map out all of the possible execution paths through a specific feature. For example, if the CFG of a feature contains 3 independent execution paths, there are 3 test cases to write:
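
As a hedged illustration, here is a hypothetical function whose CFG contains 3 execution paths, and one test per path:

# Hypothetical function with 3 execution paths:
# (1) member discount, (2) rush surcharge, (3) neither.
def final_price(price, is_member, is_rush):
    if is_member:
        return price - 10  # path 1: member discount
    if is_rush:
        return price + 20  # path 2: rush surcharge
    return price           # path 3: default

# One test case per execution path.
def test_member_discount():
    assert final_price(100, is_member=True, is_rush=False) == 90

def test_rush_surcharge():
    assert final_price(100, is_member=False, is_rush=True) == 120

def test_default_price():
    assert final_price(100, is_member=False, is_rush=False) == 100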
 


 

Once you have designed your test cases, it’s time to note them down. The real work begins here.
 

How To Write a Test Case?

The anatomy of a test case consists of:

  1. Test Case ID: A unique identifier for the test case.
  2. Description: A brief summary of the test case.
  3. Preconditions: Any conditions that must be met before the test case can be executed.
  4. Test Steps: A detailed, step-by-step guide on how to execute the test.
  5. Test Data: The data to be used in the test.
  6. Expected Result: The expected outcome of the test if the system works correctly.
  7. Actual Result: The actual outcome when the test is executed (filled in after execution).
  8. Postconditions: The state of the system after the test case execution.
  9. Pass/Fail Criteria: Criteria to determine if the test case has passed or failed based on the actual result compared to the expected result.
  10. Comments: Any additional information or observations.
     


 

Here is an example of a login test case for the Etsy login popup:

Test Case ID: TC001

Description: Verify Login with Valid Credentials

Preconditions: User is on the Etsy login pop-up

Test Steps:
1. Enter a valid email address.
2. Enter the corresponding valid password.
3. Click the "Sign In" button.

Test Data:
Email: validuser@example.com
Password: validpassword123

Expected Result: The user is successfully logged in and redirected to the homepage or the previously intended page.

Actual Result: (To be filled in after execution)

Postconditions: User is logged in and the session is active

Pass/Fail Criteria:
Pass: the user is logged in and redirected correctly.
Fail: an error message is displayed or the user is not logged in.

Comments: Ensure the test environment has network access and the server is operational.
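
If you manage test cases in code or import them into a test management tool, the same anatomy maps naturally onto a simple data structure. This is purely illustrative, not any particular tool's schema:

# Illustrative only: TC001 expressed as a Python dictionary.
test_case = {
    "id": "TC001",
    "description": "Verify Login with Valid Credentials",
    "preconditions": ["User is on the Etsy login pop-up"],
    "steps": [
        "Enter a valid email address.",
        "Enter the corresponding valid password.",
        'Click the "Sign In" button.',
    ],
    "test_data": {"email": "validuser@example.com", "password": "validpassword123"},
    "expected_result": "User is logged in and redirected correctly.",
    "actual_result": None,  # filled in after execution
    "postconditions": ["User is logged in and the session is active"],
    "comments": "Requires network access and an operational server.",
}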

Best Practices When Writing a Test Case

Follow these best practices when writing your test cases:

Test Case ID:
1. Use a consistent naming convention.
2. Ensure IDs are unique.
3. Use a prefix indicating the module/feature.
4. Keep it short but descriptive.
5. Maintain a central repository for all test case IDs.

Description:
1. Be concise and clear.
2. Clearly state the purpose of the test.
3. Make it understandable for anyone reading the test case.
4. Include the expected behavior or outcome.
5. Avoid technical jargon and ambiguity.

Preconditions:
1. Clearly specify setup requirements.
2. Ensure all necessary conditions are met.
3. Include relevant system or environment states.
4. Detail any specific user roles or configurations needed.
5. Verify preconditions before test execution.

Test Steps:
1. Number each step sequentially.
2. Write steps clearly and simply.
3. Use consistent terminology and actions.
4. Ensure steps are reproducible.
5. Avoid combining multiple actions into one step.

Test Data:
1. Use realistic and valid data.
2. Clearly specify each piece of test data.
3. Avoid hardcoding sensitive information.
4. Utilize data-driven testing for scalability (see the sketch after this table).
5. Store test data separately from test scripts.

Expected Result:
1. Be specific and clear about the outcome.
2. Include UI changes, redirects, and messages.
3. Align with the acceptance criteria.
4. Cover all aspects of the functionality being tested.
5. Make results measurable and observable.

Actual Result:
1. Document the actual outcome during execution.
2. Provide detailed information on discrepancies.
3. Include screenshots or logs if applicable.
4. Use a consistent format for recording results.
5. Verify results against the expected outcomes.

Postconditions:
1. Specify the expected system state post-test.
2. Include any necessary cleanup steps.
3. Ensure the system is stable for subsequent tests.
4. Verify that changes made during the test are reverted if needed.
5. Document any residual effects on the environment.

Pass/Fail Criteria:
1. Clearly define pass/fail conditions.
2. Use measurable and observable outcomes.
3. Ensure criteria are objective.
4. Include specific error messages or behaviors for fails.
5. Align criteria with expected results and requirements.

Comments:
1. Include additional helpful information.
2. Note assumptions, dependencies, or constraints.
3. Provide troubleshooting tips.
4. Record any deviations from the standard process.
5. Mention any special instructions for executing the test.
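
To make the Test Data guidance concrete, here is a minimal sketch of data-driven testing with test data stored outside the script. It assumes a hypothetical credentials.csv file with email and password columns:

import csv
import pytest

# Load test data from an external CSV file (hypothetical path and columns).
def load_login_data(path="credentials.csv"):
    with open(path, newline="") as f:
        return [(row["email"], row["password"]) for row in csv.DictReader(f)]

@pytest.mark.parametrize("email, password", load_login_data())
def test_login_with_data(email, password):
    # Placeholder check; in practice, drive the login flow here
    # (see the Selenium script later in this article).
    assert email and password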

Here are some more tips for you:

  1. Keep your test cases isolated: each test case should verify one specific instance of a scenario. This makes debugging much easier later down the road.
  2. Leverage a test case management system that syncs your test execution with test management and reporting.
  3. Conduct an exploratory testing session first to gain a comprehensive understanding of the system you are testing. This helps you know what test cases to work on.
  4. Leverage the Gherkin format (Given, When, Then) and the BDD testing methodology to express test scenarios in human-readable language so other stakeholders can join the conversation (see the example below).
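
For instance, the valid-login test case from earlier could be expressed in Gherkin roughly like this (the step wording is illustrative):

Feature: Etsy login

  Scenario: Login with valid credentials
    Given the user is on the Etsy login pop-up
    When the user enters a valid email and password
    And the user clicks the "Sign In" button
    Then the user is logged in and redirected to the homepage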

How To Write an Automation Test Script?

Writing manual test cases is primarily about noting down the test steps. 

When it comes to automation testing, the process becomes more complicated.

If you choose manual testing, you simply execute the test cases following the exact steps as planned. If you go with automation testing, you first need to choose whether you’ll use a framework or a testing tool.
 

Simply put, a framework (like Selenium) gives you the building blocks to code your own tests, while a testing tool (like Katalon) handles much of that plumbing for you.

For example, you can use Selenium to automate the login page testing of Etsy. Carefully read through the Selenium documentation to get familiar with its syntax, then launch your favorite IDE. Python is a popular language choice for Selenium automation.
 

Install Selenium and webdriver-manager if you haven’t already:
 

pip install selenium webdriver-manager
 

Here’s your script:
 

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.chrome.options import Options
from webdriver_manager.chrome import ChromeDriverManager

# Set up Chrome options
chrome_options = Options()
chrome_options.add_argument("--headless")  # Run in headless mode if you don't need a UI

# Initialize the WebDriver
driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=chrome_options)

try:
    # Navigate to the Etsy login page
    driver.get("https://www.etsy.com/signin")

    # Find the email and password fields and the login button
    email_field = driver.find_element(By.ID, "join_neu_email_field")
    password_field = driver.find_element(By.ID, "join_neu_password_field")
    login_button = driver.find_element(By.XPATH, "//button[@type='submit']")

    # Enter credentials (replace with valid credentials)
    email_field.send_keys("your-email@example.com")
    password_field.send_keys("your-password")

    # Click the login button
    login_button.click()

    # Add assertions or further actions as needed.
    # For example, check if login was successful:
    # account_menu = driver.find_element(By.ID, "account-menu-id")
    # assert account_menu.is_displayed(), "Login failed"

finally:
    # Close the browser
    driver.quit()
 

The steps we coded here are:

  1. Set up Chrome to run in headless mode (no GUI). This is totally optional.
  2. Initialize WebDriver 
  3. Use “driver.get(url)” to open the Etsy login page
  4. Locate the web elements using their IDs and an XPath expression
  5. Enter credentials
  6. Submit the form
  7. Check the login status with an assertion
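
One fragile spot in the script above is timing: an element may not exist the instant the page loads. A hedged improvement is to swap the immediate find_element calls for Selenium's explicit waits, for example:

from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

# Drop-in replacement inside the earlier script: wait up to 10 seconds
# for the email field to become visible before interacting with it.
wait = WebDriverWait(driver, 10)
email_field = wait.until(
    EC.visibility_of_element_located((By.ID, "join_neu_email_field"))
)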

Here are some best practices you should follow:

  1. Choose automation tools that fit your project requirements, such as Selenium, Appium, or TestNG. Ensure the tools are compatible with your application and technology stack.
  2. Use design patterns like Page Object Model (POM) or Keyword-Driven Testing; a minimal POM sketch follows this list.
  3. Identify and automate high-priority test cases that are repetitive, time-consuming, or critical. Avoid automating test cases that are unstable, infrequently executed, or require frequent updates.
  4. Use meaningful names for variables, methods, and classes. Include comments and documentation within the scripts to explain the purpose and logic.
  5. Implement Data-Driven Testing by using external data sources (e.g., CSV, Excel, databases) to drive test cases. Make sure to separate test data from test scripts to facilitate easy updates and variations.
  6. Integrate automation tests with CI tools like Jenkins, Travis CI, or CircleCI.
  7. Store and manage test scripts in a version control system like Git. Track changes, collaborate with team members, and maintain a history of updates.
  8. Include logging mechanisms to capture detailed information during test execution.
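
As an illustration of the design-pattern advice above (item 2), here is a minimal Page Object Model sketch for the login flow, reusing the assumed Etsy locators from the earlier script:

from selenium.webdriver.common.by import By

# Minimal Page Object: locators and actions live in one class,
# so tests read as intent rather than raw element lookups.
class LoginPage:
    EMAIL = (By.ID, "join_neu_email_field")
    PASSWORD = (By.ID, "join_neu_password_field")
    SUBMIT = (By.XPATH, "//button[@type='submit']")

    def __init__(self, driver):
        self.driver = driver

    def open(self):
        self.driver.get("https://www.etsy.com/signin")

    def login(self, email, password):
        self.driver.find_element(*self.EMAIL).send_keys(email)
        self.driver.find_element(*self.PASSWORD).send_keys(password)
        self.driver.find_element(*self.SUBMIT).click()

# Usage in a test:
#   LoginPage(driver).open()
#   LoginPage(driver).login("your-email@example.com", "your-password")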

How To Write an Automation Test Script With Katalon?

Let’s see how you can do the same thing with Katalon, minus the coding part.
 

First, you can download Katalon Studio here.
 

Next, launch it, and go to File > New > Project to create your first test project.  
 


 

Now create your first test case. Let’s call it a “Web Test Case”.

 

You now have a productive IDE to write automated test cases, with 3 modes to choose from:

  • No-code: Using the Record-and-Playback feature, testers can capture their manual actions on-screen and convert them into automated test scripts that can be re-run as many times as needed.
  • Low-code: Katalon offers a library of Built-in Keywords, which are pre-written code snippets with customizable parameters for specific actions. For instance, a "Click" keyword handles the internal logic to find and click an element (like a button). Testers only need to specify the element, without dealing with the underlying code.
  • Full-code: Testers can switch to Scripting mode to write their own test scripts from scratch. They can also toggle between no-code and low-code modes as needed. This approach combines the ease of point-and-click testing with the flexibility of full scripting, allowing testers to focus on what to test rather than how to write the tests, thus boosting productivity.


 

You also have a wide range of execution environments to choose from. The cool thing is that you can execute any type of test case across browsers and OSes, and even reuse test artifacts across AUTs (applications under test). This saves a great deal of time and effort in test creation, allowing you to focus on high-value strategic tasks.

 

FAQs On Writing Test Cases

1. What is the difference between Test Case vs. Test Scenario?

Test Case:

  • Definition: A test case is a detailed document that describes a specific condition to test, the steps to execute the test, the input data, the expected result, and the actual result after execution.
  • Components: Typically includes Test Case ID, Description, Preconditions, Test Steps, Test Data, Expected Result, Actual Result, Postconditions, Pass/Fail Criteria, and Comments.
  • Detail Level: Highly detailed and specific, designed to ensure that every aspect of the feature is tested thoroughly.
  • Purpose: To verify that a particular function of the application behaves as expected under specified conditions.

Test Scenario:

  • Definition: A test scenario is a high-level description of what needs to be tested. It represents a user journey or a functional aspect of the application that needs to be verified.
  • Components: Usually includes a scenario ID and a brief description of the functionality to be tested.
  • Detail Level: Less detailed than a test case, more abstract, and focused on the end-to-end functionality rather than specific inputs and outputs.
  • Purpose: To ensure that all possible user workflows and business processes are covered.

 

2. How to write test cases in Agile?

Writing test cases in Agile involves adapting to the iterative and incremental nature of the methodology. Here are some best practices:

  1. Collaborate Early and Often:
    • Work closely with the development team, product owner, and stakeholders to understand requirements.
    • Participate in sprint planning and grooming sessions to gain insights into user stories and acceptance criteria.
  2. Write Test Cases Concurrently:
    • Develop test cases as user stories are being defined and refined.
    • Use acceptance criteria as a guide to write initial test cases.
  3. Keep Test Cases Lightweight:
    • Focus on the essence of what needs to be tested.
    • Write high-level test cases initially and add details as needed.
  4. Use User Stories:
    • Align test cases with user stories to ensure all aspects of the functionality are covered.
    • Write test cases that reflect real user scenarios and interactions.
  5. Prioritize Test Cases:
    • Prioritize test cases based on the criticality and risk of the features being tested.
    • Focus on high-priority test cases first to ensure core functionality is tested early.
  6. Automate Where Possible:
    • Identify repetitive and high-impact test cases for automation.
    • Integrate automated tests into the CI/CD pipeline for continuous feedback.
  7. Review and Update Regularly:
    • Continuously review and update test cases based on feedback and changes in requirements.
    • Involve the team in test case reviews to ensure coverage and accuracy.

 

3. What are the types of test cases?

There are several types of test cases, each serving a different purpose in the testing process:

  1. Functional Test Cases: verify the functionality of the software according to the requirements.
  2. Non-Functional Test Cases: assess aspects like performance, usability, reliability, and scalability.
  3. Regression Test Cases: ensure that new changes or enhancements have not adversely affected existing functionalities.
  4. Smoke Test Cases: perform a basic check to ensure that the critical functionalities of the application are working.
  5. Sanity Test Cases: focus on a narrow area of functionality to ensure that specific changes or bug fixes are working as intended.
  6. Integration Test Cases: verify that different modules or components of the application work together as expected.