Role 7: QA / Tester
Tries to break the app before real users do. Thinks adversarially. Asks "what if?"
The Excel Analogy
This role is one where CA professionals have a natural advantage — you already stress-test work before it leaves the firm.
You have built the Excel model. Formulas are done. You have checked the obvious cases — the model works when you enter normal data. But does it work when someone enters zero? What happens when a cell is left blank? What if someone pastes data with an extra space before the number? What happens when the date is in the wrong format? What if two people try to update the same cell at the same time?
A good CA does not just build the model and send it. They stress-test it with edge cases — impossible values, missing data, wrong data types — before it goes to the client. Because if the model breaks under real-world conditions, it breaks in front of the client. That is embarrassing and costly.
This is what QA does for software. Before real users encounter your app, you try to break it yourself. You think like someone who does not know how the app is supposed to work — or someone who is actively trying to abuse it.
The Core Mindset
The QA mindset is fundamentally different from the development mindset — and the shift between them must be deliberate.
Developer mindset: "I built it correctly. Here is proof that it works." Tests the happy path — does the expected flow produce the expected result?
QA mindset: "I am trying to break this. Where are the gaps in what was built?" Tests everything except the happy path — wrong input, unexpected sequences, missing data, multiple simultaneous users, slow network, expired sessions.
The transition from "I built this" to "I am going to try to break this" is a genuine mental shift. It requires putting aside the attachment to the code you wrote and thinking like someone who has no idea what you intended to happen.
What This Role Tests
Happy path validation — does the expected flow work?
- Parent completes fee payment → fee record updates to paid → receipt SMS arrives
Input boundary testing — what happens at the edges of valid input?
- Phone number: what about 9 digits? 11 digits? Letters instead of numbers?
- Price: what about 0? Negative number? A number with 10 decimal places?
- Date: what about a date in the past? A date 10 years in the future?
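Boundary tests like these translate directly into code. A minimal Vitest sketch, with a hypothetical validatePhone helper defined inline so the example is self-contained (in a real app the helper would live in your code, not the test file):

```ts
import { describe, expect, it } from "vitest";

// Hypothetical helper, defined inline to keep the sketch self-contained.
// Accepts exactly 10 digits, tolerating surrounding whitespace.
function validatePhone(input: string): boolean {
  return /^\d{10}$/.test(input.trim());
}

describe("phone number boundaries", () => {
  it("accepts exactly 10 digits", () => {
    expect(validatePhone("9876543210")).toBe(true);
  });

  it("rejects 9 and 11 digits", () => {
    expect(validatePhone("987654321")).toBe(false);
    expect(validatePhone("98765432100")).toBe(false);
  });

  it("rejects letters mixed into the number", () => {
    expect(validatePhone("98765abcde")).toBe(false);
  });

  it("tolerates a pasted leading space", () => {
    expect(validatePhone(" 9876543210")).toBe(true);
  });
});
```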
Empty and null states — what happens when data is missing?
- A page that shows a list: what if the list is empty?
- A profile page: what if the user never completed their profile?
- An order history: what if this is a brand new user with no orders?
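In the UI, handling these cases means an explicit empty branch rather than a blank screen. A minimal React sketch; the Order type and the wording are illustrative assumptions:

```tsx
// Sketch: render a helpful empty state instead of a blank list.
// The Order type and copy are illustrative.
type Order = { id: string; label: string };

function OrderHistory({ orders }: { orders: Order[] }) {
  if (orders.length === 0) {
    return <p>No orders yet. Your first order will show up here.</p>;
  }
  return (
    <ul>
      {orders.map((order) => (
        <li key={order.id}>{order.label}</li>
      ))}
    </ul>
  );
}
```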
Error handling — what happens when things go wrong?
- Network disconnects halfway through a form submission
- Payment fails (test with a known-declined card number)
- Server returns an error during file upload
- Session expires while the user is filling a form
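Handling these failures means catching them and telling the user something actionable. A minimal sketch of a submission wrapper; the /api/fees/pay endpoint and the showMessage helper are illustrative assumptions, not EduTrack's real API:

```ts
// Illustrative stand-in for whatever UI notification system you use.
function showMessage(text: string): void {
  console.log(text);
}

// Sketch: submit a payment and surface every failure mode to the user.
async function submitPayment(payload: { feeId: string; method: string }): Promise<void> {
  try {
    const res = await fetch("/api/fees/pay", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
    });

    if (res.status === 401) {
      // Session expired while the user was filling the form.
      showMessage("Your session expired. Please log in and try again.");
    } else if (!res.ok) {
      // Server returned an error: say so plainly, never fail silently.
      showMessage("Payment could not be processed. Please try again.");
    } else {
      showMessage("Payment successful. A receipt SMS is on its way.");
    }
  } catch {
    // fetch rejects on network failure: the disconnect-mid-submit case.
    showMessage("Network problem. Your payment was not submitted, please retry.");
  }
}
```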
Permission and security testing — can users access what they should not?
- Can a parent see the fee records of another parent's child by changing the URL?
- Can a teacher modify attendance records for a class they are not assigned to?
- What happens if you paste a branch admin URL while logged in as a parent?
- Can you submit a fee payment without being logged in?
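Every one of these checks must be enforced on the server, because URLs and client-side code can be manipulated freely. A minimal, framework-free sketch of the URL-tampering defense; the Session type and the db helpers are illustrative assumptions:

```ts
// Illustrative types and store interface, not EduTrack's real models.
type Session = { parentId: string } | null;

interface FeeStore {
  isParentOf(parentId: string, studentId: string): Promise<boolean>;
  feesForStudent(studentId: string): Promise<unknown[]>;
}

// The rule: never trust an ID that arrives in the URL. Verify on the
// server that the logged-in parent actually owns that student.
async function getFeesHandler(session: Session, studentId: string, db: FeeStore) {
  if (!session) {
    return { status: 401, body: { error: "Not logged in" } };
  }
  if (!(await db.isParentOf(session.parentId, studentId))) {
    return { status: 403, body: { error: "Not your student" } };
  }
  return { status: 200, body: await db.feesForStudent(studentId) };
}
```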
Concurrency testing — what happens when multiple users act simultaneously?
- Two parents try to pay the same fee record simultaneously — does it get marked paid twice?
- Two branch admins try to update the same student record simultaneously — which update wins?
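The usual defense is to make the state change atomic and conditional: mark the fee paid only if it is still unpaid, so the second simultaneous attempt fails cleanly. A sketch using node-postgres; the table and column names are illustrative, not EduTrack's real schema:

```ts
import { Pool } from "pg";

const pool = new Pool(); // connection config comes from environment variables

// Sketch: atomically mark a fee paid only if it is still unpaid.
async function markFeePaid(feeId: string, paymentRef: string): Promise<void> {
  const result = await pool.query(
    `UPDATE fee_records
        SET status = 'paid', payment_ref = $2, paid_at = now()
      WHERE id = $1 AND status = 'unpaid'`,
    [feeId, paymentRef],
  );
  // If two payments race, exactly one UPDATE matches its WHERE clause;
  // the loser sees rowCount 0 and can be refunded or ignored.
  if (result.rowCount === 0) {
    throw new Error("Fee was already paid by a concurrent request");
  }
}
```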
The "Why First" Scenario
"You build the fee payment feature. It works perfectly in your testing. You deploy to production. Three days later, a real parent tries to pay tuition. At checkout, they enter their phone number — but they include the country code: +91 9876543210. Your form only accepts 10 digits. It rejects the phone number with a confusing error. They give up. The payment is lost. You never know it happened."
A 15-minute QA session — specifically testing the phone number field with edge cases — would have caught this before it reached a real user.
The cost of a bug found in QA: 30 minutes of your time. The cost of the same bug found by a real parent: a lost payment, a frustrated family, a call to the school principal, and an emergency fix that takes far longer than that 30-minute catch would have.
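The fix that QA session would have prompted is small: normalize the input before validating it, instead of rejecting every format that is not exactly 10 bare digits. A minimal sketch, assuming Indian numbers with an optional +91, 91, or 0 prefix:

```ts
// Sketch: normalize a phone number before validating, instead of
// rejecting common real-world formats outright.
function normalizePhone(raw: string): string | null {
  const digits = raw.replace(/\D/g, ""); // strip +, spaces, dashes
  // Accept "9876543210", "919876543210", or "09876543210".
  const match = digits.match(/^(?:91|0)?(\d{10})$/);
  return match ? match[1] : null;
}

normalizePhone("+91 9876543210"); // "9876543210"
normalizePhone("98765-43210");    // "9876543210"
normalizePhone("12345");          // null, genuinely invalid
```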
Concrete Test Cases to Always Run
For every form you build, run these tests before calling it done:
| Test | What to check |
|---|---|
| Submit with all fields empty | Are all required fields clearly flagged? |
| Submit with one required field empty | Does it catch individual missing fields? |
| Enter text in a number field | Does it reject gracefully with a helpful message? |
| Enter a number in an email field | Is email format validated? |
| Enter a past date where a future date is required | Is date validation in place? |
| Paste in extremely long text (1000+ characters) | Does it handle or limit gracefully? |
| Double-click the submit button rapidly | Does it submit twice? (Double submission is a common bug) |
| Submit, then go back and submit again | Does it create a duplicate? |
| Open the form, do not submit, refresh the page | Does it lose the data? Is that acceptable? |
For every protected page:
| Test | What to check |
|---|---|
| Access the URL while logged out | Does it redirect to login? |
| Access an admin URL as a regular user | Does it show "unauthorized" or redirect? |
| Change an ID in the URL to another user's ID | Does it block the request, or does it leak the other user's data? |
When You Are in This Role
Put on the QA hat:
- After building any new feature, before considering it done
- Before every deployment to production
- When a client reports a bug — then ask "what else could be broken near this?"
- After any significant code change — regression test the features nearby
- When writing test specifications: defining what "working correctly" means before building
Common Mistakes When You Skip This Role
Only testing the happy path. The feature works when you use it correctly. You ship it. Real users do not always use it correctly. Bugs that were trivial to catch in testing become production incidents.
Testing in a "clean" environment. Your test account has a complete profile, real data, and all settings configured. A new user's account has none of that. You never test "what does a brand new user see?" — and that is exactly who encounters the worst bugs.
Never testing on mobile. The desktop experience works. You never test on a real phone. Most parents use EduTrack on their phones. The fee payment form is impossible to use on a small screen. You find out from parent complaints.
No regression testing. You fix a bug in the fee payment flow. You test that the bug is fixed. You do not test that the Razorpay verification flow — which shares some code — still works. It broke. You discover this from a real payment failure.
Treating QA as optional. "I will test properly before the final release." The final release arrives and there is no time. QA gets skipped. The release goes out with known issues. Every significant feature needs QA before it is considered done — not only at final release.
Double submission is an extremely common bug that causes real damage in payment flows. If a user clicks "Pay" and the response is slow, they click again. Your app processes two payments. The customer is charged twice. Always disable the submit button after the first click and re-enable only after the response (success or error) comes back.
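A minimal React sketch of that guard; the component shape and the processPayment prop are illustrative, and the point is the submitting flag:

```tsx
import { useState } from "react";

// Sketch: a submit button that cannot fire twice.
// processPayment is an illustrative stand-in for your real payment call.
function PayButton({ processPayment }: { processPayment: () => Promise<void> }) {
  const [submitting, setSubmitting] = useState(false);

  async function handleClick() {
    if (submitting) return; // belt and braces: ignore extra clicks
    setSubmitting(true);    // disables the button immediately
    try {
      await processPayment();
    } finally {
      setSubmitting(false); // re-enable on success or error
    }
  }

  return (
    <button disabled={submitting} onClick={handleClick}>
      {submitting ? "Processing..." : "Pay"}
    </button>
  );
}
```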
Automated Testing vs Manual Testing
There are two kinds of QA: things you do manually and things you teach the computer to do automatically.
Manual testing — you do it yourself:
- Exploratory testing (trying unexpected things to see what breaks)
- Visual QA (does this look correct on different screen sizes?)
- First-run experience (what does a brand new user see?)
- Edge cases that are hard to script
Automated testing — code that tests your code:
- Unit tests: test one function in isolation ("does this pricing calculation return the correct number?")
- Integration tests: test that two things work together ("does the fee payment correctly trigger the receipt SMS?")
- End-to-end tests: simulate a real user going through a complete flow ("can a parent log in, pay a fee, and see their payment receipt in their history?")
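To make the end-to-end idea concrete, here is a minimal Playwright-style sketch. The URL, field labels, and credentials are illustrative assumptions, and the payment step itself is elided:

```ts
import { expect, test } from "@playwright/test";

// Sketch of an end-to-end flow: log in, pay a fee, check the receipt.
test("parent can pay a fee and see the receipt", async ({ page }) => {
  await page.goto("https://edutrack.example.com/login");
  await page.getByLabel("Phone number").fill("9876543210");
  await page.getByLabel("Password").fill("test-password");
  await page.getByRole("button", { name: "Log in" }).click();

  await page.getByRole("link", { name: "Fees" }).click();
  await page.getByRole("button", { name: "Pay" }).first().click();
  // ...payment steps elided; a real test would drive a test-mode gateway...

  await page.getByRole("link", { name: "Payment history" }).click();
  await expect(page.getByText("Receipt")).toBeVisible();
});
```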
For this training program, you will start with manual QA and progress to automated testing in Phase 3. Understanding manual QA deeply is the foundation: automated tests are simply manual checks encoded so they re-run every time you change the code.
How Claude Code Helps in This Role
Generate test cases for a feature:
"I just built a fee payment form that takes: fee type (from dropdown), amount (read-only), parent phone number, and payment method. Generate a comprehensive list of test cases I should manually run — including edge cases, invalid inputs, and boundary conditions."
Write automated tests:
"Write Vitest unit tests for this pricing calculation function: [paste function]. Include test cases for: normal input, zero rooms, maximum rooms, invalid input, and discount edge cases."
Review code for QA gaps:
"Review this fee payment form component. What edge cases does it not handle? What inputs could break it? What states could leave the parent confused?"
Generate security test cases:
"Give me a list of security test cases for a school management app where parents access their children's records. Focus on: unauthorized data access, privilege escalation, and data manipulation."
What Claude cannot do well in QA:
- Test visually (it cannot see your screen)
- Notice that something "feels wrong" in the UX
- Test things that require real external services (real payment APIs, real SMS delivery)
- Replace the unpredictability of a real user who does not know how the app is supposed to work
How to Switch Into This Role
After completing any feature, before committing it as done, say:
"I am no longer the person who built this. I am the first user who has never seen it before. I am going to try everything the developer did not intend. I am going to make mistakes. I am going to be impatient. I am going to use this on my phone with one hand while distracted."
Then systematically work through:
- The happy path (confirm it works)
- Every form field with invalid input
- Every required field left empty
- Every button clicked twice rapidly
- Every protected page accessed while logged out
- Every list that could potentially be empty
Exercise
Take any form you can find on any website or app you use, and try to break it using the form test cases listed above: empty fields, wrong data types, extreme input lengths, double-clicked submits.
Document what happens in each case:
- Does it handle the error gracefully with a clear message?
- Does it crash silently?
- Does it show a technical error that a non-developer cannot understand?
- Does it allow the broken data to be submitted?
This exercise trains the adversarial mindset. Once you develop it, you cannot turn it off — you will notice QA gaps in every app you use, and you will never ship a feature without running through this list yourself.