
Desktop Automation Testing Take-Home

Build a simple UI automation that validates a mock agent desktop web application. Your automation should create a deterministic test run, open the desktop, validate the rendered UI, and identify at least one defect.

UI Automation · Bug Detection · Deterministic Test Runs

Objective

  1. Create a test run through the backend API.
  2. Open the generated desktop using the returned runId.
  3. Handle the agent status and chat invite flow.
  4. Validate the desktop content against the payload you submitted.
  5. Detect and report at least one bug.
You may use any browser automation framework you prefer, including Playwright, Cypress, Selenium, WebdriverIO, or similar tools.
You can try testgenerator.html to get a feel for the desktop: generate sample runs, then click the desktop URL at the bottom to understand the flow before automating it.

How The Simulation Works

Create A Test Run

Send a POST request with the conversation details to /api/testrun to simulate a chat conversation. The backend returns a unique runId.
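As a sketch, the run can be created with a plain stdlib HTTP call. The base URL and the exact payload fields beyond the documented sample are assumptions; adjust them to wherever the mock backend actually runs.

```python
import json
import urllib.request

# Hypothetical base URL -- change to wherever the mock desktop backend is hosted.
BASE_URL = "http://localhost:3000"

# Minimal payload mirroring the sample request shown in this document.
payload = {
    "interactionInformation": {
        "interactionId": "CHAT-10001",
        "channel": "Chat",
        "authenticationStatus": "Authenticated",
        "customerAccountNumber": "10012",
    },
    "chatTranscript": [
        {"sender": "Customer", "timestamp": "14:31:01",
         "message": "I was charged twice this month."},
    ],
}

def create_test_run(base_url: str = BASE_URL) -> dict:
    """POST the payload to /api/testrun and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/api/testrun",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Keeping the submitted payload in a variable like this also gives you the expected values to validate the rendered desktop against later.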

Open The Desktop

Use the returned runId to open /desktop/{runId}. The desktop renders content for that run.
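A minimal sketch of this step, using Playwright's sync API (any of the frameworks listed above would work). The browser-driving part requires `pip install playwright` plus `playwright install chromium`, and is kept inside the function so the path-building helper stays standalone:

```python
def desktop_path(run_id: str) -> str:
    """Build the desktop route for a run -- path shape taken from this document."""
    return f"/desktop/{run_id}"

def open_desktop(base_url: str, run_id: str) -> str:
    """Open the desktop page for a run and return its title (illustrative sketch)."""
    # Local import so the helper above works without Playwright installed.
    from playwright.sync_api import sync_playwright
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(base_url + desktop_path(run_id))
        title = page.title()
        browser.close()
        return title
```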

Profile Resolution

If the interaction is authenticated, the backend resolves the customer profile automatically from sample account data using customerAccountNumber .
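One way to validate this is to scrape the account number shown on the desktop and compare it with the payload you submitted. The comparison itself is framework-agnostic; only the scraping is. A sketch, assuming you have already extracted the rendered account number as a string:

```python
def profile_matches(submitted: dict, rendered_account_number: str) -> bool:
    """Check that the account number rendered on the desktop matches the
    customerAccountNumber submitted in interactionInformation.
    Only authenticated interactions resolve a profile."""
    info = submitted["interactionInformation"]
    if info.get("authenticationStatus") != "Authenticated":
        return True  # nothing is resolved for unauthenticated interactions
    return info["customerAccountNumber"] == rendered_account_number
```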

Chat Behavior

The submitted chatTranscript appears after chat acceptance. The live chat input can also send new messages during runtime, and an echoed customer message will appear once the agent sends a message, to simulate a live conversation.
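After accepting the chat, you can diff the submitted transcript against the messages actually rendered in the chat window. A small helper, assuming you have scraped the rendered message texts into a list of strings:

```python
def missing_messages(submitted_transcript: list, rendered_messages: list) -> list:
    """Return the submitted transcript entries whose text never appeared in the
    rendered chat window -- an empty list means the transcript rendered fully."""
    rendered = set(rendered_messages)
    return [m for m in submitted_transcript if m["message"] not in rendered]
```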

Sample API Request

{
  "interactionInformation": {
    "interactionId": "CHAT-10001",
    "channel": "Chat",
    "authenticationStatus": "Authenticated",
    "customerAccountNumber": "10012",
    "journeyName": "Billing Support",
    "queueName": "Billing Tier 1",
    "agentDesktopStatus": "Connected",
    "startTime": "2026-03-11T10:30:00Z"
  },
  "chatTranscript": [
    {
      "sender": "Customer",
      "timestamp": "14:31:01",
      "message": "I was charged twice this month."
    },
    {
      "sender": "Bot",
      "timestamp": "14:31:09",
      "message": "I can help with billing issues."
    },
    {
      "sender": "System",
      "timestamp": "14:31:50",
      "message": "Handoff to Billing Tier 1"
    }
  ]
}

The backend response will include a runId, a desktop path, and a creation timestamp.
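A small helper for pulling out the fields your automation needs from that response. The exact key names (`runId`, `desktopPath`) are assumptions based on this description; verify them against a real response:

```python
def parse_run_response(body: dict) -> tuple:
    """Extract the runId and desktop path from the create-run response.
    Falls back to building the path from the runId if the backend omits it."""
    run_id = body["runId"]
    path = body.get("desktopPath", f"/desktop/{run_id}")
    return run_id, path
```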

Useful URLs

What Your Automation Should Cover

Example Bug

The desktop has some defects. For example:

The chat message count badge stops increasing after 35 messages even though additional messages continue to appear in the chat window.
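A defect like this reduces to comparing two numbers your automation can scrape: the badge count and the number of messages actually in the chat window. A minimal check, assuming you have extracted both values:

```python
def badge_mismatch(badge_count: int, rendered_message_count: int) -> bool:
    """Flag the badge defect: returns True when the count badge disagrees with
    the real number of messages in the chat window (e.g. badge stuck at 35
    while 40 messages are visible)."""
    return badge_count != rendered_message_count
```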

You are encouraged to discover and report issues like this as part of your submission.

Some behaviors in this mock desktop may be simplified by design and are not necessarily considered bugs. When evaluating the application, think about how you would expect a normal agent desktop to work. Simulate actions that an agent would do. If you are unsure whether something is a real defect or just a simplified implementation, feel free to include it in your report and explain your reasoning.

Submission Expectations

Evaluation Focus

Feel free to use AI assistant tools as part of the take-home. A simple, well-reasoned solution is preferred over a very large framework.