Overview: test with Silverstream

Start here

Written by Elisa Seghetti
Updated over a month ago

Silverstream is an early-stage startup based in SF 🌁 We are a small team of four co-founders. Behind every email or chat there's one of us: we really care about your feedback on what we're building, so please share any ideas or suggestions!

1ļøāƒ£ Accept the invitation


Access to the testing platform is currently invite-only. We're keeping numbers low to ensure we can hear feedback from everyone and truly understand how to improve this product.

The invitation will arrive from invitations@silverstream.ai.

Set a new password or create an account by logging in with Google. We currently don't support team or organization accounts: the service is for single users only. Feel free to share the account with a colleague if you both want visibility on the same tests. If multi-account support is a priority, let us know through the Feature request board.

2ļøāƒ£ Download the library


Install the silverriver library via pip. You can always find the most up-to-date beta version at this link. The library also installs the latest releases of Playwright and Chromium, and lets you record a trace of the behavior you'd like to test in an instrumented Chromium session. Note that the upload step will fail if the test wasn't recorded with the latest version of the library.

pip install silverriver==0.1.37b7
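To confirm the install succeeded, you can inspect the package with pip itself (the version shown will match whatever beta you pinned above):

```shell
# Verify that pip can see the installed package;
# prints Name, Version, and its dependencies (including playwright)
pip show silverriver
```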

3ļøāƒ£ Record the test


Now let's record a flow we would like to test:

silverriver record URL [-o OUTPUT] [--upload]
  • URL: The URL of the webpage to trace (must start with http:// or https://). If you encounter an error, check the URL spelling or add "www".

  • -o OUTPUT, --output OUTPUT: The output filename for the trace (default: 'silverriver_trace')

  • --upload: Upload the trace after recording

This command launches a browser session where you can demonstrate the intended behavior. The custom browser captures interactions in real-time. Don't forget to close the browser session to save the trace.
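For example, to record a flow on a staging site, name the trace, and upload it in one step (the URL and output name below are placeholders, using only the flags documented above):

```shell
# Record a trace of a flow on a (hypothetical) staging site,
# save it as 'checkout_trace', and upload it once the browser is closed
silverriver record https://staging.example.com -o checkout_trace --upload
```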

TRACE RECORDING BEST PRACTICES


  • Staging URL vs Production URL

    During the test run, the agent will perform actions on the webpage like a real user. We suggest using the staging/development environment to test the flow, avoiding any impact on production.

  • Authenticated Flow

    If you would like to test flows that are behind authentication (like in a web application), don't forget these two steps:

    1. Create an account for the agent with the same permissions you'd like the tested user to have.

    2. Record the login step during the trace recording.

  • Keep it short

    We recommend keeping the trace recording brief. An ideal flow could include actions like submitting a contact form, adding products to a cart, or adjusting settings. These concise scenarios effectively demonstrate key functionalities.

Please let us know if you have any doubts!

4ļøāƒ£ Upload the test & Check the user story


To add a new test, go back to the dashboard at dashboard.silverstream.ai and click the blue "+ New Test" button in the top right.

Upload the trace file you generated in the previous step. After uploading, click the "Generate test" button to initiate the user story generation.

From the trace, we generate a user story: a set of instructions and context for the agent. You can review and edit the story. Don't forget to click "Let's create this test": this action triggers the generation of a corresponding test sequence for the agent to follow.

5ļøāƒ£ Navigate Test Status


A test will go through several statuses. Let's analyze them together:

  • šŸ•“ Not Ready

    Automatic test generation is in progress. This is a one-time process for each test. It's usually very quick, but feel free to grab a coffee: it might take a few moments to complete.

  • šŸ•“ Ready to run

    The environment is ready and the test can be launched: just click "Run test".

  • Running

    The run is in progress, and the agent is working to resolve the task. You can now view the start date and updated run duration.

  • āœ… Success

    The agent successfully completed the workflow. Everything worked as expected, and the acceptance criteria were met. You can review the logs anytime to see how the agent interacted with your web page.

  • Skipped

    The agent identified instability in completing the task and skipped the test to prevent disruptions. We try to flag upfront when a test is not feasible for the agent, to avoid flaky tests. We recommend asking a human friend for help with this one :)

  • āŒ Failure

    The agent couldn't complete the task and found a bug. This means it's time to investigate the logs to identify what went wrong. See step 7 to understand how to read the logs.

  • āŒ Error

    It's not you, it's us. This is one of the rare cases where the agent encounters an issue during the run. Our team will prioritize this test to understand what caused the agent to crash.

6ļøāƒ£ Run and Rerun


You can run a test as many times as you want by clicking "Run Test". When the test is starting, a popup appears in the bottom-right of your screen.

Currently, tests are only run on demand, but we're working on implementing scheduled runs and triggers. 🕓

At the moment, each run overwrites the previous test results, and logs are not stored historically, so be sure to download the logs after each run. Run history is a work in progress; let us know if these features are high priority for you through the Feature Request board.

7ļøāƒ£ Download and Analyze logs


It's time to see what happened during the run! A few moments after the run, the Logs column status will shift from "Logs not available" to "Download Logs".

Download the log by clicking "Download Logs".

It's a Playwright trace that you can upload to trace.playwright.dev.

If you are not familiar with the Playwright Trace Viewer, here's the complete official documentation and a video tutorial from Playwright (starting at 1:55).
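If you'd rather not upload the trace to the web viewer, Playwright's own CLI can open the same file locally. Since the silverriver package installs Playwright, the command should already be available; the filename below is a placeholder for the log you downloaded:

```shell
# Open a downloaded trace in the local Playwright Trace Viewer
playwright show-trace silverriver_trace.zip
```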

Here's a step-by-step guide:

  1. Timeline Overview At the top, you'll see a timeline showing key events, actions, and any errors during the test. Hover over individual events for a quick snapshot of each action, providing a visual summary of the test flow.

  2. Action Tab

    Explore the list of actions performed in sequence. Each action displays:

    ā€¢ Name of the action (e.g., click, fill, navigate)

    ā€¢ Duration (time taken for each action)

    ā€¢ Selector used for each interaction.

    Click on any action to view more details, including DOM snapshots before and after the action.

  3. Screenshot Panel If screenshot recording was enabled, a filmstrip will appear along the timeline. This feature provides snapshots that correspond to each significant action, helping you visualize changes as the test progresses.

  4. Log and Error Details The Log tab lists all actions sequentially, offering an in-depth view of what occurred at each step. The Errors tab highlights any failed actions, making it easy to locate problems.

  5. Console and Network Information

    The Console tab shows console logs generated during the run, while the Network tab displays network requests made during each action, including timing and responses. This information is essential for diagnosing frontend and backend issues.
