
Downloading & Installing Performance Review Snippet

Using the Metadata API to analyze request history and identify performance blind spots.

Written by Louis Machado
Updated over 3 months ago

In this guide, you'll learn how to determine the threshold of API requests per second that overwhelms your instance and forces it to restart for recovery, and how to quickly assess the average response times of your most requested endpoints so you can prioritize your optimization efforts.

This process is divided into three stages, and its duration depends on the volume of requests from the last 24 hours.

Part 1 - Installing the Snippet and Populating the Environment Variables (2-4 min)

Part 2 - Retrieving the Request History (3-10 min)

Part 3 - Generating Reports (1-5 min)


Part 1 - Installing the Snippet and Populating the Environment Variables

After you click the link above, choose Add to Your Xano Account (1), then log in and select the instance where you want to install it (2).

Then, inside the workspace where you want to install the snippet, click on Marketplace (3), go to Purchased (4), select Understanding Your Plan’s Limits & Monitoring Instance Activity (5), and click Install Snippet (6).

After you install the snippet in the workspace, navigate to 'Settings' (1), select 'Manage' (2) to open the environment variables tab, and paste your instance URL, visible in your browser's address bar, into the 'instance_url' field (3). Make sure to exclude the https:// part.
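If you script this setup, the stripping step above is easy to get wrong. Here is a minimal Python sketch of it; the function name is illustrative and not part of the snippet:

```python
def normalize_instance_url(raw: str) -> str:
    """Strip the scheme and any trailing slash, since the
    instance_url variable expects the bare host only."""
    for prefix in ("https://", "http://"):
        if raw.startswith(prefix):
            raw = raw[len(prefix):]
    return raw.rstrip("/")
```

For example, `normalize_instance_url("https://x8ab-demo.xano.io/")` yields the bare host `x8ab-demo.xano.io` (a made-up instance name).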

The next step is to add the Metadata API key to the environment variables. Start by clicking the gear icon ⚙️ on the instance selection page (1), then navigate to 'Metadata API' (2) and select 'Manage Access Tokens' (3).

Next, create a new token by clicking 'New Access Token' (4), give it a name (5), and click 'Create' (6). Finally, copy the token to your clipboard by clicking the copy icon (7). Note: the token is only visible this one time.

Repeat the process you followed for the instance URL: navigate to 'Settings' (1), select 'Manage' (2), and open the environment variables tab. Then paste the Metadata API key into the 'metadata_key' field (3).


Part 2 - Retrieving the Request History

In addition to the environment variables you set up, the snippet you installed includes other components: a database table called "Request History" (1), an API group named "Analytics" (2), and three API endpoints (3) designed to retrieve the complete request history of the instance along with some analytical insights.


We'll begin by executing the "request_history" endpoint. Ensure that the input payload includes two key pieces of information: the 'Workspace ID' (1), which is an integer, and the 'Branch ID' (2), which is a text string, such as "v2" or "dev". These can be found in the location shown in the image below.


To achieve the best results, include all workspaces and branches that have received incoming requests in the past 24 hours, as they share the same server resources.

In this example, I only receive requests in workspace 1, which has a single branch, v1 (you can leave branch_id empty when the default branch is v1).

{
  "workspace_ids_and_branch_ids": [
    {
      "workspace_id": 1,
      "branch_id": ""
    }
  ]
}

In this other example, I receive API calls across two branches in workspace ID 4 (Live and "dev") and the "staging" branch in workspace ID 6.

{
  "workspace_ids_and_branch_ids": [
    {
      "workspace_id": 4,
      "branch_id": ""
    },
    {
      "workspace_id": 4,
      "branch_id": "dev"
    },
    {
      "workspace_id": 6,
      "branch_id": "staging"
    }
  ]
}
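If you track many workspaces, you can assemble this payload programmatically rather than typing it out. A small Python sketch (the helper name is illustrative):

```python
def build_payload(pairs):
    """Build the request_history input from (workspace_id, branch_id)
    tuples. An empty branch_id means the default branch (e.g. v1)."""
    return {
        "workspace_ids_and_branch_ids": [
            {"workspace_id": wid, "branch_id": bid} for wid, bid in pairs
        ]
    }
```

For example, `build_payload([(4, ""), (4, "dev"), (6, "staging")])` reproduces the second payload above.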

After you set up your input payload (1), click Run (2) and the response will return the number of API requests per workspace.


If you encounter an error, it may be due to the instance being under heavy processing load at the time or because a workspace or branch without requests has been included in the input.
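Since heavy-load errors are usually transient, retrying the call after a short pause often succeeds. A generic retry sketch in Python, assuming you wrap whatever client call you use to hit the endpoint:

```python
import time

def run_with_retry(call, attempts=3, base_delay=2.0):
    """Invoke a callable, retrying with exponential backoff
    (2s, 4s, ...) on any exception; re-raise after the last attempt."""
    for i in range(attempts):
        try:
            return call()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** i))
```

Remember to also remove any workspace or branch without requests from the input payload before retrying, since that case will keep failing no matter how many attempts you make.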



Part 3 - Generating Reports

Average Response Times of the APIs and Number of Requests

To download a CSV file containing data on the average response times and the number of requests for each endpoint accessed in the past 24 hours, navigate to the average_response_times endpoint (1). Click Run (2) without entering any input, then click Download (3) to retrieve the file.
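Once you have the CSV, you can filter it down to the endpoints worth optimizing first. A Python sketch using only the standard library; the column names (`endpoint`, `request_count`, `avg_response_ms`) are assumptions, so adjust them to match the actual file header:

```python
import csv
import io

# Hypothetical sample mirroring the assumed CSV layout.
SAMPLE = """endpoint,request_count,avg_response_ms
/api:v1/orders,5200,1450
/api:v1/users,120,3900
/api:v1/ping,80000,12
"""

def prioritize(csv_text, min_requests=100, max_ms=1000):
    """Return endpoints over both thresholds, ordered by total
    time cost (requests x average latency), highest first."""
    rows = csv.DictReader(io.StringIO(csv_text))
    flagged = [r for r in rows
               if int(r["request_count"]) >= min_requests
               and float(r["avg_response_ms"]) > max_ms]
    flagged.sort(key=lambda r: int(r["request_count"]) * float(r["avg_response_ms"]),
                 reverse=True)
    return [r["endpoint"] for r in flagged]
```

With the sample data, the heavily requested slow endpoint surfaces first, while the fast high-volume one is excluded.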

🚧The most effective approach is to prioritize optimizing highly requested APIs that have poor performance, such as those with response times exceeding 1 second.


🏗️ APIs with very slow response times and a significant volume of requests should also be targeted for optimization, as they can greatly benefit from improvements.



Requests Per Second (RPS)

The number of requests per second (RPS) impacts server performance by increasing CPU and memory usage. High RPS can lead to maxing out resources and causing delays or crashes.

Monitoring the number of requests to your instance provides insights into its processing limits by highlighting peak usage periods, capacity utilization, and potential bottlenecks. It helps identify if the system is nearing its thresholds, requires optimization, or needs scaling to handle increased demand.

The request_per_second endpoint (1) requires two inputs: minimum_rps (2), where you specify the minimum requests per second rate, and the optional timezone_id (3).


The value for minimum_rps should be based on your instance's activity level. For example, you might begin with 10 RPS and adjust up or down depending on whether you're receiving too few or too many relevant results.
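Conceptually, the endpoint is counting requests per one-second bucket and keeping the seconds at or above your threshold. A Python sketch of that idea (not the snippet's internal implementation), assuming request timestamps in epoch milliseconds:

```python
from collections import Counter

def rps_buckets(timestamps_ms, minimum_rps=10):
    """Group request timestamps (epoch ms) into one-second buckets
    and return {second: count} for buckets meeting minimum_rps."""
    counts = Counter(ts // 1000 for ts in timestamps_ms)
    return {sec: n for sec, n in counts.items() if n >= minimum_rps}
```

Raising `minimum_rps` trims quiet seconds from the result, which is exactly the tuning described above: start around 10 and adjust until only the relevant peaks remain.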

The request times default to UTC. If needed, you can specify a different time zone identifier in the timezone_id input, such as "America/New_York" or "Europe/Paris," from this page.
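If you post-process the output yourself, the same UTC-to-local conversion can be done with Python's standard `zoneinfo` module (the helper name is illustrative):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

def to_local(epoch_ms: int, timezone_id: str = "UTC") -> str:
    """Convert an epoch-millisecond timestamp to an ISO 8601
    string in the given IANA time zone."""
    dt = datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc)
    return dt.astimezone(ZoneInfo(timezone_id)).isoformat()
```

Any identifier from the IANA time zone database works here, e.g. "America/New_York" or "Europe/Paris".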

The generated data provides a clear indication of the requests-per-second (RPS) threshold at which your instance begins to experience strain or overload. This insight is essential for proactive resource management, allowing you to scale infrastructure effectively and ensure optimal performance during peak loads.

By identifying and addressing these critical limits, you can maintain system reliability, improve customer satisfaction, and optimize resource allocation to balance performance and cost efficiency.
