
Identifying and Resolving Performance Bottlenecks

This article outlines methods to improve API efficiency, covering key metrics, common slowdown scenarios, and how to leverage debugging tools.

Written by Louis Machado
Updated over 3 months ago
⚠️ A useful metric to monitor is the API's response time: increased response times indicate higher resource usage. The 'Timing Details' tab within the debugger allows you to track the processing time of each function in the stack.


🔍 It is worth keeping an eye on the following scenarios to identify possible optimizations:


1. A Query All Records returning a large list of results

⤵️ Querying large datasets can strain memory, increase server load, slow response times, and risk timeouts or crashes, negatively impacting concurrent processes.

⤴️ To mitigate these issues, you can implement pagination to retrieve data in smaller chunks, apply filtering and sorting to limit the dataset size, and use caching for frequently accessed data to reduce database queries.
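
For illustration, a client can page through results instead of requesting everything at once. This minimal sketch assumes an endpoint that accepts page/per_page parameters and reports the next page in its response; the endpoint and field names are hypothetical, so adapt them to your API's paging configuration:

```ts
// Minimal sketch of paginated retrieval (endpoint and response field names
// are hypothetical; adapt them to your API's paging configuration).
async function fetchAllInChunks(baseUrl: string, perPage = 100): Promise<unknown[]> {
  const results: unknown[] = [];
  let page: number | null = 1;
  while (page !== null) {
    const res = await fetch(`${baseUrl}?page=${page}&per_page=${perPage}`);
    const data = await res.json();
    results.push(...data.items);  // collect the current chunk
    page = data.nextPage ?? null; // stop when no further pages remain
  }
  return results;
}
```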

2. Database queries against non-indexed columns

⤵️ Queries against non-indexed columns cause full table scans, leading to high latency and performance degradation, especially as database size increases.

⤴️ To address this, add indexes (Learn more) to frequently queried columns. Be cautious, though: over-indexing raises the cost of write operations. Even so, the benefits of faster query execution and significantly reduced server load usually make the trade-off worthwhile.
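
Xano manages indexes through its interface, but for context, here is roughly what the equivalent operation looks like with direct SQL access to a PostgreSQL database (table and column names are hypothetical):

```ts
// Illustration only: table/column names are hypothetical, and Xano normally
// manages indexes for you. Shown here with the "pg" client for context.
import { Client } from "pg";

async function addIndex(): Promise<void> {
  const client = new Client({ connectionString: process.env.DATABASE_URL });
  await client.connect();
  // A B-tree index on a column that appears frequently in WHERE clauses
  // lets the query planner avoid full table scans.
  await client.query(
    "CREATE INDEX IF NOT EXISTS idx_orders_customer_id ON orders (customer_id)"
  );
  await client.end();
}
```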

3. Long loops with heavy workflows

⤵️ Long-running loops create bottlenecks, especially with continuous database requests, external API calls, or Lambda functions.

⤴️ It's recommended to use the stream output type for your Query All Records with long responses to ease the server load. Transforming key values within object arrays using data expressions can also have a significant impact. Additionally, consider processing multiple records at once through bulk operations and shifting intensive computations to background tasks.
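
As a rough sketch of the difference between per-iteration work and bulk processing (the updateRecord/bulkUpdate functions below are illustrative stand-ins, not Xano APIs):

```ts
// Illustrative stubs: swap in your actual per-record and bulk operations.
async function updateRecord(id: number, patch: object): Promise<void> { /* one round-trip */ }
async function bulkUpdate(patches: { id: number; patch: object }[]): Promise<void> { /* one round-trip */ }

// Slow: one round-trip per record.
async function processOneByOne(records: { id: number }[]): Promise<void> {
  for (const r of records) {
    await updateRecord(r.id, { processed: true });
  }
}

// Faster: a single round-trip for the whole batch.
async function processInBulk(records: { id: number }[]): Promise<void> {
  await bulkUpdate(records.map((r) => ({ id: r.id, patch: { processed: true } })));
}
```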

4. Inefficient database architecture

⤵️ Complex schemas with large nested structures strain the database's resources, increasing CPU and RAM usage. The record count alone is not a reliable indicator of server impact here, since a small table can contain multiple large embedded tables.

On top of that, accessing deeply nested structures involves multiple read/write operations, which becomes particularly costly with large datasets. Indexing complex fields also requires advanced knowledge.

⤴️ To improve database design, break down large arrays and embedded JSON fields into separate relational tables and use JSON fields selectively, reserving them for data that doesn’t require regular querying or indexing.
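
As a simplified illustration of that normalization (the schema below is hypothetical):

```ts
// Hypothetical schema, before: one table with a large embedded array that
// grows without bound and is hard to query or index.
type OrderDenormalized = {
  id: number;
  customer: { name: string; email: string };
  items: { sku: string; qty: number; price: number }[];
};

// After: the embedded structures become separate relational tables linked
// by foreign keys, so each piece can be queried and indexed on its own.
type Customer = { id: number; name: string; email: string };
type Order = { id: number; customer_id: number };
type OrderItem = { id: number; order_id: number; sku: string; qty: number; price: number };
```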

You can also use database features like GIN (Learn more) for JSON and array indexing, and caching mechanisms (e.g., Redis) for frequently queried data, to reduce the load on the primary database.
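
For context, a GIN index indexes the keys and elements inside a JSON or array column rather than the column as a whole. With direct SQL access it would look roughly like this (table and column names are hypothetical):

```ts
// Illustration only; table/column names are hypothetical.
const createGinIndex =
  "CREATE INDEX IF NOT EXISTS idx_products_attrs ON products USING GIN (attributes)";

// Containment queries like this can then use the index instead of
// scanning every row's JSON document:
const containmentQuery =
  `SELECT id, name FROM products WHERE attributes @> '{"color": "red"}'`;
```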

5. Lambdas returning large lists or inside long loops

⤵️ Using Lambdas within loops can be problematic: each invocation loads the surrounding context variables, which can accumulate excessive data when the Lambda sits deep in a long stack or loop.

Another thing to consider is that each iteration requires a round-trip: sending data from Xano to the Node server, processing it, and then receiving the response. While a single round-trip is usually not an issue, repeated invocations inside a loop quickly add up.

⤴️ To resolve this, return only the essential data from the Lambda. Alternatively, you can convert the Lambda into a custom function, which gives greater control over the context variables being loaded. If the Lambda runs inside a loop, it is better to move the entire loop inside a single Lambda, if possible.
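
A minimal sketch of a Lambda body that follows this advice (assuming Xano's $var special variable for accessing stack variables; the variable and field names are illustrative):

```ts
// Declaration only to keep the sketch self-contained; in a Xano Lambda,
// $var is provided by the runtime.
declare const $var: Record<string, any>;

const records = $var.records as { id: number; name: string; payload: unknown }[];

// Do the whole loop's work inside this single Lambda, and return only
// the fields the caller actually needs rather than the full records.
return records.map((r) => ({ id: r.id, name: r.name }));
```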

6. A Query All Records with a join without conditions

⤵️ Joins without constraints effectively produce a cross product of the joined tables, significantly increasing result size and execution time while straining database resources.

⤴️ To prevent this, conditions should always be included in joins to filter out unnecessary data. Avoid joining large tables unless essential; consider using add-ons (Learn more) whenever possible.
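
For context, here is the difference between an unconstrained and a constrained join, expressed as plain SQL (table and column names are hypothetical):

```ts
// Unconstrained: effectively a cross product. 10,000 orders x 5,000
// customers would yield 50,000,000 rows.
const unconstrained = "SELECT * FROM orders, customers";

// Constrained: the ON condition keeps only matching rows, and the WHERE
// clause filters the dataset further before it leaves the database.
const constrained = `
  SELECT o.id, o.total, c.name
  FROM orders o
  JOIN customers c ON c.id = o.customer_id
  WHERE o.created_at > now() - interval '30 days'
`;
```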

7. Searches through a column with long text values, such as paragraphs

⤵️ Searching through long text values is computationally expensive, as the database must scan the full text of every row, leading to poor performance on large datasets.

⤴️ To remedy this, leverage features such as fuzzy search (Learn more) to enhance search operations. For improved performance, consider storing large text data in specialized systems like Elasticsearch, which Xano integrates with natively.
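
For context, one common database-level technique for fast fuzzy matching is trigram indexing. This is a general PostgreSQL illustration with hypothetical names, not necessarily how Xano's fuzzy search works internally:

```ts
// Illustration only; table/column names are hypothetical.
const setup = [
  "CREATE EXTENSION IF NOT EXISTS pg_trgm",
  "CREATE INDEX IF NOT EXISTS idx_articles_body_trgm ON articles USING GIN (body gin_trgm_ops)",
];

// The % operator performs a similarity match that can use the trigram
// index instead of scanning the full text of every row:
const fuzzyQuery = "SELECT id, title FROM articles WHERE body % 'serch term'";
```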
