1. Provide as much context as possible
All AI applications depend heavily on the quality and precision of their input, and getcrux is no different. Our engine relies on you to provide as much context as possible so that we can delight you with the output. To give you some examples:
| This may be insufficient | This works better |
| --- | --- |
| Show me the top insights | Show me the top insights focusing on creatives, audiences, funnel stage, and product categories. |
| Where should I increase my budget? | I want to spend $10,000 more over the next 4 weeks. Create a spending plan. |
2. Make sure the right filters are selected
Sometimes you might ask a question about a previous period while the date range selected (at the top right of the chat window) covers a more recent duration. Since the LLM only receives data from the period you select (to keep noise to a minimum), getcrux will be unable to answer.
For example, if you ask getcrux about your performance over the last year while only a 7-day time interval is selected, getcrux will be unable to answer.
3. Start a new chat when the context changes
Just like humans get confused by information overload, so do LLMs.
We do have a system in place that remembers relevant information from your previous chat and selectively uses it when you ask a follow-up question, but this has its limits.
So whenever you want to pivot to a new conversation or an unrelated topic, we suggest starting a new chat from the button at the top left of the chat window.
4. Follow up rigorously for the best response