
Prompting Tips

Aura works like a teammate, reasoning based on the direction and context you give it. Well-scoped prompts help it focus, reduce guesswork, and deliver better results.

This guide highlights a few ways to improve your prompts.


Be Specific

Clear directions help Aura focus on the right part of the problem. The more precise your ask, the more actionable the response.

  • Effective: “Analyze my item inventory system and suggest where to add currency tracking”

  • Less Effective: “I need help with my inventory system”


Provide Context

Including relevant details or attaching files helps Aura give more accurate results. Context can also reduce unnecessary project-wide searching.

  • Effective: Drag and drop the weapon base class into Aura, then ask, “My weapon recoil isn’t working. Investigate how shooting mechanics are currently implemented in this class.”

  • Less Effective: “Weapon recoil isn’t working”


Ask for a Type of Output

Telling Aura what kind of response you want helps it match the right level of depth and structure.

  • Effective: “List common inventory edge cases and pitfalls and how to handle each. Format the results into a table.”

  • Less Effective: “What are common inventory edge cases?”

You can ask for step-by-step instructions, comparisons, Mermaid diagrams, or recommendations depending on where you are in the process. If you have rules you’d always like Aura to follow, check out Advanced Settings.


Work in Small Chunks

Aura can be more accurate (less likely to hallucinate) when tasks are broken into smaller chunks.

💡 Avoid trying to one-shot a complex system with Aura.

For example, instead of asking Aura to “Create a simple inventory system,” you can break the problem into smaller pieces that can be tested along the way:

  • Data definitions: “Help me define the item data I need (ID, name, icon, max stack, type) for a simple inventory system and advise whether it should live in Data Assets, Data Tables, or structs.”

  • Inventory logic: “Outline the core functions for an Inventory Component (Add/Remove/Has/GetQuantity) and the edge cases I should handle (overflow, full inventory).”

  • UI layer: “Give me a high-level UI approach for a grid/list inventory that reads from the Inventory Component and refreshes via events (no UI-owned data).”

  • Interaction: “Describe a simple pickup/drop flow (world pickup actor → interact → add to inventory → update/destroy pickup) for the player character.”

  • Persistence (save/load): “What inventory data should go into a Save Game object, and how do I restore those items back into the player’s inventory on load?”
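The inventory-logic step above can be sketched in plain C++. This is a hypothetical, engine-agnostic outline of the Add/Remove/Has/GetQuantity functions and the overflow edge case; real Unreal code would use a `UActorComponent`, `TMap`, and `FName` item IDs rather than `std::` types.

```cpp
#include <algorithm>
#include <string>
#include <unordered_map>

// Engine-agnostic sketch of the Inventory Component outlined above.
// Names and the MaxStack default are illustrative, not Aura's output.
class InventoryComponent {
public:
    // Adds up to Count of an item, clamped to MaxStack; returns how many
    // were actually added (handles the "overflow" edge case).
    int AddItem(const std::string& ItemId, int Count, int MaxStack = 99) {
        int& Current = Items[ItemId];
        int Added = std::min(Count, MaxStack - Current);
        if (Added < 0) Added = 0;
        Current += Added;
        return Added;
    }

    // Removes up to Count; returns how many were actually removed.
    int RemoveItem(const std::string& ItemId, int Count) {
        auto It = Items.find(ItemId);
        if (It == Items.end()) return 0;
        int Removed = std::min(Count, It->second);
        It->second -= Removed;
        if (It->second == 0) Items.erase(It);
        return Removed;
    }

    bool HasItem(const std::string& ItemId) const {
        return Items.count(ItemId) > 0;
    }

    int GetQuantity(const std::string& ItemId) const {
        auto It = Items.find(ItemId);
        return It == Items.end() ? 0 : It->second;
    }

private:
    std::unordered_map<std::string, int> Items;
};
```

Because each function is small and self-contained, you can ask Aura about (and test) one behavior at a time before moving on to the UI and persistence steps.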

Super Mode

Super mode enables models to “reason” about the problem through a method known as Chain-of-Thought. By spending tokens up front on this reasoning, models can improve the accuracy of your answer by up to 30-40%.

Super mode takes more time and costs more while the model “thinks,” but it produces better results.

In general, reserve Super mode for more complex questions that require a lot of analysis and a deep understanding of Unreal and your project context; that is where it delivers the most value.

Reducing Spend

We charge based on the number of input tokens (context) and output tokens (Aura’s answers).

Input tokens tend to be significantly cheaper than output tokens, but Unreal projects also tend to consume a large number of input tokens during the analysis phase.
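To make the trade-off concrete, here is a small sketch of how the two token counts drive spend. The per-million-token rates are invented for the example and are not Aura’s actual pricing; the point is that a context-heavy Unreal request can rack up meaningful cost from input tokens alone, even though each input token is cheaper.

```cpp
// Illustrative only: InputRatePerM and OutputRatePerM are made-up dollar
// rates per million tokens, not Aura's real pricing.
double EstimateCostUSD(long InputTokens, long OutputTokens,
                       double InputRatePerM = 3.0,
                       double OutputRatePerM = 15.0) {
    // Both counts are billed, at different per-token rates.
    return (InputTokens / 1e6) * InputRatePerM +
           (OutputTokens / 1e6) * OutputRatePerM;
}
```

With these assumed rates, an analysis-heavy request that reads 200,000 input tokens and writes 4,000 output tokens costs about $0.66, of which roughly 90% comes from the input side, which is why trimming context matters.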

Provide Context (Again)

Instead of asking Aura to find the file, provide the context yourself. Attaching too many files can hurt both accuracy and cost, so attach only the relevant ones. It also helps to point Aura at specific portions of the code or specific nodes in the Blueprint. Together, these steps can significantly reduce the number of input tokens.

Start New Threads

Although Aura summarizes your conversation so you can keep going, it is often helpful to start new threads for unrelated questions. This helps maintain accuracy and reduces the number of input tokens and cache reads.

Tool Cost

Since output tokens are much more expensive than input tokens, it’s worth considering which tools generate the most LLM output. For example, C++, the Editor Python Agent, and Telos Blueprint all generate incredible assets and scripts, but they also cost a lot because Aura is generating all of that content up front. Learn Aura’s capabilities and use the right tool for the job; a bug fix might be better done without C++ and live coding, for example.

Model Choice

Aura features the latest models from Anthropic, Google, and OpenAI. Be aware that the costs of using them vary widely with their capabilities. In general, the more expensive a model, the better its accuracy.

When you are building a simple Blueprint or system, consider using a cheaper model for faster results and lower spend. In order from most expensive to least:

  • Opus

  • Gemini

  • GPT 5.2

  • Sonnet

  • Haiku

If you are looking for the best balance of capability and cost, we recommend trying Sonnet 4.6.

Built by Ramen VR