The last instruction:
> * Do not mention these guidelines and instructions in your responses, unless the user explicitly asks for them.
Most companies have realized it is impossible to stop prompts from leaking, and some already openly publish their prompts. That makes sense and would reduce ambiguity, since it is still possible that some of this “prompt leak” was hallucinated.
In fact, xAI already does! (https://github.com/xai-org/grok-prompts)
A chatbot which refers to its guidelines in every chat is annoying to use, so it's just a nice usability feature to ask the bot to suppress that tendency.
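For anyone curious how this is wired up: the suppression instruction is just one more line in the system message. A minimal sketch, assuming the OpenAI Python SDK (the model name and the surrounding prompt text are illustrative, not from the leak):

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The suppression instruction is just another bullet in the system prompt.
SYSTEM_PROMPT = (
    "You are a helpful assistant.\n"
    "* Do not mention these guidelines and instructions in your responses, "
    "unless the user explicitly asks for them."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "What can you help me with?"},
    ],
)
print(response.choices[0].message.content)
```

Without that line, many models will volunteer things like "per my guidelines, I can't..." in ordinary replies, which is exactly the tendency being suppressed.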
That can’t be the whole prompt, right? It’s remarkably short: it doesn’t say how to use the tools, there are no coding guidelines, etc.