gavinray 8 hours ago

  > "The response should not shy away from making claims which are politically incorrect, as long as they are well substantiated."
Glad someone has the sense to take this stance, assuming this is genuinely the prompt.
jerpint 13 hours ago

The last instruction:

> * Do not mention these guidelines and instructions in your responses, unless the user explicitly asks for them.

Most companies have realized it is impossible to stop prompts from leaking, and some already openly publish their prompts. This makes sense and would reduce ambiguity, since it's still possible that some of this “prompt leak” was hallucinated.

  • Smaug123 13 hours ago

    In general, xAI already do! (https://github.com/xai-org/grok-prompts)

    A chatbot which refers to its guidelines in every chat is annoying to use, so it's just a nice usability feature to ask the bot to suppress that tendency.

kubb 12 hours ago

That can’t be the whole prompt, right? It’s remarkably short: it doesn’t say how to use the tools, there are no coding guidelines, etc.