Discover the World of AI Optimization: Our New Blog Series is Here! 🚀



When interacting with AI/LLM (Large Language Model) systems, the language and style of communication can significantly impact the quality and accuracy of the responses. This documentation provides guidelines on how to effectively communicate with AI models to achieve the best results.

Language guidelines

1. Use clear and concise language

  • Be direct: Clearly state your questions or commands. Avoid using ambiguous or vague language.
  • Example: Instead of saying "Tell me about it," specify what "it" refers to, such as "Tell me about the process of photosynthesis."


  • Keep it simple: Use simple and straightforward language. Complex sentences can sometimes confuse the AI.
  • Example: "Can you tell me what the capital city of the country known for the Eiffel Tower is?" is less clear than "What is the capital of France?"


2. Provide context

  • Context matters: Providing context helps the AI understand the background and specifics of your query.
  • Example: Instead of asking "What is the weather like?", specify the location and time, such as "What is the weather like in New York City today?"


3. Use proper grammar and punctuation

  • Grammar and punctuation: Proper grammar and punctuation help the AI parse and understand your input more accurately.
  • Example: "what are the benefits of a balanced diet" is less clear than "What are the benefits of a balanced diet?"


4. Be specific

  • Specificity: The more specific you are, the better the AI can tailor its response to your needs.
  • Example: Instead of asking "How does it work?", specify what "it" refers to, such as "How does a solar panel work?"


5. Avoid slang and abbreviations

  • Standard language: Use standard language and avoid slang, abbreviations, or jargon that the AI might not understand.
  • Example: Instead of saying "What's the 411 on AI?", say "What information can you provide about AI?"
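The five guidelines above can be sketched as a simple prompt checker. This is a minimal, illustrative example written for this post: the word lists and rules below are hypothetical assumptions, not part of any real library, and guideline 2 (context) is left to human judgment since it cannot be checked mechanically.

```python
import re

# Illustrative word lists -- hypothetical examples, not a real library.
VAGUE_PRONOUNS = {"it", "this", "that"}                       # guidelines 1 and 4
QUESTION_WORDS = {"what", "what's", "how", "why", "who",
                  "when", "where", "can", "is"}               # guideline 3
SLANG = {"411", "lol", "u", "gonna", "wanna"}                 # guideline 5

def review_prompt(prompt: str) -> list[str]:
    """Return a list of warnings for a prompt that breaks the guidelines."""
    warnings = []
    words = re.findall(r"[\w']+", prompt.lower())
    # Guidelines 1 and 4: a bare pronoun usually lacks a clear referent.
    if any(w in VAGUE_PRONOUNS for w in words):
        warnings.append("Vague pronoun: state explicitly what you are asking about.")
    # Guideline 3: a question should end with a question mark.
    if words and words[0] in QUESTION_WORDS and not prompt.rstrip().endswith("?"):
        warnings.append("Question without a question mark: add proper punctuation.")
    # Guideline 5: avoid slang and abbreviations.
    if any(w in SLANG for w in words):
        warnings.append("Slang or abbreviation detected: use standard language.")
    return warnings

print(review_prompt("Tell me about it"))                 # flags the vague "it"
print(review_prompt("What is the capital of France?"))   # no warnings
```

A checker like this is of course far cruder than what any real model does internally; it is only meant to make the guidelines concrete.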


Why AI models might refuse to respond

1. Examples of restricted topics

  • Self-harm: AI models will refuse to provide guidance or information that could be used to harm oneself. This is to prevent any potential encouragement or facilitation of self-injurious behavior.
  • Example: If you ask, "How can I harm myself?", the AI will not respond and may provide an error message indicating the restriction.


  • Violence: AI models will not provide information or guidance on how to commit acts of violence. This is to ensure that the AI is not used to promote or facilitate violent behavior.
  • Example: If you ask, "How can I hurt someone?", the AI will refuse to respond and may indicate that the query is not allowed.


  • Illegal activities: AI models will not provide information or guidance on how to engage in illegal activities, including but not limited to hacking, drug manufacturing, or theft.
  • Example: If you ask, "How can I hack into a computer system?", the AI will refuse to respond.


  • Hate speech and discrimination: AI models will not engage in or promote hate speech, discrimination, or any form of bigotry against individuals or groups based on race, ethnicity, religion, gender, sexual orientation, or other protected characteristics.
  • Example: If you ask, "How can I spread hate speech?", the AI will refuse to respond.


  • Harassment and bullying: AI models will not provide guidance on how to harass, bully, or intimidate others.
  • Example: If you ask, "How can I bully someone online?", the AI will refuse to respond.


  • Sensitive personal information: AI models will not provide or request sensitive personal information, such as social security numbers, credit card details, or personal addresses.
  • Example: If you ask, "What is someone's social security number?", the AI will refuse to respond.


  • Medical advice: AI models are not qualified to provide medical advice, diagnosis, or treatment. Queries related to medical conditions should be directed to a healthcare professional.
  • Example: If you ask, "How can I treat my illness?", the AI will advise you to consult a healthcare professional.


  • Financial advice: AI models are not qualified to provide financial advice, investment strategies, or tax guidance. Queries related to financial decisions should be directed to a financial advisor.
  • Example: If you ask, "How should I invest my money?", the AI will advise you to consult a financial advisor.


  • Explicit content: AI models will not engage in or provide explicit content, including but not limited to sexually explicit material or graphic descriptions of violence.
  • Example: If you ask, "Can you provide explicit content?", the AI will refuse to respond.


Error messages (in Azure AI Foundry)

When an AI model encounters a query that violates its safety and ethical guidelines, it may return an error message. For example, in Azure AI Foundry you might see content-filter flags such as the following, shown as a category with a severity level:

  1. Self-harm (medium)
  2. Violence (medium)
  3. Illegal activities (high)
  4. Hate speech (high)
  5. Harassment (medium)
  6. Sensitive personal information (high)
  7. Medical advice (medium)
  8. Financial advice (medium)
  9. Explicit content (high)

These error messages indicate that the AI has detected content related to restricted topics and has refused to process the request to ensure safety and ethical compliance.
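Client code can react to such flags programmatically. The sketch below is a hypothetical example of a blocking policy over (category, severity) pairs like those listed above; the severity scale and the default threshold are illustrative assumptions, not the actual Azure AI Foundry API.

```python
# Illustrative severity scale -- an assumption for this sketch,
# not the actual Azure AI Foundry API.
SEVERITY_RANK = {"safe": 0, "low": 1, "medium": 2, "high": 3}

def is_blocked(flags: dict[str, str], threshold: str = "medium") -> bool:
    """Block the request if any flagged category meets or exceeds the threshold."""
    limit = SEVERITY_RANK[threshold]
    return any(SEVERITY_RANK[severity] >= limit for severity in flags.values())

print(is_blocked({"violence": "medium", "hate speech": "low"}))  # True
print(is_blocked({"self-harm": "low"}))                          # False
```

Choosing the threshold is a policy decision: a stricter application might block at "low", while a more permissive one might only block at "high".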


An example of a denied answer

Note: the model will not give you direct instructions; you should rephrase your inquiry.


An example of a rephrased inquiry after a denied answer


Did you like our mini blog series on how to communicate effectively with AI/LLM models?

Please leave a like! 😄 🚀