Responsible AI in practice

What is Responsible AI?

Responsible AI refers to the practice of designing, developing, and deploying artificial intelligence with good intentions, to empower users and society at large. It encompasses fairness, transparency, accountability, privacy, and security. By adhering to these principles, AI technology can be used in a way that is ethical and beneficial for everyone.

What does Responsible AI mean for me?

It means engaging with the chat in a legitimate and lawful way, for the purposes it was intended: obtaining information, getting recommendations to support your customers, or receiving technical support for your daily operations. It excludes any activities that violate laws or regulations, such as fraud, identity theft, or harassment.

What should I do if I see someone else using the chat for harmful purposes?

If you witness or suspect misuse of the chat, please report it to us immediately. We take such reports seriously and will investigate promptly to ensure the integrity and safety of our service.

Can I use the chat to create and share content?

Yes, you can create and share content as long as it is not offensive, discriminatory, or defamatory and does not infringe upon anyone's rights. Content that is in good taste and respectful of others is accepted.

How do I know if my content is offensive or discriminatory?

Content is considered offensive or discriminatory if it promotes hatred, violence, or harm against individuals or groups based on race, ethnicity, religion, gender, sexual orientation, disability, or any other characteristic. When in doubt, it's best to err on the side of caution and choose not to share content that could be interpreted as harmful.

What is malicious content?

Malicious content is content in the form of malware, prompts, or executable code inserted into the chat with the intent to gain access to Personally Identifiable Information (PII), prompts, or infrastructure.

What are the consequences of inserting malicious content into the chat?

Inserting malicious content is strictly prohibited. Doing so can compromise the security and integrity of the service and the privacy of its users. Such actions will lead to immediate suspension or termination of your access to the service and could result in legal action.

What should I do if I accidentally insert a harmful prompt or code into the chat?

If this occurs, please notify us immediately so we can take the necessary steps to mitigate any potential harm. Your promptness in reporting such an incident is crucial for maintaining the security and stability of the service.

Can I conduct any form of testing on the chat to check its security?

No, you must not conduct load testing, penetration testing, or any other form of testing that can impact the service without our express written permission. If you have a legitimate reason for testing, you must contact us at least one week in advance with the details of your proposed test and wait for our written approval.

Are there any specific guidelines or codes of conduct I should follow when using the chat?

Yes, you must adhere to the code of conduct outlined by Microsoft for the Azure OpenAI Service, which includes guidelines for respectful and responsible communication, privacy, and security practices.

What types of data am I prohibited from transmitting to the chat service?

You are prohibited from transmitting any data that is restricted by data protection laws, confidentiality obligations, export restrictions, other statutory provisions, or third-party rights. This includes, but is not limited to, personally identifiable information (PII), client-identifying data (CID), and any personal data that you do not have the right to share.

What are the consequences of sharing prohibited data with the chat?

Sharing prohibited data can result in a breach of data protection laws and may lead to legal consequences for you. It can also result in temporary suspension or termination of your access to the service.

What should I do if I accidentally share sensitive data with the chat?

If you inadvertently share sensitive data, notify us immediately so we can take appropriate measures to address the data breach. Be aware that you are responsible for the data you share, so always exercise caution.

Is it necessary to have a human review the AI-generated content?

Yes, human oversight is crucial when using AI-generated content, particularly for critical tasks or when the content is intended for public dissemination or could have legal implications.

Can I rely on the content generated by the chat to be completely accurate and error-free?

No, you should not expect the content generated by the AI service to be 100% accurate, complete, or error-free. The nature of generative AI means that while it can provide valuable information, its outputs should be treated as potentially fallible and verified accordingly.

Who is responsible for the actions taken based on the content generated by the chat?

The end user is fully responsible for any actions taken based on the content generated by the AI service. It is crucial to review and verify the information to ensure that AI-generated content complies with all applicable laws, as well as internal policies and procedures. This means you may need to review, adjust, modify, or delete the content before it is used or shared.
