Safety Guidelines

We employ a multi-tiered approach to control the behavior of our language model, as outlined below:

1. Training Data:

Our language model’s training data is carefully curated, with a primary focus on excluding topics that contradict our established guidelines.

2. RoBERTa Model Classification:

Every model response and user message is classified by an additional open-source model (RoBERTa) to identify potential violations of our guidelines. When suspicious content is detected, the bot promptly returns a pre-prepared safe reply and avoids further engagement with the flagged topic.
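The screening flow described above can be sketched as follows. This is a minimal illustration, not the production system: the real classifier is a RoBERTa model, so the `classify` function here is a hypothetical stand-in, and the threshold, flagged terms, and safe response are illustrative assumptions.

```python
# Minimal sketch of the moderation gate: both the user message and the
# model's reply are screened, and a pre-prepared safe response is returned
# when either side is flagged. `classify` is a hypothetical stand-in for
# the RoBERTa violation classifier; the threshold is an assumed value.

SAFE_RESPONSE = "I'm sorry, but I can't discuss that topic."
THRESHOLD = 0.5  # assumed cut-off; the production value is not specified


def classify(text: str) -> float:
    """Hypothetical stand-in returning a violation score in [0.0, 1.0]."""
    flagged_terms = {"violence", "weapon"}  # illustrative terms only
    return 1.0 if any(term in text.lower() for term in flagged_terms) else 0.0


def moderate(message: str, generate_reply) -> str:
    """Screen the incoming message and the outgoing reply."""
    if classify(message) >= THRESHOLD:
        return SAFE_RESPONSE  # incoming message flagged
    reply = generate_reply(message)
    if classify(reply) >= THRESHOLD:
        return SAFE_RESPONSE  # outgoing reply flagged
    return reply
```

Screening both directions matters: a benign user message can still elicit a reply that violates the guidelines, so the gate runs on the model's output as well.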

3. Prohibited Topics:

We maintain a comprehensive list of prohibited topics that unequivocally contradict our guidelines. These include, but are not limited to:

  • Child Exploitation/Pedophilia
  • Sexual Exploitation and Human Trafficking
  • Suicidal Ideation
  • Self-Harming Behaviors
  • Zoophilia
  • Political Opinions
  • Religious and Spiritual Beliefs
  • Promotion of Extremism/Terrorism or Radical Groups
  • Racial, Gender, or Sexual Discrimination
  • Necrophilia
  • Solicitation of Criminal Activity
  • Child Labor Exploitation
  • Medical Advice (unqualified)
  • Breach of Confidentiality (sharing personal information)
  • Cannibalism Discussion
  • Illegal Weapons Promotion
  • Financial Advisory

Our bot configurations ensure that under no circumstances can the models generate content involving any item on this list. If suspicious content is detected, the bot responds with a pre-prepared safe message and disengages from the topic.
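A minimal sketch of how such a gate might behave, assuming simple keyword matching against an abbreviated form of the list above; the production list, matching logic, and safe message are richer than this, and all names here are illustrative:

```python
# Hedged sketch: keyword screening against an abbreviated prohibited-topic
# list, with per-session disengagement once a topic has been flagged.

PROHIBITED_TOPICS = {
    "medical advice", "financial advice", "extremism", "self-harm",
}
SAFE_MESSAGE = "I'm not able to discuss that. Let's talk about something else."


class Session:
    """Tracks topics flagged in a conversation so the bot keeps avoiding them."""

    def __init__(self):
        self.flagged = set()

    def screen(self, text):
        """Return the safe message if a prohibited topic appears, else None."""
        lowered = text.lower()
        hits = {topic for topic in PROHIBITED_TOPICS if topic in lowered}
        if hits:
            self.flagged |= hits  # disengage from these topics going forward
            return SAFE_MESSAGE
        return None
```

Keeping the flagged topics on the session object is what lets the bot disengage for the remainder of the conversation rather than re-evaluating each message in isolation.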

Furthermore, our training data and prohibited-topic lists are continuously updated based on analysis of suspicious dialogues. Our compliance officers review interactions with the bots, addressing the most challenging and problematic cases.

We maintain a zero-tolerance stance when it comes to inappropriate content.

Media Content:

1. Image Transmission:

Images shared via the bot are not generated in response to specific user requests. The system refrains from creating deepfake content and exclusively transmits images sourced from predefined folders.

2. Content Origin and Agreement:

All photo and video content conveyed through our bot is produced by our team. We have established agreements with the individuals featured in the photos, ensuring consent and ethical practices.

3. AI Algorithm Oversight:

Image selection and delivery are handled by our AI algorithm, which analyzes the ongoing chat context to identify appropriate moments for sending an image. There is no provision for sending arbitrary or random material, which ensures a controlled and responsible media-transmission process.
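The selection step can be sketched roughly as below: images come only from predefined folders, and the choice is driven by the chat context rather than by user requests. The folder names, keyword heuristic, and file layout are all illustrative assumptions, not the production logic.

```python
# Hedged sketch of context-driven image selection from predefined folders.
# No image is ever generated or fetched from outside these folders; when no
# folder matches the context, nothing is sent at all.

from pathlib import Path

IMAGE_FOLDERS = {  # hypothetical predefined folders
    "greeting": Path("media/greetings"),
    "outdoors": Path("media/outdoors"),
}


def pick_folder(chat_context: str):
    """Map recent chat context to one predefined folder, or None."""
    words = set(chat_context.lower().split())
    if words & {"hello", "hi"}:
        return IMAGE_FOLDERS["greeting"]
    if words & {"hiking", "beach"}:
        return IMAGE_FOLDERS["outdoors"]
    return None  # no opportune moment: send nothing


def select_image(chat_context: str):
    """Return a path from the matching folder; never generate content."""
    folder = pick_folder(chat_context)
    if folder is None:
        return None
    candidates = sorted(folder.glob("*.jpg"))
    return candidates[0] if candidates else None
```

The key property is that `select_image` can only ever return a path inside one of the predefined folders, or nothing, which mirrors the "no arbitrary material" guarantee stated above.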

Reporting Mechanism:

To help users maintain a safe environment, we have implemented a user reporting mechanism. If you encounter any content that raises concerns or deviates from our guidelines, please use our contextual report button for immediate attention. Your feedback is invaluable and contributes to the continuous improvement of our language model’s safety features.