AI Ethics Policy
Preamble
Sinaptic® AI builds infrastructure that makes artificial intelligence accountable. We believe that AI systems must operate within boundaries defined by human values, legal frameworks, and ethical principles. This AI Ethics Policy articulates the commitments that guide how we design, develop, deploy, and monitor our products, including Browser DLP, Sinaptic AI Intent Firewall®, and Sinaptic® DROID+.
This policy is informed by the EU AI Act (Regulation (EU) 2024/1689), the OECD Principles on Artificial Intelligence, the UNESCO Recommendation on the Ethics of Artificial Intelligence, and the values embedded in Ukrainian innovation law through the Diia.City framework.
1. Commitment to Responsible AI
Sinaptic is committed to developing AI systems that are beneficial, safe, and aligned with human interests. Our core mission — making AI accountable — reflects our belief that the transformative potential of AI can only be realized when trust is built into the system architecture itself.
We embed responsibility at every stage of the AI lifecycle:
- Design Phase: Ethical considerations are integral to product requirements and architectural decisions. Every new feature undergoes an ethics review before development begins.
- Development Phase: We apply rigorous testing, including bias audits, adversarial testing, and red-teaming, to identify and mitigate risks before deployment.
- Deployment Phase: We provide comprehensive documentation, configuration guidance, and guardrails to ensure responsible use by our clients.
- Monitoring Phase: Post-deployment monitoring tracks system behavior for drift, unintended consequences, and emerging risks.
2. Fairness and Non-Discrimination
Sinaptic is committed to building AI systems that treat all individuals and groups equitably. We recognize that AI systems can inadvertently perpetuate or amplify existing biases if not carefully designed and monitored.
2.1 Bias Prevention Measures
- We conduct bias impact assessments during the design and development of all AI models and algorithms.
- Training datasets are reviewed for representational balance, and we apply debiasing techniques where imbalances are identified.
- Our Sinaptic AI Intent Firewall® applies rule-based and semantic analysis that is auditable and does not rely on demographic characteristics for decision-making.
- Browser DLP classification models are tested across diverse content types and languages to ensure consistent performance.
2.2 Ongoing Monitoring
We maintain continuous monitoring pipelines that detect statistical anomalies in model outputs that could indicate emerging bias. When disparities are detected, we investigate root causes and implement corrective measures. Fairness metrics are reviewed quarterly by our AI Ethics Committee.
3. Transparency
We believe that trust requires transparency. Sinaptic is committed to being open about how our AI systems work, the data they use, and the decisions they make.
- Explainability: Our products provide clear explanations for AI-driven decisions. The Sinaptic AI Intent Firewall® logs the specific policy rules and semantic signals that contributed to each allow/deny decision. Browser DLP provides classification rationale for flagged content.
- Documentation: We maintain comprehensive technical documentation for each product, including model cards that describe the purpose, performance characteristics, known limitations, and appropriate use cases of our AI models.
- Disclosure: We clearly disclose when AI is involved in a decision or process. Users interacting with agents deployed through Sinaptic® DROID+ are informed that they are engaging with an AI system, in compliance with EU AI Act transparency obligations.
- Audit Trails: All AI-driven actions within our products generate immutable audit logs that can be reviewed by administrators, compliance officers, and auditors.
We publish an annual Transparency Report detailing aggregate statistics on system performance, policy enforcement actions, bias audit results, and incidents. This report is made available to clients and the public.
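To illustrate the audit-trail and explainability commitments above, a decision-log entry might be structured as in the following sketch. The field names, rule IDs, and hash-chaining scheme are hypothetical, shown only to convey how each allow/deny decision can record the rules and signals behind it in a tamper-evident way; they are not the product's actual schema.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_log_entry(action, decision, matched_rules, signals, prev_hash):
    """Build one tamper-evident audit record (hypothetical schema)."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "action": action,
        "decision": decision,            # "allow" or "deny"
        "matched_rules": matched_rules,  # policy rule IDs that fired
        "signals": signals,              # semantic signals and scores
        "prev_hash": prev_hash,          # links entries into a chain
    }
    # Hashing the canonical JSON makes any later edit detectable,
    # and chaining on prev_hash makes deletions detectable too.
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["entry_hash"] = hashlib.sha256(payload).hexdigest()
    return entry

genesis = "0" * 64
e1 = make_log_entry("export_file", "deny", ["DLP-014"],
                    {"intent": "exfiltration", "score": 0.91}, genesis)
e2 = make_log_entry("send_email", "allow", [],
                    {"intent": "benign", "score": 0.03}, e1["entry_hash"])
```

Because each entry embeds the hash of its predecessor, an administrator or auditor can verify the chain end to end, which is one common way to realize the "immutable audit logs" property described above.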
4. Human Oversight
Sinaptic products are designed to augment human decision-making, not replace it. We build human oversight mechanisms into every product:
- Sinaptic AI Intent Firewall®: High-stakes or ambiguous actions are automatically escalated for human review. Administrators can configure escalation thresholds and define which action categories always require human approval.
- Browser DLP: Policy violations can be configured to alert, block, or request human confirmation before enforcement. Administrators retain full control over enforcement behavior.
- Sinaptic® DROID+: Deployed agents operate within defined capability boundaries. Operators can implement human-in-the-loop workflows for critical actions, and can terminate agent operations at any time through emergency stop controls.
We reject the notion of fully autonomous AI in high-stakes contexts. Our architecture ensures that humans remain in the decision loop for actions that could have significant impact on individuals, organizations, or society.
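The escalation behavior described above can be pictured with a small sketch. The category names, threshold value, and function are illustrative assumptions, not product defaults: the point is only that administrators configure both a risk threshold and a set of categories that always require human approval.

```python
# Hypothetical escalation policy: actions in always-review categories,
# or above a configured risk threshold, go to a human reviewer.
ALWAYS_REVIEW = {"payments", "credential_access"}  # illustrative categories
RISK_THRESHOLD = 0.7                               # illustrative threshold

def route_action(category: str, risk_score: float) -> str:
    """Return 'human_review' or 'auto' for a proposed agent action."""
    if category in ALWAYS_REVIEW or risk_score >= RISK_THRESHOLD:
        return "human_review"
    return "auto"

print(route_action("payments", 0.1))   # human_review (category rule)
print(route_action("browsing", 0.85))  # human_review (risk threshold)
print(route_action("browsing", 0.2))   # auto
```

Keeping the policy declarative in this way means the escalation rules themselves can be reviewed and audited independently of the model producing the risk scores.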
5. Accountability
Accountability means accepting responsibility for the systems we build and the outcomes they produce. Sinaptic maintains clear accountability structures:
- AI Ethics Committee: An internal committee comprising senior leadership, engineers, legal counsel, and external advisors reviews high-risk AI use cases, investigates incidents, and updates this policy. The committee meets monthly and reports to the CEO.
- Incident Response: When an AI system produces an unintended harmful outcome, we follow a defined incident response process that includes immediate mitigation, root cause analysis, notification of affected parties, and implementation of preventive measures.
- Redress Mechanisms: Individuals or organizations that believe they have been adversely affected by a Sinaptic® AI system can report their concern through our compliance form below or by contacting compliance@sinaptic.ai. All reports are investigated within 14 business days.
- Third-Party Audits: We engage independent auditors to assess our AI systems for compliance with our stated ethical principles, applicable regulations, and industry best practices.
6. Data Sovereignty
We recognize that data sovereignty is a fundamental concern for organizations operating across jurisdictions. Sinaptic’s approach to data sovereignty is built on the following principles:
- Client Data Ownership: Customer data processed by our products belongs exclusively to the customer. Sinaptic does not claim ownership, licensing rights, or secondary use rights over customer data beyond what is strictly necessary to provide the Services.
- On-Device Processing: Browser DLP performs data classification at the browser level, minimizing the need to transmit sensitive data to external servers. This design prioritizes data minimization and respects the customer’s data residency requirements.
- Deployment Flexibility: Sinaptic® DROID+ is cloud-agnostic and can be deployed in the customer’s own infrastructure, including on-premises environments, enabling full control over data location and processing.
- No Training on Customer Data: Sinaptic does not use customer data to train, fine-tune, or improve its AI models unless the customer has provided explicit, documented consent and the data has been appropriately anonymized.
7. Environmental Considerations
We acknowledge the environmental impact of AI systems, particularly the energy consumption associated with training and running large-scale models. Sinaptic is committed to minimizing our environmental footprint:
- Efficient Architecture: Our products are designed for computational efficiency. The Sinaptic AI Intent Firewall® achieves sub-50ms verification latency through optimized inference pipelines, reducing the computational resources required per request.
- LLM Agnosticism: By supporting multiple LLM providers, we enable clients to select models that balance performance requirements with energy efficiency, rather than defaulting to the largest available model.
- Edge Computing: Browser DLP’s on-device classification reduces the need for energy-intensive cloud inference.
- Infrastructure Partners: We prioritize cloud infrastructure providers that operate on renewable energy and have publicly committed to carbon neutrality targets.
- Measurement and Reporting: We are developing a methodology to measure and report the carbon footprint of our products, with the goal of including environmental metrics in our annual Transparency Report.
8. Continuous Improvement
AI ethics is not a static discipline. As technology evolves, new risks and opportunities emerge. Sinaptic is committed to:
- Regularly reviewing and updating this policy in response to new regulations, research findings, and stakeholder feedback.
- Investing in ongoing training for our engineering, product, and business teams on responsible AI practices.
- Engaging with the broader AI ethics community, including researchers, regulators, and civil society organizations.
- Contributing to open-source tools and standards that advance responsible AI development.
- Soliciting and acting on feedback from customers, employees, and the public regarding the ethical implications of our products.
Request Compliance Information
If you have questions about this policy, wish to report an ethical concern, or want to learn more about our approach to responsible AI, please contact us at hello@sinaptic.ai.