Sinaptic® AI

ISO 42001 — AI Management System

Effective: April 2026 · TOV «Sinaptic AI» / Sinaptic AI LLC · Diia.City Resident

1. AI Management System Commitment

TOV «Sinaptic AI» (“Sinaptic”) is committed to establishing, implementing, maintaining, and continually improving an AI Management System (AIMS) aligned with the requirements of ISO/IEC 42001:2023. As a company whose core business is the development and deployment of AI-powered products, we recognize that a structured management system approach to AI is essential for ensuring that our systems are trustworthy, safe, and beneficial.

Our AIMS provides a systematic framework for managing the opportunities and risks associated with AI throughout the entire lifecycle of our products — from conception and design through development, deployment, operation, and decommissioning. It integrates with and complements our existing management systems for information security (ISO 27001), quality (ISO 9001), and document management (ISO 32001).

This commitment is endorsed by senior leadership and reflects our conviction that responsible AI is not merely a compliance requirement but a competitive advantage and a prerequisite for the sustainable growth of AI technology.

2. Responsible AI Development

Sinaptic’s approach to responsible AI development is embedded in our AIMS and reflects the principles articulated in our AI Ethics Policy. Within the AIMS framework, responsible development is operationalized through:

2.1 AI Policy

Sinaptic maintains an AI policy that is appropriate to the purpose of the organization, provides a framework for setting AI objectives, includes a commitment to satisfy applicable requirements, and includes a commitment to continual improvement of the AIMS. The AI policy is communicated to all personnel and made available to relevant interested parties.

2.2 AI Objectives

Measurable AI objectives are established at relevant functions and levels within the organization. These objectives address:

  • Fairness and non-discrimination in AI system outputs.
  • Transparency and explainability of AI-driven decisions.
  • Safety and reliability of AI systems in production environments.
  • Privacy protection in AI data processing.
  • Environmental sustainability of AI operations.
  • Compliance with applicable regulations, particularly the EU AI Act.

2.3 AI Development Lifecycle

Our AIMS defines a structured AI development lifecycle with governance gates at each stage:

  1. Ideation and Feasibility: Assessment of whether AI is the appropriate solution, identification of potential risks and impacts, and preliminary ethical review.
  2. Requirements and Design: Formal documentation of functional and non-functional requirements, including fairness metrics, performance thresholds, and human oversight requirements. Design review by the AI Ethics Committee for high-impact systems.
  3. Data Acquisition and Preparation: Data governance procedures ensuring lawfulness, quality, representativeness, and appropriate documentation of datasets used for training, validation, and testing.
  4. Model Development: Implementation following secure development practices, with bias testing, adversarial robustness testing, and performance benchmarking against defined acceptance criteria.
  5. Verification and Validation: Independent testing to confirm that the AI system meets its specified requirements and is fit for its intended purpose. This includes functional testing, fairness audits, and user acceptance testing.
  6. Deployment: Controlled release with appropriate human oversight, monitoring infrastructure, and rollback capability. Deployment is authorized only after all governance gates have been satisfied.
  7. Operation and Monitoring: Continuous monitoring of system performance, fairness metrics, and safety indicators. Defined thresholds trigger investigation and, where necessary, corrective action.
  8. Retirement: Controlled decommissioning procedures ensuring that data is handled in accordance with retention policies, dependent systems are transitioned, and stakeholders are notified.
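The governance-gate idea behind the lifecycle above can be made concrete as a small check: each stage passes only when evidence exists for every one of its gate criteria, and deployment is authorized only once every gated stage has passed. This is an illustrative sketch only; the stage names mirror the list above, but the criterion strings and function names are invented, not Sinaptic's actual AIMS procedures.

```python
from enum import Enum, auto


class Stage(Enum):
    """The eight lifecycle stages described above."""
    IDEATION = auto()
    REQUIREMENTS = auto()
    DATA_PREPARATION = auto()
    MODEL_DEVELOPMENT = auto()
    VERIFICATION = auto()
    DEPLOYMENT = auto()
    OPERATION = auto()
    RETIREMENT = auto()


# Hypothetical gate criteria per stage; the real criteria live in
# internal AIMS procedures and will differ.
GATE_CRITERIA: dict[Stage, list[str]] = {
    Stage.IDEATION: ["risks_and_impacts_identified", "preliminary_ethics_review"],
    Stage.REQUIREMENTS: ["requirements_documented", "ethics_committee_signoff"],
    Stage.DATA_PREPARATION: ["data_lawfulness_confirmed", "dataset_documented"],
    Stage.MODEL_DEVELOPMENT: ["bias_testing_passed", "robustness_testing_passed"],
    Stage.VERIFICATION: ["independent_testing_passed", "fairness_audit_passed"],
    Stage.DEPLOYMENT: ["human_oversight_in_place", "rollback_capability"],
}


def gate_passed(stage: Stage, evidence: set[str]) -> bool:
    """A stage gate passes only when every required criterion has evidence."""
    return all(c in evidence for c in GATE_CRITERIA.get(stage, []))


def deployment_authorized(evidence_by_stage: dict[Stage, set[str]]) -> bool:
    """Deployment is authorized only after all governance gates are satisfied."""
    return all(gate_passed(s, evidence_by_stage.get(s, set()))
               for s in GATE_CRITERIA)
```

Modeling gates as data rather than code paths keeps the criteria auditable and easy to extend as procedures evolve.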

3. Risk Management for AI Systems

Sinaptic’s AIMS includes a dedicated AI risk management process that addresses risks specific to AI systems, complementing the general information security risk management process defined under our ISO 27001 framework.

3.1 AI-Specific Risk Categories

We systematically assess risks across the following AI-specific categories:

  • Bias and Fairness Risks: Risks that AI models produce discriminatory or inequitable outcomes due to biased training data, algorithmic design, or deployment context.
  • Robustness and Reliability Risks: Risks related to model performance degradation, adversarial attacks, distribution drift, and unexpected edge cases.
  • Transparency and Explainability Risks: Risks that AI-driven decisions cannot be adequately explained to affected parties or auditors.
  • Privacy Risks: Risks of unauthorized inference, data leakage, or re-identification through AI model outputs or behaviors.
  • Safety Risks: Risks that AI systems cause physical, psychological, or financial harm to individuals or organizations.
  • Autonomy and Oversight Risks: Risks arising from excessive automation, insufficient human oversight, or automation bias among operators.
  • Societal and Environmental Risks: Broader risks including impact on employment, democratic processes, and environmental sustainability.

3.2 AI Impact Assessment

Before any new AI system or significant modification is deployed, we conduct an AI Impact Assessment that evaluates:

  • The intended purpose and use context of the AI system.
  • The stakeholders affected by the system, directly and indirectly.
  • Potential negative impacts on fundamental rights, safety, and well-being.
  • The risk classification under the EU AI Act.
  • Required mitigation measures and residual risk acceptance.
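The assessment fields above can be captured as a structured record, so that a deployment gate can check mechanically that the system is not in a prohibited category and that residual risk has been formally accepted. A minimal sketch, assuming hypothetical field and method names (the EU AI Act risk tiers themselves are from Regulation (EU) 2024/1689):

```python
from dataclasses import dataclass
from enum import Enum


class AIActRiskClass(Enum):
    """Risk tiers under the EU AI Act (Regulation (EU) 2024/1689)."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AIImpactAssessment:
    """Illustrative record mirroring the bullets above; names are assumptions."""
    system_name: str
    intended_purpose: str
    stakeholders: list[str]          # directly and indirectly affected parties
    potential_impacts: list[str]     # impacts on rights, safety, well-being
    risk_class: AIActRiskClass
    mitigations: list[str]
    residual_risk_accepted: bool = False

    def clears_gate(self) -> bool:
        """Deployment may proceed only if the system is not in a prohibited
        category and the residual risk has been formally accepted."""
        return (self.risk_class is not AIActRiskClass.UNACCEPTABLE
                and self.residual_risk_accepted)
```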

3.3 Risk Treatment and Monitoring

Identified risks are treated through a combination of technical controls (model design, testing, monitoring), organizational controls (policies, training, oversight procedures), and operational controls (deployment restrictions, human-in-the-loop requirements). Risk treatment effectiveness is monitored continuously and reviewed at least quarterly. The AI risk register is maintained alongside the information security risk register and is subject to the same governance processes.
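A risk register entry of the kind described above, with its three control families and a quarterly review cadence, could be sketched as follows. Field names, control strings, and the exact review interval are illustrative assumptions, not the actual register schema:

```python
from dataclasses import dataclass
from datetime import date, timedelta

# "At least quarterly" approximated as 91 days; an assumption for illustration.
REVIEW_INTERVAL = timedelta(days=91)


@dataclass
class RiskEntry:
    """Illustrative AI risk register entry; names are assumptions."""
    risk_id: str
    category: str                        # e.g. "bias_fairness", "robustness"
    technical_controls: list[str]        # model design, testing, monitoring
    organizational_controls: list[str]   # policies, training, oversight
    operational_controls: list[str]      # deployment restrictions, human-in-the-loop
    last_reviewed: date

    def review_due(self, today: date) -> bool:
        """Flag entries whose quarterly review is overdue."""
        return today - self.last_reviewed > REVIEW_INTERVAL
```

Keeping the AI risk register in the same machine-readable shape as the information security register makes it straightforward to subject both to the same governance processes.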

4. Alignment with the EU AI Act

Our AIMS is designed to facilitate compliance with the EU AI Act (Regulation (EU) 2024/1689). The alignment between our AIMS and the EU AI Act is structured as follows:

  • Risk Management System (Art. 9): AI risk management process with AI-specific risk categories, impact assessments, and continuous monitoring.
  • Data Governance (Art. 10): Data acquisition and preparation governance within the AI development lifecycle.
  • Technical Documentation (Art. 11): Structured documentation at each lifecycle stage, integrated with ISO 32001 document management.
  • Record Keeping (Art. 12): Automated logging and audit trail generation across all AI systems.
  • Transparency (Art. 13): Transparency controls including model cards, decision explanations, and AI interaction disclosure.
  • Human Oversight (Art. 14): Human-in-the-loop workflows, escalation mechanisms, and emergency stop controls.
  • Accuracy, Robustness, and Cybersecurity (Art. 15): Verification and validation processes, adversarial testing, and integration with ISO 27001 security controls.
  • Quality Management (Art. 17): Integrated QMS per ISO 9001, with AI-specific quality procedures within the AIMS.
  • Post-Market Monitoring (Art. 72): Continuous monitoring, performance tracking, incident reporting, and periodic reviews.

The AIMS serves as the organizational backbone for EU AI Act compliance, ensuring that regulatory requirements are met systematically rather than on an ad hoc basis.
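A mapping of this kind can also be maintained as a machine-checkable traceability record, so that a gap (an article with no mapped control) is caught automatically. A minimal sketch: the article numbers follow the mapping above, but the control IDs are invented for illustration:

```python
# Hypothetical traceability map from EU AI Act articles to AIMS controls.
# Article numbers follow the mapping above; control IDs are invented.
AI_ACT_TRACEABILITY: dict[str, list[str]] = {
    "Art. 9":  ["AIMS-RISK-01", "AIMS-RISK-02"],
    "Art. 10": ["AIMS-DATA-01"],
    "Art. 11": ["AIMS-DOC-01"],
    "Art. 12": ["AIMS-LOG-01"],
    "Art. 13": ["AIMS-TRANS-01"],
    "Art. 14": ["AIMS-HITL-01"],
    "Art. 15": ["AIMS-VV-01", "AIMS-SEC-01"],
    "Art. 17": ["AIMS-QMS-01"],
    "Art. 72": ["AIMS-MON-01"],
}


def uncovered_articles(traceability: dict[str, list[str]]) -> list[str]:
    """Articles with no mapped control indicate a compliance gap."""
    return [article for article, controls in traceability.items() if not controls]
```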

5. Scope

The scope of Sinaptic’s AIMS encompasses:

  • All AI Systems: Browser DLP, Sinaptic AI Intent Firewall®, Sinaptic® DROID+, and any AI components used within internal operations or under development.
  • All Lifecycle Stages: From ideation through design, development, testing, deployment, operation, monitoring, and retirement.
  • All Personnel: Employees, contractors, and third parties involved in the development, deployment, or operation of AI systems.
  • All Stakeholders: Clients, end-users, regulators, and any parties affected by the operation of Sinaptic® AI systems.
  • All Data: Training data, validation data, test data, operational data, and output data associated with AI systems.

6. Leadership and Governance

The AIMS is supported by a dedicated governance structure:

  • AI Ethics Committee: A cross-functional body that reviews high-impact AI use cases, adjudicates ethical questions, and provides strategic direction for responsible AI practices.
  • AIMS Manager: A designated individual responsible for the day-to-day operation and continuous improvement of the AIMS.
  • Senior Leadership: Demonstrates commitment through resource allocation, policy approval, participation in management reviews, and establishing the organizational culture necessary for responsible AI.
  • Product Teams: Responsible for implementing AIMS requirements within their development and deployment processes.

7. Implementation Roadmap

Sinaptic is implementing its AIMS through a phased approach:

  1. Phase 1 — Foundation (Completed): Established the AI policy and AI Ethics Committee. Conducted an initial gap analysis against ISO 42001 requirements. Defined the AIMS scope and context. Documented the AI development lifecycle and governance gates.
  2. Phase 2 — Core Implementation (In Progress): Implementing the AI risk management process, including AI-specific risk categories and impact assessment methodology. Deploying AI monitoring and measurement systems. Developing AI-specific competence and training programs. Integrating AIMS with existing ISO 27001, ISO 9001, and ISO 32001 management systems.
  3. Phase 3 — Operationalization: Full operationalization of the AIMS across all products and processes. Internal audits against ISO 42001 requirements. Management review and adjustment of AI objectives and targets. Refinement of processes based on operational experience and audit findings.
  4. Phase 4 — Certification Preparation: Comprehensive readiness assessment. Gap remediation. Pre-certification audit by independent consultants. Documentation finalization.
  5. Phase 5 — External Certification: Engagement of an accredited certification body. Stage 1 and Stage 2 certification audits. Certification achievement and ongoing surveillance.

8. Continuous Improvement

The AIMS is subject to continual improvement through internal audits, management reviews, corrective actions, monitoring of AI-specific performance indicators, and incorporation of evolving best practices and regulatory requirements. We participate actively in the development of international AI governance standards and apply lessons learned from the broader AI community to strengthen our management system.

Request Compliance Information

For questions about our AI Management System or ISO 42001 implementation, contact us at hello@sinaptic.ai.