AI Ethics
J12 Ventures AB (“J12”) is a venture capital firm investing at the earliest stages in companies building across the structural layers of the data and AI stack. This includes foundational software enabling data flow, model development, deployment, and performance, as well as advanced applications leveraging AI technologies to transform critical enterprise workflows.
As a firm operating at the frontier of technological innovation and recognising the centrality of AI and data infrastructure within our portfolio, J12 is committed to supporting the responsible development and deployment of AI systems. J12 believes that systems built with ethical clarity, regulatory alignment, and defensible design will form the foundation of the most impactful companies of the coming decades.
This AI Ethics Statement outlines the principles J12 applies to its investment process, portfolio engagement, and internal operational governance. It reflects our commitment to promoting safe, ethical, and legally compliant AI, and provides a framework through which technologies are evaluated in the rapidly evolving regulatory and technical landscape.
1. Scope and Applicability
This statement applies to J12's operations and to the evaluation, monitoring, and engagement with portfolio companies that:
Build foundational infrastructure for AI model development, deployment, and scaling, including data infrastructure, developer tooling, MLOps, and advanced compute
Develop or deploy AI applications leveraging machine learning, computer vision, generative AI, or agentic systems
Operate in domains where AI materially influences decisions, resource allocation, performance, or user experience and outcomes
These companies are typically defined by strong technical execution, deep domain knowledge, defensible data architecture, and opinionated product design: characteristics that demand ethical clarity and regulatory alignment.
The principles outlined herein are advisory in nature but are actively applied in our investment process, operational decisions, and portfolio support. They are subject to revision as applicable legislation, risk frameworks, and industry standards evolve.
2. Ethical Principles in AI Evaluation
J12 applies a structured framework of AI Ethics principles, from the due diligence process through ongoing portfolio support. These principles draw on academic and regulatory standards, including the OECD Principles on AI, the EU AI Act, and domain-specific best practices.
a. Privacy and Data Governance
AI systems should maintain transparent documentation of how data is collected, processed, and stored. Effective privacy governance and responsible practices include informed consent, minimisation of unnecessary data, clear user disclosures, and secure access control. Companies are expected to incorporate privacy considerations at the architectural level.
b. Fairness and Non‑Discrimination
Companies should proactively address algorithmic bias and representational integrity through dataset design, model evaluation, and bias mitigation. Continuous auditing and the use of de-biasing mechanisms are encouraged; particular scrutiny is applied to systems with material downstream impact on individual end users or institutions.
c. Safety, Robustness, and Human Oversight
J12 assesses whether systems are tested for unintended behaviour and operational resilience. Responsible practice includes transparent model lifecycle documentation, covering training data, evaluation metrics, and versioning. AI systems classified as high-risk under applicable frameworks, or those with significant downstream impact, must retain human oversight and clearly assigned accountability for AI system outputs.
3. Compliance with the EU Artificial Intelligence Act
The EU Artificial Intelligence Act, in force since 1 August 2024, introduces a legally binding, risk-based regulatory framework for AI systems within the European Union. Core obligations include:
Prohibited practices and uses (effective 2 February 2025), including social scoring and exploitative biometric surveillance
High-risk system requirements (phased implementation from 2 August 2025 to 2 August 2026), including mandatory risk management, documentation, and conformity assessments
Registration and disclosure requirements for general-purpose AI systems and foundation models
J12 incorporates the AI Act into its investment evaluation process and portfolio support by:
Excluding investment in systems classified as “unacceptable risk” under Article 5 of the Act
Assessing the risk tier of applicable AI systems during the due diligence process in accordance with the tiered structure of the AI Act
Supporting portfolio companies in complying with applicable documentation, relevant registration, and audit requirements
Maintaining internal records of risk classification and ethics review outcomes
4. Portfolio Monitoring and Engagement
J12 monitors the AI-related practices of relevant portfolio companies on an ongoing basis. Where AI systems materially influence products or workflows, companies may be expected to:
Classify and disclose AI systems in line with applicable legal frameworks
Document internal processes for ensuring model safety, fairness, and auditability
Report material changes in system use, architecture, or data handling that may affect compliance
Integrate governance considerations into operational and board-level reporting, particularly where systems are deployed in regulated or high-risk domains
Support may include regulatory updates, specialist introductions, and access to peer frameworks for managing compliance and navigating the rapidly shifting AI governance landscape.
5. Engagement Strategies
J12 applies a range of ethical and compliance strategies depending on the maturity, domain, and AI intensity of the company:
Positive Selection: prioritising companies that demonstrate strong intentionality and clarity around responsible AI development and design
Constructive Engagement: maintaining open dialogue on compliance, ethical risk, system integrity, and design decisions
Exclusion: declining investment in companies whose models rely on AI systems that pose material legal, social, or reputational risk
Governance Participation: using board or observer positions (where applicable) to support responsible ethical scaling, transparency practices, and system governance
6. Continuous Review and Commitment
J12 recognises that AI governance is dynamic and must evolve in parallel with regulatory developments and advances in technical capabilities. Accordingly, J12 commits to adapting its internal processes and policies, and to:
Periodically reviewing this Statement in line with legal, technical, and ethical developments
Embedding AI-specific governance, compliance, and ethics checks across the investment lifecycle
Championing product design practices and system governance that build long-term trust, regulatory compliance, and sustainable value creation
J12 considers AI to be not only a transformative technological domain, but a domain in which responsible innovation is a prerequisite for institutional credibility, defensibility, and long-term impact.
© 2025 J12 Ventures AB. All Rights Reserved.