AI Policy
Why This Policy Exists
The impact of AI on music is here, and it is a contentious topic. How our cooperative platform defines its own policy and position on AI is a concern our membership holds close to their hearts. Yet we have observed that few good AI policies exist. Many music platforms have developed and published AI policies, but most leave significant ambiguity or create more questions than they answer. We believe it is important for Subvert to have a strong, well-developed AI policy now, at the launch of the Subvert platform: one that states our position clearly, defines its terms with specificity, and spells out enforcement, while retaining the flexibility to adapt over time.
This policy was shaped by an extended, participatory deliberation among Subvert's membership. Between May 2025 and March 2026, 68 members contributed 352 posts to the forum thread "Help Craft Subvert's AI Policy," making it one of the most engaged governance discussions on our forum. At its core, this policy aims to reflect our members' sentiment on how our platform should define an AI policy. Our Artist representatives on the board of directors (Iz and Hannah Lee Benson) have distilled this large, wide-ranging discussion into a workable policy. This document is the result.
We acknowledge that this policy will need to evolve alongside the technology and the cultural norms surrounding AI in creative work. What we are committing to now is a clear set of principles, a transparent process for applying them, and a mechanism for revisiting them as conditions change.
Core Principles
This policy is grounded in three commitments that emerged from member deliberation:
1. Human Creative Authorship Subvert is a platform for human-made art. We believe that the act of creating music (composing, performing, defining, and directing sound) is a fundamentally human endeavor. Our cooperative exists to support the people who do this work, not to distribute the outputs of automated systems. When we say "human-made," we mean that a human being originated the creative content: the artistic choices that make a work what it is.
2. Protecting Creative Labor As a cooperative, Subvert is committed to celebrating and uplifting labor, not replacing it. The dominant generative AI systems were trained on the creative work of millions of artists without their consent, and their deployment threatens to devalue the very labor our cooperative exists to sustain. Our policy reflects a commitment to aligning with artists against the extraction of their work for AI training, and against the displacement of their creative labor by automated generation.
3. Honest Representation Listeners who come to Subvert should be able to trust that the music and art they find here was made by artists. This principle of honest representation underpins both our prohibition on AI-generated content and our approach to transparency around tool use.
A Note on Disclosure and Terminology
During our forum discussion on AI policy, several members raised an important concern: the term "AI" has become so broad as to be nearly meaningless in a production context. Machine learning has been embedded in standard music production tools for over a decade. Requiring artists to label their work as "made with AI" because they used an intelligent mastering assistant would mislead listeners into thinking the work was generated by AI in the way that a Suno or Udio track is. This would actively undermine the goal of honest representation.
For this reason, the policy distinguishes between generative AI (systems that produce creative content from prompts or automated processes) and production tools that incorporate machine learning (mastering, mixing, analysis, and correction tools). Only the former is subject to this policy. Artists are invited, but not required, to share details about their production tools and process through optional credits fields, in the spirit of transparency.
1.1 Scope and Definitions
For the purposes of this policy:
"Generative AI" refers to AI systems designed to produce new creative content, including music, vocals, images, or video, from text prompts, audio prompts, or other automated inputs. Examples include but are not limited to: Suno, Udio, Boomy, Midjourney, DALL-E, Stable Diffusion, Sora, and voice cloning or synthesis systems that generate vocal performances without a human performer.
"AI-assisted production tools" refers to software features that use machine learning to assist with technical production tasks such as mastering, mixing, equalization, pitch correction, beat detection, noise reduction, audio restoration, loudness optimization, or similar functions. These tools analyze, adjust, or enhance audio or visual content but do not originate creative content. Examples include but are not limited to: iZotope Ozone, Logic Pro AI features, LANDR mastering, Ableton warp, smart EQ plugins, and AI-powered noise reduction.
"Algorithmic composition tools" refers to software that uses rule-based, procedural, or non-deep-learning algorithmic methods (such as generative synthesis, cellular automata, Markov chains, or genetic algorithms) to create musical material. These are distinct from deep-learning generative AI systems.
"Creative authorship" refers to the origination of the core artistic content of a work, its musical composition, vocal performance, visual composition, or other primary creative elements, by a human creator. Creative authorship means a human being made the artistic decisions that define the work, not merely selected, curated, or prompted an AI system to produce them.
1.2 AI-Generated Content: Music and Audio
Subvert is a platform for human-made art.
(a) Prohibition. Works in which generative AI is the source of creative authorship are prohibited. This includes works where the musical composition, vocal performance, arrangement, or core sonic content was generated by AI systems rather than originated by a human creator. Whole-cloth prompt-to-music output (e.g., Suno, Udio, Boomy) is prohibited without exception.
(b) AI-Assisted Production Tools. The use of AI-assisted production tools for technical tasks, including mastering, mixing, equalization, pitch correction, beat detection, noise reduction, and loudness optimization, is permitted and is not subject to this policy. These tools are considered standard production technology. Their use does not need to be disclosed under this policy, though artists are encouraged to share production details through optional credits fields (see Section 1.6).
(c) Algorithmic Composition. The use of algorithmic composition tools that do not rely on deep-learning generative AI (such as generative synthesis, modular patches, procedural MIDI generation, or rule-based systems) is permitted, provided the artist can describe their process and demonstrate meaningful creative involvement in shaping the output. Artists using such tools are encouraged to provide a brief description of their process.
(d) AI Voice Cloning and Synthesis. The use of AI voice cloning or vocal synthesis technology to generate vocal performances is prohibited unless: (i) the voice model was trained exclusively on the artist's own vocal recordings, or (ii) the artist has obtained explicit, documented consent from the person whose voice is being modeled. Unauthorized use of another person's vocal likeness constitutes a violation of this policy regardless of any other considerations.
1.3 AI-Generated Content: Visual Art and Media
(a) Prohibition. Visual content used on the Subvert platform, including album art, promotional images, artist profile images, and music videos, must be human-made. Works generated by text-to-image, text-to-video, or prompt-based generative AI systems (e.g., Midjourney, DALL-E, Stable Diffusion, Sora, Synthesia) are prohibited.
(b) Default Expectation. Subvert recognizes that non-AI alternatives for visual art, including photography, illustration, graphic design, collage, and public domain imagery, are widely accessible. The default expectation is that artists will use human-made visual art.
(c) No Grandfathering. Artists uploading existing catalog releases to Subvert must ensure that all visual assets comply with this policy at the time of upload. Releases with AI-generated cover art on other platforms may be uploaded to Subvert with replacement human-made artwork. There is no exemption for visual content created before this policy was adopted.
1.4 Protection from AI Exploitation
As a platform, Subvert will not use your art to train or develop AI models. This commitment applies to all content hosted on the platform, including music, visual art, metadata, and user-generated text.
We will take reasonable technical and legal measures to protect the platform from unauthorized scraping, dataset harvesting, crawling, or use of Subvert-hosted content for AI training by third parties, up to and including legal action against unauthorized data collection.
1.5 Exceptions and the Appeals Process
(a) Petition for Exception. Subvert recognizes that the boundaries of AI use in creative work involve genuine edge cases. Artists whose work involves AI in ways that may fall outside the standard permissions described above, but who believe their practice is consistent with Subvert's principles of human creative authorship, may submit a petition for exception (an "appeals submission").
(b) Grounds for Petition. Petitions may be submitted on any grounds, but the following are recognized categories where exceptions may be considered:
- Self-trained models: Artists who have built or trained AI models exclusively on their own original work or on work for which they hold all necessary rights, and who use these models as part of a broader creative practice in which they exercise meaningful creative authorship over the final output.
- Accessibility: Artists with disabilities that substantially limit their ability to create music or visual art through conventional means, who use AI tools as an accommodation enabling their creative expression. Subvert is committed to ensuring that its policies do not create unnecessary barriers to participation for artists with disabilities.
- Algorithmic or non-generative AI methods: Artists using AI-related techniques that do not involve deep-learning generative models, where the relationship to prohibited tools is ambiguous.
(c) Submission Requirements. An appeals submission must include:
- A detailed description of the artist's creative process, including the specific AI tools or systems used;
- An explanation of the artist's role in originating and shaping the creative content of the work;
- A statement explaining why the artist believes their practice is consistent with Subvert's principles of human creative authorship;
- Any supporting materials the artist considers relevant (e.g., process documentation, demonstrations, technical descriptions).
(d) Review. Appeals will be reviewed by the Cooperative Moderation Panel (see Section 1.7) using the assessment rubric (see Appendix A). The Panel may request additional information or a conversation with the artist before making a determination.
(e) Outcomes. The Panel may:
- Approve the petition (with or without conditions);
- Deny the petition (with an explanation and, where possible, guidance on how the artist might modify their practice to comply);
- Request modification and resubmission.
(f) Precedent. Approved petitions may, at the Panel's discretion, be anonymized and published as precedent cases to guide future applicants and provide transparency to the membership.
1.6 Transparency and Credits
(a) Generative AI Disclosure. Artists who have received an approved appeals submission (Section 1.5) must disclose their use of generative AI in the production credits field, including a brief description of how AI was used in the work.
(b) No Mandatory Disclosure for Standard Tools. The use of AI-assisted production tools (as defined in Section 1.1) does not require disclosure. Artists are welcome to include these tools in their production credits if they choose.
1.7 Enforcement
(a) Reporting. Any Subvert member may report content they believe violates this policy. Reports should include the specific content in question and the basis for the concern.
(b) Cooperative Moderation Panel. Subvert will establish a Cooperative Moderation Panel ("the Panel") responsible for reviewing reported content, evaluating appeals, and providing guidance on policy interpretation. The Panel will:
- Be composed of members drawn from the cooperative's membership, with representation across membership classes where feasible;
- Be compensated for their time;
- Operate using the assessment rubric (Appendix A) as their primary decision-making framework;
- Publish anonymized summaries of their decisions to maintain transparency and build a body of precedent.
(c) Review Process.
- Flag: Content is flagged by a member report, application screening, or automated detection tools (where available);
- Assessment: The Panel reviews the flagged content using the rubric (Appendix A);
- Determination: The Panel makes an initial determination (compliant, non-compliant, or requires further information);
- Notice: If non-compliant, the artist is notified and invited to respond, comply, or submit an appeals petition;
- Appeal: Artists may appeal a determination. Appeals are reviewed by a different subset of the Panel or, for significant cases, by the Board;
- Action: Final determinations may result in content removal, required modification (e.g., replacement of visual art), account suspension, or termination of membership.
(d) Application Screening. Subvert's community curation process serves as a first line of review. Applicants whose public portfolio includes significant AI-generated content may be flagged for additional review or asked to clarify their practice before admission.
(e) Good Faith. This policy is enforced in a spirit of good faith and cooperative principles. The goal is to protect the integrity of the platform and the interests of its members, not to police artists' creative processes or to punish honest mistakes. Artists who discover that their work may inadvertently violate this policy are encouraged to contact Subvert proactively.
1.8 Policy Review
This policy is a living document. Subvert commits to:
- Reviewing this policy at least annually, or more frequently if material changes in technology, law, or member sentiment warrant it;
- Soliciting member input before any substantive revision, through the forum, town halls, or other participatory mechanisms;
- Publishing all revisions with clear changelogs and effective dates;
- Reporting annually to the membership on enforcement activity, appeals outcomes, and any emerging issues.
Appendix A: Assessment Rubric
Purpose
This rubric is a decision-making tool for the Cooperative Moderation Panel. It operationalizes the principles of the AI Policy into specific evaluation criteria. The rubric is designed to be updated as the Panel encounters new cases and as technology evolves, without requiring revision of the core policy language.
Assessment Criteria: Music and Audio
When evaluating whether a work complies with this policy, the Panel should consider the following indicators:
Red Flags
1. Work appears to be whole-cloth output from a prompt-to-music system
Weight: High
Notes: Characteristics may include a generic AI aesthetic
2. Artist cannot describe their creative process in specific terms
Weight: High
Notes: Vague or evasive responses to process questions are a significant concern
3. Artist has a pattern of high-volume uploads with uniform characteristics
Weight: Medium
Notes: A spam-level volume of uploads in a short period may indicate AI generation
4. Artist's public presence on other platforms includes openly AI-generated content
Weight: Medium
Notes: Relevant context but not conclusive on its own
Green Flags
1. Artist can provide detailed description of their creative process
Weight: High
Notes: Specificity about tools, techniques, and creative decisions is a strong indicator
2. Artist has a documented history of creative work predating AI tools
Weight: Low
Notes: Context, not proof
3. AI tools used are limited to production/mastering/mixing category
Weight: High
Notes: These are permitted without restriction
4. Artist uses algorithmic or procedural methods (not deep-learning generative AI)
Weight: Medium
Notes: Generally permitted; may still warrant review depending on specifics
Edge Case Assessment (for appeals)
When evaluating an appeals petition, the Panel should assess:
- Human origination: Did the artist originate the core creative content, or did an AI system generate it? "Originate" means the musical ideas, compositional decisions, and artistic direction came from the artist. Curating, selecting, or lightly editing AI-generated output does not constitute origination.
- Meaningful creative labor: Did the artist invest significant creative labor in shaping the final work beyond prompting or selecting from AI outputs? This might include: composing original elements, performing, arranging, substantially transforming AI-generated raw material, or building and training custom tools.
- Transparency and good faith: Is the artist transparent about their process? Can they articulate what they did and why? Willingness to share process details is a strong indicator of good faith; refusal or evasion is a concern.
- Training data ethics: If the artist used a custom-trained model, was it trained exclusively on the artist's own work or on work for which they hold all necessary rights? Models trained on scraped or unlicensed data do not qualify for exception regardless of the quality of the output.
- Consistency with platform values: Does this use of AI align with Subvert's commitments to human creative authorship, protection of creative labor, and honest representation? Would approving this petition undermine the policy's purpose or create a loophole that bad-faith actors could exploit?
Assessment Criteria: Visual Art
The same general framework applies, with the following additional considerations:
Consideration: Work appears to be whole-cloth output from a text-to-image system
Guidance: Characteristics may include a generic AI aesthetic
Consideration: Tool-assisted vs. generated
Guidance: Photoshop's ML features (smart selection, noise reduction, content-aware fill) are production tools, not generative AI. Using these does not trigger the policy.
Glossary
Adapted from member contributions, with particular thanks to the members who contributed definitions during the forum discussion.
AI (Artificial Intelligence): A broad and increasingly imprecise term referring to computer systems designed to perform tasks that typically require human intelligence. As of 2026, the term encompasses everything from simple pattern-matching algorithms to large-scale generative systems, and its use in marketing has rendered it difficult to apply with precision. This policy uses more specific terms wherever possible.
Machine Learning (ML): A family of techniques in which computer systems learn patterns from example data rather than following explicitly programmed rules. ML is the underlying technology in both standard production tools (mastering assistants, smart EQ) and generative AI systems; the difference lies in application, not in the fundamental technology.
Neural Network: An approach to machine learning loosely inspired by biological neurons. Multiple inputs are combined at decision points (nodes) with different weights to produce outputs. When arranged in layers, neural networks can handle complex pattern recognition and generation tasks.
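The "multiple inputs combined at decision points with different weights" idea can be made concrete with a purely illustrative sketch of a single node. The function name, values, and step activation below are hypothetical examples for explanation only, not taken from any real model.

```python
# Illustrative sketch only: one neural-network node ("neuron").
# Inputs are combined with per-input weights plus a bias, then passed
# through a simple step activation. All values here are arbitrary.

def node(inputs, weights, bias):
    """Weighted sum of inputs plus bias, thresholded to 0.0 or 1.0."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 if total > 0 else 0.0  # step activation

# Two inputs combined at one decision point with different weights:
output = node(inputs=[0.5, -1.0], weights=[0.8, 0.3], bias=0.1)
```

Arranging many such nodes in layers, and learning the weights from data, is what allows neural networks to handle complex pattern recognition and generation tasks.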
Training: The process of calibrating a neural network for a specific task by exposing it to example data and adjusting its internal weights based on how well it performs. The ethical concerns around training arise primarily from the use of copyrighted creative works as training data without the consent of their creators.
Generative AI: A neural network that is used to produce new content (text, images, audio, video) rather than to analyze or classify existing content. These systems are trained on large datasets and can produce outputs that resemble their training data. In the context of this policy, generative AI is the primary subject of concern, specifically, its use to produce creative works that displace human creative authorship.
Large Language Model (LLM): A type of generative AI trained on human language. Generally used as the interface layer for other generative AI systems (e.g., converting a text prompt into a music or image generation request). Relevant to this policy primarily through text-to-music and text-to-image applications.
Prompt-to-Music / Text-to-Music: A generative AI application that produces musical audio from text descriptions (e.g., "a melancholy jazz ballad with saxophone"). Examples: Suno, Udio, Boomy. This is the primary category of prohibited content under this policy.
Text-to-Image: A generative AI application that produces visual images from text descriptions. Examples: Midjourney, DALL-E, Adobe Firefly (generative features). Outputs from these systems are prohibited as album art and platform visuals under this policy.
AI-Assisted Production Tools: Software that uses machine learning for technical production tasks (mastering, mixing, EQ, pitch correction, noise reduction) without generating new creative content. These tools have been standard in music production for over a decade and are not subject to this policy.
Algorithmic Composition: The use of rule-based, procedural, or mathematical methods to generate musical material. This includes generative synthesis, cellular automata, Markov chains, genetic algorithms, and similar approaches that predate deep-learning AI. Generally permitted under this policy, though edge cases may be reviewed.
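The distinction from deep-learning generative AI can be illustrated with a minimal Markov-chain sketch. The note names and transition table below are hypothetical: the point is that the "model" is a small, hand-written rule set the artist designs and tunes directly, not a system trained on other people's work.

```python
# Illustrative sketch only: first-order Markov-chain note generation,
# an example of rule-based algorithmic composition. The transition
# table is authored by hand -- no training data is involved.
import random

TRANSITIONS = {
    "C": ["E", "G"],        # after C, move to E or G
    "E": ["G", "C"],
    "G": ["C", "E", "G"],
}

def generate_melody(start="C", length=8, seed=42):
    rng = random.Random(seed)  # seeded for reproducibility
    melody = [start]
    for _ in range(length - 1):
        # Each next note depends only on the current note.
        melody.append(rng.choice(TRANSITIONS[melody[-1]]))
    return melody

melody = generate_melody()
```

Because every rule is visible and editable, the artist's creative authorship lies in designing the system and shaping its output, which is why such methods are generally permitted.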
Voice Cloning / Vocal Synthesis: AI technology that generates vocal performances from a model trained on recordings of a specific person's voice. Prohibited except under the conditions in Section 1.2(d).
Appeals: A formal petition for an exception to the standard policy, submitted by an artist who believes their use of AI is consistent with Subvert's principles despite potentially falling outside the default permissions.