⬆️ I Uploaded My Art to AI: Is It Still Mine?
How to protect your creative DNA in a machine world
I spoke to a tattoo artist earlier this year who'd been prototyping flash designs using an AI sketch tool. She wasn't uploading full pieces, just fragments of linework and shape tests. A few months later, she spotted eerily similar motifs in someone else's tattoo portfolio on Instagram: identical curvature and floral infills she'd never seen in that artist's work before. A different artist, in a different country, publicizing use of the same tool. Her mind immediately jumped to: "Did AI let someone copy me?" She knew it wasn't plagiarism, but it felt like something creatively intimate had been absorbed by the system and was now resurfacing in someone else's hands. It was all speculation, and unlikely to be true, but the discomfort stuck with her.
I've spent the past decade building AI tools at big tech companies and studios, but I'm an artist first. Along the way, the internal machinery of these products has left me mesmerized and deeply uncomfortable in the same breath. They unlock incredible creative outcomes, but they are not created equal.
Some work like local software on your computer: everything stays with you. Others work like cloud systems that might keep a copy of whatever you upload. And your creative signature can be more recognizable than you think.
Some tools work fiercely to protect your voice. Others extract from it. So how do you tell the difference?
❓ 4 Questions Every Artist Should Ask Before Uploading Art to an AI Tool:
1. Was this trained on licensed, permission-based data?
If not, the model may have learned from other artists' work without consent. This matters both ethically and legally. When AI companies scrape the internet for training data, they're building their business on publicly available (but often unlicensed) creative work. Many newer models are trained exclusively on licensed data, public domain content, or work where artists were paid for their contribution. Moonvalley’s Marey video generation model is one I’m currently obsessing over!
Why this matters: Understanding the training data helps you make informed choices about which tools align with your values and comfort level as a creator.
2. What happens to my uploads?
Are they deleted after your session? Or are they stored and used to train future versions? Some tools process your work temporarily and delete it immediately. Others keep copies or use your uploads to improve their AI models, meaning your creative choices could influence the tool's future outputs.
Why this matters: Your sketch, melody, or writing sample could become part of the "ingredients" that shape how AI creates for other users down the line.
3. Do I own the output?
And if it resembles someone else's work or goes viral, who's liable? Most tools give you ownership of what you create, but the fine print varies. Some require "meaningful human input" for copyright protection. Others have different ownership structures. And if your AI-generated work accidentally resembles another artist's style, the liability question can get complex. Copyright also varies by country (e.g., human input requirements are stricter in US/EU).
Why this matters: You need to know if you can sell your work, use it commercially, and who's responsible if questions arise.
4. Is my work shared or pooled with other systems?
Are uploads isolated, or passed into partner tools or future pipelines? Some AI companies share data across their product ecosystem or with third-party partners. On some platforms, your image uploads might influence other tools within the company’s ecosystem, like a chatbot or video editor, depending on how that organization shares data internally. Other tools strictly isolate your data to the specific feature you're using.
Why this matters: You might consent to one use of your work but be uncomfortable with others.
These aren't just technical questions. They define how you want to work as an artist in an increasingly machine-mediated world, and they help set your creative boundaries.
🔎 Where to Find The Answers:
When you're trying a new AI tool, here's an approach to consider:
Step 1: Try asking questions directly to the AI tool (if it's a conversational AI like ChatGPT, Claude, Grok, or Gemini); there's also a script sketch after this list if you'd rather batch them:
"Were you trained using copyrighted data, or only on licensed datasets?"
"Are my uploads saved after I close this session?"
"Can I opt out of having my work used to train future versions?"
"Do I own the rights to what I create with this tool?"
"If something I generate looks like someone else's work, who is liable?"
"Are you sharing my data with partner companies?"
"Where can I see your data policy for uploads?"
Step 2: For tools that can't answer (image generators, video editors, design platforms):
Check their Terms of Use or Privacy Policy (look for "User Content," "Data Storage," "Model Training")
Search their FAQ or Help Center
Search the web: "[Tool Name] + training data + privacy"
Check third-party trackers like Have I Been Trained, Spawning.ai, or AI Incident Database
You can also directly email support with these questions if the tool can't answer them clearly
Consider medium-specific questions that matter to you, like whether the tool mimics artist styles without consent, whether vocal samples are protected from cloning, and whether your writing style can be replicated for other users
If you can't find clear answers in under five minutes or if neither the AI nor support can give you clarity, that tells you everything you need to know about the company’s level of transparency.
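For the policy-reading part of Step 2, a short script can flag which sections deserve a close read. This is a minimal sketch assuming the tool publishes a plain HTML privacy or terms page; the URL is a placeholder, and the keyword list is just a starting point to adapt:

```python
# Minimal sketch: scan a (hypothetical) policy page for terms worth reading closely.
# Replace POLICY_URL with the tool's real privacy or terms page.
import re
import urllib.request

POLICY_URL = "https://example.com/privacy"  # placeholder, not a real policy page

KEYWORDS = [
    "train", "user content", "license", "delete",
    "retain", "third party", "opt out", "improve our services",
]

html = urllib.request.urlopen(POLICY_URL, timeout=10).read().decode("utf-8", errors="ignore")
text = re.sub(r"<[^>]+>", " ", html).lower()  # crude tag stripping; fine for a quick scan

for kw in KEYWORDS:
    hits = text.count(kw)
    note = "worth a close read" if hits else "not mentioned (also telling)"
    print(f"{kw!r}: {hits} mention(s), {note}")
```

A hit count isn't a verdict. "Improve our services" showing up three times doesn't mean the tool is bad; it just tells you exactly where to start reading.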
The Rise of Ethical AI Tools for Creators
Ethical AI tools prioritizing licensed datasets and user privacy are becoming more accessible and effective. They're also being shaped by outside pressure: lawsuits, creator advocacy, and new regulations. Different regions are developing their own frameworks for AI and creator rights.
There really is no universal right or wrong approach to this. Some artists I talk to see data sharing as a complete deal breaker: watching their unique way of sketching, or their mechanics for layering vocals, show up in someone else's work without payment or consent is a no-go.
But there are many other artists who are completely okay with models working this way. They see the AI shift as a natural evolution in how we collaborate, almost like a global creative commons. These artists embrace open-source creativity and are fine with their work influencing others as long as it's helping shape better tools and expand access.
What matters is that each of us gets to actively decide where we land on that spectrum, rather than having that choice silently made for us by terms and conditions buried in fine print.
Key Trends in How Ethical AI Tools Work Today:
Training Data: Leading tools use licensed or opt-in data only. Some even pay creators whose work they train on. No more scraping.
User Uploads: Many tools now process uploads temporarily (session-based), then delete them. Some use privacy tech to ensure anonymity.
Ownership & Liability: Outputs typically belong to you if you provide meaningful human input. Platforms clarify who's responsible for resemblance issues.
Sharing: Ethical tools avoid silent data pooling. Some emerging tools are experimenting with blockchain to track consent and royalty flows (creating immutable records of permissions and transparent payments).
Access: Free tiers now offer opt-outs. Paid plans provide rights tracking, real-time collab, and international protections.
June 2025 Tool Comparison
I put together a comprehensive chart here of almost 100 text, image, audio, video, and design tools, with key details on privacy, data handling, ownership, and ethics. I’ll update it each month to help creatives stay aware, empowered, and in control of their work when using AI tools.
Key Caveats: AI tool policies and features evolve rapidly due to legal, ethical, and technical changes. This chart is updated monthly based on public information available as of the last update date, but real-time shifts may occur, so always verify the latest terms directly on official websites or privacy centers before use. This is NOT legal advice; use it for informational purposes only, to guide your own research. Just trying to make artists’ lives a little easier!
The AI landscape is evolving rapidly and many companies are working to improve their practices and transparency. The key is staying informed about their current policies and making decisions that align with your comfort level and personal values.
Case Studies: Lessons from AI Copyright Battles
Different approaches to training data have sparked major legal battles and industry shifts. Let's look at a landmark 2023 case that's still unfolding today, a more ethical model from Adobe, and a fresh June 2025 lawsuit that highlights how these issues are still evolving.
Getty Images vs. Stability AI Lawsuit:
In early 2023, Getty Images sued Stability AI (makers of Stable Diffusion) for allegedly using over 12 million copyrighted images from Getty's library to train their AI without permission. The lawsuit, which is ongoing in both the US and UK, revealed that Stable Diffusion could generate images with visible Getty watermarks, literally reproducing the company's copyright notices in outputs.
What went wrong:
Training data was scraped from the web without checking licensing or obtaining consent for copyrighted material.
AI reproduced actual watermarks and specific elements, proving it had memorized and could recreate protected images.
Artists whose work was in Getty's licensed collection never agreed to their content being used for AI training.
The fallout:
Major lawsuit seeking damages for thousands of infringed works (potentially up to $1.7 billion).
Increased public awareness that AI could "remember" and reproduce copyrighted elements, raising alarms about embedded artist work.
Greater pressure on AI companies to shift to licensed training data, influencing global regulations and tool policies.
Artists realizing their publicly available work might be scraped and embedded in AI systems without knowledge or compensation.
Adobe's Response: A Different Approach
Adobe took a different path with Firefly, which was launched in 2023 as a generative tool trained exclusively on Adobe Stock images (where contributors are compensated through royalties), public domain content, and other licensed material. No web scraping involved.
What they did right:
Used only content with clear licensing rights, ensuring creators were paid for training use.
Built transparency into the process from the start with detailed disclosures on data sources.
Offered IP indemnification to enterprise customers, protecting users from legal claims related to outputs.
Focused on commercial safety, so artists can create without fear of unintended infringement.
The contrast here shows how training choices directly impact artist trust and legal risks.
Evolving Today: Disney & Universal vs. Midjourney
The debate today has shifted beyond training data to what AI actually produces. In June 2025, Disney and Universal filed the first major Hollywood studio lawsuit against Midjourney, accusing it of copyright infringement. The suit claims Midjourney scraped copyrighted works (including films, characters, and images) to train its model, enabling users to generate "near-carbon copies" of iconic characters like Darth Vader, Elsa, Shrek, and Minions through simple prompts.
What made this different:
The focus on outputs: users can prompt for specific characters (e.g., "Darth Vader with lightsaber") and get highly similar reproductions, allegedly violating trademarks and copyrights.
Evidence includes visual exhibits of generated images that "blatantly incorporate and copy" protected elements, showing the model's ability to recreate Disney/Universal IP.
The fallout (as of July 2025):
Potential statutory damages of up to $150,000 per infringed work (the complaint doesn't specify a total, but its exhibits show dozens of examples).
Renewed calls for AI companies to implement better filters and consent processes to prevent IP mimicry.
The lesson: These cases show how scraped training data leads to problematic outputs that can infringe on creators' rights. Unlike Getty's focus on input data, suits like Disney's emphasize output, making it clearer why artists must ask about training sources, output ownership, and liability before uploading.
Quick Start: Your 5-Minute AI Tool Check
New to vetting AI tools? Use this simple flowchart:
STEP 1: Can you find their data policy in under five minutes?
✅ Yes → Continue to Step 2
❌ No → Red flag. Use with caution or find an alternative
STEP 2: Do they clearly state what happens to uploads?
✅ "Deleted after session" or "Not used for training" → Continue to Step 3
❌ Vague or mentions "improving our services" → Ask specific questions or choose a different tool
STEP 3: Can you find info about their training data?
✅ "Licensed," "opt-in," or "permission-based" → You're good to go
❌ "Web-scraped" or no mention → Proceed with extra caution
STEP 4: Do you own what you create?
✅ Clear ownership rights stated → Create freely
❌ Shared ownership or unclear → Read the fine print carefully
If you get stuck on any step, email their support team directly. Good tools will answer these questions clearly and quickly.
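If it helps to see the flowchart as logic, here's a minimal sketch encoding the same four checks in Python. The field names are mine, and the short-circuit order mirrors the steps above; it's a note-taking aid for your own research, not a substitute for actually reading the policy:

```python
# Minimal sketch: the 5-minute flowchart as a checklist you fill in yourself
# after reading the tool's policy. Field names are illustrative, not standard.
from dataclasses import dataclass

@dataclass
class ToolCheck:
    policy_found_fast: bool   # Step 1: data policy findable in under five minutes?
    uploads_deleted: bool     # Step 2: "deleted after session" / "not used for training"?
    training_licensed: bool   # Step 3: "licensed", "opt-in", or "permission-based"?
    clear_ownership: bool     # Step 4: clear ownership rights stated?

    def verdict(self) -> str:
        if not self.policy_found_fast:
            return "Red flag: use with caution or find an alternative."
        if not self.uploads_deleted:
            return "Ask specific questions or choose a different tool."
        if not self.training_licensed:
            return "Proceed with extra caution."
        if not self.clear_ownership:
            return "Read the fine print carefully."
        return "You're good to go: create freely."

# Example: clear policy and deletion, but web-scraped training data.
print(ToolCheck(True, True, False, True).verdict())  # prints the Step 3 caution
```

Fill one of these out per tool and you also end up with a running log of what you checked and when, which is handy given how fast policies change.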
So What Do You Do With This?
Before using any AI tool, check the terms. If it's not clear what happens to your uploads, assume they're being stored.
Choose tools that respect your voice and values. For some, that’s licensed data, deletion by default, and clear ownership. For others, that matters less.
Search for recent info. AI tools update policies fast. Always check current FAQs or community reviews. The info in the chart above will probably be outdated in a few weeks :)
Ask the tool directly: Email support or use feedback forms to ask about ownership, uploads, and opt-outs.
Follow the conversation. Find voices you trust in this space, stay updated on policy changes, and join communities tracking model transparency.
Share your learnings with other creators who may not realize the considerations involved in AI tool selection.
You don't need to fear AI tools, but you do need to ask good questions. Do you want your voice to be remixable, or do you prefer it fully protected? Do you want full ownership, or are you okay with shared influence? There is no universal rule, but you deserve the clarity to choose on your own terms, not inherit someone else's. Even if the tool is free, and even if you're just experimenting, remember that your creative voice always has value.
Take a few minutes after reading this and ask yourself: What kind of relationship do I want with these AI tools? Protective, experimental, open, guarded? Now go and choose the tools that match that deep intention. Would love to hear what you’re using and why in the comments.
✉️ If this helped you better understand how to protect your creative voice in an AI-powered world, hit subscribe and share this with another artist who's navigating these same questions.