WeTransfer Ends AI Training Fears: “Only Moderating Content,” It Says
The company has revised its policy wording for clarity.

Amid backlash over confusing terms, WeTransfer has clarified it won’t use your files to train AI models.
Earlier this month, an update to the platform’s terms sparked alarm by suggesting that uploaded files could be used for “machine learning” or shared publicly. The change drew an outcry from creatives, including illustrators and voice actors, who feared their sensitive work could be exploited without consent.
Time to stop using @WeTransfer who from 8th August have decided they'll own anything you transfer to power AI pic.twitter.com/sYr1JnmemX
— Ava-Santina (@AvaSantina) July 15, 2025
Responding to the backlash on X and in the press, a WeTransfer spokesperson emphasised, “We do not use machine learning or any type of AI to process shared content, nor do we sell content or data to third parties.”
WeTransfer has since revised the wording of the policy. The updated clause states that user content may be used only to “operate, develop and improve the Service,” in line with its Privacy and Cookies policy, and removes the ambiguous reference to AI model training.
This incident mirrors prior controversy at Dropbox, which had to clarify similar language in 2023. Platforms now recognise the importance of explicit communication to maintain user trust—especially among creative professionals who rely on these tools for their livelihoods.