New copyright evasion techniques popping up

Just thought I’d post a heads up on a few techniques actively being used to circumvent copyright violation detection. I’m tired of finding stolen designs and faked images every time I load a page on here. I figure posting this in the open will be a wash - more folks will know how to do it and more will know what to look for and report.

The newer techniques are:

  • Using video frame grabs and thumbnails (cropped and whole).
  • Color replacement and object cloning/reordering - Take a product image with a widget in 3 colors, copy-paste a few, adjust the hue of the pasted objects, and optionally adjust the hue of the whole image.
  • Composition, especially with adding text. Instead of just using the stolen image, they expand the canvas, add a second image with a matching background or a call-out bubble, and add some text.

Those are the newer ones, designed to look different from the original (in case they’re near each other on search results) and avoid reverse image search detection. The old ones were simple, single operations - crop, adjust brightness or saturation, add a banner, mirror the image, etc.

The old ones could be detected easily with a reverse image search. I use a Firefox plugin (called Reverse Image Search) that adds a list of reverse image search engines to the right-click menu.
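
For those old single-operation edits, you can also run a quick local sanity check with a perceptual hash before bothering with the search engines. This is just a rough Python sketch (it assumes the Pillow and imagehash packages, and the file names are placeholders); a perceptual hash shrugs off hue and brightness tweaks but not mirroring or heavy crops, so undo those first if you can:

```python
# Quick local check: does the suspect image reduce to (nearly) the same
# perceptual hash as the original? Assumes Pillow + imagehash are installed.
from PIL import Image
import imagehash

original = Image.open("original_listing.jpg")   # placeholder file names
suspect = Image.open("suspect_listing.jpg")

# phash hashes a downscaled grayscale DCT, so hue shifts and mild
# brightness/saturation edits usually change only a few of the 64 bits.
distance = imagehash.phash(original) - imagehash.phash(suspect)
print(f"Hamming distance: {distance} / 64")

if distance <= 10:  # threshold is a rough assumption - tune per collection
    print("Probably the same underlying image with cosmetic edits.")
else:
    print("No match at this threshold - try un-mirroring or cropping and re-checking.")
```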

The new ones are tougher, but I’ve found techniques that help.

First, try the easy way. Check for other people’s logos and text that is/isn’t there, and search here or on other model sites for a keyword and skim the results. Reverse image search using the highest-resolution variant you can find of the full image. From there, try cropping out clutter and added text. From there, try selecting only a unique aspect of the object.

If that doesn’t work, you’ve got to try tricking the search engines into examining features while giving them minimal information about the actual object. The theory is that a low level of confidence in the object’s identification will filter out a large number of shopping links. If the engine can identify the object with high confidence, it will latch onto the generic product and present you with lots of click-through shopping bait. You want the object identification to be fuzzy so that the engine doesn’t get creative (a rough cropping sketch follows the examples below). Try:

  • Limiting the selected area in the results to a single object instance.
    Example: Image has 5 different colored vases - select only 1 vase.

  • Limiting the selected area in the results to unique aspects of the photo or everything but the actual object.
    Example: If the item is depicted on a desk, limit the analyzed area to any visible clutter. If a hand is holding it, search for the hand and a smidgen of the object (enough to get some of the object’s color).

  • Limiting the selected area in the results to a narrow band.
    Example: Object is a decorative mask - select only the vertical band centered between the mask’s eye holes, or only the strip from the outside edge of an eye hole to the edge of the mask.
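
To make those selections repeatable, here’s a rough Pillow sketch of the single-object and narrow-band crops (the file name and pixel coordinates are placeholders you’d eyeball per image):

```python
# Crop a listing photo down to a single object or a narrow band before
# feeding it to a reverse image search. Pillow only; coordinates are guesses.
from PIL import Image

img = Image.open("suspect_listing.jpg")  # placeholder file name
w, h = img.size

# Single object instance: roughly one vase out of a row of five.
one_object = img.crop((0, 0, w // 5, h))
one_object.save("crop_single_object.png")

# Narrow band: a vertical strip around the centre (e.g. between a mask's
# eye holes) that keeps texture but starves the engine of the full outline.
band = w // 10
narrow_band = img.crop((w // 2 - band // 2, 0, w // 2 + band // 2, h))
narrow_band.save("crop_narrow_band.png")
```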

Finally, I’ve found Google and Bing to complement each other well. Different data sets, different techniques, so you get different results (especially when your selection is very uncertain).

Once you’ve found one suspicious post, check the account’s full portfolio. It’s rare that I find a single violation. It’ll usually be that the account has a few super-basic designs and a bunch of rip offs. Just report the whole account.

I tend to peruse the listings sorted by newest and find 5-ish fakes per 50 posts. That usually leads to 5-ish additional fakes per account.

Anyhow, yeah, it’s disappointing when you start to actually investigate. I tend to do my little “audit” every few months here and at Printables. I’ll see an obvious rip off and get curious, then wake up from my trance 30 minutes later to 20 tabs open and a partially completed report or 3.

4 Likes

Colour shift, background delete, and mirroring are good ones I’ve seen - you can easily spot them in 3rd-party print profiles, where they lift images either from the original or from the comment section. The Sherlock cap comes out, and as you mentioned, it’s hardly a one-off in their catalogue.

What irks me about this is how rarely MW takes action and deletes an obvious cheat; they really need to clamp down and make it 3 strikes and bye-bye…

1 Like

I have raised this in another topic, but finding clones and copyright infringements based on the images alone seems almost impossible to me. Here is what I think should be done (you may need some bright minds for it, but I believe it is possible).

Johnny_Bit has raised a couple of times that you can compare meshes with CloudCompare:

It is a program that takes two meshes, compares them to each other, and measures how much of the models ‘overlap’. That overlap is a clear indicator of either a remix (part of the model overlaps within, say, 99%) or a blatant reupload (the complete model overlaps within that margin).
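
To give an idea of what that overlap check looks like, here’s a very rough sketch using the Python trimesh package - not CloudCompare itself, and it assumes the two meshes are already aligned in the same coordinate frame (CloudCompare handles the registration step for you); the file names and tolerance are placeholders:

```python
# Sample the suspect mesh's surface and measure how much of it lies on
# (or very near) the original's surface. Assumes trimesh + numpy.
import numpy as np
import trimesh

original = trimesh.load("original.stl", force="mesh")       # placeholder files
suspect = trimesh.load("suspect_upload.stl", force="mesh")

points = suspect.sample(5000)  # random points on the suspect's surface
_, distances, _ = trimesh.proximity.closest_point(original, points)

tolerance = 0.1  # mm - a guess, depends on the model's scale
overlap = np.mean(distances < tolerance)
print(f"{overlap:.0%} of the suspect's surface lies within {tolerance} mm of the original")
# ~100% looks like a straight reupload; a large partial overlap looks like a remix.
```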

But also look at face recognition: it works on a similar compare-the-geometry idea, where the overall geometry and relative distances are compressed down into a digital fingerprint of somebody’s face. If you have a camera shot or even a photo of that same person, you can create a new fingerprint, and if it matches another fingerprint within a certain margin, it is almost certainly the same person.

Even if that person remixes themselves (say, gets a nose job to evade the police), face recognition still works based on the remaining features.

So what I propose is to create this kind of fingerprint for all uploads (preferably open-source, so other model-sharing websites can create them as well) and compare each one against the database to find and flag duplicates, or to find and flag remixes that are uploaded as original works.
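
As a loose illustration of what such a fingerprint could be, here’s a sketch of a D2 shape distribution: a histogram of distances between random surface points, which doesn’t change under rotation or translation and, with the normalization below, under uniform scaling either. It’s only one possible approach, written with the trimesh package - none of this is an existing MakerWorld feature:

```python
# Build a simple geometric fingerprint for a mesh and compare two of them.
import numpy as np
import trimesh

def mesh_fingerprint(path, n=20000, bins=64):
    """D2 descriptor: histogram of distances between random surface points."""
    mesh = trimesh.load(path, force="mesh")
    pts = np.asarray(mesh.sample(n))
    a, b = pts[np.random.randint(n, size=(2, n))]  # random point pairs
    d = np.linalg.norm(a - b, axis=1)
    d /= d.mean()  # scale-normalize so a resized reupload still matches
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 4.0))
    return hist / hist.sum()

def similarity(f1, f2):
    """1.0 = identical distributions, lower = more different."""
    return 1.0 - 0.5 * np.abs(f1 - f2).sum()

score = similarity(mesh_fingerprint("original.stl"), mesh_fingerprint("suspect.stl"))
print(f"fingerprint similarity: {score:.3f}")  # threshold would need tuning
```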

I raised this in another topic, but if you search ‘octopus’ and set to ‘exclusive’ (and therefore ‘original works’), you get these results:

And it goes on and on. All of them are remixes of the same model and all of them are posted as original and even exclusive. Something has to be done imo

4 Likes

Positive note - at least these scammers are just mangling up other people’s models instead of trying to hack bank accounts or run phishing scams.

Same kind of folks. Same disregard for others. Same depressing drive to get ahead regardless of what they steal.

1 Like

Hah. True, but I suppose I’d personally prefer they try to do something more drastic (and epically fail and get arrested without causing any harm). Majority of law enforcement don’t give a ■■■■ about model theft. They do care a little bit about robbing banks.

MakerWorld has incentivized this type of model creation with the current reward rules. Every rule system will be gamed; however, users can be given meaningful warnings if they’re abusing the system.

Regarding exclusive content bonuses: if an exclusive model is found to violate the rules, more points should be taken away from the user than were earned for each infraction, and there should be a 3 warnings rule for the exclusive program - if you’re warned three times, you’re ineligible to join the exclusive program going forward.

There should also be a 3 warnings rule on copyright infringement. Users who hit it end up with a permanently deactivated account, all their models removed from the system, AND a note added on the backend about their connection AND their personal info flagged to prevent future payouts.

Finally, a 3 warnings rule on the connection itself: after the third account associated with the same connection is deactivated, that connection should go on a block/ban list.
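
To make the proposal concrete, here’s a loose sketch of the strike bookkeeping; the multipliers and thresholds are placeholders, not anything MakerWorld actually does:

```python
# Illustrative strike/claw-back bookkeeping for the rules proposed above.
from dataclasses import dataclass

@dataclass
class Account:
    points: int = 0
    exclusive_warnings: int = 0
    copyright_warnings: int = 0
    exclusive_eligible: bool = True
    active: bool = True

def exclusive_violation(acct: Account, points_earned_by_model: int) -> None:
    # Take back more than the offending model earned (1.5x is a placeholder).
    acct.points -= int(points_earned_by_model * 1.5)
    acct.exclusive_warnings += 1
    if acct.exclusive_warnings >= 3:
        acct.exclusive_eligible = False  # locked out of the exclusive program

def copyright_violation(acct: Account) -> None:
    acct.copyright_warnings += 1
    if acct.copyright_warnings >= 3:
        acct.active = False  # deactivate account, pull models, flag payout info
```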

1 Like

In the digital age, we’re seeing new copyright evasion techniques emerging all the time, as creators and users find ways to bypass restrictions. These techniques can range from simple alterations, like cropping images or adding filters, to more sophisticated methods, such as altering metadata, using AI tools to slightly modify content, or even creating synthetic media that’s hard to trace back to original sources.