I understand that all URLs are scanned (including pages with keywords or a '?id=213123' ending).
Of course, it would be great to be able to steer which URLs should be scanned (in the >5000 URLs subscription); I see two options:
1. Steer the scanned URLs on the basis of the sitemap.xml/robots.txt (most desirable).
2. Prioritize the URLs on the basis of the sitemap.xml, so that all important URLs are scanned first; at the moment this is not the case (we are over 10,000 URLs).
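To illustrate option 2, here is a minimal sketch of what sitemap-based prioritization could look like. This assumes the scanner exposes (or could expose) a list of discovered URLs and a parsed sitemap.xml; the function names and the ordering strategy are my own illustration, not the vendor's actual implementation.

```python
import xml.etree.ElementTree as ET

# Standard sitemap namespace, per sitemaps.org
SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(xml_text):
    """Extract <loc> entries from a sitemap.xml document, preserving order."""
    root = ET.fromstring(xml_text)
    return [loc.text.strip() for loc in root.iter(SITEMAP_NS + "loc")]

def prioritized_scan_queue(discovered, sitemap):
    """Scan sitemap URLs first, then any other discovered URLs (e.g. '?id=...' pages)."""
    in_sitemap = set(sitemap)
    rest = [u for u in discovered if u not in in_sitemap]
    return sitemap + rest

# Hypothetical example data for illustration only
example_sitemap = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/privacy</loc></url>
</urlset>"""

discovered = ["https://example.com/page?id=213123", "https://example.com/privacy"]
queue = prioritized_scan_queue(discovered, sitemap_urls(example_sitemap))
```

With such an ordering, the important sitemap pages would always be covered within the scan limit, and dynamically discovered URLs would only be scanned with the remaining capacity.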
It would be great to know how to solve this issue so that we are fully in control for GDPR purposes.