Prioritize scanned URLs
I understand that all URLs are scanned, including pages with keywords or a query-string ending such as '?id=213123'.
Of course, it would be great to be able to steer which URLs should be scanned (in the >5000 URLs subscription); I see two options:
1. Steer the scanned URLs on the basis of the sitemap/robots.txt (most desirable).
2. Prioritize the URLs on the basis of the sitemap.xml. That way all important URLs are scanned first, which at the moment is not the case (we are at over 10,000 URLs); a sketch of this idea follows below.
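For illustration, a minimal sketch of what option 2 could look like, assuming a hypothetical crawl frontier (the `prioritized_frontier` function and the crawler itself are assumptions; only the sitemap fetching and parsing use Python's standard library):

```python
# Sketch: order a crawl frontier so URLs listed in sitemap.xml are
# scanned first. The crawler/frontier is hypothetical; it is not the
# scanner product's actual API.
import urllib.request
import xml.etree.ElementTree as ET

SITEMAP_NS = "{http://www.sitemaps.org/schemas/sitemap/0.9}"

def sitemap_urls(sitemap_url: str) -> list[str]:
    """Fetch sitemap.xml and return its <loc> URLs in listed order."""
    with urllib.request.urlopen(sitemap_url) as resp:
        root = ET.fromstring(resp.read())
    return [loc.text.strip() for loc in root.iter(f"{SITEMAP_NS}loc") if loc.text]

def prioritized_frontier(discovered: list[str], sitemap_url: str) -> list[str]:
    """Put sitemap-listed URLs ahead of all other discovered URLs."""
    priority = sitemap_urls(sitemap_url)
    listed = set(priority)
    return priority + [u for u in discovered if u not in listed]

# Example: sitemap pages are queued first, the query-string page last.
frontier = prioritized_frontier(
    ["https://example.com/?id=213123", "https://example.com/about"],
    "https://example.com/sitemap.xml",
)
```

Under this scheme the full site is still scanned eventually; the sitemap only decides the order, so the important pages are covered first even when the total exceeds the page budget.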
It would be great to know how to solve this issue in order to stay fully in control of GDPR compliance.
Kind regards
Harry
-
Hi Harry,
We do not limit the crawl based on robots.txt or sitemaps, as this could cause accidental non-compliance. Instead, we make sure we scan all unique content that visitors can navigate to and access on the website.
We have added a filter and started a new scan, which completed at just over 1,500 pages.
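For illustration, a minimal sketch of the kind of filter that can collapse query-string variants (such as '?id=213123') onto one canonical URL, so duplicate content counts only once; this is an assumed deduplication rule, not the product's actual filter mechanism:

```python
# Sketch: deduplicate URLs that differ only by query string/fragment.
# The rule shown (drop query and fragment) is an assumption for
# illustration, not the scanner's real filter logic.
from urllib.parse import urlsplit, urlunsplit

def canonical(url: str) -> str:
    """Drop query string and fragment so '?id=...' variants dedupe."""
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path, "", ""))

def filter_unique(urls: list[str]) -> list[str]:
    """Keep one URL per canonical form, preserving first-seen order."""
    seen: set[str] = set()
    unique = []
    for url in urls:
        key = canonical(url)
        if key not in seen:
            seen.add(key)
            unique.append(url)
    return unique

print(filter_unique([
    "https://example.com/page?id=213123",
    "https://example.com/page?id=999",
    "https://example.com/about",
]))  # keeps the first /page variant and /about
```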
Regards,
Elina