Be sure to check the details.

Crawl statistics

If robots.txt keeps returning errors, Googlebot can stop crawling the entire site, so make sure it consistently returns either 200 or 404. Server connection errors may be difficult to eliminate completely, but investigating them can reveal the cause of a high error rate. For details, see Google's documentation on the Crawl Stats report.
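As a quick sanity check on the first point, you can request robots.txt yourself and confirm the status code. A minimal sketch in Python, assuming the requests library is installed and using example.com as a placeholder domain:

```python
import requests

# 200 (file served) and 404 (no file) are both safe for Googlebot;
# repeated 5xx responses on robots.txt can pause crawling site-wide.
resp = requests.get(
    "https://example.com/robots.txt",  # placeholder: use your own domain
    headers={"User-Agent": "robots-txt-check"},
    timeout=10,
)
print(resp.status_code)
if resp.status_code >= 500:
    print("Warning: 5xx on robots.txt can cause Googlebot to stop crawling")
```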
Check server logs

Checking the raw server logs is a way to see Google's requests and your server's responses directly. Googlebot's IP addresses may change and are not a reliable filter, but its user agent string is documented, so it is a good idea to narrow the logs down by user agent, as in the sketch below. Alongside ordinary user traffic, check whether the access volume exceeds what the server can handle, which files are being requested, and whether the responses are normal. Reference: Googlebot user agent. Note that how you view server logs differs depending on your hosting provider.
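For example, the following minimal sketch filters a combined-format access log for Googlebot requests and tallies status codes and requested paths. The log path is an assumption for illustration; also note that user-agent strings can be spoofed, so strict verification requires a reverse DNS lookup on the requesting IP.

```python
import re
from collections import Counter

LOG_PATH = "/var/log/nginx/access.log"  # assumption: adjust for your host

status_counts = Counter()
path_counts = Counter()

with open(LOG_PATH, encoding="utf-8", errors="replace") as f:
    for line in f:
        # Narrow down by the documented user agent token.
        if "Googlebot" not in line:
            continue
        # Combined log format: ... "GET /path HTTP/1.1" 200 ...
        m = re.search(r'"[A-Z]+ (\S+) HTTP/[^"]*" (\d{3})', line)
        if m:
            path_counts[m.group(1)] += 1
            status_counts[m.group(2)] += 1

print("Responses to Googlebot:", status_counts.most_common())
print("Most requested files:", path_counts.most_common(10))
```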
Check index status

This is slightly different from checking the crawl itself, but it is important to know whether a page was actually indexed as a result of being crawled. Check the index coverage report, especially for pages that were crawled or discovered but not indexed, such as "Crawled - currently not indexed" or "Discovered - currently not indexed", and confirm there are not too many of them.
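Beyond the report UI, the Search Console URL Inspection API can check the index status of individual URLs programmatically. A minimal sketch using google-api-python-client; the service-account file path and URLs are placeholders, and the service account must be granted access to the Search Console property:

```python
from google.oauth2 import service_account
from googleapiclient.discovery import build

SCOPES = ["https://www.googleapis.com/auth/webmasters.readonly"]
creds = service_account.Credentials.from_service_account_file(
    "service-account.json", scopes=SCOPES  # placeholder path
)
service = build("searchconsole", "v1", credentials=creds)

response = service.urlInspection().index().inspect(body={
    "inspectionUrl": "https://example.com/some-page",  # placeholder URL
    "siteUrl": "https://example.com/",                 # your property
}).execute()

result = response["inspectionResult"]["indexStatusResult"]
# coverageState mirrors the report, e.g. "Crawled - currently not indexed"
print(result.get("coverageState"), result.get("lastCrawlTime"))
```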