Using robots.txt 

Search engines use a robots.txt file to determine which pages of a website they are allowed to crawl and index. The file can also tell them where the sitemap.xml file resides.

Before a search engine crawls your website, it checks whether a robots.txt file exists at <your-application-site>/robots.txt. If it does, the crawler uses it to determine which pages are excluded from indexing. This is useful when you do not want certain pages to be indexed.

Let's include the robots.txt file in the src folder of our application and update the assets array of angular.json once more. Your robots.txt file should look as follows if all the pages are allowed:

# Allow all URLs (see http://www.robotstxt.org/robotstxt.html)
User-agent: *
Disallow:
Sitemap: https://dynamic-personal-blog.now.sh/generated/sitemap.xml
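
For reference, here is a minimal sketch of the corresponding assets entry in angular.json (under your project's architect/build/options); the favicon.ico and assets entries are the Angular CLI defaults and may differ in your workspace:

"assets": [
  "src/favicon.ico",
  "src/assets",
  "src/robots.txt"
]

With this in place, the Angular CLI copies src/robots.txt to the root of the build output, so it is served at /robots.txt.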

The robots.txt file is very important when you want to disallow search engines from crawling certain URLs in your application, as the sketch below shows.
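
For example, here is a minimal sketch of a robots.txt that blocks crawlers from specific paths; the /admin and /drafts paths are hypothetical placeholders for whichever URLs you want to exclude:

# Block crawlers from hypothetical private sections
User-agent: *
Disallow: /admin
Disallow: /drafts
Sitemap: https://dynamic-personal-blog.now.sh/generated/sitemap.xml

Next, let's see how the title and description are important.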
