Scraping the Web

We know that search engines send out autonomous programs called bots to find information on the Internet. Usually, this leads to the creation of giant indices, similar to a phonebook or a dictionary. The current situation (September 2015) for Python 3 users is not ideal when it comes to scraping the Web, as most frameworks only support Python 2. However, Guido van Rossum, Python's Benevolent Dictator for Life (BDFL), has just contributed a crawler on GitHub that uses the asyncio API. All hail the BDFL!

I forked the repository and made small changes in order to save the crawled URLs to a CSV file. I also made the crawler exit early. These changes are not very elegant, but they were all I could manage in a limited time frame. Anyway, I can't hope to do better than the BDFL himself.

Once we have a list of web links, we will load these web pages with Selenium (refer to the Simulating web browsing recipe). I chose PhantomJS, a headless browser, which should have a lighter footprint than Firefox. Although it is not strictly necessary, I think it sometimes makes sense to download the web pages you are scraping, because you can then test your scraping code locally. You can also change the links in the downloaded HTML so that they point to local files, as the sketch after this paragraph shows. This is related to the Setting up a test web server recipe. A common use case of scraping is the creation of a text corpus for linguistic analysis, and that is our goal in this recipe.
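As a minimal sketch of that link rewriting (the localize_links() helper and its url_to_file mapping are hypothetical, not part of the book's code bundle):

    def localize_links(html, url_to_file):
        # Replace each remote URL in href attributes with a local file path
        # (a hypothetical mapping you build while downloading the pages).
        for url, local_path in url_to_file.items():
            html = html.replace('href="{0}"'.format(url),
                                'href="{0}"'.format(local_path))
        return html

    # Hypothetical usage with a tiny inline document:
    html = '<a href="http://edition.cnn.com/world">World</a>'
    print(localize_links(html, {'http://edition.cnn.com/world': 'world.html'}))

You can then serve the rewritten files locally, for example with Python's built-in python -m http.server.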

Getting ready

Install Selenium as described in the Simulating web browsing recipe. I use PhantomJS in this recipe, but this is not a hard requirement. You can use any other browser supported by Selenium. My modifications are under the 0.0.1 tag at https://github.com/ivanidris/500lines/releases (retrieved September 2015). Download one of the source archives and unpack it. Navigate to the crawler directory and its code subdirectory.
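For example, assuming the archive unpacks into a directory named after the release tag (the exact archive and directory names may differ; check them after extraction):

$ tar xzf 500lines-0.0.1.tar.gz
$ cd 500lines-0.0.1/crawler/code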

As an optional step, start the crawler with the following command (I used CNN as an example):

$ python crawl.py edition.cnn.com

How to do it…

You can use the CSV file with links in this book's code bundle or make your own as I explained in the previous section. The following procedure describes how to create a text corpus of news articles (refer to the download_html.py file in this book's code bundle):

  1. The imports are as follows:
    import dautil as dl
    import csv
    import os
    from selenium import webdriver
    from selenium.webdriver.support.ui import WebDriverWait
    from selenium.webdriver.support import expected_conditions as EC
    from selenium.webdriver.common.by import By
    import urllib.parse as urlparse
    import urllib.request as urlrequest
  2. Define the following global constants:
    LOGGER = dl.log_api.conf_logger('download_html')
    DRIVER = webdriver.PhantomJS()
    NAP_SECONDS = 10
  3. Define the following function to extract text from an HTML page and save it:
    def write_text(fname):
        elems = []
    
        try:
            DRIVER.get(dl.web.path2url(fname))
    
            elems = WebDriverWait(DRIVER, NAP_SECONDS).until(
                EC.presence_of_all_elements_located((By.XPATH, '//p'))
            )
    
            LOGGER.info('Elems %s', elems)
    
            with open(fname.replace('.html', '_phantomjs.html'), 'w') as pjs_file:
                LOGGER.warning('Writing to %s', pjs_file.name)
                pjs_file.write(DRIVER.page_source)
    
        except Exception:
            LOGGER.error("Error processing HTML", exc_info=True)
    
    
        # Derive the name of the output text file from the HTML file name
        new_name = fname.replace('.html', '.txt')
    
        if not os.path.exists(new_name):
            with open(new_name, 'w') as txt_file:
                LOGGER.warning('Writing to %s', txt_file.name)
    
                lines = [e.text for e in elems]
                LOGGER.info('lines %s', lines)
                txt_file.write('\n'.join(lines))
  4. Define the following main() function, which reads the CSV file with links (sample rows follow the listing) and calls the write_text() function defined in the previous step:
    def main():
        filedir = os.path.join(dl.data.get_data_dir(), 'edition.cnn.com')
    
        with open('saved_urls.csv') as csvfile:
            reader = csv.reader(csvfile)
    
            for line in reader:
                timestamp, count, basename, url = line
                fname = '_'.join([count, basename])
                fname = os.path.join(filedir, fname)
    
                if not os.path.exists(fname):
                    dl.data.download(url, fname)
    
                write_text(fname)
    
    if __name__ == '__main__':
        DRIVER.implicitly_wait(NAP_SECONDS)
        main()
        DRIVER.quit()
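
The main() function expects each row of saved_urls.csv to hold four comma-separated fields: a timestamp, a counter, a base file name, and the crawled URL. The following rows are made up and only illustrate the assumed layout:

    2015-09-01T12:00:00,0,index.html,http://edition.cnn.com/
    2015-09-01T12:00:05,1,world.html,http://edition.cnn.com/world

Run the script from the directory that contains saved_urls.csv:

$ python download_html.py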