[Python] Crawling a website with Scrapy

[Source] http://isbullsh.it/2012/04/Web-crawling-with-scrapy/

Crawl a website with Scrapy

Written by Balthazar

Introduction

In this article, we are going to see how to scrape information from a website, in particular, from all pages with a common URL pattern. We will see how to do that with Scrapy, a very powerful, and yet simple, scraping and web-crawling framework.

For example, you might be interested in scraping information about each article of a blog, and storing that information in a database. To achieve such a thing, we will see how to implement a simple spider using Scrapy, which will crawl the blog and store the extracted data in a MongoDB database.

We will assume that you have a working MongoDB server, and that you have installed the pymongo and scrapy Python packages, both installable with pip.

If you have never toyed around with Scrapy, you should first read the official Scrapy tutorial.

First step, identify the URL pattern(s)

In this example, we’ll see how to extract the following information from each isbullsh.it blogpost:

  • title
  • author
  • tag
  • release date
  • url

We’re lucky: all posts have the same URL pattern: http://isbullsh.it/YYYY/MM/title. These links can be found on the different pages of the site homepage.

What we need is a spider which will follow all links following this pattern, scrape the required information from the target webpage, validate the data integrity, and populate a MongoDB collection.

Building the spider

We create a Scrapy project, following the instructions from their tutorial. We obtain the following project structure:

isbullshit_scraping/
├── isbullshit
│   ├── __init__.py
│   ├── items.py
│   ├── pipelines.py
│   ├── settings.py
│   └── spiders
│       ├── __init__.py
│       └── isbullshit_spiders.py
└── scrapy.cfg

We begin by defining, in items.py, the item structure which will contain the extracted information:

from scrapy.item import Item, Field

class IsBullshitItem(Item):
    title = Field()
    author = Field()
    tag = Field()
    date = Field()
    link = Field()
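
As a quick sanity check (not part of the project files), an IsBullshitItem behaves like a dictionary restricted to the declared fields, which is what makes it easy to convert and dump into MongoDB later:

from isbullshit.items import IsBullshitItem

item = IsBullshitItem()
item['title'] = "Some blogpost title"   # fields are set and read like dict keys
item['author'] = "Balthazar"
print(dict(item))                       # converts cleanly to a plain dict, handy for MongoDB
# item['foo'] = "bar"                   # would raise KeyError: 'foo' is not a declared field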

Now, let’s implement our spider, in isbullshit_spiders.py:

from scrapy.contrib.spiders import CrawlSpider, Rule
from scrapy.contrib.linkextractors.sgml import SgmlLinkExtractor
from scrapy.selector import HtmlXPathSelector
from isbullshit.items import IsBullshitItem

class IsBullshitSpider(CrawlSpider):
    name = 'isbullshit'
    start_urls = ['http://isbullsh.it'] # urls from which the spider will start crawling
    rules = [
        # r'page/\d+' : regular expression for http://isbullsh.it/page/X URLs
        Rule(SgmlLinkExtractor(allow=[r'page/\d+']), follow=True),
        # r'\d{4}/\d{2}/\w+' : regular expression for http://isbullsh.it/YYYY/MM/title URLs
        Rule(SgmlLinkExtractor(allow=[r'\d{4}/\d{2}/\w+']), callback='parse_blogpost')
    ]

    def parse_blogpost(self, response):
        ...

Our spider inherits from CrawlSpider, which “provides a convenient mechanism for following links by defining a set of rules”. More info in the CrawlSpider documentation.

We then define two simple rules (a quick regex check right after this list shows which URLs each pattern matches):

  • Follow links pointing to http://isbullsh.it/page/X
  • Extract information from pages defined by a URL of pattern http://isbullsh.it/YYYY/MM/title, using the callback method parse_blogpost.
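
To make the two patterns concrete, here is a quick standalone check (pure Python, nothing Scrapy-specific; the link extractor applies its allow patterns with roughly the same search semantics):

import re

urls = [
    "http://isbullsh.it/page/2",                             # homepage pagination -> followed
    "http://isbullsh.it/2012/04/Web-crawling-with-scrapy",   # blogpost -> parse_blogpost
]

for url in urls:
    print("%s | page rule: %s | blogpost rule: %s" % (
        url,
        bool(re.search(r'page/\d+', url)),
        bool(re.search(r'\d{4}/\d{2}/\w+', url))))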

Extracting the data

To extract the title, author, etc., from the HTML code, we’ll use the scrapy.selector.HtmlXPathSelector object, which uses the libxml2 HTML parser. If you’re not familiar with this object, you should read the XPathSelector documentation.

We’ll now define the extraction logic in the parse_blogpost method (I’ll only show it for the title and the tag(s); it’s pretty much always the same logic):

def parse_blogpost(self, response):
    hxs = HtmlXPathSelector(response)
    item = IsBullshitItem()
    # Extract title
    item['title'] = hxs.select('//header/h1/text()').extract() # XPath selector for title
    # Extract tag(s)
    item['tag'] = hxs.select("//header/div[@class='post-data']/p/a/text()").extract() # XPath selector for tag(s)
    ...
    return item
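
For completeness, the lines elided above ("...") would fill in the remaining fields the same way. The author and date selectors below are placeholders to adapt to the actual markup of the page; the URL, on the other hand, is read straight from the response object:

    # Hypothetical selectors -- adapt them to the real HTML structure of the blogpost
    item['author'] = hxs.select("//header/p[@class='author']/a/text()").extract()
    item['date'] = hxs.select("//header/time/text()").extract()
    # The URL needs no XPath at all: it is already carried by the response
    item['link'] = response.url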

Note: To be sure of the XPath selectors you define, I’d advise you to use Firebug, Firefox’s built-in inspector, or an equivalent tool to inspect the HTML code of a page, and then test the selector in a Scrapy shell. That only works if the data is located consistently across all the pages you crawl.
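
For instance, with the Scrapy version used at the time of writing, scrapy shell <url> drops you into a Python prompt with an hxs selector already built from the fetched page, so you can try selectors interactively before committing them to the spider:

# In a terminal:  scrapy shell http://isbullsh.it/2012/04/Web-crawling-with-scrapy/
# At the Python prompt the shell opens (hxs is pre-built from the response):
hxs.select('//header/h1/text()').extract()                            # post title
hxs.select("//header/div[@class='post-data']/p/a/text()").extract()   # tag(s)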

Store the results in MongoDB

Each time the parse_blogpost method returns an item, we want it to be sent to a pipeline which will validate the data and store everything in our Mongo collection.

First, we need to add a couple of things to settings.py:

ITEM_PIPELINES = ['isbullshit.pipelines.MongoDBPipeline',]

MONGODB_SERVER = "localhost"
MONGODB_PORT = 27017
MONGODB_DB = "isbullshit"
MONGODB_COLLECTION = "blogposts"
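
Before going further, it can be worth checking that MongoDB is actually reachable with those values. A minimal check with the pymongo API used in this article (newer pymongo releases replace Connection with MongoClient) could look like this:

import pymongo

# Same values as the MONGODB_* settings above
connection = pymongo.Connection("localhost", 27017)
db = connection["isbullshit"]
print("Connected, existing collections: %s" % db.collection_names())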

Now that we’ve declared our pipeline and the MongoDB database and collection it will use, all that’s left is the pipeline implementation itself. We just want to make sure that we don’t have any missing data (e.g. a blogpost without a title, author, etc.).

Here is our pipelines.py file:

import pymongo

from scrapy.exceptions import DropItem
from scrapy.conf import settings
from scrapy import log


class MongoDBPipeline(object):
    def __init__(self):
        connection = pymongo.Connection(settings['MONGODB_SERVER'], settings['MONGODB_PORT'])
        db = connection[settings['MONGODB_DB']]
        self.collection = db[settings['MONGODB_COLLECTION']]

    def process_item(self, item, spider):
        valid = True
        for data in item:
            # here we only check that no field is empty,
            # but we could do any crazy validation we want
            if not item[data]:
                valid = False
                raise DropItem("Missing %s of blogpost from %s" % (data, item['link']))
        if valid:
            self.collection.insert(dict(item))
            log.msg("Item written to MongoDB database %s/%s" %
                    (settings['MONGODB_DB'], settings['MONGODB_COLLECTION']),
                    level=log.DEBUG, spider=spider)
        return item

Release the spider!

Now, all we have to do is change directory to the root of our project and execute:

$ scrapy crawl isbullshit

The spider will then follow all links pointing to a blogpost, retrieve the post title, author name, date, etc., validate the extracted data, and store it all in a MongoDB collection if validation goes well.
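
Once the crawl is over, a few lines of pymongo are enough to peek at what was stored (again with the old Connection API, for consistency with the pipeline above):

import pymongo

connection = pymongo.Connection("localhost", 27017)
blogposts = connection["isbullshit"]["blogposts"]

print("%d blogposts stored" % blogposts.count())
for post in blogposts.find().limit(5):
    print("%s -- %s" % (post.get('title'), post.get('link')))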

Pretty neat, hm?

Conclusion

This case is pretty simplistic: all URLs have a similar pattern, and all links are hard-coded in the HTML: there is no JS involved. In the case where the links you want to reach are generated by JS, you’d probably want to check out Selenium. You could make the spider more complex by adding new rules or more complicated regular expressions, but I just wanted to demo how Scrapy works, not get into crazy regex explanations.

Also, be aware that sometimes there’s a thin line between playing with web-scraping and getting into trouble.

Finally, when toying with web-crawling, keep in mind that you might just flood the server with requests, which can sometimes get you IP-blocked :)

Please, don’t be a d*ick.

See code on Github

The entire code of this project is hosted on Github. Help yourselves!

