Document Type

Conference Paper

Abstract

Web page crawlers are an essential component of many web applications. The sheer size of the Internet poses problems for the design of web crawlers. All currently known crawlers implement approximations or impose limitations in order to maximize the throughput of the crawl, and hence the number of pages that can be retrieved within a given time frame. This paper proposes a distributed crawling concept designed to avoid approximations, limit network overhead, and run on relatively inexpensive hardware. A set of experiments and comparisons highlights the effectiveness of the proposed approach.

RIS ID

25476
