Scrapy's project architecture is built around "spiders", self-contained crawlers that are given a set of instructions. In the spirit of other don't-repeat-yourself frameworks such as Django,[4] Scrapy makes it easier to build and scale large crawling projects by allowing developers to reuse their code.
^"Frequently Asked Questions". Frequently Asked Questions, Scrapy 2.8.0 documentation. Archived from the original on 11 November 2020. Retrieved 28 July 2015.
^Montalenti, Andrew (October 27, 2012). "Web Crawling & Metadata Extraction in Python". Web Crawling & Metadata Extraction in Python - Speaker Deck. Archived from the original on September 19, 2020. Retrieved May 11, 2015.
^"Scrapy Companies". Scrapy | Companies using Scrapy. Archived from the original on 2020-11-12. Retrieved 2017-11-09.