x-crawl

English | 简体中文

x-crawl is a multifunctional Node.js crawler library.

Features

  • Crawl HTML, JSON, file resources, etc. with simple configuration
  • Parse HTML with the JSDOM library, or parse it yourself
  • Supports asynchronous and synchronous crawling modes
  • Supports both Promise and callback styles for getting results
  • Polling support for recurring crawls
  • Human-like (randomized) request intervals
  • Written in TypeScript

Install

Using npm as an example:

npm install x-crawl

Example

As an example, get the title of https://docs.github.com/zh/get-started:

// Import module ES/CJS
import XCrawl from 'x-crawl'

// Create a crawler instance
const docsXCrawl = new XCrawl({
  baseUrl: 'https://docs.github.com',
  timeout: 10000,
  intervalTime: { max: 2000, min: 1000 }
})

// Call fetchHTML API to crawl
docsXCrawl.fetchHTML('/zh/get-started').then((res) => {
  const { jsdom } = res.data
  console.log(jsdom.window.document.querySelector('title')?.textContent)
})

Core concepts

XCrawl

Create a crawler instance via new XCrawl. Each call to an instance method (fetchHTML/fetchData/fetchFile) maintains its own request queue; the queue belongs to the call, not to the instance as a whole.

Type

For more detailed types, please see the Types section

class XCrawl {
  constructor(baseConfig?: IXCrawlBaseConifg)

  fetchHTML(
    config: IFetchHTMLConfig,
    callback?: (res: IFetchHTML) => void
  ): Promise<IFetchHTML>

  fetchData<T = any>(
    config: IFetchDataConfig,
    callback?: (res: IFetchCommon<T>) => void
  ): Promise<IFetchCommonArr<T>>

  fetchFile(
    config: IFetchFileConfig,
    callback?: (res: IFetchCommon<IFileInfo>) => void
  ): Promise<IFetchCommonArr<IFileInfo>>

  fetchPolling(
    config: IFetchPollingConfig,
    callback: (count: number) => void
  ): void
}

Example

const myXCrawl = new XCrawl({
  baseUrl: 'https://xxx.com',
  timeout: 10000,
  // Interval between requests; only takes effect when multiple requests are made
  intervalTime: {
    max: 2000,
    min: 1000
  }
})

Values passed in baseConfig become the defaults used by fetchHTML/fetchData/fetchFile.

Note: To avoid repeatedly creating instances, the fetchHTML/fetchData/fetchFile examples below reuse the myXCrawl instance created here.
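The defaulting behavior can be pictured as a shallow merge where per-call values win over constructor values. This is an illustrative sketch only, not the library's actual internals, and mergeConfig is a hypothetical name:

```typescript
// Sketch of base-config defaulting (illustrative only): values passed
// on an individual call override the defaults from the constructor.
interface BaseConfig {
  baseUrl?: string
  timeout?: number
}

function mergeConfig(base: BaseConfig, call: BaseConfig): BaseConfig {
  // spread order makes call-level values win over base-level defaults
  return { ...base, ...call }
}

const merged = mergeConfig(
  { baseUrl: 'https://xxx.com', timeout: 10000 },
  { timeout: 5000 }
)
// merged.baseUrl stays 'https://xxx.com'; merged.timeout becomes 5000
```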

Mode

The mode option defaults to 'async'.

  • async: in batch requests, the next request is sent without waiting for the current one to complete
  • sync: in batch requests, each request must complete before the next one is sent

If an interval time is set, the crawler waits for the interval to elapse before sending each request.
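The difference between the two modes can be sketched with plain promises. This is a simplified model of the behavior described above, not x-crawl's internals, and runBatch is a hypothetical name:

```typescript
// Simplified model of the two dispatch modes (not the library's code).
type Mode = 'async' | 'sync'

async function runBatch<T>(
  tasks: (() => Promise<T>)[],
  mode: Mode
): Promise<T[]> {
  if (mode === 'async') {
    // start every request immediately; Promise.all keeps input order
    return Promise.all(tasks.map((task) => task()))
  }
  // 'sync': wait for each request to finish before starting the next
  const results: T[] = []
  for (const task of tasks) {
    results.push(await task())
  }
  return results
}
```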

IntervalTime

The intervalTime option defaults to undefined. When set, the crawler waits between requests, which limits concurrency and avoids putting too much pressure on the target server.

  • number: a fixed time to wait before each request
  • Object: a random value between min and max, which looks more human

The first request does not trigger the interval.
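Resolving an intervalTime setting into an actual wait could look like the following hypothetical helper (the library's internals may differ), including the documented rule that the first request waits 0 ms:

```typescript
// Hypothetical resolution of intervalTime into a wait in milliseconds.
type IIntervalTime = number | { max: number; min?: number }

function resolveInterval(
  interval: IIntervalTime | undefined,
  requestIndex: number
): number {
  // the first request never triggers the interval
  if (requestIndex === 0 || interval === undefined) return 0
  if (typeof interval === 'number') return interval
  const min = interval.min ?? 0
  // a random wait in [min, max] looks more human than a fixed delay
  return min + Math.random() * (interval.max - min)
}
```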

fetchHTML

fetchHTML is a method of the myXCrawl instance above, usually used to crawl HTML.

Type

fetchHTML(
  config: IFetchHTMLConfig, 
  callback?: (res: IFetchHTML) => void
): Promise<IFetchHTML>

Example

myXCrawl.fetchHTML('/xxx').then((res) => {
  const { jsdom } = res.data
  console.log(jsdom.window.document.querySelector('title')?.textContent)
})

fetchData

fetchData is a method of the myXCrawl instance above, usually used to crawl APIs and obtain JSON data.

Type

fetchData<T = any>(
  config: IFetchDataConfig,
  callback?: (res: IFetchCommon<T>) => void
): Promise<IFetchCommonArr<T>>

Example

const requestConifg = [
  { url: '/xxxx', method: 'GET' },
  { url: '/xxxx', method: 'GET' },
  { url: '/xxxx', method: 'GET' }
]

myXCrawl.fetchData({ 
  requestConifg, // Request configuration, can be IRequestConfig | IRequestConfig[]
  intervalTime: { max: 5000, min: 1000 } // Overrides the instance-level intervalTime for this call
}).then(res => {
  console.log(res)
})

fetchFile

fetchFile is a method of the myXCrawl instance above, usually used to crawl files such as images and PDFs.

Type

fetchFile(
  config: IFetchFileConfig,
  callback?: (res: IFetchCommon<IFileInfo>) => void
): Promise<IFetchCommonArr<IFileInfo>>

Example

import path from 'node:path'

const requestConifg = [
  { url: '/xxxx' },
  { url: '/xxxx' },
  { url: '/xxxx' }
]

myXCrawl.fetchFile({
  requestConifg,
  fileConfig: {
    storeDir: path.resolve(__dirname, './upload') // storage folder
  }
}).then(fileInfos => {
  console.log(fileInfos)
})

fetchPolling

fetchPolling is a method of the myXCrawl instance above, typically used for polling operations, such as fetching news at regular intervals.

Type

function fetchPolling(
  config: IFetchPollingConfig,
  callback: (count: number) => void
): void

Example

myXCrawl.fetchPolling({ h: 1, m: 30 }, () => {
  // will be executed every one and a half hours
  // fetchHTML/fetchData/fetchFile
})
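x-crawl handles the timing internally; the following hypothetical helper only illustrates how the documented units (365 days per year, 30 days per month; see IFetchPollingConfig below) would combine into a single millisecond interval:

```typescript
// Hypothetical helper showing how a polling config could collapse into
// one millisecond interval, using the documented conventions
// (365 days per year, 30 days per month).
interface IFetchPollingConfig {
  Y?: number
  M?: number
  d?: number
  h?: number
  m?: number
}

function pollingToMs(config: IFetchPollingConfig): number {
  const { Y = 0, M = 0, d = 0, h = 0, m = 0 } = config
  const days = Y * 365 + M * 30 + d
  return ((days * 24 + h) * 60 + m) * 60 * 1000
}
// pollingToMs({ h: 1, m: 30 }) → 5400000 ms (one and a half hours)
```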

Types

IAnyObject

interface IAnyObject extends Object {
  [key: string | number | symbol]: any
}

IMethod

type IMethod =
  | 'get' | 'GET'
  | 'delete' | 'DELETE'
  | 'head' | 'HEAD'
  | 'options' | 'OPTIONS'
  | 'post' | 'POST'
  | 'put' | 'PUT'
  | 'patch' | 'PATCH'
  | 'purge' | 'PURGE'
  | 'link' | 'LINK'
  | 'unlink' | 'UNLINK'

IRequestConfig

interface IRequestConfig {
  url: string
  method?: IMethod
  headers?: IAnyObject
  params?: IAnyObject
  data?: any
  timeout?: number
  proxy?: string
}
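A config exercising these fields might look like the following; the URL, headers, and params are placeholder values, and the interfaces are repeated here only to keep the snippet self-contained:

```typescript
// Placeholder request config exercising the IRequestConfig fields.
interface IAnyObject {
  [key: string | number | symbol]: any
}

interface IRequestConfig {
  url: string
  method?: string // IMethod in the real types
  headers?: IAnyObject
  params?: IAnyObject
  data?: any
  timeout?: number
  proxy?: string
}

const listConfig: IRequestConfig = {
  url: '/api/list',
  method: 'GET',
  headers: { 'user-agent': 'my-crawler-demo' },
  params: { page: 1 },
  timeout: 5000
}
```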

IIntervalTime

type IIntervalTime = number | {
  max: number
  min?: number
}

IFetchBaseConifg

interface IFetchBaseConifg {
  requestConifg: IRequestConfig | IRequestConfig[]
  intervalTime?: IIntervalTime
}

IXCrawlBaseConifg

interface IXCrawlBaseConifg {
  baseUrl?: string
  timeout?: number
  intervalTime?: IIntervalTime
  mode?: 'async' | 'sync'
  proxy?: string
}

IFetchHTMLConfig

type IFetchHTMLConfig = string | IRequestConfig

IFetchDataConfig

interface IFetchDataConfig extends IFetchBaseConifg {
}

IFetchFileConfig

interface IFetchFileConfig extends IFetchBaseConifg {
  fileConfig: {
    storeDir: string // storage directory
    extension?: string // filename extension
  }
}

IFetchPollingConfig

interface IFetchPollingConfig {
  Y?: number // year (counted as 365 days)
  M?: number // month (counted as 30 days)
  d?: number // day
  h?: number // hour
  m?: number // minute
}

IFetchCommon

interface IFetchCommon<T> {
  id: number
  statusCode: number | undefined
  headers: IncomingHttpHeaders // node:http type
  data: T
}

IFetchCommonArr

type IFetchCommonArr<T> = IFetchCommon<T>[]

IFileInfo

interface IFileInfo {
  fileName: string
  mimeType: string
  size: number
  filePath: string
}

IFetchHTML

interface IFetchHTML {
  statusCode: number | undefined
  headers: IncomingHttpHeaders
  data: {
    html: string // HTML String
    jsdom: JSDOM // HTML parsing using the jsdom library
  }
}

More

If you have any questions or needs, please submit an issue at https://github.com/coder-hxl/x-crawl/issues .
