English | 简体中文
x-crawl is a multifunctional Node.js crawler library.
- Crawl HTML, JSON, file resources, etc. with simple configuration
- Use the JSDOM library to parse HTML, or parse the HTML yourself
- Crawl data asynchronously or synchronously
- Get results via Promise or callback
- Polling support
- Human-like (randomized) request intervals
- Written in TypeScript
Taking npm as an example:
npm install x-crawl
As an example, get the title of https://docs.github.com/zh/get-started:
// Import module ES/CJS
import XCrawl from 'x-crawl'
// Create a crawler instance
const docsXCrawl = new XCrawl({
baseUrl: 'https://docs.github.com',
timeout: 10000,
intervalTime: { max: 2000, min: 1000 }
})
// Call fetchHTML API to crawl
docsXCrawl.fetchHTML('/zh/get-started').then((res) => {
const { jsdom } = res.data
console.log(jsdom.window.document.querySelector('title')?.textContent)
})
Create a crawler instance via new XCrawl. The request queue is maintained by each instance method call, not by the instance itself.
For more detailed types, please see the Types section
class XCrawl {
constructor(baseConfig?: IXCrawlBaseConifg)
fetchHTML(
config: IFetchHTMLConfig,
callback?: (res: IFetchHTML) => void
): Promise<IFetchHTML>
fetchData<T = any>(
config: IFetchDataConfig,
callback?: (res: IFetchCommon<T>) => void
): Promise<IFetchCommonArr<T>>
fetchFile(
config: IFetchFileConfig,
callback?: (res: IFetchCommon<IFileInfo>) => void
): Promise<IFetchCommonArr<IFileInfo>>
fetchPolling(
config: IFetchPollingConfig,
callback: (count: number) => void
): void
}
const myXCrawl = new XCrawl({
baseUrl: 'https://xxx.com',
timeout: 10000,
  // The interval between requests; it only takes effect when multiple requests are made
intervalTime: {
max: 2000,
min: 1000
}
})
The baseConfig passed in will be used by fetchHTML/fetchData/fetchFile as their default values.
Note: To avoid creating a new instance in each subsequent example, myXCrawl here is the crawler instance used in the fetchHTML/fetchData/fetchFile examples below.
The mode option defaults to async (a sketch follows this list).
- async: In batch requests, the next request is made without waiting for the current request to complete
- sync: In batch requests, the next request is made only after the current request completes
If an interval time is set, the request is only sent after the interval has elapsed.
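For example, a minimal sketch of a sync-mode instance (the baseUrl is a placeholder; the options come from the IXCrawlBaseConifg type in the Types section):
const syncXCrawl = new XCrawl({
  baseUrl: 'https://xxx.com',
  mode: 'sync', // each request waits for the previous one to finish
  timeout: 10000
})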
The intervalTime option defaults to undefined. If a value is set, the crawler waits for a period of time before each request, which limits concurrency and avoids putting too much pressure on the server (a sketch of both forms follows this list).
- number: A fixed amount of time to wait before each request
- Object: A value is picked at random between min and max, which looks more human-like
The first request does not trigger the interval.
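A minimal sketch of both intervalTime forms (the values are arbitrary):
// number: always wait 3000 ms before each request (except the first)
const fixedXCrawl = new XCrawl({ intervalTime: 3000 })

// object: wait a random time between 1000 ms and 2000 ms before each request
const randomXCrawl = new XCrawl({ intervalTime: { max: 2000, min: 1000 } })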
fetchHTML is a method of the above myXCrawl instance, usually used to crawl HTML.
- Look at the IFetchHTMLConfig type
- Look at the IFetchHTML type
fetchHTML(
config: IFetchHTMLConfig,
callback?: (res: IFetchHTML) => void
): Promise<IFetchHTML>
myXCrawl.fetchHTML('/xxx').then((res) => {
const { jsdom } = res.data
console.log(jsdom.window.document.querySelector('title')?.textContent)
})
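Since IFetchHTMLConfig is string | IRequestConfig (see the Types section), a request object and the optional callback can also be passed; a sketch with placeholder values:
myXCrawl.fetchHTML({ url: '/xxx', timeout: 5000 }, (res) => {
  // the callback receives the same IFetchHTML result as the Promise
  console.log(res.statusCode, res.data.html.length)
})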
fetchData is a method of the above myXCrawl instance, usually used to crawl APIs and obtain JSON data.
- Look at the IFetchDataConfig type
- Look at the IFetchCommon type
- Look at the IFetchCommonArr type
fetchData<T = any>(
config: IFetchDataConfig,
callback?: (res: IFetchCommon<T>) => void
): Promise<IFetchCommonArr<T>>
const requestConifg = [
{ url: '/xxxx', method: 'GET' },
{ url: '/xxxx', method: 'GET' },
{ url: '/xxxx', method: 'GET' }
]
myXCrawl.fetchData({
requestConifg, // Request configuration, can be IRequestConfig | IRequestConfig[]
  intervalTime: { max: 5000, min: 1000 } // Takes priority over the intervalTime set when creating myXCrawl
}).then(res => {
console.log(res)
})
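Because the Promise resolves with an IFetchCommonArr<T> (see the Types section), each entry can be inspected individually; a minimal sketch:
myXCrawl.fetchData({ requestConifg }).then((res) => {
  res.forEach((item) => {
    // each item is an IFetchCommon<T>: id, statusCode, headers, data
    console.log(item.id, item.statusCode, item.data)
  })
})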
fetchFile is a method of the above myXCrawl instance, usually used to crawl files such as images and PDF files.
- Look at the IFetchFileConfig type
- Look at the IFetchCommon type
- Look at the IFetchCommonArr type
- Look at the IFileInfo type
fetchFile(
config: IFetchFileConfig,
callback?: (res: IFetchCommon<IFileInfo>) => void
): Promise<IFetchCommonArr<IFileInfo>>
import path from 'node:path'

const requestConifg = [
{ url: '/xxxx' },
{ url: '/xxxx' },
{ url: '/xxxx' }
]
myXCrawl.fetchFile({
requestConifg,
fileConfig: {
storeDir: path.resolve(__dirname, './upload') // storage folder
}
}).then(fileInfos => {
console.log(fileInfos)
})
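Each entry of the resolved array is an IFetchCommon<IFileInfo> (see the Types section), so the stored file's metadata can be read from its data field; a minimal sketch:
myXCrawl.fetchFile({
  requestConifg,
  fileConfig: { storeDir: path.resolve(__dirname, './upload') }
}).then((fileInfos) => {
  fileInfos.forEach(({ data }) => {
    // data is an IFileInfo describing the stored file
    console.log(data.fileName, data.mimeType, data.size, data.filePath)
  })
})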
fetchPolling is a method of the myXCrawl instance, typically used to perform polling operations, such as getting news every once in a while.
- Look at the IFetchPollingConfig type
function fetchPolling(
config: IFetchPollingConfig,
callback: (count: number) => void
): void
myXCrawl.fetchPolling({ h: 1, m: 30 }, () => {
// will be executed every one and a half hours
// fetchHTML/fetchData/fetchFile
})
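For example, a sketch that re-crawls a page on every polling tick (the path is a placeholder):
myXCrawl.fetchPolling({ m: 30 }, (count) => {
  // runs every 30 minutes; count is how many times polling has fired
  myXCrawl.fetchHTML('/xxx').then((res) => {
    console.log(count, res.data.jsdom.window.document.title)
  })
})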
interface IAnyObject extends Object {
[key: string | number | symbol]: any
}
type IMethod = 'get' | 'GET' | 'delete' | 'DELETE' | 'head' | 'HEAD' | 'options' | 'OPTIONS' | 'post' | 'POST' | 'put' | 'PUT' | 'patch' | 'PATCH' | 'purge' | 'PURGE' | 'link' | 'LINK' | 'unlink' | 'UNLINK'
interface IRequestConfig {
url: string
method?: IMethod
headers?: IAnyObject
params?: IAnyObject
data?: any
timeout?: number
proxy?: string
}
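A hedged example of a single request configuration built from the fields above (all values are placeholders):
const requestItem = {
  url: '/xxx',
  method: 'GET',
  headers: { 'User-Agent': 'my-crawler' },
  params: { page: 1 },
  timeout: 5000,
  proxy: 'http://127.0.0.1:7890' // placeholder proxy server address
}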
type IIntervalTime = number | {
max: number
min?: number
}
interface IFetchBaseConifg {
requestConifg: IRequestConfig | IRequestConfig[]
intervalTime?: IIntervalTime
}
interface IXCrawlBaseConifg {
baseUrl?: string
timeout?: number
intervalTime?: IIntervalTime
mode?: 'async' | 'sync'
proxy?: string
}
type IFetchHTMLConfig = string | IRequestConfig
interface IFetchDataConfig extends IFetchBaseConifg {
}
interface IFetchFileConfig extends IFetchBaseConifg {
fileConfig: {
storeDir: string // store folder
extension?: string // filename extension
}
}
interface IFetchPollingConfig {
Y?: number // Year (365 days per year)
M?: number // Month (30 days per month)
d?: number // day
h?: number // hour
m?: number // minute
}
interface IFetchCommon<T> {
id: number
statusCode: number | undefined
headers: IncomingHttpHeaders // node:http type
data: T
}
type IFetchCommonArr<T> = IFetchCommon<T>[]
interface IFileInfo {
fileName: string
mimeType: string
size: number
filePath: string
}
interface IFetchHTML {
statusCode: number | undefined
headers: IncomingHttpHeaders
data: {
html: string // HTML String
jsdom: JSDOM // HTML parsing using the jsdom library
}
}
If you have any questions or needs, please submit an issue at https://github.com/coder-hxl/x-crawl/issues.