Commit

Uploaded on Feb 27, 2020, 14:49
TomoeMami committed Feb 27, 2020
1 parent fa17c79 commit 8ad7e5b
Showing 3 changed files with 12 additions and 11 deletions.
2 changes: 1 addition & 1 deletion README.org
@@ -32,4 +32,4 @@
 - Gitee does not cache images for you; like a local markdown client, it connects directly to the image source.
 - If you need a local backup of text and images, use this backup tool: [[https://github.com/shuangluoxss/Stage1st-downloader][S1Downloader]]
 - The code is badly written, please bear with it! If there is a dedicated thread you would like me to archive, open an issue or PM me on S1.
-- Backups of some of the epidemic threads come from https://gitlab.com/memory-s1/virus , borrowed to fill in threads that had not been archived.
+- Backups of epidemic threads No. 3 and No. 4 come from https://gitlab.com/memory-s1/virus
5 changes: 3 additions & 2 deletions RefreshingData.json
@@ -1,4 +1,5 @@
-[
+{
+"content":[
 {
   "id": "1889771",
   "totalpage": 6,
@@ -1623,4 +1624,4 @@
   "lastedit": "1582777737",
   "category": "外野"
 }
-]
+]}
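The JSON change above wraps the former top-level thread list in a `{"content": [...]}` object. A minimal sketch of the before/after shapes (field values copied from the sample entry in the diff):

```python
import json

# Old format: a bare top-level list of thread records.
# New format (this commit): the same list nested under a "content" key.
old_format = [
    {"id": "1889771", "totalpage": 6,
     "lastedit": "1582777737", "category": "外野"}
]
new_format = {"content": old_format}

# Code that used to read data[i] must now read data["content"][i].
print(new_format["content"][0]["id"])  # → 1889771
```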
16 changes: 8 additions & 8 deletions s1refresher.py
@@ -144,8 +144,8 @@ def FormatStr(namelist, replylist):
         thdata=json.load(f)
     savethdata = thdata[:]
     for i in range(len(thdata)):
-        ThreadID = thdata[i]['id']
-        lastpage = int(thdata[i]['totalpage'])
+        ThreadID = thdata['content'][i]['id']
+        lastpage = int(thdata['content'][i]['totalpage'])
         RURL = 'https://bbs.saraba1st.com/2b/thread-'+ThreadID+'-1-1.html'
         s1 = requests.get(RURL, headers=headers, cookies=cookies)
         # s1 = requests.get(RURL, headers=headers)
@@ -154,10 +154,10 @@ def FormatStr(namelist, replylist):
         namelist, replylist,totalpage,titles= parse_html(data)
         if(totalpage > lastpage):
             if(totalpage > 50):
-                filedir = rootdir+thdata[i]['category']+'/'+str(ThreadID)+titles+'/'
+                filedir = rootdir+thdata['content'][i]['category']+'/'+str(ThreadID)+titles+'/'
                 mkdir(filedir)
             else:
-                filedir = rootdir+thdata[i]['category']+'/'
+                filedir = rootdir+thdata['content'][i]['category']+'/'
             # Ensure that a thread with exactly 50 pages is re-downloaded in time instead of skipping straight to page 51
             startpage = (lastpage-1)//50*50+1
             ThreadContent = [' ']*50
@@ -181,10 +181,10 @@ def FormatStr(namelist, replylist):
                 f.writelines(ThreadContent)
                 ThreadContent = [' ']*50
                 PageCount = 0
-        savethdata[i]['totalpage'] = totalpage
-        savethdata[i]['lastedit'] = str(int(time.time()))
-        savethdata[i]['title'] = titles
-        if((int(time.time()) - int(savethdata[i]['lastedit'])) > 518400 or totalpage == 1):
+        savethdata['content'][i]['totalpage'] = totalpage
+        savethdata['content'][i]['lastedit'] = str(int(time.time()))
+        savethdata['content'][i]['title'] = titles
+        if((int(time.time()) - int(savethdata['content'][i]['lastedit'])) > 518400 or totalpage == 1):
             savethdata.pop(i)
     with open(rootdir+'RefreshingData.json',"w",encoding='utf-8') as f:
         f.write(json.dumps(savethdata,indent=2,ensure_ascii=False))
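Because the refresher now expects the wrapped layout, an old-format RefreshingData.json would need converting before this version of the script can read it. A hypothetical one-off migration helper (not part of this commit; `migrate` is an assumed name) could look like:

```python
import json

def migrate(old_json_text: str) -> str:
    """Wrap an old-format thread list into the new {"content": [...]} layout.

    Hypothetical helper, not part of the commit; it only illustrates the
    schema change that s1refresher.py now assumes.
    """
    data = json.loads(old_json_text)
    if isinstance(data, list):          # old format: bare top-level list
        data = {"content": data}        # new format: wrapped under "content"
    # Match the script's own dump style (indent=2, keep non-ASCII characters).
    return json.dumps(data, indent=2, ensure_ascii=False)

# An already-migrated file passes through unchanged.
print(json.loads(migrate('[{"id": "1889771"}]'))["content"][0]["id"])  # → 1889771
```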
