Massively duplicating data when over 100 records #11
Comments
I've played around and managed to get an error message that could help pinpoint the problem area:
I found a temporary solution: I commented out the batching in `jsonStream.on("data")`. The same insert code is also launched in `jsonStream.on("end")`, so the rows still get written there once the stream finishes.
So somehow, as `insertRows[]` gets emptied, it gets refilled with the same data and sent again in a new batch, something like 100 times (?). It's worth looking into further to help the community; I didn't pinpoint the root cause, but it's probably something related to async code.
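For reference, a minimal sketch of the pattern being described, assuming a stream-json `StreamArray` parser and placeholder names (`insertRows`, `BATCH_SIZE`, `insertBatch`) — the actual code in the repository may differ:

```ts
import * as fs from "fs";
import StreamArray from "stream-json/streamers/StreamArray";

const BATCH_SIZE = 100;     // assumed batch size, matching the "over 100 records" symptom
let insertRows: any[] = []; // rows waiting to be inserted

// Placeholder for the real batch insert (building and running an INSERT statement).
async function insertBatch(rows: any[]): Promise<void> {
  /* INSERT INTO <table> ... */
}

const jsonStream = StreamArray.withParser();

jsonStream.on("data", async ({ value }) => {
  insertRows.push(value);
  if (insertRows.length >= BATCH_SIZE) {
    // Suspected problem: while this insert is awaited, further "data" events keep
    // firing and re-enter this handler before insertRows is cleared, so the same
    // rows can be sent again in new batches.
    await insertBatch(insertRows);
    insertRows = [];
  }
});

jsonStream.on("end", async () => {
  // The workaround above (commenting out the batching in "data") relies on this
  // handler: everything accumulated in insertRows is inserted once, at the end.
  if (insertRows.length > 0) {
    await insertBatch(insertRows);
    insertRows = [];
  }
});

fs.createReadStream("./collection.json").pipe(jsonStream);
```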
@spotvin42 hi mate. Thank you for reporting this issue. I'm also encountering the same problem :) I'm trying to migrate a Firestore collection. Commenting it out hasn't helped so far, and I've been using these commands within the firestore directory: one to dump the Firestore collection to a JSON file, and one to import the JSON file into Supabase (PostgreSQL).
@burggraf could you please take a look at it?
Had the same problem, you saved my life!
I believe the issue here is that the callback function provided to `jsonStream.on("data")` is asynchronous. The solution offered by @spotvin42 works by virtue of collecting every row into a single array and processing them all at once in the `end` handler, after the stream has finished parsing. However, this has the potential to cause performance/memory issues in the case of very large collections. An alternative which also avoids the problem of processing all records in a single batch is simply to remove the [...]
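As an illustration of what a batching-friendly fix could look like (a sketch reusing the names from the snippet above, not necessarily the exact change this comment had in mind): hand the current batch off synchronously and pause the stream while the insert is in flight, so re-entrant `data` events can neither resend nor grow the batch being written.

```ts
jsonStream.on("data", async ({ value }) => {
  insertRows.push(value);
  if (insertRows.length >= BATCH_SIZE) {
    // Take the batch synchronously so later "data" events start a fresh array
    // instead of seeing (and resending) the rows we are about to insert.
    const batch = insertRows;
    insertRows = [];
    jsonStream.pause();       // stop new "data" events while the insert runs
    await insertBatch(batch);
    jsonStream.resume();
  }
});
```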
In practice it'd probably be better to open a database transaction and then commit it in the `end` handler.
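A rough sketch of that idea using node-postgres (`pg`), again reusing the hypothetical names from the first snippet; the importer's real connection handling may differ:

```ts
import { Client } from "pg";

// Assumed connection config; adjust to however the importer is configured.
const client = new Client({ connectionString: process.env.DATABASE_URL });

async function runImport(path: string): Promise<void> {
  await client.connect();
  await client.query("BEGIN"); // open one transaction before any batch is inserted

  jsonStream.on("end", async () => {
    if (insertRows.length > 0) {
      await insertBatch(insertRows); // flush the final partial batch through the same client
    }
    await client.query("COMMIT");    // rows only become visible once the stream finished cleanly
    await client.end();
  });

  fs.createReadStream(path).pipe(jsonStream);
}
```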
Some more information to help out people:
Bug report
Describe the bug
Firestore data migration tool: When I try to import 245 rows from the JSON file, a bit over 25,000 rows get uploaded.
I tested with JSON files of around 5 entries and it works perfectly; in any case, see the template below:
To Reproduce
Steps to reproduce the behavior:
No errors are reported, but on Supabase 25,000+ rows are added instead of 245.
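To reproduce against a controlled dataset, a small helper script like the following (hypothetical, not part of the repo) can generate a JSON file with a known record count above the 100-record threshold; after importing it, the result can be checked with `SELECT count(*)` on the target table:

```ts
import * as fs from "fs";

// Write a JSON array with a known number of records (> 100 triggers the duplication).
const RECORD_COUNT = 245;
const records = Array.from({ length: RECORD_COUNT }, (_, i) => ({
  id: `doc-${i}`,
  name: `record ${i}`,
}));

fs.writeFileSync("./test_collection.json", JSON.stringify(records, null, 2));
console.log(`Wrote ${records.length} records to test_collection.json`);
```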
Expected behavior
Only the 245 rows contained in the JSON file should be inserted into the target table.
Additional context
It seems the TS files were not compiled into the JS files (I've pushed a few corrections for you to review), but that was not affecting this problem.
There are a few commented-out lines in the code; the problem could possibly come from there.
ChatGPT suggests it probably has something to do with the code running asynchronously in some places.