
For large sets of data the downloaded survey results are incomplete #1309

Closed
MgowanoJr opened this issue Jun 6, 2022 · 10 comments
Labels
bug Something isn't working

Comments

@MgowanoJr

To replicate:

  1. Go to South Africa Site
  2. Navigate to Form Data
  3. Open the Survey COVID-19 Health Worker Vaccine Survey (https://za.goodinternet.org/admin/forms/submissions/292/)
  4. Download CSV of results

Expectation:

The dataset has 1,180 pages of 20 items each, which translates to a download with around 23,600 responses.

Actual:

The CSV file download contains only around 4,800 responses.

@cbunicef cbunicef added the bug Something isn't working label Jun 7, 2022
@cbunicef
Collaborator

cbunicef commented Jun 20, 2022

A few notes to add on how this issue behaves:

  • When downloading the XLSX file, a 504 eventually appears and no file is downloaded
  • When downloading the CSV file, the download starts and continues for very close to the 30-second timeout period, then completes without error
    • The number of rows included varies, but there never seems to be an incomplete row, i.e. no corrupted data

@istride @ChrisMarsh82

@istride
Contributor

istride commented Jun 21, 2022

I have increased the timeout from 30 to 60 seconds for the South Africa site. Let me know if that helps.

@cbunicef If this resolves the problem then it is not related to the app itself but a deployment issue, and I recommend this issue be closed and another raised in the DevOps project instead to apply the fix permanently for all sites.

@cbunicef
Collaborator

When trying to download XLSX I now get:

An error occurred.
Sorry, the page you are looking for is currently unavailable.
Please try again later.

If you are the system administrator of this resource then you should check the error log for details.

Faithfully yours, nginx.

When downloading CSV the download stops at 30 seconds, with around 5K of 24K rows included.

@istride
Contributor

istride commented Jun 22, 2022

There are at least two other places where timeouts are set. After setting them all to 60 seconds, the problem remains for the XLSX download, and the CSV has double the number of rows (12K) but is still incomplete.

I would not recommend increasing the timeout any further; it would need to be at least two minutes for this particular survey, and other surveys may require even more time. I don't think there is a way to return an error response in the case of CSV downloads, because the server has already sent a successful response that then gets cut short.

Contrary to what I suggested before, I now think this issue requires more development of the app to resolve.
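To illustrate why no error can be signalled once a CSV download has started, here is a minimal sketch (not IoGT code; the row data is made up) of a streaming CSV generator of the kind a streaming response iterates over. The status line and headers go out with the first chunk, so if the worker is killed partway through, the client just receives a truncated but well-formed file:

```python
import csv
import io

def iter_csv_rows(rows):
    """Yield the CSV export one encoded line at a time, as a streaming
    response would. The 200 status and headers are sent with the first
    chunk, so a later failure cannot change the response status."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for row in rows:
        writer.writerow(row)
        yield buf.getvalue().encode("utf-8")
        buf.seek(0)
        buf.truncate(0)

rows = [["id", "answer"]] + [[n, "yes"] for n in range(5)]
chunks = []
for i, chunk in enumerate(iter_csv_rows(rows)):
    if i == 3:  # simulate the worker being killed mid-stream
        break
    chunks.append(chunk)

# The client sees a shorter file with no error indication at all.
partial = b"".join(chunks)
```

Every chunk ends on a row boundary, which also matches the observation above that the truncated downloads never contain a corrupted row.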

@cbunicef
Collaborator

Thanks @istride. If we think the time it takes to run the query will have an impact on the API development then let's please consider that early.

@ChrisMarsh82
Contributor

Change the CSV export so that it does not stream: the file will only download once it is fully complete. This will stop users from getting incomplete data and thinking they have a complete data set.

Investigate: can we give a specific error when we get a timeout error in this area?
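A minimal sketch of the non-streaming approach (not IoGT code; the function names and row data are hypothetical): the entire CSV is assembled in memory first, so nothing reaches the browser until every row has been written, and a timeout then surfaces as a visible error instead of a silently truncated file:

```python
import csv
import io

def build_full_csv(rows):
    """Assemble the whole CSV in memory before the response is built.
    If this step exceeds the timeout, the user gets an error page
    rather than a partial download."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    for row in rows:
        writer.writerow(row)
    return buf.getvalue().encode("utf-8")

payload = build_full_csv([["id", "answer"], [1, "yes"], [2, "no"]])
# Only now would the view construct the HTTP response, e.g. in a
# Django-style view (hypothetical):
#   HttpResponse(payload, content_type="text/csv")
```

The trade-off is memory: the full export is held in the worker process before anything is sent.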

@ChrisMarsh82
Contributor

Possible solution: downloads go to a separate area that users can download from once all the data has been collected.

@istride
Contributor

istride commented Nov 23, 2022

Gunicorn is a low-level component that accepts HTTP requests and creates worker processes that run the IoGT app to service those requests. By default, Gunicorn uses "sync" workers, which each run in their own process, handle a single request at a time, and are killed unconditionally once the timeout threshold is reached - this is why the CSV is cut short. Extending the timeout is not a scalable solution because there is always the risk that a larger export will exceed it.

A better solution would be to use an asynchronous worker type that can cope with an operation taking an undefined amount of time, such as streaming a CSV file. Gunicorn has the option of using threads, which are like lightweight processes: many threads may exist within one process.

One benefit of threads is that a request can take longer than the worker timeout, because the worker keeps notifying the master process that it is not frozen and should not be killed.

We can try the threaded approach to see if it solves this issue. I'll find out what changes are required to make this possible.
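A sketch of the kind of configuration change involved, assuming a standard gunicorn.conf.py (the file name and the numbers here are illustrative assumptions, not values from the IoGT repo):

```python
# gunicorn.conf.py -- hypothetical values for illustration
workers = 2              # number of worker processes
threads = 4              # threads per worker; setting threads > 1 makes
                         # Gunicorn use the threaded worker class
worker_class = "gthread" # explicit for clarity
timeout = 30             # with gthread workers this acts as a liveness
                         # check rather than a hard per-request limit, so
                         # a long-running download keeps the worker alive
```

The same settings can also be passed on the command line (e.g. --threads 4 --worker-class gthread).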

@istride
Contributor

istride commented Nov 28, 2022

It has been quite straightforward to configure threads on our sandbox instance, and I have not noticed any adverse effects so far. I have taken the liberty of configuring the South Africa instance in the same way, to see whether this issue still exists or not.

I'm glad to say that I managed to download what I think is the complete set of results (27,368 rows inc. header). The XLSX download was also successful in that something was downloaded; however, there were only 23,601 rows in the Excel file, and I can't explain why that is.

There is another issue with the Excel download: there is very little indication that anything is happening once the download button is clicked. A frustrated user may click again, and again, eventually occupying several worker threads with the same task simultaneously, possibly leading to downtime for the site in general. I have no proof that this would happen, but in any case I think it would be prudent to give users more feedback about the download process.

@cbunicef
Collaborator

Great to know that the threading has worked!

The resulting data is a strange case... when I downloaded the CSV file a few minutes ago I got 23,601 rows, and the XLSX download gave me 23,601 as well. Based on the number of pages shown in the Admin Panel (1,180) and the rows per page (20, with the last page full), we would expect 23,601 rows including the header. So I'm not sure why you got so many rows (27,368) when you downloaded the CSV - but I'm also suspicious that we would have exactly 23,600 responses.

Now I am seeing that downloading the XLSX results in gateway timeout, CSV is still working.

For user feedback, the simplest solution might be to spawn a new tab when the button is clicked and have the download begin there.
