This is the test for backend developers.
Please do not push your solution to a publicly available repo.
This is a Python/Flask API to get a forecast for London during Oct 2017.
Check out the code and set up the service for development as follows:
# Set up the package to run in development mode:
python setup.py develop
# Running the webapp locally reloading when changes are made:
FLASK_APP=app.py FLASK_ENV=development flask run
# Optional: Running any test you might choose to write:
pytest -sv
When the app is running you can make the following curl request:
curl http://localhost:5000/ping
{
"name": "weatherservice",
"status": "ok",
"version": "1.0.0"
}
- Implement the get forecast endpoint.
- Build and run the API inside a docker container.
- Discuss interview questions.
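For the Docker task, a minimal Dockerfile sketch might look like the following. The base image, file names, and port are assumptions based on the setup commands above, not a prescribed solution:

```dockerfile
# Hypothetical sketch; Python version and file layout are assumptions.
FROM python:3.9-slim
WORKDIR /app
COPY . .
RUN python setup.py develop
ENV FLASK_APP=app.py
EXPOSE 5000
CMD ["flask", "run", "--host=0.0.0.0", "--port=5000"]
```

You could then build and run it with something like `docker build -t weatherservice .` followed by `docker run -p 5000:5000 weatherservice`.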
The task is to implement get_forecast() inside backend.py, returning data to serve GET requests of the following form:
curl http://<host:port>/<city>/<date>/<hourminute>/
curl http://localhost:5000/london/20171018/1500/
{
"description": "light rain",
"humidity": "100%",
"pressure": 1015.55,
"temperature": 290.44
}
curl http://localhost:5000/london/21171005/2200/
{
"message": "No data for 2117-10-05 22:00",
"status": "error"
}
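One way to shape get_forecast() so it produces both of the responses above is sketched below. The dummy dataset, the function signature, and the lookup key are all assumptions for illustration; the real test supplies its own London data:

```python
from datetime import datetime

# Hypothetical in-memory dummy data keyed by (date, hhmm) strings;
# the actual test data for London, Oct 2017 is assumed to live elsewhere.
DUMMY_DATA = {
    ("20171018", "1500"): {
        "description": "light rain",
        "humidity": "100%",
        "pressure": 1015.55,
        "temperature": 290.44,
    },
}


def get_forecast(city: str, date: str, hour_minute: str) -> dict:
    """Return forecast data for city/date/hhmm, or an error payload."""
    try:
        # Validate the date and time components together.
        when = datetime.strptime(date + hour_minute, "%Y%m%d%H%M")
    except ValueError:
        return {"status": "error", "message": "Invalid date/time"}

    record = DUMMY_DATA.get((date, hour_minute))
    if record is None:
        # No reading at that timestamp: mirror the error shape shown above.
        return {
            "status": "error",
            "message": "No data for {:%Y-%m-%d %H:%M}".format(when),
        }
    return record
```

The Flask view would then just look up the path parameters and serialise whichever dict comes back, setting an appropriate HTTP status for the error case.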
Please prepare answers for these questions in advance of the interview.
- How would you return data for a request between two data points in the dummy data?
- How will you scale the weather service to handle increasing load?
- For example, scaling the systems to 1k Requests Per Minute (RPM) and then 5k RPM.
- Where do you think the bottlenecks lie in scaling such a system?
- How do you ensure that the API response time is always within tolerable range?
- What if any data storage would you use as the backend for the weather service?
- If you had a storage backend, how would it scale?
Optional question:
- How might we minimise the use of the system resources (cloud hosting costs)?
- The system should still be able to scale as needed.
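For the question about requests that fall between two data points, one candidate answer is linear interpolation of the numeric fields between the surrounding readings. A minimal sketch, with timestamps and values chosen purely for illustration:

```python
from datetime import datetime

def interpolate(t0: datetime, v0: float, t1: datetime, v1: float,
                t: datetime) -> float:
    """Linearly interpolate a numeric reading between two timestamps."""
    span = (t1 - t0).total_seconds()
    frac = (t - t0).total_seconds() / span
    return v0 + (v1 - v0) * frac

# Hypothetical surrounding readings, three hours apart.
t0 = datetime(2017, 10, 18, 15, 0)
t1 = datetime(2017, 10, 18, 18, 0)
requested = datetime(2017, 10, 18, 16, 30)  # halfway between

temp = interpolate(t0, 290.44, t1, 291.04, requested)
```

Non-numeric fields such as the description would need a different rule, e.g. taking the value from the nearest reading.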