- Create a `.env` file in the root folder. Copy the contents of `.env.example` to `.env`, or feel free to create your own `.env` file following the example.
- Start the database by running `docker-compose.yaml` from the `/db` folder, or simply run the following in a terminal: `docker-compose -f ./db/docker-compose.yaml up -d`
- Install dependencies using `yarn`
- Seed the database using the script `yarn run seed`
- Run the application using `yarn start`
Test data is present in the repository in the file `exchange-offices.txt`. After changing the test data and before seeding, the DB must be dumped.
`/exchanger/top` - Returns the top N exchangers for each country. Accepts the param `limit`, defaulting to 3.
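The `limit` parameter handling described above can be sketched as follows. The `parseLimit` helper is hypothetical and not part of the repository; it only illustrates the "defaults to 3" behaviour:

```typescript
// Hypothetical helper: parse the `limit` query param, defaulting to 3.
// The name and location are illustrative, not part of the repository.
export function parseLimit(raw?: string, fallback = 3): number {
  const n = Number(raw);
  // Reject NaN, non-integers, and non-positive values.
  return Number.isInteger(n) && n > 0 ? n : fallback;
}

// Usage in a route handler:
// const limit = parseLimit(req.query.limit as string | undefined); // → 3 when absent
```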
- How to change the code to support different file format versions?
  - Implement parsers for different formats using `IDataParser`.
  - Implement a manager which will resolve the provided format to a parser.
  - Update `IExchangeOfficesSource` adapters to work with the manager.
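A minimal sketch of that design, assuming a parser interface and a format-resolving manager. The shape of `IDataParser` and the `ParserManager` name are assumptions; the real interface in the repository may differ:

```typescript
// Sketch only: the real IDataParser in the repository may have a different shape.
interface IDataParser {
  parse(raw: string): Record<string, unknown>[];
}

// Hypothetical manager resolving a format identifier to a registered parser.
class ParserManager {
  private parsers = new Map<string, IDataParser>();

  register(format: string, parser: IDataParser): void {
    this.parsers.set(format, parser);
  }

  resolve(format: string): IDataParser {
    const parser = this.parsers.get(format);
    if (!parser) throw new Error(`No parser registered for format: ${format}`);
    return parser;
  }
}

// Example: a trivial line-based "v1" parser.
const manager = new ParserManager();
manager.register("v1", {
  parse: (raw) => raw.split("\n").map((line) => ({ line })),
});
```

Adding a new format version then means registering one more parser, with no changes to the consuming code.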
- How will the import system change if in the future we need to get this data from a web API?
  - The source adapter internals will be updated to work with the API; the rest of the logic won't be touched and will keep working as it did.
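A hedged sketch of that adapter boundary, assuming the interface exposes a single load method (only the `IExchangeOfficesSource` name comes from the repository; the method signature and both adapter classes are illustrative):

```typescript
// Sketch: the real IExchangeOfficesSource signature may differ.
interface IExchangeOfficesSource {
  load(): Promise<string>;
}

// Current behaviour: read the raw dump from a file.
class FileSource implements IExchangeOfficesSource {
  constructor(private readRaw: () => Promise<string>) {}
  load(): Promise<string> {
    return this.readRaw();
  }
}

// Future behaviour: fetch the same raw data from a web API.
// Only this class is new; consumers of IExchangeOfficesSource are untouched.
class ApiSource implements IExchangeOfficesSource {
  constructor(
    private url: string,
    private fetcher: (url: string) => Promise<string>,
  ) {}
  load(): Promise<string> {
    return this.fetcher(this.url);
  }
}
```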
- If in the future it will be necessary to do the calculations using the national bank rate, how could this be added to the system?
  - I see two variants: slow & accurate, and quick.
  - Slow: fetch the relevant rates, map transactions with rates and do the calculations, and rework the query implementation (get rid of the raw SQL, since it works only with DB data and the fetched rates aren't there).
  - Quick: poll rates from the bank by cron, or when the bank reports new rates (with a webhook?). I assume the quick variant could be achieved by implementing a service which will handle new rates and update the DB.
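The quick variant could look roughly like this. All names here are hypothetical, and an in-memory map stands in for the real DB table; a real implementation would persist the rates so the existing SQL queries can join against them:

```typescript
// Hypothetical sketch of the "quick" variant: a service that receives new
// national-bank rates (from a cron poll or a webhook) and updates storage.
type Rate = { currency: string; rate: number; date: string };

class NationalRateService {
  private store = new Map<string, Rate>();

  // Called by the cron job or the webhook handler when the bank publishes rates.
  handleNewRates(rates: Rate[]): void {
    for (const r of rates) {
      this.store.set(`${r.currency}:${r.date}`, r);
    }
  }

  // Query code can then read rates directly alongside transaction data.
  getRate(currency: string, date: string): Rate | undefined {
    return this.store.get(`${currency}:${date}`);
  }
}
```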
- How would it be possible to speed up the execution of requests if the task allowed you to update market data once a day or even less frequently? Please explain all possible solutions you could think of.
  - Store all results in the DB so this data could be accessed in the future
  - Key/value storage (Redis?)
  - Store data in slices in some time-series DB (Influx?) and query only the data in the range of the request; this should work faster than doing the same operations in a regular DB
  - Store data slices in a CDN
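The simplest of these options, a key/value cache with a roughly daily TTL, can be sketched as follows. The class name and TTL are illustrative, and the in-memory map would be replaced by Redis in production:

```typescript
// Hypothetical in-memory cache with a TTL; in production this would be Redis.
class DailyCache<T> {
  private entries = new Map<string, { value: T; expiresAt: number }>();

  constructor(private ttlMs: number = 24 * 60 * 60 * 1000) {}

  // Return the cached value, or compute and cache it for up to one day,
  // so repeated requests skip the expensive DB aggregation.
  async getOrCompute(key: string, compute: () => Promise<T>): Promise<T> {
    const hit = this.entries.get(key);
    if (hit && hit.expiresAt > Date.now()) return hit.value;
    const value = await compute();
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
    return value;
  }
}
```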