Data Engineer – Google Cloud
For one of the leading banks in Amsterdam, we are looking for a Data Engineer with Google Cloud experience.
You will be working on a trading platform that consists of multiple components enabling real-time and batch calculation of financial risk metrics and simulations, driven by new banking regulations. This platform sits at the heart of the IT landscape for our global dealing rooms and risk managers in Asia, Europe and the Americas, with 7-9 scrum teams spread across 4 locations (Amsterdam, Brussels, Bucharest and Singapore).
The platform already exists and is being productionized further. It has distributed storage (Hadoop, S3, etc.), uses Hive and Spark (and perhaps Flink/Druid in the future), and offers a Python and SQL UI for end users. Our ambition is to take care of the ingestion, cleaning, storing and merging (ETL) of large volumes of data from various sources (upstream trading and risk systems), and perhaps to develop pre-defined reports for regulatory purposes. There may also be a (secondary) element of API development on the platform to allow our external user interface to interact with the data. Further nuances and details can be explained by the interviewers (all engineers/architects in the team).
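To give a flavour of the kind of ETL work involved, below is a minimal sketch of a Spark batch job in Java that reads raw trade records, drops incomplete rows and persists a cleaned Hive table. The paths, table names and column names are hypothetical and only illustrate the style of work; they are not part of the actual platform.

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;

import static org.apache.spark.sql.functions.col;

// Minimal ETL sketch: read raw trade records, drop incomplete rows,
// deduplicate, and persist the cleaned result for downstream reporting.
public class TradeIngestionJob {

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("trade-ingestion")   // hypothetical job name
                .enableHiveSupport()
                .getOrCreate();

        // Source path and column names are illustrative only.
        Dataset<Row> rawTrades = spark.read()
                .parquet("s3a://trading-landing/trades/2024-06-01/");

        Dataset<Row> cleaned = rawTrades
                .filter(col("trade_id").isNotNull())
                .filter(col("notional").gt(0))
                .dropDuplicates("trade_id");

        // Persist to a Hive table that reports and the SQL UI can query.
        cleaned.write()
                .mode("overwrite")
                .saveAsTable("risk.cleaned_trades");

        spark.stop();
    }
}
```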
We estimate a 2-3 year backlog of collecting, cleaning, filtering and storing data, and building/offering reports. API development is a parallel activity to give our "Position Management" user interface access to the data. Kafka ingestion is another parallel work stream, which might (or might not) be picked up by another team.
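As an indication of the (secondary) API work, here is a minimal Spring Boot controller sketch for a read-only endpoint that could give the Position Management UI access to position data. The endpoint path, the PositionView fields and the PositionService interface are all hypothetical; the actual API surface will be defined with the team.

```java
import java.util.List;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

// Minimal Spring Boot controller exposing position data to an external UI.
@RestController
@RequestMapping("/api/positions")
public class PositionController {

    // Simple immutable view of a position, as the UI might consume it.
    public record PositionView(String bookId, String instrument, double notional) {}

    // Hypothetical service that queries the underlying data platform.
    public interface PositionService {
        List<PositionView> findByBook(String bookId);
    }

    private final PositionService positionService;

    public PositionController(PositionService positionService) {
        this.positionService = positionService;
    }

    // GET /api/positions/{bookId} returns the positions for one trading book.
    @GetMapping("/{bookId}")
    public List<PositionView> positionsForBook(@PathVariable String bookId) {
        return positionService.findByBook(bookId);
    }
}
```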
Required skills and experience:
- Google Cloud Platform frameworks/APIs (BigQuery, Bigtable, etc.); see the sketch after this list
- Core Java and modern Java frameworks (Spring/Spring Boot)
- Experience with and insight into the Spark API and Spark functions
- Computer science fundamentals (memory, performance optimization)
- Ability to create sound business logic/algorithms
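As an indication of the BigQuery side of the work, the snippet below is a minimal sketch using the Google Cloud BigQuery Java client to run a parameterized aggregation query. The dataset, table and column names (risk_dataset.daily_metrics, book_id, exposure, as_of_date) are hypothetical and not references to the actual platform.

```java
import com.google.cloud.bigquery.BigQuery;
import com.google.cloud.bigquery.BigQueryOptions;
import com.google.cloud.bigquery.FieldValueList;
import com.google.cloud.bigquery.QueryJobConfiguration;
import com.google.cloud.bigquery.QueryParameterValue;
import com.google.cloud.bigquery.TableResult;

// Minimal sketch: run a parameterized aggregation over a (hypothetical)
// risk metrics table in BigQuery and print the result per trading book.
public class RiskMetricsQuery {

    public static void main(String[] args) throws InterruptedException {
        BigQuery bigquery = BigQueryOptions.getDefaultInstance().getService();

        // Dataset and table names are illustrative only.
        String sql =
                "SELECT book_id, SUM(exposure) AS total_exposure "
              + "FROM `risk_dataset.daily_metrics` "
              + "WHERE as_of_date = @asOfDate "
              + "GROUP BY book_id";

        QueryJobConfiguration query = QueryJobConfiguration.newBuilder(sql)
                .addNamedParameter("asOfDate", QueryParameterValue.date("2024-06-01"))
                .build();

        TableResult result = bigquery.query(query);
        for (FieldValueList row : result.iterateAll()) {
            System.out.printf("%s -> %.2f%n",
                    row.get("book_id").getStringValue(),
                    row.get("total_exposure").getDoubleValue());
        }
    }
}
```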