Who are we?
Geneva is an initiative within Shell Trading that seeks to protect and grow $500m per annum by leveraging our full data-flow advantage in Crude and Products.
To achieve this, we are rebuilding our approach to fundamental analytics from the ground up to strengthen our trading bias. We harness the data available within Shell, along with all relevant external market data, and use cutting-edge technology to deliver human-centred digital products that give our Analysts and Traders an advantage in the market.
As Shell Trading, we have more and better market data than anyone else, but we do not take full advantage of it. Yet.
Purpose: The Data Engineer will work on data engineering projects within IT – T&S, focusing on the delivery of complex data management solutions by leveraging industry best practices. Works with the project team to build the most efficient data pipelines and data management solutions that help the business derive value from its data.
Accountabilities:
• Understands the business vision behind the data solutions and works with all project stakeholders to enable it.
• Works alongside other project team members to drive towards the common goal.
• Helps the business analysts create data mapping documents that define the end-to-end pipeline.
• Builds and delivers data pipelines that ingest data (structured and unstructured) from multiple sources into the target data lake (see the sketch after this list).
• Builds the data management layer in line with business needs and defined standards.
• Triages problems in the pipelines or data layers and fixes them effectively based on priority.
• Automates data loading for both initial and incremental loads.
• Supports the run-and-maintain team during critical issues and helps them reach faster resolution.
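To make the pipeline accountabilities above concrete, here is a minimal PySpark sketch of an incremental ingestion into a data lake. The paths, column names, and watermark convention are hypothetical placeholders for illustration, not a prescribed Shell design:

```python
from pyspark.sql import SparkSession, functions as F

# Hypothetical source and target locations (placeholders, not real endpoints).
SOURCE_PATH = "abfss://raw@example.dfs.core.windows.net/trades/"
TARGET_PATH = "abfss://curated@example.dfs.core.windows.net/trades/"

spark = SparkSession.builder.appName("trade-ingestion").getOrCreate()

def ingest(last_loaded_ts: str) -> None:
    """Load only records newer than the given watermark (incremental load)."""
    raw = spark.read.json(SOURCE_PATH)                        # semi-structured source
    fresh = raw.where(F.col("updated_at") > F.lit(last_loaded_ts))
    cleaned = (fresh
               .dropDuplicates(["trade_id"])                  # keeps re-runs idempotent
               .withColumn("load_date", F.current_date()))
    (cleaned.write
            .mode("append")
            .partitionBy("load_date")
            .parquet(TARGET_PATH))

ingest("1970-01-01T00:00:00Z")  # initial load: watermark before all data
```

Filtering on a watermark column and deduplicating on the business key is what lets the same job serve both the initial load and subsequent incremental loads.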
Dimensions:
A key element of this role is to work with Trading and Supply teams to deliver an enterprise solution in the data engineering area. This will mostly involve greenfield implementation of data analytics solutions that deliver high value to the customer.
Special Challenges:
• As a data engineer, is responsible for effectively handling data and deriving value from it.
• Responsible for providing innovative solutions that are best in class and also quick to market.
• No direct reports, but must be self-driven and able to handle assigned work independently.
• Drives effective best practices across multiple initiatives and teams, and is innovative and influential in managing constraints.
Requirements:
• 4+ years of IT industry experience, with at least 2 years on big data technologies.
• Interest and passion for big data technologies, and an appreciation of the value an effective data management solution can bring.
• Has worked on real data challenges and handled data of high volume, velocity, and variety.
• Understands infrastructure concepts and has worked with multiple cloud tenants.
• Proficient with data warehousing and traditional ETL technologies.
• Strong experience writing stored procedures, functions, and triggers in SQL.
• Strong experience with the Azure ecosystem, including data management on Databricks, ADF, Blob Storage, ADLS, etc.
• Strong coding experience in Spark, primarily PySpark; must be able to implement complex transformation requirements with ease (see the sketch after this list).
• Solid programming skills and Python experience.
• Implements complex data processing algorithms in real time and in an efficient manner using Spark and other technologies.
• Has worked with both structured and unstructured data, including the basics of NLP.
• Knowledge of newer database technologies such as MongoDB, Cosmos DB, Cassandra, etc.
• Knowledge of the data analytics area, using visualization and data modelling tools.
• Excellent analytical and problem-solving skills, with a willingness to take ownership and resolve technical challenges.
• Excellent communication and stakeholder management skills.
• Industry certification in Data Engineering is an added advantage.
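As a flavour of the PySpark transformation work described above, here is a minimal sketch of a windowed aggregation over hypothetical trade data; the product names, columns, and window size are assumptions made for the example:

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("transform-example").getOrCreate()

# Hypothetical trade data; the schema and values are illustrative only.
trades = spark.createDataFrame(
    [("Brent", "2024-01-01", 82.1), ("Brent", "2024-01-02", 83.4),
     ("WTI",   "2024-01-01", 77.9), ("WTI",   "2024-01-02", 78.6)],
    ["product", "trade_date", "price"],
)

# A typical "complex transformation": a per-product trailing average
# expressed as a window specification rather than a self-join.
w = (Window.partitionBy("product")
           .orderBy("trade_date")
           .rowsBetween(-6, 0))          # trailing 7-row window

enriched = trades.withColumn("avg_7d_price", F.avg("price").over(w))
enriched.show()
```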
As per company norms.