Data Engineer
Transforming raw data into reliable insights.
A hands‑on data engineering professional with experience building scalable pipelines, transforming multi‑source data and improving data quality using SQL, Python, Databricks and Spark.
About Me
I am a data engineer based in London, specialising in building resilient data pipelines and preparing AI‑ready datasets. My background combines a solid foundation in computer science with advanced analytical coursework from an MBA in Business Analytics. I translate business requirements into efficient technical solutions, ensuring data quality, governance and scalability. My mission is to enable organisations to make data‑driven decisions by delivering clean, trusted and timely data.
End‑to‑end ELT/ETL workflows across cloud platforms and hybrid environments.
Automated profiling, validation and reconciliation checks to reduce reporting risk.
Reusable SQL transformations and schema mappings for business‑critical datasets.
Translating business requirements into technical deliverables with cross‑functional teams.
Skills & Tools
Experience
ProudToBeMe, London
HighRadius Technologies
JPMorgan Chase
Boston Consulting Group
Projects
A selection of data engineering projects showcasing end‑to‑end pipeline development, cloud architecture and quality‑driven data solutions.
Built a scalable cloud‑based data workflow using REST APIs, Python, SQL and Databricks to ingest operational data and prepare 10+ analytics‑ready service reliability KPIs.
Designed and implemented an ETL framework to consolidate e‑commerce sales, customer and inventory data from multiple platforms into a unified data lake and warehouse.
Architected a real‑time data pipeline to ingest and process IoT sensor data from industrial equipment, enabling proactive monitoring and predictive maintenance.
Services
Design, build and maintain robust pipelines that ingest and process data from diverse sources.
Create efficient workflows using Airflow and dbt to deliver analytics‑ready datasets on schedule.
Architect cloud‑based warehouses or lakehouses, including schema design and optimisation.
Deploy and manage data solutions on AWS, Azure or GCP leveraging native services.
Integrate APIs, databases and third‑party systems into unified datasets with data lineage.
Apply transformation logic, deduplication and enrichment for high‑quality data.
Optimise complex SQL queries and database objects to improve performance and reduce costs.
Design logical and physical data models (star, snowflake, Data Vault) for analytics and ML.
Automate repetitive data tasks with triggers, scheduling and monitoring.
Prepare data for Power BI and Tableau, enabling actionable dashboards.
Credentials
Proficiency in Spark and Databricks for building scalable data pipelines.
Expertise in Azure data engineering services and Microsoft Fabric technologies.
Business acumen with technical knowledge of BI platforms and data solutions.
University of East London
Advanced coursework in data analysis, visualisation and quantitative decision‑making.
Siksha 'O' Anusandhan University
Programming, database systems, SQL, Azure data engineering and cloud technologies.
Why Work With Me
I combine technical depth with business insight to deliver trusted data solutions. My experience spans pipeline development, data modelling, quality assurance and cloud deployment. Employers and clients value my reliability, collaborative approach and commitment to clear documentation and governance. I focus on translating complex requirements into practical solutions that enhance decision‑making and drive measurable outcomes.