Client Overview:
Our client is the leading retail chain in Uzbekistan, with about a million regular customers and more than 125 stores across 11 regions. The company operates supermarkets, neighborhood stores, convenience stores, a wholesale store, and an online supermarket. The client also runs two warehouses (dry and temperature-controlled) and a fruit-and-vegetable hub.
Project Objectives:
The client wants to replace its obsolete SAP BW-based DWH solution and deploy a robust, flexible, and scalable analytical cloud platform built on a cutting-edge technology stack. The project will enhance data quality and governance practices, support significant company growth, and lay the groundwork for advanced AI/ML solutions.
Responsibilities:
- Design and build scalable data pipelines to ingest, process, and transform large volumes of data from various sources.
- Implement ETL/ELT processes using Azure Data Factory, Databricks, and Synapse.
- Develop and optimize queries for performance and scalability.
- Maintain and support the analytical platform, ensuring high availability and reliability.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions.
- Monitor and troubleshoot data pipeline issues, ensuring data integrity and quality.
- Implement best practices for data engineering, including code reviews, testing, and version control.
- Develop and implement data models, ETL processes, and data integration workflows.
- Develop and implement scripts in Python and SQL to detect and correct data anomalies.
- Work in an Agile, collaborative environment to build, deploy, and maintain data systems.
- Ensure data solutions are compliant with security, privacy, and governance standards.
- Develop and maintain comprehensive documentation for all data engineering processes.

Requirements:
Azure, MySQL, Python, ETL, SQL, Azure Data Factory, Databricks, Synapse, Storage, Cloud, Azure Data Lake Storage, Tableau, Power BI, ML Concepts, AI, Data pipelines, BI Tools, Agile, Scrum.

Additionally:
- Flexible working hours and remote work possibility
- Mentoring program
- Training budget
- English lessons
- Compensation of certifications
- Active tech community
- International team
- Referral program
- Modern office
- Free coffee
- Kitchen
- Friendly atmosphere
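One of the responsibilities above is writing Python and SQL scripts to detect and correct data anomalies. As a minimal sketch of what such a check can look like, the snippet below flags rows that violate a few basic quality rules in a small retail-sales table. The column names (`transaction_id`, `store_id`, `sale_amount`), the rules, and the sample data are hypothetical illustrations, not part of the client's actual schema.

```python
# Minimal, hypothetical data-quality check sketch using pandas.
# Columns and rules are illustrative assumptions, not the client's schema.
import pandas as pd

def flag_anomalies(df: pd.DataFrame) -> pd.DataFrame:
    """Return the rows that violate any of three basic quality rules."""
    issues = []
    # Rule 1: required fields must be present.
    issues.append(df[df["store_id"].isna() | df["sale_amount"].isna()])
    # Rule 2: sale amounts must be positive.
    issues.append(df[df["sale_amount"] <= 0])
    # Rule 3: duplicate transaction IDs suggest double ingestion.
    issues.append(df[df.duplicated("transaction_id", keep=False)])
    # A row may break several rules; report each offending row once.
    return pd.concat(issues).drop_duplicates()

sales = pd.DataFrame({
    "transaction_id": [1, 2, 2, 3],
    "store_id": [10, None, 11, 12],
    "sale_amount": [25.0, 14.5, 14.5, -3.0],
})
print(flag_anomalies(sales))
```

In a production pipeline, a check like this would typically run as a validation step inside the orchestration tool (for example, a Databricks notebook task scheduled by Azure Data Factory), quarantining the flagged rows rather than printing them.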