Wrocław, Lower Silesian Voivodeship, Poland
HCL Poland
26.04.2024
Information about the position
technologies-expected:
Jira
SQL Server
Power BI
SQL
technologies-optional:
Kafka
Hadoop
Spark Streaming
about-project:
You will be an essential part of the data engineering organization, the SAP program community and Pandora's extended Data & Analytics community. You will work within the data product engineering team in a virtual, agile-based organization.
As we are a growing team, you will have the chance to influence our culture and working tools, and to work together with other data engineers, front-end engineers, architects, scrum masters, product owners and many more colleagues from Europe, India and Thailand.
responsibilities:
Build high-quality data products on our Azure PaaS data platform
At HCL, we are growing our Data & Analytics footprint. As our new data engineer, you will collaborate with product owners, architects and other key profiles across the global organization. You will help business verticals and clusters maximize the value of our data and drive business change as a result.
Your primary focus will be building and implementing scalable data products on our global data platform together with other smart engineers in the organization. You will concentrate on building out the analytical stack linked to our SAP S/4HANA implementation, which will replace our current ERP landscape. We see significant potential in enhancing our solutions with SAP as the core engine.
Further to this you will:
Work closely with your colleagues within a product team to design scalable, efficient and stable products
Work closely with product owners to build, enhance and deliver features that support Pandora in driving commercial business cases and efficiencies within reporting, analytics and other data-driven use cases
Create and maintain high quality data pipeline architecture
Obtain relevant business and system knowledge to convert requirements efficiently to technical design documentation and conceptual data modelling
Ingest large, complex data sets, automate manual processes, optimize data delivery and improve data quality
Own your data products and track continuous stability and benefits
Live and breathe a "build once, consume many" culture in which we enable others to use our data products for maximum added value
Work with stakeholders and teams to assist with data-related technical challenges
Follow best practices and existing guidelines, but also push the limits
Thrive in a fast-paced environment, motivated by challenges as a true problem solver
We are on a journey from IaaS-based Microsoft/QV solutions to Azure PaaS, using well-known tools and services such as ADF, Data Lake, Synapse, Analysis Services and Databricks.
A Team Player Passionate About Data and Engineering
Some would say that you are a real team player: product-oriented, with a can-do attitude and good stakeholder management skills. You are structured and analytical, always aiming for great results. Furthermore, you are a confident communicator, able to explain your solution design and tailor the message to the audience.
requirements-expected:
Proven experience with ADF, Databricks, Synapse, tabular models/Analysis Services, Power BI and SQL Server in particular
Experience with at least one object-oriented, functional, scripting or querying language, such as Java, Scala, Python, SQL or DAX
A DevOps and CI/CD focus, preferably with Jira
Hands-on experience with Git and coding best practices
Experience in working with relational data sets using SQL
Experience with data warehousing, dimensional modelling and cubes
Experience in building and optimizing data pipelines, architectures and data sets
Experience in working with Azure, AWS or other cloud providers
Experience designing, building and delivering production-ready data products at an enterprise level
Highly proficient in English and able to communicate at all levels