About the Role
Duties:
- Designing and implementing solutions for processing large and unstructured datasets, including Data Lake Architecture and Streaming Architecture
- Implementing, optimizing, and testing modern DWH/Big Data solutions using Databricks Platform within a Continuous Delivery/Continuous Integration environment
- Improving data processing efficiency and managing migrations from on-premises to public cloud platforms
- Developing Data, AI, and ML applications, as well as Generative AI solutions
Requirements:
- 4+ years of experience in Big Data or Cloud projects in the areas of processing and visualization of large and/or unstructured datasets (including at least 1 year of hands-on Databricks experience)
- Practical knowledge of at least one public cloud platform in the Storage, Compute (including Serverless), Networking, and DevOps areas, backed by commercial project experience
- At least basic knowledge of SQL and one of the following programming languages: Python, Scala, Java, or Bash
- Very good command of English
Frequently used technologies:
- Databricks
- Python/PySpark
- Cloud: Azure, AWS, GCP
- SQL
Offer:
- Extensive professional and language training package
- Cloud training (the company is a platinum partner of AWS, Azure, GCP)
- Medical care, insurance, Multisport
- Car subscription (preferential lease of a new car – Master Benefit)
- Flexible working hours
- Free parking and bike boxes
Permanent job offer.
Location: Wrocław