Job Information
Location: Warsaw, Poland (Hybrid Work Model - Preferred)
Employment Type: Permanent
Job Description:
- Knowledge of Python and PySpark for developing data pipelines and performing data transformations (a minimal sketch follows this list)
- Strong technical development skills in PySpark are a must
- Must have hands-on experience with PySpark DataFrames, RDDs, DAGs, partitioning, Spark SQL, optimization, and clustering
- Knowledge of Azure SQL, Databricks, and ADF is required
- Experience working with Delta files and tables, Parquet, and CSV file formats
- Develops and maintains ingestion pipelines
- Develops and maintains data wrangling code, wrangling pipelines, and database integrations
- Develops algorithms based on the requirements stated by the use case
- Knowledgeable in cloud technologies, primarily MS Azure, including Databricks, ADF, SQL DB, Storage Accounts, Key Vault, Application Gateways, VNets, Azure Portal management, and Power BI integration
- Focus on end-user and customer centricity
- Strong oral and written communication skills
- Passion for learning new tools, languages, and frameworks
- Strong problem-solving skills; able to work with minimal direct guidance
- Experienced in Cloud services
- Discipline in writing technical and non-technical documentation
- Overall experience: 5 years
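
The day-to-day work described above centers on PySpark DataFrame and Spark SQL pipelines over CSV, Parquet, and Delta data. As a minimal, purely illustrative sketch (file paths, column names, and the partition count are assumptions, not details from the posting), such an ingestion and transformation pipeline might look like this:

    # Minimal PySpark pipeline sketch; paths and columns are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

    # Ingest a raw CSV source.
    orders = (
        spark.read
        .option("header", "true")
        .option("inferSchema", "true")
        .csv("/mnt/raw/orders.csv")
    )

    # DataFrame wrangling: deduplicate, type the date column, and control partitioning.
    cleaned = (
        orders
        .dropDuplicates(["order_id"])
        .withColumn("order_date", F.to_date("order_date"))
        .repartition(8, "order_date")
    )

    # Spark SQL over the same data through a temporary view.
    cleaned.createOrReplaceTempView("orders")
    daily = spark.sql(
        "SELECT order_date, COUNT(*) AS order_count FROM orders GROUP BY order_date"
    )

    # Persist the result as Parquet; on Databricks the target could be a Delta table instead.
    daily.write.mode("overwrite").partitionBy("order_date").parquet("/mnt/curated/daily_orders")

On Databricks, the same write could target Delta via .format("delta"), which lines up with the Delta, Parquet, and CSV formats listed in the requirements.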
Certification (must have):
- Azure Fundamentals certification (AZ-900)
Mandatory Skills:
- Apache Spark
- Django
- Flask
- Nginx
- Python
Seniority level
Associate
Employment type
Full-time
Job function
Information Technology
Industries
Staffing and Recruiting