We are developing advanced Big Data and Machine Learning solutions that enable near real-time processing and analysis of large data volumes. The infrastructure supports various business areas by providing critical insights for decision-making. As a member of our team, you will design, implement, and deploy ML models and Data Engineering solutions in close collaboration with other company departments.
Work: 1 day a week from the office
Responsibilities:
Develop and maintain advanced Machine Learning models and data processing workflows
Design and implement data pipelines (ETL/ELT) in Big Data environments
Collaborate with DevOps and Data Science teams to optimize performance and scalability
Analyze business requirements and translate them into engineering tasks
Monitor and troubleshoot deployed solutions, implementing improvements as needed
Participate in project meetings, share expertise, and support other team members
Requirements:
Minimum 5 years of experience in Python development
At least 4 years of experience working with database technologies (preferably Big Data environments such as Hadoop and Spark)
4+ years of experience with Machine Learning projects (model training, testing, and implementation)
Advanced knowledge of SQL (T-SQL, PL/SQL, Spark SQL)
Experience with the Hadoop ecosystem (e.g., Hive) for large-scale data processing
Familiarity with version control tools (Git/Bitbucket)
Familiarity with Agile/SAFe frameworks and experience with microservices, REST APIs, and WebSockets