Advance your career with Sunscrapers, a leading force in software development, now expanding its presence in a data-centric environment. Join us in our mission to help clients grow and innovate through a comprehensive tech stack and robust data-related projects. Enjoy a competitive compensation package that reflects your skills and expertise while working in a company that values ambition, technical excellence, trust-based partnerships, and actively supports contributions to R&D initiatives.
The project:
We are carrying out this project for our client, an American private equity and investment management fund based in New York and listed on the Forbes 500.
We support them with their infrastructure and data platform, and we have recently also begun building and experimenting with Gen AI applications. The client operates broadly across finance, lending, investments, and real estate.
As a Senior Data Engineer, you’ll design and implement the core systems that enable data science and data visualization at companies that use data-driven decision processes to create a competitive advantage.
You’ll build a data platform for data and business teams, including internal tooling, a data pipeline orchestrator, data warehouses, and more, using:
Technologies: Python, Terraform, SQL, Pandas, Shell scripts
Tools: Apache Airflow / Astronomer, Git, Docker, Kubernetes, Snowflake, Pinecone, Neo4j, Jenkins, Jupyter Notebook, OpenAI API, Artifactory, Windows with WSL, Linux, GitLab
AWS: EC2, ELB, IAM, RDS, Route53, S3, and more
Best Practices: Continuous Integration, Code Reviews
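To give a feel for the day-to-day work, here is a minimal, purely illustrative sketch of the kind of pipeline you might build with this stack. It is not taken from the client’s codebase; the DAG and task names are hypothetical, and it assumes Apache Airflow 2.4+.

    # Illustrative sketch only: a two-step Airflow DAG that fetches data and
    # loads it into a warehouse. All names here are hypothetical.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def fetch_data():
        # Placeholder for an extraction step (e.g., pulling files from S3).
        print("fetching source data")

    def load_to_warehouse():
        # Placeholder for a load step (e.g., writing into Snowflake).
        print("loading into the warehouse")

    with DAG(
        dag_id="example_daily_pipeline",  # hypothetical name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # the `schedule` argument requires Airflow 2.4+
        catchup=False,
    ) as dag:
        fetch = PythonOperator(task_id="fetch_data", python_callable=fetch_data)
        load = PythonOperator(task_id="load_to_warehouse", python_callable=load_to_warehouse)
        fetch >> load  # run the load step after the fetch step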
Your responsibilities:
Developing PoCs using the latest technologies and experimenting with third-party integrations
Delivering production-grade applications once PoCs are validated
Creating solutions that enable data scientists and business analysts to be as self-sufficient as possible
Finding new ways to leverage Gen AI applications and the underlying vector and graph data stores
Designing datasets and schemas for consistency and easy access
Contributing to data technology stacks, including data warehouses and ETL pipelines
Building data flows for fetching, aggregation, and data modeling using batch and streaming pipelines
Documenting design decisions before implementation

The ideal candidate will be well organized, eager to constantly improve and learn, driven and, most of all, a team player!

What’s important to us?
Requirements: data modeling, Terraform, Airflow, AWS, Docker, Python, SQL, analytical skills, Kubernetes, data visualization, Snowflake, Jupyter Notebook, Linux, GitLab, NumPy, SQLAlchemy, JFrog Artifactory

You will score extra points for:

What do we offer?
Remote work, MacBook, knowledge sharing, flexible working hours, competitive compensation, sport subscription, private healthcare, flat structure, small teams, international projects, modern office, free coffee, canteen, bike parking, playroom, shower, free snacks, and free beverages.

Sounds like a perfect place for you? Don’t hesitate to click apply and submit your application today!