As a Software Engineer, you’ll design, build, and maintain ETL pipelines and scalable data architectures powered by technologies like SQL and ClickHouse. You’ll ensure data quality, reliability, and readiness for AI-driven insights, while contributing to a collaborative and fast-paced environment in the Web3/AI domain.
Responsibilities:
Build and maintain ETL pipelines and scalable data infrastructure using SQL and ClickHouse
Contribute to AI-driven data workflows and architecture
Develop data-processing logic in Python, using libraries such as pandas and NumPy as needed
Leverage Docker to manage development environments
Collaborate using Git, following the team's branching strategy and review workflows
Design clean, modular code, manage dependencies, write unit/integration tests, and maintain API documentation
Support open-source contributions and integrate AI/ML tools like Hugging Face or scikit-learn where applicable
Ensure high-quality, scalable data architecture suitable for real-time systems
Communicate clearly in English (B2 level) and Polish (fluency preferred)
Thrive in a proactive, independent role within a vibrant startup culture
Requirements:
Proven experience working with LLM APIs
Interest in crypto
Experience working with social media datasets (Twitter, YouTube, Instagram, Telegram, TikTok)
2–5 years of commercial experience in data engineering, with at least 2 years focused on ETL and pipelines
Proficiency in SQL and ClickHouse, with a solid understanding of AI-informed data processing
Hands-on experience with Python (pandas, NumPy), Docker, and Git
Interest or experience in Web3 technologies is highly welcome
Strong focus on scalable system design and data architecture quality