About the position
We immerse ourselves in the intricacies of finance digitization, subscription management, compliance, and revenue management which gives us the power to make a real impact. Once we understand how an organization works, we can implement software solutions that provide the clarity, confidence, and control they need to drive growth and achieve their ambitions.
Aptitude has served the offices of finance for over 20 years, delivering financial control and insights to empower our clients to achieve their strategies and ambitions. We currently serve over 75 CFOs whose organisations generate a combined revenue of over $1 trillion.
We are proud of our growing team of smart, motivated and passionate people, and believe diverse experiences and perspectives build stronger teams and better solutions.
Headquartered in London, we have seven office locations around the world with clients across four continents.
What you'll do and what we offer
Fynapse, a pioneering finance data and accounting platform, is developing a Modern Data Stack and is seeking a highly skilled Data Engineer to lead this transformation. The role involves building the stack from the ground up, including data lakes and data warehouses, and implementing tools for data transformation and orchestration. We're looking for someone passionate about creating efficient, scalable data solutions and eager to play a pivotal role in the evolution of our AI-driven finance technology.
Responsibilities
- Design and implement robust data architectures, including data lakes and data warehouses, from scratch.
- Identify and integrate tools for data transformation and orchestration to streamline data flow and processing.
- Collaborate with IT and software development teams to assess and deploy new technologies and tools.
- Ensure optimal architectures are in place for both real-time and batch data processing.
- Manage the collection, storage, and retrieval of large datasets, ensuring scalability, reliability, and security.
- Develop and maintain scalable and efficient data pipelines.
- Work closely with data analysts and other stakeholders to understand data needs and deliver solutions that meet business requirements.
- Integrate new data sources into the data platform through APIs or bulk data transfer.
- Stay abreast of industry trends and best practices in data engineering.
- Develop and manage data pipelines for both batch and real-time data processing, with a focus on streaming data handling using Kafka.
We’ll also offer you a competitive salary plus profit-related bonus scheme, as well as the following benefits:
- Profit-related bonus
- Life and disability insurance
- ShareSave scheme – ability to purchase company shares on preferential terms
- Flexible working conditions and hybrid work model
What we're looking for
The ideal candidate will have the following skills & experience:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or related field.
- 4+ years of experience in data engineering, including hands-on experience in building data lakes and data warehouses.
- Strong experience in streaming data platforms, particularly Kafka.
- Proficiency in SQL and experience with relational and NoSQL databases.
- Experience in implementing data processing pipelines and ETL processes.
- Strong programming skills in Python.
- Understanding of Java and Scala.
- Solid understanding of big data technologies (such as Hadoop, Spark) and cloud services (AWS, Azure, GCP).
- Excellent problem-solving and analytical skills.
- Ability to work independently and manage multiple projects simultaneously.
Preferred Qualifications
- Experience in setting up data transformation tools.
- Familiarity with orchestration tools and their implementation.
- Knowledge of finance and accounting principles is a plus.
- Proven track record of successful project management in data engineering.