
Senior Data Engineer
- Remote
- Krakow, Małopolskie, Poland
- Lublin, Lubelskie, Poland
- Krosno, Podkarpackie, Poland
- Warsaw, Mazowieckie, Poland
- Sarajevo, Federacija Bosne i Hercegovine, Bosnia and Herzegovina
- Barcelona, Catalunya, Spain
- Madrid, Comunidad de Madrid, Spain
- Tenerife, Canarias, Spain
- Vienna, Wien, Austria
- Brussels, Brussels, Belgium
- Liège, Walloon Region, Belgium
- Mons, Walloon Region, Belgium
- Namur, Walloon Region, Belgium
- Sofia, Sofia, Bulgaria
- Zagreb, Grad Zagreb, Croatia
- Ayia Napa, Ammochostos, Cyprus
- Limassol, Lemesos, Cyprus
- Paphos, Pafos, Cyprus
- Nicosia, Lefkosia, Cyprus
- Pilsen, Plzeňský kraj, Czechia
- Praha, Praha, Hlavní město, Czechia
- Tallinn, Harjumaa, Estonia
- Ljubljana, Ljubljana, Slovenia
- Bratislava, Bratislavský kraj, Slovakia
- București, București, Romania
- Porto, Porto, Portugal
- Lisbon, Lisboa, Portugal
- +30 more locations
- Engineering
Job description
At TechBiz Global, we provide recruitment services to the top clients in our portfolio. We are currently looking for a Senior Data Engineer to join one of our clients' teams. If you're looking for an exciting opportunity to grow in an innovative environment, this could be the perfect fit for you.
We are looking for a skilled Senior Data Engineer. In this role, you will be responsible for building scalable data pipelines, working with large datasets, and optimizing data workflows in a cloud-based environment. You will collaborate closely with data and engineering teams to ensure efficient and reliable data processing.
Key Responsibilities
Design, build, and maintain robust data pipelines (ETL/ELT)
Work with Databricks for large-scale data processing using Spark and Delta Lake (a brief illustrative sketch follows this list)
Optimize data workflows, performance, and data storage solutions
Collaborate with cross-functional teams including data analysts, engineers, and stakeholders
Ensure data quality, reliability, and scalability across systems
Contribute to continuous improvement of data architecture and processes
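For candidates wondering what this looks like day to day, here is a minimal sketch of the kind of batch ETL job the responsibilities above describe. It assumes a Databricks environment where the `spark` session and Delta Lake are preconfigured; the paths, table name, and column names are hypothetical, not this employer's actual codebase.

```python
# A minimal ETL sketch for a Databricks notebook, where `spark` and
# Delta Lake come preconfigured. All paths, tables, and columns here
# are hypothetical placeholders.
from pyspark.sql import functions as F

# Extract: read raw JSON events from a hypothetical landing zone.
raw = spark.read.json("/mnt/landing/events/")

# Transform: deduplicate, enforce types, and derive a partition column.
cleaned = (
    raw.dropDuplicates(["event_id"])
       .withColumn("event_ts", F.to_timestamp("event_ts"))
       .filter(F.col("event_ts").isNotNull())
       .withColumn("event_date", F.to_date("event_ts"))
)

# Load: append to a Delta table, partitioned by date so downstream
# queries can prune files they do not need.
(cleaned.write
        .format("delta")
        .mode("append")
        .partitionBy("event_date")
        .saveAsTable("analytics.events"))
```

Writing to Delta rather than plain Parquet adds ACID transactions and schema enforcement, which is what makes the data quality and reliability responsibilities above tractable at scale.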
Job requirements
Strong experience with Databricks (Spark / Delta Lake)
Proficiency in Python for data processing
Strong knowledge of SQL
Hands-on experience with ETL/ELT pipeline development
Experience with at least one cloud platform (AWS, Azure, or GCP)
Familiarity with Git and basic CI/CD practices
Nice to Have
Experience with dbt
Knowledge of streaming technologies (e.g., Kafka or similar)
Experience with Terraform or Infrastructure as Code (IaC)
