StoneX

Big Data Engineer (Spark Developer)

Job Locations: PL-Kraków
Requisition ID: 2024-12216
Category: Information Technology
Position Type: Experienced Professional

Overview

Permanent, full-time, hybrid (3 days per week in an office).


Connecting clients to markets – and talent to opportunity.


With 4,300 employees and over 400,000 retail and institutional clients from more than 80 offices spread across five continents, we’re a Fortune-100, Nasdaq-listed provider, connecting clients to the global markets – focusing on innovation, human connection, and providing world-class products and services to all types of investors.


At StoneX, we offer you the opportunity to be part of an institutional-grade financial services network that connects companies, organizations, and investors to the global markets ecosystem. As a team member, you'll benefit from our unique blend of digital platforms, comprehensive clearing and execution services, personalized high-touch support, and deep industry expertise. Elevate your career with us and make a significant impact in the world of global finance.


Business Segment Overview: Empower individual investors – and yourself – in the world of retail through a range of different financial products rooted in innovation and market intelligence. From FX and CFDs to precious metals, master an exciting world of wealth management tools.

Responsibilities

Position Purpose: This role involves designing and developing Databricks applications in pySpark. Our team is rewriting an existing on-prem SQL data warehouse as a Data Lakehouse on Databricks.
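
For a flavor of the work, here is a minimal pySpark sketch of one such migration step: landing a single warehouse table as a Delta table on Databricks. The JDBC URL, secret scope, and table names are illustrative assumptions, not our actual setup.

    # Minimal sketch: copy one on-prem SQL Server table into a Delta table.
    # The JDBC URL, credentials, and table names below are hypothetical.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()  # provided in Databricks notebooks

    # Read the source table over JDBC from the on-prem warehouse.
    orders = (
        spark.read.format("jdbc")
        .option("url", "jdbc:sqlserver://onprem-dw.example.com:1433;databaseName=dw")
        .option("dbtable", "dbo.FactOrders")
        .option("user", "etl_user")
        # dbutils is available in Databricks notebooks; the scope/key are made up.
        .option("password", dbutils.secrets.get("dw-scope", "etl-password"))
        .load()
    )

    # Land it in the Lakehouse as a managed, partitioned Delta table.
    (
        orders.write.format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .saveAsTable("lakehouse.fact_orders")
    )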


Primary duties will include:

  • Migrating the on-prem SQL data warehouse to Databricks.
  • Developing pySpark applications and Spark jobs.
  • Maintaining Databricks workspaces, clusters, and jobs.
  • Integrating Databricks applications with various technologies.
  • Keeping the Databricks environment healthy.

Qualifications

To land this role, you will need:

  • Subject-matter expertise in Spark.
  • Proficiency with Big Data processing technologies (Hadoop, Spark, Databricks).
  • Experience building data pipelines and analysis tools using Python, pySpark, and Scala.
  • The ability to create Scala/Spark jobs for data transformation and aggregation.
  • The ability to produce unit tests for Spark transformations and helper methods (a minimal sketch follows this list).
  • Experience designing data processing pipelines.
  • Hands-on experience with Hadoop / Databricks (good to have).
  • A passion for learning new technologies, and the ability to pick up new concepts and software quickly.
  • An analytical approach to problem-solving, and the ability to use technology to solve business problems.
  • Familiarity with database-centric applications.
  • The ability to communicate effectively in both technical and non-technical terms, and to interact with all levels of personnel within the organization, including senior management and other departments.
  • The ability to work in a fast-paced environment.
  • Experience working in an agile environment using the Scrum methodology.
  • A results-oriented, team-player attitude with strong attention to detail.
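
To make the unit-testing expectation concrete, here is a minimal sketch of a pytest-based test for a small pySpark transformation, run on a local SparkSession. The function, column names, and values are hypothetical examples, not code from our codebase.

    # Minimal sketch: unit-testing a Spark transformation with pytest.
    # add_gross_value and its column names are hypothetical examples.
    import pytest
    from pyspark.sql import SparkSession
    import pyspark.sql.functions as F

    def add_gross_value(df):
        # Transformation under test: gross_value = quantity * unit_price.
        return df.withColumn("gross_value", F.col("quantity") * F.col("unit_price"))

    @pytest.fixture(scope="session")
    def spark():
        # A local session lets the test run outside Databricks (e.g., in CI).
        return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

    def test_add_gross_value(spark):
        df = spark.createDataFrame([(2, 10.0), (3, 1.5)], ["quantity", "unit_price"])
        result = add_gross_value(df).select("gross_value").collect()
        assert [row.gross_value for row in result] == [20.0, 4.5]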

What makes you stand out:

  • Relevant experience in the financial services industry; FX or brokerage experience is a plus.
  • Practical and theoretical knowledge of data warehouse (DW) concepts, particularly the Kimball approach (good to have; see the sketch after this list).
  • A Spark certification (good to have).
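
For reference, the Kimball approach models the warehouse as fact tables joined to conformed dimensions on surrogate keys. Below is a minimal pySpark sketch of that pattern; the table and column names are hypothetical.

    # Minimal sketch of a Kimball-style star-schema query in pySpark:
    # a fact table joined to a dimension on its surrogate key, then aggregated.
    # Table and column names are hypothetical.
    import pyspark.sql.functions as F
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.getOrCreate()

    fact_trades = spark.table("lakehouse.fact_trades")        # grain: one row per trade
    dim_instrument = spark.table("lakehouse.dim_instrument")  # conformed dimension

    daily_notional = (
        fact_trades
        .join(dim_instrument, "instrument_key")  # surrogate-key join
        .groupBy("trade_date", "asset_class")
        .agg(F.sum("notional").alias("total_notional"))
    )
    daily_notional.show()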

Education / Certification Requirements: 

  • Bachelor’s degree in Computer Science, Mathematics, Data Engineering, or a related technical discipline, or equivalent relevant work experience.

Working environment:

  • Hybrid (2 days from home, 3 days from the office) at ul. Mogilska 35, Kraków.


#LI-Hybrid #LI-DK1
