StoneX

Data Engineer

Job Locations: IN-KA-Bangalore
Posted Date: 3/29/2024 1:03 AM
Requisition ID: 2024-10981
# of Openings: 1
Category: Information Technology

Overview

#LI-SA1 #stoneX #dataengineer #pyspark #azuredatabricks #sql #streamingdata
 

Connecting clients to markets – and talent to opportunity 

 

With 4,300 employees and over 400,000 retail and institutional clients from more than 80 offices spread across five continents, we’re a Fortune-100, Nasdaq-listed provider, connecting clients to the global markets – focusing on innovation, human connection, and providing world-class products and services to all types of investors. 

Whether you want to forge a career connecting our retail clients to potential trading opportunities, or ingrain yourself in the world of institutional investing, The StoneX Group is made up of four segments that offer endless potential for progression and growth. 

 
 
Job Requirements:
 
• Bachelor’s or Master’s degree in Computer Science, Mathematics, Engineering, or a related technical discipline.
• 4 to 6 years of experience developing software in a professional environment (financial services preferred, but not required).
• Understanding of enterprise architecture patterns, object-oriented and service-oriented principles, design patterns, and industry best practices.
• Knowledge of microservices architecture, design, and business processes. Hands-on development and programming experience with an object-oriented language such as Python, including PySpark.
• Knowledge of data streaming, messaging, and event-driven systems such as Kafka and Azure event/streaming services (a minimal streaming sketch follows this list).
• Experience with SQL technologies (MySQL, MS SQL, or Oracle) and NoSQL stores (Cassandra, DynamoDB).
• Exposure to containers, microservices, distributed systems architecture, orchestrators, and cloud computing.
• Exposure to BI tools such as SSIS and Power BI is an added plus.
• Good sense of user interaction and usability design to provide an intuitive, seamless end-user experience.
• Excellent communication skills and the ability to work with subject matter experts to extract critical business concepts.
• Ability to work, and potentially lead, in an Agile environment.
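
The streaming requirement above pairs PySpark with Kafka-style event feeds. The following is a minimal, hypothetical sketch of that pairing using PySpark Structured Streaming; it assumes a Spark session with the Kafka connector on the classpath, a broker at localhost:9092, and a made-up "trades" topic and schema, none of which come from this posting.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    spark = SparkSession.builder.appName("trades-stream-sketch").getOrCreate()

    # Hypothetical message schema, for illustration only.
    schema = StructType([
        StructField("symbol", StringType()),
        StructField("price", DoubleType()),
    ])

    # Read raw events from a Kafka topic (broker and topic are assumptions).
    raw = (
        spark.readStream
        .format("kafka")
        .option("kafka.bootstrap.servers", "localhost:9092")
        .option("subscribe", "trades")
        .load()
    )

    # Kafka delivers bytes; parse the JSON payload into typed columns.
    trades = (
        raw.selectExpr("CAST(value AS STRING) AS json")
        .select(F.from_json("json", schema).alias("t"))
        .select("t.*")
    )

    # Print to the console here; a real pipeline would write to a durable sink.
    query = trades.writeStream.format("console").outputMode("append").start()
    query.awaitTermination()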
 

Responsibilities

Accountabilities/Responsibilities:
• Create cloud and big data technical design recommendations for developing and integrating new software and system technologies, from the physical layer through to the virtual layer, per written specifications; test, evaluate, engineer, implement, and support those technologies.
• Review, influence, and contribute to new and evolving design, architecture, standards, and methods for operating services within our big data ecosystem.
• Add to our existing business and data models. Review existing designs and processes to highlight more efficient ways to complete existing workloads, drawing on industry perspectives.
• Drive technical innovation and efficiency in infrastructure operations through automation, assisting in improvements to continuous integration and continuous deployment.
• Collaborate with technical teams and utilize system expertise to deliver technical solutions, continuously learning and evolving big data skill sets.
• Monitor and evaluate the overall strategic data infrastructure; track system efficiency and reliability; identify and recommend efficiency improvements and mitigate operational vulnerabilities. Respond to and resolve emergent service problems. Design solutions that use automation and self-repair rather than relying on alarming and human intervention (see the sketch after this list).
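
The last bullet's "automation and self-repair" idea can be made concrete with a small retry-and-repair wrapper. This is only a sketch of the general pattern, not anything taken from the posting; every name in it is hypothetical.

    import logging
    import time

    log = logging.getLogger("self_repair")

    def run_with_self_repair(task, repair, attempts=3, base_delay=2.0):
        """Run `task`; on failure, invoke `repair` and retry with backoff."""
        for attempt in range(1, attempts + 1):
            try:
                return task()
            except Exception as exc:
                log.warning("attempt %d failed: %s", attempt, exc)
                if attempt == attempts:
                    raise  # only now does a human get involved
                repair()  # e.g., restart a consumer or clear a stale lock
                time.sleep(base_delay * 2 ** (attempt - 1))

The point is the ordering: attempt automated repair first, and escalate to alarming only after recovery is exhausted.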
 
 
Requirements:
• Strong coding skills in PySpark.
• Proficiency in SQL for data querying and manipulation (a short PySpark/SQL sketch follows this list).
• Experience with Azure Databricks for big data processing.
• Ability to work with both static and real-time data, including streaming data.
• Knowledge of Kafka or other message queuing services and APIs.
• Redis knowledge is a plus.
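
Since the list above leans on PySpark, SQL, and Azure Databricks together, here is a minimal sketch of the same aggregation expressed through both the DataFrame API and plain SQL, as it might run in a Databricks notebook (where a `spark` session is provided). The table and column names are invented for illustration.

    from pyspark.sql import functions as F

    # Assumed table; in Databricks this would typically be a Delta table.
    orders = spark.table("sales.orders")

    # Daily-revenue aggregation two ways: the DataFrame API...
    daily = (
        orders.groupBy(F.to_date("created_at").alias("day"))
        .agg(F.sum("amount").alias("revenue"))
    )

    # ...and plain SQL against a temporary view.
    orders.createOrReplaceTempView("orders_v")
    daily_sql = spark.sql("""
        SELECT CAST(created_at AS DATE) AS day,
               SUM(amount)              AS revenue
        FROM orders_v
        GROUP BY CAST(created_at AS DATE)
    """)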
 

Qualifications

 
 
Technical Skills:
Technologies: Python, PySpark (or Spark/Scala)
DB/Cloud: SQL Server, PL/SQL, DB design, ETL, Azure, AWS, Google Cloud
Data Engineering (optional): Astronomer, Dremio, Azure DevOps, web services
