SPARK Data Reconciliation Engineer - Purple Fly Solutions
Jersey City, NJ
About the Job
- Job Position: SPARK Data Reconciliation Engineer
- Location: Jersey City, NJ
- Type of Employment: Full-time
- Minimum Years of Experience: 6
- Job Role: Developing and implementing PySpark-based applications to perform complex data reconciliations in financial systems.
- Key Responsibilities:
  - Designing, developing, and testing PySpark-based applications for data reconciliation.
  - Implementing efficient data transformation and matching algorithms.
  - Developing error handling and exception management systems within Spark jobs.
  - Collaborating with business analysts and data architects to understand data requirements.
  - Analyzing and interpreting data structures, formats, and relationships.
  - Working with distributed datasets in Spark.
  - Integrating PySpark applications with rules engines.
  - Collaborating with cross-functional teams to identify and analyze data gaps.
  - Designing and developing PySpark-based solutions to address data integration challenges.
  - Contributing to the development of data governance and quality frameworks.
- Qualifications and Skills:
  - Bachelor's degree in Computer Science or a related field.
  - 5+ years of experience in big data development.
  - Proficiency in PySpark, Apache Spark, and related big data technologies.
  - Experience with rules engine integration and development.
  - Strong analytical and problem-solving skills.
  - Excellent communication and collaboration skills.
  - Familiarity with data streaming platforms and big data technologies.
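To give candidates a feel for the role, the core matching step behind a data reconciliation is sketched below. This is a simplified illustration in plain Python for brevity; a production job would express the same logic as a full outer join on PySpark DataFrames. The `trade_id` and `amount` field names are hypothetical examples, not taken from the posting.

```python
# Minimal sketch of record-level reconciliation: compare two feeds by key
# and classify each record as a mismatch, missing-in-source, or
# missing-in-target break. Plain dicts keep the example self-contained;
# in Spark this would be a full outer join on DataFrames.

def reconcile(source, target, key="trade_id", field="amount"):
    src = {r[key]: r for r in source}
    tgt = {r[key]: r for r in target}
    breaks = []
    for k in src.keys() | tgt.keys():
        if k not in tgt:
            breaks.append((k, "missing_in_target"))
        elif k not in src:
            breaks.append((k, "missing_in_source"))
        elif src[k][field] != tgt[k][field]:
            breaks.append((k, "mismatch"))
    return sorted(breaks)

source = [{"trade_id": 1, "amount": 100}, {"trade_id": 2, "amount": 250}]
target = [{"trade_id": 1, "amount": 100}, {"trade_id": 2, "amount": 999},
          {"trade_id": 3, "amount": 75}]
print(reconcile(source, target))
# → [(2, 'mismatch'), (3, 'missing_in_source')]
```

Matched records are deliberately dropped; only breaks are reported, which is the typical output of a reconciliation run.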
Source: Purple Fly Solutions