Big Data Hadoop Engineer - Aloden LLC
Charlotte, NC 28202
About the Job
Technology and Data - Software Engineer 4 - Contingent (Big Data Hadoop Engineer)
Location: Charlotte, NC – 28202 – Hybrid Role (3 Days Onsite / 2 Days WFH)
Job Description:
- In this contingent resource assignment, you may consult on complex initiatives with broad impact and large-scale planning for Software Engineering.
- Review and analyze complex, multi-faceted, larger-scale, or longer-term Software Engineering challenges that require in-depth evaluation of multiple factors, including intangibles or unprecedented factors.
- Contribute to the resolution of complex and multi-faceted situations requiring a solid understanding of the function, policies, procedures, and compliance requirements needed to meet deliverables.
- Strategically collaborate and consult with client personnel.
- 5+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work or consulting experience, training, military experience, education.
- Design and implement an automated Spark-based framework to facilitate data ingestion, transformation, and consumption.
- Implement security protocols such as Kerberos authentication, encryption of data at rest, and data authorization mechanisms such as role-based access control using Apache Ranger.
- Design and develop an automated testing framework to perform data validation.
- Enhance existing Spark-based frameworks to overcome tool limitations and/or add features based on consumer expectations.
- Design and build a high-performing and scalable data pipeline platform using Hadoop, Apache Spark, MongoDB, Kafka, and object storage architecture.
- Collaborate with application partners, Architects, Data Analysts and Modelers to build scalable and performant data solutions.
- Effectively work in a hybrid environment where legacy ETL and Data Warehouse applications and new big-data applications co-exist.
- Work with Infrastructure Engineers and System Administrators as appropriate in designing the big-data infrastructure.
- Support ongoing data management efforts for Development, QA, and Production environments.
- Provide tool support and help consumers troubleshoot pipeline issues.
- Utilize a thorough understanding of available technology, tools, and existing designs.
- Leverage knowledge of industry trends to build best-in-class technology that provides a competitive advantage.
- 5+ years of software engineering experience
- 5+ years of experience delivering complex enterprise-wide information technology solutions
- 5+ years of experience delivering ETL, data warehouse, and data analytics capabilities on big-data architectures such as Hadoop
- 5+ years of Apache Spark design and development experience using Scala, Java, or Python, including DataFrames, Resilient Distributed Datasets (RDDs), and Parquet or ORC file formats
- 6+ years of ETL (Extract, Transform, Load) Programming experience
- 2+ years of Kafka or equivalent experience
- 2+ years of experience with NoSQL databases such as Couchbase or MongoDB
- 5+ years of experience working with complex SQL and performance tuning
- 3+ years of Agile experience
- 2+ years of reporting experience, analytics experience, or a combination of both
- 2+ years of operational risk, credit risk, or compliance domain experience
- 2+ years of experience integrating with RESTful APIs
- 2+ years of experience with CI/CD tools
Source: Aloden LLC