Senior Data Engineer - ICONMA, LLC
Charlotte, NC 28203
About the Job
Senior Data Engineer
Location: Charlotte, NC/Hybrid
Duration: 1 year, with possible contract-to-hire
Description:
Key Responsibilities:
Translates complex cross-functional business requirements and functional specifications into logical program designs, code modules, stable application systems, and data solutions; partners with the Product Team to understand business needs and functional specifications
Collaborates with cross-functional teams to ensure specifications are converted into flexible, scalable, and maintainable solution designs; evaluates project deliverables to ensure they meet specifications and architectural standards
Contributes to the design and build of complex data solutions and ensures the architecture blueprint, standards, target-state architecture, and strategies are aligned with the requirements
Coordinates, executes, and participates in component integration testing (CIT) scenarios, systems integration testing (SIT), and user acceptance testing (UAT) to identify application errors and to ensure quality software deployment
Participates in all phases of the end-to-end software development lifecycle by applying and sharing an in-depth understanding of complex industry methodologies, policies, standards, and controls
Develops detailed architecture plans for large-scale enterprise architecture projects and drives the plans to fruition
Solves complex architecture/design and business problems; ensures solutions are extensible; works to simplify, optimize, and remove bottlenecks
Data Engineering Responsibilities
Executes the development, maintenance, and enhancement of data ingestion solutions of varying complexity across data sources such as DBMSs, file systems (structured and unstructured), APIs, and streaming systems on on-prem and cloud infrastructure; demonstrates strong acumen in data ingestion toolsets and nurtures and grows junior members in this capability
Builds, tests, and enhances data curation pipelines that integrate data from a wide variety of sources such as DBMSs, file systems, APIs, and streaming systems to develop KPIs and metrics with high data quality and integrity (a PySpark sketch of this kind of pipeline follows this list)
Supports the development of features/inputs for data models in an Agile manner; hosts models via REST APIs (a hosting sketch also follows this list); ensures non-functional requirements such as logging, authentication, error capture, and concurrency management are accounted for when hosting models
Works with the Data Science team to understand mathematical models and algorithms; recommends improvements to analytic methods, techniques, standards, policies, and procedures
Handles the manipulation (extract, load, transform), visualization, and administration of data and systems securely and in accordance with enterprise data governance standards, staying compliant with industry best practices, enterprise standards, corporate policy, and department procedures
Maintains the health and monitoring of assigned data engineering capabilities that span analytic functions by triaging maintenance issues; ensures high availability of the platform; works with Infrastructure Engineering teams to maintain the data platform; serves as an SME for one or more applications
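To make the ingestion and curation items above concrete, here is a minimal PySpark sketch of such a pipeline: read from a DBMS over JDBC and from files on cloud storage, apply basic quality checks, and aggregate a KPI. The connection URL, table names, and paths are hypothetical placeholders, not details from this posting.

```python
# A minimal sketch, assuming Spark 3.x. Every connection detail, table name,
# and path below is a hypothetical placeholder.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("order_curation").getOrCreate()

# Ingest from a DBMS over JDBC (illustrative URL; a real job would also
# supply credentials and a partitioning column for parallel reads)
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/sales")
    .option("dbtable", "orders")
    .load()
)

# Ingest semi-structured event files landed on cloud storage
events = spark.read.json("gs://example-bucket/raw/order_events/")

# Curate: enforce basic data quality, then build a KPI-style aggregate
daily_kpis = (
    orders.join(events, "order_id")
    .filter(F.col("order_total") > 0)      # reject malformed rows
    .dropDuplicates(["order_id"])          # enforce integrity
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(
        F.count("order_id").alias("order_count"),
        F.sum("order_total").alias("gross_revenue"),
    )
)

# Publish the curated output for downstream consumption
daily_kpis.write.mode("overwrite").partitionBy("order_date").parquet(
    "gs://example-bucket/curated/daily_order_kpis/"
)
```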
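For the model-hosting item, a minimal sketch of serving a model behind a REST endpoint with the logging and error capture the non-functional requirements call for. Flask, the /predict route, and the score function are assumptions for illustration; authentication would sit in front of the service (e.g., at a gateway) or in a request hook.

```python
# A minimal model-hosting sketch, assuming Flask; the model and endpoint
# are hypothetical stand-ins, not the posting's actual stack.
import logging

from flask import Flask, jsonify, request

app = Flask(__name__)
logging.basicConfig(level=logging.INFO)
log = logging.getLogger("model_service")


def score(features: dict) -> float:
    """Placeholder for the real model's predict call."""
    return 0.0


@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(silent=True)
    if payload is None:
        log.warning("rejected request with no JSON body")
        return jsonify(error="request body must be JSON"), 400
    try:
        result = score(payload)
    except Exception:
        log.exception("scoring failed")  # capture the error with stack trace
        return jsonify(error="internal scoring error"), 500
    log.info("scored one request")
    return jsonify(prediction=result), 200


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```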
BI Engineering Responsibilities
Responsible for the development, maintenance, and enhancement of BI solutions of varying complexity across data sources such as DBMSs and file systems (structured and unstructured) on on-prem and cloud infrastructure; creates level metrics and other complex metrics; uses custom groups, consolidations, drilling, and complex filters
Demonstrates database skill (Teradata/Oracle/Db2/Hadoop) by writing views for business requirements (a view sketch follows this list); uses freeform SQL and pass-through functions; analyzes and finds errors in generated SQL; creates RSDs and dashboards
Responsible for building, testing, and enhancing BI solutions from a wide variety of sources such as Teradata, Hive, HBase, Google BigQuery, and file systems; develops solutions with optimized data performance and data security
Works with business analysts to understand requirements and create dashboard/dossier wireframes; makes use of widgets and Vitara charts to make dashboards/dossiers visually appealing
Coordinates and takes necessary actions on the DART side for application upgrades (e.g., Teradata, Workday), storage migration, and user-management automation; supports areas such as cluster management and project configuration settings
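As an illustration of the view-writing skill above, a minimal sketch using Spark SQL over a Hive metastore; the schema, table, and column names are hypothetical, and the same view shape could be written directly in Teradata, Oracle, or Db2 SQL.

```python
# A minimal sketch, assuming Spark SQL with Hive support; the schema,
# table, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

# A view that precomputes a business metric at the grain a report needs,
# so the BI layer can query it with simple filters instead of raw joins
spark.sql("""
    CREATE OR REPLACE VIEW sales.monthly_region_revenue AS
    SELECT region,
           trunc(order_date, 'MM') AS order_month,
           SUM(order_total)        AS revenue
    FROM   sales.orders
    WHERE  order_total > 0
    GROUP  BY region, trunc(order_date, 'MM')
""")
```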
Competencies:
Technical Competencies
Agile Development
Big Data Management and Analytics
Cloud Computing
Database Design (Physical)
Release Management
Qualifications
Minimum Qualifications:
Bachelor's Degree in Engineering/Computer Science, CIS, or a related field (or equivalent work experience in a related field)
5 years of experience in Data or BI Engineering, Data Warehousing/ETL, or Software Engineering
4 years of experience working on projects involving the implementation of solutions applying software development life cycles (SDLC)
Additional Skills:
Python
Hadoop
Shell Scripts
Day-to-Day Responsibilities (a sketch combining several of the items below follows this list):
PySpark
Scala
Automation of Pipelines
Airflow/Autosys
Spark
Monitoring/Alerts
GCP Dataproc/Composer/BigQuery
Data modeling
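Several of these items (pipeline automation, Airflow/Composer, GCP Dataproc, monitoring/alerts) come together in a sketch like the one below: an Airflow DAG that submits a PySpark job to Dataproc and alerts by email on failure. It assumes Airflow 2.x with the Google provider installed; the DAG id, project, region, cluster, bucket, and alert address are all hypothetical.

```python
# A minimal orchestration sketch, assuming Airflow 2.x on Cloud Composer
# with the Google provider; all names below are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.providers.google.cloud.operators.dataproc import (
    DataprocSubmitJobOperator,
)

default_args = {
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
    "email_on_failure": True,                   # basic alerting on failure
    "email": ["data-eng-alerts@example.com"],   # hypothetical alert inbox
}

with DAG(
    dag_id="daily_order_curation",              # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 6 * * *",              # run daily at 06:00
    catchup=False,
    default_args=default_args,
) as dag:
    curate = DataprocSubmitJobOperator(
        task_id="run_pyspark_curation",
        project_id="example-gcp-project",       # hypothetical project
        region="us-east1",
        job={
            "reference": {"project_id": "example-gcp-project"},
            "placement": {"cluster_name": "etl-cluster"},  # hypothetical cluster
            "pyspark_job": {
                "main_python_file_uri": "gs://example-bucket/jobs/curate_orders.py"
            },
        },
    )
```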
Required Skills (non-negotiables):
Agile experience: 2 years minimum
10+ years of Data Engineering
8+ years of PySpark/Scala
7 years of Scala
5+ years of experience hosting applications on GCP
Software Skills Required: Grafana
Preferred Skills (nice to have):
Case management experience: 3+ years
Source: ICONMA, LLC