Principal Data Engineer II at Spectrum Charter
Greenwood Village, CO 80155
About the Job
This position is eligible for our Hybrid Work Policy
Eligible employees can work from home up to one day each week.

JOB SCOPE
Reliability Engineering is developing and operating a new self-service, event-driven infrastructure orchestration system that has enabled business-impacting self-service analytics, decision engineering support, machine learning, modeling, forecasting, and optimization.
Create and maintain scalable, reliable, consistent, and repeatable systems that support data operations and data engineering for NetOps by receiving, processing, and monitoring raw data at scale through scripts, coding, web scraping, APIs, SQL queries, etc.
Deliverables include profiles of data that measure the quality, integrity, accuracy, and completeness of workflows.
Success in the role requires managing the data lifecycle of multiple data sources at scale and increasing speed to delivery by implementing automated workload and data workflow solutions.

DUTIES AND RESPONSIBILITIES
- Manage and operate key data systems to ensure that all data feeds are processed in a timely manner and result in high-quality data.
- Identify and resolve issues within the data operations feeds and processing.
- Profile data to measure quality, integrity, accuracy, and completeness of the workflows.
- Develop and implement tools, scripts, queries, and applications for ETL/ELT and data operations.
- Use a wide variety of open source technologies, platforms, tools, applications, and cloud services.
- Produce reports, notifications, and trends to uphold data delivery schedules.
- Deliver solutions by developing, testing, and implementing code and scripts.
- Apply IP, DNS, DHCP, network, security, and operating system configuration, administration, and troubleshooting experience on Linux/Unix/CentOS, Windows, and macOS.
- Manage the lifecycle of multiple data sources.
- Work closely with data demand stakeholders, such as analysts and data scientists.
- Work closely with data supply domain experts and the source systems and platforms providing data.
- Build self-monitoring, robust, scalable interfaces and data pipelines for 24/7 operations.
- Create highly reusable code modules and packages that can be leveraged across the data pipeline.
- Increase speed to delivery by implementing workload/workflow automation solutions.
- Demonstrate an ongoing focus on enabling key business results by balancing optimal technology solutions with the needs of stakeholders.
- Deliver results through caring about customers, using metrics-driven analysis, and communicating the costs and tradeoffs of ideas to stakeholders and top management.

BASIC / MINIMUM QUALIFICATIONS
- Bachelor's degree in computer science or an engineering, analytics, or data science discipline
- Minimum ten (10) years of hands-on working experience with RDBMS, SQL, scripting, and coding
- Minimum ten (10) years of experience delivering one major system for which the candidate was responsible for designing the architecture, implementing, operating, and supporting it
- Experience maintaining and managing production-level data systems
- Ongoing learning demonstrated through certificates from professional studies, MOOCs, seminars, online courses, or other delivery mechanisms

REQUIRED JOB QUALIFICATIONS
- Broad coding/scripting experience using Python, Perl, shell scripts, etc.
- Extensive experience with SQL in on-premises and cloud environments using MySQL, PostgreSQL, IBM DB2, Oracle, SQL Server, or Teradata
- Experience with other database and data store technologies, such as NoSQL, key-value, columnar, graph, and document stores
- Broad experience importing, exporting, translating, cleaning, and managing a wide range of file types
- Broad experience creating and lifecycle-managing production-level Python scripts
- Expertise in data storage and/or data movement that demonstrates knowledge of when to use files, relational databases, streaming, or NoSQL variants
- Ability to identify and resolve end-to-end performance, network, server, client, platform, and operating system issues
- Well organized, with keen attention to detail and the ability to effectively prioritize and execute multiple tasks
- Productive in both virtual and on-premises environments
- Expert with Microsoft Office applications or Linux/macOS equivalents, and a self-sufficient user of Windows, Linux, and macOS desktops

PREFERRED QUALIFICATIONS
- Ten (10) years of Linux/Unix/CentOS and Windows system administration; macOS experience
- Master's degree in computer science or an engineering, analytics, or data science discipline
- Familiarity with JavaScript APIs, REST APIs, or data extract APIs
- Familiarity with data workflow/data prep platforms
- Familiarity with automation/configuration management using Puppet, Chef, or equivalent
- Experience with IT or technical operations at scale in a production environment
- Knowledge of best practices and IT operations in an always-up, always-available service
- Experience receiving, converting, and cleansing big data
- Experience with visualization or BI tools, such as Tableau, Zoomdata, MicroStrategy, or Microsoft Power BI
- Demonstrated success creating proof-of-concept experiments or using design of experiments for analytics, machine learning, or visualization tools, including hypotheses, test plans, and outcome analysis
- Development, lifecycle management, or operations experience in a DevOps environment
- Experience in an Agile environment
- Extensive background in Linux/Unix/CentOS, Windows Server, and Windows desktop support, installation, administration, and optimization; macOS experience
- Hadoop experience

WORKING CONDITIONS
- Office environment: Charter Technical Engineering Center
- Highly collaborative and innovative workspace
- Occasional travel

EGN750 2024-42051 2024

Here, employees don't just have jobs, they build careers.
That’s why we believe in offering a comprehensive pay and benefits package that rewards employees for their contributions to our success, supports all aspects of their well-being, and delivers real value at every stage of life.

A qualified applicant’s criminal history, if any, will be considered in a manner consistent with applicable laws, including local ordinances.

This job posting will remain open until 2024-10-22 07:10 PM (UTC) and will be extended if necessary.

The base pay for this position generally is between $124,000.00 and $220,000.00.
The actual compensation offered will carefully consider a wide range of factors, including your skills, qualifications, experience, and location.
We comply with local wage minimums, and certain positions are also eligible for additional forms of incentive-based compensation, such as bonuses.

Get to Know Us
Charter Communications is known in the United States by our Spectrum brands, including Spectrum Internet, TV, Mobile and Voice, Spectrum Networks, Spectrum Enterprise, and Spectrum Reach.
When you join us, you’re joining a strong community of more than 100,000 individuals working together to serve nearly 32 million customers in 41 states and keep them connected to what matters most.
Watch this video to learn more.

Who You Are Matters Here
We’re committed to growing a workforce that reflects our communities and providing equal opportunities for employment and advancement.
EOE, including disability/vets
Learn about our inclusive culture.