Design and build Ab Initio graphs (both continuous and batch) and Conduct>It plans, utilizing data, pipeline, and component parallelism techniques.
Demonstrate a complete understanding of the Metadata Hub metamodel and its analytical capabilities.
Build graphs that interface with heterogeneous data sources, including Oracle, Snowflake, Hadoop, Hive, and AWS S3.
Parse and process XML, JSON, and YAML documents, including hierarchical models.
Implement data acquisition, transformation, and curation requirements within a data lake or warehouse environment using Ab Initio components.
Create Control Center jobs and schedules to orchestrate data processes.
Develop Business Rules Engine (BRE) rulesets for reformatting, rollup, and validation use cases.
Identify performance bottlenecks in graphs and optimize them for efficiency.
Adhere to high standards for code quality, automated testing, and engineering best practices; write reusable code components and conduct thorough unit testing.
Manage and develop ETL routines using Ab Initio, ensuring compliance with client IT governance policies.
Build Continuous Integration and Delivery (CI/CD) automation pipelines using the Ab Initio Testing Framework and JUnit modules, integrating with Jenkins, JIRA, and ServiceNow.
Implement Ab Initio Change Data Capture (CDC) in data integration/ETL projects.
Required Technical/Functional Skills
2+ years of experience on data integration projects within a Hadoop platform, preferably Cloudera.
7+ years of IT experience, with at least 6 years specifically in ETL design and development using Ab Initio.
Languages and scripting: SQL, Unix shell
Ab Initio software suite: Co>Op, EME, BRE, Conduct>It, Express>It, Metadata Hub, Query>It, Control>Center
Ab Initio frameworks: Acquire>It, DQA, Spec-To-Graph, Testing Framework
Databases: Db2, Oracle, MySQL, Teradata, MongoDB
Big data platforms: Cloudera Hadoop, Hive
Methodologies: Agile, Waterfall
Tools: JIRA, ServiceNow, Linux (6/7/8), SQL Developer, AutoSys, Microsoft Office