Important Information
Experience: 6+ years
Job Mode: Full-time
Work Mode: Remote
ID: 20555
Job Summary
We are seeking a highly skilled Senior Data Engineer with deep experience in modern data platforms and hands-on expertise in building Agentic AI–first data solutions. The ideal candidate has strong proficiency with Strands and AgentCore, along with robust experience across Snowflake, OpenFlow, Kafka, and dbt. You will play a key role in designing and scaling data architectures that power advanced AI agents and next-generation analytics.
This role requires someone who thrives in innovative environments, enjoys building from scratch, and can influence technical strategy across AI, data engineering, and cloud ecosystems.
Responsibilities and Duties
- Design, develop, and maintain Agentic AI–first data pipelines using Strands and AgentCore to enable intelligent automation and decision-making at scale.
- Architect, optimize, and manage Snowflake data warehouses, ensuring high performance, secure data sharing, and cost-efficient usage.
- Build, refine, and orchestrate streaming and event-driven pipelines using Kafka and OpenFlow.
- Develop modular, testable transformations using dbt, implementing best practices in data modeling (e.g., Kimball, Data Vault).
- Implement scalable and reliable data integration solutions supporting real-time and batch processing.
- Collaborate with AI/ML teams to provide high-quality, well-documented datasets for model training and agent workflows.
- Drive data quality, governance, lineage, and observability using modern tooling and AI-enabled monitoring.
- Lead code reviews and architecture discussions, and mentor junior data engineers.
- Partner closely with Product, AI Engineering, and Platform teams to define data and agent strategies.
Qualifications and Skills
- 6+ years of experience as a Data Engineer, Senior Data Engineer, or similar role.
- Hands-on experience with Agentic AI frameworks, specifically:
  - Strands
  - AgentCore
- Strong expertise with:
  - Snowflake (advanced) — performance tuning, Snowpark, RBAC, cost optimization
  - OpenFlow for orchestration and processing
  - Kafka for event streaming, producers/consumers, schema registry
  - dbt for modular SQL development, testing, and documentation
- Strong SQL skills and proficiency in Python.
- Solid understanding of modern data architectures (ELT/ETL, real-time streaming, microservices-based pipelines).
- Experience with CI/CD for data pipelines, version control, and deployment automation.
- Familiarity with cloud platforms (AWS, GCP, or Azure).
Preferred Qualifications
- Experience building AI-driven data systems, agentic workflows, or LLM-integrated pipelines.
- Background in data governance, lineage, observability, and quality frameworks.
- Knowledge of Snowflake advanced features such as Dynamic Tables, Streams & Tasks.
- Experience with containerized and distributed systems (Docker, Kubernetes).
- Strong communication skills and ability to collaborate in cross-functional teams.
About Encora
Encora is a global company that offers Software and Digital Engineering solutions. Our practices include Cloud Services, Product Engineering & Application Modernization, Data & Analytics, Digital Experience & Design Services, DevSecOps, Cybersecurity, Quality Engineering, AI & LLM Engineering, among others.
At Encora, we hire professionals based solely on their skills and do not discriminate based on age, disability, religion, gender, sexual orientation, socioeconomic status, or nationality.