AI Data Engineer

🇺🇸 United States · On-site

Posted Dec 16, 2025

Fluency is enabling the autonomous enterprise.

We need you to build the data infrastructure that powers enterprise intelligence. We're not wiring up dashboards. We're building pipelines that ingest, process, and structure the raw signals of how work actually happens, at a scale nobody has attempted.

Fluency is looking for an AI Data Engineer to design and build the data systems that feed our process conformance, productivity measurement, and AI impact analysis across Fortune 500 organisations.

The Problem Space

You'll be building data infrastructure that handles messy, real-world signals: screenshots, OCR text, application metadata, and behavioural events. The challenge is transforming unstructured chaos into reliable, queryable data that our ML systems can consume, at scale, with cost constraints that make naive approaches untenable.

This means:

  • Designing ingestion pipelines that process millions of screenshots and behavioural events daily

  • Building data validation and quality systems that catch drift before it corrupts models

  • Creating feature stores and serving infrastructure that balance freshness against compute cost

  • Optimising storage and query patterns for time-series behavioural data

  • Orchestrating complex DAGs that coordinate OCR, LLM enrichment, and downstream aggregations
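For a flavour of that last bullet, a DAG coordinating OCR, LLM enrichment, and downstream aggregation can be sketched with nothing but the standard library. The stage names and payloads here are illustrative placeholders, not Fluency's actual pipeline; in production this logic would live in an orchestrator such as Dagster or Airflow.

```python
# Minimal DAG sketch: run stages in dependency order, feeding each
# stage's output into the next. Stage implementations are stubs.
from graphlib import TopologicalSorter

def ocr(batch):
    # Placeholder: extract text from each screenshot in the batch.
    return [{"text": f"ocr({item})"} for item in batch]

def llm_enrich(records):
    # Placeholder: attach LLM-derived labels to each OCR record.
    return [{**r, "label": "enriched"} for r in records]

def aggregate(records):
    # Placeholder: roll enriched records up into a summary.
    return {"count": len(records)}

def run_pipeline(batch):
    # Dependencies: aggregate needs llm_enrich, which needs ocr.
    dag = {"ocr": set(), "llm_enrich": {"ocr"}, "aggregate": {"llm_enrich"}}
    stages = {"ocr": ocr, "llm_enrich": llm_enrich, "aggregate": aggregate}
    result = batch
    for stage in TopologicalSorter(dag).static_order():
        result = stages[stage](result)
    return result
```

A real orchestrator adds what this sketch omits: retries, backfills, per-stage observability, and fan-out across millions of daily events.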

The playbook doesn't exist. You'll write it.

We're backed by tier-one VCs, including Accel, and are hitting an inflection point with enterprises around the globe.

You'll work directly with founders and our engineering team on technical challenges that span data engineering, LLM pipelines, and production systems.

About the Role

We're looking for someone with:

  • Strong Python fundamentals and software engineering discipline

  • Experience building production data pipelines (Dagster, Airflow, Prefect, or similar)

  • Data modelling expertise: designing schemas for analytical and ML workloads

  • Infrastructure experience: AWS (S3, RDS, Glue, Lambda), containerisation, IaC

  • Production database experience: PostgreSQL, graph databases (Neo4j, Neptune, or similar)

  • Monitoring and observability for data systems: lineage, quality metrics, alerting

  • Comfort with ambiguity and novel problem domains

Computer science background, with a caveat: if you don't have one, you're challenged to beat one of the founders in a 1:1 whiteboard duel on data structures and algorithms, judged by Hung. Neither founder has a formal CS background, but come prepped.

You'll also be expected to stay up to date on business context, which could involve:

  • Watching key customer calls

  • Interacting with customers

  • Helping with product thinking

Strongly Preferred

  • Experience with LLM pipelines and model serving infrastructure

  • Shipping models to production: deployment, versioning, monitoring

  • OCR, document processing, or image pipeline experience

  • Cost optimisation for data-intensive systems

  • Familiarity with dbt, Spark, or similar transformation frameworks

  • Experience with multi-region data architectures and residency requirements

  • You've operated data systems at scale under real constraints

  • Interesting personal projects that demonstrate depth

Our Customers

We work with some of the world's largest:

  • Financial services enterprises (Aon)

  • Manufacturing enterprises (Misumi)

  • And many more across the enterprise spectrum (PVH)

Our Culture

You're expected to be in love with the craft. You're expected to like laughing. You're expected to want to work on novel problems, and to find satisfaction in solving them under ambiguity.

Our Values

  • In hesitation lies destruction; in action, glory.

  • Those who merely meet expectations abandon the pursuit of greatness.

  • One who dwells within the forum must regard it as hallowed ground.

  • One who has not tasted the grapes declares them sour.

  • One who sits alone at the feast misses the richness of the table.

Location

Full-time, in-person role based in San Francisco, CA.

  • We offer E-3 visa sponsorship for Australians, plus a relocation stipend

Compensation

  • US$150K–$250K salary, depending on the candidate and experience

  • Substantial equity: every offer includes ownership

  • Mac, Linux, or Windows: your call

  • High-impact work with global enterprises

  • Technical, product-led founders

Don't apply if:

  • You want hybrid or remote

  • You don't like working hard at insane velocity

  • You want to work a 9 to 5

  • You're not comfortable with rapid iteration

  • You think data engineering is plumbing work

  • You've never operated production pipelines

  • You don't have personal projects

  • You dislike constraints (we have them: cost, latency, reliability tradeoffs are real)

  • You aren't ambitious

  • You don't have a good reason for wanting to work at an early-stage company

Hiring Process

  • Resume screen

  • 1:1 with founder

  • Technical deep-dive on past data engineering work

  • Work through a real problem with the team

  • Offer

We strongly encourage applicants from underrepresented backgrounds to apply. Diverse teams build better products; see value #5.
