Data Engineer, Active Grid Response
🇺🇸 United States • Hybrid
Posted Dec 2, 2025 • Updated Mar 5, 2026
About Gridware
Gridware is a San Francisco-based technology company dedicated to protecting and enhancing the electrical grid. We pioneered a new class of grid management called active grid response (AGR), focused on monitoring the electrical, physical, and environmental aspects of the grid that affect reliability and safety. The Active Grid Response platform uses high-precision sensors to detect potential issues early, enabling proactive maintenance and fault mitigation. This comprehensive approach helps improve safety, reduce outages, and keep the grid operating efficiently. The company is backed by climate-tech and Silicon Valley investors. For more information, please visit www.Gridware.io.
Role Overview
As a Data Engineer at Gridware, you’ll help build and maintain the pipelines and data systems powering our Active Grid Response platform. You’ll work closely with cross-functional engineers to ensure telemetry, sensor data, and operational information flow reliably through our Lakehouse and into analytics and monitoring tools. This is a hands-on, high-growth role ideal for engineers ready to deepen their expertise in distributed data systems.
Responsibilities
- Building ETL/ELT pipelines that ingest transformer, pole, and sensor telemetry into Gridware’s Data Lake and Lakehouse
- Developing and maintaining real-time and batch ingestion processes using Python, SQL, Databricks, and Spark
- Implementing data quality checks, validation rules, and automated testing for stable operations
- Collaborating with Software, Firmware, and Data Science teams to define ingestion schemas and transformations
- Working with cloud-native tools to optimize pipeline throughput and cost efficiency
- Monitoring pipelines for reliability, troubleshooting issues, and contributing to on-call rotations
- Writing documentation for data processes, models, and metadata
Required Skills
- 2–4 years of experience as a Data Engineer (or Backend Engineer with heavy data exposure)
- Strong proficiency in Python and SQL
- Familiarity with data warehouses, Lakehouse platforms, or big data tools (Databricks, Spark, or equivalent)
- Experience with pipeline orchestration tools (Airflow, Dagster, Prefect, etc.)
- Understanding of event-driven systems or streaming platforms (Kafka, Kinesis, Pub/Sub)
- Solid foundation in data modeling, testing, and version control
- Ability to work collaboratively in a high-autonomy, fast-paced environment
Bonus Skills
- Experience with IoT, telemetry ingestion, or time-series data
- Exposure to Unity Catalog, governance, or schema enforcement
- Understanding of Protobuf, Avro, Parquet, or other serialization formats
- Hands-on experience with observability tools (Grafana, OpenTelemetry)