Build robust data pipelines and ETL processes using Python for large-scale data processing.
Design and implement the data infrastructure for a major telecommunications company serving 50+ million customers. Your pipelines will process terabytes of network data daily to optimize service quality and predict infrastructure needs.

You'll build ETL systems using Apache Airflow, Spark, and Kafka to handle real-time streaming data from cell towers, customer usage patterns, and network performance metrics. The project involves creating data lakes, implementing data quality monitoring, and building machine learning feature pipelines that feed predictive models for network optimization.

You'll work closely with data scientists and network engineers to ensure data accuracy and availability while optimizing processing costs and performance. This role offers the opportunity to work with data systems at massive scale while directly improving the quality of telecommunications services for millions of users. You'll be responsible for architecting solutions that handle peak traffic loads during major events while maintaining data consistency and reliability across distributed systems.
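To give a flavor of the data quality monitoring work described above, here is a minimal sketch of a validation stage that quarantines bad rows before they land in the data lake. The record shape, field names, and thresholds are illustrative assumptions, not part of the role description.

```python
from dataclasses import dataclass
from typing import Iterable, List, Tuple

# Hypothetical record shape for a cell-tower metric reading;
# field names and units are illustrative only.
@dataclass
class TowerReading:
    tower_id: str
    timestamp: float        # Unix epoch seconds
    throughput_mbps: float  # measured downlink throughput

def validate_readings(
    readings: Iterable[TowerReading],
    max_throughput_mbps: float = 10_000.0,  # assumed sanity bound
) -> Tuple[List[TowerReading], List[Tuple[TowerReading, str]]]:
    """Split readings into valid rows and rejected rows with reasons.

    A stage like this typically runs before data lands in the lake,
    so bad rows are quarantined for inspection, not silently dropped.
    """
    valid: List[TowerReading] = []
    rejected: List[Tuple[TowerReading, str]] = []
    for r in readings:
        if not r.tower_id:
            rejected.append((r, "missing tower_id"))
        elif r.timestamp <= 0:
            rejected.append((r, "non-positive timestamp"))
        elif not (0.0 <= r.throughput_mbps <= max_throughput_mbps):
            rejected.append((r, "throughput out of range"))
        else:
            valid.append(r)
    return valid, rejected
```

In a production pipeline the same checks would run as a task inside an Airflow DAG or a Spark job, with rejected rows routed to a quarantine table and counts exported to monitoring.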
Questions about this role?
Contact our HR team at careers@belovtech.com
We're building the future of enterprise AI, making advanced artificial intelligence accessible to businesses of all sizes. Join our mission to democratize AI technology.