Data Engineering - Technical Manager

Centelon is a modern, young technology company that carries an unflinching passion to help clients innovate, adapt and succeed. We are a business-technology products, solutions and services company.

At Centelon, we harness the power of AI, digital and other emerging technologies to help our clients adapt to the evolving world. Centelon is a trusted partner of large and mid-size businesses in Financial Services, Media, Logistics, Energy & Utilities industries. We have offices in Australia, Singapore and India.

About the Role

Big Data is dramatically changing our lives and creating multiple business opportunities. Centelon is at the forefront of this global transformation. We are working on path-breaking problems such as cloud-based data warehousing, business intelligence, data lakes and data migration.

We are looking for an action-oriented and motivated individual who can design data engineering solutions, manage teams and handle client relationships.

The ideal candidate is strong in technology, communication and team management. This position offers a steep learning curve, with significant exposure to working directly with global clients. You are expected to be one of the best in the industry in applied big data.

Expected Skills:

·       12+ years of overall experience in the big data space, leading projects or units of work.
·       Experience in managing and mentoring data engineers.
·       Strong communication and client management skills.
·       Multiple project experience in Data Warehouse, Business Intelligence or Data Analytics.
·       Experience in architecting, designing, and delivering data pipelining solutions in cloud technologies such as Microsoft Fabric, Azure Synapse, Databricks or Snowflake. Certifications are a plus.
·       Solid foundation in data warehouse and BI design, tools, processes, and implementation approaches.
·       Experience in building schedule-driven workflows, serialisation formats, data modelling and architecting for performance.
·       Hands-on experience with PySpark DataFrames and SQL.
·       Data modelling skills using tools such as Erwin.
·       Exposure to specific ETL tools such as Azure Data Factory, dbt (data build tool) and AWS Glue, and query engines such as AWS Athena.
·       Sound experience with orchestration tools such as Airflow.
·       Knowledge of AI tools will be an added advantage.