| Job Role | Data Engineer |
| --- | --- |
| Salary | Up to 12 LPA |
| Qualifications | B.Tech / M.Tech |
Deloitte is hiring candidates for the post of Data Engineer. Check the link below to apply.
Job Responsibilities:
- Develop solutions with an Agile development team.
- Define, produce, test, review, and debug solutions.
- Create component-based features and micro-frontends.
- Perform database development with PostgreSQL.
- Create comprehensive unit test coverage in all layers.
- Deploy solutions to Docker containers and Kubernetes.
- Help build a team culture of autonomy and ownership.
- Work with a Product Owner to refine stories into functional use cases and identify the work effort as tasks.
- Participate in test case creation responsibilities and peer reviews prior to coding.
- Review implementation plans of peers prior to their coding.
- Demonstrate feature work at the end of each iteration.
- Work from home when desired with infrequent visits to the office and limited travel for planning sessions.
- Develop our ETL process into a robust, automated, production-quality solution, and lead its implementation and delivery.
- Partner with the application engineering team to ensure our data model fits the needs of the solution, while promoting best practices in its design from both a maintenance and a performance perspective.
- Partner with the data science team to understand their needs for preparing large datasets for machine learning.
- Help the team understand the execution plans of poorly written queries, and remediate performance problems by tuning queries and/or refining the data model to meet the needs of the business.
- Build data systems and pipelines.
- Evaluate business needs and objectives.
- Explore ways to enhance the product and pipeline.
- Collaborate with the team.
- Showcase skills and innovative ideas to the team biweekly.
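Several of the responsibilities above center on building robust, automated ETL pipelines in Python and SQL. As a minimal illustrative sketch (not Deloitte's actual stack), the following uses Python's built-in `sqlite3` module as a stand-in for Postgres; the table and column names are hypothetical:

```python
import sqlite3

def run_etl(rows):
    """Extract raw (name, amount) rows, transform by filtering out
    invalid (negative) amounts, and load per-name totals."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE raw_sales (name TEXT, amount REAL)")
    conn.executemany("INSERT INTO raw_sales VALUES (?, ?)", rows)
    # Transform + load: drop negative amounts, aggregate per name.
    conn.execute(
        "CREATE TABLE sales_summary AS "
        "SELECT name, SUM(amount) AS total "
        "FROM raw_sales WHERE amount >= 0 GROUP BY name"
    )
    result = dict(conn.execute("SELECT name, total FROM sales_summary"))
    conn.close()
    return result
```

In a production pipeline, a step like this would typically be wrapped in an orchestrator task (e.g., an Airflow operator) with logging and retries, rather than run as a bare function.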
Required Skills:
- Strong knowledge of Python and SQL.
- Hands-on experience with SQL database design.
- Hands-on experience with or knowledge of Airflow.
- Knowledge of Docker and Kubernetes.
- Experience with running containerized microservices.
- Experience with Apache Spark or AWS EMR.
- Experience with cloud platforms (AWS, Azure), with a strong preference for AWS.
- Experience with database design practices.
- Technical expertise with data warehouses or data lakes.
- Expertise in configuring and maintaining PostgreSQL.
- Experience performance tuning queries and data models to produce the best execution plan.
- Strong experience building data pipelines & ETL.
- Experience working on an Agile Development team and delivering features incrementally.
- Experience with Git repositories.
- Working knowledge of setting up builds and deployments.
- Experience with both Windows and Linux.
- Experience demonstrating work to peers and stakeholders for acceptance.
- Ability to multi-task, be adaptable, and nimble within a team environment.
- Strong communication, interpersonal, analytical and problem-solving skills.
- Ability to communicate effectively with nontechnical stakeholders to define requirements.
- Ability to quickly understand new client data environments and document the business logic that composes them.
- Ability to integrate oneself into geographically dispersed teams and clients.
- A passion for high-quality software.
- Previous experience as a data engineer or in a similar role.
- Eagerness to learn and seek out new frameworks, technologies, and languages.
- Commitment to working with others and sharing knowledge on a regular basis.
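The requirements above include tuning queries and data models to produce the best execution plan. As a hedged illustration, the sketch below uses SQLite's `EXPLAIN QUERY PLAN` (Postgres offers the analogous `EXPLAIN` / `EXPLAIN ANALYZE`) to show how adding an index changes a query from a full table scan to an index search; the table and index names are invented:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user_id INTEGER, ts TEXT)")

def plan(sql):
    # EXPLAIN QUERY PLAN rows end with a text description of each step.
    return [row[-1] for row in conn.execute("EXPLAIN QUERY PLAN " + sql)]

# Without an index, the plan is a full scan of the events table.
before = plan("SELECT * FROM events WHERE user_id = 42")

conn.execute("CREATE INDEX idx_events_user ON events(user_id)")

# With the index, the plan becomes a search using idx_events_user.
after = plan("SELECT * FROM events WHERE user_id = 42")
```

Comparing the plan before and after a change like this is the basic workflow behind the query-tuning responsibilities described earlier in the posting.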
Process of applying for the job:
- First, read all the job details on this page.
- Scroll down and click the APPLY HERE button.
- You will be redirected to the company website.
- Click the Apply button.
- Fill out the form and apply.