Data Engineer

Fusemachines (https://www.fusemachines.com/) is an enterprise AI services, education and solutions provider on a mission to democratize AI.

Headquartered in New York with operations across North America, Latin America, and Asia, Fusemachines brings together engineers and PhDs from around the world to help companies build innovative AI solutions. With its Nepal-based head office in Kathmandu, Fusemachines offers advanced AI products such as Fuse Classroom and Fuse Extract. Fusemachines AI Schools run AI Microdegree and Certificate programs in physical classrooms as well as in online live classes using its proprietary content and learning platform.


Basic Job Information

Job Category : IT & Telecommunication
Job Level : Mid Level
No. of Vacancy/s : [ 1 ]
Employment Type : Full Time
Job Location : Kamaladi, Kathmandu
Offered Salary : Negotiable
Apply Before (Deadline) : May 19, 2021, 23:55 (1 week, 6 days from now)

Job Specification

Education Level : Bachelor
Experience Required : 3 years or more
Professional Skill Required : Big Data, Apache Spark, AWS/GCP

Job Description

  • At least 3 years of overall industry experience; working experience in a relevant field is preferred.
  • Experience in creating ETL pipelines, with familiarity in extraction, transformation, loading, filtering, cleaning, joining, scheduling, monitoring, and data streaming.
  • Experience with data processing tools (Spark, Hadoop).
  • Experience with AWS/GCP services (EMR, Redshift, Google Data Studio, BigQuery).
  • Familiarity with data warehousing tools and processes (Snowflake, Redshift, S3, BigQuery).
  • Experience in setting up ingestion pipelines (Apache Kafka, Amazon Kinesis).
  • Familiarity with analytics and visualization tools is preferred.
  • Certifications in big data tools are a plus.
  • Experience in a programming language such as Java, Python, or Scala.
  • Experience with relational SQL and NoSQL databases.
  • Familiarity with project management processes (sprints, Kanban) and tools (Jira, Asana).
  • Ability to work independently or in a collaborative environment with a proactive attitude.
  • Bachelor’s degree in Computer Science or equivalent.
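The extraction, transformation, and loading work described above can be sketched as a minimal pipeline. This is an illustrative sketch only, not part of the role's stack: the field names, the sample feed, and the in-memory SQLite "warehouse" are all assumptions standing in for tools like Spark, Kafka, and Redshift.

```python
import csv
import io
import sqlite3

# Hypothetical raw feed; in practice this would come from an
# ingestion layer such as Kafka or Kinesis.
RAW_CSV = """user_id,country,amount
1,NP,120.5
2,,75.0
3,US,not_a_number
4,NP,300.0
"""

def extract(text):
    """Extract: parse the raw CSV into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Transform: filter out rows with a missing country or a
    non-numeric amount, and coerce the surviving fields."""
    clean = []
    for row in rows:
        if not row["country"]:
            continue
        try:
            amount = float(row["amount"])
        except ValueError:
            continue
        clean.append((int(row["user_id"]), row["country"], amount))
    return clean

def load(rows, conn):
    """Load: write the cleaned rows into a warehouse-style table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS payments "
        "(user_id INTEGER, country TEXT, amount REAL)"
    )
    conn.executemany("INSERT INTO payments VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
total = conn.execute("SELECT COUNT(*), SUM(amount) FROM payments").fetchone()
print(total)  # → (2, 420.5): two rows survive cleaning
```

The same extract/transform/load split carries over to production tools: scheduling and monitoring wrap these steps, while the transform stage would typically run on Spark rather than in-process.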
