Fusemachines Nepal

Data Engineer

Fusemachines (https://www.fusemachines.com/) is an enterprise AI services, education and solutions provider on a mission to democratize AI.

Headquartered in New York, with operations across North America, Latin America, and Asia, Fusemachines brings together engineers and PhDs from around the world to help companies build innovative AI solutions. With its Nepal head office in Kathmandu, Fusemachines offers advanced AI products such as Fuse Classroom and Fuse Extract. Fusemachines AI Schools run AI Microdegree and Certificate programs in physical classrooms as well as in online live classes, using its proprietary content and learning platform.



Views: 1370 | This job is expired 2 years, 9 months ago

Basic Job Information

Job Category : IT & Telecommunication > Data Warehousing, Database Engineer/Database Programmer, Programmer/Software Engineer, Software Architect
Job Level : Top Level
No. of Vacancy/s : [ 3 ]
Employment Type : Full Time
Job Location : Kamaladi, Kathmandu
Offered Salary : Negotiable
Apply Before(Deadline) : Jul. 03, 2021 23:55 (2 years, 9 months ago)

Job Specification

Education Level : Under Graduate (Bachelor)
Experience Required : More than or equal to 1 year
Professional Skill Required : AWS/GCP/Azure, Python/Java/Scala, SQL/NoSQL Databases, Big Data, Apache Spark

Job Description

  • Experience in at least one programming language such as Java, Python, or Scala.
  • Experience creating ETL pipelines; familiar with extraction, transformation, loading, filtering, cleaning, joining, scheduling, monitoring, and data streaming.
  • Experience with data processing tools (Spark, Hadoop).
  • Experience with AWS/GCP services (EMR, Redshift, Google Data Studio, BigQuery).
  • Familiarity with data warehousing tools and processes (Snowflake, Redshift, S3, BigQuery).
  • Experience setting up ingestion pipelines (Apache Kafka, Amazon Kinesis).
  • Familiarity with analytics and visualization tools is preferable.
  • Certifications in big data tools are preferable.
  • Experience with relational SQL and NoSQL databases.
  • Familiarity with project management processes (Scrum sprints, Kanban) and tools (Jira, Asana).
  • Ability to work independently or in a collaborative environment, with a proactive attitude.
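The ETL steps named above (extraction, transformation, filtering, cleaning, loading) can be illustrated with a minimal, stack-agnostic sketch. This is not tied to any tool the posting mentions; the CSV data, table name, and column names are purely hypothetical, and a real pipeline would use a framework such as Spark rather than the standard library.

```python
# Minimal ETL sketch (illustrative only): extract rows from CSV text,
# transform/clean them, and load the result into a SQLite table.
import csv
import io
import sqlite3

# Hypothetical raw input; note the row with a missing signup_year.
RAW_CSV = """name,signup_year
alice,2019
bob,
carol,2021
"""

def extract(text):
    # Extraction: parse CSV text into dict rows.
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    # Cleaning/filtering: drop rows with a missing year, normalize names,
    # and cast the year to an integer.
    return [
        (r["name"].title(), int(r["signup_year"]))
        for r in rows
        if r["signup_year"].strip()
    ]

def load(rows, conn):
    # Loading: write the cleaned rows into a warehouse-style table.
    conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT, signup_year INTEGER)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW_CSV)), conn)
count = conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]  # 2 rows survive cleaning
```

In production the same extract/transform/load shape is typically scheduled and monitored by an orchestrator, with each stage swapped for a distributed equivalent.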

