Cloudwick Careers

Join Cloudwick

Cloud Big Data Boot Camp

According to LinkedIn, “Cloud and distributed computing has remained in the #1 spot for the past two years and is the Top Skill on almost every list.” Join Cloudwick and be part of the revolution!

Cloudwick invests more than three months and tens of thousands of dollars per new hire to prepare them to become enterprise Cloud Big Data Engineers and Developers. Our Cloud Big Data Boot Camp is considered the gold standard, and graduates go on to exciting careers with Global 1000 organizations, engineering big data platforms and developing modern Hadoop, Spark, NoSQL, and data warehousing applications.

Cloud Big Data Engineer

Cloudwick Cloud Big Data Engineers are the best in the industry at developing scalable data pipelines for Big Data analytics. At Cloudwick, you will be trained and certified to architect and engineer data ingestion at scale using open source.

Once hired, you will enter Cloudwick’s industry-leading hands-on boot camp where you will be trained by industry experts and receive professional certifications.

12-week Hands-On Boot Camp

Cloudwick’s boot camp will cover:

  • Scala Programming
  • Hadoop Fundamentals
  • Spark Core
  • Spark ML
  • Spark Streaming
  • Kafka
  • Ingestion Methods
  • Data Pipelines
  • Transformation Methods
  • NoSQL
  • Automation using Puppet/Chef
  • Amazon Web Services
  • Machine Learning and Data Science
  • Capstone Project


Certifications

  • Cloudera Certified Associate: Spark and Hadoop Developer
  • Cloudera Certified Professional: CCP Data Engineer
  • Any one of the following AWS certifications:

— AWS Certified Solution Architect – Associate
— AWS Certified Developer – Associate
— AWS Certified SysOps Administrator – Associate
— AWS Certified Big Data – Specialty

Prerequisites for the Cloudwick Big Data Boot Camp

  • Recent graduate with a Master’s degree
  • Strong programming skills in Java
  • Willingness to learn and explore new technologies in a short span of time
  • A good understanding of distributed computing

Boost your Career

Big Data is the #1 Career for IT Professionals with Master’s Degrees in Computer Science

  • Cloudwick invests more than $25,000 in your Big Data career development on Day 1
  • Cloudwick leads the industry with pay and benefits
  • You can choose where you attend boot camp: US or EMEA

We’re always looking for experienced big data professionals. Click on the position titles below for a description and responsibilities to see if you’re a fit.

Cassandra Engineer


Responsibilities

    • Design, install, monitor, maintain, troubleshoot, and performance-tune Cassandra clusters running in our data centers, resolving complex problems while ensuring high levels of data availability
    • Establish performance standards to identify thresholds which indicate the onset of performance issues
    • Architect, design, document, and implement redundant systems and large scale data management
    • Design and implement procedures for disaster recovery and data archiving
    • Administer security and disaster recovery of databases, including back-up, recovery, security audits, database health checks, and disaster recovery planning and testing
    • Plan and coordinate data migrations between systems and environments
    • Create models for new database development and/or changes to existing ones
    • Handle incident tickets including problem isolation, analysis, root cause, and solution implementation
    • Conduct research and make recommendations on database products, services, protocols, and standards
    • Develop routines for end-users to facilitate best practices within the database
    • Confer with end-users, clients, or senior management to define business requirements

Position requires unanticipated travel throughout the US up to 50% of the time.


Requirements

  • Master’s degree (or foreign equivalent) in CS, EE, or a related field.
  • Coursework and/or experience must have included Programming Language Concepts, Database Systems, Operating Systems Design, and Network Security.

Email resume with cover letter to

Big Data Architect


Responsibilities

  • Collaborate in planning initiatives across workshops, Big Data architecture, future roadmaps, operations, and strategic planning
  • Create architecture and technical design documents to communicate solutions that will be implemented by the development team
  • Work with line of businesses and development management to provide effective technical designs aligning with industry best practices
  • Develop highly scalable and extensible Big Data platform which enables collection, storage, modeling, and analysis of massive data from different channels
  • Engage with external vendors to evaluate products and help illustrate approaches to tech and business problems
  • Recommend and establish new software development, testing, and documentation standards
  • Monitor and ensure compliance of architectural and development standards
  • Identify and recommend new technologies, architectures, processes and tools to increase efficiency and productivity
  • Work with multiple products and technologies at all tiers of the application architecture to guide design

Position requires unanticipated travel throughout the US up to 50% of the time.


Requirements

  • Master’s degree or foreign equivalent in CS, Engineering (any), or a related field and two years of experience in the following:
    • Big Data: Hadoop, Spark, HBase, Cassandra, MongoDB, Kafka
    • Programming: Scala, Ruby, Java, Bash
    • Automation Frameworks: Puppet, Chef, Docker
    • Cloud Computing: Amazon Web Services, OpenStack
  • Experience performing architecture design, data modeling, and implementation of Big Data platforms and analytic applications

Email resume with cover letter to


Big Data Solution Architect


Responsibilities

  • Design/develop data ingestion, aggregation, integration, and advanced analytics in Hadoop
  • Define development standards and design patterns to process and store high volume data sets
  • Administer multi-distribution Hadoop clusters
  • Create ETL and data ingest jobs using MapReduce, Pig, or Hive. Work with and integrate multiple types of data, including unstructured, structured, and streaming
  • Make information and predictive analytics available on a large scale to next generation applications
  • Recommend and establish new software development, testing, and documentation standards
  • Integrate Big Data tools into traditional enterprise architectures
  • Establish sound coding and testing practices to ensure quality software builds
  • Bring new and innovative solutions to the table to resolve challenging software issues as needed throughout the project life cycle

Position requires unanticipated travel throughout the US up to 50% of the time.


Requirements

  • Master’s degree or foreign equivalent in CS, CA, CIS, Engineering (any), or a related field and two years of experience in the following: Hadoop, Spark, Cassandra, MongoDB, Kafka, Scala, Puppet, Amazon Web Services, OpenStack
  • OR Bachelor’s degree or foreign equivalent in CS, CA, CIS, Engineering (any), or a related field and five years of progressive, post-baccalaureate experience in the following: Hadoop, Spark, Cassandra, MongoDB, Kafka, Scala, Puppet, Amazon Web Services, OpenStack.
  • Any suitable combination of education, training, or experience is acceptable.

Email resume with cover letter to

Hadoop Data Engineer


Responsibilities

  • Work closely with Data Scientists to identify and develop methods to collect and integrate a wide variety of data to be used in predictive analytics, machine learning, or other data science use cases
  • Develop data pipelines and iterative models for data experimentation using a variety of tools such as Kafka, Hadoop, Cassandra, Storm, and Spark
  • Apply big data technologies such as Hadoop, Spark or Streams with NoSQL data management and related programming languages for analytics and experimentation with large, multi-structured data sets
  • Design and maintain Hadoop Workflows/ETL for all the data products
  • Design, implement, or translate data science models to the Hadoop ecosystem for scale
  • Act as a big data consultant in recommending the right tools/libraries to solve big data problems
  • Work closely with customer’s business, engineering, and executive teams, using data to drive iterative analytical models
  • Design, develop, quality-assure (QA), and maintain application code
  • Provide thought-leadership and dependable execution on diverse projects

Position requires unanticipated travel throughout the US up to 50% of the time.


Requirements

  • Master’s degree or foreign equivalent in CS, Computer Engineering, Computer Applications, or a related field and two years of experience in the following: Linux operating systems, Hadoop, Oracle, Informatica, and Teradata
  • Experience using tools/technologies such as Kafka, Hadoop, Cassandra, Storm, and Spark with NoSQL

Email resume with cover letter to


DevOps Engineer


Responsibilities

  • Collaborate with business and technical stakeholders to clearly understand business objectives, customer needs, and system requirements; determine which big data platform to run on clusters; and translate requirements into technical solutions
  • Install and configure big data platform distribution for software framework
  • Develop configuration for software framework stack technologies and use automated tools to install the software framework stack
  • Install and configure monitoring tools, develop scripts based on customer data, develop integration with other data sources, perform data analytics, and provide support services
  • Install and configure data connectors for third party tools and commercial big data tools
  • Perform Benchmark and Quality Assurance testing of the cluster software
  • Evaluate cluster performance and propose system improvements and measures to maximize efficiency and quality, including optimizing code through performance-tuning and coding techniques
  • Develop and maintain software products focused on the Hadoop ecosystem

Position requires unanticipated travel throughout the US up to 50% of the time.


Requirements

  • Master’s degree (or foreign equivalent) in Computer Science, Engineering, Computer Information Systems, or a related field and two (2) years of experience involving each of the following tools and technologies: Linux system administration, system architecture, distributed file systems, database systems, Python, Java.

Email resume with cover letter to

Submit your resume

About Us

Cloudwick is the leading provider of enterprise business and technology modernization services and solutions to the Global 1000. We help leading enterprises gain competitive advantage from open source, data lake, big data, cloud and advanced analytics.

  • Solutions

    Cloudwick provides complete data, analytics and cloud modernization solutions, leading to faster time-to-transformation for your enterprise.

  • Services

    Cloudwick makes business and IT transformation easy for line of business and IT with end-to-end data analytic and cloud modernization services.

  • Competencies

    Cloudwick has unmatched expertise and extensive experience architecting, scaling and managing enterprise data lake, advanced analytics, cloud and big data solutions.

  • Verticals

    Cloudwick has global experience and expertise working with executives, line of business, IT and vendors across all verticals.