- Storage/Streams: HDFS, HBase, Cassandra, Flume, Kafka
- App and Resource Management: YARN, Mesos
- Data Processing Engines: MapReduce, Flink, Spark, Storm, Tez
- Applications: Pig, Hive, Mahout, Crunch, Spark GraphX, Spark ML, Spark MLlib, Spark SQL, Spark DataFrames, Spark Datasets, Storm Trident
- Comprehensive enterprise solutions, integrating your legacy and BI systems with your Hadoop cluster and DevOps tools
- Offering an open-source implementation with Apache Beam, a unified API with pluggable runtime engines (Google Cloud Dataflow, Spark, Flink, and more) and multiple SDKs (Java, Python, with Scala and R planned); see the Beam sketch after this list
- Providing integration of your Big Data cluster with high-availability and real-time tooling
- Solutions designed to work with any data format: structured (RDBMS), semi-structured (JSON, XML), unstructured data, and Hadoop serialization/columnar formats (Avro, ORC, Parquet, SequenceFile); see the PySpark sketch after this list
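To illustrate Beam's runner portability, here is a minimal word-count sketch using the Python SDK; the file paths and the choice of DirectRunner are hypothetical, and the same pipeline definition can be submitted to Spark, Flink, or Google Cloud Dataflow by swapping the runner option.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# The runner is pluggable: DirectRunner (local), SparkRunner, FlinkRunner,
# or DataflowRunner can all execute the same pipeline definition.
options = PipelineOptions(runner="DirectRunner")

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("input.txt")        # hypothetical input path
        | "Split" >> beam.FlatMap(lambda line: line.split())
        | "Pair" >> beam.Map(lambda word: (word, 1))
        | "Count" >> beam.CombinePerKey(sum)
        | "Format" >> beam.Map(lambda kv: f"{kv[0]}: {kv[1]}")
        | "Write" >> beam.io.WriteToText("counts")            # hypothetical output prefix
    )
```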
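To make the format flexibility concrete, below is a minimal PySpark sketch (all file names and the JDBC URL are hypothetical) showing structured, semi-structured, and columnar/serialized sources all loading into the same DataFrame API.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-demo").getOrCreate()

# Hypothetical sources; each one loads into the same DataFrame abstraction.
rdbms_df = spark.read.format("jdbc").options(
    url="jdbc:postgresql://host/db", dbtable="orders").load()  # structured (needs JDBC driver)
json_df = spark.read.json("events.json")                       # semi-structured JSON
parquet_df = spark.read.parquet("events.parquet")              # columnar Parquet
orc_df = spark.read.orc("events.orc")                          # columnar ORC
avro_df = spark.read.format("avro").load("events.avro")        # Avro (needs spark-avro package)
```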
Our Approach:
1. Identify and brainstorm the use case(s) with high visibility and real impact (what matters to you).
2. Provide disruptive solutions that bring real value to your company.
3. Integrate with your legacy systems, BI systems, existing hardware/infrastructure, and DevOps.
4. Provide a cost-effective plan to implement the new tools and bring real change to the company.
5. Provide project planning consultations and resources for the design and development of these exciting new tools and technologies.
6. Integrate with DevOps tools (issue tracking, repositories, documentation tools, CI/build tools, etc.).
7. Maintain excellent communication and transparent status reporting.
8. Provide excellent documentation of all the steps.