Set of Kubernetes solutions for reusing idle resources of nodes by running extra batch jobs
Go package to read and write Parquet files. Parquet is a file format that stores nested data structures in a flat columnar layout. It can be used in the Hadoop ecosystem and with tools such as Presto and AWS Athena.
A lightweight distributed stream computing framework for Go
Export Hadoop YARN (resource-manager) metrics in prometheus format
Yarn on Docker - managing a Hadoop YARN cluster with Docker Swarm.
Run templatable playbooks of Hadoop, Spark, and other jobs on Amazon EMR
A simple service for discovering Flink clusters on Hadoop YARN
An easy Hadoop deployment system
CLI tool for managing multiple Cloudbreak-deployed CM instances
A configuration option helper for Hadoop: fuzzy-find the option you are looking for.
Kubernetes operator for managing the lifecycle of Apache Hadoop Yarn Tasks on Kubernetes.
Apache Hadoop HDFS operator for the Kubernetes Data Stack
A simple command to fetch Hadoop MapReduce job histories and output them as CSV.
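One of the tools listed above exports Hadoop YARN ResourceManager metrics in Prometheus format. The core of that pattern is small: fetch JSON from the ResourceManager's `/ws/v1/cluster/metrics` REST endpoint and render it in the Prometheus text exposition format. A minimal sketch in Go using only the standard library; the metric names and the JSON field subset here are illustrative, not the exporter's actual output:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// clusterMetrics mirrors a small subset of the ResourceManager's
// /ws/v1/cluster/metrics JSON response.
type clusterMetrics struct {
	ClusterMetrics struct {
		AppsRunning int `json:"appsRunning"`
		ActiveNodes int `json:"activeNodes"`
		AvailableMB int `json:"availableMB"`
	} `json:"clusterMetrics"`
}

// toPrometheus renders the metrics in the Prometheus text exposition
// format: one "metric_name value" pair per line.
func toPrometheus(m clusterMetrics) string {
	c := m.ClusterMetrics
	return fmt.Sprintf(
		"yarn_apps_running %d\nyarn_active_nodes %d\nyarn_available_mb %d\n",
		c.AppsRunning, c.ActiveNodes, c.AvailableMB)
}

func main() {
	// Sample body shaped like the ResourceManager's response; a real
	// exporter would GET it from http://<resource-manager>:8088/ws/v1/cluster/metrics
	// and serve the result on its own /metrics endpoint.
	sample := `{"clusterMetrics":{"appsRunning":3,"activeNodes":5,"availableMB":1024}}`
	var m clusterMetrics
	if err := json.Unmarshal([]byte(sample), &m); err != nil {
		panic(err)
	}
	fmt.Print(toPrometheus(m))
}
```

A production exporter would wrap the fetch-and-format step in an HTTP handler so Prometheus can scrape it on a schedule.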