Even surprisingly popular distributed-systems projects tend to document only a "follow these 10 copy/paste steps to deploy to EKS" path, which is obnoxious. People want to see something working at small scale first, to check whether the project is abandonware. And even after that, local prototyping is really nice to have without first setting up multiple repositories, shipping multiple modified container images, and standing up CI/CD for all of the above.
* https://github.com/ComputeCanada/magic_castle
They link to various other projects that do cloud-y HPC:
* AWS ParallelCluster [AWS]
* Cluster in the cloud [AWS, GCP, Oracle]
* Elasticluster [AWS, GCP, OpenStack]
* Google Cluster Toolkit [GCP]
* illume-v2 [OpenStack]
* NVIDIA DeepOps [Ansible playbooks only]
* StackHPC Ansible Role OpenHPC [Ansible Role for OpenStack]
Nvidia also offers free licenses for their Base Command Manager (BCM, formerly Bright Cluster Manager); pay for enterprise support, or hit up the forums:
* https://www.nvidia.com/en-us/data-center/base-command-manage...
I have worked 100% in 3 comparable systems over the past 10 years. Can you access it with ssh?
I find it super fluid to develop methods for huge datasets directly on the HPC system, using vim to code and tmux for sessions. I rely on constantly printing detailed log files with lots of debug output, plus an automated monitoring script that prints those logs in realtime; a mixture of .out, .err, and log.txt files.
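The monitoring part of that workflow can be sketched roughly like this; the function name `showlogs` and the file names are my own invention, not from the original comment. `tail` prints a `==> file <==` header before each file when given several, which is enough to multiplex the .out/.err/log.txt streams in one tmux pane; swap in `tail -F` to follow them live.

```shell
# Hypothetical sketch of the "automated monitoring script" described above.
# Snapshot the last lines of each log stream for a run; with multiple files,
# tail prefixes each with a "==> filename <==" header so streams stay distinct.
showlogs() {
    tail -n 20 -- "$@"
}

# One-shot snapshot (use `tail -n +1 -F` instead to follow live under tmux):
# showlogs job.out job.err log.txt
```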
Futile hope though. My company is still using SGE.