Unleash the Power of Piston: A Scalable Guide to Running IOI Problems with Slurm
Are you ready to supercharge your IOI problem-solving capabilities? This guide unlocks the secrets to deploying and managing Piston workers on a Slurm cluster, both locally and with Docker containers. Get ready for enhanced scalability and performance!
Why Use Piston? The Benefits You Need to Know
Piston is a powerful code execution engine, and a fleet of Piston workers significantly speeds up the evaluation of IOI problems. Setting one up can be daunting, though. Here’s why you should use this guide:
- Scalability: Effortlessly launch and manage multiple workers for parallel processing.
- Automation: Streamline worker deployment with ready-to-use scripts.
- Efficiency: Optimize resource utilization on your Slurm cluster.
Launching a Fleet of Piston Workers on a Slurm Cluster
Ready to deploy? This section guides you through setting up multiple workers on your Slurm cluster. Take advantage of your robust cluster environment to speed up the execution of IOI problems.
- Adapt the Launch Scripts: Modify the paths in `launch_piston_workers.sh` and `launch_single_piston.sh` to match your environment.
- Execute: Run `slurm/piston/launch_piston_workers.sh <number of workers to launch>`. Each worker will be named `piston-worker-<port>`, where `<port>` is the listening port.
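Once a worker is up, you can sanity-check it before pointing jobs at it. A minimal probe, assuming a worker on the hypothetical address `ip-10-53-86-146:1234` (Piston lists its installed runtimes under `/api/v2/runtimes`):

```shell
# Hypothetical worker address -- substitute the node and port Slurm assigned.
WORKER="ip-10-53-86-146:1234"

# A healthy Piston worker answers with a JSON list of installed runtimes.
curl -s "http://$WORKER/api/v2/runtimes" || echo "worker not reachable yet"
```

A freshly launched worker with no packages installed will return an empty list, which is expected before the first-time setup below.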
First-Time Setup: Installing the IOI Package
Before launching more workers, you'll need to install the IOI package on at least one worker. Here’s how:
- Launch a Single Worker: Run `slurm/piston/launch_piston_workers.sh 1`.
- Send the Install Request: Assuming the worker is running on `ip-10-53-86-146:1234`, send a package install request to its API.

Subsequent workers will automatically have the package installed because the packages directory is a shared mount. This ensures consistent IOI problem execution across your cluster.
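The install request can be sent with curl. A sketch, assuming Piston's standard package API and an IOI package named `cms_ioi` (substitute the package name and version your cluster actually provides):

```shell
# Hypothetical worker address -- replace with your worker's host:port.
WORKER="ip-10-53-86-146:1234"

# Piston exposes package management at /api/v2/packages.
# The package name "cms_ioi" and version are assumptions; adjust as needed.
curl -s -X POST "http://$WORKER/api/v2/packages" \
  -H "Content-Type: application/json" \
  -d '{"language": "cms_ioi", "version": "1.0.0"}' \
  || echo "install request failed -- is the worker up?"
```

Because packages land on the shared mount, this request only needs to succeed once.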
Configure Piston Endpoints for Automatic Discovery
To enable automatic worker discovery, configure the `PISTON_ENDPOINTS` environment variable. There are two ways to add this configuration:
- Export the variable: `export PISTON_ENDPOINTS=slurm`
- Add to .env file: Append `PISTON_ENDPOINTS=slurm` to your `.env` file.
You can also adjust `PISTON_MAX_REQUESTS_PER_ENDPOINT` (default: 1) to limit simultaneous requests per worker. Be mindful of potential worker overload in distributed setups.
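Put together, a minimal `.env` for the Slurm-based setup might contain just the two values discussed above:

```
PISTON_ENDPOINTS=slurm
PISTON_MAX_REQUESTS_PER_ENDPOINT=1
```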
Running Piston Workers Locally with Docker
For local development or testing, Docker provides a convenient way to run Piston workers. It isolates the environment and simplifies setup. A single worker can be launched in a Docker container; consider launching multiple workers for better scalability.
- Docker Run Command: Start the worker container. Ensure you replace `/path/to/local/packages` with your desired path for persistent package installs.
- Install the Package: Send the same package install request as in the Slurm setup, this time targeting the local worker.
- Set the Endpoints: Point `PISTON_ENDPOINTS` at your local worker(s) rather than `slurm`.
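The three steps above can be sketched as one shell session. The image name (`ghcr.io/engineer-man/piston`), port 2000, and the `cms_ioi` package name are assumptions; substitute the Piston image and IOI package your setup uses:

```shell
# Host path for persistent package installs -- replace with a real path.
PACKAGES_DIR=/path/to/local/packages
PORT=2000   # Piston's default API port

if command -v docker >/dev/null 2>&1; then
  # 1. Start the worker, persisting packages on the host.
  #    Piston's sandbox (isolate) needs privileged mode.
  docker run -d --privileged --name "piston-worker-$PORT" \
    -v "$PACKAGES_DIR:/piston/packages" \
    -p "$PORT:2000" \
    ghcr.io/engineer-man/piston

  # 2. Install the IOI package ("cms_ioi" is an assumed package name).
  curl -s -X POST "http://localhost:$PORT/api/v2/packages" \
    -H "Content-Type: application/json" \
    -d '{"language": "cms_ioi", "version": "1.0.0"}'
fi

# 3. Point discovery at the local worker instead of Slurm.
export PISTON_ENDPOINTS="http://localhost:$PORT/api/v2"
```

To run several local workers, repeat steps 1-2 with distinct ports and set `PISTON_ENDPOINTS` to a comma-separated list of their URLs.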
Conclusion: Elevate Your IOI Problem Solving with Piston Workers
By following this guide, you've unlocked the power of Piston workers. Whether you're leveraging a Slurm cluster or running local Docker containers, you can now efficiently scale your IOI problem-solving capabilities. Embrace parallel processing and optimize resource utilization for a competitive edge.