Unleash the Power of Piston: A Scalable Guide to Running IOI Problems with Slurm
Want to efficiently manage and scale your IOI problem executions? This guide dives into using Piston with Slurm for robust worker deployment and local Docker setups. Learn how to configure and manage Piston workers for optimal performance.
Scale Your IOI Workloads with Piston and Slurm
Running IOI (International Olympiad in Informatics) problems often demands significant computational resources. Piston, combined with Slurm, offers a powerful solution for distributing and managing these workloads across a cluster. This approach allows you to easily launch and scale a fleet of Piston workers.
- Enhanced Performance: Distribute tasks across multiple nodes for faster execution.
- Resource Optimization: Efficiently utilize cluster resources managed by Slurm.
- Scalability: Easily increase or decrease the number of workers based on demand, offering flexible IOI execution.
Launching a Fleet of Piston Workers on Slurm
Deploying Piston workers on a Slurm cluster is straightforward. Here's a step-by-step approach:
- Adapt the launch scripts: modify the paths in `launch_piston_workers.sh` and `launch_single_piston.sh` to match your environment.
- Execute the launch script: run it with the desired number of workers to launch.

This creates a Slurm job for each worker. The jobs are named `piston-worker-<port>`, where `<port>` is the port the worker listens on.
First-Time Setup: Installing the IOI Package
Before launching multiple workers, you need to install the IOI package on at least one worker. Afterward, the shared packages directory allows other workers in a multi-node IOI setup to leverage the same installation.
- Launch a single worker using the launch script.
- Install the IOI package: assuming the worker is running on `ip-10-53-86-146:1234`, send a package install request to its API.

With the IOI package now installed, you can launch additional workers.
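Piston exposes a package-management endpoint at `/api/v2/packages`. A hedged sketch of the install request (the `cms_ioi` package name comes from this guide; the version string is an assumption):

```shell
# Install the IOI package on the worker at ip-10-53-86-146:1234.
# The version "1.0.0" is illustrative; use the version in your package index.
curl -X POST http://ip-10-53-86-146:1234/api/v2/packages \
  -H "Content-Type: application/json" \
  -d '{"language": "cms_ioi", "version": "1.0.0"}'
```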
Configuring Piston Endpoints for Seamless Communication
To enable the main script to automatically discover the workers, set the `PISTON_ENDPOINTS` environment variable to `slurm`. Alternatively, add `PISTON_ENDPOINTS=slurm` to your `.env` file.
Consider adjusting `PISTON_MAX_REQUESTS_PER_ENDPOINT`, which limits the number of simultaneous requests per worker (the default is 1). This is only a local limit: because there is no global request limit, individual workers in a distributed setup may still experience temporary overloads.
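For example, in the shell that runs the main script (the values below simply restate the settings described above):

```shell
# Discover workers via Slurm and keep the default per-worker request limit.
export PISTON_ENDPOINTS=slurm
export PISTON_MAX_REQUESTS_PER_ENDPOINT=1
```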
Running Piston Workers with Local Docker Containers
For development or smaller-scale deployments, running Piston workers in Docker containers offers a convenient alternative. This approach simplifies dependency management and provides a consistent environment.
Launching a Single Piston Worker in Docker
A single worker can be launched with `docker run`; scale by launching additional containers on different ports. Remember to replace `/path/to/local/packages`
with the desired path for persisting package installations.
The launch command sets various environment variables that dictate the worker's behavior, including timeouts, maximum file sizes, and networking restrictions, and it also raises the default body size limits.
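A minimal sketch of such a launch, assuming the upstream `ghcr.io/engineer-man/piston` image and its standard `PISTON_*` configuration variables (the port, image tag, and limit values are assumptions to adjust):

```shell
# Sketch only: image, port, and limits are assumptions to adapt.
# Timeouts are in milliseconds, sizes in bytes.
# (Raising Piston's HTTP body size limits, mentioned above, requires
# patching its server configuration and is not shown here.)
docker run -d \
  --name piston-worker-2000 \
  -p 2000:2000 \
  -v /path/to/local/packages:/piston/packages \
  -e PISTON_COMPILE_TIMEOUT=60000 \
  -e PISTON_RUN_TIMEOUT=60000 \
  -e PISTON_OUTPUT_MAX_SIZE=1000000000 \
  -e PISTON_MAX_FILE_SIZE=1000000000 \
  -e PISTON_DISABLE_NETWORKING=true \
  ghcr.io/engineer-man/piston
```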
Installing the Package in the Docker Container
After launching the container, install the `cms_ioi` package through the worker's package API.
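A hedged sketch of the install request, assuming the container's API is published on local port 2000 (port and version are assumptions):

```shell
# Install cms_ioi on the local Docker worker.
curl -X POST http://localhost:2000/api/v2/packages \
  -H "Content-Type: application/json" \
  -d '{"language": "cms_ioi", "version": "1.0.0"}'
```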
Setting Piston Endpoints for Docker Workers
Finally, configure the `PISTON_ENDPOINTS` environment variable as a comma-separated list of your Docker workers' addresses.
Ensure each endpoint corresponds to a running Piston worker.
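The exact URL format expected by the main script is an assumption here; a small shell sketch building such a comma-separated list for workers on consecutive local ports:

```shell
# Build PISTON_ENDPOINTS for Docker workers on ports 2000-2002.
# The /api/v2 suffix is an assumption; match what your main script expects.
HOST=localhost
ENDPOINTS=""
for port in 2000 2001 2002; do
  ENDPOINTS="${ENDPOINTS:+$ENDPOINTS,}http://$HOST:$port/api/v2"
done
export PISTON_ENDPOINTS="$ENDPOINTS"
echo "$PISTON_ENDPOINTS"
```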
By following these steps, you can effectively leverage Piston with Slurm and Docker to manage and scale your IOI problem executions, optimizing performance and resource utilization.