Supercharge Your IOI Problem Solving with Scalable Piston Workers
Unlock peak performance for your IOI (International Olympiad in Informatics) problem-solving workflows using Piston workers! This guide provides a comprehensive, scalable solution for leveraging Piston, whether on a Slurm cluster or with local Docker containers. Learn how to efficiently manage, configure, and deploy Piston workers to optimize your code execution and maximize your competitive edge.
Unleash the Power of Piston Workers
- Scalability: Effortlessly handle complex IOI problems by distributing the workload across multiple workers.
- Flexibility: Seamlessly deploy Piston workers on Slurm clusters or local Docker containers, adapting to your infrastructure.
- Efficiency: Optimize code execution with dedicated workers configured to your specific needs.
Piston Workers on Slurm: Scale Your Computations
Harness the power of a Slurm cluster to launch a fleet of Piston workers, providing a robust and scalable environment for your IOI problem-solving needs.
Setting up Slurm-Based Piston Workers
- Configure Launch Scripts: Adapt the paths in `launch_piston_workers.sh` and `launch_single_piston.sh` to match your environment.
- Launch the Workers: Execute `slurm/piston/launch_piston_workers.sh <number of workers>` to start the desired number of workers. Each worker runs as a separate Slurm job named `piston-worker-<port>`, where `<port>` is the worker's unique listening port.
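The launch step can be sketched as follows; the script path comes from the section above, and the worker count and the `squeue` check are illustrative:

```shell
# Launch 16 Piston workers as separate Slurm jobs
slurm/piston/launch_piston_workers.sh 16

# Verify the jobs are running; each shows up as piston-worker-<port>
squeue --me | grep piston-worker
```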
First-Time Setup: Installing the IOI Package
Before scaling up, ensure the IOI package is installed on your workers. This only needs to be done once:
- Launch a Single Worker: Run `slurm/piston/launch_piston_workers.sh 1`.
- Install the IOI Package: Assuming the worker is running on `ip-10-53-86-146:1234`, send the package install request to that worker.
Subsequent workers will automatically have the package installed due to the shared mounted packages directory.
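The install request uses Piston's package API (`POST /api/v2/packages`). A sketch, assuming the worker host `ip-10-53-86-146:1234` from the step above; the package name `cms_ioi` and version are placeholders to adjust for your IOI runtime package:

```shell
# Install the IOI runtime package on one worker; it lands in the shared
# packages directory, so all other workers pick it up automatically.
curl -X POST http://ip-10-53-86-146:1234/api/v2/packages \
  -H "Content-Type: application/json" \
  -d '{"language": "cms_ioi", "version": "1.0.0"}'
```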
Automating Worker Discovery
To enable automatic discovery of your Slurm-based Piston workers by the main script, export the environment variable `PISTON_ENDPOINTS=slurm`. Adding `PISTON_ENDPOINTS=slurm` to your `.env` file achieves the same result.
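In a shell session, that is simply:

```shell
# Tell the main script to discover Piston workers via Slurm
export PISTON_ENDPOINTS=slurm
```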
Fine-Tuning Request Limits
Control the number of simultaneous requests each worker handles using the `PISTON_MAX_REQUESTS_PER_ENDPOINT` environment variable (default is 1). Note that this is a local, per-process limit: in distributed setups where multiple processes target the same worker, that worker can still be overwhelmed.
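For example, to raise the per-endpoint limit from the default of 1:

```shell
# Allow up to 2 concurrent requests per worker endpoint (default is 1)
export PISTON_MAX_REQUESTS_PER_ENDPOINT=2
```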
Piston Workers in Docker: Local Development & Testing
For local development and testing, Docker containers provide a convenient and isolated environment for running Piston workers.
Launching a Docker-Based Piston Worker
Launch a single worker in a Docker container, mounting a local directory for packages.
- Important: Replace `/path/to/local/packages` with the desired path for persisting package installations.
- Scalability: Launch multiple workers (each on its own port) for improved concurrency.
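A sketch of the launch command, assuming the upstream Piston image (`ghcr.io/engineer-man/piston`, which serves its API on port 2000 by default); the container name and host port are illustrative:

```shell
# Run one Piston worker, persisting installed packages on the host
docker run -d \
  --name piston-worker \
  -v /path/to/local/packages:/piston/packages \
  -p 2000:2000 \
  ghcr.io/engineer-man/piston
```

For multiple workers, repeat with a different container name and host port (e.g. `-p 2001:2000`).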
Installing the Package in Docker
Install the IOI package on the Docker-based worker, just as in the Slurm setup, by sending a package install request to the container's API port.
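A sketch using Piston's package API against a worker on `localhost:2000`; the package name `cms_ioi` and version are placeholders to adjust for your IOI runtime package:

```shell
# Install the IOI runtime package inside the Docker-based worker
curl -X POST http://localhost:2000/api/v2/packages \
  -H "Content-Type: application/json" \
  -d '{"language": "cms_ioi", "version": "1.0.0"}'
```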
Configuring Endpoints
Finally, tell the main script where to find the Piston endpoints by setting the `PISTON_ENDPOINTS` environment variable to your workers' API URLs. Ensure these match your Docker setup (host ports and number of workers).
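For example, with two Docker workers on ports 2000 and 2001 (the comma-separated multi-endpoint format is an assumption; adjust to what your main script expects):

```shell
# Point the main script at the Docker-based workers' API endpoints
export PISTON_ENDPOINTS="http://localhost:2000/api/v2,http://localhost:2001/api/v2"
```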
Streamline Your IOI Workflow Today
By implementing these strategies for deploying and managing Piston workers, you can significantly enhance your IOI problem-solving capabilities. Scale your computations, optimize resource utilization, and gain a competitive advantage.