
Argonne Leadership Computing Facility

Running a Model/Program

Jobs can be launched from any GroqRack node or from a login node.
For long-running jobs, if you expect to lose your internet connection for any reason, we suggest logging into a specific node and using either screen or tmux to create a persistent command-line session. For details use:

man screen
# or
man tmux
or online man pages: screen, tmux
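As a sketch, a persistent tmux session might look like the following (the hostname `groq-r01-gn-01` is a placeholder; substitute a real GroqRack node name):

```shell
# Log into a specific node (placeholder hostname) so you can find the session again later.
ssh groq-r01-gn-01

# Create a session named "groq", or re-attach to it if it already exists (-A).
tmux new-session -A -s groq

# ... start your long-running job inside the session ...
# Detach with Ctrl-b d; the job keeps running. Re-attach later with:
tmux attach -t groq
```

If the connection drops, the session and any job running in it survive; re-attaching from a new login restores your terminal where you left it.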

Running jobs on Groq nodes


GroqFlow is the simplest way to port inference applications to Groq. The GroqFlow GitHub repo includes many sample applications.
See GroqFlow.

Clone the GroqFlow GitHub repo

Clone the GroqFlow GitHub repo and change your current directory to the clone:

cd ~/
git clone https://github.com/groq/groqflow.git
cd groqflow

GroqFlow conda environments

Create a groqflow conda environment and activate it. Follow the instructions in the Virtual Environments
section. Note: similar install instructions are in ~/groqflow/docs/ and the GroqFlow™ Installation Guide.
The conda environment should be reinstalled whenever new GroqFlow code is pulled from the GroqFlow GitHub repo; with the groqflow conda environment activated, redo just the pip install steps.
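The refresh cycle might look like the following sketch (the exact pip commands are in the GroqFlow install docs; the editable install shown here is an assumption):

```shell
cd ~/groqflow
git pull                      # fetch the latest GroqFlow code
conda activate groqflow       # the existing groqflow environment
# Redo just the pip install steps from the install docs, for example:
pip install -e .              # assumption: editable install of the repo
```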

Running a groqflow sample

Each GroqFlow sample directory in the ~/groqflow/proof_points tree has a README describing the sample and how to run it.

Optionally activate your GroqFlow conda environment

conda activate groqflow

Run a sample using PBS in batch mode

See Job Queueing and Submission for more information about the PBS job scheduler.

Create a script (e.g. run_minilmv2.sh) with the following contents. It assumes that conda was installed in the default location. The conda initialize section can also be copied from your ~/.bashrc if the conda installer was allowed to add it.

# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$("${HOME}/miniconda3/bin/conda" 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "${HOME}/miniconda3/etc/profile.d/conda.sh" ]; then
        . "${HOME}/miniconda3/etc/profile.d/conda.sh"
    else
        export PATH="${HOME}/miniconda3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<
conda activate groqflow
cd ~/groqflow/proof_points/natural_language_processing/minilm
pip install -r requirements.txt
python minilmv2.py

Then run the script as a batch job with PBS:

qsub -l groq_accelerator=1 run_minilmv2.sh

Note: the number of chips used by a model can be found in the model's compile cache directory after it is compiled, e.g.:

$ grep num_chips_used ~/.cache/groqflow/minilmv2/minilmv2_state.yaml
num_chips_used: 1

The GroqFlow proof-point models use 1, 2, or 4 chips.
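As an illustration, a hypothetical helper could read the chip count out of the cache file and use it to size the PBS resource request (the path and script name follow the examples above):

```shell
# Read the chip count recorded in the model's compile-cache state file.
CACHE="$HOME/.cache/groqflow/minilmv2/minilmv2_state.yaml"
NCHIPS=$(awk '/num_chips_used/ {print $2}' "$CACHE" 2>/dev/null)

echo "model uses ${NCHIPS:-?} chip(s)"
# Then request a matching number of accelerators, e.g.:
#   qsub -l groq_accelerator="$NCHIPS" run_minilmv2.sh
```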

If your ~/.bashrc initializes conda, an alternative to copying the conda initialization block into your execution scripts is to comment out this section in your ~/.bashrc:

# If not running interactively, don't do anything
case $- in
    *i*) ;;
      *) return;;
esac

so that it becomes:

## If not running interactively, don't do anything
#case $- in
#    *i*) ;;
#      *) return;;
#esac
Then the execution script becomes:

conda activate groqflow
cd ~/groqflow/proof_points/natural_language_processing/minilm
pip install -r requirements.txt
python minilmv2.py
Job status can be tracked with qstat:

$ qstat
Job id            Name             User              Time Use S Queue
----------------  ---------------- ----------------  -------- - -----
3084.groq-r01-co* run_minilmv2     user              0 R workq           
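For full details on a single job (resources requested, output paths, and so on), qstat's full-status display can be used; the job id here follows the listing above:

```shell
qstat -f 3084   # -f prints the full status of the given job
```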

By default, output goes to two files whose names carry the job id as a suffix: one holds the job's standard output, the other its standard error.

$ ls -la run_minilmv2.sh.*
-rw------- 1 user users   448 Oct 16 18:40 run_minilmv2.sh.e3084
-rw------- 1 user users 50473 Oct 16 18:42 run_minilmv2.sh.o3084

Run a sample using PBS in interactive mode

An alternative is to use an interactive PBS job. This may be useful when debugging new or changed code. Here is an example that starts a 24-hour interactive job:

qsub -I -V -l walltime=24:00:00 -l groq_accelerator=2

Then activate your groqflow environment:

conda activate groqflow

and run your Python scripts as usual.
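Inside the interactive job, the workflow mirrors the batch script; a sketch (the minilmv2.py entry-point name is an assumption -- check the proof point's README):

```shell
conda activate groqflow
cd ~/groqflow/proof_points/natural_language_processing/minilm
pip install -r requirements.txt   # per-sample dependencies
python minilmv2.py                # assumed sample entry point
```

Because the session is interactive, you can edit code and re-run the script repeatedly without resubmitting a batch job.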