Introduction
Tutorial: Analysing your first gravitational wave candidate
New features in asimov 0.7.0 that make this easier:
- The new asimov-gwdata plugin with smart project setup commands
- Improved dependency resolution for complex multi-stage analyses
- Better Python API for programmatic project creation
- Enhanced HTML reporting with workflow visualization
- State machine-based monitoring for more robust job tracking
Prerequisites
Set Up Your Python Environment
We recommend using conda (a package and environment manager) to isolate your asimov installation. If you don't have conda installed yet, download miniconda (a lightweight version of Anaconda).
Create a new conda environment:
conda create -n gw-analysis python=3.11
conda activate gw-analysis
This creates a separate Python environment called gw-analysis so you don't affect other projects on your computer. Always make sure to activate this environment before working with asimov.
Install the IGWN Software Stack
The International Gravitational-Wave Network (IGWN) provides a curated conda environment with all the gravitational wave analysis tools pre-configured:
# Add the IGWN conda channel
conda config --add channels conda-forge
conda config --add channels igwn
# Install the IGWN environment
conda install -c conda-forge -c igwn igwn-software
This installs:
- All required gravitational wave analysis tools
- The necessary data analysis libraries
- Git and other utilities
This step may take 5-15 minutes depending on your internet connection.
Install Asimov and the GWData Plugin
Since asimov 0.7.0 is currently in pre-release, we need to use pip with the pre-release flag:
# Install asimov 0.7.0 (pre-release)
pip install --pre 'asimov>=0.7.0a1'
# Install the gwdata plugin (0.7.0-alpha)
pip install --pre 'asimov-gwdata>=0.7.0a1'
After the official release, this simplifies to:
pip install asimov asimov-gwdata
The --pre flag is required to download pre-release versions. Without it, pip won't find the right version.
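To confirm that pip really picked up the pre-release versions, you can ask it what was installed (the exact version strings will change once the final release is out):
pip show asimov asimov-gwdata | grep -E '^(Name|Version)'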
Configure Git
Asimov uses git to track your project. Configure it globally if you haven't already:
git config --global user.email "you@example.com"
git config --global user.name "Your Name"
Replace the email and name with your own values.
Check Your HTCondor Installation
This tutorial assumes HTCondor (a job scheduler) is already installed and running on your system. Test it:
which condor_q
condor_q
If these commands don't work, refer to the Single-Machine Setup appendix below to install minicondor.
Verify Your Complete Setup
Make sure everything is installed correctly:
asimov --version
asimov gw events list --catalog gwtc-2-1 | head -5
You should see asimov's version number and a short list of events.
If you see errors, make sure you've:
- Activated the conda environment: conda activate gw-analysis
- Installed all packages successfully
- Configured git globally
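If it's still not clear where things went wrong, a few standard checks can narrow it down (these use only conda and pip, nothing asimov-specific):
conda env list                 # the active environment is marked with an asterisk
which asimov                   # should point inside the gw-analysis environment
pip list | grep -i asimov      # should list both asimov and asimov-gwdata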
Step 1: Explore Available Events and Analyses
Browse Available Events
Before setting up your project, let's see what's available. The asimov-gwdata plugin provides convenient commands to discover events and analysis configurations:
# List all available events from the GWTC-2.1 catalogue
asimov gw events list --catalog gwtc-2-1
# Get detailed information about GW150914
asimov gw events show GW150914_095045
This will show you:
- The exact time the signal was detected
- Which detectors (LIGO Hanford and LIGO Livingston) observed it
- Recommended priors for parameter estimation
- Data quality information
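Assuming these subcommands write plain text to standard output (as the examples above suggest), they combine naturally with ordinary shell tools. For instance, the grep pattern and output file name below are just illustrations:
asimov gw events list --catalog gwtc-2-1 | grep GW1509
asimov gw events show GW150914_095045 > GW150914_095045-info.txt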
Browse Available Analysis Templates
Now let's see what analysis configurations are available:
# List all available analysis templates
asimov gw analyses list
# Show details about the production configuration
asimov gw analyses show production-default
Step 2: Set Up Your Project
Create Project Directory
First, create a directory for your project and move into it:
mkdir gw150914-tutorial
cd gw150914-tutorial
Choose Your Setup Method
Now you have two options for setting up your analysis: the quick project setup commands provided by the asimov-gwdata plugin, or a manual setup using asimov's standard commands.
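If you take the manual route, a minimal sketch looks like this. It assumes asimov's standard init and apply commands; the blueprint file names are placeholders for curated event and analysis blueprints (for example those you explored in Step 1), and the asimov-gwdata quick setup may wrap these steps for you:
# Create a new asimov project in the current directory
asimov init "GW150914 tutorial"
# Apply an event blueprint, then analysis blueprints for that event (placeholder file names)
asimov apply -f GW150914_095045.yaml
asimov apply -f bayeswave.yaml -e GW150914_095045
asimov apply -f bilby.yaml -e GW150914_095045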
Step 3: Understand Your Project Structure
Review Your Project Layout
Your new project has a carefully organized structure:
gw150914-tutorial/
├── .asimov/ # Asimov's internal directory
│ └── ledger.yml # The project database (git-tracked)
├── checkouts/ # Working directories for each analysis
│ └── GW150914_095045/ # Event directory
│ ├── bayeswave/ # Bayeswave on-source PSD job
│ └── bilby/ # Bilby parameter estimation job
├── results/ # Where results will be stored
├── working/ # HTCondor job submission files
└── README.md # Project information
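Everything asimov knows about the project lives in the ledger, which is plain YAML, so it's safe to take a quick look (though it's best to modify it through asimov commands rather than by hand):
head -n 20 .asimov/ledger.yml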
Check Your Analysis Status
To see the current status of your analyses, run:
asimov report status
You should see something like:
GW150914_095045
Analyses
- Prod0[bayeswave] ready
- Prod1[bilby] ready (waiting for Prod0)
Notice how Prod1 (bilby) is marked as "waiting"—asimov's improved dependency resolution (new in 0.7.0) knows that bilby needs the power spectral density (PSD) estimates from bayeswave before it can start.
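If you're curious how that dependency is recorded, you can search the project ledger for it. The exact key names depend on the asimov version; in recent releases each analysis lists the analyses it needs, so a search along these lines should surface the link from Prod1 back to Prod0:
grep -B 2 -A 2 -i "needs" .asimov/ledger.yml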
Step 4: Build and Submit Your Jobs
Build Configuration Files
Create all the pipeline-specific configuration files:
asimov manage build
You'll see output like:
● Working on GW150914_095045
Working on production Prod0
Production config Prod0 created.
Look in the working/ directory—you'll find HTCondor job description files here.
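The layout under working/ depends on the pipeline, but standard tools will give you an overview and locate the HTCondor submit and DAG files (the file extensions here are typical rather than guaranteed):
ls working/
find working -name "*.sub" -o -name "*.dag"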
Submit Jobs to the Scheduler
Now submit the first analysis to HTCondor:
asimov manage submit
This submits the bayeswave job (Prod0) to the scheduler. The bilby job (Prod1) won't submit yet because it's waiting for bayeswave to complete—that's the dependency system in action.
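You can double-check that the job has reached the scheduler with HTCondor's own tools:
condor_q                 # summary view, grouped by batch
condor_q -nobatch        # one line per job, with individual job IDs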
Step 5: Monitor Your Analysis
Now we need to watch our job. Asimov provides several ways to do this:
One-time Status Check
For a quick status check:
asimov monitor
This checks the job status once and shows you what's happening:
GW150914_095045
- Prod0[bayeswave]
● Running (HTCondor ID: 12345)
- Prod1[bilby]
● Waiting (ready to run after Prod0 completes)
Continuous Monitoring (Recommended)
For longer jobs, set up automated monitoring that will:
- Check job status every 15 minutes
- Automatically submit dependent jobs when their dependencies complete
- Run post-processing when analyses finish
- Provide real-time HTML reports (new in 0.7.0)
asimov start
You'll see:
● Asimov is running (process ID: 54321)
This starts a background process. You can check the logs at any time with:
tail -f asimov.log
And stop monitoring whenever you're ready:
asimov stop
For production analyses, we recommend leaving monitoring running in a screen or tmux session so it stays active even if you disconnect.
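For example, with tmux:
tmux new -s asimov       # start a named session
asimov start             # launch monitoring inside it
# detach with Ctrl-b then d; reattach later with:
tmux attach -t asimov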
Step 6: Understanding the Analysis
While your job is running, let’s understand what’s happening:
The Multi-Stage Workflow
Your project has two analyses working together:
- Bayeswave (Prod0): Produces an estimate of the power spectral density (the noise characteristics) of the detector during the GW150914 observation. This typically takes hours to days.
- Bilby (Prod1): Uses the PSD from Bayeswave to perform the actual parameter estimation—inferring properties like the masses and spins of the merging black holes. This uses Bayesian inference to calculate the probability of different parameters given the observed signal.
The state machine monitoring (new in 0.7.0) automatically handles submitting Prod1 once Prod0 completes.
Tracking Progress with Reports
Once monitoring is running, asimov generates HTML reports showing:
- Current job status
- Workflow dependency graphs (new in 0.7.0)
- Historical information
- Any errors or warnings
These are stored in the results/ directory.
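If you're working on a remote machine without a desktop, one simple way to browse these reports is Python's built-in web server (any free port will do; this assumes the reports live under results/ as described above):
python -m http.server 8000 --directory results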
Step 7: When Jobs Complete
After your jobs finish (this can take days or weeks for production-quality analyses!), asimov takes care of the final steps for you.
Automatic Post-Processing
Asimov automatically:
- Runs post-processing with PESummary to generate summary statistics and plots
- Moves results to the results/ directory
- Generates comprehensive HTML reports with visualization of the posterior distributions
- Marks analyses as complete
Examine Your Results
You can then examine the results:
ls results/GW150914_095045/
This directory contains all your output files, plots, and reports.
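The exact file names depend on the pipelines and on PESummary's output layout, so the easiest approach is to browse for the report pages and posterior sample files:
find results/GW150914_095045 -name "*.html" | head
find results/GW150914_095045 -name "*posterior*" | head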
Next Steps
Appendix: Single-Machine Setup (Optional)
If you’re working on a single machine without an existing HTCondor installation, you can use HTCondor’s mini version, minicondor, which is designed for personal workstations.
Install Minicondor
HTCondor provides pre-built installers. Visit htcondor.readthedocs.io and follow the installation guide for your operating system.
On Linux (Ubuntu/Debian):
# Add HTCondor repository
wget -qO - https://research.cs.wisc.edu/htcondor/yum/HTCondor/repo.key | sudo apt-key add -
echo "deb [arch=amd64] https://research.cs.wisc.edu/htcondor/yum/HTCondor/ubuntu focal main" | \
sudo tee /etc/apt/sources.list.d/htcondor.list
# Install minicondor
sudo apt-get update
sudo apt-get install minicondor
On macOS:
# Using Homebrew
brew tap htcondor/htcondor
brew install htcondor
For other systems, follow the official installation guide.
Configure Minicondor
After installation, start the HTCondor daemon:
# Start the HTCondor daemon
sudo /etc/init.d/condor start
# Or on systems using systemd
sudo systemctl start condor
For personal workstations, limit resource usage by editing /etc/condor/condor_config.d/personal.conf:
NUM_SLOTS = 1
NUM_SLOTS_TYPE_1 = 1
This ensures only one job runs at a time on your machine.
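After editing the configuration, tell the running daemons to re-read it (or simply restart the condor service):
sudo condor_reconfig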
Verify Installation
Test that HTCondor is working:
condor_q # List jobs (should be empty)
condor_status # Show available slots
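To exercise the full submission path, you can also queue a throw-away test job; this is a minimal submit description for a 30-second sleep, nothing gravitational-wave specific:
cat > test.sub <<'EOF'
executable = /bin/sleep
arguments  = 30
output     = test.out
error      = test.err
log        = test.log
queue
EOF
condor_submit test.sub
condor_q        # the job should appear, run for about 30 seconds, then leave the queue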
Now you can follow the main tutorial above!
Important Notes on Single-Machine Use
Advantages
- Learn asimov on your personal computer
- Good for testing and development
- Useful for small analyses
Limitations
- Jobs run sequentially on limited resources
- Parameter estimation analyses will be very slow on most personal computers
- Not suitable for production catalogue analyses
We recommend using institutional computing clusters with HTCondor; work with your institution's computing facility to set up access.
Future Improvements
We’re actively working to make asimov easier to use without requiring HTCondor. Upcoming features will include:
- Direct cloud computing support (AWS, Google Cloud, etc.)
- Local multiprocessing support
- Slurm scheduler integration
- Better containerization
Stay tuned for updates!