This repository contains the workflow used to find and characterize the HI sources in the data cube of the SKA Data Challenge 2. It was developed to process a simulated [SKA data cube](https://sdc2.astronomers.skatelescope.org/sdc2-challenge/data), but it can be adapted to clean HI data cubes from other radio observatories. The workflow is managed and executed with the Snakemake workflow management system. It uses [spectral-cube](https://spectral-cube.readthedocs.io/en/latest/), built on the [Dask](https://dask.org/) parallelization library and the [Astropy](https://www.astropy.org/) suite, to divide the large cube into smaller, overlapping pieces. On each subcube, we run [SoFiA-2](https://github.com/SoFiA-Admin/SoFiA-2) to mask the subcube, find sources, and characterize their properties. Finally, the individual catalogs are cleaned and concatenated into a single catalog, and duplicates from the overlapping regions are removed. Some diagnostic plots are produced using Jupyter notebooks. The documentation can be found on the [Documentation page](https://hi-friends-sdc2.readthedocs.io/en/latest/index.html). The workflow and its results can be cited via the [Zenodo record](https://doi.org/10.5281/zenodo.5167659).
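
As a rough illustration of the cube-splitting step, the sketch below shows how spectral-cube with Dask can be used to cut an overlapping spatial subcube out of a large FITS cube. It is a minimal example under assumed conditions: the file names, the overlap size, and the quadrant layout are illustrative and do not reproduce the exact splitting code used in this workflow.

```python
# Minimal sketch of splitting a large HI cube into overlapping spatial pieces
# with spectral-cube and Dask. File names and overlap size are assumptions.
from spectral_cube import SpectralCube

# Open the full cube lazily with Dask so it is not loaded into memory at once
cube = SpectralCube.read("sky_full.fits", use_dask=True)

overlap = 40                      # pixels of overlap between neighbouring subcubes (assumed)
n_chan, ny, nx = cube.shape       # axes are (spectral, y, x)
half_y, half_x = ny // 2, nx // 2

# Extract one of four overlapping spatial quadrants, keeping the full spectral axis
subcube = cube[:, 0:half_y + overlap, 0:half_x + overlap]
subcube.write("subcube_0.fits", overwrite=True)
```

Each subcube written this way can then be processed independently (for example by SoFiA-2), and the overlap region is what later allows duplicate detections to be identified and removed when the per-subcube catalogs are merged.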