Workflows

1220 workflows visible, out of a total of 1305
Stable


Type: Nextflow

Creators: Damon-Lee Pointon, William Eagles, Ying Sims

Submitter: Damon-Lee Pointon

No description specified

Type: Galaxy

Creators: None

Submitter: Markus Konkol

Stable

Calculates the Fibonacci series up to a specified length.

Type: COMPSs

Creator: Uploaded under the guidance of Raül Sirvent

Submitter: Ashish Bhawel
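
The listing does not include the workflow's source; as an illustration only, a Fibonacci series on the COMPSs platform is typically written as PyCOMPSs tasks along the following lines (function names and task granularity are assumptions, not the uploaded code):

# Minimal PyCOMPSs sketch (illustrative): each addition step is a COMPSs task.
from pycompss.api.task import task
from pycompss.api.api import compss_wait_on

@task(returns=1)
def add(a, b):
    # One task per Fibonacci step; COMPSs schedules tasks across the available workers.
    return a + b

def fibonacci(length):
    # Build the series up to `length` terms; values remain futures until synchronised.
    series = [0, 1][:length]
    while len(series) < length:
        series.append(add(series[-2], series[-1]))
    return compss_wait_on(series)

if __name__ == "__main__":
    print(fibonacci(10))

Such a script would be launched with runcompss (e.g. runcompss fibonacci.py) so that the decorated tasks are executed by the COMPSs runtime.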

Stable

Name: Matmul GPU Case 1 Cache-ON
Contact Person: [email protected]
Access Level: public
License Agreement: Apache2
Platform: COMPSs
Machine: Minotauro-MN4

Matmul running on the GPU leveraging COMPSs GPU Cache for deserialization speedup. Launched using 32 GPUs (16 nodes). Performs C = A @ B, where:
A: shape (320, 56_900_000), block_size (10, 11_380_000)
B: shape (56_900_000, 10), block_size (11_380_000, 10)
C: shape (320, 10), block_size ...

Type: COMPSs

Creators: Cristian Tatu, The Workflows and Distributed Computing Team (https://www.bsc.es/discover-bsc/organisation/scientific-structure/workflows-and-distributed-computing/)

Submitter: Cristian Tatu

DOI: 10.48546/workflowhub.workflow.798.1
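
As an illustration of the computation described above (and its Cache-OFF counterpart below), a blocked C = A @ B with these block sizes can be sketched with dislib ds-arrays roughly as follows. This assumes dislib's distributed-array API (including the @ operator for distributed matrix multiplication) and uses random data in place of the real inputs:

# Illustrative dislib sketch of C = A @ B with the block sizes listed above.
# Assumes a COMPSs/dislib environment (launched with runcompss); data here is random.
import dislib as ds

def main():
    # A: (320, 56_900_000) split into (10, 11_380_000) blocks
    a = ds.random_array((320, 56_900_000), (10, 11_380_000))
    # B: (56_900_000, 10) split into (11_380_000, 10) blocks
    b = ds.random_array((56_900_000, 10), (11_380_000, 10))
    # Distributed blocked multiplication; each block product becomes a COMPSs task.
    # Whether the GPU cache is used is a COMPSs runtime setting, not application code.
    c = a @ b
    print(c.shape)  # (320, 10)

if __name__ == "__main__":
    main()

The Cache-ON and Cache-OFF entries describe the same application; they differ only in whether the COMPSs GPU Cache is enabled at the runtime level.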

Stable

Name: Matmul GPU Case 1 Cache-OFF
Contact Person: [email protected]
Access Level: public
License Agreement: Apache2
Platform: COMPSs 3.3
Machine: Minotauro-MN4

Matmul running on the GPU without Cache. Launched using 32 GPUs (16 nodes). Performs C = A @ B, where:
A: shape (320, 56_900_000), block_size (10, 11_380_000)
B: shape (56_900_000, 10), block_size (11_380_000, 10)
C: shape (320, 10), block_size (10, 10)
Total dataset size 291 ...

Type: COMPSs

Creators: Cristian Tatu, The Workflows and Distributed Computing Team (https://www.bsc.es/discover-bsc/organisation/scientific-structure/workflows-and-distributed-computing/)

Submitter: Cristian Tatu

DOI: 10.48546/workflowhub.workflow.797.1

Stable

Name: K-Means GPU Cache OFF
Contact Person: [email protected]
Access Level: public
License Agreement: Apache2
Platform: COMPSs
Machine: Minotauro-MN4

K-Means running on GPUs. Launched using 32 GPUs (16 nodes). Parameters used: K=40 and 32 blocks of size (1_000_000, 1200). It creates a block for each GPU. Total dataset shape is (32_000_000, 1200). dislib version: 0.9

Average task execution time: 194 seconds

Type: COMPSs

Creators: Cristian Tatu, The Workflows and Distributed Computing Team (https://www.bsc.es/discover-bsc/organisation/scientific-structure/workflows-and-distributed-computing/)

Submitter: Cristian Tatu

DOI: 10.48546/workflowhub.workflow.799.1
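
For reference, the dislib K-Means run described above (and its Cache-ON counterpart below) corresponds roughly to the sketch here, with random data standing in for the real dataset:

# Illustrative dislib K-Means sketch matching the listed parameters
# (K=40, 32 row-blocks of shape (1_000_000, 1200)); data here is random.
import dislib as ds
from dislib.cluster import KMeans

def main():
    # 32 blocks of 1_000_000 samples x 1200 features -> (32_000_000, 1200) in total,
    # so each GPU worker holds one block.
    x = ds.random_array((32_000_000, 1200), (1_000_000, 1200))
    km = KMeans(n_clusters=40)
    km.fit(x)  # block-wise fitting runs as COMPSs tasks on the workers

if __name__ == "__main__":
    main()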

Stable

Name: K-Means GPU Cache ON
Contact Person: [email protected]
Access Level: public
License Agreement: Apache2
Platform: COMPSs
Machine: Minotauro-MN4

K-Means running on the GPU leveraging COMPSs GPU Cache for deserialization speedup. Launched using 32 GPUs (16 nodes). Parameters used: K=40 and 32 blocks of size (1_000_000, 1200). It creates a block for each GPU. Total dataset shape is (32_000_000, 1200). dislib version: 0.9

Average task execution time: 16 seconds

Type: COMPSs

Creators: Cristian Tatu, The Workflows and Distributed Computing Team (https://www.bsc.es/discover-bsc/organisation/scientific-structure/workflows-and-distributed-computing/)

Submitter: Cristian Tatu

DOI: 10.48546/workflowhub.workflow.800.1

Stable

Name: Dislib Distributed Training - Cache ON
Contact Person: [email protected]
Access Level: public
License Agreement: Apache2
Platform: COMPSs
Machine: Minotauro-MN4

PyTorch distributed training of a CNN on GPUs, leveraging COMPSs GPU Cache for deserialization speedup. Launched using 32 GPUs (16 nodes). Dataset: ImageNet. Versions: dislib 0.9, PyTorch 1.7.1+cu101

Average task execution time: 36 seconds

Type: COMPSs

Creators: Cristian Tatu, The Workflows and Distributed Computing Team (https://www.bsc.es/discover-bsc/organisation/scientific-structure/workflows-and-distributed-computing/)

Submitter: Cristian Tatu

DOI: 10.48546/workflowhub.workflow.802.1
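
The dislib-0.9 training API itself is not shown on this page; the sketch below only illustrates the general data-parallel pattern the two training entries describe (one training task per GPU worker, followed by parameter averaging). The toy model, function names and averaging step are assumptions, not the workflow's code:

# Generic data-parallel training sketch (illustrative; NOT the dislib-0.9 API).
import torch
import torch.nn as nn
from pycompss.api.task import task
from pycompss.api.api import compss_wait_on

def make_model():
    # Tiny CNN standing in for the ImageNet model used by the workflow.
    return nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU(),
                         nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))

@task(returns=1)
def train_shard(state, images, labels):
    # Each COMPSs task trains one replica on its data shard (on a GPU worker if available).
    model = make_model()
    model.load_state_dict(state)
    opt = torch.optim.SGD(model.parameters(), lr=0.01)
    loss = nn.functional.cross_entropy(model(images), labels)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return model.state_dict()

def average_states(states):
    # Average parameters across replicas after each training round.
    return {k: sum(s[k] for s in states) / len(states) for k in states[0]}

if __name__ == "__main__":
    state = make_model().state_dict()
    shards = [(torch.randn(8, 3, 64, 64), torch.randint(0, 10, (8,))) for _ in range(4)]
    results = compss_wait_on([train_shard(state, x, y) for x, y in shards])
    state = average_states(results)

In the actual workflows the distributed execution is handled by dislib 0.9 on top of COMPSs, and the Cache-ON and Cache-OFF entries again differ only in the runtime's GPU cache setting.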

Stable

Name: Dislib Distributed Training - Cache OFF
Contact Person: [email protected]
Access Level: public
License Agreement: Apache2
Platform: COMPSs
Machine: Minotauro-MN4

PyTorch distributed training of a CNN on GPUs. Launched using 32 GPUs (16 nodes). Dataset: ImageNet. Versions: dislib 0.9, PyTorch 1.7.1+cu101

Average task execution time: 84 seconds

Type: COMPSs

Creators: Cristian Tatu, The Workflows and Distributed Computing Team (https://www.bsc.es/discover-bsc/organisation/scientific-structure/workflows-and-distributed-computing/)

Submitter: Cristian Tatu

DOI: 10.48546/workflowhub.workflow.801.1

Stable

HiC scaffolding pipeline

Snakemake pipeline for scaffolding a genome from Hi-C reads using yahs.

Prerequisites

This pipeline has been tested with Snakemake v7.32.4 and requires conda to install the required tools. To run the pipeline, use the command:

snakemake --use-conda --cores N

where N is the number of cores to use. A set of configuration and run scripts is provided for execution on a Slurm queueing system. After configuring the cluster.json file, run:

./run_cluster ...
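
The provided run_cluster script and cluster.json are not reproduced here; purely as an illustration, a Snakemake v7 Slurm submission driven by a cluster configuration file typically looks something like the following (the resource keys are assumptions that depend on the actual cluster.json contents):

snakemake --use-conda --jobs 50 \
    --cluster-config cluster.json \
    --cluster "sbatch --partition={cluster.partition} --cpus-per-task={cluster.threads} --time={cluster.time}"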

Type: Snakemake

Creator: Tom Brown

Submitter: Tom Brown

DOI: 10.48546/workflowhub.workflow.796.2
