Introduction


Spfy: a platform for predicting subtypes from E. coli whole-genome sequences and building graph data for population-wide comparative analyses.

Live: https://lfz.corefacility.ca/superphy/spfy/

screenshot of the results page

Use:

  1. Install Docker (and Docker Compose separately if you're on Linux; see link). Mac/Windows users have Compose bundled with Docker Engine.
  2. git clone --recursive https://github.com/superphy/spfy.git
  3. cd spfy/
  4. docker-compose up
  5. Visit http://localhost:8090
  6. Eat cake :cake:

Submodule Build Statuses:

ECTyper:

https://travis-ci.org/phac-nml/ecoli_serotyping.svg?branch=superphy

PanPredic:

https://travis-ci.org/superphy/PanPredic.svg?branch=master

Docker Image for Conda:

https://travis-ci.org/superphy/docker-flask-conda.svg?branch=master

Stats:

Comparing different population groups:

Overall Performance

Runtimes of subtyping modules:

Runtimes of individual analyses

CLI: Generate Graph Files:

  • If you wish only to create RDF graphs (serialized as Turtle files):
  1. First install Miniconda, then create and activate the environment from https://raw.githubusercontent.com/superphy/docker-flask-conda/master/app/environment.yml
  2. cd into the app folder (where the RQ workers typically run from): cd app/
  3. Run savvy.py like so: python -m modules.savvy -i tests/ecoli/GCA_001894495.1_ASM189449v1_genomic.fna, where the argument after -i is your genome (FASTA) file.
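Under the hood, savvy builds an RDF graph from the analysis results and serializes it as Turtle. A minimal pure-Python sketch of that serialization step is shown below; the spfy: namespace, predicate names, and URIs are illustrative stand-ins, not Spfy's actual ontology:

```python
import hashlib

def genome_to_turtle(filename, contents):
    """Describe a genome file as a few RDF triples and serialize them as Turtle.

    The spfy: prefix and the predicate names are made up for illustration;
    the real graphs use Spfy's ontology (see spfy_ontology.ttl).
    """
    sha1 = hashlib.sha1(contents.encode()).hexdigest()
    subject = "spfy:genome_%s" % sha1[:8]
    triples = [
        (subject, "spfy:hasFileName", '"%s"' % filename),
        (subject, "spfy:hasSha1Hash", '"%s"' % sha1),
    ]
    lines = ["@prefix spfy: <https://www.github.com/superphy#> ."]
    for s, p, o in triples:
        lines.append("%s %s %s ." % (s, p, o))
    return "\n".join(lines)

# Example: serialize a toy genome file to Turtle text.
print(genome_to_turtle("GCA_001894495.1_ASM189449v1_genomic.fna", ">contig1\nACGT"))
```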

CLI: Generate Ontology:

screenshot of the results page

The ontology for Spfy is available at: https://raw.githubusercontent.com/superphy/backend/master/app/scripts/spfy_ontology.ttl It was generated using https://raw.githubusercontent.com/superphy/backend/master/app/scripts/generate_ontology.py with shared functions from Spfy's backend code. If you wish to run it:

  1. cd app/
  2. python -m scripts.generate_ontology, which will put the ontology in app/

You can generate a pretty diagram from the .ttl file using http://www.visualdataweb.de/webvowl/

CLI: Enqueue Subtyping Tasks w/o Reactapp:

Note

currently set up for .fna files only

You can bypass the front-end website and still enqueue subtyping jobs by:

  1. First, mount the host directory with all your genome files to /datastore in the containers.

For example, if you keep your files at /home/bob/ecoli-genomes/, you’d edit the docker-compose.yml file and replace:

volumes:
- /datastore

with:

volumes:
- /home/bob/ecoli-genomes:/datastore
  2. Then take down your docker composition (if it's up) and restart it:
docker-compose down
docker-compose up -d
  3. Shell into your webserver container (the worker containers would work too) and run the sideloading script:
docker exec -it backend_webserver_1 sh
python -m scripts.sideload
exit

Note that residual files may be created in your genome folder.
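The sideloading flow starts from whatever is in the mounted /datastore. The actual scripts/sideload is not reproduced here, but its first stage plausibly resembles the following hypothetical sketch: walk the datastore, pick out .fna files (matching the note above), and SHA-1 hash each one for de-duplication:

```python
import hashlib
import os

def find_genomes(datastore="/datastore", ext=".fna"):
    """Yield (path, sha1) pairs for genome files under the mounted datastore.

    Hypothetical sketch, not the actual scripts/sideload. Only files ending
    in .fna are considered, matching the limitation noted above.
    """
    for root, _dirs, files in os.walk(datastore):
        for name in files:
            if name.endswith(ext):
                path = os.path.join(root, name)
                with open(path, "rb") as f:
                    yield path, hashlib.sha1(f.read()).hexdigest()
```

The hash is what allows previously submitted genomes to be recognized as duplicates (see the Further Details section).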

Architecture:

screenshot of the results page
| Docker Image | Ports | Names | Description |
| --- | --- | --- | --- |
| backend-rq | 80/tcp, 443/tcp | backend_worker_1 | the main Redis Queue workers |
| backend-rq-blazegraph | 80/tcp, 443/tcp | backend_worker-blazegraph-ids_1 | handles Spfy ID generation for the Blazegraph database |
| backend | 0.0.0.0:8000->80/tcp, 443/tcp | backend_web-nginx-uwsgi_1 | the Flask backend which handles enqueueing tasks |
| superphy/blazegraph:2.1.4-inferencing | 0.0.0.0:8080->8080/tcp | backend_blazegraph_1 | Blazegraph database |
| redis:3.2 | 6379/tcp | backend_redis_1 | Redis database |
| reactapp | 0.0.0.0:8090->5000/tcp | backend_reactapp_1 | front-end to Spfy |

Further Details:

The superphy/backend-rq:2.0.0 image is scalable: you can create as many instances as you need or have processing power for. The image listens to the multiples queue (12 workers), which handles most of the tasks, including RGI calls, and to the singles queue (1 worker), which runs ECTyper; the split exists because RGI is the slowest part of the pipeline. Worker management is handled by supervisor.
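Supervisor-managed RQ workers are typically declared as one program section per queue. A hypothetical supervisord fragment matching the worker counts above (program names and commands are illustrative, not Spfy's actual config):

```ini
; one section per queue; numprocs controls the worker count
[program:multiples]
command=rq worker multiples
numprocs=12
process_name=%(program_name)s_%(process_num)02d

[program:singles]
command=rq worker singles
numprocs=1
process_name=%(program_name)s_%(process_num)02d
```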

The superphy/backend-rq-blazegraph:2.0.0 image is not scalable: it is responsible for querying the Blazegraph database for duplicate entries and for assigning spfyIDs in sequential order. Its functions are kept as minimal as possible to improve performance (ID generation is the one bottleneck in otherwise parallel pipelines); comparisons are done by SHA-1 hashes of the submitted files, and non-duplicates have their IDs reserved by linking the generated spfyID to the file hash. Worker management is handled by supervisor.
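The hash-based de-duplication described above can be sketched as a toy in-memory model (the real worker records the hash-to-ID link in Blazegraph, so duplicates are detected across submissions):

```python
class SpfyIdRegistry:
    """Assign sequential spfyIDs, one per unique file hash.

    Toy in-memory stand-in for the Blazegraph-backed reservation logic:
    a duplicate file (same SHA-1) gets back its previously reserved ID.
    """

    def __init__(self):
        self._ids = {}   # sha1 hash -> spfyID
        self._next = 1   # next sequential ID to hand out

    def reserve(self, sha1_hash):
        if sha1_hash not in self._ids:
            self._ids[sha1_hash] = self._next
            self._next += 1
        return self._ids[sha1_hash]
```

Because ID assignment must be strictly sequential, this step cannot be parallelized, which is why the image runs as a single instance.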

The superphy/backend:2.0.0 image, which runs the Flask endpoints, uses supervisor to manage its inner processes: nginx and uWSGI.

Blazegraph:

  • We are currently running Blazegraph version 2.1.4. If you want to run Blazegraph separately, please use the same version; otherwise there may be problems with endpoint URLs / returns (notably with version 2.1.1, see #63). Alternatively, modify the endpoint accordingly under database['blazegraph_url'] in /app/config.py
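For example, if you point Spfy at an external Blazegraph instance, the entry in /app/config.py might look like the following. The URL is a hypothetical example (Blazegraph's servlet path varies by deployment), so substitute your own host, port, and path:

```python
# /app/config.py (excerpt) -- hypothetical values; adjust to your deployment
database = {
    # SPARQL endpoint of an external Blazegraph 2.1.4 instance.
    'blazegraph_url': 'http://localhost:8080/bigdata/sparql',
}
```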

Contributing:

Steps required to add new modules are documented in the Developer Guide.