
How do I analyze my QuantSeq FWD-UMI Sequencing Data?

QuantSeq FWD-UMI data can be analyzed using our new web-based interactive platform, Kangooroo. QuantSeq kits are provided with a voucher code which can be used for the data analysis. If additional codes are needed, please contact sales@lexogen.com.

For further information on how to process your data, please visit our webpage https://www.lexogen.com/kangooroo-ngs-data-analysis/ and check out the online FAQs there.

NOTE: Our QuantSeq FWD-UMI pipeline utilizes single-read data only. Only upload Read 1 FASTQ files.

If you prefer to analyze your QuantSeq FWD-UMI data on your own, the general workflow below outlines the key steps to successfully analyze your QuantSeq FWD-UMI data.

Key steps of the general workflow to analyze QuantSeq FWD-UMI data:

QuantSeq FWD-UMI

1. UMI extraction

2. Trimming

3. Mapping

4. UMI collapsing

5. Gene read counting

6. Differential expression analysis

To extract the UMIs, you can use the publicly available UMI-Tools package on GitHub. Detailed documentation can be found on Read the Docs. The following command line extracts the 6 nt UMI sequence from the read while removing the adjacent 4 nt TATA spacer:

CODE
umi_tools extract --extract-method=regex --bc-pattern "(?P<umi_1>.{6})(?P<discard_1>.{4}).*" -L "/path/to/my_outputlog.txt" -I "/path/to/my_input.fastq.gz" -S "/path/to/my_output.fastq.gz" 
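Between UMI extraction and deduplication sit the trimming and mapping steps of the workflow. As a rough sketch only: cutadapt and STAR are example tool choices, not tools prescribed by this workflow, and all file names, index paths, and the adapter/poly(A) sequences shown are illustrative assumptions.

```shell
# Trim adapter and poly(A) stretches from the UMI-extracted reads
# (cutadapt as an example trimmer; adapter sequence and file names illustrative)
cutadapt -a AGATCGGAAGAGC -a "A{20}" -m 20 \
    -o my_trimmed.fastq.gz my_output.fastq.gz

# Map the trimmed reads (STAR as an example aligner); umi_tools dedup
# expects a coordinate-sorted, indexed BAM downstream
STAR --genomeDir /path/to/star_index \
     --readFilesIn my_trimmed.fastq.gz --readFilesCommand zcat \
     --outSAMtype BAM SortedByCoordinate --outFileNamePrefix my_sample_
samtools index my_sample_Aligned.sortedByCoord.out.bam
```

Any adapter trimmer and splice-aware aligner can be substituted here; the only hard requirement for the next step is a sorted, indexed BAM file.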

 

After alignment, reads can be deduplicated with the following command:

CODE
umi_tools dedup -I example.bam --output-stats=deduplicated -S deduplicated.bam

 

The deduplication method of UMI-Tools is described in Smith, Heger, and Sudbery (Genome Research, 2017).

NOTE: The current implementation of this method can take some time and can consume significant memory. If you experience issues with run time or memory usage, please refer to these FAQs.
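After deduplication, the next workflow step is gene read counting. As an illustration only (featureCounts from the Subread package is our assumption here, not a tool prescribed by this workflow; the annotation and output file names are placeholders):

```shell
# Count deduplicated reads per gene. QuantSeq FWD reads map in the sense
# orientation of the transcript, hence -s 1 (forward-stranded counting).
# annotation.gtf and counts.txt are illustrative file names.
featureCounts -a annotation.gtf -s 1 -o counts.txt deduplicated.bam
```

The resulting count table can then be imported into a differential expression package such as DESeq2 or edgeR for the final step of the workflow.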

 

If you would like to run the deduplication in a less complex way, you can set the parameter:

CODE
--method=unique

This will only collapse reads whose UMI sequences are identical.
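In context, the deduplication command from above then becomes (file names as before):

```shell
# Collapse only reads carrying identical UMI sequences, without the
# error-aware network method; faster and less memory-intensive
umi_tools dedup -I example.bam -S deduplicated.bam --method=unique
```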

For further information contact us at support@lexogen.com.

 



