Paper title
Tailoring r-index for metagenomics
Paper authors
Paper abstract
A basic problem in metagenomics is to assign a sequenced read to the correct species in a reference collection. In typical applications in genomic epidemiology and viral metagenomics, the reference collection consists of a set of species, with each species represented by its highly similar strains. It has recently been shown that accurate read assignment can be achieved with $k$-mer hashing-based pseudoalignment: a read is assigned to species A if each of its $k$-mer hits to the reference collection is located only on strains of A. We study the underlying primitives required in pseudoalignment and related tasks. We propose three space-efficient solutions building upon the document listing with frequencies problem. All the solutions use an $r$-index (Gagie et al., SODA 2018) as the underlying index structure for the text obtained as the concatenation of the set of species, as well as for each species. Given $t$ species whose concatenation length is $n$, and whose Burrows-Wheeler transform contains $r$ runs, our first solution, based on a grammar-compressed document array with precomputed queries at non-terminal symbols, reports the frequencies for the ${\tt ndoc}$ distinct documents in which a pattern of length $m$ occurs in ${\cal O}(m + \log(n)\,{\tt ndoc})$ time. Our second solution is also based on a grammar-compressed document array, but enhanced with bitvectors, and reports the frequencies in ${\cal O}(m + ((t/w)\log n + \log(n/r))\,{\tt ndoc})$ time on a machine with word size $w$. Our third solution, based on the interleaved LCP array, answers the same query in ${\cal O}(m + \log(n/r)\,{\tt ndoc})$ time. We implemented our solutions and tested them on real-world and synthetic datasets. The results show that all the solutions are fast on highly repetitive data, and that the size overhead introduced by the indexes is comparable to the size of the $r$-index.
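The pseudoalignment rule stated in the abstract is simple enough to illustrate directly. Below is a minimal Python sketch of that rule under a simplifying assumption: the reference is indexed with a plain hash map from $k$-mer to the set of species containing it, not with the paper's $r$-index-based structures. All names and the toy data are illustrative, not from the paper.

```python
from collections import defaultdict

def build_kmer_index(references, k):
    """references: dict mapping species name -> list of strain sequences.
    Returns a map from each k-mer to the set of species it occurs in.
    (Illustrative hash-map index; the paper uses r-index-based structures.)"""
    index = defaultdict(set)
    for species, strains in references.items():
        for strain in strains:
            for i in range(len(strain) - k + 1):
                index[strain[i:i + k]].add(species)
    return index

def pseudoalign(read, index, k):
    """Assign the read to species A if every k-mer of the read that hits
    the reference occurs only on strains of A; otherwise return None."""
    hits = set()
    for i in range(len(read) - k + 1):
        kmer = read[i:i + k]
        if kmer in index:
            hits |= index[kmer]
            if len(hits) > 1:  # hits span several species: ambiguous
                return None
    return next(iter(hits)) if hits else None

# Toy example: two species, each represented by two highly similar strains.
refs = {"A": ["ACGTACGTGG", "ACGTACGTGC"],
        "B": ["TTTTGGGGCC", "TTTTGGGGCA"]}
idx = build_kmer_index(refs, k=5)
print(pseudoalign("ACGTACGT", idx, k=5))  # -> "A"
```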
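Likewise, the semantics of the document listing with frequencies query that all three solutions build upon can be pinned down with a naive scan. The sketch below exists only to define the query; a linear scan per query achieves none of the time bounds above, which require the compressed structures described in the abstract.

```python
from collections import Counter

def document_listing_with_frequencies(documents, pattern):
    """documents: list of strings (the collection, kept as separate
    documents here for clarity rather than as one concatenation).
    Returns {doc_id: number of occurrences of pattern in that document},
    listing only the ndoc documents where the pattern occurs."""
    freqs = Counter()
    for doc_id, text in enumerate(documents):
        start = text.find(pattern)
        while start != -1:
            freqs[doc_id] += 1
            start = text.find(pattern, start + 1)  # count overlaps too
    return dict(freqs)

docs = ["ACGTACGT", "ACGTTTTT", "GGGGGGGG"]
print(document_listing_with_frequencies(docs, "ACGT"))
# -> {0: 2, 1: 1}; here ndoc = 2 distinct documents are reported
```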