[[!meta title="Bioinformatic Supercomputer Wishlist"]]
Many bioinformatic problems require large amounts of memory and
processor time to complete. For example, running WGCNA across 10⁶ CpG
sites requires 10⁶ choose 2, or roughly 5×10¹¹, pairwise comparisons,
and storing the resulting correlation matrix takes on the order of
10 TB. While embarrassingly parallel, the dataset upon which the
regressions are calculated is very large, and cannot fit into the main
memory of most existing supercomputers.
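
The arithmetic behind those figures, as a quick sketch (sizes assume
8-byte double-precision values, which is an assumption, not anything
WGCNA mandates):

```python
from math import comb

sites = 10**6
pairs = comb(sites, 2)                    # pairwise correlations: ~5.0e11
dense_tb = sites**2 * 8 / 1e12            # full 10^6 x 10^6 matrix, 8 B/entry
triangle_tb = pairs * 8 / 1e12            # upper triangle only

print(f"{pairs:.2e} comparisons")         # 5.00e+11
print(f"{dense_tb:.0f} TB dense matrix")  # 8 TB
print(f"{triangle_tb:.0f} TB triangle")   # 4 TB
```
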
Another problem I am interested in is computing ancestral trees
from whole human genomes. This involves running maximum likelihood
calculations across 10⁹ bases and thousands of samples. The matrix
itself could potentially take 1 TB, and calculating the likelihood
across that many positions is computationally expensive. Furthermore,
an exhaustive search of tree topologies for 2000 individuals requires
2000!! comparisons, or about 10²⁸⁶⁸; even searching a tiny fraction of
that space demands enormous computational time.
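
That 10²⁸⁶⁸ figure can be checked in log space, since 2000!! itself
overflows any native numeric type (a sketch using the identity
n!! = 2^(n/2) · (n/2)! for even n):

```python
from math import lgamma, log, log10

n = 2000
# For even n: n!! = 2**(n/2) * (n/2)!, and lgamma(k + 1) gives ln(k!).
log10_search = (n // 2) * log10(2) + lgamma(n // 2 + 1) / log(10)
print(f"2000!! is about 10^{int(log10_search)}")   # 10^2868
```
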
Some things that a future supercomputer could have that would enable