Abstract
Polar coding (Arıkan 2009) is a technique for constructing error-correcting codes; it is practical and achieves channel capacity. An active line of research is to characterize the asymptotic behavior of polar codes in terms of error probability, code rate, and time complexity, and to improve them further.
In this talk, two approaches to such improvements will be elaborated. The first approach replaces the polarizing kernel [[1,0], [1,1]] with large, random, dynamic matrices to reduce the error probability and increase the code rate. The resulting performance catches up with that of random codes, which are optimal, while preserving the complexity of the original polar code. The second approach prunes the polarization process to reduce its complexity while maintaining the capacity-achieving property of the unpruned version. The resulting complexity is O(log log(block length)) per information bit, whereas the unpruned version requires O(log(block length)). Combining the two approaches yields a record-breaking tradeoff among rate, error, and complexity, further contributing to Shannon's fundamental questions.
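To make the baseline concrete, the following is a minimal sketch of the standard polar transform that the first approach generalizes: the generator matrix is the n-th Kronecker power of Arıkan's 2x2 kernel [[1,0], [1,1]], applied over GF(2). The function names are illustrative, not from the talk.

```python
import numpy as np

# Arikan's 2x2 polarizing kernel.
KERNEL = np.array([[1, 0], [1, 1]], dtype=np.uint8)

def polar_generator(n):
    """Return the 2^n x 2^n polar generator matrix: the n-th
    Kronecker power of the kernel, with entries reduced mod 2."""
    G = np.array([[1]], dtype=np.uint8)
    for _ in range(n):
        G = np.kron(G, KERNEL) % 2
    return G

def encode(u):
    """Encode a length-2^n bit vector u with the polar transform over GF(2)."""
    n = int(np.log2(len(u)))
    return (np.asarray(u, dtype=np.uint8) @ polar_generator(n)) % 2

# Block length 4: encode the input vector [1, 0, 1, 1].
print(encode([1, 0, 1, 1]).tolist())  # -> [1, 1, 0, 1]
```

The first approach above replaces this fixed kernel with large random matrices that vary across the recursion; the second approach skips parts of this recursive structure to cut the per-bit complexity.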