Precise control over \(P_{X \mid Y}\) via \(f\) and \(\epsilon\)!
But:
How do we encode \(X\)?
How many bits do we need?
4.2. Rough Idea for Achievability
Communication problem between Alice and Bob, who:
share their PRNG seed \(S\)
share \(P_X\) and can easily sample from it
Alice
draws iid samples \(X_1, \dots\) with \(X_i \sim P_X\) using \(S\)
picks \(K \in \mathbb{N}\) such that \(X_K \sim P_{X \mid Y}\)
encodes \(K\) using \(\approx \log K\) bits (a minimal sketch follows below)
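One simple way to realise this scheme (though not the most efficient known coder) is rejection sampling. The sketch below assumes the density ratio \(r(x) = \frac{dP_{X \mid Y}}{dP_X}(x)\) is bounded by a known constant \(M\); the names encode, decode, sample_prior and ratio are illustrative, not from the original slides. Bob's decoder simply replays the shared randomness:

\begin{verbatim}
import numpy as np

def encode(y, sample_prior, ratio, M, seed):
    """Alice: return the index K of the first accepted candidate.

    Rejection-sampling sketch: accept X_k with probability
    r(X_k) / M, where r(x) = dP_{X|Y=y}/dP_X(x) <= M, so that
    the accepted sample X_K is distributed as P_{X|Y=y}.
    """
    rng = np.random.default_rng(seed)   # shared PRNG seed S
    k = 1
    while True:
        x = sample_prior(rng)           # X_k ~ P_X, reproducible from S
        if rng.uniform() < ratio(x, y) / M:
            return k
        k += 1

def decode(k, sample_prior, seed):
    """Bob: replay the shared randomness and return X_K."""
    rng = np.random.default_rng(seed)
    for _ in range(k):
        x = sample_prior(rng)
        rng.uniform()                   # consume the accept/reject draw too
    return x
\end{verbatim}

Since the acceptance probability is \(1/M\) per step, \(K\) is geometric with mean \(M\), so plain rejection sampling costs roughly \(\log M\) bits, which can be much larger than \(I[X; Y]\); more refined selection rules, e.g. A* coding or greedy Poisson rejection sampling (see the references), approach the bound in the next section.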
4.3. Coding Efficiency
When common randomness \(S\) is available, there exists an algorithm such that (Li and El Gamal, 2018):
\[
{\color{red} I[X; Y]} \leq \mathbb{H}[X \mid S] \leq {\color{red} I[X; Y]} + {\color{blue} \log (I[X; Y] + 1) + 4}
\]
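For example, if \(I[X; Y] = 20\) bits, the guaranteed code length is at most \(20 + \log 21 + 4 \approx 28.4\) bits: the overhead is only logarithmic in the mutual information.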
\(I[X; Y]\) can be finite even when \(\mathbb{H}[X]\) is infinite!
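For instance, if \(X \sim \mathcal{N}(0, \sigma^2)\) and \(Y = X + N\) with independent Gaussian noise \(N\), then \(I[X; Y] = \frac{1}{2}\log(1 + \mathrm{SNR})\) is finite, yet \(\mathbb{H}[X]\) is infinite because \(X\) is continuous; so a sample from \(P_{X \mid Y}\) can be communicated with finitely many bits even though \(X\) itself admits no finite exact encoding.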
\begin{align}
D_{KL}[\mathcal{L}(0, b) || \mathcal{L}(0, 1)] &= b - \ln b - 1 \\
D_{CS}[\mathcal{L}(0, b) || \mathcal{L}(0, 1)] &= b - \psi\left(\frac{1}{b}\right) + \gamma - 1
\end{align}
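As a quick sanity check of the closed form for \(D_{KL}\) above, one can integrate numerically (a sketch assuming NumPy/SciPy; kl_laplace_numerical is an illustrative name):

\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def kl_laplace_numerical(b):
    """D_KL[L(0, b) || L(0, 1)] by numerical integration (in nats)."""
    p = lambda x: np.exp(-np.abs(x) / b) / (2 * b)   # density of L(0, b)
    q = lambda x: np.exp(-np.abs(x)) / 2             # density of L(0, 1)
    integrand = lambda x: p(x) * np.log(p(x) / q(x))
    # The integrand is symmetric about 0, so integrate over [0, inf).
    val, _ = quad(integrand, 0, np.inf)
    return 2 * val

b = 2.5
print(kl_laplace_numerical(b))   # ~0.5837
print(b - np.log(b) - 1)         # ~0.5837, matching the closed form
\end{verbatim}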
9.6. Some Empirical Results II
9.7. Some Empirical Results III
10. References
10.1. References I
E. Agustsson and L. Theis. "Universally quantized neural compression". In NeurIPS 2020.
C. Blundell, J. Cornebise, K. Kavukcuoglu and D. Wierstra. "Weight uncertainty in neural networks". In ICML 2015.
E. Dupont, A. Golinski, M. Alizadeh, Y. W. Teh and A. Doucet. "COIN: compression with implicit neural representations". arXiv preprint arXiv:2103.03123, 2021.
10.2. References II
G. F. "Greedy Poisson Rejection Sampling". In NeurIPS 2023, to appear.
G. F.*, S. Markou*, and J. M. Hernández-Lobato. "Fast relative entropy coding with A* coding". In ICML 2022.
D. Goc and G. F. "On Channel Simulation Conjectures". Unpublished.
10.3. References III
Z. Guo*, G. F.*, J. He, Z. Chen and J. M. Hernández-Lobato. "Compression with Bayesian Implicit Neural Representations". NeurIPS 2023, to appear.
P. Harsha, R. Jain, D. McAllester, and J. Radhakrishnan. "The communication complexity of correlation". IEEE Transactions on Information Theory, vol. 56, no. 1, pp. 438–449, 2010.
M. Havasi, R. Peharz, and J. M. Hernández-Lobato. "Minimal Random Code Learning: Getting Bits Back from Compressed Model Parameters". In ICLR 2019.
10.4. References IV
J. He*, G. F.*, Z. Guo and J. M. Hernández-Lobato. "RECOMBINER: Robust and Enhanced Compression with Bayesian Implicit Neural Representations". Unpublished.
C. T. Li and A. El Gamal. "Strong functional representation lemma and applications to coding theorems". IEEE Transactions on Information Theory, vol. 64, no. 11, pp. 6967–6978, 2018.
10.5. References V
L. Theis and E. Agustsson. "On the advantages of stochastic encoders". arXiv preprint arXiv:2102.09270, 2021.
L. Theis, T. Salimans, M. D. Hoffman and F. Mentzer. "Lossy compression with Gaussian diffusion". arXiv preprint arXiv:2206.08889, 2022.