Dimensionality Reduction for Flow-Based Face Embeddings

  • S. Poliakov
  • I. Belykh
Conference paper
Part of the Lecture Notes in Networks and Systems book series (LNNS, volume 127)

Abstract

Flow-based neural networks are promising generative image models. One of their main drawbacks at the moment is the large number of parameters and the large size of the hidden representation of the modeled data, which complicates their use at industrial scale. This article proposes a method for isolating redundant components of the vector representations generated by flow-based networks, using the Glow neural network applied to the face-generation problem as an example; the method achieved an effective tenfold compression. The prospects of using such compression for more efficient parallelization of training and inference via model parallelism are also considered.
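The abstract does not specify how redundant components of the latent vectors are isolated; as a hedged illustration only, the general idea of compressing fixed-size embeddings by a factor of ten can be sketched with a PCA-style projection. The function names, the use of PCA, and the synthetic stand-in for Glow latents below are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def fit_pca_compressor(embeddings, keep_ratio=0.1):
    """Fit a linear compressor keeping the top principal directions.

    embeddings: (n_samples, dim) array of latent vectors.
    keep_ratio: fraction of dimensions to retain (0.1 -> tenfold compression).
    """
    mean = embeddings.mean(axis=0)
    centered = embeddings - mean
    # Right singular vectors give the principal directions of the data.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    k = max(1, int(embeddings.shape[1] * keep_ratio))
    components = vt[:k]  # (k, dim)
    return mean, components

def compress(x, mean, components):
    # Project centered vectors onto the retained directions: (n, dim) -> (n, k).
    return (x - mean) @ components.T

def decompress(z, mean, components):
    # Map compressed codes back to the original embedding space.
    return z @ components + mean

# Demo on synthetic, approximately low-rank "embeddings" (a stand-in for
# Glow latents, which the paper found to contain redundant components).
rng = np.random.default_rng(0)
basis = rng.normal(size=(8, 100))
latent = rng.normal(size=(256, 8)) @ basis + 0.01 * rng.normal(size=(256, 100))

mean, comps = fit_pca_compressor(latent, keep_ratio=0.1)
z = compress(latent, mean, comps)          # (256, 10): tenfold smaller
recon = decompress(z, mean, comps)         # (256, 100): approximate inverse
```

Because the flow network is invertible, a decompressed latent can in principle be passed back through the inverse flow to reconstruct an image, which is why low reconstruction error in latent space matters for this kind of compression.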

Keywords

Machine learning · Neural networks · Generative models · Embeddings · Dimensionality reduction · Image processing

Notes

Acknowledgements

This research work was supported by the Academic Excellence Project 5-100 proposed by Peter the Great St. Petersburg Polytechnic University.


Copyright information

© The Editor(s) (if applicable) and The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd. 2021

Authors and Affiliations

  1. Peter the Great St. Petersburg Polytechnic University, St. Petersburg, Russian Federation
