Covariances in computer vision and machine learning /
Ha Quang, Vittorio Murino, Gerard Medioni, Sven Dickinson, 1977-
Record type:
Bibliographic record - Electronic resource : Monograph/item
Dewey class number:
006.37
Title/Author:
Covariances in computer vision and machine learning / Minh Ha Quang, Vittorio Murino, Gerard Medioni, Sven Dickinson.
Author:
Ha Quang, Vittorio Murino, Gerard Medioni, Sven Dickinson,
Other authors:
Murino, Vittorio,
Physical description:
1 PDF (xiii, 156 pages) : illustrations.
Note:
Part of: Synthesis digital library of engineering and computer science.
Subject:
Computer vision -- Mathematical models.
Subject:
Machine learning -- Mathematical models.
ISBN:
9781681730141
Bibliography note:
Includes bibliographical references (pages 143-154).
Contents note:
Part I. Covariance matrices and applications -- 1. Data representation by covariance matrices -- 1.1 Covariance matrices for data representation -- 1.2 Statistical interpretation -- 2. Geometry of SPD matrices -- 2.1 Euclidean distance -- 2.2 Interpretations and motivations for the different invariances -- 2.3 Basic Riemannian geometry -- 2.4 Affine-invariant Riemannian metric on SPD matrices -- 2.4.1 Connection with the Fisher-Rao metric -- 2.5 Log-Euclidean metric -- 2.5.1 Log-Euclidean distance as an approximation of the affine-invariant Riemannian distance -- 2.5.2 Log-Euclidean distance as a Riemannian distance -- 2.5.3 Log-Euclidean vs. Euclidean -- 2.6 Bregman divergences -- 2.6.1 Log-determinant divergences -- 2.6.2 Connection with the Rényi and Kullback-Leibler divergences -- 2.7 Alpha-Beta Log-Det divergences -- 2.8 Power Euclidean metrics -- 2.9 Distances and divergences between empirical covariance matrices -- 2.10 Running time comparison -- 2.11 Summary -- 3. Kernel methods on covariance matrices -- 3.1 Positive definite kernels and reproducing kernel Hilbert spaces -- 3.2 Positive definite kernels on SPD matrices -- 3.2.1 Positive definite kernels with the Euclidean metric -- 3.2.2 Positive definite kernels with the log-Euclidean metric -- 3.2.3 Positive definite kernels with the symmetric Stein divergence -- 3.2.4 Positive definite kernels with the affine-invariant Riemannian metric -- 3.3 Kernel methods on covariance matrices -- 3.4 Experiments on image classification -- 3.4.1 Datasets -- 3.4.2 Results -- 3.5 Related approaches --
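The metrics and divergences enumerated under Chapter 2 are standard in the SPD-matrix literature. For quick reference (these are the textbook definitions, not quotations from the book's text), for SPD matrices $A$ and $B$:

\[ d_E(A,B) = \|A - B\|_F \quad \text{(Euclidean)}, \]
\[ d_{AI}(A,B) = \bigl\|\log\bigl(A^{-1/2} B A^{-1/2}\bigr)\bigr\|_F \quad \text{(affine-invariant Riemannian)}, \]
\[ d_{LE}(A,B) = \|\log A - \log B\|_F \quad \text{(Log-Euclidean)}, \]
\[ S(A,B) = \log\det\tfrac{A+B}{2} - \tfrac{1}{2}\log\det(AB) \quad \text{(symmetric Stein divergence)}, \]

where $\log$ denotes the principal matrix logarithm and $\|\cdot\|_F$ the Frobenius norm.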
Summary note:
Covariance matrices play important roles in many areas of mathematics, statistics, and machine learning, as well as their applications. In computer vision and image processing, they give rise to a powerful data representation, namely the covariance descriptor, with numerous practical applications. In this book, we begin by presenting an overview of the finite-dimensional covariance matrix representation approach for images, along with its statistical interpretation. In particular, we discuss the various distances and divergences that arise from the intrinsic geometrical structures of the set of Symmetric Positive Definite (SPD) matrices, namely Riemannian manifold and convex cone structures. Computationally, we focus on kernel methods on covariance matrices, especially using the Log-Euclidean distance. We then show some of the latest developments in the generalization of the finite-dimensional covariance matrix representation to the infinite-dimensional covariance operator representation via positive definite kernels. We present the generalization of the affine-invariant Riemannian metric and the Log-Hilbert-Schmidt metric, the latter generalizing the Log-Euclidean distance. Computationally, we focus on kernel methods on covariance operators, especially using the Log-Hilbert-Schmidt distance. Specifically, we present a two-layer kernel machine, using the Log-Hilbert-Schmidt distance and its finite-dimensional approximation, which reduces the computational complexity of the exact formulation while largely preserving its capability. Theoretical analysis shows that, mathematically, the approximate Log-Hilbert-Schmidt distance should be preferred over the approximate Log-Hilbert-Schmidt inner product and, computationally, it should be preferred over the approximate affine-invariant Riemannian distance. Numerical experiments on image classification demonstrate significant improvements of the infinite-dimensional formulation over the finite-dimensional counterpart. Given the numerous applications of covariance matrices across mathematics, statistics, and machine learning, we expect that the infinite-dimensional covariance operator formulation presented here will find many more applications beyond those in computer vision.
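To make the finite-dimensional pipeline described in the abstract concrete (covariance descriptor, Log-Euclidean distance, Gaussian kernel, roughly the material of Chapters 1-3), here is a minimal NumPy sketch. The per-pixel feature layout, regularization, and kernel width are illustrative assumptions, not the book's code.

import numpy as np

def spd_log(a):
    # Principal logarithm of an SPD matrix via eigendecomposition;
    # the result is real and symmetric by construction.
    w, v = np.linalg.eigh(a)
    return (v * np.log(w)) @ v.T

def covariance_descriptor(features, eps=1e-6):
    # features: (n_pixels, d) per-pixel feature vectors (e.g., intensity
    # and gradients; illustrative choice). eps * I keeps the result SPD
    # even when the sample covariance is rank-deficient.
    c = np.cov(features, rowvar=False)
    return c + eps * np.eye(c.shape[0])

def log_euclidean_distance(a, b):
    # d_LE(A, B) = ||log A - log B||_F
    return np.linalg.norm(spd_log(a) - spd_log(b), ord='fro')

def log_euclidean_gaussian_kernel(a, b, sigma=1.0):
    # Gaussian kernel built on the Log-Euclidean distance; positive
    # definite on SPD matrices, so usable in standard kernel machines.
    d = log_euclidean_distance(a, b)
    return np.exp(-d * d / (2.0 * sigma * sigma))

# Illustrative usage on random stand-ins for image features:
rng = np.random.default_rng(0)
A = covariance_descriptor(rng.standard_normal((500, 5)))
B = covariance_descriptor(rng.standard_normal((500, 5)))
print(log_euclidean_distance(A, B), log_euclidean_gaussian_kernel(A, B))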
Electronic resource:
http://ieeexplore.ieee.org/servlet/opac?bknumber=8106904
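The second half of the abstract replaces the d x d covariance matrix with an RKHS covariance operator and its finite-dimensional approximation. Below is a heavily simplified sketch of that flavor, assuming a Gaussian base kernel approximated by random Fourier features (Rahimi and Recht, 2007); the book's exact approximate Log-Hilbert-Schmidt formulation is more involved, and this proxy is only meant to convey the two-layer structure.

import numpy as np

def spd_log(a):
    # Principal matrix logarithm via eigendecomposition (a is SPD).
    w, v = np.linalg.eigh(a)
    return (v * np.log(w)) @ v.T

def random_fourier_features(x, n_features=100, sigma=1.0, seed=0):
    # Explicit map z with E[z(x) . z(y)] ~ exp(-||x-y||^2 / (2 sigma^2)).
    # The same seed must be used for every image so all maps share W, b.
    rng = np.random.default_rng(seed)
    w = rng.standard_normal((x.shape[1], n_features)) / sigma
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(x @ w + b)

def approx_operator(features, n_features=100, gamma=1e-3, seed=0):
    # Regularized covariance of the approximate feature map: a
    # finite-dimensional stand-in for the RKHS covariance operator;
    # gamma * I keeps the spectrum away from zero so the log exists.
    z = random_fourier_features(features, n_features, seed=seed)
    return np.cov(z, rowvar=False) + gamma * np.eye(n_features)

def approx_log_distance(ca, cb):
    # Log-Euclidean distance between the regularized approximations;
    # a simplified proxy for the approximate Log-Hilbert-Schmidt distance.
    return np.linalg.norm(spd_log(ca) - spd_log(cb), ord='fro')

def two_layer_kernel(fa, fb, s2=1.0):
    # Layer 1: Gaussian kernel on raw features, via its Fourier features;
    # layer 2: Gaussian kernel on the distance between the operators.
    d = approx_log_distance(approx_operator(fa), approx_operator(fb))
    return np.exp(-d * d / (2.0 * s2 * s2))

rng = np.random.default_rng(1)
print(two_layer_kernel(rng.standard_normal((400, 5)),
                       rng.standard_normal((400, 5))))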
Covariances in computer vision and machine learning /
LDR
:07844nmm 2200649 i 4500
001
509158
003
IEEE
005
20171123123523.0
006
m eo d
007
cr cn |||m|||a
008
210514s2018 caua foab 000 0 eng d
020
$a
9781681730141
$q
ebook
020
$z
9781681730134
$q
print
024
7 #
$a
10.2200/S00801ED1V01Y201709COV011
$2
doi
035
$a
(CaBNVSL)swl00407992
035
$a
(OCoLC)1012748003
035
$a
8106904
040
$a
CaBNVSL
$b
eng
$e
rda
$c
CaBNVSL
$d
CaBNVSL
050
# 4
$a
TA1634
$b
.H23 2018
082
0 4
$a
006.37
$2
23
100
1
$a
Ha Quang, Vittorio Murino, Gerard Medioni, Sven Dickinson,
$d
1977-,
$e
author.
$3
729691
245
1 0
$a
Covariances in computer vision and machine learning /
$c
Minh Ha Quang, Vittorio Murino, Gerard Medioni, Sven Dickinson.
264
1
$a
[San Rafael, California] :
$b
Morgan & Claypool,
$c
2018.
300
$a
1 PDF (xiii, 156 pages) :
$b
illustrations.
336
$a
text
$2
rdacontent
337
$a
electronic
$2
isbdmedia
338
$a
online resource
$2
rdacarrier
490
1
$a
Synthesis lectures on computer vision,
$x
2153-1064 ;
$v
# 13
500
$a
Part of: Synthesis digital library of engineering and computer science.
504
$a
Includes bibliographical references (pages 143-154).
505
0 #
$a
Part I. Covariance matrices and applications -- 1. Data representation by covariance matrices -- 1.1 Covariance matrices for data representation -- 1.2 Statistical interpretation -- 2. Geometry of SPD matrices -- 2.1 Euclidean distance -- 2.2 Interpretations and motivations for the different invariances -- 2.3 Basic Riemannian geometry -- 2.4 Affine-invariant Riemannian metric on SPD matrices -- 2.4.1 Connection with the Fisher-Rao metric -- 2.5 Log-Euclidean metric -- 2.5.1 Log-Euclidean distance as an approximation of the affine-invariant Riemannian distance -- 2.5.2 Log-Euclidean distance as a Riemannian distance -- 2.5.3 Log-Euclidean vs. Euclidean -- 2.6 Bregman divergences -- 2.6.1 Log-determinant divergences -- 2.6.2 Connection with the Rényi and Kullback-Leibler divergences -- 2.7 Alpha-Beta Log-Det divergences -- 2.8 Power Euclidean metrics -- 2.9 Distances and divergences between empirical covariance matrices -- 2.10 Running time comparison -- 2.11 Summary -- 3. Kernel methods on covariance matrices -- 3.1 Positive definite kernels and reproducing kernel Hilbert spaces -- 3.2 Positive definite kernels on SPD matrices -- 3.2.1 Positive definite kernels with the Euclidean metric -- 3.2.2 Positive definite kernels with the log-Euclidean metric -- 3.2.3 Positive definite kernels with the symmetric Stein divergence -- 3.2.4 Positive definite kernels with the affine-invariant Riemannian metric -- 3.3 Kernel methods on covariance matrices -- 3.4 Experiments on image classification -- 3.4.1 Datasets -- 3.4.2 Results -- 3.5 Related approaches --
505
8 #
$a
Part II. Covariance operators and applications -- 4. Data representation by covariance operators -- 4.1 Positive definite kernels and feature maps -- 4.2 Covariance operators in RKHS -- 4.3 Data representation by RKHS covariance operators -- 5. Geometry of covariance operators -- 5.1 Hilbert-Schmidt distance -- 5.2 Riemannian distances between covariance operators -- 5.2.1 The affine-invariant Riemannian metric -- 5.2.2 Log-Hilbert-Schmidt metric -- 5.3 Infinite-dimensional alpha log-determinant divergences -- 5.4 Summary -- 6. Kernel methods on covariance operators -- 6.1 Positive definite kernels on covariance operators -- 6.1.1 Kernels defined using the Hilbert-Schmidt metric -- 6.1.2 Kernels defined using the log-Hilbert-Schmidt metric -- 6.2 Two-layer kernel machines -- 6.3 Approximate methods -- 6.3.1 Approximate log-Hilbert-Schmidt distance and approximate affine-invariant Riemannian distance -- 6.3.2 Computational complexity -- 6.3.3 Approximate log-Hilbert-Schmidt inner product -- 6.3.4 Two-layer kernel machine with the approximate log-Hilbert-Schmidt distance -- 6.3.5 Case study: approximation by Fourier feature maps -- 6.4 Experiments in image classification -- 6.5 Summary -- 7. Conclusion and future outlook --
505
8 #
$a
A. Supplementary technical information -- Mean squared errors for empirical covariance matrices -- Matrix exponential and principal logarithm Fréchet derivative -- The quasi-random Fourier features -- Low-discrepancy sequences -- The Gaussian case -- Proofs of several mathematical results -- Bibliography -- Authors' biographies.
506
#
$a
Abstract freely available; full-text restricted to subscribers or individual document purchasers.
510
0
$a
Compendex
510
0
$a
INSPEC
510
0
$a
Google scholar
510
0
$a
Google book search
520
3
$a
Covariance matrices play important roles in many areas of mathematics, statistics, and machine learning, as well as their applications. In computer vision and image processing, they give rise to a powerful data representation, namely the covariance descriptor, with numerous practical applications. In this book, we begin by presenting an overview of the finite-dimensional covariance matrix representation approach for images, along with its statistical interpretation. In particular, we discuss the various distances and divergences that arise from the intrinsic geometrical structures of the set of Symmetric Positive Definite (SPD) matrices, namely Riemannian manifold and convex cone structures. Computationally, we focus on kernel methods on covariance matrices, especially using the Log-Euclidean distance. We then show some of the latest developments in the generalization of the finite-dimensional covariance matrix representation to the infinite-dimensional covariance operator representation via positive definite kernels. We present the generalization of the affine-invariant Riemannian metric and the Log-Hilbert-Schmidt metric, the latter generalizing the Log-Euclidean distance. Computationally, we focus on kernel methods on covariance operators, especially using the Log-Hilbert-Schmidt distance. Specifically, we present a two-layer kernel machine, using the Log-Hilbert-Schmidt distance and its finite-dimensional approximation, which reduces the computational complexity of the exact formulation while largely preserving its capability. Theoretical analysis shows that, mathematically, the approximate Log-Hilbert-Schmidt distance should be preferred over the approximate Log-Hilbert-Schmidt inner product and, computationally, it should be preferred over the approximate affine-invariant Riemannian distance. Numerical experiments on image classification demonstrate significant improvements of the infinite-dimensional formulation over the finite-dimensional counterpart. Given the numerous applications of covariance matrices across mathematics, statistics, and machine learning, we expect that the infinite-dimensional covariance operator formulation presented here will find many more applications beyond those in computer vision.
530
$a
Also available in print.
538
$a
Mode of access: World Wide Web.
538
$a
System requirements: Adobe Acrobat Reader.
588
$a
Title from PDF title page (viewed on November 22, 2017).
650
# 0
$a
Computer vision
$x
Mathematical models.
$3
549179
650
# 0
$a
Machine learning
$x
Mathematical models.
$3
571072
653
# #
$a
covariance descriptors in computer vision
653
# #
$a
positive definite matrices
653
# #
$a
infinite-dimensional covariance operators
653
# #
$a
positive definite operators
653
# #
$a
Hilbert-Schmidt operators
653
# #
$a
Riemannian manifolds
653
# #
$a
affine-invariant Riemannian distance
653
# #
$a
Log-Euclidean distance
653
# #
$a
Log-Hilbert-Schmidt distance
653
# #
$a
convex cone
653
# #
$a
Bregman divergences
653
# #
$a
kernel methods on Riemannian manifolds
653
# #
$a
visual object recognition
653
# #
$a
image classification
655
# 0
$a
Electronic books.
$2
local
$3
336502
700
1 #
$a
Murino, Vittorio,
$e
author.
$3
729036
776
0 8
$i
Print version:
$z
9781681730134
830
0
$a
Synthesis digital library of engineering and computer science.
$3
461208
830
0
$a
Synthesis lectures on computer vision ;
$v
#9.
$x
2153-1056
$3
713984
856
4 2
$3
Abstract with links to resource
$u
http://ieeexplore.ieee.org/servlet/opac?bknumber=8106904