Nielsen, Frank.
Introduction to HPC with MPI for data science [electronic resource] / by Frank Nielsen.
Record type:
Language material, printed : Monograph/item
Dewey classification:
004.11
Title/Author:
Introduction to HPC with MPI for data science / by Frank Nielsen.
Author:
Nielsen, Frank.
Publisher:
Cham : Springer International Publishing, 2016.
Physical description:
xxxiii, 282 p. : ill., digital ; 24 cm.
Contained By:
Springer eBooks
Subject:
High performance computing.
Subject:
Computer Science.
Subject:
Programming Techniques.
ISBN:
9783319219035
ISBN:
9783319219028
Contents:
Preface -- Part 1: High Performance Computing (HPC) with the Message Passing Interface (MPI) -- A Glance at High Performance Computing (HPC) -- Introduction to MPI: The Message Passing Interface -- Topology of Interconnection Networks -- Parallel Sorting -- Parallel Linear Algebra -- The MapReduce Paradigm -- Part 2: High Performance Computing for Data Science -- Partition-based Clustering with k-means -- Hierarchical Clustering -- Supervised Learning: Practice and Theory of Classification with the k-NN Rule -- Fast Approximate Optimization in High Dimensions with Core-sets and Fast Dimension Reduction -- Parallel Algorithms for Graphs -- Appendix A: Written Exam -- Appendix B: SLURM: A Resource Manager and Job Scheduler on Clusters of Machines -- Appendix C: List of Figures -- Appendix D: List of Tables -- Appendix E: Index.
Summary:
This gentle introduction to High Performance Computing (HPC) for Data Science using the Message Passing Interface (MPI) standard has been designed as a first course for undergraduates on parallel programming on distributed memory models, and requires only basic programming notions. Divided into two parts, the book first covers high performance computing using C++ with the Message Passing Interface (MPI) standard, followed by a second part on high-performance data analytics on computer clusters. In the first part, the fundamental notions of blocking versus non-blocking point-to-point communications, global communications (such as broadcast or scatter) and collaborative computations (reduce), together with the Amdahl and Gustafson speed-up laws, are described before addressing parallel sorting and parallel linear algebra on computer clusters. The common ring, torus and hypercube topologies of clusters are then explained, and global communication procedures on these topologies are studied. This first part closes with the MapReduce (MR) model of computation, well-suited to processing big data using the MPI framework. In the second part, the book focuses on high-performance data analytics. Flat and hierarchical clustering algorithms are introduced for data exploration, along with how to program these algorithms on computer clusters, followed by machine learning classification and an introduction to graph analytics. This part closes with a concise introduction to data core-sets, which make big data problems amenable to tiny data problems. Exercises are included at the end of each chapter so that students can practice the concepts learned, and a final section contains an overall exam which allows them to evaluate how well they have assimilated the material covered in the book.
Electronic resource:
http://dx.doi.org/10.1007/978-3-319-21903-5
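The summary above names the core MPI notions developed in the book's first part: blocking versus non-blocking point-to-point messages, collective operations such as broadcast, scatter and reduce, and the Amdahl and Gustafson speed-up laws (with parallel fraction p on n processors, Amdahl gives S(n) = 1 / ((1 - p) + p/n) for a fixed workload, while Gustafson gives S(n) = (1 - p) + p*n for a workload scaled with n). The short C++/MPI sketch below is not taken from the book; it is a minimal illustration of those notions, assuming a standard MPI installation with the usual mpicxx and mpirun wrappers.

// Minimal illustrative sketch (not from the book): blocking vs non-blocking
// point-to-point messages and a collective reduction with MPI, in C++.
// Assumed build/run commands: mpicxx sketch.cpp -o sketch && mpirun -np 4 ./sketch
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    // Blocking point-to-point: rank 0 sends one int to rank 1 and each call
    // returns only once its buffer is safe to reuse.
    if (size >= 2) {
        if (rank == 0) {
            int payload = 42;
            MPI_Send(&payload, 1, MPI_INT, 1, /*tag=*/0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            int payload = 0;
            MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            std::printf("rank 1 received %d (blocking)\n", payload);
        }
    }

    // Non-blocking variant: both transfers are posted first and completed
    // later with MPI_Waitall, leaving room for computation in between.
    if (size >= 2 && rank <= 1) {
        int out = rank, in = -1;
        int peer = 1 - rank;
        MPI_Request reqs[2];
        MPI_Isend(&out, 1, MPI_INT, peer, 1, MPI_COMM_WORLD, &reqs[0]);
        MPI_Irecv(&in, 1, MPI_INT, peer, 1, MPI_COMM_WORLD, &reqs[1]);
        MPI_Waitall(2, reqs, MPI_STATUSES_IGNORE);
        std::printf("rank %d exchanged %d (non-blocking)\n", rank, in);
    }

    // Collaborative computation: sum one value per rank onto rank 0.
    double local = static_cast<double>(rank + 1), total = 0.0;
    MPI_Reduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, /*root=*/0, MPI_COMM_WORLD);
    if (rank == 0) std::printf("reduced sum = %g\n", total);

    MPI_Finalize();
    return 0;
}

Posting both transfers with MPI_Isend/MPI_Irecv before completing them with MPI_Waitall is what allows communication to overlap with computation, which is the usual reason for preferring non-blocking calls on clusters.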
LDR 03608nam a2200325 a 4500
001 456531
003 DE-He213
005 20160824103359.0
006 m d
007 cr nn 008maaau
008 161227s2016 gw s 0 eng d
020 $a 9783319219035 $q (electronic bk.)
020 $a 9783319219028 $q (paper)
024 7 $a 10.1007/978-3-319-21903-5 $2 doi
035 $a 978-3-319-21903-5
040 $a GP $c GP
041 0 $a eng
050 4 $a QA76.88
072 7 $a UM $2 bicssc
072 7 $a COM051000 $2 bisacsh
082 0 4 $a 004.11 $2 23
090 $a QA76.88 $b .N669 2016
100 1 $a Nielsen, Frank. $3 588986
245 1 0 $a Introduction to HPC with MPI for data science $h [electronic resource] / $c by Frank Nielsen.
260 $a Cham : $b Springer International Publishing : $b Imprint: Springer, $c 2016.
300 $a xxxiii, 282 p. : $b ill., digital ; $c 24 cm.
490 1 $a Undergraduate topics in computer science, $x 1863-7310
505 0 $a Preface -- Part 1: High Performance Computing (HPC) with the Message Passing Interface (MPI) -- A Glance at High Performance Computing (HPC) -- Introduction to MPI: The Message Passing Interface -- Topology of Interconnection Networks -- Parallel Sorting -- Parallel Linear Algebra -- The MapReduce Paradigm -- Part 2: High Performance Computing for Data Science -- Partition-based Clustering with k-means -- Hierarchical Clustering -- Supervised Learning: Practice and Theory of Classification with the k-NN Rule -- Fast Approximate Optimization in High Dimensions with Core-sets and Fast Dimension Reduction -- Parallel Algorithms for Graphs -- Appendix A: Written Exam -- Appendix B: SLURM: A Resource Manager and Job Scheduler on Clusters of Machines -- Appendix C: List of Figures -- Appendix D: List of Tables -- Appendix E: Index.
520 $a This gentle introduction to High Performance Computing (HPC) for Data Science using the Message Passing Interface (MPI) standard has been designed as a first course for undergraduates on parallel programming on distributed memory models, and requires only basic programming notions. Divided into two parts, the book first covers high performance computing using C++ with the Message Passing Interface (MPI) standard, followed by a second part on high-performance data analytics on computer clusters. In the first part, the fundamental notions of blocking versus non-blocking point-to-point communications, global communications (such as broadcast or scatter) and collaborative computations (reduce), together with the Amdahl and Gustafson speed-up laws, are described before addressing parallel sorting and parallel linear algebra on computer clusters. The common ring, torus and hypercube topologies of clusters are then explained, and global communication procedures on these topologies are studied. This first part closes with the MapReduce (MR) model of computation, well-suited to processing big data using the MPI framework. In the second part, the book focuses on high-performance data analytics. Flat and hierarchical clustering algorithms are introduced for data exploration, along with how to program these algorithms on computer clusters, followed by machine learning classification and an introduction to graph analytics. This part closes with a concise introduction to data core-sets, which make big data problems amenable to tiny data problems. Exercises are included at the end of each chapter so that students can practice the concepts learned, and a final section contains an overall exam which allows them to evaluate how well they have assimilated the material covered in the book.
650 0 $a High performance computing. $3 386197
650 1 4 $a Computer Science. $3 423143
650 2 4 $a Programming Techniques. $3 466907
710 2 $a SpringerLink (Online service) $3 463450
773 0 $t Springer eBooks
830 0 $a Undergraduate topics in computer science. $3 466963
856 4 0 $u http://dx.doi.org/10.1007/978-3-319-21903-5
950 $a Computer Science (Springer-11645)