<!DOCTYPE html>
<!--[if IE 8]><html class="no-js lt-ie9" lang="en" > <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en" > <!--<![endif]-->
<head>
<meta charset="utf-8">
<meta name="generator" content="Docutils 0.18.1: http://docutils.sourceforge.net/" />
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>Distributed RPC Framework — PyTorch 2.0 documentation</title>
<link rel="canonical" href="https://pytorch.org/docs/stable/rpc.html"/>
<link rel="stylesheet" href="_static/css/theme.css" type="text/css" />
<link rel="stylesheet" href="_static/pygments.css" type="text/css" />
<link rel="stylesheet" href="_static/copybutton.css" type="text/css" />
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/katex.min.css" type="text/css" />
<link rel="stylesheet" href="_static/katex-math.css" type="text/css" />
<link rel="stylesheet" href="_static/sphinx-dropdown.css" type="text/css" />
<link rel="stylesheet" href="_static/panels-bootstrap.min.css" type="text/css" />
<link rel="stylesheet" href="_static/css/jit.css" type="text/css" />
<link rel="index" title="Index" href="genindex.html" />
<link rel="search" title="Search" href="search.html" />
<link rel="next" title="Remote Reference Protocol" href="rpc/rref.html" />
<link rel="prev" title="torch.ao.ns._numeric_suite_fx" href="torch.ao.ns._numeric_suite_fx.html" />
<!-- Google Analytics -->
<script async src="https://www.googletagmanager.com/gtag/js?id=UA-117752657-2"></script>
<script>
window.dataLayer = window.dataLayer || [];
function gtag(){dataLayer.push(arguments);}
gtag('js', new Date());
gtag('config', 'UA-117752657-2');
</script>
<!-- End Google Analytics -->
<script src="_static/js/modernizr.min.js"></script>
<!-- Preload the theme fonts -->
<link rel="preload" href="_static/fonts/FreightSans/freight-sans-book.woff2" as="font" type="font/woff2" crossorigin="anonymous">
<link rel="preload" href="_static/fonts/FreightSans/freight-sans-medium.woff2" as="font" type="font/woff2" crossorigin="anonymous">
<link rel="preload" href="_static/fonts/IBMPlexMono/IBMPlexMono-Medium.woff2" as="font" type="font/woff2" crossorigin="anonymous">
<link rel="preload" href="_static/fonts/FreightSans/freight-sans-bold.woff2" as="font" type="font/woff2" crossorigin="anonymous">
<link rel="preload" href="_static/fonts/FreightSans/freight-sans-medium-italic.woff2" as="font" type="font/woff2" crossorigin="anonymous">
<link rel="preload" href="_static/fonts/IBMPlexMono/IBMPlexMono-SemiBold.woff2" as="font" type="font/woff2" crossorigin="anonymous">
<!-- Preload the katex fonts -->
<link rel="preload" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/fonts/KaTeX_Math-Italic.woff2" as="font" type="font/woff2" crossorigin="anonymous">
<link rel="preload" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/fonts/KaTeX_Main-Regular.woff2" as="font" type="font/woff2" crossorigin="anonymous">
<link rel="preload" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/fonts/KaTeX_Main-Bold.woff2" as="font" type="font/woff2" crossorigin="anonymous">
<link rel="preload" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/fonts/KaTeX_Size1-Regular.woff2" as="font" type="font/woff2" crossorigin="anonymous">
<link rel="preload" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/fonts/KaTeX_Size4-Regular.woff2" as="font" type="font/woff2" crossorigin="anonymous">
<link rel="preload" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/fonts/KaTeX_Size2-Regular.woff2" as="font" type="font/woff2" crossorigin="anonymous">
<link rel="preload" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/fonts/KaTeX_Size3-Regular.woff2" as="font" type="font/woff2" crossorigin="anonymous">
<link rel="preload" href="https://cdn.jsdelivr.net/npm/[email protected]/dist/fonts/KaTeX_Caligraphic-Regular.woff2" as="font" type="font/woff2" crossorigin="anonymous">
<link rel="stylesheet" href="https://use.fontawesome.com/releases/v5.15.2/css/all.css" integrity="sha384-vSIIfh2YWi9wW0r9iZe7RJPrKwp6bG+s9QZMoITbCckVJqGCCRhc+ccxNcdpHuYu" crossorigin="anonymous">
</head>
<div class="container-fluid header-holder tutorials-header" id="header-holder">
<div class="container">
<div class="header-container">
<a class="header-logo" href="https://pytorch.org/" aria-label="PyTorch"></a>
<div class="main-menu">
<ul>
<li>
<a href="https://pytorch.org/get-started">Get Started</a>
</li>
<li>
<a href="https://pytorch.org/ecosystem">Ecosystem</a>
</li>
<li>
<a href="https://pytorch.org/mobile">Mobile</a>
</li>
<li>
<a href="https://pytorch.org/blog/">Blog</a>
</li>
<li>
<a href="https://pytorch.org/tutorials">Tutorials</a>
</li>
<li class="active docs-active">
<div id="resourcesDropdownButton" data-toggle="resources-dropdown" class="resources-dropdown">
<a class="resource-option with-down-orange-arrow">
Docs
</a>
<div class="resources-dropdown-menu">
<a class="doc-dropdown-option nav-dropdown-item" href="https://pytorch.org/docs/stable/index.html">
<span class="dropdown-title">PyTorch</span>
<p></p>
</a>
<a class="doc-dropdown-option nav-dropdown-item" href="https://pytorch.org/audio/stable/index.html">
<span class="dropdown-title">torchaudio</span>
<p></p>
</a>
<a class="doc-dropdown-option nav-dropdown-item" href="https://pytorch.org/text/stable/index.html">
<span class="dropdown-title">torchtext</span>
<p></p>
</a>
<a class="doc-dropdown-option nav-dropdown-item" href="https://pytorch.org/vision/stable/index.html">
<span class="dropdown-title">torchvision</span>
<p></p>
</a>
<a class="doc-dropdown-option nav-dropdown-item" href="https://pytorch.org/torcharrow">
<span class="dropdown-title">torcharrow</span>
<p></p>
</a>
<a class="doc-dropdown-option nav-dropdown-item" href="https://pytorch.org/data">
<span class="dropdown-title">TorchData</span>
<p></p>
</a>
<a class="doc-dropdown-option nav-dropdown-item" href="https://pytorch.org/torchrec">
<span class="dropdown-title">TorchRec</span>
<p></p>
</a>
<a class="doc-dropdown-option nav-dropdown-item" href="https://pytorch.org/serve/">
<span class="dropdown-title">TorchServe</span>
<p></p>
</a>
<a class="doc-dropdown-option nav-dropdown-item" href="https://pytorch.org/torchx/">
<span class="dropdown-title">TorchX</span>
<p></p>
</a>
<a class="doc-dropdown-option nav-dropdown-item" href="https://pytorch.org/xla">
<span class="dropdown-title">PyTorch on XLA Devices</span>
<p></p>
</a>
</div>
</div>
</li>
<li>
<div id="resourcesDropdownButton" data-toggle="resources-dropdown" class="resources-dropdown">
<a class="resource-option with-down-arrow">
Resources
</a>
<div class="resources-dropdown-menu">
<a class="nav-dropdown-item" href="https://pytorch.org/features">
<span class="dropdown-title">About</span>
<p>Learn about PyTorch’s features and capabilities</p>
</a>
<a class="nav-dropdown-item" href="https://pytorch.org/foundation">
<span class="dropdown-title">PyTorch Foundation</span>
<p>Learn about the PyTorch foundation</p>
</a>
<a class="nav-dropdown-item" href="https://pytorch.org/#community-module">
<span class="dropdown-title">Community</span>
<p>Join the PyTorch developer community to contribute, learn, and get your questions answered.</p>
</a>
<a class="nav-dropdown-item" href="https://pytorch.org/community-stories">
<span class="dropdown-title">Community Stories</span>
<p>Learn how our community solves real, everyday machine learning problems with PyTorch.</p>
</a>
<a class="nav-dropdown-item" href="https://pytorch.org/resources">
<span class="dropdown-title">Developer Resources</span>
<p>Find resources and get questions answered</p>
</a>
<a class="nav-dropdown-item" href="https://pytorch.org/events">
<span class="dropdown-title">Events</span>
<p>Find events, webinars, and podcasts</p>
</a>
<a class="nav-dropdown-item" href="https://discuss.pytorch.org/" target="_blank">
<span class="dropdown-title">Forums</span>
<p>A place to discuss PyTorch code, issues, install, research</p>
</a>
<a class="nav-dropdown-item" href="https://pytorch.org/hub">
<span class="dropdown-title">Models (Beta)</span>
<p>Discover, publish, and reuse pre-trained models</p>
</a>
</div>
</div>
</li>
<li>
<a href="https://github.com/pytorch/pytorch">GitHub</a>
</li>
</ul>
</div>
<a class="main-menu-open-button" href="#" data-behavior="open-mobile-menu"></a>
</div>
</div>
</div>
<body class="pytorch-body">
<div class="table-of-contents-link-wrapper">
<span>Table of Contents</span>
<a href="#" class="toggle-table-of-contents" data-behavior="toggle-table-of-contents"></a>
</div>
<nav data-toggle="wy-nav-shift" class="pytorch-left-menu" id="pytorch-left-menu">
<div class="pytorch-side-scroll">
<div class="pytorch-menu pytorch-menu-vertical" data-spy="affix" role="navigation" aria-label="main navigation">
<div class="pytorch-left-menu-search">
<div class="version">
<a href='https://pytorch.org/docs/versions.html'>2.0 ▼</a>
</div>
<div role="search">
<form id="rtd-search-form" class="wy-form" action="search.html" method="get">
<input type="text" name="q" placeholder="Search Docs" />
<input type="hidden" name="check_keywords" value="yes" />
<input type="hidden" name="area" value="default" />
</form>
</div>
</div>
<p class="caption" role="heading"><span class="caption-text">Community</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="community/build_ci_governance.html">PyTorch Governance | Build + CI</a></li>
<li class="toctree-l1"><a class="reference internal" href="community/contribution_guide.html">PyTorch Contribution Guide</a></li>
<li class="toctree-l1"><a class="reference internal" href="community/design.html">PyTorch Design Philosophy</a></li>
<li class="toctree-l1"><a class="reference internal" href="community/governance.html">PyTorch Governance | Mechanics</a></li>
<li class="toctree-l1"><a class="reference internal" href="community/persons_of_interest.html">PyTorch Governance | Maintainers</a></li>
</ul>
<p class="caption" role="heading"><span class="caption-text">Developer Notes</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="notes/amp_examples.html">CUDA Automatic Mixed Precision examples</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/autograd.html">Autograd mechanics</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/broadcasting.html">Broadcasting semantics</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/cpu_threading_torchscript_inference.html">CPU threading and TorchScript inference</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/cuda.html">CUDA semantics</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/ddp.html">Distributed Data Parallel</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/extending.html">Extending PyTorch</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/extending.func.html">Extending torch.func with autograd.Function</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/faq.html">Frequently Asked Questions</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/gradcheck.html">Gradcheck mechanics</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/hip.html">HIP (ROCm) semantics</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/large_scale_deployments.html">Features for large-scale deployments</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/modules.html">Modules</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/mps.html">MPS backend</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/multiprocessing.html">Multiprocessing best practices</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/numerical_accuracy.html">Numerical accuracy</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/randomness.html">Reproducibility</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/serialization.html">Serialization semantics</a></li>
<li class="toctree-l1"><a class="reference internal" href="notes/windows.html">Windows FAQ</a></li>
</ul>
<p class="caption" role="heading"><span class="caption-text">torch.compile</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="dynamo/index.html">TorchDynamo Overview</a></li>
<li class="toctree-l1"><a class="reference internal" href="dynamo/installation.html">Installing TorchDynamo</a></li>
<li class="toctree-l1"><a class="reference internal" href="dynamo/get-started.html">Getting Started</a></li>
<li class="toctree-l1"><a class="reference internal" href="dynamo/guards-overview.html">Guards Overview</a></li>
<li class="toctree-l1"><a class="reference internal" href="dynamo/custom-backends.html">Custom Backends</a></li>
<li class="toctree-l1"><a class="reference internal" href="dynamo/deep-dive.html">TorchDynamo Deeper Dive</a></li>
<li class="toctree-l1"><a class="reference internal" href="dynamo/troubleshooting.html">TorchDynamo Troubleshooting</a></li>
<li class="toctree-l1"><a class="reference internal" href="dynamo/faq.html">Frequently Asked Questions</a></li>
<li class="toctree-l1"><a class="reference internal" href="ir.html">IRs</a></li>
</ul>
<p class="caption" role="heading"><span class="caption-text">Language Bindings</span></p>
<ul>
<li class="toctree-l1"><a class="reference internal" href="cpp_index.html">C++</a></li>
<li class="toctree-l1"><a class="reference external" href="https://pytorch.org/javadoc/">Javadoc</a></li>
<li class="toctree-l1"><a class="reference internal" href="deploy.html">torch::deploy</a></li>
</ul>
<p class="caption" role="heading"><span class="caption-text">Python API</span></p>
<ul class="current">
<li class="toctree-l1"><a class="reference internal" href="torch.html">torch</a></li>
<li class="toctree-l1"><a class="reference internal" href="nn.html">torch.nn</a></li>
<li class="toctree-l1"><a class="reference internal" href="nn.functional.html">torch.nn.functional</a></li>
<li class="toctree-l1"><a class="reference internal" href="tensors.html">torch.Tensor</a></li>
<li class="toctree-l1"><a class="reference internal" href="tensor_attributes.html">Tensor Attributes</a></li>
<li class="toctree-l1"><a class="reference internal" href="tensor_view.html">Tensor Views</a></li>
<li class="toctree-l1"><a class="reference internal" href="amp.html">torch.amp</a></li>
<li class="toctree-l1"><a class="reference internal" href="autograd.html">torch.autograd</a></li>
<li class="toctree-l1"><a class="reference internal" href="library.html">torch.library</a></li>
<li class="toctree-l1"><a class="reference internal" href="cuda.html">torch.cuda</a></li>
<li class="toctree-l1"><a class="reference internal" href="mps.html">torch.mps</a></li>
<li class="toctree-l1"><a class="reference internal" href="backends.html">torch.backends</a></li>
<li class="toctree-l1"><a class="reference internal" href="distributed.html">torch.distributed</a></li>
<li class="toctree-l1"><a class="reference internal" href="distributed.algorithms.join.html">torch.distributed.algorithms.join</a></li>
<li class="toctree-l1"><a class="reference internal" href="distributed.elastic.html">torch.distributed.elastic</a></li>
<li class="toctree-l1"><a class="reference internal" href="fsdp.html">torch.distributed.fsdp</a></li>
<li class="toctree-l1"><a class="reference internal" href="distributed.optim.html">torch.distributed.optim</a></li>
<li class="toctree-l1"><a class="reference internal" href="distributed.tensor.parallel.html">torch.distributed.tensor.parallel</a></li>
<li class="toctree-l1"><a class="reference internal" href="distributed.checkpoint.html">torch.distributed.checkpoint</a></li>
<li class="toctree-l1"><a class="reference internal" href="distributions.html">torch.distributions</a></li>
<li class="toctree-l1"><a class="reference internal" href="_dynamo.html">torch._dynamo</a></li>
<li class="toctree-l1"><a class="reference internal" href="fft.html">torch.fft</a></li>
<li class="toctree-l1"><a class="reference internal" href="func.html">torch.func</a></li>
<li class="toctree-l1"><a class="reference internal" href="futures.html">torch.futures</a></li>
<li class="toctree-l1"><a class="reference internal" href="fx.html">torch.fx</a></li>
<li class="toctree-l1"><a class="reference internal" href="hub.html">torch.hub</a></li>
<li class="toctree-l1"><a class="reference internal" href="jit.html">torch.jit</a></li>
<li class="toctree-l1"><a class="reference internal" href="linalg.html">torch.linalg</a></li>
<li class="toctree-l1"><a class="reference internal" href="monitor.html">torch.monitor</a></li>
<li class="toctree-l1"><a class="reference internal" href="signal.html">torch.signal</a></li>
<li class="toctree-l1"><a class="reference internal" href="special.html">torch.special</a></li>
<li class="toctree-l1"><a class="reference internal" href="torch.overrides.html">torch.overrides</a></li>
<li class="toctree-l1"><a class="reference internal" href="package.html">torch.package</a></li>
<li class="toctree-l1"><a class="reference internal" href="profiler.html">torch.profiler</a></li>
<li class="toctree-l1"><a class="reference internal" href="nn.init.html">torch.nn.init</a></li>
<li class="toctree-l1"><a class="reference internal" href="onnx.html">torch.onnx</a></li>
<li class="toctree-l1"><a class="reference internal" href="onnx_diagnostics.html">torch.onnx diagnostics</a></li>
<li class="toctree-l1"><a class="reference internal" href="optim.html">torch.optim</a></li>
<li class="toctree-l1"><a class="reference internal" href="complex_numbers.html">Complex Numbers</a></li>
<li class="toctree-l1"><a class="reference internal" href="ddp_comm_hooks.html">DDP Communication Hooks</a></li>
<li class="toctree-l1"><a class="reference internal" href="pipeline.html">Pipeline Parallelism</a></li>
<li class="toctree-l1"><a class="reference internal" href="quantization.html">Quantization</a></li>
<li class="toctree-l1 current"><a class="current reference internal" href="#">Distributed RPC Framework</a></li>
<li class="toctree-l1"><a class="reference internal" href="random.html">torch.random</a></li>
<li class="toctree-l1"><a class="reference internal" href="masked.html">torch.masked</a></li>
<li class="toctree-l1"><a class="reference internal" href="nested.html">torch.nested</a></li>
<li class="toctree-l1"><a class="reference internal" href="sparse.html">torch.sparse</a></li>
<li class="toctree-l1"><a class="reference internal" href="storage.html">torch.Storage</a></li>
<li class="toctree-l1"><a class="reference internal" href="testing.html">torch.testing</a></li>
<li class="toctree-l1"><a class="reference internal" href="benchmark_utils.html">torch.utils.benchmark</a></li>
<li class="toctree-l1"><a class="reference internal" href="bottleneck.html">torch.utils.bottleneck</a></li>
<li class="toctree-l1"><a class="reference internal" href="checkpoint.html">torch.utils.checkpoint</a></li>
<li class="toctree-l1"><a class="reference internal" href="cpp_extension.html">torch.utils.cpp_extension</a></li>
<li class="toctree-l1"><a class="reference internal" href="data.html">torch.utils.data</a></li>
<li class="toctree-l1"><a class="reference internal" href="jit_utils.html">torch.utils.jit</a></li>
<li class="toctree-l1"><a class="reference internal" href="dlpack.html">torch.utils.dlpack</a></li>
<li class="toctree-l1"><a class="reference internal" href="mobile_optimizer.html">torch.utils.mobile_optimizer</a></li>
<li class="toctree-l1"><a class="reference internal" href="model_zoo.html">torch.utils.model_zoo</a></li>
<li class="toctree-l1"><a class="reference internal" href="tensorboard.html">torch.utils.tensorboard</a></li>
<li class="toctree-l1"><a class="reference internal" href="type_info.html">Type Info</a></li>
<li class="toctree-l1"><a class="reference internal" href="named_tensor.html">Named Tensors</a></li>
<li class="toctree-l1"><a class="reference internal" href="name_inference.html">Named Tensors operator coverage</a></li>
<li class="toctree-l1"><a class="reference internal" href="config_mod.html">torch.__config__</a></li>
</ul>
<p class="caption" role="heading"><span class="caption-text">Libraries</span></p>
<ul>
<li class="toctree-l1"><a class="reference external" href="https://pytorch.org/audio/stable">torchaudio</a></li>
<li class="toctree-l1"><a class="reference external" href="https://pytorch.org/data">TorchData</a></li>
<li class="toctree-l1"><a class="reference external" href="https://pytorch.org/torchrec">TorchRec</a></li>
<li class="toctree-l1"><a class="reference external" href="https://pytorch.org/serve">TorchServe</a></li>
<li class="toctree-l1"><a class="reference external" href="https://pytorch.org/text/stable">torchtext</a></li>
<li class="toctree-l1"><a class="reference external" href="https://pytorch.org/vision/stable">torchvision</a></li>
<li class="toctree-l1"><a class="reference external" href="https://pytorch.org/xla/">PyTorch on XLA Devices</a></li>
</ul>
</div>
</div>
</nav>
<div class="pytorch-container">
<div class="pytorch-page-level-bar" id="pytorch-page-level-bar">
<div class="pytorch-breadcrumbs-wrapper">
<div role="navigation" aria-label="breadcrumbs navigation">
<ul class="pytorch-breadcrumbs">
<li>
<a href="index.html">
Docs
</a> >
</li>
<li>Distributed RPC Framework</li>
<li class="pytorch-breadcrumbs-aside">
<a href="_sources/rpc.rst.txt" rel="nofollow"><img src="_static/images/view-page-source-icon.svg"></a>
</li>
</ul>
</div>
</div>
<div class="pytorch-shortcuts-wrapper" id="pytorch-shortcuts-wrapper">
Shortcuts
</div>
</div>
<section data-toggle="wy-nav-shift" id="pytorch-content-wrap" class="pytorch-content-wrap">
<div class="pytorch-content-left">
<div class="rst-content">
<div role="main" class="main-content" itemscope="itemscope" itemtype="http://schema.org/Article">
<article itemprop="articleBody" id="pytorch-article" class="pytorch-article">
<section id="distributed-rpc-framework">
<span id="id1"></span><h1>Distributed RPC Framework<a class="headerlink" href="#distributed-rpc-framework" title="Permalink to this heading">¶</a></h1>
<p>The distributed RPC framework provides mechanisms for multi-machine model
training through a set of primitives that allow remote communication, and a
higher-level API to automatically differentiate models split across several
machines.</p>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>APIs in the RPC package are stable. There are multiple ongoing work items
to improve performance and error handling, which will ship in future releases.</p>
</div>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>CUDA support was introduced in PyTorch 1.9 and is still a <strong>beta</strong> feature.
Not all features of the RPC package are yet compatible with CUDA, and using
them with CUDA tensors is therefore discouraged. These unsupported features
include: RRefs, JIT compatibility, dist autograd and dist optimizer, and
profiling. These shortcomings will be addressed in future releases.</p>
</div>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>Please refer to <a class="reference external" href="https://pytorch.org/tutorials/beginner/dist_overview.html">PyTorch Distributed Overview</a>
for a brief introduction to all features related to distributed training.</p>
</div>
<section id="basics">
<h2>Basics<a class="headerlink" href="#basics" title="Permalink to this heading">¶</a></h2>
<p>The distributed RPC framework makes it easy to run functions remotely, supports
referencing remote objects without copying the real data around, and provides
autograd and optimizer APIs to transparently run backward and update parameters
across RPC boundaries. These features can be categorized into four sets of APIs.</p>
<ol class="arabic simple">
<li><p><strong>Remote Procedure Call (RPC)</strong> supports running a function on the specified
destination worker with the given arguments and getting the return value back
or creating a reference to the return value. There are three main RPC APIs:
<a class="reference internal" href="#torch.distributed.rpc.rpc_sync" title="torch.distributed.rpc.rpc_sync"><code class="xref py py-meth docutils literal notranslate"><span class="pre">rpc_sync()</span></code></a> (synchronous),
<a class="reference internal" href="#torch.distributed.rpc.rpc_async" title="torch.distributed.rpc.rpc_async"><code class="xref py py-meth docutils literal notranslate"><span class="pre">rpc_async()</span></code></a> (asynchronous), and
<a class="reference internal" href="#torch.distributed.rpc.remote" title="torch.distributed.rpc.remote"><code class="xref py py-meth docutils literal notranslate"><span class="pre">remote()</span></code></a> (asynchronous and returns a reference
to the remote return value). Use the synchronous API if the user code cannot
proceed without the return value. Otherwise, use the asynchronous API to get
a future, and wait on the future when the return value is needed on the
caller. The <a class="reference internal" href="#torch.distributed.rpc.remote" title="torch.distributed.rpc.remote"><code class="xref py py-meth docutils literal notranslate"><span class="pre">remote()</span></code></a> API is useful when the
caller needs to create something remotely but never needs to fetch it back.
Imagine a case in which a driver process is setting up a parameter
server and a trainer. The driver can create an embedding table on the
parameter server and then share a reference to the embedding table with the
trainer, but will itself never use the embedding table locally. In this case,
<a class="reference internal" href="#torch.distributed.rpc.rpc_sync" title="torch.distributed.rpc.rpc_sync"><code class="xref py py-meth docutils literal notranslate"><span class="pre">rpc_sync()</span></code></a> and
<a class="reference internal" href="#torch.distributed.rpc.rpc_async" title="torch.distributed.rpc.rpc_async"><code class="xref py py-meth docutils literal notranslate"><span class="pre">rpc_async()</span></code></a> are no longer appropriate, as they
always imply that the return value will be delivered to the caller, either
immediately or in the future.</p></li>
<li><p><strong>Remote Reference (RRef)</strong> serves as a distributed shared pointer to a local
or remote object. It can be shared with other workers and reference counting
will be handled transparently. Each RRef only has one owner and the object
only lives on that owner. Non-owner workers holding RRefs can get copies of
the object from the owner by explicitly requesting it. This is useful when
a worker needs to access some data object but is neither its creator
(the caller of <a class="reference internal" href="#torch.distributed.rpc.remote" title="torch.distributed.rpc.remote"><code class="xref py py-meth docutils literal notranslate"><span class="pre">remote()</span></code></a>) nor its owner. The
distributed optimizer, as we will discuss below, is one example
of such a use case.</p></li>
<li><p><strong>Distributed Autograd</strong> stitches together the local autograd engines
on all workers involved in the forward pass, and automatically reaches out to
them during the backward pass to compute gradients. This is especially helpful
when the forward pass spans multiple machines, e.g., in distributed model
parallel training or parameter-server training. With this feature, user code no
longer needs to worry about how to send gradients across RPC boundaries or in
which order the local autograd engines should be launched, which can become
quite complicated when there are nested and inter-dependent RPC calls in the
forward pass.</p></li>
<li><p><strong>Distributed Optimizer</strong>’s constructor takes a
<a class="reference internal" href="optim.html#torch.optim.Optimizer" title="torch.optim.Optimizer"><code class="xref py py-meth docutils literal notranslate"><span class="pre">Optimizer()</span></code></a> (e.g., <a class="reference internal" href="generated/torch.optim.SGD.html#torch.optim.SGD" title="torch.optim.SGD"><code class="xref py py-meth docutils literal notranslate"><span class="pre">SGD()</span></code></a>,
<a class="reference internal" href="generated/torch.optim.Adagrad.html#torch.optim.Adagrad" title="torch.optim.Adagrad"><code class="xref py py-meth docutils literal notranslate"><span class="pre">Adagrad()</span></code></a>, etc.) and a list of parameter RRefs, creates an
<a class="reference internal" href="optim.html#torch.optim.Optimizer" title="torch.optim.Optimizer"><code class="xref py py-meth docutils literal notranslate"><span class="pre">Optimizer()</span></code></a> instance on each distinct RRef owner, and
updates parameters accordingly when running <code class="docutils literal notranslate"><span class="pre">step()</span></code>. With
distributed forward and backward passes, parameters and gradients are
scattered across multiple workers, so an optimizer is needed on each
of the involved workers. The Distributed Optimizer wraps all those local
optimizers into one, providing a concise constructor and <code class="docutils literal notranslate"><span class="pre">step()</span></code> API.</p></li>
</ol>
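<p>The RRef semantics described above can be sketched in plain Python: one owner holds the value, and other holders must explicitly fetch a copy. The <code class="docutils literal notranslate"><span class="pre">MiniRRef</span></code> class below is a hypothetical, torch-free stand-in used only to illustrate the idea; it is not the real <code class="docutils literal notranslate"><span class="pre">RRef</span></code> implementation.</p>

```python
import copy

# Hedged, torch-free sketch of the RRef idea: the value lives only on its
# owner; non-owner holders must explicitly request a copy. All names here
# are illustrative, not part of the torch.distributed.rpc API.
class MiniRRef:
    def __init__(self, owner: str, value):
        self._owner = owner
        self._value = value  # lives only on the owner

    def owner(self) -> str:
        return self._owner

    def to_here(self):
        # a non-owner worker gets a copy by explicitly requesting it
        return copy.deepcopy(self._value)

ref = MiniRRef("worker1", [1, 2, 3])
local_copy = ref.to_here()
local_copy.append(4)          # mutating the copy ...
print(ref.to_here())          # ... leaves the owner's value unchanged: [1, 2, 3]
```

Because <code class="docutils literal notranslate"><span class="pre">to_here()</span></code> returns a copy, mutations on a non-owner never leak back to the owner, mirroring how real RRef users fetch values from the owning worker.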
</section>
<section id="rpc">
<span id="id2"></span><h2>RPC<a class="headerlink" href="#rpc" title="Permalink to this heading">¶</a></h2>
<p>Before using RPC and distributed autograd primitives, initialization must take
place. To do so, call
<a class="reference internal" href="#torch.distributed.rpc.init_rpc" title="torch.distributed.rpc.init_rpc"><code class="xref py py-meth docutils literal notranslate"><span class="pre">init_rpc()</span></code></a>, which initializes the RPC
framework, the RRef framework, and distributed autograd.</p>
<span class="target" id="module-torch.distributed.rpc"></span><dl class="py function">
<dt class="sig sig-object py" id="torch.distributed.rpc.init_rpc">
<span class="sig-prename descclassname"><span class="pre">torch.distributed.rpc.</span></span><span class="sig-name descname"><span class="pre">init_rpc</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">name</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">backend</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">rank</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">-</span> <span class="pre">1</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">world_size</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">rpc_backend_options</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/torch/distributed/rpc.html#init_rpc"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.distributed.rpc.init_rpc" title="Permalink to this definition">¶</a></dt>
<dd><p>Initializes RPC primitives such as the local RPC agent
and distributed autograd, which immediately makes the current
process ready to send and receive RPCs.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters<span class="colon">:</span></dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>name</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.11)"><em>str</em></a>) – a globally unique name for this node (e.g.,
<code class="docutils literal notranslate"><span class="pre">Trainer3</span></code>, <code class="docutils literal notranslate"><span class="pre">ParameterServer2</span></code>, <code class="docutils literal notranslate"><span class="pre">Master</span></code>, <code class="docutils literal notranslate"><span class="pre">Worker1</span></code>).
The name may only contain letters, digits, underscores, colons,
and dashes, and must be shorter than 128 characters.</p></li>
<li><p><strong>backend</strong> (<a class="reference internal" href="#torch.distributed.rpc.BackendType" title="torch.distributed.rpc.BackendType"><em>BackendType</em></a><em>, </em><em>optional</em>) – The type of RPC backend
implementation. The supported value is
<code class="docutils literal notranslate"><span class="pre">BackendType.TENSORPIPE</span></code> (the default).
See <a class="reference internal" href="#rpc-backends"><span class="std std-ref">Backends</span></a> for more information.</p></li>
<li><p><strong>rank</strong> (<a class="reference external" href="https://docs.python.org/3/library/functions.html#int" title="(in Python v3.11)"><em>int</em></a>) – a globally unique id/rank of this node.</p></li>
<li><p><strong>world_size</strong> (<a class="reference external" href="https://docs.python.org/3/library/functions.html#int" title="(in Python v3.11)"><em>int</em></a>) – The number of workers in the group.</p></li>
<li><p><strong>rpc_backend_options</strong> (<a class="reference internal" href="#torch.distributed.rpc.RpcBackendOptions" title="torch.distributed.rpc.RpcBackendOptions"><em>RpcBackendOptions</em></a><em>, </em><em>optional</em>) – The options
passed to the RpcAgent constructor. It must be an agent-specific
subclass of <a class="reference internal" href="#torch.distributed.rpc.RpcBackendOptions" title="torch.distributed.rpc.RpcBackendOptions"><code class="xref py py-class docutils literal notranslate"><span class="pre">RpcBackendOptions</span></code></a>
and contains agent-specific initialization configurations. By
default, for all agents, it sets the default timeout to 60
seconds and performs the rendezvous with an underlying process
group initialized using <code class="docutils literal notranslate"><span class="pre">init_method</span> <span class="pre">=</span> <span class="pre">"env://"</span></code>,
meaning that environment variables <code class="docutils literal notranslate"><span class="pre">MASTER_ADDR</span></code> and
<code class="docutils literal notranslate"><span class="pre">MASTER_PORT</span></code> need to be set properly. See
<a class="reference internal" href="#rpc-backends"><span class="std std-ref">Backends</span></a> for more information and find which options
are available.</p></li>
</ul>
</dd>
</dl>
</dd></dl>
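<p>As a quick illustration of the <code class="docutils literal notranslate"><span class="pre">name</span></code> constraint documented above (letters, digits, underscores, colons, and dashes only; shorter than 128 characters), the following self-contained sketch validates candidate worker names. The helper is hypothetical and not part of <code class="docutils literal notranslate"><span class="pre">torch.distributed.rpc</span></code>.</p>

```python
import re

# Hedged sketch: check a worker name against the documented rules for
# init_rpc(name=...). The helper name is our own, not part of the API.
_NAME_RE = re.compile(r"^[A-Za-z0-9_:\-]+$")

def is_valid_worker_name(name: str) -> bool:
    # only letters, digits, underscore, colon, dash; length < 128
    return bool(_NAME_RE.match(name)) and len(name) < 128

print(is_valid_worker_name("Trainer3"))    # True
print(is_valid_worker_name("bad name!"))   # False: contains a space and '!'
```

Validating names up front like this gives a clearer error than letting <code class="docutils literal notranslate"><span class="pre">init_rpc()</span></code> reject them at rendezvous time.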
<p>The following APIs allow users to remotely execute functions as well as create
references (RRefs) to remote data objects. In these APIs, when passing a
<code class="docutils literal notranslate"><span class="pre">Tensor</span></code> as an argument or a return value, the destination worker will try to
create a <code class="docutils literal notranslate"><span class="pre">Tensor</span></code> with the same meta (i.e., shape, stride, etc.). We
intentionally disallow transmitting CUDA tensors because it might crash if the
device lists on source and destination workers do not match. In such cases,
applications can always explicitly move the input tensors to CPU on the caller
and move them to the desired devices on the callee if necessary.</p>
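<p>The move-to-CPU pattern above can be sketched without torch by duck typing on a <code class="docutils literal notranslate"><span class="pre">.cpu()</span></code> method: anything tensor-like is moved to CPU before being passed as an RPC argument. The helper and the stand-in class are hypothetical, for illustration only.</p>

```python
# Hedged, torch-free sketch: before sending RPC arguments, move anything
# exposing a .cpu() method (e.g. a tensor) to CPU. The helper name is
# illustrative, not part of the torch.distributed.rpc API.
def cpu_safe_args(args):
    return tuple(a.cpu() if hasattr(a, "cpu") else a for a in args)

class FakeGpuTensor:
    """Stand-in for a CUDA tensor in this sketch."""
    def __init__(self, data, device="cuda:0"):
        self.data, self.device = data, device

    def cpu(self):
        return FakeGpuTensor(self.data, device="cpu")

args = cpu_safe_args((FakeGpuTensor([1.0, 2.0]), 3))
print(args[0].device, args[1])  # cpu 3
```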
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>TorchScript support in RPC is a prototype feature and subject to change. Since
v1.5.0, <code class="docutils literal notranslate"><span class="pre">torch.distributed.rpc</span></code> supports calling TorchScript functions as
RPC target functions, and this will help improve parallelism on the callee
side as executing TorchScript functions does not require GIL.</p>
</div>
<dl class="py function">
<dt class="sig sig-object py" id="torch.distributed.rpc.rpc_sync">
<span class="sig-prename descclassname"><span class="pre">torch.distributed.rpc.</span></span><span class="sig-name descname"><span class="pre">rpc_sync</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">to</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">func</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">args</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">kwargs</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">timeout</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">-</span> <span class="pre">1.0</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/torch/distributed/rpc/api.html#rpc_sync"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.distributed.rpc.rpc_sync" title="Permalink to this definition">¶</a></dt>
<dd><p>Make a blocking RPC call to run function <code class="docutils literal notranslate"><span class="pre">func</span></code> on worker <code class="docutils literal notranslate"><span class="pre">to</span></code>. RPC
messages are sent and received in parallel to execution of Python code. This
method is thread-safe.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters<span class="colon">:</span></dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>to</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.11)"><em>str</em></a><em> or </em><a class="reference internal" href="#torch.distributed.rpc.WorkerInfo" title="torch.distributed.rpc.WorkerInfo"><em>WorkerInfo</em></a><em> or </em><a class="reference external" href="https://docs.python.org/3/library/functions.html#int" title="(in Python v3.11)"><em>int</em></a>) – name/rank/<code class="docutils literal notranslate"><span class="pre">WorkerInfo</span></code> of the destination worker.</p></li>
<li><p><strong>func</strong> (<em>Callable</em>) – a callable, such as a Python function, a builtin
operator (e.g. <a class="reference internal" href="generated/torch.add.html#torch.add" title="torch.add"><code class="xref py py-meth docutils literal notranslate"><span class="pre">add()</span></code></a>), or an annotated
TorchScript function.</p></li>
<li><p><strong>args</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#tuple" title="(in Python v3.11)"><em>tuple</em></a>) – the argument tuple for the <code class="docutils literal notranslate"><span class="pre">func</span></code> invocation.</p></li>
<li><p><strong>kwargs</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#dict" title="(in Python v3.11)"><em>dict</em></a>) – a dictionary of keyword arguments for the <code class="docutils literal notranslate"><span class="pre">func</span></code>
invocation.</p></li>
<li><p><strong>timeout</strong> (<a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.11)"><em>float</em></a><em>, </em><em>optional</em>) – timeout in seconds to use for this RPC. If
the RPC does not complete in this amount of
time, an exception indicating it has
timed out will be raised. A value of 0
indicates an infinite timeout, i.e. a timeout
error will never be raised. If not provided,
the default value set during initialization
or with <code class="docutils literal notranslate"><span class="pre">_set_rpc_timeout</span></code> is used.</p></li>
</ul>
</dd>
<dt class="field-even">Returns<span class="colon">:</span></dt>
<dd class="field-even"><p>Returns the result of running <code class="docutils literal notranslate"><span class="pre">func</span></code> with <code class="docutils literal notranslate"><span class="pre">args</span></code> and <code class="docutils literal notranslate"><span class="pre">kwargs</span></code>.</p>
</dd>
</dl>
<dl>
<dt>Example::</dt><dd><p>Make sure that <code class="docutils literal notranslate"><span class="pre">MASTER_ADDR</span></code> and <code class="docutils literal notranslate"><span class="pre">MASTER_PORT</span></code> are set properly
on both workers. Refer to <a class="reference internal" href="distributed.html#torch.distributed.init_process_group" title="torch.distributed.init_process_group"><code class="xref py py-meth docutils literal notranslate"><span class="pre">init_process_group()</span></code></a>
API for more details. For example,</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>export MASTER_ADDR=localhost
export MASTER_PORT=5678
</pre></div>
</div>
<p>Then run the following code in two different processes:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="c1"># On worker 0:</span>
<span class="gp">>>> </span><span class="kn">import</span> <span class="nn">torch</span>
<span class="gp">>>> </span><span class="kn">import</span> <span class="nn">torch.distributed.rpc</span> <span class="k">as</span> <span class="nn">rpc</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">init_rpc</span><span class="p">(</span><span class="s2">"worker0"</span><span class="p">,</span> <span class="n">rank</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">world_size</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>
<span class="gp">>>> </span><span class="n">ret</span> <span class="o">=</span> <span class="n">rpc</span><span class="o">.</span><span class="n">rpc_sync</span><span class="p">(</span><span class="s2">"worker1"</span><span class="p">,</span> <span class="n">torch</span><span class="o">.</span><span class="n">add</span><span class="p">,</span> <span class="n">args</span><span class="o">=</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">ones</span><span class="p">(</span><span class="mi">2</span><span class="p">),</span> <span class="mi">3</span><span class="p">))</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">shutdown</span><span class="p">()</span>
</pre></div>
</div>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="c1"># On worker 1:</span>
<span class="gp">>>> </span><span class="kn">import</span> <span class="nn">torch.distributed.rpc</span> <span class="k">as</span> <span class="nn">rpc</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">init_rpc</span><span class="p">(</span><span class="s2">"worker1"</span><span class="p">,</span> <span class="n">rank</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">world_size</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">shutdown</span><span class="p">()</span>
</pre></div>
</div>
<p>Below is an example of running a TorchScript function using RPC.</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="c1"># On both workers:</span>
<span class="gp">>>> </span><span class="nd">@torch</span><span class="o">.</span><span class="n">jit</span><span class="o">.</span><span class="n">script</span>
<span class="gp">>>> </span><span class="k">def</span> <span class="nf">my_script_add</span><span class="p">(</span><span class="n">t1</span><span class="p">,</span> <span class="n">t2</span><span class="p">):</span>
<span class="gp">>>> </span> <span class="k">return</span> <span class="n">torch</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">t1</span><span class="p">,</span> <span class="n">t2</span><span class="p">)</span>
</pre></div>
</div>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="c1"># On worker 0:</span>
<span class="gp">>>> </span><span class="kn">import</span> <span class="nn">torch.distributed.rpc</span> <span class="k">as</span> <span class="nn">rpc</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">init_rpc</span><span class="p">(</span><span class="s2">"worker0"</span><span class="p">,</span> <span class="n">rank</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">world_size</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>
<span class="gp">>>> </span><span class="n">ret</span> <span class="o">=</span> <span class="n">rpc</span><span class="o">.</span><span class="n">rpc_sync</span><span class="p">(</span><span class="s2">"worker1"</span><span class="p">,</span> <span class="n">my_script_add</span><span class="p">,</span> <span class="n">args</span><span class="o">=</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">ones</span><span class="p">(</span><span class="mi">2</span><span class="p">),</span> <span class="mi">3</span><span class="p">))</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">shutdown</span><span class="p">()</span>
</pre></div>
</div>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="c1"># On worker 1:</span>
<span class="gp">>>> </span><span class="kn">import</span> <span class="nn">torch.distributed.rpc</span> <span class="k">as</span> <span class="nn">rpc</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">init_rpc</span><span class="p">(</span><span class="s2">"worker1"</span><span class="p">,</span> <span class="n">rank</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">world_size</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">shutdown</span><span class="p">()</span>
</pre></div>
</div>
</dd>
</dl>
</dd></dl>
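<p>The <code class="docutils literal notranslate"><span class="pre">timeout</span></code> semantics documented above (0 means infinite, the negative sentinel default means "use the timeout configured at initialization") can be sketched as a small resolver. The helper and the 60-second default are illustrative assumptions, not the real implementation.</p>

```python
from typing import Optional

# Hedged sketch of the documented timeout semantics for rpc_sync/rpc_async.
# The helper and the 60-second default are assumptions for illustration.
_DEFAULT_RPC_TIMEOUT = 60.0  # assumed default set at init_rpc time

def resolve_timeout(timeout: float) -> Optional[float]:
    if timeout == 0:
        return None                  # infinite: never raise a timeout error
    if timeout < 0:
        return _DEFAULT_RPC_TIMEOUT  # sentinel: use the configured default
    return timeout                   # explicit per-call timeout

print(resolve_timeout(-1.0))  # 60.0
print(resolve_timeout(0))     # None
```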
<dl class="py function">
<dt class="sig sig-object py" id="torch.distributed.rpc.rpc_async">
<span class="sig-prename descclassname"><span class="pre">torch.distributed.rpc.</span></span><span class="sig-name descname"><span class="pre">rpc_async</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">to</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">func</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">args</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">kwargs</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">timeout</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">-</span> <span class="pre">1.0</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/torch/distributed/rpc/api.html#rpc_async"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.distributed.rpc.rpc_async" title="Permalink to this definition">¶</a></dt>
<dd><p>Make a non-blocking RPC call to run function <code class="docutils literal notranslate"><span class="pre">func</span></code> on worker <code class="docutils literal notranslate"><span class="pre">to</span></code>. RPC
messages are sent and received in parallel to execution of Python code. This
method is thread-safe. This method will immediately return a
<a class="reference internal" href="futures.html#torch.futures.Future" title="torch.futures.Future"><code class="xref py py-class docutils literal notranslate"><span class="pre">Future</span></code></a> that can be awaited on.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters<span class="colon">:</span></dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>to</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.11)"><em>str</em></a><em> or </em><a class="reference internal" href="#torch.distributed.rpc.WorkerInfo" title="torch.distributed.rpc.WorkerInfo"><em>WorkerInfo</em></a><em> or </em><a class="reference external" href="https://docs.python.org/3/library/functions.html#int" title="(in Python v3.11)"><em>int</em></a>) – name/rank/<code class="docutils literal notranslate"><span class="pre">WorkerInfo</span></code> of the destination worker.</p></li>
<li><p><strong>func</strong> (<em>Callable</em>) – a callable, such as a Python function, a builtin
operator (e.g. <a class="reference internal" href="generated/torch.add.html#torch.add" title="torch.add"><code class="xref py py-meth docutils literal notranslate"><span class="pre">add()</span></code></a>), or an annotated
TorchScript function.</p></li>
<li><p><strong>args</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#tuple" title="(in Python v3.11)"><em>tuple</em></a>) – the argument tuple for the <code class="docutils literal notranslate"><span class="pre">func</span></code> invocation.</p></li>
<li><p><strong>kwargs</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#dict" title="(in Python v3.11)"><em>dict</em></a>) – a dictionary of keyword arguments for the <code class="docutils literal notranslate"><span class="pre">func</span></code>
invocation.</p></li>
<li><p><strong>timeout</strong> (<a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.11)"><em>float</em></a><em>, </em><em>optional</em>) – timeout in seconds to use for this RPC. If
the RPC does not complete in this amount of
time, an exception indicating it has
timed out will be raised. A value of 0
indicates an infinite timeout, i.e. a timeout
error will never be raised. If not provided,
the default value set during initialization
or with <code class="docutils literal notranslate"><span class="pre">_set_rpc_timeout</span></code> is used.</p></li>
</ul>
</dd>
<dt class="field-even">Returns<span class="colon">:</span></dt>
<dd class="field-even"><p>Returns a <a class="reference internal" href="futures.html#torch.futures.Future" title="torch.futures.Future"><code class="xref py py-class docutils literal notranslate"><span class="pre">Future</span></code></a> object that can be waited
on. When completed, the return value of <code class="docutils literal notranslate"><span class="pre">func</span></code> on <code class="docutils literal notranslate"><span class="pre">args</span></code> and
<code class="docutils literal notranslate"><span class="pre">kwargs</span></code> can be retrieved from the <a class="reference internal" href="futures.html#torch.futures.Future" title="torch.futures.Future"><code class="xref py py-class docutils literal notranslate"><span class="pre">Future</span></code></a>
object.</p>
</dd>
</dl>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>Using GPU tensors as arguments or return values of <code class="docutils literal notranslate"><span class="pre">func</span></code> is not
supported since we don’t support sending GPU tensors over the wire. You
need to explicitly copy GPU tensors to CPU before using them as
arguments or return values of <code class="docutils literal notranslate"><span class="pre">func</span></code>.</p>
</div>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>The <code class="docutils literal notranslate"><span class="pre">rpc_async</span></code> API does not copy storages of argument tensors until
sending them over the wire, which could be done by a different thread
depending on the RPC backend type. The caller should make sure that the
contents of those tensors stay intact until the returned
<a class="reference internal" href="futures.html#torch.futures.Future" title="torch.futures.Future"><code class="xref py py-class docutils literal notranslate"><span class="pre">Future</span></code></a> completes.</p>
</div>
<dl>
<dt>Example::</dt><dd><p>Make sure that <code class="docutils literal notranslate"><span class="pre">MASTER_ADDR</span></code> and <code class="docutils literal notranslate"><span class="pre">MASTER_PORT</span></code> are set properly
on both workers. Refer to <a class="reference internal" href="distributed.html#torch.distributed.init_process_group" title="torch.distributed.init_process_group"><code class="xref py py-meth docutils literal notranslate"><span class="pre">init_process_group()</span></code></a>
API for more details. For example,</p>
<div class="highlight-default notranslate"><div class="highlight"><pre><span></span>export MASTER_ADDR=localhost
export MASTER_PORT=5678
</pre></div>
</div>
<p>Then run the following code in two different processes:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="c1"># On worker 0:</span>
<span class="gp">>>> </span><span class="kn">import</span> <span class="nn">torch</span>
<span class="gp">>>> </span><span class="kn">import</span> <span class="nn">torch.distributed.rpc</span> <span class="k">as</span> <span class="nn">rpc</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">init_rpc</span><span class="p">(</span><span class="s2">"worker0"</span><span class="p">,</span> <span class="n">rank</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">world_size</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>
<span class="gp">>>> </span><span class="n">fut1</span> <span class="o">=</span> <span class="n">rpc</span><span class="o">.</span><span class="n">rpc_async</span><span class="p">(</span><span class="s2">"worker1"</span><span class="p">,</span> <span class="n">torch</span><span class="o">.</span><span class="n">add</span><span class="p">,</span> <span class="n">args</span><span class="o">=</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">ones</span><span class="p">(</span><span class="mi">2</span><span class="p">),</span> <span class="mi">3</span><span class="p">))</span>
<span class="gp">>>> </span><span class="n">fut2</span> <span class="o">=</span> <span class="n">rpc</span><span class="o">.</span><span class="n">rpc_async</span><span class="p">(</span><span class="s2">"worker1"</span><span class="p">,</span> <span class="nb">min</span><span class="p">,</span> <span class="n">args</span><span class="o">=</span><span class="p">(</span><span class="mi">1</span><span class="p">,</span> <span class="mi">2</span><span class="p">))</span>
<span class="gp">>>> </span><span class="n">result</span> <span class="o">=</span> <span class="n">fut1</span><span class="o">.</span><span class="n">wait</span><span class="p">()</span> <span class="o">+</span> <span class="n">fut2</span><span class="o">.</span><span class="n">wait</span><span class="p">()</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">shutdown</span><span class="p">()</span>
</pre></div>
</div>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="c1"># On worker 1:</span>
<span class="gp">>>> </span><span class="kn">import</span> <span class="nn">torch.distributed.rpc</span> <span class="k">as</span> <span class="nn">rpc</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">init_rpc</span><span class="p">(</span><span class="s2">"worker1"</span><span class="p">,</span> <span class="n">rank</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">world_size</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">shutdown</span><span class="p">()</span>
</pre></div>
</div>
<p>Below is an example of running a TorchScript function using RPC.</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="c1"># On both workers:</span>
<span class="gp">>>> </span><span class="nd">@torch</span><span class="o">.</span><span class="n">jit</span><span class="o">.</span><span class="n">script</span>
<span class="gp">>>> </span><span class="k">def</span> <span class="nf">my_script_add</span><span class="p">(</span><span class="n">t1</span><span class="p">,</span> <span class="n">t2</span><span class="p">):</span>
<span class="gp">>>> </span> <span class="k">return</span> <span class="n">torch</span><span class="o">.</span><span class="n">add</span><span class="p">(</span><span class="n">t1</span><span class="p">,</span> <span class="n">t2</span><span class="p">)</span>
</pre></div>
</div>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="c1"># On worker 0:</span>
<span class="gp">>>> </span><span class="kn">import</span> <span class="nn">torch.distributed.rpc</span> <span class="k">as</span> <span class="nn">rpc</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">init_rpc</span><span class="p">(</span><span class="s2">"worker0"</span><span class="p">,</span> <span class="n">rank</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">world_size</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>
<span class="gp">>>> </span><span class="n">fut</span> <span class="o">=</span> <span class="n">rpc</span><span class="o">.</span><span class="n">rpc_async</span><span class="p">(</span><span class="s2">"worker1"</span><span class="p">,</span> <span class="n">my_script_add</span><span class="p">,</span> <span class="n">args</span><span class="o">=</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">ones</span><span class="p">(</span><span class="mi">2</span><span class="p">),</span> <span class="mi">3</span><span class="p">))</span>
<span class="gp">>>> </span><span class="n">ret</span> <span class="o">=</span> <span class="n">fut</span><span class="o">.</span><span class="n">wait</span><span class="p">()</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">shutdown</span><span class="p">()</span>
</pre></div>
</div>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="c1"># On worker 1:</span>
<span class="gp">>>> </span><span class="kn">import</span> <span class="nn">torch.distributed.rpc</span> <span class="k">as</span> <span class="nn">rpc</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">init_rpc</span><span class="p">(</span><span class="s2">"worker1"</span><span class="p">,</span> <span class="n">rank</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">world_size</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">shutdown</span><span class="p">()</span>
</pre></div>
</div>
</dd>
</dl>
</dd></dl>
<dl class="py function">
<dt class="sig sig-object py" id="torch.distributed.rpc.remote">
<span class="sig-prename descclassname"><span class="pre">torch.distributed.rpc.</span></span><span class="sig-name descname"><span class="pre">remote</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">to</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">func</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">args</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">kwargs</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">timeout</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">-</span> <span class="pre">1.0</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/torch/distributed/rpc/api.html#remote"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.distributed.rpc.remote" title="Permalink to this definition">¶</a></dt>
<dd><p>Make a remote call to run <code class="docutils literal notranslate"><span class="pre">func</span></code> on worker <code class="docutils literal notranslate"><span class="pre">to</span></code> and return an
<a class="reference internal" href="#torch.distributed.rpc.RRef" title="torch.distributed.rpc.RRef"><code class="xref py py-class docutils literal notranslate"><span class="pre">RRef</span></code></a> to the result value immediately.
Worker <code class="docutils literal notranslate"><span class="pre">to</span></code> will be the owner of the returned
<a class="reference internal" href="#torch.distributed.rpc.RRef" title="torch.distributed.rpc.RRef"><code class="xref py py-class docutils literal notranslate"><span class="pre">RRef</span></code></a>, and the worker calling <code class="docutils literal notranslate"><span class="pre">remote</span></code> is
a user. The owner manages the global reference count of its
<a class="reference internal" href="#torch.distributed.rpc.RRef" title="torch.distributed.rpc.RRef"><code class="xref py py-class docutils literal notranslate"><span class="pre">RRef</span></code></a>, and the owner
<a class="reference internal" href="#torch.distributed.rpc.RRef" title="torch.distributed.rpc.RRef"><code class="xref py py-class docutils literal notranslate"><span class="pre">RRef</span></code></a> is only destructed when globally there
are no living references to it.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters<span class="colon">:</span></dt>
<dd class="field-odd"><ul class="simple">
<li><p><strong>to</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.11)"><em>str</em></a><em> or </em><a class="reference internal" href="#torch.distributed.rpc.WorkerInfo" title="torch.distributed.rpc.WorkerInfo"><em>WorkerInfo</em></a><em> or </em><a class="reference external" href="https://docs.python.org/3/library/functions.html#int" title="(in Python v3.11)"><em>int</em></a>) – name/rank/<code class="docutils literal notranslate"><span class="pre">WorkerInfo</span></code> of the destination worker.</p></li>
<li><p><strong>func</strong> (<em>Callable</em>) – a callable function, such as Python callables, builtin
operators (e.g. <a class="reference internal" href="generated/torch.add.html#torch.add" title="torch.add"><code class="xref py py-meth docutils literal notranslate"><span class="pre">add()</span></code></a>) and annotated
TorchScript functions.</p></li>
<li><p><strong>args</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#tuple" title="(in Python v3.11)"><em>tuple</em></a>) – the argument tuple for the <code class="docutils literal notranslate"><span class="pre">func</span></code> invocation.</p></li>
<li><p><strong>kwargs</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#dict" title="(in Python v3.11)"><em>dict</em></a>) – the dictionary of keyword arguments for the <code class="docutils literal notranslate"><span class="pre">func</span></code>
invocation.</p></li>
<li><p><strong>timeout</strong> (<a class="reference external" href="https://docs.python.org/3/library/functions.html#float" title="(in Python v3.11)"><em>float</em></a><em>, </em><em>optional</em>) – timeout in seconds for this remote call. If the
creation of this
<a class="reference internal" href="#torch.distributed.rpc.RRef" title="torch.distributed.rpc.RRef"><code class="xref py py-class docutils literal notranslate"><span class="pre">RRef</span></code></a> on worker
<code class="docutils literal notranslate"><span class="pre">to</span></code> is not successfully processed on this
worker within this timeout, then the next time
there is an attempt to use the RRef (such as
<code class="docutils literal notranslate"><span class="pre">to_here()</span></code>), a timeout will be raised
indicating this failure. A value of 0 indicates
an infinite timeout, i.e. a timeout error will
never be raised. If not provided, the default
value set during initialization or with
<code class="docutils literal notranslate"><span class="pre">_set_rpc_timeout</span></code> is used.</p></li>
</ul>
</dd>
<dt class="field-even">Returns<span class="colon">:</span></dt>
<dd class="field-even"><p>A user <a class="reference internal" href="#torch.distributed.rpc.RRef" title="torch.distributed.rpc.RRef"><code class="xref py py-class docutils literal notranslate"><span class="pre">RRef</span></code></a> instance to the result
value. Use the blocking API <code class="xref py py-meth docutils literal notranslate"><span class="pre">torch.distributed.rpc.RRef.to_here()</span></code>
to retrieve the result value locally.</p>
</dd>
</dl>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>The <code class="docutils literal notranslate"><span class="pre">remote</span></code> API does not copy storages of argument tensors until
sending them over the wire, which could be done by a different thread
depending on the RPC backend type. The caller should make sure that the
contents of those tensors stay intact until the returned RRef is
confirmed by the owner, which can be checked using the
<code class="xref py py-meth docutils literal notranslate"><span class="pre">torch.distributed.rpc.RRef.confirmed_by_owner()</span></code> API.</p>
</div>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>Errors such as timeouts for the <code class="docutils literal notranslate"><span class="pre">remote</span></code> API are handled on a
best-effort basis: when a remote call initiated by
<code class="docutils literal notranslate"><span class="pre">remote</span></code> fails, for example with a timeout error, the
error is handled and set on the resulting RRef asynchronously. If the RRef
has not been used by the application before this handling (such as via
<code class="docutils literal notranslate"><span class="pre">to_here</span></code> or a fork call), then future uses of the
<code class="docutils literal notranslate"><span class="pre">RRef</span></code> will appropriately raise errors. However, it is
possible that the application uses the <code class="docutils literal notranslate"><span class="pre">RRef</span></code> before
the error has been handled, in which case the error may not be raised.</p>
</div>
<p>Example:</p>
<p>Make sure that <code class="docutils literal notranslate"><span class="pre">MASTER_ADDR</span></code> and <code class="docutils literal notranslate"><span class="pre">MASTER_PORT</span></code> are set properly
on both workers. Refer to <a class="reference internal" href="distributed.html#torch.distributed.init_process_group" title="torch.distributed.init_process_group"><code class="xref py py-meth docutils literal notranslate"><span class="pre">init_process_group()</span></code></a>
API for more details. For example,</p>
<p>export MASTER_ADDR=localhost
export MASTER_PORT=5678</p>
<p>Then run the following code in two different processes:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span>>>> # On worker 0:
>>> import torch
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker0", rank=0, world_size=2)
>>> rref1 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 3))
>>> rref2 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 1))
>>> x = rref1.to_here() + rref2.to_here()
>>> rpc.shutdown()
</pre></div>
</div>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span>>>> # On worker 1:
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker1", rank=1, world_size=2)
>>> rpc.shutdown()
</pre></div>
</div>
<p>Below is an example of running a TorchScript function using RPC.</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span>>>> # On both workers:
>>> @torch.jit.script
>>> def my_script_add(t1, t2):
>>>     return torch.add(t1, t2)
</pre></div>
</div>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span>>>> # On worker 0:
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker0", rank=0, world_size=2)
>>> rref = rpc.remote("worker1", my_script_add, args=(torch.ones(2), 3))
>>> rref.to_here()
>>> rpc.shutdown()
</pre></div>
</div>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span>>>> # On worker 1:
>>> import torch.distributed.rpc as rpc
>>> rpc.init_rpc("worker1", rank=1, world_size=2)
>>> rpc.shutdown()
</pre></div>
</div>
</dd></dl>
<dl class="py function">
<dt class="sig sig-object py" id="torch.distributed.rpc.get_worker_info">
<span class="sig-prename descclassname"><span class="pre">torch.distributed.rpc.</span></span><span class="sig-name descname"><span class="pre">get_worker_info</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">worker_name</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">None</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/torch/distributed/rpc/api.html#get_worker_info"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.distributed.rpc.get_worker_info" title="Permalink to this definition">¶</a></dt>
<dd><p>Get <a class="reference internal" href="#torch.distributed.rpc.WorkerInfo" title="torch.distributed.rpc.WorkerInfo"><code class="xref py py-class docutils literal notranslate"><span class="pre">WorkerInfo</span></code></a> of a given worker name.
Use this <a class="reference internal" href="#torch.distributed.rpc.WorkerInfo" title="torch.distributed.rpc.WorkerInfo"><code class="xref py py-class docutils literal notranslate"><span class="pre">WorkerInfo</span></code></a> to avoid passing an
expensive string on every invocation.</p>
<dl class="field-list simple">
<dt class="field-odd">Parameters<span class="colon">:</span></dt>
<dd class="field-odd"><p><strong>worker_name</strong> (<a class="reference external" href="https://docs.python.org/3/library/stdtypes.html#str" title="(in Python v3.11)"><em>str</em></a>) – the string name of a worker. If <code class="docutils literal notranslate"><span class="pre">None</span></code>, return the
the id of the current worker. (default <code class="docutils literal notranslate"><span class="pre">None</span></code>)</p>
</dd>
<dt class="field-even">Returns<span class="colon">:</span></dt>
<dd class="field-even"><p><a class="reference internal" href="#torch.distributed.rpc.WorkerInfo" title="torch.distributed.rpc.WorkerInfo"><code class="xref py py-class docutils literal notranslate"><span class="pre">WorkerInfo</span></code></a> instance for the given
<code class="docutils literal notranslate"><span class="pre">worker_name</span></code> or <a class="reference internal" href="#torch.distributed.rpc.WorkerInfo" title="torch.distributed.rpc.WorkerInfo"><code class="xref py py-class docutils literal notranslate"><span class="pre">WorkerInfo</span></code></a> of the
current worker if <code class="docutils literal notranslate"><span class="pre">worker_name</span></code> is <code class="docutils literal notranslate"><span class="pre">None</span></code>.</p>
</dd>
</dl>
</dd></dl>
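<p>As a minimal sketch (assuming PyTorch is installed; a single process with
<code class="docutils literal notranslate"><span class="pre">world_size=1</span></code> and a hypothetical local address/port are used so the
example is self-contained), the <code class="docutils literal notranslate"><span class="pre">WorkerInfo</span></code> can be looked up once and
then reused across calls:</p>

```python
import os
import torch
import torch.distributed.rpc as rpc

# Single-process group so the sketch is self-contained; a real deployment
# would use multiple workers and a real rendezvous address.
os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29500")
rpc.init_rpc("worker0", rank=0, world_size=1)

info = rpc.get_worker_info()  # WorkerInfo of the current worker

# Reuse the WorkerInfo instead of passing the name string on every call.
result = rpc.rpc_sync(info, torch.add, args=(torch.ones(2), 1))

rpc.shutdown()
```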
<dl class="py function">
<dt class="sig sig-object py" id="torch.distributed.rpc.shutdown">
<span class="sig-prename descclassname"><span class="pre">torch.distributed.rpc.</span></span><span class="sig-name descname"><span class="pre">shutdown</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">graceful</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">True</span></span></em>, <em class="sig-param"><span class="n"><span class="pre">timeout</span></span><span class="o"><span class="pre">=</span></span><span class="default_value"><span class="pre">0</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/torch/distributed/rpc/api.html#shutdown"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.distributed.rpc.shutdown" title="Permalink to this definition">¶</a></dt>
<dd><p>Perform a shutdown of the RPC agent, and then destroy the RPC agent. This
stops the local agent from accepting outstanding requests, and shuts
down the RPC framework by terminating all RPC threads. If <code class="docutils literal notranslate"><span class="pre">graceful=True</span></code>,
this will block until all local and remote RPC processes reach this method
and wait for all outstanding work to complete. Otherwise, if
<code class="docutils literal notranslate"><span class="pre">graceful=False</span></code>, this is a local shutdown, and it does not wait for other
RPC processes to reach this method.</p>
<div class="admonition warning">
<p class="admonition-title">Warning</p>
<p>For <a class="reference internal" href="futures.html#torch.futures.Future" title="torch.futures.Future"><code class="xref py py-class docutils literal notranslate"><span class="pre">Future</span></code></a> objects returned by
<a class="reference internal" href="#torch.distributed.rpc.rpc_async" title="torch.distributed.rpc.rpc_async"><code class="xref py py-meth docutils literal notranslate"><span class="pre">rpc_async()</span></code></a>, <code class="docutils literal notranslate"><span class="pre">future.wait()</span></code> should not
be called after <code class="docutils literal notranslate"><span class="pre">shutdown()</span></code>.</p>
</div>
<dl class="field-list simple">
<dt class="field-odd">Parameters<span class="colon">:</span></dt>
<dd class="field-odd"><p><strong>graceful</strong> (<a class="reference external" href="https://docs.python.org/3/library/functions.html#bool" title="(in Python v3.11)"><em>bool</em></a>) – Whether to do a graceful shutdown or not. If True,
this will 1) wait until there is no pending system
messages for <code class="docutils literal notranslate"><span class="pre">UserRRefs</span></code> and delete them; 2) block
until all local and remote RPC processes have reached
this method and wait for all outstanding work to
complete.</p>
</dd>
</dl>
<dl>
<dt>Example::</dt><dd><p>Make sure that <code class="docutils literal notranslate"><span class="pre">MASTER_ADDR</span></code> and <code class="docutils literal notranslate"><span class="pre">MASTER_PORT</span></code> are set properly
on both workers. Refer to <a class="reference internal" href="distributed.html#torch.distributed.init_process_group" title="torch.distributed.init_process_group"><code class="xref py py-meth docutils literal notranslate"><span class="pre">init_process_group()</span></code></a>
API for more details. For example,</p>
<p>export MASTER_ADDR=localhost
export MASTER_PORT=5678</p>
<p>Then run the following code in two different processes:</p>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="c1"># On worker 0:</span>
<span class="gp">>>> </span><span class="kn">import</span> <span class="nn">torch</span>
<span class="gp">>>> </span><span class="kn">import</span> <span class="nn">torch.distributed.rpc</span> <span class="k">as</span> <span class="nn">rpc</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">init_rpc</span><span class="p">(</span><span class="s2">"worker0"</span><span class="p">,</span> <span class="n">rank</span><span class="o">=</span><span class="mi">0</span><span class="p">,</span> <span class="n">world_size</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>
<span class="gp">>>> </span><span class="c1"># do some work</span>
<span class="gp">>>> </span><span class="n">result</span> <span class="o">=</span> <span class="n">rpc</span><span class="o">.</span><span class="n">rpc_sync</span><span class="p">(</span><span class="s2">"worker1"</span><span class="p">,</span> <span class="n">torch</span><span class="o">.</span><span class="n">add</span><span class="p">,</span> <span class="n">args</span><span class="o">=</span><span class="p">(</span><span class="n">torch</span><span class="o">.</span><span class="n">ones</span><span class="p">(</span><span class="mi">1</span><span class="p">),</span> <span class="mi">1</span><span class="p">))</span>
<span class="gp">>>> </span><span class="c1"># ready to shutdown</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">shutdown</span><span class="p">()</span>
</pre></div>
</div>
<div class="doctest highlight-default notranslate"><div class="highlight"><pre><span></span><span class="gp">>>> </span><span class="c1"># On worker 1:</span>
<span class="gp">>>> </span><span class="kn">import</span> <span class="nn">torch.distributed.rpc</span> <span class="k">as</span> <span class="nn">rpc</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">init_rpc</span><span class="p">(</span><span class="s2">"worker1"</span><span class="p">,</span> <span class="n">rank</span><span class="o">=</span><span class="mi">1</span><span class="p">,</span> <span class="n">world_size</span><span class="o">=</span><span class="mi">2</span><span class="p">)</span>
<span class="gp">>>> </span><span class="c1"># wait for worker 0 to finish work, and then shutdown.</span>
<span class="gp">>>> </span><span class="n">rpc</span><span class="o">.</span><span class="n">shutdown</span><span class="p">()</span>
</pre></div>
</div>
</dd>
</dl>
</dd></dl>
<dl class="py class">
<dt class="sig sig-object py" id="torch.distributed.rpc.WorkerInfo">
<em class="property"><span class="pre">class</span><span class="w"> </span></em><span class="sig-prename descclassname"><span class="pre">torch.distributed.rpc.</span></span><span class="sig-name descname"><span class="pre">WorkerInfo</span></span><a class="headerlink" href="#torch.distributed.rpc.WorkerInfo" title="Permalink to this definition">¶</a></dt>
<dd><p>A structure that encapsulates information of a worker in the system.
Contains the name and ID of the worker. This class is not meant to
be constructed directly, rather, an instance can be retrieved
through <a class="reference internal" href="#torch.distributed.rpc.get_worker_info" title="torch.distributed.rpc.get_worker_info"><code class="xref py py-meth docutils literal notranslate"><span class="pre">get_worker_info()</span></code></a> and the
result can be passed in to functions such as
<a class="reference internal" href="#torch.distributed.rpc.rpc_sync" title="torch.distributed.rpc.rpc_sync"><code class="xref py py-meth docutils literal notranslate"><span class="pre">rpc_sync()</span></code></a>, <a class="reference internal" href="#torch.distributed.rpc.rpc_async" title="torch.distributed.rpc.rpc_async"><code class="xref py py-meth docutils literal notranslate"><span class="pre">rpc_async()</span></code></a>,
<a class="reference internal" href="#torch.distributed.rpc.remote" title="torch.distributed.rpc.remote"><code class="xref py py-meth docutils literal notranslate"><span class="pre">remote()</span></code></a> to avoid copying a string on
every invocation.</p>
<dl class="py property">
<dt class="sig sig-object py" id="torch.distributed.rpc.WorkerInfo.id">
<em class="property"><span class="pre">property</span><span class="w"> </span></em><span class="sig-name descname"><span class="pre">id</span></span><a class="headerlink" href="#torch.distributed.rpc.WorkerInfo.id" title="Permalink to this definition">¶</a></dt>
<dd><p>Globally unique id to identify the worker.</p>
</dd></dl>
<dl class="py property">
<dt class="sig sig-object py" id="torch.distributed.rpc.WorkerInfo.name">
<em class="property"><span class="pre">property</span><span class="w"> </span></em><span class="sig-name descname"><span class="pre">name</span></span><a class="headerlink" href="#torch.distributed.rpc.WorkerInfo.name" title="Permalink to this definition">¶</a></dt>
<dd><p>The name of the worker.</p>
</dd></dl>
</dd></dl>
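<p>A short sketch of passing a <code class="docutils literal notranslate"><span class="pre">WorkerInfo</span></code> in place of the worker
name string (again assuming PyTorch is available and using a single
process with <code class="docutils literal notranslate"><span class="pre">world_size=1</span></code> for self-containment):</p>

```python
import os
import torch
import torch.distributed.rpc as rpc

os.environ.setdefault("MASTER_ADDR", "localhost")
os.environ.setdefault("MASTER_PORT", "29501")
rpc.init_rpc("worker0", rank=0, world_size=1)

owner = rpc.get_worker_info("worker0")  # look up once

# WorkerInfo is accepted wherever a worker name or rank would be,
# avoiding repeated string handling on every invocation.
rref = rpc.remote(owner, torch.add, args=(torch.ones(2), 3))
value = rref.to_here()

rpc.shutdown()
```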
<p>The RPC package also provides decorators which allow applications to specify
how a given function should be treated on the callee side.</p>
<dl class="py function">
<dt class="sig sig-object py" id="torch.distributed.rpc.functions.async_execution">
<span class="sig-prename descclassname"><span class="pre">torch.distributed.rpc.functions.</span></span><span class="sig-name descname"><span class="pre">async_execution</span></span><span class="sig-paren">(</span><em class="sig-param"><span class="n"><span class="pre">fn</span></span></em><span class="sig-paren">)</span><a class="reference internal" href="_modules/torch/distributed/rpc/functions.html#async_execution"><span class="viewcode-link"><span class="pre">[source]</span></span></a><a class="headerlink" href="#torch.distributed.rpc.functions.async_execution" title="Permalink to this definition">¶</a></dt>
<dd><p>A decorator for a function indicating that the return value of the function
is guaranteed to be a <a class="reference internal" href="futures.html#torch.futures.Future" title="torch.futures.Future"><code class="xref py py-class docutils literal notranslate"><span class="pre">Future</span></code></a> object and this
function can run asynchronously on the RPC callee. More specifically, the
callee extracts the <a class="reference internal" href="futures.html#torch.futures.Future" title="torch.futures.Future"><code class="xref py py-class docutils literal notranslate"><span class="pre">Future</span></code></a> returned by the wrapped
function and installs subsequent processing steps as a callback to that
<a class="reference internal" href="futures.html#torch.futures.Future" title="torch.futures.Future"><code class="xref py py-class docutils literal notranslate"><span class="pre">Future</span></code></a>. The installed callback will read the value
from the <a class="reference internal" href="futures.html#torch.futures.Future" title="torch.futures.Future"><code class="xref py py-class docutils literal notranslate"><span class="pre">Future</span></code></a> when completed and send the
value back as the RPC response. That also means the returned
<a class="reference internal" href="futures.html#torch.futures.Future" title="torch.futures.Future"><code class="xref py py-class docutils literal notranslate"><span class="pre">Future</span></code></a> only exists on the callee side and is never
sent through RPC. This decorator is useful when the wrapped function’s
(<code class="docutils literal notranslate"><span class="pre">fn</span></code>) execution needs to pause and resume due to, e.g., containing
<a class="reference internal" href="#torch.distributed.rpc.rpc_async" title="torch.distributed.rpc.rpc_async"><code class="xref py py-meth docutils literal notranslate"><span class="pre">rpc_async()</span></code></a> or waiting for other signals.</p>
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>To enable asynchronous execution, applications must pass the
function object returned by this decorator to RPC APIs. If RPC detected
attributes installed by this decorator, it knows that this function
returns a <code class="docutils literal notranslate"><span class="pre">Future</span></code> object and will handle that accordingly.
However, this does not mean this decorator has to be the outermost one when
defining a function. For example, when combined with <code class="docutils literal notranslate"><span class="pre">@staticmethod</span></code>
or <code class="docutils literal notranslate"><span class="pre">@classmethod</span></code>, <code class="docutils literal notranslate"><span class="pre">@rpc.functions.async_execution</span></code> needs to be the
inner decorator so that the target function is recognized as a static
or class function. This target function can still execute asynchronously
because, when accessed, the static or class method preserves attributes
installed by <code class="docutils literal notranslate"><span class="pre">@rpc.functions.async_execution</span></code>.</p>