第 2 章 - Socket 和模式 #
在第 1 章 - 基础中,我们初步了解了 ZeroMQ,并学习了一些主要的 ZeroMQ 模式的基本示例:请求-响应、发布-订阅和流水线。在本章中,我们将深入实践,开始学习如何在实际程序中使用这些工具。
我们将涵盖:
- 如何创建和使用 ZeroMQ Socket。
- 如何在 Socket 上发送和接收消息。
- 如何围绕 ZeroMQ 的异步 I/O 模型构建应用程序。
- 如何在单个线程中处理多个 Socket。
- 如何正确处理致命错误和非致命错误。
- 如何处理中断信号,例如 Ctrl-C。
- 如何干净地关闭 ZeroMQ 应用程序。
- 如何检查 ZeroMQ 应用程序是否存在内存泄漏。
- 如何发送和接收多部分消息。
- 如何在网络上转发消息。
- 如何构建一个简单的消息队列代理。
- 如何使用 ZeroMQ 编写多线程应用程序。
- 如何使用 ZeroMQ 进行线程间信号传递。
- 如何使用 ZeroMQ 协调节点网络。
- 如何创建和使用发布-订阅的消息信封。
- 如何使用高水位标记(HWM)来防止内存溢出。
Socket API #
坦白地说,ZeroMQ 在某种程度上对你使用了“障眼法”,对此我们并不道歉。这是为了你好,而且这比伤到你更让我们自己难受。ZeroMQ 提供了一个熟悉的基于 Socket 的 API,为了隐藏大量的消息处理引擎,我们付出了巨大的努力。然而,结果将慢慢地纠正你关于如何设计和编写分布式软件的世界观。
Socket 是事实上的网络编程标准 API,同时也能防止你的眼球掉到脸颊上。让 ZeroMQ 对开发者特别有吸引力的一点是,它使用 Socket 和消息,而不是其他任意的概念集。感谢 Martin Sustrik 实现了这一点。它把“面向消息的中间件”——一个能让整个房间昏昏欲睡的短语——变成了“超辣 Socket!”,让我们对披萨产生了奇怪的渴望,并渴望了解更多。
就像一道最喜欢的菜一样,ZeroMQ Socket 很容易消化。Socket 的生命分为四个部分,就像 BSD Socket 一样:

- 创建和销毁 Socket,它们共同构成了 Socket 生命的因果循环(参见 `zmq_socket()`、`zmq_close()`)。
- 配置 Socket,通过设置选项并在必要时检查它们(参见 `zmq_setsockopt()`、`zmq_getsockopt()`)。
- 将 Socket 接入网络拓扑,通过创建进出它们的 ZeroMQ 连接(参见 `zmq_bind()`、`zmq_connect()`)。
- 使用 Socket 传输数据,通过在其上写入和接收消息(参见 `zmq_msg_send()`、`zmq_msg_recv()`)。
请注意,Socket 始终是 void 指针,而消息(我们很快就会讲到)是结构体。因此,在 C 语言中,你直接传递 Socket,但在所有处理消息的函数中传递的是消息的地址,例如 `zmq_msg_send()` 和 `zmq_msg_recv()`。作为记忆辅助,记住“在 ZeroMQ 中,你的所有 Socket 都属于我们”,但消息是你在代码中实际拥有的东西。
创建、销毁和配置 Socket 的方式和你期望的任何对象一样。但请记住,ZeroMQ 是一个异步的、弹性的结构。这对我们将 Socket 连接到网络拓扑以及之后如何使用它们产生了一些影响。
将 Socket 连接到网络拓扑中 #
要在两个节点之间建立连接,你在一个节点中使用 `zmq_bind()`,在另一个节点中使用 `zmq_connect()`。通常的经验法则是:执行 `zmq_bind()` 的是“服务器”,它监听一个众所周知的网络地址;执行 `zmq_connect()` 的是“客户端”,其网络地址未知或任意。因此,我们说“将一个 Socket 绑定到一个端点”以及“将一个 Socket 连接到一个端点”,端点就是那个众所周知的网络地址。
ZeroMQ 连接与经典的 TCP 连接有几点显著差异:

- 它们可以使用任意传输方式(inproc、ipc、tcp、pgm 或 epgm),参见 `zmq_inproc()`、`zmq_ipc()`、`zmq_tcp()`、`zmq_pgm()` 和 `zmq_epgm()`。
- 一个 Socket 可以有多个出站和入站连接。
- 没有 `zmq_accept()` 方法。Socket 绑定到端点后,会自动开始接受连接。
- 网络连接本身在后台建立;如果网络连接中断(例如,对等方消失后又出现),ZeroMQ 会自动重新连接。
- 你的应用程序代码不能直接操作这些连接;它们被封装在 Socket 之下。
许多架构遵循某种客户端/服务器模型,其中服务器是最静态的组件,而客户端是最动态的组件,即它们出现和消失最频繁。有时会遇到寻址问题:服务器对客户端可见,但反之则不一定。因此,大多数情况下,哪个节点应该执行 `zmq_bind()`(服务器)、哪个节点应该执行 `zmq_connect()`(客户端)是显而易见的。这也取决于你使用的 Socket 类型,对于不寻常的网络架构会有一些例外。我们稍后会介绍 Socket 类型。
现在,想象一下我们在服务器启动之前启动客户端。在传统网络中,我们会得到一个大大的红色失败标志。但 ZeroMQ 允许我们任意启动和停止各个部分。一旦客户端节点执行了 `zmq_connect()`,连接就存在了,该节点就可以开始向 Socket 写入消息。在某个阶段(希望是在消息排队过多以至于开始被丢弃或客户端阻塞之前),服务器启动,执行 `zmq_bind()`,然后 ZeroMQ 开始传递消息。
服务器节点可以绑定到多个端点(即协议和地址的组合),并且可以用单个 Socket 来完成。这意味着它将接受跨不同传输方式的连接:
zmq_bind (socket, "tcp://*:5555");
zmq_bind (socket, "tcp://*:9999");
zmq_bind (socket, "inproc://somename");
对于大多数传输方式,你不能像 UDP 那样两次绑定到同一个端点。不过,`ipc` 传输方式允许一个进程绑定到已被第一个进程使用的端点,这是为了让进程在崩溃后能够恢复。

尽管 ZeroMQ 试图对哪一端绑定、哪一端连接保持中立,但两者之间确实存在差异,我们稍后会更详细地了解。总而言之,你通常应该把“服务器”视为拓扑中静态的部分,绑定到或多或少固定的端点;把“客户端”视为动态的部分,它们来来去去并连接到这些端点。然后,围绕这个模型设计你的应用程序,这样它“正常工作”(just work)的机会就会大大增加。
Socket 有类型。Socket 类型定义了 Socket 的语义、消息的入站和出站路由策略、队列等。你可以将某些类型的 Socket 连接在一起,例如,发布者 Socket 和订阅者 Socket。Socket 在“消息模式”中协同工作。我们稍后会更详细地讨论这一点。
正是这种以不同方式连接 Socket 的能力赋予了 ZeroMQ 作为消息队列系统的基本能力。在此之上还有一些层,例如代理,我们稍后会讲到。但本质上,使用 ZeroMQ,你就像拼搭儿童积木一样将各部分连接起来,从而定义你的网络架构。
发送和接收消息 #
要发送和接收消息,你使用 `zmq_msg_send()` 和 `zmq_msg_recv()` 方法。这些名称是约定俗成的,但 ZeroMQ 的 I/O 模型与经典的 TCP 模型差别很大,你需要一些时间来适应它。

图 9 - TCP Socket 是一对一的
让我们看看在处理数据时,TCP Socket 和 ZeroMQ Socket 之间的主要区别:

- ZeroMQ Socket 传输的是消息,类似于 UDP,而不是像 TCP 那样的字节流。ZeroMQ 消息是长度指定的二进制数据。我们很快就会讲到消息;它们的设计经过性能优化,因此有点复杂。
- ZeroMQ Socket 在后台线程中执行 I/O 操作。这意味着无论你的应用程序正在忙什么,消息都会到达本地输入队列,并从本地输出队列发送出去。
- 根据 Socket 类型,ZeroMQ Socket 内置了 1-对-N 的路由行为。

`zmq_send()` 方法实际上并不会把消息发送到 Socket 连接上。它将消息放入队列,由 I/O 线程异步发送。除某些异常情况外,它不会阻塞。因此,当 `zmq_send()` 返回到你的应用程序时,消息不一定已经发送出去。
单播传输 #
ZeroMQ 提供了一组单播传输方式(`inproc`、`ipc` 和 `tcp`)和多播传输方式(`epgm`、`pgm`)。多播是一种高级技术,我们稍后会讲到。除非你确定自己的扇出比率使得 1 对 N 的单播不可行,否则不要轻易使用它。
对于大多数常见情况,请使用 `tcp`,它是一种“断开连接的”(disconnected)TCP 传输方式。它具有弹性、可移植,并且对大多数情况来说足够快。我们称之为断开连接,是因为 ZeroMQ 的 `tcp` 传输方式不要求在连接之前端点就已存在。客户端和服务器可以随时连接和绑定,可以离开再回来,这一切对应用程序都是透明的。

进程间传输方式 `ipc` 与 `tcp` 一样是断开连接的。它有一个限制:尚不支持 Windows。按照约定,我们使用带“.ipc”扩展名的端点名称,以避免与其他文件名潜在冲突。在 UNIX 系统上,如果使用 `ipc` 端点,你需要用适当的权限创建它们,否则在不同用户 ID 下运行的进程之间可能无法共享。你还必须确保所有进程都能访问这些文件,例如在同一工作目录下运行。

线程间传输方式 `inproc` 是一种“连接型”(connected)信号传输。它比 `tcp` 或 `ipc` 快得多。与 `tcp` 和 `ipc` 相比,这种传输方式有一个特定限制:服务器必须在任何客户端发起连接之前完成绑定。此问题已在 ZeroMQ v4.0 及更高版本中修复。

ZeroMQ 的新手经常会问(这也是我曾经问过自己的问题):“如何在 ZeroMQ 中编写一个 XYZ 服务器?”例如,“如何在 ZeroMQ 中编写一个 HTTP 服务器?”言下之意是,如果我们可以使用普通的 Socket 传输 HTTP 请求和响应,那么也应该可以使用 ZeroMQ Socket 来做同样的事情,而且会更快更好。过去的答案是“这不是它的工作方式”。ZeroMQ 不是中立载体:它对所使用的传输协议强加了一种帧格式。这种帧格式与现有协议不兼容,因为后者倾向于使用自己的帧格式。例如,比较一下同样基于 TCP/IP 的 HTTP 请求和 ZeroMQ 请求。

图 10 - 网络上的 HTTP
HTTP 请求使用 CR-LF 作为最简单的帧分隔符,而 ZeroMQ 使用长度指定的帧。因此,你可以使用 ZeroMQ 编写一个类似 HTTP 的协议,例如使用请求-响应 Socket 模式。但它不会是真正的 HTTP。
图 11 - 网络上的 ZeroMQ

然而,自 v3.3 起,ZeroMQ 提供了一个名为 `ZMQ_ROUTER_RAW` 的 Socket 选项,它允许你在不使用 ZeroMQ 帧的情况下读写数据。你可以用它来读写真正的 HTTP 请求和响应。Hardeep Singh 贡献了这一修改,以便他可以从 ZeroMQ 应用程序连接到 Telnet 服务器。在撰写本文时,这仍有些实验性,但它表明 ZeroMQ 在不断演进以解决新问题。也许下一个补丁就由你来贡献。

I/O 线程 #

我们说过,ZeroMQ 在后台线程中执行 I/O 操作。对于除最极端之外的所有应用来说,一个 I/O 线程(供一个进程中的所有 Socket 使用)就足够了。创建新上下文时,它会启动一个 I/O 线程。一般的经验法则是:每秒每吉字节的数据进出量,分配一个 I/O 线程。要增加 I/O 线程的数量,请在创建任何 Socket 之前调用 `zmq_ctx_set()`:

int io_threads = 4;
void *context = zmq_ctx_new ();
zmq_ctx_set (context, ZMQ_IO_THREADS, io_threads);
assert (zmq_ctx_get (context, ZMQ_IO_THREADS) == io_threads);
我们已经看到,一个 Socket 可以同时处理几十甚至几千个连接。这对你编写应用程序的方式产生了根本性的影响。传统的网络应用程序通常为每个远程连接分配一个进程或线程,该进程或线程处理一个 Socket。ZeroMQ 允许你将这种整个结构压缩到一个单一进程中,然后根据需要进行拆分以实现扩展。
如果你只使用 ZeroMQ 进行线程间通信(即,不执行外部 Socket I/O 的多线程应用程序),你可以将 I/O 线程数设置为零。但这并不是一个显著的优化,更多是一种奇特用法。
消息模式 #
在 ZeroMQ Socket API 的朴实外表之下,隐藏着一个消息模式的世界。如果你有企业消息传递背景,或熟悉 UDP,这些模式会让你感到隐约熟悉;但对大多数 ZeroMQ 新手来说,它们是一个惊喜。我们太习惯于 TCP 范式了:一个 Socket 与另一个节点一对一映射。

让我们简要回顾一下 ZeroMQ 为你做了什么。它快速高效地把数据块(消息)传递给节点。你可以把节点映射到线程、进程或物理节点。无论实际传输方式是什么(进程内、进程间、TCP 或多播),ZeroMQ 都为你的应用程序提供单一的 Socket API。它自动重连来来去去的对等方。它根据需要在发送方和接收方对消息进行排队。它限制这些队列,防止进程内存耗尽。它处理 Socket 错误。它在后台线程中执行所有 I/O。它使用无锁技术在节点之间通信,因此永远没有锁、等待、信号量或死锁。

但除此之外,它还根据称为“模式”的精确规则来路由和排队消息。正是这些模式提供了 ZeroMQ 的智能。它们封装了我们关于数据和工作分配最佳方式的宝贵经验。ZeroMQ 的模式目前是硬编码的,但未来的版本可能允许用户自定义模式。

ZeroMQ 模式由具有匹配类型的 Socket 对实现。换句话说,要理解 ZeroMQ 模式,你需要理解 Socket 类型以及它们如何协同工作。这大多只能靠学习;在这个层面上,很少有东西是显而易见的。
内置的核心 ZeroMQ 模式有:

- 请求-响应(Request-reply),它将一组客户端连接到一组服务。这是一种远程过程调用和任务分发模式。
- 发布-订阅(Pub-sub),它将一组发布者连接到一组订阅者。这是一种数据分发模式。
- 流水线(Pipeline),它以扇出/扇入模式连接节点,可以有多个步骤和循环。这是一种并行任务分发和收集模式。
- 独占对(Exclusive pair),它独占地连接两个 Socket。这是用于连接同一进程中两个线程的模式,不要与“普通”的 Socket 对混淆。

我们在第 1 章 - 基础中介绍了前三种模式,本章稍后会看到独占对模式。`zmq_socket()` 手册页对这些模式的说明相当清楚,值得反复阅读直到理解为止。以下是合法的连接-绑定组合(任何一侧都可以绑定):

- PUB 和 SUB
- REQ 和 REP
- REQ 和 ROUTER(注意,REQ 会插入一个额外的空帧)
- DEALER 和 REP(注意,REP 会假定存在一个空帧)
- DEALER 和 ROUTER
- DEALER 和 DEALER
- ROUTER 和 ROUTER
- PUSH 和 PULL
- PAIR 和 PAIR
你还会看到关于 XPUB 和 XSUB Socket 的引用,我们稍后会讲到(它们类似于 PUB 和 SUB 的原始版本)。任何其他组合都会产生未文档化且不可靠的结果,未来的 ZeroMQ 版本在你尝试这些组合时可能会返回错误。当然,你可以并且将会通过代码桥接其他 Socket 类型,即从一种 Socket 类型读取并写入另一种。

高级消息模式 #

这四种核心模式内置于 ZeroMQ 中。它们是 ZeroMQ API 的一部分,在核心 C++ 库中实现,并且保证在所有优秀零售店都有售。

在此之上,还有高级消息模式。我们在 ZeroMQ 之上,用应用程序所使用的语言来构建这些高级模式。它们不是核心库的一部分,不随 ZeroMQ 包分发,而是作为 ZeroMQ 社区的一部分独立存在。例如,我们在第 4 章 - 可靠请求-响应模式中探讨的 Majordomo 模式,就位于 ZeroMQ 组织下的 GitHub Majordomo 项目中。

本书的目标之一,就是为你提供一套这样的高级模式,既有小的(如何理智地处理消息),也有大的(如何构建可靠的发布-订阅架构)。
使用消息 #

libzmq 核心库实际上有两种 API 用于发送和接收消息。我们已经见过并使用过的 `zmq_send()` 和 `zmq_recv()` 方法是简单的单行代码。我们会经常使用它们,但 `zmq_recv()` 不擅长处理任意大小的消息:它会把消息截断到你提供的缓冲区大小。因此有第二个 API,它使用 `zmq_msg_t` 结构体,功能更丰富,但用起来更难:

- 初始化消息:`zmq_msg_init()`、`zmq_msg_init_size()`、`zmq_msg_init_data()`。
- 发送和接收消息:`zmq_msg_send()`、`zmq_msg_recv()`。
- 释放消息:`zmq_msg_close()`。
- 访问消息内容:`zmq_msg_data()`、`zmq_msg_size()`、`zmq_msg_more()`。
- 处理消息属性:`zmq_msg_get()`、`zmq_msg_set()`。
- 消息操作:`zmq_msg_copy()`、`zmq_msg_move()`。

在网络线路上,ZeroMQ 消息是大小从零开始、能装进内存的任意二进制大对象。你自行使用 Protocol Buffers、MsgPack、JSON 或应用程序需要的任何其他格式进行序列化。选择一种可移植的数据表示是明智的,但如何取舍由你自己决定。

在内存中,ZeroMQ 消息是 `zmq_msg_t` 结构体(或类,取决于你的语言)。以下是在 C 语言中使用 ZeroMQ 消息的基本规则:

- 你创建并传递 `zmq_msg_t` 对象,而不是数据块。
- 要读取消息,你使用 `zmq_msg_init()` 创建一个空消息,然后把它传递给 `zmq_msg_recv()`。
- 要用新数据写入消息,你使用 `zmq_msg_init_size()` 创建一个消息并同时分配一块指定大小的数据,然后用 `memcpy` 填充数据,再把消息传递给 `zmq_msg_send()`。
- 要释放(而非销毁)消息,你调用 `zmq_msg_close()`。这会减少一个引用计数,最终 ZeroMQ 会销毁该消息。
- 要访问消息内容,你使用 `zmq_msg_data()`。要获取消息包含的数据大小,使用 `zmq_msg_size()`。
- 除非你仔细阅读了手册页并确切知道为什么需要,否则不要使用 `zmq_msg_move()`、`zmq_msg_copy()` 或 `zmq_msg_init_data()`。
- 当你把消息传递给 `zmq_msg_send()` 之后,ØMQ 会清除该消息,即把大小设为零。你不能两次发送同一条消息,发送后也不能访问消息数据。
- 如果你使用 `zmq_send()` 和 `zmq_recv()` 而不是消息结构体,这些规则就不适用,因为你传递的是字节数组而不是消息结构体。

如果你想多次发送同一条消息,而且消息比较大,就创建一个新消息,用 `zmq_msg_init()` 初始化,再用 `zmq_msg_copy()` 创建第一条消息的副本。这不会复制数据,而只是复制一个引用。然后你就可以把消息发送两次(或更多次,只要创建更多副本),消息只有在最后一个副本被发送或关闭后才会被最终销毁。
ZeroMQ 还支持多部分消息,它允许你将帧列表作为一个单一的网络消息发送或接收。这在实际应用程序中被广泛使用,我们将在本章稍后以及第 3 章 - 高级请求-响应模式中介绍这一点。
帧(在 ZeroMQ 参考手册页面中也称为“消息部分”)是 ZeroMQ 消息的基本网络格式。帧是指定长度的数据块。长度可以从零开始。如果你做过 TCP 编程,你会明白为什么帧是回答“现在我应该从这个网络 Socket 读取多少数据?”这一问题的有用方案。
有一个叫做 ZMTP 的线路级协议,它定义了 ZeroMQ 如何在 TCP 连接上读写帧。如果你对其工作原理感兴趣,这份规范很短。

最初,ZeroMQ 消息就像 UDP 那样只有一个帧。后来我们扩展为多部分消息,它就是一系列“更多”位被置为 1 的帧,最后跟着一个该位被置为 0 的帧。ZeroMQ API 允许你在写入消息时设置“更多”标志,并在读取消息时检查是否还有“更多”部分。

因此,在低层 ZeroMQ API 和参考手册中,消息和帧的说法存在一些模糊之处。这里有一个有用的词汇表:

- 一条消息可以包含一个或多个部分。
- 这些部分也称为“帧”。
- 每个部分都是一个 `zmq_msg_t` 对象。
- 在低层 API 中,你分别发送和接收每个部分。
- 更高级别的 API 提供了发送整条多部分消息的包装器。
- 你可以发送零长度的消息,例如用于从一个线程向另一个线程发送信号。
- ZeroMQ 不保证立即发送消息(无论单部分还是多部分),而是在某个不确定的稍后时间发送。因此,多部分消息必须能装进内存。
// Reading from multiple sockets
// This version uses a simple recv loop
#include "zhelpers.h"
int main (void)
{
// Connect to task ventilator
void *context = zmq_ctx_new ();
void *receiver = zmq_socket (context, ZMQ_PULL);
zmq_connect (receiver, "tcp://localhost:5557");
// Connect to weather server
void *subscriber = zmq_socket (context, ZMQ_SUB);
zmq_connect (subscriber, "tcp://localhost:5556");
zmq_setsockopt (subscriber, ZMQ_SUBSCRIBE, "10001 ", 6);
// Process messages from both sockets
// We prioritize traffic from the task ventilator
while (1) {
char msg [256];
while (1) {
int size = zmq_recv (receiver, msg, 255, ZMQ_DONTWAIT);
if (size != -1) {
// Process task
}
else
break;
}
while (1) {
int size = zmq_recv (subscriber, msg, 255, ZMQ_DONTWAIT);
if (size != -1) {
// Process weather update
}
else
break;
}
// No activity, so sleep for 1 msec
s_sleep (1);
}
zmq_close (receiver);
zmq_close (subscriber);
zmq_ctx_destroy (context);
return 0;
}
//
// Reading from multiple sockets in C++
// This version uses a simple recv loop
//
#include "zhelpers.hpp"
int main (int argc, char *argv[])
{
// Prepare our context and sockets
zmq::context_t context(1);
// Connect to task ventilator
zmq::socket_t receiver(context, ZMQ_PULL);
receiver.connect("tcp://localhost:5557");
// Connect to weather server
zmq::socket_t subscriber(context, ZMQ_SUB);
subscriber.connect("tcp://localhost:5556");
subscriber.set(zmq::sockopt::subscribe, "10001 ");
// Process messages from both sockets
// We prioritize traffic from the task ventilator
while (1) {
// Process any waiting tasks
bool rc;
do {
zmq::message_t task;
if ((rc = receiver.recv(&task, ZMQ_DONTWAIT)) == true) {
// process task
}
} while(rc == true);
// Process any waiting weather updates
do {
zmq::message_t update;
if ((rc = subscriber.recv(&update, ZMQ_DONTWAIT)) == true) {
// process weather update
}
} while(rc == true);
// No activity, so sleep for 1 msec
s_sleep(1);
}
return 0;
}
再次强调,先不要使用 `zmq_msg_init_data()`。这是一种零拷贝方法,保证会给你带来麻烦。在开始担心节省微秒之前,学习 ZeroMQ 还有更重要的事情。
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Reading from multiple sockets in Common Lisp
;;; This version uses a simple recv loop
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.msreader
(:nicknames #:msreader)
(:use #:cl #:zhelpers)
(:export #:main))
(in-package :zguide.msreader)
(defun main ()
;; Prepare our context and socket
(zmq:with-context (context 1)
;; Connect to task ventilator
(zmq:with-socket (receiver context zmq:pull)
(zmq:connect receiver "tcp://localhost:5557")
;; Connect to weather server
(zmq:with-socket (subscriber context zmq:sub)
(zmq:connect subscriber "tcp://localhost:5556")
(zmq:setsockopt subscriber zmq:subscribe "10001 ")
;; Process messages from both sockets
;; We prioritize traffic from the task ventilator
(loop
(handler-case
(loop
(let ((task (make-instance 'zmq:msg)))
(zmq:recv receiver task zmq:noblock)
;; process task
(dump-message task)
(finish-output)))
(zmq:error-again () nil))
;; Process any waiting weather updates
(handler-case
(loop
(let ((update (make-instance 'zmq:msg)))
(zmq:recv subscriber update zmq:noblock)
;; process weather update
(dump-message update)
(finish-output)))
(zmq:error-again () nil))
;; No activity, so sleep for 1 msec
(isys:usleep 1000)))))
(cleanup))
program msreader;
//
// Reading from multiple sockets
// This version uses a simple recv loop
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
receiver,
subscriber: TZMQSocket;
rc: Integer;
task,
update: TZMQFrame;
begin
// Prepare our context and sockets
context := TZMQContext.Create;
// Connect to task ventilator
receiver := Context.Socket( stPull );
receiver.RaiseEAgain := false;
receiver.connect( 'tcp://localhost:5557' );
// Connect to weather server
subscriber := Context.Socket( stSub );
subscriber.RaiseEAgain := false;
subscriber.connect( 'tcp://localhost:5556' );
subscriber.subscribe( '10001' );
// Process messages from both sockets
// We prioritize traffic from the task ventilator
while True do
begin
// Process any waiting tasks
repeat
task := TZMQFrame.create;
rc := receiver.recv( task, [rfDontWait] );
if rc <> -1 then
begin
// process task
end;
task.Free;
until rc = -1;
// Process any waiting weather updates
repeat
update := TZMQFrame.Create;
rc := subscriber.recv( update, [rfDontWait] );
if rc <> -1 then
begin
// process weather update
end;
update.Free;
until rc = -1;
// No activity, so sleep for 1 msec
sleep (1);
end;
// We never get here but clean up anyhow
receiver.Free;
subscriber.Free;
context.Free;
end.
#! /usr/bin/env escript
%%
%% Reading from multiple sockets
%% This version uses a simple recv loop
%%
main(_) ->
%% Prepare our context and sockets
{ok, Context} = erlzmq:context(),
%% Connect to task ventilator
{ok, Receiver} = erlzmq:socket(Context, pull),
ok = erlzmq:connect(Receiver, "tcp://localhost:5557"),
%% Connect to weather server
{ok, Subscriber} = erlzmq:socket(Context, sub),
ok = erlzmq:connect(Subscriber, "tcp://localhost:5556"),
ok = erlzmq:setsockopt(Subscriber, subscribe, <<"10001">>),
%% Process messages from both sockets
loop(Receiver, Subscriber),
%% We never get here but clean up anyhow
ok = erlzmq:close(Receiver),
ok = erlzmq:close(Subscriber),
ok = erlzmq:term(Context).
loop(Receiver, Subscriber) ->
%% We prioritize traffic from the task ventilator
process_tasks(Receiver),
process_weather(Subscriber),
timer:sleep(1000),
loop(Receiver, Subscriber).
process_tasks(S) ->
%% Process any waiting tasks
case erlzmq:recv(S, [noblock]) of
{error, eagain} -> ok;
{ok, Msg} ->
io:format("Procesing task: ~s~n", [Msg]),
process_tasks(S)
end.
process_weather(S) ->
%% Process any waiting weather updates
case erlzmq:recv(S, [noblock]) of
{error, eagain} -> ok;
{ok, Msg} ->
io:format("Processing weather update: ~s~n", [Msg]),
process_weather(S)
end.
这个丰富的 API 使用起来可能会很繁琐。这些方法是为性能优化的,而非简单性。如果你开始使用它们,几乎肯定会犯错,除非你仔细阅读了手册页。因此,一个好的语言绑定(binding)的主要工作之一就是将这个 API 包装成更易于使用的类。
defmodule Msreader do
@moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:27
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, receiver} = :erlzmq.socket(context, :pull)
:ok = :erlzmq.connect(receiver, 'tcp://localhost:5557')
{:ok, subscriber} = :erlzmq.socket(context, :sub)
:ok = :erlzmq.connect(subscriber, 'tcp://localhost:5556')
:ok = :erlzmq.setsockopt(subscriber, :subscribe, "10001")
loop(receiver, subscriber)
:ok = :erlzmq.close(receiver)
:ok = :erlzmq.close(subscriber)
:ok = :erlzmq.term(context)
end
def loop(receiver, subscriber) do
process_tasks(receiver)
process_weather(subscriber)
:timer.sleep(1000)
loop(receiver, subscriber)
end
#case(:erlzmq.recv(s, [:noblock])) do
def process_tasks(s) do
case(:erlzmq.recv(s, [:dontwait])) do
{:error, :eagain} ->
:ok
{:ok, msg} ->
:io.format('Procesing task: ~s~n', [msg])
process_tasks(s)
end
end
def process_weather(s) do
case(:erlzmq.recv(s, [:dontwait])) do
{:error, :eagain} ->
:ok
{:ok, msg} ->
:io.format('Processing weather update: ~s~n', [msg])
process_weather(s)
end
end
end
Msreader.main
处理多个 Socket #

在目前为止的所有示例中,主循环都是:

1. 等待 Socket 上的消息。
2. 处理消息。
3. 重复。
//
// Reading from multiple sockets
// This version uses a simple recv loop
//
open ZMQ;
// Prepare our context and sockets
var context = zmq_init 1;
// Connect to task ventilator
var receiver = context.mk_socket ZMQ_PULL;
receiver.connect "tcp://localhost:5557";
// Connect to weather server
var subscriber = context.mk_socket ZMQ_SUB;
subscriber.connect "tcp://localhost:5556";
subscriber.set_opt$ zmq_subscribe "10001 ";
// Process messages from both sockets
// We prioritize traffic from the task ventilator
while true do
// Process any waiting tasks
var task = receiver.recv_string_dontwait;
while task != "" do
// process task
task = receiver.recv_string_dontwait;
done
// Process any waiting weather updates
var update = subscriber.recv_string_dontwait;
while update != "" do
// process update
update = subscriber.recv_string_dontwait;
done
Faio::sleep (sys_clock,0.001); // 1 ms
done
//
// Reading from multiple sockets
// This version uses a simple recv loop
//
package main
import (
"fmt"
zmq "github.com/alecthomas/gozmq"
"time"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
// Connect to task ventilator
receiver, _ := context.NewSocket(zmq.PULL)
defer receiver.Close()
receiver.Connect("tcp://localhost:5557")
// Connect to weather server
subscriber, _ := context.NewSocket(zmq.SUB)
defer subscriber.Close()
subscriber.Connect("tcp://localhost:5556")
subscriber.SetSubscribe("10001")
// Process messages from both sockets
// We prioritize traffic from the task ventilator
for {
// ventilator
for {
if _, err := receiver.Recv(zmq.NOBLOCK); err != nil {
break
}
// fake process task
}
// weather server
for {
b, err := subscriber.Recv(zmq.NOBLOCK)
if err != nil {
break
}
// process weather update
fmt.Printf("found weather =%s\n", string(b))
}
// No activity, so sleep for 1 msec
time.Sleep(1e6)
}
fmt.Println("done")
}
要真正同时从多个 Socket 读取,请使用 `zmq_poll()`。更好的方法可能是把 `zmq_poll()` 封装在一个框架中,把它变成一个不错的事件驱动反应器(reactor),但这比我们想在这里介绍的内容复杂得多。
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZContext;
//
// Reading from multiple sockets in Java
// This version uses a simple recv loop
//
public class msreader
{
public static void main(String[] args) throws Exception
{
// Prepare our context and sockets
try (ZContext context = new ZContext()) {
// Connect to task ventilator
ZMQ.Socket receiver = context.createSocket(SocketType.PULL);
receiver.connect("tcp://localhost:5557");
// Connect to weather server
ZMQ.Socket subscriber = context.createSocket(SocketType.SUB);
subscriber.connect("tcp://localhost:5556");
subscriber.subscribe("10001 ".getBytes(ZMQ.CHARSET));
// Process messages from both sockets
// We prioritize traffic from the task ventilator
while (!Thread.currentThread().isInterrupted()) {
// Process any waiting tasks
byte[] task;
while ((task = receiver.recv(ZMQ.DONTWAIT)) != null) {
System.out.println("process task");
}
// Process any waiting weather updates
byte[] update;
while ((update = subscriber.recv(ZMQ.DONTWAIT)) != null) {
System.out.println("process weather update");
}
// No activity, so sleep for 1 msec
Thread.sleep(1);
}
}
}
}
#!/usr/bin/env julia
# Reading from multiple sockets
# The ZMQ.jl wrapper implements ZMQ.recv as a blocking function. Nonblocking i/o
# in Julia is typically done using coroutines (Tasks).
# The @async macro puts its enclosed expression in a Task. When the macro is
# executed, its Task gets scheduled and execution continues immediately to
# whatever follows the macro.
# Note: the msreader example in the zguide is presented as a "dirty hack"
# using the ZMQ_DONTWAIT and EAGAIN codes. Since the ZMQ.jl wrapper API
# does not expose DONTWAIT directly, this example skips the hack and instead
# provides an efficient solution.
using ZMQ
# Prepare our context and sockets
context = ZMQ.Context()
# Connect to task ventilator
receiver = Socket(context, ZMQ.PULL)
ZMQ.connect(receiver, "tcp://localhost:5557")
# Connect to weather server
subscriber = Socket(context,ZMQ.SUB)
ZMQ.connect(subscriber,"tcp://localhost:5556")
ZMQ.set_subscribe(subscriber, "10001")
while true
# Process any waiting tasks
@async begin
msg = unsafe_string(ZMQ.recv(receiver))
println(msg)
end
# Process any waiting weather updates
@async begin
msg = unsafe_string(ZMQ.recv(subscriber))
println(msg)
end
# Sleep for 1 msec
sleep(0.001)
end
--
-- Reading from multiple sockets
-- This version uses a simple recv loop
--
-- Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zhelpers"
-- Prepare our context and sockets
local context = zmq.init(1)
-- Connect to task ventilator
local receiver = context:socket(zmq.PULL)
receiver:connect("tcp://localhost:5557")
-- Connect to weather server
local subscriber = context:socket(zmq.SUB)
subscriber:connect("tcp://localhost:5556")
subscriber:setopt(zmq.SUBSCRIBE, "10001 ")
-- Process messages from both sockets
-- We prioritize traffic from the task ventilator
while true do
-- Process any waiting tasks
local msg
while true do
msg = receiver:recv(zmq.NOBLOCK)
if not msg then break end
-- process task
end
-- Process any waiting weather updates
while true do
msg = subscriber:recv(zmq.NOBLOCK)
if not msg then break end
-- process weather update
end
-- No activity, so sleep for 1 msec
s_sleep (1)
end
-- We never get here but clean up anyhow
receiver:close()
subscriber:close()
context:term()
让我们从一个“脏活”(dirty hack)开始,部分是为了享受做错事的乐趣,但主要是因为它能向你展示如何进行非阻塞 Socket 读取。下面这个有些混乱的程序使用非阻塞读取同时从两个 Socket 读取:它既是天气更新的订阅者,又是并行任务的工作者。
/* msreader.m: Reads from multiple sockets the hard way.
* *** DON'T DO THIS - see mspoller.m for a better example. *** */
#import "ZMQObjC.h"
static NSString *const kTaskVentEndpoint = @"tcp://localhost:5557";
static NSString *const kWeatherServerEndpoint = @"tcp://localhost:5556";
#define MSEC_PER_NSEC (1000000)
int
main(void)
{
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
ZMQContext *ctx = [[[ZMQContext alloc] initWithIOThreads:1U] autorelease];
/* Connect to task ventilator. */
ZMQSocket *receiver = [ctx socketWithType:ZMQ_PULL];
[receiver connectToEndpoint:kTaskVentEndpoint];
/* Connect to weather server. */
ZMQSocket *subscriber = [ctx socketWithType:ZMQ_SUB];
[subscriber connectToEndpoint:kWeatherServerEndpoint];
NSData *subData = [@"10001" dataUsingEncoding:NSUTF8StringEncoding];
[subscriber setData:subData forOption:ZMQ_SUBSCRIBE];
/* Process messages from both sockets, prioritizing the task vent. */
/* Could fair queue by checking each socket for activity in turn, rather
* than continuing to service the current socket as long as it is busy. */
struct timespec msec = {0, MSEC_PER_NSEC};
for (;;) {
/* Worst case: a task is always pending and we never get to weather,
* or vice versa. In such a case, memory use would rise without
* limit if we did not ensure the objects autoreleased by a single loop
* will be autoreleased whether we leave or continue in the loop. */
NSAutoreleasePool *p;
/* Process any waiting tasks. */
for (p = [[NSAutoreleasePool alloc] init];
nil != [receiver receiveDataWithFlags:ZMQ_NOBLOCK];
[p drain], p = [[NSAutoreleasePool alloc] init]);
[p drain];
/* No waiting tasks - process any waiting weather updates. */
for (p = [[NSAutoreleasePool alloc] init];
nil != [subscriber receiveDataWithFlags:ZMQ_NOBLOCK];
[p drain], p = [[NSAutoreleasePool alloc] init]);
[p drain];
/* Nothing doing - sleep for a millisecond. */
(void)nanosleep(&msec, NULL);
}
/* NOT REACHED */
[ctx closeSockets];
[pool drain]; /* This finally releases the autoreleased context. */
return EXIT_SUCCESS;
}
# Reading from multiple sockets in Perl
# This version uses a simple recv loop
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_PULL ZMQ_SUB ZMQ_DONTWAIT);
use TryCatch;
use Time::HiRes qw(usleep);
# Connect to task ventilator
my $context = ZMQ::FFI->new();
my $receiver = $context->socket(ZMQ_PULL);
$receiver->connect('tcp://localhost:5557');
# Connect to weather server
my $subscriber = $context->socket(ZMQ_SUB);
$subscriber->connect('tcp://localhost:5556');
$subscriber->subscribe('10001');
# Process messages from both sockets
# We prioritize traffic from the task ventilator
while (1) {
PROCESS_TASK:
while (1) {
try {
my $msg = $receiver->recv(ZMQ_DONTWAIT);
# Process task
}
catch {
last PROCESS_TASK;
}
}
PROCESS_UPDATE:
while (1) {
try {
my $msg = $subscriber->recv(ZMQ_DONTWAIT);
# Process weather update
}
catch {
last PROCESS_UPDATE;
}
}
# No activity, so sleep for 1 msec
usleep(1000);
}
<?php
/*
* Reading from multiple sockets
* This version uses a simple recv loop
* @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
*/
// Prepare our context and sockets
$context = new ZMQContext();
// Connect to task ventilator
$receiver = new ZMQSocket($context, ZMQ::SOCKET_PULL);
$receiver->connect("tcp://localhost:5557");
// Connect to weather server
$subscriber = new ZMQSocket($context, ZMQ::SOCKET_SUB);
$subscriber->connect("tcp://localhost:5556");
$subscriber->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, "10001");
// Process messages from both sockets
// We prioritize traffic from the task ventilator
while (true) {
// Process any waiting tasks
try {
for ($rc = 0; !$rc;) {
if ($rc = $receiver->recv(ZMQ::MODE_NOBLOCK)) {
// process task
}
}
} catch (ZMQSocketException $e) {
// do nothing
}
try {
// Process any waiting weather updates
for ($rc = 0; !$rc;) {
if ($rc = $subscriber->recv(ZMQ::MODE_NOBLOCK)) {
// process weather update
}
}
} catch (ZMQSocketException $e) {
// do nothing
}
// No activity, so sleep for 1 msec
usleep(1000);
}
# encoding: utf-8
#
# Reading from multiple sockets
# This version uses a simple recv loop
#
# Author: Jeremy Avnet (brainsik) <spork(dash)zmq(at)theory(dot)org>
#
import zmq
import time
# Prepare our context and sockets
context = zmq.Context()
# Connect to task ventilator
receiver = context.socket(zmq.PULL)
receiver.connect("tcp://localhost:5557")
# Connect to weather server
subscriber = context.socket(zmq.SUB)
subscriber.connect("tcp://localhost:5556")
subscriber.setsockopt(zmq.SUBSCRIBE, b"10001")
# Process messages from both sockets
# We prioritize traffic from the task ventilator
while True:
# Process any waiting tasks
while True:
try:
msg = receiver.recv(zmq.DONTWAIT)
except zmq.Again:
break
# process task
# Process any waiting weather updates
while True:
try:
msg = subscriber.recv(zmq.DONTWAIT)
except zmq.Again:
break
# process weather update
# No activity, so sleep for 1 msec
time.sleep(0.001)
#!/usr/bin/env ruby
# author: Oleg Sidorov <4pcbr> i4pcbr@gmail.com
# this code is licenced under the MIT/X11 licence.
#
# Reading from multiple sockets
# This version uses a simple recv loop
require 'rubygems'
require 'ffi-rzmq'
context = ZMQ::Context.new
# Connect to task ventilator
receiver = context.socket(ZMQ::PULL)
receiver.connect('tcp://localhost:5557')
# Connect to weather server
subscriber = context.socket(ZMQ::SUB)
subscriber.connect('tcp://localhost:5556')
subscriber.setsockopt(ZMQ::SUBSCRIBE, '10001')
while true
if receiver.recv_string(receiver_msg = '',ZMQ::NOBLOCK) && !receiver_msg.empty?
# process task
puts "receiver: #{receiver_msg}"
end
if subscriber.recv_string(subscriber_msg = '',ZMQ::NOBLOCK) && !subscriber_msg.empty?
# process weather update
puts "weather: #{subscriber_msg}"
end
# No activity, so sleep for 1 msec
sleep 0.001
end
use std::{thread, time};
fn main() {
let context = zmq::Context::new();
let receiver = context.socket(zmq::PULL).unwrap();
assert!(receiver.connect("tcp://localhost:5557").is_ok());
let subscriber = context.socket(zmq::SUB).unwrap();
assert!(subscriber.connect("tcp://localhost:5556").is_ok());
assert!(subscriber.set_subscribe(b"10001").is_ok());
loop {
loop {
if receiver.recv_msg(zmq::DONTWAIT).is_err() {
break;
}
}
loop {
if subscriber.recv_msg(zmq::DONTWAIT).is_err() {
break;
}
}
thread::sleep(time::Duration::from_millis(1));
}
}
/*
*
* Reading from multiple sockets in Scala
* This version uses a simple recv loop
*
* @author Giovanni Ruggiero
* @email giovanni.ruggiero@gmail.com
*/
import org.zeromq.ZMQ
object msreader {
def main(args : Array[String]) {
// Prepare our context and sockets
val context = ZMQ.context(1)
// Connect to task ventilator
val receiver = context.socket(ZMQ.PULL)
receiver.connect("tcp://localhost:5557")
// Connect to weather server
val subscriber = context.socket(ZMQ.SUB)
subscriber.connect("tcp://localhost:5556")
subscriber.subscribe("10001 ".getBytes())
// Process messages from both sockets
// We prioritize traffic from the task ventilator
while (true) {
// Process any waiting tasks
var task = receiver.recv(ZMQ.NOBLOCK)
while (task != null) {
// process task
task = receiver.recv(ZMQ.NOBLOCK)
}
// Process any waiting weather updates
var update = subscriber.recv(ZMQ.NOBLOCK)
while (update != null) {
// process weather update
update = subscriber.recv(ZMQ.NOBLOCK)
}
// No activity, so sleep for 1 msec
Thread.sleep(1)
}
}
}
#
# Reading from multiple sockets
# This version uses a simple recv loop
#
package require zmq
# Prepare our context and sockets
zmq context context
# Connect to task ventilator
zmq socket receiver context PULL
receiver connect "tcp://localhost:5557"
# Connect to weather server
zmq socket subscriber context SUB
subscriber connect "tcp://localhost:5556"
subscriber setsockopt SUBSCRIBE "10001"
# Socket to send messages to
zmq socket sender context PUSH
sender connect "tcp://localhost:5558"
# Process messages from both sockets
# We prioritize traffic from the task ventilator
while {1} {
# Process any waiting task
for {set rc 0} {!$rc} {} {
zmq message task
if {[set rc [receiver recv_msg task NOBLOCK]] == 0} {
# Do the work
set string [task data]
puts "Process task: $string"
after $string
# Send result to sink
sender send "$string"
}
task close
}
# Process any waiting weather update
for {set rc 0} {!$rc} {} {
zmq message msg
if {[set rc [subscriber recv_msg msg NOBLOCK]] == 0} {
puts "Weather update: [msg data]"
}
msg close
}
# No activity, sleep for 1 msec
after 1
}
# We never get here but clean up anyhow
sender close
receiver close
subscriber close
context term
// Reading from multiple sockets
// This version uses zmq_poll()
#include "zhelpers.h"
int main (void)
{
// Connect to task ventilator
void *context = zmq_ctx_new ();
void *receiver = zmq_socket (context, ZMQ_PULL);
zmq_connect (receiver, "tcp://localhost:5557");
// Connect to weather server
void *subscriber = zmq_socket (context, ZMQ_SUB);
zmq_connect (subscriber, "tcp://localhost:5556");
zmq_setsockopt (subscriber, ZMQ_SUBSCRIBE, "10001 ", 6);
zmq_pollitem_t items [] = {
{ receiver, 0, ZMQ_POLLIN, 0 },
{ subscriber, 0, ZMQ_POLLIN, 0 }
};
// Process messages from both sockets
while (1) {
char msg [256];
zmq_poll (items, 2, -1);
if (items [0].revents & ZMQ_POLLIN) {
int size = zmq_recv (receiver, msg, 255, 0);
if (size != -1) {
// Process task
}
}
if (items [1].revents & ZMQ_POLLIN) {
int size = zmq_recv (subscriber, msg, 255, 0);
if (size != -1) {
// Process weather update
}
}
}
zmq_close (receiver);
zmq_close (subscriber);
zmq_ctx_destroy (context);
return 0;
}
mspoller: C++ 中的多套接字 poller
//
// Reading from multiple sockets in C++
// This version uses zmq_poll()
//
#include "zhelpers.hpp"
int main (int argc, char *argv[])
{
zmq::context_t context(1);
// Connect to task ventilator
zmq::socket_t receiver(context, ZMQ_PULL);
receiver.connect("tcp://localhost:5557");
// Connect to weather server
zmq::socket_t subscriber(context, ZMQ_SUB);
subscriber.connect("tcp://localhost:5556");
subscriber.set(zmq::sockopt::subscribe, "10001 ");
// Initialize poll set
zmq::pollitem_t items [] = {
{ receiver, 0, ZMQ_POLLIN, 0 },
{ subscriber, 0, ZMQ_POLLIN, 0 }
};
// Process messages from both sockets
while (1) {
zmq::message_t message;
zmq::poll (&items [0], 2, -1);
if (items [0].revents & ZMQ_POLLIN) {
receiver.recv(&message);
// Process task
}
if (items [1].revents & ZMQ_POLLIN) {
subscriber.recv(&message);
// Process weather update
}
}
return 0;
}
mspoller: CL 中的多套接字 poller
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Reading from multiple sockets in Common Lisp
;;; This version uses zmq_poll()
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.mspoller
(:nicknames #:mspoller)
(:use #:cl #:zhelpers)
(:export #:main))
(in-package :zguide.mspoller)
(defun main ()
(zmq:with-context (context 1)
;; Connect to task ventilator
(zmq:with-socket (receiver context zmq:pull)
(zmq:connect receiver "tcp://localhost:5557")
;; Connect to weather server
(zmq:with-socket (subscriber context zmq:sub)
(zmq:connect subscriber "tcp://localhost:5556")
(zmq:setsockopt subscriber zmq:subscribe "10001 ")
;; Initialize poll set
(zmq:with-polls ((items . ((receiver . zmq:pollin)
(subscriber . zmq:pollin))))
;; Process messages from both sockets
(loop
(let ((revents (zmq:poll items)))
(when (= (first revents) zmq:pollin)
(let ((message (make-instance 'zmq:msg)))
(zmq:recv receiver message)
;; Process task
(dump-message message)
(finish-output)))
(when (= (second revents) zmq:pollin)
(let ((message (make-instance 'zmq:msg)))
(zmq:recv subscriber message)
;; Process weather update
(dump-message message)
(finish-output)))))))))
(cleanup))
mspoller: Delphi 中的多套接字 poller
program mspoller;
//
// Reading from multiple sockets
// This version uses zmq_poll()
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
receiver,
subscriber: TZMQSocket;
i,pc: Integer;
task: TZMQFrame;
poller: TZMQPoller;
pollResult: TZMQPollItem;
begin
// Prepare our context and sockets
context := TZMQContext.Create;
// Connect to task ventilator
receiver := Context.Socket( stPull );
receiver.connect( 'tcp://localhost:5557' );
// Connect to weather server
subscriber := Context.Socket( stSub );
subscriber.connect( 'tcp://localhost:5556' );
subscriber.subscribe( '10001' );
// Initialize poll set
poller := TZMQPoller.Create( true );
poller.Register( receiver, [pePollIn] );
poller.Register( subscriber, [pePollIn] );
task := nil;
// Process messages from both sockets
while True do
begin
pc := poller.poll;
if pePollIn in poller.PollItem[0].revents then
begin
receiver.recv( task );
// Process task
FreeAndNil( task );
end;
if pePollIn in poller.PollItem[1].revents then
begin
subscriber.recv( task );
// Process task
FreeAndNil( task );
end;
end;
// We never get here
poller.Free;
receiver.Free;
subscriber.Free;
context.Free;
end.
mspoller: Erlang 中的多套接字 poller
#! /usr/bin/env escript
%%
%% Reading from multiple sockets
%% This version uses active sockets
%%
main(_) ->
{ok,Context} = erlzmq:context(),
%% Connect to task ventilator
{ok, Receiver} = erlzmq:socket(Context, [pull, {active, true}]),
ok = erlzmq:connect(Receiver, "tcp://localhost:5557"),
%% Connect to weather server
{ok, Subscriber} = erlzmq:socket(Context, [sub, {active, true}]),
ok = erlzmq:connect(Subscriber, "tcp://localhost:5556"),
ok = erlzmq:setsockopt(Subscriber, subscribe, <<"10001">>),
%% Process messages from both sockets
loop(Receiver, Subscriber),
%% We never get here
ok = erlzmq:close(Receiver),
ok = erlzmq:close(Subscriber),
ok = erlzmq:term(Context).
loop(Tasks, Weather) ->
receive
{zmq, Tasks, Msg, _Flags} ->
io:format("Processing task: ~s~n",[Msg]),
loop(Tasks, Weather);
{zmq, Weather, Msg, _Flags} ->
io:format("Processing weather update: ~s~n",[Msg]),
loop(Tasks, Weather)
end.
mspoller: Elixir 中的多套接字 poller
defmodule Mspoller do
@moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:27
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, receiver} = :erlzmq.socket(context, [:pull, {:active, true}])
:ok = :erlzmq.connect(receiver, 'tcp://localhost:5557')
{:ok, subscriber} = :erlzmq.socket(context, [:sub, {:active, true}])
:ok = :erlzmq.connect(subscriber, 'tcp://localhost:5556')
:ok = :erlzmq.setsockopt(subscriber, :subscribe, "10001")
loop(receiver, subscriber)
:ok = :erlzmq.close(receiver)
:ok = :erlzmq.close(subscriber)
:ok = :erlzmq.term(context)
end
def loop(tasks, weather) do
receive do
{:zmq, ^tasks, msg, _flags} ->
:io.format('Processing task: ~s~n', [msg])
loop(tasks, weather)
{:zmq, ^weather, msg, _flags} ->
:io.format('Processing weather update: ~s~n', [msg])
loop(tasks, weather)
end
end
end
Mspoller.main
mspoller: Felix 中的多套接字 poller
//
// Reading from multiple sockets
// This version uses zmq_poll()
//
open ZMQ;
var context = zmq_init 1;
// Connect to task ventilator
var receiver = context.mk_socket ZMQ_PULL;
receiver.connect "tcp://localhost:5557";
// Connect to weather server
var subscriber = context.mk_socket ZMQ_SUB;
subscriber.connect "tcp://localhost:5556";
subscriber.set_opt$ zmq_subscribe "10001 ";
// Initialize poll set
var items = varray(
zmq_poll_item (receiver, ZMQ_POLLIN),
zmq_poll_item (subscriber, ZMQ_POLLIN))
;
// Process messages from both sockets
while true do
C_hack::ignore$ poll (items, -1.0);
if (items.[0].revents \& ZMQ_POLLIN).short != 0s do
var s = receiver.recv_string;
// Process task
done
if (items.[1].revents \& ZMQ_POLLIN).short != 0s do
s = subscriber.recv_string;
done
done
mspoller: Go 中的多套接字 poller
//
// Reading from multiple sockets
// This version uses zmq.Poll()
//
package main
import (
"fmt"
zmq "github.com/alecthomas/gozmq"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
// Connect to task ventilator
receiver, _ := context.NewSocket(zmq.PULL)
defer receiver.Close()
receiver.Connect("tcp://localhost:5557")
// Connect to weather server
subscriber, _ := context.NewSocket(zmq.SUB)
defer subscriber.Close()
subscriber.Connect("tcp://localhost:5556")
subscriber.SetSubscribe("10001")
pi := zmq.PollItems{
zmq.PollItem{Socket: receiver, Events: zmq.POLLIN},
zmq.PollItem{Socket: subscriber, Events: zmq.POLLIN},
}
// Process messages from both sockets
for {
_, _ = zmq.Poll(pi, -1)
switch {
case pi[0].REvents&zmq.POLLIN != 0:
// Process task
pi[0].Socket.Recv(0) // eat the incoming message
case pi[1].REvents&zmq.POLLIN != 0:
// Process weather update
pi[1].Socket.Recv(0) // eat the incoming message
}
}
fmt.Println("done")
}
mspoller: Haskell 中的多套接字 poller
{-# LANGUAGE OverloadedStrings #-}
-- Reading from multiple sockets
-- This version uses zmq_poll()
module Main where
import Control.Monad
import System.ZMQ4.Monadic
main :: IO ()
main = runZMQ $ do
-- Connect to task ventilator
receiver <- socket Pull
connect receiver "tcp://localhost:5557"
-- Connect to weather server
subscriber <- socket Sub
connect subscriber "tcp://localhost:5556"
subscribe subscriber "10001 "
-- Process messages from both sockets
forever $
poll (-1) [ Sock receiver [In] (Just receiver_callback)
, Sock subscriber [In] (Just subscriber_callback)
]
where
-- Process task
receiver_callback :: [Event] -> ZMQ z ()
receiver_callback _ = return ()
-- Process weather update
subscriber_callback :: [Event] -> ZMQ z ()
subscriber_callback _ = return ()
mspoller: Java 中的多套接字 poller
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZContext;
//
// Reading from multiple sockets in Java
// This version uses ZMQ.Poller
//
public class mspoller
{
public static void main(String[] args)
{
try (ZContext context = new ZContext()) {
// Connect to task ventilator
ZMQ.Socket receiver = context.createSocket(SocketType.PULL);
receiver.connect("tcp://localhost:5557");
// Connect to weather server
ZMQ.Socket subscriber = context.createSocket(SocketType.SUB);
subscriber.connect("tcp://localhost:5556");
subscriber.subscribe("10001 ".getBytes(ZMQ.CHARSET));
// Initialize poll set
ZMQ.Poller items = context.createPoller(2);
items.register(receiver, ZMQ.Poller.POLLIN);
items.register(subscriber, ZMQ.Poller.POLLIN);
// Process messages from both sockets
while (!Thread.currentThread().isInterrupted()) {
byte[] message;
items.poll();
if (items.pollin(0)) {
message = receiver.recv(0);
System.out.println("Process task");
}
if (items.pollin(1)) {
message = subscriber.recv(0);
System.out.println("Process weather update");
}
}
}
}
}
mspoller: Lua 中的多套接字 poller
--
-- Reading from multiple sockets
-- This version uses :poll()
--
-- Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zmq.poller"
require"zhelpers"
local context = zmq.init(1)
-- Connect to task ventilator
local receiver = context:socket(zmq.PULL)
receiver:connect("tcp://localhost:5557")
-- Connect to weather server
local subscriber = context:socket(zmq.SUB)
subscriber:connect("tcp://localhost:5556")
subscriber:setopt(zmq.SUBSCRIBE, "10001 ", 6)
local poller = zmq.poller(2)
poller:add(receiver, zmq.POLLIN, function()
local msg = receiver:recv()
-- Process task
end)
poller:add(subscriber, zmq.POLLIN, function()
local msg = subscriber:recv()
-- Process weather update
end)
-- Process messages from both sockets
-- start poller's event loop
poller:start()
-- We never get here
receiver:close()
subscriber:close()
context:term()
mspoller: Node.js 中的多套接字 poller
// Reading from multiple sockets.
// This version listens for emitted 'message' events.
var zmq = require('zeromq')
// Connect to task ventilator
var receiver = zmq.socket('pull')
receiver.on('message', function(msg) {
console.log("From Task Ventilator:", msg.toString())
})
// Connect to weather server.
var subscriber = zmq.socket('sub')
subscriber.subscribe('10001')
subscriber.on('message', function(msg) {
console.log("Weather Update:", msg.toString())
})
receiver.connect('tcp://localhost:5557')
subscriber.connect('tcp://localhost:5556')
mspoller: Objective-C 中的多套接字 poller
/* msreader.m: Reads from multiple sockets the right way. */
#import "ZMQObjC.h"
static NSString *const kTaskVentEndpoint = @"tcp://localhost:5557";
static NSString *const kWeatherServerEndpoint = @"tcp://localhost:5556";
int
main(void)
{
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
ZMQContext *ctx = [[[ZMQContext alloc] initWithIOThreads:1U] autorelease];
/* Connect to task ventilator. */
ZMQSocket *receiver = [ctx socketWithType:ZMQ_PULL];
[receiver connectToEndpoint:kTaskVentEndpoint];
/* Connect to weather server. */
ZMQSocket *subscriber = [ctx socketWithType:ZMQ_SUB];
[subscriber connectToEndpoint:kWeatherServerEndpoint];
NSData *subData = [@"10001" dataUsingEncoding:NSUTF8StringEncoding];
[subscriber setData:subData forOption:ZMQ_SUBSCRIBE];
/* Initialize poll set. */
zmq_pollitem_t items[2];
[receiver getPollItem:&items[0] forEvents:ZMQ_POLLIN];
[subscriber getPollItem:&items[1] forEvents:ZMQ_POLLIN];
/* Process messages from both sockets. */
for (;;) {
NSAutoreleasePool *p = [[NSAutoreleasePool alloc] init];
[ZMQContext pollWithItems:items count:2
timeoutAfterUsec:ZMQPollTimeoutNever];
[p drain];
}
/* NOT REACHED */
[ctx closeSockets];
[pool drain]; /* This finally releases the autoreleased context. */
return EXIT_SUCCESS;
}
mspoller: Perl 中的多套接字 poller
# Reading from multiple sockets in Perl
# This version uses AnyEvent to poll the sockets
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_PULL ZMQ_SUB);
use AnyEvent;
use EV;
# Connect to the task ventilator
my $context = ZMQ::FFI->new();
my $receiver = $context->socket(ZMQ_PULL);
$receiver->connect('tcp://localhost:5557');
# Connect to weather server
my $subscriber = $context->socket(ZMQ_SUB);
$subscriber->connect('tcp://localhost:5556');
$subscriber->subscribe('10001');
my $pull_poller = AE::io $receiver->get_fd, 0, sub {
while ($receiver->has_pollin) {
my $msg = $receiver->recv();
# Process task
}
};
my $sub_poller = AE::io $subscriber->get_fd, 0, sub {
while ($subscriber->has_pollin) {
my $msg = $subscriber->recv();
# Process weather update
}
};
EV::run;
mspoller: PHP 中的多套接字 poller
<?php
/*
* Reading from multiple sockets
* This version uses zmq_poll()
* @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
*/
$context = new ZMQContext();
// Connect to task ventilator
$receiver = new ZMQSocket($context, ZMQ::SOCKET_PULL);
$receiver->connect("tcp://localhost:5557");
// Connect to weather server
$subscriber = new ZMQSocket($context, ZMQ::SOCKET_SUB);
$subscriber->connect("tcp://localhost:5556");
$subscriber->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, "10001");
// Initialize poll set
$poll = new ZMQPoll();
$poll->add($receiver, ZMQ::POLL_IN);
$poll->add($subscriber, ZMQ::POLL_IN);
$readable = $writeable = array();
// Process messages from both sockets
while (true) {
$events = $poll->poll($readable, $writeable);
if ($events > 0) {
foreach ($readable as $socket) {
if ($socket === $receiver) {
$message = $socket->recv();
// Process task
} elseif ($socket === $subscriber) {
$message = $socket->recv();
// Process weather update
}
}
}
}
// We never get here
mspoller: Python 中的多套接字 poller
# encoding: utf-8
#
# Reading from multiple sockets
# This version uses zmq.Poller()
#
# Author: Jeremy Avnet (brainsik) <spork(dash)zmq(at)theory(dot)org>
#
import zmq
# Prepare our context and sockets
context = zmq.Context()
# Connect to task ventilator
receiver = context.socket(zmq.PULL)
receiver.connect("tcp://localhost:5557")
# Connect to weather server
subscriber = context.socket(zmq.SUB)
subscriber.connect("tcp://localhost:5556")
subscriber.setsockopt(zmq.SUBSCRIBE, b"10001")
# Initialize poll set
poller = zmq.Poller()
poller.register(receiver, zmq.POLLIN)
poller.register(subscriber, zmq.POLLIN)
# Process messages from both sockets
while True:
try:
socks = dict(poller.poll())
except KeyboardInterrupt:
break
if receiver in socks:
message = receiver.recv()
# process task
if subscriber in socks:
message = subscriber.recv()
# process weather update
mspoller: Ruby 中的多套接字 poller
#!/usr/bin/env ruby
# author: Oleg Sidorov <4pcbr> i4pcbr@gmail.com
# this code is licenced under the MIT/X11 licence.
#
# Reading from multiple sockets
# This version uses a polling
require 'rubygems'
require 'ffi-rzmq'
context = ZMQ::Context.new
# Connect to task ventilator
receiver = context.socket(ZMQ::PULL)
receiver.connect('tcp://localhost:5557')
# Connect to weather server
subscriber = context.socket(ZMQ::SUB)
subscriber.connect('tcp://localhost:5556')
subscriber.setsockopt(ZMQ::SUBSCRIBE, '10001')
# Initialize a poll set
poller = ZMQ::Poller.new
poller.register(receiver, ZMQ::POLLIN)
poller.register(subscriber, ZMQ::POLLIN)
while true
poller.poll(:blocking)
poller.readables.each do |socket|
if socket === receiver
socket.recv_string(message = '')
# process task
puts "task: #{message}"
elsif socket === subscriber
socket.recv_string(message = '')
# process weather update
puts "weather: #{message}"
end
end
end
mspoller: Rust 中的多套接字 poller
fn main() {
let context = zmq::Context::new();
let receiver = context.socket(zmq::PULL).unwrap();
assert!(receiver.connect("tcp://localhost:5557").is_ok());
let subscriber = context.socket(zmq::SUB).unwrap();
assert!(subscriber.connect("tcp://localhost:5556").is_ok());
assert!(subscriber.set_subscribe(b"10001").is_ok());
let items = &mut [
receiver.as_poll_item(zmq::POLLIN),
subscriber.as_poll_item(zmq::POLLIN),
];
loop {
zmq::poll(items, -1).unwrap();
if items[0].is_readable() {
let _ = receiver.recv_msg(0);
}
if items[1].is_readable() {
let _ = subscriber.recv_msg(0);
}
}
}
mspoller: Scala 中的多套接字 poller
/*
* Reading from multiple sockets in Scala
* This version uses ZMQ.Poller
*
* @author Giovanni Ruggiero
* @email giovanni.ruggiero@gmail.com
*/
import org.zeromq.ZMQ
object mspoller {
def main(args : Array[String]) {
val context = ZMQ.context(1)
// Connect to task ventilator
val receiver = context.socket(ZMQ.PULL)
receiver.connect("tcp://localhost:5557")
// Connect to weather server
val subscriber = context.socket(ZMQ.SUB)
subscriber.connect("tcp://localhost:5556")
subscriber.subscribe("10001 ".getBytes())
// Initialize poll set
val items = context.poller(2)
items.register(receiver, ZMQ.Poller.POLLIN)
items.register(subscriber, ZMQ.Poller.POLLIN)
// Process messages from both sockets
while (true) {
items.poll()
if (items.pollin(0)) {
val message0 = receiver.recv(0)
// Process task
}
if (items.pollin(1)) {
val message1 = subscriber.recv(0)
// Process weather update
}
}
}
}
mspoller: Tcl 中的多套接字 poller
#
# Reading from multiple sockets
# This version uses a simple recv loop
#
package require zmq
# Prepare our context and sockets
zmq context context
# Connect to task ventilator
zmq socket receiver context PULL
receiver connect "tcp://localhost:5557"
# Connect to weather server
zmq socket subscriber context SUB
subscriber connect "tcp://localhost:5556"
subscriber setsockopt SUBSCRIBE "10001"
# Socket to send messages to
zmq socket sender context PUSH
sender connect "tcp://localhost:5558"
# Initialise poll set
set poll_set [list [list receiver [list POLLIN]] [list subscriber [list POLLIN]]]
# Process message from both sockets
while {1} {
set rpoll_set [zmq poll $poll_set -1]
foreach rpoll $rpoll_set {
switch [lindex $rpoll 0] {
receiver {
if {"POLLIN" in [lindex $rpoll 1]} {
set string [receiver recv]
# Do the work
puts "Process task: $string"
after $string
# Send result to sink
sender send "$string"
}
}
subscriber {
if {"POLLIN" in [lindex $rpoll 1]} {
set string [subscriber recv]
puts "Weather update: $string"
}
}
}
}
# No activity, sleep for 1 msec
after 1
}
# We never get here but clean up anyhow
sender close
receiver close
subscriber close
context term
zmq_pollitem_t 结构包含这四个成员:
typedef struct {
void *socket; // ZeroMQ socket to poll on
int fd; // OR, native file handle to poll on
short events; // Events to poll on
short revents; // Events returned after poll
} zmq_pollitem_t;
分段消息 #
ZeroMQ 允许我们用多个帧组成一条消息,即“分段消息”(multipart message)。实际应用中广泛使用分段消息,既可以用地址信息封装消息,也可以用于简单的序列化。我们稍后会讨论回复信封。
现在我们只学习如何在任何需要转发消息而不检查其内容的应用(例如代理)中,盲目而安全地读写分段消息。
当你处理分段消息时,每个部分都是一个 zmq_msg 对象。例如,如果你发送一个包含五个部分的消息,你必须构造、发送并销毁五个 zmq_msg 对象。你可以提前完成这些操作(并把这些对象存储在数组或其他结构中),也可以在发送时一个接一个地完成。
以下是发送分段消息中帧的方法(我们将每个帧放入一个消息对象中)
zmq_msg_send (&message, socket, ZMQ_SNDMORE);
...
zmq_msg_send (&message, socket, ZMQ_SNDMORE);
...
zmq_msg_send (&message, socket, 0);
以下是接收和处理消息中所有部分(无论是单部分还是多部分)的方法
while (1) {
zmq_msg_t message;
zmq_msg_init (&message);
zmq_msg_recv (&message, socket, 0);
// Process the message frame
...
int more = zmq_msg_more (&message);
zmq_msg_close (&message);
if (!more)
break; // Last message frame
}
关于分段消息的一些须知事项
- 当你发送分段消息时,只有发送最后一部分时,第一部分(以及所有后续部分)才会真正发送到网络上。
- 如果你正在使用 zmq_poll(),当你接收到消息的第一部分时,其余部分也已经到达了。
- 你将接收到消息的所有部分,或者一个都没有。
- 消息的每个部分都是一个独立的 zmq_msg 对象。
- 无论你是否检查 more 属性,你都将接收到消息的所有部分。
- 发送时,ZeroMQ 在内存中排队消息帧,直到接收到最后一个,然后一次性发送所有帧。
- 除了关闭套接字外,无法取消已部分发送的消息。
中介与代理 #
ZeroMQ 旨在实现去中心化智能,但这并不意味着你的网络中间是空的。网络中间充满了能够感知消息的基础设施,而我们经常用 ZeroMQ 来构建这类基础设施。这些管道可以小到一段细小的管线,也可以大到成熟的面向服务的经纪人。消息传递行业称之为“中介”(intermediation),意思是位于中间的部分与两端打交道。在 ZeroMQ 中,根据上下文,我们称之为代理、队列、转发器、设备或经纪人。
这种模式在现实世界中极为普遍,这也是为什么我们的社会和经济中充满了中介,他们的主要功能就是降低大型网络的复杂性和扩展成本。现实世界中的中介通常被称为批发商、分销商、经理等等。
动态发现问题 #
在设计大型分布式架构时遇到的问题之一是发现。也就是说,各个部分如何相互知晓?如果各个部分动态地出现和消失,这个问题就尤为困难,因此我们称之为“动态发现问题”。
动态发现有几种解决方案。最简单的方法是通过硬编码(或配置)网络架构来完全避免它,从而手动完成发现。也就是说,当你添加一个新部分时,你需要重新配置网络以使其知晓。

在实践中,这会导致架构变得越来越脆弱和笨重。假设你有一个发布者和一百个订阅者。你通过在每个订阅者中配置发布者端点来将每个订阅者连接到发布者。这很容易。订阅者是动态的;发布者是静态的。现在假设你添加了更多的发布者。突然之间,事情就不那么容易了。如果你继续将每个订阅者连接到每个发布者,避免动态发现的成本就会越来越高。

这个问题有很多解决方案,但最简单的答案是添加一个中介;也就是说,在网络中设置一个所有其他节点都连接到的静态点。在经典消息传递中,这是消息经纪人的工作。ZeroMQ 本身不带消息经纪人,但它允许我们很容易地构建中介。
你可能会想,如果所有网络最终都变得足够大而需要中介,为什么我们不直接为所有应用设置一个消息经纪人呢?对于初学者来说,这是一个合理的折衷方案。只要始终使用星形拓扑,忽略性能,通常就能正常工作。然而,消息经纪人是很贪婪的东西;作为中心中介,它们变得过于复杂、状态过多,最终成为一个问题。
最好将中介视为简单的无状态消息开关。一个好的类比是 HTTP 代理;它在那里,但没有任何特殊作用。添加一个发布/订阅代理解决了我们示例中的动态发现问题。我们将代理设置在网络的“中间”。代理打开一个 XSUB 套接字和一个 XPUB 套接字,并将它们分别绑定到众所周知的 IP 地址和端口。然后,所有其他进程都连接到代理,而不是相互连接。添加更多订阅者或发布者就变得微不足道了。

我们需要 XPUB 和 XSUB 套接字,因为 ZeroMQ 会将订阅从订阅者转发到发布者。XSUB 和 XPUB 与 SUB 和 PUB 完全相同,只是它们将订阅作为特殊消息暴露出来。代理必须将这些订阅消息从订阅者端转发到发布者端,方法是从 XPUB 套接字读取并将它们写入 XSUB 套接字。这是 XSUB 和 XPUB 的主要用例。
共享队列 (DEALER 和 ROUTER 套接字) #
在 Hello World 客户端/服务器应用中,我们有一个客户端与一个服务通信。然而,在实际案例中,我们通常需要允许多个服务以及多个客户端。这使得我们可以扩展服务的能力(许多线程或进程或节点而不仅仅是一个)。唯一的限制是服务必须是无状态的,所有状态都包含在请求中或在某些共享存储中,例如数据库。

有两种方法将多个客户端连接到多个服务器。蛮力方法是将每个客户端套接字连接到多个服务端点。一个客户端套接字可以连接到多个服务套接字,然后 REQ 套接字将在这些服务之间分发请求。假设你将一个客户端套接字连接到三个服务端点:A、B 和 C。客户端发出请求 R1、R2、R3、R4。R1 和 R4 发送到服务 A,R2 发送到 B,R3 发送到服务 C。
这种设计允许你廉价地添加更多客户端。你也可以添加更多服务。每个客户端会将其请求分发到这些服务。但是每个客户端都必须知道服务拓扑。如果你有 100 个客户端,然后决定再添加三个服务,你需要重新配置并重启 100 个客户端,以便客户端了解这三个新服务。
这显然不是我们在凌晨 3 点,当我们的超级计算集群资源耗尽,急需添加数百个新服务节点时想做的事情。太多静态部分就像液态混凝土:知识是分散的,静态部分越多,改变拓扑所需的努力就越大。我们想要的是客户端和服务之间有一个集中了所有拓扑知识的东西。理想情况下,我们应该能够在任何时候添加和移除服务或客户端,而不影响拓扑的任何其他部分。
因此,我们将编写一个小的消息队列经纪人,为我们提供这种灵活性。该经纪人绑定到两个端点,一个面向客户端的前端和一个面向服务的后端。然后它使用 zmq_poll() 监视这两个套接字的活动,当有活动时,就在两个套接字之间传递消息。它实际上并不显式管理任何队列——ZeroMQ 会在每个套接字上自动完成排队。
当你使用 REQ 与 REP 通信时,你会得到一个严格同步的请求-回复对话。客户端发送请求。服务读取请求并发送回复。客户端然后读取回复。如果客户端或服务尝试做任何其他事情(例如,连续发送两个请求而不等待响应),它们将收到错误。
但我们的经纪人必须是非阻塞的。显然,我们可以使用 zmq_poll() 来等待任一套接字上的活动,但我们不能使用 REP 和 REQ。

幸运的是,有两种套接字叫做 DEALER 和 ROUTER,它们允许你进行非阻塞的请求-响应。你将在第 3 章 - 高级请求-回复模式中看到 DEALER 和 ROUTER 套接字如何让你构建各种异步请求-回复流程。现在,我们只看看 DEALER 和 ROUTER 如何让我们通过一个中介(也就是我们的小经纪人)扩展 REQ-REP。
在这个简单的扩展请求-回复模式中,REQ 与 ROUTER 通信,DEALER 与 REP 通信。在 DEALER 和 ROUTER 之间,我们必须有代码(像我们的经纪人)将消息从一个套接字取出并推送到另一个套接字。
请求-回复经纪人绑定到两个端点,一个供客户端连接(前端套接字),一个供工作节点连接(后端套接字)。为了测试这个经纪人,你需要修改你的工作节点,使其连接到后端套接字。下面的客户端代码展示了我的意思:
rrclient: C 中的请求-回复客户端
// Hello World client
// Connects REQ socket to tcp://localhost:5559
// Sends "Hello" to server, expects "World" back
#include "zhelpers.h"
int main (void)
{
void *context = zmq_ctx_new ();
// Socket to talk to server
void *requester = zmq_socket (context, ZMQ_REQ);
zmq_connect (requester, "tcp://localhost:5559");
int request_nbr;
for (request_nbr = 0; request_nbr != 10; request_nbr++) {
s_send (requester, "Hello");
char *string = s_recv (requester);
printf ("Received reply %d [%s]\n", request_nbr, string);
free (string);
}
zmq_close (requester);
zmq_ctx_destroy (context);
return 0;
}
rrclient: C++ 中的请求-回复客户端
// Request-reply client in C++
// Connects REQ socket to tcp://localhost:5559
// Sends "Hello" to server, expects "World" back
//
#include "zhelpers.hpp"
int main (int argc, char *argv[])
{
zmq::context_t context(1);
zmq::socket_t requester(context, ZMQ_REQ);
requester.connect("tcp://localhost:5559");
for( int request = 0 ; request < 10 ; request++) {
s_send (requester, std::string("Hello"));
std::string string = s_recv (requester);
std::cout << "Received reply " << request
<< " [" << string << "]" << std::endl;
}
}
rrclient: CL 中的请求-回复客户端
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Hello World client in Common Lisp
;;; Connects REQ socket to tcp://localhost:5559
;;; Sends "Hello" to server, expects "World" back
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.rrclient
(:nicknames #:rrclient)
(:use #:cl #:zhelpers)
(:export #:main))
(in-package :zguide.rrclient)
(defun main ()
(zmq:with-context (context 1)
;; Socket to talk to server
(zmq:with-socket (requester context zmq:req)
(zmq:connect requester "tcp://localhost:5559")
(dotimes (request-nbr 10)
(let ((request (make-instance 'zmq:msg :data "Hello")))
(zmq:send requester request))
(let ((response (make-instance 'zmq:msg)))
(zmq:recv requester response)
(message "Received reply ~D: [~A]~%"
request-nbr (zmq:msg-data-as-string response))))))
(cleanup))
rrclient: Delphi 中的请求-回复客户端
program rrclient;
//
// Hello World client
// Connects REQ socket to tcp://localhost:5559
// Sends "Hello" to server, expects "World" back
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
requester: TZMQSocket;
i: Integer;
s: Utf8String;
begin
context := TZMQContext.Create;
// Socket to talk to server
requester := Context.Socket( stReq );
requester.connect( 'tcp://localhost:5559' );
for i := 0 to 9 do
begin
requester.send( 'Hello' );
requester.recv( s );
Writeln( Format( 'Received reply %d [%s]',[i, s] ) );
end;
requester.Free;
context.Free;
end.
rrclient: Erlang 中的请求-回复客户端
#! /usr/bin/env escript
%%
%% Hello World client
%% Connects REQ socket to tcp://localhost:5559
%% Sends "Hello" to server, expects "World" back
%%
main(_) ->
{ok, Context} = erlzmq:context(),
%% Socket to talk to server
{ok, Requester} = erlzmq:socket(Context, req),
ok = erlzmq:connect(Requester, "tcp://localhost:5559"),
lists:foreach(
fun(Num) ->
erlzmq:send(Requester, <<"Hello">>),
{ok, Reply} = erlzmq:recv(Requester),
io:format("Received reply ~b [~s]~n", [Num, Reply])
end, lists:seq(1, 10)),
ok = erlzmq:close(Requester),
ok = erlzmq:term(Context).
rrclient: Elixir 中的请求-回复客户端
defmodule Rrclient do
@moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:31
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, requester} = :erlzmq.socket(context, :req)
:ok = :erlzmq.connect(requester, 'tcp://localhost:5559')
:lists.foreach(fn num ->
:erlzmq.send(requester, "Hello")
{:ok, reply} = :erlzmq.recv(requester)
:io.format('Received reply ~b [~s]~n', [num, reply])
end, :lists.seq(1, 10))
:ok = :erlzmq.close(requester)
:ok = :erlzmq.term(context)
end
end
Rrclient.main()
rrclient: Go 中的请求-回复客户端
// Hello World client
// Connects REQ socket to tcp://localhost:5559
// Sends "Hello" to server, expects "World" back
//
// Author: Brendan Mc.
// Requires: http://github.com/alecthomas/gozmq
package main
import (
"fmt"
zmq "github.com/alecthomas/gozmq"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
// Socket to talk to server
requester, _ := context.NewSocket(zmq.REQ)
defer requester.Close()
requester.Connect("tcp://localhost:5559")
for i := 0; i < 10; i++ {
requester.Send([]byte("Hello"), 0)
reply, _ := requester.Recv(0)
fmt.Printf("Received reply %d [%s]\n", i, reply)
}
}
rrclient: Haskell 中的请求-回复客户端
{-# LANGUAGE OverloadedStrings #-}
-- |
-- Request/Reply Hello World with broker (p.50)
-- Connects REQ socket to tcp://localhost:5559
-- Sends "Hello" to server, expects "World" back
--
-- Use with `rrbroker.hs` and `rrworker.hs`
-- You need to start the broker first !
module Main where
import System.ZMQ4.Monadic
import Control.Monad (forM_)
import Data.ByteString.Char8 (unpack)
import Text.Printf
main :: IO ()
main =
runZMQ $ do
requester <- socket Req
connect requester "tcp://localhost:5559"
forM_ [1..10] $ \i -> do
send requester [] "Hello"
msg <- receive requester
liftIO $ printf "Received reply %d %s\n" (i ::Int) (unpack msg)
rrclient: Haxe 中的请求-回复客户端
package ;
import neko.Lib;
import haxe.io.Bytes;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQSocket;
/**
* Hello World Client
* Connects REQ socket to tcp://localhost:5559
* Sends "Hello" to server, expects "World" back
*
* See: https://zguide.zeromq.cn/page:all#A-Request-Reply-Broker
*
* Use with RrServer and RrBroker
*/
class RrClient
{
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println("** RrClient (see: https://zguide.zeromq.cn/page:all#A-Request-Reply-Broker)");
var requester:ZMQSocket = context.socket(ZMQ_REQ);
requester.connect ("tcp://localhost:5559");
Lib.println ("Launch and connect client.");
// Do 10 requests, waiting each time for a response
for (i in 0...10) {
var requestString = "Hello ";
// Send the message
requester.sendMsg(Bytes.ofString(requestString));
// Wait for the reply
var msg:Bytes = requester.recvMsg();
Lib.println("Received reply " + i + ": [" + msg.toString() + "]");
}
// Shut down socket and context
requester.close();
context.term();
}
}
rrclient: Java 中的请求-回复客户端
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;
import org.zeromq.ZContext;
/**
* Hello World client
* Connects REQ socket to tcp://localhost:5559
* Sends "Hello" to server, expects "World" back
*/
public class rrclient
{
public static void main(String[] args)
{
try (ZContext context = new ZContext()) {
// Socket to talk to server
Socket requester = context.createSocket(SocketType.REQ);
requester.connect("tcp://localhost:5559");
System.out.println("launch and connect client.");
for (int request_nbr = 0; request_nbr < 10; request_nbr++) {
requester.send("Hello", 0);
String reply = requester.recvStr(0);
System.out.println(
"Received reply " + request_nbr + " [" + reply + "]"
);
}
}
}
}
rrclient: Lua 中的请求-回复客户端
--
-- Hello World client
-- Connects REQ socket to tcp://localhost:5559
-- Sends "Hello" to server, expects "World" back
--
-- Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zhelpers"
local context = zmq.init(1)
-- Socket to talk to server
local requester = context:socket(zmq.REQ)
requester:connect("tcp://localhost:5559")
for n=0,9 do
requester:send("Hello")
local msg = requester:recv()
printf ("Received reply %d [%s]\n", n, msg)
end
requester:close()
context:term()
rrclient: Node.js 中的请求-回复客户端
// Hello World client in Node.js
// Connects REQ socket to tcp://localhost:5559
// Sends "Hello" to server, expects "World" back
var zmq = require('zeromq')
, requester = zmq.socket('req');
requester.connect('tcp://localhost:5559');
var replyNbr = 0;
requester.on('message', function(msg) {
console.log('got reply', replyNbr, msg.toString());
replyNbr += 1;
});
for (var i = 0; i < 10; ++i) {
requester.send("Hello");
}
rrclient: Perl 中的请求-回复客户端
# Hello world client in Perl
# Connects REQ socket to tcp://localhost:5559
# Sends "Hello" to server, expects "World" back
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_REQ);
my $context = ZMQ::FFI->new();
# Socket to talk to server
my $requester = $context->socket(ZMQ_REQ);
$requester->connect('tcp://localhost:5559');
for my $request_nbr (1..10) {
$requester->send("Hello");
my $string = $requester->recv();
say "Received reply $request_nbr [$string]";
}
rrclient: PHP 中的请求-回复客户端
<?php
/*
* Hello World client
* Connects REQ socket to tcp://localhost:5559
* Sends "Hello" to server, expects "World" back
* @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
*/
$context = new ZMQContext();
// Socket to talk to server
$requester = new ZMQSocket($context, ZMQ::SOCKET_REQ);
$requester->connect("tcp://localhost:5559");
for ($request_nbr = 0; $request_nbr < 10; $request_nbr++) {
$requester->send("Hello");
$string = $requester->recv();
printf ("Received reply %d [%s]%s", $request_nbr, $string, PHP_EOL);
}
rrclient: Request-reply client in Python
#
# Request-reply client in Python
# Connects REQ socket to tcp://localhost:5559
# Sends "Hello" to server, expects "World" back
#
import zmq
# Prepare our context and sockets
context = zmq.Context()
socket = context.socket(zmq.REQ)
socket.connect("tcp://localhost:5559")
# Do 10 requests, waiting each time for a response
for request in range(1, 11):
socket.send(b"Hello")
message = socket.recv()
print(f"Received reply {request} [{message}]")
rrclient: Request-reply client in Racket
#lang racket
#|
# Request-reply client in Racket
# Connects REQ socket to tcp://localhost:5559
# Sends "Hello" to server, expects "World" back
|#
(require net/zmq)
; Prepare our context and sockets
(define ctxt (context 1))
(define sock (socket ctxt 'REQ))
(socket-connect! sock "tcp://localhost:5559")
; Do 10 requests, waiting each time for a response
(for ([request (in-range 10)])
(printf "Sending request ~a...\n" request)
(socket-send! sock #"Hello")
; Get the reply.
(define message (socket-recv! sock))
(printf "Received reply ~a [~a]\n" request message))
(context-close! ctxt)
rrclient: Request-reply client in Ruby
#!/usr/bin/env ruby
# author: Oleg Sidorov <4pcbr> i4pcbr@gmail.com
# this code is licenced under the MIT/X11 licence.
require 'rubygems'
require 'ffi-rzmq'
context = ZMQ::Context.new
socket = context.socket(ZMQ::REQ)
socket.connect('tcp://localhost:5559')
10.times do |request|
string = "Hello #{request}"
socket.send_string(string)
puts "Sending string [#{string}]"
socket.recv_string(message = '')
puts "Received reply #{request} [#{message}]"
end
rrclient: Request-reply client in Rust
fn main() {
let context = zmq::Context::new();
let requester = context.socket(zmq::REQ).unwrap();
assert!(requester.connect("tcp://localhost:5559").is_ok());
for request_nbr in 0..10 {
requester.send("Hello", 0).unwrap();
let string = requester.recv_string(0).unwrap().unwrap();
println!("Received reply {} {}", request_nbr, string);
}
}
rrclient: Request-reply client in Scala
/*
* Hello World client in Scala
* Connects REQ socket to tcp://localhost:5559
* Sends "Hello" to server, expects "World" back
*
* @author Giovanni Ruggiero
* @email giovanni.ruggiero@gmail.com
*/
import org.zeromq.ZMQ
import org.zeromq.ZMQ.{Context,Socket}
object rrclient{
def main(args : Array[String]) {
// Prepare our context and socket
val context = ZMQ.context(1)
val requester = context.socket(ZMQ.REQ)
requester.connect ("tcp://localhost:5559")
for (request_nbr <- 1 to 10) {
val request = "Hello ".getBytes()
request(request.length-1)=0 //Sets the last byte to 0
// Send the message
println("Sending request " + request_nbr + "...")
requester.send(request, 0)
// Get the reply.
val reply = requester.recv(0)
// When displaying reply as a String, omit the last byte because
// our "Hello World" server has sent us a 0-terminated string:
println("Received reply " + request_nbr + ": [" + new String(reply,0,reply.length-1) + "]")
}
}
}
rrclient: Request-reply client in Tcl
#
# Hello World client
# Connects REQ socket to tcp://localhost:5559
# Sends "Hello" to server, expects "World" back
#
package require zmq
zmq context context
# Socket to talk to server
zmq socket requester context REQ
requester connect "tcp://localhost:5559"
for {set request_nbr 0} {$request_nbr < 10} { incr request_nbr} {
requester send "Hello"
set string [requester recv]
puts "Received reply $request_nbr \[$string\]"
}
requester close
context term
rrclient: Request-reply client in OCaml
(**
* Hello World client
* Connects REQ socket to tcp://localhost:5559
* Sends "Hello" to server, expects "World" back
*)
open Zmq
open Helpers
let () =
with_context @@ fun ctx ->
(* Socket to talk to server *)
with_socket ctx Socket.req @@ fun requester ->
Socket.connect requester "tcp://localhost:5559";
for requestNum = 0 to 9 do
Socket.send requester "Hello";
let s = Socket.recv requester in
printfn "Received reply %d [%S]" requestNum s;
done
Here is the worker code:
rrworker: Request-reply worker in C
// Hello World worker
// Connects REP socket to tcp://localhost:5560
// Expects "Hello" from client, replies with "World"
#include "zhelpers.h"
#include <unistd.h>
int main (void)
{
void *context = zmq_ctx_new ();
// Socket to talk to clients
void *responder = zmq_socket (context, ZMQ_REP);
zmq_connect (responder, "tcp://localhost:5560");
while (1) {
// Wait for next request from client
char *string = s_recv (responder);
printf ("Received request: [%s]\n", string);
free (string);
// Do some 'work'
sleep (1);
// Send reply back to client
s_send (responder, "World");
}
// We never get here, but clean up anyhow
zmq_close (responder);
zmq_ctx_destroy (context);
return 0;
}
rrworker: Request-reply worker in C++
//
// Request-reply service in C++
// Connects REP socket to tcp://localhost:5560
// Expects "Hello" from client, replies with "World"
//
#include <zmq.hpp>
#include <chrono>
#include <thread>
int main(int argc, char* argv[])
{
zmq::context_t context{1};
zmq::socket_t responder{context, zmq::socket_type::rep};
responder.connect("tcp://localhost:5560");
while (true) {
// Wait for next request from client
zmq::message_t request_msg;
auto recv_result = responder.recv(request_msg, zmq::recv_flags::none);
std::string string = request_msg.to_string();
std::cout << "Received request: " << string << std::endl;
// Do some 'work'
std::this_thread::sleep_for(std::chrono::seconds(1));
// Send reply back to client
zmq::message_t reply_msg{std::string{"World"}};
responder.send(reply_msg, zmq::send_flags::none);
}
}
rrworker: Request-reply worker in Common Lisp
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Hello World server in Common Lisp
;;; Connects REP socket to tcp://localhost:5560
;;; Expects "Hello" from client, replies with "World"
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.rrserver
(:nicknames #:rrserver)
(:use #:cl #:zhelpers)
(:export #:main))
(in-package :zguide.rrserver)
(defun main ()
(zmq:with-context (context 1)
;; Socket to talk to clients
(zmq:with-socket (responder context zmq:rep)
(zmq:connect responder "tcp://localhost:5560")
(loop
(let ((request (make-instance 'zmq:msg)))
;; Wait for next request from client
(zmq:recv responder request)
(message "Received request: [~A]~%"
(zmq:msg-data-as-string request))
;; Do some 'work'
(sleep 1)
;; Send reply back to client
(let ((reply (make-instance 'zmq:msg :data "World")))
(zmq:send responder reply))))))
(cleanup))
rrworker: Request-reply worker in Delphi
program rrserver;
//
// Hello World server
// Connects REP socket to tcp://*:5560
// Expects "Hello" from client, replies with "World"
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
responder: TZMQSocket;
s: Utf8String;
begin
context := TZMQContext.Create;
// Socket to talk to clients
responder := Context.Socket( stRep );
responder.connect( 'tcp://localhost:5560' );
while True do
begin
// Wait for next request from client
responder.recv( s );
Writeln( Format( 'Received request: [%s]', [ s ] ) );
// Do some 'work'
sleep( 1000 );
// Send reply back to client
responder.send( 'World' );
end;
// We never get here but clean up anyhow
responder.Free;
context.Free;
end.
rrworker: Request-reply worker in Erlang
#! /usr/bin/env escript
%%
%% Hello World server
%% Connects REP socket to tcp://localhost:5560
%% Expects "Hello" from client, replies with "World"
%%
main(_) ->
{ok, Context} = erlzmq:context(),
%% Socket to talk to clients
{ok, Responder} = erlzmq:socket(Context, rep),
ok = erlzmq:connect(Responder, "tcp://localhost:5560"),
loop(Responder),
%% We never get here but clean up anyhow
ok = erlzmq:close(Responder),
ok = erlzmq:term(Context).
loop(Socket) ->
%% Wait for next request from client
{ok, Req} = erlzmq:recv(Socket),
io:format("Received request: [~s]~n", [Req]),
%% Do some 'work'
timer:sleep(1000),
%% Send reply back to client
ok = erlzmq:send(Socket, <<"World">>),
loop(Socket).
rrworker: Request-reply worker in Elixir
defmodule Rrworker do
@moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:32
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, responder} = :erlzmq.socket(context, :rep)
:ok = :erlzmq.connect(responder, 'tcp://localhost:5560')
loop(responder)
:ok = :erlzmq.close(responder)
:ok = :erlzmq.term(context)
end
def loop(socket) do
{:ok, req} = :erlzmq.recv(socket)
:io.format('Received request: [~s]~n', [req])
:timer.sleep(1000)
:ok = :erlzmq.send(socket, "World")
loop(socket)
end
end
Rrworker.main()
rrworker: Request-reply worker in Go
// Hello World server
// Connects REP socket to tcp://*:5560
// Expects "Hello" from client, replies with "World"
//
// Author: Brendan Mc.
// Requires: http://github.com/alecthomas/gozmq
package main
import (
"fmt"
zmq "github.com/alecthomas/gozmq"
"time"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
// Socket to talk to clients
responder, _ := context.NewSocket(zmq.REP)
defer responder.Close()
responder.Connect("tcp://localhost:5560")
for {
// Wait for next request from client
request, _ := responder.Recv(0)
fmt.Printf("Received request: [%s]\n", request)
// Do some 'work'
time.Sleep(1 * time.Second)
// Send reply back to client
responder.Send([]byte("World"), 0)
}
}
rrworker: Request-reply worker in Haskell
{-# LANGUAGE OverloadedStrings #-}
-- |
-- A worker that simulates some work with a timeout
-- And send back "World"
-- Connect REP socket to tcp://*:5560
-- Expects "Hello" from client, replies with "World"
--
module Main where
import System.ZMQ4.Monadic
import Control.Monad (forever)
import Data.ByteString.Char8 (unpack)
import Control.Concurrent (threadDelay)
import Text.Printf
main :: IO ()
main =
runZMQ $ do
responder <- socket Rep
connect responder "tcp://localhost:5560"
forever $ do
receive responder >>= liftIO . printf "Received request: [%s]\n" . unpack
-- Simulate doing some 'work' for 1 second
liftIO $ threadDelay (1 * 1000 * 1000)
send responder [] "World"
rrworker: Request-reply worker in Haxe
package ;
import haxe.io.Bytes;
import haxe.Stack;
import neko.Lib;
import neko.Sys;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQException;
import org.zeromq.ZMQSocket;
/**
* Hello World server in Haxe
* Binds REP to tcp://*:5560
* Expects "Hello" from client, replies with "World"
* Use with RrClient.hx and RrBroker.hx
*
*/
class RrServer
{
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println("** RrServer (see: https://zguide.zeromq.cn/page:all#A-Request-Reply-Broker)");
// Socket to talk to clients
var responder:ZMQSocket = context.socket(ZMQ_REP);
responder.connect("tcp://localhost:5560");
Lib.println("Launch and connect server.");
ZMQ.catchSignals();
while (true) {
try {
// Wait for next request from client
var request:Bytes = responder.recvMsg();
trace ("Received request:" + request.toString());
// Do some work
Sys.sleep(1);
// Send reply back to client
responder.sendMsg(Bytes.ofString("World"));
} catch (e:ZMQException) {
if (ZMQ.isInterrupted()) {
break;
}
// Handle other errors
trace("ZMQException #:" + e.errNo + ", str:" + e.str());
trace (Stack.toString(Stack.exceptionStack()));
}
}
responder.close();
context.term();
}
}
rrworker: Request-reply worker in Java
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;
import org.zeromq.ZContext;
// Hello World worker
// Connects REP socket to tcp://*:5560
// Expects "Hello" from client, replies with "World"
public class rrworker
{
public static void main(String[] args) throws Exception
{
try (ZContext context = new ZContext()) {
// Socket to talk to server
Socket responder = context.createSocket(SocketType.REP);
responder.connect("tcp://localhost:5560");
while (!Thread.currentThread().isInterrupted()) {
// Wait for next request from client
String string = responder.recvStr(0);
System.out.printf("Received request: [%s]\n", string);
// Do some 'work'
Thread.sleep(1000);
// Send reply back to client
responder.send("World");
}
}
}
}
rrworker: Request-reply worker in Lua
--
-- Hello World server
-- Connects REP socket to tcp://*:5560
-- Expects "Hello" from client, replies with "World"
--
-- Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zhelpers"
local context = zmq.init(1)
-- Socket to talk to clients
local responder = context:socket(zmq.REP)
responder:connect("tcp://localhost:5560")
while true do
-- Wait for next request from client
local msg = responder:recv()
printf ("Received request: [%s]\n", msg)
-- Do some 'work'
s_sleep (1000)
-- Send reply back to client
responder:send("World")
end
-- We never get here but clean up anyhow
responder:close()
context:term()
rrworker: Request-reply worker in Node.js
// Hello World server in Node.js
// Connects REP socket to tcp://*:5560
// Expects "Hello" from client, replies with "World"
var zmq = require('zeromq')
, responder = zmq.socket('rep');
responder.connect('tcp://localhost:5560');
responder.on('message', function(msg) {
console.log('received request:', msg.toString());
setTimeout(function() {
responder.send("World");
}, 1000);
});
rrworker: Request-reply worker in Perl
# Hello world worker in Perl
# Connects REP socket to tcp://localhost:5560
# Expects "Hello" from client, replies with "World"
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_REP);
my $context = ZMQ::FFI->new();
# Socket to talk to clients
my $responder = $context->socket(ZMQ_REP);
$responder->connect('tcp://localhost:5560');
while (1) {
# Wait for next request from client
my $string = $responder->recv();
say "Received request: [$string]";
# Do some 'work'
sleep 1;
# Send reply back to client
$responder->send("World");
}
rrworker: Request-reply worker in PHP
<?php
/*
* Hello World server
* Connects REP socket to tcp://*:5560
* Expects "Hello" from client, replies with "World"
* @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
*/
$context = new ZMQContext();
// Socket to talk to clients
$responder = new ZMQSocket($context, ZMQ::SOCKET_REP);
$responder->connect("tcp://localhost:5560");
while (true) {
// Wait for next request from client
$string = $responder->recv();
printf ("Received request: [%s]%s", $string, PHP_EOL);
// Do some 'work'
sleep(1);
// Send reply back to client
$responder->send("World");
}
rrworker: Request-reply worker in Python
#
# Request-reply service in Python
# Connects REP socket to tcp://localhost:5560
# Expects "Hello" from client, replies with "World"
#
import zmq
context = zmq.Context()
socket = context.socket(zmq.REP)
socket.connect("tcp://localhost:5560")
while True:
message = socket.recv()
print(f"Received request: {message}")
socket.send(b"World")
rrworker: Request-reply worker in Racket
#lang racket
#|
Request-reply server in Racket
Binds REP socket to tcp://*:5560
Expects "Hello" from client, replies with "World"
|#
(require net/zmq)
(define ctxt (context 1))
(define sock (socket ctxt 'REP))
(socket-bind! sock "tcp://*:5560")
(let loop ()
(define message (socket-recv! sock))
(printf "Received request: ~a\n" message)
(sleep 1)
(socket-send! sock #"World")
(loop))
(context-close! ctxt)
rrworker: Request-reply worker in Ruby
#!/usr/bin/env ruby
# author: Oleg Sidorov <4pcbr> i4pcbr@gmail.com
# this code is licenced under the MIT/X11 licence.
require 'rubygems'
require 'ffi-rzmq'
context = ZMQ::Context.new
socket = context.socket(ZMQ::REP)
socket.connect('tcp://localhost:5560')
loop do
socket.recv_string(message = '')
puts "Received request: #{message}"
socket.send_string('World')
end
rrworker: Request-reply worker in Rust
use std::{thread, time};
fn main() {
let context = zmq::Context::new();
let responder = context.socket(zmq::REP).unwrap();
assert!(responder.connect("tcp://localhost:5560").is_ok());
loop {
let string = responder.recv_string(0).unwrap().unwrap();
println!("Received request: {}", string);
thread::sleep(time::Duration::from_secs(1));
responder.send("World", 0).unwrap();
}
}
rrworker: Request-reply worker in Scala
/*
* Hello World server in Scala
* Connects REP socket to tcp://localhost:5560
* Expects "Hello" from client, replies with "World"
*
* @author Giovanni Ruggiero
* @email giovanni.ruggiero@gmail.com
*
*/
import org.zeromq.ZMQ
import org.zeromq.ZMQ.{Context,Socket}
object rrserver {
def main(args : Array[String]) {
// Prepare our context and socket
val context = ZMQ.context(1)
val receiver = context.socket(ZMQ.REP)
receiver.connect("tcp://localhost:5560")
while (true) {
// Wait for next request from client
// We will wait for a 0-terminated string (C string) from the client,
// so that this server also works with The Guide's C and C++ "Hello World" clients
val request = receiver.recv (0)
// In order to display the 0-terminated string as a String,
// we omit the last byte from request
println ("Received request: [" + new String(request,0,request.length-1) // Creates a String from request, minus the last byte
+ "]")
// Do some 'work'
try {
Thread.sleep (1000)
} catch {
case e: InterruptedException => e.printStackTrace()
}
// Send reply back to client
// We will send a 0-terminated string (C string) back to the client,
// so that this server also works with The Guide's C and C++ "Hello World" clients
val reply = "World ".getBytes
reply(reply.length-1)=0 //Sets the last byte of the reply to 0
receiver.send(reply, 0)
}
}
}
rrworker: Request-reply worker in Tcl
#
# Hello World server
# Connects REP socket to tcp://*:5560
# Expects "Hello" from client, replies with "World"
#
package require zmq
zmq context context
# Socket to talk to clients
zmq socket responder context REP
responder connect "tcp://localhost:5560"
while {1} {
# Wait for next request from client
set string [responder recv]
puts "Received request: \[$string\]"
# Do some 'work'
after 1000;
# Send reply back to client
responder send "World"
}
# We never get here but clean up anyhow
responder close
context term
rrworker: Request-reply worker in OCaml
(**
* Hello World worker
* Connects REP socket to tcp://*:5560
* Expects "Hello" from client, replies with "World"
*)
open Zmq
open Helpers
let () =
with_context @@ fun ctx ->
(* Socket to talk to clients *)
with_socket ctx Socket.rep @@ fun responder ->
Socket.connect responder "tcp://localhost:5560";
while true do
(* Wait for next request from client *)
let s = Socket.recv responder in
printfn "Received request: [%S]" s;
(* Do some 'work' *)
sleep_ms 1000;
(* Send reply back to client *)
Socket.send responder "World";
done
And here is the broker code, which handles multipart messages correctly:
rrbroker: Request-reply broker in C
// Simple request-reply broker
#include "zhelpers.h"
int main (void)
{
// Prepare our context and sockets
void *context = zmq_ctx_new ();
void *frontend = zmq_socket (context, ZMQ_ROUTER);
void *backend = zmq_socket (context, ZMQ_DEALER);
zmq_bind (frontend, "tcp://*:5559");
zmq_bind (backend, "tcp://*:5560");
// Initialize poll set
zmq_pollitem_t items [] = {
{ frontend, 0, ZMQ_POLLIN, 0 },
{ backend, 0, ZMQ_POLLIN, 0 }
};
// Switch messages between sockets
while (1) {
zmq_msg_t message;
zmq_poll (items, 2, -1);
if (items [0].revents & ZMQ_POLLIN) {
while (1) {
// Process all parts of the message
zmq_msg_init (&message);
zmq_msg_recv (&message, frontend, 0);
int more = zmq_msg_more (&message);
zmq_msg_send (&message, backend, more? ZMQ_SNDMORE: 0);
zmq_msg_close (&message);
if (!more)
break; // Last message part
}
}
if (items [1].revents & ZMQ_POLLIN) {
while (1) {
// Process all parts of the message
zmq_msg_init (&message);
zmq_msg_recv (&message, backend, 0);
int more = zmq_msg_more (&message);
zmq_msg_send (&message, frontend, more? ZMQ_SNDMORE: 0);
zmq_msg_close (&message);
if (!more)
break; // Last message part
}
}
}
// We never get here, but clean up anyhow
zmq_close (frontend);
zmq_close (backend);
zmq_ctx_destroy (context);
return 0;
}
rrbroker: Request-reply broker in C++
//
// Simple request-reply broker in C++
//
#include "zhelpers.hpp"
int main (int argc, char *argv[])
{
// Prepare our context and sockets
zmq::context_t context(1);
zmq::socket_t frontend (context, ZMQ_ROUTER);
zmq::socket_t backend (context, ZMQ_DEALER);
frontend.bind("tcp://*:5559");
backend.bind("tcp://*:5560");
// Initialize poll set
zmq::pollitem_t items [] = {
{ frontend, 0, ZMQ_POLLIN, 0 },
{ backend, 0, ZMQ_POLLIN, 0 }
};
// Switch messages between sockets
while (1) {
zmq::message_t message;
int more; // Multipart detection
zmq::poll (&items [0], 2, -1);
if (items [0].revents & ZMQ_POLLIN) {
while (1) {
// Process all parts of the message
frontend.recv(&message);
// frontend.recv(message, zmq::recv_flags::none); // new syntax
size_t more_size = sizeof (more);
frontend.getsockopt(ZMQ_RCVMORE, &more, &more_size);
backend.send(message, more? ZMQ_SNDMORE: 0);
// more = frontend.get(zmq::sockopt::rcvmore); // new syntax
// backend.send(message, more? zmq::send_flags::sndmore : zmq::send_flags::none);
if (!more)
break; // Last message part
}
}
if (items [1].revents & ZMQ_POLLIN) {
while (1) {
// Process all parts of the message
backend.recv(&message);
size_t more_size = sizeof (more);
backend.getsockopt(ZMQ_RCVMORE, &more, &more_size);
frontend.send(message, more? ZMQ_SNDMORE: 0);
// more = backend.get(zmq::sockopt::rcvmore); // new syntax
//frontend.send(message, more? zmq::send_flags::sndmore : zmq::send_flags::none);
if (!more)
break; // Last message part
}
}
}
return 0;
}
rrbroker: Request-reply broker in Common Lisp
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Simple request-reply broker in Common Lisp
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.rrbroker
(:nicknames #:rrbroker)
(:use #:cl #:zhelpers)
(:export #:main))
(in-package :zguide.rrbroker)
(defun main ()
;; Prepare our context and sockets
(zmq:with-context (context 1)
(zmq:with-socket (frontend context zmq:router)
(zmq:with-socket (backend context zmq:dealer)
(zmq:bind frontend "tcp://*:5559")
(zmq:bind backend "tcp://*:5560")
;; Initialize poll set
(zmq:with-polls ((items . ((frontend . zmq:pollin)
(backend . zmq:pollin))))
;; Switch messages between sockets
(loop
(let ((revents (zmq:poll items)))
(when (= (first revents) zmq:pollin)
(loop
;; Process all parts of the message
(let ((message (make-instance 'zmq:msg)))
(zmq:recv frontend message)
(if (not (zerop (zmq:getsockopt frontend zmq:rcvmore)))
(zmq:send backend message zmq:sndmore)
(progn
(zmq:send backend message 0)
;; Last message part
(return))))))
(when (= (second revents) zmq:pollin)
(loop
;; Process all parts of the message
(let ((message (make-instance 'zmq:msg)))
(zmq:recv backend message)
(if (not (zerop (zmq:getsockopt backend zmq:rcvmore)))
(zmq:send frontend message zmq:sndmore)
(progn
(zmq:send frontend message 0)
;; Last message part
(return))))))))))))
(cleanup))
rrbroker: Request-reply broker in Delphi
program rrbroker;
//
// Simple request-reply broker
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
frontend,
backend: TZMQSocket;
poller: TZMQPoller;
msg: TZMQFrame;
more: Boolean;
begin
// Prepare our context and sockets
context := TZMQContext.Create;
frontend := Context.Socket( stRouter );
backend := Context.Socket( stDealer );
frontend.bind( 'tcp://*:5559' );
backend.bind( 'tcp://*:5560' );
// Initialize poll set
poller := TZMQPoller.Create( true );
poller.register( frontend, [pePollIn] );
poller.register( backend, [pePollIn] );
// Switch messages between sockets
while True do
begin
poller.poll;
more := true;
if pePollIn in poller.PollItem[0].revents then
while more do
begin
// Process all parts of the message
msg := TZMQFrame.Create;
frontend.recv( msg );
more := frontend.rcvMore;
if more then
backend.send( msg, [sfSndMore] )
else
backend.send( msg, [] );
end;
more := true;
if pePollIn in poller.PollItem[1].revents then
while more do
begin
// Process all parts of the message
msg := TZMQFrame.Create;
backend.recv( msg );
more := backend.rcvMore;
if more then
frontend.send( msg, [sfSndMore] )
else
frontend.send( msg, [] );
end;
end;
// We never get here but clean up anyhow
poller.Free;
frontend.Free;
backend.Free;
context.Free;
end.
rrbroker: Request-reply broker in Erlang
#! /usr/bin/env escript
%%
%% Simple request-reply broker
%%
main(_) ->
%% Prepare our context and sockets
{ok, Context} = erlzmq:context(),
{ok, Frontend} = erlzmq:socket(Context, [router, {active, true}]),
{ok, Backend} = erlzmq:socket(Context, [dealer, {active, true}]),
ok = erlzmq:bind(Frontend, "tcp://*:5559"),
ok = erlzmq:bind(Backend, "tcp://*:5560"),
%% Switch messages between sockets
loop(Frontend, Backend),
%% We never get here but clean up anyhow
ok = erlzmq:close(Frontend),
ok = erlzmq:close(Backend),
ok = erlzmq:term(Context).
loop(Frontend, Backend) ->
receive
{zmq, Frontend, Msg, Flags} ->
case proplists:get_bool(rcvmore, Flags) of
true ->
erlzmq:send(Backend, Msg, [sndmore]);
false ->
erlzmq:send(Backend, Msg)
end;
{zmq, Backend, Msg, Flags} ->
case proplists:get_bool(rcvmore, Flags) of
true ->
erlzmq:send(Frontend, Msg, [sndmore]);
false ->
erlzmq:send(Frontend, Msg)
end
end,
loop(Frontend, Backend).
rrbroker: Request-reply broker in Elixir
defmodule Rrbroker do
@moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:31
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, frontend} = :erlzmq.socket(context, [:router, {:active, true}])
{:ok, backend} = :erlzmq.socket(context, [:dealer, {:active, true}])
:ok = :erlzmq.bind(frontend, 'tcp://*:5559')
:ok = :erlzmq.bind(backend, 'tcp://*:5560')
loop(frontend, backend)
:ok = :erlzmq.close(frontend)
:ok = :erlzmq.close(backend)
:ok = :erlzmq.term(context)
end
def loop(frontend, backend) do
receive do
{:zmq, ^frontend, msg, flags} ->
case(:proplists.get_bool(:rcvmore, flags)) do
true ->
:erlzmq.send(backend, msg, [:sndmore])
false ->
:erlzmq.send(backend, msg)
end
{:zmq, ^backend, msg, flags} ->
case(:proplists.get_bool(:rcvmore, flags)) do
true ->
:erlzmq.send(frontend, msg, [:sndmore])
false ->
:erlzmq.send(frontend, msg)
end
end
loop(frontend, backend)
end
end
Rrbroker.main()
rrbroker: Request-reply broker in Go
// Simple request-reply broker
//
// Author: Brendan Mc.
// Requires: http://github.com/alecthomas/gozmq
package main
import (
zmq "github.com/alecthomas/gozmq"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
frontend, _ := context.NewSocket(zmq.ROUTER)
backend, _ := context.NewSocket(zmq.DEALER)
defer frontend.Close()
defer backend.Close()
frontend.Bind("tcp://*:5559")
backend.Bind("tcp://*:5560")
// Initialize poll set
toPoll := zmq.PollItems{
zmq.PollItem{Socket: frontend, Events: zmq.POLLIN},
zmq.PollItem{Socket: backend, Events: zmq.POLLIN},
}
for {
_, _ = zmq.Poll(toPoll, -1)
switch {
case toPoll[0].REvents&zmq.POLLIN != 0:
parts, _ := frontend.RecvMultipart(0)
backend.SendMultipart(parts, 0)
case toPoll[1].REvents&zmq.POLLIN != 0:
parts, _ := backend.RecvMultipart(0)
frontend.SendMultipart(parts, 0)
}
}
}
rrbroker: Request-reply broker in Haskell
-- |
-- Simple message queuing broker
--
-- Use it with `rrclient.hs` and `rrworker.hs`
module Main where
import System.ZMQ4.Monadic
import Control.Monad (forever)
import qualified Data.List.NonEmpty as NE -- from semigroups
main :: IO ()
main = runZMQ $ do
frontend <- socket Router
bind frontend "tcp://*:5559"
backend <- socket Dealer
bind backend "tcp://*:5560"
forever $ poll (-1) [ Sock frontend [In] (Just $ frontend >|> backend)
, Sock backend [In] (Just $ backend >|> frontend)
]
(>|>) :: (Receiver r, Sender s) => Socket z r -> Socket z s -> [Event] -> ZMQ z ()
(>|>) rcv snd _ = receiveMulti rcv >>= sendMulti snd . NE.fromList
rrbroker: Request-reply broker in Haxe
package ;
import haxe.io.Bytes;
import haxe.Stack;
import neko.Lib;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQPoller;
import org.zeromq.ZMQSocket;
import org.zeromq.ZMQException;
/**
* Simple request-reply broker
*
* Use with RrClient.hx and RrServer.hx
*/
class RrBroker
{
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println("** RrBroker (see: https://zguide.zeromq.cn/page:all#A-Request-Reply-Broker)");
var frontend:ZMQSocket = context.socket(ZMQ_ROUTER);
var backend:ZMQSocket = context.socket(ZMQ_DEALER);
frontend.bind("tcp://*:5559");
backend.bind("tcp://*:5560");
Lib.println("Launch and connect broker.");
// Initialise poll set
var items:ZMQPoller = context.poller();
items.registerSocket(frontend, ZMQ.ZMQ_POLLIN());
items.registerSocket(backend, ZMQ.ZMQ_POLLIN());
var more = false;
var msgBytes:Bytes;
ZMQ.catchSignals();
while (true) {
try {
items.poll();
if (items.pollin(1)) {
while (true) {
// receive message
msgBytes = frontend.recvMsg();
more = frontend.hasReceiveMore();
// broker it to backend
backend.sendMsg(msgBytes, { if (more) SNDMORE else null; } );
if (!more) break;
}
}
if (items.pollin(2)) {
while (true) {
// receive message
msgBytes = backend.recvMsg();
more = backend.hasReceiveMore();
// broker it to frontend
frontend.sendMsg(msgBytes, { if (more) SNDMORE else null; } );
if (!more) break;
}
}
} catch (e:ZMQException) {
if (ZMQ.isInterrupted()) {
break;
}
// Handle other errors
trace("ZMQException #:" + e.errNo + ", str:" + e.str());
trace (Stack.toString(Stack.exceptionStack()));
}
}
frontend.close();
backend.close();
context.term();
}
}
rrbroker: Request-reply broker in Java
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Poller;
import org.zeromq.ZMQ.Socket;
import org.zeromq.ZContext;
/**
* Simple request-reply broker
*
*/
public class rrbroker
{
public static void main(String[] args)
{
// Prepare our context and sockets
try (ZContext context = new ZContext()) {
Socket frontend = context.createSocket(SocketType.ROUTER);
Socket backend = context.createSocket(SocketType.DEALER);
frontend.bind("tcp://*:5559");
backend.bind("tcp://*:5560");
System.out.println("launch and connect broker.");
// Initialize poll set
Poller items = context.createPoller(2);
items.register(frontend, Poller.POLLIN);
items.register(backend, Poller.POLLIN);
boolean more = false;
byte[] message;
// Switch messages between sockets
while (!Thread.currentThread().isInterrupted()) {
// poll and memorize multipart detection
items.poll();
if (items.pollin(0)) {
while (true) {
// receive message
message = frontend.recv(0);
more = frontend.hasReceiveMore();
// Broker it
backend.send(message, more ? ZMQ.SNDMORE : 0);
if (!more) {
break;
}
}
}
if (items.pollin(1)) {
while (true) {
// receive message
message = backend.recv(0);
more = backend.hasReceiveMore();
// Broker it
frontend.send(message, more ? ZMQ.SNDMORE : 0);
if (!more) {
break;
}
}
}
}
}
}
}
rrbroker: Request-reply broker in Lua
--
-- Simple request-reply broker
--
-- Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zmq.poller"
require"zhelpers"
-- Prepare our context and sockets
local context = zmq.init(1)
local frontend = context:socket(zmq.ROUTER)
local backend = context:socket(zmq.DEALER)
frontend:bind("tcp://*:5559")
backend:bind("tcp://*:5560")
-- Switch messages between sockets
local poller = zmq.poller(2)
poller:add(frontend, zmq.POLLIN, function()
while true do
-- Process all parts of the message
local msg = frontend:recv()
if (frontend:getopt(zmq.RCVMORE) == 1) then
backend:send(msg, zmq.SNDMORE)
else
backend:send(msg, 0)
break; -- Last message part
end
end
end)
poller:add(backend, zmq.POLLIN, function()
while true do
-- Process all parts of the message
local msg = backend:recv()
if (backend:getopt(zmq.RCVMORE) == 1) then
frontend:send(msg, zmq.SNDMORE)
else
frontend:send(msg, 0)
break; -- Last message part
end
end
end)
-- start poller's event loop
poller:start()
-- We never get here but clean up anyhow
frontend:close()
backend:close()
context:term()
rrbroker: Request-Reply Broker in Node.js
// Simple request-reply broker in Node.js
var zmq = require('zeromq')
, frontend = zmq.socket('router')
, backend = zmq.socket('dealer');
frontend.bindSync('tcp://*:5559');
backend.bindSync('tcp://*:5560');
frontend.on('message', function() {
// Note that separate message parts come as function arguments.
var args = Array.apply(null, arguments);
// Pass array of strings/buffers to send multipart messages.
backend.send(args);
});
backend.on('message', function() {
var args = Array.apply(null, arguments);
frontend.send(args);
});
rrbroker: Request-Reply Broker in Objective-C
rrbroker: Request-Reply Broker in ooc
rrbroker: Request-Reply Broker in Perl
# Simple request-reply broker in Perl
# Uses AnyEvent to poll the sockets
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_ROUTER ZMQ_DEALER);
use AnyEvent;
use EV;
# Prepare our context and sockets
my $context = ZMQ::FFI->new();
my $frontend = $context->socket(ZMQ_ROUTER);
my $backend = $context->socket(ZMQ_DEALER);
$frontend->bind('tcp://*:5559');
$backend->bind('tcp://*:5560');
# Switch messages between sockets
my $frontend_poller = AE::io $frontend->get_fd, 0, sub {
while ($frontend->has_pollin) {
my @message = $frontend->recv_multipart();
$backend->send_multipart(\@message);
}
};
my $backend_poller = AE::io $backend->get_fd, 0, sub {
while ($backend->has_pollin) {
my @message = $backend->recv_multipart();
$frontend->send_multipart(\@message);
}
};
EV::run;
rrbroker: Request-Reply Broker in PHP
<?php
/*
* Simple request-reply broker
* @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
*/
// Prepare our context and sockets
$context = new ZMQContext();
$frontend = new ZMQSocket($context, ZMQ::SOCKET_ROUTER);
$backend = new ZMQSocket($context, ZMQ::SOCKET_DEALER);
$frontend->bind("tcp://*:5559");
$backend->bind("tcp://*:5560");
// Initialize poll set
$poll = new ZMQPoll();
$poll->add($frontend, ZMQ::POLL_IN);
$poll->add($backend, ZMQ::POLL_IN);
$readable = $writeable = array();
// Switch messages between sockets
while (true) {
$events = $poll->poll($readable, $writeable);
foreach ($readable as $socket) {
if ($socket === $frontend) {
// Process all parts of the message
while (true) {
$message = $socket->recv();
// Multipart detection
$more = $socket->getSockOpt(ZMQ::SOCKOPT_RCVMORE);
$backend->send($message, $more ? ZMQ::MODE_SNDMORE : null);
if (!$more) {
break; // Last message part
}
}
} elseif ($socket === $backend) {
// Process all parts of the message
while (true) {
$message = $socket->recv();
// Multipart detection
$more = $socket->getSockOpt(ZMQ::SOCKOPT_RCVMORE);
$frontend->send($message, $more ? ZMQ::MODE_SNDMORE : null);
if (!$more) {
break; // Last message part
}
}
}
}
}
rrbroker: Request-Reply Broker in Python
# Simple request-reply broker
#
# Author: Lev Givon <lev(at)columbia(dot)edu>
import zmq
# Prepare our context and sockets
context = zmq.Context()
frontend = context.socket(zmq.ROUTER)
backend = context.socket(zmq.DEALER)
frontend.bind("tcp://*:5559")
backend.bind("tcp://*:5560")
# Initialize poll set
poller = zmq.Poller()
poller.register(frontend, zmq.POLLIN)
poller.register(backend, zmq.POLLIN)
# Switch messages between sockets
while True:
socks = dict(poller.poll())
if socks.get(frontend) == zmq.POLLIN:
message = frontend.recv_multipart()
backend.send_multipart(message)
if socks.get(backend) == zmq.POLLIN:
message = backend.recv_multipart()
frontend.send_multipart(message)
rrbroker: Request-Reply Broker in Q
rrbroker: Request-Reply Broker in Racket
rrbroker: Request-Reply Broker in Ruby
#!/usr/bin/env ruby
# author: Oleg Sidorov <4pcbr> i4pcbr@gmail.com
# this code is licenced under the MIT/X11 licence.
require 'rubygems'
require 'ffi-rzmq'
context = ZMQ::Context.new
frontend = context.socket(ZMQ::ROUTER)
backend = context.socket(ZMQ::DEALER)
frontend.bind('tcp://*:5559')
backend.bind('tcp://*:5560')
poller = ZMQ::Poller.new
poller.register(frontend, ZMQ::POLLIN)
poller.register(backend, ZMQ::POLLIN)
loop do
poller.poll(:blocking)
poller.readables.each do |socket|
if socket === frontend
socket.recv_strings(messages = [])
backend.send_strings(messages)
elsif socket === backend
socket.recv_strings(messages = [])
frontend.send_strings(messages)
end
end
end
rrbroker: Request-Reply Broker in Rust
fn main() {
let context = zmq::Context::new();
let frontend = context.socket(zmq::ROUTER).unwrap();
let backend = context.socket(zmq::DEALER).unwrap();
assert!(frontend.bind("tcp://*:5559").is_ok());
assert!(backend.bind("tcp://*:5560").is_ok());
let items = &mut [
frontend.as_poll_item(zmq::POLLIN),
backend.as_poll_item(zmq::POLLIN),
];
loop {
zmq::poll(items, -1).unwrap();
if items[0].is_readable() {
loop {
let message = frontend.recv_msg(0).unwrap();
let more = if frontend.get_rcvmore().unwrap() {
zmq::SNDMORE
} else {
0
};
backend.send(message, more).unwrap();
if more == 0 {
break;
};
}
}
if items[1].is_readable() {
loop {
let message = backend.recv_msg(0).unwrap();
let more = if backend.get_rcvmore().unwrap() {
zmq::SNDMORE
} else {
0
};
frontend.send(message, more).unwrap();
if more == 0 {
break;
}
}
}
}
}
rrbroker: Request-Reply Broker in Scala
/*
* Simple request-reply broker
*
* @author Giovanni Ruggiero
* @email giovanni.ruggiero@gmail.com
*
*/
import org.zeromq.ZMQ
object rrbroker {
def main(args : Array[String]) {
// Prepare our context and sockets
val context = ZMQ.context(1)
val frontend = context.socket(ZMQ.ROUTER)
val backend = context.socket(ZMQ.DEALER)
frontend.bind("tcp://*:5559")
backend.bind("tcp://*:5560")
System.out.println("launch and connect broker.")
// Initialize poll set
val items = context.poller(2)
items.register(frontend, 1)
items.register(backend, 1)
var more = false
// Switch messages between sockets
while (!Thread.currentThread().isInterrupted()) {
// poll and memorize multipart detection
items.poll()
if (items.pollin(0)) {
do {
// receive message
val message = frontend.recv(0)
more = frontend.hasReceiveMore
// Broker it
if (more)
backend.send(message, ZMQ.SNDMORE)
else
backend.send(message, 0)
} while (more)
}
if (items.pollin(1)) {
do {
// receive message
val message = backend.recv(0)
more = backend.hasReceiveMore()
// Broker it
if (more)
frontend.send(message, ZMQ.SNDMORE)
else
frontend.send(message, 0)
} while (more)
}
}
// We never get here but clean up anyhow
frontend.close()
backend.close()
context.term()
}
}
rrbroker: Request-Reply Broker in Tcl
#
# Simple request-reply broker
#
package require zmq
# Prepare our context and sockets
zmq context context
zmq socket frontend context ROUTER
zmq socket backend context DEALER
frontend bind "tcp://*:5559"
backend bind "tcp://*:5560"
# Initialize poll set
set poll_set [list [list frontend [list POLLIN]] [list backend [list POLLIN]]]
# Switch messages between sockets
while {1} {
set rpoll_set [zmq poll $poll_set -1]
foreach rpoll $rpoll_set {
switch [lindex $rpoll 0] {
frontend {
if {"POLLIN" in [lindex $rpoll 1]} {
while {1} {
# Process all parts of the message
zmq message message
frontend recv_msg message
set more [frontend getsockopt RCVMORE]
backend send_msg message [expr {$more?"SNDMORE":""}]
message close
if {!$more} {
break ; # Last message part
}
}
}
}
backend {
if {"POLLIN" in [lindex $rpoll 1]} {
while {1} {
# Process all parts of the message
zmq message message
backend recv_msg message
set more [backend getsockopt RCVMORE]
frontend send_msg message [expr {$more?"SNDMORE":""}]
message close
if {!$more} {
break ; # Last message part
}
}
}
}
}
}
}
# We never get here but clean up anyhow
frontend close
backend close
context term
rrbroker: Request-Reply Broker in OCaml
(* Simple request-reply broker *)
open Zmq
open Helpers
let () =
(* Prepare our context and sockets *)
with_context @@ fun ctx ->
with_socket ctx Socket.router @@ fun frontend ->
with_socket ctx Socket.dealer @@ fun backend ->
Socket.bind frontend "tcp://*:5559";
Socket.bind backend "tcp://*:5560";
(* Create a router-dealer proxy *)
Proxy.create frontend backend;

Using a request-reply broker makes your client/server architectures easier to scale, because clients don't see workers, and workers don't see clients. The only static node is the broker in the middle.
ZeroMQ's Built-in Proxy Function #
It turns out that the core loop in the previous section's rrbroker is very useful, and reusable. It lets us build pub-sub forwarders, shared queues, and other little intermediaries with very little effort. ZeroMQ wraps this up in a single method, zmq_proxy():
zmq_proxy (frontend, backend, capture);
The two (or three, if we want to capture data) sockets must be properly connected, bound, and configured. When we call the zmq_proxy method, it's exactly like starting the main loop of rrbroker. Let's rewrite the request-reply broker to call zmq_proxy, and re-badge this as an expensive-sounding "message queue" (people have charged houses for code that did less):
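The listings below all pass NULL as the capture argument. As a hedged sketch of what capture does, here is a small pyzmq example; the inproc endpoint names and the PAIR capture socket are assumptions of this illustration, not part of the listings. The proxy writes a copy of every message it forwards to the capture socket:

```python
import threading
import zmq

context = zmq.Context.instance()

# The two sockets the proxy shuttles messages between
frontend = context.socket(zmq.ROUTER)
backend = context.socket(zmq.DEALER)
frontend.bind("inproc://frontend")
backend.bind("inproc://backend")

# Capture socket: the proxy sends a copy of every forwarded message here
capture = context.socket(zmq.PAIR)
capture.bind("inproc://capture")
listener = context.socket(zmq.PAIR)
listener.connect("inproc://capture")

# zmq_proxy() blocks, so run it in a background thread
threading.Thread(
    target=zmq.proxy, args=(frontend, backend, capture), daemon=True
).start()

# One REQ client and one REP worker on either side of the proxy
client = context.socket(zmq.REQ)
client.connect("inproc://frontend")
worker = context.socket(zmq.REP)
worker.connect("inproc://backend")

client.send(b"Hello")
assert worker.recv() == b"Hello"
worker.send(b"World")
assert client.recv() == b"World"

# The capture socket saw the request as it passed through; the ROUTER
# frontend prepends routing frames, so the message body is the last frame
frames = listener.recv_multipart()
print(frames[-1])  # -> b'Hello'
```

This is handy for wiretapping a broker during debugging: point the capture socket at a logger and every message crossing the proxy shows up there.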
msgqueue: Message Queue Broker in Ada
msgqueue: Message Queue Broker in Basic
msgqueue: Message Queue Broker in C
// Simple message queuing broker
// Same as request-reply broker but using shared queue proxy
#include "zhelpers.h"
int main (void)
{
void *context = zmq_ctx_new ();
// Socket facing clients
void *frontend = zmq_socket (context, ZMQ_ROUTER);
int rc = zmq_bind (frontend, "tcp://*:5559");
assert (rc == 0);
// Socket facing services
void *backend = zmq_socket (context, ZMQ_DEALER);
rc = zmq_bind (backend, "tcp://*:5560");
assert (rc == 0);
// Start the proxy
zmq_proxy (frontend, backend, NULL);
// We never get here...
zmq_close (frontend);
zmq_close (backend);
zmq_ctx_destroy (context);
return 0;
}
msgqueue: Message Queue Broker in C++
//
// Simple message queuing broker in C++
// Same as request-reply broker but using QUEUE device
//
#include "zhelpers.hpp"
int main (int argc, char *argv[])
{
zmq::context_t context(1);
// Socket facing clients
zmq::socket_t frontend (context, ZMQ_ROUTER);
frontend.bind("tcp://*:5559");
// Socket facing services
zmq::socket_t backend (context, ZMQ_DEALER);
backend.bind("tcp://*:5560");
// Start the proxy
zmq::proxy(static_cast<void*>(frontend),
static_cast<void*>(backend),
nullptr);
return 0;
}
msgqueue: Message Queue Broker in C#
msgqueue: Message Queue Broker in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Simple message queuing broker in Common Lisp
;;; Same as request-reply broker but using QUEUE device
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.msgqueue
(:nicknames #:msgqueue)
(:use #:cl #:zhelpers)
(:export #:main))
(in-package :zguide.msgqueue)
(defun main ()
(zmq:with-context (context 1)
;; Socket facing clients
(zmq:with-socket (frontend context zmq:router)
(zmq:bind frontend "tcp://*:5559")
;; Socket facing services
(zmq:with-socket (backend context zmq:dealer)
(zmq:bind backend "tcp://*:5560")
;; Start built-in device
(zmq:device zmq:queue frontend backend))))
(cleanup))
msgqueue: Message Queue Broker in Delphi
program msgqueue;
//
// Simple message queuing broker
// Same as request-reply broker but using shared queue proxy
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
frontend,
backend: TZMQSocket;
begin
context := TZMQContext.Create;
// Socket facing clients
frontend := Context.Socket( stRouter );
frontend.bind( 'tcp://*:5559' );
// Socket facing services
backend := Context.Socket( stDealer );
backend.bind( 'tcp://*:5560' );
// Start the proxy
ZMQProxy( frontend, backend, nil );
// We never get here...
frontend.Free;
backend.Free;
context.Free;
end.
msgqueue: Message Queue Broker in Erlang
#!/usr/bin/env escript
%%
%% Simple message queuing broker
%% Same as request-reply broker but using QUEUE device
%%
main(_) ->
{ok, Context} = erlzmq:context(),
%% Socket facing clients
{ok, Frontend} = erlzmq:socket(Context, [router, {active, true}]),
ok = erlzmq:bind(Frontend, "tcp://*:5559"),
%% Socket facing services
{ok, Backend} = erlzmq:socket(Context, [dealer, {active, true}]),
ok = erlzmq:bind(Backend, "tcp://*:5560"),
%% Start built-in device
erlzmq_device:queue(Frontend, Backend),
%% We never get here...
ok = erlzmq:close(Frontend),
ok = erlzmq:close(Backend),
ok = erlzmq:term(Context).
msgqueue: Message Queue Broker in Elixir
defmodule msgqueue do
@moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:26
"""
def main(_) do
{:ok, context} = :erlzmq.context()
{:ok, frontend} = :erlzmq.socket(context, [:router, {:active, true}])
:ok = :erlzmq.bind(frontend, 'tcp://*:5559')
{:ok, backend} = :erlzmq.socket(context, [:dealer, {:active, true}])
:ok = :erlzmq.bind(backend, 'tcp://*:5560')
:erlzmq_device.queue(frontend, backend)
:ok = :erlzmq.close(frontend)
:ok = :erlzmq.close(backend)
:ok = :erlzmq.term(context)
end
end
msgqueue: Message Queue Broker in F#
msgqueue: Message Queue Broker in Felix
msgqueue: Message Queue Broker in Go
// Simple message queuing broker
// Same as request-reply broker but using QUEUE device
//
// Author: Brendan Mc.
// Requires: http://github.com/alecthomas/gozmq
package main
import (
zmq "github.com/alecthomas/gozmq"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
// Socket facing clients
frontend, _ := context.NewSocket(zmq.ROUTER)
defer frontend.Close()
frontend.Bind("tcp://*:5559")
// Socket facing services
backend, _ := context.NewSocket(zmq.DEALER)
defer backend.Close()
backend.Bind("tcp://*:5560")
// Start built-in device
zmq.Device(zmq.QUEUE, frontend, backend)
// We never get here...
}
msgqueue: Message Queue Broker in Haskell
-- Simple message queuing broker
-- Same as request-reply broker but using shared queue proxy
module Main where
import System.ZMQ4.Monadic
main :: IO ()
main = runZMQ $ do
-- Socket facing clients
frontend <- socket Router
bind frontend "tcp://*:5559"
backend <- socket Dealer
bind backend "tcp://*:5560"
-- Start the proxy
proxy frontend backend Nothing
msgqueue: Message Queue Broker in Haxe
package ;
import org.zeromq.ZMQ;
import org.zeromq.ZMQSocket;
import org.zeromq.ZMQDevice;
import org.zeromq.ZContext;
import neko.Lib;
/**
* Simple message queuing broker
* Same as request-reply broker but using QUEUE device
* See: https://zguide.zeromq.cn/page:all#Built-in-Devices
*
* Use with RrClient and RrServer
*/
class MsgQueue
{
public static function main() {
var context:ZContext = new ZContext();
Lib.println("** MsgQueue (see: https://zguide.zeromq.cn/page:all#Built-in-Devices)");
// Socket facing clients
var frontend:ZMQSocket = context.createSocket(ZMQ_ROUTER);
frontend.bind("tcp://*:5559");
// Socket facing services
var backend:ZMQSocket = context.createSocket(ZMQ_DEALER);
backend.bind("tcp://*:5560");
// Start build-in device
var device = new ZMQDevice(ZMQ_QUEUE, frontend, backend);
// We never get here
context.destroy();
}
}
msgqueue: Message Queue Broker in Java
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;
import org.zeromq.ZContext;
/**
* Simple message queuing broker
* Same as request-reply broker but using QUEUE device.
*/
public class msgqueue
{
public static void main(String[] args)
{
// Prepare our context and sockets
try (ZContext context = new ZContext()) {
// Socket facing clients
Socket frontend = context.createSocket(SocketType.ROUTER);
frontend.bind("tcp://*:5559");
// Socket facing services
Socket backend = context.createSocket(SocketType.DEALER);
backend.bind("tcp://*:5560");
// Start the proxy
ZMQ.proxy(frontend, backend, null);
}
}
}
msgqueue: Message Queue Broker in Julia
msgqueue: Message Queue Broker in Lua
--
-- Simple message queuing broker
-- Same as request-reply broker but using QUEUE device
--
-- Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zhelpers"
local context = zmq.init(1)
-- Socket facing clients
local frontend = context:socket(zmq.ROUTER)
frontend:bind("tcp://*:5559")
-- Socket facing services
local backend = context:socket(zmq.DEALER)
backend:bind("tcp://*:5560")
-- Start built-in device
zmq.device(zmq.QUEUE, frontend, backend)
-- We never get here...
frontend:close()
backend:close()
context:term()
msgqueue: Message Queue Broker in Node.js
// Simple message queuing broker
// Same as request-reply broker but using shared queue proxy
var zmq = require('zeromq');
// Socket facing clients
var frontend = zmq.socket('router');
console.log('binding frontend...');
frontend.bindSync('tcp://*:5559');
// Socket facing services
var backend = zmq.socket('dealer');
console.log('binding backend...');
backend.bindSync('tcp://*:5560');
// Start the proxy
console.log('starting proxy...');
zmq.proxy(frontend, backend, null);
process.on('SIGINT', function() {
frontend.close();
backend.close();
});
msgqueue: Message Queue Broker in Objective-C
msgqueue: Message Queue Broker in ooc
msgqueue: Message Queue Broker in Perl
# Simple message queuing broker in Perl
# Same as request-reply broker but using shared queue proxy
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_ROUTER ZMQ_DEALER);
my $context = ZMQ::FFI->new();
# Socket facing clients
my $frontend = $context->socket(ZMQ_ROUTER);
$frontend->bind('tcp://*:5559');
# Socket facing services
my $backend = $context->socket(ZMQ_DEALER);
$backend->bind('tcp://*:5560');
# Start the proxy
$context->proxy($frontend, $backend);
# We never get here...
msgqueue: Message Queue Broker in PHP
<?php
/*
* Simple message queuing broker
* Same as request-reply broker but using QUEUE device
* @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
*/
$context = new ZMQContext();
// Socket facing clients
$frontend = $context->getSocket(ZMQ::SOCKET_ROUTER);
$frontend->bind("tcp://*:5559");
// Socket facing services
$backend = $context->getSocket(ZMQ::SOCKET_DEALER);
$backend->bind("tcp://*:5560");
// Start built-in device
$device = new ZMQDevice($frontend, $backend);
$device->run();
// We never get here...
msgqueue: Message Queue Broker in Python
"""
Simple message queuing broker
Same as request-reply broker but using ``zmq.proxy``
Author: Guillaume Aubert (gaubert) <guillaume(dot)aubert(at)gmail(dot)com>
"""
import zmq
def main():
""" main method """
context = zmq.Context()
# Socket facing clients
frontend = context.socket(zmq.ROUTER)
frontend.bind("tcp://*:5559")
# Socket facing services
backend = context.socket(zmq.DEALER)
backend.bind("tcp://*:5560")
zmq.proxy(frontend, backend)
# We never get here...
frontend.close()
backend.close()
context.term()
if __name__ == "__main__":
main()
msgqueue: Message Queue Broker in Q
// Simple message queuing broker
// Same as request-reply broker but using QUEUE device
\l qzmq.q
ctx:zctx.new[]
// Socket facing clients
frontend:zsocket.new[ctx; zmq.ROUTER]
frontport:zsocket.bind[frontend; `$"tcp://*:5559"]
// Socket facing services
backend:zsocket.new[ctx; zmq.DEALER]
backport:zsocket.bind[backend; `$"tcp://*:5560"]
// Start built-in device
rc:libzmq.device[zmq.QUEUE; frontend; backend]
// We never get here…
zsocket.destroy[ctx; frontend]
zsocket.destroy[ctx; backend]
zctx.destroy[ctx]
\\
msgqueue: Message Queue Broker in Racket
msgqueue: Message Queue Broker in Ruby
#!/usr/bin/env ruby
#
# Simple message queuing broker
# Same as request-reply broker but using QUEUE device
#
require 'rubygems'
require 'ffi-rzmq'
context = ZMQ::Context.new
# Socket facing clients
frontend = context.socket(ZMQ::ROUTER)
frontend.bind('tcp://*:5559')
# Socket facing services
backend = context.socket(ZMQ::DEALER)
backend.bind('tcp://*:5560')
# Start built-in device
poller = ZMQ::Device.new(frontend,backend)
msgqueue: Message Queue Broker in Rust
fn main() {
let context = zmq::Context::new();
let frontend = context.socket(zmq::ROUTER).unwrap();
assert!(frontend.bind("tcp://*:5559").is_ok());
let backend = context.socket(zmq::DEALER).unwrap();
assert!(backend.bind("tcp://*:5560").is_ok());
zmq::proxy(&frontend, &backend).unwrap();
}
msgqueue: Message Queue Broker in Scala
msgqueue: Message Queue Broker in Tcl
#
# Simple message queuing broker
# Same as request-reply broker but using QUEUE device
#
package require zmq
zmq context context
# Socket facing clients
zmq socket frontend context ROUTER
frontend bind "tcp://*:5559"
# Socket facing services
zmq socket backend context DEALER
backend bind "tcp://*:5560"
# Start built-in device
zmq device QUEUE frontend backend
# We never get here…
frontend close
backend close
context term
msgqueue: Message Queue Broker in OCaml
If you're like most ZeroMQ users, at this stage your mind is starting to think, "What happens if I plug random socket types into the proxy?" The short answer is: try it and work out what is happening. In practice, you would usually stick to ROUTER/DEALER, XSUB/XPUB, or PULL/PUSH.
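For instance, a PULL frontend with a PUSH backend turns the very same proxy call into a pipeline "streamer" that collects work from producers and fans it out to consumers. A minimal sketch, assuming pyzmq and inproc endpoint names invented for this illustration:

```python
import threading
import zmq

context = zmq.Context.instance()

# PULL faces producers, PUSH faces consumers: a pipeline "streamer"
frontend = context.socket(zmq.PULL)
backend = context.socket(zmq.PUSH)
frontend.bind("inproc://tasks-in")
backend.bind("inproc://tasks-out")

# The proxy blocks, so run it in a background thread
threading.Thread(target=zmq.proxy, args=(frontend, backend), daemon=True).start()

producer = context.socket(zmq.PUSH)
producer.connect("inproc://tasks-in")
consumer = context.socket(zmq.PULL)
consumer.connect("inproc://tasks-out")

for i in range(3):
    producer.send_string(f"task {i}")

# With a single producer and a single consumer, ordering is preserved
received = [consumer.recv_string() for _ in range(3)]
print(received)  # -> ['task 0', 'task 1', 'task 2']
```

Swap in more consumers and the backend PUSH socket load-balances tasks among them, exactly like the ventilator pattern from Chapter 1 but with a static node in the middle.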
Transport Bridging #
A frequent request from ZeroMQ users is, "How do I connect my ZeroMQ network with technology X?" where X is some other networking or messaging technology.

The simple answer is to build a bridge. A bridge is a small application that speaks one protocol at one socket and converts to/from a second protocol at another socket. A protocol interpreter, if you like. A common bridging problem in ZeroMQ is to bridge two transports or networks.
As an example, we're going to write a little proxy that sits in between a publisher and a set of subscribers, bridging two networks. The frontend socket (SUB) faces the internal network where the weather server is sitting, and the backend socket (PUB) faces subscribers on the external network. It subscribes to the weather service on the frontend socket, and republishes its data on the backend socket.
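One detail worth noting in the C listing: it uses XSUB/XPUB rather than plain SUB/PUB, because subscriptions made by external subscribers must travel upstream through the proxy to the publisher, and only the X variants expose those subscriptions as ordinary messages. A hedged sketch of that mechanic in pyzmq (the inproc endpoint and topic are assumptions of this illustration):

```python
import zmq

context = zmq.Context.instance()

# XPUB behaves like PUB, but subscription requests arrive as readable messages
xpub = context.socket(zmq.XPUB)
xpub.bind("inproc://weather")

sub = context.socket(zmq.SUB)
sub.connect("inproc://weather")
sub.setsockopt(zmq.SUBSCRIBE, b"10001")

# The subscription travels upstream: a 0x01 byte (subscribe) plus the topic
event = xpub.recv()
print(event)  # -> b'\x0110001'
```

A proxy built from XSUB/XPUB can therefore relay these subscription messages to the real publisher, so it only sends the topics someone downstream actually wants.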
wuproxy: Weather Update Proxy in Ada
wuproxy: Weather Update Proxy in Basic
wuproxy: Weather Update Proxy in C
// Weather proxy device
#include "zhelpers.h"
int main (void)
{
void *context = zmq_ctx_new ();
// This is where the weather server sits
void *frontend = zmq_socket (context, ZMQ_XSUB);
zmq_connect (frontend, "tcp://192.168.55.210:5556");
// This is our public endpoint for subscribers
void *backend = zmq_socket (context, ZMQ_XPUB);
zmq_bind (backend, "tcp://10.1.1.0:8100");
// Run the proxy until the user interrupts us
zmq_proxy (frontend, backend, NULL);
zmq_close (frontend);
zmq_close (backend);
zmq_ctx_destroy (context);
return 0;
}
wuproxy: Weather Update Proxy in C++
//
// Weather proxy device C++
//
#include "zhelpers.hpp"
int main (int argc, char *argv[])
{
zmq::context_t context(1);
// This is where the weather server sits
zmq::socket_t frontend(context, ZMQ_SUB);
frontend.connect("tcp://192.168.55.210:5556");
// This is our public endpoint for subscribers
zmq::socket_t backend (context, ZMQ_PUB);
backend.bind("tcp://10.1.1.0:8100");
// Subscribe on everything
frontend.set(zmq::sockopt::subscribe, "");
// Shunt messages out to our own subscribers
while (1) {
while (1) {
zmq::message_t message;
int more;
size_t more_size = sizeof (more);
// Process all parts of the message
frontend.recv(&message);
frontend.getsockopt( ZMQ_RCVMORE, &more, &more_size);
backend.send(message, more? ZMQ_SNDMORE: 0);
if (!more)
break; // Last message part
}
}
return 0;
}
wuproxy: Weather Update Proxy in C#
wuproxy: Weather Update Proxy in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Weather proxy device in Common Lisp
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.wuproxy
(:nicknames #:wuproxy)
(:use #:cl #:zhelpers)
(:export #:main))
(in-package :zguide.wuproxy)
(defun main ()
(zmq:with-context (context 1)
;; This is where the weather server sits
(zmq:with-socket (frontend context zmq:sub)
(zmq:connect frontend "tcp://192.168.55.210:5556")
;; This is our public endpoint for subscribers
(zmq:with-socket (backend context zmq:pub)
(zmq:bind backend "tcp://10.1.1.0:8100")
;; Subscribe on everything
(zmq:setsockopt frontend zmq:subscribe "")
;; Shunt messages out to our own subscribers
(loop
(loop
;; Process all parts of the message
(let ((message (make-instance 'zmq:msg)))
(zmq:recv frontend message)
(if (not (zerop (zmq:getsockopt frontend zmq:rcvmore)))
(zmq:send backend message zmq:sndmore)
(progn
(zmq:send backend message 0)
;; Last message part
(return)))))))))
(cleanup))
wuproxy: Weather Update Proxy in Delphi
program wuproxy;
//
// Weather proxy device
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
frontend,
backend: TZMQSocket;
begin
context := TZMQContext.Create;
// This is where the weather server sits
frontend := Context.Socket( stXSub );
frontend.connect( 'tcp://192.168.55.210:5556' );
// This is our public endpoint for subscribers
backend := Context.Socket( stXPub );
backend.bind( 'tcp://10.1.1.0:8100' );
// Run the proxy until the user interrupts us
ZMQProxy( frontend, backend, nil );
frontend.Free;
backend.Free;
context.Free;
end.
wuproxy: Weather Update Proxy in Erlang
#! /usr/bin/env escript
%%
%% Weather proxy device
%%
main(_) ->
{ok, Context} = erlzmq:context(),
%% This is where the weather server sits
{ok, Frontend} = erlzmq:socket(Context, sub),
ok = erlzmq:connect(Frontend, "tcp://localhost:5556"),
%% This is our public endpoint for subscribers
{ok, Backend} = erlzmq:socket(Context, pub),
ok = erlzmq:bind(Backend, "tcp://*:8100"),
%% Subscribe on everything
ok = erlzmq:setsockopt(Frontend, subscribe, <<>>),
%% Shunt messages out to our own subscribers
loop(Frontend, Backend),
%% We don't actually get here but if we did, we'd shut down neatly
ok = erlzmq:close(Frontend),
ok = erlzmq:close(Backend),
ok = erlzmq:term(Context).
loop(Frontend, Backend) ->
{ok, Msg} = erlzmq:recv(Frontend),
case erlzmq:getsockopt(Frontend, rcvmore) of
{ok, true} -> erlzmq:send(Backend, Msg, [sndmore]);
{ok, false} -> erlzmq:send(Backend, Msg)
end,
loop(Frontend, Backend).
wuproxy: Weather Update Proxy in Elixir
defmodule Wuproxy do
@moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:39
"""
def main(_) do
{:ok, context} = :erlzmq.context()
{:ok, frontend} = :erlzmq.socket(context, :sub)
:ok = :erlzmq.connect(frontend, 'tcp://localhost:5556')
{:ok, backend} = :erlzmq.socket(context, :pub)
:ok = :erlzmq.bind(backend, 'tcp://*:8100')
:ok = :erlzmq.setsockopt(frontend, :subscribe, <<>>)
loop(frontend, backend)
:ok = :erlzmq.close(frontend)
:ok = :erlzmq.close(backend)
:ok = :erlzmq.term(context)
end
def loop(frontend, backend) do
{:ok, msg} = :erlzmq.recv(frontend)
case(:erlzmq.getsockopt(frontend, :rcvmore)) do
{:ok, true} ->
:erlzmq.send(backend, msg, [:sndmore])
{:ok, false} ->
:erlzmq.send(backend, msg)
{:ok, 0} ->
:erlzmq.send(backend, msg)
end
loop(frontend, backend)
end
end
Wuproxy.main(:ok)
wuproxy: Weather Update Proxy in F#
wuproxy: Weather Update Proxy in Felix
wuproxy: Weather Update Proxy in Go
// Weather proxy device
//
// Author: Brendan Mc.
// Requires: http://github.com/alecthomas/gozmq
package main
import (
zmq "github.com/alecthomas/gozmq"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
// This is where the weather server sits
frontend, _ := context.NewSocket(zmq.SUB)
defer frontend.Close()
frontend.Connect("tcp://localhost:5556")
// This is our public endpoint for subscribers
backend, _ := context.NewSocket(zmq.PUB)
defer backend.Close()
backend.Bind("tcp://*:8100")
// Subscribe on everything
frontend.SetSubscribe("")
// Shunt messages out to our own subscribers
for {
message, _ := frontend.Recv(0)
backend.Send(message, 0)
}
}
wuproxy: Weather Update Proxy in Haskell
-- Weather proxy device
module Main where
import System.ZMQ4.Monadic
main :: IO ()
main = runZMQ $ do
-- This is where the weather service sits
frontend <- socket XSub
connect frontend "tcp://192.168.55.210:5556"
-- This is our public endpoint for subscribers
backend <- socket XPub
bind backend "tcp://10.1.1.0:8100"
-- Run the proxy until the user interrupts us
proxy frontend backend Nothing
wuproxy: Weather Update Proxy in Haxe
package ;
import haxe.io.Bytes;
import haxe.Stack;
import neko.Lib;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQSocket;
import org.zeromq.ZMQException;
/**
* Weather proxy device.
*
* See: https://zguide.zeromq.cn/page:all#A-Publish-Subscribe-Proxy-Server
*
* Use with WUClient and WUServer
*/
class WUProxy
{
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println("** WUProxy (see: https://zguide.zeromq.cn/page:all#A-Publish-Subscribe-Proxy-Server)");
// This is where the weather service sits
var frontend:ZMQSocket = context.socket(ZMQ_SUB);
frontend.connect("tcp://localhost:5556");
// This is our public endpoint for subscribers
var backend:ZMQSocket = context.socket(ZMQ_PUB);
backend.bind("tcp://10.1.1.0:8100");
// Subscribe on everything
frontend.setsockopt(ZMQ_SUBSCRIBE, Bytes.ofString(""));
var more = false;
var msgBytes:Bytes;
ZMQ.catchSignals();
var stopped = false;
while (!stopped) {
try {
msgBytes = frontend.recvMsg();
more = frontend.hasReceiveMore();
// proxy it; keep the loop running until we are interrupted
backend.sendMsg(msgBytes, { if (more) SNDMORE else null; } );
} catch (e:ZMQException) {
if (ZMQ.isInterrupted()) {
stopped = true;
} else {
// Handle other errors
trace("ZMQException #:" + e.errNo + ", str:" + e.str());
trace (Stack.toString(Stack.exceptionStack()));
}
}
}
frontend.close();
backend.close();
context.term();
}
}
wuproxy: Weather Update Proxy in Java
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;
import org.zeromq.ZContext;
/**
* Weather proxy device.
*/
public class wuproxy
{
public static void main(String[] args)
{
// Prepare our context and sockets
try (ZContext context = new ZContext()) {
// This is where the weather server sits
Socket frontend = context.createSocket(SocketType.SUB);
frontend.connect("tcp://192.168.55.210:5556");
// This is our public endpoint for subscribers
Socket backend = context.createSocket(SocketType.PUB);
backend.bind("tcp://10.1.1.0:8100");
// Subscribe on everything
frontend.subscribe(ZMQ.SUBSCRIPTION_ALL);
// Run the proxy until the user interrupts us
ZMQ.proxy(frontend, backend, null);
}
}
}
wuproxy: Weather Update Proxy in Julia
wuproxy: Weather Update Proxy in Lua
--
-- Weather proxy device
--
-- Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
local context = zmq.init(1)
-- This is where the weather server sits
local frontend = context:socket(zmq.SUB)
frontend:connect(arg[1] or "tcp://192.168.55.210:5556")
-- This is our public endpoint for subscribers
local backend = context:socket(zmq.PUB)
backend:bind(arg[2] or "tcp://10.1.1.0:8100")
-- Subscribe on everything
frontend:setopt(zmq.SUBSCRIBE, "")
-- Shunt messages out to our own subscribers
while true do
while true do
-- Process all parts of the message
local message = frontend:recv()
if frontend:getopt(zmq.RCVMORE) == 1 then
backend:send(message, zmq.SNDMORE)
else
backend:send(message)
break -- Last message part
end
end
end
-- We don't actually get here but if we did, we'd shut down neatly
frontend:close()
backend:close()
context:term()
wuproxy: Weather Update Proxy in Node.js
// Weather proxy device in Node.js
var zmq = require('zeromq')
, frontend = zmq.socket('sub')
, backend = zmq.socket('pub');
backend.bindSync("tcp://10.1.1.0:8100");
frontend.subscribe('');
frontend.connect("tcp://192.168.55.210:5556");
frontend.on('message', function() {
// all parts of a message come as function arguments
var args = Array.apply(null, arguments);
backend.send(args);
});
wuproxy: Weather Update Proxy in Objective-C
wuproxy: Weather Update Proxy in ooc
wuproxy: Weather Update Proxy in Perl
# Weather proxy device in Perl
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_XSUB ZMQ_XPUB);
my $context = ZMQ::FFI->new();
# This is where the weather server sits
my $frontend = $context->socket(ZMQ_XSUB);
$frontend->connect('tcp://192.168.55.210:5556');
# This is our public endpoint for subscribers
my $backend = $context->socket(ZMQ_XPUB);
$backend->bind('tcp://10.1.1.0:8100');
# Run the proxy until the user interrupts us
$context->proxy($frontend, $backend);
wuproxy: Weather Update Proxy in PHP
<?php
/*
* Weather proxy device
* @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
*/
$context = new ZMQContext();
// This is where the weather server sits
$frontend = new ZMQSocket($context, ZMQ::SOCKET_SUB);
$frontend->connect("tcp://192.168.55.210:5556");
// This is our public endpoint for subscribers
$backend = new ZMQSocket($context, ZMQ::SOCKET_PUB);
$backend->bind("tcp://10.1.1.0:8100");
// Subscribe on everything
$frontend->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, "");
// Shunt messages out to our own subscribers
while (true) {
while (true) {
// Process all parts of the message
$message = $frontend->recv();
$more = $frontend->getSockOpt(ZMQ::SOCKOPT_RCVMORE);
$backend->send($message, $more ? ZMQ::MODE_SNDMORE : 0);
if (!$more) {
break; // Last message part
}
}
}
wuproxy: Weather Update Proxy in Python
# Weather proxy device
#
# Author: Lev Givon <lev(at)columbia(dot)edu>
import zmq
context = zmq.Context()
# This is where the weather server sits
frontend = context.socket(zmq.SUB)
frontend.connect("tcp://192.168.55.210:5556")
# This is our public endpoint for subscribers
backend = context.socket(zmq.PUB)
backend.bind("tcp://10.1.1.0:8100")
# Subscribe on everything
frontend.setsockopt(zmq.SUBSCRIBE, b'')
# Shunt messages out to our own subscribers
while True:
# Process all parts of the message
message = frontend.recv_multipart()
backend.send_multipart(message)
wuproxy: Weather Update Proxy in Q
wuproxy: Weather Update Proxy in Racket
wuproxy: Weather Update Proxy in Ruby
#!/usr/bin/env ruby
#
# Weather proxy device
#
require "rubygems"
require 'ffi-rzmq'
context = ZMQ::Context.new(1)
# This is where the weather server sits
frontend = context.socket(ZMQ::SUB)
frontend.connect("tcp://192.168.55.210:5556")
# This is our public endpoint for subscribers
backend = context.socket(ZMQ::PUB)
backend.bind("tcp://10.1.1.0:8100")
# Subscribe on everything
frontend.setsockopt(ZMQ::SUBSCRIBE,"")
loop do
loop do
# Process all parts of the message
message = ZMQ::Message.new
frontend.recv(message)
more=frontend.getsockopt(ZMQ::RCVMORE)
backend.send(message, more ? ZMQ::SNDMORE : 0 )
break unless more # Last message part
end
end
wuproxy: Rust 中的天气更新代理
fn main() {
let context = zmq::Context::new();
let frontend = context.socket(zmq::XSUB).unwrap();
assert!(frontend.connect("tcp://192.168.55.210:5556").is_ok());
let backend = context.socket(zmq::XPUB).unwrap();
assert!(backend.bind("tcp://10.1.1.0:8100").is_ok());
zmq::proxy(&frontend, &backend).unwrap();
}
wuproxy: Scala 中的天气更新代理
/*
*
* Weather proxy device in Scala
*
* @author Vadim Shalts
* @email vshalts@gmail.com
*/
import org.zeromq.ZMQ
object wuproxy {
def main(args: Array[String]) {
// Prepare our context and sockets
var context = ZMQ.context(1)
// This is where the weather server sits
var frontend = context.socket(ZMQ.SUB)
frontend.connect("tcp://192.168.55.210:5556")
// This is our public endpoint for subscribers
var backend = context.socket(ZMQ.PUB)
backend.bind("tcp://10.1.1.0:8100")
// Subscribe on everything
frontend.subscribe("".getBytes)
// Shunt messages out to our own subscribers
while (!Thread.currentThread.isInterrupted) {
var more = false
do {
var message = frontend.recv(0)
more = frontend.hasReceiveMore
backend.send(message, if (more) ZMQ.SNDMORE else 0)
} while(more)
}
frontend.close()
backend.close()
context.term()
}
}
wuproxy: Tcl 中的天气更新代理
#
# Weather proxy device
#
package require zmq
zmq context context
# This is where the weather server sits
zmq socket frontend context SUB
frontend connect "tcp://localhost:5556"
# This is our public endpoint for subscribers
zmq socket backend context PUB
backend bind "tcp://*:8100"
# Subscribe on everything
frontend setsockopt SUBSCRIBE ""
# Shunt messages out to our own subscribers
while {1} {
while {1} {
# Process all parts of the message
zmq message msg
frontend recv_msg msg
set more [frontend getsockopt RCVMORE]
backend send_msg msg [expr {$more?{SNDMORE}:{}}]
msg close
if {!$more} {
break ;# Last message part
}
}
}
# We don't actually get here but if we did, we'd shut down neatly
frontend close
backend close
context term
wuproxy: OCaml 中的天气更新代理
它看起来与早期的代理示例非常相似,但关键部分在于前端和后端套接字位于两个不同的网络上。例如,我们可以使用这种模型,把一个多播网络(`zmq_pgm()` 传输)连接到一个 `zmq_tcp()` 发布者。
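作为补充,下面是一个可以在单进程内运行的最小桥接雏形,用 Python(pyzmq)的 zmq.proxy() 在 XSUB/XPUB 之间转发消息。为了便于演示,这里用 inproc 端点(端点名纯属示意)代替示例中的两个 tcp 网络:

```python
import threading
import time
import zmq

context = zmq.Context()

# 代理:前端 XSUB 面向发布者,后端 XPUB 面向订阅者
frontend = context.socket(zmq.XSUB)
frontend.bind("inproc://upstream")
backend = context.socket(zmq.XPUB)
backend.bind("inproc://downstream")
threading.Thread(target=zmq.proxy, args=(frontend, backend), daemon=True).start()

# 模拟内部网络上的天气发布者
pub = context.socket(zmq.PUB)
pub.connect("inproc://upstream")

# 模拟公共端点上的订阅者
sub = context.socket(zmq.SUB)
sub.connect("inproc://downstream")
sub.setsockopt(zmq.SUBSCRIBE, b"")
time.sleep(0.2)                  # 等订阅请求经由 XPUB -> XSUB 传回发布者

pub.send_multipart([b"10001", b"84 51 28"])
parts = sub.recv_multipart()     # 代理原样转发了整条多部分消息
print(parts)
```

zmq.proxy() 会原样转发多部分消息,这正是上面各语言示例中手工检查 RCVMORE 的循环所做的事情。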
错误处理和 ETERM #
ZeroMQ 的错误处理哲学是“快速失败”(fail-fast) 与“弹性”(resilience) 的结合。我们认为,进程应对内部错误尽可能脆弱,而对外部攻击和错误尽可能健壮。打个比方,活细胞如果检测到单个内部错误就会自毁,但它会尽一切可能抵御外部攻击。
断言(Assertions)遍布 ZeroMQ 代码中,对健壮的代码至关重要;它们只需位于“细胞壁”的正确一侧。而且应该存在这样一道墙。如果无法确定故障是内部的还是外部的,那就是需要修复的设计缺陷。在 C/C++ 中,断言会立即终止应用程序并报错。在其他语言中,你可能会得到异常或中止。
当 ZeroMQ 检测到外部故障时,会向调用代码返回一个错误。在少数罕见情况下,如果没有明显的错误恢复策略,它会默默地丢弃消息。
到目前为止,我们在大多数 C 示例中都没有看到错误处理。实际代码应该对每个 ZeroMQ 调用都进行错误处理。如果你使用的语言绑定不是 C,绑定可能会为你处理错误。在 C 中,你确实需要自己处理。有一些简单的规则,从 POSIX 约定开始:
- 创建对象的方法如果失败则返回 NULL。
- 处理数据的方法可能返回处理的字节数,或在错误或失败时返回 -1。
- 其他方法在成功时返回 0,在错误或失败时返回 -1。
- 错误码由 errno 提供,或通过 zmq_errno() 获取。
- 用于日志记录的描述性错误文本由 zmq_strerror() 提供。
例如:
void *context = zmq_ctx_new ();
assert (context);
void *socket = zmq_socket (context, ZMQ_REP);
assert (socket);
int rc = zmq_bind (socket, "tcp://*:5555");
if (rc == -1) {
printf ("E: bind failed: %s\n", strerror (errno));
return -1;
}
有两个主要异常情况应作为非致命错误处理:
-
当你的代码使用 ZMQ_DONTWAIT 选项接收消息,且没有待处理数据时,ZeroMQ 会返回 -1,并将 errno 设置为 EAGAIN。
-
当一个线程调用 zmq_ctx_destroy(),而其他线程仍在执行阻塞操作时,zmq_ctx_destroy() 调用会关闭上下文,所有阻塞调用会以 -1 退出,并将 errno 设置为 ETERM。
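这两种非致命错误在各语言绑定中通常表现为异常。下面是一个用 Python(pyzmq)演示这两种情况的小雏形;inproc 端点名是为演示随意取的:

```python
import threading
import time
import zmq

context = zmq.Context()
result = {}

# 情况一:用 ZMQ_DONTWAIT 接收而没有待处理数据 -> EAGAIN
# (pyzmq 把它包装成 zmq.Again 异常)
sock = context.socket(zmq.PULL)
sock.bind("inproc://eagain-demo")   # 没有任何发送方,队列必然为空
try:
    sock.recv(zmq.DONTWAIT)
except zmq.Again:
    result["eagain"] = True
sock.close()

# 情况二:另一个线程阻塞在 recv 上时销毁上下文 -> ETERM
# (pyzmq 把它包装成 zmq.ContextTerminated 异常)
def blocked_recv():
    s = context.socket(zmq.PULL)
    s.bind("inproc://eterm-demo")
    try:
        s.recv()                    # 阻塞等待,永远收不到消息
    except zmq.ContextTerminated:
        result["eterm"] = True
    s.close()                       # 套接字关闭后 context.term() 才能返回

t = threading.Thread(target=blocked_recv)
t.start()
time.sleep(0.2)                     # 让线程进入阻塞的 recv
context.term()                      # 所有阻塞调用以 ETERM 退出
t.join()
print(result)
```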
在 C/C++ 中,优化编译可以完全移除断言,所以不要犯把整个 ZeroMQ 调用包裹在 assert() 里的错误。这看起来很整洁,但优化器会把断言连同你希望执行的调用一并移除,你的应用程序就会以令人印象深刻的方式崩溃。
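这个陷阱不只限于 C:Python 的 -O 选项同样会剥离 assert 语句。下面用一个假想的带副作用的函数来演示这个坑(zmq_send_like 纯属示意,并非真实 API):

```python
calls = []

def zmq_send_like(msg):
    """假想的、带副作用的调用:模拟 zmq_send() 返回 0 表示成功。"""
    calls.append(msg)
    return 0

# 错误写法:优化模式(python -O 或 C 的 -DNDEBUG)下,
# 整行连同调用一起被剥离,消息根本不会发送
assert zmq_send_like("hello") == 0

# 正确写法:先执行调用并保存返回码,再单独检查
rc = zmq_send_like("world")
assert rc == 0

print(calls)
```

正常模式下两条消息都会发送;在优化模式下,只有 "world" 会被发送,而错误写法那一行悄无声息地消失了。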

让我们看看如何干净地关闭一个进程。我们将以上一节的并行管线示例为例。如果我们已经在后台启动了大量工作进程,那么在批处理完成后,我们现在希望终止它们。我们可以通过向工作进程发送一个终止消息来实现。最好的发送位置是接收端 (sink),因为它确实知道批处理何时完成。
如何将接收端连接到工作进程?PUSH/PULL 套接字是单向的。我们可以切换到另一种套接字类型,或者混合使用多个套接字流。让我们尝试后一种方法:使用发布-订阅 (pub-sub) 模型向工作进程发送终止消息
- 接收端在新的端点上创建一个 PUB 套接字。
- 工作进程将其输入套接字连接到此端点。
- 当接收端检测到批处理结束时,它会向其 PUB 套接字发送一个终止消息。
- 当工作进程检测到此终止消息时,它会退出。
接收端中不需要太多新代码:
void *controller = zmq_socket (context, ZMQ_PUB);
zmq_bind (controller, "tcp://*:5559");
...
// Send kill signal to workers
s_send (controller, "KILL");
这是工作进程,它管理两个套接字(一个接收任务的 PULL 套接字,一个接收控制命令的 SUB 套接字),使用了我们之前看到的 zmq_poll() 技术:
taskwork2: Ada 中带有终止信号的并行任务工作者
taskwork2: Basic 中带有终止信号的并行任务工作者
taskwork2: C 中带有终止信号的并行任务工作者
// Task worker - design 2
// Adds pub-sub flow to receive and respond to kill signal
#include "zhelpers.h"
int main (void)
{
// Socket to receive messages on
void *context = zmq_ctx_new ();
void *receiver = zmq_socket (context, ZMQ_PULL);
zmq_connect (receiver, "tcp://localhost:5557");
// Socket to send messages to
void *sender = zmq_socket (context, ZMQ_PUSH);
zmq_connect (sender, "tcp://localhost:5558");
// Socket for control input
void *controller = zmq_socket (context, ZMQ_SUB);
zmq_connect (controller, "tcp://localhost:5559");
zmq_setsockopt (controller, ZMQ_SUBSCRIBE, "", 0);
// Process messages from either socket
while (1) {
zmq_pollitem_t items [] = {
{ receiver, 0, ZMQ_POLLIN, 0 },
{ controller, 0, ZMQ_POLLIN, 0 }
};
zmq_poll (items, 2, -1);
if (items [0].revents & ZMQ_POLLIN) {
char *string = s_recv (receiver);
printf ("%s.", string); // Show progress
fflush (stdout);
s_sleep (atoi (string)); // Do the work
free (string);
s_send (sender, ""); // Send results to sink
}
// Any waiting controller command acts as 'KILL'
if (items [1].revents & ZMQ_POLLIN)
break; // Exit loop
}
zmq_close (receiver);
zmq_close (sender);
zmq_close (controller);
zmq_ctx_destroy (context);
return 0;
}
taskwork2: C++ 中带有终止信号的并行任务工作者
//
// Task worker in C++ - design 2
// Adds pub-sub flow to receive and respond to kill signal
//
#include "zhelpers.hpp"
#include <string>
int main (int argc, char *argv[])
{
zmq::context_t context(1);
// Socket to receive messages on
zmq::socket_t receiver(context, ZMQ_PULL);
receiver.connect("tcp://localhost:5557");
// Socket to send messages to
zmq::socket_t sender(context, ZMQ_PUSH);
sender.connect("tcp://localhost:5558");
// Socket for control input
zmq::socket_t controller (context, ZMQ_SUB);
controller.connect("tcp://localhost:5559");
controller.set(zmq::sockopt::subscribe, "");
// Process messages from receiver and controller
zmq::pollitem_t items [] = {
{ receiver, 0, ZMQ_POLLIN, 0 },
{ controller, 0, ZMQ_POLLIN, 0 }
};
// Process messages from both sockets
while (1) {
zmq::message_t message;
zmq::poll (&items [0], 2, -1);
if (items [0].revents & ZMQ_POLLIN) {
receiver.recv(&message);
// Process task
int workload; // Workload in msecs
std::string sdata(static_cast<char*>(message.data()), message.size());
std::istringstream iss(sdata);
iss >> workload;
// Do the work
s_sleep(workload);
// Send results to sink
message.rebuild();
sender.send(message);
// Simple progress indicator for the viewer
std::cout << "." << std::flush;
}
// Any waiting controller command acts as 'KILL'
if (items [1].revents & ZMQ_POLLIN) {
std::cout << std::endl;
break; // Exit loop
}
}
// Finished
return 0;
}
taskwork2: C# 中带有终止信号的并行任务工作者
taskwork2: CL 中带有终止信号的并行任务工作者
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Task worker - design 2 in Common Lisp
;;; Connects PULL socket to tcp://localhost:5557
;;; Collects workloads from ventilator via that socket
;;; Connects PUSH socket to tcp://localhost:5558
;;; Sends results to sink via that socket
;;; Adds pub-sub flow to receive and respond to kill signal
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.taskwork2
(:nicknames #:taskwork2)
(:use #:cl #:zhelpers)
(:export #:main))
(in-package :zguide.taskwork2)
(defun main ()
(zmq:with-context (context 1)
;; Socket to receive messages on
(zmq:with-socket (receiver context zmq:pull)
(zmq:connect receiver "tcp://localhost:5557")
;; Socket to send messages to
(zmq:with-socket (sender context zmq:push)
(zmq:connect sender "tcp://localhost:5558")
;; Socket for control input
(zmq:with-socket (controller context zmq:sub)
(zmq:connect controller "tcp://localhost:5559")
(zmq:setsockopt controller zmq:subscribe "")
;; Process messages from receiver and controller
(zmq:with-polls ((items . ((receiver . zmq:pollin)
(controller . zmq:pollin))))
(loop
(let ((revents (zmq:poll items)))
(when (= (first revents) zmq:pollin)
(let ((pull-msg (make-instance 'zmq:msg)))
(zmq:recv receiver pull-msg)
;; Process task
(let* ((string (zmq:msg-data-as-string pull-msg))
(delay (* (parse-integer string) 1000)))
;; Simple progress indicator for the viewer
(message "~A." string)
;; Do the work
(isys:usleep delay)
;; Send results to sink
(let ((push-msg (make-instance 'zmq:msg :data "")))
(zmq:send sender push-msg)))))
(when (= (second revents) zmq:pollin)
;; Any waiting controller command acts as 'KILL'
(return)))))))))
(cleanup))
taskwork2: Delphi 中带有终止信号的并行任务工作者
program taskwork2;
//
// Task worker - design 2
// Adds pub-sub flow to receive and respond to kill signal
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
receiver,
sender,
controller: TZMQSocket;
frame: TZMQFrame;
poller: TZMQPoller;
begin
context := TZMQContext.Create;
// Socket to receive messages on
receiver := Context.Socket( stPull );
receiver.connect( 'tcp://localhost:5557' );
// Socket to send messages to
sender := Context.Socket( stPush );
sender.connect( 'tcp://localhost:5558' );
// Socket for control input
controller := Context.Socket( stSub );
controller.connect( 'tcp://localhost:5559' );
controller.subscribe('');
// Process messages from receiver and controller
poller := TZMQPoller.Create( true );
poller.register( receiver, [pePollIn] );
poller.register( controller, [pePollIn] );
// Process messages from both sockets
while true do
begin
poller.poll;
if pePollIn in poller.PollItem[0].revents then
begin
frame := TZMQFrame.create;
receiver.recv( frame );
// Do the work
sleep( StrToInt( frame.asUtf8String ) );
frame.Free;
// Send results to sink
sender.send('');
// Simple progress indicator for the viewer
writeln('.');
end;
// Any waiting controller command acts as 'KILL'
if pePollIn in poller.PollItem[1].revents then
break; // Exit loop
end;
receiver.Free;
sender.Free;
controller.Free;
poller.Free;
context.Free;
end.
taskwork2: Erlang 中带有终止信号的并行任务工作者
#! /usr/bin/env escript
%%
%% Task worker - design 2
%% Adds pub-sub flow to receive and respond to kill signal
%%
main(_) ->
{ok, Context} = erlzmq:context(),
%% Socket to receive messages on
{ok, Receiver} = erlzmq:socket(Context, [pull, {active, true}]),
ok = erlzmq:connect(Receiver, "tcp://localhost:5557"),
%% Socket to send messages to
{ok, Sender} = erlzmq:socket(Context, push),
ok = erlzmq:connect(Sender, "tcp://localhost:5558"),
%% Socket for control input
{ok, Controller} = erlzmq:socket(Context, [sub, {active, true}]),
ok = erlzmq:connect(Controller, "tcp://localhost:5559"),
ok = erlzmq:setsockopt(Controller, subscribe, <<>>),
%% Process messages from receiver and controller
process_messages(Receiver, Controller, Sender),
%% Finished
ok = erlzmq:close(Receiver),
ok = erlzmq:close(Sender),
ok = erlzmq:close(Controller),
ok = erlzmq:term(Context).
process_messages(Receiver, Controller, Sender) ->
receive
{zmq, Receiver, Msg, _Flags} ->
%% Do the work
timer:sleep(list_to_integer(binary_to_list(Msg))),
%% Send results to sink
ok = erlzmq:send(Sender, Msg),
%% Simple progress indicator for the viewer
io:format("."),
process_messages(Receiver, Controller, Sender);
{zmq, Controller, _Msg, _Flags} ->
%% Any waiting controller command acts as 'KILL'
ok
end.
taskwork2: Elixir 中带有终止信号的并行任务工作者
defmodule Taskwork2 do
@moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:37
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, receiver} = :erlzmq.socket(context, [:pull, {:active, true}])
:ok = :erlzmq.connect(receiver, 'tcp://localhost:5557')
{:ok, sender} = :erlzmq.socket(context, :push)
:ok = :erlzmq.connect(sender, 'tcp://localhost:5558')
{:ok, controller} = :erlzmq.socket(context, [:sub, {:active, true}])
:ok = :erlzmq.connect(controller, 'tcp://localhost:5559')
:ok = :erlzmq.setsockopt(controller, :subscribe, <<>>)
process_messages(receiver, controller, sender)
:ok = :erlzmq.close(receiver)
:ok = :erlzmq.close(sender)
:ok = :erlzmq.close(controller)
:ok = :erlzmq.term(context)
end
def process_messages(receiver, controller, sender) do
receive do
{:zmq, ^receiver, msg, _flags} ->
:timer.sleep(:erlang.list_to_integer(:erlang.binary_to_list(msg)))
:ok = :erlzmq.send(sender, msg)
:io.format('.')
process_messages(receiver, controller, sender)
{:zmq, ^controller, _msg, _flags} ->
:ok
end
end
end
Taskwork2.main
taskwork2: F# 中带有终止信号的并行任务工作者
taskwork2: Felix 中带有终止信号的并行任务工作者
taskwork2: Go 中带有终止信号的并行任务工作者
//
// Task Worker
// Connects PULL socket to tcp://localhost:5557
// Collects workloads from ventilator via that socket
// Connects PUSH socket to tcp://localhost:5558
// Connects SUB socket to tcp://localhost:5559
// Sends results to sink via that socket
//
package main
import (
"fmt"
zmq "github.com/alecthomas/gozmq"
"strconv"
"time"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
// Socket to receive messages on
receiver, _ := context.NewSocket(zmq.PULL)
defer receiver.Close()
receiver.Connect("tcp://localhost:5557")
// Socket to send messages to task sink
sender, _ := context.NewSocket(zmq.PUSH)
defer sender.Close()
sender.Connect("tcp://localhost:5558")
// Socket for control input
controller, _ := context.NewSocket(zmq.SUB)
defer controller.Close()
controller.Connect("tcp://localhost:5559")
controller.SetSubscribe("")
items := zmq.PollItems{
zmq.PollItem{Socket: receiver, Events: zmq.POLLIN},
zmq.PollItem{Socket: controller, Events: zmq.POLLIN},
}
// Process tasks forever
for {
zmq.Poll(items, -1)
switch {
case items[0].REvents&zmq.POLLIN != 0:
msgbytes, _ := receiver.Recv(0)
fmt.Printf("%s.", string(msgbytes))
// Do the work
msec, _ := strconv.ParseInt(string(msgbytes), 10, 64)
time.Sleep(time.Duration(msec) * 1e6)
// Send results to sink
sender.Send([]byte(""), 0)
case items[1].REvents&zmq.POLLIN != 0:
fmt.Println("stopping")
return
}
}
}
taskwork2: Haskell 中带有终止信号的并行任务工作者
{-# LANGUAGE OverloadedStrings #-}
-- Task worker - design 2
-- Adds pub-sub flow to receive and respond to kill signal
module Main where
import Control.Concurrent
import Control.Monad
import qualified Data.ByteString.Char8 as BS
import Data.Function
import System.IO
import System.ZMQ4.Monadic
import Text.Printf
main :: IO ()
main = runZMQ $ do
-- Socket to receive messages on
receiver <- socket Pull
connect receiver "tcp://localhost:5557"
-- Socket to send messages to
sender <- socket Push
connect sender "tcp://localhost:5558"
controller <- socket Sub
connect controller "tcp://localhost:5559"
subscribe controller ""
liftIO $ hSetBuffering stdout NoBuffering
fix $ \loop -> do
[receiver_events, controller_events] <-
poll (-1) [ Sock receiver [In] Nothing
, Sock controller [In] Nothing
]
when (receiver_events /= []) $ do
string <- BS.unpack <$> receive receiver
liftIO $ printf "%s." string -- Show the progress
liftIO $ threadDelay (read string * 1000) -- Do the work
send sender [] "" -- Send results to sink
-- Any waiting controller command acts as 'KILL'
unless (controller_events /= [])
loop
taskwork2: Haxe 中带有终止信号的并行任务工作者
package ;
import haxe.io.Bytes;
import neko.Lib;
import neko.Sys;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQPoller;
import org.zeromq.ZMQSocket;
/**
* Parallel Task worker with kill signalling in Haxe
* Connects PULL socket to tcp://localhost:5557
* Collects workloads from ventilator via that socket
* Connects PUSH socket to tcp://localhost:5558
* Sends results to sink via that socket
*
* See: https://zguide.zeromq.cn/page:all#Handling-Errors-and-ETERM
*
* Based on code from: https://zguide.zeromq.cn/java:taskwork2
*/
class TaskWork2
{
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println("** TaskWork2 (see: https://zguide.zeromq.cn/page:all#Handling-Errors-and-ETERM)");
// Socket to receive messages on
var receiver:ZMQSocket = context.socket(ZMQ_PULL);
receiver.connect("tcp://127.0.0.1:5557");
// Socket to send messages to
var sender:ZMQSocket = context.socket(ZMQ_PUSH);
sender.connect("tcp://127.0.0.1:5558");
// Socket to receive controller messages from
var controller:ZMQSocket = context.socket(ZMQ_SUB);
controller.connect("tcp://127.0.0.1:5559");
controller.setsockopt(ZMQ_SUBSCRIBE, Bytes.ofString(""));
var items:ZMQPoller = context.poller();
items.registerSocket(receiver, ZMQ.ZMQ_POLLIN());
items.registerSocket(controller, ZMQ.ZMQ_POLLIN());
var msgString:String;
// Process tasks forever
while (true) {
var numSocks = items.poll();
if (items.pollin(1)) {
// receiver socket has events
msgString = StringTools.trim(receiver.recvMsg().toString());
var sec:Float = Std.parseFloat(msgString) / 1000.0;
Lib.print(msgString + ".");
// Do the work
Sys.sleep(sec);
// Send results to sink
sender.sendMsg(Bytes.ofString(""));
}
if (items.pollin(2)) {
break; // Exit loop
}
}
receiver.close();
sender.close();
controller.close();
context.term();
}
}
taskwork2: Java 中带有终止信号的并行任务工作者
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZContext;
/**
* Task worker - design 2
* Adds pub-sub flow to receive and respond to kill signal
*/
public class taskwork2
{
public static void main(String[] args) throws InterruptedException
{
try (ZContext context = new ZContext()) {
ZMQ.Socket receiver = context.createSocket(SocketType.PULL);
receiver.connect("tcp://localhost:5557");
ZMQ.Socket sender = context.createSocket(SocketType.PUSH);
sender.connect("tcp://localhost:5558");
ZMQ.Socket controller = context.createSocket(SocketType.SUB);
controller.connect("tcp://localhost:5559");
controller.subscribe(ZMQ.SUBSCRIPTION_ALL);
ZMQ.Poller items = context.createPoller(2);
items.register(receiver, ZMQ.Poller.POLLIN);
items.register(controller, ZMQ.Poller.POLLIN);
while (true) {
items.poll();
if (items.pollin(0)) {
String message = receiver.recvStr(0);
long nsec = Long.parseLong(message);
// Simple progress indicator for the viewer
System.out.print(message + '.');
System.out.flush();
// Do the work
Thread.sleep(nsec);
// Send results to sink
sender.send("", 0);
}
// Any waiting controller command acts as 'KILL'
if (items.pollin(1)) {
break; // Exit loop
}
}
}
}
}
taskwork2: Julia 中带有终止信号的并行任务工作者
taskwork2: Lua 中带有终止信号的并行任务工作者
--
-- Task worker - design 2
-- Adds pub-sub flow to receive and respond to kill signal
--
-- Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zmq.poller"
require"zhelpers"
local context = zmq.init(1)
-- Socket to receive messages on
local receiver = context:socket(zmq.PULL)
receiver:connect("tcp://localhost:5557")
-- Socket to send messages to
local sender = context:socket(zmq.PUSH)
sender:connect("tcp://localhost:5558")
-- Socket for control input
local controller = context:socket(zmq.SUB)
controller:connect("tcp://localhost:5559")
controller:setopt(zmq.SUBSCRIBE, "", 0)
-- Process messages from receiver and controller
local poller = zmq.poller(2)
poller:add(receiver, zmq.POLLIN, function()
local msg = receiver:recv()
-- Do the work
s_sleep(tonumber(msg))
-- Send results to sink
sender:send("")
-- Simple progress indicator for the viewer
io.write(".")
io.stdout:flush()
end)
poller:add(controller, zmq.POLLIN, function()
poller:stop() -- Exit loop
end)
-- start poller's event loop
poller:start()
-- Finished
receiver:close()
sender:close()
controller:close()
context:term()
taskwork2: Node.js 中带有终止信号的并行任务工作者
// Task worker in Node.js
// Connects PULL socket to tcp://localhost:5557
// Collects workloads from ventilator via that socket
// Connects PUSH socket to tcp://localhost:5558
// Sends results to sink via that socket
var zmq = require('zeromq')
, receiver = zmq.socket('pull')
, sender = zmq.socket('push')
, controller = zmq.socket('sub');
receiver.on('message', function(buf) {
var msec = parseInt(buf.toString(), 10);
// simple progress indicator for the viewer
process.stdout.write(buf.toString() + ".");
// do the work
// not a great node sample for zeromq,
// node receives messages while timers run.
setTimeout(function() {
sender.send("");
}, msec);
});
controller.on('message', function() {
// received KILL signal
receiver.close();
sender.close();
controller.close();
process.exit();
});
receiver.connect('tcp://localhost:5557');
sender.connect('tcp://localhost:5558');
controller.subscribe('');
controller.connect('tcp://localhost:5559');
taskwork2: Objective-C 中带有终止信号的并行任务工作者
/* taskwork2.m: PULLs workload from tcp://localhost:5557
* PUSHes results to tcp://localhost:5558
* SUBs to tcp://localhost:5559 to receive kill signal (*** NEW ***)
*/
#import <Foundation/Foundation.h>
#import "ZMQObjC.h"
#define NSEC_PER_MSEC (1000000)
int
main(void)
{
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
ZMQContext *ctx = [[[ZMQContext alloc] initWithIOThreads:1U] autorelease];
/* (jws/2011-02-05)!!!: Do NOT terminate the endpoint with a final slash.
* If you connect to @"tcp://localhost:5557/", you will get
* Assertion failed: rc == 0 (zmq_connecter.cpp:46)
* instead of a connected socket. Binding works fine, though. */
ZMQSocket *pull = [ctx socketWithType:ZMQ_PULL];
[pull connectToEndpoint:@"tcp://localhost:5557"];
ZMQSocket *push = [ctx socketWithType:ZMQ_PUSH];
[push connectToEndpoint:@"tcp://localhost:5558"];
ZMQSocket *control = [ctx socketWithType:ZMQ_SUB];
[control setData:nil forOption:ZMQ_SUBSCRIBE];
[control connectToEndpoint:@"tcp://localhost:5559"];
/* Process tasks forever, multiplexing between |pull| and |control|. */
enum {POLL_PULL, POLL_CONTROL};
zmq_pollitem_t items[2];
[pull getPollItem:&items[POLL_PULL] forEvents:ZMQ_POLLIN];
[control getPollItem:&items[POLL_CONTROL] forEvents:ZMQ_POLLIN];
size_t itemCount = sizeof(items)/sizeof(*items);
struct timespec t;
NSData *emptyData = [NSData data];
bool shouldExit = false;
while (!shouldExit) {
NSAutoreleasePool *p = [[NSAutoreleasePool alloc] init];
[ZMQContext pollWithItems:items count:itemCount
timeoutAfterUsec:ZMQPollTimeoutNever];
if (items[POLL_PULL].revents & ZMQ_POLLIN) {
NSData *d = [pull receiveDataWithFlags:0];
NSString *s = [NSString stringWithUTF8String:[d bytes]];
t.tv_sec = 0;
t.tv_nsec = [s integerValue] * NSEC_PER_MSEC;
printf("%d.", [s intValue]);
fflush(stdout);
/* Do work, then report finished. */
(void)nanosleep(&t, NULL);
[push sendData:emptyData withFlags:0];
}
/* Any inbound data on |control| signals us to die. */
if (items[POLL_CONTROL].revents & ZMQ_POLLIN) {
/* Do NOT just break here: |p| must be drained first. */
shouldExit = true;
}
[p drain];
}
[ctx closeSockets];
[pool drain];
return EXIT_SUCCESS;
}
taskwork2: ooc 中带有终止信号的并行任务工作者
taskwork2: Perl 中带有终止信号的并行任务工作者
# Task worker - design 2 in Perl
# Adds pub-sub flow to receive and respond to kill signal
use strict;
use warnings;
use v5.10;
$| = 1; # autoflush stdout after each print
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_PULL ZMQ_PUSH ZMQ_SUB);
use Time::HiRes qw(usleep);
use AnyEvent;
use EV;
# Socket to receive messages on
my $context = ZMQ::FFI->new();
my $receiver = $context->socket(ZMQ_PULL);
$receiver->connect('tcp://localhost:5557');
# Socket to send messages to
my $sender = $context->socket(ZMQ_PUSH);
$sender->connect('tcp://localhost:5558');
# Socket for control input
my $controller = $context->socket(ZMQ_SUB);
$controller->connect('tcp://localhost:5559');
$controller->subscribe('');
# Process messages from either socket
my $receiver_poller = AE::io $receiver->get_fd, 0, sub {
while ($receiver->has_pollin) {
my $string = $receiver->recv();
print "$string."; # Show progress
usleep $string*1000; # Do the work
$sender->send(''); # Send results to sink
}
};
# Any controller command acts as 'KILL'
my $controller_poller = AE::io $controller->get_fd, 0, sub {
if ($controller->has_pollin) {
EV::break; # Exit loop
}
};
EV::run;
taskwork2: PHP 中带有终止信号的并行任务工作者
<?php
/*
* Task worker - design 2
* Adds pub-sub flow to receive and respond to kill signal
* @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
*/
$context = new ZMQContext();
// Socket to receive messages on
$receiver = new ZMQSocket($context, ZMQ::SOCKET_PULL);
$receiver->connect("tcp://localhost:5557");
// Socket to send messages to
$sender = new ZMQSocket($context, ZMQ::SOCKET_PUSH);
$sender->connect("tcp://localhost:5558");
// Socket for control input
$controller = new ZMQSocket($context, ZMQ::SOCKET_SUB);
$controller->connect("tcp://localhost:5559");
$controller->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, "");
// Process messages from receiver and controller
$poll = new ZMQPoll();
$poll->add($receiver, ZMQ::POLL_IN);
$poll->add($controller, ZMQ::POLL_IN);
$readable = $writeable = array();
// Process messages from both sockets
while (true) {
$events = $poll->poll($readable, $writeable);
if ($events > 0) {
foreach ($readable as $socket) {
if ($socket === $receiver) {
$message = $socket->recv();
// Simple progress indicator for the viewer
echo $message, PHP_EOL;
// Do the work
usleep($message * 1000);
// Send results to sink
$sender->send("");
}
// Any waiting controller command acts as 'KILL'
else if ($socket === $controller) {
exit();
}
}
}
}
taskwork2: Python 中带有终止信号的并行任务工作者
# encoding: utf-8
#
# Task worker - design 2
# Adds pub-sub flow to receive and respond to kill signal
#
# Author: Jeremy Avnet (brainsik) <spork(dash)zmq(at)theory(dot)org>
#
import sys
import time
import zmq
context = zmq.Context()
# Socket to receive messages on
receiver = context.socket(zmq.PULL)
receiver.connect("tcp://localhost:5557")
# Socket to send messages to
sender = context.socket(zmq.PUSH)
sender.connect("tcp://localhost:5558")
# Socket for control input
controller = context.socket(zmq.SUB)
controller.connect("tcp://localhost:5559")
controller.setsockopt(zmq.SUBSCRIBE, b"")
# Process messages from receiver and controller
poller = zmq.Poller()
poller.register(receiver, zmq.POLLIN)
poller.register(controller, zmq.POLLIN)
# Process messages from both sockets
while True:
socks = dict(poller.poll())
if socks.get(receiver) == zmq.POLLIN:
message = receiver.recv_string()
# Process task
workload = int(message) # Workload in msecs
# Do the work
time.sleep(workload / 1000.0)
# Send results to sink
sender.send_string(message)
# Simple progress indicator for the viewer
sys.stdout.write(".")
sys.stdout.flush()
# Any waiting controller command acts as 'KILL'
if socks.get(controller) == zmq.POLLIN:
break
# Finished
receiver.close()
sender.close()
controller.close()
context.term()
taskwork2: Q 中带有终止信号的并行任务工作者
taskwork2: Racket 中带有终止信号的并行任务工作者
taskwork2: Ruby 中带有终止信号的并行任务工作者
#!/usr/bin/env ruby
#
# Task worker - design 2
# Adds pub-sub flow to receive and respond to kill signal
#
require 'rubygems'
require 'ffi-rzmq'
context = ZMQ::Context.new(1)
# Socket to receive messages on
receiver = context.socket(ZMQ::PULL)
receiver.connect("tcp://localhost:5557")
# Socket to send messages to
sender = context.socket(ZMQ::PUSH)
sender.connect("tcp://localhost:5558")
# Socket for control input
controller = context.socket(ZMQ::SUB)
controller.connect("tcp://localhost:5559")
controller.setsockopt(ZMQ::SUBSCRIBE,"")
# Process messages from receiver and controller
poller = ZMQ::Poller.new()
poller.register(receiver,ZMQ::POLLIN)
poller.register(controller,ZMQ::POLLIN)
# Process tasks forever
while true
items = poller.poll()
poller.readables.each do |item|
if item === receiver
receiver.recv_string(msec ='')
# Simple progress indicator for the viewer
$stdout << "#{msec}."
$stdout.flush
# Do the work
sleep(msec.to_f / 1000)
# Send results to sink
sender.send_string("")
end
exit if item === controller
end
end
taskwork2: Rust 中带有终止信号的并行任务工作者
use std::io::{self, Write};
use std::{thread, time};
fn atoi(s: &str) -> i64 {
s.parse().unwrap()
}
fn main() {
let context = zmq::Context::new();
let receiver = context.socket(zmq::PULL).unwrap();
assert!(receiver.connect("tcp://localhost:5557").is_ok());
let sender = context.socket(zmq::PUSH).unwrap();
assert!(sender.connect("tcp://localhost:5558").is_ok());
let control = context.socket(zmq::SUB).unwrap();
assert!(control.connect("tcp://localhost:5559").is_ok());
assert!(control.set_subscribe(b"").is_ok());
let items = &mut [
receiver.as_poll_item(zmq::POLLIN),
control.as_poll_item(zmq::POLLIN),
];
loop {
zmq::poll(items, -1).unwrap();
if (items[0].get_revents() & zmq::POLLIN) == zmq::POLLIN {
let string = receiver.recv_string(0).unwrap().unwrap();
println!("{}.", string);
let _ = io::stdout().flush();
thread::sleep(time::Duration::from_millis(atoi(&string) as u64));
sender.send("", 0).unwrap();
}
if (items[1].get_revents() & zmq::POLLIN) == zmq::POLLIN {
break;
}
}
}
taskwork2: Scala 中带有终止信号的并行任务工作者
/*
* Task worker2 in Scala
*
* @author Vadim Shalts
* @email vshalts@gmail.com
*/
import org.zeromq.ZMQ
object taskwork2 {
def main(args: Array[String]): Unit = {
val context = ZMQ.context(1)
val receiver = context.socket(ZMQ.PULL)
receiver.connect("tcp://localhost:5557")
val sender = context.socket(ZMQ.PUSH)
sender.connect("tcp://localhost:5558")
val controller = context.socket(ZMQ.SUB)
controller.connect("tcp://localhost:5559")
controller.subscribe("".getBytes)
val items = context.poller(2)
items.register(receiver, ZMQ.Poller.POLLIN)
items.register(controller, ZMQ.Poller.POLLIN)
var continue = true
do {
items.poll
if (items.pollin(0)) {
val message = new String(receiver.recv(0)).trim
// Do the work
Thread.sleep(message toLong)
// Send results to sink
sender.send(message.getBytes(), 0)
// Simple progress indicator for the viewer
print(".")
Console.flush()
}
if (items.pollin(1)) {
println()
continue = false;
}
} while(continue)
receiver.close()
sender.close()
controller.close()
context.term()
}
}
taskwork2: Tcl 中带有终止信号的并行任务工作者
#
# Task worker - design 2
# Adds pub-sub flow to receive and respond to kill signal
#
package require zmq
zmq context context
# Socket to receive messages on
zmq socket receiver context PULL
receiver connect "tcp://localhost:5557"
# Socket to send messages to
zmq socket sender context PUSH
sender connect "tcp://localhost:5558"
# Socket for control input
zmq socket controller context SUB
controller connect "tcp://localhost:5559"
controller setsockopt SUBSCRIBE ""
# Process messages from receiver and controller
set poll_set [list [list receiver [list POLLIN]] [list controller [list POLLIN]]]
# Process tasks forever
set poll 1
while {$poll} {
set rpoll_set [zmq poll $poll_set -1]
foreach rpoll $rpoll_set {
switch [lindex $rpoll 0] {
receiver {
if {"POLLIN" in [lindex $rpoll 1]} {
set string [receiver recv]
# Simple progress indicator for the viewer
puts -nonewline "$string."
flush stdout
# Do the work
after $string
# Send result to sink
sender send "$string"
}
}
controller {
if {"POLLIN" in [lindex $rpoll 1]} {
puts ""
set poll 0
}
}
}
}
}
receiver close
sender close
controller close
context term
taskwork2: OCaml 中带有终止信号的并行任务工作者
(**
* Task worker - design 2
* Adds pub-sub flow to receive and respond to kill signal
*)
open Zmq
open Helpers
let () =
with_context @@ fun ctx ->
(* Socket to receive messages on *)
with_socket ctx Socket.pull @@ fun receiver ->
Socket.connect receiver "tcp://localhost:5557";
(* Socket to send messages to *)
with_socket ctx Socket.push @@ fun sender ->
Socket.connect sender "tcp://localhost:5558";
(* Socket for control input *)
with_socket ctx Socket.sub @@ fun controller ->
Socket.connect controller "tcp://localhost:5559";
Socket.subscribe controller "";
let items = Poll.mask_of [| (receiver, Poll.In); (controller, Poll.In) |] in
(* Process tasks from either socket *)
while true do
let pollResults = Poll.poll items in
let receiverEvents = pollResults.(0) in
let ctrlEvents = pollResults.(1) in
match receiverEvents with
| Some _ ->
let s = Socket.recv receiver in
printfn "%S." s; (* Show progress *)
sleep_ms (int_of_string s); (* Do the work *)
Socket.send sender ""; (* Send results to sink *)
| _ -> ();
(* Any waiting controller command acts as 'KILL' *)
match ctrlEvents with
| Some _ -> exit 0; (* Exit program *)
| _ -> ();
done
这是修改后的接收端应用程序。当它完成结果收集后,它会向所有工作进程广播一个终止消息:
tasksink2: Ada 中带有终止信号的并行任务接收端
tasksink2: Basic 中带有终止信号的并行任务接收端
tasksink2: C 中带有终止信号的并行任务接收端
// Task sink - design 2
// Adds pub-sub flow to send kill signal to workers
#include "zhelpers.h"
int main (void)
{
// Socket to receive messages on
void *context = zmq_ctx_new ();
void *receiver = zmq_socket (context, ZMQ_PULL);
zmq_bind (receiver, "tcp://*:5558");
// Socket for worker control
void *controller = zmq_socket (context, ZMQ_PUB);
zmq_bind (controller, "tcp://*:5559");
// Wait for start of batch
char *string = s_recv (receiver);
free (string);
// Start our clock now
int64_t start_time = s_clock ();
// Process 100 confirmations
int task_nbr;
for (task_nbr = 0; task_nbr < 100; task_nbr++) {
char *string = s_recv (receiver);
free (string);
if (task_nbr % 10 == 0)
printf (":");
else
printf (".");
fflush (stdout);
}
printf ("Total elapsed time: %d msec\n",
(int) (s_clock () - start_time));
// Send kill signal to workers
s_send (controller, "KILL");
zmq_close (receiver);
zmq_close (controller);
zmq_ctx_destroy (context);
return 0;
}
tasksink2: Parallel task sink with kill signaling in C++
//
// Task sink in C++ - design 2
// Adds pub-sub flow to send kill signal to workers
//
#include "zhelpers.hpp"
int main (int argc, char *argv[])
{
zmq::context_t context(1);
// Socket to receive messages on
zmq::socket_t receiver (context, ZMQ_PULL);
receiver.bind("tcp://*:5558");
// Socket for worker control
zmq::socket_t controller (context, ZMQ_PUB);
controller.bind("tcp://*:5559");
// Wait for start of batch
s_recv (receiver);
// Start our clock now
struct timeval tstart;
gettimeofday (&tstart, NULL);
// Process 100 confirmations
int task_nbr;
for (task_nbr = 0; task_nbr < 100; task_nbr++) {
s_recv (receiver);
if (task_nbr % 10 == 0)
std::cout << ":" ;
else
std::cout << "." ;
}
// Calculate and report duration of batch
struct timeval tend, tdiff;
gettimeofday (&tend, NULL);
if (tend.tv_usec < tstart.tv_usec) {
tdiff.tv_sec = tend.tv_sec - tstart.tv_sec - 1;
tdiff.tv_usec = 1000000 + tend.tv_usec - tstart.tv_usec;
}
else {
tdiff.tv_sec = tend.tv_sec - tstart.tv_sec;
tdiff.tv_usec = tend.tv_usec - tstart.tv_usec;
}
int total_msec = tdiff.tv_sec * 1000 + tdiff.tv_usec / 1000;
std::cout << "\nTotal elapsed time: " << total_msec
<< " msec\n" << std::endl;
// Send kill signal to workers
s_send (controller, std::string("KILL"));
// Finished
sleep (1); // Give 0MQ time to deliver
return 0;
}
tasksink2: Parallel task sink with kill signaling in C#
tasksink2: Parallel task sink with kill signaling in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Task sink - design 2 in Common Lisp
;;; Binds PULL socket to tcp://localhost:5558
;;; Collects results from workers via that socket
;;; Adds pub-sub flow to send kill signal to workers
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.tasksink2
(:nicknames #:tasksink2)
(:use #:cl #:zhelpers)
(:export #:main))
(in-package :zguide.tasksink2)
(defun main ()
(zmq:with-context (context 1)
;; Socket to receive messages on
(zmq:with-socket (receiver context zmq:pull)
(zmq:bind receiver "tcp://*:5558")
;; Socket for worker control
(zmq:with-socket (controller context zmq:pub)
(zmq:bind controller "tcp://*:5559")
;; Wait for start of batch
(let ((msg (make-instance 'zmq:msg)))
(zmq:recv receiver msg))
;; Start our clock now
(let ((elapsed-time
(with-stopwatch
(dotimes (task-nbr 100)
(let ((msg (make-instance 'zmq:msg)))
(zmq:recv receiver msg)
(let ((string (zmq:msg-data-as-string msg)))
(declare (ignore string))
(if (= 1 (denominator (/ task-nbr 10)))
(message ":")
(message "."))))))))
;; Calculate and report duration of batch
(message "Total elapsed time: ~F msec~%" (/ elapsed-time 1000.0)))
;; Send kill signal to workers
(let ((kill (make-instance 'zmq:msg :data "KILL")))
(zmq:send controller kill))
;; Give 0MQ time to deliver
(sleep 1))))
(cleanup))
tasksink2: Parallel task sink with kill signaling in Delphi
program tasksink2;
//
// Task sink - design 2
// Adds pub-sub flow to send kill signal to workers
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, Windows
, zmqapi
;
const
task_count = 100;
var
context: TZMQContext;
receiver,
controller: TZMQSocket;
s: Utf8String;
task_nbr: Integer;
fFrequency,
fstart,
fStop : Int64;
begin
// Prepare our context and socket
context := TZMQContext.Create;
receiver := Context.Socket( stPull );
receiver.bind( 'tcp://*:5558' );
// Socket for worker control
controller := Context.Socket( stPub );
controller.bind( 'tcp://*:5559' );
// Wait for start of batch
receiver.recv( s );
// Start our clock now
QueryPerformanceFrequency( fFrequency );
QueryPerformanceCounter( fStart );
// Process 100 confirmations
for task_nbr := 0 to task_count - 1 do
begin
receiver.recv( s );
if (task_nbr mod 10 = 0) then  // Delphi's '/' is real division, so the original test was always true
Write( ':' )
else
Write( '.' );
end;
// Calculate and report duration of batch
QueryPerformanceCounter( fStop );
Writeln( Format( 'Total elapsed time: %d msec', [
((MSecsPerSec * (fStop - fStart)) div fFrequency) ]) );
controller.send( 'KILL' );
// Finished
sleep(1000); // Give 0MQ time to deliver
receiver.Free;
controller.Free;
context.Free;
end.
tasksink2: Parallel task sink with kill signaling in Erlang
#! /usr/bin/env escript
%%
%% Task sink - design 2
%% Adds pub-sub flow to send kill signal to workers
%%
main(_) ->
{ok, Context} = erlzmq:context(),
%% Socket to receive messages on
{ok, Receiver} = erlzmq:socket(Context, pull),
ok = erlzmq:bind(Receiver, "tcp://*:5558"),
%% Socket for worker control
{ok, Controller} = erlzmq:socket(Context, pub),
ok = erlzmq:bind(Controller, "tcp://*:5559"),
%% Wait for start of batch
{ok, _} = erlzmq:recv(Receiver),
%% Start our clock now
Start = now(),
%% Process 100 confirmations
process_confirmations(Receiver, 100),
io:format("Total elapsed time: ~b msec~n",
[timer:now_diff(now(), Start) div 1000]),
%% Send kill signal to workers
ok = erlzmq:send(Controller, <<"KILL">>),
%% Finished
ok = erlzmq:close(Controller),
ok = erlzmq:close(Receiver),
ok = erlzmq:term(Context, 1000).
process_confirmations(_Receiver, 0) -> ok;
process_confirmations(Receiver, N) when N > 0 ->
{ok, _} = erlzmq:recv(Receiver),
case (N - 1) rem 10 of  %% rem binds tighter than '-', so parentheses are required
0 -> io:format(":");
_ -> io:format(".")
end,
process_confirmations(Receiver, N - 1).
tasksink2: Parallel task sink with kill signaling in Elixir
defmodule Tasksink2 do
@moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:35
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, receiver} = :erlzmq.socket(context, :pull)
:ok = :erlzmq.bind(receiver, 'tcp://*:5558')
{:ok, controller} = :erlzmq.socket(context, :pub)
:ok = :erlzmq.bind(controller, 'tcp://*:5559')
{:ok, _} = :erlzmq.recv(receiver)
start = :erlang.now()
process_confirmations(receiver, 100)
:io.format('Total elapsed time: ~b msec~n', [div(:timer.now_diff(:erlang.now(), start), 1000)])
:ok = :erlzmq.send(controller, "KILL")
:ok = :erlzmq.close(controller)
:ok = :erlzmq.close(receiver)
:ok = :erlzmq.term(context, 1000)
end
def process_confirmations(_receiver, 0) do
:ok
end
def process_confirmations(receiver, n) when n > 0 do
{:ok, _} = :erlzmq.recv(receiver)
case(rem(n - 1, 10)) do
0 ->
:io.format(':')
_ ->
:io.format('.')
end
process_confirmations(receiver, n - 1)
end
end
Tasksink2.main
tasksink2: Parallel task sink with kill signaling in F#
tasksink2: Parallel task sink with kill signaling in Felix
tasksink2: Parallel task sink with kill signaling in Go
//
// Task sink
// Binds PULL socket to tcp://localhost:5558
// Collects results from workers via that socket
//
package main
import (
"fmt"
zmq "github.com/alecthomas/gozmq"
"time"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
// Socket to receive messages on
receiver, _ := context.NewSocket(zmq.PULL)
defer receiver.Close()
receiver.Bind("tcp://*:5558")
// Socket for worker control
controller, _ := context.NewSocket(zmq.PUB)
defer controller.Close()
controller.Bind("tcp://*:5559")
// Wait for start of batch
msgbytes, _ := receiver.Recv(0)
fmt.Println("Received Start Msg ", string(msgbytes))
// Start our clock now
start_time := time.Now().UnixNano()
for i := 0; i < 100; i++ {
msgbytes, _ = receiver.Recv(0)
if i%10 == 0 {
fmt.Print(":")
} else {
fmt.Print(".")
}
}
// Calculate and report duration of batch
te := time.Now().UnixNano()
fmt.Printf("Total elapsed time: %d msec\n", (te-start_time)/1e6)
err := controller.Send([]byte("KILL"), 0)
if err != nil {
fmt.Println(err)
}
time.Sleep(1 * time.Second)
}
tasksink2: Parallel task sink with kill signaling in Haskell
{-# LANGUAGE OverloadedStrings #-}
-- Task sink - design 2
-- Adds pub-sub flow to send kill signal to workers
module Main where
import Control.Monad
import Data.Time.Clock
import System.IO
import System.ZMQ4.Monadic
main :: IO ()
main = runZMQ $ do
-- Socket to receive messages on
receiver <- socket Pull
bind receiver "tcp://*:5558"
-- Socket for worker control
controller <- socket Pub
bind controller "tcp://*:5559"
-- Wait for start of batch
_ <- receive receiver
-- Start our clock now
start_time <- liftIO getCurrentTime
-- Process 100 confirmations
liftIO $ hSetBuffering stdout NoBuffering
forM_ [1..100] $ \i -> do
_ <- receive receiver
if i `mod` 10 == 0
then liftIO $ putStr ":"
else liftIO $ putStr "."
end_time <- liftIO getCurrentTime
liftIO . putStrLn $ "Total elapsed time: " ++ show (diffUTCTime end_time start_time * 1000) ++ " msec"
-- Send kill signal to workers
send controller [] "KILL"
tasksink2: Parallel task sink with kill signaling in Haxe
package ;
import haxe.io.Bytes;
import neko.Lib;
import neko.Sys;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQSocket;
/**
 * Parallel Task sink with kill signalling in Haxe
* Binds PULL request socket to tcp://localhost:5558
* Collects results from workers via this socket
*
* See: https://zguide.zeromq.cn/page:all#Handling-Errors-and-ETERM
*
* Based on https://zguide.zeromq.cn/cs:tasksink2
*
* Use with TaskVent.hx and TaskWork2.hx
*/
class TaskSink2
{
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println("** TaskSink2 (see: https://zguide.zeromq.cn/page:all#Handling-Errors-and-ETERM)");
// Socket to receive messages on
var receiver:ZMQSocket = context.socket(ZMQ_PULL);
receiver.bind("tcp://127.0.0.1:5558");
// Socket to send control messages to workers
var controller:ZMQSocket = context.socket(ZMQ_PUB);
controller.bind("tcp://127.0.0.1:5559");
// Wait for start of batch
var msgString = StringTools.trim(receiver.recvMsg().toString());
// Start our clock now
var tStart = Sys.time();
// Process 100 messages
var task_nbr:Int;
for (task_nbr in 0 ... 100) {
msgString = StringTools.trim(receiver.recvMsg().toString());
if (task_nbr % 10 == 0) {
Lib.println(":"); // Print a ":" every 10 messages
} else {
Lib.print(".");
}
}
// Calculate and report duration of batch
var tEnd = Sys.time();
Lib.println("Total elapsed time: " + Math.ceil((tEnd - tStart) * 1000) + " msec");
// Send kill signal to workers
controller.sendMsg(Bytes.ofString("KILL"));
Sys.sleep(1.0); // Give 0MQ time to deliver
// Shut down
receiver.close();
controller.close();
context.term();
}
}
tasksink2: Parallel task sink with kill signaling in Java
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZContext;
/**
* Task sink - design 2
* Adds pub-sub flow to send kill signal to workers
*/
public class tasksink2
{
public static void main(String[] args) throws Exception
{
// Prepare our context and socket
try (ZContext context = new ZContext()) {
ZMQ.Socket receiver = context.createSocket(SocketType.PULL);
receiver.bind("tcp://*:5558");
// Socket for worker control
ZMQ.Socket controller = context.createSocket(SocketType.PUB);
controller.bind("tcp://*:5559");
// Wait for start of batch
receiver.recv(0);
// Start our clock now
long tstart = System.currentTimeMillis();
// Process 100 confirmations
int task_nbr;
for (task_nbr = 0; task_nbr < 100; task_nbr++) {
receiver.recv(0);
if ((task_nbr / 10) * 10 == task_nbr) {
System.out.print(":");
}
else {
System.out.print(".");
}
System.out.flush();
}
// Calculate and report duration of batch
long tend = System.currentTimeMillis();
System.out.println(
"Total elapsed time: " + (tend - tstart) + " msec"
);
// Send the kill signal to the workers
controller.send("KILL", 0);
// Give it some time to deliver
Thread.sleep(1);
}
}
}
tasksink2: Parallel task sink with kill signaling in Julia
tasksink2: Parallel task sink with kill signaling in Lua
--
-- Task sink - design 2
-- Adds pub-sub flow to send kill signal to workers
--
-- Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zhelpers"
local fmod = math.fmod
local context = zmq.init(1)
-- Socket to receive messages on
local receiver = context:socket(zmq.PULL)
receiver:bind("tcp://*:5558")
-- Socket for worker control
local controller = context:socket(zmq.PUB)
controller:bind("tcp://*:5559")
-- Wait for start of batch
local msg = receiver:recv()
-- Start our clock now
local start_time = s_clock ()
-- Process 100 confirmations
local task_nbr
for task_nbr=0,99 do
local msg = receiver:recv()
if (fmod(task_nbr, 10) == 0) then
printf (":")
else
printf (".")
end
io.stdout:flush()
end
printf("Total elapsed time: %d msec\n", (s_clock () - start_time))
-- Send kill signal to workers
controller:send("KILL")
-- Finished
s_sleep (1000) -- Give 0MQ time to deliver
receiver:close()
controller:close()
context:term()
tasksink2: Parallel task sink with kill signaling in Node.js
// Task sink in Node.js, design 2
// Adds a pub-sub flow to send kill signal to workers
var zmq = require('zeromq')
, receiver = zmq.socket('pull')
, controller = zmq.socket('pub');
var started = false
, i = 0
, label = "Total elapsed time";
receiver.on('message', function() {
// wait for start of batch
if (!started) {
console.time(label);
started = true;
// process 100 confirmations
} else {
i += 1;
process.stdout.write(i % 10 === 0 ? ':' : '.');
if (i === 100) {
console.timeEnd(label);
controller.send("KILL");
controller.close();
receiver.close();
process.exit();
}
}
});
receiver.bindSync("tcp://*:5558");
controller.bindSync("tcp://*:5559");
tasksink2: Parallel task sink with kill signaling in Objective-C
/* tasksink.m: PULLs workers' results from tcp://localhost:5558/. */
/* You can wire up the vent, workers, and sink like so:
* $ ./tasksink &
* $ ./taskwork & # Repeat this as many times as you want workers.
* $ ./taskvent &
*/
#import <Foundation/Foundation.h>
#import "ZMQObjC.h"
#import <sys/time.h>
#define NSEC_PER_MSEC (1000000)
#define MSEC_PER_SEC (1000)
int
main(void)
{
NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];
/* Prepare context and socket. */
ZMQContext *ctx = [[[ZMQContext alloc] initWithIOThreads:1U] autorelease];
ZMQSocket *pull = [ctx socketWithType:ZMQ_PULL];
[pull bindToEndpoint:@"tcp://*:5558"];
/* New control socket - send any message to kill workers. */
ZMQSocket *control = [ctx socketWithType:ZMQ_PUB];
[control bindToEndpoint:@"tcp://*:5559"];
/* Wait for batch start. */
/* Cast result to void because we don't actually care about the value.
* The return value has been autoreleased, so no memory is leaked. */
(void)[pull receiveDataWithFlags:0];
/* Start clock. */
struct timeval tstart, tdiff, tend;
(void)gettimeofday(&tstart, NULL);
/* Process |kTaskCount| confirmations. */
static const int kTaskCount = 100;
for (int task = 0; task < kTaskCount; ++task) {
NSAutoreleasePool *p = [[NSAutoreleasePool alloc] init];
(void)[pull receiveDataWithFlags:0];
BOOL isMultipleOfTen = (0 == (task % 10));
if (isMultipleOfTen) {
fputs(":", stdout);
} else {
fputs(".", stdout);
}
fflush(stdout);
[p drain];
}
fputc('\n', stdout);
/* Stop clock. */
(void)gettimeofday(&tend, NULL);
/* Calculate the difference. */
tdiff.tv_sec = tend.tv_sec - tstart.tv_sec;
tdiff.tv_usec = tend.tv_usec - tstart.tv_usec;
    if (tdiff.tv_usec < 0) {
        tdiff.tv_sec -= 1;
        tdiff.tv_usec += 1000000;  /* one second's worth of microseconds */
    }
    /* Convert it to milliseconds (tv_usec is in microseconds). */
    unsigned long totalMsec = tdiff.tv_sec * MSEC_PER_SEC
            + tdiff.tv_usec / 1000;
NSLog(@"Total elapsed time: %lu ms", totalMsec);
/* Kill workers. Any message will do, including an empty one. */
[control sendData:nil withFlags:0];
/* Give 0MQ time to deliver the kill message. */
sleep(1);
[ctx closeSockets];
[pool drain];
return EXIT_SUCCESS;
}
tasksink2: Parallel task sink with kill signaling in ooc
tasksink2: Parallel task sink with kill signaling in Perl
# Task sink - design 2 in Perl
# Adds pub-sub flow to send kill signal to workers
use strict;
use warnings;
use v5.10;
use Time::HiRes qw(time);
$| = 1; # autoflush stdout after each print
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_PULL ZMQ_PUB);
# Socket to receive messages on
my $context = ZMQ::FFI->new();
my $receiver = $context->socket(ZMQ_PULL);
$receiver->bind('tcp://*:5558');
# Socket for worker control
my $controller = $context->socket(ZMQ_PUB);
$controller->bind('tcp://*:5559');
# Wait for start of batch
my $string = $receiver->recv();
# Start our clock now
my $start_time = time();
# Process 100 confirmations
for my $task_nbr (1..100) {
$receiver->recv();
if ( ($task_nbr % 10) == 0 ) {
print ":";
}
else {
print ".";
}
}
# Calculate and report duration of batch
printf "Total elapsed time: %d msec\n",
(time() - $start_time) * 1000;
# Send kill signal to workers
$controller->send("KILL");
tasksink2: Parallel task sink with kill signaling in PHP
<?php
/*
* Task design 2
* Adds pub-sub flow to send kill signal to workers
* @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
*/
$context = new ZMQContext();
// Socket to receive messages on
$receiver = new ZMQSocket($context, ZMQ::SOCKET_PULL);
$receiver->bind("tcp://*:5558");
// Socket for worker control
$controller = new ZMQSocket($context, ZMQ::SOCKET_PUB);
$controller->bind("tcp://*:5559");
// Wait for start of batch
$string = $receiver->recv();
// Process 100 confirmations
$tstart = microtime(true);
$total_msec = 0; // Total calculated cost in msecs
for ($task_nbr = 0; $task_nbr < 100; $task_nbr++) {
$string = $receiver->recv();
if ($task_nbr % 10 == 0) {
echo ":";
} else {
echo ".";
}
}
$tend = microtime(true);
$total_msec = ($tend - $tstart) * 1000;
echo PHP_EOL;
printf ("Total elapsed time: %d msec", $total_msec);
echo PHP_EOL;
// Send kill signal to workers
$controller->send("KILL");
// Finished
sleep (1); // Give 0MQ time to deliver
tasksink2: Parallel task sink with kill signaling in Python
# encoding: utf-8
#
# Task sink - design 2
# Adds pub-sub flow to send kill signal to workers
#
# Author: Jeremy Avnet (brainsik) <spork(dash)zmq(at)theory(dot)org>
#
import sys
import time
import zmq
context = zmq.Context()
# Socket to receive messages on
receiver = context.socket(zmq.PULL)
receiver.bind("tcp://*:5558")
# Socket for worker control
controller = context.socket(zmq.PUB)
controller.bind("tcp://*:5559")
# Wait for start of batch
receiver.recv()
# Start our clock now
tstart = time.time()
# Process 100 confirmations
for task_nbr in range(100):
receiver.recv()
if task_nbr % 10 == 0:
sys.stdout.write(":")
else:
sys.stdout.write(".")
sys.stdout.flush()
# Calculate and report duration of batch
tend = time.time()
tdiff = tend - tstart
total_msec = tdiff * 1000
print("Total elapsed time: %d msec" % total_msec)
# Send kill signal to workers
controller.send(b"KILL")
# Finished
receiver.close()
controller.close()
context.term()
tasksink2: Parallel task sink with kill signaling in Q
tasksink2: Parallel task sink with kill signaling in Racket
tasksink2: Parallel task sink with kill signaling in Ruby
#!/usr/bin/env ruby
#
# Task sink - design 2
# Adds pub-sub flow to send kill signal to workers
#
require 'rubygems'
require 'ffi-rzmq'
# Prepare our context and socket
context = ZMQ::Context.new(1)
receiver = context.socket(ZMQ::PULL)
receiver.bind("tcp://*:5558")
# Socket for worker control
controller = context.socket(ZMQ::PUB)
controller.bind("tcp://*:5559")
# Wait for start of batch
receiver.recv_string('')
tstart = Time.now
# Process 100 confirmations
100.times do |task_nbr|
receiver.recv_string('')
$stdout << ((task_nbr % 10 == 0) ? ':' : '.')
$stdout.flush
end
# Calculate and report duration of batch
tend = Time.now
total_msec = (tend-tstart) * 1000
puts "Total elapsed time: #{total_msec} msec"
# Send kill signal to workers
controller.send_string("KILL")
tasksink2: Parallel task sink with kill signaling in Rust
tasksink2: Parallel task sink with kill signaling in Scala
/*
*
* Task sink2 in Scala
* Binds PULL socket to tcp://localhost:5558
* Collects results from workers via that socket
* publishes a kill signal to tcp://localhost:5559 when the results have been processed.
*
* @author Vadim Shalts
* @email vshalts@gmail.com
*/
import org.zeromq.ZMQ
object tasksink2 {
def main(args: Array[String]) {
// Prepare our context and socket
val context = ZMQ.context(1)
val receiver = context.socket(ZMQ.PULL)
receiver.bind("tcp://*:5558")
val controller = context.socket(ZMQ.PUB);
controller.bind("tcp://*:5559");
    // Wait for start of batch
    receiver.recv(0)
// Start our clock now
val tstart = System.currentTimeMillis
for (task_nbr <- 1 to 100) {
val string = new String(receiver.recv(0)).trim
if ((task_nbr / 10) * 10 == task_nbr) {
print(":")
} else {
print(".")
}
Console.flush()
}
// Calculate and report duration of batch
val tend = System.currentTimeMillis
println("Total elapsed time: " + (tend - tstart) + " msec")
// Send the kill signal to the workers
controller.send("KILL".getBytes(), 0)
// Give it some time to deliver
Thread.sleep(1)
controller.close()
receiver.close()
context.term()
}
}
tasksink2: Parallel task sink with kill signaling in Tcl
#
# Task sink - design 2
# Adds pub-sub flow to send kill signal to workers
#
package require zmq
zmq context context
# Socket to receive messages on
zmq socket receiver context PULL
receiver bind "tcp://*:5558"
# Socket to worker control
zmq socket controller context PUB
controller bind "tcp://*:5559"
# Wait for start of batch
set string [receiver recv]
# Start our clock now
set start_time [clock milliseconds]
# Process 100 confirmations
for {set task_nbr 0} {$task_nbr < 100} {incr task_nbr} {
set string [receiver recv]
if {($task_nbr/10)*10 == $task_nbr} {
puts -nonewline ":"
} else {
puts -nonewline "."
}
flush stdout
}
# Calculate and report duration of batch
puts "Total elapsed time: [expr {[clock milliseconds]-$start_time}]msec"
controller send "KILL"
receiver close
controller close
context term
tasksink2: Parallel task sink with kill signaling in OCaml
(**
* Task sink - design 2
* Adds pub-sub flow to send kill signal to workers
*)
open Zmq
open Helpers
let () =
with_context @@ fun ctx ->
(* Socket to receive messages on *)
with_socket ctx Socket.pull @@ fun receiver ->
Socket.bind receiver "tcp://*:5558";
(* Socket for worker control *)
with_socket ctx Socket.pub @@ fun controller ->
Socket.bind controller "tcp://*:5559";
(* Wait for start of batch *)
let _ = Socket.recv receiver in
(* Start our clock now *)
let start_time = clock_ms () in
(* Process 100 confirmations *)
for taskNum = 0 to 99 do
let _ = Socket.recv receiver in
printfn @@ if ((taskNum / 10) * 10 == taskNum) then ":" else ".";
done;
printfn "Total elapsed time: %d msec" (clock_ms () - start_time);
(* Send kill signal to workers *)
Socket.send controller "KILL";
Handling Interrupt Signals #
Realistic applications need to shut down cleanly when interrupted with Ctrl-C or another signal such as SIGTERM. By default, these simply kill the process, meaning messages won't be flushed, files won't be closed cleanly, and so on.
Here is how we handle a signal in various languages:
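Before the per-language listings, here is a minimal sketch of the general pattern in Python (assuming pyzmq; the flag-plus-receive-timeout approach shown here is one illustrative technique, not the guide's helper library). A signal handler only sets a flag; the receive loop uses a short timeout so it regularly returns to Python and can notice the flag, then closes the socket and terminates the context cleanly. The self-delivered SIGINT via a timer is demo-only, standing in for a user pressing Ctrl-C.

```python
# Clean shutdown sketch: trap SIGINT/SIGTERM into a flag, poll for it.
import signal
import threading
import zmq

interrupted = False

def handler(signum, frame):
    # Do as little as possible in the handler: just record the interrupt
    global interrupted
    interrupted = True

signal.signal(signal.SIGINT, handler)
signal.signal(signal.SIGTERM, handler)

context = zmq.Context()
server = context.socket(zmq.REP)
server.rcvtimeo = 100            # wake every 100 ms to re-check the flag
server.bind("tcp://127.0.0.1:5599")

# Demo only: deliver SIGINT to ourselves shortly after startup,
# standing in for the user pressing Ctrl-C.
threading.Timer(0.3, lambda: signal.raise_signal(signal.SIGINT)).start()

while not interrupted:
    try:
        server.recv()            # raises zmq.Again on timeout
        server.send(b"World")
    except zmq.Again:
        pass                     # no message yet; loop and check the flag

print("W: interrupt received, killing server...")
server.close()
context.term()
```

The key design point is that the handler never touches ZeroMQ objects directly; all socket and context cleanup happens on the main loop's normal exit path.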
interrupt: Handling Ctrl-C cleanly in Ada
interrupt: Handling Ctrl-C cleanly in Basic
interrupt: Handling Ctrl-C cleanly in C
// Shows how to handle Ctrl-C
#include <stdlib.h>
#include <stdio.h>
#include <signal.h>
#include <unistd.h>
#include <fcntl.h>
#include <zmq.h>
// Signal handling
//
// Create a self-pipe and call s_catch_signals(pipe's writefd) in your application
// at startup, and then exit your main loop if your pipe contains any data.
// Works especially well with zmq_poll.
#define S_NOTIFY_MSG " "
#define S_ERROR_MSG "Error while writing to self-pipe.\n"
static int s_fd;
static void s_signal_handler (int signal_value)
{
int rc = write (s_fd, S_NOTIFY_MSG, sizeof(S_NOTIFY_MSG));
if (rc != sizeof(S_NOTIFY_MSG)) {
write (STDOUT_FILENO, S_ERROR_MSG, sizeof(S_ERROR_MSG)-1);
exit(1);
}
}
static void s_catch_signals (int fd)
{
s_fd = fd;
struct sigaction action;
action.sa_handler = s_signal_handler;
// Doesn't matter if SA_RESTART set because self-pipe will wake up zmq_poll
// But setting to 0 will allow zmq_read to be interrupted.
action.sa_flags = 0;
sigemptyset (&action.sa_mask);
sigaction (SIGINT, &action, NULL);
sigaction (SIGTERM, &action, NULL);
}
int main (void)
{
int rc;
void *context = zmq_ctx_new ();
void *socket = zmq_socket (context, ZMQ_REP);
zmq_bind (socket, "tcp://*:5555");
int pipefds[2];
rc = pipe(pipefds);
if (rc != 0) {
perror("Creating self-pipe");
exit(1);
}
for (int i = 0; i < 2; i++) {
int flags = fcntl(pipefds[i], F_GETFL, 0);
if (flags < 0) {
perror ("fcntl(F_GETFL)");
exit(1);
}
rc = fcntl (pipefds[i], F_SETFL, flags | O_NONBLOCK);
if (rc != 0) {
perror ("fcntl(F_SETFL)");
exit(1);
}
}
s_catch_signals (pipefds[1]);
zmq_pollitem_t items [] = {
{ 0, pipefds[0], ZMQ_POLLIN, 0 },
{ socket, 0, ZMQ_POLLIN, 0 }
};
while (1) {
rc = zmq_poll (items, 2, -1);
if (rc == 0) {
continue;
} else if (rc < 0) {
if (errno == EINTR) { continue; }
perror("zmq_poll");
exit(1);
}
// Signal pipe FD
if (items [0].revents & ZMQ_POLLIN) {
char buffer [1];
read (pipefds[0], buffer, 1); // clear notifying byte
printf ("W: interrupt received, killing server...\n");
break;
}
// Read socket
if (items [1].revents & ZMQ_POLLIN) {
char buffer [255];
// Use non-blocking so we can continue to check self-pipe via zmq_poll
rc = zmq_recv (socket, buffer, 255, ZMQ_DONTWAIT);
if (rc < 0) {
if (errno == EAGAIN) { continue; }
if (errno == EINTR) { continue; }
perror("recv");
exit(1);
}
printf ("W: recv\n");
// Now send message back.
// ...
}
}
printf ("W: cleaning up\n");
zmq_close (socket);
zmq_ctx_destroy (context);
return 0;
}
interrupt: Handling Ctrl-C cleanly in C++
// Handling Interrupt Signals in C++
//
// Saad Hussain <saadnasir31@gmail.com>
#include <csignal>
#include <iostream>
#include <zmq.hpp>
int interrupted{0};
void signal_handler(int signal_value) { interrupted = 1; }
void catch_signals() {
std::signal(SIGINT, signal_handler);
std::signal(SIGTERM, signal_handler);
std::signal(SIGSEGV, signal_handler);
std::signal(SIGABRT, signal_handler);
}
int main() {
zmq::context_t ctx(1);
zmq::socket_t socket(ctx, ZMQ_REP);
socket.bind("tcp://localhost:5555");
catch_signals();
while (true) {
zmq::message_t msg;
try {
socket.recv(&msg);
} catch (zmq::error_t &e) {
std::cout << "interrupt received, proceeding..." << std::endl;
}
if (interrupted) {
std::cout << "interrupt received, killing program..." << std::endl;
break;
}
}
return 0;
}
interrupt: Handling Ctrl-C cleanly in C#
interrupt: Handling Ctrl-C cleanly in CL
interrupt: Handling Ctrl-C cleanly in Delphi
program interrupt;
//
// Shows how to handle Ctrl-C
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
socket: TZMQSocket;
frame: TZMQFrame;
begin
context := TZMQContext.Create;
socket := Context.Socket( stRep );
socket.bind( 'tcp://*:5555' );
while not context.Terminated do
begin
frame := TZMQFrame.Create;
try
socket.recv( frame );
except
on e: Exception do
Writeln( 'Exception, ' + e.Message );
end;
FreeAndNil( frame );
if socket.context.Terminated then
begin
Writeln( 'W: interrupt received, killing server...');
break;
end;
end;
socket.Free;
context.Free;
end.
interrupt: Handling Ctrl-C cleanly in Erlang
#! /usr/bin/env escript
%%
%% Illustrates the equivalent in Erlang to signal handling for shutdown
%%
%% Erlang applications don't use system signals for shutdown (they can't
%% without some sort of custom native extension). Instead they rely on an
%% explicit shutdown routine, either per process (as illustrated here) or
%% system wide (e.g. init:stop() and OTP application shutdown).
%%
main(_) ->
%% Start a process that manages its own ZeroMQ startup and shutdown
Server = start_server(),
%% Run for a while
timer:sleep(5000),
%% Send the process a shutdown message - this could be triggered any number
%% of ways (e.g. handling `terminate` in an OTP compliant process)
Server ! {shutdown, self()},
%% Wait for notification that the process has exited cleanly
receive
{ok, Server} -> ok
end.
start_server() ->
%% Start the server in a separate Erlang process
spawn(
fun() ->
%% The process manages its own ZeroMQ context
{ok, Context} = erlzmq:context(),
{ok, Socket} = erlzmq:socket(Context, [rep, {active, true}]),
ok = erlzmq:bind(Socket, "tcp://*:5555"),
io:format("Server started on port 5555~n"),
loop(Context, Socket)
end).
loop(Context, Socket) ->
receive
{zmq, Socket, Msg, _Flags} ->
erlzmq:send(Socket, <<"You said: ", Msg/binary>>),
timer:sleep(1000),
loop(Context, Socket);
{shutdown, From} ->
io:format("Stopping server... "),
ok = erlzmq:close(Socket),
ok = erlzmq:term(Context),
io:format("done~n"),
From ! {ok, self()}
end.
interrupt: Handling Ctrl-C cleanly in Elixir
defmodule Interrupt do
@moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:25
"""
def main() do
server = start_server()
:timer.sleep(5000)
send(server, {:shutdown, self()})
receive do
{:ok, ^server} ->
:ok
end
end
def start_server() do
:erlang.spawn(fn ->
{:ok, context} = :erlzmq.context()
{:ok, socket} = :erlzmq.socket(context, [:rep, {:active, true}])
:ok = :erlzmq.bind(socket, 'tcp://*:5555')
:io.format('Server started on port 5555~n')
loop(context, socket)
end)
end
def loop(context, socket) do
receive do
{:zmq, ^socket, msg, _flags} ->
:erlzmq.send(socket, <<"You said: ", msg::binary>>)
:timer.sleep(1000)
loop(context, socket)
{:shutdown, from} ->
:io.format('Stopping server... ')
:ok = :erlzmq.close(socket)
:ok = :erlzmq.term(context)
:io.format('done~n')
send(from, {:ok, self()})
end
end
end
Interrupt.main
interrupt: Handling Ctrl-C cleanly in F#
interrupt: Handling Ctrl-C cleanly in Felix
interrupt: Handling Ctrl-C cleanly in Go
package main
import (
"fmt"
zmq "github.com/alecthomas/gozmq"
"os"
"os/signal"
)
func main() {
signal_channel := make(chan os.Signal)
signal.Notify(signal_channel)
go func() {
context, _ := zmq.NewContext()
defer context.Close()
socket, _ := context.NewSocket(zmq.REP)
defer socket.Close()
socket.Bind("tcp://*:5555")
msgbytes, err := socket.Recv(0)
if err != nil {
fmt.Println(err)
}
fmt.Printf("%s.\n", string(msgbytes))
}()
<-signal_channel
fmt.Println("exiting")
os.Exit(0)
}
interrupt: Handling Ctrl-C cleanly in Haskell
module Main where
import System.Posix.Signals (installHandler, Handler(Catch), sigINT, sigTERM)
import Control.Concurrent.MVar (modifyMVar_, newMVar, withMVar, MVar)
import System.ZMQ4
handler :: MVar Int -> IO ()
handler s_interrupted = modifyMVar_ s_interrupted (return . (+1))
main :: IO ()
main = withContext $ \ctx ->
withSocket ctx Rep $ \socket -> do
bind socket "tcp://*:5555"
s_interrupted <- newMVar 0
installHandler sigINT (Catch $ handler s_interrupted) Nothing
installHandler sigTERM (Catch $ handler s_interrupted) Nothing
recvFunction s_interrupted socket
recvFunction :: (Ord a, Num a, Receiver b) => MVar a -> Socket b -> IO ()
recvFunction mi sock = do
receive sock
withMVar mi (\val -> if val > 0
then putStrLn "W: Interrupt Received. Killing Server"
else recvFunction mi sock)
interrupt: Handling Ctrl-C cleanly in Haxe
package ;
import haxe.io.Bytes;
import haxe.Stack;
import neko.Lib;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQSocket;
import org.zeromq.ZMQException;
/**
* Signal Handling
*
 * Call ZMQ.catchSignals() before the receive loop to trap Ctrl-C.
*/
class Interrupt
{
public static function main() {
var context:ZMQContext = ZMQContext.instance();
var receiver:ZMQSocket = context.socket(ZMQ_REP);
receiver.bind("tcp://127.0.0.1:5559");
Lib.println("** Interrupt (see: https://zguide.zeromq.cn/page:all#Handling-Interrupt-Signals)");
ZMQ.catchSignals();
Lib.println ("\nPress Ctrl+C");
while (true) {
// Blocking read, will exit only on an interrupt (Ctrl+C)
try {
var msg:Bytes = receiver.recvMsg();
} catch (e:ZMQException) {
if (ZMQ.isInterrupted()) {
trace ("W: interrupt received, killing server ...\n");
break;
}
// Handle other errors
trace("ZMQException #:" + e.errNo + ", str:" + e.str());
trace (Stack.toString(Stack.exceptionStack()));
}
}
// Close up gracefully
receiver.close();
context.term();
}
}
interrupt: Handling Ctrl-C cleanly in Java
package guide;
/*
*
* Interrupt in Java
* Shows how to handle Ctrl-C
*/
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQException;
import org.zeromq.ZContext;
public class interrupt
{
public static void main(String[] args)
{
// Prepare our context and socket
final ZContext context = new ZContext();
final Thread zmqThread = new Thread()
{
@Override
public void run()
{
ZMQ.Socket socket = context.createSocket(SocketType.REP);
socket.bind("tcp://*:5555");
while (!Thread.currentThread().isInterrupted()) {
try {
socket.recv(0);
}
catch (ZMQException e) {
if (e.getErrorCode() == ZMQ.Error.ETERM.getCode()) {
break;
}
}
}
socket.setLinger(0);
socket.close();
}
};
Runtime.getRuntime().addShutdownHook(new Thread()
{
@Override
public void run()
{
System.out.println("W: interrupt received, killing server...");
context.close();
try {
zmqThread.interrupt();
zmqThread.join();
}
catch (InterruptedException e) {
}
}
});
zmqThread.start();
}
}
interrupt: Julia 中干净地处理 Ctrl-C
interrupt: Lua 中干净地处理 Ctrl-C
--
-- Shows how to handle Ctrl-C
--
-- Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zhelpers"
local context = zmq.init(1)
local server = context:socket(zmq.REP)
server:bind("tcp://*:5555")
s_catch_signals ()
while true do
-- Blocking read will exit on a signal
local request = server:recv()
if (s_interrupted) then
printf ("W: interrupt received, killing server...\n")
break
end
server:send("World")
end
server:close()
context:term()
interrupt: Node.js 中干净地处理 Ctrl-C
// Show how to handle Ctrl+C in Node.js
var zmq = require('zeromq')
, socket = zmq.createSocket('rep');
socket.on('message', function(buf) {
// echo request back
socket.send(buf);
});
process.on('SIGINT', function() {
socket.close();
process.exit();
});
socket.bindSync('tcp://*:5555');
interrupt: Objective-C 中干净地处理 Ctrl-C
interrupt: ooc 中干净地处理 Ctrl-C
interrupt: Perl 中干净地处理 Ctrl-C
# Shows how to handle Ctrl-C (SIGINT) and SIGTERM in Perl
use strict;
use warnings;
use v5.10;
use Errno qw(EINTR);
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_REP);
my $interrupted;
$SIG{INT} = sub { $interrupted = 1; };
$SIG{TERM} = sub { $interrupted = 1; };
my $context = ZMQ::FFI->new();
my $socket = $context->socket(ZMQ_REP);
$socket->bind('tcp://*:5558');
$socket->die_on_error(0);
while (!$interrupted) {
$socket->recv();
if ($socket->last_errno != EINTR) {
die $socket->last_strerror;
}
}
warn "interrupt received, killing server...";
interrupt: PHP 中干净地处理 Ctrl-C
<?php
/*
* Interrupt in PHP
* Shows how to handle CTRL+C
* @author Nicolas Van Eenaeme <nicolas(at)poison(dot)be>
*/
declare(ticks=1); // PHP internal, make signal handling work
if (!function_exists('pcntl_signal'))
{
printf("Error, you need to enable the pcntl extension in your php binary, see https://php.ac.cn/manual/en/pcntl.installation.php for more info%s", PHP_EOL);
exit(1);
}
$running = true;
function signalHandler($signo)
{
global $running;
$running = false;
printf("Warning: interrupt received, killing server...%s", PHP_EOL);
}
pcntl_signal(SIGINT, 'signalHandler');
$context = new ZMQContext();
// Socket to talk to clients
$responder = new ZMQSocket($context, ZMQ::SOCKET_REP);
$responder->bind("tcp://*:5558");
while ($running)
{
// Wait for next request from client
try
{
$string = $responder->recv(); // The recv call will throw an ZMQSocketException when interrupted
// PHP Fatal error: Uncaught exception 'ZMQSocketException' with message 'Failed to receive message: Interrupted system call' in interrupt.php:35
}
catch (ZMQSocketException $e)
{
if ($e->getCode() == 4) // 4 == EINTR, interrupted system call (Ctrl+C will interrupt the blocking call as well)
{
usleep(1); // Don't just continue, otherwise the ticks function won't be processed, and the signal will be ignored, try it!
continue; // Ignore it, if our signal handler caught the interrupt as well, the $running flag will be set to false, so we'll break out
}
throw $e; // It's another exception, don't hide it to the user
}
printf("Received request: [%s]%s", $string, PHP_EOL);
// Do some 'work'
sleep(1);
// Send reply back to client
$responder->send("World");
}
// Do here all the cleanup that needs to be done
printf("Program ended cleanly%s", PHP_EOL);
interrupt: Python 中干净地处理 Ctrl-C
#
# Shows how to handle Ctrl-C
#
import signal
import time
import zmq
context = zmq.Context()
socket = context.socket(zmq.REP)
socket.bind("tcp://*:5558")
# SIGINT will normally raise a KeyboardInterrupt, just like any other Python call
try:
socket.recv()
except KeyboardInterrupt:
print("W: interrupt received, stopping...")
finally:
# clean up
socket.close()
context.term()
interrupt: Q 中干净地处理 Ctrl-C
interrupt: Racket 中干净地处理 Ctrl-C
interrupt: Ruby 中干净地处理 Ctrl-C
#!/usr/bin/env ruby
# Shows how to handle Ctrl-C
require 'ffi-rzmq'
context = ZMQ::Context.new(1)
socket = context.socket(ZMQ::REP)
socket.bind("tcp://*:5558")
trap("INT") { puts "Shutting down."; socket.close; context.terminate; exit}
puts "Starting up"
while true do
message = socket.recv_string
puts "Message: #{message.inspect}"
socket.send_string("Message received")
end
interrupt: Rust 中干净地处理 Ctrl-C
use nix::sys::signal;
use std::os::unix::io::RawFd;
use std::process;
static mut S_FD: RawFd = -1;
extern "C" fn s_signal_handler(_: i32) {
let rc = unsafe { nix::unistd::write(S_FD, b" ") };
match rc {
Ok(_) => {}
Err(_) => {
println!("Error while writing to self-pipe.");
process::exit(1);
}
}
}
fn s_catch_signals(fd: RawFd) {
unsafe {
S_FD = fd;
}
let sig_action = signal::SigAction::new(
signal::SigHandler::Handler(s_signal_handler),
signal::SaFlags::empty(),
signal::SigSet::empty(),
);
unsafe {
signal::sigaction(signal::SIGINT, &sig_action).unwrap();
signal::sigaction(signal::SIGTERM, &sig_action).unwrap();
}
}
fn main() {
let context = zmq::Context::new();
let socket = context.socket(zmq::REP).unwrap();
assert!(socket.bind("tcp://*:5555").is_ok());
let pipefds = nix::unistd::pipe().unwrap();
nix::fcntl::fcntl(pipefds.0, nix::fcntl::F_GETFL).unwrap();
nix::fcntl::fcntl(pipefds.0, nix::fcntl::F_SETFL(nix::fcntl::O_NONBLOCK)).unwrap();
s_catch_signals(pipefds.1);
let items = &mut [
zmq::PollItem::from_fd(pipefds.0, zmq::POLLIN),
socket.as_poll_item(zmq::POLLIN),
];
loop {
let rc = zmq::poll(items, -1);
match rc {
Ok(v) => {
assert!(v >= 0);
if v == 0 {
continue;
}
if items[0].is_readable() {
let buffer = &mut [0; 1];
nix::unistd::read(pipefds.0, buffer).unwrap();
println!("W: interrupt received, killing server...");
break;
}
if items[1].is_readable() {
let buffer = &mut [0; 255];
let rc = socket.recv_into(buffer, zmq::DONTWAIT);
match rc {
Ok(_) => println!("W: recv"),
Err(e) => {
if e == zmq::Error::EAGAIN || e == zmq::Error::EINTR {
continue;
}
println!("recv: {}", e);
process::exit(1);
}
}
}
}
Err(e) => {
if e == zmq::Error::EINTR {
continue;
}
println!("zmq::poll: {}", e);
process::exit(1);
}
}
}
}
interrupt: Scala 中干净地处理 Ctrl-C
/*
*
* Interrupt in Scala
* Shows how to handle Ctrl-C
*
* @author Vadim Shalts
* @email vshalts@gmail.com
*/
import org.zeromq.{ZMQException, ZMQ}
object interrupt {
def main(args: Array[String]) {
val context: ZMQ.Context = ZMQ.context(1)
val zmqThread = new Thread(new Runnable {
def run() {
var socket = context.socket(ZMQ.REP)
socket.bind("tcp://*:5555")
while (!Thread.currentThread.isInterrupted) {
try {
socket.recv(0)
} catch {
case e: ZMQException if ZMQ.Error.ETERM.getCode == e.getErrorCode =>
Thread.currentThread.interrupt()
case e => throw e
}
}
socket.close()
println("ZMQ socket shutdown complete")
}
})
sys.addShutdownHook({
println("ShutdownHook called")
context.term()
zmqThread.interrupt()
zmqThread.join
})
zmqThread.start()
}
}
interrupt: Tcl 中干净地处理 Ctrl-C
interrupt: OCaml 中干净地处理 Ctrl-C
该程序提供了 `s_catch_signals()`,它会捕获 Ctrl-C (SIGINT) 和 SIGTERM。当任一信号到达时,`s_catch_signals()` 安装的处理程序会设置全局变量 `s_interrupted`。有了信号处理程序,你的应用程序就不会自动终止;相反,你有机会进行清理并优雅地退出,但你现在必须显式检查中断并正确处理它。设置方法是:从 interrupt.c 拷贝 `s_catch_signals()` 函数,并在主代码的开头调用它。中断会按如下方式影响 ZeroMQ 调用:
- 如果你的代码在阻塞调用(发送消息、接收消息或轮询)中阻塞,那么当信号到达时,该调用将返回 EINTR。
- 像 s_recv() 这样的封装函数如果被中断,会返回 NULL。
因此,请检查 EINTR 返回码、NULL 返回值,以及/或全局变量 `s_interrupted`。
这是一个典型的代码片段
s_catch_signals ();
client = zmq_socket (...);
while (!s_interrupted) {
char *message = s_recv (client);
if (!message)
break; // Ctrl-C used
}
zmq_close (client);
如果你调用了 `s_catch_signals()` 但不检查中断情况,那么你的应用程序将对 Ctrl-C 和 SIGTERM 免疫,这可能有用,但通常不是你想要的。
检测内存泄漏 #
任何长时间运行的应用程序都必须正确管理内存,否则最终会耗尽所有可用内存并崩溃。如果你使用的语言可以自动处理内存管理,恭喜你。如果你使用 C 或 C++ 或任何其他需要自行负责内存管理的语言编程,这里有一个使用 valgrind 的简短教程,valgrind 除了其他功能外,还会报告你的程序存在的任何内存泄漏。
- 要安装 valgrind,例如在 Ubuntu 或 Debian 上,请执行以下命令
sudo apt-get install valgrind
- 默认情况下,ZeroMQ 会导致 valgrind 产生大量警告。为了消除这些警告,创建一个名为vg.supp的文件,其中包含以下内容
{
<socketcall_sendto>
Memcheck:Param
socketcall.sendto(msg)
fun:send
...
}
{
<socketcall_sendto>
Memcheck:Param
socketcall.send(msg)
fun:send
...
}
-
修改你的应用程序,使其在收到 Ctrl-C 后干净地退出。对于会自行退出的应用程序,这一步不是必需的;但对于长时间运行的应用程序则必不可少,否则 valgrind 会把所有当前已分配的内存都报告出来。
-
使用-DDEBUG编译你的应用程序,如果它不是你的默认设置。这能确保 valgrind 可以准确地告诉你内存泄漏发生在哪里。
-
最后,像这样运行 valgrind
valgrind --tool=memcheck --leak-check=full --suppressions=vg.supp someprog
在修复了它报告的所有错误后,你应该看到令人愉快的消息
==30536== ERROR SUMMARY: 0 errors from 0 contexts...
使用 ZeroMQ 进行多线程编程 #
ZeroMQ 也许是编写多线程 (MT) 应用程序有史以来最好的方式。虽然如果你习惯了传统的套接字,ZeroMQ 套接字需要一些调整,但 ZeroMQ 的多线程编程将把你所知的关于编写 MT 应用程序的一切知识,扔进花园里的堆肥,倒上汽油,然后点燃。很少有书值得焚烧,但大多数关于并发编程的书都值得。
为了编写绝对完美的多线程程序(我是字面意思),我们不需要互斥锁 (mutexes)、锁 (locks) 或任何其他形式的线程间通信,只需通过 ZeroMQ 套接字发送消息即可。
我所说的“完美多线程程序”,是指易于编写和理解的代码,它在任何编程语言和任何操作系统上都采用相同的设计方法,并且可以在任意数量的 CPU 上扩展,具有零等待状态且没有收益递减点。
如果你花了数年时间学习各种技巧,使用锁、信号量和临界区 (critical sections) 来让你的多线程代码工作起来,更别提快速运行了,那么当你意识到这一切都是徒劳时,你会感到恶心。如果说我们从 30 多年并发编程中学到了什么教训,那就是:不要共享状态。这就像两个醉汉试图共享一杯啤酒一样。他们是否是好朋友并不重要。迟早他们会打起来。你加到桌上的醉汉越多,他们为争啤酒而打得越凶。绝大多数悲惨的多线程应用程序看起来都像醉汉打架。
当你编写经典的共享状态多线程代码时,需要应对的各种奇怪问题清单本可以很有趣,可惜它们直接转化为压力和风险:看起来能正常工作的代码会在压力下突然失败。一家在有缺陷代码方面拥有世界顶尖经验的大公司发布了其“多线程代码中 11 个可能出现的问题”列表,其中包括遗忘同步、不正确的粒度、读写撕裂 (read and write tearing)、无锁重排 (lock-free reordering)、锁队列 (lock convoys)、两步舞 (two-step dance) 和优先级反转 (priority inversion)。
是的,我们数了七个问题,不是十一个。但这并不是重点。重点是,你真的希望运行电网或股票市场的代码在一个繁忙的周四下午三点开始出现两步锁队列 (two-step lock convoys) 吗?谁在意这些术语到底是什么意思?这不是让我们爱上编程的原因,我们不是为了用越来越复杂的技巧来对抗越来越复杂的副作用。
一些被广泛使用的模型,尽管是整个行业的基础,但却存在根本性缺陷,共享状态并发就是其中之一。想要无限扩展的代码会像互联网一样,通过发送消息并且除了对有缺陷的编程模型共同抱有蔑视之外,什么都不共享。
你应该遵循一些规则来编写使用 ZeroMQ 的愉快的(即无故障的)多线程代码
-
将数据私有地隔离在各自的线程内,绝不要在多个线程中共享数据。唯一的例外是 ZeroMQ 上下文 (contexts),它们是线程安全的。
-
远离经典的并发机制,如互斥锁 (mutexes)、临界区 (critical sections)、信号量 (semaphores) 等。这些在 ZeroMQ 应用程序中是一种反模式 (anti-pattern)。
-
在进程启动时创建一个 ZeroMQ 上下文,并将其传递给你希望通过`zmq_inproc()`套接字连接的所有线程。
-
使用 附加线程 在应用程序内部创建结构,并使用通过 inproc(参见 `zmq_inproc()`)传输的 PAIR 套接字将它们连接到其父线程。模式是:父线程绑定套接字,然后创建连接该套接字的子线程。
-
使用 分离线程 来模拟独立的任务,它们有自己的上下文。通过`zmq_tcp()`连接它们。稍后,你可以在不显著修改代码的情况下将它们移到独立的进程中。
-
所有线程间的交互都通过 ZeroMQ 消息发生,你可以或多或少正式地定义这些消息。
-
不要在线程之间共享 ZeroMQ 套接字。ZeroMQ 套接字不是线程安全的。技术上可以将套接字从一个线程迁移到另一个线程,但这需要技巧。唯一勉强说得通的在线程间共享套接字的地方是在需要对套接字进行垃圾收集等“魔术”操作的语言绑定中。
例如,如果你需要在应用程序中启动多个代理,你会希望每个代理都在自己的线程中运行。很容易犯的错误是在一个线程中创建代理的前端和后端套接字,然后将这些套接字传递给另一个线程中的代理。这最初可能看起来能工作,但在实际使用中会随机失败。记住:不要在创建套接字的线程之外使用或关闭套接字。
如果你遵循这些规则,你可以相当容易地构建优雅的多线程应用程序,并随后根据需要将线程拆分为独立的进程。应用程序逻辑可以位于线程、进程或节点中:取决于你的规模需求。
ZeroMQ使用原生操作系统线程而不是虚拟的“绿色”线程。其优势在于你不需要学习任何新的线程API,并且ZeroMQ线程可以干净地映射到你的操作系统。你可以使用标准工具(例如英特尔的ThreadChecker)来查看你的应用程序正在做什么。缺点是原生线程API并非总是可移植的,并且如果你有大量线程(数千个),一些操作系统会面临压力。
让我们看看这在实践中是如何工作的。我们将把旧的Hello World服务器变成功能更强大的东西。原始服务器运行在单个线程中。如果每个请求的工作量较低,那没问题:一个ØMQ线程可以在一个CPU核心上全速运行,没有等待,完成大量工作。但真实的服务器需要对每个请求做非平凡的工作。当10,000个客户端同时请求服务器时,单个核心可能不够。因此,一个真实的服务器将启动多个工作线程。然后它会尽快接受请求,并将这些请求分发给其工作线程。工作线程处理工作,最终发送回复。
当然,你可以使用代理broker和外部工作进程来完成所有这些工作,但启动一个占用十六个核心的进程通常比启动十六个各占用一个核心的进程更容易。此外,将工作者作为线程运行将减少一次网络跳跃、延迟和网络流量。
Hello World服务的MT版本基本上将broker和工作者合并到一个进程中。
mtserver:Ada中的多线程服务
mtserver:Basic中的多线程服务
mtserver:C中的多线程服务
// Multithreaded Hello World server
#include "zhelpers.h"
#include <pthread.h>
#include <unistd.h>
static void *
worker_routine (void *context) {
// Socket to talk to dispatcher
void *receiver = zmq_socket (context, ZMQ_REP);
zmq_connect (receiver, "inproc://workers");
while (1) {
char *string = s_recv (receiver);
printf ("Received request: [%s]\n", string);
free (string);
// Do some 'work'
sleep (1);
// Send reply back to client
s_send (receiver, "World");
}
zmq_close (receiver);
return NULL;
}
int main (void)
{
void *context = zmq_ctx_new ();
// Socket to talk to clients
void *clients = zmq_socket (context, ZMQ_ROUTER);
zmq_bind (clients, "tcp://*:5555");
// Socket to talk to workers
void *workers = zmq_socket (context, ZMQ_DEALER);
zmq_bind (workers, "inproc://workers");
// Launch pool of worker threads
int thread_nbr;
for (thread_nbr = 0; thread_nbr < 5; thread_nbr++) {
pthread_t worker;
pthread_create (&worker, NULL, worker_routine, context);
}
// Connect work threads to client threads via a queue proxy
zmq_proxy (clients, workers, NULL);
// We never get here, but clean up anyhow
zmq_close (clients);
zmq_close (workers);
zmq_ctx_destroy (context);
return 0;
}
mtserver:C++中的多线程服务
/*
author: Saad Hussain <saadnasir31@gmail.com>
date: 30th January 2024
*/
#include <string>
#include <iostream>
#include <thread>
#include <zmq.hpp>
void worker_routine(zmq::context_t& ctx) {
zmq::socket_t socket(ctx, ZMQ_REP);
socket.connect("inproc://workers");
while(true) {
zmq::message_t request;
socket.recv(&request);
std::cout << "Received request: [" << (char*) request.data() << "]" << std::endl;
std::this_thread::sleep_for(std::chrono::seconds(1));
zmq::message_t reply("World", 5);
socket.send(reply);
}
}
int main() {
zmq::context_t ctx(1);
zmq::socket_t clients(ctx, ZMQ_ROUTER);
clients.bind("tcp://localhost:5555");
zmq::socket_t workers(ctx, ZMQ_DEALER);
workers.bind("inproc://workers");
std::vector<std::thread> worker_threads;
for (int thread_nbr = 0; thread_nbr != 5; ++thread_nbr) {
worker_threads.emplace_back([&ctx] { worker_routine(ctx); });
}
zmq::proxy(clients, workers, nullptr);
for (auto& thread : worker_threads) {
thread.join();
}
return 0;
}
mtserver:C#中的多线程服务
mtserver:CL中的多线程服务
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Multithreaded Hello World server in Common Lisp
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.mtserver
(:nicknames #:mtserver)
(:use #:cl #:zhelpers)
(:export #:main))
(in-package :zguide.mtserver)
(defun worker-routine (context)
;; Socket to talk to dispatcher
(zmq:with-socket (receiver context zmq:rep)
(zmq:connect receiver "inproc://workers")
(loop
(let ((request (make-instance 'zmq:msg)))
(zmq:recv receiver request)
(message "Received request: [~A]~%" (zmq:msg-data-as-string request))
;; Do some 'work'
(sleep 1)
;; Send reply back to client
(let ((reply (make-instance 'zmq:msg :data "World")))
(zmq:send receiver reply))))))
(defun main ()
;; Prepare our context and socket
(zmq:with-context (context 1)
;; Socket to talk to clients
(zmq:with-socket (clients context zmq:router)
(zmq:bind clients "tcp://*:5555")
;; Socket to talk to workers
(zmq:with-socket (workers context zmq:dealer)
(zmq:bind workers "inproc://workers")
;; Launch pool of worker threads
(dotimes (i 5)
(bt:make-thread (lambda () (worker-routine context))
:name (format nil "worker-~D" i)))
;; Connect work threads to client threads via a queue
(zmq:device zmq:queue clients workers))))
(cleanup))
mtserver:Delphi中的多线程服务
program mtserver;
//
// Multithreaded Hello World server
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
procedure worker_routine( lcontext: TZMQContext );
var
receiver: TZMQSocket;
s: Utf8String;
begin
// Socket to talk to dispatcher
receiver := lContext.Socket( stRep );
receiver.connect( 'inproc://workers' );
while True do
begin
receiver.recv( s );
Writeln( Format( 'Received request: [%s]', [s] ) );
// Do some 'work'
sleep (1000);
// Send reply back to client
receiver.send( 'World' );
end;
receiver.Free;
end;
var
context: TZMQContext;
clients,
workers: TZMQSocket;
i: Integer;
tid: Cardinal;
begin
context := TZMQContext.Create;
// Socket to talk to clients
clients := Context.Socket( stRouter );
clients.bind( 'tcp://*:5555' );
// Socket to talk to workers
workers := Context.Socket( stDealer );
workers.bind( 'inproc://workers' );
// Launch pool of worker threads
for i := 0 to 4 do
BeginThread( nil, 0, @worker_routine, context, 0, tid );
// Connect work threads to client threads via a queue
ZMQProxy( clients, workers, nil );
// We never get here but clean up anyhow
clients.Free;
workers.Free;
context.Free;
end.
mtserver:Erlang中的多线程服务
#!/usr/bin/env escript
%%
%% Multiprocess Hello World server (analogous to C threads example)
%%
worker_routine(Context) ->
%% Socket to talk to dispatcher
{ok, Receiver} = erlzmq:socket(Context, rep),
ok = erlzmq:connect(Receiver, "inproc://workers"),
worker_loop(Receiver),
ok = erlzmq:close(Receiver).
worker_loop(Receiver) ->
{ok, Msg} = erlzmq:recv(Receiver),
io:format("Received ~s [~p]~n", [Msg, self()]),
%% Do some work
timer:sleep(1000),
erlzmq:send(Receiver, <<"World">>),
worker_loop(Receiver).
main(_) ->
{ok, Context} = erlzmq:context(),
%% Socket to talk to clients
{ok, Clients} = erlzmq:socket(Context, [router, {active, true}]),
ok = erlzmq:bind(Clients, "tcp://*:5555"),
%% Socket to talk to workers
{ok, Workers} = erlzmq:socket(Context, [dealer, {active, true}]),
ok = erlzmq:bind(Workers, "inproc://workers"),
%% Start worker processes
start_workers(Context, 5),
%% Connect work threads to client threads via a queue
erlzmq_device:queue(Clients, Workers),
%% We never get here but cleanup anyhow
ok = erlzmq:close(Clients),
ok = erlzmq:close(Workers),
ok = erlzmq:term(Context).
start_workers(_Context, 0) -> ok;
start_workers(Context, N) when N > 0 ->
spawn(fun() -> worker_routine(Context) end),
start_workers(Context, N - 1).
mtserver:Elixir中的多线程服务
defmodule Mtserver do
@moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:29
"""
def worker_routine(context) do
{:ok, receiver} = :erlzmq.socket(context, :rep)
:ok = :erlzmq.connect(receiver, 'inproc://workers')
worker_loop(receiver)
:ok = :erlzmq.close(receiver)
end
def worker_loop(receiver) do
{:ok, msg} = :erlzmq.recv(receiver)
:io.format('Received ~s [~p]~n', [msg, self()])
:timer.sleep(1000)
:erlzmq.send(receiver, "World")
worker_loop(receiver)
end
def main() do
{:ok, context} = :erlzmq.context()
{:ok, clients} = :erlzmq.socket(context, [:router, {:active, true}])
:ok = :erlzmq.bind(clients, 'tcp://*:5555')
{:ok, workers} = :erlzmq.socket(context, [:dealer, {:active, true}])
:ok = :erlzmq.bind(workers, 'inproc://workers')
start_workers(context, 5)
:erlzmq_device.queue(clients, workers)
:ok = :erlzmq.close(clients)
:ok = :erlzmq.close(workers)
:ok = :erlzmq.term(context)
end
def start_workers(_context, 0) do
:ok
end
def start_workers(context, n) when n > 0 do
:erlang.spawn(fn -> worker_routine(context) end)
start_workers(context, n - 1)
end
end
Mtserver.main
mtserver:F#中的多线程服务
mtserver:Felix中的多线程服务
mtserver:Go中的多线程服务
// Multithreaded Hello World server.
// Uses Goroutines. We could also use channels (a native form of
// inproc), but I stuck to the example.
//
// Author: Brendan Mc.
// Requires: http://github.com/alecthomas/gozmq
package main
import (
"fmt"
zmq "github.com/alecthomas/gozmq"
"time"
)
func main() {
// Launch pool of worker threads
for i := 0; i != 5; i = i + 1 {
go worker()
}
// Prepare our context and sockets
context, _ := zmq.NewContext()
defer context.Close()
// Socket to talk to clients
clients, _ := context.NewSocket(zmq.ROUTER)
defer clients.Close()
clients.Bind("tcp://*:5555")
// Socket to talk to workers
workers, _ := context.NewSocket(zmq.DEALER)
defer workers.Close()
workers.Bind("ipc://workers.ipc")
// connect work threads to client threads via a queue
zmq.Device(zmq.QUEUE, clients, workers)
}
func worker() {
context, _ := zmq.NewContext()
defer context.Close()
// Socket to talk to dispatcher
receiver, _ := context.NewSocket(zmq.REP)
defer receiver.Close()
receiver.Connect("ipc://workers.ipc")
for true {
received, _ := receiver.Recv(0)
fmt.Printf("Received request [%s]\n", received)
// Do some 'work'
time.Sleep(time.Second)
// Send reply back to client
receiver.Send([]byte("World"), 0)
}
}
mtserver:Haskell中的多线程服务
{-# LANGUAGE OverloadedStrings #-}
-- |
-- Multithreaded Hello World server (p.65)
-- (Client) REQ >-> ROUTER (Proxy) DEALER >-> REP ([Worker])
-- The client is provided by `hwclient.hs`
-- Compile with -threaded
module Main where
import System.ZMQ4.Monadic
import Control.Monad (forever, replicateM_)
import Data.ByteString.Char8 (unpack)
import Control.Concurrent (threadDelay)
import Text.Printf
main :: IO ()
main =
runZMQ $ do
-- Server frontend socket to talk to clients
server <- socket Router
bind server "tcp://*:5555"
-- Socket to talk to workers
workers <- socket Dealer
bind workers "inproc://workers"
-- using inproc (inter-thread) we expect to share the same context
replicateM_ 5 (async worker)
-- Connect work threads to client threads via a queue
proxy server workers Nothing
worker :: ZMQ z ()
worker = do
receiver <- socket Rep
connect receiver "inproc://workers"
forever $ do
receive receiver >>= liftIO . printf "Received request:%s\n" . unpack
-- Simulate doing some 'work' for 1 second
liftIO $ threadDelay (1 * 1000 * 1000)
send receiver [] "World"
mtserver:Haxe中的多线程服务
package ;
import haxe.io.Bytes;
import haxe.Stack;
import neko.Lib;
import neko.Sys;
#if !php
import neko.vm.Thread;
#end
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQPoller;
import org.zeromq.ZMQSocket;
import org.zeromq.ZMQException;
/**
* Multithreaded Hello World Server
*
* See: https://zguide.zeromq.cn/page:all#Multithreading-with-MQ
* Use with HelloWorldClient.hx
*
*/
class MTServer
{
static function worker() {
var context:ZMQContext = ZMQContext.instance();
// Socket to talk to dispatcher
var responder:ZMQSocket = context.socket(ZMQ_REP);
#if (neko || cpp)
responder.connect("inproc://workers");
#elseif php
responder.connect("ipc://workers.ipc");
#end
ZMQ.catchSignals();
while (true) {
try {
// Wait for next request from client
var request:Bytes = responder.recvMsg();
trace ("Received request:" + request.toString());
// Do some work
Sys.sleep(1);
// Send reply back to client
responder.sendMsg(Bytes.ofString("World"));
} catch (e:ZMQException) {
if (ZMQ.isInterrupted()) {
break;
}
trace (e.toString());
}
}
responder.close();
return null;
}
/**
* Implements a reqeust/reply QUEUE broker device
* Returns if poll is interrupted
* @param ctx
* @param frontend
* @param backend
*/
static function queueDevice(ctx:ZMQContext, frontend:ZMQSocket, backend:ZMQSocket) {
// Initialise pollset
var poller:ZMQPoller = ctx.poller();
poller.registerSocket(frontend, ZMQ.ZMQ_POLLIN());
poller.registerSocket(backend, ZMQ.ZMQ_POLLIN());
ZMQ.catchSignals();
while (true) {
try {
poller.poll();
if (poller.pollin(1)) {
var more:Bool = true;
while (more) {
// Receive message
var msg = frontend.recvMsg();
more = frontend.hasReceiveMore();
// Broker it
backend.sendMsg(msg, { if (more) SNDMORE else null; } );
}
}
if (poller.pollin(2)) {
var more:Bool = true;
while (more) {
// Receive message
var msg = backend.recvMsg();
more = backend.hasReceiveMore();
// Broker it
frontend.sendMsg(msg, { if (more) SNDMORE else null; } );
}
}
} catch (e:ZMQException) {
if (ZMQ.isInterrupted()) {
break;
}
// Handle other errors
trace("ZMQException #:" + e.errNo + ", str:" + e.str());
trace (Stack.toString(Stack.exceptionStack()));
}
}
}
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println ("** MTServer (see: https://zguide.zeromq.cn/page:all#Multithreading-with-MQ)");
// Socket to talk to clients
var clients:ZMQSocket = context.socket(ZMQ_ROUTER);
clients.bind ("tcp://*:5556");
// Socket to talk to workers
var workers:ZMQSocket = context.socket(ZMQ_DEALER);
#if (neko || cpp)
workers.bind ("inproc://workers");
// Launch worker thread pool
var workerThreads:List<Thread> = new List<Thread>();
for (thread_nbr in 0 ... 5) {
workerThreads.add(Thread.create(worker));
}
#elseif php
workers.bind ("ipc://workers.ipc");
// Launch pool of worker processes, due to php's lack of thread support
// See: https://github.com/imatix/zguide/blob/master/examples/PHP/mtserver.php
for (thread_nbr in 0 ... 5) {
untyped __php__('
$pid = pcntl_fork();
if ($pid == 0) {
// Running in child process
worker();
exit();
}');
}
#end
// Invoke request / reply broker (aka QUEUE device) to connect clients to workers
queueDevice(context, clients, workers);
// Close up shop
clients.close();
workers.close();
context.term();
}
}
mtserver:Java中的多线程服务
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;
import org.zeromq.ZContext;
/**
* Multi threaded Hello World server
*/
public class mtserver
{
private static class Worker extends Thread
{
private ZContext context;
private Worker(ZContext context)
{
this.context = context;
}
@Override
public void run()
{
ZMQ.Socket socket = context.createSocket(SocketType.REP);
socket.connect("inproc://workers");
while (true) {
// Wait for next request from client (C string)
String request = socket.recvStr(0);
System.out.println(Thread.currentThread().getName() + " Received request: [" + request + "]");
// Do some 'work'
try {
Thread.sleep(1000);
}
catch (InterruptedException e) {
}
// Send reply back to client (C string)
socket.send("world", 0);
}
}
}
public static void main(String[] args)
{
try (ZContext context = new ZContext()) {
Socket clients = context.createSocket(SocketType.ROUTER);
clients.bind("tcp://*:5555");
Socket workers = context.createSocket(SocketType.DEALER);
workers.bind("inproc://workers");
for (int thread_nbr = 0; thread_nbr < 5; thread_nbr++) {
Thread worker = new Worker(context);
worker.start();
}
// Connect work threads to client threads via a queue
ZMQ.proxy(clients, workers, null);
}
}
}
mtserver:Julia中的多线程服务
mtserver:Lua中的多线程服务
--
-- Multithreaded Hello World server
--
-- Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zmq.threads"
require"zhelpers"
local worker_code = [[
local id = ...
local zmq = require"zmq"
require"zhelpers"
local threads = require"zmq.threads"
local context = threads.get_parent_ctx()
-- Socket to talk to dispatcher
local receiver = context:socket(zmq.REP)
assert(receiver:connect("inproc://workers"))
while true do
local msg = receiver:recv()
printf ("Received request: [%s]\n", msg)
-- Do some 'work'
s_sleep (1000)
-- Send reply back to client
receiver:send("World")
end
receiver:close()
return nil
]]
s_version_assert (2, 1)
local context = zmq.init(1)
-- Socket to talk to clients
local clients = context:socket(zmq.ROUTER)
clients:bind("tcp://*:5555")
-- Socket to talk to workers
local workers = context:socket(zmq.DEALER)
workers:bind("inproc://workers")
-- Launch pool of worker threads
local worker_pool = {}
for n=1,5 do
worker_pool[n] = zmq.threads.runstring(context, worker_code, n)
worker_pool[n]:start()
end
-- Connect work threads to client threads via a queue
print("start queue device.")
zmq.device(zmq.QUEUE, clients, workers)
-- We never get here but clean up anyhow
clients:close()
workers:close()
context:term()
mtserver:Node.js中的多线程服务
mtserver:Objective-C中的多线程服务
mtserver:ooc中的多线程服务
mtserver:Perl中的多线程服务
# Multithreaded Hello World server in Perl
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_REP ZMQ_ROUTER ZMQ_DEALER);
use threads;
sub worker_routine {
my ($context) = @_;
# Socket to talk to dispatcher
my $receiver = $context->socket(ZMQ_REP);
$receiver->connect('inproc://workers');
while (1) {
my $string = $receiver->recv();
say "Received request: [$string]";
# Do some 'work'
sleep 1;
# Send reply back to client
$receiver->send('World');
}
}
my $context = ZMQ::FFI->new();
# Socket to talk to clients
my $clients = $context->socket(ZMQ_ROUTER);
$clients->bind('tcp://*:5555');
# Socket to talk to workers
my $workers = $context->socket(ZMQ_DEALER);
$workers->bind('inproc://workers');
# Launch pool of worker threads
for (1..5) {
threads->create('worker_routine', $context);
}
# Connect work threads to client threads via a queue proxy
$context->proxy($clients, $workers);
# We never get here
mtserver:PHP中的多线程服务
<?php
/*
* Multithreaded Hello World server. Uses proceses due
* to PHP's lack of threads!
* @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
*/
function worker_routine()
{
$context = new ZMQContext();
// Socket to talk to dispatcher
$receiver = new ZMQSocket($context, ZMQ::SOCKET_REP);
$receiver->connect("ipc://workers.ipc");
while (true) {
$string = $receiver->recv();
printf ("Received request: [%s]%s", $string, PHP_EOL);
// Do some 'work'
sleep(1);
// Send reply back to client
$receiver->send("World");
}
}
// Launch pool of worker threads
for ($thread_nbr = 0; $thread_nbr != 5; $thread_nbr++) {
$pid = pcntl_fork();
if ($pid == 0) {
worker_routine();
exit();
}
}
// Prepare our context and sockets
$context = new ZMQContext();
// Socket to talk to clients
$clients = new ZMQSocket($context, ZMQ::SOCKET_ROUTER);
$clients->bind("tcp://*:5555");
// Socket to talk to workers
$workers = new ZMQSocket($context, ZMQ::SOCKET_DEALER);
$workers->bind("ipc://workers.ipc");
// Connect work threads to client threads via a queue
$device = new ZMQDevice($clients, $workers);
$device->run ();
mtserver:Python中的多线程服务
"""
Multithreaded Hello World server
Author: Guillaume Aubert (gaubert) <guillaume(dot)aubert(at)gmail(dot)com>
"""
import time
import threading
import zmq
def worker_routine(worker_url: str,
context: zmq.Context = None):
"""Worker routine"""
context = context or zmq.Context.instance()
# Socket to talk to dispatcher
socket = context.socket(zmq.REP)
socket.connect(worker_url)
while True:
string = socket.recv()
print(f"Received request: [ {string} ]")
# Do some 'work'
time.sleep(1)
# Send reply back to client
socket.send(b"World")
def main():
"""Server routine"""
url_worker = "inproc://workers"
url_client = "tcp://*:5555"
# Prepare our context and sockets
context = zmq.Context.instance()
# Socket to talk to clients
clients = context.socket(zmq.ROUTER)
clients.bind(url_client)
# Socket to talk to workers
workers = context.socket(zmq.DEALER)
workers.bind(url_worker)
# Launch pool of worker threads
for i in range(5):
thread = threading.Thread(target=worker_routine, args=(url_worker,))
thread.daemon = True
thread.start()
zmq.proxy(clients, workers)
# We never get here but clean up anyhow
clients.close()
workers.close()
context.term()
if __name__ == "__main__":
main()
mtserver:Q中的多线程服务
// Multithreaded Hello World server
\l qzmq.q
worker_routine:{[args; ctx; pipe]
// Socket to talk to dispatcher
receiver:zsocket.new[ctx; zmq.REP];
zsocket.connect[receiver; `inproc://workers];
while[1b;
s:zstr.recv[receiver];
// Do some 'work'
zclock.sleep 1;
// Send reply back to client
zstr.send[receiver; "World"]];
zsocket.destroy[ctx; receiver]}
ctx:zctx.new[]
// Socket to talk to clients
clients:zsocket.new[ctx; zmq.ROUTER]
clientsport:zsocket.bind[clients; `$"tcp://*:5555"]
// Socket to talk to workers
workers:zsocket.new[ctx; zmq.DEALER]
workersport:zsocket.bind[workers; `inproc://workers]
// Launch pool of worker threads
do[5; zthread.fork[ctx; `worker_routine; 0]]
// Connect work threads to client threads via a queue
rc:libzmq.device[zmq.QUEUE; clients; workers]
if[rc<>-1; '`fail]
// We never get here but clean up anyhow
zsocket.destroy[ctx; clients]
zsocket.destroy[ctx; workers]
zctx.destroy[ctx]
\\
mtserver:Racket中的多线程服务
mtserver:Ruby中的多线程服务
#!/usr/bin/env ruby
#
# Multithreaded Hello World server
#
require 'rubygems'
require 'ffi-rzmq'
def worker_routine(context)
# Socket to talk to dispatcher
receiver = context.socket(ZMQ::REP)
receiver.connect("inproc://workers")
loop do
receiver.recv_string(string = '')
puts "Received request: [#{string}]"
# Do some 'work'
sleep(1)
# Send reply back to client
receiver.send_string("world")
end
end
context = ZMQ::Context.new
puts "Starting Hello World server..."
# socket to listen for clients
clients = context.socket(ZMQ::ROUTER)
clients.bind("tcp://*:5555")
# socket to talk to workers
workers = context.socket(ZMQ::DEALER)
workers.bind("inproc://workers")
# Launch pool of worker threads
5.times do
Thread.new{worker_routine(context)}
end
# Connect work threads to client threads via a queue
ZMQ::Device.new(ZMQ::QUEUE,clients,workers)
mtserver:Rust中的多线程服务
use std::{thread, time};
fn worker_routine(context: &zmq::Context) {
let receiver = context.socket(zmq::REP).unwrap();
assert!(receiver.connect("inproc://workers").is_ok());
loop {
let string = receiver.recv_string(0).unwrap().unwrap();
println!("Received request: {}", string);
thread::sleep(time::Duration::from_secs(1));
receiver.send("World", 0).unwrap();
}
}
fn main() {
let context = zmq::Context::new();
let clients = context.socket(zmq::ROUTER).unwrap();
assert!(clients.bind("tcp://*:5555").is_ok());
let workers = context.socket(zmq::DEALER).unwrap();
assert!(workers.bind("inproc://workers").is_ok());
for _ in 0..5 {
let ncontext = context.clone();
thread::spawn(move || worker_routine(&ncontext));
}
zmq::proxy(&clients, &workers).unwrap();
}
mtserver:Scala中的多线程服务
/*
* Multithreaded Hello World server in Scala
*
* @author Giovanni Ruggiero
* @email giovanni.ruggiero@gmail.com
*
*/
import org.zeromq.ZMQ
import org.zeromq.ZMQQueue
import org.zeromq.ZMQ.{Context,Socket}
object mtserver {
def main(args : Array[String]) {
val context = ZMQ.context(1)
val clients = context.socket(ZMQ.ROUTER)
clients.bind ("tcp://*:5555")
val workers = context.socket(ZMQ.DEALER)
workers.bind ("inproc://workers")
// Launch pool of worker threads
for (thread_nbr <- 1 to 5) {
val worker_routine = new Thread(){
override def run(){
val socket = context.socket(ZMQ.REP)
socket.connect ("inproc://workers")
while (true) {
// Wait for next request from client (C string)
val request = socket.recv (0)
println ("Received request: ["+new String(request,0,request.length-1)+"]")
// Do some 'work'
try {
Thread.sleep (1000)
} catch {
case e: InterruptedException => e.printStackTrace()
}
// Send reply back to client (C string)
val reply = "World ".getBytes
reply(reply.length-1) = 0 //Sets the last byte of the reply to 0
socket.send(reply, 0)
}
}
}
worker_routine.start()
}
// Connect work threads to client threads via a queue
val zMQQueue = new ZMQQueue(context,clients, workers)
zMQQueue run
}
}
现在你应该能认出所有代码了。工作原理如下:
- 服务器启动一组工作线程。每个工作线程创建一个 REP 套接字,然后在此套接字上处理请求。工作线程就像单线程服务器一样,唯一的区别是传输方式(inproc 而不是 tcp)以及 bind-connect 的方向。
- 服务器创建一个 ROUTER 套接字与客户端通信,并将此套接字绑定到其外部接口(通过 tcp)。
- 服务器创建一个 DEALER 套接字与工作者通信,并将此套接字绑定到其内部接口(通过 inproc)。
- 服务器启动一个连接这两个套接字的代理。代理公平地从所有客户端拉取传入请求,并将它们分发给工作者,同时将回复路由回其来源。
请注意,在大多数编程语言中,创建线程是不可移植的。POSIX 库是 pthreads,但在 Windows 上必须使用不同的 API。在我们的示例中,pthread_create 调用会启动一个运行 worker_routine 函数的新线程。我们将在第 3 章 - 高级请求-回复模式中看到如何将其封装在一个可移植的 API 中。
这里的“工作”只是暂停一秒钟。我们可以在工作者中做任何事情,包括与其他节点通信。这就是 MT 服务器在 ØMQ 套接字和节点方面的样子。注意请求-回复链是 REQ-ROUTER-queue-DEALER-REP。
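上面的流程也可以用一个自包含的 Python(pyzmq)草图来演练。这只是一个假设性的示意,不是指南的正式示例:为了在单进程中运行,前端用 inproc 代替 tcp,并去掉了一秒钟的“工作”,但 REQ-ROUTER-代理-DEALER-REP 的链条与 C 版本完全相同:

```python
# REQ-ROUTER-代理-DEALER-REP 链条的自包含演示(假设已安装 pyzmq)
import threading
import zmq

ctx = zmq.Context()
clients = ctx.socket(zmq.ROUTER)    # 面向客户端(真实服务器会绑定 tcp://*:5555)
clients.bind("inproc://clients")
workers = ctx.socket(zmq.DEALER)    # 面向工作线程
workers.bind("inproc://workers")

def worker():
    sock = ctx.socket(zmq.REP)
    sock.connect("inproc://workers")
    try:
        while True:
            sock.recv()             # 等待请求(真正的服务器会在这里“工作”一秒)
            sock.send(b"World")     # 回复沿原路返回客户端
    except zmq.ContextTerminated:
        sock.close()                # 上下文终止:干净地关闭套接字

def proxy():
    try:
        zmq.proxy(clients, workers) # 公平地把请求分发给工作者,并把回复路由回去
    except zmq.ContextTerminated:
        clients.close()
        workers.close()

threads = [threading.Thread(target=proxy)] + \
          [threading.Thread(target=worker) for _ in range(5)]
for t in threads:
    t.start()

req = ctx.socket(zmq.REQ)           # 客户端与单线程版本完全相同
req.connect("inproc://clients")
req.send(b"Hello")
reply = req.recv()
print(reply)                        # b'World'

req.close()
ctx.term()                          # 让所有阻塞调用抛出 ContextTerminated
for t in threads:
    t.join()
```

注意这里的关闭顺序:先关闭主线程自己的套接字,再终止上下文,让代理和工作线程在各自的异常处理里收尾——这正是本章前面讲的“干净地关闭 ZeroMQ 应用程序”。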
线程间信号(PAIR 套接字)#
当你开始使用 ZeroMQ 构建多线程应用程序时,你会遇到如何协调线程的问题。尽管你可能想插入“sleep”语句,或使用信号量、互斥锁等多线程技术,但你应该使用的唯一机制是 ZeroMQ 消息。记住《醉汉和啤酒瓶》的故事。
mtrelay:C中的多线程接力
// Multithreaded relay
#include "zhelpers.h"
#include <pthread.h>
static void *
step1 (void *context) {
// Connect to step2 and tell it we're ready
void *xmitter = zmq_socket (context, ZMQ_PAIR);
zmq_connect (xmitter, "inproc://step2");
printf ("Step 1 ready, signaling step 2\n");
s_send (xmitter, "READY");
zmq_close (xmitter);
return NULL;
}
static void *
step2 (void *context) {
// Bind inproc socket before starting step1
void *receiver = zmq_socket (context, ZMQ_PAIR);
zmq_bind (receiver, "inproc://step2");
pthread_t thread;
pthread_create (&thread, NULL, step1, context);
// Wait for signal and pass it on
char *string = s_recv (receiver);
free (string);
zmq_close (receiver);
// Connect to step3 and tell it we're ready
void *xmitter = zmq_socket (context, ZMQ_PAIR);
zmq_connect (xmitter, "inproc://step3");
printf ("Step 2 ready, signaling step 3\n");
s_send (xmitter, "READY");
zmq_close (xmitter);
return NULL;
}
int main (void)
{
void *context = zmq_ctx_new ();
// Bind inproc socket before starting step2
void *receiver = zmq_socket (context, ZMQ_PAIR);
zmq_bind (receiver, "inproc://step3");
pthread_t thread;
pthread_create (&thread, NULL, step2, context);
// Wait for signal
char *string = s_recv (receiver);
free (string);
zmq_close (receiver);
printf ("Test successful!\n");
zmq_ctx_destroy (context);
return 0;
}
mtrelay:C++中的多线程接力
/*
author: Saad Hussain <saadnasir31@gmail.com>
*/
#include <iostream>
#include <thread>
#include <zmq.hpp>
void step1(zmq::context_t &context) {
// Connect to step2 and tell it we're ready
zmq::socket_t xmitter(context, zmq::socket_type::pair);
xmitter.connect("inproc://step2");
std::cout << "Step 1 ready, signaling step 2" << std::endl;
zmq::message_t msg("READY");
xmitter.send(msg, zmq::send_flags::none);
}
void step2(zmq::context_t &context) {
// Bind inproc socket before starting step1
zmq::socket_t receiver(context, zmq::socket_type::pair);
receiver.bind("inproc://step2");
std::thread thd(step1, std::ref(context));
// Wait for signal and pass it on
zmq::message_t msg;
receiver.recv(msg, zmq::recv_flags::none);
// Connect to step3 and tell it we're ready
zmq::socket_t xmitter(context, zmq::socket_type::pair);
xmitter.connect("inproc://step3");
std::cout << "Step 2 ready, signaling step 3" << std::endl;
xmitter.send(zmq::str_buffer("READY"), zmq::send_flags::none);
thd.join();
}
int main() {
zmq::context_t context(1);
// Bind inproc socket before starting step2
zmq::socket_t receiver(context, zmq::socket_type::pair);
receiver.bind("inproc://step3");
std::thread thd(step2, std::ref(context));
// Wait for signal
zmq::message_t msg;
receiver.recv(msg, zmq::recv_flags::none);
std::cout << "Test successful!" << std::endl;
thd.join();
return 0;
}
mtrelay:CL中的多线程接力
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Multithreaded relay in Common Lisp
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.mtrelay
(:nicknames #:mtrelay)
(:use #:cl #:zhelpers)
(:export #:main))
(in-package :zguide.mtrelay)
(defun step1 (context)
;; Signal downstream to step 2
(zmq:with-socket (sender context zmq:pair)
(zmq:connect sender "inproc://step2")
(let ((msg (make-instance 'zmq:msg :data "")))
(zmq:send sender msg))))
(defun step2 (context)
;; Bind to inproc: endpoint, then start upstream thread
(zmq:with-socket (receiver context zmq:pair)
(zmq:bind receiver "inproc://step2")
(bt:make-thread (lambda () (step1 context)))
;; Wait for signal
(let ((msg (make-instance 'zmq:msg)))
(zmq:recv receiver msg))
;; Signal downstream to step 3
(zmq:with-socket (sender context zmq:pair)
(zmq:connect sender "inproc://step3")
(let ((msg (make-instance 'zmq:msg :data "")))
(zmq:send sender msg)))))
(defun main ()
(zmq:with-context (context 1)
;; Bind to inproc: endpoint, then start upstream thread
(zmq:with-socket (receiver context zmq:pair)
(zmq:bind receiver "inproc://step3")
(bt:make-thread (lambda () (step2 context)))
;; Wait for signal
(let ((msg (make-instance 'zmq:msg)))
(zmq:recv receiver msg)))
(message "Test successful!~%"))
(cleanup))
mtrelay:Delphi中的多线程接力
program mtrelay;
//
// Multithreaded relay
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
procedure step1( lcontext: TZMQContext );
var
xmitter: TZMQSocket;
begin
// Connect to step2 and tell it we're ready
xmitter := lContext.Socket( stPair );
xmitter.connect( 'inproc://step2' );
Writeln( 'Step 1 ready, signaling step 2' );
xmitter.send( 'READY' );
xmitter.Free;
end;
procedure step2( lcontext: TZMQContext );
var
receiver,
xmitter: TZMQSocket;
s: Utf8String;
tid: Cardinal;
begin
// Bind inproc socket before starting step1
receiver := lContext.Socket( stPair );
receiver.bind( 'inproc://step2' );
BeginThread( nil, 0, @step1, lcontext, 0, tid );
// Wait for signal and pass it on
receiver.recv( s );
receiver.Free;
// Connect to step3 and tell it we're ready
xmitter := lContext.Socket( stPair );
xmitter.connect( 'inproc://step3' );
Writeln( 'Step 2 ready, signaling step 3' );
xmitter.send( 'READY' );
xmitter.Free;
end;
var
context: TZMQContext;
receiver: TZMQSocket;
tid: Cardinal;
s: Utf8String;
begin
context := TZMQContext.Create;
// Bind inproc socket before starting step2
receiver := Context.Socket( stPair );
receiver.bind( 'inproc://step3' );
BeginThread( nil, 0, @step2, context, 0, tid );
// Wait for signal
receiver.recv ( s );
receiver.Free;
Writeln( 'Test successful!' );
context.Free;
end.
mtrelay:Erlang中的多线程接力
#!/usr/bin/env escript
%%
%% Multithreaded relay
%%
%% This example illustrates how inproc sockets can be used to communicate
%% across "threads". Erlang of course supports this natively, but it's fun to
%% see how 0MQ lets you do this across other languages!
%%
step1(Context) ->
%% Connect to step2 and tell it we're ready
{ok, Xmitter} = erlzmq:socket(Context, pair),
ok = erlzmq:connect(Xmitter, "inproc://step2"),
io:format("Step 1 ready, signaling step 2~n"),
ok = erlzmq:send(Xmitter, <<"READY">>),
ok = erlzmq:close(Xmitter).
step2(Context) ->
%% Bind inproc socket before starting step1
{ok, Receiver} = erlzmq:socket(Context, pair),
ok = erlzmq:bind(Receiver, "inproc://step2"),
spawn(fun() -> step1(Context) end),
%% Wait for signal and pass it on
{ok, _} = erlzmq:recv(Receiver),
ok = erlzmq:close(Receiver),
%% Connect to step3 and tell it we're ready
{ok, Xmitter} = erlzmq:socket(Context, pair),
ok = erlzmq:connect(Xmitter, "inproc://step3"),
io:format("Step 2 ready, signaling step 3~n"),
ok = erlzmq:send(Xmitter, <<"READY">>),
ok = erlzmq:close(Xmitter).
main(_) ->
{ok, Context} = erlzmq:context(),
%% Bind inproc socket before starting step2
{ok, Receiver} = erlzmq:socket(Context, pair),
ok = erlzmq:bind(Receiver, "inproc://step3"),
spawn(fun() -> step2(Context) end),
%% Wait for signal
{ok, _} = erlzmq:recv(Receiver),
erlzmq:close(Receiver),
io:format("Test successful~n"),
ok = erlzmq:term(Context).
mtrelay:Elixir中的多线程接力
defmodule Mtrelay do
@moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:28
"""
def step1(context) do
{:ok, xmitter} = :erlzmq.socket(context, :pair)
:ok = :erlzmq.connect(xmitter, 'inproc://step2')
:io.format('Step 1 ready, signaling step 2~n')
:ok = :erlzmq.send(xmitter, "READY")
:ok = :erlzmq.close(xmitter)
end
def step2(context) do
{:ok, receiver} = :erlzmq.socket(context, :pair)
:ok = :erlzmq.bind(receiver, 'inproc://step2')
:erlang.spawn(fn -> step1(context) end)
{:ok, _} = :erlzmq.recv(receiver)
:ok = :erlzmq.close(receiver)
{:ok, xmitter} = :erlzmq.socket(context, :pair)
:ok = :erlzmq.connect(xmitter, 'inproc://step3')
:io.format('Step 2 ready, signaling step 3~n')
:ok = :erlzmq.send(xmitter, "READY")
:ok = :erlzmq.close(xmitter)
end
def main() do
{:ok, context} = :erlzmq.context()
{:ok, receiver} = :erlzmq.socket(context, :pair)
:ok = :erlzmq.bind(receiver, 'inproc://step3')
:erlang.spawn(fn -> step2(context) end)
{:ok, _} = :erlzmq.recv(receiver)
:erlzmq.close(receiver)
:io.format('Test successful~n')
:ok = :erlzmq.term(context)
end
end
Mtrelay.main
mtrelay:Go中的多线程接力
// Multithreaded relay.
// Uses Goroutines. We could also use channels (a native form of
// inproc), but I stuck to the example.
//
// Author: Brendan Mc.
// Requires: http://github.com/alecthomas/gozmq
package main
import (
"fmt"
zmq "github.com/alecthomas/gozmq"
)
func main() {
// Prepare our context and sockets
context, _ := zmq.NewContext()
defer context.Close()
// Bind inproc socket before starting step2
receiver, _ := context.NewSocket(zmq.PAIR)
defer receiver.Close()
receiver.Bind("ipc://step3.ipc")
go step2()
// Wait for signal
receiver.Recv(0)
fmt.Println("Test successful!")
}
func step1() {
// Connect to step2 and tell it we're ready
context, _ := zmq.NewContext()
defer context.Close()
xmitter, _ := context.NewSocket(zmq.PAIR)
defer xmitter.Close()
xmitter.Connect("ipc://step2.ipc")
fmt.Println("Step 1 ready, signaling step 2")
xmitter.Send([]byte("READY"), 0)
}
func step2() {
context, _ := zmq.NewContext()
defer context.Close()
// Bind inproc before starting step 1
receiver, _ := context.NewSocket(zmq.PAIR)
defer receiver.Close()
receiver.Bind("ipc://step2.ipc")
go step1()
// wait for signal and pass it on
receiver.Recv(0)
// Connect to step3 and tell it we're ready
xmitter, _ := context.NewSocket(zmq.PAIR)
defer xmitter.Close()
xmitter.Connect("ipc://step3.ipc")
fmt.Println("Step 2 ready, signaling step 3")
xmitter.Send([]byte("READY"), 0)
}
mtrelay:Haskell中的多线程接力
{-# LANGUAGE OverloadedStrings #-}
-- Multithreaded relay
module Main where
import System.ZMQ4.Monadic
main :: IO ()
main = runZMQ step3
step3 :: ZMQ z ()
step3 = do
-- Bind inproc socket before starting step2
receiver <- socket Pair
bind receiver "inproc://step3"
async step2
-- Wait for signal
receive receiver
liftIO $ putStrLn "Test successful!"
step2 :: ZMQ z ()
step2 = do
-- Bind inproc socket before starting step1
receiver <- socket Pair
bind receiver "inproc://step2"
async step1
-- Wait for signal and pass it on
receive receiver
-- Connect to step 3 and tell it we're ready
xmitter <- socket Pair
connect xmitter "inproc://step3"
liftIO $ putStrLn "Step 2 ready, signalling step3"
send xmitter [] "READY"
step1 :: ZMQ z ()
step1 = do
-- Connect to step2 and tell it we're ready
xmitter <- socket Pair
connect xmitter "inproc://step2"
liftIO $ putStrLn "Step 1 ready, signalling step 2"
send xmitter [] "READY"
mtrelay:Haxe中的多线程接力
package ;
import haxe.io.Bytes;
#if !php
import neko.vm.Thread;
#end
import neko.Lib;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQSocket;
/**
* Multi-threaded relay in haXe
*
*/
class MTRelay
{
static function step1() {
var context:ZMQContext = ZMQContext.instance();
// Connect to step2 and tell it we are ready
var xmitter:ZMQSocket = context.socket(ZMQ_PAIR);
#if (neko || cpp)
xmitter.connect("inproc://step2");
#elseif php
xmitter.connect("ipc://step2.ipc");
#end
xmitter.sendMsg(Bytes.ofString("READY"));
xmitter.close();
}
static function step2() {
var context:ZMQContext = ZMQContext.instance();
// Bind inproc socket before starting step 1
var receiver:ZMQSocket = context.socket(ZMQ_PAIR);
#if (neko || cpp)
receiver.bind("inproc://step2");
Thread.create(step1);
#elseif php
receiver.bind("ipc://step2.ipc");
untyped __php__('
$pid = pcntl_fork();
if($pid == 0) {
step1();
exit();
}');
#end
// Wait for signal and pass it on
var msgBytes = receiver.recvMsg();
receiver.close();
// Connect to step3 and tell it we are ready
var xmitter:ZMQSocket = context.socket(ZMQ_PAIR);
#if (neko || cpp)
xmitter.connect("inproc://step3");
#elseif php
xmitter.connect("ipc://step3.ipc");
#end
xmitter.sendMsg(Bytes.ofString("READY"));
xmitter.close();
}
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println ("** MTRelay (see: https://zguide.zeromq.cn/page:all#Signaling-between-Threads)");
// This main thread represents Step 3
// Bind to inproc: endpoint then start upstream thread
var receiver:ZMQSocket = context.socket(ZMQ_PAIR);
#if (neko || cpp)
receiver.bind("inproc://step3");
// Step2 relays the signal to step 3
Thread.create(step2);
#elseif php
// Use child processes instead of Threads
receiver.bind("ipc://step3.ipc");
// Step2 relays the signal to step 3
untyped __php__('
$pid = pcntl_fork();
if ($pid == 0) {
step2();
exit();
}');
#end
// Wait for signal
var msgBytes = receiver.recvMsg();
receiver.close();
trace ("Test successful!");
context.term();
}
}
mtrelay:Java中的多线程接力
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZContext;
import org.zeromq.ZMQ.Socket;
/**
* Multithreaded relay
*/
public class mtrelay
{
private static class Step1 extends Thread
{
private ZContext context;
private Step1(ZContext context)
{
this.context = context;
}
@Override
public void run()
{
// Signal downstream to step 2
Socket xmitter = context.createSocket(SocketType.PAIR);
xmitter.connect("inproc://step2");
System.out.println("Step 1 ready, signaling step 2");
xmitter.send("READY", 0);
xmitter.close();
}
}
private static class Step2 extends Thread
{
private ZContext context;
private Step2(ZContext context)
{
this.context = context;
}
@Override
public void run()
{
// Bind to inproc: endpoint, then start upstream thread
Socket receiver = context.createSocket(SocketType.PAIR);
receiver.bind("inproc://step2");
// Wait for signal
receiver.recv(0);
receiver.close();
// Connect to step3 and tell it we're ready
Socket xmitter = context.createSocket(SocketType.PAIR);
xmitter.connect("inproc://step3");
System.out.println("Step 2 ready, signaling step 3");
xmitter.send("READY", 0);
xmitter.close();
}
}
private static class Step3 extends Thread
{
private ZContext context;
private Step3(ZContext context)
{
this.context = context;
}
@Override
public void run()
{
// Bind to inproc: endpoint, then start upstream thread
Socket receiver = context.createSocket(SocketType.PAIR);
receiver.bind("inproc://step3");
// Wait for signal
receiver.recv(0);
receiver.close();
System.out.println("Step 3 ready");
}
}
public static void main(String[] args) throws InterruptedException
{
try (ZContext context = new ZContext()) {
// Step 1 signals to step 2
Thread step1 = new Step1(context);
step1.start();
// Step 2 relays the signal from step 1 to step 3
Thread step2 = new Step2(context);
step2.start();
// Step 3 waits for signal from step 2
Thread step3 = new Step3(context);
step3.start();
step1.join();
step2.join();
step3.join();
System.out.println("Test successful!");
}
}
}
mtrelay:Lua中的多线程接力
--
-- Multithreaded relay
--
-- Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zhelpers"
require"zmq.threads"
local pre_code = [[
local zmq = require"zmq"
require"zhelpers"
local threads = require"zmq.threads"
local context = threads.get_parent_ctx()
]]
local step1 = pre_code .. [[
-- Connect to step2 and tell it we're ready
local xmitter = context:socket(zmq.PAIR)
xmitter:connect("inproc://step2")
xmitter:send("READY")
xmitter:close()
]]
local step2 = pre_code .. [[
local step1 = ...
-- Bind inproc socket before starting step1
local receiver = context:socket(zmq.PAIR)
receiver:bind("inproc://step2")
local thread = zmq.threads.runstring(context, step1)
thread:start()
-- Wait for signal and pass it on
local msg = receiver:recv()
receiver:close()
-- Connect to step3 and tell it we're ready
local xmitter = context:socket(zmq.PAIR)
xmitter:connect("inproc://step3")
xmitter:send("READY")
xmitter:close()
assert(thread:join())
]]
s_version_assert (2, 1)
local context = zmq.init(1)
-- Bind inproc socket before starting step2
local receiver = context:socket(zmq.PAIR)
receiver:bind("inproc://step3")
local thread = zmq.threads.runstring(context, step2, step1)
thread:start()
-- Wait for signal
local msg = receiver:recv()
receiver:close()
printf ("Test successful!\n")
assert(thread:join())
context:term()
mtrelay:Perl中的多线程接力
# Multithreaded relay in Perl
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_PAIR);
use threads;
sub step1 {
my ($context) = @_;
# Connect to step2 and tell it we're ready
my $xmitter = $context->socket(ZMQ_PAIR);
$xmitter->connect('inproc://step2');
say "Step 1 ready, signaling step 2";
$xmitter->send("READY");
}
sub step2 {
my ($context) = @_;
# Bind inproc socket before starting step1
my $receiver = $context->socket(ZMQ_PAIR);
$receiver->bind('inproc://step2');
threads->create('step1', $context)
->detach();
# Wait for signal and pass it on
my $string = $receiver->recv();
# Connect to step3 and tell it we're ready
my $xmitter = $context->socket(ZMQ_PAIR);
$xmitter->connect('inproc://step3');
say "Step 2 ready, signaling step 3";
$xmitter->send("READY");
}
my $context = ZMQ::FFI->new();
# Bind inproc socket before starting step2
my $receiver = $context->socket(ZMQ_PAIR);
$receiver->bind('inproc://step3');
threads->create('step2', $context)
->detach();
# Wait for signal
$receiver->recv();
say "Test successful!";
mtrelay:PHP中的多线程接力
<?php
/*
* Multithreaded relay. Actually using processes due a lack
* of PHP threads.
* @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
*/
function step1()
{
$context = new ZMQContext();
// Signal downstream to step 2
$sender = new ZMQSocket($context, ZMQ::SOCKET_PAIR);
$sender->connect("ipc://step2.ipc");
$sender->send("");
}
function step2()
{
$pid = pcntl_fork();
if ($pid == 0) {
step1();
exit();
}
$context = new ZMQContext();
// Bind to ipc: endpoint, then start upstream thread
$receiver = new ZMQSocket($context, ZMQ::SOCKET_PAIR);
$receiver->bind("ipc://step2.ipc");
// Wait for signal
$receiver->recv();
// Signal downstream to step 3
$sender = new ZMQSocket($context, ZMQ::SOCKET_PAIR);
$sender->connect("ipc://step3.ipc");
$sender->send("");
}
// Start upstream thread then bind to icp: endpoint
$pid = pcntl_fork();
if ($pid == 0) {
step2();
exit();
}
$context = new ZMQContext();
$receiver = new ZMQSocket($context, ZMQ::SOCKET_PAIR);
$receiver->bind("ipc://step3.ipc");
// Wait for signal
$receiver->recv();
echo "Test successful!", PHP_EOL;
mtrelay:Python中的多线程接力
"""
Multithreaded relay
Author: Guillaume Aubert (gaubert) <guillaume(dot)aubert(at)gmail(dot)com>
"""
import threading
import zmq
def step1(context: zmq.Context = None):
"""Step 1"""
context = context or zmq.Context.instance()
# Signal downstream to step 2
sender = context.socket(zmq.PAIR)
sender.connect("inproc://step2")
sender.send(b"")
def step2(context: zmq.Context = None):
"""Step 2"""
context = context or zmq.Context.instance()
# Bind to inproc: endpoint, then start upstream thread
receiver = context.socket(zmq.PAIR)
receiver.bind("inproc://step2")
thread = threading.Thread(target=step1)
thread.start()
# Wait for signal
msg = receiver.recv()
# Signal downstream to step 3
sender = context.socket(zmq.PAIR)
sender.connect("inproc://step3")
sender.send(b"")
def main():
""" server routine """
# Prepare our context and sockets
context = zmq.Context.instance()
# Bind to inproc: endpoint, then start upstream thread
receiver = context.socket(zmq.PAIR)
receiver.bind("inproc://step3")
thread = threading.Thread(target=step2)
thread.start()
# Wait for signal
string = receiver.recv()
print("Test successful!")
receiver.close()
context.term()
if __name__ == "__main__":
main()
mtrelay:Q中的多线程接力
// Multithreaded relay
\l qzmq.q
step1:{[args; ctx; pipe]
// Connect to step2 and tell it we're ready
xmitter:zsocket.new[ctx; zmq.PAIR];
zsocket.connect[xmitter; `inproc://step2];
zclock.log "Step 1 ready, signaling step 2";
zstr.send[xmitter; "READY"];
zsocket.destroy[ctx; xmitter]}
step2:{[args; ctx; pipe]
// Bind inproc socket before starting step1
receiver:zsocket.new[ctx; zmq.PAIR];
port:zsocket.bind[receiver; `inproc://step2];
pipe:zthread.fork[ctx; `step1; 0N];
// Wait for signal and pass it on
zclock.log s:zstr.recv[receiver];
// Connect to step3 and tell it we're ready
xmitter:zsocket.new[ctx; zmq.PAIR];
zsocket.connect[xmitter; `inproc://step3];
zclock.log "Step 2 ready, signaling step 3";
zstr.send[xmitter; "READY"];
zsocket.destroy[ctx; xmitter]}
ctx:zctx.new[]
// Bind inproc socket before starting step2
receiver:zsocket.new[ctx; zmq.PAIR]
port:zsocket.bind[receiver; `inproc://step3]
pipe:zthread.fork[ctx; `step2; 0N]
// Wait for signal
zclock.log s:zstr.recv[receiver]
zclock.log "Test successful!"
zctx.destroy[ctx]
\\
mtrelay:Ruby中的多线程接力
#!/usr/bin/env ruby
#
# Multithreaded relay
#
require 'rubygems'
require 'ffi-rzmq'
def step1(context)
# Connect to step2 and tell it we're ready
xmitter = context.socket(ZMQ::PAIR)
xmitter.connect("inproc://step2")
xmitter.send_string("READY")
end
def step2(context)
# Bind inproc socket before starting step1
receiver = context.socket(ZMQ::PAIR)
receiver.bind("inproc://step2")
Thread.new{step1(context)}
# Wait for signal and pass it on
receiver.recv_string('')
# Connect to step3 and tell it we're ready
xmitter = context.socket(ZMQ::PAIR)
xmitter.connect("inproc://step3")
xmitter.send_string("READY")
end
context = ZMQ::Context.new
# Bind inproc socket before starting step2
receiver = context.socket(ZMQ::PAIR)
receiver.bind("inproc://step3")
Thread.new{step2(context)}
# Wait for signal
receiver.recv_string('')
puts "Test successful!"
mtrelay:Rust中的多线程接力
use std::thread;
fn step1(context: &zmq::Context) {
let xmitter = context.socket(zmq::PAIR).unwrap();
assert!(xmitter.connect("inproc://step2").is_ok());
println!("Step 1 ready, signaling step 2");
xmitter.send("READY", 0).unwrap();
}
fn step2(context: &zmq::Context) {
let receiver = context.socket(zmq::PAIR).unwrap();
assert!(receiver.bind("inproc://step2").is_ok());
let ncontext = context.clone();
thread::spawn(move || step1(&ncontext));
let _ = receiver.recv_string(0).unwrap().unwrap();
let xmitter = context.socket(zmq::PAIR).unwrap();
assert!(xmitter.connect("inproc://step3").is_ok());
println!("Step 2 ready, signaling step 3");
xmitter.send("READY", 0).unwrap();
}
fn main() {
let context = zmq::Context::new();
let receiver = context.socket(zmq::PAIR).unwrap();
assert!(receiver.bind("inproc://step3").is_ok());
let ncontext = context.clone();
thread::spawn(move || step2(&ncontext));
let _ = receiver.recv_string(0).unwrap().unwrap();
println!("Test successful!");
}
mtrelay:Scala中的多线程接力
/*
* Multithreaded relay in Scala
*
* @author Vadim Shalts
* @email vshalts@gmail.com
*/
import org.zeromq.ZMQ
object mtrelay {
def main(args: Array[String]) {
val context = ZMQ.context(1)
// Bind to inproc: endpoint, then start upstream thread
var receiver = context.socket(ZMQ.PAIR)
receiver.bind("inproc://step3")
// Step 2 relays the signal to step 3
var step2 = new Thread() {
override def run = {
var receiver = context.socket(ZMQ.PULL)
receiver.bind("inproc://step2")
var step1 = new Thread {
override def run = {
var sender = context.socket(ZMQ.PUSH)
sender.connect("inproc://step2")
println("Step 1 ready, signaling step 2")
sender.send("READY".getBytes, 0)
sender.close()
}
}
step1.start()
var message = receiver.recv(0)
var sender = context.socket(ZMQ.PAIR)
sender.connect("inproc://step3")
println ("Step 2 ready, signaling step 3");
sender.send(message, 0)
sender.close()
}
}
step2.start()
// Wait for signal
var message = receiver.recv(0)
System.out.println("Test successful!")
receiver.close()
}
}
图 21 - 接力赛
这是使用 ZeroMQ 进行多线程编程的经典模式:
- 两个线程通过 inproc 进行通信,使用共享上下文。
- 父线程创建一个套接字,将其绑定到一个 inproc:// 端点,*然后*启动子线程,并将上下文传递给它。
- 子线程创建第二个套接字,连接到该 inproc:// 端点,*然后*向父线程发送信号表示已准备就绪。
请注意,使用此模式的多线程代码无法扩展到进程。如果你使用 inproc 和套接字对,你构建的是一个紧密绑定的应用程序,也就是说,你的线程在结构上相互依赖。只有当低延迟确实至关重要时才这样做。另一种设计模式是松散绑定的应用程序:线程拥有自己的上下文,并通过 ipc 或 tcp 进行通信。松散绑定的线程可以很容易地拆分成独立的进程。
这是我们第一次展示使用 PAIR 套接字的示例。为什么使用 PAIR?其他套接字组合似乎也能工作,但它们都有可能干扰信号传递的副作用:
- 你可以使用 PUSH 作为发送方,PULL 作为接收方。这看起来简单可行,但请记住,PUSH 会把消息分发给所有可用的接收方。如果你不小心启动了两个接收方(例如,已经有一个在运行,又启动了第二个),你就会“丢失”一半的信号。PAIR 的优点在于拒绝多个连接;这对连接是独占的。
- 你可以使用 DEALER 作为发送方,ROUTER 作为接收方。然而,ROUTER 会把你的消息包装在一个“信封”里,这意味着零大小的信号会变成多部分消息。如果你不在乎数据、把任何内容都视为有效信号,并且不会从套接字多次读取,那没有问题。但如果你决定发送真实数据,你会突然发现 ROUTER 给了你“错误”的消息。DEALER 也会分发传出消息,带来与 PUSH 相同的风险。
- 你可以使用 PUB 作为发送方,SUB 作为接收方。这会按你发送的样子正确传递消息,而且 PUB 不像 PUSH 或 DEALER 那样分发消息。然而,你需要为订阅者配置一个空订阅,这很麻烦。
基于这些原因,PAIR 是线程对之间进行协调的最佳选择。
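PAIR 的两个关键特点——连接独占、通信双向——可以用下面这个假设性的 Python(pyzmq)草图来体会:同一对套接字既能收信号,也能回确认,这是 PUSH/PULL 做不到的(端点名仅为演示假设):

```python
# PAIR 套接字线程间信号的最小草图(假设已安装 pyzmq)
import threading
import zmq

ctx = zmq.Context()
parent = ctx.socket(zmq.PAIR)
parent.bind("inproc://signal")      # inproc 要求先 bind,再启动子线程

def child():
    s = ctx.socket(zmq.PAIR)
    s.connect("inproc://signal")
    s.send(b"READY")                # 向父线程发信号
    s.recv()                        # PAIR 是双向的:同一套接字还能收到回应
    s.close()

t = threading.Thread(target=child)
t.start()

ready = parent.recv()               # 等待子线程就绪
parent.send(b"GO")                  # 用同一条独占连接回发信号
t.join()
parent.close()
ctx.term()
print(ready)                        # b'READY'
```

如果换成 PUSH/PULL,回发 GO 就需要第二对套接字;换成 PUB/SUB,还得先设置空订阅并担心订阅尚未生效时丢信号。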
节点协调#
当你想要协调网络上的一组节点时,PAIR套接字就不再适用。这是线程和节点策略为数不多的不同领域之一。主要区别在于,节点通常是动态出现的,而线程通常是静态的。如果远程节点断开又重新连接,PAIR套接字不会自动重连。
线程和节点的第二个显著区别在于,线程数量通常是固定的,而节点数量则更具可变性。让我们以之前的一个场景(天气服务器和客户端)为例,使用节点协调来确保订阅者启动时不会丢失数据。
发布者提前知道它期望的订阅者数量。这只是一个从某个地方获取的神奇数字。
// Synchronized publisher
#include "zhelpers.h"
#define SUBSCRIBERS_EXPECTED 10 // We wait for 10 subscribers
int main (void)
{
void *context = zmq_ctx_new ();
// Socket to talk to clients
void *publisher = zmq_socket (context, ZMQ_PUB);
int sndhwm = 1100000;
zmq_setsockopt (publisher, ZMQ_SNDHWM, &sndhwm, sizeof (int));
zmq_bind (publisher, "tcp://*:5561");
// Socket to receive signals
void *syncservice = zmq_socket (context, ZMQ_REP);
zmq_bind (syncservice, "tcp://*:5562");
// Get synchronization from subscribers
printf ("Waiting for subscribers\n");
int subscribers = 0;
while (subscribers < SUBSCRIBERS_EXPECTED) {
// - wait for synchronization request
char *string = s_recv (syncservice);
free (string);
// - send synchronization reply
s_send (syncservice, "");
subscribers++;
}
// Now broadcast exactly 1M updates followed by END
printf ("Broadcasting messages\n");
int update_nbr;
for (update_nbr = 0; update_nbr < 1000000; update_nbr++)
s_send (publisher, "Rhubarb");
s_send (publisher, "END");
zmq_close (publisher);
zmq_close (syncservice);
zmq_ctx_destroy (context);
return 0;
}
发布者启动并等待所有订阅者连接。这是节点协调部分。每个订阅者先进行订阅,然后通过另一个套接字告知发布者其已准备就绪。
//
// Synchronized publisher in C++
//
#include "zhelpers.hpp"
// We wait for 10 subscribers
#define SUBSCRIBERS_EXPECTED 10
int main () {
zmq::context_t context(1);
// Socket to talk to clients
zmq::socket_t publisher (context, ZMQ_PUB);
int sndhwm = 0;
publisher.setsockopt (ZMQ_SNDHWM, &sndhwm, sizeof (sndhwm));
publisher.bind("tcp://*:5561");
// Socket to receive signals
zmq::socket_t syncservice (context, ZMQ_REP);
syncservice.bind("tcp://*:5562");
// Get synchronization from subscribers
int subscribers = 0;
while (subscribers < SUBSCRIBERS_EXPECTED) {
// - wait for synchronization request
s_recv (syncservice);
// - send synchronization reply
s_send (syncservice, std::string(""));
subscribers++;
}
// Now broadcast exactly 1M updates followed by END
int update_nbr;
for (update_nbr = 0; update_nbr < 1000000; update_nbr++) {
s_send (publisher, std::string("Rhubarb"));
}
s_send (publisher, std::string("END"));
sleep (1); // Give 0MQ time to flush output
return 0;
}
当发布者连接了所有订阅者后,便开始发布数据。
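上面描述的协调流程——订阅者先订阅、再通过 REQ 套接字向发布者“报到”、发布者收齐报到后才广播——可以用一个自包含的 Python(pyzmq)草图在单进程内演练。这只是一个假设性的示意:端点名、订阅者数量和更新数都是为演示而设,并用 inproc 代替了 tcp:

```python
# 同步发布者 + 同步订阅者的自包含草图(假设已安装 pyzmq)
import threading
import time
import zmq

EXPECTED = 1        # 发布者等待的订阅者数量(那个“神奇数字”)
UPDATES = 100       # 演示用;正式示例中是 100 万

ctx = zmq.Context()
pub = ctx.socket(zmq.PUB)
pub.bind("inproc://data")
sync_rep = ctx.socket(zmq.REP)
sync_rep.bind("inproc://sync")

def publisher():
    for _ in range(EXPECTED):   # 先收齐所有订阅者的“报到”
        sync_rep.recv()
        sync_rep.send(b"")
    for _ in range(UPDATES):    # 然后才开始广播
        pub.send(b"Rhubarb")
    pub.send(b"END")
    sync_rep.close()
    pub.close()

t = threading.Thread(target=publisher)
t.start()

# 订阅者第一步:连接并订阅
sub = ctx.socket(zmq.SUB)
sub.connect("inproc://data")
sub.setsockopt(zmq.SUBSCRIBE, b"")
time.sleep(0.5)                 # 给订阅一点传播时间——0MQ 实在太快了

# 第二步:通过 REQ 套接字向发布者报到
req = ctx.socket(zmq.REQ)
req.connect("inproc://sync")
req.send(b"")
req.recv()

# 第三步:接收更新,直到收到 END
count = 0
while sub.recv() != b"END":
    count += 1
t.join()
sub.close()
req.close()
ctx.term()
print(count)                    # 100
```

由于广播只在同步握手之后开始,订阅者不会丢失任何一条更新——这正是本节要解决的“慢连接者”问题。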
syncpub:CL中的同步发布者
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Synchronized publisher in Common Lisp
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.syncpub
(:nicknames #:syncpub)
(:use #:cl #:zhelpers)
(:export #:main))
(in-package :zguide.syncpub)
;; We wait for 10 subscribers
(defparameter *expected-subscribers* 10)
(defun main ()
(zmq:with-context (context 1)
;; Socket to talk to clients
(zmq:with-socket (publisher context zmq:pub)
(zmq:bind publisher "tcp://*:5561")
;; Socket to receive signals
(zmq:with-socket (syncservice context zmq:rep)
(zmq:bind syncservice "tcp://*:5562")
;; Get synchronization from subscribers
(loop :repeat *expected-subscribers* :do
;; - wait for synchronization request
(let ((msg (make-instance 'zmq:msg)))
(zmq:recv syncservice msg))
;; - send synchronization reply
(let ((msg (make-instance 'zmq:msg :data "")))
(zmq:send syncservice msg)))
;; Now broadcast exactly 1M updates followed by END
(loop :repeat 1000000 :do
(let ((msg (make-instance 'zmq:msg :data "Rhubarb")))
(zmq:send publisher msg)))
(let ((msg (make-instance 'zmq:msg :data "END")))
(zmq:send publisher msg))))
;; Give 0MQ/2.0.x time to flush output
(sleep 1))
(cleanup))
syncpub:Delphi中的同步发布者
program syncpub;
//
// Synchronized publisher
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqApi
;
// We wait for 10 subscribers
const
SUBSCRIBERS_EXPECTED = 2;
var
context: TZMQContext;
publisher,
syncservice: TZMQSocket;
subscribers: Integer;
str: Utf8String;
i: Integer;
begin
context := TZMQContext.create;
// Socket to talk to clients
publisher := Context.Socket( stPub );
publisher.setSndHWM( 1000001 );
publisher.bind( 'tcp://*:5561' );
// Socket to receive signals
syncservice := Context.Socket( stRep );
syncservice.bind( 'tcp://*:5562' );
// Get synchronization from subscribers
Writeln( 'Waiting for subscribers' );
subscribers := 0;
while ( subscribers < SUBSCRIBERS_EXPECTED ) do
begin
// - wait for synchronization request
syncservice.recv( str );
// - send synchronization reply
syncservice.send( '' );
Inc( subscribers );
end;
// Now broadcast exactly 1M updates followed by END
Writeln( 'Broadcasting messages' );
for i := 0 to 1000000 - 1 do
publisher.send( 'Rhubarb' );
publisher.send( 'END' );
publisher.Free;
syncservice.Free;
context.Free;
end.
syncpub:Erlang中的同步发布者
#! /usr/bin/env escript
%%
%% Synchronized publisher
%%
%% We wait for 10 subscribers
-define(SUBSCRIBERS_EXPECTED, 10).
main(_) ->
{ok, Context} = erlzmq:context(),
%% Socket to talk to clients
{ok, Publisher} = erlzmq:socket(Context, pub),
ok = erlzmq:bind(Publisher, "tcp://*:5561"),
%% Socket to receive signals
{ok, Syncservice} = erlzmq:socket(Context, rep),
ok = erlzmq:bind(Syncservice, "tcp://*:5562"),
%% Get synchronization from subscribers
io:format("Waiting for subscribers~n"),
sync_subscribers(Syncservice, ?SUBSCRIBERS_EXPECTED),
%% Now broadcast exactly 1M updates followed by END
io:format("Broadcasting messages~n"),
broadcast(Publisher, 1000000),
ok = erlzmq:send(Publisher, <<"END">>),
ok = erlzmq:close(Publisher),
ok = erlzmq:close(Syncservice),
ok = erlzmq:term(Context).
sync_subscribers(_Syncservice, 0) -> ok;
sync_subscribers(Syncservice, N) when N > 0 ->
%% Wait for synchronization request
{ok, _} = erlzmq:recv(Syncservice),
%% Send synchronization reply
ok = erlzmq:send(Syncservice, <<>>),
sync_subscribers(Syncservice, N - 1).
broadcast(_Publisher, 0) -> ok;
broadcast(Publisher, N) when N > 0 ->
ok = erlzmq:send(Publisher, <<"Rhubarb">>),
broadcast(Publisher, N - 1).
syncpub: Synchronized publisher in Elixir
defmodule Syncpub do
@moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:34
"""
defmacrop erlconst_SUBSCRIBERS_EXPECTED() do
quote do
2
end
end
def main() do
{:ok, context} = :erlzmq.context()
{:ok, publisher} = :erlzmq.socket(context, :pub)
:ok = :erlzmq.bind(publisher, 'tcp://*:5561')
{:ok, syncservice} = :erlzmq.socket(context, :rep)
:ok = :erlzmq.bind(syncservice, 'tcp://*:5562')
:io.format('Waiting for subscribers. Please start 2 subscribers.~n')
sync_subscribers(syncservice, erlconst_SUBSCRIBERS_EXPECTED())
:io.format('Broadcasting messages~n')
broadcast(publisher, 1000000)
:ok = :erlzmq.send(publisher, "END")
:ok = :erlzmq.close(publisher)
:ok = :erlzmq.close(syncservice)
:ok = :erlzmq.term(context)
end
def sync_subscribers(_syncservice, 0) do
:ok
end
def sync_subscribers(syncservice, n) when n > 0 do
{:ok, _} = :erlzmq.recv(syncservice)
:ok = :erlzmq.send(syncservice, <<>>)
sync_subscribers(syncservice, n - 1)
end
def broadcast(_publisher, 0) do
:ok
end
def broadcast(publisher, n) when n > 0 do
:ok = :erlzmq.send(publisher, "Rhubarb")
broadcast(publisher, n - 1)
end
end
Syncpub.main
syncpub: Synchronized publisher in Go
// Synchronized publisher
//
// Author: Brendan Mc.
// Requires: http://github.com/alecthomas/gozmq
package main
import (
zmq "github.com/alecthomas/gozmq"
)
var subsExpected = 10
func main() {
context, _ := zmq.NewContext()
defer context.Close()
// Socket to talk to clients
publisher, _ := context.NewSocket(zmq.PUB)
defer publisher.Close()
publisher.Bind("tcp://*:5561")
// Socket to receive signals
syncservice, _ := context.NewSocket(zmq.REP)
defer syncservice.Close()
syncservice.Bind("tcp://*:5562")
// Get synchronization from subscribers
for i := 0; i < subsExpected; i = i + 1 {
syncservice.Recv(0)
syncservice.Send([]byte(""), 0)
}
for update_nbr := 0; update_nbr < 1000000; update_nbr = update_nbr + 1 {
publisher.Send([]byte("Rhubarb"), 0)
}
publisher.Send([]byte("END"), 0)
}
syncpub: Synchronized publisher in Haskell
{-# LANGUAGE OverloadedStrings #-}
-- Synchronized publisher
module Main where
import Control.Monad
import System.ZMQ4.Monadic
subscribers_expected :: Int
subscribers_expected = 10
main :: IO ()
main = runZMQ $ do
-- Socket to talk to clients
publisher <- socket Pub
setSendHighWM (restrict 1100000) publisher
bind publisher "tcp://*:5561"
-- Socket to receive signals
syncservice <- socket Rep
bind syncservice "tcp://*:5562"
-- Get synchronization from subscribers
liftIO $ putStrLn "Waiting for subscribers"
replicateM_ subscribers_expected $ do
receive syncservice
send syncservice [] ""
-- Now broadcast exactly 1M updates followed by END
liftIO $ putStrLn "Broadcasting messages"
replicateM_ 1000000 (send publisher [] "Rhubarb")
send publisher [] "END"
syncpub: Synchronized publisher in Haxe
package ;
import haxe.io.Bytes;
import neko.Lib;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQSocket;
/**
* Synchronised publisher
*
* See: https://zguide.zeromq.cn/page:all#Node-Coordination
*
* Use with SyncSub.hx
*/
class SyncPub
{
static inline var SUBSCRIBERS_EXPECTED = 10;
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println("** SyncPub (see: https://zguide.zeromq.cn/page:all#Node-Coordination)");
// Socket to talk to clients
var publisher:ZMQSocket = context.socket(ZMQ_PUB);
publisher.bind("tcp://*:5561");
// Socket to receive signals
var syncService:ZMQSocket = context.socket(ZMQ_REP);
syncService.bind("tcp://*:5562");
// get synchronisation from subscribers
var subscribers = 0;
while (subscribers < SUBSCRIBERS_EXPECTED) {
// wait for synchronisation request
var msgBytes = syncService.recvMsg();
// send synchronisation reply
syncService.sendMsg(Bytes.ofString(""));
subscribers++;
}
// Now broadcast exactly 1m updates followed by END
for (update_nbr in 0 ... 1000000) {
publisher.sendMsg(Bytes.ofString("Rhubarb"));
}
publisher.sendMsg(Bytes.ofString("END"));
publisher.close();
syncService.close();
context.term();
}
}
syncpub: Synchronized publisher in Java
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;
import org.zeromq.ZContext;
/**
* Synchronized publisher.
*/
public class syncpub
{
/**
* We wait for 10 subscribers
*/
protected static int SUBSCRIBERS_EXPECTED = 10;
public static void main(String[] args)
{
try (ZContext context = new ZContext()) {
// Socket to talk to clients
Socket publisher = context.createSocket(SocketType.PUB);
publisher.setLinger(5000);
// In 0MQ 3.x the pub socket could drop messages if the sub can't
// keep up with the generation of pub messages
publisher.setSndHWM(0);
publisher.bind("tcp://*:5561");
// Socket to receive signals
Socket syncservice = context.createSocket(SocketType.REP);
syncservice.bind("tcp://*:5562");
System.out.println("Waiting for subscribers");
// Get synchronization from subscribers
int subscribers = 0;
while (subscribers < SUBSCRIBERS_EXPECTED) {
// - wait for synchronization request
syncservice.recv(0);
// - send synchronization reply
syncservice.send("", 0);
subscribers++;
}
// Now broadcast exactly 1M updates followed by END
System.out.println("Broadcasting messages");
int update_nbr;
for (update_nbr = 0; update_nbr < 1000000; update_nbr++) {
publisher.send("Rhubarb", 0);
}
publisher.send("END", 0);
}
}
}
syncpub: Synchronized publisher in Lua
--
-- Synchronized publisher
--
-- Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zhelpers"
-- We wait for 10 subscribers
SUBSCRIBERS_EXPECTED = 10
s_version_assert (2, 1)
local context = zmq.init(1)
-- Socket to talk to clients
local publisher = context:socket(zmq.PUB)
publisher:bind("tcp://*:5561")
-- Socket to receive signals
local syncservice = context:socket(zmq.REP)
syncservice:bind("tcp://*:5562")
-- Get synchronization from subscribers
local subscribers = 0
while (subscribers < SUBSCRIBERS_EXPECTED) do
-- - wait for synchronization request
local msg = syncservice:recv()
-- - send synchronization reply
syncservice:send("")
subscribers = subscribers + 1
end
-- Now broadcast exactly 1M updates followed by END
local update_nbr
for update_nbr=1,1000000 do
publisher:send("Rhubarb")
end
publisher:send("END")
publisher:close()
syncservice:close()
context:term()
syncpub: Synchronized publisher in Node.js
var zmq = require('zeromq')
var publisher = zmq.socket('pub')
var server = zmq.socket('rep')
var pending = 0
server.on('message', function(request) {
pending++
console.log(request.toString(), pending)
server.send('OK')
if (pending > 0)
publisher.send(pending + ' subscribers connected.')
})
server.bind('tcp://*:8888', function(err) {
if(err)
console.log(err)
else
console.log('Listening on 8888...')
})
publisher.bind('tcp://*:8688', function(err) {
if(err)
console.log(err)
else
console.log('Listening on 8688...')
})
process.on('SIGINT', function() {
publisher.close()
server.close()
})
syncpub: Synchronized publisher in Perl
# Synchronized publisher in Perl
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_PUB ZMQ_REP ZMQ_SNDHWM);
my $SUBSCRIBERS_EXPECTED = 10; # We wait for 10 subscribers
my $context = ZMQ::FFI->new();
# Socket to talk to clients
my $publisher = $context->socket(ZMQ_PUB);
$publisher->set(ZMQ_SNDHWM, 'int', 0);
$publisher->set_linger(-1);
$publisher->bind('tcp://*:5561');
# Socket to receive signals
my $syncservice = $context->socket(ZMQ_REP);
$syncservice->bind('tcp://*:5562');
# Get synchronization from subscribers
say "Waiting for subscribers";
for my $subscribers (1..$SUBSCRIBERS_EXPECTED) {
# wait for synchronization request
$syncservice->recv();
# send synchronization reply
$syncservice->send('');
say "+1 subscriber ($subscribers/$SUBSCRIBERS_EXPECTED)";
}
# Now broadcast exactly 1M updates followed by END
say "Broadcasting messages";
for (1..1_000_000) {
$publisher->send("Rhubarb");
}
$publisher->send("END");
say "Done";
syncpub: Synchronized publisher in PHP
<?php
/*
* Synchronized publisher
*
* @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
*/
// We wait for 10 subscribers
define("SUBSCRIBERS_EXPECTED", 10);
$context = new ZMQContext();
// Socket to talk to clients
$publisher = new ZMQSocket($context, ZMQ::SOCKET_PUB);
$publisher->bind("tcp://*:5561");
// Socket to receive signals
$syncservice = new ZMQSocket($context, ZMQ::SOCKET_REP);
$syncservice->bind("tcp://*:5562");
// Get synchronization from subscribers
$subscribers = 0;
while ($subscribers < SUBSCRIBERS_EXPECTED) {
// - wait for synchronization request
$string = $syncservice->recv();
// - send synchronization reply
$syncservice->send("");
$subscribers++;
}
// Now broadcast exactly 1M updates followed by END
for ($update_nbr = 0; $update_nbr < 1000000; $update_nbr++) {
$publisher->send("Rhubarb");
}
$publisher->send("END");
sleep (1); // Give 0MQ/2.0.x time to flush output
syncpub: Synchronized publisher in Python
#
# Synchronized publisher
#
import zmq
# We wait for 10 subscribers
SUBSCRIBERS_EXPECTED = 10
def main():
context = zmq.Context()
# Socket to talk to clients
publisher = context.socket(zmq.PUB)
# set SNDHWM, so we don't drop messages for slow subscribers
publisher.sndhwm = 1100000
publisher.bind("tcp://*:5561")
# Socket to receive signals
syncservice = context.socket(zmq.REP)
syncservice.bind("tcp://*:5562")
# Get synchronization from subscribers
subscribers = 0
while subscribers < SUBSCRIBERS_EXPECTED:
# wait for synchronization request
msg = syncservice.recv()
# send synchronization reply
syncservice.send(b'')
subscribers += 1
print(f"+1 subscriber ({subscribers}/{SUBSCRIBERS_EXPECTED})")
# Now broadcast exactly 1M updates followed by END
for i in range(1000000):
publisher.send(b"Rhubarb")
publisher.send(b"END")
if __name__ == "__main__":
main()
syncpub: Synchronized publisher in Racket
#lang racket
#|
# Synchronized publisher
|#
(require net/zmq)
; We wait for 2 subscribers
(define SUBSCRIBERS_EXPECTED 2)
(define ctxt (context 1))
; Socket to talk to clients
(define publisher (socket ctxt 'PUB))
(socket-bind! publisher "tcp://*:5561")
; Socket to receive signals
(define syncservice (socket ctxt 'REP))
(socket-bind! syncservice "tcp://*:5562")
; Get synchronization from subscribers
(for ([i (in-range SUBSCRIBERS_EXPECTED)])
; wait for synchronization request
(socket-recv! syncservice)
; send synchronization reply
(socket-send! syncservice #"")
(printf "+1 subscriber\n"))
; Now broadcast exactly 1M updates followed by END
(for ([i (in-range 1000000)])
(socket-send! publisher #"Rhubarb"))
(socket-send! publisher #"END")
(context-close! ctxt)
syncpub: Synchronized publisher in Ruby
#!/usr/bin/env ruby
#
# Synchronized publisher
#
require 'rubygems'
require 'ffi-rzmq'
# We wait for 10 subscribers
SUBSCRIBERS_EXPECTED = 10
context = ZMQ::Context.new
# Socket to talk to clients
publisher = context.socket(ZMQ::PUB)
publisher.setsockopt(ZMQ::SNDHWM, 0);
publisher.bind("tcp://*:5561")
# Socket to receive signals
syncservice = context.socket(ZMQ::REP)
syncservice.bind("tcp://*:5562")
# Get synchronization from subscribers
puts "Waiting for subscribers"
subscribers = 0
begin
# wait for synchronization request
syncservice.recv_string('')
# send synchronization reply
syncservice.send_string("")
subscribers+=1
end while subscribers < SUBSCRIBERS_EXPECTED
# Now broadcast exactly 1M updates followed by END
1000000.times do
publisher.send_string("Rhubarb")
end
publisher.send_string("END")
syncpub: Synchronized publisher in Rust
fn main() {
const SUBSCRIBERS_EXPECTED: usize = 10;
let context = zmq::Context::new();
// socket that talks to clients
let publisher = context.socket(zmq::PUB).unwrap();
// Set the send high-water mark high enough to buffer the full 1M-message run
assert!(publisher.set_sndhwm(1_000_100).is_ok());
publisher.bind("tcp://*:5562").unwrap();
// socket that receives messages
let sync_service = context.socket(zmq::REP).unwrap();
sync_service.bind("tcp://*:5561").unwrap();
println!("Waiting for subscribers");
let mut num_subscribers = 0;
while num_subscribers < SUBSCRIBERS_EXPECTED {
let _ = sync_service.recv_string(0).unwrap().unwrap();
assert!(sync_service.send("", 0).is_ok());
num_subscribers += 1;
}
// Now broadcast exactly 1M updates followed by END
println!("Broadcasting messages");
for _ in 0..1_000_000 {
assert!(publisher.send("Rhubarb", 0).is_ok());
}
assert!(publisher.send("END", 0).is_ok());
}
syncpub: Synchronized publisher in Scala
/*
*
* Synchronized publisher.
*
* @author Giovanni Ruggiero
* @email giovanni.ruggiero@gmail.com
*/
import org.zeromq.ZMQ
import org.zeromq.ZMQ.Context
import org.zeromq.ZMQ.Socket
object SyncPub {
def main(args : Array[String]) {
/**
* We wait for 10 subscribers
*/
val SUBSCRIBERS_EXPECTED = 10
val context = ZMQ.context(1)
// Socket to talk to clients
val publisher = context.socket(ZMQ.PUB)
publisher.bind("tcp://*:5561")
// Socket to receive signals
val syncservice = context.socket(ZMQ.REP)
syncservice.bind("tcp://*:5562")
// Get synchronization from subscribers
for (subscribers <- 1 to SUBSCRIBERS_EXPECTED) {
// - wait for synchronization request
var value = syncservice.recv(0)
// - send synchronization reply
syncservice.send("".getBytes(), 0)
}
// Now broadcast exactly 1M updates followed by END
for (update_nbr <- 1 to 1000000){
publisher.send("Rhubarb".getBytes(), 0)
}
publisher.send("END".getBytes(), 0)
// Give 0MQ/2.0.x time to flush output
try {
Thread.sleep (1000)
} catch {
case e: InterruptedException => e.printStackTrace()
}
// clean up
publisher.close()
syncservice.close()
context.term()
}
}
syncpub: Synchronized publisher in Tcl
#
# Synchronized publisher
#
package require zmq
zmq context context
# We wait for 10 subscribers
set SUBSCRIBERS_EXPECTED 10
# Socket to talk to clients
zmq socket publisher context PUB
publisher bind "tcp://*:5561"
# Socket to receive signals
zmq socket syncservice context REP
syncservice bind "tcp://*:5562"
# Get synchronization from subscribers
puts "Waiting for subscribers"
set subscribers 0
while {$subscribers < $SUBSCRIBERS_EXPECTED} {
# - wait for synchronization request
syncservice recv
# - send synchronization reply
syncservice send ""
incr subscribers
}
# Now broadcast exactly 1M updates followed by END
puts "Broadcasting messages"
for {set update_nbr 0} {$update_nbr < 1000000} {incr update_nbr} {
publisher send "Rhubarb"
}
publisher send "END"
publisher close
syncservice close
context term
syncsub: Synchronized subscriber in C
// Synchronized subscriber
#include "zhelpers.h"
#include <unistd.h>
int main (void)
{
void *context = zmq_ctx_new ();
// First, connect our subscriber socket
void *subscriber = zmq_socket (context, ZMQ_SUB);
zmq_connect (subscriber, "tcp://localhost:5561");
zmq_setsockopt (subscriber, ZMQ_SUBSCRIBE, "", 0);
// 0MQ is so fast, we need to wait a while...
sleep (1);
// Second, synchronize with publisher
void *syncclient = zmq_socket (context, ZMQ_REQ);
zmq_connect (syncclient, "tcp://localhost:5562");
// - send a synchronization request
s_send (syncclient, "");
// - wait for synchronization reply
char *string = s_recv (syncclient);
free (string);
// Third, get our updates and report how many we got
int update_nbr = 0;
while (1) {
char *string = s_recv (subscriber);
if (strcmp (string, "END") == 0) {
free (string);
break;
}
free (string);
update_nbr++;
}
printf ("Received %d updates\n", update_nbr);
zmq_close (subscriber);
zmq_close (syncclient);
zmq_ctx_destroy (context);
return 0;
}
syncsub: Synchronized subscriber in C++
//
// Synchronized subscriber in C++
//
#include "zhelpers.hpp"
int main (int argc, char *argv[])
{
zmq::context_t context(1);
// First, connect our subscriber socket
zmq::socket_t subscriber (context, ZMQ_SUB);
subscriber.connect("tcp://localhost:5561");
subscriber.set(zmq::sockopt::subscribe, "");
// Second, synchronize with publisher
zmq::socket_t syncclient (context, ZMQ_REQ);
syncclient.connect("tcp://localhost:5562");
// - send a synchronization request
s_send (syncclient, std::string(""));
// - wait for synchronization reply
s_recv (syncclient);
// Third, get our updates and report how many we got
int update_nbr = 0;
while (1) {
if (s_recv (subscriber).compare("END") == 0) {
break;
}
update_nbr++;
}
std::cout << "Received " << update_nbr << " updates" << std::endl;
return 0;
}
syncsub: Synchronized subscriber in CL
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Synchronized subscriber in Common Lisp
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.syncsub
(:nicknames #:syncsub)
(:use #:cl #:zhelpers)
(:export #:main))
(in-package :zguide.syncsub)
(defun main ()
(zmq:with-context (context 1)
;; First, connect our subscriber socket
(zmq:with-socket (subscriber context zmq:sub)
(zmq:connect subscriber "tcp://localhost:5561")
(zmq:setsockopt subscriber zmq:subscribe "")
;; Second, synchronize with publisher
(zmq:with-socket (syncclient context zmq:req)
(zmq:connect syncclient "tcp://localhost:5562")
;; - send a synchronization request
(let ((msg (make-instance 'zmq:msg :data "")))
(zmq:send syncclient msg))
;; - wait for synchronization reply
(let ((msg (make-instance 'zmq:msg)))
(zmq:recv syncclient msg))
;; Third, get our updates and report how many we got
(let ((updates 0))
(loop
(let ((msg (make-instance 'zmq:msg)))
(zmq:recv subscriber msg)
(when (string= "END" (zmq:msg-data-as-string msg))
(return))
(incf updates)))
(message "Received ~D updates~%" updates)))))
(cleanup))
syncsub: Synchronized subscriber in Delphi
program syncsub;
//
// Synchronized subscriber
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqApi
;
var
context: TZMQContext;
subscriber,
syncclient: TZMQSocket;
str: Utf8String;
i: Integer;
begin
context := TZMQContext.Create;
// First, connect our subscriber socket
subscriber := Context.Socket( stSub );
subscriber.RcvHWM := 1000001;
subscriber.connect( 'tcp://localhost:5561' );
subscriber.Subscribe( '' );
// 0MQ is so fast, we need to wait a while...
sleep (1000);
// Second, synchronize with publisher
syncclient := Context.Socket( stReq );
syncclient.connect( 'tcp://localhost:5562' );
// - send a synchronization request
syncclient.send( '' );
// - wait for synchronization reply
syncclient.recv( str );
// Third, get our updates and report how many we got
i := 0;
while True do
begin
subscriber.recv( str );
if str = 'END' then
break;
inc( i );
end;
Writeln( Format( 'Received %d updates', [i] ) );
subscriber.Free;
syncclient.Free;
context.Free;
end.
syncsub: Synchronized subscriber in Erlang
#! /usr/bin/env escript
%%
%% Synchronized subscriber
%%
main(_) ->
{ok, Context} = erlzmq:context(),
%% First, connect our subscriber socket
{ok, Subscriber} = erlzmq:socket(Context, sub),
ok = erlzmq:connect(Subscriber, "tcp://localhost:5561"),
ok = erlzmq:setsockopt(Subscriber, subscribe, <<>>),
%% Second, synchronize with publisher
{ok, Syncclient} = erlzmq:socket(Context, req),
ok = erlzmq:connect(Syncclient, "tcp://localhost:5562"),
%% - send a synchronization request
ok = erlzmq:send(Syncclient, <<>>),
%% - wait for synchronization reply
{ok, <<>>} = erlzmq:recv(Syncclient),
%% Third, get our updates and report how many we got
Updates = acc_updates(Subscriber, 0),
io:format("Received ~b updates~n", [Updates]),
ok = erlzmq:close(Subscriber),
ok = erlzmq:close(Syncclient),
ok = erlzmq:term(Context).
acc_updates(Subscriber, N) ->
case erlzmq:recv(Subscriber) of
{ok, <<"END">>} -> N;
{ok, _} -> acc_updates(Subscriber, N + 1)
end.
syncsub: Synchronized subscriber in Elixir
defmodule Syncsub do
@moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:34
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, subscriber} = :erlzmq.socket(context, :sub)
:ok = :erlzmq.connect(subscriber, 'tcp://localhost:5561')
:ok = :erlzmq.setsockopt(subscriber, :subscribe, <<>>)
{:ok, syncclient} = :erlzmq.socket(context, :req)
:ok = :erlzmq.connect(syncclient, 'tcp://localhost:5562')
:ok = :erlzmq.send(syncclient, <<>>)
{:ok, <<>>} = :erlzmq.recv(syncclient)
updates = acc_updates(subscriber, 0)
:io.format('Received ~b updates~n', [updates])
:ok = :erlzmq.close(subscriber)
:ok = :erlzmq.close(syncclient)
:ok = :erlzmq.term(context)
end
def acc_updates(subscriber, n) do
case(:erlzmq.recv(subscriber)) do
{:ok, "END"} ->
n
{:ok, _} ->
acc_updates(subscriber, n + 1)
end
end
end
Syncsub.main
syncsub: Synchronized subscriber in Go
// Synchronized subscriber
//
// Author: Aleksandar Janicijevic
// Requires: http://github.com/alecthomas/gozmq
package main
import (
"fmt"
zmq "github.com/alecthomas/gozmq"
"time"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
subscriber, _ := context.NewSocket(zmq.SUB)
defer subscriber.Close()
subscriber.Connect("tcp://localhost:5561")
subscriber.SetSubscribe("")
// 0MQ is so fast, we need to wait a while...
time.Sleep(time.Second)
// Second, synchronize with publisher
syncclient, _ := context.NewSocket(zmq.REQ)
defer syncclient.Close()
syncclient.Connect("tcp://localhost:5562")
// - send a synchronization request
fmt.Println("Send synchronization request")
syncclient.Send([]byte(""), 0)
fmt.Println("Wait for synchronization reply")
// - wait for synchronization reply
syncclient.Recv(0)
fmt.Println("Get updates")
// Third, get our updates and report how many we got
update_nbr := 0
for {
reply, _ := subscriber.Recv(0)
if string(reply) == "END" {
break
}
update_nbr++
}
fmt.Printf("Received %d updates\n", update_nbr)
}
syncsub: Synchronized subscriber in Haskell
{-# LANGUAGE OverloadedStrings #-}
-- Synchronized subscriber
module Main where
import Control.Concurrent
import Data.Function
import System.ZMQ4.Monadic
import Text.Printf
main :: IO ()
main = runZMQ $ do
-- First, connect our subscriber socket
subscriber <- socket Sub
connect subscriber "tcp://localhost:5561"
subscribe subscriber ""
-- 0MQ is so fast, we need to wait a while...
liftIO $ threadDelay 1000000
-- Second, synchronize with the publisher
syncclient <- socket Req
connect syncclient "tcp://localhost:5562"
-- Send a synchronization request
send syncclient [] ""
-- Wait for a synchronization reply
receive syncclient
let -- go :: (Int -> ZMQ z Int) -> Int -> ZMQ z Int
go loop = \n -> do
string <- receive subscriber
if string == "END"
then return n
else loop (n+1)
-- Third, get our updates and report how many we got
update_nbr <- fix go (0 :: Int)
liftIO $ printf "Received %d updates\n" update_nbr
syncsub: Synchronized subscriber in Haxe
package ;
import neko.Lib;
import haxe.io.Bytes;
import neko.Sys;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQSocket;
/**
* Synchronised subscriber
*
* See: https://zguide.zeromq.cn/page:all#Node-Coordination
*
* Use with SyncPub.hx
*/
class SyncSub
{
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println("** SyncSub (see: https://zguide.zeromq.cn/page:all#Node-Coordination)");
// First connect our subscriber socket
var subscriber:ZMQSocket = context.socket(ZMQ_SUB);
subscriber.connect("tcp://127.0.0.1:5561");
subscriber.setsockopt(ZMQ_SUBSCRIBE, Bytes.ofString(""));
// 0MQ is so fast, we need to wait a little while
Sys.sleep(1.0);
// Second, synchronise with publisher
var syncClient:ZMQSocket = context.socket(ZMQ_REQ);
syncClient.connect("tcp://127.0.0.1:5562");
// Send a synchronisation request
syncClient.sendMsg(Bytes.ofString(""));
// Wait for a synchronisation reply
var msgBytes:Bytes = syncClient.recvMsg();
// Third, get our updates and report how many we got
var update_nbr = 0;
while (true) {
msgBytes = subscriber.recvMsg();
if (msgBytes.toString() == "END") {
break;
}
msgBytes = null;
update_nbr++;
}
Lib.println("Received " + update_nbr + " updates\n");
subscriber.close();
syncClient.close();
context.term();
}
}
syncsub: Synchronized subscriber in Java
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;
import org.zeromq.ZContext;
/**
* Synchronized subscriber.
*/
public class syncsub
{
public static void main(String[] args)
{
try (ZContext context = new ZContext()) {
// First, connect our subscriber socket
Socket subscriber = context.createSocket(SocketType.SUB);
subscriber.connect("tcp://localhost:5561");
subscriber.subscribe(ZMQ.SUBSCRIPTION_ALL);
// Second, synchronize with publisher
Socket syncclient = context.createSocket(SocketType.REQ);
syncclient.connect("tcp://localhost:5562");
// - send a synchronization request
syncclient.send(ZMQ.MESSAGE_SEPARATOR, 0);
// - wait for synchronization reply
syncclient.recv(0);
// Third, get our updates and report how many we got
int update_nbr = 0;
while (true) {
String string = subscriber.recvStr(0);
if (string.equals("END")) {
break;
}
update_nbr++;
}
System.out.println("Received " + update_nbr + " updates.");
}
}
}
syncsub: Synchronized subscriber in Lua
--
-- Synchronized subscriber
--
-- Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zhelpers"
local context = zmq.init(1)
-- First, connect our subscriber socket
local subscriber = context:socket(zmq.SUB)
subscriber:connect("tcp://localhost:5561")
subscriber:setopt(zmq.SUBSCRIBE, "")
-- 0MQ is so fast, we need to wait a while...
s_sleep (1000)
-- Second, synchronize with publisher
local syncclient = context:socket(zmq.REQ)
syncclient:connect("tcp://localhost:5562")
-- - send a synchronization request
syncclient:send("")
-- - wait for synchronization reply
local msg = syncclient:recv()
-- Third, get our updates and report how many we got
local update_nbr = 0
while true do
local msg = subscriber:recv()
if (msg == "END") then
break
end
update_nbr = update_nbr + 1
end
printf ("Received %d updates\n", update_nbr)
subscriber:close()
syncclient:close()
context:term()
syncsub: Synchronized subscriber in Node.js
var zmq = require('zeromq')
var subscriber = zmq.socket('sub')
var client = zmq.socket('req')
subscriber.on('message', function(reply) {
console.log('Received message: ', reply.toString());
})
subscriber.connect('tcp://localhost:8688')
subscriber.subscribe('')
client.connect('tcp://localhost:8888')
client.send('SYNC')
process.on('SIGINT', function() {
subscriber.close()
client.close()
})
syncsub: Synchronized subscriber in Perl
# Synchronized subscriber in Perl
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_SUB ZMQ_REQ ZMQ_RCVHWM);
my $context = ZMQ::FFI->new();
# First, connect our subscriber socket
my $subscriber = $context->socket(ZMQ_SUB);
$subscriber->set(ZMQ_RCVHWM, 'int', 0);
$subscriber->connect('tcp://localhost:5561');
$subscriber->subscribe('');
# 0MQ is so fast, we need to wait a while...
sleep 3;
# Second, synchronize with publisher
my $syncclient = $context->socket(ZMQ_REQ);
$syncclient->connect('tcp://localhost:5562');
# send a synchronization request
$syncclient->send('');
# wait for synchronization reply
$syncclient->recv();
# Third, get our updates and report how many we got
my $update_nbr = 0;
while (1) {
last if $subscriber->recv() eq "END";
$update_nbr++;
}
say "Received $update_nbr updates";
syncsub: Synchronized subscriber in PHP
<?php
/*
* Synchronized subscriber
*
* @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
*/
$context = new ZMQContext();
// First, connect our subscriber socket
$subscriber = $context->getSocket(ZMQ::SOCKET_SUB);
$subscriber->connect("tcp://localhost:5561");
$subscriber->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, "");
// Second, synchronize with publisher
$syncclient = $context->getSocket(ZMQ::SOCKET_REQ);
$syncclient->connect("tcp://localhost:5562");
// - send a synchronization request
$syncclient->send("");
// - wait for synchronization reply
$string = $syncclient->recv();
// Third, get our updates and report how many we got
$update_nbr = 0;
while (true) {
$string = $subscriber->recv();
if ($string == "END") {
break;
}
$update_nbr++;
}
printf ("Received %d updates %s", $update_nbr, PHP_EOL);
syncsub: Synchronized subscriber in Python
#
# Synchronized subscriber
#
import time
import zmq
def main():
context = zmq.Context()
# First, connect our subscriber socket
subscriber = context.socket(zmq.SUB)
subscriber.connect("tcp://localhost:5561")
subscriber.setsockopt(zmq.SUBSCRIBE, b'')
time.sleep(1)
# Second, synchronize with publisher
syncclient = context.socket(zmq.REQ)
syncclient.connect("tcp://localhost:5562")
# send a synchronization request
syncclient.send(b'')
# wait for synchronization reply
syncclient.recv()
# Third, get our updates and report how many we got
nbr = 0
while True:
msg = subscriber.recv()
if msg == b"END":
break
nbr += 1
print(f"Received {nbr} updates")
if __name__ == "__main__":
main()
syncsub: Synchronized subscriber in Racket
#lang racket
#|
# Synchronized subscriber
|#
(require net/zmq)
(define ctxt (context 1))
; First, connect our subscriber socket
(define subscriber (socket ctxt 'SUB))
(socket-connect! subscriber "tcp://localhost:5561")
(set-socket-option! subscriber 'SUBSCRIBE #"")
; Second, synchronize with publisher
(define syncclient (socket ctxt 'REQ))
(socket-connect! syncclient "tcp://localhost:5562")
; send a synchronization request
(socket-send! syncclient #"")
; wait for synchronization reply
(void (socket-recv! syncclient))
; Third, get our updates and report how many we got
(define nbr
(let loop ([nbr 0])
(define msg (socket-recv! subscriber))
(printf "Received: ~a\n" msg)
(if (bytes=? msg #"END")
nbr
(loop (add1 nbr)))))
(printf "Received ~a updates\n" nbr)
(context-close! ctxt)
syncsub: Synchronized subscriber in Ruby
#!/usr/bin/env ruby
#
# Synchronized subscriber
#
require 'rubygems'
require 'ffi-rzmq'
context = ZMQ::Context.new
# First, connect our subscriber socket
subscriber = context.socket(ZMQ::SUB)
subscriber.connect("tcp://localhost:5561")
subscriber.setsockopt(ZMQ::SUBSCRIBE,"")
# 0MQ is so fast, we need to wait a while...
sleep(1)
# Second, synchronize with publisher
synclient = context.socket(ZMQ::REQ)
synclient.connect("tcp://localhost:5562")
# - send a synchronization request
synclient.send_string("")
# - wait for synchronization reply
synclient.recv_string('')
# Third, get our updates and report how many we got
update_nbr=0
loop do
subscriber.recv_string(string = '')
break if string == "END"
update_nbr+=1
end
puts "Received #{update_nbr} updates"
syncsub: Synchronized subscriber in Rust
use std::thread;
use std::time::Duration;
fn main() {
let context = zmq::Context::new();
let subscriber = context.socket(zmq::SUB).unwrap();
assert!(subscriber.connect("tcp://localhost:5562").is_ok());
assert!(subscriber.set_subscribe(b"").is_ok());
thread::sleep(Duration::from_secs(1));
// socket that receives messages
let sync_client = context.socket(zmq::REQ).unwrap();
sync_client.connect("tcp://localhost:5561").unwrap();
assert!(sync_client.send("", 0).is_ok());
// wait for synchronization reply
let _ = sync_client.recv_string(0).unwrap().unwrap();
// Get our updates and report how many we got
let mut n = 0;
let mut done = false;
while !done {
let msg = subscriber.recv_string(0).unwrap().unwrap_or("".to_string());
if msg == "Rhubarb" {
n += 1;
} else {
done = msg == "END";
}
}
println!("Received {} updates", n);
}
syncsub: Synchronized subscriber in Scala
/*
* Synchronized subscriber
*
* @author Giovanni Ruggiero
* @email giovanni.ruggiero@gmail.com
*/
import org.zeromq.ZMQ
import org.zeromq.ZMQ.Context
import org.zeromq.ZMQ.Socket
object SyncSub {
def main(args : Array[String]) {
val context = ZMQ.context(1)
// First, connect our subscriber socket
val subscriber = context.socket(ZMQ.SUB)
subscriber.connect("tcp://localhost:5561")
subscriber.subscribe("".getBytes())
// Second, synchronize with publisher
val syncclient = context.socket(ZMQ.REQ)
syncclient.connect("tcp://localhost:5562")
// - send a synchronization request
syncclient.send("".getBytes(), 0)
// - wait for synchronization reply
val value = syncclient.recv(0)
// Third, get our updates and report how many we got
var update_nbr = 0
var string = ""
do {
var stringValue = subscriber.recv(0)
string = new String(stringValue)
if (string != "END") update_nbr = update_nbr + 1
} while (string != "END")
println("Received "+update_nbr+" updates.")
subscriber.close()
syncclient.close()
context.term()
}
}
syncsub:Tcl中的同步订阅者
#
# Synchronized subscriber
#
package require zmq
zmq context context
# First, connect our subscriber socket
zmq socket subscriber context SUB
subscriber connect "tcp://localhost:5561"
subscriber setsockopt SUBSCRIBE ""
# 0MQ is so fast, we need to wait a while…
after 1000
# Second, synchronize with publisher
zmq socket syncclient context REQ
syncclient connect "tcp://localhost:5562"
# - send a synchronization request
syncclient send ""
# - wait for synchronization reply
syncclient recv
# Third, get our updates and report how many we got
set update_nbr 0
while {1} {
set string [subscriber recv]
if {$string eq "END"} {
break;
}
incr update_nbr
}
puts "Received $update_nbr updates"
subscriber close
syncclient close
context term
这个Bash shell脚本将启动十个订阅者,然后启动发布者:
echo "Starting subscribers..."
for ((a=0; a<10; a++)); do
syncsub &
done
echo "Starting publisher..."
syncpub
这会给我们带来令人满意的输出:
Starting subscribers...
Starting publisher...
Received 1000000 updates
Received 1000000 updates
...
Received 1000000 updates
Received 1000000 updates
我们不能假设 REQ/REP 对话完成时 SUB 连接也已经完成。如果你使用的不是 inproc 传输方式,无法保证出站连接会按任何顺序完成。因此,此示例在订阅和发送 REQ/REP 同步消息之间强制暂停了一秒。

一个更健壮的模型可能是:

- 发布者打开 PUB 套接字并开始发送“Hello”消息(非数据)。
- 订阅者连接 SUB 套接字,当收到 Hello 消息时,通过一对 REQ/REP 套接字告知发布者已准备就绪。
- 当发布者收到所有必要的确认后,便开始发送真实数据。

你还会看到 XPUB 和 XSUB Socket 的引用,我们稍后会讲到它们(它们类似于 PUB 和 SUB 的原始版本)。任何其他组合都会产生未文档化且不可靠的结果,未来的 ZeroMQ 版本如果尝试这些组合可能会返回错误。当然,你可以并且会通过代码桥接其他 Socket 类型,即从一种 Socket 类型读取并写入另一种。

零拷贝 #

ZeroMQ 的消息 API 允许你直接从应用程序缓冲区发送和接收消息,而无需复制数据。我们称之为零拷贝,在某些应用程序中可以提高性能。

你应该考虑在发送大量内存块(数千字节)、高频率发送的特定情况下使用零拷贝。对于短消息或较低的消息速率,使用零拷贝会使你的代码更混乱、更复杂,而没有明显的收益。像所有优化一样,只在你确定它有帮助时才使用,并且在使用前后进行测量。

要实现零拷贝,你需要使用 `zmq_msg_init_data()` 创建一个消息,引用一块已通过 `malloc()` 或其他分配器分配的数据,然后将该消息传递给 `zmq_msg_send()`。创建消息时,你还需传递一个函数,ZeroMQ 在发送完该消息后会调用它来释放这块数据。下面是最简单的示例,假设 buffer 是在堆上分配的 1000 字节内存块:

void my_free (void *data, void *hint) {
    free (data);
}
// Send message from buffer, which we allocate and ZeroMQ will free for us
zmq_msg_t message;
zmq_msg_init_data (&message, buffer, 1000, my_free, NULL);
zmq_msg_send (&message, socket, 0);

请注意,你无需在发送消息后调用 `zmq_msg_close()`——libzmq 会在实际发送完消息后自动完成此操作。
// Pubsub envelope publisher
// Note that the zhelpers.h file also provides s_sendmore
#include "zhelpers.h"
#include <unistd.h>
int main (void)
{
// Prepare our context and publisher
void *context = zmq_ctx_new ();
void *publisher = zmq_socket (context, ZMQ_PUB);
zmq_bind (publisher, "tcp://*:5563");
while (1) {
// Write two messages, each with an envelope and content
s_sendmore (publisher, "A");
s_send (publisher, "We don't want to see this");
s_sendmore (publisher, "B");
s_send (publisher, "We would like to see this");
sleep (1);
}
// We never get here, but clean up anyhow
zmq_close (publisher);
zmq_ctx_destroy (context);
return 0;
}
//
// Pubsub envelope publisher
// Note that the zhelpers.h file also provides s_sendmore
//
#include "zhelpers.hpp"
int main () {
// Prepare our context and publisher
zmq::context_t context(1);
zmq::socket_t publisher(context, ZMQ_PUB);
publisher.bind("tcp://*:5563");
while (1) {
// Write two messages, each with an envelope and content
s_sendmore (publisher, std::string("A"));
s_send (publisher, std::string("We don't want to see this"));
s_sendmore (publisher, std::string("B"));
s_send (publisher, std::string("We would like to see this"));
sleep (1);
}
return 0;
}
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Pubsub envelope publisher in Common Lisp
;;; Note that the zhelpers package also provides send-text and send-more-text
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.psenvpub
(:nicknames #:psenvpub)
(:use #:cl #:zhelpers)
(:export #:main))
(in-package :zguide.psenvpub)
(defun main ()
;; Prepare our context and publisher
(zmq:with-context (context 1)
(zmq:with-socket (publisher context zmq:pub)
(zmq:bind publisher "tcp://*:5563")
(loop
;; Write two messages, each with an envelope and content
(send-more-text publisher "A")
(send-text publisher "We don't want to see this")
(send-more-text publisher "B")
(send-text publisher "We would like to see this")
(sleep 1))))
(cleanup))
接收时无法实现零拷贝:ZeroMQ会将一个缓冲区交付给你,你可以按需存储该缓冲区,但它不会直接将数据写入应用程序缓冲区。
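作为说明,下面是一个接收端的示意片段(其中 socket 假定为一个已连接的 ZeroMQ socket,process_data 为假设的处理函数):消息数据位于 libzmq 拥有的缓冲区中,如需在 `zmq_msg_close()` 之后继续使用,必须自行复制。

// 接收消息:数据交付在 zmq_msg_t 拥有的缓冲区中
zmq_msg_t message;
zmq_msg_init (&message);
if (zmq_msg_recv (&message, socket, 0) != -1) {
    // zmq_msg_data() 指向 libzmq 的内部缓冲区,而非你的应用缓冲区
    process_data (zmq_msg_data (&message), zmq_msg_size (&message));
}
zmq_msg_close (&message);   // 释放 libzmq 的缓冲区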
program psenvpub;
//
// Pubsub envelope publisher
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
publisher: TZMQSocket;
begin
// Prepare our context and publisher
context := TZMQContext.Create;
publisher := context.Socket( stPub );
publisher.bind( 'tcp://*:5563' );
while true do
begin
// Write two messages, each with an envelope and content
publisher.send( ['A', 'We don''t want to see this'] );
publisher.send( ['B', 'We would like to see this'] );
sleep(1000);
end;
publisher.Free;
context.Free;
end.
在写入时,ZeroMQ的多部分消息与零拷贝配合得很好。在传统消息传递中,你需要将不同的缓冲区组合成一个可发送的缓冲区,这意味着复制数据。使用ZeroMQ,你可以将来自不同来源的多个缓冲区作为独立的消息帧发送:将每个字段作为长度分隔的帧发送。对应用程序而言,这看起来像一系列的发送和接收调用。但在内部,多部分内容通过单次系统调用写入网络并读回,因此效率非常高。
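结合上面的 my_free 示例,可以这样将两个独立分配的缓冲区作为同一条消息的两个帧发送(草图:key、key_size、body、body_size 和 socket 均为假设的变量,分别代表两块已在堆上分配的数据和一个已创建的 socket):

// 将两个独立分配的缓冲区作为同一条消息的两个帧发送,
// 无需先复制拼接成单个缓冲区
zmq_msg_t frame;
zmq_msg_init_data (&frame, key, key_size, my_free, NULL);
zmq_msg_send (&frame, socket, ZMQ_SNDMORE);    // 还有后续帧
zmq_msg_init_data (&frame, body, body_size, my_free, NULL);
zmq_msg_send (&frame, socket, 0);              // 最后一帧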
#! /usr/bin/env escript
%%
%% Pubsub envelope publisher
%%
main(_) ->
%% Prepare our context and publisher
{ok, Context} = erlzmq:context(),
{ok, Publisher} = erlzmq:socket(Context, pub),
ok = erlzmq:bind(Publisher, "tcp://*:5563"),
loop(Publisher),
%% We never get here but clean up anyhow
ok = erlzmq:close(Publisher),
ok = erlzmq:term(Context).
loop(Publisher) ->
%% Write two messages, each with an envelope and content
ok = erlzmq:send(Publisher, <<"A">>, [sndmore]),
ok = erlzmq:send(Publisher, <<"We don't want to see this">>),
ok = erlzmq:send(Publisher, <<"B">>, [sndmore]),
ok = erlzmq:send(Publisher, <<"We would like to see this">>),
timer:sleep(1000),
loop(Publisher).
Pub-Sub消息信封 #
defmodule Psenvpub do
@moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:29
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, publisher} = :erlzmq.socket(context, :pub)
:ok = :erlzmq.bind(publisher, 'tcp://*:5563')
loop(publisher)
:ok = :erlzmq.close(publisher)
:ok = :erlzmq.term(context)
end
def loop(publisher) do
:ok = :erlzmq.send(publisher, "A", [:sndmore])
:ok = :erlzmq.send(publisher, "We don't want to see this")
:ok = :erlzmq.send(publisher, "B", [:sndmore])
:ok = :erlzmq.send(publisher, "We would like to see this")
:timer.sleep(1000)
loop(publisher)
end
end
Psenvpub.main
在pub-sub模式中,我们可以将key分割成一个独立的消息帧,我们称之为信封。如果你想使用pub-sub信封,你需要自己创建它们。这是可选的,在之前的pub-sub示例中我们没有这样做。对于简单情况,使用pub-sub信封会稍微增加一些工作量,但在实际情况中,尤其是当key和数据自然是独立的事物时,这样做更清晰。
订阅进行前缀匹配。也就是说,它们查找“所有以XYZ开头的消息”。显而易见的问题是:如何将key与数据分隔开,以便前缀匹配不会意外地匹配到数据。最好的答案是使用信封,因为匹配不会跨越帧边界。以下是一个极简示例,展示了pub-sub信封在代码中的样子。这个发布者发送两种类型的消息,A和B。
psenvpub:Go中的Pub-Sub信封发布者
//
// Pubsub envelope publisher
//
package main
import (
zmq "github.com/alecthomas/gozmq"
"time"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
publisher, _ := context.NewSocket(zmq.PUB)
defer publisher.Close()
publisher.Bind("tcp://*:5563")
for {
publisher.SendMultipart([][]byte{[]byte("A"), []byte("We don't want to see this")}, 0)
publisher.SendMultipart([][]byte{[]byte("B"), []byte("We would like to see this")}, 0)
time.Sleep(time.Second)
}
}
psenvpub:Haskell中的Pub-Sub信封发布者
{-# LANGUAGE OverloadedLists #-}
{-# LANGUAGE OverloadedStrings #-}
-- Pubsub envelope publisher
module Main where
import Control.Concurrent
import Control.Monad
import System.ZMQ4.Monadic
main :: IO ()
main = runZMQ $ do
-- Prepare our publisher
publisher <- socket Pub
bind publisher "tcp://*:5563"
forever $ do
-- Write two messages, each with an envelope and content
sendMulti publisher ["A", "We don't want to see this"]
sendMulti publisher ["B", "We would like to see this"]
liftIO $ threadDelay 1000000
psenvpub:Haxe中的Pub-Sub信封发布者
package ;
import haxe.io.Bytes;
import neko.Lib;
import neko.Sys;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQException;
import org.zeromq.ZMQSocket;
/**
* Pubsub envelope publisher
*
* See: https://zguide.zeromq.cn/page:all#Pub-sub-Message-Envelopes
*
* Use with PSEnvSub
*/
class PSEnvPub
{
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println("** PSEnvPub (see: https://zguide.zeromq.cn/page:all#Pub-sub-Message-Envelopes)");
var publisher:ZMQSocket = context.socket(ZMQ_PUB);
publisher.bind("tcp://*:5563");
ZMQ.catchSignals();
while (true) {
publisher.sendMsg(Bytes.ofString("A"), SNDMORE);
publisher.sendMsg(Bytes.ofString("We don't want to see this"));
publisher.sendMsg(Bytes.ofString("B"), SNDMORE);
publisher.sendMsg(Bytes.ofString("We would like to see this"));
Sys.sleep(1.0);
}
// We never get here but clean up anyhow
publisher.close();
context.term();
}
}
psenvpub:Java中的Pub-Sub信封发布者
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;
import org.zeromq.ZContext;
/**
* Pubsub envelope publisher
*/
public class psenvpub
{
public static void main(String[] args) throws Exception
{
// Prepare our context and publisher
try (ZContext context = new ZContext()) {
Socket publisher = context.createSocket(SocketType.PUB);
publisher.bind("tcp://*:5563");
while (!Thread.currentThread().isInterrupted()) {
// Write two messages, each with an envelope and content
publisher.sendMore("A");
publisher.send("We don't want to see this");
publisher.sendMore("B");
publisher.send("We would like to see this");
Thread.sleep(1000);
}
}
}
}
psenvpub:Lua中的Pub-Sub信封发布者
--
-- Pubsub envelope publisher
-- Note that the zhelpers.h file also provides s_sendmore
--
-- Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zhelpers"
-- Prepare our context and publisher
local context = zmq.init(1)
local publisher = context:socket(zmq.PUB)
publisher:bind("tcp://*:5563")
while true do
-- Write two messages, each with an envelope and content
publisher:send("A", zmq.SNDMORE)
publisher:send("We don't want to see this")
publisher:send("B", zmq.SNDMORE)
publisher:send("We would like to see this")
s_sleep (1000)
end
-- We never get here but clean up anyhow
publisher:close()
context:term()
psenvpub:Node.js中的Pub-Sub信封发布者
var zmq = require('zeromq')
var publisher = zmq.socket('pub')
publisher.bind('tcp://*:5563', function(err) {
if(err)
console.log(err)
else
console.log('Listening on 5563...')
})
setInterval(function() {
//if you pass an array, send() uses SENDMORE flag automatically
publisher.send(["A", "We do not want to see this"]);
//if you want, you can set it explicitly
publisher.send("B", zmq.ZMQ_SNDMORE);
publisher.send("We would like to see this");
},1000);
psenvpub:Perl中的Pub-Sub信封发布者
# Pubsub envelope publisher in Perl
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_PUB);
# Prepare our context and publisher
my $context = ZMQ::FFI->new();
my $publisher = $context->socket(ZMQ_PUB);
$publisher->bind('tcp://*:5563');
while (1) {
# Write two messages, each with an envelope and content
$publisher->send_multipart(["A", "We don't want to see this"]);
$publisher->send_multipart(["B", "We would like to see this"]);
sleep 1;
}
# We never get here
psenvpub:PHP中的Pub-Sub信封发布者
<?php
/*
* Pubsub envelope publisher
* @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
*/
// Prepare our context and publisher
$context = new ZMQContext();
$publisher = new ZMQSocket($context, ZMQ::SOCKET_PUB);
$publisher->bind("tcp://*:5563");
while (true) {
// Write two messages, each with an envelope and content
$publisher->send("A", ZMQ::MODE_SNDMORE);
$publisher->send("We don't want to see this");
$publisher->send("B", ZMQ::MODE_SNDMORE);
$publisher->send("We would like to see this");
sleep (1);
}
// We never get here
psenvpub:Python中的Pub-Sub信封发布者
"""
Pubsub envelope publisher
Author: Guillaume Aubert (gaubert) <guillaume(dot)aubert(at)gmail(dot)com>
"""
import time
import zmq
def main():
"""main method"""
# Prepare our context and publisher
context = zmq.Context()
publisher = context.socket(zmq.PUB)
publisher.bind("tcp://*:5563")
while True:
# Write two messages, each with an envelope and content
publisher.send_multipart([b"A", b"We don't want to see this"])
publisher.send_multipart([b"B", b"We would like to see this"])
time.sleep(1)
# We never get here but clean up anyhow
publisher.close()
context.term()
if __name__ == "__main__":
main()
psenvpub:Ruby中的Pub-Sub信封发布者
#!/usr/bin/env ruby
require 'ffi-rzmq'
context = ZMQ::Context.new
publisher = context.socket ZMQ::PUB
publisher.bind "tcp://*:5563"
loop do
publisher.send_string 'A', ZMQ::SNDMORE
publisher.send_string "We don't want to see this."
publisher.send_string 'B', ZMQ::SNDMORE
publisher.send_string "We would like to see this."
sleep 1
end
publisher.close
psenvpub:Rust中的Pub-Sub信封发布者
use std::{thread, time};
fn main() {
let context = zmq::Context::new();
let publisher = context.socket(zmq::PUB).unwrap();
assert!(publisher.bind("tcp://*:5563").is_ok());
loop {
publisher.send_multipart(["A"], zmq::SNDMORE).unwrap();
publisher.send("We don't want to see this", 0).unwrap();
publisher.send_multipart(["B"], zmq::SNDMORE).unwrap();
publisher.send("We would like to see this", 0).unwrap();
thread::sleep(time::Duration::from_secs(1));
}
}
psenvpub:Scala中的Pub-Sub信封发布者
/*
* Pubsub envelope publisher
*
* @author Giovanni Ruggiero
* @email giovanni.ruggiero@gmail.com
*/
import org.zeromq.ZMQ
object psenvpub {
def main(args : Array[String]) {
// Prepare our context and publisher
val context = ZMQ.context(1)
val publisher = context.socket(ZMQ.PUB)
publisher.bind("tcp://*:5563")
while (true) {
// Write two messages, each with an envelope and content
publisher.send("A".getBytes(), ZMQ.SNDMORE)
publisher.send("We don't want to see this".getBytes(), 0)
publisher.send("B".getBytes(), ZMQ.SNDMORE)
publisher.send("We would like to see this".getBytes(), 0)
Thread.sleep(1000)
}
}
}
psenvpub:Tcl中的Pub-Sub信封发布者
#
# Pubsub envelope publisher
# Note that the zhelpers.h file also provides sendmore
#
package require zmq
# Prepare our context and publisher
zmq context context
zmq socket publisher context PUB
publisher bind "tcp://*:5563"
while {1} {
# Write two messages, each with an envelope and content
publisher sendmore "A"
publisher send "We don't want to see this"
publisher sendmore "B"
publisher send "We would like to see this"
after 1000
}
# We never get here but clean up anyhow
publisher close
context term
订阅者只想要类型B的消息:

psenvsub:C中的Pub-Sub信封订阅者
// Pubsub envelope subscriber
#include "zhelpers.h"
int main (void)
{
// Prepare our context and subscriber
void *context = zmq_ctx_new ();
void *subscriber = zmq_socket (context, ZMQ_SUB);
zmq_connect (subscriber, "tcp://localhost:5563");
zmq_setsockopt (subscriber, ZMQ_SUBSCRIBE, "B", 1);
while (1) {
// Read envelope with address
char *address = s_recv (subscriber);
// Read message contents
char *contents = s_recv (subscriber);
printf ("[%s] %s\n", address, contents);
free (address);
free (contents);
}
// We never get here, but clean up anyhow
zmq_close (subscriber);
zmq_ctx_destroy (context);
return 0;
}
psenvsub:C++中的Pub-Sub信封订阅者
//
// Pubsub envelope subscriber
//
#include "zhelpers.hpp"
int main () {
// Prepare our context and subscriber
zmq::context_t context(1);
zmq::socket_t subscriber (context, ZMQ_SUB);
subscriber.connect("tcp://localhost:5563");
subscriber.set( zmq::sockopt::subscribe, "B");
while (1) {
// Read envelope with address
std::string address = s_recv (subscriber);
// Read message contents
std::string contents = s_recv (subscriber);
std::cout << "[" << address << "] " << contents << std::endl;
}
return 0;
}
psenvsub:CL中的Pub-Sub信封订阅者
;;; -*- Mode:Lisp; Syntax:ANSI-Common-Lisp; -*-
;;;
;;; Pubsub envelope subscriber in Common Lisp
;;; Note that the zhelpers package also provides recv-text
;;;
;;; Kamil Shakirov <kamils80@gmail.com>
;;;
(defpackage #:zguide.psenvsub
(:nicknames #:psenvsub)
(:use #:cl #:zhelpers)
(:export #:main))
(in-package :zguide.psenvsub)
(defun main ()
;; Prepare our context and publisher
(zmq:with-context (context 1)
(zmq:with-socket (subscriber context zmq:sub)
(zmq:connect subscriber "tcp://localhost:5563")
(zmq:setsockopt subscriber zmq:subscribe "B")
(loop
;; Read envelope with address
(let ((address (recv-text subscriber)))
;; Read message contents
(let ((contents (recv-text subscriber)))
(message "[~A] ~A~%" address contents))))))
(cleanup))
psenvsub:Delphi中的Pub-Sub信封订阅者
program psenvsub;
//
// Pubsub envelope subscriber
// @author Varga Balazs <bb.varga@gmail.com>
//
{$APPTYPE CONSOLE}
uses
SysUtils
, zmqapi
;
var
context: TZMQContext;
subscriber: TZMQSocket;
address, content: Utf8String;
begin
// Prepare our context and subscriber
context := TZMQContext.create;
subscriber := context.Socket( stSub );
subscriber.connect( 'tcp://localhost:5563' );
subscriber.Subscribe( 'B' );
while true do
begin
subscriber.recv( address );
subscriber.recv( content );
Writeln( Format( '[%s] %s', [address, content] ) );
end;
subscriber.Free;
context.Free;
end.
psenvsub:Erlang中的Pub-Sub信封订阅者
#! /usr/bin/env escript
%%
%% Pubsub envelope subscriber
%%
main(_) ->
%% Prepare our context and subscriber
{ok, Context} = erlzmq:context(),
{ok, Subscriber} = erlzmq:socket(Context, sub),
ok = erlzmq:connect(Subscriber, "tcp://localhost:5563"),
ok = erlzmq:setsockopt(Subscriber, subscribe, <<"B">>),
loop(Subscriber),
%% We never get here but clean up anyhow
ok = erlzmq:close(Subscriber),
ok = erlzmq:term(Context).
loop(Subscriber) ->
%% Read envelope with address
{ok, Address} = erlzmq:recv(Subscriber),
%% Read message contents
{ok, Contents} = erlzmq:recv(Subscriber),
io:format("[~s] ~s~n", [Address, Contents]),
loop(Subscriber).
psenvsub:Elixir中的Pub-Sub信封订阅者
defmodule Psenvsub do
@moduledoc """
Generated by erl2ex (http://github.com/dazuma/erl2ex)
From Erlang source: (Unknown source file)
At: 2019-12-20 13:57:30
"""
def main() do
{:ok, context} = :erlzmq.context()
{:ok, subscriber} = :erlzmq.socket(context, :sub)
:ok = :erlzmq.connect(subscriber, 'tcp://localhost:5563')
:ok = :erlzmq.setsockopt(subscriber, :subscribe, "B")
loop(subscriber)
:ok = :erlzmq.close(subscriber)
:ok = :erlzmq.term(context)
end
def loop(subscriber) do
{:ok, address} = :erlzmq.recv(subscriber)
{:ok, contents} = :erlzmq.recv(subscriber)
:io.format('[~s] ~s~n', [address, contents])
loop(subscriber)
end
end
Psenvsub.main
psenvsub:Go中的Pub-Sub信封订阅者
//
// Pubsub envelope subscriber
//
package main
import (
zmq "github.com/alecthomas/gozmq"
)
func main() {
context, _ := zmq.NewContext()
defer context.Close()
subscriber, _ := context.NewSocket(zmq.SUB)
defer subscriber.Close()
subscriber.Connect("tcp://localhost:5563")
subscriber.SetSubscribe("B")
for {
address, _ := subscriber.Recv(0)
content, _ := subscriber.Recv(0)
print("[" + string(address) + "] " + string(content) + "\n")
}
}
psenvsub:Haskell中的Pub-Sub信封订阅者
{-# LANGUAGE OverloadedStrings #-}
-- Pubsub envelope subscriber
module Main where
import Control.Monad
import qualified Data.ByteString.Char8 as BS
import System.ZMQ4.Monadic
import Text.Printf
main :: IO ()
main = runZMQ $ do
-- Prepare our subscriber
subscriber <- socket Sub
connect subscriber "tcp://localhost:5563"
subscribe subscriber "B"
forever $ do
-- Read envelope with address
address <- receive subscriber
-- Read message contents
contents <- receive subscriber
liftIO $ printf "[%s] %s\n" (BS.unpack address) (BS.unpack contents)
psenvsub:Haxe中的Pub-Sub信封订阅者
package ;
import haxe.io.Bytes;
import neko.Lib;
import org.zeromq.ZMQ;
import org.zeromq.ZMQContext;
import org.zeromq.ZMQSocket;
/**
* Pubsub envelope subscriber
*
* See: https://zguide.zeromq.cn/page:all#Pub-sub-Message-Envelopes
*
* Use with PSEnvPub
*/
class PSEnvSub
{
public static function main() {
var context:ZMQContext = ZMQContext.instance();
Lib.println("** PSEnvSub (see: https://zguide.zeromq.cn/page:all#Pub-sub-Message-Envelopes)");
var subscriber:ZMQSocket = context.socket(ZMQ_SUB);
subscriber.connect("tcp://127.0.0.1:5563");
subscriber.setsockopt(ZMQ_SUBSCRIBE, Bytes.ofString("B"));
while (true) {
var msgAddress:Bytes = subscriber.recvMsg();
// Read message contents
var msgContent:Bytes = subscriber.recvMsg();
trace (msgAddress.toString() + " " + msgContent.toString() + "\n");
}
// We never get here but clean up anyway
subscriber.close();
context.term();
}
}
psenvsub:Java中的Pub-Sub信封订阅者
package guide;
import org.zeromq.SocketType;
import org.zeromq.ZMQ;
import org.zeromq.ZMQ.Socket;
import org.zeromq.ZContext;
/**
* Pubsub envelope subscriber
*/
public class psenvsub
{
public static void main(String[] args)
{
// Prepare our context and subscriber
try (ZContext context = new ZContext()) {
Socket subscriber = context.createSocket(SocketType.SUB);
subscriber.connect("tcp://localhost:5563");
subscriber.subscribe("B".getBytes(ZMQ.CHARSET));
while (!Thread.currentThread().isInterrupted()) {
// Read envelope with address
String address = subscriber.recvStr();
// Read message contents
String contents = subscriber.recvStr();
System.out.println(address + " : " + contents);
}
}
}
}
psenvsub:Lua中的Pub-Sub信封订阅者
--
-- Pubsub envelope subscriber
--
-- Author: Robert G. Jakabosky <bobby@sharedrealm.com>
--
require"zmq"
require"zhelpers"
-- Prepare our context and subscriber
local context = zmq.init(1)
local subscriber = context:socket(zmq.SUB)
subscriber:connect("tcp://localhost:5563")
subscriber:setopt(zmq.SUBSCRIBE, "B")
while true do
-- Read envelope with address
local address = subscriber:recv()
-- Read message contents
local contents = subscriber:recv()
printf("[%s] %s\n", address, contents)
end
-- We never get here but clean up anyhow
subscriber:close()
context:term()
psenvsub:Node.js中的Pub-Sub信封订阅者
var zmq = require('zeromq')
var subscriber = zmq.socket('sub')
subscriber.on('message', function() {
var msg = [];
Array.prototype.slice.call(arguments).forEach(function(arg) {
msg.push(arg.toString());
});
console.log(msg);
})
subscriber.connect('tcp://localhost:5563')
subscriber.subscribe('B')
psenvsub:Perl中的Pub-Sub信封订阅者
# Pubsub envelope subscriber in Perl
use strict;
use warnings;
use v5.10;
use ZMQ::FFI;
use ZMQ::FFI::Constants qw(ZMQ_SUB);
# Prepare our context and subscriber
my $context = ZMQ::FFI->new();
my $subscriber = $context->socket(ZMQ_SUB);
$subscriber->connect('tcp://localhost:5563');
$subscriber->subscribe('B');
while (1) {
# Read envelope with address
my ($address, $contents) = $subscriber->recv_multipart();
say "[$address] $contents";
}
# We never get here
psenvsub:PHP中的Pub-Sub信封订阅者
<?php
/*
* Pubsub envelope subscriber
* @author Ian Barber <ian(dot)barber(at)gmail(dot)com>
*/
// Prepare our context and subscriber
$context = new ZMQContext();
$subscriber = new ZMQSocket($context, ZMQ::SOCKET_SUB);
$subscriber->connect("tcp://localhost:5563");
$subscriber->setSockOpt(ZMQ::SOCKOPT_SUBSCRIBE, "B");
while (true) {
// Read envelope with address
$address = $subscriber->recv();
// Read message contents
$contents = $subscriber->recv();
printf ("[%s] %s%s", $address, $contents, PHP_EOL);
}
// We never get here
psenvsub:Python中的Pub-Sub信封订阅者
"""
Pubsub envelope subscriber
Author: Guillaume Aubert (gaubert) <guillaume(dot)aubert(at)gmail(dot)com>
"""
import zmq
def main():
""" main method """
# Prepare our context and publisher
context = zmq.Context()
subscriber = context.socket(zmq.SUB)
subscriber.connect("tcp://localhost:5563")
subscriber.setsockopt(zmq.SUBSCRIBE, b"B")
while True:
# Read envelope with address
[address, contents] = subscriber.recv_multipart()
print(f"[{address}] {contents}")
# We never get here but clean up anyhow
subscriber.close()
context.term()
if __name__ == "__main__":
main()
psenvsub:Ruby中的Pub-Sub信封订阅者
#!/usr/bin/env ruby
require 'ffi-rzmq'
context = ZMQ::Context.new
subscriber = context.socket ZMQ::SUB
subscriber.connect "tcp://localhost:5563"
subscriber.setsockopt ZMQ::SUBSCRIBE, 'B'
loop do
# Two recv s because of the multi-part message.
address = ''
subscriber.recv_string address
content = ''
subscriber.recv_string content
puts "[#{address}] #{content}"
end
psenvsub:Rust中的Pub-Sub信封订阅者
fn main() {
let context = zmq::Context::new();
let subscriber = context.socket(zmq::SUB).unwrap();
assert!(subscriber.connect("tcp://localhost:5563").is_ok());
subscriber.set_subscribe(b"B").unwrap();
loop {
let address = subscriber.recv_string(0).unwrap().unwrap();
let contents = subscriber.recv_string(0).unwrap().unwrap();
println!("[{}] {}", address, contents);
}
}
psenvsub:Scala中的Pub-Sub信封订阅者
/*
* Pubsub envelope subscriber
*
* @author Giovanni Ruggiero
* @email giovanni.ruggiero@gmail.com
*/
import org.zeromq.ZMQ
object psenvsub {
def main(args : Array[String]) {
// Prepare our context and subscriber
val context = ZMQ.context(1)
val subscriber = context.socket(ZMQ.SUB)
subscriber.connect("tcp://localhost:5563")
subscriber.subscribe("B".getBytes())
while (true) {
// Read envelope with address
val address = new String(subscriber.recv(0))
// Read message contents
val contents = new String(subscriber.recv(0))
println(address + " : " + contents)
}
}
}
psenvsub:Tcl中的Pub-Sub信封订阅者
#
# Pubsub envelope subscriber
#
package require zmq
# Prepare our context and subscriber
zmq context context
zmq socket subscriber context SUB
subscriber connect "tcp://localhost:5563"
subscriber setsockopt SUBSCRIBE "B"
while {1} {
# Read envelope with address
set address [subscriber recv]
# Read message contents
set contents [subscriber recv]
puts "\[$address\] $contents"
}
# We never get here but clean up anyhow
subscriber close
context term
当你运行这两个程序时,订阅者应显示如下内容:
[B] We would like to see this
[B] We would like to see this
[B] We would like to see this
...
最后,高水位标记(HWM)并非精确;虽然默认情况下你最多可获得 1,000 条消息,但由于 libzmq 实现其队列的方式,实际缓冲区大小可能要低得多(低至一半)。
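设置高水位标记的草图如下(socket 为假设的、新创建的 ZeroMQ socket 变量)。注意这些选项只对设置之后建立的连接生效,因此应在 bind/connect 之前设置:

// 在 bind/connect 之前设置发送与接收高水位标记
int hwm = 1000;    // 默认值;设为 0 表示不限制(仅受可用内存约束)
zmq_setsockopt (socket, ZMQ_SNDHWM, &hwm, sizeof (hwm));
zmq_setsockopt (socket, ZMQ_RCVHWM, &hwm, sizeof (hwm));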
消息丢失问题排查 #
在使用 ZeroMQ 构建应用程序时,你会不止一次遇到这个问题:丢失你预期会收到的消息。我们整理了一张图表,逐步说明了导致此问题的最常见原因。
(图:丢失消息问题解决器)
图表总结如下:

- 在 SUB socket 上,请使用 `zmq_setsockopt()` 和 ZMQ_SUBSCRIBE 设置订阅,否则你将收不到消息。由于订阅是按前缀匹配的,如果你订阅 ""(空订阅),你将收到所有消息。

- 如果你在 PUB socket 开始发送数据*之后*才启动 SUB socket(即与 PUB socket 建立连接),你将会丢失连接建立之前已发布的所有消息。如果这是一个问题,请调整你的架构,确保 SUB socket 先启动,然后 PUB socket 再开始发布。

- 即使你同步启动 SUB 和 PUB socket,你仍可能丢失消息。这是因为内部队列直到连接真正建立后才会创建。如果你能切换绑定/连接方向,让 SUB socket 进行绑定,而 PUB socket 进行连接,你可能会发现它更符合你的预期。

- 如果你使用 REP 和 REQ socket,并且不遵守同步的 send/recv/send/recv 顺序,ZeroMQ 将报告错误,而你可能会忽略这些错误。这样一来,就会看起来像是你丢失了消息。如果你使用 REQ 或 REP,请严格遵守 send/recv 顺序,并且在实际代码中始终检查 ZeroMQ 调用是否出错。

- 如果你使用 PUSH socket,你会发现第一个连接上的 PULL socket 会获取到不公平比例的消息。消息的准确轮换分配只会在所有 PULL socket 都成功连接后才会发生,这可能需要几毫秒时间。作为 PUSH/PULL 的替代方案,对于较低的数据速率,可以考虑使用 ROUTER/DEALER 和负载均衡模式。

- 如果你在线程间共享 socket,请勿这样做。这会导致随机的异常行为和崩溃。

- 如果你使用 `zmq_inproc()`,请确保两个 socket 都在同一个上下文(context)中,否则连接方实际上会失败。另外,先绑定,再连接:`zmq_inproc()` 不像 `zmq_tcp()` 那样是非连接型的传输方式。

- 如果你使用 ROUTER socket,很容易因意外丢失消息,例如发送格式错误的身份帧(或忘记发送身份帧)。通常情况下,设置 ZMQ_ROUTER_MANDATORY 选项是一个好主意,但也请务必检查每次 send 调用的返回值。
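最后一点可以用下面的草图说明(router、identity 和 identity_size 为假设的变量,分别代表一个已创建的 ROUTER socket 和目标身份帧):启用 ZMQ_ROUTER_MANDATORY 后,向不可路由的身份发送消息会返回 -1 并将 errno 置为 EHOSTUNREACH,而不是静默丢弃。

// 启用 ZMQ_ROUTER_MANDATORY,并检查每次 send 的返回值
int mandatory = 1;
zmq_setsockopt (router, ZMQ_ROUTER_MANDATORY, &mandatory, sizeof (mandatory));
if (zmq_send (router, identity, identity_size, ZMQ_SNDMORE) == -1)
    printf ("E: 无法路由到该身份: %s\n", zmq_strerror (errno));
else
    zmq_send (router, "Hello", 5, 0);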