Get Topics from multiple Category and show top ten topics with responses in mysql/php
Date : March 29 2020, 07:55 AM
For the first question: use a LEFT JOIN to match the responses table's topic id column to the topics table's id, then COUNT the joined response rows and GROUP BY the topic id.
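The same LEFT JOIN / COUNT / GROUP BY pattern can be sketched in Python with an in-memory SQLite database (the table and column names `topics`, `responses`, `topic_id` are assumptions, since the question's schema is not shown; the SQL itself carries over to MySQL):

```python
import sqlite3

con = sqlite3.connect(":memory:")
cur = con.cursor()

# Hypothetical schema mirroring the question: topics belong to categories,
# responses point back at a topic via topic_id.
cur.execute("CREATE TABLE topics (id INTEGER PRIMARY KEY, category_id INTEGER, title TEXT)")
cur.execute("CREATE TABLE responses (id INTEGER PRIMARY KEY, topic_id INTEGER)")
cur.executemany("INSERT INTO topics (category_id, title) VALUES (?, ?)",
                [(1, "A"), (1, "B"), (2, "C")])
cur.executemany("INSERT INTO responses (topic_id) VALUES (?)",
                [(1,), (1,), (2,)])

# LEFT JOIN keeps topics with zero responses; COUNT(r.id) counts only
# matched response rows; LIMIT 10 gives the top ten topics.
rows = cur.execute("""
    SELECT t.id, t.title, COUNT(r.id) AS response_count
    FROM topics t
    LEFT JOIN responses r ON r.topic_id = t.id
    GROUP BY t.id
    ORDER BY response_count DESC
    LIMIT 10
""").fetchall()
```

With the sample data above, `rows` comes back ordered by response count: topic A (2 responses), B (1), then C (0), which a LEFT JOIN keeps even though it has no responses.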
|
Subscribing Multiple Topics, using wildcards or creating instances?
Date : March 29 2020, 07:55 AM
I would not recommend creating a separate connection for each subscription you want to make. Each connection is a new TCP connection and would waste resources in both your application and the broker. The normal pattern here is to use a wildcard subscription. The message callback handler is handed the topic the message came on, so, as long as you sensibly structure your topic space, there is very little overhead in routing each message appropriately in your application.
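To illustrate the routing the callback has to do, here is a simplified sketch of MQTT topic-filter matching (`+` matches one level, `#` matches all remaining levels). This is a hand-rolled illustration, not a library call, and it ignores edge cases such as `$`-prefixed system topics:

```python
def topic_matches(pattern, topic):
    """Simplified MQTT wildcard match: '+' = one level, '#' = rest."""
    p_levels = pattern.split('/')
    t_levels = topic.split('/')
    for i, seg in enumerate(p_levels):
        if seg == '#':               # multi-level wildcard swallows the rest
            return True
        if i >= len(t_levels):       # pattern longer than topic
            return False
        if seg != '+' and seg != t_levels[i]:
            return False
    return len(p_levels) == len(t_levels)
```

In a callback you would compare the incoming topic against your own patterns this way and dispatch to the right handler, all over a single connection.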
|
Creating Multiple Data Writers to multiple topics in DDS
Date : March 29 2020, 07:55 AM
When you register an instance with a DataWriter, a value of type InstanceHandle is returned, as in your code here: userHandle = Chat_ChatMessageDataWriter_register_instance(chatterbox, msg);
status = Chat_ChatMessageDataWriter_write(talker, msg, userHandle);
|
How to do fan-out in ZeroMQ? Forwarding from a set of topics to multiple clients
Date : March 29 2020, 07:55 AM
The correct way to do this is to use threads. Your main program (or main thread) handles the control-channel loop. As soon as a connection request appears, you create the upstream and downstream sockets, but hand the actual transfer off to a thread. I am not sure the code below works as-is, since I do not have a client to test against, but give it a go and see what happens; you will get the idea nevertheless. from threading import Thread
....
....
class ClientManager(Thread):
    def __init__(self, ups, downs):
        super(ClientManager, self).__init__()
        self.upstream_socket = ups
        self.downstream_socket = downs

    def run(self):
        # Forward every multipart message from upstream to downstream.
        while True:
            _parts = self.upstream_socket.recv_multipart()
            self.downstream_socket.send_multipart(_parts)

if __name__ == '__main__':
    print("Binding control channel socket on {}".format('tcp://*:{}'.format(control_channel_port)))
    control_channel = bind_control_channel()
    while True:
        request = control_channel.recv_json()
        print("Received request {}".format(request))
        if should_grant(request):
            (downstream_sock, downstream_port) = bind_downstream()
            print("Downstream socket open on {}".format('tcp://*:{}'.format(downstream_port)))
            print("Connecting to upstream on {}".format(upstream_addr))
            upstream_sock = connect_upstream(request['topics'])
            control_channel.send_json({'status': 'ok', 'port': downstream_port})
            _nct = ClientManager(upstream_sock, downstream_sock)
            _nct.daemon = True
            _nct.start()
        else:
            control_channel.send_json({'status': 'rejected'})
|
Kafka compression, how to limit it to some listed topics? How to use compressed.topics property using clients produce AP
Date : March 29 2020, 07:55 AM
Compression is controlled by several parameters in Kafka, at three levels:

On the brokers, compression.type configures the default for the entire cluster. It accepts the values gzip, snappy, lz4, uncompressed or producer. The first three are different compression codecs (snappy and lz4 being the recommended ones), uncompressed is self-explanatory, and the default, producer, means the broker keeps whatever compression (or none) the producer decided to use. In that case Kafka just sees the messages as byte arrays and does not try to decode them.

At the topic level on the broker, you can specify the same values: gzip, snappy, lz4, uncompressed or producer. They behave exactly the same, but act as overrides specifically for the topic you set them on.

On the producer side, in code, you can also set compression.type, and the possible values are gzip, snappy, lz4 and none, where none is the default.
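The interaction between the three levels can be sketched as a small resolution function. The config key `compression.type` is Kafka's real name for these settings, but the topic names, example values, and the helper itself are illustrative only:

```python
# Hypothetical settings mirroring Kafka's three configuration levels.
broker_defaults = {"compression.type": "producer"}   # cluster-wide default
topic_overrides = {
    "logs.compressed": {"compression.type": "gzip"}  # per-topic override
}
producer_config = {"compression.type": "snappy"}     # set in producer code

def effective_compression(topic, producer_codec):
    """Resolve which codec the broker stores for a topic (simplified).

    A topic-level override wins over the broker default; the special
    value 'producer' means: keep whatever the producer sent.
    """
    setting = topic_overrides.get(topic, broker_defaults)["compression.type"]
    return producer_codec if setting == "producer" else setting
```

So with these example settings, messages to `logs.compressed` end up gzip-compressed regardless of the producer, while any other topic keeps the producer's codec.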
|