To fix this issue, use uuidgen, or the equivalent API if your shell scripting language isn't (ba)sh. If you don't want to see single quotes around node names, replace - with _ and add a prefix, in case the first character is a decimal digit. (Sorry if I'm stating the obvious, but...) If you want the node to be discoverable, have the initialisation code publish the node to some kind of directory service node.
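For example, a minimal Python sketch of the same idea (the make_node_name helper and the "node" prefix are my own illustration, not an established API):

import uuid

# Build a node name that needs no single quotes in Erlang: replace '-'
# with '_' and prepend an alphabetic prefix, in case the UUID happens
# to start with a decimal digit.
def make_node_name(prefix="node"):
    return prefix + "_" + str(uuid.uuid4()).replace("-", "_")

print(make_node_name())  # e.g. node_6f9619ff_8b86_d011_b42d_00c04fc964ff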
Can I set multiple cookies for a given Erlang node?
Hope that helps. It turns out that the vm.args file also specifies a name, which can conflict with the name specified on the erlsrv command. I fixed it by creating a new win_vm.args without the -name parameter and changing start_erl.cmd to look for the Windows version of the file. I also changed all the -sname options to -name in the application cmd script.
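For illustration, a sketch of what the split might look like (the file contents below are assumptions; the actual flags depend on your release):

vm.args (original; sets the node name itself, conflicting with erlsrv):
-name myapp@127.0.0.1
-setcookie mycookie

win_vm.args (same flags minus -name, so the erlsrv command controls the name):
-setcookie mycookie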
How to start distributed Erlang app without starting dependencies at every node?
Fixed the issue. Will look into that further.

The question in the title is different from the question in the body :) I'll answer both. Title question: "Does distributed training produce a NN that is the average of the NNs trained within each distributed node?"
# Collect each network's ParameterDict; the nets must share the same
# architecture so that zip() pairs corresponding parameters.
p1 = net1.collect_params()
p2 = net2.collect_params()
p3 = net3.collect_params()

# Iterating a ParameterDict yields parameter names; store the
# element-wise average of net1's and net2's weights in net3.
for k1, k2, k3 in zip(p1, p2, p3):
    p3[k3].set_data(0.5 * (p1[k1].data() + p2[k2].data()))
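For completeness, a self-contained sketch of the same averaging (MXNet 1.x Gluon; the two-layer architecture and Xavier initialisation are my own illustration, not from the question):

import mxnet as mx
from mxnet.gluon import nn

def make_net():
    # All three nets share one architecture, so zip() pairs up
    # corresponding parameters position-by-position.
    net = nn.Sequential()
    net.add(nn.Dense(16, activation="relu"), nn.Dense(2))
    net.initialize(mx.init.Xavier())
    net(mx.nd.ones((1, 8)))  # forward pass triggers deferred shape inference
    return net

net1, net2, net3 = make_net(), make_net(), make_net()

p1, p2, p3 = (n.collect_params() for n in (net1, net2, net3))
for k1, k2, k3 in zip(p1, p2, p3):
    p3[k3].set_data(0.5 * (p1[k1].data() + p2[k2].data()))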