As explained here, echo is guaranteed to be atomic only when writing sequences shorter than the smaller of PIPE_BUF and the size of the stdout buffer (which is most likely BUFSIZ). For longer sequences, you need locking. You can use lockfile-create and lockfile-check.
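The same size threshold applies to plain appends: a single short write() to a descriptor opened with O_APPEND is appended atomically, so concurrent writers don't interleave as long as each message goes out in one write call. A minimal Python sketch (the filename is illustrative):

```python
import os

# One whole message per os.write() call: with O_APPEND, the kernel
# performs the seek-to-end and write as a single atomic step, so
# short messages from concurrent writers are not interleaved.
fd = os.open("atomic.log", os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
os.write(fd, b"one whole line per write\n")
os.close(fd)
```

Note this holds only for small writes; buffered f.write() may split a long message across several underlying write calls, which is exactly where the locking advice above kicks in.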
Try the simplest solution first: each write to the log opens and closes the file. If you experience problems with this (which you probably won't), look for another solution.
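That open-per-write approach can be sketched in a few lines (the path and message are illustrative):

```python
# Simplest approach: open in append mode, write one line, close.
# Each open(..., "a") positions at end-of-file, so no writer ever
# overwrites another's data; the cost is one open/close per message.
def log_message(path, msg):
    with open(path, "a") as f:
        f.write(msg + "\n")

log_message("app.log", "worker 1 started")
```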
I would either make them log to different log files, or start using a proxy process that writes the messages to the file on behalf of the various logging processes; that way the other processes only send messages to the logging process and are not blocked on the file I/O they would otherwise perform. Or you can use a shared mutex lock on the file.
Writing to file in Python using multiple processes
While it is possible to co-ordinate writes from multiple processes to a file which is concurrently opened among them, through locking, possibly entailing range locking, possibly entailing fsync() and seek() ... while this is possible under most operating systems and under some conditions ... it's also likely to be error prone, unreliable, and subject to some odd corner cases (especially for files shared across a network such as NFS or SMB). I would suggest that this is a perfect case for using the multiprocessing.Queue class. Have one process act as the consumer, writing to the file, and have all the others acting as producers writing to the Queue rather than to the file. This will likely outperform any locking strategy you try to cobble together for yourself and it's almost certain to be far more robust.
C - Multiple processes writing to the same log file
Q: "Do I have to introduce any synchronization (locking) logic for the log file?" A: Yes. Writing simultaneously to the same file can produce race conditions and undesired behaviour.
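One common way to add that synchronization on POSIX systems is an advisory flock() around each write; this is a sketch, not the answer's own code, and it only works if every writer follows the same locking protocol:

```python
import fcntl

def append_locked(path, msg):
    # Advisory lock: all cooperating writers must take the same lock.
    with open(path, "a") as f:
        fcntl.flock(f, fcntl.LOCK_EX)   # blocks until the lock is free
        try:
            f.write(msg + "\n")
            f.flush()                   # push data out while we hold the lock
        finally:
            fcntl.flock(f, fcntl.LOCK_UN)

append_locked("shared.log", "locked write")
```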
AFAIU, the locking is done by the kernel. The reason you see the effects of locking even though you didn't ask for it is that the O_NONBLOCK file status flag is unset by default (when opening the file, I guess). Consult the section of the manual on file status flags, in particular, see operating modes and man 2 fcntl.
--- 1.py.orig	2019-07-05 14:49:13.276289018 +0300
+++ 1.py	2019-07-05 14:51:11.674727731 +0300
@@ -1,5 +1,7 @@
+import fcntl
+import os
 NUM_WORKERS = 10
 LINE_SIZE = 10000
@@ -8,6 +10,8 @@
     line = ("%d " % i) * LINE_SIZE + "\n"
     with open("file.txt", "a") as file:
+        flag = fcntl.fcntl(file.fileno(), fcntl.F_GETFL)
+        fcntl.fcntl(file.fileno(), fcntl.F_SETFL, flag | os.O_NONBLOCK)
         for _ in range(NUM_LINES):