Data in Cassandra not consistent even with Quorum configuration
Date : March 29 2020, 07:55 AM
Hope this helps. After checking the source code against the system log, I found the root cause of the inconsistency. Three factors combine to cause the problem:

- The same record is created and updated from different nodes.
- Local system time is not synchronized accurately enough (even though we use NTP).
- Consistency level is QUORUM.

seqID  NodeA          NodeB           NodeC
1      New(.050)      New(.050)       New(.050)
2                     Delete(.030)    Delete(.030)

Because of the clock skew, the delete (seq 2) carries an earlier timestamp (.030) than the create (.050), so last-write-wins reconciliation discards the delete and the record survives.
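The timeline above can be reproduced in a few lines: Cassandra reconciles conflicting cell versions by last-write-wins on the write timestamp, so a delete stamped earlier than the create it follows is silently discarded. A minimal sketch (plain Python, not the actual Cassandra code; the `Cell` type and the `reconcile` helper are illustrative):

```python
from collections import namedtuple

# Illustrative model of a cell version: a value (None = tombstone/delete)
# plus the write timestamp assigned by the coordinating node's clock.
Cell = namedtuple("Cell", ["value", "timestamp"])

def reconcile(a, b):
    """Last-write-wins: the cell with the higher timestamp survives."""
    return a if a.timestamp >= b.timestamp else b

# Wall-clock order: the record is created, then deleted. But clock skew
# between coordinators stamps the delete *earlier* (.030 < .050) ...
create = Cell(value="row-data", timestamp=50)   # New(.050)
delete = Cell(value=None, timestamp=30)         # Delete(.030)

# ... so reconciliation keeps the create and the delete is lost.
winner = reconcile(create, delete)
print(winner.value)  # the row "reappears" even though it was deleted
```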
|
When would Cassandra not provide C, A, and P with W/R set to QUORUM?
Date : March 29 2020, 07:55 AM
With a quorum, you are unavailable (i.e., you won't accept reads or writes) if there aren't enough replicas available. You can choose to relax and read/write at lower consistency levels, gaining availability, but then you won't be consistent. There's also the case where a quorum on reads and writes guarantees that the latest "written" data is retrieved. However, if a coordinator doesn't know that the required replicas are down (i.e., gossip hasn't propagated after 2 of 3 nodes fail), it will issue the write to all 3 replicas (assuming QUORUM consistency with a replication factor of 3). The one live node will apply the write, and the other 2 won't (they're down). The write times out (it doesn't fail). A write timeout where even one node has written IS NOT a write failure; it's a write "in progress". Now say the down nodes come back up. If a client next reads that data at QUORUM, one of two things happens: either the contacted quorum includes the node that applied the write, in which case the newer value wins reconciliation (and read repair propagates it), or the quorum misses that node and the stale value is returned.
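That "one live node wrote, then the dead nodes recovered" scenario can be sketched as a toy simulation. This is hypothetical illustration code, not a driver API; it ignores digests and read repair and just enumerates which replicas a coordinator might contact:

```python
from itertools import combinations

REPLICATION_FACTOR = 3
QUORUM = REPLICATION_FACTOR // 2 + 1  # 2 of 3

# State after the timed-out write: only node A applied version 2;
# B and C were down at write time and still hold version 1.
replicas = {"A": 2, "B": 1, "C": 1}

def quorum_read(chosen):
    """A quorum read returns the newest version among the contacted replicas."""
    return max(replicas[n] for n in chosen)

# Enumerate every possible quorum the coordinator might contact.
outcomes = {quorum_read(q) for q in combinations(replicas, QUORUM)}
print(outcomes)  # {1, 2}: the read may or may not see the in-progress write
```

Quorums (A,B) and (A,C) see the new version 2; quorum (B,C) returns the stale version 1, which is exactly the "in progress" ambiguity described above.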
|
Can't understand the QUORUM of the below cassandra cluster
Date : March 29 2020, 07:55 AM
Hope this helps. You are correct, it is 7; the documentation is incorrect here. I am opening a ticket to get it corrected.
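For reference, the cluster-wide quorum is computed from the sum of the replication factors across all datacenters: quorum = (sum of RFs / 2) + 1, using integer division. A quick sketch (the datacenter names and RF values are made up, chosen so the RFs sum to 12 and yield the quorum of 7 discussed above):

```python
def quorum(rf_by_dc):
    """Cluster-wide quorum: floor(sum of all replication factors / 2) + 1."""
    return sum(rf_by_dc.values()) // 2 + 1

# Hypothetical 3-DC cluster whose RFs sum to 12 -> quorum of 7.
print(quorum({"DC1": 5, "DC2": 4, "DC3": 3}))  # 7
print(quorum({"DC1": 3}))                      # 2 (single DC, RF=3)
```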
|
Cassandra - Cannot achieve consistency level QUORUM
Date : March 29 2020, 07:55 AM
Hope this helps. I managed to solve the problem. I had to run

ALTER KEYSPACE system_auth WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 1};

because it was previously set to {'class': 'NetworkTopologyStrategy', 'DC1': '1', 'DC2': '1'}, even though this was a single-node cluster.
|
Cassandra Authentication Fail: "Unable to perform authentication: Cannot achieve consistency level QUORUM"
Date : September 20 2020, 07:00 AM
Hope this helps. My guess is that you need to run repair on all of the nodes for system_auth and, if you're running DSE, ensure that any keyspace starting with "dse" that uses SimpleStrategy is updated to NetworkTopologyStrategy with appropriate DC and RF settings, then run repair on each node for those as well. That should solve your problem. My guess is that you created your users and then later switched the keyspaces to NetworkTopologyStrategy. Once that's done, any new records will be propagated correctly, but the existing records need a repair to "fan them out"; that won't happen on its own.
|