- Isn't cassandra supposed to be the data properties? It should have data in the path, shouldn't it?
- The error says that your app is trying to connect to localhost:9042, which is wrong, because it should be the hostname of the cassandra container. Your contact-point setting should be double-checked.
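A sketch of that suggested fix, assuming a Spring Boot version that reads these settings under spring.data.cassandra (newer Boot versions use spring.cassandra instead), with the values taken from the poster's files below:
spring:
  data:
    cassandra:
      contact-points: urls-storage-db   # the cassandra container's hostname on the compose network
      port: 9042
      local-datacenter: datacenter1
      keyspace-name: links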
commented Pull Request #2896 on apache/cassandra
Given test-latest will test tries+oa, we could remove all oa-specific test jobs to spare CI cycles, as they're redundant now?
Also, shouldn't this ticket enable all dtests, both python and jvm, with latest?
My application depends on Cassandra, so I have put everything in a docker-compose file. Using plain docker as given here, I'm able to run it fine locally, but when I try the same configuration in compose I keep getting the error
top stacktrace:
[s0] Error connecting to Node(endPoint=localhost/127.0.0.1:9042, hostId=null, hashCode=8a90356), trying next node (ConnectionInitException: [s0|control|connecting...] Protocol initialization request, step 1 (OPTIONS): failed to send request (io.netty.channel.StacklessClosedChannelException))
end of the stacktrace:
Suppressed: io.netty.channel.AbstractChannel$AnnotatedConnectException: Connection refused: localhost/127.0.0.1:9042
Caused by: java.net.ConnectException: Connection refused
at java.base/sun.nio.ch.Net.pollConnect(Native Method)
at java.base/sun.nio.ch.Net.pollConnectNow(Net.java:672)
at java.base/sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:946)
at io.netty.channel.socket.nio.NioSocketChannel.doFinishConnect(NioSocketChannel.java:337)
at io.netty.channel.nio.AbstractNioChannel$AbstractNioUnsafe.finishConnect(AbstractNioChannel.java:334)
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:776)
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:724)
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:650)
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:562)
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:997)
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74)
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.base/java.lang.Thread.run(Thread.java:840)
Caused by: io.netty.channel.StacklessClosedChannelException: null
at io.netty.channel.AbstractChannel$AbstractUnsafe.flush0()(Unknown Source)
my docker-compose:
version: "3.8"
services:
  urls-storage:
    image: cassandra:latest
    container_name: urls-storage-db
    command: [ -f ]
    environment:
      CASSANDRA_CLUSTER_NAME: app
    ports:
      - "9042:9042"
      - "9160:9160"
      - "7199:7199"
    healthcheck:
      test: [ "CMD-SHELL", "[ $$(nodetool statusgossip) = running ]" ]
      interval: 10s
      timeout: 5s
      retries: 3
    volumes:
      - cassandra_data:/var/lib/cassandra
  app:
    build: .
    container_name: url-shortener-service
    ports:
      - '8083:8083'
    depends_on:
      urls-storage:
        condition: service_healthy
volumes:
  cassandra_data:
    driver: local
application.yaml:
server:
  port: 8083
spring:
  application:
    name: url-shortener-service
  cassandra:
    port: 9042
    keyspace-name: links
    schema-action: create-if-not-exists
    connection:
      connect-timeout: 30s
      init-query-timeout: 10s
    request:
      timeout: 10s
    contact-points: urls-storage-db
    local-datacenter: datacenter1
I have even tried to hard-code the contact point, but to no avail. I'm able to access Cassandra inside the compose container, but my application can't seem to reach it.
opened Pull Request #254 on apache/cassandra-website
#254 BLOG - Apache Cassandra 5.0 Features: Storage Attached Indexes
opened Pull Request #78 on apache/cassandra-sidecar
#78 CASSANDRASC-76: Sidecar does not handle keyspaces and table names wit…
opened Pull Request #2899 on apache/cassandra
#2899 Fix Paxos V2 prepare response serialization
opened Pull Request #2898 on apache/cassandra
#2898 CASSANDRA-19021 5.0 make mmap_index_only default disk_access_mode
opened Pull Request #2897 on apache/cassandra
#2897 CASSANDRA-18757: UnifiedCompactionTask is incorrectly setting keepOriginals
commented Pull Request #2613 on apache/cassandra
@belliottsmith Hi Benedict, have you had a chance to look at the comments that have been addressed? :-)
It's hard to say what would be ideal without knowing a lot more about your use case. Most likely LeveledCompactionStrategy, from what you explained here (it is actually a better default than STCS). Conservatively, stay around 50% disk utilization for STCS; for LCS you can go higher, like 70%.
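For reference, a sketch of how one might switch an existing table to LCS from cqlsh (keyspace and table names are placeholders; 160 MB is just the strategy's usual default target size):
ALTER TABLE myks.mytable
  WITH compaction = { 'class': 'LeveledCompactionStrategy', 'sstable_size_in_mb': 160 };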
Mentioned @cassandra
Join @Cassandra users from companies like Netflix, Bloomberg, Walmart and Adobe and hear about the latest innovations in the massively scalable open-source NoSQL database.
Save 20% with code CS23DS20: dtsx.io/479QNeN
opened Pull Request #2896 on apache/cassandra
#2896 CASSANDRA-18753: Provide cassandra_latest.yaml with suitable defaults for new users
commented Pull Request #2777 on apache/cassandra
What do you think about tying it to storage_compatibility_mode? I mean, if storage_compatibility_mode is 4, then we use legacy, otherwise we use auto.
Unless we have a clear understanding that SCM is not just compatibility but also default switching, I would not do that. That decision was something I was thinking of in CASSANDRA-18753, but it didn't appear to be something other people wanted, and there is no longer time to implement it.
Direct IO will be part of the "latest" configuration provided by CASSANDRA-18753.
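A sketch of what the proposed tie-in would mean in cassandra.yaml terms (value names taken from the comment above; this is the idea under discussion, not a settled interface):
# with storage_compatibility_mode: CASSANDRA_4, the default stays at the pre-5.0 behaviour:
disk_access_mode: legacy
# with any other storage_compatibility_mode, the default would become:
# disk_access_mode: auto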
In my use case, the whole data of a partition gets deleted and inserted again. Can you please suggest which compaction strategy I should use?
I have created the table with the default SizeTieredCompactionStrategy.
Also, how much free disk space should I keep?
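For context, the pattern described sounds roughly like this (placeholder names; each cycle writes a partition-level tombstone, which is why the choice of compaction strategy matters here):
DELETE FROM myks.mytable WHERE pk = 'p1';  -- drop the whole partition in one tombstone
INSERT INTO myks.mytable (pk, ck, val) VALUES ('p1', 1, 'fresh');  -- then reload it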
commented Pull Request #2777 on apache/cassandra
The mailing list decided that by default we should use a legacy disk access mode (the same as was selected in Cassandra 4 and prior). What do you think about tying it to storage_compatibility_mode? I mean, if storage_compatibility_mode is 4, then we use legacy, otherwise we use auto.
cc @michaelsembwever
commented Pull Request #2777 on apache/cassandra
I've pushed one more commit with a DatabaseDescriptor test case.
commented Pull Request #2777 on apache/cassandra
@amitdpawar @Maxwell-Guo I've created a separate PR to make it easier for reviewing https://github.com/apache/cassandra/pull/2894
While implementing a unit test I realized we need more refactoring to make it a bit more testable. I hope you will find my changes useful and acceptable - please have a look.
@Maxwell-Guo - I haven't overwritten the branch, so it has only that one additional commit compared to what you have already reviewed
@stef1927 this is already refactored
You're right, if you have a two-node cluster and a keyspace with replication_factor 2, then indeed every piece of data will be in both nodes, every write will be "eventually" replicated to both. If you use CL=ALL you can be sure this has happened by the time that the write completed - but even if you do CL=ONE the write will still happen eventually on the second node - usually very quickly, but after a repair (which you said you did) you can be sure the same data appears on both nodes, and both nodes should have exactly the same number of rows.
Yet, you said "I see node 1 has about 213,435,988 rows and node 2 only 206,916,617 rows." How sure are you about these numbers? How did you come by them? Did you really scan the table (how did you limit the scan to just one node?), or did you use some sort of "size estimate" feature? If it's the latter, you should be aware that on both Cassandra and Scylla, this is just an estimate. It turns out that this estimate is even less accurate and trustworthy in ScyllaDB than in Cassandra (see https://github.com/scylladb/scylladb/issues/9083), but in both of them, the question of whether or not you did a major compaction (nodetool compact) affects the estimate. You said that you "flushed and repaired" the tables but not that you compacted them.
In any case I want to emphasize again that even though compaction affects the estimate of the number of partitions, it doesn't have any effect on the correctness of the data or on the exact number of rows you see if you scan the entire table with SELECT * FROM table or count them with SELECT COUNT(*) FROM table. A repair might be needed if hinted handoff wasn't enabled and your cluster had connectivity problems during the write - but since you said you did repair, you should be good.
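For reference, both exact checks mentioned above can be run from cqlsh like this (keyspace and table names are placeholders; COUNT(*) scans the whole table and may need a raised client timeout):
CONSISTENCY ALL;                    -- make the read touch every replica
SELECT COUNT(*) FROM myks.mytable;  -- an exact count, unlike the size estimates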
Pull Request #2892 on apache/cassandra merged by aweisberg
#2892 Ninja fix SlowMessageFuzzTest and InvertedIndexSearcherTest