commented Pull Request #2556 on apache/cassandra
I reverted the support for != and NOT IN on indexed columns for now, as the current index search does not allow multiple exclusions.
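For context, a hypothetical sketch of the query shapes this refers to, assuming a table t with a secondary index on column v (table and column names are illustrative, not taken from the patch):

-- illustrative only: exclusion predicates on an indexed column
SELECT * FROM t WHERE v != 1;
SELECT * FROM t WHERE v NOT IN (1, 2, 3);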
Asked: Statefulset Kubernetes : volumeMounts...
In https://kubernetes.io/docs/tutorials/stateful-application/cassandra/ we read in the /application/cassandra/cassandra-statefulset.yaml sample:
So... I deployed an nfs-subdir-external-provisioner following these instructions: https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/blob/master/charts/nfs-subdir-external-provisioner/README.md#install-multiple-provisioners, specifying the volume name used inside the pod through nfs.volumeName:
root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-1-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
> --set nfs.server=38.242.249.121 \
> --set nfs.path=/srv/shared-k8s-eu-1-worker-1 \
> --set storageClass.name=k8s-eu-1-worker-1 \
> --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-1 \
> --set nfs.volumeName=k8s-eu-1-worker-1-nfs-v
NAME: k8s-eu-1-worker-1-nfs-subdir-external-provisioner
LAST DEPLOYED: Tue Nov 7 17:14:42 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
root@k8s-eu-1-master:~# helm ls
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
k8s-eu-1-worker-1-nfs-subdir-external-provisioner default 1 2023-11-07 17:14:42.197847444 +0100 CET deployed nfs-subdir-external-provisioner-4.0.18 4.0.2
root@k8s-eu-1-master:~# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
k8s-eu-1-worker-1-nfs-subdir-external-provisioner 1/1 1 1 2m9s
root@k8s-eu-1-master:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
k8s-eu-1-worker-1-nfs-subdir-external-provisioner-79fff4ff2qx7k 1/1 Running 0 2m27s
Output of kubectl describe pod:
root@k8s-eu-1-master:~# kubectl describe pod k8s-eu-1-worker-1-nfs-subdir-external-provisioner-79fff4ff2qx7k
Name: k8s-eu-1-worker-1-nfs-subdir-external-provisioner-79fff4ff2qx7k
Namespace: default
Priority: 0
Service Account: k8s-eu-1-worker-1-nfs-subdir-external-provisioner
Node: k8s-eu-1-worker-2/yy.yyy.yyy.yyy
Start Time: Tue, 07 Nov 2023 17:14:42 +0100
Labels: app=nfs-subdir-external-provisioner
pod-template-hash=79fff4ff6
release=k8s-eu-1-worker-1-nfs-subdir-external-provisioner
Annotations: cni.projectcalico.org/containerID: 2c7d048ecf0861c60a471e93e41d20dca0c7c58c20a3369ed1463820e898d1a7
cni.projectcalico.org/podIP: 192.168.236.18/32
cni.projectcalico.org/podIPs: 192.168.236.18/32
Status: Running
IP: 192.168.236.18
IPs:
IP: 192.168.236.18
Controlled By: ReplicaSet/k8s-eu-1-worker-1-nfs-subdir-external-provisioner-79fff4ff6
Containers:
nfs-subdir-external-provisioner:
Container ID: containerd://c4afd4f56bdb2d69aa2be23d6d47e843ceaa1f823459c7cffbf5dc859f59e44b
Image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
Image ID: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner@sha256:63d5e04551ec8b5aae83b6f35938ca5ddc50a88d85492d9731810c31591fa4c9
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 07 Nov 2023 17:14:43 +0100
Ready: True
Restart Count: 0
Environment:
PROVISIONER_NAME: k8s-sigs.io/k8s-eu-1-worker-1
NFS_SERVER: xx.xxx.xxx.xxx
NFS_PATH: /srv/shared-k8s-eu-1-worker-1
Mounts:
/persistentvolumes from k8s-eu-1-worker-1-nfs-v (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-knxw8 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
k8s-eu-1-worker-1-nfs-v: // <--- the volume name set via nfs.volumeName
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: xx.xxx.xxx.xxx
Path: /srv/shared-k8s-eu-1-worker-1
ReadOnly: false
kube-api-access-knxw8:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 2m49s default-scheduler Successfully assigned default/k8s-eu-1-worker-1-nfs-subdir-external-provisioner-79fff4ff2qx7k to k8s-eu-1-worker-2
Normal Pulled 2m49s kubelet Container image "registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2" already present on machine
Normal Created 2m49s kubelet Created container nfs-subdir-external-provisioner
Normal Started 2m49s kubelet Started container nfs-subdir-external-provisioner
In cassandra-statefulset.yaml I've set the volumeMounts name to the pod's volume name, "k8s-eu-1-worker-1-nfs-v":
volumeMounts:
- name: k8s-eu-1-worker-1-nfs-v
mountPath: /srv/shared-k8s-eu-1-worker-1
This is the entire cassandra-statefulset.yaml :
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: cassandra
labels:
app: cassandra
spec:
serviceName: cassandra
replicas: 3
selector:
matchLabels:
app: cassandra
template:
metadata:
labels:
app: cassandra
spec:
terminationGracePeriodSeconds: 1800
containers:
- name: cassandra
image: gcr.io/google-samples/cassandra:v13
imagePullPolicy: Always
ports:
- containerPort: 7000
name: intra-node
- containerPort: 7001
name: tls-intra-node
- containerPort: 7199
name: jmx
- containerPort: 9042
name: cql
resources:
limits:
cpu: "500m"
memory: 1Gi
requests:
cpu: "500m"
memory: 1Gi
securityContext:
capabilities:
add:
- IPC_LOCK
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- nodetool drain
env:
- name: MAX_HEAP_SIZE
value: 512M
- name: HEAP_NEWSIZE
value: 100M
- name: CASSANDRA_SEEDS
value: "cassandra-0.cassandra.default.svc.cluster.local"
- name: CASSANDRA_CLUSTER_NAME
value: "K8Demo"
- name: CASSANDRA_DC
value: "DC1-K8Demo"
- name: CASSANDRA_RACK
value: "Rack1-K8Demo"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
readinessProbe:
exec:
command:
- /bin/bash
- -c
- /ready-probe.sh
initialDelaySeconds: 15
timeoutSeconds: 5
# These volume mounts are persistent. They are like inline claims,
# but not exactly because the names need to match exactly one of
# the stateful pod volumes.
volumeMounts:
- name: k8s-eu-1-worker-1-nfs-v
mountPath: /srv/shared-k8s-eu-1-worker-1
# These are converted to volume claims by the controller
# and mounted at the paths mentioned above.
# do not use these in production until ssd GCEPersistentDisk or other ssd pd
volumeClaimTemplates:
- metadata:
name: k8s-eu-1-worker-1
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: k8s-eu-1-worker-1
resources:
requests:
storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: k8s-eu-1-worker-1
provisioner: k8s-sigs.io/k8s-eu-1-worker-1
parameters:
#type: pd-ss
When I apply this configuration I get: spec.containers[0].volumeMounts[0].name: Not found: "k8s-eu-1-worker-1-nfs-v"
root@k8s-eu-1-master:~# kubectl apply -f ./cassandraStatefulApp/cassandra-statefulset.yaml
statefulset.apps/cassandra created
Warning: resource storageclasses/k8s-eu-1-worker-1 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
The StorageClass "k8s-eu-1-worker-1" is invalid: parameters: Forbidden: updates to parameters are forbidden.
root@k8s-eu-1-master:~# kubectl get statefulsets
NAME READY AGE
cassandra 0/3 21s
root@k8s-eu-1-master:~# kubectl describe statefulsets cassandra
Name: cassandra
Namespace: default
CreationTimestamp: Tue, 07 Nov 2023 17:33:40 +0100
Selector: app=cassandra
Labels: app=cassandra
Annotations: <none>
Replicas: 3 desired | 0 total
Update Strategy: RollingUpdate
Partition: 0
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=cassandra
Containers:
cassandra:
Image: gcr.io/google-samples/cassandra:v13
Ports: 7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
Limits:
cpu: 500m
memory: 1Gi
Requests:
cpu: 500m
memory: 1Gi
Readiness: exec [/bin/bash -c /ready-probe.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
Environment:
MAX_HEAP_SIZE: 512M
HEAP_NEWSIZE: 100M
CASSANDRA_SEEDS: cassandra-0.cassandra.default.svc.cluster.local
CASSANDRA_CLUSTER_NAME: K8Demo
CASSANDRA_DC: DC1-K8Demo
CASSANDRA_RACK: Rack1-K8Demo
POD_IP: (v1:status.podIP)
Mounts:
/srv/shared-k8s-eu-1-worker-1 from k8s-eu-1-worker-1-nfs-v (rw)
Volumes: <none>
Volume Claims:
Name: k8s-eu-1-worker-1
StorageClass: k8s-eu-1-worker-1
Labels: <none>
Annotations: <none>
Capacity: 1Gi
Access Modes: [ReadWriteOnce]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 20s (x13 over 41s) statefulset-controller create Pod cassandra-0 in StatefulSet cassandra failed error: Pod "cassandra-0" is invalid: spec.containers[0].volumeMounts[0].name: Not found: "k8s-eu-1-worker-1-nfs-v"
What am I doing wrong? What is the correct way to specify the volumeMounts name in the statefulset.yaml file?
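For reference, a minimal sketch of the naming rule the error points at, assuming the intent is to mount the claim created by the volumeClaimTemplate: in a StatefulSet the volumeMounts name must match either a pod-level volume or the metadata.name of a volumeClaimTemplate, so it could reuse the template name already defined in this same file:

  template:
    spec:
      containers:
      - name: cassandra
        # ...
        volumeMounts:
        - name: k8s-eu-1-worker-1          # matches the volumeClaimTemplate name below
          mountPath: /srv/shared-k8s-eu-1-worker-1
  volumeClaimTemplates:
  - metadata:
      name: k8s-eu-1-worker-1              # the "stateful pod volume" name the mount refers to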
First, I would not change the parameters in cassandra-env.sh. Instead, use the jvm.options file.
Second, I would probably not move to 128G heap size, that's probably too large.
Third, the newsize and max heap size should be the same, otherwise, you'll get expansion and that could cause perf issues.
Fourth, you'll have to understand what's happening before you increase the heap size. Why increase the heap size? Are you seeing allocation errors because the heap is exhausted? Are you seeing long old gen GC pauses?
In jvm.options, set -Xmx and -Xms instead of messing with cassandra-env.sh.
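For illustration only, a minimal sketch of the kind of jvm.options entries meant here (the heap value below is an arbitrary placeholder, not a recommendation for this cluster):

# jvm.options -- keep min and max heap equal to avoid heap resizing
-Xms31G
-Xmx31G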
I have a Cassandra 5-node cluster with 256GB of memory. I am facing some performance issues on read operations, so I decided to increase my heap size as it was using the default. I updated the cassandra-env file with MAX_HEAP_SIZE="128G" & HEAP_NEWSIZE="32G".
I found slightly better performance for read queries, but I saw messages like "Some operations were slow" and a garbage collection event in the logs. It seems that increasing the heap size might have led to increased garbage collection activity.
Could you please assist me in adjusting the other parameters as well, with respect to MAX_HEAP_SIZE="128G"?
opened Pull Request #2874 on apache/cassandra
#2874 CASSANDRA-18911 trunk KeyCacheTest is failing with sstable_preemptive_open_interval < 0
opened Pull Request #2873 on apache/cassandra
#2873 CASSANDRA-19002 3.11 add hints maker
opened Pull Request #2872 on apache/cassandra
#2872 CASSANDRA-19002 trunk hints and commit upgrade tests
opened Pull Request #2871 on apache/cassandra
#2871 CASSANDRA-19002 4.1 add hints maker
opened Pull Request #2870 on apache/cassandra
#2870 CASSANDRA-19002 4.0 add hints maker
commented Pull Request #251 on apache/cassandra-website
After a few trials, I'm here to ask for your help.
I'm trying to deploy the Cassandra stateful app (https://kubernetes.io/docs/tutorials/stateful-application/cassandra/) but, clearly, I'm making some mistakes.
This is my Kubernetes Cluster:
root@k8s-eu-1-master:~# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-eu-1-master Ready control-plane 41h v1.28.2
k8s-eu-1-worker-1 Ready <none> 41h v1.28.2
k8s-eu-1-worker-2 Ready <none> 41h v1.28.2
k8s-eu-1-worker-3 Ready <none> 41h v1.28.2
k8s-eu-1-worker-4 Ready <none> 41h v1.28.2
k8s-eu-1-worker-5 Ready <none> 41h v1.28.2
with NFS shared folders:
root@k8s-eu-1-master:~# df -h | grep /srv/
xx.xxx.xxx.xxx:/srv/shared-k8s-eu-1-worker-1 391G 6.1G 365G 2% /mnt/data
yy.yyy.yyy.yyy:/srv/shared-k8s-eu-1-worker-2 391G 6.1G 365G 2% /mnt/data
zz.zzz.zzz.zz:/srv/shared-k8s-eu-1-worker-3 391G 6.1G 365G 2% /mnt/data
pp.ppp.ppp.pp:/srv/shared-k8s-eu-1-worker-4 391G 6.1G 365G 2% /mnt/data
qq.qqq.qqq.qqq:/srv/shared-k8s-eu-1-worker-5 391G 6.1G 365G 2% /mnt/data
I deployed the nfs-subdir-external-provisioner (https://github.com/kubernetes-sigs/nfs-subdir-external-provisioner/blob/master/charts/nfs-subdir-external-provisioner/README.md#install-multiple-provisioners), specifying a different storageClassName for each provisioner:
root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-1-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
> --set nfs.server=xx.xxx.xxx.xxx \
> --set nfs.path=/srv/shared-k8s-eu-1-worker-1 \
> --set storageClass.name=k8s-eu-1-worker-1 \
> --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-1
NAME: k8s-eu-1-worker-1-nfs-subdir-external-provisioner
LAST DEPLOYED: Mon Nov 6 17:28:58 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-2-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
> --set nfs.server=yy.yyy.yyy.yyy \
> --set nfs.path=/srv/shared-k8s-eu-1-worker-2 \
> --set storageClass.name=k8s-eu-1-worker-2 \
> --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-2
NAME: k8s-eu-1-worker-2-nfs-subdir-external-provisioner
LAST DEPLOYED: Mon Nov 6 17:31:15 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-3-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
> --set nfs.server=zz.zzz.zzz.zz \
> --set nfs.path=/srv/shared-k8s-eu-1-worker-3 \
> --set storageClass.name=k8s-eu-1-worker-3 \
> --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-3
NAME: k8s-eu-1-worker-3-nfs-subdir-external-provisioner
LAST DEPLOYED: Mon Nov 6 17:39:25 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-4-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
> --set nfs.server=pp.ppp.ppp.pp \
> --set nfs.path=/srv/shared-k8s-eu-1-worker-4 \
> --set storageClass.name=k8s-eu-1-worker-4 \
> --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-4
NAME: k8s-eu-1-worker-4-nfs-subdir-external-provisioner
LAST DEPLOYED: Tue Nov 7 08:25:33 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
root@k8s-eu-1-master:~# helm install k8s-eu-1-worker-5-nfs-subdir-external-provisioner nfs-subdir-external-provisioner/nfs-subdir-external-provisioner \
> --set nfs.server=qq.qqq.qqq.qqq \
> --set nfs.path=/srv/shared-k8s-eu-1-worker-5 \
> --set storageClass.name=k8s-eu-1-worker-5 \
> --set storageClass.provisionerName=k8s-sigs.io/k8s-eu-1-worker-5
NAME: k8s-eu-1-worker-5-nfs-subdir-external-provisioner
LAST DEPLOYED: Mon Nov 6 17:49:21 2023
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
root@k8s-eu-1-master:~# kubectl get deployments
NAME READY UP-TO-DATE AVAILABLE AGE
k8s-eu-1-worker-1-nfs-subdir-external-provisioner 1/1 1 1 16h
k8s-eu-1-worker-2-nfs-subdir-external-provisioner 1/1 1 1 16h
k8s-eu-1-worker-3-nfs-subdir-external-provisioner 1/1 1 1 16h
k8s-eu-1-worker-4-nfs-subdir-external-provisioner 1/1 1 1 85m
k8s-eu-1-worker-5-nfs-subdir-external-provisioner 1/1 1 1 16h
root@k8s-eu-1-master:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
k8s-eu-1-worker-1-nfs-subdir-external-provisioner-74787c8dx8f4j 1/1 Running 0 16h
k8s-eu-1-worker-2-nfs-subdir-external-provisioner-ffdfb98dk9mrw 1/1 Running 0 16h
k8s-eu-1-worker-3-nfs-subdir-external-provisioner-7c9797c8jpzkv 1/1 Running 0 16h
k8s-eu-1-worker-4-nfs-subdir-external-provisioner-6bd84f54b2xx2 1/1 Running 0 86m
k8s-eu-1-worker-5-nfs-subdir-external-provisioner-84976cd7lttsn 1/1 Running 0 16h
These are the PersistentVolumeClaims :
root@k8s-eu-1-master:~# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
k8s-eu-1-worker-1-cassandra-0 Bound pvc-22d85482-9103-43b5-a93e-a52e70bdbd16 1Gi RWO k8s-eu-1-worker-1 18h
k8s-eu-1-worker-2-cassandra-0 Bound pvc-5118d0ae-b6fa-476e-b22d-a5bb3247f7fb 1Gi RWO k8s-eu-1-worker-2 18h
k8s-eu-1-worker-3-cassandra-0 Bound pvc-7a7160ea-0bf6-42de-9b35-3464930ea7d0 1Gi RWO k8s-eu-1-worker-3 18h
k8s-eu-1-worker-4-cassandra-0 Bound pvc-b7934357-6d6c-47a8-b644-28b9a0ad58b5 1Gi RWO k8s-eu-1-worker-4 18h
k8s-eu-1-worker-5-cassandra-0 Bound pvc-d587623f-f62f-4f80-b6c2-39104c568fda 1Gi RWO k8s-eu-1-worker-5 18h
and the PersistentVolumes (they seem to correspond one-to-one with the PVCs):
root@k8s-eu-1-master:~# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-22d85482-9103-43b5-a93e-a52e70bdbd16 1Gi RWO Delete Bound default/k8s-eu-1-worker-1-cassandra-0 k8s-eu-1-worker-1 18h
pvc-5118d0ae-b6fa-476e-b22d-a5bb3247f7fb 1Gi RWO Delete Bound default/k8s-eu-1-worker-2-cassandra-0 k8s-eu-1-worker-2 18h
pvc-7a7160ea-0bf6-42de-9b35-3464930ea7d0 1Gi RWO Delete Bound default/k8s-eu-1-worker-3-cassandra-0 k8s-eu-1-worker-3 18h
pvc-b7934357-6d6c-47a8-b644-28b9a0ad58b5 1Gi RWO Delete Bound default/k8s-eu-1-worker-4-cassandra-0 k8s-eu-1-worker-4 18h
pvc-d587623f-f62f-4f80-b6c2-39104c568fda 1Gi RWO Delete Bound default/k8s-eu-1-worker-5-cassandra-0 k8s-eu-1-worker-5 18h
I tried to modify the cassandra-statefulset.yaml file ( https://kubernetes.io/docs/tutorials/stateful-application/cassandra/ ) :
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: cassandra
labels:
app: cassandra
spec:
serviceName: cassandra
replicas: 3
selector:
matchLabels:
app: cassandra
template:
metadata:
labels:
app: cassandra
spec:
terminationGracePeriodSeconds: 1800
containers:
- name: cassandra
image: gcr.io/google-samples/cassandra:v13
imagePullPolicy: Always
ports:
- containerPort: 7000
name: intra-node
- containerPort: 7001
name: tls-intra-node
- containerPort: 7199
name: jmx
- containerPort: 9042
name: cql
resources:
limits:
cpu: "500m"
memory: 1Gi
requests:
cpu: "500m"
memory: 1Gi
securityContext:
capabilities:
add:
- IPC_LOCK
lifecycle:
preStop:
exec:
command:
- /bin/sh
- -c
- nodetool drain
env:
- name: MAX_HEAP_SIZE
value: 512M
- name: HEAP_NEWSIZE
value: 100M
- name: CASSANDRA_SEEDS
value: "cassandra-0.cassandra.default.svc.cluster.local"
- name: CASSANDRA_CLUSTER_NAME
value: "K8Demo"
- name: CASSANDRA_DC
value: "DC1-K8Demo"
- name: CASSANDRA_RACK
value: "Rack1-K8Demo"
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
readinessProbe:
exec:
command:
- /bin/bash
- -c
- /ready-probe.sh
initialDelaySeconds: 15
timeoutSeconds: 5
# These volume mounts are persistent. They are like inline claims,
# but not exactly because the names need to match exactly one of
# the stateful pod volumes.
volumeMounts:
- name: k8s-eu-1-worker-1-cassandra-0
mountPath: /srv/shared-k8s-eu-1-worker-1
- name: k8s-eu-1-worker-2-cassandra-0
mountPath: /srv/shared-k8s-eu-1-worker-2
- name: k8s-eu-1-worker-3-cassandra-0
mountPath: /srv/shared-k8s-eu-1-worker-3
- name: k8s-eu-1-worker-4-cassandra-0
mountPath: /srv/shared-k8s-eu-1-worker-4
- name: k8s-eu-1-worker-5-cassandra-0
mountPath: /srv/shared-k8s-eu-1-worker-5
# These are converted to volume claims by the controller
# and mounted at the paths mentioned above.
# do not use these in production until ssd GCEPersistentDisk or other ssd pd
volumeClaimTemplates:
- metadata:
name: k8s-eu-1-worker-1
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: k8s-eu-1-worker-1
resources:
requests:
storage: 1Gi
- metadata:
name: k8s-eu-1-worker-2
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: k8s-eu-1-worker-2
resources:
requests:
storage: 1Gi
- metadata:
name: k8s-eu-1-worker-3
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: k8s-eu-1-worker-3
resources:
requests:
storage: 1Gi
- metadata:
name: k8s-eu-1-worker-4
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: k8s-eu-1-worker-4
resources:
requests:
storage: 1Gi
- metadata:
name: k8s-eu-1-worker-5
spec:
accessModes: [ "ReadWriteOnce" ]
storageClassName: k8s-eu-1-worker-5
resources:
requests:
storage: 1Gi
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: k8s-eu-1-worker-1
provisioner: k8s-sigs.io/k8s-eu-1-worker-1
parameters:
type: pd-ssd
metadata:
name: k8s-eu-1-worker-2
provisioner: k8s-sigs.io/k8s-eu-1-worker-2
parameters:
type: pd-ssd
metadata:
name: k8s-eu-1-worker-3
provisioner: k8s-sigs.io/k8s-eu-1-worker-3
parameters:
type: pd-ssd
metadata:
name: k8s-eu-1-worker-4
provisioner: k8s-sigs.io/k8s-eu-1-worker-4
parameters:
type: pd-ssd
metadata:
name: k8s-eu-1-worker-5
provisioner: k8s-sigs.io/k8s-eu-1-worker-5
parameters:
type: pd-ssd
---
But, clearly, I'm making some mistakes :
root@k8s-eu-1-master:~# kubectl apply -f ./cassandraStatefulApp/cassandra-statefulset.yaml
statefulset.apps/cassandra created
Warning: resource storageclasses/k8s-eu-1-worker-5 is missing the kubectl.kubernetes.io/last-applied-configuration annotation which is required by kubectl apply. kubectl apply should only be used on resources created declaratively by either kubectl create --save-config or kubectl apply. The missing annotation will be patched automatically.
The StorageClass "k8s-eu-1-worker-5" is invalid: parameters: Forbidden: updates to parameters are forbidden.
root@k8s-eu-1-master:~# kubectl get statefulsets
NAME READY AGE
cassandra 0/3 8s
root@k8s-eu-1-master:~# kubectl get statefulsets
NAME READY AGE
cassandra 0/3 8s
root@k8s-eu-1-master:~#
root@k8s-eu-1-master:~# kubectl describe statefulsets cassandra
Name: cassandra
Namespace: default
CreationTimestamp: Tue, 07 Nov 2023 11:00:59 +0100
Selector: app=cassandra
Labels: app=cassandra
Annotations: <none>
Replicas: 3 desired | 0 total
Update Strategy: RollingUpdate
Partition: 0
Pods Status: 0 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=cassandra
Containers:
cassandra:
Image: gcr.io/google-samples/cassandra:v13
Ports: 7000/TCP, 7001/TCP, 7199/TCP, 9042/TCP
Host Ports: 0/TCP, 0/TCP, 0/TCP, 0/TCP
Limits:
cpu: 500m
memory: 1Gi
Requests:
cpu: 500m
memory: 1Gi
Readiness: exec [/bin/bash -c /ready-probe.sh] delay=15s timeout=5s period=10s #success=1 #failure=3
Environment:
MAX_HEAP_SIZE: 512M
HEAP_NEWSIZE: 100M
CASSANDRA_SEEDS: cassandra-0.cassandra.default.svc.cluster.local
CASSANDRA_CLUSTER_NAME: K8Demo
CASSANDRA_DC: DC1-K8Demo
CASSANDRA_RACK: Rack1-K8Demo
POD_IP: (v1:status.podIP)
Mounts:
/srv/shared-k8s-eu-1-worker-1 from k8s-eu-1-worker-1-cassandra-0 (rw)
/srv/shared-k8s-eu-1-worker-2 from k8s-eu-1-worker-2-cassandra-0 (rw)
/srv/shared-k8s-eu-1-worker-3 from k8s-eu-1-worker-3-cassandra-0 (rw)
/srv/shared-k8s-eu-1-worker-4 from k8s-eu-1-worker-4-cassandra-0 (rw)
/srv/shared-k8s-eu-1-worker-5 from k8s-eu-1-worker-5-cassandra-0 (rw)
Volumes: <none>
Volume Claims:
Name: k8s-eu-1-worker-1
StorageClass: k8s-eu-1-worker-1
Labels: <none>
Annotations: <none>
Capacity: 1Gi
Access Modes: [ReadWriteOnce]
Name: k8s-eu-1-worker-2
StorageClass: k8s-eu-1-worker-2
Labels: <none>
Annotations: <none>
Capacity: 1Gi
Access Modes: [ReadWriteOnce]
Name: k8s-eu-1-worker-3
StorageClass: k8s-eu-1-worker-3
Labels: <none>
Annotations: <none>
Capacity: 1Gi
Access Modes: [ReadWriteOnce]
Name: k8s-eu-1-worker-4
StorageClass: k8s-eu-1-worker-4
Labels: <none>
Annotations: <none>
Capacity: 1Gi
Access Modes: [ReadWriteOnce]
Name: k8s-eu-1-worker-5
StorageClass: k8s-eu-1-worker-5
Labels: <none>
Annotations: <none>
Capacity: 1Gi
Access Modes: [ReadWriteOnce]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 3s (x13 over 23s) statefulset-controller create Pod cassandra-0 in StatefulSet cassandra failed error: Pod "cassandra-0" is invalid: [spec.containers[0].volumeMounts[0].name: Not found: "k8s-eu-1-worker-1-cassandra-0", spec.containers[0].volumeMounts[1].name: Not found: "k8s-eu-1-worker-2-cassandra-0", spec.containers[0].volumeMounts[2].name: Not found: "k8s-eu-1-worker-3-cassandra-0", spec.containers[0].volumeMounts[3].name: Not found: "k8s-eu-1-worker-4-cassandra-0", spec.containers[0].volumeMounts[4].name: Not found: "k8s-eu-1-worker-5-cassandra-0"]
Addendum:
In this tutorial : https://pwittrock.github.io/docs/tutorials/stateful-application/cassandra/
volumeMounts:
- name: cassandra-data
mountPath: /cassandra_data
And here : https://kubernetes.io/docs/tutorials/stateful-application/cassandra/ :
# These volume mounts are persistent. They are like inline claims,
# but not exactly because the names need to match exactly one of
# the stateful pod volumes.
The Persistent Volumes obtained through nfs-subdir-external-provisioner are :
root@k8s-eu-1-master:~# kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-22d85482-9103-43b5-a93e-a52e70bdbd16 1Gi RWO Delete Bound default/k8s-eu-1-worker-1-cassandra-0 k8s-eu-1-worker-1 20h
pvc-5118d0ae-b6fa-476e-b22d-a5bb3247f7fb 1Gi RWO Delete Bound default/k8s-eu-1-worker-2-cassandra-0 k8s-eu-1-worker-2 20h
pvc-7a7160ea-0bf6-42de-9b35-3464930ea7d0 1Gi RWO Delete Bound default/k8s-eu-1-worker-3-cassandra-0 k8s-eu-1-worker-3 20h
pvc-b7934357-6d6c-47a8-b644-28b9a0ad58b5 1Gi RWO Delete Bound default/k8s-eu-1-worker-4-cassandra-0 k8s-eu-1-worker-4 20h
pvc-d587623f-f62f-4f80-b6c2-39104c568fda 1Gi RWO Delete Bound default/k8s-eu-1-worker-5-cassandra-0 k8s-eu-1-worker-5 20h
And the corresponding PersistentVolumeClaims are:
root@k8s-eu-1-master:~# kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
k8s-eu-1-worker-1-cassandra-0 Bound pvc-22d85482-9103-43b5-a93e-a52e70bdbd16 1Gi RWO k8s-eu-1-worker-1 20h
k8s-eu-1-worker-2-cassandra-0 Bound pvc-5118d0ae-b6fa-476e-b22d-a5bb3247f7fb 1Gi RWO k8s-eu-1-worker-2 20h
k8s-eu-1-worker-3-cassandra-0 Bound pvc-7a7160ea-0bf6-42de-9b35-3464930ea7d0 1Gi RWO k8s-eu-1-worker-3 20h
k8s-eu-1-worker-4-cassandra-0 Bound pvc-b7934357-6d6c-47a8-b644-28b9a0ad58b5 1Gi RWO k8s-eu-1-worker-4 20h
k8s-eu-1-worker-5-cassandra-0 Bound pvc-d587623f-f62f-4f80-b6c2-39104c568fda 1Gi RWO k8s-eu-1-worker-5 20h
And the NFS shared folders are:
root@k8s-eu-1-master:~# df -h | grep /srv
xx.xxx.xxx.xxx:/srv/shared-k8s-eu-1-worker-1 391G 6.1G 365G 2% /mnt/data
yy.yyy.yyy.yyy:/srv/shared-k8s-eu-1-worker-2 391G 6.1G 365G 2% /mnt/data
zz.zzz.zzz.zz:/srv/shared-k8s-eu-1-worker-3 391G 6.1G 365G 2% /mnt/data
pp.ppp.ppp.pp:/srv/shared-k8s-eu-1-worker-4 391G 6.1G 365G 2% /mnt/data
qq.qqq.qqq.qqq:/srv/shared-k8s-eu-1-worker-5 391G 6.1G 365G 2% /mnt/data
I've tried the following volumeMounts settings:
1st trial: name = PVC name + mountPath = NFS folder path:
volumeMounts:
- name: k8s-eu-1-worker-1-cassandra-0
mountPath: /srv/shared-k8s-eu-1-worker-1
- name: k8s-eu-1-worker-2-cassandra-0
mountPath: /srv/shared-k8s-eu-1-worker-2
- name: k8s-eu-1-worker-3-cassandra-0
mountPath: /srv/shared-k8s-eu-1-worker-3
- name: k8s-eu-1-worker-4-cassandra-0
mountPath: /srv/shared-k8s-eu-1-worker-4
- name: k8s-eu-1-worker-5-cassandra-0
mountPath: /srv/shared-k8s-eu-1-worker-5
2nd trial: name = PV name + mountPath = NFS folder path:
volumeMounts:
- name: pvc-22d85482-9103-43b5-a93e-a52e70bdbd16
mountPath: /srv/shared-k8s-eu-1-worker-1
- name: pvc-5118d0ae-b6fa-476e-b22d-a5bb3247f7fb
mountPath: /srv/shared-k8s-eu-1-worker-2
- name: pvc-7a7160ea-0bf6-42de-9b35-3464930ea7d0
mountPath: /srv/shared-k8s-eu-1-worker-3
- name: pvc-b7934357-6d6c-47a8-b644-28b9a0ad58b5
mountPath: /srv/shared-k8s-eu-1-worker-4
- name: pvc-d587623f-f62f-4f80-b6c2-39104c568fda
mountPath: /srv/shared-k8s-eu-1-worker-5
3rd trial: name = namespaced PVC name + mountPath = NFS folder path:
volumeMounts:
- name: default/k8s-eu-1-worker-1-cassandra-0
mountPath: /srv/shared-k8s-eu-1-worker-1
- name: default/k8s-eu-1-worker-2-cassandra-0
mountPath: /srv/shared-k8s-eu-1-worker-2
- name: default/k8s-eu-1-worker-3-cassandra-0
mountPath: /srv/shared-k8s-eu-1-worker-3
- name: default/k8s-eu-1-worker-4-cassandra-0
mountPath: /srv/shared-k8s-eu-1-worker-4
- name: default/k8s-eu-1-worker-5-cassandra-0
mountPath: /srv/shared-k8s-eu-1-worker-5
But in all these trials it complains with the same error.
Based on the PersistentVolumes, PersistentVolumeClaims, and the shared NFS folders shown above, what do I have to specify as the volumeMounts names and mountPaths?
For example, this is the pod created by the provisioner for worker-1:
root@k8s-eu-1-master:~# kubectl describe pod k8s-eu-1-worker-1-nfs-subdir-external-provisioner-74787c8ddfgmh
Name: k8s-eu-1-worker-1-nfs-subdir-external-provisioner-74787c8ddfgmh
Namespace: default
Priority: 0
Service Account: k8s-eu-1-worker-1-nfs-subdir-external-provisioner
Node: k8s-eu-1-worker-2/yy.yyy.yyy.yyy
Start Time: Tue, 07 Nov 2023 13:46:04 +0100
Labels: app=nfs-subdir-external-provisioner
pod-template-hash=74787c8d8b
release=k8s-eu-1-worker-1-nfs-subdir-external-provisioner
Annotations: cni.projectcalico.org/containerID: b87d543f81fb00cae352e05e205bb6477405e816ea0e386217a9a5c95dcf2193
cni.projectcalico.org/podIP: 192.168.236.14/32
cni.projectcalico.org/podIPs: 192.168.236.14/32
Status: Running
IP: 192.168.236.14
IPs:
IP: 192.168.236.14
Controlled By: ReplicaSet/k8s-eu-1-worker-1-nfs-subdir-external-provisioner-74787c8d8b
Containers:
nfs-subdir-external-provisioner:
Container ID: containerd://3292b89c024a7efaada811cba01132f22235fc962e10d9b8988b534a9a76914e
Image: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2
Image ID: registry.k8s.io/sig-storage/nfs-subdir-external-provisioner@sha256:63d5e04551ec8b5aae83b6f35938ca5ddc50a88d85492d9731810c31591fa4c9
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 07 Nov 2023 13:46:05 +0100
Ready: True
Restart Count: 0
Environment:
PROVISIONER_NAME: k8s-sigs.io/k8s-eu-1-worker-1
NFS_SERVER: xx.xxx.xxx.xxx
NFS_PATH: /srv/shared-k8s-eu-1-worker-1
Mounts:
/persistentvolumes from nfs-subdir-external-provisioner-root (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-gpbqt (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
nfs-subdir-external-provisioner-root:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: xx.xxx.xxx.xxx
Path: /srv/shared-k8s-eu-1-worker-1
ReadOnly: false
kube-api-access-gpbqt:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 22m default-scheduler Successfully assigned default/k8s-eu-1-worker-1-nfs-subdir-external-provisioner-74787c8ddfgmh to k8s-eu-1-worker-2
Normal Pulled 22m kubelet Container image "registry.k8s.io/sig-storage/nfs-subdir-external-provisioner:v4.0.2" already present on machine
Normal Created 22m kubelet Created container nfs-subdir-external-provisioner
Normal Started 22m kubelet Started container nfs-subdir-external-provisioner
Which is the "stateful pod volume" name referred to by https://raw.githubusercontent.com/kubernetes/website/main/content/en/examples/application/cassandra/cassandra-statefulset.yaml ("These volume mounts are persistent. They are like inline claims, but not exactly because the names need to match exactly one of the stateful pod volumes.")?
I tried with "/persistentvolumes" but I got the same error.
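For comparison, a minimal sketch of what that comment seems to mean, using the names from the upstream tutorial: the mount name must equal a volumeClaimTemplate metadata.name (the controller then creates per-pod PVCs such as cassandra-data-cassandra-0 from that template); the storageClassName below is just one of the classes created above, used as an example:

        volumeMounts:
        - name: cassandra-data               # must equal the volumeClaimTemplate name below
          mountPath: /cassandra_data
  volumeClaimTemplates:
  - metadata:
      name: cassandra-data                   # this is the "stateful pod volume" name
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: k8s-eu-1-worker-1    # example: one of the StorageClasses created above
      resources:
        requests:
          storage: 1Gi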
commented Pull Request #251 on apache/cassandra-website
Thanks for the review and suggestions, @michaelsembwever. Suggestions acknowledged and changes have been made.
CREATE TABLE liquid.c_liquid_mlog_mlog_trade_slices (
id int,
key text,
item text,
value blob,
PRIMARY KEY ((id, key, item))
)
The id column is part of the primary key, and secondary indexes are normally used on non-primary-key columns to help with efficient searches on those columns. Primary key columns do not support secondary indexes. If you wish to have a secondary index on the id column, you must change the structure of your table. One approach is to construct a new table that carries the appropriate secondary index.
CREATE TABLE liquid.c_liquid_mlog_mlog_trade_slices_by_id (
id int,
key text,
item text,
value blob,
PRIMARY KEY (id, key, item)
);
CREATE INDEX id_index ON liquid.c_liquid_mlog_mlog_trade_slices_by_id (id);
public List<String> getIndexDDLStatements(String table) {
    List<String> idxDDLs = new LinkedList<>();

    // Columns that belong to the key persistence settings are collected so they can be skipped.
    Set<String> keyCols = new HashSet<>(keyPersistenceSettings.getTableColumns());

    // Only the fields of the value persistence settings are considered for index DDL.
    List<PojoValueField> fields = valPersistenceSettings.getFields();

    for (PojoField field : fields) {
        // A CREATE INDEX statement is generated only for indexed value fields
        // whose column is not already part of the key columns.
        if (!keyCols.contains(field.getColumn()) && ((PojoValueField)field).isIndexed())
            idxDDLs.add(((PojoValueField)field).getIndexDDL(keyspace, table));
    }

    return idxDDLs;
}
I have a table with key persistence columns ("keyPersistenceSettings columns: id, key, item") and a value persistence column ("value"). According to the API provided by org.apache.ignite.cache.store.cassandra.persistence.KeyValuePersistenceSettings (as shown in the code block), is it only applicable for creating secondary indexes on value columns?
My table is like:
CREATE TABLE liquid.c_liquid_mlog_mlog_trade_slices (
id int,
key text,
item text,
value blob,
PRIMARY KEY ((id, key, item))
)
And I can manually create a secondary index on 'id'. But obviously I cannot use getIndexDDLStatements to create a secondary index on 'id', because 'id' is a column in 'keyPersistenceSettings'?
opened Pull Request #2869 on apache/cassandra
#2869 CASSANDRA-19002 trunk hints and commit logs upgrades
opened Pull Request #2868 on apache/cassandra
#2868 CASSANDRA-19002 3.0 add hints maker
Is there a maximum size limit to a table partition in Amazon Keyspaces? I want to store around 110 TB of unstructured data, and I'm trying to decide whether to manually shard DocumentDB, due to its collection size limit, or to use Keyspaces instead. I'd take up Keyspaces if it doesn't have a partition size limit.
Mentioned @cassandra
This release comes with the much-awaited Vector Similarity Search based on JVector. Learn more:
#ApacheCassandra #VectorSearch #DataStax pic.twitter.com/swpPrquHXI
Mentioned @cassandra
This release comes with the much-awaited Vector Similarity Search based on JVector. Download today!
#ApacheCassandra #VectorSearch #DataStax pic.twitter.com/us3qIT4raU
Mentioned @cassandra
Where is the failure? CDNs? Load Balancers? The Service itself? twitter.com/discord_suppor…
commented Pull Request #170 on apache/cassandra-website
It seems reasonable to me that someone preparing a patch for an EOL'd version might have to dig a bit deeper and re-build the docs for when that version was supported. Having in-tree docs would make that easier, since their checkout of the EOL'd release would include the docs at that point in time, but we could also include a link to the SHA of the docs before they were pruned for that EOL. Then contributors would clone the EOL'd branch, find the ref for the docs, and be able to build those themselves.
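A rough sketch of that workflow, assuming the docs live under doc/ in the release branch (branch name and SHA below are placeholders, not an agreed process):

# hypothetical example of re-building docs for an EOL'd release
git clone https://github.com/apache/cassandra.git && cd cassandra
git checkout cassandra-3.0          # placeholder: the EOL'd release branch
git log --oneline -- doc/           # find the ref/SHA of the docs before they were pruned
git checkout <sha> -- doc/          # restore the in-tree docs at that point in time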