I have the following service:
const searchByQuery = {
channelId: channelId,
id: models.timeuuidFromString(messageId),
}
const deleteMessage = util.promisify(models.instance.MessageStore.delete).bind(models.instance.MessageStore);
const deletedMessage = await deleteMessage(searchByQuery);
console.log(deletedMessage)
return "Message Deleted"
}
I want to check if deletion was successful. But deletedMessage always returns the following:
ResultSet {
info: {
queriedHost: '127.0.0.1:9042',
triedHosts: { '127.0.0.1:9042': null },
speculativeExecutions: 0,
achievedConsistency: 1,
traceId: undefined,
warnings: undefined,
customPayload: undefined,
isSchemaInAgreement: true
},
rows: undefined,
rowLength: undefined,
columns: null,
pageState: null,
nextPage: undefined,
nextPageAsync: undefined
}
This is returned even if the data does not exist (i.e. the deletion did not happen).
I have searched the docs and also tried the after_delete hook, but to no avail.
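In Cassandra a DELETE is just another write (a tombstone), so it succeeds whether or not a matching row existed, and the plain ResultSet carries no row count. One way to find out whether a row was actually removed is a conditional delete (DELETE ... IF EXISTS), a lightweight transaction whose result contains a single row with an [applied] boolean; the DataStax Node.js driver also exposes this via ResultSet#wasApplied(). Below is a minimal sketch of reading that flag off a driver-style ResultSet: the wasApplied helper and the sample objects are illustrative, not part of express-cassandra (which may require dropping to a raw-query escape hatch to run a conditional delete).

```javascript
// Cassandra's DELETE ... IF EXISTS returns one row whose "[applied]"
// column says whether a matching row actually existed and was removed.
// This helper reads that flag off a cassandra-driver-style ResultSet.
function wasApplied(resultSet) {
  const row = resultSet && resultSet.rows && resultSet.rows[0];
  // Plain (non-conditional) deletes return no rows at all; report "unknown".
  if (!row || row['[applied]'] === undefined) return null;
  return row['[applied]'] === true;
}

// Illustrative ResultSet shapes:
const applied = { rows: [{ '[applied]': true }], rowLength: 1 };
const notApplied = { rows: [{ '[applied]': false }], rowLength: 1 };
const plainDelete = { rows: undefined, rowLength: undefined };

console.log(wasApplied(applied));     // true  -> the row existed and was deleted
console.log(wasApplied(notApplied));  // false -> nothing matched the WHERE clause
console.log(wasApplied(plainDelete)); // null  -> plain DELETE, no applied info
```

Note that conditional writes use Paxos under the hood and are noticeably more expensive than plain deletes, so this is worth it only when the caller genuinely needs to know whether the row existed.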
Forked a repository apache/cassandra
commented Pull Request #3139 on apache/cassandra
Green CI https://app.circleci.com/pipelines/github/bereng/cassandra/1176/workflows/86564a8e-006a-4b57-9fdb-e43e2ad20666 but for known offenders
commented Pull Request #3044 on apache/cassandra
Note that we're working on https://github.com/maedhroz/cassandra/pull/15 instead of this PR until CASSANDRA-19018 merges...
opened Pull Request #3138 on apache/cassandra
#3138 CASSANDRA-18940 Record latencies for SAI post-filtering reads against local storage
Hi, so I have a use case in which a user can create a secondary index on any table by calling an API. I'm recording these indexes in a common table using a map<text, frozen<indexes>> column, where indexes is a User Defined Type:
CREATE TYPE indexes (
local BOOLEAN,
index_name TEXT,
table_name TEXT,
columns SET<TEXT>
);
When the API is called, the secondary index is created; this part works fine. But updating the row in the common table does not: all the fields in the UDT are stored as null:
{'table_table1_20e24b85_d425_11e_int13_idx': {local: null, name: null, table_name: null, columns: null}}
Here’s my go code with gocqlx implementation (config.GetScylla() returns a gocqlx session):
selectedTable.Indexes[indexName] = models.IndexModel{
Local: reqBody.Local,
Name: indexName,
TableName: selectedTable.InternalName,
Columns: reqBody.Columns,
}
stmt, names = qb.Update("tables").Set("indexes").
Where(qb.Eq("internal_name"), qb.Eq("name"), qb.Eq("description")).ToCql()
if err := config.GetScylla().Query(stmt, names).BindStruct(&selectedTable).ExecRelease(); err != nil {
utils.HandleErrorResponse(c, err)
return
}
I hope the code is self-explanatory, but all I'm doing is building a map of string -> indexes (UDT) to put in the common table and then updating the common table with it.
Side note: why is the documentation on gocqlx so poor?
opened Pull Request #3137 on apache/cassandra
#3137 CASSANDRA-14572 Expose all table metrics in virtual tables [without annotation processor]
commented Pull Request #2389 on apache/cassandra
Closing as it looks like this was already handled in https://github.com/apache/cassandra/commit/1e2c88fff832d891b296165e9adda786182e850d.
A bit discouraging TBH against future contributions given that I opened this PR, nudged it multiple times when it was still a relevant PR, and then the work was ignored, so I'm left wondering "Why did I spend the time to try to contribute here?"
opened Pull Request #3136 on apache/cassandra
#3136 Fix StreamingTombstoneHistogramBuilder.DataHolder does not merge histogram points correctly on overflow
I have the following update service:
const editMessage = async (channelId, messageId, message) => {
const searchByQuery = {
id: models.timeuuidFromString(messageId),
channelId: models.uuidFromString(channelId)
}
const updateQuery = {
message,
edited:true,
}
const updateAsync = util.promisify(models.instance.MessageStore.update).bind(models.instance.MessageStore);
const updatedMessage = await updateAsync(searchByQuery,updateQuery)
return updatedMessage;
}
However, whenever I try to run this, I get the following error:
apollo.model.update.dberror: Error during update query on DB -> ResponseError: Invalid amount of bind variables
On checking, the query it runs is: 'UPDATE "message_store" SET "message"=?, "edited"=?, "updated_at"=toTimestamp(now()), "__v"=now() WHERE "id" = ? AND "channelId" = ?;'
I am unable to work out which variable is missing, since I am providing all the requisite ones in the express-cassandra query.
I have tried changing which fields are updated, and how many, with no luck. I have also deliberately passed wrong types (e.g. making id a string on purpose), and Cassandra does detect the wrong data type, yet the bind variables are still reported as missing.
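The server raises "Invalid amount of bind variables" when the number of values it receives differs from the number of ? placeholders in the statement, so one way to narrow this down is to compare the two directly. The generated statement above expects exactly four bound values (message, edited, id, channelId); if fewer values actually reach the driver (for example because one of them fails model validation and is dropped), this is the error you get. A small standalone checker, purely for debugging (the bindMismatch helper is hypothetical, not part of express-cassandra):

```javascript
// Debug helper: compare the number of '?' placeholders in a CQL string
// with the number of bound parameters actually supplied.
function bindMismatch(query, params) {
  // Count bare '?' placeholders. This simple scan ignores the possibility
  // of '?' inside string literals, which is fine for generated statements
  // like the one express-cassandra logs.
  const placeholders = (query.match(/\?/g) || []).length;
  return { placeholders, supplied: params.length, ok: placeholders === params.length };
}

const query =
  'UPDATE "message_store" SET "message"=?, "edited"=?, ' +
  '"updated_at"=toTimestamp(now()), "__v"=now() ' +
  'WHERE "id" = ? AND "channelId" = ?;';

console.log(bindMismatch(query, ['hello', true, 'some-timeuuid', 'some-uuid']));
// → { placeholders: 4, supplied: 4, ok: true }
console.log(bindMismatch(query, ['hello', true, 'some-timeuuid']));
// → { placeholders: 4, supplied: 3, ok: false }
```

Enabling the driver's query logging (or logging the params array right before updateAsync is called) and running this comparison usually pinpoints which of the four values is being dropped.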
opened Pull Request #3135 on apache/cassandra
#3135 CASSANDRA-19417 : LIST SUPERUSERS cql command
opened Pull Request #3134 on apache/cassandra
#3134 CASSANDRA-19426: Fix Double Type issues in the Gossiper#maybeGossipToCMS
opened Pull Request #3133 on apache/cassandra
#3133 CASSANDRA-19429 4.1 remove capacity calls when unnecessary
Mentioned @cassandra
I am using JanusGraph with Cassandra as the DB. While traversing from a source vertex out to its 2nd-, 3rd-, or 4th-degree neighbours, some vertices may already have been visited at a lower degree of the traversal, and I want to exclude them.
Let me explain with an example.
Src = user1
user1 -----knows----> [user2, user3, user4, user5]
user2 -----knows----> [user3, user5, user8, user9]
user3 -----knows----> [user2, user6]
Say you try to get the 3rd-degree relations of user1.
So user1's 1st-degree relations = [user2, user3, user4, user5]
2nd-degree relations = [[user3, user5, user8, user9], [user2, user6]]
Now in the 2nd degree, user2, user3 and user5 should not be present, as they were already in the 1st degree and visited.
Extending this to generic n-degree relations, I want to go to the next degree, ignore the already-visited vertices, and move forward with the remaining ones for the next degree of traversal.
I have tried this query and that worked.
graph.traversal().V().has("name", "a")
// Mark "a" as visited
.property("visited", true)
// Follow "knows" edges
.out("knows")
// Filter non-visited connections
.where(not(has("visited", true)))
// Mark visited
.property("visited", true)
// Follow next "knows" edges
.out("knows")
// Filter non-visited connections again
.where(not(has("visited", true)))
// Follow next "knows" edges
.out("knows")
// Filter non-visited connections again
.where(not(has("visited", true)))
// Collect the resulting vertex names (names is a result list)
.valueMap().forEachRemaining(v -> names.add(v.get("name").toString()));
But here we are marking a property on each node as visited, which involves unnecessary writes, and it will not work for concurrent threads operating on the same vertex (transaction A changes the visited property while transaction B is still relying on it; I know we could manage that ourselves).
So I am wondering: is there a correct or better approach to get the required results?
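A common alternative is to keep the visited bookkeeping on the client (or inside the traversal itself, e.g. Gremlin's aggregate('seen') combined with where(without('seen')) inside repeat()) instead of writing a visited property into the graph. The idea reduces to a plain breadth-first search over the "knows" adjacency with a Set of seen vertices; the sketch below uses in-memory data mirroring the user1 example, with illustrative names, rather than a live JanusGraph connection:

```javascript
// Degree-by-degree BFS that tracks visited vertices in a Set on the
// client, so no "visited" property is ever written to the graph and
// concurrent traversals cannot interfere with each other.
function relationsByDegree(adjacency, src, maxDegree) {
  const visited = new Set([src]);
  const result = [];            // result[d-1] = vertices first seen at degree d
  let frontier = [src];
  for (let d = 1; d <= maxDegree; d++) {
    const next = [];
    for (const v of frontier) {
      for (const nb of adjacency[v] || []) {
        if (!visited.has(nb)) {  // skip anything seen at a lower degree
          visited.add(nb);
          next.push(nb);
        }
      }
    }
    result.push(next);
    frontier = next;
  }
  return result;
}

// The "knows" edges from the example above:
const knows = {
  user1: ['user2', 'user3', 'user4', 'user5'],
  user2: ['user3', 'user5', 'user8', 'user9'],
  user3: ['user2', 'user6'],
};

console.log(relationsByDegree(knows, 'user1', 2));
// → [ [ 'user2', 'user3', 'user4', 'user5' ], [ 'user8', 'user9', 'user6' ] ]
```

Note that simplePath() alone only prevents revisits within a single path and a trailing dedup() only removes duplicates within one degree; neither excludes vertices seen at a strictly lower degree, which is why a seen-set (client-side as above, or via aggregate/without server-side) is the usual fit for this exact requirement.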