Deploy or undeploy schema
Deploy schema manually
Now that you have created GraphQL types, queries, and mutations, it’s time to deploy the schema. Recall that the corresponding CQL schema is inferred and created from the GraphQL schema submitted.
A keyspace will be created as CQL-first unless a schema is deployed. After a schema is deployed, the keyspace should be accessed as schema-first.
In the GraphQL Playground, navigate to http://localhost:8080/graphql-admin and deploy the schema to a previously created keyspace:
mutation {
  deploySchema(
    keyspace: "library"
    expectedVersion: "1da4f190-b7fd-11eb-8258-1ff1380eaff5"
    schema: """
      # Stargate does not require definition of fields in @key,
      # it uses the primary key
      type Book @key @cql_entity(name: "book") @cql_input {
        title: String! @cql_column(partitionKey: true, name: "book_title")
        isbn: String @cql_column(clusteringOrder: ASC)
        author: [String] @cql_index(name: "author_idx", target: VALUES)
      }
      type SelectBookResult @cql_payload {
        data: [Book]
        pagingState: String
      }
      type InsertBookResponse @cql_payload {
        applied: Boolean!
        book: Book!
      }
      type Query {
        # books by partition key
        bookByTitle(title: String!): [Book]
        # books by partition key + clustering column (primary key)
        bookByTitleAndIsbn(title: String!, isbn: String): [Book]
        # books by indexed column author
        bookByAuthor(
          author: String @cql_where(field: "author", predicate: CONTAINS)
        ): [Book]
        # books by partition key + indexed column author
        bookByTitleAndAuthor(
          title: String!,
          author: String @cql_where(field: "author", predicate: CONTAINS)
        ): [Book]
        booksWithPaging(
          title: String!,
          pagingState: String @cql_pagingState
        ): SelectBookResult @cql_select(pageSize: 10)
        # books by partition key WHERE title is IN a list
        booksIn(
          title: [String] @cql_where(field: "title", predicate: IN)
        ): [Book]
        # books by author WHERE author is CONTAINED in the author array (list)
        booksContainAuthor(
          author: String @cql_where(field: "author", predicate: CONTAINS)
        ): [Book]
        bookGT(
          title: String
          isbn: String @cql_where(field: "isbn", predicate: GT)
        ): [Book]
        bookLT(
          title: String
          isbn: String @cql_where(field: "isbn", predicate: LT)
        ): [Book]
      }
      type Mutation {
        insertBook(book: BookInput!): Book
        updateBook(book: BookInput): Boolean @cql_update
        deleteBook(book: BookInput!): Boolean
      }
    """
  ) {
    version
    cqlChanges
  }
}
Result:
{
  "data": {
    "deploySchema": {
      "version": "4adc2e30-9e53-11eb-8fde-b341b9f82ca9",
      "cqlChanges": [
        "No changes, the CQL schema is up to date"
      ]
    }
  }
}
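Once the deployment succeeds, the operations defined in the schema can be invoked against the keyspace-specific endpoint. A minimal sketch, assuming the default local endpoint http://localhost:8080/graphql/library and that a matching book row has already been inserted (the title value is illustrative):
# Query the deployed schema by partition key (illustrative title).
query {
  bookByTitle(title: "Moby Dick") {
    title
    isbn
    author
  }
}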
A second, more extensive schema can be deployed in the same way. It adds the Address and Review UDTs, plus the BookI, Reader, ReaderU, and LibCollection tables:
mutation {
  deploySchema(
    keyspace: "library"
    expectedVersion: "1da4f190-b7fd-11eb-8258-1ff1380eaff5"
    schema: """
      type Address @cql_entity(target: UDT) @cql_input {
        street: String
        city: String
        state: String
        zipCode: String @cql_column(name: "zip_code")
      }
      type Review @cql_entity(target: UDT) @cql_input {
        bookTitle: String @cql_column(name: "book_title")
        comment: String
        rating: Int
        reviewDate: Date @cql_column(name: "review_date")
      }
      # Stargate does not require definition of fields in @key,
      # it uses the primary key
      type Book @key @cql_entity(name: "book") @cql_input {
        title: String! @cql_column(partitionKey: true, name: "book_title")
        isbn: String @cql_column(clusteringOrder: ASC)
        author: [String] @cql_index(name: "author_idx", target: VALUES)
      }
      type BookI @key @cql_entity(name: "booki") @cql_input {
        isbn: String! @cql_column(partitionKey: true)
        title: String @cql_column(clusteringOrder: ASC, name: "book_title")
        author: [String] @cql_index(name: "authori_idx", target: VALUES)
      }
      type SelectBookResult @cql_payload {
        data: [Book]
        pagingState: String
      }
      type InsertBookResponse @cql_payload {
        applied: Boolean!
        book: Book!
      }
      type Reader @key @cql_entity(name: "reader") @cql_input {
        name: String! @cql_column(partitionKey: true)
        user_id: Uuid! @cql_column(clusteringOrder: ASC)
        birthdate: Date @cql_index(name: "date_idx")
        email: [String] @cql_column(typeHint: "set<varchar>")
        reviews: [Review] @cql_index(name: "review_idx", target: VALUES)
        address: [Address]
      }
      type ReaderU @key @cql_entity(name: "readeru") @cql_input {
        user_id: Uuid! @cql_column(partitionKey: true)
        name: String! @cql_column(clusteringOrder: ASC)
        birthdate: Date @cql_index(name: "dateu_idx")
        email: [String] @cql_column(typeHint: "set<varchar>")
        reviews: [Review] @cql_index(name: "reviewu_idx", target: VALUES)
        address: [Address]
      }
      type LibCollection @key @cql_entity(name: "lib_collection") @cql_input {
        type: String! @cql_column(partitionKey: true)
        lib_id: Int! @cql_column(partitionKey: true)
        lib_name: String @cql_column(clusteringOrder: ASC)
      }
      type Query {
        # books by partition key
        bookByTitle(title: String!): [Book]
        # books by partition key + clustering column (primary key)
        bookByTitleAndIsbn(title: String!, isbn: String): [Book]
        # books by indexed column author
        bookByAuthor(
          author: String @cql_where(field: "author", predicate: CONTAINS)
        ): [Book]
        # books by partition key + indexed column author
        bookByTitleAndAuthor(
          title: String!,
          author: String @cql_where(field: "author", predicate: CONTAINS)
        ): [Book]
        # books by isbn (object: BookI)
        bookIByIsbn(isbn: String): [BookI]
        # books with paging state, paging size = 10
        booksWithPaging(
          title: String!,
          pagingState: String @cql_pagingState
        ): SelectBookResult @cql_select(pageSize: 10)
        # books by partition key WHERE title is IN a list
        booksIn(
          title: [String] @cql_where(field: "title", predicate: IN)
        ): [Book]
        # books by author WHERE author is CONTAINED in the author array (list)
        booksContainAuthor(
          author: String @cql_where(field: "author", predicate: CONTAINS)
        ): [Book]
        bookGT(
          title: String
          isbn: String @cql_where(field: "isbn", predicate: GT)
        ): [Book]
        bookLT(
          title: String
          isbn: String @cql_where(field: "isbn", predicate: LT)
        ): [Book]
        # readers by partition key
        readerByName(name: String!): [Reader]
        # readers by partition key + clustering column (primary key)
        readerByNameAndUserid(name: String!, user_id: Uuid): [Reader]
        # reader by user_id (object: ReaderU)
        readerUByUserid(user_id: Uuid!): [ReaderU]
        # reader by review that CONTAINS information
        #readerCONTAINS(
        #  reviews: ReviewInput! @cql_where(field: "reviews", predicate: CONTAINS)
        #): [Reader]
        #readerGT(
        #  name: String!,
        #  user_id: Uuid! @cql_where(field: "user_id", predicate: GT)
        #): [Reader]
        #libCollByType(type: String!): [LibCollection]
        # lib collection by primary key (composite)
        libCollByTypeAndLibid(type: String!, lib_id: Int!): [LibCollection]
        # lib collection by indexed column lib_name
        #libCollByName(lib_name: String): [LibCollection]
        # lib collection by type IN and lib_id IN
        #libCollIn(
        #  type: [String!] @cql_where(field: "type", predicate: IN)
        #  lib_id: [Int!] @cql_where(field: "lib_id", predicate: IN)
        #): [LibCollection]
      }
      type Mutation {
        insertBook(book: BookInput!): Book
        insertBookI(booki: BookIInput!): BookI
        insertBookIfNotExists(book: BookInput!): InsertBookResponse
        updateBook(book: BookInput): Boolean @cql_update
        deleteBook(book: BookInput!): Boolean
        insertReader(reader: ReaderInput!): Reader
        updateReader(reader: ReaderInput!): Boolean @cql_update
        deleteReader(reader: ReaderInput!): Boolean
        insertLibCollection(libColl: LibCollectionInput!): LibCollection
        updateLibCollection(libColl: LibCollectionInput!): Boolean @cql_update
        deleteLibCollection(libColl: LibCollectionInput!): Boolean @cql_delete(ifExists: true)
      }
    """
  ) {
    version
    cqlChanges
  }
}
The deploySchema mutation is executed. The keyspace is specified, along with the schema, which is enclosed in triple quotes (""").
A number of additional options can be supplied:
| Option | Default | Description |
|---|---|---|
| expectedVersion | N/A | Each schema is assigned a unique version number. If the current deployment modifies an existing schema, its current version must be supplied. |
| dryRun | false | Set to true to test the deployment without applying any changes. |
| force | false | Force the schema change. |
| migrationStrategy | ADD_MISSING_TABLES_AND_COLUMNS | One of USE_EXISTING, ADD_MISSING_TABLES, ADD_MISSING_TABLES_AND_COLUMNS, DROP_AND_RECREATE_ALL, DROP_AND_RECREATE_IF_MISMATCH (each strategy is described below). |
Two items are returned in this example: version, the version assigned to the schema, and cqlChanges, which reports whether any CQL changes occurred as a result of the deployment. Other available response fields are logs and query.
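For example, to preview the CQL changes a deployment would make without applying them, pass dryRun alongside the other arguments and inspect cqlChanges in the response. A hedged sketch; the keyspace, expected version, and schema body are illustrative:
mutation {
  # Dry run: report the CQL changes that would be made, without applying them.
  # The expectedVersion value is illustrative; use your keyspace's current version.
  deploySchema(
    keyspace: "library"
    expectedVersion: "4adc2e30-9e53-11eb-8fde-b341b9f82ca9"
    dryRun: true
    schema: """
      type Book @key @cql_entity(name: "book") @cql_input {
        title: String! @cql_column(partitionKey: true, name: "book_title")
        isbn: String @cql_column(clusteringOrder: ASC)
        author: [String] @cql_index(name: "author_idx", target: VALUES)
      }
      type Query {
        bookByTitle(title: String!): [Book]
      }
    """
  ) {
    cqlChanges
  }
}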
The migrationStrategy option deserves further explanation, because it controls how deploySchema updates the underlying CQL schema. The available strategies are:
- ADD_MISSING_TABLES_AND_COLUMNS (default): Create CQL tables and UDTs that don’t already exist. For those that exist, add any missing columns. Partition keys and clustering columns cannot be added after initial creation. This strategy fails if a column already exists with a different data type.
- USE_EXISTING: Don’t do anything. This is the most conservative strategy. All CQL tables and UDTs must match, otherwise the deployment is aborted.
- ADD_MISSING_TABLES: Create CQL tables and UDTs that don’t already exist. Those that exist must match, otherwise the deployment is aborted.
- DROP_AND_RECREATE_ALL: Drop and recreate all CQL tables and UDTs. This is a destructive operation: any existing data will be lost.
- DROP_AND_RECREATE_IF_MISMATCH: Drop and recreate only the CQL tables and UDTs that don’t match. This is a destructive operation: any existing data in the recreated tables will be lost. Tables that are not recreated retain their data.
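To override the default, pass migrationStrategy as an argument to deploySchema. A minimal sketch using the most conservative strategy, USE_EXISTING; the keyspace, version, and schema body are illustrative:
mutation {
  deploySchema(
    keyspace: "library"
    # Illustrative version; supply your keyspace's current version.
    expectedVersion: "4adc2e30-9e53-11eb-8fde-b341b9f82ca9"
    # Most conservative strategy: abort unless the CQL tables and UDTs already match.
    migrationStrategy: USE_EXISTING
    schema: """
      type Book @key @cql_entity(name: "book") @cql_input {
        title: String! @cql_column(partitionKey: true, name: "book_title")
        isbn: String @cql_column(clusteringOrder: ASC)
        author: [String] @cql_index(name: "author_idx", target: VALUES)
      }
      type Query {
        bookByTitle(title: String!): [Book]
      }
    """
  ) {
    version
    cqlChanges
  }
}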
Deploy schema file using cURL
A schema can also be deployed to a keyspace by uploading a schema file. In this case, the deploySchemaFile mutation is executed, and it must be sent as a multipart request (note that the operations part must declare the MIME type application/json). Run it from the command line with cURL:
curl http://localhost:8080/graphql-admin \
-H "X-Cassandra-Token: $AUTH_TOKEN" \
-F operations='
{
"query": "mutation($file: Upload!) { deploySchemaFile( keyspace: \"library\" schemaFile: $file force: true) { version } }",
"variables": { "file": null }
};type=application/json' \
-F map='{ "filePart": ["variables.file"] }' \
-F filePart=@/tmp/schema.graphql
{"data":{"deploySchemaFile":{"version":"5c6c4190-a23f-11eb-8fde-b341b9f82ca9"}}}
The operations part contains the GraphQL payload. It consists of a parameterized mutation that takes a single $file argument (left as null in the payload, because it is supplied by the file upload). The filePart argument contains the file itself. The map argument specifies that the file supplied as filePart maps to the variables.file setting. In this example, the schema file is located at /tmp/schema.graphql.
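For reference, the schema file contains the same SDL that would otherwise appear between the triple quotes of deploySchema. A minimal, illustrative sketch of what /tmp/schema.graphql might contain, reusing the Book type from earlier:
# Illustrative contents of /tmp/schema.graphql
type Book @key @cql_entity(name: "book") @cql_input {
  title: String! @cql_column(partitionKey: true, name: "book_title")
  isbn: String @cql_column(clusteringOrder: ASC)
  author: [String] @cql_index(name: "author_idx", target: VALUES)
}

type Query {
  bookByTitle(title: String!): [Book]
  bookByTitleAndIsbn(title: String!, isbn: String): [Book]
}

type Mutation {
  insertBook(book: BookInput!): Book
  deleteBook(book: BookInput!): Boolean
}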
In order to deploy a schema file again, you’ll need to supply the expectedVersion
for the schema to be replaced.
Check the keyspace schema
to get the current version.
curl http://localhost:8080/graphql-admin \
-H "X-Cassandra-Token: $AUTH_TOKEN" \
-F operations='
{
"query": "mutation($file: Upload!) { deploySchemaFile( keyspace: \"library\" expectedVersion: \"cb0b25f0-ef36-11eb-9cf6-afef380162ee\" schemaFile: $file) { version } }",
"variables": { "file": null }
};type=application/json' \
-F map='{ "filePart": ["variables.file"] }' \
-F filePart=@/tmp/schema.graphql
{"data":{"deploySchemaFile":{"version":"26a6f680-a7a9-11eb-a22f-7bb5f4c20029"}}}
Modify schema
To modify the current schema, simply deploy again, supplying the current schema’s version as expectedVersion if you wish to overwrite the existing definitions; otherwise, a new schema with a new version id is created. Either the GraphQL Playground or the cURL command can be used to update the schema.
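For instance, to add a column to the existing Book table, redeploy the schema with the new field and the current version; under the default ADD_MISSING_TABLES_AND_COLUMNS strategy, the missing CQL column is created. A sketch; the version value and the added publishYear field are illustrative:
mutation {
  deploySchema(
    keyspace: "library"
    # Illustrative: supply the current schema version returned by the last deployment.
    expectedVersion: "26a6f680-a7a9-11eb-a22f-7bb5f4c20029"
    schema: """
      type Book @key @cql_entity(name: "book") @cql_input {
        title: String! @cql_column(partitionKey: true, name: "book_title")
        isbn: String @cql_column(clusteringOrder: ASC)
        author: [String] @cql_index(name: "author_idx", target: VALUES)
        publishYear: Int   # new field, added as a CQL column on redeploy
      }
      type Query {
        bookByTitle(title: String!): [Book]
      }
      type Mutation {
        insertBook(book: BookInput!): Book
      }
    """
  ) {
    version
    cqlChanges
  }
}
If the redeployment succeeds, cqlChanges should report the column addition rather than "No changes, the CQL schema is up to date".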