Client

class rockset.Client(api_key=None, api_server=None, profile=None, driver=None, **kwargs)[source]

Securely connect to Rockset using an API key.

Optionally, an alternate API server host can also be provided. If you have configured credentials using the rock configure command, then those credentials will act as fallback values when the api_key/api_server parameters are not specified.

Parameters
  • api_key (str) – API key

  • api_server (str) – API server URL. Will default to https if URL does not specify a scheme.

  • profile (str) – Optionally, the name of a credentials profile set up using rock configure

Returns

A Client object

Return type

Client

Raises

ValueError – when the API key is not specified and cannot be fetched from rock CLI credentials, or when the api_server URL is invalid.
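
A minimal construction sketch; the key and server values below are placeholders, not real credentials:

from rockset import Client

# construct with explicit credentials (placeholder values)
rs = Client(api_key='<YOUR-API-KEY>',
            api_server='api.rs2.usw2.rockset.com')

# or fall back to credentials saved earlier via `rock configure`
rs = Client()

# or pick a named credentials profile
rs = Client(profile='dev')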

classmethod config_dir()[source]

Returns name of the directory where Rockset credentials, config, and logs are stored.

Defaults to "~/.rockset/"

Can be overridden via the ROCKSET_CONFIG_HOME env variable.
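
For example, to check which directory credentials and logs are being read from:

from rockset import Client

# prints ~/.rockset/ unless ROCKSET_CONFIG_HOME points elsewhere
print(Client.config_dir())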

list(workspace='commons', **kwargs)[source]

Returns list of all collections in a workspace.

Parameters

workspace (str) – Name of the workspace to list from

Returns

A list of Collection objects

Return type

List
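
A short listing sketch; the workspace name 'marketing' is illustrative:

rs = Client()

# collections in the default 'commons' workspace
for c in rs.list():
    print(c.workspace, c.name, c.status)

# collections in a specific workspace
marketing_collections = rs.list(workspace='marketing')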

query(q, collection=None, **kwargs)[source]

Execute a query against Rockset.

This method prepares the given query object and binds it to a Cursor object, and returns that Cursor object. The request is not actually dispatched to the backend until the results are fetched from the cursor.

Input query needs to be supplied as a Query object.

Cursor objects are iterable, and you can iterate through a cursor to fetch the results. The entire result data set can also be retrieved from the cursor object using a single results() call.

When you iterate through the cursor in a loop, the cursor objects implement automatic pagination behind the scenes. If the query returns a large number of results, then with automatic pagination only a portion of the results is buffered into the cursor at a time. As the cursor iterator reaches the end of the current batch, it will automatically issue a new query to fetch the next batch and seamlessly resume. The cursor's default iterator uses a batch size of 10,000, and you can create an iterator with a different batch size using the iter() method on the cursor object.

Example:

...
rs = Client()
cursor = rs.query(q)

# fetch all results in 1 go
all_results = cursor.results()

# iterate through all results;
# automatic pagination with default iterator batch size of 10,000
# if len(all_results) == 21,442, then as part of looping
# through the results, three distinct queries would be
# issued with (limit, skip) of (10000, 0), (10000, 10000),
# (10000, 20000)
for result in cursor:
    print(result)

# iterate through all results;
# automatic pagination with iterator batch size of 20,000
# if len(all_results) == 21,442, then as part of looping
# through the results, two distinct queries would have
# been issued with (limit, skip) of (20000, 0), (20000, 20000).
for result in cursor.iter(20000):
    print(result)
...
Parameters
  • q (Query) – Input Query object

  • timeout (int) – Client side timeout. When specified, RequestTimeout exception will be thrown upon timeout expiration. By default, the client will wait indefinitely until it receives results or an error from the server.

Returns

returns a cursor that can fetch query results with or without automatic pagination

Return type

Cursor

retrieve(name, workspace='commons')[source]

Retrieves a single collection

Parameters
  • name (str) – Name of the collection to be retrieved

  • workspace (str) – Name of the workspace the collection is in

Returns

Collection object

Return type

Collection
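
A minimal retrieval sketch; the collection name 'users' is illustrative:

rs = Client()
users = rs.retrieve('users', workspace='commons')
print(users.name, users.status, users.retention_secs)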

sql(q, **kwargs)[source]

Execute a query against Rockset.

This method prepares the given query object and binds it to a Cursor object, and returns that Cursor object. The request is not actually dispatched to the backend until the results are fetched from the cursor.

Input query needs to be supplied as a Query object.

Cursor objects are iterable, and you can iterate through a cursor to fetch the results. The entire result data set can also be retrieved from the cursor object using a single results() call.

When you iterate through the cursor in a loop, the cursor objects implement automatic pagination behind the scenes. If the query returns a large number of results, then with automatic pagination only a portion of the results is buffered into the cursor at a time. As the cursor iterator reaches the end of the current batch, it will automatically issue a new query to fetch the next batch and seamlessly resume. The cursor's default iterator uses a batch size of 10,000, and you can create an iterator with a different batch size using the iter() method on the cursor object.

Example:

...
rs = Client()
cursor = rs.sql(q)

# fetch all results in 1 go
all_results = cursor.results()

# iterate through all results;
# automatic pagination with default iterator batch size of 10,000
# if len(all_results) == 21,442, then as part of looping
# through the results, three distinct queries would be
# issued with (limit, skip) of (10000, 0), (10000, 10000),
# (10000, 20000)
for result in cursor:
    print(result)

# iterate through all results;
# automatic pagination with iterator batch size of 20,000
# if len(all_results) == 21,442, then as part of looping
# through the results, two distinct queries would have
# been issued with (limit, skip) of (20000, 0), (20000, 20000).
for result in cursor.iter(20000):
    print(result)
...
Parameters
  • q (Query) – Input Query object

  • timeout (int) – Client side timeout. When specified, RequestTimeout exception will be thrown upon timeout expiration. By default, the client will wait indefinitely until it receives results or an error from the server.

Returns

returns a cursor that can fetch query results with or without automatic pagination

Return type

Cursor

MAX_DOCUMENT_SIZE_BYTES = 41943040

Maximum allowed size of a single document

MAX_FIELD_NAME_LENGTH = 10240

Maximum allowed length of a field name

MAX_FIELD_VALUE_BYTES = 4194304

Maximum allowed size of a field value

MAX_ID_VALUE_LENGTH = 10240

Maximum allowed length of _id field value

MAX_NAME_LENGTH = 2048

Maximum allowed length of a collection name

MAX_NESTED_FIELD_DEPTH = 30

Maximum allowed levels of depth for nested documents


Collection

class rockset.Collection(**kwargs)[source]

NOTE: This class is auto generated by the swagger code generator program.

Do not edit the class manually.

to_dict()[source]

Returns the model properties as a dict

to_str()[source]

Returns the string representation of the model

property created_at

Gets the created_at of this Collection.

ISO-8601 date

Returns

The created_at of this Collection.

Return type

str

property created_by

Gets the created_by of this Collection.

email of user who created the collection

Returns

The created_by of this Collection.

Return type

str

property description

Gets the description of this Collection.

text describing the collection

Returns

The description of this Collection.

Return type

str

property field_mappings

Gets the field_mappings of this Collection.

list of mappings applied on all documents in a collection

Returns

The field_mappings of this Collection.

Return type

list[FieldMappingV2]

property name

Gets the name of this Collection.

unique identifier for collection, can contain alphanumeric or dash characters

Returns

The name of this Collection.

Return type

str

property retention_secs

Gets the retention_secs of this Collection.

number of seconds after which data is purged based on event time

Returns

The retention_secs of this Collection.

Return type

int

property sources

Gets the sources of this Collection.

list of sources from which collection ingests

Returns

The sources of this Collection.

Return type

list[Source]

property stats

Gets the stats of this Collection.

metrics about the collection

Returns

The stats of this Collection.

Return type

CollectionStats

property status

Gets the status of this Collection.

current status of collection, one of: CREATED, READY, DELETED

Returns

The status of this Collection.

Return type

str

property workspace

Gets the workspace of this Collection.

name of the workspace that the collection is in

Returns

The workspace of this Collection.

Return type

str


Cursor

class rockset.Cursor(q=None, client=None)[source]

Fetch the results of a query executed against a collection

async_request()[source]

Returns an asyncio.Future object that can be scheduled in an asyncio event loop. Once scheduled and run to completion, the results can be fetched via the future.result() API. The return value of future.result() will be the same as the return value of Cursor.results()

Returns

Returns a Future object that can be scheduled in an asyncio event loop and future.result() will hold the same return value as Cursor.results()

Return type

asyncio.Future
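
A sketch of scheduling the request on an asyncio event loop, assuming rs is a Client and q is a Query object:

import asyncio

cursor = rs.query(q)
future = cursor.async_request()

# run the future to completion; the value is the same as cursor.results()
loop = asyncio.get_event_loop()
all_results = loop.run_until_complete(future)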

iter(batch=10000)[source]

Returns an iterator that does seamless automatic pagination behind the scenes while fetching no more than the specified batch size number of results at a time.

Parameters

batch (int) – maximum number of results fetched at a time

Returns

Iterator that will return all the results with seamless automatic pagination

Return type

Iterator Object

results()[source]

Execute the query and fetch all the results in one shot.

Returns

All the query result documents

Return type

Array of dicts


Exceptions

Introduction

Various Python exceptions thrown by the rockset module are explained in this section, along with possible reasons and remedies to assist in trouble-shooting.
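
A typical handling pattern, sketched with the exception classes documented below; rs and q are assumed to be a Client and a Query object:

from rockset.exception import (
    AuthError, InputError, TransientServerError)

try:
    results = rs.query(q).results()
except AuthError as e:
    # invalid or expired API key; e.code and e.message hold details
    print('authentication failed:', e.code, e.message)
except InputError as e:
    # malformed request or query syntax error
    print('bad input:', e.message)
except TransientServerError:
    # server-side hiccup; retrying after some time is reasonable
    print('transient error, retry later')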

Authentication Errors

The server is rejecting your request because you have either an expired or an invalid token. Ensure you have a valid API key or generate a new one using the Rockset Console before trying your request again.

class rockset.exception.AuthError(**kwargs)[source]

API key or access token is missing, expired or invalid. Re-authenticating with a valid API key should normally fix this.

code

HTTP status code obtained from server

Type

int

message

error message with more details

Type

str

Input Errors

The server is unable to understand the API request as it was sent. This most likely means the API was badly formed (like the input query has a syntax error). When you encounter this error, please refer to the relevant documentation and verify if the request is constructed properly and if the resource is still present.

class rockset.exception.InputError(**kwargs)[source]

User request has a missing or invalid parameter and cannot be processed as is. Syntax errors in queries fall in this category.

code

HTTP status code obtained from server

Type

int

message

error message with more details

Type

str

type

error sub-category

Type

str

Limit Reached

The server could understand the input request but refuses to execute it. This commonly happens when an account limit has been reached. Please reach out to Rockset Support with more details to alter your account limit.

class rockset.exception.LimitReached(**kwargs)[source]

The API request has exceeded some user-defined limit (such as max deadline set for a query) or a system limit. Refer to documentation to increase the limit or reach out to Rockset support with more details to alter your account limit.

code

HTTP status code obtained from server

Type

int

message

error message with more details

Type

str

type

error sub-category

Type

str

Not Yet Implemented

Your API request needs a feature that is not present in your cluster in order to complete. Either your cluster needs an upgrade, or this feature is on our roadmap and we haven't gotten around to implementing it yet. Please reach out to Rockset support with more details to help us prioritize this feature.

class rockset.exception.NotYetImplemented(**kwargs)[source]

Your request is expecting a feature that has not been deployed in your cluster or has not yet been implemented. Please reach out to Rockset support with more details to help us prioritize this feature. Thank you.

code

HTTP status code obtained from server

Type

int

message

error message with more details

Type

str

type

error sub-category

Type

str

Request Timeouts

The server did not complete the API request before the timeout you set for the request expired. To troubleshoot, see if your request succeeds when you don’t set a timeout. If it does then you need to recalibrate your timeout value. If it doesn’t, then debug the issue based on the new error you receive.

class rockset.exception.RequestTimeout(**kwargs)[source]

Request timed out.

Many API calls allow a client side timeout to be specified. When specified, this exception will be thrown when the timeout expires and the API call has not received a valid response or an error from the servers.

message

timeout error message

Type

str

timeout

timeout specified with the API call, in seconds

Type

int
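
A sketch of using a client-side timeout; rs and q are assumed as above, and the 30-second value is arbitrary:

from rockset.exception import RequestTimeout

try:
    results = rs.query(q, timeout=30).results()
except RequestTimeout as e:
    print('query did not finish within', e.timeout, 'seconds')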

Server Errors

These errors mean the server correctly parsed the input request but couldn't process it for some reason. If a particular request or application is seeing this while other requests are fine, then you have probably uncovered a bug in Rockset. Please contact Rockset support to report the bug; we will provide a time estimate for resolution and send you a t-shirt.

class rockset.exception.ServerError(**kwargs)[source]

Something totally unexpected happened on our servers while processing your request and most likely you have encountered a bug in Rockset. Please contact Rockset support and provide all the details you received along with the error for quick diagnosis, resolution, and to collect your t-shirt.

code

HTTP status code obtained from server

Type

int

message

error message with more details

Type

str

type

error sub-category

Type

str

Transient Server Errors

When many of your requests are failing with TransientServerErrors, it means our servers are going through a period of instability or unplanned downtime. This always means our alerts are firing, our pagers are ringing, phones are buzzing, and little adorable kittens are getting lost in the woods. We are actively investigating and fixing this issue. Look for updates on our status page with estimates on time to resolution. Sorry.

class rockset.exception.TransientServerError(**kwargs)[source]

Some transient hiccup made us fail this request. This means our oncall engineers are actively working on the issue and should resolve it soon. Please retry after some time. Sorry.

code

HTTP status code obtained from server

Type

int

message

error message with more details

Type

str

type

error sub-category

Type

str

F and FieldRef

rockset.F = <rockset.query.FieldRef object>

F is a field reference object that helps in building query expressions natively using Python language expressions. F uses Python operator overloading heavily, and operations on field references generate Query objects that can be used in conjunction with Q to compose complex queries.
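
A small sketch of building query expressions with F; collection and field names are made up:

from rockset import Client, F, Q

# comparisons on field references produce Query objects that can be
# combined with & (AND) and | (OR)
q = Q('logins').where(
    (F['user'] == 'u42') | (F['user'] == 'u43'))

results = Client().query(q).results()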

class rockset.FieldRef(name=None, parent=None, source=None)[source]
apply(inner_query)[source]

Returns a new query object that matches all documents where the given field matches the results of the inner_query

E.g.: say the inner query returns N documents {r1, r2, … rN}, and each of those contains a single field 'f'; then the following two expressions are equivalent to one another, but the apply() version is faster, more efficient, and requires only one round-trip:

<field_ref>.apply(inner_query)

(<field_ref> == r1['f']) | (<field_ref> == r2['f']) ... | (<field_ref> == rN['f'])
Parameters

inner_query (Query) – query object on whose results you wish to perform the apply operation

Returns

query object that represents the desired apply operation

Return type

Query
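
A sketch that mirrors the Query.apply example later in this section; collection and field names are illustrative:

# activity-log entries whose ip_address matches any source_ip
# returned by the inner query -- resolved in a single round-trip
inner = Q('logins').where(F['user'] == 'u42').select(F['source_ip'])
q = Q('activity_logs').where(F['ip_address'].apply(inner))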

approximatecountdistinct()[source]

Returns a new FieldRef that represents an approximatecountdistinct() aggregation of the given field.

Returns

FieldRef object that represents the desired approximatecountdistinct aggregation.

Return type

AggFieldRef

avg()[source]

Returns a new FieldRef that represents an avg() aggregation of the given field.

Returns

FieldRef object that represents the desired avg aggregation.

Return type

AggFieldRef

collect()[source]

Returns a new FieldRef that represents a collect() aggregation of the given field.

Returns

FieldRef object that represents the desired collect aggregation.

Return type

AggFieldRef

count()[source]

Returns a new FieldRef that represents a count() aggregation of the given field.

Returns

FieldRef object that represents the desired count aggregation.

Return type

AggFieldRef

countdistinct()[source]

Returns a new FieldRef that represents a countdistinct() aggregation of the given field.

Returns

FieldRef object that represents the desired countdistinct aggregation.

Return type

AggFieldRef

max()[source]

Returns a new FieldRef that represents a max() aggregation of the given field.

Returns

FieldRef object that represents the desired max aggregation.

Return type

AggFieldRef

min()[source]

Returns a new FieldRef that represents a min() aggregation of the given field.

Returns

FieldRef object that represents the desired min aggregation.

Return type

AggFieldRef

nested(nested_query)[source]

Returns a new query object that matches all documents where the given inner query matches on one or more individual nested documents present within the field path of the given field.

Useful to run complex query expressions on fields that contain a nested array of documents.

Example

Say you have a collection where every document describes a book, and each document has an “authors” field that is a nested array of documents describing each author:

{"title": "Transaction Processing: Concepts and Techniques",
 "authors": [
     {"first_name": "Jim", "last_name": "Gray"},
     {"first_name": "Andreas", "last_name": "Reuter"},
 ],
 "publisher": ... }

If you want to find all books where 'Jim Gray' was one of the authors, you can use the following query:

F["authors"].nested((F["first_name"] == 'Jim') & (F["last_name"] == 'Gray'))

Note: Constructing the same query as follows is incorrect:

# CAUTION: This is not same as the query above
(F["authors"][:]["first_name"] == 'Jim') & (F["authors"][:]["last_name"] == 'Gray')

The incorrect version will return all books that have at least one author with first name 'Jim' and at least one author with last name 'Gray', but they need not be the same author. A book with two authors named 'Jim Smith' and 'Alice Gray' will also match, which is not what is intended.

Parameters

nested_query (Query) – query expression to run on every nested document present within the given field path

Returns

query object that represents desired nested operations

Return type

Query

proximity(search_query, analyzer='default')[source]

Returns a new query object that when executed will perform a proximity query with the given input search query, tokenized with the specified analyzer on the given field.

Eg: Split the search query “jim phone number” on whitespace and search across all documents on the “email_body” field, while ranking documents where the search terms appear closer together higher:

F["email_body"].proximity("jim phone number", analyzer="default")

Proximity queries match the query string against the text and add a boost if terms are close together, i.e. the "jim phone number" query will score "Jim's phone number" higher than "Jim bought a new phone". Proximity queries also do spell checking with an English dictionary.

Parameters
  • search_query (str) – Text search string

  • analyzer (str) – Name of the analyzer to use on the search_query before performing the search. Defaults to “default”.

Returns

query object that represents the proximity query

Return type

Query

sum()[source]

Returns a new FieldRef that represents a sum() aggregation of the given field.

Returns

FieldRef object that represents the desired sum aggregation.

Return type

AggFieldRef

Limits

This section lists all the system-wide limits such as the biggest document that can be added to a collection or other limits relating to field sizes.

Client.MAX_DOCUMENT_SIZE_BYTES = 41943040

Maximum allowed size of a single document

Client.MAX_FIELD_NAME_LENGTH = 10240

Maximum allowed length of a field name

Client.MAX_FIELD_VALUE_BYTES = 4194304

Maximum allowed size of a field value

Client.MAX_ID_VALUE_LENGTH = 10240

Maximum allowed length of _id field value

Client.MAX_NAME_LENGTH = 2048

Maximum allowed length of a collection name

Client.MAX_NESTED_FIELD_DEPTH = 30

Maximum allowed levels of depth for nested documents
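
These constants can be used for rough client-side checks before writing documents; the sketch below is illustrative and the server remains the authority on enforcement:

import json
from rockset import Client

doc = {'_id': 'user-42', 'name': 'Jim Gray'}

# approximate size check against the documented limit
if len(json.dumps(doc).encode('utf-8')) > Client.MAX_DOCUMENT_SIZE_BYTES:
    raise ValueError('document exceeds MAX_DOCUMENT_SIZE_BYTES')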

Q and Query

rockset.Q(query, alias=None)[source]

All query objects are constructed using the Q(<collection-name>) query builder construct and are then followed by a chain of query operators to build the full query expression.

class rockset.Query(source=None, alias=None, child=None, children=None)[source]
aggregate(*fields)[source]

Returns a new query object that when executed will aggregate results from the current query object by the list of fields provided as input.

Field reference objects can include one of the supported aggregate functions such as max, min, avg, sum, count, countdistinct, approximatecountdistinct, collect as follows: <field_ref>.max(), <field_ref>.min(), … .

The list of fields provided as input can contain a mix of field references that include an aggregate function and field references that do not.

Parameters

fields (list of FieldRef) – fields you wish to aggregate by

Returns

new query object that includes the desired field aggregations

Return type

Query
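
An aggregation sketch; collection and field names are illustrative:

# average and maximum price per product category
q = Q('purchases').aggregate(
    F['category'],
    F['price'].avg(),
    F['price'].max())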

apply(to_field, target_query=None)[source]

Returns a new query object that when executed will match all documents where the value of the to_field provided as input matches any of the results from the current query object.

The current query object is expected to select a single field, which is commonly achieved using the select operator.

Read more about the apply operator in the Graph queries section.

Parameters
  • to_field (FieldRef) – field you wish to match the results with

  • target_query (Query) – Optional. Use this for cross-collection graph queries, where the apply needs to work on a different collection than what the current query object is referring to.

Example

Find all login IP addresses for user ‘u42’, and then find all activity logs from any of those IP addresses:

(Q('logins')
    .where(F['user'] == 'u42')
    .select(F['source_ip'])
    .apply(F['ip_address'], Q('activity_logs')))
Returns

new query object that includes the desired apply operation

Return type

Query

boost(factor)[source]

Returns a new query object that boosts the results from the given query object. Commonly used along with the Query.search method to rank certain query conditions higher than others within a single search query.

Eg:

Q('tv-series').search(
    F["title"].proximity("game of thrones").boost(1.5),
    F["article_body"].proxmity("game of thrones").boost(1.0),
    (F["popular"] == "yes").boost(3.0))
Parameters

factor (float) – boost factor that determines the relevance of the matching results. Default boost factor is 1.0. Needs to be a positive float > 0.0, but can be < 1.0. Higher boost factors will make matching documents more relevant.

Returns

new query object that incorporate the desired boost

Return type

Query

highest(limit, *fields)[source]

Returns a new query object that when executed will sort the results from the current query object by the list of fields provided as input, in descending order, and return the top N results, where N is defined by the limit parameter.

Parameters
  • limit (int) – top N results you wish to fetch

  • fields (list of FieldRef) – fields you wish to sort by, in descending order

Returns

new query object that returns top N descending

Return type

Query
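
For example, the ten highest-priced purchases by one user; names are illustrative:

q = (Q('purchases')
     .where(F['user'] == 'u42')
     .highest(10, F['price']))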

limit(limit, skip=0)[source]

Returns a new query object that when executed will only return a subset of the results. The query when executed will return no more than limit results after skipping the first skip number of results. The limit operator is most commonly used for pagination.

Parameters
  • limit (int) – maximum number of results to return

  • skip (int) – the number of results to skip

Returns

new query object that only returns the desired subset

Return type

Query
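
A simple pagination sketch; the page size is arbitrary:

page_size = 100

# second page of results: skip one page, return the next 100
q = Q('purchases').limit(page_size, skip=page_size)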

lookup(local_field, target_field=None, target_query=None, new_field=None)[source]

Lookup allows you to perform a LEFT OUTER JOIN between the results of the current query object and the results of the target_query provided as input. The LEFT OUTER JOIN operation is performed on the local_field from the results of the current query object and the target_field in the target_query. All results from the target_query whose target_field value matches the local_field value will be presented as an array value within the new_field in the post-JOIN results.

local_field: For every result document in the current query object, the value of the local_field is JOINed against the target_field in the target_query. All the documents that match from the target_query are presented in a new field whose name is defined by the new_field parameter.

This field is mandatory.

target_field: The field from the target_query results against which the local_field should be JOINed with.

This field is optional.

Default value for target_field is F["_id"] field.

target_query: Defined as a Query object, whose results will be used to JOIN against the local_field. The results from the target_query should include the target_field.

All fields selected from the target_query will be present within the new_field in the post JOIN results.

This field is optional.

Default value for target_query is Q(<source-collection>), which will return all documents in the current collection.

new_field: New field in every result document will contain an array value of all the matching results from the target_query.

If the local field is undefined in a result document, then the new field will also be undefined in that result document.

If the local field value is null in a result document, then the new field will also be null in that result document.

If the local field value is defined and not null in a result document, then the new field will have an array of all the documents from the target query results whose target field matches the local field value.

The new field will be an empty array if there are no matches.

This new field name is optional and defaults to local field name concatenated with :lookup.

Note

If the target_query does not contain target_field, then there will not be any matches with the local_field value, and thus the new_field will be an empty array for all results.

Parameters
  • local_field (FieldRef) – local field you wish to perform the LEFT OUTER JOIN. This is a required parameter.

  • target_field (FieldRef) – target field in the target query against which you wish to perform the LEFT OUTER JOIN. This is an optional parameter. Default value: F['_id'] field

  • target_query (Query) – defines a Query object, whose results will be used to match against the local_field. This is an optional parameter. Defaults to Q(<source-collection>)

  • new_field (FieldRef) – defines a new field where the results of LEFT OUTER JOIN will be present. This is an optional parameter. Defaults to local field name concatenated with “:lookup”.

Returns

new query object that includes the desired LEFT OUTER JOIN

Return type

Query
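
A lookup sketch; the collection and field names are illustrative:

# for every purchase, attach matching documents from the 'users'
# collection as an array under a new 'buyer' field
q = Q('purchases').lookup(
    local_field=F['user_id'],
    target_field=F['_id'],
    target_query=Q('users'),
    new_field=F['buyer'])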

lowest(limit, *fields)[source]

Returns a new query object that when executed will sort the results from the current query object by the list of fields provided as input, in ascending order, and return the first N results, where N is defined by the limit parameter.

Parameters
  • limit (int) – top N results you wish to fetch

  • fields (list of FieldRef) – fields you wish to sort by, in ascending order

Returns

new query object that returns top N ascending

Return type

Query

sample(ratio)[source]

Returns a new query object that when executed will only return a uniformly sampled subset of the results. When ratio is 0.1, 1 out of every 10 results will be returned.

Parameters

ratio (float) – sampling ratio between 0.0 and 1.0; 0.0 (0% sampling) will not return any results, while 1.0 (100% sampling) will return all results

Returns

new query object that only returns the desired subset

Return type

Query

score(name, code, context='')[source]

Returns a new query object that performs custom scoring on documents. A ':score' field is added to (or modified in) each document, and results are sorted by that field in descending order.

The first parameter is the name of the custom scorer. The only scorer currently supported is the javascript scorer, named 'js'.

For the javascript scorer, the second parameter is custom javascript code that is executed on each document. The code needs to export a function called 'score' that takes two parameters: 1) the document to be scored, 2) the context (see third parameter). Unfortunately, only ECMAScript 5.1 is supported at the moment.

The third parameter is the context, which is passed to the javascript function as-is. It can be either a string or a dictionary. A dictionary will be converted to JSON when passed to the javascript function.

Eg:

(Q('tv-series')
    .search(F["title"].proximity("game of thrones"))
    .score("js",
           ("var score = function(doc, context) "
            "{ return doc[':score'] * context['boost']; }"),
           {"boost": 1.2}))
Parameters
  • name (str) – name of the scorer, currently only “js” is supported

  • code (str) – In javascript scorer, custom javascript code that scores the document

  • context (str or dict) – In javascript scorer, context that will be passed to javascript function

Returns

new score query object with the given set of parameters

Return type

Query

search(*conditions)[source]

Returns a new query object that performs a search query against the given set of conditions.

Documents that match all the conditions will be considered more relevant and should appear before documents that only match a subset of the given conditions.

Unlike boolean AND queries, which only return documents that match all of the conditions, search queries perform a weak AND: if no documents match all of the given criteria, then the ones that match most of the conditions will be returned.

Each of the individual conditions could be boosted using the Query.boost method to control the relevance of each input query condition.

Eg:

Q('tv-series').search(
    F["title"].proximity("game of thrones").boost(1.5),
    F["article_body"].proxmity("game of thrones").boost(1.0),
    (F["popular"] == "yes").boost(3.0))
Parameters

conditions (list of Query) – conditions you want to search against

Returns

new search query object with the given set of conditions

Return type

Query

select(*fields)[source]

Returns a new query object that when executed will only include the list of fields provided as input.

Parameters

fields (list of FieldRef) – fields you wish to select

Returns

new query object that includes the desired field selection

Return type

Query

sql(**kwargs)[source]

Returns a tuple of (SQL, params) for the underlying query expression.
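
For instance, to inspect the SQL that will be sent to the server:

q = Q('logins').where(F['user'] == 'u42')
sql_text, params = q.sql()
print(sql_text)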

sqlbuild(sqlsel)[source]

Returns an SQLSelect object, which can be used to generate the SQL text for the query.

sqlexpression(**kwargs)[source]

Returns a text SQL fragment for the underlying query expression.

sqlprepare(sqlsel)[source]

Returns an SQLSelect object, which can be used to build the SQL version of the query.

where(query)[source]

Returns a new query object that when executed will only return documents that match the current query object AND the query object provided as input.

Parameters

query (Query) – the conjunct query object

Returns

new query object that returns documents in self AND query

Return type

Query
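
Putting several operators together; collection and field names are illustrative:

rs = Client()

q = (Q('logins')
     .where(F['status'] == 'failed')
     .select(F['user'], F['source_ip'])
     .limit(100))

for doc in rs.query(q):
    print(doc)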


Source

class rockset.Source(integration_name, **kwargs)[source]

NOTE: This class is auto generated by the swagger code generator program.

Do not edit the class manually.

to_dict()[source]

Returns the model properties as a dict

to_str()[source]

Returns the string representation of the model

property dynamodb

Gets the dynamodb of this Source.

configuration for ingestion from a dynamodb table

Returns

The dynamodb of this Source.

Return type

SourceDynamoDb

property file_upload

Gets the file_upload of this Source.

file upload details

Returns

The file_upload of this Source.

Return type

SourceFileUpload

property format_params

Gets the format_params of this Source.

format parameters for data from this source

Returns

The format_params of this Source.

Return type

FormatParams

property gcs

Gets the gcs of this Source.

configuration for ingestion from GCS

Returns

The gcs of this Source.

Return type

SourceGcs

property integration_name

Gets the integration_name of this Source.

name of integration to use

Returns

The integration_name of this Source.

Return type

str

property kinesis

Gets the kinesis of this Source.

configuration for ingestion from kinesis stream

Returns

The kinesis of this Source.

Return type

SourceKinesis

property redshift

Gets the redshift of this Source.

configuration for ingestion from Redshift

Returns

The redshift of this Source.

Return type

SourceRedshift

property s3

Gets the s3 of this Source.

configuration for ingestion from S3

Returns

The s3 of this Source.

Return type

SourceS3

property type

Gets the type of this Source.

has value source for a source object

Returns

The type of this Source.

Return type

str