This page covers how you can use a self-managed data source by adding documents to your Rockset [<<glossary:Collections>>](🔗) manually using the Write API.

## What is the Write API?

The **Write API** refers to the subset of APIs in the [<<glossary:Rockset API>>](🔗) used to insert, update, upsert, or delete documents in a Rockset collection. Use this option if Rockset does not support a managed integration with your desired data source, or if you do not want Rockset to automatically sync your data and prefer to manage syncing yourself.

This is in contrast to a managed integration, such as S3 or DynamoDB, where Rockset syncs your data automatically; if you choose not to use one, you are responsible for keeping your data in sync.

## Write API Limits

Peak write requests per second (WPS) using the Write API depend on the [<<glossary:Virtual Instance>>](🔗) size, as listed below. These limits apply collectively across the Add, Patch, and Delete Documents endpoints and to orgs using [Kafka Connect](🔗) as a source. Together with the peak ingest throughput limit, they determine how fast Rockset can receive data.

| Virtual Instance | Write Requests Per Second |
| --- | --- |
| FREE | 1 |
| NANO | 5 |
| MICRO | 10 |
| MILLI | 10 |
| XSMALL | 15 |
| SMALL | 25 |
| MEDIUM | 50 |
| LARGE | 100 |
| XLARGE | 200 |
| 2XLARGE | 400 |
| 4XLARGE | 800 |
| 8XLARGE | 1600 |
| 16XLARGE | 2400 |

### Response Error Codes

#### Invalid Input (400) and Payload Too Large (413)

Write API and Kafka Connect requests are capped at 10 MiB and 20,000 documents per request. If you see an error indicating "Payload size exceeds limit of 10,485,760 bytes" or "The number of documents specified in this request exceeds the maximum allowed limit of 20,000 documents", please try again with a smaller payload size, fewer documents per request, or use one of our [managed sources](🔗).
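To stay under both caps, a client can split large writes into batches before sending them. A minimal sketch (the helper name is illustrative; the limits are the 10 MiB and 20,000-document caps above):

```python
import json

MAX_BYTES = 10 * 1024 * 1024   # 10 MiB payload cap per request
MAX_DOCS = 20_000              # document cap per request

def batch_documents(docs, max_bytes=MAX_BYTES, max_docs=MAX_DOCS):
    """Yield lists of documents, each under both request limits."""
    batch, batch_bytes = [], 0
    for doc in docs:
        doc_bytes = len(json.dumps(doc).encode("utf-8"))
        if batch and (len(batch) >= max_docs or batch_bytes + doc_bytes > max_bytes):
            yield batch
            batch, batch_bytes = [], 0
        batch.append(doc)
        batch_bytes += doc_bytes
    if batch:
        yield batch
```

Each yielded batch can then be sent as one Write API request.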

#### Too Many Requests (429)

To make sure your VI is sized appropriately to your ingest needs, monitor for the [429 Too Many Requests status code](🔗). The client can receive the 429 error code in two cases:

  • The client is sending data faster than the Virtual Instance peak throughput limit

    • The error message returned by the server is: `Your account is configured with a maximum write rate limit and you have reached this limit.`

    • Use appropriate retry, backoff and jitter strategies if the client hits this error.

      Here is a good [guide](🔗) on how to implement this on the client side.

    • If the application encounters 429s over a large retry count (10 or more), check the [streaming ingest metrics](🔗). If the application requires high ingest throughput, consider increasing your VI size to avoid throttling.

  • The client is sending more writes per second than the Virtual Instance limit

    • The error message returned by the server is: `Your account is configured with a maximum write requests per second limit and you have reached this limit.`

    • Use appropriate retry, backoff and jitter strategies if the client hits this error.

      Here is a good [guide](🔗) on how to implement this on the client side.

      If the application encounters 429 for a large retry count (10 or more), reach out to [Rockset Customer Support](🔗).

    • If the client needs to send more requests, consider buffering records on the client and sending a batch of records (>100 KB in size) per Write API request.

    • If the workload still requires a higher write rate, consider forwarding the documents to Amazon Kinesis or a managed Kafka service like Confluent or Amazon MSK, and then use that integration to sync data with Rockset. Since a managed integration, like Kinesis, is pull-based, the limits on how fast Rockset can pull data depend only on the source.
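The retry guidance above can be sketched as exponential backoff with full jitter (the function name and parameters here are illustrative, not part of any Rockset SDK):

```python
import random
import time

def send_with_backoff(send_request, max_retries=10, base_delay=0.1, max_delay=30.0):
    """Retry a write on 429 with exponential backoff and full jitter.

    `send_request` is any callable returning a response object with a
    `status_code` attribute (e.g. a response from the `requests` library).
    """
    for attempt in range(max_retries):
        response = send_request()
        if response.status_code != 429:
            return response
        # Full jitter: sleep a random amount up to the exponential cap.
        delay = min(max_delay, base_delay * (2 ** attempt))
        time.sleep(random.uniform(0, delay))
    raise RuntimeError(f"Write still throttled after {max_retries} retries")
```

Spreading retries out with jitter avoids synchronized retry storms when many clients are throttled at once.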

## Create an Empty Collection

While you can directly add documents to any existing collection, you will need to first create an empty collection if you intend to use the Rockset API to add documents to a **new** collection.

You can create an empty collection by navigating to [Collections > Create Collection > Write API](🔗) in the Rockset Console.

The Rockset API also exposes a [Create Collection](🔗) endpoint enabling you to create an empty collection from your application code.
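As a sketch, a direct REST call might look like the following, assuming an API key in the `ROCKSET_API_KEY` environment variable and an endpoint path matching the Create Collection reference (the workspace and collection names are placeholders):

```python
import json
import os
import urllib.request

# Assumptions: the API server URL and key for your Rockset account.
API_SERVER = os.environ.get("ROCKSET_API_SERVER", "https://api.usw2a1.rockset.com")
API_KEY = os.environ.get("ROCKSET_API_KEY", "")

def create_collection_request(workspace, collection_name):
    """Build the HTTP request that creates an empty collection."""
    url = f"{API_SERVER}/v1/orgs/self/ws/{workspace}/collections"
    return urllib.request.Request(
        url,
        data=json.dumps({"name": collection_name}).encode("utf-8"),
        headers={
            "Authorization": f"ApiKey {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To send the request: urllib.request.urlopen(create_collection_request("commons", "my_collection"))
```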

## Add Documents

The Rockset API exposes an [Add Documents](🔗) endpoint so that you can insert data directly into your collections from your application code.

For your convenience, Rockset also maintains SDKs for [Node.js](🔗), [Python](🔗), [Java](🔗), and [Go](🔗). Each SDK has its own set of methods for using the REST API to add documents which you can find in its documentation.

Additions made via the Add Documents endpoint will always go through the ingest transformation.
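A direct REST call to Add Documents can be sketched like this (the path follows the endpoint reference; the workspace, collection, and document contents are placeholders):

```python
import json
import os
import urllib.request

# Assumptions: the API server URL and key for your Rockset account.
API_SERVER = os.environ.get("ROCKSET_API_SERVER", "https://api.usw2a1.rockset.com")
API_KEY = os.environ.get("ROCKSET_API_KEY", "")

def add_documents_request(workspace, collection, documents):
    """Build an Add Documents request; the body wraps documents in a `data` array."""
    url = f"{API_SERVER}/v1/orgs/self/ws/{workspace}/collections/{collection}/docs"
    return urllib.request.Request(
        url,
        data=json.dumps({"data": documents}).encode("utf-8"),
        headers={
            "Authorization": f"ApiKey {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To send: urllib.request.urlopen(add_documents_request("commons", "users", [{"name": "Ada"}]))
```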

## Delete Documents

To delete existing documents from your collections, simply specify the `_id` fields of the documents you wish to remove and make a request to the [Delete Documents](🔗) endpoint.
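The request body can be built with a small helper (a sketch; the `_id` values are whatever your documents carry, and the body shape follows the endpoint reference):

```python
import json

def delete_documents_body(ids):
    """Request body for Delete Documents: one `_id` entry per document."""
    return json.dumps({"data": [{"_id": _id} for _id in ids]})

# The body is sent to the collection's /docs endpoint using the HTTP DELETE method.
```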

## Patch Documents

To update existing documents in a collection using the Rockset API, you can make requests to the [Patch Documents](🔗) endpoint. For each existing document you wish to update, you will need to specify the following two parameters:

  1. `_id` holding the `_id` field (primary key) of the document which is being patched

  2. `patch` holding a list of patch operations to be applied to that document, following the [JSON Patch](🔗) standard.

Each patch operation is a dictionary with a key `op` (string) indicating the patch operation, and additional keys `path` (string), `value` (object), and `from` (string) which are used as required arguments for that operation. The required arguments differ from one operation type to another. The JSON Patch standard defines several types of patch operations, their arguments, and their behavior. Refer to the [JSON Patch documentation](🔗) for more details.

If a patch operation’s argument is a field path, then it is specified using the JSON Pointer standard defined by the [IETF](🔗). In essence, field paths are represented as a string of tokens separated by `/` characters. These tokens either specify keys in objects or indexes into arrays, and arrays are 0-based.

For example, consider this document:

```json
{
  "biscuits": [
    { "name": "Digestive" },
    { "name": "Choco Leibniz" }
  ]
}
```
The path `"/biscuits"` would point to the `biscuits` array, while the path `"/biscuits/1/name"` would point to `"Choco Leibniz"`.

There are six supported JSON patch operations:

  1. `add` which adds a value (specified by the _value_ parameter) to an object or inserts it into an array (specified by the _path_ parameter). In the case of an array, the value is inserted before the given index. The `-` character can be used instead of an index to insert at the end of an array. The parameters `path` and `value` are required for this operation.

  2. `remove` which removes the first instance of an object or element of an array (specified by the _path_ parameter). The parameter `path` is required for this operation.

  3. `replace` which replaces the first instance of an object or element of an array (specified by the _path_ parameter) with a value (specified by the _value_ parameter). This operation is equivalent to a `remove` operation immediately followed by an `add` operation. The parameters `path` and `value` are required for this operation.

  4. `copy` which copies a value from one location (specified by the _from_ parameter) to another location (specified by the _path_ parameter) within the JSON document. The parameters `path` and `from` are required for this operation.

  5. `move` which moves a value from one location (specified by the _from_ parameter) to another location (specified by the _path_ parameter) within the JSON document. The parameters `path` and `from` are required for this operation.

  6. `test` which checks that the value at a location (specified by the _path_ parameter) is equal to a given value (specified by the _value_ parameter). If the test fails, the patch as a whole will not apply. The parameters `path` and `value` are required for this operation.
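Putting the pieces together, a Patch Documents request body might look like the following sketch (the `_id` and the patched fields are illustrative; the `patch` entries follow the JSON Patch operations above):

```python
import json

# Illustrative request body: each entry names a document by `_id`
# and lists the JSON Patch operations to apply to it.
patch_body = {
    "data": [
        {
            "_id": "doc-1",  # hypothetical primary key
            "patch": [
                # Insert a new element at the end of the `biscuits` array.
                {"op": "add", "path": "/biscuits/-", "value": {"name": "Ginger Nut"}},
                # Replace the name of the second biscuit (arrays are 0-based).
                {"op": "replace", "path": "/biscuits/1/name", "value": "Digestive"},
                # Remove the first biscuit entirely.
                {"op": "remove", "path": "/biscuits/0"},
            ],
        }
    ]
}

print(json.dumps(patch_body, indent=2))
```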

Patch Warning

Patches made via the Patch API endpoint will **never** go through the ingest transformation. Patches made using `_op` in an `INSERT INTO` query (refer to the next section for more information) will always go through the ingest transformation.

## `INSERT INTO` to Add, Delete, or Patch Documents

You can add, delete, or patch documents using an [`INSERT INTO`](🔗) statement, which allows you to **add** the results of a query into a collection. To **patch** documents, `SELECT` the [`_id`](🔗) field of an existing document in that query; the statement will update the existing document rather than add a new one. To **delete** documents, specify the `_id` and set [`_op`](🔗) to `DELETE`.

For example, you can delete documents by selecting their `_id` values and setting `_op` to `DELETE` in an `INSERT INTO` statement.



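A hypothetical delete of this shape might look as follows (the collection name `commons.users` and the filter are placeholders):

```sql
-- Deletes every matching document by emitting its _id with _op = 'DELETE'.
INSERT INTO commons.users
SELECT _id, 'DELETE' AS _op
FROM commons.users
WHERE is_active = false
```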

You can likewise patch documents using `_op` and an `INSERT INTO` statement.



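A hypothetical patch of this shape (again with placeholder names, and assuming an `_op` value of `'UPDATE'` that merges the listed fields into the existing document):

```sql
-- Selecting an existing document's _id updates that document in place.
INSERT INTO commons.users
SELECT _id, 'UPDATE' AS _op, 'San Mateo' AS city
FROM commons.users
WHERE city = 'San Francisco'
```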

This method of using `INSERT INTO` statements to add, patch, or delete documents is **not recommended** and should only be used to perform one-off fixes, because it inefficiently occupies query execution resources that are not optimized for data ingest. Instead, we generally recommend that you use the [<<glossary:Rockset API>>](🔗) to regularly update data in your collections.

Understanding `"num_docs_inserted"` and `"status": "ADDED"`

After ingesting a document with an `_op` field specified, the query results include `"num_docs_inserted"`, and the API response includes `"status": "ADDED"` to signify that the document was _added to the processing queue_. This does not imply that the document was added to the collection, as the operation occurs _after_ the document has left the queue.

For example, sending a query with `_op = DELETE` will return `"status": "ADDED"`, signifying that the document was added to the queue. When the document leaves the queue, the operation is triggered and the corresponding document is deleted (assuming the `_id` is valid).

## Upload a File

To manually create a collection using a file as your data source, you can do so from the Rockset Console by navigating to [Collections > Create Collection > File Upload](🔗). You can also upload files to any existing collections (or to this one after it has been created). The file formats currently supported include JSON, CSV, XML, Parquet, XLS and PDF.

## Verify Collection is Updated

Before querying a collection, you can verify that specific documents have been added, deleted, or patched by pairing the Write API with the [Get Collection Commit API](🔗). The Write API returns written offsets as `last_offset`, which follows the encoding format below:



You can verify that the data at the returned offset is queryable by making requests to the [Get Collection Commit API](🔗) endpoint. Simply pass the `last_offset` in the `name` field and poll this endpoint until the `passed` field in the response is `true`. This signifies that the collection has been updated with the data from the associated write request, so any subsequent queries are guaranteed to include the data from that Write API request.
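The polling loop can be sketched generically as follows; the `check_commit` callable stands in for an actual request to the Get Collection Commit endpoint and is assumed to return the decoded JSON response:

```python
import time

def wait_for_commit(check_commit, last_offset, timeout=60.0, interval=0.5):
    """Poll until the commit response reports `passed: true` for the offset.

    `check_commit(last_offset)` is assumed to call the Get Collection
    Commit endpoint with the offset in the `name` field and return the
    parsed JSON body, e.g. {"data": {"passed": false}}.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        response = check_commit(last_offset)
        if response.get("data", {}).get("passed"):
            return True
        time.sleep(interval)
    return False
```

Once `wait_for_commit` returns `True`, queries against the collection will include the data from that write request.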