# domino-db Advanced Topics

This page describes advanced topics of domino-db.
## Compute with form

When you create, read, or update a document, it is sometimes useful to compute document items based on a form stored in the database. For example, consider a Contact form with `FirstName` and `LastName` items. The Contact form might have an input validation formula on `LastName` like this:

```
@If( LastName = ""; @Failure( "You must enter a last name" ); @Success )
```
When you use domino-db to create a Contact document, you can choose to compute-with-form before the document is saved to the database. Here's a trivial example of a request that should fail to create a document:

```javascript
const result = await database.bulkCreateDocuments({
  documents: [
    {
      Form: 'Contact',
      FirstName: 'Joe',
    },
  ],
  computeOptions: {
    computeWithForm: true,
  },
});
```
Since there is no `LastName` item sent to the server, the request should fail. To be precise, the `BulkResponse` object should contain a `documents` array with one element, and that element should include an `@error` property indicating a validation error occurred. (Of course, this is true only if the Contact form has the necessary input validation formula.)
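As a sketch of how you might check for that condition, the following helper scans a `BulkResponse`-shaped object for `@error` entries. The `failedDocuments` name is our own for illustration, not part of the domino-db API:

```javascript
// Hypothetical helper (not part of domino-db): collect the elements of a
// BulkResponse documents array that carry an '@error' property.
const failedDocuments = response =>
  response.documents.filter(doc => doc['@error'] !== undefined);

// Example with a response shaped like the failure described above:
const response = {
  documents: [{ '@error': 'Validation error: You must enter a last name' }],
};
console.log(failedDocuments(response).length); // 1
```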
Sometimes it also makes sense to compute-with-form on a read operation. Let's say the Contact form also includes a computed-for-display item called `FullName`. The formula for `FullName` is a concatenation of two other items:

```
FirstName + " " + LastName
```

The `FullName` field is not actually stored on the document, but you can read the item like this:
```javascript
result = await database.bulkReadDocumentsByUnid({
  unids,
  itemNames: ['FullName'],
  computeOptions: {
    computeWithForm: true,
  },
  onErrorOptions: onError.CONTINUE,
});
```
Because `computeOptions.computeWithForm` is `true`, the response should include any computed-for-display items included in the `itemNames` array.

The syntax for the `computeOptions` object is the same for all create, read, and update requests. However, keep in mind that computed-for-display items are calculated only on read operations.
- `computeOptions` {`Object`}
  - `computeWithForm` {`boolean`} -- `true` if you want to compute items based on a form stored in the database. The default is `false`.
  - `ignoreComputeErrors` {`boolean`} -- `true` if you want to continue processing a document with a compute error. This option has no effect unless `computeWithForm` is also `true`. The default is to stop processing a document with a compute error.
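Putting those two options together, a request that runs the form's formulas but keeps going past compute errors might pass an options object like this (a sketch; only the `computeOptions` shape comes from the description above):

```javascript
// computeOptions sketch: compute with the form, but don't stop processing
// a document when a compute error occurs.
const computeOptions = {
  computeWithForm: true,
  ignoreComputeErrors: true,
};

// Pass the object unchanged to any create, read, or update request, e.g.:
// await database.bulkCreateDocuments({ documents, computeOptions });
```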
Obviously, this is a relatively advanced topic. Your compute results will vary based on the design of forms stored in your target database. For more information about computed items, see the "Editable and computed fields" topic in the Domino Designer documentation.
## Query arguments

When you construct a query to use with domino-db, it often makes sense to define the query exactly. For example, the following query finds all contact documents where `LastName` is Aardman:

```
Form = 'Contact' and LastName = 'Aardman'
```

On the other hand, you sometimes want to construct a query with variables and then substitute those variables at run time. For example, the following query defines a single variable (`?ln`), the value of which is unknown until run time:

```
Form = 'Contact' and LastName = ?ln
```
And the following sample shows how you might use such a query string in a call to `bulkReadDocuments`:

```javascript
const { useServer } = require('@domino/domino-db');
const serverConfig = require('./server-config.js');
const databaseConfig = require('./database-config.js');

useServer(serverConfig).then(async server => {
  const database = await server.useDatabase(databaseConfig);
  const documents = await database.bulkReadDocuments({
    query: "Form = 'Contact' and LastName = ?ln",
    queryArgs: [
      {
        name: 'ln',
        value: lastNameValue,
      },
    ],
  });
  // documents is an array of documents -- one for each
  // document that matches the query
});
```
> **NOTE:** Argument substitution is a core feature of the Domino Query Language (DQL). It's important to understand that the substitution happens on the Domino server while Proton is processing your request.
Query arguments can be either named, as in the previous example, or unnamed. When unnamed, you substitute variables by ordinal position in the query. The following call to `bulkReadDocuments` is essentially equivalent to the previous one:

```javascript
const documents = await database.bulkReadDocuments({
  query: "Form = 'Contact' and LastName = ?",
  queryArgs: [
    {
      ordinal: 1,
      value: lastNameValue,
    },
  ],
});
```
You can specify a `queryArgs` array when calling `bulkReadDocuments` or any other function that requires a `query` string. Each element of the array must be either a value or an object with the following properties:

- `ordinal` {`number`} Required if no `name` is specified. The value of `ordinal` is the one-based position of the corresponding unnamed variable in the query string.
- `name` {`string`} Required if no `ordinal` is specified. The value of `name` must match one of the named variables in the query string. For example, to match variable `?ln`, specify `ln`.
- `value` {`string`|`number`|`Object`} Required. This is the value to substitute for the matching variable.
The following example defines a `query` string and a `queryArgs` array with three elements:

```javascript
query: 'LastName = ? and Number = ? and Date = ?',
queryArgs: [
  {
    ordinal: 1,
    value: stringValue,
  },
  {
    ordinal: 2,
    value: numberValue,
  },
  {
    ordinal: 3,
    value: {
      type: 'datetime',
      data: datetimeValue,
    },
  },
],
```
Even more simply, you can imply the ordinal position of each argument like this:

```javascript
query: 'LastName = ? and Number = ? and Date = ?',
queryArgs: [
  stringValue,
  numberValue,
  {
    type: 'datetime',
    data: datetimeValue,
  },
],
```
Let's assume you use the above `query` and `queryArgs` properties in a call to `bulkReadDocuments`. Before executing the query on the server, DQL should substitute the arguments as follows:

- Assuming `stringValue` is a string, DQL substitutes query variable 1 with the TYPE_TEXT value.
- Assuming `numberValue` is a number, DQL substitutes variable 2 with the TYPE_NUMBER value.
- Assuming `datetimeValue` is a valid ISO8601 date string, DQL substitutes variable 3 with a TYPE_TIME value.
If you specify a query argument value of a different type (e.g. `boolean` or `undefined`), you should expect the operation to fail.
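Since those failures only surface on the server, you may want to screen argument values client-side first. The following guard is a sketch of our own, not a domino-db API; it accepts the documented string, number, and object value types and rejects everything else:

```javascript
// Hypothetical guard (not part of domino-db): accept only the value types
// documented for queryArgs -- string, number, or an object such as
// { type: 'datetime', data: ... }.
const isValidArgValue = value =>
  typeof value === 'string' ||
  typeof value === 'number' ||
  (typeof value === 'object' && value !== null);

console.log(isValidArgValue('Aardman'));                                // true
console.log(isValidArgValue({ type: 'datetime', data: '2018-05-10' })); // true
console.log(isValidArgValue(true));                                     // false
console.log(isValidArgValue(undefined));                                // false
```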
## Attachments

### Reading attachments

Whether you are using `Document::readAttachmentStream()`, `Database::bulkReadAttachmentStream()`, or `Database::bulkReadAttachmentStreamByUnid()`, domino-db returns a promise that resolves to a readable stream. For example, consider the following code excerpt:

```javascript
const readable = await document.readAttachmentStream({
  fileNames: ['photo_1.jpg', 'photo_2.jpg'],
});
```
In the above example, `readable` is a stream object used to read the contents of two attachments. To begin reading the attachments, you should register listeners for the following stream events:

```javascript
readable.on('file', file => {
  // A file attachment is starting
});

readable.on('data', data => {
  // A chunk of attachment data has arrived
});

readable.on('eof', () => {
  // The file attachment has ended
});

readable.on('end', () => {
  // The stream has been closed
});

readable.on('error', e => {
  // An error occurred
});
```
> **NOTE:** While a domino-db attachment stream may appear to be an instance of the Node.js Stream class, it is really an instance of EventEmitter. Currently, you cannot pipe an attachment stream, but you can use other functions shared by `EventEmitter` and `Stream`.
#### Event: 'file'

The `'file'` event is emitted each time the start of a new attachment arrives from the server.

```javascript
readable.on('file', file => {
  // A file attachment is starting
});
```

The `file` argument is an object with the following properties:
- `unid` {`string`} The UNID of the associated document.
- `fileName` {`string`} The attachment file name.
- `fileSize` {`number`} The attachment size in bytes.
- `modified` {`object`} The last modified date of the attachment.
- `error` {`object`} An error object. This property is set only when an attachment-level error occurs (rare).
#### Event: 'data'

The `'data'` event is emitted each time a chunk of attachment data arrives from the server. As of domino-db-1.4.0, a second argument is passed that is a `Buffer` view of the data array in the first argument.

```javascript
readable.on('data', (data, buffer) => {
  // A chunk of attachment data has arrived
});
```
The `data` argument is a `Uint8Array` of attachment data. This is a variable-length buffer of raw bytes. Depending on the size of the attachment, there might be many `'data'` events between a pair of `'file'` and `'eof'` events.

The `buffer` argument is a `Buffer` view of the `data` argument.
#### Event: 'eof'

The `'eof'` event is emitted when an individual file is finished.

```javascript
readable.on('eof', () => {
  // The file attachment has ended
});
```
When Proton is streaming multiple attachments, the `'eof'` event might be followed by another `'file'` event.
#### Event: 'end'

The `'end'` event is emitted once when Proton has stopped sending data and the stream is closed.

```javascript
readable.on('end', () => {
  // The stream has been closed
});
```
#### Event: 'error'

The `'error'` event is emitted only when there is a fatal error at the protocol level.

```javascript
readable.on('error', e => {
  // An error occurred
});
```

The `e` argument is an instance of the `DominoDbError` class. For example, if domino-db is unable to connect to Proton, the `'error'` event is emitted once and the stream is closed.
#### Read example

Consider the following example:

```javascript
const readable = await document.readAttachmentStream({
  fileNames: ['photo_1.jpg', 'photo_2.jpg', 'photo_3.jpg'],
  chunkSizeKb: 32,
});

readable.on('file', file => {
  // A file attachment is starting
});

readable.on('data', data => {
  // A chunk of attachment data has arrived
});

readable.on('eof', () => {
  // The file attachment has ended
});

readable.on('end', () => {
  // The stream has been closed
});

readable.on('error', e => {
  // An error occurred
});
```
Let's say the associated document includes two out of the three requested attachments. It includes photo_1.jpg and photo_3.jpg, but NOT photo_2.jpg. The expected result is:

1. The `'file'` event is emitted once for photo_1.jpg.
2. The `'data'` event is emitted N times for photo_1.jpg. If the entire file fits in a single chunk, the event is emitted only once for photo_1.jpg. Otherwise, the event is emitted multiple times for photo_1.jpg.
3. The `'eof'` event is emitted once for photo_1.jpg.
4. Zero events are emitted for photo_2.jpg because the attachment doesn't exist.
5. The `'file'` event is emitted once for photo_3.jpg.
6. The `'data'` event is emitted N times for photo_3.jpg.
7. The `'eof'` event is emitted once for photo_3.jpg.
8. After all the other events are complete, the `'end'` event is emitted once, indicating the stream has closed.
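The event sequence above suggests a simple way to gather each attachment into memory. The following `collectAttachments` helper is a sketch of our own built only on the documented events; it resolves to a Map of file name to Buffer once the `'end'` event fires:

```javascript
// Hypothetical helper (not part of domino-db): accumulate each attachment's
// chunks into a single Buffer, keyed by file name.
const collectAttachments = readable =>
  new Promise((resolve, reject) => {
    const files = new Map();
    let currentName;   // file name from the most recent 'file' event
    let chunks = [];   // data chunks for the attachment in progress
    readable.on('file', file => {
      currentName = file.fileName;
      chunks = [];
    });
    readable.on('data', data => {
      chunks.push(Buffer.from(data));
    });
    readable.on('eof', () => {
      files.set(currentName, Buffer.concat(chunks));
    });
    readable.on('end', () => resolve(files));
    readable.on('error', reject);
  });
```

Used as `const files = await collectAttachments(readable);`, this yields one Buffer per attachment that actually exists on the document; a missing attachment such as photo_2.jpg simply never appears in the Map.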
### Writing attachments

When you use `Database::bulkCreateAttachmentStream()`, domino-db returns a promise that resolves to a writable stream. For example, consider the following code excerpt:

```javascript
const writable = await database.bulkCreateAttachmentStream({});
```

In this example, `writable` is a stream object used to write the contents of one or more attachments. To begin writing the attachments, you should first register listeners for the following stream events:

```javascript
writable.on('response', response => {
  // The attachment content was written to the document(s) and a
  // response has arrived from the server
});

writable.on('error', e => {
  // An error occurred and the stream is closed
});
```
For details, see the writable event descriptions beginning with Event: 'response'.

After registering your listeners, you use the `file()` and `write()` functions to write your data. And you use the `end()` function to close the stream.

```javascript
writable.file({
  unid: '49CDF4368D68B2C185258359007B465C',
  fileName: 'example.txt',
});

writable.write(new Uint8Array([97, 98, 99, 100, 101, 102, 103])); // abcdefg
writable.write(new Uint8Array([65, 66, 67, 68, 69])); // ABCDE

writable.end();
```
> **NOTE:** While a domino-db attachment stream may appear to be an instance of the Node.js Stream class, it is really an instance of EventEmitter. Currently, you cannot pipe to a writable attachment stream, but you can use other functions shared by `EventEmitter` and `Stream`.
#### Event: 'response'

The `'response'` event is emitted once after all attachment content has been written to the server.

```javascript
writable.on('response', response => {
  // The attachment content was written to the document(s) and a
  // response has arrived from the server
});
```

The `response` argument is an object with a single `attachments` property -- itself an array of objects. Each object in the `attachments` array has the following properties:
- `unid` {`string`} The UNID of the associated document.
- `fileName` {`string`} The attachment file name.
- `fileSize` {`number`} The size of the attachment.
- `modified` {`object`} The last modified date of the attachment.
#### Event: 'drain'

The `'drain'` event is emitted after the write stream has drained its internal buffer and is ready to accept more data.

```javascript
writable.on('drain', () => {
  // The write stream has drained and you may safely
  // write more data
});
```

It's especially important to listen for this event when you are writing a large attachment. For more information on the `'drain'` event, see Write stream draining example.
#### Event: 'error'

The `'error'` event is emitted only when an error occurs.

```javascript
writable.on('error', e => {
  // An error occurred and the stream is closed
});
```

The `e` argument is an instance of the `DominoDbError` class. For example, if domino-db is unable to connect to Proton, the `'error'` event is emitted once and the stream is closed.
#### writable.file()

Marks the beginning of a new attachment in the stream.

```javascript
writable.file({
  unid: '49CDF4368D68B2C185258359007B465C',
  fileName: 'example.txt',
});
```

This function accepts a single object with the following properties:

- `unid` {`string`} The UNID of the target document.
- `fileName` {`string`} The attachment file name.

Both properties are required.
#### writable.write()

Writes a chunk of data to the stream.

```javascript
writable.write(new Uint8Array([97, 98, 99, 100, 101, 102, 103])); // abcdefg
writable.write(new Uint8Array([65, 66, 67, 68, 69])); // ABCDE
```

This function accepts an instance of `Uint8Array` containing a chunk of attachment data. Especially when writing a large attachment, you should write the data in reasonably sized chunks (e.g. 32 kilobytes).

As of domino-db-1.4.0, you can also pass a `Buffer` to `writable.write()`. This allows greater compatibility when reading from the filesystem using Node's `fs` module.

```javascript
writable.write(buffer);
```
The `write()` function returns a `boolean` value indicating whether the stream is successfully draining its internal buffer. A value of `true` indicates the stream buffer is draining. A value of `false` indicates the internal buffer has reached its high water mark. For more details, see Write stream draining example.
#### writable.end()

Closes the stream.

```javascript
writable.end();
```

This function has no arguments.
#### Write example

Consider the following example:

```javascript
const writable = await database.bulkCreateAttachmentStream({});

writable.on('error', e => {
  // An error occurred and the stream is closed
});

writable.on('response', response => {
  // The attachment content was written to the document and a
  // response has arrived from the server
});

writable.file({
  unid: '49CDF4368D68B2C185258359007B465C',
  fileName: 'foo.txt',
});

writable.write(new Uint8Array([97, 98, 99, 100, 101, 102, 103]));
writable.write(new Uint8Array([65, 66, 67, 68, 69]));

writable.end();
```

The expected result is that the `'response'` event is emitted once, shortly after `writable.end()` returns. The `response` object is expected to be something like this (when serialized to JSON):
```json
{
  "attachments": [
    {
      "unid": "49CDF4368D68B2C185258359007B465C",
      "fileName": "foo.txt",
      "fileSize": 12,
      "modified": {
        "type": "datetime",
        "data": "2018-05-10T18:20:15.31Z"
      }
    }
  ]
}
```
Of course, if the example is modified to write multiple attachments, the expected response would include more than one attachment.
#### Write example with buffers

```javascript
const fsread = fs.createReadStream('foo.txt');
fsread.on('open', async () => {
  const writable = await database.bulkCreateAttachmentStream({});
  writable.on('error', e => {
    // An error occurred and the stream is closed
  });
  writable.on('response', response => {
    // The attachment content was written to the document and a
    // response has arrived from the server
  });
  writable.file({
    unid: '49CDF4368D68B2C185258359007B465C',
    fileName: 'foo.txt',
  });
  // Forward each chunk from the file read stream to the attachment stream
  fsread.on('data', data => writable.write(data));
  fsread.on('end', () => {
    writable.end();
  });
});
```
#### Write stream draining example

The previous example wrote a trivially small attachment to a single document. When you write a large attachment, or even several medium-sized attachments, you should be aware of the write stream's internal buffer. When the buffer doesn't drain quickly enough, it can be dangerous to write more data to the stream.

The following example assumes `writable` is a domino-db attachment stream and `buffer` is a large `Buffer` of binary data. It writes the data to a single attachment in 16-kilobyte chunks:
```javascript
let error;

writable.on('error', e => {
  error = e;
});

writable.on('response', response => {
  // The attachment content was written to the document and a
  // response has arrived from the server
});

// Write the image in n chunks
let offset = 0;
const writeRemaining = () => {
  if (error) {
    return;
  }
  let draining = true;
  while (offset < buffer.length && draining) {
    const remainingBytes = buffer.length - offset;
    let chunkSize = 16 * 1024;
    if (remainingBytes < chunkSize) {
      chunkSize = remainingBytes;
    }
    draining = writable.write(buffer.slice(offset, offset + chunkSize));
    offset += chunkSize;
  }
  if (offset < buffer.length) {
    // Buffer is not draining. Write some more once it drains.
    writable.once('drain', writeRemaining);
  } else {
    writable.end();
  }
};

writable.file({ unid, fileName });
writeRemaining();
```
In this example, `writeRemaining()` is a local function. If the stream's internal buffer is draining, `writeRemaining()` is called only once. Otherwise, it pauses until the stream is drained. It uses `writable.once()` to listen for the `'drain'` event:

```javascript
// Buffer is not draining. Write some more once it drains.
writable.once('drain', writeRemaining);
```

The stream calls `writeRemaining()` back when the `'drain'` event fires. In fact, the stream may call `writeRemaining()` several times until all the data is written. At that point, `writeRemaining()` calls `writable.end()` and the `'response'` event should fire.
### Deleting attachments

To delete attachments, use `Document::deleteAttachments()`, `Database::bulkDeleteAttachments()`, or `Database::bulkDeleteAttachmentsByUnid()`.

> **NOTE:** There are two kinds of attachments in Notes and Domino. A normal attachment is associated with a rich text item and appears as a hot spot in the Notes client. A V2-style attachment is not associated with a rich text item and appears "below the line" in the Notes client. The above domino-db functions work best with V2-style attachments. When you use domino-db to delete a normal attachment, it does not delete the associated rich text hot spot.
## Rich Text

Domino and Notes use rich text fields to store a variety of objects, including text, tables, document links, bitmaps, and OLE links. Rich text fields have several advantages over other types of fields:

- Paragraphs in rich text can have mixed attributes, such as indenting, justification, and spacing.
- Text in rich text can have mixed attributes, such as font face, color, and point size.
- A single rich text field can hold several megabytes of data.

The domino-db API for reading and writing rich text to and from documents treats rich text data as a stream. Since rich text data can exceed the maximum size of a single Notes item, multiple items of the same name may be used to represent rich text data. The domino-db API concatenates rich text data from multiple items with the same name into a single stream. The data type word at the beginning of every Notes item is never included in the stream. The data format is the canonical CD record format (little-endian byte ordering and byte-aligned data structures). Strings are formatted in the Lotus Multibyte Character Set format (also known as LMBCS).
### Rich Text Fields

Since rich text data can span multiple document items, we define a rich text field as one or more Notes items of the same name that contain rich text data.
### Font Table

A font table is necessary when a rich text field in a document contains text that uses font faces other than the standard ones (Times Roman, Helvetica, and Courier). A font table is a special item named `$Fonts`, formatted as rich text, that associates a font ID with a platform-independent description of the font. When moving rich text data from one document to another, be sure to include the font table so that the rendering program (i.e. the Notes client) can properly render the referenced fonts.
### Reading Rich Text

All of the functions that read rich text (`Document::readRichTextStream()`, `Database::bulkReadRichTextStream()`, and `Database::bulkReadRichTextStreamByUnid()`) return a promise that resolves to a readable stream object. You must implement event handlers on the readable stream in order to receive the data and the metadata that identifies the stream content.

```javascript
readable.on('field', (field) => {
  // A rich text field is starting.
});

readable.on('data', (buffer) => {
  // Rich text data is being received
});

readable.on('eof', () => {
  // A field is ending.
});

readable.on('error', (e) => {
  // A request-level error has occurred. The connection will be closed.
});

readable.on('end', () => {
  // The read request has ended and the connection has closed.
});
```
> **NOTE:** While a domino-db rich text stream may appear to be an instance of the Node.js Stream class, it is really an instance of EventEmitter. Currently, you cannot pipe a rich text stream, but you can use other functions shared by `EventEmitter` and `Stream`.
#### Event: 'field'

The `'field'` event signals the start of a rich text field. The `field` argument is an object with the following properties:

- `unid` {`string`} The UNID of the associated document.
- `fieldName` {`string`} The name of the rich text field.
- `error` {`object`} An instance of the DominoDbError class. This property is set only when a field-level error occurs (rare).
#### Event: 'data'

The `'data'` event signals the receipt of rich text data. The `buffer` argument is a `Buffer` of rich text data. Depending on the size of the rich text data, there might be many `'data'` events between a pair of `'field'` and `'eof'` events.
#### Event: 'eof'

The `'eof'` event signals the end of a rich text field.
#### Event: 'error'

The `'error'` event is fired when a request-level error occurs. The `e` argument is an instance of the DominoDbError class. A request-level error can occur if, for example, the rich text streaming feature is disabled by the Proton server add-in task. The `'end'` event is fired after the `'error'` event.
#### Event: 'end'

The `'end'` event indicates that the stream is closed. This occurs after all rich text data has been read or when a request-level error occurs.
#### readable.cancel()

Call the `cancel` method if the read operation needs to terminate before the `'end'` event is emitted. Failure to terminate connections can cause resource leaks in the Node.js process as well as the Proton server task.
### Read Rich Text Example

The following is a complete example that reads rich text data and writes the result to a file that matches the field name. This excerpt assumes that the caller has an instance of a Database object that references the node-demo.nsf database included with the AppDev pack:
```javascript
const fs = require('fs');

const readRichText = async (database) => {
  try {
    const query = "Form = 'RichDiscussion' and Title = 'Simple'";
    const readable = await database.bulkReadRichTextStream({ query, fields: ['Body'] });
    await new Promise((resolve, reject) => {
      let fileName = '';
      let fd;
      readable.on('error', (e) => {
        if (fd) {
          fs.closeSync(fd);
          fd = undefined;
          fs.unlinkSync(fileName);
        }
        reject(e);
      });
      readable.on('field', (field) => {
        if (field.error) {
          reject(field.error);
        }
        fileName = field.fieldName;
        // Open file for write. Fail if exists.
        fd = fs.openSync(fileName, 'wx');
      });
      readable.on('data', (buffer) => {
        fs.writeSync(fd, buffer);
      });
      readable.on('eof', () => {
        fs.closeSync(fd);
        fd = undefined;
      });
      readable.on('end', () => {
        resolve();
      });
    });
  } catch (e) {
    console.log(`Unexpected exception ${e.message}`);
  }
};
```
In this example, we use the `'field'` event as a signal to open a file that will receive data from subsequent `'data'` events. We take the fieldName property from the field argument and use it as the destination file name for data we receive from the stream. We pass the argument from the `'data'` event directly to the `fs.writeSync` function to write the chunk of data to the file system. We close the file on the `'eof'` event and "forget" the fd value. We also close the file on the `'error'` event and delete the file, since the write operation did not complete. The promise is resolved in the `'end'` event, signaling completion of the async read operation.
### Writing Rich Text

Rich text can be written to a document using the domino-db API, provided it is formatted as the Notes canonical composite data (CD) record format. Note that records must start on an even-byte boundary, so when writing data, pad odd-length records with a 0 byte without changing the length value in the record header. If you are using output from one of the domino-db rich text read functions (`Document::readRichTextStream()`, `Database::bulkReadRichTextStream()`, or `Database::bulkReadRichTextStreamByUnid()`), this padding is done automatically.
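If you are generating CD records yourself, the padding rule can be applied with a one-line helper. This sketch is our own (not a domino-db function) and assumes the record is already in canonical format; it pads the buffer without touching the length stored in the record header:

```javascript
// Hypothetical helper (not part of domino-db): append a 0 byte to odd-length
// CD record buffers so each record starts on an even-byte boundary. The
// length value inside the record header is left as-is.
const padToEven = record =>
  record.length % 2 === 0 ? record : Buffer.concat([record, Buffer.from([0])]);

console.log(padToEven(Buffer.from([1, 2, 3])).length); // 4
console.log(padToEven(Buffer.from([1, 2])).length);    // 2
```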
Use `Database::bulkCreateRichTextStream({})` to create a writable stream that is used to write data to one or more documents. The writable interface is as follows:

```javascript
writable.on('error', (e) => {
  // A request-level error has occurred, i.e. unable to connect to the server
});

writable.on('response', (response) => {
  // All write operations have completed
});

writable.once('drain', () => {
  // The local write buffer has finished being written to the server.
  // Write operations can continue.
});

// Starts a new rich text field
writable.field({ unid, fieldName });

// Writes a chunk of data
writable.write(data);

// Closes the connection
writable.end();
```
> **NOTE:** While a domino-db rich text stream may appear to be an instance of
> the Node.js [Stream](https://nodejs.org/api/stream.html) class, it is really
> an instance of [EventEmitter](https://nodejs.org/api/events.html#events_class_eventemitter).
> Currently, you cannot pipe rich text stream, but you can use other functions
> shared by `EventEmitter` and `Stream`.
#### Event: 'error'

The `'error'` event is fired when a request-level error occurs. The `e` argument is an instance of the DominoDbError class. A request-level error can occur if, for example, the rich text streaming feature is disabled by the Proton server add-in task.
#### Event: 'response'

The `'response'` event is emitted after all write operations complete. The response argument is an object with a property named `fields`, which is an array of objects. Each object in the `fields` array has the following properties:

- `unid` {`string`} The UNID of the associated document.
- `fieldName` {`string`} The rich text field name.
- `error` {`DominoDbError`} If an error occurs processing this field, the error property is set to a DominoDbError object that describes the error.
#### Event: 'drain'

The `'drain'` event is emitted when the stream has drained and is able to accept new data.
#### writable.field()

The argument to the `field` method is an object with the following properties:

- `unid` {`string`} The UNID of the document that will receive the rich text data.
- `fieldName` {`string`} The name given to the item or items that will hold the rich text data.
- `encrypt` {`boolean`} An optional flag that indicates the rich text data should be encrypted on the note. The default value is `false`.
#### writable.write()

Writes a chunk of data to the stream.

- `data` {`Buffer`} A Buffer containing a chunk of data. Note that it must be an even length.
#### writable.end()

Call this method to close the connection.
### Write Rich Text Example

The following is an example that creates a document and writes a rich text field to it using rich text data received from `Database::bulkReadRichTextStream()`. This excerpt assumes the caller has an instance of a Database object that references the node-demo.nsf database included with the AppDev pack:
```javascript
const fs = require('fs');

const writeRichText = async (database) => {
  let writable;
  let result;
  try {
    // Create a document with subject write-example-1 to hold rich text
    const unid = await database.createDocument({
      document: {
        Form: 'RichDiscussion',
        Title: 'write-example-1',
      },
    });
    writable = await database.bulkCreateRichTextStream({});
    result = await new Promise((resolve, reject) => {
      // Set up event handlers.
      // Reject the Promise if there is a connection-level error.
      writable.on('error', (e) => {
        reject(e);
      });
      // Return the response from writing when resolving the Promise.
      writable.on('response', (response) => {
        resolve(response);
      });
      // Indicates which document and item name to use.
      writable.field({ unid, fieldName: 'Body' });
      let offset = 0;
      // Assume for purposes of this example that we buffer the entire file.
      const buffer = fs.readFileSync('Body');
      // When writing large amounts of data, it is necessary to
      // wait for the client side to complete the previous write
      // before writing more data.
      const writeData = () => {
        let draining = true;
        while (offset < buffer.length && draining) {
          const remainingBytes = buffer.length - offset;
          let chunkSize = 16 * 1024;
          if (remainingBytes < chunkSize) {
            chunkSize = remainingBytes;
          }
          draining = writable.write(buffer.slice(offset, offset + chunkSize));
          offset += chunkSize;
        }
        if (offset < buffer.length) {
          // Buffer is not draining. Whenever the drain event is emitted,
          // call this function again to write more data.
          writable.once('drain', writeData);
        }
      };
      writeData();
      writable.end();
      writable = undefined;
    });
  } catch (e) {
    console.log(`Unexpected exception ${e.message}`);
  } finally {
    if (writable) {
      writable.end();
    }
  }
  return result;
};
```
The example starts by creating a document that will appear in the RichDiscussion view of node-demo.nsf. The call to `database.bulkCreateRichTextStream({})` returns a writable stream that is then used to send data to the server. The `'error'` event handler rejects the promise and passes the DominoDbError object as the argument. The `'response'` event resolves the promise and returns the response object when processing completes. To start writing a field, we call the `writable.field` method with the `unid` of the document we just created, along with the `fieldName` specifier `'Body'`. Next, we read the data representing the Body field from the file named 'Body' that was created by the example in the Reading Rich Text section.
To properly sequence write operations, the example implements the `writeData` function to pause write operations until the client-side buffer has drained. This function writes the data in chunks and checks the return value of `writable.write`. If there is more data to write, `writeData` registers itself to be called when the drain event is emitted and terminates the write loop.

Following its definition, `writeData` is called once to 'prime' the write loop, which continues until all data is written. When all data is written, `writable.end()` is called to close the connection. The `writable` variable is set to undefined to indicate to the surrounding try/catch/finally handler that cleanup has occurred successfully.

The surrounding exception handling code is designed to correctly clean up the open connection in the case where an exception is thrown that skips the call to `writable.end()` inside the promise function. Failure to clean up connections results in wasted resources in the background of the Node.js process and on the Proton server task.
Working with Rich Text Content
The @domino/richtext API provides the ability to parse, modify and create rich text data. To use this API, you first need to install it so that node.js can include it as a dependency in your project.
Installation
Suppose you have an existing Node.js application. Your application, including your package.json file, lives in a folder called /my-app. Let's also assume you have a copy of the domino-richtext archive in a folder called /packages.
To add the domino-richtext module as a dependency:
cd /my-app
npm install /packages/domino-richtext-0.5.1.tgz --save
The npm install command fetches the domino-richtext code and saves the dependency to package.json.
Structure of a Rich Text Field
To work with rich text data, it is useful to understand the structure of a typical rich text field. Following is an example of a simple rich text field that consists of a series of four rich text records:
PabDefinition
Paragraph
PabReference
TextRun
A PabDefinition defines the "style" of a paragraph. This class contains fields in which to specify margins, line justification, tab stops, and other style attributes. Each PabDefinition has a unique ID which subsequent paragraphs reference to identify the PabDefinition that defines their style. Each PabDefinition may be used by many paragraphs in the rich text field.
A Paragraph marks the start of each new paragraph. Rich text fields are composed of one or more paragraphs.
A PabReference specifies which paragraph style is to be used in the current paragraph. If a PabReference is not specified for a paragraph, the style of the previous paragraph is used.
NOTE: PabReference classes only refer to styles that have already been defined. Forward references to style definitions are not allowed. While it is not required, we recommend that you define all styles at the beginning of the buffer so you can reference them as needed.
A TextRun defines the start of a run of text. The fontId member of TextRun specifies the color, size, and font of this run of text.
LMBCS
All textual data in a rich text field is represented as LMBCS (Lotus Multi-Byte Character Set). This is a proprietary format distinct from Unicode and therefore requires translation to correctly represent textual data in a JavaScript program. Since Node.js supports UTF-16 strings, character data is converted from LMBCS to UTF-16 before being returned from a function or property value, and back to LMBCS when passed to a function or property that handles text, TextRun for example. When dealing with a rich text record in raw format (an advanced use case), you must convert the textual data from LMBCS to UTF-16 before attempting to process it, and convert textual data from UTF-16 to LMBCS before writing data back to a Notes database. The functions lmbcsToUTF16 and utf16ToLMBCS are provided for this purpose.
Enumerating a Rich Text Field
Use RichTextReader to enumerate the contents of a rich text field. A callback function receives the header and contents of each rich text record as arguments. The callback implementation can inspect the signature found in the header and instantiate one of the supported classes using the RichTextReader.createInstance method. The following example demonstrates enumeration of rich text data produced by the Read Rich Text Example above.
const fs = require('fs');
const { RichTextReader,
TextRun,
PabDefinition,
PabReference,
Paragraph } = require('@domino/richtext');
// We assume a file named 'body' was produced by the example above.
const buf = fs.readFileSync('body');
// Create an output file to store the result
const out = fs.openSync('rtout.txt', 'w');
// Create an instance of RichTextReader.
const reader = new RichTextReader();
// Call readRichTextStream passing a Buffer with rich text field data.
reader.readRichTextStream(buf, (header, cdRecord) => {
const rtElement = reader.createInstance(header, cdRecord);
if (rtElement) {
if (header.signature === TextRun.SIGNATURE) {
fs.writeSync(out, `Text run: ${rtElement.text}\r`);
} else if (header.signature === PabDefinition.SIGNATURE) {
fs.writeSync(out, `PabDefinition left margin: ${rtElement.margin[0]}\r`);
} else if (header.signature === PabReference.SIGNATURE) {
fs.writeSync(out, `PabReference pabId ${rtElement.pabId}\r`);
} else if (header.signature === Paragraph.SIGNATURE) {
fs.writeSync(out, 'Paragraph\r');
}
} else {
fs.writeSync(out, `Unsupported type ${JSON.stringify(header)}\r`);
}
return true;
});
The resulting file contents:
PabDefinition left margin: 0
Paragraph
PabReference pabId 1
Text run: This is some simple text with
Text run: bold and
Text run: italic
Text run: formatting.
In the example above, note that in the case of an unsupported type, it is possible to read the raw data using the Node.js Buffer API.
To create or modify existing rich text, use RichTextField.
Creating Rich Text Content
The following example creates a new rich text field, sets the margins for the paragraph and writes bold green 18-point text.
const { RichTextField,
RichTextSignatures,
PabDefConstants,
PabDefinition,
FontIdFields,
Color } = require('@domino/richtext');
const rtf = new RichTextField();
// Access the default PabDefinition and set the paragraph margins.
const pabDef = rtf.getPabDefinition(1);
// Left margin
pabDef.margin[0] = PabDefConstants.ONE_INCH;
// First line left margin.
pabDef.margin[2] = PabDefConstants.ONE_INCH;
// This adds a Paragraph and a PabReference.
// The pabId member of pabDef is destructured and used as the pabId
// reference for the paragraph.
rtf.addParagraph(pabDef);
// FontIdFields encodes all font-related characteristics into the fontId
// 32-bit unsigned integer consumed by a text run.
const fid = new FontIdFields();
fid.color = Color.colors.NOTES_COLOR_DKGREEN;
fid.pointSize = 18;
fid.bold = true;
// Add the text run with fontId and text.
rtf.addText({ fontId: fid.fontId, text: 'This is green text.' });
// Convert the rich text field to a Buffer containing serialized rich text
// and pass it to the domino-db rich text streaming API.
const buf = rtf.toBuffer();
See Write Rich Text Example for details on using the domino-db rich text streaming API.
Modifying Existing Rich Text Content
You can modify existing rich text data using RichTextField. The following example creates a RichTextField instance from a Buffer produced by the domino-db rich text streaming API. The code then traverses the collection of records until it finds a text run with the text "This is green text." When this record is located, the text property of the TextRun is changed to "This was green text but it is now dark magenta text", and the fontId associated with the TextRun is changed to make the text dark magenta.
const { RichTextField,
PabDefinition,
FontIdFields,
Color,
TextRun } = require('@domino/richtext');
// We assume for the sake of brevity that rich text data has been streamed
// to a buffer.
.
.
.
const rtf = RichTextField.fromBuffer(buf);
// Iterate through the records of the RichTextField.
for (const rtRecord of rtf) {
if (rtRecord.header.signature === TextRun.SIGNATURE &&
rtRecord.text === 'This is green text.') {
// Set the text property of the TextRun.
rtRecord.text = 'This was green text but it is now dark magenta text';
// This decodes the font information found in the fontId field
// into a class with properties that we can easily manipulate.
const fid = new FontIdFields(rtRecord.fontId);
fid.color = Color.colors.NOTES_COLOR_DKMAGENTA;
// Referencing the fontId property encodes the values represented
// by the FontIdFields object back to a 32-bit unsigned integer.
rtRecord.fontId = fid.fontId;
break;
}
}
// We serialize the records of the RichTextField into a Buffer that
// can be streamed to a Notes database using the domino-db rich text
// streaming API.
const magentaBuf = rtf.toBuffer();
See Read Rich Text Example and Write Rich Text Example sections for details on using the domino-db rich text streaming API.
Paragraph Definitions
The PabDefinition class encapsulates a paragraph definition and as such contains many properties. Adding to the complexity, this record type predates Notes version 4 and retains some attributes that are obsolete but may still be present in a field because they were created by very old versions of Notes. The following are a few important concepts relating to paragraph definitions.
Twips
A twip is a unit of measure used in typesetting: 1/20 of a point. A point is 1/72 of an inch, which means an inch is 1440 twips. Many of the values found in a PabDefinition are specified in twips.
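These relationships are easy to capture in a couple of helper functions; this is an illustrative sketch, not part of the @domino/richtext API:

```javascript
const TWIPS_PER_POINT = 20;
const POINTS_PER_INCH = 72;
const TWIPS_PER_INCH = TWIPS_PER_POINT * POINTS_PER_INCH; // 1440

// Convert inches to twips, e.g. for margin offsets in a PabDefinition.
function inchesToTwips(inches) {
  return Math.round(inches * TWIPS_PER_INCH);
}

// Convert points to twips, e.g. for values derived from font sizes.
function pointsToTwips(points) {
  return Math.round(points * TWIPS_PER_POINT);
}
```

For example, inchesToTwips(1) yields 1440, matching the PabDefConstants.ONE_INCH value used in the creation example above.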
Margins
There are three margin attributes that affect text placement: left margin, first line left margin, and right margin. The implementation of margins changed between Notes version 4 and version 5 to allow for percentage-based values in addition to absolute offsets specified in twips. The location within the PabDefinition structure where the Notes client looks for margin values varies depending on flags that are set in the PabDefinition and on whether the size of the PabDefinition is the old V4 size or the extended V5 size. In the PabDefinition API description, there are three properties, leftMargin, firstLineLeftMargin and rightMargin, that are ignored unless the PabDefinition is the V4 version of the structure. You won't need to set these when creating new rich text fields or modifying existing structures. Instead, use the margin property of the PabDefinition, which is an array of six values that store margin configuration as follows:
- margin[0] - left margin offset in twips.
- margin[1] - left margin as a percentage (0-100).
- margin[2] - first line left margin offset in twips.
- margin[3] - first line left margin as a percentage (0-100).
- margin[4] - right margin offset in twips.
- margin[5] - right margin as a percentage (0-100).
While it appears that there are conflicting values (offset vs. percentage) for each margin attribute, flags specify whether to use the percentage or the offset. These flags are stored in the flags2 field of the PabDefinition and are defined in PabDefConstants under the flags2 property, with names starting with PABFLAG2_LM, PABFLAG2_FLLM, and PABFLAG2_RM. To illustrate, if flags2 is set as pabDef.flags2 = PABFLAG2_LM_PERCENT | PABFLAG2_FLLM_OFFSET | PABFLAG2_RM_PERCENT, the Notes client uses the values found in margin[1], margin[2], and margin[5]. The Notes client behavior is undefined if both flags are set for a particular margin attribute.
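The flag-to-slot mapping can be sketched as follows. The numeric flag values below are hypothetical stand-ins; the real values come from PabDefConstants:

```javascript
// Hypothetical flag values -- the real ones are defined in PabDefConstants.
const PABFLAG2_LM_OFFSET = 0x01;
const PABFLAG2_LM_PERCENT = 0x02;
const PABFLAG2_FLLM_OFFSET = 0x04;
const PABFLAG2_FLLM_PERCENT = 0x08;
const PABFLAG2_RM_OFFSET = 0x10;
const PABFLAG2_RM_PERCENT = 0x20;

// Given flags2 and the six-element margin array, return the margin
// values a client would consult for each attribute.
function effectiveMargins(flags2, margin) {
  return {
    left: flags2 & PABFLAG2_LM_PERCENT ? margin[1] : margin[0],
    firstLineLeft: flags2 & PABFLAG2_FLLM_PERCENT ? margin[3] : margin[2],
    right: flags2 & PABFLAG2_RM_PERCENT ? margin[5] : margin[4],
  };
}
```

With flags2 set to PABFLAG2_LM_PERCENT | PABFLAG2_FLLM_OFFSET | PABFLAG2_RM_PERCENT, this sketch selects margin[1], margin[2], and margin[5], matching the example in the text.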
Tabs
Tabs, or more specifically tab stops, are part of the PabDefinition and control how the Notes client displays text that immediately follows a tab character in the text. There are four types of tabs:
- left - Text will be left-aligned relative to the tab position.
- right - Text will be right-aligned relative to the tab position.
- center - Text will be center-aligned relative to the tab position.
- decimal - The decimal point found in the text will be aligned with the tab position.
The PabDefinition structure internally has two fields that maintain tab positions and the tab type that corresponds to each position. Note, however, that the number of positions (20) exceeds the number of tab types that can be tracked (16), which means that tabs 17 through 20 are always left-aligned even though they may appear otherwise when initially saved by the Notes client.
Rich Text Record Structure
If you want to extend the @domino/richtext API, first understand the structure of a rich text record.
The fundamental elements used to construct a record are:
- Byte - Unsigned 8-bit value.
- Word - Unsigned 16-bit integer.
- Dword - Unsigned 32-bit integer.
Each record starts with a header followed by one or more fields that contain fixed-length data. Variable length data (if there is any) is located after the fixed portion of the record. For example, a TextRun is structured as follows:
- header - { signature, length }
- signature - Word value that identifies the record as a text run and indicates that the length of the entire record is in the following Word value.
- length - Word value that specifies the length of the record.
- fontId - Dword
- text - A sequence of Byte values containing LMBCS-formatted character data. The length is calculated by subtracting the size of the fixed-length portion from the length value in the header.
There are three different header types, which differ in the size and placement of the length field:
- Byte header - The signature is found in the low-order byte; the record length (between 1 and 254) is found in the high-order byte of the same Word.
- Word header - The signature is found in the low-order byte and the value 255 in the high-order byte. The record length follows as a Word.
- Dword header - The signature is found in the low-order byte and the value 0 in the high-order byte. The record length follows as a Dword.
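A reader for these three header shapes might look like the following sketch; the function name is illustrative, and real record signatures come from the record classes:

```javascript
// Parse a rich text record header from a Buffer at the given offset.
// Returns the signature, the total record length, and the header size.
function readRecordHeader(buf, offset) {
  const sigWord = buf.readUInt16LE(offset);
  const signature = sigWord & 0xff; // low-order byte
  const high = sigWord >> 8; // high-order byte
  if (high === 0) {
    // Dword header: length follows as a 32-bit value.
    return { signature, length: buf.readUInt32LE(offset + 2), headerSize: 6 };
  }
  if (high === 255) {
    // Word header: length follows as a 16-bit value.
    return { signature, length: buf.readUInt16LE(offset + 2), headerSize: 4 };
  }
  // Byte header: the length (1-254) is the high-order byte itself.
  return { signature, length: high, headerSize: 2 };
}
```

Dispatching on the high-order byte first is what makes the three header shapes distinguishable without any out-of-band type information.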
All integer values are stored in little-endian format, a byte-ordering scheme that places the lower-order bytes of an integer first in memory. For example, the value 52,651 (0xCDAB in hexadecimal) appears as [0xAB, 0xCD] in a Buffer. The Node.js Buffer class contains methods for reading and writing integer values with a specified endianness.
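For instance, the byte ordering is easy to verify with the Buffer methods mentioned above:

```javascript
// Write 52,651 (0xCDAB) as a little-endian 16-bit value and inspect the bytes.
const buf = Buffer.alloc(2);
buf.writeUInt16LE(0xcdab, 0);
// buf[0] is 0xab (low-order byte first) and buf[1] is 0xcd.
console.log(buf); // → <Buffer ab cd>
```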