Harlequin RIP SDK
Inputting data

Jobs are submitted to the Harlequin RIP using the inputs API. This SDK library provides an implementation of this interface that the application layer can use, and registers it in RDR. The SDK's inputs API implementation is available from the time the SDK is started until it is shut down, including periods when the RIP is not running. The SDK's input library implements:

  • A single thread-safe job queue, to which jobs can be added, and from which they can be deleted, at any position.
  • Processing of queued jobs by the SwLeDo(), SwLeProcessJobs() and SwLeProcessInputQueue() job processing loop functions.
  • Iteration of known job configurations.
  • Job identification and contextualization.
  • Pausing, resuming, and querying status of input queue processing.
  • Registration and deregistration of multiple persistent sources of input, enabling addition of your own input channels.
  • Events to monitor changes to the input queue.

Any thread in your application can add or remove jobs from the job queue, using the inputs API, or pause, resume or query the queue state.

The inputs API job

There are two types of jobs that may be submitted through the inputs API:

Normal (content) jobs
Content jobs are normal jobs in any of the supported page description languages or image formats. They are normally expected to produce output (but need not). Content jobs are run in a manner where any memory they use, or changes they make to the RIP state, are reverted at the end of the job. Content jobs are distinguished by having a non-NULL filename parameter in the inputq_print_job() call. The type of content is automatically detected by the RIP.
Configuration jobs
Configuration jobs are PostScript jobs that modify the state of the RIP. Configuration jobs are run in a manner where any changes they make to state persist until the RIP is rebooted or stopped. Configuration jobs are normally run immediately after starting the RIP to install hooks or default state, but can be run at any time. Configuration jobs are distinguished by having a NULL filename parameter and a non-NULL override parameter in the inputq_print_job() call.
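The distinction between the two job types can be sketched as a classification over the two parameters. This is an illustrative helper only (the function name and enum are ours, not part of the SDK); it encodes the rules stated above:

```c
#include <stddef.h>

/* Illustrative sketch: how the inputs API distinguishes the two job
   types from the filename and override parameters of inputq_print_job().
   classify_job() is a hypothetical helper, not SDK code. */
typedef enum { JOB_CONTENT, JOB_CONFIG, JOB_INVALID } job_kind ;

job_kind classify_job(const char *filename, const char *override_ps)
{
  if ( filename != NULL )
    return JOB_CONTENT ;  /* Content job: format auto-detected, state reverted. */
  if ( override_ps != NULL )
    return JOB_CONFIG ;   /* Configuration job: PostScript, state persists. */
  return JOB_INVALID ;    /* A NULL filename requires a non-NULL override. */
}
```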

Jobs are processed by the Harlequin RIP in two separate stages. The job configuration is performed first, to set the RIP's output, color management, parameters, and other environment that can affect the job's appearance, and then the job data is processed. These job stages are reflected in the job's timelines:

  • The SWTLT_JOB_STREAM timeline represents the process from the time the RIP starts processing a job until it finishes processing the job, including configuration, and possibly multiple jobs within the data stream.
  • The configuration portion of processing a job is represented by the SWTLT_JOB_CONFIG timeline.
  • Each job portion within the data stream is represented by a SWTLT_JOB timeline. There will only be one such timeline in most jobs; the exception is PostScript jobs that call exitserver or startjob.

Job filenames, setups, and overrides

A job submitted through inputq_print_job() has a filename parameter, a setup parameter, and an override parameter. These are all zero-terminated strings, and each may optionally be NULL. The only invalid combination is a NULL filename with a NULL override: if the filename parameter is NULL, the override parameter must be non-NULL. The strings referenced through these pointers are copied into the input queue entry by inputq_print_job(), so may be local to the function submitting the job. Minimal checking of these parameters is performed when they are submitted to the input queue; they are interpreted by the job processing loop functions.

The filename parameter is interpreted as either:

  • An absolute platform filename, which is converted to a PostScript device-relative filename on submission to the RIP;
  • A relative platform filename, which is concatenated to the application's current working directory and converted to a PostScript device-relative filename on submission to the RIP;
  • A PostScript device-relative filename, which is passed to the RIP as-is.
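The three filename forms above can be sketched as a simple classifier. This is an illustrative assumption, not the SDK's actual parsing code: it assumes device-relative names start with '%', and uses simplified tests for Unix, Windows drive, and UNC absolute paths:

```c
#include <stddef.h>

/* Illustrative sketch: classifying a filename parameter into the three
   forms described above. The leading-'%' test for device-relative names
   and the absolute-path tests are simplified assumptions, not SDK code. */
typedef enum { FN_DEVICE_RELATIVE, FN_ABSOLUTE, FN_RELATIVE } fn_form ;

fn_form filename_form(const char *name)
{
  if ( name[0] == '%' )
    return FN_DEVICE_RELATIVE ;  /* e.g. "%os%/jobs/t.pdf": passed as-is. */
  if ( name[0] == '/' ||                       /* Unix absolute path */
       (name[0] != '\0' && name[1] == ':') ||  /* Windows drive, e.g. "C:" */
       (name[0] == '\\' && name[1] == '\\') )  /* Windows UNC path */
    return FN_ABSOLUTE ;         /* Converted to a device-relative name. */
  return FN_RELATIVE ;           /* Joined to the current working directory. */
}
```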

The root device(s) for any filename must be mounted on the RIP (the "clrip" base configuration will do this), or must be auto-mountable by the RIP. On Windows, the RIP will attempt to auto-mount UNC paths of the form \\machine\share\filepath using the filesystem device type, on the PostScript device name %machine/share%.
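The UNC auto-mount mapping can be sketched as a string transformation. The helper below is hypothetical (it is not the SDK's implementation, and buffer handling is simplified for clarity); it reproduces the \\machine\share\filepath to %machine/share% mapping described above:

```c
#include <string.h>
#include <stddef.h>

/* Illustrative sketch: map a UNC path \\machine\share\filepath to the
   PostScript device name %machine/share% plus a device-relative file
   part. Hypothetical helper, not the SDK's auto-mount code. */
int unc_to_device(const char *unc, char *out, size_t outlen)
{
  size_t o = 0 ;
  int seps = 0 ;  /* Path separators seen after the leading "\\". */

  if ( strncmp(unc, "\\\\", 2) != 0 )
    return 0 ;                        /* Not a UNC path. */
  if ( o < outlen ) out[o++] = '%' ;  /* Open the PostScript device name. */
  for ( unc += 2 ; *unc != '\0' ; ++unc ) {
    if ( *unc == '\\' ) {
      if ( ++seps == 2 && o < outlen )
        out[o++] = '%' ;              /* Close the device name after the share. */
      if ( o < outlen ) out[o++] = '/' ;
    } else if ( o < outlen ) {
      out[o++] = *unc ;
    }
  }
  if ( seps < 2 && o < outlen )
    out[o++] = '%' ;                  /* Bare \\machine\share: close device name. */
  if ( o >= outlen )
    return 0 ;                        /* Buffer too small. */
  out[o] = '\0' ;
  return 1 ;
}
```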

The setup parameter is interpreted as either:

  • An absolute platform filename, which is converted to a PostScript device-relative filename on submission to the RIP;
  • A relative filename, which is converted to PostScript filename form and concatenated to the %configps% device name on submission to the RIP;
  • A PostScript device-relative filename, which is passed to the RIP as-is.

If no setup parameter is supplied, the RIP's default configuration is used.

The use of PostScript device-relative filenames for the file names and setup names enables program-driven generation of configuration and/or job data. If you supply a job name or setup name that refers to your own device implementation, you can use the device's file open and read file calls to identify and generate configuration data. This may be useful to connect the RIP's configuration to a database in your application.

The SDK's job processing loop uses this programmability to implement the JSON configuration option. If you supply a setup name that ends with .json or .JSON, the SDK will apply a JSON to PostScript configuration device on top of the device-relative filename.

If you remount the %configps% device on your own device, you can change where relative setup filenames are retrieved from, either moving them in the file system hierarchy or reading data from your own device.

The override parameter to inputq_print_job() is an optional buffer of PostScript configuration data that is interpreted after the setup has been run, and may be used to modify the RIP configuration on a per-job basis. The application layer is responsible for constructing the override configuration data. The "clrip" application layer uses this to implement Page Feature mix-ins to the configuration, converting each page feature name to a PostScript device-relative filename, and concatenating a small fragment of PostScript to the overrides that runs this file. If you have many options which can apply to different RIP configurations, using the override parameter to set the options may simplify management of your RIP configurations.
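The page-feature mix-in style of override construction can be sketched as string concatenation. The feature path and the shape of the PostScript fragment below are assumptions for illustration, not clrip's actual code (which uses the SkinDynamicBuffer utilities for the buffer management):

```c
#include <stdio.h>
#include <string.h>

/* Illustrative sketch: append a page-feature "mix-in" to an override
   buffer, wrapping the feature file in a PostScript string followed by
   a run operator. The feature path is a hypothetical example. */
void append_feature(char *override_buf, size_t len, const char *feature)
{
  size_t used = strlen(override_buf) ;
  snprintf(override_buf + used, len - used,
           "(%%os%%/Sys/PageFeatures/%s) run\n", feature) ;
}
```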

Contextualization of input jobs

A job submitted through inputq_print_job() may include a context pointer, and a context destructor function. The job context pointer is passed through the RIP and out through a number of callbacks, notably including the raster backend API. You can use this context pointer to associate a data object from your application with output that the RIP produces for the job. The context pointer must remain valid until the job is either removed from the input queue, or is completely processed by the RIP. The context destructor function (if supplied to inputq_print_job()) will be called by the SDK when the input queue entry is deleted. This may be before the job is completely processed by the RIP, but will be after the SWTLT_SKIN_JOB timeline for the job is started. You will probably need to monitor the SWTLT_SKIN_JOB timeline for this job as well as the context destructor callback to determine when it is safe to dispose of the context object.

There is also a context constructor function type defined by the inputs API. This is not used directly in the API, but may be used by other APIs that manage input sources, and submit jobs on your behalf. For example, the hot folder API can have constructor functions provided to create or associate contexts specific to each hot folder registered.

Input job IDs

The inputq_print_job() call will return an integer job ID to the caller. This job number can be used in callbacks from the RIP to identify this particular job. This job ID is used when deleting a job from the queue using inputq_delete_job(), and is also attached to the job's SWTLT_SKIN_JOB timeline as the SW_SKIN_JOB_NUMBER_CTXT timeline context information, to help you track progress of specific jobs through the RIP.

The job ID pointer passed to inputq_print_job() is an in/out parameter. The ID should normally be set to zero before calling inputq_print_job(), in which case the input queue will assign a job number and return it. If the ID is non-zero when you call inputq_print_job(), the ID provided will be used as the job number. This capability should be used sparingly:

  • If you use the same job number for two jobs submitted contemporaneously, your code may get confused over which job is running;
  • If there is more than one job with the same number on the input queue, you will not be able to delete a specific instance of the job;
  • If you use a negative job ID, then you will not be able to delete the job from the input queue.

A negative job ID is customarily used when submitting a stop job to the RIP, so that it cannot be deleted from the queue.
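The in/out semantics of the job ID parameter can be modelled as follows. assign_job_id() is a hypothetical stand-in for the assignment step inside inputq_print_job(), not SDK code:

```c
#include <stdint.h>

/* Illustrative model of the in/out job ID behaviour described above: a
   zero ID is replaced by a queue-assigned number, and a non-zero ID
   (including a negative one) is used as given. Hypothetical helper. */
static int32_t next_job_id = 1 ;

int32_t assign_job_id(int32_t *id)
{
  if ( *id == 0 )
    *id = next_job_id++ ;  /* Queue assigns the next job number. */
  return *id ;             /* Caller-supplied IDs are used verbatim. */
}
```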

Discovering the inputs API implementation

Before using the inputs API, you must get a pointer to the API implementation from RDR. If you are linking to the static SDK library, then this has already been done for you, and stored in a global variable. The application layer just needs to include the combined header file hhrsdk.h, and call the appropriate inputs API functions:

#include "hhrsdk.h"
// ...
int32 job_id = 0 ;
if ( inputq_print_job(filename, (const uint8 *)"MySetup",
                      NULL, // No override
                      NULL, // No context
                      NULL, // No context destructor
                      -1,   // End of queue
                      &job_id) != SW_INPUTQ_SUCCESS ) {
  // ...cleanup and error return...
}

If you are linking to the dynamic SDK library, then you need to update the inputs API pointer after starting the SDK. You can then use the same code to call the appropriate API functions. Do not link your application to the dynamic SDK library without approval from Global Graphics.

To pause, resume, and query the input queue processing state, you need to use the events API. Accessing the events API is similar to accessing the inputs API: it requires a pointer to the event system API implementation. This is also automatically done for you when linking to the static SDK library.

Files on the application command line

Your application may wish to submit jobs specified on the command line to the RIP. This is easy to do by calling inputq_print_job() with the command-line job arguments, but there are a few issues to consider:

  • The SDK must be started before jobs can be queued. If you also extract parameters from the command line to configure the SDK (for example, memory size for the SDK and RIP), then you either need to ensure that SDK parameters are first on the command line, or perform multiple passes over the parameters. The "clrip" application performs two passes over the command-line parameters, starting the SDK after the first pass has extracted SDK configuration parameters, and calling inputq_print_job() for job filenames on the second pass.
  • If you need to support CJKV or other Unicode character sets in job or setup names on Windows, you will need to compile your application as a Unicode application, and use the wmain() function as your entry point. The inputq_print_job() function expects UTF-8 encoded file and setup names, so you will need to convert WCHAR* command-line arguments to zero-terminated uint8* strings. The "clrip" application uses the helper function arg_to_utf8() to call Windows' WideCharToMultiByte() using code page CP_UTF8 to perform this conversion.
  • On Linux and MacOS, the shell expands wildcard filenames ("globbing") before the application is called. On Windows, wildcards are expanded by each application. The "clrip" application supports this by using the SDK's PKFindFirstFile(), PKFindNextFile() and PKCloseFindFile() utility functions to iterate over the files matching a pattern, adding them all to the input queue. The PKFindFirstFile() function requires a UTF-8 encoded pattern as input, so the conversion from WCHAR* to uint8* needs to have been performed before iterating.
  • You will probably want a method of specifying the setup name and possibly override data for each job specified on the command line. The "clrip" application uses the -c argument to introduce a setup name, which is converted to UTF-8 and stored if necessary (on Windows), and retained for passing to each inputq_print_job() call.
  • Your application is responsible for constructing any override PostScript data. The "clrip" application does this by using the SDK's SkinDynamicBufferReset(), SkinDynamicBufferAdd(), and SkinDynamicBufferFree() utility functions to append to a data buffer, converting filenames supplied to a PostScript form and wrapping them in a PostScript string followed by a run operator. The "clrip" application resets the override buffer for each new configuration selected so multiple page features can be applied after each configuration name.
  • You may want to issue a warning if no configuration is specified. If you only ever use one configuration, you may not need this, instead you can set up your configuration by submitting a startup configuration job, changing the SW/Sys/HqnOEM startup file, or adding new SW/Sys/ExtraStart/ files.
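The two-pass command-line scheme from the first bullet above can be sketched as follows. The "-m" memory option, sdk_set_memory() and queue_job() are hypothetical stand-ins for SDK start-up parameters and inputq_print_job(); the real "clrip" code handles many more options:

```c
#include <string.h>

/* Illustrative two-pass argument scan: pass 1 extracts SDK parameters
   (so the SDK can be started), pass 2 queues job files. Stubs below
   stand in for the real SDK calls. */
char memory_arg[32] ;
const char *jobs[8] ;
int njobs = 0 ;

void sdk_set_memory(const char *arg)   /* Stub: record the SDK parameter. */
{
  strncpy(memory_arg, arg, sizeof(memory_arg) - 1) ;
}

void queue_job(const char *name)       /* Stub for inputq_print_job(). */
{
  if ( njobs < 8 )
    jobs[njobs++] = name ;
}

void scan_args(int argc, const char **argv)
{
  int i ;
  /* Pass 1: extract SDK configuration parameters. */
  for ( i = 1 ; i < argc ; ++i ) {
    if ( strcmp(argv[i], "-m") == 0 && i + 1 < argc )
      sdk_set_memory(argv[++i]) ;
  }
  /* ...start the SDK here, then... */
  /* Pass 2: queue everything that is not an SDK parameter as a job. */
  for ( i = 1 ; i < argc ; ++i ) {
    if ( strcmp(argv[i], "-m") == 0 ) { ++i ; continue ; }
    queue_job(argv[i]) ;
  }
}
```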

Hot folder input

The Harlequin RIP SDK provides functions to monitor hot folders for input jobs.

The hotfolder_monitor() function initiates monitoring of a hot folder. It takes a directory name (either an absolute platform directory name, or relative to the current working directory), a setup name, PostScript override string, parameters to determine when a file in the hot folder is stable, and parameters to contextualize files submitted to the hot folder. The directory name for the hot folder to monitor must exist, and the SDK must have been started before calling hotfolder_monitor(). (The SDK hot folder support is designed to be able to use the operating system's file system change APIs to determine when files are stable, but this is not currently implemented on any platform. Please let Global Graphics know if this is important for you.)

Hot folders may be removed from monitoring using hotfolder_unmonitor().

Multiple hot folders can be monitored simultaneously, each using different configuration, override, stabilization and contextualization parameters.

Jobs should be either copied into hot folders, or placed into them using file system soft links. When a hot folder job either completes processing or is deleted from the input queue, the job is deleted from the hot folder.

The hot folder support uses the inputs API to register a persistent input source for each hot folder monitored. When there are persistent input sources active (such as a hot folder), the job processing loops may be configured to block waiting for more input when the input queue is empty. In this case you may need to submit a stop job from another application thread to terminate the RIP. A useful technique is to set up a hot folder using the stop job as the content of the setup file or the override content. Any file put in this hot folder (including a zero-length file) will cause the RIP to terminate. The "clrip" application is programmed to block waiting for more input if there are hot folders active.

You may monitor and unmonitor hot folders at any time after the SDK is started. If you are unmonitoring a hot folder in order to change the configuration or override associated with it, you should keep at least one persistent source of input active while you do this. If there is nothing on the input queue at the time you unmonitor the last hot folder, you may find the RIP terminating. To avoid this, use the inputs API functions inputq_source_add() and inputq_source_remove() around the code that changes active hot folders.

The "clrip" application uses the -H option to name a hot folder to monitor, using the previous configuration and override(s) specified on the command line for files dropped in the hot folder.

Inputs API and input sources

The inputs API makes it easy to implement your own input sources for submitting jobs. The SDK's hot folder support is implemented using it, and exemplifies solutions to some of the issues that you need to consider:

  • You need to determine how the job and configuration data will be presented to the RIP. If the data will be stored on disk before submitting to the RIP, then no additional devices will be needed. If, however, you wish to stream the data from a network source, or RAM, or generate it dynamically, you will probably need to write a device implementation for your non-file job source.
  • Your implementation should call inputq_source_add() when an input source is enabled, and inputq_source_remove() when the input source is disabled. This will allow the job processing loop to wait for more input, rather than terminating the RIP when the queue is empty.
  • Your implementation should call inputq_print_job() to add a job from your input source to the input queue. You will probably want to capture the job ID returned by inputq_print_job(), and store it in a data structure in your implementation. You may need this if the job is deleted.
  • If you use the context object when submitting a job with inputq_print_job(), or your implementation needs to clean up after the job is fully processed, you should add event handlers to monitor the SWTLT_SKIN_JOB timeline for this job. The inputs API context destructor function will be called when the queue entry is deleted, which will either be when the inputq_delete_job() function deletes the job, or just after starting the job's SWTLT_SKIN_JOB timeline if the job is submitted to the RIP. The relevant SWTLT_SKIN_JOB timeline can be identified by comparing SwTimelineGetContext(ref, SW_SKIN_JOB_NUMBER_CTXT) with the job ID saved from inputq_print_job(). When both the context destructor function has been called and the relevant SWTLT_SKIN_JOB timeline ends (if it was started), you can clean up the job data.
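The clean-up rule from the last bullet above can be modelled as a small state check. The structure and helper are hypothetical, not SDK types; they encode the "both signals" condition described:

```c
#include <stdbool.h>

/* Illustrative model: job data may be freed only once the context
   destructor has run AND, if the job's SWTLT_SKIN_JOB timeline was
   started, that timeline has also ended. Hypothetical types. */
typedef struct {
  bool destructor_called ;  /* Inputs API context destructor has run. */
  bool timeline_started ;   /* The job's SWTLT_SKIN_JOB timeline started. */
  bool timeline_ended ;     /* The job's SWTLT_SKIN_JOB timeline ended. */
} job_track ;

bool safe_to_free(const job_track *t)
{
  if ( !t->destructor_called )
    return false ;           /* Queue entry not yet deleted. */
  /* If the timeline never started, the entry was deleted before
     submission to the RIP; otherwise wait for the timeline to end. */
  return !t->timeline_started || t->timeline_ended ;
}
```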

Non-file job sources

You can submit job and configuration data directly to the RIP from RAM, network buffers, or other sources, or generate it dynamically. The file and setup names of input queue entries are converted to PostScript device-relative filenames before submission to the RIP by the job processing loop. If you supply a PostScript device-relative filename as the file or setup name to the inputq_print_job() function, it will submit the job to the RIP using the device name(s) you specified.

If you want to stream data from a device implementation in this way, there are some issues to consider:

  • You will need to register your new device types with the RIP by calling SwLeAddCustomDevices() after the SDK is started. If writing your own device type implementations, you should use your Harlequin OEM number as the top 16 bits of the device type number, and use this number in the corresponding device mount configuration PostScript. This will ensure that your device type never conflicts with a device implemented by Global Graphics or any third party.
  • You will need to mount an instance of the device you are going to read data from during the RIP startup process. This can be done during base configuration, by changing the SW/Sys/HqnOEM startup file, or adding a new SW/Sys/ExtraStart/ file. The startup file SW/Sys/ExtraDevices demonstrates how to do this, defining a local procedure /mountoptional to mount a device if present, and using this procedure to mount and configure a number of device instances.
  • Configuration data is read linearly from the device. You need only fully implement the DEVICELIST_OPEN, DEVICELIST_READ, DEVICELIST_CLOSE and DEVICELIST_ABORT calls. There are other DEVICETYPE calls that must be stubbed out.
  • The RIP needs to be able to seek around in PDF jobs to navigate its structure. As well as DEVICELIST_OPEN, DEVICELIST_READ, DEVICELIST_CLOSE and DEVICELIST_ABORT, you also need to fully implement DEVICELIST_BYTES and DEVICELIST_SEEK. If you do not implement these, the RIP will copy the job data to a local filesystem before RIPping it, which may defeat the purpose of streaming the job data.
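A device mount of the kind described in the second bullet might look like the following SW/Sys/ExtraStart/ fragment. This is a sketch only: the device name %MyStream%, the device type number, and the parameter set are hypothetical examples, and it assumes the Harlequin devmount extension operator as used by SW/Sys/ExtraDevices:

```postscript
%!PS
% Hypothetical ExtraStart fragment mounting an instance of a custom
% streaming input device type. Names and numbers are examples only.
(%MyStream%) dup devmount pop
<<
  /DeviceType 16#00420001  % Hypothetical OEM number in the top 16 bits
  /Password 0
  /Enable true
>> setdevparams
```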

Input job types

The Harlequin RIP will automatically detect the type of an input job, and will configure itself to process the data appropriately. The RIP uses the HqnInputTypes ProcSet for input detection, analysis, and execution. This method of input handling is shared with Harlequin MultiRIP, so customers transitioning from Harlequin MultiRIP should see consistency of file type support. All of the file types supported by the HqnInputTypes ProcSet are available for use in the Harlequin RIP (subject to licensing restrictions), and can be directly printed to the RIP on the command line, via hotfolders, or any other input source. The supported types include the TIFF, GIF, BMP, PNG, JPEG and JPEG2000 image formats, as well as PostScript, EPS, and PDF (if licensed).

New formats may be added to or removed from HqnInputTypes for detection and processing. Global Graphics recommends adding a SW/Sys/ExtraStart/ file to override the list of input types recognized if required. This will reduce the impact of source code merges for future SDK versions.

Restricting the available set of input types is easy. The following PostScript, saved in the file SW/Sys/ExtraStart/TIFFOnly, would restrict the RIP to handling TIFF and PostScript files only. PostScript must be last in the list of file type options (it is the lowest-priority detection for HqnInputTypes):

statusdict /AllInputTypes [/TIFF /PS] put

It is possible to extend the set of file formats recognized and processed by the RIP, but Global Graphics does not recommend that you try this without consultation. Additional file formats may be recognized by adding them to the start of the input type array, and adding an entry in the InputTypes sub-dictionary to detect jobs of that input type:

/HqnInputTypes /ProcSet findresource begin
InputTypes /MyNewType {
  % (filename) inputfile --proc-- (filename) inputfile boolean
  % ...procedure to recognize job type and push boolean...
} bind put
statusdict /AllInputTypes [ /MyNewType AllInputTypes aload pop ] put
end

The detection procedure to recognize a new job type may peek into the data at the start of the file using the Harlequin extended operator peekreadstring, found in internaldict. This is similar to PostScript's normal readstring, but does not move the input file pointer, and does not require that the file object be seekable.
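A detection procedure using peekreadstring might be sketched as below, for a hypothetical format whose files begin with the bytes "MYFMT". This assumes peekreadstring takes a file and a string and returns a substring and a boolean, like readstring, and uses the standard internaldict access idiom:

```postscript
% Hypothetical detection procedure for a format starting with "MYFMT".
% A sketch only: assumes peekreadstring behaves like readstring without
% moving the file pointer.
/HqnInputTypes /ProcSet findresource begin
InputTypes /MyNewType {
  % (filename) inputfile --proc-- (filename) inputfile boolean
  dup 5 string
  1183615869 internaldict /peekreadstring get exec
  { (MYFMT) eq }   % Full read: compare the peeked bytes
  { pop false }    % Short read: cannot be this format
  ifelse
} bind put
statusdict /AllInputTypes [ /MyNewType AllInputTypes aload pop ] put
end
```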

If the new file type is detected, there are two methods available to process it:

  1. Replace the input file on top of the PostScript stack with a file object that converts the input format into PostScript, PDF, or another supported format, and use setsoftwareiomode to indicate that the job is of the converted type. This might be done by implementing a filter device to convert between formats.
  2. Add a new procedure to the end of the switchexecmode array in serverdict that calls code to process the job type, and use setsoftwareiomode in the InputTypes detection procedure to select the index used in the switchexecmode array.

Neither of these is straightforward, which is why Global Graphics suggests consultation if you wish to support additional input file formats.

Pausing, resuming, and querying the input queue state

You can pause, resume, query, and monitor the state of the inputs API implementation using the SWEVT_INPUTS_ENABLE event. While the input queue is paused and there are jobs on the queue, the job processing loop will block waiting for input processing to be resumed. Pausing and resuming are paired operations: there must be the same number of resume operations as pause operations for the input queue to be active.

The SWEVT_INPUTS_ENABLE event is issued with a SWMSG_INPUTS_ENABLE message attached to the event. The SWMSG_INPUTS_ENABLE::enable field determines whether the event is pausing, resuming, or querying the inputs API state. This message field should be set to:

  • SW_INPUTS_ENABLE_STOP to pause input queue processing;
  • SW_INPUTS_ENABLE_START to resume input queue processing;
  • SW_INPUTS_ENABLE_UNKNOWN to query the current state without changing it.

The input queue implementation uses an event handler at priority SW_EVENT_NORMAL to modify the queue state (if SW_INPUTS_ENABLE_START or SW_INPUTS_ENABLE_STOP is requested) or modify the message (if SW_INPUTS_ENABLE_UNKNOWN is used to query state). If the queue state was changed from active to paused or vice versa, the message is passed to lower-priority event handlers. Thus, you can install an event handler at a lower priority than SW_EVENT_NORMAL to get notifications only when the queue changes its active state. Any such event handlers are purely informational: they cannot change the queue state by modifying the SWMSG_INPUTS_ENABLE::enable field, and should not attempt to do so.
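The paired pause/resume rule can be modelled as a counter: the queue is active only when every pause has been matched by a resume. This sketch shows the semantics only; it is not the SDK's implementation, which is driven by SWEVT_INPUTS_ENABLE events:

```c
/* Illustrative model of paired pause/resume semantics. Hypothetical
   helpers, not SDK functions. */
static int pause_count = 0 ;

void inputs_pause(void)  { ++pause_count ; }
void inputs_resume(void) { if ( pause_count > 0 ) --pause_count ; }
int  inputs_active(void) { return pause_count == 0 ; }
```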

Monitoring the input queue

You can also monitor changes to the jobs on the input queue. This may be useful if you want to display a user interface indicating which jobs are next to be processed. The inputs API implementation issues an SWEVT_INPUTQ_CHANGE event whenever a job is added to or removed from the input queue. This event is issued with a SWMSG_INPUTQ_CHANGE message attached, which contains the filename, setup name, override PostScript, job ID, position in the queue for the operation, and a code indicating the reason for the change. The reasons used by the input queue are:

The other reason codes are not used by the Harlequin RIP SDK.