Jobs are submitted to the Harlequin RIP using the inputs API. This SDK library provides an implementation of this interface that the application layer can use, and registers it in RDR. The SDK's inputs API implementation is available from the time the SDK is started until the SDK is shut down, including periods when the RIP is not running.
Any thread in your application can add or remove jobs from the job queue, using the inputs API, or pause, resume or query the queue state.
There are two types of jobs that may be submitted through the inputs API:

- Content jobs, with a non-NULL filename parameter in the inputq_print_job() call. The type of content is automatically detected by the RIP.
- Configuration-only jobs, with a NULL filename parameter and a non-NULL override parameter in the inputq_print_job() call.

Jobs are processed by the Harlequin RIP in two separate stages. The job configuration is performed first, to set the RIP's output, color management, parameters, and other environment that can affect the job's appearance; the job data is then processed. These job stages are reflected in the job's timelines; the configuration stage is separated from the job content by exitserver or startjob.

A job submitted through inputq_print_job() has a filename parameter, a setup parameter, and an override parameter. These are all zero-terminated strings, and each may optionally be NULL. The only invalid combination is a NULL filename parameter with a NULL override parameter: if the filename parameter is NULL, the override parameter must be non-NULL. The strings referenced through these pointers are copied into the input queue entry by inputq_print_job(), so they may be local to the function submitting the job. Only minimal checking of these parameters is performed when they are submitted to the input queue; they are interpreted by the job processing loop functions.
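The combination rule above can be sketched as a small predicate. This helper is illustrative only: the real checking is performed by the job processing loop functions, and the actual inputq_print_job() declaration is in the SDK headers.

```c
#include <stddef.h>

/* Sketch of the parameter rule: filename, setup and override may each be
   NULL, except that a NULL filename requires a non-NULL override.
   Illustrative only; not an SDK function. */
static int job_params_valid(const char *filename, const char *setup,
                            const char *override_ps)
{
    (void)setup;                          /* setup may always be NULL */
    return filename != NULL || override_ps != NULL;
}
```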
The filename parameter is interpreted as either:

- a platform filename, which is converted to a PostScript device-relative filename before submission to the RIP; or
- a PostScript device-relative filename, which is passed through to the RIP as supplied.
The root device(s) for any filename must be mounted on the RIP (the "clrip" base configuration will do this), or must be auto-mountable by the RIP. On Windows, the RIP will attempt to auto-mount UNC paths of the form \\machine\share\filepath using the filesystem device type, on the PostScript device name %machine/share%.
The setup parameter is interpreted as either:

- a PostScript device-relative filename; or
- a setup name, which is prefixed with the %configps% device name on submission to the RIP.

The use of PostScript device-relative filenames for the file names and setup names enables program-driven generation of configuration and/or job data. If you supply a job name or setup name that refers to your own device implementation, you can use the device's file open and file read calls to identify and generate configuration data. This may be useful to connect the RIP's configuration to a database in your application.
The SDK's job processing loop uses this programmability to implement the JSON configuration option. If you supply a setup name that ends with .json or .JSON, the SDK will apply a JSON-to-PostScript configuration device on top of the device-relative filename.
If you remount the %configps% device on your own device, you can change where relative setup filenames are retrieved from, either moving them in the file system hierarchy or reading data from your own device.
The override parameter to inputq_print_job() is an optional buffer of PostScript configuration data that is interpreted after the setup has been run, and may be used to modify the RIP configuration on a per-job basis. The application layer is responsible for constructing the override configuration data. The "clrip" application layer uses this to implement Page Feature mix-ins to the configuration, converting each page feature name to a PostScript device-relative filename, and concatenating a small fragment of PostScript to the overrides that runs this file. If you have many options which can apply to different RIP configurations, using the override parameter to set the options may simplify management of your RIP configurations.
A job submitted through inputq_print_job() may include a context pointer, and a context destructor function. The job context pointer is passed through the RIP and out through a number of callbacks, notably including the raster backend API. You can use this context pointer to associate a data object from your application with output that the RIP produces for the job. The context pointer must remain valid until the job is either removed from the input queue, or is completely processed by the RIP. The context destructor function (if supplied to inputq_print_job()) will be called by the SDK when the input queue entry is deleted. This may be before the job is completely processed by the RIP, but will be after the SWTLT_SKIN_JOB timeline for the job is started. You will probably need to monitor the SWTLT_SKIN_JOB timeline for this job as well as the context destructor callback to determine when it is safe to dispose of the context object.
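A minimal sketch of the bookkeeping this implies follows. The struct and function names are hypothetical (the SDK defines only the callbacks and the SWTLT_SKIN_JOB timeline, not this helper): the context is disposable only after the destructor has run and the timeline, if it was started, has ended.

```c
#include <stdbool.h>

/* Illustrative lifetime tracking for a job context. Not an SDK type. */
typedef struct {
    bool destructor_called;   /* context destructor callback seen   */
    bool timeline_started;    /* SWTLT_SKIN_JOB timeline started    */
    bool timeline_ended;      /* SWTLT_SKIN_JOB timeline ended      */
} job_lifetime_t;

static bool job_context_disposable(const job_lifetime_t *j)
{
    return j->destructor_called &&
           (!j->timeline_started || j->timeline_ended);
}
```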
There is also a context constructor function type defined by the inputs API. This is not used directly in the API, but may be used by other APIs that manage input sources, and submit jobs on your behalf. For example, the hot folder API can have constructor functions provided to create or associate contexts specific to each hot folder registered.
The inputq_print_job() call will return an integer job ID to the caller. This job number can be used in callbacks from the RIP to identify this particular job. This job ID is used when deleting a job from the queue using inputq_delete_job(), and is also attached to the job's SWTLT_SKIN_JOB timeline as the SW_SKIN_JOB_NUMBER_CTXT timeline context information, to help you track progress of specific jobs through the RIP.
The job ID pointer passed to inputq_print_job() is an in/out parameter. The ID should normally be set to zero before calling inputq_print_job(), in which case the input queue will assign a job number and return it. If the ID is non-zero when you call inputq_print_job(), the ID provided will be used as the job number. This capability should be used sparingly. A negative job ID is customarily used when submitting a stop job to the RIP, so that it cannot be deleted from the queue.
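The in/out ID behaviour can be illustrated with a stand-in function. stub_inputq_print_job() below is hypothetical and only models the ID assignment described above; the real declaration lives in the SDK headers.

```c
#include <stdint.h>
#include <stddef.h>

/* Stand-in modelling the in/out job ID semantics of inputq_print_job().
   Not the real SDK function. */
static int32_t next_job_id = 1;

static int32_t stub_inputq_print_job(const char *filename, const char *setup,
                                     const char *override_ps, int32_t *id)
{
    (void)filename; (void)setup; (void)override_ps;
    if (*id == 0)
        *id = next_job_id++;   /* queue assigns the next job number */
    return *id;                /* caller-supplied IDs are used as given */
}
```

A stop job would typically be submitted with a pre-set negative ID, which the stub passes through unchanged.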
Before using the inputs API, you must get a pointer to the API implementation from RDR. If you are linking to the static SDK library, then this has already been done for you, and stored in a global variable. The application layer just needs to include the combined header file hhrsdk.h, and call the appropriate input API functions:
If you are linking to the dynamic SDK library, then you need to update the inputs API pointer after starting the SDK. You can then use the same code to call the appropriate API functions. Do not link your application to the dynamic SDK library without approval from Global Graphics.
To pause, resume, and query the input queue processing state, you need to use the events API. Accessing the events API is similar to accessing the inputs API: it requires a pointer to the event system API implementation. This is also done automatically for you when linking to the static SDK library.
Your application may wish to submit jobs specified on the command line to the RIP. This is easy to do by calling inputq_print_job() with the command-line job arguments, but there are a few issues to consider:

- On Windows, you will need to convert WCHAR* command-line arguments to zero-terminated uint8* strings. The "clrip" application uses the helper function arg_to_utf8() to call Windows' WideCharToMultiByte() using code page CP_UTF8 to perform this conversion. Any conversion from WCHAR* to uint8* needs to have been performed before iterating over the arguments.
- The "clrip" application uses the -c argument to introduce a setup name, which is converted to UTF-8 and stored if necessary (on Windows), and retained for passing to each inputq_print_job() call.
- Page features are converted to PostScript device-relative filenames and executed from the override buffer using the run operator. The "clrip" application resets the override buffer for each new configuration selected, so multiple page features can be applied after each configuration name.
- Persistent configuration changes can instead be made by modifying the SW/Sys/HqnOEM startup file, or adding new SW/Sys/ExtraStart/ files.

The Harlequin RIP SDK provides functions to monitor hot folders for input jobs.
The hotfolder_monitor() function initiates monitoring of a hot folder. It takes a directory name (either an absolute platform directory name, or relative to the current working directory), a setup name, PostScript override string, parameters to determine when a file in the hot folder is stable, and parameters to contextualize files submitted to the hot folder. The directory name for the hot folder to monitor must exist, and the SDK must have been started before calling hotfolder_monitor(). (The SDK hot folder support is designed to be able to use the operating system's file system change APIs to determine when files are stable, but this is not currently implemented on any platform. Please let Global Graphics know if this is important for you.)
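Since the SDK currently polls rather than using file system change APIs, one plausible size-based stabilization scheme is sketched below. This is an assumption for illustration, not the SDK's documented algorithm, and the names are hypothetical: a file is treated as stable once its size is unchanged for a required number of consecutive polls.

```c
/* Hypothetical file-stabilization bookkeeping; not SDK code. */
typedef struct {
    long last_size;   /* size seen on the previous poll              */
    int  same_count;  /* consecutive polls with unchanged size       */
    int  required;    /* unchanged polls needed to declare stability */
} stable_check_t;

static int stable_poll(stable_check_t *s, long size)
{
    if (size == s->last_size) {
        s->same_count++;
    } else {
        s->last_size = size;   /* file still growing: restart count */
        s->same_count = 0;
    }
    return s->same_count >= s->required;
}
```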
Hot folders may be removed from monitoring using hotfolder_unmonitor().
Multiple hot folders can be monitored simultaneously, each using different configuration, override, stabilization and contextualization parameters.
Jobs should be either copied into hot folders, or placed into them using file system soft links. When a hot folder job either completes processing or is deleted from the input queue, the job is deleted from the hot folder.
The hot folder support uses the inputs API to register a persistent input source for each hot folder monitored. When there are persistent input sources active (such as a hot folder), the job processing loops may be configured to block waiting for more input when the input queue is empty. In this case you may need to submit a stop job from another application thread to terminate the RIP. A useful technique is to set up a hot folder using the stop job as the content of the setup file or the override content. Any file put in this hot folder (including a zero-length file) will cause the RIP to terminate. The "clrip" application is programmed to block waiting for more input if there are hot folders active.
You may monitor and unmonitor hot folders at any time after the SDK is started. If you are unmonitoring a hot folder in order to change the configuration or override associated with it, you should keep at least one persistent source of input active while you do this. If there is nothing on the input queue at the time you unmonitor the last hot folder, you may find the RIP terminating. To avoid this, use the inputs API functions inputq_source_add() and inputq_source_remove() around the code that changes active hot folders.
The "clrip" application uses the -H option to name a hot folder to monitor, using the previous configuration and override(s) specified on the command line for files dropped in the hot folder.
The inputs API makes it easy to implement your own input sources for submitting jobs. The SDK's hot folder support is implemented using it, and exemplifies solutions to some of the issues that you need to consider:

- To track a job's lifetime, compare the result of SwTimelineGetContext(ref, SW_SKIN_JOB_NUMBER_CTXT) with the job ID saved from inputq_print_job(). When both the context destructor function has been called and the relevant SWTLT_SKIN_JOB timeline ends (if it was started), you can clean up the job data.

You can submit job and configuration data directly to the RIP from RAM, network buffers, or other sources, or generate it dynamically. The file and setup names of input queue entries are converted to PostScript device-relative filenames before submission to the RIP by the job processing loop. If you supply a PostScript device-relative filename as the file or setup name to the inputq_print_job() function, it will submit the job to the RIP using the device name(s) you specified.
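For example, a device-relative filename can be composed as follows. This is a sketch: "%ramjob%" is a hypothetical device name chosen for illustration, not one defined by the SDK; substitute a device you have mounted on the RIP.

```c
#include <stdio.h>

/* Compose "%device%file" for the filename or setup parameter of
   inputq_print_job(). "%%" in the format string emits a literal '%'. */
static void device_relative_name(char *buf, size_t len,
                                 const char *device, const char *file)
{
    snprintf(buf, len, "%%%s%%%s", device, file);
}
```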
If you want to stream data from a device implementation in this way, there are some issues to consider:

- Your device must be mounted on the RIP before the job is submitted. This can be done by modifying the SW/Sys/HqnOEM startup file, or adding a new SW/Sys/ExtraStart/ file. The startup file SW/Sys/ExtraDevices demonstrates how to do this, defining a local procedure /mountoptional to mount a device if present, and using this procedure to mount and configure a number of device instances.

The Harlequin RIP will automatically detect the type of an input job, and will configure itself to process the data appropriately. The RIP uses the HqnInputTypes ProcSet for input detection, analysis, and execution. This method of input handling is shared with Harlequin MultiRIP, so customers transitioning from Harlequin MultiRIP should see consistent file type support. All of the file types supported by the HqnInputTypes ProcSet are available for use in the Harlequin RIP (subject to licensing restrictions), and can be printed directly to the RIP from the command line, via hot folders, or any other input source. The supported types include the TIFF, GIF, BMP, PNG, JPEG and JPEG 2000 image formats, as well as PostScript, EPS, and PDF (if licensed).
New formats may be added to or removed from HqnInputTypes for detection and processing. Global Graphics recommends adding a SW/Sys/ExtraStart/ file to override the list of recognized input types if required. This will reduce the impact of source code merges for future SDK versions.
Restricting the available set of input types is easy. For example, a PostScript fragment saved in the file SW/Sys/ExtraStart/TIFFOnly could restrict the RIP to handle TIFF and PostScript files only. PostScript must be the last entry in the list of file type options, because it is the lowest-priority detection for HqnInputTypes.
It is possible to extend the set of file formats recognized and processed by the RIP, but Global Graphics does not recommend attempting this without consultation. Additional file formats may be recognized by adding them to the start of the input type array, and adding an entry in the InputTypes sub-dictionary to detect jobs of that input type.
The detail procedure to recognize a new job type may peek into the data at the start of the file using the Harlequin extended operator peekreadstring, found in internaldict. This is similar to PostScript's normal readstring, but does not move the input file pointer, and does not require the file object to be seekable.
If the new file type is detected, there are two methods available to process it:

- Convert the data to a file type that the RIP already supports, and use setsoftwareiomode to indicate that the job is of the converted type. This might be done by implementing a filter device to convert between formats.
- Add an entry to the switchexecmode array in serverdict that calls code to process the job type, and use setsoftwareiomode in the InputTypes detection procedure to select the index used in the switchexecmode array.

Neither of these methods is straightforward, which is why Global Graphics suggests consultation if you wish to support additional input file formats.
You can pause, resume, query, and monitor the state of the inputs API implementation using the SWEVT_INPUTS_ENABLE event. While the input queue is paused and there are jobs on the queue, the job processing loop will block waiting for input processing to be resumed. Pausing and resuming are paired operations: there must be the same number of resume operations as pause operations for the input queue to be active.
The SWEVT_INPUTS_ENABLE event is issued with a SWMSG_INPUTS_ENABLE message attached to the event. The SWMSG_INPUTS_ENABLE::enable field determines whether the event is pausing, resuming, or querying the inputs API state. This message field should be set to SW_INPUTS_ENABLE_START to resume input processing, SW_INPUTS_ENABLE_STOP to pause it, or SW_INPUTS_ENABLE_UNKNOWN to query the current state without changing it.
The input queue implementation uses an event handler at priority SW_EVENT_NORMAL to modify the queue state (if SW_INPUTS_ENABLE_START or SW_INPUTS_ENABLE_STOP is requested) or to modify the message (if SW_INPUTS_ENABLE_UNKNOWN is used to query state). If the queue state was changed from active to paused or vice versa, the message is passed to lower-priority event handlers. Thus, you can install an event handler at a lower priority than SW_EVENT_NORMAL to get notifications only when the queue changes its active state. Any such event handlers are purely informational: they cannot change the queue state by modifying the SWMSG_INPUTS_ENABLE::enable field, and should not attempt to do so.
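The pause/resume pairing behaves like a nesting counter, which can be sketched as follows. This is illustrative only; the real mechanism is the SWEVT_INPUTS_ENABLE event described above, and these function names are hypothetical.

```c
/* Model of paired pause/resume: the queue is active only when every
   pause has been matched by a resume. Not SDK code. */
static int inputs_pause_count = 0;

static void inputs_pause(void)  { inputs_pause_count++; }
static void inputs_resume(void) { if (inputs_pause_count > 0) inputs_pause_count--; }
static int  inputs_active(void) { return inputs_pause_count == 0; }
```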
You can also monitor changes to the jobs on the input queue. This may be useful if you want to display a user interface indicating which jobs are next to be processed. The inputs API implementation issues an SWEVT_INPUTQ_CHANGE event whenever a job is added to or removed from the input queue. This event is issued with a SWMSG_INPUTQ_CHANGE message attached, which contains the filename, setup name, override PostScript, job ID, position in the queue for the operation, and a code indicating the reason for the change. The reasons used by the inputs queue are:
The other reason codes are not used by the Harlequin RIP SDK.