Configure the Payroll Extract integration

Description of the Payroll Extract integration.

The Payroll Extract integration uses APIs that are optimized to extract payroll and accruals data directly for local processing and maintenance.

Benefits of Payroll Extract

  • The Payroll Extract integration greatly improves the speed and performance of processing payroll data. It can process large payroll data sets 10 times faster than previous payroll export integrations; for example, it can process 250,000 employee records in less than 60 minutes.
  • It exports the payroll data to an extension table that is summarized by day. The payroll system can access this data directly from the extension table, or you can further summarize and sort the data by way of structured query language (SQL) commands. The extension table can store additional data to satisfy any data requirements that are not handled by the staging table.
  • It reduces service hours to develop and implement scalable payroll data extractions.
    • It pulls payroll data to meet payroll-extract service level agreements (SLA) for any organization regardless of size.
    • It supports organizations that do not use a full payroll application.
    • It provides quick processing of one-to-many pay groups.
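The extension table described above can be summarized and sorted with SQL. A minimal sketch, using an in-memory SQLite database as a stand-in for the extension database; the table and column names follow the staging-table description later in this topic, and the sample rows are illustrative:

```python
import sqlite3

# In-memory SQLite stands in for the extension database; the real
# prm_payroll_staging_data table lives in the tenant's extension schema.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE prm_payroll_staging_data (
        person_id INTEGER,
        apply_date TEXT,
        paycode_name TEXT,
        hours_amount INTEGER  -- duration in seconds
    )
""")
conn.executemany(
    "INSERT INTO prm_payroll_staging_data VALUES (?, ?, ?, ?)",
    [
        (101, "2023-06-01", "Regular", 28800),
        (101, "2023-06-01", "Overtime", 3600),
        (101, "2023-06-02", "Regular", 28800),
    ],
)

# Further summarize the day-level rows: total hours per person per day.
rows = conn.execute("""
    SELECT person_id, apply_date, SUM(hours_amount) / 3600.0 AS hours
    FROM prm_payroll_staging_data
    GROUP BY person_id, apply_date
    ORDER BY apply_date
""").fetchall()
# rows -> [(101, "2023-06-01", 9.0), (101, "2023-06-02", 8.0)]
```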

This integration simplifies maintenance of payroll and timekeeping mapping by way of the following:

  • Payroll to Paycode mapping
  • Accrual Code mapping
  • General Ledger mapping

Payroll extract process

The Payroll Extract service includes the following steps:

  1. Select employees by the attributes of pay groups:
    • Employee attributes
    • Worked attributes
    • Mapping definitions to use for processing
  2. Map paycodes, accruals, and general ledger entries to efficiently pre-process, stage, and maintain mappings between payroll and timekeeping systems. You can use custom tables to handle post-staging data needs.
  3. Aggregate up to 130 data elements for payroll and accruals. You can use custom tables to summarize variances and make other comparisons.
  4. Output options allow you to define file generation, headers, footers, and data formatting.

Limitations of Payroll Extract

  • The focus is on payroll processing; analytics and reporting are not supported.
  • Data and request limits are as follows:
    • 10× the employee count per week, plus 1× the employee count daily beyond that
    • Maximum of 5 queries per extract (execution ID)
    • Maximum of 200 API calls per day to retrieve custom tables
    • Query limit: 2 GB per call
  • Custom tables limits as follows:
    • Maximum = 10 tables.
    • Maximum = 20 columns in each table.
    • Tables are retained for 90 days for staging.
  • Combined paycodes are not supported.
  • Queries based on worked labor category or cost center are not supported.
  • Multiple assignments (jobs) are not supported.

Configure the integration

How to configure this integration.

Select the environment

How to select the environment for the integration.
  1. Open the Integration Template Designer: Select Main Menu > Administration > Application Setup > Integrations Setup > Design Integration Templates.

    Note: If prompted, enter your Username and Password. Click or tap Log in.
  2. Select the Manage tab > Atom Management.
  3. Select your environment.

Select environment extensions

How to select environment extensions for the Payroll Extract integration.
  1. In Administration, click or tap Environment Extensions.
  2. In Process Filter, click or tap the magnifying glass (Search) button.
  3. Scroll to and select the integration: Payroll Extract > PayrollExtractE2E.

Configure connection settings

Caution: If you select Use Default for the connection settings and process properties, ensure that Value is blank. If Value is not blank, that value overrides the default value whether or not Use Default is selected or cleared. Example: If the default value is abc, but Value shows xyz, the integration uses xyz regardless of the setting of Use Default.
  1. Select Connection Settings.
  2. From the Connection dropdown list, select and configure:

    Connection Settings

    Each setting below lists whether it is required and the actions to configure it.

    APIGatewayServer (Required)

    To change the default API gateway server:

    1. Clear Use Default.
    2. Enter the URL to the server.

      Example: <tenantURL>/api

    SFTPServer (Required)

    The SFTP server setting defines the connection to the file that contains the records. Integrations access only the internal SFTP account.

    To change the default SFTP server parameters:

    1. For each field, clear Use Default.
    2. Enter the following values:
      • Enter the name of the internal Host.

      • Enter the number of the Port for the internal SFTP account.

      • In User, enter the username for the internal SFTP account.

      • In Password, select <Encrypted>. Enter the new password for the internal SFTP account.

      • Click or tap Apply.

Configure process properties

Process properties apply globally to all records that an integration processes. When you install the integration, you can define the parameter values or configure a prompt for the user to define the value when they run the integration.

Note: Most of the process properties have default values, even though the Integration Template Designer does not display these values.
  1. Select Process Properties.
  2. Caution: Do not edit the default values of the AuthenticationProperties. By default, cookies are enabled and set the values for authentication properties.

    Only while you test or design integration templates should you edit the properties to connect to the authentication server and get the access token to execute APIs.

    1. From the Process Property dropdown list, select AuthenticationProperties.
    2. In GatewayDefaultPort, clear Use Default. Enter the path to the port for the API gateway.
    3. Note: You no longer need an AppKey to call API operations. If one is defined, it is ignored.
  3. From the Process Property dropdown list, select PayrollExtractE2E_Control to set process properties that must be configured before the integration can run.

    Caution: If you select Use Default for the connection settings and process properties, ensure that Value is blank. If Value is not blank, that value overrides the default value whether or not Use Default is selected or cleared. Example: If the default value is abc, but Value shows xyz, the integration uses xyz regardless of the setting of Use Default.

    Process Properties

    Each property below lists whether it is required and the actions to configure it.

    HyperfindAndLocations (Not required)

    Default = 1 (shown as blank), which indicates All Home and includes all active employees.

    To select another hyperfind and locations:

    1. Clear Use Default.
    2. (Required) Enter the ID of a single hyperfind, or the IDs of one or more locations each separated by a comma (,) or number sign (#).
      Caution: If you do not enter an ID for the Template Parameter, the integration cannot identify the hyperfind and the integration run fails.
    Note:

    Ad-hoc hyperfinds are not supported.

    All Home does not include terminated and inactive employees even if they have totals during the period. To include these totals, configure a hyperfind that includes terminated and inactive employees for the organization and select that hyperfind in this process property.

    The maximum number of employees in a hyperfind is 3500. To process more employee records, divide the population into smaller hyperfinds to run sequentially.

    IncludeEmployee (Not required)

    Default = Include all employees.

    To process data for only a limited group of employees:

    1. Clear Use Default.
    2. Enter the person numbers, as defined in the source system, each separated by a comma (,) but no spaces.

      Example: 13997,15556,20012

    InitiateStaging (Required)

    Default = true.

    To change the value:

    1. Clear Use Default.
    2. In Value, select false.

    RunPayrollExport (Required)

    Default = true.

    To change the value:

    1. Clear Use Default.
    2. In Value, select false.

    RunAccrualExport (Not required)

    Default = false.

    To change the value:

    1. Clear Use Default.
    2. In Value, select true.

    PollInterval (Not required)

    How long to wait between checks of the payroll system for changes.

    Default = 20 ms.

    To change the wait time:

    1. Clear Use Default.
    2. Select the time in milliseconds (ms).

    MaxWaitTime (Not required)

    Default = 6000 ms (6 seconds).

    To change the wait time:

    1. Clear Use Default.
    2. Enter the wait time in milliseconds (ms).

    FileLoadWaitTime (Not required)

    How long to wait for the input file to load.

    Default = 60 ms.

    To change the wait time:

    1. Clear Use Default.
    2. Enter the wait time in milliseconds (ms).
  4. From the Process Property dropdown list, select PayrollExtractE2E_SFTPProperties.

    SFTP Properties

    Each property below lists whether it is required and the actions to configure it.

    SourceDirectory (Required)

    The directory path on the SFTP server where the integration process reads the file.

    Default = Use the /Inbound directory.
    Caution: Do not change the name of the folder from Inbound. Additional or differently named folders are not supported.

    To define a different directory path:

    1. Clear Use Default.
    2. Enter the path to the directory.
      Caution: Make sure that this directory is configured on the SFTP server.

    _OutputDirectory (Not required)

    Directory on the SFTP server where the destination file is archived after processing.

    Default = use the default /Outbound directory.
    Caution: Do not change the name of the folder from Outbound. Additional or differently named folders are not supported.

    To define a different directory path:

    1. Clear Use Default.
    2. Enter the path to the directory.
      Caution: Make sure that this directory is configured on the SFTP server.

    _MoveProcessedFileToDirectory (Not required)

    The directory on the SFTP server to move the destination file after data is successfully uploaded.

    Default = true, use the default directory.

    To define a different directory path:

    1. Clear Use Default.
    2. Enter the path to the directory.
      Caution: Make sure that this directory is configured on the SFTP server.

    _DeleteSourceFileAfterReading (Not required)

    Default = false; do not delete the source file after processing.

    To delete the source file but archive it to _DestinationDirectoryPath:

    1. Clear Use Default.
    2. In Value, select true.
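The population limits above (a maximum of 3,500 employees per hyperfind, and IncludeEmployee person numbers that must be comma-separated with no spaces) suggest splitting a large population into sequential batches. A minimal sketch; the helper name and batching approach are illustrative, not part of the product:

```python
MAX_HYPERFIND_EMPLOYEES = 3500  # documented per-hyperfind limit


def include_employee_batches(person_numbers, batch_size=MAX_HYPERFIND_EMPLOYEES):
    """Split a large population into sequential batches, each formatted
    as an IncludeEmployee value: comma-separated, no spaces."""
    batches = []
    for start in range(0, len(person_numbers), batch_size):
        chunk = person_numbers[start:start + batch_size]
        batches.append(",".join(str(p) for p in chunk))
    return batches


values = include_employee_batches([13997, 15556, 20012], batch_size=2)
# values -> ["13997,15556", "20012"]
```

Each batch can then be supplied to a separate, sequential integration run.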

Configure cross-reference tables

Cross-reference tables (CRT) are lookup tables that integrations use to translate parameter values. One or more data values from the source system can be used to assign one or more parameters in the destination system.

A cross-reference table (CRT) translates parameter values in an integration as follows:

  • Organizes data values into rows and columns:
    • Maximums = 20 columns, 10,000 rows.
  • Can combine values from multiple columns to determine a single output value.
  • If more than one row matches a reference value, the first match is the output value.
  • If no match is found, the output value can be null, or the integration can produce errors.
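The matching rules above (rows scanned in order, first match wins, null when nothing matches) can be sketched as follows. The function, row layout, and sample values are hypothetical; only two of the documented PayrollExtractE2E_Paygroup columns are shown:

```python
def crt_lookup(table, reference):
    """First matching row wins, as described above; no match yields None
    (a null output) rather than an error in this sketch."""
    for row in table:
        if all(row.get(col) == val for col, val in reference.items()):
            return row["output"]
    return None


# Hypothetical cross-reference rows; a duplicate match illustrates
# that only the first matching row determines the output.
paygroup_crt = [
    {"Location": "Boston", "Worker Type": "Hourly", "output": "PG_EAST_HR"},
    {"Location": "Boston", "Worker Type": "Hourly", "output": "PG_EAST_DUP"},
]

first = crt_lookup(paygroup_crt, {"Location": "Boston", "Worker Type": "Hourly"})
# first -> "PG_EAST_HR" (first match wins)
```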
  1. Select Cross Reference.
  2. From the Cross Reference dropdown list, select one of the following:

    You cannot change the names of the tables.

    PayrollExtractE2E_Paygroup

    A pay group consolidates a logical set of employees within a company to facilitate payroll processing. It is based on shared characteristics such as employee type, pay frequency, and country location.

    Column headers:

    PayGroup,Location,Fixed Rule,Worker Type,Labor Category 1,Labor Category 2,Labor Category 3,Labor Category 4,Labor Category 5,Labor Category 6,Custom Field 1,Custom Field 2,Custom Field 3,Custom Field 4,Custom Field 5,Custom Field 6,Custom Field 7,Custom Field 8,Custom Field 9,Custom Field 10

    PayrollExtractE2E_TimezoneGroup

    Column headers:

    PayGroup,TimeZone
  3. Select Override to:

    • Download the tables when you run the integration
    • Edit the table cells in Extensions
  4. Repeat for other tables.

Set thresholds

(Optional) You can set thresholds to limit the number of processes or volume of data during integration runs.

Note: These settings apply only to the current tenant; you have to configure these settings for each tenant.
  1. Log in to the tenant management system (TMS).
  2. Select Main Menu > Administration > Application Setup > System Configuration > System Settings.
  3. Select the Payroll Extract tab.
  4. Configure the following:
    • payroll.extract.staging.run.concurrent.limit: Enter the maximum number of payroll-extract requests that can run at the same time. Default = 3.
    • payroll.extract.staging.request.daterange.limit: Enter the maximum number of days to allow in the date range of a Payroll Extract integration run. Default = 90.
    • payroll.extract.staging.employee.populations.multiplier: Enter the multiplier applied to the employee count that determines the maximum number of employee records the Payroll Extract integration can export. Default = 10.
  5. Click or tap Save.
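The thresholds above can be checked before submitting a run. A hedged sketch; the function and its checks are hypothetical and simply mirror the documented defaults:

```python
def validate_extract_request(days_in_range, running_requests,
                             daterange_limit=90, concurrent_limit=3):
    """Return a list of threshold violations for a staging request.
    Defaults mirror payroll.extract.staging.request.daterange.limit
    and payroll.extract.staging.run.concurrent.limit."""
    problems = []
    if days_in_range > daterange_limit:
        problems.append("date range exceeds the configured daterange limit")
    if running_requests >= concurrent_limit:
        problems.append("concurrent run limit reached")
    return problems


ok = validate_extract_request(30, 1)    # within both thresholds
bad = validate_extract_request(120, 3)  # violates both
```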

Output files

Payroll data

The Payroll Extract integration populates a staging table in the extension database with the employee data.

The employee data is grouped by person_id, staging_request_id, apply_date, paycode_id, is_historical_correction, worked_job, worked_labor_level1_id, worked_labor_level2_id, worked_labor_level3_id, worked_labor_level4_id, worked_labor_level5_id, worked_labor_level6_id, worked_cost_center_id.

To access or process the data, see the Extension Tables topic.

  • Format: Comma delimited
  • Frequency: On-demand or scheduled
  • Header Row: Yes, 1 row
  • Output Folder: /Outbound
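Given the format above (comma delimited, one header row), the extract file can be read with any CSV parser. A minimal sketch; the sample content and column names are illustrative, drawn from the staging-table description:

```python
import csv
import io

# Hypothetical extract content: comma delimited with one header row,
# as described above. Real files land in the /Outbound folder.
sample = io.StringIO(
    "person_num,apply_date,paycode_name,hours_amount\n"
    "13997,2023-06-01,Regular,28800\n"
)
records = list(csv.DictReader(sample))
# records[0]["paycode_name"] -> "Regular"
```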

Output payroll data

Output payroll data for this integration

prm_payroll_staging_data columns

Each column entry below lists the data type, the data source, and any comments.

id

  • bigint
  • not NULL

Unique ID for this table

staging_request_id

  • bigint
  • not NULL

ID for this request

staging_datetime

  • timestamp without time zone
  • not NULL

current datetime

Current date and time

custom_parameter_1

  • text(50)
  • can be NULL

parameter value

From the API request: the integration can map other indicators such as paygroup or timezone.

custom_parameter_2

  • text(50)
  • can be NULL

parameter value

From the API request: the integration can map other indicators such as paygroup or timezone.

person_id

  • bigint
  • not NULL

Person.personId

  • actual_total.apply_date within passed range
  • actual_total.work_item_id=work_item_employee_assignment.work_item_id
  • work_item_employee_assignment.employee_id=wtkemployee.wtkemployeeid
  • wtkemployee.personid=person.personid

person_num

varchar(15)

Person.personnum

  • actual_total.apply_date within passed range
  • actual_total.work_item_id=work_item_employee_assignment.work_item_id
  • work_item_employee_assignment.employee_id=wtkemployee.wtkemployeeid
  • wtkemployee.personid=person.personid

full_name

varchar(64)

Person.fullnm

  • actual_total.apply_date within passed range
  • actual_total.work_item_id=work_item_employee_assignment.work_item_id
  • work_item_employee_assignment.employee_id=wtkemployee.wtkemployeeid
  • wtkemployee.personid=person.personid

first_name

varchar(30)

Person.firstnm

  • actual_total.apply_date within passed range
  • actual_total.work_item_id=work_item_employee_assignment.work_item_id
  • work_item_employee_assignment.employee_id=wtkemployee.wtkemployeeid
  • wtkemployee.personid=person.personid

last_name

varchar(30)

Person.lastnm

  • actual_total.apply_date within passed range
  • actual_total.work_item_id=work_item_employee_assignment.work_item_id
  • work_item_employee_assignment.employee_id=wtkemployee.wtkemployeeid
  • wtkemployee.personid=person.personid

middle_name

varchar(1)

Person.middleinitialnm

  • actual_total.apply_date within passed range
  • actual_total.work_item_id=work_item_employee_assignment.work_item_id
  • work_item_employee_assignment.employee_id=wtkemployee.wtkemployeeid
  • wtkemployee.personid=person.personid

range_start_date

  • timestamp without time zone
  • not NULL

Passed in parameter

Start date for apply_date range

range_end_date

  • timestamp without time zone
  • not NULL

Passed in parameter

End date for apply_date range

apply_date

  • timestamp without time zone
  • not NULL

actual_total.apply_date

amount_type

varchar(50)

paycodeamttype.paycodeamttypenm

  • actual_total.paycode_id = Paycode.paycodeId
  • paycode.paycodeamttypeid=paycodeamttype.paycodeamttypeid

hours_amount

bigint

actual_total.duration_in_seconds

days_amount

numeric(19,6)

actual_total.duration_in_days

money_amount

numeric(19,6)

actual_total.money_amount

wage_multiplier

numeric(19,6)

paycode.wagemultiply

actual_total.paycode_id = paycode.paycodeId

wage_addition

numeric(19,6)

paycode.wageaddition

actual_total.paycode_id = paycode.paycodeId

effective_wage_rate

numeric(19,6)

actual_total.money_amount/(actual_total.duration_in_seconds/3600)

Do not divide by 0. Ignore if amount_type is money.
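The effective_wage_rate formula above, with its two guard conditions, can be sketched as a small helper. The function is hypothetical; the field names follow the source columns:

```python
def effective_wage_rate(money_amount, duration_in_seconds, amount_type):
    """money_amount / (duration_in_seconds / 3600), returning None
    instead of dividing by zero, and None for money-type amounts."""
    if amount_type == "money" or duration_in_seconds == 0:
        return None
    return money_amount / (duration_in_seconds / 3600)


rate = effective_wage_rate(180.0, 28800, "hour")
# 28800 s = 8 h, so rate -> 22.5
```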

paycode_id

  • bigint
  • not NULL

actual_total.paycode_id

paycode_name

varchar(50)

paycode.name

actual_total.paycode_id = paycode.paycodeId

paycode_mapped_output1

varchar(50)

Paycode Payroll Mapping

paycode_mapped_output2

varchar(50)

Paycode Payroll Mapping

paycode_mapped_output3

varchar(50)

Paycode Payroll Mapping

is_historical_correction

Boolean

actual_total.apply_date <> actual_total.corrected_apply_date

original_date

  • timestamp without time zone
  • not NULL

actual_total.corrected_apply_date

original_payperiod_start

  • timestamp without time zone
  • not NULL

payperiod.start_date

payperiod.payruleid = effective payruleid for employee on original_date

original_payperiod_end

  • timestamp without time zone
  • not NULL

payperiod.end_date

payperiod.payruleid = effective payruleid for employee on original_date

include_in_totals

numeric(1,0)

actual_total.already_paid (switch 0 to 1 and 1 to 0)

Take the opposite value of already_paid to get the correct value for include_in_totals.

worked_job

text

  • org_effective_node.full_name
  • Use report_to_org_id and full_name for that node.
  • Place a slash (/) between each description.
  • Include position information.
  • actual_total.orgjob_id=org_effective_node.org_node_id
  • applydate between effective dates

worked_job_mapped

text

General Ledger Mapping for worked_job

worked_location_job_type_path

text

org_effective_node_type.full_name for each level of path. Place / between each name

  • actual_total.orgjob_id=org_effective_node.org_node_id
  • org_effective_node.org_node_type_id=org_effective_node_type.org_node_type_id

worked_labor_level1_name

description varchar(250)

laboracct.laborlev1nm

actual_total.labor_account_id=laboracct.laboracctid

worked_labor_level1_id

bigint

laboracct.laborlev1id

actual_total.labor_account_id=laboracct.laboracctid

worked_labor_level2_name

description varchar(250)

laboracct.laborlev2nm

actual_total.labor_account_id=laboracct.laboracctid

worked_labor_level2_id

bigint

laboracct.laborlev2id

actual_total.labor_account_id=laboracct.laboracctid

worked_labor_level3_name

description varchar(250)

laboracct.laborlev3nm

actual_total.labor_account_id=laboracct.laboracctid

worked_labor_level3_id

bigint

laboracct.laborlev3id

actual_total.labor_account_id=laboracct.laboracctid

worked_labor_level4_name

description varchar(250)

laboracct.laborlev4nm

actual_total.labor_account_id=laboracct.laboracctid

worked_labor_level4_id

bigint

laboracct.laborlev4id

actual_total.labor_account_id=laboracct.laboracctid

worked_labor_level5_name

description varchar(250)

laboracct.laborlev5nm

actual_total.labor_account_id=laboracct.laboracctid

worked_labor_level5_id

bigint

laboracct.laborlev5id

actual_total.labor_account_id=laboracct.laboracctid

worked_labor_level6_name

description varchar(250)

laboracct.laborlev6nm

actual_total.labor_account_id=laboracct.laboracctid

worked_labor_level6_id

bigint

laboracct.laborlev6id

actual_total.labor_account_id=laboracct.laboracctid

worked_cost_center_name

description varchar(250)

laboracct.laborlev7nm

actual_total.labor_account_id=laboracct.laboracctid

worked_cost_center_id

bigint

laboracct.laborlev7id

actual_total.labor_account_id=laboracct.laboracctid

worked_labor_level1_mapped

varchar(50)

General Ledger Mapping for worked_labor_level1

worked_labor_level2_mapped

varchar(50)

General Ledger Mapping for worked_labor_level2

worked_labor_level3_mapped

varchar(50)

General Ledger Mapping for worked_labor_level3

worked_labor_level4_mapped

varchar(50)

General Ledger Mapping for worked_labor_level4

worked_labor_level5_mapped

varchar(50)

General Ledger Mapping for worked_labor_level5

worked_labor_level6_mapped

varchar(50)

General Ledger Mapping for worked_labor_level6

worked_cost_center_mapped

varchar(50)

General Ledger Mapping for worked_cost_center

signoffthrudtm

timestamp without time zone

person.mgrsignoffthrudtm

employment_status

varchar(30)

employmentstat.shortnm

personstatusmm.personid,employmentstatid/applydate between effective dates

primary_job

text

  • org_effective_node.full_name
  • Use report_to_org_id and full_name for that node.
  • Place a slash (/) between each description.
  • combhomeacct.orgnodeid=org_effective_node.org_node_id
  • applydate between effective dates for both tables

primary_job_mapped

text

home_location_job_type_path

text

  • org_effective_node.full_name for each level of path
  • Place a slash (/) between each name.
  • combhomeacct.orgnodeid=org_effective_node.org_node_id
  • org_effective_node.org_node_type_id=org_effective_node_type.org_node_type_id

home_labor_level1_name

description varchar(250)

laboracct.laborlev1nm

  • combhomeacct.laboracctid=laboracct.laboracctid
  • applydate between combhomeacct effective dates

home_labor_level1_id

bigint

laboracct.laborlev1id

  • combhomeacct.laboracctid=laboracct.laboracctid
  • applydate between combhomeacct effective dates

home_labor_level2_name

description varchar(250)

laboracct.laborlev2nm

  • combhomeacct.laboracctid=laboracct.laboracctid
  • applydate between combhomeacct effective dates

home_labor_level2_id

bigint

laboracct.laborlev2id

  • combhomeacct.laboracctid=laboracct.laboracctid
  • applydate between combhomeacct effective dates

home_labor_level3_name

description varchar(250)

laboracct.laborlev3nm

  • combhomeacct.laboracctid=laboracct.laboracctid
  • applydate between combhomeacct effective dates

home_labor_level3_id

bigint

laboracct.laborlev3id

  • combhomeacct.laboracctid=laboracct.laboracctid
  • applydate between combhomeacct effective dates

home_labor_level4_name

description varchar(250)

laboracct.laborlev4nm

  • combhomeacct.laboracctid=laboracct.laboracctid
  • applydate between combhomeacct effective dates

home_labor_level4_id

bigint

laboracct.laborlev4id

  • combhomeacct.laboracctid=laboracct.laboracctid
  • applydate between combhomeacct effective dates

home_labor_level5_name

description varchar(250)

laboracct.laborlev5nm

  • combhomeacct.laboracctid=laboracct.laboracctid
  • applydate between combhomeacct effective dates

home_labor_level5_id

bigint

laboracct.laborlev5id

  • combhomeacct.laboracctid=laboracct.laboracctid
  • applydate between combhomeacct effective dates

home_labor_level6_name

description varchar(250)

laboracct.laborlev6nm

  • combhomeacct.laboracctid=laboracct.laboracctid
  • applydate between combhomeacct effective dates

home_labor_level6_id

bigint

laboracct.laborlev6id

  • combhomeacct.laboracctid=laboracct.laboracctid
  • applydate between combhomeacct effective dates

home_cost_center_name

description varchar(250)

laboracct.laborlev7nm

  • combhomeacct.laboracctid=laboracct.laboracctid
  • applydate between combhomeacct effective dates

home_cost_center_id

bigint

laboracct.laborlev7id

  • combhomeacct.laboracctid=laboracct.laboracctid
  • applydate between combhomeacct effective dates

home_labor_level1_mapped

varchar(50)

home_labor_level2_mapped

varchar(50)

home_labor_level3_mapped

varchar(50)

home_labor_level4_mapped

varchar(50)

home_labor_level5_mapped

varchar(50)

home_labor_level6_mapped

varchar(50)

home_cost_center_mapped

varchar(50)

base_wage_rate

numeric(16,6)

basewagerthist.basewagehourlyamt

  • basewagerthist.employeeid=wtkemployee.employeeid
  • wtkemployee.personnum=person.personnum

currency_code

varchar(10)

currency.currencycd

  • currency.currencyid=currencyassignmnt.currencyid
  • currencyassignmnt.employeeid=wtkemployee.employeeid
  • wtkemployee.personnum=person.personnum

payrule

varchar(50)

payruleids.name

assignpayrule.employeeid,payruleid applydate between effective dates

employment_term

varchar(50)

groupschedule.shortnm

empgrpschedmm.employeeid,grpscheduleid apply date between effective dates/groupschedule.contractsw=1

hire_date

timestamp without time zone

person.companyhiredtm

badge_number

varchar(50)

badgeassign.badgenum

Person.personid=badgeassign.personid and actual_total.apply_date between badgeassign effective dates

worker_type

varchar(50)

workertype.workertypenm

  • Person.personid=wtkemployee.personid
  • wtkemployee.workertypeid=workertype.workertypeid

timezone

varchar(30)

timezone.english

  • Person.personid= wtkemployee.personid
  • timezone.timezoneid=wtkemployee.timezoneid

last_total_time

timestamp without time zone

totaleventts.last_totaltime

Person.personid = totaleventts.personid

last_total_change_time

timestamp without time zone

totaleventts.last_chngtime

Person.personid = totaleventts.personid

person_custom_field1_name

varchar(30)

customdatadef.shortnm

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field1_value

varchar(2000)

personcstmdata.personcstdatatxt

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field2_name

varchar(30)

customdatadef.shortnm

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field2_value

varchar(2000)

personcstmdata.personcstdatatxt

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field3_name

varchar(30)

customdatadef.shortnm

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field3_value

varchar(2000)

personcstmdata.personcstdatatxt

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field4_name

varchar(30)

customdatadef.shortnm

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field4_value

varchar(2000)

personcstmdata.personcstdatatxt

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field5_name

varchar(30)

customdatadef.shortnm

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field5_value

varchar(2000)

personcstmdata.personcstdatatxt

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field6_name

varchar(30)

customdatadef.shortnm

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field6_value

varchar(2000)

personcstmdata.personcstdatatxt

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field7_name

varchar(30)

customdatadef.shortnm

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field7_value

varchar(2000)

personcstmdata.personcstdatatxt

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field8_name

varchar(30)

customdatadef.shortnm

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field8_value

varchar(2000)

personcstmdata.personcstdatatxt

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field9_name

varchar(30)

customdatadef.shortnm

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field9_value

varchar(2000)

personcstmdata.personcstdatatxt

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field10_name

varchar(30)

customdatadef.shortnm

customdatadef.customdatadefid = personcstmdata.customdatadefid

person_custom_field10_value

varchar(2000)

personcstmdata.personcstdatatxt

customdatadef.customdatadefid = personcstmdata.customdatadefid

Accruals data

The Payroll Extract integration populates a staging table in the extension database with the accruals data.

To access or process the data, see the Extension Tables topic.

  • Format: Comma delimited
  • Frequency: On-demand or scheduled
  • Header Row: Yes, 1 row
  • Output Folder: /Outbound

Output accruals data

Output accruals data for this integration

prm_accruals_staging_data columns

Each column entry below lists the data type, the data source, and any comments.

id

  • bigint
  • not NULL

Unique ID for this table

staging_request_id

  • bigint
  • not NULL

ID for this request

staging_datetime

  • timestamp without time zone
  • not NULL

current datetime

Current date and time

custom_parameter_1

varchar(50)

parameter value

From the API request: the integration can map other indicators such as paygroup or timezone.

custom_parameter_2

varchar(50)

parameter value

From the API request: the integration can map other indicators such as paygroup or timezone.

person_id

  • bigint
  • not NULL

Person.personId

  • actual_total.apply_date within passed range
  • actual_total.work_item_id=work_item_employee_assignment.work_item_id
  • work_item_employee_assignment.employee_id=wtkemployee.wtkemployeeid
  • wtkemployee.personid=person.personid

person_num

varchar(15)

Person.personnum

  • actual_total.apply_date within passed range
  • actual_total.work_item_id=work_item_employee_assignment.work_item_id
  • work_item_employee_assignment.employee_id=wtkemployee.wtkemployeeid
  • wtkemployee.personid=person.personid

range_start_date

  • timestamp without time zone
  • not NULL

Passed in parameter

Start date for apply_date range

range_end_date

  • timestamp without time zone
  • not NULL

Passed in parameter

End date for apply_date range

accrual_code_name

  • varchar(50)
  • not NULL

accrualcode.name

  • encumbered_accrual_balance.accrual_code_id
  • accrual_balance.accrual_code_id
  • accrualtran.accrualcodeid
  • accrualcode.accrualcodeid

accrual_amount_type

  • varchar(10)
  • not NULL

accrualcode.type_sw("1"="HOUR"; "2"="DAY"; "3"="MONEY")
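The type_sw mapping above ("1"=HOUR, "2"=DAY, "3"=MONEY) determines which balance field the subsequent columns use. A minimal sketch; the function and the row dict are hypothetical, with keys named after the source columns:

```python
# type_sw mapping from accrual_amount_type: "1"=HOUR, "2"=DAY, "3"=MONEY.
BALANCE_FIELD = {
    "1": "duration_in_seconds",
    "2": "duration_in_days",
    "3": "money_amount",
}


def balance_value(row, type_sw):
    """Pick the balance field that matches the accrual code's type."""
    return row[BALANCE_FIELD[type_sw]]


row = {"duration_in_seconds": 14400, "duration_in_days": 0.5, "money_amount": 0}
hours_balance = balance_value(row, "1")
# hours_balance -> 14400 (seconds)
```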

available_starting_balance

numeric(16,6)

  • encumbered_accrual_balance.duration_in_seconds
  • encumbered_accrual_balance.money_amount
  • encumbered_accrual_balance.duration_in_days

Use duration_in_seconds, money_amount, or duration_in_days based on type_sw and range_start_date=encumbered_accrual_balance.balance_date

available_ending_balance

numeric(16,6)

  • encumbered_accrual_balance.duration_in_seconds
  • encumbered_accrual_balance.money_amount
  • encumbered_accrual_balance.duration_in_days

Use duration_in_seconds, money_amount, or duration_in_days based on type_sw and range_end_date=encumbered_accrual_balance.balance_date

vested_starting_balance

numeric(16,6)

  • encumbered_accrual_balance.vested_duration_in_seconds
  • encumbered_accrual_balance.vested_money_amount
  • encumbered_accrual_balance.vested_duration_in_days

Use duration_in_seconds, money_amount, or duration_in_days based on type_sw and balance_date_range_start_date=accrual_balance.balance_date

vested_ending_balance

numeric(16,6)

  • encumbered_accrual_balance.vested_duration_in_seconds
  • encumbered_accrual_balance.vested_money_amount
  • encumbered_accrual_balance.vested_duration_in_days

Use duration_in_seconds, money_amount, or duration_in_days based on type_sw, balance_date, and balance_date_range_end_date=accrual_balance.balance_date

probationary_starting_balance

numeric(16,6)

  • encumbered_accrual_balance.probationary_duration_in_seconds
  • encumbered_accrual_balance.probationary_money_amount
  • encumbered_accrual_balance.probationary_duration_in_days

Uses probationary_duration_in_seconds, probationary_money_amount, or probationary_duration_in_days based on type_sw, where range_start_date = accrual_balance.balance_date.

probationary_ending_balance

numeric(16,6)

  • encumbered_accrual_balance.probationary_duration_in_seconds
  • encumbered_accrual_balance.probationary_money_amount
  • encumbered_accrual_balance.probationary_duration_in_days

Uses probationary_duration_in_seconds, probationary_money_amount, or probationary_duration_in_days based on type_sw, where range_end_date = accrual_balance.balance_date.
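
All six starting and ending balance fields follow the same pattern: select a source column based on type_sw, with a column-name prefix for the vested and probationary variants. The following is a minimal sketch of that selection logic; the row dict and helper function are hypothetical, while the column names follow the data sources listed above:

```python
# Illustrative sketch only: shows how the balance fields select a source
# column based on accrualcode.type_sw.
BALANCE_COLUMN_BY_TYPE = {
    "1": "duration_in_seconds",  # type_sw "1" = HOUR
    "2": "duration_in_days",     # type_sw "2" = DAY
    "3": "money_amount",         # type_sw "3" = MONEY
}

def balance_value(row: dict, type_sw: str, prefix: str = "") -> float:
    # prefix selects the vested_ or probationary_ variants of the columns
    return row[prefix + BALANCE_COLUMN_BY_TYPE[type_sw]]

# Hypothetical balance row keyed by column name:
row = {
    "duration_in_seconds": 28800.0,
    "duration_in_days": 1.0,
    "money_amount": 250.0,
    "vested_duration_in_seconds": 14400.0,
}
```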

balance_code_mapped

varchar(50)

From the new mapping table, based on the accrual code and the employee's accrual profile, if configured.

accrual_takings_sum

varchar(50)

accrualtran.amount

Sum of all accrualtran.amount values for an accrual code where accrualtran.type is configured in prm_accrual_transaction_mapping to be included in takings, and accrualtran.effectivedate is between range_start_date and range_end_date for the accrualtran.personid.

accrual_earnings_sum

varchar(50)

accrualtran.amount

Sum of all accrualtran.amount values for an accrual code where accrualtran.type is configured in prm_accrual_transaction_mapping to be included in earnings, and accrualtran.effectivedate is between range_start_date and range_end_date for the accrualtran.personid.
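
The takings and earnings sums are simple filtered aggregations over accrualtran. The following sqlite sketch illustrates the shape of that aggregation; the table and column names mirror the documented sources, while the sample data and the "TAKING" transaction type value are hypothetical:

```python
import sqlite3

# Minimal in-memory demo of the accrual_takings_sum aggregation described
# above. Real deployments read the transaction-type filter from
# prm_accrual_transaction_mapping; here the type is hard-coded for brevity.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE accrualtran (
    personid INTEGER, accrualcodeid INTEGER, type TEXT,
    effectivedate TEXT, amount REAL)""")
conn.executemany(
    "INSERT INTO accrualtran VALUES (?, ?, ?, ?, ?)",
    [(1, 10, "TAKING", "2024-01-05", -8.0),
     (1, 10, "TAKING", "2024-01-20", -4.0),
     (1, 10, "EARNING", "2024-01-10", 12.0)])

# Sum takings for one person and accrual code within the apply_date range.
(takings,) = conn.execute(
    """SELECT SUM(amount) FROM accrualtran
       WHERE personid = 1 AND accrualcodeid = 10
         AND type = 'TAKING'
         AND effectivedate BETWEEN '2024-01-01' AND '2024-01-31'"""
).fetchone()
print(takings)  # -12.0
```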

accrual_reporting_period_earnings_sum

numeric(16,6)

accrualtran.amount

Same as accrual_earnings_sum, except accrualtran.effectivedate is between trackingperiod.startdtm and trackingperiod.enddtm, and range_start_date is between trackingperiod.startdtm and trackingperiod.enddtm.

accrual_reporting_period_takings_sum

numeric(16,6)

accrualtran.amount

Same as accrual_takings_sum, except accrualtran.effectivedate is between trackingperiod.startdtm and trackingperiod.enddtm, and range_start_date is between trackingperiod.startdtm and trackingperiod.enddtm.

Custom data

(Optional) You can use the custom staging table in the extension database to aggregate or map any data.

To access or process the data, see the Extension Tables topic.

  • File Name:
  • Format: Comma delimited
  • Frequency: On-demand or scheduled
  • Header Row: Yes, 1 row
  • Output Folder: /Outbound

Custom data

Custom data table for this integration

prm_custom_data

Data type

Data source

Comments

id

  • bigint
  • not NULL

Unique ID for this table

custom_data_1

varchar(50)

custom_data_2

varchar(50)

...

...

custom_data_50

varchar(50)

Multiple assignments

If employees work multiple jobs or positions under separate employer identification numbers (EIN), the integration uses pay group attributes to separate the employees' payroll totals into multiple payroll groups based on the job or position that is worked. For these employees, the payroll manager selects employees by their Worked Job or position, rather than by the Primary Job or position that is used for employees who work a single job or position.

Accrual data is exported based on the employee's Primary Job or position.

To access or process the data, see the Extension Tables topic.

  • Format: Comma delimited
  • Frequency: On-demand or scheduled
  • Header Row: Yes, 1 row
  • Output Folder: /Outbound

Multiple assignments

Multiple assignments table for this integration

Column

Data type

Data source

Comments

Position Name

Exempt

Position Location/Job

Effective dated

Position Status

Hired Date

Person-specific for the position

Seniority Date

Person-specific for the position

Other Dates

Person-specific for the position

Position ID

Unique identifier for the position

APIs for Payroll Extract

API details

APIs for the HCM Payroll Export-v2 integration

Purpose

API endpoint

Operation

Description

Stage request

/v1/commons/payroll/staging/async

POST

Stage the pay data:

  • Submits the payroll-extract staging process.
  • This is an asynchronous API.
  • The Content-Location response header provides information for the status-check API.

    Example Content-Location header: /v1/commons/payroll/{requestId}/status
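
The asynchronous flow above, deriving follow-up URLs from the Content-Location response header, can be sketched as follows. The helper below is a hypothetical illustration; it performs no real HTTP calls, and simply builds the status and details endpoint paths from the header value:

```python
# Hypothetical sketch of the async follow-up: after POSTing the staging
# request, the Content-Location header (e.g. "/v1/commons/payroll/{requestId}/status")
# tells the client where to poll for status and, later, details.
def follow_up_urls(content_location: str) -> dict:
    # Strip the trailing path segment ("/status") to get the request base path.
    base = content_location.rsplit("/", 1)[0]
    return {
        "status": f"{base}/status",
        "details": f"{base}/details",
    }
```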

Check status

/v1/commons/payroll/{requestId}/status

/v1/commons/payroll/{requestId}/status?exportType=payroll

/v1/commons/payroll/{requestId}/status?exportType=accrual

GET

Retrieve the status of a submitted process:

  • The Content-Location response header provides information for the details API.

    Example Content-Location header: /v1/commons/payroll/{requestId}/details

Get details

/v1/commons/payroll/{requestId}/details

/v1/commons/payroll/{requestId}/details?exportType=payroll

/v1/commons/payroll/{requestId}/details?exportType=accrual

GET

Get the details for an execution of the asynchronous operation.

Custom table

/v1/commons/payroll/tables/apply_upsert

POST

Create a table to stage and persist the transactional data.

Custom table

/v1/commons/payroll/tables/apply_create

POST

Create a table for external applications to process the data.

Retrieve data

/v1/commons/payroll/export/

POST

Retrieve the data by the following attributes:

  • query: string. A valid ANSI SQL 2011 query statement.
  • (Optional) headers and footers: arrays of objects. The definitions of payroll header and footer lines. Each object in the array has the following structure:
    • (Optional) prefix: string. A prefix for the header or footer line.
    • (Optional) queries: array of strings. Each string is an SQL query.
  • (Optional) exportConfig: object.
    • (Required) format: string. Defines the output format. Supported values: "csv", "json".
    • (Optional) filename: string. The name of the attachment file as defined by the HTTP header.
    • (Optional) delimiter: string. If defined along with format=csv, this character is used as the value delimiter in the CSV file.

Sample code

Stage request:

{
  "where": {
    "employeeIn": [
      {
        "payGroup": {
          "orgPath": [{"qualifier": "QUALIFIER"}],
          "workerType": [{"qualifier": "QUALIFIER"}],
          "laborCategory": [{"qualifier": "QUALIFIER"}],
          "payRule": [{"qualifier": "QUALIFIER"}],
          "costCenter": [{"qualifier": "QUALIFIER"}],
          "customField": [
            {
              "label": {"qualifier": "QUALIFIER"},
              "values": []
            }
          ]
        }
      }
    ]
  },
  "employees": {"qualifiers": []},
  "dateRange": {
    "startDate": "",
    "endDate": "",
    "symbolicPeriod": {"qualifier": ""}
  },
  "customParameter1": "parameter value",
  "customParameter2": "parameter value",
  "ignoreSignOff": true,
  "includePayCodes": [{"qualifier": "QUALIFIER"}],
  "excludePayCodes": [{"qualifier": "QUALIFIER"}],
  "processBasedOn": "PRIMARY/WORKED",
  "exportType": "Payroll/Accrual/Both",
  "processTerminatedEmployeesOnly": false,
  "payCodeMappingTable": {"qualifier": ""},
  "historicalCorrections": "Included in Totals/Exclude in Totals/Correction only",
  "executionId": "EXECUTIONID"
}

Custom table:

--data-raw '{
  "query": "SELECT month FROM `lite_20210621093800_c6f49658-ca27-4177-b45a-0188423c4f3b`",
  "tableName": "payroll_rollup",
  "partitionConfig": {
    "column": "payroll_start_date",
    "granularity": "DAY",
    "expirationHours": 168
  }
}'

'file=@"/Users/Payroll/EffectiveWageLookup.csv"'

Retrieve data:

{
  "headers": [
    {
      "prefix": "header_prefix :",
      "queries": [
        "SELECT employee_id FROM `payroll_20210621093800_c6f49658-ca27-4177-b45a-0188423c4f3b`",
        "SELECT month FROM `payroll_20210621093800_c6f49658-ca27-4177-b45a-0188423c4f3b`"
      ]
    },
    {
      "queries": [
        "SELECT aa_count FROM `payroll_20210621093800_c6f49658-ca27-4177-b45a-0188423c4f3b`"
      ]
    }
  ],
  "footers": [
    {
      "prefix": "footer prefix: ",
      "queries": [
        "SELECT f01_gt_rand FROM `payroll_20210621093800_c6f49658-ca27-4177-b45a-0188423c4f3b`",
        "SELECT sumd FROM `payroll_20210621093800_c6f49658-ca27-4177-b45a-0188423c4f3b`"
      ]
    }
  ],
  "query": "SELECT * FROM `payroll_20210621093800_c6f49658-ca27-4177-b45a-0188423c4f3b`",
  "exportConfig": {
    "format": "csv",
    "filename": "report.dat",
    "delimiter": ","
  }
}