
Actual Evals CSV

The Actual Evals (CSV) report provides non-aggregated score data, including all question, evaluation area, and evaluation form scores.


How to Use This Report

  1. Set parameters, which narrow the report's data.

  2. Use the report generation controls to schedule, generate, customize, rename, and/or clone the report. Note that when you run a CSV report, a zipped CSV file begins downloading (see the sketch after these steps).

  3. Run the report and view metrics.
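
If you want to inspect the downloaded file programmatically rather than in a spreadsheet, the short Python sketch below unzips the archive and previews the first few rows. The archive name is a placeholder; the actual name of the downloaded file will differ.

  import csv
  import io
  import zipfile

  # Hypothetical file name; the actual name of the downloaded archive will vary.
  ARCHIVE_PATH = "actual_evals.zip"

  with zipfile.ZipFile(ARCHIVE_PATH) as archive:
      # The archive is expected to contain a single CSV file.
      csv_name = archive.namelist()[0]
      with archive.open(csv_name) as raw:
          reader = csv.DictReader(io.TextIOWrapper(raw, encoding="utf-8"))
          for i, row in enumerate(reader):
              print(row)          # each row is a dict keyed by column name
              if i >= 4:          # preview the first five rows only
                  break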


Report Parameters

You can narrow the report's data by specifying the following parameters.

Time frame

The time range for which the report's data will be generated (e.g., Today, This week, Custom).

From/To

If the Custom timeframe is selected, From/To is the custom date range. From is the evaluation start date and time, and To is the evaluation stop date and time.

Timezone

Clicking the link shown allows you to designate the timezone for the report's timeframe (see the sketch following this list of parameters).

Service/campaign

Filters report data according to the selected service(s) and/or campaign(s). If no service or campaign is selected, the report will include data for all services and campaigns.

Team

Filters report data according to the selected team(s). If no team is selected, the report will include data for all teams.

Evaluators

Filters report data according to the specific evaluator(s) who performed the evaluations. If no evaluator is selected, the report will include data for all evaluators.

Supervisors

Filters report data according to specific supervisor(s) in charge. If no supervisor is selected, the report will include data for all supervisors.

Forms

Filters report data according to specific evaluation form(s). If no evaluation form is selected, the report will include data for all evaluation forms.

Agent

Filters report data according to specific agent(s) who were evaluated. If no agent is selected, the report will include data for all agents.

custom1

Filters report data according to the contents of custom reporting field 1 of the call detail record for the given interaction.

custom2

Filters report data according to the contents of custom reporting field 2 of the call detail record for the given interaction.

custom3

Filters report data according to the contents of custom reporting field 3 of the call detail record for the given interaction.

custom4

Filters report data according to the contents of custom reporting field 4 of the call detail record for the given interaction.

custom5

Filters report data according to the contents of custom reporting field 5 of the call detail record for the given interaction.

Agent rank

Filters report data according to specific agent rank(s), as defined in configuration.

Agent training classes

Filters report data according to specific training class(es) that the agent has taken. Tabulating agent scores by this parameter shows the effectiveness of the training class. If no agent training class is selected, the report will include all agent training classes.
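
As an illustration of how a Custom From/To range and the timezone setting fit together, the following Python sketch (not part of the product; all values are assumptions) converts a local date range into UTC so the interval is unambiguous when compared against evaluation timestamps.

  from datetime import datetime
  from zoneinfo import ZoneInfo

  # Illustrative values only; use the timezone designated for the report.
  report_tz = ZoneInfo("America/Los_Angeles")

  # A Custom timeframe: From is the start date/time, To is the stop date/time.
  frm = datetime(2023, 5, 1, 0, 0, tzinfo=report_tz)
  to = datetime(2023, 5, 7, 23, 59, tzinfo=report_tz)

  # Converting both endpoints to UTC makes the range unambiguous.
  print(frm.astimezone(ZoneInfo("UTC")), "-", to.astimezone(ZoneInfo("UTC")))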


Metric Descriptions

The metrics of the Actual Evals (CSV) report are organized into columns, which are described as follows:

start_time

The evaluation start time (e.g., “10:51 AM”).

EVALUATION_TIME

The evaluation time.

SERVICE_NAME

The service of the interaction that was evaluated (e.g., “Email service”).

custom1

Contents of custom reporting field 1 of the call detail record for the given interaction.

custom2

Contents of custom reporting field 2 of the call detail record for the given interaction.

custom3

Contents of custom reporting field 3 of the call detail record for the given interaction.

custom4

Contents of custom reporting field 4 of the call detail record for the given interaction.

custom5

Contents of custom reporting field 5 of the call detail record for the given interaction.

LOGIN_ID

The login ID (i.e., username) of the evaluator (e.g., “jeffery.lozada”).

TEAM_NAME

The name of the team to which the agent being evaluated has been assigned (e.g., “Pro Service Reps”).

CONFIRMED_BY

The user who confirmed the evaluation.

FORM_NAME

The name of the evaluation form (e.g., “Default QM Chat Form”).

AGENT_ID

The username of the agent being evaluated (e.g., “christy.borden”).

AREA_NAME

The name of the evaluation area for the given evaluation (e.g., “Offer”).

QUESTION_NAME

The name of the evaluation question.

SCORE

The evaluation score (e.g., “85” out of 100).

COMMENT

Text comments written by the evaluator for the agent being evaluated.
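
To show how the columns above can be used together, here is a minimal Python sketch that reads the extracted CSV and computes each agent's average score from the AGENT_ID and SCORE columns. The file name is a placeholder, and the sketch assumes the column headers match the names listed above.

  import csv
  from collections import defaultdict

  # Hypothetical path; point this at the CSV extracted from the downloaded archive.
  CSV_PATH = "actual_evals.csv"

  totals = defaultdict(float)
  counts = defaultdict(int)

  with open(CSV_PATH, newline="", encoding="utf-8") as f:
      for row in csv.DictReader(f):
          # AGENT_ID and SCORE are columns described above; rows without a score are skipped.
          score = row.get("SCORE")
          if score:
              totals[row["AGENT_ID"]] += float(score)
              counts[row["AGENT_ID"]] += 1

  # Print the average score per evaluated agent.
  for agent, total in sorted(totals.items()):
      print(f"{agent}: {total / counts[agent]:.1f}")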



