CENTER FOR
WOMEN IN
GOVERNMENT
State University of New York at Albany
Draper Hall, Room 302
1400 Washington Avenue
Albany, New York 12222
518/455-6211
President of the Board
Gail S. Shaffer
Secretary of State

Executive Director
Nancy D. Perlman

THE NEW YORK STATE
PAY EQUITY STUDY:
A RESEARCH REPORT

Ronnie Steinberg, Ph.D.
Lois Haignere, Ph.D.
Carol Possin, Ph.D.
Cynthia H. Chertos, Ph.D.
Donald Treiman, Ph.D.
with the special contribution of:
Richard Maisel, Ph.D.
Center for Women in Government
State University of New York at Albany
Albany, New York
March, 1986
The funds for this study were provided under contract by the State of New York
and the Civil Service Employees Association.
TABLE OF CONTENTS

                                                                        Page

Executive Summary ..................................................... iii
List of Tables .......................................................... x
List of Figures ...................................................... xiii

CHAPTER I - INTRODUCTION ................................................ 1
    The New York State Labor Force ...................................... 3
    Pay Equity and Job Evaluation: Background ........................... 5
    Policy-capturing Job Evaluation ..................................... 9
    Comparable Worth Job Evaluation: Overview of Design ................ 11
    Overview of the Report ............................................. 14

CHAPTER II - GENERAL METHODOLOGY ....................................... 17
    The Collection of Job Content Information .......................... 18
    Population Definitions ............................................. 24
    Job Title Sampling Frame ........................................... 31
    Summary ............................................................ 38

CHAPTER III - THE JOB CONTENT QUESTIONNAIRE:
        DEVELOPMENT AND PRELIMINARY FIELD-TESTING ...................... 41
    Preliminary Activities and Questionnaire Development ............... 43
    Pre-Testing the Questionnaire ...................................... 47
    Expert Review and Modification ..................................... 52
    Summary ............................................................ 52

CHAPTER IV - THE PILOT SURVEY .......................................... 55
    General Methodology ................................................ 57
    Survey Mechanics ................................................... 64
    Response Rates and Distribution Methods ............................ 69
    Reliability ........................................................ 76
    Validity ........................................................... 79
    Revision of the Job Content Questionnaire .......................... 89
    Summary ............................................................ 93

CHAPTER V - MAIN DATA COLLECTION SURVEY: DESIGN AND MECHANICS .......... 95
    Sampling Frame ..................................................... 96
    Survey Distribution Design ......................................... 97
    Distribution and Intake Procedures ................................. 99
    Response Rates .................................................... 105
    Summary of Changes in the Sample .................................. 108
    Preparation of the Data for Analysis .............................. 110
    Summary ........................................................... 110

CHAPTER VI - PRELIMINARY DATA ANALYSIS ................................ 113
    Adjusting the Population .......................................... 114
    Item Recoding ..................................................... 114
    Aggregating Data by Title ......................................... 116
    Defining Percent Minority ......................................... 116
    Creating Indices .................................................. 117
    The Creation of Factor-based Scales ............................... 118
    Summary ........................................................... 128

CHAPTER VII - MODELS FOR ASSESSING WAGE DISCRIMINATION ................ 131
    Regression Models for Pay Equity Analysis ......................... 133
    Pay Policy Models for Assessing Wage Discrimination ............... 137
    Developing the Regression Models: Preliminary Design Decisions .... 143
    The Final Salary Grade Prediction Model ........................... 150
    Job Content Characteristics Not Currently Valued by New York
        State Government .............................................. 159
    Summary ........................................................... 161

CHAPTER VIII - PREDICTED SALARY GRADES AND CONFIDENCE INTERVALS ....... 163
    Procedure for Obtaining PSGs and Confidence Intervals ............. 164
    The Procedures for Selecting Replicates ........................... 167
    Predicted Salary Grades: Whole Sample and Replicates .............. 169
    Final Predicted Salary Grades ..................................... 169
    Confidence Intervals .............................................. 170
    Analysis .......................................................... 172
    Summary ........................................................... 174

BIBLIOGRAPHY .......................................................... 203

APPENDICES:
    Appendix A: Acknowledgments ....................................... 209
    Appendix B: Pilot Survey: Agency Personnel and Union Liaisons ..... 215
    Appendix C: Descriptive Statistics for the Independent Variables
        Entered Into Regression: Whole Sample and White Male
        Sample ........................................................ 219
    Appendix D: Main Survey: Job Content Questionnaire, Cover Letter,
        and Follow-up Letter .......................................... 229
    Appendix E: Main Survey: Agency Liaisons .......................... 231
    Appendix F: Main Survey: Response Rate by Title ................... 235
    Appendix G: Main Survey: Deleted Titles Due to Inadequate Response
        Rates ......................................................... 269
New York State Pay Equity Study
Executive Summary
Ronnie Steinberg, Ph.D.
Lois Haignere, Ph.D.
Carol Possin, Ph.D.
Cynthia H. Chertos, Ph.D.
Donald Treiman, Ph.D.
The Civil Service Employees Association (CSEA) and the State, through its
Governor's Office of Employee Relations (GOER), negotiated funds in 1982 to
carry out a study to assess pay equity in three bargaining units covering
approximately 100,000 State employees. In 1983, the Center for Women in
Government was asked by CSEA and GOER to examine the effects of sex and
race/ethnicity of the typical job incumbent on the setting of salaries.
Pay equity studies, also commonly called comparable worth studies, are
designed to determine whether the salaries associated with job titles
accurately reflect a consistently applied standard of job worth regardless of
the sex or race/ethnicity of a typical job incumbent. These studies require a
methodology through which:
• the relative worth of different jobs can be assessed;
• undervalued job titles can be identified; and
• estimates regarding the extent of undervaluation can be calculated.
To accomplish these objectives, most pay equity studies have relied on job
evaluation techniques, which historically have formed the basis of most formal
classification systems and salary-setting practices in the public and private
sectors.
The use of conventional job evaluation can be problematic in research on
pay equity, however. Given historical assumptions about the value of "women's
work" or work done by minorities, there is reason to suspect that sex and
race/ethnicity of typical job incumbents play a subtle role in assigning
salaries through these evaluation systems. To avoid potential bias, it is
necessary to modify conventional job evaluation. The Center for Women in
Government's approach was designed to maximize consistency and minimize sex and
race/ethnicity bias in the way jobs are described and evaluated and in the
procedures for establishing wages. The study uses a policy-capturing approach,
which relies heavily on statistical procedures for designing the data collec-
tion, for analyzing the data to establish factor weights, and for estimating
the appropriate salary for female-dominated and for disproportionately minority
jobs.
Since New York State is the third largest public employer in the United
States, with well over 175,000 employees in over 7,350 classified job titles,
the job evaluation study required the collection of massive amounts of accurate
information about job content from a large sample of job incumbents filling a
representative range of job titles. Given the volume of information that had
to be collected in New York State, we utilized incumbent self-reports as a
major source of information about job content. Our two primary criteria for
this decision were:
• this approach was the way to get the most information at the lowest
  cost; and
• a number of authorities regard incumbent self-reports as the best source
  of information about jobs.
In addition, we averaged incumbent responses within each job title to obtain a
title profile. This has the effect of minimizing the impact of any unique
incumbent differences in filling out questionnaires, including any tendencies
to overstate or understate the duties, skills, and responsibilities involved in
their jobs.
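The within-title averaging described above is simple enough to sketch in a few lines; the job titles and item scores below are invented for illustration:

```python
from collections import defaultdict

# Incumbent responses: (job title, score on one questionnaire item).
responses = [
    ("Keyboard Specialist", 4), ("Keyboard Specialist", 5),
    ("Keyboard Specialist", 3), ("Highway Worker", 2),
    ("Highway Worker", 4),
]

by_title = defaultdict(list)
for title, score in responses:
    by_title[title].append(score)

# Averaging within a title damps any one incumbent's over- or understatement.
profiles = {title: sum(v) / len(v) for title, v in by_title.items()}
print(profiles["Keyboard Specialist"])  # -> 4.0
print(profiles["Highway Worker"])       # -> 3.0
```

Each title thus contributes one profile of average item scores, regardless of how many incumbents responded.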
By virtue of our contractual agreement, estimates of undervaluation were
targeted to female-dominated and disproportionately minority titles with ten
or more incumbents in the three bargaining units represented by CSEA. Female-
dominated titles were defined as those in which at least 67.2 percent of
incumbents are females. Disproportionately minority jobs were defined as those
in which at least 30.8 percent of incumbents are minorities. These definitions
are based on a formula,
(.4X) + X,
where X is the overall proportion of women or minorities in the New York State
labor force. In addition to providing equitable pay estimates for female-
dominated and disproportionately minority titles, we provided similar estimates
for a set of titles in the direct line of promotion from disproportionately
minority and female-dominated entry-level titles found to be undervalued.
The pool of titles for which equitable pay estimates were made includes
168 female-dominated and disproportionately minority titles and 20
direct-line-of-promotion titles.
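The (.4X) + X rule is easy to verify. In the sketch below, the 22 percent minority share is the figure cited for the state work force in Chapter I; the 48 percent female share is an assumption inferred from the 67.2 percent threshold:

```python
def dominance_threshold(x):
    """Cutoff share for a 'dominated' title: X plus 40 percent of X."""
    return (0.4 * x) + x

# 0.22 is the minority share of the state work force cited in Chapter I;
# 0.48 (female share) is inferred from the 67.2 percent threshold.
print(round(dominance_threshold(0.48), 3))  # -> 0.672
print(round(dominance_threshold(0.22), 3))  # -> 0.308
```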
Data were collected from a sample of incumbents in a broad range of job
titles. Sampling of incumbents within titles was done differently for the
subset of female-dominated and disproportionately minority titles for which we
provide pay equity estimates than for the remaining titles. For estimated
titles, we included all employees in titles with 150 or fewer incumbents. In
titles with more than 150 incumbents, we sampled 150. For the remaining job
titles, we sampled all employees in titles with 20 or fewer incumbents. In
titles with more than 20 incumbents, we sampled 20 incumbents using systematic
sampling procedures with a random starting point. Note that direct-line-of-
promotion titles were sampled in this way, primarily because the final policy
decision to examine them for potential undervaluation was made after the sample
had been selected.
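Beyond "systematic sampling with a random starting point," the report does not give the selection algorithm, so the sketch below is one standard implementation of that procedure; the roster names are invented:

```python
import random

def sample_title(incumbents, cap):
    """Systematic sample with a random start: if the title exceeds the cap,
    take every k-th incumbent (k = N / cap) from a random initial offset."""
    n = len(incumbents)
    if n <= cap:
        return list(incumbents)          # small titles: everyone is included
    step = n / cap
    start = random.uniform(0, step)      # random starting point
    return [incumbents[int(start + i * step)] for i in range(cap)]

roster = [f"emp{i:03d}" for i in range(137)]
print(len(sample_title(roster, 20)))   # -> 20
print(len(sample_title(roster, 150)))  # -> 137 (title smaller than the cap)
```

Because the step exceeds one whenever sampling is needed, no incumbent is selected twice.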
To collect information on job content from sampled incumbents, we designed
a closed-ended questionnaire customized to the range of content associated with
work in New York State government. The design of the Job Content Questionnaire
was shaped by three basic objectives:
• to capture variations in job content as they relate to variations in
  civil service grade level;
• to maximize consistency and minimize sex and race/ethnic bias in the
  range and wording of job content questions; and
• to allow incumbents in all titles to read and accurately respond to the
  questions being asked.
To our knowledge, it represents the first attempt to carefully and systemati-
cally meet these objectives in a large-scale public sector pay equity study.
Its development and modification were carried out over eleven months, involving
a process of comprehensive review of previous job analysis and job evaluation
approaches combined with a sensitivity to detail in the range of tasks,
functions, and behaviors of work associated with New York State job titles. It
involved as well continual revision of content, wording, and lay-out in light
of the reactions and criticisms of several hundred state employees acting
either as respondents in two waves of preliminary field testing or as experts
or both.
Between February and June, 1984, a pilot survey was carried out to improve
the technical quality of the main survey by providing information on
distribution methods, survey mechanics, and questionnaire construction. The
pilot Job Content Questionnaire was distributed to 1,862 incumbents in 68 job
titles sampled primarily from six agencies and two facilities. Response rates
were, for the most part, adequate for all types of jobs and for all distribu-
tion methods. This finding was stable across sex, race/ethnicity, and literacy
level of job incumbents, negotiating unit, agency, and salary grade, and for
small incumbency titles. A few titles had relatively low response rates, but
these titles did not fit any pattern that could be used as the basis for
targeting titles in the main data collection survey. As a result of these
findings, we decided to rely exclusively on mailed distribution and to track
response rates by title in the main survey.
As explained in the body of the report, the pilot survey also established
the reliability and validity of the Job Content Questionnaire. In general, we
found that the questionnaire appeared valid to employees. Items predicted pay
as one would expect. It is important to note that the questionnaire has a
seventh-grade reading level and, therefore, does not measure ability to read
instead of job content. Moreover, questionnaire items group conceptually into
factors similar to those found in other job evaluation systems.
A substudy comparing supervisor and incumbent responses on a subset of
questions was included to assess the validity of using incumbents as infor-
mants about their jobs. The logic underlying this analysis was that super-
visor ratings, which are frequently used for job analysis, are regarded as a
standard of accuracy. We found substantial agreement between supervisors and
incumbents, supporting our selection of incumbents as sources of job
content data.
A final objective of the pilot survey was to simplify and improve the Job
Content Questionnaire so that it would be easier for employees to fill out. A
factor analysis of the questionnaire items was performed. As a result, several
items were deleted and a few were added. In addition, many questions were
re-written to remove ambiguities, to improve format and layout, and to make all
questions closed-ended. The Job Content Questionnaire used in the main survey
represents a more efficient and simplified document.
The main data collection occurred between November 30, 1984, and March 4,
1985. It involved sampling, printing, distributing, following-up, and
preparing the data for analysis. The New York State Civil Service Department
drew a systematic sample with a random start for each job title. A
subcontractor printed and mailed 36,812 questionnaires to agency liaisons, who
forwarded them to employees. Questionnaires were returned directly to the
Center for Women in Government, where they were logged in and checked. The
data were entered onto computer tape and verified by a private company, and the
Center checked the data further for accuracy.
A major concern was to obtain high response rates. Efforts to increase
the quantity of responses included extensive advance publicity of the study,
sending a stamped return envelope to those who had less access to free
interagency mails, mailing two follow-up letters, and mailing replacement
copies when the originals were lost. We also made available a toll-free tele-
phone number to respondents and agency liaisons in order to answer questions
and solve any distribution problems. As a result of these efforts, a total of
27,394 completed questionnaires were returned, providing an overall response
rate of over 73 percent. The response rate for individual titles was adequate
in all but 43 titles, which were deleted from the analysis. After verifica-
tion of the accuracy of the data entry processing and of the fact that the
responses fell within established parameters, 25,852 individual cases remained
for use in the final analysis, providing information on 2,582 job titles.
Several procedures were used to prepare the data for the remaining
analysis. Some questionnaire items were recoded and the population of each
title was adjusted to reflect changes in title populations between the time of
sample selection and the survey distribution intake. The individual incumbent
level data were averaged and title scores were calculated. Indices were
created for the complexity of writing, reading, and mental demands. A factor
analysis of 80 items and three indices yielded a 14-factor solution. The 14
factors obtained were:
• Management/supervision;
• Unfavorable working conditions;
• Contact with difficult clients;
• Communication with public;
• Education required;
• Data entry;
• Group facilitation;
• Computer programming;
• Fiscal responsibility;
• Stress;
• Autonomy;
• Consequence of error;
• Time effort; and
• Mental demands.
Factor-based scores were calculated. These scores were used in a set of
regression analyses that produced the pay policy equations for the New York
State work force. Finally, for the regression analyses we delimited the sample
of jobs to all jobs with four or more incumbents, because sex and race/ethnic
composition of jobs is more stable across time with larger incumbency titles.
Excluding the small incumbency titles made little difference in the final
regression equations.
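A factor-based scale of the kind described is conventionally the average of the standardized items that load most heavily on a factor. A minimal sketch under that assumption (the item matrix and loading pattern are invented):

```python
import numpy as np

# Title-level scores on three questionnaire items (rows are job titles).
items = np.array([
    [1, 3, 2], [2, 1, 3], [4, 5, 1], [5, 2, 4], [3, 4, 5], [2, 3, 3],
], dtype=float)

# Suppose the factor analysis loaded items 0 and 2 on one factor.
on_factor = np.array([True, False, True])

# Standardize each item, then average the items belonging to the factor.
z = (items - items.mean(axis=0)) / items.std(axis=0)
scale = z[:, on_factor].mean(axis=1)
print(scale.shape)  # one factor-based score per title -> (6,)
```

Standardizing first keeps an item with a wide response range from dominating the scale.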
Regression analysis is the statistical procedure used in policy-capturing
job evaluation to select the set of job content factors and the weights
associated with the factors that are most related to the current implicit pay
policy of New York State. The resulting regression equation is essentially a
compensation model describing the job content factors of different jobs and the
relationship of these factors to salaries. Three regression models were
specified:
• a pay policy line based on all jobs;
• a pay policy line based on all jobs and adjusted to statistically remove
  the effect of female or minority composition of jobs; and
• a pay policy line based on white male jobs (defined to be those jobs
  filled 90 percent by males and 90 percent by nonminorities).
The first equation is included as a baseline against which the other two models
can be assessed. It is inappropriate for use as a basis for equity adjust-
ments because the overall pay policy line incorporates any undervaluation in
pay that affects female-dominated and disproportionately minority jobs. The
second and third lines represent two different approaches toward adjusting for
the impact of sex and race/ethnic bias.
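The mechanics of the second, adjusted model can be sketched with synthetic data: fit an ordinary least-squares line that includes the composition of each job, then predict with the composition effect removed. The factor names, weights, and sample size below are invented; the study's actual 15-variable equations appear in Chapter VII:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400  # synthetic "job titles"

# Invented job content factors and sex composition of each title.
education = rng.uniform(0, 10, n)
supervision = rng.uniform(0, 10, n)
pct_female = rng.uniform(0, 1, n)
# Simulate a biased pay policy: content rewarded, percent-female penalized.
grade = (1.2 * education + 0.8 * supervision - 2.0 * pct_female
         + rng.normal(0, 0.5, n))

# Overall pay policy line: regress grade on content AND composition.
X = np.column_stack([np.ones(n), education, supervision, pct_female])
coefs, *_ = np.linalg.lstsq(X, grade, rcond=None)

# Adjusted pay policy line: keep the content weights, zero the composition
# coefficient, and predict the grade each title would get on content alone.
adjusted = coefs.copy()
adjusted[3] = 0.0
equitable_grade = X @ adjusted
undervaluation = equitable_grade - X @ coefs  # grows with percent female

print(float(coefs[3]))  # recovered composition penalty, close to -2.0
```

The gap between the equitable prediction and the fitted overall line is the per-title undervaluation estimate.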
Twenty-seven variables were entered into regression equations predicting
salary grade. Of the 27 variables, 15 were found to be significant in the
overall and adjusted pay policy equations. These 15 variables that were
retained account for nearly 90 percent of the variance in salary grade across
jobs. Ten variables were found to be significant in the white male policy
equations.
Our results demonstrate that, for all pay policy lines, education,
experience, management, supervision, and writing are highly compensated factors
in New York State government employment. Moreover, several factors are not
valued or are negatively valued. These include unfavorable working conditions,
stress, group facilitation, communication with the public, data entry, and
autonomy. While the pay equity estimates are based on the obtained regression
equations, New York State could explicitly choose to change any of the
regression weights in order to value these job factors differently. For some
factors, like working conditions, changing the current regression weight from a
negative to a positive value would affect disproportionately minority as well
as predominantly white jobs. Other changes in regression weights (e.g. data
entry) would have an impact only on disproportionately female jobs.
Using the adjusted pay policy line, with all other job factors held
constant, jobs done entirely by women are on average two salary grades lower
than jobs of equal value to the state done entirely by men. Jobs done by less
than 100 percent women on average were undervalued less than two salary grades.
In New York State an increase of one salary grade is an increase of approxi-
mately five percent in salary.
In order to calculate accurate predicted salary grades and accurate con-
fidence intervals for female-dominated, disproportionately minority, and
direct-line-of-promotion titles, we used a statistical procedure known as
jackknifing. The estimated pay equity adjustments average 1.6 salary grades
for the adjusted pay policy line and approximately 2.9 salary grades for the
white male pay policy line. There is a strong tendency for job titles in the
lower salary grades to be more undervalued than job titles in higher salary
grades. This is the case no matter which of the pay policy lines is used. The
salary grades of the job titles we examined ranged from grade 1 to grade 15.
Particularly among the clerical and health care system job titles it was common
to find titles in grade levels 6 and below to be undervalued by four or five
salary grades.
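The report names the jackknife without spelling out the variant; the sketch below shows the standard delete-one-group form, in which replicate estimates yield a standard error and a confidence interval. The data and the number of groups are invented:

```python
import numpy as np

rng = np.random.default_rng(1)
estimates = rng.normal(5.0, 1.0, 200)   # stand-in for per-title estimates
groups = np.array_split(estimates, 10)  # 10 jackknife replicate groups
g = len(groups)

theta_full = estimates.mean()
# Delete-one-group replicates: recompute the estimate leaving each group out.
replicates = np.array([
    np.concatenate(groups[:i] + groups[i + 1:]).mean() for i in range(g)
])
# Grouped jackknife standard error and a 95 percent confidence interval.
se = np.sqrt((g - 1) / g * ((replicates - replicates.mean()) ** 2).sum())
ci = (theta_full - 1.96 * se, theta_full + 1.96 * se)
print(ci[0] < theta_full < ci[1])  # -> True
```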
We found no significant overall effect for the percent minority in a
title. However, job titles which are both disproportionately female and
disproportionately minority on average are undervalued by approximately one-half
of a salary grade more than the average. For instance, as indicated above, the
average undervaluation using the adjusted pay policy line is 1.6 salary grades.
Among titles that are both disproportionately female and disproportionately
minority this figure is 2.1 salary grades. Using the white male pay policy
line the average undervaluation is 2.9 salary grades. However, for titles
which are both disproportionately female and disproportionately minority, the
figure is 3.3 salary grades.
Out of a total of 185 job titles in the CSEA bargaining unit that are more
than 67.2 percent female and 30.8 percent minority or are jobs in the direct
line of promotion for those female-dominated and disproportionately minority
jobs, we found 142 to be undervalued by more than a half a salary grade using
the adjusted pay policy line and 163 were undervalued using the white male pay
policy line. The number of employees in job titles undervalued by more than
one-half a salary grade is over 55,000 using the adjusted line and over 65,000
using the white male line.
LIST OF TABLES

                                                                        Page

Table 2.1
    Female-dominated and Disproportionately Minority Job Titles
    for Which Undervaluation Will Be Assessed .......................... 32

Table 2.2
    Direct-Line-of-Promotion Job Titles for Which Undervaluation
    Were Assessed ...................................................... 34

Table 3.1
    Job Analysis Systems Used in the Development of the Job Content
    Questionnaire

Table 3.2
    Contents of Job Content Category List Developed from Review of
    Job Evaluation Systems

Table 3.3
    Job Titles and Agency Locations Sampled for Preliminary Field-
    Testing: First Wave ................................................ 49

Table 3.4
    Job Titles Sampled for Preliminary Field-Testing: Second Wave ...... 51

Table 4.1
    Job Titles Included in Pilot Sample ................................ 58

Table 4.2
    Sampling Plan for the Pilot Study .................................. 62

Table 4.3
    Actual Allocation of Respondent Sample by Distribution Method ...... 63

Table 4.4
    Response Rates by Sex and Race/Ethnicity ........................... 72

Table 4.5
    Summary of Response Rates for Pilot Survey ......................... 75

Table 4.6
    Consistency of Incumbent Response: Correlations Between Items
    With Similar Content ............................................... 80

Table 4.7
    Comparison of Factors in the FES and NYS Study Systems ............. 84

Table 4.8
    Average Scores for Incumbents on Selected Items .................... 85

Table 4.9
    Example Correlations with Salary Grade ............................. 86

Table 5.1
    Summary of Adjustments in Job Title and Incumbent Sample,
    Main Data Collection Survey ....................................... 109

Table 6.1
    Variables Entered into the Factor Analysis ........................ 122

Table 6.2
    Items Included in Each Scale, Together with Factor Loadings
    From the 14 Factor Solutions ...................................... 125

Table 6.3
    Reliability Coefficients for Factor-based Scales .................. 129

Table 7.1
    Variables Used in the Regression Analyses ......................... 148

Table 7.2
    Unstandardized Coefficients of the Determinants of Salary
    Grade for Various Models for Job Titles with at least Four
    Incumbents ........................................................ 152

Table 7.3
    Standardized Coefficients of the Determinants of Salary Grade
    for Various Models for Job Titles with at least Four Incumbents ... 153

Table 7.4
    Comparison of Data for All to Data for White Male Jobs ............ 158

Table 8.1
    Predicted Salary Grades and Confidence Intervals for Female-
    Dominated and Disproportionately Minority Titles - Overall Pay
    Policy Line

Table 8.2
    Predicted Salary Grades and Confidence Intervals for Female-
    Dominated and Disproportionately Minority Titles - Adjusted
    Pay Policy Line

Table 8.3
    Predicted Salary Grades and Confidence Intervals for Female-
    Dominated and Significantly Minority Titles - White Male Line
    (10 Variables) .................................................... 191

Table 8.4
    Predicted Salary Grades and Confidence Intervals for Direct-
    Line-of-Promotion Titles - Overall Pay Policy Line ................ 199

Table 8.5
    Predicted Salary Grades and Confidence Intervals for Direct-
    Line-of-Promotion Titles - Adjusted Pay Policy Line ............... 200

Table 8.6
    Predicted Salary Grades and Confidence Intervals for Direct-
    Line-of-Promotion Titles - White Male Line (10 Variables) ......... 201

LIST OF FIGURES

                                                                        Page

Figure 5.1
    Cumulative Percentage of Responses Received over 13 Weeks ......... 104

Figure 7.1
    Hypothetical Job Title Values for Skill and Salary Grade
    and Line of Best Fit for These Points

Figure 7.2
    Scatterplot of Monthly Salaries by Job Worth Points, for
    59 Jobs Held Mainly by Men and 62 Jobs Held Mainly by Women
    in the Washington State Public Service ............................ 140
CHAPTER I
INTRODUCTION
New York State long has been at the forefront of efforts to enact and
implement innovative policies to make the labor market more just and equitable
by improving the terms and conditions of employment. This longstanding
commitment was demonstrated in 1982 by the Civil Service Employees Association
(CSEA) and the State, through its Governor's Office of Employee Relations
(GOER), negotiating funds to carry out a pay equity study of three bargaining
units covering approximately 100,000 State employees. In 1983, the Center for
Women in Government was asked by CSEA and GOER to examine the effects of sex
and race/ethnicity of typical job incumbents on the setting of salaries.
In this report, we present the results of the New York State Comparable
Pay Study. The goal of the study is to assess whether the wages paid for
jobs traditionally held by women and minorities accurately reflect their
productive value to New York State or are depressed because the work has been
and continues to be performed by women and minorities.
This introductory chapter sets the stage for the study results.
Specifically, we describe the distribution of women and minorities in the New
York State government labor force. We then discuss job evaluation methodolo-
gies, and present a set of criteria for designing a job evaluation study
consistent with principles of pay equity. The decision to use policy-capturing
job evaluation is presented, as is an overview of the study design. The
chapter concludes with an overview of the contents of the report.
1 This report is a revised version of a full technical report submitted to
the state and the CSEA on October 1, 1985.
THE NEW YORK STATE LABOR FORCE
The New York State Civil Service Department, because of its sensitivity
to the state's equal employment opportunity obligations, has addressed the
issue of equal pay for equal work and has revised examinations to make them
more job related. However, its classification and compensation system,
established in 1937 and last revised in the mid-1950s, has never been assessed
to determine whether assumptions about the value of jobs and the assignment of
job titles to salary grades have been distorted by the sex or race/ethnicity
of the typical job incumbent.
Prior to this study, the Center for Women in Government compiled
statistics that demonstrate significant concentration of women and minorities
in job titles at the lower grade levels of the State's wage structure. In
1981, women constituted over 74 percent of all employees in salary grades 12
and below, in a 38-grade system. Examining only women employees, fully 75
percent of all women were employed below grade 12. Similarly, although
minorities constituted only 22 percent of the state work force, they made up
39 percent of those in grade levels 12 and below. Reviewing only minority
employees, over three-quarters are employed below grade 12.
Reviewing income statistics, we found that women and minority men earned
less than non-minority men. While 57 percent of non-minority male employees
earned over $16,000 per year, only 30 percent of non-minority females, 21
percent of minority females, and 35 percent of minority males earned over
$16,000 per year (McLaughlin, 1984).
Moreover, examining the sex composition of all competitive job titles
with four or more incumbents, the Center found that, in 1979, over
three-quarters of all state titles were either dominated by males or females.
Of these, 65.3 percent were male-dominated and only 13.3 percent were
female-dominated.2 Moreover, looking at the distribution of these titles
across the wage structure, we found that 68 percent of the job titles below
grade 7 were female-dominated, while only 3.1 percent of the job titles in
grades 24 to 30 were female-dominated, and not even one job title in grades 31
to 38 was female-dominated. By contrast, slightly over 10 percent of job
titles in grades 3 to 7 were male-dominated, while fully 80 percent of titles
in grades 24 to 30 and over 90 percent of titles in grades 31 to 38 were
male-dominated. Mixed job titles were distributed more evenly throughout
grade levels (Center for Women in Government, 1982: 84-91).
An examination of the race/ethnicity composition of job titles with four
or more incumbents revealed a similar pattern. Disproportionately Black and
Hispanic positions constituted approximately 14 percent of job titles below
grade 7, slightly over 11.5 percent of titles in grades 8 to 12, less than one
percent of titles in grades 13 to 30, and no titles in grades 31 to 38.
Regardless of grade level, the overwhelming majority of titles are filled by
white incumbents: over 80 percent of titles in grades 3 to 12, and well over
90 percent of titles in grades 13 to 38 (Ibid.).
A career ladder study, completed by the Center in 1979, strongly
suggested that the wage gap in state government employment was partly a func-
tion of the fact that almost all job titles and career ladders were dominated
either by males or females (Peterson-Hardt and Perlman, 1979). Furthermore,
2 For the purpose of this early analysis, a male-dominated title is one in
which 70 percent or more of incumbents are men and a female-dominated title is
one in which 70 percent or more of incumbents are female. We defined
disproportionately Black and Hispanic titles as ones in which 40 percent or
more of incumbents are Black and Hispanic. These definitions differ from
those used in the study, primarily because the analysis was completed prior to
the decision by CSEA and GOER about what constitutes a female-dominated and
disproportionately minority title.
it found that female-dominated ladders consistently began at lower pay grades
and peaked at lower pay grades.
PAY EQUITY AND JOB EVALUATION: BACKGROUND
Occupational segregation by sex and race/ethnicity can contribute to the
wage gap in one of two ways. First, for a variety of reasons, women and
minorities may be systematically channeled into low-worth jobs; that is, jobs
requiring less skill, effort, and responsibility than those held mainly by white
males. We think of this source of wage differentials as a function of
productivity-related job content differences. Insofar as occupational
segregation results from discriminatory practices, past or present, this is an
affirmative action issue, but is not a pay equity issue. Affirmative action
policies work to eliminate this source of the wage gap through incentives and
sanctions that increase the mobility of women and minorities into higher
paying, more productive jobs.
Second, women and minorities may be segregated in jobs that require
equivalent amounts of skill, effort, and responsibility as jobs held mainly by
white males but that are paid less. Insofar as these jobs are systematically
undervalued because the work is performed predominately by women and
minorities, this type of wage discrimination is the focus of pay equity
efforts. Pay equity, then, is concerned only with eliminating wage
differences associated with the sex or race composition of jobs that cannot be
accounted for by productivity-related job content characteristics.
The policy goal of equal pay for work of comparable worth broadens the
earlier policy of equal pay for equal work which prohibited wage discrimina-
tion when women and men were doing essentially the same or similar
work. A comparable worth or pay equity policy requires, instead, that
dissimilar work of functionally equivalent worth to the employer should be
paid the same wages. Conceptually, pay equity involves assuring that work
done primarily by women and minorities is not systematically undervalued
because the work has been and continues to be done primarily by women and
minorities. Simply stated, establishing pay equity involves correcting the
practice of paying women and minorities less than white men for work that
requires equivalent skills, effort, and responsibility under similar working
conditions.
Pay equity studies are designed to determine whether salaries accurately
reflect an explicit and consistently applied standard of job worth regardless
of the sex or race/ethnicity of a typical job incumbent. These studies
require a methodology through which:
• the relative worth of different jobs can be assessed,
• undervalued job titles can be identified, and
• estimates regarding the extent of undervaluation can be calculated.
To accomplish these objectives, most pay equity studies have relied on job
evaluation techniques, which historically have formed the basis of most formal
classification systems and salary-setting practices in the public and private
sectors. Typically, job evaluation involves three major components:
description of job characteristics, evaluation of job characteristics, and
salary-setting.
Job description involves gathering accurate information about the skills,
responsibilities, tasks, and conditions of work entailed in each job. This
information makes it possible to organize many individual positions into job
classes or titles. As a final step, job specifications are prepared which
summarize job content in terms of key characteristics. They provide the link
between description and evaluation.
Evaluation of characteristics involves assigning relative worth to job
content in order to rank jobs in relation to one another. Most systems
include some systematic procedure for developing and assigning weights or
relative value to the job content characteristics. The highest weights would
be assigned to those characteristics that are regarded as most important to
the employer. In the most precise systems, job value is defined in terms of
points. Once an employer has selected an evaluation system, the system is
used to analyze each title to obtain a score for the title. The scores become
the basis for directly translating a set of job characteristics into an
appropriate ranking.
Salary-setting involves the conversion of job worth points into pay rates
for specific jobs. Commonly, this is accomplished through a pay policy line.
A pay policy line establishes graphically the statistical relationship between
job worth points and a measure of existing pay rates for a sample of job
titles. The line of best fit between the points on this graph is then
typically determined using multiple regression. The pay for each remaining
job is determined by what pay rate is appropriate on the pay policy line,
given the job's particular number of job worth points.
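The pay policy line described above can be sketched with ordinary least squares. In this minimal Python illustration, the job worth points and salary grades are invented for demonstration; they are not figures from the study:

```python
import numpy as np

# Hypothetical (job worth points, salary grade) pairs for a sample of titles.
points = np.array([100.0, 200.0, 300.0, 400.0, 500.0])
grade = np.array([5.0, 9.0, 13.0, 17.0, 21.0])

# Line of best fit: grade = intercept + slope * points.
X = np.column_stack([np.ones_like(points), points])
intercept, slope = np.linalg.lstsq(X, grade, rcond=None)[0]

def predicted_grade(job_points):
    # Pay for any remaining job is read off the fitted line.
    return intercept + slope * job_points
```

With real data the scatter would not lie exactly on the line; the regression supplies the best-fitting compromise.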
The process of job evaluation need not lead to sex-based and
race/ethnicity-based wage discrimination. Job evaluation is nothing more than
a set of techniques for making explicit the job content values of the
enterprise in relation to what features of jobs should be compensated. It
provides a procedure for systematically ordering jobs into a hierarchy based
on the job content values articulated. However, given historical assumptions
about the value of "women's work," there is reason to suspect that sex is an
implicit compensable factor in the job evaluation systems of many organiza-
tions. By this we mean that jobs filled by higher proportions of females tend
to pay less than jobs requiring equal levels of skill and responsibility with
lower proportions of female incumbents. This makes the use of conventional
job evaluation problematic in pay equity research.
To avoid potential sex and race/ethnicity bias, it is necessary to modify
conventional job evaluation. Specifically, comparable worth job evaluation
requires that we apply to all jobs consistently a single bias-free point
factor system (Remick, 1984). The following design criteria need to be met at
each step of job evaluation.3
(1) Description of characteristics. All jobs should be
described fully and consistently and not differentially
by the sex or race/ethnicity of the typical incumbent.
This means that all jobs must be viewed in terms of the
same possible range of job content characteristics,
including those associated with female-dominated or dis-
proportionately minority work. The information must be
collected in a way that ensures that variations are not
a function of incumbent differences in providing infor-
mation.
(2) Evaluation of characteristics. All jobs should be evalu-
ated and assigned points according to a uniform set of
factors and weights. Factors should include charac-
teristics associated with all types of jobs, including
those often associated with jobs which are dispropor-
tionately minority, although it may be that some of
these characteristics are not valued by the employer,
regardless of the sex or race/ethnicity of the typical
incumbent.
(3) Salary-setting. Wages should be assigned according to
one pay policy line established on the basis of a graph
including an agreed-upon set of jobs in the organization.
This line must be adjusted using an adjustment formula,
such as those recommended in Treiman and Hartmann (1981) and
Treiman, Hartmann and Roos (1984).
3 See Steinberg (1984) and Steinberg and Haignere (1985) for a discussion.
POLICY-CAPTURING JOB EVALUATION
One of the most important design decisions in job evaluation methodology
involves the development of factors and weights. Two basic approaches to
developing and applying factors and weights exist: an a priori approach and a
policy-capturing approach.
A priori approaches begin with a predetermined system of factors and
weights to evaluate jobs within a specific organization. These weights may
come from a predefined consultant's package or they may be derived from a
policy-making committee's decisions about what should be valued for the
purpose of compensation.
Typically, a priori systems define work content in terms of broad
categories such as skill, effort, responsibility, and working conditions, even
before specific jobs in an organization are examined. Each category or factor
is further subdivided and, within each subcomponent, levels are created with
points assigned to each level. The application of a priori systems usually
involves evaluation committees, which review a job description or job specifi-
cation and arrive at a consensus decision about what overall score a job title
should receive. Descriptions are sometimes produced from information
collected through desk audits or group interviews. Or, they are sometimes
derived from responses to an employee questionnaire asking such broad
questions as: "Describe the most significant tasks associated with your job."
The second approach to job evaluation is policy-capturing. This involves
developing a compensation model in which specific job content features such as
the number of persons supervised, the amount of prior experience in a related
job, the level of analytic reasoning required, and the level of education
needed to perform the job are divided into factors and then these factors are
weighted in such a way that they statistically "predict" the current wage
structure. In other words, the weights for each compensable job content
characteristic are derived from a statistical model which makes explicit what
is currently implicitly valued for compensation purposes within an organiza-
tion.
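As a sketch of the policy-capturing step, the Python fragment below regresses current salary grades on a few job content scores, so the fitted coefficients are the implicit weights. All numbers and variable names are invented for illustration, not drawn from the study's data:

```python
import numpy as np

# Hypothetical title-level data: averaged scores for persons supervised,
# years of required prior experience, and analytic reasoning level.
content = np.array([
    [0.0, 1.0, 1.0],
    [2.0, 2.0, 1.0],
    [5.0, 3.0, 2.0],
    [10.0, 4.0, 3.0],
    [20.0, 6.0, 4.0],
])
current_grade = np.array([6.0, 8.0, 11.5, 16.0, 24.0])

# Policy-capturing: regress current pay on job content; the coefficients
# make explicit what the employer currently pays for implicitly.
X = np.column_stack([np.ones(len(content)), content])
weights, *_ = np.linalg.lstsq(X, current_grade, rcond=None)
```

Here the first element of `weights` is the intercept and the rest are the captured per-factor weights.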
Policy-capturing or a priori job evaluation systems can vary from
employer to employer. For example, a public jurisdiction may value, among
other things, supervision, responsibility for budgetary decisions, and writing
skills. By contrast, a manufacturing firm may value supervision, cost-related
managerial decisions, production monitoring, and manual dexterity, ignoring
writing skills altogether. Compensation models for these two organizations
would differ because the range of job titles and job content varies, what is
considered valuable in job content varies, and their current wage structures
vary.
The Center for Women in Government designed the New York State Study in
terms of a policy-capturing approach for two reasons. First, in the early
stages of developing the proposal we worked with GOER, CSEA, the Civil Service
Department, the Center's Board of Directors, and the Center's Research
Advisory Committee to select the job evaluation methodology best suited to New
York. We reviewed several a priori systems, offering New York a set of
predetermined factors and weights. These alternatives were rejected by
policy-makers and constituent group leaders. Instead, there was a strong
preference for the policy-capturing approach. It was believed that this
approach was best suited to a pay equity study because it is based on what New
York State implicitly does value and not on what New York State should value.
Once we determined what New York State actually values in job content, we
would base the estimates of undervaluation on the existing, and now explicit,
New York State compensation policy.
Consistent with the state policy-makers’ views, the Center regards
policy-capturing as an appropriate job evaluation approach for assessing pay
equity. A comparable worth pay policy does not tell an employer what job
content should be valued. It requires only that whatever an employer values
is valued consistently and systematically across all job titles and not
arbitrarily and implicitly as a function of the sex or race/ethnicity of the
typical incumbent of a job title. As the National Academy of Sciences
Committee on Occupational Classification concluded:
Paying jobs according to their worth requires only
that whatever characteristics of jobs are regarded
as worthy of compensation by an employer should be
equally so regarded irrespective of the sex, race,
or ethnicity of job incumbents (Treiman and Hartmann,
1981: 70).
COMPARABLE WORTH JOB EVALUATION: OVERVIEW OF DESIGN
The New York State study uses a policy-capturing approach, which relies
heavily on statistical procedures for designing the data collection, for
analyzing the data to establish factor weights, and for estimating the
appropriate salary for female-dominated and disproportionately minority jobs.
To meet the three methodological criteria specified above, we maximized
consistency and minimized sex and race/ethnic bias in the way jobs are
described and evaluated and in the procedures for establishing wages. We
further adjusted the final set of factors and weights to remove the possible
impact of wage discrimination in the jurisdiction's current pay policy.
To meet the first criterion of describing all jobs fully and consistently
and not differentially by the sex or race/ethnicity of the typical incumbent,
we developed a questionnaire customized to the range of job content
characteristics found in New York State jobs. To design the questionnaire, we
examined over 18 job analysis or job evaluation approaches. We reviewed these
plans so as to include in our survey instrument every category of job content
characteristic that previously had been found to be compensable. We also
included additional potentially compensable characteristics which were not a
part of other systems but which might be relevant to New York State pay
policy. We wrote the questionnaire at a seventh grade readability level. For
each question, employees had to choose one from a number of possible closed-
ended responses, to minimize the impact of differential abilities to express
ideas in writing and to eliminate any sex and race/ethnic differences in word
usage or comprehension of job content factors. The development of the Job
Content Questionnaire is described at greater length in Chapter III.
The second criterion is that all jobs be evaluated and assigned points
according to a consistently applied and uniform set of factors and weights.
In order to meet this criterion, we statistically derived one set of factors
and weights by analyzing the data collected from our employee questionnaires
in relation to current New York State salaries. To do this, we first averaged
incumbent responses for each job title in order to obtain a single composite
job description for each job. Next, we statistically sorted the data from the
questionnaire using factor analytic statistical techniques to group together
items of similar job content, like questions on supervision, data entry, group
facilitation, and so on. Weights for job content factors were assigned in
relation to the current wage structure using multiple regression analysis.
The resulting compensation model was applied to each female-dominated and
disproportionately minority job title to obtain a predicted salary grade,
indicating what the wages for these jobs would be in the absence of discrimi-
nation.
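The two steps just described, averaging incumbent responses into one composite profile per title and then applying the compensation model, can be sketched as follows. Titles, item scores, and weights are all hypothetical:

```python
import numpy as np

# Incumbent responses to two questionnaire items, grouped by job title.
responses = {
    "Keyboard Specialist": [[2.0, 1.0], [4.0, 3.0], [3.0, 2.0]],
    "Highway Worker": [[1.0, 5.0], [1.0, 3.0]],
}

# Step 1: average responses within each title to get a composite profile.
profiles = {t: np.mean(r, axis=0) for t, r in responses.items()}

# Step 2: apply a compensation model (weights as if estimated by
# regression on current salaries) to obtain a predicted salary grade.
intercept, item_weights = 3.0, np.array([1.5, 0.5])
predicted = {t: intercept + p @ item_weights for t, p in profiles.items()}
```

Averaging in step 1 is what damps the effect of any one incumbent's over- or under-reporting.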
These policy-capturing procedures rely heavily on statistical analysis
performed on computers. The use of standard statistical procedures and
computer analysis ensures that the set of factors and weights are applied
consistently, eliminating the possibility that consultants or committees
impose subjective stereotypes in their selection and application of factors
and weights in relation to particular female-dominated and disproportionately
minority jobs.
The Center for Women in Government is assisting the state in meeting the
salary-setting criterion that appropriate wages should be assigned on the
basis of one adjusted pay policy line. We computed three separate pay policy
lines. The first pay policy line is based on the compensation model for all
New York State job titles. This cannot be used as a basis for pay equity
adjustments, however, because it includes the salaries of female-dominated and
disproportionately minority jobs which may be undervalued due to
discrimination.
The remaining two estimation procedures, in effect, remove from the pay
policy line the potential distortion of discrimination. The second estimation
procedure involves adjusting the overall compensation model by statistically
removing the effects of percentages of female and minority incumbents in job
titles from the job content characteristics predicting pay. This approach
removes from the compensation model that part of the variation in New York
State's pay policy that can only be explained by the proportion of women and
minority incumbents in job titles.
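One way to carry out this second adjustment is to include the title's percent female (and, analogously, percent minority) as a predictor when fitting the model, and then set that term aside when predicting pay, so that only the job content portion of the captured policy remains. This is a hedged sketch with invented numbers, not the study's actual adjustment formula:

```python
import numpy as np

# Hypothetical title-level data: one job content score, proportion of
# female incumbents, and current salary grade.
content = np.array([2.0, 4.0, 6.0, 8.0])
pct_female = np.array([0.9, 0.1, 0.8, 0.2])
grade = np.array([5.2, 11.8, 15.4, 21.6])

# Fit the overall model including composition as a predictor.
X = np.column_stack([np.ones(4), content, pct_female])
b0, b_content, b_female = np.linalg.lstsq(X, grade, rcond=None)[0]

def adjusted_grade(c):
    # Drop the composition term: predict from job content alone.
    return b0 + b_content * c
```

A negative fitted `b_female` is exactly the kind of composition effect the adjustment removes.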
The third procedure involves using the white male pay policy line as the
standard for determining the job content value of all titles. The validity of
this procedure is based on the assumption that the salaries assigned to jobs
held primarily by white males are not affected by sex or race/ethnic discrimi-
nation.
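The third procedure can likewise be sketched by fitting the pay policy line only on titles held predominantly by white men and then applying that line to every title; shortfalls below the line indicate potential undervaluation. All figures below are invented for illustration:

```python
import numpy as np

points = np.array([100.0, 200.0, 300.0, 150.0, 250.0])  # job worth points
grade = np.array([6.0, 10.0, 14.0, 6.5, 10.5])          # current grades
white_male_dominated = np.array([True, True, True, False, False])

# Fit the pay policy line on white-male-dominated titles only.
mask = white_male_dominated
X = np.column_stack([np.ones(mask.sum()), points[mask]])
intercept, slope = np.linalg.lstsq(X, grade[mask], rcond=None)[0]

# Apply that standard to all titles; positive gaps suggest undervaluation.
predicted = intercept + slope * points
undervaluation = predicted - grade
```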
These last two estimates provide measures of potential undervaluation in
female-dominated and disproportionately minority titles. Thus, the Center is
using an adjusted policy-capturing approach as the basis for pay equity
estimates.
OVERVIEW OF THE REPORT
This report is organized into seven remaining chapters. A chapter
providing a general overview of the methodology follows this introductory
chapter. It builds on the preceding overview of the study design by
describing the study population and sample, providing definitions for
female-dominated and disproportionately minority titles, and delineating the
general approach to data collection through the use of a customized
questionnaire administered through incumbent self-reports.
Chapters III and IV discuss the development of the Job Content Ques-
tionnaire and the pilot survey designed to test the validity and reliability
of the questionnaire as well as the feasibility of using different distribu-
tion methods in the main data collection survey.
Chapter V reports on the process of collecting the job content informa-
tion in the main data collection stage. Chapter VI reports on the results of
the preliminary data analysis and the examination of the job content factor
and items of the study. In addition, it treats the methodology and the
results of creating indices and factors out of the items contained in the Job
Content Questionnaire.
Chapter VII reports on the unadjusted average pay policy line and the two
pay policy models that are used to generate estimates of undervaluation for
female-dominated and disproportionately minority titles. Chapter VIII reports
the estimates of undervaluation for female-dominated titles, disproportionately
minority titles, and related direct-line-of-promotion titles where the entry-
level title is found to be undervalued.
CHAPTER II
GENERAL METHODOLOGY
New York State is the third largest public sector employer in the United
States with well over 175,000 employees in over 7,000 job titles. To under-
take a pay equity job evaluation study requires the collection of massive
amounts of accurate information about job content from a large sample of job
incumbents filling a representative range of job titles.
This chapter reports on the basic methodological decisions shaping key
features of the study design. It begins with a discussion of the use of an
incumbent self-administered questionnaire customized to New York State job
content as the data collection instrument. It continues with basic
definitions of the survey population and concludes with a description of the
general sampling frame.
THE COLLECTION OF JOB CONTENT INFORMATION
In traditional job evaluation, job content information is typically
collected using desk audits, group interviews of incumbents, questionnaires to
incumbents, questionnaires to supervisors, or some combination of the above
methodologies.
Given the volume of information that had to be collected in New York
State, desk audits and group interviews were ruled out. To do desk audits of
just ten job titles, observing only five positions within each title, would
take approximately 150 days of staff time. To collect information on over
2500 job titles would take 37,500 staff days! Desk audits are most frequently
used to review single jobs for reclassification. However, these are not
practical for system-wide analysis such as the one being undertaken here.
Group interviews in each job title would be less labor intensive but still
prohibitive if substantial numbers of titles were included. In addition,
group interviews raise sensitive issues as to which employees are selected to
participate, which geographic areas employees are drawn from, and biases that
the interviewer brings to the interview. We thus eliminated all of these
options.
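The staff-time figures above follow from simple arithmetic, which can be checked directly:

```python
# 150 staff days to audit 10 titles at 5 positions each implies
# 3 staff days per position audited.
days_per_audit = 150 / (10 * 5)

# Extending the same rate to 2,500 titles at 5 positions each
# yields the 37,500 staff days cited above.
total_days = 2500 * 5 * days_per_audit
```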
Based on our review of the research literature, we selected multiple
incumbent self-reports as the optimal mode of data collection for our
purposes. Our two primary criteria for this decision were:
• this approach was the way to get the most
  information at the lowest cost; and
• a number of authorities regard incumbents as
  the best source of information about jobs.
Incumbents operate as "multiple raters", representing a more diverse set
of agency and geographic settings than could be reached using any other source
of information or sampling procedure. In addition, we decided to average
incumbent responses within each job title to obtain a title profile. This has
the effect of minimizing the effect of any unique incumbent differences in
filling out questionnaires, including under-aggrandizement and over-aggran-
dizement.1 It also averages actual variations in job content of positions
within titles. Thus, what we are left with is a description of the average or
typical content of each job title.
Early on in the design of the study methodology, concern was expressed
that incumbents would aggrandize their jobs by exaggerating the duties
associated with them. As one way of minimizing that propensity, it was
proposed that supervisors be asked to review employee questionnaires. After
serious consideration, we rejected supervisor review for several reasons.
1 Under-aggrandizement involves a respondent reporting fewer skills and
less responsibility than are actually involved in her or his job title.
Accordingly, over-aggrandizement involves a respondent reporting more skills
and greater responsibility than are actually in her or his job.
First, direct supervisor review of incumbent questionnaires would violate the
confidentiality of responses. This would not only violate the State
University human subjects review requirement, but would jeopardize crucial
union support of the study.2
Second, we were doubtful about the validity of information received from
supervisors as a standard for judging the accuracy of incumbent responses.
Supervisors may well be motivated to aggrandize the jobs they supervise, as
much or more than incumbents are. Additionally, their distance from the
duties of the jobs they supervise may give them an inaccurate picture of the
jobs.
Fortunately, there have been studies specifically designed to investigate
the accuracy of incumbent responses to job questionnaires using supervisor
responses as a standard. These studies find that incumbents describe their
jobs as accurately as supervisors do. For instance, the findings of a study
done in the Air Force indicate that, "when compared to supervisors' estimates
there is no tendency for incumbents to exaggerate the number or difficulty of
the tasks they perform" (Madden et al., 1964:10). These researchers go on to
indicate that "supervisors may not know precisely what any subordinate does
task by task" (Ibid). They conclude that,
since there is no tendency for workers to exaggerate
the number or difficulty of tasks performed, the
current Air Force procedure of collecting job infor-
mation directly from incumbents seems preferable to
collection of job information from supervisors (Ibid).
2 As a matter of routine, all research projects conducted at the State
University of New York at Albany (SUNYA) must meet certain ethical standards
in research. Proposals are reviewed by the SUNYA Institutional Review Board.
One concern of the review process involves the protection of subjects from
participating in research that involves providing sensitive personal
information.
Another study comparing supervisor and incumbent responses completed by
researchers from the Universities of Pittsburgh and Minnesota concluded that
“Overall, the findings gave strong support for the ability of workers to rate
their jobs accurately, that is, consistently and with evidence of validity"
(Dawson and Weiss, 1973:188). Finally, the Interim Report of the National
Academy of Sciences Committee on Occupational Classification and Analysis
questioned the assumption that supervisor responses were even as accurate as
incumbent responses (Treiman, 1979:45).
As a matter of logic, there is no reason to suppose that supervisors
exaggerate about the job duties of subordinates less than those who hold the jobs.
Indeed, those involved in other state comparable pay studies report informally
that supervisor reviews consistently result in upgrading the described job
responsibilities. In the Iowa comparable worth study, supervisors tended to
review and modify incumbent responses in such a way as to generally increase
the difficulty of jobs. Similar findings were reported in Illinois and
Oregon. Moreover, there is some reason to suspect the possibility of sex
stereotyping through supervisor bias. A study of supervisor ratings of job
content noted that:
differences were found in the amount of variance of
ratings within jobs. Jobs such as mechanical engineer,
computer programmer, adding machine serviceman, welder,
and sheet metal worker were rated with less variability
than were dietician, librarian, secretary-stenographer,
and sewing machine operator. The jobs which were rated
more consistently seemed to require working more closely
with objects and hand tools and may have been easier to
assess because specific tasks may have been more easily
identified. The jobs which were less consistently rated
were more service-oriented, or people-oriented, with tasks
not as readily defined; they were also jobs in which women
predominated (Dawson and Weiss, 1973, Ibid.).
This research raises serious questions about the validity of supervisor
information about women's jobs in particular.
In summary, since there is no evidence that supervisors are either more
accurate or less likely to exaggerate in describing the duties of jobs they
supervise, we questioned whether supervisor review would lead to a better set
of job descriptions. In addition, given the possibility that greater sex bias
may be present in supervisor responses, we concluded that supervisors should
not be relied upon as a source of job content information in a comparable pay
study. As will be reported later, we conducted a pilot substudy in which we
compared responses between supervisors and incumbents on a subset of items in
the job content survey. We found no consistent differences between supervisor
and incumbent ratings on the same job.
Two additional problems with using supervisor reports on job content
relate to pragmatic and practical considerations. In the course of carrying
out the pilot study, we were informed by several personnel directors in our
pilot agencies that if incumbents knew that supervisors were being asked to
review their job questionnaires, some of them may either provide inaccurate
information perceived to be acceptable to their supervisors, or not even
respond to the questionnaire. This was because some incumbents, and the
unions representing them, may mistrust both the promised confidentiality of
their responses and the eventual uses to which the data were being put.
Moreover, it was impractical to collect separate supervisor information
given time and cost constraints. Indeed, the Center went through considerable
difficulties in locating a mere 200 supervisors during the pilot study.3 To
3 We first submitted a list of jobs for which we wanted the names of
supervisors to the personnel director at each of eight sites. In each case,
the personnel directors had no systematic information about supervisory
relationships, and they spent considerable time tracking down who supervised
whom. Most frequently, they located the supervisors by referring to time
(Footnote Continued)
have incorporated supervisor information on jobs consistently would have meant
locating supervisors in almost 2,800 job titles. Based on our pilot
experience, we saw no feasible way to sample supervisors in the main survey.
The next decision concerned the format for asking incumbents about job
content. As a starting point, we reviewed job description questionnaires used
by other researchers and consultants. These fall into two general categories:
open-ended and closed-ended questionnaires. Open-ended questionnaires can
lead to biased results for two reasons. First, the incumbents of many job
titles such as Launderer, Mental Hygiene Therapy Aide, and Laborer tend to
have less verbal skill than the incumbents of some other titles such as
Personnel Administrator, Fiscal Analyst, and Program Evaluator. Second,
linguistic research has noted the many ways in which words, particularly
verbs, used by women are weaker and less action oriented (Remick, 1979). In
addition, closed-ended questionnaires are less time-consuming to fill out and
considerably less expensive to process. For these reasons, we preferred a
closed-ended questionnaire.
Few job evaluation packages use closed-ended job content questionnaires.
The major exception is the Position Analysis Questionnaire (PAQ) (McCormick,
et al, 1969). The PAQ was originally developed for administration by job
evaluators of blue-collar jobs. It was used as one of two information collec-
tion instruments in the Michigan Comparable Worth study. However, not surpri-
singly, data obtained from the survey proved unusable, as it was too difficult
a survey instrument for incumbents to comprehend. Research has shown that the
(Footnote Continued)
sheets to see who signed them. Despite this effort, eight blank
questionnaires (or four percent) were returned to us because the employees
receiving them did not supervise anyone in the specified job title.
readability level of the PAQ is college graduate (Ash and Edgell, 1975). It
was thus inappropriate for use in a study relying on an incumbent self-
administered survey in the public sector.
A variant of the PAQ was developed for a pilot comparable pay study for
public employees in Pennsylvania, but it retained many of the limitations of
the PAQ (Pierson and Koziara, 1984). Thus, there were no closed-ended ques-
tionnaires available that we thought appropriate for use in obtaining job
content information. As a result, we developed a customized job content ques-
tionnaire for New York State government employment. The development of this
questionnaire is the topic of the next chapter.
POPULATION DEFINITIONS
Job Title as the Unit of Analysis
Comparable worth job evaluation requires that the unit of analysis is the
job title. Although we collected information from individual incumbents
filling positions within titles, we averaged responses by job title. The
focus of the research is on the job content characteristics of the title. For
instance, we are interested in the level of education or experience required
to fill the job title and not in the level of education or experience of
individuals in the title. To be sure, these should be highly correlated, but
our sole interest is in the job title requirements.4 Similarly, comparable
worth research is less concerned with the unique job content features of
4 If the wage level were a function of incumbent characteristics, and not job
title characteristics, then we would indeed be interested in collecting
information on compensable incumbent characteristics. However, New York State
compensation policy is built on job characteristics, although seniority
differences are incorporated into salaries within grade levels.
positions within a job title than with the job content common to all positions
grouped together into a job title. This created some methodological
complexities, most notably with respect to the sampling frame within the job
title, which is discussed below.
Job Title Population: A Definition
The New York State Civil Service system currently has over 7,350 job
titles, falling for the most part within six bargaining units and a management
confidential group. For the purposes of this research, we specified the study
population to include all the classified titles in the New York State Civil
Service System. However, we made the following exclusions in the study
population:
• titles for which salaries are not set by the
  Civil Service system (N.S. for non-statutory) or
  where salaries are set by law (O.S. for other-statute);
• classified titles with fewer than four incumbents,
  except those designated Management/Confidential;5
• State University faculty and professionals;
• titles located only in the following eight so-called
  quasi-agencies: Bridge Authority, Commission on Investi-
  gation, Energy Research and Development Authority,
  State Police Law Enforcement titles, Housing Finance Agency,
  N. E. Queens Nature and Historic Preservation Commission,
  Teachers' Retirement System, and the Thruway Authority.
In addition to these job title definitional restrictions, the employee
population was specified to exclude the following:
5 This exclusion criterion was later expanded to include positions
designated managerial/confidential (M/C). M/C titles with fewer than four
incumbents were also dropped, because the regression analysis indicated that
eliminating these small-incumbency titles did not change the results.
Moreover, doing so avoided giving the same weight to a single-incumbency job
title as to a larger job title where the responses of incumbents were
averaged.
- 26 =
e incumbents of positions earmarked to be reviewed
when these incumbents leave their positions;
@ incumbénts working part-time;
@ incumbents who, subsequent to the sample selection,
had moved to a non+sampled job title;
e incumbents with less than one-month tenure in the
position; and
® incumbents who were retired, deceased, laid off, or
otherwise not in the position at the time of the
data collection survey.
Thése exclusions reduced the number of job titles represented in the study to
2,898,
Female-Dominated and Disproportionately Minority Titles: Definitions
One of the most consequential research design decisions in a pay equity
study is what constitutes a female-dominated or disproportionately minority
job. The criteria for selecting female-dominated or disproportionately
minority titles directly determine the pool of jobs for which estimates of
potential undervaluation will be made. Of course, not all job titles in the
pool will necessarily be found to be misvalued. But only those titles in the
pool will be examined to see if there is any misvaluing of jobs. Thus, the
goal of achieving internal equity through pay equity adjustments is best met
if we include too many, rather than too few, titles in the pool.
The development of the criteria for selecting the sample of female-
dominated and disproportionately minority titles was done jointly by labor and
management with consultation from Center staff. The criteria encompass three
rules indicating bargaining unit restrictions, a proportion female or minority
incumbent cutoff point, and a minimum incumbency size.
First, because funds for the study were provided in the contract between
the state and CSEA, estimates of undervaluation were contractually limited to
titles in CSEA's bargaining units.
Second, the standards that had been used elsewhere in pay equity studies
were reviewed. We found that for female-dominated job titles, most studies
had used a 70 percent cutoff point. Specifically, this meant that only job
titles with 70 percent or more female incumbents were examined to determine
whether there was undervaluation in their wages. In most studies done in
other jurisdictions, the remaining job titles with 69.9 percent female or less
were not examined. However, there is reason to expect that salary discrimina-
tion may affect job titles with less than 70 percent females as well as those
with 70 percent or more females.
Moreover, since New York State was the first jurisdiction to look at dis-
proportionately minority positions, we found no previous standards on how to
define a disproportionately minority title. Thus, the definition of female-
dominated and disproportionately minority titles by a cutoff point, whatever
it would be, would be somewhat arbitrary.⁶
As a third step, we examined the impact of a 70 percent cutoff rule on
the job titles in the CSEA bargaining units to see whether it, at a minimum,
encompassed titles culturally associated with women and minorities. We
discovered that the 70 percent rule would exclude some of the largest titles
in which historically female work is routinely performed, such as Mental
Hygiene Therapy Aide, with over 17,000 incumbents, Mental Hygiene Therapy
Assistant I, Housekeepers, and Launderers. Conceptually, these exclusions
make little sense since these titles are clearly associated with traditionally
female work. Moreover, consistent with the theoretical underpinnings of pay
equity, these titles are likely to have been undervalued because they have
traditionally been filled by women. In light of our examination of the impact
of the specific cutoff points on the final list of estimated titles, we were
certain that the 70 percent cutoff point traditionally used to define female-
dominated was too high given New York State employment demographic data.⁷

⁶Since one of our adjustment formulas involves using percentage female
and percentage minority to adjust the compensation model, it follows that, if
these variables are found to be statistically significant predictors of pay,
they will affect the predicted salary grade of all titles with substantial
percentages of females and minorities. Accordingly, we recommended early in
the study that all titles with greater than the mean percentage of women and
minorities in the New York State workforce be assessed for potential
undervaluation. This recommendation was not accepted.
With a great deal of input from both labor and management, an alternative
model for defining female-dominated and disproportionately minority job titles
was developed. This conceptually-based model uses a standard which is tied to
the proportion of women and minorities in the total New York State labor
force.
The formula is (.4X) + X, where X is the overall proportion of women or
minorities in the New York State labor force.⁸ Thus, jobs are considered to
be female jobs if their percentage female is at least 40 percent larger than
it would be if workers were distributed across jobs without regard to sex.
Similarly, a disproportionately minority job is one in which there is at least
a 40 percent excess of minority workers relative to their proportion in the
labor force. In New York State, where women constituted just over 48 percent
of the total public sector workforce in 1984, the formula resulted in a 67.2
percent cutoff point ((.4 x 48) + 48 = 67.2). This meant that all CSEA titles
with 67.2 percent or more female incumbents were included within the sample of
titles for which undervaluation would be assessed.

This same formula was used for minorities. Since minorities constitute
22 percent of the New York State workforce, a disproportionately minority
title is one in which 30.8 percent or more of the incumbents are minorities
((.4 x 22) + 22 = 30.8). Along with the female-dominated titles, these would
be assessed for potential undervaluation.

The third and final criterion for female-dominated and disproportionately
minority titles involves the minimum incumbency size for titles for which
undervaluation would be assessed. Once a listing of titles had been developed
based on the cutoff rule, labor and management deliberated over what the
minimum number of incumbents should be before estimates of undervaluation
would be made. They decided that estimates of undervaluation would be made
only for female-dominated or disproportionately minority titles with ten or
more incumbents. The decision on incumbency size was based on the instability
of sex and race/ethnic percentages in job titles with fewer than ten
incumbents. Below that number the shift of only one or two positions from
male to female or white to minority would change the categorization of the
title. This decision resulted in the deletion of 56 titles.

⁷Despite our concerted attempts, we have been unable to discover exactly
where the 70 percent-and-above definition originated. Unconfirmed data
indicate that it was adopted for use in the Washington State study based on
consultant use of a set of U.S. Department of Labor charts. However, our own
library and computer-based searches have not uncovered a U.S. Department of
Labor reference using a 70 percent definition for a female-dominated job.

⁸The .4 factor evolved from the development of an approach to defining
disproportionately minority based on the traditional definition of female-
dominated. New York State was the first jurisdiction faced with defining a
disproportionately minority encumbered job. The only existing related
precedent was the commonly used 70 percent standard to define
female-dominated. Exactly how the 70 percent standard originally came to be
used is unclear. However, a logic applied post hoc is that, given that women
are roughly 50 percent of most work forces, as well as of the population
at large, 70 percent is enough above this base of 50 percent to constitute a
disproportionate representation of women. Thus, a similar increment above the
base proportion of minorities in a work force could constitute a
disproportionately minority encumbered title. Therefore, to establish the
definition of disproportionately minority, we used the proportion: 70 percent
is to 50 percent as X is to 22 percent (minority representation in the New
York State workforce). In this proportion X = 30.8, and the increment over
the base, (70 - 50)/50, equals .4. Applied to sex composition in New York
State, where women constitute 48 percent of the workforce, (.4)(48) + 48 =
67.2.
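The cutoff formula and the minimum-incumbency criterion described above can be sketched in a few lines of Python. This is our illustration, not code from the study; the function and variable names are invented, while the workforce shares (48 percent female, 22 percent minority) are the figures reported in the text.

```python
# Sketch of the (.4X) + X cutoff rule and the ten-incumbent minimum.
# All names here are our own; the percentages come from the report.

def cutoff(workforce_pct):
    """Dominance cutoff: the workforce share plus a 40 percent increment."""
    return (0.4 * workforce_pct) + workforce_pct  # equivalently 1.4 * X

FEMALE_CUTOFF = cutoff(48.0)    # (.4 x 48) + 48 = 67.2
MINORITY_CUTOFF = cutoff(22.0)  # (.4 x 22) + 22 = 30.8

def estimation_pool_labels(pct_female, pct_minority, incumbents):
    """Return the categories under which a title enters the pool of titles
    assessed for undervaluation (an empty list means: not in the pool)."""
    labels = []
    if incumbents >= 10:  # minimum incumbency criterion
        if pct_female >= FEMALE_CUTOFF:
            labels.append("female-dominated")
        if pct_minority >= MINORITY_CUTOFF:
            labels.append("disproportionately minority")
    return labels

print(round(FEMALE_CUTOFF, 1), round(MINORITY_CUTOFF, 1))  # 67.2 30.8
```

Note that under this rule a title can carry both labels, and a title meeting either cutoff but with fewer than ten incumbents stays out of the pool.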
Table 2.1 lists the 168 female-dominated and disproportionately minority
titles for which undervaluation was to be assessed; they are grouped by title
code. Two titles were later eliminated due to extremely low response rates.
(See Appendix G.) In addition to titles included as a result of the above
criteria, we included the following four titles:

Title Code   Job Title
7150000      Maintenance Helper
7617200      Bus Driver
3016000      Janitor
7202022      Maintenance Assistant (Refrigeration)

These titles exceeded the 30.8 percent cutoff point for disproportionately
minority when positions in the State University system were excluded.
However, they fell below the cutoff when State University positions were
added. This finding is an indication that the university incumbents of these
titles are primarily white, while the incumbents of positions in other
agencies include a substantially greater proportion of minorities. Given our
concern with being more inclusive, labor and management decided to include
these titles within the list of those to be estimated for undervaluation.
Finally, it was decided that those titles in the direct line of promotion
of any of the titles which were examined for potential undervaluation would be
examined for potential undervaluation if their related entry-level title was
found to be undervalued, regardless of the proportion of women and minorities
in them. Perhaps due to the common cultural assumptions and expectations
about the work behavior and the appropriate roles of men and women, white
males tend to be at the top of female-dominated or disproportionately minority
career ladders. As a result, many of the higher grade level job titles in
disproportionately female or minority promotional tracks have lower
percentages of women and minority incumbents and do not meet the cutoff
proportion of women or minorities necessary to be included. However, where
the entry-level position has been found to be undervalued, the likelihood
increases that undervaluation has affected the grade level assignment of the
promotional titles as well. Moreover, if such job titles were not examined
for undervaluation when job titles at the bottom of the same job family were
examined, the State could face serious problems with internal inconsistencies
in the classification system. Table 2.2 lists the direct-line-of-promotion
job titles which were assessed for undervaluation. This constitutes 20 job
titles, making a grand total of 188 titles in the original list of estimated
titles.⁹

⁹This number includes the deletion of one direct-line-of-promotion title
because its entry title was not found to be undervalued.

JOB TITLE SAMPLING FRAME

Given the large number of employees and job titles in New York State
government employment, it was not feasible to collect information from each
TABLE 2.1
FEMALE-DOMINATED AND DISPROPORTIONATELY
MINORITY JOB TITLES FOR WHICH UNDERVALUATION WILL BE ASSESSED
Title Code Title Title Code Title
100200 Account Clerk 911200 Laboratory Animal Crtkr
100300 Senr Acct Clerk 911300 Senr Lab Animal Crtkr
100500 Prin Acct Clerk 1836100 Inst Rtl Str Clerk
102100 Payroll Audit Clk 1 1935000 Park Regn Bus Assnt
102200 Audit Clerk 2134101 Trans Plng Aide 1
102230 Payroll Audit Clk 3 2337110 Consumer Srvs. Spec 1
102300 Senr Audit Clerk 2501200 Clerk
105200 Cashier 2501300 Senr Clerk
112000 Toll Collector 2501317 Senr Clerk Surrogate
130110 Emps Ret Bnfts Exmr 1 2501320 Senr Clerk Corp Srch
130310 Emps Ret Bnfts Exmr 3 2501500 Prin Clerk
133100 Emps Ret Mbrsp Exmr 1 2501517 Prin Clerk Est Tx App
133200 Emps Ret Mbrsp Exmr 2 2501590 Prin Clerk Personnel
702200 Statistics Clerk 2502200 Comp Claims Clerk
702300 Senr Statistics Clerk 2502300 Senr Comp Clms Clerk
702500 Prin Statistics Clerk 2503200 File Clerk
750300 Senr Actuarial Clerk 2503300 Senr File Clerk
750500 Prin Actuarial Clerk 2503500 Prin File Clerk
822010 Data Proc Clk 1 2504200 Admitting Clerk
822020 Data Proc Clk 2 2504300 Senr Admitting Clerk
849200 Data Entry Mach Oper 2506100 Nursing Station Clk 1
849300 Senr Data Enty Mach 0 2508400 Driver Impv Adjdtn C
849500 Prin Data Enty Mach 0 2508600 Adjudetn Corrpdne Clk
2510100 Purchasing Assnt 1 2559200 Library Clerk 2
2510200 Purchasing Assnt 2 2559300 Library Clerk 3
2512200 Ident Clk 2560100 Student Loan Clk 1
2512300 Senr Ident Clerk 2560200 Student Loan Clk 2
2513300 Senr Med Records Clrk 2568100 Emp Ins Revwng Clk 1
2513400 Treatmnt Unit Clk 2569100 Disablty Detrm Rv C 1
2514300 Senr Underwrtng Clerk 2601200 Typist
2514400 Senr Payroll Audt Clk 2601300 Senr Typist
2515200 Credentials Assistant 2601310 Senr Typist Law
2521100 Motor Veh Title Clk 1 2601500 Prin Typist
2521200 Motor Veh Title Clk 2 2605200 Dict Mach Trans
2522210 Legal Assnt 1 2606100 Info Procssg Spec 1
2540100 Motor Veh Rep 1 2606200 Info Procssg Spec 2
2540200 Motor Veh Rep 2 2606300 Info Procssg Spec 3
2540300 Motor Veh Rep 3 2609000 Secretarial Steno
2540510 Supvg Motor Veh Rep 1 2610200 Stenographer
2553310 Trans Offc Assnt 1 2610300 Senr Stenographer
2553320 Trans Offc Assnt 2 2610320 Senr Steno Law
2557100 Apps Cntrl Clk 1 2610500 Prin Stenographer
2558100 Payroll Clerk 1 2610520 Prin Stenographer Law
2558200 Payroll Clerk 2 2612200 Hearing Reptr
2558300 Payroll Clerk 3 2703100 Telephone Oper Typ
2559100 Library Clerk 1 2703200 Telephone Oper
TABLE 2.1
(continued)

Title Code  Title                      Title Code  Title
2703300  Senr Telephone Oper           5303100  Beautician
2706100  Dirctry Info Sys Op 1         5350200  Dental Assnt
2712200  Calculating Mach Op           5359000  Dental Hygienist
2715200  Bookkeeping Mch Op            5500200  Licensed Prac Nrs
2715220  Bookkeeping Mch Op Ds         5503200  Operating Room Technician
2810100  Admnv Aide                    5503300  Senr Operating Room Technician*
2859010  State Univ Prgm Aide          5501100  Hosp Attendant 1
3004000  Housekeeper                   5502200  Hosp Clinical Techn
3004500  Supvg Housekeeper             5518500  Comty Resdnc Aide
3014000  Cleaner                       5532101  Hosp Clinical Assnt 1
3016000  Janitor                       5532202  Hosp Clinical Assnt 2
3021000  Elevator Operator             5540300  Psych Therapy Aide
3102300  Cook                          5544100  Mental Hyg Hfwy HA 1
3102600  Head Cook                     5570300  Mental Hyg Ther Aide 1
3106100  Dietitian Techn               5570400  Mental Hyg Ther Ast 1
3124200  Food Service Wkr 1            6201000  Laboratory Helper
3124300  Food Service Wkr 2            6202200  Laboratory Worker
3124400  Food Service Wkr 3            6204000  Laboratory Aide
3137200  Food & Suppls Processor       6210000  XRay Aide
3302200  Launderer                     6211510  Teaching Hosp Stl St 1
3302300  Senr Launderer                6211520  Teaching Hosp Stl St 2
3307000  Clothing Clerk                6214200  Electroencphgrph Tech
5302100  Barber                        6219200  Central Med Sup Tech
6223200  Electrocardogrph Tech         6220200  Histology Technician
6225100  Medical Lab Tech 1            6220300  Senr Histology Tech
6301000  Pharmacy Aide                 6893100  Medicaid Clms Exmnr 1
6818000  Assnt Wkrs Comp Exmr          7150000  Maintce Helper
6824100  Workers Comp Revw An          7611000  Chauffeur
6893200  Medicaid Clms Exmnr 2         7614000  Tractor Trailer Oper
7202022  Maintce Assnt Refrign         7617200  Bus Driver
7611300  Senr Chauffeur                8261202  Youth Div Aide 2
7616100  Motor Veh Oper                8261400  Youth Div Aide 4
7711000  Bindery Helper                8342200  Rehab Interviewer S S
8261303  Youth Div Aide 3              8431200  Empl Sec Clk
8340109  Alclsm Rehab Assnt 1          8431500  Prin Emp Sec Clerk
841010   Training Aide                 8701600  Watchman
843130   Senr Emp Sec Clerk            8970100  Driver Imprv Adjudctr
8621100  Parole Prog Aide
8937100  Motor Veh Ins Sv RP 1
*These titles were deleted due to inadequate incumbent responses.
TABLE 2.2
DIRECT-LINE-OF-PROMOTION JOB TITLES
FOR WHICH UNDERVALUATION WAS ASSESSED

Title Code   Title
102220       Payroll Audit Clerk 2
102500       Principal Audit Clerk
130210       Emps. Ret. Benefits Examiner 2
133300       Emps. Ret. Membership Examiner 3
822030       Data Processing Clerk 2
911500       Principal Laboratory Animal Caretaker
2134202      Transportation Planning Aide 2
2522220      Legal Assistant 2
3004600      Head Housekeeper
3016500      Supervising Janitor
3016600      Head Janitor
3302600      Head Laundry Supervisor
5518800      Community Residence Assistant Director
5518900      Community Residence Director
5570500      Mental Hygiene Therapy Assistant 2
6218400      Medical Technologist
6225200      Medical Laboratory Technician 2
6818200      Workers Comp. Examiner
7132200      Refrigeration Mechanic
incumbent of each position. Therefore, it was necessary to design a frame for
selecting both a sample of job titles and a sample of incumbents within each
title.
Of course, our objective was to design the sampling frame so as to obtain
the most accurate and comprehensive information on job title content. To meet
this objective, we needed to maximize the information gathered on the range of
work performed across all grade levels and minimize the "standard error" which
results when a sample is drawn from a larger population. Let us consider each
of these in turn.
It is important to gather information on the entire range of work
performed in New York State because the policy-capturing approach to job
evaluation involves the development of a statistical model specifying the
relationship between job content and wages for the system as a whole. It
requires that the sample of job titles go beyond the CSEA titles and instead
be representative of the entire range of work performed throughout New York
State at all grade levels. Limiting a compensation model to CSEA-represented
jobs only, which fall at the lower end of the pay scale, would seriously
distort the model of the pay practices of New York State. This
would raise fundamental questions about any estimates we might generate from
such a partial model. For example, how could we judge what a Licensed
Practical Nurse, a Senior Stenographer, or a Mental Hygiene Therapy Aide
should be paid if we do not know the basis by which Registered Nurse, Office
Manager, or Treatment Team Leader is paid?
Moreover, if we limited the compensation model to those jobs for which
estimates of potential undervaluation would be made, we would understate the
effects of sex and race/ethnicity. We would, in essence, be studying the
effect of sex and race/ethnicity composition within the set of female-dominated
and disproportionately minority jobs and ignoring the effects of sex and
race/ethnicity on the difference in pay between these jobs and all other jobs,
where the percentages of women and minorities are small.
In addition to this concern with comprehensiveness, we aimed at designing
an approach to sampling that would minimize the errors of estimate. The
standard error is an estimate of how accurate the results based on a sample
are as an estimate of what the results would be if the whole population were
studied. In general, the larger the sample, the smaller the standard error.
Our sampling frame is based on maximizing the sample size both for job titles
and, within titles, for incumbents.
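As a reminder of the arithmetic behind this design choice, the textbook estimate of the standard error of a sample mean is s/√n, so it shrinks with the square root of the sample size. A minimal sketch (ours, not the study's code):

```python
# The standard error of a sample mean falls as the sample grows:
# SE = s / sqrt(n). Quadrupling the responses halves the standard error.
import math

def standard_error(sample_sd, n):
    """Estimated standard error of a mean from n responses."""
    return sample_sd / math.sqrt(n)

se_10 = standard_error(8.0, 10)   # 10 usable responses per title
se_40 = standard_error(8.0, 40)   # 40 usable responses per title
```

Here `se_40` is half of `se_10`, which is the sense in which larger samples buy accuracy.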
Since job title, and not individual incumbent, is the unit of analysis,
it is most important to maximize the number of job titles sampled. Based on
this simple fact, we decided to sample as many job titles as possible
throughout New York State employment. Accordingly, in general we defined our
population to encompass job titles with four or more incumbents and sampled
incumbents from all of these job titles. We modified this procedure for
Management/Confidential (M/C) titles to sample all titles without a minimum
incumbency restriction. This is because the exclusion of small incumbency
managerial titles appeared to make the sample less representative of the
population of titles.
Yet, we remained somewhat reluctant to collect data from titles with only
one or two incumbents, because of the need to protect confidentiality and
because of the potential impact of unique responses from an individual
incumbent in such titles. Also, it is difficult to specify the sex and
race/ethnic composition of a title reliably when there are so few incumbents.
Consequently, we explored the possibility of grouping these titles into larger
generic categories. Based on our work with the Division of Classification
and Compensation of the Civil Service Department, however, we concluded that
this would not be feasible. Thus, the best course of action at the time of
sampling was to treat each title separately.
To summarize, we sampled all job titles in the New York State system with
more than three incumbents and all M/C titles, regardless of the number of
incumbents. The consequence of this decision was to significantly reduce the
standard errors of the estimates of potential undervaluation.
As a second step, we determined how many incumbents from each job title
to sample. This decision was based on several considerations. First, we
wanted to obtain the highest number of responses at the lowest cost. Second,
based on a projected minimum 50 percent response rate, we estimated how many
incumbents to sample in order to obtain a sufficient number of responses
within each job title, which would minimize the overall standard error.
Moreover, we decided to sample female-dominated and disproportionately
minority titles differently from titles that would be used to estimate the
policy-capturing model. The level of accuracy required to provide separate
estimates of the potential undervaluation of individual job titles is greater
than that required for job titles used only to determine the model. Moreover,
different strategies minimize the standard error of the policy-capturing
models and the standard errors of each of the estimates of undervaluation.
Specifically, the standard error of estimate for the entire policy-capturing
model is minimized by maximizing the number of job titles sampled. As
indicated, since we are examining nearly the entire population of titles, this
is not an issue. The only significant source of error then derives from
sampling within job titles. The standard errors of estimate of undervaluation
of individual job titles are minimized when the sample of incumbents is large
or is close to the number of incumbents within that title.
Based on these considerations, we developed two sampling frames--one for
estimated titles and another for non-estimated titles as follows.
• Non-estimated titles. In job titles used to derive the
  New York State statistical pay policy model, we sampled
  up to 20 incumbents in each job title. This means that,
  in job titles with fewer than 21 incumbents, all incumbents
  were sampled. In titles with more than 20 incumbents, 20
  employees were systematically selected with a random
  starting point. The figure of 20 incumbents was chosen
  because, assuming a 50 percent response rate, we would
  have 10 responses to use in obtaining a job content
  profile for each title. This was considered the
  appropriate number of responses to minimize the standard
  error of estimate, given time and money constraints.
  Note as well that most job titles have 20 or fewer
  incumbents. Thus, using this sampling frame, the ratio
  of sample to population would be high for the
  overwhelming majority of titles.

• Female-dominated and disproportionately minority
  estimated titles. In job titles for which estimates of
  undervaluation were to be made, we sampled up to 150
  incumbents in each job title. This means that in job
  titles with fewer than 151 incumbents, each incumbent
  of the job title was sampled. This represents the
  population of incumbents in these titles. In titles
  with more than 150 incumbents, with one exception, 150
  employees were randomly sampled.¹⁰

• Direct-line-of-promotion estimated titles. The policy
  decision to include the direct-line-of-promotion titles
  among the estimated titles was made after the main data
  collection survey had already been distributed. As a
  result, it was too late to increase the sample size of
  these titles up to the level of the other estimated
  titles. Thus, they were sampled at the same level as
  the non-estimated job titles used to derive the pay
  policy model. That is, in job titles with fewer than
  21 incumbents, all employees were sampled and in titles
  with more than 20 incumbents, 20 were selected.
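The within-title sampling rules above can be sketched as a single helper that takes everyone in small titles and otherwise draws a systematic sample from a random starting point. This is our own hedged reconstruction, with invented names, not the study's actual procedure:

```python
# Systematic within-title sampling with a random starting point, capped
# at 20 incumbents for non-estimated titles and 150 for estimated ones.
# (Illustrative sketch; all names are ours.)
import random

def sample_incumbents(incumbents, cap):
    """Take all incumbents when the title has `cap` or fewer; otherwise
    select `cap` incumbents systematically from a random start."""
    n = len(incumbents)
    if n <= cap:
        return list(incumbents)
    step = n / cap                    # sampling interval
    start = random.uniform(0, step)   # random starting point
    return [incumbents[int(start + i * step)] for i in range(cap)]

roster = ["position-%d" % i for i in range(300)]
non_estimated_sample = sample_incumbents(roster, cap=20)   # 20 selected
estimated_sample = sample_incumbents(roster, cap=150)      # 150 selected
```

Because the interval `step` is at least 1 whenever the cap binds, the selected indices are distinct, so each draw yields exactly `cap` different incumbents.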
SUMMARY
In this chapter we reviewed the basic methodological decisions guiding
the New York State Comparable Pay Study.
¹⁰The exception was for Mental Hygiene Therapy Aide. Since there are
more than 17,000 incumbents in this job title, we sampled 175 employees.
A major methodological decision was to use multiple incumbent self-
reports, averaged over incumbents of each job title, as the sole source of
information on job content. This decision was made not only because we share
the judgment of many researchers that incumbents are the best source of
information about jobs, but also because we were able to obtain a larger total
number of responses with this information at a substantially lower cost per
response than was possible through any other method. To collect this informa-
tion, we developed a closed-ended questionnaire customized to the range of job
content associated with work in New York State government.
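The averaging step described here reduces, item by item, the multiple incumbent self-reports within a title to a single job-content profile. A minimal sketch of that reduction (our own code, under the simplifying assumption that every respondent answers every questionnaire item):

```python
# Average incumbent self-reports into one job-content profile per title.
# (Our sketch; assumes each respondent answers every questionnaire item.)
from collections import defaultdict

def title_profiles(responses):
    """responses: iterable of (job_title, {item: score}) pairs.
    Returns {job_title: {item: mean score across respondents}}."""
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for title, answers in responses:
        counts[title] += 1
        for item, score in answers.items():
            sums[title][item] += score
    return {title: {item: total / counts[title]
                    for item, total in items.items()}
            for title, items in sums.items()}

profiles = title_profiles([
    ("Senr Typist", {"supervision_received": 1, "writing": 4}),
    ("Senr Typist", {"supervision_received": 3, "writing": 4}),
])
# profiles["Senr Typist"]["supervision_received"] -> 2.0
```

The resulting per-title means, rather than the individual responses, become the observations in the policy-capturing analysis.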
The unit of analysis is the job title. The population of titles used for
deriving a compensation model is all classified titles with four or more
incumbents and all M/C titles. By virtue of our contractual agreement,
however, estimates of undervaluation were restricted to female-dominated and
disproportionately minority titles with ten or more incumbents in the three
bargaining units represented by CSEA and those titles in the direct line of
promotion from disproportionately minority or female-dominated entry level
titles found to be undervalued. Female-dominated titles were defined as those
in which at least 67.2 percent of incumbents are female. Disproportionately
minority jobs were defined as those in which at least 30.8 percent of
incumbents are minorities.
Data were to be collected from a sample of incumbents in each of the job
titles included in the compensation model. Sampling of incumbents within
titles would be done differently for the subset of estimated titles for which
we were obligated to provide pay equity estimates and the remaining titles.
For non-estimated job titles, we sampled all employees in titles with 20 or
fewer incumbents. In titles with more than 20 incumbents, we sampled 20
incumbents, using systematic sampling procedures with a random starting point.
For estimated titles, we sampled all employees in titles with 150 or fewer
incumbents. In titles with more than 150 incumbents, we sampled 150.
Finally, direct-line-of-promotion titles were sampled in the same way as
non-estimated titles, primarily because the final decision to examine them for
potential undervaluation was made after the sample had been selected.
CHAPTER III

THE JOB CONTENT QUESTIONNAIRE:
DEVELOPMENT AND PRELIMINARY FIELD-TESTING
The design of the Job Content Questionnaire for the New York State Com-
parable Pay Study was shaped by three basic objectives:
e to capture variations in job content as they relate
to variations in civil service grade level;
e@ to maximize consistency and minimize sex and race/
ethnic bias in the range and wording of job con-
tent questions; and
e to allow incumbents in all titles to read and
accurately respond to the questions being asked.
The first objective reflects the fact that we are conducting a policy-
capturing job evaluation study. This approach relies on the Job Content Ques-
tionnaire as the basic information source for describing and evaluating job
titles. The questionnaire thus must be comprehensive enough to encompass
those features of work that differentiate jobs with respect to salary grade.
The second objective reflects the fact that this study involves compara-
ble worth job evaluation, although freedom from bias is a desirable property
of job evaluation studies regardless of purpose. Maximizing consistency in
job description requires that we ask the same set of questions to incumbents
of all jobs. Minimizing sex and race/ethnic bias requires that questions
include frequently ignored job content characteristics found in female-
dominated or disproportionately minority jobs (Steinberg and Haignere, 1985).
Third, we stressed readability considerations because of a reported low
literacy level of many incumbents of the lowest grade level jobs. Ensuring
readability increases our confidence that the information gathered from
incumbents captures what is actually a part of a job and that it does not
reflect incumbent differences in ability to fill out the survey instrument.
To our knowledge, the Job Content Questionnaire designed for New York
State by the Center for Women in Government represents the first attempt to
carefully and systematically meet these objectives in a large-scale public
sector pay equity study. The development and modification of the customized
survey instrument was carried out over eleven months. It involved several
initial drafts, two preliminary field tests, and a large-scale pilot study.
This chapter reviews the questionnaire development up to the point of the
pilot study. It includes discussion of the initial questionnaire construction
and the two waves of preliminary field testing, as well as the comprehensive
expert review of several draft survey instruments. Following this, Chapter IV
provides an overview of the pilot survey and explains the further testing and
modification of the questionnaire carried out as part of that survey.
PRELIMINARY ACTIVITIES AND QUESTIONNAIRE DEVELOPMENT
Before starting to design the questionnaire, we collected 20 job analysis
and job evaluation frameworks (Table 3.1). Each approach involves a range of
job content characteristics which are used as the basis for describing or
analyzing jobs. For example, the Hay Guide Chart Profile Method categorizes
job content in terms of four factors and several subfactors (Bellak, 1982).
Its Know-how factor is made up of Managerial Know-how, Vocational/Technical
Know-how, and Human Relations Know-how. Each of these subfactors is further
divided into levels from simple to complicated tasks or functions. In this
type of system, employers may specify different levels within subfactors to
reflect their preferences as to how work in their organization should be des-
cribed for the purpose of paying wages.
The Hay system represents one predominant approach to job evaluation. A
second popular approach is represented by the Position Analysis Questionnaire
(PAQ) which contains 194 specific questions organized in terms of six broad
categories. Although it is not feasible for use in an incumbent self-
administered survey of public sector jobs, the general approach to job
analysis and job evaluation of the PAQ is the one followed in our study.
TABLE 3.1
JOB ANALYSIS SYSTEMS USED IN
THE DEVELOPMENT OF THE JOB CONTENT QUESTIONNAIRE
Communications Workers of America
Factor Evaluation System (FES)
Executive Evaluation System - U. S. Civil Service
Hay Plan
Stellman's Health and Wellbeing Survey
Iowa Plan
Job Characteristics Inventory
Job Evaluation Guide (California School Employees Association)
Job Activity Preference Questionnaire
Job Descriptive Index
Job Diagnostic Survey
MIMA-Office Jobs
Minnesota Job Description Questionnaire
Job Demands and Office Work Evaluation (NIOSH)
Position Analysis Questionnaire
Phoenix Plan
Quality of Employment Study-Working Conditions Survey
Occupation Analysis Inventory
Willis Plan
Rohmert and Rutenfranz: Arbeitswissenschaftliche Beurteilung der
Belastung und Beanspruchung an unterschiedlichen industriellen
Arbeitsplätzen
We examined these twenty different frameworks to determine the range of
typical categories used in describing job content, by disaggregating these
systems into job content categories and listing every way an item had been
formulated in these twenty systems. (Table 3.2 lists the general subfactor
category list.) Then, for each job content category, we compiled the ways
different job analysis or evaluation systems had labeled the categories to
determine the degree of precision other systems used in differentiating levels
of complexity or difficulty within a category of work content. We were also
interested in discerning where other systems placed the significant cutting
points in measuring degrees of difficulty in a task or in a responsibility.
Second, to assess the comprehensiveness of the job content category list
derived from the 20 sources, we selected 45 representative New York State job
titles, varying by job family and salary grade level. We reviewed their job
specifications to identify any job content characteristics of these titles
that may not have been captured in the category list, and by so doing,
uncovered some important additional characteristics. For instance, the job
element list did not include characteristics associated with institutional
human services work, such as dealing with emotionally troubled clients or the
severity of the conditions of the clients, patients, or inmates an employee serves.
Moreover, we found that the levels of categories used in previous
analysis and evaluation schemes were insufficiently differentiated, especially
at the lower end of the task range, and were poorly worded. For instance, in
distinguishing among levels of reading skills, the evaluation frameworks over-
looked the need to read inquiries or forms. Similarly, record-keeping was
described without a category for maintaining records or files. We included
those job characteristics on our list and later included them as items in the
New York State Job Content Questionnaire.
TABLE 3.2
CONTENTS OF JOB CONTENT CATEGORY LIST DEVELOPED
FROM REVIEW OF JOB EVALUATION SYSTEMS
Knowledge and Experience
Education/experience combined
Academic and vocational
combined
Academic only
Vocational
In-Service
Experience
Type of experience
Knowledge levels and
education combined
Special Skills
Math
Reading
Writing
Speaking
Other communications
Symbolic/graphics
Comprehension of communication
Creative skill
Mechanics, including keyboard,
computer
Transportation
Technology
Electrical/electronic
knowledge
Cognitive Skills
Information input, including
estimation
Fact finding/record keeping
Memory
Information processing
Evaluation
Problem solving
Decision making
Task complexity
Task variety
Scope and Effect
Scope
Effect
Task identity
Effect of error
Responsibility for People and
Things
Management responsibility -
general
Supervision of others -
how many
Amount of time supervising
Level of supervision
Supervision tasks
Manage/plan/schedule
Planning-how much
Coordinating
Responsibility for material
assets
Impact on budget
Supervision of incumbent
Frequency of supervision
Closeness of supervision
Autonomy
Prescription of task
Judgment
Review and feedback
Personal Contacts
Importance or skill
Amount
Types of people
Purpose of contacts
Working Conditions
Body activities
General working conditions
Lifting weight
Repetition of motion
Body position
Environmental conditions
Hazards
Stress factors - general
Stress-time
Stress from concentration
Stress from distractions
Stress-adaptability to change
Stress-work schedule
Stress-travel
Stress from other people
Moreover, in these other systems, even where levels of job characteris-
tics ranged from simple to complex, they often lacked precision. This was in
part a problem of anchoring, in that there is no explicit frame of reference
that all incumbents share.¹ To the extent possible, we wanted to avoid
questions with ambiguous wording or uncertain frames of reference.
Third, while completing the job content listing and assessment, we con-
ducted a comprehensive literature search on job evaluation. We were
especially interested in obtaining general information on the range of avail-
able systems, as well as on specific types of job content characteristics
included in them. We located well over 100 relevant articles and books.
Based on these preliminary steps, we wrote a 32~page draft questionnaire.
This first draft questionnaire contained 104 questions representing 194 job
content items. As much as possible, the questionnaire was written to capture
factual aspects of work through closed-ended questions about specific features
of job content. We wanted, for example, to know how many clients, patients,
or inmates an incumbent worked with. We avoided asking employees to evaluate
their jobs in terms of ambiguous concepts such as "responsibility," "problem-
solving," and "freedom."
PRE-TESTING THE QUESTIONNAIRE
Prior to conducting the first preliminary field-test, the draft question-
naire was administered to twelve Center for Women in Government staff in three
lay "anchoring," we mean either the ability to compare one's job
accurately within the range of job titles in New York State government
employment, or the ability to judge the degree to which job content
characteristics like "cold" or "hot" working environment relates to the working
conditions that an employee experiences.
units: Administration, Research, and Training. Representatives of each unit
met at a separate interview. After individually completing the question-
naires, each group was interviewed. All project staff were present in order
to establish standardized procedures for subsequent field-testing at job sites
outside the Center. These interviews both gave us a sense that the question-
naire would, in fact, differentiate among jobs and indicated some of the most
obvious areas of ambiguity. We revised the questionnaire before field-testing
it with state employees.
The first stage of preliminary field-testing was carried out by inter-
viewing 37 job incumbents in 19 state job titles in the greater Albany area in
January 1984. We selected titles for field-testing that
     • were in the same job family, but covered a range
       of grade levels;
     • maximized diversity by sex and race/ethnicity,
       including titles that are integrated;
     • spanned the grade level hierarchy;
     • had a large number of incumbents; and
     • were used as benchmarks in New York State.
The specific job titles on which the field-testing was conducted are listed in
Table 3.3.
Interviews were conducted with one to three incumbents of a particular
title during two- to four-hour sessions. While filling out the questionnaire,
incumbents pointed out problem items and indicated any job content that was
not covered. The information obtained from the preliminary field-testing was
integrated and used as a guide to revising the Job Content Questionnaire.
From the preliminary field-test we identified several areas for improve-
ment of the survey instrument. We shortened the questionnaire considerably,
improved the wording of many of the questions, and improved the instructions.
We deleted most items that were redundant, although some were kept to enable a
TABLE 3.3
JOB TITLES AND AGENCY LOCATIONS SAMPLED
FOR PRELIMINARY FIELD-TESTING: FIRST WAVE
Office of General Services
Office of Mental Health:
Capital District Psychiatric Center
Civil Service Department
Labor Department
Department of Corrections:
Coxsackie Correctional Institute
Office of Mental Retardation:
OD Heck Facility
Department of Transportation
Department of Motor Vehicles
Office of Mental Hygiene:
Marcy Facility
Title
Cleaner
Laborer
Licensed Practical Nurse
Treatment Team Leader
Food Service Worker I
Food Service Worker II
Senior Clerk
Employment Interviewer
Corrections Officer
Nurse I
Nurse II
Treatment Team Leader
Highway Equipment Operator
Clerk
Stenographer
Data Entry Machine Operator
Senior Personnel Administrator
Mental Hygiene Therapy Aide
Launderer
crude item-reliability check in the pilot survey. We included specific
examples within many of the questions, so as to clarify the types of tasks,
behaviors, working conditions, or equipment about which we were asking.
In addition, we found that people were confused as to whether we were
asking generally about the job title or about how they performed in their
individual position. As a result, we modified the questionnaire to make
consistent references to respondents as informants about typical incumbents in
their job title. Respondents were very clear about what a typical incumbent
did and thus had no trouble answering the questions framed in this way.
A second stage of intensive interviews was conducted to further refine
the questionnaire prior to the pilot test. We decided to sample fewer
titles than in the first field-test, but to draw these
titles from a wider range of grade levels. We also included some of the job
titles sampled in the first field-test to assess whether the changes we made
with respect to readability, comprehension, and "anchoring" made it easier to
fill out the survey instrument. Finally, we included several job title
incumbents from New York City in the field-test because of anecdotal reports
that the responses of Albany-based state employees would not be typical of
state employees based in New York City. Interviews were conducted with
respondents in the job titles listed in Table 3.4.
As was true in the first wave of interviews, employees were asked to fill
out the questionnaires and to identify questions that were unclear or
inappropriately stated. We revised the questionnaire after every three or four
interviews, so that changes could be tested and revised again immediately if
necessary.
This second stage of field-testing was extremely useful. Items were
further simplified in wording and anchored through examples. Repetitious
TABLE 3.4
JOB TITLES SAMPLED FOR
PRELIMINARY FIELD-TESTING:
SECOND WAVE
Job Title Salary Grade Number Incumbents Surveyed
Cleaner ~ a 4
Janitor 6 2
Construction Equipment Operator 7 2
Senior Clerk 7 2
Licensed Practical Nurse 9 1
Principal Account Clerk 14 1
Senior Computer Programmer 18 1
Sanitary Engineer 1 20 1
Associate Classification and
Compensation Analyst 23 2
Associate in Education 26 1
Director of Personnel 31 1
Director of Public Information 31 1
Assistant Director of
Classification and Compensation 33 1
Associate Commissioner of Mental
Health 38 1
items were, for the most part, deleted and a number of items were consoli-
dated. A number of items were revised considerably to better describe state
jobs, personnel policies, and procedures.
As a result of these two waves of preliminary field-testing, the Job
Content Questionnaire was ready for a trial with a larger number of incumbents
in a more varied set of titles. The pilot survey, described in the next
chapter, not only provided an opportunity for testing reliability and validity
but also provided further qualitative feedback on item wording and
questionnaire layout.
EXPERT REVIEW AND MODIFICATION
Throughout the development and preliminary field-testing of the Job Con-
tent Questionnaire, we conferred regularly with four categories of experts,
knowledgeable on: questionnaire wording and design, job content, job evalua-
tion, and social science methodology. Those who assisted us are recognized in
our Acknowledgements in Appendix A.
SUMMARY
The process of preliminary field-testing and the development of the Job
Content Questionnaire for the New York State Comparable Worth Study spanned
the six-month period between September 1983 and February 1984. It involved a
process of comprehensive review of previous job analysis and job evaluation
approaches combined with a sensitivity to detail in capturing precisely the
range of tasks, functions, and behaviors of work associated with New York
State job titles. It involved as well continual revision of content, wording,
and layout in light of the reactions and criticisms of several hundred state
employees acting as respondents or experts or both. By February 1984, we were
secure that the survey instrument was refined enough to test on a large sample
of employees representing a wide range of New York State job titles.
CHAPTER IV
THE PILOT SURVEY
A pilot survey of the New York State Comparable Pay Study was conducted
between February and June, 1984. It was designed to improve the technical
quality of the main survey, in order to increase the precision of the final
estimates of undervaluation. The objectives of the pilot survey were:
     • to test sampling procedures that were to be per-
       formed by the Civil Service Department;
     • to evaluate several methods for distributing the
       questionnaire;
     • to assess the effects of race/ethnicity, sex,
       salary grade, and estimated reading level on
       response rates;
     • to assess the rate of response in low-incumbency
       titles;
     • to improve the survey instrument; and
     • to test for the validity of incumbent responses.
Through the pilot survey we gained a greater understanding of survey
mechanics in New York State and found that a mailed distribution method is
most effective. We obtained adequate response rates from both sexes and those
in all race/ethnic groups, salary grades, and reading levels. We established
the reliability and validity of the survey instrument in terms of the stated
purpose of the comparable pay study. Further, we observed a high degree of
similarity in responses from incumbents and supervisors, thereby validating
the use of incumbent self-reports.
This chapter presents the pilot survey results. We begin with a discus-
sion of the general methodology of the pilot survey, including the selection
of the sample of job titles, the selection of the sample of incumbents, and
the test of four methods of distribution. The chapter continues with an
assessment of the adequacy of the procedures followed in distributing the
questionnaire and the response rate in relation to four possible distribution
methods. We then present the findings regarding the reliability and validity
of the Job Content Questionnaire. Finally, we discuss further revision of the
Job Content Questionnaire.
GENERAL METHODOLOGY
Methods of Distribution
One of the primary objectives of the pilot survey was to test four
methods of distribution:
     • mailed, in which surveys were distributed to employees
       through interagency mail;
     • on-site, in which employees were asked to fill out the
       questionnaires individually in a group setting; and
     • direct distribution by union stewards or personnel
       directors, in which surveys were distributed directly
       to employees by a representative of either the state
       or the union. (We initially treat these as one dis-
       tribution method, but later in the analysis stage we
       treat them separately.)
These four distribution methods are described more fully in the next section
on survey mechanics.
Sampling of Titles and Incumbents
The pilot Job Content Questionnaire was distributed to 1862 incumbents in
68 job titles sampled primarily from six agencies and two facilities. Job
titles were selected for the pilot study based on considerations both of
economy and of representativeness of occupations found in the New York State
employment system.
The sample of job titles is listed in Table 4.1. They were drawn from
all bargaining units, from the range of salary grades, and from a diversity of
occupational families. The final sample contained a mixture of female-domi-
nated, disproportionately minority, white male-dominated, and integrated
TABLE 4.1
JOB TITLES INCLUDED IN PILOT SAMPLE
Title Code   Job Title
8731100      Security Service Assistant 1
8755200      Safety and Security Officer 1
8700100      Corrections Officer
8700200      Corrections Sergeant
8700300      Corrections Lieutenant
2501200      Clerk
0849200      Data Entry Machine Operator
2610200      Stenographer
2606100      Information Processing Specialist 1
2501300      Senior Clerk
0102300      Senior Audit Clerk
2540300      Motor Vehicle Rep 3
0620200      Tax Comp Rep 3
0821200      Computer Operator
8901000      Motor Vehicle License Exam
0610110      Tax Comp Agt 1
0821300      Senior Computer Operator
0100500      Prin Acct Clerk
7511000      Power Plant Helper
3014000      Cleaner
6961000      Laborer
6921200      Highway Equipment Operator
7616000      Motor Vehicle Operator
7202000      Maintenance Assistant
6921000      Construction Equipment Operator
7312000      Motor Equipment Mechanic
7501200      Stationary Engineer
7331100      Electrician
7352000      General Mechanic
7501300      Senior Stationary Engineer
3124200      Food Service Worker 1
3124300      Food Service Worker 2
5500200      Licensed Practical Nurse
5518500      Comty Residence Aide
5570300      Mental Hygiene Therapy Aide
5570400      Mental Hygiene Therapy Assistant 1
5570500      Mental Hygiene Therapy Assistant 2
TABLE 4.1
JOB TITLES INCLUDED IN PILOT SAMPLE
(continued)
5500510 Nurse 1
5500540 Nurse 2 Psy
3965040 Teacher 4
0820300 Senior Computer Programmer
0403300 Senior Accountant
2810300 Senior Admnv Analyst
8107220 Psych Soc Worker 2
4001200 Civil Engineer 1
0820410 Assoc Comptr Programmer An
8154300 Senior Soc Serv Prog Spec
6501300 Senior Attorney
4001200 Civil Engineer 2
0825500 Supvr Data Process
5620202 Psychiatrist 2
1441300 Senior Personnel Administrator
1441400 Associate Personnel Administrator
5255230 Treatment Team Leader MH
5255210 Treatment Team Leader MR
8969080 Chief Driver Impv Analyst
8973800 Chief of Vehicle Safety Serv
2000700 Chief Budgeting Analyst
7319800 Assistant Director of Mat Eg Mgt
8514800 Assistant Director Labor Statistics
7319900 Director Mat Eg Mgt
2876900 Director Tax Systems Development & Rsch
2870900 Director Trans Admn Srvs
2876700 Director Admn Tax & Finance
4013900 Director Trans Plan Research Bureau
0645900 Director Tax Processing
0607900 Director Tax Audits
2851000 Senior Project Exec
titles. We also included several sets of titles reflecting two or three
consecutive steps in a job family career ladder to test questionnaire
sensitivity to job content differences between essentially similar jobs. In
addition, the sample contained several titles where we anticipated that low
reading ability might produce low response rates. Furthermore, to ensure that
the main survey would have a sufficient number of incumbents in each job title
from which to sample, without including any respondent who had been included
in the pilot survey sample, we attempted to limit the job title sample for the
pilot study to titles with more than one hundred incumbents.
Sample selection of incumbents within these titles was restricted to
limited geographic areas and specific agencies in order to minimize the cost
and time involved in the distribution of questionnaires for the pilot study.
The pilot survey was limited to agencies and facilities in Albany, New York
City, Greene County, and Kings County. For Department of Corrections titles,
we sampled incumbents statewide¹ due to a specific problem discussed below.
The pilot study involved the following eight agencies:
     • Office of General Services
     • Department of Motor Vehicles
     • Department of Social Services
     • Department of Taxation and Finance
     • Transportation Department
     • Capital District Psychiatric Center, Office of Mental Health
     • Brooklyn Developmental Center, Office of Mental Retardation and
       Developmental Disabilities
     • Coxsackie Correctional Facility, Department of Correctional Services
¹These titles were: Correction Officer, Correction Officer (Spanish
Speaking), Correction Sergeant, and Correction Lieutenant.
The sampling plan was developed in relation to the objectives of the
pilot study. First, 200 completed questionnaires under each of the three
distribution methods were needed to analyze the effectiveness of each method.²
Second, a minimum of 50 job titles was needed to test for reliability and
validity of the questionnaire using factor analysis. This number of titles
was the minimum necessary to ensure that the results of the statistical
analysis meaningfully captured variations in work performed in New York State
job titles. Of course, since we could not expect a 100 percent return rate,
we calculated an expected return rate based both on the literature on response
rates and on the past experience of those conducting surveys in the New York
State employment context. The expected return rate varied by distribution
method. Table 4.2 indicates the initial sampling plan designed for the pilot
survey given these considerations.
Table 4.3 indicates the actual sample. The number of incumbents sampled
within each job title deviated from the plan in a number of ways listed as
footnotes to Table 4.3. These included:
e the separation of direct delivery into personnel and
union steward distribution;
@ the addition of a sample of 15 management confiden-
tial titles with one to three incumbents;
© the addition of five Spanish-speaking titles; and
e the oversampling of incumbents in five low literacy titles.
Having selected the final sample of titles, we requested a Composite
Report from the Civil Service Department, which listed the current number of
incumbents in the selected job titles at each of the specified agency
²Because the direct delivery method was subsequently subdivided into
union steward and personnel director distribution, we projected 100 completed
responses under each method.
TABLE 4.2
SAMPLING PLAN FOR THE PILOT STUDY

                     Analyzable        Expected    Total Number   Number of Incumbents
                     Questionnaires    Return      We Need to     Per Job Title
Method               Needed            Rate        Distribute     Receiving Questionnaire
Mailed               200               25%          800           16 (a)
Captured Audience    200               67%          300            6 (a)
Union Steward        100 (b)           50%          200            4
Personnel Office     100 (b)           50%          200            4
  (both direct methods combined: 200 needed, 400 distributed, 8 per title)
Total                600                           1500           30

a - This represents an expected return rate of four questionnaires per job title
sampled.
b - We decided to analyze personnel and union steward distribution separately after
we had projected sample estimates. This resulted in distributing an
insufficient number in each category to ensure 200 responses.
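The arithmetic behind the sampling plan is straightforward: the number of questionnaires to distribute under each method is the number of analyzable returns needed divided by that method's expected return rate, rounded up. A minimal sketch of this calculation, using only the figures given above (the function and variable names are ours, not the study's):

```python
import math

def to_distribute(returns_needed: int, expected_return_rate: float) -> int:
    """Questionnaires to distribute in order to expect `returns_needed` back."""
    return math.ceil(returns_needed / expected_return_rate)

# Figures from the sampling plan: (analyzable returns needed, expected rate).
plan = {
    "mailed": (200, 0.25),
    "captured audience": (200, 0.67),
    "union steward": (100, 0.50),
    "personnel office": (100, 0.50),
}

for method, (needed, rate) in plan.items():
    print(method, to_distribute(needed, rate))
```

Note that 200 divided by 0.67 rounds up to 299, which the plan rounds to an even 300 for the captured-audience method.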
TABLE 4.3
ACTUAL ALLOCATION OF RESPONDENT SAMPLE
BY DISTRIBUTION METHOD

                        Minimum     Actual         Expected   Projected   Returns Needed   Number Distributed
Method                  Surveys     Number         Return     Return      in Each          in Each of
                        Needed      Distributed    Rate                   Job Title        60 Titles (e)
Mailed                  200          929 (a)       25%        232         4                16
Captured audience
  (on-site)             200          241 (b)       67%        168         4                 6
Union steward (f)       100          324 (c)       50%        162         2                 4
Personnel office (f)
  distribution          100          368 (d)       50%        189         2                 4
Total                   600         1862                      751         12               30
a - The total of 929 questionnaires to distribute is a function of the fact that not
all titles in the sample have a minimum of 30 incumbents. It also reflects a decision to
oversample respondents in 5 titles identified as having incumbents with low literacy.
b - We did not use the captured-audience on-site method of distribution at Coxsackie
Correctional Facility, at Brooklyn Psychiatric Center, or in Region 1 of the Department of
Transportation. We excluded these sites because it was impractical to request employees
working at many different locations to report to one central location to fill out the
questionnaire.
c - The numbers distributed in union steward and personnel office distribution are
higher than would be expected because they include additional responses from Coxsackie,
Brooklyn Developmental and Department of Transportation, where we did not test the
captured-audience distribution method. The questionnaires that would have been
distributed on-site at these locations were distributed instead by personnel office.
d - The total number of questionnaires distributed through personnel office staff
was greater than the number distributed by union stewards because management confidential
titles do not have union stewards.
e - The table was generated on the basis of the 60 job titles with greater than 3
incumbents, excluding Spanish speaking titles. Low-incumbency titles and Spanish-
speaking titles were distributed in an analogous way to the 60 titles.
f - We decided to analyze personnel and union steward distribution separately after
we had projected sample estimates. This resulted in distributing an insufficient number
in each category to ensure 200 responses.
locations from which we would sample. This list constituted the pilot study
population. We selected a sample of incumbents using a systematic sampling
procedure with a random starting point. Incumbents were divided by title and
randomly allocated to one of the four distribution methods, with probability
proportionate to the target sample size for each distribution method.
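This two-step selection can be sketched as follows; the roster, sampling interval, and per-title targets here are hypothetical stand-ins for the Civil Service Composite Report data, and all names are ours:

```python
import random

def systematic_sample(population, interval):
    """Take every `interval`-th member of the list, from a random starting point."""
    start = random.randrange(interval)
    return population[start::interval]

# Toy roster standing in for the incumbent list in the Composite Report.
roster = [f"incumbent_{i:03d}" for i in range(100)]
sample = systematic_sample(roster, interval=10)  # selects 10 of 100

# Allocate each sampled incumbent to one of the four distribution methods
# with probability proportionate to each method's target sample size.
targets = {"mailed": 16, "on-site": 6, "union steward": 4, "personnel": 4}
pool = [method for method, n in targets.items() for _ in range(n)]
allocation = {person: random.choice(pool) for person in sample}
```

Systematic sampling with a random start gives every incumbent on the roster the same probability of selection while requiring only an ordered list, which makes it well suited to a flat personnel file.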
SURVEY MECHANICS
The first major objective of the pilot study involved survey mechanics.
Specifically, these mechanics encompassed the set of procedures for selecting a
sample of respondents, distributing the survey instrument to respondents,
coding, keypunching and verifying the returned information, and analyzing the
data. It also included tests of the adequacy of follow-through by agency
liaisons, the capacity of the State University of New York at Albany computer
system to handle the necessary data analysis, the ability of the keypunching
service with which we subcontracted to provide a verified tape in a timely
fashion, and the reliability of the agency mails.
In this section we report on the mechanics associated with three stages
in carrying out the pilot survey:
     • the procedure for selecting the sample of incumbents;
     • the procedures for distributing the survey using each
       of the four distribution methods; and
     • the procedures for coding and keypunching the survey
       data.
Random Selection of Sample
The first step in the pilot test was to give detailed instructions to the
Civil Service Department specifying how to select the systematic sample. In
choosing the sample for the pilot, we instructed the Civil Service Department
staff as to which job titles, agencies, and institutions we wanted included.³
Two limitations of the Civil Service data system were found to have
important implications for what we were able to do in the main data-collection
stage. First, information concerning the specific worksite location of
employees and the specific shift each employee works is not available. The
lack of these two kinds of information meant that the number of sites and
number of shifts which are likely to be selected randomly in the main survey
stage could be very large. Thus, the on-site method of distribution, in which
we bring a group of randomly selected employees together for administering the
questionnaire, was not feasible from the point of view of both agency
personnel people and Center staff. Second, state computer files do not con-
tain a specific employee business address. The lack of specific address means
that even with mailed distribution, main survey distribution required the
cooperation of agency personnel officials to provide specific location
information for thousands of sampled employees.
Distribution of the Pilot Survey
As a first step, GOER contacted each agency. In most cases, this was
done through the Personnel Department. Agency staff were told the purpose of
the study and asked that a liaison be appointed to work with the Center in
distributing the questionnaires for the pilot study. After receiving the
names of these agency contacts, Center staff met with each agency liaison
person to explain the study goals and specific objectives for the pilot study,
3While we found this a feasible way to select a sample, we also learned
a great deal about the strengths and limitations of the computer files
maintained at the Civil Service Department. We are indebted to the EDP staff
for their patience in explaining the system to us and for their expertise in
selecting the sample for the pilot test.
and the three or four specific methods of distribution that would be used in
his/her agency. We also requested a separate meeting with union stewards. A
list of the personnel and union steward liaisons for the pilot survey is
included as Appendix B of this report.
For all methods, questionnaires were distributed in a 9" x 12" envelope
labeled with the employee's name and line item number.4 A return envelope
addressed to the Center was included. All questionnaires were to be returned
by interagency mail directly to the Center.
A brief review of the salient features of each distribution method
follows:
Mailed: Questionnaires to be mailed were delivered
in person in a single large box to the liaison
in each agency. Internal location information had
to be added to the address label by the agency
representative. These questionnaires were then
sent through the agency's mail system to the
incumbents. Each respondent who received a survey
in the mail received a follow-up letter two weeks
later, regardless of whether or not she/he had responded
to the questionnaire. Since we could not know who had
responded, we had to send follow-up letters to every-
one. This procedure was meant to increase our res-
ponse rate, as well as to reinforce the confidentiality
of responses. These follow-up letters were delivered
to liaisons at the same time as the surveys and were
similarly labeled.
Personnel: Distribution by a personnel manager was
similar to mailed distribution in that a box of ques-
tionnaires was delivered to an agency for further
labeling and distribution. These questionnaires,
however, did not go through the mailrooms or mail
clerks; they were distributed in person by the
personnel manager. This method tested whether personal
contact by management had an effect on responses
and response rates. Since the approach to delivering
the personnel-distributed questionnaires was left up to
the liaison, slight variations in the method
of distribution occurred. In general, however, the
liaison either personally delivered the survey
to the sampled incumbents or had a staff person hand-
deliver the questionnaire.

4A line item number identifies an employee's position in the New York
State government. Each employee has a unique line item number within his or
her agency.
Union: Questionnaires for the union-distribution
method were distributed in a manner similar to the
personnel method, except that local union stewards
delivered them. We began with meetings arranged
with local presidents at each of the eight work
sites, where the questionnaires were given to agency
union leaders. Many indicated a preference for
having the questionnaires come back to them rather
than being put directly in interagency mail. In many
cases, however, this did not prove practical, and
many union-distributed questionnaires were
returned directly to the Center through inter-
agency mail.
On-site: In this method, incumbents were invited to a
group meeting by the agency liaison and the ques-
tionnaires were distributed by a representative from
the Center. A brief description of the study was pre-
sented. Incumbents then filled out the questionnaire
and handed it to the Center representative.
Survey Distribution: Department of Correctional Services
As a routine part of meeting with agency liaisons, we arranged an
orientation meeting at the Department of Correctional Services with the Main
Office Personnel Director, the Assistant Director of Personnel-Classification
and Exams, and the Assistant Director of Personnel-Facilities. At this
meeting we learned that there might be a problem with Correction Officers
being given release time to fill out the questionnaire. This is because
Correction Officers must be constantly on alert.
A GOER-initiated solution involved a change in the sampling plan for
uniformed titles (i.e., Correction Officer, Correction Officer Spanish-speak-
ing, Correction Sergeant, and Correction Lieutenant). The plan for sampling
uniformed officers was changed from sampling a large group at one facility to
one of spreading the sample across the Department of Correctional Services' 47
facilities. It is much easier for work-relief to be arranged for a few
officers at each facility than for one facility to arrange work-relief.
Because the facilities were widely dispersed across the state, we used only
the mailed-distribution method for these questionnaires.
Data Entry and Cleaning
Once the questionnaires were returned by survey respondents to the
Center, the data entry and cleaning process began. The steps of this phase
are briefly described below.
Coding: All questionnaires were coded and examined for
legibility and other problems by Center staff. Coders
used a detailed codebook, and about 25 percent of the
coding was double-checked by a second coder. Further,
to assess the accuracy of the coding procedure, twenty
questionnaires were randomly selected for comparison.
Two persons coded each item on these questionnaires
independently and the codings were compared. When com-
parisons between coders were made, we found three dis-
agreements out of 3,560 potential disagree-
ments. Thus, we concluded that for all practical pur-
poses, coding error is of no concern.
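That disagreement figure is a simple proportion of mismatched codes. A sketch of the inter-coder check (the example code lists below are hypothetical; only the 3-out-of-3,560 result comes from the pilot):

```python
def disagreement_rate(codes_a, codes_b):
    """Fraction of items on which two independent coders disagree."""
    assert len(codes_a) == len(codes_b)
    return sum(a != b for a, b in zip(codes_a, codes_b)) / len(codes_a)

# Hypothetical item codes from two independent coders
coder_1 = [1, 2, 2, 4, 3]
coder_2 = [1, 2, 3, 4, 3]
example_rate = disagreement_rate(coder_1, coder_2)

# The pilot result: 3 disagreements across 3,560 compared items,
# i.e. under one-tenth of one percent
pilot_rate = 3 / 3560
```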
Entering and Verifying: The coded data were entered and 100
percent verified. All the data were keyed twice, discrep-
ancies were reconciled, and various types of errors in
data entry were detected through a preprogrammed computer
checking procedure. Data were checked for mechanical
errors by scanning the patterns of columns and rows in a
printout, counting to see that there were four data cards
for each case, verifying selected cases, and examining the
output from a frequency distribution to detect inappro-
priate codes. The data-cleaning process involved the
addition of missing lines, correcting occasional miskeys,
and adding a few new values to code those questionnaires
that were mailed to persons who were absent from on-site
visits but were supposed to attend them. Moreover, once
the data were in usable form for analysis, 20 question-
naires were randomly selected for a final accuracy test.
No errors were found. One can conclude, therefore, that
the keypunching was close to 100 percent accurate.
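The double-keying step amounts to comparing two independently keyed copies of the same records, flagging discrepant positions for reconciliation, and checking that each case has its four data cards. A sketch under invented record layouts (an 8-character card with a 4-digit case-ID prefix is our assumption, not the study's actual card format):

```python
def keying_discrepancies(pass1, pass2):
    """Return (record index, column index) positions where two
    independent keyings of the same records disagree."""
    diffs = []
    for i, (r1, r2) in enumerate(zip(pass1, pass2)):
        for col, (c1, c2) in enumerate(zip(r1, r2)):
            if c1 != c2:
                diffs.append((i, col))
    return diffs

def cards_per_case(records):
    """Count data cards per case ID; every case should have four."""
    counts = {}
    for rec in records:
        case_id = rec[:4]  # assumed 4-digit case-ID prefix
        counts[case_id] = counts.get(case_id, 0) + 1
    return counts

# Two hypothetical keyings of the same four cards for one case
pass1 = ["0001A123", "0001B456", "0001C789", "0001D012"]
pass2 = ["0001A123", "0001B457", "0001C789", "0001D012"]
diffs = keying_discrepancies(pass1, pass2)  # flags the single miskeyed column
```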
In general, we were extremely pleased with the way the mechanics of the
pilot survey worked. A cumbersome set of distribution procedures was carried
out with remarkable ease by agency liaisons and Center staff. The sample-
selection procedure also worked well. Data entry and cleaning were carried
out in a timely fashion with no major problems. This put us in a good
position to move forward with the main survey with confidence that the
mechanics of our survey approach worked.
RESPONSE RATES AND DISTRIBUTION METHODS
In this section, we discuss the results of our analysis of response
rates, including the overall response rate, the relationship between sex and
race/ethnicity of incumbents and response rate, the level of response rates
for different distribution methods, and the results of sampling low incumbency
titles.
Overall Response Rate
Overall, 1067 questionnaires were returned out of 1923 sent, for a
response rate of 55 percent.5 These totals do not include an extra follow-up
mailing of the questionnaire to people who were absent when questionnaires
were distributed at on-site visits. With this extra follow-up in the on-site
distribution method, the returns were 1110 received of 1923 sent, or 58
percent.
5"Response rate" for the pilot study meant number received divided by
number sent. No adjustment was made for sampled employees who were no longer
on the job or who had changed titles.
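Under that definition, the arithmetic behind the quoted figures is direct:

```python
def response_rate(received, sent):
    """Pilot-study definition: number received divided by number sent,
    with no adjustment for employees who had left the job or title."""
    return received / sent

base_rate = response_rate(1067, 1923)           # about 0.55
with_followup_rate = response_rate(1110, 1923)  # about 0.58
```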
Our return rate is considered to be high relative to the common experi-
ence of survey researchers, especially those engaged in mail surveys. Fre-
quently, one obtains a return of about 30 percent to survey questionnaires
distributed in applied settings. In addition, we understand it to be an
unusually high response rate in the New York State government employment
context.
The importance of a high response rate cannot be overstated. Without it,
we could not be sure that the sample of respondents is representative of the
population of interest. In the New York State context, we need to have
confidence that those returning questionnaires are representative of all
incumbents in the same job title. If our response rates were low, we would be
forced to consider the possibility that respondents might be atypical; for
example, those with a special problem on their jobs, those unusually satisfied
with their jobs, and so on. Thus, the high response rate in the pilot survey
gave us considerable confidence that we would be able to obtain data from a
representative sample in the main survey.
Sex and Race/Ethnicity of Incumbents and Response Rates
An important question for sampling in the main study was whether response
rates are the same regardless of the sex or race/ethnicity of incumbents. If
incumbents of a particular sex or race/ethnicity fail to respond, then results
could be substantially distorted. Thus, we examined response rates in the
pilot survey. to determine whether it would be necessary to do stratified
sampling of the main study sample by sex and race/ethnicity.
Response rates for females and males and for minorities and whites were
compared. The results are summarized in Table 4.4. Given agency and
geographic restrictions in sample selection, it is obvious that these results
were not obtained from a representative sample of the entire Civil Service
population. The sample distribution was, however, fairly close to the
population distribution. Our sample is 41 percent female, whereas 48 percent
of the Civil Service population is female. Our sample is 19 percent minority,
while 22 percent of the population is minority. We expect the sex and
race/ethnicity distribution of the main survey to be still closer to that of
the entire population of New York State employees.
Given the somewhat unrepresentative character of the job titles sampled
in the pilot survey, we do not regard the difference in response rates between
males (55%) and females (61%) as unduly large. There was no indication from
the pilot survey that males would not answer a questionnaire that was
identified with a study of comparable pay. Therefore, no special sampling or
targeted public relations activity seemed to be needed to ensure an adequate
response from both sexes.
The difference in response rates between whites and minorities was more
problematic. Sixty-one percent of the whites responded, while only 46 percent
of the minorities responded. In the next section, we will see that the
race/ethnic difference in response rate can be reduced by the selection of a
distribution method. Furthermore, since the unit of analysis is the job
title, the response rate issue reduced to the question of whether we could get
a high enough proportion of respondents of all sexes and race/ethnicities to
ensure that the characterization of each job title is unbiased. Since the
ratings of job content characteristics tended to be roughly similar regardless
of the sex or race/ethnicity of incumbents, minor variations in the proportion
of respondents of particular sexes or race/ethnicity would have little
consequence. We decided, therefore, to continue to use the systematic
sampling procedure for selecting incumbents within job titles developed in the
pilot survey.
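The reasoning above can be made concrete: with the job title as the unit of analysis, each title is characterized by mean ratings, and the question is whether subgroup means differ enough for uneven response rates to bias that mean. A minimal sketch (the group labels and 1-4 ratings are invented, not study data):

```python
def title_means(responses):
    """Given (group, rating) pairs for one job title, return the overall
    mean rating and the mean rating within each group."""
    overall = sum(r for _, r in responses) / len(responses)
    groups = {}
    for g, r in responses:
        groups.setdefault(g, []).append(r)
    by_group = {g: sum(rs) / len(rs) for g, rs in groups.items()}
    return overall, by_group

# Hypothetical ratings of one job-content item by incumbents of one title
responses = [("female", 3), ("female", 4), ("male", 3),
             ("male", 4), ("female", 3)]
overall, by_group = title_means(responses)
```

When the subgroup means are roughly equal, as the pilot data suggested, the title's mean rating is insensitive to moderate differences in subgroup response rates.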
TABLE 4.4
RESPONSE RATES BY SEX AND RACE/ETHNICITY

                            Response Rate
            Total Received*  Total Sent  (Received/Sent)
Females           485            793          61%
Males             621           1130          55%
Total            1106           1923

Minorities        166            358          46%
Whites            934           1565          60%
Total            1100           1923

*Missing data included four cases for sex and ten for race. That is,
these items were left blank on the questionnaires returned.
Comparison of Distribution Methods
Response rates by distribution method are presented in Table 4.5.6 The
highest response rate was with the personnel distribution method (59%); mailed
distribution was nearly as high (58%). Based on these results, and on the
greater ease of implementing it, we decided to use mailed distribution in the
main survey.
Sex, Race, and Literacy and Response Rates
Another important question for design of the main survey was whether
those in jobs with certain characteristics responded better to a particular
survey distribution method. If necessary, we could have supplemented the
mailed survey by choosing an alternative distribution strategy for targeted
titles so as to obtain the highest overall response rates. However, because
we found no response bias, this was not necessary.
Job titles in the pilot study were categorized by sex composition,
minority composition, and literacy-type in the following manner. Consistent
with the definition in Chapter II, female—dominated jobs were defined as those
with 67.2 percent or more females. Similarly, disproportionately minority
jobs were defined as those with 30.8 percent or more minorities. For the
pilot analysis only, male-dominated job titles were defined as those with 72.8
percent or more males (Y + .4Y, where Y is the proportion of men in New York
State employment). Finally, as indicated above, five jobs were selected for
the sample because of the low reading level of incumbents, based on advice
from state personnel experts.
6In calculating the response rates, responses that were received as a
result of the special mailed follow-up to on-site visits were excluded from
the calculations. A total of 117 people were absent from on-site visits.
Questionnaires were mailed to these people after the on-site visits.
Forty-three were returned. These cases were used in all data analyses other
than the response rate analysis.
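All three composition cutoffs follow the same rule given for the male-dominated definition, Y + .4Y: applied to the roughly 48 percent female, 52 percent male, and 22 percent minority shares cited earlier, it yields the 67.2, 72.8, and 30.8 percent thresholds. A sketch of that rule (the function and constant names are ours, not the report's):

```python
def dominance_threshold(population_share):
    """Cutoff rule used in the study: Y + .4Y, i.e. 1.4 * Y."""
    return 1.4 * population_share

# Approximate New York State workforce shares cited in the report
FEMALE_SHARE, MALE_SHARE, MINORITY_SHARE = 0.48, 0.52, 0.22

def classify_sex_type(pct_female):
    """Classify a job title by its proportion of female incumbents."""
    if pct_female >= dominance_threshold(FEMALE_SHARE):      # 67.2% female
        return "female-dominated"
    if (1 - pct_female) >= dominance_threshold(MALE_SHARE):  # 72.8% male
        return "male-dominated"
    return "mixed"
```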
An examination of Table 4.5 reveals that, with one exception, mailed and
personnel distribution methods yielded consistently better results than union
or on-site distribution when sex-, race-, and literacy-type of job were
controlled. On-site distribution yielded the highest response rate for
female-dominated jobs. In general, however, the response rates for female
jobs, using all but the union steward-distribution method, are high.
The high response rate for low literacy titles (49%) was especially
gratifying and likely reflects the low reading level of the questionnaire
(seventh-grade level). It appears that when questionnaires were distributed
by mail or by personnel officers, people in low literacy titles coped with the
task of filling them out much better than when questionnaires were distributed
on-site or by union stewards. It is probable that incumbents who received
questionnaires by mail or from personnel staff obtained some assistance, as
they probably do for other reading tasks in their lives.
Negotiating Unit, Agency, and Response Rate
Mailed and personnel-distribution methods yielded consistently higher
response rates across negotiating units and across agencies. (See Table 4.5.)
The only exception was the Department of Tax and Finance, where all methods
yielded high response rates. The negotiating unit with the lowest response
rate was Institutional Services. This corresponds to the lowest agency
response rates at Mental Health (42%) and Mental Retardation (38%).
The differences between agency response rates were examined further.
There were no systematic differences between high-rate agencies and low-rate
agencies due to agency location. The results from low responding agencies
were then examined on a title-by-title basis to determine if low responses
could be accounted for by some characteristic that could be taken into account
in designing the sampling frame for the main survey. However, we were unable
TABLE 4.5
SUMMARY OF RESPONSE RATES FOR PILOT SURVEY

(For each category of title, the table reports the response rate, number
received, and number sent, overall and under each distribution method:
mailed, personnel, union, and on-site. Most individual cell values are
not legible in this copy; the fully legible totals are given below.)

Total: overall .55 (1067 of 1923); mailed .58 (593 of 1019); personnel
.59 (208 of 354); union .47 (144 of 309); on-site .51 (122 of 241).

Rows are broken down by agency (Office of General Services, Corrections,
Social Services, Tax and Finance, Motor Vehicles, Transportation, Office of
Mental Health, OMRDD), negotiating unit (Security, Administrative,
Operations, Institutional Services, PEF, Management/Confidential), sex-type
(female-dominated, mixed, male-dominated), race-type (disproportionately
minority, white), literacy-type (low reading level, other), and salary grade
(3-6, 7-13, 14-22, 23-38).

*22 or fewer were sent.
to find any consistent explanation. Therefore, we could not predict precisely
which titles would yield low response rates in the main survey.
Salary Grade and Response Rate
Another possible factor influencing response rate was examined--the
impact of salary grade. Titles were grouped by salary grade categories:
Grades 3-6, 7-13, 14-22, and 23-38. As indicated in Table 4.5, response rates
across these grade categories ranged from 53 percent to 64 percent, with
response rates increasing with salary grade.
These results are quite consistent with those commonly found for surveys.
While returns are generally lower for the low salary jobs, the return rate for
the lowest salary grades (53%) was still adequate for data analysis. We
concluded that oversampling low salary jobs or using a second method of
distribution was unnecessary.
Small Incumbency Titles
A final research question was whether and at what level we could expect
responses in small incumbency titles. Fifteen Management/Confidential titles
with fewer than four incumbents each were included in the pilot study sample,
involving a total of 19 questionnaires.
Only four questionnaires out of the 19 sent were not returned, for a
response rate of 79 percent. Responses were received from 11 out of 15
titles. Thus, response rates for small incumbency titles were in the 70
percent range, a rate that is adequate for the purpose of our analysis.
RELIABILITY
One of. the major objectives of the pilot study was to determine whether
the job analysis instrument is reliable, that is, whether it measures job
content characteristics accurately. A reliable measure is one which would
yield the same score on repeated attempts to measure the same thing, whether
those attempts are made at two points in time or in different parts of the
questionnaire.
A common way of testing the reliability of questions in a survey is to
repeat a question, perhaps with a slight variation in wording, at two differ-
ent points in the questionnaire. In principle, the answers to the two
questions should be highly correlated: respondents should give similar
responses to both questions. Insofar as they do not, we have evidence that
the question is not being understood, is being guessed at, or is otherwise not
eliciting a very precise response.
Because the Center's pilot questionnaire already contained 178 separate
items, it was not feasible to add repeated items as a way to test
reliability. Consequently, employee responses to the same measures
are not available. However, employee responses to similar measures were
available through the pilot study. This is a reasonable, albeit somewhat
weaker, alternative to the repeated measures design.
Of course, we would not expect the correlation between similar items to
be as high as it would be if the items were almost identical. Consider three
titles--Licensed Practical Nurse (LPN), Mental Hygiene Therapy Aide (MHTA),
and Stenographer. Consider as well the following two items on the pilot
questionnaire: How much does your job involve "physically handling sick and
injured people," and "working around people who are sick or disabled with no
hope of recovery." Both the LPN and the MHTA are likely to score high on both
questions. But consider those Stenographers who work in state mental health
or mental hygiene facilities. They are likely to score low on the first
question and high on the second. This lack of correspondence reflects actual
job content differences. As a result, the level of correspondence will be
lower than if the questionnaire included two items concerning physically
handling patients. Nonetheless, we would expect a moderately high correlation
between these two items and between other pairs of similar items.
To carry out this test, we identified five pairs of items with similar
content. These included: working with sick or injured people; using forms;
evaluating subordinates; answering questions or complaints from the public;
and education. (The exact wording of items is provided in Table 4.6.) The
pairs of items were compared by correlating the two sets of scores for these
items. This statistical procedure yields a summary index of the relationship
between the two sets of scores. This index may range in value from -1.00 to
+1.00. A positive correlation means that the scores on measure A increase as
the scores on measure B increase, or that A decreases as B decreases. In
reliability studies, the closer the correlation is to plus or minus one, the
stronger the reliability of the measure. The pairs of items selected and the
results of the correlations are listed in Table 4.6. Correlations range from
.59 to .72, which indicates a fairly high degree of agreement. Buros (1978)
indicates that, for job analysis instruments:
Reliability studies. have been primarily concerned with
interanalyst agreement on the various job dimension
scores. Interanalyst reliabilities have generally
been in the .50's and higher, although some dimensions
seem to be rated with considerably less agreement.
By interanalyst agreement, Buros is referring to the correlation between
ratings by two job analysts scoring each job on a single variable. Our test,
contrast, compares the scores on two variables, each rated by our entire
sample of pilot study respondents. Given that the items are similar but not
identical, we would expect lower inter-item correlations than those obtained
from two expert job analysts using a single characteristic. In this context,
the reliabilities of .59 and above that we obtained appear high relative to
the reliability coefficients found in other job analysis studies. Thus, we
gained considerable confidence in the reliability of our survey questionnaire
based on the pilot study results.
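The inter-item reliability check is an ordinary Pearson correlation between the two sets of item scores. A self-contained sketch (the 1-4 ratings below are invented for illustration, not study data):

```python
def pearson(xs, ys):
    """Pearson product-moment correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 1-4 ratings of two similar items by the same respondents
item_a = [4, 4, 1, 2, 3, 1, 4, 2]  # e.g. "filling out forms"
item_b = [4, 3, 1, 2, 4, 1, 4, 1]  # e.g. "reading forms"
r = pearson(item_a, item_b)
```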
VALIDITY
Another major objective of the pilot survey was to assess the validity of
the Job Content Questionnaire. Validity is the extent to which an instrument
measures or predicts what is intended. In the context of this study, validity
means the extent to which the questionnaire measures all of the range of job
content in New York State job titles, and only the job content. A valid
instrument for the purposes of this research would differentiate between
jobs in terms of job content measures. Clearly, validity of the instrument is
limited by reliability. An unreliable instrument cannot be valid. There are
three types of validity relevant to this study: face validity, content
validity, and criterion-related validity. The discussion below is organized
in terms of these three categories.
Face Validity
Face validity is the extent to which an instrument appears relevant to
what one intends to measure. It is usually assessed informally by reviewing
the instrument to see whether it appears to cover the content intended. This
was done by nearly 100 employees in the pretest, by 1,110 employees in the
pilot study, and by numerous advisors to the project. Employees frequently
took advantage of opportunities to talk to Center staff or write notes on the
questionnaire about any item they felt did not validly represent the New York
State job system. Employees were also encouraged to suggest any job content
TABLE 4.6
CONSISTENCY OF INCUMBENT RESPONSE:
CORRELATIONS BETWEEN ITEMS WITH SIMILAR CONTENT

Item                                                           Correlation

36.  Physically handling sick or injured people.     1 2 3 4
                                                                   .72
38.  Working around people who are sick or dis-      1 2 3 4
     abled with no hope of recovery.

601. Filling out forms.                              1 2 3 4
                                                                   .59
612. Reading forms.                                  1 2 3 4

631. Answering questions from the public on          1 2 3 4
     the phone or in person.
                                                                   .67
639. Answering complaints from the public.           1 2 3 4

96.  Are you responsible for formally evaluat-
     ing the performance of the workers you
     supervise?
     1 no   2 yes
                                                                   .62
985. Writing evaluations of subordinate per-         1 2 3 4
     formance.

523. If the State requires education for your
     job, how much of full-time college or
     training outside the job is required?
     years, months
                                                                   .69
525. If the State requires a diploma or a
     degree for your job, what degree is
     required?
     1 A high school diploma
     2 A college degree that requires
       less than four years of study
     3 A four-year college degree
     4 A master's degree
     5 A doctoral, law, medical or
       other degree beyond a master's
       (specify)
that should be added to the questionnaire. In addition, the questionnaire
specifically asked people if there was anything else about their job that they
wanted to tell us. By the time we had reached the pilot study, employees made
few suggestions about additions to the questionnaire, indicating that the
survey instrument had face validity.
Content Validity
Content validity is the extent to which the instrument encompasses the
range of job characteristics in New York State jobs. Characteristics unique
to jobs were of less interest to this study, since we are interested in
comparing jobs on common characteristics in order to explain variations in
pay. As indicated in Chapter III, in order to insure the inclusion of all
relevant job characteristics, we began the process of questionnaire develop-
ment by examining in detail job analysis instruments developed by other
consultants and added a set of questions about the job content associated with
social and human service-provision titles. As a result, we were reasonably
certain that the questionnaire's content was more inclusive than other job
evaluation frameworks used in organizations in the public and private sectors.
A second content validity issue is whether the survey instrument measures
systematically some variable other than job content characteristics. An
obvious problem in this context is reading skill. If an incumbent cannot read
and comprehend the questionnaire, then either the incumbent will not respond,
or the incumbent will give invalid responses. In the latter case, the results
will be related to reading ability, not job content. In order to minimize the
- 82-
reading level of the questionnaire, it was edited to reduce the reading level
to the seventh grade.7
Furthermore, with the advice of Civil Service staff and personnel direc-
tors in several agencies, we were able to identify five job titles for which
personnel experts estimated low literacy for 25 percent or more of the incum-
bents in the title.8 It was important to know whether responses in low
literacy titles were given with understanding. Analysis revealed no evidence
of any serious misunderstanding of questions, responses omitted, or other
evidence of difficulty; responses in these titles appeared plausible.
A third way in which content validity was assessed was through factor
analysis of the job content questionnaire. Factor analysis is a statistical
procedure that groups data into categories or factors, sometimes called
“underlying dimensions." Items group together or "load" on a factor because
they are highly correlated with each other. An example of a factor in this
study is "working conditions." For example, we found that items about working
in hot, wet, and cramped conditions load together on a working conditions
factor. The eighteen factors found in the pilot are listed in Table 4.7.9
Our ability to get meaningful factors is further evidence for the content
validity of the questionnaire for the following reasons. First, items did
group in a meaningful way. In order to get consistent, meaningful loadings on
7 The assessment of reading level was done with the Fry Index of
Readability as updated by Kretschmer (1976).
8 Employees entering state service are not tested in any formal way for
reading skills. Therefore, it was necessary to use expert opinion to estimate
reading problems within each title.
9 For a fuller description of the factor analysis and the factors found in
the pilot study, see The New York State Comparable Worth Study Final Report
written by the Center for Women in Government dated 1 October 1985.
- 83 -
each factor, it was necessary that persons doing similar tasks answer related
questions in a similar way. Second, the factors appear to represent job
dimensions relevant to New York State. Third, the factors are similar to
those in other systems. For example, the factor solution for this study is
comparable to factors included in the Factor Evaluation System (FES) used by
the United States Civil Service Commission. Table 4.7 also illustrates the
correspondence between the two factor solutions. They cover about the same
job content, except that the New York State pilot study has some factors that
are not on the FES.
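The item grouping described above can be illustrated with a small principal-components sketch. This is a toy illustration on synthetic data, with a function name of our own, not the study's actual 18-factor procedure, which used more elaborate factor-analytic methods:

```python
# Illustrative sketch only: compute loadings of items on the leading
# principal components of the item correlation matrix; correlated items
# receive large loadings on the same component ("load together").
import numpy as np

def pc_loadings(X, n_factors):
    """Loadings of items (columns of X) on the first n_factors components."""
    R = np.corrcoef(X, rowvar=False)          # item correlation matrix
    vals, vecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    top = np.argsort(vals)[::-1][:n_factors]  # indices of the largest ones
    # a loading is an eigenvector scaled by the square root of its eigenvalue
    return vecs[:, top] * np.sqrt(vals[top])
```

Items driven by one underlying dimension, such as working in hot, wet, and cramped conditions, end up with large loadings on the same component, which is how related items surface as a common factor.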
A final way we assessed content validity was to compare item means across
title series. It is especially important that the content of the
questionnaire discriminate validly between job titles within a job series or
family. Specifically, means (the average response of incumbents in a job
title) should vary across the titles in a series in a predictable way. Table
4.8 lists incumbent means on selected relevant items for three series: Correc-
tions, Clerks, and Food Service Workers.
In general, scores ascend or descend across series as one would expect.
For example, we would expect the higher grade level jobs in a series to score
higher, on average, than lower grade level jobs on such items as "planning in
advance," "variety on the job," and "freedom to decide how to do the job." In
turn, we would expect that incumbents in lower grade level jobs would report
higher scores, on average, than higher grade level jobs on the degree to which
their job requires them to perform the "same task over and over."
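The expected ordering of title means within a series can be stated as a simple monotonicity check. A minimal sketch (the function name is ours):

```python
# Sketch of the within-series check described above: an item's title means
# should rise across a series for most items, and fall for items such as
# "same task over and over."

def monotone_as_expected(scores, direction="up"):
    """True if successive title means move in the expected direction."""
    if direction == "up":
        return all(b >= a for a, b in zip(scores, scores[1:]))
    return all(b <= a for a, b in zip(scores, scores[1:]))
```

For instance, the "planning in advance" means for the Corrections series in Table 4.8 (1.00, then 6.00) pass the "up" check, while the "same task" means for the Food Service Workers (3.33, then 1.00) pass the "down" check.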
Criterion-related Validity
A third way to assess validity is to compare results with a criterion
that is accepted as a standard. The obvious criterion in the present context
is salary grade. Since our purpose is to develop a model relating job con-
- 84 -
TABLE 4.7
COMPARISON OF FACTORS IN THE FES AND NYS STUDY SYSTEMS

FES Factors
  Knowledge (facts and skills)
  Supervisory controls
  Guidelines (judgement)
  Complexity
  Scope and effect
  Personal contacts
  Purpose of contacts (influence, motivate, etc.)
  Physical demands
  Work environment (risks, etc.)

NYS Study Factors
  Education
  Analytical reasoning
  Management/supervision
  Fiscal responsibility
  Autonomy
  Variety
  Routine
  Scope of personal contacts
  Stress from communication
  Service provider tasks
  Office tasks
  Group facilitation skills
  Working conditions
  Time stress
  Computer
  Enter data
  Reading
  Writing
- 85 -
TABLE 4.8
AVERAGE SCORES FOR INCUMBENTS ON SELECTED ITEMS

Job Title              46. Same task*  55. Math  68. Planning in    79. Variety  84. Freedom to
Series                 over and over             advance (how far)  in the job   decide how to
                                                                                 do job

Correction Officer          2.00         2.00         1.00             2.00          1.00
Correction Lieutenant       2.00         3.00         6.00             4.00          2.50
Clerk                       3.33         2.33         2.67             2.67          3.33
Senior Clerk                1.33         3.00         3.00             3.33          3.00
Food Service Worker 1       3.33         1.33         1.67             2.00          1.50
Food Service Worker 2       1.00         2.00         3.00             2.00          2.00

* These scores should go down in each series. For the other items, the scores
should go up in each series.
- 86 -
tent to salary grade, an important criterion for choosing a subset of items
from among those with face and content validity is to retain those that are
correlated with salary grade. Correlations between items and salary grade
were computed to assess both the degree of association between each question-
naire item and salary grade and the direction of the correlations. A selec-
tion of correlations between selected items and salary grade is listed in
Table 4.9.10
Positive coefficients indicate that the higher the score on a variable,
the higher the salary grade, e.g., the more a college degree is required for a
job, the higher the salary grade. Negative coefficients indicate that the
lower the score on a variable, the higher the salary grade, e.g., the more one
does the same task over and over, the lower the salary grade. The correla-
tions are in the direction and of a magnitude that one would generally expect.
TABLE 4.9
EXAMPLE CORRELATIONS WITH SALARY GRADE

Doing the same task over and over       -.41
Working in crowded conditions           -.03
Teaching                                 .24
Preventing others from wasting time      .31
Hiring and firing                        .41
Writing original computer programs       .45
Working overtime without pay             .49
Leading meetings                         .55
College degree required for the job      .73
10 For a more complete description of the correlation of items with salary
grade, see The New York State Comparable Worth Study Final Report written by
the Center for Women in Government dated 1 October 1985.
- 87 -
We also examined the validity of incumbent ratings by comparing them to
supervisor ratings. If incumbents and supervisors tended to agree on ratings
of the incumbents' jobs, this would provide additional evidence of the
validity of incumbents' ratings.
We surveyed the literature on comparisons of supervisor/incumbent reports
of job content. While small, that literature indicated high agreement
between supervisors and incumbents with respect to job content. In
job analysis where tasks are being evaluated rather than worker performance,
research indicates that workers can accurately rate their jobs.
Moreover, we asked people about the advisability of supervisor review of
questionnaires. Labor representatives, managers, and personnel directors
alike were of the opinion that supervisor review would result in incumbents
providing acceptable, but not necessarily accurate, responses to the
questionnaire. As a result of this advice, we conducted our assessment of the
extent of agreement between job incumbents and their supervisors by generating
a second survey instrument for supervisors to fill out independently of
incumbents.
The design of the supervisor/incumbent substudy included the following
steps: selection of items for comparison, data gathering, and data analysis
through the computation of correlations and the differences in means between
incumbent and supervisor average scores for the selected variables. Twelve
questionnaire items were selected for inclusion in the supervisor question-
naire. Items were selected for one of two reasons having to do with potential
validity problems. The first concerns a problem of accuracy--incumbents might
not know the information requested. Other items were selected to represent
problems of anchoring, that is, the ability to rate one's job appropriately in
relation to other jobs.
- 88 -
We selected incumbents and supervisors as follows: three incumbents from
each of 60 job titles from the larger pilot sample were randomly chosen. In
addition, three incumbents, one in each of three titles with one to three
incumbents, were included. Supervisors for the 183 selected incumbents in the
63 titles were then identified by agency liaison staff. Supervisor
questionnaires were distributed by liaison staff, who were instructed to tell
supervisors that we were seeking information about jobs they supervise.
Liaisons were not to tell supervisors that we were comparing their responses
to incumbent responses, so as not to bias responses. Completed questionnaires
were received from supervisors of 107 incumbents, for a response rate of 58
percent.
Since we were interested in responses by job title, we averaged incumbent
responses and supervisor responses separately for each title. To remove the
possible impact of bias where there was only one incumbent or one supervisor
responding, we only analyzed items for which at least two incumbents and two
supervisors had responded. On this basis, we eliminated two items.
For each item remaining, the raw data were organized into incumbent and
supervisor averages by title. Pearson correlations were then calculated
between incumbent and supervisor responses for each item. Eight of the ten
correlations were above .55, as positive evidence for agreement between
supervisors and incumbents (Buros, 1978: 983).
Two items had lower correlations. The first had to do with the
experience necessary to do your job. The low correlation probably represents
confusion about the state's experience requirements. For many titles,
experience can be substituted for education and vice versa. Another possible
reason for the low correlation coefficient on experience is test error. The
item was rewritten for the main survey in a simpler form. The other item with
- 89 -
a low correlation coefficient was "How much could your mistake slow down the
overall work of the unit?" This item was eliminated from the questionnaire
for the main study.
To summarize our findings, the purpose of the supervisor-incumbent sub-
study was to assess the validity of using incumbents as informants about their
jobs. Our finding of substantial agreement between supervisors and
incumbents supports the choice of incumbents as sources of job
data.
In general, we found that the questionnaire appears valid to employees.
Items predict pay as one would expect. The questionnaire samples job elements
found on 20 other instruments. The questionnaire does not measure reading
level instead of job content. Items that should form hierarchies do so.
Finally, items group together conceptually into factors similar to those
found in other job evaluation systems. In conclusion, we are confident about
the reliability and validity of the survey instrument.
REVISION OF THE JOB CONTENT QUESTIONNAIRE
A final objective of the pilot survey was to improve the Job Content
Questionnaire so that it would be easier for employees to fill out and less
expensive to process. This involved re-writing many questions to make them
closed-ended, revising questionnaire wording where necessary to remove
ambiguities, improving questionnaire format and layout, and eliminating ques-
tions when it was found that they were of little use in reaching our research
goals. It also involved adding several items to improve the reliability of
potential job content factors.
First, the questionnaire was revised to eliminate all fill-in questions.
For example, the question asking for job title was replaced by an identifica-
- 90 -
tion label including job title.11 A question like "What is your negotiating
unit?" was changed from a fill-in answer to a multiple-choice answer with
categories to check. Items requiring quantitative responses were rewritten
with response categories based on frequency distributions for the pilot study.
In addition, several items were reworded to increase clarity.
The greatest number of revisions resulted from the factor analysis of the
items in the Job Content Questionnaire described previously. This analysis
involved two main steps: (1) elimination of a small subset of items unrelated
to pay, to percent female in a job title, to percent minority in a title, or
to another item related to pay, and (2) selection of items for factor scales.
An explanation of this questionnaire revision process follows.
Initial Item Elimination
This study is concerned with identifying compensable job content factors
in the New York State job system and with adjusting pay policy based on these
factors to eliminate potential wage discrimination. Therefore, the decision
to delete items from the factor analysis that correlate weakly with salary
grade is justified on both theoretical and practical grounds. Since the
factor analysis solution from the main survey was to be used to develop a
compensation model, it was pointless to build a factor structure on items that
bear no relationship to compensation. Such items only clutter an analysis
11 Each questionnaire in the main survey had a label affixed to the front
page with the following information: job title name, title code, and salary
grade. Respondents were asked to verify the accuracy of the label. This way,
such information did not have to be checked or coded except where the label
information was incorrect.
- 91 -
that is already large and in need of data reduction for efficiency and
precision of interpretation.12
In addition to predicting pay, we were particularly concerned with
accurately describing work done disproportionately by females and minorities.
Therefore, any item that was strongly related to percent female or percent
minority in a job title was retained. We developed a very conservative set of
rules to govern the elimination of items. They were as follows:
Items were omitted that correlated between -0.2 and
+0.2 with salary grade, between -0.4 and +0.4 with per-
cent female or percent minority, and between -0.4 and
+0.4 with any other item that correlated less than -0.2
or more than +0.2 with salary grade.
We chose 0.2 as a conservative cutoff correlation coefficient with salary
grade. Any individual item correlating between -0.2 and +0.2 with salary
grade has almost no relation to salary grade. We chose 0.4 for the
correlation with percent female or percent minority because we were interested
in a higher level of certainty about what are actually female or minority job
characteristics. We also chose 0.4 as a criterion for items that correlate
with other useful items because items with smaller inter-item correlations are
almost certain to have very weak factor loadings.
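The rule amounts to a conjunction of three screens: an item is dropped only if it is weakly related to salary grade, weakly related to both percent female and percent minority, and weakly related to every item that is itself related to salary grade. A sketch of that logic on title-level data follows; the function name and data layout are our own assumptions:

```python
# Sketch of the three-screen elimination rule described above.
# Assumed layout: X holds title-average item scores, one column per item.
import numpy as np

def items_to_drop(X, grade, pct_female, pct_minority):
    """Return indices of items failing all three screens."""
    n_items = X.shape[1]
    r = lambda y, j: np.corrcoef(X[:, j], y)[0, 1]
    r_grade = np.array([r(grade, j) for j in range(n_items)])
    pay_related = np.abs(r_grade) >= 0.2        # items clearly tied to pay
    drop = []
    for j in range(n_items):
        if pay_related[j]:
            continue                             # screen 1: related to grade
        if abs(r(pct_female, j)) >= 0.4 or abs(r(pct_minority, j)) >= 0.4:
            continue                             # screen 2: female/minority work
        # screen 3: correlated at .4 or more with some pay-related item
        if any(abs(np.corrcoef(X[:, j], X[:, k])[0, 1]) >= 0.4
               for k in range(n_items) if k != j and pay_related[k]):
            continue
        drop.append(j)
    return drop
```

Because the screens are conjunctive, an item survives if it clears any one of them, which is what makes the rule conservative.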
12 One possibility exists, of course, that items that have very weak
zero-order correlations with salary grade have larger net effects. For
example, driving heavy equipment might appear to have no relation to salary
grade because heavy equipment-driving jobs are in the middle of the pay
hierarchy, paid more than other manual jobs, but less than professional and
managerial jobs. If account is taken of other features of jobs, say formal
educational requirements, it might turn out that driving heavy equipment has a
positive relationship to salary grade because such jobs pay well relative to
other manual jobs requiring similar levels of education. In reviewing items
in the pilot test, we tried to be sensitive to such possibilities and retain
items that on theoretical grounds might be suspected of having a substantial
relationship to salary grade when other variables were controlled.
- 92 -
Because this questionnaire editing process was carried out on a sample of
68 job titles, we recognized that the statistical criteria could, at times, be
too narrow. Recall that in selecting this pilot sample, we selected titles so
as to capture the most important sources of diversity in job content, varying
titles by grade level, by job family, by setting, and by percent female and
percent minority. However, because 68 titles cannot fully represent the
diversity of New York State jobs, we deleted only those items that met the
statistical criteria and that pertained to characteristics of jobs in the
sample of titles. No item was eliminated that might be related to pay, given
a different set of titles in the sample. Moreover, we were well aware that
the correlations might be spurious. Therefore, any decision based on correla-
tions was made after careful scrutiny of statistical results to answer such
questions as "Is this correlation coefficient plausible?" or "Could a third
variable explain the correlation found?"
Fifteen items out of 150 were deleted from the questionnaire based on the
above criteria and our qualitative assessments.
Factor Analysis for Questionnaire Editing
All retained items, with a few exceptions, were entered into a principal
components factor analysis. We used an 18-factor solution, of which three
factors were not useful because items did not load on them substantially.
Three groupings of items were added to the remaining 15 factors, for a total
of 18 factors.
These results were used to edit the questionnaire further. The reli-
ability of a factor improves substantially with each increase in the number of
items up to about six items on the factor (Nunnally, 1978). In a few cases
where more than six items loaded on a factor, some items were deleted. In
general, criteria for retaining items were both statistical and non-statis-
- 93 -
tical. To be retained, an item had to (1) load high on a factor, (2) not load
high on more than one factor, and (3) along with the other retained items,
describe the factor comprehensibly.
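The first two retention criteria can be checked mechanically; the third, describing the factor comprehensibly, requires human judgment. In the sketch below, the 0.5 cutoff for a "high" loading is an assumed value for illustration, since the report states no numeric threshold:

```python
# Sketch of retention criteria (1) and (2): an item is kept if it loads
# high on exactly one factor. The 0.5 cutoff is an assumption for
# illustration; the report does not give a numeric threshold.

def retain_item(loadings, hi=0.5):
    """loadings: one item's loadings across all factors."""
    n_high = sum(1 for l in loadings if abs(l) >= hi)
    return n_high == 1
```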
In spite of the above criteria, several items were retained although they
crossed factors or loaded lower than other items on a factor. Some items were
retained for face validity of the questionnaire. That is, many people expect
to see such items on the questionnaire. Other items were retained because of
a special research interest in them. Some of these items seemed to cross
factors describing groups of job content characteristics associated with male
(e.g., working conditions) or female (e.g., office tasks) jobs. We did not
want to drop these items prematurely. After this analysis, 26 additional
items were deleted.
As a final step, a number of items were added whenever there were not
enough items to measure a factor reliably. These items included:
working overtime on weekends without pay;
editing data;
verifying data;
deciding what task to do first;
deciding how quickly to work;
mistake hurt agency name;
dealing with high level managers; and
systems design.
The edited Job Content Questionnaire used in the main comparable worth survey
is attached as Appendix D.
SUMMARY
The pilot survey was designed to provide information on distribution
methods, survey mechanics and questionnaire construction that would inform and
improve the quality of the main data-collection survey. Having gone through
the steps of conducting a survey, we had a much better understanding of what
had to be done to get the main survey into the field.
- 94 -
In terms of response rate, both personnel and mailed distribution methods
yielded consistently higher results than union or on-site. This finding was
stable across sex, race/ethnicity, and estimated literacy level of job, across
negotiating unit, agency, and salary grade, and for small-incumbency titles.
Titles with lower response rates did not have common characteristics that
consistently predicted the low response rates. Therefore, there was no basis
for deciding that any particular title should be oversampled in the main
survey.
Response rates were, for the most part, adequate for all job-type
categories in the mailed and personnel-distribution methods. The relatively
high response rate for the mailed method of distribution was somewhat
surprising and most heartening, since mailed distribution is the easiest
procedure to use. The high response rate for the mailed-distribution method
might reflect several factors: respondents know how the mails work, and their
perception of confidentiality may be greater when neither labor nor management
is involved. An effective public relations campaign, and the impact of large
numbers of agency employees receiving questionnaires, might also be involved.
Whatever the reasons, the successful use of the mailed-distribution method in
the pilot survey gave us considerable confidence regarding the use of this
method in the main survey.
The design of the final survey instrument benefitted greatly from the
qualitative and quantitative analysis of the 1,110 returns. It is a reliable
and valid instrument for obtaining job content information from state
employees. The revised questionnaire represents a more efficient and
simplified document, both for respondents to fill out and for Center staff to
process. In sum, the pilot survey achieved its stated objectives.
- 95 -
CHAPTER V.
MAIN DATA COLLECTION SURVEY:
DESIGN AND MECHANICS
- 96 -
The main data collection survey occurred between November 30, 1984 and
March 6, 1985. A total of 36,812 questionnaires was distributed throughout
New York State to incumbents of 2,944 job titles, and 27,394 questionnaires
were returned, providing responses for 2,582 job titles. This chapter reports
on the design and mechanics of this large undertaking. It begins with an
overview of the sampling frame and the mechanics for selecting the incumbent
sample. It continues with a discussion of various features of the distribu-
tion process that were designed to enhance the response rate and intake pro-
cedures. It concludes with a discussion of the survey response rate.
SAMPLING FRAME
As indicated earlier, incumbent self-reports were used as the basic
source of information about content in New York State jobs. Since the unit of
analysis is the job title, we further decided to average incumbent responses
within each title to obtain a title profile. These decisions, along with our
choice of a policy-capturing job evaluation analysis, required a complicated
frame for sampling incumbents within job titles.
All titles in the population were sampled in one of two ways. If the
title was one for which we were providing pay equity estimates, we sampled all
employees in titles with 150 or fewer incumbents and 150 incumbents in titles
with more than 150 incumbents. For the titles for which we would not be
providing pay equity estimates, we sampled all employees in titles with 20 or
fewer incumbents and 20 incumbents in titles with more than 20 incumbents.
This two-tiered design proved to be the most effective approach to minimizing
the statistical errors of estimate of both the final compensation model and
the predicted salary grades for female-dominated and disproportionately
minority titles.
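The two-tiered rule reduces to capping the per-title sample at 150 or 20 incumbents, depending on whether the title receives a pay equity estimate. A minimal sketch (the function name is ours):

```python
# Sketch of the two-tiered sampling rule: sample every incumbent up to a
# cap of 150 (titles receiving pay equity estimates) or 20 (other titles).

def title_sample_size(n_incumbents, pay_equity_title):
    cap = 150 if pay_equity_title else 20
    return min(n_incumbents, cap)
```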
- 97 -
Based on this design, we provided the Civil Service Department with the
necessary information for them to select a sample of incumbents within each
job title using systematic sampling procedures with a random starting point.
Civil Service Department employee files as of August 22, 1984 were used.1
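Systematic sampling with a random starting point can be sketched as follows; this is a generic illustration, not the Civil Service Department's actual procedure:

```python
# Generic sketch of systematic sampling with a random start: pick a start
# uniformly within the first sampling interval, then take every (n/k)-th
# member of the ordered employee file.
import random

def systematic_sample(population, k):
    n = len(population)
    if k >= n:
        return list(population)
    step = n / k                       # sampling interval
    start = random.uniform(0, step)    # random starting point
    return [population[int(start + i * step)] for i in range(k)]
```

Because the interval is at least one when k < n, the selected indices are distinct and appear in file order.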
SURVEY DISTRIBUTION DESIGN
As the sampling frame was being finalized and the sample of incumbents
selected, we began designing a set of procedures that would facilitate a high
response rate. We had decided to use mailed distribution as a result of the
pilot survey.
As indicated earlier, a high response rate minimizes the likelihood of
"non-response bias," which occurs when respondents differ from non-respondents
in significant ways. Researchers do not agree precisely on an acceptable
response rate at which response bias is no longer an issue. The minimal
acceptable rate seems to be about 50 percent (Erdos, 1970: 144). According
to the Office of Management and Budget (1978), rates of 75 percent or above
are not questioned and rates below 50 percent are not accepted. The
Advertising Research Foundation and Magazine Publishers, Inc., both use 70
percent as an acceptable standard.
Most importantly, the literature indicates that a higher response rate is
less important when responses are grouped. Leslie (1972) reviewed 28 studies
involving grouped responses and found no differences between respondents and
1 The time lag between the calculation of intervals and the actual
selection of the sample meant that the number of incumbents in some job titles
increased, while others decreased, leading to some variation in the actual
numbers selected.
- 98
non-respondents. Of course, in this study incumbent responses are grouped by
title.
Based on the above findings, we aimed for a 70 percent response rate as
acceptable. Having established this standard, we designed the distribution in
such a way as to meet, if not exceed, this standard.
We incorporated many of the features of the mailed distribution of the
pilot survey. Most notably, we used agency liaisons to assist in distribu-
tion. In addition, we reviewed the extensive survey research literature about
increasing response rates, which offered several techniques that we built into
our distribution and intake design. These included the following techniques.
@ Preliminary notification: Advance notice by mail or tele-
phone that a survey is about to be administered has usually
been found to increase response rates (Waisanen, 1954;
Stafford, 1966; Wiseman, 1972; Jolson, 1977; Frey, 1983,
p. 92). Myers and Haug (1969) found that in order to increase
a response rate by 8.1 percent with prenotification, they had
to expend 22 percent in additional research costs. Clearly,
it was to our advantage to prenotify incumbents of the survey.
Yet, with our large sample, the cost of a preliminary letter
or phone call was prohibitive. We chose, instead, to publicize
the study in a general way prior to the distribution of the
questionnaire. We gave numerous speeches to state worker groups,
including the board of each region of CSEA. We worked with
GOER and CSEA public relations people to publicize the study
in the general, union, and state agency presses.
@ Stamped return envelope: Stamped return envelopes yield higher
response rates (Ferris, 1951). While for most of our respon-
dents interagency mails were sufficient, many incumbents are
located at outlying worksites with no access to interagency
mails. In order potentially to increase the response rates
among these workers, we affixed postage on return envelopes
whenever agency liaisons informed us that use of the U.S. mails
would be necessary.
@ Follow-up: Follow-up reminders are almost universally success-
ful in increasing response rates (Kanuk and Berenson, 1975). A
telephone reminder has been found more effective than a post-
card, and a follow-up phone interview is least effective (Sheth
and Roscoe, 1975). For a study the size of this one, telephone
follow-up was impractical--in phone costs, in staff time, and in
the ability to locate state employees. Alternatively, we decided
to use follow-up letters to remind incumbents to fill out and
return the questionnaires. Research has demonstrated that one
- 99 -
follow-up message increases response rates as much as 20 percent
(Hinrichs, 1975; Erdos, 1970). A second mailed follow-up may
increase response rates as much as 12 percent more (Heberlein
and Baumgartner, 1978). After a second mailing, the investment
in more mailings yields diminishing returns. Based on these
findings, we sent two follow-up letters to remind incumbents to
respond to the survey.
@ Clarification: One disadvantage of mailed surveys is that
respondents cannot ask for clarifications while filling out
the questionnaire. We solved this problem by providing a
toll-free number staffed by a researcher who could answer
respondents' questions during working hours.
As indicated below, our sensitivity to detail in using interagency mails, in
providing postage when necessary, in adding two follow-up letters, in
maintaining a toll-free phone number for queries,2 and in conducting a public
relations plan geared to informing as many New York State employees as
possible about the study resulted in a smooth distribution process and a high
response rate.
DISTRIBUTION AND INTAKE PROCEDURES
Printing and Distribution
The questionnaire was typeset and delivered to a printer who printed over
37,000 copies of the questionnaire, over 74,000 copies of a one-page follow-up
letter, and over 111,000 envelopes in which to mail the questionnaires and
letters. (The questionnaire and follow-up letter are contained in Appendix D.)
While the questionnaire was being typeset and printed, two sets of labels
were generated by the Department of Civil Service--one set of labels for
envelopes and a companion set for questionnaires. The envelope labels con-
2 Over the first five weeks of the distribution process, we typically
handled 30 phone inquiries a day through our toll-free phone number.
- 100 -
tained the employee's name, line item number, alpha job title, title code,
agency name, agency code, and location code. Agency liaisons later added more
specific location information. The labels for the front of the questionnaire
contained only title, title code, and salary grade. These were to be checked
by sampled incumbents for accuracy.
The questionnaires, follow-up letters, envelopes, and labels were
delivered to a private mailhouse which applied labels to the questionnaire
envelopes and to the follow-up letters. The mailhouse shipped boxes
containing 36,812 questionnaires and 73,624 follow-up letters to liaisons in
state agencies.
Prior to distribution, we worked with a set of agency liaisons who would
assist in the distribution process. Because Civil Service Department records
did not include the exact worksite address of most survey respondents, one
responsibility of the agency liaisons was to add that information to each
envelope.
The Office of Employee Relations supplied the Team with a list of agency
liaisons in June 1984. (See Appendix E.) Center staff contacted agency
representatives during late summer, 1984, to introduce them to the Center for
Women in Government and to the comparable worth study. Liaisons also were
asked a specific set of practical questions about handling the questionnaires
within their agencies. After this initial contact, there were several
communications with agencies and their liaisons. In October liaisons were
invited to an informational meeting and reception at the Rockefeller
Institute. Later, the Governor's Office of Employee Relations contacted
agency commissioners about the study, asking them for their support and to
allow the use of worktime for survey respondents to fill out the
- 101 -
questionnaires. Finally, just prior to the distribution of the questionnaire,
the Center contacted agency liaisons to explain the distribution process in
detail. Also, we sent each liaison a list of employees sampled in their
agency or facility.
Distribution
New York State Job Content Questionnaires for 36,812 employees were
delivered to agency liaisons on November 30 and December 4. (The original
sample of 37,282 was depleted by 470 due to the loss of eight "quasi-agencies"
immediately prior to distribution.) Upon receiving the questionnaires, the
liaisons added specific worksite addresses to all the envelopes and forwarded
questionnaires to employees.
Approximately two weeks after the questionnaires left the mailhouse,
Center staff telephoned each liaison to make sure that all questionnaires and
follow-up letters had arrived and that the questionnaires had been distribu-
ted. During that same phone call to liaisons, we reconfirmed dates to send
the follow-up letters. They sent one follow-up letter two weeks after the
distribution of the questionnaire and a second follow-up letter two weeks
after that.
In addition, we sent questionnaires to 219 individuals in response to
telephoned or written requests when incumbents reported that they had lost
their questionnaires or had received a follow-up letter but no questionnaire.
Before sending out a duplicate questionnaire, we verified that they had been
sampled for the survey.
Intake
As the questionnaires were returned to the Center, the obviously
unuseable questionnaires were separated out. These included those with missing job
title labels, those that had not been filled out, and those that were returned
indicating that the person was laid off, terminated, deceased, unknown, or had
resigned or retired.
Potentially useable questionnaires were then checked for a number of
specific additional problems. First, approximately 820 questionnaires on
which an incumbent had indicated a title and/or grade level change were
verified.³ Second, questionnaires were checked to determine if the
respondents worked part-time, had worked for less than one month in the titles
about which they were being asked, or had changed to a non-sampled title.
These were regarded as ineligible based on the population definition elaborated
in Chapter II.
Third, we read any written responses on the questionnaire in order. to
clarify particular answers to closed-ended questions. For example, a few
workers clarified their responses to the question on how many staff they.
supervised, by indicating that their answers included supervision of students
or clients. Since the question only encompassed staff supervision, references
to other types of supervision in their answers were ignored.
Finally, questionnaires were sent to a private data entry company which
entered the data onto computer tape and verified it.
The entire physical process of questionnaire distribution and intake took
place between November 30, 1984 and March 6, 1985. Figure 5.1 shows the
cumulative percent of questionnaires returned over a thirteen-week period.
Note that over the first eight weeks 96.8 percent of the responses were
³The title code number for the incumbent's new job was entered directly
onto the questionnaire. In most cases, this was a routine task. For others,
however, it was difficult to recognize the title names that were written in by
(Footnote Continued)
received. However, it was necessary to continue the receipt of questionnaires
for five more weeks in order to gain adequate returns in low responding titles
through our targeted follow-up efforts, described in the next section.
Special Problems
While the survey distribution and intake were, for the most part, smooth
and uneventful, a number of contingencies arose that required that additional
tasks be completed. These involved deletion from the sample of eight
"quasi-agencies," which required replacement sampling, and a special mailing for
sampled incumbents of the title Senior Stenographer Law. Additional adjust-
ments were made in the sample of job titles, including deletion of Division of
Military and Naval titles and deletions and title changes to reflect changes
in the classification and compensation system.
First, we learned from the Governor's Office of Employee Relations, after
the questionnaires had been boxed for mailing, that they did not want to
include incumbents in eight "quasi-agencies." We pulled these questionnaires
from the mailing and assessed the impact of the deletions on our sampled
titles. We found that 407 incumbents were lost to the study, and 21 titles
were completely lost. Three other titles were depleted so much that we
decided that replacements of individual incumbents were needed to minimize
potential sampling error. We developed the following criteria for deciding to
replace incumbents in depleted titles: for titles in which the initial sample
was 20, the depleted sample was enhanced if the depletion involved the loss of
more than three respondents; and for titles in which 150 incumbents were
sampled initially, the sample was enhanced if the depletion involved the loss
(Footnote Continued)
employees. For these cases, representatives of the Civil Service Department
helped us identify titles.
FIGURE 5.1
CUMULATIVE PERCENTAGE OF RESPONSES RECEIVED OVER 13 WEEKS
[Figure: line plot of the cumulative percent of questionnaires returned
against the number of weeks after all questionnaires were sent out.]
of 16 respondents. In addition to the three severely depleted titles, we
found that ten other titles had also been undersampled according to these
criteria. We supplemented all thirteen title samples to achieve these
minimums.
Thus, we carried out a total replacement sampling of 139 incumbents. The
sampling was done systematically from population lists, using a random
starting point.
Second, we mailed 136 additional questionnaires to all of the incumbents
in the title Senior Stenographer Law because that title inadvertently was left
out of the original sample.
Third, after the questionnaires had been distributed, we learned that
salary grades for military and naval titles are determined outside the Civil
Service compensation system. Therefore, 26 military and naval titles
originally sampled were deleted.
Finally, there were several changes in titles and salary grades made by
Civil Service during the course of the study. Our data bank was edited to
reflect these changes.
RESPONSE RATES
As indicated above, a major concern in designing the main data collection
was to obtain a high response rate, both overall and for those female-
dominated, disproportionately minority, and direct-line-of-promotion titles
for which estimates of undervaluation would be made. In order to calculate
these response rates, we needed to define precisely what is meant by that
term. In the simplest sense, a response rate in a survey is the number of
questionnaires returned divided by the number sent. This calculation becomes
complicated, however, when we begin to consider how to treat questionnaires
that do not clearly fit into either the "sent" or "returned" category. For
example, a decision needed to be made as to how to categorize questionnaires
that were not filled out because persons are no longer on the job, or have
changed job titles, or are on leave. Are these employees part of the sample
or should we consider the questionnaires as not having been sent?
Many such problems arose in our survey of the New York State workforce.
For example, the Department of Civil Service estimates a five percent monthly
turnover in employees, and the incumbent lists from which we drew our samples
are not updated until two to six weeks after job changes. As a result, the
sample of incumbents that we received was not completely up-to-date. Rules
for treating changes of employee status were developed as follows.
(1) Respondents: All completed questionnaires in which
incumbents worked full-time in a sampled job that
they had held for over one month were treated as
responses. As a general rule, incumbents who changed
titles, whether acting or permanent, were kept in the
sample and treated as incumbents of the new job titles
in which they worked. This procedure resulted in no
change in the overall response rate, but altered the
sample sizes and the number of respondents of individual
titles with additions or subtractions.
(2) Non-respondents: We treated 1,033 questionnaires as
if they had never been sent and had never been
returned. That is, 1,033 was subtracted from the
number sampled and from the number returned before
computing response rates. These included 710 questionnaires
that were returned to us by agency liaisons
unopened because the sampled incumbents were
deceased, retired, terminated, or the agency had
never heard of the person. These also included
questionnaires that had been filled out, but the
incumbents worked part-time (53), had worked less
than one month on the job (158), or had moved to a
non-sampled title (112).
(3) Unuseable Questionnaires: A third category of ques-
tionnaires included those that were treated as having
been sent and returned but were unuseable for several
reasons. These included 52 questionnaires returned
blank, 25 with missing job title information, and 27
with incomprehensible job title information. Also,
questionnaires returned because the incumbents were
on leave were considered as part of the sample and
unuseable. Expert opinion differed over how to treat
incumbents on leave. As a result, we treated them in
a manner least advantageous to the response rate
estimate. Sixty-five such questionnaires were sent
out a second time to persons in their homes, accompanied
by a letter asking the incumbents to respond
even though they were on leave. Questionnaires that
were returned were treated as respondents. Unreturned
questionnaires were treated as having been sent, but
as not having been returned.
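As a rough sketch of how these three rules combine, the adjusted response rate can be computed as follows. The status labels are our own illustration of the categories described above, not the study's actual data format:

```python
# Minimal sketch of the response-rate rule: questionnaires from ineligible
# incumbents are removed from both the "sent" and "returned" counts, while
# unuseable returns still count as sent but not as returned.

INELIGIBLE = {"deceased", "retired", "terminated", "unknown",
              "part-time", "under-one-month", "non-sampled-title"}
UNUSEABLE = {"blank", "missing-title", "incomprehensible-title",
             "on-leave-unreturned"}

def response_rate(statuses):
    """statuses: one status string per questionnaire sent."""
    eligible = [s for s in statuses if s not in INELIGIBLE]
    returned = sum(1 for s in eligible
                   if s not in UNUSEABLE and s != "not-returned")
    return returned / len(eligible)

# Example: 10 sent, 2 ineligible, 1 unuseable, 5 completed, 2 not returned.
sample = (["completed"] * 5 + ["not-returned"] * 2
          + ["deceased", "part-time", "blank"])
print(round(response_rate(sample), 3))  # 5 returned of 8 eligible -> 0.625
```

The key point the sketch captures is that ineligible cases shrink the denominator, while unuseable returns do not.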
Job Title Response Rates
In order to determine whether the response rate for each job title was
sufficiently high to ensure statistically reliable results, we developed a
computer program to calculate the response rate for each title. Our computer
program adjusted the number sent for each job title to take into account both
those who changed job titles and reductions in the "numbers sent" (those who
left service, work part-time, had less than one month of service, or moved to
a non-sampled title). The number received by job title was obtained by a
computerized count of individual returns. The response rate for each title
was computed by dividing the number received by the adjusted number sent, as
described above. Appendix F lists the response rate for each job title with
more than three incumbents and the response rate for all job titles with one
to three incumbents.
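A simplified version of that per-title calculation might look like the following (the field names and figures are illustrative, not the study's program):

```python
# Sketch of the per-title response-rate computation: the number sent is
# adjusted for incumbents who moved into the title and for removals
# (left service, part-time, under one month, moved to a non-sampled title)
# before dividing.
def title_response_rates(sent, received, moved_in, removed):
    """All arguments: dicts mapping title code -> count."""
    rates = {}
    for title, n_sent in sent.items():
        adjusted = n_sent + moved_in.get(title, 0) - removed.get(title, 0)
        rates[title] = received.get(title, 0) / adjusted if adjusted > 0 else None
    return rates

rates = title_response_rates(
    sent={"T100": 20, "T200": 150},
    received={"T100": 15, "T200": 90},
    moved_in={"T100": 1},
    removed={"T100": 3, "T200": 30},
)
print(rates)  # T100: 15/18, T200: 90/120 = 0.75
```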
The response rate program was run frequently during the questionnaire
intake period in order to identify titles with low response rates. We tried
to improve the response rates for titles for which the response rates were
below 40 percent eight weeks after the beginning of data collection.
Twenty-five agency liaisons were contacted about low responding titles.
Liaisons contacted incumbents in low response titles through a variety of
means, including meetings, telephone calls, memos, and computer messages to
urge them to complete the questionnaire. These follow-up efforts improved the
response rates in over half of the targeted titles.
However, even with this effort, it was necessary to delete 43 titles from
the study because of too few responses. (Deleted titles are listed in
Appendix G.) The majority of deleted titles have low incumbencies. In
addition, a number of them are in hospital or institutional services job
titles. We used the following criteria as the basis for deleting titles: 0
responses received, only 1 response received out of three or more sampled, or
only 2 responses received out of 5 or more sampled. Our criteria reflected
concern that the responses of one or two incumbents in larger titles could not
form an adequate basis for formulating a composite job description. These
deletions adjust the total number of estimated titles from 168 to 166.
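The deletion criteria are simple enough to state as a predicate; the following is a sketch of the stated rules, not the study's program:

```python
# The stated title-deletion criteria: 0 responses; 1 response out of
# 3 or more sampled; or 2 responses out of 5 or more sampled.
def should_delete(responses, sampled):
    return (responses == 0
            or (responses == 1 and sampled >= 3)
            or (responses == 2 and sampled >= 5))

print(should_delete(0, 1))   # True: no responses at all
print(should_delete(1, 2))   # False: 1 response of 2 sampled is kept
print(should_delete(2, 5))   # True: only 2 responses of 5 sampled
```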
SUMMARY OF CHANGES IN THE SAMPLE
Originally, 37,282 incumbents were sampled in 2,944 titles. This chapter
has documented events that altered the original sample of incumbents and
titles. Table 5.1 summarizes those events and their effect on the sample
size.
In addition to the above sources of change in the sample, several other
events affected the title sample size. Civil Service changes in the classifi-
cation system resulted in the deletion of a few titles and the combination of
others, as described earlier. Incumbents changing jobs and moving out of
small incumbency titles resulted in the elimination of some small titles.
Also, a few single incumbency titles that were vacant when the sample was
drawn were added because responding incumbents had moved into them. Before
computing response rates, incumbents were added to or subtracted from titles
according to these reasons.
The original sample size was 37,282, and 27,394 questionnaires were
returned, according to a hand count. The adjusted incumbent sample after
TABLE 5.1
SUMMARY OF ADJUSTMENTS IN JOB TITLE
AND INCUMBENT SAMPLE, MAIN DATA COLLECTION SURVEY

                                                 Change in      Change in
Event                                       Number Incumbents  Number Titles

Original number sampled                            37,282          2,944
Loss of eight "quasi-agencies"*                      -407            -21
Replacement of losses due to deletion
  of eight "quasi-agencies" and due
  to sampling error                                  +139             NA
Late sampling of Senior Steno Law*                   +136             +1
Deletion of military titles*                           NA            -26
Omissions from sample:
  Deceased, retired, etc.                            -710             NA
  Worked part-time                                    -53             NA
  Worked less than one month                         -158             NA
  Moved to a non-sampled title                       -112             NA
Adjusted target sample size                        36,117          2,898
Unuseable questionnaires:
  Returned blank                                      -52             NA
  Missing job title information                       -25             NA
  Incomprehensible job title information              -27             NA
Titles dropped because of low response rates          -60            -43
Total returned                                     35,953          2,855

*The sample size was adjusted based on these events before computing
response rates.
NA = not available.
additions and deletions noted above was 35,492. The adjusted number received
was 25,912 by computer count. The overall response rate, therefore, was 73
percent, a very high response rate. After incumbents in 43 low responding
titles were deleted from the returns, 25,852 cases in 2,582 titles remained
for use in the analysis.
PREPARATION OF THE DATA FOR ANALYSIS
The data were entered directly from the questionnaires to computer tape
by a private company, which verified the accuracy of the data entry by
entering it all twice.
Center staff used several procedures to verify the accuracy of the data
entry prior to analysis. We examined a printout of the data to check the
correctness of columns and questionnaire identification numbers. Identified
errors were corrected by referring directly to the questionnaires and by
re-entering the data appropriately. We also checked for impossible responses
to items by examining frequency distributions. Finally, several of our
computer programs, such as the one that produces response rates for each
title, also indicated keypunch errors by producing a list of titles which the
program did not recognize. Errors indicated by the above procedures were
corrected.
SUMMARY
The main data collection involved sampling, printing, distributing,
following-up, and preparing the data for analysis. The Civil Service
Department drew a systematic sample with a random start for each job title.
Private companies printed and mailed 36,812 questionnaires to agency liaisons,
who forwarded them to employees. Questionnaires were returned by respondents
directly to the Center for Women in Government, where they were logged in and
checked. The data were entered and verified by a private company, and the
Center checked the data further for accuracy.
A major concern was to obtain high response rates. Efforts to increase
the quantity of responses included extensive advance publicity of the study,
sending a stamped return envelope to persons who had less access to free
interagency mail, mailing two follow-up reminder letters, and mailing
replacement copies of the questionnaire when the originals were lost. We also
made available a toll-free telephone number to respondents and agency liaisons
in order to answer questions and solve any distribution problems. As a result
of these efforts, the overall response rate was 73 percent. The response rate
for individual titles was adequate in all but 43 titles, which were deleted
from the data. After various corrections and data cleaning, 25,852 individual
cases in 2,582 titles remained for use in the analysis.
CHAPTER VI
PRELIMINARY DATA ANALYSIS
This chapter focuses on the preliminary analysis of the questionnaire
items that formed the independent variables predicting salary grade in this
study. It includes sections on adjusting the population, item recoding, data
aggregation by title, defining percent minority, creating indices, and
conducting the factor analysis. In this chapter, we often refer to
questionnaire items by number. It may benefit the reader to refer to Appendix
D for exact wording of questions.
ADJUSTING THE POPULATION
An accurate estimate of the final population in each title was needed in
order to analyze subsamples drawn on the basis of population size for titles
as of August 1984. As described previously, the sample size for each title
was edited to reflect title additions and depletions that we discovered during
questionnaire intake. We used this information about our sample to adjust the
title population totals in order to derive a more accurate, updated population
figure for each title. Since our sample was large and simulated random
selection through systematic sampling techniques, we were able to use the
changes observed in the sample data to estimate population changes in each
title. We did this by calculating the proportion increase or decrease
observed in each title sample and then multiplying the Civil Service
population data by this proportion to obtain a population adjustment for each
title.
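That proportional adjustment can be sketched as follows (the figures in the example are invented for illustration):

```python
# Sketch of the population adjustment: the proportional change observed in
# each title's sample is applied to the August 1984 Civil Service
# population figure for that title.
def adjust_population(pop_aug_1984, sample_original, sample_final):
    proportion = sample_final / sample_original
    return pop_aug_1984 * proportion

# e.g. a title whose sample shrank from 20 to 18 (a 10 percent depletion),
# with an August 1984 population of 200:
print(adjust_population(200, 20, 18))  # 180.0
```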
ITEM RECODING
Several items of the questionnaire were recoded to facilitate data
analysis. First, we recoded salary grade so that all responses were expressed
in terms of the same scale. The state uses two comparable salary grade
scales. The most common scale ranges from one to thirty-eight. Job titles on
the other scale were adjusted to their equivalent salary grade in the one to
thirty-eight grade system.
Recoding was also used to solve a special problem with question 83 about
intellectually processing information. This question had a large number of
non-responses. Therefore, we defined missing data on question 83 to mean
"none of the above," an option that was not overtly stated on the
questionnaire but was a logical interpretation of a non-response.
Finally, for items with response choices that involved ranges of values
(e.g., two to five years), it was necessary to recode the ranges to their
midpoints to obtain a single number representing the category. For example,
because we cannot use a range like two to five in our statistical analysis, we
use the midpoint, 3.5, to represent that range of response.
Recoding to midpoints becomes problematic, however, for those question-
naire item choices where the range is open-ended (e.g., "more than 15 years").
This range has no upper limit, so it is impossible to calculate the midpoint
between the lower and upper limit of the range directly.¹ To estimate the
midpoints of open-ended categories we used expert opinion from staff at the
Department of Civil Service, the Bureau of Space Planning, the Office of
General Services, and the Department of Tax and Finance. These experts
advised us concerning the realistic and reasonable upper limits of these
categories. The midpoint was then determined to be one-third of the distance
from the lower to the upper limit of the categories. The midpoint was
¹The questions with open-ended ranges include 4, 5, 7, 8, 11, 14, 38, 40,
41, 82, 91 and 110.
calculated as one-third instead of one-half the distance between the lower and
upper limits because upper limits usually represent somewhat unique cases.²
AGGREGATING DATA BY TITLE
The recoded incumbent level data were aggregated for each job title. For
each item the mean response for each title was calculated. This became the
preliminary title level score for each item.
Further examination of the data, however, revealed that responses to many
items varied dichotomously, i.e., in a yes/no manner, and not according to the
four values provided in the questionnaire (never, once in a while, often, most
of the time). Thus, prior to further analyses, we redefined questions 16
to 23, 25, 31, 61 to 66, 70, 90, 96, 98, 102, 105, and 106 as dichotomous
responses. For all but four questions, we did this by entering the percent
answering "never" into the analysis as the title level score for each such
item.
DEFINING PERCENT MINORITY
Originally, it was assumed that "minority" would mean non-white for the
purposes of determining any effect of proportion minority in a title on the
state's pay policy. However, we found that the mean salary grade varied on
the basis of race/ethnicity: for whites it was 14.8, for Hispanics 12.3, for
²This method of midpoint approximation was used for all relevant questions
except 41 (time to learn a job competently) and 91 (number of patients, etc.,
served). For question 41, Civil Service experts advised us that three years
should be used as the highest midpoint value. For question 91, Civil Service
experts indicated that 50 patients served should be used as the highest
midpoint value.
Blacks 10.89, and for other race/ethnic groups 17.9. What this suggests is
that, in New York State employment, Hispanics and Blacks hold different jobs
than those held by "others," a group that includes many Asians in professional
and technical jobs. Because our focus is on disadvantaged groups, percent
minority was coded as percent Black plus percent Hispanic.
CREATING INDICES
For certain job content areas such as writing, we combined job task ques-
tions into indices to create more powerful predictors of salary grade. For
example, question 53 (copying written facts) touches on a minor part of some
New York State jobs. However, combining the writing items, questions 53 to
58, into a single index describes a very large number of state jobs in a more
general way and has the potential of predicting salary grade very powerfully
because it describes an important aspect of many jobs: the complexity of
writing tasks entailed in the job. Indices of this kind measured complexity
of writing (questions 53 to 58), reading (questions 59 to 61), and one's
relationship to information (questions 74 to 79).
For each index, salary grade was regressed on potential questions, using
the data that had been aggregated by title, in order to determine which
questions to include. Regression weights in the equations produced by this
procedure indicated the net effect of each question on salary grade. Items
with large coefficients (positive or negative) were retained and items with
small coefficients were omitted.
To calculate the index scores for each title, standard scores (Z-scores)
for the remaining items were added.³,⁴ For items with negative weights, the
standard score was subtracted from the index.
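Under those rules, index construction might be sketched as follows. The two items and their signs are invented for illustration (the study's actual item sets are listed in the text), and the population standard deviation is used here for simplicity:

```python
# Sketch of the index construction: regression on title-level data selects
# the items; the index is then the sum of standardized (Z) scores, with
# negatively weighted items subtracted.
from statistics import mean, pstdev

def zscores(values):
    m, sd = mean(values), pstdev(values)
    return [(v - m) / sd for v in values]

def build_index(item_columns, signs):
    """item_columns: list of per-title value lists; signs: +1 or -1 per item."""
    z = [zscores(col) for col in item_columns]
    n_titles = len(item_columns[0])
    return [sum(sign * z[i][t] for i, sign in enumerate(signs))
            for t in range(n_titles)]

# Two hypothetical items over four titles, the second negatively weighted:
index = build_index([[1, 2, 3, 4], [4, 3, 2, 1]], signs=[+1, -1])
print([round(x, 2) for x in index])
```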
THE CREATION OF FACTOR-BASED SCALES
In this section, we discuss the factor analysis procedures we used to
reduce the job content items in the questionnaire to a relatively small number
of dimensions. We also describe the creation of multiple-item scales
reflecting these dimensions of job content.⁵
Factor analysis is a data reduction procedure that groups items together
which measure the same general components. For instance, items such as
working with toxic material, working in extremely hot or cold conditions, and
working in noisy areas might be grouped statistically into a factor that would
measure working conditions. Thus, factor analysis reduces a potentially large
set of items to a smaller set of explanatory dimensions or factors. These
factors can then be used in later analyses as new composite variables in place
of the original separate items.
³Standardizing scores puts them into a common metric so that they can be
added or compared. A Z-score has a mean of 0 and a standard deviation of 1.
⁴Of course, the maximum correlation between the resulting scale and
salary grade would be obtained by using the regression estimates as scale
scores (Treiman and Terrell, 1975), but such a procedure entails the danger of
overfitting the data. We thus used the regression solely to identify the
variables to be included in each scale, and created scales by summing
standardized scores. The logic is the same as that underlying our decision to
use factor-based scales rather than factor scales; see the discussion in the
next section.
⁵"Scale" here means a composite set of items about a single dimension
such as education or stress.
There were two reasons for reducing the large number of individual items
to a few underlying dimensions: interpretability and reliability. Since the
objective of creating measures of job content was to use them in a regression
analysis predicting salary grade from job content, we needed a set of measures
that would be readily interpretable in the regression context. This led us to
focus on general dimensions of job content rather than on idiosyncratic
characteristics of specific jobs. Moreover, regression results involving
large numbers of questions, particularly questions that are relatively highly
correlated with one another, are difficult to interpret. This provided
another reason for reducing our questions to a small number of relatively
unrelated measures.
The second reason for combining questions into multiple-item scales was
to improve the reliability of our measures of job content. It is well known
that in general the reliability of scales increases as the number of items
increases (Nunnally, 1978). Each additional question is likely to tap a
slightly different aspect of the scale.
Factor Analysis
The first step in creating our factor-based scales was to factor analyze
80 job content items in the questionnaire (item numbers 16 to 52, 62 to 73,
and 80 to 111) and the three indices, WRITE, READ, and INFO. See Table 6.1
for a list of variables entered into the factor analysis.
The utility of factor analysis as a basis for scale construction is to
discover whether a set of questions reflects a single underlying dimension.
If it does, the questions will all have high loadings on one factor and low
loadings on all other factors.⁶ It can happen, however, that an item thought
to reflect a particular dimension turns out to have a low loading on the
factor that includes all other items reflecting that dimension, but has a high
loading on another factor. This indicates that respondents interpreted the
question differently from the way it was intended and, therefore, that it
should not be included in the scale.
For example, suppose we hypothesized that six items in our questionnaire
tapped a dimension, "contact with clients." These six questions with their
factor loadings are (in shortened form):
MI92  Seriousness of client problem                  .85
MI24  Dealing with emotionally troubled clients      [loading illegible]
MI91  Number of patients, inmates served             .71
MI28  Handling sick or injured people                .62
PI63  Advising or supervising clients, inmates       .51
PI65  Interviewing clients                           .51
Inspecting the factor loadings and also inspecting the loadings of each of
these variables on other factors, we might conclude that a purer scale,
tapping "Contact with difficult clients," could be formed by excluding the
last two items and constructing a scale from the first four items only. This
revised scale is, in fact, one of those we decided upon on the basis of our
factor analysis.
There were three bases for such decisions. First, do all the items seem
to reflect the same underlying dimension? Second, do all the items have
factor loadings of similar size? If not, it will sometimes improve scale
reliability to drop items with relatively low factor loadings.

⁶Factor loadings are the correlations between the factors and the observed
variables. They range in value from -1.00 to 1.00. Generally, one looks for
items that load high (greater than .4 or less than -.4) when determining which
items constitute a factor.

Third, do any of
the items have high loadings on any other factor? If so, they may be tapping
another dimension in addition to the one under consideration. In the
preceding example, the last two items appear to be conceptually somewhat
different from the first four, tapping not only the activities of the helping
professions but also those of tax officers, motor vehicle department clerks,
and so on. Additional evidence that this is so is that the last two items
have relatively high loadings (.40 and .42) on another factor, "communications
with the public." We therefore dropped the last two items and used the first
four to form a "contact with difficult clients" scale.
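The screening logic just described (retain items loading at least .4 in absolute value on the target factor and not loading that highly on any other factor) can be sketched as follows. The loading values below are illustrative; only the .40 and .42 cross-loadings are given in the text, and MI24's target loading is assumed:

```python
# Item screening by factor loadings: keep items that load high (|loading|
# >= threshold) on the target factor and do not load high elsewhere.
def select_items(loadings, target, threshold=0.4):
    """loadings: dict item -> list of loadings, one per factor."""
    keep = []
    for item, row in loadings.items():
        on_target = abs(row[target]) >= threshold
        elsewhere = any(abs(v) >= threshold
                        for f, v in enumerate(row) if f != target)
        if on_target and not elsewhere:
            keep.append(item)
    return keep

# Illustrative loadings on two factors for the six "client contact" items;
# the second factor stands in for "communications with the public":
loadings = {
    "MI92": [0.85, 0.10], "MI24": [0.77, 0.05], "MI91": [0.71, 0.12],
    "MI28": [0.62, 0.15], "PI63": [0.51, 0.40], "PI65": [0.51, 0.42],
}
print(select_items(loadings, target=0))  # ['MI92', 'MI24', 'MI91', 'MI28']
```

The last two items are dropped exactly as in the example above, because their cross-loadings reach the .4 threshold on the second factor.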
As is evident from this example, the decision about which variables to
include and which to exclude is not made entirely on rigid and fixed criteria.
Rather, in making decisions, statistical outcomes provided information that
was used to arrive at conceptually and substantively sensible solutions.
Even the choice of statistical outcome itself is a judgmental one. The
statistical algorithm for factor analysis yields any specified number of fac-
tors, from one to one less than the number of variables included in the factor
analysis. After exploring five different factor analysis solutions, we
settled on a 14 factor solution because it yielded the most readily
interpretable set of job content dimensions.⁷ However, we made some
modifications. We discarded the 14th factor because no items loaded high on
it, and we created a new composite variable called "mental demands" by
combining the INFO index with question 83 (mentally processing information).
⁷The factor analysis was carried out using the SPSSX FACTOR procedure to
do principal factoring with iteration (PAF), with varimax rotation. In most
cases, the same factor structure is found using any method of factor analysis
(Nunnally, 1978). A varimax rotation was used to arrive at an orthogonal
terminal factor solution rather than an oblique rotation.
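The varimax rotation named in this footnote is a standard algorithm; a compact NumPy sketch of the Kaiser varimax criterion (our illustration, not the SPSS-X implementation) is:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-6):
    """Rotate a loadings matrix (items x factors) to the varimax criterion."""
    p, k = loadings.shape
    rotation = np.eye(k)
    total = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        # Gradient of the (normalized) varimax criterion
        grad = loadings.T @ (
            rotated ** 3
            - (gamma / p) * rotated @ np.diag((rotated ** 2).sum(axis=0)))
        u, s, vt = np.linalg.svd(grad)
        rotation = u @ vt          # nearest orthogonal rotation
        total, previous = s.sum(), total
        if previous != 0 and total / previous < 1 + tol:
            break
    return loadings @ rotation

# An orthogonal rotation preserves each item's communality
# (its row sum of squared loadings):
L = np.array([[0.8, 0.3], [0.7, 0.4], [0.2, 0.9], [0.1, 0.8]])
rotated = varimax(L)
print(np.allclose((rotated ** 2).sum(axis=1), (L ** 2).sum(axis=1)))  # True
```

Because the rotation matrix is orthogonal, communalities are unchanged; only the distribution of variance across factors is simplified.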
TABLE 6.1
VARIABLES ENTERED INTO THE FACTOR ANALYSIS*
PI16 make quick decisions
PI17 feel rushed
PI18 work piles up
PI19 deadline pressure
PI20 need to learn new skills
PI21 feel conflicting demands
PI22 tell people what they don't want to hear
PI23 dealing with upset clients or public
MI24 dealing with emotionally troubled clients
PI25 hot or cold
MI26 fumes
MI27 cleaning up other people's dirt
MI28 handling sick or injured people
MI30 constant noise
PI31 loud noise
MI32 strenuous physical activity
MI33 same task over and over
MI34 work overtime weekdays
MI35 work overtime weekends
MI36 travel overnight
MI37 risk of injury
MI38 years of school required
MI39 degree required
MI40 experience
MI41 gain competence
PI42 work with machines
MI43 math
MI44 body coordination
MI45 editing data
MI46 entering data
MI47 verifying data
MI48 word processing
MI49 using package programs
MI50 writing original computer programs
MI51 systems programming
MI52 systems designing
PI62 answering questions from public
PI63 advising or supervising clients, inmates
PI64 teaching
PI65 interviewing clients
PI66 settling disputes on job
MI67 keeping other workers informed about programs, policies

* P denotes that the item was entered as a percentage into the factor
analysis. M denotes that the item was entered as a mean.
TABLE 6.1
(continued)
MI68 negotiating for services
MI69 explaining
PI70 answering complaints from public
MI71 giving speeches
MI72 planning meetings/workshops
MI73 leading meetings/workshops
MI80 setting operating practices
MI81 breadth of planning responsibility
MI82 plan work in advance
MI83 mental information processing
MI84 spend money within budget
MI85 propose budget for unit
MI86 propose budget for agency
MI87 hire and fire
MI88 estimating training needs
MI89 substitute for boss in supervising
PI90 propose policy
MI91 number of patients, inmates served
MI92 seriousness of client problem
MI93 free to decide what task to do first
MI94 new problems
MI95 variety
PI96 prevent waste of materials
MI97 prevent wasted time
PI98 finding replacement for no-shows
MI99 free to decide how to do work
MI100 free to decide how quickly to work
MI101 do same thing
PI102 told what to do
MI103 mistake hurt unit name
MI104 mistake hurt agency name
PI105 mistake harm person
PI106 mistake damage equipment
MI107 deal with non-agency professionals
MI108 deal with government officials
MI109 deal with state managers
MI110 number supervised
MI111 supervisory responsibility
WRITE2 -MI53 (copying) - MI54 (basic writing) + MI55 (original writing)
       + MI56 (editing) + MI58 (scholarly reports)
READ2  -MI59 (reading letters) + MI61 (reading complicated reports)
INFO2  -MI74 (filing) - MI76 (getting background information)
       + MI78 (using abstract knowledge) + MI79 (deciding what
       information is needed)
These questions originally had been on the education factor, but they loaded
only moderately high. In addition, these questions seem to be conceptually
different from the education items and yet they seemed similar to one another.
The correlation between INFO and item 83 was moderately high (.50).
These 14 factors together explain only 60 percent of the variance in the
individual items, indicating that a number of individual items do not load
highly on any factor. As we will explain later, we included many of these
individual items in the regression analysis in addition to the factor-based
scales. Table 6.2 gives the content of each of the factors, together with the
loading of each included item on the factor.
Constructing Factor-based Scales
To construct scales representing the job content dimensions identified by
the factor analysis, we proceeded as follows to obtain factor-based scores for
each of the factors listed in Table 6.2. We standardized all questions
included in each factor by creating Z-scores, i.e., by subtracting the mean of
each item and dividing by the standard deviation, and then added the resulting
scores or, in the case of items with negative loadings, subtracted them. The
purpose of standardizing the items was to give each of the included items
equal weight in the factor-based scale.⁸
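As an illustrative sketch in a modern language (not part of the original study; the item scores and loadings below are hypothetical), the standardize-and-sum procedure amounts to the following:

```python
import numpy as np

def factor_based_scale(items, loadings):
    """Z-score each item (subtract the mean, divide by the standard
    deviation), then add items with positive loadings and subtract
    items with negative loadings."""
    items = np.asarray(items, dtype=float)          # (n_titles, n_items)
    z = (items - items.mean(axis=0)) / items.std(axis=0)
    return (z * np.sign(loadings)).sum(axis=1)      # signed sum of z-scores

# Hypothetical scores for four job titles on three items of one factor.
scores = [[1, 10, 3],
          [2, 20, 2],
          [3, 30, 1],
          [4, 40, 0]]
loadings = [0.85, 0.72, -0.64]   # third item enters with a minus sign
scale = factor_based_scale(scores, loadings)
```

Standardizing first gives every item equal weight in the resulting scale, regardless of its original metric.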
⁸Note that the factor-based scales produced by this procedure differ from
factor scores, which are sometimes used. With factor scores, all items
entering the factor analysis (in this case 83 items and indices) are included
in each scale. However, the items are multiplied by factor weights, derived
from the factor analysis procedure, prior to adding them to form factor scores.
The latter procedure is not as conceptually clear as the procedure we used, nor
as robust across repeated analyses. The difficulty is that the use of factor
scores rather than factor-based scales capitalizes on chance variability in
the size of the intercorrelations among items, and hence yields results that
are not easily replicable if information on the same items were drawn from
(Footnote Continued)
TABLE 6.2
ITEMS INCLUDED IN EACH SCALE, TOGETHER WITH
FACTOR LOADINGS FROM THE 14 FACTOR SOLUTION
Item                                                           Loading

Factor 1: Management/supervision (11 items)

MI111  Supervisory responsibility                                 .89
MI97   Prevent wasted time                                        .87
MI87   Hire and fire                                              .83
MI81   Breadth of planning responsibility                         .82
MI88   Estimating training needs                                  .78
MI89   Substitute for boss in supervising                         .76
MI66   Settling disputes on job (% never)                        -.74
MI98   Finding replacement for no-shows (% never)                -.72
MI80   Setting operating practices                                .70
MI67   Keeping other workers informed about programs, policies    .70
MI110  Number supervised                                          .61

Factor 2: Unfavorable working conditions (6 items)

MI32   Strenuous physical activity                               -.81
MI26   Fumes                                                     -.73
MI37   Risk of injury                                            -.71
PI25   Hot or cold (% never)                                      .67
PI31   Loud noise (% never)                                       .63
MI27   Cleaning up other people's dirt or garbage (% never)      -.62

Factor 3: Contact with difficult clients (4 items)

MI92   Seriousness of client problems                             .85
MI91   Number of patients, inmates served                         [illegible]
MI28   Handling sick or injured                                   .62
MI24   Dealing with emotionally troubled clients                  [illegible]

Factor 4: Communications with public (4 items)

PI70   Answering complaints from public (% not part of job)       [illegible]
PI62   Answering questions from public (% not part of job)       -.67
PI23   Dealing with upset clients or public (% never)            -.62
MI107  Dealing with non-agency professionals                      .55

Factor 5: Education required (2 items)

MI39   Degree required                                            .83
MI38   Years of schooling                                         .78
(Footnote Continued)
another sample, say New York State a year from now. For these reasons, we
prefer to utilize factor-based scales, which are widely used in the social
science literature (Kim and Mueller, 1978).
TABLE 6.2
(continued)
Factor 6: Data entry (3 items)
MI46 Entering data
MI45 Editing data
MI47 Verifying data
Factor 7: Group facilitation (3 items)
MI72 Planning meetings/workshops
MI73 Leading meetings/workshops
MI71 Giving speeches
Factor 8: Computer programming (4 items)
MI50 Writing original programs
MI51 Systems programming
MI49 Using package programs
MI52 Systems designing
Factor 9: Fiscal responsibility (3 items)
MI86 Propose budget for agency or facility
MI84 Spending money within budget
MI85 Propose budget for unit
Factor 10: Stress (6 items)
PI17 Feel rushed (% never)
PI21 Feel conflicting demands (% never)
PI22 Tell people what they don't want to hear (% never)
PI19 Feel pressure to meet deadlines (% never)
PI20 Feel need to learn new skills just to keep up (% never)
PI16 Have to make quick decisions (% never)
Factor 11: Autonomy (3 items)
MI99 Free to decide how to do their work every day
MI100 Free to decide how quickly to do their work
MI93 Free to decide what task to do first
Factor 12: Consequences of error (2 items)
MI104 Mistake hurt good name of agency
MI103 Mistake hurt good name of unit
Factor 13: Time effort (2 items)
MI34 Working overtime without compensation
MI35 Working weekends without compensation
TABLE 6.2
(continued)
Factor 14: Mental demands (1 index and 1 item)*
INFO2 Complexity of relationship to information
MI83 Mental information processing
* This composite index was created from factor 14 after the factor analysis,
so there are no loadings.
The Reliability of Each Factor
As noted above, in general the reliability of factors increases as the
number of items increases. The formula we use for computing reliabilities is
the Spearman-Brown formula (Nunnally, 1978):

    r_kk = (k * r_ij) / (1 + (k - 1) * r_ij)

where k is the number of items in a scale, r_kk is the correlation between
two versions of a k-item scale reflecting the same domain of underlying con-
tent, that is, the reliability of the scale, and r_ij is the average correlation
among the items making up the scale. Table 6.3 shows the reliabilities for
the 13 factor-based scales we have created and the Mental Demands scale. On a
scale of 0 to 1, they are in general quite high, and give us considerable
confidence that we are measuring aspects of job content in a reliable way.
What this means, from a practical standpoint, is that we would be likely to
arrive at essentially the same conclusions if we or others repeated the
analysis, measuring the same aspects of job content with multiple-item scales,
even if the specific questions going into each of the scales are somewhat
different.
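As a sketch of the Spearman-Brown computation (the scale size and average inter-item correlation below are hypothetical, not values from the study):

```python
def spearman_brown(k, r_avg):
    """Reliability of a k-item scale given the average correlation
    r_avg among its items (Spearman-Brown formula; Nunnally, 1978)."""
    return (k * r_avg) / (1 + (k - 1) * r_avg)

# A hypothetical 6-item scale whose items intercorrelate .45 on average:
rel = spearman_brown(6, 0.45)   # about .83
```

The formula makes the point in the text explicit: holding the average inter-item correlation fixed, a scale with more items is more reliable.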
SUMMARY
Several procedures were used to prepare the data for regression analysis.
The population of each title was adjusted to reflect changes in title
populations between the time of sample selection and questionnaire intake.
This was done by changing the title populations by the same proportional
change observed in the title samples.
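As we read this adjustment, it can be sketched as follows (the counts are invented for illustration):

```python
def adjusted_population(pop_at_selection, sample_at_selection, sample_at_intake):
    """Scale a title's population by the same proportional change
    observed in its sample between selection and questionnaire intake."""
    return pop_at_selection * (sample_at_intake / sample_at_selection)

# A title with 200 incumbents at selection whose sample went from 20 to 18:
pop = adjusted_population(200, 20, 18)
```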
Several items were recoded. Salary grade was changed to conform to a
single, consistent scale for all titles. Item number 83, about mentally pro-
cessing information, was recoded so that missing data were interpreted as "none
of the above." Response ranges were recoded to the midpoints of the ranges.
The incumbent-level data were aggregated by title, and title scores were
calculated either as means of each item for each title or as the percent of
title incumbents who responded "never" when item responses reflected a yes/no
dichotomy.
Percent minority was defined as percent Black plus percent Hispanic
rather than percent non-white because it was found that the mean salary grade
for other non-whites, especially Asians and Pacific Islanders, was higher than
that for whites.
Indices were created for the complexity of writing, reading, and one's
relationship to information. This was done by adding the standard scores of
items that contributed significantly to the prediction of salary grade in
separate regressions of salary grade on each set of potential index items.
A factor analysis of 80 items and three indices yielded a 14 factor solu-
tion. Factor-based scores were calculated by summing standardized scores for
all questions that loaded highly on a given factor.
TABLE 6.3

RELIABILITY COEFFICIENTS FOR FACTOR-BASED SCALES

Factor                                                     Reliability

Factor 1:  Management/supervision (11 items)                   .95
Factor 2:  Unfavorable working conditions (6 items)            .88
Factor 3:  Contact with difficult clients (4 items)            .85
Factor 4:  Communications with public (4 items)                .82
Factor 5:  Education required (2 items)                        .94
Factor 6:  Data entry (3 items)                                .91
Factor 7:  Group facilitation (3 items)                        .93
Factor 8:  Computer programming (4 items)                      .86
Factor 9:  Fiscal responsibility (3 items)                     .90
Factor 10: Stress (6 items)                                    .70
Factor 11: Autonomy (3 items)                                  .84
Factor 12: Consequences of error (2 items)                     [illegible]
Factor 13: Time effort (3 items)                               .91
Factor 14: Mental demands (2 items)                            .67
In Chapter VII we will discuss the use of the factor-based scores in
regression analyses that produced the pay policy equations for the New York
State workforce.
CHAPTER VII

MODELS FOR ASSESSING WAGE DISCRIMINATION
In the previous chapter, we reported on the analysis of factor-based
scales from the Job Content Questionnaire that would form the basis of the
regression analysis. Regression analysis is the statistical procedure used in
policy-capturing job evaluation to select the set of job content factors and
the weights associated with these factors which are most related to the
current implicit pay policy of an employer, in our case, New York State
government employment. The resulting equation is, essentially, a compensation
model describing the job content characteristics of the jurisdiction's jobs
and the relationship of these factors to pay. Because it represents the
employer's implicit pay policy, the compensation model becomes the standard
against which jobs can be assessed for pay equity.
Because the pay policy line obtained through regression analysis is the
basis for assigning appropriate grade levels to particular titles, comparable
worth job evaluation requires that the models be free of sex and race/ethnic
bias. This means that the sex or race/ethnic composition of a job title
cannot be an implicit compensable factor, which could lower the salary grade
of titles.
The Center was contractually obligated to provide GOER and CSEA with
three pay policy lines:
• the pay policy line for all job titles, unadjusted;

• the pay policy line for all job titles, adjusted to statistically
  control for "proportion female" and "proportion minority" as
  implicit compensable factors; and

• the pay policy line for job titles disproportionately filled by
  white males.
This chapter describes these models and briefly touches on the advantages and
disadvantages of each as the basis for making pay equity adjustments.
Before turning to a discussion of these models, however, we provide a
brief introduction to the interpretation of regression statistics in the
context of pay equity analysis. While readers familiar with regression
analysis may want to skip this section, it may prove useful to an
understanding of the logic underlying the three pay policy lines for which we
obtain regression results.
REGRESSION MODELS FOR PAY EQUITY ANALYSIS
To introduce the reader to regression analysis, we work through a hypo-
thetical example. Let us assume that pay differences among jobs depend on
only one factor: how much skill a job requires. In this simple example, each
of these factors is measured as follows:
The pay rate (Y) is measured by the salary grade
for the job title.

Skill (S) is measured by a multiple-item
scale of the kind described previously. Let us
suppose that scores range from zero to one. A job
requiring the least skill gets a score of zero while
a job requiring the most skill gets a score of one.
A job with moderate skill might get a score of 0.4,
and so on.
Consider a very simple model, one in which we wanted to know whether, to
what extent, and in what way the salary grade assigned to a job depends on the
skill required to do it, ignoring any other determinants of pay differences.
We can, in fact, estimate the effect on pay of skill differences between jobs
¹Note that there is, in fact, no "skill" factor per se in the set of New
York State factors because this concept is measured in our data set by several
factors. So this specific variable should be regarded as a hypothetical
example chosen for ease of exposition.
by statistically predicting salary grade from our skill variable using
regression analysis.
To see this graphically, imagine that we had a sample of only five job
titles. We could then plot each job title on a two-dimensional plot, known as
a scatter plot, where the horizontal axis represents our skill variable and
the vertical axis represents salary grade. Figure 7.1 illustrates such
hypothetical plots where each observed point represents a job title. From
even a casual glance at the top plot it is evident that as skill requirements
increase, salary grade tends to increase. To statistically describe this
relationship, we could provide an equation that would estimate how large a
difference in salary grade we would expect, on average, for two job titles
that differed by one point on the skill requirements scale. We do this by
fitting these points to a line as shown in the bottom of Figure 7.1.
The intercept on the vertical axis is called a. It indicates the value
of Y when S = 0. In other words, it tells us what the salary grade would be
for a job title with no skill involved in the job content. The slope is
called b. It gives the number of units of change in Y for a one-unit change
in S. In other words, it tells you how many salary grades a unit change in
skill level is worth. Y' tells us what the predicted grade level would be if
title scores fell exactly on the regression line that best fits the job titles
in our sample. In the bottom of Figure 7.1 the actual value of S, the actual
value of Y, and the predicted value of Y, or Y', are marked for one of the
observed job titles, and these are labeled S1, Y1, and Y'1, respectively.
In order to find the predicted grade level, we must first find the
so-called "line of best fit" relating skill and salary grade. "Best fit"
refers to a statistical criterion, indicating a line that minimizes the sum of
the squared differences between the actual salary grade of each job title and
FIGURE 7.1

HYPOTHETICAL JOB TITLE VALUES FOR SKILL AND SALARY GRADE,
AND LINE OF BEST FIT FOR THESE POINTS

[Figure 7.1a: scatter plot of salary grade by skill for the hypothetical
job titles. Figure 7.1b: the same points with the fitted line of best fit.]
the salary grade predicted from the skill required to do the job. Using this
procedure, the line is placed as close to all points as a straight line can
be.
Regression is a technique for finding the intercept (a) and the slope (b)
of the linear equation that will result in the smallest sum of squared
differences between the actual and predicted values, Σ(Y - Y')². Another way
of saying this is that the resulting equation,

    Y' = a + bS,

gives the best prediction of the value of salary grade (Y), given that one
knows only the amount of skill required (S).
The smaller the scatter of observed points around the regression line
relating skill and salary grade, the better the prediction. The square of the
correlation coefficient, called r², is a measure of how good the prediction of
salary grade is. Formally, r² is defined as

    r² = 1 - Σ(Y - Y')² / Σ(Y - Ȳ)²,

that is, as one minus the ratio of the variance of observed points around the
regression line to the total variance in the observed points. This is why r²
is a measure of the proportion of the variance in Y explained by another
variable, in this case S. From this definition, it is evident that if
prediction is perfect, r² = 1. If there is no association between the two
variables, skill and salary grade, r² = 0.
Now suppose the hypothetical relationship between skill requirements and
salary grade for 2500 job titles in New York State is captured by the
equation

    Y' = 4 + 29(S),

with an associated r² of .6. These results would tell us, first, that 60
percent of the variation in salary grade can be explained by variation in the
skill requirements of jobs. They also tell us that the jobs with the lowest
skill level (a score of 0) would be predicted to be in salary grade 4, and
jobs with the highest skill level (a score of 1.0) would have a predicted
salary 29 salary grades higher than 4, or salary grade 33.
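The fit and r² computations just described can be sketched as follows. The five skill scores and salary grades are hypothetical, chosen so that the fitted slope comes out at 29 with an intercept near 4, echoing the example above:

```python
import numpy as np

# Hypothetical skill scores (0 to 1) and salary grades for five job titles.
S = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
Y = np.array([8.0, 11.0, 19.0, 25.0, 30.0])

# Least-squares line of best fit: minimizes the sum of (Y - Y')^2.
b, a = np.polyfit(S, Y, 1)     # slope b, intercept a
Y_pred = a + b * S             # predicted salary grades, Y'

# r^2 = 1 - sum((Y - Y')^2) / sum((Y - Ybar)^2)
r2 = 1 - ((Y - Y_pred) ** 2).sum() / ((Y - Y.mean()) ** 2).sum()
```

With these numbers the fitted equation is Y' = 4.1 + 29.0(S), and r² is high because the points lie close to the line.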
Simple regression of one variable on another rarely captures the complex-
ity of how things really work. For example, some jobs actually are in even
higher salary grades than those predicted in the preceding example, presumably
because they not only require a high degree of skill but also great
responsibility. For such jobs, relying on skill as the only measure of job
content would underestimate their value. Similarly, some jobs not only
require low skill but entail little responsibility. For such jobs, relying on
skill as the only measure of job content would overestimate their value. For
this reason, we need to be able to measure the simultaneous effect of a number
of different aspects of job content. To do this, we use a multiple regression
procedure to obtain an equation. This is a straightforward extension of the
two-variable regression example we have just worked through.
PAY POLICY MODELS FOR ASSESSING WAGE DISCRIMINATION
As indicated above, we were asked to develop three pay policy lines to
arrive at estimates of equitable pay, all based on the application of multiple
regression procedures: (1) an overall pay policy line, involving conventional
job evaluation, with no consideration of the sex or race/ethnic composition of
job titles; (2) an adjusted overall pay policy line, involving a modification
of the conventional job evaluation approach to include measures of pro-
portion female and proportion minority incumbents within a job title to remove
any potential effect of sex or race/ethnicity as implicit compensable factors;
and (3) a pay policy line involving the use of only predominately white male
job titles as the standard for estimating an equitable pay model.
The overall pay policy line is frequently used in conventional job evalu-
ation studies to determine the relative worth of jobs.² This type of line
has one major limitation for pay equity research, with several negative
consequences. If certain job content characteristics, such as skill,
responsibility, or physical effort, are related to the sex or race/ethnic
composition of jobs, then unless the composition is explicitly entered into the
regression equation, the regression procedure will attribute to skill,
responsibility, and physical effort part of the difference in pay which is, in
fact, due to sex or race/ethnic composition. For example, if jobs done mainly
by women tend to be paid less than comparable jobs done mainly by men because
of sex discrimination, and if the jobs done mainly by women require low levels
of physical effort relative to the jobs done mainly by men, the regression
procedure will incorrectly give physical effort a large regression weight. We
could get an inappropriately large regression weight for physical effort even
if, in actuality, physical effort has no effect on the pay rates either for
jobs performed mainly by men or for jobs performed by women. In this case,
what appears to be the weight attributed to the physical effort required on
the job may really represent the male sex dominance in the job. This would be
an inaccurately specified model. It would incorporate any existing bias in
salary-setting which results from sex and/or race/ethnic discrimination.
Thus, the consequence of using the overall unadjusted pay policy line is that
the line of best fit may incorporate discrimination.
²Sometimes job evaluation has involved multiple pay policy lines, that
is, a different pay policy for different subgroups of jobs in one compensation
system. However, pay equity requires that a single pay policy be applied
consistently to all jobs.
Consider Figure 7.2, which presents data from the 1974 Washington State
Comparable Pay Study. Note that virtually all the female-dominated jobs fall
under the average pay policy line for male jobs (represented by the solid
line). Were we to compute another, overall pay regression line (represented
by the broken line), it would fall below the male line. Therefore, to the
extent that the overall line is lower due to discrimination embedded in the
salary-setting process, the salaries for male jobs appear inappropriately
high. Similarly, if we adjust female salaries only up to this average pay
line, it is very likely that the salaries for female jobs would still
incorporate sex bias. Thus, while we include a regression equation
representing the overall pay policy line, the predicted salary grades cannot
be used as the basis for making pay equity adjustments.
To correct for the limitations of the overall pay policy line, two
alternative regression models can be estimated. The first alternative
strategy is to estimate an equation similar to the overall equation, but with
one additional variable, the proportion of incumbents in each job title who
are female. The inclusion of the variable "proportion female" does two
things: first, it provides a direct estimate of the extent to which the sex
composition of jobs affects their pay rates, net of other factors; second, it
provides estimates of how skill, responsibility, and other job content
characteristics affect pay, net of sex composition.
The coefficient associated with the proportion female indicates the
predicted difference in salary grade between job titles that have identical
scores on other job content factors but that differ by 1.0 in their proportion
female. Specifically, it indicates the predicted difference in the salary
grade of two job titles, one of which is 100 percent male and the other of
which is 100 percent female, but which have identical scores on the other
FIGURE 7.2

SCATTERPLOT OF MONTHLY SALARIES BY JOB WORTH POINTS,
FOR 59 JOBS HELD MAINLY BY MEN AND 62 JOBS
HELD MAINLY BY WOMEN IN THE WASHINGTON STATE PUBLIC SERVICE

[Scatterplot of monthly salary (vertical axis) against job worth points
(horizontal axis); the solid line is the male pay line and the broken
line is the average (overall) line.]

SOURCE: Remick, 1980. Computed from Willis, 1974.
variables. If the coefficient is negative, for example -2.0, the equation
will indicate that a totally female job title would be two salary grades lower
than a totally male job title with identical other job content
characteristics.
To see how these weights are used, let us consider the (hypothetical)
predicted salary grades for two jobs, Typist and Truck Driver. For
simplicity, let us assume that there are only two job content characteristics
in the jurisdiction's implicit pay policy: skill and responsibility. Further
assume that Typists have a score of 0.4 on the skill factor, a score of 0.3 on
the responsibility factor, and are 100 percent female; and that Truck Drivers
have a score of 0.4 on the skill factor, a 0.4 on the responsibility factor,
and are 0 percent female. The a = 3.3, the b (skill) = 13, the b
(responsibility) = 18, and the b (proportion female) = -2. With these job
characteristics, the predicted pay rate for Typists would be

    Y' = 3.3 + 13(0.4) + 18(0.3) - 2(1.0) = 11.9

while the predicted pay rate for Truck Drivers would be

    Y' = 3.3 + 13(0.4) + 18(0.4) - 2(0) = 15.7

From these two equations we see that the difference in salary grades between
Typist and Truck Driver is 3.8. Only 1.8 (18(0.4) - 18(0.3)) of the
difference is due to what we would regard as a legitimate basis of pay
differentials, the fact that truck driving involves more responsibility than
typing, while 2.0 ((-2)(0) - (-2)(1.0)) is due to the fact that truck driving
is a male job while typing is a female job.

The logic behind the use of the adjusted line is that these predicted pay
rates can be interpreted as "equitable job worth" scores, since they indicate
what New York State would pay if responsibility and skill differences between
job titles were taken into account, but differences in sex composition were
not.
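The Typist/Truck Driver arithmetic above can be reproduced directly (all coefficients and scores are the hypothetical ones from the text, not estimates from the study):

```python
def predicted_grade(skill, resp, prop_female,
                    a=3.3, b_skill=13.0, b_resp=18.0, b_female=-2.0):
    """Predicted salary grade under the hypothetical adjusted equation."""
    return a + b_skill * skill + b_resp * resp + b_female * prop_female

typist = predicted_grade(0.4, 0.3, 1.0)  # skill .4, responsibility .3, 100% female
driver = predicted_grade(0.4, 0.4, 0.0)  # skill .4, responsibility .4, 0% female

# Setting proportion female to zero yields the "equitable job worth"
# grade for Typist, with the sex-composition penalty removed.
typist_equitable = predicted_grade(0.4, 0.3, 0.0)
```

This reproduces the 11.9 and 15.7 grades in the text, and shows that removing the sex-composition term raises the Typist grade to 13.9, leaving only the 1.8-grade responsibility difference.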
The utility of entering percent female and percent minority into the
regression equation is that it provides predicted salary grades that are free
of any sex or race/ethnic bias but that otherwise conform as closely as
possible to the current pay policy of New York State. Thus, the New York
State pay policy is made equitable, while the integrity of the system is
maintained. This approach was the basis on which pay equity adjustments were
made in Iowa. One drawback of this model is that it is a difficult adjustment
procedure to explain to a non-technical audience.
The second alternative model involves estimating an equation where the
job titles used in developing the equation are restricted to those whose
incumbents are almost entirely white males. The logic underlying this
strategy is that pay differences among jobs done mainly by white males cannot
involve sex or race/ethnic discrimination. Thus, by determining what job
content characteristics account for pay differences among such jobs, and
applying the resulting compensation model equation to the remaining jobs, we
discover and apply a compensation policy free of sex or race/ethnic bias.
This strategy has the political advantage of being easy to understand. If
we refer to Figure 7.2, it would mean that all the female-dominated titles
would be adjusted to the male pay policy line and they would be paid at the
average salary level for male jobs at each given job evaluation point level.
The disadvantage of this strategy is that it is based on a smaller subset of
jobs that may be unrepresentative of all the job titles in New York State.
Moreover, if female-dominated and disproportionately minority titles are
raised to the male pay policy line, it would leave integrated titles in the
position of being the lowest paid titles in New York State. The male pay
policy line was used as the basis for making pay equity adjustments in
Minnesota.
Of course, theoretically it is possible that all three regression
equations would be essentially the same. If this happened, we would conclude
that there was no discriminatory pay bias against female-dominated or
disproportionately minority jobs. Moreover, we would expect to see as many
female-dominated and disproportionately minority jobs above as below the
unadjusted pay line. No studies to date have resulted in such a finding.
One final note about compensation models developed using policy-capturing
job evaluation: since existing wages are used as the basis for obtaining fac-
tors and weights, policy-capturing represents an essentially conservative
approach to job evaluation. More than a priori approaches, it tends to pre-
serve existing wage relationships among jobs and rationalize implicit pay
policies. It minimizes changes in the existing system. Comparable worth job
evaluation using policy-capturing must take care to remove from customary wage
relationships the impact of sex and race/ethnic bias. This is first
accomplished by introducing the two alternative pay policy lines. However, it
also is necessary to examine what job content characteristics New York State
does not value or values negatively to determine whether there are subtle
biases against characteristics of female and minority jobs. Thus, after
presenting the results of the regression analysis for the three pay policy
lines, we will briefly discuss what New York State does not value or values
negatively.
DEVELOPING THE REGRESSION MODELS: PRELIMINARY DESIGN DECISIONS
In the previous section we discussed the rationale for estimating three
models of the relationship between job content and salary grade in the New
York State system: (1) an overall pay policy line or conventional job
evaluation model, in which salary grade is predicted from measures of job
content; (2) an adjusted pay policy line or modified job evaluation model, in
which measures of the sex and race composition of jobs (i.e., the proportion
female and the proportion minority workers) are included in the estimation
model in addition to measures of job content; and (3) a white male pay policy
line, which is a model based on job content characteristics only, but
estimated on the basis of the characteristics of white male jobs rather than
of all job titles. This section of the report describes in detail the
procedures used to estimate these three compensation models and the decisions
made in the course of the estimation.
Minimum Incumbency Size
The development of a model that successfully captures the current pay
policy of New York State requires that the model be based on as many and as
broad a representation of jobs as possible. Any exclusion of jobs should be
done in such a way as not to bias the resulting pay policy model. While this
point is fairly obvious, it is important when considering whether and in what
way to limit the analysis to job titles with some minimal number of
incumbents. The issue in specifying the regression equation involves deciding
the minimum number of incumbents necessary to achieve acceptable levels of
reliability in the measurement of sex and race/ethnic composition of job
titles.³
Consider a job title with two incumbents, one male and one female. If
the male leaves the job and is replaced by a female, the job will
³Recall that, in Chapter II, one of the criteria for selecting
female-dominated, disproportionately minority, or direct-line-of-promotion
titles was that there was a minimum of ten incumbents. These criteria were
based on a similar concern with the reliability of the sex and race/ethnic
variables.
automatically go from a 50 percent female to a 100 percent female job. The
inter-rater variance in ratings of job content is in general not large,
relative to the variability in sex and race/ethnic composition. Therefore, it
is probable that job content measures based on the ratings of a small number
of incumbents will be relatively reliable, whereas the measures of sex and
race/ethnic composition will not.
This is further problematic since job titles with small numbers of
incumbents tend to be concentrated at the upper grade levels. Thus, excluding
these job titles leaves us with a somewhat unrepresentative set of all job
titles in the system. This is why our original sample included the small
incumbency positions designated as managerial but excluded such titles in
non-managerial bargaining units.
After exploring a number of different possibilities, and computing our
regression models separately for subsets of job titles selected on the basis
of the number of incumbents, we decided that the best solution consisted of
presenting a single set of results, based on all job titles with four or more
incumbents. Under this criterion we reduced the number of job titles
available for analysis from 2,582 to 1,601. We were comfortable with this
solution because the regression results proved to be basically similar for
subsets of job titles with different incumbent counts. In particular, the
regression coefficients associated with proportion female and proportion
minority were remarkably stable, hardly varying from one sub-group to another.
Defining "White Male" Job Titles
The argument underlying the pay policy line for white male job titles is
that jobs done mainly by white males are not subject to sex or race/ethnic
discrimination in salary-setting in the way that disproportionately minority
and female-dominated jobs may be. Therefore, whatever characteristics of
- 146 -
white male jobs are related to salary grade are regarded as compensable
factors free of sex or race/ethnic discrimination. The definition of "mainly
white male" had to be very restrictive. For example, using a 67.2 percent
cutoff point as the basis for defining white male jobs would, in fact, result
in lower estimates of undervaluation. However, such estimates would be lower
precisely because undervaluation due to sex and race/ethnic discrimination
would be embedded in estimates made on the basis of such low cutoff points.
On the other hand, since too narrow a definition would result in the
elimination of almost all jobs, we settled upon a minimum of 90 percent white
and 90 percent male incumbents as the criteria for defining a job title as
"white male."
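As a sketch, the 90/90 criterion amounts to a simple filter over title-level composition data; the records below are hypothetical, not drawn from the study.

```python
# Hypothetical (title, proportion white, proportion male) records.
titles = [
    ("Title A", 0.96, 0.98),
    ("Title B", 0.85, 0.03),
    ("Title C", 0.70, 0.91),
]

# A title counts as "white male" only if at least 90 percent of its
# incumbents are white AND at least 90 percent are male.
white_male = [name for (name, p_white, p_male) in titles
              if p_white >= 0.90 and p_male >= 0.90]
print(white_male)  # ['Title A']
```

Note that Title C, although 91 percent male, fails the racial criterion, illustrating why both cutoffs must hold jointly.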
Scaling Variables
Our independent variables included the set of factor-based scales, plus a
set of individual variables not included in the formation of the factor-based
scales. For the adjusted pay policy model only, the proportion female and the
proportion minority incumbents were also included. The variables are listed
in the top panel of Table 7.1. Means, standard deviations, and intercorrelations
among all of these variables, plus the dependent variable, mean salary
grade, are shown in Appendix C. To ease interpretation of the regression
coefficients, we converted each of these variables to a 0 to 1 metric.⁵ Thus,
the coefficients indicate the predicted difference in salary grade between a
⁵The variables were scaled 0 to 1 using the following equation:

    scaled score = (unscaled score - minimum score) / (maximum score - minimum score),

where the minimum and maximum refer to the outer limits of unscaled scores for
that particular variable.
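In code, this rescaling is a one-line min-max transformation; the sketch below assumes a variable's scores are available as a simple list.

```python
def scale_01(scores):
    """Rescale a variable to the 0-to-1 metric: (x - min) / (max - min)."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]

print(scale_01([2, 5, 8]))  # [0.0, 0.5, 1.0]
```

After this transformation a regression coefficient is directly interpretable as the predicted salary-grade difference between the lowest- and highest-scoring titles on that variable.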
- 147 -
job title with the highest score on the variable and a job title with the
lowest score on the variable, net of the effect of all other variables in the
model. For example, in the overall pay policy line, labeled Model A, shown in
Table 7.2, the predicted difference between job titles with the highest and
lowest scores on the Management/supervision factor is about four and one-half
salary grades (4.54), holding constant each of the 14 other variables in the
model. But this regression coefficient does not tell us how important
Management/supervision is relative to the other 14 variables in the model. To
determine the relative importance of the variables in the model, we also
present the regression weights expressed as standardized coefficients. These
are shown in Table 7.3.⁶
Criteria for Selecting Variables for Retention in the Model
We began the statistical analysis by estimating an initial version of
each of our three models that included all 27 of the variables specified in
the top panel of Table 7.1: the 13 factor-based scales; the indices we had
constructed to measure the complexity of reading, writing, and mental demands;
⁶Often we wish to have a measure not only of how much of a change in the
salary grade can be expected for each one-unit change in each job content
variable, but also of the relative effect on salary grade of each of the job
content variables. Because job content variables are often measured in a
variety of metrics with different dispersions or ranges of values, they cannot
be compared directly. Each variable must be expressed in the same metric
before comparisons can be made. To enable us to determine the relative
importance of the regression coefficients, we make use of what are called
standardized coefficients. Technically, these are the equivalent of the
metric coefficients we would obtain if, before doing the analysis, we
subtracted the mean of each variable from each observation and divided the
result by the standard deviation. Doing this would yield a new set of
variables, each of which had a mean of zero and a standard deviation of one.
For such variables, a one-unit difference is identical to a one-standard-deviation
difference, since the variables are defined to have a standard
deviation of one.
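Equivalently, a standardized coefficient can be obtained directly from the metric coefficient by multiplying by the ratio of standard deviations; a minimal sketch with made-up numbers:

```python
import statistics

def standardize(b_metric, x, y):
    """Standardized coefficient = metric coefficient * sd(x) / sd(y).
    This equals the coefficient obtained after z-scoring both variables."""
    return b_metric * statistics.pstdev(x) / statistics.pstdev(y)

# Made-up example: metric slope 2.0, x far less dispersed than y.
print(standardize(2.0, x=[0, 1], y=[0, 10]))  # 0.2
```

This is why two variables with similar metric coefficients can differ sharply in standardized terms: the standardized value also reflects how widely the variable varies across titles.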
- 148 -
TABLE 7.1
VARIABLES USED IN THE REGRESSION ANALYSES

Variables Used in the Initial Analysis

Y:      Mean salary grade
F1:     Management/supervision
F2:     Unfavorable working conditions
F3:     Contact with difficult clients
F4:     Communications with public
F5:     Education required
F6:     Data entry
F7:     Group facilitation
F8:     Computer programming
F9:     Fiscal responsibility
F10:    Stress
F11:    Autonomy
F12:    Consequences of error
F13:    Time effort
        Mental demands
        Complexity of writing
        Complexity of reading
MI33:   Doing same short task over and over
MI36:   Travel overnight on the job
MI40:   Amount of experience in related jobs required
MI41:   Time to become competent after starting the job
PI42:   Work with machines (percent no)
MI43:   Highest level of mathematics used
MI44:   Special body coordination or expert use of hands or fingers
MI104:  How often do new or unexpected problems come up in job
PI105:  How much could mistake harm health or safety of another person
PI106:  How much damage to equipment could mistake cause
F:      Proportion female
M:      Proportion minority

Variables Added During Review of Initial Regression Results

PI19:   Pressure to hurry to meet deadline (also in Factor 10)
MI32:   Very strenuous physical activity (also in Factor 2)
MI37:   Risk of being hurt (also in Factor 2)
PI42R4: Work with machines (percent put together or fix complicated
        machines)
MI95:   How much variety in job
PI96:   Responsible for preventing damage or waste of equipment or
        supplies
MI101:  Do the same thing every day
PI102:  Told what specific tasks to do
- 149 -
and the individual items not included in any of the factors that had plausible
interpretations as possible predictors of salary grade; and, for Model B,
proportion female and proportion minority as well. From this initial
equation, we successively eliminated variables that were statistically
non-significant.⁷,⁸ In some instances, however, we retained non-significant
variables with metric coefficients of one or greater to allow for the
possibility that a variable may pertain to only a few job titles but be an
important determinant of salary grade for those few titles. It is possible
for such variables to appear non-significant in the overall model because they
account for so little of the variance in the dependent variable as a
consequence of their rarity. For example, whether or not one is a
professional athlete accounts for little of the variance in the income of the
⁷The use of significance tests in this context may appear surprising since
we are studying the entire population of job titles in the New York State civil
service system at the time of our study. However, we rely on significance
tests because we regard the population of job titles at the time of study as a
point-in-time sample of job titles existing in New York State in the mid-1980s,
which is the population to which we wish to generalize our results.
⁸We did not make use of stepwise regression procedures because, in our
judgment, reliance on strictly mechanical criteria for the retention or
elimination of variables too often leads to uninterpretable results, with the
retention or elimination of particular variables depending heavily on minor
variations in the size of inter-item correlations.
- 150 -
U.S. labor force, since there are so few professional athletes relative to the
size of the labor force. However, the metric coefficient associated with a
variable distinguishing professional athletes from others would be large
because professional athletes earn substantially more on average than do
others in the labor force.
Having developed a preliminary model for the overall pay policy line by
eliminating variables with no explanatory power (non-significant variables),
we then carried out an additional exploratory analysis to ensure that we were
adequately reflecting the pay practices of New York State. We reviewed all of
the variables available to us, including a number of individual variables that
had been included in the factor scales, and considered whether transformations
of these variables were possible that would capture aspects of job content
better than we had done in the first equation. For instance, item number 42,
"Do people in your job work with machines as an important part of their job?"
was initially coded to indicate the percentage of incumbents who responded
"No, they don't work with machines." In our review process, we defined a new
variable, the percentage of incumbents who responded "They have to put
together or fix complicated machines," reasoning that perhaps the latter
variable would tap the complexity of manual work. For similar reasons, we
also picked out some items that were included in factor-based scales and
studied their effect as individual items. These variables are shown in the
second panel of Table 7.1. None of this exploratory work led to the inclusion
of any additional variables.
THE FINAL SALARY GRADE PREDICTION MODEL
Table 7.2 shows the regression coefficients for the final compensation
model that would be used as the basis for estimating possible undervaluation
- 151 -
for female-dominated, disproportionately minority, and direct-line-of-promotion
titles. The table displays three models. Two models are estimated using
all job titles with four or more incumbents. Of these, Model A is the overall
pay policy line and includes 15 job content characteristics. Model B is the
adjusted pay policy line and includes the same 15 job content characteristics,
plus the proportion female and the proportion minority among incumbents of
each job title. Model C is estimated using white male job titles only. It
contains only the 10 job content characteristics that were statistically
significant for this equation. For each model Table 7.2 includes the metric
(unstandardized) regression coefficient for each variable. Table 7.3 includes
the standardized regression coefficient for each variable.
The first thing to notice about these models is that they successfully
account for most of the variation in the salary grades of job titles in the
New York State civil service system. Recall that the objective of the
regression analysis is to discover the implicit policy underlying the current
pay practices of the New York State government. The fact that each of these
models accounts for nearly 90 percent of the variance in salary grades among
job titles (R² is between .88 and .89 in all models) indicates that we have
been very successful in this effort.
All three models have strong similarities. However, there is some
variation between the models based on all job titles and the model based on
white male job titles only. Let us first consider the models based on all job
titles.
When all job titles are considered, by far the most important determi-
nants of salary grade are the educational requirements for a job (F5) and the
amount of experience required in related jobs (M40), as can be seen from
inspection of the standardized coefficients. In fact, these two variables
- 152 -
TABLE 7.2
UNSTANDARDIZED COEFFICIENTS OF THE DETERMINANTS OF SALARY GRADE
FOR VARIOUS MODELS FOR JOB TITLES WITH AT LEAST FOUR INCUMBENTS

                                          All Job Titles    White Male Job Titles
                                         Model A   Model B        Model C

Metric Coefficients

Variables
F1:   Management/supervision               4.54      4.98          4.47
F2:   Unfavorable working conditions      -2.07     -3.14         -4.44
F4:   Communication with public           -1.46     -1.53         -3.38
F5:   Education required                  11.83     11.86          9.72
F7:   Group facilitation                  -1.22      -.76          2.17
F12:  Consequences of error                3.42      2.94          3.16
F13:  Overtime without compensation        1.44      1.41           --
F14:  Mental demands                       8.74      7.63           --
      Complexity of writing                5.87      5.57         11.45
M33:  Doing same task over and over       -3.60     -3.14           --
M40:  Experience required                  8.47      7.80          7.63
P42:  Not working with machines            1.02      1.22          1.48
M44:  Physical coordination                 .33       .83           --
P96:  Responsible for preventing           1.11      1.40          3.22
      damage to equipment
P106: Mistake causes damage                 .48       .85           --
      to equipment
F:    Proportion female                     --      -2.02           --
M:    Proportion minority                   --       -.16           --

Constant                                  -2.40     -1.15          2.14
R²                                         .885      .889          .884
Standard error of estimate                 2.53      2.48          2.34
N                                          1601      1601           464
- 153 -
TABLE 7.3
STANDARDIZED COEFFICIENTS OF THE DETERMINANTS OF SALARY GRADE
FOR VARIOUS MODELS FOR JOB TITLES WITH AT LEAST FOUR INCUMBENTS

                                All Job Titles    White Male Job Titles
                               Model A   Model B        Model C

Standardized Coefficients

Variables
F1                               .14       .15           .15
F2                              -.06      -.09          -.15
F4                              -.04      -.05          -.11
F5                               .35       .35           .28
F7                              -.04      -.02*          .07
F12                              .08       .07           .07
F13                              .03       .03           --
F14                              .17       .14           --
Complexity of writing            .13       .12           .29
M33                             -.09      -.08           --
M40                              .30       .28           .27
P42                              .04       .05           .06
M44                              .01*      .04           --
P96                              .04       .05           .11
P106                             .02*      .04           --
Proportion female                --       -.09           --
Proportion minority              --       -.004*         --

*Coefficient not significant at the .05 level.
- 154 -
together account for nearly 81 percent of the variance in salary grade. This
is, of course, not surprising because education and experience are important
components of most job specifications in New York State. After these, the
other most important determinants of salary grade are the extent to which a
job title involves management and supervision (Fl) and the complexity of
writing it requires.
A number of other variables have substantial effects on salary grade as
well, although the standardized coefficients are not large, probably because
the characteristics pertain to only a small fraction of job titles. This can
be seen by inspecting the metric coefficients, which are quite similar in
Models A and B. As noted, educational requirements have a very strong effect
on salary grade. The predicted difference in salary grade between two job
titles, one requiring the greatest amount of education and the other requiring
the smallest amount of education, is nearly 12 salary grades, net of all other
characteristics. The impact of experience is also strong, the predicted
difference between two job titles requiring the most and least related
experience being about eight salary grades, net of all other characteristics.
Most of the other variables have substantial impact as well. With only two
exceptions for Model A and three exceptions for Model B, the predicted
difference between jobs at the highest and lowest level of a characteristic is
greater than one salary grade, holding constant all other characteristics.
The more women in a job title, the less it pays, net of the 15 other job
content variables in the model. The predicted difference between two jobs
that are identical on all 15 job content characteristics, but where one job is
performed entirely by men and the other is performed entirely by women, is
approximately two salary grades (2.02)--with the women's job being the lower
paying one.
- 155 -
It is instructive to note that the single variable regression of salary
grade (Y′) on proportion female (F) is:

    Y′ = 15.15 - 8.10(F).
From this equation, we can observe that, knowing nothing else about a job
title other than that it is 100 percent male, we would predict it to be in
Salary Grade 15. Alternatively, if it were 100 percent female, we would
predict it to be in salary grade 7 (7.05 = 15.15 - 8.10(1)). Model B in Table
7.2 indicates, however, that much of the apparent effect of sex composition on
pay rates can be attributed to the fact that job content is correlated with
sex composition, since the net coefficient of proportion female in Model B is
only about one-quarter the size of the simple regression coefficient relating
the two variables (-2.02 for Model B and -8.10 for the simple regression).
Nonetheless, even the net coefficient is fairly large and indicates
substantial undervaluation of jobs done mainly by women.
tial undervaluation of jobs done mainly by women.
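The arithmetic behind this comparison can be checked directly; the bivariate equation below is the one reported in the text, and the computation is only a sketch.

```python
def simple_psg(prop_female):
    """Bivariate regression of salary grade on proportion female,
    as reported in the text: Y' = 15.15 - 8.10(F)."""
    return 15.15 - 8.10 * prop_female

print(round(simple_psg(0.0), 2))  # 15.15 for an all-male title
print(round(simple_psg(1.0), 2))  # 7.05 for an all-female title

# Gross gap: 8.10 grades. Net of the 15 job content variables (Model B),
# the coefficient on proportion female is about -2.02, roughly
# one-quarter of the gross gap.
```

The difference between the gross gap (8.10 grades) and the net gap (about 2 grades) is the portion of the sex difference attributable to job content correlated with sex composition.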
The presence of a sex effect is in stark contrast to the absence of a
race/ethnicity effect. Although the simple one-variable regression of salary
grade (Y′) on proportion minority (M) yields an equation similar to that for
proportion female:

    Y′ = 16.91 - 8.45(M),

the net regression coefficient for proportion minority from Model B in Table
7.2 is very small (-.16) and is not significantly different from zero.⁹ This
tells us that the effect of race/ethnicity in the simple model can be
accounted for by differences in the content of jobs held by minority and non-
minority workers. Thus, we conclude that, although 100 percent minority jobs
are, on average, paid 8.45 salary grades less than jobs with 0 percent
minorities, this is due to the fact that minorities tend to be in jobs of low
valued job content.
- 156 -
The equation based on white male job titles is, as was indicated above,
quite similar to the equation based on all jobs. But there are several
important differences. First, for these white male jobs, the complexity of
writing requirements (WRITE) is as important and as strong a determinant of
salary grade as are education (F5) and experience (MI40); but the complexity
of mental demands is not significant. It may be that the complexity of mental
demands differentiates clerical jobs from one another more than it
differentiates the kinds of jobs mainly performed by men. Writing complexity,
by contrast, might distinguish administrative and professional jobs, on the
one hand, from manual jobs on the other.
Second, many of the variables that are significant for the equation
involving all job titles are not significant when only white male jobs are
considered. Five variables, in particular, are significant in both Models A
and B but not in Model C for white males: working overtime without
compensation (F13); mental demands (F14); doing the same task over and over
(M33); physical coordination (M44); and mistake causes damage to equipment
(P106). Why these variables do not differentiate the salary grades of white
male jobs, but do differentiate among all jobs, is unclear. It is probable,
however, that the differences in the determinants of pay between white male
job titles and all job titles reflect more than anything else the fact that
white male job titles constitute a narrower range of job content than that
found in all job titles in New York State government employment.
A comparison of the samples of job titles on which the overall pay policy
line and the white male pay policy line are based reveals that the samples do
⁹This is true whether or not the proportion female is included in the
equation with the proportion minority.
- 157 -
differ in several ways. Table 7.4 shows, not surprisingly, that the sample of
white male titles has a higher mean salary grade than the whole sample. Also,
there is a greater proportion of titles above salary grade 24 and a smaller
proportion below salary grade 8 than in the whole sample.
White male titles tend to have smaller incumbencies. Indeed, the average
incumbency is less than half that for the sample of all
titles. This raised questions about whether or not female, minority, and
integrated titles are sufficiently differentiated with respect to job content.
Earlier we indicated that our definition of white male job title had to
be sufficiently restrictive to cancel out possible sex or race/ethnic
discrimination. Thus, when we compare the white male job sample to the whole
sample, we find it tends to underrepresent titles in negotiating units 2
(Administrative) and 4 (Institutional) with only four titles from each of
these units in the sample. These two units contain mostly female-dominated
jobs. Accordingly, the white male sample also overrepresents negotiating unit
3 (Operational) with 16.4 percent of titles from this unit, while the whole
sample has only 7 percent of titles from this unit.
Finally, we found some differences between the correlations of job
factors with salary grade for the two samples. In the whole sample, the
correlation between salary grade and working conditions (F2) is -.48, while it
is -.74 for the white male sample. As has been suggested elsewhere, working
conditions primarily tap job content characteristics found in male jobs
(Steinberg and Haignere, 1985). Similarly, the correlation between grade and
experience (MI40) is .71 for the whole sample and .64 for the white male
sample, indicating that white male jobs across many salary grades require
similar amounts of experience.
- 158 -
TABLE 7.4
COMPARISON OF DATA FOR ALL JOBS
TO DATA FOR WHITE MALE JOBS

                                     All Jobs    White Male Jobs
Mean Salary Grade                      17.7           19.5
Mean Title Population                  90.0           38.5
Mean Percent Minority                  10.0            1.0
Percent Titles Above Grade 24          20.1           25.8
Percent Titles Below Grade 8            8.4            1.7

Negotiating Unit             Percent  Frequency    Percent  Frequencyᵃ
1 and 67  Security              1.7       27          2.5       12
2  Administrative              10.1      165           .8        4
3  Operational                  7.0      115         16.4       78
4  Institutional                3.6       59           .8        4
5  Professional,               52.6      859         53.7      256
   Scientific and Technical
6  Management/Confidential     13.6      222         15.9       76
Missing Data                   11.4      187          9.9       47

Total Number of Titles                  1601                   464
in the Regression

ᵃThe negotiating unit frequencies total more titles than the number of
titles in the regression because a few titles have missing data for variables
that enter the regression and are, therefore, dropped from the regression
analysis.
- 159 -
To conclude, the overall sample and white male sample differ in predictable
ways. The white male sample has higher pay, few minorities or women, and
is drawn from a subset of all negotiating units. Thus, the resulting model of
job content relationships with salary grade is somewhat different between the
two samples, although the specific job content characteristics found to be
valuable are remarkably similar.
JOB CONTENT CHARACTERISTICS NOT CURRENTLY VALUED
BY NEW YORK STATE GOVERNMENT
It is instructive to note what New York State does not pay for as well as
what it does pay for. By and large, the coefficients have the expected sign.
For instance, education (F5), experience (MI40), and mental demands (F14) have
positive coefficients, revealing that, for example, the higher the level of
education, the higher the pay. Repetition (MI33), by contrast, has a negative
coefficient.
In one case the coefficient was not in the predicted direction. That
exception is unfavorable working conditions. Those who work in unusual heat,
cold, etc., or are involved in unusually strenuous physical effort (F2) are
penalized rather than compensated relative to jobs identical in all other
measured respects. The coefficient for unfavorable working conditions is
negative in all equations. Communication with the public is also negatively
valued.
Some additional factors and items had no net impact at all on salary
grade. Jobs in New York State requiring contact with difficult clients and
jobs involving stress are neither rewarded nor penalized relative to other
jobs with similar requirements in other respects. The same is true of data
entry and computer programming jobs, a point of considerable interest, given
- 160 -
the oft-heard claim that it is necessary to pay such jobs more than their
evaluated worth because the demand is so high relative to the supply.
Finally, fiscal responsibility and autonomy have no independent effect on
salary grade, probably because, despite emerging as separate factors, they
have much in common with the much stronger "Management/supervision" factor
(Fl).
New York State can choose to change any of these regression weights, in
order to value these job factors differently. Changing the current regression
weight of a factor such as stress to a positive value would affect male as
well as female jobs; other changes, such as to the weight for data entry,
would affect only female jobs.
One question that must be addressed in the review of this report is
whether the job content characteristics found to be negatively valued or of no
value are differentially associated with female-dominated or dispropor-
tionately minority job titles. If this is the case, there may be bias in the
current compensation model for New York State. For example, contact with
difficult clients (F3) and data entry (F6) are content characteristics
associated with disproportionately female and minority institutional and
clerical jobs. They currently are not valuable job content characteristics
for pay purposes.
New York State may want to change the evaluation model by adding new
factors and weights or changing the weights of factors found to be
significant. Changing the pay policy models would result in different predicted
salary grades from the ones reported in the following chapter.
- 161 -
SUMMARY
Twenty-seven job factors and items were entered into regression
procedures predicting salary grades. Of these, fifteen were found to be
significant in two of the equations and ten were significant in the other.
The predictors that were retained predict nearly 90 percent of the variance in
salary grade across jobs. Three regression models were presented: (A) a pay
policy line based on all jobs, (B) a line based on all jobs and adjusted to
remove the effect of female or minority composition of the jobs, and (C) a
line based on white male jobs only.
Only jobs with four or more incumbents were included in the models,
because the sex and race/ethnic composition of jobs is more stable across time
with larger titles. Also, we found that excluding the small titles makes
little difference in the final regression equations. The net effect of
proportion female in a title, all other job factors held constant, is two
salary grades; that is, jobs done entirely by women are, on average, two
salary grades lower than jobs of equal value to the state done entirely by
men. We found no statistically significant independent effect of percent
minority in the regression equations.
Our results demonstrate that education, experience, management, mental
demands, and writing are the most highly compensated job factors in New York
State government. Several factors are not valued or are negatively valued.
These include strenuous working conditions, stress, group facilitation,
communication with the public, data entry, computer programming, fiscal
responsibility, and autonomy.
- 162 -
- 163 -
CHAPTER VIII
PREDICTED SALARY GRADES
AND CONFIDENCE INTERVALS
- 164 -
In the last chapter we used multiple regression to arrive at three
compensation models that would be used as the basis for predicting salary
grades. Since we did not sample the entire population of incumbents of each
job title, however, we had to take into account the fact that there would be a
certain amount of statistical error in our estimates of predicted salary
grades (PSG). If we received responses from all persons in a title, we could
be 100 percent confident about the average responses for the title, and,
therefore, about the predicted salary grade. However, for titles where we
received responses from a subset of incumbents, there may be error in our
prediction.
In order to know how much confidence to place in our estimates of
predicted salary grade, we needed to determine how widely such estimates might
be expected to vary depending on which subset of incumbents was included in
the sample. From information available from the sample we actually used, it
is possible to make an estimate of how widely the results might vary.
This chapter describes the procedures involved in estimating the salary
grades and the possible error in each prediction. Estimates are presented for
each female-dominated, disproportionately minority, and direct-line-of-promotion
title using each of the three regression models described in Chapter VII.
PROCEDURE FOR OBTAINING PSGs AND CONFIDENCE INTERVALS
The conventional measure of error in prediction from a multiple
regression equation is the "standard error of the estimate." It tells us how
widely each of the estimated values of the dependent variable might be
expected to vary due to the fact that the regression is based on a sample
rather than on the entire population. In the context of this study, the
- 165 -
dependent variable is the salary grade and the sample relevant to the error
statistic would be the sample of titles from the population of titles. Our
major concern, however, is that we sampled within the unit of analysis, i.e.,
within each title. Thus, the precision of prediction will vary from one job
title to another, depending on the number of incumbents sampled and the
heterogeneity of responses within each job title. For this reason, we needed
to estimate the error in prediction separately for each job title. We do this
by utilizing a technique known as "jackknifing" (Mosteller and Tukey, 1968).
Jackknifing simulates a standard approach to estimating error: that is,
we calculate a mean and a confidence interval around it.¹ The PSG that we
calculated for each title is not a mean; it is the resulting statistic derived
from application of a regression equation. We needed to organize our data in
such a way that we could calculate the PSG as a mean statistic in order to
then compute a confidence interval around it. Jackknifing sets up data in
such a way that we can do this.
The basic approach to jackknifing a statistic, such as PSG, involves two
sets of procedures: (1) selecting large sub-groups of data from the whole
sample and repeating analyses with these groups and (2) calculating the means
of these PSGs and confidence interval statistics. Actually, the replicate
PSGs are adjusted prior to taking the mean, as we will explain below.
The selection of groups of data for repeated analyses is done in such a
way that we can obtain large enough data groups to run the analyses. We
¹A confidence interval is a range of values that we are confident
includes the true predicted salary grade which could only be calculated if we
had sampled all employees. Consider a 95 percent confidence interval.
Theoretically, if we were to draw all possible samples of the same size from
our population of incumbents, we would obtain predicted salary grades within
the confidence interval range of values 95 percent of the time.
systematically selected several small sub-samples from the whole sample
according to procedures that will be described in the next section. Each of
these sub-samples was sequentially subtracted from the whole sample to create
a set of "replicates." A replicate, then, is the larger data set minus one
small sub-sample. Ten replicates are an adequate number to use in order to
obtain stable results (Mosteller and Tukey, 1968). The entire analysis was
repeated on each of the ten replicates to obtain eleven PSGs for each
estimated title: one for the whole sample and one for each of the ten
replicates.
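The replicate construction described above can be sketched as follows; the stand-in records and the systematic one-in-ten split are illustrative assumptions, not the study's actual sampling procedure.

```python
# Stand-in records; in the study these would be survey respondents.
sample = list(range(100))

# Ten systematic sub-samples, each holding every tenth record.
subsamples = [sample[i::10] for i in range(10)]

# Each replicate is the whole sample minus one sub-sample,
# i.e., 90 percent of the data.
replicates = [[x for x in sample if x not in set(sub)] for sub in subsamples]
print(len(replicates), len(replicates[0]))  # 10 90
```

Re-running the full analysis on each replicate then yields the ten replicate PSGs used alongside the whole-sample PSG.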
In the second step of the jackknifing procedure, final statistics are
calculated. Following standard jackknifing procedures, as described by
Mosteller and Tukey (1968), we created a set of adjusted values for analysis
based on the PSG for the whole sample and the PSG for each replicate. These
adjusted values, which are referred to in the research literature as
"pseudo-values," are created by using the following equation:
N(PSG whole sample) - (N-1)(PSG replicate) = S,
where N is the number of replicates and S is the pseudo-value.² For this
analysis, we have ten replicates, so,
10(PSG whole sample) - 9(PSG replicate) = S.
We did this calculation for each replicate and obtained ten pseudo-values. We
then calculated the final PSG as a mean of these ten pseudo-values and
calculated a confidence interval around this mean.³ Logically, the mean of the
2Note that if the replicate PSG is equal to the whole sample PSG, the
pseudo-value S equals the whole sample PSG.

3The error cannot be estimated from the replicate PSGs directly because
there is approximately a 90 percent overlap between the data in any two
(Footnote Continued)
averaged within job title, and the averaged job title data were used in the
subsequent jackknifing analysis.
PREDICTED SALARY GRADES: WHOLE SAMPLE AND REPLICATES
In the last chapter, we reported on the regression equations that best
fit the overall pay policy line, the adjusted pay policy line, and the white
male pay policy line. We used these equations to calculate the PSGs for the
whole sample.
For each replicate we conducted the same analysis that was done with the
whole data set, except that we did not select new predictor variables. We
used the same variables as those in the equations for the whole sample and
computed the regression weights that best fit the replicate data set. Thus,
each replicate analysis involved calculating factor-based scores and indices,
scaling predictor variables (items and factors) from zero to one, and using
regression procedures to form replicate overall, adjusted, and white male
lines.
The preceding procedures resulted in eleven regression equations for each
of the three pay policy lines. Each pay policy regression equation included
the same variables for each replicate and the whole sample. However, the
regression weights were different from one replicate to another.
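A replicate refit of one pay policy line can be sketched as ordinary least squares with the predictor set held fixed and only the weights re-estimated. The predictors and salary grades below are simulated stand-ins, not the study's variables.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated stand-ins: four predictors already scaled to the zero-to-one
# range, and salary grades generated from known weights plus noise.
X = rng.random((80, 4))
true_weights = np.array([4.0, 2.0, 1.5, 0.5])
y = 3.0 + X @ true_weights + rng.normal(0.0, 0.1, 80)

def fit_pay_policy_line(X, y):
    """Re-estimate regression weights for a fixed set of predictors."""
    A = np.column_stack([np.ones(len(X)), X])   # intercept column
    weights, *_ = np.linalg.lstsq(A, y, rcond=None)
    return weights

# Whole-sample fit, then a replicate fit on the same fixed predictors.
w_whole = fit_pay_policy_line(X, y)
w_replicate = fit_pay_policy_line(X[:72], y[:72])   # one sub-sample removed
```

The same variables enter every fit; only the estimated weights change from one replicate to another, which is what the jackknife exploits.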
FINAL PREDICTED SALARY GRADES
As indicated in the previous section, for each title, for each pay policy
line, we obtained PSGs based on the whole sample and on each of ten replicate
data sets, eleven PSGs in total. These PSGs could then be used to calculate
the final PSGs in terms of mean salary grades and confidence intervals.
First, ten pseudo-values (S) were obtained by subtracting nine times the
predicted salary grade for each replicate from ten times the predicted salary
grade for the whole sample, thus:

    S1  = 10 (PSG whole sample) - 9 (PSG replicate 1)
    S2  = 10 (PSG whole sample) - 9 (PSG replicate 2)
     .
     .
    S10 = 10 (PSG whole sample) - 9 (PSG replicate 10)

This calculation resulted in ten pseudo-values for each pay policy line for
each title. Next, we calculated the mean of each set of ten pseudo-values,
producing one mean pseudo-value for each title for each of the three pay
policy lines.
These mean pseudo-values are the final PSGs reported in Tables 8.1 to 8.3
for female-dominated and disproportionately minority titles and Tables 8.4 to
8.6 for direct-line-of-promotion titles.
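The pseudo-value arithmetic above is simple enough to sketch directly. The PSG numbers below are made-up illustrations, not values from the tables.

```python
# Hypothetical PSGs for one title under one pay policy line.
psg_whole = 8.00
psg_replicates = [7.90, 8.10, 8.00, 7.95, 8.05, 8.02, 7.98, 8.03, 7.97, 8.00]
N = len(psg_replicates)            # ten replicates

# S = N(PSG whole sample) - (N-1)(PSG replicate), for each replicate.
pseudo_values = [N * psg_whole - (N - 1) * r for r in psg_replicates]

# The final PSG is the mean of the ten pseudo-values.
final_psg = sum(pseudo_values) / N
```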
CONFIDENCE INTERVALS
With the data in the form of a mean, it was possible to calculate a con-
fidence interval of the ten pseudo-values around their mean using standard
statistical procedures.4 We followed these steps in calculating the confi-
dence intervals.

    (1) For each title, we calculated the standard deviation
        (SD) of the ten pseudo-values about their mean, using
        the standard formula:

            SD = sqrt[ SUM(Si - S-bar)^2 / (N-1) ],

        where Si is each pseudo-value, S-bar is the pseudo-mean,
        and N-1 equals nine.

4The reader is referred to any basic statistical textbook for an
explanation of confidence intervals and standard errors. One frequently cited
text is Hays, 1973.
    (2) For each title, we calculated the standard error of
        the estimate (SEM)5 of the mean of the pseudo-values:

            SEM = SD / sqrt(10)
    (3) For each title, we corrected the standard error (SEM)
        by multiplying the SEM by the finite population correc-
        tion factor:

            corrected SEM = (SEM) sqrt[ (Population - # returned) / (Population - 1) ]

        The population figure used is the adjusted population
        for each title after title changes, retirements, etc.,
        were taken into account as described previously.
        The number returned is the number of questionnaires
        returned for each title.
The standard error of the estimate of a mean is a range of error around
an estimated mean. Theoretically, it is a measure of the range of means that
would be found if the study were repeated a very large number of times.

5The finite population correction factor is discussed in many basic
textbooks on sampling research. See, for example, Cochran, 1963.
The finite population correction factor takes into account the proportion of
the population from which responses have been received. If that proportion is
very small, then the correction equation is close to one, and no adjustment is
made. If everyone in the population responded, then the population would
equal the number of responses in the equation, the correction factor would
equal zero, and the standard error would equal zero. This is a logical
result, since there is no error in our estimate of a mean of a population when
we have the data for the whole population.
It should be noted that there is no statistical procedure to correct for any
of the error in the entire pay policy line. Therefore, the standard error of
the estimate with the finite population correction factor applied is a slight
underestimate of the standard error. This problem results in confidence
intervals which are slightly underestimated. In most cases, the underestimate
is less than 0.1 salary grade, based on an examination of confidence intervals
calculated with the adjustment factor and the confidence intervals calculated
without it.
    (4) For each title, we calculated the confidence interval (CI):

            CI = 1.96 (SEM)

        Multiplying the SEM by 1.96 and adding and subtracting
        this value from the pseudo-mean gives a 95 percent con-
        fidence interval. That is, if this entire analysis were
        repeated a very large number of times, our final pre-
        dicted salary grades would fall within the calculated
        confidence limits 95 percent of the time.
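Steps (1) through (4) can be sketched end to end. The pseudo-values, population size, and return count below are hypothetical, not figures from the study.

```python
import math

# Hypothetical inputs for one title.
pseudo_values = [8.9, 7.1, 8.0, 8.45, 7.55, 7.82, 8.18, 7.73, 8.27, 8.0]
population, returned = 200, 60   # adjusted population, questionnaires returned

n = len(pseudo_values)
mean_s = sum(pseudo_values) / n

# (1) Standard deviation of the pseudo-values about their mean (N-1 = 9).
sd = math.sqrt(sum((s - mean_s) ** 2 for s in pseudo_values) / (n - 1))

# (2) Standard error of the mean of the pseudo-values.
sem = sd / math.sqrt(n)

# (3) Finite population correction.
sem_corrected = sem * math.sqrt((population - returned) / (population - 1))

# (4) 95 percent confidence interval half-width.
ci = 1.96 * sem_corrected
```

The reported interval is then mean_s plus or minus ci; note that if every incumbent responded, the correction factor, and hence the interval, would be zero.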
Confidence intervals are listed along with predicted salary grades in
Tables 8.1 to 8.3 for female-dominated and disproportionately minority titles
for each of the three pay policy lines and in Tables 8.4 to 8.6 for direct-
line-of-promotion titles. They should be interpreted in the following manner:
An Account Clerk's predicted salary grade is 8.12 +/- 0.91, using the adjusted
pay policy line. That is, our best estimate of the appropriate salary grade
for Account Clerk, given the information we had about the content of the job
from the incumbents we sampled, is 8.12, but if we had sampled a different set
of incumbents, our estimate might have been anywhere between 7.21 (8.12 -
0.91) and 9.03 (8.12 + 0.91). We are 95 percent certain that this range
includes the true salary grade. Most of our confidence intervals are less
than one salary grade, indicating a high level of precision in the predicted
salary grades. Not surprisingly, due to sampling differences, the confidence
intervals for direct-line-of-promotion titles are slightly higher.
ANALYSIS
Our results from Chapter VII demonstrate that, for all pay policy lines,
education, experience, management/supervision, and writing are highly compen-
sated factors in New York State government employment. While the pay equity
estimates are based on the obtained regression equations, New York State could
explicitly choose to change any of the regression weights in order to value
these job factors differently. For some factors, like working conditions,
changing the current regression weight from a negative to a positive value
would affect disproportionately minority as well as predominantly white jobs.
Other changes in regression weights (e.g., data entry) would impact only on
disproportionately female jobs. The estimates of undervaluation reported here
reflect the pay policy of New York State as it currently exists.
The estimated pay equity adjustments for female-dominated, dispropor-
tionately minority, and direct-line-of-promotion titles average 1.6 salary
grades for the adjusted pay policy line and approximately 2.9 salary grades
for the white male pay policy line. There is a strong tendency for job titles
in the lower salary grades to be more undervalued than job titles in higher
salary grades. This is the case no matter which of the pay policy lines is
used. The salary grades of the job titles we examined ranged from grade 1 to
grade 15. Particularly among the clerical and health care system job titles,
it was common to find titles in grade levels 6 and below to be undervalued by
four or five salary grades.
We found no significant overall effect for the percent minority in a
title. However, job titles which are both disproportionately female and dis-
proportionately minority on average are undervalued by approximately one-half
of a salary grade more than the average. For instance, as indicated above,
the average undervaluation using the adjusted pay policy line is 1.6 salary
grades. Among titles that are both disproportionately female and dispropor-
tionately minority this figure is 2.1 salary grades. Using the white male pay
policy line the average undervaluation is 2.9 salary grades. However, for
titles which are both disproportionately female and disproportionately
minority, the figure is 3.3 salary grades.
Out of a total of 185 job titles in the CSEA bargaining unit that are
more than 67.2 percent female and 30.8 percent minority or are jobs in the
direct line of promotion for those female-dominated and disproportionately
minority jobs, we found 142 to be undervalued by more than half a salary
grade using the adjusted pay policy line and 163 using the white male pay
policy line. The number of employees in job titles undervalued by more than
one-half a salary grade is over 55,000 using the adjusted line and over
65,000 using the white male line.
SUMMARY
In order to calculate accurate predicted salary grades (PSGs) and accurate
confidence intervals for female-dominated, disproportionately minority, and
direct-line-of-promotion titles, we used a statistical procedure known as
jackknifing. The general approach was to set up the data so that we could
apply standard statistical procedures to compute the final PSGs in terms of a
mean and a standard error of the mean. This procedure was used for each of
the pay policy lines. We systematically drew ten different large replicate
data sets from the whole data set. We repeated the analysis on each of the
ten replicates. The data from the replicates were then used to calculate
adjusted "pseudo-values." The final predicted salary grades were obtained by
taking the mean of these pseudo-values for each of the three pay policy lines.
The 95 percent confidence intervals around these means are the reported
confidence intervals for the predicted salary grades.
The chapter concludes by reporting the estimates of undervaluation for
female-dominated, disproportionately minority, and direct-line-of-promotion
titles. Many of the titles are undervalued by at least one salary grade.
Estimates vary as a function of which pay policy line is used.
TABLE 8.1

PREDICTED SALARY GRADES AND CONFIDENCE INTERVALS
FOR FEMALE-DOMINATED AND DISPROPORTIONATELY MINORITY TITLES -
OVERALL PAY POLICY LINE

TITLE     TITLE                   CURRENT   PREDICTED   CONFIDENCE
CODE                              SALARY    SALARY      INTERVAL
                                  GRADE     GRADE

 911200   LABORATORY ANIMAL CRT     5.0       4.17         .45
 911300   SENR LAB ANIMAL CRTKR     8.0       7.28         .27
1836100   INST RTL STR CLERK        5.0       5.48         .24
1935000   PARK REGN BUS ASSNT      14.0      16.83
2134101   TRANS PLNG AIDE 1         5.0       8.65
2337110   CONSUMER SRVS SPEC 1     14.0      16.42
2501200   CLERK                     3.0       4.52
2501300   SENR CLERK                7.0       6.91
2501317   SENR CLERK SURROGATE      7.0       9.81
2501320   SENR CLERK CORP SRCH      7.0       1.83
2501500   PRIN CLERK               11.0      11.15
2501517   PRIN CLERK EST TX APP    11.0       9.67
2501590   PRIN CLERK PERSONNEL     11.0      11.87
2502200   COMP CLAIMS CLERK         5.0       7.54
2502300   SENR COMP CLMS CLERK      8.0       9.95
2503200   FILE CLERK                3.0       4.78
2503300   SENR FILE CLERK           7.0       7.10
2503500   PRIN FILE CLERK          11.0      11.15
2504200   ADMITTING CLERK           4.0       5.07
2504300   SENR ADMITTING CLERK      8.0      10.50
2506100   NURSING STATION CLK 1     7.0       5.01
2508400   DRIVER IMPV ADJDTN C      4.0       6.23
2508600   ADJUDCTN CORRPDNC CLK     4.0       6.61
TABLE 8.1
(continued)
TITLE TITLE CURRENT PREDICTED CONFIDENCE
CODE SALARY SALARY INTERVAL
GRADE GRADE
100200   ACCOUNT CLERK           5.0       7.00        1.13
100300   SENR ACCT CLERK         9.0       9.28         .67
100500   PRIN ACCT CLERK        14.0      13.35         .72
102100   PAYROLL AUDIT CLK 1     5.0       7.41         .21
102200   AUDIT CLERK             5.0       6.84         .21
102230   PAYROLL AUDIT CLK 3    14.0
102300   SENR AUDIT CLERK        9.0       7.83         .37
105200   CASHIER                 9.0       7.37         .16
112000   TOLL COLLECTOR          9.0       6.50         .69
130110   EMPS RET BNFTS EXMR 1   9.0       8.33         .22
130310   EMPS RET BNFTS EXMR 3  15.0      13.68         .00
133100   EMPS RET MBRSP EXMR 1   5.0       6.39         .19
133200   EMPS RET MBRSP EXMR 2   7.0       8.25         .12
702200   STATISTICS CLERK        5.0       7.08         .23
702300   SENR STATISTICS CLERK   9.0       9.33         .15
702500   PRIN STATISTICS CLERK  12.0      12.22         .24
750300   SENR ACTUARIAL CLERK    9.0       9.97         .29
750500   PRIN ACTUARIAL CLERK   12.0      13.17         .11
822010   DATA PROC CLK 1         5.0       6.82         .22
822020   DATA PROC CLK 2         9.0       8.92         .20
849200   DATA ENTRY MACH OPER    4.0       7.56         .83
849300   SENR DATA ENTY MACH O   7.0       9.69         .53
849500   PRIN DATA ENTY MACH O  11.0      11.43         .15
TABLE 8.1
(continued)
TITLE TITLE CURRENT
CODE SALARY
GRADE
2510100 PURCHASING ASSNT 1 720
2510200 PURCHASING ASSNT 2 11.20
2512200 IDENT CLK 400
2512300 SENR' IDENT CLERK 920
2513300 _SENR NED RECORDS CLRK 860
2513400 TREATMNT UNIT CLK 720
2574300 | SENR UNDERWRTNG CLERK 8.0
2514400 SENR PAYROLL AUDT CLK Bo
2515200 CREDENTIALS ASSISTANT eT)
2521100, MOTOR VEN TITLE CLK 14 400
2521200 MOTOR VEH TITLE CLK 2 7.0
2522210 LEGAL ASsNq 1° 1220
2540100 MOTOR VEH REP 1 a)
2540200 MOTOR .VEH REP 2 700
2540300 MOTOR VEH REP 3 9.0
2540510 SUPVG MOTOR VEH REP 1 1100
2553310 TRANS OFFC ASSNT 1 5.0
2553320 TRANS OFFC ASSNT 2 900
2557100 APPS CNTRL CLK 1 5.0
2558100 PAYROLL CLERK 4 5.0
2558200 PAYROLL CLERK 2 920
2558300 PAYROLL CLERK 3 1600
2559100 LIBRARY CLERK 1 5.0
PREDICTED
SALARY
GRADE
Beh
12023
5.37
8e46
CONFIDENCE
INTERVAL
17
28
21
TABLE 8.1
(continued)
TITLE TITLE CURRENT PREDICTED CONFIDENCE
CODE SALARY SALARY INTERVAL
GRADE GRADE
2559200 LIBRARY CLERK 2 70 9019 51
2559300 LIBRARY CLERK 3 11.0 16.39 222
2560100 STUDENT LOAN CLK 1 400 6.37 016
2560200 STUDENT LOAN CLK 2- 8.0 10.92 229
2568100 EMP INS REVWNG CLK 1 4.0 5.71 217
2569100 DISABLTY DETRN RV C 1 520 8.30 77
2601200 TYPIST 3.0 5086 . 233
2601300 SENR TYPIST 720 9015 363
2601310 SENR TYPIST LAW 7.0 9.90 15
2601500 PRIN TYPIST 1100 11.95 0.00
2605200 DICT MACH TRANS 420 5.94 +60
2606100 INFO PROCSSG SPEC 1 60 7.58 264
_ 2606200 INFO PROCSSE SPEC 2 9.0 9292 e20
2606300 INFO PROCSSE SPEC 3 1220 11616 220
2609000 SECRETARIAL STENO 12.0 13.08 73
2610200 STENOGRAPHER 500 5098 37
2610300 SENR STENOGRAPHER 940 7.90 58
2610500 PRIN STENOGRAPHER 1260 10.68 1.07
2610520 PRIN STENOGRAPHER LAW 12.0 11062 30
2612200 HEARING REPTR 1520 1627 053
2703100 TELEPHONE OPER TYP 420 6066 24
2703200 TELEPHONE OPER 4.0 6064 +74
2703300 SENR TELEPHONE OPER 8.0 10.02 225
2610320 SENR STENOGRAPHER LAW 9.0 8.34 1.09
TITLE
CODE
2706100
2712200
2715200
2715220
2210100
2859010
3004000
3004500
3014000
3016000
3021000
3102300
3102600
3106100
3124200
3124300
3124400
3137200
3302200
3302300
3307000
5302100
5303100
TABLE 8.1
(continued)
TITLE CURRENT
SALARY
GRADF
DIRCTRY INFO SYS OP 7 5.0
CALCULATING MACH OP 400
BOOKKEEPING MCH OP 5.0
BOOKKEEPING MCH OP DS 5.0
ADMNY ATDE 11.0
STARE UNIV PREM AIDE = 14.60
HOUSEKEEPER 6.0
SUPVG HOUSEKEEPER 9.8
CLEANER 4.0
JANITOR 6.0
ELEVATOR OPERATOR 5.0
cooK 920
HEAD COOK 12.0
DIETITIAN TECHN 9-0
FOOD SERVICE WKR 1 4e0
FOOD SERVICE WKR 2 720
FOOD SERVICE WKR 3 9.0
FOODS SUPPLS PROCESSOR 6.0
LAUNDERER 400
SENR LAUNDERER 7.0
CLOTHING CLERK 6.0
BARBER 7.0
BEAUTICIAN 7.0
PREDICTED CONFIDENCE
SALARY INTERVAL
GRADE
6-00 225
9428 17
- heh 232
3032 16
10.86 63
ee 57 |
6.69 1.07
10618 233
3090 88
491 285
4.32 +74
12.30 1232
15684 224
12009 354
4.36 273
9219 -97
11045 233
1052 33
hol 98
6675 73
2018 37
9051 227
9.80 38
TITLE
CODE
5350200
5359000
$500200
5501100
5502200
5518500
5532101
5532202
554 C300
5544100
5$70300
5570400
6201000
6202200
6204000
6210000
6211510
6211520
6214200
6219200
6220200
6220300
6223200
TABLE 8.1
(continued)
TITLE CURRENT
SALARY
GRADE
DENTAL ASSNT ; 6.0
DENTAL HYGLIENTST 10.0
LICENSED PRAC NRS 920
HOSP ATTENDANT 1 4.0
HOSP CLINICAL TECHN 6.0
CONTY RESDNC AIDE 920
HOSP CLINICAL ASSNT 1 4e0
HOSP CLINICAL ASSNT 2 7.0
PSYCH THERAPY AIDE 9.0
MENTAL HYG HEWY HOA 1 920
MENTAL HYG THER AIDE 7 9.0
RENTAL HYG THER AST 1 11.0
LABORATORY HELPER 1.0
LABORATORY WORKER 40
LABORATORY AIDE : 5.0
XRAY AIDE 400
TEACHING HOSP STL ST1 6.0
TEACHING HOSP STL ST2 8.0
ELECTROENCPHGRPH TECH . Bed
CENTRAL MED SUP TECH 6.0
HISTOLOGY TECHNICIAN 9.0
“SeNR HESTOLOGY TECH 12.0
ELECTROCARDOGRPH TECH 820
PREDICTED
SALARY
GRADE
CONFIDENCE
INTERVAL
022
015
TABLE 8.1
(continued)
4 TITLE TITLE CURRENT PREDICTED CONFIDENCE
CODE SALARY SALARY INTERVAL
GRADE GRADE
| 6225100 MEDICAL LAB TECH 1 900 11024 227
| 6301000 PHARMACY AIDE 520 4.91 16
| 6818000 ASSNT WKRS COMP EXNR 920° 8.63 95
Zz 6824100 WORKERS COMP REVW AN 1400 12023 1.06
6893100 MEDICAID CLMS EXMNR 1 720 7004 028
6893200 MEDICAID CL™S EXMNR 2 11.0 10.91 236
7150000 MAINTCE HELPER 6.0 5.95 +60
7202022 -MAINTCE ASSNT REFRIGEN 8.0 11433 , 040
7611000 CHAUFFEUR 720 7297 60
7611300 SENR CHAUFFEUR 9.0 B27 259
| 7614000 TRACTOR TRATLER OPER 8.0 - 6200 - 068
76176100 . MOTOR VEH OPER 7-0 7209 1.70
7617200 BUS DRIVER. 8.0 5.27 059
7711000 = BINDERY HELPER , 320 2.84 226
| 8261202 YOUTH DIV AIDE 2 9.0 9076 1618
| 8261303 YOUTH DIV AIDE 3 "4240 12018 39h
8261400 YOUTH DIV AIDE 4 14 0 11614 1072
| 8340100 ALCLSM REHAB ASSNT 7 11.0 13.78 69
| 8342200 REHAB INTERVIEWER S $ 9.0 12.13 220
' 8410100 TRAINING AIDE 920 6085 210
8431200 EMPL SEC CLK 5.0 6260 16
8431300 SENR EMP SEC CLERK 70 6.82 +66
8431500 PRIN EMP SEC CLERK 11.0 11023 028
TITLE
CODE
8621100
8701600
8937100
: 8970100
TABLE 8.1
(continued)
TIILE CURRENT
SALARY
GRADE
PAROLE PROG AIDE 11.0
WATCHMAN 3.0
MOTOR VEH INS SV RP 1 9.0
ORIVER IMPRV ADJUDCTR 90
PREDICTED CONFIDENCE
SALARY INTERVAL
GRADE
12263 028
Seth 1206
7.87 015
9.01 023
TABLE 8.2

PREDICTED SALARY GRADES AND CONFIDENCE INTERVALS
FOR FEMALE-DOMINATED AND DISPROPORTIONATELY MINORITY
TITLES - ADJUSTED PAY POLICY LINE

TITLE     TITLE                   CURRENT   PREDICTED   CONFIDENCE
CODE                              SALARY    SALARY      INTERVAL
                                  GRADE     GRADE
100200   ACCOUNT CLERK           5.0       8.12         .91
100300   SENR ACCT CLERK         9.0      10.18         .55
100500   PRIN ACCT CLERK        14.0      13.94         .71
102100   PAYROLL AUDIT CLK 1     5.0       8.40         .07
102200   AUDIT CLERK
102230   PAYROLL AUDIT CLK 3
102300   SENR AUDIT CLERK
. 105200 CASHIER 9.0 R039 205
112000 TOLL COLLECTOR 9.0 7.60 016
| 130110 | EMPS RET BNFTS EXMR 1 920: 8.95 212
| 130310 EMPS RET BNFTS EXMR 3 1560 13.68 -10
| 133100 EMPS RET MBRSP EXAR 1 500 7.13 +08
i 133200 EMPS RET MERSP EXMR 2 = 7.0 | 8,97 -04
| 702200 STATISTICS CLERK 5.0 £022 - > 409
702300. SENR STATISTICS CLERK ar) 10024 205.
702500- ‘PRIN STATISTICS CLERK 120 12070 2.02
| 750300 SENR ACTUARTAL CLERK 9.0 10.83 209
750500 PRIN ACTUARIAL CLERK 12.0 13.71 +07
: 822010 DATA PROC CLK 1 5.0 7.67 210
822020 DATA PROC CLK 2 9.0 9.61 206
849200 DATA ENTRY “ACH OPER 4.0 2.67 267
849300 SENR DATA ENTY PACH O 70 10.76 045
\ 849500 PRIN DATA ENTY MACH O 11.0 12043 208
TABLE 8.2
(continued)
TITLE TITLE CURRENT PREDICTED CONFIDENCE
COOE SALARY SALARY INTERVAL
GRADE GRADE
941200 LABORATORY ANIMAL CRT 5.0 5205 018
911300 SENR LAR ANIMAL CRTKR 8.0 772 299
1836100 INST RTL STR CLERK 500 5039 297
1935000 PARK REGN BUS ASSNT 1420 17027 205
2134101 TRANS PLNG AIDE 1 5.0 9055 o11
2337110 CONSUMER SRVS SPEC 1 “4400 1heT7I 005
2507200. CLERK 3.0 5.83. AY}
2501300 SENR CLERK . 700 7294 . 038
2501317 SENR CLERK SURROGATE 720 10284 0.00
25017320 SENR CLERK CORP SRCH 720 3.22 206
2501500 PRIN CLERK : 41.0 11.98 068
2501517 PRIN CLERK EST TX APP 11.20 11014 206
2501590 PRIN CLERK PERSONNEL 11.20 12.70 207
2502200 comp CLAIMS CLERK: . 50 8.50 +08
2502300 SENR COMP CLAS CLERK 8.0 10082 207
2503200 FILE CLERK 3.0 6.25 68
2503300 SENR FILE CLERK 7.0 8.49 240
2503500 PRIN FILE CLERK 1160 12016 07
2504200 ADMITTING CLERK 4.0 6093 213
2504300 SENR ADMITTING CLERK 8.0 11024 211
2506100 NURSING STATION CLK 1 7.0 6.08 1.02
2508400 dRIVER IMPY ADJDIN C 4.0 7265 11
2508600 ADJUDCTN CORRPONC CLK 420 777 208
"UE
25101700
2510200
2512200
2512300
2513300
2514300
2514400
2515200
2521100 —
2521200
2522210
2540100
2540200
2540300
2540510
2553310
2553320
2557100
2558100
2558200
2558300
2559100
TABLE 8.2
(continued)
TITLE                   CURRENT
                        SALARY
                        GRADE
PURCHASING ASSNT 1 7.0
PURCHASING ASSNT 2 11.0
IDENT CLK 4.0
SENR IDENT CLERK 9-0
SENR MED RECORDS CLK 8.0
2513400  TREATMNT UNIT CLK       7.0
SENR UNDERWRTNG CLERK 8.0
SENR PAYROLL AUDT CLK 8.0
CREDENTIALS ASSISTANT 6.0
MOTOR VEH TITLE CLK 1 4.0
MOTOR VEH TITLE CLK 2 7.0
LEGAL ASSNT 1 1260
MOTOR VEH REP 1 4.0
MOTOR VEH REP 2 70
MOTOR VEN REP 3 900
SUPVG MOTOR VEH REP 1 11.0
TRANS OFFC ASSNT 1 5.0
TRANS OFFC ASSNT 2 900
APPS CNTRL CLK 1 520
PAYROLL CLERK 1 5.0
PAYROLL CLERK 2 9.0
PAYROLL CLERK 3 14.0
LIBRARY CLERK 1 520
PREDICTED CONFIDENCE
SALARY INTERVAL
GRADE
9.25 204
13.11 07
6034 213
9.27 ef2
8.81 07
a: 7. 09
10.16 07
9.21 204
9.09 07
7069 37
10.81 07
16.67 05
8.34 207
7298 «08
8.75 253
12.85 206
6075 206
~" 9467 204
4.81 Te
7.78 207
10.07 +06
13085 205
7023 234
TITLE
CObE
2559200
2559300
2560100
2560200
2568100
-2569100
2601200
260 1300
2601310
26015.00
2605200 -
2606100 ©
2606200
2606300
2609000
2610200
2610300
2610500
2610520
2672200
2703100
2703200
2.703300
2610320
TABLE 8.2
(continued)
TITLE CURRENT
SALARY
GRADE
LIBRARY CLERK 2 7.0
LIBRARY CLERK 3 11.0
STUDENT LOAN CLK 1 400
STUDENT LOAN CLK 2 8.0
EMP INS REVWNG CLK 7 420
DISABLTY DETRM RV c1 5.0
TYPIST 3.0
SENR TYPIST 7.0
SENR TYPIST LAW 7.0
PRIN TYPIST 11.0
DICT MACH TRANS 4.0
INFO PROCsSG SPEC 1 6.0
INFO PROCSSG SPEC 2 920
INFO PROCSSG SPEC 3 1260
SECRETARIAL STENO. . 12.0.
STENOGRAPHER ~ 5.0
SENR STENOGRAPHER 920
PRIN STENOGRAPHER” "12.0
PRIN STENOGRAPHER LAY 12.0
HEARING REPTR: “— “1500
TELEPHONE OPER TYP . 4.0
TELEPHONE OPER 460
SENR TELEPHONE oP! 8.0
SENR STENOGRAPHER LAW 9.0
PREDICTED CONFIDENCE
SALARY INTERVAL
GRADE
10015 238
15.05 206
7212 08
11066 07
6084 205
9.50 223
7012 21
40.22 51
10.57 204
12.70 0.00
7.15 43
8.65 255
10075 206
‘1.900 206
To) ery)
Poe ‘2
9.06 052
11067 1008
12061 012
11.32 49
‘7.54 “08
“Te7t 63
10.94 DB
_ 9.88 1.28
TITLE
CODE
2706100
2712200
2775200
2715220
2810100
. 2859010
3004000
3004500
3014000
3016000
3021000
3102300
3102600
3106100
3124200
3424300
3124400
3137200
3302200
3302300
3307000
5302100
5303100
TABLE 8.2
(continued)
TITLE CURRENT PREDICTED
SALARY SALARY
GRADE GRADE
DIRCTRY INFO SYS OP 1 5.0 6096
CALCULATING MACH OP 40 10.20
BOOKKEEPING ACH OP 5.0 6026
BOOKKEEPING MCH OP DS 520 9036
ADMNY -AIDE 11.0 112665
STATE UNIV PREM AIDE 1460 16.22
HOUSEKEEPER 6.0 6.90
SUPVG HOUSEKEEPER 9.0 10043
CLEANER 400 6e25
JANITOR © 600 5.23
ELEVATOR OPERATOR 5.0 5.75
cook 900 12040
HEAD COOK 12 00 15281
DIETITIAN TECHN 9-0 13026
FOOD SERVICE WKR 1 4.0 4.71
FOOD SERVICE WKR 2 7.0 9061
FOOD SERVICE WKR 3 9.0 12636
FOODESUPPLS PROCESSOR 600 2.06
LAUNDERER 4.0 4.66
SENR LAUNDERER 720 6295
CLOTHING CLERK 4.0 3.31
BARBER 7.0 10.37
BEAUTICIAN 70 10.84
CONFIDENCE
INTERVAL
207
009
009
TABLE 8.2
(continued)
TITLE «TITLE CURRENT PREDICTED CONFIDENCE
CODE SALARY SALARY INTERVAL
GRADF GRADE
$350200 DENTAL ASSNT 660 6058 206
5359000 DENTAL HYGIENIST 10.0 1M.41 204
5500200 LICENSED PRAC NRS 9.0 11046 +70
5501100 HOSP'-ATTENDANT 1 4.0 6085 59
5502200 HOSP CLINICAL TECHN 6.0 9.00 213
5518500 _COMTY RESDNC AIDE 9.0 9.50 _ 956
5532104 HOSP CLINICAL ASSNT 1 4.0 6077 M4
5532202. HOSP CLINICAL ASSNT 2 720 8.38 283
5540300 PSYCH THERAPY AIDE 9.0 10074 + 018
5544100 mENTAL HYG WFWY HOA 4 9.0 9.97 206
5570300 MENTAL HYG THER AIDE 1. 9.0 951 ese
5570400 MENTAL HYG THER AST 1 11.0 13027 » B82
6201000 LABORATORY HELPER 1.0. 3.39 020
6202200 LABORATORY WORKER - 4.0. 5.51 ar}
6204000 LABORATORY AIDE 500 5076 011
6210000 XRAY AIDE 40 ” 6e82 210
6211510 TEACHING HOSP STL ST1_ 6.0 7.50 °° 212
6211520 TEACHING HOSP STL ST2_ 860 9052 10
6214200 ELECTROENCPHGRPH TECH 8.0 10.37 205
6219200 CENTRAL MED SUP TECH 600 6256 07
6220200 | HISTOLOGY TECHNICIAN 9.0 8.99 12
6220300 © SENR HISTOLOGY TECH 1200 16.34 07
6223200 ELECTROCARDOGRPH TECH 8.0 8.67 C006
TITLE
CODE
6225100
6301000
6818000
6824100
6893100
6893200
7150000
7202022
7611000
7611300
7614000
7616100
7617200
771.1000
8261202
8261303
8261400
8340100
8342200
8610100
8434200
8631300
TABLE 8.2
(continued)
TITLE                   CURRENT
                        SALARY
                        GRADE
MEDICAL LAB TECH 1 9.0
PHARMACY AIDE 5.0
ASSNT WERS COMP EXMR 9.0
WORKERS COMP REVY AN 1420
MEDICAID CLMS EXMNR 1 7.0
MEDICAID CLMS EXMNR 2  11.0
MAINTCE HELPER 6.0
MAINTCE ASSNT REFRIGN 8.0
” CHAUFFEUR 7.0
SENR CHAUFFEUR 9.0
TRACTOR TRAILER OPER 8.0
MOTOR VEH. OPER , 7-0
BUS DRIVER 80
BINDERY HELPER 7 360
YOUTH DIV AIDE 2 920
YOUTH DIV AYDE 3 12.0:
YOUTH DIV AIDE & 140
ALCLSM REHAB ASSNT 1 1160
REHAG INTERVIEWER S S 920
TRAINING AIDE 9.0
EMPL SEC CLK 5.0
SENR EMP SEC CLERK 760
PRIN EMP SEC CLERK 11.0
8431500
PREDICTED
SALARY
GRADE
11043
5081
9079
13.38
8.15
6031
10296
8.64
9022
4.83
7059
5.78
"3020 -
10059
+ 13002
11.90
The
“43000
7279
7.74
7.83
12013
41063
255
11
208
ars |
ohh
+26
oth
210
16
1088
-10
215,
277
“Th
99
210
08
203
+04
5h
off
TITLE
CODE
8621100
8701600
8937100
8970100
TABLE 8.2
(continued)
TITLE CURRENT
SALARY
GRADE
PAROLE PROG AIDE 11.40
WATCHRAN 3.0
MOTOR VEH INS SV RP 1 9.0
DRIVER IMPRV ADJSUDCTR 9.0
PREDICTED
SALARY
GRADE
13026
6.37
CONFIDENCE
INTERVAL
009
211
204
206
TABLE 8.3
PREDICTED SALARY GRADES AND CONFIDENCE INTERVALS
FOR FEMALE-DOMINATED AND DISPROPORTIONATELY MINORITY
TITLES - WHITE MALE LINE (10 VARIABLES)
TITLE TITLE CURRENT PREDICTED CONFIDENCE
CODE SALARY SALARY INTERVAL
GRADE     GRADE
100200   ACCOUNT CLERK           5.0       9.23         .75
100300   SENR ACCT CLERK         9.0      10.82         .50
100500   PRIN ACCT CLERK        14.0      13.93         .27
102100   PAYROLL AUDIT CLK 1     5.0       9.61         .16
102200   AUDIT CLERK             5.0
102230   PAYROLL AUDIT CLK 3    14.0      12.30         .16
102300   SENR AUDIT CLERK        9.0      10.27         .50
105200   CASHIER                 9.0       9.52         .09
112000   TOLL COLLECTOR          9.0      10.64         .27
130110   EMPS RET BNFTS EXMR 1   9.0       8.24
130310   EMPS RET BNFTS EXMR 3  15.0      12.77         .25
133100   EMPS RET MBRSP EXMR 1   5.0       8.74         .29
133200   EMPS RET MBRSP EXMR 2   7.0      10.37         .11
702200   STATISTICS CLERK        5.0      10.35         .20
702300   SENR STATISTICS CLERK   9.0      11.28         .08
702500   PRIN STATISTICS CLERK  12.0      13.62        1.94
750300   SENR ACTUARIAL CLERK    9.0      14.22         .20
750500   PRIN ACTUARIAL CLERK   12.0      13.76         .16
822010   DATA PROC CLK 1         5.0       9.76         .26
822020   DATA PROC CLK 2         9.0      10.09         .12
849200   DATA ENTRY MACH OPER    4.0      11.67         .68
849300   SENR DATA ENTY MACH O   7.0      12.27         .43
849500   PRIN DATA ENTY MACH O  11.0      16.03         .16
911200
911300
1836100
1935000
2134101
2337110
2501200
2501300-
2501317
2501320
2501500
2501517
2501590
2502200
2502300
2503200 .
2503300
2503500.
2504200
2504300
2506100
2508400
2508600
TABLE 8.3
(continued)
TITLE                   CURRENT
                        SALARY
                        GRADE
LABORATORY ANIMAL CRT 5.0
SENR LAB ANIMAL CRIKR 8.0
INST RTL STR CLERK 5.0
PARK REGN BUS ASSNT 16.0
TRANS PLNG AIDE 4 5.0
CONSUMER SRVS SPEC 17 16.0
CLERK 3.0
SENR CLERK 7.0
SENR CLERK’ SURROGATE. 7.0
'SENR CLERK CORP SRCH 760
PRIN CLERK 11.0
PRIN CLERK EST TX APP 11.0
PRIN CLERK PERSONNEL 11.0
COMP CLAIMS CLERK. 5.0
SENR COMP CLNS CLERK * 8.0
FILE elem’ * 300
SERR FILE CLERK 7.0
PRIN FILE CLERK 14.0
‘ADMITTING CLERK re)
SENR ADMITTING CLERK 8.0
NURSING STATION CLK 1 7.0
‘DRIVER MPV ADIDTH C 400
ADJUDCTN CORRPDNC CLK 400
PREDICTED
SALARY
GRADE
8.55
9.15
6e34
16.03
11062
14.54
8.66
Qo
10068
6.01
12.90
11.88
13046
10.03
11.27
7) ane
14006
43.60
6046
(14623
aeorT
10016
9.94
CONFIDENCE
INTERVAL
232
216
234
.07
229
+12
ay)
237
0.00
.08
248
224
TITLE
CODE
2519100
2510200
2512200
2512300
2513300
2513400
2514300
2544400
2515200
2521100
2521200
2522210
2540100
2540200
2540300
2540510
2553310
2553320
2557100
2558100
2558200
2558300
2559100
TABLE 8.3
(continued)
TITLE CURRENT PREDICTED
SALARY SALARY
GRADE GRADE
PURCHASING ASSNT 1 7.0 10022
PURCHASING ASSNT 2 11.0 13015
IDENT CLK 4.0 10034
SENR IDENT CLERK - 920 11049
SENR MED RECORDS CLRK 8.0 1020
TREATMNT UNIT CLK ~~ 700 ST
SENR UNDERWRTNG CLERK 8.0 10069
SENR PAYROLL AUOT CLK 8.0 10.88
CREDENTIALS ASSISTANT 420 7.99
MoTOR VEH TITLE CLK 1 400 11.02
MOTOR VEH TITLE CLK 2 7.0 11624
LEGAL ASSNT 1 1260 15027
MOTOR VEH REP 1 ; 4.0 9696
MOTOR VEH REP 2 7.0 “9.10
moTOR VEH REP 3 9.0 9200
SUPVE NOTOR VEH REP.1 . 1140 12290
“TRANS OFFC ASSNT'S Sg 7.37
TRANS OFFC ASSNT 2 * "ged 9.30
APPS CNTRL CLE 5.0 8.62
PAYROLL CLERK 1 “520 8.78
PAYROLL CLERK 2 4 940 10658
PAYROLL CLERK 3 ‘1400 14.33
LIBRARY CLERK I 8.25
CONFIDENCE
INTERVAL
210
o11
~- == 40---
216"
13
14
233
217
a6
13
mo
38,
ome
13
eg?
230
oth
10
308
222
TABLE 8.3
(continued)
VITLE TITLE CURRENT PREDICTED CONFIDENCE
CODE SALARY SALARY INTERVAL
GRADE GRADE
2559200 LIBRARY CLERK 2 720 10045 231
2559300 LIBRARY CLERK 3 11.0 16.60 209
2560100 STUDENT LOAN CLK 1 4.0 7.05 212
2560200 STUDENT LOAN CLK 2 8.0 12292 224
2568100 EMP INS REVWNG CLK 17 4.0 8299 «13
2569100 “DISABLTY DETRN RVC 1 5.0 11655 KB
2601200 TYPIST : 3.0 8.95 228
2601300 SENR TYPIST 7.0 14040 38
2601310 SENR TYPIST LAW 7.0. 9.78 206
2601500 PRIN TYPIST 11.0 . 43012 0.00
2605200 . DICT MACH TRANS 4.0 9610 4h?
2606100 INFO PROCSSE SPEC 1 6.0 10242 ar)
2606200 INFO PROCSSG SPEC 2 900 W176 o15
2606300 INFO PROCSSG SPEC 3 .. 1240 213 “12
2609000 SECRETARIAL STENO 12.0 = 57
2610200 “STENOGRAPHER "520° 636
Zei0300 Sena stewocRApieR © 9.0" aks
2610500 “PRIN STENOGRAPHER 12.0 13028 ‘49
2610520 “PRIN STENOGRAPHER LAW 12.0 13027 210
2612200 HEARING REPTR 1520 “42062 7 eh
2703100 ‘TELEPHONE OPER TYP be0. 9028
2703200 " TELEPHONE OPER * he0 "9063 i Soya
2703300 | SENR TELEPHONE OPER 8.0 41066 0 083
2610320 SENR STENOGRAPHER LAW ‘ 9.0 10.87 1.42
2706100
2742200
2715200
2715220
2810100 ADMNV AIDE _
2859010
*~ 3004000
3904500
3014000
3016000
3021000
3102300
3102600
3106100
3124200
3124300
3124400
3137200
3302200
3302300
3307000
5302100
5303100
- 195 -
TABLE 8.3
(continued)
TITLE                   CURRENT
                        SALARY
                        GRADE
DIRCTRY INFO SYS OP 1 5.0
CALCULATING MACH OP 4.0
BOOKKEEPING RCH OP 5.0
BOOKKEEPING NCH OP 0S 5.0
ADMNV AIDE             11.0
“STATE UNIV PREM AIDE 11.0
HOUSEKEEPER 620
SUPVG HOUSEKEEPER “920
CLEANER 420
JANITOR 6.0
ELEVATOR OPERATOR 5.0
cooK 920
HEAD COOK 12.0
DIETITIAN TECHN 9.0
FOOD SERVICE’ WKR 1 4.0
FOOD SERVICE WKR 2 7.0
FOOD SERVICE WKR 3 9.0
FOODESUPPLS PROCESSOR 620
LAUND ERER 60
SENR LAUNDERER 7.0
CLOTHING CLERK 4.0
BARBER 720
BEAUTICIAN
11.07
PREDICTED
SALARY
GRADE
8.38
17061
7o28
CONFIDENCE
INTERVAL
026
021
221
018
_ 0h
TITLE
CODE
5350200
35359000
5500200
35501100 °
5502200
5518500
5532101
5532202
5540300
5544100
5570300
5570400
6201000
6202200
6204000
6210000.
6211510
6211520
6214200
6219200
6220200
6220300
6223200
TABLE 8.3
(continued)
TITLE CURRENT
SALARY
GRADE
DENTAL ASSNT 600
DENTAL HYGIENIST 10.0
LICENSED PRAC NRS 9.0
HOSP ATTENDANT 4 em)
HOSP CLINICAL TECHN 6.0
COMTY RESDNC AIDE 9.0
HOSP CLINICAL ASSNT 1 420
HOSP CLINICAL ASSNT 2 7.0
PSYCH THERAPY AIDE 9.0
HENTAL HYG HEWY HAT = 900
MENTAL HYG THER AIDE 1 900
MENTAL HYG THER AST 1 11.0
LABORATORY HELPER © ‘400
LABORATORY WORKER 4.0
LABORATORY AIDE) sco,
XRAY AIDE! ame . a e “620
TEACHING HOSP STL S74 “6.0
TEACHING HOSP ST ST2° BAO
ELECTROENCPHGRPH TECH 8.0
CENTRAL MED SUP TECH ’ 660
HISTOLOGY TECHNICIAN: 9.0
SENR HISTOLOGY TECH». 12600
ELECTROCARDOGRPH TECH | 8.0
PREDICTED
SALARY
GRADE
7254
9058
10.72
7093
411029
9.89
7672
8.87
10-92
1068
9699
13.67
7.95
“7081
8.85"
CONFIDENCE
INTERVAL
208
07
292
77
239
57
026
396
030
«09
- 08D
270
038
028
“233
324
233
205
-10
025
235
018
13
TITLE
CODE
6225100
6301000
6818000
6824100
6893100
6893200
7150000
7202022
7641000
7611300
7614000
7616100
7617200
7711000
8261202
(8261303.
8261400 _
8340100
8362200
8410100
8431200
8431300
8431500
TABLE 8.3
(continued)
TITLE CURRENT
SALARY
GRADE
MEDICAL LAB TECH 1      9.0
PHARMACY AIDE 5.0
ASSNT WKRS COMP EXMR 9.0
WORKERS COMP REVW AN 14.0
MEDICAID CLMS EXMNR 1 7.0
“MEDICAID CLES EXMNR 2 11-0
MAINTCE HELPER 6-0
MAINTCE ASSNT REFRIGN 8-0
CHAUFFEUR 7.20
SENR CHAUFFEUR 900
TRACTOR TRAILER OPER 8.0
MOTOR VEH OPER 7.0
BUS DRIVER 8.0"
BINDERY HELPER 320
yout DIV AIDE 2 7 920
YOUTH DIV AIDE 3. 12.20
| youri, brv ADE, & 16 0
TAL CLSM REHAB ASSNT 1 14.0
"REHAB INTERVIEWER S S 960
TRAINING AIDE,” . 9-0
EMPL SEC CLK 5.0
SENR EMP SEC CLERK 7.0
“PRIM EMP SEC CLERK 11.0
PREDICTED SALARY GRADE
[column values garbled and misaligned in source]
CONFIDENCE INTERVAL
[column values garbled and misaligned in source]
- 198 -
TABLE 8.3
(continued)

TITLE     TITLE                   CURRENT   PREDICTED   CONFIDENCE
CODE                              SALARY    SALARY      INTERVAL
                                  GRADE     GRADE
8621100   PAROLE PROG AIDE          11.0      14.39        .25
8701600   WATCHMAN                   3.0       8.96        .52
8937100   MOTOR VEH INS SV RP 1      9.0       9.85        .16
8970100   DRIVER IMPRV ADJUDCTR      9.0      10.39        .22
- 199 -
TABLE 8.4
PREDICTED SALARY GRADES AND CONFIDENCE INTERVALS
FOR DIRECT-LINE-OF-PROMOTION TITLES -
OVERALL PAY POLICY LINE

TITLE     TITLE                   CURRENT   PREDICTED   CONFIDENCE
CODE                              SALARY    SALARY      INTERVAL
                                  GRADE     GRADE
102220    PAYROLL AUDIT CLK 2       9.0       7.36        .40
102500    PRIN AUDIT CLERK         14.0      11.76        .33
130210    EMPS RET BNFTS EXMR 2    12.0       8.98       1.20
133300    EMPS RET MBRSP EXMR 3    14.0       9.72        [illegible]
822030    DATA PROC CLK 3          12.0      13.16        .33
911500    PRIN LAB ANIMAL CRTKR    14.0      14.96        .42
2134202   TRANS PLNG AIDE 2         9.0      10.28        .78
2522220   LEGAL ASSNT 2            14.0      16.39        .41
3004600   HEAD HOUSEKEEPER         12.0      14.13        .48
3016500   SUPVG JANITOR             9.0       9.03        .38
3016600   HEAD JANITOR             12.0      14.03       1.49
3302600   HEAD LAUNDRY SUPVR       12.0      10.94       1.10
5518800   COMTY RESDNC ASNT DIR    11.0      11.48       3.19
5518900   COMTY RESDNC DIR         13.0      15.97       1.95
5570500   MENTAL HYG THER AST 2    13.0      13.54       1.67
6218600   MEDICAL TECHNOLOGIST     14.0      18.09        [illegible]
6225200   MEDICAL LAB TECH 2       12.0      13.64       2.05
6818200   WORKERS COMP EXMR        14.0      12.00       1.88
7132200   REFRIG MECHANIC          12.0      13.08       2.02
8701000   BLDG GUARD                6.0       7.85       1.85
- 200 -
TABLE 8.5
PREDICTED SALARY GRADES AND CONFIDENCE INTERVALS
FOR DIRECT-LINE-OF-PROMOTION TITLES -
ADJUSTED PAY POLICY LINE
TITLE     TITLE                   CURRENT   PREDICTED   CONFIDENCE
CODE                              SALARY    SALARY      INTERVAL
                                  GRADE     GRADE
102220    PAYROLL AUDIT CLK 2       9.0       8.22        .22
102500    PRIN AUDIT CLERK         14.0      12.42        .06
130210    EMPS RET BNFTS EXMR 2    12.0       9.76       1.06
133300    EMPS RET MBRSP EXMR 3    14.0      10.64        .11
822030    DATA PROC CLK 3          12.0      13.70        .10
911500    PRIN LAB ANIMAL CRTKR    14.0      12.35        .11
2134202   TRANS PLNG AIDE 2         9.0      14.32        .75
2522220   LEGAL ASSNT 2            14.0      16.80        .06
3004600   HEAD HOUSEKEEPER         12.0      14.72        .08
3016500   SUPVG JANITOR             9.0       9.62        .28
3016600   HEAD JANITOR             12.0      [illegible]  .36
3302600   HEAD LAUNDRY SUPVR       12.0      [illegible]  .96
5518800   COMTY RESDNC ASNT DIR    11.0      12.64       2.28
5518900   COMTY RESDNC DIR         13.0      16.97       1.26
5570500   MENTAL HYG THER AST 2    13.0      14.50       [illegible]
6218600   MEDICAL TECHNOLOGIST     14.0      18.84       [illegible]
6225200   MEDICAL LAB TECH 2       12.0      13.47       [illegible]
6818200   WORKERS COMP EXMR        14.0      12.89       [illegible]
7132200   REFRIG MECHANIC          12.0      12.95       [illegible]
8701000   BLDG GUARD                6.0       8.55       1.75
- 201 -
TABLE 8.6
PREDICTED SALARY GRADES AND CONFIDENCE INTERVALS
FOR DIRECT-LINE-OF-PROMOTION TITLES -
WHITE MALE LINE (10 VARIABLES)

TITLE     TITLE                   CURRENT   PREDICTED
CODE                              SALARY    SALARY
                                  GRADE     GRADE
102220    PAYROLL AUDIT CLK 2       9.0       9.72
102500    PRIN AUDIT CLERK         14.0      13.32
130210    EMPS RET BNFTS EXMR 2    12.0      10.39
133300    EMPS RET MBRSP EXMR 3    14.0      10.57
822030    DATA PROC CLK 3          12.0      16.32
911500    PRIN LAB ANIMAL CRTKR    11.0      13.03
2134202   TRANS PLNG AIDE 2         9.0      13.32
2522220   LEGAL ASSNT 2            14.0      17.01
3004600   HEAD HOUSEKEEPER         12.0      16.94
3016500   SUPVG JANITOR             9.0      10.10
3016600   HEAD JANITOR             12.0      16.19
3302600   HEAD LAUNDRY SUPVR       12.0      13.19
5518800   COMTY RESDNC ASNT DIR    11.0      13.53
5518900   COMTY RESDNC DIR         13.0      17.66
5570500   MENTAL HYG THER AST 2    13.0      [illegible]
6218600   MEDICAL TECHNOLOGIST     14.0      [illegible]
6225200   MEDICAL LAB TECH 2       12.0      [illegible]
6818200   WORKERS COMP EXMR        14.0      [illegible]
7132200   REFRIG MECHANIC          12.0      [illegible]
8701000   BLDG GUARD                6.0      [illegible]

CONFIDENCE INTERVAL
[only scattered values survive in the source and cannot be aligned to rows]
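The predicted salary grades and confidence intervals in Tables 8.4 through 8.6 come from regressing salary grade on job content measures and evaluating the fitted line at each title's scores. The following is a minimal illustrative sketch of that computation, using invented data and a generic least-squares fit; it is not the study's actual model, variables, or critical value.

```python
# Illustrative sketch (not the study's model or data): how a pay policy
# line regression yields a predicted salary grade and a confidence
# interval half-width for a job title's content scores.
import numpy as np

def policy_line_predict(X, y, x_new, t_crit=1.96):
    """Fit y = a + Xb by least squares; return the predicted value and
    CI half-width at a new job's content scores x_new."""
    X1 = np.column_stack([np.ones(len(X)), X])   # add intercept column
    b, *_ = np.linalg.lstsq(X1, y, rcond=None)   # OLS coefficients
    resid = y - X1 @ b
    dof = len(y) - X1.shape[1]
    s2 = resid @ resid / dof                     # residual variance
    XtX_inv = np.linalg.inv(X1.T @ X1)
    x1 = np.concatenate([[1.0], x_new])
    pred = x1 @ b
    se = np.sqrt(s2 * (x1 @ XtX_inv @ x1))       # s.e. of the fitted line
    return pred, t_crit * se                     # (predicted grade, half-width)

# Invented data: 50 titles scored on 3 job content factors.
rng = np.random.default_rng(0)
X = rng.random((50, 3))
y = 4 + X @ np.array([6.0, 3.0, 2.0]) + rng.normal(0, 0.5, 50)
pred, half = policy_line_predict(X, y, np.array([0.5, 0.5, 0.5]))
print(round(pred, 2), round(half, 2))
```

A title whose current grade falls below the interval around its predicted grade is the kind of case these tables are designed to surface.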
- 202 -
- 203 -
BIBLIOGRAPHY
Ash, R. A., and S. L. Edgell
1975 "A Note on the Readability of the Position Analysis Questionnaire
(PAQ)," Journal of Applied Psychology 60: 765-766.
Bellak, Alvin
1982 "The Hay Guide Chart - Profile Method of Job Evaluation," in
Handbook of Wage and Salary Administration, 2nd Edition. New
York: McGraw-Hill.
Buros, O. K.
1978    The Eighth Mental Measurements Yearbook. Highland Park, New
        Jersey: Gryphon Press.
Center for Women in Government
1982 Comparable Worth Proposal. Unpublished manuscript.
Cochran, William G.
Hy 1973 Sampling Techniques. New York: John Wiley and Sons, Inc.
Dawson, Richard E., and David J. Weiss
1973    "Supervisor Estimation of Abilities Required in Jobs," Journal of
        Vocational Behavior 30.
Erdos, P. L.
1970 Professional Mail Surveys. New York: McGraw-Hill Book Company.
Etzel, Michael J., and B. J. Walker.
1974 "Effects of Alternative Follow-up Procedures on Mail Survey
Response Rates," Journal of Applied Psychology 59 (2): 219-221.
Ferriss, Abbott L.
1951    "A Note on Stimulating Response to Questionnaires," American
        Sociological Review 16: 247-249.
Frazier, George, and Kermit Bird
1958 "Increasing the Response of a Mail Questionnaire," The Journal of
Marketing, October: 186-187.
- 204 -
Frey, James H.
1983 Survey Research by Telephone, Beverly Hills, CA: Sage
Publications, Inc.
Hays, William L.
1973 Statistics for the Social Sciences. New York: Holt, Rinehart
and Winston, Inc.
Heberlein, Thomas A., and Robert Baumgartner
1978 "Factors Affecting Response Rates to Mailed Questionnaires: A
Quantitative Analysis of the Published Literature," American
Sociological Review 43 (4): 447-462.
Hinrichs, John R.
1975 "Effects of Sampling, Follow-up Letters, and Commitment to
Participation on Mail Attitude Survey Response," Journal of
Applied Psychology 60 (2): 249-251.
Jolson, Marvin A.
1977 "How to Double or Triple Mail-Survey Response Rates," Journal of
Marketing, October: 78-79.
Kanuk, Leslie, and Conrad Berenson
1975 "Mail Surveys and Response Rates: A Literature Review," Journal
of Marketing Research, 12: 440-453.
Kim, Jae-on, and Charles W. Mueller
1978 Factor Analysis: Statistical Methods and Practical Issues.
Beverly Hills, CA: Sage Publications, Inc.
Kretschmer, J. C.
1976 "Updating the Fry Readability Formula," The Reading Teacher 29:
555-558.
Leslie, Larry L.
1972 “Are High Response Rates Essential to Valid Surveys?" Social
Science Research 1: 323-334.
Madden, Joseph M., Joe T. Hazel, and Raymond Christal
1964 "Worker and Supervisor Agreement Concerning Workers' Job
Description," Technical Documentary Report PRL-TDR-64-10, 6570th
Personnel Research Laboratory, Aerospace Medical Division,
Lackland Air Force Base, TX.
- 205 -
McCormick, E. J., P. R. Jenneret, and R. C. Mecham
1969 Position Analysis Questionnaire, West Lafayette, Indiana:
University Bookstore.
McGinnis, Michael A., and Charles J. Hollon
1977 "Mail Survey Response Rate and Bias: The Effect of Home Versus
Work Address," Journal of Marketing Research 14: 383-384.
McLaughlin, Lillie
1984 Statistics on Women and Minorities in Public Employment, Working
Paper No. 6A, Albany, New York: Center for Women in Government.
Fall.
Mosteller, Frederick, and John W. Tukey
1968 "Data Analysis, Including Statistics," in The Handbook of Social
Psychology, Vol. 2, 2nd Edition, ed., Gardner Lindzey and Elliot
Aronson. Reading, MA: Addison-Wesley Publishing Company.
Myers, James H., and A. F. Haug
1969 "How a Preliminary Letter Affects Mail Survey Returns and Costs,"
Journal of Advertising Research 9: 37-39.
Nunnally, Jum C.
1978 Psychometric Theory. New York: McGraw-Hill.
Office of Management and Budget
1978 Memo to the Heads of Executive Departments and Establishments,
Subject: President's Reporting Burden Reduction Program, Fiscal
Year 1978.
Peterson-Hardt, Sandra, and Nancy Perlman
1979 "Sex-segregated Career Ladders in New York State Government
Employment: A Structural Analysis of Inequality in Employment,"
Working Paper No. 1. Albany, New York: Center for Women in
Government.
Pierson, David, Karen Shallcross Koziara, and Russell Johannesson
1984 "A Policy-Capturing Application in a Union Setting," in
Comparable Worth and Wage Discrimination: Technical
Possibilities and Political Realities, ed., Helen Remick.
Philadelphia, PA: Temple University Press.
- 206 —
Remick, Helen
1979 "Strategies for Creating Sound Bias-Free Job Evaluation Systems,"
in Job Evaluation and EEO: The Emerging Issues. New York:
Industrial Relations Counselors, Inc., pp. 85-112.
Remick, Helen
1984 "Major Issues in a priori Applications," in Comparable Worth and
Wage Discrimination: Technical Possibilities and Political
Realities, ed., Helen Remick. Philadelphia, PA: Temple
University Press.
Schreiber, Carol T.
1979 Changing Places: Men and Women in Transitional Occupations.
Cambridge, MA: MIT Press.
Scott, Christopher
1961 "Research on Mail Surveys," Journal of the Royal Statistical
Society 124: 143-205.
Sheth, Jagdish N., and Marvin A. Roscoe, Jr.
1975 "Impact of Questionnaire Length, Follow-up Methods, and
Geographical Location on Response Rate to a Mail Survey," Journal
of Applied Psychology 60 (2): 252-254.
Shryock, H., and J. S. Siegel
1975 The Methods and Materials of Demography. U.S. Government
Printing Office, p. 366.
Stafford, J. E.
1966 "Influence of Preliminary Contact on Mail Returns," Journal of
Marketing Research 3: 410-411.
Steinberg, Ronnie
1984 "Identifying Wage Discrimination and Implementing Pay Equity
Adjustments: Notes from the Experience of the New York State
Comparable Pay Study," in Comparable Worth: Issues for the 80's,
ed., U.S. Commission on Civil Rights. Washington, DC: U.S.
Government Printing Office.
Steinberg, Ronnie, and Lois Haignere
1985 "Equitable Compensation: Methodological Criteria for Comparable
Worth," paper prepared for a conference, "Ingredients for Women's
Employment Policy," April 19-20.
-207 -
Steinberg, Ronnie, Lois Haignere, and Carol Possin
1984 Interim Report, Albany, New York: Center for Women in
Government.
Swan, John E., Donald E. Epley, and William L. Burns
1980 "Can Follow-up Response Rates to a Mail Survey Be Increased by
Including Another Copy of the Questionnaire?" Psychological
Reports 47: 103-106.
Treiman, Donald J.
1979    Job Evaluation: An Analytic Review. Interim Report of the
        Committee on Occupational Classification and Analysis.
        Washington, DC: National Research Council, National Academy of
        Sciences.
Treiman, Donald
1984 "Effect of Choice of Factors and Factor Weights in Job
Evaluation," in Comparable Worth and Wage Discrimination:
Technical Possibilities and Political Realities, ed., Helen
Remick. Philadelphia, PA: Temple University Press.
Treiman, Donald, and Heidi Hartmann
1981    Women, Work and Wages: Equal Pay for Jobs of Equal Value.
Washington, DC: National Academy Press.
Treiman, Donald, Heidi Hartmann, and Patricia Roos
1984 “Assessing Pay Discrimination Using National Data," in Comparable
Worth and Wage Discrimination: Technical Possibilities and
Political Realities, ed., Helen Remick. Philadelphia, PA:
Temple University Press.
Treiman, D. J., and K. Terrell
1975 “Women, Work and Wages: Trends in Female Occupational Structure
Since 1940," in Social Indicator Models, ed., K. Land and S.
Spilerman, pp. 157-199. New York: Russell Sage Foundation.
Waisanen, F. B.
1954 "A Note on the Response to a Mailed Questionnaire," Public
Opinion Quarterly, 18: 210-213.
Wiseman, Frederick
1972 "Methodological Bias in Public Opinion Surveys," Public Opinion
Quarterly 36: 105-108.
- 208 -
- 209 -
APPENDIX A
ACKNOWLEDGMENTS
- 210 -
The New York State Comparable Worth project benefited substantially from
the help of dozens of State employees and Center for Women in Government
staff. In this Appendix, we want to acknowledge and thank those who assisted
us in the completion of the study. While we have tried to list everyone
involved in this complicated research effort, we know that there are many
individuals who assisted us and whose names are not available to us. We
especially want to thank the 27,394 State employees who filled out and
returned the Job Content Questionnaires.
Turning to those we can acknowledge by name, Greg Reilly, the liaison
to this project from the Governor's Office of Employee Relations, assisted
with every step of the project, providing the necessary links with state
agencies, and providing us information about the state workforce to facilitate
several design decisions. Throughout our two years of working together, he
remained supportive, helpful, and collegial.
Similarly, William Blom, Research Director of the Civil Service Employees
Association, provided invaluable assistance and support throughout the
project. His work on developing the Job Content Questionnaire is especially
appreciated.
In addition, a number of other individuals provided general advice about
the research design of the study, including: Karen Burstein, Irene Carr,
Candice Carter, Cynthia Chovanec, Barry Lorch, William McGowan, and Gail
Shaffer.
We appreciate the guidance and assistance of the State's Steering
Committee, Andre Dawkins, Jerry Dudak, W. Barry Lorch and Paul Veillette. We
also appreciate the support of the State's Policy Committee, Karen Burstein,
Henrik Dullea, Thomas F. Hartnett, and Peter B. Lynch and their concern that
this study meet the highest standards.
- 211 -
This project could never have been completed without the information
describing the civil service job titles and their incumbents provided to us by
Vic Gilbert and his associates in the Electronic Data Processing Unit of the
Civil Service Department. In addition to Vic, we would like to thank Pat
McCausland, Carleen McLaughlin, Mary Foster, and Virginia Green.
Over 150 state employees assisted us in questionnaire development through
preliminary field testing and well over 1,000 state employees responded to the
pilot Job Content Questionnaire. A number of individuals spent considerable
~ time assisting us in item selection, deletion and wording. Candice Carter and ~~
Paul Laramie were especially helpful. Each of them spent several days
responding to our requests for information and other inquiries. They also
helped us anchor general job content categories to the New York State
experience. In addition, we appreciate the time and care taken by the
following individuals in reviewing and revising the questionnaire: Steven
Scaringe, James Shaver, Eileen Guir, Alois Soeller, David Vincelette, Robin
Katz, Vincent Perfetto, Alma McCullough, and Richard Visor.
Special thanks is due to Professor James Fleming for applying his exper-
tise in simplifying the reading level of the Job Content Questionnaire. As a
result of his efforts, the questionnaire is comprehensible to anyone with at
least a seventh grade reading ability. Others who helped us make the
questionnaire accessible to incumbents in titles where functional illiteracy
is high include: Janet Patterson, David Dunahue, Paul LaJole, Kathy Sims,
Carol Ann Modena, George Delamar, and Herbert Steele.
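The readability work described above relied on Fry-formula methods (see the Kretschmer 1976 entry in the Bibliography). As an illustrative stand-in, not the study's actual procedure, a grade-level estimate of the Flesch-Kincaid kind can be sketched as follows; the syllable counter is a crude heuristic and the sample sentence is invented.

```python
# Rough sketch of a grade-level readability check of the kind used to
# keep the questionnaire at roughly a seventh-grade reading level.
# This uses the well-known Flesch-Kincaid grade formula, not the Fry
# graph the study's consultant would have worked from.
import re

def count_syllables(word):
    # Heuristic: count vowel groups; drop a silent trailing 'e'.
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def fk_grade(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

sample = "Please tell us about your job. How often do you lift heavy things?"
print(round(fk_grade(sample), 1))
```

Short sentences and few multi-syllable words keep the estimated grade low, which is the property the questionnaire revisions aimed for.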
A number of people provided us with information on carrying out survey
research in New York State and elsewhere. For their assistance, we wish to
thank I-Hisn Wie, Carol Newhart, Rebecca Hatch, Maria Sgroi, Jeff Lutzker, and
Bill Coleman, Jr. In addition, George Gaspard, Linda Balinski, and Delores
- 212 -
Loczak gave us invaluable advice on using the interagency mails for distribut-
ing the questionnaire.
The substudy comparing supervisor and incumbent responses to a subset of
questionnaire items was mechanically one of the most difficult tasks to
complete. Elaine Ellinger, Roger Cudmore, Mabel Murphy, Steven Daly, Esther
Swanker, Georgiana Panton, Leslie Collins, David VanHeusen, Candice Carter,
Joseph Murphy, Jr., and Cynthia Chovanec made this substudy possible by
locating the specific supervisors of incumbents of pilot study titles.
Denice Mitchell, Bill Dorsman, and Joan Conway helped us with the produc-
tion, layout, and wording of the questionnaire. Mary Nelson and her
associates at Professional Insurance Agencies efficiently and effectively
completed the data entry.
The public relations efforts surrounding this project were crucial
because of the great public interest and importance of the study. In
addition, publicity was vital to ensure a high return rate for the
questionnaire. We are especially appreciative of the GOER and CSEA public
information efforts, led respectively by Ronald Tarwater and Melinda Carr.
We are also grateful for the publicity generated by the Public Employees
Federation and a number of state agencies and women's advisory groups. The
energies and expertise of the Center's own public information specialists,
Audrey Seidman and Fred Padula, were also critical to our success and contri-
buted greatly to the high quality of public relations efforts carried out
during the study.
Once the data were ready to analyze, Michael Green assisted in writing
the appropriate data analysis computer programs. Others who provided us with
technical assistance for the computer analysis are Sue Darbyshire, Ray Coco,
Bob Pfeiffer, Janice Jacobson, and Pete Connolly.
- 213 -
This study could not have been completed without the agency liaisons.
First, a set of agency and union liaisons provided assistance in the pilot
study. This required a lot of work, as we were testing four methods of
distribution. The names of these liaisons are listed in Appendix B. A second
group encompassed the agency liaisons for the main data collection phase.
They are listed in Appendix E. These people worked diligently with us and on
our behalf to follow up on the many details involved in distributing hundreds
of questionnaires. The 73 percent response rate is a testament to their
efforts.
The acknowledgments would not be complete without thanking the many
Center for Women in Government members who contributed greatly to the
successful completion of this project. Nancy Perlman, our executive director,
was among the first to recognize the importance of pay equity research in New
York State. Her continued leadership, encouragement, and support was
invaluable to those of us who were responsible for the day-to-day management
of the project. Robert LaSalle, Sharon Stimson, and Lillie McLaughlin worked
as Research Assistants in various phases of the project. Bob contributed in
nearly every phase of the project. Sharon organized several facets of the
pilot survey. Lillie organized the initial contacts with the main survey
agency liaisons. We were fortunate as well to have the assistance of several
graduate students, undergraduate students, and research interns: Susan
Buckley, Cynthia Dean, Wendi Essex, Elissa Kane, Sue Knoll, Regina Ryan,
Cynthia Wise, and Suzanne Felt. In addition, Center interns Nancy Della Rocco
and Julie Castleberry chipped in when extra hands were needed, as did Center
fellow Judith Saidel.
Our administrative staff, including Nan Carroll, Nancy McDonough,
Paulette Moak, Joan Jervis, and Pat Beaudoin all facilitated the process of
- 214 -
project implementation by monitoring our contract and bills, making travel
arrangements, and providing backup clerical support.
Finally, our warmest and wholehearted thanks go to Alex Reese and Annette
Roberts, clerical staff to the Research and Implementation Unit at the Center.
Alex has been with us since the inception of the project. She has managed to
survive the oftentimes overwhelming task of retyping many drafts resulting
from the frequent editorial changes under tight time schedules. She did a
magnificent job completing this final report. Annette joined Alex in
completing the clerical tasks associated with this project. Their skill and
expertise on this, and in all their work, are invaluable to our unit.
APPENDIX B
PILOT SURVEY:
AGENCY PERSONNEL
AND UNION LIAISONS
- 216 -
Agency Liaison Persons for Pilot Distribution
MENTAL HEALTH
Jackie Morris, Project Director
George Delamar, Assistant Director of Personnel
Ann Malmrous, Associate Personnel Administrator
David VanHeusen, CDPC Personnel Director
Unions
Henry Wagnoner, CSEA
Joyce Reso, PEF
John Deseve, Council 82
TRANSPORTATION
Esther Swanker, Assistant Commissioner of Manpower and Employee Relations
Steve Daly, Director of Personnel Bureau
Steve Jaffy, Associate Personnel Administrator
Carol Cross, Principal Clerk of Personnel
Geraldine Smith, Acting Regional Personnel Officer
Unions
Joan Tobin, CSEA
Milo Barlow, CSEA
Steve Mastensen, PEF
MOTOR VEHICLES
Georgiana Panton, Personnel Director
Alexandra Sussman, Senior Personnel Administrator
Bob Hoffmeister, Labor Relations Director
Unions
Dann Wood, CSEA
Betty Carpenter, CSEA
Mark Hafensterner, PEF
Joan Russel, PEF
MENTAL RETARDATION
Tom Torino, Assistant Director of Personnel
Brooklyn Developmental Center
Dennis Gallo, Personnel Director
Millie Whitleton, EEO Officer
- 217 -
MENTAL RETARDATION (cont.)
Unions
Ann Worthy, CSEA
Denise Berkley, CSEA
Sue Powell, Council 82
Grace Lott, PEF

TAX & FINANCE
Roger Cudmore, Director of Human Resources
Mable Murphy, Director of Personnel
Debra Ellis, Director of Labor Relations
John Seiler, Agency Labor Relations Representative
Unions
Carmon Bagnoli, CSEA
Mary Jaro, CSEA
Joe Carusone, PEF
Earl Dennyson, PEF
Joyce Lacomb, PEF
OFFICE OF GENERAL SERVICES
Elaine Ehlinger, Associate Personnel Administrator
Maria Mazza, Personnel Assistant
Unions
Leroy Holmes, CSEA
Mike Harrigan, PEF
Bob McCarthy, Council 82
Elaine Delanoy, Council 82
Dick O'Connell, Council 82
SOCIAL SERVICES
Ben McFerran, Director of Human Resources Management
Mary Meister, Director of Personnel
Leslie Collins, Assistant Director of Personnel
Rodney Kurst, Associate Personnel Administrator
Unions
Charles Stats, CSEA
Roy Bailey, PEF
- 218 -
CORRECTIONS
Joe Murphy, Director of Human Resources Management
Marsha Herman, Assistant Director of Personnel Facilities
Lee Gould, Assistant Director of Classification and Exams
- 219 -
APPENDIX C
Descriptive Statistics for the Independent Variables
Entered Into Regression: Whole Sample and White Male Sample
- 226 -
DESCRIPTIVE STATISTICS: WHOLE SAMPLE
         Mean      Std Dev
MSG      17.715    7.410
PM         .095     .164
PFEM       .317     .312
F1         .427     .230
F2         .201     .203
F3         .220     .216
F4         .725     .223
F5         .519     .216
F6         .383     .302
F7         .314     .243
F8         .119     .188
F9         .194     .194
F10        .940     .088
F11        .701     .156
F12        .638     .165
F13        .147     .157
F14        .544     .146
MI33       .227     .177
MI36       .308     .245
MI40       .441     .262
MI41       .441     .179
PI42       .331     .290
MI43       .526     .178
MI44       .331     .324
MI94       .599     .191
PI96       .264     .271
PI105      .428     .354
PI106      .716     .308
WRITE2     .469     .164
READ2      .604     .221
N of cases = 1601
CORRELATIONS: WHOLE SAMPLE
[correlation matrix values garbled beyond recovery in source]

CORRELATIONS: WHITE MALE SAMPLE
[correlation matrix values garbled beyond recovery in source]
- 224 -
DESCRIPTIVE STATISTICS: WHITE MALE SAMPLE
         Mean      Std Dev
MSG      19.458    6.781
PM         .009     .024
PFEM       .014     .028
F1         .490     .227
F2         .269     .243
F3         .168     .166
F4         .743     .223
F5         .503     .198
F6         .354     .298
F7         .339     .224
F8         .138     .192
F9         .248     .212
F10        .949     .076
F11        .745     .142
F12        .707     .143
F13        .181     .177
F14        .579     .136
MI33       .207     .163
MI36       .394     .228
MI40       .572     .244
MI41       .545     .157
PI42       .310     .277
MI43       .614     .182
MI44       .359     .333
MI94       .632     .172
PI96       .163     .231
PI105      .297     .324
PI106      .566     .360
WRITE2     .491     .172
READ2      .642     .216

N of cases = 464
CORRELATIONS: WHITE MALE SAMPLE
[correlation matrix values garbled beyond recovery in source]
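The statistics in this appendix are standard computations over the job content measures, run once for the whole sample (N of cases = 1601) and once for the white male subsample (N of cases = 464). A minimal sketch with invented scores, not the study's data:

```python
# Illustrative sketch of the Appendix C computations: per-variable
# means and standard deviations for the whole sample and a subsample,
# plus a correlation matrix. All values here are randomly generated.
import numpy as np

rng = np.random.default_rng(1)
scores = rng.random((1601, 4))            # 1601 cases, 4 content measures
white_male = rng.random(1601) < 0.29      # hypothetical subsample flag (~464 cases)

def describe(a):
    # Column-wise mean and sample standard deviation (ddof=1).
    return a.mean(axis=0), a.std(axis=0, ddof=1)

mean_all, sd_all = describe(scores)
mean_wm, sd_wm = describe(scores[white_male])
corr_wm = np.corrcoef(scores[white_male], rowvar=False)   # 4x4 matrix
print(corr_wm.shape)
```

Running the same code on the two case bases is what produces the paired "whole sample" and "white male sample" panels above.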
- 228 -
- 229 -
APPENDIX D:
MAIN SURVEY: JOB CONTENT QUESTIONNAIRE,
COVER LETTER, AND FOLLOW-UP LETTER
- 230 -
State University of New York at Albany
Draper Hall, Room 302
135 Western Avenue
Albany, New York 12203
(518) 455-6211
(800) 628-1216.
Dear New York State Employee:
Recently you received the New York State Job Questionnaire. If you filled it
out and returned it, thank you for your valuable help.
If you have not returned the questionnaire, we hope you will do so right
away. You are one of only a few employees randomly chosen to tell us about
your job, so your cooperation is very important. Your response will make it
possible to include your job title in the Comparable Worth Study. This is an im-
portant study of the New York State salary setting process.
Your responses are anonymous. Remember that you may fill out the ques-
tionnaire on work time.
If you have any questions about the study or about the questionnaire, please
call (800) 628-1216.
Nancy D. Perlman
New York State Comparable Worth Project
This survey is a key part of a study being conducted by the Center for Women in Government
with funding provided through negotiated agreements between the Governor’s OER and the
Civil Service Employees Association, AFSCME, AFL-CIO. Bargaining unit titles represented
by the Public Employees Federation, AFL-CIO and Council 82, AFSCME, AFL-CIO are not
directly involved in the study; however, both organizations are aware that employees in bargain-
ing units represented by them will be asked to complete questionnaires as part of this study.
- 231 -
APPENDIX E:
MAIN SURVEY: AGENCY LIAISONS
~ 232 -
AGENCY LIAISONS
Agency
Adirondack Park Agency
Advocate for the Disabled
Office for the Aging
Department of Agriculture
and Markets
Division of Alcoholism
and Alcohol Abuse
Office of Substance Abuse Services
Council on the Arts
Department of Audit and Control
Banking Department
Division of the Budget
Office of Business Permits
Commission on Cable Television
Council on Children and Families
Department of Civil Service
Department of Commerce
Consumer Protection Board
Department of Correctional Services
Commission of Correction
Crime Victims Compensation Board
Division of Criminal Justice Services
Education Department
State Board of Elections
Office of Employee Relations
NYS Energy Office
Department of Environmental Conservation
Division of Equalization and Assessment
Executive Chamber
Office of General Services
Higher Education Services Corporation
Division of Housing and Community Renewal
Division of Human Rights
State Insurance Department
State Insurance Fund
Department of Labor
State Labor Relations Board
Department of Law
Division of the Lottery
Division on Quality Care for
Mentally Disabled
Office of Mental Health
Office of Mental Retardation
and Developmental Disabilities
Division of Military and Naval Affairs
Department of Motor Vehicles
State Liquor Authority
Office of Parks, Recreation and
Historic Preservation
Liaison
Andrea Estus
Yvonne Williams
Sheldon Jaffee
Charles Harvey
Sharon Williams, Terry Ketterer
John Debs
Trudy Blitz
Harry Keefe
Gerard Powers, Esther Sasman
Nikki M. Smith, Charles Palmer
Joseph Valenti
William Huff
Frank DiDimenico
John Keefe
Charles Pishko
Stephen Kohn
Lee Gould, Randy Harris
Anne King
Patricia Poulopoulos
Gloria Shepard
Philip Sperry
Richard J. Murray
Paul Shatsoff
Sandra Camacho
Jerry Burke, Mary McCarthy
Joseph Kunkel
Suzanne Hechemy, Carol Sommers
Barbara Severance
Seymour Bandremer
Jeff Jones
James Cappel
Barbara Watson
Albert DiMeglio
Joseph Kearney
Thomas Canty
Jack Wolslegel
Terry Bryant
Richard Schaeffer
Jackie Morris
Joseph Costello, Tom Torino
Jim Gross
Georgiana Panton
Agnes Miller
Stanley Winter
Division of Parole
Division of Probation
Public Employees Relations Board
Public Service Commission
NYS Racing and Wagering Board
Department of Social Services
Bureau of Staff Development and
Quality of Work Life
Department of State
Department of Taxation and Finance
Department of Transportation
State University of New York
Workers' Compensation Board
Division of Veterans' Affairs
Division for Youth
- 233 -
Henry Bankhead
Sandra Roberts
Virginia Suriano
William VanDyke
Donald Sommer
Leslie Collins
Kathy Mucello
Joseph Walsh
Roger Cudmore
Stephen Daly
Sandy Dennison
Rene Miller
Sandy Ryan
Rick Martin
- 234 -
- 235 -
APPENDIX F:
MAIN SURVEY: RESPONSE RATE BY TITLE
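The % RETURN figures in the tables that follow are the share of distributed questionnaires that came back for each title: questionnaires received divided by questionnaires sent. A minimal sketch, using the counts from the table's first listed title (140 sent, 125 received):

```python
def pct_return(sent, received):
    # Percent of distributed questionnaires returned, to one decimal.
    return round(100.0 * received / sent, 1)

# First row of the table below: 140 sent, 125 received.
print(pct_return(140, 125))   # → 89.3
```

The 73 percent figure cited in the acknowledgments is this same ratio computed over all titles combined.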
- 236 -
RESPONSE RATES BY TITLE
TO BE ESTIMATED TITLES
TITLE CODE   TITLE   # SENT   # RECD
: 140 125
\ £0) SENR Al 143 118
; 100500 PRIN ACtT CLERK 143 126
102100 PAYROLL AUDIT CLK 1   27   19
102200 AUDIT CLERK   116   [illegible]
102230 PAYROLL AUDIT CLK 3   14   12
102200 SENR AUDIT CLERK 141 121
CASHIER   83   73
TOLL COLLECTOR   19   [illegible]
EMPS RET BNFTS EXMR 1   55   44
EMPS RET BNFTS EXMR 2   14   10
133100 EMPS RET MBRSP EXMR 1   14   10
133200 EMPS RET MBRSP EXMR 2   13   2
702200 STATISTICS CLERK   107
702300 SENR STATISTICS CLERK   42
702500 PRIN STATISTICS CLERK   20
750200 SENR ACTUARIAL CLERK   40
750500 PRIN ACTUARIAL CLERK   17
822010 DATA PROC CLK 1   42
822020 DATA PROC CLK 2   47
849200 DATA ENTRY MACH OPER   144
849300 SENR DATA ENTY MACH O   126
849500 PRIN DATA ENTY MACH O   42
911200 LABORATORY ANIMAL CRT   116
911300 SENR LAB ANIMAL CRTKR   60
836100 INST RTL STR CLERK   13
1978500 PARK REGN BLR ASSNT   41
2134101 TRANS PLNG AIDE 1   12
2227110 CONSUMER SRVS SPEC 1 30
2501200 CLERK . 152
2501300 SENR CLERK 150
2501317 SENR CLERK SURROGATE ~ 10
\ 2501820 SENR CLERK CORP SRCH 16
pas 2501500 PRIN CLERK 129
2501517 PRIN CLERK EST TX APF 14
CLERK PERSCNNEL 20
| CLAIMS CLERK 55
COMP CLMS CLERK 2B
| CLERK 141
{ FILE CLERK 127
FILE CLERK an
| ADMITTING Sé
t SENR ADMITTING CLERK 23
t NURSING STATION CLE 1 136
! URIVER IMPV ALJOTN & 13
ALMJOCTN CORRFDING CLK 12
i ESL0100 FURCHASIN T 1 Be
: 2510200 FURCHA NT 34
: IDENT CLK
! DS1i2300 SENR IDENT CLERK
\ 25,2300 SENR MEL RECORDS
B513400
2514200
2514400
=540100
2540200
SLO
S558200
2558300
S557 100
SIIZOO
SIIOO
2560100
2560200
2568100
2569100
2606300
2407000
O3100
OBZ00
2703300
52
- 237 -
REATRINT WNIT She
SENR UNDERWRTNG CLERK
SENR PAYROLL AUDT CLK
CREDENTIALS ASSISTANT
MOTOR VEH TITLE CLK 1
MOTOR VEH TITLE CLK 2
LEGAL ASSNT 1
MOTOR VEH REP 1
MOTOR VEH REP 2
MOTOR VEH REP 3
SUPVG MOTOR VEH REP 1
TRANS OFFC ASSNT 1
TRANS OFFC ASSNT 2
APPS CNTRL CLK 1
PAYROLL CLERK
PAYROLL CLERK
PAYROLL CLERK
LIBRARY CLERK
LIBRARY CLERK
LIBRARY CLERK
STUDENT LOAN CLK 1
STUDENT LOAN CLK 2
EMP INS REVWNG CLK 1
DISABLTY DETRM RV C 4
TYPIST
SENR TYPIST
SENR TYPIST LAW
PRIN TYPIST
DICT MACH TRANS
INFO PROCSSG SPEC
INFO PROCSSG SPEC
INFO PROCSSG SPEC
SECRETARIAL STENO
STENOGRAPHER
SENR STENOGRAPHER
SENR STENO LAW
PRIN STENOGRAPHER
PRIN STENOGRAPHER LAW
HEARING REPTR
TELEPHONE OPER TYP
TELEPHONE OPER
SENR TELEPHONE OPER
DIRCTRY INFO SYS OP 1
CALCULATING MACH OP
BOOKKEEPING MCH OP
BOOKKEEPING MCH OP DS
ADMNV AIDE
STATE UNIV PROM AIDE
HOUSEKEEPER
SUPVG HOUSEKEEPER
CLEANER
[# SENT and # RECD counts for the preceding titles are garbled and misaligned in the source]
3026000
3021000
3102320
3102609
3106190
3124200
3124300
3124400
3137200
3302206
3302300
3307000
5302100
5303100
5350200
5359000
5500200
5501100
5502200
5518500
5532101
5532202 HOSP CLINICAL ASSNT 2
5540300
5544100
5570300
5570400
6201000
6202200
6204000
6210000
6211510
6211520
6214200
6219200
6220200
6220300
6223200
6225100
6301000
6818000
6824100
6893100
6893200
7202022
7150000
7611000
7611300
7614000
7616100
7617200
7711000
- 238 -
JANITOR
ELEVATOR OPERATOR
COOK
HEAD COOK
DIETITIAN TECHN
FOOD SERVICE WKR 1
FOOD SERVICE WKR 2
FOOD SERVICE WKR 3
FOOD&SUPPLS PROCESSOR
LAUNDERER
SENR LAUNDERER
CLOTHING CLERK
BARBER
BEAUTICIAN
DENTAL ASSNT
DENTAL HYGIENIST
LICENSED PRAC NRS
HOSP ATTENDANT 1
HOSP CLINICAL TECHN
COMTY RESDNC AIDE
HOSP CLINICAL ASSNT 1
PSYCH THERAPY AIDE
MENTAL HYG HEWY HA 1
MENTAL HYG THER AIDE 1
MENTAL HYG THER AST 1
LABORATORY HELPER
LABORATORY WORKER
LABORATORY AIDE
XRAY AIDE
TEACHING HOSP STL ST 1
TEACHING HOSP STL ST 2
ELECTROENCPHGRPH TECH
CENTRAL MED SUP TECH
HISTOLOGY TECHNICIAN
SENR HISTOLOGY TECH
ELECTROCARDOGRPH TECH
MEDICAL LAB TECH 1
PHARMACY AIDE
ASSNT WKRS COMP EXMR
WORKERS COMP REVW AN
MEDICAID CLMS EXMNR 1
MEDICAID CLMS EXMNR 2
MAINTCE ASSNT REFRIGN
MAINTCE HELPER
CHAUFFEUR
SENR CHAUFFEUR
TRACTOR TRAILER OPER
MOTOR VEH OPER
BUS DRIVER
BINDERY HELPER
ALCLSM REHAB A 1 18
REHAB INTERVIEWER 14 20.0
TRAINING AIDE 29 93.5
EMPL SEC CLK 19 90.8
SENR EMP SEC CLERK 104 73.2
PRIN EMP SEC CLERK 38 59.4
PAROLE PROG AIDE 9 75.0
WATCHMAN 25 27.8
MOTOR VEH INS SV RP 1 17 85.0
BUS DRIVER TMPRY
RESPONSE RATES BY TITLE
NOT TO BE ESTIMATED TITLES
TITLE CODE   TITLE                   # SENT   # RECD   % RETURN
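The % RETURN column is simply the number of questionnaires received divided by the number sent, expressed as a percentage to one decimal place. A minimal sketch of that arithmetic (the function name is ours, not the study's):

```python
def response_rate(sent: int, received: int) -> float:
    """Percent of distributed questionnaires that were returned, to one decimal."""
    return round(100 * received / sent, 1)

# e.g., PUBLIC UTIL AUDTR 1: 17 sent, 11 received
print(response_rate(17, 11))  # 64.7
```

Where a printed percentage in the tables below was illegible, this relation lets it be checked against the sent/received counts.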
100600 HEAD ACCOUNT CLERK 23
100800 CHF ACCT CLERK 16 13
102220 PAYROLL AUDIT CLK 2 19 16
102500 PRIN AUDIT CLERK 20 17
102600 HEAD AUDIT CLERK 13
112100 TOLL STATN SUPVR 12
130210 EMPS RET BNFTS EXMR 2
130410 EMPS RET BNFTS EXMR 4
133300 EMPS RET MBRSP EXMR 3 10
204000 UI ACCTS EXMR 21
204100 SENR UI ACCTS EXMR 11
221300 CONTRACT MGT SPEC 17
222500 PRIN SALARY DET ANLST
222600 HEAD SALARY DET ANLST 6
224200 SENR SOC SRV MGT SPEC 20
224400 ASSOC SOC SV MGT SPEC 19
224500 PRIN SOC SRV MGT SPEC 17
224700 CHF SOC SRV MGT SPEC 15
230300 SENR HEALTH CARE F AN 22
230400 ASSOC HEALTH CARE F A 19 7
230500 PRIN HLTH CARE FSCL A 17
230600 CHF HLTH CARE FSCL AN 4 100.0
232400 ASSOC UTILITY FIN ANL 4 50.0
242200 SENR INTERNAL AUDITOR 7 7
242400 ASSOC INTERNAL AUDITR 8 3
256400 ASSOC AUDTR ST EXPNDR 8 0
256500 PRIN AUDTR STATE EXPD 2
270300 ADMNV FINANCE OFFICER 7
292000 FINANCE OFFICER 8 5
301200 BANK EXAMINER 19
301300 SENR BANK EXAMINER 20
301500 PRIN BANK EXAMINER 21
301600 SUPVG BANK EXAMINER 19
301900 DEPUTY SUPT BANKS 6
325300 SENR INSUR POLICY EXR 2
375200 INSUR FD FLD SRVS REP 18
375300 SENR INSUR FD FLD S R 16
375400 ASSOC INSUR FD FLD SR
375410 INSUR FD DST REP
403100 ASSNT ACCTNT
403200 SENR ACCTNT
403400 ASSOC ACCTNT
403500 PRIN ACCTNT
421200 AUDITOR 7
421300 SENR AUDITOR 20 19
421310 SENR HGHR EDUC SU AA 6
421400 ASSOC AUDITOR 2 100.0
424100 PUBLIC UTIL AUDTR 1 17 11 64.7
424200 PUBLIC UTIL AUDTR 2 17 14 82.4
424300 PUBLIC UTIL AUDTR 3 11 9 81.8
424400 PUBLIC UTIL AUDTR 4 7 6 85.7
426300 SENR STATE ACCTS AUD 20 16 80.0
426400 ASSOC STATE ACCTS AUD 20 18 90.0
426500 PRIN STATE ACCTS AUDR 16 17
426700 CHF STATE ACCTS AUDTR 13 12 92.3
428200 SENR EXMR MUNCPL AFFR 19 17 89.5
428400 ASSOC EXMR MUN AFFRS 19 16 84.2
428500 PRIN EXMR MUNCPL AFFR 18 12 66.7
428700 CHF EXAMINER MUN AFFS 8 6 75.0
430110 SUBSTANCE ABS ACCT A1 8 6 75.0
430210 SUBSTANCE ABS ACCT A2 3 3 100.0
431100 MENTAL HLTH AUD SPC 1 10 7 70.0
431200 MENTAL HLTH AUD SPC 2 8 5 62.5
431300 MENTAL HLTH AUD SPC 3 6 5 83.3
433200 MENTAL RETRDTN AD S 2 7 3 42.9
436200 SENR ACCTNT PUB SRV 6 5 83.3
436400 ASSOC ACCTNT PUB SRV 6 5 83.3
438400 ASSOC ABAND PROP AC A 6 6 100.0
440200 MILK ACCOUNTS EXMR 6 5 83.3
440300 SENR MILK ACCTS EXMR 7 5 71.4
442300 SENR MED FCLTS AUD 15 13 86.7
442400 ASSOC MED FCLTY AUDR 21 12 57.1
442500 PRIN MED FCLTY AUDR 18 14 77.8
447200 COMP CLAIMS AUDITOR 18 9 50.0
447300 SENR COMP CLMS AUDTR 9 5 55.6
448110 INSUR PREM AUD 1 20 10 50.0
448220 INSUR PREM AUD 2 16 13 81.3
450200 INSUR EXAMINER 19 13 68.4
450300 SENR INSUR EXMR 18 14 77.8
450400 ASSOC INSUR EXMR 19 15 78.9
450500 PRIN INSUR EXAMINER 17 12 70.6
450600 SUPVG INSUR EXMR 17 14 82.4
450610 ASSNT CHF INSUR EXMR 4 4 100.0
450710 CHF INSUR EXMR 1 5 4 80.0
ASSNT SEC TAX AUDITR 4 2 50.0
SALES TAX AUDITOR 2 21 19 90.5
SALES TAX AUDITOR 3 19 15 78.9
INCOME TAX AUDTR 2 17 16 94.1
INCOME TAX AUDTR 3 100.0
CORP TAX AUDTR 2 22 16 72.7
CORP TAX AUDTR 3 10 7 70.0
EXCISE TAX AUDTR 2 15
TAX AUDITOR 1 17
TAX AUDITOR
8709110 UT TAX SsuD 1 18 14
aToOR20 UT TAX ALI 2 19 1é
470330 UI TAX ALD 3 it aig
470440 WI. TAX AUD 4 ba] Ss
$7a200 DATA PROC FSCL BY A Z it 10
TAX AUDIT ADMR 1 1S 11
: TAX AUDIT ALMR 2 16 . 13
i TAX CUMPLNG AGT 1 20° 15
| TAX COMPLNG AGT 2 19 1s
i TAX COMPLNC AGT 3 20 1m
410140 TAX COMPLNG AGT 4 4 3
£10150 TAX COMPLNG AGT 5 7 3
620200 TAX COMPLNC REF 21 1%
£27200 SALES TAX TECHN 2 19 1%
£27300 SALES TAX TECHN 3 19 13
£27400 SALES TAX TECHN 4 be ba
£20200 EXCISE TAX INVESTGTR 5 -
£20300 SENR EXCISE TAX INVST 9 Ss
431200 EXCISE TAX TECHN 2 10 -2
£34200 ESTATE TAX TECHN 2 7 &
} 640100 TAX TECHN 1 : 19 - 1b
£40200 CORP TAX TECHN 2 19 17
£40300 CORP TAX TECHN 3 1 7
i £41100 TAXPAYER SRV REF 1 21 13
; 449200 TAX PROCSSG SPEC . S 4
647200 INCOME TAX TECHN 2 20 17
£47300 INCOME TAX TECHN 3 19 18
£47400 INCOME TAX TECHN 4 9 7
| £49100 TAX REGULATNS SPEC 1 15 12
‘ 449200 TAX REGULATNS SPEC 2 7 5
4 £73200 PARI MUTUEL EXMNR 27. 7
673300 SENR PARI MUTUEL EXMR 4 3
702600 HEAD STATISTICS CLERK - “So 3
705200 STATISTICIAN ? 7
705300 SENR STATISTICIAN 16 2.
705400 ASSOC STATISTICIAN 17 12
722400 ASSOC BIOSTATISTICIAN 7 &
750400 HEAD ACTUARTAL CLERK 4 4
ASSNT ACTUARY 12 &
@ SENR ACTUARY CASUALTY 4 2
I SENR ACTUARY LIFE 3 7
ASStIC ACTUARY LIFE 4 2
ASSOC ACTUARY CASLTY & 4
PRIN ACTUARY LIFE & &
i 2 PRIN ACTUARY © 4 2
SUPVG ACTUARY LIFE 11 10
COMPUTER
SYS PROGR & 17
SOMPIITER 10 S
MANAGER 9 &
WMS IMPLEMNTN 3 1 N'Y 12 &
WHS INSTALLTN TEAM TK 6 5
COMPUTER PRGMR
COMPTR PROG
SENR COMPTR PROG AN
COMPTR PRGMR AN
COMPTR PG A SCT
MANAGER COMPUTER OP
DATA BASE PGMMR AN 1
DATA BASE PGMMR AN 2
COMPUTER OPER
SENR COMPTR OPER
SUPVG COMPTR OPER
821700 CHF COMPTR OPER
822028 DATA PROC CLK 3
825500 SUPVR DATA PROC
825800 ASSNT DIR DATA PROC
825901 DIR DATA PROC A
825908 DIR DATA PROC C
829300 SENR COMPTR SYS ANLST
829990 ASSNT DIR DATA PRC SA
849600 HEAD DATA ENTY MACH O
860000 MANAGER DATA CMMUNCTS
DATA COMMUNCTNS SPC 1
SENR EMP SYS DATA P S
DATA BASE SUPVR
STUDENT AID DATA TECH
FARMER
HEAD FARMER
901700 FARM MANAGER
911500 PRIN LAB ANIMAL CRTKR
1001200 GROUNDS WORKER
1001300 SENR GROUNDS WORKER
1001500 SUPVR GROUNDS
1014000 GREENHOUSE WORKER
1101200 HORTICULTURAL INSP
1101300 SENR HORTICULTRL INSP
1102100 HORTICULTURAL ASSNT
1108100 PESTICIDE CONTRL INSP
1108300 SENR PESTCDE CTRL INS
VET ANML INDUS
ANIMAL HLTH TECH
SENR LIVESTOCK GRO SP
AGRICL PROGM AIDE
FARM PROD GRDG
FARM PROD GRDG
FARM PROD GRDG
FARM PROD GRDG INSP
SENR MARKTG REP
70.0
» OBeR
90.0
100.0
84.6
bb.7
75.0
FO.0
100.0
1412300
1412400
1412500
1420300
1420400
1427300
1427400
1441300
1441400
1442100
1442200
1443100.
1443200
1442400
1445200
1445200
1445400
1445500
1446100
1446200
1483220
1453330
1453440
1456100
14546200
1456200
1463310
1463330
1462360
14623410
1464100
SV REP
PRIN STAFFING SRV REP
STAFFNG SVS PRGM MGR
SENR PERSNL EXMR
ASSOC PERSNL EXMR
PRIN PERSNL EXMR
SENR MUNCPL PERS CSLT
ASSO MUNCPL PER CSLT
SENR EMPL INSUR REP
ASSO EMPL INS REP
SENR PERSNL ADMR
ASSO PERSNL ADMR
ASSNT DIR PERS B
ASSNT DIR PERS A
DIR PERSONNEL
DIR PERSONNEL C
DIR PERSONNEL A
DIR HUMN RESRC MGT 2
DIR HUMN RESRC MGT 3
DIR HUMN RESRC MGT 4
DIR HUMN RESRC MGT 5
DIR INST HMN RSRC M 1
DIR INST HMN RSRC M 2
SOC SRV HUMN RESC DS2
SOC SRV HUMN RESC DS3
SOC SRV HUMN RESC DS4
AGENCY LABR REL REP 1
AGENCY LABR REL REP 2
AGENCY LABR REL REP 3
SENR TRNG TECH POLICE
SENR TRNG TECH FR SFT
SENR TRNG TECH YTH SV
ASSOC TRNG TECHN PLC
AGENCY TRNG&DV S 1
AGENCY TRNG&DV S 2
DIR STAFF DEV&TRNG 1
DIR STAFF DEV&TRNG 2
MENTAL HYG DS 1
MENTAL HYG DS 2
MENTAL HYG DS 3
MENTAL HYG DS 3 N
MENTAL HYG STF DS 4
CAREER OPP FLD REP
AFFIRM ACTN ADMR
AFFIRM ACTN ADMR
AFFIRM ACTN
AFFIRM ACTN
SENR MINRTY
SENR MINRTY
COMPLIANCE
100.0
100.0
71.4
66.7
76.2
Be. 4
2.2
Bb. 7
REGNL AFFRM ACTN R 2 4
REGNL AFFRM ACTN CORD 4
PARK WORKER 1 15
PARK WORKER 2 20
PARK WORKER 3 19
1506100 FOREST RANGER 1 20
1506200 FOREST RANGER 2 6
1506300 FOREST RANGER 3 12
1507100 PARKS&REC FOREST RNGR 4
1516300 SENR FORESTER 19
1516400 ASSOC FORESTER 20
1530000 FORESTRY TECHNICIAN 14
1530200 SENR FORESTRY TECHN 10
1530800 PRIN FORESTRY TECH 9
1532400 PARKS&REC RGNL PGM SP 2
1538200 TREE PRUNER 19
1538500 TREE PRUNER SUPVR 19
1541000 CONSERVN OPERS SPVR 1 19
1541400 CONSERVN OPERS SPVR 2 20
1542501 REGNL PARK MTCE SPV 1 2
1542601 PARK MTCE SUPVR 1 20
1542602 PARK MTCE SUPVR 2 19
1542100 PARK SUPVR 5
1545500 ASSNT SUPVR PARK OPER 5 4
1545600 SUPVR PARK OPERATIONS 4 3
1548200 ASSNT REGNL MGR PK&RC 7 7
1548300 REGNL MANGR PKS&REC 8
1570501 GOLF CRSE MTCE SUPV A 10 7
1570502 GOLF CRSE MTCE SUPV B 5 4 80.0
1570503 GOLF CRSE MTCE SUPV C 5 4 80.0
1573100 PARK MANGR 1 21 16 76.2
1573200 PARK MANGR 2 22 21 95.5
1573300 PARK MANGR 3 15 12 80.0
1573600 ASSNT PARK MANGR 1 5 4 80.0
1588100 PARKS&REC ASSNT 9 8 88.9
1608200 ENVIRNL IMPACT EXMR 7 7 100.0
1610700 REGNL SUPVR NTRL RSRC 7 7 100.0
1612610 ENVIRNL SCIENTIST 1 5 5 100.0
1616000 ENVIRNL CONS OFFICER 20 19 95.0
1616500 SUPVG ENVIRNL CONS OF 21
1616700 CHF ENVIRNL CONS OFFR 9
1618100 ENVIRNL ANALYST 19
1618300 SENR ENV ANALYST 12
1618400 ASSOC ENVIRNL ANALYST 21
1618500 PRIN ENVIR ANALYST 12
1624100 MINED LAND RCLMTN S 1 7
FISH CULTRST 1 13
FISH CULTRST 3 4
1629400 FISH CULTRST 4 5
1630000 FISH&WILDLIFE TECHN 17
1630300 SENR FISH&WILDLF TECH 20
Le20800
1433110
1623220
1424100
1634200
1426200
1626300
1437100
1637200
1637200
1637400
1701400
1711400
1714335
1726100
1728400
1750200
750200
1750400
1761300
1762200
1769300
1775400 ASS
1775500
1803100
1803200
1810140
1810150
1810210
1811000
1811340
1212200
1813200
1831200
1831300
1831500
1934200
1934200
1895100
1835400
1836200
PRIN FISH&WILDLF TECH
ENVIRNL CONS INVST 1
ENVIRNL CONS INVST 2
SOLID WASTE MGT SPC 1
SOLID WASTE MGT SPC 2
SROS SPEC
MARINE RESRCS SPEC
MINERAL RESRCS SPC
MINERAL RESRCS SPC
MINERAL RESRCS SPC
MINERAL RESRCS SPC
WMS INSTALLTN TEAM
ASSOC SOC SV MEDCD
CRMNL JSTC PRGM AN
LOCAL DATA CNTR CRD 1
ASSOC WATER MGT PGM C
ENERGY CONS PS 2
ENERGY CONS PS 3
ENERGY CONS PS 4
ENERGY PLNNR 3
HIGHER EDUC SV PG A 2
AGING SRVS PGM ANL 3
ASSOC HEALTH CARE M SA
PRIN HLTH CARE MG SA
ASSNT PURCHSNG AGNT
PURCHASING AGENT
PURCHASING OFCR 1 PRT
PURCHASING OFCR 1
PURCHASING OFCR 2
PURCHASE SPCS ASSNT
SENR PURCH SP WTR MCH
MOTOR EQ STORESKEEPER
MECHNCL EQUIP INSP
STORES CLERK
SENR STORES CLERK
PRIN STORES CLERK
MECHNCL STRS CLK
SENR MECHL STORES CLK
COMMISSARY CLERK
INST RTL STR ASST MGR
INST RTL STR MNGR 1
INST RTL STR MNGR 2
BUSINESS MGMT ASSNT
SENR BUS MGT ASSNT
HEALTH FACLT MGT A 2
OFFICER 1
24.2
25.7
100.0
75.0
100.0
Ba. 3
75.0
bbw 7
80.0
735.0
100.0
* 100.0
35.7
100.0
@2.3
71.4
BO.0
Tbs?
eo.7
80.0
92.3
100.0
S7.1
100.0
70.6
70.0
100.0
S520
75.0
73.0
100.0
90.0
60.0
“L207500
"19103810
1910820
1912700
1921700
2000300
2000400
2000600
2000700
2001200
2001220
2001300
2001320
2001400
2001800
2001813
2001700
2001200
2126011
2120200
2130300
2120400
2132200
2132300
2132400
2134202
2124303
2141200
2141200
2200300
2203200
2206201
2206202
2206202
2206204
2209200
2209300
2225030
2222200
22easo2
2243130
2243520
2258300
E258400
2260000
INST STEWARD
DEPUTY DIR INST ADM 1
DEPUTY DIR INST ADM 2
HOSP ADMN CONSLT
DIR FCLTY ADMNV SRVS
SENR BUDGETG ANLYST
ASSOC BUDGETG ANLST
SUPVG BUDGTG ANLST
CHF BUDGETG ANALYST
BUDGT EXAMINER
BUDGT EXMR PUB FIN
SENR BUDGET EXMR
SENR BUDGET EXMR P FN
ASSOC BUDGET EXMR
PRIN BUDGET EXMR
PRIN BUDGET EXMR P F
ASSNT CHF BDGT EXR
DEPUTY CHF BDGT EXMR
PUBLIC TRANS SFTY S 1B
RAIL TRANS SPEC
SENR RAIL TRANS SPEC
ASSOC RAIL TRANS SPEC
TRANS ANALYST
SENR TRANS ANALYST
ASSOC TRANS ANALYST
TRANS PLNG AIDE 2
TRANS PLNG AIDE 3
TRANSIT SPEC 2
TRANSIT SPEC 3
PROOFREADER
SENR EDTRL CLERK
ARTIST DESIGNER
ARTIST DESIGNER
ARTIST DESIGNER
ARTIST DESIGNER
PHOTOGRAPHER 2
PHOTOGRAPHER 3
UTILITY OTRCH ED SP 3
PUBLCTNS PROD ASSNT
ASSNT LOTTRY RGNL D 2
ENVIRNL EDUC
CITIZEN PARTCPTN SP 2
CONSERVN ED ASSNT
MUSEUM ATTENDANT
LOTTERY MRKTG AIDE
LOTTERY MRKTG REP 1
LOTTERY MRKTG SUPR
SENR PUBLIC INF SP
ASSOC PUBLIC INFO SP
MEDICAL RELTNS OFFCR
REGNL TOURISM COORD
So.2
S71
70.4
64.7
50.0
50.0
5o,0
100.0
82.3
6902
2271200 EMPS RET SYS INFO R 2 7 7 100.0
2275000 ENERGY INFO AIDE 4 4 100.0
2320201 SENR SYSTEM PLNR GAS 4 2 50.0
2322100 EQUALZTN RATES AIDE 8 7 87.5
2322110 EQUALZTN RATES AN 1 3 3 100.0
EQUALZTN PROGM AIDE 6 4 66.7
2322200 EQUALZTN RATES AN 2 5 5 100.0
2327100 CONSUMER SRVS REPR 1 10 4 40.0
2327200 CONSUMER SRVS REPR 2 6 5 83.3
2337220 CONSUMER SRVS SPEC 2 7 3 42.9
2337330 CONSUMER SRVS SPEC 3 3 3 100.0
2337440 CONSUMER SRVS SPEC 4 4 3 75.0
2347100 REAL PRPTY INFO SYS S 7 6 85.7
2347200 SENR REAL PROP I S S 19 15 78.9
2347400 ASSOC REAL PROP I S S 19 15 78.9
2347500 PRIN REAL PROP I S S
2349100 JR RIGHT OF WAY AGENT 16 11 68.8
2349200 ASSNT RIGHT OF WAY AG 22 20 90.9
2349300 SENR RIGHT OF WAY AGT 20 17 85.0
2349400 ASSOC RIGHT OF WAY AG 17 12 70.6
2349500 PRIN RIGHT OF WAY AGT 4 2 50.0
2351110 ASSNT REAL EST AP MAS 17 9 52.9
2351200 REAL ESTATE APPRAISER 7 6 85.7
2351210 REAL ESTATE APP MAS 22 16 72.7
2351300 SENR REAL ESTATE APPR 4 4 100.0
2351310 SENR REAL EST APP MAS 20 17 85.0
2351500 PRIN REAL EST APPRSR 4 4 100.0
2351510 PRIN REAL EST APP MAS 7 6 85.7
2352100 HOUSING MGT ASSNT 64.7
2352200 HOUSING MGT REP 18 10 55.6
2352300 SENR HOUSING MGT REP 7 3 42.9
2356320 LEASING AGENT 2 8 4 50.0
2366100 PROPERTY MANAGER 1 4 4 100.0
2368100 HOUSING&CMTY DEV AST 8 5 62.5
2368200 HOUSING&CMTY DEV REP 10 7 70.0
2400200 PERSONNL STATUS EXMR 9 9 100.0
2408210 AFFIRM ACTN ASSNT 1 5 4
2414200 SENR HEALTH PLANNER 5 3
2414450 ASSNT CHF HLTH PLNNR 6 4
2426200 HEALTH INSUR DATA C A 5 3
CHILD SUPPRT SYS IM A 4 3
ECONOMIST 6 3
SENR ECONMST 6 6
SENR ECONMST BLS 4 2
SENR ECONMST LBR 13 13
ASSOC ECONMST 12 5 41.7
ASSOC ECONMST LAB RCH 12 10
PRIN ECONOMIST REG EC 5 4
PRIN ECONOMIST LAB RS 7 7
PROGRAM RSCH SP 2 MUN 4 2
PROGRAM RESCH SP 1 10 4
2459253
SIZES
BASI312
2459313
2471300
2501511
BSOLS19
2501522
2501522
2501600
B501612
Z50L680
2501600
2502600
2505200
7307500
2522220
2532200
540520
2543100
2543200
2544100
2547510
25427100
2549400
2551800
2552400
2542110
2562120
2542200
BSA4100
26
2610600
PROGRAM
PROGRAM
PROGRAM
PROGRAM
PROGRAM
PROGRAM
PROGRAM R
MENTAL HYG
MENTAL HYG
MENTAL HYG
MENTAL HYG
SENR
PRIN
PRIN
PRIN
PRIN
HEAD
HEAD
HEAD
CLERK
CLERK
CLERK
CLERK
CLERK
CLERK
CLERK
CHF CLERK
HEAD FILE CLERK
EXAMS DELIVERY CLERK
CONST EG RP PROD COUR
MUNCPL RSCH ASST
COLLECTION
MEDICAL
CORP SI
PROP CNTRL
PERSONNEL
SURROGATE
LEGAL ASSNT 2
MOTOR EQ REC ASSNT
SUPVG MOTOR VEH REP
INMATE RCRDS COORD 1
INMATE RCRDS COORD 2
MEDICAL CODNG CLK
ADMNV SERVS MANGR 1
ENERGY ASSTNC RVW AID
ENERGY ASSTNC RVW SPV
OFF SRVS MANAGER
PAYROLL CLERK 4
STUDENT LOAN CN R 1
STUDENT LOAN CN R 2
STUDENT LOAN CNTRL R
STUDENT AID ADJSTM EX
COLLETN&CVL PRS S
CORRL VIDEOTAPE MONTR
INFO PROCSSG SPEC 4
SECRETARIAL ASSNT
HEAD STENOGRAPHER
HEAD HEARING REPORTER
SERVER
MAIL&SUPPLY HELPER
MAIL&SUPPLY CLERK
SENR MAIL&SUPPLY CLK
PRIN MAIL&SUPLY CLK
100.0
4Z.9
70.6
72.2
eo. 3
66.7
50.0
75.0
60.6
73.0
42,°
Se. 5
EF
72.7
708
2709600 HEAD MAIL&SUPLY CLK 13 10 76.9
2711200 OFF MACH OPER 21 14 66.7
2711300 SENR OFF MCH OP 6 4 66.7
2711340 SENR OFF MCH OP PHOTO 4 4 100.0
2712200 SENR CALC MACH OPER 4 3 75.0
2717200 OFFSET PRNT MCH OP 20 10 50.0
2717300 SENR OFFSET PRT MC OP 17 13 76.5
2717500 PRIN OFFSET PRT MC OP 19 16 84.2
2717600 HEAD OFFSET PRT MC OP 4 3 75.0
2762000 LITHOGRAPHIC PHOTOGR 4 3 75.0
JR ADMNV ASSNT 20 15 75.0
ADMNV ASSNT 27 25 92.6
2801300 SENR ADMNV ASSNT 23 19 82.6
2802410 PURE WATRS GRNTS AN 1 6 5 83.3
2802420 PURE WATRS GRNTS AN 2 11 10 90.9
2802430 PURE WATRS GRNTS AN 3 5 5 100.0
2810200 SENR ADMNV ANLST 17 13 76.5
2810400 ASSOC ADMNV ANLST 18 14 77.8
2810600 PRIN ADMNV ANALYST 9 8 88.9
2811000 SUPVR ADMNV ANALYSIS 10 6 60.0
2817200 SENR BUILDG SPACE ANL 16 10 62.5
2817400 ASSOC BULDG SPAC ANLS 4 4 100.0
2819100 SUBSTANCE ABS SUP S A 4 3 75.0
2819620 SUBSTANCE ABS PGM S 2 19 15 78.9
2819630 SUBSTANCE ABS PGM S 3 9 7 77.8
2829601 ASSNT COMMR LABOR 7 5 71.4
2832210 ARTS PROGRAM ANLST 1 4 2 50.0
2832320 ARTS PROGRAM ANLST 2 21 17 81.0
2832340 ARTS PROGRAM ANLST 4 12 8 66.7
2834200 ASSOC CAPITAL PROG CO 4 3 75.0
2845620 HEALTH PROG ADMR 1 HS 16 10 62.5
2845630 HEALTH PROG ADMR 2 HS 12 10 83.3
2845670 HEALTH PROG ADMR 1 PH 13 11 84.6
2854900 REGNL DIR ENV CONSERV 7 6 85.7
2875110 GRANTS MANGMNT BDGT 1 4 4 100.0
2875120 GRANTS MGMT BDGT SP 2 4 4 100.0
2894210 REGNL ADMNR 1 5 4 80.0
3004600 HEAD HOUSEKEEPER 22 14 63.6
3004701 CHF HOUSEKEEPER 1 16 11 68.8
3004702 CHF HOUSEKEEPER 2 11 9 81.8
3016500 SUPVG JANITOR 20 15 75.0
3016600 HEAD JANITOR 20 15 75.0
3016820 CHF JANITOR 2 4 4 100.0
3018000 LOCKER ROOM ATTENDANT 4 2 50.0
PARKING LOT ATTDNT 14 10 71.4
PARKING SRVS ATTDNT 16 10 62.5
SENR PARKING SRVS ATT 9 8 88.9
LABORATORY CARETAKER 19 13 68.4
3041100 ASSEMBLY HALL CUSTOD 17 9 52.9
3042200 INST WORKER 41 18 43.9
3050100 WINDOW WASHER 8 4 50.0
3105100 DIETITIAN AIDE 8 5 62.5
3105200 DIETITIAN 21 16 76.2
3105500 SUPVG DIETITIAN 18 14 77.8
3110110 NUTRITION PGM REP 1 13 12 92.3
3110120 NUTRITION PGM REP 2 11 10 90.9
3110200 NUTRITION EDUC CNSLT 3 3 100.0
3110800 SENR NUTRITION E CSLT 5 5 100.0
3111100 NUTRITION SRVS CONSLT 21 14 66.7
3111300 SENR NUTRITIONIST 5 5 100.0
3111400 ASSOC NUTRITIONIST 4 4 100.0
3114200 INST FOOD ADMNSTRATOR 18 14 77.8
3117100 AGING SRVS NUTRTN C 1 5 5 100.0
3118100 ASSNT BAKER 9 3 33.3
3118200 BAKER 20 10 50.0
3119200 MEAT CUTTER 17 11 64.7
3126200 CORRL FCLTY AST FD MG 4 4 100.0
3120100 PASTEURZTN PLANT OPER 7 6 85.7
3302500 LAUNDRY SUPVR 22 17 77.3
3302600 HEAD LAUNDRY SUPVR 20 14 70.0
3302601 LAUNDRY MANAGER 1 4 3 75.0
3302602 LAUNDRY MANAGER 2 8 5 62.5
3308100 LINEN SORTER 7 6 85.7
3504400 ASSOC SCHOOL LIB SRV 4 4 100.0
3507400 ASSOC SCHOOL BUS MGT 13 10 76.9
3507500 SUPVR SCHL BUS MANGT 4 4 100.0
3511200 ASSNT EDUCL TESTING 10 2 20.0
3511400 ASSOC EDUCL TESTING 9 7 77.8
3513800 ASSOC OCCUPL SCH SUPV 4 3 75.0
3521270 ASSOC OCCUPL ED CVL R 5 4 80.0
3521400 ASSOC SPEC OCCUPL ED 5 3 60.0
3521410 ASSOC OCCUPL ED PR DV 4 3 75.0
3522400 ASSOC MATH EDUC 5 5 100.0
3523400 ASSOC ENGLISH EDUC 5 4 80.0
3525400 ASSOC SOC STUDIES ED 5 5 100.0
3525500 SUPVR BILINGUAL EDUC 5 5 100.0
3525700 ASSOC BILINGUAL EDUC 8 6 75.0
3530400 ASSOC READING EDUC 8 7 87.5
3533100 EDUC PRGM ASSNT 1 20 15 75.0
3533200 EDUC PRGM ASSNT 2 12 9 75.0
3535300 ASSNT HIGHER EDUC 6 5 83.3
3535400 ASSOC HIGHER EDUC 11 7 63.6
3536400 ASSOC TRNG SPEC ELC 7 6 85.7
3541400 ASSOC EDUCL DATA SYS 4 3 75.0
SCHOOL FIN AID 9 6 66.7
SCHOOL DIST ORG 4 4 100.0
SCHOOL LUNCH AD 10 8 80.0
OCCUPL ED PR PL 5 3 60.0
INDUS EDUC 4 4 100.0
EDUCL PLNG&EVAL 10 10 100.0
PHYS EDUC 4 3 75.0
EDUC CHD HLT 19 12 63.2
3560400
3561500
BE64500
3569200
3562400
35627500
3570400
3572400
S5S2400
SS 72500
3595400
3596400
3601200
3601200
3601360
3601370
3401450
3606200
3406400
» 3615200
2615200
3615400
615500
BeOL201
3801310
_ 3802330
3814040
3217200
3226200
3237100
3837202
* 3840500
3241200
3241400
BS44200
ASSNT EDUCL INTEGRTN
ASSOC EDUCL INTEGRTN
ASSNT EDUC RESEARCH
ASSOC EDUC RESEARCH
SUPVR EDUC CHLDRN H C
SUPVR SECONDARY EDUC
ASSNT EDUC DISADVNTGD
ASSOC EDUC DISADVNTGD
EDUC DISADV PROG AIDE
ASSOC VETERANS EDUC
ASSOC INSTR MATS HNDC
ASSOC EDUCL TELEVISN
SUPVR OCCUPL EDUC
ASSOC HIGHER OCC EDUC
ASSOC CONTG EDUC
ASSNT LIBRARIAN
SENR LIBRN
SENR LIBRN TECH PROC
SENR LIBRN MEDICINE
ASSOC LIBRN MED
ASSNT LIBRARY SRVS
ASSOC LIBRY SVS
MEDICAL RECORD TECH
MEDICAL RECORD ADMR
SENR MED RECORDS TECH
SUPVG MED RECD ADMR
SCIENTIST ARCHEOLOGY
SENR SCIENT ENTOMLGY
SENR CURATOR HISTORY
MUSEUM EXH SPEC PRD A
CONSERVTOR
HISTORIC CONS TECH
HISTORIC SITE ASSNT
HISTORIC SITE MGR 2
REGNL HSTRC PRESV SPV
HISTORIC PRESRVTN P A
HISTORIC PRESRVTN P A
INTERPRETIVE PGMS AST
ARCHIVIST 2
CURRICULUM CONTNT
REMEDIATION ASSNT
SUPVR GENERAL
SUPVR SPEC SUBJCT
EDUC SUPVR VOCATIONAL
EDUC DIR 1
EDUC DIR 2
SUPVR CORRL FAC VOL T
OCCUPL REGIONAL SUPVR
HABILTATN SPEC 1
HABILTATN SPEC 2
VOC INSTRUCTOR 1
73.0
25.7
62.5
56.3
60.0
30.0
100.0
a5.7
100.0
100.0
75.0
100.0
70.0
52.9
$2.5
3PZ1N40
3947210
3P47220
BP47230
FP47250
3947300
39465020
3965030
3965040
3972200
4000100
4001200
4001220
4001240
4001250
4001270
40012280
4001200
4001320
4.001340
4001250
40012360
4001390
4001400
4001420
4001430
4001460
4001490
4001940
BOOLISO
4002200 -
4003201
4003204
4002206
4003300
4003201
4003302
4003302
4NOBI04
4003500
AOOSIOG,
4003507
4010200
4010300
4018000
4021510
9021520
VOC INSTRUCTOR
VOC INSTRUCTOR
VOC INSTRUCTOR
DEV SPEC 1
DEV SPEC 2
DEV SPEC 3
DEV SPEC 5
DEV ASSNT
TEACHER 2
TEACHER 3
TEACHER 4
TEACHING ASSNT
ENGRG
CIVIL
CIVIL
CIVIL
CIVIL
CIVIL
CIVIL,
CIVIL
Civil”
CIVIL
CIVIL
CIVIL
CIVIL
CIVIL
CIVIL
CIVIL
CIVIL
CIVIL
CIVIL
CIVIL
ENGRG
ENGRG
SENR
PRIN
SENR
ENR
ENR
SENR
SENR
PRIN
PRIN
FRIN
PRIN
FRIN
FARK
SENR
AIDE
ENGR
ENGR
ENGR
ENGR
ENGR
ENGR
ENGR
ENGR
ENGR
ENGR
ENGR
ENGR
ENGR
ENGR
ENGR
ENGR
ENGR
“ENGR
ENGR”
TECH
TECH
ENGRG
ENGRG
ENGRG
ENGRG
ENGRG
ENGRG
ENGRG
ENGRG
ENGRG
ENGRG
ENGRG
ENGRG
Pn
PLN
TRF
MAT
TRF:
PHY
MAT
PLN
PHY
STR
TRF!
ENV @
TECH °
TECH
TECH
TECH
TECH
TECH &
TECH
TECH
TECH
TECH
TECH 3
TECH 5
ENGINEER
FARK ENGINEER
JR ENGINEER
SUPVR RGNL TRNS Pet 1
SUPVR RGNL TRNS Ped &
ING
STRUCTRS
FC
PHYSCL R
RLS
PLNNG
FC
STRUCTRS
sch R
RLS
NG.
SCL _R
WICTRS
FC
WAL
S TST
$ TST
WTRPC
AIRPC
82.6
63.6
82.6
41.7
bh. 7
75.0
66.7
75.0
50.0
75.0
82.8
90.4
72.2
ao.
72.7
7b.
21.0
Be.2
ae.
. O23
S9.5
23.3
78.9
100.0
20.9
85.7
100.0
83.5
94%
100.0
100.0
~~ 88.9
81.3
72.2
100.0
4025200
4027200
4027300
4027400
428100
4022200
40228300
4028400
4042900
4044100
4.046200
4053100
4053200
4053200
4205200
4205300
4205400
4220200
4220300
42204800
4230200
4.230300
4220400
4240200
4301200
4301200
4301400
4202200
4303200
4202400
4328200
4320200
4320300
4220400
4360100
4360200
4360200
4401200
4401330
4401400
4401420
4401430
4417200
4426400
SENR HYDRAULIC ENGR
ASSNT SOILS ENGINEER
SENR SOILS ENGINEER
ASSOC SOILS ENGINEER
JR ENGINEERING GEOL
ASSNT ENGR GEOLOGIST
SENR ENGRG GEOLOGIST
ASSOC ENGRG GEOLGST
REGNL DIR TRANSPORTN
REGNL TRFFC ENGNR
REGNL TRFFC ENGNR
TRANS MAINTC ENGR
TRANS MAINTC ENGR
TRANS MAINTC ENGR
ASSNT BULDG ELEC ENGR
SENR BUILDG ELEC ENGR
ASSOC BULDG ELEC ENGR
ASSNT BULDG STRUC ENG
SENR BUILDG STRCT ENG
ASSOC BULDG STRU ENGR
ASSNT HEAT&VENTLG ENG
SENR HEAT&VENT ENGR
ASSOC HEAT&VENTG ENGR
ASSNT PLUMBING ENGR
ASSNT BULDG CONST ENG
SENR BUILDG CNSTR ENG
ASSOC BULDG CONST ENG
ASSNT SUPT CONSTR
SENR SUPT CONSTRUCTN
ASSOC SUPT CONST
BULDG CONST PGM MGR 2
ASSNT MECH CONST ENGR
SENR MECHL CONSTR ENG
ASSOC MECH CNSTR ENGR
JR ARCHL ESTMTR
ASSNT ARCHL ESTIMATOR
SENR ARCHL ESTIMATOR
ASSNT SANI ENGR
SANIT ENGR DSGN
SANI ENGR
SANI ENGR DSGN
SANI ENGR
SANI ENGR ENV C
SANI ENGR 2 WM
SANITRY ENGR E
SANITRY ENGR &WM
SANI CONSTR ENG
SANITARIAN
SANITARY CONST INS
SANITARY CONST INSP
ASSNT AIR POL CTL ENG
60,0
B1.8
73.8
100.0
80.0
ga. 9
77.8
80.0
70.0
100.0
100.0
100.0
100.0
80.0
65.4
62.5
~ 100.0
42.9
80.0
S71
65.0.
100.0
75.0
57.1
70.6
48.4
76.9
F168
20.5
33.3
50.0
75.0
100.0
91.7
50.0
100.0
60.0
4426500 PRIN AIR POL CTL ENGR 7 7 100.0
4435200 REGNL DIR ENV QU ENGR 6 6 100.0
4540200 ASSNT RAILROAD ENGR 12 9 75.0
4540300 SENR RAILROAD ENGR 7 5 71.4
4540400 ASSOC RAILROAD ENGR 4 4 100.0
4542100 TELECOMMUNCTNS AN 1 7 7 100.0
4542200 TELECOMMUNCTNS AN 2 23 20 87.0
4542300 TELECOMMUNCTNS AN 3 6 5 83.3
4542610 TELECOMMUNCTNS NT S1 4 4 100.0
4580200 SENR VALUATION ENGR 19 15 78.9
4580400 ASSOC VALUATION ENGR 16 10 62.5
4580500 PRIN VALUATION ENGR 5 5 100.0
4582000 ASSNT UTLTY ENGR 12 9 75.0
4601200 ASSNT TAX VAL ENGR 20 15 75.0
4601300 SENR TAX VALUATN ENGR 13 8 61.5
4601400 ASSOC TAX VALUATN ENG 4 2 50.0
4770100 ENGRG MATLS ANLST 4 3 75.0
4770200 SENR ENGRG MATLS ANL 7 7 100.0
4771100 ENGRG MATLS TECH 9 6 66.7
4771300 SENR ENGRG MATLS TECH 13 13 100.0
4771500 PRIN ENGRG MATLS TECH 16 15 93.8
4801100 JR ARCHITECT 4 4 100.0
4801200 ASSNT ARCHITECT 19 16 84.2
4801300 SENR ARCHITECT 17 10 58.8
4801400 ASSOC ARCHITECT 20 15 75.0
4901300 SENR ARCHL SPECS WRTR 6 3 50.0
4902200 ASSNT MECH SPEC WTR 4 3 75.0
4914300 SENR FACILITIES COORD 5 5 100.0
4917200 FACILITIES PLNNR 2 7 6 85.7
5001100 JR LANDSCAPE ARCHITCT 7 5 71.4
5001200 LANDSCAPE ARCHITECT 13 13 100.0
5001300 SENR LANDSCAPE ARCH 22 19 86.4
5001400 ASSOC LANDSCAPE ARCHT 4 4 100.0
5110100 DRAFTING ASSNT 5 3 60.0
5111000 DRAFTING AIDE 17 12 70.6
5111200 DRAFTING TECH 21 18 85.7
5111300 SENR DRFTG TECH GENL 9 4 44.4
5111301 SENR DRFTG TECH ARCHL 7 7 100.0
5111302 SENR DRFTG TECH ELECT 4 4 100.0
5111304 SENR DRFTG TECH STRCT 18 11 61.1
5111501 PRIN DRFTG TECH GENL 10 6 60.0
5111502 PRIN DRFTG TECH ARCHL 10 7 70.0
5111503 PRIN DRFTG TECH ELECT 6 3
5111504 PRIN DRFTG TECH MECHL
5111505 PRIN DRFTG TECH STRCT 13
5148200 MAPPING TECHN 2 5
MAPPING TECHN 3 9 3
MAPPING TECHNLGST 2 4 2
ASSNT LAND SURVEYOR 2 5 3
ASSNT LAND SURVEYOR 3 6 6
LAND SURVEYOR 6 4
BiSO410 SENR LAND SURVYR TRAN
§$200700 CHF FORENSIC WNIT 1
5202101 REGNL MED CARE ADMR
5202200 MEDICAL CARE ADMR
202400 ASSOC MED CARE ADMR
5202500 PRIN MED CARE ADMR
5207900 DIR PSYCHIATRIC CNTR
5207950 EXEC DIR PSYCHTRC CTR
S208900 DIR CHLDNS F's
5210000 DEY CENTER &
& 100.0
2 706
2 50.0
4 100.0
3
SZ10100 DEV DISBLTS SFC 1 1 72.9
SZ10110 DEV DISBLTS S1 Oc 10
§210200 DEV DISBLTS SPC 2 13
§210210 DEV DISBLTS S2 OC 2
5210400 DEV DISBLTS PGM SFC 4 14
S$210720 CHF DEV CNTR TRMNT SV 15
5210900 DIR DEVELMNTL CENTR - 13 -
5211410 AREA OFFC DIR LNG T C 3
B211420 AREA. OFFC DIR HOSP C 5
S211430 AREA OFFC DIR AME © 3
5216101 REHAB ASSNT 1 14
S216202 REHAB ASSNT 2 16
5216500 WORKSHOP SPEC 3
5217203 REHAB PHYSICIAN 3 5
S21S800 DEPUTY REGNL DIR MH & 3
5218900 REGNL DIR MENT HY SVS 4
521750 DEPUTY CLNCL DIR I SV 4
52192900 CLINICAL DIR INFTNT -& ag
5220410 MENTAL HLTH PGM SPC 1 S
5220420 MENTAL HLTH PGM SPC 2 7
5221700 CHF MEDICAL SRVS AL
5222600 DIR COMTY SRVS % 14
5223700 AREA ADMR HLTH SYS MG 4
5226400 PHYSNS ASSNT 17
5228300 SENR EMERGY MD CR REF 3
5246110 ALCLSM PRGM SPEC 1 4
S246120 ALCLEM PRGM SPEC 2 2
5282100 CLINICAL PHYSN 1 10
S252200 CLINICAL FHYSN 2 11
CLINICAL PHYSN 3 2
TREATMNT TEAM LOOM R 14
TREATMNT TEAM LO CkYS 17
TREATMNT TEAM LID MH 1s
CHF MNTL HLTH CHLO J 7
CHF MNTL HLTH TRM 1S
DEPUTY DIR TRIMNT
SYCHIATRIST 1
2 PSYCHIATRIST 2
3 PSYCHIATRIST 3
PSYCHIATRIST RESCH 2
DIR QULTY ASSLIRNC:
B7 6700
5277200
5277201
5277400
5224402
5284420
5284700
S271101
SZ71102
5274000
302500
5203800
5351201
5281202
‘SAS1 20
5354200
5354400
SSE5100
5500810
SEO0S20
SS00840
SSOOSEO
5500560
5500600
S05 100
5506220
5806230
SSL0701
5510720
Se1L0730
5510750
5510760
5510775
5513200
5517200
5518700
5518800
5512900
5520200
SS2E200
5524100
S5S6100
5570500
5577 100
514207
OIR ALCOHLSM TRIMNT ©
MEDICAL SPEC =
MEDICAL SPEC 1
MEDICAL SPEC 3
PUBLIC HP 2 LMA PRG
PUBLIC HF 2 UTLZN RV
DISTRICT ADMR PUB HLT
COMP EXAMG FPHYSN 1
COMP EXAMG PHYSN 2
RESIDENTL TRIMT FCL
SUPVG BARBER
SUPVG BEAUTICIAN
DENTIST 1
DENTIST 2
DENTIST 3
PUBLIC H DENT LMAP
REGNL PUBLIC HLTH DNT
DENTAL TECHNICIAN
NURSE 1
NURSE 2
NURSE 2 PSY
NURSE REHAB 2
NURSE 2 ONCOLOGY
MENTAL HYG NR&G FGM ©
NURSE PRCTNR
TEACHINGERSCH CTR N 2
TEACHINGERSCH CTR N. 3
NURSE ADMR 1
NURSE ADMR 2
NURSE ADMR REHAB 1
NURSE ADMR PSY 1
NURSE ADMR 1 ONCOLOGY
NURSE ADIMR PSY 2
UTILZTN REVW NRS
UTILZTN REVW COORD
COORD CMTY RESONCS
COMTY RESONC ASNT. DIR
COMTY RESDNC DIR
MENTAL HYG SFC ADL TA
HEALTH SRVS NURSE
HEALTH PROG AIDE
HEALTH FACLTS SVY 1 N
MENTAL HYG HFWY H A 2
O NURSE ANESTHETIST
VOLUNTEER SRVS
VOLUNTEER SRV
VOL SRYS ASSNT
CORRL FAG VOL &
MENTAL HYG THER AST 2
INTERMDT CARE F PG Mi
SULT NR MTY N&HHs
45.5
TRG
20.9
79.0
190.0
7O.&
75.0
100.0
82.2
71.4
60.0
7&.2
&
72.2
100.0
100.0
76.2
72.0
73.7
45.0
42.1
95.2
95.0
23.9
35.0.
70.0
100.0
76.9
735.0
71.4
77.8
64.7
35.0
62.5
65.2
81.0
6647
89.5
66.7
60.0
71.4
37.5
60.0
68.2
50.0
75.0
G71
77.2
75.0
5614500 COMTY MNTL HLTH NR
5615505 COMTY NSG
5616200 HOSP NSG
5618200 REGNL HOSP NRSG S ADM
5700200 PHYSCL THER
5700300 SENR PHYSICAL THER
5700600 HEAD PHYSICAL THER
5700700 CONSULT PHYSICAL THER
5700800 CHF PHYSICAL THER
5702301 PHYSCL THER ASNT 1
5702302 PHYSCL THER ASNT 2
5900201 OCCUPL THERPY AST 1
5900202 OCCUPL THERPY AST 2
5901200 OCCUPL THERAPIST
5901300 SENR OCCUPL THER
5901600 HEAD OCCUPL THERAPIST
5901700 CHF OCCUPL THERAPIST
5903100 RECREATION ASSNT
5903200 RECREATION THER
5903202 RECREATION THER MUSIC
5903203 RECREATION THER DANCE
5903204 RECREATION THER A SS
5903300 SENR RECREATION THER
5903400 RECREATION WORKER
5903600 HEAD RECREATION THER
5903700 CHF RECREATION THERAP
5909100 RECREATION PRGM LDR 1
5909200 RECREATION PRGM LDR 2
5931200 AUDIOLOGIST
5932100 ASSNT SPEECH PTHOLGST
5932200 SPEECH PATHOLOGIST
5934100 SPEECH PATHLGY&A PC 1
5934200 SPEECH PATHLGY&A PC 2
5960000 MUSIC SUPERVISOR
5971200 HANDICRAFT INSTRUCTR
6101200 BACTERIOLOGIST
6101300 SENR BACTERIOLOGIST
6101320 SENR BACTERIOLGST VIR
6101400 ASSOC BACTRLGST
6104300 R H PHYSN 2
6107100 STRMTRY AN 1
6112300 SENR AQUATIC BIOLOGST
6112500 SUPVG AQUATIC BIOL
6114110 CONSERVN BIOLGST 2
6114210
6114220
6114240
6114300
6114500
6121400
SENR WILDLIFE BIOLGST
SUPVG WILDLIFE BIOLGST
ASSOC ANAL CHEMIST
70.5
30.0
50.0
100.0
79.0
S5.0
61.5
72.9
6123200
6125200
6126200
6129300
4429400
&1LZI500
4130300
4130450
6l4o2so
6152200
41454100
&1L60410
6140120
6460400
4160500
&160600
6161220
6161230
6162000
6162201
6162202
6162203
S162204
6162205
&162216
4162217
6162218 -
6162100
4163201
6162202
6162203
61623204 -
6163205
E163206
6163207
6164202
6164900
6202300
6204200
6204310
6204320
6204325
4204250
&204260
6214400
6215200
6216200
bZLEZ10
SENR BIOCHEMIST
SENR ENGRG MATLS CHEM
SENR FOOD CHEMIST
SENR SANI CHEMIST
ASSOC SANI CHEMIST
PRIN SANITRY CHMST
SENR RADIOL HEALTH SP
ASSOC RADIOL HLTH SP
PATHOLOGIST 3
SENR RADIOPHYSICIST
ENVIRNL CHEMIST 1
PSYCHOLOGIST 1
PSYCHOLOGIST 2
ASSOC PSYCHOLOGIST
PRIN PSYCHOLOGIST
CHF PSYCHOLOGIST
PSYCHOLOGY ASSNT 2
PSYCHOLOGY ASSNT 3
ASSNT RSCH SCIENTIST
RESCH SCIENT
RESCH SCIENT
RESCH SCIENT
RESCH SCIENT
RESCH SCIENT
RESCH SCIENT
RESCH SCIENT
RESCH SCIENT
ASSNT CANCER RSCH
CANCER RSCH SCI 1
CANCER RSCH SCI 2
CANCER RSCH SCI 3
CANCER RSCH SCI 4
CANCER RSCH SCI 5
CANCER RSCH SCI 6
CANCER RSCH SCI 7
CANCER RSCH CLNCN 2
CHF CANCER RSCH CLNC
SENR LAB WORKER
LABORATORY TECH
SENR LAB TECH BACT
SENR LAB TECH BIOLOGY
SENR LAB TECH CHEM
SENR LAB TECH MICROBL
SENR LAB TECH BIOCHEM
SENR XRAY AIDE
RADIOL TECH
SENR RADIOL TECH
ELECTROENCPHGRPH T
ELECTRONICS TECHN
LABORATORY EQ DESGNR
LABORATORY EQ
100.0
ao. 7
90.0
81.0
S2.6
S5.0
&4.7
62.4
72.2
73.7
45.0
72.2
63.2
55.6
81.0
BG.4 .
735.0
55.0
68.2
52.6 ¢
60.0
So.6
64.7
© 60.0
47.4
E366
22.2
6219300
6219500
6225110
6225200
6232100
6322200
6322300
6326800
6403200
6410210
6410220
6410230
6411100
6411200
6421200
6463100
6467100
6467200
6467300
6501300
6501400
6501412
6501480
6501500
6501516
6501560
6503200
6510100
6510400
6510410
6510600
6516100
6524300
LABORATORY EQ DS CMCT
SENR LAB EQUIP DESGNR
MEDICAL TECHNOLOGIST
SENR CENT MED SPL TCH
MEDICAL TEST ASSNT
MEDICAL LAB TECH 1 SA
MEDICAL LAB TECH 2
CLINICAL LAB CNSLT
CYTOTECHNOLOGIST
OPTICIAN
AUTOPSY AIDE
PHARMACIST
SENR PHARMACIST
PHARMACY CNSLT
NARC INVESTIGATOR
SENR NARC INVEST
ASSNT PHARMACY CNSLT
FOOD PROCESSING INSP
FOOD INSPECTOR 1
FOOD INSPECTOR 2
FOOD INSPECTOR 3
DAIRY PRODCTS SPEC 1
DAIRY PRODCTS SPEC 2
KOSHER FOOD INSPECTOR
PUBLIC H INSPECTOR
PUBLIC H REP 1
PUBLIC H REP 2
PUBLIC H REP 3
SENR ATTORNEY
ASSOC ATTY
ASSOC ATTY TAX
ASSOC ATTY REALTY
PRIN ATTY
PRIN ATTY SEC&PUB FNC
PRIN ATTY APPLS&OPNS
TITLE SEARCHER
ASSNT HEARING OFFCR
HEARING OFFICER
HEARING OFFICER PRL R
SUPVG HEAR OFFR
MEDICAID HRNG EXMR 1
LEGAL REPRESENTATIVE
UI REFEREE
SENR UI REFEREE
TRIAL EXAMINER
MOTOR VEH REFEREE
SENR MOTOR VEH REF
DIR REGIONAL ENF&LGL AF
UTILITY HRNG SPEC 3
DENTAL SRVS RV ASNT 1
BEVERAGE CNTRL INVEST
[sample-size columns illegible]
SENR BEV CNTRL INVEST [?] 13
SUPVG BEV CNTRL INVST 7 6
6606220 EXEC OFFR E 3 [?]
6606220 EXEC OFFR O 9 8
6612200 ASSNT LAND&CLMNS ADJST 2 [?]
6613200 SENR LAND&CLAIMS ADJ [?] 7
6630150 INVESTIGATIVE AIDE 7 4
6630200 INVESTIGATOR 19 15
6630300 SENR INVESTIGATOR 6 5
6631200 SENR PROFSL CNDCT INV 23 15
6631500 SUPVG PRFSL CNDCT INV [?] 7
6633200 SENR SOC SRV CHLD SS 9 5
6632400 ASSOC SOC SV CHLD SS 4 4
6636100 SOC SRV MEDCAD INV 1 14 10
6636200 SOC SRV MEDCAD INV 2 40 [?]
6640200 CONSUMER FRAUDS REP 4 3
6640300 SENR CNSMR FRAUDS REP 9 6
6645400 CORRL SRVS EMP INVSTR 7 3
6644200 MOTOR VEH INVEST [?] 15
6644300 SENR MOTOR VEH INVEST 11 7
6646210 LAW DEPT INVEST 1 18 12
6646220 LAW DEPT INVEST 2 9 4
6652100 LICENSE INVEST 1 17 9
6652200 LICENSE INVEST 2 22 10
6652300 LICENSE INVEST 3 7 4
6662202 RESOURCES&REIMB AGT 2 20 15
6662300 SENR RSCS&RMB AGT 20 19
6662410 RESOURCES&REIMB PD S1 5 4
6662420 RESOURCES&REIMB PD S2 7 6
6662500 PRIN RESRCS&REIMB AGT 13 10
6664000 INSUR FRDS INVESTGTR 6 4
6665200 GAMES CHANCE INSPCTR 6 3
6674201 STANDS COMPLC ANLST 1 20 12
6674202 STANDS COMPLC ANLST 2 20 13
6674212 STANDS COMPLC AN 1 ICF 14 4
6674222 STANDS COMPLC AN 2 ICF [?] 4
6681110 MINORITY BUS ENT L S 1 6 4
6681510 MINORITY BUS SPEC 1 [?] 3
MINORITY BUS SPEC 2 4 3
MEDICAL CONDUCT INVEST 6 4
MED CNDCT INVST [?] 5
6805100 CLAIMS INVEST 1 19 [?]
6805200 CLAIMS INVEST 2 22 17
CLAIMS INVEST 3 9 6
CLAIMS EXAMINER 22 13
SENR COMP CLMS EXAMNR 22 18
COMP CLMS EXMR 17 12
PRIN COMP CLMS EXMR 9 8
INSUR FD DST CLMS MGR 4 2
WORKERS COMP EXMR 20 19
SENR WKRS COMP EXMR 17 [?]
WKRS COMP EXMNR
PRIN WKRS COMP EXMR
6828100
6830410
6830420
6831100
6851200
6851300
6851400
6862300
6862400
6863200
6863300
6864200
6864300
6893300
6893400
6894100
6894200
6894300
6894400
6895100
6897710
6921000
6921200
6921700
6922201
6922202
6922203
6923100
6923700
7020000
7020700
7020000
[illegible code]
WORKERS COMP DSTC CM
UI INVESTIGATOR
SENR UI INVESTIGATOR
ASSOC U I INVSTGTR
SENR WKRS COMP REVW A
ASSOC WKRS COMP RVW A
COMP INVEST 1
COMP INVEST 2
CRIME VIC COMP CE 1
INSUR FD HRG REP 1
INSUR FD HRG REP 2
COMP CLAIMS LGL INV 1
UNDERWRITER
SENR UNDERWRITER
ASSOC UNDERWRITER
SENR UI HEARING REP
ASSOC U I HEARING REP
UI CLAIMS EXMR
SENR UI CLAIMS EXMR
UI REVIEWING EXMR
SENR UI REVIEWING EXR
MEDICAID CLMS EXMNR 3
MEDICAID CLMS EXMNR 4
SOC SRV DIS ANLST 1
SOC SRV DIS ANLST 2
SOC SRV DIS ANLST 3
SOC SRV DIS ANLST 4
SOC SRV DIS AIDE
DISABLTY DETRM RGNL A
CONST EQ OP
HIGHWAY EQUIP OPER
BRIDGE REPAIR SUPVR 2
BRIDGE REPAIR ASSNT
BRIDGE REPAIR MECH
BRIDGE REPAIR SUPVR 1
HIGHWAY MTC SUPVR 1
HIGHWAY MTC SUPVR 2
LABORER
LABOR SUPERVISOR
PAVEMENT MRKG SUPVR
SIGN CREW SUPVR
SUPVG MASON&PLASTERER
PAINTER
SUPVG PAINTER
ROOFER&TINSMITH
PLANT SUPT
[sample-size columns illegible]
7100002
7100003
7101300
7101500
7106200
7107110
7107120
7118200
7141200
7141300
7141500
7141600
7150300
7150500
7202000
7202100
7202115
7202130
7202150
7202170
7202190
7221800
7223500
7224000
7225100
7225200
7225700
7251300
7252200
7252300
7252400
7260200
7261200
7302200
7302300
7307200
7310300
7311100
7311200
7312000
7313300
7313500
7313700
7313800
7322000
7324510
7331100
7331200
7341150
7341250
7341700
PLANT SUPT B
PLANT SUPT A
MAINTCE SUPVR 1
MAINTCE SUPVR 2
FACILITIES MGMT ASSNT
PUBLIC BLDGS MGR 1
PUBLIC BLDGS MGR 2
REFRIG MECHANIC
SEWAGE PLANT OPERATOR
SENR SWGE PLT OP
PRIN SEWAGE PLNT OPER
HEAD SEWAGE PLANT OPR
MAINTCE SUPVR 2
MAINTCE SUPVR 4
MAINTCE ASSNT
MAINTCE ASSNT CARPNTR
MAINTCE ASSNT LCKSMTH
MAINTCE ASSNT MSN&PLR
MAINTCE ASSNT PAINTER
MAINTCE ASSNT RFRGTN
MAINTCE ASSNT PARKS
CHF LOCK OPERATOR
CANAL ELECTRICAL SUPV
CANAL STRCTR OPER
CANAL MTC SUPVR 1
CANAL MTC SUPVR 2
CANAL SECTION SUPT
CORE DRILL OPERATOR
ASSNT DRILL RIG OPER
DRILL RIG OPERATOR
WAREHOUSE EQUIP OPER
OVERHEAD CRANE OPER
CRANE&SHOVEL OPERATOR
FILTER PLANT OPERATOR
SENR FILTER PLANT OP
CONST EQ MECHANIC
ADAPTIVE EQUPMNT SPEC
GARAGE HELPER
GARAGE ATTENDANT
MOTOR EQ MECH
MOTOR EQ MTC SUPVR 1
MOTOR EQ MTC SUPVR 3
MOTOR EQ MTC COORD
MOTOR EQUP MNGR
ELECTRICIAN
SUPVG ELECTRICIAN
TRAFFIC SIGNAL MECHNC
ASSNT SIG MECH
SUPVG TRFFC SGNL MECH
[sample-size columns illegible]
76.5
81.3
60.0
58.8
21.0
75.0
100.0
7342200 COMPOUNDING OPER 4 2
7345010 MAINTCE ASSNT MECH 20 11
7345020 MAINTCE ASSNT PLMBR&S 20 12
7345060 MAINTCE ASSNT ELECTRN 20 13
7351000 MACHINIST 19 12
7352000 GEN MECHANIC 21 15
7354000 LABORATORY MECHN 21 13
7357000 ELECTRONIC EQUIP MECH 18 13
7359200 SHEET METAL WORKER 14 9
7360000 STEEL FABRICATOR 5 5
7361000 PLUMBER&STEAMFITTER 20 14
7361700 SUPVG PLUMBER&STMFTR 19 12
7367000 PUMPING PLANT OPER 4 3
7371000 WELDER 19 16
7389200 EQUIPMENT OPER INSTR 6 5
7440200 MAINTCE ASSNT MARINE 14 11
7442500 TENDER CAPTAIN 12 8
7444500 TUG CAPTAIN 10 9
7445100 DREDGE CRANE OPER 10 7
7445200 DREDGE OPERATOR 9 7
7445500 DREDGE CAPTAIN 5 4
7446500 DERRICK BOAT CAPTAIN 5 2
7447000 MARINE ENGINEER 17 14
7452000 MOTORIZED SCOW OPER 5 4
7501100 ASSNT STATNRY ENG 20 15
7501200 STATIONRY ENG 21 15
7501300 SENR STATIONARY ENGR 21 20
7501500 PRIN STATIONARY ENGR 22 16
7501400 HEAD STATIONARY ENGR 15 8
7511000 POWER PLANT HELPER 24 15
7511300 HEATG PLANT EQ SP 3 5 5
7605300 SENR AIRPORT DEV SPEC 7 5
7615000 TANDEM TRACTOR TRL OP 22 12
7724000 UPHOLSTERER 18 11
7744000 PRINTING SHOP HELPER 6 6
7744100 PRINTER 10 6
7744400 REGENTS PRINTER 15 9
7747200 SIGN PAINTER 14 11
7815000 LABOR STNDRD INVST [?] 19
7815200 SENR LABOR STNDRD INV 17 16
SUPVG LABOR STNRD INV 14 13
BOILER INSPECTOR 18 11
SENR BOILER INSPECTOR 5 5
SENR INDUS HYGIENIST 16 14
GAS&PETROLM INSPCTR 2 16 15
7845120 GAS&PETROLM INSPCTR 2 4 4
7863200 FIELD REP FIRE 2 7 7 100.0
7866200 MOTOR VEH INSPECTOR 21 14 66.7
7866500 SUPVG MOTOR VEH INSP 9 7 77.8
7867000 CAMPUS SFTY SPEC 6 5 83.3
TRANS HLTH&SFTY REP 8 7 87.5
RLROAD EQUIP INSP
SAFETY&HLTH INSPTR
SENR SFTY&HLTH INSP
SFTY&HLTH INSP
FIRE SAFETY TECH
FIRE&SAFETY REP
SENR UTILITY RTS ANL
7885200 MOTOR CARRIER INVEST
7897100 WEIGHTS&MSURS SPC 1
7901300 PRODUCTION CNTRL SPVR
ASSNT INDUS SUPT
INDUS SUPT
7913500 QUALITY CONTRL SUPVR
7946501 GEN INDUS TRNG S MPM
7946514 INDUS TRNG SPVR 2 G M
7946512 INDUS TRNG SPVR 2 MPM
7946522 INDUS TRNG SPVR 2 SM
INDUS TRNG SPVR 2 WFM
INDUS TRNG SPVR 2 M M
8100100 SOC SRV ASSNT
8106200 SOC SRV REP
8107210 PSYCH SOC WKR 1
8107220 PSYCH SOC WKR 2
8107410 PSYCH SOC WK ASST 1
8107420 PSYCH SOC WK ASST 2
8107430 PSYCH SOC WK ASST 3
8107510 PSYCH SOC WK SUPVR 1
8107530 PSYCH SOC WK SUPVR 3
8108203 MEDICAL SOC WKR B
8109200 SENR DRG ABUS REH CNS
8111600 PUBLIC H SOC WRK CNST
8122000 CORR COUNSELOR
8122003 CORR COUNSELOR MIN GP
8122008 CORR COUNSELOR AIDE
SENR CORRECTION CNSLR
NETWORK PRGM ADMR
HUMAN RTS SPEC
HUMAN RTS SPEC
[titles illegible]
[sample-size columns illegible]
77.8
85.7
100.0
73.9
20.0
80.0
45.0
80.0
[?]
99.0
73.7
89.5
80.0
92.4
76.2
95.0
62.5
85.7
62.5
73.7
20.5
85.7
77.8
77.0
60.0
32.9
100.0
75.0
75.0
20.0
CHLD PROTCTV 4 3
SOC SRV PROG 19 12
ENERGY PROGM SPEC 1 4 3
ENERGY PROGM SPEC 2 4 3
SOC SRV PROG SPC 22 20
ASSOC SOC SRV PROG SPC 20 15
PRIN SOC SRV PROG SPC 5 4
SENR SOC SRV PLNG SPC 18 15
SOC SRV EMPL SPEC 6 6 100.0
SOC SRV EMP SPEC 6 4 66.7
SOC WORK ASSNT 1 20 4 20.0
SOC WORK ASSNT 2 19 14 73.7
SOC WORK ASSNT 3 [?] 8
SOC WORKER 1 21 14
SOC WORKER 2 15 10
8159510 SOC WORK SUPVR 1 23 21
8159530 SOC WORK SUPVR 2 [?] [?]
SOC SRV MEDL ASTC SPC [?] 14
SENR SOC SRV MD AST S 20 17
ASSOC SOC SV MD AST S [?] 7
8162021 MEDICAID RVW AN 2 MC 6 [?]
MEDICAID RVW AN 3 MC 6 4
CHILD WELFARE SPEC 2 5 5
YOUTH FACILITY DIR 1 19 13
YOUTH FACILITY DIR 2 15 11
8169300 YOUTH FACILITY DIR 3 10 7
8171200 EDUC COUNSELOR 18 14
8172200 YOUTH DIV CNSLR 22 18
8173200 SENR YOUTH DIV CNSLR 22 14
8173500 SUPVG YOUTH DIV CNSLR [?] 19
8174200 FOSTER GRNDPRNT PGM C [?] 5
8175800 YOUTH RESD ASSNT S PG 6 5
8179500 DISTRICT SPVR YTH R S 13 9
8179900 REGNL DIR YTH REHAB 6 4
8191100 SUBSTANCE ABS PRV C 1 7 3
8256200 YOUTH RESD ASSNT S AD 5 4
CHAPLAIN 20 16
INSTRUCTOR BLIND 4 4
METHADONE PGM RV SP 2 4 3
[two rows illegible]
MOBILITY INSTRUCTOR 7 6
VOC SPECIALIST 4 20 15
YOUTH LOCL ASTNG P SL 19 16
YOUTH [?] 4 4
REHAB [?]
REHAB [?] 5
REHAB [?] 13
REHAB CNSLR 2 23 15
REHAB SRVS VOC 5 5
VOC REHAB UNIT 15 14
VOC REHAB CNSLR
SENR VOC REHAB CNSLR
ASSOC VOC REHAB CNSLR
DISTRICT MGR VOC RE 2
ASSNT INMATE GRVNC PG
CORRL PROGM COORD
SUPVR INMATE GRVNC PG
OCCUPL ANALYST
EMP CNSLT COLINS
EMP CNSLT M GRP
8405200 EMPL COUNSLR
8405300 SENR EMP COUNSLR
8406200 EMPL SVS FLD SPPRT AN
8408200 EMPL SRVS REP
8411200 EMPL INTRVWR
8411900 SENR EMP INTERVIEWER
8415800 YOUTH PROG SUPVR
8416200 COMTY WORKER
8417200 RURAL EMPL REP
8426000 TEMP RELEASE INTVWR
8432410 EMPL SEC MANGR 4
8432420 EMPL SEC MANGR 3
8432430 EMPL SEC MANGR 2
8432440 EMPL SEC MANGR 1
8432700 EMPL SEC SUPT
8432900 EMPL SEC AREA DIR
8442200 YOUTH EMPL PRGM SPEC
8445200 STATE VETERAN CNSLR
8445600 EMPL INTRVWR DSAB VOP
8522200 PUBLIC WK WAGE INVEST
8522800 SENR PUBLIC WK WG INV
8541300 SENR BUS CNSLT
8545200 INDUS DEV REP
8542810 COMMERCE DIST ADMR 4
8602200 PAROLE OFFCR
8602300 SENR PAROLE OFFICER
8602800 SUPVG PAROLE OFFICER
8604200 PROBATION PGM CNSLT
8606400 CRMNL JUSTC PRGM R 4
8614100 STATE PROBATION OFFCR
8617100 PROBATION PGM ADMR 1
8700100 CORR OFFICER
8700200 CORR SERGEANT
CORR LIEUT
CORRL CTR ASSNT
BLDG GUARD
SECURITY OFFICER
SENR SECURITY OFFICER
SFTY OFFCR
7046000
100.0
83.7
[?]
92.8
100.0
100.0
90.8
84.2
94.7
[?]
[?]
65.4
100.0
100.0
95.0
63.2
[?]
75.0
82.9
86.7
100.0
72.7
79.0
64.7
WARRANT&TRANSFER OFFR 13 9 69.2
8710100 IDENT SPEC 1 18 13 72.2
8710200 IDENT SPEC 2 10 6 60.0
8710300 IDENT SPEC 3 6 4 66.7
8714000 PARK PATROL OFFCR 24 21 87.5
8714500 SERGEANT PARK PATROL 22 17 77.3
8714750 LIEUTENANT PARK PTROL 15 11 73.3
8715200 SECURITY HSP TRT ASNT 19 12 63.2
SECURITY HSP TRT A A 12 5 41.7
8715300 SECURITY HSP SR TRT A 21 14 66.7
8715400 SECURITY HSP SPV TR A 12 10 83.3
8720200 CORR CLASS ANALYST 6 4 66.7
8720800 DEPUTY SUPT SECURY SV 13 [?] [?]
8721500 DEPUTY SUPT PROGM SVS 20 16 80.0
8722800 DEPUTY SUPT ADMNV SVS 18 17 94.4
8730100 CAPITAL POLICE OFFCR 20 14 70.0
FIRE SAFETY OFFCR 1 4 4 100.0
8730300 CAPITAL POLICE SGT [?] [?] [?]
CAPITAL POLICE LIEUT 4 4 100.0
8731100 SECURITY SRVS ASSNT 1 19 14 73.7
8731200 SECURITY SRVS ASSNT 2 12 8 66.7
8736900 DIR CORRL PROGM 4 3 75.0
8753200 CAMPUS PUB SFTY OFC 2 20 11 55.0
8753300 CAMPUS PUB SFTY SPV 2 21 16 76.2
8754100 CAMPUS PUB SFTY INVSTG 19 14 73.7
8755100 SAFETY&SCRTY OFFR 1 20 15 75.0
8755150 CHF SAFTY&SCRTY OFF 1 7 4 57.1
8755200 SAFETY&SCRTY OFFR 2 20 17 85.0
8755250 CHF SAFTY&SCRTY OFF 2 [?] [?] 74.7
8901000 MOTOR VEH LICENSE EXR 16 15 93.8
8901300 SENR MOTOR VEH LIC EX 13 9 69.2
8901500 PRIN MTR VEH LIC EXMR 14 13 92.9
8902800 ASSNT DIST DIR MTR VH 21 16 76.2
8908900 DISTRICT DIR MOTR VEH 16 15 93.8
8913200 HIGHWAY SFTY PROG REP 7 3 42.9
8925200 BODY REPAIR INSP 10 4 40.0
8931200 AUTO FCLTS INSP 21 17
8931300 SENR AUTO FACLTS INSP 14 10
8931500 SUPVG AUTO FACIL INSP 6 6
MOTOR VEH CNSMR SR 1 [?] 11
MOTOR VEH INS SV RF 2 5 4
SUPVG DRVR IMPRMT ANL 7 6
DRIVER IMPRV ANALYS 13 10
SENR DRIVER IMPRV ANL 6 4
GROUPED RESPONSES 1495 [?] [?]
APPENDIX G:
MAIN SURVEY: JOB TITLES DELETED
DUE TO INADEQUATE RESPONSE RATE
Deleted Titles Due to Inadequate Response Rates
To-be-Estimated Titles
5503200 Operating Rm Tech
5503300 Senr Oper Rm Techn
Non-estimated Titles
395200 Insur Collector
1776600 OTB Operatns Anlst
2414400 Assoc Health Planner
2523100 Latent Fingrprnt Exmr
2571100 Eligblty Revw Clk 1
2642100 Pathology Off Assnt 1
2700100 Off Assistant
3118600 Head Baker
3518500 Supvr Urban Sch Srvs
3521360 Assnt Occupl Ed Cvl R
3814030 Museum Exh Spec Rest
3950300 Career Dev Trng Spec
4361300 Senr Mechl Estimator
5210300 Dev Disblts Pgm Spc 3
5297600 Assnt Dir Cnty S Pg 0
5501200 Hosp Attendant 2
5506210 Teaching & Rsch Ctr N1
5701100 Hosp Physl Thrpy Aide
6104100 Resch Physn 1
6160112 Psychologist 1 Cor Sv
6163400 Assoc Cancer Rsch Sci
6164600 Assoc Chf Cr Rsch Cln
6204370 Senr Lab Tech Physiol
6212320 Senr Radiol Tech Thrp
6501470 Assoc Atty Insurance
6506400 Assoc Counsel
6515100 Health Dept Hrg Exmr 1
6674203 Stands Complc Anlst 3
6894500 Soc Srv Dis Anlst 5
7353000 Laboratory Mechn Asst
7441700 Deckhand Supervisor
7711200 Bookbinder
7746200 Sign Shop Worker
7862220 Field Rep Code Cmpl 2
7863100 Field Rep Fire 1
8108300 Senr Med Soc Worker
8111400 Senr Public Hlth S WC
8144202 Comty Plcmnt Spec 2
8546200 Interntnl Trade Spc 2
8753100 Campus Pub Sfty Ofc 1
8960000 Highway Sfty Pgm Anl
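The deletion rule behind this appendix, dropping a title from the analysis when too few of its distributed questionnaires were returned, can be sketched in a few lines. This is a minimal illustrative sketch: the 50-percent cutoff, the function names, and the sample records below are assumptions for demonstration, not figures or code from the study itself.

```python
# Hedged sketch of a response-rate adequacy filter.
# The cutoff and the sample records are hypothetical.

def response_rate(returned, distributed):
    """Percent of distributed questionnaires that were returned."""
    return 100.0 * returned / distributed if distributed else 0.0

def flag_inadequate(titles, cutoff=50.0):
    """Return codes of titles whose response rate falls below the cutoff."""
    return [code for code, sent, back in titles
            if response_rate(back, sent) < cutoff]

# Hypothetical records: (title code, questionnaires sent, returned)
sample = [
    ("5503200", 10, 2),   # 20.0% -- would be deleted
    ("8700100", 20, 16),  # 80.0% -- would be retained
]
print(flag_inadequate(sample))  # → ['5503200']
```

Under such a rule, a deleted title would then either be estimated from related titles ("to-be-estimated") or excluded entirely ("non-estimated"), as the two subheadings above distinguish.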