
THE IMPACT OF VIDEOCONFERENCE TECHNOLOGY, INTERVIEW STRUCTURE, AND INTERVIEWER GENDER ON INTERVIEWER EVALUATIONS IN THE EMPLOYMENT INTERVIEW: A FIELD EXPERIMENT


Despite the growing use of communication technologies, such as videoconferencing, in recruiting and selection, there is little research examining whether these technologies influence interviewers' perceptions of candidates. The present field experiment analysed evaluations of real job applicants who were randomly assigned either to be interviewed face-to-face (FTF) (N = 48) or using a desktop videoconference system (N = 44). The results show a bias in favour of the videoconference applicants relative to FTF applicants, F(1,1) = 7.5, p = .01. A significant interaction of interview structure and interviewer gender was also found, F(1,1) = .70, p < .05, with female interviewers using an unstructured interview rating applicants significantly higher than males or females using a structured interview. Interview structure did not significantly moderate the influence of interview medium on interviewers' evaluations of applicants. These findings highlight the need to be aware of potential biases resulting from the use of communication technologies in the hiring process.


Videoconference technologies include a variety of telecommunication systems that transmit voice, picture, and often data over telephone lines and/or Internet connections. Typical systems vary considerably in cost and complexity ranging from inexpensive desktop systems to fully integrated classrooms. The use of videoconference technology for business and education is growing rapidly in developed countries. For example, a survey of 100 telecommunications professionals in Canada, Mexico, and the United States found that 8% of companies were currently using either videoconference technology, undergoing trials, or planning to use it in the future (Coady et al., 16). A marketing report by Frost and Sullivan (000) stated that the adoption of videoconference technology in European business practices is projected to grow rapidly from revenues of US$451 million in 1 to US$ billion by 006. This report suggests that increased standardization in videoconference technologies and reduced prices for videoconference systems are largely responsible for the strong growth in the use of this technology. Furthermore, large telecommunications companies in North America and Europe, such as British Telecom, rent videoconference equipment and facilities to organizations that do not have their own systems.


In addition to the strong overall growth in the use of videoconference technology in businesses, there are a number of factors driving the increasing popularity of videoconference technology in recruiting and selection specifically. An increase in the globalization of organizations and tighter labour markets require organizations to evaluate an increasing number of applicants in diverse geographical regions. Videoconference technology has provided a means to dramatically reduce the costs associated with interviewing distant applicants while simultaneously expanding applicant pools and satisfying employers' desire to see the candidates they are interviewing (Chapman, 1).


Although there has been limited investigation of the impact of using videoconference technology on small group processes (Zornoza, Prieto, Marti, & Peiro, 15), medical procedures (e.g., Troster, Paolo, Glatt, & Hubble, 15), and education situations (J. Webster & Hackley, 17), there is little published research investigating its effect on interviewers' evaluations in employment interviews. Those studies that have examined the use of videoconference technology for employment interviews have concentrated on interviewer and applicant attitudes toward the technology (Kroeck & Magnusen, 17; Skinkle & MacLeod, 15; J. Webster, 17), rather than on whether the medium affects interviewers' judgments. Given the nearly universal application of the employment interview in employee selection and the likelihood that the use of videoconference technology for interviews will increase dramatically, an empirical examination of the impact of this technology on the employment interview is warranted. The primary aim of this study, then, is to determine whether using videoconference technology alters the impressions that interviewers have of applicants relative to applicants whom they interview face-to-face (FTF). We also examine whether factors such as interviewer gender or interview structure affect interviewer ratings and whether these factors moderate the influences of videoconference media. We first review the relevant literature covering communication media, interview structure, and interviewer gender, and use this to generate our initial hypotheses. We then describe a field experiment we conducted, and detail the results obtained.


Effects of interview medium


A variety of studies has examined the effects that different communication technologies have on the structure of social interaction. For example, several researchers have investigated the impact of communication technologies on the surface structure of conversations (Cohen, 18; O'Conaill, Whittaker, & Wilbur, 1; Sellen, 15). These studies have largely concluded that removing, or degrading (in the case of videoconferencing), visual cues results in fewer interruptions, longer turns, and fewer turns taken by participants in a videoconference-based conversation compared with an FTF conversation (O'Conaill et al., 1; Sellen, 15). What remains unknown is whether these changes in the surface structure of conversations can influence interpersonal perceptions.


The use of videoconference technology in the employment interview also restricts the interviewer's ability to observe nonverbal behaviour (Skinkle & MacLeod, 15). For example, applicants are typically displayed from the mid-chest up, which eliminates the possibility of observing some nonverbal behaviours such as posture or trembling hands. Although above-the-waist nonverbals are less affected by this medium, an important factor in interview research, eye contact, is difficult to determine due both to insufficient image resolution and to camera angle. A large number of laboratory studies has found nonverbal behaviours to be important determinants of interviewer impressions (e.g., Imada & Hakel, 177; Rasmussen, 184). However, we know little about how reducing the clarity of nonverbal communication will affect impression formation.


A review of the research literature revealed conflicting evidence for the likely effect of using videoconference technology on interviewer evaluations. It has been suggested that raters using videoconference media provide more negative evaluations of others (Storck & Sproull, 15). However, Storck and Sproull's (15) findings were based on judgments of group performance rather than individual performance, and involved an educational setting rather than an employment setting. Furthermore, their methodology did not address the potential confound of in-group bias. For example, the FTF ratings were provided by groups located at the same physical site, while the videoconference ratings were for groups from other locations (Storck & Sproull, 15). Their findings may therefore be interpreted as being influenced by the positive feelings raters had for groups located at the same site (an in-group) relative to other locations (Messick & Mackie, 18; Newcomb, 161, 181), rather than by the influence of the medium.


In contrast, research by Short, Williams, and Christie (176) suggested that interpersonal judgments can be inflated in a positive direction when these judgments are made after the parties meet through a degraded telecommunication medium. Short et al. (176) indicated that conversants engaged in a confrontational situation describe each other as being more friendly when using an intervening communication technology (in this case, closed-circuit television). Communication technologies may provide a social barrier that mitigates the anxiety generated by some social interactions (Short et al., 176). We believe that the intense and evaluative nature of the selection interview has the potential to generate anxiety between the conversants and, consequently, an intervening medium may help reduce this anxiety. This could lead interviewers to evaluate candidates more favourably in the videoconference-based interview, due either to more positive affect toward the applicant in general (Dipboye, 1) or to actual improved performance by an applicant who feels more relaxed (Webster, 17).


Another potential mechanism by which interviewer judgments could be affected by the interview medium involves attribution theory (Kelley, 17). Although observers in many circumstances are more likely to attribute perceived deficiencies to dispositional rather than situational influences (for a review of the fundamental attribution error research, see Jones & Nisbett, 17), it is possible that the interviewers will find the medium sufficiently salient to generate situational attributions for poorer applicant performance in the videoconference interviews (Taylor & Fiske, 175). Moreover, given the fact that interviewers are motivated (and paid) to make accurate social judgments, the probability that they will make erroneous dispositional attributions is decreased (D. M. Webster, 1). This may increase the likelihood of making situational attributions for weaker performance in videoconference interviews. Having an accuracy goal in social judgments has also been found to lead to more complex thinking about the information available for the judgment target (Tetlock & Kim, 187), such as including information about the context in which the interaction occurs. However, despite this more complex approach to social judgment, individuals who are motivated to be accurate in social judgments (e.g. interviewers) may still be susceptible to biases and heuristics in their perception processes (Kunda, 10; Tetlock & Boettger, 18). In fact, Kunda (1) suggested that accuracy goals, when combined with the need for quick decisions (such as time pressures on interviewers), can exacerbate perception biases (e.g., Freund, Kruglanski, & Shpitzajzen, 185; Kruglanski & Freund, 18).


A second rationale for why interviewers' judgments may be affected by the use of videoconference technology draws upon the concept of naive theories of bias correction. Judges, such as interviewers, are capable of adjusting their assessments of individuals according to their naive theories of how the context affects [their] judgements of the target (Wegener & Petty, 15, p. 8). Wegener and Petty (15) investigated the impact of naive theories on rater judgments. Over a series of studies, they revealed that individuals may form naive theories about how a situation or context affects their evaluations of others. Furthermore, they found that judges may correct their evaluation of the target based on these naive theories whether an actual bias exists or not. This can result in less accurate evaluations in cases where judges overcompensate for their perceived bias. In the present study, it is possible that interviewers will form naive theories about the relative disadvantage of videoconference applicants compared to FTF applicants. For example, J. Webster (17) suggested that some applicants may feel disadvantaged by videoconference interviews. Interviewers may be sympathetic to these feelings or feel that they themselves would be disadvantaged if they were to be interviewed by videoconference. According to Wegener and Petty (15), a perception of a contextual factor disadvantaging one choice over another, in this case a videoconference or an FTF context, may result in overcompensation in the form of inflated evaluations of videoconference candidates. This explanation is further supported by Neuberg's (18) findings, which demonstrated that the goal of having accurate impressions influences judges to evaluate candidates more favourably when they have negative expectancies.


Given the lack of studies that directly measure the influence of this medium on interviewer evaluations, and the conflicting evidence in the literature regarding the effects of communication media on social judgments in general, our prediction is necessarily somewhat exploratory. However, we find the existing evidence for a positive effect on interviewer evaluations (i.e., external attributions for poor performance, naive theories, and anxiety reduction) more persuasive. Accordingly, we submit:


Hypothesis 1: Interviewers will rate videoconference applicants higher than FTF applicants.


Structure differences in interviews


Structured interviews employ systematic procedures to generate questions, rate the suitability of answers, and provide consistency in the content, delivery and order of questions in the interview. Several meta-analytic studies pointing to the increased validity of structured versus unstructured employment interviews (e.g., McDaniel, Whetzel, Schmidt, & Maurer, 14; Wiesner & Cronshaw, 188) have generated a significant amount of research interest (see Campion, Palmer, & Campion, 17; Dipboye, 1).


Structured interviews are not only more reliable, but it is also possible that a more highly structured interview affords less opportunity for the interviewer to be influenced by applicant impression management tactics (Campion et al., 17; Dipboye & Gaugler, 1). For example, in the structured interview, applicants are less able to control the content of the discussion (Dipboye & Gaugler, 1), which is an important tool for impression management. One form of structured interview, the behavioural descriptive interview (Janz, 18), requires the applicant to recall recent past behaviours that support the applicant's claims. In order to limit impression management with this type of structured interview, practitioners are advised to incorporate a threat of reference check, whereby applicants are told that the information they provide in the interview will be verified by reference check. This procedure has the potential to limit impression management by discouraging the applicant from embellishing his or her credentials. Given that impression management techniques are practiced by candidates in order to improve their attractiveness to employers, interview structure practices designed to minimize the applicant's opportunity to engage in these tactics are expected to result in less favourable evaluations of the applicant. Furthermore, the systematic rating formats of structured interviews have also been prescribed by researchers to avoid halo errors that can result in inflated ratings of applicants (Campion et al., 17; Dipboye, 1). These factors combined suggest that, although it is still possible for interviewers to rate applicants favourably in a structured interview, the opportunity for applicants who are less capable, but skilled impression managers, to be rated favourably is diminished. Thus:


Hypothesis 2: Interviews conducted with higher levels of interview structure will result in lower evaluations of applicants than those conducted with lower levels of interview structure.


Interviews conducted with higher levels of interview structure are also believed to reduce the influence of extraneous information such as appearance or nonverbal behaviour, thereby improving the validity of judgments with regard to later job performance (Campion et al., 17). While it may be argued that the use of videoconference technology could also reduce the interviewer's reliance on extraneous nonverbal cues and appearance in making judgments, the interview medium itself has the potential to become extraneous information (Webster, 17). Furthermore, we believe that the cognitive processes mentioned earlier (i.e., attributions and naive theories) will still influence the interviewers' judgments, regardless of the media effects on nonverbal assessment. We predict that highly structured interviews could reduce or eliminate the cognitive bias corrections and external attributions generated by the use of the videoconference technology. Accordingly, it is predicted that the evaluations of interviewers who conduct more highly structured interviews will not be affected by the interview medium to the same extent as those of interviewers who conduct less structured interviews, where this information is likely to play a larger role in evaluations.


Hypothesis 3: Interview structure will moderate the influence of interview medium on interviewer evaluations of candidates, such that applicant evaluations provided by interviewers who conduct less structured interviews will be affected by the interview medium more than the evaluations provided by interviewers who conduct more structured interviews.


Interviewer gender differences in interviews


There is some evidence to suggest that male and female interviewers behave differently and evaluate applicants differently in employment interviews. For instance, female interviewers have been found to assess their applicants higher on their interview performance than males do (Elliott, 181; London & Poplawski, 176; Parsons & Liden, 184; Raza & Carpenter, 187). Gender differences in conversational styles have also been demonstrated in that females are less likely to interrupt or structure a conversation than males are (e.g., Eakins & Eakins, 178; Spencer & Drass, 18; Zimmerman & West, 175) although the results are mixed (see Kacmar & Hochwarter, 15). One important difference in these studies is that Kacmar and Hochwarter (15) trained their interviewers to use high levels of structure in their interviews. It is possible that the gender differences did not materialize in their study due to the higher level of structure used. This is consistent with the rationale for structuring interviews, which is to reduce the influence of individual biases and thereby increase inter-rater reliability in ratings given to applicants (Campion et al., 17). It has also been argued that structured interviews may be less prone to biases (including interviewer gender) that could have an impact on gender- or race-based employment equity initiatives (Campion et al., 17). Thus,


Hypothesis 4a: Evaluations by male interviewers will be lower than evaluations by female interviewers.


Hypothesis 4b: The main effect predicted in H4a will be moderated by the level of interview structure, whereby interviewer gender will have less influence on ratings for interviewers who conduct highly structured interviews.


Method


Data were collected from real interviews conducted for 4-month, full-time paid positions as part of a cooperative education programme, where university students alternate their studies with formal job experience related to their degree programmes. Interviewers come to campus from across North America and around the world to hire these applicants. Interviewers are actual representatives of the organizations involved and view these positions as an opportunity to recruit future full-time employees for their organizations. These interviewers typically interview 8-10 applicants for each position vacancy.


Interviewer evaluations from employment interviews were included in the present study. Whenever possible, interviewers provided evaluations of four of their applicants: two based on applicants randomly assigned to videoconference interviews and two based on applicants randomly assigned to FTF interviews. Of the 25 interviewers who participated in the study, six conducted fewer than four interviews due to scheduling conflicts or technical problems.


The design is a 2 x 2 x 3 mixed model, with Interview Medium (FTF vs. videoconference) tested as a within-subjects factor, whereas Interviewer Gender and amount of Interview Structure (high structure vs. semi-structured vs. low structure) comprised the between-subjects factors. This design provides a field-experiment test of the primary variable of interest (Interview Medium), while permitting a quasi-experimental test of the other variables of interest (Interview Structure and Interviewer Gender).


Participants


A sample of 25 interviewers, conducting employment interviews at a large Canadian university for cooperative education work terms, volunteered to participate in the study in exchange for a synopsis of the research findings. The 19 male and 6 female interviewers ranged in age from 6 to 58 years, with a mean age of 6.6 years. In total, organizations from a wide variety of industries were represented, including financial institutions, computer software companies, manufacturers, government organizations, multinationals, hospitals and educational facilities. Company size varied from 10 to 100 000 employees, with 800 employees representing the median company size. Of the 25 participants, 16 conducted single-interviewer interviews (14 males and 2 females) and 9 conducted interviews with two interviewers present (5 males and 4 females). For interviews conducted with two interviewers, data were collected prior to the interviewers discussing each applicant and from only one of the two interviewers.


The applicants were undergraduate students enrolled in cooperative education programmes who volunteered to participate in the study in exchange for entering their names into a draw for two prizes of $400 each. Of the 92 applicants, 55.4% were male (N = 51) and 44.6% were female (N = 41). A chi-square test demonstrated that applicant gender was evenly distributed across interview media, χ²(1, N = 92) = .46, n.s. Applicants had a mean age of 1.1 (SD = 1.65) years. Applicant interview experience ranged from 1 to 55 previous employment interviews, with a median of 10 previous interviews reported. Approximately half (55%) of the applicants reported having some form of training relating to how to conduct themselves in an employment interview.
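
To make the balance check concrete, the following is a minimal Python sketch of an analogous chi-square test of independence using scipy; the cell counts are illustrative placeholders that merely respect the reported margins (51 male and 41 female applicants; 48 FTF and 44 videoconference interviews), not the study's actual cross-tabulation.

import numpy as np
from scipy.stats import chi2_contingency

# Rows: male / female applicants; columns: FTF / videoconference.
# These counts are invented for illustration; only the margins match the paper.
observed = np.array([[27, 24],
                     [21, 20]])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi-square({dof}, N = {observed.sum()}) = {chi2:.2f}, p = {p:.3f}")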


Apparatus


The 44 videoconference interviews were conducted using an Intel videoconference demonstration system. The video display was presented on 15-inch SVGA colour monitors with Intel video cameras mounted on top of each monitor. The systems were located in two offices in separate buildings on campus and were connected through the university Local Area Network (LAN). Connection speed varied slightly depending on competing demands on the servers being used, resulting in a full-screen frame rate ranging from 1 to 14 frames per second. Applicants were shown from the mid-chest up, while interviewers were shown either from the mid-chest up (for single interviewers) or from the waist up and further away (for two interviewers). Camera angles were adjusted slightly to accommodate the height of the applicant or interviewer.


Procedure


Recruiting interviewers. The 25 interviewers who participated were recruited through telephone calls to randomly selected employers from the 1500 who planned to conduct interviews at the university over a 3-week period. Although exact statistics are not available, approximately 70% of the organizations contacted by telephone agreed to participate in exchange for a synopsis of the research results. The main limiting factor for obtaining more interviews was the availability of the videoconferencing systems. Scheduling was limited to four or five videoconference interviews per day over the 15-day period when interviewers were on campus. Interviewer recruiting stopped once the videoconference schedule was filled.


Recruiting applicants. After participating companies had identified the applicants they wished to interview (usually -4 days prior to the interview being conducted), electronic mail messages were sent to each of their applicants requesting their participation in the study, and an attempt to reach them by telephone was made concurrently. This method resulted in an 80% participation rate for applicants. Applicants who had interviews scheduled with two or more of the interviewers in the study were excluded after their first interview. To prevent distortion in the results due to differences between volunteers for the videoconference condition and applicants in the control condition, all applicants were informed prior to volunteering that they had a 50% chance of having their interview conducted using videoconference technology. For each of the 25 interviewers in the study, four of their applicants who agreed to participate were randomly selected for the study. Two of these four participants were then randomly assigned to the FTF condition and two to the videoconference condition.
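
As an illustration of the assignment procedure just described, the short Python sketch below samples four consenting applicants per interviewer and splits them evenly between conditions; the function name and the applicant identifiers are hypothetical.

import random

def assign_applicants(consenting_applicants, rng=random):
    """Randomly pick four applicants and split them 2/2 between conditions."""
    selected = rng.sample(consenting_applicants, 4)  # four applicants chosen at random
    rng.shuffle(selected)                            # random order determines condition
    return selected[:2], selected[2:]                # (FTF pair, videoconference pair)

ftf_pair, vc_pair = assign_applicants(["A", "B", "C", "D", "E", "F"])
print("FTF:", ftf_pair, "| Videoconference:", vc_pair)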


Measures


Constructs were assessed by a pre-interview questionnaire, a post-interview questionnaire, a post-study questionnaire and a post-study structured interview with the participants.


Pre-interview measures. Immediately preceding each of the interviewer's four interviews, the interviewer was asked to complete a 4-item questionnaire. The interviewer was asked to rate the applicant on a 7-point scale (where 1 = poor and 7 = excellent) on the following dimensions: (1) overall impression of the applicant based on written information; (2) appropriateness of the applicant's educational background for the position; (3) evaluation of the applicant's previous work experience; and (4) educational achievement. Items 1, 2, and 3 were retained to create a Pre-interview Impression Scale, with item 4 removed to improve internal reliability (Cronbach's alpha = .77).(n1)
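
For readers who want to see how an internal-consistency figure like the reported alpha of .77 is obtained, here is a small, self-contained Python helper implementing the standard Cronbach's alpha formula; the ratings matrix is made up for illustration and is not the study's data.

import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

example_ratings = np.array([[5, 6, 5],   # items 1-3 for four hypothetical interviews
                            [4, 4, 5],
                            [7, 6, 6],
                            [3, 4, 3]])
print(round(cronbach_alpha(example_ratings), 2))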


Post-interview measures. Sixteen items measuring dimensions considered important for applicant success were selected based on previous research (e.g. Einhorn, 181). Immediately following each interview, and prior to discussing the candidate with anyone or filling out a company-based rating form, the interviewer was asked to complete a questionnaire rating the applicant on 16 dimensions (see Appendix for a list of the items). All ratings were provided on a 7-point Likert-type scale ranging from 1 = poor to 7 = excellent. A principal components factor analysis with varimax rotation suggested a single-factor solution, with this factor explaining 67.46% of the variance in the items. Accordingly, a Post-interview Rating Scale was calculated from the average of the 16 post-interview items. Cronbach's alpha was calculated to be .6 for the Post-interview Rating.
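
The scale-construction step can be illustrated with the brief Python sketch below: simulated data stand in for the 16 post-interview items, a principal components analysis checks that one component dominates, and the items are then averaged into a single Post-interview Rating score. Note that scikit-learn's PCA applies no varimax rotation, which is immaterial for a one-component solution; all data here are simulated.

import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
general_impression = rng.normal(5.0, 1.0, size=(100, 1))
items = general_impression + rng.normal(0.0, 0.5, size=(100, 16))  # 16 correlated items

pca = PCA().fit(items)
print("Variance explained by the first component:",
      round(pca.explained_variance_ratio_[0], 2))

post_interview_rating = items.mean(axis=1)  # unit-weighted average of the 16 items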


Immediately following their final interview, interviewers were asked to fill out a post-study questionnaire gathering demographic information and measuring other constructs such as level of interview structure and satisfaction with the use of videoconference technology. Interview structure was measured with an item developed by M. Williams and P. M. Rowe (personal communication, 16). The item asked the interviewer to choose among five descriptions of the consistency of questions asked across all of their interviews (see Appendix for anchors for this item). Structure was analysed as a three-level variable by combining levels 1-2 and 4-5 as the high and low structure levels, respectively, and level 3 as semi-structured. This coding was chosen due to the absence of female interviewers in levels 1 and 5 of the original scale, and to balance the number of interviewers in each level of structure as much as possible. Although this procedure resulted in some information loss, this was judged to be appropriate to maintain a balance of interviewer gender in each level of structure, as there were no female interviewers in the tails of the distribution. Furthermore, this procedure is consistent with much of the interview literature, which treats interviews as either structured or unstructured, while adding a semi-structured level (Kohn & Dipboye, 18) to increase the predictive power of the construct.(n2) Interviews conducted in an FTF or videoconference medium were coded 1 and 2, respectively. Male and female interviewers were also coded 1 and 2, respectively.
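
A hypothetical recoding of these variables in pandas is shown below; the column names and example values are assumptions for illustration only, mirroring the coding scheme described above (the five-point structure item collapsed to three levels, and medium and gender coded 1 and 2).

import pandas as pd

df = pd.DataFrame({
    "structure_item": [1, 2, 3, 4, 5],
    "medium": ["FTF", "videoconference", "FTF", "videoconference", "FTF"],
    "interviewer_gender": ["male", "female", "female", "male", "female"],
})

structure_map = {1: "high", 2: "high", 3: "semi", 4: "low", 5: "low"}
df["structure"] = df["structure_item"].map(structure_map)
df["medium_code"] = df["medium"].map({"FTF": 1, "videoconference": 2})
df["gender_code"] = df["interviewer_gender"].map({"male": 1, "female": 2})
print(df)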


All 25 interviewers were given a semi-structured interview immediately after completing the study by either the first author or a research assistant. Interviewers were asked to describe their feelings about using videoconferencing technology in the interview and to describe how their interviews may have been changed by using the technology.


Results


Analyses


The design for this experiment was a 2 x 2 x 3 mixed model with Interview Medium (videoconference vs. FTF) tested as a within-subjects factor, whereas Interviewer Gender (male vs. female) and Interview Structure (unstructured vs. semi-structured vs. structured) represented the between-subjects factors. A common issue in interview research, particularly when conducted in the field, is the problem of having independent variables nested within interviewer (see Cable & Judge, 17). Data for the interviewer are necessarily duplicated for each candidate evaluated by that interviewer, which risks generating problems associated with correlated errors for regression analyses (Greene, 1). Accordingly, we chose a General Linear Model (GLM) approach and included the interviewer as a nested variable in the analyses. The GLM procedure permits the researcher to specify nested and mixed models to apply the appropriate error term for each test in the analysis (Howell, 1). For example, within-subjects analyses and interactions included variance from Medium x Interviewer (Gender x Structure) in the error term (where parentheses specify nesting). The between-subjects analyses were tested with the variance from Interviewer (Gender x Structure) in the error term. Each analysis was conducted with pre-interview impression used as a covariate. This covariate satisfies the assumptions required for a proper MANCOVA procedure, as the evaluation of written information preceded the treatment and therefore could not be affected by the treatment. Means, standard deviations and correlations among the variables are provided in Table 1. Note that the zero-order correlations in Table 1 may be subject to the non-independence of some of the data and should be interpreted cautiously. Non-significant correlations between pre-interview impression and each of the independent variables (see Table 1) provide evidence that the covariate was not influenced by treatment or anticipated treatment effects for videoconference interviews.
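
As a rough modern analogue of this nested analysis (not a reproduction of the GLM specification used in the paper), one could fit a mixed-effects model with a random intercept per interviewer and the pre-interview impression as a covariate, as in the statsmodels sketch below; the DataFrame and its column names are assumed for illustration.

import pandas as pd
import statsmodels.formula.api as smf

def fit_rating_model(df):
    """df: one row per interview, with columns post_rating, pre_impression,
    medium, structure, gender, and interviewer_id."""
    model = smf.mixedlm(
        "post_rating ~ pre_impression + medium * structure * gender",
        data=df,
        groups=df["interviewer_id"],  # interviews are nested within interviewer
    )
    return model.fit()

# result = fit_rating_model(interview_df); print(result.summary())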


To ensure that the number of interviewers conducting the interview did not influence the ratings, an ANOVA was conducted; it suggested that it was appropriate to collapse the sample across number of interviewers, F(1,1) = .18, n.s. To verify that random assignment of applicants produced equivalent groups, we also tested whether there were significant pre-interview differences in the attractiveness of applicants assigned to the two media. No pre-interview differences were observed, t(90) = .7, n.s.
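
The two preliminary checks can be mirrored with scipy as in the sketch below; the arrays are placeholder values, not the study's data, and serve only to show the form of the one-way ANOVA and the independent-samples t-test.

import numpy as np
from scipy.stats import f_oneway, ttest_ind

ratings_single = np.array([5.1, 5.4, 5.0, 5.6])  # interviews with one interviewer present
ratings_pair = np.array([5.2, 5.3, 5.5, 4.9])    # interviews with two interviewers present
F, p_anova = f_oneway(ratings_single, ratings_pair)

pre_ftf = np.array([4.8, 5.2, 5.0, 5.5])         # pre-interview impressions, FTF condition
pre_vc = np.array([5.1, 4.9, 5.3, 5.2])          # pre-interview impressions, videoconference
t, p_ttest = ttest_ind(pre_ftf, pre_vc)

print(f"ANOVA: F = {F:.2f}, p = {p_anova:.2f}")
print(f"t-test: t = {t:.2f}, p = {p_ttest:.2f}")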


Results of the GLM analysis, testing medium, structure, and interviewer gender effects, are detailed in Table 2. Estimated marginal means for interview medium, structure, and interviewer gender, generated by the GLM procedure, are provided in Table 3.


Interview medium


The results in Table 2 support hypothesis 1, demonstrating that the interview medium played a role in determining how interviewers evaluated their applicants. A main effect of interview medium was found, F(1,1) = 7.5, MSE = .40, p = .01, η² = .15. Interviewers rated applicants higher in the videoconference medium (M = 5.57, SE = .10) than in the FTF medium (M = 5.18, SE = .0).


Interview structure


The results presented in Table 3 provide evidence to support hypothesis 2. The data reveal that interviewers using high levels of structure in the interview evaluated applicants less favourably than those who used semi-structured or unstructured interviews (M = 5.16, SE = .10; M = 5.68, SE = .15; and M = 5.48, SE = .1, respectively). Interestingly, applicants evaluated in a semi-structured interview were rated slightly higher than those evaluated in an unstructured interview. The reason for this unexpected result is not clear. The main effect of interview structure is best explained within the context of the interaction between interview structure and interviewer gender reported below. Table 2 reveals that no support was found for hypothesis 3 concerning the moderating effect of interview structure on media influences on the dependent variables.


Interviewer gender


In support of hypothesis 4a, the present study found evidence to suggest that there are significant differences in the way that male and female interviewers evaluate their applicants. The results detailed in Table 3 show more favourable ratings of applicants by female interviewers relative to their male counterparts. A significant interaction between interviewer gender and interview structure was also found in support of hypothesis 4b, which qualifies this main effect. Specifically, the interaction shows that male interviewers' ratings were unaffected by interview structure, while female interviewers' ratings were substantially higher in unstructured and semi-structured interviews than in highly structured interviews.


A post hoc MANCOVA analysis was also conducted to determine whether a gender-based similar-to-me effect (e.g., Maurer, Howe, & Lee, 1) influenced post-interview evaluations of the candidate. Pre-interview evaluation of the candidates' written credentials was entered into the model as a covariate, and an interaction between applicant gender and interviewer gender was tested. The results indicate that an applicant gender x interviewer gender interaction did not occur, F(1,1) = .0, n.s.


Qualitative data


A summary of qualitative feedback from interviews is provided to contribute to the understanding of the processes that possibly underlie the empirical results.


Factors that helped interviewers rate videoconference applicants relative to face-to-face applicants. The majority of interviewers (68%) stated that there was nothing about the videoconference technology that would assist them in assessing the applicants relative to a face-to-face interview. Several interviewers (16%) found that the decreased social presence enabled them to unobtrusively take more notes, check their watches, or refer to resumes without disrupting the flow of the interview. A decrease in Social Presence (see Fulk, 1; Rice, 1) appeared to permit some interviewers to be more objective in that they did not feel as compelled to be positive with the applicant. Several interviewers noted that the videoconference medium forced them to concentrate more on what the applicant was saying and that this assisted in rating some dimensions.


Factors that hindered interviewer ratings of videoconference applicants relative to face-to-face applicants. Interviewers reported a number of properties inherent in the videoconference medium that they believed hindered their assessment of the applicants. The most frequent problem noted was the difficulty in reading nonverbal behaviours such as facial expression, eye contact and fidgeting (40%), followed by audio problems (8%), video lag (4%), image clarity (8%), and lack of responsiveness (4%). However, 8% of interviewers reported that there were no factors that they felt hindered their assessment of applicants.


Some of the interviewers also reported that errors were difficult to attribute. For example, one interviewer stated, "It was hard to tell whether a pause was due to the technology, or the applicant being stumped."


Dimensions that interviewers reported were easier to assess in the videoconference medium. The majority of interviewers (60%) reported that none of the 16 applicant dimensions was easier to assess in the videoconference medium than in the face-to-face medium, while % reported at least one dimension was easier to assess and 8% failed to respond to this question. The three dimensions mentioned by interviewers as being easier to assess in the videoconference medium were Communication Skills (0%), Friendliness (1%), and Support for Arguments (4%). Three reasons emerged across all three of these dimensions which interviewers reported as the reasons why they felt ratings were improved. The first reflects the interviewers' belief that the restriction of visual cues forced them to concentrate more on the applicant's words. A second trend was a perception that if the applicant could create a good impression in this medium, then the applicant must have been even friendlier, a better communicator, etc., face-to-face. Finally, a few interviewers mentioned that the novelty and awkwardness of the videoconference medium reduced the traditional power imbalance between interviewer and applicant. One interviewer noted that the use of a cutting-edge technology in the interview enabled better evaluation of a candidate's comfort level with new technology relative to face-to-face candidates.


Applicant dimensions more difficult to assess in a videoconference versus face-to-face medium. Although 16% of the interviewers reported that there were no dimensions which were more difficult to assess in the videoconference medium relative to the face-to-face medium, most reported at least one dimension as being more difficult to assess. Many interviewers believed that applicant Appearance was difficult to assess (56%). Applicant Confidence (6%) and Assertiveness (16%) were also frequently mentioned, due to two trends: (1) several interviewers complained that these dimensions were difficult to assess because of difficulty with viewing nonverbal behaviour, and (2) interviewers had difficulty establishing the origin of an applicant's unease: "Hard to tell if [the applicant] is nervous in general or uncomfortable with the technology."


Medium preferred for conducting employment interviews. A large majority of interviewers (76%) stated that they preferred conducting their interviews FTF rather than using a videoconference medium. Only 4% preferred the videoconference medium and 0% stated that they had no preference. Many lamented losing the personal touch of meeting the candidate FTF. However, despite the fact that 76% of interviewers indicated a preference for conducting their interviews FTF, 88% reported that they would be willing to use videoconference technology to conduct employment interviews in the future because of the convenience associated with it.


Discussion


The primary purpose of the present experiment was to determine whether interviewer ratings of applicants changed as a result of using videoconference technology rather than a traditional FTF format to conduct the interview. The results of this study show that interviewers' ratings of applicants were affected by the interview medium, which accounted for 15% of the variance. Interviewer evaluations were also influenced by an interaction of interview structure and interviewer gender.


Interviewers were found to rate applicants in videoconference-based interviews higher than applicants interviewed in a traditional FTF interview. Given the somewhat exploratory nature of this study, we feel justified in speculating a little on the origins of the observed effects. For example, we proposed three potential mechanisms by which the interviewer could inflate their ratings of applicants interviewed in the videoconference medium. Unfortunately, the fact that we used actual interviewers in this study prevented us from capturing the exact mechanism which may have generated our results. The observed media differences in ratings may relate to the work of Short et al. (176), which suggested that the communication medium may have reduced the anxiety between the interviewer and applicant. This could have influenced interviewer ratings directly by having an impact on the global impression of the candidate, or indirectly if this reduction in anxiety translated into actual improvement in applicant performance. We believe the latter explanation is less likely, based on applicant self-reports of performance following the interview (Skinkle & MacLeod, 15; J. Webster, 17); however, future research should empirically examine this possibility. Other explanations which deserve empirical testing include the possibility that either external attributions and/or naive theories of bias correction may have played a role in inflating interviewer evaluations of videoconference applicants. For example, interviewers might feel that the applicant deserves the benefit of the doubt based on their assumption that the applicant was inexperienced with videoconference interviews. This effect could be more pronounced in interviewers who have little experience with videoconference media themselves and consequently sympathize with the applicant.


It is possible that some of the effects created by the videoconference system may be reduced or eliminated by technical solutions to issues such as evaluating nonverbal behaviour and video lag. However, the frame rates and transmission speeds achieved in the current study resulted in very little lag in communications, which suggests that the observed bias may be resistant to advances in technology and is instead more closely linked to the weaker social presence afforded by the videoconference medium. It is also not clear whether this bias will disappear with increasing interviewer experience with this medium.


Some support was found for interviewer gender differences, which replicates previous findings. Female interviewers rated their applicants more favourably than their male counterparts did. As predicted, however, higher levels of interview structure eliminated the disparity between male and female interviewers' ratings. This change was only found in highly structured interviews; semi-structured interviews yielded results similar to unstructured interviews. Further research with a larger sample is required to replicate the gender-based biases observed among the six female interviewers in this study. Further investigation is also needed to confirm whether interview structure can reduce gender-based rating biases. The preliminary results from the present study demonstrate the potential for interview structure to reduce or eliminate some biases in selection interviews. However, the failure of interview structure to significantly affect media biases suggests that it may not be capable of eliminating all potential biases in a selection setting. More experimental work with random assignment to levels of interview structure is needed to identify which biases are controlled by interview structure and which are not. Furthermore, we need to determine why interview structure may be effective for some biases and not for others.


Several strengths and limitations of the methodology employed in this study are evident. There has been considerable criticism of the selection interview literature for its overuse of simulated interviews and students posing as interviewers and applicants (see Buckley & Weitzel, 18, for a detailed discussion). In this study, we employed a field-experiment approach for our main hypothesis, which is nearly unprecedented in interview research, to address these criticisms. In addition, the participation of organizations from a wide variety of industries bodes well for the generalizability of the results to other settings. Data were collected immediately prior to the interviews and immediately after the interviews to determine the impact that the interview information had on evaluations relative to the pre-interview information. Finally, all of the participants were interviewed to provide qualitative feedback in support of the empirical findings and to provide a richer understanding of the phenomenon being studied.


Despite the advantages of this study, some limitations remain. Problems associated with most field research were encountered in the present study. For example, in order to maximize statistical power with a smaller sample of interviewers, we employed a mixed design with interviewers conducting interviews in both videoconference and FTF media. This resulted in some of the independent variables (structure and interviewer gender) being nested within interviewer. Additionally, all items and scales had to be created so that they could be completed quickly, in order to minimize disruption to the interview process and maximize the willingness of busy interviewers to participate. Some detail had to be sacrificed to achieve this goal. For example, interview structure has been conceptualized as containing up to 15 facets (Campion et al., 17); however, we concentrated only on the consistency of questioning and the limitation of probing questions. The exact nature of what constitutes interview structure is unclear (Hakel, 18), although consistency always represents the major component. Researchers have advised practitioners employing structured interviews to ask questions consistently (Janz, 18; Latham, Saari, Pursell, & Campion, 180). Future research should examine the construct of interview structure more closely, and a comprehensive measure of interview structure should be developed.


Another limitation to the current research stems from the necessity to employ quasi-experimental methods with interviewer gender and interview structure, as we were obviously unable to manipulate these in a field setting. Future research should attempt to replicate these preliminary findings in a controlled laboratory setting. In addition, future research should investigate whether specific job types interact with the interview medium. Furthermore, although our gender-based leniency effect was consistent with earlier research, the small number of female interviewers examined in this sample should lead the reader to interpret the gender-related findings cautiously. Despite these areas for improvement, we believe the strengths associated with using real applicants and interviewers for this study outweigh the disadvantages.


The findings in this study have both theoretical and practical implications. Theoretically, it is evident that the medium of communication and the amount of structure used in the interview are important variables to consider when studying interviewer decision processes (e.g., Dipboye, 1; Eder, 18; Webster, 18). The practical conclusions we can draw regarding the utility of using structured interviews to reduce interviewer biases are less clear. It appears that highly structured interviews (based on consistency) may help to decrease potential gender-based leniency effects. Semi-structured interviews, however, did not reduce the observed gender biases in interviewer evaluations. Perhaps more importantly, even the highest level of interview structure did not moderate the influence of interview medium. The structured interview, therefore, may reduce biases based on some variables or contexts but not others. Further research on interview structure is required to determine the processes underlying the reduction of bias in some situations and to explain why these processes can be less effective for some variables. More attention should also be paid to the potential for other facets of interview structure to reduce rating biases (e.g., job-related questions, rating answers against ideal responses, using situational and behavioural questions). As a cautionary note, we have no information in this study to conclude that the predictive validity of the lower ratings provided by male interviewers, or by interviewers conducting structured interviews, is any higher than that of the more lenient ratings.


On a practical level, it is evident that mixing interview media within a given employment competition could result in inflated ratings for the applicants who are interviewed via a videoconference system relative to those interviewed FTF. Also on a practical level, the potential to reduce differences between interviewers' evaluations of candidates highlights the need for structured interviews.


In summary, this study demonstrated that, in a field setting, interviewer evaluations of candidates can be affected by the communication medium used to conduct the interview, the amount of structure employed in the interview, and potentially the gender of the interviewer. Furthermore, although a high level of interview structure was found to reduce disparities between the ratings provided by male and female interviewers, media influences were not significantly reduced by interview structure.


Acknowledgements


This research was supported by a grant provided by Procter & Gamble Worldwide Recruiting, Training and Development and by a loan of equipment from ViewNet Inc. The authors wish to thank John Callender at Procter & Gamble for his support and insight. We are also indebted to Bruce Lumsden and the staff of the Cooperative Education Department at the University of Waterloo for facilitating access to the sample. The organizations and applicants are also thanked for their generous participation. We also thank Jane Webster, Ramona Bobocel, Jo Sylvester, and three anonymous reviewers for their comments on earlier versions of this paper; and Judith Carscadden, Alice Rushing, and David Zweig for their assistance with data collection. An earlier version of this paper was presented as part of a symposium at the Canadian Psychological Association 58th annual convention in Toronto, Canada, June 17. This paper is based in part on Derek Chapman's Master's thesis at the University of Waterloo.


(n1) Interviewers typically have access to information about the applicant prior to the interview. This information may include details about the applicant's previous work experience, transcripts of grades, academic background, interests, and test results. Pre-interview information has been found to be a significant predictor of the ultimate impression of the applicant subsequent to the interview (Dipboye, Stramler, & Fontenelle, 184; Dougherty, Turban, & Callender, 14; Macan & Dipboye, 10; Rowe, 18; Tucker & Rowe, 177). Accordingly, pre-interview impressions served as a covariate in our analyses to isolate the variance in interviewer ratings due to the interview alone. The results in Tables 1 and 2 support previous findings (e.g. Dipboye et al., 184; Rowe, 18) in demonstrating the importance of pre-interview impressions (based on written credentials) on interviewer evaluations of the applicant.


(n2) A recent review by Campion et al. (17) suggests that interview structure ought to be measured as a multidimensional and continuous variable; however, no multidimensional and continuous measure of structure currently exists. Given this restriction, the limitations of our field sample, and the interest in providing more interpretable results, we chose to use Kohn and Dipboye's (18) three-level description of interview structure.

