Abstract: Resume-ranking software is a class of recruitment tools that recruiters use to sift through applications and surface the top resumes, helping employers find the “best” candidate for a position. Such software can reflect biases rooted in society, including gender and racial bias, when it relies on naive algorithms or is trained on biased data. For example, the ranking algorithm might weight traditionally gender-coded words in resumes, inadvertently introducing implicit gender bias. In our research, we explore how resume rankings on popular hiring platforms such as Indeed.com and eightfold.ai can be influenced by implicit bias, using two methods: first, analyzing the gender tone of the most common words in the highest- and lowest-ranked resumes; and second, examining the relationship between a resume's rank and its inferred gender, determined by the percentage of masculine-coded words it contains. Our results demonstrate that, to avoid gender bias, algorithm designers must take specific steps, such as ensuring that all weighted factors reflect an impartial assessment of every applicant and avoiding clearly biased training data. As automated recruiting software becomes increasingly prevalent among employers, our results offer practical guardrails against algorithmic untrustworthiness and point toward how society can eliminate implicit bias in automated resume-screening software.
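The second method mentioned above, inferring a resume's gender tone from the fraction of masculine-coded words, can be sketched roughly as follows. This is a minimal illustration under stated assumptions: the word lists here are tiny, hypothetical stand-ins (real studies use published gendered-wording lexicons), and the paper's actual lexicon and scoring rule are not reproduced here.

```python
# Minimal sketch: infer a resume's "gender tone" as the fraction of its
# gendered words that come from a masculine-coded list.
# NOTE: these word lists are hypothetical examples, not the study's lexicon.
MASCULINE = {"lead", "competitive", "dominant", "analytical", "decisive"}
FEMININE = {"collaborative", "supportive", "nurture", "interpersonal", "loyal"}

def masculine_fraction(text: str) -> float:
    """Return masculine-coded words / all gendered words (0.5 if none found)."""
    words = [w.strip(".,;:()").lower() for w in text.split()]
    m = sum(w in MASCULINE for w in words)
    f = sum(w in FEMININE for w in words)
    return m / (m + f) if (m + f) else 0.5  # 0.5 = neutral when no gendered words

resume = "Lead a competitive, analytical team with supportive mentorship."
print(round(masculine_fraction(resume), 2))  # 3 masculine, 1 feminine -> 0.75
```

A score like this could then be compared against each resume's rank (e.g., via a rank correlation) to test whether higher-ranked resumes skew toward masculine-coded language.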