Mapping Students' Decision Making Process across University Ranking Parameters: A Quantitative Study

The current paper maps the dimensions of prospective students' expectations against the existing ranking parameters for higher education established by the National Institutional Ranking Framework (NIRF) in India. A sample of 618 undergraduate students aged 17-27 years completed a data sheet capturing their expectations from a higher education institute on 23 parameters. These parameters were then compartmentalized under the five parameters of the NIRF. One-way ANOVA indicated a significant difference among these parameters. Further analysis showed that the location of the university, placement and career assistance, faculty profiles, research and internship opportunities, international exposure, course availability, and scholarships are of utmost importance to students. Qualitative analysis indicated that semester exchange and international exposure programs, along with research opportunities, are important to students. Students showed an almost equivalent preference for Europe, the United States, Canada, and Australia as prospective destinations for their higher studies.


Background
University rankings are a relatively new phenomenon that has become popular for assessing the quality of higher education institutes. The US News & World Report Best College Rankings, which started in the 1980s, are now a deep-rooted part of the college/university ranking landscape. Ranking systems now exist in over 40 countries, with global rankings rapidly growing around the world (Hazelkorn, 2019). Apart from this, the Quacquarelli Symonds (QS) ranking system, the Times Higher Education (THE) ranking system, the U-Multirank university rankings, and the Academic Ranking of World Universities (ARWU) by Shanghai Jiao Tong University (SJTU) have also gained considerable popularity. Students from all over the world consult these rankings when choosing an institute for higher education, especially when deciding upon a university outside of their home country. While the majority of existing ranking systems are developed for commercial purposes, a few have been developed by government organizations.
There are nearly 200 different rankings in existence around the world that compare institutions of higher education. Different ranking systems use distinct criteria and assign different weights (commonly expressed as percentages) to the indicators of those criteria. For instance, the QS ranking system (2020) includes six metrics, among them international faculty ratio and international student ratio, while the ARWU (2019) omits these two criteria and bases its ranking on four metrics that primarily focus on research. The US News & World Report Best College Rankings, on the other hand, uses 10 different ranking categories according to a university's mission and, within each category, 15 indicators of academic quality ranging from graduation and retention rates to expert opinion (Morse, Brooks, & Mason, 2019). These indicators include niche rankings that incorporate standardized test scores and university acceptance rates, grouping institutions into national and regional rankings, to name a few. The Times Higher Education (THE) World University Rankings (2016) uses 13 different performance indicators consolidated into five broad criteria: teaching, research, citations, international outlook, and industry income.
While the influence and popularity of these rankings has grown over the years, questions arise about their accuracy and about whether people actually understand what these rankings measure. It is unclear how the outcomes of these rankings are viewed by internal as well as external stakeholders of higher education institutes, and to what extent they influence decision making. The current paper aims to shed some light on the existing ranking criteria used by the National Institutional Ranking Framework (NIRF) and to compare them with the parameters that undergraduates in India consider essential when evaluating higher education institutes in India and abroad.

The Appeal of Rankings
Like all simple rankings, university rankings owe their popularity to their simplicity, which is also their biggest source of criticism. Rankings operate at various levels: national, world-regional, and global. Each of these can rank institutions at an organizational level, by discipline, field of science, or profession. Distinctions in the field of science include the natural sciences, engineering, computer science, the social sciences, and so on, while distinctions in profession can be business, law, medicine, graduate schools, etc. (Hazelkorn, 2008). Rankings normally work by assigning different weights/values to numerous parameters and indicators to compare universities. While this appears to be scientifically objective, no such objectivity truly exists. The scores across the different values are aggregated, and normally a lower number indicates a better rank (Hazelkorn, 2019).
In 2015, the Ministry of Human Resource Development, Government of India, introduced the National Institutional Ranking Framework (NIRF) to indicate the quality of Indian higher education institutes. This step was taken to improve the quality of higher education in India so that it becomes comparable at an international level, as the government realized that not a single Indian higher education institute was included in the top 200 of international higher education rankings (Mukherjee, 2016). The NIRF uses five criteria to rank domestic universities: teaching, learning and resources; research and professional practices; graduation outcomes; outreach and inclusivity; and peer perception. The analyses in the current paper are also based on these criteria, along with the other factors used in the survey.
However, what one person considers a good score may not seem so to another, as views differ. Different choices are made according to the priorities and judgments of the people producing the ranking. It is therefore difficult to identify direct indicators that are both measurable and comparable (Hazelkorn, 2019). Because of these variations, and because views on accuracy, quality, and productivity differ with individual preferences, there is no international definition of quality. The annual comparisons are also criticized because experts believe that no university changes significantly from year to year.

Implicit Parameters
Rankings measure different characteristics of higher education institutes, each aspect weighted by the value assigned to its indicator. Global rankings rely on international data, which is more error-prone; national data, on the other hand, is often inconsistent (Hazelkorn, 2019). Most global rankings tend to measure research in disciplines like medicine, which has a very comprehensive international database, and this tends to bias the data (Bastedo & Bowman, 2011; Hazelkorn, 2019). While some rankings also measure university reputation, these results are very subjective and self-serving, as a survey respondent will know only a few universities from the list, and the rest could be hearsay or assumptions about reputations from third-party sources. Greenwald and Banaji (1995) introduced the concept of implicit social cognition: the automatic and unconscious processes involved in perception, thought, and memory that influence individuals' behavior. Although it is a less explored area, implicit cognition has an unspoken impact on parents and their children while they determine the parameters essential to assessing the ranking of a university. The most common of these implicit parameters are teaching quality and learning, and graduate outcomes such as employability and the societal impact of a university. Unfortunately, other implicit parameters, such as the cultural backdrop or historical origin of a student's education system and its influence, the economic background of a student, and the commercial interests of universities, are never taken into consideration while calculating university rankings.
The ranking parameters aim to measure the educational quality of an institute through its teaching and learning. However, quality is a vast area and can be measured in many different ways depending on the system's requirements and interpretation. For instance, some might value an institute's faculty-student ratio more than the diversity of its student body (Hazelkorn, 2019). That said, there is no indication that these individual indicators provide any real information regarding an institute's quality. Faculty-student ratio does not necessarily reflect the quality of teaching at an institute but rather its resources.
Research performance, which includes research activities, financial resources, student selectivity, faculty numbers, and university reputation, has been a key element that students and families look at when judging a university. Astin (1996) found that research performance had a negative correlation with students' learning. Astin indicated that an institute with a strong research orientation, that is, the ability to publish many articles and books, dedicated time allotted to research, and high research engagement, correlates negatively with teaching. Teaching here includes time dedicated to student development, faculty engagement in teaching and advising, and students' active learning.
Graduate outcomes are the default end result for any student. Outcomes such as employability are given a lot of significance by students and parents, and this naturally affects an institute's ranking (Linney, 2019). The problem arises when employment data focuses only on the initial few months post-graduation. Graduate salary is another aspect given a lot of importance, but it can be misleading as it depends on many factors, such as locality. It might be assumed that the pedagogy of a university is of high quality if a student ends up with a good job opportunity; however, a ranking cannot be based on a single parameter of this sort. For instance, from an international student's perspective, an unspoken yet important aspect of ranking higher education institutes is the host country's political model, in terms of how welcoming it is for international students to become future job seekers within that country. Students often weigh return on investment, and the uncertainty of being unable to find a job might make one focus on a country that is more welcoming and where securing employment is easier. Unfortunately, this preference is never taken into consideration when rankings of higher education institutes are calculated.
Higher education revolves around the teaching and learning of students, but a major challenge is the lack of understanding of how academic exposure affects and changes how and what students learn, without taking into account their prior experience. There is an increased emphasis around the world on learning outcomes, graduate attributes, life-sustaining skills, and whether higher education institutes play any role in developing them (Hazelkorn, 2019). The focus currently is on the knowledge gained rather than on the reputation of these institutions. In the end, it would be a mistake to separate teaching from its outcomes.
People have started showing interest in higher education's impact and outcomes, and their relation to the 17 United Nations Sustainable Development Goals (SDGs) (2020) globally. As a result, university ranking scores have begun to include and measure the societal engagement of universities. In the past, Quacquarelli Symonds (QS) and Times Higher Education (THE) measured societal engagement through research collaboration or third-party and industry-earned income. THE launched rankings and measures of research, while QS put its efforts into investment linked to societal impact. U-Multirank, on the other hand, uses additional indicators such as student engagement via internships and knowledge transfer via patents and publications.
Talent development is another key element to judge an institute's quality. Astin (1985) claims that one of the main purposes of an institute is to provide their students with opportunities to develop their talents to reach their full potential. This would be achieved by the universities' effort to develop the students' values, habits, attitudes, skills, interests, intellectual capabilities, and mental health. Astin (1985) insisted that the universities which provide the greatest developmental opportunities to their students should have the highest academic quality.

Need for Research
Recent years have witnessed a significant body of literature criticizing ranking systems and targeting a fundamental gap in the ranking of higher education institutes (Bougnol & Dulá, 2015; Charon & Wauters, 2008). The main flaw of these ranking systems is that different universities have different strengths and provide unique opportunities to their students; a single monotonic scale cannot compare and measure their values (Bornmann, Stefaner, Anegon, & Murtz, 2014; Abramo, Cicero, & D'Angelo, 2013). Billaut, Bouyssou, and Vincke (2009) commented on the individual ranking tool, stating that the criteria used in the Shanghai rankings are not relevant and that the lack of attention to fundamental structuring is one of the main issues. They also emphasize that the aggregation methodology is beset by multiple problems. They concluded that the ranking systems, despite their popularity and coverage, fail to qualify as a relevant and useful tool to assess the academic quality of an institution, let alone to guide the choices of students and parents or encourage reforms in the field of higher education. Charon and Wauters' (2008) comparison between the Shanghai Jiao Tong University (SJTU) rankings and the Times Higher Education (THE) supplement ranking showcased the risks revolving around the use of these rankings, with examples from European universities. Similarly, Saisana, d'Hombres, and Saltelli's (2010) test of the rationality and implications of the SJTU and THE rankings found that statistical inferences at the university and country level are unreliable.
The literature indicates that there are worldwide efforts to overcome the limitations of existing ranking systems. The Consortium for Higher Education and Research Performance Assessment (CHERPA) network was commissioned by the European Commission to design and assess a new multi-dimensional system which could serve as a reliable alternative to the Academic Ranking of World Universities (ARWU) and Times Higher Education (THE) ranking systems and overcome their shortcomings. The Analytic Hierarchy Process (AHP) model, developed by Lukman, Krajnc, and Glavic (2010), helps compare the research, environmental, and educational performances of various universities. The AHP model identifies the strengths and weaknesses of universities while also assessing their opportunities. The AHP has indicated that the most important indicator of a good higher education institute is its research-oriented outcomes, followed by its social and environmental outreach.
Like other ranking systems, the National Institutional Ranking Framework (NIRF) developed by the Government of India has been a tool to measure an institute's quality with the help of quantified data. However, in a country like India, there is overwhelming divergence and diversity pertaining to institutional objectives; a reliable ranking system must therefore represent the different sectors of a higher education system. This is necessary to avoid biases towards elite, established institutes focused on a particular area. Furthermore, all the current global ranking systems rank universities on a particular set of criteria. For instance, the Academic Ranking of World Universities (ARWU) focuses mainly on indicators pertaining to research performance, while ignoring other criteria. The Times Higher Education (THE) ranks mainly on the reputation of an institute based on the views of various experts. NIRF rankings are based on years of existence, university alumni, current research performance, and the previously mentioned diversity within higher education. Given the global influence of these ranking systems, Salmi (2009) indicates that they should not ignore the various criteria and indicators which measure an institute's performance, as these rankings are the ones which ultimately define the best universities to a global audience.

Participants
The present study consisted of 618 undergraduate students. The age of the sample ranged from 17 to 27 years, with an average age of 19 years. The sample consisted of a mixed representation from Science, Commerce, and Humanities. Of the total sample, 55.42% of the participants' fathers owned their own business, while 59.38% of the participants' mothers were home-makers. Participants were recruited through convenience sampling.

Measures
A data sheet captured demographic information and students' knowledge of and expectations about pursuing higher education in India and abroad. Section A of the data sheet measured students' preferences with respect to international exposure, while Section B measured students' preferences for 23 factors, namely: infrastructure facility, student activities, sports activities, number of core courses/electives, types of core courses/electives, post-graduate course availability, hostel availability, canteen availability, transportation facility, international collaboration and opportunities, internship opportunities, international exposure program, faculty profile, research initiative, placement and career assistance, past placement record, education loan and facility, education loan facility, scholarship availability, part-time work availability, private versus public university, campus safety, and location.

Statistical Analysis
Qualitative analysis of the responses in Section A was conducted to determine students' preferences with regard to international exposure. The 23 factors in Section B of the data sheet were compartmentalized under the five parameters of the National Institutional Ranking Framework (NIRF). One-way repeated measures ANOVA, post-hoc Bonferroni comparisons, t-tests, and descriptive statistics were used for statistical analysis.
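The quantitative pipeline described above can be sketched in code. The following is a minimal illustration using simulated ratings, not the study's actual data; the 1-5 rating scale and all variable names are assumptions made for the example. It computes a one-way repeated measures ANOVA from first principles and derives the Bonferroni-corrected alpha for the post-hoc pairwise comparisons.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_students, n_factors = 618, 23
# Simulated importance ratings on an assumed 1-5 scale (one row per
# student, one column per factor); illustrative only.
ratings = rng.integers(1, 6, size=(n_students, n_factors)).astype(float)

# One-way repeated measures ANOVA, computed from its sums of squares.
grand_mean = ratings.mean()
ss_factors = n_students * ((ratings.mean(axis=0) - grand_mean) ** 2).sum()
ss_subjects = n_factors * ((ratings.mean(axis=1) - grand_mean) ** 2).sum()
ss_total = ((ratings - grand_mean) ** 2).sum()
ss_error = ss_total - ss_factors - ss_subjects  # residual after removing subject effects

df_factors = n_factors - 1
df_error = (n_students - 1) * (n_factors - 1)
f_value = (ss_factors / df_factors) / (ss_error / df_error)
p_value = stats.f.sf(f_value, df_factors, df_error)

# Post-hoc step: Bonferroni-corrected alpha over all pairwise comparisons.
n_pairs = n_factors * (n_factors - 1) // 2   # 253 pairs for 23 factors
alpha_corrected = 0.05 / n_pairs             # ~0.000197628
print(f"F({df_factors}, {df_error}) = {f_value:.3f}, "
      f"corrected alpha = {alpha_corrected:.9f}")
```

Note that 0.05 divided across the 253 possible pairs of 23 factors reproduces the corrected alpha of 0.000197628 reported in the Results section.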

Results and Discussion
Drawing inspiration from the major assessment parameters of the National Institutional Ranking Framework (NIRF) of the Government of India (MHRD, 2019), we categorized the 23 factors in Section-B of the questionnaire into five compartments in Table-1. A one-way repeated measures ANOVA was conducted to find whether any of these 23 factors differ significantly from each other; the results are given in Table-2. As seen in Table-2, the F-value is highly significant at α=0.05. Subsequently, 253 post-hoc pairwise comparisons (with a Bonferroni-corrected α value of 0.000197628) were conducted to confirm which of the 23 factors show significant differences amongst each other; the results are provided in Table-3.
Based on the various combinations above, the mean scores of the 23 factors are arranged in descending order in Table-4. As seen in Table-4, it is of relatively little importance to the sample whether an international college or university they are applying to is private or state-run. However, in stark contrast to these two components of the 'Public and Peer Perception' NIRF parameter, the location of the college/university, as well as campus safety, holds significantly more importance for the students.
Under the parameter of 'Graduation Outcomes', placement and career assistance undoubtedly seems to be one of the most important factors while deciding on an international academic opportunity, which is not a surprising finding, especially in the Indian context. However, the past placement record of the college/university under consideration seems to have significantly less importance in the minds of the students. This is a novel and interesting revelation, which might indicate that Indian students are increasingly likely to take a chance on potentially promising new and upcoming colleges rather than relying on old, clichéd brand names with an established history of high placement.
It is also encouraging to see a trend of increased importance given to the parameter 'Research and Professional Practice' (more than any other NIRF parameter). Four of the five factors categorized under this parameter have been ranked highest in importance: faculty profiles, internship opportunities, and international collaborations as the most important, closely followed by research initiatives. The anomaly, however, seems to be international exposure programs, which have accrued significantly lower rating scores. This may be because, once a student gains admission to a foreign college/university, his/her need for international exposure may already be fulfilled and may therefore seem redundant at that stage of his/her career. Even in a foreign institution of learning, however, students remain hopeful for internships and multi-national collaboration opportunities to further exercise the professional networking skills most crucial at this early stage in their research careers.
It is also very interesting to note that the students have undoubtedly given more importance to the work profiles of the faculty members than to the types and number of courses available. There are numerous predatory colleges/universities which attract foreign students with a variety of fancy courses listed on their websites and brochures, which may look very appealing on the surface but usually lack the subject matter and/or competent faculty to teach those courses. Although the participants have given some importance to the types and number of courses available, this significant difference between faculty profiles and the number of courses available indicates the students' will to take a more informed decision. Even within the 'Teaching, Learning and Resources' parameter, the essential services, viz. the types and number of core courses/electives available, infrastructure facilities, student activities, transport facilities, canteen facilities, etc., seem to be of greater importance to the participants than other miscellaneous resources like sports facilities, hostel availability, etc.
Lastly, within the parameter of 'Outreach and Inclusivity', the participants have clearly preferred the availability of scholarships and part-time work over educational loan facilities. This clearly shows that most students do not want to be bogged down with the pressure of loan repayment after graduation, as this may indirectly coerce them to opt during placement for jobs that are high-paying but exploitative, as opposed to relatively low-paying but satisfying. Furthermore, international students also view job opportunities abroad as a means of return on the investment they have made in their higher studies. This means they would prefer to spend less on their higher studies by enrolling in programs which offer scholarships and/or part-time work opportunities.
Further, twenty-three iterations of the two-sample independent measures Student's t-test were conducted to find whether ratings on any of the 23 factors differed significantly by gender. Only two factors showed significant gender differences, as can be seen in Tables-5 & 6. As seen in Table-5, female students are significantly more concerned about the availability of hostels. It is worth noting that this trend seems more or less representative of the Indian context and is therefore not very surprising.
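A per-factor gender comparison of the kind described above can be sketched as follows. The ratings, group sizes, and group means below are simulated assumptions chosen purely for illustration (a higher hypothetical mean for female students on one factor), not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Simulated ratings for a single factor (e.g., hostel availability),
# split by gender; group sizes and means are hypothetical.
male_ratings = rng.normal(loc=3.0, scale=1.0, size=300)
female_ratings = rng.normal(loc=3.6, scale=1.0, size=318)

# Independent-samples Student's t-test comparing the two groups.
t_stat, p_value = stats.ttest_ind(female_ratings, male_ratings)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")
```

In a full analysis this test would simply be repeated once per factor, flagging the factors whose p-values fall below the chosen significance level.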
As seen in Table-6, female students are significantly more concerned about faculty profiles while making career decisions with respect to international academic opportunities. This is a noteworthy trend indicating that they are relatively more aware than their male counterparts, about the crucial importance of factoring in the academic history of their potential teachers, while making career decisions of a life-changing nature. Furthermore, qualitative analysis of Section-A of the questionnaire yielded the following trends in the preferences of students with respect to the type of international exposure, as shown in Figure-1.
As depicted in Figure-1, the largest proportion of participants (20%) prefer a higher education institute that offers a semester exchange program, and almost equal proportions of students prefer international exchange program opportunities (19%) and exposure to present research papers at international conferences (18%); these preferences are not mutually exclusive. This trend is also in sync with the findings of the quantitative analysis of Section-B, wherein most factors categorized in the 'Research and Professional Practice' parameter have shown significantly high importance ratings. This is also reflected in the fact that roughly 16% seek admission to a Masters or PhD program. All these trends clearly show a gradual but noticeable shift in students' higher education preferences from professional courses to research-based education. Reflecting this growing importance, it was only in 2003 that the US News & World Report Best College Rankings included first-year experiences, co-ops/internships, senior capstone, study abroad, undergraduate research/creative projects, writing in the discipline, learning communities, and service learning as part of their ranking system (Morse, Brooks, & Mason, 2019).
As to the specific countries they would prefer to study in, the preferences seem to be almost evenly distributed among the USA, Canada, Australia, and the European countries, as depicted in Figure-2. The preferences pertaining to the subjects they would prefer to study abroad are also quite varied, although most prefer the social sciences, management, commerce, and law, as shown in Figure-3.

Conclusions, Limitation, and Future Directions
Evaluation of the quality and performance of higher education institutions has gained considerable popularity in current times. The present study proposed a conceptual framework to bridge the gap between the dimensions of prospective students' expectations and the existing ranking parameters for higher education established by the National Institutional Ranking Framework (NIRF). One-way repeated measures ANOVA indicated significant differences after compartmentalization of the 23 parameters established by the researchers under the five parameters of the NIRF. Within the 'Public and Peer Perception' NIRF parameter, the location of the college/university and campus safety hold significantly more importance for the students. Placement and career assistance were considered important factors under the parameter of 'Graduation Outcomes'. Four of the five factors categorized under the 'Research and Professional Practice' parameter, namely faculty profiles, internship opportunities, international collaborations, and research initiatives, were ranked highest in importance.
Within the 'Teaching, Learning and Resources' parameter, the work profiles of faculty members and the types of courses available are of utmost importance. Lastly, within the parameter of 'Outreach and Inclusivity', the availability of scholarships and part-time work opportunities was considered essential. T-tests indicated that females considered hostel facilities and faculty profiles significantly more important than males did. Qualitative analysis showed that 20% of the participants preferred a higher education institute offering a semester exchange program, and an almost equal proportion of students, that is, 19%, preferred international exchange program opportunities, while 18% preferred exposure to present research papers at international conferences. Preferences for the destination country for higher studies were almost evenly distributed. Further analysis indicated that subjects like Economics, Marketing, Mass Communication, Psychology, and Finance were the top five preferences for students to study abroad.
The sample differed in years of undergraduate experience, which may have yielded differences in responses regarding higher education preferences and the underlying parameters. For instance, some respondents had already received international exposure through study abroad programs while others had not. Furthermore, not everybody within the sample was equally inclined toward pursuing higher education abroad. This may prove to be a limitation of the study, as only those who intend to pursue higher education abroad long for international exchange programs. Since the sample recruited was pursuing humanities, the subject preferences for pursuing higher education leaned more toward the Social Sciences. However, this data can be used in future studies to draw comparisons with the higher education preferences of those pursuing the hard Sciences.
There exists a need to re-define existing ranking systems, as there is a lack of directly measurable parameters. The one-size-fits-all concept should be reconsidered, as no single ranking system is ideal for higher education institutes all around the world. All universities operate in different cultural contexts and have students from different cultural and economic backgrounds; as a result, the expectations of each student might differ. For instance, Indian students might rely heavily on universities offering summer exchange or international exchange programs, as only a few Indian universities do. As international student mobility has been increasing, a holistic ranking system should be inclusive of diversity and sensitive to the cultural backdrop of a student. Furthermore, a ranking system should also take into consideration the distinction between disciplines, such as the Sciences versus the Social Sciences. Higher education institutes may have different educational models, focusing on research, pedagogy, or vocational training. For this reason, rankings should be based on the model adopted by higher education institutions and not on a few absolute criteria. There is an increasing dominance of English as an international language, and most higher education institutes advocate its use. However, some countries, like Germany, demand knowledge of German from incoming students. This might make German universities a low preference for international students, but it does not necessarily mean the quality of education is low. Future research can adopt a system which distinguishes between English- and non-English-speaking academic institutions.