Trade-offs: Online vs. Paper Course Evaluations

Treischl, E., and T. Wolbring. 2017. “The Causal Effect of Survey Mode on Students’ Evaluations of Teaching: Empirical Evidence from Three Field Experiments.” Research in Higher Education 58: 904–921.


What is the most effective way to evaluate teaching in the classroom? Edgar Treischl and Tobias Wolbring address this question through a series of well-designed field experiments that yield important answers about the delivery mode (online vs. paper) of course evaluations and its influence on the course ratings themselves. Results indicated that response rates for paper-based evaluations were significantly higher than those for online evaluations, but the gap between these rates narrowed if students were initially emailed an invitation to evaluate courses and then given time in class to complete the evaluations. In addition, results indicated that paper-based evaluations yielded somewhat more positive pictures of teaching than those administered online.

The authors framed this study from a methodological perspective, suggesting that most previous course evaluation studies used less-than-ideal research designs to address issues related to evaluation delivery mode and its potential influence on overall course ratings. Underscoring the importance of research design, the authors claim that their study’s design, randomized experiments conducted across three field trials, enables them to make causal claims about the role delivery mode plays in course evaluation response rates and its relationship to overall assessments of instructional quality for any given course. Although the experiments were performed at one institution (a limitation noted by the authors), the study design merits consideration of its findings in other institutional contexts, including CIC member institutions.


Results of this study show that while paper administration of course evaluations yields slightly more positive results, online administration is also an effective means of assessing instructor quality as long as students are given time in class for completion. Only when students are given ample time to complete online surveys in class do response rates mirror those of paper administrations. Inviting students to take online course evaluations without providing class time to complete them is ineffective, as response rates drop, on average, by 21.6 percent.

The study results also suggest that delivery mode may share some relationship with overall course evaluations: there is slight evidence that paper administrations trend toward more positive ratings. One question about efficiency trade-offs was not empirically examined and thus remains unanswered: Is it worth the costs associated with moving to paper evaluations, which are often more cumbersome to administer and yield data that are more difficult to analyze and report, if doing so improves instructors’ overall course ratings?


CIC campus leaders should feel confident in knowing that the efficiencies of online platforms for gathering course-evaluation data do not necessarily compromise student response rates or significantly influence the nature of the evaluation itself.

With effective and innovative pedagogies driving the branding CIC institutions use to distinguish themselves from their competitors, questions about instructor quality remain critical. By extension, practices related to assessing instructor quality should also be important, not only in terms of what constitutes “quality” but in terms of the mechanisms used for gathering related information. Results of this study show that online administration of course evaluation is an effective means of assessing instructor quality as long as students are given time in class for completion.

Also important to note: as with many data collected on college students, research has shown that online course evaluations may be biased, obfuscating the voices of minority students, including racial, gender, and sexual-orientation minorities (see AAUP 2016). These biases may also influence how students respond to minority instructors. As a guiding principle, institutions should make sure inclusive and reflective language is used in course evaluations, online or otherwise.

About the Authors

Edgar J. Treischl is a research assistant in the School of Business and Economics at Friedrich-Alexander-Universität (FAU) Erlangen-Nürnberg.

Tobias Wolbring is chair of empirical economic sociology in the Institute of Labor Market and Socioeconomics, School of Business and Economics, at FAU Erlangen-Nürnberg.

Literature Readers May Wish to Consult

Carini, R. M., J. C. Hayek, G. D. Kuh, J. M. Kennedy, and J. A. Ouimet. 2003. “College Student Responses to Web and Paper Surveys: Does Mode Matter?” Research in Higher Education 44(1): 1–19.

Cook, C., F. Heath, and R. L. Thompson. 2000. “A Meta-Analysis of Response Rates in Web- or Internet-Based Surveys.” Educational and Psychological Measurement 60(6): 821–836.

Vasey, C., and L. Carroll. May–June 2016. “How Do We Evaluate Teaching? Findings from a Survey of Faculty Members.” Academe. Washington, DC: American Association of University Professors (AAUP).

Understanding Innovation in Higher Education

Cai, Y. 2017. “From an Analytical Framework for Understanding the Innovation Process in Higher Education to an Emerging Research Field of Innovations in Higher Education.” The Review of Higher Education 40 (4): 585–616.


With many innovations occurring on CIC campuses (see Hearn and Warshaw 2015), it is important to frame these initiatives in ways institutional stakeholders understand and ultimately implement. This article by Yuzhuo Cai provides such a framework. Conceptual in nature, the article synthesizes the research on innovation and provides a compelling analytic rubric for examining the efficacy of innovations in addressing many of the issues that college and university leaders face.

The author draws from and expands Baregheh, Rowley, and Sambrook’s (2009) work, which defines innovation as “the multi-stage process, whereby organisations transform ideas into new/improved products, services, or processes, in order to advance, compete, and differentiate themselves successfully in the marketplace” (p. 1334). Cai uses this definition to advance the idea of innovation within the context of higher education practice.


In this conceptual piece, the author provides questions to consider regarding the successful implementation of innovative practices, offering a framework CIC presidents and other leaders might apply to innovations initiated on campus:
  • What is the nature of the innovation? Is it intended to do something new or do something better?
  • What type of innovation is being designed? Is it a process (e.g., organizational shift) or product (e.g., good or service) innovation?
  • What are the specific problems that will be resolved by the particular innovation?
  • What is the specific goal or goals of the particular innovation? (Here it is important to note the author’s contention that most innovations in higher education are designed as responsive––transformative initiatives designed to respond to the roles of the university in a shifting economy.)
  • What is the context for the innovation? How does the location of the innovation (e.g., within a college, within a functional area, between a college and its local community) play a role in its successful execution?
  • What are the stages to enacting the innovation? How is the innovation strategically executed, from idea to rollout?
  • What are the resources needed to implement the innovation? These could be technical, creative, or financial.
  • Who are the people involved in the innovation process, both at the conceptual and implementation phases? What are their strengths relative to executing different phases of the project?


Use of this framework may help CIC leaders design and execute plans to institute a new practice or improve an existing one. The article offers an empirically grounded roadmap for considering, and perhaps recasting, the many innovations CIC presidents are implementing in response to the external and internal pressures facing their institutions.

The key takeaway is that innovation requires careful planning. Campus leaders should use these questions to guide initiating new institutional practices or improving current ones. Of course, planning involves not only conceptualization and implementation but also a thoughtful assessment strategy designed to evaluate the efficacy of the planned innovation.

About the Author

Yuzhuo Cai is a university lecturer in the School of Management at the University of Tampere, Finland.

Literature Readers May Wish to Consult

Baregheh, A., J. Rowley, and S. Sambrook. 2009. “Towards a Multidisciplinary Definition of Innovation.” Management Decision 47(8): 1323–1339.

Council of Independent Colleges. 2018. Innovation and the Independent College: Examples from the Sector. Washington, DC: Council of Independent Colleges.

Hearn, J. C., J. B. Warshaw, and E. B. Ciarimboli. 2016. Strategic Change and Innovation in Independent Colleges: Nine Mission-Driven Campuses. Washington, DC: Council of Independent Colleges.

Hearn, J. C., and J. B. Warshaw. 2015. Mission-Driven Innovation: An Empirical Study of Adaptation and Change among Independent Colleges. Washington, DC: Council of Independent Colleges.

Lee, T. W., T. R. Mitchell, C. J. Sablynski, J. P. Burton, and B. C. Holtom. 2004. “The Effects of Job Embeddedness on Organizational Citizenship, Job Performance, Volitional Absences, and Voluntary Turnover.” Academy of Management Journal 47(5): 711–722.

Taking Too Many Difficult Courses at Once Threatens Graduation Rates

Witteveen, D., and P. Attewell. 2017. “The College Completion Puzzle: A Hidden Markov Model Approach.” Research in Higher Education 58: 449–467.


How does course-taking influence degree attainment and graduation rates? This longitudinal study examined course-taking patterns among undergraduates and their influence on a host of outcomes, mostly related to whether students graduated. The goal of the research was to demonstrate the efficacy of analyzing transcript data as a means of predicting graduation trajectories and to show that this technique may be more accurate than traditional approaches that model graduation rates as a function of socioeconomic, demographic, and pre-college background information.

Authors Dirk Witteveen and Paul Attewell provide an overview of what they call the “college completion puzzle.” Comparing U.S. baccalaureate graduation rates with those in other OECD countries, including Sweden, France, Iceland, Norway, and the Netherlands, they rightly argue that U.S. college dropout rates are higher than many policy makers and institutional stakeholders would like. They cite Aud et al. (2013) and Radford et al. (2010) when suggesting that about 63 percent of students who matriculate into a four-year degree program actually complete their bachelor’s degree within six years.

Turning to the literature, the authors provide an efficient review of relevant work in this area. They discuss the academic and nonacademic factors that have been modeled to predict graduation rates, as well as studies that examine those rates as a function of institutional covariates. Drawing from Bowen et al. (2009), the authors note that “leading scholars argue that students should try to attend the most selective college possible, since this will enhance their chances of graduating” (p. 451).

To answer their research question, the authors used data from the Beginning Postsecondary Students Longitudinal Study (BPS) of the National Center for Education Statistics (NCES). A nationally representative sample of first-time, first-year students who entered college in 2004 was followed over six years. Student transcript data were merged with these data to create the “2004/2009 Beginning Postsecondary Students Longitudinal Study Restricted-Use Transcript Data Files,” or PETS data (NCES 2011). The final dataset examined 8,980 students enrolled in a four-year college for the first time in 2004.


How do students who graduate within six years differ from those who don’t? In short, graduating students tend to balance difficult courses (e.g., math or science) and less intense courses when constructing their schedules. When students were required to take difficult courses, those who subsequently graduated rarely took them in combination with a larger number of credit hours. In other words, graduating students took fewer courses alongside more challenging courses.

Importantly, non-completers and completers began taking courses with similar schedules and strategies, often enrolling in a similar number of credit-bearing courses including challenging ones. As they progressed through college, non-completers were significantly less likely to adopt “winning strategies” (p. 463) than completers. Indeed, scheduling courses and credit load based on known course difficulty appears to be an effective way to increase graduation rates, at least for this cohort of students.


For CIC presidents and other academic leaders, these results are critical for understanding the ebbs and flows of student enrollment behavior and its effects on degree completion. To improve graduation rates, leaders should be asking the following questions: Are academic schedules flexible enough to accommodate some of the course-taking strategies identified by these authors as furthering graduation chances? What happens to financial aid packages if students want to take a challenging math course and fewer other courses at the same time? How do advisors and coaches help students navigate challenging courses within their schedules? How do institutions provide first-generation students with the navigational capital needed to understand how course-taking behavior may affect their likelihood of graduating and their eventual time to degree?

These questions are important as CIC members try to improve graduation rates for all students. Given core requirements that focus on challenging courses––mostly math and science––as part of general education curricula, educators need to be reminded that course content is not the only academic challenge students face when going through college. How students balance and sequence courses, especially those that are challenging, remains equally important.

About the Authors

Dirk Witteveen is a PhD candidate in sociology at the Graduate Center, City University of New York (CUNY).

Paul Attewell is distinguished professor of sociology and professor of urban education at the Graduate Center, CUNY.

Literature Readers May Wish to Consult

Aud, S., S. Wilkinson-Flicker, P. Kristapovich, A. Rathbun, X. Wang, and J. Zhang. 2013. The Condition of Education 2013. National Center for Education Statistics (NCES) 2013-037. Washington, DC: U.S. Department of Education, NCES.

Bowen, W. G., M. M. Chingos, and M. S. McPherson. 2009. Crossing the Finish Line: Completing College at America’s Public Universities. Princeton, NJ: Princeton University Press.

National Center for Education Statistics. 2011. 2004/2009 Beginning Postsecondary Students Longitudinal Study Restricted Use Data File [in Stata]. NCES 2011-244. Washington, DC: U.S. Department of Education, NCES.

Radford, A. W., L. Berkner, S. C. Wheeless, and B. Shepard. 2010. Persistence and Attainment of 2003–2004 Beginning Postsecondary Students: After Six Years. National Center for Education Statistics (NCES) 2011-151. Washington, DC: U.S. Department of Education.

Name Racism Openly: Race and Rhetoric in Presidents’ Statements

Cole, E. R., and S. R. Harper. 2017. “Race and Rhetoric: An Analysis of College Presidents’ Statements on Campus Racial Incidents.” Journal of Diversity in Higher Education 10(4): 318–333.


How do college and university presidents communicate with the campus community about racialized incidents on campus? This study examines the public messaging practices of 18 senior administrators who made statements in response to racial incidents that occurred over a three-year period, from 2012 through 2015. Through rhetorical analytic strategies, Eddie Cole and Shaun Harper concluded that presidents often issue descriptive statements about the racial incident itself and equally descriptive, sometimes editorial, statements about the individual or group that perpetrated the incident. Often omitted from the statements are descriptive or editorial comments about systemic racism or its expression through sustained and reproduced institutional racist practices. The authors argue that such an omission may address the incident in isolation but reproduce a sustained discriminatory campus narrative.

Grounded in the context of rhetoric and its influence on behaviors, the authors argue that analyzing the public statements of college and university presidents may be a window into their role in “setting diversity agendas” on college campuses. Against the backdrop of the recent political climate, socio-political movements such as Black Lives Matter, and the resignation of University of Missouri’s President Tim Wolfe over alleged mismanagement of racial incidents on the Columbia campus, the authors turned to The Journal of Blacks in Higher Education for its published list of “Campus Racial Incidents” as the data source from which themes related to racial incident, race, and racism were extracted.



The 18 statements represented a variety of institutions, including some in the CIC membership. The authors published the list of institutions, the nature of the incident, and the date of occurrence (p. 321). They also situated their positionality (i.e., disclosing personal factors and experiences that can affect positions a researcher adopts) and approach to the study as being partly an extension of their own identities as black faculty members at predominantly white institutions. Cole and Harper stated that they believe that “academic leaders of many institutions can do more to foster inclusive environments for all people on campus” (p. 322) and that this belief stems from “know[ing] the demand of mentoring students of color, many of which are not our assigned academic advisees, because they seek out-of-class counsel from faculty of color who look like them” (p. 322).

Findings were organized around three dimensions consistent with rhetorical analytic approaches to data collection and reporting: exigence, audience, and constraints. In terms of exigence, the racial incident or series of incidents was explained differently by the 18 presidents: three did not mention the incident at all, 11 mentioned it in broad terms with no discussion of details, and four offered a detailed account. Turning to audience, the statements directly targeted three overlapping audiences: all 18 addressed the general campus community, 13 discussed the individual or group that committed the offense, and five made remarks concerning those targeted by the racial offense. Finally, only three of the 18 presidents grounded their comments in an acknowledgment of systemic, historic, and institutional racism, an omission that, the authors note, may “render a statement ineffective” (p. 322).


What advice might the authors offer to CIC presidents based on these findings? In no particular order, the authors suggest that presidents use the word racism in describing racialized incidents on campus, offsetting a perception that academic leaders’ words are “forgettable” and “seen as saying and doing nothing about racism” (p. 330). Also, presidents should support efforts to ensure that campus community members have a robust understanding of racism, its origins, and its many expressions, both beyond and in reference to the specific institution. Presidential statements––if properly acknowledging race, the racial incident, and racism––have the power to initiate meaningful dialogue about race and racism on college campuses. If strategically considered, the power of presidential words can help disrupt traditional spaces where institutionalized racism and discriminatory practices have been and continue to be the norm.

About the Authors

Eddie R. Cole is assistant professor of higher education in the School of Education and affiliated faculty in the Lyon G. Tyler Department of History at the College of William & Mary.

Shaun R. Harper is provost professor in the Rossier School of Education and Marshall School of Business, the Clifford and Betty Allen Chair in Urban Leadership, and executive director of the USC Race and Equity Center at the University of Southern California (USC).

Literature Readers May Wish to Consult

Gurin, P., E. L. Dey, S. Hurtado, and G. Gurin. 2002. “Diversity and Higher Education: Theory and Impact on Educational Outcomes.” Harvard Educational Review 72: 330–367.

Kezar, A. J., and P. Eckel. 2008. “Advancing Diversity Agendas on Campus: Examining Transactional and Transformational Presidential Leadership Styles.” International Journal of Leadership in Education 11: 379–405.

Rankings Reconsidered: Placing Student Engagement at Risk

Zilvinskis, J., and L. Rocconi. 2018. “Revisiting the Relationship between Institutional Rank and Student Engagement.” The Review of Higher Education 41 (2): 253–280.


In a study that examined the relationship between institutional rankings and the National Survey of Student Engagement’s (NSSE) measures of student-faculty engagement, John Zilvinskis and Louis Rocconi found either no relationship or a modest negative association. The study examined indicators used by U.S. News & World Report, Forbes, and Washington Monthly, rankings widely used by the public. Results indicated that the higher an institution’s ranking, the less frequent the interactions between faculty members and students.

The challenges associated with this line of inquiry involve the validity of institutional rankings, especially popular and often-revered ones, as a means of understanding the student engagement experience. This validity problem is explicitly mentioned by the authors as one reason for the study. To address the issue, the authors used “research in behavioral industrial organization” (p. 257) and Hossler and Gallagher’s (1987) three-phase model of college choice. Drawing on these frameworks, the authors suggest that third-party entities create and use ranking systems as a means of lowering the costs associated with choosing a college, both by providing efficient information to consumers (e.g., families) and by influencing institutional practices ranging from mission articulation to admissions and faculty compensation. (The authors cite the work of Gonzales [2013], Melguizo and Strober [2007], and Meredith [2004] in making these claims.) They also review literature concerning families’ use of rankings in institutional choice: In short, families with access to more social and navigational capital are more likely to use rankings in evaluating and selecting institutions than families without access to these forms of capital. Taken together, it is clear that third parties––those who design and message rankings and those who use them to evaluate institutions––often drive educational practice, including practices related to student engagement.

In terms of study design, the authors draw from two data sources: (1) over 80,000 first-year and senior students enrolled at one of 64 institutions that participated in NSSE’s 2013 administration and (2) each institution’s 2013 scores on three ranking platforms: Forbes’ Top Colleges in the U.S., U.S. News & World Report’s National University Rankings, and Washington Monthly’s National University Rankings. Important to note––and carefully acknowledged by the study’s authors––are these sources’ limitations, including but not limited to issues of self-reporting, social desirability, and institutional self-selection into NSSE. Leaders of CIC institutions should interpret the results cautiously.



Of the ten engagement items tested for their associations with rankings, only one shared a significant relationship with all three ranking platforms: student-faculty interaction. Two elements should be kept in mind, one involving how student-faculty interaction was conceptualized and measured, the other involving study design. Student-faculty interaction was measured on a frequency scale that asked students how often they had (1) talked about career plans with a faculty member; (2) worked with a faculty member on activities other than coursework (committees, student groups, etc.); (3) discussed course topics, ideas, or concepts with a faculty member outside of class; and (4) discussed academic performance with a faculty member. In essence, this was a measure of the frequency of contact with faculty members, not of relationship quality. Although other minor findings were reported, this result was the only consistent pattern that held across the three ranking platforms.

The second element––one of particular importance to the CIC membership––is that the study controlled for institutional size and control (private vs. public). These controls suggest that the relationship between ranking and frequency of student-faculty engagement held consistently across these differences. For example, faculty members at less highly ranked, smaller, private institutions were more likely to engage with students than those at highly ranked, smaller, private institutions.

The authors offer several possible explanations for these findings. The first is that highly ranked institutions may attract students who need less interaction with faculty members. The second is that highly ranked institutions may recruit faculty members who place less emphasis on spending time with students. The third involves the institutional ranking process itself––is it designed to measure what really matters to students as they pursue their college degrees?


While the authors, in the spirit of scholarly inquiry, offered as one possible explanation that students with less need to engage with faculty members are attracted to more highly ranked institutions, this may not be true at CIC member institutions. Students who are attracted to CIC institutions may simply have different needs than those attracted to other types of institutions (e.g., research universities). The key questions may instead be: What is a faculty member’s obligation to students who ask questions about career choice, non-course-related topics, course-related topics outside of class, and their academic performance? How do faculty members at highly ranked institutions regard student engagement as part of their job?

For presidents and other senior administrators of highly-ranked CIC institutions, the take-home message involves faculty work as related to institutional ranking: How do administrators frame the essence and importance of faculty work in light of desires to improve institutional rankings that families use to evaluate and select institutions? How might focusing on institutional rankings compromise the frequency of the faculty-student engagement experience? Should productivity metrics for faculty members include frequency—and even quality—of engagement with students?

For leaders of less highly ranked CIC institutions, the question is one of branding. How can leaders use these findings as a tool to promote the benefits of attending a smaller, less highly-ranked private institution that may feature closer and more frequent student-faculty engagement?

About the Authors

John Zilvinskis is assistant professor of student affairs administration at Binghamton University.

Louis Rocconi is assistant professor of evaluation, statistics, and measurement at the University of Tennessee.

Literature Readers May Wish to Consult

Gonzales, L. D. 2013. “Faculty Sensemaking and Mission Creep: Interrogating Institutionalized Ways of Knowing and Doing Legitimacy.” The Review of Higher Education 36(2): 179–209.

Hossler, D., and K. S. Gallagher. 1987. “Studying College Choice: A Three-Phase Model and the Implications for Policy-Makers.” College and University 62(3): 207–221.

Melguizo, T., and M. H. Strober. 2007. “Faculty Salaries and the Maximization of Prestige.” Research in Higher Education 48(6): 633–668.

Meredith, M. 2004. “Why Do Universities Compete in the Ratings Game? An Empirical Analysis of the Effects of the U.S. News and World Report College Rankings.” Research in Higher Education 45(5): 443–461.

Does Interfaith Engagement Drive Students to Abandon Their Faith or Non-Faith Traditions?

Mayhew, M. J., A. N. Rockenbach, and N. A. Bowman. 2016. “The Connection between Interfaith Engagement and Self-Authored Worldview Commitment.” The Journal of College Student Development 57 (4): 362–379.


This study explores interfaith engagement and its association with a construct called self-authored worldview commitment (SAWC), a learning outcome addressing how a student develops an “informed, critical understanding of his or her worldview, would describe him or herself in ways consistent with such an understanding and would relate to others in a manner also consistent with that understanding” (Mayhew and Bryant Rockenbach 2013, p. 64).

Matthew J. Mayhew, Alyssa N. Rockenbach, and Nicholas A. Bowman assume that achieving a SAWC promotes the civic values touted by many college and university educators and featured prominently on many CIC campuses: a worldview based on what Sir John Templeton (2000) calls mutuality, respect, and shared exploration.

Grounded in the work of many developmental theorists, including Marcia (1966), Perry (1970), Kegan (1994), and Baxter Magolda (2008), the authors offer conceptual refinements of the SAWC construct and of the institutional conditions and educational practices that lead to its development. Because the study is cross-sectional, based on 13,776 students enrolled at one of 52 institutions, it explores only associations, not causal inferences. Research designs that pursue lines of inquiry related to development should be longitudinal in nature, a limitation noted by the authors.


Results of the study put forward four important considerations. First, regardless of students’ identified worldviews, those who attended institutions that valued a respect for and appreciation of other worldviews were more likely to grapple with different worldview perspectives before committing to their own (i.e., achieving SAWC). Second, achieving SAWC was associated with institutional type: students enrolled at public institutions tended to have higher SAWC scores, while those enrolled at nonsectarian institutions tended to have lower scores. Third, students who participated in formal (e.g., institution-designed) and informal (e.g., peer-related) interfaith activities were significantly more likely to achieve SAWC than students who did not engage in these types of activities. Finally, regardless of institutional type or experience with formal or informal interfaith activities, higher SAWC scores were exhibited by students who identified as agnostic, atheist, Buddhist, secular humanist, spiritual, Unitarian Universalist, or another worldview (i.e., one articulated in an open-ended response about worldview identification); lower SAWC scores were associated with students who identified as Eastern Orthodox, Roman Catholic, evangelical Christian, mainline Protestant, Jewish, Muslim, or nonreligious.


How do these results help presidents and other campus leaders design, implement, and assess interfaith efforts on their campuses? What outcomes are important for educators to consider, given the eclectic and passionate worldview interests of institutional stakeholders (e.g., boards, families, clergy)? This study assumes that self-authored worldview commitment—the ability for students to internalize different worldview perspectives as a vehicle for understanding their own—is an interfaith outcome that CIC member institutions should consider. Not only does it appeal to educators who value college as an opportunity to engage worldview differences in informative and responsible ways, but SAWC may also satisfy skeptical institutional stakeholders, as it encourages students to wrestle with diverse worldview issues as opposed to abandoning faith or non-faith-based positions.

The authors found that students who participated in interfaith activities were more likely to achieve SAWC. Institutions can articulate the importance of interfaith learning through mission statements and strategic planning documents. Of course, incorporating religious literacy into the formal curriculum would be another important step, as long as instructors are trained to effectively engage students in productive conversations about worldview differences.

Similarly, students who identify as nonreligious achieved SAWC through informal interfaith interactions. Although the structure of these interactions was informal—assessed by asking students questions about dining and socializing with students from worldview narratives different from their own—the finding is an important reminder that peer engagement matters. Educators need to equip students with the language, knowledge, and tools needed to effectively engage across worldview differences in anticipation of such exchanges.

Finally, the authors provide some insight into the relationships between students’ self-identified worldviews and SAWC. Institutions interested in promoting SAWC as a collegiate outcome need to involve faith-based students (Eastern Orthodox, Roman Catholic, evangelical Christian, mainline Protestant, Jewish, and Muslim) in formal interfaith efforts. The spiritual development of these students—including the communities they turn to for spiritual guidance throughout their college careers—can be influenced by local temples, synagogues, churches, and/or parachurch organizations (e.g., Navigators and Cru). Educators should design environments where faith-based students feel free to express their worldviews in constructive ways that value diverse perspectives.

About the Authors

Matthew J. Mayhew is William Ray and Marie Adamson Flesher Professor of Educational Administration with a focus on higher education and student affairs at Ohio State University.

Alyssa N. Rockenbach is professor of higher education in the Department of Educational Leadership, Policy, and Human Development at North Carolina State University.

Nicholas A. Bowman is a professor in the Higher Education and Student Affairs program and director of the Center for Research on Undergraduate Education at the University of Iowa.

Literature Readers May Wish to Consult

Baxter Magolda, M. B. 2008. “Three Elements of Self-Authorship.” Journal of College Student Development 49: 269–284.

Kegan, R. 1994. In Over Our Heads: The Mental Demands of Modern Life. Cambridge, MA: Harvard University Press.

Marcia, J. E. 1966. “Development and Validation of Ego Identity Status.” Journal of Personality and Social Psychology 3: 551–558.

Mayhew, M. J., and A. N. Bryant Rockenbach. 2013. “Achievement or Arrest? The Influence of the Collegiate Religious and Spiritual Climate on Students’ Worldview Commitment.” Research in Higher Education 54: 63–84.

Perry, W. G., Jr. 1970. Forms of Intellectual and Ethical Development in the College Years: A Scheme. New York, NY: Holt, Rinehart, and Winston.

Templeton, J. 2000. Possibilities for Over One Hundredfold More Spiritual Information: The Humble Approach in Theology and Science. Philadelphia: Templeton Foundation Press.

Redirecting Work as Learning? Helping First-Generation Latino Students Succeed

Nuñez, A. M., and V. A. Sansone. 2016. “Earning and Learning: Exploring the Meaning of Work in the Experiences of First-Generation Latino College Students.” The Review of Higher Education 40 (1): 91–116.


"I like to work a lot of hours because it actually helps me to concentrate on school.… So it gives me more focus and it helps me manage my time better." (p. 104)

How might work offer distinctive opportunities for Latino first-generation college students to acquire the skills and resources needed to navigate successfully in the often confusing college environment? Although this case study examines a sample of students from a large public institution, the authors carefully craft an argument that may be of relevance to the CIC community.

Most research on working college students has employed quantitative methods to examine how working a certain number of hours per week affects students’ educational outcomes. This qualitative study, by contrast, addresses how first-generation college-going Latino students make meaning of their experiences working for pay during college. Grounded in Bourdieu’s critical theory of cultural reproduction (Bourdieu and Passeron 1977) and Pusser’s (2010) conceptualization of work as a means for spurring student success, this study, based on interviews with Latino students from first-generation college-going backgrounds, provides insights about how employment during college can offer benefits beyond financial support to these students.


The results of this study suggest three conclusions. First, these students come to college with a particular family orientation toward work that involves strong support for students’ pursuit of postsecondary education, cultural assets that serve the students well in their college careers, and encouragement to pursue higher-status work. Second, work can offer these students opportunities to cultivate various forms of capital beyond financial capital, including human, social, cultural, and navigational capital. Third, certain kinds of paid employment during college can expose these students to new work experiences that can be more intrinsically rewarding than those their family members have experienced.


CIC implications include the importance of structuring on-campus work opportunities—both Federal Work Study (FWS) appointments and non-FWS positions—that enable students, especially those from low-income and first-generation backgrounds, to expand their skill sets, build a sense of community on campus, and learn about careers that they might want to pursue. Careful partnerships with community employers may help first-generation Latino students develop a sense of purpose that their families appreciate and may help them decode messages they perceive as cryptic (e.g., elusive administrative processes involved with financial aid) that campus educators attempt to clarify for first-time students.

As CIC members continue to balance often-competing goals to make college affordable while remaining economically viable, casting work as a potential opportunity for students to learn and, when necessary, make money—especially for first-generation Latino students—might be an important step for recruiting and retaining these students. Educators often have concerns regarding college students’ expressions of stress and fatigue related to balancing work and school. This article suggests that work may help alleviate some of the stress of college-going, especially for this group of students.

About the Authors

Anne-Marie Nuñez is associate professor in the Department of Educational Studies at the Ohio State University.

Vanessa A. Sansone is assistant professor of higher education in the Department of Educational Leadership and Policy Studies at the University of Texas at San Antonio.

Literature Readers May Wish to Consult

Bourdieu, P., and J. C. Passeron. 1977. Reproduction in Education, Society, and Culture. Beverly Hills, CA: Sage.

Pusser, B. 2010. “Of a Mind to Labor: Reconceptualizing Student Work and Higher Education.” In Understanding the Working College Student: New Research and Its Implications for Policy and Practice, edited by L. W. Perna, 134–154. Sterling, VA: Stylus.

For Every Vice, There’s a Virtue: How Students Use Social Media as a Vehicle for Activism

​Linder, C., J. S. Meyers, C. Riggle, and M. Lacy. 2016. “From Margins to Mainstream: Social Media as a Tool for Campus Sexual Violence Activism.” Journal of Diversity in Higher Education 9 (3): 231–244.


What role does social media play in teaching and learning? In building community? In spurring responsible activism? These questions underscore the current investigation of students’ use of social media as a vehicle for activism, specifically regarding sexual violence on campus.

Although the ubiquity of social media use among college students is unquestionable, its empirical study has remained underwhelming, as technologies often outpace the research designed to understand their use. Grounded in literature relating to activism, social media and activism, and cyberfeminism (where gender meets the Internet; see Cunningham and Crandall 2014, p. 233), the authors frame the study as one to help administrators explore strategies for using social media as a means for consciousness-raising, community building, and what the authors refer to as a counterspace (p. 234) for sexual assault activism.


Underneath the study’s themes are questions CIC campus leaders may want to entertain:
  • Is social media activism real activism? Who gets to decide?
  • How might educators embrace the potential of social media platforms for providing new opportunities for and—in some cases—extending community considerations for students?
  • How do social media platforms disrupt hegemonic norms often embedded within college communities? Offer an alternative place for marginalized students to share ideas? Find community? Make friends?
The authors of this study addressed these questions through the use of internet-related ethnography (Postill and Pink 2012), which combined interviews with 23 activists with observations of online activist communities. This methodology was innovative in its approach to uncovering activist themes related to social media usage. From these data, the authors provided many powerful stories about social media and its use in giving students opportunities to organize around an idea and find a voice often obscured during in-person conversations about personal and controversial topics such as sexual violence. As one student observed, “When I’m on Twitter I feel like I’m in my own community because I follow a lot of Brown, queer feminist[s] and I’m in on these conversations.... Twitter is this unique place where that can exist.... We share this important space where I can breathe a sigh of relief where I can get the validation I need. Where I can have a conversation with just us or us and whoever want[s] to join in and there’s no hierarchy” (p. 239).


Rather than resist these platforms, educators at CIC institutions should understand how students use them for information gathering as well as community building, especially students who struggle to find community access points on campus. Is there an administrator charged with routinely examining student traffic, specifically regarding social media use? With how students use social media platforms to communicate with other students about campus-based issues?

Of course, it remains the charge of educators to teach students how to use social media platforms, both well and responsibly. Sharing best practices in social media use provides a helpful direction, as technological advances will continue to outpace research on social media’s influence in student learning and behavior.

About the Authors

Chris Linder is assistant professor in the college student affairs administration program in the Department of Counseling and Human Development at the University of Georgia.

Jess S. Meyers is director of the Women’s Center at the University of Maryland Baltimore County.

Colleen Riggle is assistant dean of students and director of the Women’s Resource Center at the Georgia Institute of Technology.

Marvette Lacy is director of the Women’s Resource Center at the University of Wisconsin-Milwaukee.

Literature Readers May Wish to Consult

Cunningham, C. M., and H. M. Crandall. 2014. “Social Media for Social Justice: Cyberfeminism in the Digital Village.” In Feminist Community Engagement: Achieving Praxis, edited by S. V. Iverson and J. H. James, 75–91. London, UK: Palgrave.

Postill, J., and S. Pink. 2012. “Social Media Ethnography: The Digital Researcher in a Messy Web.” Media International Australia 145: 123–134.