Who participated in the National Census of Writing?

As someone who directs a writing program at a small liberal arts college, I often approach the Census data with that lens in mind.  I’m curious how practices and discussions represented in the field of writing studies map onto different institution types.  As stated in the about section of the database, 680 four-year schools and 220 two-year schools participated in the National Census of Writing (NCW); however, how do these numbers break down by institution type?

Table 1 illustrates the breakdown of four-year schools into four categories based on the Basic Classification of the Carnegie Classification of Institutions of Higher Education: Doctoral Universities, Master's Colleges and Universities, Baccalaureate Colleges, and Special Focus Institutions, which include seminaries and Bible colleges, art and design schools, and other professional schools. The table presents the number of schools we invited to participate in the NCW, the number of schools that participated, the percentage each group represents of the overall total, and the response rate for each institution type.

Table 1: Breakdown of four-year institutions by institution type

 

| Institution type | Schools invited (n = 1,621) | % of schools invited | Schools participating (N = 680) | % of participating schools | Response rate |
|---|---|---|---|---|---|
| Doctoral and Research Universities | 272 | 17% | 197 | 29% | 72% |
| Master's Colleges and Universities | 593 | 37% | 272 | 40% | 46% |
| Baccalaureate Colleges | 546 | 34% | 184 | 27% | 37% |
| Special Focus Institutions | 210 | 13% | 27 | 4% | 13% |

When we look at this table, we see that the percentage of doctoral schools participating in the NCW exceeds what we would expect given the percentage of doctoral schools invited to complete the Census; the percentages of participating baccalaureate and special focus institutions, however, fall short of their shares of the invitation list.

Table 2 presents the same data points for two-year schools. Two-year schools are often grouped together as a single category in comparisons with four-year schools, but this table presents a more detailed breakdown, also based on the Carnegie classifications.

Table 2: Breakdown of two-year institutions by institution type

| Institution type | Schools invited (n = 924*) | % of schools invited | Schools participating (N = 220) | % of participating schools | Response rate |
|---|---|---|---|---|---|
| Public Rural-Serving | 459 | 50% | 104 | 47% | 23% |
| Public Suburban-Serving, Single Campus | 90 | 10% | 23 | 11% | 26% |
| Public Suburban-Serving, Multi-Campus | 87 | 9% | 26 | 12% | 30% |
| Public Urban-Serving, Single Campus | 30 | 3% | 12 | 6% | 40% |
| Public Urban-Serving, Multi-Campus | 111 | 12% | 39 | 18% | 35% |
| Private Not-for-Profit | 23 | 2% | 4 | 2% | 17% |
| Public two-year colleges under four-year universities | 39 | 4% | 7 | 3% | 18% |
| Four-year schools primarily granting associate's degrees | 33 | 4% | 6 | 3% | 18% |

*52 schools were not classified in the initial invitation list.

In this table we do not see much difference between the invited and participated categories, apart from the higher response rate among urban-serving schools relative to the overall response rate for two-year schools. A significant difference does exist, however, between the response rate of four-year schools, which was 42%, and that of two-year schools, which was 24%.
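Those overall rates follow directly from the invitation and participation totals reported above; a minimal arithmetic check, using only the totals from Tables 1 and 2:

```python
# Overall response rates computed from the totals in Tables 1 and 2.
four_year = {"invited": 1621, "participated": 680}
two_year = {"invited": 924, "participated": 220}

def response_rate(group):
    """Return the response rate as a whole-number percentage."""
    return round(100 * group["participated"] / group["invited"])

print(response_rate(four_year))  # 42
print(response_rate(two_year))   # 24
```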

These two tables raise a few questions:

  • Why did schools from one institution type participate at a higher rate than other types?
  • Did the Census speak to a particular group of schools or individuals?
  • How do we already begin to see issues with diversity at the data collection stage?
  • How do these different participation rates contribute to the discourse around how writing is taught, supported, and administered?
  • How can we increase participation from underrepresented institution types?

In a recent WCJ blog post, Karen Keaton Jackson discusses some of these questions in relation to Historically Black Colleges and Universities' (HBCUs) involvement in writing center conversations. When considering why HBCUs have been absent from national and international writing center conversations, Keaton Jackson replies, "It's complicated," but goes on to explain that a lack of understanding and communication between HBCUs and PWIs (Predominantly White Institutions) may be the cause of the absence. She ponders whether the field has been sending the right invitation to HBCUs, and why these invitations may be left unanswered.

Unlike most calls for survey participation, we did not rely on posting a message to national listservs, knowing, as Keaton Jackson suggests, that many schools are not members of these online discussions. Instead, we searched each institution's website for the person or people best able to respond to the NCW. This approach worked when an institution's sites of writing were clearly designated on its website, but we often had to make an educated guess about who might be the best person to complete the survey. We also asked individuals, particularly those working at underrepresented institutions, to reach out to peer institutions and explain the importance of having all schools counted in the NCW.

When we first began this project, it was titled the WPA Census because it emerged out of conversations we had at the Council of Writing Program Administrators conference in Albuquerque. Even with the acronym spelled out, we found that how people responded to that term influenced whether and how they completed the survey. Some did not believe their institution had a writing program, some did not see themselves as writing program administrators, some thought we were only talking about programs and not centers, and some did not see their sites of writing fitting into a national survey.

Many people misconstrued the purpose of the Census and thought it was a personal research project. It is not surprising that doctoral universities and small liberal arts colleges (SLACs) had the highest response rates, over 70%. With the former, participants may be more likely to understand the value of a large-scale research project to their programs and the graduate students within them, and they may be more likely to have attended one of our presentations on the project. With the latter, many SLACs had participated in a similar survey process for the book Dara Regaignon and I wrote on writing program administration at small colleges. Through this book project, we learned about the sites of writing at this particular institution type and therefore knew how to phrase the questions and responses to speak to this institutional context. This group of individuals had already seen the value of a shared set of data, which may have made them more likely to complete the NCW survey.

As we prepare for the next round of data collection, we need to think about the obstacles people face when asked to participate, because the question of who participated in the NCW speaks to conversations in the field around diversity and inclusion. How can the Census be inclusive enough to document the diversity that exists when people do not self-identify with some of the terminology of the field? Does the terminology we use highlight or hide the explicit and embedded sites of writing on a given campus?

As we look at the NCW results through their various filters, we can see that diversity exists, diversity that may not have been captured if we had not altered terminology with an understanding of different contexts; however, we do not have a full picture of the diversity that may exist because of the varying response rates from the different institution types. There are numerous reasons beyond the ones mentioned in this post for why someone chose not to participate in the NCW, including available time and type of position. As we look to the next round of data collection, we aim to increase participation from underrepresented groups by addressing some of these previous obstacles. We can't give people time they don't perceive they have, but we can try to illustrate the value of a data set in which a diverse set of schools are counted and documented.