The best use of metrics in nonprofits often differs somewhat from their best use in for-profit businesses. In general, nonprofit entities benefit from having more flexible rather than tighter metrics, as flexibility allows the metrics to tie more easily into nonprofits’ sometimes vague mission statements. Additionally, nonprofits are not tasked with the same goals and outcomes as for-profit businesses. This is best demonstrated by example, so the author here takes a project from personal experience and dissects the relevant metrics used, from the overarching meta-metric down to its smaller, more detailed components. The task in question was to conduct an informal census of group participants, and the use of flexible metrics enabled the project to be performed in a way that balanced capability, scope, and value. The metrics and methods used can be applied to similar but much larger tasks. This is particularly important for nonprofits, which often have only limited resources, such as time, staffing, and funding, along with strict fiduciary responsibilities.
Without metrics, no task can be accomplished well, that is, in an efficient and orderly fashion. Questions abound whenever it becomes apparent to a business, or even to an individual, that a new project must be undertaken: How will we know when we have succeeded? Is the company biting off more than it can chew? Metrics help resolve such quandaries in a meaningful and succinct fashion. From the author’s own experience comes a relevant example of a task that, as presented, had little structure and was formulated in rather nebulous terms. The task was to take an informal census, over phone and email, of the citywide and statewide participants in a loose network of support groups organized under a nonprofit umbrella, with an emphasis on the city level and on cross-checking so that no participant who regularly sat in multiple groups would be counted twice. Though the specific case in which the opportunity to use metrics arose was local and narrow in scope, the lessons learned apply to the extremely common task of creating lists of participants in various groups, nonprofit or otherwise; recommendations are therefore phrased in more general terms when discussing this topic. Overall, it was found that for the project of taking an informal census for a nonprofit group, the most relevant overarching metric was the scope-value-capability model, and furthermore, that each of these areas needed to be broken out from the meta-metric and constructed into a full metric in its own right.
Nonprofits in particular suffer from an inability, or at least a reluctance, to use metrics, in part because their missions can be vague and in part because the value of service to humankind, which is at its core the goal of any nonprofit, is by definition hard to measure. Sawhill and Williamson (2001) put it thus:
Of course, nonprofit missions are notoriously lofty and vague. The American Museum of Natural History, for example, is dedicated to "discovering, interpreting, and disseminating through scientific research and education knowledge about human cultures, the natural world, and the universe." But though the museum carefully counts its visitors, it doesn’t try to measure its success in discovering or interpreting knowledge. How could it? (p. 1)
Thus, rather than appealing to a nonprofit’s mission as a source of metrics, a better approach may be to devise new metrics on a project-by-project basis, as indeed was done in the informal census task. Once such metrics are created, nonprofits would do well to follow the procedures more common in for-profit business; as Dijkman, Dumas, van Dongen, Käärik, and Mendling (2011) point out, “It is common for large organizations to maintain repositories of business process models… [S]uch a repository [creates the] problem of retrieving those models in the repository that most closely resemble a given process model or fragment thereof” (p. 1). Accordingly, the metrics chosen for this project accompanied its final submission to superiors in the hierarchy of the local nonprofit. Long before that transaction could take place, however, the metrics had to be devised in the first place.
As an initial step in determining the ideal use of metrics for the case in question, it is helpful to examine how metrics have been used in the past, such as in the typical triple-constraint system, and to consider arguments against such systems along with proposed alternatives. As presented in Baratta (2007), among other sources, the triple-constraint system for building metrics as a performance tool looked at time, cost, and scope. When rewritten to take other factors into account, Baratta’s altered system replaces cost with value and time with capability, summarized thus: “The value delivered is a function of the scope of the project opportunity and the capability of the process used to deliver it” (2007, p. 2). This newer system incorporates the fact that time is not the only measure of a company’s available resources; the term capability better captures the sum of all the factors, including time, that weigh the expenditure of temporal and non-temporal resources against the value of the project. The older system was couched in negatives (time expended, cost incurred), whereas Baratta’s model measures the positive qualities, so that capability and value are balanced against one another as on a scale; it is more natural to weigh an element when it is put in positive terms. These three elements, scope, value, and capability, ought therefore to be considered when embarking on any new business venture or project, and they serve here as the primary three metrics for evaluating the project of gathering participants’ names.
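Baratta’s quoted claim can be restated symbolically, which may help fix the idea; the symbols V, S, and C are shorthand introduced here for convenience, not notation taken from the source:

$$V = f(S, C)$$

where V is the value delivered, S is the scope of the project opportunity, and C is the capability of the process used to deliver it.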
One of the most valuable aspects of deciding on a path when approaching a new topic is determining the form in which the results will finally appear. Indeed, even before data collection began, it was necessary, at least informally, to create the first metric: a system that looked at how well defined the results would be. This relates most directly to the scope element of the overarching metric described above. In compiling a list of all local-area participants, it quickly became clear that scope could be defined, in at least one respect, in terms of geographical area. If preferred, numbers could even be assigned to the various levels of geographical size; the city level might be assigned a value of 0, the state level a value of 1, and so on, as sketched below. Converting geographical area into ordinal data would ease compatibility should others pick up the project after its completion and expand upon it. In addition, numbers are a convenient way to enter such information into a spreadsheet, so long as the legend defining them is clear. Unclear metrics, of course, are worthless metrics by definition.
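As a rough illustration, such a legend could be captured in a few lines of code; the level names come from the discussion above, while the dictionary and function names are purely illustrative, not artifacts of the actual project.

```python
# Hypothetical legend mapping geographical levels to ordinal codes,
# so the spreadsheet can store a number with a clear definition.
GEO_LEVELS = {
    "city": 0,
    "state": 1,
}

def geo_code(level: str) -> int:
    """Return the ordinal code for a geographical level, per the legend."""
    return GEO_LEVELS[level]

print(geo_code("city"))   # 0
print(geo_code("state"))  # 1
```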
Continuing in this theme of transparency, another aspect of scope to be determined was the criteria for a person to be considered a participant. Must that person attend groups at least once a week, once a month, or within some other semi-arbitrary time frame? Would it even be relevant to rank participants by their normal patterns of attendance, such that an “eighty-percent” participant attends about eighty percent of the time? In the end, among the competing systems, it was decided that instead of a hard mathematical metric, a subjective one would be used: did the leader of the group being counted consider that person a current member? This generated simple binary yes/no data that helped define the scope of the project by limiting who would be included. The third scope metric, in addition to geographical area and participant status, was the amount of information collected on each participant. It might seem that all that was needed was the number of participants in each group, but because multiple participants were known to sit in more than one group per week, a raw count alone could not yield a unique tally of all participants. Instead, the minimum information needed was the number of participants per group together with their first and last names, allowing the project coordinator to cross-check the names later and thus obtain accurate numbers. Some groups also chose to provide participants’ email addresses where available, going above and beyond on this metric. The simple test of success on this metric, however, was whether the names had been collected in addition to the numbers. As it turned out, even this aspect of the scope metrics shaded into territory covered by what was actually a capability metric, namely, the number of successful contacts.
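The cross-check described here amounts to taking the union of normalized names across groups. The following is a minimal sketch, assuming each group reports a list of (first, last) name pairs; the group and participant names are invented for illustration.

```python
# Illustrative cross-check: count unique participants across groups
# when the same person may sit in more than one group.
groups = {
    "Group A": [("Jane", "Doe"), ("John", "Smith")],
    "Group B": [("John", "Smith"), ("Maria", "Garcia")],
}

# Normalize names so that differences in casing or stray whitespace
# do not cause one person to be counted twice.
unique_participants = {
    (first.strip().lower(), last.strip().lower())
    for members in groups.values()
    for first, last in members
}

print(len(unique_participants))  # 3 unique people, not the 4 raw entries
```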
For capability, the first considerations were naturally the author’s own limitations and strengths as point person on the task. In a small-scale task such as this one, such self-assessment is important, for on a one-person project, that person is the only entity whose capability is relevant. In some ways, then, determining the metrics for capability becomes easier the smaller a project is, not unlike the old adage that writing a paper on a narrow topic is preferable to writing one on a broad topic. Whenever the goal is to generate information, be it written text or diagrams of metrics, anything that excludes some data is helpful, and in this sense the fact that only one person would be responsible for the project hearkens back to the scope area of the metric. In the end, the actual metrics used to determine capability, and thus to shape the outcome of the entire project, were time, computer skills, and ease of making successful contact. Time, of course, sounds familiar from the old triple-constraint model, but in Baratta’s (2007) model capability replaces time, which in most cases, though not all, turns time into a sub-metric under the larger metric of capability, as indeed happened here. It was determined that ten hours applied to the task, spread over the course of one week, would be the most appropriate expression of the time metric; had the project exceeded these limits, it would have failed on this metric. In fact, time was intimately linked to one of the other metrics used in this case, one that also fell under the broad heading of capability.
The next metric was the point person’s computer skills: the basic task could be completed within the bounds of the metric, but more advanced functions would have violated the limitations of the capability portion. Once the data were all in one place, and given that each of the many city-level groups investigated had multiple members to be listed on a separate Microsoft Excel worksheet, the question became how best to display the data. The collector of the data decided that simply setting groups off from one another spatially made for a clear enough worksheet; had the computer-skills metric not been in place, that person might have been tempted to attempt a pivot table, a tool with which that person was unacquainted beyond knowing that pivot tables existed. The third capability metric related to the ability to make successful contact with all the group leaders listed on the private portal of the nonprofit’s website. Some of the information was bound to be out of date, and many people simply might not respond to a cold call. Thus a reasonable standard was set: at the city level, three contact attempts would be the maximum, and at the state level, two. Anything beyond that would exceed the constraints of the successful-contact metric. Having defined potential limitations in various ways, it was then time to look at perhaps the most exciting of the overarching metrics: value.
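As a sketch, the contact-attempt ceiling could be enforced with a simple counter per group leader. The caps (three attempts at the city level, two at the state level) come from the project as described; everything else in the snippet is illustrative.

```python
# Illustrative enforcement of the successful-contact metric:
# stop attempting a leader once the cap for their level is reached.
MAX_ATTEMPTS = {"city": 3, "state": 2}

def may_attempt(level: str, attempts_so_far: int) -> bool:
    """True if another contact attempt stays within the metric's cap."""
    return attempts_so_far < MAX_ATTEMPTS[level]

print(may_attempt("city", 2))   # True: a third city-level attempt is allowed
print(may_attempt("state", 2))  # False: two attempts is the state-level cap
```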
Value was created in a number of ways in this project, but it was measured primarily in terms of potential funding, the group’s knowledge about itself, and the ability to elevate the city’s group to an official local chapter of the nonprofit, which required at least forty unique participants in the city. In some ways, funding was more nebulous than it might at first appear, in that by becoming a local chapter the group could only hope to secure future, hypothetical funding rather than current funding. Even so, the promise of potential funding, hypothesized to be in the range of five hundred to one thousand dollars a year for special activities, was enough to provide the impetus for the project. The group’s self-knowledge, of course, cannot be measured as easily as the other metrics, but knowing more about local-area participants surely brought additional value to the group; indeed, for this metric, the value continues to rise the farther the information is disseminated. Finally, the binary metric of enabling the group to become a local chapter carries mostly “feel-good” value, for official recognition brings most people joy. As the work was for a nonprofit and was performed on a volunteer basis, this “feel-good” value should not be underestimated; in some respects, it is the whole reason nonprofits exist at all.
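The chapter-eligibility metric reduces to a single comparison of the unique participant count (obtained from the cross-check sketched earlier) against the forty-participant threshold described above; the function and constant names below are invented for illustration.

```python
# Binary value metric: does the unique city-level count qualify the
# group for official local-chapter status?
CHAPTER_THRESHOLD = 40  # minimum unique participants, per the nonprofit's rule

def chapter_eligible(unique_city_count: int) -> bool:
    return unique_city_count >= CHAPTER_THRESHOLD

print(chapter_eligible(43))  # True
print(chapter_eligible(38))  # False
```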
When defining metrics for a nonprofit project, it is best to use a framework that allows some flexibility in defining what is important, given that nonprofit missions can be vague; for this reason, the value-capability-scope meta-metric was used for the project of conducting a local informal census of group participants. While the task was small, created by one volunteer acting essentially alone over the course of a week, the project could easily be expanded in geography or in the amount of information collected. Indeed, it is generally sound practice to devise metrics for the compact version of a task first and only later expand them to grander versions. If nonprofits and for-profit companies alike thought this way, greater efficiency would follow throughout the workforce, paid and volunteer alike.
References
Baratta, A. (2007). The value triple constraint: Managing the effectiveness of the project management paradigm. Paper presented at the 2007 PMI Global Congress, Atlanta, GA.
Dijkman, R., Dumas, M., van Dongen, B., Käärik, R., & Mendling, J. (2011). Similarity of business process models: Metrics and evaluation. Information Systems, 36(2), 498-516.
Sawhill, J., & Williamson, D. (2001). Measuring what matters in nonprofits. McKinsey Quarterly, 2(1), 1-8.