Of all the disparities among transportation modes, one of the most important and least discussed is the disparity of information. American cities now have an enormous and expanding body of knowledge about how cars and trucks move, yet we know almost nothing, quantitatively, about how pedestrians and bicyclists use the infrastructure.
Cameras and in-ground road sensors are placed at strategic locations along roadways, streaming constantly to gather very rich data on vehicular traffic patterns. Here are all of DC’s. The data is collected and processed to, among other things, make future projections. Eventually a decision-maker, perhaps recalling that sinking feeling we’ve all felt after failing an exam, sees a thick red line on the map looming in the future, Level of Service F, and knows this is very serious indeed. Something happens.
Because pedestrian and bicycle infrastructure lacks data of this precision (in most cases, any data at all), there is little scientific support for funding it. Technocrats see such projects as window-dressing on the business of real mobility, nice features that the Federal Highway Administration lumps together with museums and lighthouse renovations as “transportation enhancements.” Most people reading this blog see things differently. But how do we prove it? And how do we allocate bicycle and pedestrian investments where they will be most effective? Useful knowledge simply requires more data.
To be sure, people counting does happen. The National Bicycle and Pedestrian Documentation Project just completed its annual two-day count earlier this month, with cities and towns across the country participating to acquire nationally standardized data. Annual “cordon counts” of bicycles have been conducted in downtown D.C., northern Arlington, and at a beltway crossing since 1986, allowing some trends to be observed.
Yet the large majority of these counts must be conducted by hand, with pen and clipboard, by interns and volunteers standing on street corners. That can be a fun event once in a while, but there is no way this method can produce enough data to account for variation over time: daily and weekly commuting patterns, seasonal swings, responses to weather, special events, and other external variables.
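To make the difference concrete, here is a small Python sketch, with invented counts and a hypothetical log format, of the weekday-by-hour averaging that a continuous automated counter permits and a once-a-year clipboard count cannot:

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical continuous counter log: (hour timestamp, pedestrians seen
# during that hour). All numbers are invented for illustration.
log = [
    (datetime(2009, 9, 14, 8), 120),   # Monday morning rush
    (datetime(2009, 9, 14, 14), 35),   # Monday mid-afternoon
    (datetime(2009, 9, 19, 8), 15),    # Saturday morning
    (datetime(2009, 9, 19, 14), 60),   # Saturday afternoon
]

# Bucket observations by (weekday, hour), then average each bucket --
# exactly the temporal pattern a two-day manual count cannot reveal.
buckets = defaultdict(list)
for ts, count in log:
    buckets[(ts.weekday(), ts.hour)].append(count)

averages = {key: sum(v) / len(v) for key, v in buckets.items()}
```

With a year of hourly data, the same few lines would separate commute peaks from weekend strolling, and rainy weeks from fair ones.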
Humans are good at many things, but counting discrete objects over long periods of time is probably better left to computers. One study has shown that workers hired to track pedestrians routinely undercount them. We tend to get tired, eat snacks, and blink too much through hours of tedium. So I asked a friend of mine who studies visual recognition software at U.C. Berkeley about using cameras to count pedestrians on a street. He said the technology has certainly arrived, and that the margin of error would be small enough for data-collection purposes. Indeed, cameras are already commercially available for exactly this purpose. For an intensely comprehensive academic bibliography on counting people, see here. Some automated counting has begun in U.S. cities, but it is only in the very beginning stages.
We need better bicycle and pedestrian information. We have the technological means to acquire it fairly easily. We know how to model it and use the models, thanks to decades of counting cars. All that remains is the will to make the connection.