April 17, 2019

Survey says?

Inaccurate Census data has led researchers to underestimate the effectiveness of anti-poverty programs.

Much of how we understand the US economy comes from household surveys.

The official rates of unemployment, poverty, and inflation are all based on what individuals tell researchers at the Census Bureau.

But researchers have raised questions about the accuracy of the data, concerns that, if borne out, would fundamentally change our understanding of the nation’s economic health and of the government programs aimed at helping the most needy.

A paper in the April issue of the American Economic Journal: Applied Economics attempts to measure just how far the Census Bureau’s household survey, the Current Population Survey, strays from reality and to assess the implications.

Authors Nikolas Mittag and Bruce Meyer match the survey responses to administrative data for anti-poverty programs, which are generally more accurate, and find both good news and bad. The bad news is that the survey data are way off. The good news is that anti-poverty programs are doing more to help poor people than we realize.

“The programs really look different if you look at the administrative data. Poverty looks different,” Mittag said in a telephone interview with the AEA. “To me, the key results are that the poverty reduction of these programs is much larger than what we would have thought based on the survey data.”

The misreporting of government assistance to the poor has important implications for understanding and helping low-income communities. Policymakers need reliable data to know who needs help and assess whether targeted programs are actually helping them.

There are all sorts of reasons why the household data aren’t accurate. People underreport the amount of money they get from the government, they may overreport it, or they don’t respond at all.

But the extent of the inaccuracy has not been clear. So Mittag and Meyer sought to find out.

They took what households said about the assistance they received and matched the individual-level data with administrative records in New York for four government programs: food stamps, subsidized housing, Temporary Assistance for Needy Families (TANF), and general assistance.

The administrative records aren’t perfect, but are generally much more accurate than the survey data because they contain actual payments made by the government instead of what someone said.

They found striking gaps between the share of households who reported receiving assistance in the survey and the share whose administrative records showed they had actually received payments. Forty-three percent of food stamp recipients did not report receiving benefits in the survey. The same was true of 63 percent of people receiving temporary assistance and 36 percent of housing assistance recipients. Some households also report payments that, according to the records, they never received, but the share of these “false positives” is a fraction of the errors in the other direction, which understate the assistance actually being paid out.
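To make those false negative and false positive rates concrete, here is a minimal sketch of how they can be computed from linked data, assuming a hypothetical pandas table with one row per household; the column names and values are invented for illustration and this is not the authors’ actual linkage procedure, which uses confidential matched records.

```python
import pandas as pd

# Hypothetical linked file: one row per household, pairing the survey answer
# with the matched administrative record for a single program (e.g., food stamps).
linked = pd.DataFrame({
    "reported_receipt": [True, False, False, True, False, True],  # survey response
    "admin_receipt":    [True, True,  False, True, True,  True],  # administrative record
})

# False negatives: the administrative record shows a payment, but the survey does not.
true_recipients = linked[linked["admin_receipt"]]
false_negative_rate = (~true_recipients["reported_receipt"]).mean()

# False positives: the survey reports a payment that the records do not show.
non_recipients = linked[~linked["admin_receipt"]]
false_positive_rate = non_recipients["reported_receipt"].mean()

print(f"Share of recipients not reporting receipt: {false_negative_rate:.0%}")
print(f"Share of non-recipients reporting receipt: {false_positive_rate:.0%}")
```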

The upside is that anti-poverty policies are more effective than we thought. When the survey figures are corrected with the administrative data, the poverty-reducing effect of all four programs combined nearly doubles, and the effect of housing assistance triples.
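The underlying poverty arithmetic is a straightforward comparison: compute the share of households below the poverty line with benefits excluded from income, then again with benefits added back, using either the survey-reported amounts or the administrative amounts. The sketch below illustrates the mechanics with made-up household figures and a hypothetical threshold; none of the numbers come from the paper.

```python
# Toy comparison of measured poverty reduction under survey vs. administrative benefits.
POVERTY_LINE = 13_000  # hypothetical annual threshold

households = [
    # (income before benefits, survey-reported benefits, administrative benefits)
    (9_000,      0, 4_500),   # recipient who did not report benefits in the survey
    (11_500, 2_000, 2_000),   # recipient who reported accurately
    (12_800,     0,     0),   # non-recipient just below the line
    (15_000,     0,     0),   # non-recipient above the line
]

def poverty_rate(incomes):
    """Share of households with income below the poverty line."""
    return sum(income < POVERTY_LINE for income in incomes) / len(incomes)

before      = poverty_rate([inc for inc, _, _ in households])
with_survey = poverty_rate([inc + rep for inc, rep, _ in households])
with_admin  = poverty_rate([inc + adm for inc, _, adm in households])

print(f"Poverty rate before benefits: {before:.0%}")
print(f"Reduction using survey-reported amounts: {(before - with_survey) * 100:.0f} points")
print(f"Reduction using administrative amounts:  {(before - with_admin) * 100:.0f} points")
```

In this toy example, the measured reduction doubles, from 25 to 50 percentage points, once the unreported benefits in the administrative records are counted.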

Matching receipts
The Census Bureau's household survey underreports the assistance that needy individuals actually receive. The chart below shows how the survey data compare to actual program receipts for individuals below 50 percent of the US poverty line.
Source: author data

The effects are especially large for single mothers. Using administrative records instead of the reported survey data shows that the four programs reduce the poverty rate among single mothers by an additional ten percentage points.

The errors stem from a combination of people failing to respond to survey questions and mistakes in recording the answers. It’s not that people are intentionally misleading the system, Mittag said. Rather, the gaps may reflect confusion or a simple inability to recall what kind of assistance they received.

“People are probably not aware,” Mittag said. “You ask them, 'Did anyone get food stamps in the last calendar year?' They might not remember.”

Still, the surveys should not simply be discarded. They are useful as a complement to administrative data, which have limitations of their own, Mittag said. For example, demographic information such as race or gender can be wildly inaccurate in administrative records when it is not relevant to running the program.

Mittag is optimistic that the survey’s problems can be overcome by combining it with other data sources to form a more complete and accurate picture of who is and is not benefiting from these programs.

“I think that the long-term solution would be some form of the data combination approach,” he said. “I don’t think the administrative data could replace the survey. There’s a lot of information in the survey that can be useful.”

"Using Linked Survey and Administrative Data to Better Measure Income: Implications for Poverty, Program Effectiveness, and Holes in the Safety Net" appears in the April issue of the American Economic Journal: Applied Economics.