TCi summer intern Nathaniel “Alex” Cordova, a graduate student in public administration, discusses the analysis phase of the work, undertaken late in the summer after the fieldwork phase was complete. Read up on TCi’s Minimum Nutrition Dataset work here and check out prior posts about the project.
While the data collection process was operationally taxing, the data analysis period was equally complex. Because we were piloting a new dietary diversity methodology, custom survey instruments were designed at the outset of the program to capture information on our key variables. This allowed us to gather unique information and validate our methodology, but it also left us with the challenge of building an extensive data entry and analysis process upon our return to ICRISAT from the field.
The purpose of our data analysis was to test whether results from the Minimum Nutrition Dataset (MNDA) ‘snapshot’ method were similar to those from ICRISAT’s intensive nutrition survey. To analyze the collected data accurately, we designed a data entry spreadsheet and created a reference list of foods and their respective food groups, following guidelines established by the Food and Agriculture Organization of the United Nations. The results so far are quite promising, and further analysis is being undertaken this fall (2014).
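To give a flavor of the score computation step, here is a minimal sketch of how a food-group reference list can drive a dietary diversity score: each reported food is looked up against the reference list, and the score is the count of distinct food groups consumed. The food names, group labels, and function below are illustrative assumptions, not the project's actual reference list or code.

```python
# Illustrative subset of a food-to-food-group reference list in the spirit of
# the FAO dietary diversity guidelines. The entries here are examples only.
FOOD_GROUPS = {
    "rice": "Cereals",
    "sorghum": "Cereals",
    "lentils": "Legumes, nuts and seeds",
    "milk": "Milk and milk products",
    "spinach": "Dark green leafy vegetables",
    "mango": "Other fruits",
}

def dietary_diversity_score(foods_reported):
    """Count the distinct food groups represented in a recall list.

    Foods missing from the reference list are returned separately so they
    can be reviewed and coded by hand rather than silently dropped.
    """
    groups = set()
    unmapped = []
    for food in foods_reported:
        group = FOOD_GROUPS.get(food.strip().lower())
        if group is None:
            unmapped.append(food)
        else:
            groups.add(group)
    return len(groups), sorted(unmapped)

score, unknown = dietary_diversity_score(["Rice", "milk", "spinach", "okra"])
# score counts 3 distinct groups; "okra" is flagged for manual coding
```

Keeping unmapped foods visible, rather than discarding them, mirrors the kind of coding discussions the team held when deciding how to notate and classify responses.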
Challenges thus far…
A central challenge during the data analysis process has been managing seasonality effects. The MNDA pilot was conducted at the end of the dry season, but we were comparing our scores with ICRISAT data collected at a different point in time. Because production and consumption shift with local weather patterns, household and individual dietary diversity scores can also fluctuate. Updated ICRISAT intensive nutrition survey data collected during the same period as the MNDA will be analyzed once they are available. This should shed further light on how accurately the MNDA dietary diversity methodology captures these scores with a rapid assessment model.
Other operational challenges surfaced in analyzing the survey information after returning from the field. Considerable capacity was needed not only to create the data entry instruments, but also to hold numerous meetings on variables and coding, data collection and notation methods, and score computation. This was a tremendous effort, and it is safe to say that the entire team learned an incredible amount from participating throughout the process and seeing it through the data collection and analysis phases. We are excited to continue working on the project this fall, and are eager to share our results!