Using the In-Service Teacher Training Survey Instrument (ITTSI)

I am currently using the In-Service Teacher Training Survey Instrument (ITTSI), by Evans et al., as a checklist of program details to include in a paper draft. I aim to include an appendix that runs through these details in a more succinct way, which has value for systematic reviews and other assessments of education programming.

Since I have spent some time going from the screenshots included in the original paper to a table that I can fill in, I decided to spare teams that time in the future. You can download the table for yourself and fill it in for your specific program. Feedback and comments are welcome.

https://docs.google.com/document/d/18WYIxa0WHnIzSanjoeSgPVETDwdwjekAiccP-o9qKlA/edit?usp=sharing

Country ownership: concept review and building

Work-in-progress!

This post was created as a commitment device. It will support a paper (built on my thesis) I am working on. I will review existing literature (broadly defined) on the idea of country ownership and create a database of what I find here.

A chronological launch pad will be the 2008 Accra Agenda for Action, following on the 2005 Paris Declaration on Aid Effectiveness. In this update, the authors provide some (weak) parameters for country ownership, stating that it requires “countries [to] have more say over their development processes through wider participation in development policy formulation, stronger leadership on aid co-ordination and more use of country systems for aid delivery.”

One of the most compelling quips I have seen on country ownership recently is this important reminder on Twitter: “Country ownership is not yours to ensure, and you certainly cannot confer it–countries are ultimately responsible for the health of their populations.”

I won’t be adjudicating between these two, though my sympathies are with the latter. My idea is to consider what country ownership means at multiple phases of the ‘classic’ policy process and to see what ‘the literature’ says about each. For example, does a policy need to go through legislative processes to be owned? Some of the phases I have in mind (while recognizing these are not always linear or discrete) are:

  • Agenda-setting / prioritization
  • Design / formulation
  • Adoption / formalization
  • Implementation
  • Evidence-gathering & decision-making to continue, modify, scale-up, or terminate

Ultimately, I will be considering what happened at each of these stages in Ghana with regard to the Affordable Medicines Facility-malaria (AMFm) adoption and implementation. Adoption was a big game of “not it!” by Ghanaian stakeholders and I will unpack what we can learn from this about ownership. Implementation, however, was a different story. I think we’d be hard-pressed to say it wasn’t Ghana-led.

Reporting qualitative work (with skeptical readers in mind)

Inspired by a referee report I completed recently, I decided to share my thoughts on reporting qualitative results. These aren’t codified norms, but they reflect my understanding of qualitative (largely interview and observation) data collection and analysis, and personal (but, I think, reasonable) preferences. By commenting on report writing, I am obliquely commenting on collection and analysis processes too. I welcome you to contribute.

The caveats: I largely speak about qual in the context of public health and global development. I make no claims about the approach of historians and other scholars who work primarily with texts (though I welcome input from all scholars). I also focus mostly on interview data. So, let’s begin.

A key goal of any report, quant or qual, is to give readers the tools to understand your data and analysis, and then to consider your interpretations in light of strengths and gaps in data collection and analysis. My suggestions are to this end.

  • Help your reader situate your study in time. You can do this with a timeline figure showing your data collection, the policy or phenomenon of interest, and other relevant contextual events. I shouldn’t have to doodle a timeline on scrap paper to make sense of your data and what they capture. Create an easy way for your reader to understand your timeline.

  • Make the basis of your research clear. Make your research questions clear. If you have hypotheses or priors, even if you aren’t formally testing or updating them, make them clear. If you were guided by a theory or framework, make that clear. If you went in tabula rasa and totally exploratory, make that clear.
  • Choose the right kind of data. Qualitative data comes in four main flavors: text (including interview notes & transcripts), speech, observations, and images. State the type(s) you have and why that was the appropriate type or combo given your research questions.

  • State the rationale behind your site selection and sampling clearly. Site selection and sampling are important parts of credible qualitative work – you need people to believe that you went to the right places, talked to the right people, read the right things, and observed the right things to make claims about them.
  • Explain and justify how you chose your study sites. It is insufficient to say they were chosen ‘purposively’. What was the purpose? And how did you apply it? How do the selected sites fit into the universe of possible sites? Why should we think that your sites are sufficient, and what might have been missed? There is a certain brand of researcher who equates purposive sampling with convenience sampling. Don’t be that researcher.
  • Clarify how your sites relate to one another. Are they similar or different and, if so, how and why? How does this help us understand your research?

  • Use the above points also to explain your sample selection. Yes, theoretical or purposive sampling may be the most appropriate strategy for your sample, but you need to say why and to justify the sufficiency of your sample composition and size, ideally with reference to your research questions and hypotheses.
  • Describe what heterogeneous views or experiences were necessary to understand your phenomenon of interest. How well did you capture them? If there is a relatively small number of relevant stakeholders, did you speak with or observe all of them? If not, how did you fill in the gaps?
    • If the sample frame is much wider (e.g., not a small number of stakeholders involved in a particular decision), situate your sample in this wider context. Are they meant to be typical? Exceptional? (See: @evlieb’s nested analysis.)
  • State the consent you received and the compensation you provided for all primary data collected.
  • Explain your sample size. Not all qualitative researchers think thematic saturation is an appropriate goal, but some reference to saturation, redundancy, or other ways of showing that you have captured the relevant breadth and depth of views and experiences is important. Sometimes your sample will be partially determined by the budget. Be honest about this, but still do the work of laying out what you think can and cannot be learned from the sample you do have.
  • Explain the intention behind design choices. If you do something with intention, explain it. For example, if you segregate focus groups by men and women, explain why this was important given the context and/or topic in order to get high-quality data. Sometimes (gasp!) biological sex will not be the most important or relevant division to explore.

  • Discuss positionality. Especially for interview data, you need to discuss the positionality of those present vis-à-vis the respondent. Who was in the room (interviewer? notetaker? more?)? What benefits and drawbacks do the interviewers’ characteristics bring, in terms of helping people open up and be honest or inducing one or more forms of bias? These characteristics could be inherent (sex, accent) or intentional (clothing and transportation choices, for example).
    • If positionality could have induced bias, what steps were taken to mitigate this?
  • State how the interviews were recorded. Audio? Video? Was there a notetaker present? How were observations of context and non-verbal cues incorporated into the interview notes or transcripts for analysis?
  • For interviews, clarify how un-, semi-, or fully structured your guides were, and why. Why was that approach right given the skill of the interviewer, the number of interviewers, and the research set-up? If you don’t know the answer, I suggest you get your hands on Bernard’s Anthro Methods.

  • State the interviewers’ opinions on respondent truthfulness. How is this accounted for in the analysis?
  • Discuss the role of iteration between collection and analysis in your work. Were you able to debrief and reflect after each interview and make any implied adjustments to your guides or approach? Could you follow up with respondents?
  • Clarify the relation between data collectors and analysts. Is there new positionality to consider given who is doing the analysis (making meaning and deciding what is important)?

  • Describe your analysis process. Uploading data into qualitative software is not analysis! Deploying quotes-in-colorful-boxes (or in block text) is not analysis. Coding is not complete analysis – it means you have tagged and categorized your data to begin to make sense of it. Keep going.
    • You do not need to use quotes in full, offset as big blocks, to show that you did qualitative work. A quote is a data point. If you want to show one to illustrate a larger or unique point, make it clear that this is what you are doing.
    • You can also use smaller segments of quotes, integrated into the text, so that you use the respondents’ words without disrupting the flow of your results narrative. A parade of quotes will rarely stick with the reader.
    • Help the reader understand what you are trying to say. Is a code or a quote reflective of what many people said or did? Just one? What analytic value is it bringing? (A toy example of checking this appears just after this list.)
  • How do you build faith in your analysis? For example, did you use independent coders, discussions, and consensus-building? Did you engage in any member-checking or validation, such as taking preliminary analysis back to the respondents to see if they thought you were on the right track?
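
To make the point about codes and quotes concrete, here is a minimal sketch in Python of the kind of tally I have in mind. Everything in it (the respondent IDs, codes, and excerpts) is hypothetical; the point is only that a simple count over your coded segments tells you, and your reader, whether a code reflects many voices or just one.

```python
from collections import defaultdict

# Hypothetical coded segments: (respondent ID, code, excerpt).
# In practice these might be exported from your QDA software or notes.
segments = [
    ("R01", "stockouts", "The clinic ran out of test kits twice last month."),
    ("R02", "stockouts", "We often wait weeks for resupply."),
    ("R02", "user_fees", "People skip care because of the fees."),
    ("R03", "stockouts", "Supplies arrive late in the rainy season."),
]

# Count distinct respondents per code, so a quote can honestly be framed
# as typical ("many respondents noted...") or flagged as a lone voice.
respondents_per_code = defaultdict(set)
for respondent, code, _ in segments:
    respondents_per_code[code].add(respondent)

total_respondents = len({r for r, _, _ in segments})
for code, who in sorted(respondents_per_code.items()):
    print(f"{code}: {len(who)} of {total_respondents} respondents")
```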

Addressing courtesy bias

I wrote this post for IDinsight’s internal blog but decided to share it here to encourage contributions. It would be great if people weighed in with tactics they have tried!

Please share in the comments your experiences with courtesy bias and the tactics you have used (seemingly successful and unsuccessful) — including the ones here, which are not fool-proof.


Daily(ish) debriefs during qualitative work

I miss conducting interviews and data collection myself. One does get priced out of doing the things one likes!

One way that I remain connected to the research is by conducting frequent debriefs during the data collection (and early analysis) phase. Debriefing isn’t just self-serving; it improves the quality of the qualitative data collected and the subsequent analysis. Collecting data of any kind, but certainly qualitative data, can be draining, mentally and emotionally. Building in debriefs can help the data collector remember and reflect rather than just getting through the interviews. And an outside pair of eyes/ears can pick up additional details and threads for further exploration, as well as steer away from interesting but too-tangential topics.

After a particularly insightful debrief this morning, I quickly put together some thoughts about the process and why it is so important if you are conducting qualitative work.

There are a few hallmarks of a qualitative approach to research that make it important to check in regularly during the data collection (interviews, observations, focus groups) process in order to strengthen, and make more credible, the information being produced.

  1. The data collector and data analyst are often the same person or small set of people (which generally implies a higher skill level for a qual data collector than the quant counterpart).
  2. A qualitative approach to research is an iterative endeavor — this is a key distinguishing feature from a lot of large-n, more quantitative work. In quantitative work, we usually take as fixed all the items we will include in a questionnaire, which serves to enhance rigor and decrease bias. In qualitative work, we have a bit more flexibility and pursue rigor a little differently. This allows us to incorporate learnings from one day’s interview work into the next day’s interviews (or observations, or focus groups), as we continue to update our understanding. For example, we can incorporate into the interview guide a new probe that worked particularly well at eliciting a rich response; or, a new line of thinking might emerge that we want to explore with future respondents; or, we may notice a potential pattern emerging that we want to pressure-test the next day.

There are some important implications of these features.

  • First, it is very easy to get stuck in the weeds if you are the data collector, focused on scheduling the next interview and generally tired after so much human interaction, rather than taking time to reflect on what you have learned. It is not good to be mired in executing-but-not-reflecting-and-hypothesizing-and-adapting.
  • Second, you need to have time built into your schedule and budget to reflect and update your approach. This time can come between interviews during the day, as well as in time allotted between days of interviewing. This protected time can be used for transcription and expanding field notes, for reviewing these notes and transcripts, and for beginning early analysis (such as memoing and early coding). This helps to prevent many cases of “we should have asked that!” once we get back from ‘the field.’ By doing this as we go, it is easier to return for follow-up interviewing or at least to ask the rest of our participants some additional questions. It also helps us learn more from each interview.

To facilitate this reflection without getting stuck in your own head, it can be extremely valuable to have a debrief with someone else about what you have been learning and experiencing in the field, to start to make sense and meaning of what you have been hearing. It is often easier to commit to a call/meeting time than to deep-work, memoing time.

These debriefs — sometimes called peer debriefs* — can take the form of written answers shared with the larger team or can take place on phone calls or face-to-face with another team member (who did or did not participate in the interview or observation). Technology and time zones will determine what mode is most sensible; it may change over the course of a project. (Note that debriefs are also a great way to make sure that team members not ‘on the ground’ still feel involved in the project.)

These debriefs are not substitutes for early analysis or for another member of the team reviewing transcripts and field notes to provide feedback on: points that have been over- or under-emphasized, points where responses seem vague and more probing may have been required, or points the interviewer seems to have made unnecessary or unhelpful assumptions.

Below is the set of questions I am currently asking for reflection:

  • What was an interesting point that arose in the interviews today? Does this suggest an additional set of questions to ask in future interviews?
  • What surprised you most about the interviews today?
  • Did anything particularly challenging or difficult or unpleasant come up during the interviews today? Should we think of ways to mitigate these challenges in the future?
  • Are there any questions that don’t seem to be yielding useful information? Should we update or remove these questions?
  • Do any patterns stand out to you in responses (early coding and categorization)? Does this suggest any hypotheses that we may want to pressure test in future interviews?

*Some people argue that peer debriefs should not be conducted by a member of the research team, in pursuit of impartiality. Ideally, both someone from the research team and a neutral person outside the team can be reviewing data as it is coming in and asking questions to strengthen the research process.

A brief history of jerrycans

Thanks to dissertation-writing procrastination a few years ago, I watched all eight series of Foyle’s War in an absurdly short amount of time. This–perhaps embarrassingly–is how I became familiar with WWII (and probably WWI) slang, such as calling British soldiers ‘Tommies’ and Germans ‘Jerries’ (the reasons behind the German nickname are a point of speculation).

Perhaps this was not the best way to learn about military history. But it led me to take a second look at jerrycans on my most recent trip to sub-Saharan Africa (specifically in Lusaka). Were these containers named after Germans? (Short answer: yes.) Might they be engineering marvels with deep historical importance that I had never appreciated before? (Short answer: also yes.)

Perhaps ‘jerrycan’ is a word lots of you folks were already using for water containers and had already thought deeply about — but it is a word with which I became familiar only in Ghana, where the colorful containers are ubiquitous (this picture is actually from my recent trip to Zambia; the yellow can is the one I am talking about).

[Photo: Mtendere, Lusaka, 2017]

I associate the term with being used in Africa, being plastic, and with carrying water (as here). But the plastic containers are premised on a (revolutionary) German, pressed-steel, fuel-carrying model from 1937, nicknamed jerrycans by Allied troops. The originals were designed under secretive conditions to carry 20 liters, to require neither a funnel to fill nor a wrench to unscrew, to be stackable, and — strengthened and made flexible by their X-shaped indentation — to be durable beyond a single use. Evidently, Hitler already had thousands of these containers (Wehrmachtskanister) stockpiled by 1939.

While I think of jerrycans as being carried on the head (here’s a picture of a woman carrying 20 (!) this way), the design was intended to allow a soldier to carry two full or four empty cans – evidently quite an improvement on existing models (read here on “epic ergonomic failure”). The plastic version made a 1970 debut thanks to a Finnish engineer. They became commercially available shortly thereafter. In places like Africa (yes, broadly), they have provided a useful alternative to clay or metal pots for collecting and storing water and they are often a mainstay of NATO and UN efforts.

Apparently (note that much of this comes from Wikipedia) the jerrycan is quite an engineering marvel, and its history is exciting because of the extreme importance to the war effort of fuel and efficient (literally, non-leaky) fuel distribution, and because of the reverse engineering of the design by Allied forces.

In one iteration of the story (relayed on Wikipedia and, seriously, on jerrycan.com), the model was effectively smuggled out of Germany by American engineer Paul Pleiss, who drove from Germany to India in 1939 — with a German colleague and three cans in tow (it is not clear whether part of the goal of this trip was to smuggle the cans out or if that was incidental; it seems likely that they just needed water and, along the way, Pleiss realized the cans were kind of neat). The German revealed the specifications for the cans before being recalled to Germany by Göring. Pleiss managed to get the cans back to the US from Kolkata and sent one to the US Army at Camp Holabird (to do this without alerting anyone to what interested him, he had the whole car shipped back via Turkey, cans included). The US eventually redesigned the German model to suit its needs, poorly at first and better later in the war (this blog post appears to be written by a WWII vet and discusses how he and a colleague got the US Navy interested in the design, in 1942).

Meanwhile, the British apparently first saw the German design in April 1940 in Norway and realized it was superior to their thin tin or mild-steel ‘flimsies,’ which leaked and often only endured a single use (at least as a carrying device; much is made online of their being turned into Benghazi Burners).

In 1940, Pleiss sent one of his cans to London for them to copy and manufacture. Still, the flimsies were very much part of the British arsenal as the war opened in the North African theatre (in June 1940) and the British armies there and in Europe did what they could to get their hands on the preferable German models. It seems that the continued production of flimsies for British troops prior to 1942, even while the troops were witnessing the superiority of the jerrycans, was a source of bitterness and demoralization. After this, the British ramped up production and were the main suppliers for Allied troops. Evidently, President Roosevelt (in November 1944) attributed to the cans the speed with which Allied forces were able to move across France following the invasion at Normandy.

All of which is to say, if you find yourself looking at a plastic water container and thinking that it is just a boring old water container, you are quite wrong.

The art of managing direct reports

Before starting with IDinsight, I had only limited managerial experience, mostly managing field managers and survey teams. I certainly had not given much thought to management or how to do it in a meaningful way, beyond a general sense of wanting to get good work out of my team and keep its members happy. Nor had I really had a stellar experience of being managed. I had a great academic committee, for example, but I couldn’t hold that up as the best way to manage people. And I have had an array of other less-than-fun managers, including micro-managers and credit-stealers, in orgs that didn’t put much value on spending time and effort on becoming a better manager.

I found it quite daunting, then, to find myself with one–and later more–direct reports in an organization that takes management, feedback, and professional development quite seriously (one of the things that makes IDinsight a fab place to work). (Note that the folks I manage are a few years out of undergrad, sometimes with a one-year master’s degree.)

I certainly talked to my direct reports every day (when I first started, there were only six of us sitting in a very open office, so it would have been quite hard not to talk to my direct report!) but it wasn’t enough: it neither (intentionally) set him up well to reflect on his work and his growth, nor set me up well to do performance reviews and other necessary things.

Early in my tenure with IDinsight, I had a very positive experience of working through the book Managing to Change the World one-on-one with a colleague with more management experience inside and outside IDinsight. One of the big takeaways from the book and our conversations was the usefulness of setting aside explicit time/space for non-project check-ins, no matter how much you already talk about project work or chat informally. The book, my colleague, and other colleagues provided a lot of great ideas that I crowdsourced…

…and then I found myself totally overwhelmed by the prospect of trying to remember all those good questions. 

So, as has become my management habit/style over the past 1.5 years, I created a spreadsheet, with rows for topics on which I want to check in regularly and columns for each week. This functions as a semi-structured interview guide as well as a recording device for me, so that I can look at previous notes and patterns, and have notes on hand when I need to fill out performance reviews (I also find completing performance reviews very daunting and still struggle with recording peak areas and challenge areas on a daily or weekly basis!).
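
For illustration only, here is a minimal sketch (Python, with placeholder topics and dates) of how such a tracker could be generated as a CSV. My actual tracker is simply a spreadsheet I fill in by hand, so nothing below reflects the real topic list.

```python
import csv
from datetime import date, timedelta

# Placeholder check-in topics (rows); the real guide would use your own.
topics = [
    "Health, happiness, and motivation",
    "Two things that would make this week successful",
    "Toughest thing coming up this week",
    "Professional development: one new thing learned",
    "Upward feedback for me as a manager",
]

# One column per 'management Monday' for the next four weeks.
first_monday = date(2018, 1, 1)  # this date is a Monday; adjust as needed
weeks = [(first_monday + timedelta(weeks=i)).isoformat() for i in range(4)]

# Write an empty grid to fill in during each weekly check-in.
with open("checkin_tracker.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["Topic"] + weeks)
    for topic in topics:
        writer.writerow([topic] + [""] * len(weeks))
```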

At first, I worried it would seem too rigid if I had my laptop out during our chats and was obviously reminding myself of the questions I wanted to ask and typing notes. But overall, this seems not to have been a major problem. And, actually, I have heard from my direct reports that they like knowing some of the questions I’ll ask in advance, so that they can reflect on them before we meet.

The system is certainly not perfect and is continuously updated. But one good sign (I think) is that my direct reports have started using a similar system with their direct reports (Field Managers, who are awesomely permanent staff at IDinsight), using some of the questions that I generated and that they particularly like.

Although I created the initial set of questions, over the weeks and months some have been dropped and others have been suggested and added (e.g., “Heather, will you please check in with me about X each week to help hold me more accountable to it?”).

One thing I have definitely learned over the past 1.5 years is that attempting to be a good manager takes time (and effort). These non-project check-ins, which we do weekly on Monday mornings or early afternoons (I strive for ‘management Mondays’), take about an hour and usually result in about 15-30 minutes of follow-up work based on the conversation (sending a paper I promised during our check-in, delivering a shout-out to someone I’ve discovered helped on our project in some unseen way, etc.). I’ve had to learn to budget in that follow-up time so that the rest of my schedule isn’t thrown off.

Below are the questions I currently aim to ask every week, though we don’t always make it through all of them (which I think is OK); as noted above, the list should be considered a semi-structured interview guide to help have a conversation on important topics that can easily be elided in favor of easier small talk or a strict focus on the project work. As much as possible, my goal is to help raise important issues — but give my direct reports a chance to come with ideas of how to tackle those issues. Theoretically, this is good professional development for them and, ideally and honestly, a little less work for me!

Hope this post is useful to some other folks who are attempting to figure out how to be a good manager, despite not necessarily having had one before or not having worked in an organization that took it seriously before. The questions below reflect IDinsight’s focus on professional development, organizational contribution, mission-focus, and values — as well as things stolen from the aforementioned management book.

Feedback is, as always, welcome — particularly if you try out this list of questions and want to share your experience or, especially, if you have a go-to check-in question that works well for you!

  • Are you keeping healthy/well?
  • Are you happy and motivated? Really? Why/not?
  • What is the best thing that happened over the weekend?
  • What are two things that will make this a successful week for you?
  • What’s the toughest thing coming up this week? Anything that would make you feel more confident about tackling it?
  • Is there one tweak we could make that would make this a more manageable week?
  • Did it feel like you had roughly the right amount of work / working hours last week? Does it feel like you are working at a sustainable pace? On sustainable topics? (What would be sustainable to you?)
    • Reflecting back on last week, how many nights, if any, did you lose sleep over work?
  • Last week you were worried about [X] as the toughest thing coming up. How did it go? What worked and what could have been improved?
  • What is one new thing you learned or did last week related to professional development? (Here I sometimes probe for lessons related to written and oral communication, management, and organizational development — folks have a gut tendency to equate professional development only with coding, even though technical proficiency is only one of our six performance review categories.)
  • What is one new thing you learned or did last week related to technical professional development?
  • Did this week reveal any gaps in your technical or professional development? Any thoughts on how we might address these?
  • What is one professional development task or goal you feel you did not achieve last week? How might you be able to fit it in in the following week?
  • Did you try any new strategies to manage your time or engage in deep work last week? Were these successful? Why or why not? What might you try this week?
  • What was the most interesting or helpful piece of feedback or advice you received last week?
  • Which aspect of your work gave you the most joy or fulfilment last week?
  • Is there anything I could have done this past week to better support or coach you? What could I have done differently?
  • Do you feel like there is anything else you could be contributing to organizational growth and development that you are currently not (but would be interested in doing)?
  • Do you feel like we influenced any decisions, actions & social impact last week? What worked or didn’t in that process?

Checking-in during “field” work

It is not easy to transition from ‘researcher doing-the-data-collection’ to supervisor of this sort of work, sending someone else to do it. That is especially true when doing the work yourself is part of the reason you got into the business in the first place (perhaps especially so for qual researchers?).

Even though I check in by phone at least once a week and text/slack often with my direct reports, it still feels insufficient. Those interactions focus on only one or two urgent topics. Phone calls can also be hampered by poor connections, background noise, and other distractions. Meanwhile, the weekly written updates I receive often focus too squarely on just-the-numbers, showing me what activities and outputs happened. I get tables of households or schools reached for first interviews or spot-checks relative to our goals. Important–but I want more!

Reporting only activities and outputs achieved means I often miss some of the really important stuff (whether important to me, to my direct report, or both). I also fail to get a flavor of ‘the field,’ with implications, inter alia, for later analysis and sense-making.

Structured written notes from my direct reports also mean they get limited practice / professional development in writing field notes, which my background in anthropology deems imperative for proper social science. (I should note that while I have tried to make a good practice of writing field notes myself, many bosses never seemed interested in reading them. This was always a bummer. However, my parents have been avid readers.)

Finally, vocal check-ins can make it difficult for my direct reports to provide tough upward feedback, the kind of feedback that can improve how I support them and the project.

For all these reasons, I am going to trial a system of more (semi-)structured written feedback during data collection. Note that this does not replace our informal chats and texts, nor our weekly non-project check-ins. However, I hope that it:
(a) Pushes my direct reports to create time/space to reflect on the data collection week and
(b) Lessens feelings of loneliness and under-appreciation while in ‘the field.’

Through a lot of internal crowd-sourcing from IDinsight folks, I have created a new system (one of the great things about working at IDinsight is how seriously we take good management, our values, and our mission — so people are brimming with good ideas!). The ‘system’ is a spreadsheet with questions/topics for my direct report to reflect on, a weekly column for them to fill in, and a column for my responses. For now, I have budgeted 30 minutes to review and respond to this weekly. I hope the sheet will keep things organized and make it easier to spot trends in the field team’s ‘mood.’

I will provide an update on this, but in the meantime I welcome thoughts from everyone.

One idea from my colleagues that I have not decided to take up yet was a daily written update focused on three key reflective questions, to be answered at the close of the day; hopefully it wouldn’t add too much time and effort to what is already usually a long and tiring day. The key reflective questions were:
1. What did I do today?
2. What do I plan to do tomorrow?
3. What am I concerned about /what challenge am I confronting?

For now, I am focusing on the weekly written update. It is longer and will hopefully provide an opportunity to share a holistic picture of the field experience. This is still a beta list, so we’ll see how it goes. I am encouraged by the early, enthusiastic crowdsourcing from IDinsight’s leadership and associates.

Bearing the beta-ness in mind, please find the inaugural questions below, and please share any questions you have found powerful for creating good fieldwork and happy colleagues. Of course, in some weeks, the answers to these questions may be “N/A.” The benefit of keeping them anyway is to show that I care: I am always open to hearing if something went wrong or was scary or troubling in the field.
1. How is team morale (provide at least one piece of evidence to support that claim)?
2. How is your morale and energy? Roughly how many hours did you work last week and how do you feel about that? What is one non-project/work thing you were able to do this week?
3. Provide a brief update on your own and your field manager’s professional development — what is something new learned on-the-job this week and/or a gap in (tech or soft) skills that we should address?
4. What is one thing the field managers did exceptionally well this week? Did either of you face any specific challenges?
5. Across the whole field team (from enumerators on up), what is one example of an organizational value-in-action that you have seen or heard about?
6. What is the most inspiring thing that happened this week (that you saw or heard from the field team) that relates to our mission of driving action and social impact?
7. What was the funniest thing that happened this week (that you saw or heard from the field team)?
8. What is your favorite photo you or a member of the team took this week? Please share and explain why it was your favorite.
9. What was the most interesting thing that happened in the field this week (that you saw or heard from the field team)?
10. What was the most disappointing and/or scariest thing that happened in the field this week (that you saw or heard from the field team)? (If scary, does any action need to be taken?)
11. How was respondent and gatekeeper/stakeholder morale this week? (h/t @urmy_shukla for this idea!)
12. What is one mistake or misstep that happened in the field this week? What changes will you make to help guard against it happening again?
13. What is one change, if any, the field team will make before starting work next week?
14. Any budget surprises this week?
15. Any implementer interactions or observations about which we should be aware? How is the morale of the implementation team?
16. Are there any things that [we, supervisors] need to do or prioritize in the coming week to support field work?

A small research agenda on political understanding in the U.S: a proposal

In recent reading, like Strangers in their own land and The politics of resentment, three key themes keep coming up. To me, these themes suggest a possible research or journalistic agenda. The themes are:

  1. An overestimation of how many people work for the government in different forms and at different levels, how competently and efficiently they do their jobs, and what benefits they receive in exchange.
  2. A (continued) misrepresentation of who receives different types of benefits from the government and whether they are ‘deserving,’ where desert is tied directly to conceptions of ‘hard work’ and whether or not they perform it.* An agenda here could also showcase the normalcy and neighborliness of such folks.**
  3. A misunderstanding or willful forgetting of the ‘good’ things that the government does in one’s everyday life (and how the gov could do it better). This could include, for example, a sense that ‘folks like me’ don’t benefit from the ACA and other government efforts big and small — which may suggest ways forward on mitigating such views.

*Note that in Politics of resentment, at least, it becomes clear that hard work is often equated with physical/manual work. This is another avenue for possible exploration.

**This is, of course, linked to (a need to contrast) media portrayals of folks with whom you don’t necessarily interact in person on a day-to-day basis. See, e.g., Aziz Ansari’s Muslims-eating-nachos pitch. Also, this excerpt from Strangers in their own land:

The stock image of the early 20th century, the ‘Negro’ minstrel, a rural simpleton, the journalist Barbara Ehrenreich notes, has now been upgraded, whitened, and continued in such television programs as Duck Dynasty and Here Comes Honey Boo Boo. ‘Working class whites are now regularly portrayed as moronic, while blacks are often hyperarticulate, street smart… and rich’ (p 156).

Packard on History of Global Health: devastating first chapter (we need to do better)

While I intend to write more about Packard’s new book (delightfully, if uncomfortably, subtitled Interventions into the Lives of Other Peoples) once I am through with it, a paragraph in the opening chapter seemed both so important and so accurate as to merit sharing immediately — particularly given the lessons it may hold for, e.g., the Universal Health Coverage movement. It is not that what Packard has to say here is necessarily new but rather that he sums it up in a neat, indicting list of trends on which we would all do well to reflect:

There have been remarkable continuities in how health interventions have been conceived and implemented over the past century… [that] have worked against the development of effective basic-health systems to address social determinants of health:

  1. Health interventions have been largely developed outside the countries where the health problems exist, with few attempts to seriously incorporate local perspectives or community participation in the planning process…

  2. Health planning has privileged approaches based on the application of biomedical technologies that prevent or eliminate health problems one at a time.

  3. Little attention has been given to supporting the development of basic health services.

  4. The planning of health interventions has often occurred in a crisis environment, in which there was an imperative to act fast. This mindset has privileged interventions that are simple, easy to implement, and have potential to quickly make a significant impact…

  5. Global health interventions have been empowered by faith in the superiority of Western medical knowledge and technology…

  6. Health has been linked to social and economic development — but this connection has focused primarily on how improvements in health can stimulate economic development, while ignoring the impact that social and economic developments can have on health. The social determinants of health have received little attention.

Packard notes that these trends have faltered a few times, such as with the rise in interest in learning about the “social and economic causes of ill health” in the 1920s and 30s and in the Alma Ata / health for all movement at the end of the 1970s. We seem to think of ourselves as standing at a new trend-breaking moment. Hopefully we can do better.
