Thursday, 24 April 2014

What to do when you get results that don't make sense

A few recent conversations, along with this blog post on list experiments by Andrew Gelman, have made me think about the nature of the file drawer problem.

Gelman quotes Brendan Nyhan:
I suspect there’s a significant file drawer problem on list experiments. I have an unpublished one too! They have low power and are highly sensitive to design quirks and respondent compliance as others mentioned. Another problem we found is interpretive. They work best when the social desirability effect is unidirectional. In our case, however, we realized that there was a plausible case that some respondents were overreporting misperceptions as a form of partisan cheerleading and others were underreporting due to social desirability concerns, which could create offsetting effects.

and Lynn Vavreck:
Like the others, we got some strange results that prevented us from writing up the results. Ultimately, I think we both concluded that this was not a method we would use again in the future.

Many of the commenters on the blog said that the failure to publish these results reflected badly on the researchers, and that they should publish their quirky results to complete the scientific record.

Both of these examples, as well as many other stories I've heard, make me think that the major cause of the file drawer effect in social science is not null results but inconclusive, messy and questionable results. The key problem arises when an analysis produces a result that makes you reassess some of the measurement assumptions you were working with: a secondary correlation with a demographic variable comes out in an unexpected direction, say, or the distribution of responses is bunched up in three places on a 10-point scale.
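
To see how Nyhan's two complaints (low power and offsetting misreporting) play out, here is a minimal simulation sketch of a list experiment. This is my own illustration, not anyone's actual study; the prevalence and misreporting rates are arbitrary assumptions.

```python
# Minimal list-experiment simulation (illustrative parameters only).
import numpy as np

rng = np.random.default_rng(0)
n = 1000                 # respondents per arm
true_prevalence = 0.20   # true rate of the sensitive item
p_under = 0.30           # holders who hide it (social desirability)
p_over = 0.06            # non-holders who claim it (partisan cheerleading)

# Control arm counts 4 innocuous items; treatment arm adds the sensitive one.
control = rng.binomial(4, 0.5, n)
treatment = rng.binomial(4, 0.5, n)
holds = rng.random(n) < true_prevalence
reports = np.where(holds,
                   rng.random(n) > p_under,  # some holders hide it
                   rng.random(n) < p_over)   # some non-holders claim it
treatment = treatment + reports

# Difference-in-means estimator of the sensitive item's prevalence.
est = treatment.mean() - control.mean()
se = np.sqrt(treatment.var(ddof=1) / n + control.var(ddof=1) / n)
print(f"estimate: {est:.3f} (SE {se:.3f}); truth: {true_prevalence}")
# The opposing errors partly cancel (hiding the bias), and the SE is large
# relative to plausible effects -- a messy, inconclusive result.
```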

The problem comes down to this: if I design a survey or other study to test an empirical proposition, that study is unlikely to be well designed for testing the validity of the measures involved or for understanding how design effects are affecting them.

The results of a study designed to test an effect are often enough to cast doubt on validity, but rarely enough to demonstrate a lack of validity convincingly (i.e. to publishable quality). The outcome is that the paper can be written up either as a poor substantive article (the validity of the measures is in doubt) or as a poor methodological article (the evidence about the measures' validity is weak either way, because the study wasn't designed to test it).

One answer to this is to do more pre-testing. This can help establish the validity of measures before working with them, and can certainly identify the most obvious problems. However, unless the pre-test is nearly as large as the actual sample, the correlations with other variables won't be particularly clear in advance. In addition, pre-testing won't help you understand design effects unless it tests different design combinations.
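
To put rough numbers on this point: the sampling error of a correlation shrinks only with the square root of the sample size, so a modest pre-test pins a secondary correlation down very loosely. A back-of-the-envelope sketch (my own, using the standard Fisher-z approximation):

```python
# Approximate 95% CI for a sample correlation via the Fisher transform,
# whose standard error is roughly 1 / sqrt(n - 3).
import numpy as np

def correlation_ci(r, n, z=1.96):
    fz = np.arctanh(r)
    half = z / np.sqrt(n - 3)
    return np.tanh(fz - half), np.tanh(fz + half)

# A 100-person pre-test that observes r = 0.2 barely rules out zero...
print(correlation_ci(0.2, 100))    # ~ (0.00, 0.38)
# ...while a full-sized sample makes the interval usefully narrow.
print(correlation_ci(0.2, 2000))   # ~ (0.16, 0.24)
```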

Pre-testing only goes so far, though. What is really needed is whole studies devoted to examining design effects experimentally and establishing the measurement properties of these instruments. Until that happens for methods such as list experiments, researchers will be stuck with results of questionable validity that are hard to publish as either good empirical or good methodological pieces.

A more radical approach would be to encourage journals for ideas that didn't quite work out: short research articles explaining why an idea should have worked nicely but ended up a damp squib. These would be useful for meta-analysis of why certain techniques are problematic in practice, without imposing the same writing-up burden as a full methodological piece.


Wednesday, 12 February 2014

Something every multi-national survey should do

[Table from the Afrobarometer website]

This table is from the Afrobarometer website. You would think that every cross-national survey would have a page like this, but I've never managed to track one down for some surveys I'll not mention, while others hide it so effectively it might as well not be there.

Your site can have as much on it as you like, but before anything else, the following three things should be linked prominently from the front page:

  1. A table like the one above
  2. Links to the original questionnaires for each country
  3. Links to the raw data files, preferably merged as completely as possible


Monday, 3 February 2014

When working in different time zones is a good thing

I've recently been working with a team in a time zone seven hours away from mine. While this kind of collaboration can often be a frustrating process of missed meetings and miscommunications, it has actually been surprisingly effective. In fact, the vast majority of the difficulties we have encountered have come from being unable to hold extended face-to-face meetings, and given that everyone involved is spread across four locations, those meetings would have been rare even if we were all in the same time zone.

So why has the time difference worked well in this instance? I think part of the success is due to the type of work we are doing: writing a document together, which requires revisions and reactions from the other team members. Generally, reading, commenting on and revising the document takes around three hours.

However, as anyone who has worked with multiple contributors knows, several people commenting at the same time produces a tangle of incompatible suggestions and versions. You either have to deal with the mess of reconciling those versions, or wait for each person to finish their changes before someone else starts.

Being in different time zones actually improves this process because each side of the team tends to wake up to a new draft from the other side without having to spend working time waiting for it to arrive.

The key factor here is that the ideal length of independent work time (on this project) is fairly close to the size of the time zone difference. If the ideal independent work time were a lot shorter, there would be large stretches of underutilised time spent waiting for the other side to do their work; in that case, being in the same time zone would work much better.

The stretches of a project where long meetings are necessary fit better within a single time zone, as they involve a continuous back-and-forth conversation that slows down dramatically if each message has to wait a day for a reply.

Similarly, if the ideal length of independent work time were a lot longer than the time zone difference, the time zones wouldn't matter at all. For instance, the time zone difference between me and the journals I submit to is not at all relevant, because the conversation with editors and reviewers takes place on a scale of weeks and months.
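
Here is a toy model of that trade-off. It is entirely my own sketch: made-up 9-to-5 schedules, revisions allowed to spill past the end of the workday, and an arbitrary three-hour task.

```python
# Two teams hand a draft back and forth. Each revision takes `task` hours;
# work resumes at 9:00 local if the draft arrives outside 9:00-17:00.
def simulate(offset, task, handoffs=4):
    t = 9.0              # UTC clock (hours); team A starts drafting on day 0
    zones = [0, offset]  # UTC offsets for teams A and B
    side = 0             # which team currently holds the draft
    for _ in range(handoffs):
        local = (t + zones[side]) % 24
        wait = (9 - local) % 24 if (local >= 17 or local < 9) else 0
        t += wait + task                  # revise, then hand over
        side = 1 - side
        arrival = (t + zones[side]) % 24
        print(f"  draft lands at {arrival:04.1f} local for team {'AB'[side]}")

for offset in (0, 7):
    print(f"time zone offset: {offset} hours")
    simulate(offset, task=3)
# With a 7-hour offset, drafts land at 19:00 or 05:00 local -- each side wakes
# up to a new version. With no offset, they land mid-workday, forcing waits.
```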

So what could this anecdote mean for the wider world? It might suggest that information industries will be spaced around the world according to the length of the tasks required at each stage of a process. For instance, translation or editing agencies would be well placed several hours away from the sources of demand for their services: an Indian editing firm might be able to offer overnight editing to US clients.

I'd be interested to hear if anyone has looked into this more generally or if this mechanism holds outside of the particular case here.


Caveats:
  • I'm explicitly saying that the success of cross-time-zone working depends on the type of task being performed and its requirement for parallel versus serial processing. I don't think time zone differences are preferable in all cases.
  • As usual, this is written as a piece of speculation to open a debate, and is clearly not a piece of carefully researched academic writing.

Friday, 10 January 2014

Is it really that much more expensive to live in London versus Los Angeles?

An interesting project has been popping up on my news feed today: a new tool from Expatistan comparing the cost of living in different cities (mainly from the perspective of the footloose professional class).

Comparing the two cities I've lived in most recently gave an interesting result:

[Expatistan cost-of-living comparison: London vs Los Angeles]

The site says it's designed as a tool to help you decide between job offers around the world: you know your potential salaries and want to know which will give you a better standard of living.

The standout category in this comparison is transportation, with Los Angeles 54% cheaper than London.

This is calculated by averaging the percentage price differences for the following items (a rough sketch of the arithmetic follows the list):
  • Volkswagen Golf 2.0 TDI 140 CV 6 vel. (or equivalent), with no extras, new
  • 1 liter (1/4 gallon) of gas
  • Monthly public transport ticket
  • Taxi trip on a business day, basic tariff, 8 Km. (5 miles)
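
Expatistan doesn't publish its exact formula, so the following is only a guess at the arithmetic: a simple average of the per-item relative price differences. The monthly ticket figures are the ones quoted below; the other prices are placeholders, so the output won't reproduce the site's exact 54%.

```python
# Rough sketch of a category index: average the relative price differences.
prices = {                    # item: (London GBP, LA GBP-equivalent), made up
    "VW Golf":        (20000, 22000),
    "liter of gas":   (1.30, 0.58),
    "monthly ticket": (127, 41),
    "taxi, 8 km":     (18, 12),
}

diffs = [(lon - la) / lon for lon, la in prices.values()]
print(f"Los Angeles comes out {sum(diffs) / len(diffs):.0%} cheaper")
```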

The car is actually 10% cheaper in London than in Los Angeles. However, London's petrol/gas is 55% more expensive, which is a fair reflection of the much higher fuel tax in the UK.

It is the monthly public transport ticket where the difference really kicks in: LA's monthly ticket comes out as the equivalent of £41, compared with £127 for London's.

But that difference isn't an arbitrary choice by Transport for London. It actually represents the best reason to prefer London to LA. In most of London you can take public transport almost anywhere in fairly little time. Spending £127 means you don't need to buy a Volkswagen Golf or a liter of gas at all; it's rarely worth owning a car if you live in central London.

In LA, public transport is simply not a substitute for car ownership: the £41 doesn't pay for anywhere near as much transport.

And that's not even getting into the largest single expense for many US households: healthcare.

Saturday, 4 January 2014

Why emotional intelligence leads to poorer job performance: a hypothesis

The Atlantic has an interesting summary of the recent literature on the negative effects of emotional intelligence. Essentially, emotional intelligence not only allows for better interpersonal relations and cooperation, but also confers a greater ability to manipulate others.

One of the most interesting examples the article gives is that in non-emotional work (data analysis or car repair, rather than counselling or teaching) there is actually a negative correlation between emotional intelligence and job performance (see here for the review article). The Atlantic article proposes that emotional intelligence distracts people from their work in these kinds of jobs: people spend their time reading their colleagues rather than their spreadsheets.

I have an alternative explanation that should probably be considered. While emotional intelligence may not make you better at low-emotion jobs, it probably makes you more likely to be hired or promoted, conditional on prior job performance (because bosses like you, or because you interview well). If that is the case, the negative correlation is simply the result of selection into jobs on the basis of emotional intelligence.

Essentially, a person with low emotional intelligence needs to be better at their job than a person with high emotional intelligence to be hired for the same position. This certainly fits my anecdotal observations better than the distraction story (surely people with more emotional intelligence need to expend less energy reading those around them).
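
This is a standard selection story, and it is easy to simulate. A quick sketch (my own illustration, with arbitrary parameters): emotional intelligence and skill are independent in the applicant pool, but hiring on their sum makes them negatively correlated among the hired.

```python
# Selection at hiring induces a negative EI-skill correlation among hires.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
ei = rng.standard_normal(n)      # emotional intelligence
skill = rng.standard_normal(n)   # actual job skill, independent of EI

print(f"applicant pool: corr = {np.corrcoef(ei, skill)[0, 1]:+.2f}")   # ~ 0

hired = ei + skill > 1.5         # interviews reward EI as well as skill
print(f"hired only:     corr = {np.corrcoef(ei[hired], skill[hired])[0, 1]:+.2f}")  # ~ -0.4
# If performance on the job depends only on skill, measured performance will
# correlate negatively with EI among employees, even though EI is harmless.
```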

This hypothesis is also compatible with the finding that emotional intelligence is associated with better performance in emotional work. There, emotional intelligence is a good signal of job performance (indeed, it may be better than formal indicators), so promoting someone on that basis probably improves the job/employee fit.

I've not read the literature in much depth so I'd be interested to hear if this hypothesis has been tested somewhere.


Saturday, 14 December 2013

Making bribe paying legal in India: how can we make it work?

Bribery is a huge problem in many countries, and India, the world's largest democracy, suffers particularly from this form of corruption. Many studies show that Indians routinely face bribe requests for services they are legally entitled to.

Kaushik Basu, former chief economic adviser to the Indian government and now chief economist of the World Bank, has made a radical proposal: make paying bribes legal, while keeping it illegal to request them.

He argues that criminalizing both sides of the transaction aligns the interests of the corrupt official and the bribe payer: neither wants to report the transaction, because both would suffer. In fact, Basu goes further and suggests returning the bribe to the payer if they report that it took place.

This final detail is important for incentivizing bribe payers to go to the trouble of reporting bribe takers. However, as Basu notes, it also creates a new incentive to falsely report bribes, which could produce a whole new problem of harassment of public officials and a court system too overloaded to deal with genuine claims of bribery.

I think there is a potential fix that would get around these problems: don't return the bribes, but do make failing to report a bribe an illegal act. This way we still create strongly divergent interests between bribe payer and bribe taker, without creating the perverse incentive to falsely report bribes.

The power of a "duty to report" law will depend partly on the likelihood of being caught. To increase this probability, I would suggest running a small number of highly publicised sting operations in which well-audited, video-recorded officials request bribes from members of the public. The mock bribe is returned in full, with a reward, if the person reports it; the person is prosecuted if they fail to do so.

The fear that any bribe request could be part of a sting operation would then skew incentives heavily towards reporting, since failing to report risks prosecution.
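
A back-of-the-envelope sketch of the bribe payer's calculation (my framing and numbers, not Basu's) shows why even a small chance of a sting could tip behaviour:

```python
# Expected cost of staying silent vs reporting, for the bribe payer.
def expected_costs(p_sting, fine, hassle_cost):
    silent = p_sting * fine     # prosecuted if this was a monitored sting
    report = hassle_cost        # time and trouble of filing a report
    return silent, report

silent, report = expected_costs(p_sting=0.02, fine=5000, hassle_cost=50)
print(f"stay silent: {silent:.0f}, report: {report:.0f}")
# Even a 2% perceived chance of a sting makes silence (expected cost 100)
# dearer than reporting (50), so a few publicised stings go a long way.
```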

See here for the Planet Money write-up:
http://www.npr.org/blogs/money/2011/04/01/135011027/why-paying-bribes-should-be-legal

and here for Kaushik Basu's paper
http://finmin.nic.in/workingpaper/act_giving_bribe_legal.pdf

Wednesday, 4 September 2013

An idea for a more useful Google Trends

I've written a couple of articles (the second is coming out in JEPOP some time soon, but I don't have a link yet) about the limitations of using Google Trends data for social science research. The major issue is that many more search terms seem like plausible measures than actually turn out to correlate with public opinion. Search terms don't even necessarily work across different countries that speak the same language!

As a result, any use of these trends has to go through the process of matching up the data to equivalent survey data before it can be used validly.

But what if we didn't have to do all that?

One reason I suspect Google Trends sometimes doesn't match up as well as we would hope is that it counts searches, not people. A handful of furiously searching journalists and politicos can drive the trend as much as widespread searching across the population, so an issue ignored by 99% of the population can still generate a lot of Google searching.

Google Trends, for example, tracks what percentage of all searches in the United States were for the term "Syria" on different dates (these percentages are then scaled to a 0 to 100 index, so we don't know the actual values).

This representativeness problem is easily solvable for Google: report trends in the number of people searching for a term instead of the number of searches. Google could offer the option of tracking the percentage of people using Google on each date who searched for "Syria". Even if a journalist searches for Syria a thousand times, they would only count as one person.

It's not hard for Google to distinguish different people either. While there are some complexities to tracking an individual over time, Google has been building profiles of its users for a long time, and even a measure of the number of unique IPs that searched for a term would go a long way towards solving this problem.
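
A toy illustration of the gap between the two measures (mine, not Google's actual methodology): a couple of obsessive searchers dominate the search-based share while barely moving the people-based one.

```python
# Share of searches vs share of searchers for one term, from a toy search log.
search_log = (
    [("journalist_1", "syria")] * 400 +                  # two heavy searchers
    [("journalist_2", "syria")] * 350 +
    [(f"user_{i}", "syria") for i in range(50)] +        # 50 ordinary searchers
    [(f"user_{i}", "other") for i in range(50, 10000)]   # everyone else
)

syria = [(user, query) for user, query in search_log if query == "syria"]
share_of_searches = len(syria) / len(search_log)
share_of_people = len({user for user, _ in syria}) / len({user for user, _ in search_log})

print(f"share of searches:  {share_of_searches:.1%}")   # ~7.4%
print(f"share of searchers: {share_of_people:.1%}")     # ~0.5%
```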

Having both measures available as options would give much greater insight into the breadth and depth of opinion on an issue.

It might even make offhand references to Google Trends as a proxy for public opinion a little more accurate.

Note: There are other reasons why Google Trends data might not match up to public opinion (see the papers), but this is certainly one major concern.