"The Polls Were Wrong" Is the Easiest Answer

Since Sunday's election I have been in the improbable position of defending Turkish pollsters. To be clear, I have had many, many quibbles with their lack of methodological transparency and the perception of bias it causes.

Unlike other forms of public opinion or policy research, however, election polling provides a day of reckoning: you're right or you're wrong. It’s there for everyone to see. If you can't get the election right, within the margin of error of your sample, no one should pay attention to your data. If you repeatedly get it right, you deserve a degree of credibility.

Here's the thing: several Turkish pollsters were pretty damn close to getting the 7 June parliamentary election correct. And when I say "correct" I mean "correct within the appropriate margin of error in the days immediately before the election." Therefore, their November election results, which missed the mark, should not be dismissed outright, especially since multiple pollsters reported similar results.

I'm going to digress a bit. It’s absolutely fundamental to understand the basic principles of probability sampling if you’re going to comment on pollsters’ performance.

A poll reflects voters' views on the days the survey was conducted, within a margin of error. Here's what that means: if you draw a truly random sample of, let's say, n=1000 people within a universe (Turkey, for example), and you write the questions and implement the survey in ways that diminish the biases inherent in survey research, and your sample reflects the demographics of your universe, your data will, within a standard margin of error of plus or minus 3.1 percentage points (characterized, shorthand, as MoE +/- 3), reflect the views of that universe on that day. That means any data point could be three points higher or three points lower. This is critical to take into consideration when declaring a poll "right" or "wrong" relative to election results or stating a candidate/party "is ahead in the polls."
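For the numerically inclined, here's a minimal sketch of the standard formula behind that +/- 3.1 figure (worst-case p=0.5, 95% confidence). The sample sizes are the ones you'll see in the tables below:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Half-width of the 95% confidence interval for a proportion,
    in percentage points, using the worst case p=0.5 by default."""
    return z * math.sqrt(p * (1 - p) / n) * 100

print(round(margin_of_error(1000), 1))  # 3.1 -- the MoE +/- 3 above
print(round(margin_of_error(3000), 1))  # 1.8
print(round(margin_of_error(5000), 1))  # 1.4
```

That's why n=3000 shows up as +/- 1.8 and samples near n=5000 as +/- 1.4 in the tables that follow.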

Two important takeaways:

• The only way to achieve a margin of error of zero is to interview every single person in the universe (Turkey, for example). That's impossible and is why we rely on probability sampling. The trade-off is we have to accept and accommodate the margin of error in our analysis. If we fail to do that, we're wrong. Period.

• Pre-election polls are not designed to project what will happen on election day (you can do that through modeling, but it's risky). This is why everyone (especially candidates who are about to lose) says the only poll that matters is the one on election day -- it's the only one that's a 100% accurate report of voters' views with no margin of error.

If you don't believe all this, go take a statistics class and then we'll argue about it. It's science, not magic. Also, please do not give me an exegesis on academic research. Like these pollsters, I work in the real world with budgets and time constraints.

So, let's look at the last public polls taken before the 7 June election. I chose these four because 1) fieldwork was conducted in the week or two before the election and 2) they shared their sample sizes, so we know the margin of error. (There may be others, but I found these data here). We want to look at polls conducted as close as possible to the election because they'll capture the effects of late-breaking campaign dynamics. (Also, not rounding is an affectation. I round. Personal opinion).

 

 

| Pollster | AKP | CHP | MHP | HDP | Sample Size | MoE | Date |
|---|---|---|---|---|---|---|---|
| MAK | 44 | 25 | 16 | 19 | n=2155 | +/- 2.1 | 18-26 May |
| SONAR | 41 | 26 | 18 | 10 | n=3000 | +/- 1.8 | 25 May |
| Gezici | 39 | 29 | 17 | 12 | n=4860 | +/- 1.4 | 23-24 May |
| Andy Ar | 42 | 26 | 16 | 11 | n=4166 | +/- 1.5 | 21-24 May |
| June Results | 41 | 25 | 17 | 13 | n/a | 0 | 7 June |

 

I draw two conclusions.

First, putting aside MAK, which overrepresented both AKP and HDP, SONAR, Gezici and Andy Ar were pretty damn close to the final result (by that I mean close to within the MoE), considering data was collected a week before election day.

Secondly, though it can be risky to compare data collected by different operations, their data are very similar, which suggests they are using similar methodology and making similar assumptions. That’s the way it should be.

Next, let's look at publicly released data for the November election. I borrowed most of these data from the delightful James in Turkey, and he did not always include the margin of error. I will take that up with him at a future date. Let's assume the pollsters that didn't indicate a sample size interviewed between n=3000 and n=5000 (that's what they did in June), so their margins of error will be between +/- 1 and +/- 2.

 

 

| Pollster | AKP | CHP | MHP | HDP | Sample Size | MoE | Date |
|---|---|---|---|---|---|---|---|
| Andy Ar | 44 | 27 | 14 | 13 | n=2400 | +/- 2 | 24-29 Oct |
| Konda | 42 | 28 | 14 | 14 | n=2900 | +/- 1.8 | 24-25 Oct |
| A&G | 47 | 25 | 14 | 12 | n=4536 | +/- 1.4 | 24-25 Oct |
| Metropoll | 43 | 26 | 14 | 13 | not released | n/a | 15 Oct |
| Gezici | 43 | 26 | 15 | 12 | not released | n/a | 15 Oct |
| ORC | 43 | 27 | 14 | 12 | not released | n/a | 15 Oct |
| AKAM | 40 | 28 | 14 | 14 | not released | n/a | 15 Oct |
| Konsensus | 43 | 29 | 13 | 12 | not released | n/a | 15 Oct |
| Unofficial Final | 49 | 25 | 12 | 11 | n/a | 0 | 1 November |

 

AKP's final number falls outside every poll's margin of error -- even A&G's 47, the closest, sits more than its +/- 1.4 below the unofficial 49. The next closest, Andy Ar, conducted the latest fieldwork, so it was in the best position to capture emerging trends, such as a surge in AKP support. It still underreported AKP support by five percentage points. That's a lot. A&G didn't release any tracking data, so it's hard to know whether it's an outlier or ahead of the others in capturing the AKP surge. The latter is possible, and I will address it in a future post.

If consistent sampling methodologies and questions are used, it's possible to track data over time to see how opinion changes. Big, unexplainable differences from one dataset to the next could indicate a problem in the methodology. I like it when pollsters provide election tracking data. It suggests sound sampling and alerts us to important trends in public opinion.
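If you want to put a number on "big, unexplainable differences," here's a rough sketch, assuming simple random samples: the noise in a wave-to-wave change is bigger than either poll's own MoE because the errors add in quadrature. The n=2900 is borrowed from Konda's final poll above, and the 41-to-42 move is purely illustrative:

```python
import math

def diff_moe(n1, n2, p=0.5, z=1.96):
    """MoE, in points, for the *difference* between two independent
    polls. Errors add in quadrature, so wave-to-wave wobble is
    noisier than either poll's own MoE."""
    return z * math.sqrt(p * (1 - p) / n1 + p * (1 - p) / n2) * 100

# Hypothetical: two n=2900 waves put a party at 41, then 42.
print(round(diff_moe(2900, 2900), 1))  # 2.6 -> a 1-point move is noise
```

By that yardstick, the wave-to-wave wobbles in the tables below are all within the noise.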

For fun, let’s take a look at two of those who did:

 

| KONDA | AKP | CHP | MHP | HDP |
|---|---|---|---|---|
| June 7 Results | 41 | 25 | 17 | 13 |
| 8-9 Aug | 44 | 26 | 15 | 13 |
| 5-6 Sept | 42 | 25 | 16 | 12 |
| 3-4 Oct | 41 | 29 | 15 | 12 |
| 17-18 Oct | 42 | 28 | 15 | 13 |
| 24-25 Oct | 42 | 28 | 14 | 14 |
| Unofficial November Final | 49 | 25 | 12 | 11 |

 

 

| GEZICI | AKP | CHP | MHP | HDP |
|---|---|---|---|---|
| June 7 Results | 41 | 25 | 17 | 13 |
| 3-4 Oct | 41 | 28 | 17 | 14 |
| 17-18 Oct | 41 | 27 | 16 | 13 |
| 24-25 Oct | 43 | 26 | 15 | 12 |
| Unofficial November Final | 49 | 25 | 12 | 11 |

 

Not only are these two pollsters consistent over time, they are also consistent with the final June results and compare favorably with each other. Nothing in either dataset suggests a big shift in opinion toward AKP (they do indicate an AKP trend, which is plausible). Yet, in the end, their last polls are wrong wrong wrong about the November result. That's really troubling.

How could pollsters who nailed it in June have missed it in November? How can they be consistent over time and with each other and be wrong on election day? Falling back on “the polls are wrong” as analysis is simply inadequate. If you’re going to disregard months of consistent data, you should provide an explanation for how it went wrong.

I honestly can't give an adequate explanation. Because I have other things to do and you have short attention spans when it comes to statistics, I will address what I think are the three most likely polling-error culprits in future posts. These include (in no particular order of likelihood):

• Errors in methodology (this will address the absurd argument that since UK and US pollsters were wrong, it follows that polls in Turkey are also wrong. I can’t believe this is even necessary)

• Errors in analysis (not reporting or considering undecideds or softening support, which is my current theory of choice; see the sketch after this list)

• Election dynamics that cannot be captured by polling
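To preview the second culprit, since it's my current theory of choice: how undecideds are handled can move toplines by a couple of points all by itself. A sketch with entirely invented numbers (no pollster's actual release), just to show the sensitivity:

```python
# All numbers here are invented, purely to show the sensitivity.
raw = {"AKP": 38.0, "CHP": 24.0, "MHP": 13.0, "HDP": 11.0}
undecided = 14.0  # undecided/refused, still sitting in the raw data

# Option 1: allocate undecideds proportionally (the common shortcut).
decided = sum(raw.values())
proportional = {p: v + undecided * v / decided for p, v in raw.items()}

# Option 2: assume undecideds break 60/15/15/10 toward the incumbent
# (a made-up split, not anyone's finding).
lean = {"AKP": 0.60, "CHP": 0.15, "MHP": 0.15, "HDP": 0.10}
leaned = {p: v + undecided * lean[p] for p, v in raw.items()}

for p in raw:
    print(p, round(proportional[p], 1), round(leaned[p], 1))
# AKP 44.2 46.4  <- the allocation rule alone moves AKP ~2 points
# CHP 27.9 26.1
# MHP 15.1 15.1
# HDP 12.8 12.4
```

Same raw data, two defensible analysis choices, two-point swing on the topline. More on this in that future post.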

 

NOTES: If you want to look at a few other pollsters' June data, here it is. I don't think it's totally fair to judge their accuracy based on data collected weeks before election day, but, with the exception of under-representing HDP, most of them (except MAK) actually are pretty close and provide more evidence of the consistency of public opinion. Being off on HDP can be forgiven because HDP had what campaign people refer to as momentum, and it is plausible HDP's support increased in the final weeks.

 

 

| Pollster | AKP | CHP | MHP | HDP | Sample Size | MoE | Date |
|---|---|---|---|---|---|---|---|
| MAK | 44 | 25 | 16 | 19 | n=2155 | +/- 2.1 | 18-26 May |
| SONAR | 41 | 26 | 18 | 10 | n=3000 | +/- 1.8 | 25 May |
| Gezici | 39 | 29 | 17 | 12 | n=4860 | +/- 1.4 | 23-24 May |
| Andy Ar | 42 | 26 | 16 | 11 | n=4166 | +/- 1.5 | 21-24 May |
| June Results | 41 | 25 | 17 | 13 | n/a | 0 | 7 June |

Want to Increase the Credibility of Election Polling in Turkey? Here's How

Remember that time you asked me how to increase the credibility of public polling in Turkey? No? Well, it turns out I have thoughts on the matter. Here they are.

Transparency, Transparency, Transparency:  This is the single most important factor. Given the amount of flawed data out there, every pollster who releases election polls publicly should voluntarily provide the following information for the sake of increasing public trust in the science. Reporters should ask for it. Not all of it needs to be reported by the media -- it typically isn't -- but it provides important information, especially to professionals and academics, about how data were collected and processed. Allowing outsiders to review and discuss the methodology increases the rigor of the research. Ultimately, the result is greater public confidence in polling data.

Here's the type of information that would be helpful. (The first four bullets should be reported in every media story that references a poll, without exception)

  • Sample size, sample type and universe: Here's an example: "A national (or urban, or 25-province regional) sample of n=2000 adults in Turkey over age 18." If the pollster diminished the sample to include only likely voters, he or she should explain how that determination was made.

     

  • Fieldwork Dates: Knowing when the data were collected provides important context about events in the political environment that could affect perceptions of the candidates (e.g., a deadly mine disaster or a huge corruption scandal). Fielding dates also tell you whether there was enough time to return to selected respondents who weren't available on the first try. With a large sample, a day or two in the field isn't enough for callbacks, so the data are biased toward those who answer their phones or their doors on the first try.

     

  • Margin of error for the sample as a whole: "The margin of error for the n=2000 sample is 2.19% at the 95% level of confidence. The margin of error for demographic and geographic subgroups varies and is higher."

     

  • Who's the Funder: This is critical information. Who's paying for a survey may impact the credibility of the data. It may not. But you have no way of judging if you don't know who's coughing up the dough. In the US, few pollsters would jeopardize their reputation for accuracy and reliability by swinging data in favor of a well-funded or powerful interest (some would and have, but it's an exception, not a rule), but revealing who's paying for the research is standard. Even if Turkish pollsters don't monkey with the numbers (and lots don't), the perception that pre-election polling is cooked is well-founded, pernicious and must be addressed if opinion research is going to be used as a credible tool for illuminating public policy debates and elections.

     

  • How the interviews were conducted and how respondents were selected: Were interviews conducted using face-to-face interviews? If so, how were respondents selected? Were the interviews conducted by telephone? What proportion of landlines versus mobile numbers was used? How many efforts were made to call selected numbers back if there was no answer? What times of day were interviews conducted?  If the answer is "online poll," step away from the story.

     

  • Response rates: What percentage of selected respondents participated in the survey? This varies depending on the country and, sometimes, the type of survey. The pollster should reveal what standard response rates are in Turkey for similar surveys. An abnormally high or low response rate should raise red flags.

     

  • Question wording and order: How a question is asked and where it appears in a survey directly affect responses. Respondents should not be "primed" to answer a particular way. Therefore, a vote preference question should be one of the first questions respondents are asked. The list of candidates should be presented exactly as names appear on the ballot, with no extraneous information provided that voters won't see when they enter the polling station. The percentage of respondents who answered "don't know" or "undecided" (a critical data point in election polling) should also be reported, along with whether the "don't know" response was prompted or unprompted.

     

  • Quality Control: How many interviews were verified in the field by supervisors or called back to make sure respondents really took the survey? I know it's hard to believe, but sometimes interviewers are lazy and fake interviews! Quality control is technical and time consuming, and it's a big part of why methodologically sound polling is expensive. Rigorous quality control by outsiders reduces the chances that data are falsified, especially in the processing phase, where someone *might* be tempted to place a finger on the scale. Opening data sets to outside scrutiny is a good way to expose and prevent this.

 

  • Sampling and weighting procedures: It's easy to baffle non-specialists with statistics but polling isn't rocket science and random sampling procedures are guided by industry standards. Pollsters should reveal if their samples are stratified and by what variables. They should share how sampling points were selected. They should also reveal if the final data were weighted and by what factors.
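To make that last bullet concrete, here's a minimal sketch of cell weighting on a single variable. The population shares and sample counts are invented for illustration, not actual census figures:

```python
# Invented shares for illustration -- not actual TUIK figures.
population_share = {"urban": 0.77, "rural": 0.23}
sample_counts    = {"urban": 1700, "rural": 300}  # hypothetical n=2000

n = sum(sample_counts.values())
weights = {g: population_share[g] / (sample_counts[g] / n)
           for g in sample_counts}

# Urban respondents count slightly less than one person each,
# rural respondents slightly more, so the sample matches the universe.
print({g: round(w, 2) for g, w in weights.items()})
# {'urban': 0.91, 'rural': 1.53}
```

Real weighting schemes use several variables at once, but the principle is the same, which is exactly why pollsters should say which variables they used.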

 

Wow! This sounds like a lot of work! But one of the most interesting outcomes of the 2012 election in the US, in which a high profile, well-respected research outfit (Gallup, in about as epic a scandal as pollsters are allowed to have) got the election wrong, was the degree of public scrutiny Gallup allowed of its methodology to figure out what happened. I'm sure it was painful -- no one likes to admit they did things wrong -- but the result is better, and more credible, public research. Gallup's reputation took a hard hit, but they dealt with it the best way they could. If you really want to learn more about what happened to Gallup -- and why wouldn't you? Pollsters are awesome -- read this report.

 

Given that major Turkish pollsters, including a well-respected one, got the Presidential election wrong, this issue isn't going away soon. Historically low turnout -- preliminarily 74% -- might have thrown some pollsters for a loop, but it shouldn't have, given the timing and the dynamics of the election. The challenge, as US pollsters have found, is always trying to predict which voters will cast ballots and which will stay home. Turkish pollsters, who already face credibility issues, need to confront this issue with transparency.


**Quirk Global Strategies isn't in the business of public polling. We're strategic pollsters, which means private clients use our data to guide their internal political or communications strategies (though not in Turkey, usually). This is an important distinction.

How to Be a Good Consumer of Public Polls

Here we are again, weeks out from another Turkish election, arguing on Twitter and in bars about the pre-election polls on the Presidential race between the shouty guy and the bread guy. As much as we'd like to, we can't really ignore this election, so wouldn't it be great if someone explained how to tell if a poll in the paper is credible or not?

It's your lucky day! Here are some basic, but important, concepts to understand before you write about, argue about, print, or tweet publicly released election polls (everywhere, too, not just Turkey).

 

  • How many people were interviewed? It amazes me how few press articles include this mandatory information. A nationally representative sample should include at least 800 randomly selected respondents, which has a margin of error (MOE) of 3.5% at the 95% level of confidence.* A larger sample size does not necessarily mean the survey is better (academics may argue otherwise, but their research goals are different), so don't fall into that trap. For example, the margin of error for an n=2000 sample is 2.2% (compared to 3.5% for n=800). That's not a big difference and won't matter much except in the closest elections. However, if the pollster is sharing data from smaller demographic or geographic subgroups within the national sample (men, women, Kurds or Istanbullus, for example), a larger sample size becomes more important. Remember, the MOE increases as the number of interviews decreases. If Istanbul makes up 19% of the country (and in a nationally representative sample, it will), an n=800 sample will include only 152 interviews among Istanbullus, with a MOE of 8%. If the sample is n=2000, there will be 380 interviews (MOE 5%) among Istanbullus. I'm slightly more comfortable with the latter data than the former because the margin of error is smaller (see the sketch after this list for the arithmetic). Do you like to play around with sample sizes? I do! There's an app for that.

     

  • Who paid for it?  This is Turkey so this is probably the single most important question. In the US, major media outlets (and think tanks) commission credible research firms to conduct election surveys (the "CNN/Washington Post poll," for example), the results of which papers report as news. Given they are in the business of reporting things that are more or less true, they have a lot at stake by getting the numbers right. The media in Turkey operate according to different principles. That a media outlet reports data tells us little more than in whose favor the numbers are likely to have been cooked. Methodologically sound research is expensive in Turkey -- $20,000 to $30,000 for data collection alone -- and for-profit research firms are unlikely to undertake survey work for fun, even if they say they do. Someone's paying for it and if you can't find out who, don't report it.

 

  • Who was interviewed? Election polls are designed to predict election outcomes. It sounds harsh, but non-voters' opinions don't matter. Therefore, only likely voters should be polled. Because voting is compulsory in Turkey, election participation is very high (88%-90%), so nearly all adults are eligible to participate in an election survey. In contrast, election polling in the US is extremely complicated: only about 50% of Americans are eligible to vote (by virtue of having registered), and among those, participation rates vary from the extremely low (15% in low-interest primaries) to the less low (about 65% in presidential elections). Predicting who should be included in a sample of likely voters is extremely challenging. Misreading the composition of the electorate was one of the reasons major polling firms got the 2012 US election wrong. Because of its timing (10 August, mid-vacation), its uniqueness (it's the first time Turkish voters have directly elected a president) and low interest in the candidates among the tatıl-class, Turkey's presidential election presents a special challenge to election pollsters. Is there going to be a substantial drop-off in participation among certain types of voters who won't bother to return to Istanbul from Bodrum's beaches to vote? Maybe! Pollsters who care about accuracy will take this into account. They should explain how they're addressing this issue, and how, if at all, they're diminishing their samples to exclude those who won't vote. Ask! Ask! Ask!

 

  • How did they conduct the interviews? Generally, in probability samples (the only kind that produces representative data and the only kind I will discuss), a respondent is selected at random to participate in either a face-to-face (F2F) or telephone interview. F2F has always been the norm in Turkey because of low phone penetration, but that's changing quickly as more and more people obtain mobile phones, and mobile sampling is becoming increasingly common. Both methodologies have biases, and you should know which one the pollster uses so you can be aware of them. I can go on for days about the pros and cons of each (it's a wonder I have any friends at all). Online, web-only surveys are bogus. If you ever want to start a flame war with me on Twitter, report on an online survey like this one without using the word "worthless."

     

  • What's the polling firm's track record? Accuracy is a pollster's currency. The great thing about election polling is there's a day of reckoning. You either get it right and can be smug (it's science!) or you're wrong and no one should listen to you anymore. Given the dearth of credible election polls in Turkey, calling previous election results correctly boosts a pollster's credibility even more in my book. As far as I know, and I don't know everything, one firm did that publicly in the March local elections: Konda. Why data released by firms that got a recent election completely wrong are treated as credible is a mystery to me. It's easy to check.
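Here's the subgroup arithmetic from the sample-size bullet above, as a quick sketch (worst-case p=0.5 throughout):

```python
import math

def moe(n, z=1.96):
    # Worst-case (p=0.5) margin of error, in percentage points.
    return z * math.sqrt(0.25 / n) * 100

# Istanbul at ~19% of a national sample:
for total in (800, 2000):
    istanbul = round(total * 0.19)
    print(total, istanbul, round(moe(istanbul)), round(moe(total), 1))
# 800  152  8  3.5   <- 152 Istanbullus, subgroup MoE ~8 points
# 2000 380  5  2.2   <- 380 Istanbullus, subgroup MoE ~5 points
```

Same survey, same methodology: the national number is solid while the Istanbul crosstab wobbles by several points. Keep that in mind every time someone tweets a subgroup finding.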

 

This isn't all there is, but it's plenty and you don't have to be a specialist to interpret it (as long as you understand probability sampling). Having the answers to these questions will make it easier to assess the quality of the polls you see in the Turkish press and on Twitter. Armed with this information, you'll have the tools to be able to say "this poll sounds like BS. I'm not going to report/tweet it," thus depriving bogus pollsters of the media oxygen they need to survive. If you can't get answers to these questions, don't report the data.

 

TOMORROW (or some day in the near future)! How to Make Public Election Polling in Turkey More Credible 

 

*If your universe (the total number of potential respondents) is large relative to your sample -- say, tens of thousands or more -- the margin of error for a random sample of n=800 is the same whether you're surveying a city of 100,000 people or a country of 78 million. (Sample a big chunk of a tiny universe and a finite population correction kicks in, but that never matters in national polling.) If you don't understand why this is, or what a margin of error is, get thee to a Stats 101 course and don't start arguments you're going to lose.
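And for the curious, a sketch of that footnote's rule, including the finite population correction that only matters when the universe is small relative to the sample:

```python
import math

def moe(n, N=None, z=1.96):
    """Worst-case MoE in points; applies the finite population
    correction (FPC) when a universe size N is supplied."""
    se = math.sqrt(0.25 / n)
    if N is not None:
        se *= math.sqrt((N - n) / (N - 1))  # FPC
    return z * se * 100

print(round(moe(800, N=78_000_000), 1))  # 3.5 -- country of 78 million
print(round(moe(800, N=100_000), 1))     # 3.5 -- big city: no real change
print(round(moe(800, N=1_500), 1))       # 2.4 -- tiny universe: FPC bites
```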

 
**Quirk Global Strategies isn't in the business of public polling (or academic research). We're strategic pollsters, which means private clients use our data to guide their internal political or communications strategies (though not in Turkey). This is an important distinction. Strategic pollsters who collect bogus numbers give bad advice, lose elections and don't get hired again. Therefore, we strongly oppose BS numbers. You can be certain that strategic polling is being done in Turkey -- most likely on behalf of AKP -- but you and the Twitter loudmouths you follow are unlikely to get your hands on it.

 

Surprised AKP Is Still Strong? Don't Be

There's been a lot of hand-wringing lately among the international commentariat about AKP's prospects for a strong performance in the upcoming (30 March) local elections, which, despite a corruption scandal that is breathtaking in both its scope and cravenness, appear only slightly diminished. How, they howl (with no small amount of condescension), could Turkish voters still support* such a corrupt party?

I don't find it surprising at all. Here's why.

First, voters vote according to their self-interest, period. Their self-interest includes issues that affect them personally in their everyday lives: education for their children, jobs that provide a decent wage, good health care when they're sick, safe and healthy neighborhoods in which to live. Like it or not, many Turks are going to respond to a version of Ronald Reagan's famous question "are you better off than you were before AKP took power?" with "evet." AKP knows this and campaigns accordingly.

Voters do not vote according to principles or abstractions. In Turkey, these include democracy, freedom of speech (including the internet), laicity, jailed journalists, international affairs, the EU or any number of non-salient issues that opposition parties here focus on to their detriment. These issues appeal to opinion leaders and the elite, not regular voters.

Second, there has to be an alternative. It is extremely difficult to oust an incumbent party, even one with as many negatives as AKP. A new party must not only convince voters there's a problem with the incumbent, it must convince them it is qualified to take over. It's expensive, time consuming and requires message-driven campaigning to both introduce a party to voters and convince them it's worthy of their support. For a variety of reasons, there is no emerging political force in the Turkish political environment right now, so take that option off the table.

The job is even harder for a party like CHP, with which voters are already familiar. Not only must CHP convince voters that there's a problem with the incumbent and that it's qualified to take over, it has to overcome the negative perceptions it has worked so assiduously to build over the last 75 years. Tossing out an incumbent party is very hard even for a well-known party with sharp messaging, a ton of money, a lot of time and generally positive perceptions. I'm going to go out on a limb and say CHP lacks those resources.

In short, disaffected AKP voters have to have somewhere to go. There isn't anywhere.

But what about this corruption scandal? It probably would take down governments in other countries. But corruption is a funny issue. Most voters assume politicians are corrupt and shrug it off, especially if they are otherwise pretty satisfied with a government's performance and the corruption doesn't affect them personally. "He may be a snake, but he's our snake," is a famous quote about Willie Brown, one of California's most spectacularly corrupt (and effective) lifetime politicians. Voters are very forgiving of corrupt parties that deliver (and not at all forgiving of corrupt parties that don't. Ask Viktor Yanukovych).

Should one of AKP's opponents effectively make the case to voters that this corruption scandal hurts the economy, makes it more difficult for them to educate their children or hobbles their favorite football team, they'd probably get traction. Instead, all I see is stupid marches at which people throw fake Euros into the air. That's not message; that's litter.

 

*I have no data. Like everyone else, I assume that the scandal, so far, has had a small impact on the party's level of support. Maybe that's wrong, but let's follow the crowd.

Stop Reporting the Bilgi Poll!

Like many of you, I have visited Gezi Park over the last few days. While walking around, I noticed that a lot of the protesters are young and seem new to the business of protesting. They have strongly held views on a lot of topics but are not overtly political.

My observation is about as scientifically valid as the poll released by Bilgi University earlier this week. I'm not going to repeat the findings. That so many respected journalists are citing and retweeting it without mentioning (or probably even looking to see) that, according to the exceedingly vague methodology statement, it's a 20-hour online survey of 3000 people, is vexing. I'm going to assume (probably incorrectly, but I'm struggling to be generous) that there's more information about the methodology in the Turkish version, but when I saw the word "online," that's when I clicked "close tab."

Polling 101: Online surveys are representative of nothing except the universe of people who 1) knew about the survey, 2) had internet access during the 20 hours it was open, and 3) felt like responding. Participants were not randomly selected; they chose to participate, which makes them different in at least one way from those who did not. It's called selection bias.

Even worse, it appears that a lot of folks are repeating data from the poll because "it seems to make sense." That's confirmation bias, which is also sloppy.

If you really have to cite that poll, I suggest phrasing it thusly: "According to a worthless online survey of Gezi Park protesters publicly released by Bilgi University, which, you'd think, as an academic institution, would know better..."

There are ways to randomly select a sample of protesters and find out more about their demographics and attitudes. It's time consuming and expensive, like good research usually is. Wait until someone does that, then report it.

I have something to say approximately every four years. I'm like a pollster cicada.

Why Facebook Hurts Democratic Movements

There are lots of things about Facebook that annoy me (mostly how it went from being a useful way to find out what your coolest friends were doing, listening to or reading, to being an echo chamber of your most annoying friends' scores on idiotic quizzes, but that's a different blog post on a different blog), but the thing that bothers me most these days is all the groups and petitions devoted to "supporting" various democratic movements.

Moldova introduced itself to hundreds of thousands of clicktivists earlier this year. Then there was Iran. (The online response to China's cracking some Uighur skull has been, at best, muted, at least in my network. I suspect it's because there aren't as many hot girls involved). The most recent example comes from Baku, where two Azeri youth activists were beaten up by sportsmenki and tossed in jail for doing little more than having dinner at a downtown Baku restaurant.

Since this happened, I have been invited to no fewer than six groups that express support for them, but have not joined one. I feel bad about this, but the only things less effective than Azeri youth activists are the Facebook groups set up to "draw international attention" to their situation. (Harsh? I know from Azeri youth activists).  Furthermore, they fail to achieve even that amorphous goal: the tepid support most of the groups receive does little but illustrate what is already screamingly obvious -- very few outside Azerbaijan care what goes on there.  And after generating all the international attention, then what?

Like Twitter, Facebook democracy support groups bug me for several reasons.

First, Facebook groups prolong the illusion held by many in opposition movements in the Former Soviet Union that democratic change can come from anywhere but inside the country.  One of the Azeri opposition's favorite strategies for achieving power was writing lots of letters to foreign leaders, taking expensive junkets to Brussels and beseeching visiting OSCE diplomats plaintively. Really, who can blame them for wanting to spend more time in Vienna than Yevlax? However, challenging despots requires hard, risky groundwork, convincing skeptical voters in your own country that you're responsible enough to be trusted with the reins of power and that it's worth the risk to join you.

Second, it prolongs the illusion that organizing is as easy as clicking a button. It's a lot more fun to organize several thousand Europeans and Americans to support your "cause" than it is to mobilize IDPs still living in train cars 14 years after the oil-rich country lost a war. It's a lot easier to broadcast a tweet to the universe than it is to go out and talk, face to face, to people in Lenkoran who don't have electricity, much less internet.

Third, it diminishes the stakes. If people in Azerbaijan truly want to boot the kleptocrats (and there is plenty of evidence to suggest most don't), they have to join civil society organizations or political parties or labor unions that oppose the government. They have to volunteer to monitor elections. As a result, jobs will be lost, university places sacrificed, nights spent in jail and heads cracked. The idea that it can be done any other way is an insult to the people who have tried and succeeded (or, tried and failed).

The situation in Azerbaijan right now is terrible. It was terrible before Facebook and will continue to be terrible long after Facebook joins Friendster and MySpace in the dust-bin of social networking history. If you're going to click, click on something like Daily Puppy or your favorite porn site. It will have about as much impact on Azerbaijan.