Disenfranchisement of Muslim Youth and the Self-Selection Bias

We’ve written before about some of these ‘studies’ and the possible side-effects of emphasising and accentuating Muslim disadvantage, so there is no need to go over those issues again.

Earlier in the year, the Sydney-based Independent Centre for Research Australia (ICRA) released a report — perhaps tellingly titled All Eyes On Youth (an allusion, it seems, to the Tupac song All Eyez On Me) — which concluded:

The All Eyes On Youth study found eight out of 10 young Muslims aged between 12 and 25 considered the education system of no assistance “in making lifetime choices”, and 94 per cent lacked a clear goal in life.


ICRA president Fadi Rahman told The Australian yesterday he was alarmed that almost all the 75 males involved in the survey had experimented with drugs such as cannabis, ecstasy and cocaine.

The same organisation is now back in the media with a new study.

The ABC reports:

ICRA chairman Fadi Abdul Rahman says young Muslims feel angry and marginalised about negative press.

“They have already been labelled as, you know, the majority are perhaps criminals or the majority are perhaps no good and I guess this misperception about young people is really turning young people into these marginalised, alienated group of young people within their own society,” he said.

“This is very, very dangerous.”

Perhaps one reason why these youths are being wrongly portrayed as “no good” is because of studies, such as the earlier ICRA report, that make broad claims about drug use among Muslim youths? As we have said, an unfortunate consequence of accentuating Muslim disenfranchisement is that some people will inevitably believe you.

Anyway, a fundamental problem with these sorts of studies is the self-selection bias. To give an extreme example: we might decide to survey readers of Austrolabe to determine what percentage of Muslims read blogs on a regular basis. Surprisingly, we find that 100% of Austrolabe readers read blogs. Therefore, we conclude that 100% of Muslims or Muslim youths or Muslims with an internet connection also read blogs. However, such statistics are obviously flawed because the participants in the survey have effectively ‘selected’ themselves as being part of the blog-reading cohort by visiting a blog in the first place.
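The Austrolabe thought experiment can be sketched in a few lines of Python. The 5% true rate and the population size are hypothetical numbers chosen purely for illustration; the point is only to show how the survey’s reach determines its answer:

```python
import random

random.seed(0)

# Hypothetical population: suppose only 5% read blogs regularly.
population = [{"reads_blogs": random.random() < 0.05} for _ in range(100_000)]

# True rate across the whole population.
true_rate = sum(p["reads_blogs"] for p in population) / len(population)

# A survey hosted on a blog only reaches people who already read blogs:
# respondents have self-selected by visiting the site in the first place.
respondents = [p for p in population if p["reads_blogs"]]
surveyed_rate = sum(p["reads_blogs"] for p in respondents) / len(respondents)

print(f"true rate: {true_rate:.1%}, survey estimate: {surveyed_rate:.1%}")
```

The survey estimate comes out at 100% no matter what the true rate is, because the sampling mechanism and the thing being measured are one and the same.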

Likewise, one can make similar points about the ICRA statistics and reports. The participants in these studies have all selected themselves to be part of the surveyed group through their involvement with ICRA events. Therefore, all that can be concluded is that the ICRA findings are perhaps a reflection of people who identify with the organisation and its projects rather than larger subsections of the Muslim community such as ‘Muslim youth’.

This is, of course, not a shortcoming only of ICRA’s studies but applies across a broad range of surveys and investigations, including the various ‘open forums’ held by different organisations to supposedly gauge generalised Muslim sentiment. This is not to say that such studies or approaches are not useful but one needs to be cautious in making broad and generalised statements on the basis of such findings. This is especially true when one is painting a less than positive picture of the community or a section of the community (in this case, Muslim youth).

And what is the solution to stopping more Muslim youth turning into a “marginalised, alienated group of young people”?

The organisation says media internships for Muslims should be funded by the Government to break down cultural stereotypes.


#1 James (San Diego) on 05.15.07 at 1:19 am


Very good points. The “All eyes on youth” survey has even more problems. Just reading your points there are two more flaws in the study.

1. Framing. 12- to 25-year-olds as the participants? Do you really expect a 12-year-old to have a cognizant view of his adult life? “What do you want to be when you grow up, little Omar?” By choosing this grouping, the study guaranteed that it would find people who doubted the utility of schooling and were unsure about their life choices.

2. Sample size. While the self-selection point is germane (a killer in its own right), you would be hard-pressed to come up with any useful conclusions from a sample this small (75) anyway. Only the most rigidly built study, double-blinded with tight parameters for participation, could produce a small, well-hedged, heavily caveated conclusion. Combine the error of sample size with the self-selection problem and you have junk science.

You’re right, Amir: the only proper use of this study is as birdcage liner.

#2 Umm Yasmin on 05.17.07 at 12:22 am

Assalamu ‘alaykum,

As someone who is in the middle of conducting research on Australian Muslims, I just wanted to point out that there is a difference between quantitative and qualitative research, and any qualitative study worth its name would have a section outlining the methodology including how data was obtained, and possible biases.

Smaller qualitative research will often be used to generate ideas that can then later be tested with broad quantitative studies that are statistically representative. That does not mean that qualitative research is worth “less” than quantitative research.

If a small qualitative study with ‘self-selected’ participants found that there was an increase in, e.g., drug use, a researcher would write this in his/her report and say specifically that it could not be used to make generalisations, but that it is an area for further research. This could then be tested on a much broader scale with a quantitative study to see whether it is true that X% of Muslims use drugs.

But how do you find out what the issues are in the first place? That’s where qual. research often comes in.

(Just throwing my researcher’s 2p in the ring)

#3 Baybers on 05.17.07 at 8:30 am

Social research is, by its nature, difficult and prone to both sampling errors and bias. There are several ways to minimise them, such as using existing data rather than qualitative research to generate hypotheses. This reduces the risk of investigator bias.

For example, if one wishes to examine the institution of marriage among Muslims, one could begin with the ABS statistics on Muslims who are married, divorced, etc. to generate some numbers (e.g. the number of Muslim marriages ending in divorce). One could then do subgroup analysis by age, background, social level and education. Here the data leads the researcher rather than the other way around.

Such a study has greater validity than, for example, generating a hypothesis out of thin air, testing it on one’s friends, and writing a report.

To be even more robust, one could follow a cohort, for example the graduating class of an Islamic school, and compare them with the population average or even a Muslim average. Or one could follow two cohorts (one group of Muslims at an Islamic school, and the other at a general public school). This sort of research is not glamorous, but it is much more useful.

The samples need not be much bigger than a small group in the study above, but the sampling problem is overcome.

One needs to be strict in one’s methods; otherwise one is not generating truthful data.

#4 Amir on 05.17.07 at 10:04 am

It’s an interesting topic.

The issue isn’t really the statistical validity of particular studies or the usefulness of qualitative data (or qualitative data dressed up as quantitative data in the press), but simply that Muslims need to be cautious when wheeling out stats and studies on sensitive topics such as drug use, criminality, misogyny, domestic violence, suicide, etc. This obviously doesn’t apply to Umm Yasmin or other researchers who understand the importance of reporting margins of error, confidence levels, and identified or potential biases with their datasets; it applies more so to studies conducted without this sort of academic rigour on sensitive topics and then delivered to the world soundbite by soundbite. This sort of ‘pop sociology’ can also be harmful: one person hears that Muslim youth are using drugs and assumes the causal connection is with socio-economic disadvantage, but another person hears the same ‘fact’ and concludes the positive correlation is with Islam.

Incidentally, a sample size of 70 may be small but, assuming a normal distribution, it is not too small to draw useful conclusions about a population of around 100,000–150,000 (assuming, of course, that biases are controlled for and the sample is random).
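For what it’s worth, the usual back-of-envelope check on this claim can be sketched as follows. The 95% z-score and the worst-case proportion p = 0.5 are standard textbook assumptions, not figures from the post or the ICRA report:

```python
import math

# Worst-case 95% margin of error for a simple random sample of n = 70,
# using the normal approximation (p = 0.5 maximises the standard error).
n = 70
z = 1.96                      # z-score for 95% confidence
p = 0.5                       # worst-case proportion
moe = z * math.sqrt(p * (1 - p) / n)

print(f"95% margin of error: ±{moe:.1%}")  # roughly ±12 percentage points
```

A margin of around ±12 points is wide but not useless, and because the formula barely depends on the population size once the population is much larger than the sample, a population of 100,000 versus 150,000 makes no practical difference.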

#5 James on 05.26.07 at 11:16 am

Amir and others

Good points all around. There is a way to build good research. Agreed, you can build a study around 70 participants, but it has to be carefully controlled, with the acknowledgement that such a study is only a beginning. More to your point, Amir: it is foolish to try to mould public policy on such a study. That a group would toss such a poorly constructed “study” out as part of its polemic speaks volumes about its contempt for rational conversation. It speaks to an intellectual laziness and an unwillingness to do the true spade work that serving a community entails.

The real statistics can be gotten, but it means poring over government logs, arrest records and ER logs, and talking to lots of real people. Scaremongering is so much easier. “Our Muslim youth is in peril! It is perilously in peril; toss us a few bob and the peril will be less perilous.”
