Cannabis Survey Declared Invalid by Expert

Dear County Supervisors,

Statistical estimates that are to be the basis of policy decisions about cannabis in Sonoma County must be based on a valid sample of the population and on valid and reliable questions. I am writing to inform you of irreparable defects in the recent online county cannabis survey that make it inappropriate, even immoral, to use it to guide any policy decisions around the use of this federally controlled substance. I am a Ph.D. sociologist who designed interviews and surveys at the National Center for Health Services Research in the U.S. Department of Health and Human Services, Washington, D.C., for 8 years; in the Department of Quality Management at Children’s Hospital, Boston, MA, for over 5 years; and at a for-profit survey research firm in Massachusetts. As a college professor, I taught a course on Survey Design and Measurement.

As a scholar of survey research, I have concerns in the following three areas.

First, the sample: The survey was not publicized in the Press Democrat or other public news services. Only people on a selective County cannabis email list were sent a link to the survey, and there is no transparent evidence of how this list was created or kept up to date. It is clearly not representative of Sonoma County. Even for those who tried to answer the questions, many are so biased that respondents are likely to get angry and discouraged and give up on a survey they TRIED to answer. I imagine many questions were left blank and many surveys were started but never completed. I challenge county staff/consultants to report the percentage of “blanks” (non-responses) for every question and the number of surveys begun but not completed.

Second, the ranking questions are a serious problem. The validity of a survey question, like Q3 about an “appropriate size for cannabis cultivation,” means that the question accurately measures what it intends or presumes to measure, namely respondents’ preferences, without bias. Ranking questions can be invalid because they introduce bias in a number of ways, so that the “rank” or “average rank” is not a true picture of the preferences of respondents.

In this cannabis survey, five of the 19 questions that could be answered (not everyone received every question, because of the screening choices), more than 25%, were ranking questions rather than rating questions. A more valid measure of preferences is a rating question, which asks a person to rate EACH item on a scale (e.g., 1 = least important to 5 = most important) rather than to rank the items against one another. This alone is sufficient to disqualify and invalidate any statistics that might be generated from the questionnaire.

The result of a ranking question cannot be considered a valid or reliable (that is, stable) measure of preferences about marijuana cultivation or distribution for the following reasons:

1.    Ranking questions force respondents to choose among unacceptable options. For example, Q3 presumes support for ANY cannabis cultivation in the county. There is no option to say, “No cultivation is acceptable in Sonoma County.” This question overestimates support for cannabis cultivation.

2.    Q19 asks respondents to rank 3 choices for a moratorium. If a person wants no permits at all, a complete moratorium, there is no place for them to give their true preference. They cannot answer “none of the above.” Q19 will overestimate support for permits.

3.    Another example is Q16. A person who believes there should be no onsite cannabis consumption until there is a legitimate roadside test for driving impairment (like a DUI test) has no way to say that, but is forced to choose something they do not believe in. Q16 will overestimate support for cannabis consumption. An opponent of onsite consumption until there is a legitimate roadside test will have to leave the question blank, and may give up on the survey at that point because it is so biased as to exclude a question measuring their preferences.

4.    It is impossible for most people to rank more than 3 items at once. Questions that ask for rankings of more than 3 items (Questions 3, 10, 13, and 16) are unreliable measures, especially at the lower rankings: if a person completes the survey a second time, chances are high that their rankings will differ significantly between administrations. These ranking questions are also biased because they don’t allow respondents to rate preferences they consider EQUALLY important.

5.    Q10 and Q13 are foolish because ALL concerns must be addressed by CEQA and all setbacks matter; none can be considered more important than any other.

6.    All ranking questions need to include the choices “none of the above” and “all of the above,” or be replaced by RATING questions that list each item/issue and ask people to rate each on a scale (1 = least important to 5 = most important).

7.    The scales used in several questions are reversed, causing respondent confusion (in Q3, 1 = least important; in Q10, 1 = most important).

8.    In a ranking question there is no way for a respondent to indicate the strength of their feelings about individual concerns.  A respondent can feel very strongly about many items, or very few. Without a rating question for each item, there can be no reliable estimate of the strength of preferences.

9.    Q16 is full of jargon that the average Sonoma County citizen (non-grower) will not be able to understand. What is an “Agricultural and Resource area”? A “Commercial area”? Where are the boundaries on a map of Sonoma County? Who would know that? Non-growers and people who don’t farm (most of Sonoma County!) will simply guess, not knowing what the answers mean. The question is spurious and should be thrown out.

Third, forced-choice questions like Q2 cause bias because there is no option for a respondent who wants NO cannabis cultivation in Sonoma County (like Marin and Napa).

1.    Q12 is invalid because there is no place for a respondent to answer “all of the above”; the respondent is forced to choose. This question should have been replaced with a rating question.

2.    Q5 causes bias because there is no choice for someone who wants exclusion zones but no inclusion zones, and it funnels these respondents around the questions about willingness to live next to an inclusion zone. Everyone should be able to answer all questions about inclusion and exclusion zones.

In summary, none of the “data” from this survey should be used to guide public policy in Sonoma County. The haphazard sample, the frustrating industry-friendly biases of the questionnaire, the unreliability of the ranking and forced-choice questions, the funnel questions that prevent some respondents from answering every question, the lack of “none of the above” or “all of the above” choices, the open disregard for the importance of every CEQA issue, and other defects limit the capacity of any Sonoma County tax-paying citizen to indicate their true preferences. If I wanted to write a survey biased toward the cannabis industry, this would be it. As a taxpaying citizen, I find it a shocking waste of taxpayer dollars to pay anyone to write any report based on this “survey,” let alone to use it to make policy decisions.

Sincerely,

GC, Ph.D.

Santa Rosa, CA