What is the Value of a Survey and Is It Real Science?

Cardiology Section

University of California, San Francisco

We interventional cardiologists pride ourselves on taking a scientific view of technological progress and pharmaceutical advances. We develop evidentiary standards and respond to data and statistical analysis, not subjective observation or sponsor personality. We know that randomized trials, multicenter registries, and objectively designed studies are the strongest forms of evidence. Several questions therefore frequently arise: What do we hope to learn from a survey of contemporary practice? Is it worth our time academically to produce a study that isn't based on hard data? Is a survey real science?

A survey is a research method for collecting data from a predefined group of respondents to gain information and insight into topics of interest. A survey can serve multiple serious scientific purposes if it is well designed and thought out in advance; like any clinical investigation, its accuracy and relevance depend on sound design and appropriate analysis, keeping in mind validity, reliability, replicability, and generalizability. If properly conducted, a survey can tell us with precision what other interventionists are doing, what techniques they find valuable, what questions they think are important in their practice, what problems they encounter, and what they are doing about them. And yes, those are important questions for which we need scientifically valid answers.

There is a huge difference between surveys that are conducted with rigor and strive for real accuracy and those that are not. Every research study design, and surveys especially, must explicitly consider the population being evaluated, the intervention being tested, the comparator group, and the outcome sought. Surveys must be conducted using an online or paper collection method that ensures all completed surveys are counted. Who are you trying to reach? What are you studying, and why? How are you going to learn that information with this study format? After you collect the data, how will they be analyzed?

The first step is to identify the purpose and main objectives you want to address; all of the survey's questions should be anchored to this aim. Introducing the study to potential respondents is crucial to getting accurate data. State at the outset which organization is conducting the survey and provide contact information for its creator, a confidentiality statement, how the data will be used, and an estimate of the time required. Explain the purpose of the questionnaire to respondents and to those being screened: when the reasons behind a line of questioning are understood, respondents are more likely to volunteer accurate information.

You are trying to gain important insights into some facet of professional practice, so every question should play a direct role in meeting that goal. Each question should add value and have a planned use in the analysis. Ask direct questions; this is not the time to be vague, imprecise, or "diplomatic." Adjectives and adverbs often shade a response and should generally be avoided.

It is important that the questions not only answer the one main question but also help to explain the reasons behind the answer. This is the difficult part of survey design: most questions must be framed so they can be answered in a closed, multiple-choice format, yet enough open-ended questions must be included to explore the topic in greater depth. Free-text answers, however, cannot be analyzed statistically, so they should be used sparingly.

A focus must be identified. Too many questions lengthen the time it takes to respond and annoy respondents; they lose interest and either stop filling in the form or fill it in carelessly as their attention deteriorates. The response rate then decreases and the quality of the data diminishes. Ask only necessary questions, keeping their number to a minimum.

Bias is a serious matter to consider. Embedding an opinion in your question (a "leading question") can influence respondents to answer in a way that does not reflect how they really feel. Ask your colleagues to critically review your wording. The answer choices you offer can be another source of bias. Always do a "test drive" with colleagues willing to tell you how a question might be improved.

Stick as much as possible to closed-ended questions. Fixed-response questions provide respondents with a specified set of answers and require them to choose from those options. Use fixed-response questions when you have a definite way to define and categorize the data. Structured question formats include multiple choice, ranking, yes/no, and rating scales. Rephrase yes/no questions whenever possible into a format that captures nuances and distinctions; many can be reworked with phrases such as "How much," "How often," or "How likely," as the sketch below illustrates.
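As a minimal illustration of what dichotomizing discards, the Python sketch below tabulates hypothetical responses to an invented frequency-scale item; the question wording and data are assumptions for illustration, not survey results.

```python
# Sketch: what a frequency scale preserves that yes/no discards.
# The question wording and responses below are hypothetical.
from collections import Counter

# "How often do you use intravascular imaging during PCI?"
responses = ["sometimes", "often", "rarely", "always", "sometimes",
             "often", "never", "sometimes", "often", "rarely"]

counts = Counter(responses)
for option in ["never", "rarely", "sometimes", "often", "always"]:
    print(f"{option:>10}: {counts[option]}")

# Collapsing the same item to yes/no ("Do you ever use it?") keeps
# almost none of the practice-pattern signal:
yes_no = Counter("yes" if r != "never" else "no" for r in responses)
print(yes_no)  # Counter({'yes': 9, 'no': 1})
```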

Response scales capture the direction and intensity of attitudes, providing rich data. In contrast, categorical or binary response options, such as true/false or yes/no, generally produce less informative data. Make sure response scales have a definite, neutral midpoint (aim for an odd number of possible responses) and that they cover the whole range of possible reactions to the question. Careful attention is necessary when analyzing questions that ask for a subjective response on an arbitrary scale. For example, although a decrease in chest pain from 6 to 3 might reliably show a real decrease in pain, it does not mean that the pain is half as intense.
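To make that concrete, here is a minimal Python sketch, using invented pain scores, that treats scale responses as ordinal: it summarizes them with medians and tests a pre/post shift with a rank-based method rather than assuming the numbers behave like ratios.

```python
# Sketch: analyzing ordinal scale responses (hypothetical data).
# Scores on a 0-10 pain scale are ordinal, so we avoid means and
# ratio statements ("half as intense") in favor of rank-based summaries.
import numpy as np
from scipy.stats import wilcoxon

pre = np.array([6, 7, 5, 6, 8, 6, 7, 5, 6, 7])   # hypothetical pre-treatment scores
post = np.array([3, 4, 3, 2, 5, 3, 4, 3, 2, 4])  # hypothetical post-treatment scores

# Medians respect the ordinal nature of the scale.
print(f"median pre: {np.median(pre)}, median post: {np.median(post)}")

# The Wilcoxon signed-rank test asks only whether paired scores shifted
# downward, not by what ratio -- appropriate for paired ordinal data.
stat, p = wilcoxon(pre, post)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
```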

Non-structured questions are good for collecting individual ideas, but they are harder to systematically analyze, organize, and categorize. They require more time and effort to answer and do not lend themselves to statistical methods. Do not ask more than one or two such questions, and place them at the end of the survey as optional items.

A crucial question that must be addressed is: precisely what group are you interested in surveying? The answer is a huge help in determining how to bring your survey to their attention. E-mail, print, and society- or institution-sponsored marketing all have strengths and drawbacks; who is most likely to respond to which kind of notification? Just as important: who is less likely to respond? A response bias of some sort is inevitable; a problem arises only if the wrong demographic responds, or if you cannot determine the nature of the response bias and how to correct for it.

To be sure the survey is answered by the demographic you are targeting, you have to consider how to identify a representative sample. A truly random sample is unlikely; that is acceptable as long as you are able to identify who did respond. The statistical analysis should not only summarize the findings for the main question but also characterize who responded to the survey and who did not, including age, gender, experience, and location of practice.
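One way to characterize that, sketched below in Python, is a goodness-of-fit comparison of respondents against the target population. The practice-setting categories, counts, and population proportions here are hypothetical; in practice the population breakdown would come from a source such as society membership data.

```python
# Sketch: checking whether respondents resemble the target population
# (hypothetical counts; population proportions assumed known from an
# external source such as society membership rolls).
from scipy.stats import chisquare

# Respondents by practice setting, from the survey itself.
observed = [120, 60, 20]  # academic, community, other

# Known proportions of each setting in the full target population.
population_props = [0.45, 0.45, 0.10]

total = sum(observed)
expected = [p * total for p in population_props]

stat, p = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p:.4f}")
# A small p-value signals response bias: some groups are over- or
# under-represented, so subgroup reporting or weighting is warranted.
```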

The results must be analyzable not just for the total group but also for specific subpopulations. Make theory- and hypothesis-driven choices about how to calculate statistics, including percentages and means, eg, is the distribution Gaussian or skewed? Consider whether to weight your data to adjust for any sampling or data collection biases.
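One common correction, shown as a hedged sketch below with made-up numbers, is post-stratification weighting: each respondent is weighted by the ratio of their group's population share to its sample share. It is one standard adjustment, not necessarily the only appropriate one.

```python
# Sketch: post-stratification weighting (hypothetical numbers).
# weight = group's population share / group's sample share

sample_counts = {"academic": 120, "community": 60, "other": 20}
population_props = {"academic": 0.45, "community": 0.45, "other": 0.10}

n = sum(sample_counts.values())  # 200 respondents
weights = {g: population_props[g] / (sample_counts[g] / n)
           for g in sample_counts}
print(weights)  # {'academic': 0.75, 'community': 1.5, 'other': 1.0}

# A weighted estimate then counts each group by its population share.
# E.g., if 80% of academic but 40% of community and other respondents
# answered "yes", the weighted "yes" rate is:
yes_props = {"academic": 0.80, "community": 0.40, "other": 0.40}
weighted_yes = sum(population_props[g] * yes_props[g] for g in yes_props)
print(f"weighted yes rate: {weighted_yes:.2%}")  # 58.00%, vs 64% unweighted
```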