Convenience Sampling
Exploratory Study
Thomas W. Edgar, David O. Manz, in Research Methods for Cyber Security, 2017
Convenience Sampling
Convenience sampling is the most common form of nonprobabilistic sampling, largely because it is so often misused. It is a method of collecting data by taking whatever samples are conveniently available around a physical location or an Internet service. We have all seen studies that draw on students in computer science classes; that is convenience sampling used improperly, because computer science students do not represent the general public well. A proper use of convenience sampling would be sampling Craigslist, the Silk Road, or other black-market services to study cybercrime communication: a set of found communications can adequately represent other criminal communications.
Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780128053492000042
Non-Probability Sampling
Alison Galloway, in Encyclopedia of Social Measurement, 2005
Convenience Sampling
Definition
Convenience sampling involves using respondents who are "convenient" to the researcher. There is no pattern whatsoever in acquiring these respondents—they may be recruited merely by asking people who are present in the street, in a public building, or in a workplace, for example. The concept is often confused with "random sampling" because of the notion that people are being stopped "at random" (in other words, haphazardly). However, whereas the correct definition of random sampling (using random numbers to pick potential respondents or participants from a sampling frame) generally results in a statistically balanced selection of the population, a convenience sample has an extremely high degree of bias.
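To make the distinction concrete, the contrast can be sketched in a few lines of Python (the frame, names, and sizes below are invented for illustration; they are not from the encyclopedia entry):

```python
import random

# Hypothetical sampling frame: a list identifying every member of the
# target population.
frame = [f"resident_{i}" for i in range(10_000)]

# Random sampling: random numbers pick respondents from the full frame,
# so every member has a known, equal chance of selection.
random_sample = random.sample(frame, 100)

# Convenience sampling: whoever is easiest to reach, e.g., the first
# 100 people encountered on one street at one time of day.
convenience_sample = frame[:100]
```

Only the first draw supports statistical generalization to the frame; the second simply inherits whatever made those 100 people easy to reach.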
Application
Typically, somebody undertaking a convenience sample will simply ask friends, relatives, colleagues in the workplace, or people in the street to take part in their research. One of the best ways of considering the pitfalls of this form of sampling is to look at this last approach—stopping people in the street. On a typical weekday morning in the shopping area of an average town, the people on the street at that time are likely to result in an overrepresentation of the views of, for example, the unemployed and the elderly retired population. There will be a corresponding underrepresentation of those working in traditional "9-to-5" jobs. This can, of course, be counterbalanced to some extent by careful selection of different times and days of the week to ensure a slightly more balanced sample.
Despite the enormous disadvantage of convenience sampling that stems from an inability to draw statistically significant conclusions from the findings obtained, convenience sampling still has some uses. For example, it can be helpful in obtaining a range of attitudes and opinions and in identifying tentative hypotheses that can be tested more rigorously in further research. Nevertheless, it is perhaps the weakest of all of the non-probability sampling strategies, and it is usually possible to obtain a more effective sample without a dramatic increase in effort by adopting one of the other non-probability methods. The following examples of convenience sampling from published research represent the wide range of applications in the social sciences and in business research:
- A convenience sample of 1117 undergraduate students in American universities explored associations between perceptions of unethical consumer behavior and demographic factors. Instructors on two campuses were contacted to obtain permission to administer the surveys during scheduled classes.
- Questionnaires were distributed using convenience methods in a study of the motives and behaviors of backpackers in Australia. The 475 surveys were delivered in cafes and hostels in areas popular with backpackers.
- Differences in bargaining behavior of 100 American and 100 Chinese respondents were explored using the Fishbein behavioral intention model.
Read full chapter: https://www.sciencedirect.com/science/article/pii/B0123693985003820
Planning the Study
Bill Albert, ... Donna Tedesco, in Beyond the Usability Lab, 2010
Friends, family, and co-workers
We'll talk about something called "convenience sampling" in Section 2.7, and recruiting friends, family, or co-workers is the ultimate type of convenience sampling. We only use this method in an act of desperation because there is a lot of bias that comes into play with these groups. Co-workers can be too close to the content or too familiar with the principles of Web design and usability. Friends and family might be part of a targeted population for a Web site, but their knowledge of you and your work, and perhaps their intrinsic motivation to please you or to approach the product with particularly positive or negative perceptions, can all bias the data. As a result, we don't typically recommend using this method unless you're recruiting friends of friends, family, or co-workers who are at least a step removed. Another option is to use these types of people for some slice of the pilot testing.
Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780123748928000028
Year 12 students' use of information literacy skills: A constructivist grounded analysis
James E Herring, in Practising Information Literacy, 2010
Data collection
The selection of the group of students was done through convenience sampling (Patton 2002), as ongoing collaboration between the physics department and the school librarian resulted in the cooperation of teachers and students. Twelve students completed a diary while they did their assignment and were then interviewed by the researcher after the assignment was completed. The data from the diaries and interviews was analyzed using constructivist grounded analysis. Data was collected from students in the form of a structured diary, which asked students to comment on a range of aspects of their use of information literacy skills during the assignment process. The diary also asked students to comment on their levels of confidence at different stages of the process and to evaluate the quality of their assignment and how they might have improved it. Student diaries as a means of collecting data have been used in school library research by Kuhlthau (2004), Tallman (1998), Harada (2002) and Barranoik (2001). The diaries cannot be taken as verbatim accounts of reality but should be viewed as constructions by the students of what they view as reality in completing the physics assignment.
The second stage of data collection took the form of semi-structured interviews of the students who were completing the physics assignment. Interviews are recommended as a source of rich data by Burns (2000) and Patton (2002). Charmaz (2006, p. 26) recommends that 'intensive interviews' should be used in grounded theory studies to provide depth. In interpreting interviews, constructivist researchers recognize that, as with the diaries mentioned above, interviews are the participants' construction of what they view as reality; it is the researcher's task to interpret what participants say (and sometimes what they do not say) in order to construct the researcher's view of the studied world.
Read full chapter: https://www.sciencedirect.com/science/article/pii/B9781876938796500078
Surveys
Kathy Baxter, ... Aaron Sedley, in Understanding your Users (Second Edition), 2015
Things to Be Aware of When Using a Survey
As with any user research technique, there are always factors that you must be aware of before you conduct a survey. Here, we describe several types of bias you need to be aware of and ways to mitigate them.
Selection Bias
Some users/customers/respondents are easier to contact and recruit than others. You can create a selection bias (i.e., the systematic exclusion of some unit from your data set) by conducting convenience sampling (i.e., recruiting based on convenience). These might be people who have signed up to be in your participant database, students and staff at your university, friends and family, colleagues, etc. Obviously, getting responses only from those who are most convenient may not result in accurate data.
Nonresponse Bias
The unfortunate reality of surveys is that not everyone is going to respond, and those who do choose to respond may be systematically different from those who do not. For example, very privacy-conscious individuals may be unlikely to respond to your survey, and this would be a real problem if you are interested in feedback on your privacy policy. Experienced survey researchers give response rate estimates of anywhere between 20% and 60%, depending on user type and whether incentives are offered. In our experience, you are likely to get a response rate closer to 20%, unless you have a very small, targeted population that has agreed to complete your survey ahead of time. However, there are some things you can do to improve the response rate:
- Personalize it. Include a cover letter/e-mail or header at the top of the survey with the respondent's name to provide information about the purpose of the study and how long it will take. Tell the respondents how important their feedback is to your organization. Conversely, incorrectly personalizing it (e.g., wrong name or title) can destroy your response rate.
- Keep it short. We recommend 10 minutes or less. It is not just about the number of questions. Avoid long essay questions or those that require significant cognitive effort.
- Make it easy to complete and return. For example, if you are sending surveys in the mail, include a self-addressed envelope with prepaid postage.
- Follow up with polite reminders via multiple modes. A couple of reminders should be sufficient without harassing the respondent. If potential respondents were initially contacted via e-mail, try contacting nonrespondents via the phone.
- Offer a small incentive. For example, offering participants a $5 coffee card for completing a survey can greatly increase the response rate. At Google, we have found that raffles to win a high-price item do not significantly increase response rates.
Satisficing
Satisficing is a decision-making strategy where individuals aim to reach satisfactory (not ideal) results by putting in just enough effort to meet some minimal threshold. This is situational, not intrinsic to the individual (Holbrook, Green, & Krosnick, 2003; Vannette & Krosnick, 2014). By that, we mean that people do not open your survey with a plan to satisfice; it happens when they encounter surveys that require too much cognitive effort (e.g., lengthy, difficult-to-answer questions, confusing questions). When respondents see a large block of rating scale questions with the same headers, they are likely to straight-line (i.e., select the same choice for all questions) rather than read and consider each option individually. Obviously, you want to make your survey as brief as possible, write clear questions, and make sure you are not asking things a participant cannot answer.
Your survey mode can also increase satisficing. Respondents in phone surveys demonstrated more satisficing behavior than in face-to-face surveys (Holbrook et al., 2003) and online surveys (Chang & Krosnick, 2009). However, within online samples, there was greater satisficing among probability panels than nonprobability panels (Yeager et al., 2011). Probability sampling (aka random sampling) means that everyone in your desired population has an equal, nonzero chance of being selected. Nonprobability sampling means that respondents are recruited from an opt-in panel that may or may not represent your desired population. Because respondents in nonprobability panels pick and choose which surveys to answer in return for an incentive, they are likely to pick only those they are interested in and comfortable answering (Callegaro et al., 2014).
Additionally, you can help avoid satisficing by offering an incentive, periodically reminding respondents how important their thoughtful responses are, and communicating when it seems they are answering the questions too quickly. When you test your survey, you can get an idea of how long each question should take to answer, and if a respondent answers too quickly, some online survey tools allow you to show a pop-up that says, "You seem to be answering these questions very quickly. Can you review your responses on this page before continuing?" If a respondent is just trying to collect the incentive for completing the survey, this notifies him or her that it cannot be done quickly. He or she may quit at this point, but that is better than collecting invalid data.
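Some of these signals can be checked mechanically after the fact. Below is a minimal, hypothetical Python sketch (the field names and the 60-second threshold are invented for illustration, not part of the chapter) that flags the two satisficing signals described above, straight-lining and implausibly fast completion:

```python
def flag_satisficers(responses, min_seconds=60):
    """responses: list of dicts like
    {"id": "r1", "ratings": [3, 3, 3, 3, 3], "seconds": 42}"""
    flagged = []
    for r in responses:
        straight_lined = len(set(r["ratings"])) == 1  # same choice on every item
        too_fast = r["seconds"] < min_seconds          # finished implausibly fast
        if straight_lined or too_fast:
            flagged.append(r["id"])
    return flagged

demo = [
    {"id": "r1", "ratings": [4, 4, 4, 4, 4], "seconds": 38},   # straight-liner, speeder
    {"id": "r2", "ratings": [2, 5, 3, 4, 1], "seconds": 310},  # looks considered
]
print(flag_satisficers(demo))  # ['r1']
```

Flagged responses are candidates for review or exclusion rather than automatic deletion; a uniform rating block can occasionally be a genuine answer.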
Acquiescence Bias
Some people are more likely to agree with any statement, regardless of what it says. This is referred to as acquiescence bias. Certain question formats are more prone to bring out this behavior than others and should be avoided (Saris, Revilla, Krosnick, & Shaeffer, 2010). These include asking respondents if or how much they agree with a statement, any binary question (e.g., true/false, yes/no), and, of course, leading questions (e.g., "How annoyed are you with online advertising?"). We will cover how to avoid these pitfalls later in the chapter.
Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780128002322000109
Choosing a User Experience Research Activity
Kathy Baxter, ... Kelly Caine, in Understanding your Users (Second Edition), 2015
Sampling Strategies
In an ideal world, a user research activity should strive to represent the thoughts and ideas of the entire user population. Ideally, an activity is conducted with a representative random sample of the population so that the results are highly predictive of the entire population. This type of sampling is done through precise and time-intensive sampling methods. In reality, this is rarely done in industry settings and only sometimes done in academic, medical, pharmaceutical, and government research.
In industry user research, convenience sampling is often used. When employed, the sample of the population used reflects those who were available (or those you had access to) at a moment in time, as opposed to selecting a truly representative sample of the population. Rather than selecting participants from the population at large, you recruit participants from a convenient subset of the population.
The unfortunate reality of convenience sampling is that you cannot be positive that the information you collect is truly representative of your entire population. We are certainly not condoning sloppy data collection, but as experienced user research professionals are aware, we must strike a balance between rigor and practicality. For example, you should not avoid doing a survey because you cannot obtain a perfect sample. However, when using a convenience sample, still try to make it as representative as you possibly can. Other nonprobability-based sampling strategies we see used are snowball sampling, where previous participants suggest new participants, and purposive sampling, where participants are selected because they have a characteristic of interest to the researcher. One problem with snowball sampling is that you tend to get self-consistent samples because people often know and suggest other potential participants who are similar to themselves.
Figure 5.2 presents a graphical representation of general guidelines about how many participants are required for each type of study, depending on the study context. An overview of sample size considerations is provided in this chapter, and a thorough discussion of the number of participants is provided in each methods chapter.
Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780128002322000055
How the relational approach was employed in this research
Susie Andretta, in Ways of Experiencing Information Literacy, 2012
Selection of participants
In this study, information management postgraduate students were selected as the target population for a number of reasons. In the first instance, the thesis focused on students in response to the call by Bruce (1997a) for further research on the relational model of information literacy from the perspective of learners, to complement her study of academics. As discussed in Chapter 2, existing studies on the relational model of information literacy seen from the students' perspective have adopted diverse sampling strategies to suit the purpose of their research. For example, Lupton (2004: 44) employed a purposive sampling procedure to recruit students from a wide range of backgrounds and explore the extent of variation in their experience of information literacy while completing an essay. In Edwards' case (2006: 63), convenience sampling was used to generate a heterogeneous sample of undergraduates (from first and final year students) and postgraduate students from six out of the eight faculties at Queensland University of Technology. Like Lupton, Edwards expected variation to be generated by the diversity of the students' backgrounds and competences in effective information use.
Convenience sampling was also employed in this research for a number of reasons. First, from a purely pragmatic point of view, the students selected for this study were particularly accessible because they attended two modules I deliver as part of the core provision of the MA in Information Services Management: the Applied Information Research (AIR) module and the dissertation. I was aware, however, of the ethical implications of adding an unequal tutor–student dynamic to the researcher–subject relationship. Second, these two modules provided an appropriate focus because they cover the research and independent learning elements of the course that are traditionally associated with information literacy education. And third, I assumed that gaining a greater understanding of the students' perceptions of and attitudes towards information literacy would enhance the provision of my two modules and further the students' development of independent learning. In other words, like Edwards' study (2006), which aimed to discover ways of improving the ability of students at Queensland University of Technology to search online, the initial motivation for this research was to improve my provision and enhance the students' independent learning experience within an academic research setting.
The targeting of postgraduate students in the information management discipline had an unforeseen consequence that had a substantial impact on the final outcome of the research. These students attend the MA in Information Services Management part-time while working full-time as librarians or information managers in a variety of information sectors. A number of authors (Arp, 1990; Bundy, 2000; Grafstein, 2002) claim that this professional group is traditionally associated with the provision of information literacy education. Therefore, because of their professional background, these students articulated multiple conceptions of information literacy, drawing on their experience of this phenomenon as individuals, students and librarians. Tables 3.4 and 3.5 illustrate the distribution of the information sectors the students work in.
2006 sample

| Student ID | Information sector |
|---|---|
| S_A | Public (library services) |
| S_B | Education (HE) |
| S_C | Education (school) |
| S_D | Education (HE) |
| S_E | Voluntary |
| S_F | Education (HE) |

HE = higher education
2007 sample

| Student ID | Information sector | Student ID | Information sector |
|---|---|---|---|
| S_1 | Education (HE) | S_12 | Public (library services) |
| S_2 (FT) | Private (financial) | S_13 | Education (HE) |
| S_3 | Education (HE) | S_14 | Public (museum) |
| S_4 | Private (financial) | S_15 | Public (cataloguing services) |
| S_5 | Education (HE) | S_16 | Education (HE) |
| S_6 | Education (FE) | S_17 | Education (HE) |
| S_7 | Private (legal) | S_18 | Public (health) |
| S_8 | Education (HE) | S_19 (FT) | Not worked as a librarian |
| S_9 | Public (cataloguing services) | S_20 (FT) | Education (HE) |
| S_10 | Public (social services) | S_21 (FT) | Education (HE) |
| S_11 | Public (library services) | | |

FT = full-time
HE = higher education
What transpires from these tables is that there is greater representation from the public and educational environments, with the latter consisting primarily of librarians working in higher education. However, as the analysis of the data in Chapter 4 demonstrates, the distribution among public and private information sectors is sufficient to enable a comparative analysis of the variation in the way information literacy is perceived from these diverse professional environments.
Read full chapter: https://www.sciencedirect.com/science/article/pii/B9781843346807500036
Cross-cultural Research Methods
F. van de Vijver, in International Encyclopedia of the Social & Behavioral Sciences, 2001
3 Sampling Cultures and Subjects
Cross-cultural studies can apply three types of sampling schemes to select cultures. The first is probability (or random) sampling. Because of the large cost of a probability sample drawn from all existing cultures, it often amounts to stratified (random) sampling of specific cultures (e.g., Western cultures). The second and most frequently observed type of culture sampling is convenience sampling. The choice of cultures is governed here by availability and cost efficiency: researchers decide to form a research network and all participants collect data in their own country. In the third type, called systematic sampling, the choice of cultures is based more on substantive considerations. A culture is deliberately chosen because of some characteristic, such as in Segall et al.'s (1966) study, in which cultures were chosen on the basis of features of the ecological environment, such as openness of the vista.
In survey research there is a well-developed theory of subject sampling (Kish 1965). In the area of cross-cultural research three types of sampling procedures of individuals are relevant as they represent different ways of dealing with confounding characteristics. The first is probability sampling. It consists of a random drawing from a list of eligible units such as persons or households. Confounding variables are not controlled for. The second type is stratified sampling. A population is stratified (e.g., in levels of schooling or socioeconomic status) and within each stratum a random sample is drawn. The purpose of stratification is the control of confounding variables (e.g., matching on number of years of schooling). The procedure cannot be taken to adequately correct for confounding variables when there is little or no overlap of the cultures (e.g., comparisons of literates and illiterates). The third procedure combines random or stratified sampling with the measurement of control variables. The procedure enables a statistical control of ambient variables (e.g., using an analysis of covariance).
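The stratified procedure can be sketched in a few lines of Python. This is a minimal illustration under invented assumptions (a frame of (person_id, years_of_schooling) pairs and a per-stratum sample size of 25), not a procedure from the encyclopedia entry:

```python
import random
from collections import defaultdict

def stratified_sample(frame, stratum_of, per_stratum):
    """Group the frame by a stratifying variable, then draw a random
    sample of equal size within each stratum."""
    strata = defaultdict(list)
    for unit in frame:
        strata[stratum_of(unit)].append(unit)
    sample = []
    for members in strata.values():
        sample.extend(random.sample(members, min(per_stratum, len(members))))
    return sample

# Hypothetical frame of (person_id, years_of_schooling) pairs.
frame = [(i, random.choice([6, 9, 12, 16])) for i in range(1000)]
matched = stratified_sample(frame, stratum_of=lambda p: p[1], per_stratum=25)
# 25 people drawn at random per schooling level, so schooling is matched
# across the groups being compared.
```

Drawing equal random samples within each stratum is what gives the control over the confounding variable described above; it only works when every culture being compared actually has members in each stratum.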
Read full chapter: https://www.sciencedirect.com/science/article/pii/B0080430767007531
General Interviewing Issues
Chauncey Wilson, in Interview Techniques for UX Practitioners, 2014
Sampling Methods
Sampling, the process of choosing the subset of people who will represent your population of users, is a complex topic that is described only briefly here. The two major types of sampling are probability and nonprobability (Bailey, 1994; Levy & Lemeshow, 1999; Robson, 2002). In probability sampling, the probability of selection of each participant is known. In nonprobability sampling, the interviewer does not know the probability that a person will be chosen from the population. Probability sampling is expensive and time-consuming and may not even be possible because there is no complete list of everyone in a population. For many interview studies, you are likely to be dealing with nonprobability samples where you can use one or a combination of the following approaches (Bailey, 1994; Robson, 2002):
- Quota sampling. You try to obtain participants in relative proportion to their presence in the population. You might, for example, try to get participants in proportion to a distribution of age ranges (a minimal sketch of this approach follows the list).
- Dimensional sampling. You try to include participants who fit the critical dimensions of your study (time spent as an architect or engineer, time using a particular product, experience with a set of software tools).
- Convenience sampling. You choose anyone who meets some basic screening criteria. Many samples in UCD are convenience samples that can be biased in subtle ways. For example, the easiest people to find might be users from favorite companies that are generally evangelists of your product. You might end up with a "positivity bias" if you use participants from your favorite companies.
- Purposive sampling. You choose people by interest, qualifications, or typicality (they fit a general profile of the types of participants who would be typical users of a product). Samples that meet the specific goals of the study are sought out. For example, if you are trying to understand how experts in a particular field work on complex projects, you might seek out the "best of the best" and use them for your interviews.
- Snowball sampling. You identify one good participant (based on your user profile or persona) who is then asked to name other potential participants, and so on. Snowball sampling is useful when there is some difficulty in identifying members of a population. For example, if you are looking for cosmologists who use complex visualization tools, you might find one and then ask him or her about any friends or colleagues in the field who might want to be interviewed.
- Extreme samples. You want people who are nontraditional or who have some exceptional knowledge that will provide an extreme or out-of-the-box perspective.
Extreme Input Can Be Useful
The use of "extremes" in user research can provide inspiration (Jansen et al., 2013) and help you understand the limits of a system. In addition to extreme samples of users, you can also explore extreme data sets that are large and dirty (something that usability research often ignores in small-scale testing) and extreme scenarios that highlight risks and rare, but critical, usage patterns.
- Heterogeneous samples. You select the widest range of people possible on the dimensions of greatest interest (e.g., you might choose people from many industries, countries, genders, and experience ranges).
For any type of user research, it is important to be explicit about your sampling method and its limitations and biases.
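As referenced in the quota sampling item above, here is a minimal Python sketch of filling quotas from a screener. The quotas, band labels, and candidate fields are invented for illustration and are not from the chapter:

```python
def fill_quotas(candidates, quotas, band_of):
    """Accept arriving candidates until each quota is full.

    candidates arrive in convenience order; quotas maps a band label
    (e.g., an age range) to a target count.
    """
    remaining = dict(quotas)
    accepted = []
    for person in candidates:
        band = band_of(person)
        if remaining.get(band, 0) > 0:
            accepted.append(person)
            remaining[band] -= 1
    return accepted

# Hypothetical targets across three age ranges.
quotas = {"18-34": 20, "35-54": 20, "55+": 10}
# participants = fill_quotas(signup_list, quotas, lambda p: p["age_band"])
```

Note that quota sampling fixes the proportions but not the selection mechanism: within each band, participants are still whoever arrived first, so the usual convenience-sample biases apply within bands.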
Read full chapter: https://www.sciencedirect.com/science/article/pii/B9780124103931000065
A systematic review of the mental health outcomes associated with Facebook use
Rachel L. Frost, Debra J. Rickwood, in Computers in Human Behavior, 2017
3.10 Methodological issues
Given that only seven studies met the criteria for a good quality rating, the findings need to be interpreted in the context of their limitations, of which selection, information, and measurement bias are most notable. Over three-quarters of the studies used convenience sampling to recruit university students, which limits generalizability. Similarly, a disproportionate number of studies included samples that overrepresented young, female, White participants from the USA. Yet the demographic profile of Facebook account holders is diverse; for example, 43% of Australians aged 65 years and over reported social networking as their most common online activity in 2014–15 (ABS, 2016). Future research would benefit from widening the sampling frame and utilizing systematic sampling techniques.
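Systematic sampling, one of the techniques the authors point to, takes every k-th member of the frame after a random start. A minimal Python sketch, with an invented frame and sizes for illustration:

```python
import random

def systematic_sample(frame, n):
    """Take every k-th member of the frame after a random start."""
    k = max(1, len(frame) // n)   # sampling interval
    start = random.randrange(k)   # random starting point within the interval
    return frame[start::k][:n]

# Hypothetical frame of 5000 account holders, sampling 200.
participants = systematic_sample(list(range(5000)), n=200)
```

Because the start is random and the interval fixed, every frame member has a known chance of selection, unlike a convenience grab; the main caveat is that the frame must not have a periodic ordering that lines up with the interval.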
Limitations were also noted regarding the approach to collecting data. Most studies used self-report approaches, which can introduce social desirability and recall bias (Althubaiti, 2016). Muench et al.'s (2015) study revealed that 26.8% of participants responded in a socially desirable manner, and social desirability exhibited the strongest relationship with the outcome variables, such that an increase in social desirability related to improved psychological health. This indicates that social desirability is an important covariate to control for in SNS research. Similarly, Junco (2013) demonstrated the presence of recall bias in the Facebook literature by comparing self-reported time spent on Facebook against actual usage, as measured by computer monitoring software. Participants reported significantly more minutes spent on Facebook per day (145) than computer estimates (26). Future research may mitigate self-report error through strategies such as participant diary entries or Facebook metrics, an online tool that automatically generates data about a participant, such as their number of Facebook friends.
Bias related to the measurement of variables was also evident. Notably, there is no consistent measure of Facebook use. Furthermore, the majority of measures employ Likert scales that are susceptible to acquiescence response bias; that is, the tendency for participants to agree with agree-disagree questions. Kuru and Pasek (2016) assessed acquiescence bias specific to Facebook use and found this bias introduced significant systematic errors. However, they demonstrated that this bias could be mitigated through the use of balanced scales, item-specific questions, and statistical corrections. There was also a concerning number of studies that controlled for few or no confounding factors. A lack of controls, particularly in cross-sectional studies, critically reduces the ability to draw reliable conclusions from the findings.
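One of the corrections mentioned above, balanced scales, can be illustrated with a minimal Python sketch (the item keys and 5-point scale are invented for illustration): reverse-worded items are re-coded before scoring, so uniform agreement no longer inflates the total.

```python
SCALE_MAX = 5                     # hypothetical 5-point agree-disagree scale
reversed_items = {"q2", "q4"}     # items worded in the opposite direction

def score(answers):
    """answers: dict like {"q1": 5, "q2": 5, "q3": 5, "q4": 5}"""
    total = 0
    for item, value in answers.items():
        if item in reversed_items:
            value = SCALE_MAX + 1 - value  # re-code reverse-worded items
        total += value
    return total

# An acquiescent respondent who answers 5 to everything scores
# 5 + 1 + 5 + 1 = 12 instead of 20, exposing the response pattern.
print(score({"q1": 5, "q2": 5, "q3": 5, "q4": 5}))  # 12
```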
Read full article: https://www.sciencedirect.com/science/article/pii/S0747563217304685
Source: https://www.sciencedirect.com/topics/computer-science/convenience-sampling