Basic Research Concepts

2005-2011 David S. Walonick, Ph.D.

All rights reserved.

Excerpts from Survival Statistics - an applied statistics book for graduate students.


Basic Types of Research Designs

Defining a research problem provides a format for further investigation. A well-defined problem points to a method of investigation. There is no one best method of research for all situations. Rather, there are a wide variety of techniques for the researcher to choose from. Often, the selection of a technique involves a series of trade-offs. For example, there is often a trade-off between cost and the quality of information obtained. Time constraints sometimes force a trade-off with the overall research design. Budget and time constraints must always be considered as part of the design process.

There are three basic methods of research: 1) survey, 2) observation, and 3) experiment. Each method has its advantages and disadvantages.

The survey is the most common method of gathering information in the social sciences. It can be a face-to-face interview, telephone, mail, or internet survey. A personal interview is one of the best methods of obtaining personal, detailed, or in-depth information. It usually involves a lengthy questionnaire that the interviewer fills out while asking questions. It allows for extensive probing by the interviewer and gives respondents the ability to elaborate on their answers. Telephone interviews are similar to face-to-face interviews. They are more efficient in terms of time and cost; however, they limit the amount of in-depth probing that can be accomplished and the amount of time that can be allocated to the interview. Mail surveys and internet surveys are generally the most cost-effective interview methods. The researcher can obtain opinions, but trying to meaningfully probe opinions is very difficult.

Observation research monitors respondents' actions without directly interacting with them. It has been used for many years by A.C. Nielsen to monitor television viewing habits. Focus groups often use one-way mirrors to study behavior. Anthropologists and social scientists often study societal and group behaviors by simply observing them. The fastest growing form of observation research has been made possible by the bar code scanners at cash registers, where purchasing habits of consumers can now be automatically monitored and summarized.

In an experiment, the investigator changes one or more variables over the course of the research. When all other variables are held constant (except the one being manipulated), changes in the dependent variable can be explained by the change in the independent variable. It is usually very difficult to control all the variables in the environment. Therefore, experiments are generally restricted to laboratory models where the investigator has more control over all the variables.

Goal Definition

Defining the goals and objectives of a research project is one of the most important steps in the research process. Do not underestimate the importance of this step. Clearly stated goals keep a research project focused. The process of goal definition usually begins by writing down the broad and general goals of the study. As the process continues, the goals become more clearly defined and the research issues are narrowed.

Research Questions, Hypotheses, and Null Hypotheses

The goals of the study are easily transformed into research questions. There are basically two kinds of research questions: testable and non-testable. Neither is better than the other, and both have a place in business research.

Examples of non-testable questions are:

What are managers' attitudes towards the revised advertising budget?

What do customers feel is a fair price range for the new product?

What do residents feel are the most important problems facing the community?

Respondents' answers to these questions could be summarized in descriptive tables, and the results might be extremely valuable to administrators and planners. Business and social science researchers often ask non-testable research questions. The shortcoming of these questions is that they do not provide objective cut-off points for decision-makers.

Business research usually seeks to answer one or more testable research questions. Nearly all testable research questions begin with one of the following two phrases:

Is there a significant difference between ...?

Is there a significant relationship between ...?

For example:

Is there a significant relationship between the corporate level of managers and their attitudes towards the revised advertising budget?

Is there a significant relationship between perceived need for the new product and the price that customers would be willing to pay for it?

Is there a significant difference between white and minority residents with respect to what they feel are the most important problems facing the community?

A research hypothesis is a testable statement of opinion. It is created from the research question by replacing the words "Is there" with the words "There is", and also replacing the question mark with a period. The hypotheses for the three sample research questions would be:

There is a significant relationship between the corporate level of managers and their attitudes towards the revised advertising budget.

There is a significant relationship between perceived need for the new product and the price that customers would be willing to pay for it.

There is a significant difference between white and minority residents with respect to what they feel are the most important problems facing the community.

It is not possible to test a hypothesis directly. Instead, you must turn the hypothesis into a null hypothesis. The null hypothesis is created from the hypothesis by adding the words "no" or "not" to the statement. For example, the null hypotheses for the three examples would be:

There is no significant relationship between the corporate level of managers and their attitudes towards the revised advertising budget.

There is no significant relationship between perceived need for the new product and the price that customers would be willing to pay for it.

There is no significant difference between white and minority residents with respect to what they feel are the most important problems facing the community.
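The question-to-hypothesis and hypothesis-to-null transformations described above are purely mechanical, which a short sketch can make concrete. This is an illustration only; the function names and the example question are hypothetical:

```python
def to_hypothesis(question: str) -> str:
    """Replace "Is there" with "There is" and the question mark with a period."""
    return question.replace("Is there", "There is", 1).rstrip("?") + "."

def to_null_hypothesis(hypothesis: str) -> str:
    """Insert "no" into the hypothesis to form the null hypothesis."""
    return hypothesis.replace("There is a significant",
                              "There is no significant", 1)

question = "Is there a significant difference between the two groups?"
hypothesis = to_hypothesis(question)
null_hypothesis = to_null_hypothesis(hypothesis)
```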

All statistical testing is done on the null hypothesis, never on the hypothesis. The result of a statistical test will enable you to either 1) reject the null hypothesis, or 2) fail to reject the null hypothesis. Never use the words "accept the null hypothesis". When you say that you "reject the null hypothesis", it means that you are reasonably certain that the null hypothesis is wrong. When you say that you "fail to reject the null hypothesis", it means that you do not have enough evidence to claim that the null hypothesis is wrong.
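As an illustration of the reject / fail-to-reject decision, here is a minimal hand-computed sketch of a pooled-variance two-sample t test in Python. The data are fabricated for the example, and 2.101 is the approximate two-tailed 5% critical value for 18 degrees of freedom; it stands in for whatever cut-off an actual study would use:

```python
import math
from statistics import mean, variance

def two_sample_t(a, b):
    """Pooled-variance two-sample t statistic (equal-variance form)."""
    na, nb = len(a), len(b)
    pooled = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(pooled * (1 / na + 1 / nb))

# Hypothetical ratings from two groups of respondents (fabricated for illustration).
group_a = [5, 6, 7, 5, 6, 8, 7, 6, 5, 7]
group_b = [4, 5, 5, 4, 6, 5, 4, 5, 6, 4]

t = two_sample_t(group_a, group_b)
critical = 2.101  # two-tailed 5% critical value for df = 18

# The test is run against the null hypothesis, never the hypothesis itself.
if abs(t) > critical:
    decision = "reject the null hypothesis"
else:
    decision = "fail to reject the null hypothesis"
```

Note that the decision is phrased as "fail to reject" rather than "accept": a non-significant result only means the evidence was insufficient.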

Validity and Reliability

Validity refers to the accuracy or truthfulness of a measurement. Are we measuring what we think we are? This is a simple concept, but in reality, it is extremely difficult to determine if a measure is valid. Generally, validity is based solely on the judgment of the researcher. When an instrument is developed, each question is scrutinized and modified until the researcher is satisfied that it is an accurate measure of the desired construct, and that there is adequate coverage of each area to be investigated.

Reliability is synonymous with repeatability. A measurement that yields consistent results over time is said to be reliable. When a measurement is prone to random error, it lacks reliability. The reliability of an instrument places an upper limit on its validity. A measurement that lacks reliability will also lack validity. There are three basic methods to test reliability: test-retest, equivalent form, and internal consistency.

A test-retest measure of reliability can be obtained by administering the same instrument to the same group of people at two different points in time. The degree to which both administrations are in agreement is a measure of the reliability of the instrument. This technique for assessing reliability suffers two possible drawbacks. First, a person may have changed between the first and second measurement. Second, the initial administration of an instrument might in itself induce a person to answer differently on the second administration.
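The degree of agreement between the two administrations is typically expressed as a correlation coefficient. The following sketch computes a Pearson correlation between two administrations of the same instrument; the respondent scores are hypothetical:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two administrations of the same instrument."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical scores: the same five respondents, measured at two points in time.
time1 = [10, 12, 15, 9, 14]
time2 = [11, 12, 14, 10, 15]

test_retest_reliability = pearson_r(time1, time2)
```

A coefficient near 1.0 indicates that the two administrations agree closely, i.e., high test-retest reliability.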

The second method of determining reliability is called the equivalent-form technique. The researcher creates two different instruments designed to measure identical constructs. The degree of correlation between the instruments is a measure of equivalent-form reliability. The difficulty in using this method is that it may be very difficult (and/or prohibitively expensive) to create a totally equivalent instrument.

The most popular methods of estimating reliability use measures of internal consistency. When an instrument includes a series of questions designed to examine the same construct, the questions can be arbitrarily split into two groups. The correlation between the two subsets of questions is called the split-half reliability. The problem is that this measure of reliability changes depending on how the questions are split. A better statistic, known as Cronbach's alpha, is based on the mean (absolute value) interitem correlation for all possible variable pairs. It provides a conservative estimate of reliability, and generally represents the lower bound to the reliability of a scale of items. For dichotomous nominal data, the KR-20 (Kuder-Richardson) is used instead of Cronbach's alpha.
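Cronbach's alpha can be computed directly from the item scores. A common formula compares the sum of the individual item variances to the variance of the respondents' total scores. The sketch below uses that formula with hypothetical data (four items, five respondents, all fabricated for illustration):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of scores per question, aligned by respondent.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
    """
    k = len(items)
    item_vars = sum(pvariance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]  # each respondent's total
    return k / (k - 1) * (1 - item_vars / pvariance(totals))

# Hypothetical 1-5 ratings: four questions measuring the same construct.
items = [[3, 4, 5, 2, 4],
         [3, 5, 5, 2, 3],
         [4, 4, 5, 1, 4],
         [3, 5, 4, 2, 5]]

alpha = cronbach_alpha(items)
```

Because alpha is a conservative estimate, the true reliability of the scale is at least this high.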

Variability and Error

Most research is an attempt to understand and explain variability. Variability refers to the dispersion of scores. If every respondent gives the same answer to an item, there is no variability, and when a measurement lacks variability, no statistical tests can be (or need be) performed. If there is great diversity in respondents' answers, then we say that there is high variability.
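Dispersion is usually quantified as a variance or standard deviation. A small sketch, using hypothetical responses to a single 1-5 survey item, shows the two extremes described above:

```python
from statistics import pvariance

# Hypothetical responses to one survey item on a 1-5 scale.
no_spread = [4, 4, 4, 4, 4]    # every respondent gave the same answer
high_spread = [1, 5, 2, 5, 1]  # great diversity in answers

zero_variability = pvariance(no_spread)   # 0: nothing to test statistically
high_variability = pvariance(high_spread)
```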

Ideally, when a researcher finds differences between respondents, they are due to true difference on the variable being measured. However, the combination of systematic and random errors can dilute the accuracy of a measurement. Systematic error is introduced through a constant bias in a measurement. It can usually be traced to a fault in the sampling procedure or in the design of a questionnaire. Random error does not occur in any consistent pattern, and it is not controllable by the researcher.


Copyright 2014 StatPac Inc., All Rights Reserved