Reliability and validity are essential aspects of research in the human services field; without them, researchers' results would be useless. This paper defines the types of reliability and validity and gives examples of each, presents examples of data collection methods and data collection instruments used in human services and managerial research, and explains why it is important to ensure that these methods and instruments are both reliable and valid.

Reliability is the consistency of a measurement, or the degree to which an instrument measures the same way each time it is used under the same conditions with the same subjects. In short, it is the repeatability of a measurement. A measure is considered reliable if a person's score on the same test given twice is similar (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999).


It is important to remember that reliability is not measured; it is estimated. For example, on a reliable test, a student would expect to receive the same grade regardless of when the student completed the assessment, when the answers were scored, and who scored them. On an unreliable examination, a student's grade or score may differ based on factors that are not related to the purpose of the test or survey.

Reliability is estimated in two main ways: test/retest and internal consistency. A test/retest estimate administers the instrument to each subject at two separate times, documents the correlation between the two measurements, and assumes there is no change in the underlying condition between administrations of the survey, questionnaire, or test. Internal consistency, by contrast, examines how closely the items within a single administration agree with one another. Validity is a separate criterion: in the most general terms, it shows how well the measure or design does what it purports to do.
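The test/retest estimate amounts to correlating two administrations of the same instrument. The sketch below illustrates this with a Pearson correlation computed in plain Python; the subject scores are hypothetical, invented only for illustration.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: the same five subjects tested on two occasions.
time1 = [82, 75, 90, 68, 77]
time2 = [80, 78, 92, 65, 79]

# A correlation near 1.0 suggests the instrument is repeatable.
reliability = pearson_r(time1, time2)
```

A value close to 1.0 would support a claim of test/retest reliability, while a low correlation would suggest the scores depend on factors other than what the instrument is meant to measure.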

The measure in question might be a psychological test of some kind, a group of judges who rate things, a functional MRI scanner for monitoring brain activity, or any other instrument or measuring tool. Consider an aptitude test that is designed to predict whether applicants to law school will succeed if admitted. We would be interested in the test's criterion validity, as it would tell us how well scores on the test are correlated with the particular criterion of success used to assess it. We would also be interested in the test's construct validity, as it provides assurance that we are measuring the concept (or construct) in question. There are other uses of validity that are of interest to us as well, such as the test's content validity, or how adequately it has sampled the universe of content it purports to measure. The concept of validity also has several different uses in research design, as specific experimental and nonexperimental designs each fulfill their function more or less well.

There are eight types of validity, as follows. Construct validity: the degree to which the conceptualization of what is being measured or experimentally manipulated is what is claimed, such as the constructs that are measured by psychological tests or that serve as a link between independent and dependent variables. Content validity: the adequate sampling of the relevant material or content that a test purports to measure.

Convergent and discriminant validity: the grounds established for a construct based on the convergence of related tests or behavior (convergent validity) and the distinctiveness of unrelated tests or behavior (discriminant validity). Criterion validity: the degree to which a test or questionnaire is correlated with outcome criteria in the present (its concurrent validity) or the future (its predictive validity). External validity: the generalizability of an inferred causal relationship over different people, settings, manipulations (or treatments), and research outcomes.

Face validity: the degree to which a test or other instrument "looks as if" it is measuring something relevant. Internal validity: the soundness of statements about whether one variable is the cause of a particular outcome, especially the ability to rule out plausible rival hypotheses. Statistical-conclusion validity: the accuracy of drawing certain statistical conclusions, such as an estimation of the magnitude of the relationship between an independent and a dependent variable (the effect size) or an estimation of the degree of statistical significance of a particular statistical test.
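To make the effect-size idea behind statistical-conclusion validity concrete, the sketch below computes Cohen's d, one common standardized measure of the magnitude of the relationship between an independent variable (group membership) and a dependent variable (outcome score). The group scores are hypothetical, used only for illustration.

```python
import math

def cohens_d(group_a, group_b):
    """Cohen's d: standardized mean difference using the pooled SD."""
    na, nb = len(group_a), len(group_b)
    ma, mb = sum(group_a) / na, sum(group_b) / nb
    va = sum((x - ma) ** 2 for x in group_a) / (na - 1)  # sample variance
    vb = sum((x - mb) ** 2 for x in group_b) / (nb - 1)
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return (ma - mb) / pooled_sd

# Hypothetical outcome scores for a treatment group and a control group.
treatment = [14, 16, 15, 18, 17]
control = [12, 13, 11, 14, 12]

effect_size = cohens_d(treatment, control)
```

Reporting an effect size alongside a significance test gives a reader the magnitude of a relationship, not just whether it is statistically detectable.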

Two data collection methods used in human services research are surveys and observations. Surveys (questionnaires, interviews, and standardized instruments or scales) are used to learn what people think, for example to identify relationships between motivation and satisfaction. Observations (interpretive, ethnographic, participant-observer, and case-study approaches) show how people behave and interact, for example in public open spaces.

Examples of data collection instruments used in human services research come from both qualitative and quantitative research methodologies, including experiments with random treatment assignment, quasi-experiments using nonrandomized treatments, and surveys, which may be cross-sectional or longitudinal. It is very important to ensure that these data collection methods and instruments are both reliable and valid because each method has its own strengths and weaknesses. When designing a research study, it is important to decide what outcome (data) the study should produce and then select the best methodology to produce that desired information.