Applied Statistics I: Basic Bivariate Techniques / Edition 3 available in Paperback, eBook
by Rebecca M. Warner
- ISBN-10: 1506352804
- ISBN-13: 9781506352800
- Pub. Date: 02/14/2020
- Publisher: SAGE Publications
Price: $151.00 new; $78.75 used.
Temporarily Out of Stock Online. Please check back later for updated availability.
Overview
Rebecca M. Warner's bestselling Applied Statistics: From Bivariate Through Multivariate Techniques has been split into two volumes for ease of use over a two-course sequence. Applied Statistics I: Basic Bivariate Techniques, Third Edition is an introductory statistics text based on chapters from the first half of the original book. The author's contemporary approach reflects current thinking in the field, with its coverage of the "new statistics" and reproducibility in research. Her in-depth presentation of introductory statistics follows a consistent chapter format, includes some simple hand calculations along with detailed instructions for SPSS, and helps students understand statistics in the context of real-world research through interesting examples. Datasets are provided on an accompanying website.
Product Details
- ISBN-13: 9781506352800
- Publisher: SAGE Publications
- Publication date: 02/14/2020
- Edition description: Third Edition
- Pages: 648
- Product dimensions: 8.00(w) x 10.00(h) x (d)
About the Author
Rebecca M. Warner received a B.A. from Carnegie-Mellon University in Social Relations in 1973 and a Ph.D. in Social Psychology from Harvard in 1978. She has taught statistics for more than 25 years: from Introductory and Intermediate Statistics to advanced topics seminars in Multivariate Statistics, Structural Equation Modeling, and Time Series Analysis. She is currently a Full Professor in the Department of Psychology at the University of New Hampshire. She is a Fellow in the Association for Psychological Science and a member of the American Psychological Association, the International Association for Relationships Research, the Society of Experimental Social Psychology, and the Society for Personality and Social Psychology. She has consulted on statistics and data management for the World Health Organization in Geneva and served as a visiting faculty member at Shandong Medical University in China.
Table of Contents
Preface
Acknowledgments
About the Author

1. Evaluating Numerical Information
Introduction; Guidelines for Numeracy; Source Credibility; Message Content; Evaluating Generalizability; Making Causal Claims; Quality Control Mechanisms in Science; Biases of Information Consumers; Ethical Issues in Data Collection and Analysis; Lying with Graphs and Statistics; Degrees of Belief; Summary

2. Basic Research Concepts
Introduction; Types of Variables; Independent and Dependent Variables; Typical Research Questions; Conditions for Causal Inference; Experimental Research Design; Nonexperimental Research Design; Quasi-Experimental Research Designs; Other Issues in Design and Analysis; Choice of Statistical Analysis (Preview); Populations and Samples: Ideal Versus Actual Situations; Common Problems in Interpretation of Results; Appendix 2A: More About Levels of Measurement; Appendix 2B: Justification for the Use of Likert and Other Rating Scales as Quantitative Variables (in Some Situations)

3. Frequency Distribution Tables
Introduction; Use of Frequency Tables for Data Screening; Frequency Tables for Categorical Variables; Elements of Frequency Tables; Using SPSS to Obtain a Frequency Table; Mode, Impossible Score Values, and Missing Values; Reporting Data Screening for Categorical Variables; Frequency Tables for Quantitative Variables; Frequency Tables for Categorical Versus Quantitative Variables; Reporting Data Screening for Quantitative Variables; What We Hope to See in Frequency Tables for Categorical Variables; What We Hope to See in Frequency Tables for Quantitative Variables; Summary; Appendix 3A: Getting Started in IBM SPSS® Version 25; Appendix 3B: Missing Values in Frequency Tables; Appendix 3C: Dividing Scores Into Groups or Bins

4. Descriptive Statistics
Introduction; Questions About Quantitative Variables; Notation; Sample Median; Sample Mean (M); An Important Characteristic of M: The Sum of Deviations From M = 0; Disadvantage of M: It Is Not Robust Against Influence of Extreme Scores; Behavior of Mean, Median, and Mode in Common Real-World Situations; Choosing Among Mean, Median, and Mode; Using SPSS to Obtain Descriptive Statistics for a Quantitative Variable; Minimum, Maximum, and Range: Variation Among Scores; The Sample Variance s2; Sample Standard Deviation (s or SD); How a Standard Deviation Describes Variation Among Scores in a Frequency Table; Why Is There Variance?; Reports of Descriptive Statistics in Journal Articles; Additional Issues in Reporting Descriptive Statistics; Summary; Appendix 4A: Order of Arithmetic Operations; Appendix 4B: Rounding

5. Graphs: Bar Charts, Histograms, and Boxplots
Introduction; Pie Charts for Categorical Variables; Bar Charts for Frequencies of Categorical Variables; Good Practice for Construction of Bar Charts; Deceptive Bar Graphs; Histograms for Quantitative Variables; Obtaining a Histogram Using SPSS; Describing and Sketching Bell-Shaped Distributions; Good Practices in Setting Up Histograms; Boxplot (Box and Whiskers Plot); Telling Stories About Distributions; Uses of Graphs in Actual Research; Data Screening: Separate Bar Charts or Histograms for Groups; Use of Bar Charts to Represent Group Means; Other Examples; Summary

6. The Normal Distribution and z Scores
Introduction; Locations of Individual Scores in Normal Distributions; Standardized or z Scores; Converting z Scores Back Into X Units; Understanding Values of z; Qualitative Description of Normal Distribution Shape; More Precise Description of Normal Distribution Shape; Areas Under the Normal Distribution Curve Can Be Interpreted as Probabilities; Reading Tables of Areas for the Standard Normal Distribution; Dividing the Normal Distribution Into Three Regions: Lower Tail, Middle, Upper Tail; Outliers Relative to a Normal Distribution; Summary of First Part of Chapter; Why We Assess Distribution Shape; Departure From Normality: Skewness; Another Departure From Normality: Kurtosis; Overall Normality; Practical Recommendations for Preliminary Data Screening and Descriptions of Scores for Quantitative Variables; Reporting Information About Distribution Shape, Missing Values, Outliers, and Descriptive Statistics for Quantitative Variables; Summary; Appendix 6A: The Mathematics of the Normal Distribution; Appendix 6B: How to Select and Remove Outliers in SPSS; Appendix 6C: Quantitative Assessments of Departure From Normality; Appendix 6D: Why Are Some Real-World Variables Approximately

7. Sampling Error and Confidence Intervals
Descriptive Versus Inferential Uses of Statistics; Notation for Samples Versus Populations; Sampling Error and the Sampling Distribution for Values of M; Prediction Error; Sample Versus Population (Revisited); The Central Limit Theorem: Characteristics of the Sampling Distribution of M; Factors That Influence Population Standard Error (σM); Effect of N on Value of the Population Standard Error; Describing the Location of a Single Outcome for M Relative to Population Sampling Distribution (Setting Up a z Ratio); What We Do When σ Is Unknown; The Family of t Distributions; Tables for t Distributions; Using Sampling Error to Set Up a Confidence Interval; How to Interpret a Confidence Interval; Empirical Example: Confidence Interval for Body Temperature; Other Applications for Confidence Intervals; Error Bars in Graphs of Group Means; Summary

8. The One-Sample t Test: Introduction to Statistical Significance Tests
Introduction; Significance Tests as Yes/No Questions About Proposed Values of Population Means; Stating a Null Hypothesis; Selecting an Alternative Hypothesis; The One-Sample t Test; Choosing an Alpha (α) Level; Specifying Reject Regions on the Basis of α, Halt, and df; Questions for the One-Sample t Test; Assumptions for the Use of the One-Sample t Test; Rules for the Use of NHST; First Analysis of Mean Driving Speed Data (Using a Nondirectional Test); SPSS Analysis: One-Sample t Test for Mean Driving Speed (Using a Nondirectional or Two-Tailed Test); "Exact" p Values; Reporting Results for a Two-Tailed One-Sample t Test; Second Analysis of Driving Speed Data Using a One-Tailed or Directional Test; Reporting Results for a One-Tailed One-Sample t Test; Advantages and Disadvantages of One-Tailed Tests; Traditional NHST Versus New Statistics Recommendations; Things You Should Not Say About p Values; Summary

9. Issues in Significance Tests: Effect Size, Statistical Power, and Decision Errors
Beyond p Values; Cohen's d: An Effect Size Index; Factors That Affect the Size of t Ratios; Statistical Significance Versus Practical Importance; Statistical Power; Type I and Type II Decision Errors; Meanings of "Error"; Use of NHST in Exploratory Versus Confirmatory Research; Inflated Risk for Type I Decision Error for Multiple Tests; Interpretation of Null Outcomes; Interpretation of Statistically Significant Outcomes; Understanding Past Research; Planning Future Research; Guidelines for Reporting Results; What You Cannot Say; Summary; Appendix 9A: Further Explanation of Statistical Power

10. Bivariate Pearson Correlation
Research Situations Where Pearson's r Is Used; Correlation and Causal Inference; How Sign and Magnitude of r Describe an X, Y Relationship; Setting Up Scatterplots; Most Associations Are Not Perfect; Different Situations in Which r = .00; Assumptions for Use of Pearson's r; Preliminary Data Screening for Pearson's r; Effect of Extreme Bivariate Outliers; Research Example; Data Screening for Research Example; Computation of Pearson's r; How Computation of Correlation Is Related to Pattern of Data Points in the Scatterplot; Testing the Hypothesis That ρ0 = 0; Reporting Many Correlations and Inflated Risk for Type I Error; Obtaining Confidence Intervals for Correlations; Pearson's r and r2 as Effect Sizes and Partition of Variance; Statistical Power and Sample Size for Correlation Studies; Interpretation of Outcomes for Pearson's r; SPSS Example: Relationship Survey; Results Sections for One and Several Pearson's r Values; Reasons to Be Skeptical of Correlations; Summary; Appendix 10A: Nonparametric Alternatives to Pearson's r; Appendix 10B: Setting Up a 95% CI for Pearson's r by Hand; Appendix 10C: Testing Significance of Differences Between Correlations; Appendix 10D: Some Factors That Artifactually Influence Magnitude of r; Appendix 10E: Analysis of Nonlinear Relationships; Appendix 10F: Alternative Formula to Compute Pearson's r

11. Bivariate Regression
Research Situations Where Bivariate Regression Is Used; New Information Provided by Regression; Regression Equations and Lines; Two Versions of Regression Equations; Steps in Regression Analysis; Preliminary Data Screening; Formulas for Bivariate Regression Coefficients; Statistical Significance Tests for Bivariate Regression; Confidence Intervals for Regression Coefficients; Effect Size and Statistical Power; Empirical Example Using SPSS: Salary Data; SPSS Output: Salary Data; Results Section: Hypothetical Salary Data; Plotting the Regression Line: Salary Data; Using a Regression Equation to Predict Score for Individual (Joe's Heart Rate Data); Partition of Sums of Squares in Bivariate Regression; Why Is There Variance (Revisited)?; Issues in Planning a Bivariate Regression Study; Plotting Residuals; Standard Error of the Estimate; Summary; Appendix 11A: Review: How to Graph a Line From Two Points Obtained From an Equation; Appendix 11B: OLS Derivation of Equation for Regression Coefficients; Appendix 11C: Alternative Formula for Computation of Slope; Appendix 11D: Fully Worked Example: Deviations and SS

12. The Independent-Samples t Test
Research Situations Where the Independent-Samples t Test Is Used; Hypothetical Research Example; Assumptions for Use of Independent-Samples t Test; Preliminary Data Screening: Evaluating Violations of Assumptions and Getting to Know Your Data; Computation of Independent-Samples t Test; Statistical Significance of Independent-Samples t Test; Confidence Interval Around M1 – M2; SPSS Commands for Independent-Samples t Test; SPSS Output for Independent-Samples t Test; Effect Size Indexes for t; Factors That Influence the Size of t; Results Section; Graphing Results: Means and CIs; Decisions About Sample Size for the Independent-Samples t Test; Issues in Designing a Study; Summary; Appendix 12A: A Nonparametric Alternative to the Independent-Samples t Test

13. One-Way Between-Subjects Analysis of Variance
Research Situations Where One-Way ANOVA Is Used; Questions in One-Way Between-S ANOVA; Hypothetical Research Example; Assumptions and Data Screening for One-Way ANOVA; Computations for One-Way Between-S ANOVA; Patterns of Scores and Magnitudes of SSbetween and SSwithin; Confidence Intervals for Group Means; Effect Sizes for One-Way Between-S ANOVA; Statistical Power Analysis for One-Way Between-S ANOVA; Planned Contrasts; Post Hoc or "Protected" Tests; One-Way Between-S ANOVA in SPSS; Output From SPSS for One-Way Between-S ANOVA; Reporting Results From One-Way Between-S ANOVA; Issues in Planning a Study; Summary; Appendix 13A: ANOVA Model and Division of Scores Into Components; Appendix 13B: Expected Value of F When H0 Is True; Appendix 13C: Comparison of ANOVA and t Test; Appendix 13D: Nonparametric Alternative to One-Way Between-S ANOVA: Independent-Samples Kruskal-Wallis Test

14. Paired-Samples t Test
Independent- Versus Paired-Samples Designs; Between-S and Within-S or Paired-Groups Designs; Types of Paired Samples; Hypothetical Study: Effects of Stress on Heart Rate; Review: Data Organization for Independent Samples; New: Data Organization for Paired Samples; A First Look at Repeated-Measures Data; Calculation of Difference (d) Scores; Null Hypothesis for Paired-Samples t Test; Assumptions for Paired-Samples t Test; Formulas for Paired-Samples t Test; SPSS Paired-Samples t Test Procedure; Comparison Between Results for Independent-Samples and Paired-Samples t Tests; Effect Size and Power; Some Design Problems in Repeated-Measures Analyses; Results for Paired-Samples t Test: Stress and Heart Rate; Further Evaluation of Assumptions; Summary; Appendix 14A: Nonparametric Alternative to Paired-Samples t: Wilcoxon Signed Rank Test

15. One-Way Repeated-Measures Analysis of Variance
Introduction; Null Hypothesis for Repeated-Measures ANOVA; Preliminary Assessment of Repeated-Measures Data; Computations for One-Way Repeated-Measures ANOVA; Use of SPSS Reliability Procedure for One-Way Repeated-Measures ANOVA; Partition of SS in Between-S Versus Within-S ANOVA; Assumptions for Repeated-Measures ANOVA; Choices of Contrasts in GLM Repeated Measures; SPSS GLM Procedure for Repeated-Measures ANOVA; Output of GLM Repeated-Measures ANOVA; Paired-Samples t Tests as Follow-Up; Results; Effect Size; Statistical Power; Counterbalancing in Repeated-Measures Studies; More Complex Designs; Summary; Appendix 15A: Test for Person-by-Treatment Interaction; Appendix 15B: Nonparametric Analysis for Repeated Measures (Friedman Test)

16. Factorial Analysis of Variance
Research Situations Where Factorial Design Is Used; Questions in Factorial ANOVA; Null Hypotheses in Factorial ANOVA; Screening for Violations of Assumptions; Hypothetical Research Situation; Computations for Between-S Factorial ANOVA; Computation of SS and df in Two-Way Factorial ANOVA; Effect Size Estimates for Factorial ANOVA; Statistical Power; Follow-Up Tests; Factorial ANOVA Using the SPSS GLM Procedure; SPSS Output; Results; Design Decisions and Magnitudes of SS Terms; Summary; Appendix 16A: Fixed Versus Random Factors; Appendix 16B: Weighted Versus Unweighted Means; Appendix 16C: Unequal Cell n's in Factorial ANOVA: Computing Adjusted Sums of Squares; Appendix 16D: Model for Factorial ANOVA; Appendix 16E: Computation of Sums of Squares by Hand

17. Chi-Square Analysis of Contingency Tables
Evaluating Association Between Two Categorical Variables; First Example: Contingency Tables for Titanic Data; What Is Contingency?; Conditional and Unconditional Probabilities; Null Hypothesis for Contingency Table Analysis; Second Empirical Example: Dog Ownership Data; Preliminary Examination of Dog Ownership Data; Expected Cell Frequencies If H0 Is True; Computation of Chi-Squared Significance Test; Evaluation of Statistical Significance of χ2; Effect Sizes for Chi-Squared; Chi-Squared Example Using SPSS; Output From Crosstabs Procedure; Reporting Results; Assumptions and Data Screening for Contingency Tables; Other Measures of Association for Contingency Tables; Summary; Appendix 17A: Margin of Error for Percentages in Surveys; Appendix 17B: Contingency Tables With Repeated Measures: McNemar Test; Appendix 17C: Fisher Exact Test; Appendix 17D: How Marginal Distributions for X and Y Constrain Maximum Value of φ; Appendix 17E: Other Uses of χ2

18. Selection of Bivariate Analyses and Review of Key Concepts
Selecting Appropriate Bivariate Analyses; Types of Independent and Dependent Variables (Categorical Versus Quantitative); Parametric Versus Nonparametric Analyses; Comparisons of Means or Medians Across Groups (Categorical IV and Quantitative DV); Problems With Selective Reporting of Evidence and Analyses; Limitations of Statistical Significance Tests and p Values; Statistical Versus Practical Significance; Generalizability Issues; Causal Inference; Results Sections; Beyond Bivariate Analyses: Adding Variables; Some Multivariable or Multivariate Analyses; Degree of Belief

Appendices
Appendix A: Proportions of Area Under a Standard Normal Curve
Appendix B: Critical Values for t Distribution
Appendix C: Critical Values of F
Appendix D: Critical Values of Chi-Square
Appendix E: Critical Values of the Pearson Correlation Coefficient
Appendix F: Critical Values of the Studentized Range Statistic
Appendix G: Transformation of r (Pearson Correlation) to Fisher's Z

Glossary
References
Index