The influence of relational complexity and strategy selection on children's reasoning in the Latin Square Task

Cognitive Development 26 (2011) 127–141
Contents lists available at ScienceDirect
doi:10.1016/j.cogdev.2010.12.003

Patrick Perret*, Christine Bailleux, Bruno Dauvier
University of Provence, PsyCLE Center, France

* Corresponding author at: Centre PsyCLE, Université de Provence, 29, Avenue Robert Schuman, 13621 Aix-en-Provence Cedex 1, France. E-mail address: Patrick.Perret@univ-provence.fr (P. Perret).

Keywords: Relational complexity theory; Reasoning; Strategy selection; Cognitive development; Latin Square Task

Abstract

The present study focused on children's deductive reasoning when performing the Latin Square Task, an experimental task designed to explore the influence of relational complexity. Building on Birney, Halford, and Andrews' (2006) research, we created a version of the task that minimized nonrelational factors and introduced new categories of items. The results of two experiments conducted with school-aged children yielded an apparent dilution of complexity effects and suggest that specific inferential strategies can reduce the relational complexity that children need to process. A theoretical account is proposed emphasizing the influence of the adaptive selection of strategies that mediate processing capacity constraints in reasoning development. © 2011 Elsevier Inc. All rights reserved.

1. Introduction

1.1. Relational complexity and the development of reasoning

The present study explored the influence of relational complexity on children's deductive reasoning. Recent perspectives in adult cognitive psychology (Oberauer, Süß, Wilhelm, & Wittmann, 2008) suggest that the relational integration component of working memory constitutes a central mediator of human reasoning. In developmental research, this view also lies at the heart of relational complexity (RC) theory (Halford, Wilson, & Phillips, 1998), which states that growth in relational processing capacity with age increases the complexity of the mental models that children can form. In turn, these
changes in the complexity of mental models (from which inferences are derived) increase children's understanding of the world, through conceptual refinements and the development of reasoning. In line with other contemporary views of reasoning (Johnson-Laird, 1983), RC theory is thus based on the assumption that, when solving problems, the human mind constructs mental models intended to represent the structure of the relations involved in these problems. A second core assumption is that limited processing capacity in working memory limits the complexity of these models. The relational complexity metric applies both to the structural properties of a problem and to individual processing capacity. Relational complexity in RC theory is defined as the number of variables (or arguments) that must be related within the same cognitive representation. Unary relations are based on a single variable, binary relations on two variables, ternary relations on three, and so on.

Halford, Baker, McCredden, and Bain (2005) established that quaternary relations are the most complex relations that adults can mentally represent and constitute the upper limit of human processing capacity. However, RC theory has identified two mechanisms that can help individuals sidestep this processing barrier: segmentation and chunking. Segmentation consists in breaking excessively complex tasks down into several steps, so as to reduce the relational complexity involved at each step. Chunking consists in compressing two or more variables into one. This mechanism both reduces the processing load and allows the newly compressed variables to form a single argument in a higher-order relation.
RC theory regards segmentation and chunking as important components of expertise in a particular conceptual domain or a specific type of task.

According to RC theory, age-related changes in processing capacity are a crucial factor in cognitive development, but not the only one. Halford (1999) has repeatedly stressed that RC theory does not deny the role played by knowledge or experience. Processing capacity should be regarded as an enabling factor that gradually broadens the horizons of conceptual development and reasoning. Developmental changes in processing capacity are thought to occur according to a roughly predictable timetable, with children becoming able to process unary relations at a median age of one year, binary relations at two years, ternary relations at five years, and quaternary relations at 11 years (Andrews & Halford, 2002). This gradual growth in processing capacity increases the number of variables that children can relate in their mental models, thereby allowing them to represent the structure of increasingly complex problems. As a consequence, their inferences in reasoning can rely on more accurate and adequate representations of the relational systems they are dealing with.

RC theory thus provides a clear framework for predicting children's performance on reasoning tasks. Performance should be determined mainly by the match (or mismatch) between the child's processing capacity and the task's relational complexity. Halford and Andrews (2004) revisited well-known developmental tasks (class inclusion, transitive inference, theory of mind) and showed that accurate analysis of relational complexity can help to explain both early successes and late failures on variants of these tasks. More recently, Birney, Halford, and Andrews (2006) developed a new experimental task, the Latin Square Task, explicitly derived from RC theory and designed to test its predictions about the influence of relational complexity on deductive reasoning.
1.2. The Latin Square Task

The Latin Square Task (LST) is based on a matrix of 16 cells (a 4 × 4 structure) that can be filled with four different geometric shapes. The defining principle is that each shape should appear only once in each row or column. Participants are confronted with an incomplete matrix and asked to determine which of the shapes should be placed in a target cell. The items are designed so that the information already present in the relevant rows or columns of the array can direct a participant's inferences toward the right conclusion. In its defining principle, the task resembles Sudoku problems and, as such, constitutes a "puzzle of pure deduction" (Lee, Goodwin, & Johnson-Laird, 2008).

The LST has several important qualities with regard to RC theory requirements. First, the deductive mechanisms activated in this task are largely free from the influence of prior conceptual knowledge, pragmatic schemas, or innate modules, which are known to affect human reasoning and are often difficult to disentangle in performance analyses. Second, the task minimizes the amount of information to be held in memory and consequently maximizes the role of the processing (as opposed to storage) component of working memory in the determination of performance. Third, it relies on a single, simple rule (suitable for a broad range of ages and abilities) that can be applied to items of varying complexity. The relational complexity of LST items is manipulated by controlling the number of rows and columns that need to be simultaneously considered in order to choose the right shape for the target cell. "Binary items require integration of elements within either a single column or row . . . Ternary items require integration of information from both a row and column . . .
For the quaternary items, solution is achieved by integrating elements across multiple rows and columns that are not necessarily fully constrained by a simple intersection" (Birney et al., 2006, pp. 150–151).

Birney et al. (2006) studied the influence of relational complexity on the reasoning performance of university students and children aged 9–16. Participants completed a total of 18 items, with six items per RC category (binary, ternary, or quaternary). Results, based on regression and Rasch analyses, confirmed the predictions of RC theory: items of greater complexity were associated with more errors and longer response times.

However, the authors highlighted two instructive methodological difficulties. First, on the basis of empirical data, some items called for a reclassification (e.g., from quaternary to ternary), as participants could follow a valid but unanticipated reasoning pathway to find the solution. Using this alternative route of serial inferences meant that they encountered lower levels of relational complexity than the authors had envisaged when they designed the items. This phenomenon draws our attention to a crucial aspect of RC analysis, in that it applies to cognitive processes rather than to the task itself.
As long as the items offer several possible paths to solution, accurately estimating the level of relational complexity participants are actually dealing with remains an uncertain enterprise.

Second, despite a clear complexity effect, the results indicated that factors other than dimensionality significantly contributed to the variations in performance on the LST. Response times were affected not just by RC but also by the number of empty cells in the matrix. Furthermore, the number of processing steps required to reach the solution significantly influenced both error rates and response times. For some items, intermediate empty cells had to be dealt with before the final inference concerning the target cell could be generated; other items were single-step problems. Although this serial parameter was not explicitly controlled for in the generation of items in Birney et al.'s (2006) study, appropriate statistical analyses revealed that the ability to undertake multistep reasoning made an important contribution to performance, above and beyond processing capacity. A recent study by Zhang, Xin, Lin, and Li (2009) confirmed that the number of processing steps required to find a solution in the LST accounts for a major proportion of the variability in item difficulty.

On the basis of Birney et al.'s (2006) results, we designed a new version of the LST in order to control for these nonrelational factors and to further explore the influence of complexity on children's reasoning. Experiment 1 introduces this new version of the task and reports the performance of a sample of school-aged children.
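To make the complexity categories concrete, the deduction required by LST items can be sketched in a few lines of code. This is our own illustrative sketch, not material from the paper; the shape names and helper functions are hypothetical.

```python
# Illustrative sketch (not from the paper): how the Latin square rule
# determines the target cell at different levels of relational complexity.
# None marks an empty cell; the shape vocabulary is a hypothetical choice.
SHAPES = {"square", "circle", "cross", "triangle"}

def solve_binary(line):
    """Binary item: a single row (or column) already shows three of the
    four shapes, so that line alone determines the empty target cell."""
    missing = SHAPES - {cell for cell in line if cell is not None}
    assert len(missing) == 1, "a binary item leaves exactly one shape open"
    return missing.pop()

def solve_secant_ternary(row, col):
    """Secant ternary item: the target sits at the intersection of a row
    and a column; shapes visible in either line are excluded, and only
    integrating both constraints leaves a single candidate."""
    seen = {cell for cell in row + col if cell is not None}
    candidates = SHAPES - seen
    assert len(candidates) == 1, "row and column jointly determine the cell"
    return candidates.pop()

# Binary: the target is the second cell of a single row.
print(solve_binary(["circle", None, "triangle", "square"]))   # cross
# Secant ternary: two shapes in the row, one more in the column.
print(solve_secant_ternary(["circle", "cross", None, None],
                           ["triangle", None, None, None]))   # square
```

Note that neither line alone suffices in the ternary case: the row leaves two candidates and the column leaves three, which is exactly the sense in which the information must be integrated in parallel.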
2. Experiment 1

The aim of Experiment 1 was to study the effect of relational complexity on children's performance after controlling for the effects of the nonrelational factors identified in Birney et al.'s (2006) study. To this end, three main changes were made to the original task:

(1) the number of empty cells was kept constant across all items;
(2) all items required a single inferential step to identify the right shape for the target cell (no intermediate cells to deal with);
(3) in order to constrain reasoning pathways (and consequently limit individual variability in the inferential routes chosen by participants), only the relevant rows and columns of the matrix were shown.

RC manipulations adhered to the principles defined by Birney et al. (2006), in that item complexity was a function of the number of rows and columns that had to be processed in parallel in order to perform the inferential step leading to the solution. We designed binary, ternary, and quaternary items (see Fig. 1 for examples of each category). In the ternary items, information from both a row and a column has to be integrated. We made an additional distinction between secant and nonsecant ternary items, depending on the target cell's position with regard to the pieces of information that had to be integrated. As shown in Fig. 1, secant ternary items were designed so that the target cell was located at the intersection of the row and column that had to be taken into account. In nonsecant ternary items, this was not the case. A preliminary study (Perret, Bailleux, & Dauvier, 2008) had suggested that the position of the target cell in ternary items constitutes a neglected dimension of difficulty. We therefore manipulated this additional factor for ternary items, to create a total of four categories.

Fig. 1. Examples of the categories of items used in Experiment 1.

2.1. Method

2.1.1. Participants

Seventy-one third, fourth, and fifth graders (35 girls), aged 8–11 years, took part. Mean age was 9 years 4 months (SD = 15 months).
Participants were recruited from elementary schools in a predominantly middle-class area of Aix-en-Provence, France.

2.1.2. Item generation

A set of 24 items (six binary, six secant ternary, six nonsecant ternary, and six quaternary) was generated, following the basic principles noted earlier. We used six geometric shapes (square, circle, cross, triangle, star, and heart), all blue in color. The order of item presentation was randomized, with the one proviso that none of the four conditions could occur more than three times in succession. We controlled the number of times that each shape appeared in the matrix structure and in the list of response options. The position of the filled cells was also controlled. All items were based on a 4 × 4 structure, but only three rows and columns were displayed, to reduce the number of noninformative cells and possible inferential pathways. We fixed the number of filled cells at three; consequently, the number of empty cells was kept constant. The target cell was highlighted and indicated by a question mark in the center.

2.1.3. Procedure

E-Prime software (Psychology Software Tools, Inc., 2008) was used to build our version of the LST, and the test was administered by computer. Administration began with a familiarization phase featuring four sample items (binary, secant ternary, nonsecant ternary, and quaternary). Participants were tested individually in a quiet room. The instruction was as follows: "You have to find the shape missing from the cell with a question mark, obeying the following rule: each shape must appear only once in each row and in each column. You have to choose the missing shape from these four possibilities [participants were shown a list of four response options]. Be careful; there is only one right answer." The items were displayed on the left-hand side of the computer screen and the list of response options on the right-hand side.
Participants responded by clicking on one of the shapes in the response options. They were encouraged to do their best and to respond as accurately as possible. All participants completed the 24 items without any feedback from the experimenter.

2.2. Results

Several statistical approaches can be used to analyze the successes and failures observed in a task like the LST. A simple approach is to compare the mean difficulties of groups of items in relation to their theoretical level of complexity. A gradual increase in observed difficulty as a function of relational complexity could be taken as evidence validating the theoretical analysis of the task. However, averaging across participants and items can hide differences between items and does not provide any information about the structure of individual differences. A more fine-grained approach consists in using a psychometric tool such as the Rasch model (Rasch, 1961), a type of item response theory (IRT) model. In the Rasch model, each item is defined by its own level of difficulty, making it possible to pinpoint items that deviate from the prediction. Individual differences are also taken into account, as individual abilities are assumed to be continuously distributed along a latent continuum.

We therefore adopted a twofold statistical approach. First, we used repeated-measures analysis of variance (ANOVA) to compare groups of items in a classic manner. Second, generalized linear mixed-effects models (GLMMs; Breslow & Clayton, 1993), which can be regarded as a generalization both of the multilevel model (Faraway, 2005) and of some IRT models such as the Rasch model (Boeck & Wilson, 2004, p. 6; Miyazaki, 2005), were fitted to the data in order to validate the item classification. In the first analysis, individual mean proportions of correct responses were computed for the binary, ternary, and quaternary items. These individual values were then averaged to obtain mean accuracy per complexity level (solid line in Fig. 2).
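As a reminder of the model's functional form (our notation, not reproduced from the paper), the Rasch model gives the probability of a correct response as a logistic function of the gap between a person's ability θ and an item's difficulty b:

```python
import math

def rasch_p(theta, b):
    """Rasch model: P(correct) = 1 / (1 + exp(-(theta - b))),
    where theta is person ability and b is item difficulty, both in logits."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability exactly matches difficulty, success is a coin flip.
print(rasch_p(0.0, 0.0))   # 0.5
# A harder item lowers the success probability for the same child.
print(rasch_p(0.0, 1.0))   # ~0.27
```

This is what makes the model useful here: items sharing a theoretical complexity level should converge on similar estimated b values, and an item whose estimated difficulty deviates from its group flags a possible misclassification.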
Results clearly followed the expected stepwise increase in difficulty from binary to quaternary items. A repeated-measures ANOVA revealed a relatively strong and significant effect, F(2, 136) = 176, p < .001, η² = .55.

Item-by-item examination of the data revealed that the group of ternary items encompassed very different levels of difficulty. It was immediately obvious that the distinction between secant and nonsecant items within the group of ternary items would have to be taken into account, as shown in Fig. 2. Hence, we distinguished between four types of item: binary (a), secant ternary (b), nonsecant ternary (c), and quaternary (d).

In order to empirically find the best form of item classification, three GLMMs (binomial distribution and logit link function) were fitted to the data, using the lme4 package (Bates & Sarkar, 2009) in R (R Development Core Team, 2009). These models are very similar to IRT models such as the Rasch model, in that item difficulty is taken into account by the fixed-effects parameters and participants' abilities by the random-effects parameters (Boeck & Wilson, 2004; Doran, Bates, Bliese, & Dowling, 2007). Like IRT models, the GLMMs were fitted directly to the binary success/failure data without any averaging. This methodology offered some useful features in this context, allowing us to form groups of items and to force the difficulty parameters to the same level within a given group of items. In each of the three models, a different classification of items was used. A comparison of the models told us which classification was the most relevant, given the data.

In the first model (M1), items were classified according to their theoretical level of complexity. In the second model (M2), we added the distinction between secant ternary (b) and nonsecant ternary (c) items. In the third model (M3), binary (a) and secant ternary (b) items were set to the same level of difficulty, as were nonsecant ternary (c) and quaternary (d) items. A model comparison based
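The grouping logic behind M1–M3 can be sketched as maps from item category to a shared difficulty parameter. This is a hypothetical encoding of the constraint structure described above, not the authors' R/lme4 code:

```python
# Hypothetical encoding of models M1-M3: items mapped to the same integer
# are forced to share one fixed-effect difficulty parameter in the GLMM.
GROUPINGS = {
    "M1": {"binary": 0, "secant_ternary": 1, "nonsecant_ternary": 1, "quaternary": 2},
    "M2": {"binary": 0, "secant_ternary": 1, "nonsecant_ternary": 2, "quaternary": 3},
    "M3": {"binary": 0, "secant_ternary": 0, "nonsecant_ternary": 1, "quaternary": 1},
}

def n_difficulty_params(model):
    """Number of distinct difficulty parameters the model estimates."""
    return len(set(GROUPINGS[model].values()))

for m in ("M1", "M2", "M3"):
    print(m, n_difficulty_params(m))   # M1 3, M2 4, M3 2
```

Because M1 is nested in M2, and M3 collapses M2's four levels to two, comparing the fitted models tells us whether the extra difficulty parameters are warranted by the data.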