Abstract

Individuals with autism and individuals with dyslexia both show reduced use of previous sensory information (stimulus statistics) in perceptual tasks, even though these are very different neurodevelopmental disorders. To better understand how past sensory information influences the perceptual experience in these disorders, we first investigated the trial-by-trial performance of neurotypical participants in a serial discrimination task. Neurotypical participants overweighted recent stimuli, revealing fast updating of internal sensory models, which is adaptive in changing environments. They also weighted the detailed stimulus distribution inferred from longer-term accumulation of stimulus statistics, which is adaptive in stable environments. Compared to neurotypical participants, individuals with dyslexia weighted earlier stimuli less heavily, whereas individuals with autism spectrum disorder weighted recent stimuli less heavily. Investigating the dynamics of perceptual inference reveals that individuals with dyslexia rely more on information about the immediate past, whereas perception in individuals with autism is dominated by longer-term statistics.

Code availability

We used the Psychtoolbox-3 MATLAB toolbox (http://psychtoolbox.org/) to create and run Experiments 2 and 3. The code for the online experiment administered via Amazon Mechanical Turk (M-Turk) is available at https://github.com/ItayLieder/Mech_turk_2afc. Analyses were conducted with the free R package mgcv ('Mixed GAM Computation Vehicle with Automatic Smoothness Estimation'), available at https://cran.r-project.org/web/packages/mgcv/index.html.
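
For orientation, the following is a minimal sketch, not the published analysis code, of how the core additive probit GAM described in the supplementary legends below could be fitted with mgcv. The data frame and column names (trials, resp, sens, d1, dinf) are hypothetical stand-ins.

```r
# Minimal sketch (not the authors' script) of fitting the additive probit GAM
# p("f2 > f1") = Phi(alpha*delta + b1(d1) + binf(dinf)) with mgcv.
# Hypothetical columns of 'trials':
#   resp = 0/1 response ("f2 > f1"),
#   sens = pre-fitted per-participant sensitivity term alpha^s * delta^t,
#   d1   = distance of the current stimuli from the previous trial,
#   dinf = distance from the longer-term stimulus statistics.
library(mgcv)

fit <- gam(resp ~ offset(sens) + s(d1) + s(dinf),
           family = binomial(link = "probit"),
           data = trials, method = "REML")

summary(fit)          # smooth-term significance and deviance explained
plot(fit, pages = 1)  # estimated bias functions b1(d1) and binf(dinf)
```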

Data availability

The data sets generated during and/or analysed during the present study are available from the corresponding author on reasonable request.

Additional information

Publisher’s note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

References

  1. Zoccolotti, P. et al. Word length effect in early reading and in developmental dyslexia. Brain Lang. 93, 369–373 (2005).
  2. Ahissar, M., Lubin, Y., Putter-Katz, H. & Banai, K. Dyslexia and the failure to form a perceptual anchor. Nat. Neurosci. 9, 1558–1564 (2006).
  3. Ahissar, M. Dyslexia and the anchoring-deficit hypothesis. Trends Cogn. Sci. 11, 458–465 (2007).
  4. Chandrasekaran, B., Hornickel, J., Skoe, E., Nicol, T. & Kraus, N. Context-dependent encoding in the human auditory brainstem relates to hearing speech in noise: implications for developmental dyslexia. Neuron 64, 311–319 (2009).
  5. Oganian, Y. & Ahissar, M. Poor anchoring limits dyslexics’ perceptual, memory, and reading skills. Neuropsychologia 50, 1895–1905 (2012).
  6. Perrachione, T. K., Del Tufo, S. N. & Gabrieli, J. D. E. Human voice recognition depends on language ability. Science 333, 595 (2011).
  7. Banai, K. & Ahissar, M. Poor sensitivity to sound statistics impairs the acquisition of speech categories in dyslexia. Lang. Cogn. Neurosci. 33, 321–332 (2018).
  8. Pellicano, E. & Burr, D. When the world becomes ‘too real’: a Bayesian explanation of autistic perception. Trends Cogn. Sci. 16, 504–510 (2012).
  9. Sinha, P. et al. Autism as a disorder of prediction. Proc. Natl Acad. Sci. USA 111, 15220–15225 (2014).
  10. Turi, M., Karaminis, T., Pellicano, E. & Burr, D. No rapid audiovisual recalibration in adults on the autism spectrum. Sci. Rep. 6, 21756 (2016).
  11. Karaminis, T. et al. Central tendency effects in time interval reproduction in autism. Sci. Rep. 6, 28570 (2016).
  12. American Psychiatric Association. Diagnostic and Statistical Manual of Mental Disorders (DSM-5) (APA: Arlington, 2013).
  13. Newman, T. M. et al. Hyperlexia in children with autism spectrum disorders. J. Autism Dev. Disord. 37, 760–774 (2007).
  14. Bott, L., Brock, J., Brockdorff, N., Boucher, J. & Lamberts, K. Perceptual similarity in autism. Q. J. Exp. Psychol. (Hove) 59, 1237–1254 (2006).
  15. Hollingworth, H. L. The central tendency of judgment. J. Philos. Psychol. Sci. Methods 7, 461–469 (1910).
  16. Ashourian, P. & Loewenstein, Y. Bayesian inference underlies the contraction bias in delayed comparison tasks. PLoS One 6, e19551 (2011).
  17. Raviv, O., Ahissar, M. & Loewenstein, Y. How recent history affects perception: the normative approach and its heuristic approximation. PLoS Comput. Biol. 8, e1002731 (2012).
  18. Jaffe-Dax, S., Frenkel, O. & Ahissar, M. Dyslexics’ faster decay of implicit memory for sounds and words is manifested in their shorter neural adaptation. eLife 6, e20557 (2017).
  19. Jaffe-Dax, S., Raviv, O., Jacoby, N., Loewenstein, Y. & Ahissar, M. A computational model of implicit memory captures dyslexics’ perceptual deficits. J. Neurosci. 35, 12116–12126 (2015).
  20. Jaffe-Dax, S., Lieder, I., Biron, T. & Ahissar, M. Dyslexics’ usage of visual priors is impaired. J. Vis. 16, 10 (2016).
  21. Molesworth, C., Chevallier, C., Happé, F. & Hampton, J. A. Children with autism do not show sequence effects with auditory stimuli. J. Exp. Psychol. Gen. 144, 48–57 (2015).
  22. Fischer, J. & Whitney, D. Serial dependence in visual perception. Nat. Neurosci. 17, 738–743 (2014).
  23. Liberman, A., Fischer, J. & Whitney, D. Serial dependence in the perception of faces. Curr. Biol. 24, 2569–2574 (2014).
  24. Körding, K. P. & Wolpert, D. M. Bayesian integration in sensorimotor learning. Nature 427, 244–247 (2004).
  25. Raviv, O., Lieder, I., Loewenstein, Y. & Ahissar, M. Contradictory behavioral biases result from the influence of past stimuli on perception. PLoS Comput. Biol. 10, e1003948 (2014).
  26. Fassihi, A., Akrami, A., Esmaeili, V. & Diamond, M. E. Tactile perception and working memory in rats and humans. Proc. Natl Acad. Sci. USA 111, 2331–2336 (2014).
  27. Wood, S. Generalized Additive Models: An Introduction with R (CRC Press: New York, 2006).
  28. Kiyonaga, A., Scimeca, J. M., Bliss, D. P. & Whitney, D. Serial dependence across perception, attention, and memory. Trends Cogn. Sci. 21, 493–497 (2017).
  29. Abrahamyan, A., Silva, L. L., Dakin, S. C., Carandini, M. & Gardner, J. L. Adaptable history biases in human perceptual decisions. Proc. Natl Acad. Sci. USA 113, E3548–E3557 (2016).
  30. Arzounian, D., de Kerangal, M. & de Cheveigné, A. Sequential dependencies in pitch judgments. J. Acoust. Soc. Am. 142, 3047–3057 (2017).
  31. Fründ, I., Wichmann, F. A. & Macke, J. H. Quantifying the effect of intertrial dependence on perceptual decisions. J. Vis. 14, 9 (2014).
  32. Pellicano, E., Jeffery, L., Burr, D. & Rhodes, G. Abnormal adaptive face-coding mechanisms in children with autism spectrum disorder. Curr. Biol. 17, 1508–1512 (2007).
  33. McIntosh, D. N., Reichmann-Decker, A., Winkielman, P. & Wilbarger, J. L. When the social mirror breaks: deficits in automatic, but not voluntary, mimicry of emotional facial expressions in autism. Dev. Sci. 9, 295–302 (2006).
  34. Happé, F. G. E. Central coherence and theory of mind in autism: reading homographs in context. Br. J. Dev. Psychol. 15, 1–12 (1997).
  35. Happé, F. & Frith, U. The weak coherence account: detail-focused cognitive style in autism spectrum disorders. J. Autism Dev. Disord. 36, 5–25 (2006).
  36. Lawson, R. P., Mathys, C. & Rees, G. Adults with autism overestimate the volatility of the sensory environment. Nat. Neurosci. 20, 1293–1299 (2017).
  37. Van de Cruys, S. et al. Precise minds in uncertain worlds: predictive coding in autism. Psychol. Rev. 121, 649–675 (2014).
  38. Croydon, A., Karaminis, T., Neil, L., Burr, D. & Pellicano, E. The light-from-above prior is intact in autistic children. J. Exp. Child Psychol. 161, 113–125 (2017).
  39. Van de Cruys, S., Vanmarcke, S. & Van de Put, I. The use of prior knowledge for perceptual inference is preserved in ASD. Clin. Psychol. Sci. 6, 382–393 (2018).
  40. Mottron, L. & Burack, J. A. in The Development of Autism: Perspectives from Theory and Research (eds Burack, J. A., Charman, T., Yirmiya, N. & Zelazo, P. R.) 131–148 (Lawrence Erlbaum Associates: Mahwah, 2001).
  41. Robertson, C. E. & Baron-Cohen, S. Sensory perception in autism. Nat. Rev. Neurosci. 18, 671–684 (2017).
  42. Wechsler, D. Wechsler Adult Intelligence Scale 4th edn (Pearson: London, 2008).
  43. Wood, S. N. Package ‘mgcv’: Mixed GAM Computation Vehicle with Automatic Smoothness Estimation. CRAN http://cran.uib.no/web/packages/mgcv/mgcv.pdf (The R Project for Statistical Computing, 2018).
  44. Knoblauch, K. Package psyphy: Functions for Analyzing Psychophysical Data in R. CRAN https://cran.r-project.org/package=psyphy (The R Project for Statistical Computing, 2007).
  45. Hastie, T. & Tibshirani, R. Varying-coefficient models. J. R. Stat. Soc. B 55, 757–796 (1993).
  46. Dai, H. & Micheyl, C. Psychometric functions for pure-tone frequency discrimination. J. Acoust. Soc. Am. 130, 263–272 (2011).
  47. Sheppard, W. F. On the application of the theory of error to cases of normal distribution and normal correlation. Philos. Trans. R. Soc. Lond. 192, 101–167 (1899).

Acknowledgements

We thank T. Epstein, T. Malinovich, G. Vishne, O. Guri, S. Granot and M. Kurulkar for help collecting the experimental data. We thank P. Dayan and U. Frith for reading an earlier version of the manuscript. Funding: this study was supported by grants to M.A. from the Israel Science Foundation (ISF grant no. 616/11 and Canada–Israel grant no. 2425/15), by the Gatsby Charitable Foundation, by the German–Israeli Foundation for Scientific Research and Development (grant no. I-1303–105.4/2015) and by a Young Researchers Exchange Scholarship to I.L. from the Jerusalem Brain Community (JBC).

Author information

Author notes

  1. These authors contributed equally: Itay Lieder, Vincent Adam, Maneesh Sahani, Merav Ahissar.

Affiliations

  1. Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem, Israel
     Itay Lieder & Merav Ahissar
  2. Gatsby Computational Neuroscience Unit, London, UK
     Vincent Adam & Maneesh Sahani
  3. Psychology Department, Hebrew University, Jerusalem, Israel
     Or Frenkel & Merav Ahissar
  4. Psychology Department, Princeton University, Princeton, NJ, USA
     Sagi Jaffe-Dax

Authors

Itay Lieder, Vincent Adam, Or Frenkel, Sagi Jaffe-Dax, Maneesh Sahani & Merav Ahissar

Contributions

I.L. initiated the project. I.L. and S.J.-D. designed the experiments. I.L. and V.A. developed the model and analysed the data used in this study. O.F. and S.J.-D. were involved in data acquisition. M.A. and M.S. were involved in conceptualization, supervision, investigation, methodology and funding acquisition. All authors contributed to the interpretation of the data and the writing of the paper and approved the final version of the manuscript.

Competing interests

The authors declare no competing interests.

Corresponding author

Correspondence to Merav Ahissar.

Integrated supplementary information

  1. Supplementary Fig. 1 Neurotypical contraction bias is replicated in M-Turk data.

    M-Turk (Perspect. Psychol. Sci. 6, 3–5, 2011) has been shown to be a reliable tool for acquiring high-quality data, even for very demanding experiments (PLoS One 8, e57410, 2013; Behav. Res. Methods 43, 155–167, 2011). In common hardware configurations and web browsers, both the latency between executing the code that presents a sound and the actual sound onset, and the difference between the actual and intended sound durations, show small variability (s.d. < 1 ms) (Behav. Res. Methods 48, 897–908, 2016; Behav. Res. Methods 47, 649–665, 2015). Still, we first verified the validity of the M-Turk experiment for our study by assessing the contraction bias of the 125 participants who performed the task with the broad uniform distribution (3 octaves, Methods; Supplementary Table S2). (a) We quantified the bias by measuring the difference between performance in Bias+ and Bias- trials (Fig. 1b). The bias was positive for 114/125 participants (91.2%) and was highly significant (t124 = 2.7; p < 10−24, Cohen’s d = 1.14, paired t-test). Each filled circle shows the mean accuracy of one participant in Bias- (x axis) and Bias+ (y axis) trials. Color denotes the participant’s mean accuracy across all trials (side bar). The bias is manifested in the elevation above the diagonal (higher accuracy in Bias+ trials). The bottom inset compares the mean accuracy across participants for Bias+ (blue) and Bias- (red) trials. Black points indicate the mean values; error bars indicate the standard error of the mean. (b) Weights of each of the estimated bias components, obtained by regressing participants’ decisions (n = 125) with a probit GLM (Methods; a hedged R sketch of such a regression appears after this list). The most prominent contributors to the sensory bias are the stimuli of one trial back and the mean of all frequencies. Error bars indicate standard deviation. References cited in this legend: Buhrmester, M. et al. Amazon’s Mechanical Turk: a new source of inexpensive, yet high-quality, data? Perspect. Psychol. Sci. 6, 3–5 (2011); Crump, M. J. C., McDonnell, J. V. & Gureckis, T. M. Evaluating Amazon’s Mechanical Turk as a tool for experimental behavioral research. PLoS One 8, e57410 (2013); Sprouse, J. A validation of Amazon Mechanical Turk for the collection of acceptability judgements in linguistic theory. Behav. Res. Methods 43, 155–167 (2011); Reimers, S. & Stewart, N. Auditory presentation and synchronization in Adobe Flash and HTML5/JavaScript web experiments. Behav. Res. Methods 48, 897–908 (2016); Babjack, D. L. et al. Reducing audio stimulus presentation latencies across studies, laboratories, and hardware and operating system configurations. Behav. Res. Methods 47, 649–665 (2015).

  2. Supplementary Fig. 2 GAM analysis reveals the detailed functions of bias towards recent and towards earlier trials.

    GAM was applied to the data of the 125 M-Turk participants who performed the protocol with the broad uniform distribution. (a) Bias functions for the five most recent time lags (d1, …, d5) and for d∞, estimated using GAM regression modeled according to \(p(``f_2^t > f_1^t") = \Phi\left(\alpha^s\delta^t + \sum_{i=1}^{5} b_i(d_i^t) + b_\infty(d_\infty^t)\right)\). Although the magnitude of the bias decreases with time lag, its characteristic shape remains. Error bars (shaded regions) indicate standard deviation. (b) The fraction of variance explained by each of the plotted components. For this broad uniform distribution, the component corresponding to the most recent trial accounted for 70.6% of the entire bias variance, and the two preceding trials accounted for 10% and 3.7%, respectively. The longer-term component accounted for 12.9%.

  3. Supplementary Fig. 3 Cross-validation of GAM shows that bias-by-longer-term is distribution-specific, whereas bias-by-recent is distribution invariant.

    Assuming separate b∞(d∞) functions for each distribution (n = 490) significantly improves the performance of the model (W = 1, p = 0.007, two-sided Wilcoxon signed-rank test, cross-validation), whereas adding distribution-specific b1(d1) functions does not (W = 12, p = 0.11, two-sided Wilcoxon signed-rank test, cross-validation). We compared four models using 10-fold cross-validation scores over the aggregated M-Turk data for the four distributions (broad uniform, narrow uniform, Gaussian and bimodal). All models, fitted with GAM, included the same term with pre-fitted per-participant sensitivities, \(\alpha^s\delta^t\), but had different terms for the recent (b1(d1)) and longer-term (b∞(d∞)) bias functions. The models differed in whether a single function was fitted for all distributions or four distribution-specific functions were fitted (a hedged mgcv sketch of distribution-specific smooths appears after this list):
    - Recent and longer-term components each shared across distributions: \(b_1(d_1) + b_\infty(d_\infty)\).
    - Recent and longer-term components each fitted separately for each distribution: \(\sum_{i=1}^{4} b_{1,i}(d_1) + \sum_{i=1}^{4} b_{\infty,i}(d_\infty)\), where i denotes the distribution.
    - Longer-term component shared across distributions, recent component fitted separately: \(\sum_{i=1}^{4} b_{1,i}(d_1) + b_\infty(d_\infty)\).
    - Recent component shared across distributions, longer-term component fitted separately: \(b_1(d_1) + \sum_{i=1}^{4} b_{\infty,i}(d_\infty)\).
    The last model (top in the plot) yielded the best predictive performance. The x axis shows the difference in AUC (area under the ROC curve; larger for better predictions) between the best model (uppermost) and each of the other three models. Red squares and red vertical lines indicate the mean and median values of each group, respectively. Gray lines correspond to individual cross-validation instances. Error bars show the lower to upper quartile values of the data.

  4. Supplementary Fig. 4 An illustration of the ideal observer model.

    The prior is set to be a uniform distribution. Bottom to top: given a true stimulus f1, noisy encoding leads to its representation \(\tilde f_1\). For a fixed representation \(\tilde f_1\), combining the sensory uncertainty \(p(\tilde f_1 \mid f_1)\) with the prior expectations \(\hat p(f_1)\) over frequency leads to the posterior \(\hat p(f_1 \mid \tilde f_1)\), whose median \(\hat m_1(\tilde f_1)\) is used as a threshold to form a decision. Considering all possible noisy representations leads to a distribution over the posterior median, \(p(\hat m_1 \mid f_1)\). The relative noise, standard deviation and prior width were chosen to best illustrate the implementation (the resulting decision rule is summarized in a short LaTeX block after this list).

  5. Supplementary Fig. 5 Cross-validation shows that GAM performs better than GLM for all four distributions administered in M-Turk.

    Inertia bias has less explanatory power than the contraction bias. (a–d) We compared the following models for the (a) broad uniform (n = 125), (b) narrow uniform (n = 94), (c) Gaussian (n = 163) and (d) bimodal (n = 108) distributions administered via M-Turk (a hedged R sketch of this model comparison appears after this list):
    - Inertia and feedback effects (refs. 29,39): \(\alpha^s\delta^t + w_r x_r^t + w_f x_f^t\). Stimulus-independent effects of the previous trial, added to account for the impact of feedback (ref. 30) and for response inertia. Negative feedback on the previous trial can promote a switching response, whereas positive feedback might enhance perseverance; response inertia is the tendency to repeat the response given on the previous trial. To account for these effects we added the previous response \(x_r^t = y^{t-1}\) and its feedback \(x_f^t = y^{t-1}e^{t-1}\) (where \(e^{t-1}\) is the 0/1 feedback on the previous trial). Values of both variables (wr, wf) were {-1, 1}. Both covariates (xr, xf) were centered.
    - Linear contraction bias (recent and longer term): \(\alpha^s\delta^t + w_1 d_1^t + w_\infty d_\infty^t\). Linear terms for both the bias by recent and the bias by longer-term history, each assumed to increase linearly with the respective distance.
    - Nonlinear (GAM) non-additive contraction bias: \(\alpha^s\delta^t + b(d_1^t, d_\infty^t)\). A single interaction term that accounts for both the recent and the longer-term bias and allows a nonlinear interaction.
    - Nonlinear (GAM) additive contraction bias: \(\alpha^s\delta^t + b_1(d_1^t) + b_\infty(d_\infty^t)\). An additive sum of recent and longer-term functions, with no interaction assumed.
    The x axis shows the difference in AUC (area under the ROC curve; larger for better predictions) between the additive GAM model with inertia (uppermost model) and the other models for each cross-validation instance (gray lines). The model that contained the response-repetition covariates (xf, xr) was much less predictive of behavior than the two-term sensory-bias GLM. Both nonlinear GAM models always outperformed the linear model. The model that contained the combined bias term \(b(d_1^t, d_\infty^t)\) did not do better than the model with fitted additive terms \(b_1(d_1^t) + b_\infty(d_\infty^t)\). Red squares and red vertical lines indicate the mean and median values of each group, respectively. Error bars show the lower to upper quartile values of the data.

  6. Supplementary Fig. 6 Cross-validation shows that GAM outperforms GLM and that response inertia is secondary to contraction bias in experiments 1 and 3.

    (a–b) Comparison of the explanatory power of the inertia and contraction biases. The x axis shows the difference in AUC (area under the ROC curve; larger for better predictions) between two probit models (detailed in Supplementary Fig. 5): inertia and feedback effects (left), \(\alpha^s\delta^t + w_r x_r^t + w_f x_f^t\), and linear contraction bias (recent and longer term) (right), \(\alpha^s\delta^t + w_1 d_1^t + w_\infty d_\infty^t\). In both Experiments 1 and 3, the contraction-bias model outperforms the response-repetition model. Each cross-validation instance is shown as a gray connecting line. (c–d) We compared several models for Experiment 1 (n = 51) (c) and Experiment 3 (n = 53) (d), each containing a sensitivity parameter α plus the following components: inertia and feedback effects (as detailed above); linear contraction bias (GLM, as detailed above); nonlinear (GAM) non-additive contraction bias, \(\alpha^s\delta^t + b(d_1^t, d_\infty^t)\), a single interaction term in which the recent and longer-term terms are assumed to interact nonlinearly; and nonlinear (GAM) additive contraction bias, \(\alpha^s\delta^t + b_1(d_1^t) + b_\infty(d_\infty^t)\), an additive sum of recent and longer-term functions with no interaction assumed. The x axis shows the difference in AUC between the additive GAM model with inertia (uppermost model) and the other models for each cross-validation instance (gray lines). In both experiments, GAM was significantly more predictive than GLM (W = 2, p = 0.005 in Experiment 1 and W = 2, p = 0.005 in Experiment 3; Wilcoxon signed-rank test). Assuming additivity did not have any negative impact on performance. Red squares and red vertical and horizontal lines indicate the mean and median values of each group, respectively. Error bars show the lower to upper quartile values of the data.

  7. Supplementary Fig. 7 Cross-validation shows that adding random effects to account for individual variability improves the fit for all populations in experiments 1 and 3.

    We tested four models using cross-validation scores for the data of each population (separately) in Experiment 1 (neurotypical, blue, n = 26; individuals with dyslexia, red, n = 25) and Experiment 3 (neurotypical, blue, n = 23; ASD, green, n = 16; individuals with dyslexia, red, n = 14). The x axis shows the difference in AUC (area under the ROC curve; larger for better predictions) between the best model (uppermost) and the three remaining models. All models, fitted with GAM, included the same fit of the precision parameter (the pre-fitted per-participant sensitivity term \(\alpha^s\delta^t\)) and the two shared bias components (b1 and b∞), but differed in their random-effects components (fitted per participant). From the bottom up, the models were: (1) GAM without random-effects components; (2) GAM with a random intercept; (3) GAM with a random intercept and a random slope for the recent component; (4) GAM with a random intercept and random slopes for both the recent and longer-term components. The full model (top) yielded the best predictive performance, and each of the three random-effects components significantly improved the model for each population in Experiments 1 and 3 (a hedged mgcv sketch of such random-effects terms appears after this list). Red squares and red vertical lines indicate the mean and median values of each group, respectively. Error bars show the lower to upper quartile values of the data.
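
The contraction-bias quantification and probit regression of Supplementary Fig. 1 can be reproduced in outline with base R. This is a hedged sketch under assumed variable names (trials, subj, correct, bias_cond, resp, delta, d1, dinf), not the authors' analysis code.

```r
# Hedged sketch of the Supplementary Fig. 1 analyses. 'trials' and its columns
# are hypothetical: subj = participant ID, correct = 0/1 accuracy,
# bias_cond = "Bias+" or "Bias-", resp = 0/1 response ("f2 > f1"),
# delta = f2 - f1, d1/dinf = recent and longer-term distances.

## (a) Contraction bias as the per-participant Bias+ minus Bias- accuracy gap
acc <- with(trials, tapply(correct, list(subj, bias_cond), mean))
t.test(acc[, "Bias+"], acc[, "Bias-"], paired = TRUE)   # paired t-test across participants

## (b) Probit GLM of the trial-by-trial decision on the bias components
glm_fit <- glm(resp ~ delta + d1 + dinf,
               family = binomial(link = "probit"), data = trials)
summary(glm_fit)   # fitted weights of the bias components
```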
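One way to set up the distribution-specific versus shared bias functions compared in Supplementary Fig. 3 is mgcv's factor-by smooths. The sketch below is an assumption about how such fits could be expressed, not the published code; mturk, dist (a factor coding the four stimulus distributions) and the remaining column names are hypothetical.

```r
library(mgcv)

# Best model in Supplementary Fig. 3: shared recent component, distribution-
# specific longer-term component. 'dist' is a hypothetical factor with levels
# for the broad-uniform, narrow-uniform, Gaussian and bimodal protocols;
# 'sens' is the pre-fitted per-participant sensitivity term.
m_best <- gam(resp ~ offset(sens) + dist + s(d1) + s(dinf, by = dist),
              family = binomial(link = "probit"),
              data = mturk, method = "REML")

# Fully shared alternative, for comparison by cross-validated AUC
m_shared <- gam(resp ~ offset(sens) + s(d1) + s(dinf),
                family = binomial(link = "probit"),
                data = mturk, method = "REML")
```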
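The ideal-observer computation illustrated in Supplementary Fig. 4 can be summarized compactly. The LaTeX block below restates the legend's steps; the explicit comparison of f2 against the posterior median is the natural reading of the threshold rule and is stated here as a summary, not a quotation of the authors' equations.

```latex
\begin{align*}
  \tilde f_1 &\sim p\!\left(\tilde f_1 \mid f_1\right)
    && \text{noisy encoding of the first stimulus } f_1 \\
  \hat p\!\left(f_1 \mid \tilde f_1\right) &\propto p\!\left(\tilde f_1 \mid f_1\right)\,\hat p\!\left(f_1\right)
    && \text{posterior under the (uniform) prior} \\
  \hat m_1\!\left(\tilde f_1\right) &= \operatorname{median}\!\left[\hat p\!\left(f_1 \mid \tilde f_1\right)\right]
    && \text{decision threshold} \\
  \text{respond ``}f_2 > f_1\text{''} &\iff f_2 > \hat m_1\!\left(\tilde f_1\right)
    && \text{decision rule}
\end{align*}
```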
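A sketch of how the model comparison in Supplementary Figs. 5 and 6 could be set up with mgcv and cross-validated AUC follows. The fold assignment and column names (fold, xr, xf, sens, d1, dinf) are hypothetical, and pROC is used only as one convenient AUC implementation; this is not the authors' pipeline.

```r
library(mgcv)
library(pROC)

# Candidate probit models from Supplementary Figs. 5-6 (column names hypothetical):
# xr/xf = centered previous-response and previous-feedback covariates.
forms <- list(
  inertia = resp ~ offset(sens) + xr + xf,
  linear  = resp ~ offset(sens) + d1 + dinf,
  gam_int = resp ~ offset(sens) + s(d1, dinf),        # non-additive (interaction)
  gam_add = resp ~ offset(sens) + s(d1) + s(dinf)     # additive
)

# 10-fold cross-validated AUC for each model ('fold' assumed precomputed)
auc_cv <- sapply(forms, function(f) {
  sapply(1:10, function(k) {
    train <- subset(trials, fold != k)
    test  <- subset(trials, fold == k)
    fit   <- gam(f, family = binomial(link = "probit"),
                 data = train, method = "REML")
    as.numeric(auc(test$resp, predict(fit, newdata = test, type = "response")))
  })
})

colMeans(auc_cv)                                                      # mean AUC per model
wilcox.test(auc_cv[, "gam_add"], auc_cv[, "linear"], paired = TRUE)   # GAM vs GLM
```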
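Random intercepts and slopes of the kind compared in Supplementary Fig. 7 can be expressed in mgcv as random-effect smooths (bs = "re"). The following is a hedged sketch under the same hypothetical column names, with subj a factor of participant IDs; it illustrates the model structure rather than the published fits.

```r
library(mgcv)

# Full random-effects model of Supplementary Fig. 7: shared bias smooths plus,
# per participant, a random intercept and random slopes for the recent (d1)
# and longer-term (dinf) components. 'subj' must be a factor.
m_full <- gam(resp ~ offset(sens) + s(d1) + s(dinf) +
                s(subj, bs = "re") +           # random intercept
                s(d1, subj, bs = "re") +       # random slope, recent component
                s(dinf, subj, bs = "re"),      # random slope, longer-term component
              family = binomial(link = "probit"),
              data = trials, method = "REML")

# Reduced model without random effects, for the cross-validated comparison
m_fixed <- gam(resp ~ offset(sens) + s(d1) + s(dinf),
               family = binomial(link = "probit"),
               data = trials, method = "REML")
```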

Supplementary information

  1. Supplementary Figs. 1–7

    Supplementary Figs. 1–7 and Supplementary Tables 1–3

  2. Reporting Summary

About this article

DOI: https://doi.org/10.1038/s41593-018-0308-9