Talk:IQ
Too technical?
Good additions & corrections, Alt, but I was just realizing that the lede is currently even more technical than wikipedia:Intelligence quotient.
The few readers who know the terms "measuring the same factor", "measurement invariance", and "factor analysis" likely already know what IQ and g are, so the lede is informative to very few people. One can explain g and IQ in simple terms (test results vs. a measure of the ability to perform well on any IQ test and any real-world cognitive task). Bibipi (talk) 07:36, 19 April 2021 (UTC)
- I put those in brackets so they could be glossed over, but yeah, I should link articles for each of those terms or simplify. Regarding your latter point, the important thing here, IMO, is that g is a statistical construct that is likely measuring something concrete (wholly or to an extent), but no one has yet pointed to a single component of the brain or combination of genes that can wholly represent it. Altmark22 (talk) 12:55, 19 April 2021 (UTC)
- Yeah, g has yet to be fully grounded functionally, but at least the definition "measure of ability to perform well on any IQ test" is fairly accurate, because factor analysis with one factor extracts exactly the variance that is shared across many different IQ test results ("the core") and ignores variance due to task-specific, non-general skills. "Any real-world cognitive task" is more hypothetical, though. It would be more accurate to write "g seeks to measure ability in any cognitive task, which means ignoring task-specific abilities and only measuring the combined effect of abilities that are useful in all cognitive tasks (in practice, the subtest scores of an IQ test battery)". Bibipi (talk) 20:28, 21 April 2021 (UTC)
- >Yeah, g has yet to be fully grounded functionally [...]
Firstly, the article is about IQ anyway, not general intelligence. IQ is not 'intelligence' (whatever that is), and IQ is not g.
- But general intelligence is usually what is meant by IQ when it is used colloquially. Bibipi (talk) 05:40, 22 April 2021 (UTC)
The g-factor is not solely and directly measuring the ability to perform well on any IQ test. It is the statistical representation of the ability to perform well on any IQ test (on average), plus any g-loaded ability extracted from standardized IQ tests via factor analysis. It very likely has a biological grounding to some extent, but it is not itself isomorphic with any biological ability or abilities. Even if a strong biological basis for the existence of g were found, g would not be isomorphic with it, as g is a statistical construct that is only partially reflective of these biological processes.
It's not only measuring IQ test ability, which is partly determined by other non-intelligence-related factors, as no one has ever denied. IQ tests are only useful to the extent they have predictive validity, that they correlate with real-world measures of accomplishment and intellect; otherwise, they are just pointless puzzles. So that quote is not entirely correct anyway. Further on this point:
> Only measuring the combined effect of abilities that are useful in all cognitive tasks (in practice the subtest scores of an IQ test battery)
Even the WAIS-IV, likely the most g-loaded test in existence, does not do this: while it correlates very highly with g, it is still in itself an imperfect measure of whatever g really represents. General intelligence is the positive manifold; it is the fact that all mental abilities on g-loaded tests correlate positively with each other, and a first-order factor can represent this.
As Jensen said in *The g Factor*: "When this battery is factor analyzed in various age groups of the standardization population, the percentage of the total variance in all the subtests accounted for by g averages about 30 percent in a hierarchical analysis and about 37 percent when it is represented by the first principal factor."
Blaha & Wallbrown (1982) found: "the g factor accounts for 47% of the total WAIS-R subtest variance in the nine age groups included in the standardization sample, thus indicating a strong dimension of individual difference." (https://psycnet.apa.org/record/1983-00122-001)
The majority of the variance that is robust across tasks and not attributable to measurement error or test-specific ability is attributable to g, yes, but a perfectly g-loaded test that takes no account of specialized abilities is only a theoretical possibility. Therefore, g is not a complete measure of 'the ability to perform well on any IQ test', as a significant amount of the score on any individual test is not determined by g. It is more accurately the ability to perform well at any cognitive task that is g-loaded, which is pretty much all of them to some extent. It is the primary source of individual variance in these diverse tasks at the group level, and this is already clearly stated in the lede.
You cannot perform a factor analysis on an individual, but you can extract the g-factor from a battery of tests and rank-order it by percentile against a norm group. This is what the FSIQ is doing indirectly, though even on highly g-loaded tests a lot of the score is caught up in specialized abilities and measurement error.
Your definition completely misses the fact that this general factor emerges naturally in all large-scale analyses of the correlations between diverse cognitive skills. The strength of these correlations is determined by the respective g-loadings of the cognitive skills involved.
It doesn't necessarily 'seek to measure' anything; it simply emerges. And the lede as it is reflects that very well. Altmark22 (talk)
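(For the technically inclined: the positive manifold and first-factor extraction discussed above can be sketched in a few lines of Python. This is a toy simulation with invented loadings, sample size, and subtest count, not an analysis of any real test battery; it just shows that when subtests share a common factor, all pairwise correlations come out positive and the first principal factor loads positively on every subtest, accounting for a share of variance in the same ballpark as the figures Jensen and Blaha & Wallbrown report.)

```python
# Toy simulation of the "positive manifold": subtest scores that share a
# common factor g. All numbers (loadings, subtest count) are invented.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_subtests = 5000, 8

# Each simulated subtest score = g-loading * g + task-specific noise,
# scaled so every subtest has unit variance.
g = rng.standard_normal(n_people)
loadings = rng.uniform(0.4, 0.8, n_subtests)            # hypothetical g-loadings
specific = rng.standard_normal((n_people, n_subtests))
scores = g[:, None] * loadings + specific * np.sqrt(1 - loadings**2)

R = np.corrcoef(scores, rowvar=False)                   # 8x8 subtest correlation matrix
eigvals, eigvecs = np.linalg.eigh(R)                    # eigenvalues in ascending order
first = eigvecs[:, -1] * np.sqrt(eigvals[-1])           # first principal factor loadings
first *= np.sign(first.sum())                           # fix the arbitrary sign

print("off-diagonal correlations all positive:", bool((R[~np.eye(n_subtests, dtype=bool)] > 0).all()))
print("first-factor loadings all positive:", bool((first > 0).all()))
print("variance explained by first factor: %.0f%%" % (100 * eigvals[-1] / n_subtests))
```

The first principal factor here plays the role of g: it is whatever single dimension best accounts for the shared variance, while the task-specific noise is left out.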
- A measure may be imperfect and confounded, but "seeks" covers that, implying some difficulties or limitations are involved. Of course g 'seeks to measure' something, namely general intelligence (rather than specialized intelligence); hence the name. My hunch is that for the majority of readers the most interesting information is the difference between specialized and general intelligence, with g seeking to measure only the latter (even if it does so imperfectly), and with IQ (sub)tests being more confounded by the former. Bibipi (talk) 05:40, 22 April 2021 (UTC)
Norming
The explanation for the norming is confusing because the percentile is a different quantity than the score on a 100/15 normal distribution (even though one can convert from one to the other). Bibipi (talk) 07:36, 19 April 2021 (UTC)
- It's less confusing than the previous version of the lede, which didn't properly distinguish between numerical scores and percentiles. Many people do not know what standard deviations represent either. Most modern tests give you a percentile for each subtest and the FSIQ, as well as scores, for this reason: the percentile is more informative than raw scores (which most people are familiar with from high-school test scoring). Writing about z-scores would make the lede even more 'technical'. More people likely know what a percentile is than a z-score or standard deviation. But it could be simplified, as long as the key point remains: that the score represents how you perform relative to other people in your age group. Altmark22 (talk) 12:55, 19 April 2021 (UTC)
- I think the lede should be clearer about raw scores (the sum of correct items) and the ways of making raw scores comparable to the population: percentiles (x% score worse) and normed scores (distance from the mean, using the variation in the population as a "yardstick", scaled to a mean of 100, with +15 being roughly the ~70th percentile)
- > I think the lede should be more clear about raw scores [...]
Ok, first you complain that the lede is 'too technical' and make one of your typical nitpicks about percentiles, which I agreed with (as it was a valid point) and then incorporated into the article. Now you want to add a portion of text referring to z-scores, scaled scores, and percentiles, which is already stated in the lede in plain English: "An IQ score is computed such that the population mean is 100 points, and one's score is then calculated by converting the overall 'raw scores' (based on how well you do on the tasks) to a standardized score, measuring how you compare to the rest of the population", and in the bell curve graph to the right of the lede.
There is no need to get into the weeds here; the core point is that the IQ score is relative. It's an ordinal scale with no absolute zero, and the score is only meaningful insofar as it compares you to a peer group. This is made blatantly apparent by things such as the Flynn effect and the fact that raw scores on certain kinds of IQ tests and subtests decline with age. That's all that needs to be communicated here.
This is apart from the fact that raw scores do not always convert neatly to scaled scores. The WAIS subtests, for example, give you a scaled score with a median of 10 and an SD of 3, converted from the raw score. These subtest scores are then further converted to a full-scale IQ that can weight the different subtests unevenly.
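(A minimal sketch of the standardization arithmetic under discussion, with invented norm data. Real tests use empirically constructed lookup tables per age group rather than a plain z-transform, so this only illustrates the relative-scoring idea.)

```python
# Sketch of deviation-IQ scoring: a raw score is placed relative to a norm
# group, then rescaled to mean 100, SD 15. The norm data here is made up.
import statistics
from math import erf, sqrt

norm_raw = [38, 41, 44, 45, 47, 48, 50, 52, 55, 60]  # hypothetical norm-group raw scores
mean = statistics.fmean(norm_raw)
sd = statistics.stdev(norm_raw)

def iq_score(raw):
    """Distance from the norm mean in SD units, rescaled to mean 100, SD 15."""
    return 100 + 15 * (raw - mean) / sd

def percentile(iq):
    """Share of the norming distribution scoring below this IQ, assuming normality."""
    return 100 * 0.5 * (1 + erf((iq - 100) / (15 * sqrt(2))))

print(round(iq_score(mean)))   # scoring exactly at the norm mean gives 100
print(round(percentile(115)))  # one SD above the mean is about the 84th percentile
```

Note that the percentile and the 100/15-scaled score are different quantities, related only through the assumed normal distribution, which is the distinction being discussed here.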
The lede already clearly states that raw scores are converted to standard scores that make these raw scores directly comparable to a respective norming group and outlines the concept of standard deviations and the bell curve. These changes you are arguing for add pointless verbiage; they do nothing to clarify.
This kind of thing leads me to believe you are merely 'arguing for the sake of arguing,' as you have stated you do enjoy doing before, which is an excellent way to make me disregard your input completely. But you are free to try to convince me otherwise. Altmark22 (talk) 00:52, 22 April 2021 (UTC)
- I did not see that you'd already corrected the relevant section, which seemingly conflated percentiles and standardized scores. My bad. https://incels.wiki/index.php?title=IQ&oldid=59160 Bibipi (talk) 05:50, 22 April 2021 (UTC)
- I'll correct the rest of your new edits later; I've rolled them back for now and locked the page. I'll go through your edits at my leisure, keep the good ones, and remove/modify the points of disagreement between us. Not going to bother with edit wars (they are against the rules, in any case). I'll unlock this page when I'm done so you can state your points of disagreement again, and I will take them into consideration, but I will warn you not to touch the new lede until I've responded; that would constitute edit-warring. You'll have to learn to be happy with the lede as it then stands. You'll also have to wait for me to reintegrate your edits, as there are more important articles to look over and edit. Not going to argue over this issue further; I've said my piece, and it's clear you refuse to co-operate, read, or listen when it comes to certain bugbears you hold, in this case regarding what I see as minor issues. Altmark22 (talk) 09:46, 29 April 2021 (UTC)
- I don't have much to say about this, except that it is a rather drastic response to a medium-sized edit of an unfinished article that is not even categorized by quality, let alone A-class. Unfinished articles benefit from reorganization and larger edits. I did not remove any information; I only removed some redundancies, like the repetition on how IQ is related to various measures of success, as well as some redundant wording. Bibipi (talk) 15:26, 29 April 2021 (UTC)
- Yeah, it's actually very good. I was just annoyed that you completely altered the lede while I was gone, tbh. I think four paragraphs (what Wikipedia advises) is a good general guide to optimal lede length. And I think the article is worthy of being classified A-class. I'll unlock soon and you can change what you don't like. Altmark22 (talk) 23:07, 11 May 2021 (UTC)
- Your recent additions and revisions are very good; I just finished reading them. It seems I messed up pretty much every figure regarding sd and percentiles, LOL. I agree on A-class. The only things I'd improve are details, like adding a figure for the re-test reliability in the lede (generally >.8 for complex IQ tests), which would still count as "very high" (the "very" could then be re-added). The "dedicated" in the lede seems a bit confusing to me, because the descriptions that follow seem to apply to any IQ test, not just ones that are 'dedicated' to something. Simonton's research is relevant to the negative correlation between high achievement and RS. There are various results like this, e.g. gifted children having slightly better motor skills, athleticism, and hand-eye coordination, which might be relevant to the respective sections. Bibipi (talk) 17:33, 15 May 2021 (UTC)
- I'll unlock the article and you can change what you want; I'm not interested in edit wars, and they're against the rules anyway. I think our conflict over this article has actually improved it substantially, so competition is good in this respect.
> "It seems I messed up pretty much every figure regarding sd and percentiles, LOL." It's fine, we all make mistakes. I also screwed up a lot on the demographics article's Australia section, as I wrote that whole section in a hurry (but corrected it when I noticed). It can be easily fixed.
> "The only things I'd improve are details, like adding a figure for the re-test reliability in the lede (generally >.8 for complex IQ tests)" Yeah, there seems to be a positive relationship between the quality of the test and reliability, and low reliability only really seems to apply to matrix-reasoning-type tests, as they seem really prone to initial practice effects for whatever reason (I'd suspect this improvement itself is also substantially g-loaded despite "Goodhart's law"; I bet higher-IQ people improve more with minimal practice on this specific task). A comprehensive assessment has very high test-retest reliability in most cases. So it'd be fine to add that.
> "There are various results like this, e.g. gifted children having slightly better motor skills" That study seems a tad dubious to me, in terms of its low sample size and weird measurements of total athleticism, but it's a robust finding that higher IQ is associated with better fine motor skills (at least below a certain threshold; the stories of clumsy geniuses likely arise because such skills are weakly g-loaded, and genius seems to be associated with a weakening of 'g', wherein such people are extremely specialized in certain cognitive areas). There is barely any solid research on this topic anyway, so it'd be good to add. Altmark22 (talk)
Dysgenics
The article needs a section on the Woodley effect and the evidence of substantial secular declines in intelligence, to give a more balanced overview of this issue compared to its coverage on certain other wikis.
Paradox of Testosterone
- Midwits (100-120 IQ) have the HIGHEST testosterone levels https://www.sciencedirect.com/science/article/abs/pii/S0028393206004155
- Testosterone Metabolism & Digit Ratio are correlated with Non-Verbal IQ https://pdfs.semanticscholar.org/bc81/43041010dcac2c1e1014d56a51edfb52ad0a.pdf
- Similar findings for spatial ability https://www.sciencedirect.com/science/article/abs/pii/S0160289604001333
Hypothesis: the hatred of the SoyJak is driven by identifiable "lost potential" and comes from females and non-midwits. Conversely, the hatred against retards and high-IQ "femboys" comes from Chads.
For Emil K. Fans: https://emilkirkegaard.dk/en/category/science/psychology/psychometics/intelligence-iq-cognitive-ability/