Talk:IQ: Difference between revisions

6,049 bytes added, 22 April 2021
: I put those in brackets so they could be glossed over, but yeah, I should link articles for each of those terms or simplify. In regards to the latter point you made, the important point here, IMO, is that ''g'' is a statistical construct that is likely measuring something concrete (wholly or to an extent). But no one has pointed to a single component of the brain or combination of genes that can wholly represent it yet. [[User:Altmark22|Altmark22]] ([[User talk:Altmark22|talk]]) 12:55, 19 April 2021 (UTC)
:: Yeah, g has yet to be fully grounded functionally, but at least the definition "measure of ability to perform well on any IQ test" is fairly accurate because factor analysis with one factor exactly gets the variance that is shared between many different IQ test results ("the core") and ignores variance due to task-specific, non-general skills. "any real-world cognitive task" is more hypothetical though. It would be more accurate to write "g ''seeks'' to measure ability in any cognitive task, which means ignoring task-specific abilities and ''only'' measuring the combined effect of abilities that are useful in all cognitive tasks (in practice the subtest scores of an IQ test battery)" [[User:Bibipi|Bibipi]] ([[User talk:Bibipi|talk]]) 20:28, 21 April 2021 (UTC)
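The factor-analytic claim being discussed, that a single factor captures the variance shared across many subtests while ignoring task-specific variance, can be sketched with a small simulation. The loadings and sample sizes below are assumptions for illustration, not estimates from any real test battery:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 5000, 6                     # simulated examinees and subtests
g = rng.standard_normal(n)         # latent general factor
loadings = np.array([0.8, 0.7, 0.7, 0.6, 0.5, 0.5])  # assumed g-loadings

# each subtest score = g-loading * g + task-specific (non-shared) noise
specific = rng.standard_normal((n, k)) * np.sqrt(1 - loadings**2)
scores = g[:, None] * loadings + specific

R = np.corrcoef(scores, rowvar=False)    # subtest correlation matrix
off_diag = R[~np.eye(k, dtype=bool)]

# positive manifold: every subtest correlates positively with every other
print(off_diag.min() > 0)

# share of total variance captured by the first principal factor
share = np.linalg.eigvalsh(R)[-1] / k
print(round(share, 2))   # roughly 0.5 with these assumed loadings
```

With one factor extracted, the first eigenvalue picks up exactly the variance the subtests share ("the core"), while the task-specific noise is left in the remaining components.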
::: >Yeah, g has yet to be fully grounded functionally [...]
Firstly, the article is about IQ, not general intelligence. IQ is not 'intelligence' (whatever that is), and IQ is not ''g''.
The g-factor does not solely or directly measure the ability to perform well on any IQ test. It is the statistical representation of the ability to perform well on any IQ test (on average), plus any g-loaded ability extracted from standardized IQ tests via factor analysis. It very likely has a biological grounding to some extent, but it is not itself isomorphic with any biological ability or abilities. Even if a strong biological basis for the existence of ''g'' were found, it would not be isomorphic with ''g'', as ''g'' represents a statistical construct that is only partially reflective of these biological processes.
It is not only measuring IQ test ability, which is partly determined by other, non-intelligence-related factors, as no one has ever denied.
IQ tests are only useful to the extent that they have predictive validity, i.e. that they correlate with real-world measures of accomplishment and intellect; otherwise, they are just pointless puzzles. So that quote is not entirely correct anyway. Further on this point:
> Only measuring the combined effect of abilities that are useful in all cognitive tasks (in practice the subtest scores of an IQ test battery)
Even the WAIS-IV, likely the most g-loaded test in existence, does not do this: while it correlates very highly with ''g'', it is still in itself an imperfect measure of whatever ''g'' really represents. General intelligence is the positive manifold; it is the fact that all mental abilities on g-loaded tests correlate positively with each other, and a first-order factor can represent this.
As Jensen said in ''The g Factor'':
"When this battery is factor analyzed in various age groups of the standardization population, the percentage of the total variance in all the subtests accounted for by g averages about 30 percent in a hierarchical analysis and about 37 percent when it is represented by the first principal factor."
Blaha & Wallbrown (1982) found:
"the g factor accounts for 47% of the total WAIS-R subtest variance in the nine age groups included in the standardization sample, thus indicating a strong dimension of individual difference."
(Hierarchical factor structure of the Wechsler Adult Intelligence Scale–Revised, PsycNET, apa.org)
The majority of the variance that is robust across tasks and not attributable to measurement error or test-specific ability is attributable to g, yes, but a perfectly g-loaded test, one that does not tap specialized abilities at all, is only a theoretical possibility. Therefore, ''g'' is not a complete measure of 'the ability to perform well on any IQ test', as a significant amount of the score on any individual test is not determined by g. It is more accurately the ability to perform well at any cognitive task that is g-loaded, which is pretty much all of them to some extent. It is the primary source of individual variance in these diverse tasks at the group level, and this is already clearly stated in the lede.
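The point that much of any single test's score is not determined by g can be put in illustrative numbers. The figures below are assumptions, not published WAIS parameters: a subtest with a g-loading of 0.7 and a reliability of 0.90 gets less than half of its score variance from g.

```python
# Illustrative decomposition of a single subtest's score variance.
g_loading = 0.7      # assumed correlation of the subtest with g
reliability = 0.90   # assumed reliability of the subtest

g_variance = g_loading ** 2                   # variance shared with g
error_variance = 1 - reliability              # measurement error
specific_variance = reliability - g_variance  # reliable, task-specific part

print(round(g_variance, 2))         # 0.49: under half the variance is g
print(round(specific_variance, 2))  # 0.41
print(round(error_variance, 2))     # 0.1
```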
You cannot perform a factor analysis on an individual, but you can extract the g-factor from a battery of tests and rank-order it by percentile compared to a norm group. This is what the FSIQ does indirectly, though even on highly g-loaded tests, a lot of the score is caught up in specialized abilities and measurement error.
Your definition completely misses the fact that this general factor naturally emerges in all large-scale analyses of the correlations between diverse cognitive skills. These correlations are determined by the respective g-loadings of the skills involved.
It doesn't necessarily 'seek to measure' anything; it emerges naturally. And the lede as it is reflects that very well. [[User:Altmark22|Altmark22]] ([[User talk:Altmark22|talk]])


== Norming ==
:It's less confusing than the previous version of the lede, which didn't properly distinguish between numerical scores and percentiles. Many people do not know what standard deviations represent either. For this reason, most modern tests give you a percentile alongside the score for each subtest and the FSIQ; the percentile is more informative than raw scores (which most people are familiar with from high school test scoring). Writing about z-scores would make the lede even more 'technical'. More people likely know what a percentile is than a z-score or standard deviation. But it could be simplified, as long as the key point remains: that the score represents how you perform ''relative to other people of your age group''. [[User:Altmark22|Altmark22]] ([[User talk:Altmark22|talk]]) 12:55, 19 April 2021 (UTC)
:: I think the lede should be clearer about raw scores (sum of correct items) and the ways of making raw scores comparable to the population: percentiles (x% score worse) and normed scores (distance from the mean, using the variation in the population as a "yardstick", scaled so the mean is 100 and an SD of 15, which puts a score of 115 at roughly the 84th percentile)
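The raw-score → normed-score → percentile chain under discussion can be sketched with the Python standard library alone. The mean-100/SD-15 convention is the one the thread describes; treating the norm group as exactly normal is the usual idealization:

```python
from statistics import NormalDist

iq = NormalDist(mu=100, sigma=15)   # IQ convention: mean 100, SD 15

# a raw score is first turned into a z-score against the norm group,
# then rescaled onto the IQ metric: IQ = 100 + 15 * z
z = 1.0                  # one standard deviation above the norm-group mean
score = 100 + 15 * z     # -> 115

# percentile: share of the norm group expected to score lower
percentile = iq.cdf(score) * 100
print(round(percentile))   # 84: +1 SD is roughly the 84th percentile
```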
::: > I think the lede should be more clear about raw scores [...]
Ok, first you complain that the lede is 'too technical' and make one of your typical nitpicks about percentiles, which I agreed with and then incorporated into the article. Now you want to add a portion of text referring to z-scores, scaled scores, and percentiles, which is already stated in the lede in plain English: "An IQ score is computed such that the population mean is 100 points, and one's score is then calculated by converting the overall 'raw scores' (based on how well you do on the tasks) to a standardized score, measuring how you compare to the rest of the population", and in the bell curve graph to the right of the lede.
There is no need to get into the weeds here; the core point is that the IQ score is relative. It's an ordinal scale with no absolute zero, and the score is only meaningful insofar as it compares you to a peer group. This is made blatantly apparent by the existence of things such as the Flynn Effect and the fact that raw scores on certain kinds of IQ tests and some subtests decline with age and so on. That's all that needs to be communicated here.
This is apart from the fact that raw scores do not always convert neatly to scaled scores. The WAIS subtests, for example, give you a scaled score with a mean of 10 and an SD of 3, converted from the raw score. These subtest scores are then further converted to a full-scale IQ that can weight the different subtests unevenly.
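That two-stage conversion can be sketched as follows. The norm statistics here are invented for illustration, and real Wechsler batteries use age-banded lookup tables rather than a linear formula:

```python
# Hypothetical norm-group statistics for one subtest's raw scores.
RAW_MEAN, RAW_SD = 24.0, 6.0

def scaled_score(raw):
    """Raw subtest score -> Wechsler-style scaled score (mean 10, SD 3),
    assuming an approximately normal norm group."""
    z = (raw - RAW_MEAN) / RAW_SD
    return round(10 + 3 * z)

# First stage: each subtest's raw score becomes a scaled score.
scaled = [scaled_score(r) for r in [30, 26, 18, 24]]
print(scaled)   # [13, 11, 7, 10] with these assumed norms

# Second stage (not shown): the sum of scaled scores is looked up in a
# norm table, possibly with uneven subtest weights, to yield the FSIQ
# on the mean-100/SD-15 metric.
```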
The lede already clearly states that raw scores are converted to standard scores that make these raw scores directly comparable to a respective norming group and outlines the concept of standard deviations and the bell curve. These changes you are arguing for add pointless verbiage; they do nothing to clarify.
This makes me think you are merely 'arguing for the sake of arguing,' as you have previously stated you enjoy doing, which is an excellent way to make me disregard your input completely. [[User:Altmark22|Altmark22]] ([[User talk:Altmark22|talk]]) 00:52, 22 April 2021 (UTC)
