Elucidating some uncompromising data 

John tends to fall prey to headlines that perhaps make sense at face value, but do not hold water when you think critically about them. He’s generally perceived as intelligent, but an IQ value of 130 is likely pushing it. I feel more and more that I’m not so intelligent, as far too many people seem to be more intelligent than I am. An IQ of 130 is significant (two standard deviations above the mean, roughly the top 2.3 percent of a normal distribution), and I find more and more that I cannot gratuitously hand out that value to every seemingly intelligent person I encounter. The distribution just does not work that way. The only people I’m certain have an IQ at or approaching that level are Jill and Jack. Beyond that, assuming a normal distribution, only eleven other people in our graduating class have an IQ that high. And there’s evidence to suggest that the school’s distribution sits below white norms, given that the student body is predominantly low-income and Hispanic.

    There are, however, qualities of our school that may mitigate the statistical deficits; the clearest is a continuation school that absorbs some portion of the lower-achieving students, who do tend to be of lower intelligence. (Better-than-average teaching, which I return to below, may be another.) Our standardized test scores, a rough proxy for intelligence, consistently hover around the 75th percentile. This mildly contravenes the intelligence research that pins Hispanic-Americans at a mean IQ of 94, or the 34.458th percentile. How are we to reconcile these two uncompromising data sets?

    Perhaps standardized tests as a rough proxy for intelligence are just that: rough. That may be the case, or Hispanics at my school plainly operate at a higher mean intelligence than the national average for their ethnicity. The latter doesn’t seem to be the case; as stated before, the school is predominantly lower-income: 65.1 percent of the student body qualifies for free or reduced-price lunch, and 73.4 percent of the school is Hispanic. Both data points stack neatly upon each other. I tend to believe the former theory, as it’s been established that SAT scores can be increased a “paltry” 100 points with intensive preparation. High-precision 2016 SAT data reveal that the difference between an 1100 and a 1000 SAT score is sixteen percentiles, which maps closely onto the seventeen-percentile gap between Tustin High’s test scores and the state mean. But do better teachers and curriculum really amount to a staggering seventeen-percentile disparity? It could be the case, if we view superior teaching as a linear model in which students retain slightly more information each day, culminating in substantial variation in the aggregate quantity of information retained.

    Another hypothesis: even though Hispanic-Americans in the United States are predominantly low-income, the Hispanic families at Tustin High may be less uniformly low-income, or those conventionally labeled “low-income” may be less so than the mean for Hispanic families in that category. I don’t have specific numbers on that at the moment, nor am I readily aware of the qualifying income for free or reduced-price lunch. However, if Hispanics at my school are less impoverished than the national Hispanic average, that would predict slightly higher mean intelligence. Coupled with better-than-average teaching, this may conceivably explain the standardized testing results falling at the 75th percentile.
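
    The arithmetic here is easy to sanity-check. A minimal sketch in Python, assuming the standard IQ distribution (mean 100, SD 15); the class size of 570 is a hypothetical figure, chosen only because it makes thirteen students (Jill, Jack, and eleven others) fall within the top 2.3 percent:

from scipy.stats import norm

IQ_MEAN, IQ_SD = 100, 15

# Share of a normal population at or above IQ 130 (two SDs above the mean).
share_130 = norm.sf(130, loc=IQ_MEAN, scale=IQ_SD)
print(f"share with IQ >= 130: {share_130:.4f}")            # 0.0228

# Expected count in a graduating class of N (N = 570 is hypothetical).
N = 570
print(f"expected in a class of {N}: {share_130 * N:.1f}")  # ~13

# Percentile implied by the reported Hispanic-American mean of IQ 94.
print(f"IQ 94 percentile: {norm.cdf(94, IQ_MEAN, IQ_SD):.5f}")  # 0.34458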

    I like to view society, with respect to its ignorance of intelligence and human biodiversity in general, as analogous to alchemy. It functions on an algorithm that is intrinsically flawed, and though social progress can be made notwithstanding, it’s destined to find the “right” answers only by carving out some road that consciously avoids a large variable. If an alchemist, using his own methodology, happens upon an experiment that just so happens to be scientifically valid, it does not justify his philosophy for doing so. But the alchemist will treat it as such, and though he labored tirelessly every day prior to no avail, his scientific finding will galvanize his passion for pseudoscientific malpractice.

    Or, more mildly: John is catching fish from the sky, and he has brought with him a large net in order to catch the most that his paraphernalia will allow. Josh had the same idea, and he brings an equally large net. John positions himself in an area where every minute, one hundred and twenty fish land in his net. Josh, with markedly lower cognitive ability, ventures off into an area that leaves him with a far more moderate rate of one fish per minute. As it happens, the fish that land in Josh’s net shouldn’t otherwise be there: the designated area for fish-plummeting is firmly within twenty feet of John’s position, and fish exit that area only because of wind currents. After an hour of menial fish-catching, Josh has devised a method to improve his rate to two fish per minute! It’s a net increase of 100%, and Josh returns home in an incandescent disposition. John, on the other hand, came home with over 5,000 fish.

12/5/17

    The flaw is obvious, and though Josh made the most of his unfortunate positioning, the fish he collected were a minute percentage of what he conceivably could have caught.
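
    Spelling out the arithmetic, and assuming (as the story implies) roughly an hour of fishing for each of them:

MINUTES = 60

# John's haul at 120 fish per minute; "over 5,000" is, if anything, an
# understatement.
john = 120 * MINUTES    # 7,200 fish

# Grant Josh his doubled rate for the entire hour.
josh = 2 * MINUTES      # 120 fish

print(f"Josh's haul is {josh / john:.1%} of John's")  # 1.7%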

    A simple concept; perhaps this should be relegated to my General Ideas document. Regardless: the reason any video or website’s target demographic becomes a smaller share of its audience the more popular it gets is that the more attention it attracts, the more likely it becomes that people outside the target demographic stumble across it.
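
    A toy model makes the dilution concrete. Every number here is an illustrative assumption, not data: a core target audience of 10,000 that finds the content first, after which additional viewers arrive from a general population where the target trait occurs at a 2 percent base rate:

def target_share(views: int, core: int = 10_000, base_rate: float = 0.02) -> float:
    # Viewers up to `core` are all target demographic; spillover beyond
    # that matches the general population's base rate.
    reached_core = min(views, core)
    spillover = max(views - core, 0)
    return (reached_core + base_rate * spillover) / views

for views in (5_000, 10_000, 100_000, 1_000_000):
    print(f"{views:>9,} views -> {target_share(views):.1%} target audience")
# 5,000 -> 100.0%; 100,000 -> 11.8%; 1,000,000 -> 3.0%

    Note that in this model the absolute number of target viewers never falls; only their share of the total audience does.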