Can Humanity Be Digitized?
10 November 2025
In the years before AI, when search engines simply scanned the Web, I told my tenth-grade lit students that I'd give A's to those whose analytical essays on a piece revealed something we hadn't discussed in class and for which they had produced clear textual evidence. My principal suggested I was giving out too few A's. I pointed out that I didn't give grades; they were earned. I showed students ways to approach literature and to assess the strength of the conclusions at which they arrived. My students earned plenty of A's, mostly, I think, because they felt really good about what they were writing and about how 'smart' they felt.
I wonder how an AI system would handle a request for an as-yet-unearthed thesis in, say, the Ramayana. Of course, the AI's 'class discussion' must have been very extensive, but under these or any other circumstances, could the bot postulate and support a novel conclusion?
Classical reportage covers who, what, where, and when, and to some extent how. The why of things may be taken only from the words of someone other than the reporter; the good reporter would not have speculated. Incorporating events external to the reported incident as analysis would also have been opinion, as it is in this writing.
If AI systems are collecting, and where appropriate citing, existing data, then they're simply doing research. Citing sources would be good; it would provide some basis for authenticating the factual quality of the information. But what about the intangibles? By what rubrics can AI systems evaluate strength or weakness in the nonphysical realm? How can they scale loosely defined abstractions such as liberty, respect, or love?
My students had the words in the book, the strategies I had shared with them, and 16 years or so of gathering data and experiencing feelings. They put it together. Their real intelligence allowed them outstanding insights, and a skill at arriving at them that I hope they have carried into life.
Asimov’s writing from half a century ago mocks the self-accolades of AI engineering. STEM was once a theme in the humanities. Now it is trying to generate its own humanity. Will it supplant us along the way? Are we the primitives in this evolutionary leap into Artificial Humanities? And what does that even mean? Can humanity be digitized?
—————————————————————————————————————————
This article suggests that, far from supporting creative discovery through its own function, AI stifles such thinking in its users' generative cognition: "How A.I. and Social Media Contribute to 'Brain Rot,'" New York Times, Nov. 10, 2025 [https://www.nytimes.com/2025/11/06/technology/personaltech/ai-social-media-brain-rot.html]
jay@jaezz.org