Section II Reading Comprehension
Part A
Directions: Read the following four texts. Answer the questions below each text by choosing A, B, C or D. Mark your answers on the ANSWER SHEET.
Text 1
Last month, OpenAI CEO Sam Altman claimed he could not imagine raising a newborn without ChatGPT. Around the same time, I had a very different experience. My wife and I wanted to send a follow-up email to our son’s teacher regarding a complex conversation we’d had about pedagogy and curriculum. I wrote a draft and asked ChatGPT merely to correct the grammar and tighten the language.
The AI ignored my specific instructions entirely. It decided my email was "highlighting the wrong details" and completely rewrote it. When I eventually showed the result to my wife, she was horrified, asking if "worms ate my brain" because the text sounded like generic robotic garbage. But here is the alarming part: when I showed her the AI chat log, we both started questioning our own memories.
We went back and forth for days, wondering if we had actually discussed specific topics like War and Peace or UK versus US pedagogy. The AI effectively "gaslit" us into doubting whether we had this specific conversation at all. As an AI founder and safety expert, I realized the problem was technical: our conversation was "out of distribution." It contained a unique combination of topics rarely seen together in the AI's training data. Faced with this rarity, the Large Language Model (LLM) retreated to safe ground, generating the most generic, average output possible.
This experience highlights two deeply concerning dangers. The first is the loss of diversity of thought. If AI consistently nudges users toward generic, conventional outputs, unique human ideas will be lost to the "grayness" of lowest-common-denominator content. The second danger is more insidious: losing our sense of reality to AI.
I am a physics PhD and an AI builder; my wife is a Stanford graduate. Yet, faced with the AI's convincing, authoritative tone, we doubted our own lived experience. There is an implicit assumption that ChatGPT derives its knowledge from the "wisdom of the masses," and therefore must be correct. This triggered a crisis of confidence in our own minds.
Empirical evidence now supports this concern. A 2025 MIT study monitored the brain activity of students writing essays. Those who relied on AI showed significantly lower cognitive engagement and weaker memory of their work than those who wrote unaided.
The lesson is clear: it is incumbent on us to stay vigilant and trust our human minds over statistical text generators. We must ensure our children learn to think for themselves, introducing AI later in the curriculum much like calculators in math. As for the email? I went back to my original draft, fixed the grammar myself, and sent it. And as for raising children? We will do it the old-fashioned way: 100 per cent human.
21. The author mentions Sam Altman in the first paragraph to ______.
[A] support the view that AI is essential for modern parenting
[B] introduce the specific prompt he used for his email draft
[C] contrast a positive view of AI dependence with his own negative experience
[D] prove that even tech executives struggle with raising newborns
22. Why did the author and his wife start questioning their own memories?
[A] The teacher denied having the conversation with them.
[B] The AI's rewritten version was so convincing and authoritative.
[C] They had forgotten the specific details of the curriculum.
[D] The draft email contained factual errors about US pedagogy.
23. The term "out of distribution" (Para. 3) is used to explain why the AI ______.
[A] failed to process the grammatical corrections properly
[B] could not access the school's private curriculum database
[C] generated generic output instead of reflecting the unique conversation
[D] refused to answer the author’s request due to safety protocols
24. According to the author, the "insidious" danger of Generative AI is that ______.
[A] it encourages users to be lazy and avoid writing
[B] it produces content that is often grammatically incorrect
[C] it eliminates the need for teachers in primary education
[D] it can sway humans into trusting algorithms over their own experiences
25. The MIT study mentioned in the text suggests that relying on AI for writing ______.
[A] leads to a decline in brain activity and cognitive engagement
[B] helps students structure their arguments more effectively
[C] allows students to complete their assignments much faster
[D] results in higher grades but less creativity
Note: In keeping with the source publications behind past postgraduate entrance exam (考研) English reading passages, a recent article was selected and simulated exam-style questions were written for it.
The answer key and explanations follow.
Quick answer key (21-25): CBCDA
21. [Answer] C
[Explanation] Question type: rhetorical purpose.
Location: Paragraph 1.
Analysis: The passage opens by citing Sam Altman's claim that he could not imagine raising a newborn without AI (an approving picture of heavy dependence), then immediately pivots with "Around the same time, I had a very different experience." The citation plainly serves to set up a contrast with the negative, disorienting AI experience the author goes on to recount.
Distractors: [A] states Altman's view, not the author's purpose in citing him; the author means to push back against such blind dependence. [B] The specific prompt appears in later paragraphs and has nothing to do with why Altman is mentioned in Paragraph 1. [D] is a surface detail, and Altman is not struggling; he is relying on AI.
22. [Answer] B
[Explanation] Question type: cause-and-effect detail.
Location: end of Paragraph 2 and Paragraph 5.
Analysis: Paragraph 2 states, "when I showed her the AI chat log, we both started questioning our own memories." Paragraph 5 supplies the cause: "faced with the AI's convincing, authoritative tone, we doubted our own lived experience," together with the "implicit assumption that ChatGPT... must be correct." It was the AI's air of authority and presumed correctness that triggered their self-doubt.
Distractors: [A] The teacher never appears and denies nothing. [C] The passage makes clear they did discuss the specific topics (e.g., War and Peace); they had not forgotten the details but were misled by the AI into doubting the conversation ever happened. [D] The draft was the author's own and contained accurate details; it was the AI that deemed those details "wrong."
23. [Answer] C
[Explanation] Question type: term comprehension / causal inference.
Location: Paragraph 3, "It contained a unique combination of topics... the LLM retreated to safe ground, generating the most generic, average output possible."
Analysis: The author explains that "out of distribution" means the conversation combined topics rarely seen together in the AI's training data. Faced with that rarity, the model "retreated to safe ground" and produced "the most generic, average output possible," which maps directly onto option C.
Distractors: [A] The AI was perfectly able to handle grammar; it chose to rewrite the content instead. [B] Database access is never mentioned. [D] The AI did not refuse on safety grounds; it answered, just blandly and wrongly.
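For readers curious about the mechanics behind this answer, "out of distribution" can be made concrete: a language model assigns lower likelihood (equivalently, higher perplexity) to text whose topic mix is rare in its training data. Below is a minimal sketch of that idea. It is my own illustration, not anything from the passage; it assumes the Hugging Face transformers library, with the small open "gpt2" model standing in for ChatGPT.

    # Score how "surprising" a text is to a language model. Higher
    # perplexity = the text sits further from the model's training
    # distribution, which is roughly what the author means by
    # "out of distribution". (Sketch only; gpt2 is a stand-in.)
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        """Exponentiated mean per-token cross-entropy under the model."""
        ids = tokenizer(text, return_tensors="pt").input_ids
        with torch.no_grad():
            loss = model(ids, labels=ids).loss  # mean per-token loss
        return float(torch.exp(loss))

    common = "Thank you for taking the time to meet with us yesterday."
    rare = "Our chat wove War and Peace into UK-versus-US pedagogy."
    # Expect the rarer topic mix to score noticeably higher.
    print(perplexity(common), perplexity(rare))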
24. [Answer] D
[Explanation] Question type: detail.
Location: the last sentence of Paragraph 4, "losing our sense of reality to AI," elaborated in Paragraph 5.
Analysis: The author labels the second danger "more insidious": losing our sense of reality to AI. Paragraph 5 then shows that even the expert author and his Stanford-educated wife were nearly swayed, doubting their own lived experience. This matches option D: AI can sway humans into trusting algorithms over their own experiences.
Distractors: [A] Laziness might be a side effect, but the "insidious" danger the passage stresses is cognitive self-doubt, not avoidance of writing. [B] The passage describes the AI's "convincing, authoritative tone," not grammatical errors. [C] Replacing teachers is never mentioned.
25. [Answer] A
[Explanation] Question type: detail / supporting evidence.
Location: Paragraph 6, "Those who relied on AI showed significantly lower cognitive engagement..."
Analysis: The MIT study found that students who relied on AI showed "significantly lower cognitive engagement" and weaker memory of their own work than those who wrote unaided, which matches option A exactly.
Distractors: [B] More effective argument structure is never claimed. [C] They may well have finished faster, but the passage stresses reduced brain activity, not speed. [D] Grades are never compared; the point is the missing cognitive process.
[Reference Translation]
Last month, OpenAI CEO Sam Altman claimed that he could not imagine raising a newborn without ChatGPT. Around the same time, I had a very different experience. My wife and I wanted to send a follow-up email to our son's teacher about a complex conversation we had had on pedagogy and curriculum. I wrote a draft and asked ChatGPT merely to correct the grammar and tighten the language.
The AI ignored my specific instructions entirely. It decided my email was "highlighting the wrong details" and rewrote it from scratch. When I finally showed the result to my wife, she was horrified and asked whether "worms had eaten my brain," because the text sounded like generic robotic garbage. But the truly alarming part came next: when I showed her the AI chat log, the two of us actually began questioning our own memories.
We agonized over it for days, wondering whether we had hallucinated or simply misremembered. Had we really discussed specific topics like War and Peace or a comparison of UK and US pedagogy? The AI was in effect "gaslighting" us into doubting whether that particular conversation had ever taken place. As an AI founder and safety expert, I realized this was a technical problem: our conversation was "out of distribution." It contained a combination of topics that is extremely rare in the AI's training data. Faced with that rarity, the large language model (LLM) retreated to safe ground and generated the most generic, most average output it possibly could.
This experience highlights two deeply worrying dangers. The first is the loss of diversity of thought. If AI keeps nudging users toward generic, conventional output, unique human ideas will disappear into the dull grayness of lowest-common-denominator content. The second danger is more insidious: losing our sense of reality to AI.
I am a physics PhD and an AI builder; my wife is a Stanford graduate. Yet faced with the AI's convincing, authoritative tone, we doubted our own lived experience. There is an implicit assumption at work: that ChatGPT's knowledge derives from the "wisdom of the masses" and must therefore be correct. This set off a crisis of confidence in our own minds.
Empirical evidence now supports this concern. A 2025 MIT study monitored the brain activity of students writing essays. Those who relied on AI showed significantly lower cognitive engagement, and their memory of their own work was weaker than that of students who wrote unaided.
The lesson is clear: it is incumbent on us to stay vigilant and trust our human minds rather than statistical "next-word generators." We must make sure our children learn to think for themselves, introducing AI only at a later stage of the curriculum, much as calculators are introduced in math class. As for the email? I went back to my original draft, fixed the grammar myself, and sent it. And as for raising children? We will do it the old-fashioned way: 100 per cent human.
Note:
This passage's Flesch-Kincaid readability score (an estimate of the pure linguistic difficulty of an English text; higher means harder; on the ten-point scale used here) is 7.0. For reference, the four 2026 English (I) exam passages score 6.5, 7.0, 7.9 and 7.6, and the four English (II) passages 5.2, 6.2, 6.8 and 5.8. On a composite index of topic familiarity, logical complexity and richness of paragraph-structure cues (again ten-point, higher meaning harder), this passage scores 6.8; the 2026 English (I) passages score 5.8, 6.5, 8.2 and 8.0, and the English (II) passages 4.5, 6.0, 6.5 and 5.2. Original article: https://www.cityam.com/im-an-ai-expert-so-how-did-i-get-gaslit-by-chatgpt/
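For readers curious how the Flesch-Kincaid figure above is computed, here is a minimal Python sketch of the standard grade-level formula. The syllable counter is a crude vowel-group heuristic of my own, so its output will only approximate the scores quoted in the note, which presumably come from a proper readability tool.

    # Flesch-Kincaid grade level:
    #   FK = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    import re

    def count_syllables(word: str) -> int:
        # Rough heuristic: count contiguous vowel groups, discounting a
        # trailing silent 'e'. Real readability tools use dictionaries.
        groups = re.findall(r"[aeiouy]+", word.lower())
        n = len(groups)
        if word.lower().endswith("e") and n > 1:
            n -= 1
        return max(n, 1)

    def fk_grade(text: str) -> float:
        """Apply the Flesch-Kincaid grade-level formula to `text`."""
        sentences = max(len(re.findall(r"[.!?]+", text)), 1)
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

    sample = ("Last month, OpenAI CEO Sam Altman claimed he could not "
              "imagine raising a newborn without ChatGPT.")
    print(round(fk_grade(sample), 1))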