CAT RC Questions | CAT RC Questions Based on Humanities
READING COMPREHENSION Based on HUMANITIES — Passages based on Literature, Criticism, Art, Philosophy, etc. CAT past-year VARC questions with the actual answer key and explanations.
Comprehension
The passage below is accompanied by a set of questions. Choose the best answer to each question.
Understanding romantic aesthetics is not a simple undertaking for reasons that are internal to the nature of the subject. Distinguished scholars, such as Arthur Lovejoy, Northrop Frye and Isaiah Berlin, have remarked on the notorious challenges facing any attempt to define romanticism. Lovejoy, for example, claimed that romanticism is “the scandal of literary history and criticism” . . . The main difficulty in studying the romantics, according to him, is the lack of any “single real entity, or type of entity” that the concept “romanticism” designates. Lovejoy concluded, “the word ‘romantic’ has come to mean so many things that, by itself, it means nothing” . . . The more specific task of characterizing romantic aesthetics adds to these difficulties an air of paradox. Conventionally, “aesthetics” refers to a theory concerning beauty and art or the branch of philosophy that studies these topics. However, many of the romantics rejected the identification of aesthetics with a circumscribed domain of human life that is separated from the practical and theoretical domains of life. The most characteristic romantic commitment is to the idea that the character of art and beauty and of our engagement with them should shape all aspects of human life. Being fundamental to human existence, beauty and art should be a central ingredient not only in a philosophical or artistic life, but also in the lives of ordinary men and women. Another challenge for any attempt to characterize romantic aesthetics lies in the fact that most of the romantics were poets and artists whose views of art and beauty are, for the most part, to be found not in developed theoretical accounts, but in fragments, aphorisms and poems, which are often more elusive and suggestive than conclusive. Nevertheless, in spite of these challenges the task of characterizing romantic aesthetics is neither impossible nor undesirable, as numerous thinkers responding to Lovejoy’s radical skepticism have noted. While warning against a reductive definition of romanticism, Berlin, for example, still heralded the need for a general characterization: “[Although] one does have a certain sympathy with Lovejoy’s despair…[he is] in this instance mistaken. There was a romantic movement…and it is important to discover what it is” . . . Recent attempts to characterize romanticism and to stress its contemporary relevance follow this path. Instead of overlooking the undeniable differences between the variety of romanticisms of different nations that Lovejoy had stressed, such studies attempt to characterize romanticism, not in terms of a single definition, a specific time, or a specific place, but in terms of “particular philosophical questions and concerns” . . . While the German, British and French romantics are all considered, the central protagonists in the following are the German romantics. Two reasons explain this focus: first, because it has paved the way for the other romanticisms, German romanticism has a pride of place among the different national romanticisms . . . Second, the aesthetic outlook that was developed in Germany roughly between 1796 and 1801–02 — the period that corresponds to the heyday of what is known as “Early Romanticism” . . .— offers the most philosophical expression of romanticism since it is grounded primarily in the epistemological, metaphysical, ethical, and political concerns that the German romantics discerned in the aftermath of Kant’s philosophy.
CAT/2023.3(RC)
Question. 4
According to the passage, recent studies on romanticism avoid “a single definition, a specific time, or a specific place” because they:
Comprehension
The passage below is accompanied by a set of questions. Choose the best answer to each question.
Over the past four centuries liberalism has been so successful that it has driven all its opponents off the battlefield. Now it is disintegrating, destroyed by a mix of hubris and internal contradictions, according to Patrick Deneen, a professor of politics at the University of Notre Dame. . . . Equality of opportunity has produced a new meritocratic aristocracy that has all the aloofness of the old aristocracy with none of its sense of noblesse oblige. Democracy has degenerated into a theatre of the absurd. And technological advances are reducing ever more areas of work into meaningless drudgery. “The gap between liberalism’s claims about itself and the lived reality of the citizenry” is now so wide that “the lie can no longer be accepted,” Mr Deneen writes. What better proof of this than the vision of 1,000 private planes whisking their occupants to Davos to discuss the question of “creating a shared future in a fragmented world”? . . . Deneen does an impressive job of capturing the current mood of disillusionment, echoing leftwing complaints about rampant commercialism, right-wing complaints about narcissistic and bullying students, and general worries about atomisation and selfishness. But when he concludes that all this adds up to a failure of liberalism, is his argument convincing? . . . He argues that the essence of liberalism lies in freeing individuals from constraints. In fact, liberalism contains a wide range of intellectual traditions which provide different answers to the question of how to trade off the relative claims of rights and responsibilities, individual expression and social ties. . . . liberals experimented with a range of ideas from devolving power from the centre to creating national education systems. Mr Deneen’s fixation on the essence of liberalism leads to the second big problem of his book: his failure to recognise liberalism’s ability to reform itself and address its internal problems. The late 19th century saw America suffering from many of the problems that are reappearing today, including the creation of a business aristocracy, the rise of vast companies, the corruption of politics and the sense that society was dividing into winners and losers. But a wide variety of reformers, working within the liberal tradition, tackled these problems head on. Theodore Roosevelt took on the trusts. Progressives cleaned up government corruption. University reformers modernised academic syllabuses and built ladders of opportunity. Rather than dying, liberalism reformed itself. Mr Deneen is right to point out that the record of liberalism in recent years has been dismal. He is also right to assert that the world has much to learn from the premodern notions of liberty as self-mastery and self-denial. The biggest enemy of liberalism is not so much atomisation but old-fashioned greed, as members of the Davos elite pile their plates ever higher with perks and share options. But he is wrong to argue that the only way for people to liberate themselves from the contradictions of liberalism is “liberation from liberalism itself”. The best way to read “Why Liberalism Failed” is not as a funeral oration but as a call to action: up your game, or else.
CAT/2023.2(RC)
Question. 5
All of the following statements are evidence of the decline of liberalism today, EXCEPT:
CAT/2023.2(RC)
Question. 6
The author of the passage faults Deneen’s conclusions for all of the following reasons, EXCEPT:
CAT/2023.2(RC)
Question. 7
The author of the passage refers to “the Davos elite” to illustrate his views on:
CAT/2023.2(RC)
Question. 8
The author of the passage is likely to disagree with all of the following statements, EXCEPT:
Comprehension
The passage below is accompanied by a set of questions. Choose the best answer to each question.
Humans today make music. Think beyond all the qualifications that might trail after this bald statement: that only certain humans make music, that extensive training is involved, that many societies distinguish musical specialists from nonmusicians, that in today’s societies most listen to music rather than making it, and so forth. These qualifications, whatever their local merit, are moot in the face of the overarching truth that making music, considered from a cognitive and psychological vantage, is the province of all those who perceive and experience what is made. We are, almost all of us, musicians — everyone who can entrain (not necessarily dance) to a beat, who can recognize a repeated tune (not necessarily sing it), who can distinguish one instrument or one singing voice from another. I will often use an antique word, recently revived, to name this broader musical experience. Humans are musicking creatures. . . . The set of capacities that enables musicking is a principal marker of modern humanity. There is nothing polemical in this assertion except a certain insistence, which will figure often in what follows, that musicking be included in our thinking about fundamental human commonalities. Capacities involved in musicking are many and take shape in complicated ways, arising from innate dispositions . . . Most of these capacities overlap with nonmusical ones, though a few may be distinct and dedicated to musical perception and production. In the area of overlap, linguistic capacities seem to be particularly important, and humans are (in principle) languagemakers in addition to music-makers — speaking creatures as well as musicking ones. Humans are symbol-makers too, a feature tightly bound up with language, not so tightly with music. The species Cassirer dubbed Homo symbolicus cannot help but tangle musicking in webs of symbolic thought and expression, habitually making it a component of behavioral complexes that form such expression. But in fundamental features musicking is neither language-like nor symbol-like, and from these differences come many clues to its ancient emergence. If musicking is a primary, shared trait of modern humans, then to describe its emergence must be to detail the coalescing of that modernity. This took place, archaeologists are clear, over a very long durée: at least 50,000 years or so, more likely something closer to 200,000, depending in part on what that coalescence is taken to comprise. If we look back 20,000 years, a small portion of this long period, we reach the lives of humans whose musical capacities were probably little different from our own. As we look farther back we reach horizons where this similarity can no longer hold — perhaps 40,000 years ago, perhaps 70,000, perhaps 100,000. But we never cross a line before which all the cognitive capacities recruited in modern musicking abruptly disappear. Unless we embrace the incredible notion that music sprang forth in full-blown glory, its emergence will have to be tracked in gradualist terms across a long period. This is one general feature of a history of music’s emergence . . . The history was at once sociocultural and biological . . . The capacities recruited in musicking are many, so describing its emergence involves following several or many separate strands
CAT/2022.2(RC)
Question. 9
Which one of the following sets of terms best serves as keywords to the passage?
CAT/2022.2(RC)
Question. 10
“Think beyond all the qualifications that might trail after this bald statement . . .” In the context of the passage, what is the author trying to communicate in this quoted extract?
CAT/2022.2(RC)
Question. 11
Based on the passage, which one of the following statements is a valid argument about the emergence of music/musicking?
CAT/2022.2(RC)
Question. 12
Which one of the following statements, if true, would weaken the author’s claim that humans are musicking creatures?
Comprehension
The passage below is accompanied by a set of questions. Choose the best answer to each question.
Stories concerning the Undead have always been with us. From out of the primal darkness of Mankind’s earliest years, come whispers of eerie creatures, not quite alive (or alive in a way which we can understand), yet not quite dead either. These may have been ancient and primitive deities who dwelt deep in the surrounding forests and in remote places, or simply those deceased who refused to remain in their tombs and who wandered about the countryside, physically tormenting and frightening those who were still alive. Mostly they were ill-defined—strange sounds in the night beyond the comforting glow of the fire, or a shape, half-glimpsed in the twilight along the edge of an encampment. They were vague and indistinct, but they were always there with the power to terrify and disturb. They had the power to touch the minds of our early ancestors and to fill them with dread. Such fear formed the basis of the earliest tales although the source and exact nature of such terrors still remained very vague. And as Mankind became more sophisticated, leaving the gloom of their caves and forming themselves into recognizable communities—towns, cities, whole cultures—so the Undead travelled with them, inhabiting their folklore just as they had in former times. Now they began to take on more definite shapes. They became walking cadavers; the physical embodiment of former deities and things which had existed alongside Man since the Creation. Some still remained vague and ill-defined but, as Mankind strove to explain the horror which it felt towards them, such creatures emerged more readily into the light. In order to confirm their abnormal status, many of the Undead were often accorded attributes, which defied the natural order of things—the power to transform themselves into other shapes, the ability to sustain themselves by drinking human blood, and the ability to influence human minds across a distance. Such powers—described as supernatural—only [lent] an added dimension to the terror that humans felt regarding them. And it was only natural, too, that the Undead should become connected with the practice of magic. From very early times, Shamans and witchdoctors had claimed at least some power and control over the spirits of departed ancestors, and this has continued down into more “civilized” times. Formerly, the invisible spirits and forces that thronged around men’s earliest encampments, had spoken “through” the tribal Shamans but now, as entities in their own right, they were subject to magical control and could be physically summoned by a competent sorcerer. However, the relationship between the magician and an Undead creature was often a very tenuous and uncertain one. Some sorcerers might have even become Undead entities once they died, but they might also have been susceptible to the powers of other magicians when they did. From the Middle Ages and into the Age of Enlightenment, theories of the Undead continued to grow and develop. Their names became more familiar—werewolf, vampire, ghoul—each one certain to strike fear into the hearts of ordinary humans.
CAT/2022.1(RC)
Question. 13
Which one of the following observations is a valid conclusion to draw from the statement, “From out of the primal darkness of Mankind’s earliest years, come whispers of eerie creatures, not quite alive (or alive in a way which we can understand), yet not quite dead either.”?
CAT/2022.1(RC)
Question. 14
Which one of the following statements best describes what the passage is about?
CAT/2022.1(RC)
Question. 15
“In order to confirm their abnormal status, many of the Undead were often accorded attributes, which defied the natural order of things . . .” Which one of the following best expresses the claim made in this statement?
CAT/2022.1(RC)
Question. 16
All of the following statements, if false, could be seen as being in accordance with the passage, EXCEPT:
Comprehension
Directions for the questions: The passage below is accompanied by a set of questions. Choose the best answer to each question.
For the Maya of the Classic period, who lived in Southern Mexico and Central America between 250 and 900 CE, the category of ‘persons’ was not coincident with human beings, as it is for us. That is, human beings were persons – but other, nonhuman entities could be persons, too. . . . In order to explore the slippage of categories between ‘humans’ and ‘persons’, I examined a very specific category of ancient Maya images, found painted in scenes on ceramic vessels. I sought out instances in which faces (some combination of eyes, nose, and mouth) are shown on inanimate objects. . . . Consider my iPhone, which needs to be fed with electricity every night, swaddled in a protective bumper, and enjoys communicating with other fellow-phone-beings. Does it have personhood (if at all) because it is connected to me, drawing this resource from me as an owner or source? For the Maya (who did have plenty of other communicating objects, if not smartphones), the answer was no. Nonhuman persons were not tethered to specific humans, and they did not derive their personhood from a connection with a human. . . . It’s a profoundly democratizing way of understanding the world. Humans are not more important persons – we are just one of many kinds of persons who inhabit this world. . . .
The Maya saw personhood as ‘activated’ by experiencing certain bodily needs and through participation in certain social activities. For example, among the faced objects that I examined, persons are marked by personal requirements (such as hunger, tiredness, physical closeness), and by community obligations (communication, interaction, ritual observance). In the images I examined, we see, for instance, faced objects being cradled in humans’ arms; we also see them speaking to humans. These core elements of personhood are both turned inward, what the body or self of a person requires, and outward, what a community expects of the persons who are a part of it, underlining the reciprocal nature of community membership. . . .
Personhood was a nonbinary proposition for the Maya. Entities were able to be persons while also being something else. The faced objects I looked at indicate that they continue to be functional, doing what objects do (a stone implement continues to chop, an incense burner continues to do its smoky work). Furthermore, the Maya visually depicted many objects in ways that indicated the material category to which they belonged – drawings of the stone implement show that a person-tool is still made of stone. One additional complexity: the incense burner (which would have been made of clay, and decorated with spiky appliques representing the sacred ceiba tree found in this region) is categorised as a person – but also as a tree. With these Maya examples, we are challenged to discard the person/nonperson binary that constitutes our basic ontological outlook. . . . The porousness of boundaries that we have seen in the Maya world points towards the possibility of living with a certain uncategorisability of the world.
CAT/2021.1(RC)
Question. 17
On the basis of the passage, which one of the following worldviews can be inferred to be closest to that of the Classic Maya?
Comprehension
Directions for the questions: The passage below is accompanied by a set of questions. Choose the best answer to each question.
For the Maya of the Classic period, who lived in Southern Mexico and Central America between 250 and 900 CE, the category of ‘persons’ was not coincident with human beings, as it is for us. That is, human beings were persons – but other, nonhuman entities could be persons, too. . . . In order to explore the slippage of categories between ‘humans’ and ‘persons’, I examined a very specific category of ancient Maya images, found painted in scenes on ceramic vessels. I sought out instances in which faces (some combination of eyes, nose, and mouth) are shown on inanimate objects. . . . Consider my iPhone, which needs to be fed with electricity every night, swaddled in a protective bumper, and enjoys communicating with other fellow-phone-beings. Does it have personhood (if at all) because it is connected to me, drawing this resource from me as an owner or source? For the Maya (who did have plenty of other communicating objects, if not smartphones), the answer was no. Nonhuman persons were not tethered to specific humans, and they did not derive their personhood from a connection with a human. . . . It’s a profoundly democratizing way of understanding the world. Humans are not more important persons – we are just one of many kinds of persons who inhabit this world. . . .
The Maya saw personhood as ‘activated’ by experiencing certain bodily needs and through participation in certain social activities. For example, among the faced objects that I examined, persons are marked by personal requirements (such as hunger, tiredness, physical closeness), and by community obligations (communication, interaction, ritual observance). In the images I examined, we see, for instance, faced objects being cradled in humans’ arms; we also see them speaking to humans. These core elements of personhood are both turned inward, what the body or self of a person requires, and outward, what a community expects of the persons who are a part of it, underlining the reciprocal nature of community membership. . . .
Personhood was a nonbinary proposition for the Maya. Entities were able to be persons while also being something else. The faced objects I looked at indicate that they continue to be functional, doing what objects do (a stone implement continues to chop, an incense burner continues to do its smoky work). Furthermore, the Maya visually depicted many objects in ways that indicated the material category to which they belonged – drawings of the stone implement show that a person-tool is still made of stone. One additional complexity: the incense burner (which would have been made of clay, and decorated with spiky appliques representing the sacred ceiba tree found in this region) is categorised as a person – but also as a tree. With these Maya examples, we are challenged to discard the person/nonperson binary that constitutes our basic ontological outlook. . . . The porousness of boundaries that we have seen in the Maya world points towards the possibility of living with a certain uncategorisability of the world.
CAT/2021.1(RC)
Question. 18
Which one of the following, if true, would not undermine the democratising potential of the Classic Maya worldview?
Comprehension
Directions for the questions: The passage below is accompanied by a set of questions. Choose the best answer to each question.
For the Maya of the Classic period, who lived in Southern Mexico and Central America between 250 and 900 CE, the category of ‘persons’ was not coincident with human beings, as it is for us. That is, human beings were persons – but other, nonhuman entities could be persons, too. . . . In order to explore the slippage of categories between ‘humans’ and ‘persons’, I examined a very specific category of ancient Maya images, found painted in scenes on ceramic vessels. I sought out instances in which faces (some combination of eyes, nose, and mouth) are shown on inanimate objects. . . . Consider my iPhone, which needs to be fed with electricity every night, swaddled in a protective bumper, and enjoys communicating with other fellow-phone-beings. Does it have personhood (if at all) because it is connected to me, drawing this resource from me as an owner or source? For the Maya (who did have plenty of other communicating objects, if not smartphones), the answer was no. Nonhuman persons were not tethered to specific humans, and they did not derive their personhood from a connection with a human. . . . It’s a profoundly democratizing way of understanding the world. Humans are not more important persons – we are just one of many kinds of persons who inhabit this world. . . .
The Maya saw personhood as ‘activated’ by experiencing certain bodily needs and through participation in certain social activities. For example, among the faced objects that I examined, persons are marked by personal requirements (such as hunger, tiredness, physical closeness), and by community obligations (communication, interaction, ritual observance). In the images I examined, we see, for instance, faced objects being cradled in humans’ arms; we also see them speaking to humans. These core elements of personhood are both turned inward, what the body or self of a person requires, and outward, what a community expects of the persons who are a part of it, underlining the reciprocal nature of community membership. . . .
Personhood was a nonbinary proposition for the Maya. Entities were able to be persons while also being something else. The faced objects I looked at indicate that they continue to be functional, doing what objects do (a stone implement continues to chop, an incense burner continues to do its smoky work). Furthermore, the Maya visually depicted many objects in ways that indicated the material category to which they belonged – drawings of the stone implement show that a person-tool is still made of stone. One additional complexity: the incense burner (which would have been made of clay, and decorated with spiky appliques representing the sacred ceiba tree found in this region) is categorised as a person – but also as a tree. With these Maya examples, we are challenged to discard the person/nonperson binary that constitutes our basic ontological outlook. . . . The porousness of boundaries that we have seen in the Maya world points towards the possibility of living with a certain uncategorisability of the world.
CAT/2021.1(RC)
Question. 19
Which one of the following, if true about the Classic Maya, would invalidate the purpose of the iPhone example in the passage?
Comprehension
Directions for the questions: The passage below is accompanied by a set of questions. Choose the best answer to each question.
For the Maya of the Classic period, who lived in Southern Mexico and Central America between 250 and 900 CE, the category of ‘persons’ was not coincident with human beings, as it is for us. That is, human beings were persons – but other, nonhuman entities could be persons, too. . . . In order to explore the slippage of categories between ‘humans’ and ‘persons’, I examined a very specific category of ancient Maya images, found painted in scenes on ceramic vessels. I sought out instances in which faces (some combination of eyes, nose, and mouth) are shown on inanimate objects. . . . Consider my iPhone, which needs to be fed with electricity every night, swaddled in a protective bumper, and enjoys communicating with other fellow-phone-beings. Does it have personhood (if at all) because it is connected to me, drawing this resource from me as an owner or source? For the Maya (who did have plenty of other communicating objects, if not smartphones), the answer was no. Nonhuman persons were not tethered to specific humans, and they did not derive their personhood from a connection with a human. . . . It’s a profoundly democratizing way of understanding the world. Humans are not more important persons – we are just one of many kinds of persons who inhabit this world. . . .
The Maya saw personhood as ‘activated’ by experiencing certain bodily needs and through participation in certain social activities. For example, among the faced objects that I examined, persons are marked by personal requirements (such as hunger, tiredness, physical closeness), and by community obligations (communication, interaction, ritual observance). In the images I examined, we see, for instance, faced objects being cradled in humans’ arms; we also see them speaking to humans. These core elements of personhood are both turned inward, what the body or self of a person requires, and outward, what a community expects of the persons who are a part of it, underlining the reciprocal nature of community membership. . . .
Personhood was a nonbinary proposition for the Maya. Entities were able to be persons while also being something else. The faced objects I looked at indicate that they continue to be functional, doing what objects do (a stone implement continues to chop, an incense burner continues to do its smoky work). Furthermore, the Maya visually depicted many objects in ways that indicated the material category to which they belonged – drawings of the stone implement show that a person-tool is still made of stone. One additional complexity: the incense burner (which would have been made of clay, and decorated with spiky appliques representing the sacred ceiba tree found in this region) is categorised as a person – but also as a tree. With these Maya examples, we are challenged to discard the person/nonperson binary that constitutes our basic ontological outlook. . . . The porousness of boundaries that we have seen in the Maya world points towards the possibility of living with a certain uncategorisability of the world.
CAT/2021.1(RC)
Question. 20
Which one of the following best explains the “additional complexity” that the example of the incense burner illustrates regarding personhood for the Classic Maya?
Comprehension
The passage below is accompanied by a set of questions. Choose the best answer to each question.
I have elaborated . . . a framework for analyzing the contradictory pulls on [Indian] nationalist ideology in its struggle against the dominance of colonialism and the resolution it offered to those contradictions. Briefly, this resolution was built around a separation of the domain of culture into two spheres—the material and the spiritual. It was in the material sphere that the claims of Western civilization were the most powerful. Science, technology, rational forms of economic organization, modern methods of statecraft—these had given the European countries the strength to subjugate the non-European people . . . To overcome this domination, the colonized people had to learn those superior techniques of organizing material life and incorporate them within their own cultures. . . . But this could not mean the imitation of the West in every aspect of life, for then the very distinction between the West and the East would vanish—the self-identity of national culture would itself be threatened. . . .
The discourse of nationalism shows that the material/spiritual distinction was condensed into an analogous, but ideologically far more powerful, dichotomy: that between the outer and the inner. . . . Applying the inner/outer distinction to the matter of concrete day-to-day living separates the social space into ghar and bāhir, the home and the world. The world is the external, the domain of the material; the home represents one’s inner spiritual self, one’s true identity. The world is a treacherous terrain of the pursuit of material interests, where practical considerations reign supreme. It is also typically the domain of the male. The home in its essence must remain unaffected by the profane activities of the material world—and woman is its representation. And so one gets an identification of social roles by gender to correspond with the separation of the social space into ghar and bāhir. . . .
The colonial situation, and the ideological response of nationalism to the critique of Indian tradition, introduced an entirely new substance to [these dichotomies] and effected their transformation. The material/spiritual dichotomy, to which the terms world and home corresponded, had acquired . . . a very special significance in the nationalist mind. The world was where the European power had challenged the non-European peoples and, by virtue of its superior material culture, had subjugated them. But, the nationalists asserted, it had failed to colonize the inner, essential, identity of the East which lay in its distinctive, and superior, spiritual culture. . . . [I]n the entire phase of the national struggle, the crucial need was to protect, preserve and strengthen the inner core of the national culture, its spiritual essence. . .
Once we match this new meaning of the home/world dichotomy with the identification of social roles by gender, we get the ideological framework within which nationalism answered the women’s question. It would be a grave error to see in this, as liberals are apt to in their despair at the many marks of social conservatism in nationalist practice, a total rejection of the West. Quite the contrary: the nationalist paradigm in fact supplied an ideological principle of selection.
CAT/2021.2(RC)
Question. 22
Which one of the following, if true, would weaken the author’s claims in the passage?
Comprehension
The passage below is accompanied by a set of questions. Choose the best answer to each question.
I have elaborated . . . a framework for analyzing the contradictory pulls on [Indian] nationalist ideology in its struggle against the dominance of colonialism and the resolution it offered to those contradictions. Briefly, this resolution was built around a separation of the domain of culture into two spheres—the material and the spiritual. It was in the material sphere that the claims of Western civilization were the most powerful. Science, technology, rational forms of economic organization, modern methods of statecraft—these had given the European countries the strength to subjugate the non-European people . . . To overcome this domination, the colonized people had to learn those superior techniques of organizing material life and incorporate them within their own cultures. . . . But this could not mean the imitation of the West in every aspect of life, for then the very distinction between the West and the East would vanish—the self-identity of national culture would itself be threatened. . . .
The discourse of nationalism shows that the material/spiritual distinction was condensed into an analogous, but ideologically far more powerful, dichotomy: that between the outer and the inner. . . . Applying the inner/outer distinction to the matter of concrete day-to-day living separates the social space into ghar and bāhir, the home and the world. The world is the external, the domain of the material; the home represents one’s inner spiritual self, one’s true identity. The world is a treacherous terrain of the pursuit of material interests, where practical considerations reign supreme. It is also typically the domain of the male. The home in its essence must remain unaffected by the profane activities of the material world—and woman is its representation. And so one gets an identification of social roles by gender to correspond with the separation of the social space into ghar and bāhir. . . .
The colonial situation, and the ideological response of nationalism to the critique of Indian tradition, introduced an entirely new substance to [these dichotomies] and effected their transformation. The material/spiritual dichotomy, to which the terms world and home corresponded, had acquired . . . a very special significance in the nationalist mind. The world was where the European power had challenged the non-European peoples and, by virtue of its superior material culture, had subjugated them. But, the nationalists asserted, it had failed to colonize the inner, essential, identity of the East which lay in its distinctive, and superior, spiritual culture. . . . [I]n the entire phase of the national struggle, the crucial need was to protect, preserve and strengthen the inner core of the national culture, its spiritual essence. . .
Once we match this new meaning of the home/world dichotomy with the identification of social roles by gender, we get the ideological framework within which nationalism answered the women’s question. It would be a grave error to see in this, as liberals are apt to in their despair at the many marks of social conservatism in nationalist practice, a total rejection of the West. Quite the contrary: the nationalist paradigm in fact supplied an ideological principle of selection.
CAT/2021.2(RC)
Question. 23
Which one of the following explains the “contradictory pulls” on Indian nationalism?
Comprehension
The passage below is accompanied by a set of questions. Choose the best answer to each question.
I have elaborated . . . a framework for analyzing the contradictory pulls on [Indian] nationalist ideology in its struggle against the dominance of colonialism and the resolution it offered to those contradictions. Briefly, this resolution was built around a separation of the domain of culture into two spheres—the material and the spiritual. It was in the material sphere that the claims of Western civilization were the most powerful. Science, technology, rational forms of economic organization, modern methods of statecraft—these had given the European countries the strength to subjugate the non-European people . . . To overcome this domination, the colonized people had to learn those superior techniques of organizing material life and incorporate them within their own cultures. . . . But this could not mean the imitation of the West in every aspect of life, for then the very distinction between the West and the East would vanish—the self-identity of national culture would itself be threatened. . . .
The discourse of nationalism shows that the material/spiritual distinction was condensed into an analogous, but ideologically far more powerful, dichotomy: that between the outer and the inner. . . . Applying the inner/outer distinction to the matter of concrete day-to-day living separates the social space into ghar and bāhir, the home and the world. The world is the external, the domain of the material; the home represents one’s inner spiritual self, one’s true identity. The world is a treacherous terrain of the pursuit of material interests, where practical considerations reign supreme. It is also typically the domain of the male. The home in its essence must remain unaffected by the profane activities of the material world—and woman is its representation. And so one gets an identification of social roles by gender to correspond with the separation of the social space into ghar and bāhir. . . .
The colonial situation, and the ideological response of nationalism to the critique of Indian tradition, introduced an entirely new substance to [these dichotomies] and effected their transformation. The material/spiritual dichotomy, to which the terms world and home corresponded, had acquired . . . a very special significance in the nationalist mind. The world was where the European power had challenged the non-European peoples and, by virtue of its superior material culture, had subjugated them. But, the nationalists asserted, it had failed to colonize the inner, essential, identity of the East which lay in its distinctive, and superior, spiritual culture. . . . [I]n the entire phase of the national struggle, the crucial need was to protect, preserve and strengthen the inner core of the national culture, its spiritual essence. . .
Once we match this new meaning of the home/world dichotomy with the identification of social roles by gender, we get the ideological framework within which nationalism answered the women’s question. It would be a grave error to see in this, as liberals are apt to in their despair at the many marks of social conservatism in nationalist practice, a total rejection of the West. Quite the contrary: the nationalist paradigm in fact supplied an ideological principle of selection.
CAT/2021.2(RC)
Question. 24
Which one of the following best describes the liberal perception of Indian nationalism?
Comprehension
The passage below is accompanied by a set of questions. Choose the best answer to each question.
It’s easy to forget that most of the world’s languages are still transmitted orally with no widely established written form. While speech communities are increasingly involved in projects to protect their languages – in print, on air and online – orality is fragile and contributes to linguistic vulnerability. But indigenous languages are about much more than unusual words and intriguing grammar: They function as vehicles for the transmission of cultural traditions, environmental understandings and knowledge about medicinal plants, all at risk when elders die and livelihoods are disrupted.
Both push and pull factors lead to the decline of languages. Through war, famine and natural disasters, whole communities can be destroyed, taking their language with them to the grave, such as the indigenous populations of Tasmania who were wiped out by colonists. More commonly, speakers live on but abandon their language in favor of another vernacular, a widespread process that linguists refer to as “language shift” from which few languages are immune. Such trading up and out of a speech form occurs for complex political, cultural and economic reasons – sometimes voluntary for economic and educational reasons, although often amplified by state coercion or neglect. Welsh, long stigmatized and disparaged by the British state, has rebounded with vigor.
Many speakers of endangered, poorly documented languages have embraced new digital media with excitement. Speakers of previously exclusively oral tongues are turning to the web as a virtual space for languages to live on. Internet technology offers powerful ways for oral traditions and cultural practices to survive, even thrive, among increasingly mobile communities. I have watched as videos of traditional wedding ceremonies and songs are recorded on smartphones in London by Nepali migrants, then uploaded to YouTube and watched an hour later by relatives in remote Himalayan villages . . .
Globalization is regularly, and often uncritically, pilloried as a major threat to linguistic diversity. But in fact, globalization is as much process as it is ideology, certainly when it comes to language. The real forces behind cultural homogenization are unbending beliefs, exchanged through a globalized delivery system, reinforced by the historical monolingualism prevalent in much of the West.
Monolingualism – the condition of being able to speak only one language – is regularly accompanied by a deep-seated conviction in the value of that language over all others. Across the largest economies that make up the G8, being monolingual is still often the norm, with multilingualism appearing unusual and even somewhat exotic. The monolingual mindset stands in sharp contrast to the lived reality of most of the world, which throughout its history has been more multilingual than unilingual. Monolingualism, then, not globalization, should be our primary concern.
Multilingualism can help us live in a more connected and more interdependent world. By widening access to technology, globalization can support indigenous and scholarly communities engaged in documenting and protecting our shared linguistic heritage. For the last 5,000 years, the rise and fall of languages was intimately tied to the plow, sword and book. In our digital age, the keyboard, screen and web will play a decisive role in shaping the future linguistic diversity of our species.
CAT/2021.2(RC)
Question. 27
The author lists all of the following as reasons for the decline or disappearance of a language EXCEPT:
Comprehension
The passage below is accompanied by a set of questions. Choose the best answer to each question.
It’s easy to forget that most of the world’s languages are still transmitted orally with no widely established written form. While speech communities are increasingly involved in projects to protect their languages – in print, on air and online – orality is fragile and contributes to linguistic vulnerability. But indigenous languages are about much more than unusual words and intriguing grammar: They function as vehicles for the transmission of cultural traditions, environmental understandings and knowledge about medicinal plants, all at risk when elders die and livelihoods are disrupted.
Both push and pull factors lead to the decline of languages. Through war, famine and natural disasters, whole communities can be destroyed, taking their language with them to the grave, such as the indigenous populations of Tasmania who were wiped out by colonists. More commonly, speakers live on but abandon their language in favor of another vernacular, a widespread process that linguists refer to as “language shift” from which few languages are immune. Such trading up and out of a speech form occurs for complex political, cultural and economic reasons – sometimes voluntary for economic and educational reasons, although often amplified by state coercion or neglect. Welsh, long stigmatized and disparaged by the British state, has rebounded with vigor.
Many speakers of endangered, poorly documented languages have embraced new digital media with excitement. Speakers of previously exclusively oral tongues are turning to the web as a virtual space for languages to live on. Internet technology offers powerful ways for oral traditions and cultural practices to survive, even thrive, among increasingly mobile communities. I have watched as videos of traditional wedding ceremonies and songs are recorded on smartphones in London by Nepali migrants, then uploaded to YouTube and watched an hour later by relatives in remote Himalayan villages . . .
Globalization is regularly, and often uncritically, pilloried as a major threat to linguistic diversity. But in fact, globalization is as much process as it is ideology, certainly when it comes to language. The real forces behind cultural homogenization are unbending beliefs, exchanged through a globalized delivery system, reinforced by the historical monolingualism prevalent in much of the West.
Monolingualism – the condition of being able to speak only one language – is regularly accompanied by a deep-seated conviction in the value of that language over all others. Across the largest economies that make up the G8, being monolingual is still often the norm, with multilingualism appearing unusual and even somewhat exotic. The monolingual mindset stands in sharp contrast to the lived reality of most of the world, which throughout its history has been more multilingual than unilingual. Monolingualism, then, not globalization, should be our primary concern.
Multilingualism can help us live in a more connected and more interdependent world. By widening access to technology, globalization can support indigenous and scholarly communities engaged in documenting and protecting our shared linguistic heritage. For the last 5,000 years, the rise and fall of languages was intimately tied to the plow, sword and book. In our digital age, the keyboard, screen and web will play a decisive role in shaping the future linguistic diversity of our species.
CAT/2021.2(RC)
Question. 28
We can infer all of the following about indigenous languages from the passage EXCEPT that:
Comprehension
The passage below is accompanied by a set of questions. Choose the best answer to each question.
Many people believe that truth conveys power. . . . Hence sticking with the truth is the best strategy for gaining power. Unfortunately, this is just a comforting myth. In fact, truth and power have a far more complicated relationship, because in human society, power means two very different things.
On the one hand, power means having the ability to manipulate objective realities: to hunt animals, to construct bridges, to cure diseases, to build atom bombs. This kind of power is closely tied to truth. If you believe a false physical theory, you won’t be able to build an atom bomb. On the other hand, power also means having the ability to manipulate human beliefs, thereby getting lots of people to cooperate effectively. Building atom bombs requires not just a good understanding of physics, but also the coordinated labor of millions of humans. Planet Earth was conquered by Homo sapiens rather than by chimpanzees or elephants, because we are the only mammals that can cooperate in very large numbers. And large-scale cooperation depends on believing common stories. But these stories need not be true. You can unite millions of people by making them believe in completely fictional stories about God, about race or about economics. The dual nature of power and truth results in the curious fact that we humans know many more truths than any other animal, but we also believe in much more nonsense. . . .
When it comes to uniting people around a common story, fiction actually enjoys three inherent advantages over the truth. First, whereas the truth is universal, fictions tend to be local. Consequently if we want to distinguish our tribe from foreigners, a fictional story will serve as a far better identity marker than a true story. . . . The second huge advantage of fiction over truth has to do with the handicap principle, which says that reliable signals must be costly to the signaler. Otherwise, they can easily be faked by cheaters. . . . If political loyalty is signaled by believing a true story, anyone can fake it. But believing ridiculous and outlandish stories exacts a greater cost, and is, therefore, a better signal of loyalty. . . . Third, and most important, the truth is often painful and disturbing. Hence if you stick to unalloyed reality, few people will follow you. An American presidential candidate who tells the American public the truth, the whole truth and nothing but the truth about American history has a 100 percent guarantee of losing the elections. . . . An uncompromising adherence to the truth is an admirable spiritual practice, but it is not a winning political strategy. . . .
Even if we need to pay some price for deactivating our rational faculties, the advantages of increased social cohesion are often so big that fictional stories routinely triumph over the truth in human history. Scholars have known this for thousands of years, which is why scholars often had to decide whether they served the truth or social harmony. Should they aim to unite people by making sure everyone believes in the same fiction, or should they let people know the truth even at the price of disunity?
CAT/2021.2(RC)
Question. 29
The author would support none of the following statements about political power EXCEPT that:
Comprehension
The passage below is accompanied by a set of questions. Choose the best answer to each question.
Many people believe that truth conveys power. . . . Hence sticking with the truth is the best strategy for gaining power. Unfortunately, this is just a comforting myth. In fact, truth and power have a far more complicated relationship, because in human society, power means two very different things.
On the one hand, power means having the ability to manipulate objective realities: to hunt animals, to construct bridges, to cure diseases, to build atom bombs. This kind of power is closely tied to truth. If you believe a false physical theory, you won’t be able to build an atom bomb. On the other hand, power also means having the ability to manipulate human beliefs, thereby getting lots of people to cooperate effectively. Building atom bombs requires not just a good understanding of physics, but also the coordinated labor of millions of humans. Planet Earth was conquered by Homo sapiens rather than by chimpanzees or elephants, because we are the only mammals that can cooperate in very large numbers. And large-scale cooperation depends on believing common stories. But these stories need not be true. You can unite millions of people by making them believe in completely fictional stories about God, about race or about economics. The dual nature of power and truth results in the curious fact that we humans know many more truths than any other animal, but we also believe in much more nonsense. . . .
When it comes to uniting people around a common story, fiction actually enjoys three inherent advantages over the truth. First, whereas the truth is universal, fictions tend to be local. Consequently if we want to distinguish our tribe from foreigners, a fictional story will serve as a far better identity marker than a true story. . . . The second huge advantage of fiction over truth has to do with the handicap principle, which says that reliable signals must be costly to the signaler. Otherwise, they can easily be faked by cheaters. . . . If political loyalty is signaled by believing a true story, anyone can fake it. But believing ridiculous and outlandish stories exacts a greater cost, and is, therefore, a better signal of loyalty. . . . Third, and most important, the truth is often painful and disturbing. Hence if you stick to unalloyed reality, few people will follow you. An American presidential candidate who tells the American public the truth, the whole truth and nothing but the truth about American history has a 100 percent guarantee of losing the elections. . . . An uncompromising adherence to the truth is an admirable spiritual practice, but it is not a winning political strategy. . . .
Even if we need to pay some price for deactivating our rational faculties, the advantages of increased social cohesion are often so big that fictional stories routinely triumph over the truth in human history. Scholars have known this for thousands of years, which is why scholars often had to decide whether they served the truth or social harmony. Should they aim to unite people by making sure everyone believes in the same fiction, or should they let people know the truth even at the price of disunity?
CAT/2021.2(RC)
Question. 32
Regarding which one of the following quotes could we argue that the author overemphasizes the importance of fiction?
Comprehension
The passage below is accompanied by a set of questions. Choose the best answer to each question.
Starting in 1957, [Noam Chomsky] proclaimed a new doctrine: Language, that most human of all attributes, was innate. The grammatical faculty was built into the infant brain, and your average 3-year-old was not a mere apprentice in the great enterprise of absorbing English from his or her parents, but a “linguistic genius.” Since this message was couched in terms of Chomskyan theoretical linguistics, in discourse so opaque that it was nearly incomprehensible even to some scholars, many people did not hear it. Now, in a brilliant, witty and altogether satisfying book, Mr. Chomsky's colleague Steven Pinker . . . has brought Mr. Chomsky's findings to everyman. In “The Language Instinct” he has gathered persuasive data from such diverse fields as cognitive neuroscience, developmental psychology and speech therapy to make his points, and when he disagrees with Mr. Chomsky he tells you so. . . .
For Mr. Chomsky and Mr. Pinker, somewhere in the human brain there is a complex set of neural circuits that have been programmed with “super-rules” (making up what Mr. Chomsky calls “universal grammar”), and these rules are unconscious and instinctive. A half-century ago, this would have been pooh-poohed as a “black box” theory, since one could not actually pinpoint this grammatical faculty in a specific part of the brain, or describe its functioning. But now things are different. Neurosurgeons [have now found that this] “black box” is situated in and around Broca’s area, on the left side of the forebrain. . . .
Unlike Mr. Chomsky, Mr. Pinker firmly places the wiring of the brain for language within the framework of Darwinian natural selection and evolution. He effectively disposes of all claims that intelligent nonhuman primates like chimps have any abilities to learn and use language. It is not that chimps lack the vocal apparatus to speak; it is just that their brains are unable to produce or use grammar. On the other hand, the “language instinct,” when it first appeared among our most distant hominid ancestors, must have given them a selective reproductive advantage over their competitors (including the ancestral chimps). . . .
So according to Mr. Pinker, the roots of language must be in the genes, but there cannot be a “grammar gene” any more than there can be a gene for the heart or any other complex body structure. This proposition will undoubtedly raise the hackles of some behavioral psychologists and anthropologists, for it apparently contradicts the liberal idea that human behavior may be changed for the better by improvements in culture and environment, and it might seem to invite the twin bugaboos of biological determinism and racism. Yet Mr. Pinker stresses one point that should allay such fears. Even though there are 4,000 to 6,000 languages today, they are all sufficiently alike to be considered one language by an extraterrestrial observer. In other words, most of the diversity of the world’s cultures, so beloved to anthropologists, is superficial and minor compared to the similarities. Racial differences are literally only “skin deep.” The fundamental unity of humanity is the theme of Mr. Chomsky's universal grammar, and of this exciting book.
Comprehension
The passage below is accompanied by a set of questions. Choose the best answer to each question.
Starting in 1957, [Noam Chomsky] proclaimed a new doctrine: Language, that most human of all attributes, was innate. The grammatical faculty was built into the infant brain, and your average 3-year-old was not a mere apprentice in the great enterprise of absorbing English from his or her parents, but a “linguistic genius.” Since this message was couched in terms of Chomskyan theoretical linguistics, in discourse so opaque that it was nearly incomprehensible even to some scholars, many people did not hear it. Now, in a brilliant, witty and altogether satisfying book, Mr. Chomsky's colleague Steven Pinker . . . has brought Mr. Chomsky's findings to everyman. In “The Language Instinct” he has gathered persuasive data from such diverse fields as cognitive neuroscience, developmental psychology and speech therapy to make his points, and when he disagrees with Mr. Chomsky he tells you so. . . .
For Mr. Chomsky and Mr. Pinker, somewhere in the human brain there is a complex set of neural circuits that have been programmed with “super-rules” (making up what Mr. Chomsky calls “universal grammar”), and that these rules are unconscious and instinctive. A halfcentury ago, this would have been pooh-poohed as a “black box” theory, since one could not actually pinpoint this grammatical faculty in a specific part of the brain, or describe its functioning. But now things are different. Neurosurgeons [have now found that this] “black box” is situated in and around Broca’s area, on the left side of the forebrain. . . .
Unlike Mr. Chomsky, Mr. Pinker firmly places the wiring of the brain for language within the framework of Darwinian natural selection and evolution. He effectively disposes of all claims that intelligent nonhuman primates like chimps have any abilities to learn and use language. It is not that chimps lack the vocal apparatus to speak; it is just that their brains are unable to produce or use grammar. On the other hand, the “language instinct,” when it first appeared among our most distant hominid ancestors, must have given them a selective reproductive advantage over their competitors (including the ancestral chimps). . . .
So according to Mr. Pinker, the roots of language must be in the genes, but there cannot be a “grammar gene” any more than there can be a gene for the heart or any other complex body structure. This proposition will undoubtedly raise the hackles of some behavioral psychologists and anthropologists, for it apparently contradicts the liberal idea that human behavior may be changed for the better by improvements in culture and environment, and it might seem to invite the twin bugaboos of biological determinism and racism. Yet Mr. Pinker stresses one point that should allay such fears. Even though there are 4,000 to 6,000 languages today, they are all sufficiently alike to be considered one language by an extraterrestrial observer. In other words, most of the diversity of the world’s cultures, so beloved to anthropologists, is superficial and minor compared to the similarities. Racial differences are literally only “skin deep.” The fundamental unity of humanity is the theme of Mr. Chomsky's universal grammar, and of this exciting book.
CAT/2021.3(RC)
Question. 35
Which one of the following statements best summarises the author’s position about Pinker’s book?
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
The word ‘anarchy’ comes from the Greek anarkhia, meaning contrary to authority or without a ruler, and was used in a derogatory sense until 1840, when it was adopted by Pierre-Joseph Proudhon to describe his political and social ideology. Proudhon argued that organization without government was both possible and desirable. In the evolution of political ideas, anarchism can be seen as an ultimate projection of both liberalism and socialism, and the differing strands of anarchist thought can be related to their emphasis on one or the other of these.
Historically, anarchism arose not only as an explanation of the gulf between the rich and the poor in any community, and of the reason why the poor have been obliged to fight for their share of a common inheritance, but as a radical answer to the question ‘What went wrong?’ that followed the ultimate outcome of the French Revolution. It had ended not only with a reign of terror and the emergence of a newly rich ruling caste, but with a new adored emperor, Napoleon Bonaparte, strutting through his conquered territories.
The anarchists and their precursors were unique on the political Left in affirming that workers and peasants, grasping the chance that arose to bring an end to centuries of exploitation and tyranny, were inevitably betrayed by the new class of politicians, whose first priority was to re-establish a centralized state power. After every revolutionary uprising, usually won at a heavy cost for ordinary populations, the new rulers had no hesitation in applying violence and terror, a secret police, and a professional army to maintain their control.
For anarchists the state itself is the enemy, and they have applied the same interpretation to the outcome of every revolution of the 19th and 20th centuries. This is not merely because every state keeps a watchful and sometimes punitive eye on its dissidents, but because every state protects the privileges of the powerful.
The mainstream of anarchist propaganda for more than a century has been anarchist-communism, which argues that property in land, natural resources, and the means of production should be held in mutual control by local communities, federating for innumerable joint purposes with other communes. It differs from state socialism in opposing the concept of any central authority. Some anarchists prefer to distinguish between anarchist-communism and collectivist anarchism in order to stress the obviously desirable freedom of an individual or family to possess the resources needed for living, while not implying the right to own the resources needed by others. . . .
There are, unsurprisingly, several traditions of individualist anarchism, one of them deriving from the ‘conscious egoism’ of the German writer Max Stirner (1806–56), and another from a remarkable series of 19th-century American figures who argued that in protecting our own autonomy and associating with others for common advantages, we are promoting the good of all. These thinkers differed from free-market liberals in their absolute mistrust of American capitalism, and in their emphasis on mutualism.
CAT/2020.1(RC)
Question. 37
Which one of the following best expresses the similarity between American individualist anarchists and free-market liberals as well as the difference between the former and the latter?
Both are sophisticated arguments for capitalism; but the former argue for a morally upright capitalism, while the latter argue that the market is the only morality.
D : Both are founded on the moral principles of altruism; but the latter conceive of the market as a force too mystical for the former to comprehend.
CAT/2020.1(RC)
Question. 39
According to the passage, what is the one idea that is common to all forms of anarchism?
CAT/2020.1(RC)
Question. 40
The author believes that the new ruling class of politicians betrayed the principles of the French Revolution, but does not specify in what way. In the context of the passage, which statement below is the likeliest explanation of that betrayal?
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
Vocabulary used in speech or writing organizes itself in seven parts of speech (eight, if you count interjections such as Oh! and Gosh! and Fuhgeddaboudit!). Communication composed of these parts of speech must be organized by rules of grammar upon which we agree. When these rules break down, confusion and misunderstanding result. Bad grammar produces bad sentences. My favorite example from Strunk and White is this one: “As a mother of five, with another one on the way, my ironing board is always up.”
Nouns and verbs are the two indispensable parts of writing. Without one of each, no group of words can be a sentence, since a sentence is, by definition, a group of words containing a subject (noun) and a predicate (verb); these strings of words begin with a capital letter, end with a period, and combine to make a complete thought which starts in the writer’s head and then leaps to the reader’s.
Must you write complete sentences each time, every time? Perish the thought. If your work consists only of fragments and floating clauses, the Grammar Police aren’t going to come and take you away. Even William Strunk, that Mussolini of rhetoric, recognized the delicious pliability of language. “It is an old observation,” he writes, “that the best writers sometimes disregard the rules of rhetoric.” Yet he goes on to add this thought, which I urge you to consider: “Unless he is certain of doing well, [the writer] will probably do best to follow the rules.”
The telling clause here is Unless he is certain of doing well. If you don’t have a rudimentary grasp of how the parts of speech translate into coherent sentences, how can you be certain that you are doing well? How will you know if you’re doing ill, for that matter? The answer, of course, is that you can’t, you won’t. One who does grasp the rudiments of grammar finds a comforting simplicity at its heart, where there need be only nouns, the words that name, and verbs, the words that act.
Take any noun, put it with any verb, and you have a sentence. It never fails. Rocks explode. Jane transmits. Mountains float. These are all perfect sentences. Many such thoughts make little rational sense, but even the stranger ones (Plums deify!) have a kind of poetic weight that’s nice. The simplicity of noun-verb construction is useful—at the very least it can provide a safety net for your writing. Strunk and White caution against too many simple sentences in a row, but simple sentences provide a path you can follow when you fear getting lost in the tangles of rhetoric—all those restrictive and nonrestrictive clauses, those modifying phrases, those appositives and compound-complex sentences. If you start to freak out at the sight of such unmapped territory (unmapped by you, at least), just remind yourself that rocks explode, Jane transmits, mountains float, and plums deify. Grammar is . . . the pole you grab to get your thoughts up on their feet and walking.
CAT/2020.1(RC)
Question. 42
Which one of the following quotes best captures the main concern of the passage?
“Bad grammar produces bad sentences”
C : “The telling clause here is Unless he is certain of doing well.”
D : “Strunk and White caution against too many simple sentences in a row, but simple sentences provide a path you can follow when you fear getting lost in the tangles of rhetoric . . .”
CAT/2020.1(RC)
Question. 43
Which one of the following statements, if false, could be seen as supporting the arguments in the passage?
CAT/2020.1(RC)
Question. 44
All of the following statements can be inferred from the passage EXCEPT that:
CAT/2020.1(RC)
Question. 46
Inferring from the passage, the author could be most supportive of which one of the following practices?
Comprehension
The passage below is accompanied by a set of questions. Choose the best answer to each question.
The claims advanced here may be condensed into two assertions: [first, that visual] culture is what images, acts of seeing, and attendant intellectual, emotional, and perceptual sensibilities do to build, maintain, or transform the worlds in which people live. [And second, that the] study of visual culture is the analysis and interpretation of images and the ways of seeing (or gazes) that configure the agents, practices, conceptualities, and institutions that put images to work. . . .
Accordingly, the study of visual culture should be characterized by several concerns. First, scholars of visual culture need to examine any and all imagery – high and low, art and nonart. . . . They must not restrict themselves to objects of a particular beauty or aesthetic value. Indeed, any kind of imagery may be found to offer up evidence of the visual construction of reality. . . .
Second, the study of visual culture must scrutinize visual practice as much as images themselves, asking what images do when they are put to use. If scholars engaged in this enterprise inquire what makes an image beautiful or why this image or that constitutes a masterpiece or a work of genius, they should do so with the purpose of investigating an artist’s or a work’s contribution to the experience of beauty, taste, value, or genius. No amount of social analysis can account fully for the existence of Michelangelo or Leonardo. They were unique creators of images that changed the way their contemporaries thought and felt and have continued to shape the history of art, artists, museums, feeling, and aesthetic value. But study of the critical, artistic, and popular reception of works by such artists as Michelangelo and Leonardo can shed important light on the meaning of these artists and their works for many different people. And the history of meaning-making has a great deal to do with how scholars as well as lay audiences today understand these artists and their achievements.
Third, scholars studying visual culture might properly focus their interpretative work on lifeworlds by examining images, practices, visual technologies, taste, and artistic style as constitutive of social relations. The task is to understand how artifacts contribute to the construction of a world. . . . Important methodological implications follow: ethnography and reception studies become productive forms of gathering information, since these move beyond the image as a closed and fixed meaning-event. . . .
Fourth, scholars may learn a great deal when they scrutinize the constituents of vision, that is, the structures of perception as a physiological process as well as the epistemological frameworks informing a system of visual representation. Vision is a socially and a biologically constructed operation, depending on the design of the human body and how it engages the interpretive devices developed by a culture in order to see intelligibly. . . . Seeing . . . operates on the foundation of covenants with images that establish the conditions for meaningful visual experience.
Finally, the scholar of visual culture seeks to regard images as evidence for explanation, not as epiphenomena.
CAT/2020.2(RC)
Question. 48
All of the following statements may be considered valid inferences from the passage, EXCEPT:
CAT/2020.2(RC)
Question. 49
“No amount of social analysis can account fully for the existence of Michelangelo or Leonardo.” In light of the passage, which one of the following interpretations of this sentence is the most accurate?
CAT/2020.2(RC)
Question. 50
“Seeing . . . operates on the foundation of covenants with images that establish the conditions for meaningful visual experience.” In light of the passage, which one of the following statements best conveys the meaning of this sentence?
CAT/2020.2(RC)
Question. 51
Which set of keywords below most closely captures the arguments of the passage?
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
Aggression is any behavior that is directed toward injuring, harming, or inflicting pain on another living being or group of beings. Generally, the victim(s) of aggression must wish to avoid such behavior in order for it to be considered true aggression. Aggression is also categorized according to its ultimate intent. Hostile aggression is an aggressive act that results from anger, and is intended to inflict pain or injury because of that anger. Instrumental aggression is an aggressive act that is regarded as a means to an end other than pain or injury. For example, an enemy combatant may be subjected to torture in order to extract useful intelligence, though those inflicting the torture may have no real feelings of anger or animosity toward their subject. The concept of aggression is very broad, and includes many categories of behavior (e.g., verbal aggression, street crime, child abuse, spouse abuse, group conflict, war, etc.). A number of theories and models of aggression have arisen to explain these diverse forms of behavior, and these theories/models tend to be categorized according to their specific focus. The most common system of categorization groups the various approaches to aggression into three separate areas, based upon the three key variables that are present whenever any aggressive act or set of acts is committed. The first variable is the aggressor him/herself. The second is the social situation or circumstance in which the aggressive act(s) occur. The third variable is the target or victim of aggression.
Regarding theories and research on the aggressor, the fundamental focus is on the factors that lead an individual (or group) to commit aggressive acts. At the most basic level, some argue that aggressive urges and actions are the result of inborn, biological factors. Sigmund Freud (1930) proposed that all individuals are born with a death instinct that predisposes us to a variety of aggressive behaviors, including suicide (self directed aggression) and mental illness (possibly due to an unhealthy or unnatural suppression of aggressive urges). Other influential perspectives supporting a biological basis for aggression conclude that humans evolved with an abnormally low neural inhibition of aggressive impulses (in comparison to other species), and that humans possess a powerful instinct for property accumulation and territorialism. It is proposed that this instinct accounts for hostile behaviors ranging from minor street crime to world wars. Hormonal factors also appear to play a significant role in fostering aggressive tendencies. For example, the hormone testosterone has been shown to increase aggressive behaviors when injected into animals. Men and women convicted of violent crimes also possess significantly higher levels of testosterone than men and women convicted of non violent crimes. Numerous studies comparing different age groups, racial/ethnic groups, and cultures also indicate that men, overall, are more likely to engage in a variety of aggressive behaviors (e.g., sexual assault, aggravated assault, etc.) than women. One explanation for higher levels of aggression in men is based on the assumption that, on average, men have higher levels of testosterone than women.
CAT/2020.2(RC)
Question. 52
“[A]n enemy combatant may be subjected to torture in order to extract useful intelligence, though those inflicting the torture may have no real feelings of anger or animosity toward their subject.” Which one of the following best explicates the larger point being made by the author here?
CAT/2020.2(RC)
Question. 53
All of the following statements can be seen as logically implied by the arguments of the passage EXCEPT:
The Freudian theory of suicide as self-inflicted aggression implies that an aggressive act need not be sought to be avoided in order for it to be considered aggression.
C : Freud’s theory of aggression proposes that aggression results from the suppression of aggressive urges.
D : If the alleged aggressive act is not sought to be avoided, it cannot really be considered aggression.
CAT/2020.2(RC)
Question. 55
The author discusses all of the following arguments in the passage EXCEPT that:
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly.
Although one of the most contested concepts in political philosophy, human nature is something on which most people seem to agree. By and large, according to Rutger Bregman in his new book Humankind, we have a rather pessimistic view – not of ourselves exactly, but of everyone else. We see other people as selfish, untrustworthy and dangerous and therefore we behave towards them with defensiveness and suspicion. This was how the 17th-century philosopher Thomas Hobbes conceived our natural state to be, believing that all that stood between us and violent anarchy was a strong state and firm leadership.
But in following Hobbes, argues Bregman, we ensure that the negative view we have of human nature is reflected back at us. He instead puts his faith in Jean-Jacques Rousseau, the 18th-century French thinker, who famously declared that man was born free and it was civilisation – with its coercive powers, social classes and restrictive laws – that put him in chains.
Hobbes and Rousseau are seen as the two poles of the human nature argument and it’s no surprise that Bregman strongly sides with the Frenchman. He takes Rousseau’s intuition and paints a picture of a prelapsarian idyll in which, for the better part of 300,000 years, Homo sapiens lived a fulfilling life in harmony with nature . . . Then we discovered agriculture and for the next 10,000 years it was all property, war, greed and injustice. . . .
It was abandoning our nomadic lifestyle and then domesticating animals, says Bregman, that brought about infectious diseases such as measles, smallpox, tuberculosis, syphilis, malaria, cholera and plague. This may be true, but what Bregman never really seems to get to grips with is that pathogens were not the only things that grew with agriculture – so did the number of humans. It’s one thing to maintain friendly relations and a property-less mode of living when you’re 30 or 40 hunter-gatherers following the food. But life becomes a great deal more complex and knowledge far more extensive when there are settlements of many thousands.
“Civilisation has become synonymous with peace and progress and wilderness with war and decline,” writes Bregman. “In reality, for most of human existence, it was the other way around.” Whereas traditional history depicts the collapse of civilisations as “dark ages” in which everything gets worse, modern scholars, he claims, see them more as a reprieve, in which the enslaved gain their freedom and culture flourishes. Like much else in this book, the truth is probably somewhere between the two stated positions.
In any case, the fear of civilisational collapse, Bregman believes, is unfounded. It’s the result of what the Dutch biologist Frans de Waal calls “veneer theory” – the idea that just below the surface, our bestial nature is waiting to break out. . . . There’s a great deal of reassuring human decency to be taken from this bold and thought-provoking book and a wealth of evidence in support of the contention that the sense of who we are as a species has been deleteriously distorted. But it seems equally misleading to offer the false choice of Rousseau and Hobbes when, clearly, humanity encompasses both.
CAT/2020.3(RC)
Question. 59
According to the author, the main reason why Bregman contrasts life in pre-agricultural societies with agricultural societies is to:
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly.
[There is] a curious new reality: Human contact is becoming a luxury good. As more screens appear in the lives of the poor, screens are disappearing from the lives of the rich. The richer you are, the more you spend to be off-screen. . . .
The joy — at least at first — of the internet revolution was its democratic nature. Facebook is the same Facebook whether you are rich or poor. Gmail is the same Gmail. And it’s all free. There is something mass market and unappealing about that. And as studies show that time on these advertisement-support platforms is unhealthy, it all starts to seem déclassé, like drinking soda or smoking cigarettes, which wealthy people do less than poor people. The wealthy can afford to opt out of having their data and their attention sold as a product. The poor and middle class don’t have the same kind of resources to make that happen.
Screen exposure starts young. And children who spent more than two hours a day looking at a screen got lower scores on thinking and language tests, according to early results of a landmark study on brain development of more than 11,000 children that the National Institutes of Health is supporting. Most disturbingly, the study is finding that the brains of children who spend a lot of time on screens are different. For some kids, there is premature thinning of their cerebral cortex. In adults, one study found an association between screen time and depression. . . .
Tech companies worked hard to get public schools to buy into programs that required schools to have one laptop per student, arguing that it would better prepare children for their screen-based future. But this idea isn’t how the people who actually build the screen-based future raise their own children. In Silicon Valley, time on screens is increasingly seen as unhealthy. Here, the popular elementary school is the local Waldorf School, which promises a back-to-nature, nearly screen-free education. So as wealthy kids are growing up with less screen time, poor kids are growing up with more. How comfortable someone is with human engagement could become a new class marker.
Human contact is, of course, not exactly like organic food . . . . But with screen time, there has been a concerted effort on the part of Silicon Valley behemoths to confuse the public. The poor and the middle class are told that screens are good and important for them and their children. There are fleets of psychologists and neuroscientists on staff at big tech companies working to hook eyes and minds to the screen as fast as possible and for as long as possible. And so human contact is rare. . . .
There is a small movement to pass a “right to disconnect” bill, which would allow workers to turn their phones off, but for now a worker can be punished for going offline and not being available. There is also the reality that in our culture of increasing isolation, in which so many of the traditional gathering places and social structures have disappeared, screens are filling a crucial void.
CAT/2020.3(RC)
Question. 61
The statement “The richer you are, the more you spend to be off-screen” is supported by which other line from the passage?
CAT/2020.3(RC)
Question. 63
The author claims that Silicon Valley tech companies have tried to “confuse the public” by:
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
In the past, credit for telling the tale of Aladdin has often gone to Antoine Galland . . . the first European translator of . . . Arabian Nights [which] started as a series of translations of an incomplete manuscript of a medieval Arabic story collection. . . But, though those tales were of medieval origin, Aladdin may be a more recent invention. Scholars have not found a manuscript of the story that predates the version published in 1712 by Galland, who wrote in his diary that he first heard the tale from a Syrian storyteller from Aleppo named Hanna Diyab . . .
Despite the fantastical elements of the story, scholars now think the main character may actually be based on a real person’s real experiences. . . . Though Galland never credited Diyab in his published translations of the Arabian Nights stories, Diyab wrote something of his own: a travelogue penned in the mid-18th century. In it, he recalls telling Galland the story of Aladdin [and] describes his own hard-knocks upbringing and the way he marveled at the extravagance of Versailles. The descriptions he uses were very similar to the descriptions of the lavish palace that ended up in Galland’s version of the Aladdin story. [Therefore, author Paulo Lemos] Horta believes that “Aladdin might be the young Arab Maronite from Aleppo, marveling at the jewels and riches of Versailles.” ...
For 300 years, scholars thought that the rags-to-riches story of Aladdin might have been inspired by the plots of French fairy tales that came out around the same time, or that the story was invented in that 18th century period as a byproduct of French Orientalism, a fascination with stereotypical exotic Middle Eastern luxuries that was prevalent then. The idea that Diyab might have based it on his own life — the experiences of a Middle Eastern man encountering the French, not vice-versa — flips the script. [According to Horta,] “Diyab was ideally placed to embody the overlapping world of East and West, blending the storytelling traditions of his homeland with his youthful observations of the wonder of 18th-century France.” . . .
To the scholars who study the tale, its narrative drama isn’t the only reason storytellers keep finding reason to return to Aladdin. It reflects not only “a history of the French and the Middle East, but also [a story about] Middle Easterners coming to Paris and that speaks to our world today,” as Horta puts it. “The day Diyab told the story of Aladdin to Galland, there were riots due to food shortages during the winter and spring of 1708 to 1709, and Diyab was sensitive to those people in a way that Galland is not. When you read this diary, you see this solidarity among the Arabs who were in Paris at the time. . . . There is little in the writings of Galland that would suggest that he was capable of developing a character like Aladdin with sympathy, but Diyab’s memoir reveals a narrator adept at capturing the distinctive psychology of a young protagonist, as well as recognizing the kinds of injustices and opportunities that can transform the path of any youthful adventurer.”
CAT/2019.1(RC)
Question. 65
The author of the passage is most likely to agree with which of the following explanations for the origins of the story of Aladdin?
CAT/2019.1(RC)
Question. 66
Which of the following, if true, would invalidate the inversion that the phrase “flips the script” refers to?
CAT/2019.1(RC)
Question. 67
Which of the following is the primary reason for why storytellers are still fascinated by the story of Aladdin?
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
As defined by the geographer Yi-Fu Tuan, topophilia is the affective bond between people and place. His 1974 book set forth a wide-ranging exploration of how the emotive ties with the material environment vary greatly from person to person and in intensity, subtlety, and mode of expression. Factors influencing one’s depth of response to the environment include cultural background, gender, race, and historical circumstance, and Tuan also argued that there is a biological and sensory element. Topophilia might not be the strongest of human emotions—indeed, many people feel utterly indifferent toward the environments that shape their lives—but when activated it has the power to elevate a place to become the carrier of emotionally charged events or to be perceived as a symbol.
Aesthetic appreciation is one way in which people respond to the environment. A brilliantly colored rainbow after gloomy afternoon showers, a busy city street alive with human interaction—one might experience the beauty of such landscapes that had seemed quite ordinary only moments before or that are being newly discovered. This is quite the opposite of a second topophilic bond, namely that of the acquired taste for certain landscapes and places that one knows well. When a place is home, or when a space has become the locus of memories or the means of gaining a livelihood, it frequently evokes a deeper set of attachments than those predicated purely on the visual. A third response to the environment also depends on the human senses but may be tactile and olfactory, namely a delight in the feel and smell of air, water, and the earth.
Topophilia—and its very close conceptual twin, sense of place—is an experience that, however elusive, has inspired recent architects and planners. Most notably, new urbanism seeks to counter the perceived placelessness of modern suburbs and the decline of central cities through neo-traditional design motifs. Although motivated by good intentions, such attempts to create places rich in meaning are perhaps bound to disappoint. As Tuan noted, purely aesthetic responses often are suddenly revealed, but their intensity rarely is long-lasting. Topophilia is difficult to design for and impossible to quantify, and its most articulate interpreters have been self-reflective philosophers such as Henry David Thoreau, evoking a marvelously intricate sense of place at Walden Pond, and Tuan, describing his deep affinity for the desert.
Topophilia connotes a positive relationship, but it often is useful to explore the darker affiliations between people and place. Patriotism, literally meaning the love of one’s terra patria or homeland, has long been cultivated by governing elites for a range of nationalist projects, including war preparation and ethnic cleansing. Residents of upscale residential developments have disclosed how important it is to maintain their community’s distinct identity, often by casting themselves in a superior social position and by reinforcing class and racial differences. And just as a beloved landscape is suddenly revealed, so too may landscapes of fear cast a dark shadow over a place that makes one feel a sense of dread or anxiety—or topophobia.
CAT/2019.1(RC)
Question. 70
In the last paragraph, the author uses the example of “Residents of upscale residential developments” to illustrate the:
CAT/2019.1(RC)
Question. 71
Which one of the following best captures the meaning of the statement, “Topophilia is difficult to design for and impossible to quantify . . .”?
CAT/2019.1(RC)
Question. 72
Which one of the following comes closest in meaning to the author’s understanding of topophilia?
CAT/2019.1(RC)
Question. 73
Which of the following statements, if true, could be seen as not contradicting the arguments in the passage?
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
War, natural disasters and climate change are destroying some of the world's most precious cultural sites. Google is trying to help preserve these archaeological wonders by allowing users access to 3D images of these treasures through its site.
But the project is raising questions about Google's motivations and about who should own the digital copyrights. Some critics call it a form of "digital colonialism."
When it comes to archaeological treasures, the losses have been mounting. ISIS blew up parts of the ancient city of Palmyra in Syria and an earthquake hit Bagan, an ancient city in Myanmar, damaging dozens of temples, in 2016. In the past, all archaeologists and historians had for restoration and research were photos, drawings, remnants and intuition.
But that's changing. Before the earthquake at Bagan, many of the temples on the site were scanned. . . . [These] scans . . . are on Google's Arts & Culture site. The digital renditions allow viewers to virtually wander the halls of the temple, look up-close at paintings and turn the building over, to look up at its chambers. . . . [Google Arts & Culture] works with museums and other nonprofits . . . to put high-quality images online.
The images of the temples in Bagan are part of a collaboration with CyArk, a nonprofit that creates the 3D scanning of historic sites. . . . Google . . . says [it] doesn't make money off this website, but it fits in with Google's mission to make the world's information available and useful.
Critics say the collaboration could be an attempt by a large corporation to wrap itself in the sheen of culture. Ethan Watrall, an archaeologist, professor at Michigan State University and a member of the Society for American Archaeology, says he's not comfortable with the arrangement between CyArk and Google. . . . Watrall says this project is just a way for Google to promote Google. "They want to make this material accessible so people will browse it and be filled with wonder by it," he says. "But at its core, it's all about advertisements and driving traffic." Watrall says these images belong on the site of a museum or educational institution, where there is serious scholarship and a very different mission. . . .
[There's] another issue for some archaeologists and art historians. CyArk owns the copyrights of the scans — not the countries where these sites are located. That means the countries need CyArk's permission to use these images for commercial purposes.
Erin Thompson, a professor of art crime at John Jay College of Criminal Justice in New York City, says it's the latest example of a Western nation appropriating a foreign culture, a centuries-long battle. . . . CyArk says it copyrights the scans so no one can use them in an inappropriate way. The company says it works closely with authorities during the process, even training local people to help. But critics like Thompson are not persuaded. . . . She would prefer the scans to be owned by the countries and people where these sites are located.
CAT/2019.2(RC)
Question. 74
Based on his views mentioned in the passage, one could best characterise Dr. Watrall as being:
CAT/2019.2(RC)
Question. 75
By “digital colonialism”, critics of the CyArk–Google project are referring to the fact that:
CAT/2019.2(RC)
Question. 76
Which of the following, if true, would most strongly invalidate Dr. Watrall’s objections?
CAT/2019.2(RC)
Question. 78
Of the following arguments, which one is LEAST likely to be used by the companies that digitally scan cultural sites?
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
British colonial policy . . . went through two policy phases, or at least there were two strategies between which its policies actually oscillated, sometimes to its great advantage. At first, the new colonial apparatus exercised caution, and occupied India by a mix of military power and subtle diplomacy, the high ground in the middle of the circle of circles. This, however, pushed them into contradictions. For, whatever their sense of the strangeness of the country and the thinness of colonial presence, the British colonial state represented the great conquering discourse of Enlightenment rationalism, entering India precisely at the moment of its greatest unchecked arrogance. As inheritors and representatives of this discourse, which carried everything before it, this colonial state could hardly adopt for long such a self-denying attitude. It had restructured everything in Europe—the productive system, the political regimes, the moral and cognitive orders—and would do the same in India, particularly as some empirically inclined theorists of that generation considered the colonies a massive laboratory of utilitarian or other theoretical experiments. Consequently, the colonial state could not settle simply for eminence at the cost of its marginality; it began to take initiatives to introduce the logic of modernity into Indian society. But this modernity did not enter a passive society. Sometimes, its initiatives were resisted by pre-existing structural forms. At times, there was a more direct form of collective resistance. Therefore the map of continuity and discontinuity that this state left behind at the time of independence was rather complex and has to be traced with care.
Most significantly, of course, initiatives for . . . modernity came to assume an external character. The acceptance of modernity came to be connected, ineradicably, with subjection. This again points to two different problems, one theoretical, the other political. Theoretically, because modernity was externally introduced, it is explanatorily unhelpful to apply the logical format of the ‘transition process’ to this pattern of change. Such a logical format would be wrong on two counts. First, however subtly, it would imply that what was proposed to be built was something like European capitalism. (And, in any case, historians have forcefully argued that what it was to replace was not like feudalism, with or without modificatory adjectives.) But, more fundamentally, the logical structure of endogenous change does not apply here. Here transformation agendas attack as an external force. This externality is not something that can be casually mentioned and forgotten. It is inscribed on every move, every object, every proposal, every legislative act, each line of causality. It comes to be marked on the epoch itself. This repetitive emphasis on externality should not be seen as a nationalist initiative that is so well rehearsed in Indian social science. . . .
Quite apart from the externality of the entire historical proposal of modernity, some of its contents were remarkable. . . . Economic reforms, or rather alterations . . . did not foreshadow the construction of a classical capitalist economy, with its necessary emphasis on extractive and transport sectors. What happened was the creation of a degenerate version of capitalism—what early dependency theorists called the ‘development of underdevelopment’.
CAT/2019.2(RC)
Question. 80
All of the following statements, if true, could be seen as supporting the arguments in the passage, EXCEPT:
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
British colonial policy . . . went through two policy phases, or at least there were two strategies between which its policies actually oscillated, sometimes to its great advantage. At first, the new colonial apparatus exercised caution, and occupied India by a mix of military power and subtle diplomacy, the high ground in the middle of the circle of circles. This, however, pushed them into contradictions. For, whatever their sense of the strangeness of the country and the thinness of colonial presence, the British colonial state represented the great conquering discourse of Enlightenment rationalism, entering India precisely at the moment of its greatest unchecked arrogance. As inheritors and representatives of this discourse, which carried everything before it, this colonial state could hardly adopt for long such a self-denying attitude. It had restructured everything in Europe—the productive system, the political regimes, the moral and cognitive orders—and would do the same in India, particularly as some empirically inclined theorists of that generation considered the colonies a massive laboratory of utilitarian or other theoretical experiments. Consequently, the colonial state could not settle simply for eminence at the cost of its marginality; it began to take initiatives to introduce the logic of modernity into Indian society. But this modernity did not enter a passive society. Sometimes, its initiatives were resisted by pre-existing structural forms. At times, there was a more direct form of collective resistance. Therefore the map of continuity and discontinuity that this state left behind at the time of independence was rather complex and has to be traced with care.
Most significantly, of course, initiatives for . . . modernity came to assume an external character. The acceptance of modernity came to be connected, ineradicably, with subjection. This again points to two different problems, one theoretical, the other political. Theoretically, because modernity was externally introduced, it is explanatorily unhelpful to apply the logical format of the ‘transition process’ to this pattern of change. Such a logical format would be wrong on two counts. First, however subtly, it would imply that what was proposed to be built was something like European capitalism. (And, in any case, historians have forcefully argued that what it was to replace was not like feudalism, with or without modificatory adjectives.) But, more fundamentally, the logical structure of endogenous change does not apply here. Here transformation agendas attack as an external force. This externality is not something that can be casually mentioned and forgotten. It is inscribed on every move, every object, every proposal, every legislative act, each line of causality. It comes to be marked on the epoch itself. This repetitive emphasis on externality should not be seen as a nationalist initiative that is so well rehearsed in Indian social science. . . .
Quite apart from the externality of the entire historical proposal of modernity, some of its contents were remarkable. . . . Economic reforms, or rather alterations . . . did not foreshadow the construction of a classical capitalist economy, with its necessary emphasis on extractive and transport sectors. What happened was the creation of a degenerate version of capitalism—what early dependency theorists called the ‘development of underdevelopment’.
CAT/2019.2(RC)
Question. 81
“Consequently, the colonial state could not settle simply for eminence at the cost of its marginality; it began to take initiatives to introduce the logic of modernity into Indian society.” Which of the following best captures the sense of this statement?
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
British colonial policy . . . went through two policy phases, or at least there were two strategies between which its policies actually oscillated, sometimes to its great advantage. At first, the new colonial apparatus exercised caution, and occupied India by a mix of military power and subtle diplomacy, the high ground in the middle of the circle of circles. This, however, pushed them into contradictions. For, whatever their sense of the strangeness of the country and the thinness of colonial presence, the British colonial state represented the great conquering discourse of Enlightenment rationalism, entering India precisely at the moment of its greatest unchecked arrogance. As inheritors and representatives of this discourse, which carried everything before it, this colonial state could hardly adopt for long such a self-denying attitude. It had restructured everything in Europe—the productive system, the political regimes, the moral and cognitive orders—and would do the same in India, particularly as some empirically inclined theorists of that generation considered the colonies a massive laboratory of utilitarian or other theoretical experiments. Consequently, the colonial state could not settle simply for eminence at the cost of its marginality; it began to take initiatives to introduce the logic of modernity into Indian society. But this modernity did not enter a passive society. Sometimes, its initiatives were resisted by pre-existing structural forms. At times, there was a more direct form of collective resistance. Therefore the map of continuity and discontinuity that this state left behind at the time of independence was rather complex and has to be traced with care.
Most significantly, of course, initiatives for . . . modernity came to assume an external character. The acceptance of modernity came to be connected, ineradicably, with subjection. This again points to two different problems, one theoretical, the other political. Theoretically, because modernity was externally introduced, it is explanatorily unhelpful to apply the logical format of the ‘transition process’ to this pattern of change. Such a logical format would be wrong on two counts. First, however subtly, it would imply that what was proposed to be built was something like European capitalism. (And, in any case, historians have forcefully argued that what it was to replace was not like feudalism, with or without modificatory adjectives.) But, more fundamentally, the logical structure of endogenous change does not apply here. Here transformation agendas attack as an external force. This externality is not something that can be casually mentioned and forgotten. It is inscribed on every move, every object, every proposal, every legislative act, each line of causality. It comes to be marked on the epoch itself. This repetitive emphasis on externality should not be seen as a nationalist initiative that is so well rehearsed in Indian social science. . . .
Quite apart from the externality of the entire historical proposal of modernity, some of its contents were remarkable. . . . Economic reforms, or rather alterations . . . did not foreshadow the construction of a classical capitalist economy, with its necessary emphasis on extractive and transport sectors. What happened was the creation of a degenerate version of capitalism—what early dependency theorists called the ‘development of underdevelopment’.
CAT/2019.2(RC)
Question. 82
Which one of the following 5-word sequences best captures the flow of the arguments in the passage?
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
British colonial policy . . . went through two policy phases, or at least there were two strategies between which its policies actually oscillated, sometimes to its great advantage. At first, the new colonial apparatus exercised caution, and occupied India by a mix of military power and subtle diplomacy, the high ground in the middle of the circle of circles. This, however, pushed them into contradictions. For, whatever their sense of the strangeness of the country and the thinness of colonial presence, the British colonial state represented the great conquering discourse of Enlightenment rationalism, entering India precisely at the moment of its greatest unchecked arrogance. As inheritors and representatives of this discourse, which carried everything before it, this colonial state could hardly adopt for long such a self-denying attitude. It had restructured everything in Europe—the productive system, the political regimes, the moral and cognitive orders—and would do the same in India, particularly as some empirically inclined theorists of that generation considered the colonies a massive laboratory of utilitarian or other theoretical experiments. Consequently, the colonial state could not settle simply for eminence at the cost of its marginality; it began to take initiatives to introduce the logic of modernity into Indian society. But this modernity did not enter a passive society. Sometimes, its initiatives were resisted by pre-existing structural forms. At times, there was a more direct form of collective resistance. Therefore the map of continuity and discontinuity that this state left behind at the time of independence was rather complex and has to be traced with care.
Most significantly, of course, initiatives for . . . modernity came to assume an external character. The acceptance of modernity came to be connected, ineradicably, with subjection. This again points to two different problems, one theoretical, the other political. Theoretically, because modernity was externally introduced, it is explanatorily unhelpful to apply the logical format of the ‘transition process’ to this pattern of change. Such a logical format would be wrong on two counts. First, however subtly, it would imply that what was proposed to be built was something like European capitalism. (And, in any case, historians have forcefully argued that what it was to replace was not like feudalism, with or without modificatory adjectives.) But, more fundamentally, the logical structure of endogenous change does not apply here. Here transformation agendas attack as an external force. This externality is not something that can be casually mentioned and forgotten. It is inscribed on every move, every object, every proposal, every legislative act, each line of causality. It comes to be marked on the epoch itself. This repetitive emphasis on externality should not be seen as a nationalist initiative that is so well rehearsed in Indian social science. . . .
Quite apart from the externality of the entire historical proposal of modernity, some of its contents were remarkable. . . . Economic reforms, or rather alterations . . . did not foreshadow the construction of a classical capitalist economy, with its necessary emphasis on extractive and transport sectors. What happened was the creation of a degenerate version of capitalism—what early dependency theorists called the ‘development of underdevelopment’.
CAT/2019.2(RC)
Question. 83
Which of the following observations is a valid conclusion to draw from the author’s statement that “the logical structure of endogenous change does not apply here. Here transformation agendas attack as an external force”?
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
The Indian government has announced an international competition to design a National War Memorial in New Delhi, to honour all of the Indian soldiers who served in the various wars and counter-insurgency campaigns from 1947 onwards. The terms of the competition also specified that the new structure would be built adjacent to the India Gate – a memorial to the Indian soldiers who died in the First World War. Between the old imperialist memorial and the proposed nationalist one, India’s contribution to the Second World War is airbrushed out of existence.
The Indian government’s conception of the war memorial was not merely absentminded. Rather, it accurately reflected the fact that both academic history and popular memory have yet to come to terms with India’s Second World War, which continues to be seen as little more than mood music in the drama of India’s advance towards independence and partition in 1947. Further, the political trajectory of the postwar subcontinent has militated against popular remembrance of the war. With partition and the onset of the India-Pakistan rivalry, both of the new nations needed fresh stories for self-legitimisation rather than focusing on shared wartime experiences.
However, the Second World War played a crucial role in both the independence and partition of India. The Indian army recruited, trained and deployed some 2.5 million men, almost 90,000 of whom were killed and many more injured. Even at the time, it was recognised as the largest volunteer force in the war.
India’s material and financial contribution to the war was equally significant. India emerged as a major military-industrial and logistical base for Allied operations in south-east Asia and the Middle East. This led the United States to take considerable interest in the country’s future, and ensured that this was no longer the preserve of the British government. Other wartime developments pointed in the direction of India’s independence. In a stunning reversal of its long-standing financial relationship with Britain, India finished the war as one of the largest creditors to the imperial power.
Such extraordinary mobilization for war was achieved at great human cost, with the Bengal famine the most extreme manifestation of widespread wartime deprivation. The costs on India’s home front must be counted in millions of lives.
Indians signed up to serve on the war and home fronts for a variety of reasons. Many were convinced that their contribution would open the doors to India’s freedom. The political and social churn triggered by the war was evident in the massive waves of popular protest and unrest that washed over rural and urban India in the aftermath of the conflict. This turmoil was crucial in persuading the Attlee government to rid itself of the incubus of ruling India. Seventy years on, it is time that India engaged with the complex legacies of the Second World War. Bringing the war into the ambit of the new national memorial would be a fitting – if not overdue – recognition that this was India’s War.
CAT/2018.1(RC)
Question. 85
The author lists all of the following as outcomes of the Second World War EXCEPT:
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
The Indian government has announced an international competition to design a National War Memorial in New Delhi, to honour all of the Indian soldiers who served in the various wars and counter-insurgency campaigns from 1947 onwards. The terms of the competition also specified that the new structure would be built adjacent to the India Gate – a memorial to the Indian soldiers who died in the First World War. Between the old imperialist memorial and the proposed nationalist one, India’s contribution to the Second World War is airbrushed out of existence.
The Indian government’s conception of the war memorial was not merely absentminded. Rather, it accurately reflected the fact that both academic history and popular memory have yet to come to terms with India’s Second World War, which continues to be seen as little more than mood music in the drama of India’s advance towards independence and partition in 1947. Further, the political trajectory of the postwar subcontinent has militated against popular remembrance of the war. With partition and the onset of the India-Pakistan rivalry, both of the new nations needed fresh stories for self-legitimisation rather than focusing on shared wartime experiences.
However, the Second World War played a crucial role in both the independence and partition of India. The Indian army recruited, trained and deployed some 2.5 million men, almost 90,000 of whom were killed and many more injured. Even at the time, it was recognised as the largest volunteer force in the war.
India’s material and financial contribution to the war was equally significant. India emerged as a major military-industrial and logistical base for Allied operations in south-east Asia and the Middle East. This led the United States to take considerable interest in the country’s future, and ensured that this was no longer the preserve of the British government. Other wartime developments pointed in the direction of India’s independence. In a stunning reversal of its long-standing financial relationship with Britain, India finished the war as one of the largest creditors to the imperial power.
Such extraordinary mobilization for war was achieved at great human cost, with the Bengal famine the most extreme manifestation of widespread wartime deprivation. The costs on India’s home front must be counted in millions of lives.
Indians signed up to serve on the war and home fronts for a variety of reasons. Many were convinced that their contribution would open the doors to India’s freedom. The political and social churn triggered by the war was evident in the massive waves of popular protest and unrest that washed over rural and urban India in the aftermath of the conflict. This turmoil was crucial in persuading the Attlee government to rid itself of the incubus of ruling India. Seventy years on, it is time that India engaged with the complex legacies of the Second World War. Bringing the war into the ambit of the new national memorial would be a fitting – if not overdue – recognition that this was India’s War.
CAT/2018.1(RC)
Question. 86
The phrase “mood music” is used in the second paragraph to indicate that the Second World War is viewed as:
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
The Indian government has announced an international competition to design a National War Memorial in New Delhi, to honour all of the Indian soldiers who served in the various wars and counter-insurgency campaigns from 1947 onwards. The terms of the competition also specified that the new structure would be built adjacent to the India Gate – a memorial to the Indian soldiers who died in the First World War. Between the old imperialist memorial and the proposed nationalist one, India’s contribution to the Second World War is airbrushed out of existence.
The Indian government’s conception of the war memorial was not merely absentminded. Rather, it accurately reflected the fact that both academic history and popular memory have yet to come to terms with India’s Second World War, which continues to be seen as little more than mood music in the drama of India’s advance towards independence and partition in 1947. Further, the political trajectory of the postwar subcontinent has militated against popular remembrance of the war. With partition and the onset of the India-Pakistan rivalry, both of the new nations needed fresh stories for self-legitimisation rather than focusing on shared wartime experiences.
However, the Second World War played a crucial role in both the independence and partition of India. The Indian army recruited, trained and deployed some 2.5 million men, almost 90,000 of whom were killed and many more injured. Even at the time, it was recognised as the largest volunteer force in the war.
India’s material and financial contribution to the war was equally significant. India emerged as a major military-industrial and logistical base for Allied operations in south-east Asia and the Middle East. This led the United States to take considerable interest in the country’s future, and ensured that this was no longer the preserve of the British government. Other wartime developments pointed in the direction of India’s independence. In a stunning reversal of its long-standing financial relationship with Britain, India finished the war as one of the largest creditors to the imperial power.
Such extraordinary mobilization for war was achieved at great human cost, with the Bengal famine the most extreme manifestation of widespread wartime deprivation. The costs on India’s home front must be counted in millions of lives.
Indians signed up to serve on the war and home fronts for a variety of reasons. Many were convinced that their contribution would open the doors to India’s freedom. The political and social churn triggered by the war was evident in the massive waves of popular protest and unrest that washed over rural and urban India in the aftermath of the conflict. This turmoil was crucial in persuading the Attlee government to rid itself of the incubus of ruling India. Seventy years on, it is time that India engaged with the complex legacies of the Second World War. Bringing the war into the ambit of the new national memorial would be a fitting – if not overdue – recognition that this was India’s War.
CAT/2018.1(RC)
Question. 87
The author suggests that a major reason why India has not so far acknowledged its role in the Second World War is that it:
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
The Indian government has announced an international competition to design a National War Memorial in New Delhi, to honour all of the Indian soldiers who served in the various wars and counter-insurgency campaigns from 1947 onwards. The terms of the competition also specified that the new structure would be built adjacent to the India Gate – a memorial to the Indian soldiers who died in the First World War. Between the old imperialist memorial and the proposed nationalist one, India’s contribution to the Second World War is airbrushed out of existence.
The Indian government’s conception of the war memorial was not merely absentminded. Rather, it accurately reflected the fact that both academic history and popular memory have yet to come to terms with India’s Second World War, which continues to be seen as little more than mood music in the drama of India’s advance towards independence and partition in 1947. Further, the political trajectory of the postwar subcontinent has militated against popular remembrance of the war. With partition and the onset of the India-Pakistan rivalry, both of the new nations needed fresh stories for self-legitimisation rather than focusing on shared wartime experiences.
However, the Second World War played a crucial role in both the independence and partition of India. The Indian army recruited, trained and deployed some 2.5 million men, almost 90,000 of whom were killed and many more injured. Even at the time, it was recognised as the largest volunteer force in the war.
India’s material and financial contribution to the war was equally significant. India emerged as a major military-industrial and logistical base for Allied operations in south-east Asia and the Middle East. This led the United States to take considerable interest in the country’s future, and ensured that this was no longer the preserve of the British government. Other wartime developments pointed in the direction of India’s independence. In a stunning reversal of its long-standing financial relationship with Britain, India finished the war as one of the largest creditors to the imperial power.
Such extraordinary mobilization for war was achieved at great human cost, with the Bengal famine the most extreme manifestation of widespread wartime deprivation. The costs on India’s home front must be counted in millions of lives.
Indians signed up to serve on the war and home fronts for a variety of reasons. Many were convinced that their contribution would open the doors to India’s freedom. The political and social churn triggered by the war was evident in the massive waves of popular protest and unrest that washed over rural and urban India in the aftermath of the conflict. This turmoil was crucial in persuading the Attlee government to rid itself of the incubus of ruling India. Seventy years on, it is time that India engaged with the complex legacies of the Second World War. Bringing the war into the ambit of the new national memorial would be a fitting – if not overdue – recognition that this was India’s War.
CAT/2018.1(RC)
Question. 88
The author claims that omitting mention of Indians who served in the Second World War from the new National War Memorial is:
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
Economists have spent most of the 20th century ignoring psychology, positive or otherwise. But today there is a great deal of emphasis on how happiness can shape global economies, or — on a smaller scale — successful business practice. This is driven, in part, by a trend in "measuring" positive emotions, mostly so they can be optimized. Neuroscientists, for example, claim to be able to locate specific emotions, such as happiness or disappointment, in particular areas of the brain. Wearable technologies, such as Spire, offer data-driven advice on how to reduce stress.
We are no longer just dealing with "happiness" in a philosophical or romantic sense — it has become something that can be monitored and measured, including by our behavior, use of social media and bodily indicators such as pulse rate and facial expressions. There is nothing automatically sinister about this trend. But it is disquieting that the businesses and experts driving the quantification of happiness claim to have our best interests at heart, often concealing their own agendas in the process. In the workplace, happy workers are viewed as a "win-win." Work becomes more pleasant, and employees, more productive. But this is now being pursued through the use of performance-evaluating wearable technology, such as Humanyze or Virgin Pulse, both of which monitor physical signs of stress and activity toward the goal of increasing productivity.
Cities such as Dubai, which has pledged to become the "happiest city in the world," dream up ever-more elaborate and intrusive ways of collecting data on well-being — to the point where there is now talk of using CCTV cameras to monitor facial expressions in public spaces. New ways of detecting emotions are hitting the market all the time: One company, Beyond Verbal, aims to calculate moods conveyed in a phone conversation, potentially without the knowledge of at least one of the participants. And Facebook [has] demonstrated that it could influence our emotions through tweaking our news feeds — opening the door to ever-more targeted manipulation in advertising and influence.
As the science grows more sophisticated and technologies become more intimate with our thoughts and bodies, a clear trend is emerging. Where happiness indicators were once used as a basis to reform society, challenging the obsession with money that G.D.P. measurement entrenches, they are increasingly used as a basis to transform or discipline individuals.
Happiness becomes a personal project that each of us must now work on, like going to the gym. Since the 1970s, depression has come to be viewed as a cognitive or neurological defect in the individual, and never a consequence of circumstances. All of this simply escalates the sense of responsibility each of us feels for our own feelings, and with it, the sense of failure when things go badly. A society that deliberately removed certain sources of misery, such as precarious and exploitative employment, may well be a happier one. But we won't get there by making this single, often fleeting emotion, the over-arching goal.
CAT/2018.1(RC)
Question. 90
The author’s view would be undermined by which of the following research findings?
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
The complexity of modern problems often precludes any one person from fully understanding them. Factors contributing to rising obesity levels, for example, include transportation systems and infrastructure, media, convenience foods, changing social norms, human biology and psychological factors. The multidimensional or layered character of complex problems also undermines the principle of meritocracy: the idea that the ‘best person’ should be hired. There is no best person. When putting together an oncological research team, a biotech company such as Gilead or Genentech would not construct a multiple-choice test and hire the top scorers, or hire people whose resumes score highest according to some performance criteria. Instead, they would seek diversity. They would build a team of people who bring diverse knowledge bases, tools and analytic skills.
Believers in a meritocracy might grant that teams ought to be diverse but then argue that meritocratic principles should apply within each category. Thus the team should consist of the ‘best’ mathematicians, the ‘best’ oncologists, and the ‘best’ biostatisticians from within the pool. That position suffers from a similar flaw.
Even within a knowledge domain, no test or criteria applied to individuals will produce the best team. Each of these domains possesses such depth and breadth that no test can exist. Consider the field of neuroscience. Upwards of 50,000 papers were published last year covering various techniques, domains of enquiry and levels of analysis, ranging from molecules and synapses up through networks of neurons. Given that complexity, any attempt to rank a collection of neuroscientists from best to worst, as if they were competitors in the 50-metre butterfly, must fail. What could be true is that given a specific task and the composition of a particular team, one scientist would be more likely to contribute than another. Optimal hiring depends on context. Optimal teams will be diverse.
Evidence for this claim can be seen in the way that papers and patents that combine diverse ideas tend to rank as high-impact. It can also be found in the structure of the so-called random decision forest, a state-of-the-art machine-learning algorithm.
Random forests consist of ensembles of decision trees. If classifying pictures, each tree makes a vote: is that a picture of a fox or a dog? A weighted majority rules. Random forests can serve many ends. They can identify bank fraud and diseases, recommend ceiling fans and predict online dating behaviour. When building a forest, you do not select the best trees as they tend to make similar classifications. You want diversity. Programmers achieve that diversity by training each tree on different data, a technique known as bagging. They also boost the forest ‘cognitively’ by training trees on the hardest cases – those that the current forest gets wrong. This ensures even more diversity and accurate forests.
Yet the fallacy of meritocracy persists. Corporations, non-profits, governments, universities and even preschools test, score and hire the ‘best’. This all but guarantees not creating the best team. Ranking people by common criteria produces homogeneity. That’s not likely to lead to breakthroughs.
CAT/2018.2(RC)
Question. 94
The author critiques meritocracy for all the following reasons EXCEPT that:
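Note: the ‘random decision forest’ that the passage uses as evidence can be illustrated with a short sketch. The snippet below is an editorial illustration only, not part of the CAT passage or its options; it assumes scikit-learn and NumPy are available, uses a toy synthetic dataset, and names such as n_trees and forest_pred are our own. It shows the two ideas the passage relies on: bagging (each tree is trained on a different bootstrap resample of the data, which creates diversity) and prediction by majority vote.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

n_trees = 25
trees = []
for _ in range(n_trees):
    # Bagging: each tree sees a different bootstrap resample of the training data,
    # which is what gives the ensemble its diversity.
    idx = rng.integers(0, len(X), size=len(X))
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(X[idx], y[idx])
    trees.append(tree)

# Each tree "votes" on every example; the forest predicts by majority.
# (The passage mentions a weighted majority and boosting on the hardest cases;
# this sketch keeps an unweighted vote and no boosting for brevity.)
votes = np.stack([t.predict(X) for t in trees])        # shape: (n_trees, n_samples)
forest_pred = (votes.mean(axis=0) >= 0.5).astype(int)  # majority rule for two classes

print("ensemble accuracy on the toy data:", (forest_pred == y).mean())

Keeping only the single best-scoring tree from such an ensemble would typically hurt accuracy, because the strongest individual trees tend to make the same classifications – which is the analogy the passage draws against ranking people by a single common criterion.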
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
The complexity of modern problems often precludes any one person from fully understanding them. Factors contributing to rising obesity levels, for example, include transportation systems and infrastructure, media, convenience foods, changing social norms, human biology and psychological factors. The multidimensional or layered character of complex problems also undermines the principle of meritocracy: the idea that the ‘best person’ should be hired. There is no best person. When putting together an oncological research team, a biotech company such as Gilead or Genentech would not construct a multiple-choice test and hire the top scorers, or hire people whose resumes score highest according to some performance criteria. Instead, they would seek diversity. They would build a team of people who bring diverse knowledge bases, tools and analytic skills.
Believers in a meritocracy might grant that teams ought to be diverse but then argue that meritocratic principles should apply within each category. Thus the team should consist of the ‘best’ mathematicians, the ‘best’ oncologists, and the ‘best’ biostatisticians from within the pool. That position suffers from a similar flaw.
Even within a knowledge domain, no test or criteria applied to individuals will produce the best team. Each of these domains possesses such depth and breadth that no test can exist. Consider the field of neuroscience. Upwards of 50,000 papers were published last year covering various techniques, domains of enquiry and levels of analysis, ranging from molecules and synapses up through networks of neurons. Given that complexity, any attempt to rank a collection of neuroscientists from best to worst, as if they were competitors in the 50-metre butterfly, must fail. What could be true is that given a specific task and the composition of a particular team, one scientist would be more likely to contribute than another. Optimal hiring depends on context. Optimal teams will be diverse.
Evidence for this claim can be seen in the way that papers and patents that combine diverse ideas tend to rank as high-impact. It can also be found in the structure of the so-called random decision forest, a state-of-the-art machine-learning algorithm.
Random forests consist of ensembles of decision trees. If classifying pictures, each tree makes a vote: is that a picture of a fox or a dog? A weighted majority rules. Random forests can serve many ends. They can identify bank fraud and diseases, recommend ceiling fans and predict online dating behaviour. When building a forest, you do not select the best trees as they tend to make similar classifications. You want diversity. Programmers achieve that diversity by training each tree on different data, a technique known as bagging. They also boost the forest ‘cognitively’ by training trees on the hardest cases – those that the current forest gets wrong. This ensures even more diversity and accurate forests.
Yet the fallacy of meritocracy persists. Corporations, non-profits, governments, universities and even preschools test, score and hire the ‘best’. This all but guarantees not creating the best team. Ranking people by common criteria produces homogeneity. That’s not likely to lead to breakthroughs.
CAT/2018.2(RC)
Question. 95
Which of the following conditions would weaken the efficacy of a random decision forest?
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
The complexity of modern problems often precludes any one person from fully understanding them. Factors contributing to rising obesity levels, for example, include transportation systems and infrastructure, media, convenience foods, changing social norms, human biology and psychological factors. The multidimensional or layered character of complex problems also undermines the principle of meritocracy: the idea that the ‘best person’ should be hired. There is no best person. When putting together an oncological research team, a biotech company such as Gilead or Genentech would not construct a multiple-choice test and hire the top scorers, or hire people whose resumes score highest according to some performance criteria. Instead, they would seek diversity. They would build a team of people who bring diverse knowledge bases, tools and analytic skills.
Believers in a meritocracy might grant that teams ought to be diverse but then argue that meritocratic principles should apply within each category. Thus the team should consist of the ‘best’ mathematicians, the ‘best’ oncologists, and the ‘best’ biostatisticians from within the pool. That position suffers from a similar flaw.
Even within a knowledge domain, no test or criteria applied to individuals will produce the best team. Each of these domains possesses such depth and breadth that no test can exist. Consider the field of neuroscience. Upwards of 50,000 papers were published last year covering various techniques, domains of enquiry and levels of analysis, ranging from molecules and synapses up through networks of neurons. Given that complexity, any attempt to rank a collection of neuroscientists from best to worst, as if they were competitors in the 50-metre butterfly, must fail. What could be true is that given a specific task and the composition of a particular team, one scientist would be more likely to contribute than another. Optimal hiring depends on context. Optimal teams will be diverse.
Evidence for this claim can be seen in the way that papers and patents that combine diverse ideas tend to rank as high-impact. It can also be found in the structure of the so-called random decision forest, a state-of-the-art machine-learning algorithm.
Random forests consist of ensembles of decision trees. If classifying pictures, each tree makes a vote: is that a picture of a fox or a dog? A weighted majority rules. Random forests can serve many ends. They can identify bank fraud and diseases, recommend ceiling fans and predict online dating behaviour. When building a forest, you do not select the best trees as they tend to make similar classifications. You want diversity. Programmers achieve that diversity by training each tree on different data, a technique known as bagging. They also boost the forest ‘cognitively’ by training trees on the hardest cases – those that the current forest gets wrong. This ensures even more diversity and accurate forests.
Yet the fallacy of meritocracy persists. Corporations, non-profits, governments, universities and even preschools test, score and hire the ‘best’. This all but guarantees not creating the best team. Ranking people by common criteria produces homogeneity. That’s not likely to lead to breakthroughs.
CAT/2018.2(RC)
Question. 96
Which of the following conditions, if true, would invalidate the passage’s main argument?
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
The complexity of modern problems often precludes any one person from fully understanding them. Factors contributing to rising obesity levels, for example, include transportation systems and infrastructure, media, convenience foods, changing social norms, human biology and psychological factors. The multidimensional or layered character of complex problems also undermines the principle of meritocracy: the idea that the ‘best person’ should be hired. There is no best person. When putting together an oncological research team, a biotech company such as Gilead or Genentech would not construct a multiple-choice test and hire the top scorers, or hire people whose resumes score highest according to some performance criteria. Instead, they would seek diversity. They would build a team of people who bring diverse knowledge bases, tools and analytic skills.
Believers in a meritocracy might grant that teams ought to be diverse but then argue that meritocratic principles should apply within each category. Thus the team should consist of the ‘best’ mathematicians, the ‘best’ oncologists, and the ‘best’ biostatisticians from within the pool. That position suffers from a similar flaw.
Even within a knowledge domain, no test or criteria applied to individuals will produce the best team. Each of these domains possesses such depth and breadth that no test can exist. Consider the field of neuroscience. Upwards of 50,000 papers were published last year covering various techniques, domains of enquiry and levels of analysis, ranging from molecules and synapses up through networks of neurons. Given that complexity, any attempt to rank a collection of neuroscientists from best to worst, as if they were competitors in the 50-metre butterfly, must fail. What could be true is that given a specific task and the composition of a particular team, one scientist would be more likely to contribute than another. Optimal hiring depends on context. Optimal teams will be diverse.
Evidence for this claim can be seen in the way that papers and patents that combine diverse ideas tend to rank as high-impact. It can also be found in the structure of the so-called random decision forest, a state-of-the-art machine-learning algorithm.
Random forests consist of ensembles of decision trees. If classifying pictures, each tree makes a vote: is that a picture of a fox or a dog? A weighted majority rules. Random forests can serve many ends. They can identify bank fraud and diseases, recommend ceiling fans and predict online dating behaviour. When building a forest, you do not select the best trees as they tend to make similar classifications. You want diversity. Programmers achieve that diversity by training each tree on different data, a technique known as bagging. They also boost the forest ‘cognitively’ by training trees on the hardest cases – those that the current forest gets wrong. This ensures even more diversity and accurate forests.
Yet the fallacy of meritocracy persists. Corporations, non-profits, governments, universities and even preschools test, score and hire the ‘best’. This all but guarantees not creating the best team. Ranking people by common criteria produces homogeneity. That’s not likely to lead to breakthroughs.
CAT/2018.2(RC)
Question. 97
On the basis of the passage, which of the following teams is likely to be most effective in solving the problem of rising obesity levels?
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
The complexity of modern problems often precludes any one person from fully understanding them. Factors contributing to rising obesity levels, for example, include transportation systems and infrastructure, media, convenience foods, changing social norms, human biology and psychological factors. The multidimensional or layered character of complex problems also undermines the principle of meritocracy: the idea that the ‘best person’ should be hired. There is no best person. When putting together an oncological research team, a biotech company such as Gilead or Genentech would not construct a multiple-choice test and hire the top scorers, or hire people whose resumes score highest according to some performance criteria. Instead, they would seek diversity. They would build a team of people who bring diverse knowledge bases, tools and analytic skills.
Believers in a meritocracy might grant that teams ought to be diverse but then argue that meritocratic principles should apply within each category. Thus the team should consist of the ‘best’ mathematicians, the ‘best’ oncologists, and the ‘best’ biostatisticians from within the pool. That position suffers from a similar flaw.
Even with a knowledge domain, no test or criteria applied to individuals will produce the best team. Each of these domains possesses such depth and breadth, that no test can exist. Consider the field of neuroscience. Upwards of 50,000 papers were published last year covering various techniques, domains of enquiry and levels of analysis, ranging from molecules and synapses up through networks of neurons. Given that complexity, any attempt to rank a collection of neuroscientists from best to worst, as if they were competitors in the 50-metre butterfly, must fail. What could be true is that given a specific task and the composition of a particular team, one scientist would be more likely to contribute than another. Optimal hiring depends on context. Optimal teams will be diverse.
Evidence for this claim can be seen in the way that papers and patents that combine diverse ideas tend to rank as high-impact. It can also be found in the structure of the so-called random decision forest, a state-of-the-art machine-learning algorithm.
Random forests consist of ensembles of decision trees. If classifying pictures, each tree makes a vote: is that a picture of a fox or a dog? A weighted majority rules. Random forests can serve many ends. They can identify bank fraud and diseases, recommend ceiling fans and predict online dating behaviour. When building a forest, you do not select the best trees as they tend to make similar classifications. You want diversity. Programmers achieve that diversity by training each tree on different data, a technique known as bagging. They also boost the forest ‘cognitively’ by training trees on the hardest cases – those that the current forest gets wrong. This ensures even more diversity and accurate forests.
Yet the fallacy of meritocracy persists. Corporations, non-profits, governments, universities and even preschools test, score and hire the ‘best’. This all but guarantees not creating the best team. Ranking people by common criteria produces homogeneity. That’s not likely to lead to breakthroughs.
CAT/2018.2(RC)
Question. 98
Which of the following best describes the purpose of the example of neuroscience?
Comprehension
Directions for the Questions: Read the passage carefully and answer the given questions accordingly.
Understanding where you are in the world is a basic survival skill, which is why we, like most species, come hard-wired with specialized brain areas to create cognitive maps of our surroundings. Where humans are unique, though, with the possible exception of honeybees, is that we try to communicate this understanding of the world with others. We have a long history of doing this by drawing maps – the earliest versions yet discovered were scrawled on cave walls 14,000 years ago. Human cultures have been drawing them on stone tablets, papyrus, paper and now computer screens ever since.
Given such a long history of human map-making, it is perhaps surprising that it is only within the last few hundred years that north has been consistently considered to be at the top. In fact, for much of human history, north almost never appeared at the top, according to Jerry Brotton, a map historian... “North was rarely put at the top for the simple fact that north is where darkness comes from,” he says. “West is also very unlikely to be put at the top because west is where the sun disappears.”
Confusingly, early Chinese maps seem to buck this trend. But, Brotton says, even though they did have compasses at the time, that isn’t the reason that they placed north at the top. Early Chinese compasses were actually oriented to point south, which was considered to be more desirable than deepest darkest north. But in Chinese maps, the emperor, who lived in the north of the country, was always put at the top of the map, with everyone else, his loyal subjects, looking up towards him. “In Chinese culture the Emperor looks south because it’s where the winds come from, it’s a good direction. North is not very good but you are in a position of subjection to the emperor, so you look up to him,” says Brotton.
Given that each culture has a very different idea of who, or what, they should look up to, it’s perhaps not surprising that there is very little consistency in which way early maps pointed. In ancient Egyptian times the top of the world was east, the position of sunrise. Early Islamic maps favoured south at the top because most of the early Muslim cultures were north of Mecca, so they imagined looking up (south) towards it. Christian maps from the same era (called Mappa Mundi) put east at the top, towards the Garden of Eden and with Jerusalem in the centre.
So when did everyone get together and decide that north was the top? It’s tempting to put it down to European explorers like Christopher Columbus and Ferdinand Magellan, who were navigating by the North Star. But Brotton argues that these early explorers didn’t think of the world like that at all. “When Columbus describes the world it is in accordance with east being at the top,” he says. “Columbus says he is going towards paradise, so his mentality is from a medieval mappa mundi.” We’ve got to remember, adds Brotton, that at the time, “no one knows what they are doing and where they are going.”
CAT/2017.1(RC)
Question. 103
Which one of the following about the northern orientation of modern maps is asserted in the passage?
Comprehension
Directions: Read the passage carefully and answer the given questions accordingly
I used a smartphone GPS to find my way through the cobblestoned maze of Geneva's Old Town, in search of a handmade machine that changed the world more than any other invention. Near a 13th-century cathedral in this Swiss city on the shores of a lovely lake, I found what I was looking for: a Gutenberg printing press. "This was the Internet of its day — at least as influential as the iPhone," said Gabriel de Montmollin, the director of the Museum of the Reformation, toying with the replica of Johann Gutenberg's great invention. [Before the invention of the printing press] it used to take four monks...up to a year to produce a single book. With the advance in movable type in 15th-century Europe, one press could crank out 3,000 pages a day.
Before long, average people could travel to places that used to be unknown to them — with maps! Medical information passed more freely and quickly, diminishing the sway of quacks...The printing press offered the prospect that tyrants would never be able to kill a book or suppress an idea. Gutenberg's brainchild broke the monopoly that clerics had on scripture. And later, stirred by pamphlets from a version of that same press, the American colonies rose up against a king and gave birth to a nation. So, a question in the summer of this 10th anniversary of the iPhone: has the device that is perhaps the most revolutionary of all time given us a single magnificent idea? Nearly every advancement of the written word through new technology has also advanced humankind. Sure, you can say the iPhone changed everything. By putting the world's recorded knowledge in the palm of a hand, it revolutionized work, dining, travel and socializing. It made us more narcissistic — here's more of me doing cool stuff! — and it unleashed an army of awful trolls. We no longer have the patience to sit through a baseball game without that reach to the pocket. And one more casualty of Apple selling more than a billion phones in a decade's time: daydreaming has become a lost art.
For all of that, I'm still waiting to see if the iPhone can do what the printing press did for religion and democracy...the Geneva museum makes a strong case that the printing press opened more minds than anything else...it's hard to imagine the French or American revolutions without those enlightened voices in print...
Not long after Steve Jobs introduced his iPhone, he said the bound book was probably headed for history's attic. Not so fast. After a period of rapid growth in e-books, something closer to the medium for Chaucer's volumes has made a great comeback.
The hope of the iPhone, and the Internet in general, was that it would free people in closed societies. But the failure of the Arab Spring, and the continued suppression of ideas in North Korea, China and Iran, has not borne that out... The iPhone is still young. It has certainly been "one of the most important, world-changing and successful products in history," as Apple CEO Tim Cook said. But I'm not sure if the world changed for the better with the iPhone — as it did with the printing press — or merely changed.
CAT/2017.1(RC)
Question. 105
The printing press has been likened to the Internet for which one of the following reasons?
Comprehension
Directions: Read the passage carefully and answer the given questions accordingly
I used a smartphone GPS to find my way through the cobblestoned maze of Geneva's Old Town, in search of a handmade machine that changed the world more than any other invention. Near a 13th-century cathedral in this Swiss city on the shores of a lovely lake, I found what I was looking for: a Gutenberg printing press. "This was the Internet of its day — at least as influential as the iPhone," said Gabriel de Montmollin, the director of the Museum of the Reformation, toying with the replica of Johann Gutenberg's great invention. [Before the invention of the printing press] it used to take four monks...up to a year to produce a single book. With the advance in movable type in 15th-century Europe, one press could crank out 3,000 pages a day.
Before long, average people could travel to places that used to be unknown to them — with maps! Medical information passed more freely and quickly, diminishing the sway of quacks...The printing press offered the prospect that tyrants would never be able to kill a book or suppress an idea. Gutenberg's brainchild broke the monopoly that clerics had on scripture. And later, stirred by pamphlets from a version of that same press, the American colonies rose up against a king and gave birth to a nation. So, a question in the summer of this 10th anniversary of the iPhone: has the device that is perhaps the most revolutionary of all time given us a single magnificent idea? Nearly every advancement of the written word through new technology has also advanced humankind. Sure, you can say the iPhone changed everything. By putting the world's recorded knowledge in the palm of a hand, it revolutionized work, dining, travel and socializing. It made us more narcissistic — here's more of me doing cool stuff! — and it unleashed an army of awful trolls. We no longer have the patience to sit through a baseball game without that reach to the pocket. And one more casualty of Apple selling more than a billion phones in a decade's time: daydreaming has become a lost art.
For all of that, I'm still waiting to see if the iPhone can do what the printing press did for religion and democracy...the Geneva museum makes a strong case that the printing press opened more minds than anything else...it's hard to imagine the French or American revolutions without those enlightened voices in print...
Not long after Steve Jobs introduced his iPhone, he said the bound book was probably headed for history's attic. Not so fast. After a period of rapid growth in e-books, something closer to the medium for Chaucer's volumes has made a great comeback.
The hope of the iPhone, and the Internet in general, was that it would free people in closed societies. But the failure of the Arab Spring, and the continued suppression of ideas in North Korea, China and Iran, has not borne that out... The iPhone is still young. It has certainly been "one of the most important, world-changing and successful products in history," as Apple CEO Tim Cook said. But I'm not sure if the world changed for the better with the iPhone — as it did with the printing press — or merely changed.
CAT/2017.1(RC)
Question. 109
The author attributes the French and American revolutions to the invention of the printing press because
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
Creativity is at once our most precious resource and our most inexhaustible one. As anyone who has ever spent any time with children knows, every single human being is born creative; every human being is innately endowed with the ability to combine and recombine data, perceptions, materials and ideas, and devise new ways of thinking and doing. What fosters creativity? More than anything else: the presence of other creative people. The big myth is that creativity is the province of great individual geniuses. In fact, creativity is a social process. Our biggest creative breakthroughs come when people learn from, compete with, and collaborate with other people.
Cities are the true fonts of creativity... With their diverse populations, dense social networks, and public spaces where people can meet spontaneously and serendipitously, they spark and catalyze new ideas. With their infrastructure for finance, organization and trade, they allow those ideas to be swiftly actualized.
As for what staunches creativity, that's easy, if ironic. It's the very institutions that we build to manage, exploit and perpetuate the fruits of creativity — our big bureaucracies, and sad to say, too many of our schools. Creativity is disruptive; schools and organizations are regimented, standardized and stultifying.
The education expert Sir Ken Robinson points to a 1968 study reporting on a group of 1,600 children who were tested over time for their ability to think in out-of-the-box ways. When the children were between 3 and 5 years old, 98 percent achieved positive scores. When they were 8 to 10, only 32 percent passed the same test, and only 10 percent at 13 to 15. When 280,000 25-year-olds took the test, just 2 percent passed. By the time we are adults, our creativity has been wrung out of us.
I once asked the great urbanist Jane Jacobs what makes some places more creative than others. She said, essentially, that the question was an easy one. All cities, she said, were filled with creative people; that's our default state as people. But some cities had more than their shares of leaders, people and institutions that blocked out that creativity. She called them "squelchers."
Creativity (or the lack of it) follows the same general contours of the great socio-economic divide — our rising inequality — that plagues us. According to my own estimates, roughly a third of us across the United States, and perhaps as much as half of us in our most creative cities — are able to do work which engages our creative faculties to some extent, whether as artists, musicians, writers, techies, innovators, entrepreneurs, doctors, lawyers, journalists or educators — those of us who work with our minds. That leaves a group that I term "the other 66 percent," who toil in low-wage rote and rotten jobs — if they have jobs at all — in which their creativity is subjugated, ignored or wasted.
Creativity itself is not in danger. Its flourishing is all around us — in science and technology, arts and culture, in our rapidly revitalizing cities. But we still have a long way to go if we want to build a truly creative society that supports and rewards the creativity of each and every one of us.
CAT/2017.2(RC)
Question. 111
In the author's view, cities promote human creativity for all the following reasons EXCEPT that they
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
The end of the age of the internal combustion engine is in sight. There are small signs everywhere: the shift to hybrid vehicles is already under way among manufacturers. Volvo has announced it will make no purely petrol-engined cars after 2019...and Tesla has just started selling its first electric car aimed squarely at the middle classes: the Tesla 3 sells for $35,000 in the US, and 400,000 people have put down a small, refundable deposit towards one. Several thousand have already taken delivery, and the company hopes to sell half a million more next year. This is a remarkable figure for a machine with a fairly short range and a very limited number of specialised charging stations.
Some of it reflects the remarkable abilities of Elon Musk, the company's founder, as a salesman, engineer, and a man able to get the most out of his factory workers and the governments he deals with...Mr Musk is selling a dream that the world wants to believe in. This last may be the most important factor in the story. The private car is...a device of immense practical help and economic significance, but at the same time a theatre for myths of unattainable self-fulfilment. The one thing you will never see in a car advertisement is traffic, even though that is the element in which drivers spend their lives. Every single driver in a traffic jam is trying to escape from it, yet it is the inevitable consequence of mass car ownership.
The sleek and swift electric car is at one level merely the most contemporary fantasy of autonomy and power. But it might also disrupt our exterior landscapes nearly as much as the fossil fuel-engined car did in the last century. Electrical cars would of course pollute far less than fossil fuel-driven ones; instead of oil reserves, the rarest materials for batteries would make undeserving despots and their dynasties fantastically rich. Petrol stations would disappear. The air in cities would once more be breathable and their streets as quiet as those of Venice. This isn't an unmixed good. Cars that were as silent as bicycles would still be as dangerous as they are now to anyone they hit without audible warning.
The dream goes further than that. The electric cars of the future will be so thoroughly equipped with sensors and reaction mechanisms that they will never hit anyone. Just as brakes don't let you skid today, the steering wheel of tomorrow will swerve you away from danger before you have even noticed it...
This is where the fantasy of autonomy comes full circle. The logical outcome of cars which need no driver is that they will become cars which need no owner either. Instead, they will work as taxis do, summoned at will but only for the journeys we actually need. This is the future towards which Uber...is working. The ultimate development of the private car will be to reinvent public transport. Traffic jams will be abolished only when the private car becomes a public utility. What then will happen to our fantasies of independence? We'll all have to take to electrically powered bicycles.
CAT/2017.2(RC)
Question. 119
According to the author, the main reason for Tesla's remarkable sales is that
CAT/2017.2(RC)
Question. 121
In paragraphs 5 and 6, the author provides the example of Uber to argue that
CAT/2017.2(RC)
Question. 122
In paragraph 6, the author mentions electrically powered bicycles to argue that
Comprehension
Directions for question: Read the passage carefully and answer the given questions accordingly
Typewriters are the epitome of a technology that has been comprehensively rendered obsolete by the digital age. The ink comes off the ribbon, they weigh a ton, and second thoughts are a disaster. But they are also personal, portable and, above all, private. Type a document and lock it away and more or less the only way anyone else can get it is if you give it to them. That is why the Russians have decided to go back to typewriters in some government offices, and why in the US, some departments have never abandoned them. Yet it is not just their resistance to algorithms and secret surveillance that keeps typewriter production lines — well one, at least — in business (the last British one closed a year ago). Nor is it only the nostalgic appeal of the metal body and the stout well-defined keys that make them popular on eBay. A typewriter demands something particular: attentiveness. By the time the paper is loaded, the ribbon tightened, the carriage returned, the spacing and the margins set, there's a big premium on hitting the right key. That means sorting out ideas, pulling together a kind of order and organising details before actually striking off. There can be no thinking on screen with a typewriter. Nor are there any easy distractions. No online shopping. No urgent emails. No Twitter. No need even for electricity — perfect for writing in a remote hideaway. The thinking process is accompanied by the encouraging clack of keys, and the ratchet of the carriage return. Ping!
CAT/2017.2(RC)
Question. 123
Which one of the following best describes what the passage is trying to do?
Comprehension
Directions for Questions : The passage given below is followed by a set of five questions. Choose the most appropriate answer to each question.
When I was little, children were bought two kinds of ice cream, sold from those white wagons with canopies made of silvery metal: either the two-cent cone or the four-cent ice-cream pie. The two-cent cone was very small, in fact it could fit comfortably into a child’s hand, and it was made by taking the ice cream from its container with a special scoop and piling it on the cone. Granny always suggested I eat only a part of the cone, then throw away the pointed end, because it had been touched by the vendor’s hand (though that was the best part, nice and crunchy, and it was regularly eaten in secret, after a pretence of discarding it).
The four-cent pie was made by a special little machine, also silvery, which pressed two disks of sweet biscuit against a cylindrical section of ice cream. First you had to thrust your tongue into the gap between the biscuits until it touched the central nucleus of ice cream; then, gradually, you ate the whole thing, the biscuit surfaces softening as they became soaked in creamy nectar. Granny had no advice to give here: in theory the pies had been touched only by the machine; in practice, the vendor had held them in his hand while giving them to us, but it was impossible to isolate the contaminated area.
I was fascinated, however, by some of my peers, whose parents bought them not a four-cent pie but two two-cent cones. These privileged children advanced proudly with one cone in their right hand and one in their left; and expertly moving their head from side to side, they licked first one, then the other. This liturgy seemed to me so sumptuously enviable, that many times I asked to be allowed to celebrate it. In vain. My elders were inflexible: a four-cent ice, yes; but two two-cent ones, absolutely no.
As anyone can see, neither mathematics nor economy nor dietetics justified this refusal. Nor did hygiene, assuming that in due course the tips of both cones were discarded. The pathetic, and obviously mendacious, justification was that a boy concerned with turning his eyes from one cone to the other was more inclined to stumble over stones, steps, or cracks in the pavement. I dimly sensed that there was another secret justification, cruelly pedagogical, but I was unable to grasp it.
Today, citizen and victim of a consumer society, a civilization of excess and waste (which the society of the thirties was not), I realize that those dear and now departed elders were right. Two two-cent cones instead of one at four cents did not signify squandering, economically speaking, but symbolically they surely did. It was for this precise reason, that I yearned for them: because two ice creams suggested excess. And this was precisely why they were denied to me: because they looked indecent, an insult to poverty, a display of fictitious privilege, a boast of wealth. Only spoiled children ate two cones at once, those children who in fairy tales were rightly punished, as Pinocchio was when he rejected the skin and the stalk. And parents who encouraged this weakness, appropriate to little parvenus, were bringing up their children in the foolish theatre of “I’d like to but I can’t.” They were preparing them to turn up at tourist-class check-in with a fake Gucci bag bought from a street peddler on the beach at Rimini.
Nowadays the moralist risks seeming at odds with morality, in a world where the consumer civilization now wants even adults to be spoiled, and promises them always something more, from the wristwatch in the box of detergent to the bonus bangle sheathed, with the magazine it accompanies, in a plastic envelope. Like the parents of those ambidextrous gluttons I so envied, the consumer civilization pretends to give more, but actually gives, for four cents, what is worth four cents. You will throw away the old transistor radio to purchase the new one, that boasts an alarm clock as well, but some inexplicable defect in the mechanism will guarantee that the radio lasts only a year. The new cheap car will have leather seats, double side mirrors adjustable from inside, and a panelled dashboard, but it will not last nearly so long as the glorious old Fiat 500, which, even when it broke down, could be started again with a kick.
The morality of the old days made Spartans of us all, while today’s morality wants all of us to be Sybarites.
Comprehension
Directions for Questions : The passage given below is followed by a set of five questions. Choose the most appropriate answer to each question.
A remarkable aspect of art of the present century is the range of concepts and ideologies which it embodies. It is almost tempting to see a pattern emerging within the art field - or alternatively imposed upon it a posteriori - similar to that which exists under the umbrella of science where the general term covers a whole range of separate, though interconnecting, activities. Any parallelism is however - in this instance at least - misleading. A scientific discipline develops systematically once its bare tenets have been established, named and categorized as conventions. Many of the concepts of modern art, by contrast, have resulted from the almost accidental meetings of groups of talented individuals at certain times and certain places. The ideas generated by these chance meetings had twofold consequences. Firstly, a corpus of work would be produced which, in great part, remains as a concrete record of the events. Secondly, the ideas would themselves be disseminated through many different channels of communication - seeds that often bore fruit in contexts far removed from their generation. Not all movements were exclusively concerned with innovation. Surrealism, for instance, claimed to embody a kind of insight which can be present in the art of any period. This claim has been generally accepted so that a sixteenth century painting by Spranger or a mysterious photograph by Atget can legitimately be discussed in surrealist terms. Briefly, then, the concepts of modern art are of many different (often fundamentally different) kinds and resulted from the exposures of painters, sculptors and thinkers to the more complex phenomena of the twentieth century, including our ever increasing knowledge of the thought and products of earlier centuries. Different groups of artists would collaborate in trying to make sense of a rapidly changing world of visual and spiritual experience. We should hardly be surprised if no one group succeeded completely, but achievements, though relative, have been considerable. Landmarks have been established - concrete statements of position which give a pattern to a situation which could easily have degenerated into total chaos. Beyond this, new language tools have been created for those who follow - semantic systems which can provide a springboard for further explorations.
The codifying of art is often criticized. Certainly one can understand that artists are wary of being pigeonholed since they are apt to think of themselves as individuals - sometimes with good reason. The notion of self-expression, however, no longer carries quite the weight it once did; objectivity has its defenders. There is good reason to accept the ideas codified by artists and critics, over the past sixty years or so, as having attained the status of independent existence - an independence which is not without its own value. The time factor is important here. As an art movement slips into temporal perspective, it ceases to be a living organism - becoming, rather, a fossil. This is not to say that it becomes useless or uninteresting. Just as a scientist can reconstruct the life of a prehistoric environment from the messages codified into the structure of a fossil, so can an artist decipher whole webs of intellectual and creative possibility from the recorded structure of a ‘dead’ art movement. The artist can match the creative patterns crystallized into this structure against the potentials and possibilities of his own time. As T.S. Eliot observed, no one starts anything from scratch; however consciously you may try to live in the present, you are still involved with a nexus of behaviour patterns bequeathed from the past. The original and creative person is not someone who ignores these patterns, but someone who is able to translate and develop them so that they conform more exactly to his - and our - present needs.
Comprehension
Directions for Questions : The passage given below is followed by a set of five questions. Choose the most appropriate answer to each question.
A remarkable aspect of art of the present century is the range of concepts and ideologies which it embodies. It is almost tempting to see a pattern emerging within the art field - or alternatively imposed upon it a posteriori - similar to that which exists under the umbrella of science where the general term covers a whole range of separate, though interconnecting, activities. Any parallelism is however - in this instance at least - misleading. A scientific discipline develops systematically once its bare tenets have been established, named and categorized as conventions. Many of the concepts of modern art, by contrast, have resulted from the almost accidental meetings of groups of talented individuals at certain times and certain places. The ideas generated by these chance meetings had twofold consequences. Firstly, a corpus of work would be produced which, in great part, remains as a concrete record of the events. Secondly, the ideas would themselves be disseminated through many different channels of communication - seeds that often bore fruit in contexts far removed from their generation. Not all movements were exclusively concerned with innovation. Surrealism, for instance, claimed to embody a kind of insight which can be present in the art of any period. This claim has been generally accepted so that a sixteenth century painting by Spranger or a mysterious photograph by Atget can legitimately be discussed in surrealist terms. Briefly, then, the concepts of modern art are of many different (often fundamentally different) kinds and resulted from the exposures of painters, sculptors and thinkers to the more complex phenomena of the twentieth century, including our ever increasing knowledge of the thought and products of earlier centuries. Different groups of artists would collaborate in trying to make sense of a rapidly changing world of visual and spiritual experience. We should hardly be surprised if no one group succeeded completely, but achievements, though relative, have been considerable. Landmarks have been established - concrete statements of position which give a pattern to a situation which could easily have degenerated into total chaos. Beyond this, new language tools have been created for those who follow - semantic systems which can provide a springboard for further explorations.
The codifying of art is often criticized. Certainly one can understand that artists are wary of being pigeonholed since they are apt to think of themselves as individuals - sometimes with good reason. The notion of self-expression, however, no longer carries quite the weight it once did; objectivity has its defenders. There is good reason to accept the ideas codified by artists and critics, over the past sixty years or so, as having attained the status of independent existence - an independence which is not without its own value. The time factor is important here. As an art movement slips into temporal perspective, it ceases to be a living organism - becoming, rather, a fossil. This is not to say that it becomes useless or uninteresting. Just as a scientist can reconstruct the life of a prehistoric environment from the messages codified into the structure of a fossil, so can an artist decipher whole webs of intellectual and creative possibility from the recorded structure of a ‘dead’ art movement. The artist can match the creative patterns crystallized into this structure against the potentials and possibilities of his own time. As T.S. Eliot observed, no one starts anything from scratch; however consciously you may try to live in the present, you are still involved with a nexus of behaviour patterns bequeathed from the past. The original and creative person is not someone who ignores these patterns, but someone who is able to translate and develop them so that they conform more exactly to his - and our - present needs.
CAT/2008(RC)
Question. 133
In the passage, which of the following similarities between science and art may lead to erroneous conclusions?
CAT/2008(RC)
Question. 134
The range of concepts and ideologies embodied in the art of the twentieth century is explained by
Comprehension
B : We always carry forward the legacy of the past.
C : Past behaviours and thought processes recreate themselves in the present and get labeled as ‘original’ or ‘creative’.
D : ‘Originality’ can only thrive in a ‘greenhouse’ insulated from the past biases.
E : ‘Innovations’ and ‘original thinking’ interpret and develop on past thoughts to suit contemporary needs.
Comprehension
Every civilized society lives and thrives on a silent but profound agreement as to what is to be accepted as the valid mould of experience. Civilization is a complex system of dams, dykes, and canals warding off, directing, and articulating the influx of the surrounding fluid element; a fertile fenland, elaborately drained and protected from the high tides of chaotic, unexorcised, and inarticulate experience. In such a culture, stable and sure of itself within the frontiers of ‘naturalized’ experience, the arts wield their creative power not so much in width as in depth. They do not create new experience, but deepen and purify the old. Their works do not differ from one another like a new horizon from a new horizon, but like a madonna from a madonna.
The periods of art which are most vigorous in creative passion seem to occur when the established pattern of experience loosens its rigidity without as yet losing its force. Such a period was the Renaissance, and Shakespeare its poetic consummation. Then it was as though the discipline of the old order gave depth to the excitement of the breaking away, the depth of joy and tragedy, of incomparable conquests and irredeemable losses. Adventurers of experience set out as though in lifeboats to rescue and bring back to the shore treasures of knowing and feeling which the old order had left floating on the high seas. The works of the early Renaissance and the poetry of Shakespeare vibrate with the compassion for live experience in danger of dying from exposure and neglect. In this compassion was the creative genius of the age. Yet, it was a genius of courage, not of desperate audacity. For, however elusively, it still knew of harbours and anchors, of homes to which to return, and of barns in which to store the harvest. The exploring spirit of art was in the depths of its consciousness still aware of a scheme of things into which to fit its exploits and creations.
But the more this scheme of things loses its stability, the more boundless and uncharted appears the ocean of potential exploration. In the blank confusion of infinite potentialities flotsam of significance gets attached to jetsam of experience; for everything is sea, everything is at sea —
...The sea is all about us;
The sea is the land’s edge also, the granite
Into which it reaches, the beaches where it tosses
Its hints of earlier and other creation ...
- and Rilke tells a story in which, as in T.S. Eliot’s poem, it is again the sea and the distance of ‘other creation’ that becomes the image of the poet’s reality. A rowing boat sets out on a difficult passage. The oarsmen labour in exact rhythm. There is no sign yet of the destination. Suddenly a man, seemingly idle, breaks out into song. And if the labour of the oarsmen meaninglessly defeats the real resistance of the real waves, it is the idle singer who magically conquers the despair of apparent aimlessness. While the people next to him try to come to grips with the element that is next to them, his voice seems to bind the boat to the farthest distance so that the farthest distance draws it towards itself. ‘I don’t know why