CAT RC Questions | CAT RC: Social Science Questions

Reading Comprehension based on Social Science: passages drawn from History, Geography, Psychology, Political Thought, Sociology, Economics, and contemporary business issues.

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

Steven Pinker’s new book, “Rationality: What It Is, Why It Seems Scarce, Why It Matters,” offers a pragmatic dose of measured optimism, presenting rationality as a fragile but achievable ideal in personal and civic life. . . . Pinker’s ambition to illuminate such a crucial topic offers the welcome prospect of a return to sanity. . . . It’s no small achievement to make formal logic, game theory, statistics and Bayesian reasoning delightful topics full of charm and relevance. It’s also plausible to believe that a wider application of the rational tools he analyzes would improve the world in important ways. His primer on statistics and scientific uncertainty is particularly timely and should be required reading before consuming any news about the [COVID] pandemic. More broadly, he argues that less media coverage of shocking but vanishingly rare events, from shark attacks to adverse vaccine reactions, would help prevent dangerous overreactions, fatalism and the diversion of finite resources away from solvable but less-dramatic issues, like malnutrition in the developing world. It’s a reasonable critique, and Pinker is not the first to make it. But analyzing the political economy of journalism — its funding structures, ownership concentration and increasing reliance on social media shares — would have given a fuller picture of why so much coverage is so misguided and what we might do about it. Pinker’s main focus is the sort of conscious, sequential reasoning that can track the steps in a geometric proof or an argument in formal logic. Skill in this domain maps directly onto the navigation of many real-world problems, and Pinker shows how greater mastery of the tools of rationality can improve decision-making in medical, legal, financial and many other contexts in which we must act on uncertain and shifting information. . . . Despite the undeniable power of the sort of rationality he describes, many of the deepest insights in the history of science, math, music and art strike their originators in moments of epiphany. From the 19th-century chemist Friedrich August Kekulé’s discovery of the structure of benzene to any of Mozart’s symphonies, much extraordinary human achievement is not a product of conscious, sequential reasoning. Even Plato’s Socrates — who anticipated many of Pinker’s points by nearly 2,500 years, showing the virtue of knowing what you do not know and examining all premises in arguments, not simply trusting speakers’ authority or charisma — attributed many of his most profound insights to dreams and visions. Conscious reasoning is helpful in sorting the wheat from the chaff, but it would be interesting to consider the hidden aquifers that make much of the grain grow in the first place. The role of moral and ethical education in promoting rational behavior is also underexplored. Pinker recognizes that rationality “is not just a cognitive virtue but a moral one.” But this profoundly important point, one subtly explored by ancient Greek philosophers like Plato and Aristotle, doesn’t really get developed. This is a shame, since possessing the right sort of moral character is arguably a precondition for using rationality in beneficial ways.

CAT/2023.3(RC)

Question. 1

The author refers to the ancient Greek philosophers to:

Question. 2

The author endorses Pinker’s views on the importance of logical reasoning as it:

Question. 3

According to the author, for Pinker as well as the ancient Greek philosophers, rational thinking involves all of the following EXCEPT:

Question. 4

The author mentions Kekulé’s discovery of the structure of benzene and Mozart’s symphonies to illustrate the point that:

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

In 2006, the Met [art museum in the US] agreed to return the Euphronios krater, a masterpiece Greek urn that had been a museum draw since 1972. In 2007, the Getty [art museum in the US] agreed to return 40 objects to Italy, including a marble Aphrodite, in the midst of looting scandals. And in December, Sotheby’s and a private owner agreed to return an ancient Khmer statue of a warrior, pulled from auction two years before, to Cambodia. Cultural property, or patrimony, laws limit the transfer of cultural property outside the source country’s territory, including outright export prohibitions and national ownership laws. Most art historians, archaeologists, museum officials and policymakers portray cultural property laws in general as invaluable tools for counteracting the ugly legacy of Western cultural imperialism. During the late 19th and early 20th century — an era former Met director Thomas Hoving called “the age of piracy” — American and European art museums acquired antiquities by hook or by crook, from grave robbers or souvenir collectors, bounty from digs and ancient sites in impoverished but art-rich source countries. Patrimony laws were intended to protect future archaeological discoveries against Western imperialist designs. . . . I surveyed 90 countries with one or more archaeological sites on UNESCO’s World Heritage Site list, and my study shows that in most cases the number of discovered sites diminishes sharply after a country passes a cultural property law. There are 222 archaeological sites listed for those 90 countries. When you look into the history of the sites, you see that all but 21 were discovered before the passage of cultural property laws. . . . Strict cultural patrimony laws are popular in most countries. But the downside may be that they reduce incentives for foreign governments, nongovernmental organizations and educational institutions to invest in overseas exploration because their efforts will not necessarily be rewarded by opportunities to hold, display and study what is uncovered. To the extent that source countries can fund their own archaeological projects, artifacts and sites may still be discovered. . . . The survey has far-reaching implications. It suggests that source countries, particularly in the developing world, should narrow their cultural property laws so that they can reap the benefits of new archaeological discoveries, which typically increase tourism and enhance cultural pride. This does not mean these nations should abolish restrictions on foreign excavation and foreign claims to artifacts. China provides an interesting alternative approach for source nations eager for foreign archaeological investment. From 1935 to 2003, China had a restrictive cultural property law that prohibited foreign ownership of Chinese cultural artifacts. In those years, China’s most significant archaeological discovery occurred by chance, in 1974, when peasant farmers accidentally uncovered ranks of buried terra cotta warriors, which are part of Emperor Qin’s spectacular tomb system. In 2003, the Chinese government switched course, dropping its cultural property law and embracing collaborative international archaeological research. Since then, China has nominated 11 archaeological sites for inclusion in the World Heritage Site list, including eight in 2013, the most ever for China.

CAT/2023.3(RC)

Question. 5

It can be inferred from the passage that archaeological sites are considered important by some source countries because they:

Question. 6

Which one of the following statements best expresses the paradox of patrimony laws?

Question. 7

Which one of the following statements, if true, would undermine the central idea of the passage?

Question. 8

From the passage we can infer that the author is likely to advise poor but archaeologically rich source countries to do all of the following, EXCEPT:

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

The Positivists, anxious to stake out their claim for history as a science, contributed the weight of their influence to the cult of facts. First ascertain the facts, said the positivists, then draw your conclusions from them. . . . This is what may [be] called the common-sense view of history. History consists of a corpus of ascertained facts. The facts are available to the historian in documents, inscriptions, and so on . . . [Sir George Clark] contrasted the "hard core of facts" in history with the surrounding pulp of disputable interpretation forgetting perhaps that the pulpy part of the fruit is more rewarding than the hard core. . . . It recalls the favourite dictum of the great liberal journalist C. P. Scott: "Facts are sacred, opinion is free.". . . What is a historical fact? . . . According to the common-sense view, there are certain basic facts which are the same for all historians and which form, so to speak, the backbone of history—the fact, for example, that the Battle of Hastings was fought in 1066. But this view calls for two observations. In the first place, it is not with facts like these that the historian is primarily concerned. It is no doubt important to know that the great battle was fought in 1066 and not in 1065 or 1067, and that it was fought at Hastings and not at Eastbourne or Brighton. The historian must not get these things wrong. But [to] praise a historian for his accuracy is like praising an architect for using well-seasoned timber or properly mixed concrete in his building. It is a necessary condition of his work, but not his essential function. It is precisely for matters of this kind that the historian is entitled to rely on what have been called the "auxiliary sciences" of history—archaeology, epigraphy, numismatics, chronology, and so forth. . . . The second observation is that the necessity to establish these basic facts rests not on any quality in the facts themselves, but on an apriori decision of the historian. In spite of C. P. Scott's motto, every journalist knows today that the most effective way to influence opinion is by the selection and arrangement of the appropriate facts. It used to be said that facts speak for themselves. This is, of course, untrue. The facts speak only when the historian calls on them: it is he who decides to which facts to give the floor, and in what order or context. . . . The only reason why we are interested to know that the battle was fought at Hastings in 1066 is that historians regard it as a major historical event. . . . Professor Talcott Parsons once called [science] "a selective system of cognitive orientations to reality." It might perhaps have been put more simply. But history is, among other things, that. The historian is necessarily selective. The belief in a hard core of historical facts existing objectively and independently of the interpretation of the historian is a preposterous fallacy, but one which it is very hard to eradicate.

CAT/2023.2(RC)

Question. 9

All of the following, if true, can weaken the passage’s claim that facts do not speak for themselves, EXCEPT:

Question. 10

All of the following describe the “common-sense view” of history, EXCEPT:

Question. 11

If the author of the passage were to write a book on the Battle of Hastings along the lines of his/her own reasoning, the focus of the historical account would be on:

Question. 12

According to this passage, which one of the following statements best describes the significance of archaeology for historians?

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

The Second Hand September campaign, led by Oxfam . . . seeks to encourage shopping at local organisations and charities as alternatives to fast fashion brands such as Primark and Boohoo in the name of saving our planet. As innocent as mindless scrolling through online shops may seem, such consumers are unintentionally—or perhaps even knowingly— contributing to an industry that uses more energy than aviation. . . . Brits buy more garments than any other country in Europe, so it comes as no shock that many of those clothes end up in UK landfills each year: 300,000 tonnes of them, to be exact. This waste of clothing is destructive to our planet, releasing greenhouse gasses as clothes are burnt as well as bleeding toxins and dyes into the surrounding soil and water. As ecologist Chelsea Rochman bluntly put it, “The mismanagement of our waste has even come back to haunt us on our dinner plate.” It’s not surprising, then, that people are scrambling for a solution, the most common of which is second-hand shopping. Retailers selling consigned clothing are currently expanding at a rapid rate . . . If everyone bought just one used item in a year, it would save 449 million lbs of waste, equivalent to the weight of 1 million Polar bears. “Thrifting” has increasingly become a trendy practice. London is home to many second-hand, or more commonly coined ‘vintage’, shops across the city from Bayswater to Brixton. So you’re cool and you care about the planet; you’ve killed two birds with one stone. But do people simply purchase a second-hand item, flash it on Instagram with #vintage and call it a day without considering whether what they are doing is actually effective? According to a study commissioned by Patagonia, for instance, older clothes shed more microfibres. These can end up in our rivers and seas after just one wash due to the worn material, thus contributing to microfibre pollution. To break it down, the amount of microfibres released by laundering 100,000 fleece jackets is equivalent to as many as 11,900 plastic grocery bags, and up to 40 per cent of that ends up in our oceans. . . . So where does this leave second-hand consumers? [They would be well advised to buy] high-quality items that shed less and last longer [as this] combats both microfibre pollution and excess garments ending up in landfills. . . . Luxury brands would rather not circulate their latest season stock around the globe to be sold at a cheaper price, which is why companies like ThredUP, a US fashion resale marketplace, have not yet caught on in the UK. There will always be a market for consignment but there is also a whole generation of people who have been taught that only buying new products is the norm; second-hand luxury goods are not in their psyche. Ben Whitaker, director at Liquidation Firm B-Stock, told Prospect that unless recycling becomes cost-effective and filters into mass production, with the right technology to partner it, “high-end retailers would rather put brand before sustainability.”

CAT/2023.2(RC)

Question. 13

The act of “thrifting”, as described in the passage, can be considered ironic because it:

Question. 14

Based on the passage, we can infer that the opposite of fast fashion, ‘slow fashion’, would most likely refer to clothes that:

Question. 15

The central idea of the passage would be undermined if:

Question. 16

According to the author, companies like ThredUP have not caught on in the UK for all of the following reasons EXCEPT that:

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

Umberto Eco, an Italian writer, was right when he said the language of Europe is translation. Netflix and other deep-pocketed global firms speak it well. Just as the EU employs a small army of translators and interpreters to turn intricate laws or impassioned speeches of Romanian MEPs into the EU’s 24 official languages, so do the likes of Netflix. It now offers dubbing in 34 languages and subtitling in a few more. . . . The economics of European productions are more appealing, too. American audiences are more willing than before to give dubbed or subtitled viewing a chance. This means shows such as “Lupin”, a French crime caper on Netflix, can become global hits. . . . In 2015, about 75% of Netflix’s original content was American; now the figure is half, according to Ampere, a media-analysis company. Netflix has about 100 productions under way in Europe, which is more than big public broadcasters in France or Germany. . . . Not everything works across borders. Comedy sometimes struggles. Whodunits and bloodthirsty maelstroms between arch Romans and uppity tribesmen have a more universal appeal. Some do it better than others. Barbarians aside, German television is not always built for export, says one executive, being polite. A bigger problem is that national broadcasters still dominate. Streaming services, such as Netflix or Disney+, account for about a third of all viewing hours, even in markets where they are well-established. Europe is an ageing continent. The generation of teens staring at phones is outnumbered by their elders who prefer to gawp at the box. In Brussels and national capitals, the prospect of Netflix as a cultural hegemon is seen as a threat. “Cultural sovereignty” is the watchword of European executives worried that the Americans will eat their lunch. To be fair, Netflix content sometimes seems stuck in an uncanny valley somewhere in the mid-Atlantic, with local quirks stripped out. Netflix originals tend to have fewer specific cultural references than shows produced by domestic rivals, according to Enders, a market analyst. The company used to have an imperial model of commissioning, with executives in Los Angeles cooking up ideas French people might like. Now Netflix has offices across Europe. But ultimately the big decisions rest with American executives. This makes European politicians nervous. They should not be. An irony of European integration is that it is often American companies that facilitate it. Google Translate makes European newspapers comprehensible, even if a little clunky, for the continent’s non-polyglots. American social-media companies make it easier for Europeans to talk politics across borders. (That they do not always like to hear what they say about each other is another matter.) Now Netflix and friends pump the same content into homes across a continent, making culture a cross-border endeavour, too. If Europeans are to share a currency, bail each other out in times of financial need and share vaccines in a pandemic, then they need to have something in common—even if it is just bingeing on the same series. Watching fictitious northern and southern Europeans tear each other apart 2,000 years ago beats doing so in reality.

CAT/2023.2(RC)

Question. 17

Which one of the following research findings would weaken the author’s conclusion in the final paragraph?

Question. 18

The author sees the rise of Netflix in Europe as:

Question. 19

Based only on information provided in the passage, which one of the following hypothetical Netflix shows would be most successful with audiences across the EU?

Question. 20

Based on information provided in the passage, all of the following are true, EXCEPT:

Comprehension

Directions for the questions: The passage below is accompanied by a set of questions. Choose the best answer to each question.

Many human phenomena and characteristics – such as behaviors, beliefs, economies, genes, incomes, life expectancies, and other things – are influenced both by geographic factors and by non-geographic factors. Geographic factors mean physical and biological factors tied to geographic location, including climate, the distributions of wild plant and animal species, soils, and topography. Non-geographic factors include those factors subsumed under the term culture, other factors subsumed under the term history, and decisions by individual people. . . .

[T]he differences between the current economies of North and South Korea . . . cannot be attributed to the modest environmental differences between [them] . . . They are instead due entirely to the different [government] policies . . . At the opposite extreme, the Inuit and other traditional peoples living north of the Arctic Circle developed warm fur clothes but no agriculture, while equatorial lowland peoples around the world never developed warm fur clothes but often did develop agriculture. The explanation is straightforwardly geographic, rather than a cultural or historical quirk unrelated to geography. . . . Aboriginal Australia remained the sole continent occupied only by hunter/gatherers and with no indigenous farming or herding . . . [Here the] explanation is biogeographic: the Australian continent has no domesticable native animal species and few domesticable native plant species. Instead, the crops and domestic animals that now make Australia a food and wool exporter are all non-native (mainly Eurasian) species such as sheep, wheat, and grapes, brought to Australia by overseas colonists.

Today, no scholar would be silly enough to deny that culture, history, and individual choices play a big role in many human phenomena. Scholars don’t react to cultural, historical, and individual-agent explanations by denouncing “cultural determinism,” “historical determinism,” or “individual determinism,” and then thinking no further. But many scholars do react to any explanation invoking some geographic role, by denouncing “geographic determinism” . . .

Several reasons may underlie this widespread but nonsensical view. One reason is that some geographic explanations advanced a century ago were racist, thereby causing all geographic explanations to become tainted by racist associations in the minds of many scholars other than geographers. But many genetic, historical, psychological, and anthropological explanations advanced a century ago were also racist, yet the validity of newer non-racist genetic etc. explanations is widely accepted today.

Another reason for reflex rejection of geographic explanations is that historians have a tradition, in their discipline, of stressing the role of contingency (a favorite word among historians) based on individual decisions and chance. Often that view is warranted . . . But often, too, that view is unwarranted. The development of warm fur clothes among the Inuit living north of the Arctic Circle was not because one influential Inuit leader persuaded other Inuit in 1783 to adopt warm fur clothes, for no good environmental reason.

A third reason is that geographic explanations usually depend on detailed technical facts of geography and other fields of scholarship . . . Most historians and economists don’t acquire that detailed knowledge as part of the professional training.

CAT/2023.1(RC)

Question. 21

All of the following can be inferred from the passage EXCEPT:

Question. 22

All of the following are advanced by the author as reasons why non-geographers disregard geographic influences on human phenomena EXCEPT their:

Question. 23

The author criticises scholars who are not geographers for all of the following reasons EXCEPT:

Question. 24

The examples of the Inuit and Aboriginal Australians are offered in the passage to show:

Comprehension

Directions for the questions: The passage below is accompanied by a set of questions. Choose the best answer to each question.

[Fifty] years after its publication in English [in 1972], and just a year since [Marshall] Sahlins himself died—we may ask: why did [his essay] “Original Affluent Society” have such an impact, and how has it fared since? . . . Sahlins’s principal argument was simple but counterintuitive: before being driven into marginal environments by colonial powers, hunter-gatherers, or foragers, were not engaged in a desperate struggle for meager survival. Quite the contrary, they satisfied their needs with far less work than people in agricultural and industrial societies, leaving them more time to use as they wished. Hunters, he quipped, keep bankers’ hours. Refusing to maximize, many were “more concerned with games of chance than with chances of game.” . . . The so-called Neolithic Revolution, rather than improving life, imposed a harsher work regime and set in motion the long history of growing inequality . . .

Moreover, foragers had other options. The contemporary Hadza of Tanzania, who had long been surrounded by farmers, knew they had alternatives and rejected them. To Sahlins, this showed that foragers are not simply examples of human diversity or victimhood but something more profound: they demonstrated that societies make real choices. Culture, a way of living oriented around a distinctive set of values, manifests a fundamental principle of collective self-determination. . . .

But the point [of the essay] is not so much the empirical validity of the data—the real interest for most readers, after all, is not in foragers either today or in the Paleolithic—but rather its conceptual challenge to contemporary economic life and bourgeois individualism. The empirical served a philosophical and political project, a thought experiment and stimulus to the imagination of possibilities.

With its title’s nod toward The Affluent Society (1958), economist John Kenneth Galbraith’s famously skeptical portrait of America’s postwar prosperity and inequality, and dripping with New Left contempt for consumerism, “The Original Affluent Society” brought this critical perspective to bear on the contemporary world. It did so through the classic anthropological move of showing that radical alternatives to the readers’ lives really exist. If the capitalist world seeks wealth through ever greater material production to meet infinitely expansive desires, foraging societies follow “the Zen road to affluence”: not by getting more, but by wanting less. If it seems that foragers have been left behind by “progress,” this is due only to the ethnocentric self-congratulation of the West. Rather than accumulate material goods, these societies are guided by other values: leisure, mobility, and above all, freedom. . . .

Viewed in today’s context, of course, not every aspect of the essay has aged well. While acknowledging the violence of colonialism, racism, and dispossession, it does not thematize them as heavily as we might today. Rebuking evolutionary anthropologists for treating present-day foragers as “left behind” by progress, it too can succumb to the temptation to use them as proxies for the Paleolithic. Yet these characteristics should not distract us from appreciating Sahlins’s effort to show that if we want to conjure new possibilities, we need to learn about actually inhabitable worlds.

CAT/2023.1(RC)

Question. 25

The author of the passage mentions Galbraith’s “The Affluent Society” to:

Question. 26

The author mentions Tanzania’s Hadza community to illustrate:

Question. 27

We can infer that Sahlins’s main goal in writing his essay was to:

Question. 28

The author of the passage criticises Sahlins’s essay for its:

Comprehension

Directions for the questions: The passage below is accompanied by a set of questions. Choose the best answer to each question.

For early postcolonial literature, the world of the novel was often the nation. Postcolonial novels were usually [concerned with] national questions. Sometimes the whole story of the novel was taken as an allegory of the nation, whether India or Tanzania. This was important for supporting anti-colonial nationalism, but could also be limiting – land-focused and inward-looking.

My new book “Writing Ocean Worlds” explores another kind of world of the novel: not the village or nation, but the Indian Ocean world. The book describes a set of novels in which the Indian Ocean is at the centre of the story. It focuses on the novelists Amitav Ghosh, Abdulrazak Gurnah, Lindsey Collen and Joseph Conrad [who have] centred the Indian Ocean world in the majority of their novels. . . . Their work reveals a world that is outward-looking – full of movement, border-crossing and south-south interconnection. They are all very different – from colonially inclined (Conrad) to radically anti-capitalist (Collen), but together draw on and shape a wider sense of Indian Ocean space through themes, images, metaphors and language. This has the effect of remapping the world in the reader’s mind, as centred in the interconnected global south. . . .

The Indian Ocean world is a term used to describe the very long-lasting connections among the coasts of East Africa, the Arab coasts, and South and East Asia. These connections were made possible by the geography of the Indian Ocean. For much of history, travel by sea was much easier than by land, which meant that port cities very far apart were often more easily connected to each other than to much closer inland cities. Historical and archaeological evidence suggests that what we now call globalisation first appeared in the Indian Ocean. This is the interconnected oceanic world referenced and produced by the novels in my book. . . .

For their part Ghosh, Gurnah, Collen and even Conrad reference a different set of histories and geographies than the ones most commonly found in fiction in English. Those [commonly found ones] are mostly centred in Europe or the US, assume a background of Christianity and whiteness, and mention places like Paris and New York. The novels in [my] book highlight instead a largely Islamic space, feature characters of colour and centralise the ports of Malindi, Mombasa, Aden, Java and Bombay. . . . It is a densely imagined, richly sensory image of a southern cosmopolitan culture which provides for an enlarged sense of place in the world.

This remapping is particularly powerful for the representation of Africa. In the fiction, sailors and travellers are not all European. . . . African, as well as Indian and Arab characters, are traders, nakhodas (dhow ship captains), runaways, villains, missionaries and activists. This does not mean that Indian Ocean Africa is romanticised. Migration is often a matter of force; travel is portrayed as abandonment rather than adventure, freedoms are kept from women and slavery is rife. What it does mean is that the African part of the Indian Ocean world plays an active role in its long, rich history and therefore in that of the wider world.

CAT/2023.1(RC)

Question. 29

All of the following statements, if true, would weaken the passage’s claim about the relationship between mainstream English-language fiction and Indian Ocean novels EXCEPT:

Question. 30

On the basis of the nature of the relationship between the items in each pair below, choose the odd pair out:

Question. 31

Which one of the following statements is NOT true about migration in the Indian Ocean world?

Question. 32

All of the following claims contribute to the “remapping” discussed by the passage, EXCEPT:

Comprehension

Directions for the questions: The passage below is accompanied by a set of questions. Choose the best answer to each question.

RESIDENTS of Lozère, a hilly department in southern France, recite complaints familiar to many rural corners of Europe. In remote hamlets and villages, with names such as Le Bacon and Le Bacon Vieux, mayors grumble about a lack of local schools, jobs, or phone and internet connections. Farmers of grazing animals add another concern: the return of wolves. Eradicated from France last century, the predators are gradually creeping back to more forests and hillsides. “The wolf must be taken in hand,” said an aspiring parliamentarian, Francis Palombi, when pressed by voters in an election campaign early this summer. Tourists enjoy visiting a wolf park in Lozère, but farmers fret over their livestock and their livelihoods. . . .

As early as the ninth century, the royal office of the Luparii—wolf-catchers—was created in France to tackle the predators. Those official hunters (and others) completed their job in the 1930s, when the last wolf disappeared from the mainland. Active hunting and improved technology such as rifles in the 19th century, plus the use of poison such as strychnine later on, caused the population collapse. But in the early 1990s the animals reappeared. They crossed the Alps from Italy, upsetting sheep farmers on the French side of the border. Wolves have since spread to areas such as Lozère, delighting environmentalists, who see the predators’ presence as a sign of wider ecological health. Farmers, who say the wolves cause the deaths of thousands of sheep and other grazing animals, are less cheerful. They grumble that green activists and politically correct urban types have allowed the return of an old enemy.

Various factors explain the changes of the past few decades. Rural depopulation is part of the story. In Lozère, for example, farming and a once-flourishing mining industry supported a population of over 140,000 residents in the mid-19th century. Today the department has fewer than 80,000 people, many in its towns. As humans withdraw, forests are expanding. In France, between 1990 and 2015, forest cover increased by an average of 102,000 hectares each year, as more fields were given over to trees. Now, nearly one-third of mainland France is covered by woodland of some sort. The decline of hunting as a sport also means more forests fall quiet. In the mid-to-late 20th century over 2m hunters regularly spent winter weekends tramping in woodland, seeking boars, birds and other prey. Today the Fédération Nationale des Chasseurs, the national body, claims 1.1m people hold hunting licences, though the number of active hunters is probably lower. The mostly protected status of the wolf in Europe—hunting them is now forbidden, other than when occasional culls are sanctioned by the state—plus the efforts of NGOs to track and count the animals, also contribute to the recovery of wolf populations.

As the lupine population of Europe spreads westwards, with occasional reports of wolves seen closer to urban areas, expect to hear of more clashes between farmers and those who celebrate the predators’ return. Farmers’ losses are real, but are not the only economic story. Tourist venues, such as parks where wolves are kept and the animals’ spread is discussed, also generate income and jobs in rural areas.

CAT/2023.1(RC)

Question. 33

The inhabitants of Lozère have to grapple with all of the following problems, EXCEPT:

Question. 34

Which one of the following has NOT contributed to the growing wolf population in Lozère?

Question. 35

Which one of the following statements, if true, would weaken the author’s claims?

Question. 36

The author presents a possible economic solution to an existing issue facing Lozère that takes into account the divergent and competing interests of:

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

As software improves, the people using it become less likely to sharpen their own know-how. Applications that offer lots of prompts and tips are often to blame; simpler, less solicitous programs push people harder to think, act and learn. Ten years ago, information scientists at Utrecht University in the Netherlands had a group of people carry out complicated analytical and planning tasks using either rudimentary software that provided no assistance or sophisticated software that offered a great deal of aid. The researchers found that the people using the simple software developed better strategies, made fewer mistakes and developed a deeper aptitude for the work. The people using the more advanced software, meanwhile, would often “aimlessly click around” when confronted with a tricky problem. The supposedly helpful software actually short-circuited their thinking and learning. [According to] philosopher Hubert Dreyfus . . . . our skills get sharper only through practice, when we use them regularly to overcome different sorts of difficult challenges. The goal of modern software, by contrast, is to ease our way through such challenges. Arduous, painstaking work is exactly what programmers are most eager to automate—after all, that is where the immediate efficiency gains tend to lie. In other words, a fundamental tension ripples between the interests of the people doing the automation and the interests of the people doing the work. Nevertheless, automation’s scope continues to widen. With the rise of electronic health records, physicians increasingly rely on software templates to guide them through patient exams. The programs incorporate valuable checklists and alerts, but they also make medicine more routinized and formulaic—and distance doctors from their patients. . . . Harvard Medical School professor Beth Lown, in a 2012 journal article . . . warned that when doctors become “screen-driven,” following a computer’s prompts rather than “the patient’s narrative thread,” their thinking can become constricted. In the worst cases, they may miss important diagnostic signals. . . . In a recent paper published in the journal Diagnosis, three medical researchers . . . examined the misdiagnosis of Thomas Eric Duncan, the first person to die of Ebola in the U.S., at Texas Health Presbyterian Hospital Dallas. They argue that the digital templates used by the hospital’s clinicians to record patient information probably helped to induce a kind of tunnel vision. “These highly constrained tools,” the researchers write, “are optimized for data capture but at the expense of sacrificing their utility for appropriate triage and diagnosis, leading users to miss the forest for the trees.” Medical software, they write, is no “replacement for basic history-taking, examination skills, and critical thinking.” . . . There is an alternative. In “human-centered automation,” the talents of people take precedence. . . . In this model, software plays an essential but secondary role. It takes over routine functions that a human operator has already mastered, issues alerts when unexpected situations arise, provides fresh information that expands the operator’s perspective and counters the biases that often distort human thinking. The technology becomes the expert’s partner, not the expert’s replacement.

CAT/2022.3(RC)

Question. 37

In the context of the passage, all of the following can be considered examples of human-centered automation EXCEPT:

Question. 38

It can be inferred that in the Utrecht University experiment, one group of people was “aimlessly clicking around” because:

Question. 39

In the Ebola misdiagnosis case, we can infer that doctors probably missed the forest for the trees because:

Question. 40

From the passage, we can infer that the author is apprehensive about the use of sophisticated automation for all of the following reasons EXCEPT that:

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

Sociologists working in the Chicago School tradition have focused on how rapid or dramatic social change causes increases in crime. Just as Durkheim, Marx, Toennies, and other European sociologists thought that the rapid changes produced by industrialization and urbanization produced crime and disorder, so too did the Chicago School theorists. The location of the University of Chicago provided an excellent opportunity for Park, Burgess, and McKenzie to study the social ecology of the city. Shaw and McKay found . . . that areas of the city characterized by high levels of social disorganization had higher rates of crime and delinquency. In the 1920s and 1930s Chicago, like many American cities, experienced considerable immigration. Rapid population growth is a disorganizing influence, but growth resulting from in-migration of very different people is particularly disruptive. Chicago’s in-migrants were both native-born whites and blacks from rural areas and small towns, and foreign immigrants. The heavy industry of cities like Chicago, Detroit, and Pittsburgh drew those seeking opportunities and new lives. Farmers and villagers from America’s hinterland, like their European cousins of whom Durkheim wrote, moved in large numbers into cities. At the start of the twentieth century, Americans were predominately a rural population, but by the century’s mid-point most lived in urban areas. The social lives of these migrants, as well as those already living in the cities they moved to, were disrupted by the differences between urban and rural life. According to social disorganization theory, until the social ecology of the ‘‘new place’’ can adapt, this rapid change is a criminogenic influence. But most rural migrants, and even many of the foreign immigrants to the city, looked like and eventually spoke the same language as the natives of the cities into which they moved. These similarities allowed for more rapid social integration for these migrants than was the case for African Americans and most foreign immigrants. In these same decades America experienced what has been called ‘‘the great migration’’: the massive movement of African Americans out of the rural South and into northern (and some southern) cities. The scale of this migration is one of the most dramatic in human history. These migrants, unlike their white counterparts, were not integrated into the cities they now called home. In fact, most American cities at the end of the twentieth century were characterized by high levels of racial residential segregation . . . Failure to integrate these migrants, coupled with other forces of social disorganization such as crowding, poverty, and illness, caused crime rates to climb in the cities, particularly in the segregated wards and neighborhoods where the migrants were forced to live. Foreign immigrants during this period did not look as dramatically different from the rest of the population as blacks did, but the migrants from eastern and southern Europe who came to American cities did not speak English, and were frequently Catholic, while the native born were mostly Protestant. The combination of rapid population growth with the diversity of those moving into the cities created what the Chicago School sociologists called social disorganization.

CAT/2022.3(RC)

Question. 41

Which one of the following sets of words/phrases best encapsulates the issues discussed in the passage?

Question. 42

The author notes that, “At the start of the twentieth century, Americans were predominately a rural population, but by the century’s mid-point most lived in urban areas.” Which one of the following statements, if true, does not contradict this statement?

Question. 43

Which one of the following is not a valid inference from the passage?

Question. 44

A fundamental conclusion by the author is that:

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

Interpretations of the Indian past . . . were inevitably influenced by colonial concerns and interests, and also by prevalent European ideas about history, civilization and the Orient. Orientalist scholars studied the languages and the texts with selected Indian scholars, but made little attempt to understand the world-view of those who were teaching them. The readings therefore are something of a disjuncture from the traditional ways of looking at the Indian past. . . . Orientalism [which we can understand broadly as Western perceptions of the Orient] fuelled the fantasy and the freedom sought by European Romanticism, particularly in its opposition to the more disciplined Neo-Classicism. The cultures of Asia were seen as bringing a new Romantic paradigm. Another Renaissance was anticipated through an acquaintance with the Orient, and this, it was thought, would be different from the earlier Greek Renaissance. It was believed that this Oriental Renaissance would liberate European thought and literature from the increasing focus on discipline and rationality that had followed from the earlier Enlightenment. . . . [The Romantic English poets, Wordsworth and Coleridge,] were apprehensive of the changes introduced by industrialization and turned to nature and to fantasies of the Orient. However, this enthusiasm gradually changed, to conform with the emphasis later in the nineteenth century on the innate superiority of European civilization. Oriental civilizations were now seen as having once been great but currently in decline. The various phases of Orientalism tended to mould European understanding of the Indian past into a particular pattern. . . . There was an attempt to formulate Indian culture as uniform, such formulations being derived from texts that were given priority. The so-called ‘discovery’ of India was largely through selected literature in Sanskrit. This interpretation tended to emphasize non-historical aspects of Indian culture, for example the idea of an unchanging continuity of society and religion over 3,000 years; and it was believed that the Indian pattern of life was so concerned with metaphysics and the subtleties of religious belief that little attention was given to the more tangible aspects. German Romanticism endorsed this image of India, and it became the mystic land for many Europeans, where even the most ordinary actions were imbued with a complex symbolism. This was the genesis of the idea of the spiritual east, and also, incidentally, the refuge of European intellectuals seeking to distance themselves from the changing patterns of their own societies. A dichotomy in values was maintained, Indian values being described as ‘spiritual’ and European values as ‘materialistic’, with little attempt to juxtapose these values with the reality of Indian society. This theme has been even more firmly endorsed by a section of Indian opinion during the last hundred years. It was a consolation to the Indian intelligentsia for its perceived inability to counter the technical superiority of the west, a superiority viewed as having enabled Europe to colonize Asia and other parts of the world. At the height of anti-colonial nationalism it acted as a salve for having been made a colony of Britain.

CAT/2022.3(RC)

Question. 45

It can be inferred from the passage that to gain a more accurate view of a nation’s history and culture, scholars should do all of the following EXCEPT:

Question. 46

Which one of the following styles of research is most similar to the Orientalist scholars’ method of understanding Indian history and culture?

Question. 47

It can be inferred from the passage that the author is not likely to support the view that:

Question. 48

In the context of the passage, all of the following statements are true EXCEPT:

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

When we teach engineering problems now, we ask students to come to a single “best” solution defined by technical ideals like low cost, speed to build, and ability to scale. This way of teaching primes students to believe that their decision-making is purely objective, as it is grounded in math and science. This is known as technical-social dualism, the idea that the technical and social dimensions of engineering problems are readily separable and remain distinct throughout the problem-definition and solution process. Nontechnical parameters such as access to a technology, cultural relevancy or potential harms are deemed political and invalid in this way of learning. But those technical ideals are at their core social and political choices determined by a dominant culture focused on economic growth for the most privileged segments of society. By choosing to downplay public welfare as a critical parameter for engineering design, we risk creating a culture of disengagement from societal concerns amongst engineers that is antithetical to the ethical code of engineering. In my field of medical devices, ignoring social dimensions has real consequences. . . . Most FDA-approved drugs are incorrectly dosed for people assigned female at birth, leading to unexpected adverse reactions. This is because they have been inadequately represented in clinical trials. Beyond physical failings, subjective beliefs treated as facts by those in decision-making roles can encode social inequities. For example, spirometers, routinely used devices that measure lung capacity, still have correction factors that automatically assume smaller lung capacity in Black and Asian individuals. These racially based adjustments are derived from research done by eugenicists who thought these racial differences were biologically determined and who considered nonwhite people as inferior. These machines ignore the influence of social and environmental factors on lung capacity. Many technologies for systemically marginalized people have not been built because they were not deemed important, such as better early diagnostics and treatment for diseases like endometriosis, a disease that afflicts 10 percent of people with uteruses. And we hardly question whether devices are built sustainably, which has led to a crisis of medical waste and health care accounting for 10 percent of U.S. greenhouse gas emissions. Social justice must be made core to the way engineers are trained. Some universities are working on this. . . . Engineers taught this way will be prepared to think critically about what problems we choose to solve, how we do so responsibly and how we build teams that challenge our ways of thinking. Individual engineering professors are also working to embed societal needs in their pedagogy. Darshan Karwat at the University of Arizona developed activist engineering to challenge engineers to acknowledge their full moral and social responsibility through practical self-reflection. Khalid Kadir at the University of California, Berkeley, created the popular course Engineering, Environment, and Society that teaches engineers how to engage in place-based knowledge, an understanding of the people, context and history, to design better technical approaches in collaboration with communities. When we design and build with equity and justice in mind, we craft better solutions that respond to the complexities of entrenched systemic problems.

CAT/2022.2(RC)

Question. 49

All of the following are examples of the negative outcomes of focusing on technical ideals in the medical sphere EXCEPT the:

Question. 50

In this passage, the author is making the claim that:

Question. 51

The author gives all of the following reasons for why marginalised people are systematically discriminated against in technology-related interventions EXCEPT:

Question. 52

We can infer that the author would approve of a more evolved engineering pedagogy that includes all of the following EXCEPT:

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

We begin with the emergence of the philosophy of the social sciences as an arena of thought and as a set of social institutions. The two characterisations overlap but are not congruent. Academic disciplines are social institutions. . . . My view is that institutions are all those social entities that organise action: they link acting individuals into social structures. There are various kinds of institutions. Hegelians and Marxists emphasise universal institutions such as the family, rituals, governance, economy and the military. These are mostly institutions that just grew. Perhaps in some imaginary beginning of time they spontaneously appeared. In their present incarnations, however, they are very much the product of conscious attempts to mould and plan them. We have family law, established and disestablished churches, constitutions and laws, including those governing the economy and the military. Institutions deriving from statute, like joint-stock companies, are formal by contrast with informal ones such as friendships. There are some institutions that come in both informal and formal variants, as well as in mixed ones. Consider the fact that the stock exchange and the black market are both market institutions, one formal one not. Consider further that there are many features of the work of the stock exchange that rely on informal, noncodifiable agreements, not least the language used for communication. To be precise, mixtures are the norm . . . From constitutions at the top to by-laws near the bottom we are always adding to, or tinkering with, earlier institutions; the grown and the designed are intertwined. It is usual in social thought to treat culture and tradition as different from, although alongside, institutions. The view taken here is different. Culture and tradition are sub-sets of institutions analytically isolated for explanatory or expository purposes. Some social scientists have taken all institutions, even purely local ones, to be entities that satisfy basic human needs – under local conditions . . . Others differed and declared any structure of reciprocal roles and norms an institution. Most of these differences are differences of emphasis rather than disagreements. Let us straddle all these versions and present institutions very generally . . . as structures that serve to coordinate the actions of individuals. . . . Institutions themselves then have no aims or purpose other than those given to them by actors or used by actors to explain them . . . Language is the formative institution for social life and for science . . . Both formal and informal language is involved, naturally grown or designed. (Language is all of these to varying degrees.) Languages are paradigms of institutions or, from another perspective, nested sets of institutions. Syntax, semantics, lexicon and alphabet/character-set are all institutions within the larger institutional framework of a written language. Natural languages are typical examples of what Ferguson called ‘the result of human action, but not the execution of any human design’[;] reformed natural languages and artificial languages introduce design into their modifications or refinements of natural language. Above all, languages are paradigms of institutional tools that function to coordinate.

CAT/2022.2(RC)

Question. 53

Which of the following statements best represents the essence of the passage?

Question. 54

“Consider the fact that the stock exchange and the black market are both market institutions, one formal one not.” Which one of the following statements best explains this quote, in the context of the passage?

Question. 55

All of the following inferences from the passage are false, EXCEPT:

Question. 56

In the first paragraph of the passage, what are the two “characterisations” that are seen as overlapping but not congruent?

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

Critical theory of technology is a political theory of modernity with a normative dimension. It belongs to a tradition extending from Marx to Foucault and Habermas according to which advances in the formal claims of human rights take center stage while in the background centralization of ever more powerful public institutions and private organizations imposes an authoritarian social order. Marx attributed this trajectory to the capitalist rationalization of production. Today it marks many institutions besides the factory and every modern political system, including so-called socialist systems. This trajectory arose from the problems of command over a disempowered and deskilled labor force; but everywhere [that] masses are organized – whether it be Foucault’s prisons or Habermas’s public sphere – the same pattern prevails. Technological design and development is shaped by this pattern as the material base of a distinctive social order. Marcuse would later point to a “project” as the basis of what he called rather confusingly “technological rationality.” Releasing technology from this project is a democratic political task. In accordance with this general line of thought, critical theory of technology regards technologies as an environment rather than as a collection of tools. We live today with and even within technologies that determine our way of life. Along with the constant pressures to build centers of power, many other social values and meanings are inscribed in technological design. A hermeneutics of technology must make explicit the meanings implicit in the devices we use and the rituals they script. Social histories of technologies such as the bicycle, artificial lighting or firearms have made important contributions to this type of analysis. Critical theory of technology attempts to build a methodological approach on the lessons of these histories. As an environment, technologies shape their inhabitants. In this respect, they are comparable to laws and customs. Each of these institutions can be said to represent those who live under their sway through privileging certain dimensions of their human nature. Laws of property represent the interest in ownership and control. Customs such as parental authority represent the interest of childhood in safety and growth. Similarly, the automobile represents its users in so far as they are interested in mobility. Interests such as these constitute the version of human nature sanctioned by society. This notion of representation does not imply an eternal human nature. The concept of nature as non-identity in the Frankfurt School suggests an alternative. On these terms, nature is what lies at the limit of history, at the point at which society loses the capacity to imprint its meanings on things and control them effectively. The reference here is, of course, not to the nature of natural science, but to the lived nature in which we find ourselves and which we are. This nature reveals itself as that which cannot be totally encompassed by the machinery of society. For the Frankfurt School, human nature, in all its transcending force, emerges out of a historical context as that context is [depicted] in illicit joys, struggles and pathologies. We can perhaps admit a less romantic . . . conception in which those dimensions of human nature recognized by society are also granted theoretical legitimacy.

CAT/2022.1(RC)

Question. 57

Which one of the following statements best reflects the main argument of the fourth paragraph of the passage?

Question. 58

Which one of the following statements could be inferred as supporting the arguments of the passage?

Question. 59

Which one of the following statements contradicts the arguments of the passage?

Question. 60

All of the following claims can be inferred from the passage, EXCEPT:

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

The Chinese have two different concepts of a copy. Fangzhipin . . . are imitations where the difference from the original is obvious. These are small models or copies that can be purchased in a museum shop, for example. The second concept for a copy is fuzhipin . . . They are exact reproductions of the original, which, for the Chinese, are of equal value to the original. It has absolutely no negative connotations. The discrepancy with regard to the understanding of what a copy is has often led to misunderstandings and arguments between China and Western museums. The Chinese often send copies abroad instead of originals, in the firm belief that they are not essentially different from the originals. The rejection that then comes from the Western museums is perceived by the Chinese as an insult. . . . The Far Eastern notion of identity is also very confusing to the Western observer. The Ise Grand Shrine [in Japan] is 1,300 years old for the millions of Japanese people who go there on pilgrimage every year. But in reality this temple complex is completely rebuilt from scratch every 20 years. . . . The cathedral of Freiburg Minster in southwest Germany is covered in scaffolding almost all year round. The sandstone from which it is built is a very soft, porous material that does not withstand natural erosion by rain and wind. After a while, it crumbles. As a result, the cathedral is continually being examined for damage, and eroded stones are replaced. And in the cathedral’s dedicated workshop, copies of the damaged sandstone figures are constantly being produced. Of course, attempts are made to preserve the stones from the Middle Ages for as long as possible. But at some point they, too, are removed and replaced with new stones. Fundamentally, this is the same operation as with the Japanese shrine, except in this case the production of a replica takes place very slowly and over long periods of time. . . . In the field of art as well, the idea of an unassailable original developed historically in the Western world. Back in the 17th century [in the West], excavated artworks from antiquity were treated quite differently from today. They were not restored in a way that was faithful to the original. Instead, there was massive intervention in these works, changing their appearance. . . . It is probably this intellectual position that explains why Asians have far fewer scruples about cloning than Europeans. The South Korean cloning researcher Hwang Woo-suk, who attracted worldwide attention with his cloning experiments in 2004, is a Buddhist. He found a great deal of support and followers among Buddhists, while Christians called for a ban on human cloning. . . . Hwang legitimised his cloning experiments with his religious affiliation: ‘I am Buddhist, and I have no philosophical problem with cloning. And as you know, the basis of Buddhism is that life is recycled through reincarnation. In some ways, I think, therapeutic cloning restarts the circle of life.’

CAT/2022.1(RC)

Question. 61

Based on the passage, which one of the following copies would a Chinese museum be unlikely to consider as having less value than the original?

Question. 62

Which one of the following scenarios is unlikely to follow from the arguments in the passage?

Question. 63

Which one of the following statements does not correctly express the similarity between the Ise Grand Shrine and the cathedral of Freiburg Minster?

Question. 64

The value that the modern West assigns to “an unassailable original” has resulted in all of the following EXCEPT:

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

Stoicism was founded in 300 BC by the Greek philosopher Zeno and survived into the Roman era until about AD 300. According to the Stoics, emotions consist of two movements. The first movement is the immediate feeling and other reactions (e.g., physiological response) that occur when a stimulus or event occurs. For instance, consider what could have happened if an army general accused Marcus Aurelius of treason in front of other officers. The first movement for Marcus may have been (internal) surprise and anger in response to this insult, accompanied perhaps by some involuntary physiological and expressive responses such as face flushing and a movement of the eyebrows. The second movement is what one does next about the emotion. Second movement behaviors occur after thinking and are under one’s control. Examples of second movements for Marcus might have included a plot to seek revenge, actions signifying deference and appeasement, or perhaps proceeding as he would have proceeded whether or not this event occurred: continuing to lead the Romans in a way that Marcus Aurelius believed best benefited them. In the Stoic view, choosing a reasoned, unemotional response as the second movement is the only appropriate response. The Stoics believed that to live the good life and be a good person, we need to free ourselves of nearly all desires such as too much desire for money, power, or sexual gratification. Prior to second movements, we can consider what is important in life. Money, power, and excessive sexual gratification are not important. Character, rationality, and kindness are important. The Epicureans, first associated with the Greek philosopher Epicurus . . . held a similar view, believing that people should enjoy simple pleasures, such as good conversation, friendship, food, and wine, but not be indulgent in these pursuits and not follow passion for those things that hold no real value like power and money. As Oatley (2004) states, “the Epicureans articulated a view—enjoyment of relationship with friends, of things that are real rather than illusory, simple rather than artificially inflated, possible rather than vanishingly unlikely—that is certainly relevant today” . . . In sum, these ancient Greek and Roman philosophers saw emotions, especially strong ones, as potentially dangerous. They viewed emotions as experiences that needed to be [reined] in and controlled. As Oatley (2004) points out, the Stoic idea bears some similarity to Buddhism. Buddha, living in India in the 6th century BC, argued for cultivating a certain attitude that decreases the probability of (in Stoic terms) destructive second movements. Through meditation and the right attitude, one allows emotions to happen to oneself (it is impossible to prevent this), but one is advised to observe the emotions without necessarily acting on them; one achieves some distance and decides what has value and what does not have value. Additionally, the Stoic idea of developing virtue in oneself, of becoming a good person, which the Stoics believed we could do because we have a touch of the divine, laid the foundation for the three monotheistic religions: Judaism, Christianity, and Islam . . . As with Stoicism, tenets of these religions include controlling our emotions lest we engage in sinful behavior.

CAT/2022.1(RC)

Question. 65

Which one of the following statements would be an accurate inference from the example of Marcus Aurelius?

Question. 66

Which one of the following statements, if false, could be seen as contradicting the facts/arguments in the passage?

Question. 67

On the basis of the passage, which one of the following statements can be regarded as true?

Question. 68

“Through meditation and the right attitude, one allows emotions to happen to oneself (it is impossible to prevent this), but one is advised to observe the emotions without necessarily acting on them; one achieves some distance and decides what has value and what does not have value.” In the context of the passage, which one of the following is not a possible implication of the quoted statement?

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

Cuttlefish are full of personality, as behavioral ecologist Alexandra Schnell found out while researching the cephalopod's potential to display self-control. . . . “Self-control is thought to be the cornerstone of intelligence, as it is an important prerequisite for complex decision-making and planning for the future,” says Schnell . . .

[Schnell's] study used a modified version of the “marshmallow test” . . . During the original marshmallow test, psychologist Walter Mischel presented children between age four and six with one marshmallow. He told them that if they waited 15 minutes and didn’t eat it, he would give them a second marshmallow. A long-term follow-up study showed that the children who waited for the second marshmallow had more success later in life. . . . The cuttlefish version of the experiment looked a lot different. The researchers worked with six cuttlefish under nine months old and presented them with seafood instead of sweets. (Preliminary experiments showed that cuttlefishes’ favorite food is live grass shrimp, while raw prawns are so-so and Asian shore crab is nearly unacceptable.) Since the researchers couldn’t explain to the cuttlefish that they would need to wait for their shrimp, they trained them to recognize certain shapes that indicated when a food item would become available. The symbols were pasted on transparent drawers so that the cuttlefish could see the food that was stored inside. One drawer, labeled with a circle to mean “immediate,” held raw king prawn. Another drawer, labeled with a triangle to mean “delayed,” held live grass shrimp. During a control experiment, square labels meant “never.”

“If their self-control is flexible and I hadn’t just trained them to wait in any context, you would expect the cuttlefish to take the immediate reward [in the control], even if it’s their second preference,” says Schnell . . . and that’s what they did. That showed the researchers that cuttlefish wouldn’t reject the prawns if they were the only food available. In the experimental trials, the cuttlefish didn’t jump on the prawns if the live grass shrimp were labeled with a triangle—many waited for the shrimp drawer to open up. Each time the cuttlefish showed it could wait, the researchers tacked another ten seconds on to the next round of waiting before releasing the shrimp. The longest that a cuttlefish waited was 130 seconds.

Schnell [says] that the cuttlefish usually sat at the bottom of the tank and looked at the two food items while they waited, but sometimes, they would turn away from the king prawn “as if to distract themselves from the temptation of the immediate reward.” In past studies, humans, chimpanzees, parrots and dogs also tried to distract themselves while waiting for a reward.

Not every species can use self-control, but most of the animals that can share another trait in common: long, social lives. Cuttlefish, on the other hand, are solitary creatures that don’t form relationships even with mates or young. . . . “We don’t know if living in a social group is important for complex cognition unless we also show those abilities are lacking in less social species,” says . . . comparative psychologist Jennifer Vonk.

CAT/2021.1(RC)

Question. 69

Which one of the following cannot be inferred from Alexandra Schnell’s experiment?

Question. 70

Which one of the following, if true, would best complement the passage’s findings?

Question. 71

In which one of the following scenarios would the cuttlefish’s behaviour demonstrate self-control?

Question. 72

All of the following constitute a point of difference between the “original” and “modified” versions of the marshmallow test EXCEPT that:

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

The sleights of hand that conflate consumption with virtue are a central theme in A Thirst for Empire, a sweeping and richly detailed history of tea by the historian Erika Rappaport. How did tea evolve from an obscure “China drink” to a universal beverage imbued with civilising properties? The answer, in brief, revolves around this conflation, not only by profit-motivated marketers but by a wide variety of interest groups. While abundant historical records have allowed the study of how tea itself moved from east to west, Rappaport is focused on the movement of the idea of tea to suit particular purposes.

Beginning in the 1700s, the temperance movement advocated for tea as a pleasure that cheered but did not inebriate, and industrialists soon borrowed this moral argument in advancing their case for free trade in tea (and hence more open markets for their textiles). Factory owners joined in, compelled by the cause of a sober workforce, while Christian missionaries discovered that tea “would soothe any colonial encounter”. During the Second World War, tea service was presented as a social and patriotic activity that uplifted soldiers and calmed refugees.

But it was tea’s consumer-directed marketing by importers and retailers – and later by brands – that most closely portends current trade debates. An early version of the “farm to table” movement was sparked by anti-Chinese sentiment and concerns over trade deficits, as well as by the reality and threat of adulterated tea containing dirt and hedge clippings. Lipton was soon advertising “from the Garden to Tea Cup” supply chains originating in British India and supervised by “educated Englishmen”. While tea marketing always presented direct consumer benefits (health, energy, relaxation), tea drinkers were also assured that they were participating in a larger noble project that advanced the causes of family, nation and civilisation. . . .

Rappaport’s treatment of her subject is refreshingly apolitical. Indeed, it is a virtue that readers will be unable to guess her political orientation: both the miracle of markets and capitalism’s dark underbelly are evident in tea’s complex story, as are the complicated effects of British colonialism. . . . Commodity histories are now themselves commodities: recent works investigate cotton, salt, cod, sugar, chocolate, paper and milk. And morality marketing is now a commodity as well, applied to food, “fair trade” apparel and eco-tourism. Yet tea is, Rappaport makes clear, a world apart – an astonishing success story in which tea marketers not only succeeded in conveying a sense of moral elevation to the consumer but also arguably did advance the cause of civilisation and community.

I have been offered tea at a British garden party, a Bedouin campfire, a Turkish carpet shop and a Japanese chashitsu, to name a few settings. In each case the offering was more an idea – friendship, community, respect – than a drink, and in each case the idea then created a reality. It is not a stretch to say that tea marketers have advanced the particularly noble cause of human dialogue and friendship.

CAT/2021.1(RC)

Question. 73

This book review argues that, according to Rappaport, tea is unlike other “morality” products because it:

Question. 74

Today, “conflat[ing] consumption with virtue” can be seen in the marketing of:

Question. 75

The author of this book review is LEAST likely to support the view that:

Question. 76

According to this book review, A Thirst for Empire says that, in addition to “profit-motivated marketers”, tea drinking was promoted in Britain by all of the following EXCEPT:

Comprehension

The passage below is accompanied by a set of questions. Choose the best answer to each question.

Today we can hardly conceive of ourselves without an unconscious. Yet between 1700 and 1900, this notion developed as a genuinely original thought. The “unconscious” burst the shell of conventional language, coined as it had been to embody the fleeting ideas and the shifting conceptions of several generations until, finally, it became fixed and defined in specialized terms within the realm of medical psychology and Freudian psychoanalysis.

The vocabulary concerning the soul and the mind increased enormously in the course of the nineteenth century. The enrichments of literary and intellectual language led to an altered understanding of the meanings that underlie time-honored expressions and traditional catchwords. At the same time, once coined, powerful new ideas attracted to themselves a whole host of seemingly unrelated issues, practices, and experiences, creating a peculiar network of preoccupations that as a group had not existed before. The drawn-out attempt to approach and define the unconscious brought together the spiritualist and the psychical researcher of borderline phenomena (such as apparitions, spectral illusions, haunted houses, mediums, trance, automatic writing); the psychiatrist or alienist probing the nature of mental disease, of abnormal ideation, hallucination, delirium, melancholia, mania; the surgeon performing operations with the aid of hypnotism; the magnetizer claiming to correct the disequilibrium in the universal flow of magnetic fluids but who soon came to be regarded as a clever manipulator of the imagination; the physiologist and the physician who puzzled over sleep, dreams, sleepwalking, anesthesia, the influence of the mind on the body in health and disease; the neurologist concerned with the functions of the brain and the physiological basis of mental life; the philosopher interested in the will, the emotions, consciousness, knowledge, imagination and the creative genius; and, last but not least, the psychologist.

Significantly, most if not all of these practices (for example, hypnotism in surgery or psychological magnetism) originated in the waning years of the eighteenth century and during the early decades of the nineteenth century, as did some of the disciplines (such as psychology and psychical research). The majority of topics too were either new or assumed hitherto unknown colors. Thus, before 1790, few if any spoke, in medical terms, of the affinity between creative genius and the hallucinations of the insane . . .

Striving vaguely and independently to give expression to a latent conception, various lines of thought can be brought together by some novel term. The new concept then serves as a kind of resting place or stocktaking in the development of ideas, giving satisfaction and a stimulus for further discussion or speculation. Thus, the massive introduction of the term unconscious by Hartmann in 1869 appeared to focalize many stray thoughts, affording a temporary feeling that a crucial step had been taken forward, a comprehensive knowledge gained, a knowledge that required only further elaboration, explication, and unfolding in order to bring in a bounty of higher understanding. Ultimately, Hartmann’s attempt at defining the unconscious proved fruitless because he extended its reach into every realm of organic and inorganic, spiritual, intellectual, and instinctive existence, severely diluting the precision and compromising the impact of the concept.

CAT/2021.3(RC)

Question. 77

Which one of the following statements best describes what the passage is about?

Question. 78

“The enrichments of literary and intellectual language led to an altered understanding of the meanings that underlie time-honored expressions and traditional catchwords.” Which one of the following interpretations of this sentence would be closest in meaning to the original?

Question. 79

All of the following statements may be considered valid inferences from the passage, EXCEPT:

Question. 80

Which one of the following sets of words is closest to mapping the main arguments of the passage?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

Few realise that the government of China, governing an empire of some 60 million people during the Tang dynasty (618–907), implemented a complex financial system that recognised grain, coins and textiles as money. . . . Coins did have certain advantages: they were durable, recognisable and provided a convenient medium of exchange, especially for smaller transactions. However, there were also disadvantages. A continuing shortage of copper meant that government mints could not produce enough coins for the entire empire, to the extent that for most of the dynasty’s history, coins constituted only a tenth of the money supply. One of the main objections to calls for taxes to be paid in coin was that peasant producers who could weave cloth or grow grain – the other two major currencies of the Tang – would not be able to produce coins, and therefore would not be able to pay their taxes. . . . 

As coins had advantages and disadvantages, so too did textiles. If in circulation for a long period of time, they could show signs of wear and tear. Stained, faded and torn bolts of textiles had less value than a brand new bolt. Furthermore, a full bolt had a particular value. If consumers cut textiles into smaller pieces to buy or sell something worth less than a full bolt, that, too, greatly lessened the value of the textiles. Unlike coins, textiles could not be used for small transactions; as [an official] noted, textiles could not “be exchanged by the foot and the inch” . . . 

But textiles had some advantages over coins. For a start, textile production was widespread and there were fewer problems with the supply of textiles. For large transactions, textiles weighed less than their equivalent in coins since a string of coins . . .  could weigh as much as 4 kg. Furthermore, the dimensions of a bolt of silk held remarkably steady from the third to the tenth century: 56 cm wide and 12 m long . . . The values of different textiles were also more stable than the fluctuating values of coins. . . .  

The government also required the use of textiles for large transactions. Coins, on the other hand, were better suited for smaller transactions, and possibly, given the costs of transporting coins, for a more local usage. Grain, because it rotted easily, was not used nearly as much as coins and textiles, but taxpayers were required to pay grain to the government as a share of their annual tax obligations, and official salaries were expressed in weights of grain. . . . 

In actuality, our own currency system today has some similarities even as it is changing in front of our eyes. . . . We have cash – coins for small transactions like paying for parking at a meter, and banknotes for other items; cheques and debit/credit cards for other, often larger, types of payments. At the same time, we are shifting to electronic banking and making payments online. Some young people never use cash [and] do not know how to write a cheque . . . 

CAT/2020.1(RC)

Question. 81

In the context of the passage, which one of the following can be inferred with regard to the use of currency during the Tang era?

Question. 82

According to the passage, the modern currency system shares all the following features with that of the Tang, EXCEPT that:

Question. 83

When discussing textiles as currency in the Tang period, the author uses the words “steady” and “stable” to indicate all of the following EXCEPT:

Question. 84

During the Tang period, which one of the following would not be an economically sound decision for a small purchase in the local market that is worth one-eighth of a bolt of cloth?

Comprehension

Directions for questions: Read the passage carefully and answer the given questions accordingly.

174 incidents of piracy were reported to the International Maritime Bureau last year, with Somali pirates responsible for only three. The rest ranged from the discreet theft of coils of rope in the Yellow Sea to the notoriously ferocious Nigerian gunmen attacking and hijacking oil tankers in the Gulf of Guinea, as well as armed robbery off Singapore and the Venezuelan coast and kidnapping in the Sundarbans in the Bay of Bengal. For [Dr. Peter] Lehr, an expert on modern-day piracy, the phenomenon’s history should be a source of instruction rather than entertainment, piracy past offering lessons for piracy present. . . . 

But . . . where does piracy begin or end? According to St Augustine, a corsair captain once told Alexander the Great that in the forceful acquisition of power and wealth at sea, the difference between an emperor and a pirate was simply one of scale. By this logic, European empire-builders were the most successful pirates of all time. A more eclectic history might have included the conquistadors, Vasco da Gama and the East India Company. But Lehr sticks to the disorganised small fry, making comparisons with the renegades of today possible. 

The main motive for piracy has always been a combination of need and greed. Why toil away as a starving peasant in the 16th century when a successful pirate made up to £4,000 on each raid? Anyone could turn to freebooting if the rewards were worth the risk . . . .

Increased globalisation has done more to encourage piracy than suppress it. European colonialism weakened delicate balances of power, leading to an influx of opportunists on the high seas. A rise in global shipping has meant rich pickings for freebooters. Lehr writes: “It quickly becomes clear that in those parts of the world that have not profited from globalisation and modernisation, and where abject poverty and the daily struggle for survival are still a reality, the root causes of piracy are still the same as they were a couple of hundred years ago.” . . . 

Modern pirate prevention has failed. After the French yacht Le Ponant was ransomed for $2 million in 2008, opportunists from all over Somalia flocked to the coast for a piece of the action. . . . A consistent rule, even today, is there are never enough warships to patrol pirate-infested waters. Such ships are costly and only solve the problem temporarily; Somali piracy is bound to return as soon as the warships are withdrawn. Robot shipping, eliminating hostages, has been proposed as a possible solution; but as Lehr points out, this will only make pirates switch their targets to smaller carriers unable to afford the technology.

His advice isn’t new. Proposals to end illegal fishing are often advanced but they are difficult to enforce. Investment in local welfare put a halt to Malaysian piracy in the 1970s, but was dependent on money somehow filtering through a corrupt bureaucracy to the poor on the periphery. Diplomatic initiatives against piracy are plagued by mutual distrust: the Russians execute pirates, while the EU and US are reluctant to capture them for fear they’ll claim asylum. 

CAT/2020.2(RC)

Question. 85

“A more eclectic history might have included the conquistadors, Vasco da Gama and the East India Company. But Lehr sticks to the disorganised small fry . . .” From this statement we can infer that the author believes that:

Question. 86

We can deduce that the author believes that piracy can best be controlled in the long run:

Question. 87

“Why toil away as a starving peasant in the 16th century when a successful pirate made up to £4,000 on each raid?” In this sentence, the author’s tone can best be described as being:

Question. 88

The author ascribes the rise in piracy today to all of the following factors EXCEPT:

Comprehension

Directions for questions: Read the passage carefully and answer the given questions accordingly.

Mode of transportation affects the travel experience and thus can produce new types of travel writing and perhaps even new “identities.” Modes of transportation determine the types and duration of social encounters; affect the organization and passage of space and time; . . . and also affect perception and knowledge—how and what the traveler comes to know and write about. The completion of the first U.S. transcontinental highway during the 1920s . . . for example, inaugurated a new genre of travel literature about the United States—the automotive or road narrative. Such narratives highlight the experiences of mostly male protagonists “discovering themselves” on their journeys, emphasizing the independence of road travel and the value of rural folk traditions.

Travel writing’s relationship to empire building— as a type of “colonialist discourse”—has drawn the most attention from academicians. Close connections have been observed between European (and American) political, economic, and administrative goals for the colonies and their manifestations in the cultural practice of writing travel books. Travel writers’ descriptions of foreign places have been analyzed as attempts to validate, promote, or challenge the ideologies and practices of colonial or imperial domination and expansion. Mary Louise Pratt’s study of the genres and conventions of 18th- and 19th-century exploration narratives about South America and Africa (e.g., the “monarch of all I survey” trope) offered ways of thinking about travel writing as embedded within relations of power between metropole and periphery, as did Edward Said’s theories of representation and cultural imperialism. Particularly Said’s book, Orientalism, helped scholars understand ways in which representations of people in travel texts were intimately bound up with notions of self, in this case, that the Occident defined itself through essentialist, ethnocentric, and racist representations of the Orient. Said’s work became a model for demonstrating cultural forms of imperialism in travel texts, showing how the political, economic, or administrative fact of dominance relies on legitimating discourses such as those articulated through travel writing. . . .

Feminist geographers’ studies of travel writing challenge the masculinist history of geography by questioning who and what are relevant subjects of geographic study and, indeed, what counts as geographic knowledge itself. Such questions are worked through ideological constructs that posit men as explorers and women as travelers—or, conversely, men as travelers and women as tied to the home. Studies of Victorian women who were professional travel writers, tourists, wives of colonial administrators, and other (mostly) elite women who wrote narratives about their experiences abroad during the 19th century have been particularly revealing. From a “liberal” feminist perspective, travel presented one means toward female liberation for middle- and upper-class Victorian women. Many studies from the 1970s onward demonstrated the ways in which women’s gendered identities were negotiated differently “at home” than they were “away,” thereby showing women’s self-development through travel. The more recent poststructural turn in studies of Victorian travel writing has focused attention on women’s diverse and fragmented identities as they narrated their travel experiences, emphasizing women’s sense of themselves as women in new locations, but only as they worked through their ties to nation, class, whiteness, and colonial and imperial power structures.

CAT/2020.3(RC)

Question. 89

From the passage, we can infer that feminist scholars’ understanding of the experiences of Victorian women travellers is influenced by all of the following EXCEPT scholars':

Question. 90

From the passage, we can infer that travel writing is most similar to:

Question. 91

From the passage, it can be inferred that scholars argue that Victorian women experienced self-development through their travels because:

Question. 92

American travel literature of the 1920s:

Question. 93

According to the passage, Said’s book, “Orientalism”:

Comprehension

Directions for questions: Read the passage carefully and answer the given questions accordingly.

I’ve been following the economic crisis for more than two years now. I began working on the subject as part of the background to a novel, and soon realized that I had stumbled across the most interesting story I’ve ever found. While I was beginning to work on it, the British bank Northern Rock blew up, and it became clear that, as I wrote at the time, “If our laws are not extended to control the new kinds of super-powerful, super-complex, and potentially super-risky investment vehicles, they will one day cause a financial disaster of global-systemic proportions.” . . . I was both right and too late, because all the groundwork for the crisis had already been done—though the sluggishness of the world’s governments, in not preparing for the great unraveling of autumn 2008, was then and still is stupefying. But this is the first reason why I wrote this book: because what’s happened is extraordinarily interesting. It is an absolutely amazing story, full of human interest and drama, one whose byways of mathematics, economics, and psychology are both central to the story of the last decades and mysteriously unknown to the general public. We have heard a lot about “the two cultures” of science and the arts—we heard a particularly large amount about it in 2009, because it was the fiftieth anniversary of the speech during which C. P. Snow first used the phrase. But I’m not sure the idea of a huge gap between science and the arts is as true as it was half a century ago—it’s certainly true, for instance, that a general reader who wants to pick up an education in the fundamentals of science will find it easier than ever before. It seems to me that there is a much bigger gap between the world of finance and that of the general public and that there is a need to narrow that gap, if the financial industry is not to be a kind of priesthood, administering to its own mysteries and feared and resented by the rest of us. Many bright, literate people have no idea about all sorts of economic basics, of a type that financial insiders take as elementary facts of how the world works. I am an outsider to finance and economics, and my hope is that I can talk across that gulf.

My need to understand is the same as yours, whoever you are. That’s one of the strangest ironies of this story: after decades in which the ideology of the Western world was personally and economically individualistic, we’ve suddenly been hit by a crisis which shows in the starkest terms that whether we like it or not—and there are large parts of it that you would have to be crazy to like—we’re all in this together. The aftermath of the crisis is going to dominate the economics and politics of our societies for at least a decade to come and perhaps longer.

CAT/2020.3(RC)

Question. 94

Which one of the following, if false, could be seen as supporting the author’s claims?

Question. 95

Which one of the following, if true, would be an accurate inference from the first sentence of the passage?

Question. 96

Which one of the following best captures the main argument of the last paragraph of the passage?

Question. 97

All of the following, if true, could be seen as supporting the arguments in the passage, EXCEPT:

Question. 98

According to the passage, the author is likely to be supportive of which one of the following programmes?

Comprehension

Directions for questions: Read the passage carefully and answer the given questions accordingly.

Contemporary internet shopping conjures a perfect storm of choice anxiety. Research has consistently held that people who are presented with a few options make better, easier decisions than those presented with many. . . . Helping consumers figure out what to buy amid an endless sea of choice online has become a cottage industry unto itself. Many brands and retailers now wield marketing buzzwords such as curation, differentiation, and discovery as they attempt to sell an assortment of stuff targeted to their ideal customer. Companies find such shoppers through the data gold mine of digital advertising, which can catalog people by gender, income level, personal interests, and more. Since Americans have lost the ability to sort through the sheer volume of the consumer choices available to them, a ghost now has to be in the retail machine, whether it’s an algorithm, an influencer, or some snazzy ad tech to help a product follow you around the internet. Indeed, choice fatigue is one reason so many people gravitate toward lifestyle influencers on Instagram—the relentlessly chic young moms and perpetually vacationing 20-somethings—who present an aspirational worldview, and then recommend the products and services that help achieve it. . . .

For a relatively new class of consumer-products start-ups, there’s another method entirely. Instead of making sense of a sea of existing stuff, these companies claim to disrupt stuff as Americans know it. Casper (mattresses), Glossier (makeup), Away (suitcases), and many others have sprouted up to offer consumers freedom from choice: The companies have a few aesthetically pleasing and supposedly highly functional options, usually at mid-range prices. They’re selling nice things, but maybe more importantly, they’re selling a confidence in those things, and an ability to opt out of the stuff rat race. . . .

One-thousand-dollar mattresses and $300 suitcases might solve choice anxiety for a certain tier of consumer, but the companies that sell them, along with those that attempt to massage the larger stuff economy into something navigable, are still just working within a consumer market that’s broken in systemic ways. The presence of so much stuff in America might be more valuable if it were more evenly distributed, but stuff’s creators tend to focus their energy on those who already have plenty. As options have expanded for people with disposable income, the opportunity to buy even basic things such as fresh food or quality diapers has contracted for much of America’s lower classes.

For start-ups that promise accessible simplicity, their very structure still might eventually push them toward overwhelming variety. Most of these companies are based on hundreds of millions of dollars of venture capital, the investors of which tend to expect a steep growth rate that can’t be achieved by selling one great mattress or one great sneaker. Casper has expanded into bedroom furniture and bed linens. Glossier, after years of marketing itself as no-makeup makeup that requires little skill to apply, recently launched a full line of glittering color cosmetics. There may be no way to opt out of stuff by buying into the right thing.

CAT/2019.1(RC)

Question. 99

Which of the following hypothetical statements would add the least depth to the author’s prediction of the fate of start-ups offering few product options?

Question. 100

Which one of the following best sums up the overall purpose of the examples of Casper and Glossier in the passage?

Question. 101

A new food brand plans to launch a series of products in the American market. Which of the following product plans is most likely to be supported by the author of the passage?

Question. 102

All of the following, IF TRUE, would weaken the author’s claims EXCEPT:

Question. 103

Based on the passage, all of the following can be inferred about consumer behaviour EXCEPT that:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

"Free of the taint of manufacture" – that phrase, in particular, is heavily loaded with the ideology of what the Victorian socialist William Morris called the "anti-scrape", or an anti- capitalist conservationism (not conservatism) that solaced itself with the vision of a pre- industrial golden age. In Britain, folk may often appear a cosy, fossilised form, but when you look more closely, the idea of folk – who has the right to sing it, dance it, invoke it, collect it, belong to it or appropriate it for political or cultural ends – has always been contested territory. . . .

In our own time, though, the word "folk" . . . has achieved the rare distinction of occupying fashionable and unfashionable status simultaneously. Just as the effusive floral prints of the radical William Morris now cover genteel sofas, so the revolutionary intentions of many folk historians and revivalists have led to music that is commonly regarded as parochial and conservative. And yet – as newspaper columns periodically rejoice – folk is hip again, influencing artists, clothing and furniture designers, celebrated at music festivals, awards ceremonies and on TV, reissued on countless record labels. Folk is a sonic "shabby chic", containing elements of the uncanny and eerie, as well as an antique veneer, a whiff of Britain's heathen dark ages. The very obscurity and anonymity of folk music's origins open up space for rampant imaginative fancies. . . .

[Cecil Sharp, who wrote about this subject, believed that] folk songs existed in constant transformation, a living example of an art form in a perpetual state of renewal. "One man sings a song, and then others sing it after him, changing what they do not like" is the most concise summary of his conclusions on its origins. He compared each rendition of a ballad to an acorn falling from an oak tree; every subsequent iteration sows the song anew. But there is tension in newness. In the late 1960s, purists were suspicious of folk songs recast in rock idioms. Electrification, however, comes in many forms. For the early-20th-century composers such as Vaughan Williams and Holst, there were thunderbolts of inspiration from oriental mysticism, angular modernism and the body blow of the first world war, as well as input from the rediscovered folk tradition itself.

For the second wave of folk revivalists, such as Ewan MacColl and AL Lloyd, starting in the 40s, the vital spark was communism's dream of a post-revolutionary New Jerusalem. For their younger successors in the 60s, who thronged the folk clubs set up by the old guard, the lyrical freedom of Dylan and the unchained melodies of psychedelia created the conditions for folk-rock's own golden age, a brief Indian summer that lasted from about 1969 to 1971. . . . Four decades on, even that progressive period has become just one more era ripe for fashionable emulation and pastiche. The idea of a folk tradition being exclusively confined to oral transmission has become a much looser, less severely guarded concept. Recorded music and television, for today's metropolitan generation, are where the equivalent of folk memories are seeded. . . .

CAT/2019.1(RC)

Question. 104

The author says that folk “may often appear a cosy, fossilised form” because:

Question. 105

All of the following are causes for plurality and diversity within the British folk tradition EXCEPT:

Question. 106

At a conference on folk forms, the author of the passage is least likely to agree with which one of the following views?

Question. 107

The primary purpose of the reference to William Morris and his floral prints is to show:

Question. 108

Which of the following statements about folk revivalism of the 1940s and 1960s cannot be inferred from the passage?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

Around the world, capital cities are disgorging bureaucrats. In the post-colonial fervour of the 20th century, coastal capitals picked by trade-focused empires were spurned for “regionally neutral” new ones . . . . But decamping wholesale is costly and unpopular; governments these days prefer piecemeal dispersal. The trend reflects how the world has changed. In past eras, when information travelled at a snail’s pace, civil servants had to cluster together. But now desk-workers can ping emails and video-chat around the world. Travel for face-to-face meetings may be unavoidable, but transport links, too, have improved. . . .

Proponents of moving civil servants around promise countless benefits. It disperses the risk that a terrorist attack or natural disaster will cripple an entire government. Wonks in the sticks will be inspired by new ideas that walled-off capitals cannot conjure up. Autonomous regulators perform best far from the pressure and lobbying of the big city. Some even hail a cure for ascendant cynicism and populism. The unloved bureaucrats of faraway capitals will become as popular as firefighters once they mix with regular folk.

Beyond these sunny visions, dispersing central-government functions usually has three specific aims: to improve the lives of both civil servants and those living in clogged capitals; to save money; and to redress regional imbalances. The trouble is that these goals are not always realised.

The first aim—improving living conditions—has a long pedigree. After the second world war Britain moved thousands of civil servants to “agreeable English country towns” as London was rebuilt. But swapping the capital for somewhere smaller is not always agreeable. Attrition rates can exceed 80%. . . . The second reason to pack bureaucrats off is to save money. Office space costs far more in capitals. . . . Agencies that are moved elsewhere can often recruit better workers on lower salaries than in capitals, where well-paying multinationals mop up talent.

The third reason to shift is to rebalance regional inequality. . . . Norway treats federal jobs as a resource every region deserves to enjoy, like profits from oil. Where government jobs go, private ones follow. . . . Sometimes the aim is to fulfil the potential of a country’s second-tier cities. Unlike poor, remote places, bigger cities can make the most of relocated government agencies, linking them to local universities and businesses and supplying a better-educated workforce. The decision in 1946 to set up America’s Centres for Disease Control in Atlanta rather than Washington, D.C., has transformed the city into a hub for health-sector research and business.

The dilemma is obvious. Pick small, poor towns, and areas of high unemployment get new jobs, but it is hard to attract the most qualified workers; opt for larger cities with infrastructure and better-qualified residents, and the country’s most deprived areas see little benefit. . . .

Others contend that decentralisation begets corruption by making government agencies less accountable. . . . A study in America found that state-government corruption is worse when the state capital is isolated—journalists, who tend to live in the bigger cities, become less watchful of those in power.

CAT/2019.2(RC)

Question. 109

According to the passage, colonial powers located their capitals:

Question. 110

The “dilemma” mentioned in the passage refers to:

Question. 111

People who support decentralising central government functions are LEAST likely to cite which of the following reasons for their view?

Question. 112

The “long pedigree” of the aim to shift civil servants to improve their living standards implies that this move:

Question. 113

According to the author, relocating government agencies has not always been a success for all of the following reasons EXCEPT:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

For two years, I tracked down dozens of . . . Chinese in Upper Egypt [who were] selling lingerie. In a deeply conservative region, where Egyptian families rarely allow women to work or own businesses, the Chinese flourished because of their status as outsiders. They didn’t gossip, and they kept their opinions to themselves. In a New Yorker article entitled “Learning to Speak Lingerie,” I described the Chinese use of Arabic as another non-threatening characteristic. I wrote, “Unlike Mandarin, Arabic is inflected for gender, and Chinese dealers, who learn the language strictly by ear, often pick up speech patterns from female customers. I’ve come to think of it as the lingerie dialect, and there’s something disarming about these Chinese men speaking in the feminine voice.” . . .

When I wrote about the Chinese in the New Yorker, most readers seemed to appreciate the unusual perspective. But as I often find with topics that involve the Middle East, some people had trouble getting past the black-and-white quality of a byline. “This piece is so orientalist I don’t know what to do,” Aisha Gani, a reporter who worked at The Guardian, tweeted. Another colleague at the British paper, Iman Amrani, agreed: “I wouldn’t have minded an article on the subject written by an Egyptian woman—probably would have had better insight.” . . .

As an MOL (man of language), I also take issue with this kind of essentialism. Empathy and understanding are not inherited traits, and they are not strictly tied to gender and race. An individual who wrestles with a difficult language can learn to be more sympathetic to outsiders and open to different experiences of the world. This learning process—the embarrassments, the frustrations, the gradual sense of understanding and connection—is invariably transformative. In Upper Egypt, the Chinese experience of struggling to learn Arabic and local culture had made them much more thoughtful. In the same way, I was interested in their lives not because of some kind of voyeurism, but because I had also experienced Egypt and Arabic as an outsider. And both the Chinese and the Egyptians welcomed me because I spoke their languages. My identity as a white male was far less important than my ability to communicate.

And that easily lobbed word—“Orientalist”—hardly captures the complexity of our interactions. What exactly is the dynamic when a man from Missouri observes a Zhejiang native selling lingerie to an Upper Egyptian woman? . . . If all of us now stand beside the same river, speaking in ways we all understand, who’s looking east and who’s looking west? Which way is Oriental?

For all of our current interest in identity politics, there’s no corresponding sense of identity linguistics. You are what you speak—the words that run throughout your mind are at least as fundamental to your selfhood as is your ethnicity or your gender. And sometimes it’s healthy to consider human characteristics that are not inborn, rigid, and outwardly defined. After all, you can always learn another language and change who you are.

CAT/2019.2(RC)

Question. 114

Which of the following can be inferred from the author’s claim, “Which way is Oriental?”

Question. 115

A French ethnographer decides to study the culture of a Nigerian tribe. Which of the following is most likely to be the view of the author of the passage?

Question. 116

The author’s critics would argue that:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

For two years, I tracked down dozens of . . . Chinese in Upper Egypt [who were] selling lingerie. In a deeply conservative region, where Egyptian families rarely allow women to work or own businesses, the Chinese flourished because of their status as outsiders. They didn’t gossip, and they kept their opinions to themselves. In a New Yorker article entitled “Learning to Speak Lingerie,” I described the Chinese use of Arabic as another non-threatening characteristic. I wrote, “Unlike Mandarin, Arabic is inflected for gender, and Chinese dealers, who learn the language strictly by ear, often pick up speech patterns from female customers. I’ve come to think of it as the lingerie dialect, and there’s something disarming about these Chinese men speaking in the feminine voice.” . . .

When I wrote about the Chinese in the New Yorker, most readers seemed to appreciate the unusual perspective. But as I often find with topics that involve the Middle East, some people had trouble getting past the black-and-white quality of a byline. “This piece is so orientalist I don’t know what to do,” Aisha Gani, a reporter who worked at The Guardian, tweeted. Another colleague at the British paper, Iman Amrani, agreed: “I wouldn’t have minded an article on the subject written by an Egyptian woman—probably would have had better insight.” . . .

As an MOL (man of language), I also take issue with this kind of essentialism. Empathy and understanding are not inherited traits, and they are not strictly tied to gender and race. An individual who wrestles with a difficult language can learn to be more sympathetic to outsiders and open to different experiences of the world. This learning process—the embarrassments, the frustrations, the gradual sense of understanding and connection—is invariably transformative. In Upper Egypt, the Chinese experience of struggling to learn Arabic and local culture had made them much more thoughtful. In the same way, I was interested in their lives not because of some kind of voyeurism, but because I had also experienced Egypt and Arabic as an outsider. And both the Chinese and the Egyptians welcomed me because I spoke their languages. My identity as a white male was far less important than my ability to communicate.

And that easily lobbed word—“Orientalist”—hardly captures the complexity of our interactions. What exactly is the dynamic when a man from Missouri observes a Zhejiang native selling lingerie to an Upper Egyptian woman? . . . If all of us now stand beside the same river, speaking in ways we all understand, who’s looking east and who’s looking west? Which way is Oriental?

For all of our current interest in identity politics, there’s no corresponding sense of identity linguistics. You are what you speak—the words that run throughout your mind are at least as fundamental to your selfhood as is your ethnicity or your gender. And sometimes it’s healthy to consider human characteristics that are not inborn, rigid, and outwardly defined. After all, you can always learn another language and change who you are.

CAT/2019.2(RC)

Question. 117

According to the passage, which of the following is NOT responsible for language’s ability to change us?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

“Everybody pretty much agrees that the relationship between elephants and people has dramatically changed,” [says psychologist Gay] Bradshaw. “Where for centuries humans and elephants lived in relatively peaceful coexistence, there is now hostility and violence. Now, I use the term ‘violence’ because of the intentionality associated with it, both in the aggression of humans and, at times, the recently observed behavior of elephants.”

Typically, elephant researchers have cited, as a cause of aggression, the high levels of testosterone in newly matured male elephants or the competition for land and resources between elephants and humans. But Bradshaw and several colleagues argue that today’s elephant populations are suffering from a form of chronic stress, a kind of species-wide trauma. Decades of poaching and culling and habitat loss, they claim, have so disrupted the intricate web of familial and societal relations by which young elephants have traditionally been raised in the wild, and by which established elephant herds are governed, that what we are now witnessing is nothing less than a precipitous collapse of elephant culture.

Elephants, when left to their own devices, are profoundly social creatures. Young elephants are raised within an extended, multi-tiered network of doting female caregivers that includes the birth mother, grandmothers, aunts and friends. These relations are maintained over a life span as long as 70 years. Studies of established herds have shown that young elephants stay within 15 feet of their mothers for nearly all of their first eight years of life, after which young females are socialized into the matriarchal network, while young males go off for a time into an all-male social group before coming back into the fold as mature adults.

This fabric of elephant society, Bradshaw and her colleagues [demonstrate], ha[s] effectively been frayed by years of habitat loss and poaching, along with systematic culling by government agencies to control elephant numbers and translocations of herds to different habitats. As a result of such social upheaval, calves are now being born to and raised by ever younger and inexperienced mothers. Young orphaned elephants, meanwhile, that have witnessed the death of a parent at the hands of poachers are coming of age in the absence of the support system that defines traditional elephant life. “The loss of elephant elders,” [says] Bradshaw, “and the traumatic experience of witnessing the massacres of their family, impairs normal brain and behavior development in young elephants.”

What Bradshaw and her colleagues describe would seem to be an extreme form of anthropocentric conjecture if the evidence that they’ve compiled from various elephant researchers weren’t so compelling. The elephants of decimated herds, especially orphans who’ve watched the death of their parents and elders from poaching and culling, exhibit behavior typically associated with post-traumatic stress disorder and other trauma-related disorders in humans: abnormal startle response, unpredictable asocial behavior, inattentive mothering and hyper-aggression.

[According to Bradshaw], “Elephants are suffering and behaving in the same ways that we recognize in ourselves as a result of violence. Except perhaps for a few specific features, brain organization and early development of elephants and humans are extremely similar.”

CAT/2018.1(RC)

Question. 118

Which of the following statements best expresses the overall argument of this passage?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

“Everybody pretty much agrees that the relationship between elephants and people has dramatically changed,” [says psychologist Gay] Bradshaw. “Where for centuries humans and elephants lived in relatively peaceful coexistence, there is now hostility and violence. Now, I use the term ‘violence’ because of the intentionality associated with it, both in the aggression of humans and, at times, the recently observed behavior of elephants.”

Typically, elephant researchers have cited, as a cause of aggression, the high levels of testosterone in newly matured male elephants or the competition for land and resources between elephants and humans. But Bradshaw and several colleagues argue that today’s elephant populations are suffering from a form of chronic stress, a kind of species-wide trauma. Decades of poaching and culling and habitat loss, they claim, have so disrupted the intricate web of familial and societal relations by which young elephants have traditionally been raised in the wild, and by which established elephant herds are governed, that what we are now witnessing is nothing less than a precipitous collapse of elephant culture.

Elephants, when left to their own devices, are profoundly social creatures. Young elephants are raised within an extended, multi-tiered network of doting female caregivers that includes the birth mother, grandmothers, aunts and friends. These relations are maintained over a life span as long as 70 years. Studies of established herds have shown that young elephants stay within 15 feet of their mothers for nearly all of their first eight years of life, after which young females are socialized into the matriarchal network, while young males go off for a time into an all-male social group before coming back into the fold as mature adults.

This fabric of elephant society, Bradshaw and her colleagues [demonstrate], ha[s] effectively been frayed by years of habitat loss and poaching, along with systematic culling by government agencies to control elephant numbers and translocations of herds to different habitats. As a result of such social upheaval, calves are now being born to and raised by ever younger and inexperienced mothers. Young orphaned elephants, meanwhile, that have witnessed the death of a parent at the hands of poachers are coming of age in the absence of the support system that defines traditional elephant life. “The loss of elephant elders,” [says] Bradshaw, “and the traumatic experience of witnessing the massacres of their family, impairs normal brain and behavior development in young elephants.”

What Bradshaw and her colleagues describe would seem to be an extreme form of anthropocentric conjecture if the evidence that they’ve compiled from various elephant researchers weren’t so compelling. The elephants of decimated herds, especially orphans who’ve watched the death of their parents and elders from poaching and culling, exhibit behavior typically associated with post-traumatic stress disorder and other trauma-related disorders in humans: abnormal startle response, unpredictable asocial behavior, inattentive mothering and hyper-aggression.

[According to Bradshaw], “Elephants are suffering and behaving in the same ways that we recognize in ourselves as a result of violence. Except perhaps for a few specific features, brain organization and early development of elephants and humans are extremely similar.”

CAT/2018.1(RC)

Question. 119

In the first paragraph, Bradshaw uses the term ‘violence’ to describe the recent change in the human-elephant relationship because, according to her:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

“Everybody pretty much agrees that the relationship between elephants and people has dramatically changed,” [says psychologist Gay] Bradshaw. “Where for centuries humans and elephants lived in relatively peaceful coexistence, there is now hostility and violence. Now, I use the term ‘violence’ because of the intentionality associated with it, both in the aggression of humans and, at times, the recently observed behavior of elephants.”

Typically, elephant researchers have cited, as a cause of aggression, the high levels of testosterone in newly matured male elephants or the competition for land and resources between elephants and humans. But Bradshaw and several colleagues argue that today’s elephant populations are suffering from a form of chronic stress, a kind of species-wide trauma. Decades of poaching and culling and habitat loss, they claim, have so disrupted the intricate web of familial and societal relations by which young elephants have traditionally been raised in the wild, and by which established elephant herds are governed, that what we are now witnessing is nothing less than a precipitous collapse of elephant culture.

Elephants, when left to their own devices, are profoundly social creatures. Young elephants are raised within an extended, multi-tiered network of doting female caregivers that includes the birth mother, grandmothers, aunts and friends. These relations are maintained over a life span as long as 70 years. Studies of established herds have shown that young elephants stay within 15 feet of their mothers for nearly all of their first eight years of life, after which young females are socialized into the matriarchal network, while young males go off for a time into an all-male social group before coming back into the fold as mature adults.

This fabric of elephant society, Bradshaw and her colleagues [demonstrate], ha[s] effectively been frayed by years of habitat loss and poaching, along with systematic culling by government agencies to control elephant numbers and translocations of herds to different habitats. As a result of such social upheaval, calves are now being born to and raised by ever younger and inexperienced mothers. Young orphaned elephants, meanwhile, that have witnessed the death of a parent at the hands of poachers are coming of age in the absence of the support system that defines traditional elephant life. “The loss of elephant elders,” [says] Bradshaw, “and the traumatic experience of witnessing the massacres of their family, impairs normal brain and behavior development in young elephants.”

What Bradshaw and her colleagues describe would seem to be an extreme form of anthropocentric conjecture if the evidence that they’ve compiled from various elephant researchers weren’t so compelling. The elephants of decimated herds, especially orphans who’ve watched the death of their parents and elders from poaching and culling, exhibit behavior typically associated with post-traumatic stress disorder and other trauma-related disorders in humans: abnormal startle response, unpredictable asocial behavior, inattentive mothering and hyper-aggression.

[According to Bradshaw], “Elephants are suffering and behaving in the same ways that we recognize in ourselves as a result of violence. Except perhaps for a few specific features, brain organization and early development of elephants and humans are extremely similar.”

CAT/2018.1(RC)

Question. 120

The passage makes all of the following claims EXCEPT

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

“Everybody pretty much agrees that the relationship between elephants and people has dramatically changed,” [says psychologist Gay] Bradshaw. “Where for centuries humans and elephants lived in relatively peaceful coexistence, there is now hostility and violence. Now, I use the term ‘violence’ because of the intentionality associated with it, both in the aggression of humans and, at times, the recently observed behavior of elephants.”

Typically, elephant researchers have cited, as a cause of aggression, the high levels of testosterone in newly matured male elephants or the competition for land and resources between elephants and humans. But Bradshaw and several colleagues argue that today’s elephant populations are suffering from a form of chronic stress, a kind of species-wide trauma. Decades of poaching and culling and habitat loss, they claim, have so disrupted the intricate web of familial and societal relations by which young elephants have traditionally been raised in the wild, and by which established elephant herds are governed, that what we are now witnessing is nothing less than a precipitous collapse of elephant culture.

Elephants, when left to their own devices, are profoundly social creatures. Young elephants are raised within an extended, multi-tiered network of doting female caregivers that includes the birth mother, grandmothers, aunts and friends. These relations are maintained over a life span as long as 70 years. Studies of established herds have shown that young elephants stay within 15 feet of their mothers for nearly all of their first eight years of life, after which young females are socialized into the matriarchal network, while young males go off for a time into an all-male social group before coming back into the fold as mature adults.

This fabric of elephant society, Bradshaw and her colleagues [demonstrate], ha[s] effectively been frayed by years of habitat loss and poaching, along with systematic culling by government agencies to control elephant numbers and translocations of herds to different habitats. As a result of such social upheaval, calves are now being born to and raised by ever younger and inexperienced mothers. Young orphaned elephants, meanwhile, that have witnessed the death of a parent at the hands of poachers are coming of age in the absence of the support system that defines traditional elephant life. “The loss of elephant elders,” [says] Bradshaw, “and the traumatic experience of witnessing the massacres of their family, impairs normal brain and behavior development in young elephants.”

What Bradshaw and her colleagues describe would seem to be an extreme form of anthropocentric conjecture if the evidence that they’ve compiled from various elephant researchers weren’t so compelling. The elephants of decimated herds, especially orphans who’ve watched the death of their parents and elders from poaching and culling, exhibit behavior typically associated with post-traumatic stress disorder and other trauma-related disorders in humans: abnormal startle response, unpredictable asocial behavior, inattentive mothering and hyper-aggression.

[According to Bradshaw], “Elephants are suffering and behaving in the same ways that we recognize in ourselves as a result of violence. Except perhaps for a few specific features, brain organization and early development of elephants and humans are extremely similar.”

CAT/2018.1(RC)

Question. 121

Which of the following measures is Bradshaw most likely to support to address the problem of elephant aggression?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

“Everybody pretty much agrees that the relationship between elephants and people has dramatically changed,” [says psychologist Gay] Bradshaw. “Where for centuries humans and elephants lived in relatively peaceful coexistence, there is now hostility and violence. Now, I use the term ‘violence’ because of the intentionality associated with it, both in the aggression of humans and, at times, the recently observed behavior of elephants.”

Typically, elephant researchers have cited, as a cause of aggression, the high levels of testosterone in newly matured male elephants or the competition for land and resources between elephants and humans. But Bradshaw and several colleagues argue that today’s elephant populations are suffering from a form of chronic stress, a kind of species-wide trauma. Decades of poaching and culling and habitat loss, they claim, have so disrupted the intricate web of familial and societal relations by which young elephants have traditionally been raised in the wild, and by which established elephant herds are governed, that what we are now witnessing is nothing less than a precipitous collapse of elephant culture.

Elephants, when left to their own devices, are profoundly social creatures. Young elephants are raised within an extended, multi-tiered network of doting female caregivers that includes the birth mother, grandmothers, aunts and friends. These relations are maintained over a life span as long as 70 years. Studies of established herds have shown that young elephants stay within 15 feet of their mothers for nearly all of their first eight years of life, after which young females are socialized into the matriarchal network, while young males go off for a time into an all-male social group before coming back into the fold as mature adults.

This fabric of elephant society, Bradshaw and her colleagues [demonstrate], ha[s] effectively been frayed by years of habitat loss and poaching, along with systematic culling by government agencies to control elephant numbers and translocations of herds to different habitats. As a result of such social upheaval, calves are now being born to and raised by ever younger and inexperienced mothers. Young orphaned elephants, meanwhile, that have witnessed the death of a parent at the hands of poachers are coming of age in the absence of the support system that defines traditional elephant life. “The loss of elephant elders,” [says] Bradshaw, “and the traumatic experience of witnessing the massacres of their family, impairs normal brain and behavior development in young elephants.”

What Bradshaw and her colleagues describe would seem to be an extreme form of anthropocentric conjecture if the evidence that they’ve compiled from various elephant researchers weren’t so compelling. The elephants of decimated herds, especially orphans who’ve watched the death of their parents and elders from poaching and culling, exhibit behavior typically associated with post-traumatic stress disorder and other trauma-related disorders in humans: abnormal startle response, unpredictable asocial behavior, inattentive mothering and hyper-aggression.

[According to Bradshaw], “Elephants are suffering and behaving in the same ways that we recognize in ourselves as a result of violence. Except perhaps for a few specific features, brain organization and early development of elephants and humans are extremely similar.”

CAT/2018.1(RC)

Question. 122

In paragraph 4, the phrase, “This fabric of elephant society . . . ha[s] effectively been frayed by . . .” is:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

Will a day come when India’s poor can access government services as easily as drawing cash from an ATM? No country in the world has made accessing education or health or policing or dispute resolution as easy as an ATM, because the nature of these activities requires individuals to use their discretion in a positive way. Technology can certainly facilitate this in a variety of ways if it is seen as one part of an overall approach, but the evidence so far in education, for instance, is that just adding computers alone doesn’t make education any better.

The dangerous illusion of technology is that it can create stronger, top-down accountability of service providers in implementation-intensive services within existing public sector organisations. One notion is that electronic management information systems (EMIS) keep better track of inputs and those aspects of personnel that are ‘EMIS visible’ can lead to better services. A recent study examined attempts to increase attendance of Auxiliary Nurse Midwives (ANMs) at clinics in Rajasthan, which involved high-tech time clocks to monitor attendance. The study’s title says it all: Band-Aids on a Corpse. E-governance can be just as bad as any other governance when the real issue is people and their motivation.

For services to improve, the people providing the services have to want to do a better job with the skills they have. A study of medical care in Delhi found that even though providers in the public sector had much better skills than private sector providers, their provision of care in actual practice was much worse.

In implementation-intensive services the key to success is face-to-face interactions between a teacher, a nurse, a policeman, an extension agent and a citizen. This relationship is about power. Amartya Sen’s report on education in West Bengal had a supremely telling anecdote in which the villagers forced the teacher to attend school, but then, when the parents went off to work, the teacher did not teach, but forced the children to massage his feet. As long as the system empowers providers over citizens, technology is irrelevant.

The answer to successfully providing basic services is to create systems that provide both autonomy and accountability. In basic education for instance, the answer to poor teaching is not controlling teachers more. The key is to hire teachers who want to teach and let them teach, expressing their professionalism and vocation as a teacher through autonomy in the classroom. This autonomy has to be matched with accountability for results—not just narrowly measured through test scores, but broadly for the quality of the education they provide.

A recent study in Uttar Pradesh showed that if, somehow, all civil service teachers could be replaced with contract teachers, the state could save a billion dollars a year in revenue and double student learning. Just the additional autonomy and accountability of contracts through local groups—even without complementary system changes in information and empowerment—led to that much improvement. The first step to being part of the solution is to create performance information accessible to those outside of the government.

CAT/2018.2(RC)

Question. 123

In the context of the passage, we can infer that the title “Band-Aids on a Corpse” (in paragraph 2) suggests that:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

Will a day come when India’s poor can access government services as easily as drawing cash from an ATM? No country in the world has made accessing education or health or policing or dispute resolution as easy as an ATM, because the nature of these activities requires individuals to use their discretion in a positive way. Technology can certainly facilitate this in a variety of ways if it is seen as one part of an overall approach, but the evidence so far in education, for instance, is that just adding computers alone doesn’t make education any better.

The dangerous illusion of technology is that it can create stronger, top-down accountability of service providers in implementation-intensive services within existing public sector organisations. One notion is that electronic management information systems (EMIS) keep better track of inputs and those aspects of personnel that are ‘EMIS visible’ can lead to better services. A recent study examined attempts to increase attendance of Auxiliary Nurse Midwives (ANMs) at clinics in Rajasthan, which involved high-tech time clocks to monitor attendance. The study’s title says it all: Band-Aids on a Corpse. E-governance can be just as bad as any other governance when the real issue is people and their motivation.

For services to improve, the people providing the services have to want to do a better job with the skills they have. A study of medical care in Delhi found that even though providers in the public sector had much better skills than private sector providers, their provision of care in actual practice was much worse.

In implementation-intensive services the key to success is face-to-face interactions between a teacher, a nurse, a policeman, an extension agent and a citizen. This relationship is about power. Amartya Sen’s report on education in West Bengal had a supremely telling anecdote in which the villagers forced the teacher to attend school, but then, when the parents went off to work, the teacher did not teach, but forced the children to massage his feet. As long as the system empowers providers over citizens, technology is irrelevant.

The answer to successfully providing basic services is to create systems that provide both autonomy and accountability. In basic education for instance, the answer to poor teaching is not controlling teachers more. The key is to hire teachers who want to teach and let them teach, expressing their professionalism and vocation as a teacher through autonomy in the classroom. This autonomy has to be matched with accountability for results—not just narrowly measured through test scores, but broadly for the quality of the education they provide.

A recent study in Uttar Pradesh showed that if, somehow, all civil service teachers could be replaced with contract teachers, the state could save a billion dollars a year in revenue and double student learning. Just the additional autonomy and accountability of contracts through local groups—even without complementary system changes in information and empowerment—led to that much improvement. The first step to being part of the solution is to create performance information accessible to those outside of the government.

CAT/2018.2(RC)

Question. 124

According to the author, service delivery in Indian education can be improved in all of the following ways EXCEPT through:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

Will a day come when India’s poor can access government services as easily as drawing cash from an ATM? No country in the world has made accessing education or health or policing or dispute resolution as easy as an ATM, because the nature of these activities requires individuals to use their discretion in a positive way. Technology can certainly facilitate this in a variety of ways if it is seen as one part of an overall approach, but the evidence so far in education, for instance, is that just adding computers alone doesn’t make education any better.

The dangerous illusion of technology is that it can create stronger, top-down accountability of service providers in implementation-intensive services within existing public sector organisations. One notion is that electronic management information systems (EMIS) keep better track of inputs and those aspects of personnel that are ‘EMIS visible’ can lead to better services. A recent study examined attempts to increase attendance of Auxiliary Nurse Midwives (ANMs) at clinics in Rajasthan, which involved high-tech time clocks to monitor attendance. The study’s title says it all: Band-Aids on a Corpse. E-governance can be just as bad as any other governance when the real issue is people and their motivation.

For services to improve, the people providing the services have to want to do a better job with the skills they have. A study of medical care in Delhi found that even though providers in the public sector had much better skills than private sector providers, their provision of care in actual practice was much worse.

In implementation-intensive services the key to success is face-to-face interactions between a teacher, a nurse, a policeman, an extension agent and a citizen. This relationship is about power. Amartya Sen’s report on education in West Bengal had a supremely telling anecdote in which the villagers forced the teacher to attend school, but then, when the parents went off to work, the teacher did not teach, but forced the children to massage his feet. As long as the system empowers providers over citizens, technology is irrelevant.

The answer to successfully providing basic services is to create systems that provide both autonomy and accountability. In basic education for instance, the answer to poor teaching is not controlling teachers more. The key is to hire teachers who want to teach and let them teach, expressing their professionalism and vocation as a teacher through autonomy in the classroom. This autonomy has to be matched with accountability for results—not just narrowly measured through test scores, but broadly for the quality of the education they provide.

A recent study in Uttar Pradesh showed that if, somehow, all civil service teachers could be replaced with contract teachers, the state could save a billion dollars a year in revenue and double student learning. Just the additional autonomy and accountability of contracts through local groups—even without complementary system changes in information and empowerment—led to that much improvement. The first step to being part of the solution is to create performance information accessible to those outside of the government.

CAT/2018.2(RC)

Question. 125

Which of the following, IF TRUE, would undermine the passage’s main argument?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

Will a day come when India’s poor can access government services as easily as drawing cash from an ATM? No country in the world has made accessing education or health or policing or dispute resolution as easy as an ATM, because the nature of these activities requires individuals to use their discretion in a positive way. Technology can certainly facilitate this in a variety of ways if it is seen as one part of an overall approach, but the evidence so far in education, for instance, is that just adding computers alone doesn’t make education any better.

The dangerous illusion of technology is that it can create stronger, top-down accountability of service providers in implementation-intensive services within existing public sector organisations. One notion is that electronic management information systems (EMIS) keep better track of inputs and those aspects of personnel that are ‘EMIS visible’ can lead to better services. A recent study examined attempts to increase attendance of Auxiliary Nurse Midwives (ANMs) at clinics in Rajasthan, which involved high-tech time clocks to monitor attendance. The study’s title says it all: Band-Aids on a Corpse. E-governance can be just as bad as any other governance when the real issue is people and their motivation.

For services to improve, the people providing the services have to want to do a better job with the skills they have. A study of medical care in Delhi found that even though providers in the public sector had much better skills than private sector providers, their provision of care in actual practice was much worse.

In implementation-intensive services the key to success is face-to-face interactions between a teacher, a nurse, a policeman, an extension agent and a citizen. This relationship is about power. Amartya Sen’s report on education in West Bengal had a supremely telling anecdote in which the villagers forced the teacher to attend school, but then, when the parents went off to work, the teacher did not teach, but forced the children to massage his feet. As long as the system empowers providers over citizens, technology is irrelevant.

The answer to successfully providing basic services is to create systems that provide both autonomy and accountability. In basic education for instance, the answer to poor teaching is not controlling teachers more. The key is to hire teachers who want to teach and let them teach, expressing their professionalism and vocation as a teacher through autonomy in the classroom. This autonomy has to be matched with accountability for results—not just narrowly measured through test scores, but broadly for the quality of the education they provide.

A recent study in Uttar Pradesh showed that if, somehow, all civil service teachers could be replaced with contract teachers, the state could save a billion dollars a year in revenue and double student learning. Just the additional autonomy and accountability of contracts through local groups—even without complementary system changes in information and empowerment—led to that much improvement. The first step to being part of the solution is to create performance information accessible to those outside of the government.

CAT/2018.2(RC)

Question. 126

The author questions the use of monitoring systems in services that involve face-to-face interaction between service providers and clients because such systems:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

Will a day come when India’s poor can access government services as easily as drawing cash from an ATM? No country in the world has made accessing education or health or policing or dispute resolution as easy as an ATM, because the nature of these activities requires individuals to use their discretion in a positive way. Technology can certainly facilitate this in a variety of ways if it is seen as one part of an overall approach, but the evidence so far in education, for instance, is that just adding computers alone doesn’t make education any better.

The dangerous illusion of technology is that it can create stronger, top-down accountability of service providers in implementation-intensive services within existing public sector organisations. One notion is that electronic management information systems (EMIS) keep better track of inputs and those aspects of personnel that are ‘EMIS visible’ can lead to better services. A recent study examined attempts to increase attendance of Auxiliary Nurse Midwives (ANMs) at clinics in Rajasthan, which involved high-tech time clocks to monitor attendance. The study’s title says it all: Band-Aids on a Corpse. E-governance can be just as bad as any other governance when the real issue is people and their motivation.

For services to improve, the people providing the services have to want to do a better job with the skills they have. A study of medical care in Delhi found that even though providers in the public sector had much better skills than private sector providers, their provision of care in actual practice was much worse.

In implementation-intensive services the key to success is face-to-face interactions between a teacher, a nurse, a policeman, an extension agent and a citizen. This relationship is about power. Amartya Sen’s report on education in West Bengal had a supremely telling anecdote in which the villagers forced the teacher to attend school, but then, when the parents went off to work, the teacher did not teach, but forced the children to massage his feet. As long as the system empowers providers over citizens, technology is irrelevant.

The answer to successfully providing basic services is to create systems that provide both autonomy and accountability. In basic education for instance, the answer to poor teaching is not controlling teachers more. The key is to hire teachers who want to teach and let them teach, expressing their professionalism and vocation as a teacher through autonomy in the classroom. This autonomy has to be matched with accountability for results—not just narrowly measured through test scores, but broadly for the quality of the education they provide.

A recent study in Uttar Pradesh showed that if, somehow, all civil service teachers could be replaced with contract teachers, the state could save a billion dollars a year in revenue and double student learning. Just the additional autonomy and accountability of contracts through local groups—even without complementary system changes in information and empowerment—led to that much improvement. The first step to being part of the solution is to create performance information accessible to those outside of the government.

CAT/2018.2(RC)

Question. 127

The main purpose of the passage is to:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

More and more companies, government agencies, educational institutions and philanthropic organisations are today in the grip of a new phenomenon: ‘metric fixation’. The key components of metric fixation are the belief that it is possible – and desirable – to replace professional judgment (acquired through personal experience and talent) with numerical indicators of comparative performance based upon standardised data (metrics); and that the best way to motivate people within these organisations is by attaching rewards and penalties to their measured performance.

The rewards can be monetary, in the form of pay for performance, say, or reputational, in the form of college rankings, hospital ratings, surgical report cards and so on. But the most dramatic negative effect of metric fixation is its propensity to incentivise gaming: that is, encouraging professionals to maximise the metrics in ways that are at odds with the larger purpose of the organisation. If the rate of major crimes in a district becomes the metric according to which police officers are promoted, then some officers will respond by simply not recording crimes or downgrading them from major offences to misdemeanours. Or take the case of surgeons. When the metrics of success and failure are made public – affecting their reputation and income – some surgeons will improve their metric scores by refusing to operate on patients with more complex problems, whose surgical outcomes are more likely to be negative. Who suffers? The patients who don’t get operated upon.

When reward is tied to measured performance, metric fixation invites just this sort of gaming. But metric fixation also leads to a variety of more subtle unintended negative consequences. These include goal displacement, which comes in many varieties: when performance is judged by a few measures, and the stakes are high (keeping one’s job, getting a pay rise or raising the stock price at the time that stock options are vested), people focus on satisfying those measures – often at the expense of other, more important organisational goals that are not measured. The best-known example is ‘teaching to the test’, a widespread phenomenon that has distorted primary and secondary education in the United States since the adoption of the No Child Left Behind Act of 2001.

Short-termism is another negative. Measured performance encourages what the US sociologist Robert K Merton in 1936 called ‘the imperious immediacy of interests where the actor’s paramount concern with the foreseen immediate consequences excludes consideration of further or other consequences’. In short, advancing short-term goals at the expense of long-range considerations. This problem is endemic to publicly traded corporations that sacrifice long-term research and development, and the development of their staff, to the perceived imperatives of the quarterly report.

CAT/2018.2(RC)

Question. 128

Of the following, which would have added the least depth to the author’s argument?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

More and more companies, government agencies, educational institutions and philanthropic organisations are today in the grip of a new phenomenon: ‘metric fixation’. The key components of metric fixation are the belief that it is possible – and desirable – to replace professional judgment (acquired through personal experience and talent) with numerical indicators of comparative performance based upon standardised data (metrics); and that the best way to motivate people within these organisations is by attaching rewards and penalties to their measured performance.

The rewards can be monetary, in the form of pay for performance, say, or reputational, in the form of college rankings, hospital ratings, surgical report cards and so on. But the most dramatic negative effect of metric fixation is its propensity to incentivise gaming: that is, encouraging professionals to maximise the metrics in ways that are at odds with the larger purpose of the organisation. If the rate of major crimes in a district becomes the metric according to which police officers are promoted, then some officers will respond by simply not recording crimes or downgrading them from major offences to misdemeanours. Or take the case of surgeons. When the metrics of success and failure are made public – affecting their reputation and income – some surgeons will improve their metric scores by refusing to operate on patients with more complex problems, whose surgical outcomes are more likely to be negative. Who suffers? The patients who don’t get operated upon.

When reward is tied to measured performance, metric fixation invites just this sort of gaming. But metric fixation also leads to a variety of more subtle unintended negative consequences. These include goal displacement, which comes in many varieties: when performance is judged by a few measures, and the stakes are high (keeping one’s job, getting a pay rise or raising the stock price at the time that stock options are vested), people focus on satisfying those measures – often at the expense of other, more important organisational goals that are not measured. The best-known example is ‘teaching to the test’, a widespread phenomenon that has distorted primary and secondary education in the United States since the adoption of the No Child Left Behind Act of 2001.

Short-termism is another negative. Measured performance encourages what the US sociologist Robert K Merton in 1936 called ‘the imperious immediacy of interests where the actor’s paramount concern with the foreseen immediate consequences excludes consideration of further or other consequences’. In short, advancing short-term goals at the expense of long-range considerations. This problem is endemic to publicly traded corporations that sacrifice long-term research and development, and the development of their staff, to the perceived imperatives of the quarterly report.

CAT/2018.2(RC)

Question. 129

Which of the following is NOT a consequence of the 'metric fixation' phenomenon mentioned in the passage?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

More and more companies, government agencies, educational institutions and philanthropic organisations are today in the grip of a new phenomenon: ‘metric fixation’. The key components of metric fixation are the belief that it is possible – and desirable – to replace professional judgment (acquired through personal experience and talent) with numerical indicators of comparative performance based upon standardised data (metrics); and that the best way to motivate people within these organisations is by attaching rewards and penalties to their measured performance.

The rewards can be monetary, in the form of pay for performance, say, or reputational, in the form of college rankings, hospital ratings, surgical report cards and so on. But the most dramatic negative effect of metric fixation is its propensity to incentivise gaming: that is, encouraging professionals to maximise the metrics in ways that are at odds with the larger purpose of the organisation. If the rate of major crimes in a district becomes the metric according to which police officers are promoted, then some officers will respond by simply not recording crimes or downgrading them from major offences to misdemeanours. Or take the case of surgeons. When the metrics of success and failure are made public – affecting their reputation and income – some surgeons will improve their metric scores by refusing to operate on patients with more complex problems, whose surgical outcomes are more likely to be negative. Who suffers? The patients who don’t get operated upon.

When reward is tied to measured performance, metric fixation invites just this sort of gaming. But metric fixation also leads to a variety of more subtle unintended negative consequences. These include goal displacement, which comes in many varieties: when performance is judged by a few measures, and the stakes are high (keeping one’s job, getting a pay rise or raising the stock price at the time that stock options are vested), people focus on satisfying those measures – often at the expense of other, more important organisational goals that are not measured. The best-known example is ‘teaching to the test’, a widespread phenomenon that has distorted primary and secondary education in the United States since the adoption of the No Child Left Behind Act of 2001.

Short-termism is another negative. Measured performance encourages what the US sociologist Robert K Merton in 1936 called ‘the imperious immediacy of interests where the actor’s paramount concern with the foreseen immediate consequences excludes consideration of further or other consequences’. In short, advancing short-term goals at the expense of long-range considerations. This problem is endemic to publicly traded corporations that sacrifice long-term research and development, and the development of their staff, to the perceived imperatives of the quarterly report.

CAT/2018.2(RC)

Question. 130

What main point does the author want to convey through the examples of the police officer and the surgeon?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

More and more companies, government agencies, educational institutions and philanthropic organisations are today in the grip of a new phenomenon: ‘metric fixation’. The key components of metric fixation are the belief that it is possible – and desirable – to replace professional judgment (acquired through personal experience and talent) with numerical indicators of comparative performance based upon standardised data (metrics); and that the best way to motivate people within these organisations is by attaching rewards and penalties to their measured performance.

The rewards can be monetary, in the form of pay for performance, say, or reputational, in the form of college rankings, hospital ratings, surgical report cards and so on. But the most dramatic negative effect of metric fixation is its propensity to incentivise gaming: that is, encouraging professionals to maximise the metrics in ways that are at odds with the larger purpose of the organisation. If the rate of major crimes in a district becomes the metric according to which police officers are promoted, then some officers will respond by simply not recording crimes or downgrading them from major offences to misdemeanours. Or take the case of surgeons. When the metrics of success and failure are made public – affecting their reputation and income – some surgeons will improve their metric scores by refusing to operate on patients with more complex problems, whose surgical outcomes are more likely to be negative. Who suffers? The patients who don’t get operated upon.

When reward is tied to measured performance, metric fixation invites just this sort of gaming. But metric fixation also leads to a variety of more subtle unintended negative consequences. These include goal displacement, which comes in many varieties: when performance is judged by a few measures, and the stakes are high (keeping one’s job, getting a pay rise or raising the stock price at the time that stock options are vested), people focus on satisfying those measures – often at the expense of other, more important organisational goals that are not measured. The best-known example is ‘teaching to the test’, a widespread phenomenon that has distorted primary and secondary education in the United States since the adoption of the No Child Left Behind Act of 2001.

Short-termism is another negative. Measured performance encourages what the US sociologist Robert K Merton in 1936 called ‘the imperious immediacy of interests where the actor’s paramount concern with the foreseen immediate consequences excludes consideration of further or other consequences’. In short, advancing short-term goals at the expense of long-range considerations. This problem is endemic to publicly traded corporations that sacrifice long-term research and development, and the development of their staff, to the perceived imperatives of the quarterly report.

CAT/2018.2(RC)

Question. 131

All of the following can be a possible feature of the No Child Left Behind Act of 2001, EXCEPT:

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

More and more companies, government agencies, educational institutions and philanthropic organisations are today in the grip of a new phenomenon: ‘metric fixation’. The key components of metric fixation are the belief that it is possible – and desirable – to replace professional judgment (acquired through personal experience and talent) with numerical indicators of comparative performance based upon standardised data (metrics); and that the best way to motivate people within these organisations is by attaching rewards and penalties to their measured performance.

The rewards can be monetary, in the form of pay for performance, say, or reputational, in the form of college rankings, hospital ratings, surgical report cards and so on. But the most dramatic negative effect of metric fixation is its propensity to incentivise gaming: that is, encouraging professionals to maximise the metrics in ways that are at odds with the larger purpose of the organisation. If the rate of major crimes in a district becomes the metric according to which police officers are promoted, then some officers will respond by simply not recording crimes or downgrading them from major offences to misdemeanours. Or take the case of surgeons. When the metrics of success and failure are made public – affecting their reputation and income – some surgeons will improve their metric scores by refusing to operate on patients with more complex problems, whose surgical outcomes are more likely to be negative. Who suffers? The patients who don’t get operated upon.

When reward is tied to measured performance, metric fixation invites just this sort of gaming. But metric fixation also leads to a variety of more subtle unintended negative consequences. These include goal displacement, which comes in many varieties: when performance is judged by a few measures, and the stakes are high (keeping one’s job, getting a pay rise or raising the stock price at the time that stock options are vested), people focus on satisfying those measures – often at the expense of other, more important organisational goals that are not measured. The best-known example is ‘teaching to the test’, a widespread phenomenon that has distorted primary and secondary education in the United States since the adoption of the No Child Left Behind Act of 2001.

Short-termism is another negative. Measured performance encourages what the US sociologist Robert K Merton in 1936 called ‘the imperious immediacy of interests where the actor’s paramount concern with the foreseen immediate consequences excludes consideration of further or other consequences’. In short, advancing short-term goals at the expense of long-range considerations. This problem is endemic to publicly traded corporations that sacrifice long-term research and development, and the development of their staff, to the perceived imperatives of the quarterly report.

CAT/2018.2(RC)

Question. 132

What is the main idea that the author is trying to highlight in the passage?

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly.

This year alone, more than 8,600 stores could close, according to industry estimates, many of them the brand-name anchor outlets that real estate developers once stumbled over themselves to court. Already there have been 5,300 retail closings this year... Sears Holdings—which owns Kmart—said in March that there's "substantial doubt" it can stay in business altogether, and will close 300 stores this year. So far this year, nine national retail chains have filed for bankruptcy.

Local jobs are a major casualty of what analysts are calling, with only a hint of hyperbole, the retail apocalypse. Since 2002, department stores have lost 448,000 jobs, a 25% decline, while the number of store closures this year is on pace to surpass the worst depths of the Great Recession. The growth of online retailers, meanwhile, has failed to offset those losses, with the ecommerce sector adding just 178,000 jobs over the past 15 years. Some of those jobs can be found in the massive distribution centers Amazon has opened across the country, often not too far from malls the company helped shutter.

But those are workplaces, not gathering places. The mall is both. And in the 61 years since the first enclosed one opened in suburban Minneapolis, the shopping mall has been where a huge swath of middle-class America went for far more than shopping. It was the home of first jobs and blind dates, the place for family photos and ear piercings, where goths and grandmothers could somehow walk through the same doors and find something they all liked. Sure, the food was lousy for you and the oceans of parking lots encouraged car-heavy development, something now scorned by contemporary planners. But for better or worse, the mall has been America's public square for the last 60 years.

So what happens when it disappears?

Think of your mall. Or think of the one you went to as a kid. Think of the perfume clouds in the department stores. The fountains splashing below the skylights. The cinnamon wafting from the food court. As far back as ancient Greece, societies have congregated around a central marketplace. In medieval Europe, they were outside cathedrals. For half of the 20th century and almost 20 years into the new one, much of America has found their agora on the terrazzo between Orange Julius and Sbarro, Waldenbooks and the Gap, Sunglass Hut and Hot Topic.

That mall was an ecosystem unto itself, a combination of community and commercialism peddling everything you needed and everything you didn't: Magic Eye posters, wind catchers, Air Jordans....

A growing number of Americans, however, don't see the need to go to any Macy's at all. Our digital lives are frictionless and ruthlessly efficient, with retail and romance available at a click. Malls were designed for leisure, abundance, ambling. You parked and planned to spend some time. Today, much of that time has been given over to busier lives and second jobs and apps that let you swipe right instead of haunt the food court. Malls, says Harvard business professor Leonard Schlesinger, "were built for patterns of social interaction that increasingly don't exist."

CAT/2017.1(RC)

Question. 133

The central idea of this passage is that:

Question. 134

Why does the author say in paragraph 2, 'the massive distribution centers Amazon has opened across the country, often not too far from malls the company helped shutter'?

Question. 135

In paragraph 1, the phrase "real estate developers once stumbled over themselves to court" suggests that they

Question. 136

The author calls the mall an ecosystem unto itself because

Question. 137

Why does the author say that the mall has been America's public square?

Question. 138

The author describes 'perfume clouds in the department stores' in order to

Comprehension

Directions for question: Read the passage carefully and answer the given questions accordingly

Do sports mega events like the summer Olympic Games benefit the host city economically? It depends, but the prospects are less than rosy. The trick is converting...several billion dollars in operating costs during the 17-day fiesta of the Games into a basis for long-term economic returns. These days, the summer Olympic Games themselves generate total revenue of $4 billion to $5 billion, but the lion's share of this goes to the International Olympics Committee, the National Olympics Committees and the International Sports Federations. Any economic benefit would have to flow from the value of the Games as an advertisement for the city, the new transportation and communications infrastructure that was created for the Games, or the ongoing use of the new facilities.

Evidence suggests that the advertising effect is far from certain. The infrastructure benefit depends on the initial condition of the city and the effectiveness of the planning. The facilities benefit is dubious at best for buildings such as velodromes or natatoriums, and problematic for 100,000-seat Olympic stadiums. The latter require a conversion plan for future use; the former are usually doomed to near vacancy. Hosting the summer Games generally requires 30-plus sports venues and dozens of training centers. Today, the Bird's Nest in Beijing sits virtually empty, while the Olympic Stadium in Sydney costs some $30 million a year to operate.

Part of the problem is that Olympics planning takes place in a frenzied and time-pressured atmosphere